Why Digital Inclusion Programmes Keep Failing
- Z. Maseko
- Sep 1, 2025

Why the Numbers Lie
Step five is the part nobody writes up. The standard model runs to four steps. Identify an underserved community. Deploy infrastructure. Photograph local children looking excited about tablets. Publish an impact report. Step five arrives about six months after the report, when the equipment is broken, nobody local was trained to fix it, and the corporate ISP's helpline routes to a call centre in a different country. The community ends up exactly where it started, except now with a broken router gathering dust in the corner of a community centre.
The underlying failure is one of measurement design. Development funders have spent decades optimising for what is easy to count and easy to photograph, and the gap between "covered by infrastructure" and "sustainably connected" is precisely where most digital inclusion programmes go quiet.
The Association for Progressive Communications (APC), which has studied community network development across Kenya, South Africa, and a range of comparable markets, calls this the hardware-first trap. Hardware deployment is technically simple. Capability building sits in a different order of difficulty. Most funders only pay for the simple part.
The consequences show up in the data. Community-led networks in APC-documented initiatives consistently outperform corporate ISPs in comparable underserved areas on both uptime and outage response time. The mechanism is not mysterious. Community network technicians live in the neighbourhoods they serve. Corporate ISPs operate through remote call centres with no meaningful local connection. In a community network, the technician is the person two streets over who completed a six-month training programme and earns a stipend for keeping things running. In a corporate ISP model, the technician is somewhere in a queue. The performance difference comes from proximity and ownership, and both are design choices. The most direct test of a digital inclusion programme is how many people in that community can fix the network when something breaks. Most impact reports track how many can access it. Those are different questions with very different answers.
The Three-Layer Capability Framework
After studying what made community networks survive past their initial funding cycle, the APC identified three sequential capability layers present in every network that held together over time. The pattern is consistent enough to function as a diagnostic tool. Ask where a network sits in this stack, and you have a reasonable estimate of whether it will still be running in three years.
Layer One: Technical Skills (3 to 6 Months)
Router configuration, network troubleshooting, hardware maintenance, and user support. These can be taught through structured training with a consistent curriculum and hands-on practice. The V-NET initiative in Cape Town recruited young women specifically for these roles, structuring programmes around the assumption of no prior technical background. Within a few months, participants were running their community's entire digital infrastructure. The technical layer fits inside a typical project funding cycle, which is why it is the only layer most programmes reach.
Layer Two: Management Capacity (6 to 12 Months)
Project planning, budget tracking, resource allocation, and vendor relationships. This layer requires mentorship, not just instruction. In Kenya's Global Innovation Valley initiative, a young man named Buom grew up in Kakuma refugee camp, home, per IRC estimates, to around 200,000 people living in structures that became permanent over time. Electricity was unreliable. Internet access, until recently, was nonexistent. His options followed a familiar pattern. Waiting. Depending on systems that treated him as a problem to be managed rather than a person with capabilities to develop. Then the programme arrived with something different from the usual infrastructure play. Training. Specific, marketable skills. QuickBooks. Project management. Network administration. Today, Buom manages the project's finances. He runs budgets, forecasts costs, and tracks expenditures. The shift came from months of managed responsibility, built over time and under live conditions.
Management capacity is where most projects stall. Organisations fund the technical training, declare success when devices come online, and exit before the management layer has time to develop. The result is technically competent communities that still depend on external organisations for every financial decision, contract, and growth choice. Technically competent but operationally dependent is a long way short of included.
Layer Three: Governance Structures (12 to 24 Months)
Community decision-making processes, conflict resolution, and long-term financial sustainability models. This layer cannot be imported. It emerges through practice. Communities learn to govern networks by governing networks.
The coordinator of the Mankosi initiative in South Africa put it plainly: "The hardware is the easy part. Getting people the skills to maintain it. That's where the work is. And keeping those people engaged and supported over the years, that's the part most funders won't pay for."
Most project funding covers twelve to eighteen months. The governance layer requires a minimum of twenty-four. The arithmetic is not complicated. The gap between those two timelines is precisely where community networks collapse, and funders write their lessons-learned reports.
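The framework's arithmetic can be made explicit. Below is a minimal sketch of the diagnostic in Python; the class, the function names, and the exact month thresholds (taken as the upper ends of the ranges above) are illustrative, not APC's actual instrument:

```python
from dataclasses import dataclass


@dataclass
class NetworkCapabilities:
    """Illustrative snapshot of one community network's capability stack."""
    technical_skills: bool      # Layer 1: local technicians can maintain hardware
    management_capacity: bool   # Layer 2: local budgeting, planning, vendor relations
    governance_structures: bool # Layer 3: community decision-making in place

# Months of capability building each layer needs, using the upper ends of
# the ranges in the text (6, 12, and 24 months respectively).
LAYER_TIMELINES = {1: 6, 2: 12, 3: 24}


def capability_layer(net: NetworkCapabilities) -> int:
    """Highest contiguous layer reached. The layers are sequential, so a
    layer only counts if every layer beneath it is also in place."""
    layer = 0
    for reached in (net.technical_skills, net.management_capacity,
                    net.governance_structures):
        if not reached:
            break
        layer += 1
    return layer


def funding_gap_months(net: NetworkCapabilities, funded_months: int) -> int:
    """Months of capability building still unfunded before the next layer
    can plausibly develop (0 if the funding window already covers it)."""
    next_layer = capability_layer(net) + 1
    if next_layer > 3:
        return 0
    return max(0, LAYER_TIMELINES[next_layer] - funded_months)
```

On these numbers, a network with technical and management capability on an eighteen-month grant is six months short of the governance layer, which is exactly the collapse window the text describes.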
Community Networks: What Works and Why
When communities own their infrastructure, they solve problems differently. In rural Kenya, the Dunia Moja Community Network faced a standard challenge: how do you provide consistent connectivity across a dispersed population where fixed infrastructure costs are prohibitive? Their answer was the Boda-Fy project. Kenya's boda boda motorcycle taxis already travel between villages daily. The project equipped these motorcycles with mobile hotspots. Operators earn supplemental income. Riders and bystanders connect during commutes. Communities access the internet without fixed household infrastructure costs.
No corporate ISP would design this system. It requires too much local social knowledge, too much embedding in existing community economic patterns. That is precisely why it works where standardised deployment models have not.
Detroit's Digital Stewards programme arrived at the same principle from a different direction. Rather than deploying infrastructure and then training people to maintain it, they built community network ownership by embedding training directly into neighbourhood operations before the hardware went in. The sequence matters more than most programme designers acknowledge. When training follows deployment, you are teaching people to maintain a system they had no hand in designing. When training precedes deployment, you are building operators. The network that follows tends to behave accordingly.
Community Networks in Africa (CNAF) has documented this pattern across multiple initiatives. When communities control implementation, they optimise for local context over scalability. The result is infrastructure calibrated to specific needs in ways that standardised frameworks cannot accommodate, because those frameworks are built to travel between contexts, which means they trade precision of fit for portability. The APC dataset is consistent enough that calling the Boda-Fy project, the Detroit Digital Stewards approach, or the Kakuma training model an "inspiring exception" understates the case. They are the baseline cases, and they are still running.
Designing Ownership In
Cape Town's V-NET initiative delivers what the sector has debated for years without operationalising. Meaningful technical roles for women, grounded in institutional architecture rather than aspiration.
Most digital inclusion programmes that describe themselves as gender-inclusive give women a place in the training programme. V-NET gave women the programme itself, designing them in as the technical core. That distinction is operational. The initiative sought out young women with no prior technical background, paid market rates from day one, built cohort structures so trainees were never learning in isolation, and created explicit promotion pathways into network governance. Six months in, participants were running their community's entire digital infrastructure. Technical decisions flowed through women. Maintenance sat with the women. The network's day-to-day functioning depended on women's competence. That is what ownership built into the architecture looks like.
The outcome is a community network where female technical expertise is built as a byproduct of basic operations, rather than running as a separate diversity programme alongside the main work. Colnodo in rural Colombia applied the same governance-first design principle and produced over 1,200 trained women network operators. Student educational outcomes in V-NET's served area also improved, as reliable connectivity enabled homework completion and online learning access. The causal chain runs from governance design to measurable community outcomes. That is the sequence worth studying.
The Metric Funders Should Track
Coverage numbers miss the point. A network that serves 20,000 people and trains fifty local technicians, develops ten community project managers, and establishes functioning governance creates something qualitatively different from a network that serves 200,000 people and trains no one. The first community can extend, repair, and adapt its infrastructure without external permission. The second remains dependent on whoever deployed the hardware, for as long as that organisation remains interested.
The metric that distinguishes networks built to last from those built to report is capability density: the ratio of locally trained maintainers to network users. Funders measuring success by device count are optimising for the wrong variable, which is a polite way of saying they are measuring what is easy and calling it success.
Target ratios for sustainable community networks, based on APC field documentation, sit around one trained maintainer per 400 users, combined with at least one locally capable project manager per network and a functioning governance body within twenty-four months of launch. Every benchmark here comes from observation. These are the characteristics of community networks that still operate after their first funder exits.
The World Bank's Digital Development Partnership and the IFC's digital infrastructure investment arm have both scaled their commitments to digital inclusion in recent years. What the programmes they fund actually measure is, in most cases, still device count rather than capability density. Pointing this out has become a familiar exercise at development finance conferences. The measurement frameworks change more slowly than the conference agendas suggest.
Building Digital Inclusion Programmes That Last
The technology required for community-owned network infrastructure is well-documented and has been deployed across dozens of countries. The constraint sits in the funding model design.
Durable redesign has four components:
- Fund training as generously as hardware, which in practice means allocating at least 40–50% of programme budgets to capability building rather than the current typical 20–30%.
- Build twenty-four-month capability development timelines into every project.
- Pay community members market rates from the start of training. Stipend-based models consistently produce lower retention and shallower capability depth.
- Measure success by how many people can maintain the network rather than how many can access it.
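The first component is checkable with two lines of arithmetic. Here is a minimal sketch of the budget-split test, using the 40% floor from the range above; the function names and the two-bucket budget model are illustrative simplifications:

```python
def capability_share(capability_budget: float, hardware_budget: float) -> float:
    """Fraction of the programme budget allocated to capability building,
    modelling the budget as just two buckets for illustration."""
    total = capability_budget + hardware_budget
    if total <= 0:
        raise ValueError("total budget must be positive")
    return capability_budget / total

# The text's floor: at least 40% on capability building, against a
# current typical 20-30%.
MIN_CAPABILITY_SHARE = 0.40


def funds_training_generously(capability_budget: float,
                              hardware_budget: float) -> bool:
    return capability_share(capability_budget, hardware_budget) >= MIN_CAPABILITY_SHARE
```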
Every one of these design characteristics appears in the APC's global research dataset across every community network that survived its first funding cycle. They are also, incidentally, the design characteristics that do not appear in most RFPs, grant frameworks, or impact measurement guides from major development finance institutions. The networks in that dataset collectively serve thousands of people across Kenya, South Africa, and beyond.
Modest, by corporate ISP metrics. The point is that they are still running.
Funders who continue to measure success by device count are making a choice, even if they are not describing it that way. The evidence on what makes community networks last is no longer thin or preliminary. At this point, deploying hardware-first programmes and calling them digital inclusion is a bit like opening a restaurant, installing a kitchen, and declaring the neighbourhood fed.