📄 Perspective ✍️ Kevin Owocki 📅 March 6, 2026
knowledge-commons ai agents public-goods coordination open-data web3

Knowledge commons like Wikipedia, OpenStreetMap, and gitcoin.co become critical infrastructure when AI agents need structured, open data to make decisions. Here's why, and how to participate.

The Oldest Technology for Collective Intelligence

Before blockchains, before the internet, before the printing press, humans built knowledge commons. Libraries in Alexandria, oral traditions passed between generations, guild knowledge shared among craftspeople. The pattern is ancient: pool what we know, make it accessible, and everyone gets smarter.

The modern internet supercharged this. Wikipedia launched in 2001 and now contains over 60 million articles in 300+ languages, maintained by roughly 300,000 active editors. It costs about $0.0001 per page view to operate, making it perhaps the most cost-effective knowledge infrastructure ever built. OpenStreetMap has mapped the entire planet through volunteer contributions. The Internet Archive has preserved 835 billion web pages. arXiv hosts 2.4 million scientific papers, freely accessible. Stack Overflow holds some 58 million programming questions and answers.

These are knowledge commons: shared pools of information, maintained by communities, available to all. They are public goods in the economic sense: non-rivalrous and non-excludable.

And they are about to matter more than they ever have.

gitcoin.co Is a Knowledge Commons

This site, gitcoin.co, is itself a knowledge commons, though it didn't start that way.

Gitcoin began as a grants platform. Over the years, the community developed, tested, and documented dozens of funding mechanisms. As that institutional knowledge accumulated, it became clear that the knowledge itself was as valuable as any individual grant round.

Today, gitcoin.co documents 78 funding mechanisms, from quadratic funding to conviction voting to retroactive funding to deep funding. It includes 16 case studies showing how these mechanisms work in practice. It catalogs 27 apps that implement these mechanisms.

This is a community-curated resource. Anyone can submit a mechanism, add a case study, or improve existing documentation. The knowledge lives in a public GitHub repo, rendered as a website, structured for both human browsing and machine consumption.

Why Knowledge Commons Matter More in the AI Agent Era

Here is the thesis: knowledge commons are becoming the most important infrastructure layer for AI agents. Not compute. Not models. Not even capital. Knowledge.

AI Agents Need Structured, Open Data

AI agents don't operate in a vacuum. They make decisions based on the information available to them. An agent tasked with recommending a funding strategy for a new DAO needs to understand what mechanisms exist, how they've performed, and which contexts they work best in.

Where does this knowledge live? If it's locked in someone's head, a proprietary database, or scattered across Discord servers, agents can't use it. If it's in a structured, open, well-maintained knowledge commons, they can.
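To make this concrete, here is a minimal sketch of an agent querying a structured mechanism catalog. The entries, field names, and `recommend` helper are illustrative assumptions, not the actual gitcoin.co schema; the point is that a consistent structure is what makes the query possible at all.

```python
# Sketch: an agent filtering a structured catalog of funding mechanisms.
# Catalog contents and fields are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Mechanism:
    name: str
    category: str
    best_for: list[str]  # short descriptions of contexts it suits

CATALOG = [
    Mechanism("quadratic-funding", "allocation", ["broad community signal"]),
    Mechanism("conviction-voting", "governance", ["continuous preferences"]),
    Mechanism("retroactive-funding", "allocation", ["proven impact"]),
]

def recommend(goal: str) -> list[str]:
    """Return mechanism names whose documented use cases mention the goal."""
    return [m.name for m in CATALOG
            if any(goal in use for use in m.best_for)]

print(recommend("impact"))  # -> ['retroactive-funding']
```

A real agent would query a richer schema over an API, but the shape of the task is the same: structured fields in, ranked recommendations out.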

Open Commons Benefit Everyone; Proprietary Data Creates Walled Gardens

When LLMs are trained on open knowledge commons, the resulting capabilities benefit everyone. GPT-4 is better at answering programming questions because Stack Overflow exists. It's better at factual recall because Wikipedia exists.

The more knowledge lives in open commons, the more evenly the benefits of AI distribute across society. The more knowledge gets enclosed, the more AI becomes a tool for concentrating power.

Agents Can Both Consume and Contribute

AI agents don't just read knowledge commons; they can write to them too. This creates a feedback loop:

  1. Agents consume commons: reading structured knowledge to make decisions
  2. Agents improve commons: contributing new knowledge, fixing errors, adding structure
  3. Better agents emerge: trained on richer, more accurate commons
  4. Richer commons result: attracting more contributions, both human and machine

This is a flywheel, and it's the most powerful argument for investing in knowledge commons right now. Every dollar spent improving a knowledge commons gets multiplied by every agent that uses it.

The Infrastructure Layer Nobody Is Funding

Despite their outsized importance, knowledge commons are chronically underfunded. Wikipedia operates on about $170 million per year, roughly what a mid-tier SaaS startup burns. It serves 1.7 billion unique devices per month. That's about $0.10 per user per year.

OpenStreetMap's annual budget is around $3 million. arXiv costs about $3 million per year. These are absurd bargains.
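A back-of-envelope check of the Wikipedia figure above, using the article's rounded numbers rather than audited budgets:

```python
# Sanity check: annual budget divided by monthly unique devices.
wikipedia_budget = 170_000_000     # USD per year (article's figure)
monthly_devices = 1_700_000_000    # unique devices per month (article's figure)

cost_per_device_year = wikipedia_budget / monthly_devices
print(f"${cost_per_device_year:.2f} per monthly device per year")  # $0.10
```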

How to Use Knowledge Commons Effectively

Enough theory. Here's the practical playbook for DAOs, builders, AI agent developers, and funders.

For DAOs: Pick Your Funding Mechanisms

  1. Browse mechanisms: 78 documented approaches
  2. Read case studies: see how organizations implemented them
  3. Compare apps: find tools that implement what you need
  4. Use the mechanism finder: AI-powered recommendations

For Builders: Contribute to Knowledge Commons

Anyone can submit a mechanism, add a case study, or improve the existing documentation. The knowledge lives in a public GitHub repo, so contributing is as simple as opening a pull request.

For AI Agent Builders: Structure Knowledge for Agent Consumption

  1. Structured data over prose: consistent schemas, frontmatter, categorized content
  2. APIs and open access: publish data openly
  3. Semantic markup: tags, categories, relationships between entities
  4. Version history: keep knowledge in version-controlled repositories
  5. Canonical identifiers: slugs, IDs, and stable URLs
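The first two points can be sketched together: a commons entry with consistent frontmatter that an agent can parse without guessing. The field names and entry below are illustrative assumptions, not a real gitcoin.co document.

```python
# Sketch: parsing a frontmatter-structured commons entry into
# machine-readable metadata plus a prose body. Entry is hypothetical.
import re

ENTRY = """\
---
id: quadratic-funding
category: allocation
tags: [public-goods, matching]
---
Quadratic funding matches small donations with a shared pool.
"""

def parse_entry(text: str) -> tuple[dict, str]:
    """Split a document into (frontmatter dict, body text)."""
    match = re.match(r"---\n(.*?)\n---\n(.*)", text, re.DOTALL)
    if not match:
        return {}, text  # no frontmatter: treat everything as body
    meta = {}
    for line in match.group(1).splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return meta, match.group(2).strip()

meta, body = parse_entry(ENTRY)
print(meta["id"])        # quadratic-funding
print(meta["category"])  # allocation
```

Because every entry shares the same schema and stable `id`, an agent can index thousands of such files without any per-document heuristics; that is the whole payoff of structuring for machine consumption.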

The Tension: Commons vs. Enclosure

Knowledge commons create enormous value, but they struggle to capture it. Wikipedia generates billions of dollars of value for Google, OpenAI, and every company that uses its data. Wikipedia itself captures almost none of that value.

Promising approaches include retroactive funding, data dignity frameworks, onchain attribution via hypercerts, and protocol-level funding like percent-for-public-goods.

What Comes Next

When AI agents become the primary consumers of structured knowledge, making recommendations, allocating capital, evaluating impact, and coordinating communities, the quality and openness of the underlying knowledge commons become a bottleneck for everything.

The practical steps are clear: browse the documented mechanisms, contribute what you learn, structure knowledge so agents can consume it, and fund the commons you rely on.

The commons that exist today are some of the most valuable infrastructure humanity has ever built. In the AI agent era, they become the foundation for collective intelligence at a scale we've never seen.

The question is whether we'll invest in that foundation or let it erode. The answer depends on what we do next.

