
The Nervous System We Choose to Build

A response to Dario Amodei's "The Adolescence of Technology"

Nicolas Lounsbury, January 2026


I.

Dario Amodei's recent essay articulates something many of us feel but struggle to name: we are at a hinge point in history, and we are not ready.

"Humanity is about to be handed almost unimaginable power," he writes, "and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it."

He's right. The scenarios he outlines—AI-enabled authoritarianism, mass surveillance, the erosion of human agency—are not science fiction. They are engineering problems with known solutions, awaiting only the will to deploy them.

This is terrifying. It should be.

But terror, while appropriate, is insufficient. We have become so focused on preventing catastrophe that we have underinvested in articulating what we actually want. The discourse is dominated by red lines—what AI must not do—while the positive vision remains vague. "Beneficial AI" is a placeholder, not a destination.

The absence of a compelling constructive vision is itself dangerous. It leaves the future to those with narrower imaginations: surveillance with better PR, extraction with a safety label, control systems marketed as convenience. If we cannot articulate what we want to build, we will get what others want to build for us.

So let me try to articulate something. Not a complete answer—no one has that—but a direction.


II.

Amodei frames AI as optimization applied at unprecedented scale. Optimization requires an objective function—something to maximize. AI systems are extraordinarily good at finding efficient paths to specified goals. The question is: whose goals?

This leads to an uncomfortable question: What is society's objective function? Can society be optimized? And if so, optimized for what?

My day job is computational biology—modeling complex biological systems. I find it useful to look at systems that have solved related problems, not through central optimization, but through distributed coordination. Specifically: ecosystems.

An ecosystem in equilibrium is not centrally planned. No organism decides the optimal number of deer or the correct distribution of nitrogen. Instead, equilibrium emerges from countless local interactions—feedback loops between organisms and environment, predator and prey, producer and decomposer. Information flows constantly. Each organism processes information locally and acts on its own interests. From this apparent chaos, stable patterns emerge.

This is not optimization in the engineering sense. It's homeostasis—dynamic stability maintained through continuous feedback. No single point of control. No central objective function. Just countless agents responding to their local environment, and collective outcomes emerging from their interactions.

The metaphor is imperfect. But it suggests something important: coordination does not require central control. Stability does not require surveillance. Complex systems can self-organize when information flows freely and feedback loops function properly.
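The homeostasis idea can be made concrete with a toy simulation. This is purely illustrative, not part of any system described in this essay: each simulated organism senses only its own local error and corrects a fraction of it, random shocks perturb it every step, and no controller ever sees the whole population. Yet the collective settles near a stable level and stays there.

```python
import random

random.seed(0)  # deterministic run for the illustration

def homeostasis(agents=50, steps=500, setpoint=1.0, gain=0.2):
    # Each "organism" regulates its own level via negative feedback:
    # it senses only its local deviation from the setpoint and corrects
    # a fraction of it, while random shocks perturb it every step.
    levels = [random.uniform(0, 2) for _ in range(agents)]
    for _ in range(steps):
        levels = [
            v + gain * (setpoint - v) + random.gauss(0, 0.02)
            for v in levels
        ]
    return levels

final = homeostasis()
mean = sum(final) / len(final)
spread = max(final) - min(final)
# The population mean hovers near the setpoint and the spread stays
# small, despite constant perturbation and no central coordination.
```

The point of the toy is the mechanism, not the numbers: stability is a property of the feedback structure, maintained continuously rather than computed once.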

What would it mean to build AI systems that enable this kind of coordination rather than replacing it with central optimization?


III.

Amodei warns of a specific nightmare: "A powerful AI looking across billions of conversations from millions of people could gauge public sentiment, detect pockets of disloyalty forming and stamp them out before they grow."

This is the surveillance nervous system. Information flows from periphery to center. The center processes, decides, acts. The periphery has no autonomy—it is merely sensed.

But consider the same capability pointed in a different direction.

An AI system looking across thousands of municipal meetings, tens of thousands of community complaints, and millions of civic interactions could gauge public sentiment, detect pockets of shared concern forming, and amplify them before they dissipate.

Same technology. Same scale. Opposite architecture.

The difference is not in the AI's capabilities. It's in the architecture—who sees what, who controls what, whose interests the system serves.


IV.

The deeper problem with contemporary democracy is not that citizens lack information. We are drowning in information. The problem is that the information is illegible—buried in thousand-page budgets, scattered across five levels of government, encoded in procedures designed by and for professionals.

This complexity creates information asymmetry. Lobbyists, consultants, and connected interests can navigate it; ordinary residents cannot. The asymmetry isn't accidental. Incumbents benefit from opacity, and they have little incentive to reduce it.

Citizens cannot participate in decisions they don't know about. By the time a policy appears in the news, it has already been decided. The people affected were never in the room. They didn't know the room existed.

Decisions made without input lack legitimacy. Officials act on incomplete information. Policies fail to reflect community priorities. Residents feel ignored, then disengaged. Disengagement becomes self-reinforcing.

This is democratic decay. Not dramatic collapse—just the slow erosion of the feedback mechanisms that make self-governance functional. The problem is twofold: information asymmetry flowing down (government to citizens), and inefficient feedback channels flowing up (citizens to government). Both are solvable.

The hypothesis is simple: residents don't participate because they lack coordination infrastructure. They don't know what's being decided, when, or who else cares. If we can close that gap—detect high-stakes decisions, identify affected residents, provide infrastructure for coordination—participation will increase.


V.

The word "permissionless" has been corrupted by crypto grifters, but the concept remains important. A permissionless system is one where participation doesn't require approval from a central authority. Anyone can join. Anyone can build on it. No single entity can exclude you.

This matters because the alternative—platforms—has failed.

Nextdoor optimizes for engagement, which means conflict. Citizen optimizes for fear. Facebook groups are subject to algorithmic whims. Every platform-based civic tool eventually faces the same dilemma: serve users or serve the business model. The business model usually wins.

More fundamentally, platforms are owned. They can be acquired, pivoted, shut down. The civic tech graveyard is full of promising tools that couldn't find a sustainable model or got acquired by companies with different priorities.

Protocols are different. Email is a protocol. No one owns it. No one can shut it down. Competing implementations interoperate. If your email provider becomes evil, you can switch without losing the ability to communicate.

What would a protocol for civic coordination look like?


VI.

Imagine this:

You wake up Monday morning. Your AI assistant—running locally, trained on your preferences—surfaces a notification: "The City Council votes tonight on the 4th Street rezoning. This affects your commute and your neighborhood's character. 23 other residents are tracking this item. 8 have publicly voiced support, 3 oppose."

You ask: "What's the history here?" The assistant searches meeting transcripts, finds that a similar proposal failed 3-2 in 2024, identifies which council members voted against it, surfaces public testimony from that hearing. It cross-references state housing legislation that may constrain local options. It finds 14 open traffic complaints on 4th Street from your neighbors.

You decide you want to speak. You ask the assistant to help you prepare. It drafts talking points based on effective testimony patterns, notes what's already been said so you don't repeat others, suggests a specific ask that aligns with what the 23 other engaged residents seem to want. You revise it in your own voice.

At the meeting, coordinated residents speak in sequence—not identical scripts, but aligned concerns, each adding something distinct. The council sees organized community sentiment, not just a few squeaky wheels.

Afterward, the assistant tracks what the council decided, notes any commitments made ("staff will report back in 90 days"), and reminds you when that deadline approaches. If the commitment isn't kept, you'll know.

None of this requires a platform that owns your data or controls your experience.

The data layer queries public records—agendas, minutes, budgets, legislation. This is public information, newly accessible.

The relay routes events to subscribers without filtering. It doesn't decide what's important. It delivers and counts.

Your assistant—your agent, running on your device or your chosen cloud, using whatever model you prefer—does the filtering and reasoning. It knows your preferences because you told it, not because it inferred them from surveillance. You can inspect its reasoning. You can adjust it. You can switch providers.
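The division of labor between relay and agent can be sketched in a few lines of Python. This is a hypothetical toy, not the civicos-relay implementation: the relay only matches topics, delivers, and counts, while filtering and reasoning stay entirely inside the subscriber's own agent.

```python
from collections import defaultdict

class Relay:
    """Toy relay: routes events to subscribers without filtering.

    It never inspects event content or ranks importance; it only
    delivers to matching topics and keeps aggregate counts.
    """

    def __init__(self):
        self.subscribers = defaultdict(list)     # topic -> callbacks
        self.delivery_counts = defaultdict(int)  # topic -> events delivered

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, event):
        for callback in self.subscribers[topic]:
            callback(event)
        self.delivery_counts[topic] += 1

class Agent:
    """Toy user agent: the filtering happens here, at the edge."""

    def __init__(self, interests):
        self.interests = interests  # e.g. streets the user cares about
        self.inbox = []

    def receive(self, event):
        # Local filtering: keep only events touching the user's interests.
        if any(term in event["summary"] for term in self.interests):
            self.inbox.append(event)

relay = Relay()
agent = Agent(interests=["4th Street"])
relay.subscribe("city/san-rafael/agenda", agent.receive)
relay.publish("city/san-rafael/agenda",
              {"summary": "Council vote: 4th Street rezoning"})
relay.publish("city/san-rafael/agenda",
              {"summary": "Routine sidewalk repair on B Street"})
```

The topic name and event shape here are invented for illustration. The design choice the sketch makes visible: the relay's code path contains no notion of importance, so "what matters" cannot be centrally manipulated.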

This is why the moment matters. Previous recommendation systems required centralized surveillance to function—"people who liked X also liked Y" only works if you watch everyone. LLMs reason differently: "this rezoning affects the school your kids attend" requires understanding context, not tracking behavior. Intelligence can now live at the edge. The same technological shift that makes AI-enabled surveillance possible makes AI-enabled coordination possible. We get to choose which one to build.

Voice aggregation happens through cryptographic attestation, not identity databases. Provenance is transparent—is this a new account or someone with a history of engagement?—but no central party knows who you are unless you choose to reveal it.
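A simplified stand-in for that attestation scheme, using an HMAC-derived pseudonym instead of real public-key signatures (the actual design would use signatures, and every name here is invented for illustration). The property it demonstrates is the same: the relay can count distinct voices and track engagement history without ever learning who anyone is.

```python
import hmac
import hashlib

def pseudonym(user_secret: bytes, jurisdiction: str) -> str:
    # Derive a stable per-jurisdiction pseudonym from a secret only the
    # user holds. The relay sees the pseudonym, never the secret or any
    # real-world identity. (Stand-in for a public-key attestation.)
    return hmac.new(user_secret, jurisdiction.encode(),
                    hashlib.sha256).hexdigest()[:16]

class VoiceCounter:
    """Relay-side tally: counts distinct pseudonyms per agenda item and
    records when each pseudonym first appeared (provenance) - nothing more."""

    def __init__(self):
        self.voices = {}      # item -> set of pseudonyms
        self.first_seen = {}  # pseudonym -> first item voiced on

    def voice(self, item: str, pseud: str):
        self.voices.setdefault(item, set()).add(pseud)
        self.first_seen.setdefault(pseud, item)

alice = pseudonym(b"alice-device-secret", "san-rafael")
bob = pseudonym(b"bob-device-secret", "san-rafael")

counter = VoiceCounter()
counter.voice("4th-street-rezoning", alice)
counter.voice("4th-street-rezoning", alice)  # duplicate: still one voice
counter.voice("4th-street-rezoning", bob)
```

Because the pseudonym is a one-way derivation, the relay can answer "how many distinct, established voices supported this?" while remaining structurally unable to answer "who are they?"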

The result: no single party has the full picture.

Actor        Knows                                  Doesn't know
Data layer   City activity                          Who subscribes to what
Relay        Subscription patterns, voice counts    Why users care, user identities
Your agent   Your preferences                       Other users' preferences
City         Who showed up physically               What they voiced digitally

This is privacy through architecture, not policy. The system is structurally incapable of surveillance—not because anyone promises not to surveil, but because no component has the information required.

The agent ecosystem becomes symbiotic. Journalist agents monitor 20 cities for anomalies—astroturfing, unusual voting patterns, broken promises. Organization agents coordinate members across jurisdictions. City staff agents sense community sentiment before meetings, closing the loop by making it visible that coordinated input actually reaches decision-makers.

The protocol becomes more valuable as the ecosystem diversifies—not through winner-take-all dynamics, but through complementary roles. This is an economy, not just an architecture. Value flows between participants rather than being extracted by a central platform. What emerges when coordination infrastructure exists is a question worth its own exploration.


VII.

This is the vision. I am working—passionately, in whatever hours I can find outside my day job—to build toward it.

The project is called CivicOS. It's open source, built on Anthropic's Model Context Protocol. The data layer exists: real civic data for San Rafael, California—meetings, decisions, transcripts, budgets, community complaints. The coordination protocol is designed and documented. We're building incrementally, each layer validating the one beneath it.

San Rafael is the first city, not the only city. The architecture is designed for federation—portable to any jurisdiction, open for anyone to run their own instance, extensible by the community. The goal is infrastructure that spreads, not a platform to control.

January 2026: pilot decision awareness with real residents and real decisions. Learn what works. Build the next layer. Repeat.

This may fail. The specific implementation might be wrong. But I'm convinced the architectural direction is right—that the choice between AI-enabled surveillance and AI-enabled coordination is real, and that the latter is possible. The only way to find out is to build it.


VIII.

A note on how this essay came to exist.

I wrote it in collaboration with Claude—Anthropic's AI assistant. Not "Claude wrote it and I edited," but genuine collaboration: I provided the framing and the vision; Claude helped structure the argument, find the words, identify redundancies. The CivicOS codebase is similar—substantially written with Claude Code, under my direction and review.

This feels important to acknowledge, because it's the point.

The nightmare scenario is AI that replaces human agency—systems that decide for us, manipulate us, render us obsolete. The alternative is AI that augments human agency—systems that help us think more clearly, coordinate more effectively, act more decisively.

This essay is a small example of the latter. I could not have written it as well alone. Claude could not have written it at all without my direction and judgment. The collaboration produced something neither of us would have produced independently.

That's what I want for civic coordination. Not AI that decides what citizens should want, but AI that helps citizens discover what they want and act on it together. Not platforms that extract engagement, but protocols that enable coordination. Not nervous systems that report to central authorities, but nervous systems that report to citizens.

Amodei writes that "humanity needs to wake up" to the risks of AI. He's right. But waking up means more than recognizing danger. It means imagining alternatives—and then building them.

Communities should function as organisms, with cells acting in concert. The question is: who builds the nervous system, and for whose benefit?

The nervous system we choose to build will shape what kind of organism we become.


Nicolas Lounsbury is a computational biologist and the creator of CivicOS, an open-source coordination infrastructure for local self-governance. He builds this in his spare time because he believes it matters.

The coordination protocol described in this essay is implemented in civicos-relay.

This essay was written in collaboration with Claude.


References

  • Amodei, Dario. "The Adolescence of Technology." January 2026. https://www.darioamodei.com/essay/the-adolescence-of-technology
  • Amodei, Dario. "Machines of Loving Grace." 2024. https://darioamodei.com/essay/machines-of-loving-grace