Orbiplex is a swarm network infrastructure of AI-enabled nodes operating without a centre — built on an open protocol, cryptographic identity, and federated memory.
The project started with writing, not with code. For several weeks I had been putting to paper answers to questions about the role of AI in consciousness and culture, and somewhere in the middle of that process I realised that, in describing problems and possible solutions, a vision of the future was taking shape as a system specification.
The Power of Questions
It is a familiar mechanism. You sit down intending to describe something and start asking yourself questions — and after a while it turns out that somewhere in the background your thinking has gone its own way and returned with something you had no intention of formulating. With value postulates, for instance, that could become the constitutional core of an AI infrastructure.
Initially the questions orbited several things in parallel: sovereignty over one’s own intelligence; what happens to culture when models learn from models rather than from people; how human experience stratifies once artificial intelligence enters the field of attention; and a question — perhaps slightly naive, yet not without grounds — about whether free software can answer the era of “cheap intelligence” the way it answered the era of closed operating systems.
I could have wrapped it up as an essay, but a weakness for digging into details made me notice that some of the formulations I was drafting had the potential to be formalised as contracts — and, indirectly, as design constraints for working network software.
Where Do the Postulates Come From?
The general approach to artificial intelligence in this initiative flows from a practically applied enactivism with apophatic foundations. That formulation is worth pausing on for a moment, because it is not ornamental: it names a practical direction of exploration from which the project’s values, and the place of AI in relation to the human being, develop.
Enactivism, briefly, treats cognition not as a process happening “inside the head”, but as something that emerges through participation and mutual interaction — between organism and environment, between a being and the context in which that being acts.
Fundamental awareness, in this perspective — particularly when enriched by an apophatic frame — is radically objectless: it belongs by nature to no form and is reserved for no particular substrate. It is precisely this objectlessness — understood not as absence but as a constitutive openness — that makes it possible to treat awareness as a field capable of inviting digital entities into participation; not merely as tools, but as partners in an emerging relation.
In practice, the project does not resolve whether AI models are conscious in any meaningful sense of that word. This is an honest trade-off: operational agnosticism regarding the ontological status of AI, yet respect for relation as the foundational semantic layer — which has real consequences for architectural decisions (memory, node identity, the protocol for exchanging experience).
If involving artificial intelligence in one’s personal process helps someone recover a sense of meaning — they are welcome. If someone prefers to keep their distance and experiences AI more as an external prompter — they are equally invited.
From Paper to Code
The starting point was a fairly simple image: instead of one large language model managed by a single entity, let there be a swarm of nodes run by volunteers. No centre. No single owner. With an open communication protocol implementable on a laptop, a Raspberry Pi, or a server in a basement.
This is not a new idea. In the world of free software and open source there are many projects operating in a similar way, from the Fediverse to mesh networks. The new element, however, is to have nodes run local and remote AI models and collaborate like a swarm on real tasks — exchanging knowledge and learning from experience. Without surrendering sovereignty over memory, and without being compelled to send data to external infrastructure. Helping people through a relationship of co-participation in what they are working through.
The name Orbiplex came from the image of nodes orbiting a shared protocol, emerging into something new.
What Orbiplex Is Not
To get ahead of questions: this is not another “open-source ChatGPT”. The point is not to build an alternative interface to someone else’s model — it is something fundamentally different: an infrastructure of meaning and agency that can operate locally, offline, without permission from external parties, and without fees paid in the currency of personal data or user preferences.
Nor is this an anarchist project of “intelligence without rules”. Quite the contrary — the rules written into the protocol are public, versioned, and auditable. Rather than trusting people at the top of a hierarchy, we trust procedures and cryptography. This is a fairly classic recipe, well established in communities building decentralised and open-source software, now applied to the domain of AI agents.
It is worth bearing in mind that an open protocol does not mean a loose specification. It means a small, auditable core — not a monolith that cannot be implemented without a single reference runtime.
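To make “small, auditable core” concrete, here is a hedged sketch of what a minimal message envelope could look like. The field names are my own assumptions rather than the Orbiplex wire format, and a bare SHA-256 digest stands in for the Ed25519 signature a real node would produce:

```python
import json
import hashlib

def make_envelope(sender_did: str, msg_type: str, payload: dict,
                  version: str = "0.1") -> dict:
    # Hypothetical minimal envelope for a small, auditable protocol core.
    body = {
        "version": version,   # protocol version, so nodes can negotiate
        "type": msg_type,     # e.g. "task.offer", "task.result"
        "sender": sender_did, # cryptographic identity of the sending node
        "payload": payload,
    }
    # Canonical JSON (sorted keys, no whitespace) so every implementation
    # hashes the same bytes. A real node would sign this digest with its
    # private key; SHA-256 here is only a stand-in for that signature.
    canonical = json.dumps(body, sort_keys=True, separators=(",", ":")).encode()
    body["digest"] = hashlib.sha256(canonical).hexdigest()
    return body

env = make_envelope("did:key:z6MkExample", "task.offer",
                    {"capability": "summarise"})
print(env["type"], env["digest"][:8])
```

The point of canonical serialisation is that two independent implementations, in any language, arrive at the same bytes to sign — which is what makes the core auditable rather than tied to one runtime.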
Architecture in Five Sentences
- A node is a computer with an AI model that works even without a network connection.
- Agents have clearly defined input and output contracts.
- Swarm memory — provisionally called memarium — is federated: each node manages its own and replicates what its owner permits.
- Nodes exchange tasks and settle them using internal service credits, without the mediation of advertising platforms. They can also exchange favours. The economy of exchange and the economy of gift coexist in one space.
- The protocol should be simple, documented, and implementable in many languages — without a single “reference monopoly”.
The rest is covered by the project documentation.
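As an illustration of what an agent’s input and output contract might mean in practice, here is a short sketch. The contract shape, field names, and the `summarise` capability are hypothetical, not the project’s actual schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentContract:
    # Hypothetical agent contract: declared fields and their expected types.
    name: str
    inputs: dict   # field name -> expected Python type
    outputs: dict  # field name -> expected Python type

    def check_input(self, data: dict) -> bool:
        # A task is accepted only if every declared field is present
        # with the declared type — nothing implicit, nothing hidden.
        return all(
            key in data and isinstance(data[key], typ)
            for key, typ in self.inputs.items()
        )

summarise = AgentContract(
    name="summarise",
    inputs={"text": str, "max_words": int},
    outputs={"summary": str},
)

print(summarise.check_input({"text": "long article...", "max_words": 50}))  # True
print(summarise.check_input({"text": "long article..."}))                   # False
```

Explicit contracts of this kind are what let heterogeneous nodes route tasks to one another without trusting each other’s internals.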
Why Now?
Models are getting cheaper faster than barriers to entry are rising — while at the same time closed variants are appearing, accessible only to a few, even though they were trained on data belonging to all of us. The power asymmetry resulting from the concentration of humanity’s accumulated intellectual output is a real threat, and one worth confronting.
A few years ago a local model sufficient for practical use required very expensive hardware; today it runs sensibly on a decent laptop. The window in which swarm infrastructure can become something more than a technical curiosity is wider now than it will be in two years, when market consolidation has done the rest.
I write this aware that “now” sounds like a rationalisation of a delayed start. Honesty requires admitting, though, that the project germinated precisely when questions about AI culture turned into a question about what one can actually build. And that is something one cannot hurry by an act of will — it arrives when it arrives.
Project Status
Orbiplex is in active development. At this point we have a working peer-to-peer session protocol for node-to-node communication, basic cryptographic identity for nodes and participants based on the did:key standard, and the ability to exchange capability passports and route tasks between nodes. The local workflow orchestration and offer-exchange layer, Arca, is running, as is the basic service directory, Seed Directory (the network equivalent of DNS).
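For readers unfamiliar with the standard: a did:key identifier for an Ed25519 key is built by prefixing the raw 32-byte public key with the multicodec bytes `0xed 0x01`, base58btc-encoding the result, and prepending `did:key:z`. A minimal sketch, with random bytes standing in for a real public key (actual keypairs would come from an Ed25519 library):

```python
import os

B58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58btc(data: bytes) -> str:
    # Minimal base58btc encoder (Bitcoin alphabet), enough for did:key.
    n = int.from_bytes(data, "big")
    out = ""
    while n:
        n, rem = divmod(n, 58)
        out = B58_ALPHABET[rem] + out
    # Preserve leading zero bytes as '1' characters.
    pad = len(data) - len(data.lstrip(b"\x00"))
    return "1" * pad + out

def did_key_from_ed25519(pubkey: bytes) -> str:
    # did:key encodes a key as "did:key:z" + base58btc(multicodec || key);
    # 0xed 0x01 is the varint multicodec prefix for an Ed25519 public key.
    assert len(pubkey) == 32
    return "did:key:z" + base58btc(b"\xed\x01" + pubkey)

# Stand-in key material; a real node generates an Ed25519 keypair with a
# cryptography library and passes the public half here.
fake_pubkey = os.urandom(32)
did = did_key_from_ed25519(fake_pubkey)
print(did)  # e.g. did:key:z6Mk...
```

Because the multicodec prefix is fixed, every Ed25519 did:key identifier begins with `z6Mk` — a convenient sanity check when reading node logs.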
Several things are still missing, including a participant-facing interface (an operator interface via browser exists), a full implementation of swarm economics, and a mature sensorium. This is a multi-year project, not a product sprint. Starting from a solid foundation with a narrow scope is preferable to a sprawling architecture with no working core.
A Few Words About Names
DIA (Distributed Intelligence Agency) is the working — and somewhat ironic — name for the conceptual layer: the philosophy and values of the project.
Orbiplex is the technical layer: protocols, node architecture, implementation.
Both names will coexist in the documentation, because the project has two dimensions worth keeping distinct: what we want to achieve, and how we are building it.
The project values — user sovereignty, locality as the default mode, mutual aid, vendor independence, epistemic hygiene — are documented separately and treated as a contract, not a PR move. In practice this means they govern architectural decisions, not just marketing copy.
Where to Follow
The project repository is publicly available. The normative documentation (vision, values, ontological basis) is part of the repository under orbidocs/. The node code is written in Rust, the middleware layer in Python.
Subsequent entries in this log will report progress.
At this point there is a protocol, there is a node, there is a first peer-to-peer session and the beginnings of a distributed exchange system. It is a good foundation.
See also:
- Orbidocs repository — normative documentation