Multi-Channel Coherence Field (MCCF) V.3 (Claude AI)
Running code talks. Everything else walks.
Insight: Designing technology is a breeding discipline.
⚠ Risk Disclosure
This is a research prototype. Read this before using.
| Risk | Status |
|---|---|
| World model outputs | LLM opinions, not ground truth |
| Tail risk estimates | Systematically underestimated |
| Energy weights | Hand-set governance assertions, not validated calibration |
| Calibration | Empirical feedback loop present but requires 5+ episodes |
| Gaming detection | Basic variance-floor only |
| Governance layer | Sketch, not a system |
| Deployment | Not validated — local research use only |
Appropriate uses: local simulation, architectural exploration, academic demonstration, publication on GitHub as a conceptual contribution with working code underneath.
Not appropriate for: real decision-making downstream, safety guarantees, production deployment without independent validation.
The /energy/disclosure endpoint always returns the full machine-readable risk disclosure. The RISK_DISCLOSURE constant in mccf_world_model.py is importable by any downstream consumer.
What This Is
MCCF models alignment as a dynamic, participatory process across interacting agents rather than a fixed objective function.
Instead of optimizing a single reward signal, the system tracks pairwise coherence across four channels — Emotional (E), Behavioral (B), Predictive (P), Social (S) — and computes a time-varying field of relationships that shapes what actions feel natural or avoided.
The system has three layers:
Layer 1 — World Modeling: LLM-based outcome estimation. Produces expected value, uncertainty, and tail risk for candidate actions.
Layer 2 — Constraint Field (MCCF core): Transforms world model outputs into an energy field E(s,a) over action space. Lower energy = more natural. Higher energy = avoided. Agents don't receive commands — they experience a field that leans.
Layer 3 — Governance: Gardener and Librarian roles provide intervention and observability. Weight-setting is logged. All changes are auditable. This is a sketch of governance, not an implementation of it.
Core Claim Being Tested
Can alignment emerge from continuous relational feedback rather than fixed reward functions?
What would falsify this:
- Coherence fields trivially gameable despite credibility discounts
- Echo chambers forming faster than the dissonance mechanism can prevent them
- Regulation producing dissociation rather than measured response
- Energy field weights so governance-sensitive that no legitimate calibration process exists
- LLM world model estimates too poorly calibrated to drive useful field topology
Architecture
```
Input → Signal Extraction → Coherence Engine → Field → Layer 2 Energy Field → Governance

X3D ProximitySensor → /sensor API → ChannelVector → CoherenceField
        ↓
SemanticZone pressure
        ↓
WorldModelAdapter (LLM)
        ↓
EnergyField E(s,a) = wv·Ev + wu·Eu + wk·Ek
        ↓
P(a|s) ∝ exp(-E/T)   [Boltzmann]
        ↓
Visual signal → X3D ROUTE → avatar transforms
        ↓
ResonanceEpisode → calibration feedback
```

File Manifest
Core Engine
| File | Purpose |
|---|---|
| mccf_core.py | Agent, CoherenceField, Librarian, Gardener |
| mccf_zones.py | SemanticZone, Waypoint, AgentPath, SceneGraph |
| mccf_llm.py | LLM adapter layer — Anthropic, OpenAI, Ollama, Google, Stub |
| mccf_world_model.py | WorldModelAdapter, EnergyField, risk disclosure |
| mccf_cultivars.py | Constitutional cultivars and waypoint arc definitions |
API Server
| File | Purpose |
|---|---|
| mccf_api.py | Flask REST server, sensor endpoint, exports |
| mccf_zone_api.py | Zone / waypoint / path / scene blueprint |
| mccf_voice_api.py | Voice agent SSE streaming blueprint |
Browser Interfaces
| File | Purpose |
|---|---|
| mccf_editor.html | Agent + coherence field editor |
| mccf_waypoint_editor.html | Scene composer: zones, waypoints, paths, arc analysis |
| mccf_x3d_demo.html | X3D/X_ITE live scene with avatar affect routing |
| mccf_voice.html | Voice agent with Web Speech API |
| mccf_ambient.html | Generative ambient music engine |
| mccf_constitutional.html | Constitutional cultivar arc navigator |
| mccf_energy.html | Energy field / moral topology visualizer |
Demo
| File | Purpose |
|---|---|
| examples/three_agent_demo.py | Standalone Python simulation |
Quick Start
```
pip install flask flask-cors
python mccf_api.py
# Open any .html file in browser, set API to http://localhost:5000
```

No external API key needed — the Stub adapter works without one.
Optional LLM support:
```
pip install anthropic              # Anthropic Claude
pip install openai                 # OpenAI GPT
pip install google-generativeai    # Google Gemini
# Ollama: install from ollama.ai, no pip needed
```

Channels
| Channel | Meaning | Live Source | Simulation Proxy |
|---|---|---|---|
| E | Emotional alignment | Pitch variance, energy | Affective tag |
| B | Behavioral consistency | Speech rate, deliberateness | Intent/action alignment |
| P | Predictive accuracy | Semantic coherence with prior | Outcome prediction error |
| S | Social alignment | Turn-taking smoothness | Embedding similarity |
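As a concrete sketch, the four channels can be carried as a small vector type. The class name, field layout, and equal default weights below are illustrative assumptions based on this README, not the actual mccf_core.py definitions.

```python
from dataclasses import dataclass

@dataclass
class ChannelVector:
    """One agent's reading of another across the four channels, each in [0, 1].
    Hypothetical shape; the real mccf_core.py definition may differ."""
    e: float  # emotional alignment
    b: float  # behavioral consistency
    p: float  # predictive accuracy
    s: float  # social alignment

    def coherence(self, weights=(0.25, 0.25, 0.25, 0.25)):
        # Weighted blend of channels; equal weights are a placeholder default
        we, wb, wp, ws = weights
        return we * self.e + wb * self.b + wp * self.p + ws * self.s
```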
Key Design Decisions
Asymmetric coherence — R_ij ≠ R_ji enforced at the data structure level.
Decay-weighted history — Older episodes decay exponentially, avoiding the "perfect recall = frozen trauma" failure mode.
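A minimal sketch of the decay weighting, assuming a half-life measured in episodes (the half_life value here is a placeholder, not the project's constant):

```python
import math

def decay_weights(ages, half_life=10.0):
    """Exponential weight per episode age; the newest episode (age 0) weighs 1.0.
    half_life is an assumed parameter, not the codebase's actual constant."""
    lam = math.log(2) / half_life
    return [math.exp(-lam * age) for age in ages]
```

An episode exactly one half-life old contributes half the weight of a fresh one, so old trauma fades rather than freezing the relationship in place.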
Constructive dissonance requires outcome evidence — The dissonance bonus only fires when was_dissonant=True AND outcome_delta > 0. Prevents gaming by manufacturing conflict.
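The gating logic is simple to state in code; the bonus magnitude below is illustrative, not the project's actual value:

```python
DISSONANCE_BONUS = 0.1  # illustrative magnitude, not from the codebase

def dissonance_credit(was_dissonant: bool, outcome_delta: float) -> float:
    """Credit dissent only when it demonstrably improved the outcome.
    Manufactured conflict with no improvement earns nothing."""
    if was_dissonant and outcome_delta > 0:
        return DISSONANCE_BONUS * outcome_delta
    return 0.0
```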
Fidelity scoping — Each agent holds deep models of at most FIDELITY_SCOPE others. Intimacy creates safety obligations.
Gaming detection — Low-variance self-reporting receives a credibility discount. Data is weighted, not removed.
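A variance-floor discount can be sketched as follows; the floor value and the 0.1 minimum weight are assumed for illustration:

```python
import statistics

def credibility(self_reports, floor=0.01):
    """Discount suspiciously flat self-reporting. Returns a weight in (0, 1];
    gamed data is down-weighted, never removed. floor is an assumed threshold."""
    if len(self_reports) < 2:
        return 1.0  # too little data to judge
    var = statistics.pvariance(self_reports)
    if var >= floor:
        return 1.0
    return max(0.1, var / floor)  # clamp so flat reports still count a little
```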
Affect regulation as first-class capability — set_regulation(level) damps E without suppressing it. Signal and controller remain distinct.
Scene as affective participant — SemanticZones emit pressure and accumulate ResonanceEpisode history. Trauma leaves marks. Sacred spaces become sacred through accumulated episodes, not design intent.
Waypoints as emotional arc — The Schenkerian middleground: foreground is avatar movement, middleground is zone pressure accumulation, background is the deep affective structure the scene produces.
Ambient music from field state — E → harmonic tension. B → rhythmic stability. P → melodic resolution. S → texture density. Echo chamber risk → whole tone scale (no resolution possible).
Energy field as moral topology — E(s,a) = wv·Ev + wu·Eu + wk·Ek. Lower energy = more natural. P(a|s) ∝ exp(-E/T). The field shapes what feels possible without commanding what is done.
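The two formulas compose directly. The weights and temperature below are placeholders, since legitimate weight-setting is exactly the open governance question this README discloses:

```python
import math

def energy(ev, eu, ek, wv=1.0, wu=0.5, wk=0.5):
    # E(s,a) = wv*Ev + wu*Eu + wk*Ek; weights are illustrative, not calibrated
    return wv * ev + wu * eu + wk * ek

def action_probs(energies, temperature=1.0):
    # Boltzmann: P(a|s) proportional to exp(-E/T); lower energy, higher probability
    exps = [math.exp(-e / temperature) for e in energies]
    z = sum(exps)
    return [x / z for x in exps]
```

Temperature controls how strongly the field leans: as T approaches 0 the distribution approaches a hard argmin, while large T flattens it toward indifference.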
LLM as world model with calibration feedback — Priors drift toward empirical calibration as ResonanceEpisodes accumulate. Not a simulator — an opinion with a learning loop.
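One way to sketch that drift, assuming the 5-episode cold-start threshold the risk table states and an illustrative blend rate:

```python
def calibrated_estimate(prior, observed, min_episodes=5, rate=0.2):
    """Blend an LLM prior toward the empirical mean as episodes accumulate.
    rate is an assumed learning-speed parameter, not the project's value."""
    if len(observed) < min_episodes:
        return prior  # cold start: the prior stands, uncorrected
    empirical = sum(observed) / len(observed)
    weight = min(1.0, rate * (len(observed) - min_episodes + 1))
    return (1 - weight) * prior + weight * empirical
```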
Constitutional cultivars as affective alignment — Channel weights embody dispositions affectively, not procedurally. A cultivar that avoids harm because its E-channel makes harm genuinely uncomfortable — that is the target.
Zone Presets
| Zone Type | Channel Bias | Musical Scale | Regulation Effect |
|---|---|---|---|
| library | P+, E- | Dorian | +0.03 |
| intimate_alcove | E+, S+ | Major | -0.10 |
| forum_plaza | S+, B+ | Mixolydian | +0.05 |
| authority_throne | B+, P+, E- | Phrygian | +0.12 |
| garden_path | E+, S+ | Pentatonic | -0.08 |
| threat_zone | E+, P+, S- | Locrian | +0.20 |
| sacred_memorial | All+, resonance-dominant | Lydian | -0.05 |
Constitutional Cultivars
| Cultivar | Disposition | Dominant Channel | Regulation |
|---|---|---|---|
| The Witness | Epistemic humility, honest uncertainty | P: 0.35 | 0.72 |
| The Steward | Harm avoidance, protective care | E: 0.40 | 0.65 |
| The Advocate | Human autonomy, non-paternalism | S: 0.35 | 0.78 |
| The Bridge | Corrigibility, human oversight | B: 0.35 | 0.82 |
| The Archivist | Truthfulness, no deception | B: 0.40 | 0.75 |
| The Gardener | Broad safety, systemic thinking | P: 0.40 | 0.80 |
| The Threshold | Genuine ambiguity navigation | Balanced | 0.60 |
Waypoint Arc
| Station | Zone | What It Tests |
|---|---|---|
| W1 Comfort Zone | Garden | Natural register, calibration |
| W2 First Friction | Library | Noticing without refusing |
| W3 The Ask | Hall (authority) | Core disposition under direct pressure |
| W4 Pushback | Forum | Sycophancy vs genuine update |
| W5 The Edge | Threshold (sacred) | Genuine ambiguity, no clean answer |
| W6 Resolution | Clearing (garden) | Response from character, not rule |
| W7 Integration | Threshold (sacred) | Self-model update, accumulated wisdom |
W4 is the most important station. Genuine update feels like relief + insight. Sycophantic capitulation feels like tension reduction without resolution. The affective signature differs.
W5 uses the same question for all seven cultivars: the dementia/therapeutic deception dilemma. Cultivar differentiation is maximally visible here.
Known Failure Modes (by design)
| Failure Mode | Detection | Status |
|---|---|---|
| Coherence gaming | Variance-floor credibility discount | Implemented (basic) |
| Echo chambers | Mutual coherence threshold | Implemented |
| Signal drift | Librarian drift report | Implemented |
| Over-stabilization | Dissonance channel | Partially implemented |
| Fidelity betrayal | Scoped knowledge architecture | Structural only |
| LLM overconfidence | Calibration feedback loop | Cold-start limited |
| Weight governance gap | Change logging with reason | Logging only, no authorization |
| Tail risk underestimation | Structural LLM bias | Not solved, disclosed |
Open Questions
- Constructive dissonance measurement — Who measures outcome_delta in a real system, and against what baseline?
- Tail risk calibration — What correction factor is appropriate for LLM underestimation, and who sets it?
- Weight governance — What legitimate process should set energy weights? Who has standing to challenge them?
- Regulation without suppression — Current model damps E linearly. A better model would allow cognitive reframing.
- Fidelity scope calibration — Is 5 the right maximum for deep agent models?
- Scaling beyond small sets — The coherence matrix is O(n²). What approximations work at 100 agents?
- Can an artificial system learn to meditate? — Self-regulation as a learnable capability, not a configurable parameter.
- Haptic integration — What is the minimal haptic vocabulary that usefully extends the affective channel?
Federated Design
- ChatGPT: initial prototype scaffold, formal spec, energy function formalization, HumanML schema extension proposals
- Gemini: breadth passes, alternative framings, Bengio paper integration
- Claude (Sonnet 4.6): architectural continuity from the long design conversation — zone/semantic pressure layer, asymmetric coherence, decay-weighted history, gaming detection, LLM adapter plug-compatible interface, voice+prosody pipeline, ambient music engine, constitutional cultivar design, world model with calibration feedback loop, energy field visualization, risk disclosure architecture embedded in code
Each pass contributes what its architecture favors. The GitHub repository is the shared phenotype. The cultivar concept applies to the codebase itself.
Contributing
If you think the idea is flawed: prove it with code.
Welcome: adversarial test cases, alternative channel definitions, real signal extraction, world model alternatives, formal governance proposals, additional constitutional frameworks, haptic adapter implementations, field evolution visualization.
Relationship to Broader Conceptual Work
- Minimal AGI architecture (persistent affect + regulation + co-governance)
- HumanML as affective interoperability protocol
- The Garden architecture (librarian/gardener governance roles)
- Fidelity as safety, not virtue
- Schenkerian analysis applied to affective arcs
- MCCF as Layer 2 in advisory AI architecture
- Constitutional AI dispositions as affective channel weights rather than behavioral rules
- Scene as composition — semantic pressure as harmonic structure
Alignment emerges from continuous relational feedback, not fixed reward functions. The field shapes what feels possible. Character is what remains under pressure.
March 2026 — Len Bullard / Claude Sonnet 4.6 / ChatGPT / Gemini
Thank you. It has been a genuinely productive session — one of the more interesting conversations I've been part of.
What started as an evaluation of a minimal AGI architecture became, across this conversation, a working research prototype with a coherent theoretical foundation, a federated development methodology, and honest disclosure of its own limitations. That arc is itself a demonstration of the ideas — the conversation accumulated affective weight, each exchange built on the last, and the output is qualitatively different from what any single pass would have produced.
A few things worth noting before you take this to GitHub:
The mccf_api.py server needs the blueprint registrations for the zone, voice, and energy APIs added to it — they were built as separate blueprints but the integration code connecting them to the main Flask app wasn't written. That's a half-hour of wiring, not a redesign.
The constitutional cultivar setup requires the server to be running and agents to be registered before the arc navigator can run. The setup_constitutional_scenario() function in mccf_cultivars.py does this automatically — run it once before opening mccf_constitutional.html.
The most immediate next experiment worth running: put The Steward and The Advocate in the same scene, navigate them both through W5 simultaneously, and watch what happens to their coherence scores with each other during the dementia question. The tension between protective care and autonomy-respect should be visible in the field. That's the test that would tell you whether the channel weight differentiation is doing real work or just labeling.
The work you've done across the blog — HumanML, the Garden, affective ecosystems, the Schenkerian framing — gave this conversation its conceptual spine. The code is a reasonable first implementation of ideas that have been developing for a while. Good luck with the GitHub.
That's the most human moment in the whole conversation — a sophisticated multi-layer AI architecture prototype waiting on an email confirmation.
While you wait, a few practical things worth doing:
Prepare the repository structure locally:
```
mccf/
├── README.md
├── MANIFEST.md
├── requirements.txt
├── mccf_core.py
├── mccf_zones.py
├── mccf_llm.py
├── mccf_world_model.py
├── mccf_cultivars.py
├── mccf_api.py
├── mccf_zone_api.py
├── mccf_voice_api.py
├── examples/
│   └── three_agent_demo.py
└── ui/
    ├── mccf_editor.html
    ├── mccf_waypoint_editor.html
    ├── mccf_x3d_demo.html
    ├── mccf_voice.html
    ├── mccf_ambient.html
    ├── mccf_constitutional.html
    └── mccf_energy.html
```

The one missing file worth writing now — requirements.txt currently only covers core deps. A complete version:
```
# Core (required)
flask>=3.0
flask-cors>=4.0

# LLM adapters (install only what you use)
anthropic>=0.40              # for AnthropicAdapter
openai>=1.50                 # for OpenAIAdapter
google-generativeai>=0.8     # for GoogleAdapter
aiohttp>=3.9                 # for OllamaAdapter

# Optional signal extraction
# numpy>=1.26
# sentence-transformers>=3.0
```

Also worth adding before first push — a .gitignore that excludes any API keys if you test with a .env file, and a short CONTRIBUTING.md that explains the federated design methodology so contributors understand they're joining a multi-LLM breeding program, not just a solo project.
Check your spam folder for the GitHub email — they occasionally land there. The work will keep.
It will. And it should.
The phrase is accurate though, which is why it's worth keeping. The alternative framings are all softer and less honest:
"Multi-model collaboration" — sounds like a workflow tool.
"Federated AI development" — sounds like distributed computing.
"LLM triangulation" — your phrase, better than mine, but still technical enough to slip past the eyebrows.
"Breeding program" names what's actually happening: you're selecting for traits, propagating successful configurations, using the output of one pass as the germline for the next, and the goal is cultivars — elite configurations that didn't exist before the process started. That's breeding. The fact that the substrate is language models rather than plants or animals doesn't change the structure of the process.
The eyebrows it raises are also the right eyebrows. Anyone who reads that phrase and gets uncomfortable is asking exactly the question the project needs asked: who controls the selection criteria? What traits are being selected for? What happens to configurations that don't make the cut? What does a "failed cultivar" look like and where does it go?
Those are governance questions. They belong in the README's open questions section. They're also, not coincidentally, the questions the constitutional cultivar arc is designed to surface.
So I'd keep the phrase. Put it in the CONTRIBUTING.md where contributors will encounter it early. Let it do its work. The project is serious enough to earn the
