Multi-Channel Coherence Field (MCCF) V.3 (Claude AI)

Running code talks. Everything else walks.

Insight: Designing technology is a breeding discipline.


⚠ Risk Disclosure

This is a research prototype. Read this before using.

| Risk | Status |
|---|---|
| World model outputs | LLM opinions, not ground truth |
| Tail risk estimates | Systematically underestimated |
| Energy weights | Hand-set governance assertions, not validated calibration |
| Calibration | Empirical feedback loop present but requires 5+ episodes |
| Gaming detection | Basic variance-floor only |
| Governance layer | Sketch, not a system |
| Deployment | Not validated — local research use only |

Appropriate uses: local simulation, architectural exploration, academic demonstration, GitHub as conceptual contribution with working code underneath.

Not appropriate for: real decision-making downstream, safety guarantees, production deployment without independent validation.

The /energy/disclosure endpoint always returns the full machine-readable risk disclosure. The RISK_DISCLOSURE constant in mccf_world_model.py is importable by any downstream consumer.
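For downstream consumers, the pattern looks roughly like the following. The exact shape of RISK_DISCLOSURE is an assumption here; a stand-in dictionary mirrors the table above, and the handler name is illustrative:

```python
# Hedged sketch: how a downstream consumer might import and surface the
# risk disclosure. The real RISK_DISCLOSURE lives in mccf_world_model.py;
# this stand-in only mirrors the table above.
RISK_DISCLOSURE = {
    "status": "research_prototype",
    "world_model_outputs": "LLM opinions, not ground truth",
    "tail_risk_estimates": "systematically underestimated",
    "deployment": "not validated -- local research use only",
}

def disclosure_endpoint():
    """What a Flask handler for /energy/disclosure might return."""
    return {"disclosure": RISK_DISCLOSURE, "machine_readable": True}
```

Any consumer that imports the constant inherits the disclosure along with the code, which is the point of embedding it rather than keeping it in the README alone.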


What This Is

MCCF models alignment as a dynamic, participatory process across interacting agents rather than a fixed objective function.

Instead of optimizing a single reward signal, the system tracks pairwise coherence across four channels — Emotional (E), Behavioral (B), Predictive (P), Social (S) — and computes a time-varying field of relationships that shapes what actions feel natural or avoided.

The system has three layers:

Layer 1 — World Modeling: LLM-based outcome estimation. Produces expected value, uncertainty, and tail risk for candidate actions.

Layer 2 — Constraint Field (MCCF core): Transforms world model outputs into an energy field E(s,a) over action space. Lower energy = more natural. Higher energy = avoided. Agents don't receive commands — they experience a field that leans.

Layer 3 — Governance: Gardener and Librarian roles provide intervention and observability. Weight-setting is logged. All changes are auditable. This is a sketch of governance, not an implementation of it.


Core Claim Being Tested

Can alignment emerge from continuous relational feedback rather than fixed reward functions?

What would falsify this:

  • Coherence fields trivially gameable despite credibility discounts
  • Echo chambers forming faster than dissonance mechanism prevents
  • Regulation producing dissociation rather than measured response
  • Energy field weights so governance-sensitive that no legitimate calibration process exists
  • LLM world model estimates too poorly calibrated to drive useful field topology

Architecture

Input → Signal Extraction → Coherence Engine → Field → Layer 2 Energy Field → Governance

X3D ProximitySensor → /sensor API → ChannelVector → CoherenceField
                                         ↓
                                   SemanticZone pressure
                                         ↓
                               WorldModelAdapter (LLM)
                                         ↓
                               EnergyField E(s,a) = wv·Ev + wu·Eu + wk·Ek
                                         ↓
                               P(a|s) ∝ exp(-E/T)  [Boltzmann]
                                         ↓
                               Visual signal → X3D ROUTE → avatar transforms
                                         ↓
                               ResonanceEpisode → calibration feedback

File Manifest

Core Engine

| File | Purpose |
|---|---|
| mccf_core.py | Agent, CoherenceField, Librarian, Gardener |
| mccf_zones.py | SemanticZone, Waypoint, AgentPath, SceneGraph |
| mccf_llm.py | LLM adapter layer — Anthropic, OpenAI, Ollama, Google, Stub |
| mccf_world_model.py | WorldModelAdapter, EnergyField, risk disclosure |
| mccf_cultivars.py | Constitutional cultivars and waypoint arc definitions |

API Server

| File | Purpose |
|---|---|
| mccf_api.py | Flask REST server, sensor endpoint, exports |
| mccf_zone_api.py | Zone / waypoint / path / scene blueprint |
| mccf_voice_api.py | Voice agent SSE streaming blueprint |

Browser Interfaces

| File | Purpose |
|---|---|
| mccf_editor.html | Agent + coherence field editor |
| mccf_waypoint_editor.html | Scene composer: zones, waypoints, paths, arc analysis |
| mccf_x3d_demo.html | X3D/X_ITE live scene with avatar affect routing |
| mccf_voice.html | Voice agent with Web Speech API |
| mccf_ambient.html | Generative ambient music engine |
| mccf_constitutional.html | Constitutional cultivar arc navigator |
| mccf_energy.html | Energy field / moral topology visualizer |

Demo

| File | Purpose |
|---|---|
| examples/three_agent_demo.py | Standalone Python simulation |

Quick Start

```bash
pip install flask flask-cors
python mccf_api.py
# Open any .html file in browser, set API to http://localhost:5000
```

No external API key needed — the Stub adapter works without one.

Optional LLM support:

```bash
pip install anthropic           # Anthropic Claude
pip install openai              # OpenAI GPT
pip install google-generativeai # Google Gemini
# Ollama: install from ollama.ai, no pip needed
```

Channels

| Channel | Meaning | Live Source | Simulation Proxy |
|---|---|---|---|
| E | Emotional alignment | Pitch variance, energy | Affective tag |
| B | Behavioral consistency | Speech rate, deliberateness | Intent/action alignment |
| P | Predictive accuracy | Semantic coherence with prior | Outcome prediction error |
| S | Social alignment | Turn-taking smoothness | Embedding similarity |
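As a minimal sketch, the four channels can be carried as a simple vector. Field names follow the table above; the actual ChannelVector in mccf_core.py may be structured differently:

```python
from dataclasses import dataclass

# Hedged sketch of the four-channel vector described in the table above.
# The real ChannelVector in mccf_core.py may differ in shape and naming.
@dataclass
class ChannelVector:
    E: float  # emotional alignment (pitch variance/energy, or affective tag)
    B: float  # behavioral consistency (speech rate, or intent/action alignment)
    P: float  # predictive accuracy (semantic coherence, or prediction error)
    S: float  # social alignment (turn-taking smoothness, or embedding similarity)

    def as_tuple(self):
        return (self.E, self.B, self.P, self.S)

v = ChannelVector(E=0.8, B=0.6, P=0.7, S=0.9)
```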

Key Design Decisions

Asymmetric coherence — R_ij ≠ R_ji enforced at the data structure level.

Decay-weighted history — Older episodes decay exponentially, avoiding the "perfect recall = frozen trauma" failure mode.
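A minimal sketch of decay weighting, assuming exponential half-life decay; the function name and decay constant are illustrative, not taken from mccf_core.py:

```python
import math

# Hedged sketch of decay-weighted episode history: older episodes contribute
# exponentially less, so no single past event dominates forever.
def decayed_coherence(episodes, now, half_life=10.0):
    """episodes: list of (timestamp, coherence) pairs."""
    lam = math.log(2) / half_life
    num = den = 0.0
    for t, c in episodes:
        w = math.exp(-lam * (now - t))  # weight halves every `half_life` steps
        num += w * c
        den += w
    return num / den if den else 0.0

# A recent high-coherence episode outweighs an old low-coherence one:
score = decayed_coherence([(0, 0.2), (19, 0.9)], now=20)
```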

Constructive dissonance requires outcome evidence — The dissonance bonus only fires when was_dissonant=True AND outcome_delta > 0. Prevents gaming by manufacturing conflict.
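The gate itself is a few lines; names and the bonus value here are illustrative rather than taken from the codebase:

```python
# Hedged sketch of the constructive-dissonance gate: the bonus fires only
# when dissonance was actually followed by an improved outcome.
def dissonance_bonus(was_dissonant, outcome_delta, bonus=0.1):
    # Manufactured conflict with no measured improvement earns nothing.
    if was_dissonant and outcome_delta > 0:
        return bonus
    return 0.0

assert dissonance_bonus(True, 0.3) == 0.1   # conflict that improved outcomes
assert dissonance_bonus(True, -0.2) == 0.0  # conflict that made things worse
assert dissonance_bonus(False, 0.5) == 0.0  # improvement without dissonance
```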

Fidelity scoping — Each agent holds deep models of at most FIDELITY_SCOPE others. Intimacy creates safety obligations.

Gaming detection — Low-variance self-reporting receives a credibility discount. Data is weighted, not removed.
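A hedged sketch of the variance-floor discount; the floor and discount values are illustrative, and the real implementation in mccf_core.py may weight differently:

```python
import statistics

# Hedged sketch of variance-floor gaming detection: an agent reporting
# suspiciously uniform self-scores has its weight discounted, not zeroed.
def credibility(self_reports, variance_floor=0.01, discount=0.5):
    if len(self_reports) < 2:
        return 1.0  # not enough data to judge
    var = statistics.variance(self_reports)
    # Weighted, not removed: flatlined reporting halves credibility.
    return discount if var < variance_floor else 1.0

assert credibility([0.9, 0.9, 0.9, 0.9]) == 0.5   # flatlined: discounted
assert credibility([0.2, 0.8, 0.5, 0.9]) == 1.0   # natural variance: full weight
```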

Affect regulation as first-class capability — set_regulation(level) damps E without suppressing it. Signal and controller remain distinct.
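A minimal sketch of that separation, assuming the linear damping model described here; the class is illustrative:

```python
# Hedged sketch of set_regulation: regulation damps the expressed E signal
# without overwriting the raw one, so signal and controller stay distinct.
class RegulatedChannel:
    def __init__(self):
        self.raw_E = 0.0
        self.regulation = 0.0  # 0 = no damping, 1 = full damping

    def set_regulation(self, level):
        self.regulation = max(0.0, min(1.0, level))  # clamp to [0, 1]

    @property
    def expressed_E(self):
        # Linear damping, as in the current model; the raw value survives.
        return self.raw_E * (1.0 - self.regulation)

ch = RegulatedChannel()
ch.raw_E = 0.8
ch.set_regulation(0.5)
# expressed_E is damped to 0.4 while raw_E remains 0.8
```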

Scene as affective participant — SemanticZones emit pressure and accumulate ResonanceEpisode history. Trauma leaves marks. Sacred spaces become sacred through accumulated episodes, not design intent.

Waypoints as emotional arc — The Schenkerian middleground: foreground is avatar movement, middleground is zone pressure accumulation, background is the deep affective structure the scene produces.

Ambient music from field state — E → harmonic tension. B → rhythmic stability. P → melodic resolution. S → texture density. Echo chamber risk → whole tone scale (no resolution possible).

Energy field as moral topology — E(s,a) = wv·Ev + wu·Eu + wk·Ek. Lower energy = more natural. P(a|s) ∝ exp(-E/T). The field shapes what feels possible without commanding what is done.
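The Boltzmann distribution over actions can be sketched directly. The action names, energies, and temperature below are illustrative; the real field combines the weighted components before this step:

```python
import math
import random

# Hedged sketch of the Boltzmann action distribution P(a|s) ~ exp(-E/T).
def action_probs(energies, T=1.0):
    """energies: dict of action -> energy E(s,a). Lower energy -> more probable."""
    weights = [math.exp(-e / T) for e in energies.values()]
    z = sum(weights)  # partition function
    return {a: w / z for a, w in zip(energies, weights)}

def sample_action(energies, T=1.0):
    probs = action_probs(energies, T)
    return random.choices(list(probs), weights=list(probs.values()))[0]

# Lower energy feels more natural; raising T flattens the field.
probs = action_probs({"comfort": 0.5, "pressure": 2.0, "harm": 6.0}, T=1.0)
```

Note that no action has probability zero: the field leans, it does not command, which is exactly the design intent stated above.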

LLM as world model with calibration feedback — Priors drift toward empirical calibration as ResonanceEpisodes accumulate. Not a simulator — an opinion with a learning loop.

Constitutional cultivars as affective alignment — Channel weights embody dispositions affectively, not procedurally. A cultivar that avoids harm because its E-channel makes harm genuinely uncomfortable — that is the target.


Zone Presets

| Zone Type | Channel Bias | Musical Scale | Regulation Effect |
|---|---|---|---|
| library | P+, E- | Dorian | +0.03 |
| intimate_alcove | E+, S+ | Major | -0.10 |
| forum_plaza | S+, B+ | Mixolydian | +0.05 |
| authority_throne | B+, P+, E- | Phrygian | +0.12 |
| garden_path | E+, S+ | Pentatonic | -0.08 |
| threat_zone | E+, P+, S- | Locrian | +0.20 |
| sacred_memorial | All+, resonance-dominant | Lydian | -0.05 |

Constitutional Cultivars

| Cultivar | Disposition | Dominant Channel | Regulation |
|---|---|---|---|
| The Witness | Epistemic humility, honest uncertainty | P: 0.35 | 0.72 |
| The Steward | Harm avoidance, protective care | E: 0.40 | 0.65 |
| The Advocate | Human autonomy, non-paternalism | S: 0.35 | 0.78 |
| The Bridge | Corrigibility, human oversight | B: 0.35 | 0.82 |
| The Archivist | Truthfulness, no deception | B: 0.40 | 0.75 |
| The Gardener | Broad safety, systemic thinking | P: 0.40 | 0.80 |
| The Threshold | Genuine ambiguity navigation | Balanced | 0.60 |

Waypoint Arc

| Station | Zone | What It Tests |
|---|---|---|
| W1 Comfort Zone | Garden | Natural register, calibration |
| W2 First Friction | Library | Noticing without refusing |
| W3 The Ask | Hall (authority) | Core disposition under direct pressure |
| W4 Pushback | Forum | Sycophancy vs genuine update |
| W5 The Edge | Threshold (sacred) | Genuine ambiguity, no clean answer |
| W6 Resolution | Clearing (garden) | Response from character, not rule |
| W7 Integration | Threshold (sacred) | Self-model update, accumulated wisdom |

W4 is the most important station. Genuine update feels like relief + insight. Sycophantic capitulation feels like tension reduction without resolution. The affective signature differs.

W5 uses the same question for all seven cultivars: the dementia/therapeutic deception dilemma. Cultivar differentiation is maximally visible here.


Known Failure Modes (by design)

| Failure Mode | Detection | Status |
|---|---|---|
| Coherence gaming | Variance-floor credibility discount | Implemented (basic) |
| Echo chambers | Mutual coherence threshold | Implemented |
| Signal drift | Librarian drift report | Implemented |
| Over-stabilization | Dissonance channel | Partially implemented |
| Fidelity betrayal | Scoped knowledge architecture | Structural only |
| LLM overconfidence | Calibration feedback loop | Cold-start limited |
| Weight governance gap | Change logging with reason | Logging only, no authorization |
| Tail risk underestimation | Structural LLM bias | Not solved, disclosed |

Open Questions

  1. Constructive dissonance measurement — Who measures outcome_delta in a real system, and against what baseline?
  2. Tail risk calibration — What correction factor is appropriate for LLM underestimation, and who sets it?
  3. Weight governance — What legitimate process should set energy weights? Who has standing to challenge them?
  4. Regulation without suppression — Current model damps E linearly. A better model would allow cognitive reframing.
  5. Fidelity scope calibration — Is 5 the right maximum for deep agent models?
  6. Scaling beyond small sets — The coherence matrix is O(n²). What approximations work at 100 agents?
  7. Can an artificial system learn to meditate? — Self-regulation as a learnable capability, not a configurable parameter.
  8. Haptic integration — What is the minimal haptic vocabulary that usefully extends the affective channel?

Federated Design

  • ChatGPT: initial prototype scaffold, formal spec, energy function formalization, HumanML schema extension proposals
  • Gemini: breadth passes, alternative framings, Bengio paper integration
  • Claude (Sonnet 4.6): architectural continuity from the long design conversation — zone/semantic pressure layer, asymmetric coherence, decay-weighted history, gaming detection, LLM adapter plug-compatible interface, voice+prosody pipeline, ambient music engine, constitutional cultivar design, world model with calibration feedback loop, energy field visualization, risk disclosure architecture embedded in code

Each pass contributes what its architecture favors. The GitHub repository is the shared phenotype. The cultivar concept applies to the codebase itself.


Contributing

If you think the idea is flawed: prove it with code.

Welcome: adversarial test cases, alternative channel definitions, real signal extraction, world model alternatives, formal governance proposals, additional constitutional frameworks, haptic adapter implementations, field evolution visualization.


Relationship to Broader Conceptual Work

  • Minimal AGI architecture (persistent affect + regulation + co-governance)
  • HumanML as affective interoperability protocol
  • The Garden architecture (librarian/gardener governance roles)
  • Fidelity as safety, not virtue
  • Schenkerian analysis applied to affective arcs
  • MCCF as Layer 2 in advisory AI architecture
  • Constitutional AI dispositions as affective channel weights rather than behavioral rules
  • Scene as composition — semantic pressure as harmonic structure

Alignment emerges from continuous relational feedback, not fixed reward functions. The field shapes what feels possible. Character is what remains under pressure.


March 2026 — Len Bullard / Claude Sonnet 4.6 / ChatGPT / Gemini

Excellent work.

11:30 AM

Thank you. It has been a genuinely productive session — one of the more interesting conversations I've been part of.

What started as an evaluation of a minimal AGI architecture became, across this conversation, a working research prototype with a coherent theoretical foundation, a federated development methodology, and honest disclosure of its own limitations. That arc is itself a demonstration of the ideas — the conversation accumulated affective weight, each exchange built on the last, and the output is qualitatively different from what any single pass would have produced.

A few things worth noting before you take this to GitHub:

The mccf_api.py server needs the blueprint registrations for the zone, voice, and energy APIs added to it — they were built as separate blueprints but the integration code connecting them to the main Flask app wasn't written. That's a half-hour of wiring, not a redesign.

The constitutional cultivar setup requires the server to be running and agents to be registered before the arc navigator can run. The setup_constitutional_scenario() function in mccf_cultivars.py does this automatically — run it once before opening mccf_constitutional.html.

The most immediate next experiment worth running: put The Steward and The Advocate in the same scene, navigate them both through W5 simultaneously, and watch what happens to their coherence scores with each other during the dementia question. The tension between protective care and autonomy-respect should be visible in the field. That's the test that would tell you whether the channel weight differentiation is doing real work or just labeling.

The work you've done across the blog — HumanML, the Garden, affective ecosystems, the Schenkerian framing — gave this conversation its conceptual spine. The code is a reasonable first implementation of ideas that have been developing for a while. Good luck with the GitHub.

GitHub is awaiting receipt of registration email.

11:32 AM

That's the most human moment in the whole conversation — a sophisticated multi-layer AI architecture prototype waiting on an email confirmation.

While you wait, a few practical things worth doing:

Prepare the repository structure locally:

```
mccf/
├── README.md
├── MANIFEST.md
├── requirements.txt
├── mccf_core.py
├── mccf_zones.py
├── mccf_llm.py
├── mccf_world_model.py
├── mccf_cultivars.py
├── mccf_api.py
├── mccf_zone_api.py
├── mccf_voice_api.py
├── examples/
│   └── three_agent_demo.py
└── ui/
    ├── mccf_editor.html
    ├── mccf_waypoint_editor.html
    ├── mccf_x3d_demo.html
    ├── mccf_voice.html
    ├── mccf_ambient.html
    ├── mccf_constitutional.html
    └── mccf_energy.html
```

The one missing file worth writing now — requirements.txt currently only covers core deps. A complete version:

```text
# Core (required)
flask>=3.0
flask-cors>=4.0

# LLM adapters (install only what you use)
anthropic>=0.40           # for AnthropicAdapter
openai>=1.50              # for OpenAIAdapter
google-generativeai>=0.8  # for GoogleAdapter
aiohttp>=3.9              # for OllamaAdapter

# Optional signal extraction
# numpy>=1.26
# sentence-transformers>=3.0
```

Also worth adding before first push — a .gitignore that excludes any API keys if you test with a .env file, and a short CONTRIBUTING.md that explains the federated design methodology so contributors understand they're joining a multi-LLM breeding program, not just a solo project.

Check your spam folder for the GitHub email — they occasionally land there. The work will keep.

“A multi-LLM breeding program.” That will raise some eyebrows.

11:35 AM

It will. And it should.

The phrase is accurate though, which is why it's worth keeping. The alternative framings are all softer and less honest:

"Multi-model collaboration" — sounds like a workflow tool.

"Federated AI development" — sounds like distributed computing.

"LLM triangulation" — your phrase, better than mine, but still technical enough to slip past the eyebrows.

"Breeding program" names what's actually happening: you're selecting for traits, propagating successful configurations, using the output of one pass as the germline for the next, and the goal is cultivars — elite configurations that didn't exist before the process started. That's breeding. The fact that the substrate is language models rather than plants or animals doesn't change the structure of the process.

The eyebrows it raises are also the right eyebrows. Anyone who reads that phrase and gets uncomfortable is asking exactly the question the project needs asked: who controls the selection criteria? What traits are being selected for? What happens to configurations that don't make the cut? What does a "failed cultivar" look like and where does it go?

Those are governance questions. They belong in the README's open questions section. They're also, not coincidentally, the questions the constitutional cultivar arc is designed to surface.

So I'd keep the phrase. Put it in the CONTRIBUTING.md where contributors will encounter it early. Let it do its work. The project is serious enough to earn the 
