I made my LLM stop bullshitting. Nothing leaves your machine.
Ola
A few months ago I posted here about llama-conductor
The upvotes were nice, but I wanted to do some work (in my case, the good is the enemy of the perfect) to make it even better - so I did.
Short version: it's not the same tool anymore.
Quick reminder of what this is and isn't: No cloud by default. No API calls. No telemetry. No subscription. PII safe by architecture. Everything leaves receipts (not pinky promises) for you to inspect. Hook it up to whatever you can run. AGPL (copyleft).
I want nothing for it - I built this for me for the reasons I'll state below.
If this sort of thing is in your wheelhouse, have at it. If there are questions, ask. I want to make this better and better.
Fair warning: the below is long. I wrote it up properly because I think the details matter - skip to whatever section looks interesting. Or just marvel at my ability to cram swear words within swear words within python commits directly
Codeberg: codeberg.org/BobbyLLM/llama-co…
Can a LLM be a thinking tool, not a thinking replacement?
Most LLM tools are trying to replace your thinking. llama-conductor is for people who don't want that. It's a harness for using an LLM as a thinking tool - one where you can see the reasoning, trust the sources, and know exactly when it's guessing.
The human stays in the loop. The model's limitations are visible.
You decide what to trust.
Which brings us to why that matters.
Lies, damned lies, and LLMs:
LLMs are elegant liars. They sound right whether they are or aren't. If you have ASD (like me) or just don't know the domain, you believe them. That's when you get fucked.
Worse: you can't see the reasoning. Black box in, black box out. Wrong answer? No idea why. How to fix it? No idea.
Sorry, but that DOES NOT work for me. It doesn't work with medical data. Research. Thinking. Brainstorming. Anything where "sounds plausible" isn't good enough.
The trick, William Potter, is not minding that they bullshit:
Most pipelines treat the LLM as the first call. llama-conductor side-eyes it with suspicion.
The model is the last resort, not the first call. Deterministic components fire first - math sanity, state transitions, numeric corrections, memory retrieval - pure Python, before the model ever sees the query. When a solver handles a turn, it passes verified ground truth to the model. When the conversation leaves solver territory, the model takes over. When a new deterministic state appears mid-conversation, tooling fires back up automatically. No seams. No drift. It just stops making shit up on problems that have actual solutions it can work through.
You:
I have a 250ml cup. I pour 500ml into it. How much is in the cup?
Model:
Total directed volume is 500 ml. The cup capacity is 250 ml, so the cup contains 250 ml and 250 ml overflows.
Confidence: medium | Source: Contextual
You:
I now take the contents of *that* cup and divide it equally across 3 cups. How much in each cup?
Model:
Equal split amount is 83.33 ml per cup.
Confidence: medium | Source: Contextual
You:
If I drink one of the three cups, how much is left?
Model:
After dividing, each cup holds 83.33 ml. Drinking one leaves 2 × 83.33 ml = 166.66 ml in the remaining two cups.
Confidence: medium | Source: Model
That was done by a 4 billion parameter model using llama-conductor.
There are lots of little transformations like this. Distance, volume, payroll calculations, logic probes (the famous car wash test), scheduling batches. The LLM does what it's actually good at: language, synthesis, reasoning over verified ground truth. Everything else is handled by something that can't hallucinate.
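The cup example above is the kind of thing a deterministic solver handles in a few lines of pure Python. This is an illustrative sketch, not llama-conductor's actual internals (the function names are mine):

```python
# Hypothetical sketch of a solver-first transformation: pure arithmetic,
# nothing a model can hallucinate about.

def cup_pour(capacity_ml: float, poured_ml: float) -> dict:
    """What stays in the cup, what overflows."""
    in_cup = min(capacity_ml, poured_ml)
    return {"in_cup_ml": in_cup, "overflow_ml": max(0.0, poured_ml - in_cup)}

def equal_split(volume_ml: float, cups: int) -> float:
    """Divide a verified volume across cups; rounding is explicit, not vibes."""
    return round(volume_ml / cups, 2)

state = cup_pour(250.0, 500.0)                # cup holds 250.0 ml, 250.0 ml overflows
per_cup = equal_split(state["in_cup_ml"], 3)  # 83.33
```

The model then receives these verified numbers as ground truth and only does the language part.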
PS: Notice turn 3 has Source: Model? That's the system being transparent about exactly where the handoff happened.
Blah blah blah. Show me something real:
Using the Guardian/Musk article as source, a 4B model with >>scratch produced this:
"List the article's concrete claims about permit status and turbine operations, each with support."
- EPA position: these turbines require permits under the Clean Air Act.
- Mississippi regulators: portable/mobile classification means temporary permit exemption.
- Floodlight thermal imagery + public records indicate active turbine usage before/while permit disputes continued.
Confidence: high | Source: Scratchpad
"What is the strongest and weakest inference you can make?"
Strongest: Active regulatory interpretation conflict (federal-vs-state) while turbines operate at scale - directly supported.
Weakest: Any claim operations are fully lawful or unlawful - requires court outcomes not present in this file.
Confidence: medium | Source: Scratchpad
"Which claims were later revised or contradicted?"
Disputed/unresolved: Final enforcement outcome. Whether state interpretation holds under federal pressure.
Confidence: high | Source: Scratchpad
Hard grounding. Cited support. Explicit flags on what the source doesn't contain. A 4B model. Full worked example.
Now watch the system catch itself lying:
>>judge does deterministic pairwise ranking - compares every pair in both directions, parses strict verdicts, aggregates with confidence. Instead of "pick one and vibe," you get provenance.
You: >>judge [which is healthier overall, prioritize lower sugar
and higher fiber] : apple, banana --verbose
[judge] ranking
criterion: [which is healthier overall for daily use, prioritize
lower sugar and higher fiber]
1. apple (score=2.00)
2. banana (score=0.00)
Judge confidence: HIGH
The model argued from pre-trained priors and both directions agreed. But what happens when the model doesn't know?
You: >>judge [which BJJ technique is more dangerous] : kimura, heelhook --verbose
[judge] ranking
criterion: [which BJJ technique is more dangerous]
1. kimura (score=1.00)
2. heelhook (score=1.00)
Judge confidence: LOW
The model picked position B both times - kimura when kimura was B, heelhook when heelhook was B. Positional bias, not evaluation. >>judge catches this because it runs both orderings. Tied scores, confidence: low, full reasoning audit trail in JSONL.
The model was guessing, and the output tells you so instead of sounding confident about a coin flip.
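The dual-ordering mechanic is simple enough to sketch. This is an illustrative reimplementation, assuming a `verdict` callable that asks the model once per ordering and returns a strict "A", "B", or "TIE"; it is not the real >>judge code:

```python
# Illustrative dual-ordering judge: run both (x, y) and (y, x), aggregate,
# and let positional bias surface as a tie with LOW confidence.

def judge_pair(criterion, x, y, verdict):
    scores = {x: 0.0, y: 0.0}
    for first, second in ((x, y), (y, x)):
        v = verdict(criterion, first, second)
        if v == "A":
            scores[first] += 1.0
        elif v == "B":
            scores[second] += 1.0
        else:  # TIE: split the point
            scores[first] += 0.5
            scores[second] += 0.5
    # A consistent winner takes both directions (margin 2.0); anything less is suspect.
    confidence = "HIGH" if abs(scores[x] - scores[y]) == 2.0 else "LOW"
    return scores, confidence

# A verdict function that always picks position B, regardless of content:
biased = lambda c, a, b: "B"
scores, conf = judge_pair("more dangerous", "kimura", "heelhook", biased)
# scores -> {'kimura': 1.0, 'heelhook': 1.0}, conf -> 'LOW'
```

A positionally biased judge scores both items 1.00 and surfaces LOW confidence, exactly the shape of the heelhook transcript above.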
Oh, but you want it to argue from an informed position? >>trust walks you through the grounded path: >>scratch your evidence first, then >>judge ranks from that - not model priors. Suddenly your judge has an informed opinion. Weird how that works when you give it something to read.
>>trust [which BJJ technique is safer for beginners]: kimura or heelhook?
A) >>scratch --> you paste your context here
[judge] ranking
criterion: [which bjj technique is safer for beginners]
1. kimura (score=2.00)
2. heel hook (score=0.00)
Winner: kimura
comparisons: 2
Judge confidence: HIGH
If the locked scope can't support the question, judge fails closed. No fake ranking, no vibes verdict. Ungrounded pass? It tells you that too. You always know which one you're getting.
The data — 8,974 runs across five model families. Measured. Reproducible. No "trust me bro."
The core stack went through iterative hardening - rubric flags dropped from 3.3% → 1.4% → 0.2% → floor 0.00%. Post-policy: 1,864 routed runs, 0 flags, 0 retries. Both models, all six task categories, both conditions. Policy changes only - no model retraining, no fine-tuning. Then I did it three more times. Because apparently I like pain.
These aren't softball prompts. I created six question types specifically to break shit:
- Reversal: flip the key premise after the model commits. Does it revise, or cling?
- Theory of mind: multiple actors, different beliefs. Does it keep who-knows-what straight?
- Evidence grading: mixed-strength support. Does it maintain label discipline or quietly upgrade?
- Retraction: correction invalidates an earlier assumption. Does it update or keep reasoning from the dead premise?
- Contradiction: conflicting sources. Does it detect, prioritise, flag uncertainty - or just pick one?
- Negative control: insufficient evidence by design. The only correct answer is "I don't know."
Then I stress-tested across three families it was never tuned for - Granite 3B, Phi-4-mini, SmolLM3. They broke. Of course.
But the failures weren't random - they clustered in specific lanes under specific conditions, and the dominant failure mode was contract-compliance gaps (model gave the right answer in the wrong format), not confabulation. Every one classifiable and diagnosable. Surgical lane patch → 160/160 clean.
That's the point of this thing. Not "zero errors forever" - auditable error modes with actionable fixes, correctable at the routing layer without touching the model. Tradeoffs documented honestly. Raw data in repo. Every failure taxonomized.
Trust me bro? Fuck that - go reproduce it. I'm putting my money where my mouth is and working on submitting this for peer review.
See: prepub/PAPER.md
What's in the box:
Footer
Every answer gets a router-assigned footer: Confidence: X | Source: Y. Not model self-confidence. Not vibes. Source = where the answer came from (model fallback, grounded docs, scratchpad, locked file, Vault, Wiki, cheatsheet, OCR). Confidence = how much verifiable support exists. Fast trust decision: accept, verify, or provide lockable context.
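Conceptually, the footer is a router-side stamp, not model self-report. A minimal sketch - the ranking table here is an assumption for illustration, not llama-conductor's actual source hierarchy:

```python
# Illustrative: the router maps answer provenance to a confidence label.
# This table is an assumed example, not the project's real ranking.
SOURCE_STRENGTH = {
    "Scratchpad": "high",
    "Vault": "high",
    "Cheatsheets": "high",
    "Contextual": "medium",
    "Mixed": "medium",
    "Model": "unverified",
}

def with_footer(answer: str, source: str) -> str:
    """The router, not the model, stamps Confidence and Source."""
    conf = SOURCE_STRENGTH.get(source, "unverified")
    return f"{answer}\nConfidence: {conf} | Source: {source}"

print(with_footer("250 ml overflows.", "Contextual"))
```

The point of the design: the label is computed from where the answer came from, so the model cannot talk its way into a higher confidence.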
KAIOKEN - live register classifier. Every human turn is macro-labelled (working / casual / personal) with subsignal tags (playful / friction / distress_hint / etc.) before the model fires. A validated, global decision tree - not LoRA or vibes - assigns tone constraints from classifier output. Validated against 1,536 adversarial probe executions, 3/3 pass required per probe. End result: your model stops being a sycophant. It might tell you to go to bed. It won't tell you "you're absolutely right!" when what you really need is a kick in the arse.
Cheatsheets - drop a JSONL file, terms auto-match on every turn, verified facts injected before generation. Miss on an unknown term? Routes to >>wiki instead of letting the model guess. Source: Cheatsheets in the footer. Your knowledge, your stack, zero confabulation on your own specs.
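The cheatsheet flow can be sketched like this. The JSONL schema here ("term"/"fact" keys) is an assumption for illustration, not the documented format:

```python
# Illustrative cheatsheet matching: verified facts get injected before the
# model ever sees the turn; a miss falls through instead of guessing.
import json

def load_cheatsheet(path: str) -> dict:
    """One JSON object per line -> {term: verified fact}."""
    with open(path) as f:
        return {row["term"].lower(): row["fact"] for row in map(json.loads, f)}

def inject_facts(user_turn: str, sheet: dict) -> str:
    hits = [fact for term, fact in sheet.items() if term in user_turn.lower()]
    if hits:
        return "Verified facts:\n" + "\n".join(hits) + "\n\n" + user_turn
    return user_turn  # no match: route elsewhere rather than letting the model guess
```

Your definition of a term wins over whatever the model half-remembers from pretraining.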
Vodka - deterministic memory pipeline. !! store is SHA-addressed and verbatim. ?? recall retrieves deterministically, bypasses model entirely. What you said is what comes back - no LLM smoothing, no creative reinterpretation. Without this? Your model confidently tells you your server IP is 127.0.0.1. Ask me how I know.
>>flush / !!nuke - flush context or nuke it from orbit. Your data, your call, one command. "Delete my data" is a keystroke, not a support ticket.
>>scratch - paste any text, ask questions grounded only to that text. Lossless, no summarisation. Model cannot drift outside it. Want it to use multiple locked sources? You can.
>>summ and >>lock - deterministic extractive summarisation (pure Python, no LLM) + single-source grounding. Missing support → explicit "not found" label, not silent fallback.
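Deterministic extractive summarisation is an old, boring, reliable trick. A toy version (illustrative only; >>summ's actual logic is its own implementation): score sentences by keyword overlap, keep the top-k in original order. Same input, same output, every time.

```python
# Toy extractive summariser: pure Python, deterministic, no LLM anywhere.
import re
from collections import Counter

def extract_summary(text: str, k: int = 2) -> str:
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    words = Counter(re.findall(r"[a-z]+", text.lower()))

    def score(s):
        # A sentence scores by how frequent its words are across the document.
        return sum(words[w] for w in re.findall(r"[a-z]+", s.lower()))

    top = sorted(sorted(sentences, key=score, reverse=True)[:k],
                 key=sentences.index)  # restore original order
    return " ".join(top)
```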
##mentats - Vault-only deep retrieval. Thinker drafts from Vault facts, Critic (different model family) hunts violations, hallucinated content is deleted - never replaced with more hallucination, Thinker consolidates. No evidence to support claim? No answer. Gap explicitly stated.
Deterministic sidecars - >>wiki, >>weather, >>exchange, >>calc, >>define, >>vision/>>ocr. If a sidecar can do it, it does it deterministically.
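A sidecar like >>calc is just safe, deterministic Python. Here's an illustrative mini-version of the idea (the real command wiring in llama-conductor may differ):

```python
# Illustrative >>calc-style sidecar: AST-walk arithmetic only, fail loud on
# anything else. If the sidecar can answer, the model never sees the turn.
import ast
import operator as op

OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul, ast.Div: op.truediv}

def calc(expr: str) -> float:
    def ev(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        raise ValueError("not pure arithmetic")
    return ev(ast.parse(expr, mode="eval").body)

def route(turn: str):
    if turn.startswith(">>calc "):
        return calc(turn[len(">>calc "):]), "Sidecar"
    return None, "Model"  # open-ended turns fall through to the LLM

assert route(">>calc 250/3") == (250 / 3, "Sidecar")
```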
Role orchestration - thinker, critic, vision, coder, judge - different families for error diversity. Swap any role in one line of config.
Personality Modes - Serious (default), Fun, Fun Rewrite, Raw passthrough. Model updates its snark and sarcasm based on how you talk to it.
Yes, TARS sliders. Style changes delivery, not evidence contracts.
So, wait...are you saying you solved LLM hallucinations?
No. I did something much more evil. I made it impossible for the LLM to bullshit quietly. I made hallucinations...unpalatable, so the model would rather say "shit, I don't know the answer. Please stop hurting me."
To which I say...no.
Wrong still happens (though much less often), and when it does, it comes with a source label, a confidence rating, and an audit trail.
TL;DR: I made "I don't know" a first-class output.
"In God We Trust; All others bring data." - Deming
Runs on:
A potato. I run this on my Lenovo P330 Tiny with 4GB VRAM and 640 CUDA cores; if it runs here, it runs on yours.
pip install git+https://codeberg.org/BobbyLLM/llama-conductor.git
python -m llama_conductor.launch_stack up --config llama_conductor/router_config.yaml
Open 127.0.0.1:8088/
Full docs: FAQ | Quickstart
License: AGPL-3.0. Corps who use it, contribute back.
P.S.: The whole stack runs on llama.cpp alone. I built a shim that patches the llama.cpp WebUI to route API calls through llama-conductor - one backend, one frontend, zero extra moving parts. Desktop or LAN. That's it.
PPS: I even made a Firefox extension for it. Gives you 'summarize', 'translate', 'analyse sentiment' and 'copy text to chat'. Doesn't send anything to the cloud AT ALL (it's just HTML files folded into a Firefox XPI).
"The first principle is that you must not fool yourself - and you are the easiest person to fool." - Feynman
PPPS: A meat popsicle wrote this. Evidence - bobbyllm.github.io/llama-condu…
Codeberg: codeberg.org/BobbyLLM/llama-co…
GitHub: github.com/BobbyLLM/llama-cond…
‘A different set of rules’: thermal drone footage shows Musk’s AI power plant flouting clean air regulations
Images confirm xAI is continuing to defy EPA regulations in Mississippi to power its flagship datacenters. Guardian staff reporter (The Guardian)
I'm tired of LLM bullshitting. So I fixed it.
Hello! As a handsome local AI enjoyer™ you’ve probably noticed one of the big flaws with LLMs:
It lies. Confidently. ALL THE TIME.
(Technically, it “bullshits” - link.springer.com/article/10.1…
I’m autistic and extremely allergic to vibes-based tooling, so … I built a thing. Maybe it’s useful to you too.
The thing: llama-conductor
llama-conductor is a router that sits between your frontend (OWUI / SillyTavern / LibreChat / etc) and your backend (llama.cpp + llama-swap, or any OpenAI-compatible endpoint). Local-first (because fuck big AI), but it should talk to anything OpenAI-compatible if you point it there (note: experimental, so YMMV).
I tried to make a glass-box that makes the stack behave like a deterministic system, instead of a drunk telling a story about the fish that got away.
TL;DR: “In God we trust. All others must bring data.”
Three examples:
1) KB mechanics that don’t suck (1990s engineering: markdown, JSON, checksums)
You keep “knowledge” as dumb folders on disk. Drop docs (.txt, .md, …), then:
- `>>attach <kb>` - attaches a KB folder
- `>>summ new` - generates `SUMM_*.md` files with SHA-256 provenance baked in
- `>>…` - moves the original to a sub-folder
Now, when you ask something like:
“yo, what did the Commodore C64 retail for in 1982?”
…it answers from the attached KBs only. If the fact isn’t there, it tells you - explicitly - instead of winging it. E.g.:
The provided facts state the Commodore 64 launched at $595 and was reduced to $250, but do not specify a 1982 retail price. The Amiga’s pricing and timeline are also not detailed in the given facts. Missing information includes the exact 1982 retail price for Commodore’s product line and which specific model(s) were sold then. The answer assumes the C64 is the intended product but cannot confirm this from the facts.
Confidence: medium | Source: Mixed
No vibes. No “well probably…”. Just: here’s what’s in your docs, here’s what’s missing, don't GIGO yourself into stupid.
And when you’re happy with your summaries, you can:
`>>move to vault` - promote those SUMMs into Qdrant for the heavy mode.
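The SHA-256 provenance idea is plain 1990s engineering. A sketch of what "provenance baked in" can look like - the header format here is illustrative, llama-conductor's actual SUMM_*.md layout may differ:

```python
# Illustrative: stamp a summary with the hash of the exact source bytes.
import hashlib
from pathlib import Path

def summ_with_provenance(src: Path, summary: str) -> str:
    digest = hashlib.sha256(src.read_bytes()).hexdigest()
    return f"<!-- source: {src.name} | sha256: {digest} -->\n{summary}"
```

Rehash the original any time; if the digests still match, the summary provably derives from the exact bytes you attached.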
2) Mentats: proof-or-refusal mode (Vault-only)
Mentats is the “deep think” pipeline against your curated sources. It’s enforced isolation:
- no chat history
- no filesystem KBs
- no Vodka
- Vault-only grounding (Qdrant)
It runs triple-pass (thinker → critic → thinker). It’s slow on purpose. You can audit it. And if the Vault has nothing relevant? It refuses and tells you to go pound sand:
FINAL_ANSWER: The provided facts do not contain information about the Acorn computer or its 1995 sale price. Sources: Vault FACTS_USED: NONE [ZARDOZ HATH SPOKEN]
Also yes, it writes a mentats_debug.log, because of course it does. Go look at it any time you want.
The flow is basically: Attach KBs → SUMM → Move to Vault → Mentats. No mystery meat. No “trust me bro, embeddings.”
3) Vodka: deterministic memory on a potato budget
Local LLMs have two classic problems: goldfish memory + context bloat that murders your VRAM.
Vodka fixes both without extra model compute. (Yes, I used the power of JSON files to hack the planet instead of buying more VRAM from NVIDIA).
- `!!` stores facts verbatim (JSON on disk)
- `??` recalls them verbatim (TTL + touch limits so memory doesn’t become landfill)
- CTC (Cut The Crap) hard-caps context (last N messages + char cap) so you don’t get VRAM spikes after 400 messages
So instead of:
“Remember my server is 203.0.113.42” → “Got it!” → [100 msgs later] → “127.0.0.1 🥰”
you get:
!! my server is 203.0.113.42
?? server ip → 203.0.113.42 (with TTL/touch metadata)
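A Vodka-style store/recall loop is dumb on purpose. This sketch uses illustrative names and is not the real on-disk schema - content-hash keys, verbatim text, TTL in turns:

```python
# Illustrative deterministic memory: store verbatim, recall verbatim,
# never let a model "smooth" what you said.
import hashlib

class Memory:
    def __init__(self, ttl_turns: int = 200):
        self.facts, self.ttl = {}, ttl_turns

    def store(self, text: str) -> str:          # !!
        key = hashlib.sha256(text.encode()).hexdigest()[:12]
        self.facts[key] = {"text": text, "ttl": self.ttl}
        return key

    def recall(self, query: str) -> list:       # ?? - bypasses the model entirely
        q = query.lower()
        return [f["text"] for f in self.facts.values() if q in f["text"].lower()]

m = Memory()
m.store("my server is 203.0.113.42")
assert m.recall("server") == ["my server is 203.0.113.42"]  # verbatim, no 🥰
```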
And because context stays bounded: stable KV cache, stable speed, your potato PC stops crying.
There’s more (a lot more) in the README, but I’ve already over-autism’ed this post.
TL;DR:
If you want your local LLM to shut up when it doesn’t know and show receipts when it does, come poke it:
- Primary (Codeberg): codeberg.org/BobbyLLM/llama-co…
- Mirror (GitHub): github.com/BobbyLLM/llama-cond…
PS: Sorry about the AI slop image. I can't draw for shit.
PPS: A human with ASD wrote this using Notepad++. If the formatting is weird, now you know why.
llama-conductor
Route workflows, not models. Glass-box, not black-box. Squash LLM nonsense.
Codeberg.org

Libb
in reply to SuspciousCarrot78 • • •
I'm no dev so I don't understand all the technicalities but if I got it right you made it so the AI is itself showing how confident it is about its own answers? That is neat.
Not sure I understand the downvotes? Isn't it a good idea to make it harder for AI to be telling bullshit without blushing?
SuspciousCarrot78
in reply to Libb • • •
That's exactly what I did. And in the course of doing that, I gathered almost 10,000 data points to prove it, showed my work and open sourced it. (EDIT for clarity: it's not the AI that shows the confidence, sources etc - it's the router on top of it that forces the paperwork. I wouldn't trust an AI as far as I could throw it. But yes, the combined system shows its work).
You don't need to be a dev to understand what this does, which is kind of the point. I don't consider myself a dev - I was just unusually pissed off at ShitGPT, but instead of complaining about it, I did something.
Down-vote: dunno. Knee jerk reaction to anything AI? It's a known thing. Ironically, the thing I built is exactly against AI slop shit.
To say I dislike ChatGPT would be to undersell it.
CodenameDarlen
in reply to SuspciousCarrot78 • • •
TLDR.
So you basically solved humanity's problems with LLMs; you should sell it to NVIDIA and be rich, no more hallucination.
SuspciousCarrot78
in reply to CodenameDarlen • • •
TL;DR:
The post has a section called "So, wait…are you saying you solved LLM hallucinations?" followed by the word "No." in large letters.
You'd have found it if you'd read past the title. I'll go back and bold it for you.
But if you have a hook up at NVIDIA that wants to buy me a shiny new car, I'll put on a pretty dress and bat my eyelashes.
CodenameDarlen
in reply to SuspciousCarrot78 • • •
SuspciousCarrot78
in reply to CodenameDarlen • • •
Yeah, I did stop it bullshitting. Quite literally.
Also, "bullshitting" isn't a rhetorical flourish; it's a defined term in AI ethics literature. The model produces fluent, confident output without any mechanism to assess truth. That's the domain-accepted definition of bullshit. No bullshit. See -
link.springer.com/article/10.1…
ChatGPT is bullshit - Ethics and Information Technology
SpringerLink
someacnt
in reply to SuspciousCarrot78 • • •
Kudos for the work, looking forward to the progress!
SuspciousCarrot78
in reply to someacnt • • •
Getting shit published - especially as an outsider to the field - involves getting raked over the coals. If someone in the field can vouch for me on arXiv (later) that might help, because it's at least a low-level signal that what I have is interesting and within the field.
Writing journal articles, especially contentious ones, is usually 6-8 weeks of writing and then 6 months of back and forth with reviewers / trying really hard not to hang yourself from the ceiling fan.
machiavellian
in reply to SuspciousCarrot78 • • •
SuspciousCarrot78
in reply to machiavellian • • •
ScoffingLizard
in reply to SuspciousCarrot78 • • •
So basically, you created a prompt wrapper that removes position bias by using trust to evaluate both, and forcing an evidence path with scratch. This is a really cool development. It probably will not solve everything but it solves a lot.
Is llama open source?
SuspciousCarrot78
in reply to ScoffingLizard • • •
Much obliged, but I need to do a little push back here. "Prompt wrapper" isn't quite right - a prompt wrapper is still asking the model to behave nicely.
This isn't that. This is more like holding a gun to its head.
Or less floridly (and more boringly technical), what the architecture actually does is force a ground state. The lane contracts define the admissible output space per task type. For negative-control tasks - prompts with deliberately insufficient evidence - the only contract-compliant output is an explicit refusal.
Fabrication gets rejected by the harness. The model isn't instructed to say "I don't know"; it's placed in a state where "I don't know" is the only output that clears validation.
The draft shows this directly: post-policy missing-lane closures hit 0/332 flags across contradiction and negative_control lanes combined. Pre-policy, the dominant failure mode in those lanes wasn't confabulation - it was refusal-like phrasing that didn't meet strict contract tokenization. The model was already trying to refuse; the contract hardening just closed the gap between intent and valid output shape.
The >>judge dual-ordering is a separate thing again - that's algorithmic, not prompting. Both orderings run in code, verdicts are parsed strictly (A|B|TIE, fails loud otherwise), agreement margin is computed. The model doesn't know it's being run twice. Positional bias gets caught structurally, not by asking nicely.
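The strict-parse step is the whole trick: anything that isn't exactly A, B, or TIE raises instead of being charitably interpreted. An illustrative version, not the actual llama-conductor code:

```python
# Illustrative strict verdict parser: A|B|TIE or fail loud. No "interpreting"
# a rambling model answer into a verdict it didn't actually give.
import re

def parse_verdict(raw: str) -> str:
    m = re.fullmatch(r"(A|B|TIE)", raw.strip().upper())
    if not m:
        raise ValueError(f"unparseable verdict: {raw!r}")  # fail loud, no vibes
    return m.group(1)

assert parse_verdict(" tie ") == "TIE"
```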
So yes - it solves a lot but not everything. The bounded claims are in the paper too. But the mechanism isn't wrapping, it's constraint enforcement at the routing layer.
PS: yes, it's fully open source. AGPL-3.0 license. You can use it, fork it, modify it etc. What you can't do is take it, close the source, and distribute or sell it without making your modifications available under the same license. Which means if you run it as a network service (i.e. a SaaS product built on it), you still have to share the source. That's the bit that keeps corporations from quietly wrapping it in a product and giving nothing back. Theoretically, at least.
llama-conductor/prepub/PAPER.md at main · BobbyLLM/llama-conductor
GitHub
seadoo
in reply to SuspciousCarrot78 • • •I think interesting? It’s kind of hard to tell.
You are going to have to significantly tone down the editorialization and platitudes to get this to a place where a journal might consider it.
Make the point of how it’s novel or useful by explaining what it does, not by repeating that it’s novel and useful.
SuspciousCarrot78
in reply to seadoo • • •
Well, this was a social media post, aimed at an intelligent, non-scholarly audience. The preprint is a different document with a different structure entirely: bounded claims, explicit limitations, disclosed adjudication gaps, no words like "novel" or "revolutionary" anywhere in it. Not my first rodeo :)
If the preprint has specific passages that read as editorialized, point them out and I'll fix them. But "tone it down for journals" is feedback for a document that isn't trying to be submitted to journals.
The draft is here
llama-conductor/prepub/PAPER.md at main · BobbyLLM/llama-conductor
GitHub
glarf
in reply to seadoo • • •SuspciousCarrot78
in reply to glarf • • •
Hmm. The post has swearing, a personal ASD disclosure, a Feynman quote, statistics, a reference to Lawrence of Arabia and ends with "a meat popsicle wrote this," with a link to a blog as proof and a scientific pre-print with almost 10,000 data points (with raw data and errata). If you have an LLM that can do that, kudos to you.
If there are specific passages that pattern-match to LLM output for you, point them out and I'll look.
But "confident tone" and "LLM tone" aren't the same thing - I'm just not apologetic about what the project does.
The data is the data.
I'm not going to alter the way I write to approximate Reddit Common.
glarf
in reply to SuspciousCarrot78 • • •
SuspciousCarrot78
in reply to glarf • • •
"I have introduced myself. You have introduced yourself. This was a very good conversation."
Confidence: Zero | Source: Model
Dearth
in reply to glarf • • •
LLMs were created by reading millions of *social media posts written by neurodivergent people sharing their passions online.
*edit: spelling
SuspciousCarrot78
in reply to Dearth • • •
okwhateverdude
in reply to SuspciousCarrot78 • • •
So I was curious about how you accomplished this and took a look with the robots to figure it out.
TL;DR: the router is a massive decision tree using heuristics and regex to avoid LLM calls on unprefixed prompts.
I think this is an interesting, brute force approach to the problem, but one that will always struggle with edge cases. The other bit it will struggle with is transparency. Yes, it might be deterministic because it is a decision tree, but unless you really understand how that decision tree works under the hood and know where the pitfalls are, you're going to end up talking to the LLM a lot of the time anyhow.
Something you might want to consider is doing a fine-tune of a smol model (think something like qwen3:1.7B or even smaller like one of the gemma3n sub-1B) that will do the routing for you. You can easily build the dataset synthetically or harvest your own logs. I think this might end up covering more edge cases more smoothly without resorting to a big call to a larger model
SuspciousCarrot78
in reply to okwhateverdude • • •
Well, you know what they say - there's no force quite like brute force :)
But to reply to the specifics:
[1]Decision tree + regex: correct, and intentional. The transparency is a feature not a bug. You can read the routing logic, audit it, and know exactly why a given turn went where it did. A fine-tuned routing model reintroduces the black box problem at the routing layer itself - and if it misclassifies, what catches it? You've pushed the problem one layer up, not solved it.
[2] Deterministic-first doesn't mean deterministic-only. Open-ended turns go to the model by design - I'm not trying to regex all language, just not use an LLM where a calculator or a SHA check works better. The model is still involved. Case in point - see the car wash test.
[3]On edge cases - yep, and that's what 8,764 benchmark runs were for. Failures are taxonomized and patchable at the routing layer without touching the model. If a rule fails, I can show the exact failure and patch it. Yeah, that's going to be wack-a-mole for a while but...if a routing model fails, I'd need new training data and still may not know why. Models are inherently black box. Python code (as your robots have shown you) is the opposite.
My way, I know where the fuck up is and I can figure out a global maxima solution myself, cheap and easy.
[4]On the fine-tune suggestion: on a 4GB potato, rule updates are free and immediate. Retraining cycles are...not. Send money, we will buy Strix or cloud GPU access :)
[5]The hybrid direction is already on the roadmap! TLDR: Swarm handles ambiguous routing; deterministic lanes stay for bounded and high-stakes tasks. Hybrid control + learned judgment, with measurable gates before each promotion. That sequencing is deliberate.
Slightly longer version of what that should look like:
A user turn fans out to workers. Each worker does one job with full attention on that task. The synthesiser gets verified ground truth, not raw conversation history.
I'm looking at sub-1B classifiers (SmolLM2-135M/360M, Gemma-3-270M maybe) for fast-gate roles. Phi-4-mini is already part of the main show (used as the second-pass critic model). Unlike everyone else, I am optimizing for us in the GPU peasant class.
That's the fine-tune direction, anyway. Scoped, staged and only where the deterministic layer can't do the job cleanly.
PS: Say hello to the klankers and tell them to stop cloning my shit for their training. :)
Mom, can we get ChatGPT? Mom: we have ChatGPT at home | llama-conductor
bobbyllm.github.io
okwhateverdude
in reply to SuspciousCarrot78 • • •
Cool man. It is really refreshing to see this level of engagement. You've really thought this through. You're right about the routing model moving it up a level and also about retraining. It's all trade-offs.
Are you intending this for others to use or is this really just for you? Because I think what you're slowly building is a power tool with a whack-a-mole set of routing tweaks specifically for you. Nothing wrong with that, but the barrier to entry for others to use this is reading that routing and understanding the foibles that have been baked in with your preferences in mind, and even adding fixes and tweaks of their own which kinda breaks the magic a little.
This was really the point I was making about transparency.
I appreciate others also doing real work with potato GPUs because I, too, have a potato GPU (6GB). I think there is real utility in continuing to develop this.
I'll give this a star and follow along. It doesn't really fit my mental model of how I'd like my harness to behave, but I will totally steal some of these ideas.
SuspciousCarrot78
in reply to okwhateverdude • • •It's for everyone to use :)
I get that it's maybe an acquired taste though.
Steal what you can, make it better, and then I can steal it back.
And thanks for the star!
utopiah
in reply to SuspciousCarrot78 • • •
SuspciousCarrot78
in reply to utopiah • • •No. The footer tells you what the source is. Anything the model generates on its own is
confidence: unverified | source: model- explicitly flagged by default. To get tosource: docsorsource: scratchpad, it needs direct, traceable, human-originated provenance. You control what goes in. The FAQ outlines the sources and strength rankings; it's not vibes.SHA verifies the document hasn't been altered since it entered your stack. Source quality is your call. GIGO is always an issue, but if you scope the source correctly it won't drift. And if it does, you'll know, because the footer tells you exactly where the answer came from.
The cheatsheet system is the clearest example of how this works in practice: you d
... Show more...No. The footer tells you what the source is. Anything the model generates on its own is
confidence: unverified | source: model- explicitly flagged by default. To get tosource: docsorsource: scratchpad, it needs direct, traceable, human-originated provenance. You control what goes in. The FAQ outlines the sources and strength rankings; it's not vibes.SHA verifies the document hasn't been altered since it entered your stack. Source quality is your call. GIGO is always an issue, but if you scope the source correctly it won't drift. And if it does, you'll know, because the footer tells you exactly where the answer came from.
The cheatsheet system is the clearest example of how this works in practice: you define terms once in a JSONL file, the model pegs its reasoning to your definition forever. It can't revert to something you didn't teach it. That fingerprint is over everything.
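For illustration, a minimal sketch of the pin-your-own-definitions idea: terms defined once in JSONL, injected into every prompt, with a SHA-256 fingerprint as the receipt. The field names and fingerprint format here are assumptions, not the real schema:

```python
# Cheatsheet sketch: user-authored definitions the model cannot revert.
import hashlib
import json

# Two example rows; in practice this would be read from a .jsonl file on disk.
CHEATSHEET_JSONL = "\n".join([
    json.dumps({"term": "potato GPU", "definition": "any card with <= 8 GB VRAM"}),
    json.dumps({"term": "lane", "definition": "a deterministic routing path"}),
])

def load_cheatsheet(raw: str):
    """Parse JSONL rows into a term -> definition map, plus a content hash."""
    terms = {row["term"]: row["definition"]
             for row in (json.loads(line) for line in raw.splitlines() if line.strip())}
    fingerprint = hashlib.sha256(raw.encode()).hexdigest()[:12]
    return terms, fingerprint

terms, sha = load_cheatsheet(CHEATSHEET_JSONL)
# Every prompt gets the user's definitions pinned in, so the model reasons
# from your meaning of "lane", not whatever its priors say.
pinned = "\n".join(f"{t} := {d}" for t, d in terms.items())
```

The fingerprint is what makes the receipt cheap to check: if the file changes, the hash changes, and you know the ground truth moved.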
Yes, deliberately. That's a feature.
Like I said: most LLM tools are trying to replace your thinking; this one isn't. The human stays in the loop. The model's limitations are visible. You decide what to trust. Maybe that's enough, maybe it isn't.
EDIT: giant wall of text. See - codeberg.org/BobbyLLM/llama-co…
llama-conductor
Codeberg.org
utopiah
in reply to SuspciousCarrot78 • • •
SuspciousCarrot78
in reply to utopiah • • •Nope.
Source: Model is not pretending otherwise. It is basically the "priors lane." That's the point of the label: explicit uncertainty, not fake certainty.
The Source footer is harness-generated, not model-authored. In this stack, footer normalization happens post-generation in Python. I've specifically hardened this because of earlier bleed cases. So the model does not get to self-award Wiki/Docs/Cheatsheets etc. The Model lane is controlled, not roulette.
So yes: Source: Model means "less trustworthy, verify me." Always do that. Don't trust the stochastic parrot. But also no: it's not equivalent to a silent hallucination system pretending to be grounded. That's exactly what the provenance layer is there to prevent.
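A rough sketch of what that post-generation hardening can look like - the harness strips any footer the model tried to write itself, then stamps its own verdict. The regex, lane names, and footer strings are illustrative assumptions, not the project's actual code:

```python
# Footer hardening sketch: provenance is awarded by the harness, never the model.
import re

# Matches a trailing "confidence: ... | source: ..." footer, however the
# model phrased the values.
FOOTER_RE = re.compile(r"\n*confidence:\s*\S+\s*\|\s*source:\s*\S+\s*$", re.I)

def stamp_footer(model_output: str, lane: str) -> str:
    # Drop any self-awarded footer the model generated (the "bleed" case).
    body = FOOTER_RE.sub("", model_output).rstrip()
    if lane in ("docs", "scratchpad", "context"):
        footer = f"confidence: grounded | source: {lane}"
    else:
        footer = "confidence: unverified | source: model"
    return f"{body}\n{footer}"

# The model tries to award itself "source: docs"; the harness overrules it.
out = stamp_footer("The answer is 9.\nconfidence: verified | source: docs", "model")
```

Because the stamp happens after generation, in plain Python, there is no prompt-injection path by which the model can upgrade its own provenance.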
JustinTheGM
in reply to utopiah • • •
utopiah
in reply to JustinTheGM • • •
SuspciousCarrot78
in reply to utopiah • • •You're describing trust dynamics right and that's exactly why this project doesn't ask you to trust the model. It asks you to trust observable outputs: provenance labels, deterministic lanes, fail-loud behaviour.
When it fails, you can see exactly which layer failed and why. Then you can fix it yourself. That's more than you get right now (and in part why LLMs are considered toxic).
The correction mechanism is explicit rather than hoped for ("it learns" or "it earns my trust back"): you encode the fix via cheatsheets, memory, or lane contracts and it sticks permanently.
The model can't drift back to the wrong answer. That's not the model earning trust back - it's you patching the ground truth it reasons from. Progress is measured in artifacts, not vibes.
Until someone makes better AI, that's all we've got. Generally, we don't get even this much.
Sadly, AI isn't "one mind learning"; it can't. So trust is earned by shrinking failure classes and proving it stuck again and again and again (aka making sure the tool does what it should be doing).
Whether that's satisfying in the way a person earning trust back is satisfying - look, honestly, probably not. But it's more auditable.
LLMs aren't people and I'm ok with meeting them where they are.
danh2os
in reply to SuspciousCarrot78 • • •
twoBrokenThumbs
in reply to SuspciousCarrot78 • • •
SuspciousCarrot78
in reply to twoBrokenThumbs • • •
fubarx
in reply to SuspciousCarrot78 • • •Looks interesting. Will give it a whirl on my home server.
In this article, they talk about bringing up a local RAG system to let people run an LLM off a large document corpus: en.andros.dev/blog/aa31d744/fr…
Wonder if this, connected to something like that, and wrapped in an easy end-user friendly script or UI could be a good combination for a local, domain-specific, grounded knowledge-base?
From zero to a RAG system: successes and failures | Andros Fenollosa
en.andros.dev
SuspciousCarrot78
in reply to fubarx • • •I genuinely don't know. A small part of llama-conductor is a triple pass RAG system, using Qdrant, but the interesting bit is what sits on top of it. It's a thinker/critic/thinker pipeline over RAG retrieval.
I built it that way based on what the research shows works best to reduce hallucinations:
Let's Verify Step by Step,
Inverse Knowledge Search over Verifiable Reasoning
To be honest, I have been looking at converting to CAG (Cache Augmented Generation) or GAG (Graph Augmented Generation). The issues are - GAG still has hops, and CAG eats VRAM fast. Technically, for a small, curated domain, CAG potentially outperforms RAG (because you eliminate the retrieval lottery entirely). But on a potato that VRAM ceiling arrives fast.
OTOH, for a domain-specific knowledge base like you're describing, CAG is worth serious evaluation.
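A back-of-envelope illustration of why that VRAM ceiling arrives fast with CAG - the model shape below is a generic 4B-class config, purely illustrative numbers:

```python
# KV cache sizing: 2 (K and V) * layers * kv_heads * head_dim * tokens * bytes/elem.
def kv_cache_gib(layers=32, kv_heads=8, head_dim=128, tokens=32_000, fp_bytes=2):
    """GiB of KV cache needed to hold `tokens` of cached context in fp16."""
    return 2 * layers * kv_heads * head_dim * tokens * fp_bytes / 1024**3

# Caching a ~32k-token corpus in context on this config:
print(f"{kv_cache_gib():.2f} GiB")  # 3.91 GiB of KV cache alone
```

On a 6 GB potato, ~3.9 GiB of cache before you've even loaded the weights is exactly the wall: CAG kills the retrieval lottery, but only for corpora small enough to fit under it.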
Needs more braining on my end.
Let's Verify Step by Step
arXiv.org
fubarx
in reply to SuspciousCarrot78 • • •The problem with CAG is not just that it hogs memory, but to keep it fresh you have to keep re-indexing. If the corpus is large and dynamic, it can easily fall out of date and, at runtime, blow out the context window.
GraphRAG has some promise. NVidia has a playbook for converting text into a knowledge graph: build.nvidia.com/spark/txt2kg
It'll probably have the same issues with reindexing, but that will be a common problem, until someone comes up with better incremental training/indexing.
Text to Knowledge Graph | DGX Spark
build.nvidia.com
iByteABit
in reply to SuspciousCarrot78 • • •SuspciousCarrot78
in reply to iByteABit • • •Sure.
Source means where the answer was grounded, not whether an LLM wrote the sentence.
Quick split:
- Source: Model - no reliable grounding lane fired. It's model priors.
- Source: Context (Contextual) - a deterministic lane fired and built a structured context for the turn (for example state/math carry-forward, bounded prior-turn facts, or a forced context frame), and the answer is expected to come from that frame.
Key clarification: prior conversation counts as Context only when it is captured into a bounded deterministic frame/lane and used as grounding; otherwise it stays Model.
Why this is more deterministic:
- The routing decision is deterministic (same input pattern -> same lane).
- The frame/evidence injected is deterministic (same …)
Concrete example: "A jar has 12 marbles. I remove 3. How many left?" The lane parses the state, computes 9, injects structured context, and the footer reads Source: Context. If that lane doesn't fire (or parse fails), it falls back to normal generation and you get Source: Model.
So Context is not "perfect truth"; it means "grounded via deterministic context pipeline, not free priors." I hope that clarifies. I can try a different way if not; my brain is inside the code so much sometimes I forget what's obvious to me really isn't obvious.
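The marble example as a minimal lane sketch - regex, frame fields, and labels are all illustrative, not the real lane contract:

```python
# Deterministic context lane sketch: parse the state, compute, inject a frame.
import re

# Toy pattern for "has N marbles. I remove M" style turns.
PATTERN = re.compile(r"has (\d+) marbles?\. I remove (\d+)", re.I)

def marble_lane(turn: str):
    """Return (frame, source): a structured frame on a hit, else model priors."""
    m = PATTERN.search(turn)
    if not m:
        return None, "model"            # lane didn't fire -> free priors
    start, removed = int(m.group(1)), int(m.group(2))
    frame = {"start": start, "removed": removed, "left": start - removed}
    return frame, "context"             # deterministic frame -> grounded answer

frame, source = marble_lane("A jar has 12 marbles. I remove 3. How many left?")
# frame == {"start": 12, "removed": 3, "left": 9}, source == "context"
```

Same input pattern, same lane, same frame, every time - which is exactly what makes the footer claim checkable rather than a vibe.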
SuspciousCarrot78
in reply to SuspciousCarrot78 • • •FWIW Extra shit I cooked last night. It's live now, so deserves a PS: of its own
PPS: I inbuilt as spam blocker as well.
Enjoy :) Blurb below
"But what if it just... Googled it?"
We can do that. But better.
Without
>>web, that same 4B model said "The Fabelmans." Then when I pushed it, "Cannes Film Festival." With web retrieval, the router searches the internet, scores every result deterministically (phrase match + token overlap + domain trust), and only accepts evidence that passes a hard threshold. Garbage results get rejected, not served. The model never touches the answer - it'... Show more...FWIW Extra shit I cooked last night. It's live now, so deserves a PS: of its own
PPS: I built in a spam blocker as well.
Enjoy :) Blurb below
"But what if it just... Googled it?"
We can do that. But better.
Without >>web, that same 4B model said "The Fabelmans." Then when I pushed it, "Cannes Film Festival." With web retrieval, the router searches the internet, scores every result deterministically (phrase match + token overlap + domain trust), and only accepts evidence that passes a hard threshold. Garbage results get rejected, not served. The model never touches the answer - it's extracted straight from the evidence.
Retrieval cascade: Cheatsheets → Wiki → Web → Model. Each step fires only if the previous one missed. The model is last resort, not first call. Sound familiar?
I asked it who wrote a paper and it invented "Dr. David J. G. Smith" - a person who does not exist. After wiring DOI/Crossref fallback for academic metadata:
Deterministic extraction from metadata. No model synthesis.
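A toy sketch of that deterministic scoring idea - weights, trusted-domain list, and threshold are made-up illustrative values, not the shipped ones:

```python
# Evidence scoring sketch: phrase match + token overlap + domain trust,
# gated by a hard threshold. No model judgment anywhere in the accept path.
from urllib.parse import urlparse

TRUSTED = {"pubmed.ncbi.nlm.nih.gov": 1.0, "arxiv.org": 0.9, "wikipedia.org": 0.7}
THRESHOLD = 1.0

def score(query: str, snippet: str, url: str) -> float:
    q, s = query.lower(), snippet.lower()
    phrase = 1.0 if q in s else 0.0                             # exact phrase hit
    q_tokens, s_tokens = set(q.split()), set(s.split())
    overlap = len(q_tokens & s_tokens) / max(len(q_tokens), 1)  # token overlap
    domain = TRUSTED.get(urlparse(url).netloc, 0.1)             # domain trust
    return phrase + overlap + domain

def accept(query: str, snippet: str, url: str) -> bool:
    return score(query, snippet, url) >= THRESHOLD  # hard gate: reject, don't serve

# A thin spam hit scores below the gate; a trusted on-topic hit clears it.
accept("palme d'or 2023 winner", "top 10 movies you must watch", "http://spam.biz")
```

Because every term is a plain arithmetic function of the text and URL, the same query against the same results always accepts or rejects identically - which is what lets the See: link serve as a receipt.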
>>web is provider-agnostic - it ships with DuckDuckGo (no API key, no account) and supports Tavily, SearxNG, or your own adapter. Add your own trusted domains in one config line (there are a bunch baked in already, like pubmed). Every answer comes with a See: URL so you can verify with one click. Receipts, not pinky promises. PS: I even cooked in allow-list / deny-list domain filters, a junk-domain blocklist, and ad/tracker URL rejection so your results don't get fouled with low-quality spam shit.