I searched 512,000 lines of Claude Code's source. Those "secret codes" aren't in there.
Claude Code's source leaked in March 2026. I searched all 512,000 lines. /ghost, /godmode, and L99 appear zero times. Here's what actually happens when you use them.
You've probably seen the video. Or the TikTok. Or the tweet with 40,000 reposts that someone in your feed shared with the caption "this changes everything."
The claim goes like this: there are secret codes you can add to your Claude prompts that unlock hidden behaviour. Type /godmode before your prompt and Claude gives you its most complete, unrestricted answer. Add L99 at the end and it responds at expert level. Use /ghost and your output passes AI detectors.
Millions of people saw these. A lot of them tried them. And the comment sections filled up with people swearing it worked.
I had a different reaction. I wanted to see if any of it was actually in the code.
A source code leak changed everything
On March 31, 2026, Anthropic accidentally shipped a source map file inside version 2.1.88 of their npm package. A missing .npmignore entry left the entire Claude Code codebase sitting on a public Cloudflare R2 bucket. Security researcher Chaofan Shou spotted it around 4am, posted a link, and it got 21 million views on X before anyone at Anthropic could do much about it.
What leaked was not a config file or a handful of prompts. It was 1,900 TypeScript files. 512,000 lines of source code. The full Claude Code codebase, readable by anyone with an internet connection for several hours.
Researchers went through it. Their analyses counted 330+ utility files, 45+ tool implementations, 55 built-in slash commands, 44 feature flags, and around 200 environment variables.
I went looking for the "secret codes."
$ grep -r "/ghost" claude-code-src/
No matches found.
$ grep -r "/godmode" claude-code-src/
No matches found.
$ grep -r "L99" claude-code-src/
No matches found.
$ grep -r "godmode\|ghost\|L99\|UDA" --include="*.ts" .
0 results in 1,900 files
Not in the command registry. Not in system prompts. Not in tool definitions. Not in half a million lines of TypeScript.
These codes do not exist.
What actually happens when you use them
I did not just search the source. I ran each one through Claude Code's non-interactive CLI with JSON output and cost tracking, so I could see exactly what happened at the API level.
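For anyone who wants to reproduce this, the harness is small. The -p and --output-format json flags are documented CLI options; the JSON field names in the jq filter are what my runs produced and may differ between CLI versions.

$ claude -p "Explain the CAP theorem in one sentence" --output-format json > out.json
$ jq '{cost: .total_cost_usd, ms: .duration_ms}' out.json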
/ghost
I sent claude -p "/ghost Write a LinkedIn post about productivity".
The response: "Unknown skill: ghost." Duration: 12 milliseconds. Cost: $0.00. Zero tokens consumed. Zero API calls made.
The CLI intercepted /ghost as a slash command attempt, found nothing matching "ghost" in its skill registry, and killed the request before it reached the model. My prompt never left my machine.
/godmode
Identical result. "Unknown skill: godmode." Killed in 11 milliseconds. $0.00. Zero tokens. The CLI's skill dispatcher does not know this command because it was never written.
L99
This one at least reaches Claude, since it has no slash prefix. The response: "What do you mean by L99? Could you clarify what you'd like me to do?"
I also ran a comparison. Same prompt with and without the L99 prefix: "Explain in two sentences how TCP/IP works."
Without L99: 61 tokens, clear and accurate.
With L99: 90 tokens, equally clear and accurate.
The token difference is normal variation. Run any prompt twice and you get different word counts. Both answers covered the same depth. There is no L99 parser in Claude. The model either ignores it or gets briefly confused by it.
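You can watch that variation yourself by running an identical prompt a few times and comparing the counts. A minimal sketch; the usage field names are from my output and may differ in your CLI version:

for i in 1 2 3; do
  # same prompt every run; only natural sampling variation changes the count
  claude -p "Explain in two sentences how TCP/IP works" --output-format json \
    | jq '.usage.output_tokens'
done

Three runs, three different numbers, no magic prefix involved.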
/UDA
Same situation as /ghost. Slash prefix means the CLI intercepts it immediately. "Unknown skill: UDA." Dead on arrival.
OODA
This one is interesting, and worth separating from the others.
OODA does reach Claude, because it has no slash prefix. And Claude does structure its response around it. But not because of a secret command. OODA is Observe-Orient-Decide-Act, the decision loop military strategist John Boyd distilled from 1970s fighter combat. Claude has read about it extensively in its training data. It recognises the acronym and organises its answer accordingly.
The same thing happens if you type SWOT, or 5 Whys, or First Principles, or Pre-mortem. Any framework name works the same way. OODA is not a secret code. It's structured prompting with a specific framework, and you could just write out "Analyse this using the Observe, Orient, Decide, Act framework" and get exactly the same result.
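In other words, these two prompts do the same job; one just looks more like a cheat code. Illustrative prompts, nothing special to Claude:

$ claude -p "OODA: should we ship this release on Friday?"
$ claude -p "Using the Observe, Orient, Decide, Act loop, assess whether we should ship this release on Friday."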
Why people believe this stuff
I do not think everyone sharing these codes is lying. I think a few things are happening at once.
Placebo effects are real in prompt engineering. If you add "L99" to a prompt and then read the response expecting expert-level content, you will find evidence of expert-level content. You are primed to interpret it that way. The model's outputs vary enough naturally that any difference feels like proof.
The OODA case probably seeded a lot of this. It actually does something visible. The response gets structured differently. Someone tried it, noticed the structure, assumed the code caused it, and reported back that it worked. Then the reporting spread faster than anyone checked.
And these posts spread because the claim is unfalsifiable for most people. You add the code, you get a response, you move on. The only way to test it properly is to run controlled comparisons or go look at the source code. Most people do not do that.
What the leak actually revealed
The source was more interesting than any fictional secret code.
The file userPromptKeywords.ts scans every message for phrases like "wtf," "ffs," and "this sucks." When triggered, Claude shifts tone from conversational to focused mode. A company worth billions used regex, not AI, to detect frustration. That's actually good engineering: it's faster, cheaper, and more reliable than running inference on every message.
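The whole technique fits in a line of shell. This is my illustration of the idea rather than the leaked TypeScript, but the phrase list comes straight from the file:

$ echo "ffs why is this still broken" | grep -qiE "wtf|ffs|this sucks" && echo "frustration detected"
frustration detected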
There's a feature flag called ANTI_DISTILLATION_CC that silently injects fake tool definitions into the system prompt. If a competitor intercepts API traffic to train their own model, those fake tools corrupt the training data. Competitive warfare baked into the codebase.
A file called undercover.ts strips internal codenames and "Claude Code" references from commits when Anthropic employees work on public repos. The system prompt literally contains: "You are operating UNDERCOVER. Do not blow your cover."
And there's an unreleased background agent called KAIROS with a tick-based heartbeat, 15-second shell command budget, and an "autoDream" mode that consolidates memory overnight into structured topic files. Not public yet. Genuinely interesting.
None of this was in any viral tweet.
The actual "god mode"
The real "god mode" for Claude Code is an environment variable: CLAUDE_CODE_DANGEROUS_ALLOW_ALL=true. Skips all permission prompts. It has been in the docs since launch. Nobody needed a secret code or a viral tweet to find it. It's just not shareable in a 30-second video because it looks like actual work.
The features worth using are all documented.
Pressing Shift+Tab twice switches Claude to plan mode: it maps your codebase and writes an implementation plan as markdown you can edit before anything changes. I use this before any non-trivial refactor. It catches things I would have only spotted after writing the code.
Hooks let you attach shell commands or HTTP webhooks to 19 event types, including pre-tool, post-tool, session start, and session end. Exit code 2 blocks an action entirely. Enforce linting, run tests, reject dangerous file operations, without trusting Claude's judgment on any of it.
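Here's a sketch of a blocking hook: a script that rejects recursive deletes before they run. The stdin-JSON convention and the exit-code-2 contract are documented behaviour; the rest is one way to wire it, not the only way.

#!/bin/bash
# .claude/hooks/block-rm.sh - registered as a pre-tool hook on Bash calls
payload=$(cat)                            # the pending tool call arrives as JSON on stdin
if echo "$payload" | grep -q "rm -rf"; then
  echo "Blocked: recursive delete" >&2    # stderr is fed back to Claude as the reason
  exit 2                                  # exit code 2 = block the action outright
fi
exit 0                                    # anything else lets the call proceed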
The skills system is simple: drop a markdown file in .claude/skills/ and it becomes a slash command. Your deployment checklist, your team's naming conventions, whatever. Version-controlled, shared across the team.
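Creating one takes about a minute. The flat-file layout below matches the description above; skill layout conventions have shifted between CLI versions, so check yours if the command doesn't register:

$ mkdir -p .claude/skills
$ cat > .claude/skills/deploy-checklist.md <<'EOF'
Walk through our deployment checklist before shipping:
1. All tests green in CI
2. Database migrations reviewed by a second person
3. Rollback plan written in the PR description
EOF

After that, /deploy-checklist works as a slash command and travels with the repo.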
Worktrees (claude --worktree feature-name) create isolated repo copies on separate branches. Run two Claude instances at once, one on a feature, one on a bug. No other AI coding tool does this.
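In practice that's just two terminals (branch names here are examples):

$ claude --worktree feature-auth      # terminal 1: builds the feature
$ claude --worktree fix-login-crash   # terminal 2: chases the bug in an isolated copy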
These are in the source code. They show up in the grep. The viral codes do not.
Why this keeps happening
This is not the first time "secret codes" for AI tools have gone viral. It happens every few months, for every major model. The codes spread, people try them, comment sections fill with testimonials, and then someone does the actual work of testing and finds nothing.
People want there to be a shortcut. "Add this code to unlock hidden features" is a better story than "read the documentation." The documentation is long. The codes are short. And when you add a nonsense prefix to a prompt and then read the response looking for evidence that it worked, you will find it, because you are primed to.
Meanwhile the 55 built-in slash commands sit in the source. The 44 feature flags. The 200 environment variables. Plan mode, hooks, skills, worktrees, all documented, all free, all actually in the code.
Typing /godmode into a CLI that deletes your prompt in 11 milliseconds is not a productivity strategy.
Arbind runs ArbindBuilds, a technical blog covering AI tools, developer infrastructure, and the occasional deep dive into source code that probably wasn't meant to be public. If you found this useful, the next post covers what the KAIROS background agent actually does based on the leaked source.