This is from a time when I was a complete beginner at cold outreach, when I used YouTube guru tactics in the name of strategy and blamed AI for its lame outputs. I slid into people's DMs with a hundred words of what I believed was professional warmth — a reference to our last call about Q3 budget concerns, a mention of the case study, a request for a 15-minute slot. I got ChatGPT to write my outreach messages. My DMs read like the fluorescent hum and the faint smell of coffee and death in a corporate office.

There's a high chance you've talked to an LLM. You wrote "professional," but you got reimagined synergistic integration. You wrote "better and engaging," but you got even more synonyms and adjectives. You wrote "compelling thought leadership article," but it calculated the statistical center of everything ever labeled with those words, and got you an average post that lands nowhere because it was engineered to fit everywhere.

Then you blamed the model instead of confronting your own command of language.

In Philosophical Investigations, published in 1953, Ludwig Wittgenstein proposed a thought experiment.

Imagine everyone has a box containing something they call a "beetle." No one can look inside anyone else's box. Everyone talks about what's in their box using the same word. What actually sits inside — a stone, a shadow, an absence, a thing that changes by the minute — remains entirely private.

Yet Wittgenstein argues that communication still works. The word "beetle" functions in the language not because it refers to the thing inside, but because it plays a role in a game between two or more speakers. The private contents — whatever actually lives in your box, heavy or hollow, crawling or still — as he puts it, "cancel out." They have no place in the language game.

Now LLMs have learned language exactly as he described it: patterns of use, not private references. This token follows that token in these contexts with these probabilities.

No semantics. No beetle box to look into. Just the language game, played at scale.
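A toy sketch of what "patterns of use, not private references" means mechanically (my illustration, not anything from the essay, and vastly simpler than a real LLM): a bigram counter that predicts the next word purely from how often words follow each other, with no access to any beetle.

```python
from collections import Counter, defaultdict

# A hypothetical three-sentence "training corpus." The model never
# learns what any word means; it only counts which token follows which.
corpus = (
    "we value professional communication "
    "we value professional growth "
    "we value synergistic integration"
).split()

# Count follower frequencies for each token.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_token(prev):
    # Return the statistically most common follower — the "mean" of usage.
    return follows[prev].most_common(1)[0][0]

print(next_token("value"))  # "professional" wins 2-to-1 over "synergistic"
```

Scale the corpus from three sentences to billions of documents, and prediction from pairs of words to long contexts, and you get the language game played at scale.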

When I wrote "professional" in my prompt, I was pointing at my beetle — six months of relationship-building with this client, the writing she praised in previous emails, the way she softened when I asked about her daughter's piano recital.

The AI model wasn't even aware of my box. It responded to "professional" as the word functions across billions of documents — the entire lexicon of corporate caution and compliance in its training data.

My beetle was my own private experience of the word. AI's beetle was the statistical mean of everything ever written under that label.

After exhausting a few tutorials and several session limits, I started prompting with brutal tangibility and specificity — the client's voice and style, the exact phrase from our previous exchange I wanted to echo, an instruction to treat the reader like a childhood friend who knows and trusts me.

The output wasn't perfect, but it was something I might say to a friend over tea. The architecture imposes limits. But within those limits, the precision of my compression determines the precision of the output.

Ready-made phrases come crowding in. They will construct your sentences for you — even think your thoughts for you.

— George Orwell, "Politics and the English Language"

Language is the base layer. Every word narrows possible worlds, activates patterns older than your grandparents, and constructs your reality. You've been building and accumulating meaning through language since your first syllable. Building it, loosely. Outside voices filled your gaps — parents, teachers, media, even friends who interpreted your half-formed thoughts into coherent stereotypes. You just didn't notice it because most people talk in abstracts and expect communication to happen automatically.

AI doesn't think for you; it shows you, with mathematical precision, where you stopped thinking. The slop it spills out is the beetle in your box.