Why I Hate SF AI Fanboys
In recent days, with all the noise around AI, one group just won’t shut up: the San Francisco AI fanboys. You know the type. They’re always first to ride every hype train. First it was crypto, now it’s AI. Tomorrow, maybe it’s brain chips and climate tokens on the blockchain. They don’t care whether it works; if it trends, they sell it.
With every new billion-dollar round raised and every fancy demo, the SF AI guys are foaming at the mouth about how LLMs are the greatest invention since God. They’ll tell you this tech is perfect, flawless. They will swear it reasons better than your entire family. Ask them a single critical question, and it’s like you committed a crime:
How dare you question it? Sam Altman said AGI is coming!
That’s what I hate. This cult-like belief. This religion of artificial intelligence.
The Benchmark Paradox:
Let’s talk about benchmarks, the holy scriptures of the SF fanboy. They love benchmarks because benchmarks say what they want to hear. But here’s the truth: benchmarks don’t prove reasoning. They measure repetition and scaling, not cognition.
LLMs are trained on massive datasets. Let’s say the model sees a million examples like:
[x] + [y] = z
Now, during evaluation, the benchmark gives it questions like:
[y] + [x] = ?
And the fanboys are shocked when the model gets it right? Of course it does; it’s seen the same structure a million times. That’s not reasoning. That’s pattern matching, which is useful in itself, but it isn’t cognition.
Even if you’re not training on the benchmark questions directly, you’re still feeding the model very similar, simple patterns from the internet. So you end up overfitting it to the benchmark, even if you don’t mean to.
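The contamination effect above can be sketched with a toy experiment (everything here is invented for illustration, not a claim about any real model): a "model" that only memorizes operand pairs, never doing any arithmetic, still scores perfectly on a benchmark built from the same patterns with the operands swapped.

```python
import random

random.seed(0)

# "Training": memorize templated addition facts keyed by the operand
# pair, order-insensitive. This is pure pattern storage, no arithmetic.
train_pairs = [(random.randint(0, 99), random.randint(0, 99)) for _ in range(1000)]
memory = {}
for x, y in train_pairs:
    memory[tuple(sorted((x, y)))] = x + y

def model(question):
    """Answer by lookup only; never actually adds anything."""
    a_str, b_str = question.replace("= ?", "").split("+")
    a, b = int(a_str), int(b_str)
    return memory.get(tuple(sorted((a, b))))

# "Benchmark": the same facts with operands swapped -- superficially new.
benchmark = [(f"{y} + {x} = ?", x + y) for x, y in train_pairs]

correct = sum(model(q) == ans for q, ans in benchmark)
print(f"benchmark accuracy: {correct / len(benchmark):.0%}")  # 100%

# But a pattern it has never seen gets no answer at all:
print(model("100 + 200 = ?"))  # None
```

A perfect score here tells you nothing about reasoning; it only tells you the test distribution overlaps the training distribution.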
Why SF Loves AI So Much:
SF folks always jump on the latest trend. But AI? They love it more than anything. Why? Because unlike crypto, AI gives them real results. It generates text. It automates jobs. It writes code. They can see the money and power it brings.
And so, they become evangelists, pushing the myth of AGI like it’s the second coming.
The Illusion of Thinking:
Apple recently dropped a paper, “The Illusion of Thinking.” It argues LLMs don’t actually reason, and it’s completely right. But of course, the SF AI fanboys hated it. Not because they read it (they didn’t) but because it threatens their faith.
They want to believe AGI is coming. That they are living inside a sci-fi movie. But they never stop to ask:
If AGI really happens, what happens to mankind?
They don’t think. They just want to feel like they are part of this movie.
Do LLMs Need to Reason Like Humans?
No. That’s the whole point.
AGI is a myth created by Sam Altman and his bros, a myth sold to billions. He tells us it will “benefit humanity.” In reality, he’s the most dangerous man alive. He created a narrative so strong that people are afraid to speak against it.
LLMs don’t need to mimic humans. Let’s stop trying to teach machines to think like us.
Let them reason like LLMs in their own language, in their own way.
<think> Tags Won’t Save You
Now SF fanboys are obsessed with <think> tags, claiming they make models “actually reason.” But here is the irony:
We don’t even understand how human reasoning works. We don’t fully grasp cognition. If we did, we would’ve cured Alzheimer’s by now.
So how are we trying to replicate something we can’t even define?
Stop Chasing AGI. Start Fixing Reality
We don’t need AGI. We don’t need the myth. The AI we have today is already powerful enough to change the world.
Let’s fix its flaws. Make it safer. Make it more open. Give people the tools to understand and contribute to it.
Let’s build labs that are by the people and for the people. Not locked behind billion-dollar walls.
Who’s to Blame?
Sam Altman.
His persona is so strong that no one dares to question whether LLMs even work as advertised.
“What do you mean, LLMs don’t work? I raised $500 billion for this!”
That’s the power of myth. That’s why I call him the Antichrist of AGI.
We Need AI That Works for People.
Forget the myths. Forget the religion.
AI doesn’t need to be human. It needs to be useful, safe, understood, and open. SF AI fanboys won’t build that.
But maybe we will.