This isn't a callout post. We're not going to name names or air drama. OpenAI has smart people doing genuinely hard work. But we left, and people keep asking why, so here it is.
The mission got fuzzy
OpenAI started with a clear goal: build AGI that benefits humanity. For a while, that felt real. The early work was scrappy, fast, genuinely exciting. You could feel the weight of what was being attempted.
Somewhere in the last couple of years, that got replaced with something blurrier: a mix of enterprise sales targets, PR management, regulatory positioning, and the kind of corporate caution that makes bold decisions nearly impossible. The mission was still on the website. It just wasn't what we were optimizing for anymore.
The models got more cautious, not more capable
A lot of AI "safety" work in practice means making the model refuse more things. That's not safety; it's liability management. The models we were shipping were measurably less useful than they could have been because someone was worried about a bad tweet.
We think you can build AI that is genuinely responsible and genuinely useful. Those aren't opposites. But you have to actually try to do both, instead of sacrificing one for the other when things get uncomfortable.
So we built MeTal
The experiment is simple: what does an AI company look like when you actually trust the AI? When you're not constantly second-guessing it, hedging every output, adding disclaimers to everything?
We don't know exactly what we'll find. That's the point. MeTal is the company we wanted to work at. So we built it. If you want to follow along, or just use models that actually do what you ask, you're in the right place.