
After way too long away from this blog, I’m back (and now with a co-author 🐱)!
So where have I been? Barcelona! Why? I somehow landed a job at Apple as a (Lead) Machine Learning Engineer. Still not entirely sure how that happened, but I’m not complaining.
For the past 4 years, I’ve been working on foundation models, search (Spotlight and Siri), and multilinguality. The team was incredible — ridiculously smart and genuinely kind people. I worked on everything from optimizing on-device transformer models to batch inference infrastructure to LLM work (pre-training through post-training).
One highlight was getting to present work on improving on-device models to Tim Cook and the AI/ML Leadership at the Steve Jobs Theater, which was both terrifying and surreal. The project focused on making Siri more performant for real-world use cases, which is way harder than it sounds when you’re dealing with the constraints of mobile hardware.

The reason I haven’t posted anything here is pretty simple: Apple is very, very secretive. When your work involves things that might show up in millions of devices, you learn to keep quiet about the details.
But now I’m on a sabbatical and heading to something I’ve wanted to do for a long time: the Recurse Center in NYC for their S2’25 batch! If you haven’t heard of it, RC is a self-directed programming retreat where you spend 6-12 weeks working on whatever excites you, surrounded by other programmers. No OKRs, no deadlines, just learning and building.
I’m planning to dive into high-performance AI reasoning algorithms, explore functional programming more seriously, and work on projects I never had time for. Honestly, the most exciting part is just being around other people who love programming for its own sake.
Vibe coding
So, vibe coding. If you haven’t heard of it yet, you will. Andrej Karpathy came up with the term to describe this way of programming where you let AI generate all your code while you “fully give in to the vibes, embrace exponentials, and forget that the code even exists.”
Basically, tools like Cursor with Claude Sonnet are getting good enough that you can describe what you want in natural language, hit “Accept All” on whatever comes out, and somehow end up with working software. Things break? Copy-paste the error back into the AI and hope it fixes itself.
I’ll be honest: I’ve tried this approach, and I’m not a fan.
Don’t get me wrong — I use AI coding tools every day. GitHub Copilot’s autocomplete is great, and I’ll ask Claude to help me write regex or debug weird issues. But there’s a big difference between using AI as an assistant and letting it drive everything.
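To make that concrete, here’s the kind of small, self-contained task I’m happy to hand off and then read and test myself before trusting it (the log line and pattern below are made up, purely for illustration):

```python
import re

# Hypothetical example: a throwaway regex I'd ask an AI to draft,
# then verify myself against a few sample inputs.
log_line = "2025-06-14 09:31:07,412 ERROR worker[3] timed out after 30s"

pattern = re.compile(
    r"^(?P<date>\d{4}-\d{2}-\d{2}) "
    r"(?P<time>\d{2}:\d{2}:\d{2}),\d+ "
    r"(?P<level>[A-Z]+) "
    r"(?P<message>.*)$"
)

match = pattern.match(log_line)
if match:
    print(match.group("level"), "-", match.group("message"))
    # ERROR - worker[3] timed out after 30s
```

The point isn’t that the AI wrote it; it’s that the task is small enough that I can fully check the result.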
Why I don’t like it
First, you lose understanding of your own code. When I write code — even with AI help — I’m thinking about architecture, performance, edge cases, how things connect. With vibe coding, you’re shipping a black box. Fine for a weekend project, maybe. Terrifying for anything real.
Second, debugging becomes a nightmare. Your AI-generated code will break in production eventually. How do you fix it if you don’t understand what it does? Your debugging strategy is now “ask the AI and hope.” That’s not engineering, that’s gambling.
Third, you don’t actually learn anything. Programming is about problem-solving and building mental models. If you outsource all that thinking to AI, you’re not developing those skills. You’re just becoming dependent on a tool that might not always be there or understand your specific problem.
I’ve seen some impressive demos — like Pieter Levels building games entirely through prompting. But when you look closer, these projects often have obvious security issues or only work in narrow scenarios.
Most real software work involves understanding existing codebases, working within constraints, collaborating with teams, making tradeoffs. These are human problems that need context and judgment.
AI coding tools can be great when you use them thoughtfully. The productivity gains from autocomplete, boilerplate generation, and debugging help are real. But there’s a difference between augmenting your work and replacing it.
I think Maximilian Schwarzmueller has it right: use AI as a copilot, not the pilot. Let it handle tedious stuff while you focus on system design, architecture, and understanding what users actually need.
Understanding beats vibing
Maybe I’m old-fashioned, but good software comes from understanding what you’re building and why. Vibe coding might get you working code faster, but better software? Better programmers? I doubt it.
The tools will keep getting better. The line between AI assistance and AI replacement will keep shifting. But for now, I’d rather understand my code than vibe with it.