I’m Back. Here’s What I’ve Been Up To — And What’s Coming
So many things have happened lately! AI stuff! Continuous Deployment! Working fullstack! RIBs! And a poll! :)
It’s been a couple of months since my last article.
I was on the tail end of my paternity leave. I’m lucky enough to work for a company that gives generous time to bond with your baby. It’s been great, but also kind of exhausting. I need sleep, lol.
Now that I’m back, here’s what’s been happening with me — and the world. If you haven’t noticed, there’s been a bit of an AI wave in tech lately. You may have heard of it.
The book
My last posts were about the revised async/await chapter for The iOS Interview Guide 2nd edition:
I incorporated a round of reader feedback and added four new sections covering legacy API bridging, actor reentrancy, debounced search, and a multi-paradigm network request comparison. Thank you to everyone who submitted feedback.
During the break I also made solid progress on the UIKit chapter — it’s essentially done, I just need to polish it up and post a feedback call the same way I did for the async chapter. That’s next on my list.
The book is still very much in progress. There’s a lot left to write, but I’m moving forward.
AI experiments on the side
Turns out paternity leave is great for late-night rabbit holes when the baby finally sleeps.
I spent a lot of that time playing with AI-augmented coding: vibe coding, web development experiments, building out personal projects. The models have gotten dramatically better. The agentic stuff especially — it caught up fast. I have plenty to share — thoughts, opinions, and current best practices. I'll be posting about it, so stay tuned.
I also went down a rabbit hole trying to run everything locally. I got OpenClaw running in a Docker container on an external drive — so it wouldn’t wreck my main machine — and connected it to locally running models via Ollama. It didn’t go well. Turns out an M1 MacBook Pro with 64GB of memory is not enough to do anything meaningful with local models. The models are just too big. Oh well. I did learn a lot about Docker, so there’s that.
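For anyone curious what that setup looked like, here's a rough Docker Compose sketch. The Ollama image and port are the official ones; the OpenClaw image name, the environment variable, and the volume path are placeholders for illustration, not the exact configuration I used.

```yaml
services:
  ollama:
    image: ollama/ollama            # official Ollama image
    ports:
      - "11434:11434"               # Ollama's default API port
    volumes:
      # keep model weights on the external drive (path is illustrative)
      - /Volumes/External/ollama:/root/.ollama

  openclaw:
    # hypothetical image name -- in practice, build from the OpenClaw repo
    image: openclaw
    environment:
      # point the agent at the local Ollama API over the compose network
      # (the exact variable OpenClaw reads is an assumption here)
      - OLLAMA_HOST=http://ollama:11434
    depends_on:
      - ollama
```

Keeping both containers' state on the external drive is what protected my main machine; the bottleneck was never Docker, it was fitting a useful model into 64GB of unified memory.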
What I came back to at work
This is where it gets interesting, and where I have a lot more to say in a dedicated article.
Short version: we went all in on AI-first development — not just engineering, but the whole organization. Every team got subscriptions to tools like Claude and Cursor with very generous token budgets. I came back and dove right into this "token galore", as I call it, and I really don't want to go back to how I worked before.
My scope also changed. I'm now doing product work alongside engineering — customer research, data analysis, synthesizing feedback from users. Stuff I always wanted to do but never had the bandwidth for. AI makes it feasible. The scope of what a software engineer can actually build, own, and influence is wider than it's ever been. I already shared some of my thoughts on mobile devs being fullstack.
This is similar territory, and I want to do a proper write-up on it.
What’s coming
I have a backlog. Here’s what I want to get to — tell me what you want to read first.
KMP deep dive, part 2 — and then part 3 about RIBs
I already published an intro to KMP and how we use it to share business logic across iOS and Android. I want to go deeper — a part 2 on the implementation details of the KMP side, and then a part 3 on how the iOS architecture fits together with RIBs. The full picture, end to end.
Mobile Continuous Deployment on Demand
Mobile teams have always been stuck behind App Store review cycles and phased rollouts. We decided to close that gap with web deployment as much as possible. We automated everything, adopted Runway, and now ship to 100% of users with basically a click. We've been working this way for several quarters now, and it surprisingly just works! I want to write it up in detail.
Mobile engineers owning the full stack
I’ve argued before that mobile engineers make great full-stack engineers. Now we’re living it. Our team owns everything end-to-end — from the app screen and mobile business logic to the API contract and the backend implementation with the database queries, the whole thing. AI makes this actually feasible. I want to expand on this with real examples.
RIBs + Swift concurrency
As a maintainer of Uber’s iOS RIBs repo, I’ve been slowly modernizing it for Swift concurrency. I have a working approach — main actor isolation, full backwards compatibility, and a clear path for Swift 6 — and I want to document it properly for anyone running RIBs in production.
What do you want to read first?
Reply to this email, leave a comment, or vote in the poll below. I read everything.
And if none of this sounds interesting to you — tell me that too. I can take it :)
Happy coding!