Hi.
I'm Phil. And this is Future Office Labs, my micro-laboratory focused on tools for thought, personal productivity and knowledge management, and the future of coding and work. I follow the Unix "small tools" philosophy and draw inspiration from Lisp, Smalltalk, Forth, wikis and outliners, "moldable development", and so on. If all this resonates, you probably already recognise the territory.
I'm also going to use the blog here to talk about more artistic coding projects, because I believe the line between "art" and the kind of "practicality" that the (slightly ironic) name "office" might imply is increasingly blurred.
Inevitably in 2025, all research into coding and working also involves exploring AI. And AI-assisted coding.
And I think the greatest value we'll get from AI and "vibe-coding" comes not from using it to reproduce existing development workflows and patterns, but from taking advantage of it to rethink the entire development process. Three working principles follow from that:
1) Use vibe-coding to make small tools that add a lot of value and can be combined into wider ecosystems. Keeping tools small and self-contained keeps them tractable for today's AI to get right. The rise of n8n and of so many web APIs is a good sign that others are thinking this way too. But rather than becoming dependent on a bunch of web APIs from other people, we should internalise that way of thinking and build ecosystems of small tools on our own local machines (there's a tiny sketch of what I mean after this list).
2) AI lets us learn and work with programming languages we're unfamiliar with. So let's take advantage of that to upgrade our languages and our thinking. In particular, AI can help us orchestrate multiple domain-specific languages, and use languages that impose formal constraints, or offer proofs, for particular tasks. That will help us build software that is robust and trustworthy. Even if we built it with the help of AI.
3) Technology should empower us as individuals. Not put us at the mercy of large systems or remote cloud platforms. Two things seem obvious to me: that we're in a period where AI is heavily subsidised as a "loss-leader" to get us hooked, and that the existing big AI players want us locked into their platforms. I believe we need a defensive attitude against this. We can and should still take advantage of the current boom and cheap AI. But we should plan for it going away. So my rule of thumb is to use AI at write-time to make systems that don't rely so much on AI at run-time.
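To make point 1 concrete, here's a minimal sketch of the kind of small, self-contained tool I mean, written in the spirit of a Unix filter. The TODO-line convention and file names are purely illustrative, and notice there's no AI anywhere at run-time; an assistant might have helped write it, once.

```python
#!/usr/bin/env python3
"""todo_filter.py - a small, self-contained Unix-style filter.

Reads plain-text lines on stdin and emits any TODO it finds as one
JSON object per line on stdout, so other small tools can pick it up.
"""
import json
import re
import sys

# Illustrative convention: "TODO(2025-07-01): water the office plants"
TODO_PATTERN = re.compile(r"TODO(?:\((?P<due>[\d-]+)\))?:\s*(?P<text>.+)")

for line in sys.stdin:
    match = TODO_PATTERN.search(line)
    if match:
        print(json.dumps({
            "text": match.group("text").strip(),
            "due": match.group("due"),  # None when no date was given
        }))
```

Because it only speaks stdin/stdout and JSON lines, it composes with whatever else already lives on the machine, e.g. `cat notes/*.txt | python todo_filter.py | jq -r .text`.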
Eventually I hope we will have language models running locally on our own machines. But we should imagine these as a thin layer of natural-language UX over more traditional, deterministic computing processes running underneath. We don't want to try to make LLMs do all the work. We want them to help us formalize and route our intentions to other local tools as quickly as possible.
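Here's a rough sketch of what that thin layer might look like. `ask_local_model` is a stand-in for whichever local model interface you end up with, and the intents and commands are purely illustrative. The point is that the model's only job is to turn a request into one structured intent; everything after that is ordinary, deterministic code.

```python
import json
import subprocess

# The deterministic local tools that actually do the work.
COMMANDS = {
    "list_todos": ["sh", "-c", "cat notes/*.txt | python todo_filter.py"],
    "show_inbox": ["ls", "inbox/"],
}

def ask_local_model(prompt: str) -> str:
    """Stand-in for a locally running model (llama.cpp, Ollama, etc.)."""
    raise NotImplementedError

def route(request: str) -> None:
    # The model only formalizes the request into a known intent...
    prompt = (
        f"Map this request to exactly one intent from {list(COMMANDS)} "
        f'and reply with JSON like {{"intent": "..."}}.\nRequest: {request}'
    )
    reply = json.loads(ask_local_model(prompt))
    intent = reply.get("intent")
    # ...and a plain, inspectable lookup table does the rest.
    if intent in COMMANDS:
        subprocess.run(COMMANDS[intent])
    else:
        print(f"No local tool handles: {request}")
```

If the model goes away, or gets expensive, only that thin routing layer needs replacing. The tools underneath keep working.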