Building on a subsidized future
The Vibe Portfolio
I took a lot of car share rides in the early 2010s. It took me a little while to realize how much impact a subsidized commodity was having on my behaviour, and how much that behaviour was serving to change the world around me.
I went back and forth on what I believe about that change. I don't like the exclusivity, or how "the commons" were essentially being mined by private interests. But I also saw how it created opportunity for some: on one research trip, I interviewed a woman who was able to reclaim hours of leisure time that had previously been spent commuting from a neighbourhood underserved by public transit.
I'm in a bit of a similar place with vibe coding. I don't really know where it's going, and I'm seeing a lot of threats — a lot of trends and signals that make me very uncomfortable. Many of my former colleagues in the design world are having full-on existential crises. Engineers I know are either embracing it fully or going full head-in-sand. I'm not sure where I'm landing yet, but I am sensing that we're living on heavily subsidized time right now, and I'm trying to maximize what I can learn and explore within that.
So, let me share a bit of what I've been fiddling with.
The ecosystem
```mermaid
graph LR
    subgraph Compute
        MM[M4 Mac Mini<br/>Ollama · Qwen3 32B]
        VPS[Hetzner VPS]
        MM -.VPN.- VPS
    end
    subgraph ContentLayer[Obsidian Vault]
        OB[Markdown + YAML]
    end
    OB --> PMT[PM Toolkit]
    OB --> G24[2024.garden]
    OB --> EDY[Edytor]
    OB --> EV[Evolver]
    MM --> RN[Roughneck]
    RN --> ET[Etyde]
    RN --> GV[GoVejle]
    RN --> CNC[CNC Hub]
    CNC -.monitors.-> ET
    CNC -.monitors.-> GV
    EW[Earworm] -->|organizes| ABS[Audiobookshelf]
    HA[Home Assistant<br/>on TuringPi] -->|consumes| HV[Haven data]
    WF[Wallflower] -->|local ML sidecar| PY[Python · demucs · essentia]
```
Unit Economics
To start, I operate on a particular principle with personal projects: I obsess over the unit economics of day-to-day operations. Being somewhat distrustful of the current subsidies I'm leveraging, I'm incredibly wary of situations where your operations end up tied to a subsidy that can be rug-pulled.
So, I've been using Claude in particular to explore the hell out of what I can do with self-hosted LLMs, and how they can be used to drive certain services. The two "service" projects I explored around this were GoVejle (which scratched a personal itch for English-language events in my new city) and Etyde (a music practice platform which, admittedly, I haven't really used). Both projects have a specific "generated" feature set which I desperately didn't want to pay for.
As I played around with these services, I ended up standardizing this need into a service runner for my Mac Mini, which runs Qwen3 32B over Ollama. Basically, it's an async AI job runner I'm calling Roughneck, which serves several of my other projects: Etyde, GoVejle, CNC, etc.
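To make the shape of this concrete, here's a minimal sketch of an async job runner in that spirit. This is my illustration, not Roughneck's actual code: the function names, the concurrency scheme, and the use of Ollama's `/api/generate` endpoint on its default port are all assumptions.

```python
# Sketch of a Roughneck-style async job runner: drain a queue of prompts
# through a bounded pool of workers that call a local Ollama instance.
import asyncio
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default port

def call_ollama(prompt: str, model: str = "qwen3:32b") -> str:
    """Blocking call to the local Ollama generate endpoint."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

async def run_jobs(prompts, worker=call_ollama, concurrency=2):
    """Process prompts with at most `concurrency` model calls in flight."""
    queue: asyncio.Queue = asyncio.Queue()
    for prompt in prompts:
        queue.put_nowait(prompt)
    results = {}

    async def drain():
        while True:
            try:
                prompt = queue.get_nowait()
            except asyncio.QueueEmpty:
                return
            # Run the blocking model call off the event loop.
            results[prompt] = await asyncio.to_thread(worker, prompt)

    await asyncio.gather(*(drain() for _ in range(concurrency)))
    return results
```

The `worker` parameter is swappable, which also makes the queue logic testable without a model running: `asyncio.run(run_jobs(["a", "b"], worker=str.upper))`.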
While this doesn't resolve the token cost of actually producing the thing, the operating cost of these tools is basically nil — a few dollars a month for hosting and a domain name. The token cost is then tied to a subscription, though the actual cost of those tokens is heavily subsidized.
My personal hope is that local models, and the hardware appropriate to run them, will improve enough in the next year that we'll be able to get a Sonnet 4.6/Opus 4.6 level of capability on a self-hosted instance for under about 5k USD, with a relatively affordable operating cost. I have an M4 Mini right now and am finding it quite effective for specific tasks, but it has yet to prove itself as a viable alternative to the frontier hosted models for the software development and planning tasks I'm putting before it.
Vaults
The other big focus for me has been leaning very heavily on Obsidian markdown vaults. Several projects treat Obsidian as their source of truth, not a database. My main personal project, a personal PM Toolkit, basically operationalizes my hypotheses about product strategy and product ops in the form of a wildly evolving Obsidian vault, plus a set of dashboards and tools that I now find invaluable. The whole thing was built to be local-first: just a Next.js app that connects to a vault and reads from it. Another is Evolver, which reads a 65,000-line curriculum for learning specific synthesizers from markdown. This blog publishes from an Obsidian repo too, though there I mostly made my original self-coded blog from 2024 easier to deploy rather than vibe coding it from scratch. Edytor analyzes it. Basically, a decades-long love affair with Markdown is paying off.
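The "vault as source of truth" pattern boils down to reading YAML frontmatter plus a markdown body straight off disk. A minimal sketch, assuming simple `key: value` frontmatter (real vaults would want a proper YAML parser):

```python
# Read an Obsidian vault as data: split '---'-delimited frontmatter
# from the markdown body, then load every note in the vault.
from pathlib import Path

def split_frontmatter(text: str):
    """Return (frontmatter_dict, body) for a note; empty dict if no header."""
    if not text.startswith("---\n"):
        return {}, text
    header, _, body = text[4:].partition("\n---\n")
    meta = {}
    for line in header.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return meta, body.lstrip("\n")

def load_vault(vault_dir: str) -> dict:
    """Map note name -> (frontmatter, body) for every markdown file."""
    return {
        p.stem: split_frontmatter(p.read_text())
        for p in Path(vault_dir).glob("**/*.md")
    }
```

The appeal of the pattern is that the same files stay fully usable inside Obsidian itself; the app layer is just a reader.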
The thing is, though, that Markdown is also making the vibe-coded work more specific. By building skills and agent instructions directly into the markdown vaults, I've ended up with dramatically more effective workflows for navigating those vaults and generating the appropriate content.
One specific recommendation on this, though: keep "self-written" and "agent-written" content separated and distinct. A wise artist once said "Never Get High On Your Own Supply," and unfortunately Claude never listened to Life After Death.
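One mechanical way to enforce that rule is to tag provenance in frontmatter and refuse to feed agent-written notes back into prompts. The `provenance` key here is my own invention for illustration, not a convention from any of these projects:

```python
# Guardrail sketch: only human-written notes are eligible as agent context.
from pathlib import Path

def human_written(note_text: str) -> bool:
    """True unless the note's frontmatter marks it as agent output."""
    header = note_text.split("\n---\n", 1)[0]
    return "provenance: agent" not in header

def prompt_sources(vault_dir: str) -> list[str]:
    """Collect only human-written notes for inclusion in agent context."""
    return [
        p.read_text()
        for p in sorted(Path(vault_dir).glob("**/*.md"))
        if human_written(p.read_text())
    ]
```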
More recent Outliers
My most recent projects — Earworm, Haven, and Wallflower — are a bit different from the previous ones. Earworm has no AI dependency and is basically a replacement for the old Libation app; Haven is a Claude-generated garden plan for my son; and Wallflower is a dream "recording management" tool that runs its own local Python ML sidecar rather than routing through the shared Ollama/Roughneck pipeline. Wallflower and Earworm both feel like a step up in maturing with these tools: they're focused, easily open-sourced, and generally useful.
The product theory that shapes all of them
One thing worth mentioning is the PM Toolkit and how it's become a kind of "making to think" project. It has easily experienced the most scope creep, but in a way that is relatively easy to claw back — and that scope creep ends up serving the same role a prototype might in a design process. It teaches me where not to tread.
In general, the conceptual framework that's emerged around the PM Toolkit is something like this:
The spine is three paths:
```mermaid
graph TD
    NS[North Star] --> IM[Input Metric]
    IM --> KPI
    KPI -.cascades health.-> INIT
    BET[Bet] --> INIT[Initiative]
    INIT --> RI[Roadmap Item]
    RI --> EXP[Experiment]
    SIG[Signal] --> VEC[Vector / Opportunity]
    VEC --> RI
    RI --> CL[Changelog]
    CL -.closes loop.-> KPI
```
A Bet is a strategic hypothesis — a commitment of resources to a belief. An Initiative is a concrete work package that implements a bet. Roadmap Items are the deliverables inside an initiative. Experiments validate specific approaches inside roadmap items. That's the execution path: Bet → Initiative → Roadmap Item → Experiment.
Measurement rides parallel. A North Star is directional and un-measurable on its own. Input Metrics are its leading and lagging indicators. KPIs attach directly to initiatives and cascade health dots onto the execution path. Discovery flows the other direction: Signals (evidence from the field) validate Vectors (opportunities), which attach at the roadmap-item level. A Changelog closes the loop — did shipping actually move the metric?
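The execution path above is concrete enough to sketch as data. The entity names and arrows come from the post; the field names and the flattening helper are illustrative assumptions:

```python
# The spine's execution path as plain data structures:
# Bet -> Initiative -> Roadmap Item -> Experiment.
from dataclasses import dataclass, field

@dataclass
class RoadmapItem:
    name: str
    experiments: list = field(default_factory=list)  # validate specific approaches
    vectors: list = field(default_factory=list)      # discovery attaches here
    changelog: list = field(default_factory=list)    # did shipping move the metric?

@dataclass
class Initiative:
    name: str
    kpi: str                                  # KPIs attach directly to initiatives
    items: list = field(default_factory=list)

@dataclass
class Bet:
    hypothesis: str                           # a commitment of resources to a belief
    initiatives: list = field(default_factory=list)

    def execution_path(self):
        """Flatten Bet -> Initiative -> Roadmap Item -> Experiment."""
        return [
            (init.name, ri.name, exp)
            for init in self.initiatives
            for ri in init.items
            for exp in ri.experiments
        ]
```

In the actual toolkit each of these would presumably be a vault note with frontmatter links rather than an in-memory object, but the traversal is the same idea.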
I've made a LOT of corrections that only surfaced after sustained dogfooding with this tool, but it's been paying off tangibly, both in how I approach personal projects and in how I approach my work as a PM.
One example has been generating the self-hosted analytics suite in the form of CNC; another was the decision to create a centralized job runner, knowing that I'd be pulling in more applications needing locally hosted jobs; a third has been using AI to analyze my own trends in project implementation and to interrogate decisions and patterns.
The movement away from software languages I know and am familiar with into new territory (e.g. Go and Rust, which I've only done tutorials or tiny projects in) is a good example of this.
Next steps
I'm going to keep this post updated periodically as I keep exploring these projects. I'm emphatically not pursuing anything financial in any of this, which I think is helping the mindset quite a bit.
One fundamental belief I'm developing (along with many others) is that the SaaS age is hosed. Etyde came about as an attempt to build an app of similar complexity to Knowsi, and the fact that it came together in a few days was horrifying and liberating at the same time.
But there's the focus on unit economics and cost; the emphasis on building open projects where it makes sense and accepting private ones where it doesn't; the drive to do a bit of good for others while I'm making for myself (GoVejle, Haven, Earworm, Wallflower). I feel like there's a bit of the "old internet" that many of us pine for popping up in this frenetic building.
So, I'm going to lean into that while I anxiously eye the rising cost of tokens and the inevitability of Venture Capital's demand for return. There's no going backwards from this period, but going forward is likely to become quite expensive. My hope is that with these subsidized explorations, new habits, and emergent skills — well, I've been a homelab hobbyist for a while now. Maybe I have some agentic homesteading in my future.