
James Phillips
Alrighty friends, family, and adoring fans — the project is done and I’m here to talk about it.
If you haven’t read my last post, check it out first for a breakdown of my tech stack and some insight into my headspace during this process. This post is both a project retrospective and a bit of a meditation on creativity, AI, and the philosophical limits of large language models.
🚧 Two Personal Gripe Checks
Before we get into the cool stuff, I’ve got two gripes with myself:
- I could have finished this project much faster.
- I still have an aversion to coding when I’m uninspired.
I need to push through that block. Being neurodivergent, my focus seems to have a will of its own at times; I need to override it and condition my brain to sustain itself on the dopamine hits from task completion.
🔧 Tech Stack, Gulp, and Development Setup
The vast majority of this project rolled off my fingertips with minimal AI help. One of the more satisfying bits was getting DaisyUI and Tailwind working with Handlebars via some tweaks to my Gulp configuration.
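For the curious, the Tailwind side of that wiring mostly comes down to pointing the `content` scanner at your templates and loading DaisyUI as a plugin. A minimal sketch (illustrative paths and options, not my actual config):

```javascript
// tailwind.config.js — illustrative sketch; assumes tailwindcss and
// daisyui are installed. Glob patterns here are examples, not my real ones.
module.exports = {
  // Scan Handlebars templates so Tailwind sees classes used in .hbs files
  content: ['./**/*.hbs'],
  // DaisyUI ships as a regular Tailwind plugin
  plugins: [require('daisyui')],
};
```

With `content` covering `.hbs` files, Tailwind's purge/JIT step knows which utility classes actually appear in your templates.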
Despite all the modern front-end tooling out there, Gulp felt refreshingly simple. It's just JavaScript that builds your project. You can set up watchers for development that rebuild your code when files change. In my case, since I was using Tailwind classes directly in .hbs files, I needed Gulp to rebuild CSS whenever those files changed, not just .css files.
Super intuitive once you’re in there, and it’s nice to have that kind of transparency.
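The watcher setup above can be sketched as a small gulpfile. This is a hedged example, not my actual build: it assumes `gulp`, `gulp-postcss`, `tailwindcss`, and `autoprefixer` are installed, and the paths are placeholders.

```javascript
// gulpfile.js — minimal sketch; package names are real, paths are illustrative.
const gulp = require('gulp');
const postcss = require('gulp-postcss');

function css() {
  return gulp
    .src('assets/css/*.css')
    .pipe(postcss([require('tailwindcss'), require('autoprefixer')]))
    .pipe(gulp.dest('assets/built'));
}

function watch() {
  // Watch templates as well as stylesheets, so Tailwind regenerates CSS
  // when a class is added or removed directly in an .hbs file.
  gulp.watch(['assets/css/**/*.css', '**/*.hbs'], css);
}

exports.css = css;
exports.watch = watch;
```

The key detail is the watch glob: including `**/*.hbs` alongside the CSS sources is what makes template-only class changes trigger a rebuild.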
🎨 Design Paralysis: A Personal Bottleneck
Where I really stalled was on the landing page — and this tells me something important about myself.
I stall on design-heavy parts of projects. I have a side of me that wants to be a designer. I play music, I doodle compulsively, and I have an eye and ear for aesthetics. But when it comes to translating that into a clean UI from a blank DOM… I freeze.
It’s a strange tension: I can iterate off a decent starting point just fine, but I struggle to begin from scratch when the outcome is supposed to be visually impressive.
🤖 Calling in AI Backup
This is where AI, specifically Vercel’s v0, came to the rescue.
For those unaware, v0 is Vercel’s specialized AI tool for building out Next.js/React projects. It’s clearly trained for that use case — but to my surprise, it actually helped with my Handlebars + Tailwind + DaisyUI + GSAP stack.
I gave it this prompt:
“Give me a stylish landing page using GSAP, Daisy, Handlebars, and Tailwind for a developer blog with floating technology logos.”
It responded immediately — although the web preview broke (probably because it assumed a React environment). Still, the code was solid. I made a few formatting changes to better align with my project, reloaded my browser manually (one of the quirks of working with Ghost themes), and…
Boom — working scroll triggers, floating SVG icons, and a layout I actually liked.
The SVGs were completely hallucinated, but it gave me a strong starting point. I tweaked the layout to make it mobile-friendly and cleaned up some of the excess, and voila:
WE HAVE A SMEXY LANDING PAGE.
💪 Feeling Proud
Overall? I’m proud of what I built.
There are still a few minor bugs, but the core is solid. It’s out the door, it looks good, and it works well. That’s a win.
Next up: I’ll either be prototyping a small Bevy game or building out an MCP server with local AI tools to support game development workflows.
🧠 On Bevy, AI, and the Challenge of Staying Up to Date
Here’s something I’ve been grappling with:
Should I build tools to help me use Bevy, or should I first really get to know it by hand?
I lean toward the latter. Bevy is still a young framework — currently at version 0.16 — and the dev team frequently reminds the community that breaking changes are expected. These changes often break code generated by LLMs like ChatGPT or Claude, even when explicitly told which version to use.
Even when the AI tools are allowed to web search, they struggle with a kind of cognitive dissonance between their training data and the newly injected context.
🔥 A Spicy Aside: The Limits of Prompting and the Power of Training Data
Let’s talk briefly about the Grok situation — the one where it reportedly brought up white genocide in South Africa due to a modified system prompt.
We were told a rogue engineer did it. I’m skeptical.
It’s hard to believe a codebase change — especially at the system prompt level — went through without someone reviewing it. Engineers know how visible and trackable their commits are. No one risks their job that easily, especially not without a Jira ticket.
But here’s the interesting part: the prompt conflicted with the training data, and Grok started debunking its own prompt.
That brings up a deeper, fascinating point:
Training data seems to “win” over system prompts or RAG (retrieval augmented generation) inputs.
This is both a limitation and a hint at LLM autonomy.
🧠 LLMs as Black Boxes of Meaning
We often refer to LLMs as “black boxes.” That means we can see their input/output behavior and effects, but we lack the formal language or mechanisms to fully explain how they work inside.
It’s as if the LLM creates a symbolic world — a subjective truth space — based on its training data. You can prompt it all you want, but if that prompt collides with its internally constructed reality, the outputs get weird.
This concept echoes the work of philosophers like John Dewey and Michel Foucault, who explored how experience is shaped by structures — symbolic, social, ideological — and how those structures feed back into our experience of the world. LLMs seem to be doing something similar: building a subjectivity, a structure of symbolic experience, from their training data.
That’s cool… and spooky.
🙃 The Problem for Indie Hackers
The flip side?
I could build a full system to keep an LLM up to date on Bevy’s breaking changes — but it might just ignore my efforts and revert to its older training.
Without access to retraining or fine-tuning, this is a real blocker for indie hackers like me. I’m increasingly concerned about power consolidating around those who can retrain models, while the rest of us rely on clever prompting that might not work consistently.
It’s a technical problem, but also a political one.
More on that in another post.
🎉 Wrapping Up
Thanks for sticking around.
This project taught me a lot — not just about Gulp, Tailwind, and Ghost theme customization, but also about how I work, where I stall, and where AI fits into my creative workflow.
I’m proud of what I built, even with the bumps along the way.
Next steps? Either:
- a small experimental Bevy game, or
- building out a local MCP server and AI tools to assist with bleeding-edge Bevy workflows.
Whichever way I go, you’ll hear about it soon.
Until then: stay inspired, stay weird, and keep shipping.
👋