Right, I'll be honest — I've been meaning to write this blog for months. Every week something happens with AI that genuinely stops me in my tracks, makes me put the kettle on, and forces me to reconsider what's actually possible. This week was no different. In fact, it was the week that finally pushed me over the edge into "okay, I have to document this." So here we are. Welcome to issue one.

I'm not going to give you tutorials or step-by-step walkthroughs — there are plenty of those on YouTube. What I want to do is share the moments where AI made me think differently, and why I think you should be paying attention too.

ChatGPT Images 2.0 — This Is Not a Toy

Let's start with the one that absolutely got me this week: ChatGPT Images 2.0. OpenAI dropped the new GPT Image 2 model in late April, and I've been obsessing over it ever since. The short version: it thinks before it draws. It has actual reasoning built into the image generation process, which sounds like a small thing until you see what it produces.

Here's what I used it for. I've been thinking about redecorating a couple of rooms in the house — nothing major, just a refresh — but I had absolutely no idea what style I actually wanted. So I started uploading photos of my rooms and asking ChatGPT to reimagine them. What came back wasn't a vague mood board. It was photo-realistic renders of my actual rooms, respecting the layout, the light, the proportions, and giving me options — Japandi, mid-century modern, moody maximalism. I basically did an interior design consultation with an AI at 11pm in my kitchen. And it was brilliant.

"I basically did a full interior design consultation with an AI at 11pm in my kitchen."

— Jamie Gibbons, Issue #1

But I didn't stop there. I needed some eye-catching visuals for a few social media projects. With older tools, getting exactly what you had in your head was a frustrating back-and-forth of prompts and disappointment. With Images 2.0, the context-aware multi-turn editing changes everything. I'd generate an image, then say "change the background to something more cinematic, keep everything else" — and it would. Consistently. It actually remembered what we were building together.

The moment that really made my jaw drop? I used it to create the featured image for this very blog post and the TikTok reel below — I described the vibe, the text overlay, the colour palette, and it rendered legible, stylish text directly into the image. No garbled nonsense. That alone feels like a milestone.

Why it matters

GPT Image 2 supports up to 2K resolution, can generate up to 8 coherent images from a single prompt with character continuity across the batch, and renders multilingual text accurately. For anyone creating content or marketing assets, or anyone who just wants to visualise ideas fast, this is the new baseline.

Building StageTub Without Writing Code

The second thing blowing my mind right now is Base44, and specifically what I'm building on it: an app called StageTub.

It actually started with a conversation. An old work colleague got in touch frustrated with an app and plugin he'd been using on his WordPress site — it wasn't doing what he needed, and he asked if I knew of anything similar. Honestly? I didn't. But I'd just started playing with Base44, so instead of pointing him elsewhere, I told him about it. Armed with a clear list of everything he actually wanted, I sat down and built StageTub.com.

The concept is a platform for the performing arts and live entertainment world — a dedicated space where performers, promoters, and audiences connect around live shows and stage talent. The entertainment industry is massive, but it's still weirdly fragmented online. StageTub is my attempt to fix a small slice of that. Head over to StageTub.com to see what we're building.

Zero lines of code written, one afternoon to build v1, and more "no way" moments than I could count.

Here's the mind-blowing part: I'm not a developer. I've never been a developer. A year ago, building a fully functional web app would have meant hiring someone, spending months, and spending a lot of money. Base44 is an AI-powered no-code platform that lets you describe what you want to build — literally in plain English — and it builds the database, the user interface, the logic. Change your mind? Just tell it. It adapts.

This week I made real progress on StageTub. Features that would have taken a developer days to code came together in an afternoon. I'm not pretending it's magic with zero effort — you still need to think clearly about what you want, and there's a learning curve. But the gap between "idea in my head" and "thing that exists and actually works" has never been smaller. That's the bit that keeps blowing my mind.

But it didn't stop at the app. I also got AI to build a complete brand identity and design system for StageTub, and this is where things got properly impressive. We're talking a full Brand Hub: a logo concept (a sculpted slab 'S' with two angled cuts that nod to stage risers and sound waves, working simultaneously as a monogram, an equaliser and a stage); a four-colour palette of Stagetub Purple, Stage Lime, Backstage Cream and Stage Black, contrast-tested against WCAG 2.2 across every combination; a typography system built entirely on Urbanist; voice and tone guidelines; and a full applications section showing how the brand lives across websites, social, merch and email. It's the kind of brand document a design agency would charge thousands for and take weeks to deliver.
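If you're curious what "WCAG contrast-tested" actually involves: the check boils down to a relative-luminance calculation defined in the WCAG spec, then a ratio between the two colours. Here's a minimal TypeScript sketch — the hex values at the bottom are placeholders I made up, not StageTub's actual palette:

```typescript
// Relative luminance per WCAG: linearise each sRGB channel, then
// weight red, green and blue by how bright they appear to the eye.
function luminance(hex: string): number {
  const [r, g, b] = [1, 3, 5].map((i) => {
    const c = parseInt(hex.slice(i, i + 2), 16) / 255;
    return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  });
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

// Contrast ratio: (lighter + 0.05) / (darker + 0.05), ranging 1:1 to 21:1.
function contrastRatio(fg: string, bg: string): number {
  const [l1, l2] = [luminance(fg), luminance(bg)].sort((a, b) => b - a);
  return (l1 + 0.05) / (l2 + 0.05);
}

// WCAG AA body text needs at least 4.5:1. Placeholder purple on cream:
const ratio = contrastRatio("#3b1d5a", "#f5efe0");
const passesAA = ratio >= 4.5;
```

Run that across every foreground/background pairing in a palette and you have the "tested across every combination" claim in about twenty lines.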

What struck me most was how considered it felt. The four brand pillars — Raw, Bold, Practical, Connected — genuinely capture what StageTub is about. The voice guidelines ban words like "leverage" and "synergy" and tell you to write like "the person running the soundboard, not the one running the IPO." That's not filler. That's a real point of view. You can explore the full design system at eagleworks.co.uk/stagetub.

Websites That Come Alive

The third strand of this week — and the one I've probably spent the most hours on — is using AI to design and build professional websites. I've been working on a full redesign of the John Cameron Music homepage (John Cameron being the legendary BAFTA-winning composer and arranger behind hits for Hot Chocolate, Heatwave, and a remarkable film career spanning decades), and a website for a George Michael tribute show.

What's striking isn't just that AI can write HTML and CSS. It's that it makes design decisions. It chose a deep chocolate and burnt orange palette for John Cameron's site because of who he is and the era he worked in. It created a parallax scrolling hero, animated vinyl records, photo galleries where portrait shots reveal full colour on hover, and a "Through the Years" section with a slow auto-scrolling photo strip treated in warm sepia tones.

For the George Michael tribute site, it built a full tour date management system with sold-out states, tabs switching between 2026 and 2027 dates, animated floating song titles drifting across the hero section, and a counter that ticks up from zero when you scroll past the stats. All of that came from a conversation. Not from specifying every pixel.
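That scroll-triggered count-up is a good example of how simple these effects are underneath. The core is just a function mapping elapsed time to a number between zero and the target. A sketch in TypeScript — the function name is mine, and on the real page something like this would be driven from a requestAnimationFrame loop once an IntersectionObserver fires:

```typescript
// Value of a count-up counter after `elapsed` ms of a `duration` ms animation.
// An ease-out cubic makes the count start fast and settle gently on the target.
function counterValue(target: number, elapsed: number, duration: number): number {
  const t = Math.min(Math.max(elapsed / duration, 0), 1); // clamp progress to [0, 1]
  const eased = 1 - Math.pow(1 - t, 3);                   // ease-out cubic
  return Math.round(target * eased);
}
```

In the browser, an IntersectionObserver watching the stats section would kick off the animation loop, call this each frame, and write the result into the DOM, which is exactly the "ticks up from zero when you scroll past" behaviour.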

"The gap between 'idea in my head' and 'thing that exists and works' has never been smaller."

— Jamie Gibbons, Issue #1

That collaborative quality — where I can say "it feels a bit static, make it come alive" and something genuinely interesting happens — is what keeps surprising me every single session.

If there's a thread running through all of this week's AI moments, it's creative leverage. AI isn't replacing creative vision — it's turbocharging the ability to act on it. I can visualise a room redesign without a designer, build apps without a developer, and create websites without an agency. The ideas are still mine. The execution just got a whole lot faster.

See you next Friday — there's already plenty queued up that's made my eyes go wide. And if something AI-related has blown your mind this week, drop it in the comments. I genuinely want to hear it.

#AI #ChatGPT #GPTImage2 #Base44 #NoCode #StageTub #WebDesign #AITools