Vibe Coding in UK Defence


As part of my technical learning I decided to conduct an experiment. Could I - not a software developer - use AI coding tools to build a working, public-facing web service from scratch, using real UK MOD data? And if so, what might that mean for Defence?

The result is here: Find military range activity near you

The service lets users enter a UK postcode and see whether there is any military range activity nearby, how far away the nearest range is, and whether any activity is scheduled today. You can try it yourself. It currently covers RAF flying training and Army firing ranges. Note that this is a static page - the data does not update (though it could, if the pipeline were deployed to refresh it).
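Under the hood, the "how far away is the nearest range" part is essentially a great-circle distance calculation over a table of range coordinates. A minimal sketch of that idea in Python (the range names and coordinates below are made up for illustration, not the app's real dataset):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

# Hypothetical range locations -- illustrative only.
ranges = {
    "Salisbury Plain": (51.19, -1.98),
    "Otterburn": (55.32, -2.18),
}

def nearest_range(lat, lon):
    """Return the closest range and its distance, rounded to 0.1 km."""
    name = min(ranges, key=lambda r: haversine_km(lat, lon, *ranges[r]))
    return name, round(haversine_km(lat, lon, *ranges[name]), 1)

# e.g. coordinates near central Salisbury:
print(nearest_range(51.07, -1.79))
```

In the real app this logic runs in the browser after the postcode lookup API returns the user's coordinates, but the calculation is the same.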

This was built using Claude and Claude Code, Anthropic's AI coding assistant. I'm going to walk through what I did, how the "vibe coding" approach worked in practice, and what this might mean for Defence.

What I built

The app pulls data from various GOV.UK pages that list UK military ranges and their scheduled opening times. The clever part (arguably) is that no structured public dataset of range locations and activity schedules exists. The GOV.UK pages are designed for humans to read, not machines to consume. So the code has to create that dataset from scratch.

It does this through three AI-generated scripts running in sequence. The first scrapes a short list of relevant GOV.UK pages and extracts range names and activity timetables, handling the inconsistent formatting between pages. The second takes those place names and turns them into map coordinates using a free geocoding service. The third builds the interactive map. Once that pipeline has run, the web app is entirely static — no server needed, it runs in your browser and calls a free postcode lookup API to find your location.
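To give a feel for what the first of those scripts has to do, here is a heavily simplified sketch of scraping a range timetable from page markup, using only Python's standard library. The HTML fragment is hypothetical (the real GOV.UK pages are more varied, which is exactly what makes the scraping fiddly), and the real scripts are more involved:

```python
from html.parser import HTMLParser

# Hypothetical fragment in the style of a GOV.UK range times page --
# the real pages differ in structure from site to site.
SAMPLE = """
<h2>Otterburn</h2>
<table>
  <tr><td>Monday 3 June</td><td>08:00 to 23:59</td></tr>
  <tr><td>Tuesday 4 June</td><td>No firing</td></tr>
</table>
"""

class RangeTimesParser(HTMLParser):
    """Collect range names from headings and timetable cells from tables."""
    def __init__(self):
        super().__init__()
        self.capture = None        # tag currently being read ('h2' or 'td')
        self.current_range = None
        self.rows = {}             # range name -> list of cell strings

    def handle_starttag(self, tag, attrs):
        if tag in ("h2", "td"):
            self.capture = tag

    def handle_data(self, data):
        text = data.strip()
        if not text or self.capture is None:
            return
        if self.capture == "h2":
            self.current_range = text
            self.rows[text] = []
        elif self.capture == "td" and self.current_range:
            self.rows[self.current_range].append(text)

    def handle_endtag(self, tag):
        self.capture = None

parser = RangeTimesParser()
parser.feed(SAMPLE)
print(parser.rows)
# {'Otterburn': ['Monday 3 June', '08:00 to 23:59', 'Tuesday 4 June', 'No firing']}
```

The output is the structured dataset that doesn't exist anywhere as a download: machine-readable range names and timetables, ready to be geocoded and mapped.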

I chose the GOV.UK design system because the output is a citizen-facing service and the GOV.UK style is familiar and trusted. But it also makes a point: AI coding tools can produce something that looks and feels professional very quickly. I was careful to add caveats to the page making clear this is an unofficial experiment, not a real Government service.

The full code is on GitHub if you want to look under the bonnet.

How it worked in practice

The process was conversational. I described what I wanted in plain English, iterated on the output, and used Claude to troubleshoot problems as they came up — things like the scraper picking up navigation elements instead of actual range names, or a location failing to geocode because of unusual characters in its name. I didn't write code line by line. I described intent, reviewed what came back, and steered.
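As an example of the kind of fix that came out of this troubleshooting, a name with unusual characters can be cleaned up before it is sent to the geocoder. This is an illustrative sketch only - the project's actual cleanup rules may differ:

```python
import re
import unicodedata

def clean_for_geocoding(name):
    """Normalise a place name before sending it to a free geocoding service.

    Strips accents and bracketed qualifiers that a geocoder may choke on.
    Illustrative only -- not the real script's logic.
    """
    # Decompose accented characters, then drop the combining marks.
    name = unicodedata.normalize("NFKD", name)
    name = "".join(c for c in name if not unicodedata.combining(c))
    # Drop bracketed qualifiers such as "(dry training only)".
    name = re.sub(r"\([^)]*\)", "", name)
    # Collapse any whitespace left behind.
    return re.sub(r"\s+", " ", name).strip()

print(clean_for_geocoding("Tŷ Croes"))                        # Ty Croes
print(clean_for_geocoding("Pen-y-Fan (dry training only)"))   # Pen-y-Fan
```

The point is less the specific fix than the workflow: I described the symptom in plain English, Claude proposed and applied a repair like this, and I checked the result.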

This is what people mean by "vibe coding." You're working at the level of what you want rather than how to implement it. The AI handles the how. It's not magic - you still need to think clearly about the problem, check the output, and catch mistakes. But the barriers to building something real have dropped dramatically.

What this might mean for Defence

In my earlier post on the four edges of AI in Defence, I talked about the organisational and process edge — the need for Defence to become AI-native. Tools like this are relevant to that ambition.

On the positive side: imagine every analyst, staff officer, or planner being able to prototype tools for their own needs. Not waiting months or even years for a formal software requirement to work through the system, but building a working proof-of-concept in an afternoon to test whether an idea has legs, perhaps even to get something that works reasonably well. In effect we could have a suite of specialised tools for each military operation, or even for each operator.

That could meaningfully change the speed at which Defence iterates.

But there are legitimate questions too. If anyone can quickly build something that looks polished and official, how do we maintain quality, accuracy, and trust? My experiment scraped a few publicly available pages - but the same approach pointed at messy or sensitive data could produce credible-looking nonsense. The ease of creation makes AI governance, assurance, and literacy more important, not less.

For me, the balance tips firmly in favour of making these tools widely available — but with eyes open about the risks, and with investment in the skills people need to use them well.