Is it Cheating if it Applies?

[Image: people staring at a person using AI for their job. Generated with Google Gemini.]

Some context to get you started.

If you’ve read my recent posts, you know I’ve been down a rabbit hole trying to apply more engineering-focused skills to my daily work. Lately I’ve gone all-in on understanding how current LLMs generate code and how effective they can be at building things I might actually want to use day-to-day.

My current work environment doesn't naturally reward pushing AI workloads into daily workflows. The ecosystem isn't there yet, and as I’m not a 'real' engineer, it's not my general realm to dabble in. 

So what have I started? Well, I used an LLM to generate a Terraform migration script and some test policies to migrate part of our existing Jamf Pro environment using the available provider. It turns out LLMs are already pretty good at this.
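To give a rough idea of the shape of such a config (this is a minimal sketch, not the actual output, and it assumes the community deploymenttheory/jamfpro provider; the attribute names are placeholders to be checked against the provider docs):

```hcl
terraform {
  required_providers {
    jamfpro = {
      # Assumption: the community Jamf Pro provider. Pin the version
      # you have actually tested against.
      source = "deploymenttheory/jamfpro"
    }
  }
}

provider "jamfpro" {
  # Instance URL and API credentials deliberately omitted; supply them
  # via variables or environment configuration per the provider docs.
}

# Illustrative test policy only; attribute names vary by provider version.
resource "jamfpro_policy" "llm_test_policy" {
  name    = "LLM migration - test policy"
  enabled = false
}
```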

Why is it cheating?

I face a recurring dilemma, especially since the advent of LLMs that write code: is there value in learning to write code myself? Or should I focus on learning how to spot good code and prompt the agent for accuracy, rather than spending 12–18 months burning out my brain trying to write it all manually?

It creates a debate around non-engineers making commits and raising PRs on production code. That’s a whole topic unto itself.

In my case, I’m starting from scratch with entirely LLM-written Terraform. I prompt it with module documentation and examples, challenging it to comply with my concepts or to tell me when what I’m asking for is impossible by design. Along the way, I’m also teaching myself how to put together a proper pull request and keep better commit hygiene!

What I was able to cobble together in the first hour could have done the job fine; what I had after another hour of tuning would probably have been near perfect. Even so, I feel the extra few days I spent learning more about Terraform, working out how to use it properly for this specific purpose, and making sure everything was flawless worked in my favour.

Do I know how well a human being with actual software engineering experience could have done the same job? Not at all; it might have been 30 minutes of typing for them. But I know I learned something through the journey at least.

Is the code good, though?

This is the part I’m stuck on. The script it has written for pulling down existing items from the staging environment works flawlessly. Both terraform validate and terraform plan succeed, and the plan shows policies being updated, destroyed or created based on what the export script has pulled down.

Passing validate and plan is necessary, but not sufficient; I still need a targeted apply in staging to prove the behaviour. Do we actually need to deploy one of these items to staging to confirm that the LLM did the right thing?

Yes, we do. That’s the whole point of doing a plan before an apply, generally speaking.
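My export script has its own way of doing this, but as a rough sketch of the underlying mechanism: recent Terraform versions let you declare imports of existing objects directly in HCL, so that plan can show how the generated config lines up with what already exists in staging. The resource address and ID below are made up for illustration.

```hcl
# Rough sketch only: bring one existing staging policy under Terraform
# management so plan can diff the generated config against it.
# The resource address and Jamf Pro ID are invented.
import {
  to = jamfpro_policy.existing_staging_policy
  id = "123"
}

resource "jamfpro_policy" "existing_staging_policy" {
  # Written to mirror the exported state of the real object.
  name    = "Existing staging policy"
  enabled = true
}
```

A targeted apply of just that one address in staging is then the cheap way to prove the behaviour before touching anything broader.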

Based on what I’ve seen from other codebases going through the same process I am, I’d argue my LLM friend and I have done a far better job of making the Terraform config extensible and less monolithic, which should arguably make future work less effort.
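To give a feel for what I mean by ‘less monolithic’ (the module and variable names here are invented for illustration, not the real repo layout):

```hcl
# Hypothetical layout: each family of policies gets its own module call
# instead of living in one giant file, so adding a new category means
# adding a block rather than performing surgery on a monolith.

variable "macos_update_policies" {
  type = any # shape invented for illustration
}

variable "security_baseline_policies" {
  type = any
}

module "macos_update_policies" {
  source   = "./modules/jamf_policy"
  policies = var.macos_update_policies
}

module "security_baseline_policies" {
  source   = "./modules/jamf_policy"
  policies = var.security_baseline_policies
}
```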

Not to mention it has done an excellent job of documenting things, so that future LLM agents can build on it and non-engineers can contribute to the Terraform codebase.

The stigma sucks. (Or is it imposter syndrome?)

We’re at a turning point for LLM use in software engineering, and perhaps I’m not the right voice for this because I’ve come in blind, with only what I’ve learned personally by trying and failing on probably a hundred different ‘projects’ built with an LLM. I know a lot of larger organisations are going all-in on ensuring engineers use LLMs daily in their workflows, and I’m sure a great many of them are producing far superior code to anything I ever will. So why does what I’m doing feel like such a bad smell?

Here’s the kicker: I’m always going to need someone to peer review everything I write. I can be confident when validation and guardrails exist. But the moment those are gone, can we actually trust code I spent a few days generating over code that would have taken months to write manually?

I don’t think I know the right or wrong answer to this, but what I can say is that with time, and better models with even more context and memory capacity, people like me can start actually contributing to environments we never thought we could before.

Where from here?

I’m pondering this heavily. Personally, the journey is never over. I’ll be spending my end-of-year break and January 2025 exploring how to make these tools work even better for me.

Professionally, though? I think the road is tapped out for now. There’s still too much stigma around using LLMs to do the work of an ‘actual engineer.’

Regulated industries are not environments where minds like mine, wanting to ship ideas in a couple of hours with LLM tooling or agentic workflows, can thrive at the moment; I feel too far ahead of the game.

It just doesn’t work like that.

I’m personally taking a step back from using LLMs to deliver success in my current role unless they turn weeks of work into hours. I want to apply what I know to the world, but the cost is accepting that my current environment is not the one to deliver it into. At the end of the day, that takes the fun out of the AI.

I’m curious to hear or see what others have been doing in this space, and whether they’ve applied some form of agent logic to these types of workflows yet!

This article was updated on December 11, 2025
