Your business wasn’t ready for 2024.

Image: a person on a beach in Hawaii enjoying AI doing their job (generated with ChatGPT).

tl;dr: use AI every day and figure out how to make it work for you. Become a 10x individual at home so when business finally catches up, you’re a 10x worker on a beach in Hawaii while a cloud agent you prompted six months ago handles the boring stuff.

Nobody was prepared for the first wave of AI tools. Early ChatGPT felt like it was just spewing words, even with a mountain of training data behind it.

Where are we now?

My last post said “AI isn’t fun.” Still true. What I’ve realised this week is that a lot of big organisations, especially regulated ones without an army of lawyers on call, are 12 to 18 months behind. They weren’t ready for 2024, and they’re still dragging their feet on the groundwork needed to discover frontline use cases.

Frontline workers are the point. I’m on an IT helpdesk. All day is triage: calls, tickets, quick decisions. These are the people who keep the business moving, cut wasted time, and should be leading their own team of robots in the cloud through prompt engineering.

Picture this: it’s 9am and 10 new tickets land. An agent reads the end-user text, checks internal knowledge and web search, drafts first responses, and routes by policy. First-pass triage in 1 to 2 minutes, initial contact made. Will it be perfect? No. Is the trade-off worth it? Yes.
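To make the shape of that triage pass concrete, here’s a minimal sketch. Everything in it is hypothetical: the keyword policy stands in for a real model call plus knowledge-base lookup, and the queue names are invented.

```python
from dataclasses import dataclass

@dataclass
class Ticket:
    id: int
    text: str

# Hypothetical routing policy: keyword -> queue. A real agent would replace
# this stub with a model call and internal knowledge search.
POLICY = {
    "password": "identity",
    "vpn": "network",
    "invoice": "finance",
}

def triage(ticket: Ticket) -> dict:
    """First-pass triage: pick a queue and draft an initial reply."""
    queue = "general"
    for keyword, target in POLICY.items():
        if keyword in ticket.text.lower():
            queue = target
            break
    draft = (
        f"Hi, thanks for your ticket #{ticket.id}. "
        f"We've routed it to the {queue} team and will follow up shortly."
    )
    return {"id": ticket.id, "queue": queue, "draft": draft}

if __name__ == "__main__":
    morning = [Ticket(1, "Can't connect to the VPN"), Ticket(2, "Need a password reset")]
    for t in morning:
        print(triage(t))
```

The point isn’t the keyword matching, it’s the loop: read, decide, draft, route, with a human reviewing the drafts rather than writing them from scratch.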

The adoption cycle

We’re at the tail end of 2025 and most daily AI tools like ChatGPT and Gemini are treated like second-class citizens at work because “they aren’t real tools.” That misses the point. The same people already using them could be twice as productive with the enterprise versions in their toolkit.

I’m lucky to have GitHub Copilot for Business through work. It has taught me a lot about the limits and the wins. Since early 2024 the hype has been loud. By late 2025, has much changed? Are we deploying agents in cloud environments to do daily tasks? Are we using approved tools? I doubt it. If anything, the fear of deploying anything these tools build into production still holds most teams back.

I’m probably one of the top three Copilot users in my org. I use it to generate bash scripts, smash out one-off fixes, and save a stack of time. Pour one out for Stack Overflow. Outside work I’m seeing how far “vibe coding” goes before I bail and apply the lessons to the next project. My niche seems to be building MCP servers. Model Context Protocol servers are small services that standardise how tools and context plug into AI agents, so workflows aren’t just vibes, they’re repeatable.
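For a feel of what an MCP server standardises, here’s a deliberately simplified, stdlib-only sketch of the core idea: tools registered under names, invoked through a uniform JSON request shape. A real MCP server would use the official SDK and speak the full protocol; the `lookup_kb` tool and its articles are invented for illustration.

```python
import json

# A toy "tool" a helpdesk agent might call. In a real MCP server this would
# be registered via the official SDK; here the dispatch is hand-rolled to
# show the shape of the idea.
def lookup_kb(query: str) -> str:
    articles = {
        "vpn": "KB-101: Reset the VPN client.",
        "printer": "KB-202: Reinstall the driver.",
    }
    return articles.get(query.lower(), "No article found.")

TOOLS = {"lookup_kb": lookup_kb}

def handle(request_json: str) -> str:
    """Dispatch a JSON tool call and return a JSON response."""
    req = json.loads(request_json)
    tool = TOOLS[req["tool"]]
    result = tool(**req["args"])
    return json.dumps({"id": req["id"], "result": result})

if __name__ == "__main__":
    print(handle('{"id": 1, "tool": "lookup_kb", "args": {"query": "vpn"}}'))
```

Because every tool is exposed through the same request shape, any agent that speaks the protocol can use any tool, which is exactly what makes the workflows repeatable.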

Here’s the meat.

Many orgs are stuck in a pre-2024 mindset because they can’t get past the risk conversation.

Who sets guardrails? Are the right people picking the right tools? Why one AI service for everyone? How many steak dinners did vendors promise?

Key issues that stand out:

  • Data leakage without DLP. Internal docs and PII slip into prompts. Fix it with scoped connectors, redaction, and audit trails.
  • Governance fears about training on customer data. Make contractual commitments clear and disable training by default.
  • Procurement lock in. Pilot with multiple vendors, evaluate at the task level, avoid monoculture.

The simple answer: Legal sets policy boundaries. Security enforces controls. IT operationalises access. The business unit defines success criteria and owns the pilot. Run multi-vendor, measure outcomes, scale what works.

Wrapping it all up

I’ll keep this short because it’s already a ramble. Start using ChatGPT or Gemini, or try Claude. Sign up for a month of t3.chat to play with basically every model. Use these tools like you would at work, but for your personal life. Prepare now so that when your org finally starts taking this seriously, probably around 2030, people can get off their high horse and learn how to use it properly instead of staying ignorant.

Try this in the coming week. Make sure web search or deep research is enabled.

Collect 3 to 5 quotes for a big home purchase, like a solar install, kitchen remodel, or fence replacement. Ask your AI to normalise line items, hardware, labour, warranties, and permits; surface exclusions; summarise payment terms; and build a simple scorecard. Track the time saved and whether the comparison changes your choice after five days.

Starter prompt: “You are my household procurement analyst. Normalise these vendor quotes into one comparable table: line items, unit counts, unit price, total, warranty coverage (years, parts and labour), performance guarantees, payment terms (deposit percentage and milestones), schedule, exclusions, and change-order triggers. Then:

  1. Flag governance risks: ambiguous warranties, penalty clauses, arbitration, hidden fees.

  2. Create a weighted scorecard: price 30 percent, warranty 25 percent, installation quality 20 percent, schedule 15 percent, contract clarity 10 percent. Briefly justify each score.

  3. Draft five follow-up questions per vendor to reduce risk and confirm scope.

  4. If information is missing, mark it ‘unknown’ and suggest how to obtain it.”
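If you want to sanity-check the scorecard the model hands back, the weighting in the prompt is simple enough to compute yourself. A quick sketch, with made-up vendor scores:

```python
# Weights from the starter prompt. The per-criterion scores (0-10) below are
# made-up examples, not real vendor data.
WEIGHTS = {
    "price": 0.30,
    "warranty": 0.25,
    "install_quality": 0.20,
    "schedule": 0.15,
    "contract_clarity": 0.10,
}

def weighted_score(scores: dict) -> float:
    """Combine per-criterion scores (0-10) into one weighted total."""
    return round(sum(WEIGHTS[k] * scores[k] for k in WEIGHTS), 2)

vendors = {
    "Vendor A": {"price": 8, "warranty": 6, "install_quality": 7,
                 "schedule": 9, "contract_clarity": 5},
    "Vendor B": {"price": 6, "warranty": 9, "install_quality": 8,
                 "schedule": 7, "contract_clarity": 8},
}

if __name__ == "__main__":
    # Print vendors from best to worst weighted score.
    for name, scores in sorted(vendors.items(), key=lambda kv: -weighted_score(kv[1])):
        print(name, weighted_score(scores))
```

Ten lines of arithmetic, but it catches the one failure mode that matters here: a model that invents a total that doesn’t match its own per-criterion scores.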

The bubble isn’t a bubble. You’re being told it is because the decision makers are the ones most impacted by cloud robots doing what they currently do, with less inherent bias and vastly better context and compute (if we count the human brain and memory as such).

This article was updated on October 8, 2025
