When AI Hallucinates, You Take the Hit

Your weekly playbook to climb faster, lead sooner and earn more.

A playbook for analysts who want to use GPT-5 without risking data leaks, rework, or reputation.

Oh No!!! Tom!!

A post about how to use ChatGPT??

Yes. And you better read it.

The fastest way to lose credibility (and your job) in the GPT-5 era?

Acting like the model’s in charge. Divulging company secrets. Not being smart.

It’s 4:55 pm. The CFO wants “the AI answer.” Your model just contradicted itself for the third time this week. You’ve got three “urgent” follow-ups in your inbox and one exec already telling the room “we should just trust the new version.”

GPT-5’s hype-train launch is sparking another round of “we need to be using this” in boardrooms. The same happened with GPT-4. And in both cases, people who skipped structure lost months to rework and damage control.

I still have to teach my mentees and the people I coach the right way to use AI.

So here we go once more, for everyone.

Give me 6 minutes - I’m going to show you how to keep control, protect your credibility, and make GPT-5 an asset rather than a liability. Read it slowly. Think about it.

  • 3 rules to ALWAYS follow

  • a master prompt to create a virtual council to level up every part of your work

  • a bonus tip that is so simple even smart people forget to do it

Let’s go.

NIST AI Risk Management Framework — The “Govern, Map, Measure, Manage” checklist every AI process should hit.
ISO/IEC 42001 — The emerging AI management standard that will shape enterprise policies. Want to level up your knowledge on how companies should be approaching AI?
OWASP LLM Top 10 — The leading risks in large language model use, and the mitigations that actually work. Lots to study and learn here.

You’ve seen this movie before.

New tech drops, the hype train goes full speed, and within a week, your inbox is flooded with screenshots of “what it just did for me!” The problem is, two weeks later no one can explain why the “AI answer” changed three times, or how that marketing deck ended up with numbers that don’t exist.

Now it’s GPT-5’s turn. And if you don’t set the rules now, it’s going to set them for you.

(Oh, and I’m using GPT-5 here as a placeholder for any of your favourite internet-based LLM tools.)

The Intern Mindset

Think of GPT-5 as your intern.

Smart. Fast. Knows a lot. (It’s a trope, but a trope for a good reason.)

But dumb at the same time - it has zero context for your business, your politics, or your compliance obligations.

You wouldn’t hand a real intern the keys to your data warehouse and tell them to “go nuts.” Same here.

Your job is to be the analyst-in-command. You decide what questions get asked, what information it sees, and what answers make it through to leadership.

That means:
• You strip or paraphrase anything sensitive before it goes in.
• You treat every output as a draft, not a decision.
• You make GPT show its work so you can check the reasoning.

Do not trust ANYTHING the model tells you without careful vetting, inspection, and thought. Don’t be lazy.

(I JUST read another newspaper article about a lawyer being reprimanded for using AI in a legal case, for which it of course promptly hallucinated case law and rulings. So lazy, and so avoidable!)

So here are my three rules

(That you must always follow)

1. The Data Leak You Don’t See Coming

Let’s get blunt: never put company data into a public AI tool.

ChatGPT’s free and Plus versions can retain your conversations. Even if they don’t, you’re still transmitting data outside your network. That means any client name, financial figure, strategy note, or unreleased product detail is at risk.

Instead, if you need to work with specifics:

  • Replace names with placeholders (“[ClientName]”).

  • Turn exact figures into ranges (“~$5–6M”).

  • Summarise the situation in general terms the AI can work with.
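The placeholder-and-range steps above can even be scripted so you sanitise consistently every time. Here’s a minimal sketch; the client names, the dollar-figure regex, and the rounding rule are all assumptions you’d adapt to your own data and policy:

```python
import re

# Hypothetical example: scrub a briefing note before pasting it into a
# public LLM. SENSITIVE_NAMES and the rounding rules are assumptions --
# swap in your own names and your own policy.
SENSITIVE_NAMES = ["Acme Corp", "Jane Smith"]

def sanitise(text: str) -> str:
    # Step 1: replace known names with placeholders.
    for i, name in enumerate(SENSITIVE_NAMES, start=1):
        text = text.replace(name, f"[ClientName{i}]")

    # Step 2: turn exact dollar figures (7+ digits) into rough ranges,
    # e.g. $5,420,000 -> ~$5-6M.
    def to_range(m: re.Match) -> str:
        value = int(m.group(1).replace(",", ""))
        lo = value // 1_000_000
        return f"~${lo}-{lo + 1}M"

    return re.sub(r"\$([\d,]{7,})", to_range, text)

note = "Acme Corp renewal is worth $5,420,000; Jane Smith signs Friday."
print(sanitise(note))
# -> [ClientName1] renewal is worth ~$5-6M; [ClientName2] signs Friday.
```

Step 3 (summarising the situation in general terms) stays a human job, of course, and a script like this is a helper, not a substitute for reading your company’s AI policy.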

Done right, you still get the value without handing over your crown jewels. Or getting fired. When was the last time you read your company policy on responsible use of AI?

When in doubt, don’t risk it.

And read your internal policy documents so you KNOW the rules. That’s your pro move right there. Read the documentation.

2. Can’t Prompt? No Problem.

One of the most underused GPT/LLM tricks is letting it write the prompt for you.

If you’re staring at a blank chat box thinking, “I don’t even know how to ask this,” just say:

“I want to [describe your outcome], but I’m not sure how to ask you. Draft me 3–5 prompt options, and tell me what extra context you need from me to get the best result.”

The model will ask clarifying questions that force you to think about audience, format, tone, and constraints. You’ll end up with a sharper prompt and a better output.

Try it. Do a simple A/B test: ask a model something directly, then in another window with no history, ask the model to generate a prompt for the same question and use that instead. Compare the two outputs.

3. Ask why and how the answer is wrong

Done prompting? Happy with the answer? NO!

When you have your fine-tuned answer after a long series of refinements (you didn’t just one-shot it, did you?? Did you???) there is one more step.

Ask the model why and how the answer is incorrect, misleading, bad, dangerous, or ethically questionable.

Yes - get the model to tell you all the ways that what it has produced might fail you. This is a power move, and it gets you thinking like an actual leader. Leaders always hold a meta view of any situation; no matter what presents itself, they think about the negatives, the gaps, what could go wrong.

This technique gets you in the habit.

It’s also a foundational part of critical thinking, with roots going back thousands of years. Not a bad habit to get into practising, either!

The Advisory Council Technique

When the stakes are high — a board presentation, a big product decision, a proposal with millions attached — don’t settle for a single AI answer.

Use GPT to simulate a panel of experts who challenge your thinking.

Here’s a safe version:

“Act as an advisory council with five roles: experienced analytics leader, legal counsel, ethics officer, product manager, and cybersecurity analyst. Evaluate my idea to <your problem> from each perspective. For each role, list one key concern, one suggestion, and one question I should answer. Do not invent data; if you need more context, ask me first.”

This forces the AI to surface blind spots and trade-offs you might miss — without exposing any sensitive details.

Pro move? NAME the experts - make them actual, real-world people. That way you can confirm whether the answers given align with their teachings, those committed to the internet anyway. I have used people like Jocko Willink, Jay Abraham, Peter Singer, Marcus Aurelius, and Ralph Kimball on my councils. I know their material very well.

I have been using this technique for years now, since the early days of LLMs. It worked great back then, and it’s an excellent tool in your toolbox now.

We have an exciting announcement!

This coming week sees the first Analytics Ladder guest edition. It’s a welcome addition where a guest editor is invited to share their story, wisdom, and perspective. All in the style of the Ladder, and designed to help elevate your career.

Did you action anything from last Wednesday’s Ladder?

“Competing on Analytics” is a cracker of a book - a classic guide for building competitive advantage through analytics. We helped break it down as an essential strategic framework for data professionals.

Bonus tip

RUN THE QUERY TWICE.

Yeah, pretty simple, right? Why settle for one answer when you can run it again and ask for a different perspective?

Try:

  • Reimagine that last response and give it to me again from a different perspective

  • Pretend I’m a 10-year-old and give me that answer again - simple & clear

  • How would a different world-leading expert have answered that last query?

  • How would the world’s smartest person have answered that? Show your reasoning.

Now you are using it as a partner. You are the master, you are in control.

The Real Opportunity

Responsible GPT use isn’t about slowing you down — it’s about protecting your credibility. Analysts who can get great results without breaking rules are the ones leaders will trust with bigger decisions.

Every time you use GPT-5, you’re sending a signal. If you treat it like a toy, you’ll be treated like someone playing with toys. If you treat it like a power tool that requires skill and discipline, you’ll be seen as the kind of pro who’s ready for more responsibility.

Your Challenge This Week

Pick one workflow you run regularly — writing meeting notes, structuring reports, summarising industry news — and re-build it with GPT-5 using these three rules:

  1. No sensitive data leaves your control.

  2. If your prompt is weak, let GPT help you improve it.

  3. Pressure-test important outputs using the advisory council technique.

Document the process once, so you can repeat it safely. That’s how you go from dabbling with AI to leading with it.

Because GPT-5 won’t save you. But your system just might.

Responsible GPT use is your edge.

Not because it makes you slower, but because it makes you the person leadership can trust.

Try the challenge.

How is GPT-5 being used in your org right now?


Best,

Tom.

PS. Forward this to one analytics teammate who worries AI is eating their lunch — and help them climb the Ladder.

Not a subscriber yet? Join us to get your weekly edition.

https://www.echeloniq.ai

Visit our website to see who we are, what we do.

https://echeloniq.ai/echelonedge

Our blog covering the big issues in deploying analytics at scale in enterprises.