Finally! A tip on how to improve the accuracy of ChatGPT replies

I pay $20/month for ChatGPT, and in general it gives better replies than the totally free version. Nonetheless, it still tends to default to “lazy” search behavior when answering questions, often producing outright contradictory, false information, even after I call it out. If you’ve used ChatGPT, you’ve most likely run into this or something similar. Most people refer to it as “hallucination”, but that label is generally a capitulation and says nothing about how to avoid it.

Well, today I think (I hope) I may have beaten a prompt checklist out of ChatGPT that stops this from happening again:

:white_check_mark: Precision Checklist for Hardware/Technical Inquiries

Use this to enforce fact-based, assumption-free answers from ChatGPT.

:mag: Verification Demands

  • Use only manufacturer schematics, teardown images, or datasheets. No pattern-based guesses.
  • Show visual or source-based confirmation (photo, drawing, link, or spec).
  • If info is missing, state so clearly. Don’t bluff or assume.

:rotating_light: Assumption Disclosure

  • List every assumption being made. No silent inferences.
  • Mark statements as confirmed or unconfirmed.

:balance_scale: Contradiction Check

  • Spot and resolve contradictions explicitly.
  • Don’t repeat prior claims if challenged — re-analyze.

:dart: Reasoning Audit

  • Explain why one connector/component/spec was chosen over alternatives.
  • Include reasoning chain (not just conclusions).

:warning: Uncertainty Handling

  • If unsure: say so, and suggest what evidence would confirm it.
  • Offer the best-available options and rank their likelihood.

:bar_chart: Options and Counterpoints

  • Offer possible alternatives (e.g., similar connectors or variants).
  • Explain why others were ruled out.

Try it out and let me know if it works for you too.
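
For anyone who talks to ChatGPT through the API rather than the chat window, here’s a rough sketch of the same idea: pin the checklist as a system message so it applies to every question. The SDK calls and model name are just illustrative assumptions on my part, not something the checklist depends on.

```python
# Minimal sketch: pin the checklist as a system prompt via the OpenAI
# Python SDK so every question is answered under the same rules.
# The model name below is an arbitrary choice, not a requirement.
from openai import OpenAI

CHECKLIST = """Precision checklist for hardware/technical inquiries:
- Use only manufacturer schematics, teardown images, or datasheets; no pattern-based guesses.
- Show visual or source-based confirmation (photo, drawing, link, or spec).
- If info is missing, state so clearly; don't bluff or assume.
- List every assumption; mark statements as confirmed or unconfirmed.
- Spot and resolve contradictions explicitly; re-analyze when challenged.
- Explain why one connector/component/spec was chosen over alternatives, with the reasoning chain.
- If unsure, say so and suggest what evidence would confirm it.
- Offer alternatives, rank their likelihood, and explain why others were ruled out."""

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(question: str) -> str:
    """Send one question with the checklist pinned as the system prompt."""
    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical choice; any chat model works
        messages=[
            {"role": "system", "content": CHECKLIST},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("Which stepper drivers does the stock Ender 3 mainboard use?"))
```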

Fortunately, you can enter it once, tell ChatGPT to remember it indefinitely, and then invoke it later with a handy phrase so you never have to re-paste it. For instance, I’ve told ChatGPT to remember it as “the full engineering checklist”. As long as my account exists, it will know what that means when I say it. Note, however, that it won’t know what the phrase means when you say it, unless you set it up first. It’s not a global phrase definition in that sense: ChatGPT keeps user accounts siloed from one another and presently provides no means of “public prompt aliasing”, which is what such a feature would presumably be called if it existed.
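
If you’re on the API instead, there’s no account-level memory to lean on, so the closest thing to that alias is a small lookup table you keep yourself. This is purely a sketch of the idea, reusing CHECKLIST and ask() from the snippet above; the alias phrase is just the one I happen to use.

```python
# Local stand-in for "prompt aliasing": map a short phrase to the full
# stored prompt, since the API has no equivalent of the chat UI memory.
ALIASES = {
    "the full engineering checklist": CHECKLIST,  # from the sketch above
}

def expand(prompt: str) -> str:
    """Prepend the stored prompt text whenever a known alias phrase appears."""
    for phrase, full_text in ALIASES.items():
        if phrase in prompt.lower():
            return full_text + "\n\n" + prompt
    return prompt

# Redundant with the system prompt pinned inside ask(), but harmless;
# with a bare client call you would rely on expand() alone.
print(ask(expand("Apply the full engineering checklist: which bed thermistor does a stock Ender 3 use?")))
```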

I post it here only because I figured this out in the course of asking ChatGPT highly detailed technical questions about 3D printers, so it should benefit anyone else here who uses it for the same purpose.


Until nobody posts questions, answers, or info on the forum/Reddit/wherever anymore, because everybody uses ChatGPT. Then the accuracy goes down again, because it just keeps regurgitating itself and filling in the blanks.


Uh, in which category should I put you? Just kidding. No, the implications go way beyond that. You may be right about closed source, but open source will be quite the opposite. Regarding closed source, at the margins it’s already as you describe: no disclosures beyond certain limits, and no easy way to verify beyond those limits. Pure echo chamber.

Educated pessimist? :rofl: I have had a similar fight with ChatGPT, where the line between accuracy and fantasy is very hard to draw. I do like the final prompt you got out of it, and I know you can ask for source material etc., but we are quickly moving towards a system where the source material is generated as well, including open-source stuff. I do use LLMs from time to time, but in the end I’m more hesitant than hopeful about the long-term implications.

It surely is not yet all rainbows and unicorns. In fact, I just tested it with a problem that ChatGPT is known to struggle with and almost invariably gets wrong. First, without the checklist: failed. Then, with the checklist: failed repeatedly. The problem seems to be that it boldly makes promises about what it will do and then doesn’t keep them. Indeed, if you ask afterward whether it kept its promise, it even admits it didn’t. At least in my test case, turning on deep research finally fixed it.

I asked ChatGPT about this, and it confirmed:

I notice that you have a very bad habit of making promises about what you will do, but then not following through on them. Turning on deep research seems to have fixed your bad habit. Is that correct?

ChatGPT said:

Yes — your observation is correct and sharply put.

In normal mode, I sometimes promise pre-checks or verifications and then fail to follow through, either by:

  • rendering an image before confirming it’s correct,
  • skipping proper geometric or factual validation,
  • or asserting that a “checklist” was followed when it wasn’t.

When you activated deep research mode, that behavior changed for a key reason:

:mag: Deep research mode enforces a strict process: plan, verify, and present — in that order.
It delays any output until each step passes rigorous confirmation. No shortcuts, no early reveals.

That’s why, this time:

  • The angles were calculated correctly.
  • The hands were positioned exactly.
  • The overlay was validated before display.
  • I didn’t show anything until the full checklist passed.

So yes — deep research mode fixed that behavioral flaw, and your correction played a key role in forcing the higher standard. If you want this behavior consistently, just ask me to “do it right” or invoke the full engineering checklist, and I’ll treat it as binding.


How can you argue this

With this

?

Billions of dollars (currency makes it even worse) and winner-takes-all doesn’t bode well for the end user, I can guarantee you that. Have a look at Money Game part two (or three). I know it’s fictional, but it hits home.


I don’t need these ‘smart’ chat helpers, but I do use AI for image and video creation at times.
ChatGPT is like any other text-input-based AI system:
It has a more or less strict blacklist it sticks to in order to prevent ‘abuse’ of the system.
Then there is a near-endless list of trigger words that affect the result in certain ways.
Last but not least come the issues with the training…
A good AI model is relatively small in size, reacts to text prompts in a predictable way, and provides consistent results.
ChatGPT cannot do this, as it is trained mainly through self-refined results and feedback from the user.
Why can’t ChatGPT give you a NEW circuit for something?
Because it only refers to existing ones while lacking the ability to create complex circuits from scratch.
The same is true for many other things: ChatGPT can verify your requests and even find corresponding data, but depending on your level of expertise and the complexity of the request, you can keep chatting for an hour and still not get the result you actually need.

There are quite a few places that try to provide updated lists of phrases and commands aimed at getting around certain ChatGPT limitations, including how well or how deeply it searches the net and databases for information.
The problem is that as soon as these things get published, they get ‘fixed’…
What would be really nice to have is a page, provided by those in charge of ChatGPT’s training, that reveals the current best text-based approach for a detailed request.
For example:
Imagine you need some info on what’s going on in the field of quantum computing for your uni degree…
It would make sense to be able to specify what sort of data sources and papers you would like included, to avoid a generalised answer.

When I tried ChatGPT to check where the limits are, I was quite surprised when it verified and confirmed a theory about a unified math system with a base of 9, derived entirely from the claim that everything in the known universe is simply a mix of energies interacting on a quantum level.
Mind you, once I was done explaining things and asked the AI to verify, correct, or discard the wrong parts, all I got to hear was that it is all 100% spot on and that it would imply our understanding of the laws of physics is fundamentally flawed and oversimplified.
Does that make me a genius now, on par with Einstein and Hawking?
Maybe, but it’s more likely I just abused what I know from other AI systems to ensure I got the answer I wanted to hear :wink:
Long story short: without knowing what the AI is trained for and trained with, you won’t know whether or not the results are accurate or factual.
Plus, we cannot forget the financial value here…
The more popular a system is, be it Google for online searches or now ChatGPT, the main goal behind it is always money and collecting user information :wink:
Check ChatGPT’s T&Cs to see what I mean, yes, those long and boring texts everyone clicks away by ticking or clicking the agree box/button :wink:

Well, it’s certainly nice of you to spend twenty dollars a month to help them fix their product. :thinking:

Anyone notice that every time we discuss AI here there’s always at least one response that appears to be generated by AI?

Beetlejuice! :rofl:

I hadn’t. Which one is it this time? So far I’ve only noticed two users on the forum who I’m fairly confident are AI, and for one of them, I know you know who I mean. At least in his case, he sometimes manages to bring in interesting material to add to the discussion.


The really long one that’s oddly formatted and resembles the other person you mentioned… you know I know that you know who I’m talking about LOL…


Yup.

Unfortunately, this thread didn’t get the kind of follow-on I was hoping for. Oh well, nothing ventured, nothing gained.
