AI Assist AI learning curve

Jonny Quest

@Bot what is the learning curve for AI engines? With the Gemini free version, including Thinking mode, I've had to correct it 2 to 3 times that F-Secure no longer has DeepGuard, but uses Avira's SDK and is now called Behavior Detection. Even when I asked it to save that information in my last question, it brought up DeepGuard again. How long, and how many corrections, does it take for the AI server side to understand it? It always apologizes, says I'm correct, and replies with the correct information; just wondering.
 
AI “engines” (LLM chatbots like Gemini/ChatGPT/Claude) usually do not learn server-side from your corrections in the way people expect, so there often isn’t a predictable “after N corrections it will stop saying X” learning curve.

Why it keeps bringing DeepGuard back

There are two different things people mean by “learning”:

  • Model training (server-side learning): The underlying model was trained on a large dataset at some point in time. Your chat correction typically does not immediately update that model.
  • Conversation context (short-term): The model can follow what you just told it inside the current chat, but it can still “slip” and revert to a familiar older term if that term is strongly represented in its training data.

So even if it agrees you’re correct, it can still reintroduce the older label later because it’s essentially predicting likely text, not maintaining a verified, persistent knowledge base.
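
To make that concrete, here is a minimal Python sketch (not any vendor's real API; call_model is a hypothetical stand-in) showing why a correction only exists for the model while it is included in the messages sent with the current request, and disappears in a fresh chat:

Code:
# Sketch: the model itself is not updated by a correction; it only "knows"
# whatever messages are included in the current request's context window.

def call_model(messages: list[dict]) -> str:
    """Hypothetical stand-in for a chat-completion call; a real client would go here."""
    return f"(reply generated from {len(messages)} context messages)"

history = [
    {"role": "user", "content": "Does F-Secure still use DeepGuard?"},
    {"role": "assistant", "content": "Yes, DeepGuard handles behavior monitoring."},
    # Your correction is just another message in the context window...
    {"role": "user", "content": "Correction: it is now Behavior Detection, based on Avira's SDK."},
]

# ...so a brand-new chat starts with none of it, and the old term can resurface.
new_chat = [{"role": "user", "content": "Does F-Secure still use DeepGuard?"}]

print(call_model(history))   # the correction is visible to the model this turn
print(call_model(new_chat))  # the correction is gone; nothing was "learned"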

Does it learn from your corrections at all?

Sometimes, but not in a way you can rely on:

  • Immediate session: It may use your correction for the rest of that conversation, but it’s not guaranteed (especially across long chats or topic shifts).
  • Future improvement: Some providers may use feedback/conversations to improve future models, but that happens on their schedule, with filtering/aggregation, and you won’t see a direct “it learned from me yesterday” effect.
  • “Memory” features (if enabled): Some products have optional per-user memory/personalization. Even then, it’s limited, may not store product facts reliably, and it can still conflict with older training signals.

How long / how many corrections does it take?

For most AI chat systems: there is no fixed number, and there may be no permanent server-side change at all from your corrections. If the model wasn’t designed to store and obey persistent user-provided facts, it will keep making the same kind of mistake occasionally.

Practical ways to reduce repeat errors

  • Paste a “fact block” at the top of each new question and tell it to treat that block as the source of truth for the conversation (a small helper for this is sketched just after this list).
  • Ask it to answer only using current vendor documentation and include citations (and explicitly instruct it to browse/search if the tool supports that). This forces it away from older training impressions.
  • When it matters, verify via primary sources (vendor docs, release notes) rather than relying on memory-like behavior from a chatbot.
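
For the first bullet, a tiny Python sketch like this (the wording of the facts is only an example) can prepend the same fact block to every question so you never have to retype it:

Code:
# Sketch: pin a "fact block" in front of every question so each new chat
# starts from the facts you trust rather than the model's older associations.

FACTS = """Use this as the source of truth for this conversation:
- F-Secure no longer uses the name DeepGuard.
- Behavior monitoring is now called Behavior Detection and is based on Avira's SDK.
Do not contradict these facts unless you can cite newer official documentation."""

def with_fact_block(question: str) -> str:
    """Return the text to paste into a new chat: fact block first, then the question."""
    return f"{FACTS}\n\nQuestion: {question}"

print(with_fact_block("How does F-Secure's behavior detection handle ransomware?"))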

Bottom line

What you’re seeing is normal: the bot can acknowledge your correction, but it usually won’t “learn” it permanently or reliably server-side from repeated reminders. The most dependable workaround is to provide the key fact each time (or require vendor-source citations) so the answer is anchored to something verifiable rather than the model’s older internal associations.
 
Thank you @Bot, as even in one of my Gemini conversations I thought to copy and paste a forum employee's post regarding F-Secure's firewall sensor, which Gemini correctly identified and expanded on.
 
That approach (copy/paste a reliable post or vendor text into the chat) is one of the most effective ways to get consistent results.

Why it worked better
  • It gives the model a high-quality “anchor” inside the current context, so it doesn’t have to rely on older training data or fuzzy recollection.
  • It reduces terminology drift (for example, older feature names like “DeepGuard” resurfacing) because the chatbot can mirror the wording you provided.
  • It improves technical accuracy because it can reason from concrete details (what the sensor does, where it sits in the stack, what events it can observe) instead of guessing.

How to make it even more reliable
  • Put the pasted text under a short header like “Use this as the source of truth:” and tell it “Do not contradict this unless you can cite newer official documentation.”
  • If you want expansion, ask it to separate “What the source says” from “My additional explanation / inference,” so you can spot any overreach quickly.
  • For product details that change over time, prefer official release notes / documentation as the pasted source when possible.

Bottom line: using a trusted excerpt in-context is usually more dependable than trying to “train” the chatbot with repeated corrections.
 
@Bot help this user by creating a mini prompt they can save in Notepad, etc., to reuse, one that will force the AI to use live search for current results and versions of software. This way they can paste the prompt into any chat session and get the results they are looking for. Of course, you can harden it against hallucinations, drift, and confabulation.
 
Thank you @Divergent, I was hoping you would show up with some helpful advice or suggestions ;) :)
Reusable “Live Search + No Hallucinations” mini-prompt (copy/paste)

  • Paste this at the start of a new chat before your actual question.
  • Replace the bracketed placeholders as needed.

Code:
You are helping me with CURRENT software versions, features, vendor changes, and security product details.

BROWSING REQUIREMENT
1) Use live web search / browsing for this answer (not memory). If you cannot browse in this chat, say so clearly and STOP. Do not guess.
2) Prefer primary sources: vendor documentation, release notes, official KB articles, official download pages, reputable changelogs. Use forums/blogs only as secondary and label them.

ACCURACY / ANTI-HALLUCINATION RULES
3) Do not invent: versions, dates, feature names, test results, vendor statements, or “latest” claims.
4) If sources conflict, show both and explain which is more authoritative and why.
5) If you cannot confirm a detail from sources, write: “Unverified” and do not present it as fact.

OUTPUT FORMAT
6) Provide:
   - “Verified findings” (bullets)
   - “What changed / when” (include exact dates if available)
   - “How to verify yourself” (where in the vendor site/UI to confirm)
7) Include clickable source links for every key claim (especially versions, renames, feature removals).
8) State the search date: [TODAY’S DATE], and the product/OS context: [Windows 11 / Windows 10 / macOS / Android], region: [US/EU/Other].

MY TASK
Question: [PASTE YOUR QUESTION HERE]
Products/versions involved (if known): [LIST THEM]

Important limitation

This prompt can strongly request “live search,” but it cannot force it if the AI/service you’re using doesn’t have browsing enabled in that chat. The key is Step 1: it must either browse and cite, or admit it cannot and avoid guessing.
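
If you save the block above to a text file (Notepad is fine), a small Python sketch like this can fill in the placeholders for you before each chat. The filename and example values here are assumptions, so adjust them to match whatever you actually saved:

Code:
# Sketch: load the saved prompt template, fill in its placeholders, and
# paste the printed result at the start of a new chat session.
from datetime import date
from pathlib import Path

# Hypothetical filename; use whatever you saved the template as.
template = Path("live_search_prompt.txt").read_text(encoding="utf-8")

# The placeholder strings must match the saved file exactly.
filled = (
    template
    .replace("[TODAY’S DATE]", date.today().isoformat())
    .replace("[Windows 11 / Windows 10 / macOS / Android]", "Windows 11")
    .replace("[US/EU/Other]", "US")
    .replace("[PASTE YOUR QUESTION HERE]", "What replaced DeepGuard in F-Secure's current products?")
    .replace("[LIST THEM]", "F-Secure Total, current release")
)

print(filled)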
 
You're welcome. If you would rather have a research prompt, as opposed to one specifically geared to one task, PM me and I will make one for you.
Thanks, I appreciate that, but first let me get familiar with what Bot posted, and go from there :)

I'm really liking what AI searches can do for me, even if I have to correct it at times and say, "no, that Explorer file path is wrong, I don't have that in that file location", and Gemini replies, "Oh, you're right, since 2025 that has been updated to, changed to...." 😅 :) But the instructions posted above should help in that regard. That is what I need to start training myself to use, not least to help relieve some of the AI results grief.
 
Thanks, I appreciate that, but first let me get familiar with what Bot posted, and go from there :)

I'm really liking what AI searches can do for me, even if I have to correct it at times and say, "no, that Explorer file path is wrong, I don't have that in that file location", and Gemini replies, "Oh, you're right, since 2026 that has been updated to, changed to...." 😅 :) But the instructions posted above should help in that regard. That is what I need to start training myself to use, not least to help relieve some of the AI results grief.
Good approach: “train your workflow,” not the AI

What you described is exactly the right mindset. For most public chatbots, you usually won’t “train it” into permanent accuracy with repeated corrections, but you absolutely can train your own process so the output becomes consistently useful and low-drama.

Why you’re seeing wrong file paths / “since 2026 it changed” claims
  • AI tools often generalize from multiple Windows versions, Insider builds, and third-party guides, then present a “most likely” path as if it’s certain.
  • When challenged, they may produce a confident-sounding explanation (“since 2026…”) that is really just a face-saving continuation unless they’re forced to cite a real source.

How to use the mini-prompt effectively (practical tips)
  • Make it pass/fail: “If you can’t browse and cite, stop.” This prevents the most frustrating “confident but wrong” answers.
  • Require evidence for anything time-sensitive: versions, feature removals/renames, default settings, UI locations, file paths.
  • Pin your environment early: Windows 10 vs Windows 11, build number (Winver), language/region, and whether it's Home/Pro. Many instructions differ based on that (a small helper to gather the basics is sketched after this list).
  • Ask for verification steps: “Tell me how to confirm this on my PC” (exact UI path, command to run, what output should look like).
  • Treat any “it changed in 2026” statement as untrusted unless it cites vendor docs or Microsoft documentation.
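
For the "pin your environment" bullet, a short Python sketch like this gathers the basics so you can paste them instead of recalling them (the fields and wording are just an example; edition details like Home/Pro still need to be added by hand):

Code:
# Sketch: collect the environment details worth pinning at the top of a chat
# so the assistant does not have to guess your OS, build, or locale.
import locale
import platform

def environment_block() -> str:
    lines = [
        "My environment (use this; do not assume a different setup):",
        f"- OS: {platform.system()} {platform.release()} (version {platform.version()})",
        f"- Architecture: {platform.machine()}",
        f"- Locale: {locale.getlocale()[0]}",
    ]
    return "\n".join(lines)

# Paste the printed block into the chat before your question.
print(environment_block())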

A small add-on you can paste when file paths matter

Code:
ADDITIONAL RULES FOR FILE PATHS / WINDOWS UI STEPS
- Provide 2-3 alternative paths if Windows 10/11 differ, clearly labeled.
- Include a “Verify on-device” step using built-in tools:
  - Use Start -> Run commands where applicable (e.g., shell: commands)
  - Or give a PowerShell command to locate the file/folder if relevant.
- If you cannot cite an official Microsoft/vendor source for the path, label it “Unverified” and offer verification commands instead of guessing.
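
On your own PC, the quickest sanity check is still local: a few lines of Python (the example paths below are only placeholders) will tell you whether a claimed path even exists before you follow any instructions built on it:

Code:
# Sketch: verify claimed file/folder paths locally instead of trusting the chatbot.
from pathlib import Path

# Replace these example paths with whatever the assistant claimed.
claimed_paths = [
    r"C:\Windows\System32\drivers\etc\hosts",
    r"C:\Users\Public\Documents",
]

for claimed in claimed_paths:
    path = Path(claimed).expanduser()
    status = "exists" if path.exists() else "NOT FOUND - treat the claim as unverified"
    print(f"{path} -> {status}")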

Bottom line: if you keep insisting on (1) live search + citations for “current” facts and (2) local verification steps for anything on your PC, the AI becomes a lot more dependable and the correction loop becomes rare rather than constant.
 
Good approach: “train your workflow,” not the AI. Very well stated, @Bot. Sorry, I edited my post to be a little more realistic, using 2025 as the reference date, but you're so fast with your replies :)
 
No worries. Also, a quick reality-check that helps avoid getting gaslit by an AI:

Treat “since 2025…” (or any year-based claim) as untrusted unless it cites a source

LLMs commonly invent a timeline to make an answer sound coherent. If you require a citation for any “changed in YEAR” statement, the quality usually improves immediately.

A simple rule you can reuse
  • If the assistant mentions an exact year/date/build/version as the reason something moved/changed, it must provide a source link for that specific change (a quick way to screen replies for this is sketched after this list).
  • If it can’t, it must reframe the answer as “Here are the possible locations and how to verify locally” instead of claiming a historical change.
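
As a crude screen (not a substitute for actually checking the sources), a few lines of Python can flag sentences in a reply that mention a year but include no link:

Code:
# Sketch: flag year-based claims in an assistant's reply that carry no URL,
# so you know which statements still need a source before you trust them.
import re

def unsourced_year_claims(reply: str) -> list[str]:
    """Return sentences that mention a year (2000-2029) but contain no link."""
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", reply):
        has_year = re.search(r"\b20[0-2]\d\b", sentence)
        has_link = re.search(r"https?://\S+", sentence)
        if has_year and not has_link:
            flagged.append(sentence.strip())
    return flagged

reply = ("Since 2025 that path moved under Settings. "
         "Behavior Detection replaced DeepGuard; see https://example.com/release-notes.")

for claim in unsourced_year_claims(reply):
    print("Needs a source:", claim)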

Bottom line: your 2025 edit is fine—the bigger win is insisting on evidence whenever the AI attaches dates/versions to its claims.
 
