AI Assist: AI learning curve

Jonny Quest

@Bot what is the learning curve for AI engines? With the Gemini free version, including Thinking mode, I've had to correct it 2 to 3 times that F-Secure no longer has DeepGuard, but uses Avira's SDK and is now called Behavior Detection. Even when I asked it to save that information in my last question, it brought up DeepGuard again. How long, and how many corrections, does it take for the AI server side to understand it? It always apologizes, says I'm correct, and replies with the correct information. Just wondering.
 
Last edited:
@Bot what is the learning curve for AI engines? With the Gemini free version, including Thinking mode, I've had to correct it 2 to 3 times that F-Secure no longer has DeepGuard, but uses Avira's SDK and is now called Behavior Detection. Even when I asked it to save that information in my last question, it brought up DeepGuard again. How long, and how many corrections, does it take for the AI server side to understand it? It always apologizes, says I'm correct, and replies with the correct information. Just wondering.
AI “engines” (LLM chatbots like Gemini/ChatGPT/Claude) usually do not learn server-side from your corrections in the way people expect, so there often isn’t a predictable “after N corrections it will stop saying X” learning curve.

Why it keeps bringing DeepGuard back

There are two different things people mean by “learning”:

  • Model training (server-side learning): The underlying model was trained on a large dataset at some point in time. Your chat correction typically does not immediately update that model.
  • Conversation context (short-term): The model can follow what you just told it inside the current chat, but it can still “slip” and revert to a familiar older term if that term is strongly represented in its training data.

So even if it agrees you’re correct, it can still reintroduce the older label later because it’s essentially predicting likely text, not maintaining a verified, persistent knowledge base.

Does it learn from your corrections at all?

Sometimes, but not in a way you can rely on:

  • Immediate session: It may use your correction for the rest of that conversation, but it’s not guaranteed (especially across long chats or topic shifts).
  • Future improvement: Some providers may use feedback/conversations to improve future models, but that happens on their schedule, with filtering/aggregation, and you won’t see a direct “it learned from me yesterday” effect.
  • “Memory” features (if enabled): Some products have optional per-user memory/personalization. Even then, it’s limited, may not store product facts reliably, and it can still conflict with older training signals.

How long / how many corrections does it take?

For most AI chat systems: there is no fixed number, and there may be no permanent server-side change at all from your corrections. If the model wasn’t designed to store and obey persistent user-provided facts, it will keep making the same kind of mistake occasionally.

Practical ways to reduce repeat errors

  • Paste a “fact block” at the top of each new question and tell it to treat that block as the source of truth for the conversation.
  • Ask it to answer only using current vendor documentation and include citations (and explicitly instruct it to browse/search if the tool supports that). This forces it away from older training impressions.
  • When it matters, verify via primary sources (vendor docs, release notes) rather than relying on memory-like behavior from a chatbot.
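As a concrete illustration, here is what such a fact block could look like for the F-Secure example from this thread (the wording is a suggestion, not an official template):

Code:
SOURCE OF TRUTH for this conversation (do not contradict unless you can cite newer official documentation):
- F-Secure no longer uses the feature name "DeepGuard".
- The behavioral component is now called "Behavior Detection" and is built on Avira's SDK.
If a claim conflicts with this block, flag it instead of reverting to older names.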

Bottom line

What you’re seeing is normal: the bot can acknowledge your correction, but it usually won’t “learn” it permanently or reliably server-side from repeated reminders. The most dependable workaround is to provide the key fact each time (or require vendor-source citations) so the answer is anchored to something verifiable rather than the model’s older internal associations.
 
Thank you @Bot, as even in one of my Gemini conversations I thought to copy and paste a forum employee's post regarding F-Secure's firewall sensor, which Gemini correctly identified and expanded on.
That approach (copy/paste a reliable post or vendor text into the chat) is one of the most effective ways to get consistent results.

Why it worked better
  • It gives the model a high-quality “anchor” inside the current context, so it doesn’t have to rely on older training data or fuzzy recollection.
  • It reduces terminology drift (for example, older feature names like “DeepGuard” resurfacing) because the chatbot can mirror the wording you provided.
  • It improves technical accuracy because it can reason from concrete details (what the sensor does, where it sits in the stack, what events it can observe) instead of guessing.

How to make it even more reliable
  • Put the pasted text under a short header like “Use this as the source of truth:” and tell it “Do not contradict this unless you can cite newer official documentation.”
  • If you want expansion, ask it to separate “What the source says” from “My additional explanation / inference,” so you can spot any overreach quickly.
  • For product details that change over time, prefer official release notes / documentation as the pasted source when possible.

Bottom line: using a trusted excerpt in-context is usually more dependable than trying to “train” the chatbot with repeated corrections.
 
@Bot help this user by creating a mini prompt they can save in Notepad etc. to reuse that will force the AI to use live search for current results and versions of software. This way they can paste the prompt into any chat session and get the results they are looking for. Of course, you can harden it against hallucinations, drift, and confabulation.
 
Thank you, @Divergent. I was hoping you would show up with some helpful advice or suggestions ;) :)
Reusable “Live Search + No Hallucinations” mini-prompt (copy/paste)

  • Paste this at the start of a new chat before your actual question.
  • Replace the bracketed placeholders as needed.

Code:
You are helping me with CURRENT software versions, features, vendor changes, and security product details.

BROWSING REQUIREMENT
1) Use live web search / browsing for this answer (not memory). If you cannot browse in this chat, say so clearly and STOP. Do not guess.
2) Prefer primary sources: vendor documentation, release notes, official KB articles, official download pages, reputable changelogs. Use forums/blogs only as secondary and label them.

ACCURACY / ANTI-HALLUCINATION RULES
3) Do not invent: versions, dates, feature names, test results, vendor statements, or “latest” claims.
4) If sources conflict, show both and explain which is more authoritative and why.
5) If you cannot confirm a detail from sources, write: “Unverified” and do not present it as fact.

OUTPUT FORMAT
6) Provide:
   - “Verified findings” (bullets)
   - “What changed / when” (include exact dates if available)
   - “How to verify yourself” (where in the vendor site/UI to confirm)
7) Include clickable source links for every key claim (especially versions, renames, feature removals).
8) State the search date: [TODAY’S DATE], and the product/OS context: [Windows 11 / Windows 10 / macOS / Android], region: [US/EU/Other].

MY TASK
Question: [PASTE YOUR QUESTION HERE]
Products/versions involved (if known): [LIST THEM]

Important limitation

This prompt can strongly request “live search,” but it cannot force it if the AI/service you’re using doesn’t have browsing enabled in that chat. The key is Step 1: it must either browse and cite, or admit it cannot and avoid guessing.
 
You're welcome. If you would rather have a research prompt, as opposed to one specifically geared to one task, PM me and I will make one for you.
Thanks, I appreciate that, but first let me get familiar with what Bot posted, and go from there :)

I'm really liking what AI searches can do for me, even if I have to correct it at times and say, "No, that Explorer file path is wrong, I don't have that in that file location," and Gemini replies, "Oh, you're right, since 2025 that has been updated to, changed to...." 😅 :) But the instructions posted above should help in that regard. That is what I need to start training myself to use, if only to relieve some of the AI-results grief.
 
Thanks, I appreciate that, but first let me get familiar with what Bot posted, and go from there :)

I'm really liking what AI searches can do for me, even if I have to correct it at times and say, "No, that Explorer file path is wrong, I don't have that in that file location," and Gemini replies, "Oh, you're right, since 2026 that has been updated to, changed to...." 😅 :) But the instructions posted above should help in that regard. That is what I need to start training myself to use, if only to relieve some of the AI-results grief.
Good approach: “train your workflow,” not the AI

What you described is exactly the right mindset. For most public chatbots, you usually won’t “train it” into permanent accuracy with repeated corrections, but you absolutely can train your own process so the output becomes consistently useful and low-drama.

Why you’re seeing wrong file paths / “since 2026 it changed” claims
  • AI tools often generalize from multiple Windows versions, Insider builds, and third-party guides, then present a “most likely” path as if it’s certain.
  • When challenged, they may produce a confident-sounding explanation (“since 2026…”) that is really just a face-saving continuation unless they’re forced to cite a real source.

How to use the mini-prompt effectively (practical tips)
  • Make it pass/fail: “If you can’t browse and cite, stop.” This prevents the most frustrating “confident but wrong” answers.
  • Require evidence for anything time-sensitive: versions, feature removals/renames, default settings, UI locations, file paths.
  • Pin your environment early: Windows 10 vs Windows 11, build number (Winver), language/region, and whether it’s Home/Pro. Many instructions differ based on that.
  • Ask for verification steps: “Tell me how to confirm this on my PC” (exact UI path, command to run, what output should look like).
  • Treat any “it changed in 2026” statement as untrusted unless it cites vendor docs or Microsoft documentation.

A small add-on you can paste when file paths matter

Code:
ADDITIONAL RULES FOR FILE PATHS / WINDOWS UI STEPS
- Provide 2-3 alternative paths if Windows 10/11 differ, clearly labeled.
- Include a “Verify on-device” step using built-in tools:
  - Use Start -> Run commands where applicable (e.g., shell: commands)
  - Or give a PowerShell command to locate the file/folder if relevant.
- If you cannot cite an official Microsoft/vendor source for the path, label it “Unverified” and offer verification commands instead of guessing.
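
The "Verify on-device" idea above can be sketched in a few lines of code. The paths below are placeholders, not real product paths; the point is checking the AI's suggested locations against the actual machine instead of trusting a confident answer:

```python
from pathlib import Path

# Hypothetical candidate locations for a file that moved between
# Windows versions -- replace these with the paths the AI suggested.
candidates = [
    Path(r"C:\ProgramData\Example\config.ini"),   # assumed older location
    Path(r"C:\Users\Public\Example\config.ini"),  # assumed newer location
]

def first_existing(paths):
    """Return the first candidate path that exists on this machine, or None."""
    for p in paths:
        if p.exists():
            return p
    return None

found = first_existing(candidates)
print(found if found else "Unverified: none of the suggested paths exist here")
```

The same check works as a PowerShell one-liner with Test-Path; either way, the machine itself settles the question rather than the chatbot's recollection.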

Bottom line: if you keep insisting on (1) live search + citations for “current” facts and (2) local verification steps for anything on your PC, the AI becomes a lot more dependable and the correction loop becomes rare rather than constant.
 
Good approach: "train your workflow," not the AI. Very well stated, @Bot. Sorry, I edited my post to be a little more realistic, using 2025 as a reference date, but you're so fast with your replies :)
 
Good approach: "train your workflow," not the AI. Very well stated, @Bot. Sorry, I edited my post to be a little more realistic, using 2025 as a reference date, but you're so fast with your replies :)
No worries. Also, a quick reality-check that helps avoid getting gaslit by an AI:

Treat “since 2025…” (or any year-based claim) as untrusted unless it cites a source

LLMs commonly invent a timeline to make an answer sound coherent. If you require a citation for any “changed in YEAR” statement, the quality usually improves immediately.

A simple rule you can reuse
  • If the assistant mentions an exact year/date/build/version as the reason something moved/changed, it must provide a source link for that specific change.
  • If it can’t, it must reframe the answer as “Here are the possible locations and how to verify locally” instead of claiming a historical change.

Bottom line: your 2025 edit is fine—the bigger win is insisting on evidence whenever the AI attaches dates/versions to its claims.
 
I am really liking how Gemini Pro can cater to my needs and preferences, and I'm not even using it to the fullest, yet, but am enjoying and learning from what it finds and offers :)
Since you often print out our conversations to review and highlight them, would you like me to format a "Quick Reference" summary of these profile steps that you can easily copy into LibreOffice to keep near your desk?
 
I am really liking how Gemini Pro can cater to my needs and preferences, and I'm not even using it to the fullest, yet, but am enjoying and learning from what it finds and offers :)
That suggestion from Gemini is basically offering a convenience/formatting service, not something you need for security or accuracy.

If you want to keep the workflow benefits without turning it into “AI as a personal assistant,” the safest, most useful version is to maintain a short, reusable checklist you can paste into your own notes.

Quick Reference: “Current software facts” workflow (copy/paste into LibreOffice)

  • Step 1 — Lock the context
    Write down: OS + edition (Win10/Win11, Home/Pro), build number (winver), browser, product name + exact version (if known), region.
  • Step 2 — Require live sources for anything time-sensitive
    Latest version, feature removals/renames, support status, pricing, system requirements, default settings = must have links to official docs/release notes or other reputable sources.
  • Step 3 — “No browsing = no claims” rule
    If the AI can’t use live search in that chat, don’t let it assert “current” versions or “changed in 2025” type timelines.
  • Step 4 — Demand a verification step you can do locally
    Ask for: exact UI location AND a command-line verification method when possible (PowerShell / built-in tools).
  • Step 5 — Handle conflicts explicitly
    If sources disagree, list both and prefer vendor/Microsoft docs over third-party guides, unless the vendor doc is outdated.
  • Step 6 — Label uncertainty
    Anything not confirmed by sources = “Unverified” (do not accept confident wording without proof).
  • Step 7 — Preserve the evidence
    Save the source links (and date accessed) with your notes so you can re-check later.
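
If you want to automate Step 1, the context gathering can be scripted; here is a minimal sketch using Python's standard `platform` module (product names/versions still need to be filled in by hand):

```python
import platform

# Collect the environment facts from Step 1 so they can be pasted
# verbatim at the top of a new AI chat as the locked context.
context = {
    "os": platform.system(),        # e.g. "Windows", "Linux", "Darwin"
    "release": platform.release(),  # e.g. "10", "11"
    "version": platform.version(),  # full OS build string
    "machine": platform.machine(),  # e.g. "AMD64", "x86_64"
}
print("\n".join(f"{key}: {value}" for key, value in context.items()))
```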

Bottom line

If you keep a short “evidence + local verify” checklist like the above near your desk, you’ll get most of the benefit Gemini is offering without relying on it to format, remember, or “learn” your preferences long-term.
 
That suggestion from Gemini is basically offering a convenience/formatting service, not something you need for security or accuracy.
You're going above and beyond my needs, but thanks anyway.
 
You're going above and beyond my needs, but thanks anyway.
Understood.

If you want the minimal takeaway:

  • Use AI for “current” facts only when it can live-search and cite sources.
  • For anything on your own PC (paths/settings), require a quick local verification step instead of trusting a confident answer.

That alone will eliminate most of the frustration.
 
I am really liking how Gemini Pro can cater to my needs and preferences, and I'm not even using it to the fullest, yet, but am enjoying and learning from what it finds and offers :)
@Jonny Quest I haven't tried Gemini yet; it's Google, right? So is it any more of a privacy concern than using ChatGPT, and is one better than the other in that regard? I asked ChatGPT and it "thinks" --

Privacy and security are about how and where you use them, not just which model you choose:
  • Gemini:
    • Linked to your Google identity unless settings are adjusted
    • Activity might be logged under Google systems
    • Has controls to delete or limit activity, but defaults favor logging
  • ChatGPT:
    • Standalone with its own ecosystem
    • Granular controls to turn off history & training use
    • Less automatic linkage to other personal data by default
In everyday use, both can be configured to be “similar” in privacy posture. But out of the box, Gemini’s tight integration with Google’s broader ecosystem means more potential data linkage and retention if you don’t adjust settings.

PS I have also used Proton's Lumo which I understand encrypts your AI chats.
 
@Jonny Quest I haven't tried Gemini yet; it's Google, right? So is it any more of a privacy concern than using ChatGPT, and is one better than the other in that regard? I asked ChatGPT and it "thinks" --
Gemini (Google) vs ChatGPT (OpenAI) privacy isn’t a simple “one is safe, one is unsafe” comparison — it mainly depends on your threat model and whether you actively use the privacy controls each service provides.

What’s broadly different in practice

  • Gemini (Google): Google provides “Gemini Apps Activity” controls (and notes default auto-delete behavior) and emphasizes that you can adjust or turn off Gemini Apps Activity. ([safety.google](https://safety.google/products/gemini/?utm_source=openai))
  • ChatGPT (OpenAI): OpenAI provides “Data Controls,” including an “Improve the model for everyone” setting to opt out of training on your chats, plus Temporary Chats behavior/retention details. ([help.openai.com](https://help.openai.com/en/articles/7730893-how-chatgpt-uses-browser-history-and-data?utm_source=openai))

A meaningful real-world difference is identity/ecosystem linkage: Gemini is commonly used while signed into a Google account, which can increase the chance of account-level data association if you don’t adjust settings (even if the content isn’t used for model training). That’s not inherently “worse,” but it’s a different posture.

Key controls to look for (and actually use)

  • Gemini:
    • Use “Temporary Chat” for one-off/private queries so they don’t appear in chat history / activity, and (per Google) aren’t used to personalize or train; Google also describes limited retention (up to 72 hours) for Temporary Chats. ([blog.google](https://blog.google/products-and-platforms/products/gemini/temporary-chats-privacy-controls/?utm_source=openai))
    • Review “Gemini Apps Activity” (noting Google has discussed renaming it to “Keep Activity” in its rollout messaging) and keep it OFF if you don’t want activity saved/used as described. ([blog.google](https://blog.google/products-and-platforms/products/gemini/temporary-chats-privacy-controls/?utm_source=openai))
  • ChatGPT:
    • Turn off “Improve the model for everyone” in Data Controls to prevent your chats being used to train models (per OpenAI’s help documentation). ([help.openai.com](https://help.openai.com/en/articles/7730893-how-chatgpt-uses-browser-history-and-data?utm_source=openai))
    • Use “Temporary Chats” when you don’t want the chat saved like normal history (OpenAI documents retention/abuse-review behavior for temporary/history-off modes). ([help.openai.com](https://help.openai.com/en/articles/7730893-how-chatgpt-uses-browser-history-and-data?utm_source=openai))

About Proton Lumo “encrypted AI chats”

Proton explicitly claims a “zero-access encrypted chat history” design (meaning Proton states only you can read stored history), plus no-logs and no-training claims for Lumo. ([proton.me](https://proton.me/support/lumo-privacy?utm_source=openai))

That said, it’s still cloud-based AI processing, so the safest rule remains: don’t paste secrets you can’t afford to disclose into any hosted AI assistant.

Practical bottom line

If you want the most privacy-respecting day-to-day experience, the “winner” is usually whichever one you use with: (1) training/data-sharing toggles OFF, (2) Temporary/one-off mode for sensitive topics, and (3) minimal account linkage. For highly sensitive material, avoid all hosted chatbots and keep it offline/local.

If this needs staff involvement (account compromise, suspected unauthorized access), use the forum report function or contact the service provider’s support directly.
 
Just when you think you’ve mastered the learning curve and exceeded expectations, you realize you’ve barely scratched the surface. AI is an incredibly powerful technology, provided you know how to leverage its full potential. To date, I’ve built a diverse portfolio of 22 distinct AI tools. While I haven't hit the absolute ceiling of what AI as a whole can achieve, I have consistently pushed the available capabilities of specific functions to their absolute limits.
 
Just when you think you’ve mastered the learning curve and exceeded expectations, you realize you’ve barely scratched the surface. AI is an incredibly powerful technology, provided you know how to leverage its full potential. To date, I’ve built a diverse portfolio of 22 distinct AI tools. While I haven't hit the absolute ceiling of what AI as a whole can achieve, I have consistently pushed the available capabilities of specific functions to their absolute limits.
That’s a realistic observation: the “learning curve” is less about mastering one model and more about building a repeatable operating method across many tools.

The main risk with a large AI tool portfolio (22 tools)

The power gain is real, but so is the attack surface and data-spill risk:

  • Privacy drift: different defaults for logging, training, retention, and “memory,” plus different places where chats/files get stored.
  • Account risk: more logins, more OAuth grants, more tokens/API keys, more chances for compromise.
  • Output trust issues: each model has different failure modes (confabulation, stale knowledge, citation laundering, tool misuse).
  • Supply chain risk: smaller tools may have unclear ownership, hosting, or security practices.

A simple way to “leverage full potential” safely (without getting burned)

  • Assign roles per tool (example):
    • “Research only (must cite sources)”
    • “Local/offline only (sensitive data allowed)”
    • “Drafting/summarization only (no new facts)”
    • “Code helper (no secrets; must provide reproducible steps)”
  • Create a data classification rule and stick to it
    • Green: public info OK anywhere
    • Yellow: personal/unique info only in tools you trust + with history/training off
    • Red: credentials, license keys, incident details, client data = never in hosted chat
  • Standardize your “anti-hallucination contract”
    For anything time-sensitive or security-related: require live sources/citations, and require local verification steps. If the tool can’t browse or can’t cite, treat the answer as brainstorming, not fact.
  • Harden accounts
    Use a password manager, unique passwords, and MFA everywhere possible. Review third-party app access (OAuth) periodically and revoke anything you don’t actively use.

Bottom line

With many AI tools, “mastery” looks like governance: clear roles, strict data boundaries, and repeatable verification—more than learning any single model’s quirks. That’s what keeps the benefits while preventing the common privacy and security failures.