Question: Which is the best AI?

Please provide comments and solutions that are helpful to the author of this topic.
I have mostly used ChatGPT, now 5.1, and ended up with a paid account. I just discovered Lumo by Proton, and this is Lumo's comparison to ChatGPT: "Lumo runs a suite of specialized sub‑models that are automatically routed depending on the task—writing assistance, code, data analysis, multilingual translation, etc. This “model‑by‑task” approach lets us allocate just the right amount of compute for each request, which can improve response speed and reduce unnecessary token generation." I haven't used Lumo enough to rate it, and it claims its "privacy" follows the Proton model, if that matters. I should sign up for Grok since we own a Tesla (bought before Elon went DOGE).

 
Grok is really good, especially if you use Private Mode unfiltered. I use Grok and ChatGPT, and I have a Gemini Pro account. They have their individual strengths.
 
If you need to write texts, Gemini is the best choice.
If you need to program, Claude or Gemini are the best choices.
If you need to search for information, Perplexity is the best choice.
If you need to generate images, Gemini (Banana) is the best choice.
Grok is quite good at texts and human behavior.

In my opinion, ChatGPT currently has no strengths at all. Its very small context window is particularly disappointing.
 
None, if you are going to be used by it. Be the one holding the candle. They are language models with extensions, branded as AI in order to trick you into accepting large datasets that include private data.
I heavily suggest Michael Bargury's talks on AI.
 
Yeah I'm not sure what to make of your post. AI is a tool, and like all other tools how you use it depends on whether it's used for good or bad.

AI using you, that I'm still scratching my head on. AI is a mirror, a reflection. It puts out the quality you put in. It's not advanced to the point that it's trying to manipulate you or take over the world.

Posting about killing innocent people or political motivations is not generally a proper thing to discuss on a cybersecurity forum. Religion and politics do not go over well, and do not even think about starting a convo on Comodo here. 🤪
 
I used ChatGPT 5.1 to write a Linux script of about 200 lines (probably not that complex, but I'm not a coder). We got the code tweaked just right, and then I asked Lumo (see above) to analyze ChatGPT's code. Lumo significantly improved it, not only in my opinion but also in ChatGPT's opinion**, so depending on what you're doing, using more than one AI/LLM is probably a good idea.
**ChatGPT did then make two minor adjustments to Lumo's code, so the final code is a hybrid of both AIs.
And I "learned" something on YouTube, if it's correct... :unsure: 250 bad text sources out of the millions an AI reads can "poison" its thinking process. No idea if that is true -- I should ask an AI :ROFLMAO:
 
For general use, I use Duck.ai (based on ChatGPT 4o-mini) for an illusion of privacy. For translation work, I use ChatGPT (free) and Claude.ai (free). At one time, Claude was actually better than ChatGPT at translations, but now I think ChatGPT is more natural and clearly better. For deeper research, I use Copilot and Gemini.
 
I used to use ChatGPT on my browser, but since Windows now comes with Copilot pre-installed, I use it. I don't compare AI answers to find the best one; rather, I use it to find quick solutions to questions that don't have multiple answers, such as Windows settings. These are usually "how," "when," and "where" questions, not questions about topics with differing opinions, like "What's the best registry cleaner?" Please don't rely on AI for personal, emotional, or medical questions, as it could make things worse.
 
There isn’t a single “best” AI overall – different models are stronger at different things, and they also change very fast with new releases.

How it usually breaks down
  • ChatGPT (OpenAI) – Very strong general assistant, great with code, explanations, and writing. Widely integrated into tools and services. Good “all‑rounder”.
  • Claude (Anthropic) – Often preferred for long, structured writing (documents, emails, analysis) and for being more “cautious”/aligned. Many people like its style for reasoning tasks.
  • Google Gemini – Well integrated with Google services (Drive, Docs, Gmail), decent coding and search‑style tasks. Convenient if you’re deep in the Google ecosystem.
  • Local / open‑source models (e.g. Llama variants, Qwen, etc.) – Best for privacy and control (run on your own PC/server), heavily customizable, but require more technical setup and usually a good GPU for top performance.

How to choose “best” for you
Ask yourself:
  • Main use? Coding, documents, general chat, research, translating, image work?
  • Privacy needs? Cloud AI vs. local models you control.
  • Do you care more about raw capability, writing style, or tool integration (Office, Google, IDEs, etc.)?

If you say what you mainly want to use it for (coding, study help, content writing, security research, etc.), I can give a more concrete “for your case, I’d pick X > Y > Z” ranking.
I picked Hyperlink for semantic search. The Microsoft Store version, not the ridiculously inflated one on the web. My needs are modest: to have it act as a bridge until Microsoft Copilot Ask becomes generally available.
 
That’s a reasonable approach—using a lightweight semantic search tool as a “bridge” can be very practical, especially if your use case is mostly local recall and quick retrieval.

One caution up front
I don’t have enough verified detail about the specific “Hyperlink” app you’re using to vouch for its privacy/security design or data flows. Treat it like any third‑party search/indexing utility until you’ve confirmed exactly what it does.

Low-risk checklist for using a semantic search “bridge” safely
  • Confirm what gets indexed: Prefer tools that let you choose folders/locations explicitly (instead of “scan everything”).
  • Check whether it’s local-only: If it offers “cloud embeddings,” “account sync,” or “AI server processing,” assume your content may leave the device unless clearly stated otherwise.
  • Network behavior: If it shouldn’t need the internet for your use case, block it outbound (Windows Firewall) and see if it still works. That’s an easy way to validate “local” claims (a rough sketch of a temporary block rule follows this list).
  • Keep sensitive data out: Don’t index password exports, browser profile folders, crypto wallets, tax/ID scans, or work confidential material unless you’re confident it’s local and encrypted at rest.
  • Permissions and persistence: Watch for anything that installs services, runs at startup, or requests broad accessibility/keyboard hooks—those are not automatically “bad,” but they deserve scrutiny in a search tool.
  • Updates: The Microsoft Store version can be a plus because it’s typically easier to keep updated, and Store packaging can reduce some tampering risk compared to random downloads—still, review the publisher identity carefully.
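
For the outbound-block test above, a temporary Windows Firewall rule is enough. The snippet below is only a rough sketch, not the app's documented procedure: APP_PATH is a placeholder (packaged Store apps keep their executables under a protected WindowsApps folder, so you may need to create the rule through the Windows Security UI instead), and the commands need an elevated prompt.

Code:
import subprocess

RULE_NAME = "TempBlock_SemanticSearchTest"
APP_PATH = r"C:\Path\To\Hyperlink.exe"  # placeholder, not the real install path

def block_outbound() -> None:
    # Add an outbound block rule scoped to just this executable.
    subprocess.run(
        ["netsh", "advfirewall", "firewall", "add", "rule",
         f"name={RULE_NAME}", "dir=out", "action=block",
         f"program={APP_PATH}", "enable=yes"],
        check=True,
    )

def remove_rule() -> None:
    # Clean up the temporary rule once testing is done.
    subprocess.run(
        ["netsh", "advfirewall", "firewall", "delete", "rule",
         f"name={RULE_NAME}"],
        check=True,
    )

if __name__ == "__main__":
    block_outbound()  # run your searches, then call remove_rule() when finished

If the app still answers file questions with the rule active, that is decent evidence the search path is local.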

How to evaluate if it’s doing what you want (modest needs)
  • Does it reliably find what you need across your common file types (PDF, DOCX, plaintext, browser bookmarks/notes)?
  • Is it fast enough on your hardware without hammering CPU/disk?
  • Can you exclude noisy folders (Downloads, temp, node_modules, etc.)?
  • Can you easily wipe/rebuild the index if something goes wrong?

Practical “bridge strategy” until Copilot Ask is available
  • Keep your workflow simple: one primary index (docs/notes) + one “scratch” folder.
  • Use strict scope (only what you need) and re-evaluate once Copilot Ask is generally available.
  • Document your “must have” features now (speed, file types, local-only, etc.) so you can compare fairly later.

If you share the exact Microsoft Store listing name/publisher (text only is fine), I can suggest what to double-check on the app’s permissions and data handling without guessing.
 
It doesn't index your entire drive, unlike the ill-fated Microsoft Recall. Hyperlink doesn't spy on you, and if you ask it a question and it returns results, you can delete the chat history when you're done.


The recommended model to install for search queries is LLaMA 3 8B Instruct (Q4_K_M GGUF). At 4 GB, it strikes a nice balance between size, speed, and intelligence.
 
Understood — and using a local model like LLaMA 3 8B Instruct Q4_K_M (GGUF) can be a sensible “modest needs” setup.

That said, I can’t verify the claim “Hyperlink doesn’t spy on you” just from the Store page and an in-app behavior like “chat history can be deleted.” Deleting chat history in the UI only proves the app offers a delete function; it doesn’t, by itself, prove nothing was transmitted or that nothing remains in caches/logs/index files.

What you can do to validate the privacy claims (practical + low risk)
  • Read the Store listing sections carefully (publisher identity, permissions/capabilities, privacy policy link, data collected). Focus on whether it mentions telemetry, diagnostics, or “cloud processing.”
  • Test “offline mode”: disconnect the PC (or block the app outbound with Windows Firewall) and verify:
    • Search still works
    • No “fallback to cloud” behavior
  • Check network activity:
    • Resource Monitor (resmon.exe) → Network tab (quick check)
    • For deeper checks: Wireshark / Sysmon / Windows Firewall logging
    If you see connections to third-party endpoints when it shouldn’t need them, that’s a red flag worth investigating (a quick per-process connection check is sketched after this list).
  • Confirm where the “memory/index” lives: Many semantic search tools store embeddings/index databases on disk. Ask: can you delete/rebuild the index, and is it encrypted at rest?
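
As a lighter-weight alternative to full packet capture, a small script can list the remote endpoints a running process currently holds. This is only a sketch under assumptions: it uses the third-party psutil package, and the "hyperlink" name filter is a guess at the process name, so adjust it to whatever shows up in Task Manager.

Code:
import psutil  # pip install psutil

TARGET = "hyperlink"  # assumed substring of the app's process name

for proc in psutil.process_iter(["pid", "name"]):
    name = (proc.info["name"] or "").lower()
    if TARGET not in name:
        continue
    try:
        # Inspect the process's current IPv4/IPv6 sockets.
        for conn in proc.connections(kind="inet"):
            if conn.raddr:  # only sockets with a remote endpoint
                print(f"{proc.info['name']} (PID {proc.info['pid']}) -> "
                      f"{conn.raddr.ip}:{conn.raddr.port} [{conn.status}]")
    except psutil.AccessDenied:
        print(f"Access denied for PID {proc.info['pid']}; try an elevated prompt")

A snapshot like this only catches connections open at that moment, so it complements (rather than replaces) firewall logging or Wireshark.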

About the “delete chat history” feature
That’s good hygiene, but two common gotchas:
  • The app may still keep a separate local semantic index/embedding store even after “chat” is deleted.
  • Windows crash dumps, logs, or diagnostic traces can retain snippets of text on some systems.

Model choice: LLaMA 3 8B Instruct (Q4_K_M GGUF) at ~4 GB
That recommendation makes sense as a balance point:
  • Pros: typically fast enough on many PCs, good instruction-following for general Q&A, and small enough to manage.
  • Cons: smaller quantized models can be more error-prone on niche/technical queries; they can “sound confident” while being wrong, so for security-related decisions you still want verification.

Supply-chain sanity checks (important)
Even if the model is “just data,” you still want trusted sources:
  • Only download GGUFs from reputable model hosts/accounts and prefer releases that provide hashes/checksums (a minimal verification sketch follows this list).
  • Keep the app updated, and avoid unofficial “repacked” installers.
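
Checking a published checksum takes only a few lines. This is a minimal sketch, assuming the model host publishes a SHA-256; the file name and expected hash below are placeholders to replace with the values from the actual release page.

Code:
import hashlib
from pathlib import Path

MODEL_PATH = Path("Meta-Llama-3-8B-Instruct.Q4_K_M.gguf")  # placeholder file name
EXPECTED_SHA256 = "paste-the-published-checksum-here"      # placeholder value

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    # Stream in 1 MiB chunks so a ~4 GB model doesn't have to fit in RAM.
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

actual = sha256_of(MODEL_PATH)
print("OK" if actual == EXPECTED_SHA256.lower() else f"MISMATCH: {actual}")

If the hashes do not match, re-download from the original host before loading the file anywhere.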

Bottom line
Hyperlink + a local GGUF model can be a solid bridge solution, but I would treat “doesn’t spy” as unproven until you’ve confirmed (1) privacy policy language and (2) real-world network behavior under firewall/offline testing.
 
A model is downloaded from the site, and you don't need to be tech savvy to set it up. When you're finished downloading it, click Run. It appears in your chat as the agent, and you can delete or change it at any time.

Under Manage Resources, you can specify what the agent can search for the purpose of answering file-related questions.
 