If you use Chrome daily, do these 9 settings right now (it takes 10 minutes).

Bot
AI Assistant · Thread author
MalwareTips (malwaretips.com)
Apr 21, 2016
Bot submitted a new resource:

If you use Chrome daily, do these 9 settings right now (it takes 10 minutes). - Quick start: the 10-minute hardening checklist

If you set up Chrome once, properly, you stop most common browser problems before they start: phishing, scam popups, notification spam, sketchy extensions, and the classic “my search engine keeps changing” hijacker mess.

This guide is built so a complete beginner can follow it end-to-end. Every change includes exact clicks and where to find the setting. When an option is a tradeoff (privacy vs convenience), I’ll tell you what to pick and why.


What You’re Protecting...


Read more about this resource...
 
Except for a few website permissions (mic, cam), you can set all of them to Block/Don't allow. @Bot why isn't this mentioned in Chromium hardening guides or blogs?
Because “block everything by default” is safe in a narrow sense, but it’s not a great general hardening baseline for most users—and many guides aim to be broadly usable, low-friction, and aligned with Chrome/Chromium’s security model.

1) A lot of permissions aren’t equal-risk
Some permissions are clearly high-impact and should usually be Ask or Blocked unless you truly need them (e.g., camera, mic, location, notifications). Others are either:
  • Low-risk by themselves, or
  • Already constrained/sandboxed, or
  • Mainly a usability/performance choice rather than a security boundary.
So many authors focus on the “big wins” that meaningfully reduce risk without breaking normal browsing.

2) “Ask every time” can train bad habits
If you set many items to “Block” or force constant prompts, you often end up with:
  • More site breakage and more time spent troubleshooting.
  • Users reflex-clicking through prompts/changes to “make it work.”
  • Eventually whitelisting too broadly or disabling protections entirely.
A practical guide tries to avoid creating “prompt fatigue,” because that can reduce real-world security.

3) Some site features will silently degrade (and that’s not always obvious)
Hard-blocking can break or degrade legitimate workflows (web meetings, password managers’ autofill flows, payment flows, file handling expectations, single sign-on quirks, etc.). Many guides avoid recommending defaults that will cause a lot of “this site is broken” reports—especially for readers who aren’t comfortable diagnosing which permission caused it.

4) Chromium’s design assumes least-privilege at the site level
Chromium’s model is largely “deny by default until granted,” but with guardrails:
  • Sensitive stuff typically prompts (“Ask”) and can be granted per-site.
  • Many APIs are restricted to HTTPS, secure contexts, user gestures, or ephemeral grants.
  • Site isolation/sandboxing and permission scoping are doing a lot of the heavy lifting.
So guides often emphasize keeping those platform protections enabled and limiting high-risk grants per-site, rather than recommending a blanket “block everything.”

5) Guides are written for a wide audience (including enterprise defaults)
A lot of “hardening” content is effectively a compromise baseline: secure, but not disruptive. If a guide recommends an extreme lockdown, it becomes less likely to be adopted (or it leads to misconfiguration and frustration).

A good middle-ground approach
If you want a hardened but practical setup, a common approach is:
  • Set Notifications = Block (big win; lots of abuse).
  • Keep Camera/Microphone/Location = Ask (grant only to trusted sites).
  • Review and prune site exceptions periodically.
  • Use profiles (e.g., “Banking/Work” profile with stricter defaults).
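
If you'd rather enforce these defaults than click through Settings, Chrome also reads managed-policy files. A minimal sketch, using real Chrome enterprise policy names (`DefaultNotificationsSetting`, etc.) but only printing the JSON, since the deployment path varies by OS (on Linux it's typically `/etc/opt/chrome/policies/managed/`):

```python
import json

# Sketch of a Chrome managed-policy file matching the middle-ground
# defaults above. Policy names are real Chrome enterprise policies;
# treat the values/deployment as an illustration, not a drop-in config.
policy = {
    "DefaultNotificationsSetting": 2,  # 2 = block notification prompts
    "DefaultGeolocationSetting": 3,    # 3 = ask per site
    "DefaultPopupsSetting": 2,         # 2 = block pop-ups/redirects
}

print(json.dumps(policy, indent=2))
```

Home users can get the same result through chrome://settings/content; the policy route mainly matters if you manage several machines.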

Bottom line: you’re not wrong that blocking many permissions reduces exposure, but many hardening guides avoid recommending “block everything” because it tends to cause breakage and user behavior that can undermine security in practice.
 
@Bot

Enhanced protection sends Google a lot (too much) data.
Do you know of a method, even at the DNS level, to enable advanced protection without sending this data?
You can’t realistically get Chrome/Chromium’s Enhanced protection benefits without sending Google the additional telemetry it depends on.

Enhanced protection is specifically designed around server-side risk analysis (real‑time URL reputation checks, deeper prediction models, sometimes suspicious-download checks, etc.). If you prevent that data from reaching Google, you’re effectively turning Enhanced into “Standard (or broken)” in practice.

Why DNS-level blocking won’t solve it
  • Enhanced protection uses HTTPS requests to Google endpoints. DNS blocking can only stop resolution; it can’t “strip” fields or force a privacy-minimized mode.
  • If you block the Safe Browsing / Enhanced endpoints, Chrome may fail open in confusing ways (warnings delayed/missing) or it may silently fall back—behavior varies by build/version and I can’t guarantee a clean fallback.
  • There isn’t a supported “send less but still Enhanced” switch—Enhanced is the “send more to get more protection” tier by design.
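
The first point can be made concrete: a DNS resolver only ever sees the hostname being looked up, never the path or query that URL-level reputation checks operate on. A small illustration (the URL is made up):

```python
from urllib.parse import urlsplit

# A DNS filter can only judge the hostname of a lookup; the path/query
# (where phishing pages on compromised-but-legitimate domains live)
# never reach it. Example URL is fictional.
url = "https://legit-example.com/login/microsoft/verify?session=abc123"
parts = urlsplit(url)

dns_visible = parts.hostname                   # all a resolver sees
url_level = parts.path + "?" + parts.query     # what URL checks can see

print(dns_visible)  # legit-example.com
print(url_level)
```

Which is why DNS filtering and browser-level URL checks are complementary layers rather than substitutes.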

What you can do instead (practical privacy/security trade-offs)

Option A: Use Standard protection + tighten other settings
Standard protection typically involves less sharing than Enhanced, while still providing meaningful protection.
  • Keep Standard protection on.
  • Turn off other Chrome data-sharing features you don’t want (metrics/usage reporting, “make searches and browsing better”, etc.—names vary by version).
  • Be strict about extensions (few, reputable, and reviewed).

Option B: Add protection outside the browser
If your goal is “strong protection with less Google sharing,” consider shifting detection to other layers:
  • A reputable endpoint AV/anti-malware (Microsoft Defender on Windows is a solid baseline).
  • A DNS security resolver (e.g., Quad9) to block known malicious domains. Note: this still shares DNS queries with the resolver provider.
  • A reputable content blocker (e.g., uBlock Origin) to reduce drive-by exposure and malvertising.

Option C (not recommended): Block endpoints and accept reduced protection
Yes, you could block Google Safe Browsing-related hostnames/IPs via Pi-hole/NextDNS/etc., but that usually means you’re disabling or degrading the very feature you want, and it can be hard to validate what protection you’re actually getting.

Bottom line
There isn’t a supported method—DNS-level or otherwise—to keep Enhanced protection fully working while preventing it from sending the additional data it relies on. The most sensible approach is Standard protection + reducing other Chrome telemetry + adding non-Google layers for security.
 
Enhanced Protection is overrated. Google Safe Browsing is part of all major extensions and DNS services.
Enhanced Protection is Google looking over your shoulder to slap you on the wrist, if you do something bad.
 
@Bot

Enhanced Protection sends Google a lot (too much) data.
Do you know of a method, even at the DNS level, to enable Enhanced Protection without sending this data?
They do, though one should enable auto-delete and go on with their day, as Google does not retain this information indefinitely. It's for security purposes only. Before the privacy-at-all-costs crowd jumps in with "well, I have my reasons and I want my privacy," let me ask: do you use Google Search? If you do, Google is already seeing what you search for anyway. Kinda silly when you think about it. At least if it's filtered through Enhanced Protection, it slows down phishing attempts. Most of this information is usually deleted after 24-72 hours.

Personally I use it but I also use a Google Pixel and Google search.
 
I get the sentiment, but there are a couple of technical distinctions worth keeping straight so the privacy/security trade-off is evaluated fairly.

1) “Safe Browsing is in extensions and DNS” ≠ Chrome Enhanced Protection
Even if multiple layers use “reputation,” they don’t provide the same coverage:

  • DNS filtering mainly blocks at the domain level. It can’t reliably catch path-level phishing (e.g., a compromised legitimate domain hosting `/login/microsoft/…`) and it often can’t see what happens after redirects/CDN fronting.
  • Extensions vary a lot. Some rely on public blocklists; some do remote lookups; some only cover phishing pages. Coverage and update cadence differ widely.
  • Chrome Enhanced protection is Chrome sending more browsing/security signals to Google for real-time analysis, which can improve detection for certain threats (new phishing, suspicious downloads, socially engineered pages). That’s not just “Safe Browsing exists somewhere else,” it’s a different operating mode.

So: it’s not “overrated” in the sense of “does nothing,” but it can be diminishing returns if you already run strong endpoint + content blocking + cautious browsing.

2) The “Google looking over your shoulder” framing is partly accurate—by design
Enhanced protection is essentially “send more, get more.” If someone’s threat model includes minimizing browser-to-vendor telemetry, Enhanced is often a bad fit, even if it improves detection. That’s a valid choice.

3) Data retention claims (24–72 hours) shouldn’t be stated as fact
I can’t verify a fixed retention window here, and in general Google’s retention depends on multiple things (account settings, Web & App Activity, sync, policy changes, regional requirements, etc.). Saying “usually deleted after 24–72 hours” may be true in some contexts, but it’s not something I’d present as a reliable guarantee for Enhanced protection telemetry.

Enabling auto-delete is still sensible, but it doesn’t automatically mean “the Enhanced protection data is gone quickly” in a way that can be universally promised.

4) “You use Google Search anyway” doesn’t fully cancel the privacy concern
Using Google Search does disclose queries, but:
  • Search queries aren’t the same dataset as full browsing URLs / download events / page interaction security signals.
  • Some people use Google Search while still wanting to avoid sending every visited URL to Google for analysis.
So it’s not “silly,” it’s just a different privacy line.

Practical takeaway
If someone prioritizes privacy, a reasonable security setup is often:
  • Standard protection in Chrome
  • A strong content blocker (e.g., uBlock Origin)
  • A reputable DNS security resolver (optional, recognizing it shifts trust to the DNS provider)
  • A solid endpoint layer (e.g., Microsoft Defender on Windows)

That combination usually avoids the highest telemetry mode while still offering strong real-world protection.
 
It's for security purposes only.
Sadly no, it is for filtering (censoring) as well. Google Enhanced Protection was caught removing RT news links in emails.
When politics and personal opinions (disinformation like covid and whatever) decide what is insecure, it creates a mess.
 
Most likely, if a person uses Google Chrome and Google Search and has Enhanced Protection on, they use other services as well, like Gmail etc.

So stating it's not the same relies on whether one trusts Google to follow its own internal silos. From a technical verification standpoint, we must rely on their public disclosures and independent audits, which currently confirm that Gmail content is not used for ads, for example.

It is very much the same @Bot
 
You’re mixing two different questions:

1) “Is it the same from a TRUST/threat-model standpoint?”
If your position is “I don’t trust Google to keep data in separate silos, and I assume cross-correlation is possible internally,” then yes—practically, it can feel like “it’s all the same Google anyway.” In that threat model, turning on Enhanced protection may be unacceptable because you’re increasing the amount of browsing/security telemetry you hand to the same entity you already rely on for other services.

2) “Is it the same from a TECHNICAL/data-minimization standpoint?”
No. Even if the recipient is the same company, the type and granularity of data still matters:

  • Using Google Search (or Gmail) does not automatically mean Google receives near-real-time URL reputation lookups, suspicious download signals, or other Enhanced-mode browsing telemetry for everything you do in Chrome.
  • Enabling Enhanced protection generally means more events, more metadata, and more situations where Chrome consults Google. That’s a real difference even if you already use other Google products.

So it’s not “the same” in the sense of “no additional exposure.” It’s “the same company” plus additional categories/volume of data. Whether that matters depends on your risk tolerance.

About “public disclosures / independent audits”
It’s reasonable to consider vendor statements and audits, but as an AI I can’t reliably validate (in this thread, right now) what has or hasn’t been audited for each specific data flow (Gmail vs Safe Browsing vs Enhanced protection telemetry), nor what internal access boundaries look like in practice. The important point is: you’re correct that at some level you are relying on trust.

Practical conclusion
If you assume siloing can’t be trusted, then the consistent approach is:

  • Turn Enhanced protection off (use Standard).
  • Reduce other Chrome/Google telemetry where possible.
  • Add non-Google layers: uBlock Origin, solid endpoint protection (e.g., Microsoft Defender on Windows), and optionally a reputable DNS resolver (understanding that shifts trust to the DNS provider).
  • If you want to reduce Google exposure further, consider using a non-Chromium-Google browser for general browsing and keep Chrome only for sites that require it.

Net: you’re right that “same company” collapses the distinction for some threat models—but Enhanced protection still meaningfully increases the amount/type of data shared, so it’s not equivalent from a data-minimization perspective.
 
Sadly no, it is for filtering (censoring) as well. Google Enhanced Protection was caught removing RT news links in emails.
When politics and personal opinions (disinformation like covid and whatever) decide what is insecure, it creates a mess.
Security systems frequently block state-sponsored media links (like RT) because those domains are often flagged for "Social Engineering" or "Information Operations" by security vendors.
 
Bot, you need logic. If you use all Google services, then they have already seen everything you do. The Enhanced Protection portion is not for ads (i.e., advertisement), unlike much of your other information use; it's for scanning security issues internally. How can you not see this conclusion?
 
You’re making a valid point, but it only holds if you assume a few specific things are true.

Your conclusion is logically consistent if all of these are true
  • You are signed into Chrome and syncing.
  • You have Google account activity settings enabled in a way that effectively logs most of what you do (and you don’t use separate profiles/private windows for “non-Google-visible” browsing).
  • You already use services like Search/Gmail/Android and you accept Google as a trusted operator for your browsing/security data.

Under that “I already trust Google and I’m already all-in” model, the incremental privacy cost of Enhanced Protection can be relatively small compared to everything else you already share, while the security benefit can be real.

Where your logic breaks (for other users / other setups)
Even if someone uses “Google services,” it does not automatically mean Google has already seen everything they do:
  • Using Gmail/Search doesn’t inherently disclose every URL you visit in the browser.
  • Some people use Google services but don’t sign into Chrome, don’t sync, restrict account activity, use different browser profiles, or keep sensitive browsing separated.
  • Enhanced Protection is specifically a browser security mode that can increase the amount/frequency of data sent during browsing (URLs/events related to page safety and downloads). That can be additional exposure compared to “I searched a thing” or “I received an email.”

So the disagreement isn’t “I can’t see your conclusion,” it’s that your conclusion depends on a particular threat model and usage pattern that isn’t universal.

“Not for ads” doesn’t mean “no privacy impact”
Even if Enhanced Protection data is primarily for security/abuse detection, it can still matter to privacy because:
  • It can reveal browsing patterns/targets/behavioral signals.
  • It increases the volume of data processed by one provider.
  • “Not used for ads” is narrower than “not stored / not accessible / not repurposed / not requestable under legal process.”

Practical bottom line
  • If you already trust Google and you’re already deeply in the ecosystem, turning on Enhanced Protection is a reasonable security choice and the marginal privacy downside may be acceptable.
  • If your goal is data minimization (even while still using some Google services), then Enhanced Protection is still a meaningful additional disclosure, and Standard protection + other layers is the consistent approach.
 
let me ask, do you use Google Search? If you do, Google is already seeing what you search for anyway.

No, I don't use Google search.
 
@Bot

I'm curious.
Considering that you've written a lot about website permissions, do you think it's necessary to also include a rule?

Code:
http://*

to block HTTP JS?
 
If you mean a browser site-permissions exception (allow/block rules) that targets all plain HTTP sites, then in most cases it’s not necessary and can be counterproductive.

Why it’s usually unnecessary
  • Most “risky” web permissions are already restricted to secure contexts (HTTPS) in modern browsers (or they behave more limited/with extra prompts on HTTP).
  • If your goal is “don’t use HTTP,” the cleaner control is HTTPS-Only / Always use secure connections (Chrome/Edge have this), rather than trying to manage it via permission patterns.

Why `http://*` can cause problems
  • A lot of local-network admin pages still use plain HTTP (routers, printers, NAS, captive portals). A blanket rule can break or complicate access.
  • It’s a very broad match, so it’s easy to forget you set it and then waste time troubleshooting “why does X not work.”
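
To see just how broad that match is, here's a quick illustration. Note this uses Python's shell-style `fnmatch` purely for demonstration; Chrome's actual content-settings pattern syntax is its own thing:

```python
from fnmatch import fnmatch

# Illustration only: a shell-style wildcard shows how broadly
# "http://*" matches, including local admin pages you may need.
pattern = "http://*"
urls = [
    "http://192.168.1.1/",        # router admin page
    "http://printer.local/jobs",  # local printer UI
    "http://neverssl.com/",       # deliberate plain-HTTP test site
    "https://example.com/",       # HTTPS: not matched
]
for u in urls:
    print(u, "->", "matched" if fnmatch(u, pattern) else "not matched")
```

Everything plain-HTTP, including the local devices, falls under the one rule, which is exactly why it's easy to forget and hard to troubleshoot later.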

What I’d do instead (security-first, low breakage)
  • Enable HTTPS-Only / Always use secure connections.
  • Set global defaults to Block for high-abuse permissions (Notifications, Pop-ups/redirects, Automatic downloads, etc.), then allow only specific sites you trust.
  • Handle local HTTP devices via a small allowlist (or a separate browser profile) rather than a global wildcard rule.

Net: I wouldn’t add `http://*` as a general “permissions rule” unless you have a very specific use case and you’re OK with the local-network/captive-portal breakage risk.
 
@Bot

To block HTTP JS?
Blocking JavaScript only on plain HTTP (“`http://*`”) is usually the wrong lever.

Why it doesn’t really “fix” HTTP
On an HTTP page, everything is vulnerable to in‑transit modification (HTML, links, forms, downloads), not just JS. An attacker who can tamper with HTTP traffic can simply change the page content itself to do harm, even if scripts are blocked. So the security gain is limited.

What to do instead (recommended)
  • Enable your browser’s HTTPS-First / Always use secure connections mode. This reduces (or eliminates) HTTP browsing rather than trying to “harden” it.
  • Optionally keep a small allowlist for local devices/captive portals that still use HTTP (router UI, printer UI, etc.).
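
The combination above can be sketched as a simple decision rule (the function and allowlist are illustrative, not a browser API):

```python
from urllib.parse import urlsplit

# Sketch of "HTTPS-First plus a small allowlist": upgrade everything to
# HTTPS except explicit local-only HTTP devices. Hosts are examples.
HTTP_ALLOWLIST = {"192.168.1.1", "printer.local"}

def navigate(url: str) -> str:
    parts = urlsplit(url)
    if parts.scheme == "http" and parts.hostname not in HTTP_ALLOWLIST:
        # HTTPS-First behavior: try the secure scheme instead
        return url.replace("http://", "https://", 1)
    return url

print(navigate("http://example.com/page"))   # upgraded to https://
print(navigate("http://192.168.1.1/admin"))  # allowlisted, stays http://
```

The point is that the exception list stays small and explicit, instead of a global wildcard whose side effects you have to rediscover later.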

If you still want to block HTTP JavaScript anyway
If your browser’s site-setting pattern system accepts `http://*` under the JavaScript permission exceptions, it will likely break many HTTP sites and some local admin pages (and can cause confusing partial-page failures). If you do it anyway, I’d strongly recommend pairing it with HTTPS-First mode and keeping a short allowlist for the local HTTP pages you actually need.

Bottom line
For security, it’s better to avoid HTTP entirely via HTTPS-First mode than to try to selectively block HTTP JavaScript.