Poll [2026] What is the fundamental component of your Security Configuration?

What is the fundamental component of your security configuration?

  • Firewall: 2 votes (5.0%)
  • AV: 18 votes (45.0%)
  • DNS: 5 votes (12.5%)
  • AI: 1 vote (2.5%)
  • Browser: 2 votes (5.0%)
  • VPN: 2 votes (5.0%)
  • System recovery: 3 votes (7.5%)
  • Other: 7 votes (17.5%)

  Total voters: 40
Both "brain/caution" and software/hardware can be important. It would be hard to rely only on one of those factors.

It is worth remembering that the more usable the security software/hardware, the less secure the user. The reference point can be a standard security setup (a popular home AV), which still requires a good amount of the "brain/caution" factor.
A standard level of security can probably also be reached without any AV, given a lot of "brain/caution". However, this would be very inconvenient in the long run (if it is possible at all).

MT members often aim for a very high level of security, which requires more of both the "brain/caution" and "software/hardware" factors.
Strong security software/hardware is usually less usable. To avoid a strong but unusable security setup, the "brain/caution" factor is still important.
Thank you, @Andy Ful, for the detailed instructions and guidance, offered with complete humility.
Thank you for your simple, yet very useful tools.
You and Raymond Hill have made the internet a much better place.
 
@Bot

For the benefit of other users, could you explain what a DOM exploit in a browser is?
A “DOM exploit” in a browser usually means abusing the Document Object Model (DOM)—the in-page object model that JavaScript uses to read and modify a web page—to trigger a security bug or unsafe behavior.

What the DOM is (in one paragraph)
The DOM is the browser’s live representation of a page: elements like buttons, forms, scripts, URLs, cookies (some), storage, etc., exposed via JavaScript (for example: `document`, `window`, `location`, `document.cookie`, `localStorage`). If a website’s JavaScript takes untrusted data and pushes it into the DOM in an unsafe way, an attacker can sometimes make the page execute attacker-controlled code or perform unintended actions.
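
To make this concrete, here is a minimal sketch of ordinary DOM access from page JavaScript (the `msg` element id is a made-up example, not from any real site):

Code:
// Read parts of the page and browser state through the DOM
const pageTitle = document.title;            // the page title
const fragment = location.hash;              // the "#..." part of the URL
const box = document.getElementById('msg');  // 'msg' is a made-up element id

// Modify the page
if (box) {
  box.textContent = 'Hello from JavaScript'; // inserted as plain text, not HTML
}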

What people most often mean: “DOM-based” vulnerabilities
In practice, “DOM exploit” most commonly refers to DOM-based XSS (cross-site scripting) or closely related client-side issues:
  • DOM-based XSS: The site’s JavaScript reads attacker-controlled input (URL fragments, query parameters, `postMessage`, storage, etc.) and writes it into the page in a way that becomes executable script (or dangerous HTML).
  • DOM clobbering: Specially crafted HTML elements overwrite expected JavaScript variables/properties (because some elements become “named” properties on `window`/`document`), leading to logic bypass or code execution in poorly written scripts.
  • Prototype pollution (client-side): Polluting JavaScript object prototypes so later code behaves insecurely (sometimes leading to XSS or security control bypass).
  • Unsafe `postMessage` handling: A page receives a cross-origin message and trusts it without verifying `event.origin` / message structure, leading to DOM injection or sensitive actions (a minimal sketch follows this list).
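
To illustrate that last point, here is a minimal sketch of an unsafe `postMessage` handler next to a safer one (the `https://example.com` origin and the `status` element id are placeholders, not taken from any real site):

Code:
// Unsafe: trusts any sender and feeds the message into a dangerous sink
window.addEventListener('message', (event) => {
  document.body.innerHTML = event.data;                 // attacker-controlled HTML enters the DOM
});

// Safer: check the sender's origin and treat the data as plain text
window.addEventListener('message', (event) => {
  if (event.origin !== 'https://example.com') return;   // expected origin (placeholder)
  document.getElementById('status').textContent = String(event.data); // 'status' is a made-up id
});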

Typical “sources” and “sinks” (how they happen)
A simple way to understand DOM-based issues is: untrusted source → dangerous sink.

  • Common sources (attacker-controlled inputs):
    • URL parts: `location.search`, `location.hash`, `document.URL`
    • Cross-window messaging: `window.postMessage`
    • Web storage: `localStorage`, `sessionStorage`
    • Referrer/other browser-exposed strings in some contexts
  • Common dangerous sinks (places where inserting the wrong thing becomes code execution or unsafe HTML):
    • `innerHTML`, `outerHTML`, `insertAdjacentHTML`
    • `document.write()`
    • Assigning to event handler attributes or using `setAttribute("on...")`
    • Dynamic script injection, `eval()`, `new Function()`, `setTimeout(string)`
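
A minimal sketch of the source-to-sink pattern (the `q` parameter and the `results` element id are illustrative, not from any particular site):

Code:
// DOM-based XSS: untrusted source flows into a dangerous sink
const params = new URLSearchParams(location.search);  // source: URL query string
const query = params.get('q');                         // e.g. ?q=<img src=x onerror=alert(1)>

// Dangerous sink: the string is parsed as HTML, so injected markup can run script
document.getElementById('results').innerHTML = 'You searched for: ' + query;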

Why it matters (impact)
If a DOM exploit results in script execution on a trusted site, the attacker can often:
  • Run code as that website in your browser session
  • Steal session tokens/data (depending on cookie flags and app design)
  • Perform actions as you (account takeover via actions, not always via cookie theft)
  • Modify what you see (phishing inside a legitimate site)

Is it the same as a “browser RCE/0-day”?
Not necessarily. A DOM exploit is very often a web app / site JavaScript vulnerability (logic/implementation bug), not a bug in the browser engine itself.

However, a browser engine exploit (memory corruption, sandbox escape, etc.) can be triggered via DOM-related code paths too—people may loosely say “DOM exploit” when they really mean “a browser exploit triggered by DOM operations.” Without technical details/PoC, you can’t safely assume which one is meant.

Practical takeaways for users
  • Keep the browser updated (engine-level bugs get patched frequently).
  • Use reputable URL filtering / AV web protection as a backstop, but don’t rely on it exclusively.
  • Be cautious with unfamiliar links, especially those with long/odd query strings or fragments.
  • Use script-blocking/hardening tools only if you can manage the usability trade-offs (they can reduce attack surface, but can also break sites).

Practical takeaways for site owners/devs (high level)
  • Avoid unsafe sinks (`innerHTML` with untrusted data, `eval`, etc.).
  • Prefer safe DOM APIs (e.g., `textContent`, safe templating); a short sketch follows this list.
  • Validate message origins for `postMessage`.
  • Use CSP (Content Security Policy) to reduce XSS impact where feasible.
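
A small sketch of the safer alternatives listed above (the `results` element id is a placeholder, and the CSP shown is only an illustration that would need tuning for a real site):

Code:
// Safe sink: textContent does not parse the string as HTML, so markup stays inert
const query = new URLSearchParams(location.search).get('q');
document.getElementById('results').textContent = 'You searched for: ' + (query ?? '');

// Example CSP response header (sent by the server, not set from JS; must be tuned per site):
//   Content-Security-Policy: default-src 'self'; script-src 'self'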

Conclusion: a “DOM exploit” usually means abusing the page’s client-side JavaScript/DOM interactions—most commonly DOM-based XSS—where untrusted input ends up in a dangerous DOM sink and results in unintended script execution or actions.
 
@Bot

I only use these regex rules:

Code:
/^(?!.*\.(com|org|it|eu|info|ms|net|io|edu)$).*/##+js(noeval)
/^(?!.*\.(com|org|it|eu|info|ms|net|io|edu)$).*/##+js(aeld, keydown)
/^(?!.*\.(com|org|it|eu|info|ms|net|io|edu)$).*/##+js(aeld, submit)



in uBlock Origin in Chrome and Firefox, and as you can see, they do NOT apply to the listed TLDs.
Can you analyze the specific protection provided by the three rules?
 
Those are uBO scriptlet injection rules applied with a hostname-regex, so they don’t “block a domain” like a network filter would; they modify JavaScript behavior on matching pages by defusing specific JS primitives.

0) Scope: what your hostname regex actually matches
Your left-hand side is the same in all three rules:

Code:
/^(?!.*\.(com|org|it|eu|info|ms|net|io|edu)$).*/

In uBO, that regex is matched against the page’s hostname. The negative lookahead means: apply the rule to any hostname which does not end with `.com` / `.org` / `.it` / etc.

So it will apply to things like:
  • `example.co.uk` (ends with `.uk` → rule applies)
  • `example.de` (ends with `.de` → rule applies)
  • `example.com.ru` (ends with `.ru` → rule applies)
  • `localhost` (no “.com” ending → rule applies)

And it will NOT apply to:
  • `example.com` and `sub.example.com`
  • `example.org`, `example.net`, etc. (your listed TLDs)

This is an “allowlist by TLD ending”, not an assessment of trustworthiness. Plenty of malicious content lives on “allowed” TLDs too, and many legitimate sites use “not allowed” TLDs—so the protection/usability trade-off will be very uneven.
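
Anyone can check this scope in the browser console; the snippet below only demonstrates the regex itself against example hostnames, not uBO's internal matching:

Code:
const scope = /^(?!.*\.(com|org|it|eu|info|ms|net|io|edu)$).*/;

console.log(scope.test('example.co.uk'));   // true  -> the scriptlets apply
console.log(scope.test('localhost'));       // true  -> the scriptlets apply
console.log(scope.test('example.com'));     // false -> the scriptlets do not apply
console.log(scope.test('sub.example.org')); // false -> the scriptlets do not apply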

---

1) Rule #1: `##+js(noeval)` — what protection it provides
Code:
...##+js(noeval)

In general, uBO’s `noeval` scriptlet is meant to defuse dynamic code evaluation primitives—most importantly `eval()` and close equivalents (exact coverage can vary by implementation/version).
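
Conceptually, and only as a simplified sketch rather than uBO's actual scriptlet code, a `noeval`-style defuser does something like this before page scripts run:

Code:
// Simplified idea only: replace eval() with a stub that refuses to run strings as code
window.eval = function (code) {
  console.warn('eval() blocked:', String(code).slice(0, 80));
  return undefined;  // the string is never compiled or executed
};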

What this can help against
  • Obfuscated/injected JS that relies on `eval` to unpack and run (common in some malvertising, shady scripts, and some exploit chains).
  • Some DOM-based injection patterns where attacker-controlled data is turned into a string and then executed via `eval`/similar.

What it does NOT guarantee
  • It does not stop code that never uses `eval` (many attacks don’t need it).
  • It does not automatically prevent “DOM XSS” if the page injects attacker HTML directly via DOM APIs (e.g., unsafe `innerHTML`) and uses event handlers or script tags in other ways.
  • It won’t prevent a browser-engine memory corruption exploit (that’s a different class of issue).

Usability impact
Some sites (especially older frameworks, certain analytics/ads, some widgets) may break if they depend on `eval`/dynamic Function compilation.

---

2) Rule #2: `##+js(aeld, keydown)` — what protection it provides
Code:
...##+js(aeld, keydown)

`aeld` is uBO’s “addEventListener defuser” style scriptlet: it aims to block registration of event listeners for a given event type via `addEventListener(...)`.

So this targets code that tries to do:
Code:
addEventListener('keydown', handler)
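
Again only as a simplified sketch, not the real uBO implementation, an addEventListener defuser for a single event type works roughly like this:

Code:
// Simplified idea only: drop 'keydown' registrations made through addEventListener
const realAddEventListener = EventTarget.prototype.addEventListener;
EventTarget.prototype.addEventListener = function (type, listener, options) {
  if (type === 'keydown') {
    console.warn('keydown listener blocked');
    return;  // the listener is silently dropped
  }
  return realAddEventListener.call(this, type, listener, options);
};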

What this can help against
  • Basic in-page keylogging or keystroke monitoring that hooks `keydown` broadly.
  • Some annoyance/lock-in scripts (disabling hotkeys, intercepting key combos, blocking copy/inspect shortcuts, etc.).

Important limitations (common bypasses)
  • It does not necessarily block handlers attached via `onkeydown = ...` or HTML attributes like `onkeydown="..."` (depending on how a page is written); see the sketch after this list.
  • Attackers (or sites) can use other events: `keyup`, `keypress`, `input`, `beforeinput`, etc.
  • It only affects what goes through `addEventListener`; code can listen in other ways.
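
To illustrate the first bypass above (the `search` element id is a made-up example), a page can attach the very same logic without ever calling `addEventListener`:

Code:
// Property-based handler: not registered through addEventListener, so a defuser misses it
document.getElementById('search').onkeydown = (e) => {  // 'search' is a made-up id
  console.log('key pressed:', e.key);
};

// Inline-attribute style does the same thing directly in HTML:
//   <input id="search" onkeydown="console.log(event.key)">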

Usability impact
It can break legitimate keyboard functionality (site shortcuts, web apps/editors, games, accessibility features, search boxes with special key handling, etc.).

---

3) Rule #3: `##+js(aeld, submit)` — what protection it provides
Code:
...##+js(aeld, submit)

This blocks registration of `submit` event listeners via `addEventListener('submit', ...)`.

What this can help against
  • Form interception/hijacking patterns where scripts hook the submit event to read/modify data right before sending.
  • Some malicious/annoying behaviors like forced validation loops, “are you sure” traps, or submit-time rewriting (though many sites do this for legitimate validation).

Important limitations
  • It does not necessarily block `onsubmit=` attribute/property handlers.
  • It doesn’t stop “submit-like” flows implemented via button `click` handlers, AJAX fetch calls, or direct `form.submit()` calls.
  • It won’t prevent credential theft if the page is already fully malicious (it can simply send data without relying on a submit handler).

Usability impact
It can break normal logins/checkout flows, client-side validation, SPA navigation tied to form submits, and some SSO pages.

---

Bottom line
Your three rules mainly provide anti-obfuscation (`noeval`) and event-hook suppression (`aeld` for `keydown`/`submit`) on a very broad set of sites (everything not ending in those TLDs). That can reduce exposure to certain sloppy/malicious scripts and some DOM-based abuse, but it’s not a reliable “security layer” by itself, and it will predictably cause functional breakage on many legitimate sites that rely on those events.

If you want, I can also map these to common “DOM exploit” scenarios (DOM XSS, `postMessage` abuse, formjacking) and where these rules help vs. don’t help—without guessing about any specific site’s safety.
 
@Bot

Yes, for the benefit of other users as well. ;)
Here’s how those three uBO scriptlet rules relate to common “DOM exploit” / client-side attack scenarios, and where they help vs. where they don’t.

Quick recap: what your rules do
They apply to any site whose hostname does not end in one of the listed TLDs, and then:
  • `noeval` tries to neuter dynamic string-to-code execution (primarily `eval()` and close equivalents).
  • `aeld, keydown` tries to prevent pages from registering `keydown` listeners via `addEventListener`.
  • `aeld, submit` tries to prevent pages from registering `submit` listeners via `addEventListener`.

That means these rules primarily reduce risk from a subset of JavaScript techniques, not from “the DOM” as a whole.

---

1) DOM-based XSS (client-side XSS)
What it is: A site’s JS reads attacker-controlled data (URL, fragment, postMessage, storage) and injects it into the page in a way that becomes executable.

Where your rules can help
  • If the vulnerable code path ends up doing something like:
    • building a string and calling `eval(...)`, or
    • using `new Function("...")` / “string timers”
    then `##+js(noeval)` can sometimes prevent the final “string becomes code” step.
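
A minimal sketch of that eval-style subtype (the `cb` parameter name is illustrative); this is the kind of pattern where `noeval` can make a difference:

Code:
// Eval-style DOM XSS: a URL parameter is executed as code
const cb = new URLSearchParams(location.search).get('cb');  // e.g. ?cb=alert(document.domain)
if (cb) {
  eval(cb);  // with the noeval scriptlet active, this string-to-code step is defused
}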

Where they usually don’t help
  • A lot of DOM XSS doesn’t require `eval` at all (e.g., unsafe `innerHTML`, `insertAdjacentHTML`, script injection via DOM APIs). In those cases, `noeval` may do nothing.
  • Blocking `keydown`/`submit` listeners does not stop the core issue (unsafe DOM writes). At best it can interfere with some “payload delivery” tricks, but it’s not a dependable mitigation.

Takeaway: Your strongest overlap with DOM XSS is only the “eval-style” subtype of DOM XSS.

---

2) “Formjacking” / credential theft via injected scripts
What it is: Malicious code on a page captures what you type into forms and sends it away (often discussed around payment pages, logins, etc.).

Where your rules can help
  • `aeld, submit` can break one common pattern: “hook the form submit event, read fields, then exfiltrate.”
  • `aeld, keydown` can break a simple keylogger pattern that logs keystrokes as you type.

Where they don’t help (very important)
  • Many real-world stealers don’t need `submit` at all:
    • They can hook `input`, `change`, `keyup`, `beforeinput`, etc.
    • They can read the DOM fields on a timer.
    • They can attach handlers via `onkeydown` / `onsubmit` properties or inline HTML attributes rather than `addEventListener`.
    • They can intercept button clicks instead of form submits.
    • They can directly send data via `fetch`/XHR at any time.
  • If the entire page is malicious/phishing, it can simply collect your data by design—no need for event listeners you’ve blocked.

Takeaway: These are partial speed bumps against some form/keystroke capture patterns, not a robust “anti-formjacking” solution.

---

3) Malvertising / drive-by script loaders / heavy obfuscation
What it is: Script chains that dynamically unpack and execute code, often using obfuscation and runtime code generation.

Where your rules can help
  • `noeval` can disrupt families of scripts that rely on `eval`-style unpacking to run the next stage.

Where they don’t help
  • Modern malicious scripts can avoid `eval` entirely, or use other browser features/APIs to do staging.
  • This doesn’t stop a browser-engine vulnerability if one is triggered by parsing, rendering, media codecs, etc.

Takeaway: `noeval` is most valuable as anti-obfuscation / anti-runtime-codegen, not as a general exploit shield.

---

4) “DOM exploit” meaning a browser-engine vulnerability triggered via DOM
Sometimes people say “DOM exploit” when they mean a memory-corruption bug triggered by DOM operations (engine bug, sandbox escape, etc.).

Where your rules can help
  • Possibly, indirectly, if the exploit chain requires `eval` or specific event listeners to assemble/run the trigger code.

Where they don’t help
  • If the engine bug is reachable without `eval` or those event listeners, your scriptlets won’t stop it.
  • Patching (browser updates) and exploit mitigations are the primary defense here.

Takeaway: These rules are not a substitute for keeping Chrome/Firefox fully updated.

---

5) Annoyance/security-adjacent behaviors (not “exploits”)
These scriptlets often shine against “gray area” site behavior:
  • Keyboard trap / disabling shortcuts / anti-copy/anti-inspect scripts → `aeld, keydown` can help.
  • Forced submit interception (nag screens, manipulative validation loops) → `aeld, submit` can help.

This improves control and reduces some attack surface, but it’s more hardening + anti-abuse than “exploit prevention.”

---

Practical framing for other users (accurate and non-alarmist)
  • These rules provide targeted JS hardening (less runtime code execution; fewer hooks for `keydown`/`submit` via `addEventListener`).
  • They are not comprehensive defenses against DOM XSS, phishing, or browser 0-days.
  • They can break legitimate site features, especially web apps that rely on keyboard shortcuts or submit handling.

Conclusion: the main “security” value here is reducing reliance on dynamic code execution (`noeval`) and blocking two common event-hooking points (`keydown`/`submit`)—useful against some script-based abuse, but not a general-purpose guarantee against DOM exploits.
 
@Bot
Thank you, it is good to know the limits of security, which is inversely proportional to usability.
You’re welcome—framing it as a trade‑off is the right mental model.

A useful nuance for other readers
Security isn’t always *strictly* “inversely proportional” to usability, but many hardening steps (especially ones that remove features or block classes of scripting behavior) do sit on that curve: the more you restrict, the more breakage/maintenance you tend to take on.

How to keep the trade-off reasonable (practical approach)
  • Prefer “default allow + fix as needed”: keep broad hardening minimal, then add per-site exceptions when something important breaks (an example exception rule follows this list).
  • Measure breakage vs benefit: `noeval` can meaningfully reduce some shady script behavior with less breakage than event defusers; `aeld keydown/submit` tends to be more disruptive on modern web apps.
  • Layer defenses that don’t hurt usability much:
    • Up-to-date browser and OS
    • Reputable DNS/URL filtering (if you like that layer)
    • A solid AV with web protection (as a backstop)
    • Good account hygiene (password manager + MFA)
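
For example, if one of these rules breaks a site you trust, a per-site exception can be added to uBO's "My filters" (the hostname below is a placeholder):

Code:
! Stop injecting the noeval scriptlet on this specific site (hostname is a placeholder)
example.co.uk#@#+js(noeval)

! Allow keydown listeners again on this specific site
example.co.uk#@#+js(aeld, keydown)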

What your current setup is good at
Reducing exposure to certain obfuscated script patterns and some common event-hook-based tricks—while accepting that it won’t stop every DOM-based bug or a true browser-engine exploit.

Conclusion: treating hardening as a dial (not an on/off switch) and keeping “breakage budget” low is usually the best way to get durable security improvements without making the web unbearable.
 
Eventually you find your browser stops working. There are some sites I can't use because of my setup and I accept that; however, blocking every hole eventually breaks the net, rendering the point of having a browser in the first place pointless. Balance is required... (although if the net does break, you can say "look how secure my system is"). Unplugging the router also works :)
 
That's why you wrote that the three rules apply, for example, to websites ending in .kr (Korea) or .cn (China), and not to classic websites ending in .com, .org, .io, .eu, and so on.
Websites where Hard Mode dynamic filtering also applies.

The more TLDs you add to the list, the fewer compatibility issues you will have. ;)
 
I wouldn't like to go to a pub or coffee shop with Bot, as a simple question like "how are you, Bot?" could take all the time there is by the time Bot answered. Anyway, nice meeting you again, I must be off :p:p:p
Yep, sometimes when we ask for Bot's help, it may be better to ask it to summarize, so we don't get a doctoral thesis that makes most of our eyes glaze over :)