Malware News
We live in a world where threat actors can sign their malware so it looks just like the official Zoom installer. How can this be real?

Khushal

Apr 4, 2024

(Don't) TrustConnect: It's a RAT in an RMM hat


That Proofpoint post is essentially describing a common modern tradecraft pattern: attackers ship something that looks like a “legit” Remote Monitoring & Management (RMM) tool, but it’s operated like a RAT (remote access trojan) for covert, persistent control.

Why “RMM hat = RAT behavior” matters
Even if the software presents itself as “remote support,” what makes it dangerous is the surrounding behavior and operator intent:
  • Deceptive delivery (fake “Zoom / Teams / Adobe Reader” installers, “proposal” lures, etc.)
  • Silent/stealthy remote control and persistence
  • Credential theft / session hijacking risk once interactive access exists
  • Use as an entry point for follow-on activity (data theft, ransomware, lateral movement)

If you suspect TrustConnect (or any unknown RMM) was installed
Treat it as a potential compromise, not just a “potentially unwanted app.”

1) Contain first (low-risk, high-value)
  • Disconnect the affected PC from the network (unplug Ethernet / disable Wi-Fi).
  • If this is a business environment, isolate the endpoint at the switch/VLAN level if possible.
  • Do not continue logging into sensitive sites from that machine.

2) Verify whether it’s present
On Windows, check for:
  • Installed apps: Settings > Apps (or Programs and Features)
  • Running processes/services (Task Manager; services.msc)
  • Persistence points with Sysinternals Autoruns (Logon, Services, Scheduled Tasks, Drivers)
  • New local users / added admins: Computer Management > Local Users and Groups

3) Collect basic indicators (before removal, if possible)
This helps confirm what happened and supports a clean remediation:
  • Installer filename and full path
  • Any installed folder path (often under Program Files / ProgramData / AppData)
  • Service name(s), scheduled task name(s)
  • Recent remote access activity (Windows Event Logs, Security log for logons; RDP enabled changes)
  • Outbound connections (EDR telemetry if available; otherwise basic netstat / firewall logs)
If you can safely obtain the installer/binary, submit it to VirusTotal for multi-engine triage (expect false positives/variance, but it’s useful context).
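Before uploading, it helps to record the SHA-256 locally so you can compare it against published IOC hashes and retain the indicator even if the file is later deleted. A minimal sketch in Python (chunked reads, so a large installer isn't loaded into memory at once):

```python
import hashlib
import sys

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 of a file in 1 MB chunks (safe for large installers)."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        # Read until EOF; iter() with a b"" sentinel stops at end of file.
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    if len(sys.argv) > 1:
        print(sha256_of(sys.argv[1]))
```

The resulting hex digest can be compared against published IOC hashes or pasted into VirusTotal's search without uploading anything.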

4) Remediate (practical approach)
  • If confirmed malicious/unauthorized: remove it (uninstall + delete leftovers + remove persistence entries).
  • Run a full scan with a reputable AV and a second-opinion scanner (e.g., Microsoft Defender + Malwarebytes/ESET Online Scanner).
  • Assume credentials used on that machine may be exposed: reset passwords from a known-clean device, prioritize email, banking, VPN, and admin accounts.
  • Enable MFA wherever possible.

5) When to consider wipe/reimage
If the tool had admin-level control, persistence, or you can’t fully account for changes, the safest endpoint-level fix is typically a full wipe/reinstall (or restore from a known-good image) followed by credential resets.

Key takeaway
“Looks like remote support” is not a safety indicator—unauthorized RMM-style access should be handled like a full remote compromise until proven otherwise.

You've got to admit, though, having Zoom, Microsoft Teams, or Adobe Reader signed by TrustConnect seems pretty suspicious if you know what to look for and pay attention.

I also don't get this distinction in the report:
However, the revocation of the certificate was not backdated, so the old signed files remained valid. This aligns with the actor stopping new subscriptions, but current customers could still distribute the files via email campaigns.
Since the revocation status is checked at runtime, running a previously signed (before the revocation) or subsequently signed (after the revocation) .exe would make little difference: the system would flag the file as having a revoked certificate.
 
Can such fake certificates always pass the reputation check of security solutions?
I am pretty sure that the ASR "Block executable files from running unless they meet a prevalence, age, or trusted list criteria" would block this for at least a few days because of low prevalence, young age, and probably not on the trusted list, but I am not sure how long it would remain blocked. My legit file manager (Free Commander), with a valid signature from the dev, was blocked for a few days after the latest update release.
 
Can the fake certificate fool SAC easily? Does WDAC follow the exact same steps as SAC?
 
Technical Analysis & Remediation

MITRE ATT&CK Mapping

  • T1566.002 (Spearphishing Link)
  • T1553.002 (Code Signing)
  • T1219 (Remote Access Software)

CVE Profile
N/A [CISA KEV Status: Inactive]
(This campaign relies entirely on EV certificate trust abuse and social engineering rather than exploiting software vulnerabilities).

Telemetry

Hashes
cee6895f7df01da489c10bf5b83770ceede79ed4e1c8c4f8ea9787a4d035c79b (TrustConnectAgent.exe)

IPs
178.128.69.245

Domains
trustconnectsoftware.com

Registry Keys
None identified (insufficient evidence).

Constraint
The payload structure resembles a ~35 MB .NET Core single-file executable that bundles legitimate brand metadata.

Remediation - THE ENTERPRISE TRACK (NIST SP 800-61r3 / CSF 2.0)

GOVERN (GV) – Crisis Management & Oversight

Command
Update supply chain and software approval policies to strictly define permitted RMM tools, and explicitly quarantine unauthorized remote access software regardless of valid digital signatures.

DETECT (DE) – Monitoring & Analysis

Command
Implement SIEM hunting queries for network traffic to 178.128.69.245 and unauthenticated WebSocket connections typical of this RAT's C2 infrastructure.
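If no SIEM is available, even a quick pass over exported firewall or proxy logs will catch the published network IOCs. A minimal sketch (the log line format is an assumption; only the substring match matters):

```python
# Published IOCs for this campaign (from the telemetry section above).
IOC_IP = "178.128.69.245"
IOC_DOMAIN = "trustconnectsoftware.com"

def match_iocs(lines):
    """Return (line_number, line) pairs that reference a known IOC."""
    hits = []
    for number, line in enumerate(lines, start=1):
        if IOC_IP in line or IOC_DOMAIN in line:
            hits.append((number, line.rstrip()))
    return hits
```

For example, feeding it exported lines like `"ALLOW tcp 10.0.0.5 -> 178.128.69.245:443"` flags the connection with its line number, which makes it easy to pivot back into the full log for timestamps and source hosts.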

RESPOND (RS) – Mitigation & Containment

Command
Isolate endpoints executing unexpected ZoomWorkspace.exe or MsTeams.exe binaries originating from non-standard directories.

RECOVER (RC) – Restoration & Trust

Command
Validate eradication by verifying the absence of unauthorized LogMeIn, ScreenConnect, or TrustConnect agents before initiating phased network restoration.

IDENTIFY & PROTECT (ID/PR) – The Feedback Loop

Command
Harden application control policies to block binaries signed by "TrustConnect Software PTY LTD" and restrict standard user execution of unknown .NET single-file applications.
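As one way to implement that block, an AppLocker publisher deny rule keyed to the abused subject name would look roughly like the sketch below. The publisher DN is a placeholder inferred from the report, not a confirmed value; pull the exact subject string from a captured sample with `Get-AppLockerFileInformation` before deploying, and run in audit mode first.

```xml
<AppLockerPolicy Version="1">
  <RuleCollection Type="Exe" EnforcementMode="Enabled">
    <FilePublisherRule Id="a9e18c21-ef44-4b05-9a1f-6b2f3c000001"
                       Name="Deny TrustConnect Software PTY LTD"
                       Description="Block binaries signed by the abused publisher"
                       UserOrGroupSid="S-1-1-0" Action="Deny">
      <Conditions>
        <!-- Placeholder DN: replace with the exact subject from the sample's
             signature before enforcing. -->
        <FilePublisherCondition PublisherName="O=TRUSTCONNECT SOFTWARE PTY LTD, C=AU"
                                ProductName="*" BinaryName="*">
          <BinaryVersionRange LowSection="*" HighSection="*" />
        </FilePublisherCondition>
      </Conditions>
    </FilePublisherRule>
  </RuleCollection>
</AppLockerPolicy>
```

A WDAC equivalent would be a Deny signer rule on the same certificate subject; either way, a signer-based block survives the actor re-hashing the payload, which a hash rule does not.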

Remediation - THE HOME USER TRACK (Safety Focus)

Priority 1: Safety

Command
Disconnect from the internet immediately if a fake Zoom or MS Teams installer was executed.

Command
Do not log into banking/email until verified clean.

Priority 2: Identity

Command
Reset passwords/MFA using a known clean device (e.g., phone on 5G).

Priority 3: Persistence

Command
Check Scheduled Tasks, Startup Folders, and installed applications for unauthorized remote management tools (such as unexpected ScreenConnect or LogMeIn instances).

Hardening & References

Baseline

CIS Benchmarks.

Framework
NIST CSF 2.0 / SP 800-61r3.

Source

Proofpoint Threat Insight

Vectra AI: Weaponization of EV Certificates

Proofpoint: RMM Tooling

CyberArk
 
Since the revocation status is checked at runtime, running a previously signed (before the revocation) or subsequently signed (after the revocation) .exe would make little difference: the system would flag the file as having a revoked certificate.
When a code-signing certificate expires or is revoked, it doesn't retroactively invalidate old signatures during verification checks unless the CA backdates the revocation accordingly. The files already signed will continue to be deemed legitimate.

This is the same reason that Windows system files with expired certificates are still valid. Their signatures aren't invalidated by this thanks to their verifiable timestamps.

Backdating isn't normal for routine revocations, but it would make sense when addressing a malware campaign or other abuse. I wonder why they didn't in this case.
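The timestamp interplay can be sketched as a toy model (pure illustration, not a real Authenticode verifier): a countersigned timestamp proves when signing happened, so an already-signed file is only rejected if the recorded revocationDate is backdated to before that signing time.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class SignedFile:
    signing_time: datetime  # from the trusted (countersigned) timestamp

def signature_accepted(f: SignedFile,
                       cert_expiry: datetime,
                       revocation_date: Optional[datetime]) -> bool:
    """Toy model of timestamp-aware verification.

    - Expiry: a timestamped signature survives certificate expiry as long as
      signing happened while the cert was still valid.
    - Revocation: the file is rejected only if it was signed on or after the
      recorded revocationDate, i.e. backdating the revocation to the
      compromise date is what kills already-signed files.
    """
    if f.signing_time > cert_expiry:
        return False
    if revocation_date is not None and f.signing_time >= revocation_date:
        return False
    return True
```

Under this model, a file signed in January stays valid after a February revocation unless the CA backdates the revocationDate past the signing time, which matches the behavior described in the article.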
 
When a code-signing certificate expires or is revoked, it doesn't retroactively invalidate old signatures during verification checks unless the CA backdates the revocation accordingly. The files already signed will continue to be deemed legitimate.

This is the same reason that Windows system files with expired certificates are still valid. Their signatures aren't invalidated by this thanks to their verifiable timestamps.

Backdating isn't normal for routine revocations, but it would make sense when addressing a malware campaign or other abuse. I wonder why they didn't in this case.
Proofpoint and researchers from "The Cert Graveyard" secured the revocation on February 6, 2026, but the unnamed CA did not backdate it. While that CA has not published an incident report explaining the refusal, cryptographic standards point to the root cause: a long-standing conflict between the strict IETF standard (RFC 5280) and practical malware mitigation.

CAs frequently refuse to backdate because legacy PKI standards dictate that the revocationDate must reflect the exact moment the CA hit the "revoke" button, reserving the actual time of compromise for a separate, rarely checked field called invalidityDate. Backdating therefore means breaking protocol, which automated CA systems are not designed to do.
 
I agree with @Miravi: revoking a certificate doesn't erase the validity of what was already signed. It's like taking away someone's key: if they had already made copies, those copies will still open the old locks. That's why programs signed before revocation still look legitimate to Windows. And sure, even official keys can sometimes be cloned or stolen, but they're still safer than downloading something from a shady website. In short: going to the official page isn't a 100% bulletproof guarantee, but it will save you from 99% of the problems. 🔑🛡️📦
 
Proofpoint and researchers from "The Cert Graveyard" secured the revocation on February 6, 2026, but the unnamed CA did not backdate it. While that CA has not published an incident report explaining the refusal, cryptographic standards point to the root cause: a long-standing conflict between the strict IETF standard (RFC 5280) and practical malware mitigation.

CAs frequently refuse to backdate because legacy PKI standards dictate that the revocationDate must reflect the exact moment the CA hit the "revoke" button, reserving the actual time of compromise for a separate, rarely checked field called invalidityDate. Backdating therefore means breaking protocol, which automated CA systems are not designed to do.
Yes, RFC 5280 does technically recommend that the revocationDate reflects the moment that the revocation was processed. The separate invalidityDate extension is also prone to being ignored by a variety of PKI verifiers, legacy or otherwise. In the reality of code signing, however, it's totally commonplace to override this best practice. As the CA/Browser Forum explicitly states in a footnote:
Backdating the revocationDate field is an exception to best practice described in RFC 5280 (section 5.3.2); however, these Requirements specify the use of the revocationDate field to convey the “invalidity date” to support Application Software Supplier software implementations that process the revocationDate field as the date when the Certificate is first considered to be invalid.

Additionally, the situation has been simplified since 2021: Ballot CSC-12 required that for code signing certs (in CRL entries with thisUpdate after July 2022), invalidityDate must equal revocationDate if present.
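That CSC-12 constraint is trivial to lint for once CRL entries are parsed out; a sketch with illustrative field names (not tied to any particular ASN.1 library):

```python
def csc12_compliant(entry: dict) -> bool:
    """CSC-12 rule for code-signing CRL entries (thisUpdate after July 2022):
    if invalidityDate is present, it must equal revocationDate."""
    invalidity = entry.get("invalidity_date")
    return invalidity is None or invalidity == entry["revocation_date"]
```

So a post-2022 code-signing CRL entry carrying an invalidityDate that differs from its revocationDate is itself a compliance finding, independent of whether the CA chose to backdate.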

For EV certificates like the one referenced in this particular article, DigiCert is a likely CA to have issued it. They hold the largest market share for publicly trusted EV certificates (~59% according to recent data), making them a frequent target of abuse by threat actors.

DigiCert's revocation interface specifically allows subscribers/admins to set a custom revocation date and time when the reason is key compromise. If you know the compromise date, you can backdate it directly in the portal. This is built into their workflow for code signing/EV code signing revocations, meaning it's an intentional, sensible feature of their automated systems.
 
Yes, RFC 5280 does technically recommend that the revocationDate reflects the moment that the revocation was processed. The separate invalidityDate extension is also prone to being ignored by a variety of PKI verifiers, legacy or otherwise. In the reality of code signing, however, it's totally commonplace to override this best practice. As the CA/Browser Forum explicitly states in a footnote:


For EV certificates like the one referenced in this particular article, DigiCert is a likely CA to have issued it. They hold the largest market share for publicly trusted EV certificates (~59% according to recent data), making them a frequent target of abuse by threat actors.

DigiCert's revocation interface specifically allows subscribers/admins to set a custom revocation date and time when the reason is key compromise. If you know the compromise date, you can backdate it directly in the portal. This is built into their workflow for code signing/EV code signing revocations, making it an intentional, sensible feature of their automated systems.

Additionally, the situation has been simplified since 2021: Ballot CSC-12 required that for code signing certs (in CRL entries with thisUpdate after July 2022), invalidityDate must equal revocationDate if present.
You are absolutely spot-on with both points. The CA/Browser Forum footnote you quoted permits backdating precisely because they know client software generally ignores the standard invalidityDate extension.

Your point about DigiCert's CertCentral portal is also highly relevant. While the public threat intel hasn't explicitly named the CA (so we have to treat DigiCert as a very strong statistical probability rather than a confirmed fact for this specific payload), you've perfectly highlighted the operational reality: modern CA infrastructure does natively allow admins to set a 'custom revocation date and time'.

This completely shifts the failure domain. It means the failure to backdate the TrustConnect certificate wasn't a technical limitation of the CA's systems but a human/process error during the incident response. Either the researchers reporting the abuse failed to calculate and formally request the compromise timestamp in their ticket, or the tier-1 CA abuse handler just hit the default 'Revoke Now' button without using the custom-date workflow. It's a perfect example of how administrative friction can leave a weaponized payload cryptographically valid even after the threat is 'taken down'.