Huorong Internet Security v6 BETA
[QUOTE="ShenguiTurmi, post: 1081658, member: 99409"]
No cloud. But v6 adds ML, memory detection, and ransomware bait. The ML model runs on-device and does not require a network connection.

At the recent Intel AI PC press conference, Intel invited Huorong to showcase running their ML model on Intel NPUs, which is said to significantly reduce CPU usage. It seems this beta version cannot call the NPU yet, though.

Because the entire Intel press conference was conducted in Chinese, there is little point in posting the video here. Briefly, what I saw: in the version Huorong showcased, there were three options for running the ML model — CPU (ONNX), GPU (OpenVINO? I'm not sure), and NPU.
[/QUOTE]
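The three backends described (CPU via ONNX, GPU possibly via OpenVINO, and NPU) map naturally onto the execution-provider model used by runtimes like ONNX Runtime: the product picks the most capable backend that the machine actually exposes and falls back otherwise. A minimal, hypothetical sketch of that selection logic — the provider names and preference order here are illustrative assumptions, not Huorong's actual code:

```python
# Hypothetical backend-selection sketch for an on-device ML model,
# mirroring the three options (NPU, GPU, CPU) shown in the Intel demo.

PREFERENCE = [
    "NPUExecutionProvider",       # assumed name for an NPU backend
    "OpenVINOExecutionProvider",  # GPU path via OpenVINO (per the demo; unconfirmed)
    "CPUExecutionProvider",       # ONNX Runtime's default CPU backend
]

def pick_provider(available):
    """Return the most preferred execution provider that is available."""
    for provider in PREFERENCE:
        if provider in available:
            return provider
    raise RuntimeError("no usable execution provider")

# On this beta the NPU path is reportedly absent, so only GPU/CPU would match:
print(pick_provider(["CPUExecutionProvider", "OpenVINOExecutionProvider"]))
# → OpenVINOExecutionProvider
```

With an ordered fallback like this, the same scanner binary can run on machines without an NPU or a supported GPU, which would explain why the beta silently runs the model on the CPU today.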