An F Secure Safe follow up
[QUOTE="509322, post: 567321"]
Unfortunately, test lab results are often misinterpreted. Worse, software publishers market the results as a blanket validation of their software, which is problematic on many levels. Every single 100% performance score comes with caveats, and those caveats span a broad range of exceptions and limitations, from capabilities to usability.

The fact of the matter is that testing is highly imperfect, and the full range of testing problems is never explained in the test reports.

"This is what the test report states, but this is what it actually means." Two different things.

Readily available, easy-to-understand, comprehensive transparency is not one of the security software industry's strong points.

This is just my personal opinion.

There are many internal industry debates on these matters, with little agreement as to what is optimal. As with most things in life, cost largely dictates how testing is performed and reported.
[/QUOTE]