Comparing this report with the previous one, probably one of the biggest reasons for these differences is that they've switched the benchmarking software from PCMark to Procyon.
That really shows how subjective these tests are: they basically switched benchmarks and we got different results. It is a nice comparison, but pretty much useless; there are too many variables.
Aren't the AVC score and the Procyon score two different things? The same was true in their previous tests with PCMark: if you look at the AVC score, Avast/AVG and Avira are 4th, 5th, and 6th, but if you look at their Procyon score, their average is 90.7, which is the lowest of all the products tested.
So I think the AVC metrics and the Procyon metrics are different; the impact score is related to the AVC score, not the Procyon score.
I'm rather confused as to how Avast/AVG and Avira got such a low score on Procyon. Avast and AVG are the same thing, so I can understand that, but Avira's core product is completely different, with different engines, a different behavior blocker, etc., yet they got the same score.
Anyway, these tests give a base idea, but a user should always test products on their own system to find out which one is light for them. For example, if Microsoft Defender were really as light for everyone at common things like app launching as this test shows, I wouldn't have had to add certain files on my system to Defender's exclusions. MD is the only AV that makes these apps start with a delay, and that's without mentioning the random delays it causes in some other operations. A couple of these apps still start with a slight delay even after being added to the exclusions (MD was on default settings, not tweaked).

There are many users on this forum who wouldn't have installed a third-party AV if Microsoft Defender were lighter. So ultimately, how a product performs on your own system matters more than any test conducted by anyone.
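As an aside, for anyone who wants to script the same kind of exclusion: Defender exclusions can be added from an elevated PowerShell session with the Add-MpPreference cmdlet. Here is a minimal Python sketch of that; the folder path is just a placeholder, not one of the apps mentioned above.

```python
import subprocess

# Minimal sketch: add a folder to Microsoft Defender's exclusion list
# by invoking PowerShell's Add-MpPreference cmdlet. This must be run
# from an elevated (administrator) session. The path below is a
# placeholder, not one of the apps discussed in the post.
app_path = r"C:\Tools\MyApp"

subprocess.run(
    ["powershell", "-NoProfile", "-Command",
     f"Add-MpPreference -ExclusionPath '{app_path}'"],
    check=True,  # raise an error if the command fails (e.g. not elevated)
)
```

Whether exclusions are a good idea at all is a separate question, of course; they trade some protection for responsiveness.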