The fact is I have files for which the VT detection rate is above 98%, but Kaspersky and Trend gave them a safe rating.
It could have nothing to do with copied detections, though I know exactly what you're talking about (you already know from my last reply that I agree some vendors copy detections), and I agree that Kaspersky and Trend Micro are good vendors. Aside from copying though, the detections could be raised through generic signatures, static/dynamic heuristics, and/or ML/AI. Remember that reputation checks are nothing new for VirusTotal flagging either.
A generic signature is one used to flag multiple samples rather than just one. Strictly speaking that isn't always true: you can write a generic signature that detects only a single sample until another sample shows up in the future and triggers on the first sample's signature... but I'm sure you see where I'm going with this. There can be many different implementations. You could rely on raw byte patterns as a signature (including wildcards where applicable to keep the signature reliable in case the malware author updates the code), or you could rely on an engine like YARA for pattern matching based on a wider range of criteria (or something equivalent).
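To make the wildcard idea concrete, here is a minimal sketch of byte-pattern matching with wildcard positions. The pattern, sample bytes, and function names are all invented for illustration; a real engine would be far more sophisticated (and a YARA rule would express the same thing declaratively).

```python
# Minimal sketch: a raw byte-pattern signature with wildcard bytes.
# All patterns and sample bytes below are made up for illustration.

WILDCARD = None  # positions the signature does not care about

def matches_at(data: bytes, pattern: list, offset: int) -> bool:
    """Check whether the pattern matches data starting at offset."""
    if offset + len(pattern) > len(data):
        return False
    return all(p is WILDCARD or data[offset + i] == p
               for i, p in enumerate(pattern))

def scan(data: bytes, pattern: list) -> int:
    """Return the first offset where the pattern matches, or -1."""
    for offset in range(len(data) - len(pattern) + 1):
        if matches_at(data, pattern, offset):
            return offset
    return -1

# Signature: 4D 5A ?? ?? 90 -- the ?? wildcards absorb small code changes,
# so the same signature keeps matching updated variants.
sig = [0x4D, 0x5A, WILDCARD, WILDCARD, 0x90]

variant_a = bytes([0x4D, 0x5A, 0x01, 0x02, 0x90, 0x00])
variant_b = bytes([0x4D, 0x5A, 0xFF, 0xEE, 0x90, 0x00])  # author changed two bytes
unrelated = bytes([0x7F, 0x45, 0x4C, 0x46, 0x00, 0x00])

print(scan(variant_a, sig))  # 0
print(scan(variant_b, sig))  # 0 -- the one signature catches both variants
print(scan(unrelated, sig))  # -1
```

The wildcards are what make the signature "generic": both variants hit it even though their bytes differ, which is also why a signature written for one sample can start flagging future samples.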
A new sample could be uploaded to a vendor's cloud and passed through extensive static and dynamic analysis. The logs could be used to decide whether the sample is flagged for the time being, and the sample could be re-assessed through the same system, or manually analysed in the lab, should a false-positive report pop up in the support submissions.
A different example would be Machine Learning/Artificial Intelligence. This could raise many false positives depending on the data-sets used to train the engine... it could lead to any packed binary (whether clean or malicious - genuine developers may use packing to lower file sizes or make reverse engineering harder) being flagged simply for having high entropy, if the samples the model was trained to identify as safe all happened to have low entropy (and thus anything resembling the training samples is declared safe, while anything else is not). It could also do the opposite if the system were intentionally designed that way. It depends on how it is implemented (the type of model, the data-sets, etc.).
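The entropy point can be shown in a few lines. This is a deliberately naive heuristic, not a real ML model: the threshold is made up, and a real classifier would use entropy as one feature among many. It illustrates why packed-but-clean binaries get caught by a model trained only on low-entropy clean files.

```python
# Naive sketch: flagging files purely on Shannon entropy.
# The 7.2 bits/byte threshold is an invented cutoff for illustration.
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy of the byte distribution, in bits per byte (0.0-8.0)."""
    if not data:
        return 0.0
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

ENTROPY_THRESHOLD = 7.2  # hypothetical cutoff

def naive_verdict(data: bytes) -> str:
    return "suspicious" if shannon_entropy(data) > ENTROPY_THRESHOLD else "clean"

plaintext = b"hello world, " * 100     # repetitive text: low entropy
packed    = bytes(range(256)) * 8      # uniform byte spread, like packed/compressed data

print(naive_verdict(plaintext))  # clean
print(naive_verdict(packed))     # suspicious -- even if the packed file is benign
```

A clean installer that was merely UPX-packed would look like the second case, which is exactly the false-positive pattern described above.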
In regards to reputation checks, vendors like Norton may flag a sample just because they have never encountered it before. It could be a harmless "hello world" with a hash checksum they've never seen, and it could be flagged for that reason alone. Fair dues to Norton though, because they usually label it as a reputation detection IIRC.
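A hash-based reputation lookup is conceptually very simple. The sketch below is hypothetical: the "database" is a dict of SHA-256 digests with invented prevalence counts, standing in for the vendor-side telemetry a real reputation system would query.

```python
# Hypothetical sketch of a hash-reputation check. The database contents
# and prevalence numbers are entirely made up for illustration.
import hashlib

seen_before = {
    # digest -> how many customer machines have reported this exact file
    hashlib.sha256(b"a very common installer").hexdigest(): 1_000_000,
}

def reputation_verdict(data: bytes) -> str:
    digest = hashlib.sha256(data).hexdigest()
    if seen_before.get(digest, 0) == 0:
        # Never seen anywhere: not proof of malice, just zero reputation.
        return "unknown/low-reputation"
    return "known"

print(reputation_verdict(b"a very common installer"))  # known
print(reputation_verdict(b'print("hello world")'))     # unknown/low-reputation
```

Note that a freshly compiled "hello world" has a hash no telemetry has ever seen, so it lands in the low-reputation bucket despite being harmless, which is the exact scenario described above.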
It all depends on what the engine supports and how everything combines: which systems are consulted first, whether other systems are skipped depending on one implementation's verdict, whether technology is licensed from a third party and how that operates internally, etc.
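One way the "skipped depending on one implementation's verdict" idea might work is a short-circuiting pipeline. Everything here is invented for illustration: the stage names, the marker string, and the crude stand-in for an ML stage.

```python
# Hypothetical sketch of layered engines with short-circuiting: the first
# stage to return a confident verdict wins, and later (costlier) stages
# are never consulted. All checks below are invented for illustration.

def signature_check(data: bytes):
    return "malicious" if b"EVIL_MARKER" in data else None  # None = no opinion

def reputation_check(data: bytes):
    return "clean" if data.startswith(b"TRUSTED") else None

def ml_check(data: bytes):
    # Stand-in for a model; here just a crude size heuristic.
    return "suspicious" if len(data) > 50 else "clean"

PIPELINE = [signature_check, reputation_check, ml_check]  # cheapest first

def scan(data: bytes) -> str:
    for check in PIPELINE:
        verdict = check(data)
        if verdict is not None:  # first confident stage decides
            return verdict
    return "clean"

print(scan(b"EVIL_MARKER payload"))  # malicious -- ML stage never runs
print(scan(b"TRUSTED installer"))    # clean -- cleared by reputation
print(scan(b"x" * 100))              # suspicious -- fell through to ML
```

Reordering the pipeline, or making one stage's verdict final, changes the overall behaviour, which is why two products sharing the same components can still disagree.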
I think it is worth noting that the engine a vendor puts on VirusTotal is not always the same engine used in the services they provide to customers (be it home or enterprise). They are free to make the engines they submit for VirusTotal usage more aggressive or more lenient than those in their actual customer-facing products, and this is something I see many people ignore. If a vendor called "Farm-Straw Defender" (looks like a fake AV name, actually) flags a sample "post reply.exe" with a hash of <insert SHA-256 hash here>, it could be for a number of reasons, and the sample may not even be flagged when actually using one of their products.
You can try asking the vendor directly why they flagged a sample; it may not even be their fault if they rely on third-party technology.