Assigned Zemana - Deepfake Detection SDK

This thread is being handled by a member of the staff.

Miss Onnellisuus

From Zemana
Thread author
Verified
Developer
Dec 12, 2018
60
There is an old saying: “I don’t believe it until I see it.” Well, these days you shouldn’t believe it even when you see it, because your eyes are no longer your ally.

I am pretty sure most of you have heard the term "deepfakes", which became quite popular in the last few months. The technology is only a few years old, but it has already exploded into something that is both scary and fascinating. The term "deepfake" describes the recreation of a human’s appearance or voice through artificial intelligence.

A deepfake is a fabricated video, either created from scratch or based on existing material, typically designed to replicate the look and sound of a real human who is saying and doing things they haven’t done or would not ordinarily do. In practice, most of them are doctored videos that swap one face for another.
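
For the technically curious, the face swapping that most deepfakes rely on usually comes down to a shared encoder with one decoder per identity. Below is a very rough, simplified sketch of that idea in Python/PyTorch. It is a generic illustration of the public technique, not our detection technology or any particular tool, and every class and variable name in it is made up for the example.

# Hypothetical sketch of the classic face-swap autoencoder idea:
# one shared encoder learns a common "face" representation,
# and one decoder per identity learns to reconstruct that person.
# Swapping = encode person A's face, decode it with person B's decoder.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(), # 32 -> 64
        )

    def forward(self, z):
        x = self.fc(z).view(-1, 64, 16, 16)
        return self.net(x)

# Shared encoder, separate decoders for identity A and identity B.
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

# After training each decoder to reconstruct its own identity,
# the "swap" is simply: encode a frame of A, decode with B's decoder.
frame_of_a = torch.rand(1, 3, 64, 64)    # stand-in for a cropped, aligned face
swapped = decoder_b(encoder(frame_of_a)) # B's face with A's pose and expression

The key point for detection is that this re-rendering and blending step leaves subtle statistical traces, which is exactly what forensic tools look for.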

People with even a little technical background might be suspicious and question which video is real and which one is fake, but millions of people who get their news from the web don’t even know that "deepfakes" exist.

Individuals who become victims can be harmed in many ways: damaged reputations, lost jobs, blackmail and more. And by the time they prove that a video of them is fake, the damage may already be done. Part of the threat from deepfakes is that even when we're told they are fake, they can still impact our beliefs. The emotional resonance of what we've seen can be stronger than the knowledge that we're being manipulated.

That's why we here at Zemana have been working hard to find a solution to this problem. And we did it!

We proudly present to you our new product, "Deepfake Detection SDK", which we are going to launch at the Gitex Technology Week 2019 show in Dubai.

The Deepfake Detection SDK was designed to detect deepfake videos and, more broadly, any fake content in visual and audio communication.

This will enable governments, social media platforms, instant messaging apps and media to detect AI-made forgery in digital content before it can cause social harm.
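
To give a rough idea of what such an integration could look like in practice, here is a purely hypothetical sketch of a platform gating uploads on a detector's verdict. The SDK's real interface is not described in this post, so every class, function and threshold below (DeepfakeDetector, scan_video, the 0.9 cut-off) is a stand-in invented for illustration only.

# Purely hypothetical illustration of wiring a deepfake-detection SDK
# into an upload pipeline. "DeepfakeDetector" and "scan_video" are
# invented names, NOT the SDK's real API.
from dataclasses import dataclass

@dataclass
class ScanResult:
    is_synthetic: bool
    confidence: float  # 0.0 - 1.0

class DeepfakeDetector:
    """Stand-in for whatever object a real SDK would expose."""
    def scan_video(self, path: str) -> ScanResult:
        # A real SDK would analyze frames and audio here; this is a stub.
        return ScanResult(is_synthetic=False, confidence=0.02)

def handle_upload(video_path: str, detector: DeepfakeDetector) -> str:
    """Gate published content on the detector's verdict."""
    result = detector.scan_video(video_path)
    if result.is_synthetic and result.confidence >= 0.9:
        return "blocked"             # high-confidence forgery: do not publish
    if result.is_synthetic:
        return "flagged_for_review"  # uncertain: route to human moderators
    return "published"

if __name__ == "__main__":
    print(handle_upload("upload.mp4", DeepfakeDetector()))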

If by any chance you will be in Dubai on the 7th of October, visit Gitex Cybersecurity Week to hear more about our new security solution! We would be happy to see you there!
 

Burrito

Level 24
Verified
Top Poster
Well-known
May 16, 2018
1,363

Miss O, always great to see you here at MT.

This sounds very interesting.

As you probably know, the more sophisticated 'deepfakes' in development have methods to effectively integrate the digital editing into the original material without the obvious markers.

I will be interested to learn more about this.

Thanks,

Your Buddy,

-Burrito
 

Burrito

Level 24
Verified
Top Poster
Well-known
May 16, 2018
1,363


Deepfakes have captured the imagination of politicians, the media, and the public. Video manipulation and deception have long been possible, but advances in machine learning have made it easy to automatically capture a person’s likeness and stitch it onto someone else. That’s made it relatively simple to create fake porn, surreal movie mashups, and demos that point to the potential for political sabotage.

....

Tech companies have promoted the idea that machine learning and AI will head off such trouble, starting with simpler forms of misinformation. In his testimony to Congress last October, Mark Zuckerberg promised that AI would help the company identify fake news stories. This would involve using algorithms trained to distinguish between accurate and misleading text and images in posts.
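
For anyone wondering what "algorithms trained to distinguish" real from misleading images actually boils down to, here is a bare-bones, generic sketch of fine-tuning an off-the-shelf image model as a real-vs-fake classifier. It only illustrates the general idea, not Facebook's or Zemana's actual system, and the dataset path and folder layout are placeholders.

# Generic sketch of a real-vs-fake image classifier, NOT any vendor's system.
# Assumes a placeholder layout like data/train/real/*.jpg and data/train/fake/*.jpg.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

train_set = datasets.ImageFolder("data/train", transform=transform)  # placeholder path
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: real vs. fake

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # a single pass, just for illustration
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()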



 

Burrito

Level 24
Verified
Top Poster
Well-known
May 16, 2018
1,363

[Image: Deep Fake Burrito]


Within the field of deepfakes, or “synthetic media” as researchers call it, much of the attention has been focused on fake faces potentially wreaking havoc on political reality, as well as other deep learning algorithms that can, for instance, mimic a person’s writing style and voice. But yet another branch of synthetic media technology is fast evolving: full body deepfakes.

Yeah, and what are the porn implications of all this?
 

ForgottenSeer 823865

What happened to JM Safe ?

Like, wasn't he supposed to be some leading malware analyst, reverse-engineering and programming expert?
Junior malware analyst.
He analyzes and reverse-engineers samples, providing support for malware removal.

I know because Zemana offered me the position; during the interview, I made it clear it is not my field of expertise. Then they hired him.
 

KevinYu0504

Level 5
Verified
Well-known
Mar 10, 2017
227
Think they need to improve ZAM prior to adding new features?

Can't agree more.

I don't think he is working for Zemana anymore. Not 100% sure, but I think I heard that somewhere.

The information was from here:
 

[correlate]

Level 18
Top Poster
Well-known
May 4, 2019
801
A Deepfake Deep Dive into the Murky World of Digital Imitation
Deepfake technology is becoming easier to create – and that’s opening the door for a new wave of malicious threats, from revenge porn to social-media misinformation.
About a year ago, top deepfake artist Hao Li came to a disturbing realization: Deepfakes, i.e. the technique of human-image synthesis based on artificial intelligence (AI) to create fake content, is rapidly evolving. In fact, Li believes that in as soon as six months, deepfake videos will be completely undetectable. And that’s spurring security and privacy concerns as the AI behind the technology becomes commercialized – and gets in the hands of malicious actors.
Li, for his part, has seen the positives of the technology as a pioneering computer graphics and vision researcher, particularly for entertainment. He has worked his magic on various high-profile deepfake applications – from leading the charge in putting Paul Walker into Furious 7 after the actor died before the film finished production, to creating the facial-animation technology that Apple now uses in its Animoji feature in the iPhone X.
But now, “I believe it will soon be a point where it isn’t possible to detect if videos are fake or not,” Li told Threatpost. “We started having serious conversations in the research space about how to address this and discuss the ethics around deepfake and the consequences.”
 

eonline

Level 21
Verified
Well-known
Nov 15, 2017
1,064
I don't know whether I should comment on this (moderators can delete this comment), but this "new technology" called deepfake reminds me of the videos made by the world's most wanted man a few years ago. It seems to me that this is not new, since it was used before (at least 20 years ago) by a government.
 

upnorth

Moderator
Verified
Staff Member
Malware Hunter
Well-known
Jul 27, 2015
5,457
Correct! It ain't "new". The tech itself has been around for years, but it just recently gained momentum in the news. Nothing wrong with that in itself, IMO. It has improved, though, and is used even in business etc. It's not just a political tool, and that's important to understand.
 
