Google Introduces SAIF, a Framework for Secure AI Development and Use

All new technologies bring new opportunities, threats, and risks. As businesses concentrate on harnessing the opportunities, the threats and risks can be overlooked. With AI, this could be disastrous for businesses, their customers, and people in general. SAIF offers six core elements to help ensure security in the development and use of AI.

Many existing security controls can be expanded and/or focused on AI risks. A simple example is protection against injection techniques, such as SQL injection. “Organizations can adapt mitigations, such as input sanitization and limiting, to help better defend against prompt injection style attacks,” suggests SAIF.
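The adaptation SAIF suggests can be pictured with a minimal sketch. Everything below is an illustrative assumption, not part of SAIF itself: the function names, the length cap, and the pattern list are hypothetical, and a real deployment would need far more robust detection.

```python
import re

# Illustrative sketch of "input sanitization and limiting" adapted for
# prompt injection. The threshold and patterns below are assumptions
# for demonstration, not recommendations from SAIF.
MAX_PROMPT_CHARS = 2000

# Phrasings often seen in prompt-injection attempts (examples only).
SUSPICIOUS_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.IGNORECASE),
    re.compile(r"reveal (the )?system prompt", re.IGNORECASE),
]

def sanitize_prompt(user_input: str) -> str:
    """Limit length and strip control characters before the input
    reaches the model, mirroring classic SQL-injection hygiene."""
    # Limiting: cap input length to bound the attack surface.
    text = user_input[:MAX_PROMPT_CHARS]
    # Sanitization: drop non-printable control characters.
    return "".join(ch for ch in text if ch.isprintable() or ch in "\n\t")

def looks_like_injection(user_input: str) -> bool:
    """Flag inputs that match known injection phrasings for review."""
    return any(p.search(user_input) for p in SUSPICIOUS_PATTERNS)
```

As with SQL injection, pattern matching alone is easy to evade; the point of the sketch is that the same defensive posture (constrain, clean, and inspect untrusted input) carries over to AI systems.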

Traditional security controls will often be relevant to AI defense but may need to be strengthened or expanded. Data governance and protection become critical to preserving the integrity of the training data that AI systems learn from. The old adage of 'rubbish in, rubbish out' is magnified manyfold by AI, and becomes critical when decisions about businesses and people are based on that rubbish.
