BSI – Security of AI-Systems: Fundamentals – Adversarial Deep Learning

“In project 464, subproject 1, ‘Security of AI Systems: Adversarial Deep Learning’, the BSI investigated the security of connectionist neural networks. This field of research is subsumed under the more general term adversarial machine learning. The threats considered include evasion attacks, i.e., specifically crafted inputs that shift the model’s output; poisoning and backdoor attacks, i.e., weaknesses implanted in the model; and privacy attacks, which extract information from the model. The study presents best-practice guidelines for the certification and verification of neural networks, as well as defense techniques against evasion, poisoning, backdoor, and privacy attacks.”

https://www.bsi.bund.de/SharedDocs/Downloads/EN/BSI/KI/Security-of-AI-systems_fundamentals.html
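
As a concrete illustration of the evasion attacks mentioned in the abstract, below is a minimal sketch of the fast gradient sign method (FGSM), a standard evasion technique, assuming a PyTorch image classifier with inputs in [0, 1]; the names model, x, and y are hypothetical placeholders and are not taken from the BSI study.

import torch
import torch.nn.functional as F

def fgsm_evasion(model, x, y, epsilon=0.03):
    # Craft an adversarial input with one signed-gradient step:
    # perturb each input dimension in the direction that increases
    # the classification loss, with the step bounded by epsilon so
    # the change to the original input stays small.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    perturbed = x_adv + epsilon * x_adv.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()  # keep a valid pixel range

The small, bounded perturbation is what makes this an evasion attack in the sense of the abstract: the input remains visually almost unchanged, yet the model’s output is shifted.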