BSI – Security of AI-Systems: Fundamentals – Adversarial Deep Learning

“In project 464 subproject 1 Security of AI Systems: Adversarial Deep Learning, BSI investigated the security of connectionist neural networks. This field of research is subsumed under the more general term adversarial machine learning. Among these threats are evasion attacks, i.e., specifically crafted inputs that shift the model’s output, poisoning and backdoor attacks, i.e., weaknesses implanted in the model, and privacy attacks, which extract information from the model. The study presents best practice guidelines for certification and verification of neural networks, as well as defense techniques against evasion, poisoning, backdoor, and privacy attacks.”
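The evasion attacks mentioned in the BSI summary can be illustrated with a minimal sketch: a Fast Gradient Sign Method (FGSM)-style perturbation against a toy logistic-regression model. The weights, input, and step size below are invented purely for illustration; the BSI study itself addresses deep neural networks and far more general settings.

```python
import numpy as np

# Toy logistic-regression "model" (weights chosen for illustration only).
w = np.array([2.0, -3.0, 1.0])
b = 0.5

def predict(x):
    """Probability that x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def fgsm_evasion(x, eps):
    """FGSM-style evasion: perturb x to *lower* the class-1 score.

    For logistic regression the input gradient of the class-1 loss is
    proportional to -w, so the attack steps against sign(w)."""
    return x - eps * np.sign(w)

x = np.array([1.0, -1.0, 0.5])      # clean input, confidently class 1
x_adv = fgsm_evasion(x, eps=1.2)    # small, bounded perturbation
print(predict(x), predict(x_adv))   # adversarial score drops below 0.5
```

The same idea, i.e., stepping the input along the sign of the loss gradient, carries over to deep networks, where the gradient is obtained by backpropagation rather than in closed form.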

Spain: The Spanish DPA translated the Anonymization Guide of Singapore’s DPA into Spanish

The Spanish AEPD has translated the Singapore Data Protection Authority's Basic Anonymization Guide, citing its educational value and its special interest to data protection officers and data processors.

The guide is complemented by a free data anonymization tool, which the AEPD makes available to organizations.

Both resources are aimed especially at SMEs and startups.
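As a rough illustration of what basic anonymization guidance typically covers, two common operations are generalization and suppression. The specific techniques used by the AEPD/PDPC tool are not described here, so the sketch below is an assumption, not a description of that tool:

```python
# Illustrative only: the AEPD/PDPC tool's actual techniques are not
# described in the source; this sketch shows two basic operations that
# anonymization guides typically cover: generalization and suppression.

def generalize_age(age, width=10):
    """Replace an exact age with a 10-year band."""
    low = (age // width) * width
    return f"{low}-{low + width - 1}"

def suppress(value):
    """Remove a direct identifier entirely."""
    return "*"

record = {"name": "Ana", "age": 34, "zip": "28001"}
anonymized = {
    "name": suppress(record["name"]),
    "age": generalize_age(record["age"]),
    "zip": record["zip"][:3] + "**",   # partial suppression of the zip code
}
print(anonymized)  # {'name': '*', 'age': '30-39', 'zip': '280**'}
```

Generalization trades precision for a lower re-identification risk; how coarse the bands must be depends on the dataset and the threat model the guide has in view.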

Spanish material:

English source material from Singapore:

Italy: Garante against Google Analytics (Fastweb)

The Italian DPA takes the same position as other EU DPAs.

Notable points: the impact when Google Analytics is used in connection with Google account logins; that Google's "IP anonymisation" is not truly anonymization; and that Google's processing of the data mostly within the EU is not sufficient.

Also, on encryption (google-translated below):

“With regard to the data encryption mechanisms highlighted above, they are not sufficient to avoid the risks of access, for national security purposes, to the data transferred from the European Union by the public authorities of the United States, as the encryption techniques adopted provide that the availability of the encryption key is in the hands of Google LLC which holds it, as an importer, by virtue of the need to have the data in clear text to carry out processing and provide services.”

CNIL: Guidance on artificial intelligence (AI) systems

General guidance on AI

Self-assessment guide which includes seven fact sheets

1. Asking the right questions before using an artificial intelligence system
2. Collecting and qualifying training data
3. Developing and training an algorithm
4. Using an AI system in production
5. Securing the processing
6. Ensuring individuals can fully exercise their rights
7. Achieving compliance

Sidley: The California Age-Appropriate Design Code Act Dramatically Expands Business Obligations

“A business, before offering any new online Product that is likely to be accessed by children, must undertake a Data Protection Impact Assessment (“DPIA”) prior to making the product available. Such a report is a systematic survey to assess and mitigate risks to children, such as physical and mental health, and must be provided to the agency within twelve months of the Act’s enactment and reviewed every two years or before any new features are offered.”

AEPD-EDPS Joint Paper – 10 Misunderstandings about Machine Learning

The EU has identified artificial intelligence (AI) as one of the most relevant technologies of the 21st century and highlighted its importance in the strategy for the EU’s digital transformation. Having a wide range of applications, AI can contribute to areas as disparate as helping in the treatment of chronic diseases, fighting climate change, or anticipating cybersecurity threats.

  • MISUNDERSTANDING: Correlation implies causality.
    • Fact: Causality requires more than finding correlations.
  • MISUNDERSTANDING: When developing machine learning systems, the greater the variety of data, the better.
    • Fact: ML training datasets must meet accuracy and representativeness thresholds.
  • MISUNDERSTANDING: ML needs completely error-free training datasets.
    • Fact: Well-performing ML systems require training datasets above a certain quality threshold.
  • MISUNDERSTANDING: The development of ML systems requires large repositories of data or the sharing of datasets from different sources.
    • Fact: Federated learning allows the development of machine learning systems without sharing training datasets.
  • MISUNDERSTANDING: ML models automatically improve over time.
    • Fact: Once deployed, an ML model’s performance may deteriorate and will not improve unless the model receives further training.
  • MISUNDERSTANDING: Automatic decisions taken by ML algorithms cannot be explained.
    • Fact: A well-designed ML model can produce decisions understandable to all relevant stakeholders.
  • MISUNDERSTANDING: Transparency in ML violates intellectual property and is not understood by the user.
    • Fact: It is possible to provide meaningful transparency to AI users without harming intellectual property.
  • MISUNDERSTANDING: ML systems are less subject to human biases.
    • Fact: ML systems are subject to different types of biases, some of which stem from human biases.
  • MISUNDERSTANDING: ML can accurately predict the future.
    • Fact: ML system predictions are only accurate when future events reproduce past trends.
  • MISUNDERSTANDING: Individuals are able to anticipate the possible inferences that ML systems can make from their data.
    • Fact: The ability of ML to find non-evident correlations in data can lead to the discovery of new data unknown to the data subject.
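The federated-learning fact above can be sketched with a minimal FedAvg-style loop (the synthetic data and hyperparameters here are invented for illustration): each client trains locally, and only model parameters, never raw data, reach the server.

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([1.5, -2.0])

# Two clients hold disjoint private datasets; raw data never leaves a client.
def make_client(n):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.01 * rng.normal(size=n)
    return X, y

clients = [make_client(40), make_client(60)]

def local_step(w, X, y, lr=0.1):
    # One gradient step of least-squares regression on local data.
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

w_global = np.zeros(2)
for _ in range(50):                      # communication rounds
    updates, sizes = [], []
    for X, y in clients:
        w_local = w_global
        for _ in range(5):               # a few local steps per round
            w_local = local_step(w_local, X, y)
        updates.append(w_local)          # only parameters are sent
        sizes.append(len(y))
    # FedAvg: the server averages models, weighted by client dataset size.
    total = sum(sizes)
    w_global = sum(u * (n / total) for u, n in zip(updates, sizes))

print(w_global)  # close to true_w, learned without pooling the data
```

Note that parameter updates can themselves leak information about the training data, which is one reason federated learning is often combined with techniques such as differential privacy or secure aggregation.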