The Italian DPA follows the same position as other EU DPAs.
Notable points: the impact if Google Analytics is used in connection with Google account logins, the finding that “IP anonymisation by Google” is not truly anonymisation, and the conclusion that Google’s processing taking place mostly in the EU is not sufficient.
Also, on encryption (google-translated below):
“With regard to the data encryption mechanisms highlighted above, they are not sufficient to avoid the risks of access, for national security purposes, to the data transferred from the European Union by the public authorities of the United States, as the encryption techniques adopted provide that the availability of the encryption key is in the hands of Google LLC which holds it, as an importer, by virtue of the need to have the data in clear text to carry out processing and provide services.”
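The regulator’s point can be illustrated with a toy sketch (using a trivially insecure XOR stream purely for illustration, NOT real cryptography): whichever party holds the key can read the data, so encryption of the transfer does not protect against an importer that must decrypt the data to provide its service.

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy XOR stream cipher: illustrative only, NOT real cryptography.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# The exporter encrypts the data before transfer...
key = secrets.token_bytes(32)
plaintext = b"visitor IP + page view"
ciphertext = xor_cipher(plaintext, key)

# ...but if the importer also holds the key (because it needs the data
# in clear text to carry out processing and provide the service), the
# encryption does not prevent the importer, or any authority that can
# compel the importer, from reading the data:
assert xor_cipher(ciphertext, key) == plaintext
```

This is why supplementary measures are generally considered effective only when the key stays exclusively with the exporter.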
General guidance on AI
A self-assessment guide that includes seven fact sheets:
1. Asking the right questions before using an artificial intelligence system
2. Collecting and qualifying training data
3. Developing and training an algorithm
4. Using an AI system in production
5. Securing the processing
6. Ensuring individuals can fully exercise their rights
7. Achieving compliance
“A business, before offering any new online Product that is likely to be accessed by children, must undertake a Data Protection Impact Assessment (“DPIA”) prior to making the product available. Such a report is a systematic survey to assess and mitigate risks to children, such as physical and mental health, and must be provided to the agency within twelve months of the Act’s enactment and reviewed every two years or before any new features are offered.”
The EU has identified artificial intelligence (AI) as one of the most relevant technologies of the 21st century and highlighted its importance in the strategy for the EU’s digital transformation. With a wide range of applications, AI can contribute to areas as disparate as treating chronic diseases, fighting climate change, and anticipating cybersecurity threats.
- MISUNDERSTANDING: Correlation implies causality.
- Fact: Causality requires more than finding correlations.
- MISUNDERSTANDING: When developing machine learning systems, the greater the variety of data, the better.
- Fact: ML training datasets must meet accuracy and representativeness thresholds.
- MISUNDERSTANDING: ML needs completely error-free training datasets.
- Fact: Well-performing ML systems require training datasets above a certain quality threshold.
- MISUNDERSTANDING: The development of ML systems requires large repositories of data or the sharing of datasets from different sources.
- Fact: Federated learning allows the development of machine learning systems without sharing training datasets.
- MISUNDERSTANDING: ML models automatically improve over time.
- Fact: Once deployed, an ML model’s performance may deteriorate, and it will not improve unless the model receives further training.
- MISUNDERSTANDING: Automatic decisions taken by ML algorithms cannot be explained.
- Fact: A well-designed ML model can produce decisions understandable to all relevant stakeholders.
- MISUNDERSTANDING: Transparency in ML violates intellectual property and is not understood by the user.
- Fact: It is possible to provide meaningful transparency to AI users without harming intellectual property.
- MISUNDERSTANDING: ML systems are less subject to human biases.
- Fact: ML systems are subject to different types of bias, some of which stem from human biases.
- MISUNDERSTANDING: ML can accurately predict the future.
- Fact: ML system predictions are only accurate when future events reproduce past trends.
- MISUNDERSTANDING: Individuals can anticipate the possible inferences that ML systems may draw from their data.
- Fact: ML’s ability to find non-evident correlations in data can lead to the discovery of new information unknown to the data subject.
Series of three articles by the CNIL on how AI algorithms can be audited in the real world.
Note on data retention related aspect of this fine:
INFOGREFFE stored the data beyond the retention period stated in its privacy notice.
Reminder: always make sure that what you state in your privacy notice is true, as you will be checked against it!
“The infogreffe.fr website provided that the personal data of members and subscribers (bank details, first and last names, postal and e-mail addresses, phone and mobile phone numbers, secret question and its answer) would be kept for 36 months from the last order for a service and/or document.
However, the CNIL found that the data of 25% of the service’s users was kept beyond the decided retention periods. The manual anonymisation implemented, only on request from users, concerned a very small number of accounts.”
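A minimal sketch of enforcing a stated retention period automatically, rather than anonymising manually on request as INFOGREFFE did (the `users` table, column names, and dates below are hypothetical, chosen only to illustrate the 36-month rule):

```python
import sqlite3
from datetime import datetime, timedelta

# Hypothetical schema: one row per user, with the date of their last order.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT, last_order_at TEXT)"
)
now = datetime(2022, 9, 1)
conn.executemany(
    "INSERT INTO users (email, last_order_at) VALUES (?, ?)",
    [
        ("old@example.com", (now - timedelta(days=4 * 365)).isoformat()),
        ("recent@example.com", (now - timedelta(days=30)).isoformat()),
    ],
)

# Enforce the stated 36-month retention period as a scheduled job:
# delete every record whose last order is older than the cutoff.
cutoff = (now - timedelta(days=36 * 30)).isoformat()
conn.execute("DELETE FROM users WHERE last_order_at < ?", (cutoff,))

remaining = [row[0] for row in conn.execute("SELECT email FROM users")]
# Only the user with a recent order is kept.
```

Running a deletion (or anonymisation) job like this on a schedule keeps practice aligned with the privacy notice, instead of relying on user requests.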
English Summary: https://www.cnil.fr/en/infogreffe-fined-250000-euros