German DPAs release v3 of the Standard Data Protection Model

On 24 November, the German DPAs approved version 3.0 of their Standard Data Protection Model (SDM), which forms the basis of their enforcement practice.

https://www.bfdi.bund.de/SharedDocs/Downloads/DE/DSK/DSKBeschluessePositionspapiere/104DSK_SDM-3-0.pdf?__blob=publicationFile&v=1

There is no translated version available yet.
A noteworthy change is the inclusion of the “SDM cube”, along with further details on how infrastructure and applications relate to processing activities…

LfDI BW: Code of conduct for processors (Verhaltensregeln für Auftragsverarbeiter)

“To create more clarity and legal certainty here, the LfDI has therefore approved the new national code of conduct ‘Anforderungen an die Auftragsverarbeiter nach Artikel 28 DS-GVO – Trusted Data Processor’ (Requirements for processors under Article 28 GDPR – Trusted Data Processor). Companies can now adopt this code of conduct and thereby take the opportunity to create more legal certainty for themselves.”

https://www.baden-wuerttemberg.datenschutz.de/verhaltensregeln-fuer-auftragsverarbeiter/

https://www.verhaltensregel.eu/

CNIL to focus on mobile apps and smartphones

The CNIL will publish recommendations on mobile applications so that each actor has a clear understanding of its obligations, and to facilitate compliance.
Practical tools (factsheets or practical guides, self-assessment checklists, etc.) aimed at users may also be published to make them aware of the real risks and impacts of the processing of their data through mobile applications. In particular, issues related to applications aimed at vulnerable audiences or processing sensitive data (medical applications, applications intended for children or pregnant women, etc.), as well as the collection of data from smartphone sensors, will be the subject of specific work.

Depending on the observations made in the field during this work to clarify the legal framework, the CNIL may decide to implement a large-scale inspection plan, as it did in the context of its actions on cookies and other trackers. It could in particular focus on processing likely to create significant risks for individuals, for example because it targets vulnerable groups or collects data in a particularly intrusive way.
These actions would supplement the checks already carried out regularly on the basis of complaints, which aim to ensure that publishers of mobile applications comply with the fundamental principles of the GDPR.

At the end of these inspections, and depending on the nature and extent of any breaches observed, the CNIL may adopt corrective measures, in particular financial penalties.

https://www.cnil.fr/fr/applications-mobiles-la-cnil-presente-son-plan-daction-pour-proteger-votre-vie-privee

Cara Bloom, MITRE: Privacy Threat Modeling

Thursday, June 23, 2022, 11:15 am–11:40 am

Abstract:
This applied research talk will discuss the privacy threat modeling gap, challenges and opportunities of privacy threat modeling in practice, and a new qualitative threat model currently under development. In privacy risk management, there are well-respected methods for modeling vulnerabilities and consequences (or harms), but there is no commonly used model nor lexicon for characterizing privacy threats. We will discuss the gap in privacy risk modeling, how privacy threat-informed defense could better protect systems from privacy harms, and a working definition for a “privacy attack.” Then we will present a draft qualitative threat model – the Privacy Threat Taxonomy – developed to fill this gap in privacy risk modeling. This model was generated iteratively and collaboratively using a dataset of almost 150 non-breach privacy events, which includes directed, accidental, and passive attacks on systems. We will also discuss how practitioners can incorporate a threat model into their privacy risk management program.

https://www.usenix.org/conference/pepr22/presentation/bloom
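
The taxonomy itself was still a draft at the time of the talk, so the following minimal Python sketch only illustrates the coarse split the abstract mentions – directed, accidental, and passive attacks. The class names, fields, and example events are hypothetical and are not taken from MITRE’s model.

from dataclasses import dataclass
from enum import Enum

# Hypothetical illustration only: the categories mirror the coarse
# attack split named in the abstract (directed / accidental / passive);
# everything else (class names, fields, example events) is invented
# here and is NOT part of MITRE's Privacy Threat Taxonomy.

class AttackType(Enum):
    DIRECTED = "directed"      # deliberate action against a system
    ACCIDENTAL = "accidental"  # unintended exposure or mishandling
    PASSIVE = "passive"        # observation without active interference

@dataclass
class PrivacyEvent:
    name: str
    attack_type: AttackType
    notes: str

events = [
    PrivacyEvent("bulk scraping of public profiles", AttackType.PASSIVE,
                 "large-scale collection without a security breach"),
    PrivacyEvent("misdirected email containing PII", AttackType.ACCIDENTAL,
                 "personal data sent to the wrong recipient"),
    PrivacyEvent("targeted re-identification of a user", AttackType.DIRECTED,
                 "deliberate linking of 'anonymous' records to a person"),
]

# Classify events by attack type, a first step toward the kind of
# threat-informed privacy risk management the talk describes.
for kind in AttackType:
    names = [e.name for e in events if e.attack_type is kind]
    print(f"{kind.value}: {names}")

Classifying incoming events this way is roughly the intake step before they are mapped to vulnerabilities and harms in a privacy risk management program.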

BSI – Security of AI-Systems: Fundamentals – Adversarial Deep Learning

“In project 464 subproject 1 Security of AI Systems: Adversarial Deep Learning, BSI investigated the security of connectionist neural networks. This field of research is subsumed under the more general term adversarial machine learning. Among these threats are evasion attacks, i.e., specifically crafted inputs that shift the model’s output, poisoning and backdoor attacks, i.e., weaknesses implanted in the model, and privacy attacks, which extract information from the model. The study presents best practice guidelines for certification and verification of neural networks, as well as defense techniques against evasion, poisoning, backdoor, and privacy attacks.”

https://www.bsi.bund.de/SharedDocs/Downloads/EN/BSI/KI/Security-of-AI-systems_fundamentals.html
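
For readers who want to see what an evasion attack looks like in code, here is a minimal FGSM-style sketch against a toy logistic regression model. This is not code from the BSI study, which concerns deep neural networks; the weights, input, label, and perturbation budget are made-up values. The principle carries over: nudge the input, within a small budget, in the direction that increases the model’s loss.

import numpy as np

# Minimal FGSM-style evasion sketch on a toy logistic regression model.
# All values below are hypothetical; not taken from the BSI study.

rng = np.random.default_rng(0)
w = rng.normal(size=5)   # hypothetical trained weights
b = 0.1                  # hypothetical trained bias
x = rng.normal(size=5)   # a benign input
y = 1.0                  # its true label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Closed-form gradient of the cross-entropy loss w.r.t. the input x:
# dL/dx = (sigmoid(w.x + b) - y) * w
grad_x = (sigmoid(w @ x + b) - y) * w

eps = 0.25                          # L-infinity perturbation budget
x_adv = x + eps * np.sign(grad_x)   # step that increases the loss

print("clean prediction:      ", sigmoid(w @ x + b))
print("adversarial prediction:", sigmoid(w @ x_adv + b))

A small, targeted shift in the input is enough to move the model’s output – exactly the class of threat the study’s defense techniques against evasion attacks are meant to counter.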