The AEPD publishes a checklist to help those responsible for carrying out data protection impact assessments (DPIAs)

The Spanish Data Protection Agency (AEPD) has published a checklist to help data controllers quickly determine whether the process and documentation they are following to carry out a Data Protection Impact Assessment contain the required elements.


Trans-Atlantic Data Privacy Framework (TADPF or TDPF?)

.. or is T-ADPF?

.. and why “Data Privacy”, and not “Privacy” or “Data Protection”?

The EDPS has already commented on Twitter that
“#EDPS welcomes, in principle, the announcement from @vonderleyen and @POTUS on the new transatlantic data transfer agreement”.

Current (scant) information on the TADPF (or TDPF) can be found at:

.. and we should probably avoid “Privacy Shield 2.0” (to avoid bad luck)

.. and a Schrems III (or 3) challenge is likely still to come.

For ongoing details/news, please see:

ENISA: Deploying Pseudonymisation Techniques

“Pseudonymisation is increasingly becoming a key security technique for providing a means that can facilitate personal data processing, while offering strong safeguards for the protection of personal data and thereby safeguarding the rights and freedoms of individuals. Complementing previous work by ENISA, this report demonstrates how pseudonymisation can be deployed in practice to further promote the protection of health data during processing.”
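One of the basic techniques covered by ENISA's pseudonymisation work is the keyed hash (message authentication code): a direct identifier is replaced by a pseudonym that is consistent across records, but cannot be recomputed or reversed without the secret key. A minimal sketch in Python, assuming a hypothetical health record with a `patient_id` field (the field names and key handling here are illustrative, not taken from the ENISA report):

```python
import hmac
import hashlib

def pseudonymise(identifier: str, secret_key: bytes) -> str:
    """Replace a direct identifier with an HMAC-SHA256 pseudonym.

    The same identifier and key always yield the same pseudonym
    (so records can still be linked during processing), but without
    the key the original identifier cannot feasibly be recovered.
    """
    digest = hmac.new(secret_key, identifier.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()

# Hypothetical health record: the patient ID is pseudonymised,
# while the clinical data needed for processing is retained.
# In practice the key must be stored separately from the data.
key = b"example-secret-key-held-by-the-controller"
record = {"patient_id": "ES-12345678", "blood_pressure": "120/80"}
pseudonymised = {
    "patient_id": pseudonymise(record["patient_id"], key),
    "blood_pressure": record["blood_pressure"],
}
```

The keyed construction matters: a plain unkeyed hash of a low-entropy identifier (such as a national ID number) could be reversed by exhaustive guessing, which is why ENISA treats the secrecy and separate storage of the key as essential to the safeguard.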

FTC action on Weight Watchers: WW International and its Kurbo App are required to delete data, destroy any algorithms, and pay a monetary penalty

“In a complaint, filed by the Department of Justice on behalf of the Federal Trade Commission, the agency alleged that WW International, Inc., formerly known as Weight Watchers, and a subsidiary called Kurbo, Inc., marketed a weight loss app for use by children as young as eight and then collected their personal information without parental permission. The settlement order requires WW International and Kurbo to delete personal information illegally collected from children under 13, destroy any algorithms derived from the data, and pay a $1.5 million penalty.”

NIST SP 1270: Towards a Standard for Identifying and Managing Bias in Artificial Intelligence

“Specifically, this special publication:

  • describes the stakes and challenge of bias in artificial intelligence and provides examples of how and why it can chip away at public trust;
  • identifies three categories of bias in AI — systemic, statistical, and human — and describes how and where they contribute to harms;
  • describes three broad challenges for mitigating bias — datasets, testing and evaluation, and human factors — and introduces preliminary guidance for addressing them.”