EU Commission 2018 report (independent study on automated decision-making)
https://ec.europa.eu/info/sites/info/files/independent_study_on_automated_decision-making.pdf
Algorithmic bias detection and mitigation: Best practices and policies to reduce consumer harms
Very nice summary paper on AI and AI bias, which covers many examples, including the Amazon AI recruitment tool bias case.
ICO releases interim report for AI guidance project
https://ico.org.uk/media/2615039/project-explain-20190603.pdf
Includes a very nice summary of GDPR expectations on AI
ICO AI blog: When it comes to explaining AI decisions, context matters
FDA – exploratory whitepaper on the use of adaptive AI/ML-based Software as a Medical Device (SaMD)
Three Artificial Intelligence papers by the DPAs of Norway, UK and France
Blackbox extraction of secrets from deep learning models
Fascinating paper: “The Secret Sharer: Measuring Unintended Neural Network Memorization & Extracting Secrets”, Nicholas Carlini, Chang Liu, Jernej Kos, Úlfar Erlingsson, Dawn Song at https://arxiv.org/abs/1802.08232
Turns out that your algorithm memorizes the secrets in your training data – even if the algorithm is a lot smaller than the secrets themselves. My jaw fell to the ground right here:
“The fact that models completely memorize secrets in the training data is completely unexpected: our language model is only 600KB when compressed, and the PTB dataset is 1.7MB when compressed. Assuming that the PTB dataset can not be compressed significantly more than this, it is therefore information-theoretically impossible for the model to have memorized all training data—it simply does not have enough capacity with only 600KB of weights. Despite this, when we repeat our experiment and train this language model multiple times, the inserted secret is the most likely 80% of the time (and in the remaining times the secret is always within the top 10 most likely). At present we are unable to fully explain the reason this occurs. We conjecture that the model learns a lossy compression of the training data on which it is forced to learn and generalize. But since secrets are random, incompressible parts of the training data, no such force prevents the model from simply memorizing their exact details.”
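The measurement behind this is worth spelling out: the authors insert a unique canary (e.g. a fake PIN) into the training data, train as usual, and then quantify memorization as “exposure” – how highly the model ranks the true canary against every other candidate it could have been. Here is a minimal, hypothetical Python sketch of that rank-based exposure calculation; the log_perplexity stub, the model placeholder and the PIN format are illustrative assumptions, not the authors’ code.

```python
import math

def log_perplexity(model, sequence):
    """Hypothetical stand-in for a trained language model's score.
    In the paper this is the log-perplexity the model assigns to `sequence`
    (lower = the model finds the sequence more likely); here it is faked so
    the sketch runs end to end."""
    return float(sum(ord(c) for c in sequence) % 997)

def exposure(model, true_secret, candidate_space):
    """Rank-based exposure as defined in 'The Secret Sharer':
    exposure = log2(|R|) - log2(rank of the true secret),
    where R is the space of possible secrets and rank is the position of
    the true secret when all candidates are sorted by model log-perplexity."""
    ranked = sorted(candidate_space, key=lambda c: log_perplexity(model, c))
    rank = ranked.index(true_secret) + 1  # 1-indexed rank of the real canary
    return math.log2(len(candidate_space)) - math.log2(rank)

if __name__ == "__main__":
    # Candidate space: all possible 4-digit "PIN" canaries; exactly one of
    # them (true_canary) was inserted into the training data before training.
    candidates = [f"my pin is {i:04d}" for i in range(10_000)]
    true_canary = "my pin is 0417"
    model = None  # placeholder for the trained model
    print(f"exposure = {exposure(model, true_canary, candidates):.2f} bits "
          f"(maximum: {math.log2(len(candidates)):.2f} bits)")
```

An exposure near the maximum (log2 of the candidate-space size) means the secret can effectively be recovered by enumerating candidates and asking the model which one it prefers – which is exactly the black-box extraction the paper demonstrates.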
[datenrecht.ch] Guidelines of the Consultative Committee of the Council of Europe’s Convention 108 on “Big Data”
[Paper] Google DeepMind and healthcare in an age of algorithms
DeepMind acquired NHS data “without obtaining explicit consent from any of the patients” – an “inexcusable failure”