Responsible AI framework for Data Science Practitioners

As data practitioners with a concern for humanity, we have been excited by the responsible AI conversations that have popped up over the last few years. We have also been increasingly frustrated by the lack of consolidated, practical development tools to ensure that we are doing the best we can. So at Lelapa AI we have put one together. It aims to ensure that no form of analysis is unleashed upon the world without clear communication of the mechanisms used to arrive at its conclusions, nor without clear communication of its intended usage context.

The framework we have developed is a self-check for data practitioners in industry (though it is likely to be useful for those in academia too). It is designed to prompt the right questions in order to highlight potential problems. It also aids in putting in place the protective measures required for responsible development and the management of unintended outcomes.

The Framework in Context

As data scientists, not all the projects we work on include data sourcing or even modelling (much to many budding data scientists’ disappointment). Sometimes we find ourselves in positions where we are part dev and are asked to deploy models as a service. Sometimes we are responsible for the process end to end. Sometimes we are responsible for small pieces of the bigger picture. Either way, this leads to many different interconnected debug points where things could go wrong.

This framework brings together many pre-existing tools that act as independent parts of a flexible structure. The structure serves a variety of projects depending on each project’s needs. The areas included in this framework include, but are not limited to, ethics, security, privacy, responsibility, redress, transparency, fairness, explainability and accountability. These areas are considered throughout the entire lifecycle of a project, from conceptualisation through development, deployment and monitoring.

It encourages the inclusion, where possible, of experts from various fields to help navigate complex questions. We may be unicorns, but we are more ignorant than we may think.

Movements Toward AI Regulation

If you need a little reminder of why regulation may be necessary (warning: shameless plug), read/watch this.

We still have a long way to go in terms of legally protecting the planet and its people from AI for Bad. It will be an agile process of keeping up with advancements while carefully selecting the ways in which those advancements can contribute to the greater good. But the drive to put formal regulations for AI in place has begun.

Use of facial recognition software by law enforcement has been banned in a number of states in the US and in Europe as well, and tech giants are in on it too (Facebook drops facial recognition software; Amazon bans police use of facial recognition software). Measures have been put in place to protect personal information, with the GDPR implemented in Europe in 2018; it affects all institutions that interact with the Union. South Africa followed behind with our POPI Act, which officially came into effect five years later (like, just the other day). Though personal data is protected, regulation for predictive models is virtually non-existent. Violations are considered instead through the lens of human rights indictments.

The need for regulating data and data applications is increasingly recognised internationally. In April 2021, the European Commission officially released a Proposal for a Regulation of the European Parliament and of the Council with its document Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts. The aim of the proposal is to encourage the responsible uptake of AI technologies.

EU AI Regulation Proposal

The proposed regulation applies to all machine learning techniques, including supervised, unsupervised and reinforcement learning. It also covers logic- and knowledge-based systems, as well as statistical approaches such as Bayesian estimation and optimisation methods. The proposal refers mainly to high-risk applications, which include biometric identification, management of critical infrastructure, education, employment, access to essential services (e.g. creditworthiness), law enforcement, migration management and administration of justice. The proposal then goes on to require technical documentation to accompany analytical projects such that their particulars are adequately recorded. This includes:

1. A general description of the AI system
2. A detailed description of the AI system’s elements

The results of the checks and considerations stipulated in the proposal are summarised in an EU Declaration of Conformity, a type of certificate that states certain details of the application and confirms adherence to regulations. A list of these systems and their particulars, and a list of procedures in place to ensure their quality management, are also required.

The Framework

Framework Structure

The framework proposed here is in line with the recommendations in the EU proposal in that it follows the declaration of conformity format. The idea here, however, is that this framework is followed for ALL data-related decision-making processes, not just those that are considered high risk.

In the same way that we weave value systems into people so that they behave in ways that support the ethical fabric of society, we believe it is imperative to weave values into the practice of developing decision-enablement mechanisms. We do not refer to our values and virtues as checkbox items that are an inconvenient hindrance to what we want to achieve. Neither should this framework be considered “just another thing for Data Scientists to consider and another thing for them to do”. Instead, it is an absolutely necessary process for achieving the outcomes we set out to achieve with the technology we create.

There are three main phases of concern when regulating AI: the machine conceptualisation phase, the machine development phase and the machine management phase.

Phases of Intervention

Machine Conceptualisation

This phase is primarily concerned with the ethics of a particular use case. We want to ensure that the intentions of the use case are pure. Even when the intentions of a use case are pure, the metrics used to determine the use case’s success may be ill-suited, resulting in unethical models. Some use cases should be pursued; others, by virtue of their principles, should not.

High-risk models generally teeter on the edge of the accepted ethical landscape. Application areas that fall into this category include: biometric identification and surveillance, employment screening processes, access to essential services such as loans and credit risk, law enforcement and administration of justice, as well as cases that infringe on the regulations set in place to protect the privacy of information and persons. Both Section A and Section B of the framework deal with this concern.

Machine Development

This phase deals primarily with algorithmic fairness and addressing algorithmic bias. Bias is a problem largely because of the black-box nature of machine learning models and the overwhelming mathematical complexity of non-machine-learning models, but also because of the lack of care taken in screening input data and the subsets and subpopulations that exist within it. This is addressed by a process we have dubbed EMA (Exploratory Model Analysis. Like Exploratory Data Analysis… but models and analysis and stuff). Many technical tools have been built to assist in addressing these issues, as can be found in this fantastic GitHub repo. Section C and Section D of this document deal with these areas.
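As a toy illustration of the kind of subgroup check EMA encourages, the sketch below compares one performance metric across the subpopulations of a labelled dataset. The column names, the choice of accuracy as the metric, and the idea of sorting by the gap are our own illustrative assumptions, not something the framework prescribes:

```python
# A minimal sketch of an EMA-style subgroup check. The dataframe columns
# ("label", "prediction") and the group column are illustrative assumptions.
import pandas as pd
from sklearn.metrics import accuracy_score

def subgroup_report(df: pd.DataFrame, group_col: str,
                    y_true: str = "label", y_pred: str = "prediction") -> pd.DataFrame:
    """Compare a performance metric across the subpopulations in group_col."""
    overall = accuracy_score(df[y_true], df[y_pred])
    rows = []
    for group, sub in df.groupby(group_col):
        acc = accuracy_score(sub[y_true], sub[y_pred])
        rows.append({group_col: group, "n": len(sub),
                     "accuracy": acc, "gap_vs_overall": acc - overall})
    # Subgroups with large negative gaps warrant closer review.
    return pd.DataFrame(rows).sort_values("gap_vs_overall")
```

In practice you would repeat this for every sensitive attribute and metric that matters to the use case; the dedicated fairness toolkits collected in the repo mentioned above automate much of this.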

Machine Management

This phase primarily deals with assurance that the model system developed is replicable, robust and secure, and that the relevant measures are put in place to ensure accountability of the system and its creators. These are covered in Section E and Section F. It also includes some questions that some might find quite basic but that may be helpful for newcomers to the field. Also very important in the machine management phase are considerations around the visualisation and interpretation of results. It can be quite easy to mislead through visualisation and reporting, and this is covered in Section G.
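As a quick illustration of how easily visualisation can mislead (the scores below are made up), the same two numbers plotted on a truncated axis look dramatically different, while a zero-based axis keeps the difference in proportion:

```python
# Illustrative only: identical (made-up) scores, two very different impressions.
import matplotlib.pyplot as plt

labels, values = ["Model A", "Model B"], [96.2, 97.1]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.bar(labels, values)
ax1.set_ylim(96, 97.2)   # truncated axis: a 0.9-point gap looks enormous
ax1.set_title("Truncated axis")
ax2.bar(labels, values)
ax2.set_ylim(0, 100)     # zero-based axis: the gap stays in proportion
ax2.set_title("Zero-based axis")
plt.tight_layout()
plt.show()
```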

The results of these checks (except maybe for some of the model deployment items) should be stored in your preferred file storage as metadata of your development practice. They should also be stored in a project inventory for future reference, and this documentation should be made freely available to anyone who makes decisions as a result of your work, or to anyone who has decisions made about them as a result of your work. That is, within the confines of IP, privacy and security limitations, of course.
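One possible shape for that metadata (the file layout and field names here are purely our own assumptions, not part of the framework) is to append each completed section to a per-project inventory file:

```python
# A minimal sketch of recording framework check results as project metadata.
# File layout and field names are illustrative assumptions, not prescribed.
import json
from datetime import datetime, timezone
from pathlib import Path

def record_check(project: str, section: str, answers: dict,
                 inventory_dir: str = "rai_inventory") -> Path:
    """Append one section's check results to a per-project inventory file."""
    path = Path(inventory_dir) / f"{project}.jsonl"
    path.parent.mkdir(parents=True, exist_ok=True)
    entry = {
        "project": project,
        "section": section,          # e.g. "C: Exploratory Model Analysis"
        "completed_at": datetime.now(timezone.utc).isoformat(),
        "answers": answers,
    }
    with path.open("a") as f:
        f.write(json.dumps(entry) + "\n")
    return path
```

An append-only record like this doubles as an audit trail: anyone reviewing a decision later can see which checks were done, when, and what the answers were.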

Intended Use

For any of the project categories below, where the project results in an insight that informs any kind of decision-making process, the relevant sections should be completed.

Project categories:
• Machine learning (supervised, unsupervised, semi-supervised, reinforcement learning)
• Logic- and knowledge-based analysis and calculations
• Statistical projects, e.g. Bayesian estimation, optimisation methods and other summary statistics

Each section relevant to the project in question should, in theory, be completed as the corresponding stage of the project development process is carried out. If a project is following proper agile processes, this implies that all checks should be conducted at each Sprint Review. But we will leave the choices regarding the practicalities of its implementation to what is convenient for you.

This framework does not, by any means, encompass everything, and it will require continuous updates along with cultural, societal and technological change. It will, however, hopefully get your brain thinking about things it may not have considered before. But we figured it doesn’t have to be perfect for it to be out there. If you have some ideas, please get in touch with us.

