Operationalising Data Ethics: 1 of n


We’ve long argued that principles-based approaches to data ethics are less than ideal. In fact, there’s a pretty strong argument that they further erode trust (we’ll be publishing on this in various publications over the coming months. Stay tuned!). Why? Because they’re little more than feel-good statements. They quite literally sustain the ethical intent-to-action gap.

Instead of falling victim to the same old same old, we propose that organisations shift their focus away from data ethics principles, complex policies and an over-reliance on legal guidance (a very common pitfall), towards a whole-of-organisation approach to operationalising a Data Ethics Framework.

A Data Ethics Framework, as we define it, is the consistent process that an organisation executes to decide, document and verify that its data processing activities are socially preferable.

Nate and Mat

This article is part of an ongoing series that will introduce you to the specific functions of a Data Ethics Framework. This article focuses specifically on Social Preferability Experiments.

Let’s go!

Preference > Acceptance

Social preferability is the outcome a Data Ethics Framework is designed to achieve. By social preferability we mean that an organisation can evidence that its key stakeholder groups – customers, service providers, employees, directors, shareholders, independent advocacy groups and regulators – overwhelmingly support the intent and outcomes of its data processing activities.

Social Preferability Experiments are a key component of the Data Ethics Framework’s design. The focus on social preferability assumes that ethical decisions are more effectively made when the individuals, groups, communities and organisations impacted by those decisions are actively involved in the decision-making process. These experiments provide a consistent method through which new empirical data can be generated, and that empirical data enhances the confidence with which an ethical decision can be made.

Importantly, these experiments are practical. They simulate real world situations by getting research participants hands on with an interactive prototype. In addition, research participants are presented with information that clearly describes how a given product, service or data processing activity works. This helps participants build a more accurate mental model about how their data is used. We did very similar research with the Consumer Policy Research Centre. This helped produce the report, A Day in the Life of Data.

Here's how to do it

Let’s start by setting the scene. You’re deploying a new model. The goal is to design operational efficiencies that create customer value. Specifically, you want to better handle customer service requests. At the time of designing this experiment and proposing this new activity, you don’t have explicit permission from your customers to use this data for this processing purpose. This is a new activity. You’re proposing it because you believe it will deliver direct value to your customers and create operational efficiencies for your business.

There are two key areas of focus for the experiment. The first is that you need customers to give you their explicit permission (we can touch on the legal fiction of consent another time) to use the data from their support requests to train new models. The second is that you need to show the outcome this new data processing might enable for them, so they can appreciate its value. You need to communicate and demonstrate the new value of the activity you’re proposing.

To do this you develop a basic prototype of the permission flow, clearly articulating the why, what and how of this new data processing activity. You would also build the desired outcome into the prototype. As above, this enables research participants to better understand the impact of what you’re proposing. You do this because social preferability testing seeks to understand the support key stakeholders have for both the intent and outcomes of your data processing activities. In terms of moral analysis, this combines deontological (duty-based) reasoning with consequentialist (outcome-based) reasoning.

Once you’ve created the prototype, you design, recruit for and run a tight research program. We refer to this as outcome-focused usability paired with contextual inquiry. It’s a hybrid research method that gathers proxy quantitative data by simulating real-life usage. These metrics are then supported by the qualitative and attitudinal data generated through the researcher-led contextual inquiry. This provides a more stable body of evidence to support decision making. If you rely too much on quant, it’s very likely you’ll end up understanding the what without the why. The opposite is true with an over-reliance on qual.

The simplest approach is to embed Likert-scale questions at different stages of the research participants’ experience, such as directly after someone takes an action to grant permission. You can see a very detailed example of this type of approach in this video on consent-based data sharing research.

The framing of the prompting questions is important. If you engage other stakeholder groups (and you absolutely should), like internal team members from legal or compliance, it’s important that they approach the research session from the perspective of a customer. The same applies if you engage regulators in this research. We expect these stakeholders to contribute their professional perspective, but it’s more important that they empathise with the individuals and groups this new proposition seeks to serve and, as a result, directly impacts. Additional qualitative data from the sessions can be used during analysis to put the self-asserted scores in context.

In this scenario we’re measuring a single dimension where 1 = socially unacceptable, 4 = socially acceptable and 7 = socially preferable. If you wanted to enhance the rigour of this approach you might consider exploring the use of Social Value Orientations. However, for the purpose of building some foundational evidence to support ethical decision making, we have found this approach the most time and cost effective.
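As a minimal sketch of how those self-asserted scores might be summarised (the thresholds follow the 1–7 scale above; the responses and the “leaning preferable” cut-off of 6 are invented for illustration, not part of the method as described), the analysis step could look like this:

```python
from statistics import mean

# Hypothetical Likert responses on the single dimension described above:
# 1 = socially unacceptable, 4 = socially acceptable, 7 = socially preferable.
# Collected directly after each participant granted or declined permission.
responses = [6, 7, 5, 4, 7, 3, 6, 5, 7, 6]

# Share of participants rating the activity at least socially acceptable (>= 4),
# and share leaning towards socially preferable (>= 6, an assumed cut-off).
acceptable = sum(1 for r in responses if r >= 4)
preferable = sum(1 for r in responses if r >= 6)

print(f"mean score:       {mean(responses):.1f}")
print(f"acceptable share: {acceptable / len(responses):.0%}")
print(f"preferable share: {preferable / len(responses):.0%}")
```

Summaries like these are only the quantitative half of the evidence; they should always be read alongside the qualitative data from the contextual inquiry sessions.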

Seriously, start moving beyond principles and feel good statements

All of this is a very basic introduction to one component of an operational Data Ethics Framework. There are many. And they all need to work together to help produce the desired behaviours. After all, this is a behavioural thing, regardless of how it’s framed.

We’ve honed our approach to this complex issue by leading programs all around the world on this very topic. Now we’re making it more accessible than ever before to a broader audience through Greater Than Learning.

If you’re motivated to act, get involved in the beta.