Responsible innovation is ethics washing. Here’s why

By NATHAN KINCH

Full disclosure: I am being deliberately provocative.

Why am I doing this? Because filter bubbles and echo chambers aren’t what we need. We need to challenge one another to achieve our very best. We need to productively and collectively confront the ideas, concepts and proposed changes that we may otherwise shy away from. There’s no time for The Ostrich Effect. We should aim for nothing less than a beautifully ambitious future.

What qualifies me to even suggest this? These types of things are always up for debate, somewhat subjective and often useless. But, you’re likely wondering this question. So, in short, I’ve spent most of my career working to design services, business models and organisational structures that behaviourally exhibit the qualities of trustworthiness. I’ve worked with federal governments, global corporations, fast growing startups and numerous research and policy institutes. Greater Than Learning is my third startup. It’s way too early to determine if it’ll be a home run. Greater Than X was my second. Based on all of the qualitative and quantitative indicators we monitor, this was a resounding success. We’ve been highly influential.

If you want to dive into any of this deeper, just check out my LinkedIn profile. It has enough info to act as a reasonable proxy (although I ‘get’ that very little on LinkedIn is verifiable… An issue for another post).

What am I hoping to achieve by writing this? In short, discussion that leads to action.

What is my agenda? I call this out very clearly in a blog, ‘Our Vision, in a nutshell’. I’m one of the founders of Greater Than Learning. We are a commercial organisation, albeit a pretty different one. For us to succeed, we need people to understand our perspective, buy into it and then act by becoming a member, engaging with our unique learning model and taking what they learn out into the real world. I write this type of content to support that purpose.

But what’s most important is that I really care. I have a daughter, a life partner and best friend (yes, the same person), people I care about and a heck of a lot to live for. I’d like to imagine I can contribute to leaving the world in better shape than when I entered it. This sounds cliche as fuck, but I know plenty of you align to this. I’m writing this for you.

With that out of the way, let’s dive in.

A story that begins explaining the title

We were sitting at Wildseed in San Francisco. Funnily enough, we ate there 7 times on our last trip. It’s good. Check it out if you’re in the area.

Back to it. We were in town for client work (which, coincidentally, we’ll be publishing about soon). We had the opportunity to meet with a few reps from different orgs that wanted to engage us. We typically did this in the evenings.

Cutting a long story short, we were being asked to support a genuinely operational approach to AI Ethics (or so we thought). We were asked to do this because of our distinct approach to operationalising Data Ethics Frameworks. We talked through the back and forth over some food. This, we find, tends to be a more productive way to get to know people. It helps them take their work hat off and remember that they’re people, not just leaders or practitioners.

We began proposing how to frame the work and how to approach it. We described the challenges and desired outcomes. We pushed for a perspective we believe is pretty well empirically backed.

Then it happened… “We can’t really call it ethics. People are too uncomfortable with this framing. This will need to sit within our ‘Responsible Innovation’ unit”*.

*The statement is paraphrased.

WTF. Why?

I can’t recall if this was our literal response. It may well have been. We tend to speak our minds, regardless of the setting.

I’ve since encountered similar situations.

Don’t get me wrong. I get that the whole #ResponsibleTech and #ResponsibleInnovation movement is picking up steam, mostly in the U.S. But, speaking frankly, it all feels a little like the last season of Silicon Valley. #Tethics as it’s framed by the very funny figure that is Gavin Belson. Is this an attempt to shy away from meaningful openness and accountability? Is this a failure to tackle the complex, nuanced and ambiguous field of Applied Ethics?

We can’t say for sure. Some definitely think so. Let’s go deeper anyway.

The rest of this article should serve as a discussion starter. Below we will propose why these approaches are very likely to lead to ethics washing, even if that isn’t the intent of many individuals working on these initiatives. As my co-founder recently called out, we need to look broader and deeper by tackling the systems, not the symptoms (this was a direct response to the Netflix documentary, The Social Dilemma).

The problem with the whole, “tech is neutral” narrative

This is something I interact with frequently. I don’t think it’s as simple as saying this is wrong. Zero sum approaches aren’t my style. But I do believe it’s grossly oversimplified and unhelpful.

Let me lead with some definitions.

Technology: The sum of techniques, skills, methods, and processes used in the production of goods or services or in the accomplishment of objectives, such as scientific investigation.

There are actually a lot of definitions out there. This one comes from Wikipedia (and yes, I first looked to Oxford, Cambridge etc.). But this one gets to the point.

Technology is a very broad description of a very broad set of things.

So to say something like, “technology is neither good nor bad. It’s how we use it that matters” just doesn’t make sense in practice. All technology (in the context we’re referring to it) is designed by people. It reflects their wants, needs, desires and biases. The specific technologies we design, particularly those that rely heavily on data processing activities, are not neutral. Not even close. This ideological viewpoint is often referred to as technological instrumentalism. And although many people who say these things probably say them because they mean it (and think it’s a valid perspective), we have to move beyond this if we are to make progress.

Why, you ask? Because “tech is neutral” helps us shy away from actual accountability. If it’s all about how people use it, then we just define policies and guidelines to describe the ‘preferred’ use. Problem solved, right? Nope. Think again. We need to explicitly design our technologies for the most socially preferable outcomes.

Ethics: Ethics and morals are often used interchangeably. In this case, when we refer to ethics we are referring to a decision making process. We align to The Ethics Centre’s definition of ethics. Basically that it’s the process we execute in a given situation where choice is available. The process is designed to help us make a decision best aligned to our purpose, values and principles. It’s about making the best possible choice given the circumstances.

This is an actionable definition. But it can be confronting. It means that organisations actually have to uphold their values and principles. Rather problematically, and as we’ve called out before, “… there’s no correlation between an organisation’s stated values and its actions…”.

To conclude this brief section, ethics is hard, but by this definition it is practical and actionable. Technology is far too broad a category to support sweeping assertions. As far as we can tell, “tech is neutral” is a convenient excuse. It helps organisations shy away from meaningful accountability.

This point should really concern you. At the very least it should encourage you to pause and think about its validity.

How can we possibly design a beautifully ambitious future if we’re unwilling to be truly accountable for our actions? Why do we continue to allow system failures to erode trust and confidence? Why are we unwilling to do the work to make meaningful progress? This shit ain’t preordained. We need to stop acting like it is.

Skip the next section if you want to get to the conclusion and proposed actions.

Distrust is high and we are FAILING at ethics

In late 2018, Accenture systematically reviewed the performance of 7,030 companies as part of their competitive agility index. They discovered that trust disproportionately impacts bottom line business outcomes. Data from Europanel suggests that a 1% increase in ‘brand trust’ translates to 3% growth in value. Similar bodies of research have drawn aligned conclusions.

So why is trust at an all time low – as Edelman and other trust related research suggests – if it so clearly and positively impacts both business and consumer outcomes? 

The answer might well be explained, at least at a very high level, by Edelman’s Net Trust Score. What this proposes is that no organisational category is both competent and ethical. What’s more, ethics appears to be 3x more important than competence when it comes to trust.

This is an important insight, as it suggests many individuals around the world are unhappy and uncomfortable about the way modern organisations operate. Specifically in relation to data ethics and data protection, Pew Research Center’s most recent privacy study, from 2019, suggests that individuals are more concerned than ever before. They believe their data is less secure, that data collection poses more risks than benefits and that it is not possible to navigate their daily lives without being surveilled.

Although Edelman’s data doesn’t explicitly focus on ‘data trust’, our experience leads us to believe that the current state of data mistrust and distrust is similar to the current state of mistrust and distrust broadly. It may, in fact, be even worse. 

In 2014, research for the Royal Statistical Society carried out by Ipsos MORI suggested that brand trust and data trust were different. And data trust, in many cases, was lower than brand trust.  

For organisations to maximise innovation, whilst effectively protecting the individuals and institutions they serve as customers, they must close the data trust gap. Closing this gap effectively will enhance an organisation’s ability to gain access to just the right data when it’s needed most. This access can be used to differentiate an organisation’s value proposition. These propositions can help individuals achieve more meaningful lifestyle outcomes. Helping people achieve valuable, meaningful and engaging lifestyle outcomes is how organisations can win their market.

Surely this is actually what BigTech and other organisations want? Great relationships with their customers. Positive social impact. A lasting legacy.

Our analysis and experience lead us to believe there are systemic root causes of this data trust gap. Incentive misalignment is perhaps the primary culprit. And no, #ResponsibleInnovation doesn’t even begin to tackle this. In fact, we’ve been asked to steer clear of this in proposed work. We never engaged in those projects.

When we refer to this misalignment, we mean that many business models and commercial incentive structures rely upon various data processing activities that most consumers would not support, given the chance. These are the very practices Pew’s research highlights discomfort with. A distinct example of this was highlighted effectively in recent research focused on cookie consent. As TechCrunch summarised:

The key implication is that just 0.1% of site visitors would freely choose to enable all cookie categories/vendors — i.e. when not being forced to do so by a lack of choice or via nudging with manipulative dark patterns (such as pre-selections). Rising a fraction, to between 1-4%, who would enable some cookie categories in the same privacy-by-default scenario.

TechCrunch article

In addition, there are significant regulatory and technological considerations. For example, the client-server architecture of the web doesn’t enable personal data agency. Modern regulations, like the GDPR, are complicated and lack meaningful enforcement.

From our experience leading Data Trust by Design and Data Ethics programmes all around the world, this all seems rather obvious. Corporations – and regulations for that matter – largely focus on social acceptance. This is a symptom of various market dynamics. And it needs to be challenged if we are to close the data trust gap.

It’s our view that organisations have a responsibility to do better. They need to overcome their bias towards optimising for what is acceptable. They need to overcome their aversion to genuine openness and meaningful accountability. They need to explicitly commit to doing what is preferable, even if the process to make this transition takes time. In addition, markets need to support and even incentivise these positive behaviours. We need less of a focus on fines for doing ‘wrong’ and more of a focus on rewards for doing ‘right’.

This will lead to ethics washing because…

Organisations are unwilling to act. This is what the empirical evidence suggests.

The systems we design, rely on and make use of aren’t supportive of the type of action we’re referring to. The ethical intent to action gap is very real. Organisations consistently fail to behaviourally deliver on their stated values. It can therefore be concluded that, for the most part, organisations are ethics washing. Stating one thing then doing another, putting all the onus on how people use something you’ve designed, dragging out legal proceedings and consistently failing to help the people you harm… All of these things are clear signals. Virtue signalling and ethics washing are becoming common business practices.

We’d love to see you and others break the mould. Don’t stand for this. Step up. Design for action. Move beyond principles, documents and discussions. Say what you’ll do, do what you say and then evidence that you’ve done what you said you would.

It has to start with working together

Right now so many things are fucked. I’ve written about this extensively on this blog. But there is a lot that gives me hope. There’s plenty of momentum to build upon. There are plenty of people working on incredible things. People don’t want to go back to normal. They expect something better.

But…

The whole process is ridiculously inefficient. Too many people are wasting time, effort, skills, learning opportunities, physical resources and capital because they’re doing the same things. They might be framing it differently, but it’s basically the same shit.

I recently reached out to a few ‘high profile’ figures, only to be met with brief and arrogant responses. This brings me back to a tweet from Chris Dixon. It’s stuck with me ever since.

This isn’t good enough. You can’t do it alone. I can’t do it alone. We can only do it if we do it together.

So, consider this a wake up call. At Greater Than Learning, we want to work with the community to start #MakingBetterTogether. Working together is quite literally the only way we will make progress. I get this might be confronting for some of you reading. My intent isn’t to offend you, shut you down or make you think I won’t listen to, consider and act on your perspective. My intent is to design for meaningful progress. I recognise this has to be done as a group effort. I trust you will too.

Join us in discussing how. After all, actions speak louder than words.
