[Apologies for the delay in this post. On Tuesday, I developed a head cold that made my brain ache. I’m feeling much better today.]
A few weeks ago, I got a call from a colleague in the UK. We met several years ago while she was working on privacy research at a university. She asked whether I’d be interested in serving as an advisor on research she has proposed related to ethics in AI. She hasn’t received the funding yet, but it looks promising. I would be the “business” member of a board populated with a cross-section of tech and academia.
AI ethics is a complex topic. I think we can all agree on one thing:
Skynet = bad.
But what do we want from ethical policies and regulations related to AI? I’ve begun to pull my thoughts together, and the deeper I go, the more I worry about us.
What are ethics?
eth·ics /ˈeTHiks/ the principles of conduct governing an individual or a group
I know this topic can elicit strong reactions. The dialogue on this topic has spanned the millennia since Aristotle first used the term. As a generalist, I need to simplify to understand. I'm sure I'm losing a lot in translation, but I believe the following covers the landscape.
The principles referred to in the definition above are moral principles. What are moral principles?
Moral principles are "values of right and wrong instilled in us by society, friends, family, or religion, and they influence most of our decisions in life." Source
Values are the foundation of moral principles and, thus, of ethics. To develop ethical policies, what values should establish that foundation? My experience has been that values are not universal. This creates a problem for our ethical foundation.
The universality of values has been debated since the time of Aristotle. While truth-telling would seem to be a universal value, it's surprising how citizens of the United States have recently twisted themselves into knots trying to figure out truth from lies. Hint: Believe your eyes and not your ears.
And I've been to parts of the world where truth is a very flexible idea. At a minimum, telling lies was part of any negotiation, and I was constantly on the lookout for cheating. Even within those cultures, I met many people with opposite viewpoints on these values. Values can be relative to a culture or sub-culture, shaped by all those things in the definition of moral principles above. This position is referred to as ethical relativism.
On the other side of the argument is the notion of universal ethics. For example, the main principles of nursing ethics are autonomy, beneficence, justice, and non-maleficence. I expect everyone to agree that nursing should follow these principles. Many other examples of such universal ethics exist in various professions.
Whatever regulations and best practices come out of the Western democracies will primarily be rooted in the values of those institutions. I’m okay with that, but is everyone okay with that?
What is ethical AI?
When I asked Google Gemini to define Ethical AI, it came up with the following:
Ethical AI is artificial intelligence that adheres to ethical guidelines. These guidelines include fundamental values like privacy, non-discrimination, and non-manipulation. Ethical AI also emphasizes fairness, transparency, accountability, and respect for human values.
As discussed above, universally shared fundamental values don’t exist. Some cultures, or at least some governments, believe the right to privacy doesn’t exist, that discrimination is standard operating procedure, and that manipulation is the best way to stay in power.
To be clear, I like the definition Gemini came up with. It reflects some foundational values that I, and those who live in Western democracies, believe are important. I won’t argue the completeness of that vision, but it’s a good starting point.
The one thing I will say about those cultures that have anti-Western values is that they’re very good at compliance. As we’ll see in a moment, compliance with regulation rubs up against another Western value: freedom.
Why should we care?
This seems obvious, but in the past, we have collectively been guilty of errors of omission and commission regarding technology harms.
You only have to look at social media to see our willing blindness to these harms. The algorithms that decide what we see in our doom scrolling don’t care about harm. The rates of depression, anxiety, and self-injury in teenage girls rose dramatically in the early 2010s. For example, Instagram algorithmically targeted specific segments with content that kept those segments engaged. The problem was that teen girls received content recommendations that amplified “insecurities about where they fit in their social network.” Aside from trying to shame the shameless, we’ve done nothing to mitigate this behavior by these companies. The only federal law on this topic, Section 230, protects the technology companies, not the teen girls.
Even when we realize the harms of tech, following through is not our strong point. Privacy regulations have been in effect since the EU passed the Data Protection Directive in 1995. Almost thirty years ago! In 2016, the General Data Protection Regulation enacted even stricter rules about data protection.
But if regulation were so effective at protecting our privacy, why is the digital advertising industry so up in arms about the death of third-party cookies? These cookies track your every movement across the internet. It’s how ads follow you everywhere.
This is possible because, with a single click on a cookie consent banner at the bottom of a website, you have allowed the regulations to become toothless. You have no privacy despite the regulation.
Our apathy about the harms of technology and the effectiveness of regulation is at the foundation of those very harms. Your bathroom mirror will reveal the unwitting accomplice in all of this.
What can we do about it?
Governments will have to act. The Biden-Harris Administration is already taking action, though the U.S. Congress will need to get involved. The EU is also taking action and will likely lead the way in completeness of vision, just as it did with privacy. Thankfully, AI companies don’t have protections like Section 230, so they will be liable for harm.
I worry that it’s 1995 all over again. Regardless of our best efforts in raising awareness of ethical AI and putting regulations and best practices in place, all this will be toothless. AI companies will do what they do best: generate revenue, regardless of the harm.
I suppose I’ll only have myself to blame.
Next week, I should be back to our regularly scheduled programming. Look for my point of view on the future of Enterprise Search. It’s far less certain than it seems to be.