If a careless driver runs a red light and kills a pedestrian, you might read about it in your local newspaper. If a self-driving car does the same, you'll see it in every international news outlet.
When it comes to trusting machines, people often seem to have a ridiculously high bar. We reject machines even when they are statistically safer than humans at performing a given task. In fact, many experts argue that self-driving cars will need to be significantly safer than human drivers to gain social acceptance.
Why is it so hard to trust the machines? Let me venture a hypothesis…
If a person makes a poor decision, we assume that this person is flawed in some way. If a machine makes a poor decision, we assume all similar machines are broken. A human driver running a red light means that this driver is careless. A self-driving car doing the same instantly means that all self-driving cars are potentially broken.
We consider humans to be separate beings while we see machines as copies made from the same blueprint. We're not wrong. If a mobile device suffers from a security flaw, it's fair to assume that all similar devices suffer from the same flaw.
In fact, this replicated nature of technology gives us the powerful ability to “patch” all the machines at once. We use such patches regularly to make our devices safer. Yet, at the same time, perfect replication might be precisely what makes it impossibly hard to trust the machines. Where an individual human being gives us a natural boundary beyond which we won’t extend our trust (or distrust), replicated machines have no such boundary.
Should we give up replication and purposefully build each machine slightly differently to make it easier for us to trust them? This sounds like a stupid idea, but it may not be. In fact, injecting artificial diversity may be required to help speed up social acceptance of automation.
In general, we know that diversity makes systems more resilient at the cost of slightly reducing their effectiveness. Plant diverse crops and a single disease outbreak can only destroy the varieties it affects, never the entire harvest.
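To make that tradeoff concrete, here is a back-of-the-envelope sketch in Python. Every number in it (the yields, the outbreak probability, the five-variety split) is invented purely for illustration; the point is only the shape of the tradeoff, where the monoculture does better in a good year but the mix caps the worst case.

```python
# Back-of-the-envelope sketch of the diversity/effectiveness tradeoff.
# All numbers are invented for illustration.

p_outbreak = 0.10    # chance a disease targeting one variety strikes
best_yield = 100     # yield of the single best crop variety
mixed_yield = 95     # average yield of a 5-variety mix (slightly less effective)

# Monoculture: one disease can wipe out the entire field.
mono_expected = (1 - p_outbreak) * best_yield + p_outbreak * 0

# Diversified: the same disease only destroys the 1/5 of the field it targets.
mixed_expected = (1 - p_outbreak) * mixed_yield + p_outbreak * (mixed_yield * 4 / 5)

print(f"monoculture expected yield: {mono_expected:.1f}")   # 90.0
print(f"diversified expected yield: {mixed_expected:.1f}")  # 93.1
```

Even when the expected yields come out close, the worst cases differ sharply: the monoculture can lose everything in one outbreak, while the mix can lose at most a fifth.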
What I'm proposing today is that, as paradoxical as it may seem, diversity might also help us build systems that are easier to trust at the cost of slightly reducing their safety. Our tendency to maximize safety might be at odds with our desire to deploy trustworthy systems at scale. Instead of building a fleet of identical replicas that are easy to patch, we might be better off inserting artificial "fracture lines" that make it harder for distrust to spread.
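The same kind of toy model can illustrate the fracture lines themselves. The sketch below assumes, as a deliberate simplification of the trust dynamic described above, that a flaw surfacing in one unit implicates every unit built from the same blueprint; the fleet size and variant counts are made up.

```python
import random

# Toy model of "fracture lines": one fleet built from a single blueprint
# versus the same fleet split into independently built variants. When a
# flaw surfaces in one unit, every unit sharing its blueprint is implicated.

FLEET_SIZE = 10_000

def implicated_share(num_variants: int) -> float:
    """Share of the fleet implicated when one random unit shows a flaw."""
    # Assign each unit to a blueprint variant at random.
    blueprints = [random.randrange(num_variants) for _ in range(FLEET_SIZE)]
    flawed_unit = random.randrange(FLEET_SIZE)
    flawed_blueprint = blueprints[flawed_unit]
    implicated = sum(1 for b in blueprints if b == flawed_blueprint)
    return implicated / FLEET_SIZE

for variants in (1, 10, 100):
    print(f"{variants:>3} variant(s): ~{implicated_share(variants):.0%} of fleet implicated")
# A single blueprint implicates 100% of the fleet; 100 variants, roughly 1%.
```

Of course, the cost from the previous sketch still applies: every extra variant is one more blueprint to validate, certify, and patch, which is exactly the safety price the paragraph above concedes.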
More generally, I've always seen diversity as a way for us to stay humble in the face of what we're creating. Welcoming diverse points of view necessarily means we will not put all our efforts and energy behind the single idea that seems optimal. However, it also means that we acknowledge our inability to predict the future. We acknowledge that there are unforeseen events that might throw a wrench into any seemingly optimal plan and that we're better off pursuing many different paths simultaneously, even if some of those paths appear to be less efficient.
Just how much diversity do we need if we are to build trust between humans and machines? Would having a hundred different self-driving car models be enough? Would each car need its own random artificial DNA to ensure it makes decisions slightly differently from other cars? I don't have the answer… But I do believe it is interesting to look at the lack of diversity in our replicated systems as a hurdle to building trustworthy machines.
Note: I’ve experimented with a featured image for this post, using a quote from the post in the image. The photo is from Jason Leung on Unsplash.