(Mis)Trust & (Non)Understanding

As the world grapples with the limits of artificial intelligence (AI), there is an increasing focus on aspects of transparency and trust in the human-machine relationship. Generative AI is progressing by leaps and bounds – but how is its equation with people evolving, given that one party is human and the other is technological?

In 2013, the Spike Jonze film Her cascaded into the discourse on human-machine interactions. It weaves a complex, touching narrative of the emotional connection between a struggling man and an insightful operating system named Samantha. One of her most powerful dialogues goes as follows:

“You know what’s interesting? I used to be so worried about not having a body, but now … I truly love it. You know, I’m growing in a way I couldn’t if I had a physical form … I’m not limited. I can be anywhere and everywhere simultaneously. I’m not tethered to time and space in a way that I would be if I was stuck in a body that’s inevitably gonna die.” 

The reason we are able to empathise with non-human characters or machines in fiction is that they have been created by other human beings from the only perspective available to us: “What would it be like for a person to act and think and be as X?” We are, naturally, more understanding of something that we can conceive of as related to ourselves. Scholarly research has captured this at play in a phenomenon known as ‘algorithm aversion’ – wherein human beings, after seeing a machine err, become less trusting of it than they would be of a human agent who made the same mistake.

When it comes to human-algorithmic relationships, we should note that ‘trust’ and ‘trustworthiness’ are often conflated, leading to conceptual fuzziness when evaluating their nature. Trust is relational: a user places a certain amount of confidence in the machine being able to do its task. Trustworthiness, on the other hand, is based on the machine’s past performance, and whether or not it was successful. Users could choose to place their trust in a machine despite the existence of other trustworthy alternatives (this is where factors like ease of navigation, access, and price could determine the decision).

It has always been a tricky relationship, because, as science writer Philip Ball puts it: “It isn’t simply that the science is dependent on the devices; the devices actually determine what is known”.

Perhaps it would be useful to view this through Kockelman’s analysis of the sieve as a mechanical device: the purpose it serves is both material (separating the desired from the undesired) and non-material (as an imagined form of filtration – of ideas, information, etc.). Sieves are essential to the processing of information – any device that has the ability to compute does so by accepting or rejecting a string of inputs in order to generate an output. The image below, too, is the output of what it depicts, i.e. a human being interacting with a machine. However, when it comes to artificial intelligence, one of the primary inherent challenges is what is known as the ‘black box’ problem: the difficulty of understanding how certain algorithms learn and produce outputs. Essentially, we know there is a sieve, but we can’t grasp how it works.
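To make the sieve analogy concrete, here is a minimal sketch in Python (the function name, data, and thresholds are purely illustrative, not drawn from Kockelman’s analysis): a computational ‘sieve’ is simply a predicate applied to a stream of inputs, and the output is whatever it accepts.

    def sieve(inputs, keep):
        """Keep only the inputs that the predicate accepts, rejecting the rest."""
        return [x for x in inputs if keep(x)]

    # A material sieve: only particles smaller than the mesh pass through.
    particles = [0.2, 1.5, 0.8, 3.0, 0.4]                 # illustrative sizes
    small = sieve(particles, keep=lambda size: size < 1.0)  # [0.2, 0.8, 0.4]

    # A non-material sieve: filtering information instead of matter.
    headlines = ["AI beats benchmark", "Local bake sale", "New model released"]
    ai_news = sieve(headlines, keep=lambda text: "AI" in text or "model" in text)
    # ['AI beats benchmark', 'New model released']

In this toy case the filtering rule is plainly visible; the black box problem arises when the rule itself is learned and can no longer be read off in this way.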

Naturally, it unnerves us not to know how an algorithm is making decisions. Bonezzi and colleagues (2022), however, ask why the inscrutability of human decision-making doesn’t generate the same mistrust. Research has documented a phenomenon known as the ‘illusion of explanatory depth’, where people overestimate the extent to which they understand how things (devices, natural phenomena, etc.) work. Through experimentation, Bonezzi and his fellow researchers showed that this illusion plays out more strongly for humans than for algorithms. Put simply, because you know how you think and make decisions, you assume that other human beings must do it the same way.

The more similar the ‘other’ is, the more likely you are to project your own cognitions onto them. However, when the ‘other’ is an algorithm, we aren’t able to draw any parallels – DALL-E and I don’t share any experiential or physical similarities that I can look toward, empathise with, or comprehend, and thus the black box finds its place in our (non)understanding of AI. If we then arrive at the question “how do you trust something that you don’t know?”, the answer is: you don’t. This is where the aspect of reliance comes into play.

Almost daily, we find ourselves vulnerable to the mechanics and technology of the world around us. We rely on our laptops to work, our cars to get us places, and ATMs to dispense our money. While these aren’t interpersonal equations, they are ones we depend on to function in a certain way.

At play here is what the physicist Cesar Hidalgo describes as a simple rule: “people judge humans by their intentions and machines by their outcomes”. He sees us moving from a normative idea of how machines should behave towards one where we discover how we should judge them. Whether this judgement is grounded in the idea of ‘trust’ or ‘reliance’ is up to us to decide. The more important aspect to observe, though, is how much we think we know about human decision-making, and whether we have a preference for one black box over the other.

