Breaking the Black Box of Users’ Experience

Dr. Ender Ricart
5 min read · Mar 28, 2023


I would often explain to my product and engineering partners that a user’s experience with a product or feature is best thought of as an interaction with a black box.

A user only experiences the Input and Output of a product or feature.

A user is confronted with an interface through which they can manipulate something, whether by a button or a verbal or written command. In a successful interaction with the product or service, the user is rewarded with an output. This might be an acknowledgement (“Your order has been submitted”), an answer (“Construction of the Leaning Tower of Pisa began on August 9, 1173”), or a state change (“Notifications have been disabled”).

Through continued interactions, the user forms associations between their actions and the outputs. This is the process by which users build mental models that guide future interactions and help them anticipate what outputs will result.

While a user might not know how a product’s or feature’s inner functions work (and, to be honest, they should neither be expected to nor need to), there should be a clear, perceptible association between the user’s input and the output: an if-then causal relation.

In a well-built product, a user’s experience need only be defined by what they interact with and perceive, much like putting a quarter into a gumball machine, turning the crank, and getting the gum. The “how-it-works,” the internal mechanisms that register the coin and release the gumball, is secondary to the user’s experience with the product.

The inner workings of a product are orthogonal to a user’s experience with it.
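To make the metaphor concrete, here is a minimal sketch, in Python with an invented `GumballMachine` class, of the encapsulation the analogy describes: the user-facing surface is nothing but input and output, and the mechanism stays private.

```python
class GumballMachine:
    """A black box: the user sees only the interface, never the mechanism."""

    def __init__(self, gumballs: int = 100):
        # Internal state: hidden "gears" the user never perceives.
        self._gumballs = gumballs
        self._coin_inserted = False

    def insert_quarter(self) -> None:
        # Input: the user's action at the interface.
        self._coin_inserted = True

    def turn_crank(self) -> str:
        # Output: a clear, if-then consequence of the input.
        if self._coin_inserted and self._gumballs > 0:
            self._coin_inserted = False
            self._gumballs -= 1
            return "You get a gumball."
        return "Nothing happens."


machine = GumballMachine()
machine.insert_quarter()
print(machine.turn_crank())  # -> You get a gumball.
```

Nothing about the private attributes ever reaches the user; the entire experience lives in the two public methods and their predictable results.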

However, products leveraging Artificial Intelligence (AI) and Machine Learning (ML) have started to crack the mirrored face of the black-box user experience. Each time a smart technology, such as a digital voice assistant (Alexa), or a generative AI entity like ChatGPT, produces output that is not expected or desired, it interrupts a user’s self-centered and embodied experience with the product, drawing awareness and scrutiny towards the product itself.

“Why did ChatGPT respond this way?”

“Why didn’t Siri set the alarm I requested?”

“Why am I being shown these ads?”

Uncertainty begets curiosity, and out of curiosity, theories of why and how are spun. Most of these theories are sensationalized. One need only perform a brief web search to surface any number of folk theories: facts awash in falsehoods, information mixed with misinformation, accusations, and more. This is not the desired state of a user’s experience with any product. The gears are meant to be hidden safely behind the glossy exterior of the gumball machine.

So, why can’t we do this with products that leverage ML and AI? Why do companies continue to fail at developing that seamless user experience of cause and effect, input and output?

Complexity of the Black Box in AI Applications

Let’s take a look at two facets of this: 1) the environment within which these products and features are developed and released, and 2) whether the black box metaphor has outlived its relevance.

The vast computational systems that power Artificial Intelligence and Machine Learning models are perched tenuously atop legacy architecture (often so outdated that its functioning remains a mystery), varied programming languages and codebases, multiple data-storage platforms, and mixtures of cloud computing and on-premises machines, perhaps with some edge computing thrown in for a flourish.

The companies that develop and launch this tech are internally and externally siloed, their employees constantly churning, resulting in AI and ML technologies that are not well understood by anyone, even the companies that create them. So, of course, we cannot expect there to be a consistent and clearly associated set of outputs to a user’s inputs when the creators of said product don’t even know who works on which part of the program or why certain sets of logic were applied to produce a given output!

Even the creators of AI and ML applications don’t have a full understanding of how all of their parts work together.

In order to arrive at a contained and cohesive user experience, however complex the product might be on the inside, what the user interacts with needs to be clearly correlated with and consequential to their input or action. This brings us back to the black box metaphor and the question of whether it still applies in a world of generative AI.

I would argue that it is still applicable, but we now need to pivot our focus away from unidirectional input-output and towards the feedback or control loop as the critical component of the black box. This change in focus makes the black box less a mechanism and more a dynamic between the user and the technology itself. Consider the large language models (LLMs) behind generative AI: unfathomably large datasets and multilayered models produce novel conversational dynamics for every input. With these types of technologies, the user experience is never intended to be static; it is dynamic, relational, that is, generative.

So, the black box metaphor remains relevant, but the focus is now on the feedback loop between user and technology. This feedback loop, the dynamic between user and technology, is the core of a user’s experience with a product that leverages AI/ML in its design. A user should not be interrupted in that relational dynamic, just as a consumer feeding a quarter into a gumball machine doesn’t expect to be stopped midway to learn about the machinery.
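To make the contrast concrete, here is a minimal sketch, again in Python, with a hypothetical `generate` function standing in for an LLM call (it is not any particular API): the gumball machine maps one input to one output, while a generative product runs a loop in which every output is folded back into the context of the next input.

```python
def generate(context: list[str]) -> str:
    # Hypothetical stand-in for a call to a large language model; a real
    # model would return a novel reply conditioned on the whole history.
    return f"(reply conditioned on {len(context)} prior turns)"


def feedback_loop() -> None:
    context: list[str] = []  # shared state that user and model build together
    while True:
        user_input = input("> ")
        if user_input.lower() in ("quit", "exit"):
            break
        context.append(f"User: {user_input}")
        reply = generate(context)  # output depends on the entire loop so far
        context.append(f"Assistant: {reply}")
        print(reply)
```

The crank always dispenses the same gumball; `generate` never sees the same context twice. That accumulating, shared state is the relational dynamic, and it, not the model internals, is what the user actually experiences.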

Companies need to slow down their AI race to the moon, er… singularity, and lean into the dynamics of feedback loops for users’ engagement with complex technology like AI and ML. How can users build relational dynamics with, and contribute to, complex systems without getting caught up in the how or the why? I think this beckons a new chapter in the user experience profession: defining what the relational dynamics between human and technology materially look like in product and experiential design for generative AI such as ChatGPT. I also think it would behoove companies to ensure they have end-to-end visibility into their processing pipelines, data infrastructure, and code architecture, so as to free up user experience professionals to focus on the interactional components that form the feedback loop between user-[black box]-technology.
