AI Empowered User
- Rohan Somji

- Feb 17, 2021
- 8 min read
This century began with an overwhelming surge of internet-related companies, which we welcomed with open arms. A decade later we worried about their effect on society. This decade begins with an overwhelming response to the pernicious effects of digital immersion, even as we grow ever more dependent on digitalisation during the COVID pandemic. AI now stands where computers stood in the 1970s: then we were a decade away from personal computing, and now we are a decade away from personal AI. Machine Learning (ML) developments and ML-based robotics have boosted AI development and are becoming synonymous with the term AI. It is well known that Facebook, Instagram, Twitter, Brandwatch, IBM, Intel, Clarabridge, etc., all use AI-based tools to gather valuable insights and to respond to their users.
Ordinary users, whether they know it or not, are interacting with AI whenever they use any of these platforms, and they are not armed as well as the AI-wielding corporations. AI-empowering is like any form of empowerment: it is about giving power back to the user. Unless AI becomes approachable for ordinary users, the power equation between companies offering services and ordinary users remains skewed. This means there is a need to view AI and robotics as more than tools for corporations. We need to learn how to make AI user-friendly, in both its physical and digital manifestations. It needs to be accessible.
AI has only recently surged due to developments in Machine Learning (ML) (Blackman, 2020; Foote, 2019) and robotics (Guerin, 2020; The History of Robots, 2020), but as of now it still remains in the hands of expert users. ML-based AI, and robotics that uses these ML techniques, have only recently become sophisticated enough to be used productively by the ordinary user (Guerin, 2020). Questions about UX design applied to AI are thus uncharted territory.
One of the pioneers in the digital direction is Qian Yang, who has addressed the challenge of AI as a material for design. Within robotics, Aaron Steinfeld has long been working towards improving HRI (Human-Robot Interaction): assessing, understanding, and suggesting possible interventions for improving interactions with robots.
Prior work has been done on AI in computer science and robotics (Media, 2020), and UI/UX has been an established field for a few decades now (see Moggridge, 2007), but UX work on AI has begun only in the last few years (see Lew & Schumacher, 2020; Hartwig & Rein, 2020; Souza et al., 2019). Though many researchers work in HCI (Human-Computer Interaction) and HRI, occasionally with a focus on AI, nobody has treated AI as a design material with respect to human psychological affordances and constraints.
Qian Yang has been tackling this problem throughout her PhD research, "Profiling Artificial Intelligence as a Material for User Experience Design". Built on a trail of around 17 research papers that successively zero in on the unique problems of UX intervention on AI, her work has kickstarted this engagement of UX with AI. Yang focuses on UX research that would make AI systems more accessible. She draws connections between insights from cognitive psychology, philosophy, and decision-making theories to propose a theory-driven, user-centric XAI (Explainable AI) framework (Wang et al., 2019, p.2). She seeks to understand how people reason and which XAI facilities can satisfy their reasoning goals (Yang, 2019, p.3). For this she uses a dual-process model of thinking, the Heuristic-Systematic Model (HSM), which clusters reasoning into System 1 (heuristic, intuitive) and System 2 (systematic, analytical) (Croskerry, 2009, p.1; Wang et al., 2019, p.7). In their paper on XAI, she and her colleagues import Croskerry's universal model of diagnostic reasoning, which takes dual-process theory as the predominant approach and posits two systems of decision making (Croskerry, 2009, p.1). This model enables her to explain how heuristic thinking interferes with systematic thinking, and it is her cornerstone for explaining how humans actually reason and how XAI should accommodate that. She could equally have used another dual-process model, such as the Elaboration Likelihood Model (ELM), which distinguishes central (careful, thoughtful) from peripheral (persuasive, association-based) decision making (Petty & Cacioppo, 1986, p.4). The problem with ELM is that it assumes that central-route memories are stronger than peripheral-route memories (Petty & Cacioppo, 1986). We must also note Osman's point that a multi-process model, one with more than just two routes of processing, would better account for the complex reasoning we go through (Osman, 2004, p.11), though we are unaware whether such a model lends itself to UX.
By applying a rationalistic epistemological approach as her method of enquiry, Yang links concepts in human reasoning to explainable AI techniques in a four-fold system (Wang et al., 2019, p.2):
1. How humans should understand
2. Correspondingly, how XAI explains
3. How humans actually reason (errors included)
4. How XAI can accommodate that
She puts forward a structural map, which may help XAI developers envision the user's reasoning by accounting for their biases. When this process is informed through literature review, ethnography, participatory design, etc. (Wang et al., 2019, p.9), the XAI framework will generate user-centered explanations instead of just rationally expected explanations.
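To make this concrete, here is a minimal sketch of what an explanation facility serving both reasoning modes might look like. This is my own illustration rather than anything from Yang's papers: it assumes scikit-learn, a toy dataset, and a small decision tree standing in for the AI system. The short ranked summary caters to heuristic (System 1) reading, while the full rule trace supports systematic (System 2) scrutiny.

```python
# A minimal sketch of an explanation facility serving two reasoning modes.
# Illustrative only: scikit-learn, a toy dataset, and a small decision tree
# stand in for the AI system; none of this comes from Yang's own work.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

# Heuristic (System 1) support: a short, glanceable summary of what drives the model.
ranked = sorted(zip(data.feature_names, model.feature_importances_),
                key=lambda pair: pair[1], reverse=True)
print("Top factors:", [name for name, _ in ranked[:3]])

# Systematic (System 2) support: the full decision rules, for users who want to audit the logic.
print(export_text(model, feature_names=list(data.feature_names)))
```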
Yang is trying to tackle the problems UX design faces with AI as a design material (Yang, 2020; Yang et al., 2020; Yang, 2018). If AI is to empower users, then it is essential that we understand the design affordances and constraints of AI as a design material. The two sources she identifies as creating design complexities for UX designers are:
Capability Uncertainty - Uncertainty about what an AI's evolving capabilities will be as it is exposed to new training data (a toy sketch of this appears after the figure below).
Output Complexity - Complexity arising from the fact that the AI keeps learning and adapts to different scenarios in a quasi-unique way.
By pinpointing these two specific sources, Yang develops four levels of AI systems (Yang, 2020, sec. 6.1) that practitioners and HCI researchers can use to target unique challenges in human-AI interaction (see the figure below).

Fig: (Yang, 2020, p.8)
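Since capability uncertainty is the more slippery of the two sources, a toy sketch may help. The following is my own illustration, not code from Yang's research; it assumes scikit-learn and synthetic data, and simply shows how an incrementally trained model can change its answer to the very same input once new training data arrives, which is what makes the design target a moving one for UX work.

```python
# A toy illustration of capability uncertainty: the same model, asked the same
# question, answers differently after it is exposed to new training data.
# Illustrative only (scikit-learn, synthetic data), not code from Yang's research.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.RandomState(0)
model = SGDClassifier(random_state=0)
probe = np.array([[1.0, 1.0]])  # a fixed input we keep asking about

# First batch: class 0 clusters around (0, 0), class 1 around (1, 1).
X1 = rng.normal([[0, 0]] * 50 + [[1, 1]] * 50, 0.3)
y1 = np.array([0] * 50 + [1] * 50)
model.partial_fit(X1, y1, classes=[0, 1])
print("After batch 1:", model.predict(probe))

# New data arrives with a shifted boundary: class 0 now sits around (1, 1).
# The answer to the very same probe can flip.
X2 = rng.normal([[1, 1]] * 50 + [[2, 2]] * 50, 0.3)
y2 = np.array([0] * 50 + [1] * 50)
model.partial_fit(X2, y2)
print("After batch 2:", model.predict(probe))
```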
Now, the implicit assumption here is that problems arise from AI as a design material (Yang, 2020; Yang et al., 2020), and one might say that she fails to consider the possibility of a problem lying only in the 'interactions' and not in the 'human' or the 'AI'. A point well noted by Wurman is the problem of information overload: a person's finite capacity to process information, when met with the output of too much information, leads to a state of information anxiety (Wurman, 1997). Tufte presents a solution which focuses on quantitative information and explores ways to organize large, complex datasets visually so that machine learning can facilitate thinking (Tufte, 2001), an approach quite possible within an XAI framework but still lying outside Yang's 'AI as a design material' framing. Yang's model might therefore miss problems whose source is not the intrinsic features of AI.
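As a small illustration of the Wurman/Tufte point, rather than of anything in Yang's framework, the sketch below (assuming numpy and matplotlib, with synthetic "model scores") condenses two hundred raw numbers into a single ranked chart; the information is the same, but the presentation respects the user's finite capacity to process it.

```python
# A small illustration of taming information overload with a visual summary.
# Illustrative only: numpy and matplotlib, with synthetic "model scores".
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
scores = rng.random(200)            # 200 raw numbers: unreadable dumped as text
top = np.argsort(scores)[-10:]      # keep only the ten largest for display

fig, ax = plt.subplots(figsize=(6, 3))
ax.barh([f"feature {i}" for i in top], scores[top])
ax.set_xlabel("model score")
ax.set_title("Ten highest-scoring features out of 200")
fig.tight_layout()
fig.savefig("top_features.png")
```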
Annotated Bibliography
Desai, M., Stubbs, K., Steinfeld, A., & Yanco, H. (2009). Creating trustworthy robots: Lessons and inspirations from automated systems.
He goes into much detail about the various flaws in simulated HRI interactions and how their shortcomings have resulted in a stunted model of trust.
Since he is a strong empiricist, it will serve us well to look critically at some of his user questionnaires as the primary way of getting feedback regarding the robot's interactions.
Lee, J. D., & See, K. A. (2004). Trust in Automation: Designing for Appropriate Reliance. Human Factors, 46(1), 50–80. https://doi.org/10.1518/hfes.46.1.50_30392
The definition applies to automation or to another person. Another robust definition is "the willingness to rely on an exchange partner in whom one has confidence" (Moorman et al., 1993, p. 82). Now, Lee and See would certainly consider reliance in terms of "relying on" or "having confidence in" the other party.
Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. Academy of Management Review, 20, 709-734.
The definition by Mayer et al. (1995) is the most widely used and accepted definition of trust (Rousseau, Sitkin, Burt, & Camerer, 1998). Lee and See mention that “as of April 2003, the Institute for Scientific Information citation database showed 203 citations of this article, far more than others on the topic of trust.” Some authors also go beyond intention and define trust as a behavioral result or state of vulnerability or risk (Deutsch, 1960; Meyer, 2001).
The definition I presented by Mayer, Davis & Schoorman dives deeper into the psycho-social states of the agents involved and is worth investigating with respect to HRI, since humans do tend to anthropomorphise automated processes like Siri, bomb-defusal robots in the military, and robot receptionists such as the experimental "Valerie" and "Tank" at CMU.
Osman, M. (2004). An evaluation of dual-process theories of reasoning. Psychonomic Bulletin & Review, 11(6), 988–1010. https://doi.org/10.3758/bf03196730
The multi-process model Osman suggests puts the reasoning process on a spectrum: a continuum between implicit and explicit processes.
Wang, D., Yang, Q., Abdul, A., & Lim, B. (2019). Designing Theory-Driven User-Centric Explainable AI. https://doi.org/10.1145/3290605.3300831
Her use of the rationalistic epistemological approach is never really fully explained. The choice seems arbitrary and hints toward a scientistic bias. There are numerous other approaches she could have taken, for example a phenomenological approach to see how users experience their reasoning.
Weidenfeld, A., Oberauer, K., & Hornig, R. (2005). Causal and non-causal conditionals: An integrated model of interpretation and reasoning. The Quarterly Journal of Experimental Psychology, 58A(8), 1479–1513. https://doi.org/10.1080/02724980443000719
For example, the clear distinction between abstract and contextual thinking is breaking down, as psychological studies inform us that the two are intertwined in a number of cases.
References
Blackman, R. (2020, October 15). A Practical Guide to Building Ethical AI. Harvard Business Review. https://hbr.org/2020/10/a-practical-guide-to-building-ethical-ai
Croskerry, P. (2009). A universal model of diagnostic reasoning. Academic Medicine: Journal of the Association of American Medical Colleges, 84(8), 1022–1028. https://doi.org/10.1097/ACM.0b013e3181ace703
Desai, M., Kaniarasu, P., Medvedev, M., Steinfeld, A., & Yanco, H. (2013). Impact of robot failures and feedback on real-time trust. 2013 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI), 251–258.
Desai, M., Stubbs, K., Steinfeld, A., & Yanco, H. (2009). Creating trustworthy robots: Lessons and inspirations from automated systems.
Foote, K. D. (2019, March 26). A Brief History of Machine Learning. DATAVERSITY. https://www.dataversity.net/a-brief-history-of-machine-learning/
Lew, G., & Schumacher, R. M. (2020). AI and UX: Why Artificial Intelligence Needs User Experience. Apress.
Guerin, K. (2020, October 27). Council Post: With Industrial Robots Easier Than Ever To Program, How Do Our Attitudes Toward Automation Adapt? Forbes. Retrieved November 2, 2020, from https://www.forbes.com/sites/forbestechcouncil/2020/10/27/with-industrial-robots-easier-than-ever-to-program-how-do-our-attitudes-toward-automation-adapt/
Hartwig, R., & Rein, L. (2020). User Experience Principles for Systems with Artificial Intelligence. HCI International 2020 - Posters, 155–160. https://doi.org/10.1007/978-3-030-50726-8_20
Lee, J. D., & See, K. A. (2004). Trust in Automation: Designing for Appropriate Reliance. Human Factors, 46(1), 50–80. https://doi.org/10.1518/hfes.46.1.50_30392
Media, O. (n.d.). Cognitive Robotics. Retrieved November 2, 2020, from https://learning.oreilly.com/library/view/cognitive-robotics/9781482244571/
Moggridge, B. (2007). Designing Interactions. Footprint books.
Parasuraman, R., Molloy, R., & Singh, I. (1993). Performance Consequences of Automation Induced Complacency. International Journal of Aviation Psychology, 3. https://doi.org/10.1207/s15327108ijap0301_1
Petty, R. E., & Cacioppo, J. T. (1986). Communication and Persuasion: Central and Peripheral Routes to Attitude Change. Berlin, Germany: Springer-Verlag.
Souza, K. E. S., Seruffo, M. C. R., De Mello, H. D., Souza, D. D. S., & Vellasco, M. M. B. R. (2019). User Experience Evaluation Using Mouse Tracking and Artificial Intelligence. IEEE Access, 7, 96506–96515. https://doi.org/10.1109/access.2019.2927860
Steinfeld, A., Fong, T., Kaber, D., Lewis, M., Scholtz, J., Schultz, A., & Goodrich, M. (2006). Common metrics for human-robot interaction. Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction, 33–40.
The History of Robots: From the 400 BC Archytas to the Boston Dynamics’ Robot Dog. (2020, July 7). https://interestingengineering.com/the-history-of-robots-from-the-400-bc-archytas-to-the-boston-dynamics-robot-dog
Tufte, E. R. (2001). The visual display of quantitative information. Cheshire, Conn.: Graphics Press. http://archive.org/details/visualdisplayofq00tuft
Wang, D., Yang, Q., Abdul, A., & Lim, B. (2019). Designing Theory-Driven User-Centric Explainable AI. https://doi.org/10.1145/3290605.3300831
Wurman, R. S. (1997). Information Architects. Graphis.
Yang, Q. (2020). Profiling Artificial Intelligence as a Material for User Experience Design.
Yang, Q., Steinfeld, A., Rosé, C., & Zimmerman, J. (2020). Re-examining Whether, Why, and How Human-AI Interaction Is Uniquely Difficult to Design. https://doi.org/10.1145/3313831.3376301