Improving Digital Bots: Lessons from Robotics
- Rohan Somji

- Mar 1, 2021
- 6 min read
Robotics had long developed separately, but advances in ML have begun changing this field as well. Let’s talk about Aaron Steinfeld, whose work in Human-Robot Interaction (HRI) has shaped how researchers in the field frame problems and the metrics they use. He takes an analytical approach, proposing common metrics to measure how well humans and robots perform as a team (Steinfeld et al., 2006, p. 1).
He relies on “trust” as the central concept for evaluating these interactions (Desai et al., 2009), operationalizing it in terms of automation reliability, automation capability, and changing levels of autonomy (Steinfeld, 2009, p. 4). Driven by empiricism, most of his papers are based on experiments, user surveys, and user testing.
The definition of trust he uses is that of Lee and See:
“the attitude that an agent will help achieve an individual’s goals in a situation characterized by uncertainty and vulnerability” (Desai et al., 2009, p. 1).
Lee and See examine trust with respect to automation rather than interpersonal trust, and so must justify applying the term to automation. They recognise that the term originates in an interpersonal sense of trust and acknowledge that they are extending it to automation (Lee & See, 2004). Lee and See argue that reliance is a good measure of trust because the two correlate directly, a claim grounded in laboratory studies, and Steinfeld builds on this correlation, using reliance on automation as a quantifier for trust in automation (Lee & See, 2004).
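To make this operationalization concrete, here is a minimal sketch of how reliance might be quantified in a user study and compared against self-reported trust. It is an illustration under my own assumptions: the `Trial` structure, the toy data, and the use of a simple Pearson correlation are mine, not a protocol from Steinfeld or from Lee and See.

```python
# Hypothetical sketch of "reliance as a proxy for trust": count how often each
# operator accepted the automation's recommendation, then correlate that reliance
# rate with self-reported trust scores. Field names, data, and thresholds are
# illustrative assumptions, not any published instrument.
from dataclasses import dataclass
from statistics import correlation  # available in Python 3.10+

@dataclass
class Trial:
    participant: str
    automation_suggested: bool  # did the system offer an automated action?
    operator_accepted: bool     # did the operator let the automation act on it?

def reliance_rate(trials: list[Trial], participant: str) -> float:
    """Fraction of the automation's offers that this participant accepted."""
    offers = [t for t in trials if t.participant == participant and t.automation_suggested]
    if not offers:
        return 0.0
    return sum(t.operator_accepted for t in offers) / len(offers)

# Toy data: three participants, three automation offers each,
# plus post-session trust questionnaire scores on a 1-7 scale.
trials = [
    Trial("p1", True, True), Trial("p1", True, True), Trial("p1", True, False),
    Trial("p2", True, False), Trial("p2", True, False), Trial("p2", True, True),
    Trial("p3", True, True), Trial("p3", True, True), Trial("p3", True, True),
]
trust_scores = {"p1": 5.0, "p2": 2.5, "p3": 6.5}

rates = [reliance_rate(trials, p) for p in trust_scores]
print("reliance rates:", dict(zip(trust_scores, rates)))
print("reliance-trust correlation:", round(correlation(rates, list(trust_scores.values())), 3))
```

If reliance rates and questionnaire scores line up in data like this, that is the kind of laboratory correlation Lee and See lean on; the objection developed below is that the two can come apart outside the lab.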
But ‘trust’ is a word that was embedded in human relationships long before it was ever used for automation. A significant definition of trust in social psychology is:
"the willingness of a party to be vulnerable to the actions of another party based on the expectation that the other will perform a particular action important to the truster, irrespective of the ability to monitor or control that party" (Mayer, Davis, & Schoorman, 1995, 712 ).
Vulnerability, an important psycho-social state, is key to how trust is engaged. Steinfeld offers no account of how a user’s “willingness to be vulnerable” applies to HRI and the experience of automation. Moreover, reliance as an indicator of trust can be problematic: one can rely on something out of necessity (like public transportation) and still not trust it.
But Steinfeld does critically engage with trust when he critiques Riley’s model (shown below). Here trust and reliability, factored into a physical human-robot interaction, show how interconnected terms such as operator accuracy, risk, trust in automation, workload, and perceived risk are (Desai et al., 2009, p. 2).

Fig.: Riley’s model (Desai et al., 2009, p. 3)
According to Steinfeld, Riley’s model does not capture interface usability, proximity to the robot, situational awareness, and many other tangible factors. Steinfeld improves on this framework, emphasising automation reliability, automation capability, and changing levels of autonomy as the trust-building factors in human-robot interaction (Steinfeld, 2009, p. 4).
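To give a sense of what “changing levels of autonomy” can mean as a design lever, here is a hypothetical sketch of a controller that slides between autonomy levels as the automation’s measured reliability changes. The levels, window size, and thresholds are assumptions made for illustration only; they are not drawn from Steinfeld’s framework.

```python
# Hypothetical "adjustable autonomy" controller: hand more control to the robot
# when its recent reliability is high, hand control back to the operator when
# reliability drops. Levels, window size, and thresholds are illustrative guesses.
from collections import deque

LEVELS = ["teleoperation", "shared_control", "supervised_autonomy", "full_autonomy"]

class AdjustableAutonomy:
    def __init__(self, window: int = 20):
        self.outcomes = deque(maxlen=window)  # True = the automation succeeded
        self.level = 1                        # start in shared control

    def record(self, success: bool) -> str:
        """Log one automation outcome and return the (possibly updated) autonomy level."""
        self.outcomes.append(success)
        reliability = sum(self.outcomes) / len(self.outcomes)
        if reliability > 0.9 and self.level < len(LEVELS) - 1:
            self.level += 1          # reliable lately: increase autonomy
        elif reliability < 0.7 and self.level > 0:
            self.level -= 1          # unreliable lately: give the operator more control
        return LEVELS[self.level]

controller = AdjustableAutonomy(window=5)
for outcome in [True, True, True, True, True, False, False]:
    print(controller.record(outcome))
```

The point of the sketch is only that reliability, capability, and autonomy level are coupled: a designer who treats autonomy as adjustable has a concrete knob to turn when reliability drops.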
Throughout his approach, Steinfeld implicitly assumes that more automation is not a problem in itself and that helping users better understand system capabilities will improve trust. He sidesteps what Parasuraman calls automation-induced complacency: complacency on the part of the user due to either overreliance on automation or an inability to understand how the automated system works (Merritt et al., 2019; Parasuraman et al., 1993), a problem that grows larger and can lead to disasters, as in cases of aviation automation. Furthermore, according to Bainbridge this leads to the paradox of automation, where
“the more efficient the automated system, the more crucial the human contribution of the operators.”
Steinfeld never addresses the paradox of automation and does not explain how improving reliability bypasses these critical user problems. A fatal example is Air France Flight 447: a failure of automation put the pilots into manual mode for which they weren’t prepared, a scenario that has less to do with human-robot trust and more to do with the intrinsic problem of automation.
Nevertheless, Steinfeld’s work improves usability by understanding what ordinary users need in digital as well as physical spaces that will soon be dominated by AIs. Of course, there are many other ways of classifying these systems, but the digital/physical division is enough to begin a conversation about AI-empowered users in UX design, and that conversation will go a long way.
Annotated Bibliography
Desai, M., Stubbs, K., Steinfeld, A., & Yanco, H. (2009). Creating trustworthy robots: Lessons and inspirations from automated systems.
He goes into much detail about the various flaws in simulated HRI interactions and how their shortcomings have resulted in a stunted model of trust.
Since he is a strong empiricist, it will serve us well to look critically at some of his user questionnaires, his primary way of getting feedback about the robot’s interactions.
Lee, J. D., & See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46(1), 50–80. https://doi.org/10.1518/hfes.46.1.50_30392
The definition applies to automation or to another person. Another robust definition is “the willingness to rely on an exchange partner in whom one has confidence” (Moorman et al., 1993, p. 82). On this definition, Lee and See would certainly count reliance, as “relying” or “having confidence,” within trust.
Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. Academy of Management Review, 20(3), 709–734.
The definition by Mayer et al. (1995) is the most widely used and accepted definition of trust (Rousseau, Sitkin, Burt, & Camerer, 1998). Lee and See mention that “as of April 2003, the Institute for Scientific Information citation database showed 203 citations of this article, far more than others on the topic of trust.” Some authors also go beyond intention and define trust as a behavioral result or state of vulnerability or risk (Deutsch, 1960; Meyer, 2001).
The definition I presented by Mayer, Davis, and Schoorman dives deeper into the psycho-social states of the agents involved and is worth investigating with respect to HRI, since humans do tend to anthropomorphise automated processes such as Siri, bomb-disposal robots in the military, and robot receptionists like the experimental “Valerie” and “Tank” at CMU.
References
Blackman, R. (2020, October 15). A Practical Guide to Building Ethical AI. Harvard Business Review. https://hbr.org/2020/10/a-practical-guide-to-building-ethical-ai
Croskerry, P. (2009). A universal model of diagnostic reasoning. Academic Medicine: Journal of the Association of American Medical Colleges, 84(8), 1022–1028. https://doi.org/10.1097/ACM.0b013e3181ace703
Desai, M., Kaniarasu, P., Medvedev, M., Steinfeld, A., & Yanco, H. (2013). Impact of robot failures and feedback on real-time trust. 2013 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI), 251–258.
Desai, M., Stubbs, K., Steinfeld, A., & Yanco, H. (2009). Creating trustworthy robots: Lessons and inspirations from automated systems.
Foote, K. D. (2019, March 26). A Brief History of Machine Learning. DATAVERSITY. https://www.dataversity.net/a-brief-history-of-machine-learning/
Lew, G., & Schumacher, R. M., Jr. (2020). AI and UX: Why Artificial Intelligence Needs User Experience. Apress.
Guerin, K. (n.d.). Council Post: With Industrial Robots Easier Than Ever To Program, How Do Our Attitudes Toward Automation Adapt? Forbes. Retrieved November 2, 2020, from https://www.forbes.com/sites/forbestechcouncil/2020/10/27/with-industrial-robots-easier-than-ever-to-program-how-do-our-attitudes-toward-automation-adapt/
Hartwig, R., & Rein, L. (2020). User Experience Principles for Systems with Artificial Intelligence. HCI International 2020 - Posters, 155–160. https://doi.org/10.1007/978-3-030-50726-8_20
Lee, J. D., & See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46(1), 50–80. https://doi.org/10.1518/hfes.46.1.50_30392
O’Reilly Media. (n.d.). Cognitive Robotics. Retrieved November 2, 2020, from https://learning.oreilly.com/library/view/cognitive-robotics/9781482244571/
Moggridge, B. (2007). Designing Interactions. Footprint books.
Parasuraman, R., Molloy, R., & Singh, I. (1993). Performance Consequences of Automation Induced Complacency. International Journal of Aviation Psychology, 3. https://doi.org/10.1207/s15327108ijap0301_1
Petty, R. E., & Cacioppo, J. T. (1986). Communication and Persuasion: Central and Peripheral Routes to Attitude Change. Springer-Verlag. ISBN 978-0387963440.
Souza, K. E. S., Seruffo, M. C. R., De Mello, H. D., Souza, D. D. S., & Vellasco, M. M. B. R. (2019). User Experience Evaluation Using Mouse Tracking and Artificial Intelligence. IEEE Access, 7, 96506–96515. https://doi.org/10.1109/access.2019.2927860
Steinfeld, A., Fong, T., Kaber, D., Lewis, M., Scholtz, J., Schultz, A., & Goodrich, M. (2006). Common metrics for human-robot interaction. Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction, 33–40.
The History of Robots: From the 400 BC Archytas to the Boston Dynamics’ Robot Dog. (2020, July 7). https://interestingengineering.com/the-history-of-robots-from-the-400-bc-archytas-to-the-boston-dynamics-robot-dog
Tufte, E. R. (2001). The visual display of quantitative information. Cheshire, Conn. : Graphics Press. http://archive.org/details/visualdisplayofq00tuft
Wang, D., Yang, Q., Abdul, A., & Lim, B. (2019). Designing Theory-Driven User-Centric Explainable AI. https://doi.org/10.1145/3290605.3300831
Wurman, R. S. (1997). Information Architects. Graphis.
Yang, Q. (2020). Profiling Artificial Intelligence as a Material for User Experience Design. 126.
Yang, Q., Steinfeld, A., Rosé, C., & Zimmerman, J. (2020). Re-examining Whether, Why, and How Human-AI Interaction Is Uniquely Difficult to Design. https://doi.org/10.1145/3313831.3376301