
WORK PORTFOLIO


HOW DO YOUR FRIENDS INFLUENCE THE MUSIC YOU LISTEN TO?

Results of in-depth interviews with 8 Spotify users show that the songs you listen to are influenced not only by the social circle you are in, but also by what you are doing at the time.


HOW MANY STEPS DO YOU TAKE TO TRANSFER A DOCUMENT? 3? 5? 11?

What if you could do it in two steps? Hit send.
Deliverables: Scenarios, Story Map, Target Market, PEST Analysis, SWOT Analysis, Marketing Strategy & more. Made for college students, professors, and working professionals.


ARE YOU LESS CONCERNED ABOUT YOUR SELF-DRIVING CAR GETTING INTO AN ACCIDENT JUST BECAUSE YOU USE A MOBILE PHONE?

Why is it that, across different countries, consumers willing to buy self-driving cars are also heavy digital users? Is there a relationship between digital technology adoption and consumer acceptance of self-driving cars?


SURVEY DESIGN FOR SELF-DRIVING CAR SAFETY CONCERNS

Are digital technology users more likely to buy self-driving cars and be less concerned about safety risks? Is there a positive linear relationship?
Deliverables: 19 Likert-scale questions based on 5 personas, for a target population of ~1,000 Americans aged 18–60 in urban centers.


INVESTIGATIVE PROPOSAL FOR AI TRUST ISSUES IN USERS

What factors shape a user's experience of AI? In what ways can the user interface of AI systems help the user trust the AI system? Is XAI enough? I propose a mixed-methods approach.


DESIGN ARCHITECTURE FOR SPOTIFY SOCIAL FEATURES

Discover how Spotify positions itself as solely a music and podcast streaming platform through design at the top level of its architecture, despite having a bottom-level design architecture that is the same as that of social media companies. How does it do this?


TWO CASE STUDIES ON INTELLIGENT RECOMMENDER SYSTEMS

We follow two teams as they analyze and make changes to an intelligent e-learning recommender system and an online mental health chatbot.


Annotated Bibliographies

A collection from my papers

Abdul, A. (2020, January 8). COGAM: Measuring and Moderating Cognitive Load in Machine Learning Model Explanations. Ashraf Abdul. https://www.ashrafabdul.com/publication/cogam-chi20/


Abdul discusses in detail how XAI (explainable AI) can overwhelm the user with cognitive overload, and goes on to present a way of visualizing information in chunks, with reading times indicated and calibrated to the user.
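
As a loose illustration of that calibration idea only (not COGAM's actual method; the chunk size, reading speed, and function name below are assumptions), one could split an explanation into chunks and estimate a per-chunk reading time from a user-specific reading speed:

# Illustrative sketch, not COGAM itself: split an explanation into
# chunks and estimate each chunk's reading time from a user-calibrated
# reading speed (words per minute), so every chunk can show a time estimate.
def chunk_with_reading_times(text: str, words_per_chunk: int = 40,
                             user_wpm: float = 200.0) -> list[tuple[str, float]]:
    """Return (chunk, estimated seconds to read) pairs."""
    words = text.split()
    chunks = [" ".join(words[i:i + words_per_chunk])
              for i in range(0, len(words), words_per_chunk)]
    return [(c, 60.0 * len(c.split()) / user_wpm) for c in chunks]

explanation = "The model assigns a high risk score mainly because of late payments"
for chunk, seconds in chunk_with_reading_times(explanation, words_per_chunk=6):
    print(f"[~{seconds:.1f}s] {chunk}")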


Bucher, T. (2012). Want to be on the top? Algorithmic power and the threat of invisibility on Facebook. New Media & Society, 14(7), 1164–1180.


I want to point out her interesting use of Foucault’s notion of surveillance. Foucault describes the Panopticon as a circular prison with a guard tower in the center, such that one never knows whether anyone is inside or when they are watching. Taina Bucher inverts this notion: the Facebook user is always under “the threat of invisibility,” which acts as a unique reverse-Foucauldian technique of coercion. It will be interesting to see how this notion plays out with Facebook’s AI/ML program, as it might be the limit case for such a reversal. The Foucauldian notion of power is an evolving one, and so it could encompass what Bucher has offered as a reverse-Foucauldian form of social coercion.


Bucher, T. (2017). The algorithmic imaginary: Exploring the ordinary affects of Facebook algorithms. Information, Communication & Society, 20(1), 30–44. https://doi.org/10.1080/1369118X.2016.1154086


Here she investigates how Facebook users’ understanding of algorithms affects the way they experience the platform, drawing on interviews and tweets to gather personal stories about how awareness of the algorithms shapes use. It would be exciting to see whether users would have a substantially different understanding of an AI algorithm: would they perceive it as a helper or a disrupter? How would this affect their use of the platform?






Desai, M., Stubbs, K., Steinfeld, A., & Yanco, H. (2009). Creating trustworthy robots: Lessons and inspirations from automated systems.


Desai goes into much detail about the various flaws in simulated HRI interactions and how their shortcomings have resulted in a stunted model of trust. Since he is a strong empiricist who relies on user questionnaires as the primary way of getting feedback on the robot’s interactions, it will serve us well to look at some of these questionnaires critically.




Du, F., Plaisant, C., Spring, N., Crowley, K., & Shneiderman, B. (2019). EventAction: A visual analytics approach to explainable recommendation for event sequences. ACM Transactions on Interactive Intelligent Systems (TiiS), 9(4), 1–31.


The argument for explainable recommendations does have some merit, since racial bias in algorithms, poor medical decisions by recommender systems, and similar failures have been known to occur.

But TikTok does not use XAI and yet has a growing user base, which suggests there are more strategies left to explore that can generate trust in users; XAI is not the only way.


Du, F., Plaisant, C., Spring, N., Crowley, K., & Shneiderman, B. (2019). EventAction: A visual analytics approach to explainable recommendation for event sequences. ACM Transactions on Interactive Intelligent Systems (TiiS), 9(4), 1–31.


Some of these questions are mentioned explicitly in the “Problem and Approach” section of their paper. They discuss the various sorts of recommender systems that exist, from temporal search queries to searches based on record attributes, demographic attributes, and so on.


They base their approach on case studies with students who are looking to get into a specific field (e.g., a professorship) and want to track what students in past cohorts did in order to get there. The recommendation engine is supposed to help by suggesting profiles that match in terms of the sequence of events that led to the kind of job the user wants.




Elton, D. (2020). Self-explaining AI as an alternative to interpretable AI.

Transparency is one approach to improving trust, but consistent results are an equally powerful method. Elton provides an alternative to XAI called self-explaining AI, in which the AI system itself explains its process using neural networks. This points to approaches shared with the visual display of explanations common in XAI.

Frost, R. L., & Rickwood, D. J. (2017). A systematic review of the mental health outcomes associated with Facebook use. Computers in Human Behavior, 76, 576–600. https://doi.org/10.1016/j.chb.2017.08.001


Interestingly, Facebook use was associated with addiction, anxiety, and depression, among other issues, yet at the same time was correlated with lower depressive symptoms when used in a way that "enabled perceived social support and connection." This lends credence to the idea that online platforms can be used in supportive ways if affective computing is considered.


Gigerenzer, G. (2010). Personal reflections on theory and psychology. Theory & Psychology, 20(6), 733–743.


He also remarks that “Dual-process theories of reasoning exemplify the backwards development from precise theories to surrogates.”



Kowalski, R. (1974). Predicate logic as programming language. IFIP Congress, 74, 569–574.


Kowalski in his early years was concerned mostly with foundational theoretical work in logic and computer science, and he made some important contributions to the early days of automated theorem proving. We must keep in mind that advances in discrete mathematics and logic have always had a significant relation to computer science and have driven innovation in the field: ENIAC, after all, was a massive number cruncher, and Turing’s Bombe was a codebreaker!

Kowalski, R. (2011). Computational Logic and Human Thinking: How to Be Artificially Intelligent. Cambridge University Press.


In his later years, Kowalski focused on using his understanding of how logical inference works to develop a framework for humans to learn from artificial intelligence. He argues that the use of computational logic can help ordinary people improve their communication skills and, ultimately, their practical problem-solving abilities, though the seeds of this kind of thinking can be traced back to Kowalski, R. (1979). Logic for problem solving (Vol. 7). He does not, however, address the problem of technological solutionism, and he evades the question of our overreliance on the technological way of thinking (Morozov, 2014).

Lee, J. D., & See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46(1), 50–80. https://doi.org/10.1518/hfes.46.1.50_30392


The definition applies to automation or to another person. Another robust definition is “the willingness to rely on an exchange partner in whom one has confidence” (Moorman et al., 1993, p. 82). Lee and See would certainly count such “relying” or “having confidence” as reliance.


“Considerable research has shown the attitude of trust to be important in mediating how people rely on each other (Deutsch, 1958, 1960; Rempel, Holmes, & Zanna, 1985; Ross & LaCroix, 1996; Rotter, 1967). Sheridan (1975)” They then focus on the concepts of trust and reliance, acknowledging that trust guides, but does not completely determine, reliance. This means the relationship between reliance and trust is complicated, and there are many conflicting findings on it.


Lee, J. D., & See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46(1), 50–80. https://doi.org/10.1518/hfes.46.1.50_30392


Lee and See argue for a deep relation between trust and reliability and go into much detail, drawing on communication studies. Steinfeld relies on their concepts to explain automation reliability and automation capability in robotics.


We know, for example, that within robotics cute-looking robots receive better responses (see Kate Darling’s work at the MIT Media Lab).


I want to understand how a user’s experience changes when they are aware of an AI system behind a UI, and how cultural dispositions play a role in these experiences.


Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. Academy of Management Review, 20(3), 709–734.


The definition by Mayer et al. (1995) is the most widely used and accepted definition of trust (Rousseau, Sitkin, Burt, & Camerer, 1998). Lee and See mention that “as of April 2003, the Institute for Scientific Information citation database showed 203 citations of this article, far more than others on the topic of trust.”  Some authors also go beyond intention and define trust as a behavioral result or state of vulnerability or risk (Deutsch, 1960; Meyer, 2001).

The definition I presented by Mayer, Davis, and Schoorman dives deeper into the psycho-social states of the agents involved and is worth investigating with respect to HRI, since humans do tend to anthropomorphize automated systems: Siri, bomb-disposal robots in the military, and robot receptionists such as the experimental “Valerie” and “Tank” at CMU.

Osman, M. (2004). An evaluation of dual-process theories of reasoning. Psychonomic Bulletin & Review, 11(6), 988–1010. https://doi.org/10.3758/bf03196730

The multi-process model Osman suggests puts the reasoning process on a spectrum: a continuum between implicit and explicit processes.




Picard, R. W. (2000). Affective computing. MIT Press.


Affective Computing argues for integrating more emotion into our computer interactions to improve the overall user experience. In this regard, the intermediary chatbot is a perfect example of a small-scale, quick application of affective computing.



Steinfeld, A., Fong, T., Kaber, D., Lewis, M., Scholtz, J., Schultz, A., & Goodrich, M. (2006). Common metrics for human-robot interaction. Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction, 33–40. pp. 33.

It gives us a framework for investigating performance issues in the human-robot team (see “Impact of Robot Failures and Feedback on Real-Time Trust,” Steinfeld et al.).



Steinfeld, A., Fong, T., Kaber, D., Lewis, M., Scholtz, J., Schultz, A., & Goodrich, M. (2006). Common metrics for human-robot interaction. Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction, 33–40. pp. 36.


Since Steinfeld is a strong empiricist who treats user questionnaires as the primary way of getting feedback on the robot’s interactions, it will serve us well to look at some of these questionnaires critically.

Tracy, S. J. (2019). Qualitative Research Methods: Collecting Evidence, Crafting Analysis, Communicating Impact. John Wiley & Sons, Incorporated. http://ebookcentral.proquest.com/lib/georgetown/detail.action?docID=5847435


A different approach could be complete participant observation, where the researcher monitors users while they operate an AI/ML-based application, such as a search engine, and generates themes from field notes of what the users were observed doing. At the end, users could be given a questionnaire based on a semantic differential scale, and their answers would provide quantitative data that the researcher can later compare with the field notes to draw out issues related to trust in AI.
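
As a minimal sketch of how the quantitative half of that design might be scored (the adjective pairs, the 7-point scale, the reverse-keyed item, and the dimension groupings below are all illustrative assumptions, not taken from Tracy):

# Minimal sketch: scoring a hypothetical 7-point semantic differential
# questionnaire about trust in an AI application. Items, reverse keying,
# and dimension groupings are illustrative assumptions.
from statistics import mean

SCALE_MAX = 7  # 1 = left adjective, 7 = right adjective

# Items: (id, left adjective, right adjective, reverse-keyed?)
ITEMS = [
    ("q1", "unreliable", "reliable", False),
    ("q2", "deceptive", "honest", False),
    ("q3", "predictable", "erratic", True),   # reverse-keyed
    ("q4", "harmful", "beneficial", False),
]

# Hypothetical trust dimensions a researcher might compare with field notes.
DIMENSIONS = {"competence": ["q1", "q3"], "integrity": ["q2", "q4"]}

def score_response(response: dict[str, int]) -> dict[str, float]:
    """Return a mean score per dimension, re-keying reversed items."""
    keyed = {}
    for item_id, _, _, reverse in ITEMS:
        raw = response[item_id]
        keyed[item_id] = (SCALE_MAX + 1 - raw) if reverse else raw
    return {dim: mean(keyed[i] for i in ids) for dim, ids in DIMENSIONS.items()}

# Example: one participant's raw ratings.
print(score_response({"q1": 6, "q2": 5, "q3": 2, "q4": 7}))
# -> {'competence': 6.0, 'integrity': 6.0}

Dimension scores like these could then be set against the field-note themes to see where observed behavior and self-reported trust diverge.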


Wang, D., Yang, Q., Abdul, A., & Lim, B. (2019). Designing Theory-Driven User-Centric Explainable AI. pp. 2. https://doi.org/10.1145/3290605.3300831


She talks about supporting users with various levels of AI literacy in diverse subject domains: from the bank customer who is refused a loan, to the doctor making a diagnosis with a decision aid, to the patient who learns that he may have skin cancer from a smartphone photograph of his mole.


Wang, D., Yang, Q., Abdul, A., & Lim, B. (2019). Designing Theory-Driven User-Centric Explainable AI. https://doi.org/10.1145/3290605.3300831


Her use of a rationalistic epistemological approach is never fully explained. The choice seems arbitrary and hints at a scientistic bias. There are numerous other approaches she could have taken, for example a phenomenological approach that examines how users experience their own reasoning.



Weidenfeld, A., Oberauer, K., & Hornig, R. (2005). Causal and noncausal conditionals: An integrated model of interpretation and reasoning. The Quarterly Journal of Experimental Psychology, 58A(8), 1479–1513. https://doi.org/10.1080/02724980443000719

For example, the clear distinction between abstract thinking and contextual thinking is breaking down, as psychological studies inform us that the two are intertwined in a number of cases.
