Book Review: Human-Centered AI
Posted March 01, 2022
Human-Centered AI is a book that, not surprisingly given the title, strongly adopts a perspective quite compatible with the beliefs and values of the human factors community. In this well-written, enjoyable, and easy-to-read book, Shneiderman echoes many points that this community has been making for 40 years, since its origins in aviation psychology and the pioneering work of Earl Wiener, Charlie Billings, Raja Parasuraman, Nadine Sarter, John Lee, and others. His writing, from a computer science perspective, nicely complements that approach grounded in engineering psychology.
Shneiderman is a highly credentialed computer scientist who has earned a great deal of respect within the computer science community, initially through his excellent work on data visualization. Throughout the book, he laments that much of the CS community pays little regard to the human user; his principled aim in this book is to turn their attention toward human-centered issues in artificial intelligence. He is optimistic that this will happen.
The book is replete with well-documented references to policies on AI adopted by organizations and industries. Shneiderman is quite effective at proposing dichotomies and triads of easy-to-recall concepts, such as that automation should be “comprehensible, predictable and controllable” or “reliable, safe and trustworthy”, or the distinction between the goals of AI science and those of AI innovation. Some of these I elaborate on below.
Another valuable aspect of the book is his frequent description of particular AI technologies, such as those that appear in cameras, thermostats, voice-to-text systems, spell checkers, or driverless cars, highlighting the features within them that support human-centered AI.
Underlying all of the coverage is a deep commitment to progressive humanitarian values, such as fairness, reducing economic disparities, employment, and combatting prejudice, and to how AI technology could or should play a role in each. This commitment is strongly emphasized in his introductory chapter.
The book is neatly divided into five parts, each with multiple chapters, addressing in turn: 1. What is HCAI?, 2. An HCAI framework, 3. Four design metaphors, 4. Governance structures, and 5. “Where do we go from here?”. Each part is in turn divided into an introductory chapter, two or more “content chapters”, and a final chapter called “skeptic’s corner”, in which he presents some of the counter-arguments that might be raised against the main points of the content chapters.
In Part 1 (What is Human-Centered AI?), he provides, across five chapters, his answers to that question. He devotes a good deal of space to laying out the fundamental differences between humans and computers, and to addressing the concern that automation, AI, and robotics may lead to unemployment.
In Part 2, across its five chapters, he lays out his framework for effective HCAI within the two-dimensional space defined by level of human control crossed with level of computer automation. He paints this space not as defining a necessary tradeoff between human and computer, but rather as framing a goal: seek the maximum automation that still enables the highest level of human supervisory control, in a manner that essentially exploits the best capabilities of each agent.
In Chapter 8, he provides several general examples of how such a goal can be achieved, such as interlock systems that prevent irreversible errors, predictive displays, and well-designed “control centers” (a recurring theme in the book) from which the human supervisor can effectively understand what the automation is doing and why (i.e., the concept of transparency that has engaged much research within the HF community).
Chapter 9 focuses on vital principles of HCAI design, such as reducing working-memory load, presenting many useful examples of those principles in practice, ranging from thermostats, to spell checkers, to auto-completion forms, to digital cameras, to self-driving cars. A particular focus is how to support fluent, dynamic interaction between human and automation.
Part 3 comprises an intriguing set of chapters on four different design metaphors. In Chapter 12, Shneiderman describes an underlying dichotomy between what he labels the “science goals” and the “innovation goals” of HCAI research. The former, more “basic” in his description, has the objective of creating software that “behaves” as much like a human as possible, in terms of vision, comprehension, cognition, and action; for example, much of the focus of science goals has gone into designing robots that walk, respond emotionally, and “think like the human” (he is not really a fan). In the next four chapters, he sets out four dichotomous design metaphors, with considerable overlap between them, described in the context of both their science and their innovation goals. A common theme of these dichotomies is that he likes one side better than the other; I have highlighted his “preferred” option in each.
Chapter 13: Intelligent Agents vs Supertools. The former is software that thinks like a human; the latter is powerful automation that can complement, augment, and enhance the performance of the human user.
Chapter 14: Teammates vs Tele-bots. The goal of creating an AI teammate is to build a partner (often a robot) that behaves as much like a cooperating human as possible; the tele-bot is automation, not necessarily in classic “robot” form, that nevertheless has the mobility and dexterity to operate in remote regions. Tom Sheridan’s work comes to mind here.
Chapter 15: Assured Autonomy vs Control Centers. Here Shneiderman emphasizes the three vital properties of being comprehensible, predictable, and controllable, which he argues should underlie automation systems with which humans interact. These, rather than “assurance of autonomous capabilities”, should be the goal of design. Two recurring themes in this chapter are the design of effective control centers (an important concept introduced in Chapter 8) and audit trails, whereby, after a set of automated operations (which may have committed an error), it is easy for the user or other stakeholders to go back and understand the steps the automated agent took leading up to the error.
Chapter 16: Social Robots vs Active Appliances. Here again, Shneiderman expresses his disagreement with the goal of designing robots to be as much like humans as possible, including a “social”, human-like appearance and emotion-sensitive conversation. While he acknowledges the entertainment value of social robots, and their potential value as companions for the elderly, he emphasizes, again, that a better goal is to design automation into appliances (thermostats, ovens, decision supports, etc.) that need not, and perhaps should not, possess human-like characteristics.
A general theme emerging from Part 3 is that HCAI design should focus more on complementarity of abilities between automation and human than on trying to mimic human properties. This appears to reflect an overall preference for the “innovation goals” over the “science goals” discussed in Chapter 12, if the objective is to best serve HCAI.
I also like his use of bulleted lists of three to four items or principles within Part 3.
Part 4: Governance Structures. These chapters address how administrative structures and procedures can better assure human-centered AI, with a particular emphasis on ethical issues. The first content chapter in this section (Chapter 18), while less relevant to the applied cognitive scientist, is highly relevant to the human factors practitioner in computer system design, and it also addresses the programmer and software designer interested in developing human-usable products.
Chapter 19, on reliable systems, is the longest chapter in the book, and it does a thorough job of guiding what the software designer, and her supervisors, should do to ensure that software is human centered: through design and verification and validation (V&V) testing to avoid bias, enhance fairness, assure accountability and transparency, and achieve “explainable AI” (XAI). The critical role of UX evaluation is emphasized here, rather than simply relying on the system developer’s or researcher’s intuition about what makes software understandable. Emphasis is also placed on the user’s ability to easily “explore” the software and interact with it, in order to develop a good mental model of its capabilities and limitations. There is an interesting emphasis on sliders that let users try out options and understand the consequences of different weightings of AI factors in decision recommender systems. Lots of useful examples are presented.
In Chapter 19, he also does a great job of linking his prescriptions to what the system design industry (e.g., Google, Microsoft) actually prescribes and does.
The remaining chapters in Part 4 address progressively higher layers of organization that can assure ethical and human-centered AI, from safety cultures within an industry, to oversight from professional organizations, to, at the highest layer, government interventions and regulations. Again, there is not much here for the basic psychology researcher in HCAI, but it is important for those implementing AI policy and, for that matter, for those in organizational/industrial psychology. There is extensive coverage of the investigation of accidents and mishaps that might have been caused by software errors.
Part 5 is entitled “Where do we go from here?” and includes Shneiderman’s recommendations for particular emphases in future HCAI research. In Chapter 24 he advocates advances in HCAI that “boost citizen science”, work to stop misinformation, and endeavor to find new treatments and vaccines. The concept of boosting citizen science involves looking for opportunities for non-professionals to contribute to vast AI databases. Two separate chapters address “Assessing trustworthiness” (Chapter 25) and “Caring for older adults” (Chapter 26). Included also is advocacy of AI to support a sustainable environment.
As noted, the book is clearly and compellingly written, with great examples and many documented references to practices within the computer industry. I have only three criticisms:
First, on the one hand, the book is longer than it needs to be in order to make its vitally important points; several of them are repeated throughout the chapters. On the other hand, a little redundancy never hurts, particularly when the points are driven home in different contexts, and guidance for HCAI has so often been ignored in product design.
Second, as a human factors science professional, I would have liked to see more references to the important recent human factors research that has been published (particularly in the journal Human Factors); but I realize that his intended audience is not so much us as the professional computer science/AI community.
Finally, in some of the later chapters, Shneiderman addresses issues in depth, such as accident or mishap investigation, that have little unique relevance to AI, even if they are certainly relevant to HCAI in general.
But these minor criticisms aside, the book is well worth reading by the human factors student, educator, scholar, and professional, (a) to know that we have a highly regarded kindred spirit within the CS community, and (b) to allow us to focus in detail on certain selected chapters, which I have highlighted above. I hope that my brief synopsis will provide a guide for that focus.