Episode 03

Ben Shneiderman is a professor in the Department of Computer Science at the University of Maryland, where he is also the founding director of the Human-Computer Interaction Laboratory and a member of the Institute for Advanced Computer Studies as well as author or co-author of numerous influential books. On this episode of the PFF Podcast, Ben talks with Jeffrey about human-computer interaction, the balance between human and machine control and building machines that empower people - enhance, augment and amplify human abilities, not replace or mimic them.

Transcript


Ryanne Harms

Welcome to the Piaggio Fast Forward Podcast. Join the conversation by subscribing to the PFF Podcast, at podcast.piaggiofastforward.com.

Jeffrey Schnapp

Welcome to Mobility +, the PFF Podcast. I'm your host Jeffrey Schnapp, Chief Visionary Officer at Piaggio Fast Forward. It's an honor to introduce today's guest, Ben Shneiderman, who is a distinguished university professor in the Department of Computer Science at the University of Maryland, where he is also the founding director of the Human-Computer Interaction Laboratory and a member of the Institute for Advanced Computer Studies. Ben's pioneering contributions to human-computer interaction and information visualization include clickable highlighted web links, touch screen keyboards, dynamic query sliders, the development of tree maps, knowledge network visualizations for NodeXL, and temporal event sequence analysis for electronic health records.

Ben is the author or co-author of numerous influential books, including Designing The User Interface: Strategies For Effective Human-Computer Interaction, first published in 1987, but now in its sixth edition; Leonardo's Laptop: Human Needs And The New Computing Technologies, from 2002, which won the IEEE Book Award for distinguished literary contribution; and The New ABCs Of Research: Achieving Breakthrough Collaborations, published by Oxford in early 2016.

He is currently at work on a book that argues for a human-centered approach to artificial intelligence. Welcome, Ben Shneiderman, to Mobility +, the PFF Podcast. It's great to have you with us.

Ben Shneiderman

Thanks, Jeffrey, for this invitation to speak on your podcast.

Jeffrey Schnapp

To start our conversation, I wanted to use as a springboard the article that appeared on 21 May in the New York Times, by John Markoff, entitled "A Case For Cooperation Between Machines And Humans." And in that article, he starts by describing your intervention at the Assured Autonomy Conference, an industry conference in Phoenix, Arizona, that took place in February 2019, in which you critiqued the title of that conference and tried to bring a different focus to the whole conversation about the future of mobility systems, and, in particular, self-driving cars. I'm wondering if you could tell us a little bit about not only the kind of argument that you tried to make to the dyed-in-the-wool advocates of Level 5 autonomy, but also more broadly, what are the powers and limitations of the kind of model of autonomy that has prevailed in conversations about the future of the automobile industry?

Ben Shneiderman

That industry conference was actually February 2020, so it was just a few weeks ago, just before everything shut down. That was my last trip. So it was memorable, and I made that trip because I thought it was important to appear at this event run by the Computing Research Association, which is a strong group of industry and university people who write white papers about future directions. And I was attracted to the topic, assured autonomy, but I was also troubled by it. So I came from a position where I was ready to challenge what I thought I'd find as a resistant audience. But I'm pleased to say, I found a warm reception for this issue of changing the perception. So the issue of autonomy and the phrase "autonomous machines" have been around for a long time. And the notion of levels of autonomy goes back to about 1980, when MIT Professor Tom Sheridan offered a 10-level map of autonomy that went from full human control to full machine autonomy.

Ben Shneiderman

The belief was that as you increased the amount of machine autonomy, you had to decrease the level of human control. And that's been the pervasive notion that I took in, and in the first edition of my book, Designing The User Interface, in 1987, I had a section describing this issue of the balance between human and machine control. And I brought in that notion that you had to choose some point along that spectrum, or along those levels. But in later years, I became more troubled by that notion. And so in the sixth edition of that book, in 2016, the section is now titled Ensuring Human Control While Increasing The Level Of Automation. And that was a very different take, which at first seems like a puzzle, as it did to me as well, and to my readers. The notion that you could have high levels of human control and high levels of automation was the breakthrough thinking.

So I went from a one-dimensional model, which said, "You increase automation, you therefore decrease human control," to a two-dimensional model that said there were two independent axes, and a two-dimensional space of design. This vastly opened up the possibilities for designers. And so if we choose examples like the digital camera on your phone, there's a high degree of automation, for features such as setting the lighting and the focus, reducing jitter, etc. Lots of AI and high levels of automation. But there are also high levels of human control. You can choose where to point the camera, when to zoom, and many other features, including all kinds of filters that you can apply before or after you take the picture. And that rich level of human control is what I think people want for their consumer products and what they insist on for consequential or life-critical applications, like the self-driving car that you're describing.
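Ben's two-axis framing can be sketched in code: rather than a single autonomy dial, each feature pairs an automated default with a human override, so automation and human control vary independently. A minimal, hypothetical illustration (all names here are invented for the sketch, not taken from any real camera API):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Setting:
    """One feature in the two-dimensional design space: high automation
    (a computed default) AND high human control (an optional override)."""
    name: str
    automated_value: float                  # what the automation proposes
    user_override: Optional[float] = None   # set whenever the human takes control

    @property
    def effective(self) -> float:
        # The human's explicit choice always wins; automation fills in otherwise.
        return self.user_override if self.user_override is not None else self.automated_value

# Phone-camera flavor: focus is automated by default...
focus = Setting("focus", automated_value=0.62)
print(focus.effective)      # 0.62 -- automation in charge
focus.user_override = 0.8   # ...until the user taps to refocus
print(focus.effective)      # 0.8 -- control retained without losing automation
```

The point of the sketch is that neither axis is traded against the other: the automated value is always computed, and the human override is always available.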

So that's the platform I came with: the idea that levels of autonomy were a pernicious one. That old notion was repeated by the Society of Automotive Engineers and their five levels of autonomy for self-driving cars. And so that misguided notion continues to propagate and influence designers. And in the extreme case, you get highly autonomous systems like the Boeing 737 MAX, which was so autonomous that the pilots didn't even know that the MCAS system was in place and took over. That kind of irresponsible design leads to deadly outcomes. So my fundamental pitch is about human responsibility for the use of technology, about thinking through the design from the perspective of responsibility, of liability, of accountability for damage. I'm not talking about the inconsequential cases of recommender systems and lightweight applications, where a mistake or a wrong recommendation may even be fun. But when you think about medical and legal and financial and military and other applications that are consequential or even life-critical, then we need to insist upon the degree of human control, and therefore responsibility.

Jeffrey Schnapp

It will, no doubt, please you that the motto of Piaggio Fast Forward is "Autonomy for Humans," which I think reflects many of the values that you are articulating in your response. So there are three points that you emphasize in particular in this insistence upon cooperation, rather than a techno-centric model of what autonomy means. The first is control. The second is the potential for ethical dilemmas that comes with the relinquishing of control. And the third point that I wanted to hear more about, in terms of your views, is the role of human creativity as complementing the capabilities of technologies and intelligent devices. Could you say something about that?

Ben Shneiderman

Terrific. Well, you've gone to three very, very important topics to me. So the first one we'll take on is the idea of cooperation with machines. Now, that's the headline on the article, and that's what Markoff took away. But actually I'm a bit more radical than that, because I oppose the idea and the language of seeing machines as partners, teammates, or collaborators. I think that leads us down the wrong path. Machines are not people. People are not machines. That's the fundamental notion. You cannot have a collaborator in a computer because there's no responsibility for failure. So they're different. And if you think of computers as teammates or collaborators, then, as designers, you'll be misled into designs that are human-like in their nature. And it's disturbing to me that, after 30 years, this notion persists that human-human relationships are a good model for the design of human-robot interaction. It's startling, dangerous at times, and certainly suboptimal.

And so you see designers making robots that respond like humans would, when robots are very different. They have powerful algorithms, rich databases, remarkable sensors, powerful effectors, big display screens. So why would you want the machine to be like another human being? And yet that's the persistent design. So breaking from that notion is another one of the ways I would like to see designers change their focus, and move towards a notion where computers are more like appliances. The computers and robots of the future are going to be more like your dishwasher and washing machine and other technologies that are steered by you, that are operated by you. So when you drive your car or take a photo or use any technology, you are the operator. You are responsible. And the design should be optimized to enable you to know what's happening and be able to control it.

So comprehensible, predictable, and controllable are the design specs that I argue for. And the Apple Human Interface Guidelines for apps say that people, not apps, are in control. So the 3 million apps on the iPhone and Android platforms are well-aligned with the notion that I have. And the bizarre notion I see among some researchers, who believe that computers will be more like people in the future, is dangerous and misleading.

Jeffrey Schnapp

That's a great answer. And you've touched on a theme on which I've sometimes played the antagonist myself with the robotics community, which is the predisposition to think of the robot in humanoid terms. And it's a predisposition that has characterized much of the history of robotics, going back to antiquity, really: Hero of Alexandria, Al-Jazari, the creators of mechanical Turks in the 19th century and before. I'm curious: do you think that robots ought to just be robots, that we should focus on developing what they do brilliantly, and get away from this notion of trying to create that collaborator that, in a sense, is getting closer and closer to the things that humans are the most capable of doing well and better and more creatively than any machine, however intelligent?

Ben Shneiderman

Yes. I think that's the right idea. You're right that, historically, going back, Jaquet-Droz, the Swiss watchmaker, in the 1770s created three humanoid robots that played musical instruments, wrote poetry and drew sketches. And they became merely museum pieces for the next century. And so, too, will the humanoid robots of today. Now, my other fantasy, a historical one, is that if the current designers of robots went back to 1880 and looked at people washing their clothes with a washboard and a bar of soap at the edge of a stream, they would have built a robot which had hands and could take a bar of soap, and then could hang the clothes on the line to dry. And when you're limited that way, you miss the chance to build a washing machine and dryer that is far more effective than embodying the way people do it.

And so if you want to liberate your thinking, get out of the notion that machines are like humans. And we see this theme repeatedly. I'm inspired by, and your audience may know, the Lewis Mumford book from 1934, Technics and Civilization, which had a chapter that was a gift to me, called The Obstacle Of Animism. And it recounts how, throughout history, the early designs for new technologies are based on human or animal forms. And only when the designers, to use his awkward term, dissociate the need from the human form do they get to real progress. So planes don't flap their wings like birds. They have a very different wing structure, and they have propulsion that's very different. And so you get planes that fly higher and faster than the birds.

And that's my goal. I'm not interested in building a machine that does what a human does. I want a machine that enables a person to be a thousand times more powerful than they've been in the past. That's what technologies have always done. Steamships and airplanes, the web and email, navigation tools and search engines, all of those amplify your abilities a thousand-fold. And that's what I'm after. I'm after the big win. So I don't want cooperation. I want a powerful tool, an appliance. I want something that works the way I expect it to, that allows me to apply my creative potential, my relationships with human beings, and my distinctly human approach in an ever more powerful way.

Now, we have to remember, this is not always good, because there are evil humans. There are criminals and terrorists and hate groups, and they will take these technologies and put them to work for their purposes. So we do need to be vigilant, and try our best to prevent these misappropriations of powerful technologies. But I'm all about empowering people, enhancing, augmenting and amplifying human abilities, not replacing or mimicking them.

Jeffrey Schnapp

That aligns really well with our approach to mobility as well, where we want to encourage, promote and support people moving more, and not replace human mobility. Not support a vision of the future of cities, of the future of towns, where we're all increasingly passengers, or we sit on our sofas, waiting for a burrito to be fired by some kind of device through the window. We want people to walk. We want people to interact and engage with the places they live, they work, they play. And the need for these appliances, as you described them, to be infused with values, with a vision of the good life, seems, to me, essential to this model of cooperation that you're describing.

Ben Shneiderman

Bravo. I'm with you on that. And this may be just a further note: there's the popular notion of elder care robots, another theme that people have raised, where they think that social robots that mimic human form will be necessary. And I just don't agree with that notion, either. I think we're going to find that what elders want is a sense of their independence and their own self-efficacy. And so those are the technologies that I think will become the dominant ones. And so it's important to shift the language away from intelligent agents, cognitive actors, simulated teammates, colleagues, partners, and autonomous and humanoid robots. All those things are useful up to a point, but the commercially-successful things are tool-like devices that extend and amplify human abilities. They're steerable; they're prosthetics, if you wish. And they rely on the design that's usually called supervisory control, and lead us down the road towards mechanoid rather than humanoid robots. But I'd rather use the term appliance.

Jeffrey Schnapp

The counter-argument that I've heard, in response to my own polemics on this topic, is that the humanoid is useful as a way of pushing the boundaries of the field, and promoting challenges that are perhaps impossible challenges to succeed at, but that advance the state of knowledge in the field. Do you find that argument persuasive?

Ben Shneiderman

Fair enough. I frame that as two models of AI research. One is the emulation or simulation model, which seeks to do what a human does and simulate it; the other is the application model, which puts AI to work on real projects. The application side often makes the mistake of taking the emulation model and shifting it over, and only when they learn to do something differently does it succeed. There's a long history of failures of humanoid robots that are quickly forgotten by researchers who don't want to face, for example, the failure of the Postal Buddy. The Postal Service spent more than a billion dollars to build 183 of these humanoid-like Postal Buddies. The goal was to build 10,000 of them, but consumers rejected them. The designs were inappropriate: what people want is to go up to their bank machine and get $60. They're not interested in the chit-chat or discussion or having a human character.

And so you find that that's what happens. The main claim to success of that model, and it's a worthy one to notice, is things like Alexa and Siri, Cortana and Google Home, where voice interaction has turned out to be quite acceptable, but that's become just a new interface. So instead of typing on your keyboard or touching your screen, you do your web searches by making a voice request. I'm impressed by how rapidly the quality of speech recognition has matured and improved over time. So we have success stories like Alexa, and that model is worth giving credit for, but we have all the other failures of humanoid robots. And yet that theme persists, and the companies continue to turn out social robots, which are available on Amazon under toys and games, because I think that's all that they're going to be.

Robots are fun. People enjoy building them, playing with them, using them, but I know of no success stories of humanoid robots, and lots of failures. Taking that into account, maybe you could consider as successes crash test dummies and medical mannequins, which are meant to be humanoid in form. And you could say that entertainment applications, like Disney's Audio-Animatronics and other puppet-like interfaces, are another success story. So those are okay, but the broad commercial success goes to the dishwasher and the washing machine.

Jeffrey Schnapp

Or, in the appliance field, the robot vacuum cleaner.

Ben Shneiderman

It's an appliance. It's not humanoid at all. There's no human...

Jeffrey Schnapp

It's about as un-humanoid as you can get.

Ben Shneiderman

So Rodney Brooks found a good way to go forward, and that is a successful product, but I think it supports my model that a human-like character doesn't work. And a mechanoid robot would be the best, I would grant, but I would rather call it an appliance.

Jeffrey Schnapp

Since you mentioned, specifically, this recurring effort to develop companion robots, what would be technologies that would support appliances that would enhance the lives of the elderly that strike you as more valuable, more promising, more interesting paths for future development?

Ben Shneiderman

Yes, there are real opportunities to improve the lives of elders through advanced technologies. And a lot of those things are already at work. Simple notions like phone calling and Zoom chats, which allow elders to be in touch with family members and friends, are very liberating and very strongly attractive for those populations. And good powered wheelchairs for those who are severely disabled, with simple movement controls by way of joysticks, or even, for quadriplegics, puff tubes they can use to navigate around. Going upstairs is a usual example, too, and there are lots of tools: little elevators, or railings with a chair that will bring you up to the second floor of your house, if that's what you need. So there are lots of opportunities to do that.

I think there are opportunities for improvement in things like kitchen activities, and having more powerful appliances that make it easier for an elder to get a cup of tea, for example. So I think we can see innovation in that space, and also for users with disabilities. The University of Maryland is a world leader with the Trace Center. Gregg Vanderheiden and Jonathan Lazar lead this group, which has been doing terrific work, and a lot of the accessibility technologies on your Apple and Windows machines come from that work. And there's a lot more to be done.

So I see lots of opportunities, but I don't think that mobile humanoid robots are the future. They've been tried at various nursing homes, including animal-like robots, like PARO the baby seal and AIBO the dog, and there's been modest success in those directions. But all of the 50 studies I've seen, and the people I've talked to who do this work, all are short-term studies on the order of a few days or a week. The novelty of a robot attracts attention, and so the elders really like that, because it generates discussion among themselves. But I know of no studies of long-term use of such robots, even the animal-like ones. And I think there are opportunities for other ways of thinking. Shannon Vallor, an ethicist and philosopher, in her book Technology And The Virtues, has a wonderful chapter about the way older adults could be served by technologies, and the ways of increasing compassion and caring from fellow human beings. And that's what I think we're going to be able to facilitate.

Jeffrey Schnapp

I think the key to a lot of those kinds of really promising solutions will be really good human-robot interaction models, and intuitive kinds of interfaces. And one of your most important contributions to this general field was Designing The User Interface: Strategies For Effective Human-Computer Interaction, a pioneering work that goes all the way back to the mid-Eighties. The eight golden rules of interface design that you propose there strike me as very good rules for robot interaction, too. Just for the sake of our listeners, I'll quickly list them, and I'd love to hear your thoughts on how they could be carried over to the realm of interaction design. So: strive for consistency, enable frequent users to use shortcuts, offer informative feedback, design dialogue to yield closure, offer simple error handling, permit easy reversal of actions, support internal locus of control, and reduce short-term memory load. Those eight principles really do seem to have embedded in them a set of guiding principles for interaction design, as well.

Ben Shneiderman

Great. Yes, you're right that the title and the focus was user interface design, and that language was of the Eighties; the evolution to user interaction design or user experience design is where we are now. So that's what most people talk about these days, user experience design. And the eight golden rules have had far more widespread impact and acceptance than I ever expected. So I'm delighted by that. The eight rules you cited were actually early versions of those. So, for example, you cited the rule about making it easy to correct for errors, but in later versions, later editions, that rule became prevent user errors. And that became the goal. Even in simple cases, like typing in a date, MMDDYY was the old way, and that was fraught with error problems, and therefore the need to design a lot of software and a lot of error detection, and a lot of messages that were really difficult. And the goal would be to prevent it.

So modern interfaces, like airline reservations, usually put up a calendar and you click on the month and you click on the day, and therefore you can never make a data entry error. And that idea of preventing errors became the driving force. So I think that's the way we want to look at the future of these designs and we want to empower people, and make it easy for them to learn the interface and then to use it.
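The shift from error detection to error prevention that Ben describes can be sketched in code. Free-text MMDDYY entry must parse input and then handle failures, while a calendar-picker-style API only ever constructs valid dates, so the error path disappears. This is a hypothetical illustration of the principle, not code from the book:

```python
from datetime import date

def parse_typed_date(text: str) -> date:
    """Old style: free-text MMDDYY entry. Every malformed or impossible
    string needs detection plus an error message the user must decode."""
    mm, dd, yy = int(text[0:2]), int(text[2:4]), int(text[4:6])
    return date(2000 + yy, mm, dd)   # raises ValueError on an impossible date

def picked_date(year: int, month: int, day: int) -> date:
    """New style: a calendar widget only presents valid (year, month, day)
    choices, so a data-entry error cannot be constructed in the first place."""
    return date(year, month, day)

# Typed entry forces the design to spend effort on an error path:
try:
    parse_typed_date("023120")       # "Feb 31, 2020" -- impossible
except ValueError:
    print("error message needed")    # the cost of error detection
# A picker simply never offers Feb 31 as a clickable day:
print(picked_date(2020, 2, 29))      # 2020-02-29, a valid leap day
```

The design work moves out of the error-handling code and into the widget, which constrains what the user can express.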

You used the term intuitive. Intuitive and user-friendly are common phrases, and they're fine to use when talking to family members, but in professional circles we'd rather talk about the learning time, the error rates, the speed of performance on benchmark tasks. And then we will go for user satisfaction issues as a generic notion. But as a science, as a measurement-based and also qualitative science, we tend to look for more specific issues, so that we can compare two systems on the rate of errors that people will make, whereas it's hard to compare the intuitiveness of system A versus system B.

Jeffrey Schnapp

Indeed. I think what counts as intuitive can vary from practice to practice, and the language you're suggesting is much more precise. And that's crucial in these kinds of pursuits. In terms of this notion that the appliances we build should extend and amplify human capabilities, I wonder whether you've thought about that amplification process in relation to creative practice itself. I couldn't help but notice, with interest and curiosity, the many kinds of creative projects that you've undertaken in your own work. And you've assumed a leadership role in fostering conversations between the creative community and the technology and innovation community. The notion of appliances that enhance and augment creativity: where does that fit in the larger picture, as you see it?

Ben Shneiderman

Yeah. Right. Great. I'm a great supporter of creativity in the arts and design, as well as creativity in science and engineering. I'm a great believer in the school of thinking that says that there's creativity in all those domains, and that there are underlying shared principles of creativity that are useful to understand. Csikszentmihalyi's principles are one set, and his writings have been very influential to me. Early on, I led a National Science Foundation project, which held a workshop and produced a series of papers under the notion of creativity support tools. So that line of thinking suggests that technology will support human creativity, whether it's the arts or sciences, music composition or dance. All those places are ways in which technologies have always supported and extended human creativity. Just as with cameras: photography was a great extension of human creativity, not a replacement. And it opened up new vistas and new possibilities.

And there's a current large community of new media artists who are using the technologies in novel and creative ways. The Leonardo Journal is a great place to learn about the many projects that are going on in this community. Two years ago, I led a National Academy of Sciences symposium with more than 200 people in Washington that brought together people from the arts and sciences, design and engineering, to discuss creativity and collaboration. And those bridging efforts that bring together people from different worlds of work are very powerful. And I'm a great believer in the Leonardo school of thinking, which says that being a good artist makes you a better scientist. Being a good scientist makes you a better artist.

Jeffrey Schnapp

One question I had for you in this regard is how did design become a key focus of your own thinking? Typically in the field of computer science, design was thought of as somehow external to the technical nuts and bolts of computational thinking. But in your work, perhaps because of its focus on interface design, design becomes a key feature, really, pretty remarkably early on, going back into the Eighties. So I'm curious about that centrality of design as a concern.

Ben Shneiderman

Yes, very good. Yes, I was trained as a scientist, studied physics, and so science was my home base. I'm a computer scientist, but my book is called Designing The User Interface, so I certainly came early on to recognize the powers of design. And then, it turns out, I've been elected to the National Academy of Engineering. So I've seen these different perspectives. And while engineering often includes design aspects, and in computer science we think of chip design and algorithm design and interface design, the close integration with design thinking and the design world has emerged in the last decade in very powerful ways. And I've become a great supporter of that. My 2016 book, The New ABCs Of Research, carries the subtitle Achieving Breakthrough Collaborations. There are several ABCs: achieving breakthrough collaborations, but also applied and basic combined. And the result is very powerful: if you take a person with a foundational research approach, and you partner them with someone who has a real problem, they are more likely to produce dramatic results.

So I call this POSH research: collaborations between problem owners and solution holders. Problem owners, solution holders. POSH. And I think bringing design into this is a great gift, and a great opportunity. The New ABCs Of Research discusses, equally, science, engineering, and design. And while the National Academy of Sciences goes back to 1863 and the National Academy of Engineering to 1964, I propose there be a National Academy of Design by 2065. It may take that long to establish more common ways of judging the quality of research in design, and to make design respected by these other disciplines. But I think that's the way of the future. Design is extremely powerful, a fresh and important way of thinking that we need to teach our students. We need to teach them science, engineering, and design. They need to learn the methods of each of those three disciplines.

Jeffrey Schnapp

That vision, which I think is really a highly powerful vision, that, again, I share, implies a reorganization of some of the ways that we train future practitioners in the various fields that are involved. I know you're at present an emeritus professor. Are you still engaged in the activity of teaching or of shaping educational policy in some of your fields of endeavor?

Ben Shneiderman

Yes, I'm writing and trying to influence these things. But a central notion for me is the idea of team projects within a course. And the team project should have a real partner, a client who is outside the classroom, and the goal should be to design and build something that survives beyond the semester. Often it's an inspirational prototype, but it can go on to be important. So yes, I believe design methods are essential to our teaching.

Jeffrey Schnapp

Moving towards a close here, Ben, I'd be interested in knowing if there are any current or future projects that you have underway that you'd like to share with our listeners.

Ben Shneiderman

Well, I'm currently working on a serious vision of human-centered AI. Artificial intelligence provides powerful technologies, but those will only become widely applicable and used if the user experience and user interface are done right. And so the notion of human-centered AI is becoming more widely accepted, and that's the language I'm using. The first two papers are already published, and the New York Times article that you started with cites and links to the first of those, which in very short order has gotten more than 4,800 downloads, a large number for an academic paper. The second paper has just appeared, and these will become a book. It'll take me a little while to do that.

So I'm actively involved in trying to change the way of thinking. It seems to me that opening up from the one-dimensional model of autonomy, focusing on human autonomy equally with machine autonomy, and promoting a fresh vision that opens up new possibilities for design, ones that are more realistic and have a greater chance of commercial success, is what's important. And I should say, also, not only commercial success, but designs that will be safer. I use the phrase reliable, safe, and trustworthy. We want to avoid those Boeing 737 MAX problems, and we don't want to make technologies that aren't reliable, safe, and trustworthy.

Jeffrey Schnapp

Well, we look forward to that book and, Ben Shneiderman, thank you so much for joining us on Mobility +, the PFF Podcast.

Ben Shneiderman

Thank you, Jeffrey.

Ryanne Harms

Thank you for listening to the Piaggio Fast Forward Podcast, and come back soon for further lively conversations about walking, light mobility, robots, and the design of neighborhoods, cities and towns. The PFF Podcast is hosted by Jeffrey Schnapp. Sound engineering by Robert Allen. Narration by Ryanne Harms. Produced by Elizabeth Murphy. Web design by Jerry Ding. Intro music is Funkorama by Kevin MacLeod. End music is Your Call by Kevin MacLeod. Special thanks to Tory Leeming. To learn more about PFF and gita, please visit piaggiofastforward.com.
