<aside> 🌟 Ken Holstein is an Assistant Professor in the HCII and the lead of the CMU CoALA Lab. Professor Holstein’s research focuses broadly on human-AI collaboration and how such systems are transforming different contexts of work!

</aside>


🐨 Who are you and what do you do?

I'm Ken Holstein, and I’m an Assistant Professor in the Human-Computer Interaction Institute. I do research in human-computer interaction broadly, but I focus especially on studying the ways in which AI systems are currently transforming various forms of work. I tend to focus on highly social, creative, or care-based forms of work. This includes work in which I and other scholars anticipate that humans will always have a role, such as social work, education, and healthcare. Within these contexts, we work with the workers who are using these AI systems on the ground to understand their experiences and perspectives, and to identify ways in which these systems can be improved or, in some cases, redesigned from the ground up.


🐨 What is a project you are currently working on or have recently worked on?

We've been doing a lot of work recently to understand how AI-based decision support tools are increasingly being used in social work settings. This includes, for example, child welfare. This is an area we think is really important to study right now because we're seeing a rapidly accelerating proliferation of these sorts of tools as various government agencies are experimenting with what role AI can or should play.

These are really high-stakes, socially impactful decisions that, for example, could affect the sorts of care or resources that families receive. Some agencies have deployed AI-based systems to advise on whether a family should be investigated for child maltreatment, which is an example of an extremely impactful decision. Considering this, the use of AI in these settings in the first place is extremely contentious right now, as is the case in many other settings as well.


🐨 A theme of your lab, CoALA Lab, is using technology to bring out the best of human ability. Could you talk about what work done in your lab might look like considering this theme?

There are different phases in the projects we work on. Sometimes a project is in the phase of understanding: we want to make sure that we actually go out and understand the technologies already being used. We're not the first to think of introducing AI-based technologies in these settings. Considering this, we want to talk with workers and understand the perspectives of multiple stakeholders, including technology developers, analysts, the supervisors of those workers, and the people actually impacted by these decisions.

In a lot of our projects, we're studying the current state of the cutting edge, in the sense that maybe only a few hospitals or public sector agencies are currently experimenting with these systems. We want to study the early adoption of these cutting-edge systems because we often think there's an opportunity to redirect technology development during these early phases. If you wait too long, systems can become more entrenched and difficult to change, especially because organizational policies tend to be built around existing technologies.

Another phase of our projects is moving on to design. Oftentimes, if we're moving into a particular domain, the first question will be “what can we learn about what's already happening here?” We don't want to just create another technology without learning about what happened with the last 30 technology deployments. We then move towards increasingly designing, developing, and prototyping new systems.


🐨 You recently attended the Symposium on Augmented Intelligence at Work in Germany. What was that workshop like? What were your experiences?

There were about 50 people invited from around the world to attend the workshop; it was a group of academics and representatives from industry who are all working in the broad space of augmenting human intelligence at work. Some people came from a core machine learning background, others came from the HCI community, and there were also some people whose expertise was more on the side of policy and economics.

A lot of people shared their work and gave brief presentations. My PhD student, Anna Kawakami, who I co-advise with Haiyi Zhu, attended and presented our work. Towards the end, we came up with a few common themes that people across multiple disciplines in academia and industry were interested in. We then split into groups to discuss whichever theme interested us most. Those sessions are often called Birds of a Feather sessions, where we talk through some of our major concerns and then, hopefully, solutions to those concerns.


🐨 What were some of these themes?

πŸ‘©β€πŸ’» Theme 1: How we can design for complementary human-AI collaborations?

When I talk about bringing out the best of human ability, one of my major concerns with AI deployments in the settings we study is that I see more failure modes than success modes in practice today, and I would like to see a future where the opposite is true. An example of a failure mode is trying to automate away something that humans actually do much better than AI systems. Another is introducing AI systems that make it harder for people to do what they do best.