Q&A with J. Nathan Matias, Siegel Research Fellow Alumnus and Assistant Professor at Cornell University

J. Nathan Matias on the benefits of involving the public for more comprehensive AI research

J. Nathan Matias wears many hats: assistant professor at the Cornell University Departments of Communication and Information Science, leader of the Citizens and Technology Lab (CAT Lab), co-founder and executive committee member of the Coalition for Independent Technology Research, and most recently, alum of our Siegel Research Fellowship program. During his time as a fellow, Nathan cultivated collaboration with his cohort. We sat down with Nathan to hear about what he has been working on since. 

You bring together a unique mix of disciplinary backgrounds to your work. How do the humanities and history inform your work in computing and information science fields? 

I studied the humanities as an undergraduate, but I was always really into computing. As a Guatemalan-American trying to make sense of my own history and culture, I was especially interested in studying history, literature, and the arts from the early days of colonialism. I learned how important it was for colonized people to create knowledge and sustain their rights at a time when new forms of power were sweeping over their world. It dawned on me that while studying the past was important, I could also use my skills in computing to work on issues in the present. Today, I ask similar questions about how people who are affected by technology can use the tools of knowledge to support each other and make a difference.

One of the things that colonialism did was to overlay systems of knowledge, data, and governance onto people’s lives. People who lived under these systems had little say in what those systems of information would be. Early pioneers of human rights worked to make those systems legible and reimagine the use of those tools for more democratic purposes.

In today’s era, technology systems are still being introduced into people’s lives without their input or independent oversight. At CAT Lab and Cornell, I work to create pathways for people who are affected by these systems to understand, remake, and shape technology with the tools of science. Along the way, we’re also making new, enduring discoveries about human life and technology that wouldn’t have been possible otherwise.

You have had some important work come out since your fellowship ended, some of which was catalyzed through collaboration with other fellows. Can you tell me a bit more about how these collaborations came to be? 

One moment stands out: we were at the Siegel Fellows Convening in February 2023 when news broke that Elon Musk and the company now known as X were restricting the Twitter API. As it happened, there were a number of fellows whose work relied on that kind of API access. And so several of us were able to put our heads together and set the wheels in motion to organize and support the hundreds, if not thousands, of scholars and students whose work was negatively impacted by those decisions. Being together at the right moment with the right group of people actually helped transform our capacity to support scholars throughout the year.

At that same gathering, almost exactly one year ago, I was in the audience while Ranjit Singh and Jake Metcalf were presenting to our cohort of fellows about their work on New York Local Law 144, which is the world’s first law requiring transparency for hiring algorithms. These are the algorithms that screen people when they submit their resumes and give them virtual interviews, and they are used widely in human resources around the country and around the world.

During their talk, Ranjit and Jake said that it would be great to know how (and if) companies were complying with this law, but that would require hundreds of people to actually apply for jobs and track what employers were doing to comply.

One of the great things about being in a room with other creative people is that you often learn that things that seem impossible can be possible. I was looking for something that my students in a Communication and Technology course could do to get involved in research related to AI policy. We ended up working together to organize about 155 students to look up employers, see how they were complying with this law, and report back what they found. This is research that we’ve now submitted for peer review and that was mentioned in The Wall Street Journal a few weeks ago.

Your lab is dedicated to public involvement in science, as evidenced by getting students involved in this project. Can you tell me a bit more about your methodology and why engaging the public is so central to the lab’s ethos?

One of our core beliefs at CAT Lab is that we can make a difference in science and in people’s lives by involving the public in research. It’s good for students, who care about learning things that connect to their real-world experiences. It’s good for society when students do things that go beyond the classroom and make a difference in the wider world.

A lot of the questions we have about AI policy and the impact of technology on society have what scientists call ‘heterogeneous effects,’ which is a fancy term for ‘people experience them differently.’ One person might experience algorithmic discrimination that another person doesn’t notice, because they’re being treated differently by the algorithm. One person might experience a problem with technology and mental health that another person doesn’t, because the underlying system is responding to them in different ways. That can make it really difficult for scientists working alone to be confident that what we’re observing actually describes what’s happening, especially without missing the experiences of marginalized groups.

Specifically on the issue of hiring algorithms, students are also some of the people most affected. When students apply for internships or their first jobs after college, they’re very often encountering hiring algorithms rather than actual humans. It can be tremendously demoralizing and frustrating for students who will apply for dozens of jobs. By turning the lens the other way to scrutinize these companies, my students got a chance to learn about employers, see how this law was playing out, and understand how they might be affected by the algorithms that companies are increasingly using to hire them. Finally, society benefits from this first systematic look at how employers were implementing the law. We hope our research will be useful to regulators, employers, algorithm auditors, and ultimately job seekers whose livelihoods depend on the hiring process.

What do you think the broader impact of this first-of-its-kind algorithmic bias audit law (New York Local Law 144) might be beyond NYC? 

This law has created a market for third-party algorithm audits. A couple of years ago, algorithm audits were a hypothetical idea that a few scientists and a couple of hopeful consultants had tried to pilot. Now there’s actually a market where multiple companies are providing this as a service, which is valuable.

Unfortunately, the way the law is written, companies have a tremendous amount of discretion over whether the law applies to them, and whether to publish the bias audits when they receive them. The result is that individual job seekers can’t really expect to be able to make the kind of informed decision that the law is supposed to give them. Even researchers trying to understand algorithmic bias would, if they relied on the evidence being published by employers, almost certainly get a very biased view of how these algorithms are performing. Why? We think it’s likely that only the most favorable bias audits are being published by companies.

So overall, New York Local Law 144 is a step in the right direction, but it needs some tweaks, some of which are straightforward. The first is to reduce the amount of discretion that employers have, so that employers are more consistently required to conduct and publish bias audits.

Second, employers should clearly make this information visible to job seekers. If you are a job seeker and you see an audit that says that people with your racial or gender identity are systematically ranked lower than others by the algorithm, that only gives you a limited amount of information. If you don’t know how the human part of the hiring process treats people like you, it’s still impossible to tell whether it makes sense to opt out of applying. So laws about transparency that give people a choice should really give people enough information so they can make an informed choice.

In a recent article for Nature, you argued that we also need new science to understand the socio-technical behavior of AI systems. How does this relate to your work on AI policy?

One of the basic challenges for understanding and regulating AI is that these systems observe and respond to how humans behave. For example, hiring algorithms are looking at how humans have made past hiring decisions and adapting to them. The same is true of recommender systems like YouTube’s and Facebook’s algorithms. All of these systems are designed to change what they do in response to what humans do, and that makes them quite unpredictable. So we can’t really say ‘this particular algorithm is safer’ or ‘this other algorithm is more trustworthy,’ because we can’t necessarily predict what it’s going to do tomorrow.

We’re still at this point where we can’t answer those questions. And yet, these systems are shaping the lives of billions of people. If society is going to incorporate adaptive algorithms into even more of what we do, we need to know, as a baseline, if a given system is safer or better for people.

Fortunately, this is a problem humanity has faced before. There was a point in the history of auto safety where people said, ‘Well, you can’t necessarily know how a driver is going to behave. So how can you really make safer cars?’ And we have made cars safer. Similarly, there was a point in time in the study of the environment where people said, ‘A problem like pollution is the result of many interacting factors that are responding to each other. How can we possibly be good caretakers of our planet?’ And the answer was to do good science, make observations, engage the public, and build theories and models that give greater visibility into how these complex systems work. 

I am currently collaborating with communities to build the kind of knowledge that will help us make reliable predictions about what an AI system is going to do next. In the long term, we’re working on ways to make reliable claims about a system’s safety or its accuracy. I’m hopeful. It’s going to be a lot of work, and we’ll get some things wrong along the way. But I think that we’ll get there eventually.

The argument that we need access to better knowledge of AI systems in order to do good science points us to one of your other many hats, which is the co-founder and executive committee member of the Coalition for Independent Technology Research. Can you tell me a little more about the coalition, how it came to be, and what work it is currently engaged in? 

The Coalition for Independent Technology Research works to support and defend the right to do research on the impact of technology on society. It includes journalists, scientists, academics, civil society, and community scientists. We created this coalition a few years ago, after watching multiple cases where tech companies made accusations against journalists and scientists about data, privacy, and ethics as an excuse to try to halt research that they found inconvenient. This pattern happened often enough that a group of us decided to work on a systemic response. We realized that the more influential and consequential research becomes (e.g., election research, AI accountability, the impact of tech platforms on mental health), the more it will come under threat.

We realized that we needed to create a new organization that would bring people together to collaborate, build the ecosystem of independent research, and defend it in the court of public opinion, and sometimes in the judicial courts as well, when it came under attack. We were either very lucky or very prescient, depending on how you think of it, that we had done that groundwork and had the beginnings of a legal entity. By the spring of 2023, when tech platforms really started to crack down on research and data access, we also started to see more attempts by political actors in the US to restrict research access to data. So we ended up spending much of 2023 organizing campaigns, a lawsuit, and amicus briefs to stand up for the value of research and support people when they came under attack.

I’m thrilled to share that we just hired Brandi Geurkink (Siegel Fellow 2022-2023) to be our first executive director. Until now, the coalition has been a group of volunteers united by a common cause. It became very clear that when negotiating with a large corporation, educating policymakers, or organizing crisis response for researchers under threat, it’s not enough to rely on the goodwill of a group of busy scientists and journalists.

I’m also excited that the coalition is growing and now includes hundreds of members. You can expect to hear more from us as Brandi gets started.

We like to end all of our Fellow Spotlight interviews with the same question: What are you reading/watching/listening to right now that you would recommend to readers, and why?

I do most of my deep thinking and reading via audiobooks on long bicycle rides. In the last year I have especially enjoyed reading Braiding Sweetgrass by Robin Wall Kimmerer. This wonderful book has helped me approach the study of complex systems with humility and a deep engagement with lived experience in connection to culture and history. It’s a beautiful reflection on how to think like a scientist without setting aside the other things that are important to your life. Unsurprisingly, I am also reading Naomi Oreskes and Eric Conway’s book, Merchants of Doubt, and thinking about the playbooks that corporations sometimes use in order to discredit science. 

I am also reading Sick Building Syndrome and the Problem of Uncertainty by Michelle Murphy. I’m a person with a respiratory disability, and it’s tough sometimes to help institutions accommodate my needs or help other people accept my experience. Community science and organizing on safety and accessibility issues are a matter of survival for me, and a deep inspiration for my research on technology and AI. Just like respiratory health advocates, people often worry something is wrong with technology but don’t yet know how to measure it, name it, or talk about it. Murphy’s book is helping me think about how people find the language to talk about, study, and organize around issues that they don’t even know other people are experiencing. I expect to see this challenge of shared language become central to the science and policy of AI in the coming years.