Q&A with Ruchika Joshi, Siegel Research Fellow at the Center for Democracy & Technology

Ruchika Joshi is a Siegel Research Fellow at the Center for Democracy & Technology’s AI Governance Lab, where she develops technically rigorous solutions for industry and government leaders to ensure AI systems are safe and trustworthy. We sat down with Joshi to learn more about her work on AI agents, the importance of involving diverse stakeholders in AI discourse and decision-making, and her take on how technology must be intentionally shaped to be a force for social good. 

Tell us about yourself and your background. What brought you to this point in your career?

I’ve always been driven by how technology can improve people’s lives. Before joining CDT—where I work with industry practitioners and policy leaders on developing and deploying AI safely—I tackled similar issues working with the UK Parliament’s Science, Innovation and Technology Committee and Harvard’s Belfer Center. And because I’m deeply passionate about driving execution, I also led strategy and operations as Chief of Staff to the CEOs of a global development consulting firm and an AI startup automating survey research.

But my entry point to AI was through civic technology deployment in majority world contexts. As a technical program manager working across health, education, and employment, I saw firsthand how data and technology can improve people’s lives and expand the opportunities available to them. One project I led aimed to apply machine learning techniques to help governments better allocate public services to 250 million people living in poverty. The idea that technology could have such a huge impact blew my mind. 

At the same time, I also learned that better technology doesn’t automatically lead to better outcomes, especially for vulnerable groups. Translating technical advances into real-world improvements requires the deliberate, values-driven work of bridging the gaps in between. The profound stakes of that work brought me here, and I feel grateful that I get to do this as a career.

What are you working on right now?

Lots of exciting projects! I’ve worked on developing best practices for auditing AI systems and directly enabling enterprises to set up robust AI governance processes and systems. Now I’m focused on how developers can responsibly build and deploy AI agents.

The term AI agent remains loosely defined, but generally refers to AI systems designed to plan and execute tasks on behalf of users, with increasing autonomy. Unlike other AI-powered systems like chatbots or recommendation engines, which support decision-making by offering suggestions or generating responses, AI agents are designed to carry those decisions out—often by interacting with external tools or websites through APIs. Early demos of agent use include operating a computer for grocery shopping, automating HR approvals, or managing legal compliance tasks. 

Yet we are seeing cases where AI agents break easily, which currently makes them too unreliable for high-stakes domains like cybersecurity, healthcare, and financial services. As companies race to build more capable agents, innovation must be matched by strong technical, regulatory, and legal safeguards so that these tools advance human well-being rather than deepen existing harms and inequities.

As the tradeoffs between risks and opportunities evolve, it’s critical not just to clarify what agentic AI is, but to focus on the questions we most urgently need to answer before these systems become deeply embedded in society.

To meet that challenge, my work right now centers on (1) developing a shared understanding of what agentic AI systems are, (2) prioritizing the key questions we need to answer about them, and (3) building solutions to ensure AI agents are developed and deployed in ways that elevate human capabilities and minimize harms.

Could you share some early findings from your work on AI agents?

First, we need to coalesce stakeholders around a shared understanding of what AI agents are in order to effectively assess their risks and governance needs. To address that, I’ve worked on bringing technical clarity to the topic, grounded in concrete developments in the product landscape.

Second, I have found that, at times, the discourse on AI agents can lack the specificity needed for action. Given the pace of advancements, I have focused on identifying concrete policy questions that industry, policymakers, and civil society can begin collaborating on in practical ways.

These include: How are developers preventing agents from being hacked or used for hacking? How much do agents know about users—and when and with whom can they share that information? What control do users have over what agents are doing? What shared technical or legal infrastructure must be built to govern agents? What strategies are needed to mitigate individual and societal risks from designing increasingly human-like agents? And, what responsibilities do developers have when agents cause harm?

Given the current policy landscape, where do you see opportunities or leverage points to address these concerns? 

Addressing these concerns starts with finding meaningful answers to the questions we’ve just discussed. Right now, researchers and public interest stakeholders are operating in an information vacuum. Companies are developing and deploying agentic systems faster than they can explain what they are, how they work, or what safeguards are in place. But unless we have more transparency on those fronts, the rest of the policy discourse runs the risk of being speculative, incomplete, or misdirected.

In addition to more transparency from AI companies, we also need sustained collaboration with civil society, academics, and policymakers to do the hard, messy, but extremely necessary work of building shared language, surfacing tradeoffs, and setting standards and accountability in this field. 

That kind of collective effort can sometimes feel daunting, but the thing I find helpful to remember is that we, as a society, know how to do hard things. 

Human history is full of moments where we’ve had to make high-stakes decisions under uncertainty. What made progress possible in those situations was not knowing the right answers from the get-go but a shared willingness to name the thorny questions early, bring in the right public interest stakeholders, and stay resilient in the face of the challenge. 

That’s the opportunity in front of us now. But it’s also a narrow one, and it won’t stay open indefinitely. Once these systems are deeply embedded in the world, correcting course is likely to become exponentially harder.

In addition to the policy landscape, your work urges us to consider some more philosophical questions about whether, when, and how we use AI systems to imitate care, empathy, concern, and other expressions that we owe each other as humans. Tell us more about that dimension of your work.

Yes, I’m very interested in what it means to make interactive, action-taking AI systems more human-like—not just technically, but also socially and emotionally. People report finding AI systems more useful when they appear human-like and are attuned to user preferences. But that may pose serious risks, especially if the affective and behavioral cues afforded to agentic systems are used to manipulate or deceive users in ways that undermine individual agency and collective well-being.

Addressing these tradeoffs is an evolving area of research. We’re grappling with issues today that were barely on the horizon a few years ago—and just as likely, things that feel unfamiliar or unsettling now may seem entirely ordinary in the near future. Which makes this a very interesting space to be in.

Thoughtful experts, including people within my teams, often hold valid, well-considered disagreements on how to approach questions on anthropomorphization of AI. But that’s precisely the point: because these technologies and the norms around them are still taking shape, we have a rare opportunity to influence them intentionally—through open dialogue, inclusive participation, and a clear-eyed focus on what strengthens our relationships with each other, not just with machines.

What kind of work would you like to do in the future? How would you like to drive this work forward? 

I love working at the intersection of cutting-edge innovation and the nuts and bolts of driving day-to-day execution. I am excited to keep doing that work of turning AI breakthroughs into products, practices, and policies attuned to real-world complexities. 

One aspect of that is finding ways to connect my experiences working across vastly different global contexts. At times, my work feels split between advancing conversations about frontier AI agents and the reality that millions of people still lack access to basic technology and digital literacy. Bridging that gap—and ensuring those most impacted by these technologies have a meaningful voice in shaping them—is central to the type of leadership I want to bring to this field. That means focusing not just on where AI can go, but on how it gets there and who gets to decide.

What is a question you would like to see other researchers investigate?

There’s some excellent research being done on the technical front on questions around model alignment, robustness, fairness, accuracy, and bias. These areas tend to attract the right focus and resources, in part due to the reputational and regulatory pressures facing AI developers and deployers. I’m confident that researchers will continue pushing the envelope and driving critical progress in these domains.

But one critical gap I keep coming back to is implementation. Without serious attention to how technical breakthroughs translate into practice, even the best ideas are likely to remain abstract and disconnected from the systems and communities they’re meant to serve. That’s where I believe more research is required: in the granular, day-to-day practicalities of translating good ideas into real-world impact.

An example of what that could look like is going beyond theoretical metrics for algorithmic fairness, accuracy, and bias to asking more applied questions: How can these metrics be adapted in contexts where demographic data is incomplete or politically sensitive? What governance structures meaningfully support responsible AI adoption in low-resource public institutions? How can participatory design methods for AI systems scale when affected communities face limited digital access or institutional distrust? And what critical nuances risk being lost across diverse operational and cultural contexts?

I’d love to see more resources directed to supporting researchers tackling such questions.

What are you reading, watching, and listening to right now that you would recommend to readers, and why?

Let me give you something completely non-AI related! I recently watched All We Imagine as Light and it might be one of the best films I’ve ever seen. Directed by Payal Kapadia, it is a beautiful meditation on the lives of three ordinarily incredible women and a fourth equally special character: the city of Mumbai. I’ve also been listening to Beethoven Blues by Jon Batiste, which reimagines Beethoven’s classical compositions through the lens of Black music. It’s playful, deeply personal, and historically rich all at once. And then I’m reading George Saunders’s A Swim in a Pond in the Rain—a master class that draws on the greats of Russian literature to explore what makes stories work.