Megan Rivera was a 2024-2025 Siegel Research Fellow and is a fellow at the Washington Center for Equitable Growth, where she studies the impact of AI on the labor market and economy, among other topics. We sat down with Megan to learn more about her recent policy brief, how she found her fit working in legislative politics, the potential for integrating worker voice into decision-making around AI, and how the Siegel Research Fellowship helped expand her horizons.
Tell us a little bit about your background. How did it prepare you for your work as a Siegel Research Fellow at the Washington Center for Equitable Growth?
Originally, I thought I might want to go into politics. I grew up in Iowa, and I followed the Iowa caucuses closely with my family. After college, I did a little campaigning. I had a great mentor who helped steer me toward a job as a legislative aide for a freshman delegate in the Texas State House. I realized I really loved legislative politics. I loved the energy, the enthusiasm, the yelling—all of it.
That experience led me to pursue a graduate degree in public policy, where I focused on the economics of inequality and reproductive autonomy. After graduation, it led to a job as a policy and outreach advisor in Congress on the House Select Committee on Economic Disparity and Fairness in Growth. As part of that work, I organized national field hearings, which introduced me to the discussion around automation and labor displacement.
When I first joined Equitable Growth, I worked on algorithmic management and automated surveillance policy work, and that was the beginning of how I got involved with the Siegel Fellowship. The Siegel Research Fellowship allowed me to really dig into the policy space of AI in the economy, the topic of my recent policy brief.
In what ways has the Siegel Research Fellowship been helpful as you’ve studied the impact of AI on the nation’s labor market and economy?
I understand the economy and policy, but I don’t tend to work on the fine details of the technology. Being a part of the Siegel Research Fellowship has given me a much fuller understanding of the different ways that AI, automation, and technology are impacting the economy. I’m writing for policymakers who are not usually experts on technology. I’ve benefitted from all of the presentations of Siegel Research Fellows who are experts on technology and technology policy.
For example, [Siegel Research Fellow] Pegah [Moradi of Cornell Tech] presented her work on automated checkout machines, describing how self-checkout has changed cashiers’ jobs. Pegah talked about how cashiers are in charge of a handful of automated tills, and they only interact with people when something has gone wrong. I worked as a cashier when I was in college, and Pegah’s presentation got me thinking about how automation had changed the experience of that job for the worse. I used to have friendly conversations with customers to pass the time. If I were getting yelled at for my entire shift because the machine wasn’t operating the way it was supposed to, I would lose my mind.
That’s also an example that you included in your recent policy brief entitled “What impact is artificial intelligence having on the U.S. labor market and the nation’s economy?” Tell us more about that paper, which you describe as a “compass, rather than a definitive guide.”
The primary policy audience for the brief is Members of Congress and their staff. The brief is designed to inform them about what we’re seeing in the various sources of data when it comes to AI’s impact on the labor market and the economy. It’s very much a snapshot in time, and there are differences in geographical impact. But policymaking works best when everyone is able to operate with a shared base level of understanding. I have a lot of meetings with legislative aides who are hearing big claims from lobbyists and investors. They’re not sure what to think. This policy brief aims to level the playing field.
A good example of this is the discussion over worker displacement. When I meet with legislative staffers, the question they ask most is: What’s the timeline for all of us to lose our jobs? At Equitable Growth we try to reframe that question to understand how jobs are going to change. From the data, it’s not clear that there’s going to be a clean replacement. We’re not going to see an AI agent taking over a lawyer’s job. Instead, AI might help a senior or mid-level lawyer do quite a bit more. There might be a diminished need for entry-level lawyers. But there’s still a need for physical lawyers to go to the courthouse. Our economy is not going to change that rapidly overnight.
That’s particularly true when we have no guardrails around how we can deploy these new technologies into the economy. Until we have clear rules and regulations, investors will hold back. There’s no clarity that their investment will be protected from a legal standpoint or from a regulatory standpoint—no matter how brilliant an innovation is. Capital flows where clarity exists.
What should those guardrails for AI development and deployment look like? Why is it important for the federal government—as opposed to state governments or businesses—to establish these guardrails?
It’s in nobody’s best interest when firms don’t have an idea of what regulation might look like. For example, there was a discussion of the 10-year moratorium, now the SANDBOX Act. I don’t know about you, but I tend to think of rules as stable. And when somebody tells me that they’re going to change the rules in 10 years, I’m really interested to know what the rules in 10 years are going to be and who’s going to shape them. Talk about an economic incentive to stack the deck in advance. If I know the rules will change, I’m going to shape what my plans are and how much money I can make around this, and a deviation could be devastating. Uncertainty really is the enemy for both businesses and workers in this space.
The federal government is really the only entity that is well-positioned to make those rules. Consider interstate commerce, where you might have conflicting rules in different states. How do you treat those rules when moving from state to state? It’s expensive and confusing for businesses to figure out 50 different regulatory regimes. It’s already a headache for them to navigate the differences between EU and US rules. It’s in our interest as a leader in the economy, home to where most of the innovation is taking place, to come out with regulations that consider and prioritize our constituents’ needs. Otherwise, we’ll end up bending to the regulatory regime that the EU has already put out, which may or may not prioritize things the way we would, given our nation’s economic dominance in this space.
Of course, there are practical obstacles to doing so. Floor time in Congress is an issue, especially coming out of the government shutdown. Plus, there is not strong leadership or consensus around these issues. It’s not clear that the House and the Senate are fully aligned on what these rules should look like.
In your policy brief you argue for a number of specific policies that could be implemented as guardrails for AI. You discuss antitrust actions to prevent AI market concentration, and an expansion of the safety net and reskilling/training programs to support workers displaced by AI. Why are these policies important? What might they look like in practice?
AI is probably going to augment a lot of jobs. For example, researchers are now using AI to develop literature reviews. Scientists are using AI to pull resources around a specific topic. Some of the research put out by OpenAI, maker of ChatGPT, and Anthropic shows that LLMs are often being used to modify people’s workdays or help with creative thinking. Ideally the government would invest in supporting workers in weathering that transition, enabling more people to move into those types of roles. It’s not certain what that might look like. Maybe it’s a new approach to high school or middle school, similar to how we approached computer literacy skills in the early 2000s.
There are a lot of discussions on the Hill these days about workforce retraining and reskilling. I certainly think there’s going to need to be some sort of wide-scale investment to support most of the population as we move through this transition. Transitions are inherently uncomfortable and painful. Private businesses have no incentive to help guide us through the transition, even though they stand to gain a lot if one does take place. It’s very much in our collective interest, as people are pushed out of jobs, to help them find new ones.
We don’t currently have a safety net program that can do that. Not everyone qualifies for unemployment insurance, and that program has a time limit that wouldn’t cover the time needed for some retraining programs. Retraining programs aren’t always accessible to everyone—consider geographic and cost factors alone. Lots of people talk about AI as producing a job-for-a-job change. It’s probably not going to be that way. One job might become four different part-time jobs that you cobble together. Or four jobs might become one job because the worker can become so much more efficient.
The uncertainty about how all of this will pan out in the future is one of the reasons that I describe the paper as a compass. It’s intended to help readers process what the claims are, and what the reality is, rather than provide a prescription of the exact retraining programs that we should have. It’s also to help guide policymakers to the work that some amazing academics are producing in real time to keep up with the innovations in AI technology.
AI firms often position this technology as mystical. There’s an essay by two Princeton computer scientists called “AI as Normal Technology.” That strikes me as very reasonable. We need to develop policies for what we are seeing in the field and in the data, not what AI firms are telling us might be possible for this technology. AI is something that we can regulate. We can direct its future. It is within our locus of control.
How does your work on AI in the economy fit into your larger interests in inequality?
AI is drawing out the inequalities that already exist in our society. One example is a white collar worker with a graduate degree who uses AI to make their job easier. There are other types of workers—a call center worker or a warehouse worker—who work in jobs where AI is telling them to go faster. It’s calculating the pace of their work. These workers tend to be blue collar workers, although increasingly white collar workers are subject to automated surveillance and algorithmic management functions.
Right now, lower-wage workers are being disproportionately impacted in negative ways by these technologies. But that could also change. Two researchers at Equitable Growth, Chiara Chanoi and Chris Bangert-Drowns, just came out with a paper looking at labor data to try to determine the impact of AI and automation risks for different occupations. They found that workers with higher education levels, higher-income workers, and women faced high AI exposure in the workplace. With each new AI development, the calculus changes a bit in terms of which jobs are most at risk and in what ways.
One of the most interesting policy proposals you suggest is greater inclusion of worker voice in AI adoption policies in the workplace. Why is this an important policy? And what would it look like in practice?
One of my favorite quotes is from the AFL-CIO Technology Institute: “A worker is an expert in their job.” Workers know better than anyone else how to make their workday more efficient.
There have been case studies from Germany on the impact of worker voice in unionized Volkswagen plants. Panels of union leaders engaged in discussions and negotiations with the company about how to implement AI in their factory. They ended up with much better safety and bottom line outcomes than in plants where worker voices weren’t incorporated.
Equitable Growth just announced grant funding for two different research projects on worker voice—one with healthcare workers at Kaiser Permanente and one with telecommunications and video game workers. Those will be interesting to follow.
What other research questions are you interested in investigating? Do you have any new projects in the works?
I’m really interested in exploring how individuals fare in the economy when they come from low-wealth, low-income backgrounds. I’m particularly interested in how our economy isn’t really set up for everyone to be successful and finding ways to change that. In the new year, I’m looking forward to some projects that will explore how we can support those most likely to be impacted by AI’s introduction into the workplace and economy.
I need to do a lot more work on this, but I suspect that we’re going to see a lot more women impacted or displaced by AI. And I suspect that they’re going to have a harder time being redeployed into different positions, simply because they have unique needs. Think about a single mother who might have trouble retraining and redeploying if she has children who need to be taken care of. I’m really interested in probing some of these groups that I think are likely to be forgotten when policymakers figure out how to support people in moving from job to job. I am willing to bet that there are other subpopulations that are not going to be well-served by any of these retraining and reskilling programs if their particular needs are not appropriately considered.
AI’s going to impact the economy in all sorts of ways, and it’s going to shake up a workplace that’s already been very fractured from the introduction of the gig economy over the last few years. I hope that we don’t try to address it with a big hammer when it turns out we might be dealing with screws or hooks or literally anything that is not a nail.
What are you reading, watching, and listening to right now that you would recommend to readers?
Right now, I’m finishing up Frankenstein—the original 1818 Mary Shelley version, not the version with her husband’s edits. I want to start The Nature and Origins of Mass Opinion by John Zaller. And I finally got a copy of How the South Won the Civil War by Heather Cox Richardson. I am very excited to read that. The book argues that the South won the Civil War from a popular narrative and cultural perspective. I’m not big on podcasts, but I like to listen to background music (right now, Duke Ellington and John Coltrane’s 1963 jazz album) and read my books!