Pegah Moradi is a PhD candidate in Information Science at Cornell University, where she studies the social and organizational dimensions of digital automation, with a focus on its impacts on work and workers. Her dissertation research investigates how self-checkout machines and related crime-control technologies are reshaping the future of frontline retail work. Her broader academic work spans topics including behavioral advertising and generative AI in interpersonal communication.
Below is a summary of Pegah’s conversation with Siegel Research Manager Madison Snider.
Can you tell me about your current role and the kind of research you’re working on right now?
I’m a PhD candidate in Information Science at Cornell, and I’m currently wrapping up my dissertation research. My work focuses on workplace automation—but specifically on the surprising, less obvious ways automation changes work. Over the past few years, most of my projects have focused on self-service systems and related crime-control technologies, with an emphasis on how these systems impact frontline workers and their interactions with customers.
How did you get interested in this area of research?
I was in college during this kind of peak moment for software engineering—everyone wanted to graduate and go work for a big tech company making six figures. I was taking CS classes, but I was also really interested in political science, sociology, economics, and labor studies. I started to notice how little overlap there was between what was being taught in those subjects and what was being taught in computer science, but I was drawn to both. I wanted to find a way to bring them together.
At the time, it felt like something was being missed in that gap—especially when it came to the downstream effects of technology. My classmates were going off to build tools, but often without any understanding of the social context or the broader impacts of what they were building.
That was also when conversations about the social impacts of tech were really starting to pick up—things like the public health effects of social media, online privacy, Cambridge Analytica, and shows like Black Mirror getting popular. So there was this wider cultural shift, too, where people were becoming more aware of these issues.
That’s how I stumbled onto information science as a discipline—where people were having those exact conversations in a robust way. What surprised me was how the field brought together both strong technical understanding and deep social science insight.
As for my current projects, most of my academic work is about automation and how it reshapes labor in unexpected ways. Early in my PhD, I was brainstorming project ideas—really just trying to figure out what I was going to study—and in a conversation with my advisors, we started talking about gas pumps.
We were thinking about how pumping gas used to be a job—in places like New Jersey and Oregon, it still is—but in most of the country, that work has been fully offloaded onto customers. We were initially interested in questions around prices and wages, but what stood out was that this was a system eliminating jobs without really automating anything. That opened up a lot of questions.
From there, we started thinking about self-checkout as a similar but more complex case. With gas, people generally don’t mind doing it themselves—there are safeguards in place, so you don’t need a trained professional. But with self-checkout, it’s not quite the same. Some people still prefer using a traditional cashier, and there are more obvious issues—like theft and fraud—that don’t really apply as much to gas stations.
So we thought, okay, this is a widely deployed technology, people have mixed feelings about it, and there are real, tangible problems. It seemed like a ripe area for deeper research. That’s how I arrived at studying self-checkout.
You introduce the idea of pseudo-automation in frontline work in your recent article. Can you tell us about that project—and what you mean by “pseudo-automation”? How does that differ from full automation?
Absolutely. So, that project came out of the self-checkout work. We were interested in how shifting tasks to customers changes things for the workers who used to do that job.
Unlike at gas stations, where the job disappears entirely, in retail the roles shift. A cashier might now be monitoring self-checkout machines rather than scanning items themselves. Often, they’re doing both—it’s a very fluid role.
So we conducted an interview study with cashiers across the U.S. We asked how self-checkout has impacted their day-to-day work: how they interact with customers, how they feel about their jobs, and how their responsibilities have changed.
One of the biggest takeaways was that cashiers felt their roles had become more adversarial. Instead of having one-on-one, routine interactions with customers—where you scan their items, make small talk, and move on—they now monitor multiple registers at once. And when they do interact with customers, it’s often because something has gone wrong: the machine is stuck, the customer is frustrated, there’s a problem with payment, etc. The interaction starts with tension.
That shift—from routine service to conflict resolution—was really striking. Many workers talked about how they had to actively try to mend those relationships on the fly.
So that’s where the idea of pseudo-automation comes in. Full automation means a task is completely taken over by a machine—like a robot screwing tires onto a car, or pouring plastic into a mold. A person used to do it, and now a machine does.
But pseudo-automation is different. It’s not that the machine is doing the task—it’s that the customer is doing it instead.
What’s key is that the work doesn’t disappear—it gets reconfigured. And the people who used to do it now play new roles: managing customers, fixing errors, and dealing with frustration, often without the recognition or support that should come with that shift.
How would you define frontline retail work more broadly?
Frontline work refers to roles where you directly interface with someone outside of the organization—a customer, a client, a patient. You’re the point of contact, the boundary between the organization and the public.
A big part of that role involves enforcing the organization’s rules externally. Think about a cashier: their job is to ensure people pay for their goods. They scan items, follow certain protocols—but there’s also room for discretion. If something won’t scan or is mislabeled, they might make a judgment call. There’s this wiggle room in how rules are applied.
It’s similar for doctors or nurses: there’s a structure, but also professional judgment. That’s the framework we’re using to define frontline work. And when I talk about retail specifically, I’m referring to brick-and-mortar roles—cashiers, self-checkout attendants, people who interact with customers on a daily basis.
How do you see your findings fitting into the bigger picture? How should organizations think about introducing automation into frontline workflows?
I’d say frontline work is all about discretion, or choice in how a task gets done. There are rules to follow, yes—but workers also have space to decide how they apply those rules in real situations. So when it comes to automation, one key takeaway is that organizations shouldn’t fear that discretion. It’s actually an important part of the job.
For example, in a hospital, if a doctor can’t exercise discretion—if they just say, “Sorry, those are the rules and the system won’t let me do anything”—it’s not just frustrating for the patient. It reflects badly on the whole institution. So allowing room for human judgment is crucial.
Similarly in retail, workers are often navigating real risks—physical or verbal harassment from customers, for instance. They do kinds of relational work to avoid or defuse those high-stakes moments. Discretion keeps them safe, and it also prevents situations from escalating in ways that reflect poorly on the company.
The same thing comes up with self-checkout and theft-detection technologies. If the system clearly catches someone stealing and the tech is obviously “smart,” it’s harder for workers to use those relational tactics—to say, “Oh, the machine’s just being weird.” That loss of plausible deniability takes away a valuable conflict-management tool.
So overall, discretion isn’t a flaw in the system—it’s a feature that organizations should recognize, support, and build around.
Are you seeing any changes in how workers are being trained to deal with this shift?
Often, they aren’t. Most of the training is extremely ad hoc. You learn by doing. And many retail companies aren’t investing much in formal training, even though they probably should be. Especially during the time I did these interviews—2022 to 2024—so many stores were short-staffed. The attitude was basically, “We’ll take whoever shows up and hope they can do the job well enough.”
So yeah, training hasn’t caught up. And yet, the work is becoming more about relationships and soft skills—less about, say, how to bag items properly, and more about managing unpredictable interactions. That work can be more demanding, more dangerous, and frankly, should be compensated accordingly. But right now, we’re not seeing that shift happen.
Are there policy changes you think could help address some of the tensions you’ve outlined?
When it comes to retail, there was a bill introduced, but not passed, in California (SB 1446: Grocery retail store and retail drug establishment employees: self-service checkout and consequential workplace technology) that would have required retailers to staff one worker for every two self-checkout kiosks. The idea is to avoid overwhelming one-to-many scenarios, where a single worker has to manage all kinds of issues across multiple kiosks.
But beyond that, I think there’s real value in strengthening systems that support worker voice—things like unions, worker organizing, and collective bargaining. These aren’t flashy “AI policies,” per se, but they’re essential for ensuring that workers have a say in how technologies are implemented in their jobs.
And even one level above that, we need better social welfare policies. If people feel they can’t leave a bad job because there’s no safety net, it’s much harder to pressure employers to make jobs better—or to reject technologies that make work untenable. So I like to think of this as a kind of pyramid: at the top are context-specific policies like the self-checkout rule; in the middle are labor policies that support organizing and voice; and at the base are broad social safety nets. When all those pieces work together, we can actually build better, more humane futures of work.
I love that framing—the pyramid helps clarify how these layers work together. And it makes me think about what other industries might be next. It’s not just retail or grocery stores—what we learn here could apply more broadly. On that note, where’s your research headed next?
The next direction might sound like a sharp turn, but it connects. My retail work looks at how workers manage adversarial interactions that are mediated by technology—while still having to provide positive customer service. They’re supposed to smile, help, and represent the brand, even when things get tense.
But what if the interaction is supposed to be adversarial? That’s why I’m starting to look at automated officiating in sports. In that setting, referees aren’t there to make players happy—they’re there to enforce rules. It’s a judgment role, and players are going to be upset sometimes. I used to referee youth league soccer games as a teenager, and even then, at 15 or 16, I was getting yelled at by parents. It was wild. You’re just trying to enforce the rules!
Now, a lot of those decisions are being augmented—or even replaced—by technology. In sports like tennis and soccer, technologies like Hawk-Eye are used to make line calls. That kind of binary decision—was the ball in or out?—is pretty easy to automate.
In some ways, it’s a relief when you can point to the tech—like, “Hawk-Eye made the call, not me.” But there’s also the emotional labor, the real-time judgment calls, the gray areas of officiating. Like, what if you saw something different? And then, if you override the system, people might get mad at you for not trusting the tech. I’m interested in how tech is impacting those more nuanced aspects of officiating—and how referees are still using or negotiating with the technology in real time.
What are you reading, watching, or listening to right now that you’d recommend to our readers? And why?
Honestly, I spend most of my day reading about self-checkout systems, automation, and organizational sociology, so outside of work, I try not to think about any of that. That said, I’ve found myself gravitating toward two types of content that still relate a bit to my work.
First, I’ve been really into observational and performance comedy—especially the kind that touches on sociological questions. I listen to a podcast called Exploration Live, which is hosted by two comedians, Charlie Bardey and Natalie Rotter-Laitman.
The format is great: they each come with a list of concepts or observations—like, “Isn’t it weird when people do this?” It’s exactly the kind of thing sociologists do, but they approach it in this curious, light-hearted way.
One of their observations was about going to CVS and trying to buy something, but everything’s locked up. The joke was something like, “CVS isn’t really a store anymore—it’s more like a museum. You’re not supposed to touch anything, just admire it.” And I thought, Yes, this is exactly the kind of moment that sparks sociological inquiry, but it’s being captured through comedy.
I’ve also been reading a lot of writing that offers hope—essays, poetry, novels. I was really moved by Kaveh Akbar’s recent essay in The Nation about how to respond to this political moment—what it means to act, to show up.
I also read Haley Nahman’s newsletter Maybe Baby, which blends cultural criticism with personal reflection. It’s about how to live in a way that’s both meaningful to yourself and helpful to the world. And I’ve been drawn to other writers like Becca Rothfeld, who think critically but still leave space for possibility.
That kind of work reminds me there are people thinking deeply about how to build a better future—and that’s hopeful.