Alexandra Mateescu is in her second year as a Siegel Research Fellow at Data & Society, where she has worked for the last decade. Alexandra is a founding member of Data & Society’s Labor Futures Initiative, where she conducts qualitative research about how technologies affect worker experiences and often deepen existing inequities and social precarities, particularly in low-wage industries. In this Q&A, Alexandra shares her research about the gendered experience of the gig economy; how AI is an extension of existing tools of surveillance and control of workers; and how workers themselves are building power and challenging existing narratives about the promise and peril of technology. We also discuss Alexandra’s new co-authored paper on the underlying dynamics of power, control, and ideology that are shaping AI adoption in the workplace.
Tell us a little bit about your background. What led you to want to investigate the impact of new technologies on worker experiences, rights, and economic justice?
My path to this realm of research has been a little meandering. My background is in anthropology, which I studied in undergrad and in my master’s degree, where I was initially interested in the role of museums, states, and collective memory, along with a variety of other interests that began developing as I came into my own as a researcher.
After I graduated, the job market was not great, and I worked a range of service industry jobs. I was a cashier, a nanny, a call center worker, a tutor. I did all kinds of gig work before gig platforms really took off. So this research isn’t something that’s separate from my life. I was a worker—and I still am a worker.
It’s not separate from most people’s lives. Most of us need to work to survive, and questions of economic justice are part of a lot of wider conversations about justice and building a better world. Most people have experienced precarity and economic insecurity. Throughout history, technology has always been intertwined in people’s experiences of work.
I started at Data & Society as a research assistant a bit over ten years ago. Data & Society was a new organization examining the broader societal impacts of data-centric technologies. I learned a lot over the years from many of the researchers that came through Data & Society who were looking at issues ranging from surveillance and civil rights to finance to the criminal legal system.
In the early 2010s, the so-called “gig economy” was starting to emerge, and some of us began looking at gig platforms like Uber and Lyft and critiquing many of the narratives that were emerging. The marketing promise was that platforms allow workers to be their own bosses and offer flexibility in work. These gig platforms were presented as a means of economic emancipation. There is growing awareness that these narratives are false, but at the time those sentiments dominated.
Taking on those narratives was a major reason why we formed the Labor Futures Initiative at Data & Society. We wanted to think about the ways that technologies were serving as an instrument of control and how employers were rewriting a lot of the social contract underpinning labor.
I began collaborating with Julia Ticona on projects to understand the gendered dimensions of the gig economy. At the time, a lot of the conversation was about ride-hail driving, which was very male-dominated work. Yet women were also doing a lot of gig work. They were just doing it on different kinds of platforms, whether it was platforms for freelancers, or domestic laborers, or care more generally. Care platforms hadn’t really been written about. Our project explored the particular kinds of precarities that women face within the gig economy.
What are the big questions that are driving your current work at Data & Society?
In the 15 years or so since the gig economy really took off, companies like Uber, DoorDash, and Amazon have perfected the art of surveillance-intensive labor exploitation with tactics like algorithmic wage discrimination and opaque algorithmic manipulation of workers’ activities. Most of these practices have permeated into many different occupations, regardless of worker classification.
Part of what we’ve been saying in conversations with workers across different sectors is that this isn’t just the gig economy; all kinds of work are being affected by these instruments of surveillance and control. Many employers see the gig economy as an aspirational template, and now we see how the latest wave of AI hype reflects similar ambitions to further precaritize workers, whether through subcontracting arrangements, threats of layoffs, or general devaluation of human labor and expertise.
I’ve also done a lot of projects spanning different sectors, looking at the devaluation and surveillance of care labor within platforms; the human labor behind automation in service industries and agricultural labor; worker data rights and how workers are resisting data commodification; and how state surveillance through programs like Medicaid ends up surveilling home care workers and their clients. Most recently, I collaborated with researchers Zoë West and Sanjay Pinto from the Worker Institute at Cornell ILR School and the Model Alliance on a project examining the impacts of AI on fashion industry workers. And now we are working on launching a collaborative project that looks at emerging issues around AI and labor from a cross-sector perspective.
The thread tying a lot of this work together is that tools of surveillance and control are often first used in societally devalued jobs. These technologies then tend to spread to other industries, affecting a much wider range of workers.
A lot of your work involves transmitting what you are learning to workers themselves, and turning to the labor movement to help shape your research agenda. How are you using research to help build worker power?
The work we do at Labor Futures Initiative has evolved over time as we figured out what role we could play to support the labor movement and worker organizing. Research in itself does not effect change.
Over the years, one role we’ve taken on is convening different groups and facilitating cross-cutting dialogues, whether through hands-on workshops with workers and labor unions or through more academic workshops. We’ve hosted events that bring researchers and workers together. For example, I hosted an event on the intersections between care work and surveillance, which brought together both care workers and clients, whether in child care, elder care, or support for people with disabilities.
Most recently, we hosted a workshop on generative AI and labor that brought together researchers and people doing organizing in a variety of contexts—everyone from Hollywood screenwriters to global data labor to public sector workers. My colleague Anuli Akanegbu has been doing fantastic work looking at AI and workforce development and its implications for Black workers in Atlanta, and is sharing back a lot of her findings with her research participants as collaborators.
One of the things that I’ve learned from talking directly with workers about surveillance and data collection is that the sophistication of a technology or a data practice isn’t necessarily correlated with the degree to which it impacts workers on a day-to-day basis. A lot of public discourse tends to focus on whatever new technologies come along, but those aren’t often the technologies that come up in conversation with workers. For example, timekeeping software comes up a lot. It’s behind a lot of wage theft committed by employers in many industries. But timekeeping software is “low tech”; it’s not that cutting edge.
In these conversations with workers, it’s important to find out what kinds of data are being collected and how that data fit into decision-making; there’s this massive opacity that’s stopping workers from even taking the first step of questioning how technologies operate in their workplaces. Next we ask about the kinds of actions that workers can take to challenge the power asymmetries in their workplaces or industries. What does transparency accomplish? What can workers do once they access their data or have a voice in technology design, and what are the limits of thinking about these issues through a technology lens? These issues are also going to have much bigger stakes for marginalized workers and those who are precariously employed, whether because they’re independent contractors or immigrant workers or workers of color or because they don’t have access to avenues for collective power, like a union.
Those are the kinds of questions we’ve talked through in workshops to be able to begin the work of thinking critically about the institutional and power relations underpinning technologies in the workplace.
Workers have been very proactive on these issues, whether in pushing for state regulation of AI and algorithmic technologies or through worker organizing. In the gig economy, there’s been more than a decade-long battle over worker misclassification but also successes in pushing back against practices like tip theft and subminimum wages. There have also been efforts to pass legislation that targets specific aspects of algorithmic management, such as prohibiting employers from using algorithmic management systems that result in violation of labor and employment laws.
With more recent iterations of AI technologies, more and more unions are developing principles around AI, such as the Communications Workers of America and the National Nurses United, which published a “Nurses and Patients’ Bill of Rights” that seeks to protect nurses’ autonomy and patient well-being over profit or cost-cutting incentives. The UC Berkeley Labor Center recently released a roundup report that analyzes common values emerging from labor unions’ policy statements and collective bargaining agreements around AI systems. In all of these ways, workers have been very much involved in defining their relationship to technology.
What are some of the impediments to this work?
Workers’ voices, their expertise, and their professional ethical commitments are often framed as an impediment to innovation. What’s always been true in the United States is that workers have very few rights when it comes to practices like digital surveillance, but also just generally with things like basic workplace protections.
We’re currently in a moment where many of the institutions responsible for workplace standards and enforcement are being hollowed out and marginalized workers are being directly targeted. There’s been a weaponization of enforcement capacities, major staffing purges at federal agencies, and sweeping repeals of workplace regulations at federal institutions like the Occupational Safety and Health Administration, the Equal Employment Opportunity Commission, the Federal Trade Commission, the Department of Labor, and others. Right now workers are operating in a very hostile environment. We know a lot of the harms of this technology already, but it’s a matter of political will and collective worker power to change things.
With colleagues, you just published a paper that looks specifically at AI in workplaces and offers a blueprint for a counternarrative about its positioning that prioritizes workers’ collective voice. What are you pushing back against and how are you suggesting that we reframe the conversation about AI?
A goal of this paper was to build a more coherent sense of the cross-cutting concerns of labor advocates across industries, including what questions remain unanswered, so we spoke to people in industries including healthcare, creative labor, K-12 and higher education, call centers, and warehousing. People pointed to a general absence of information that’s often filled by speculation about job displacement, press releases, and corporate self-reporting that can give a misleading impression of how workers are actually using or not using AI, broadly defined. While AI is framed either in terms of enhancement or displacement, a lot of the ways that AI systems play out now are just a reiteration of kinds of algorithmic control and worker disempowerment that we’ve already seen before.
Much of the public discourse about AI has been about displacement fears. Most of the ways that workers are brought into that conversation are through questions of how we build more guardrails, how we reskill workers once their jobs are taken away, or how we include them in technology design.
With this paper, we want to build a vocabulary to describe the larger institutional and economic shifts underpinning AI adoption across different workplaces. We want to rebut the myth that AI is hyper-efficient. We want to highlight the infrastructural and institutional capture that happens when AI is presented as the only solution in the wake of widespread disinvestment in critical social institutions like healthcare, education, and public sector services. We want to show the kinds of occupational devaluation that happen once tech companies are able to co-opt or gain authority over professional expertise, occupational scope of work, and ethical norms within professions while locking workers out of those decisions. And we want to examine the social stratification that happens when different workers experience technology very differently, which can exacerbate existing social, racial, and gendered inequalities within the job market. Those are the themes that we wanted to bring to the forefront, rather than having the conversation be entirely about worker displacement.
In addition, with this shift in technology, there’s often a well-meaning but misguided effort in the policy realm and other spaces to draw a clear dichotomy between good and bad workplace tech. In this telling, on the one hand, there’s “good” tech that augments workers, makes them more productive, and gives them control. On the other hand is “bad” technology that surveils, controls, and extracts. But often the lines are very blurry. In some cases, both things are true. An algorithmic assistant can very easily also be your algorithmic boss. There’s a great piece by the researchers Sarah Fox and Samantha Shorey that uses the term “augmentation washing” to describe this phenomenon.
Our paper is trying to ask and engage with bigger, more ambitious questions so that we’re not stuck in the typical narratives and dichotomies that dominate the conversation. The paper offers a roadmap for thinking through what we want to do next. We hope to home in on a few case studies to address some of these questions.
We’d also like to produce more applied and hands-on materials for workers and work organizations. For example, we’re thinking about how to create more spaces for mutual support, information sharing, and organizing. Or we might create educational modules to think critically about AI and labor.
What are you reading, watching, or listening to right now that you would recommend to readers, and why?
After the new year, I was looking for something hopeful. I saw the word “hope” in the title of Rebecca Solnit’s book, Hope in the Dark: Untold History, Wild Possibilities, which had been sitting on my bookshelf. The book is about how to maintain hope for a better future in the face of uncertainty and defeats. It was published in 2004, immediately following the Bush administration’s invasion of Iraq, which was not a very hopeful time.
I’ve had mixed feelings reading the book. It feels demoralizing to read a book from 2004 with the hindsight of someone living in 2026. Some of the political movements and global struggles that Solnit chronicled with a sense of hope at the time have stagnated or gotten much worse.
But there’s a quote from Solnit that I do really like. She makes the point that hope isn’t a lottery ticket. You don’t sit and hope and wait for it to pay off. Instead, she says, “Hope is an axe you break down doors with in an emergency… Hope just means another world might be possible, not promised, not guaranteed.” It’s a commitment to the future. I do think that Solnit is very right about that. We do have a future ahead of us that we can shape.