Sohyeon Hwang, PhD, is in her second year as a Siegel Research Fellow. Sohyeon serves as a postdoctoral associate at the Center for Information Technology Policy (CITP) at Princeton University, where she studies community-based systems of governance in technologies, including decentralized social media and gig work. We sat down with Sohyeon to learn more about her recent work on inter-community governance processes, how she involves communities in her research, and strategies for overcoming some of the frictions that emerge from coordinating across decentralized social media communities.
Tell us a little bit about your background. What drew you to your current research interests?
The rise of the internet coincided with my childhood and early adulthood, and the way that the internet has disrupted and fundamentally destabilized our sense of community has also been part of that arc. Like a lot of people my age, I feel the urgency and value of community to sustaining a healthy and functioning society.
Communities are important. I think this is a pretty simple fact that we all know, even when we’re young. But as a concept, “community” is a difficult thing to define, and so the question of community in the digital era can be slippery and challenging to pin down.
The way that I think about it is informed quite a lot by my undergraduate studies. I was a double major in government and information science. Those majors followed separate tracks. My government classes made me think about what it means to build collective power; a lot of it was focused on Cold War era politics, revolutions, social movements, ideology, and political philosophy. In my information science classes, we talked about current issues, and the impacts of technologies on people and on communities around us. My current research interests merge those two lines of thought.
You’re currently a postdoctoral associate at the Center for Information Technology Policy at Princeton University. What are the big questions that drive your work? What kind of research are you working on right now?
A big question driving my work is how people can have more agency in the technologies that shape their everyday lives. As people enact rules about how to use a technology, they have some wiggle room to both transform and subvert the intent behind its original design. And so, I see communities as an exciting space for us to build collective decision-making power. But to leverage this, we need to better understand and strengthen community-based models of governing technologies.
As a postdoc at the Center for Information Technology Policy (CITP) at Princeton, I work primarily with Prof. Andrés Monroy-Hernández in the Princeton HCI lab. Andrés and I are both interested in community-owned technologies, across domains like social media and gig work. It’s a fun overlap because we come at it from different backgrounds: I’m a social scientist and Andrés is in computer science, and we intersect on questions of how people and technology interact. Right now, I’m conducting a mix of qualitative and computational studies, developing data and tools for the communities that help govern some of the key online information ecosystems we rely on.
With colleagues, you just published a paper on building shared infrastructure for community-owned platforms and online spaces. What are the online communities that you are studying?
I’ve looked at a variety of online communities across several platforms over the years. One of the main contexts I’ve been examining these days is decentralized social media. Decentralized social media is made up of independent servers that communicate with each other using an open protocol. Anyone—like you or me—can set up a server and join this network of servers, which is sometimes called the Fediverse. Mastodon, which enables microblogging, is an example of open-source software people can use to run these servers, but there are many others. The result is an ecosystem of platforms that are controlled and governed by communities of users. That’s a stark contrast to traditional social media platforms, which are closed systems, even though they may look very similar at first glance.
We’re working with developers and organizers (partnering with Bonfire Networks) who are trying to create viable alternatives to centralized social media, such as platforms run by Meta, X, or TikTok. We’re looking at what tooling and social or organizational resources decentralized communities need in order to govern this system together sustainably, and we’re developing some prototypes.
In your article, you argue for inter-community governance across different internet communities. What is inter-community governance and why is it so important?
When we think about community governance in online contexts, we often think about community as a unit, and then think about the internal mechanisms or practices a community uses to become successful. With “inter-community governance,” we’re thinking about the mechanisms that shape the relationships between communities. That’s important because content and interactions always flow across community boundaries when we’re online. A community can make decisions by itself, but it’s never going to be a self-contained system. It’s always going to have this flow of information from other spaces and from people who sit across many spaces.
In the paper, we argue that thinking about inter-community governance gives us an opportunity to re-think our approach to some of the major, persistent challenges we see online. For example, if there are massive spam attacks going across many communities, how can communities work together to more effectively shut down that spam attack, instead of one community working alone, one at a time?
In some of my earlier work, I saw this happening in ad-hoc ways. You see communities copying rules or enforcement practices from each other, or sharing tools such as filtering bots. There are also emergent back-channels where people from different communities are able to talk to each other and consult with each other on how to deal with certain cases. Every now and then you also hear about coordinated initiatives by a platform to bring communities together to deal with a larger issue such as hate raids. But these are usually temporary, and when communities are doing this informally, it’s a lot of work. There are no clear avenues for them to actually work together.
Community governance is very labor- and resource-intensive. It results in a lot of burnout. People tend to be doing this on a volunteer basis, and they face nagging and harassment. One of the big motivations for building out this layer of inter-community governance is to share the burden of this community-building work to make it more sustainable and hopefully more robust.
Is there an opportunity to build collective power through inter-community governance practices? What does that look like?
When we think about who’s actually making decisions in these online contexts, there are actors at three key levels: the platform level, the community level, and the individual level. It’s really important to think about governance at all three of those levels, and how they interact with each other. A lot of debate has focused on the platform and individual level—for example, how platforms are designed and what features individuals can use or interact with, as well as how we should regulate systems. But the community level is also crucial and mediates the other two.
I see inter-community governance as a way of enabling communities to coordinate on decisions in online contexts. I was speaking to a group recently about how I see this as thinking about the seams or interstices between communities. We want to build and strengthen an interwoven net of meaningful community-level decisions, making it stronger and enabling communities to take collective action. This makes communities a stronger counterpoint or safeguard against a (usually centralized) platform that may have different interests than them.
For example, a lot of online communities were taken by surprise when the Meta AI feature started rolling out in Facebook groups. The initial response was quite negative as well as just confused; people wanted to be able to turn it off. Having real ways for communities to build voice and power can give them more leverage in these kinds of decisions, or can enable them to devise tactics that can help them sustain the kind of affordances or interactions that they think are valuable for the health of their community.
The online communities that you’re studying are primarily decentralized. What challenges does that pose in coming to a robust inter-community governance practice?
Decentralized social media communities, like many communities, often have different preferences and goals. For example, I help run hci.social, which is for HCI researchers, but there are lots of communities that are more personal. For example, they may center on identity, local regions, or hobbies. One challenge that you start to see with this kind of heterogeneity is that these communities can interact with one another but may have different norms that sit in tension. For example, as a professional academic network, hci.social is pretty open to scale. A more private community may not be. But if they connect (“federate”) with us, their content might be broadcast to a pretty large audience.
In some earlier work, we called this an example of governance frictions, where the decisions of one community can actually undermine the decisions of another community and can raise problems like privacy violations.
But, the lack of top-down norms and values is a feature of this design, not necessarily a bug. One of the reasons why we call these “frictions” is because we think there’s a generative aspect to it; it forces one to slow down and assess why a difference between two communities has become salient.
At the same time, the past ten years have been so dominated by a centralized model of social media and online interaction that we’re used to having seamless online spaces that have the benefits of scale and ease-of-use. If you have critical mass, you have network effects and get a lot of content and information that is very easy to scroll through. But in decentralized spaces, you don’t have that as much; governance frictions are an example of that. I think it suggests that we may benefit from shifting our mental model of what community-based spaces look and feel like online.
To conduct research for your paper on inter-community governance, you held design workshops for community members. What did you learn from these workshops about what community members actually wanted and needed in order to facilitate inter-community governance?
The main thing we learned was that the simple act of communicating about governance is really hard.
People told us that when they encounter a problem on decentralized social media, it’s usually because of conflict between members of their own community and those of another community. But folks would have a hard time identifying the actual root of the problem, because they didn’t know what the other community’s governance practices were. For example, if a community has just one person who’s running the show and that person is very busy, maybe the conflict is not malicious but an artifact of their limited resources and bandwidth: the admin of the other community just hasn’t gotten around to dealing with the problem yet. But it’s not realistic for people to read through all of the rules of another community in great detail to try to guess if they’re “cool” or operating in good faith. Providing basic signals about organizational practices and form seemed to be a really fundamental part of the issue.
People also told us that they don’t have clear avenues for interacting with each other across communities to try to learn about or handle these kinds of ambiguous situations. In the Fediverse, each server has a link to the person who’s their admin, and you could theoretically email that person and talk to them about something that came up. But there’s often no space for communities to convene or talk. The Fediverse has actually done a lot to try to mitigate this problem. There’s a regular forum (FediForum) where people gather online to talk about recent issues. There have been other attempts that have come and gone—discussion forums for moderators, for example, as well as more closed Discord groups.
But these efforts have been pretty challenging to sustain, partly because they can become another layer of work and organizing. One tricky thing is that inter-community governance is not actually all that salient all the time; it only becomes so when you start having issues with another community or you realize that there is a problem that you have to respond to together. But at that point, it’s very valuable for mechanisms of inter-community governance to already be in place or available to folks.
What research questions would you like to see investigated? What research projects do you plan to tackle next?
One of the main questions that we are thinking about as we work on developing infrastructures for communities is around the labor and costs of participating in community governance.
When we build these tools that are meant to enable more robust community decision-making, are we actually producing more work for communities? How can we make life a little easier, so that we can have useful, productive, fun, and joyful online spaces that communities own and where they don’t have to worry about privacy or all these other threats that we face online all the time? At what particular point of intervention is community governance crucial to better outcomes, and how should we support that?
For example, one of the research projects I’m focusing a lot of my energy on in the coming months is centered on information integrity. We’ve started investigating how we can build tools for communities to adapt their own governance practices toward better information outcomes in the digital spaces they mediate. The rapid changes in technology that we’re seeing now are already disrupting communities’ existing practices and forcing them to adapt and respond to new threats to information integrity.
Some of this will come through the design of tools, but a lot of it will really be about social strategies and tactics that communities can develop or employ.
What are you reading, watching, or listening to right now that you would recommend to readers, and why?
I’ve been listening to the audiobook version of The Score: How to Stop Playing Somebody Else’s Game by Prof. C. Thi Nguyen, and I’m about halfway through. It’s about how the design and metrics of scoring systems really shape our lives and can even co-opt our values. Nguyen moves back and forth between scoring systems in games and in life, asking why one is fun and the other can be terrible.
One of the concepts Nguyen discusses that stuck out to me is “value capture,” or when scoring systems manage to convince us that their metrics are what we should really care about, upending our actual values as a result. My guess is that we’ve all found ourselves caught in this trap before. We see it happening all the time, and increasingly so in today’s world, where we seem to constantly be playing a data or numbers game.
For example, in a school context, the tension between grades and finding joy and self-actualization in learning is a classic case of this. I like how this book encourages readers to think about the scoring systems they personally take part in and why. As a plus, I think it’s also a nice introduction for a broader audience to think about data and metrics more critically, pointing to classic work about the politics of technology and design.




