This is mostly for product managers who might be working on products that involve content moderation. Given the increased pitch around this topic right now, and having built or managed several products with these features, I thought I’d do a series on some of the issues involved. This first part will be historical. If you’re not into that, you can skip ahead to an upcoming article (once I write it) that goes right to some suggestions.
It was inevitable that our collective societal debate over content moderation would move from a simmer to a boiling-over mess. As one of the first product people faced with managing the consumer masses as they gained access to the open internet, I thought it might be interesting to take a look at how we got here and where we might go next. Especially because so many of us working in product management have some degree of this issue to deal with. You don’t have to be a Facebook or Twitter or Reddit to face issues in these areas.
While this is historical background, let’s be candid about what’s happening right now. Major services such as Meta/Facebook and X/Twitter are making large changes to how they handle content moderation, shifting toward AI and crowdsourced methods and away from manual review. Some critics seem upset about this, but it seems fairly simple. It’s partly about money. And perhaps the reality that the volume of content in some venues simply can no longer be manually managed at all. While estimates suggest content moderation costs companies billions annually in AI tools, human moderators, and legal compliance, we don’t know all the details. But we know it’s a lot. For the most part, companies contract out for these services. Just looking at some older numbers from NetChoice’s Content Moderation By The Numbers, from back in 2H2020, there were six billion takedowns across the top seven services. That’s just the takedowns. No telling how many items had to be machine- or manually reviewed, or escalated, to get there. (And those numbers are years old.)
It’s been said that we generally study history for two main reasons: to predict the future or to change it. So we’ll need to take a tour through how we got here before getting into some possible future directions.
Why We Talk At All or Care About Moderation
One theory as to how humans got to the top of the food chain with our big brains is that we started using tools. Another explanation is language: our brains got larger as we used language more. And why did we need to do that? Because it’s how we’ve excelled. We don’t have super sharp teeth. We’re not exceptionally fast or strong. What we do have is the ability to communicate, learn from each other, transmit culture, and most importantly, cooperate. From cave paintings to smoke signals to handwritten letters to the printing press to the telephone to dozens of options online, communication is fundamental to our evolution. Our digital channels are nothing more or less than our latest means of storytelling. When we collectively feel we have to moderate content, it’s because there’s enough disruption that the channel loses value toward our goals. And that includes the battle over what constitutes truth, and the judgment of which speech crosses the line into those few areas that are disallowed.
History of Online Content Moderation
Prior to the 1990s, practically the only people online were computer scientists or students, who had access to our earliest online communication tools. Consumer access to special-interest bulletin board systems (BBSs) was also available. Through the early 1990s, consumer access to early online services started (Delphi, CompuServe, Prodigy, and AOL). There are several reasons content moderation then was orders of magnitude easier than today. To start, access often cost something non-trivial. People were spending several dollars per hour to connect, so what people were doing was generally purposeful: get online, get the task done, go back to real life. Most activity was enthusiast-specific: business areas, hobbies, etc. Private dial-up BBS services may have been free, but operators would just bounce users at will if they were annoying. And the larger services? For the most part, people behaved well. But when they didn’t, services could more easily ban them. While this was before online-specific speech laws, hobbyists and firms still had service guidelines.

And major services had one other key benefit: they had a billing relationship with their users. If – for whatever reason – someone was to be temporarily suspended or permanently banned, they had a powerful means to do it. For a user to get around this, they’d need to feign a new identity and use another credit card. Yes, this was doable enough, but still a higher bar than just creating yet another free email account somewhere.

Today, most services have practical limitations on the ability to moderate at the user level. There are just a few products, Facebook and LinkedIn and maybe Twitter come to mind, where you could even effectively moderate people if you could manage the volume (which is a separate issue). That’s because even if such services don’t have formal identity verification tools, the network-connection aspect of their members is usually effective enough for identification. Of course people can spin up new accounts for free and attempt to cause trouble, but they won’t have the history, reputation, relationships, and reach of their old account.
The early online services had various forms of user participation that might be called bulletin boards or forums or real-time chat or whatever. They could be managed by moderators, and while I can’t give you a specific number, the percentage of bad actors was relatively small. And manageable. And again, the power to suspend or ban was real.

While the internet was sort of available to some, it really wasn’t until the mid-1990s that consumers got involved. Early internet access required somewhat sophisticated computer users to install special software, and even after that was done, there wasn’t much to get to, perhaps excepting the USENET discussion groups. The major services began offering access to portions of the internet via their platforms in the mid-1990s, around the same time Microsoft Windows offered built-in capability for easier connectivity. As the greater masses came online, a couple of things happened. For one, this was no longer just special enthusiasts. It was everyone. And they brought with them their various pros and cons. Secondly, people could act more anonymously, which could lead to some less constrained behavior.

Having clarity on the mechanism to moderate anyone based on a unique persona is just one practical component. That’s arguably a technical problem. Actual policy? After a few decades of computer-mediated communication, we’ve not quite solved those issues. We’ve arguably gotten worse, even as we’ve built more tools, formed firms and ecosystems for moderation, and more. If you’re a small to medium-sized firm, you certainly wouldn’t want to build your own expensive capabilities here; you’d just look into a resource like the Content Moderation Services Pricing Guide 2025 or similar.
An Early Lesson Regarding Openness
Early in my career at the then-popular Prodigy online service, I ended up being the first person with the word “Internet” on a business card. This often required some explanation, as no one knew what it was. With a small team, I had built parts of our USENET interface and managed the product, in addition to our early web products, which included one of the first consumer web page hosting opportunities. Even though this was a major online service with millions of customers at the time, just a few of us worked on these then-new internet products. Upon launch, we had tens of thousands of users suddenly entering this all-but-unmoderated world. I think there were maybe four of us in my group at the time, out of a company of 500+. Aside from the development side of our business, there were just a few internet-enabled computers. As I sat at mine watching users pour into the new product, I was quite proud of myself and our team for building this great new thing. However, not long after, as I watched some of our users’ behavior on USENET, I recall thinking… “@#$^t. I think I need to talk to my manager and legal.” Meanwhile, I was – mostly alone – managing and responding to a variety of things which at the time I had no idea existed in the world. (And sort of wished I didn’t.) So I also thought, “I wonder if I’m going to get in trouble for this. I really need to talk to my manager and the legal folks some more.”
While we knew we were pouring people into a more open area and had some appropriate warning language to that effect, we really had no idea. This had been an IBM/Sears joint venture and a family-oriented service. This was my introduction to the masses once they were more unbound. Philosophically, I’ve always been a believer in the potential value of anonymous and pseudonymous speech. After all, the pseudonym “Publius” was famously used by Alexander Hamilton, James Madison, and John Jay when they authored The Federalist Papers in support of the ratification of the U.S. Constitution. Even though the overwhelming share of what our users both took from and provided to the USENET community was positive, some of the folks we poured into USENET were not quite at the founding fathers’ level in terms of their discourse. Let’s say some of it was more of a sub-basement level. While I’ve always had a deep respect for my customers, the reality is maybe some people shouldn’t be your customers. In any case, to deal with this intellectually, you really have to re-commit to the ideals of free speech and the Enlightenment-era sentiment attributed to Voltaire: “I disapprove of what you say, but I will defend to the death your right to say it.”
I didn’t fully understand it at the time, but this was my “Welcome to the next era of online” moment. And as importantly, an understanding of just how much a very small percentage of troublesome issues could impact everything from management attention to actual costs. On the main Prodigy service boards, content moderation was easier, as it was basically a family service and the rules were clear enough. But for the more open-access ’net-based products? Some issues were easier to deal with than others. Privacy provisions notwithstanding, threats of self-harm could usually remove identification constraints – where identification was possible – as there are laws about imminent threats and good-faith actions. (Though laws vary by region.) But some of the edgy things? Is that provocative art, or does it cross the line? Is that hate speech? Is it political extremism? What about… [insert heinous thing here]. Is that illegal? Am I at all qualified to even judge that?
At the time, dealing with this was… mostly just me. Alone in an office in White Plains, NY, with an OK window view and my glowing screen in terminal view, green or orange depending on mood. Watching tens of thousands of new customers pour in per month and discover new and exciting places. And some of them writing to me about those places. Nowadays, there’s an entire ecosystem of firms to whom you can contract such work. At the extreme end, firms like Meta/Facebook, X/Twitter, and YouTube spend billions combating content that seems to violate law and policy. We’ve come a long way.
So Where Are We Now?
You’ll have to go to the exciting Part 2 for that.