Sunday, July 14, 2024

In a battle of AI versus AI, researchers are preparing for the coming wave of deepfake propaganda

AI-powered detectors are the best tools for spotting AI-generated fake videos. The Washington Post via Getty Images
John Sohrawardi, Rochester Institute of Technology and Matthew Wright, Rochester Institute of Technology

An investigative journalist receives a video from an anonymous whistleblower. It shows a candidate for president admitting to illegal activity. But is this video real? If so, it would be huge news – the scoop of a lifetime – and could completely turn around the upcoming elections. But the journalist runs the video through a specialized tool, which tells her that the video isn’t what it seems. In fact, it’s a “deepfake,” a video made with deep learning, a form of artificial intelligence.

Journalists all over the world could soon be using a tool like this. In a few years, everyone could be using one to root out fake content in their social media feeds.

As researchers who have been studying deepfake detection and developing a tool for journalists, we see a future for these tools. They won’t solve all our problems, though, and they will be just one part of the arsenal in the broader fight against disinformation.

The problem with deepfakes

Most people know that you can’t believe everything you see. Over the last couple of decades, savvy news consumers have gotten used to seeing images manipulated with photo-editing software. Videos, though, are another story. Hollywood directors can spend millions of dollars on special effects to fake a realistic scene. But using deepfakes, amateurs with a few thousand dollars of computer equipment and a few weeks of time can make something almost as true to life.

Deepfakes make it possible to put people into movie scenes they were never in – think Tom Cruise playing Iron Man – which makes for entertaining videos. Unfortunately, it also makes it possible to create pornography without the consent of the people depicted. So far, those people, nearly all women, are the biggest victims when deepfake technology is misused.

Deepfakes can also be used to create videos of political leaders saying things they never said. The Belgian Socialist Party released a low-quality video of President Trump insulting Belgium that was not a deepfake but was still phony, and it got enough of a reaction to show the potential risks of higher-quality deepfakes.

University of California, Berkeley’s Hany Farid explains how deepfakes are made.

Perhaps scariest of all, they can be used to create doubt about the content of real videos, by suggesting that they could be deepfakes.

Given these risks, it would be extremely valuable to be able to detect deepfakes and label them clearly. This would ensure that fake videos do not fool the public, and that real videos can be received as authentic.

Spotting fakes

Deepfake detection as a field of research began a little over three years ago. Early work focused on detecting visible problems in the videos, such as deepfakes that didn’t blink. With time, however, the fakes have gotten better at mimicking real videos and have become harder to spot for both people and detection tools.

There are two major categories of deepfake detection research. The first involves looking at the behavior of people in the videos. Suppose you have a lot of video of someone famous, such as President Obama. Artificial intelligence can use this video to learn his patterns, from his hand gestures to his pauses in speech. It can then watch a deepfake of him and notice where it does not match those patterns. This approach has the advantage of possibly working even if the video quality itself is essentially perfect.
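For readers curious what this looks like in practice, here is a minimal sketch of the baseline-checking idea, not any published detector: it assumes each clip has already been reduced to a few soft-biometric measurements (head pose, blink rate, speech pauses), simulates those numbers, and flags a clip whose statistics sit far from the speaker’s learned baseline.

```python
# A minimal sketch of a person-specific behavioral baseline check, for
# illustration only (not the detectors described in this article).
# It assumes each video clip has already been reduced to a feature vector of
# soft-biometric measurements, such as head-pose angles, blink rate and
# speech-pause statistics; random numbers stand in for those features here.
import numpy as np

rng = np.random.default_rng(0)

# Pretend these vectors came from many authentic clips of one speaker.
authentic_features = rng.normal(loc=0.0, scale=1.0, size=(500, 6))

baseline_mean = authentic_features.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(authentic_features, rowvar=False))

def mahalanobis(x: np.ndarray) -> float:
    """Distance of one clip's features from the speaker's learned baseline."""
    d = x - baseline_mean
    return float(np.sqrt(d @ cov_inv @ d))

# A suspect clip whose mannerisms drift away from the baseline.
suspect_clip = rng.normal(loc=2.5, scale=1.0, size=6)

THRESHOLD = 4.0  # in practice, tuned on held-out authentic clips
score = mahalanobis(suspect_clip)
verdict = "flag for review" if score > THRESHOLD else "consistent with baseline"
print(f"behavioral distance = {score:.2f} -> {verdict}")
```

The features and threshold here are placeholders; a real system would learn them from large amounts of verified footage of the person being protected.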

SRI International’s Aaron Lawson describes one approach to detecting deepfakes.

Other researchers, including our team, have focused on differences that all deepfakes share compared with real videos. Deepfake videos are often created by generating each frame individually and then merging those frames into a video. Taking that into account, our team’s methods extract the essential data from the faces in individual frames of a video and then track that data through sets of consecutive frames. This allows us to detect inconsistencies in the flow of the information from one frame to another. We use a similar approach for our fake audio detection system.
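A simplified illustration of that frame-to-frame idea follows; it is not the detector described above, and it assumes a face encoder has already turned each frame into a feature vector (synthetic vectors stand in below). The point is only that smoothly captured footage and frame-by-frame synthesis can leave different temporal signatures.

```python
# A simplified illustration of the frame-to-frame consistency idea, not the
# actual detector described above. It assumes a face encoder has already
# turned each frame into a feature vector (synthetic vectors are used here)
# and simply measures how much those vectors jump between neighboring frames.
import numpy as np

rng = np.random.default_rng(1)

def temporal_inconsistency(frame_features: np.ndarray) -> float:
    """Average change between neighboring frames' face features.

    Real footage tends to change smoothly from frame to frame; videos built
    by generating frames one at a time often show extra jitter.
    """
    step_sizes = np.linalg.norm(np.diff(frame_features, axis=0), axis=1)
    return float(step_sizes.mean())

# A smoothly varying "real" clip versus a jittery frame-by-frame fake.
t = np.linspace(0, 1, 120)[:, None]                           # 120 frames
real_clip = np.sin(2 * np.pi * t * np.arange(1, 9))           # smooth 8-dim features
real_clip += rng.normal(0, 0.01, real_clip.shape)             # sensor noise
fake_clip = real_clip + rng.normal(0, 0.15, real_clip.shape)  # per-frame jitter

print("real clip inconsistency:", round(temporal_inconsistency(real_clip), 3))
print("fake clip inconsistency:", round(temporal_inconsistency(fake_clip), 3))
```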

These subtle details are hard for people to see, but show how deepfakes are not quite perfect yet. Detectors like these can work for any person, not just a few world leaders. In the end, it may be that both types of deepfake detectors will be needed.

Recent detection systems perform very well on videos specifically gathered for evaluating the tools. Unfortunately, even the best models do poorly on videos found online. Improving these tools to be more robust and useful is the key next step.


Who should use deepfake detectors?

Ideally, a deepfake verification tool should be available to everyone. However, this technology is in the early stages of development. Researchers need to improve the tools and protect them against hackers before releasing them broadly.

At the same time, though, the tools to make deepfakes are available to anybody who wants to fool the public. Sitting on the sidelines is not an option. For our team, the right balance was to work with journalists, because they are the first line of defense against the spread of misinformation.

Before publishing stories, journalists need to verify the information. They already have tried-and-true methods, like checking with sources and getting more than one person to verify key facts. So by putting the tool into their hands, we give them more information, and we know that they will not rely on the technology alone, given that it can make mistakes.

Can the detectors win the arms race?

It is encouraging to see teams from Facebook and Microsoft investing in technology to understand and detect deepfakes. This field needs more research to keep up with the speed of advances in deepfake technology.

Journalists and the social media platforms also need to figure out how best to warn people about deepfakes when they are detected. Research has shown that people remember the lie, but not the fact that it was a lie. Will the same be true for fake videos? Simply putting “Deepfake” in the title might not be enough to counter some kinds of disinformation.

Deepfakes are here to stay. Managing disinformation and protecting the public will be more challenging than ever as artificial intelligence gets more powerful. We are part of a growing research community that is taking on this threat, in which detection is just the first step.

John Sohrawardi, Doctoral Student in Computing and Informational Sciences, Rochester Institute of Technology and Matthew Wright, Professor of Computing Security, Rochester Institute of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Saturday, January 7, 2023

Beyond Section 230: A pair of social media experts describes how to bring transparency and accountability to the industry

Social media regulation – and the future of Section 230 – are top of mind for many in Congress. Pavlo Conchar/SOPA Images/LightRocket via Getty Images
Robert Kozinets, USC Annenberg School for Communication and Journalism and Jon Pfeiffer, Pepperdine University

One of Elon Musk’s stated reasons for purchasing Twitter was to use the social media platform to defend the right to free speech. The ability to defend that right, or to abuse it, lies in a specific piece of legislation passed in 1996, at the pre-dawn of the modern age of social media.

The legislation, Section 230 of the Communications Decency Act, gives social media platforms some truly astounding protections under American law. Section 230 has also been called the most important 26 words in tech: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

But the more that platforms like Twitter test the limits of their protection, the more American politicians on both sides of the aisle have been motivated to modify or repeal Section 230. As a social media professor and a social media lawyer with a long history in this field, we think change to Section 230 is coming – and we believe that it is long overdue.

Born of porn

Section 230 had its origins in the attempt to regulate online porn. One way to think of it is as a kind of “restaurant graffiti” law. If someone scrawls offensive graffiti in a restaurant’s bathroom stall, or uses it to expose someone else’s private information and secret life, the restaurant owner can’t be held responsible. There are no consequences for the owner. Roughly speaking, Section 230 extends the same lack of responsibility to the Yelps and YouTubes of the world.

Section 230 explained.

But in a world where social media platforms stand to monetize and profit from the graffiti on their digital walls – which contains not just porn but also misinformation and hate speech – the absolutist stance that they have total protection and total legal “immunity” is untenable.

A lot of good has come from Section 230. But the history of social media also makes it clear that it is far from perfect at balancing corporate profit with civic responsibility.

We were curious about how current thinking in legal circles and digital research could give a clearer picture about how Section 230 might realistically be modified or replaced, and what the consequences might be. We envision three possible scenarios to amend Section 230, which we call verification triggers, transparent liability caps and Twitter court.

Verification triggers

We support free speech, and we believe that everyone should have a right to share information. When people who oppose vaccines share their concerns about the rapid development of RNA-based COVID-19 vaccines, for example, they open up a space for meaningful conversation and dialogue. They have a right to share such concerns, and others have a right to counter them.

What we call a “verification trigger” should kick in when the platform begins to monetize content related to misinformation. Most platforms try to detect misinformation, and many label, moderate or remove some of it. But many monetize it as well through algorithms that promote popular – and often extreme or controversial – content. When a company monetizes content with misinformation, false claims, extremism or hate speech, it is not like the innocent owner of the bathroom wall. It is more like an artist who photographs the graffiti and then sells it at an art show.

Twitter began selling verification check marks for user accounts in November 2022. By verifying a user account is a real person or company and charging for it, Twitter is both vouching for it and monetizing that connection. Reaching a certain dollar value from questionable content should trigger the ability to sue Twitter, or any platform, in court. Once a platform begins earning money from users and content, including verification, it steps outside the bounds of Section 230 and into the bright light of responsibility – and into the world of tort, defamation and privacy rights laws.

Transparent caps

Social media platforms currently make their own rules about hate speech and misinformation. They also keep secret a lot of information about how much money they make from a given piece of content, such as a single tweet. As a result, both what is prohibited and what is profitable remain opaque.

One sensible change to Section 230 would be to expand its 26 words to clearly spell out what is expected of social media platforms. The added language would specify what constitutes misinformation, how social media platforms need to act, and the limits on how they can profit from it. We acknowledge that this definition isn’t easy, that it’s dynamic, and that researchers and companies are already struggling with it.

But government can raise the bar by setting some coherent standards. If a company can show that it’s met those standards, the amount of liability it has could be limited. It wouldn’t have complete protection as it does now. But it would have a lot more transparency and public responsibility. We call this a “transparent liability cap.”

Twitter court

Our final proposed amendment to Section 230 already exists in a rudimentary form. Like Facebook and other social platforms, Twitter has content moderation panels that determine standards for users on the platform, and thus standards for the public that shares and is exposed to content through the platform. You can think of this as “Twitter court.”

Effective content moderation involves the difficult balance of restricting harmful content while preserving free speech.

Though Twitter’s content moderation appears to be suffering from changes and staff reductions at the company, we believe that panels are a good idea. But keeping panels hidden behind the closed doors of profit-making companies is not. If companies like Twitter want to be more transparent, we believe that should also extend to their own inner operations and deliberations.

We envision extending the jurisdiction of “Twitter court” to neutral arbitrators who would adjudicate claims involving individuals, public officials, private companies and the platform. Under many conditions, cases of defamation or privacy violation could go to Twitter court instead of an actual court. Again, this is a way to pull back some of Section 230’s absolutist protections without removing them entirely.

How would it work – and would it work?

Since 2018, platforms have had limited Section 230 protection in cases of sex trafficking. A recent academic proposal suggests extending these limitations to incitement to violence, hate speech and disinformation. House Republicans have also suggested a number of Section 230 carve-outs, including those for content relating to terrorism, child exploitation or cyberbullying.

Our three ideas – verification triggers, transparent liability caps and Twitter court – would be a practical place to start reform. They could be implemented individually, but they would have even greater authority if they were implemented together. The clearer standards created by verification triggers and liability caps would help balance public benefit with corporate responsibility in a way that self-regulation has not been able to achieve. Twitter court would give people a real option to arbitrate rather than simply watch misinformation and hate speech bloom while platforms profit from it.

Adding a few meaningful options and amendments to Section 230 will be difficult because defining hate speech and misinformation in context, and setting limits and measures for the monetization of content, will not be easy. But we believe these definitions and measures are achievable and worthwhile. Once enacted, these strategies promise to make online discourse stronger and platforms fairer.

Robert Kozinets, Professor of Journalism, USC Annenberg School for Communication and Journalism and Jon Pfeiffer, Adjunct Professor of Law, Pepperdine University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Sunday, November 6, 2022

Mass migration from Twitter is likely to be an uphill battle – just ask ex-Tumblr users

The turmoil inside Twitter headquarters is sparking discussion of a mass exodus of users. What will happen if there is a rush to the exits? AP Photo/Jeff Chiu
Casey Fiesler, University of Colorado Boulder

Elon Musk announced that “the bird is freed” when his US$44 billion acquisition of Twitter officially closed on Oct. 27, 2022. Some users on the microblogging platform saw this as a reason to fly away.

Over the course of the next 48 hours, I saw countless announcements on my Twitter feed from people either leaving the platform or making preparations to leave. The hashtags #GoodbyeTwitter, #TwitterMigration and #Mastodon were trending. The decentralized, open source social network Mastodon gained over 100,000 users in just a few days, according to a user counting bot.

As an information scientist who studies online communities, this felt like the beginning of something I’ve seen before. Social media platforms tend not to last forever. Depending on your age and online habits, there’s probably some platform that you miss, even if it still exists in some form. Think of MySpace, LiveJournal, Google+ and Vine.

When social media platforms fall, sometimes the online communities that made their homes there fade away, and sometimes they pack their bags and relocate to a new home. The turmoil at Twitter is causing many of the company’s users to consider leaving the platform. Research on previous social media platform migrations shows what might lie ahead for Twitter users who fly the coop.

Elon Musk’s acquisition of Twitter has caused turmoil within the company and prompted many users to consider leaving the social media platform.

Several years ago, I led a research project with Brianna Dym, now at the University of Maine, in which we mapped the platform migrations of nearly 2,000 people over a period of almost two decades. The community we examined was transformative fandom, fans of literary and popular culture series and franchises who create art using those characters and settings.

We chose it because it is a large community that has thrived in a number of different online spaces. Some of the same people writing Buffy the Vampire Slayer fan fiction on Usenet in the 1990s were writing Harry Potter fan fiction on LiveJournal in the 2000s and Star Wars fan fiction on Tumblr in the 2010s.

By asking participants about their experiences moving across these platforms – why they left, why they joined and the challenges they faced in doing so – we gained insights into factors that might drive the success and failure of platforms, as well as what negative consequences are likely to occur for a community when it relocates.

‘You go first’

Regardless of how many people ultimately decide to leave Twitter, and even how many people do so around the same time, creating a community on another platform is an uphill battle. These migrations are in large part driven by network effects, meaning that the value of a new platform depends on who else is there.

In the critical early stages of migration, people have to coordinate with each other to encourage contribution on the new platform, which is really hard to do. It essentially becomes, as one of our participants described it, a “game of chicken” where no one wants to leave until their friends leave, and no one wants to be first for fear of being left alone in a new place.

For this reason, the “death” of a platform – whether from a controversy, disliked change or competition – tends to be a slow, gradual process. One participant described Usenet’s decline as “like watching a shopping mall slowly go out of business.”

It’ll never be the same

The current push from some corners to leave Twitter reminded me a bit of Tumblr’s adult content ban in 2018, which in turn reminded me of LiveJournal’s policy changes and new ownership in 2007. People who left LiveJournal in favor of other platforms like Tumblr described feeling unwelcome there. And though Musk did not walk into Twitter headquarters at the end of October and flip a virtual content-moderation switch to “off,” hate speech ticked up on the platform as some users felt emboldened to violate its content policies on the assumption that major policy changes were on the way.

So what might actually happen if a lot of Twitter users do decide to leave? What makes Twitter Twitter isn’t the technology, it’s the particular configuration of interactions that takes place there. And there is essentially zero chance that Twitter, as it exists now, could be reconstituted on another platform. Any migration is likely to face many of the challenges previous platform migrations have faced: content loss, fragmented communities, broken social networks and shifted community norms.

But Twitter isn’t one community, it’s a collection of many communities, each with its own norms and motivations. Some communities might be able to migrate more successfully than others. So maybe K-Pop Twitter could coordinate a move to Tumblr. I’ve seen much of Academic Twitter coordinating a move to Mastodon. Other communities might already simultaneously exist on Discord servers and subreddits, and can just let participation on Twitter fade away as fewer people pay attention to it. But as our study implies, migrations always have a cost, and even for smaller communities, some people will get lost along the way.

The ties that bind

Our research also pointed to design recommendations for supporting migration, and to ways one platform might take advantage of attrition from another. Cross-posting features can be important because many people hedge their bets. They might be unwilling to completely cut ties all at once, but they might dip their toes into a new platform by sharing the same content on both.

Ways to import networks from another platform also help to maintain communities. For example, there are multiple ways to find people you follow on Twitter on Mastodon. Even simple welcome messages, guides for newcomers and easy ways to find other migrants could make a difference in helping resettlement attempts stick.
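As one concrete example, many community-built migration helpers simply look for fediverse addresses that people post in their profiles. The sketch below shows that pattern-matching approach; the account data is invented, and a real tool would read it from a Twitter export or the API and handle many more formats than this single pattern.

```python
# A hedged sketch of how such a tool can work: scan the bios of accounts you
# follow on Twitter for fediverse-style handles (@user@instance) and collect
# them as a list to follow on Mastodon. The account data below is invented.
import re

FEDIVERSE_HANDLE = re.compile(r"@([A-Za-z0-9_]+)@([A-Za-z0-9.-]+\.[A-Za-z]{2,})")

followed_accounts = [  # hypothetical export of accounts you follow
    {"username": "alice_codes", "bio": "HCI researcher. Also at @alice@hci.social"},
    {"username": "bob_birds", "bio": "Photos and birds. No other accounts."},
    {"username": "carol", "bio": "newsletter: carol@example.org | @carol@mastodon.social"},
]

def find_mastodon_handles(accounts):
    """Return user@instance addresses mentioned in followed accounts' bios."""
    handles = []
    for account in accounts:
        for user, instance in FEDIVERSE_HANDLE.findall(account["bio"]):
            handles.append(f"{user}@{instance}")
    return handles

print(find_mastodon_handles(followed_accounts))
# ['alice@hci.social', 'carol@mastodon.social']
```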

And through all of this, it’s important to remember that this is such a hard problem by design. Platforms have no incentive to help users leave. As long-time technology journalist Cory Doctorow recently wrote, this is “a hostage situation.” Social media lures people in with their friends, and then the threat of losing those social networks keeps people on the platforms.

But even if there is a price to pay for leaving a platform, communities can be incredibly resilient. Like the LiveJournal users in our study who found each other again on Tumblr, your fate is not tied to Twitter’s.

Casey Fiesler, Associate Professor of Information Science, University of Colorado Boulder

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Tuesday, June 14, 2022

EU law would require Big Tech to do more to combat child sexual abuse, but a key question remains: How?

European Commissioner for Home Affairs Ylva Johansson announced a set of proposed regulations requiring tech companies to report child sexual abuse material. AP Photo/Francisco Seco
Laura Draper, American University

The European Commission recently proposed regulations to protect children by requiring tech companies to scan the content in their systems for child sexual abuse material. This is an extraordinarily wide-reaching and ambitious effort that would have broad implications beyond the European Union’s borders, including in the U.S.

Unfortunately, the proposed regulations are, for the most part, technologically unfeasible. To the extent that they could work, they require breaking end-to-end encryption, which would make it possible for the technology companies – and potentially the government and hackers – to see private communications.

The regulations, proposed on May 11, 2022, would impose several obligations on tech companies that host content and provide communication services, including social media platforms, texting services and direct messaging apps, to detect certain categories of images and text.

Under the proposal, these companies would be required to detect previously identified child sexual abuse material, new child sexual abuse material, and solicitations of children for sexual purposes. Companies would be required to report detected content to the EU Centre, a centralized coordinating entity that the proposed regulations would establish.

Each of these categories presents its own challenges, which combine to make the proposed regulations impossible to implement as a package. The trade-off between protecting children and protecting user privacy underscores how combating online child sexual abuse is a “wicked problem.” This puts technology companies in a difficult position: required to comply with regulations that serve a laudable goal but without the means to do so.

Digital fingerprints

Researchers have known how to detect previously identified child sexual abuse material for over a decade. This method, first developed by Microsoft, assigns a “hash value” – a sort of digital fingerprint – to an image, which can then be compared against a database of previously identified and hashed child sexual abuse material. In the U.S., the National Center for Missing and Exploited Children manages several databases of hash values, and some tech companies maintain their own hash sets.

The hash values for images uploaded or shared using a company’s services are compared with these databases to detect previously identified child sexual abuse material. This method has proved extremely accurate, reliable and fast, which is critical to making any technical solution scalable.
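To make the mechanics concrete, the sketch below uses a basic “average hash” rather than Microsoft’s PhotoDNA, whose algorithm is not public, and checks it against a hypothetical database with a Hamming-distance comparison. The images, threshold and “database” are all stand-ins.

```python
# A simplified illustration of hash matching in general, not Microsoft's
# PhotoDNA (whose algorithm is not public). It computes a basic "average
# hash" and compares it against a hypothetical database of known hashes
# using Hamming distance. A small random array stands in for a decoded
# grayscale image.
import numpy as np

def average_hash(gray_image: np.ndarray, hash_size: int = 8) -> int:
    """Shrink the image into blocks and record which blocks beat the mean brightness."""
    h, w = gray_image.shape
    bh, bw = h // hash_size, w // hash_size
    cropped = gray_image[:bh * hash_size, :bw * hash_size]
    blocks = cropped.reshape(hash_size, bh, hash_size, bw).mean(axis=(1, 3))
    bits = (blocks > blocks.mean()).flatten()
    return int("".join("1" if b else "0" for b in bits), 2)

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

rng = np.random.default_rng(2)
original = rng.integers(0, 256, size=(64, 64)).astype(float)
recompressed = original + rng.normal(0, 2, size=(64, 64))  # mild re-encoding noise
unrelated = rng.integers(0, 256, size=(64, 64)).astype(float)

known_hashes = {average_hash(original)}  # stands in for a hash database

for name, image in [("re-encoded copy", recompressed), ("unrelated image", unrelated)]:
    distance = min(hamming(average_hash(image), known) for known in known_hashes)
    print(f"{name}: Hamming distance {distance} ->", "match" if distance <= 8 else "no match")
```

A robust perceptual hash is designed so that re-encoding, resizing or minor edits barely change the hash, which is what makes matching against a database of known material fast and reliable.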

The problem is that many privacy advocates consider it incompatible with end-to-end encryption, which, strictly construed, means that only the sender and the intended recipient can view the content. Because the proposed EU regulations mandate that tech companies report any detected child sexual abuse material to the EU Centre, this would violate end-to-end encryption, thus forcing a trade-off between effective detection of the harmful material and user privacy.

Here’s how end-to-end encryption works, and which popular messaging apps use it.

Recognizing new harmful material

In the case of new content – that is, images and videos not included in hash databases – there is no such tried-and-true technical solution. Top engineers have been working on this issue, building and training AI tools that can accommodate large volumes of data. Google and child safety nongovernmental organization Thorn have both had some success using machine-learning classifiers to help companies identify potential new child sexual abuse material.

However, without independently verified data on the tools’ accuracy, it’s not possible to assess their utility. Even if the accuracy and speed are comparable with hash-matching technology, the mandatory reporting will again break end-to-end encryption.

New content also includes livestreams, but the proposed regulations seem to overlook the unique challenges this technology poses. Livestreaming technology became ubiquitous during the pandemic, and the production of child sexual abuse material from livestreamed content has dramatically increased.

More and more children are being enticed or coerced into livestreaming sexually explicit acts, which the viewer may record or screen-capture. Child safety organizations have noted that the production of “perceived first-person child sexual abuse material” – that is, child sexual abuse material of apparent selfies – has risen at exponential rates over the past few years. In addition, traffickers may livestream the sexual abuse of children for offenders who pay to watch.

The circumstances that lead to recorded and livestreamed child sexual abuse material are very different, but the technology is the same. And there is currently no technical solution that can detect the production of child sexual abuse material as it occurs. Tech safety company SafeToNet is developing a real-time detection tool, but it is not ready to launch.

Detecting solicitations

Detection of the third category, “solicitation language,” is also fraught. The tech industry has made dedicated efforts to pinpoint indicators necessary to identify solicitation and enticement language, but with mixed results. Microsoft spearheaded Project Artemis, which led to the development of the Anti-Grooming Tool. The tool is designed to detect enticement and solicitation of a child for sexual purposes.

As the proposed regulations point out, however, the accuracy of this tool is 88%. In 2020, popular messaging app WhatsApp delivered approximately 100 billion messages daily. If the tool identifies even 0.01% of the messages as “positive” for solicitation language, human reviewers would be tasked with reading 10 million messages every day to identify the 12% that are false positives, making the tool simply impractical.
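The arithmetic behind that estimate is straightforward; the snippet below reproduces it using the figures above and the same simplification that the remaining 12% of flags are false positives.

```python
# Back-of-the-envelope arithmetic for the review burden described above,
# using the figures in the text and its simplification that the remaining
# 12% of flags are false positives.
messages_per_day = 100_000_000_000   # ~100 billion WhatsApp messages per day (2020)
flag_rate = 0.0001                   # 0.01% of messages flagged as solicitation
accuracy = 0.88                      # reported accuracy of the Anti-Grooming Tool

flagged = messages_per_day * flag_rate      # messages humans would have to review
false_positives = flagged * (1 - accuracy)  # flags that turn out to be wrong

print(f"flagged per day:         {flagged:,.0f}")          # 10,000,000
print(f"false positives per day: {false_positives:,.0f}")  # 1,200,000
```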

As with all the above-mentioned detection methods, this, too, would break end-to-end encryption. But whereas the others may be limited to reviewing a hash value of an image, this tool requires access to all exchanged text.

No path

It’s possible that the European Commission is taking such an ambitious approach in hopes of spurring technical innovation that would lead to more accurate and reliable detection methods. However, without existing tools that can accomplish these mandates, the regulations are ineffective.

When there is a mandate to take action but no path to take, I believe the disconnect will simply leave the industry without the clear guidance and direction these regulations are intended to provide.

Laura Draper, Senior Project Director at the Tech, Law & Security Program, American University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Friday, April 15, 2022

Elon Musk’s bid spotlights Twitter’s unique role in public discourse – and what changes might be in store

Twitter may not be a darling of Wall Street, but it occupies a unique place in the social media landscape. AP Photo/Richard Drew
Anjana Susarla, Michigan State University

Twitter has been in the news a lot lately, albeit for the wrong reasons. Its stock growth has languished and the platform itself has largely remained the same since its founding in 2006. On April 14, 2022, Elon Musk, the world’s richest person, made an offer to buy Twitter and take the public company private.

In a filing with the Securities and Exchange Commission, Musk stated, “I invested in Twitter as I believe in its potential to be the platform for free speech around the globe, and I believe free speech is a societal imperative for a functioning democracy.”

As a researcher of social media platforms, I find that Musk’s potential ownership of Twitter and his stated reasons for buying the company raise important issues. Those issues stem from the nature of the social media platform and what sets it apart from others.

What makes Twitter unique

Twitter occupies a unique niche. Its short chunks of text and threading foster real-time conversations among thousands of people, which makes it popular with celebrities, media personalities and politicians alike.

Social media analysts talk about the half-life of content on a platform, meaning the time it takes for a piece of content to reach 50% of its total lifetime engagement, usually measured in number of views or popularity-based metrics. The average half-life of a tweet is about 20 minutes, compared to five hours for Facebook posts, 20 hours for Instagram posts, 24 hours for LinkedIn posts and 20 days for YouTube videos. The much shorter half-life illustrates the central role Twitter has come to occupy in driving real-time conversations as events unfold.
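As a rough sketch of how such a half-life can be computed, the snippet below finds the time by which a post has collected half of its total engagement; the event times are synthetic and chosen only so the examples land near the reported 20-minute and five-hour figures.

```python
# A hedged sketch of how a content half-life could be estimated from
# timestamped engagement events (views, likes, shares). The event times are
# synthetic, generated only so the two examples land near the reported
# 20-minute and five-hour figures.
import numpy as np

def half_life_minutes(event_times_min) -> float:
    """Time by which a post has collected half of its total engagement."""
    times = np.sort(np.asarray(event_times_min, dtype=float))
    half_index = int(np.ceil(len(times) * 0.5)) - 1
    return float(times[half_index])

rng = np.random.default_rng(3)
tweet_events = rng.exponential(scale=30.0, size=10_000)      # minutes after posting
facebook_events = rng.exponential(scale=430.0, size=10_000)  # much slower decay

print("tweet half-life (minutes):   ", round(half_life_minutes(tweet_events), 1))
print("facebook half-life (minutes):", round(half_life_minutes(facebook_events), 1))
```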

Twitter’s ability to shape real-time discourse, as well as the ease with which data, including geo-tagged data, can be gathered from Twitter has made it a gold mine for researchers to analyze a variety of societal phenomena, ranging from public health to politics. Twitter data has been used to predict asthma-related emergency department visits, measure public epidemic awareness, and model wildfire smoke dispersion.

Tweets that are part of a conversation are shown in chronological order, and, even though much of a tweet’s engagement is front-loaded, the Twitter archive provides instant and complete access to every public tweet. This positions Twitter as a historical chronicler of record and a de facto fact-checker.

Changes on Musk’s mind

A crucial issue is how Musk’s ownership of Twitter, and private control of social media platforms generally, affect the broader public well-being. In a series of deleted tweets, Musk made several suggestions about how to change Twitter, including adding an edit button for tweets and granting automatic verification marks to premium users.

There is no experimental evidence about how an edit button would change information transmission on Twitter. However, it’s possible to extrapolate from previous research that analyzed deleted tweets.

There are numerous ways to retrieve deleted tweets, which allows researchers to study them. Some studies show significant personality differences between users who delete their tweets and those who don’t, and these findings suggest that deleting tweets is a way for people to manage their online identities.

Analyzing deleting behavior can also yield valuable clues about online credibility and disinformation. Similarly, if Twitter adds an edit button, analyzing the patterns of editing behavior could provide insights into Twitter users’ motivations and how they present themselves.

Studies of bot-generated activity on Twitter have concluded that nearly half of accounts tweeting about COVID-19 are likely bots. Given partisanship and political polarization in online spaces, allowing users – whether they are automated bots or actual people – the option to edit their tweets could become another weapon in the disinformation arsenal used by bots and propagandists. Editing tweets could allow users to selectively distort what they said, or deny making inflammatory remarks, which could complicate efforts to trace misinformation.

Twitter’s content moderation and revenue model

To understand Musk’s motivations and what lies next for social media platforms such as Twitter, it’s important to consider the gargantuan – and opaque – online advertising ecosystem involving multiple technologies wielded by ad networks, social media companies and publishers. Advertising is the primary revenue source for Twitter.

Musk’s vision is to generate revenue for Twitter from subscriptions rather than advertising. Without having to worry about attracting and retaining advertisers, Twitter would have less pressure to focus on content moderation. This would make Twitter a sort of freewheeling opinion site for paying subscribers. Twitter has been aggressive in using content moderation in its attempts to address disinformation.

Musk’s description of a platform free from content moderation issues is troubling in light of the algorithmic harms caused by social media platforms. Research has shown a host of these harms, such as algorithms that assign gender to users, potential inaccuracies and biases in algorithms used to glean information from these platforms, and the impact on those looking for health information online.

Testimony by Facebook whistleblower Frances Haugen and recent regulatory efforts such as the online safety bill unveiled in the U.K. show there is broad public concern about the role played by technology platforms in shaping popular discourse and public opinion. Musk’s potential bid for Twitter highlights a whole host of regulatory concerns.

Because of Musk’s other businesses, Twitter’s ability to influence public opinion in the sensitive industries of aviation and automobiles would automatically create a conflict of interest, not to mention affecting the disclosure of material information necessary for shareholders. Musk has already been accused of delaying disclosure of his ownership stake in Twitter.

Twitter’s own algorithmic bias bounty challenge concluded that there needs to be a community-led approach to build better algorithms. A very creative exercise developed by the MIT Media Lab asks middle schoolers to re-imagine the YouTube platform with ethics in mind. Perhaps it’s time to ask Twitter to do the same, whoever owns and manages the company.


Anjana Susarla, Professor of Information Systems, Michigan State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.