Friday, July 25, 2025

‘Democratizing space’ is more than just adding new players – it comes with questions around sustainability and sovereignty

A group of people gaze up at the Moon in Germany. AP Photo/Markus Schreiber
Timiebi Aganaba, Arizona State University; Adam Fish, UNSW Sydney; Niiyokamigaabaw Deondre Smiles, University of British Columbia, and Tony Milligan, King's College London

“India is on the Moon,” S. Somanath, chairman of the Indian Space Research Organization, announced in August 2023. The announcement meant India had joined the short list of countries to have visited the Moon, and the applause and shouts of joy that followed signified that this achievement wasn’t just a scientific one, but a cultural one.

India’s successful lunar landing prompted celebrations across the country, like this one in Mumbai. AP Photo/Rajanish Kakade

Over the past decade, many countries have established new space programs, including multiple African nations. India and Israel – nations that were not technical contributors to the space race in the 1960s and ‘70s – have attempted landings on the lunar surface.

With more countries joining the evolving space economy, many of our colleagues in space strategy, policy, ethics and law have celebrated the democratization of space: the hope that space is now more accessible for diverse participants.

We are a team of researchers based across four countries with expertise in space policy and law, ethics, geography and anthropology who have written about the difficulties and importance of inclusion in space.

Major players like the U.S., the European Union and China may once have dominated space and seen it as a place to try out new commercial and military ventures. Emerging new players in space, like other countries, commercial interests and nongovernmental organizations, may have other goals and rationales. Unexpected new initiatives from these newcomers could shift perceptions of space from something to dominate and possess to something more inclusive, equitable and democratic.

We address these emerging and historical tensions in a paper published in May 2025 in the journal Nature, in which we describe the difficulties and importance of including nontraditional actors and Indigenous peoples in the space industry.

Continuing inequalities among space players

Not all countries’ space agencies are equal. Newer agencies often don’t have the same resources behind them that large, established players do.

The U.S. and Chinese programs receive much more funding than those of any other country. Because they most frequently send up satellites and propose new ideas, they are in a position to establish conventions for satellite systems, landing sites and resource extraction that everyone else may have to follow.

Sometimes, countries may have operated on the assumption that owning a satellite would give them the appearance of soft or hard geopolitical power as a space nation – and ultimately help them gain relevance.

Small satellites, called CubeSats, are becoming relatively affordable and easy to develop, allowing more players, from countries and companies to universities and student groups, to have a satellite in space. NASA/Butch Wilmore, CC BY-NC

In reality, student groups of today can develop small satellites, called CubeSats, autonomously, and recent scholarship has concluded that even successful space missions may negatively affect the international relationships between some countries and their partners. The respect a country expects to receive may not materialize, and the costs to keep up can outstrip gains in potential prestige.

Environmental protection and Indigenous perspectives

Usually, building the infrastructure necessary to test and launch rockets requires a remote area with established roads. In many cases, companies and space agencies have placed these facilities on lands where Indigenous peoples have strong claims, which can lead to land disputes, like in western Australia.

Many of these sites have already been altered by human activity, through past mining and resource extraction, and many have been ground zero for tensions with Indigenous peoples over land use. Disputes within these contested spaces are rife.

Because of these tensions around land use, it is important to include Indigenous claims and perspectives. Doing so can help make sure that the goals of protecting the environments of outer space and Earth are not cast aside while building space infrastructure here on Earth.

Some efforts are driving this more inclusive approach to engagement in space, including initiatives like “Dark and Quiet Skies”, a movement that works to ensure that people can stargaze and engage with the stars without light or noise pollution. This movement and other inclusive approaches operate on the principle of reciprocity: that more players getting involved with space can benefit all.

Researchers have recognized similar dynamics within the larger space industry. Some scholars have come to the conclusion that even though the space industry is “pay to play,” commitments to reciprocity can help ensure that players in space exploration who may not have the financial or infrastructural means to support individual efforts can still access broader structures of support.

The downside of more players entering space is that this expansion can make protecting the environment – both on Earth and beyond – even harder.

The more players there are, at both private and international levels, the more difficult sustainable space exploration could become. Even with good will and the best of intentions, it would be difficult to enforce uniform standards for the exploration and use of space resources that would protect the lunar surface, Mars and beyond.

It may also grow harder to police the launch of satellites and dedicated constellations. Limiting the number of satellites could prevent space junk, protect the satellites already in orbit and allow everyone to have a clear view of the night sky. However, this would have to compete with efforts to expand internet access to all.

The amount of space junk in orbit has increased dramatically since the 1960s.

What is space exploration for?

Before tackling these issues, we find it useful to think about the larger goal of space exploration, and what the different approaches are. One approach would be the fast and inclusive democratization of space – making it easier for more players to join in. Another would be a more conservative and slower “big player” approach, which would restrict who can go to space.

The conservative approach is liable to leave developing nations and Indigenous peoples firmly on the outside of a key process shaping humanity’s shared future.

But a faster and more inclusive approach to space would not be easy to manage. With more serious players, it would be harder to come to an agreement about regulations, as well as about the larger goals for human expansion into space.

Narratives around emerging technologies, such as those required for space exploration, can change over time, as people begin to see them in action.

Technology that we take for granted today was once viewed as futuristic or fantastical, and sometimes with suspicion. For example, at the end of the 1940s, George Orwell imagined a world in which totalitarian systems used tele-screens and videoconferencing to control the masses.

Earlier in the same decade, Thomas J. Watson, then president of IBM, notoriously predicted that there would be a global market for about five computers. We as humans often fear or mistrust future technologies.

However, not all technological shifts are detrimental, and some technological changes can have clear benefits. In the future, robots may perform tasks too dangerous, too difficult or too dull and repetitive for humans. Biotechnology may make life healthier. Artificial intelligence can sift through vast amounts of data and turn it into reliable guesswork. Researchers can also see genuine downsides to each of these technologies.

Space exploration is harder to squeeze into one streamlined narrative about the anticipated benefits. The process is just too big and too transformative.

To return to the question of whether we should go to space, our team argues that it is not a question of whether we should go, but rather of why we do it, who benefits from space exploration and how we can democratize access to broader segments of society. Including a diversity of opinions and viewpoints can help find productive ways forward.

Ultimately, it is not necessary for everyone to land on one single narrative about the value of space exploration. Even our team of four researchers doesn’t share a single set of beliefs about its value. But bringing more nations, tribes and companies into discussions around its potential value can help create collaborative and worthwhile goals at an international scale.The Conversation

Timiebi Aganaba, Assistant Professor of Space and Society, Arizona State University; Adam Fish, Associate Professor, School of Arts and Media, UNSW Sydney; Niiyokamigaabaw Deondre Smiles, Adjunct Professor, University of British Columbia, and Tony Milligan, Research Fellow in the Philosophy of Ethics, King's College London

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Sunday, July 14, 2024

In a battle of AI versus AI, researchers are preparing for the coming wave of deepfake propaganda

AI-powered detectors are the best tools for spotting AI-generated fake videos. The Washington Post via Getty Images
John Sohrawardi, Rochester Institute of Technology and Matthew Wright, Rochester Institute of Technology

An investigative journalist receives a video from an anonymous whistleblower. It shows a candidate for president admitting to illegal activity. But is this video real? If so, it would be huge news – the scoop of a lifetime – and could completely turn around the upcoming elections. But the journalist runs the video through a specialized tool, which tells her that the video isn’t what it seems. In fact, it’s a “deepfake,” a video made using artificial intelligence with deep learning.

Journalists all over the world could soon be using a tool like this. In a few years, a tool like this could even be used by everyone to root out fake content in their social media feeds.

As researchers who have been studying deepfake detection and developing a tool for journalists, we see a future for these tools. They won’t solve all our problems, though, and they will be just one part of the arsenal in the broader fight against disinformation.

The problem with deepfakes

Most people know that you can’t believe everything you see. Over the last couple of decades, savvy news consumers have gotten used to seeing images manipulated with photo-editing software. Videos, though, are another story. Hollywood directors can spend millions of dollars on special effects to make up a realistic scene. But using deepfakes, amateurs with a few thousand dollars of computer equipment and a few weeks to spend could make something almost as true to life.

Deepfakes make it possible to put people into movie scenes they were never in – think Tom Cruise playing Iron Man – which makes for entertaining videos. Unfortunately, it also makes it possible to create pornography without the consent of the people depicted. So far, those people, nearly all women, are the biggest victims when deepfake technology is misused.

Deepfakes can also be used to create videos of political leaders saying things they never said. The Belgian Socialist Party released a low-quality video of President Trump insulting Belgium – not a deepfake, but still phony – which got enough of a reaction to show the potential risks of higher-quality deepfakes.

University of California, Berkeley’s Hany Farid explains how deepfakes are made.

Perhaps scariest of all, they can be used to create doubt about the content of real videos, by suggesting that they could be deepfakes.

Given these risks, it would be extremely valuable to be able to detect deepfakes and label them clearly. This would ensure that fake videos do not fool the public, and that real videos can be received as authentic.

Spotting fakes

Deepfake detection as a field of research began a little over three years ago. Early work focused on detecting visible problems in the videos, such as deepfakes that didn’t blink. With time, however, the fakes have gotten better at mimicking real videos and have become harder for both people and detection tools to spot.

There are two major categories of deepfake detection research. The first involves looking at the behavior of people in the videos. Suppose you have a lot of video of someone famous, such as President Obama. Artificial intelligence can use this video to learn his patterns, from his hand gestures to his pauses in speech. It can then watch a deepfake of him and notice where it does not match those patterns. This approach has the advantage of possibly working even if the video quality itself is essentially perfect.
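
As a rough illustration of this behavioral approach – a sketch of the general idea, not the actual system any research group uses – one could summarize each known-authentic clip of a person as a feature vector (gesture timing, head pose, speech pauses), fit an anomaly detector to those vectors, and then score a new clip against the learned pattern. In the Python sketch below, the hard part, extracting behavioral features from video, is assumed and stood in for by synthetic vectors:

# Illustrative sketch only: behavior-based deepfake detection framed as anomaly detection.
# The feature-extraction step (gesture timing, head pose, speech pauses) is assumed here
# and replaced by synthetic vectors.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Pretend each authentic clip of the target person is summarized as a 16-dimensional vector.
authentic_clips = rng.normal(loc=0.0, scale=1.0, size=(200, 16))

# Learn what "normal" behavior looks like for this person.
detector = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05)
detector.fit(authentic_clips)

# A suspect clip whose behavioral statistics drift away from the learned pattern.
suspect_clip = rng.normal(loc=1.5, scale=1.0, size=(1, 16))

# decision_function > 0 means "consistent with this person"; <= 0 flags an anomaly.
score = detector.decision_function(suspect_clip)[0]
print("anomaly score:", round(score, 3), "-> flagged" if score <= 0 else "-> consistent")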

SRI International’s Aaron Lawson describes one approach to detecting deepfakes.

Other researchers, including our team, have been focused on differences that all deepfakes have compared to real videos. Deepfake videos are often created by merging individually generated frames to form videos. Taking that into account, our team’s methods extract the essential data from the faces in individual frames of a video and then track them through sets of concurrent frames. This allows us to detect inconsistencies in the flow of the information from one frame to another. We use a similar approach for our fake audio detection system as well.
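
A minimal sketch of that general idea follows – not our actual pipeline, just the shape of it: represent the face in each frame as a feature vector, measure how much those features change between consecutive frames, and flag clips where the change is implausibly abrupt. The per-frame features are assumed to come from an upstream face-analysis step and are faked here with arrays:

# Illustrative sketch: flag clips whose per-frame face features change too abruptly.
# A real system would extract features with a face detector and embedding model;
# synthetic arrays stand in for that step here.
import numpy as np

def temporal_inconsistency(frame_features: np.ndarray) -> float:
    """Mean cosine distance between consecutive frames' face features."""
    a, b = frame_features[:-1], frame_features[1:]
    cos = np.sum(a * b, axis=1) / (np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1))
    return float(np.mean(1.0 - cos))

rng = np.random.default_rng(1)
base = rng.normal(size=128)

# A "real" clip: features drift slowly from frame to frame.
real_clip = np.stack([base + 0.01 * rng.normal(size=128) for _ in range(60)])

# A "fake" clip: frames generated more independently, so features jump around.
fake_clip = np.stack([base + 0.3 * rng.normal(size=128) for _ in range(60)])

threshold = 0.01  # in practice this would be tuned on labeled data
for name, clip in [("real", real_clip), ("fake", fake_clip)]:
    score = temporal_inconsistency(clip)
    print(f"{name}: inconsistency={score:.4f}", "-> suspicious" if score > threshold else "-> ok")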

These subtle details are hard for people to see, but show how deepfakes are not quite perfect yet. Detectors like these can work for any person, not just a few world leaders. In the end, it may be that both types of deepfake detectors will be needed.

Recent detection systems perform very well on videos specifically gathered for evaluating the tools. Unfortunately, even the best models do poorly on videos found online. Improving these tools to be more robust and useful is the key next step.

Who should use deepfake detectors?

Ideally, a deepfake verification tool should be available to everyone. However, this technology is in the early stages of development. Researchers need to improve the tools and protect them against hackers before releasing them broadly.

At the same time, though, the tools to make deepfakes are available to anybody who wants to fool the public. Sitting on the sidelines is not an option. For our team, the right balance was to work with journalists, because they are the first line of defense against the spread of misinformation.

Before publishing stories, journalists need to verify the information. They already have tried-and-true methods, like checking with sources and getting more than one person to verify key facts. So by putting the tool into their hands, we give them more information, and we know that they will not rely on the technology alone, given that it can make mistakes.

Can the detectors win the arms race?

It is encouraging to see teams from Facebook and Microsoft investing in technology to understand and detect deepfakes. This field needs more research to keep up with the speed of advances in deepfake technology.

Journalists and the social media platforms also need to figure out how best to warn people about deepfakes when they are detected. Research has shown that people remember the lie, but not the fact that it was a lie. Will the same be true for fake videos? Simply putting “Deepfake” in the title might not be enough to counter some kinds of disinformation.

Deepfakes are here to stay. Managing disinformation and protecting the public will be more challenging than ever as artificial intelligence gets more powerful. We are part of a growing research community that is taking on this threat, in which detection is just the first step.The Conversation

John Sohrawardi, Doctoral Student in Computing and Informational Sciences, Rochester Institute of Technology and Matthew Wright, Professor of Computing Security, Rochester Institute of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Saturday, January 7, 2023

Beyond Section 230: A pair of social media experts describes how to bring transparency and accountability to the industry

Social media regulation – and the future of Section 230 – are top of mind for many in Congress. Pavlo Conchar/SOPA Images/LightRocket via Getty Images
Robert Kozinets, USC Annenberg School for Communication and Journalism and Jon Pfeiffer, Pepperdine University

One of Elon Musk’s stated reasons for purchasing Twitter was to use the social media platform to defend the right to free speech. The ability to defend that right, or to abuse it, lies in a specific piece of legislation passed in 1996, at the pre-dawn of the modern age of social media.

The legislation, Section 230 of the Communications Decency Act, gives social media platforms some truly astounding protections under American law. Section 230 has also been called the most important 26 words in tech: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

But the more that platforms like Twitter test the limits of their protection, the more American politicians on both sides of the aisle have been motivated to modify or repeal Section 230. As a social media professor and a social media lawyer with a long history in this field, we think change in Section 230 is coming – and we believe that it is long overdue.

Born of porn

Section 230 had its origins in the attempt to regulate online porn. One way to think of it is as a kind of “restaurant graffiti” law. If someone draws offensive graffiti, or exposes someone else’s private information and secret life, in the bathroom stall of a restaurant, the restaurant owner can’t be held responsible for it. There are no consequences for the owner. Roughly speaking, Section 230 extends the same lack of responsibility to the Yelps and YouTubes of the world.

Section 230 explained.

But in a world where social media platforms stand to monetize and profit from the graffiti on their digital walls – which contains not just porn but also misinformation and hate speech – the absolutist stance that they have total protection and total legal “immunity” is untenable.

A lot of good has come from Section 230. But the history of social media also makes it clear that it is far from perfect at balancing corporate profit with civic responsibility.

We were curious about how current thinking in legal circles and digital research could give a clearer picture about how Section 230 might realistically be modified or replaced, and what the consequences might be. We envision three possible scenarios to amend Section 230, which we call verification triggers, transparent liability caps and Twitter court.

Verification triggers

We support free speech, and we believe that everyone should have a right to share information. When people who oppose vaccines share their concerns about the rapid development of RNA-based COVID-19 vaccines, for example, they open up a space for meaningful conversation and dialogue. They have a right to share such concerns, and others have a right to counter them.

What we call a “verification trigger” should kick in when the platform begins to monetize content related to misinformation. Most platforms try to detect misinformation, and many label, moderate or remove some of it. But many monetize it as well through algorithms that promote popular – and often extreme or controversial – content. When a company monetizes content with misinformation, false claims, extremism or hate speech, it is not like the innocent owner of the bathroom wall. It is more like an artist who photographs the graffiti and then sells it at an art show.

Twitter began selling verification check marks for user accounts in November 2022. By verifying a user account is a real person or company and charging for it, Twitter is both vouching for it and monetizing that connection. Reaching a certain dollar value from questionable content should trigger the ability to sue Twitter, or any platform, in court. Once a platform begins earning money from users and content, including verification, it steps outside the bounds of Section 230 and into the bright light of responsibility – and into the world of tort, defamation and privacy rights laws.

Transparent caps

Social media platforms currently make their own rules about hate speech and misinformation. They also keep secret a lot of information about how much money the platform makes off of content, like a given tweet. This makes what isn’t allowed and what is valued opaque.

One sensible change to Section 230 would be to expand its 26 words to clearly spell out what is expected of social media platforms. The added language would specify what constitutes misinformation, how social media platforms need to act, and the limits on how they can profit from it. We acknowledge that this definition isn’t easy, that it’s dynamic, and that researchers and companies are already struggling with it.

But government can raise the bar by setting some coherent standards. If a company can show that it’s met those standards, the amount of liability it has could be limited. It wouldn’t have complete protection as it does now. But it would have a lot more transparency and public responsibility. We call this a “transparent liability cap.”

Twitter court

Our final proposed amendment to Section 230 already exists in a rudimentary form. Like Facebook and other social platforms, Twitter has content moderation panels that determine standards for users on the platform, and thus standards for the public that shares and is exposed to content through the platform. You can think of this as “Twitter court.”

Effective content moderation involves the difficult balance of restricting harmful content while preserving free speech.

Though Twitter’s content moderation appears to be suffering from changes and staff reductions at the company, we believe that panels are a good idea. But keeping panels hidden behind the closed doors of profit-making companies is not. If companies like Twitter want to be more transparent, we believe that should also extend to their own inner operations and deliberations.

We envision extending the jurisdiction of “Twitter court” to neutral arbitrators who would adjudicate claims involving individuals, public officials, private companies and the platform. Rather than going to actual court for cases of defamation or privacy violation, Twitter court would suffice under many conditions. Again, this is a way to pull back some of Section 230’s absolutist protections without removing them entirely.

How would it work – and would it work?

Since 2018, platforms have had limited Section 230 protection in cases of sex trafficking. A recent academic proposal suggests extending these limitations to incitement to violence, hate speech and disinformation. House Republicans have also suggested a number of Section 230 carve-outs, including those for content relating to terrorism, child exploitation or cyberbullying.

Our three ideas of verification triggers, transparent liability caps and Twitter court may be an easy place to start the reform. They could be implemented individually, but they would have even greater authority if they were implemented together. The increased clarity of verification triggers and transparent liability caps would help set meaningful standards balancing public benefit with corporate responsibility in a way that self-regulation has not been able to achieve. Twitter court would provide a real option for people to arbitrate rather than simply watch misinformation and hate speech bloom and platforms profit from it.

Adding a few meaningful options and amendments to Section 230 will be difficult because defining hate speech and misinformation in context, and setting limits and measures for monetization of content, will not be easy. But we believe these definitions and measures are achievable and worthwhile. Once enacted, these strategies promise to make online discourse stronger and platforms fairer.The Conversation

Robert Kozinets, Professor of Journalism, USC Annenberg School for Communication and Journalism and Jon Pfeiffer, Adjunct Professor of Law, Pepperdine University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Sunday, November 6, 2022

Mass migration from Twitter is likely to be an uphill battle – just ask ex-Tumblr users

The turmoil inside Twitter headquarters is sparking discussion of a mass exodus of users. What will happen if there is a rush to the exits? AP Photo/Jeff Chiu
Casey Fiesler, University of Colorado Boulder

Elon Musk announced that “the bird is freed” when his US$44 billion acquisition of Twitter officially closed on Oct. 27, 2022. Some users on the microblogging platform saw this as a reason to fly away.

Over the course of the next 48 hours, I saw countless announcements on my Twitter feed from people either leaving the platform or making preparations to leave. The hashtags #GoodbyeTwitter, #TwitterMigration and #Mastodon were trending. The decentralized, open source social network Mastodon gained over 100,000 users in just a few days, according to a user counting bot.

As an information scientist who studies online communities, this felt like the beginning of something I’ve seen before. Social media platforms tend not to last forever. Depending on your age and online habits, there’s probably some platform that you miss, even if it still exists in some form. Think of MySpace, LiveJournal, Google+ and Vine.

When social media platforms fall, sometimes the online communities that made their homes there fade away, and sometimes they pack their bags and relocate to a new home. The turmoil at Twitter is causing many of the company’s users to consider leaving the platform. Research on previous social media platform migrations shows what might lie ahead for Twitter users who fly the coop.

Elon Musk’s acquisition of Twitter has caused turmoil within the company and prompted many users to consider leaving the social media platform.

Several years ago, I led a research project with Brianna Dym, now at University of Maine, where we mapped the platform migrations of nearly 2,000 people over a period of almost two decades. The community we examined was transformative fandom, fans of literary and popular culture series and franchises who create art using those characters and settings.

We chose it because it is a large community that has thrived in a number of different online spaces. Some of the same people writing Buffy the Vampire Slayer fan fiction on Usenet in the 1990s were writing Harry Potter fan fiction on LiveJournal in the 2000s and Star Wars fan fiction on Tumblr in the 2010s.

By asking participants about their experiences moving across these platforms – why they left, why they joined and the challenges they faced in doing so – we gained insights into factors that might drive the success and failure of platforms, as well as what negative consequences are likely to occur for a community when it relocates.

‘You go first’

Regardless of how many people ultimately decide to leave Twitter, and even how many people do so around the same time, creating a community on another platform is an uphill battle. These migrations are in large part driven by network effects, meaning that the value of a new platform depends on who else is there.

In the critical early stages of migration, people have to coordinate with each other to encourage contribution on the new platform, which is really hard to do. It essentially becomes, as one of our participants described it, a “game of chicken” where no one wants to leave until their friends leave, and no one wants to be first for fear of being left alone in a new place.

For this reason, the “death” of a platform – whether from a controversy, disliked change or competition – tends to be a slow, gradual process. One participant described Usenet’s decline as “like watching a shopping mall slowly go out of business.”

It’ll never be the same

The current push from some corners to leave Twitter reminded me a bit of Tumblr’s adult content ban in 2018, which reminded me of LiveJournal’s policy changes and new ownership in 2007. People who left LiveJournal in favor of other platforms like Tumblr described feeling unwelcome there. And though Musk did not walk into Twitter headquarters at the end of October and flip a virtual content moderation lever to the “off” position, there was an uptick in hate speech on the platform as some users felt emboldened to violate the platform’s content policies under the assumption that major policy changes were on the way.

So what might actually happen if a lot of Twitter users do decide to leave? What makes Twitter Twitter isn’t the technology, it’s the particular configuration of interactions that takes place there. And there is essentially zero chance that Twitter, as it exists now, could be reconstituted on another platform. Any migration is likely to face many of the challenges previous platform migrations have faced: content loss, fragmented communities, broken social networks and shifted community norms.

But Twitter isn’t one community, it’s a collection of many communities, each with its own norms and motivations. Some communities might be able to migrate more successfully than others. So maybe K-Pop Twitter could coordinate a move to Tumblr. I’ve seen much of Academic Twitter coordinating a move to Mastodon. Other communities might already simultaneously exist on Discord servers and subreddits, and can just let participation on Twitter fade away as fewer people pay attention to it. But as our study implies, migrations always have a cost, and even for smaller communities, some people will get lost along the way.

The ties that bind

Our research also pointed to design recommendations for supporting migration and how one platform might take advantage of attrition from another platform. Cross-posting features can be important because many people hedge their bets. They might be unwilling to completely cut ties all at once, but they might dip their toes into a new platform by sharing the same content on both.

Ways to import networks from another platform also help to maintain communities. For example, there are multiple ways to find people you follow on Twitter on Mastodon. Even simple welcome messages, guides for newcomers and easy ways to find other migrants could make a difference in helping resettlement attempts stick.

And through all of this, it’s important to remember that this is such a hard problem by design. Platforms have no incentive to help users leave. As long-time technology journalist Cory Doctorow recently wrote, this is “a hostage situation.” Social media lures people in with their friends, and then the threat of losing those social networks keeps people on the platforms.

But even if there is a price to pay for leaving a platform, communities can be incredibly resilient. Like the LiveJournal users in our study who found each other again on Tumblr, your fate is not tied to Twitter’s.The Conversation

Casey Fiesler, Associate Professor of Information Science, University of Colorado Boulder

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Tuesday, June 14, 2022

EU law would require Big Tech to do more to combat child sexual abuse, but a key question remains: How?

European Commissioner for Home Affairs Ylva Johansson announced a set of proposed regulations requiring tech companies to report child sexual abuse material. AP Photo/Francisco Seco
Laura Draper, American University

The European Commission recently proposed regulations to protect children by requiring tech companies to scan the content in their systems for child sexual abuse material. This is an extraordinarily wide-reaching and ambitious effort that would have broad implications beyond the European Union’s borders, including in the U.S.

Unfortunately, the proposed regulations are, for the most part, technologically unfeasible. To the extent that they could work, they require breaking end-to-end encryption, which would make it possible for the technology companies – and potentially the government and hackers – to see private communications.

The regulations, proposed on May 11, 2022, would impose several obligations on tech companies that host content and provide communication services, including social media platforms, texting services and direct messaging apps, to detect certain categories of images and text.

Under the proposal, these companies would be required to detect previously identified child sexual abuse material, new child sexual abuse material, and solicitations of children for sexual purposes. Companies would be required to report detected content to the EU Centre, a centralized coordinating entity that the proposed regulations would establish.

Each of these categories presents its own challenges, which combine to make the proposed regulations impossible to implement as a package. The trade-off between protecting children and protecting user privacy underscores how combating online child sexual abuse is a “wicked problem.” This puts technology companies in a difficult position: required to comply with regulations that serve a laudable goal but without the means to do so.

Digital fingerprints

Researchers have known how to detect previously identified child sexual abuse material for over a decade. This method, first developed by Microsoft, assigns a “hash value” – a sort of digital fingerprint – to an image, which can then be compared against a database of previously identified and hashed child sexual abuse material. In the U.S., the National Center for Missing and Exploited Children manages several databases of hash values, and some tech companies maintain their own hash sets.

The hash values for images uploaded or shared using a company’s services are compared with these databases to detect previously identified child sexual abuse material. This method has proved extremely accurate, reliable and fast, which is critical to making any technical solution scalable.
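
As a simplified illustration of how this kind of matching works in general – PhotoDNA itself is proprietary, so the sketch below uses the open-source imagehash library’s perceptual hash as a stand-in, with synthetic test images – an uploaded image’s hash is compared against a set of known hash values, with a small tolerance so that re-encoded or lightly edited copies still match:

# Simplified illustration of hash matching against a database of known images.
# PhotoDNA is proprietary; the open-source imagehash library's perceptual hash
# is used here as a stand-in, and the test images are synthetic.
from PIL import Image
import imagehash

# Two structurally different test images stand in for "known" vs. "unrelated" content.
known = Image.linear_gradient("L").resize((64, 64))
unrelated = known.transpose(Image.Transpose.ROTATE_90)

# A lightly altered copy of the known image (in practice: re-encoded, resized, cropped).
altered = known.copy()
altered.putpixel((0, 0), 255)

# The "database" of hash values for previously identified images.
known_hashes = {imagehash.phash(known)}

def matches_known(image, max_distance=5):
    """True if the image's perceptual hash is within max_distance bits of a known hash."""
    return any(imagehash.phash(image) - h <= max_distance for h in known_hashes)

print("altered copy matches:", matches_known(altered))       # expected: True
print("unrelated image matches:", matches_known(unrelated))  # expected: False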

The problem is that many privacy advocates consider it incompatible with end-to-end encryption, which, strictly construed, means that only the sender and the intended recipient can view the content. Because the proposed EU regulations mandate that tech companies report any detected child sexual abuse material to the EU Centre, this would violate end-to-end encryption, thus forcing a trade-off between effective detection of the harmful material and user privacy.

Here’s how end-to-end encryption works, and which popular messaging apps use it.
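
To make the privacy side of that trade-off concrete, here is a minimal sketch of end-to-end encryption using the open-source PyNaCl library. It is not how any particular messaging app implements its protocol, but it shows why a platform relaying, or scanning, traffic sees only ciphertext:

# Minimal sketch of end-to-end encryption with PyNaCl (not any specific app's protocol).
# Only the recipient's private key can recover the message; the relaying server,
# or anything scanning traffic in transit, sees only ciphertext.
from nacl.public import PrivateKey, Box

# Each party generates a key pair; only the public halves are ever shared.
sender_key = PrivateKey.generate()
recipient_key = PrivateKey.generate()

# The sender encrypts with their private key and the recipient's public key.
sending_box = Box(sender_key, recipient_key.public_key)
ciphertext = sending_box.encrypt(b"meet at 6pm")

print("what the server sees:", ciphertext.hex()[:48], "...")

# Only the recipient, holding the matching private key, can decrypt.
receiving_box = Box(recipient_key, sender_key.public_key)
print("what the recipient sees:", receiving_box.decrypt(ciphertext).decode())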

Recognizing new harmful material

In the case of new content – that is, images and videos not included in hash databases – there is no such tried-and-true technical solution. Top engineers have been working on this issue, building and training AI tools that can accommodate large volumes of data. Google and child safety nongovernmental organization Thorn have both had some success using machine-learning classifiers to help companies identify potential new child sexual abuse material.

However, without independently verified data on the tools’ accuracy, it’s not possible to assess their utility. Even if the accuracy and speed are comparable with hash-matching technology, the mandatory reporting will again break end-to-end encryption.
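
The kind of independent verification that would settle this is not exotic: precision and recall measured on a held-out, labeled evaluation set, rather than a single headline accuracy figure. The sketch below is generic and uses synthetic labels and scores, not any vendor’s classifier:

# Generic sketch of the evaluation that would let outsiders judge a classifier's utility:
# precision and recall on a held-out labeled set. Labels and scores are synthetic;
# no real classifier or data is involved.
import numpy as np
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(42)

# Ground-truth labels for a held-out evaluation set (1 = harmful, 0 = benign).
y_true = rng.integers(0, 2, size=10_000)

# Synthetic classifier scores that are informative but imperfect.
scores = np.clip(0.6 * y_true + rng.normal(0.2, 0.25, size=10_000), 0, 1)
y_pred = (scores > 0.5).astype(int)

print("precision:", round(precision_score(y_true, y_pred), 3))  # of flagged items, the share truly harmful
print("recall:   ", round(recall_score(y_true, y_pred), 3))     # of harmful items, the share actually flagged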

New content also includes livestreams, but the proposed regulations seem to overlook the unique challenges this technology poses. Livestreaming technology became ubiquitous during the pandemic, and the production of child sexual abuse material from livestreamed content has dramatically increased.

More and more children are being enticed or coerced into livestreaming sexually explicit acts, which the viewer may record or screen-capture. Child safety organizations have noted that the production of “perceived first-person child sexual abuse material” – that is, material that appears to have been self-recorded by the victim – has risen at exponential rates over the past few years. In addition, traffickers may livestream the sexual abuse of children for offenders who pay to watch.

The circumstances that lead to recorded and livestreamed child sexual abuse material are very different, but the technology is the same. And there is currently no technical solution that can detect the production of child sexual abuse material as it occurs. Tech safety company SafeToNet is developing a real-time detection tool, but it is not ready to launch.

Detecting solicitations

Detection of the third category, “solicitation language,” is also fraught. The tech industry has made dedicated efforts to pinpoint indicators necessary to identify solicitation and enticement language, but with mixed results. Microsoft spearheaded Project Artemis, which led to the development of the Anti-Grooming Tool. The tool is designed to detect enticement and solicitation of a child for sexual purposes.

As the proposed regulations point out, however, the accuracy of this tool is 88%. In 2020, popular messaging app WhatsApp delivered approximately 100 billion messages daily. If the tool identifies even 0.01% of the messages as “positive” for solicitation language, human reviewers would be tasked with reading 10 million messages every day to identify the 12% that are false positives, making the tool simply impractical.
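
The arithmetic behind that conclusion can be checked directly from the figures cited above:

# Reproducing the review-load arithmetic from the figures cited above.
daily_messages = 100_000_000_000   # roughly 100 billion WhatsApp messages per day (2020)
flag_rate = 0.0001                 # if even 0.01% of messages are flagged as "positive"
accuracy = 0.88                    # reported accuracy of the Anti-Grooming Tool

flagged = daily_messages * flag_rate
false_positives = flagged * (1 - accuracy)

print(f"messages flagged per day:       {flagged:,.0f}")          # 10,000,000
print(f"false positives among flagged:  {false_positives:,.0f}")  # 1,200,000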

As with all the above-mentioned detection methods, this, too, would break end-to-end encryption. But whereas the others may be limited to reviewing a hash value of an image, this tool requires access to all exchanged text.

No path

It’s possible that the European Commission is taking such an ambitious approach in hopes of spurring technical innovation that would lead to more accurate and reliable detection methods. However, without existing tools that can accomplish these mandates, the regulations are ineffective.

When there is a mandate to take action but no path to take, I believe the disconnect will simply leave the industry without the clear guidance and direction these regulations are intended to provide.The Conversation

Laura Draper, Senior Project Director at the Tech, Law & Security Program, American University

This article is republished from The Conversation under a Creative Commons license. Read the original article.