Technology is a tool for and against forms of online abuse



(NewsNation) — When it comes to abuse, technology can be a double-edged sword, both facilitating online sexual harassment and shielding against it.

From deepfakes to sharing intimate images without consent, technology has created new pathways to victimize people in familiar ways. Increasingly sophisticated technology, however, has also created opportunities to guard against that abuse and bring perpetrators to justice.

In 2016, a Pennsylvania man was sentenced to 1 1/2 years in prison in connection with a federal computer hacking conviction. The charge stemmed from an investigation into the 2014 leak of several women’s private photos.

Although he was never directly linked to the leak or the uploading of images, federal officials identified more than 600 victims, many of whom were members of the Los Angeles entertainment industry, whose information he accessed through email phishing schemes.

So-called “revenge porn” impacts private citizens, too. The term often refers to the sharing or distribution of intimate images, sometimes sexual in nature, without the pictured person’s consent.

Among researchers and other professionals, the offense is described as image-based abuse, since the motivation isn’t always revenge and the transgressions aren’t always about pornography. That’s according to Alison Marganski, an associate professor and the director of criminology at Le Moyne College.

“The victimization is not a one-time event, but rather something that is ongoing,” Marganski said in an email to NewsNation. “People are repeatedly violated — often by multiple persons over a period of time, which creates cumulative, compounded, and complex trauma.”

Image-based abuse has historically impacted women more than men, according to Marganski’s report, which was co-authored last year with Lisa Melander, a Kansas State University sociology professor.

As with more traditional forms of abuse or harassment, victims might be hesitant to step forward, Melander said.

“The same kinds of concerns that hinder victims of in-person crimes would apply to those who experience online harassment: fear of retaliation from the perpetrator(s) and not being taken seriously by law enforcement,” she said.

A total of 48 states and the District of Columbia have laws covering nonconsensual pornography, according to a 2021 map from the Cyber Civil Rights Initiative, the most recent comprehensive data available.

Some states classify the offense as a felony while others consider it a misdemeanor.

Less common are laws surrounding deepfakes. They’re made using AI algorithms that learn from photos or audio clips to produce something similar, but artificial.

For example, if the goal were to create a realistic-looking human face or voice, a creator could feed the computer system photos of real faces and human voices, and eventually, it would spit out something similar. This can be used to make it appear as though a specific person is saying or doing something they’re not.
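That learn-from-examples process can be sketched in miniature. The snippet below is a toy illustration only, not how deepfakes are actually built (real systems use deep neural networks such as GANs or autoencoders): it learns simple statistics from a set of made-up "real" feature vectors and then emits an artificial vector that resembles them. Every name and number here is invented for illustration.

```python
import random

def learn_statistics(real_samples):
    """Learn the per-feature mean and spread from real examples."""
    n = len(real_samples)
    dims = len(real_samples[0])
    means = [sum(s[d] for s in real_samples) / n for d in range(dims)]
    spreads = [
        (sum((s[d] - means[d]) ** 2 for s in real_samples) / n) ** 0.5
        for d in range(dims)
    ]
    return means, spreads

def generate_fake(means, spreads, rng):
    """Produce a new, artificial sample resembling the training data."""
    return [rng.gauss(m, s) for m, s in zip(means, spreads)]

rng = random.Random(42)
# Pretend each "face" is a 3-number feature vector drawn from real data.
real_faces = [[rng.gauss(0.5, 0.1) for _ in range(3)] for _ in range(1000)]

means, spreads = learn_statistics(real_faces)
fake_face = generate_fake(means, spreads, rng)
print(fake_face)  # an artificial sample, statistically similar to the real ones
```

The point of the sketch is the workflow, not the math: the system never copies any single real example, yet its output is hard to distinguish from the training data, which is exactly what makes convincing impersonation possible.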

Deepfakes aren’t always easy to spot, either, letting creators make virtually anyone appear to say or do anything.

Four states had laws regarding deepfakes on the books as of 2021, according to the Cyber Civil Rights Initiative.

“Most states have some kind of laws that cover aspects of online abuse, but many do not have specific laws that differentiate between in-person and online harassment but rather cover harmful communications in general,” Melander said. “The problem with this is that the ways in which these laws are interpreted and applied by law enforcement and the courts varies greatly.”

A 2019 report by Deeptrace Labs found that 96% of all deepfake videos were pornographic and nonconsensual videos made of women.

“Individuals who the victim may or may not know can access material not intended for them,” Marganski said. “They can view it, share it with others like the victim’s family, employer, etc., and so on. Those who gaze upon it may then judge, blame, or justify harms against those who have already been harmed, creating revictimization.”

But while technology can foster harm, it can also be used as a shield.

Certain apps can contact law enforcement and send location details to emergency contacts with the press of a button. Online social platforms also serve as a space to raise awareness and offer support to victims.

Other data-safety measures include changing passwords frequently and keeping sensitive data on a computer that isn’t connected to the internet or networked with other computers, according to Take Back the Tech, an initiative through the Association for Progressive Communications’ Women’s Rights Programme.

Such safeguards, however, shift the burden to those at risk of being victimized, Marganski said.

Instead, social media platforms should set clear guidelines that support victims and hold perpetrators accountable, Marganski and Melander wrote in their report.

“Even when potential targets engage in risk reduction measures, perpetrators may go on to offend against them or others,” Marganski said. “Strategies are therefore needed that focus on potential offenders and addressing underlying motivations for their behavior…”

The Cyber Civil Rights Initiative website has additional information and resources for anyone affected by online abuse.

This is part of a NewsNation exclusive series covering increased surveillance and data-gathering through technology and by the government.
