AI Helps Authorities Determine ‘Emotion’ Behind Social Media Posts

From the “thoughtcrimes” of George Orwell’s “1984” to the “precogs” tasked with investigating future crimes in the 2002 film “Minority Report,” fiction writers have long explored the concept of an all-powerful government capable of identifying supposed criminals, whether or not they had actually committed a crime.

Now, the rapidly advancing capabilities of artificial intelligence appear to be bringing humanity ever closer to a world where such predictive analysis is possible.

Most recently, reports indicate that the Department of Homeland Security and Customs and Border Protection have teamed up with Fivecast, an AI company touting its ability to determine the “sentiment and emotion” behind social media posts. In addition to identifying potential “risk terms and phrases” shared online, the partnership appears on track to use digital surveillance to report users to authorities based on the perceived emotional motivation behind their posts.

In one example the company has used to illustrate the concept, Fivecast determined that a surveillance target displayed evidence of anger and disgust throughout the period during which monitoring was conducted.
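For readers curious how this kind of automated “emotion” scoring generally works, the sketch below runs a sample post through an off-the-shelf text classifier from the open-source Hugging Face transformers library and checks it against a keyword watchlist. It is a generic illustration only; the model name and the “risk terms” are assumptions chosen for demonstration and have no connection to Fivecast’s proprietary system.

```python
from transformers import pipeline

# Off-the-shelf emotion classifier from the Hugging Face hub.
# The model name is an assumption for illustration; it is not the
# model (or approach) Fivecast actually uses.
classifier = pipeline(
    "text-classification",
    model="j-hartmann/emotion-english-distilroberta-base",
    top_k=None,  # return a score for every emotion label, not just the top one
)

# Hypothetical "risk terms" watchlist, purely for demonstration.
RISK_TERMS = {"attack", "threat", "destroy"}

def score_post(text: str) -> dict:
    """Score one post for emotion and check it against the watchlist."""
    emotions = classifier(text)[0]  # list of {"label": ..., "score": ...} dicts
    return {
        "top_emotion": max(emotions, key=lambda e: e["score"])["label"],
        "scores": {e["label"]: round(e["score"], 3) for e in emotions},
        "flagged_terms": [t for t in RISK_TERMS if t in text.lower()],
    }

print(score_post("I am disgusted and furious that they would threaten us like this."))
```

Even a toy pipeline like this makes the limitation plain: the output is a statistical guess about the tone of a sentence, not a measurement of a person’s intent.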

In addition to the U.S., the company states that it is currently working with intelligence agencies in Canada, Australia, the United Kingdom and New Zealand.

As its website states, Fivecast’s goal is to use “unique data collection and AI-enabled analytics to solve the most complex intelligence challenges.”

In addition to intelligence officials, the company claims that its services are “used and trusted by leading defense, national security, law enforcement” and private-sector organizations “around the world.”

This effort to digitally intrude into the minds of social media users is already attracting significant backlash from across the ideological spectrum.

The American Civil Liberties Union’s National Security Project Deputy Director Patrick Toomey, for example, said: “CBP should not be secretly buying and deploying tools that rely on junk science to scrutinize people’s social media posts, claim to analyze their emotions, and identify purported ‘risks.’”

In its own statement, however, CBP defends its use of the fledgling technology, claiming that DHS “is committed to protecting individuals’ privacy, civil rights, and civil liberties. DHS uses various forms of technology to execute its mission, including tools to support investigations related to threats to infrastructure, illegal trafficking on the dark web, cross-border transnational crime, and terrorism. DHS leverages this technology in ways that are consistent with its authorities and the law.”