Wednesday, February 21, 2018

The Battle of Twittergrad


Excerpt from Unfriended: Social Media in the Era of Trump

Wendy didn’t get Twitter. The forty-five-year-old librarian from Fargo, North Dakota, tweeted occasionally, but no one replied to or retweeted her tweets because she was, admittedly, a nobody. She used Twitter mostly to stargaze: looking up her favorite celebrities, sometimes replying to their tweets and hoping they’d reply back (they never did).

In October 2016, when the race for the White House was in full throttle, George Takei tweeted, “I hear the bathrooms in Trump Tower are being relabeled ‘Bad Hombres’ and ‘Nasty Women.’”

Wendy laughed and retweeted, adding her own comment, “He’ll have to learn how to spell first!”

Immediately, another Twitter user replied to her comment, “TRUMP WILL MAKE AMERICA GREAT AGAIN. #MAGA!!”

Intrigued, Wendy looked up the commenter’s profile. “FloridaMom4Trump.” A profile image of a woman in her sixties with a full head of ruffled, graying blonde hair grinned back at her. Her bio read: PROUD AMERICAN PATRIOT AND MOM OF ARMY SOLDIER. #MAGA, accompanied by multiple American flag emojis.

Wendy tweeted back, “Do you really want a racist misogynist for president?”

FloridaMom4Trump replied, “HE’S BETTER THAN KILLARY!” and added a link to a story, “WikiLEaks Confirms Hillary Sold Weapons to ISIS…Then Drops Another Bombshell!” Wendy was no expert, but even she could tell the news was fake, as was The Political Insider, the site reporting it.

A sinking feeling settled in Wendy’s stomach. “What in God’s name am I looking at?” she wondered.


For some, Twitter was a delightful, no-frills social media platform where they could jump in, grab what they needed, and jump out.

For others, it was a piss pool—a Palahniukesque fight club where the morally defective could settle scores. Polluted, bitey, and overpopulated, its dumbed-down, noncommittal pop-in and pop-out public interface made it the friendliest for the trolliest—the type of asinine environment where narcissistic arrivistes like Donald Trump could thrive, turning their 140-character yammer into a propaganda megaphone audible to the world.

So it’s no wonder that when Putin sent his trolls into battle, Twitter was ground zero.


In October 2016, London-born writer and former Conservative Member of the British Parliament Louise Mensch broke the news of Russia’s Twitter army and its attempts to influence the election.

It was a shot fired too far and too late.

Blame it on Mensch’s not-so-credible reputation (she has been called, among other things, “the paranoid bard in the age of Trump”) or the outlet in which she published it—Heat Street, a decidedly right-wing news blog funded by none other than Rupert Murdoch—but when the article broke just weeks before election day, it wasn’t well-received—when it was received at all. Conservatives laughed it off; the Washington Post disputed the evidence backing Mensch’s claims. Liberals largely ignored it; they didn’t need a conspiracy theory raining on their parade. The “grab them by the pussy” tape had been released. Republicans had, as Bill Maher put it, “handcuffed themselves to a dead hooker.” So what if Russia was playing a virtual mind game on social media? It didn’t matter. Nothing could stop a Clinton win.

It was an assumption that 65,844,954 Americans would live to regret.


Perhaps Louise Mensch and others in the foreign press were on top of Russia’s covert online operations because they saw history repeating itself. The American election wasn’t the first time Putin had sought to manipulate an event in his favor.

In 2014, Russia used social media to promote its Ukraine campaign. When almost fifty people—most of them pro-Russian activists—were killed in a building fire, Russian Twitter accounts went into overdrive, blaming Ukraine and appealing for public sympathy, while neglecting to mention that the pro-Russian activists had fired the first shots, and continued to fire on pro-Ukrainians even as flames engulfed the trade union building.

In April 2015, The Guardian reported details of a Russian troll house, where hundreds of bloggers spent hours each day flooding forums and social networks at home and abroad with anti-western and pro-Kremlin propaganda. These trolls would work especially hard when Putin launched his less popular campaigns. For example, when Russian fighter planes were sent to the war in Syria, the Russian population—only 14% of whom supported military intervention in Syria—were inundated with a “succession of live reports, analysis, and official defence briefings that combined delivered a seemingly coordinated message: that airstrikes in Syria are crucial in the fight against ISIS.”

Skip to June 2016, when more than 150,000 Russian-language Twitter accounts posted tens of thousands of messages urging Britain to leave the European Union, just days before the referendum on the issue. When government officials studied 139 of these tweets from twenty-nine accounts, they found unflattering, photoshopped pictures of London Mayor Sadiq Khan, racial slurs against refugees, and articles about terrorist attacks in England and across Europe, all clearly aimed at spreading racial hatred across the Western world.


A year and two months after the 2016 American presidential election, Americans still don’t know what to think. New—and often contradictory—information continues to leak daily about how far Russia’s propaganda machine reached. So far, 50,000 Twitter accounts have been directly linked to the Kremlin, along with 3,000 accounts tied to the Internet Research Agency, an infamous troll farm responsible for wide-ranging influence operations on social media in the lead-up to the election. I suspect that by the time you read this, thousands—if not millions—more accounts will have been uncovered.


The Kremlin’s modus operandi for attacking the 2016 election via social media came in two varieties—bots and trolls.

Kremlin bots are automated Twitter accounts programmed to fire off the same message seconds apart, in alphabetical order according to their made-up last names. Their messages include hashtags designed to rig Twitter trends, such as #TrumpTrain and #CrookedHillary. They can retweet, “like,” and reply to tweets; they can also follow each other and retweet themselves.

Autobots rarely feature a profile image, and when they do, it is often shared among multiple accounts. Bots also reply to messages in less time than it is humanly possible to read the tweet they are responding to, and their response to you is the same response they’ve given others. They follow far more accounts than follow them back, they have little to say apart from the topic they were programmed for, and they tweet prolifically without any apparent need to fulfill the basic human requirements of food and sleep. (According to researchers, if an account has tweeted at least fifty times a day across a period of four or more days, it’s fair to assume it’s a bot.)
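The tells above can be sketched as a rough filter. Here is a minimal Python illustration; every field name and threshold (apart from the researchers’ fifty-tweets-a-day rule) is an assumption for demonstration, not any real Twitter API.

```python
# Rough bot heuristic sketched from the tells described above.
# All field names are illustrative assumptions, not a real Twitter API.

def looks_like_bot(account):
    """Collect red flags matching the researchers' rules of thumb."""
    flags = []

    # Researchers' rule of thumb: at least 50 tweets/day over 4+ days.
    if (account["days_active"] >= 4
            and account["tweet_count"] / account["days_active"] >= 50):
        flags.append("tweets at superhuman volume")

    # Bots reply faster than a human could read the original tweet.
    if account["median_reply_seconds"] < 2:
        flags.append("replies faster than humanly possible")

    # Follows far more accounts than follow it back.
    if account["following"] > 10 * max(account["followers"], 1):
        flags.append("lopsided follow ratio")

    # Profile image missing or shared among multiple accounts.
    if account["profile_image_shared"]:
        flags.append("profile image shared with other accounts")

    return flags

suspect = {
    "tweet_count": 312, "days_active": 5,
    "median_reply_seconds": 1, "following": 5200,
    "followers": 40, "profile_image_shared": True,
}
print(looks_like_bot(suspect))
```

No single flag is conclusive; it’s the pile-up of tells, as the paragraph above suggests, that gives an autobot away.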

Kremlin troll accounts are run by humans. These accounts usually curate the messages that the autobots retweet. During the 2016 election, they trolled pro-Hillary accounts with disparaging, often crude comments and images, and they were the first to comment on Trump’s tweets, replying with memes of the American flag or photoshopped images of Hillary in prison stripes.

These online hecklers are also deployed to discredit or silence influential private citizens, like journalists or celebrities, by means of organized harassment, leading the way with autobots following close behind them.

Kremlin trolls are harder to spot than autobots. Keeping up appearances is important to them: they don’t overtweet, they tweet ideas that appear to have some original thought behind them, and they almost always geotag their tweets. That last bit—the geotag—is important, because occasionally the trolls forget to hide or change it, and towns like Anzhero-Sudzhensk or Belaya Kholunitsa appear instead of towns like Phoenix or Miami.
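The geotag slip is easy to picture in code. A minimal sketch, assuming hypothetical town lists and field names (only the example towns come from the passage above):

```python
# Sketch of the geotag tell: an account claiming to post from an
# American city whose tweets occasionally carry a Russian town's geotag.
# The town sets and function shape are illustrative assumptions.

CLAIMED_US_TOWNS = {"Phoenix", "Miami"}
KNOWN_RUSSIAN_TOWNS = {"Anzhero-Sudzhensk", "Belaya Kholunitsa"}

def geotag_slips(profile_location, tweet_geotags):
    """Return any geotags that contradict the account's claimed home."""
    if profile_location not in CLAIMED_US_TOWNS:
        return []
    return [town for town in tweet_geotags if town in KNOWN_RUSSIAN_TOWNS]

# One forgotten geotag among hundreds of "Miami" posts betrays the account.
slips = geotag_slips("Miami", ["Miami", "Anzhero-Sudzhensk", "Miami"])
print(slips)  # ['Anzhero-Sudzhensk']
```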

Kremlin trolls are also harder to discern because they are often not new accounts, but hacked accounts recommissioned for propaganda. In a 2017 article, the New York Times cited the case of Rachel Usedom, a young American engineer in California, “who tweeted mostly about her sorority before losing interest in 2014. In November 2016, her account was taken over, renamed #ClintonCurruption, and used to promote Russian leaks.”

Working together, these trolls and autobots turned election-era Twitter into a dark, dystopian slaughterhouse. As The Atlantic’s Douglas Guilbeault put it, “Never have we seen such an all-out bot war.”


Perhaps the best (and most disputed) example of Russia’s bot-army tactics occurred after the first presidential debate. While everyone and their racist grandmother could tell Clinton came out on top, Twitter activity suggested a different outcome—an alternative fact, if you will. The hashtag #TrumpWon began to trend, and it stayed that way for hours.

The odd phenomenon had many Americans scratching their heads. The next day, Boston Globe readers woke up to the headline, “Why is #TrumpWon trending on Twitter?”

Louise Mensch knew the answer. In her October 2016 Heat Street article, she explained the Kremlin bot methods: “Let’s say you had a hashtag you wanted to get trending. You have a thousand bots (or Russian Trolls) and a popular account like Ricky Vaughn [a real person]. You have the bots use the hashtag, flooding Twitter until it gets a high count, but stays just under the top twenty trends. Then, Ricky Vaughn pitches the hashtag to his followers. Here is where the window of timing kicks in: within minutes, Ricky Vaughn can have a hashtag trending, but before he gets the hashtag to the top fifteen, the bots automatically delete their tweets with the hashtags. You’ve now started a ‘trend’ associated with Ricky Vaughn, and not a 1,000 odd bots or Russian trolls.” (At the time Mensch wrote the article, Ricky Vaughn was not a confirmed troll. A year later, his account had been deactivated by Twitter—but whether he is a troll of the American or Russian variety remains unclear to this day.)
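The flood-then-delete trick Mensch describes can be modeled as a toy simulation. All the numbers and the hashtag below are made up for illustration; the point is only the mechanics of the laundered trend.

```python
# Toy model of the flood-then-delete trick: bots inflate a hashtag's
# count, a popular account pitches it, then the bots withdraw their
# tweets so the "trend" appears organic. Figures are invented.

from collections import Counter

trend_counts = Counter()
bot_tweet_ids = []

# Step 1: a thousand bots flood the hashtag, keeping it just
# under the visible top-trend threshold.
for i in range(1000):
    trend_counts["#ExampleTag"] += 1
    bot_tweet_ids.append(i)

# Step 2: a popular account pitches the hashtag, and some real
# followers pick it up.
organic_tweets = 300
trend_counts["#ExampleTag"] += organic_tweets

# Step 3: before the tag cracks the top of the trends list, the
# bots delete their tweets, leaving only the organic activity.
for _ in bot_tweet_ids:
    trend_counts["#ExampleTag"] -= 1

print(trend_counts["#ExampleTag"])  # prints 300: only the organic tweets remain
```

What survives is a trend attributed to the popular account and its real followers, with the thousand bots erased from the ledger.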

As controversial as Mensch's conclusion may be, she wasn’t the only one watching Twitter closely. A research team led by Oxford University Professor Philip Howard was also studying nuances in hashtag trends after the debates, using popular pro-Trump and pro-Hillary hashtags as guides.

After the first debate, their research concluded that 37% of the pro-Trump tweets had been posted by bots, while bots were responsible for only 22.3% of pro-Clinton tweets. In total, 576,178 pro-Trump tweets came from bots, compared to 136,696 in support of Clinton.

Bots turned up the heat for the second debate, with 800,000 pro-Trump tweets and just under 200,000 pro-Clinton tweets.

Howard didn’t point the finger directly at Russia for the huge political bot party; however, he did suggest the data indicated a deliberate manipulation behind the bots’ behavior: “They were purposeful, thoughtful, and deliberate about when to release messages, what those messages should be, and what their targets were.”

And while both candidates benefited from the bots’ hard work, there was no question who they preferred. "On the balance of probabilities, if you examined an automated bot account, the odds are four to one that you'll find it's a bot tweeting in favor of Trump," said Howard.

So why bother tweeting for Clinton at all? According to the September 2017 New York Times article, “The Fake Americans Russia Created to Influence the Election,” Russia used the pro-Clinton bots to blur its role in influencing the election results. Additionally, their seemingly pro-Clinton hashtags weren’t pro-Clinton at all, but modern-day black propaganda for spreading dissension among Clinton supporters: the bots applied pro-Clinton hashtags to inject anti-Clinton memes, links, and political messages into pro-Clinton circles. “Like a virus, they essentially co-opted the opponent’s messaging and infiltrated her supporters. Using pro-Clinton hashtags like #ImWithHer and #uniteblue, memes describing Clinton as corrupt ricocheted across both blue and red feeds.”


The bot activity may have increased around the time of the presidential debates, but it didn’t start and stop there. A November 2017 analysis published by the Wall Street Journal determined that Russian Twitter accounts began promoting Trump mere weeks after he announced his candidacy. Not only did these accounts, often disguised as right-leaning Americans, heap praise on Trump, but much of their effort was also spent criticizing and spreading fake news about Trump’s Republican opponent Jeb Bush, as well as Clinton.

“The support for Trump was clear even at that stage,” reported the technology news site, “There was a 10:1 ratio of praise to criticism among the bogus accounts, a figure that would climb to 30:1 when the election was two weeks away. Identical messages often showed up within minutes of each other, hinting at tight coordination.”

According to Howard’s fellow researcher, University of Washington professor Samuel Woolley, the persistence of the accounts during the election was also meant to give the appearance that Trump had a bigger following than he actually had.

“Some of the botnets that supported Trump were more than likely purpose-built to create an illusion of massive online political traction for Trump,” said Woolley. “These bots work to create a bandwagon effect among voters who were still considering a candidate, or were focused on a specific issue. They also generated a spiral of silence among voters who might not agree with a candidate or issue, but who experienced a barrage of hugely enhanced content from the Trump bot network. These purpose-build bots and botnets often disappeared right after a political campaign, some were even created for a specific issue within a campaign and go offline after working to manipulate public opinion around that one issue.”

On election day and in the days before it, the bots redirected their purpose almost entirely to spreading misinformation that benefited Trump: that Democrats could vote on a different day than Republicans; that Clinton had suffered a stroke during the final week of the campaign; and that an FBI agent associated with her email investigation was involved in a murder-suicide.


It’s hard to look back a year later without a sense of awe at Russia’s commitment to divide America using a weapon of our own making.

But their commitment also raises the question: why Twitter? Why would Putin invest so much in dividing Americans via a social media platform that only 16% of the country used? And surely he must have considered how many of that paltry percentage weren’t even registered voters, or were too young to vote? How many were even active, or signed in just to stargaze before dropping out?

How many of the 16% were actually paying attention?

It’s a logical fly in the ointment, and not one that can be easily explained.

…unless Twitter was never the prime offensive, but a modern-day Pas de Calais, a staging ground to divert the enemy from the real invasion...

Coming soon: "From Russia, With Likes"