The dramatic growth of social media over the last decade, combined with a serious decline in both the quality and quantity of traditional news media, has resulted in people obtaining more information from less reliable sources. As anyone who uses Facebook has witnessed, fake news circulates and goes viral as people accept it as true and repost it without thinking.
The use of social media bots to influence elections is only the tip of the iceberg. By posting and reposting carefully selected information, it is possible to create the impression of consensus, groundswells, and popularity where none actually exists. While circulating fake news is a common occurrence, more insidious tactics, such as inserting a false element into otherwise factual information or presenting accurate information out of context, can be harder to spot.
“There is a knowledge chasm where our online behaviour is at severe risk,” according to Ottawa-based Cultural Communications Strategist Lucia Harper. “This knowledge gap exists in critical thinking and pattern recognition. Media Literacy Education is about pattern and bias recognition for the players online and within ourselves. With a strong critical thinking component we can drive personal insight in our online behaviour.”
“We are moving out of the age of innocence as we come to understand the magnitude of information we are inundated with that is manipulative,” explained Harper. “There are so many mechanisms designed to generate behaviours to further their goals of data gathering, stealthy access, and ideological propagation. Chatbots, neuromarketing, and clickbait all use emotional manipulation to ensure knee-jerk responses and sharing. We are at risk from mercenary opinion influencers. They affect our peace of mind, our political positions, our purchases, and our charitable endeavours; and one wrong move and you are subject to online bullying in the adult world the likes of which have never been seen before. These online efforts to capitalize on our emotions are less powerful when we take measures to ensure that we are not at risk for manipulation. The players are counting on certain emotions to take you out of critical thinking positions.”
As with other security threats, technology can help. Researchers have developed algorithms to identify bots. Facebook’s disclosure to the US House and Senate Intelligence Committees and the Senate Judiciary Committee included information on the content and targeting of approximately $100,000 worth of ads purchased between 2015 and 2017. The company was also able to establish that the ads were tied to 470 accounts and Pages “associated with a Russian entity known as the Internet Research Agency.”
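Production bot detectors use machine learning over hundreds of behavioural signals, but the underlying idea can be illustrated with a toy rule-based score. This sketch is purely hypothetical: the feature names, thresholds, and weights below are invented for illustration and do not come from any real detection system.

```python
def bot_score(posts_per_day: float, followers: int, following: int,
              account_age_days: int, has_profile_photo: bool) -> float:
    """Return a score in [0, 1]; higher suggests more bot-like behaviour.

    All thresholds and weights are illustrative assumptions, not values
    from any published detector.
    """
    score = 0.0
    if posts_per_day > 50:                         # few humans sustain this volume
        score += 0.35
    if following > 0 and followers / following < 0.1:
        score += 0.25                              # follows many, followed by few
    if account_age_days < 30:
        score += 0.25                              # freshly created account
    if not has_profile_photo:
        score += 0.15                              # default avatar
    return min(score, 1.0)

# A high-volume, brand-new account with few followers scores near 1.0,
# while a typical long-lived personal account scores near 0.0.
print(bot_score(posts_per_day=120, followers=12, following=900,
                account_age_days=10, has_profile_photo=False))   # 1.0
print(bot_score(posts_per_day=3, followers=300, following=280,
                account_age_days=1500, has_profile_photo=True))  # 0.0
```

Real systems replace hand-tuned rules like these with classifiers trained on labelled accounts, which is why they can catch subtler coordinated behaviour than any single threshold would.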
Once such activity is identified, responding to it is complicated. Bots and individuals controlling multiple accounts are obvious terms-of-service violations and relatively easy for providers to terminate. But even if social media companies want to do the right thing, the decision to turn away ad revenue in all but the most blatant circumstances is difficult, especially when companies are headquartered in democracies that pride themselves on freedom of speech.
Censorship on social media is problematic. For example, should Facebook refuse to accept ads that assert false information? It is obvious that the world is not flat and that gravity exists, but many ads focus on emotion and opinion. Social media companies would be foolish to wade into whether a politically oriented ad is true or appropriate.
One solution may be to require greater transparency in advertising. Facebook Pages and ads, for example, do not require any meaningful level of attribution. Anyone can start a Page on any topic, create posts, and pay Facebook to “promote” them. While session metadata and payment information are known to Facebook, no indication of source is provided to users. As a result, an individual in Russia (or anywhere else) can create a Facebook Page appearing to represent a Canadian organization or local topic of interest and target advertising, thereby exerting external influence.
Conclusively geolocating a Page owner or ad purchaser can be difficult, and mandatory attribution may not be acceptable to social media companies given that their real business is advertising. Perhaps it would be more palatable to offer attributed Pages and ads on a voluntary basis, with non-attributed material clearly labeled. Consumers could then decide for themselves how much weight to give information from anonymous sources. Reputable advertisers who want to be taken seriously would welcome a validation process.
But technology can only go so far. People must stop blindly believing everything they read on the Internet and engage critical thinking skills. Harper’s mission at YEPBusiness.com is to “ensure that organizations are providing 21st century life skills training in Critical Thinking, Media Literacy and Cultural Competence.” Those skills are becoming more important every day.
Have a security question you’d like answered in a future column? Eric would love to hear from you.