The dismantling of democracy, manipulation by algorithm, and what to do next; Part 2 of Algorithms, bullshit, and the dismantling of democracy
Updates: 26 April, Facebook’s chief technology officer tells UK Parliament that Facebook did not read the terms and conditions that enabled Cambridge Analytica’s data grab; 22 April, Facebook reported moving 1.5 billion users out of reach of pending EU privacy law; 2 May, Cambridge Analytica ceases trading, at least under that name, in the US and UK. Part 1 here.
Computational propaganda: a structural problem
Political bullshit was with us before the rise to dominance of on-line news sources, but developments over the past decade have made things far worse. Philip N. Howard, Professor of Internet Studies at Oxford, studies fake news and elections, and as far back as 2014 he coined the phrase “computational propaganda” to describe what was happening.
Opportunity for such propaganda is built into the very fabric of mass social media. Targeted ads and “suggestions” protocols are not optional features; they are what Facebook is for. People join groups that they agree with, and discussion among like-minded people moves consensus further away from the middle ground. Facebook’s recommendation system makes things even worse. An investigator for Buzzfeed, having signed up for antivaxx sites, found herself getting recommendations for groups about Pizzagate, the perils of fluoride, chemtrails, and Flat Earth.
Facebook also makes it easy to propagate fake news under false flags. Thus the page “Native Americans United”, apparently from the Dakota Pipeline protesters, with the message “Love water Not Oil, Protect Our Mother,” was produced by the Internet Research Agency, a Russian troll farm. The same people also gave us a page “Black matters”, ostensibly part of the Black Lives Matter movement. Special Counsel Mueller has indicted 13 members of this troll farm, though they are clearly unlikely ever to enter his jurisdiction.
False news has a further advantage over reality on social media because it is generally more novel and attention-grabbing. Thus an analysis of Twitter shows that false news spreads faster, deeper (longer chains of transmission), and more broadly (total number of tweets) than true news. This seems to be the work of individuals, rather than bots. See here; full report here.
Facebook algorithms automatically promote those messages that keep people spending more time on the site. In underdeveloped countries with ethnic conflicts, this increases the visibility of hate speech messages, leading to communal violence. In Sri Lanka, for example, such bullshit messages have sparked attacks by the Sinhalese Buddhist majority on the Muslim minority.
Breitbart, Brexit, and Cambridge Analytica
Right: Cambridge Analytica’s front-page image, with the superposed message: “Data drives all we do. Cambridge Analytica uses data to change audience behavior.”
In The Guardian, Chris Wylie, one of the people involved in refining computational propaganda, describes links between Facebook, Russia, Bannon, Brexit, and his employer, the now notorious data-processing company Cambridge Analytica. (For more on these links, and the possibility that the Brexit campaign broke UK election law, see here) Political forecasting and targeting emerged naturally for him from his field of fashion forecasting and targeting, with which it has a great deal in common. If I know about your politics, that helps me guess your taste in shoes. This is interesting to shoe salesmen, who will find a more receptive audience among this or that political group. Now reverse the flow. People who “like” a particular brand of shoe are more likely to respond to a particular political message. Such relationships may be impossible to predict, but emerge naturally from scanning large amounts of data, and Facebook users who responded to “personality test” bait were giving away a mass of information, not only about their own “likes”, but about those of their Facebook friends. Aleksandr Kogan, the Cambridge University researcher who obtained the data later used by Cambridge Analytica, compares the method to that used by Netflix in targeted recommendations of films, and while others have drawn attention to the lacklustre performance of such methods in assigning personality type, Kogan reportedly claims 85% accuracy in discriminating between Republican and Democratic voters.
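The reversed flow that Wylie describes, from consumer “likes” to political receptiveness, can be caricatured in a few lines of code. This is a toy sketch with invented page names and data, not Cambridge Analytica’s actual method; real systems fit statistical models over millions of profiles rather than counting co-occurrences.

```python
# Toy illustration: if page "likes" correlate with political affiliation,
# observed likes can be reversed into a crude affiliation guess,
# Netflix-style. All page names and data below are invented.

from collections import defaultdict

# Hypothetical training data: (set of page likes, party) pairs.
training = [
    ({"truck_brand", "country_music"}, "R"),
    ({"truck_brand", "hunting_gear"}, "R"),
    ({"indie_films", "organic_food"}, "D"),
    ({"organic_food", "yoga_studio"}, "D"),
]

# Count how often each like co-occurs with each party label.
counts = defaultdict(lambda: {"R": 0, "D": 0})
for likes, party in training:
    for like in likes:
        counts[like][party] += 1

def guess_party(likes):
    """Crude affiliation guess: sum the per-like party counts."""
    tally = {"R": 0, "D": 0}
    for like in likes:
        for party, n in counts[like].items():
            tally[party] += n
    return max(tally, key=tally.get)

print(guess_party({"truck_brand"}))   # prints "R" on this toy data
print(guess_party({"organic_food"}))  # prints "D" on this toy data
```

Even this caricature shows why the method needs no theory of why a given like predicts a given vote: the association simply emerges from the data, which is exactly what makes large harvested datasets valuable.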
How did Cambridge Analytica come by all that information? According to the New York Times, it had $15 million from a major hard-Right Republican donor, Robert Mercer, and interest from Stephen Bannon, Trump’s political adviser at that stage. (The Mercer family are major donors to Breitbart, in which they own shares.) Cambridge University’s Aleksandr Kogan had acted completely legally in paying Facebook for permission to post a personality questionnaire, which enabled him as an academic exercise to harvest viewing and “liking” data, not only from the quarter million Facebook users who took the questionnaire, but from more than 50 million of their Facebook friends. Cambridge Analytica then purchased the data from Kogan.
Let me mention here that on April 4, Facebook admitted that most of its 2 billion users have had their public profiles collected by outside companies.
A document leaked from Cambridge Analytica claims that it generated 10,000 different ads targeted to different audiences during the 2016 US election campaign, and that these were viewed billions of times. (Of course, we don’t have to believe these claims, which may well be calculated to drum up custom.)
The effectiveness of Cambridge Analytica’s campaign is seriously questioned by social scientists, and I suspect that some of the eagerness to focus on their role comes from reluctance among the losers to accept responsibility for a disastrously misjudged campaign.
But there are plenty of reasons to take these claims seriously, on both sides of the Atlantic. The [US] Advertising Research Foundation gave Cambridge Analytica its David Ogilvy Award for matching ads to viewers in the Trump campaign. Current investigations of possible illegal spending by the Vote Leave campaign in the run-up to the Brexit referendum have exposed close links between that campaign and the company called AggregateIQ, effectively a subsidiary of Cambridge Analytica. According to Brittany Kaiser, CA’s former business development director, in evidence to the UK Parliament, Cambridge Analytica met regularly over a period of five months with representatives of Leave.EU, Ukip, and Eldon Insurance, owned by Leave.EU’s founder Arron Banks. And according to a statement by Vote Leave director Dominic Cummings, formerly posted on the AIQ website, “Without a doubt, the Vote Leave campaign owes a great deal of its success to the work of Aggregate IQ. We couldn’t have done it without them.”
The Irish Times reports that this quotation has now been removed from the AIQ website. It also reports that Kanto, another opinion management company with strong links to Cambridge Analytica and to UKIP, has been hired by Save the 8th, defenders in the upcoming referendum of Ireland’s constitutional ban on abortion.
Julian Wheatland, former Chair of Oxford West and Abingdon Conservative Association, has now been promoted to CEO of Cambridge Analytica.
For more on the technique, and on an actual conference bringing together Facebook and shabby advertisers of fake pills and “clean your computer” scams, see Bloomberg’s report here.
How to respond?
At the individual level, I again commend the Pro-Truth Pledge, by signing which I have committed myself (among other things) to critical examination of sources before I repeat material. I would also re-emphasise the importance of refraining from drawing attention to outrageous fake news, since this plays into the hands of those who manufacture it. That is why, here, I have linked to reliable secondary sources when reporting on these.
For social platforms to block the most extreme content, Ball suggests, may be part of the solution, but who’s to say what is extreme? Do we really want Facebook deciding what may or may not be said in public?
Do we trust governments to do so? The governing parties in the US, Italy, and pro-Brexit UK have been beneficiaries, if not indeed instigators, of fake news. Sometimes the Government as such is itself the fake news source. Historical examples of this abound, and one EU member, Hungary, is one today. Turkey, a member of NATO, is another.
The Drudge Report, according to Wikipedia, was prominent in promoting the Swiftboating of John Kerry (fake news that sabotaged his 2004 US Presidential campaign), the Bill Clinton illegitimate child story, and many others, including false claims that the fires that recently ravaged California had been started by immigrants. Yet Facebook has been criticised for omitting The Drudge Report from its “trending” list, on the grounds that this is a form of censorship.
The censorship issue is real. Last month the Malaysian Government published plans to jail publishers of fake news for up to ten years. But who decides what news is fake, and if a Malaysian prosecutor were to say that a piece like the one you are now reading is fake news because it casts aspersions on the Government’s credibility, how likely are the courts to disagree?
Worse, a dictatorial government may itself require Facebook, as a condition of continued operation within its borders, to remove material that it does not like, and Facebook has already acceded to such requests in Russia, Morocco, Tunisia, and Israel (https://www.washingtonpost.com/news/democracy-post/wp/2018/04/13/why-dictators-love-facebook/).
A few more words about Facebook
Facebook can’t offer more privacy, even for a fee, without destroying its whole business. Such is the view of Scott Kominers, writing for the financial news service Bloomberg. Selling your data is what Facebook is all about.
Zuckerberg has been apologising for Facebook’s invasions of privacy, and promising to make its opt-out settings more transparent, for over a decade, and did so yet again in testimony to Congress. (For more on how open Facebook has been to abuse and how long this has been known, see here). An open letter, demanding, among other things, that Facebook start notifying users whenever they have been exposed to fake or malicious content, has gathered over a million signatures. Zuckerberg does promise that “if we find that someone is improperly using data, we’ll ban them and tell everyone affected”, a much weaker commitment (especially as there is no clear definition here of “improper”). And Facebook users were promised a link at the top of their newsfeed that briefly showed at the top of mine; I did not find it very helpful.
I think that a close reading of Zuckerberg’s testimony shows many matters that should, urgently, be made the subject of new legislation. For transcripts of US hearings, see here and here. For more on hearings before the UK Parliament’s culture committee, see here. For a critical analysis of all these hearings, which concludes that “Facebook fails to back up on its pleas for the public’s trust”, see here. I am not optimistic.
Ball’s book makes a number of recommendations, but diagnosis is always easier than treatment. Avoiding the seduction of bullshit requires hard work and conscious deliberation, and expecting this online may be far too much to ask. Nonetheless, here are his recommendations:
To politicians: don’t explain, and don’t rebut point by point, since that just draws attention back to the main message. Don’t focus only on fake news, which is just part of a hyper-partisan political scene. To candidates: don’t present yourself as against the system, since you don’t want to destroy its legitimacy. But don’t let yourself be pigeonholed as part of the establishment either; consider how people like Farage, Trump, and Johnson have succeeded by presenting themselves as opponents of an establishment of which they are in reality privileged members.
To schools: teach media literacy. To the media he says: watch your headlines (I would add that it does not help that headlines are written by editors, not reporters). For instance, USA Today ran the headline “Trump: Clinton won popular vote because millions ‘voted illegally'”. Such a headline will do more to keep the claim alive than to quash it. Keep it simple and transparent. Reconsider the traditional goal of writing as if from nowhere (i.e. with omniscient impartiality and no editorial bias); your opponents will disbelieve your claim of impartiality, and they may well be right. Get outside your bubble, and take your audience outside theirs (Buzzfeed, Ball says, is experimenting with links to Facebook posts covering diverse positions). Do not overestimate the value of fact checking, since factual error may be unimportant compared with the general flavour. Publicise corrections as fully as the original misstatements. Critically examine sources; remember the existence of agencies whose sole job is to sell catchy stories regardless of truth. Build a new kind of public media (what?). Scandals such as improper police activity, the behaviour of the banks, Westminster politicians’ expenses, and abuses by the press have combined in the UK to undermine confidence in established sources and institutions. I have myself noticed the expression “MSM” (Mainstream Media) used as a pejorative on sites with which I otherwise tend to agree, which unfortunately places NPR and The Guardian in the same category as Fox TV and the Daily Mail. In the US, a study by Public Policy Polling showed 69% of Trump voters agreeing that the news media is an enemy of the American people. And Trump himself accuses all his critics, and those who challenge his statements, of spreading fake news. So those most in need of input from the MSM are least likely to be open to it.
To readers and voters he says: get outside your bubble. Engage what Kahneman calls “system two“, the deliberative and reflective. Get a sense of scale; his example: the estimated cost of benefit fraud is £1.3 billion a year, which sounds like a lot, but is tiny as a proportion of £780 billion in annual government spending. Be as sceptical about claims you agree with as about claims you don’t. Try not to succumb to conspiracy theories. I would point out, however, that there really are conspiracies, and that much of this piece concerns them. Ball himself refers to the 2016 report from the NATO Defence College on Russian Information Warfare, and how it seeks to obfuscate issues and undermine institutions. So there really is a conspiracy to spread conspiracy theories.
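Ball’s sense-of-scale example is worth working out explicitly, since the point is precisely that the raw number misleads:

```python
# Benefit fraud as a share of annual UK government spending,
# using the figures quoted in the text.
fraud = 1.3e9      # £1.3 billion estimated benefit fraud per year
spending = 780e9   # £780 billion annual government spending

share = fraud / spending * 100
print(f"{share:.2f}%")  # prints "0.17%"
```

Roughly one sixth of one percent: a headline figure of “£1.3 billion” and a proportion of “0.17%” describe the same fact, but invite very different reactions.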
Ball concludes by saying that if “we are all part of the problem, we can all be part of the solution”. This seems to me wildly optimistic. Bullshit was with us long before present-day mass communications. It is so easy and so profitable that it will not go away. The sheer volume of information now flowing makes it impossible to monitor at the corporate level, while encouraging superficial scanning at the individual level. The problem of bullshit cannot be solved. The best we can do is to manage it.
An earlier version of this piece appeared in 3 Quarks Daily.
Posted on April 27, 2018, in Politics, Society and tagged AggregateIQ, Aleksandr Kogan, Arron Banks, Breitbart, Brexit, Brittany Kaiser, Cambridge Analytica, Chris Wylie, David Ogilvy Award, Facebook, Leave EU, Robert Mercer.
Another possible solution to fake news and bullshit etc might be to simply tune out the human race except for those we have direct contact with like family and neighbors etc.
Ok, so I’ve been a news junkie for 50 years so I’m not succeeding at taking this advice myself. But honestly, the vast majority of what I hear on even a quality source like NPR is really not necessary information. They blab on and on and on about Trump all day long, his latest tweet, how many times he farted last week, etc etc. It’s not fake news, but it’s not particularly useful news either.
The overwhelming majority of what’s on Facebook and other social media sites is just lazy little blurbs of nothing gibberish. Never trust any medium where typing more than a sentence at a time makes you a long winded blowhard!
I dunno. Sure we could try to filter the fire hose of data. Or we could just turn it all off?
“they are what Facebook is for. People join groups that they agree with, and discussion among like-minded people moves consensus further away from the middle ground.”
Unfortunately, the media act as though Facebook is at all times a reflection of the general population’s opinions, and fill their pages with copies of posts from Facebook and Twitter as if they were random vox pops, thus perpetuating whatever storm is currently raging, puffing it up out of all proportion and giving it false credibility.