
Mainstream tunnel vision, echo chambers, and filter bubbles

February 13th, 2018


Worrying about the trustworthiness and reliability of the information we consume has become one of the key concerns of our age – some commentators even argue that fake news, echo chambers, and filter bubbles are threatening democracy itself. Information professionals are skilled in finding reliable sources, taking a neutral standpoint, and understanding biases, but with the deluge of information being published nowadays, it is increasingly hard to assess every source. This creates the danger that researchers settle into using only a small set of sources and miss early warning signs, alternative voices, and contrarian viewpoints. Regulatory researchers need to be the first to know about developments in technology and new financial products; they cannot wait for the news to become “mainstream” and be covered by the major players, so the “popularity” of a source is not necessarily a useful metric for them.


The mainstream dominated the past

Before the rise of the Internet, there was little coverage of non-establishment viewpoints by major media players, and alternative publications were easily distinguished from the mainstream, largely because you could tell they were low-budget productions. Anarchist, neo-Nazi, new age, or other “fringe group” newsletters were obviously photocopied. Special interest news published by non-profits or community groups might have higher production standards, but tended to be associated directly with the group funding the publication and clearly branded, because they wanted everyone to know who they were – Greenpeace for environmentalist news, for example, or Amnesty International for human rights coverage. In other words, you only had to look at the publication to be able to tell where its biases were likely to be.

However, it was also very difficult to find contrarian or alternative voices that were raising new concerns, showcasing new technology, promoting new products, or discussing new ideas. Such sources were hard to track down, and researchers often relied on personal social or academic networks to “stay ahead of the curve”. Journalists could build a reputation and a career by “knowing who to talk to”, and researchers and analysts could gain an edge over their peers by building relationships with skilled librarians, who knew where to access hard-to-find but reliable sources, such as obscure but highly specialised technical or academic publications.


All sources look the same

The advent of the web promised a glorious “democratization” of the publishing process. At first, this was welcomed by many who thought that all the information they needed would become free and easy to find. Expensive technical sources would be made available to everyone. A single-author blog would be as easy to find as a mainstream newspaper. Researchers thought their jobs would become simple – the obscure but interesting voices that might previously have been missed would pop up with the same prominence as established sources. It was thought the cream would rise to the top, and alternative viewpoints would promote healthy debate and challenge the “establishment propaganda” of wealthy “old media” outlets that wanted to promote specific political agendas.

However, it was not long before “mainstream” organizations with a lot of money were able to produce slick, well-designed websites, which looked and worked differently to websites that had been hand-crafted by individuals or built by academics. Organizations with money were able to invest in marketing, SEO, and other promotional activities to ensure they transferred their dominance to the online world. “Alternative” sites that were less search-engine friendly, or simply did not get the PageRank popularity scores needed to put them on the first page of results, began to slip back into obscurity. It was at least still easy to tell the difference between an expensive and a low-budget site. Genuine academic sites had a certain look and feel, while individual blogs and personal sites tended to be far less complex than corporate sites.
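For readers unfamiliar with the mechanics, “PageRank” is essentially a link-based popularity score: pages that are linked to by other well-linked pages score higher. A minimal sketch of the idea, using a tiny invented link graph rather than anything a real search engine runs in production, might look like this:

    # Simplified PageRank: a page's score depends on the scores of the pages linking to it.
    # The link graph below is invented purely for illustration.
    links = {
        "corporate_site": ["news_outlet", "blog"],  # pages this site links out to
        "news_outlet": ["corporate_site"],
        "blog": ["news_outlet"],
        "obscure_zine": ["blog"],  # nothing links back to the zine
    }
    damping = 0.85
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(50):  # power iteration until the scores settle
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:
                new_rank[target] += share
        rank = new_rank
    # Well-linked sites end up with high scores; "obscure_zine", which nothing
    # links to, sinks to the minimum score regardless of how good its content is.
    print(sorted(rank.items(), key=lambda kv: -kv[1]))

The point is simply that a score like this rewards being linked to, not being right or being first.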

With the rise of high-quality blogging software and the falling cost of production technology, that gap closed, and those differences are now far more subtle. “Established” old media, such as local papers, have seen their budgets shrink, while technology has become cheaper, so anyone wanting to build a website from scratch on a limited budget can now produce a site that looks pretty much the same as an “established” one.

So, now we have satire, personal blogs, websites of “old media” outlets, and new sites that all look almost the same. That is the equivalent of your local anarchist collective being able to produce a newspaper that looks like Time magazine, and the National Enquirer looking much like The Economist, while the blogs of a reputable academic and an unqualified political pundit may look almost indistinguishable.


Mainstream tunnel vision

The tsunami of information being generated means that once again, researchers are finding they have to rely on personal networks, individual recommendations, or expensive paywalled publications to help them find the signal amidst the noise. This limits the ability of researchers to gain a truly balanced overview of a subject and leads to echo chambers where groups of researchers use the same limited set of sources, without realizing they have closed themselves into a bubble.

Tracking the blogosphere can be especially labour-intensive: new blogs need to be assessed for validity, authors identified and their credentials checked, and the rapid decay of online sources means that many excellent and useful blogs receive only a handful of posts before fading away. Distinguishing genuine bloggers who have valid points to make or interesting things to say from politically or commercially motivated spokespeople can be tricky. The incentive to spread biased information is particularly strong in the world of finance, where rumours can affect stock prices and real money in people’s pockets. The tornado of opinion – good, bad, biased, and indifferent – swirling around the topic of cryptocurrencies is a case in point.

A huge problem for analysts and professional researchers is that over-reliance on a manageable set of “trusted” sources, combined with a relatively limited network of personal contacts, leads to the same information being circulated. There is a tendency to rely on a smaller and smaller set of “verified” sources, but that takes us back to the situation of the past, where only a few loud, well-funded voices were heard. The interesting minority voices were just too hard to find, too deeply buried amidst the mess of propagandists, amateurs, and cranks.

This is particularly dangerous when it leads to an echo chamber of groupthink, as the alternative voice or warning signal gets lost. Once inside a filter bubble, researchers are in danger of thinking they have the full view, when in fact they are seeing only a tiny, selectively biased slice. Traditional search engines exacerbate the problem by using their relevancy algorithms to personalize results in ways that maximize advertising revenue, rather than ensuring the searcher is offered a complete and comprehensive overview of available sources.

Mass-market search engines want to provide quick and easy answers to a time-poor, demanding public who are not professional researchers and have little interest in issues like confirmation bias and objectivity. Mass-market search engines rely on advertising revenue, so it is in their best interests to flatter their users, not to challenge them with surprising or unexpected content. Everyday non-professional searchers do not need a fully comprehensive view of all angles of a subject – they just want a “good enough” answer. So the “safe bet” is to serve up sources your users have selected before rather than risk challenging them with anything new. In other words, searchers who are corralled into a tunnel and only see a small set of “safe bet” sources do not realize that their tunnel vision, however comforting, is not giving them the full picture.
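To make that mechanism concrete, here is a deliberately simplified sketch – invented source names and a made-up click model, not any real engine’s ranking code – of how a “safe bet” personalization boost narrows what a searcher ever sees:

    import random

    # Five hypothetical sources, all equally relevant to the searcher's topic.
    sources = ["newswire", "regulator_blog", "academic_preprint", "niche_forum", "trade_zine"]
    clicks = {s: 0 for s in sources}  # how often this user has picked each source before

    def ranked_results():
        # "Safe bet" personalization: put previously clicked sources first.
        return sorted(sources, key=lambda s: clicks[s], reverse=True)

    random.seed(1)
    for _ in range(200):  # simulate 200 searches by the same user
        results = ranked_results()
        # Searchers overwhelmingly click whatever sits at the top of the page.
        chosen = results[0] if random.random() < 0.8 else random.choice(results)
        clicks[chosen] += 1

    # One or two early favourites soak up almost all the clicks, so the ranking
    # keeps reinforcing them: a filter bubble, even though every source started
    # out equally relevant.
    print(clicks)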


Technology encouraged us into filter bubbles – can technology help us break out?

We are working on ways to use our understanding of relevancy to identify not just the obvious mainstream sources, but also the more interesting minority voices that researchers need to see. We are looking at ways to give researchers the ability to tune their research – by focusing on blogs rather than newspapers, for example – without the inconvenience of having to manage lots of different specialized search tools.

For regulatory analysts who need to be “ahead of the curve”, waiting until a story is covered by a reputable mainstream source is too late. In our fast-paced world, once the mainstream media have discovered a story it is already “old news”. Regulators need to be pro-active and pre-emptive. Analysts need to be the first to know, but unlike the olden days, when the “scoop” was a rare commodity, today the scoops are buried under mountains of junk. Only offering “popular” results is therefore not enough, so we will allow researchers to look for “obscure” as well as “popular” sources.
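As an illustration only – the source types, scores, and tuning knob below are hypothetical, not a description of how the OpenReg engine is actually built – a ranking that can be narrowed to blogs rather than newspapers, or tilted towards “obscure” rather than “popular” sources, might look something like this:

    from dataclasses import dataclass

    @dataclass
    class Result:
        title: str
        source_type: str   # e.g. "blog", "newspaper", "academic"
        relevance: float   # topical match to the query, 0..1
        popularity: float  # link/citation popularity, 0..1

    def rank(results, source_types=None, obscurity_weight=0.0):
        """Order results by relevance, optionally restricted to certain source
        types and optionally boosting less popular ("obscure") sources."""
        if source_types:
            results = [r for r in results if r.source_type in source_types]
        def score(r):
            # obscurity_weight = 0.0 gives a conventional popularity-flavoured
            # ranking; 1.0 rewards low-popularity sources instead.
            popularity_term = ((1 - obscurity_weight) * r.popularity
                               + obscurity_weight * (1 - r.popularity))
            return 0.7 * r.relevance + 0.3 * popularity_term
        return sorted(results, key=score, reverse=True)

    # Invented example data.
    hits = [
        Result("Big bank launches crypto desk", "newspaper", 0.80, 0.9),
        Result("Engineer's notes on a new token protocol", "blog", 0.85, 0.1),
        Result("Preprint on stablecoin settlement risk", "academic", 0.75, 0.2),
    ]
    print([r.title for r in rank(hits)])                        # popularity-flavoured view
    print([r.title for r in rank(hits, obscurity_weight=1.0)])  # obscure voices surface first
    print([r.title for r in rank(hits, source_types={"blog"})]) # blogs only

Whatever signals a production system actually weighs, the principle the sketch illustrates is that popularity becomes one tunable factor among several, rather than the gatekeeper.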

In the past, researchers could work with expert librarians who knew how to find a range of sources – including obscure but valuable ones – and assess them for bias and quality. With so many millions of articles published online every day, however, no human librarian or researcher can manage the load alone. Just hoping that you will somehow happen upon that nugget of information or hidden gem of a source does not give researchers confidence that they have done a thorough job. The OpenReg search engine will give researchers the confidence to know that, even without a personal expert librarian to help them, their work is thorough, complete, and reliable.

Fran Alexander
Co-Founder/CTO