The Misinformation Age

The coming of the internet promised easier access to information on a previously unimaginable scale. Until the first truly ingenious search tool was invented by a company whose name alludes to an enormous quantity, there was fierce competition over the ability to find that information. There is still competition for the best aggregation of information, and arguably no single compelling solution. But now, after the first few decades of the information age, it is unmistakable that the central issue in developed information economies is neither access to information nor its quantity, but rather its curation and verification.

A few months ago a Forbes article posited that 2.5 quintillion bytes of data were being created every day. On a purely anecdotal basis, it is possible to consume thousands of tweets per day. (You really shouldn’t.) I don’t even have the nerve to search for how much content is being generated on Facebook or Instagram. We are drowning in information, even those of us who only regularly visit, say, Reddit or Instagram.

The effects of such inflows of information are still being studied. But this post is not about the consequences, good or evil, of the volume of information. Rather, it is about its value, or lack thereof. More importantly, it is about the market opportunity for information curation and validation, which I believe is set to be one of the more important markets of the next decade or so.

The Personal Touch

Just a few hours ago I was strolling in Seattle’s First Hill neighborhood with some cousins, one of whom asked me the price of a ticket to the Seattle Aquarium. This type of information request is still commonplace in social gatherings, as it is potentially highly efficient, fosters companionship, and serves as an indicator of trust and perhaps affection…or at the very least as something to fill a gap in conversation. If you pause to think about it, though, such requests, common as they are, are quite flawed when viewed through a lens of objectivity. First, the question presumes I have visited the aquarium, or at least contemplated visiting closely enough to look up ticket prices; second, it presupposes the aquarium is something I’d be interested in; and third, it may, though not necessarily, imply that what I enjoy she would also enjoy.

In this particular case, my cousin was simply being savvy; we tend to enjoy the same things, and I have indeed looked up ticket prices to the aquarium. This is the key reason such questions are so frequent among relatives and friends: we presume that our similarity, genetic or otherwise, will translate into similar preferences. To some degree this can be accurate, but only to some degree. By and large, disparities in taste mean that offhand asks, such as “Hey, do you recommend this coffee shop?” posed to a friend who lives in that neighborhood as you happen to pass by, and the recommendations that follow, can easily backfire.

That is the degree of error implicit in personal recommendations about fairly low-stakes decisions, such as where to get coffee. Yet we all engage in such requests and recommendations constantly, because they are a personal and personable route to gathering potentially useful information, even though they are riddled with subjectivity. What happens, however, when we move such interactions one step away from the face-to-face, into the digital realm?

Sharing Isn’t Necessarily Caring

By now, multiple tropes have sprung up around the sharing of content by connected yet semi-remote people on social networks, from the “crazy uncle who shares memes about building the border wall” to the “Starbucks-sipping soccer mom trying to stay hip”. Regardless of whether the content shared makes you chuckle or leaves you vaguely peeved, the fact is you consume at least part of it, often the most salacious, incendiary, or misleading tidbit that spawned the headline.

This is what I like to think of as two-degree-distant information, as opposed to the one-degree, face-to-face interaction. You are most often connected on social media to people you have actually met in person, as extreme extroverts are rare. These are people you have at least some slight investment in staying connected to. So, how do you treat information from them? If it’s purely personal, that is the best of all possible worlds, since sharing personal updates is the most useful, if mundane, purpose of social media, e.g. learning that my friend in London has been training for races. That way, you can bond over new fitness routines when you see them again.

What if this information is not personal, or does not concern something the sharer is invested in deeply enough that others would trust them on it? To illustrate, there is a difference between your friend Bob, a data scientist, sharing an article about “10 common myths in data science” and the same Bob opining about the best way to cook turkey just because he really likes turkey. (By the way, turkey tends to be garbage meat, so there really are few to no good ways of cooking it. Stick to better meats like pork, chicken, beef, fish, alligator, quail, duck, rabbit…you get the picture.) The disparity obviously lies in areas of expertise. But how many of your friends and family really stick to their areas of expertise when sharing content or information? To take my own egregious example, I retweet dozens of observations and articles weekly simply because I found them interesting. That’s it. My sole criterion is my personal interest. Fortunately, nobody regards me as an expert, but at best as a prolific, weak-form curator.

The point of this line of questioning about information gathered from personal connections is that it represents a significant share of the information we are bombarded with each day, and even though it is generated by people we are connected to and care about in varying degrees, that doesn’t necessarily make it valuable, or even accurate. It merely makes it more personal. But therein lies the rub.

The Market for Misinformation

A huge market exists for misinformation, and it’s all in your own head. We are primed from birth with information that shapes our worldviews, our likes, and our dislikes. Information that contradicts those established, favored facts and beliefs is cognitively more difficult to process, and consequently we seek to avoid such jarring revelations. It takes valuable time to read through something, after all, even if you continue to disagree with it afterward.

Hence we are, in a way, doubly primed to favor the articles shared by friends we get along with very well, especially on topics where we know we hold similar opinions. Confirmation bias is a heady drug. Accordingly, a huge market for misinformation has always existed and always will, and it grows vaster every week as the internet continues its inexorable expansion. The best-known examples include the 2016 election in the US and the abuse of Facebook networks in Southeast Asia, but there are countless other salient examples of demand for misinformation that confirms what certain markets, or tribes, so to speak, already believe.

But, amid this flood of information, much of it misinformation, what can we actually believe? How can we figure out what is true or not?

Quid Est Veritas

Pontius Pilate’s famous question, “What is truth?”, often serves as part of the reason he is maligned. But he shouldn’t be maligned for that question alone. It is one of the fairest questions we can ask, and nowadays especially, asking “What is true?” more frequently is a noble endeavor. (What truth actually is, in noun form, is a much harder question to answer, frankly.) What can actually serve as useful criteria for assessing truthfulness, in an age when even bastions of public information like the Wall Street Journal and the New York Times are decried as shills for agendas, and the very office of the US presidency is occupied by someone whose accuracy can be most generously described as suspect?

Here are a few rules for trusting information that I think are the most practical:

  1. Information that neither helps nor confirms its source in any way.
  2. Information that actively hurts its source.
  3. Information that corresponds to the most widely and historically held beliefs.

All of these except the second can be difficult to apply in some cases, but they are simple enough to be quite useful as heuristics. For example, it does the Pew Research Center little to no good to publish the results of a survey confirming that the American political spectrum is becoming more and more bimodal, i.e. that there is less common ground between registered Democrats and Republicans than there has been in decades. One could argue that such bad news might benefit its social reach, under the belief that “if it bleeds, it leads”, but since everyone has already been able to see this bimodal distribution in action in the wake of the 2016 election, publishing findings that many presume are common knowledge does Pew little good beyond sustaining its utility as a source. Hence, Pew is likely to be trusted.

It may be a lesser-known law of public relations that in nearly all cases it is best for corporations to admit any wrongdoing and accept the consequences rather than try to execute a cover-up. At the individual level we try to cover things up so frequently that such corporate candor may seem too good to be true. Yet it usually holds: getting out ahead of a negative finding is the best course of action, simply because people already tend to believe the worst, so it is best to admit to the truth as far as it is known. For example, Facebook’s Cambridge Analytica debacle is thought to have contributed to its record-setting market-cap loss in the past few months. But until Facebook owned up to the full extent of the breach, the rumors swirling around its true reach were far worse than the reality. Furthermore, once a company confirms the extent of its wrongdoing, there is a strong predilection to believe that at least that much is true. You may still suppose, if you are negatively biased toward Facebook as I am, that there is even more skulduggery lurking in the shadows, but even I am willing to believe that what they have admitted thus far is true.

Last but not least, the historical process of trial and error conducted by humans over millennia is perhaps one of the most reliable heuristics of all. We know that certain foods, prepared and paired in certain ways, persisted for centuries in a given region because the combination was nutritious, tasty, and easy to produce given the region’s natural abundance. To stay with diet for a concrete example: there was no real reason to strip the nutritious husk from a grain of rice, which served our ancestors well, until we finally gained the ability to mass-produce white rice. We know that we can trust certain people in certain scenarios. Utilizing the inherited wisdom of our ancestors can reduce errors of all kinds.
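To make these three heuristics concrete, here is a minimal sketch in Python; every name, weight, and threshold is hypothetical, chosen purely for illustration rather than taken from any real system:

    from dataclasses import dataclass

    @dataclass
    class Claim:
        source: str
        helps_source: bool       # does publishing this benefit the source?
        hurts_source: bool       # does publishing this damage the source?
        matches_consensus: bool  # does it match widely, historically held beliefs?

    def trust_score(claim: Claim) -> float:
        """Toy score in [0, 1] combining the three rules above."""
        score = 0.5  # start neutral
        if claim.hurts_source:
            score += 0.3   # rule 2: statements against interest are most credible
        elif not claim.helps_source:
            score += 0.15  # rule 1: no self-serving motive detected
        else:
            score -= 0.2   # self-serving claims deserve extra scrutiny
        if claim.matches_consensus:
            score += 0.15  # rule 3: agrees with long-held, widespread belief
        return max(0.0, min(1.0, score))

    # A company admitting a breach scores higher than one touting its own product.
    print(trust_score(Claim("Facebook", False, True, False)))     # 0.8
    print(trust_score(Claim("Vendor blog", True, False, False)))  # 0.3

The exact numbers are beside the point; what matters is that the three rules are mechanical enough to be encoded at all, which bears directly on the market opportunity below.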

The Market Opportunity

If I were a much smarter man, I’d already be working on the company that can crack this. But it’s a bit complicated, so I’m still thinking my way through it. Essentially, given the flood of information we all bathe in daily, we often still employ the classic if flawed heuristics of relying on personal recommendations and/or authorities, e.g. the New York Times. The problem with relying on a brand is that a brand, however reliable its incentives may make it at times, ultimately relies on a person or machine with bias. (Yes, all machines have biases too.) Driven by our tribal instincts, we still unconsciously associate the most popular brands, or whatever information aligns with our views, with veracity, to a dangerous degree.

A curator is necessary. And I think it isn’t impossible to improve upon the current state of mass media by simply creating a customized blend of sources that can be trusted, but only on certain topics. Of course, that is only the first step. Next, the track record and biases of each source, such as the Wall Street Journal editorial page, must be carefully studied, codified, and then continually monitored to catch bias drift. As a final step, a short, handy rating of sorts could be applied, either to filter content down to acceptable levels of certain types of bias or even to only ever serve pairs of countering biases.

This may sound like overkill. Why not just read Matt Levine for his take on notable Things That Happen? No offense to Mr. Levine, but then we must rely on his take, which, as excellent as it often is, is limited to, as he declares, Money Stuff. Furthermore, we would be relying on human-scale curation. I’m envisioning ratings for the entire internet, which necessarily demands machine scale. Granted, there will still be some bias involved, since humans typically have to build these things, but at the least it would be a step further along than the human-designed algorithms that already dominate our social media feeds. Imagine a Twitter feed in which every article carries a short bar indicating veracity, which one could click for a full rationale based on the source, the article’s topic, the source’s track record on that topic, demonstrated domain expertise, and more.
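As a rough sketch of how such a rating might be computed, here is one way to keep a per-source, per-topic track record, assuming a labeled history of past claims already exists; the names and the smoothing scheme are hypothetical, one possible design among many:

    from collections import defaultdict

    # Hypothetical store of past fact-checks: (source, topic) -> list of outcomes
    history: dict = defaultdict(list)

    def record_outcome(source: str, topic: str, was_accurate: bool) -> None:
        """Log whether a past claim by `source` on `topic` checked out."""
        history[(source, topic)].append(was_accurate)

    def veracity(source: str, topic: str, prior: float = 0.5, weight: int = 5) -> float:
        """Track record on this topic, smoothed toward a neutral prior so that
        sources with thin histories are neither over- nor under-rated."""
        outcomes = history[(source, topic)]
        return (prior * weight + sum(outcomes)) / (weight + len(outcomes))

    # The "short bar" in the feed could simply render this number per article.
    record_outcome("WSJ editorial page", "monetary policy", True)
    record_outcome("WSJ editorial page", "monetary policy", False)
    print(round(veracity("WSJ editorial page", "monetary policy"), 2))  # 0.5
    print(round(veracity("Unknown blog", "monetary policy"), 2))        # 0.5, no history yet

Smoothing toward a neutral prior matters here: a brand-new source should start out as “unknown” rather than untrustworthy, and a single lucky hit should not vault it to perfect credibility.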

A host of minefields remains to be explored as I think through this idea, but that is as far as I have gotten. In the Misinformation Age, such a tool is more than important; it is critical.
