
The second wave of digital addiction is coming…or is it?

Thesis: Even as modern societies struggle with the adverse effects of digital immersion on mental and social health, a second wave of even more immersive experiences is imminent, with unanticipated yet not wholly negative consequences for cultural norms.

Virtual reality is quite hard; augmented reality may be even harder, all told, based on my admittedly uninformed sense of its inherent engineering challenges when interacting with the physical world. But as Oculus and other companies – aided by massive capital infusions even when they may never release a viable product (*cough* Magic Leap *cough*) – continue to slowly advance the basic tech in the space, it isn’t inconceivable that we end up with an even more immersive digital experience than the one currently proffered by smartphones.

That does not bode well for the human brain’s ability to resist the chemical rush of notifications, always-on communication and the rare yet always treasured brand-new, high-quality meme. If you think memes have already broken the ability of everyone under 30 to converse normally, wait until they are interactive (arguably, TikTok is already doing this). But however much more compelling a seamless digital-physical reality may prove in major urban centers with tech advanced enough to run such simulations, there may actually be more positive outcomes than one would suppose at first.

As #InstagramReality is already proving, it’s irresistibly tempting for people to portray themselves in as flattering a light as possible online – that’s just human nature. As more virtual overlays become possible in video sharing and photos overall, the norms of any given reality will be so obviously reset to impossibly glamorous, stylized levels that more authentic communication will become chic again. This is already happening to some degree – people will still want to look their best, but a new median will arise that blends digital polish with physical imperfection more closely. (I tend to agree with Alex Danco – https://alexdanco.com/2019/12/17/ten-predictions-for-the-2020s/ – that glasses or headsets for virtual reality are a tough sell beyond gaming, which is why I think these applications will still be conveyed primarily via smartphone interfaces.)

But another intriguing possibility pops up here. Deepfakes are already starting to proliferate, so it’s only a matter of time before they become a two-way street. Rather than being used merely for lulz or cons, both governments and individuals will likely employ AR-empowered ‘live’ filming and photography to further their agendas, e.g. to evade facial recognition, find hidden details or promulgate whatever narratives they want. Services that help evade identification and/or preserve privacy will pop up as editing tools for any imagery, letting people keep using social media as normal. Heck, that may finally be another use case for crypto-based payments that helps popularize them even further.

I’d argue that’s more of a social positive than may be suspected when one contemplates a future à la Blade Runner 2049, wherein AR displays are primarily for advertising or, well, overlaying a VR presence onto a human woman for reasons you can probably guess at. Holograms are still likely a ways off, but there could also be value in rendering communication in an even more lifelike fashion via screen sharing, or in any manner of telehealth. Think of any virtual appointment you can conduct with a primary care physician via video right now – One Medical comes to mind – and imagine a much more lifelike consultation wherein your phone’s camera scans whatever portion of yourself is visible and recreates it as a 3-D model for the physician to analyze on their end. (Yes, this will lead to hilarious videos that will inevitably leak online.)

All in all, there could be quite a few more positives to AR in the future than an even-worse digital addiction epidemic. As always, there is more nuance in reality than may be suspected at first.


The Cycle of Centralization, Part 1

Thesis: What makes the most sense to decentralize? The maintenance and operations handled by the people closest to actually doing the thing. What makes sense to centralize? The most common denominators and broadest parameters of any scenario, e.g. laws that apply to all humans based on human nature, and the decisions directly pertaining thereto.

Human history rhymes, but it does not repeat, as the saying goes. Composed of intertwining cycles, history often lurches from one extreme to the other, though it’s worth contemplating how much swifter those lurches have become given the compounding powers of technology. A critical cycle that often provokes significant division is the basic swing between centralization and decentralization. Especially now, as we head into the Roaring 20s 2.0 (as if that weren’t confusing enough already), Bernie Sanders somehow appears to be gaining steam again, Venezuela rampages toward complete failed-state status, Putin appears immortal and Boris Johnson proves UK voters only trust politicians with terrible haircuts, it seems the center truly cannot hold.

However, there is a more even tension, and potential for balance, between the forces of centralization and decentralization than may appear at first glance. I’d contend that right now, several different phases of the centralization cycle are playing out across political systems, economies and sectors (all of course related):

  1. China is attempting to stick the landing of a pseudo-capitalist yet centralized state, but there are more troubling signs for China than many suspect.
  2. The US is recentralizing somewhat and potentially will recentralize even further if, as seems likely, antitrust talk becomes action in the next couple of political cycles. However, the groundwork at the fringes is being laid for decentralization.
  3. The EU is slowly decentralizing, as potentially the most audacious experiment in recent political history starts to totter.
  4. Centralized online information flows are approaching senescence and thus are likely to start splintering; the decentralization/destruction of traditional news media is nearly complete.
  5. Entertainment within the most developed nations is mostly centralized but beginning to fray at the edges as consolidation reaches its apex, e.g. Disney becoming a monolith in the US. In less-developed nations, however, entertainment cycles are still oscillating significantly.
  6. The energy industry is looking set to start decentralizing very slowly over the next several decades.

What’s interesting and instructive about the above is that each case exemplifies how and why centralization SHOULD occur, as opposed to where it naturally comes into being due to system design and/or sheer scale. Let’s examine each of the three main arenas above that have contributed to these different phases of the centralization cycle:

Information Flows

Is it now time to declare that open source has won? Or could another attempt at a Microsoft-like monopoly rise again in the future? I think not, for a variety of reasons, but the most compelling one is that there is no point in trying to wall off access to code bases, and the ability to innovate on them, when you can instead own the infrastructure behind them and learn from the best of what unpaid volunteers decide to offer. It’s simply better business.

For a similar reason, Apple is pivoting mainly to services: they have seen the writing on the wall and know that continuing to offer the best hardware and software fusion in the world for mobile is not possible for much longer, given the degree of competition and innovation already beating them (for example, it’s arguable that Google has been making better phones overall for at least a few years).

In short, it’s much easier to profit from offering a relatively open, accessible base for experimenting with code and/or the infrastructure for people to get their projects up and running. (That’s why Google Cloud, Azure and AWS have grown so massively in the past several years.) The primary reason is that decentralization works extremely well for systems with unified rules of logic operating in intangible environments.

However, as those systems create ever-improving and more widely accessible tools for promulgating the multifarious viewpoints of individuals, the degree of decentralization was always likely to regress for a time. Why? Because even though each person is unique, it’s not by much. We all share the same biological makeup, and thus tend to be co-opted by particular narratives of fear, passion and humor. Accordingly, there is much money and power to be had in owning the largest publishing platforms, where narratives can proliferate and draw in attention that can then be monetized in a variety of ways.

Social media platforms are the current primary actors in this drama, and in their efforts to fine-tune that monetization machine they’ve kinda made a mess of it much faster than even yellow journalism did in the 1890s. Once the intended audiences and participants in narrative design became aware of the manipulation – however benign the intentions of those who designed the narrative algorithms on Facebook, Twitter and other media hubs actually were – fragmentation was inevitable, even for those ostensibly trying to be mere marketplaces/publishing platforms.

For providers of news, it is even worse. News media in general is notoriously difficult to centralize unless total control over citizens’ existences is in place, because shades of interpretation of even something as simple as a tragic helicopter accident can spin out of control and spawn a thousand lines of discourse online in a matter of hours. Granted, much of that discourse is the same, but since it is communicated with varying degrees of proficiency, it all appears checkered until a few main strands eventually consolidate. News is necessarily ephemeral; decentralization is inevitable. Technology has only accelerated the ebbs and flows of the news decentralization and recentralization cycle.

Only the strongest brands have resisted destruction in online news – e.g. The Wall Street Journal and The New York Times – and even they are feeling the pinch. I don’t think they’ll die completely, but they will be recast in a new paradigm that, honestly, I haven’t quite figured out yet. It makes sense that at the very end of the traditional news breakdown, the last entities standing are the ones that encapsulate tried-and-true clusters of human mentalities and traits; by that I mean the WSJ is the brand for conservatives and the whole host of associated narratives, traits and ideas, and likewise for the NYT and liberals.

The good news is that new brands will inevitably arise as discourse and political-economic realities evolve. Centralization will rise again, though it will take some time. What is difficult to anticipate is the degree to which augmented/virtual reality may transform the experience of news and information access in general. Trust will be at a premium, so brands could resurface much faster than anticipated as people instinctively flock to providers they can trust, but it is also possible that a prolonged era of distrust and fragmentation is upon us. Whichever outlets figure out the best way to present immersive news with significant messaging around a balanced perspective will be the Fox-plus-WSJ or CNN-plus-NYT of the 2030s. Exciting times, indeed.

Up next are nation-states, where some truly interesting events are occurring.
