Going Viral: New Responsibilities in The Attention Economy

Andy Coravos · Published in Andrea’s Blog · Mar 2, 2018


In the arms race for attention, our online communication platforms employ nudges, notifications, and other tools to change our behavior and increase our “engagement.” Like drugs, algorithms also have (unintended) side effects: stoking human anger, outrage, and insecurity. In response, a new movement is digging into how to realign technology to serve humanity’s best interests.

Last month, I posted a quote and a question about Instagram’s emotional impact from an article in The Globe and Mail, a leading Canadian newspaper.

Skim the full thread. There’s more than meets the eye.

Over 72 hours, the original Tweet garnered 1.7M+ impressions, Dopamine Labs and the CTO of Instagram both provided clarification, and I was faced with a series of choices about how to handle events as they unfolded over the weekend.

As I learned more information, some confirming and some contradicting elements of the Tweet, I had questions: How do I provide an update? As new information arises, how do I provide context to the original Tweet?

Twitter has a huge flaw in its product: the author of a post has only one suboptimal way to slow the spread of information, which is to delete the Tweet.

In a world where anyone can share information that could be consumed by millions of people within hours, we are faced with a new set of social norms. I’ll dig into two issues:

  1. How should we evaluate the veracity of the Tweet as new information arises?
  2. How can Twitter give us better tools to manage updates when something is going viral?

Part One: Welcome to The Attention Economy

As Albert Wenger writes in his new book, A World After Capital, with the onslaught of content, attention has become the scarcest and most valuable human good.

In “What is Technology Doing To Us?,” Sam Harris and Tristan Harris, a former Design Ethicist from Google, discuss the arms race for human attention. In an ad-based economy where “engagement” is the holy-grail metric, content that sparks anger increases “time spent on site”. Why? Because when we are outraged, we re-engage with the platform. The lines between user engagement, addiction, and behavioral manipulation are hazy.

A national discussion has started to emerge. Last month, two major Apple shareholders pushed for a study of iPhone addiction in children. As Casey Newton, an editor at The Verge, argues, “Facebook has a trust problem.” Yesterday, Jack Dorsey issued a request for proposals to “increase the collective health, openness, and civility of public conversation.” A new conversation is emerging about how to make social media products healthier.

The Tweet I posted hit a nerve in the middle of this discussion. The following day, Mike Krieger, the CTO of Instagram, replied to my Tweet.

At this point, Mike has made it clear that any delayed “likes” inside the Instagram app are not intentional. Sarah Mei and JediJeremy both tweeted explanations of how technical constraints could delay “likes” in the app.

But what about outside-of-app (e.g. iOS/Android) push notifications? Mike notes that “notifications” are bundled and not real-time. And so we are left with a grey zone. Multiple people, myself included, have reported receiving iOS notifications saying someone has “liked” a photo hours after the “like” showed up in the app. Is this a bug or a feature? What metric does Instagram use to optimize the timing of its notifications?
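
For readers who want a concrete picture of how bundling alone could produce hours-late push notifications, here is a minimal sketch of a batched notification queue, assuming a simple fixed flush interval. The names (Notification, NotificationBatcher, flush_interval_seconds) are hypothetical illustrations, not Instagram’s actual implementation.

```python
import time
from dataclasses import dataclass, field
from typing import List


@dataclass
class Notification:
    """A single event, e.g. 'X liked your photo'."""
    user_id: str
    message: str
    created_at: float = field(default_factory=time.time)


class NotificationBatcher:
    """Hypothetical batcher: queue events, then send them in bundles on an interval.

    Because delivery happens only at flush time, an event that arrives just
    after a flush waits nearly a full interval before the push goes out,
    even though the "like" is already visible inside the app.
    """

    def __init__(self, flush_interval_seconds: float = 3600.0) -> None:
        self.flush_interval = flush_interval_seconds
        self.queue: List[Notification] = []
        self.last_flush = time.time()

    def enqueue(self, event: Notification) -> None:
        # Record the event immediately; nothing is pushed yet.
        self.queue.append(event)

    def maybe_flush(self) -> List[Notification]:
        """Return the pending bundle if the interval has elapsed, else nothing."""
        now = time.time()
        if self.queue and now - self.last_flush >= self.flush_interval:
            batch, self.queue = self.queue, []
            self.last_flush = now
            return batch  # in a real system, this is where the push service would be called
        return []
```

Under this (assumed) design, the gap between when a “like” appears in-app and when the push arrives is an artifact of the flush schedule rather than a per-user optimization, which is exactly why the interesting question is what, if anything, determines that schedule.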

I reached out to Dopamine Labs, the team quoted in the article, and they stand by their statement, writing that it was based on public statements by Facebook leaders. As of the time of publishing this post, the Canadian newspaper cited in the Tweet had updated the article with the CTO’s response, but did not redact the original quote.

As an engineer, I assume it would be difficult (or impossible) for Instagram to disclose its push-notification algorithm for each user. However, without a public statement about what’s being optimized, it’s unclear what’s happening. Like drugs, algorithms have (unintended) side effects, and emotional impact may still occur even if it is unintentional.

Drugs have (unintended) side effects, and algorithms do, too.

Thankfully, there’s a movement to make a change, even within Twitter itself: the Health Metrics Proposal Submission, launched yesterday, asks the public to propose better optimization and health metrics. Similarly, Mark Zuckerberg made a public announcement that his 2018 goal is to “Fix Facebook,” and he has been testing different News Feed algorithms that favor “meaningful social interactions.”

Other organizations, like Tristan Harris’s Center for Humane Technology, which launched a few weeks ago, are working to reverse the digital attention crisis and realign technology with humanity’s best interests.

Nonetheless, until these changes are widely implemented, I still have a Tweet going viral. How can I add context and updates to the original Tweet as soon as possible? Read a short-term proposal in Part Two.

