Software Is Eating The World
Lately I’ve been thinking about the time before smartphones. Often, we were alone: with our thoughts, reading the paper, listening to music, talking with our friends, colleagues, and family. Now, friends can ride the subway together but still spend the whole time checking their Instagram feeds.
This thought came to me watching The Social Dilemma, a recent documentary directed by Jeff Orlowski and produced by Netflix. The film shows – in exquisite detail – how technology is used to manipulate human behavior. Viewers watch teenagers grow up eager for likes on their latest Instagram posts and see how destructive this can be to one’s self-image. We see how YouTube’s recommendation engine is explicitly designed to keep viewers glued to their screens (guilty!). And even more troubling, we see how social media plays a central role in promoting conspiracy theories, even convincing Kyrie Irving that the world is flat.
The film explores this concept through interviews with former engineers, founders, and academics, each explaining what they helped build at companies like Google, Facebook, Pinterest, and Twitter. It was refreshing to watch people talk with such candor about these problems. For example, the Like button, created by former Facebook engineer Justin Rosenstein, was intended to bring “positivity and love” to the social network. Instead, the feature has become simply another tool to help Facebook keep you glued to your screen.
The film’s most crucial insight is delivered by Tristan Harris, co-founder of the Center for Humane Technology, who astutely notes that
If the product is free, you’re the product.
In the case of social media, this does not just mean that Facebook is smart enough to serve you car ads after you browse a dealer’s website. Instead, Harris explains, these sites intentionally manipulate users’ behavior. The goal is simple: if they make users dependent on the social network, scrolling and clicking for hours every day, they can sell more and more advertising space, driving their revenue ever higher.
To that end, Harris points out that the addictive behaviors many of us exhibit around social media are not, strictly speaking, our mistakes. Social platforms have done so much work to perfect their predictive models that trying to avoid clicking that next perfectly-targeted recommendation while browsing YouTube is almost always a losing battle. As Harris says,
You are on one side of the screen, trying to make thoughtful decisions about your time and usage. But on the opposite side of the screen, there are thousands of engineers and supercomputers that have goals that are different [from] your goals. …Who’s [going to] win in that game?
My Role in ML
The movie also forced me to think about how tech companies develop machine learning algorithms and the role that I have played in this trend. I am a core contributor to Featuretools, a popular machine learning library that has been starred over 5,000 times on GitHub and is frequently blogged about. It’s likely that my code has been (or is currently being) used in predictive models; I wouldn’t be surprised if my code is helping to build a “better” recommendation engine, one even more effective at keeping you glued to your screen.
Was this my intention for Featuretools? Of course not. My intention was to help people make predictions about who will churn, or how much a customer will spend on their next transaction. It didn’t occur to me that it could be used to manipulate behavior.
AI for Social Good
So where do we go from here? I sometimes think documentaries make us cynical human beings. They focus somewhat myopically on problems with the current way society is organized, omitting positive aspects that might be less attention-grabbing. To that end, it’s important to note that ML is of course also capable of doing many positive things: to pick one example, it has helped improve weather forecasting to better prepare for wildfires and drought. Similarly, many startups are currently combining predictive models with medical imaging to assist radiologists in screening for cancer. As the movie points out, today’s technology is “simultaneously utopia and dystopia.” The same could be said of the ML libraries I work on.
Join The Movement
With all this in mind, what can you do? How can we stop behavior manipulation? And, more specifically, what have I done since watching this film?
- I’ve uninstalled apps that were wasting my time (Facebook, Instagram, Snapchat, YouTube).
- I’ve turned off notifications for things that weren’t important, leaving only text messages, Slack, and email. Notifications should be from people, not algorithms.
- I’ve installed a Chrome extension to block YouTube video recommendations and autoplay.
- I’ve turned off Smart Reply on my Android phone. My replies should be original thoughts, not ML-generated responses.
- My personal website now uses GoatCounter, a privacy-friendly web analytics platform.
- I’ve started to unsubscribe from all automated emails. If I am notified of an email, it should be an important email.
It’s unlikely that everyone who watches this film will take similar actions. However, as the film points out, even if only a few people take actions like this, it starts a dialogue. And that’s the whole point. Informing people is the first step towards showing social media companies that we want control over these technologies, rather than to be subjugated by them. My hope is for a future where social media treats users as people rather than products; where social networks help us obtain factually accurate information and stay in meaningful touch with one another. And when that day comes, perhaps we’ll call it a social miracle.