When Facebook announced it was changing its name to Meta Platforms Inc., an effort to signal its shift toward building the metaverse, most observers could see the problem. Changing your name doesn't change any of the other problems the company faces. If your goal is to get away from criticism from lawmakers, regulators, and whistleblowers, rebranding isn't going to fool anyone.
On the contrary, it raises the question of whether you're simply ignoring those problems. And if you are, you end up looking like you lack the self-awareness to see that there's a problem in the first place. Or maybe you're just hoping people won't notice while you distract them with a shiny new thing.
Based on Ina Fried's Axios interview with Andrew Bosworth, the soon-to-be chief technology officer of Meta, Facebook's newly named parent company, that last impression seems reasonable.
Bosworth is currently the head of Reality Labs, the division of what used to be known as Facebook that makes virtual reality headsets like the Oculus Quest. It's also the center of the company's ambitions to build the metaverse. The fact that he's being promoted to CTO is an indication of how important that effort is to Facebook's founder, Mark Zuckerberg.
The overarching question many people have been asking since Zuckerberg rolled out his vision for the metaverse is this: why would anyone trust a company that has so much trouble managing the platform it already has to do a better job with a far more immersive virtual reality platform?
Fried, who is Axios's chief technology correspondent, interviewed Bosworth for the news organization's show on HBO and asked him about Facebook's role in the spread of misinformation, especially around Covid-19.
"Individual humans are the ones who choose to believe or not believe a thing," Bosworth said. "They are the ones who choose to share or not share a thing."
There's an important piece of that answer you shouldn't miss. Notice how Bosworth frames misinformation as a matter of individual choice, and, later in the interview, as merely speech that people "didn't like." It's an attempt to minimize the problem and change the subject. The question was about what Facebook should be doing to stop misinformation, and Bosworth's answer suggests it isn't Facebook's problem at all. Instead, he pins the blame on the individuals who choose to share the content and the people who want to see it.
“That’s their choice,” Bosworth said. “They are allowed to do that. You have an issue with those people. You don’t have an issue with Facebook. You can’t put that on me.”
There are two obvious problems with that sentiment–at least, they are obvious to everyone who doesn’t work for Facebook. The first is that Facebook seems to genuinely believe it is neutral, and therefore not responsible for the problem since it isn’t creating the content.
But just because something isn't your fault doesn't mean it isn't your problem. In the case of misinformation on Facebook, I'm not convinced the company isn't at fault, but it certainly has a very real problem.
To pretend that its hands are clean is not only tone-deaf; it also explains why the company doesn't feel compelled to actually solve the problem. Then again, it's not clear that Facebook could solve the problem even if it wanted to.
The second problem is that Facebook absolutely does amplify the content it says "people want to hear." It does that because that's the content people engage with. When people engage with content, they spend more time on Facebook and they tell Facebook what they care about. Both of those things let the company show more ads, which power its massive profit machine.
Of course, maybe it isn't just that Facebook isn't motivated to stop misinformation; perhaps it simply isn't able to. Earlier this year, in an interview with Casey Newton, who writes the Platformer newsletter, Zuckerberg said that people shouldn't expect Facebook to eliminate all misinformation because it's just too difficult. Bosworth echoed that sentiment in his interview with Fried.
“If we took every single dollar and human that we had, it wouldn’t eliminate people seeing speech that they didn’t like on the platform,” Bosworth suggested. “It wouldn’t eliminate every opportunity that somebody has to use the platform maliciously.”
In fact, Bosworth suggested that Facebook is behaving exactly as it was designed to. The best way to approximate the algorithm, he said, is to ask yourself, "What do people want to hear?" Should that really be the goal: to give people what they want, regardless of whether it's harmful or destructive?
“I’m very uncomfortable with the idea that we possess enough fundamental rightness, even in our most scientific centers of study to exercise that kind of power on a citizen, another human, on what they want to say and what they want to listen to,” Bosworth added.
It's a phrase Bosworth kept coming back to. "I'm very uncomfortable," he told Fried at least three times in the short clip I watched.
You might think Bosworth is uncomfortable about the effects Facebook has on democracy, or the effect Instagram has on the mental health of teenagers. But what Bosworth is really uncomfortable with is the idea that Facebook bears any responsibility for the content its users share, and that its algorithms amplify in pursuit of engagement.
The point is this: if you build a platform, you are accountable for what happens on it, even if you aren't responsible for everything every user shares. "I stand by the tools that we build," Bosworth says. That may be the case, but it doesn't mean the rest of us aren't worried about what you're building next.