Happy Wednesday! A big thanks to all my colleagues who did a terrific job filling in on the newsletter while I was out. What did I miss? Let me know: firstname.lastname@example.org.
Below: Securities regulators would have a hard time proving that Elon Musk’s tweets are aimed at driving Twitter’s stock down, and another vendor sells data on people who use apps to track their periods. First:
Policymakers in Washington have shrugged off Facebook’s virtual reality rebrand as a distraction from the social media giant’s long-running controversies.
But according to Nick Clegg, Meta’s recently promoted president of global affairs, the pivot is leading the company to fundamentally rethink how it polices disputes over abusive speech — a framework that may lessen the heat around some of its calls.
In his most expansive remarks on the topic to date, Clegg stressed in an exclusive interview with The Technology 202 and a blog post that traditional text-based social media posts are inherently different from the “ephemeral” speech-based interactions more common in the so-called metaverse, and thus require a distinct moderation approach.
“It’s just a sort of philosophically, technologically and legally different phenomenon because you’re dealing with something … which doesn't stick around and it’s not persistent, and so you simply cannot moderate in the same way,” Clegg told me earlier this week.
It’d be shortsighted, Clegg said, to simply ask companies, “How are you going to apply the standards you’ve developed for the social media age to the metaverse?”
According to Clegg, mediating many disputes in the metaverse should be viewed more like deciding whether to intervene in a heated back-and-forth at a bar than like the more active policing of harmful posts on Facebook’s signature news feed.
In the post, Clegg describes a scenario in which two users meet at a virtual bar in the metaverse where there’s “an uncomfortable amount of abusive language” taking place. Because of the “immediacy” of the interaction and because it “involves live speech rather than posted text,” Clegg argues, typical social media rules about what can’t be said simply wouldn’t translate.
“[I]n this case, a better place to look for answers may be the existing rules and norms that govern bars in physical reality,” he wrote. And it may behoove users and third-party app creators, Clegg argued, “to nurture the development of healthy norms rather than falling back on exhaustive and impractical lists of what users can and can’t say or do.”
It’s an approach that could lead to Meta being more hands-off with some of its services, creating distance between itself and the thorny decisions on harmful-but-legal content it’s begrudgingly made for years, often drawing political scrutiny.
Clegg isn’t arguing that the company shouldn’t bear any responsibility for what takes place in its slice of the metaverse. But he argued that its duty to police for safety should vary depending on whether activity occurs on more private, individualized products or on more public-facing platforms, such as its Horizon Worlds or Horizon Events VR apps.
“I think our obligations will become the heaviest lower down the stack, and the more public the speech and the behavior in those worlds, clearly the heavier still,” he told me.
He added, “You simply can't have corporate employee moderators moderating what will be private ephemeral communication in privately created spaces.”
It may also be a play to create a buffer between the platform and some of the moderation responsibilities it has long sought to lighten or shed by outsourcing high-stakes calls and urging policymakers to step in and set new rules around content moderation.
Children’s safety advocates are already sounding the alarm about predators seeking out children in the metaverse. Experts are voicing concern about the spread of misinformation through virtual and augmented reality. Polls show many women fear for their personal safety in the metaverse due to concerns about abuse and harassment. And there are questions about how Meta’s virtual moderation will play out overseas, where its track record is more checkered.
Clegg signaled that the company is thinking through a number of those issues.
In terms of protecting children, he said Meta’s plans will “provide industry-leading parental controls in this space” so that parents know when kids are using services they shouldn’t be on, and so that they are clued into what kids are doing on “age-appropriate experiences” in the metaverse.
When it comes to addressing virtual harassment, Clegg highlighted that the company has created a “personal boundary” feature that limits how close users’ avatars can get to one another and has tools letting users block others, report unwanted contacts or exit channels.
“The thing that unsettles people a lot is that they find themselves in uncomfortable situations and they feel disempowered. … We want to try and really give people maximum agency in a way which we hope will prevent anyone from really suffering,” he said.
Clegg said the company’s artificial intelligence investments “have led to a dramatic increase in our ability to cover our languages,” bolstering its ability to make moderation decisions globally.
Tesla chief executive Elon Musk continues to increase pressure on Twitter, including by saying that his $44 billion deal to buy the company “cannot move forward” until the company proves that no more than 5 percent of the accounts on the platform are bots, my colleagues report. But the Securities and Exchange Commission would have a difficult time proving that Musk’s criticisms of the company and “bots” on its platform are solely aimed at lowering the company’s stock price, Tory Newmyer reports.
Musk also hinted that the SEC should look into the accuracy of Twitter’s filings about bots. When a Twitter user suggested that the SEC investigate, Musk tweeted, “Hello, @SECGov, anyone home?”
High-profile turnover continues within Twitter. Three senior Twitter executives said in internal memos that they are leaving the company, my colleague Elizabeth Dwoskin reports. Last week, Twitter chief executive Parag Agrawal said he was replacing two Twitter executives.
The data marketplace Narrative sells lists of long alphanumeric strings that are tied to mobile devices that have installed popular apps for tracking periods, Motherboard’s Joseph Cox reports. While some advertising industry officials have said such data is anonymized, the strings can be linked to real people. The data could be a first step for law enforcement agencies trying to identify the apps’ users if abortion becomes illegal in some circumstances.
Apps and services that collect such data are raising alarms in the wake of reports that the Supreme Court could be prepared to strike down Roe v. Wade, clearing the way for some states to make abortions illegal.
“Narrative isn’t the company that harvests this data from mobile phones,” Cox writes. “Narrative instead acts as a middleman and makes buying access to data much easier and relies on ‘providers’ that source the information.”
Narrative took down data from the Planned Parenthood Direct app, which lets people order birth control, and period tracking apps after Motherboard contacted it. “No menstruation or pregnancy tracking app install data has ever been purchased through Narrative’s platform before,” the company told Motherboard. “However, in light of potential forthcoming changes to laws regarding women’s reproductive rights, we have updated our policy to remove those data sets from the Marketplace to prevent any potential misuse of the data.” Its terms of service prohibit its clients from using its data for surveillance, investigations or tracking the subjects of its data, it told the outlet.
Apple is delaying its requirement that employees be in the office three days a week, with the company reportedly telling workers to continue coming in two days per week, Bloomberg News’s Dana Hull reports.
“Apple: dear team, you are required to come expose yourself, but don't worry, only 40% of the time.” — Emil Protalinski (@EPro), May 17, 2022