Hi, I’m Jamie Condliffe. Greetings from London. Here’s a look at the week’s tech news:
On Easter, after terrorist attacks killed over 350 people, the Sri Lankan government blocked social media platforms including Facebook, WhatsApp, YouTube, Instagram and Snapchat. Harindra Dassanayake, a presidential adviser in Sri Lanka, said officials had acted out of fear that misinformation about the attacks and hate speech could spread, provoking more violence.
Social media certainly deserves scrutiny for its role in violence and terror. My colleagues Amanda Taub and Max Fisher have explained how false rumors spread via Facebook in Sri Lanka previously led to mob violence and murder. Similar problems have been observed in Indonesia, India and Mexico.
But there appears to be little evidence that blackouts are an effective response to an act of terrorism.
A 2018 report by Human Rights Watch found that there was “no substantive data or evidence to prove that internet shutdowns can scale down violence.”
Jan Rydzak, an associate director at Stanford’s Global Digital Policy Incubator, said that “governments of more than 40 countries have already used shutdowns as a tool to generally prevent or halt the spread of violence or riots, and not a single one has given us a report of success.” In fact, Mr. Rydzak has published research based on data from India showing that shutdowns correlated with an increase in violence as days passed.
Also consider this: Blocking social media cuts the flow of useful information, too, preventing people from contacting loved ones or finding out what is happening — especially in countries, including Sri Lanka, where social media platforms have become primary forms of communication.
“Imagine the day following 9/11 and what it would have been like without a communal response,” said Alp Toker, the executive director of the internet monitoring group NetBlocks.
This brings me back to the contentious debate over governments' increasingly popular demands that social media platforms better police harmful content. Compared with total blackouts, the infringement of free speech — a concern often raised by critics of aggressive regulation — seems rather more palatable.
Tesla’s robo-taxi rush
Elon Musk, Tesla’s chief executive, made bold predictions last Monday about the automaker’s autonomous driving abilities: that Tesla would offer fully autonomous driving by the middle of 2020, and run a fleet of robo-taxis in the United States by the end of the same year. He also said the vehicles would theoretically be able to drive anywhere, in all weather conditions.
Mr. Musk is in a rush. The rest of the world may not comply. Here’s why.
First: the driving. “It’s a really hard problem,” said Ingmar Posner, an associate professor of information engineering at Oxford University and a co-founder of the university’s autonomous-driving spinoff, Oxbotica. He, like many other commentators, is skeptical of Tesla’s ability to offer full autonomy so quickly, and said driving on unseen roads in all weathers remained a big challenge for autonomous vehicles.
Second: society. Jack Stilgoe, a senior lecturer at University College London specializing in the governance of emerging technologies, pointed out that adoption could be hampered by concerns over unintended consequences of robo-taxis. The public also needs to be convinced that it wants to use the vehicles, and road users must learn how to interact with them. Then there are infrastructure and connectivity to build out. This all takes time.
Finally: regulation. Mr. Musk conceded that his cars wouldn’t receive universal regulatory approval initially. But Mark Fagan, a lecturer in public policy at the Harvard Kennedy School, says Mr. Musk’s chutzpah — particularly the claim that future Teslas may allow drivers to dial up driving aggression to a point where there is a “slight chance of a fender bender” — could backfire. “If I were a regulator, I would find that troubling,” Mr. Fagan said. “I’d be burning the midnight oil to put a piece of regulation in place to protect the citizenry.”
Chris Urmson, previously the technical lead on Google’s self-driving car effort and now chief executive of the autonomous car start-up Aurora, recently told The Verge that we would “see small-scale deployments in the next five years, and then it’s going to phase in over the next 30 to 50 years.”
Those time scales would allow autonomous cars to be deployed safely and sustainably. But, Mr. Stilgoe said, “that’s inconvenient for technology developers who want an event horizon of years, not decades.”
Facebook’s tiny $5 billion fine
Facebook said Wednesday that it expected the Federal Trade Commission to fine it up to $5 billion for privacy violations. Is that enough?
It would be a record penalty against a tech company by the F.T.C., far beyond the $22 million demanded of Google in 2012 for misleading statements about online tracking.
It would also be in line with recent European Union punishments, like Google’s $5 billion antitrust fine last year and the $15.3 billion tax-evasion penalty given to Apple in 2016. And it looks good for the F.T.C. and Facebook: The regulator looks strong; the tech giant looks accountable, without remaking its business.
But $5 billion is a fraction of Facebook’s $56 billion in annual revenue. And even though fines of this size would pile up if repeated, critics aren’t convinced. Representative David Cicilline of Rhode Island, the chairman of the House’s antitrust subcommittee, called it “a slap on the wrist.”
Also troubling: The fine would stem from a consent decree made with the F.T.C. in 2011, in which Facebook promised to improve its privacy practices. Eight years later, Facebook’s technology has moved on, to unpoliced realms where the social network can now infer things about you without the need for as much explicit information.
Like it or not, tech regulation is coming, and some of it threatens not just large fines but sweeping changes that could undermine Facebook’s business model, or even break up the company. That could make $5 billion look small indeed.
And some stories you shouldn’t miss
■ Samsung delayed the sale of its folding-screen phone. Reports of broken review units prompted the company to say the device needs “further improvements.”
■ Scientists created speech from brain signals. A new prosthetic voice decodes what the brain intends to say and generates (mostly) understandable speech.
Originally published in NYT.