Senator Mark Warner Says Social Media's 'Wild, Wild West' is Coming to an End

Photo Illustration by Gluekit; Warner by Chip Somodevilla/Getty

Since the 2016 election, Senator Mark Warner has been Silicon Valley's most active and vocal watchdog on Capitol Hill. Warner, a Virginia Democrat, vice chairman of the Senate Committee on Intelligence and a former telecommunications venture capitalist, published a white paper last year proposing a variety of legislative curbs on the tech industry. Those suggestions included putting the onus on Facebook, Twitter and other social media platforms to identify bots and foreign election interference. Warner also has bipartisan co-sponsors for a variety of legislation aimed at curbing tech, including the so-called Honest Ads Act, which would require Facebook, Google and other platforms to be transparent about who is paying for political ads.

He is clearly onto something. The end of 2019 is shaping up to be a watershed period for the issue of how much the U.S. should regulate Silicon Valley. Seven states, led by New York Attorney General Letitia James, have already announced an antitrust investigation of Facebook; most of the states' attorneys general are probing Google over anti-competitive behavior. Additionally, the House Judiciary Committee just demanded Amazon, Google and Facebook hand over the personal emails of those companies' executives, hunting for evidence of anti-competitive schemes.

Warner, in a recent wide-ranging interview with Newsweek, explains why the era of unregulated Big Tech may be coming to an end. Here are some edited excerpts:

Newsweek: You said the 2016 election revealed "The dark underbelly of an entire system." Is there any chance of protective legislation being passed before the election season? Isn't it an emergency?
Warner: Our system is not secure in 2020. I would argue there are a variety of solutions that would get 80 votes on the floor of the Senate if we were allowed to vote. These are all bipartisan bills. First, to make sure that if a foreign government intervenes, the appropriate response is not to say thank you, but to tell the FBI. Second, to make sure that every polling station in America has backup paper ballots. Third, the Honest Ads Act, to make sure that there are the same reporting requirements for political ads on Facebook and Google as there are in Newsweek and on television. Fourth, some rules of the road in terms of how social media platforms operate. The Wild, Wild West for these social media platforms is coming to an end.

Senator Mitch McConnell is the gatekeeper; he is not allowing these things to come to the floor where there would be bipartisan support. Why?
He's had a consistent position against campaign-finance and election-related legislation. One of the things that's incumbent on me and Democrats and others is to keep elevating election security and protection of our democracy as a top issue.

What is your biggest fear about 2020 and big data? Won't there be exponentially more data points for each American—for sale to political strategists—than Cambridge Analytica ever imagined having access to?
Yes. Exponentially more. We know that Russia will be back because it worked in 2016. If you add all they spent on Brexit, France and the U.S. election together, it's less than the cost of one new F-35. We're seeing 21st century conflict by cyber means, and mis- and disinformation means, as a cheap and effective tool. My biggest fear: Russians will continue to obtain information that they will then weaponize with a much more massive use of manipulation. That would be through both deepfake technology and the creation of both [fake] individuals and bots at an unprecedented level masquerading as Americans. They'll try to drive our debate in a way that will clearly change what kind of news we read and could change the outcome of the election.

Warner introduced the Honest Ads Act in 2017. Zach Gibson/Bloomberg/Getty

There is a lot of tension between the U.S. intelligence community and Facebook, Google and Apple about sharing more data with respect to the election. The government is saying, "You need to give us more to protect," and the position on the other side is, "We're already doing enough to protect the election." Are Washington and Silicon Valley at war?
I don't believe they're at war, and I do believe the social media companies are getting better. But we don't have the luxury of taking two or three more years to figure this out. We need, in a sense, a fusion center where we have intel and the prominent platforms—Facebook, Google, Twitter—jointly sharing information.

What do you think of the House Judiciary Committee calling for big tech executives' emails?
I think big tech is making a huge error by not further engaging with the national government. We have been ceding the regulatory lead to Europe on privacy, and to the UK and Australia on content. And what the tech community is going to find out is that when we do get the federal regulation, these previous efforts are going to be the floor and not the ceiling. I think they'll rue the day that they didn't work with us on an earlier basis.

Writer Andrew Marantz has a new book coming out called Antisocial: Online Extremists, Techno-Utopians, and the Hijacking of the American Conversation in which he blames "disruptors' naivete" and their "reckless techno-utopianism" for creating gatekeeper-free platforms that enabled proliferation and amplification of the Nazi, alt-right fringe world. Do you agree with the assessment that he's making there?
I'm not going to comment directly on the alt-right, but I would say I think these companies have been naive. I think they have talked about the communities they've created, and there have been upsides in all of this innovation. But they were, I think, too willing to ignore this dark underbelly: the communities of hate that were created, the ability to have their platforms manipulated, the ability of foreign governments and their agents to use them to try to disrupt not just our democracy but others as well.

Let's talk about the recent $5 billion Federal Trade Commission settlement with Facebook over breaking a 2011 promise about privacy. The deal won't do much to change Facebook's business; they will still be able to self-regulate, collect user data and target ads. How is that a win for the public?
While technically the largest settlement ever, it was peanuts in terms of the scope of Facebook's revenue. If they're allowed to build that into the cost of doing business without any further penalty, we're going to be even worse off.

Facebook CEO Mark Zuckerberg testified on Capitol Hill in April 2018.

Seven attorneys general are going after Facebook, and almost every state is going after Google. Is a big tech breakup inevitable?
I still think there are other tools I'd like to use. For example, data portability. If you get tired of Facebook, can you move easily to a new site, bring your data and still be interoperable? More transparency. Being able to know what kind of data is being collected and how much it's worth. Better privacy protections. Making sure that we have the right to know whether we're being contacted by a human being versus a bot.

One of the reasons why I'm not yet at full breakup is that I don't want to see these global enterprises simply replaced by the Alibabas, the Chinese tech companies who would come with even worse behavioral tendencies.

Recently the White House hosted members of the tweeting right who complain about being censored by social media. When Trump was inaugurated, you had Milo Yiannopoulos, Alex Jones and Roger Stone all starring in huge roles on social media and now they are out. So, do you think there's any validity to that complaint? And should 4Chan and 8Chan be shut down, because of what's going on there with respect to violent white nationalists?
I've seen no objective evidence that any of the platform companies are censoring or silencing voices on the right. What I have seen is the nature of their businesses: if you lean right, they reinforce your message, promoting more and more outrageous stories on the right, and if you lean left you get more and more outrageous stories on the left. I mean, these companies are about making money, and I think they are about trying to reinforce your preconceived notions, which is a bit the opposite of what you want in an informed democracy. But I have not seen evidence to validate the accusation that the right has been, or is being, silenced. As a matter of fact, if you look at some of the activity in Europe, it feels in many ways that the right still dominates most of social media, much more than the left.

And in terms of 4Chan, I don't think you can come up with an ad hoc solution to 4Chan and some of the other sites, as vile as they may be. I think you do need to have a debate about Section 230 and the exemptions that were granted to all of these platforms.

Can you explain Section 230?
Section 230 was part of the Communications Decency Act in the late '90s, and at that moment in time, there was a decision made that social media companies, and their connections, were going to be viewed as kind of just dumb pipes, not unlike a telco. The platform companies would have no obligation for the content that went over their networks. In the late '90s, that might have made sense. But in 2019, when 65 percent of Americans get some—or all—of their news from Facebook and Google, there is clearly some curation going on.

There are two ways to get at this in terms of some of the hate mongering and other racist, outrageous behavior. One is to try to think of these platforms as media companies. The other would be to move toward identity validation. I think a healthy debate ought to take place because, again, the one thing we do know is we see not only hate speech, but also governments like Myanmar using encrypted Facebook posts to encourage mass violence against the Rohingya. The idea that the status quo is okay just doesn't pass the smell test.

What do you think? Are they media companies?
I don't believe they have total immunity against the content that appears on their sites. But, you know, I'm nervous about how you would set it up. Facebook promised they were hiring 10,000 people to help screen content to make sure that it fits into their standards of duty or standards of operation. But do you then open up the possibility that they might favor either content on the right or content on the left? There's also concern about some big government regulator. One idea that's not fully fleshed out but has some potential: go at it the way the film industry has done around content—an industry-driven regulatory rating system.

A new report published by the think tank Data & Society aims to create a taxonomy of trolling tactics, and they followed some viral false memes started by trolls that were retweeted by President Trump. They call that the Mount Everest of trolling, to get the president to retweet a false meme. Do you think that the president should be tweeting at all?
Well, I think it's hard to ignore the President of the United States making statements, whether they're oral, written or tweeted. I think we would have more thoughtful policies if the president tweeted less. I'm not sure I'm willing, though, to say [he should not]... because should the president be restricted? Should members of Congress be restricted on tweeting? But if the president is being manipulated by trolls or particularly trolls that may be driven by foreign adversaries, at what level does his personal freedom of speech interfere with his equally important responsibilities as commander-in-chief?

Why don't Americans care as much about privacy as Europeans? And are we going to get to a point where people will have to pay for their privacy?
Some of it has to do with age. My kids are in their 20s and have grown up with this remarkable sharing. Most younger people have not seen the negative consequences that can come out of a post or photo along the way. I hope we don't ever get to the point where you have to pay for your privacy, but if you knew how much your data was worth, and then you were offering to, in a sense, sell part of that revenue stream to somebody to protect your data better, is that you paying? Or is that simply a smarter monetization of something that's already taking place?

Recently, the NSA released another warning about how we are not prepared for warfare on the "internet of things"…
I have had legislation, bipartisan legislation, for three years that has said at least when the U.S. government is spending taxpayer dollars to buy internet of things–connected devices, there ought to be minimum security standards. For example, there ought not to be embedded passcodes. You ought to make sure that the device is patchable. I mean, basic items. It is crazy that we're going from roughly 5 to 8 billion devices now to 30 billion connected devices over the next four or five years.

Your refrigerator sends a message to Alexa when your Diet Coke runs out. At a recent electronics show, they had hair dryers that were internet of things–connected devices. Why do you need a hair dryer connected to the internet? I don't know. But the challenge is, every time you connect another device to the network, you create another vulnerability point. We spent all this money trying to protect the nuclear plant from cyberattacks. What happens when the bad guys [don't] attack through the front door, but go through the microwave in the staff kitchen because it's connected to the internet?

So what can the government do?
With our government dollars, we ought to be buying only devices that have [at least] minimum security standards. We finally passed legislation out of the committees in the House and the Senate. It needs to get across the finish line on the floor. There is nothing partisan about this; even the folks in the Trump administration and the DOD fully understand. We've been fighting some of the low-end vendors who don't want to spend the extra money, and we're talking about pennies on the dollar, to put in some basic security. So we could wake up a year or two from now and have some important government function interrupted because we didn't put basic security in our internet of things–connected devices. We will look crazy for not doing it.

Last question, are we at a tipping point now with Silicon Valley and U.S. regulations? In recent weeks, there have been huge lawsuits and announcements of investigations left and right. You wrote the white paper last year. What's going on now that caused this to move?
I think it's an accumulation of transgressions, a failure of the companies. I think the companies will regret the day that they didn't engage earlier. There's interest from everyday Americans on this topic; I've seen it increase exponentially over the last nine months. Some of it around democracy protection, some of it around kids concerned about polling, some of it around the fact that they, the people, realize the level of hate and vileness that spreads on these platforms. The public is ahead of the policymakers on this. They get that the status quo can't last.

About the writer


Nina Burleigh is Newsweek's National Politics Correspondent. She is an award-winning journalist and the author of six books.
