Happy Tuesday to all, but most of all the ESPN analytics department! My Miami Heat look forward to proving you wrong some more. Send news and stats tips to: cristiano.lima@washpost.com.
Below: Twitter faces pressure to comply with incoming E.U. rules, and rural Texans are losing trust in Elon Musk’s companies. First:
Biden’s former tech adviser on what Washington is missing about AI
Tim Wu, an architect of President Biden’s antitrust policy, left the White House in January just as Silicon Valley’s artificial intelligence craze was soaring to new heights.
But now as efforts to rein in AI tools like ChatGPT gain steam in Washington, the former Biden tech adviser is trying to ensure legislators and regulators don’t veer off course.
Wu, now back at Columbia Law School, has been meeting in recent weeks with officials at the White House, the Justice Department and on Capitol Hill — including the office of Senate Majority Leader Charles E. Schumer (D-N.Y.) — to lay out his vision for how to regulate AI.
In a wide-ranging interview last week, Wu said he’s concerned the AI debate so far has focused “pretty narrowly” on abstract risks posed by the tools rather than concrete harms already underway — and that industry giants are playing too big a role in shaping potential rules.
“There’s a lot of … economic possibility in this moment. … There’s also a lot of possibility for the most powerful technological platforms to become more powerful and more entrenched,” he said.
Wu, an influential voice in discussions around tech regulation, outlined what he thinks officials should do to keep AI in check — and what they should avoid. Here’s a breakdown:
Don’t: Create an AI licensing system
Wu, a prominent critic of Silicon Valley’s most powerful companies, shot down proposals that heavyweights like OpenAI and Microsoft have floated to create licensing requirements for operators of large AI models like ChatGPT.
“Licensing regimes are the death of competition in most places they operate,” said Wu, who helped develop Biden’s executive order urging greater federal action on competition.
He argued heavy licensing requirements would make compliance more difficult for smaller companies and could ultimately decide “who gets to be in the market and who doesn’t.”
Do: Require AI to proactively identify itself
If you’re dealing with an AI model, you should know it, Wu said, and operators should be required to ensure that such tools preemptively identify themselves. It wouldn’t be enough for a chatbot like ChatGPT to simply answer “yes” if you ask whether it is AI, he said.
Wu said an agency such as the Federal Trade Commission could be tasked with developing formats for how different types of AI products could comply with the rules.
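To make that idea concrete, here is a minimal sketch of what a machine-readable self-identification format might look like. Every field name and the function itself are purely illustrative assumptions; neither Wu nor the FTC has proposed a specific format.

```python
import json

# Hypothetical disclosure payload an AI product might attach to its output.
# All field names are illustrative assumptions, not any real standard.
def build_ai_disclosure(product_name: str, operator: str, output_kind: str) -> str:
    """Return a JSON blob identifying content as AI-generated."""
    return json.dumps({
        "ai_generated": True,        # the core disclosure, made up front
        "product": product_name,     # which tool produced the content
        "operator": operator,        # who runs the tool
        "output_kind": output_kind,  # e.g. "chat_response", "product_review"
    })

print(build_ai_disclosure("ExampleBot", "Example Corp", "chat_response"))
```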
In addition to boosting transparency, Wu said it could help on an array of consumer protection fronts, including, for example, cracking down on misleading AI-generated reviews for Amazon products. (Amazon founder Jeff Bezos owns The Washington Post.)
Don’t: Create an AI-focused federal agency
While lawmakers such as Sen. Michael F. Bennet (D-Colo.) have proposed launching a new federal agency to oversee digital platforms, including their use of AI, Wu said he’s concerned such approaches could “advantage existing entities” and “freeze the industry even before it gets started.”
“I’m not in favor of an approach that would create heavy compliance costs for market entry and that would sort of regulate more abstract harms,” he said.
Do: Enforce the laws on the books
While there’s significant discussion about what new rules may be needed to deal with AI harms and risks, Wu said the federal government isn’t starting from scratch. He said enforcers can lean on existing rules against deceptive and misleading practices to tackle potential abuses.
“We need to do things like enhance what are … essentially the deception and fraud laws,” he said, adding that mandated self-identification would help cut down on fraud and scams.
Don’t: Create transparency rules and call it a day
Wu said he’s all for more transparency around how AI operators make their products, but simply creating new disclosure requirements would not address underlying harms.
“It’s a bad temptation in Washington … to, when they lack anything better as an idea, resort to transparency as a way of everyone to save face and satisfy themselves they’ve actually done something,” he said. “It’s not bad, but it’s not enough.”
Do: Create a robot penal code
A major hurdle, he said, is that federal law has naturally been crafted to deal with lawbreaking by humans and that cases often hinge on concepts like “intent,” “malice” or “recklessness” that don’t map as well onto AI — despite some claims of its sentience.
“We have a pressing need to figure out the areas of the legal code that are likely to be violated by an AI likely to cause harm, but where the laws are written with a human in mind,” he said.
Wu said the Justice Department could take the lead on identifying instances where AI can cause harm but there is no clear legal path to seek a remedy, and Congress could then fill in the gaps.
Don’t: Subsidize AI for tech giants
Wu hailed the United States’ long tradition of “very generously” funding research in tech. But he said lawmakers should be wary of subsidizing tech giants’ AI expansion efforts.
“There’s no need to give money to companies that already have a lot of money and are already profitable, and that needs to be avoided at any cost,” he said.
Do: Make sure content creators get paid
Companies require huge troves of data to train their AI models, at times relying on massive amounts of copyrighted material. Officials and industry leaders have stressed the importance of making sure content creators get compensated for their work, but how to do so remains a matter of debate.
Wu said officials could model their solution on the mandatory licensing system that ensures composers are compensated when their songs play on the radio, with content creators receiving a proportional payout when their work is used to train an AI model.
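As a rough illustration of how such a proportional payout could work, here is a minimal sketch assuming a fixed royalty pool is split by each creator’s share of the training corpus. The measurement unit (tokens), the function, and all numbers are hypothetical assumptions, not part of any actual proposal.

```python
# Hypothetical sketch of the proportional-payout idea: creators are paid
# from a fixed royalty pool in proportion to how much of the training
# corpus their work represents. All names and numbers are illustrative.
def proportional_payouts(token_counts: dict[str, int], royalty_pool: float) -> dict[str, float]:
    """Split a royalty pool among creators by their share of training data."""
    total = sum(token_counts.values())
    return {creator: royalty_pool * count / total
            for creator, count in token_counts.items()}

if __name__ == "__main__":
    corpus = {
        "news_archive": 4_000_000,       # tokens contributed to training data
        "songwriter_lyrics": 1_000_000,
        "photo_captions": 5_000_000,
    }
    for creator, payout in proportional_payouts(corpus, royalty_pool=100_000.0).items():
        print(f"{creator}: ${payout:,.2f}")
```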
Do: Encourage open-source, publicly funded AI
Wu said the government should look to supercharge efforts to create open-source AI models, which could help address concerns about concentration and spur broader innovation.
One way to do so, he said, could be to support an AI “public option,” borrowing inspiration from the publicly funded ARPANET of the ’70s and ’80s that paved the way for the creation of the internet.
“For some reason, the last 20 years we’ve assumed everything can happen completely privately, and I think we should learn the lesson from that,” he said.
Our top tabs
Twitter under pressure after leaving E.U. disinformation bloc
Elon Musk’s Twitter is facing pressure to comply with incoming E.U. disinformation removal obligations after exiting the bloc’s code of practice.
E.U. digital policy chief Thierry Breton warned Twitter that it cannot run from obligations it will face when the Digital Services Act (DSA) takes effect in August, Foo Yun Chee reports for Reuters. The company this past week withdrew from a voluntary code adopted in 2018 as a means of motivating companies to weed out disinformation on their platforms.
The DSA directs large platforms, including Twitter, to give researchers more transparency into their algorithms and to take additional measures to remove illegal and false content online. Companies could face fines of up to 6 percent of their global revenue if they violate the law, Yun Chee writes.
French Digital Minister Jean-Noël Barrot weighed in on the dispute and threatened to ban Twitter from operating in the E.U. if it does not comply with the law, Carlo Martuscelli reports for Politico.
Twitter on Monday tweeted images of a May 26 letter to E.U. officials saying that it plans to fully comply with the DSA and that it exited the code of practice because concerns it had raised earlier were not sufficiently addressed.
OpenAI CEO downplays possibility of E.U. exit
OpenAI CEO Sam Altman signaled his company is unlikely to stop operating in the E.U., despite remarks last week hinting that the ChatGPT maker would be unable to comply with an incoming AI law, Kelvin Chan reports for the Associated Press.
The E.U. AI Act would assign AI systems and tools like ChatGPT to different risk categories and direct entities using AI deemed high-risk to explain their use cases. Altman, in London this past week, signaled the rules might be too strict for the company to keep operating in the bloc, Chan writes.
Breton, the E.U. digital policy chief, on Twitter “linked to a Financial Times article quoting Altman saying that OpenAI ‘will try to comply, but if we can’t comply we will cease operating,’” the report says.
Altman the next day tweeted: “very productive week of conversations in europe about how to best regulate AI! we are excited to continue to operate here and of course have no plans to leave.”
Rural Texas residents losing trust in Musk’s companies
As billionaire Elon Musk works to establish operations of his companies in Texas, some residents and critics say he is moving too fast and taking risks that could harm local wildlife and communities, our colleague Jeanne Whalen reports.
“Last month, after a SpaceX rocket exploded over the Gulf of Mexico minutes after liftoff, the Federal Aviation Administration grounded the company’s launch program, saying SpaceX had to ‘perform analyses to ensure that the public was not exposed to unacceptable risks,’” Jeanne writes, adding the U.S. Fish and Wildlife Service said the explosion sent various debris flying over the area.
“Signs of Musk’s move-fast ethos have mounted in Bastrop County. The Texas Commission on Environmental Quality has hit the Musk building sites with several violations over poor erosion controls and other matters,” the report adds.
Musk “is incredibly bright, he’s been incredibly successful, and he’s done things that are extremely hard,” Maurice Schweitzer, a management professor at the University of Pennsylvania’s Wharton School, told Jeanne. But his success has “caused him some conceit where he feels entitled and he feels a sense of being special in a way that’s caused him to overextend himself.”
Inside the industry
US 'won't tolerate' China's ban on Micron chips, commerce secretary says (Reuters)
The AI boom runs on chips, but it can’t get enough (Wall Street Journal)
Chinese apps remain hugely popular in the U.S. despite efforts to ban TikTok (CNBC)
Corporate VCs ride AI startup wave (Axios)
Competition watch
China urges Japan to halt export restrictions on chips (Reuters)
Workforce report
Twitter cut key software before DeSantis audio glitch (The Information)
Trending
How the media is covering ChatGPT (Columbia Journalism Review)
Daybook
- The Senate Banking Committee holds a hearing titled “Countering China: Advancing U.S. National Security, Economic Security, and Foreign Policy” tomorrow at 10 a.m.
- The Atlantic Council holds an equitable AI workshop tomorrow at 10 a.m.
Before you log off
That’s all for today — thank you so much for joining us! Make sure to tell others to subscribe to The Technology 202 here. Get in touch with tips, feedback or greetings on Twitter or email.