This Week in AI: Can we (and could we ever) trust OpenAI?


Keeping up with an industry as fast-moving as AI is a tall order. So until an AI can do it for you, here's a handy roundup of recent stories in the world of machine learning, along with notable research and experiments we didn't cover on their own.

By the way, TechCrunch plans to launch an AI newsletter on June 5. Stay tuned. In the meantime, we're upping the cadence of our semiregular AI column, which was previously twice a month (or so), to weekly, so be on the lookout for more editions.

This week in AI, OpenAI launched discounted plans for nonprofit and education customers and drew back the curtains on its most recent efforts to stop bad actors from abusing its AI tools. There's not much to criticize there, at least not in this writer's opinion. But I will say that the deluge of announcements seemed timed to counter the company's bad press of late.

Let's start with Scarlett Johansson. OpenAI removed one of the voices used by its AI-powered chatbot ChatGPT after users pointed out that it sounded eerily similar to Johansson's. Johansson later released a statement saying that she hired legal counsel to inquire about the voice and get exact details about how it was developed, and that she'd refused repeated entreaties from OpenAI to license her voice for ChatGPT.

Now, a piece in The Washington Post implies that OpenAI didn't in fact seek to clone Johansson's voice and that any similarities were accidental. But why, then, did OpenAI CEO Sam Altman reach out to Johansson and urge her to reconsider two days before a splashy demo that featured the soundalike voice? It's a tad suspect.


Then there's OpenAI's trust and safety issues.

As we reported earlier in the month, OpenAI's since-dissolved Superalignment team, responsible for developing ways to govern and steer "superintelligent" AI systems, was promised 20% of the company's compute resources but only ever (and rarely) received a fraction of this. That (among other reasons) led to the resignation of the team's two co-leads, Jan Leike and Ilya Sutskever, formerly OpenAI's chief scientist.

Nearly a dozen safety experts have left OpenAI in the past year; several, including Leike, have publicly voiced concerns that the company is prioritizing commercial projects over safety and transparency efforts. In response to the criticism, OpenAI formed a new committee to oversee safety and security decisions related to the company's projects and operations. But it staffed the committee with company insiders, including Altman, rather than outside observers. This as OpenAI reportedly considers ditching its nonprofit structure in favor of a traditional for-profit model.

Incidents like these make it harder to trust OpenAI, a company whose power and influence grow daily (see: its deals with news publishers). Few corporations, if any, are worthy of trust. But OpenAI's market-disrupting technologies make the violations all the more troubling.

It doesn't help matters that Altman himself isn't exactly a beacon of truthfulness.

When news of OpenAI's aggressive tactics toward former employees broke (tactics that entailed threatening employees with the loss of their vested equity, or the prevention of equity sales, if they didn't sign restrictive nondisclosure agreements), Altman apologized and claimed he had no knowledge of the policies. But, according to Vox, Altman's signature is on the incorporation documents that enacted the policies.


And if former OpenAI board member Helen Toner is to be believed (she's one of the ex-board members who attempted to remove Altman from his post late last year), Altman has withheld information, misrepresented things that were happening at OpenAI and in some cases outright lied to the board. Toner says that the board learned of the release of ChatGPT through Twitter, not from Altman; that Altman gave wrong information about OpenAI's formal safety practices; and that Altman, displeased with an academic paper Toner co-authored that cast a critical light on OpenAI, tried to manipulate board members to push Toner off the board.

None of it bodes well.

Here are some other AI stories of note from the past few days:

  • Voice cloning made easy: A new report from the Center for Countering Digital Hate finds that AI-powered voice cloning services make faking a politician's statement fairly trivial.
  • Google's AI Overviews struggle: AI Overviews, the AI-generated search results that Google started rolling out more broadly earlier this month on Google Search, need some work. The company admits this, but claims that it's iterating quickly. (We'll see.)
  • Paul Graham on Altman: In a series of posts on X, Paul Graham, the co-founder of startup accelerator Y Combinator, brushed off claims that Altman was pressured to resign as president of Y Combinator in 2019 due to potential conflicts of interest. (Y Combinator has a small stake in OpenAI.)
  • xAI raises $6B: Elon Musk's AI startup, xAI, has raised $6 billion in funding as Musk shores up capital to aggressively compete with rivals including OpenAI, Microsoft and Alphabet.
  • Perplexity's new AI feature: With its new capability Perplexity Pages, AI startup Perplexity is aiming to help users make reports, articles or guides in a more visually appealing format, Ivan reports.
  • AI models' favorite numbers: Devin writes about the numbers different AI models choose when they're tasked with giving a random answer. As it turns out, they have favorites, a reflection of the data on which each was trained.
  • Mistral releases Codestral: Mistral, the French AI startup backed by Microsoft and valued at $6 billion, has released its first generative AI model for coding, dubbed Codestral. But it can't be used commercially, thanks to Mistral's fairly restrictive license.
  • Chatbots and privacy: Natasha writes about the European Union's ChatGPT taskforce, and how it offers a first look at detangling the AI chatbot's privacy compliance.
  • ElevenLabs' sound generator: Voice cloning startup ElevenLabs introduced a new tool, first announced in February, that lets users generate sound effects through prompts.
  • Interconnects for AI chips: Tech giants including Microsoft, Google and Intel (but not Arm, Nvidia or AWS) have formed an industry group, the UALink Promoter Group, to help develop next-gen AI chip components.