xAI Co-Founder Igor Babuschkin Exits to Build AI Safety Investment Firm

Igor Babuschkin, a founding member of Elon Musk’s xAI, said that he’s leaving the artificial intelligence startup to launch his own venture firm.

“Today was my last day at xAI, the company that I helped start with Elon Musk in 2023,” Babuschkin wrote on X, which is owned by xAI. “I still remember the day I first met Elon, we talked for hours about AI and what the future might hold. We both felt that a new AI company with a different kind of mission was needed. Building AI that advances humanity has been my lifelong dream.”

Musk wrote, in response, “Thanks for helping build @xAI! We wouldn’t be here without you.”

A former research engineer for Google’s DeepMind and ex-member of OpenAI’s technical staff, Babuschkin recounted some of xAI’s major operational achievements during his tenure, including building out engineering teams at the company.

The supercomputing facility xAI built in Memphis, Tennessee, processes data and trains the models that power the company’s Grok chatbot.

“Through blood sweat and tears, our team’s blistering velocity built the Memphis supercluster, and shipped frontier models faster than any company in history,” he wrote.

Locals have protested xAI’s operations in Memphis, especially its use of natural gas-burning turbines to power its data centers. Emissions from the turbines are reportedly worsening the poor air quality in the West Tennessee city.

At the time he was preparing to go into business with Musk, Babuschkin wrote that he believed “very soon AI could reason beyond the level of humans,” and was concerned about making sure such technology is “used for good.”

He said that “Elon had warned of the dangers of powerful AI for years” and shared his vision of “AI used to benefit humanity.”

The company has a rocky track record when it comes to AI safety.

In May, xAI’s Grok chatbot automatically generated and spread false posts about alleged “white genocide” in South Africa. After that, the company apologized and said Grok’s strange behavior was caused by an “unauthorized modification” to the chatbot’s system prompts, which help inform the way it behaves and interacts with users.

In July, xAI found itself apologizing for another problem with Grok. After a code update, the chatbot automatically generated and spread false and antisemitic content across X, including posts praising Adolf Hitler.

The European Union requested a meeting last month with representatives from xAI to discuss problems with X and the integrated Grok chatbot.

Other chatbots have also generated false or otherwise dangerous outputs in response to queries. OpenAI’s ChatGPT was recently called out for giving bad health advice to a user who wound up in the emergency room. And Google had to make changes to Gemini last year after it generated offensive images in response to user prompts about history, including images that depicted people of color as Nazis.
