Elon Musk Reportedly Wants to Launch AI Startup that Doesn’t Ignore Safety Issues
According to a Financial Times report, Elon Musk wants to launch a new AI startup to compete with OpenAI's ChatGPT. The AI would be designed with safety at its core, something Musk argues current developers continue to ignore.
Elon Musk is developing plans to launch a new AI startup
The billionaire is developing plans to launch a new AI startup, the Financial Times reported, citing several people familiar with the matter. Musk is reportedly already assembling a team of artificial intelligence researchers and engineers and is in talks with a number of SpaceX and Tesla investors about funding the new venture.
The team for the new project is already being formed
Musk is recruiting engineers from top AI labs, including DeepMind, according to people with knowledge of his plans. The same people said he began exploring the idea of a rival company earlier this year, following OpenAI's rapid progress. So far, Musk has reportedly brought on Igor Babushkin, a former DeepMind employee, along with about half a dozen other engineers.
The equipment is already arriving
To get the project off the ground, the CEO of SpaceX, Tesla, and Twitter has acquired thousands of powerful Nvidia GPUs. They are needed to build a large language model — an artificial intelligence system that ingests huge amounts of content and produces humanlike writing or realistic imagery, like the technology behind ChatGPT or Midjourney.
However, when asked about the GPU purchases during a Twitter Spaces interview this week, Musk replied: “It seems like everyone and their dog is buying GPUs at this point. Twitter and Tesla are certainly buying GPUs.”
Under Musk’s leadership, AI will be designed with safety at its core
Recently, Musk was one of thousands of tech figures from around the world who signed an open letter about the threat of AI. The letter calls for a pause in the development of GPT-style models over safety concerns, arguing that if AI labs do not prioritize safety, governments will need to step in and impose regulations as a matter of urgency.
Musk co-founded OpenAI in 2015 and left its board of directors three years later after conflicts with its management. Among the points of contention was the organization's attitude toward AI safety, according to two people who were involved with OpenAI at the time. Shortly thereafter, the organization adopted a commercial structure and attracted a $1 billion investment from Microsoft. As to why he left OpenAI, Musk tweeted in 2019:
“I had to focus on solving a painfully large number of engineering & manufacturing problems at Tesla (especially) & SpaceX.
Also, Tesla was competing for some of the same people as OpenAI & I didn’t agree with some of what OpenAI team wanted to do. Add that all up & it was just better to part ways on good terms.”
Elon Musk has long expressed concern about the potential dangers of AI and has been keen to find ways to protect humanity from them. He has also publicly criticized OpenAI, saying the company has become less transparent and too commercially oriented in its pursuit of advanced AI. He worries that models such as GPT-4 can produce falsehoods and exhibit political bias, problems that independent researchers have also documented.
Twitter and Tesla could become the backbone of the project
Sources said the new startup could use Twitter content as data to train its language model and draw on Tesla for computing resources. Tesla's Dojo supercomputer, built specifically for training, offers enormous processing power and could be a natural fit for the new venture. “I think there’s a lot of potential there with Dojo that people don’t realize,” Musk said.