There was a time when military power decided which nations were superpowers. That paradigm has shifted: today, the country with the best technology is generally considered the global superpower. The AI revolution is poised to transform industries such as healthcare and transportation, potentially turning each into a trillion-dollar market within the next decade. Behind the curtain, however, the picture is less rosy. A war of ideologies is raging between billionaires over the future of AI, with billions of dollars and the direction of the technology at stake.
Tech billionaires like Elon Musk and Reid Hoffman favour open-source AI development. They argue that openness leads to faster breakthroughs, accelerated research, and public scrutiny, all of which are crucial for building safe AI models. They also believe that open-source development helps mitigate potential biases and discourages any single company from building a harmful model. A 2023 PwC study found that 84% of respondents believe open-source development accelerates AI research.
On the other hand, billionaires like Vinod Khosla oppose open-source AI development. They believe that free or unregulated AI development increases unforeseen risks that could devastate humanity. Khosla argues that if AI models are left unregulated, hostile countries could use the technology to develop arms or, even worse, chemical weapons. An MIT Technology Review piece, ‘The Hidden Biases In AI Decision-Making’, also highlighted that complex, advanced AI models are often opaque even to their own developers. This raises concerns about unintended consequences and about bias creeping into the algorithms, which defeats the whole purpose of an advanced AI model.
Another major concern about unregulated AI models is that they can be used to manipulate the masses into believing things that are not true. Forbes’s 2024 AI Risk Perception Survey found that 62% of respondents are concerned about the misuse of AI models for large-scale social manipulation.
A prominent example is the “All Eyes On Rafah” image, which was shared over 50 million times on Instagram. The image was created with AI by a Malaysian artist, though the version that went viral had been tweaked by another Malaysian artist. People then began modifying the image further, producing versions depicting distressed Palestinian women and children. The ethical dilemma is that most of the modified images lacked the watermark indicating the image was AI-generated, so viewers had no way of knowing that the women and children shown weren’t real. Another unethical use of AI surfaced in August 2023, when the Ansel Adams Estate publicly criticised Adobe for offering AI-generated stock images in the photographer’s landscape style. The estate also posted in the thread, “You are officially on our last nerve with this behaviour.”
There is no clear winner in this debate. Open-source development, with its potential to accelerate AI progress, also carries the risk of unregulated use. Restricted AI development, while ensuring control, can stifle that progress. Striking a balance between the two approaches is the key to moving forward.
Amidst this billionaire war, the path forward lies in a balanced approach. Open-source development is crucial for rapid innovation, but without supervision it can lead to disaster. The solution is to continue with open-source models while restricting access and adding oversight. Public scrutiny is also vital. Finding a viable plan that incorporates the best of both worlds may seem challenging, but it’s the only way forward.