2024: The Year of AI & Tech Troubles

The year 2024 brought transformational technological breakthroughs alongside notable AI and tech misses. As we kick off 2025, the tech industry will continue to push the boundaries of innovation. It has become important to learn from the mistakes of the past and sharpen our focus on reliability, accuracy, and inclusivity in the development of advanced artificial intelligence software. 

Last year, AI models such as OpenAI’s ChatGPT, a large language model (LLM), became increasingly popular for summarising text and research. It would not be wrong to say that ChatGPT remained the dominant force in 2024 and will likely hold that position in 2025. However, even as a leading AI model, ChatGPT was not immune to misses and controversies. 

The most advanced version of the model, GPT-4o, which featured lifelike voices, was flagged because one of those voices sounded strikingly like Scarlett Johansson. ChatGPT was also responsible for misleading legal advice. Canadian lawyer Chong Ke sought assistance from ChatGPT to address a client’s question about child travel rights. However, the AI-powered chatbot cited entirely fabricated court cases, and Ke failed to verify their authenticity. As a result, Ke was ordered to cover the opposing counsel’s costs for investigating the nonexistent cases.

Later in the year, OpenAI made headlines again when several of its key executives left the company, sparking debate across the globe. 

Another major blunder of 2024 was the rollout of Google’s Gemini. Launched in February 2024 with the aim of revolutionising image creation, the AI model instead ended up forcing Google to rethink its strategy: the images it generated were often exaggerated or outright false. Moreover, according to Inc.com, there were instances of Gemini sending threatening messages, even telling a grad student to die. 

However, Gemini was not Google’s only AI stumble. In May, midway through the year, Google launched its AI-generated search summaries, which quickly became famous for all the wrong reasons, serving up illogical answers and inaccurate data. For instance, Forbes shared the story of a person who asked the AI, “How to keep the cheese from sliding off a homemade pizza?” To which it replied, “Add Elmer’s glue to the sauce.”

McDonald’s drive-thru robot met the same fate. In collaboration with IBM, McDonald’s rolled out AI-powered bots for food ordering at 100 drive-thru locations. However, the technology produced numerous errors and drew widespread ridicule on social media, ultimately being labelled a “disaster.” As a result, the initiative was discontinued, and McDonald’s ended its partnership with IBM. The episode underscored the difficulty of deploying AI in real-world settings and the importance of thorough testing and validation. 

The cost of poor testing was felt again in the CrowdStrike outage. In July, a faulty update to the company’s security software, widely called one of the worst software updates of the year, caused disruption across the globe and massive economic losses. According to Forbes, Delta Air Lines alone cancelled 7,000 flights, and per Parametrix, the outage could have cost Fortune 500 companies more than $5 billion. CrowdStrike is now facing a lawsuit worth $500 million. 

“The largest direct financial loss will be suffered by Fortune 500 companies in the healthcare sector ($1.938 billion), followed by banking ($1.149 billion)” 

                                                                       ~ An Analysis by Parametrix

And last but not least is AI slop. A study cited by Forbes estimates that around 57% of online content is now either AI-generated or processed through AI translation tools, marking a significant shift in how content is produced and shared. This surge of AI-generated material, often labelled “AI slop,” spans the spectrum from entertaining and bizarre, such as the “Shrimp Jesus” meme, to misleading, like the fabricated image of a shivering girl in a rowboat after Hurricane Helene. Frequently created for clicks, this content is rarely fact-checked, raising serious concerns about its accuracy, context, and ethical implications. As AI tools become more advanced, distinguishing AI-generated content from human-created material grows increasingly difficult, with 65.8% of people believing AI content meets or surpasses the quality of average human writing.

57% of online content is now either AI-generated or processed through AI translation tools – Forbes 

Hope for a Better Future

As we look ahead to 2025 and beyond, industries will continue to push the boundaries of innovation. However, it is essential to balance the benefits of emerging technologies against efforts to minimise their risks and negative impacts. By embracing this thoughtful approach, we can unlock technology’s full potential to drive progress, improve quality of life, and shape a better future for all, ensuring that technological advancements serve humanity responsibly and equitably in the years to come.
