Artificial intelligence is currently at the forefront of our world, but its positive future evolution depends on prioritising ethical applications and regulatory oversight at all levels. Since OpenAI released ChatGPT in November 2022, there has been a widespread fascination with generative artificial intelligence (GenAI).
ChatGPT, based on a large language model (LLM) pre-trained on extensive data to generate human-like writing and creative content, rapidly became the fastest-growing consumer internet application ever, accumulating an estimated 100 million monthly users within its first two months.
While GenAI isn’t entirely novel—research and development by OpenAI and other organisations have been ongoing—ChatGPT propelled it and similar tools into the mainstream. Microsoft promptly invested $13 billion in OpenAI, and in 2023, over 25% of all U.S. investment dollars in startups went into AI-related companies. Moreover, AI startups are projected to experience an annual growth rate of 37.3% from 2023 to 2030.
Despite GenAI’s initial success, challenges have emerged. ChatGPT has produced factual inaccuracies and, at times, laughable errors. In the rush to adopt the technology, many companies and users have neglected to fact-check its outputs, exposing themselves to potential risks.
In the tech community, concerns linger regarding the interpretability of the deep neural networks (DNNs) and large language models (LLMs) that underpin most generative AI tools. Executives will increasingly seek transparency on these and other aspects before expanding their applications of GenAI.
Ensuring that all AI tools utilise unbiased and balanced data is crucial. Take AI-powered facial recognition, for example. If the training dataset over-represents a specific ethnicity, the tool will likely produce unfair, biased outcomes.
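One simple audit that follows from this point is checking how demographic groups are represented in a training set before it is used. The sketch below is a minimal, hypothetical Python illustration; the function names and the 10% threshold are assumptions for demonstration, not an established fairness standard:

```python
from collections import Counter

def group_shares(labels):
    """Return each group's share of the dataset as a fraction of the total."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

def flag_underrepresented(labels, min_share=0.1):
    """List any groups whose share falls below an assumed minimum threshold."""
    shares = group_shares(labels)
    return [group for group, share in shares.items() if share < min_share]

# A toy dataset where two groups are heavily underrepresented.
sample = ["A"] * 90 + ["B"] * 8 + ["C"] * 2
print(flag_underrepresented(sample))  # prints ['B', 'C']
```

A real-world audit would go further, measuring outcome disparities per group rather than raw representation alone, but even a check this simple can surface the kind of skew described above before a tool ships.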
Furthermore, significant questions exist regarding the data used to train AI algorithms. Companies should lead in ensuring that any AI products they develop draw from datasets fairly and transparently. Ultimately, governments must establish standards and laws governing these parameters.
The rise of artificial intelligence (AI) is undeniably thrilling. Its capabilities could improve productivity and the overall quality of human life in previously unimaginable ways. However, if AI remains unregulated, both at the highest levels of government and through widely accepted standards established by the private sector, it could also present risks, some known and others not, that might have an equally adverse effect on the world.