The AI Race: What's at Stake?
Right now, Artificial Intelligence is as dumb as it's ever going to be. The non-stop learning machines we've built are constantly fed our man-made source material, and they are only getting more advanced.
Seemingly, we’re all along for the ride, creators included.
What are we working with?
A balanced mix of hype and concern over AI has led to a global feeling of uncertainty around these new tools and technologies. That concern led more than 1,000 tech leaders to sign an open letter calling for a pause in work on machine learning (ML) models more advanced than OpenAI's GPT-4.
The open letter stated that “recent months have seen A.I. labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control.”
Elon Musk was one of the signatories… though he has since announced he'll be creating his own AI chatbot, "TruthGPT". As with most things from Musk, it's not clear whether this is legit (though his reported buying up of 10,000 GPUs would suggest yes) or just more professional-level trolling of his ex-colleagues at OpenAI. Musk 'co-founded' OpenAI in 2015 but clashed with its management and left the board in 2018, publicly citing a conflict of interest with Tesla.
Why the sudden cause for panic?
Businesses have used AI to streamline processes and improve customer service for years. Credit card companies use machine learning to monitor consumer card usage and flag anything suspicious. Search engines and email providers, such as Google, use Natural Language Processing (NLP) to improve the accuracy of search results and to filter out email spam. AI that improves efficiency is mostly welcomed with open arms.
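To give a flavour of what that kind of background AI looks like in practice, here is a minimal sketch of a text-based spam filter. It uses scikit-learn's Naive Bayes classifier and a tiny made-up training set; it's purely illustrative, not how Google or any provider actually builds their filters.

```python
# Illustrative sketch only: a toy ML spam filter (not any provider's real approach).
# Assumes scikit-learn is installed; the small dataset below is invented for demonstration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny hypothetical training set: 1 = spam, 0 = not spam
emails = [
    "Win a free prize now, click here",
    "Cheap loans approved instantly",
    "Meeting moved to 3pm tomorrow",
    "Here are the notes from today's call",
]
labels = [1, 1, 0, 0]

# Convert raw text into word counts, then fit a Naive Bayes classifier
spam_filter = make_pipeline(CountVectorizer(), MultinomialNB())
spam_filter.fit(emails, labels)

# Classify a new, unseen message
print(spam_filter.predict(["Click here for your free prize"]))  # [1] -> flagged as spam
```

Real systems are vastly more sophisticated, but the principle is the same: learn patterns from past examples and quietly apply them in the background.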
However, it's this new wave of AI, built to entertain and educate, that has changed the stakes. AI is no longer just a technology working in the background of our lives; it has very quickly moved to the fore and is now raising questions of ethics.
Who has control?
Current guidance surrounding how to use AI “responsibly” is thin.
Eventually, governments will catch up and legislate. But could this “eventually” come too late?
Governments not having a full understanding of, and control over, technological advancements is one thing. The actual creators of the tech themselves not having that understanding and control is, for many techies and non-techies alike, worrisome.
What happens next?
There is no shortage of cash flowing into the AI industry, in both the public and private sectors. Overall global spend is predicted to reach $154 billion in 2023, an increase of 26.9% over the amount spent in 2022.
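For context, those two figures imply 2022 spending of roughly $121 billion. A quick back-of-the-envelope check, using only the numbers quoted above:

```python
# Back-of-the-envelope check on the figures quoted above (all values approximate).
spend_2023 = 154.0   # predicted global AI spend in 2023, in $ billions
growth = 0.269       # stated year-on-year increase

implied_2022 = spend_2023 / (1 + growth)
print(f"Implied 2022 spend: ~${implied_2022:.0f} billion")  # ~$121 billion
```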
With this much money waiting to be spent, the piqued interest of the general public, and the vast possibilities of AI technology, the only way is forward.
Which brings up the need for responsible use of AI. But who decides what is responsible?
After a bit of back and forth, I ultimately asked ChatGPT to weigh in, and it came back with: "ethical and fair AI development and use requires ongoing dialogue and collaboration among stakeholders, including developers, policymakers, and members of affected communities."
Can this happen? Or is the race to the top going to get in the way of this collaboration? We will all find out soon.