AI and Cybersecurity - the rise of the malevolent machine?
OpenAI’s Chat Generative Pre-trained Transformer, or ChatGPT as it is more commonly known, is perhaps the most pervasive AI technology available to the masses. The latest iteration, built on GPT-4, has been hailed as impressively human-like and described as one of the best AI chatbots yet made available to the public.
This in turn has fuelled an AI arms race, with big tech competitors eager to bring their own versions of this technology to market to enhance their service offerings. Microsoft, for example, has recently integrated ChatGPT technology into its Bing search engine.
The fundamental premise of ChatGPT is that it predicts which word is most likely to appear next in a sentence, based on statistical patterns learned from a very large volume of text. It is essentially a mathematical and statistical model that generates responses to the questions you put to it.
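As a rough illustration of that next-word idea, the toy Python sketch below builds a simple bigram table from a tiny hypothetical corpus and picks each next word in proportion to how often it followed the previous one. This is only an assumption-laden simplification for intuition: real systems like GPT-4 use large neural networks over tokens rather than word-frequency tables, but the underlying principle of statistically predicting the next item is the same.

```python
import random
from collections import Counter, defaultdict

# Toy corpus (hypothetical); real models train on vastly larger text collections.
corpus = (
    "the cat sat on the mat and the cat slept on the sofa "
    "while the dog sat on the mat"
).split()

# Bigram table: for each word, count which words follow it.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Pick the next word in proportion to how often it followed `word`."""
    counts = following[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a short continuation, one predicted word at a time.
sentence = ["the"]
for _ in range(6):
    sentence.append(predict_next(sentence[-1]))
print(" ".join(sentence))
```

Running this produces plausible-looking but entirely statistics-driven text, which is the point: the output reflects patterns in the data, not understanding.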
There is a fascinating array of uses for this technology, offering creativity and efficiency gains for individuals and businesses alike. Yet there is also a malicious side, one that presents real risks and throws into doubt the integrity of information you may consume from the internet or other sources.
These concerns have led to open letters, signed by prominent figures at the helm of some of the world’s largest and most powerful technology companies, calling for a pause in AI development until the risks are better understood.
CyXcel's Technical Director Sachin Bhatt writes about the cybersecurity risks created by AI.
Read more: https://www.scl.org/12960-ai-and-cybersecurity-the-rise-of-the-malevolent-machine/