A leading government advisor on artificial intelligence (AI) has warned that businesses using ChatGPT risk creating “dangerous situations” because the chatbot is “poorly trained and biased”.
Inma Martinez, chair of the expert group of the G7/G20 countries’ AI initiative, spoke to Travel Weekly ahead of addressing Abta’s Travel Convention in October.
She said ChatGPT was launched “as an experiment” and described it as “a language model poorly trained and biased that infringes copyright laws and has created dangerous situations for businesses unaware that it does not protect their intellectual property”.
Martinez expressed concern about the environmental impact of training AI models and the quality of data used to train them.
The launch of ChatGPT last November, a sophisticated chatbot powered by the GPT-3.5 large language model (LLM) trained on huge datasets, was widely hailed as a breakthrough in generative AI.
Martinez explained: “Training large AI models requires significant computational resources, leading to substantial energy consumption.”
She warned this could “exhaust the grid” and “force governments to prohibit their further development”.
Martinez added: “Data quality is the second-most concerning [issue]. The quality of data in large datasets can vary significantly, including errors, inaccuracies and ‘noise’. Models can learn from these, impacting their reliability. If the training data contains biases, AI models can perpetuate these biases.”
She warned of the creation of “misinformation” when an AI system “attempts to fill in gaps in its knowledge but lacks accurate information or context”, saying: “AI models do not fact-check information before generating responses. They can generate statements that are inaccurate, misleading or even fictional. In their eagerness to continue answering questions they invent a completely new reality that defeats the purpose of using them.”
She blamed the hype around the technology on “individuals who pretended to use ChatGPT for tasks it was never meant to do”.
Tui is among the major businesses trialling use of ChatGPT.
Group chief information officer Pieter Jordaan insisted the company was alive to the dangers.
“It’s important to understand ChatGPT gives a plausible answer, [but] it has no fact base,” he said.
He forecast “legal and political battles” over the use of AI, noting: “You can’t send private information into these models and it be secure.”