‘Foundation models’ may be the future of AI. They’re also deeply flawed

https://techmonitor.ai/technology/ai-and-automation/foundation-models-may-be-future-of-ai-theyre-also-deeply-flawed

For a while, it seemed like the future. When it was first unveiled in May 2020, OpenAI’s GPT-3 language model stunned observers with its capacity to generate human-like text from prompts of just a few words. Suddenly, the demise of the professional writer – and any chance of curbing fake news – seemed nigh. GPT-3 was simultaneously praised as one of the most powerful AI tools ever developed and deemed too dangerous to be released as an open-source model.

Over subsequent months, however, significant flaws in GPT-3 began to emerge. An automated tweet generator built on the model exposed its shortcomings, as the quality of much of its output fell far short of the promising early examples advertised by OpenAI. Some of it was simply nonsense: a tweet generated from the prompt ‘Zuckerberg’ resulted in a screed about the Facebook founder rolling up his tie and swallowing it. Other prompts produced a slew of offensive and racist stereotypes – the result, some speculated, of the 175-billion-parameter model having been trained on masses of data scraped from the internet, warts and all.