Everybody’s talking about how #Bard, Google’s new chatbot, made a factual mistake when answering a question – which, unluckily for the company, made it into their press releases. Sure, it’s pretty careless not to verify your own demo data and not to double-check what you put out to the media.
But I personally think that Google did us all a favour by making that mistake so publicly. There’s a worrying trend that I’ve been observing recently: after all the #ChatGPT hype, people think they can use #generativeAI models to easily get answers to any question – factual, practically useful answers. What they tend to overlook is that all these models are exactly that: models. They are trained on a lot of data, so they can generate text that looks grammatically and semantically correct, coherent and plausible.
They manage to put the right entity types in the right places: where you’d expect a date, there will be a date; where you’d expect the name of a telescope, there will be the name of a telescope. But they are not knowledge bases. They do not _know_ anything. If you ask them about a fact, they might guess the answer if they’ve seen it often enough, and if the stars are favourable today ✨😅. Or they can make something up that looks totally plausible – except it won’t be a fact, it will be made up. And in many cases you won’t get any warning about that. So, all these models are great and a real step forward. But we need to know how to use them to make them trustworthy.
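To make that concrete, here’s a minimal sketch using the Hugging Face transformers library, with GPT-2 standing in purely as an illustrative example of a generative model (not the one behind Bard or ChatGPT). The prompt and the generation settings are my own assumptions for the demo; the point is only that the model produces fluent continuations without ever consulting a source of truth.

```python
# Minimal sketch: a generative model continues a factual-sounding prompt.
# GPT-2 is used here only as a small illustrative stand-in; the output is
# a fluent continuation, not a verified answer.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Hypothetical prompt, chosen to echo the telescope example above.
prompt = "The first photo of an exoplanet was taken by the telescope named"

results = generator(
    prompt,
    max_new_tokens=20,
    do_sample=True,
    num_return_sequences=3,
)

for r in results:
    # Each continuation will look plausible (a telescope-like name appears
    # where a telescope name is expected), but nothing is looked up in a
    # knowledge base, so it may or may not be factually correct.
    print(r["generated_text"])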