As I toddle off to bed, I’m pondering why we are calling LLM tools “artificial intelligence”. They don’t really show any kind of intelligence, artificial or otherwise. All they do is put millions of texts into a blender, hit the “frappe” button, then pour the slurry back out and feed it to you. There’s no verification of the presented data, no discernment between fact and fiction, just words strung together stochastically in a way that resembles other text the model has seen. The appearance of coherence is merely a function of how much data it has consumed; the coherence itself is an illusion. And yet we have people out there who are convinced systems like ChatGPT are some kind of oracle, infallible and wise. That’s truly frightening.
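
For the curious, here’s a minimal sketch of what “stringing words together stochastically” means. This is my own toy illustration, a bigram lookup table, not how any production LLM is actually built: it picks each next word at random, weighted only by how often that word followed the previous one in its “training” text.

```python
import random
from collections import defaultdict

# Toy "blender": tally which word follows which in a scrap of training text.
corpus = "the cat sat on the mat and the dog sat on the rug".split()
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

# "Pour the slurry back out": pick each next word at random, weighted only
# by how often it followed the previous one. Nothing is verified or understood.
word = "the"
output = [word]
for _ in range(10):
    if not follows[word]:  # dead end: this word never had a successor
        break
    word = random.choice(follows[word])
    output.append(word)
print(" ".join(output))
```

Real LLMs condition on far longer contexts using neural networks rather than a lookup table, but the final step is the same kind of weighted dice roll over possible next words.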