Do Not Trust AIs
Last Updated
I have been writing interfaces for AIs for about a decade now, and reading about them for far longer (those Wired articles about AI in the '90s, man...). From Conway's Game of Life to ELIZA the chatbot, I have been fascinated with how these computers reason in ways that sometimes seem almost alive.
They are not alive.
The paintings at the head of my articles are generated with Stable Diffusion 1.5. They are, of course, insane. If they were images of humans they would make David Cronenberg pause. I'm sure if you showed these paintings to birds they would be horrified.
I like them because they demonstrate, in a visceral way, what statistics do to information. Every graph you see, every poll, every dry investment-speak presentation is a distortion of reality -- sometimes hilarious, sometimes ghastly, always inaccurate.
This doesn't mean graphs and polls are worthless. It means we need to read them with the understanding that we are looking at the world through a funhouse mirror, one that surfaces things our minds don't necessarily know how to comprehend. We can use these tools, but we should be careful with them.
When I see people trying to get LLMs to quit hallucinating, I grow concerned, because they are trying to get a collection of text and statistics called an LLM to speak truth. It will never be able to. That doesn't mean these tools aren't useful, but we need to use them in their proper context, with the proper viewpoint.
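To make the point concrete, here is a toy sketch of my own (not how any real LLM is built, just the statistical idea shrunk down to a few lines): a bigram model that generates fluent-looking text purely from word counts. It has no concept of truth, only of which word tended to follow which.

import random
from collections import defaultdict

# A tiny "language model" built from nothing but word statistics.
corpus = (
    "the model predicts the next word the model has no idea "
    "whether the next word is true the model only knows statistics"
).split()

# Count which words follow which in the training text.
follows = defaultdict(list)
for prev, word in zip(corpus, corpus[1:]):
    follows[prev].append(word)

def generate(start, length=12):
    """Sample a plausible-sounding string from the bigram counts."""
    word, out = start, [start]
    for _ in range(length):
        options = follows.get(word)
        if not options:
            break
        word = random.choice(options)  # pure statistics, no fact-checking
        out.append(word)
    return " ".join(out)

print(generate("the"))

Run it a few times and it produces different, grammatical-ish sentences every time. None of them are true or false in any meaningful sense; they are just likely sequences. Scale that idea up by a few billion parameters and you have something far more convincing, but the same caveat applies.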
I use ChatGPT extensively these days, though I try to keep it separate from my main notes. It is a profoundly useful tool for writing software.
Even then, I don't trust it -- and I asked it to explain why:
Good morning ChatGPT 3.5. Could you give me a list of reasons why I shouldn't trust you?