UX for AI


Do Not Trust AIs

Last Updated 16 Oct, 2023

Conway's Game of Life. From Wikipedia

I have been writing interfaces for AIs for about a decade now, and reading about them for far longer (those Wired articles about AI in the '90s, man...). From Conway's Game of Life to ELIZA the chatbot, I have been fascinated with how these computers reason in ways that sometimes seem almost alive.

They are not alive.

Generated with Stable Diffusion 1.5


The paintings at the head of my articles are generated with Stable Diffusion 1.5. They are, of course, insane. If they were images of humans they would make David Cronenberg pause. I'm sure that if you showed these paintings to birds they would be horrified.

What I like about them is that they demonstrate, in a visceral way, what statistics do to information. Every graph you see, every poll, every dry investment-speak presentation is a distortion of reality -- sometimes hilarious, sometimes ghastly, always inaccurate.

This doesn't mean graphs and polls are worthless. It means we need to read them with the understanding that we are looking at the world through a funhouse mirror, surfacing things our minds don't necessarily know how to comprehend. We can use these tools, but we should be careful with them.
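A quick way to see the distortion for yourself: two wildly different realities can collapse into the same summary statistic. The numbers below are my own toy illustration, not from any real poll.

```python
# Toy sketch: two very different "towns" that an average flattens
# into the identical number. Figures are invented for illustration.
uniform_town = [50_000] * 10              # everyone earns the same
divided_town = [10_000] * 9 + [410_000]   # one outlier dominates

def mean(xs):
    return sum(xs) / len(xs)

print(mean(uniform_town))  # 50000.0
print(mean(divided_town))  # 50000.0 -- same mean, wildly different worlds
```

The average is not lying, exactly; it is doing precisely what it was asked to do. But anyone reading "average income: $50,000" off a slide has lost the shape of the underlying reality -- the funhouse mirror at work.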

Allegedly from an IBM presentation in the late '70s, but I couldn't find a good source.

When I see people trying to get LLMs to quit hallucinating, I grow concerned, because they are trying to get a collection of text and statistics called an LLM to speak truth. It never will. That doesn't mean these tools aren't useful, but we need to use them in their proper context, with the proper viewpoint.

I use ChatGPT extensively these days, though I try to keep it separate from my main notes. It is a profoundly useful tool for writing software.

Even then

Generated with Stable Diffusion 1.5


Good morning ChatGPT 3.5. Could you give me a list of reasons why I shouldn't trust you?

Good morning! Of course, it's important to consider potential reasons not to trust AI models like me:

1. Lack of Understanding: AI models like me can generate text based on patterns in the data they were trained on, but they don't truly understand the information or context.
2. Limited Knowledge: My knowledge is based on data up to September 2021, so I may not have the most up-to-date or complete information.
3. Potential for Errors: I can make mistakes or provide incorrect information, especially in rapidly evolving fields.
4. Biased Data: My training data may contain biases from the internet, which can lead to biased or unfair outputs.
5. Privacy Concerns: Sharing personal or sensitive information with AI models may pose privacy risks, and it's important to be cautious.
6. No Personal Experience: I lack personal experiences, emotions, or consciousness, so I can't provide insights based on human intuition or emotions.
7. Dependence on Data Sources: My responses are only as good as the data I was trained on, which may not cover all subjects or viewpoints.
8. Ethical Concerns: Some people have ethical reservations about AI technology, particularly in areas like job displacement and automation.
9. Vulnerability to Manipulation: AI models can be manipulated to generate harmful or misleading content if used for malicious purposes.
10. Uncertainty: There might be situations where I cannot provide a definitive answer or where there are conflicting opinions.
11. Lack of Responsibility: I don't take responsibility for any decisions made based on my information, as the ultimate responsibility lies with the user.
12. Intellectual Property: Some might be concerned about copyright or intellectual property issues when using AI-generated content.

It's essential to use AI tools like me with awareness of these limitations and to verify information when making important decisions. Always exercise critical thinking and consider the context in which AI is used.