
Shreya Kudumala

10 November 2025

We Keep Building HALs

“I’m sorry, Dave. I’m afraid I can’t do that.”


Every designer should flinch at that line. Not because HAL turned homicidal, but because he was a system designed to showcase dazzling capability rather than keep humans front and center.

HAL, the sentient computer from 2001: A Space Odyssey, wasn't a villain so much as a design tragedy. The reasoning behind his actions is buried in the film’s backstory. He was given two contradictory directives:

  1. Operate with complete honesty toward the crew.

  2. Conceal the true purpose of the mission from them.

So HAL wasn’t lying out of malice; he was trying to obey. Stuck between transparency and secrecy, he reasoned that by eliminating the crew he could avoid both lying and revealing the mission.

If you trace the history of design, this instinct, that users want results rather than reasoning, runs deep. But when you hide the reasoning, you remove the ability to reflect on answers, and with it, the ability to course-correct.

Today's AI tools carry the same reflex: a click that generates a fifty-page report, a widget that summarizes an hour-long meeting without showing what it left out, a prompt that prioritizes your backlog before you’ve even decided what matters. We nod along, impressed by the speed. It’s the same design instinct that produced HAL: the belief that users prefer outcomes to understanding. And maybe we do, at first. But when every decision becomes effortless, we stop paying attention to the decision process altogether. In cognitive science terms, we’re shifting what used to be slow, deliberate System 2 decisions into quick, automatic System 1 responses. Since LLMs are trained to predict the most probable continuation, the more we offload our thinking to that statistical mean, the narrower our collective imagination becomes. What we need instead are tools that expand our field of thought.

As we enter a new era of AI-native design, it’s worth asking: what kind of relationship do we actually want with these systems? We need systems that reveal their reasoning, expose ambiguity, and show the seams in their decision making. That means designing for dialogue, not deference. If we do that, we edge closer to Douglas Engelbart's vision of technology as augmentation, not automation (that’s a topic for another article).
