AI HUDs vs. AI Copilots: Discussion Digest

Last Update: 2025-07-28T08:01:58.661Z
1. Enough AI copilots, we need AI HUDs
walterbell | 374 points | 114 comments
Article Summary
  • The article critiques the prevailing "copilot" metaphor in AI design, referencing Mark Weiser's 1992 talk which argued against agentic AI interfaces that act as virtual collaborators.
  • Weiser advocated for "invisible computers" that fade into the background and become extensions of the user's body, rather than demanding attention like a copilot.
  • The Head-Up Display (HUD) in airplanes is presented as an ideal example: it overlays critical information directly in the pilot's field of view, enhancing awareness without requiring interaction.
  • The author draws parallels to software, noting that features like spellcheck function as HUDs by instantly highlighting errors, giving users new senses without conversational interfaces.
  • In coding, the author prefers using AI to build custom debugger UIs that visualize program behavior, acting as HUDs that extend understanding beyond narrow tasks.
  • The article argues that automation and virtual assistants are not the only UI paradigm; HUDs can ambiently enhance human senses and expertise.
  • The author suggests that routine, predictable tasks may be best delegated to copilot-like assistants, but extraordinary outcomes require empowering humans with new superpowers via HUDs.
  • The piece concludes by encouraging designers to consider non-copilot form factors that directly augment human cognition, and provides further reading on augmenting human intelligence and malleable software.
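The spellcheck analogy above can be sketched as a minimal HUD-style checker. This is an illustrative Python sketch, not anything from the article: `KNOWN_WORDS` is a toy dictionary and `hud_spellcheck` is a hypothetical name. The point is the interface shape, returning annotation spans for a UI to underline in place, with no conversational turn-taking.

```python
import re

# A tiny illustrative dictionary; a real HUD would use a full lexicon.
KNOWN_WORDS = {"the", "quick", "brown", "fox", "jumps", "over", "lazy", "dog"}

def hud_spellcheck(text: str) -> list[tuple[int, int, str]]:
    """Return (start, end, word) spans for unknown words.

    HUD-style: no dialog, no interruption -- just annotations that a UI
    can render as underlines in the user's field of view.
    """
    spans = []
    for match in re.finditer(r"[A-Za-z]+", text):
        if match.group().lower() not in KNOWN_WORDS:
            spans.append((match.start(), match.end(), match.group()))
    return spans

print(hud_spellcheck("the quick brwn fox jmps over the lazy dog"))
# -> [(10, 14, 'brwn'), (19, 23, 'jmps')]
```

The caller decides how to render the spans; the checker itself never demands attention, which is the HUD property the article emphasizes.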
Common Themes
Preference for HUDs and ambient AI interfaces over copilot/agentic models

so yeah, copilots are cool, but i want HUDs: quiet most of the time, glanceable, easy to interrupt, receipts for every action.

stan_kirdey | source

Author here. Yes, I think the original GitHub Copilot autocomplete UI is (ironically) a good example of a HUD! Tab autocomplete just becomes part of your mental flow. Recent coding interfaces are all trending towards chat agents though.

gklitt | source
Spellcheck, IDE features, and debugging tools as examples of AI HUDs

You know that feature in JetBrains (and possibly other) IDEs that highlights non-errors, like code that could be optimized for speed or readability (inverting ifs, using LINQ instead of a foreach, and so on)? As far as I can tell, these are just heuristics, and it feels like the perfect place for an 'AI HUD.'

utf_8x | source
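The heuristic inspections utf_8x describes can be approximated with plain AST pattern-matching. The sketch below (Python stdlib `ast`; the rule and the function name are illustrative assumptions, not JetBrains internals) flags a loop that a list comprehension could replace, returning line numbers for HUD-style highlighting:

```python
import ast

def find_comprehension_candidates(source: str) -> list[int]:
    """Heuristic inspection: flag `for` loops whose body is a single
    `something.append(...)` call -- a pattern a list comprehension could
    replace. Returns the line numbers to highlight, HUD-style."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.For) and len(node.body) == 1:
            stmt = node.body[0]
            if (isinstance(stmt, ast.Expr)
                    and isinstance(stmt.value, ast.Call)
                    and isinstance(stmt.value.func, ast.Attribute)
                    and stmt.value.func.attr == "append"):
                hits.append(node.lineno)
    return hits

code = """
results = []
for x in range(10):
    results.append(x * x)
"""
print(find_comprehension_candidates(code))  # -> [3]
```

An AI HUD could sit exactly here: the heuristic finds the candidate lines cheaply, and a model proposes the rewrite only when the user glances at the highlight.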

AI building complex visualisations for you on-the-fly seems like a great use-case. For example, if you are debugging memory leaks in a specific code path, you could get AI to write a visualisation of all the memory allocations and frees under that code path to help you identify the problem.

sothatsit | source
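As a textual stand-in for the on-the-fly allocation visualisation sothatsit suggests, Python's stdlib `tracemalloc` can already capture per-line allocation statistics for a suspect code path. The helper and the simulated leak below are illustrative assumptions; an AI-generated HUD would render the same data graphically:

```python
import tracemalloc

def snapshot_code_path(fn):
    """Run fn under tracemalloc and return allocation statistics grouped
    by source line -- the raw data behind an ad-hoc allocation HUD."""
    tracemalloc.start()
    fn()
    snapshot = tracemalloc.take_snapshot()
    tracemalloc.stop()
    return snapshot.statistics("lineno")

leaked = []  # survives the call, so its allocations stay in the snapshot

def suspect_path():
    # Simulated leak: references kept alive after the "work" is done.
    for _ in range(1000):
        leaked.append("x" * 100)

for stat in snapshot_code_path(suspect_path)[:3]:
    print(stat)
```

The top statistic points at the line inside `suspect_path` that keeps allocating, which is the signal a visualisation would make glanceable.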
Tradeoffs and limitations of HUDs versus copilot models

HUDs are primarily of use to people who are used to parsing dense visual information. Wanting HUDs on a platform that promises to do your work for you is quite pointless.

precompute | source

The current paradigm is driven by two factors. One is the reliability of the models, which constrains how much autonomy you can give to an agent. The second is chat as a medium, which everyone adopted because ChatGPT became a thing. I see the value in HUDs, but only when you can be sure the output is correct.

ankit219 | source
Uncommon Opinions
Skepticism about the novelty or practicality of AI HUDs

Kind of a weird article, because the computer system that is 'invisible', i.e. an integrated part of the flight control systems, is exactly what we have now. He's sort of arguing for .... computer software. Like, we have HUDs - that's what a HUD is - it's a computer program.

wewewedxfgdf | source

This is a mess. The 1992 talk wasn't at all about AI, and since then our phones have given us 'ubiquitous computing' en masse. The original talk required no 'artificial intelligence' for relevance, which makes it strange to apply it to today's artificial intelligence.

aaron695 | source
Concerns about trust, reliability, and potential harm from AI interfaces

I'm not really sure what trust means in a world where everyone relies uncritically on LLM output. Even if the information from the LLM is usually accurate, can I rely on that in some particularly important instance?

AlotOfReading | source

I imagine there will be the same problems as with Facebook and other large websites that used their power to promote genocide. When LLMs are suddenly everywhere, who's making sure that they are not causing harm?

stahorn | source