So yeah, copilots are cool, but I want HUDs: quiet most of the time, glanceable, easy to interrupt, with receipts for every action.
Author here. Yes, I think the original GitHub Copilot autocomplete UI is (ironically) a good example of a HUD! Tab autocomplete just becomes part of your mental flow. Recent coding interfaces are all trending towards chat agents, though.
You know that feature in JetBrains (and possibly other) IDEs that highlights non-errors, like code that could be optimized for speed or readability (inverting ifs, using LINQ instead of a foreach, and so on)? As far as I can tell, these are just heuristics, and it feels like the perfect place for an 'AI HUD.'
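For what it's worth, those inspections really do look like plain syntactic pattern-matching. Here's a minimal sketch of that kind of heuristic, in Python rather than C# (the rule and all names are illustrative, not how JetBrains actually implements it):

    import ast

    # Toy input: a loop an inspection would flag as "use a comprehension".
    SOURCE = """
    result = []
    for x in items:
        result.append(x * 2)
    """

    def find_loop_to_comprehension(tree):
        # Flag for-loops whose body is a single something.append(...) call;
        # those can usually be rewritten as a list comprehension.
        for node in ast.walk(tree):
            if isinstance(node, ast.For) and len(node.body) == 1:
                stmt = node.body[0]
                if (isinstance(stmt, ast.Expr)
                        and isinstance(stmt.value, ast.Call)
                        and isinstance(stmt.value.func, ast.Attribute)
                        and stmt.value.func.attr == "append"):
                    yield node.lineno, "loop could be a list comprehension"

    for line, msg in find_loop_to_comprehension(ast.parse(SOURCE)):
        print(f"line {line}: {msg}")

The interesting version of an 'AI HUD' would sit in that same quiet, glanceable UI slot, but catch things no fixed pattern like this can.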
AI building complex visualisations for you on-the-fly seems like a great use case. For example, if you are debugging memory leaks in a specific code path, you could get AI to write a visualisation of all the memory allocations and frees under that code path to help you identify the problem.
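As a rough sketch of the raw data such a visualisation would sit on top of: Python's tracemalloc can diff allocation snapshots around a code path (suspect_code_path here is a hypothetical placeholder, and the AI-written part would be the charting layered on this):

    import tracemalloc

    def suspect_code_path():
        # Placeholder for the code path you're actually debugging.
        leaky = []
        for _ in range(10_000):
            leaky.append("x" * 100)
        return leaky

    tracemalloc.start()
    before = tracemalloc.take_snapshot()
    result = suspect_code_path()  # keep the result alive so allocations show up
    after = tracemalloc.take_snapshot()

    # Net new allocations under that code path, grouped by source line;
    # this is the data a generated chart of allocations/frees would plot.
    for stat in after.compare_to(before, "lineno")[:5]:
        print(stat)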
HUDs are primarily of use to people who are used to parsing dense visual information. Wanting a HUD from a platform that promises to do your work for you is quite pointless.
The current paradigm is driven by two factors. One is the reliability of the models, which constrains how much autonomy you can give to an agent. The second is chat as a medium, which everyone went to because ChatGPT became a thing. I see the value in HUDs, but only when you can be sure the output is correct.
Kind of a weird article, because the computer system that is 'invisible', i.e. an integrated part of the flight control system, is exactly what we have now. He's sort of arguing for ... computer software. Like, we have HUDs; that's what a HUD is: a computer program.
This is a mess. The 1992 talk wasn't about AI at all, and since then our phones have given us 'ubiquitous computing' en masse. The original talk required no 'artificial intelligence' for its relevance, which makes it strange to apply it to today's artificial intelligence.
I'm not really sure what trust means in a world where everyone relies uncritically on LLM output. Even if the information from the LLM is usually accurate, can I rely on that in some particularly important instance?
I imagine there will be the same problems as with Facebook and other large websites that used their power to promote genocide. When LLMs are suddenly everywhere, who's making sure that they are not causing harm?