I've had a few thoughts on computers stewing in my noggin over the past few weeks.
I've come to the conclusion that, as a general form factor, I still prefer laptop computers over smartphones. I know! This surprised me too, given I once thought the iPad would obviate my need for a laptop. But I realized this: there are tasks that are just a giant pain in the ass on my phone. Consuming, reviewing, and analyzing a lot of data. Handling multiple tasks. Typing. Apple, especially, is trying to push iOS from a specialized computing platform to a generalized one (see the iPad Pro). The Mac, and laptops generally speaking, are just better at some things. I'm deeply concerned about the no-holds-barred push to mobile, where there is less opportunity for openness (hi! what browser runs on your iPhone? who runs the App Store?) and creative ideas.
That's not to say there won't be any. But, shit, animoji? And scanning your face? This is what Apple is applying technology to when we still have gaping holes elsewhere in the broader UI? Yeah. Well.
Keyboards v. WIMP
The great Sarah Emerson linked up this rather good rant from @gravislizard about GUIs versus keyboard-driven interfaces. My first instinct was negative because of the lead tweet – I was using computers in 1983, too, and no one can tell me that loading programs from a tape drive on a VIC-20 was faster than opening a damn tab in a browser. I stand by this.
However, my initial "SOMEONE IS WRONG ON THE INTERNET" stance faded as I read through it. It's not something I perfectly agree with, but I largely do. There are salient points and contextual items I want to call out.
GUIs were positioned as a way forward specifically because keyboard interfaces were deemed too complex for the broader public. And they were! Commands had to be memorized or looked up in a reference manual. Knowing how to program – even a bit – was a requirement, even if all someone wanted to do was load up a CB simulator. This was and is hard for a lot of people, and popularizing an abstraction layer based on visuals instead of words was one broad design solution created to address that. Commands were made obvious by putting them in menus and on screen. Remember: we had keyboard overlays to help us.
That all said, the keyboard is a highly adaptable and flexible interface. No interface is perfect but, for many tasks, the keyboard can absolutely be faster (especially in point-of-sale systems, as called out in the thread): repetitive data entry, rote tasks, and so on.
As GUIs matured – and I'm thinking about consumer window/icon/mouse/pointer (WIMP) interfaces specifically – a large amount of design debt piled up, both conceptual and functional, and was never paid down. In lieu of fixing it, the large companies (Apple, Microsoft, Google) moved on to touch-based interfaces. This lack of attention is why windows can still steal focus whilst you're typing on a modern desktop OS in 2017. It could be fixed; these companies just don't care.
People should always strive to make computer interfaces easier for other people. But education fell out of this picture a long time ago, and that put all of the burden on the interface to explain itself. Instead of interfaces becoming simpler and clearer, they became more complex – this is general-purpose computing, so it stands to reason. Our technology nowadays implies it's simple because it lacks a manual; this is a lie. Manuals are not bad. Interfaces, sometimes, do need to be explained. And "no interface is the best interface" is bullshit.
I absolutely agree that interfaces, generally speaking, aren't working as hard as they could for users. Google Maps is a great example called out in the thread. Mapping software on phones and computers, generally, stagnated. I suspect this is because mapping in those contexts is seen as a byproduct of navigation, which is something that can be automated (and is, and will be). Ostensibly, these products support other use cases for maps (exploration, education), but it's obvious that's not what they're designed for.
General-purpose computing has missed, and is still missing, a lot of opportunities. It has shifted to watching television, reading the news, communicating with people, and photo/video. There's a lot of untapped potential that is constrained by interfaces and the state of the industry.
I'm not completely pessimistic here. I think there are a lot of places computing can go, but the current setup – with Apple and Google effectively running the show – has severely constrained new ideas and cast aside the spirit of tinkering and exploring that existed in the industry 30–40 years ago. It feels more and more like there's just one path forward for computing products from a consumer perspective – which is false and terribly unimaginative.
Things that give me hope in interface design and general-purpose computing? Pfft. Us finally, collectively, talking about ethics and realizing that a lot of design decisions made over the past few decades were really bad ones. That helps me imagine a more just, more equitable world where technology is used for good and serves genuine needs.
I'm also holding out a tiny sliver of hope for the World Wide Web because the non-commercial web still exists at all, and that's a good thing. Despite the increasing complexity of front-end code (trying to put a size 12 foot into a size 5 flat), it's still relatively easy to put things on the web. That remains powerful and democratic. That remains exciting. And that's a case where, when used well, technology can bring us together.
Editor's note: This is a new format I'm trying for posts – much more drafty, less polished. Although it may not appear this way, it has been edited since first published.