on technologies and interfaces

this essay uses conversation-design as a crutch to talk about design’s identity in a world defined by engineering’s obsession with ‘low-hanging fruit’ and human-like interfaces.

impressive technologies

when google unveiled duplex in 2018, most people were terribly impressed by how natural the computer-generated voice sounded. there has been some controversy1, of course, but it is no match for the sheer power with which google (and big-tech) continue to massage such technologies into our lives.

while the work done by big-tech in replicating human-ness is impressive, impressive does not always mean appropriate, timely or socio-culturally useful. (i could be overly dramatic and allude to the sheer impressiveness of the nuclear bomb ; let me not digress, however.)

conversational technologies are still in their infancy: engines still take several seconds to understand words and speak back to us, agents struggle with context-switches and cultural references, and software is still quite ignorant of situated, real-world connections. the development of these technologies is engineer-led ; for all the hype surrounding the rise of conversation design, designers (and their ilk) remain fringe players in the development process — working mostly to make interfaces appear more natural than they actually are.

things continue to evolve, though: we are gradually beginning to engineer (and design) experiences that are less discrete than “set an alarm for 8 am tomorrow” or “will it rain today?” ; conversations with machines can now last several turns (or, if you’re interacting with woebot2, several days), thread together several touch-points, and mediate complex experiences3 ; and conversational technologies grow a lot more sophisticated, immersive and multi-modal with every passing month.

the conversational technologies that power duplex are here to stay — and they should be! — they’re immensely powerful, and promise us gifts of mind-numbing convenience. engineers and algorithms — who are put to work on google duplex, samsung neon4, hanson’s sophia, amazon alexa, apple siri, magic-leap’s mica and the like — have grown to master the art of ‘mechanising high-fidelity human-likeness’ and, with minimal design input, are able to build conversational interfaces that appear incredibly natural. that is what worries me: what is design left with?

engineered technologies and designed interfaces

i think it helps if we look at the previous paragraph and recognise that a technology is not the same as an interface. this is how i see it: google’s ranking-and-sorting algorithms are a technology ; its website and voice-assistant are two interfaces (that allow me to interact with google’s technologies) ; and google-search itself serves as a medium.

a single technology may manifest through any number of interfaces: to submit a search-query to google, i may use either a browser or my voice. on the other hand, a single interface may also allow any number of different technologies to flow through it: a voice interface lets me interact with maps, set reminders, control music, dim lights, make phone calls, and so much more.
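
if a sketch helps make this many-to-many relationship concrete, here’s a toy python snippet (every name in it is invented for illustration ; none of it refers to a real google api): one search ‘technology’ surfaced through a typed interface and a spoken one, and one spoken interface routing to more than one technology.

```python
# a toy sketch of the technology/interface split described above.
# every name here is invented for illustration; nothing points at a real api.

def search(query: str) -> str:
    """the 'technology': some ranking-and-sorting logic, treated as a black box here."""
    return f"top result for '{query}'"

def set_reminder(text: str) -> str:
    """another technology the same interface can route to."""
    return f"reminder set: {text}"

# one technology, many interfaces
def web_interface(typed_query: str) -> str:
    # a browser form wrapping the same search technology as the voice assistant below
    return search(typed_query)

def voice_interface(utterance: str) -> str:
    # one interface, many technologies: the same spoken front-end routes an
    # utterance to whichever technology matches the (crudely detected) intent
    if utterance.lower().startswith("remind me"):
        return set_reminder(utterance[len("remind me"):].strip())
    return search(utterance)

print(web_interface("will it rain today?"))              # one technology, via a typed interface
print(voice_interface("will it rain today?"))            # the same technology, via a spoken interface
print(voice_interface("remind me to water the plants"))  # the same interface, a different technology
```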

technologies and interfaces are tightly interwoven ; and neither should be allowed to dominate the other. interfaces must not be considered secondary to the technologies they help manifest ; and a designer should be allowed more control over how human-like, if at all, an interface is.

low-hanging fruit: a double-edged sword

people tend to be suspicious of unknown things, and may reflexively flinch away from even outstanding products without trying them ; that is why engineers like to clothe their technologies in human-like interfaces ; cloaking newness under a garb of familiarity is an obvious way to overcome that suspicion, and is usually quite easy to do. (since easy is often quick, this habit seduces investors too.)

the tech-world’s ‘easy’ approaches to socially dispersed products, however, are often morally questionable: after all, we live in an age where “social” technologies have promoted loneliness, depression, misinformation, ignorance and polarisation on an impressively global scale.

the trouble with human-like interfaces

humans don’t just interact: we relate. a human’s appearance and behaviour encode her situatedness, her empathy, her understanding of complex existential notions, her ability to respond humanely to situations without precedent, and her unquestionable potential to form relationships. a machine lacks all of this ; so why must a human–machine interaction ape a human–human relationship?

first impressions make up a very small fraction of our total experience with a technology, but they disproportionately influence its future development. human-likeness sets inaccurate expectations in the mind of the audience ; it can deflect or amplify a technology’s psychological, socio-cultural and ecological impact ; and the messaging can also bite its own tail by subverting the medium and stunting its growth.

this is where i think design should play a role: in demonstrating the pitfalls of this approach, and in crafting ways to make technology seem less daunting without mindlessly infusing it with human-likeness. so, instead of submissively gilding engineered products for sale, designers should proactively challenge5 the unsolicited development of features like sarcastic dialogue, feminine/submissive personas, filled pauses and breathing sounds in conversational agents.

for our good and their own, today’s machines must not be allowed to set false expectations: new mediums invite new behaviours, and humans — who’re better at adapting to changing landscapes than engineers give them credit for — can be coaxed to embrace new technologies through appropriately designed interfaces.

or else: we’re only being dogmatic.6

human-like and inhumane

humane is a buzzword in the industry today ; i’m all for it, as long as it isn’t conflated with human-like. humane is necessary ; mindlessly human-like is problematic: inconsiderate, dishonest, immoral and ultimately inhumane.


  1. the criticism sounded something like this: “if you know that it’s a machine you’re on the phone with, you may get creeped out by backchannels like ‘uh-huh’ or artificial breathing sounds ; if you don’t know, however, and find out after the call (that you’d just spoken with a machine), you may feel cheated.” — i wonder, though: if duplex helps automated spam calls become an unmanageably large nuisance, will people grow less addicted to their “always connected” lifestyles and look up at the sky more often? i think that would be nice.

  2. two years ago, the new york times wrote about how woebot helps lonely people feel less depressed.

  3. we live in an age where machines mediate human relationships: these relationships can be human–human, human–environmental, cultural–human, or some other combination. when seen this way, industrialised notions of human–machine interaction design do feel a bit narrow. 

  4. earlier in 2020, samsung boasted that neon’s core technology can “autonomously create new expressions” and “new movements” that are “completely different from original captured data”. i only ask: in an age where technologies are failing to mediate the rich repertoire of emotions we already have, why do we fetishise an algorithm’s ability to invent new ones?

  5. this ends up being less about design, and more about corporate power-play.

  6. i’m not a fan of the church of big-tech. it stinks.