How do you design something for someone you know nothing about? I mean, really nothing about?
I first asked this question in 2009, after attending a Science Cafe talk by Seth Shostak of SETI. SETI is the Search for Extraterrestrial Intelligence – they’re looking for unusual signals that might come from an intelligent civilization somewhere out in space. Professionally, I was starting to dig into a field called user experience design, or UX for short. I knew the two interests must be related, but it wasn’t until that night that I connected them with an interesting question.
I’d been working in advertising for a few years, trying to shape inane marketing copy into something that made sense to normal people, and I was starting to get more website work than anything else, which meant that my information suddenly had to be usable, not just catchy. UX design overlaps significantly with UI (user interface) design, and I invited user interface designer Gary Boodhoo (whom I later married) to join me for the SETI talk. Afterward, we stood outside, buzzed and looking for a cab, and talked excitedly about space stuff and digital things. I asked, “How could we design something for aliens, when we know absolutely nothing about them?”
There was no easy answer. The question stayed in the back of my brain, twirling away while I worked on website content for credit card companies, car manufacturers and other Earth-bound clients. On coffee breaks I read up on intelligence, cognition and behavior. I slowly narrowed down an interesting set of challenges and starting points, and gathered them into a talk titled “UX for Aliens” at the 2012 Webvisions conference in Portland, Oregon. The audience seemed to enjoy the novelty but was mostly hoping for practical tips to use at work. I was more interested in actual aliens. Finally, as I watched the first episode of Cosmos: A Spacetime Odyssey and was reminded of Carl Sagan and Frank Drake’s famous Arecibo message, the time felt right to gather my ideas together and subject them to the scrutiny of the Internet.
We’ll start with the obvious limitation: any alien we’re interested in is, in design (and Tron) terminology, a “user”. It’s the only thing we can know about them, because if our design happens to reach aliens who aren’t users, they won’t use anything we send them anyway.
Luckily, “user” is a surprisingly specific filter. On Earth, there are only a handful of organisms that use tools of any kind. We can get some useful information from seeing what they have in common.
Humans top the list of course. Usually, designers consider humans with all the standard senses and abilities – sight, hearing, touch, a fully developed adult brain, the works. Some designers, though, have designed for children, the elderly, or people from other cultures, where slightly different considerations shape the layout of a website. Hopefully most designers consider the fact that roughly 8% of human males are colorblind. A designer reading this may even have worked on an audio interface at some point, like Siri, a bank customer service hotline, or an audio game for the blind.
Far fewer people have designed something for apes. Apes are basically human as far as users go. They have hands, and they enjoy using iPads when they get the chance.
Elephants evolved in the same environment as us, at the same time as us, in a slightly different ecological niche. They have enormous brains, incredible dexterity with their trunks, and complex communication and culture that we still know very little about, despite a shared history stretching back millions of years. They’ve been documented using paintbrushes, pulleys and stepping stools, and they are among the very few non-human species observed covering their dead.
Some of the most capable tool users turn out to be birds: famously Alex the African grey parrot (if you haven’t read about Irene Pepperberg’s work with parrots, you should), but also magpies and other corvids. A New Caledonian crow named Betty spontaneously solved a puzzle by bending a bit of wire into a perfectly sized hook, surprising her researchers and everyone else.
Bottlenose dolphins don’t have hands, claws or even trunks, but they still manage to use sea sponges to rustle up morsels from the sea floor, holding them in their mouths like tiny, squishy shovels. When it comes to communication, dolphins can synchronize their actions in ways we humans can’t match, no matter how many 5pm meetings we have or emails we send. It’s been notoriously hard, however, to pinpoint which click or whistle is the dolphin word for “tuna”. One interesting theory says that’s because the clicks aren’t words at all: they’re echolocation-based 3D projections. Dolphin researcher Jack Kassewitz calls it “sono-pictorial exo-holographic language”, or SPEL for short.
We’ll get back to SPEL in a bit. First, let’s find a really different user. Slimy, tentacled, big bulging eyes: that’s right, the octopus is a user. Among an assortment of mind-blowing talents, it can open jars. I can barely open a jar sometimes, and I have thumbs. We can’t pin this ability on blind instinct, because there are no naturally occurring jars at the bottom of the sea.
Other invertebrates are users, too. Ants use bits of leaf to build bridges over dangerous puddles of sticky goo, so their fellow ants don’t get stuck and die. It may not be the Golden Gate, but it gets the job done. The individual ants don’t have intention the way we do, but an ant colony certainly has a kind of intelligence and qualifies, like humans, as what biologist Edward O. Wilson calls a eusocial superorganism. That means they’ve formed a division of labor, not entirely unlike the collaboration of cells in the body.
That sums up our known users. None of them are ET, but we can form some ideas about extraterrestrial users by looking at what they have in common. Humans, chimpanzees, elephants, dolphins, corvids, parrots and ants all have one thing clearly in common: we’re all highly social. There’s a natural connection to be made between being social and being a user. If I’m out in the wilderness trying to avoid being naturally selected against, I have a limited ability to see danger approaching. If another animal near me freaks out all of a sudden and runs off, chances are I should run off, too, even though I haven’t seen the danger for myself. Because of this, animals that travel together in groups survive well and pass on a preference for company. In one sense of the word, our ancestors became users when they first started using each other as an extension of their own senses.
We’ve been social animals far longer than we’ve been tool users (in fact, we’ve been social animals far longer than we’ve been human), so it’s fair to wonder if tool use evolved as an extension of the same ability that made us social. One theory says that’s exactly what happened. The mechanism seems to be a collection of specialized neurons in our brains called mirror neurons. They’re the bits that make you sad when your friend is sad, make you cringe when a TV character is tortured, and allow you to feel a wrench as an extension of your hand and a car as an extension of your body. Without that extended spatial awareness we’d never be able to fix a leaky shower head, parallel park, or enjoy a video game. Mirror neurons seem to allow us to understand other things as extensions of ourselves, and it’s reasonable to assume that our other social tool users have either mirror neurons or something like them that serves the same purpose.
(Even the octopus, while it isn’t social, has an incredible ability to imitate its surroundings for camouflage. I’m guessing it has an excellent understanding of its environment as an extension of itself, and that leads to its tool use abilities. It probably doesn’t have mirror neurons anything like ours, but it has something that’s allowing it to think this way–to understand an object as an extension of its own body.)
I propose that this is the one assumption we can make about all users, on Earth and anywhere else: In order to use something, one must have the ability to understand it as an extension of oneself.
This still leaves us with a lot of unknowns to deal with. Remember, we can’t assume anything else about aliens. We don’t know if they have hands, tentacles, eyes or ears. We don’t know if they can see the same range of light as us, or if they can biologically sense sound the way we can.
We can look at this problem through three basic topics that come up in every design conversation: content, media and device.
When designers consider content for humans, we ask ourselves, “What colors do they like? What language do they speak? What’s their reading level? What do they already know about the subject matter?” But when we consider content for aliens, not only do we not know what language they speak, we don’t know if they speak. To wrap our heads around this, let’s go back to dolphins. Assuming there’s something to Kassewitz’s “sono-pictorial exo-holographic language”, here’s how it might work:
Lucy the dolphin is swimming around in the ocean and she meets Jamie the dolphin. Suddenly a shark appears! What do they do? Lucy hatches a plan, and, using echolocation, projects the whole scenario to Jamie, in 3D. In the echolocation projection, Jamie sees, just as if she were watching it happen, herself and Lucy flanking the shark and ramming it in the gills. Because echolocation is three dimensional and real-time, she now knows exactly how far away Lucy is from the shark, what the shark looks like from Lucy’s side, and how precisely to time the attack. The shark gets closer, and they execute a perfectly coordinated offensive maneuver. As the shark sinks into watery oblivion, they high-flipper in the universal celebration of cetacean victory.
These two dolphins may have never met before, but because their mode of communication depends only on a shared biology and a shared environment, they’re able to do this – and in fact bottlenose dolphins who’ve never met before have been observed synchronizing their movements precisely. They have no need for metaphor, for symbolism, or for language as we understand it. Calling dolphins telepathic sounds cliché, but I can’t think of a better word to sum up the potential versatility of this mode of communication, whether dolphins actually use it or not.
The closest thing humans have to telepathic content is virtual reality. Like Lucy and Jamie, if you and a friend each have an Oculus Rift and some cameras, you can more or less include real-time action and scenery in your communication. VR, in a couple of years, might be just versatile enough to use for communicating with aliens–as long as the aliens are able to perceive the media the same way we do.
When an alien tries to squeeze an Oculus Rift over its squirming head parts, will it be able to perceive what’s inside? Is the resolution high enough, the frame rate fast enough, the brightness or volume right? And what about the light we’re using – is it the appropriate spectrum? Do they even perceive electromagnetism?
Humans have two completely different sensory organs that both pick up photons. Our eyes pick them up at a high enough resolution to read road signs on the freeway (hopefully), but our skin also picks them up, at a resolution comparable to what a paramecium sees. The problem is, we don’t know whether aliens perceive light at a very low resolution or a very high one, and picking only one could make our message indecipherable to an alien who happens to have the other. So we do both: project in the highest resolution possible, but design for the lowest. Imagine reading this article on a retina display, only one letter at a time, with each letter filling the entire screen. It would be annoying to read, but at least you’d be able to tell it was something from across the room.
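Here’s a minimal sketch of that trick in Python with NumPy (the 5×5 glyph and the scale factors are made-up stand-ins for a real message): design at a deliberately coarse resolution, project it enormous, and the message survives even a brutally low-resolution perceiver, because its information lives entirely at the coarse scale.

```python
import numpy as np

# A made-up 5x5 "design" -- any coarse pattern would do.
design = np.array([
    [0, 1, 1, 1, 0],
    [1, 0, 0, 0, 1],
    [1, 0, 1, 0, 1],
    [1, 0, 0, 0, 1],
    [0, 1, 1, 1, 0],
], dtype=float)

def project(design: np.ndarray, scale: int) -> np.ndarray:
    """Upsample the coarse design for high-resolution projection."""
    return np.kron(design, np.ones((scale, scale)))

def perceive(signal: np.ndarray, block: int) -> np.ndarray:
    """Simulate a low-resolution perceiver by averaging over blocks."""
    h, w = signal.shape
    return signal.reshape(h // block, block, w // block, block).mean(axis=(1, 3))

transmitted = project(design, scale=100)     # a 500x500 "retina" projection
received = perceive(transmitted, block=100)  # what a 5x5 perceiver sees
assert np.allclose(received, design)         # the message survives intact
```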
Of course resolution won’t matter much if ET sees at a different frame rate. Humans like at least 30 frames per second – fewer than that makes us uncomfortable, and much fewer makes us assume we’re looking at a still photograph. Cats prefer a faster frame rate – probably the speed of scurrying rodents. Plants don’t look like they’re doing much to us, but to an intelligent plant, we might look like meaningless squiggles blurring by in fast-forward, while other plants grow new leaves and branches constantly, at a perfectly normal rate. To accommodate both cat-aliens and tree-aliens, we might send several versions of what we design, functioning in super slow motion, super fast-forward, and several speeds in between. With messages at several speeds looping simultaneously, we’ll increase our odds of having at least one of them register as “something interesting happening”.
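As a rough sketch of what that might look like (again assuming NumPy; the speed factors are arbitrary illustrations, not tuned for any particular alien), we can resample one message into several playback rates and loop them all at once:

```python
import numpy as np

def retime(signal: np.ndarray, speed: float) -> np.ndarray:
    """Resample a 1-D signal so it plays back `speed` times faster or slower."""
    n_out = max(2, int(len(signal) / speed))
    old_t = np.linspace(0.0, 1.0, len(signal))
    new_t = np.linspace(0.0, 1.0, n_out)
    return np.interp(new_t, old_t, signal)

message = np.sin(np.linspace(0, 8 * np.pi, 1_000))  # stand-in for our content
speeds = [0.01, 0.1, 1.0, 10.0, 100.0]              # tree-aliens to cat-aliens
versions = {s: retime(message, s) for s in speeds}  # loop all simultaneously
```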
There’s one more media issue to consider: strength. If the light is too dim, they won’t see it, and if it’s too bright, it could blind or even kill them. That’s a big problem. I’m reminded of my own distaste for waking up early: Switching on a light after being cozy in the dark all night feels jarring, even depressing in the winter. Last year I finally picked up a sunrise alarm clock, which starts very dim and gradually increases in brightness. If we use my alarm clock’s trick, and give our alien users an easy way to stop it before it gets too bright, we might prevent interstellar war.
On that thought, we’ll also want to make sure the signal can never get so strong that it would kill us if the aliens send it back.
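Both ideas, the slow ramp and the safety cap, fit in a few lines. Here’s a toy sketch in plain Python; SAFE_CEILING and RAMP_STEPS are invented constants, and the `abort` flag stands in for whatever opt-out mechanism we give the alien user:

```python
SAFE_CEILING = 0.5      # stay below a chosen damage threshold, ours included
RAMP_STEPS = 1_000      # how gradually the signal brightens

def ramped_intensity(step: int, abort: bool = False) -> float:
    """Signal strength at a given step; zero if the receiver opted out."""
    if abort:                                  # easy off-switch for the user
        return 0.0
    fraction = min(step / RAMP_STEPS, 1.0)     # climb slowly, then hold
    return fraction * SAFE_CEILING             # never exceeds the ceiling

# Even if the aliens mirror the signal back, it stays below the ceiling.
assert max(ramped_intensity(s) for s in range(10 * RAMP_STEPS)) <= SAFE_CEILING
```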
This, of course, assumes the aliens can perceive light at all. They might perceive the world mainly in sound, like a dolphin or a bat; in chemicals, like ants; or in magnetism or some other channel entirely. They would probably be able to sense several different forces at varying resolution, like we can (we have eyes, ears and noses), but we have no way of knowing which senses they have at a high enough resolution to perceive what we design. We can accommodate this unknown by designing for as many possible modes of perception as we can, to maximize the chance that one of them is a match.
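A sketch of what that duplication might look like in code; the channel names and toy encoders here are hypothetical placeholders for real modulators (light, sound, radio, chemicals):

```python
from typing import Dict, List

def as_intensities(payload: bytes) -> List[float]:
    """Render bytes as light intensities between 0 and 1."""
    return [b / 255.0 for b in payload]

def as_pulse_gaps(payload: bytes) -> List[int]:
    """Render bytes as gaps between pulses, for a timing-based sense."""
    return [b + 1 for b in payload]

def broadcast(payload: bytes) -> Dict[str, list]:
    """Duplicate one message across every channel we can produce."""
    return {
        "light": as_intensities(payload),
        "timing": as_pulse_gaps(payload),
        # ...sound, radio, chemicals: as many media as we can manage
    }

channels = broadcast(b"hello")   # the same message, many parallel carriers
```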
Which brings us to our next consideration: device.
There are two basic kinds of devices that users are equipped with, and I don’t just mean iPhones and Androids. There are the ones we acquire, like an Oculus Rift, a television, or a radio, and the ones we’re born with: our sensory organs. The media we can perceive with our eyes, ears, nose and skin interface directly with our brains. They’re our native media. If someone says “hello,” our brains process it automatically, and we hear exactly what was sent. We can say “hello” back without even a pause.
With media we’re not native to (I’ll use radio as an example), we need an external device to pick up the signal and translate it into a medium we are native to, like sound. Then, if we want to communicate that same message to someone else via radio, the sound has to be encoded back into radio, sent, and translated back into sound on the receiving end. The raw radio waves pass through us constantly, and we’re utterly blind to them.
Let’s imagine we’re radio natives. We have sensory organs, in the shape of giant discs perhaps, that pick up radio waves and send the signal straight to our brains. What would the signal we sent earlier look like to us, the one that was translated from sound and meant to be translated back into sound by an external device? Not like much at all.
What if a radio native is sending a radio signal to us right now? SETI is actively looking for radio signals, but if that radio signal were sent by a radio native, and it represented something we might even recognize, like a map of their solar system or a self-portrait of their segmented, slimy bodies, it wouldn’t have the kind of pattern we’re looking for. It wouldn’t be a set of prime numbers or Morse code–it would look like noise to us, the way a photograph of a star would be indistinguishable against the night sky if you were only looking for a neon sign. We could be getting millions of these radio signals right now and we would have no idea.
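For contrast, here’s the kind of pattern SETI-style messages do rely on, the trick behind the Arecibo message mentioned earlier: that message was 1,679 bits long, and 1679 = 23 × 73, a product of two primes, so there’s only one sensible way to fold the stream back into a rectangle. A sketch, assuming NumPy, with a random bitmap standing in for the actual picture:

```python
import numpy as np

# The real Arecibo dimensions: 1679 bits = 23 x 73, both prime.
WIDTH, HEIGHT = 23, 73

picture = np.random.randint(0, 2, size=(HEIGHT, WIDTH))  # stand-in image
bitstream = picture.flatten()                            # what gets transmitted

# On the receiving end, 1679 factors into primes exactly one way, so there
# is only one sensible rectangle to fold the stream back into.
assert bitstream.size == 1679
recovered = bitstream.reshape(HEIGHT, WIDTH)
assert np.array_equal(recovered, picture)
```

A radio native wouldn’t bother with any of this scaffolding, which is exactly why their pictures would sail right past our pattern detectors.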
We can deal with this by treating all media as native media. That means every message, every button, every scroll bar or knob, sound effect (or radio effect), has to be understandable to someone who natively perceives the media we design in. And since we can’t know which media are their native ones, we make duplicates, again, in as many media as possible.
To sum up:
How do you design for a user you know nothing about?
- Assume that all users have the ability to understand something else as an extension of themselves.
- Aim for telepathy.
- Project at high resolution, design for low resolution.
- Start dim and brighten slowly.
- Don’t send anything that, if returned, could kill you.
- Use many parallel channels.
- Design for natively-perceived media.
It looks like a whole lot of effort, and it will be. This is why we do user research when we have access to our users – it saves a lot of work. If you happen, however, to be designing a probe to Kepler-22b, or an iPad game for an octopus, or a time capsule for someone who comes across our planet long after our languages have been forgotten, there is one last design principle I’d like to leave you with: Don’t let the what-ifs stop you.
References & Further Reading
The Pioneer Plaque vector image
Tool Use Is Just a Trick of the Mind
Orangutans at Miami zoo use iPads to communicate
Insightful Problem Solving in an Asian Elephant
Crow Makes Tools
Crow Makes Wire Hook to Get Food
Why Do Dolphins Carry Sponges?
We are not alone – the discovery of dolphin language
Tool Use Found in Octopuses
Tool using in ants
Tool use by the forest ant Aphaenogaster rudis: Ecology and task allocation
Shining in the Darkness
“Jason Johnson looks into the world of audio games—videogames designed for the visually impaired—and finds a medium racing to catch up with modern times. If the zombie first-person shooter Swamp is any indication, games for the blind may herald an unforeseen future for videogames and technology.”
Redesigning videoconferencing to increase conveyance of nonverbal communication
“if an alien did have organs for sensing radio waves, first of all, they’d probably be really big. Radio is a form of light, not dissimilar to the visible light that our eyes detect, but at much longer wavelengths. Radio is not the same as sound, it IS light, so ‘watching radio’ isn’t that odd of a concept.” – Nicole Gugliucci (@NoisyAstronomer)
How Radio Waves are Produced
Essential Radio Astronomy
Baboons can learn words?
The Dolphin in the Mirror
Crow uses sequence of three tools
Virtual Reality for Animals reveals secrets of the brain
Can UX get my cat to water a houseplant?
The Alex Foundation
“The goal of The Alex Foundation is to support research establishing the cognitive and communicative abilities of parrots as intelligent beings.”
“We believe that the more people understand the cognitive and behavioral complexity of animals—their intelligence, as well as the depth and nature of our evolutionary debt to them—the more they will consider the humane treatment and conservation of animals to be a priority.”
The Search for Extraterrestrial Intelligence
“This necessity of communicating commonly observed phenomena among individuals who shared no common language or cultural upbringing encouraged those communities to become self-aware to survive in a new environment.”
Understanding Media: The Extensions of Man