We look at the tech developments that suggest we may be simulating our feels in the future
The general consensus in tech culture today is that humans, perhaps the feeliest of creatures, have lost a fundamental sense of basic empathy. Recently, Silicon Valley jarheads shit the bed with the (presumably) well-intentioned “equip a homeless person with a GoPro” experiment, which, rather unsurprisingly, created a vigorous public backlash against tech advocates’ misguided brand of altruism. Some critics conceded that the resulting video actually offered an invaluable first-hand look at what homeless people deal with on a day-to-day basis, but many thought it was callous and distasteful. The point is, technology is changing the way we interact with each other on a deeply personal level, but at what cost?
We’re all familiar with the age-old concept of “stranger danger,” but thanks to a number of technological breakthroughs, including the FBI’s unnervingly vast facial recognition database, the very concept of a “stranger” might soon be a thing of the past. The FBI claims that by 2015 its database will include “4.3 million images taken for non-criminal purposes.” Facebook can already recognise side profiles in photos, which, honestly, after a few beers, some people can’t even do in the flesh. Then we have this tool that matches OKCupid profiles to those of registered sex offenders (which really doesn’t do much to encourage criminal rehabilitation over lifelong public shaming). But what does this mean for the subtle art of feeling? Consider NameTag, a startup poised to integrate Glass and other wearables with embedded face recognition so that “users can look across a crowded bar and identify the anonymous cutie they are scoping out.” What we lose here is an intangible sense of mystery, crucial to the socialized human experience.

EMOTIONS CAN BE SIMULATED
If computers can learn to recognise the physical manifestations of anger, aggression, and other “pronounced” behaviors in humans, then eventually we’re going to be able to synthesize them via artificial intelligence. Scientists have managed to map 21 distinct facial expressions of emotion, which comes off as frighteningly reductive, since it negates the possibility of subtlety, unique idiosyncrasies, and physical quirks. However, this is the sort of research that gives A.I. researchers a foundation and a framework for developing more accurately emotive robots. Studies even show that emotions are felt in specific parts of the body.
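To make the idea concrete, here’s a minimal, purely illustrative sketch of how a machine might put a label on a face: each emotion is reduced to a prototype vector of facial “action unit” activations, and an observed face gets the label of the nearest prototype. The feature names, numbers, and three labels below are invented for illustration; real expression-recognition systems (and the 21-expression study itself) are far more sophisticated.

```python
# Toy nearest-prototype expression classifier. The action-unit names,
# prototype values, and labels are invented for illustration only.
import math

# Hypothetical prototypes: activation levels for four facial action units
# (brow_raise, brow_lower, lip_corner_pull, jaw_drop), each in [0, 1].
PROTOTYPES = {
    "happy":     (0.1, 0.0, 0.9, 0.2),
    "angry":     (0.0, 0.9, 0.1, 0.1),
    "surprised": (0.9, 0.0, 0.2, 0.8),
}

def classify_expression(face):
    """Label an observed activation vector with its nearest prototype."""
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(PROTOTYPES, key=lambda label: distance(face, PROTOTYPES[label]))

print(classify_expression((0.2, 0.1, 0.8, 0.3)))  # -> "happy"
```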
THE (MIS)EDUCATION OF CORTANA
Cortana is poised to outclass Siri in terms of artificial intelligence and is supposed to get “significantly smarter” as people continue to use it, thanks to her ability to track conversations for longer and “talk back” in a more natural way. Perhaps Cortana’s beta success hinges on the fact that she has a widely recognized physical form (thanks to her origins in Microsoft’s Halo franchise), whereas Siri does not, which might affect how some users identify with her as an assistant. Google Now, while probably-definitely the most effective of the ‘big three’ assistants, uses a basic “dumb” interface, because giving the software a relatable personality just means people are more likely to get pissed when things go wrong. The way we interact with Cortana is a good yardstick for judging how we will move forward in “treating” emerging A.I. as a sentient tool. According to one researcher who worked on her A.I., “Making people feel they can talk back to Cortana raises the chances that Microsoft will learn what they really wanted when the app doesn’t get it right the first time.” In light of our new surveillance state, this sounds conveniently ambiguous, especially considering the extent to which personal data is being harvested for third parties.

EMOTIONS AS A GENUINE RESOURCE
Last year we saw a conceptual video for Neurocam, a BCI headset that works with the iPhone to read the user’s emotional state and record his or her environment (i.e. it only records things that stimulate the user). It has a feature called “emotion tagging,” which basically attaches the equivalent of an emotive time stamp to a picture, and even piles on Instagram-style effects based on the emotions you associate with that moment. From a gamification perspective, this might be the first instance of using emotions as a resource pool for a technology, which raises worthwhile questions about how entrepreneurs could peddle targeted MDMA-type drugs to trigger specific emotions in customers looking for the ultimate experience. Not to mention the blue-sky future for synthetic, fake emotions.
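As a rough illustration of what “emotion tagging” might amount to under the hood, here’s a hedged sketch: a frame gets recorded only when a (hypothetical) interest reading from the headset crosses a threshold, and the saved photo carries a timestamp, the reading, and a filter chosen from that reading. The field names, scale, and thresholds are ours, not Neurocam’s.

```python
# Hedged sketch of "emotion tagging": record a frame only when a
# hypothetical interest reading crosses a threshold, then stamp it.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class EmotionTag:
    timestamp: str    # when the moment was captured (ISO 8601)
    interest: float   # 0.0-1.0 reading from the headset (assumed scale)
    filter_name: str  # effect applied based on how engaged the wearer was

def tag_frame(interest: float, threshold: float = 0.6) -> Optional[EmotionTag]:
    """Return a tag for frames the wearer seems to care about, else None."""
    if interest < threshold:
        return None  # nothing memorable happening; don't record this frame
    filter_name = "vivid" if interest > 0.85 else "warm"
    return EmotionTag(datetime.now(timezone.utc).isoformat(), interest, filter_name)

print(tag_frame(0.9))  # recorded with the "vivid" filter
print(tag_frame(0.3))  # None: the wearer was bored
```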
Researchers at the University of Bristol have discovered the neural pathway responsible for fear, or “freezing.” The pathway feeds into the periaqueductal grey (PAG), an area of the brain that produces the signals that trigger the notorious fight-or-flight response when we’re backed into a corner. This could work wonders for people suffering from anxiety, various types of phobias, and other emotional disorders, but it could also enable very specific, targeted efforts to suppress these sensations entirely in the future.

ROBOT COUNSELORS
Students are being told to Google themselves a future instead of being given personal career advice, so it’s not really shocking to read that “robot counselor” is going to be a legitimate job of the future. We’re not sure whether 2030 is an overly ambitious deadline to see robot counseling in action, because right now researchers are still struggling to understand a robot’s “ethical potential” using a basic experiment involving a single elevator. The idea is that we have a delivery bot with a large, urgent package to deliver to “the boss upstairs.” The bot heads to the elevator, where someone else is already waiting. The question here is: what does the bot do? This is pretty basic stuff for a human to decide, but according to the Open Roboethics Initiative, the research aims to design behavior that helps robots take an “ethical action” in an ambiguous situation. Researchers ultimately found that “robot ethics” will develop through a “soft” evolution wherein robots can “better communicate and negotiate with the people they encounter to reach mutually agreeable outcomes.” Nonetheless, projecting a future in which robots are capable of advising students on personal problems, school/job stress, trauma and depression, all of which are steeped in unquantifiable personal factors (socioeconomics, religious/spiritual beliefs, personal aspirations, family issues, and so on), seems a tad out of reach…for now.
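To put the elevator dilemma in concrete terms, here’s a toy decision rule of our own devising (not ORI’s actual protocol or code): the robot only takes the elevator first when it has explained its urgent delivery and the person waiting has agreed, which is roughly the “communicate and negotiate” outcome the researchers describe.

```python
# Toy decision rule for the elevator scenario -- an illustration only,
# not the Open Roboethics Initiative's findings or implementation.
def elevator_decision(package_urgent: bool, person_agrees_to_wait: bool) -> str:
    if not package_urgent:
        return "yield: let the person waiting take the elevator first"
    if person_agrees_to_wait:
        return "proceed: the person agreed after hearing about the urgent delivery"
    return "yield: urgency alone doesn't override the person's turn"

print(elevator_decision(package_urgent=True, person_agrees_to_wait=False))
```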

AgencyGlass is a pair of bizarre cyborg glasses to make you look friendly, even when you’re feeling shitty. This is the brainchild of Japanese professor Hirotaka Osawa, whose device alleviates the horrific cognitive pain of “emotional labor” – that is, the glasses make us appear friendlier “to increase the emotional comfort of those around us.” In the current social climate of “snowflake culture,” coupled with the hot, relentless breath of political correctness, this sounds like a double-edged sword. As the video here shows, the virtual eyeballs on the glasses will follow people and movement for you, basically freeing us from the emotional burden of appearing interested in our coworker’s mortgage fiasco/romantic life/shameful medical condition.

Ghostman is an augmented learning system that places a real-time “ghost” image of someone’s physical hand movements over your own hands, so you can follow them step-by-step. Both people have to wear glasses equipped with cameras, which allows precise remote learning from a first-person (or technically third-person?) perspective. Researchers at the University of Tasmania in Australia attempted to teach people how to use chopsticks with Ghostman. While the tech is still very basic and offered no real advantage over learning the old-fashioned way, it is a step toward a more immersive learning experience, and perhaps a more empathetic way in which medical and rehabilitative services can interact with patients who have physical disabilities.
IQ AT THE COST OF EQ
In our quest to boost the intellectual capabilities of both ourselves and our machines, emotion is a trait that some might consider expendable. While traditional machine learning still relies on a quantifiable framework supervised by a human operator, deep learning allows A.I. to build up its own layered abstractions and therefore “think” for itself. That’s clearly wonderful news for the niche technologies that will benefit from better automation, and perhaps a sense of pseudo-sentience, but it raises a ton of red flags for a social paradigm in which information is prized over a solid moral framework. Thus, if we equip future A.I. with the cognitive tools to excel intellectually, how will this affect their capability to feel emotions? Can they even develop emotionally? More importantly, what kind of effect will a fully automated society have on us, its creators, users, and enablers? Will automation take the (pardon our French) joie de vivre out of daily life, removing the small pleasures and creature comforts we’ve taken for granted? It’s hard to tell, but one writer believes that as long as an “intelligent computer” is allowed to communicate freely, “there is no real major obstacle to feeling true emotions of its own…”

THE RISE OF THE GLOBAL NOW
Futurist Heather Schlegel put forth the idea that technology is actually deepening the social and emotional ties between us, thanks to a sense of “geo-temporal awareness” that she calls “The Global Now.” Technology, she argues, has presented us with tools with which to curate and deepen specific parts of our lives with certain friends, thereby adding a “custom enhancement” feature to our relationships. This is a view we’d like to get behind, because it retains a healthy distance from the danger of fetishizing technology, and still places full responsibility on flesh-and-blood humans for how they choose to use said technology. The bottom line is, data-mining and spying and insidious surveillance aside, technology remains, at its very core, a tool – a tool that we can actively choose to learn, understand, and apply to our social interactions. It doesn’t have to diminish what it means to be human, unless we allow it to. Right?
