When the popular card game Cards Against Humanity conducted a survey of Americans in 2017 to gauge their feelings about the future, it turned up some surprising results. The survey, conducted with the research firm Survey Sampling International in September 2017, asked about Americans’ fears of robots and automation affecting their jobs. When asked whether they were concerned that robots would take their jobs in the next decade, 80% said they were not. Ethnic minorities were far more likely to be concerned about robots taking their jobs, while white Americans were more concerned about ethnic minorities taking their jobs. And, because the poll was a product of Cards Against Humanity, it also found that the more Transformers movies someone had seen, the more likely that person was to be concerned about technological unemployment.

For the past five years, Diplomatic Courier magazine has convened the Global Talent Summit each January to host a conversation about what the future of jobs and employment will look like, and what we need to do to prepare for it. (Disclosure: I was previously Managing Editor of Diplomatic Courier and helped organize the 2013 and 2014 summits.) Each year the Summit takes up education and the need to train the workforce in skills that will be in demand, but the question of which skills those will turn out to be has gradually led to in-depth conversations about robots, artificial intelligence, and machine learning. In 2017, the Summit focused on automation: its capabilities and limitations, how automation would interact with human creativity, and what skills would be necessary to work with automated systems. What automation is, and what we can do with it, dominated the conversation.

This year’s Summit, hosted on the campus of ETH Zürich in Switzerland, took a philosophical turn. Rather than discussing what automation can do for humans, panelists took turns trying to tease out what it would mean to be human in an age of artificial intelligence and automation.

“Change is a constant. Everything that is inconvenient will change,” said Chris Luebkeman, Global Director of Arup Foresight, in one of the Summit’s opening keynotes. But educators struggle to teach students how to learn in a world where the necessary knowledge and skills shift so quickly. Machine learning and artificial intelligence, said Luebkeman, will fundamentally change how students work and learn.

This theme was expanded by Manu Kapur, Professor of Learning Sciences and Higher Education at ETH, who called for a radical rethinking of the very structure of education systems. “If we all agree that we learn much more from failures than from being told what to do, why not design for that?”, he asked, while presenting his concept of “productive failure”. What most of our education system suffers from, according to Kapur, is “unproductive success”, in which students are given the illusion of learning, but the knowledge taught is not the knowledge needed to perform a skill in the real world. Instead, knowledge should be approached as a creative toy for students to experiment and play with, coming to understanding through failure. This is naturally how very young children learn, and by integrating it into the entire education system, students will become life-long learners with skills such as inventiveness, persistence, resilience, collaboration, creativity, and critical thinking.

Scott Hartley, venture capitalist and author of The Fuzzy and the Techie: Why the Liberal Arts Will Rule the Digital World, argued that a future of automation and artificial intelligence is overflowing with ethical dilemmas and philosophical challenges humanity has not yet had to grapple with. Over the past year especially, conversations about the ethics of algorithms and how to build technology that does not subsume humanity have come to dominate our discourse, leading to what several Silicon Valley thinkers have called a tech backlash.

This can be overcome, Hartley argued, if we shift our understanding of how the humanities and technology should interact. Rather than a gap separating the fields, many of today’s most successful technology innovators came from humanities backgrounds, which taught them to ask the right questions. Bringing this sort of context to algorithms is vital, because so-called neutral algorithms actually operate on a series of human decisions and biases. At the same time, Hartley argued, machine learning can help mitigate human biases and augment human intelligence. He gave the example of Stitch Fix, where machine learning algorithms assist human decision-making.

Finally, Hartley argued, this sort of ethical decision-making cannot happen without greater diversity in tech fields. To solve these new philosophical challenges, tech companies need people from all backgrounds and perspectives to debate and bring context to code. Ultimately, the questions people ask are more important than the answers they come up with.

This point was echoed by Andra Keay, Managing Director of Silicon Valley Robotics, who reminded the audience that when discussing robots, it is important to ask questions like, “Who’s not at the table when we have these conversations? Who’s not at the table when we are building new innovations, like robotics?” The people most likely to adopt robots first are not the people most likely to attend an event where such conversations are taking place. For example, for all the fretting about robots taking jobs in Europe and the United States, farmers in the developing world welcome robots because they make farmers’ lives easier. So it is increasingly important to “close the loop” and talk to the people who are already using robots to see what their needs are.

The future of automation “is an ever-more demanding struggle against the limitations of human intelligence”, said Keay. “On the one hand, I believe that working with robotics and AI is like a mirror into humanity; on the other hand, seeing as how I didn’t get to be an astronaut like I wanted, it’s the closest thing I’m going to have to discovering an alien civilization, because the way it doesn’t work like humans teaches us a lot about what we are and what we aren’t.”

In automation, she pointed out, service robots are given female names while robots responsible for heavy lifting are given male names, tapping into unconscious human biases that female names are “nice” and male names are “correct”. “We’ve already gendered the robots before their bodies have even been built”, Keay said. This should be an opportunity to change things up, but she believes it will take companies willing to take risks and shake up the industry, and perhaps even a political movement, to change this.

“The problem that we have right now is that our technologies have become much more like [a slot machine], where we’re getting very instant gratification and not doing long-term goals”, she said.

Jacob Friis Sherson, Director of the ScienceAtHome.org project at Aarhus University, asked the biggest question of the Summit: Will artificial intelligence ever attain consciousness? And if it does, would humans even recognize it? Artificial intelligence, he argued, is giving us a language for reflecting on what it means to be human that we didn’t necessarily have before. Much of this revolves around the question of what work even is. Is work something we engage in only to survive? Or does the future of work expand the concept into something more enjoyable and fulfilling? If robots take on manual, repetitive labor and the processing of data, can humans adapt to a future of creative work and life-long learning?
