The Social Implications of Artificial Intelligence

Last week, I was at an AI conference in Seoul, South Korea, and I must say it was a refreshing change from the usual conferences I attend, where a critical mass of thousands of people mill about. This was a more intimate affair with a small, closely knit group, and I very much enjoyed the diverse presentations, which ranged from fighting cybercrime in financial institutions, to blockchain technology in bitcoin, to setting legal precedents in law utilising AI, to dissecting arguments about technological unemployment, to drawing analogies between Buddhism and AI robots.

The conference was the brainchild of two sisters: Shubha Gokhale, an assistant professor at Hanguk University, and Hema Gokhale, a former Citigroup executive from New York.

South Korea has always been at the forefront of AI, and its disruptive technologies in IoT and automation have been breaking new ground. Last year, South Korea was named the most innovative country in Bloomberg's Global Innovation Index, with Sweden second and the United States third. Once dubbed the most connected country in the world, South Korea has superfast internet access virtually everywhere, largely the result of early government investment in fibre optics, a foresight that has paved the way for an online generation.

South Korea took first place at the DARPA Robotics Challenge over the summer with its version of the "running man", a rescue robot.

David Orban, head of the Network Society, founder of Dotsub and adviser to Singularity University, spoke about anthropomorphising machine morality and the necessity of decentralising government in the creation of communities, to prevent social collapse. He also made an important distinction between "education" and "learning": the former carries connotations of post-industrial schooling, a sort of social stature, whereas "learning" he defines as a continual act of discovery. An important part of Singularity University is its focus on action-oriented activities, a kind of "learning" by "doing", whereas traditional methods of education tend to focus on memorisation, repetition and theory.

There were also some interesting examples of human robotics using "lifeomes" and action map learning, the building of maps of human behaviour that a robot can use to mimic those behaviours. Professor Zhang from Seoul National University gave examples such as the "Aupair" robot, which plays the role of the mom and interacts with the child when the child is home alone, and the Pororobot, a robot tutor for children.
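To give a flavour of what "action map learning" might look like in its simplest form, here is a minimal behaviour-cloning sketch: it fits a map from situation features to the action a human demonstrator took, so that the map can reproduce the demonstrated behaviour. The synthetic data, logistic model and feature count below are all my own illustrative assumptions, not a description of Professor Zhang's actual systems.

```python
import numpy as np

# Toy behaviour cloning: learn a map from situations to human actions.
# All data here is synthetic and purely illustrative.
rng = np.random.default_rng(0)

X = rng.normal(size=(200, 3))        # 200 logged situations, 3 features each
true_w = np.array([1.5, -2.0, 0.5])  # hidden "human policy" generating the demos
y = (X @ true_w > 0).astype(float)   # action the human took (0 or 1)

# Fit a logistic-regression action map by gradient descent.
w = np.zeros(3)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))   # predicted action probability
    w -= 0.1 * X.T @ (p - y) / len(y)    # step down the log-loss gradient

# The learned map now imitates the demonstrated behaviour in a new situation.
situation = np.array([0.8, -0.3, 0.1])
prob = 1.0 / (1.0 + np.exp(-(situation @ w)))
print(f"probability the demonstrator would act here: {prob:.2f}")
```

A real system would of course learn from rich sensor streams rather than three hand-made features, but the core idea, supervised learning on logged human behaviour, is the same.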

In addition, there is the advent of AI humanoid robots built to identify and understand human emotions, such as SoftBank and Aldebaran Robotics' Pepper the Robot.

Pepper the Robot (SoftBank and Aldebaran Robotics) was designed to understand human emotion.

Professor Zhang's talk posed the question of trust: how much can one trust machines if autonomy becomes a self-generalised goal? Can AI machines override human mistakes in judgement? Of course, such questions have long been posed in art and science fiction: in Stanley Kubrick's 2001: A Space Odyssey, Ridley Scott's Blade Runner, the character of Lieutenant Data in Star Trek: The Next Generation, and more recently the 2012 arthouse film Doomsday Book, directed by Jee-woon Kim and Pil-sung Yim, and even Humans, the eight-episode Channel 4 miniseries that aired over the summer.

Humans, the TV series that aired on the UK's Channel 4 last summer, poses the question of trust: can AI humanoid robots override human errors in judgement and morality?

Another keynote speaker, David Wood, chair of the Futurist Society in London, gave some interesting insights into corporate inertia, drawing on his work with Symbian and Nokia during the initial launch of smartphones in the mid-to-late 1990s. In his talk, he described how Nokia (and likewise BlackBerry, Palm and Motorola) failed in part because of a broken ecosystem, and in part because those companies were not good at large-scale software or the continuous testing it requires. They suffered from corporate inertia: a combination of the pride of being at the top, and a stagnancy derived from preserving the status quo of technologies that were, at the time, successful.

Explanation of Demi-Moore's Law by David Wood, chair of the Futurist Society in London

In addition, Demi-Moore's Law dictates that disruptive change takes twice as long as Moore's Law would predict, and that disruptive innovations must first precipitate the dismantling of old technology. To me, this seems in direct conflict with lean methodology, in which short-term goals are given precedence over long-term goals, and projects cannot stay in R&D for longer than a few to several months at a time. Peter Thiel has spoken at length about the failings of lean methodology, arguing that true innovation may take years of R&D; one example is Google's Android, which took seven years to develop, from its initial concept in 2003 to selling 10M units a year by 2010. However, David elaborated that Demi-Moore's Law is not necessarily mutually exclusive with lean methodology, which often clears away the corporate clutter and bureaucracy that accompany project development.
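As a concrete illustration of the arithmetic, the sketch below assumes Moore's Law doubles capability every two years (my assumption for the example); Demi-Moore's Law then implies disruptive adoption effectively doubles only every four years, so the same change takes twice as long to arrive.

```python
import math

def years_to_multiply(factor, doubling_period_years):
    """Years needed for a quantity with a fixed doubling period to grow by `factor`."""
    return math.log2(factor) * doubling_period_years

# Moore's Law pace: assume a doubling every 2 years (illustrative).
# Demi-Moore's Law pace: half the speed, i.e. a doubling every 4 years.
print(f"10x under Moore's Law:      {years_to_multiply(10, 2):.1f} years")  # ~6.6
print(f"10x under Demi-Moore's Law: {years_to_multiply(10, 4):.1f} years")  # ~13.3
```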

Beginning with the Industrial Era, machines replaced human muscular effort in factories; from the 1970s through the 1990s, machines replaced human calculation effort; and now machines are replacing human creative effort, to the point where musical compositions and art can be mimicked by AI. We are entering a new era in which, perhaps, machines can replace human emotion and human relationships. All the AI we have produced thus far has been derivative of binary computing systems. Yet if we consider that we humans are ourselves "biological machines", the key to the future of AI and computing is really the question of how to replicate "life", and ultimately, I believe that key is in our DNA.