We are in the midst of an industrial-technological revolution, dubbed Industry 4.0, that is transforming the way the world lives, does business, and entertains itself. As with the earlier industrial revolutions, the present churn brings in its wake both a sense of immense opportunity and a sense of insecurity.
However, there is one major difference in the kind of insecurity many people currently feel. It stems from the fear that technologies like artificial intelligence, the Internet of Things, and robotics may take things out of human hands and hand them over to intelligent, even sentient, machines. This may not happen immediately, but ultimately, the inexorable advance of artificial intelligence could make machines more intelligent than humans. Some already fear that such machines might wrest control from humankind and declare us redundant.
How real and credible is the threat? Highly revered figures like the physicist Stephen Hawking and corporate head honchos like Elon Musk and Jack Ma believe it is a distinct possibility. Hawking actually feared that “artificial intelligence could end mankind”, as reported by the BBC. Musk is of the view that artificial intelligence could create an “immortal dictator from which we can never escape”. Ma too thinks that artificial intelligence is “a big threat to humans”, as it will likely take away a lot of jobs.
Not all corporate leaders share this pessimism about the role of artificial intelligence in human lives. Mark Zuckerberg, the founder of Facebook, for one, is dismissive of what he calls “doomsday” scenarios. There is much merit in what Zuckerberg says: similar doubts and fears about the future well-being of human beings were expressed when the Industrial Revolution first began in the middle of the eighteenth century. No such catastrophe struck the human race; on the contrary, the period brought an immense improvement in standards of living and, indeed, longevity.
Similarly, technologies like artificial intelligence, robotics, and the Internet of Things are transforming business processes and the way we live our lives. These are creating new paradigms in not just manufacturing and other business processes, but also in vital sectors like healthcare, logistics, housing, banking, entertainment, transportation, and so on.
These new technologies could, for example, be the way for mankind to discover more about space and, eventually, to settle on planets other than Earth. Observers forecast billions of dollars in profit for companies using AI, and the prospect of much longer lives thanks to better diagnosis and treatment developed by algorithms trained for that purpose.
Should one fear such technologies or embrace them to secure our own future? We are living through an undeniable revolution, and the changes in our daily lives will come quickly. To be sure, there should be human oversight, as there has been with earlier technologies, and the transition will not come without issues.
There might be incidents like the case of a German worker at a Volkswagen plant who was killed in an accident involving a robot. But accidents have been happening since the time of cavemen. Does that mean we stop making progress because something untoward might happen? Adopting artificial intelligence is already delivering immense benefits, providing solutions to pressing problems ranging from healthcare and logistics to agricultural productivity and banking.
Of course, people are right in believing that artificial intelligence is different from technologies of the past in that it enjoys a far greater autonomy and independence from human oversight. There definitely are ethical questions to be answered, for example, about cybersecurity, privacy, or using artificial intelligence in defence.
Defence establishments in the leading countries of the world are quite enamoured with autonomous weapons, which offer speed and stealth in a manner that conventional weaponry does not. They can also allow militaries to considerably scale up their attack capabilities and gain a military advantage over their adversaries. At the heart of this debate lies the question of whether computers or machines should be allowed to take life-and-death decisions on their own.
The American defence and intelligence community, including the Pentagon, is planning to adopt artificial intelligence technology in a big way, as it fears that China may be stealing a march on it. It wants the big technology companies based in Silicon Valley to help it do so. Google, in fact, has been working with the Pentagon on a defence project called Project Maven.
There has been considerable unrest within Google over participation in this controversial project, with some employees quitting in protest and more than 4,000 signing a petition against it. Some of the most renowned AI researchers and experts in the field have also signed open letters against it, seeking to draw public attention to the issue. Apparently, it has now been decided that Google will not renew the contract when it comes up for renewal next year.
But does this signal the end of the use of artificial intelligence in defence? Absolutely not. There will be debates and course corrections, but the march of progress seems inevitable, including in defence. Ethical questions are more than ever at the centre of debates in national committees and on the table of the European Parliament and the UN.
Research in AI is advancing faster than many expected. Its applications already affect our daily lives through social networks, communication, marketing, and entertainment, but artificial intelligence is also being used to find solutions to the main concerns of our time, such as climate change and poverty. Humans are used to living with technology, sometimes without knowing it. From turning wild grasses and plants into regular crops through selective breeding and grafting, to creating customised breeds of dogs and horses to suit their purposes, humans have always pushed at the edge of what nature provides.
Researchers, data scientists, and companies like the GAFAM (Google, Apple, Facebook, Amazon, and Microsoft) are writing the technological equivalent of a genetic code for machines and the systems that run on them. Technology will therefore become what they create and what future regulation authorises. As with all creation, its use depends on the people wielding it: for good, for private profit, or for both. The fears regarding its social, economic, and ethical impacts are real, but they are rooted in a lack of answers about, and understanding of, the technologies themselves.
Fearing artificial intelligence won’t change a thing; this revolution is already happening. The focus should instead be on the impacts: communicating what the technology means, what could change in each of our personal and professional lives, which benefits we want, and which uses are off-limits. This debate is happening, and joining it may be the key to understanding our future.