Beware … the future is artificial (if we get it right)
Without wishing to “preach to the choir”, we have all realized just how dependent we have become on information technology, automation, and artificial intelligence, especially over the last 18 months or so.
The pandemic has demonstrated just how reliant we are on IT and automation for so many aspects of our lives: everything from ordering food and other essentials online, to communicating with our families, to online education for so many schoolchildren and university students around the world. Automation has also been essential in supporting the development and production of vaccines, in identifying people at risk of infection or likely to spread it, and of course in treating and supporting our families and friends who have become ill.
The United Nations’ Sustainable Development Goals are 169 targets grouped into 17 goals for improving all of our lives by 2030. Goal #3, “Good Health and Well-Being” (defined long before the existence of the coronavirus), emphasizes the use of emerging technologies (including the smartphones, wrist-based health monitors, and wearables that we have become accustomed to in our everyday lives) for health monitoring and for population and disease monitoring. The UN advocates the use of such technologies for preparing for disease outbreaks, identifying patient symptoms, following treatment protocols, and performing remote diagnostics. The data that such devices provide can be analyzed to help identify trends and make predictions about disease outbreaks, health service usage, and patient practices and locations.
We have seen the advent of tracking “apps” that enable us to identify those who have been exposed to the virus, have been in close contact with others, and/or are likely to be spreading the disease. But for these apps to be successful, they have to be used. BBC News (7 July 2020) highlighted how such apps were initially a failure in the UK and France, while highly successful in countries such as Germany and Ireland. The France 24 news service (23 June 2020) reported that in France, a country of approximately 65M people, only 1.5M people downloaded the app, and most deleted it within 48 hours. In Ireland, the Irish Times (19 July 2020) reported that 23% of the population had downloaded the Irish app within the first 24 hours, and then continued to use it. The Irish app was so successful that it has been made available as open source and adapted for other countries.
Automation, big data analysis, and AI-based reasoning will only be successful if we are willing to engage with and use this technology. But clearly there are issues of trust, particularly when AI is involved.
Prior to the pandemic, we were already highly dependent on automation. Just about every form of transportation uses IT and automation, even the rented electric scooters and bicycles found in many cities around the world. The idea of self-driving cars has fascinated us for some time, and we are getting closer. An interesting recent experiment in England showed that having traffic lights send signals to cars to slow down, so that they avoid encountering a red light, resulted in a 67% reduction in emissions. This is, of course, in addition to the reduced driver frustration.
We’ve seen automation and robotics being used in just about everything, from drone-based deliveries for our online shopping, to patient monitoring, power generation, financial services, and manufacturing. Industry 4.0 is already upon us, and we are talking about Industry 5.0 and the new levels of automation and sophistication it will involve, with fully autonomous robots and more. Space exploration has also advanced dramatically with robotics and automation. Several countries have missions either planned or underway to the Moon and to Mars, and we’ve seen the amazing results sent back by the most recent Mars rover and its associated helicopter. Eventually, humans will go back to the Moon and ultimately on to Mars. NASA’s Artemis mission, which will provide the heavy-lift capability to return to the Moon, and which I had the privilege to work on, will likely lead those efforts and will be the largest software project in history. Future exploration missions are likely to be based on swarms of smaller spacecraft working completely autonomously in space and on the surfaces of various planets; being so far from Earth that they cannot be controlled on an ongoing basis, they must exploit techniques from autonomic computing, AI, and other areas.
But this reliance on automation, big data, robotics, and AI brings with it some concerns. For one, we are often told that we are using “AI” or an AI-based system. However, surveys have shown that as many as one-third of the AI startups examined in detail used no form of AI whatsoever, despite their claims. Many decisions are being delegated to AI, and AI is being blamed when they go wrong. During the pandemic, the determination of school exam results in several countries has been delegated to “AI algorithms”, and the same is true of university entry decisions. It is now becoming clear that human bias is being coded into these algorithms, a similar experience to that found when using such techniques to select candidates for job openings. The fact is that these algorithms were originally written by humans. Whether or not they use learning techniques to improve their decisions is irrelevant: the human bias has been built in.
While it is very disconcerting that the wrong results have been estimated, the wrong students admitted, or the wrong job candidates selected, this raises a serious ethical question. As with any advance in science and technology, while advances in robotics and automation have great potential to improve our lives and our world, they also offer opportunities for damage and ill will. They leave us open to the bias of their original creators and to the evil intentions of some.
We have already seen automation used for cybercrime. Many people have been fooled by fake postings that purport to be from politicians, celebrities, and even major organizations, and have unintentionally disclosed information as a result of being misled. With automation and AI, the ability of criminals to engage in far more such activity has increased dramatically. There is also the question of whether robots can be used to kill or to wage war, perhaps even making the decisions themselves as to who or what their targets will be. Some jurisdictions want to give machines and robots the same rights as humans, as sentient beings; others want Asimov’s laws of robotics to be applied. Both extremes fail to recognize that all automation has human inventors at some point in its history, and we must realize that all automation is potentially subject to human bias and malicious intent.
The advances we have made in robotics and automation should not be underestimated. But we must ensure that all technology is applied responsibly, and for the purpose of advancing and improving all of our lives.
Prof. Mike Hinchey is President of IFIP, the International Federation for Information Processing (www.ifip.org) for 2016-22. He is also President of the Irish Computer Society and Past Chair of the IEEE UK & Ireland Section. He is currently Prof. of Software Engineering and Head of the Department of Computer Science & Information Systems at University of Limerick, Ireland, where he is also Emeritus Director of Lero, the Science Foundation Ireland Research Centre for Software. Prior to joining the university, Professor Hinchey was the Director of the NASA Software Engineering Laboratory. In 2009, he was awarded NASA’s Kerley Award as Innovator of the Year and is one of only 36 people recognized in the NASA Inventors Hall of Fame. He is the author/editor of over 20 books and more than 200 papers on various aspects of software engineering and computer science. He holds 26 patents, many of which are used in NASA exploration missions. He is Editor-in-Chief of Innovations in Systems and Software Engineering: a NASA Journal and Journal of the Brazilian Computer Society and Associate Editor of ACM Computing Surveys. In January 2018, he became an Honorary Fellow of the Computer Society of India and was the SEARCC Global ICT Professional of the Year 2018.