– ROBAIRD O’CEARBHAILL
As Pope Francis said in his 2018 message to Davos, the highest international economic forum, about social inequality and justice: “Only through a firm resolve shared by all economic actors may we hope to give a new direction to the destiny of our world. So too artificial intelligence, robotics and other technological innovations must be so employed that they contribute to the service of humanity and to the protection of our common home, rather than to the contrary, as some assessments unfortunately foresee.” So O Clarim interviewed Sean Quinn, an expert in this field who attended the globe’s oldest and most important artificial intelligence conference, the International Joint Conference on Artificial Intelligence (IJCAI), held this year in Macau for the first time. He went over the possible outcomes of AI, both the controversial and potentially disastrous and the positive.
How was the Macau AI conference? Any results from you or others, and your general review?
The IJCAI conference is a special event in that it always contains a spread of research from all the various subfields in AI, whereas other conferences have tended in recent years to become very focused on just neural networks, the current hot topic in AI. IJCAI is the original international AI conference and seems to strive for a good representation and balance between different families of methods and approaches; maybe this is what has made the conference last so well throughout different phases of technology hype.
Did you like being in Macau, the only European- and US-style city in Asia?
Macau was certainly a unique experience; it felt much more “Western” than Hong Kong. The local food and the atmosphere in Taipa village were the highlights.
What benefits does AI bring?
The ability to extract value from the immense volumes of data that almost every organisation now accumulates, to make faster and more accurate decisions, and to predict the future based upon the past. It has applications in every single sector and facet of life.
What happens when AI systems are incompatible with human requirements?
Then the AI is redundant! A failure to design a system in accordance with requirements can happen for many reasons: a lack of data, a flaw in the model design, or simply insufficient communication between the parties involved in creating the system. If it doesn’t work as intended, don’t use it. Fix it, get more data, or throw in the towel. Ensuring AI systems are in accordance with requirements is no different from ensuring any piece of software is in accordance with requirements.
AI risks?
Very small, but possibly huge if it gets out of hand through manipulators with bad intent.
Malevolence or incompetence? Mostly incompetence, and by just one person, company, or nation?
The main risk with AI technologies is who will use them and how. Two examples: (1) Autonomous weapons: it is a relatively straightforward task to train an AI algorithm to classify the race/ethnicity of a person. Combine this with a drone fitted with a high-resolution camera and weapons and you could have a fully automated genocide. (2) Targeted propaganda: Cambridge Analytica used AI algorithms trained on (illegally acquired) Facebook data to influence elections all over the world in favour of the highest bidder, including Brexit and the 2016 US election. They have played a huge role in the global swing towards fascism and right-wing ideologies. The potential for any individual who possesses population-level psychometric data to exploit it using AI is huge. The discussion on how to limit these types of bad actors is only just beginning, and it needs to be handled delicately: any rush to legislate AI by politicians who do not understand the technology would almost certainly make the problem worse. I am not sure misuse of AI through incompetence is a great worry, but the potential for bad actors, particularly those with sensitive data, to exploit AI technologies is large.
We control the Earth because we are the most intelligent species in the world, but when AI surpasses us in all areas, what will it do to us?
I don’t think anybody who understands or works with AI algorithms would acknowledge this as a realistic prospect, especially given the short timeline we have set for our own existence with the global refusal to act upon the climate emergency. It should be common knowledge that AI algorithms are not sentient or alive in any way; they are computer programs that are run just like any other, no more alive than a calculator. Nothing we have today suggests that this reality will change within any measurable timeline. It is important to look at the author of any piece of media which talks about these kinds of AI risks as being real. I have encountered several such articles, and without fail it is always someone with no understanding of technology who adopts the assumption, without any evidence, that this is a valid risk and generates content based on that assumption. AI doomsday makes for good clickbait, so people will use it where they can, particularly a lot of recruiters and salespeople on LinkedIn.
The near future: our healthcare is AI-controlled, and sometimes a wrong diagnosis is made by the AI. I have records of that. Your reaction?
I have never heard of any healthcare system being controlled by AI; AI is used in healthcare to help medical staff make informed decisions. Often an AI algorithm can make accurate diagnoses given sufficient data (in particular using x-rays or other medical images). When these systems are developed they are tested on thousands (or millions) of examples and never reach 100% accuracy; they only need to be significantly more accurate than a human to be useful. The predictions/decisions of an algorithm can then be used as a tool by a practitioner, who chooses to act or not act on the information provided, knowing in advance the margin of error and the likelihood of a wrong decision from the algorithm. An algorithm would never be directly put in control of a medical decision, because when it makes a mistake, just like doctors do, who is liable? The dataset? The data scientist who designed the model? The organisation who owns it?
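To make this decision-support pattern concrete, here is a minimal sketch in Python of how such a tool typically sits beside, not above, the practitioner. The dataset, model, and figures are illustrative placeholders, not any deployed clinical system:

# Minimal sketch: AI as a diagnostic aid, not a decision-maker.
# Dataset, model, and numbers are illustrative placeholders only.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)            # stand-in medical data
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)

# Measure the margin of error on held-out cases before deployment:
# accuracy is high but never 100%, and that error rate is known in advance.
print(f"Held-out accuracy: {model.score(X_test, y_test):.1%}")

# At the point of care, the tool reports a likelihood to the practitioner,
# who then chooses whether to act on it.
likelihood = model.predict_proba(X_test[:1])[0, 1]
print(f"Predicted likelihood for this case: {likelihood:.1%} (clinician decides)")

The final call is what separates a decision aid from a controller: its output is information with a known error rate, and the decision, with the liability, stays with the human.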
And AGI?
All the useful applications of AI which have been developed so far are highly specific in nature, so it is hard to see a path towards any tangible general AI. No technology developed today indicates we will achieve anything approaching human intelligence for the vast majority of tasks humans perform, even if we took each task separately, never mind if they were tackled together in a single algorithm. AGI is an abstract concept, not a technology. It has been talked about since the early days of AI but is still not tangible even in the most rudimentary form, nor is it likely to be in any foreseeable timeline. Current AI algorithms are better than humans at a very small set of human tasks; nothing functional exists for the majority.
And the darker side of that: AI in total control of society?
Science fiction – not a realistic prospect.
Tell me about your work area and speciality.
My research focuses on knowledge transfer between neural networks. Rather than train each algorithm from scratch, this aims to transfer the knowledge which has been learned by one network to another, possibly for use in a different but related task. It is important in scenarios where we lack enough data specific to our task. It is a common technique in practice but is largely limited to the scenario where the networks in both tasks have the same design, or “architecture,” so my research aims to develop architecture-agnostic methods to enable knowledge transfer in a new set of domains where the two architectures differ. I have also done a small amount of applied work in healthcare, collecting data from sensors in the home environment, such as water meters and room-occupancy sensors, to then make predictions about the health and wellbeing of occupants based on their behaviour.
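One standard, architecture-agnostic way to transfer knowledge is “distillation,” where a second network learns to imitate the softened outputs of a trained one; because only the outputs are matched, the two designs can differ. The PyTorch sketch below illustrates the general idea only, not Quinn’s own method, and the layer sizes, temperature, and random batch are placeholders:

# A sketch of knowledge distillation: transferring what one network has
# learned to another through its softened outputs. Sizes are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(784, 512), nn.ReLU(), nn.Linear(512, 10))
student = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 10))  # a different architecture

optimiser = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 4.0                    # temperature: softens outputs to expose more "knowledge"

x = torch.randn(32, 784)   # one batch of placeholder inputs
with torch.no_grad():
    teacher_logits = teacher(x)        # the knowledge already learned

optimiser.zero_grad()
student_logits = student(x)
# Train the student to match the teacher's softened predictions. Only the
# outputs are compared, so the two architectures never need to match.
loss = F.kl_div(
    F.log_softmax(student_logits / T, dim=1),
    F.softmax(teacher_logits / T, dim=1),
    reduction="batchmean",
) * T * T
loss.backward()
optimiser.step()

In this sketch the teacher could be any model at all, which is what makes output-matching attractive when the two networks’ internal designs cannot be aligned layer by layer.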