‘Holistic AI will provide transparent, powerful & trustworthy solutions’


Dr Teresa Escrig, researcher and professor in Artificial Intelligence, spared time from her busy schedule for this e-interview with Industrial Automation.

  • Services

    Qualitative Modelling, Cognitive Vision, Robotics and Cyber Security.

  • Technologies

    Artificial Intelligence



You have worked in the field of autonomous vehicles. How far are we from purely autonomous cars becoming street legal?

If automakers include transparent AI, which facilitates algorithm development and increases trust among all stakeholders (end-users, policy makers, insurance companies, and engineers), I think we can have Level 5, full automation, within the next couple of years.


A major threat to the safety of fully autonomous vehicles is expected to be from manually driven vehicles. Can both co-exist?

They will have to coexist for at least a decade or two. The key to a successful coexistence is for autonomous vehicles to have enough intelligence to react properly to the sometimes unexpected behaviour of manually driven cars.


How effective is Artificial Intelligence in problem solving? Will its decisions be trustworthy in a life and death situation?

If the AI solution includes Transparent AI, where the logic behind the decision making can be explained, we’ll be able to trust the algorithms. Black-box Machine Learning (ML) is the only type of solution that cannot be trusted. The combination of Knowledge Representation and ML, which we’ve coined Holistic AI, however, will provide transparent, very powerful and trustworthy AI solutions.
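To make the idea concrete, here is a minimal sketch of how a symbolic rule layer can wrap an opaque ML score so that every decision carries a human-readable explanation. The `ml_brake_score` function, the rules, and the thresholds are hypothetical illustrations, not part of any real system described in the interview.

```python
def ml_brake_score(sensor_reading: dict) -> float:
    """Stand-in for an opaque ML model: returns a braking confidence 0..1.
    A real model would be learned; here we fake a score from distance."""
    return max(0.0, 1.0 - sensor_reading["obstacle_distance_m"] / 50.0)

def decide_brake(sensor_reading: dict) -> tuple:
    """Knowledge-representation layer: explicit rules wrap the ML score,
    so the decision always comes with an explanation string."""
    score = ml_brake_score(sensor_reading)
    if sensor_reading["obstacle_distance_m"] < 5.0:
        # Hard safety rule overrides the ML score and is fully explainable.
        return True, "rule: obstacle closer than 5 m, brake regardless of ML"
    if score > 0.8:
        return True, f"ml: high braking confidence ({score:.2f})"
    return False, f"ml: braking confidence {score:.2f} below threshold"

decision, reason = decide_brake({"obstacle_distance_m": 3.0})
print(decision, "-", reason)  # True - rule: obstacle closer than 5 m, ...
```

The point of the design is that the rule layer, not the black box, has the final word, so the logic behind each decision can always be shown.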


In a recent article you mentioned a study which states that a large number of CEOs think AI and automation will have a negative impact on stakeholders’ trust in their industry. Why is that?

Up to now, the industry has understood AI as equivalent to ML, and ML is an atheoretical, black-box technology with opaque results. The potential benefits of ML algorithms are clear to CEOs, but they also know that this unproven technology is very risky: many things can go wrong, jeopardising their brands. However, this is about to change as the industry embraces a more holistic view of AI, where transparency is part of algorithm development. At Accenture, we have created a Decalogue for the Responsible AI Enterprise, not only because it is the right thing to do, but also because it brings all of AI’s potential benefits to fruition while mitigating risks.


Is quantitative AI getting more importance than qualitative AI for short term gains?

Quantitative-statistical models have gained importance recently in the industry due to very impressive first results, such as surpassing human-level image recognition. However, they are a brute-force type of solution: a first attempt at these challenges. We have already seen in the last 2-3 years how advancements in the AI field have slowed down. We are now at a point where there is acceptance of the need to integrate other AI technologies, including Qualitative AI, to continue improving the field in a way that is transparent and also responsible.


How do we eliminate the human bias and prejudices that could creep in, a fear expressed by Google’s AI chief John Giannandrea?

At Accenture, we have developed a framework called “Teach & Test” to ensure that the data used to train ML models is unbiased, and that the ML algorithms behave as intended once they have been deployed. We have also integrated ML with other AI technologies, known as Knowledge Representation & Reasoning (KRR), to ensure transparency, so that the logic behind the decision making is shown. This integration also significantly accelerates the training time of ML algorithms.
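The internals of “Teach & Test” are not described in the interview, but the kind of pre-training data check it implies can be sketched in a few lines: compare positive-outcome rates across a sensitive attribute and flag the dataset if the gap is too large. The field names and the 0.2 threshold below are hypothetical illustrations.

```python
def outcome_rates(rows, group_key, label_key):
    """Return the positive-label rate for each group in the data."""
    totals, positives = {}, {}
    for row in rows:
        g = row[group_key]
        totals[g] = totals.get(g, 0) + 1
        positives[g] = positives.get(g, 0) + (1 if row[label_key] else 0)
    return {g: positives[g] / totals[g] for g in totals}

def looks_biased(rows, group_key, label_key, max_gap=0.2):
    """Flag the dataset if positive rates across groups differ by more
    than max_gap (a crude demographic-parity style check)."""
    rates = outcome_rates(rows, group_key, label_key)
    return max(rates.values()) - min(rates.values()) > max_gap

data = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": True},
]
print(looks_biased(data, "group", "approved"))  # rates A=1.0, B=0.5 → True
```

A real audit would of course use statistically sound fairness metrics over large datasets; this only illustrates the shape of the check.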


The EU has issued the General Data Protection Regulation (GDPR), which comes into force in May 2018 and includes the right to obtain an explanation of decisions made by algorithms and a right to opt out of some algorithmic decisions altogether. We need to educate people to request the enforcement of this law. We also need to educate our kids on how AI works. They are going to be born into a society where AI already exists (just as we were born into a society with electricity). They need to understand it and use it properly, not be governed by it.


With the data deluge now resulting from all the sensors and other tools of automation, how serious are the security threats and remedial measures?

The cybersecurity threat is very serious at the moment. That is why I recommend people never agree to have any chip implanted in their bodies; it would make it very easy for someone to take control.


To date, the AI used to solve the cybersecurity problem has mainly been ML, as in the rest of the areas of application, with poor results. With the integration of other AI technologies in a holistic way, as I’ve mentioned before, we will see more effective solutions to cybersecurity problems. For instance, for IoT devices, if we adopt Edge Computing and include intelligence at the edges, a thermostat will know the normal behaviour of its family and will never accept abnormal behaviour coming from an external attack.


Are human workers, and, more importantly, policy makers, ready for this paradigm shift and the move to new roles and skills?

I don’t think workers and policy makers are ready for the shift, because they do not completely understand the technology and its implications. Corporations have a great deal of responsibility to educate and retrain their employees for other types of occupations that use AI technologies to enhance their capabilities and creative potential, which is what everybody should be doing to begin with. This change will be very positive if done in a responsible way. At Accenture, we are developing the Career Coach, which will identify when the job a person is performing will be displaced by AI or automation, help decide the next career move, and suggest a list of training material to get ready. This technology needs to be accessible to everyone, because we will all face a career change in the next 5 to 10 years, if not before.



For over two decades, Dr. Teresa Escrig has been a researcher and professor in Artificial Intelligence, including the areas of Qualitative Modelling, Cognitive Vision, Robotics and Cyber Security. She is the author of three books and more than one hundred peer-reviewed research articles, and the recipient of numerous awards. From 2002 to 2010, she led the research group “Cognition for Robotics Research” and created several breakthroughs in the formalization of common sense reasoning. From 2007 to 2014, she was the founder and CEO of Cognitive Robots, where she created two products from their inception: the “Cognitive Brain for Service Robotics” and the “Cognitive Vision System”. The Cognitive Brain is the first artificial brain that combines qualitative reasoning, a cognitive vision system, and cognitive learning capabilities, integrated into a unique and patented physical structure that can easily be incorporated into any vehicle to transform it into an autonomous and intelligent vehicle. In 2015 she founded Qualitative Artificial Intelligence, where she applied AI to areas including DNA sequencing and the Semantic Web. She recently joined Accenture as an AI global leader in Cognitive Modelling and Computer Vision, working out of the Liquid Studio in Seattle. Her mission is to enhance human capabilities with safe and responsible AI.