By Balaji Subramanian, Chief Technology Officer, ENGAIZ Inc.
For over a decade, we have been enjoying the benefits of Artificial Intelligence, directly or indirectly, in our lives. Voice assistants like Alexa, Siri and Google Home use Natural Language Processing (NLP) and Machine Learning (ML) to understand our speech and respond instantly with the information we need. Smart recommendations on Netflix and Disney+ learn what we like to watch. Products like Microsoft Viva enhance our work experience with smart suggestions and insights that help us organize our work efficiently. Tesla's self-driving AI system may one day fully automate our mobility.
These are products that interact with us directly and enhance our daily lives. Thousands of other AI systems run behind the scenes in the cloud, constantly analyzing, learning and adapting to make our digital lives easier. You might be surprised by how many AI systems are used in the military, law enforcement, healthcare, finance, agriculture, weather forecasting and beyond.
As we enjoy these benefits, we also worry about what voice assistants hear when we are not using them, and what they do with the information they collect. What personal information do social media apps hold, and what do they do with it? We hear about stock market volatility caused by bots selling heavily when a resistance level is reached or buying heavily when a support level is reached.
What if a profiling AI system exploits specific vulnerable groups? What if it employs harmful and manipulative marketing techniques? What if facial recognition software used by law enforcement erroneously identifies the wrong person as a suspect?
The concerns range from infringements of individual rights to broader damage to society.
As always, lawmakers in the European Union (EU) are the first to act on these concerns. In April 2021, the European Commission proposed a regulatory framework on Artificial Intelligence. This is the first-ever attempt to enact horizontal regulation of AI.
Under this draft AI Act, all AI systems used in the European Union will be classified by risk level: (i) unacceptable risk, (ii) high risk, (iii) limited risk, and (iv) low or minimal risk. Based on that category, regulations will apply to the AI systems and the companies behind them to make sure the risks are managed properly.
Unacceptable Risk: Any system that manipulates human behavior to circumvent users' free will is considered a clear threat to safety, livelihoods and people's rights. Examples include social scoring and profiling systems, and systems that encourage dangerous behavior.
Any AI system in this category will be banned from the EU market.
High Risk: Examples of high-risk AI systems include transportation systems that could endanger people's lives, scoring systems in education, robot-assisted surgery, resume-sorting systems that determine employment opportunities, credit-scoring systems that determine people's ability to get loans, law-enforcement systems that may interfere with people's fundamental rights, immigration systems that decide asylum or entry to a state, and AI systems used by justice departments to determine the outcome of a case.
Providers of high-risk AI systems are required to register their systems in an EU-wide database managed by the Commission before releasing them to the market. These systems must comply with a range of requirements on risk management, testing, transparency, human oversight and more, to ensure the risks do not become actual harms.
Limited and Minimal Risk: Examples include video games and spam filters. Systems in these categories face certain requirements as well, but they are far less stringent than those for high-risk systems.
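The tiered approach described above can be pictured as a simple decision table. The sketch below is a hypothetical illustration only: the four categories and the example use cases come from the draft Act as summarized here, but the `EXAMPLES` mapping and `obligations` function are my own simplification, not anything defined in the regulation itself.

```python
from enum import Enum


class RiskLevel(Enum):
    """The four risk tiers proposed in the draft EU AI Act (simplified)."""
    UNACCEPTABLE = "banned from the EU market"
    HIGH = "registration plus risk management, testing, transparency, human oversight"
    LIMITED = "lighter requirements than high-risk systems"
    MINIMAL = "minimal obligations"


# Hypothetical mapping of example use cases (drawn from the article) to tiers.
EXAMPLES = {
    "social scoring": RiskLevel.UNACCEPTABLE,
    "credit scoring": RiskLevel.HIGH,
    "resume sorting": RiskLevel.HIGH,
    "spam filter": RiskLevel.MINIMAL,
    "video game": RiskLevel.MINIMAL,
}


def obligations(use_case: str) -> str:
    """Return a one-line summary of the (simplified) obligations for a use case."""
    level = EXAMPLES[use_case]
    return f"{use_case}: {level.name} risk -> {level.value}"


print(obligations("credit scoring"))
print(obligations("spam filter"))
```

A real classification would of course depend on the system's context of use, not just its product category; the point of the sketch is only that the draft Act ties obligations to risk tier rather than to technology.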
Currently there is no federal regulation of AI in the US, though a few states have enacted their own legislation on certain AI policies. The US government is, at this point, more interested in encouraging innovators to build AI systems than in regulating them. It has launched the National Artificial Intelligence Initiative to promote AI innovation and to establish US leadership in AI. However, various initiatives by U.S. regulators show that they are leaning toward developing "a voluntary risk management framework for trustworthy AI Systems".
India is working on a risk-based approach to Responsible AI. NITI Aayog, India's premier policy think-tank, has published Principles of Responsible AI – Part 1 and Part 2, which propose self-regulation for low-risk AI systems and a more stringent approach for applications where the risk is higher or unclear. For such systems, it proposes that "regulatory mechanisms may be developed through policy sandboxes and controlled deployments where market reactions and impact could be closely monitored".
The China Academy of Information and Communications Technology (CAICT) has published a Framework of Trustworthy AI, along with tools for testing and certifying trustworthy AI systems. Its white paper recommends legislation and supervision at the government level, combined with best practices and standards at the corporate level, to achieve AI trustworthiness.
Whether the law makers in each region and country regulate AI systems or not, it is our responsibility as Technology Innovators to come up with a self-regulatory framework on AI systems and their risks. In the past, initiatives such as open source, Agile methodologies, the open standards of the World Wide Web and many others were propelled and successfully implemented by technologists. We can come together and build a framework for Responsible AI as well. The EU framework is a good starting point, but EU-style regulation can be restrictive enough to curtail innovation. The United States, for its part, fully supports AI innovation and is not very restrictive at this point. We need to find the right balance between these approaches, create our own framework and standards, and build a management culture that enables the corporate world to develop responsible AI.
As larger companies like Google, Apple and Tesla try to change the world with their future AI systems, technology startups like ENGAIZ are using AI engines to enhance their products. For example, we already use Natural Language Processing to analyze contracts, identify discrepancies and alert enterprises when a third-party company does not conform to the agreement. We will build many more AI-enabled systems in the future, such as scenario analysis on climate change to power our OPEN3PRX™ Sustainability Platform and predictions based on 100+ data elements on third parties to power our OPEN3PRX™ Risk Intelligence Platform. In all those efforts, ENGAIZ is committed to building Responsible AI systems that help enterprises manage third-party risks efficiently.
Balaji Subramanian is the Chief Technology Officer at ENGAIZ Inc. He is a strong IT leader and an expert in large-scale secure software engineering and development, digital transformation, application portfolio management and cybersecurity.
He is passionate about leveraging emerging technologies such as AI to find solutions for both the corporate and the social world. At ENGAIZ, Balaji heads technical strategy and the product roadmap, including leading product engineering.
Prior to joining ENGAIZ, Balaji held many technical leadership roles, most recently as Vice President, Applications Development at Element Fleet Management and, before that, as Vice President, Applications Development at Arcor Hotels and FRHI Hotels & Resorts (Fairmont, Raffles, Swissotel).