Unacceptable Risk: Any system that manipulates human behavior to circumvent free will is considered a clear threat to people’s safety, livelihoods, and rights. Examples of such systems include social scoring and profiling systems and systems that encourage dangerous behavior.
Any AI system that falls under this category will be banned from the EU marketplace.
High Risk: Examples of AI systems considered high risk include transportation systems that could endanger people’s lives, scoring systems in education, robot-assisted surgery, resume-sorting systems that determine employment opportunities, credit-scoring systems that determine people’s ability to obtain loans, law-enforcement systems that may interfere with people’s fundamental rights, immigration systems that determine asylum or entry to a state, and AI systems used by courts to determine the outcome of a case.
Providers of high-risk AI systems are required to register their systems in an EU-wide database managed by the Commission before releasing them to the market. These systems must also comply with a range of requirements on risk management, testing, transparency, human oversight, and more, to ensure that the risks do not turn into actual harms.
Limited Risk: AI systems such as chatbots come with specific transparency obligations. For example, people should know they are interacting with an automated system and not a person.
Minimal or No Risk: Examples include video games and spam filters.
Systems with limited or minimal risk also go through certain requirements; however, these are not as stringent as those for high-risk systems. The sketch below summarizes the four-tier structure.
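To make the taxonomy concrete, here is a minimal, illustrative Python sketch of the four risk tiers and their obligations as paraphrased from this section. This is not part of the Act itself; the RiskTier names, the OBLIGATIONS mapping, and the obligations_for helper are assumptions introduced purely for illustration.

```python
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned from the EU market
    HIGH = "high"                  # registration plus compliance obligations
    LIMITED = "limited"            # transparency obligations (e.g., chatbots)
    MINIMAL = "minimal"            # little to no obligation (e.g., spam filters)


# Obligations per tier, paraphrased from the categories described above.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited: cannot be placed on the EU market"],
    RiskTier.HIGH: [
        "register in the EU-wide database before market release",
        "comply with risk management, testing, transparency, and human oversight requirements",
    ],
    RiskTier.LIMITED: ["disclose to users that they are interacting with an automated system"],
    RiskTier.MINIMAL: ["no significant mandatory obligations"],
}


def obligations_for(tier: RiskTier) -> list[str]:
    """Return the paraphrased obligations for a given risk tier."""
    return OBLIGATIONS[tier]


if __name__ == "__main__":
    for tier in RiskTier:
        print(f"{tier.value}: {'; '.join(obligations_for(tier))}")
```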
Regulations in the United States, India & China
Currently there is no federal regulation of AI in the US, though a few states have enacted their own legislation on certain AI policies. At this point, the US government is more interested in encouraging innovators to build AI systems than in regulating them. It has launched the National Artificial Intelligence Initiative to promote AI innovation and to position the US as the leader in AI. However, various initiatives by U.S. regulators show that they are leaning towards developing “a voluntary risk management framework for trustworthy AI Systems”.
India is working on a risk-based approach to Responsible AI. NITI Aayog, the premier policy think tank of India, has published Principles of Responsible AI, Part 1 and Part 2, which propose self-regulation for low-risk AI systems and a more stringent approach for applications where the risk is higher or unclear. For such systems, it proposes that “regulatory mechanisms may be developed through policy sandboxes and controlled deployments where market reactions and impact could be closely monitored”.
The China Academy of Information and Communications Technology (CAICT) has published a Framework of Trustworthy AI, along with tools for testing and certifying trustworthy AI systems. Its white paper recommends legislation and supervision at the government level, complemented by best practices and standards at the corporate level, to achieve AI trustworthiness.