Designing an AI-Assisted Compliance System – A Theoretical Framework

This post offers a theoretical framework for the design of an AI-assisted compliance system able to identify compliance clauses and fines within legislation, thereby enhancing compliance by flagging regulatory risk to the corporation according to the magnitude of the risk and its consequences.

AI is the theory and development of computer systems able to perform tasks that normally require human intelligence. Examples include tasks such as visual perception, speech recognition, decision making under uncertainty, learning, and translation between languages.

There are two aspects of Artificial Intelligence (AI) that are relevant to the legal field: the use of AI to benefit the legal profession, and the implications of AI for our lives and the role of law in that respect.

Venture capital investments in companies developing and commercializing AI-related products and technology have exceeded $2 billion since 2011. Leading players like IBM, Google, and Facebook have invested heavily in developing their AI capabilities.

The purported AI legal application that has no doubt received the most public attention is ROSS, a system supported by IBM’s Watson division. One of the co-founders of the ROSS team describes it as “[b]asically, what we built is a [sic] the best legal researcher available”. Even though it has not been made available to the public or presented in any public demo, ROSS has become a symbol of legal AI technology.

Another new application is Global-Regulation.com, the world’s largest search engine of legislation. Global-Regulation.com makes extensive use of both Microsoft’s and Google’s machine translation to offer laws from China, Mexico and Spain, among many others, in English.

Other fields in which AI has been used within the legal profession include e-discovery (Recommind, Equivio – now part of Microsoft), forecasting outcomes of IP litigation (Lex Machina), providing fact- and context-specific answers to legal, compliance, and policy questions (Neota Logic), and contract lifecycle software covering discovery, analysis, and due diligence (Kira Systems and KM Standards).

It should be mentioned that while many companies claim to use AI in their products, not all of them actually do, or have it at the core of their product. A good indication of whether AI is actually used (albeit hard to determine, as the backend is not usually transparent) is whether a trained model is embedded in the system.

On the other hand, fear of AI’s implications has grown correspondingly. Oxford University researchers estimate that 47 percent of total US employment is “at risk” due to automation of cognitive tasks; Silicon Valley entrepreneur Elon Musk invested in AI “to keep an eye” on it, claiming it is potentially “more dangerous than nukes”; and the renowned theoretical physicist Stephen Hawking has said that AI may create “machines whose intelligence exceeds ours by more than ours exceeds that of snails”.

In 1939, after demonstrating a nuclear chain reaction, Leo Szilard, one of the leading scientists behind the experiment, wrote: “We switched everything off and went home. That night, there was very little doubt in my mind that the world was headed for grief.” Stuart Russell, a leading AI expert, argues that we may be in a similar situation with AI: “To those who say, well, we may never get to human-level or superintelligent AI, I would reply: It’s like driving straight toward a cliff and saying, ‘Let’s hope I run out of gas soon!’” One major concern about AI is the possibility that advanced AI systems will use their superintelligence to design and build even more advanced AI systems without any human intervention or supervision. A form of this has been taking place for years in the world of CPU development: computer chips are so complicated that computers are required to design them. The movie Ex Machina offers an illustration of the alarming consequences that could be in store for humanity.

Stephen Hawking wrote that, in the short term, AI’s impact depends on who controls it; in the long term, it depends on whether it can be controlled at all. Two examples illustrate Hawking’s concern: the autonomous killing machines currently being developed by more than 50 nations, and the equally ethically complex advanced data-mining tools now in use by the U.S. National Security Agency (and its counterparts around the world).

Given these concerns, Omohundro argues that intelligent systems will need to be carefully designed to prevent them from behaving in harmful ways. He identifies a number of “drives” that will appear in sufficiently advanced AI systems of any design. Similar precautions are suggested in a paper co-authored by researchers from Google, Stanford, UC Berkeley, and OpenAI, presenting a list of five practical research problems related to accident risk.

Since the global financial crisis of 2008, spending on compliance has broken new records. Estimates put worldwide spending on regulatory compliance at over US$1 trillion, with more than one million people employed around the world in regulatory compliance roles.

KPMG, one of the Big Four global accounting firms, claims that, “The top risk perceived by senior executives is the growing regulatory pressure from governments around the world. C-level executives in almost all industries say this, not just those in Financial Services, where companies are facing arguably the greatest regulatory challenge in their history.”

The bottom line, in the words of one senior executive we’ve spoken to, is that “every $1 spent on compliance saves $5 in fines.”

AI-Assisted Compliance System

The capabilities of AI have yet to be utilized for the enhancement of legal compliance. The suggested system will train a model to identify fines and charges within legislation and, on that basis, build a compliance system able to identify the greatest risks to the corporation.

In the first stage, legislation from eight countries will be considered: the UK, Canada, the USA, China, Indonesia, Australia, New Zealand and Uruguay. For each country’s legislation database, an initial search string, (corporation OR company) AND (compliance OR fine OR charge OR penalty), will be employed. We’ll be using our database of laws, including the translations for Indonesia, China and Uruguay.
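As a rough sketch of how that initial filter could be applied in code (the record structure and helper function below are illustrative assumptions, not a description of our actual database), the search string boils down to a simple boolean keyword match over each law’s English text:

```python
# Hypothetical in-memory representation of one country's legislation database:
# each record holds a law's title and full English text (machine-translated
# where necessary, e.g. for Indonesia, China and Uruguay).
laws = [
    {"title": "Example Corporations Act",
     "text": "A company that fails to comply is liable to a penalty of $50,000."},
    {"title": "Example Appropriations Act",
     "text": "The annual budget of the agency is set at $1,000,000."},
]

ENTITY_TERMS = ("corporation", "company")
RISK_TERMS = ("compliance", "fine", "charge", "penalty")

def matches_search_string(text: str) -> bool:
    """(corporation OR company) AND (compliance OR fine OR charge OR penalty)."""
    lowered = text.lower()
    return (any(term in lowered for term in ENTITY_TERMS)
            and any(term in lowered for term in RISK_TERMS))

# Keep only the laws that satisfy the boolean query for manual review.
candidates = [law for law in laws if matches_search_string(law["text"])]
for law in candidates:
    print(law["title"])
```

In practice the query would run against each country’s full legislation database rather than an in-memory list, but the filtering logic is the same.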

This manual process will identify 100 samples of fines in legislation and 100 samples where numerical figures in legislation are not fines (e.g., interest rates, budgets, etc.). The next step will engage a machine learning algorithm to ‘translate’ the positive and negative samples into numerical features that will be used to automate the system.
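To give a sense of what that ‘translation’ step could look like (the TF-IDF features and logistic regression classifier below are illustrative assumptions, not a committed design), the labelled snippets can be vectorized into numerical features and used to train a simple classifier that separates fines from other monetary figures:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled snippets from the manual review stage:
# 1 = the figure in the snippet is a fine/penalty, 0 = it is not (interest, budget, etc.).
snippets = [
    "a company that contravenes this section is liable to a fine not exceeding $50,000",
    "the annual budget allocated to the agency shall be $2,000,000",
    "a corporation commits an offence and is liable to a penalty of 10,000 penalty units",
    "interest accrues on unpaid amounts at a rate of 5 percent per annum",
]
labels = [1, 0, 1, 0]

# TF-IDF turns each snippet into a numerical vector; the linear classifier then
# learns which word patterns distinguish fines from other monetary figures.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(snippets, labels)

new_snippet = ["failure to comply attracts a fine of up to $100,000 for the corporation"]
print(model.predict(new_snippet))  # expected: [1]
```

In the actual system the training set would be the 200 manually labelled samples described above, and the feature representation and model choice would be validated against held-out examples before automating the pipeline.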
