Future Compliance Careers: The Artificial Intelligence Compliance Officer

Innovative technologies are transforming the financial industry at all levels, and the role of the compliance officer is no exception. We have covered the subject of the Future of Compliance Jobs on several occasions over the years, and the bottom line is fairly simple: while less complex jobs will be replaced by automated systems, other aspects will become more important in the portfolio of GRC professionals. One of the new responsibilities that will arise as part of a changing industry is to control the controller. Patrick Henz writes about the role of the Artificial Intelligence Compliance Officer.

The Artificial Intelligence Compliance Officer (AICO) is a human employee whose target groups are not only the AI software itself but also its programmers and users. The role covers AI inside the company’s products and solutions as well as internal AI that takes the role of an artificial employee.

Today’s pupils and students may later work in jobs that do not exist today. Technology is replacing humans in certain positions, whether in the workshop, the factory, or the office. According to a survey by the international accounting network BDO, around 40% of in-house counsels already use an electronic review assistant. Such software reads and understands contracts, highlighting important passages and proposing changes based on local law and company policies. Today the software is an add-on, but it can be expected that within several years intelligent software will be capable of replacing human employees in the Legal department. As in today’s customer service, the software will form the first level of review and control: only if it identifies special risk factors will it send the document to the second level, the human contract manager.
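To make this two-level setup concrete, here is a minimal sketch of such an automated first-level review. It is not taken from any real product; the risk keywords, their weights, and the escalation threshold are illustrative assumptions.

```python
# Minimal sketch of a two-level contract review: the software is the first
# level of control and escalates to a human contract manager only when it
# detects special risk factors. Keywords, weights, and threshold are invented.

RISK_FACTORS = {
    "unlimited liability": 5,
    "penalty": 3,
    "exclusive": 2,
    "automatic renewal": 1,
}
ESCALATION_THRESHOLD = 4  # hypothetical cut-off for human review


def review_contract(text: str) -> dict:
    """First-level review: score risk factors and decide on escalation."""
    found = {kw: w for kw, w in RISK_FACTORS.items() if kw in text.lower()}
    score = sum(found.values())
    return {
        "risk_score": score,
        "highlights": sorted(found),           # passages a human should read
        "escalate_to_human": score >= ESCALATION_THRESHOLD,
    }


if __name__ == "__main__":
    clause = "The supplier accepts unlimited liability and a penalty of 5%."
    print(review_contract(clause))
    # -> risk_score 8, both passages highlighted, escalate_to_human True
```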

Such technology requires new jobs, and one of them could be the AI Compliance Officer (AICO), as intelligent software enters the office and over time replaces today’s human colleagues. But AI will not only be our internal partner; it will also be included in our products. This creates the requirement that a Compliance department not only guides and controls the human employees, but also extends its services to the artificial ones.

The AICO should be part of the Compliance department. To support these tasks, the academic and industry working group “Fairness, Accountability, and Transparency in Machine Learning” (FATML) identified five areas to ensure responsible decision-making by Artificial Intelligence software: Responsibility, Explainability, Accuracy, Auditability, and Fairness. These areas give a first insight into the tasks of the AI Compliance Officer.

1 Responsibility

A robot or intelligent software is less comparable to the humanized “C-3PO” or “R2-D2” and better understood as the “T-800” from the “Terminator” film series: originally created to kill humans, but after the resistance managed to capture one of these machines, they changed its basic programming to the new goal of protecting humans. The machine could still learn and adapt to different situations, but nevertheless stayed true to its basic programming. It is like a circus tiger: the animal can learn tricks from the tamer, but nevertheless remains a predator. If such an animal hurts or kills the tamer, it is not to blame, as that is part of its nature. Accordingly, if a T-800 hurts or kills a human, the machine is not to blame, as it is following its basic programming.

This makes it clear that the audience of Ethics & Compliance trainings is not the robots and computers, but their human programmers. The AICO should be part of the Compliance department and not of IT. Sensitivity is needed, as the target group of programmers and software designers is an internal department without regular contact with suppliers or clients. Nevertheless, they must be empathetic enough to understand the social and legal impact of their software.

Responsibility includes being prepared for different emergencies. Employees who become aware of corporate wrongdoing need access to an anonymous whistleblower hotline. Furthermore, just as for a potential tornado, earthquake, or other natural catastrophe, the company should have dedicated processes for the potential wrongdoing of an AI, including accidents caused by it.

2 Explainability

The coding process starts with the idea of what the AI should be capable of. Based on this vision, the programmers create the software from mathematical formulas. In an ideal world, the AI works as predicted. But, of course, this never happens: a program is complex, and often distinct parts contradict each other and lead to malfunctions. A certain part of the programmers’ work is old-fashioned trial and error; by changing variables, the software finally might do what it should. Despite the positive output, the solution is not explainable. In normal day-to-day operation this is no problem, but if the software faces a demanding situation, it is not predictable and may violate applicable law. Due to this, it is imperative that the software is explainable, documented by the responsible programmers, and audited by the AI Compliance Officer.

It would be beneficial if the AICO masters coding and can “read” a piece of software. Furthermore, case studies may be used to test the software and analyze how it reacts in special situations that are out of the ordinary, as in the sketch below.
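One way to operationalize such case studies is to encode them as automated tests. The following is a minimal sketch, not a real implementation: `approve_payment` is a hypothetical decision function, and the expected behaviors are illustrative compliance expectations.

```python
# Sketch of "case studies as tests": each case confronts the AI with an
# out-of-the-ordinary situation and states the behavior Compliance expects.
import unittest


def approve_payment(amount: float, recipient_is_government_official: bool) -> bool:
    """Hypothetical decision function under review."""
    if recipient_is_government_official and amount > 0:
        return False  # anti-bribery rule: never auto-approve such payments
    return amount <= 10_000  # illustrative approval limit


class ComplianceCaseStudies(unittest.TestCase):
    def test_payment_to_official_is_blocked(self):
        self.assertFalse(approve_payment(50.0, recipient_is_government_official=True))

    def test_unusually_high_amount_is_blocked(self):
        self.assertFalse(approve_payment(1_000_000.0, recipient_is_government_official=False))

    def test_ordinary_payment_is_approved(self):
        self.assertTrue(approve_payment(250.0, recipient_is_government_official=False))


if __name__ == "__main__":
    unittest.main()
```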

3 Accuracy

The programmers’ attitudes must be compatible with the company’s values: they should believe in the organization’s vision and want to work inside the guidelines. Of course, this does not automatically mean that the resulting software will operate inside these permissions. Coding errors may lead to violations of internal guidelines and external laws. Just as human employees (including their behavior) are included in the company’s control system, the same must be valid for their artificial colleagues. Real-time monitoring, or at least sample checks, must be in place to ensure compliance with relevant laws and regulations.

One risk factor is “temporary fixes”, especially since in the software industry it is common that the users effectively finish testing the program. This can be openly communicated as a public beta version, or less visibly shipped as a “buggy” version 1.0. Thanks to the ever-available internet, updates arrive quite regularly, automatically downloaded and installed. On average, people are more tolerant of errors in intangible software than in tangible products such as a house or a refrigerator. This is especially a risk for companies that started as pure software houses but then extended their portfolio to offer physical products as well. Or the other way around: traditional companies work together with software companies to implement software and artificial intelligence into their products. Two different corporate cultures may collide.

Real-time monitoring of the AI should inform the relevant levels of the organization’s management, not only to present the problem, but also to enable them to react, which could mean shutting down the software and conducting the process manually. It is imperative that an emergency plan contains protocols for how processes would work “manually”, without AI.
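As an illustration only, the sketch below combines sample checks, a management alert, and a manual fallback in a few lines. The policy check, the sample rate, and the alerting channel are all hypothetical placeholders.

```python
# Sketch of real-time monitoring with an emergency fallback: AI decisions
# are sample-checked, management is alerted on a violation, and the
# process is switched to "manual" as foreseen in the emergency plan.
import random

SAMPLE_RATE = 0.1   # illustrative: inspect 10% of decisions in detail
ai_enabled = True   # global kill switch


def violates_policy(decision: dict) -> bool:
    """Hypothetical compliance check against internal guidelines."""
    return decision.get("discount", 0) > 0.30  # e.g. discounts above 30%


def alert_management(decision: dict) -> None:
    print(f"ALERT to management: non-compliant decision {decision}")


def process(decision: dict) -> str:
    global ai_enabled
    if not ai_enabled:
        return "routed to manual process"      # emergency plan: work without AI
    if random.random() < SAMPLE_RATE and violates_policy(decision):
        alert_management(decision)
        ai_enabled = False                     # shut the software down
        return "AI shut down, routed to manual process"
    return "handled by AI"
```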

4 Auditability

AI, like all software, is based on mathematics. On paper, this makes it auditable. The AI audit or Compliance department should include employees who are able to read the software’s code and understand why the machine acts as it does. In this respect, artificial employees are more transparent than their human colleagues.
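Reading code is easier when the software also records why it decided the way it did. The sketch below shows one possible building block, a decision log; the rule labels, the log file name, and the `classify_expense` function are invented for illustration.

```python
# Sketch of an auditable decision log: inputs, output, and the rule that
# fired are recorded, so an AI Compliance Officer can later reconstruct
# why the machine acted as it did. The rule set is illustrative.
import json
import time

AUDIT_LOG = "ai_decisions.log"   # hypothetical log file


def log_decision(inputs: dict, output: str, rule: str) -> None:
    record = {"ts": time.time(), "inputs": inputs, "output": output, "rule": rule}
    with open(AUDIT_LOG, "a") as fh:
        fh.write(json.dumps(record) + "\n")


def classify_expense(amount: float) -> str:
    """Hypothetical decision function with a traceable rule per branch."""
    if amount > 500:
        result, rule = "needs approval", "rule_1: amount > 500"
    else:
        result, rule = "auto-approved", "rule_2: amount <= 500"
    log_decision({"amount": amount}, result, rule)
    return result
```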

Understanding human values and attitudes is most of the time like a black box. We know parts of the input, for example communication and training. The company’s different internal controls, for example those related to gifts & hospitality, can check the output; bills and invoices serve as documented behavior. What remains invisible are the processes inside the employee that lead to this behavior. Social and business psychology helps us to assume what may happen inside the individual, but it is not readable like an algorithm.

Even if software is more transparent than the human mind, it should not be underestimated that experienced programmers may find ways to hide and obfuscate code. This makes it difficult to audit or analyze. Again, this underlines why it is imperative that the programmers work based on the company’s values and follow the guidelines.

5 Fairness

As today’s society is diverse, AI should be able to attend to all human counterparts in a fair way. Fairness can be defined as everybody having the same opportunities. At first view, a clear mathematical formula should not violate this, but as the saying often attributed to Albert Einstein goes: “Everybody is a genius. But if you judge a fish by its ability to climb a tree, it will live its whole life believing that it is stupid.” To reach this kind of fairness, empathy is needed. To ensure that an intelligent software has this quality, the responsible group of programmers should live and embody such values. A company must carefully select its employees, and this also means the individuals for the IT department. As is true for all teams, the IT groups should not only include the most talented individuals, but also ensure that the different characters are able to interact as a team. Diverse backgrounds and personalities may help to shape a team out of a group.

Even if the algorithm for decision-making is elaborate, the quality of the results depends on the input of information. The AI Compliance Officer must ensure that the flow of information is adequate for the software. Furthermore, even if the AI itself is strongly protected against hackers, the flow of information is a risk factor: part of it is public (taken, for example, from news portals or social media) and thus a potential target for hackers or even censorship. The AICO must review or even audit the quality of information, and the IT department will be the key contact to discuss potential weaknesses. All involved employees must take ownership of the required and used information.
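What auditing the information flow could look like in practice is sketched below. The required fields, the trusted-source whitelist, and the freshness window are illustrative assumptions, not a prescription.

```python
# Sketch of input-quality checks on the information flow feeding an AI:
# before records from public sources reach the model, they are screened
# for missing fields, untrusted origins, and stale timestamps.
from datetime import datetime, timedelta, timezone

TRUSTED_SOURCES = {"internal_erp", "reuters_feed"}   # hypothetical whitelist
MAX_AGE = timedelta(days=7)                          # illustrative freshness window
REQUIRED_FIELDS = {"source", "timestamp", "payload"}


def check_record(record: dict) -> list[str]:
    """Return a list of quality findings; an empty list means 'pass'.

    `record["timestamp"]` is assumed to be a timezone-aware datetime.
    """
    findings = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        findings.append(f"missing fields: {sorted(missing)}")
        return findings
    if record["source"] not in TRUSTED_SOURCES:
        findings.append(f"untrusted source: {record['source']}")
    age = datetime.now(timezone.utc) - record["timestamp"]
    if age > MAX_AGE:
        findings.append(f"stale data: {age.days} days old")
    return findings
```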

Like today’s Compliance Officer, who is responsible for human employees only, the AICO must establish himself or herself not only as a technical and legal expert, but also as a trusted colleague and advisor. This concept explains why AI can enrich the Ethics & Compliance department, but not replace it. A brave new world is awaiting us!



Patrick Henz is a Head of GRC, futurist, panelist, speaker, and the author of “Business Philosophy according to Enzo Ferrari” and “Tomorrow’s Business Ethics: Philip K. Dick vs. W. Edwards Deming”. You can follow him on Twitter at https://twitter.com/Patrick_Henz.

