New AI Ethics Advisory Board Will Address Challenges

A global AI Ethics Advisory Council aims to meet the need for guidance among organizations that lack the means or capacity to provide ethical oversight of the AI technology they use.

The board will be housed at the Institute for Experiential AI at Northeastern University in Boston, an academic organization that creates AI products that use machine learning as an extension of human intelligence.

Introduced on July 28, the council is made up of 44 experts from multiple disciplines in the AI industry and academia.

Board members will meet twice a year to discuss ethical issues surrounding AI. Some of the members will review requests submitted by organizations that want their products reviewed for ethical guidance.

The new panel is similar to an institutional review board, which is mandated by the federal government in areas such as health care, biomedical research and clinical trials. Since the government does not require organizations to maintain an AI ethics committee, organizations are essentially self-regulating.

A committee like this is a welcome step because currently only large organizations tend to have AI ethics committees, said Kashyap Kompella, analyst at RPA2AI Research.

“Northeastern’s initiative can help democratize access to AI ethics expertise,” he said. However, for a review board like this to be effective, it must be able to change “the what and how of designing, developing, and deploying AI products when the principles of responsible AI are violated.”

Northeastern University’s Institute for Experiential AI set up an AI Ethics Advisory Council on July 28. The council is made up of more than 40 experts from different countries and backgrounds. Source: Institute for Experiential AI

Gaps and Challenges

One of the gaps that can exist in an AI ethics committee like this is between ethics and compliance, said Nigel Duffy, a machine learning and AI engineer and former senior AI executive at the global accounting firm EY.

Many organizations that create and work with AI products and technologies often face a disconnect between the practitioners who use the products for business and the compliance teams that oversee them.

“One of the challenges today is that these two constituencies are not necessarily well connected,” Duffy said. “They don’t have the right skills to talk to each other.”

Bridging the gap between practice and compliance is essential for AI ethics.

Another challenge for an ethics committee like this, Duffy said, is that because it is a third-party group, some companies might not want to engage with it: ethical issues, such as whether an AI system or algorithm is biased toward or against a specific gender, economic group or race, can be sensitive.

Many organizations may want to keep these discussions internal.

A Diverse Group of Experts

Also, although the AI ethics committee has members from different countries and some private companies, it is essentially an academic group.

“A very important potential role they can play is providing a connection to affected communities,” Duffy said, referring to people and groups subject to AI bias or algorithmic discrimination.

Diversity was a key factor in building the board, said board co-chair Ricardo Baeza-Yates, director of research at the Silicon Valley campus of the Institute for Experiential AI in San Jose, California.

“We have gender diversity, we have geographic diversity, we have industrial type diversity,” Baeza-Yates said.

While the council will make recommendations to organizations that raise ethical concerns, organizations can decide not to follow those recommendations, he said.

“The main objective of the council is to have the possibility of [asking] the right questions and getting the right answers. And then they’re on their own,” Baeza-Yates said. However, if the AI technology is risky, the council may decide to publish its recommendations.

What Should Be Audited?

However, not all types of AI technologies need to be audited, said Alan Pelz-Sharpe, analyst and founder of Deep Analysis.

In transactional use cases like reading a number or word from a form or document, the AI will generally not hold any form of bias and therefore there is no need to conduct an audit. The need for an audit comes into play when AI technology makes a decision about someone’s finances, health or freedom, Pelz-Sharpe said.

Moreover, it is difficult to audit AI systems that have been running for years, he added.

“The challenge is that few AI systems are designed to operate unethically; rather, they can be trained or used to break ethical boundaries, often unknowingly,” Pelz-Sharpe said. “Until [a system] does something in operation, it’s hard to know if it’s doing the right thing or the wrong thing.”

To avoid this, there should be transparency about how and why an AI made its decision, he said. However, some complex applications of AI technology, such as neural networks or deep learning, in which the technology acts as a human would, could make transparency difficult, which is why guidelines are needed.

“Ethical guidelines for the design and implementation of AI are, in my view, much needed,” Pelz-Sharpe said. “Clear instructions to guide potential users would be very helpful.”

The AI ethics committee hopes government organizations will pass laws based on some of its work.

“It fills in a place where maybe some government efforts have failed, have not moved quickly enough or have failed outright,” said ethics board member Momin Malik, a senior data science analyst in AI at the Mayo Clinic Center for Digital Health.
