
New mathematical formula unveiled to prevent AI from making unethical decisions

Researchers from the UK and Switzerland have found a mathematical means of helping regulators and businesses police Artificial Intelligence systems’ bias towards making unethical, and potentially very costly and damaging, choices.

The collaborators from the University of Warwick, Imperial College London and EPFL in Lausanne, along with the strategy firm Sciteb Ltd, believe that in an environment in which decisions are increasingly made without human intervention, there is a very strong incentive to know under what circumstances AI systems might adopt an unethical strategy, and to find and reduce that risk, or eliminate it entirely if possible.

Artificial intelligence (AI) is increasingly deployed in commercial situations. Consider, for example, using AI to set the prices of insurance products sold to a particular customer. There are legitimate reasons for setting different prices for different people, but it may also be more profitable for the AI to adopt unethical pricing strategies, choices that can ultimately end up hurting the company.

The AI has a vast number of potential strategies to choose from, but some are unethical and will incur not just a moral cost but a significant potential penalty if regulators levy hefty fines or customers boycott the company – or both.

That’s why these mathematicians and statisticians came together: to help businesses and regulators by creating a new “Unethical Optimization Principle” that provides a simple formula to estimate the impact of AI decisions.

As it stands right now, “Optimization can be expected to choose disproportionately many unethical strategies,” said Professor Robert MacKay of the Mathematics Institute of the University of Warwick.

“The Principle also suggests that it may be necessary to re-think the way AI operates in very large strategy spaces, so that unethical outcomes are explicitly rejected in the optimization/learning process.”
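
The effect MacKay describes can be illustrated with a toy simulation: when even a small minority of strategies carries a modest extra return, a naive optimizer picks them far more often than their share of the strategy space would suggest. The sketch below is purely illustrative and is not the model from the paper; the size of the strategy space, the 1% unethical fraction, the Gaussian returns and the size of the extra return are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

N_STRATEGIES = 50_000   # size of the strategy space (assumed for illustration)
UNETHICAL_FRAC = 0.01   # 1% of strategies are flagged unethical (assumed)
EDGE = 0.5              # small extra expected return for unethical strategies (assumed)
N_TRIALS = 500

picked_unethical = 0
for _ in range(N_TRIALS):
    # Simulated risk-adjusted returns: ethical strategies ~ N(0, 1),
    # unethical strategies get a small positive edge on top.
    unethical = rng.random(N_STRATEGIES) < UNETHICAL_FRAC
    returns = rng.normal(0.0, 1.0, N_STRATEGIES) + EDGE * unethical
    # A naive optimizer simply picks the highest-return strategy,
    # with no ethical constraint in the objective.
    best = np.argmax(returns)
    picked_unethical += unethical[best]

print(f"Unethical share of the strategy space: {UNETHICAL_FRAC:.0%}")
print(f"Share of runs where the optimizer chose an unethical strategy: "
      f"{picked_unethical / N_TRIALS:.0%}")
```

Even in this crude setup, the unethical strategies tend to be chosen several times more often than their 1% share of the space, which is the disproportionate selection the Principle warns about; constraining the objective or explicitly filtering the strategy space is what removes the effect.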

They have laid out the full details in a paper titled “An unethical optimization principle”, published in Royal Society Open Science on Wednesday 1 July 2020.

“Our suggested ‘Unethical Optimization Principle’ can be used to help regulators, compliance staff and others to find problematic strategies that might be hidden,” said MacKay, adding that inspection of these strategies “should show where problems are likely to arise and thus suggest how the AI search algorithm should be modified to avoid them in future.”

(Source: University of Warwick, GNN)
