
AI Weekly: The growing importance of clear AI ethics policies

Controversies like Google’s involvement in the Pentagon’s Project Maven and the sale of Amazon’s Rekognition system to law enforcement make clear the need for AI ethics policies.
A little over a week after the fervor surrounding Google’s involvement in the Department of Defense’s Project Maven, a program that applies machine learning to drone surveillance footage, showed signs of abating, another machine learning controversy returned to the headlines: local law enforcement deploying Amazon’s Rekognition, a computer vision service with facial recognition capabilities.
In a letter addressed to Amazon CEO Jeff Bezos, 19 groups of shareholders expressed concerns that Rekognition’s facial recognition capabilities will be misused in ways that “violate [the] civil and human rights” of “people of color, immigrants, and civil society organizations.” They also warned that it could set the stage for sales of the software to foreign governments and authoritarian regimes.
Amazon, for its part, said in a statement that it will “suspend… customer’s right to use … services [like Rekognition]” if it determines those services are being “abused.” It has so far declined, however, to define the bright-line rules that would trigger a suspension.
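Part of what alarms critics is how little stands between a customer and the capability itself. As a rough illustration (not drawn from any law enforcement deployment), here is a minimal sketch of a face comparison call to Rekognition using the AWS SDK for Python (boto3); the image file names are placeholders, and configured AWS credentials with Rekognition access are assumed:

```python
# Minimal sketch: one-to-one face comparison with Amazon Rekognition via boto3.
# Assumes AWS credentials are configured; "probe.jpg" and "reference.jpg" are
# illustrative placeholders, not real evidence or watchlist images.
import boto3

client = boto3.client("rekognition", region_name="us-east-1")

with open("probe.jpg", "rb") as probe, open("reference.jpg", "rb") as ref:
    response = client.compare_faces(
        SourceImage={"Bytes": probe.read()},
        TargetImage={"Bytes": ref.read()},
        SimilarityThreshold=80,  # only report matches at >= 80% similarity
    )

for match in response["FaceMatches"]:
    print(f"Match at {match['Similarity']:.1f}% similarity")
```

A handful of lines like these, plus an AWS account, is essentially the barrier to entry; the policy questions in the shareholders’ letter begin where the API call ends.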
AI ethics is a nascent field. Consortia and think tanks like the Partnership on AI, Oxford University’s AI Code of Ethics project, Harvard University’s AI Initiative, and AI4All have worked to establish preliminary best practices and guidelines. But Francesca Rossi, IBM’s global leader for AI ethics, believes there’s more to be done.
“Each company should come up with its own principles,” she told VentureBeat in a phone interview. “They should spell out their principles according to the space that they’re in.”
There’s more at stake than government contracts. As AI researchers at tech giants like Google, Microsoft, and IBM turn their attention to health care, the opacity of machine learning algorithms risks alienating the very people who stand to benefit: patients.
People might have misgivings, for example, about systems that forecast a patient’s odds of survival if the systems don’t make clear how they’re drawing their conclusions. (One such AI from the Google Brain team takes a transparent approach, showing which PDF documents, handwritten charts, and other data informed its results.)
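As a generic illustration of that kind of transparency (not the Google Brain system itself), consider a toy linear model whose per-feature contributions can be reported alongside each prediction; the feature names and data here are entirely hypothetical:

```python
# Illustrative sketch: a linear model makes it possible to show *why* a given
# prediction came out the way it did. Feature names and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["age", "blood_pressure", "prior_admissions"]  # hypothetical
X = rng.normal(size=(500, 3))  # synthetic patient records
y = ((X @ np.array([0.8, 1.5, 0.3]) + rng.normal(scale=0.5, size=500)) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Per-feature contribution to the logit for one patient, largest first:
patient = X[0]
contributions = model.coef_[0] * patient
for name, c in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
    print(f"{name}: {c:+.2f}")
```

Deep learning systems need heavier machinery (attention maps, attribution methods) to offer the same accounting, which is precisely why the transparent design of the Google Brain system stands out.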
“There’s a difference between a doctor designing therapy for a patient [with the help of AI] and algorithms that can recognize books,” Rossi explained. “We often don’t even recognize our biases when we’re making decisions, [and] these biases can be injected into the training data sets or into the model.”
Already, opaque policies around data collection have landed some AI researchers in hot water. Last year, the Information Commissioner’s Office, the U.K.’s top privacy watchdog, ruled that the country’s National Health Service improperly shared the records of 1.6 million patients in an AI field trial with Alphabet subsidiary DeepMind.
“Users want to know that… AI has been vetted,” Rossi said. “My vision for AI is an audit process — a very scrupulous process through which somebody credible has looked at the system and analyzed the model and training data.”
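What might one step of such an audit look like in practice? A minimal sketch, assuming held-out predictions and a hypothetical demographic column, is simply to compare a model’s error rates across subgroups rather than reporting a single aggregate number:

```python
# Minimal sketch of one audit step: per-subgroup accuracy on held-out data.
# The group labels, data, and predictions below are hypothetical; a real audit
# would pull these from the deployed system and labeled evaluation data.
import pandas as pd

df = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B"],
    "label":      [1, 0, 1, 1, 0, 1],
    "prediction": [1, 0, 1, 0, 1, 1],
})

# Accuracy per subgroup: disparities become visible instead of averaged away.
per_group = df.groupby("group").apply(lambda g: (g["label"] == g["prediction"]).mean())
print(per_group)  # A: 1.00, B: 0.33 -- a gap that would warrant investigation
```

A scrupulous audit of the kind Rossi describes would go much further, examining how the training data was collected and whether the model’s behavior holds up under distribution shift, but even this simple disaggregation catches failures a single headline accuracy figure hides.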
Equally important is ensuring that legislators, legal experts, and policy groups have a firm understanding of AI’s potential rewards and risks, according to Rossi.
“Policymakers have to be educated about what AI is, the issues that might come up, and possible concerns,” she said. “And for the people who build AI, it’s important to be transparent and clear about what they do.”
For AI coverage, send news tips to Khari Johnson and Kyle Wiggers, and guest post submissions to Cosette Jarrett.
Thanks for reading,
Kyle Wiggers
AI Staff Writer
P.S. Please enjoy this video of a neural network from Nvidia Research that can apply a slow-motion effect to any video:
Salesforce develops natural language processing model that performs 10 tasks at once
Researchers at Salesforce’s Einstein lab have developed a natural language processing technique that can perform ten different kinds of tasks without compromising performance or accuracy.
Read the full story
Yelp’s Popular Dishes AI highlights the food everyone’s talking about
Yelp’s new Popular Dishes feature uses artificial intelligence to highlight the most sought-after menu items in restaurants around the world.
Read the full story
IBM debuts Project Debater, experimental AI that argues with humans
In what may be the biggest rollout of conversational AI from IBM since Watson, IBM Research today debuted Project Debater, an experimental conversational AI with a sense of humor, little tact, and occasionally powerful arguments. Training for Project Debater began six years ago, but the AI system only gained the ability to participate in debates […]
Read the full story
Baidu Research’s breast cancer detection algorithm outperforms human pathologists
Baidu Research today announced it has developed a deep learning algorithm that in initial tests outperforms human pathologists in its ability to identify breast cancer metastasis. The convolutional neural net was trained by splitting 400 large images into grids of tens of thousands of smaller images, then randomly selecting 200,000 of those smaller images. The […]
Read the full story
This Nvidia neural network can apply slow motion to any video
Researchers at Nvidia have developed a machine learning algorithm that can apply slow motion to any video.
Read the full story
AI Weekly: Google’s research center in Ghana won’t be the last AI lab in Africa
This year, we have seen an acceleration of Silicon Valley tech giants opening AI research labs around the world as they seek to gain traction among researchers and fulfill their global ambitions. In the past six months or so, Google brought labs to China and France, Facebook opened labs in Pittsburgh and Seattle, and Microsoft announced plans to open […]
Read the full story
