How important is explainability? Applying clinical trial principles to AI safety testing


Explainability is a growing focus in AI. Existing scientific techniques can help ensure AI is working as intended, and safely.
The use of AI in consumer-facing businesses is on the rise, as is concern over how best to govern the technology over the long term. Pressure to govern AI better is only growing with the Biden administration's recent executive order, which mandated new measurement protocols for the development and use of advanced AI systems.
AI providers and regulators today are highly focused on explainability as a pillar of AI governance, enabling those affected by AI systems to understand and challenge those systems' outcomes, including bias.
While explaining AI is practical for simpler algorithms, like those used to approve car loans, more recent AI systems use complex models that are far harder to explain yet still provide powerful benefits.
OpenAI’s GPT-4 is trained on massive amounts of data, with billions of parameters, and can produce human-like conversations that are revolutionizing entire industries. Similarly, Google DeepMind’s cancer screening models use deep learning methods to build accurate disease detection that can save lives.
The complexity of these models can make it impossible to trace how a given decision was made, and it may not even be meaningful to do so. The question we must ask ourselves is: Should we deprive the world of technologies that are only partially explainable, when we can ensure they bring benefit while limiting harm?
Even US lawmakers who seek to regulate AI are quickly coming to understand the challenges around explainability, revealing the need for a different approach to governing this complex technology: one focused on outcomes, rather than solely on explainability.

Dealing with uncertainty around novel technology isn’t new
The medical science community has long recognized that to avoid harm when developing new therapies, one must first identify what the potential harm might be. To assess the risk of this harm and reduce uncertainty, the randomized controlled trial was developed.
In a randomized controlled trial, also known as a clinical trial, participants are assigned to treatment and control groups. The treatment group is exposed to the medical intervention and the control is not, and the outcomes in both cohorts are observed.
By comparing the two demographically comparable cohorts, causality can be identified — meaning the observed impact is a result of a specific treatment.
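The logic of an RCT can be sketched in a few lines of code. The simulation below is purely illustrative: the participant count, outcome distribution, and treatment effect are invented for the example, not drawn from any real trial. It shows the core mechanic the article describes: random assignment to two cohorts, observation of outcomes in each, and comparison of the group averages to estimate the treatment's effect.

```python
import random
import statistics

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical trial: 200 participants, randomly split into
# treatment and control cohorts of 100 each.
participants = list(range(200))
random.shuffle(participants)
treatment_group = participants[:100]
control_group = participants[100:]

def observe_outcome(received_treatment: bool) -> float:
    """Simulated health-outcome score for one participant.

    Baseline is normally distributed; the (invented) treatment
    shifts the mean outcome up by 1.0.
    """
    baseline = random.gauss(10.0, 2.0)
    return baseline + (1.0 if received_treatment else 0.0)

treatment_outcomes = [observe_outcome(True) for _ in treatment_group]
control_outcomes = [observe_outcome(False) for _ in control_group]

# Because assignment was random, the difference in group means
# estimates the causal effect of the treatment.
effect = (statistics.mean(treatment_outcomes)
          - statistics.mean(control_outcomes))
print(f"Estimated treatment effect: {effect:.2f}")
```

Because the only systematic difference between the two cohorts is the treatment itself, the gap between the group means can be attributed to the intervention rather than to pre-existing differences between participants.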