A “New Nobel” – Computer Scientist Wins $1 Million Artificial Intelligence Prize

Duke professor becomes second recipient of AAAI Squirrel AI Award for groundbreaking socially responsible AI.

Whether it’s preventing electrical grid explosions, spotting patterns in past crimes, or optimizing resources in the care of critically ill patients, Duke University computer scientist Cynthia Rudin wants artificial intelligence (AI) to show its work. Especially when it’s making decisions that deeply affect people’s lives.

While many machine learning scientists focused on improving algorithms, Rudin instead wanted to use AI’s power to help society. She chose to pursue opportunities to apply machine learning techniques to important societal problems, and in the process realized that AI’s potential is best unlocked when people can look inside and understand what it is doing.

Cynthia Rudin, professor of electrical and computer engineering and computer science at Duke University. Credit: Les Todd

Now, after 15 years of advocating for and developing “interpretable” machine learning algorithms that allow humans to see into AI, Rudin’s contributions to the field have earned her the $1 million Squirrel AI Award for Artificial Intelligence for the Benefit of Humanity from the Association for the Advancement of Artificial Intelligence (AAAI). Founded in 1979, AAAI serves as the premier international scientific society serving AI researchers, practitioners, and educators.

Rudin, a professor of computer science and engineering at Duke, is the second recipient of the new annual award, funded by the online education company Squirrel AI to recognize achievements in artificial intelligence in a manner comparable to top prizes in more traditional fields.

She is cited for “pioneering scientific work on interpretable and transparent AI systems in real-world deployments, advocating for these features in highly sensitive areas such as social justice and medical diagnosis, and serving as a role model for researchers and practitioners.”

“Only world-renowned recognitions, such as the Nobel Prize and the A.M. Turing Award from the Association for Computing Machinery, carry monetary rewards at the million-dollar level,” said Yolanda Gil, AAAI Awards Committee Chair and former President. “Professor Rudin’s work highlights the importance of transparency for AI systems in high-risk domains. Her courage in tackling controversial issues highlights the importance of research to address critical challenges in the responsible and ethical use of AI.”

Rudin’s first applied project was a collaboration with Con Edison, the energy company responsible for powering New York City. Her assignment was to use machine learning to predict which manholes were at risk of exploding due to degrading and overloaded electrical circuitry. But she soon found that no matter how many newly published academic bells and whistles she added to her code, it struggled to meaningfully improve performance when confronted with the challenges of working with handwritten notes from dispatchers and accounting records from the time of Thomas Edison.

“We got more accuracy through simple classical statistical techniques and a better understanding of the data as we continued to work with it,” Rudin said. “If we could understand what information the predictive models were using, we could ask Con Edison’s engineers for helpful feedback that improved our entire process. It was the interpretability in the process that helped improve the accuracy of our predictions, not any bigger or fancier machine learning model. I decided to work on that, and it’s the foundation on which my lab is built.”

Over the next decade, Rudin developed techniques for interpretable machine learning, which are predictive models that explain themselves in ways humans can understand. While the code for designing these formulas is complex and sophisticated, the formulas may be small enough to be written on an index card in a few lines.
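To make that concrete, here is a minimal sketch, in Python, of the general shape of such a points-based model. The feature names, point values, and cutoffs below are invented for illustration only; they are not the actual model from Rudin’s lab, which learns which features earn points, and how many, through careful optimization.

```python
# A hypothetical "index card" scoring model: every rule is a plain
# if-statement, so a domain expert can audit the whole model at a glance.
# Features, point values, and cutoffs here are illustrative, not real.

def risk_points(patient: dict) -> int:
    """Sum a handful of small integer points from yes/no features."""
    points = 0
    if patient["prior_seizure"]:
        points += 1
    if patient["brief_rhythmic_discharges"]:
        points += 2
    if patient["epileptiform_discharges"]:
        points += 1
    return points

def risk_category(points: int) -> str:
    # The mapping from points to risk level is a small, auditable lookup.
    return "high" if points >= 3 else "moderate" if points >= 1 else "low"

patient = {
    "prior_seizure": True,
    "brief_rhythmic_discharges": True,
    "epileptiform_discharges": False,
}
pts = risk_points(patient)
print(pts, risk_category(pts))  # prints: 3 high
```

The entire decision procedure fits in a dozen readable lines, which is precisely what lets domain experts question, correct, and ultimately trust it.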

Rudin has applied her brand of interpretable machine learning to numerous high-impact projects. With Massachusetts General Hospital collaborators Brandon Westover and Aaron Struck, and her former student Berk Ustun, she designed a simple points-based system that can predict which patients are most at risk of having destructive seizures after a stroke or other brain injury. And with her former MIT student Tong Wang and the Cambridge Police Department, she developed a model that helps uncover similarities between crimes to determine whether they’re part of a series committed by the same criminals. That open-source program eventually became the basis of the New York Police Department’s Patternizr algorithm, a powerful piece of code that determines whether a new crime committed in the city is related to past crimes.

“Cynthia’s dedication to solving important real-world problems, desire to work closely with domain experts, and ability to distill and explain complex models is unmatched,” said Daniel Wagner, Deputy Superintendent of the Cambridge Police Department. “Her research has made significant contributions to crime analysis and policing. More impressively, she is a strong critic of potentially unjust ‘black box’ models in criminal justice and other high-stakes areas, and an intense advocate for transparent, interpretable models where accurate, just, and bias-free outcomes are essential.”

Black box models are the opposite of Rudin’s transparent codes. The methods applied in these AI algorithms make it impossible for people to understand what factors the models depend on, what data the models focus on, and how they use it. While this may not be a problem for trivial tasks like distinguishing a dog from a cat, it can be a huge problem for high-stakes decisions that change people’s lives.

“Cynthia is changing the landscape of how AI is used in societal applications by redirecting efforts from black box models toward interpretable models, demonstrating that the conventional wisdom – that black boxes tend to be more accurate – is often false,” said Jun Yang, chair of Duke’s computer science department. “This makes it harder to justify subjecting individuals (such as defendants) to black-box models in high-stakes situations. The interpretability of Cynthia’s models has been crucial to their adoption, as they empower, rather than replace, human decision-makers.”

A striking example is COMPAS, an AI algorithm used in multiple states to inform bail and parole decisions, which a ProPublica investigation accused of partially using race as a factor in its calculations. The allegation is difficult to prove, however, as the details of the algorithm are proprietary information, and some key aspects of ProPublica’s analysis are questionable. Rudin’s team has shown that an easily interpretable model that reveals exactly which factors it takes into account can predict just as well whether someone will commit another crime. This raises the question, Rudin says, of why black box models should be used at all for these kinds of high-stakes decisions.

“We have systematically shown that for high-stakes applications, there is no loss of accuracy in gaining interpretability, as long as we optimize our models carefully,” Rudin said. “We’ve seen this in criminal justice decisions, numerous healthcare decisions including medical imaging, power grid maintenance decisions, financial loan decisions, and more. Knowing that this is possible changes the way we think about AI as being incapable of explaining itself.”

Throughout her career, Rudin has not only created these interpretable AI models, but also developed and published techniques to help others do the same. That hasn’t always been easy. When she first started publishing her work, the terms “data science” and “interpretable machine learning” didn’t exist, and there were no categories her research fit neatly into, meaning editors and reviewers didn’t know what to do with it. Rudin found that if a paper neither proved theorems nor claimed its algorithms to be more accurate, it was, and often still is, more difficult to publish.

As Rudin continues to help people and publish her interpretable designs, and as concerns about black box code continue to mount, her influence is finally beginning to turn the ship. There are now entire categories in machine learning journals and conferences devoted to interpretable and applied work. Other colleagues in the field and their collaborators speak openly about how important interpretability is for designing trustworthy AI systems.

“I have had tremendous admiration for Cynthia from very early on, for her spirit of independence, her determination, and her relentless pursuit of true understanding of anything she encountered in classes and papers,” said Ingrid Daubechies, the James B. Duke Distinguished Professor of Mathematics and Electrical and Computer Engineering, one of the world’s foremost researchers in signal processing, and one of Rudin’s PhD advisors at Princeton University. “Even as a graduate student, she was a community builder, standing up for others in her cohort. She even led me into machine learning, an area in which I had no expertise at all before she nudged me, gently but very persistently, into it. I am so very happy for this beautiful and well-deserved recognition for her!”

“I couldn’t be happier to see Cynthia’s work being honored in this way,” added Rudin’s other PhD advisor, Robert Schapire, a partner researcher at Microsoft Research whose work on “boosting” helped lay the groundwork for modern machine learning. “She is being recognized for her inspiring and insightful research, her independent thinking that has led her in directions very different from the mainstream, and her long-standing focus on problems of practical, societal importance.”

Rudin earned a bachelor’s degree in mathematical physics and music theory from the University at Buffalo before completing her doctorate in applied and computational mathematics at Princeton. She then worked as a National Science Foundation postdoctoral researcher at New York University and as an associate research scientist at Columbia University. She became an associate professor of statistics at the Massachusetts Institute of Technology before joining Duke’s faculty in 2017, where she holds positions in computer science, electrical and computer engineering, biostatistics and bioinformatics, and statistical sciences.

She has been a three-time recipient of the INFORMS Innovative Applications in Analytics Award, which recognizes creative and unique applications of analytical techniques, and is a Fellow of the American Statistical Association and the Institute of Mathematical Statistics.

“I want to thank AAAI and Squirrel AI for creating this award, which I know will be a game-changer for the field,” Rudin said. “Having a ‘Nobel Prize’ for AI to help society finally makes it clear beyond a doubt that this topic – AI for the benefit of society – is truly important.”
