I came to Stanford because it is among the strongest universities in my areas of interest: machine learning and AI. This fall quarter, I am teaching CS329H: Machine Learning from Human Preferences, and in the spring quarter, I will be teaching CS221, which covers all aspects of AI.
I lead the Stanford Trustworthy AI Research (STAIR) group, which studies how to ensure privacy, robustness, and fairness in human-facing applications of machine learning and AI. I’m part of the Stanford Center for AI Safety (SAFE), where my research focuses on adversarial and robustness vulnerabilities and on the machine-learning systems and algorithmic tools that can help address them. I’m also part of the Stanford Institute for Human-Centered AI (Stanford HAI), where I work to advance our understanding of the role humans play in the design, development, and deployment of AI, and to mitigate the possible adverse outcomes of AI systems.
The applications I work in most are healthcare and neuroscience, areas that bring to the fore some of the most challenging questions of ensuring trustworthiness in machine learning systems. For example, I’m leading an ongoing project on equity in breast cancer screening, funded by a grant from the National Science Foundation. There are significant gaps in health outcomes correlated with race and socioeconomic status, and we’d like to answer two questions: Can we build algorithmic tools that make breast cancer screening more accurate than the current standard of care? And can we combine those tools with interventions that directly address equity in outcomes? We hope to build tools that improve not only breast cancer screening itself but also the equity of downstream outcomes once these tools are deployed.
More broadly, my work considers algorithmic fairness, including how biases in training data lead models to make predictions that are systematically different across groups. We’re building models that work across various hospitals serving very different demographics, from rural areas to cities. And because we’re dealing with healthcare data, there’s an obvious privacy concern, so we’re building these tools in such a way that hospitals don’t have to share their data directly with us.
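The piece doesn’t name a specific method, but one standard way to train a shared model across hospitals without pooling raw patient data is federated averaging: each site trains locally on its own records, and only model parameters, never the data itself, are sent to a central server and averaged. A minimal sketch, using a toy one-dimensional linear model and two simulated hospitals (all names and numbers here are illustrative, not from the project):

```python
# Hypothetical sketch of federated averaging for a linear model y ~ w * x.
# Each hospital runs local gradient descent on its own data; only the
# resulting weight w (never the data) is shared with the server.

def local_update(w, data, lr=0.05):
    """One local pass of gradient descent, run entirely inside a hospital."""
    for x, y in data:
        grad = 2 * (w * x - y) * x  # gradient of squared error
        w -= lr * grad
    return w

def federated_average(site_weights, site_sizes):
    """Server step: average site models, weighted by dataset size."""
    total = sum(site_sizes)
    return sum(w * n for w, n in zip(site_weights, site_sizes)) / total

# Two simulated hospitals; each dataset is roughly y = 2x plus noise.
hospital_a = [(1.0, 2.0), (2.0, 4.1)]
hospital_b = [(1.5, 3.1), (3.0, 5.9)]

w_global = 0.0
for _ in range(50):  # communication rounds
    w_a = local_update(w_global, hospital_a)
    w_b = local_update(w_global, hospital_b)
    w_global = federated_average(
        [w_a, w_b], [len(hospital_a), len(hospital_b)]
    )
# w_global converges near the underlying slope of 2.
```

The privacy benefit is structural: the server only ever sees model weights, so no single party has to hold all the hospitals’ records at once.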
Representation in research shapes how algorithmic tools are built, which questions get asked, and where these tools are applied. Since 2021, I’ve been president of Black in AI, which works to close representation gaps for Black people in the field, and we’ve done so quite successfully. One of the most important things we do is mentorship: we’ve helped people through application processes, leading to hundreds of admissions a year across institutions worldwide. We’re now among the largest organizations in this area, with roughly 5,000 members worldwide, and our goal is to keep building our community.