I studied architecture as an undergrad; I have master’s degrees in both computer science and integrated design and management; and I worked in experience and interaction design, as well as product design, before starting my PhD at Stanford. In some ways, computer science is another tool in my design toolbox – understanding the latest technology helps me find the right solutions for the problems I’m trying to address.
Design enables us to reach beyond what is currently possible – to consider and test speculative interventions with potential users and then imagine how to build them. This can help us find unexpected, creative solutions that truly meet people’s needs. One of the things that struck me when I started working in engineering was how easy it is to be removed from the real-life applications of our work. In the design world, you’re often creating something that the end user will interact with directly. But in computer science, we frequently develop technology in isolation – it’s easier to feel that it’s not our responsibility to consider all the potential consequences of the things we create. A lot of good can come out of new technologies, but a lot of ethical implications go hand in hand with them.
Take, for example, my research. I’m working on identifying potential physiological signals correlated with attention-deficit/hyperactivity disorder (ADHD) and providing real-time feedback on the relevant changes throughout the day. The goal is to give people with ADHD more insight into what might be going on with their body at a given moment and help them understand how it could affect the way they perceive the world around them. But while I’m working on developing this, I need to think about how it could be misused. Some physiological signals may also give insight into someone’s mood or interest level. How do I make sure that their data is protected or only used for the intended purpose? How do I help users understand what they’re signing up for and give them control over it? If my system were used as a diagnostic tool, could it be used to further stigmatize people with ADHD? And what are the societal implications of widespread diagnosis of ADHD? Is that even something we want?
I’ve been fortunate at Stanford to find a community of people interested in thinking about these issues and incorporating them into their work. I’ve found a second family in the weekly Computing & Society reading group, where people from all over campus come together to read about and discuss ethics in computer science research. I’ve also started a Research Studio, a space where people can workshop the ethical and societal implications of their research over lunch.
I’m also working on a project to develop methods for anticipating and communicating the potential consequences of emerging technologies. We’re trying to build a framework that helps product and engineering teams bring people with varying levels of expertise into the development process, so they can create solutions collaboratively. Ultimately, even in diverse settings, we represent only a fraction of the backgrounds and experiences out there. To design a future that works for everyone, we need to include as many people as possible in these conversations.