
How the Computer Science Department is teaching ethics to its students

Faculty are advancing a number of new and existing courses, and they’re poised to develop and “embed” moral problem-solving exercises into the computer science curriculum.

Mehran Sahami sees ethics through the lens of a professor who helps future engineers develop a sophisticated understanding of the subtle ways in which computer code can influence the law and other governmental, economic and cultural systems that organize an increasingly computerized society.

“Whether it’s setting bail or placing ads on social media platforms, an algorithm is working in the background, which is why justice, equity and other social concerns must be central to computer science rather than afterthoughts or omissions,” said Sahami.

As associate chair for education in Stanford’s Department of Computer Science, where the curriculum is one of his central concerns, Sahami is particularly interested in ensuring that students appreciate the ethical considerations of their technical work. “Stanford has a great engineering school inside a great university, and this has always allowed us to infuse our technical training with a liberal arts sensitivity,” he said.

Currently, all CS majors within the School of Engineering must take at least one Technology in Society course, many of which focus on ethical issues arising from the interplay of engineering, technology and society. But the department is now offering a number of new courses and initiatives that its faculty hope will further integrate an understanding of ethical values with the technological depth of the field.

One example of this interdisciplinary fusion is CS 182, Ethics, Public Policy and Technological Change, which Sahami co-teaches with two political science professors from the School of Humanities and Sciences, Rob Reich, who also directs Stanford’s McCoy Family Center for Ethics in Society, and Jeremy Weinstein, who served in several positions in the Obama administration, including deputy to the U.S. Ambassador to the United Nations.

The course, which will be taught for the third time this winter quarter, was designed to help would-be computer scientists understand how making programs more efficient and cost effective might involve tradeoffs with values like privacy or national security. In one exercise, students were asked to write a fictitious memo to help Stanford officials determine whether the university should shift to a fully autonomous fleet of vehicles on campus, a hypothetical scenario that required them to consider the impacts of new technologies. “Who weighs these values and how is a critical question of governance, politics and power,” Weinstein said in an earlier story about this jointly taught course.

Among other courses focused on ethical issues, Sahami points to CS 81SI, AI Interpretability and Fairness, a new course taught for the first time in spring 2020. Led by computer scientist Omer Reingold and James Zou, an assistant professor of biomedical data science in the medical school, and taught with the help of rising CS senior Eva Zhang, the course helps students understand how AI algorithms can have subtle and unintended effects that must be anticipated and mitigated on a case-by-case basis.

In another new course last spring, CS 384, Ethical and Social Issues in Natural Language Processing, students studied how gender and racial bias can be perpetuated by poorly framed algorithms, and how language processing can instead be used in a positive way to help address ethical and social problems like toxic speech and online propaganda. The course was taught by professor Dan Jurafsky, who holds dual appointments in computer science and linguistics, bridging Stanford’s schools of Engineering and Humanities and Sciences.

Sahami said the CS department has also started to include more options and requirements related to ethics in its undergraduate degree programs. For example, students may now choose courses that combine computing and the humanities to fulfill CS program requirements. Among them are Ethical Theory in the philosophy department, Contemporary Moral Problems in the political science department and The Politics of Algorithms in the communication department; these can now count toward a CS major’s depth courses, the advanced classes required to complete the degree.

Similarly, the master’s program added a requirement on “Computing and Society,” which gives students the option to choose from a broad range of courses discussing ethics and computing from both CS and other departments. Jerry Cain, director of undergraduate studies in the CS department, and Reingold, who serves as director of the master’s program, helped drive these initiatives. The ultimate goal, said Sahami, is to help students at all levels understand how their work in computing has social, political and ethical ramifications.

Computer Science Department Chair John Mitchell traces the roots of CS ethics courses back to 1986, when now-emeritus professor Terry Winograd and Helen Nissenbaum, currently a professor at Cornell Tech, started the department’s first such course. Ever since, a succession of CS faculty have woven such training into the curriculum through courses like the successful and frequently updated CS 181/181W, Computers, Ethics and Public Policy, taught in recent years by computer science professor Keith Winstein, philosophy professor Ray Briggs and research engineer Allison Berke of the Stanford Cyber Policy Center.

“Helping students think about the future impact of new technologies in a thoughtful and ethical way is an imperative that is only growing stronger,” Mitchell said.

The next step in infusing the computer science curriculum with ethical thinking is an initiative called “Embedded EthiCS,” in which modules examining ethical issues will be woven into a number of existing CS courses. Sahami said Stanford joins Harvard and other universities in capitalizing the “CS” in the name to put additional emphasis on computer science and to signal that this is an imperative across academia. The Stanford version of Embedded EthiCS has arisen through a partnership of the Institute for Human-Centered AI, the Department of Computer Science and the Center for Ethics in Society. The idea is to weave moral problem-solving exercises into the core courses that computer science majors must take to graduate.

“The issues will vary from class to class, but if a class touches on facial recognition, for instance, instructors and students may also consider how the technology affects digital privacy, bias, fairness and other values,” Sahami said. The effort to develop these embedded modules is now getting underway under the direction of Reich and Sahami, with the first set planned to roll out in CS courses this coming academic year.

In preparation for launching the new modules, Stanford is recruiting postdoctoral scholars in philosophy to help computer science professors and graduate students design and teach modules in core classes. The aim is for students to consider the ethical consequences of a technology as readily as they now factor in cost, efficiency and usability.

“Embedded EthiCS is the natural progression of what Stanford has been doing for 50 years,” Sahami said.