Bridging philosophy and AI to explore computing ethics

During a meeting of class 6.C40/24.C40 (Ethics of Computing), Professor Armando Solar-Lezama poses the same impossible question to his students that he often asks himself in the research he leads with the Computer Assisted Programming Group at MIT:

“How do we make sure that a machine does what we want, and only what we want?”

At this moment, in what some consider the golden age of generative AI, this may seem like an urgent new question. But Solar-Lezama, the Distinguished Professor of Computing at MIT, is quick to point out that this struggle is as old as humankind itself.

He begins to retell the Greek myth of King Midas, the monarch who was granted the godlike power to transform anything he touched into solid gold. Predictably, the wish backfired when Midas accidentally turned everyone he loved into gilded stone.

“Be careful what you ask for because it might be granted in ways you don’t expect,” he says, cautioning his students, many of them aspiring mathematicians and programmers.

Digging into MIT archives to share slides of grainy black-and-white photographs, he narrates the history of programming, from the 1970s Pygmalion machine that required incredibly detailed cues to the late-1990s computer software that took teams of engineers years and an 800-page document to program.

While remarkable in their time, these processes took too long to reach users. They left no room for spontaneous discovery, play, and innovation.

Solar-Lezama talks about the risks of building modern machines that don’t always respect a programmer’s cues or red lines, and that are as capable of exacting harm as of saving lives.

Titus Roesler, a senior majoring in electrical engineering, nods knowingly. Roesler is writing his final paper on the ethics of autonomous vehicles, weighing who is morally responsible when one hypothetically hits and kills a pedestrian. His argument questions the assumptions underlying technical advances, considers multiple valid viewpoints, and leans on the philosophical theory of utilitarianism. Roesler explains, “Roughly, according to utilitarianism, the moral thing to do brings about the most good for the greatest number of people.”

MIT philosopher Brad Skow, with whom Solar-Lezama developed and is team-teaching the course, leans forward and takes notes.

A class that demands technical and philosophical expertise

Ethics of Computing, offered for the first time in Fall 2024, was created through the Common Ground for Computing Education, an initiative of the MIT Schwarzman College of Computing that brings multiple departments together to develop and teach new courses and launch new programs that blend computing with other disciplines.

The instructors alternate lecture days. Skow, the Laurance S. Rockefeller Professor of Philosophy, brings his discipline’s lens for examining the broader implications of today’s ethical issues, while Solar-Lezama, who is also the associate director and chief operating officer of MIT’s Computer Science and Artificial Intelligence Laboratory, offers perspective through his.

Skow and Solar-Lezama attend one another’s lectures and adjust their follow-up class sessions in response. Learning from one another in real time has made for more dynamic and responsive class conversations. Recitations led by graduate students from philosophy or computer science break down the week’s topic and round out the course with lively discussion.

“An outsider might think that this is going to be a class that will make sure that these new computer programmers being sent into the world by MIT always do the right thing,” Skow says. However, the class is intentionally designed to teach students a different skill set.

Determined to create an impactful semester-long course that did more than lecture students about right or wrong, philosophy professor Caspar Hare conceived the idea for Ethics of Computing in his role as an associate dean of the Social and Ethical Responsibilities of Computing. He recruited Skow and Solar-Lezama as the lead instructors because he knew they could do something more profound than that.

“Thinking deeply about the questions that come up in this class requires both technical and philosophical expertise. There aren’t other classes at MIT that place both side-by-side,” Skow says.

That’s exactly what drew senior Alek Westover to enroll. The math and computer science double major explains, “A lot of people are talking about how the trajectory of AI will look in five years. I thought it was important to take a class that will help me think more about that.”

Westover says he’s drawn to philosophy because of an interest in ethics and a desire to distinguish right from wrong. In math classes, he’s learned to write down a problem statement and receive instant clarity on whether he’s successfully solved it or not. However, in Ethics of Computing, he has learned how to make written arguments for “tricky philosophical questions” that may not have a single correct answer.

For example, “One problem we could be concerned about is, what happens if we build powerful AI agents that can do any job a human can do?” Westover asks. “If we are interacting with these AIs to that degree, should we be paying them a salary? How much should we care about what they want?”

There’s no easy answer, and Westover assumes he’ll encounter many other dilemmas in the workplace in the future.

“So, is the internet destroying the world?”

The semester began with a deep dive into AI risk, or the notion of “whether AI poses an existential risk to humanity,” unpacking free will, the science of how our brains make decisions under uncertainty, and debates over the long-term liabilities and regulation of AI. A second, longer unit zeroed in on “the internet, the World Wide Web, and the social impact of technical decisions.” The end of the term looks at privacy, bias, and free speech.

One class topic was devoted to provocatively asking: “So, is the internet destroying the world?”

Senior Caitlin Ogoe is majoring in Course 6-9 (Computation and Cognition). Being in an environment where she can examine these types of issues is precisely why the self-described “technology skeptic” enrolled in the course.

Growing up with a mom who is hearing impaired and a little sister with a developmental disability, Ogoe became the default family member whose role it was to call providers for tech support or program iPhones. She leveraged her skills into a part-time job fixing cell phones, which paved the way for her to develop a deep interest in computation, and a path to MIT. However, a prestigious summer fellowship in her first year made her question the ethics of how the technology she was helping to program affected consumers.

“Everything I’ve done with technology is from the perspective of people, education, and personal connection,” Ogoe says. “This is a niche that I love. Taking humanities classes around public policy, technology, and culture is one of my big passions, but this is the first course I’ve taken that also involves a philosophy professor.”

The following week, Skow lectures on the role of bias in AI, and Ogoe, who is entering the workforce next year but plans eventually to attend law school to focus on regulating related issues, raises her hand four times to ask questions or share counterpoints.

Skow digs into COMPAS, controversial AI software that uses an algorithm to predict the likelihood that people accused of crimes will go on to re-offend. According to a 2016 ProPublica investigation, COMPAS was likely to flag Black defendants as future criminals and gave false positives at twice the rate it did for white defendants.

The class session is dedicated to determining whether the article warrants the conclusion that the COMPAS system is biased and should be discontinued. To do so, Skow introduces two different theories on fairness:

“Substantive fairness is the idea that a particular outcome might be fair or unfair,” he explains. “Procedural fairness is about whether the procedure by which an outcome is produced is fair.” A variety of conflicting criteria of fairness are then introduced, and the class discusses which are plausible and what conclusions they warrant about the COMPAS system.
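
To make the statistic concrete, here is a minimal sketch in Python of how such a disparity is measured. The group names and counts below are hypothetical, invented purely to illustrate the computation; they are not ProPublica’s actual data, and equal false positive rates is only one of the competing fairness criteria the class debates.

```python
# A minimal sketch of the kind of check ProPublica ran: compare false
# positive rates across groups. All counts are hypothetical, chosen only
# to illustrate the computation -- not COMPAS's actual data.

def false_positive_rate(flagged: int, not_flagged: int) -> float:
    """Among people who did NOT re-offend, the fraction wrongly flagged high-risk."""
    return flagged / (flagged + not_flagged)

# Hypothetical outcomes among defendants who did not re-offend:
groups = {
    "group_a": {"flagged_high_risk": 40, "flagged_low_risk": 60},
    "group_b": {"flagged_high_risk": 20, "flagged_low_risk": 80},
}

for name, counts in groups.items():
    fpr = false_positive_rate(counts["flagged_high_risk"],
                              counts["flagged_low_risk"])
    print(f"{name}: false positive rate = {fpr:.0%}")

# group_a's rate (40%) is twice group_b's (20%) -- the shape of the
# disparity the investigation alleged, and one criterion ("equal false
# positive rates") by which a system's fairness can be judged.
```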

Later on, the two professors go upstairs to Solar-Lezama’s office to debrief on how the day’s exercise went.

“Who knows?” says Solar-Lezama. “Maybe five years from now, everybody will laugh at how people were worried about the existential risk of AI. But one of the themes I see running through this class is learning to approach these debates beyond media discourse and getting to the bottom of thinking rigorously about these issues.” 
