Perspectives On Artificial Intelligence

By: Eric Sun

Artificial intelligence (AI) is a hot commodity in the modern world. Machines are now capable of reading and transcribing books, recognizing speech, analyzing big data, playing chess and Go at superhuman levels, and identifying objects through computer vision. Corporate giants like Google, Intel, and Amazon have poured hundreds of millions of dollars into AI research. Research centers and universities have made their own contributions to developing AI. However, some concerns still loom large: Is AI ethical? What are the dangers associated with “intelligent” machines? What kind of trajectory is the research following? I discuss these concerns alongside general artificial intelligence with two members of the Harvard University faculty: Dr. Venkatesh Murthy, a neuroscientist, and Dr. Barbara Grosz, a computer scientist.

Dr. Venkatesh Murthy is a professor of Molecular and Cellular Biology at Harvard. He specializes in neuroscience with an emphasis on information processing and adaptation in neural circuits. He has made significant contributions to the understanding of neural pathways in the olfactory system. Murthy is also interested in artificial intelligence research and teaches a freshman seminar on artificial and natural intelligence.

Dr. Barbara Grosz is the Higgins Professor of Natural Sciences at Harvard. She has made significant contributions to research in natural language processing and multi-agent systems and is currently conducting research in teamwork and collaboration. She teaches the course Intelligent Systems: Design and Ethical Challenges.

What was your path to academia like?

MURTHY: I started out as an engineering undergraduate student in India, but became interested in combining physical sciences and biology for my graduate work in the US. I learned about neural network research and AI during the end of my master’s degree in bioengineering and decided to pursue a PhD in neuroscience with some tangential work on neural networks. For my postdoctoral and early faculty work, I ended up doing purely experimental neuroscience, but recently I’ve rekindled my interest in computational neuroscience.

GROSZ: It was an unusual path. I started out as an undergraduate studying mathematics: there were no undergraduate computer science majors. I was, though, able to take a few courses in computer science, and then went to graduate school in computer science. In graduate school, I focused initially on numerical analysis and then I did some work in theoretical computer science.

Then, thanks to a part-time job at Xerox PARC and a conversation with Alan Kay, I began working in natural-language processing.

At the time, there were many people working on syntax and some on semantics. I was young, brave, and foolish and took on the challenge of building the first computational model of discourse as part of a speech processing project at SRI International. Later, I co-founded the Center on the Study of Language and Information at Stanford, and subsequently went to an academic position at Harvard. I always tell undergraduate students that you don’t need to know what you want to do in your first years of college, but can change paths many times.

What are your research interests?

MURTHY: I work in neuroscience and have an interest in artificial intelligence, but I’m not sure if I would pursue the mathematical, theoretical research in artificial intelligence. But I am very interested in understanding neural pathways in animals through neural networks and seeing if any similarities [to AI systems] exist.

GROSZ: Currently, my research focuses on modeling teamwork and collaboration in computer systems. Prior to this, I worked [for] many years on problems in dialogue processing. Dialogue participation requires more than simply understanding words and sentences; you need to understand the purposes and intentions of the people communicating with one another. This insight led to my working on modeling teamwork and collaboration. In the late 1980s and 1990s, I developed, with some colleagues, the first computational model of collaborative planning. While dialogue is between two people, teamwork often involves a larger number of people. It is much simpler to model the plans of an individual than to model the plans of a group, because teamwork and collaboration are not simply the sum of individual plans. The challenges teamwork raises include such questions as, what information has to be shared for teamwork to succeed? What compels individuals to participate in teamwork? How do you know what each component part is doing? Now, the focus of my research is on using these models of teamwork to build computer systems that improve healthcare coordination, which requires handling what we call loosely coupled teamwork. This work can be applied generally to many situations, including physician-physician networks.

How did you become interested in AI?

MURTHY: As I said, I was studying engineering but, for me, it was not very interesting… I read the books [on artificial intelligence] that everyone reads—Minsky’s Society of Mind and a few others. They presented lots of different ideas about artificial intelligence that were fascinating at that time.

GROSZ: When I was looking for a thesis topic, Alan Kay, the person who invented Smalltalk, suggested that I [try] taking a children’s story and retelling it from the viewpoint of a side character. It turns out that this is a very difficult task. Instead, I did some work with Alan Kay on Smalltalk and then went to SRI International to work on speech understanding research in the 1970s. For years, I worked on dialogue research and from there, I got interested in collaboration and teamwork, which is a subject that I have pursued for some time now.

What is AI?

MURTHY: That is a very good question. For me, AI has a very different meaning than it might have for some others. I feel that if you have one machine that is very good at one intelligent task, then that is artificial intelligence. For example, if a program can recognize faces very well or better than humans, then that would be artificial intelligence. Or if it could predict behaviors, or if it could discern sounds… Artificial intelligence can just be one very intelligent feature instead of a unit with lots of different complex functions.

GROSZ: AI was, until very recently, purely an academic field of study with two complementary goals. People in the field generally pursued one or the other: using scientific approaches to model intelligent behavior in humans, or developing methods to make computer systems more “intelligent”. Recently, the field has evolved beyond a purely academic discipline, and we are beginning to see many more real-world applications through corporate efforts. It is no longer purely academic, which is great.

Which aspects of AI do you find to be the most promising?

MURTHY: I find the application of artificial intelligence to predicting behaviors in consumer markets very fascinating. Companies like Facebook, Google, and Amazon have amazing capabilities in predicting what a person may be interested in next… Even the behavior of a quirky person, or at least a person who thinks that they are quirky, may be predicted by artificial intelligence sometime in the future. I feel that this is a real possibility.

GROSZ: I think that almost every aspect of AI has promise—promise for changing our lives, promise for affecting what we can do in the world. This ranges all the way from sensory processing—sensing for security, robotics, helping people with hearing disabilities—to complex tasks. Language processing has the potential to help people around the world communicate through translation, and it enables increasing web access. Multi-agent systems methods support teamwork and help people in many ways, from planning and plan recognition to coaching victims of stroke and protecting wildlife areas from poaching. Machine learning is currently big in the news, with advances especially in the area of sensory processing. It’s an exciting time for AI.

What are the most important challenges that face AI today?

MURTHY: Right now, we have a lot of data and most people are interested in performing some form of statistical or mathematical analysis of the data—clustering, finding patterns. I’m not sure if we will be able to evolve from that framework… We might become very good at analyzing data but it may take a very long time before we are able to create real intelligence that understands the underlying causes of the observed data.

GROSZ: Two major challenges. One is how to incorporate artificial intelligence into computer systems in a way that they work well with people. Contrary to science fiction, where computer systems are portrayed as exact replicas of human intelligence, machines don’t need to be exact replicas of human intelligence; they can complement it. Instead, we need to assess where human intelligence falls short and develop computer systems that fill in those gaps and work closely with humans. The other challenge is to build systems that are capable of explaining their decisions to humans. This is not a simple challenge. For example, it is [difficult] for systems using deep neural networks to explain their results. These issues will need to be addressed in the future.

Where do you envision AI to be in 20 years?

MURTHY: Predictions are always very difficult to make and not very accurate. I would imagine that much would be the same as today but more advanced… Machine learning would still be used in research and there would likely be more advanced pattern recognition in artificial intelligence for recognizing items (visual, auditory or other) and predicting consumer behavior.

GROSZ: There is a project at Stanford called the One Hundred Year Study on Artificial Intelligence, colloquially known as AI100, which started about two years ago. Every five years, the Standing Committee for this project, which I currently chair, brings together a group of 15-20 experts in AI, along with social scientists and policymakers, to assess the field (in light of prior reports) and predict where it will be in 10-15 years. Our first study panel just issued a report on AI and life in 2030, which I highly recommend.

Any closing remarks?

MURTHY: Artificial intelligence is a very interesting field… I would recommend that more people become involved or learn more about AI because it is one of the technologies that is driving our future.

GROSZ: I teach a course about intelligent systems, design, and ethical challenges. I think that anyone interested in artificial intelligence or design should also be aware of the limitations of the technology and the ethical questions and design challenges that these limitations lead to.

Eric Sun ‘20 is a freshman in Hollis Hall.
