Far from being an isolated field, artificial intelligence is a tool—one that is already improving the safety and efficacy of medicine, the appropriateness of internet search results, and even the quality of the sound from your speakers.
Indeed, as thought leaders of all kinds explained at the kickoff event for Northeastern University’s Institute for Experiential Artificial Intelligence, companies must integrate information and innovation built on insights from artificial intelligence in order to stay relevant.
“It’s important not to think about AI as an island unto itself, in which case it would just be a wasted opportunity,” said David Roux, co-founder and co-managing partner of BayPine Capital, who, along with his wife, Barbara, is a founding benefactor of the Roux Institute at Northeastern University in Portland, Maine. “All of the most interesting things are happening at the interface of AI and other subject matter.”
Roux joined Joseph E. Aoun, president of Northeastern, for a fireside chat-style conversation that opened a daylong event to launch the Institute for Experiential Artificial Intelligence. They were joined by a number of chief executives, researchers, founders, directors, and disruptors who explored the future of AI. A series of panels and discussions brought together leaders from a variety of industries in person on Northeastern’s Boston campus, along with a global audience online. The event also included an AI job fair and student research presentations across the Boston campus.
A common theme among researchers and industry leaders was the critical importance of weaving together the information and computational power afforded by AI with the insights and creativity of humans.
It’s a point that Aoun addressed early in the event, and one that serves as a core pillar for the institute and the university writ large.
“Higher education is changing the way it’s looking at AI, led by our colleagues here. You cannot do AI unless you bring in the human dimensions. We are changing—we have changed—the whole approach with our curricula: We want every student coming out of Northeastern to have [fluency in] what we call humanics,” Aoun said, referring to the combination of technological, data, and human literacies—a concept he championed in his book, Robot-Proof: Higher Education in the Age of Artificial Intelligence.
Again and again throughout the day, industry leaders highlighted the need to combine human decision-making with artificial intelligence. Lila Snyder, chief executive officer of Bose, explained that the company relies upon human discernment to determine when its headphones should automatically activate noise-cancellation. Don Peppers, author and customer experience expert, said that companies must rely upon decisions made by humans and machines to provide quality experiences for their customers—a list of suggestions for your next movie is only as good as the team that curates it, after all. Mike Culhane, group chief executive officer at Pepper Money Group, said that his company augments on-the-ground cultural literacy from humans with insight from AI to find and market to new customers.
“We fundamentally believe experiential AI is about human-centric AI that blends together the best of what humans can do that machines can’t—common-sense reasoning, intuitive decision-making—with the best of what machines can do that humans can’t—[crunching] through datasets,” said Usama Fayyad, the institute’s executive director.
The Institute for Experiential AI sits squarely within this critical context. With 90 faculty whose scholarship spans all nine schools and colleges at Northeastern, the institute is highly interdisciplinary. During a panel midway through the day, deans from each of the schools highlighted how their disciplines benefit from AI and vice versa.
For highly technical colleges such as the College of Science and the Khoury College of Computer Sciences, that symbiotic relationship may be obvious. But, as university officials—and all of the speakers on Wednesday—noted, AI infuses almost every aspect of modern life.
“We are awash in data for every sphere of human activity,” said David Madigan, provost at Northeastern. “It’s a complete game-changer; there’s nothing that isn’t touched by its revolution.”
Take libraries, for example. Dan Cohen, dean of libraries and vice provost for information collaboration, suggested that the vast historic archives housed in libraries can be “a very challenging playground” for architects of artificial intelligence.
“Extracting text from handwritten documents continues to be a challenge,” he said. “We know how to search the web, but it’s much harder to search through library collections that are in hundreds of languages.”
Take, for another example, arts and design. Elizabeth Hudson, dean of the College of Arts, Media and Design, described the research that faculty do to “promote ethical, critical, responsible AI in practice.”
“Overall, what we’re doing is establishing AI as a collaborator rather than a human competitor or a replacement,” she said.
Some of the liveliest conversation stemmed from exactly the issue that Hudson and her colleagues grapple with: whether AI is trustworthy, and how to build ethical artificially intelligent systems.
“I would argue that it’s dangerous to ask whether we trust AI. The better questions to consider are: How can we make sure that the individuals who are building these systems take responsibility, and how can we put in place systems of accountability?” asked Cansu Canca, founder and director of the AI Ethics Lab and the ethics lead at the Institute for Experiential AI.
Artificial intelligence is becoming more and more integrated into life-changing decision-making: Algorithms inform judicial sentencing, crime forecasting, and applications for credit cards or bank accounts, among many other decisions.
Tina Eliassi-Rad, a computer science professor at Northeastern and one of the panelists in a discussion about trustworthy AI, said that making those decision-making processes trustworthy starts with creating trustworthy data and training ethical engineers.
“Trustworthy AI is human-centered AI,” she said. “It’s important because we live in societies that are infused with algorithms.”
This post was originally published on News @ Northeastern by Molly Callahan.