AI

Approaching artificial intelligence with humanity in mind

December 20, 2023 | By Ben Fox Rubin

Bill Boulding, dean of Duke University’s Fuqua School of Business, says anxiety about artificial intelligence replacing human judgment isn’t new — but ChatGPT’s release last year accelerated those concerns.

“That hype went to a whole new level when ChatGPT debuted to the public,” Boulding said at an AI summit at Mastercard’s New York City Tech Hub last week. “However, with attention focused on both the opportunities and potential pitfalls of AI, now is the time business can really get it right by thinking through how this technology should enhance human productivity, while emphasizing the role of human judgment in developing responsible uses.”

Boulding and other academic and business leaders came together for a conference called “Leading with AI: Shaping a Human-Centric Future.” There they discussed AI’s impact on education, technology development and the workplace.

In the coming years, AI is expected to keep developing at a rapid pace, bringing potentially huge economic growth and many more innovations, but also risks, like more targeted cybersecurity attacks and misinformation spreading through AI models. That balance of the good and bad of AI is what businesses, universities and governments are all struggling with today — and that challenge was front and center throughout the conference.

Sharmla Chetty, left, the CEO of Duke Corporate Education, with Fran Katsoudas, center, the executive vice president and chief people, policy & purpose officer at Cisco, and Mastercard Chief Data Officer Andrew Reiskind at a fireside chat on the promise and peril of AI. (Photo credit: Arsalan Danish)

Several speakers at the event — hosted by Duke Corporate Education and sponsored by Mastercard and Fortune — talked about the importance of presenting AI as a tool for workers, not their replacement. With concerns about AI potentially causing mass job losses, they encouraged businesses and schools to focus on educating people about AI and its value to workers, customers and the public.

“Information is a good way to fight fear,” said Andrew Reiskind, Mastercard’s chief data officer.

Here are a handful of takeaways from the discussions.

AI: What is it good for?

Many at the event said that when using AI, companies need to have a clarity of purpose — in other words, they need to ask themselves why they’re using this technology in the first place and what outcomes they want to get out of it.

They suggested creating clearly defined goals and identifying potential returns on investment, not trying to shoehorn AI into every area within an organization.

For years, AI has been used to run search engines, cybersecurity software and data analysis programs. With generative AI exploding in popularity, it’s now being used as a software engineer’s co-pilot to write code and as a designer’s assistant to create new pieces of digital art.

At Mastercard, AI is being implemented to run a talent marketplace, connecting people interested in trying new projects with opportunities that fit their interests, Chief People Officer Michael Fraccaro said. He added that generative AI is now handling mundane tasks like scheduling, saving employees time on typically tedious work.

Boulding said some business school faculty have found ways to use AI to enhance education — giving students assignments to unpack the quality of their engagement with ChatGPT and whether they could identify logical inconsistencies in the AI’s responses.

Building boundaries

This handful of uses is just the tip of the iceberg. AI is expected to expand into much more of the business world and academia.

As people consider more uses for AI, ethical and privacy concerns will come up all the time, said Saša Pekeč, a professor of decision sciences at Duke. He noted that just because someone can use AI for something, that doesn’t mean they should.

Considering that, many of the speakers talked about the importance of establishing clear guardrails for AI’s development and usage, focusing on privacy, transparency, accountability and fairness. Bias is a major risk, so AI systems will need to be tested regularly to ensure they aren’t, say, discriminating against specific kinds of job applicants.

Many governments are pursuing new AI regulations, but these efforts so far aren’t coordinated. Mastercard’s Reiskind said it was imperative for the private sector to “step up to the plate” and lead the conversation on regulations, since it is the one innovating this technology and has the most experience with it.

Speakers at the conference discussed the potential for a future where workers are freed up from repetitive tasks so they can pursue more fulfilling activities, and where everyone can have a personalized digital assistant to help them shop, answer questions and do whatever else they need.

As was the tenor throughout the panels and discussions, the speakers tempered that bright vision with warnings about the pitfalls of AI’s development. Putting guardrails in place to mitigate those risks, they said, will go a long way.

“With the guardrails in place,” said Raj Seshadri, Mastercard’s president of Data & Services, “the positives far outweigh the negatives.”

Banner photo: Fortune Media CEO Alan Murray, left, with Raj Seshadri, center, the president of Data & Services for Mastercard, and Thomas Solecki, the managing director for Strategy and Analytics, Engineering at BNY Mellon. (Photo credit: Arsalan Danish)

Ben Fox Rubin, vice president, editorial content, Mastercard