“Suddenly, we’re realizing that even things that at first blush probably appear to be innocuous and low stakes… these, at scale, have big consequences for our societies as more and more information is mediated by AI systems.”
Ron Bodkin, Vector Institute (formerly Google)
Can AI-based industrial systems be trusted, and do they carry ethical obligations to our partners and customers? It’s a big question, and one that was the focus of a recent podcast I did with Ron Bodkin, now VP of AI Engineering at the Vector Institute and an engineering lead at the Schwartz Reisman Institute for Technology and Society at the University of Toronto. (At the time of our discussion, Ron was Google’s technical director for applied AI.) It was a fun discussion, and we covered topics including the potential for deliberate misuse of AI systems; how to address the apparent loss of public trust in AI, as evidenced by the techlash; and the prospects for successfully embedding ethics into AI that, for better or worse, seems likely to eventually mediate much of day-to-day life. Check out the podcast and let me know what you think. Our host, Ginni Saraswati, kept things moving along at a high level, so even if you’re relatively new to the subject, you might get something out of it. And if you want to get into the weeds about reward functions and the like, message me or Ron on LinkedIn or post a comment below.
The Future of AI and Machine Learning with Mohsen Rezayat & Ron Bodkin
You’re taking advantage of the benefits of AI every day in ways you might not even be aware of: when you “talk” to an automated voice on the other end of the phone, when you call a Lyft or an Uber, and when you ask Siri or Alexa to play your favorite song while you wash the dishes. AI is everywhere, and its uses are expanding rapidly.
With the application of any new technology, there’s always a period during which kinks the creators didn’t plan for become visible. As new systems gain traction, those unaccounted-for faults can become amplified into patterns that, in turn, start to erode trust. One example in AI is how racial and gender biases that the technology was actually built to avoid can creep into its decision making. Another is how AI-based algorithms in social media amplify extreme views and keep us all in our filter bubbles, too often fostering division.
To think broadly about the effects of such systems, it helps to first understand how they work – by building on their own intelligence and collecting information from our cues and habits. We all collectively shape AI with our clicks and swipes, often without considering how that data will be used by bots and algorithms to make decisions. To make this technology work well, and work well for everyone, we need to map out the channels of its proverbial brain.
Our guests today are Mohsen Rezayat and Ron Bodkin. Rezayat is our Chief Solutions Architect here at Siemens Digital Industries Software. Bodkin spent the past few years as Technical Director of Applied Artificial Intelligence at Google. Currently, he’s the Vice President of AI Engineering and CIO at Vector Institute and Engineering Lead at the Schwartz Reisman Institute for Technology and Society.
In today’s episode, we’re talking about machine learning and artificial intelligence, including the complexity of establishing a system of ethics in AI so that it makes conscientious decisions and better serves our collective human community. You can find more information on industrial AI at Siemens here.
Some Questions I Ask:
- What is an example of AI in practice? (5:58)
- How are some AI models demonstrating bias? (7:59)
- What is the potential to deliberately misuse digital systems? (10:31)
- With the loss of public trust in AI, when do you think we’ll be able to regain our trust in this technology? (12:51)
- What do you think about how tech companies can safeguard us against bias and unfair treatment from algorithms? (19:48)
- Do you think we’ll achieve the goal of embedding ethics into future models of AI? (21:39)
What You’ll Learn in This Episode:
- The definition of machine learning (2:20)
- An example of how machine learning works (2:51)
- How racial bias makes its way into AI algorithms (8:45)
- The three components of trustworthy AI (12:56)
- How we can build ethical AI (14:37)
- Why humility is a good quality (15:10)
- How AI could help us see the future when it comes to catastrophic events (16:50)
Ron is the VP of AI Engineering and CIO at Vector Institute and is the Engineering Lead at the Schwartz Reisman Institute for Technology and Society. Ron is responsible for leading engineering teams that apply Vector’s leading AI research to industry and health care problems for Canada, establishing and supporting world-class scientific computing infrastructure to scale the adoption of beneficial AI, and ensuring that all Vector users, sponsor participants and partners are upskilled to use it effectively.
Previously, Ron was responsible for Applied Artificial Intelligence in the Google Cloud CTO office, where he spearheaded collaborative innovation efforts with strategic customers and Google AI research and engineering teams. Before that, he was the founding CEO of Think Big Analytics, which provided enterprise data science and engineering services and software, such as Kylo for enterprise data lakes, and was acquired by Teradata in 2014. After the acquisition, Ron led Think Big’s global expansion and created an Artificial Intelligence incubator at Teradata.
Ron has an honors B.Sc. in Math and Computer Science from McGill University and a Master’s in Computer Science from MIT.
Mohsen is Chief Solutions Architect at Siemens Digital Industries Software and an Adjunct Professor at the University of Cincinnati. He holds a Ph.D. in engineering mechanics from the University of Kentucky. He has over 70 technical publications, has served as the guest editor for the Computer-Aided Design Journal, and is a member of the Board of Directors of the Global Wireless Education Consortium. He is also a member of the scientific advisory boards for Drexel University, DeVry University, and University of Cincinnati and has participated in strategic think tanks at large companies including Intel and Microsoft.
Where Today Meets Tomorrow Podcast
Amid unprecedented change and the rapid pace of innovation, digitalization is no longer tomorrow’s idea. We take what the future promises tomorrow and make it real for our customers today. Welcome to “Where today meets tomorrow.”