From March 23rd to 25th, I attended the EmTech Digital conference, hosted by MIT Technology Review. In this blog, I report on Day 1 of the conference, which covered the business of AI.
On the Mainstage, the first segment of the conference covered “The Age of Implementation.” Jennifer Strong, MIT Technology Review, kicked off the conference with opening remarks highlighting the next three days.
Andrew Ng, Landing AI, former leader of the Google Brain project, began by covering what businesses need to do to ensure that they are taking advantage of AI. He says, “never say let’s be AI first,” because that focuses on technology instead of solving customer problems. So, it’s customer first. But AI still needs to be considered to advance your business. The first step is to produce data within your company. But what if you don’t have a lot of data, or if it is not “clean”? Some businesses want to build a big IT project to gather clean data, but Andrew thinks you should start with what you have and develop AI around that. And look for open-source algorithms or neural networks first, because those are often good enough for your project and you don’t need to develop state-of-the-art AI yourself. Even a high-volume application like an autonomous drive system is actually looking for outliers (things never seen before), and that data set is relatively small.
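Andrew's "start small with what you have" advice can be made concrete with a minimal sketch: a handful of labeled records and an off-the-shelf open-source algorithm, here scikit-learn's logistic regression. The inspection data and feature names below are invented purely for illustration, not from his talk.

```python
# A minimal sketch of "start with the data you have and an off-the-shelf
# algorithm." Assumes scikit-learn is installed; the defect-inspection data
# below is invented for illustration.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Pretend these are the only 12 labeled inspection records we have:
# [temperature, vibration] -> 1 = defective part, 0 = good part
X = [[70, 0.1], [72, 0.2], [68, 0.1], [71, 0.3], [69, 0.2], [73, 0.1],
     [95, 0.9], [97, 0.8], [94, 1.0], [96, 0.7], [98, 0.9], [93, 0.8]]
y = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=4, random_state=0, stratify=y)

model = LogisticRegression().fit(X_train, y_train)
accuracy = model.score(X_test, y_test)
```

Even a dozen examples are enough to prove out the idea and decide whether gathering more data is worth a bigger project.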
Andrew noted that we have general-purpose, horizontal tools like PyTorch, but we need vertical platforms that are customized to business processes and easy to apply. These vertical platforms are for non-engineers, so that they can develop their own solutions and clean and label their own data. Vertical platforms are tailored around the business expertise at the company.
Andrew addressed the perception that state-of-the-art machine learning (ML) techniques are only used at big companies or in academia. How can small companies use AI/ML? Start with your biggest business problems and then see whether AI techniques apply to them. Sometimes there is no AI solution yet. Or some company might already have a product that matches your problem and that you can employ. Get experts in the room for a cross-functional brainstorming session to identify technical and business feasibility. Maybe do a proof of concept. Start fast and small instead of trying to solve every aspect of the problem; that way other teams can see the value, and they might come up with new ideas. Jump in on AI, because your competitors probably are.
Michelle Lee, Amazon Web Services (AWS), took over to discuss the Machine Learning Solutions Lab. She covered lessons learned while working with NASA, the National Football League, AstraZeneca, and many others. The lab identifies and implements the highest-value ML use cases with customers. Because she is from AWS, her solution examples use AWS products. She worked on a project with the US Patent Office for fast search of the patent database (over 10 million patents) during new patent approval. Her observation: “if a 200-year-old government service can use ML, then probably any business should think about using ML.”
She covered seven lessons learned while working with ML customers:
- Have access to data and a comprehensive data strategy. A data strategy means: break down data silos and collect key data; make data available easily and securely to anyone who needs it; and put data to work with ML algorithms.
- Carefully select use cases and define success metrics. Look for data readiness, business impact and the chance of success using ML technology and your team skills. Not every problem is solved by ML.
- Technical and domain experts need to work side-by-side. Data scientists should not work in a silo. You need shared communication. But sometimes there are privacy concerns (like in healthcare) that require rules on who can access the data.
- Get executive sponsorship and set big top-down goals. Turn the culture of the company to embrace ML. Support experimentation and be tolerant of failure. The key is to take a long-term view.
- Assess and address any skills gap. Start with in-house training using outside resources like AWS ML training and certification and AWS ML Embark. Then look at outside hires.
- Don’t do undifferentiated heavy lifting. Take advantage of outside elements like AWS ML frameworks and infrastructure, machine learning services, and AI services through API calls to common elements, like speech recognition modules.
- Plan for the long term. Implement solutions in parts and phases and get results on small aspects quickly to set expectations. It is a continuous process.
Both speakers made similar high-level points: start with a business problem, not an AI solution, and quickly tackle a small aspect of the problem to prove it out with a diverse team of people with business and technical backgrounds.
The conference then shifted to a set of brief presentations on the essential elements of AI within business:
- Our own Stefan Jockusch, Siemens, discussed his vision of product manufacturing in the future. It starts with asking an automated, digitized system to “make me a product.” He walked through an example of creating a custom drone. AI-driven software looks over a large dataset of drone designs that meet the customer’s parameters and designs the drone. The customer accepts the design that the system presents, and then an automated design and verification flow completes it. Next, an intelligent marketplace is automatically explored to send the design to an AI-driven factory system that creates the product. Finally, it gets shipped to the customer. I will cover Stefan’s vision in a future blog.
- Ed McLaughlin, MasterCard, covered AI in the banking industry. Ed says MasterCard is a network accessed by many types of devices, from swiping a card, to buying something with your phone, to near-future ideas like cars that can automatically buy gas at a station. They see billions of transactions, which results in a lot of data to analyze to improve the network. By design, all their data is clean. Ed covered a key aspect of using AI at MasterCard: detecting legitimate transactions versus fraudulent ones. It reduced fraud by 3x and false fraud positives by 6x. False positives frustrate legitimate customers; everyone has experienced a declined transaction for no reason. Another crime-related area was identifying money laundering schemes and accounts set up to move stolen money. The key is that their AI systems change dynamically based on inputs. They are not static systems.
- Madeleine Clare Elish, Google, told of her previous work with Duke University, which used deep learning to detect sepsis risk in hospital patients and then provided guidance on how to effectively treat sepsis. While treatable, sepsis is hard to diagnose quickly and remains the leading cause of death in hospitals. The system is called Sepsis Watch, and there are many papers available on this work. The system was only effective when it considered the humans involved in the patient’s care. So AI was a tool that helped assign risk and treatment only when hospital staff provided input into the system. Her background is in anthropology, so it makes sense that she studies the role of humans in AI solutions.
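The "dynamically changing, not static" idea from Ed McLaughlin's talk can be illustrated with a toy sketch: a fraud check whose statistics update with every legitimate-looking transaction, so its notion of "normal" tracks the account's actual behavior. MasterCard's real systems are proprietary and far more sophisticated; the threshold logic and amounts below are invented for illustration.

```python
# Toy sketch of a fraud check that adapts as transactions stream in,
# rather than using a fixed, static rule. Uses Welford's online algorithm
# to maintain a running mean and variance; all numbers are invented.
class AdaptiveFraudCheck:
    def __init__(self, threshold_sigmas=3.0):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations
        self.threshold_sigmas = threshold_sigmas

    def score(self, amount):
        """Return True if the amount looks fraudulent for this account."""
        if self.n < 10:          # not enough history yet; accept everything
            suspicious = False
        else:
            std = (self.m2 / self.n) ** 0.5
            suspicious = abs(amount - self.mean) > self.threshold_sigmas * max(std, 1e-9)
        if not suspicious:       # only legitimate-looking amounts update the model
            self.n += 1
            delta = amount - self.mean
            self.mean += delta / self.n
            self.m2 += delta * (amount - self.mean)
        return suspicious

check = AdaptiveFraudCheck()
for amount in [20, 22, 19, 25, 21, 23, 18, 24, 20, 22]:  # normal history
    check.score(amount)
flagged = check.score(5000)  # wildly out-of-pattern transaction
```

Because the model keeps learning from accepted transactions, a customer whose spending gradually changes is less likely to hit the false positives that frustrate legitimate users.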
Day 1 concluded with a set of presentations covering how to deploy real AI solutions in business:
- Hamid Tizhoosh, University of Waterloo, talked about his experience developing AI solutions. He covered three key concepts. First, form the right team of people and identify the expertise that you really need. Next, determine whether you have representative data and whether it contains bias, noise, or outlier data points. Lastly, determine if you are on the right path: ask whether you are using supervised or unsupervised techniques. Most successful projects today use supervised AI because it yields answers quickly and is easy to validate. But unsupervised (unlabeled-data) techniques will be the differentiator between businesses. The problem is that unsupervised learning is challenging to design and hard to validate. However, he believes unsupervised learning is the future of AI, which would mean the demise of humans labeling data.
- Alexandr Wang, Scale AI, talked about why he thinks data is the new code. AI development is fundamentally different from writing code. Instead of writing code, deploying it, and evaluating it, AI development starts with labeling data, then training the model, and finally evaluating it. So data is the foundation of the AI system, and bad data equals bad results. Scale AI built a system that combines ML models with human input for a scalable system that learns as it runs, thus minimizing human supervision over time. His company is heavily invested in helping create clean and labeled data. So he disagrees with Hamid Tizhoosh: it might not be labeling, but he believes humans will always be involved in some way.
- Mammad Zadeh, Intuit, spoke about how his company cut its model development time from 6 months to less than a week. In the past, data scientists at his company were spending 70% of their time finding and formatting data, not tuning AI models. So, Intuit created a new infrastructure with standard, clean data models combined with a common ML platform (they bought some components and created some of their own). The key was a data platform that “provides the ecosystem to govern and manage the lifecycle of data and machine learning.” The data is decentralized but all experts have access to improve the data. Their equation for success is great technology + great data + great talent.
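The human-in-the-loop labeling pattern Alexandr Wang described can also be sketched simply: the model labels what it is confident about, and only low-confidence items are escalated to a human, so human effort shrinks as the model improves. The stand-in model, confidence rule, and items below are invented for illustration; a real pipeline would call a trained classifier.

```python
# Toy human-in-the-loop labeling loop: confident model predictions are
# accepted automatically; uncertain items go to a human annotator.
def model_predict(item):
    """Stand-in model returning (label, confidence); invented rules."""
    label = "cat" if "whiskers" in item else "dog"
    confidence = 0.95 if ("whiskers" in item or "bark" in item) else 0.55
    return label, confidence

def human_label(item):
    """Stand-in for a human annotator."""
    return "cat" if "meow" in item else "dog"

def label_dataset(items, threshold=0.9):
    auto, manual = [], []
    for item in items:
        label, conf = model_predict(item)
        if conf >= threshold:
            auto.append((item, label))                 # model's label accepted
        else:
            manual.append((item, human_label(item)))   # escalate to a human
    return auto, manual

auto, manual = label_dataset(
    ["whiskers and a tail", "loud bark", "soft meow"])
```

As the model is retrained on the newly labeled data, more items clear the confidence threshold and the human queue keeps shrinking, which is the "minimizing human supervision over time" behavior from the talk.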
The themes of the day boiled down to: look for a real business problem to solve and then define an AI solution to test quickly. Bring together the right people, including business, technology, and IT folks, to build out the overall solution. And the goal is not to deploy a system once and then forget it; it should learn, improve, and evolve over time.