
Day 3: AI in our daily lives @ the MIT Technology Review EmTech Digital Conference

By Thomas Dewey

A wrap-up of Day 3 at the EmTech Digital Conference, hosted by the MIT Technology Review. Day 3 covered AI in our daily lives. The first segment, “Guardrails,” covered the policies and regulation needed for responsible AI:

  • Julia Reinhardt, Mozilla Foundation, covered the AI policies she believes are necessary. AI has the potential to impact people in negative ways. Several years ago, a set of principles was created, but they were not official policies. Today this policy work continues in Europe with US participation, and it is moving into the stage of defining rules and policies. The first set of policies centers on the user/consumer and covers privacy and data protection, bias, and safety. On the corporate side, rules on copyright and liability are being defined. Julia is working toward policies because there are no binding rules yet. As this is an effort between governments, progress is slow and, of course, some countries might choose not to follow the rules or participate in the process.

  • Saiph Savage, Universidad Nacional Autonoma de Mexico, talked about the hidden workers in AI. Many people don’t know that there are thousands of workers in AI who perform tasks like labeling images, transcribing audio, or categorizing content. Businesses are focused on optimizing this labor force, so many of these workers earn low pay. The key to fairness is to understand worker values and issues. She explored ideas for getting low-paid workers on a path to better positions with higher pay by listening to successful workers and by working with companies on training and coaching strategies. She mentioned Amazon Mechanical Turk, which successfully outsources tasks but leaves many of its workers poorly paid, and suggested that a similar but fairer approach could work in AI. The issues and problems of workers in AI also apply to all “gig” workers, so Saiph would like to apply those ideas and solutions to AI workers, because employment risks that companies take on for their full-time employees are now becoming the responsibility of gig workers.

  • Abeba Birhane, University College Dublin, discussed ethics in AI. AI solutions are found in all spheres of life, and they will continue to expand. AI algorithms can create negative results because of biased datasets, or they can create false positives, which tend to disproportionately impact people at the margins of society. She covered documented cases of facial recognition failures as examples. Yet those who benefit the most from AI systems are typically not well equipped to recognize harm. Her goal is to include the people being exposed to AI systems, particularly people at the margins of society, as stakeholders in the solution in order to reimagine ethics. She suggests that developers make ethics part of the complete development process of AI systems from the start, instead of an “add-on.”

The next segment covered “Life,” meaning the areas in which AI is going to fundamentally change the way we live. This was an interactive panel with these folks:

  • Sanjeev Vohra, Accenture, talked about AI and business.
  • Krishna Cheriath, Zoetis, discussed AI in the context of his animal health business.
  • Julian Sanchez, John Deere, covered AI impact on agriculture.
  • Kyle Vogt, Cruise (part of GM), discussed AI impact on travel due to autonomous vehicles.

Key points from the AI business discussion (Sanjeev and Krishna):

  • C-suite investigation of AI is well underway, looking at business value. AI is helping companies make business decisions based on data, automate business processes (like forecasting and lead qualification), and find new services and products to offer customers, or even find new markets. C-level executives need to embrace AI, develop a culture around it, and invest in the organization with new tools, processes, and training. Companies that are not digitized are at a disadvantage.
  • The challenges of bringing AI into business: linking AI strategy to business strategy, determining how much data is needed (you need valuable data, not as much data as you can gather), acquiring the right technology and talent, and transforming the business around AI. The key is to set up a pilot project and learn from it, but make sure it targets an aspect of your overall strategy.
  • How do you grow AI talent? It depends on your AI project. People who can implement AI are scarce at this point. Companies might have to invest in training or sponsor college courses for employees and combine that with bringing in new talent. Existing employees might need to shift to developing a data strategy and infrastructure.

Key points from the agriculture and autonomous drive discussion (Julian and Kyle):

Agriculture:

  • The vision for precision agriculture is to start from soil and seeds, optimize plant care, and optimize crop yield by augmenting with AI. Start small and then scale to the entire farm. The example Julian gave was John Deere’s sprayer vehicle that drives itself (GPS and ML), applies nutrients to plants (using computer vision to target each plant), targets weeds with herbicide, and streams information to the farmer. This saves resources and is better for the environment.
  • John Deere says that farming is a very visual business, so they are looking at what farmers look for on their land. For example, computer vision and AI can be used in harvesters to evaluate the quality of the crop. Seeders can analyze soil and target the best places to plant. But the environment is dusty and unpredictable, so they have a lot of technology to keep cameras clean.
  • Farming usually needs to employ AI at the edge because of a lack of access to the Internet, so John Deere focuses on AI at the edge for its new products. In some cases, AI solutions can be retrofitted to existing equipment.
  • Most of the AI training is done in labs, with ongoing work to support inferencing at the edge (a minimal sketch of this pattern follows this list).
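
Neither the panel nor John Deere shared implementation details, but the “train in the lab, run inference at the edge” pattern Julian described can be sketched in a few lines. The sketch below is a minimal illustration in Python, assuming a hypothetical quantized weed-detection model exported to TensorFlow Lite; the model file name, input preprocessing, and spray threshold are placeholders, not anything John Deere actually uses.

```python
# Minimal sketch: run a (hypothetical) quantized weed-detection model at the edge.
# "weed_detector.tflite" and the spray threshold are placeholders for illustration only.
import numpy as np
import tflite_runtime.interpreter as tflite  # lightweight runtime suited to edge devices

interpreter = tflite.Interpreter(model_path="weed_detector.tflite")
interpreter.allocate_tensors()
input_info = interpreter.get_input_details()[0]
output_info = interpreter.get_output_details()[0]

def should_spray(frame: np.ndarray, threshold: float = 0.8) -> bool:
    """Return True if the camera frame likely contains a weed.

    `frame` is assumed to already match the model's expected input shape and
    dtype (for example, a resized uint8 image batch of shape [1, H, W, 3]).
    """
    interpreter.set_tensor(input_info["index"], frame)
    interpreter.invoke()  # inference runs locally; no Internet connection is needed
    weed_probability = float(interpreter.get_tensor(output_info["index"])[0][0])
    return weed_probability >= threshold
```

The point of the pattern is that the heavy training work happens in the lab, while the vehicle only carries the much cheaper inference step.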

Autonomous drive:

  • In evaluating LiDAR, RADAR, and cameras, Cruise believes that to replicate a driver and focus on safety, you need to employ all the technologies together, because each sensor is better at some things than the others in the driving environment (a toy fusion sketch follows this list). The key is that the cost of LiDAR continues to drop.
  • The goal for the autonomous car is that it is so good and so cheap that people use it instead of their own vehicle, avoiding the cost of ownership.
  • When a vehicle encounters a situation that is unexpected, a remote operator can intervene to direct an action (not take over driving the vehicle). This is termed “call a friend.”
  • For route planning: some solutions start with a fixed route or loop. Cruise uses technology to pick a route based on factors like time, construction work and traffic as the vehicle is driving.
  • There are US federal guidelines on autonomous drive but these are evolving as the technology is so new. Cruise says, “Imagine if the FAA established all rules for flying at the time of the Wright brothers.” There are also US state-level standards which vary by state.
  • Cruise is using San Francisco as a testbed because there is a big ride-share community, you can get anywhere by road, and the road system is very complex. So, if you can conquer that, you have a chance to scale to other cities.
  • Autonomous vehicles are supercomputers on wheels. When you combine that with the number of autonomous vehicles on the road, that is a massive distributed computational system, and that system can learn like a “hive mind.” So security is a key aspect considered with every element of the vehicle design. Cruise also employs a set of “hackers” to test the system.
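
Cruise did not describe its perception stack in any detail, but the argument for using LiDAR, RADAR, and cameras together can be illustrated with a toy late-fusion sketch: each sensor reports detections with its own confidence, and the detections are combined using weights that reflect how reliable each sensor is under the current conditions. The weights, labels, and numbers below are invented for illustration and are not Cruise’s algorithm.

```python
# Toy late-fusion sketch: combine per-sensor detections into one confidence score.
# Sensor weights are illustrative guesses, not real calibration values.
from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str        # "camera", "lidar", or "radar"
    label: str         # e.g. "pedestrian", "vehicle"
    confidence: float  # 0.0 .. 1.0 from that sensor's own model

# Hypothetical trust in each sensor for one condition (say, night in light rain).
SENSOR_WEIGHTS = {"camera": 0.5, "lidar": 0.9, "radar": 0.8}

def fused_confidence(detections: list[Detection], label: str) -> float:
    """Weighted average of per-sensor confidence for one object label."""
    relevant = [d for d in detections if d.label == label]
    if not relevant:
        return 0.0
    total_weight = sum(SENSOR_WEIGHTS[d.sensor] for d in relevant)
    weighted_sum = sum(SENSOR_WEIGHTS[d.sensor] * d.confidence for d in relevant)
    return weighted_sum / total_weight

detections = [
    Detection("camera", "pedestrian", 0.55),  # camera is unsure in low light
    Detection("lidar", "pedestrian", 0.92),   # lidar sees the shape clearly
    Detection("radar", "pedestrian", 0.70),   # radar confirms something is moving
]
print(fused_confidence(detections, "pedestrian"))  # ~0.76: evidence from all three sensors
```

The design point is simply that no single sensor has to be right on its own; the weaknesses of one are covered by the others.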

The last segment covered “Work,” meaning the impact that AI will have on your workplace:

  • David Benigson, Signal AI, discussed how to use AI to augment human work. His idea is that AI supercharges teams, letting them move up to higher-level tasks. For example:
    • Doctors are armed with tools to detect disease earlier but they still make all the decisions and deliver treatments.
    • AI can comb legal documents based on parameters and deliver the key documents to lawyers, removing that tedious task and allowing them to focus on strategy (a toy sketch of this triage idea appears after this list).
    • Ingredients for products can be monitored to predict problems in the supply chain and humans can respond.
    • Strategic questions can be answered by AI doing research, instead of hiring an expensive consulting group to gather data.

      What are these higher-level jobs that existing employees can take on due to AI doing the “grunt” work? Humans have the intelligence and ability to make decisions based on the results that AI produces. AI at this time cannot make those decisions.
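
Signal AI’s products were not described in technical terms, but the legal-document example above can be sketched as a simple relevance filter: score each document against the parameters a lawyer supplies and surface only the best matches for human review. The scoring below is deliberately naive keyword counting, purely to illustrate the workflow; a real system would rely on trained language models.

```python
# Naive sketch of "comb legal documents based on parameters": score each document
# against the lawyer's search terms and surface only the top matches.
def relevance_score(text: str, terms: list[str]) -> int:
    lowered = text.lower()
    return sum(lowered.count(term.lower()) for term in terms)

def triage(documents: dict[str, str], terms: list[str], top_n: int = 5) -> list[str]:
    """Return the names of the top_n documents most relevant to the given terms."""
    scored = {name: relevance_score(text, terms) for name, text in documents.items()}
    ranked = sorted(scored, key=scored.get, reverse=True)
    return [name for name in ranked if scored[name] > 0][:top_n]

docs = {
    "lease_2019.txt": "The tenant shall indemnify the landlord against all claims...",
    "nda_acme.txt": "Confidential information excludes information already public...",
}
print(triage(docs, ["indemnify", "liability"]))  # the lawyer reviews only these
```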

  • Veena Dubal, UC Hastings College of the Law, covered how to fix the current situation in which gig workers’ bosses are AI algorithms. She started working with taxi drivers, who have no guaranteed income, in 2008. In 2012 we saw the rise of ride-share services, where anyone with a car could work. Taxi workers saw these services as just a replacement for taxi companies, but without a provided car. The work of setting routes, fees, and assignments was replaced with algorithms. She claims that these labor practices are spreading to business in general. Her examples: independent contractors, whose numbers have grown over time in the technology industry; AI hiring techniques; and algorithmic management, which replaces managers’ control over gig workers. She took the view that only labor unions can fix the gig worker issue. The business owners in the audience pushed back on her arguments, stating that political policies are driving owners to resort to gig workers, causing companies to move to different states or move manufacturing overseas. They believe there needs to be a balance.

  • Elisabeth Reynolds, Work of the Future and the Industrial Performance Center at MIT, looked at how AI can create better jobs. She outlined some of the highlights of the MIT study on this topic in the US:
    • 60% of the work performed in 2018 did not exist in 1940, meaning that technology grew the labor market; most job sectors changed and grew along the way.
    • Business productivity has grown sharply since 1975, but the typical worker (non-management, without a degree) has benefited very little. So these workers ask: why will AI be any different?
    • Less-educated workers in the US receive lower pay than in other industrialized countries.

AI/ML makes predictions using math, but those predictions are best guesses and the answers can be wrong; humans make the decisions. The question will be how workers who are augmented with AI will be educated, offered new career paths, and compensated fairly. In the US, the idea that you must have a four-year degree to succeed needs to change to take advantage of community colleges, company-sponsored education, and certification programs.
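
As a tiny, purely illustrative example of the point that AI produces best guesses while humans make the decisions: a sensible workflow acts automatically only when the model is confident and otherwise hands the case to a person. The labels and threshold below are invented for illustration.

```python
# Illustrative only: the model produces a best guess with a confidence value;
# a human makes the call whenever that confidence is too low. Threshold is arbitrary.
def route_prediction(label: str, probability: float, auto_threshold: float = 0.95) -> str:
    if probability >= auto_threshold:
        return f"auto-accept '{label}' (model confidence {probability:.0%})"
    return f"send '{label}' to a human reviewer (model confidence {probability:.0%})"

print(route_prediction("fraudulent transaction", 0.62))  # a human decides
print(route_prediction("routine invoice", 0.99))         # confident enough to automate
```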

That wraps up the three day conference. I learned a lot and it sparked many ideas for me. Hopefully, you find some aspects in the summaries that you can research and apply to your own business.


This article first appeared on the Siemens Digital Industries Software blog at https://blogs.sw.siemens.com/thought-leadership/2021/03/25/day-3-ai-in-our-daily-lives-the-mit-technology-review-emtech-digital-conference/