AI’s Ethical Dilemma: Human Interests vs Machine Interests
- Feb 01, 2021
According to Pulitzer Prize-winning author Thomas Friedman, technologies like artificial intelligence (AI) are accelerating so quickly, and are so deep and interconnected, that their impact on society could amount to the next industrial revolution (Friedman, 2019). To many, AI products deliver clear benefits, but even the best intentions can obscure unintended consequences. Ethical regulation lags far behind technological advancement, and it may prove difficult not only to identify but also to prevent potential disasters before a true artificial mind passes the Turing Test.
In just five years, the value of the current industrial skill set will depreciate by 50%; in ten years, existing knowledge will be worth a quarter of what it is today (Estes, 2020). This depreciation of occupational skills is a telling example of how quickly AI is being adopted in the workplace: businesses are incorporating AI into product offerings, using just-in-time learning to re-tool their workforces, and forming partnerships with peers, customers, and even competitors to bridge knowledge gaps (CES, 2021). At CES 2021, for example, Panasonic announced partnerships with Envisics and Phiar, and GM with Territory Studio and Rightpoint, new competitive alliances that deepen AI’s contribution to the driver’s experience and connection to the vehicle.
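The “half-life of skills” claim above is simple exponential decay: if a skill set loses half its value every five years, it is worth roughly a quarter of today’s value after ten. A minimal sketch (the five-year half-life is the figure cited above, not a modeling choice of ours):

```python
def skill_value(initial: float, years: float, half_life: float = 5.0) -> float:
    """Remaining value of a skill set under a fixed half-life (in years)."""
    return initial * 0.5 ** (years / half_life)

print(skill_value(100, 5))   # -> 50.0 (half the value after five years)
print(skill_value(100, 10))  # -> 25.0 (a quarter of the value after ten)
```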
AI inside self-driving cars poses a complex set of risks and rewards, especially when its ethics are considered. To understand the ethical dilemmas alone, you must grapple with Responsibility-Sensitive Safety (RSS), the model that governs the driving behavior of autonomous vehicles (AVs) and defines how a car is expected to react in dangerous situations. The rule of “if you can avoid a crash without causing another” shows that it is acceptable for an AV to violate RSS in order to achieve its highest priority of not crashing (Mobileye, 2020). Self-driving cars are deemed safer because the driver is no longer distracted; however, these machines can drive recklessly in order to avoid accidents, potentially endangering other people and objects on the road. How do we program carefulness and moral values into these machines?
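The decision priority in that rule can be illustrated with a toy sketch. This is not Mobileye’s RSS implementation (which is a formal, continuous safety model); the maneuver names and flags below are hypothetical, purely to show the stated priority of avoiding a crash only when doing so does not create another one:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Maneuver:
    name: str
    avoids_primary_collision: bool    # does it prevent the crash we detected?
    causes_secondary_collision: bool  # does it put anyone else at risk?

def choose_maneuver(options: List[Maneuver]) -> Optional[Maneuver]:
    """Pick the first maneuver that avoids the crash without causing another;
    otherwise fall back to the least harmful default (here, braking)."""
    for m in options:
        if m.avoids_primary_collision and not m.causes_secondary_collision:
            return m
    return next((m for m in options if m.name == "brake"), None)

options = [
    Maneuver("swerve_left",  avoids_primary_collision=True,  causes_secondary_collision=True),
    Maneuver("swerve_right", avoids_primary_collision=True,  causes_secondary_collision=False),
    Maneuver("brake",        avoids_primary_collision=False, causes_secondary_collision=False),
]
print(choose_maneuver(options).name)  # -> swerve_right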
Today, artificial intelligence is doing more than executing a narrow set of tasks. AI can now transfer knowledge from one situation to the next and is approaching artificial general intelligence (AGI) faster than anyone anticipated. For example, Alexa can now “infer” latent goals that the user never directly expresses (Kumar & Rathi, 2020), such as turning off the lights at bedtime even though no one gave the command. The limitation of processing power, once the bottleneck to achieving AGI, is being overcome by the performance of the chipsets that companies like AMD and Intel announced at CES 2021.
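To make the idea of a “latent goal” concrete, here is a heavily simplified, hypothetical sketch. Amazon’s actual system uses learned models over dialogue context; the lookup table below merely stands in for such a model to illustrate that a stated request can imply an unstated follow-up action:

```python
from typing import Optional

# Hypothetical mapping from an explicit request to an implied follow-up goal.
LATENT_GOALS = {
    "set an alarm for 7 am": "turn off the bedroom lights",
    "start my commute playlist": "read out today's traffic report",
}

def infer_latent_goal(utterance: str) -> Optional[str]:
    """Return an implied follow-up action for an explicit request, if any."""
    return LATENT_GOALS.get(utterance.lower().strip())

print(infer_latent_goal("Set an alarm for 7 AM"))  # -> turn off the bedroom lights
```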
The next advancement in AI comes with understanding language (Lyamm, 2019), including the ability to read text, summarize main ideas, and (to some degree) hold a conversation. Google DeepMind’s AlphaGo Zero, in which the agent combines deep learning with Monte Carlo Tree Search to devise game strategies and solve puzzles in ways inconceivable to humans (Foster, 2017), demonstrates that machines will soon be able to teach themselves and reach a “super-human” level of intelligence.
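A schematic sketch of that self-teaching loop, as summarized by Foster (2017): self-play games are guided by Monte Carlo Tree Search using the current network, and the recorded positions, search policies, and outcomes are used to retrain that network. The functions below are placeholders standing in for the real search and training components, not DeepMind’s code:

```python
import random
from typing import List, Tuple

def mcts_policy(state: str, network: dict) -> List[float]:
    # Placeholder: a real implementation runs many tree-search simulations,
    # using the network's priors and value estimates to guide exploration.
    return [random.random() for _ in range(3)]

def play_one_game(network: dict) -> List[Tuple[str, List[float], float]]:
    """Self-play: the agent plays both sides, logging the search policies."""
    history = []
    for move in range(5):                      # toy fixed-length "game"
        state = f"state-{move}"
        history.append((state, mcts_policy(state, network), 0.0))
    outcome = random.choice([-1.0, 1.0])       # final result of the game
    return [(s, p, outcome) for s, p, _ in history]

def train(network: dict, data) -> dict:
    # Placeholder for gradient updates toward the search policies and outcomes.
    network["updates"] = network.get("updates", 0) + len(data)
    return network

network = {}
for iteration in range(3):                     # the loop that lets the agent teach itself
    games = [play_one_game(network) for _ in range(10)]
    network = train(network, [sample for game in games for sample in game])
print(network)
```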
Although we are already “fused” to the intelligence-assisted technology in our homes, we need to question the motivations of AI product manufacturers, which may differ from the interests of the user. For example, Rohit Prasad, Alexa’s head scientist, says Amazon’s goal is to provide a device that “actively orchestrates the consumer’s life”; and Jeffrey Chester, executive director of a consumer privacy advocacy organization, cautions that Amazon’s ultimate goal is to monetize our daily lives (Hao, 2019).
When questionable intent is combined with data-privacy concerns and our increasing dependence on AI to perform everyday tasks, technology engineers need to encourage conversations around ethical considerations. We need to make our clients aware of serious questions about how far AI agents can be trusted, and about the potential consequences for society and our complex, globalized economic systems in the years to come.
References & Sources:
- CES (2021). Technological Megashifts Impacting our World. Consumer Technology Association. Retrieved from https://digital.ces.tech/sessions/b1e847f9-7128-41fc-b5b4-17e11b0d83ff
- CTA (2021). CES B-Roll. Consumer Technology Association. Retrieved from https://www.cesbroll.com/
- Estes, P. (2020). The Half-life of Skills. HR Daily Advisor. Retrieved from https://hrdailyadvisor.blr.com/2020/03/25/the-half-life-of-skills/
- Foster, D. (2017). AlphaGo Zero Explained in One Diagram. Applied Data Science. Retrieved from https://medium.com/applied-data-science/alphago-zero-explained-in-one-diagram-365f5abf67e0
- Friedman, T. (2019). Thomas L. Friedman: Technology Moves in Steps. McKinsey.com. Retrieved from https://www.mckinsey.com/featured-insights/future-of-work/thomas-l-friedman-technology-moves-in-steps
- Hao, K. (2019). Inside Amazon’s Plan for Alexa to Run Your Entire Life. MIT Technology Review. Retrieved from https://www.technologyreview.com/2019/11/05/65069/amazon-alexa-will-run-your-life-data-privacy/
- Kumar, A., & Rathi, A. (2020). Alexa Gets Better at Predicting Customers’ Goals. Amazon Science. Retrieved from https://www.amazon.science/blog/alexa-gets-better-at-predicting-customers-goals
- Lyamm, M. (2019). AI Experts: The Next Frontier in AI After the 2020 Job Crisis. Data Science Blog. Retrieved from https://data-science-blog.com/blog/2019/11/05/ai-experts-the-next-frontier-in-ai-after-the-2020-job-crisis/
- Mobileye (2020). Responsibility-Sensitive Safety. Retrieved from https://www.mobileye.com/responsibility-sensitive-safety/
- Shashua, A. (2020). Mobileye. Retrieved from https://www.mobileye.com/blog/category/amnon-shashua/