What leads to false beliefs about AI capabilities and risks? In this talk based on his book AI Snake Oil, Arvind Narayanan describes a hierarchy of misleading claims and beliefs that pose increasingly thorny epistemic challenges. On one extreme are outright false claims such as a “robot lawyer” that can argue Supreme Court cases. At the other extreme is the fact that on consequential questions such as potential existential risks posed by AI, experts themselves occupy bifurcated realities. How should we attempt to make intellectual progress, and sound policy, given this state of affairs?
Arvind Narayanan is a professor of computer science at Princeton University and the director of the Center for Information Technology Policy. He is a co-author of the book AI Snake Oil and a newsletter of the same name, which is read by 40,000 researchers, policymakers, journalists, and AI enthusiasts. He previously co-authored two widely used computer science textbooks: Bitcoin and Cryptocurrency Technologies and Fairness and Machine Learning. Narayanan led the Princeton Web Transparency and Accountability Project to uncover how companies collect and use our personal information. His work was among the first to show how machine learning reflects cultural stereotypes, and his doctoral research showed the fundamental limits of de-identification. Narayanan was named to TIME’s inaugural list of the 100 most influential people in AI. He is a recipient of the Presidential Early Career Award for Scientists and Engineers (PECASE).