New Report Provides Framework for Transparency in AI Systems

A new report published by Harvard Kennedy School’s Shorenstein Center on Media, Politics and Public Policy provides a framework to help artificial intelligence (AI) practitioners and policymakers ensure transparency in AI systems.

The report, titled “A CLeAR Documentation Framework for AI Transparency: Recommendations for Practitioners & Context for Policymakers,” outlines key recommendations for documenting how AI technologies are developed and deployed. (CLeAR stands for Comparable, Legible, Actionable, and Robust, the qualities the framework holds good documentation should have.) The framework is meant to help stakeholders understand how AI systems work and how they are built, by guiding those who develop and deploy AI systems to provide transparency into their datasets, models, and systems.

The recommendations focus on documenting an AI system’s purpose, capabilities, development process, data sources, potential biases, limitations, and risks. The report argues that this level of transparency is essential for building trust in AI and for ensuring that systems are developed and used responsibly.
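To make those documentation dimensions concrete, here is a minimal sketch of what a machine-readable record covering them might look like. It is purely illustrative: the class and field names are hypothetical and do not reflect the CLeAR framework’s actual recommendations or any schema from the report.

```python
# Illustrative only: field names are hypothetical, not drawn from the CLeAR report.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class SystemDocumentation:
    """Hypothetical documentation record for an AI system."""
    purpose: str                   # what the system is for
    capabilities: list[str]        # what it can (and cannot) do
    development_process: str       # how it was built and evaluated
    data_sources: list[str]        # where the training data came from
    known_biases: list[str] = field(default_factory=list)
    limitations: list[str] = field(default_factory=list)
    risks: list[str] = field(default_factory=list)

    def to_json(self) -> str:
        """Serialize the record so it can be published alongside the system."""
        return json.dumps(asdict(self), indent=2)

# Example usage: documenting a hypothetical sentiment classifier.
doc = SystemDocumentation(
    purpose="Classify customer reviews as positive or negative",
    capabilities=["English-language text only"],
    development_process="Fine-tuned on labeled reviews; evaluated on a held-out set",
    data_sources=["Public product-review corpus"],
    known_biases=["Underperforms on dialectal English"],
    limitations=["Not suitable for clinical or legal text"],
    risks=["Misclassification may affect downstream decisions about customers"],
)
print(doc.to_json())
```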

“As AI becomes more prevalent in our lives, it is critical that the public understands how these systems make decisions and recommendations that impact them,” said report co-author and former Shorenstein Center Fellow Kasia Chmielinski. “Our framework provides a roadmap for practitioners to openly share information about their AI work in a responsible manner.”

The framework was developed by a team of experts who have worked at the cutting edge of AI documentation across industry and academia. The authors hope the recommendations will help standardize transparency practices and inform the ongoing policy discussion around regulating AI.

The full report is available for download on the Shorenstein Center website. Lead authors Kasia Chmielinski and Sarah Newman, co-founders of the Data Nutrition Project, are also available for interviews to discuss the framework and its implications.