This episode of Two Think Minimum explores the intricacies of artificial intelligence accountability with Ellen P. Goodman, a distinguished professor of law at Rutgers Law School and former Senior Advisor for Algorithmic Justice at NTIA. Goodman, the principal author of the March 2024 NTIA AI Accountability Policy Report, discusses with the TPI hosts the challenges and complexities of defining and establishing accountability in the rapidly evolving field of AI. She articulates the difficulties in setting standards for AI systems and underscores the importance of auditing processes to ensure transparency and fairness.
Sarah Oh Lam: Hello and welcome back to Two Think Minimum. Today is Friday, April 26th, 2024. I’m Sarah Oh Lam, Senior Fellow at the Technology Policy Institute. I’m here with my co-hosts, TPI President and Senior Fellow, Scott Wallsten and TPI Senior Fellow and President Emeritus, Tom Lenard. Today we’re delighted to have as our guest, Ellen P. Goodman.
Professor Goodman is a Distinguished Professor of Law at Rutgers Law School specializing in information policy law. She’s co-director and co-founder of the Rutgers Institute for Information Policy and Law. She recently served as the Senior Advisor for Algorithmic Justice at NTIA, U.S. Department of Commerce. She has also served as a distinguished visiting scholar at the FCC and a visiting scholar at various graduate schools. Prior to joining the Rutgers faculty, Professor Goodman was a partner in the D.C. law firm of Covington & Burling, and also served as of counsel there for many years. Thank you, Ellen, for joining the podcast.
Sarah Oh Lam: Ellen was principal author of the newly released NTIA AI Accountability Policy Report in March 2024. We thought it would be a great time to discuss AI and her role at NTIA. Ellen, could you tell us about your position there and what you brought to the table for this report?
Ellen P. Goodman: Thank you, Sarah. I’ll underscore that these comments are in my personal academic capacity, as I ended my time with NTIA at the end of March. I was there for about 18 months as the Senior Advisor for Algorithmic Justice. My role involved conceiving of and executing on this report, as well as engaging in interagency consultations and advisory work on various AI issues and platform regulation matters.
I was fortunate to be there during the Gonzalez v. Google and Twitter v. Taamneh Supreme Court cases, as well as the early stages of the NetChoice cases. While not directly related to AI, these cases involve algorithmic processes. There were also numerous international consultations and vehicles in which the U.S. Government has an interest that touched on AI and algorithmic governance.
Sarah Oh Lam: NTIA put out a Request for Comment (RFC) in April 2023, asking for public input on AI accountability policy, which elicited 1,400 comments. Can you describe the process of reading those comments and how your team ingested the public feedback to produce the report?
Ellen P. Goodman: We asked 34-36 questions in that RFC, which itself was the product of extensive stakeholder consultations to determine the appropriate questions. The term “accountability” was carefully chosen. Our goal was to survey stakeholders with defined prompts about what accountability means.
It’s important to note that our work on this began before the release of ChatGPT and the large language model generative AI products, but we released the RFC afterwards. In this rapidly evolving field, you’re essentially building the plane while flying it.
The term accountability has been used to describe various concepts in the tech governance space, sometimes synonymously with transparency, explainability of a model or system, or responsibility. One of our objectives was to clarify that accountability only occurs when there are consequences for choices – on the downside for imposing unacceptable risks and harms, and on the upside for positive externalities. That was the theory behind the project.
I brought this project to NTIA because I had been writing about auditing and other governance mechanisms for AI. I observed the development of the EU AI Act (now passed), the UK’s interesting work on “assurance” (an analog to financial assurance, involving audits and assessments), and state regulations requiring AI audits in certain contexts. I realized that for any of these mechanisms to function as meaningful accountability measures, much more infrastructure was needed. That was the genesis of the idea – determining what that infrastructure should look like and gathering stakeholder perspectives on the pros and cons of various mechanisms and tools.
Tom Lenard: When the report discusses accountability, to whom is that accountability directed?
Ellen P. Goodman: In short, accountability is directed to anyone affected by AI systems. Many of these systems are B2B, with purchasers and deployers obtaining them from developers. Developers need to be accountable to deployers, who in turn need to be accountable to users. Additionally, there are individuals who may not be users themselves but are affected by the systems, such as job applicants subjected to AI-mediated employment processes. They are also stakeholders entitled to some level of accountability.
We heard from regulators in the interagency who view themselves as stakeholders seeking accountability. They are interested in leveraging their existing statutory tools to hold what the report terms “AI actors” (developers and deployers) accountable. It’s worth noting that the distinction between developers and deployers can be blurry, as an entity may undertake both roles, such as obtaining a pre-trained model and fine-tuning it. Similarly, users may also be deployers. It’s a value chain, and every entity in that chain needs to be responsible and accountable for harms imposed on the next link.
Tom Lenard: How do the issues under the umbrella of algorithmic justice in the context of AI differ from the discussions surrounding privacy, data use, etc. that have been ongoing for years? Is there a discontinuity or are we dealing with fundamentally different matters?
Ellen P. Goodman: It’s a Venn diagram. Let’s consider the concerns surrounding AI, which our project took as given based on the existing literature, although we didn’t explicitly ask about them in the RFC. Commenters nevertheless provided valuable insights into perceived risks.
The primary concerns include discrimination, privacy, safety, cybersecurity, autonomy, and deception. Privacy, specifically the spillage and exploitation of personally identifiable information and other privacy harms, is a component of this broader set of issues.
In terms of accountability, we acknowledged in the report that, because there is no general federal data protection law, our analysis would not delve into privacy, as that debate is well covered elsewhere.
What’s novel here, and I would argue it’s more evolution than revolution, is the interaction of multiple risks, problems, and possibly systems at different points in the value chain. Developers of complex AI systems may not fully understand or be responsible for the data used to train their models, as they often obtain datasets from third parties. On the output side, they sell or make these models available to deployers without necessarily knowing the contexts in which they will be used. This leads to a greater degree of opacity and complexity compared to previous technologies.
There are also potentially new issues, though one could debate whether they are entirely novel or intensified versions of past challenges. Autonomy, not in the sense of artificial general intelligence but rather the varying degrees of autonomy built into AI systems as part of their definition, raises heightened concerns about the displacement of human judgment.
Sarah Oh Lam: Did you consider how this accountability framework aligns with products liability or the FTC’s unfair or deceptive practices regime? There seems to be overlap, as your description evoked the notion of a widget that can harm consumers, necessitating knowledge of its contents, which parallels products liability and advertising rules.
Ellen P. Goodman: Existing laws and law enforcers were viewed as clients or customers of accountability inputs in our analysis. However, existing legal tools may be inadequate to fully address the challenges posed by AI.
Agencies such as the CFPB, FTC, and SEC are already enforcing the laws and rules under their jurisdiction. If an AI system is used to commit fraud or engage in deception within the FTC’s purview, they have the authority to take enforcement action, regardless of whether AI is involved. The legal standard remains the same. However, the opacity surrounding these systems can hinder effective enforcement.
Part of our objective was to consider what law enforcement agencies would need or benefit from having to more easily detect violations and enforce the law. This led us to explore mechanisms such as standards, audits, assessments, disclosures, and documentation of system operations.
In that sense, we viewed the procedures and mechanisms we recommend as useful inputs to existing law enforcement efforts.
Where existing structures fell short, we did recommend a couple of areas for new laws or structures. One of them is enhanced government capacity, possibly in the form of a horizontal capacity that can work across different verticals and provide expertise on how AI systems operate. This could include adverse event databases to collect information about system failures post-deployment.
Other recommendations included supporting an audit or evaluation ecosystem and standard setting. These measures would complement existing laws.
Sarah Oh Lam: Another aspect of AI that may seem different but might not be is its technical nature. The report mentioned 65 key terms defined by standards bodies. Is this level of technical complexity unique to AI, or do other industries, such as biology or automotive, also have highly technical terminologies? How does AI differ from prior industries in this regard?
Ellen P. Goodman: The technical complexity itself is not too different. Opening a building code likely requires familiarity with numerous definitions. What’s notable about AI terminology is that the terms are in flux.
The report refers to the U.S.-EU Trade and Technology Council (TTC) defining terms, but the meanings of some of those terms have likely already shifted. For example, the OECD is currently redefining AI. In that sense, AI may be more dynamic compared to more static industries.
It’s also worth considering whether AI is an industry in itself or more akin to an infrastructure like electricity. I believe it’s both. There are companies focused solely on developing AI systems, constituting an AI industry. At the same time, AI is becoming integrated into virtually everything we do, functioning as an infrastructure.
Depending on the aspect under consideration, AI may present itself as a tractable object of regulation or not. We received numerous comments on this duality and where the regulatory focus should lie based on whether the discussion pertains to developers, deployers, well-defined harms like discrimination, or novel harms that may be emerging with AI.
One example is provenance for synthetic content generated by AI. While analogies can be drawn to forgeries, the prospect of a significant portion of our information diet being generated by artificial agents with unknown sources presents a new kind of problem.
Tom Lenard: Given the rapidly evolving nature of AI, how did you approach the establishment of standards in this context?
Ellen P. Goodman: We didn’t establish standards ourselves, but we did make recommendations. It was a source of frustration, likely shared by some participants in the proceeding who strongly desire standards and even risk hierarchies.
There is significant faith within the U.S. Government in independent standard-setting bodies like IEEE. These private bodies set technical standards that can be incorporated into law or become self-regulatory benchmarks, impacting litigation and liability. We emphasized that if we are to rely on these private standard-setting bodies, they need to be inclusive. The government may need to provide grants or other support for NGO participation, as companies often have dominant representation and influence in these bodies. Civil society representatives may lack the resources for consistent engagement, necessitating support.
That said, I have doubts about whether these standard-setting bodies are equipped to answer all the questions we’re posing. They excel at establishing common industry vocabularies and technical measurement standards, such as metrics for reliability, robustness, or fairness when clear non-discrimination norms exist. However, if we ask them to develop norms themselves, such as resolving trade-offs between robustness and fairness or privacy protection and fairness, those are value judgments that fall outside the scope of technical standards.
We discussed standards or criteria for auditing and auditors, which the EU AI Act and Digital Services Act touch upon regarding the independence of auditors. Those kinds of standards could be set by standard-setting bodies or the government.
In summary, I suggest a two-tiered approach to standard-setting. We need standards where they can be effectively established, but we tried to avoid suggesting excessive reliance on private standard-setting bodies.
Tom Lenard: Are there already qualified auditors in the private sector capable of performing audits if self-imposed or government-imposed auditing requirements are introduced? Over time, an entire industry of auditors would likely emerge, but do such individuals already exist?
Ellen P. Goodman: Yes, they do. Cathy O’Neil and her firm ORCAA are prominent examples, and the Big Four accounting firms, such as PricewaterhouseCoopers and Deloitte, claim to have the capability to conduct AI audits.
However, a comprehensive suite of standards covering various performance aspects is not yet available. Auditors are developing their own audit criteria. Given the mandates in Europe, it is indeed a growing business, and they will provide that service.
The concern is ensuring the quality of audits, as there will be variations. Given the diverse and socio-technical nature of the risks, it’s challenging to imagine a single audit firm having the capability and appropriate staffing to cover the entire spectrum.
If the audit scope is narrow and the applied criteria are clearly defined, it’s reasonable. However, the notion of a single auditor providing a stamp of approval or certifying that a system has passed tests covering sustainability, energy use, human rights, and all other relevant aspects is not realistic. We may never have an audit ecosystem that functions in such a comprehensive manner.
Tom Lenard: By definition, standards or criteria must be established before audits can be conducted.
Ellen P. Goodman: Correct. In the absence of standards, auditors are creating their own, specifying the standards they used. They may audit entities against their own risk assessments. NIST has developed a thoughtful AI risk management framework that provides a structure for entities to assess themselves, although it doesn’t include normative thresholds or define acceptable risk levels.
If an entity conducts a self-assessment, auditors can evaluate whether the entity managed its risks to the level it set for itself. There is a degree of “fox guarding the henhouse” in terms of entities setting their own standards.
Sarah Oh Lam: The report doesn’t seem to address how the Federal Government itself uses AI and how state governments are using AI. Did you receive comments on this topic? While the government is monitoring the private sector, who is overseeing the government’s use of AI?
Ellen P. Goodman: We didn’t cover that aspect, but it has been addressed elsewhere. An executive order in 2021 required Federal Government agencies to publicize the AI systems they are using. Compliance may not be complete. President Biden’s most recent executive order on AI, released on Halloween 2023, reiterates and expands upon this requirement.
Federal agencies are taking inventory of all the AI they use, which will be made public. OMB released guidance, aligning with our recommendation for the Federal Government to leverage its procurement power, grant-making authority, and guidance on its own AI use to energize the market for trustworthy AI.
The OMB guidance provides more detail on evaluating, disclosing, and ensuring that the government only uses audited AI or AI with transparency features. Most of these requirements are limited to AI systems that impact safety or rights, reflecting a risk-based hierarchy for government AI use.
Sarah Oh Lam: If AI enhances government efficiency, intelligence, and productivity, one would hope to see cost reductions.
Ellen P. Goodman: While our accountability report focuses on risks and downsides, the executive order dedicates more than half of its content to encouraging government AI adoption and surging AI talent. It emphasizes immigration and increasing AI expertise and capacity within the government.
Sarah Oh Lam: The report primarily addressed downsides. Did the comments frequently mention labor market impacts and productivity changes?
Ellen P. Goodman: It’s interesting. We set out to ask technical questions about auditor preparedness, increasing the number of auditors, and the nature of disclosures. Our focus was quite technical.
However, among the 1,400 comments received, about 250 were from entities such as industry, academia, NGOs, trade associations, and state governments. The remaining comments were from individuals.
Many of those comments addressed labor impacts and copyright issues, although the latter was not our focus. The Copyright Office and USPTO have ongoing proceedings on copyright and AI systems, which is a significant topic.
Job displacement was a major concern expressed by commenters. We touched on it briefly, flagging the impact of AI on workers as an aspect to consider in documentation and disclosure. If a system card or model card is released, or if information is documented for regulators or legal purposes, the impact on labor would be a salient factor to include.
Sarah Oh Lam: We could discuss AI for hours and will continue to do so. Do you have any concluding thoughts on the report and the process? What are you currently working on?
Ellen P. Goodman: Regarding the report, it was a great education for me on how things actually happen in government, the different equities involved, the roles played by various agencies, and how they interact with Congress.
In terms of the report’s impact, while I can’t claim direct causation, it’s interesting to observe that many of our recommendations are now percolating in Congress. A new bill has been introduced to create more funding for the production of federal datasets and compute resources, ensuring that AI companies don’t monopolize data and compute power. This enables auditors, evaluators, and researchers to access the necessary resources for red teaming, and businesses to develop new and competing AI products.
The NIST AI Safety Institute has recently been staffed up and will focus on the standards question. It’s encouraging to see pieces of our work emerging in the world.
Currently, I’m working on the provenance piece I mentioned earlier. With my background in media law and information law, I’m very interested in how watermarking or authenticating content as it moves through AI generation is or is not important, and how watermarking and authentication of either the backend data or content outputs impact our information environment.
Sarah Oh Lam: Fascinating. Do you have a view on the copyrightability of AI-generated outputs? It’s a debatable issue.
Ellen P. Goodman: There are two questions: whether training AI on copyrighted works constitutes a copyright violation, and whether outputs are copyrightable when they are only partially or not at all human-generated.
The Copyright Office has stated that only human authorship is recognized by copyright law, and individuals need to disclaim the AI-generated portions of their work.
Personally, I don’t believe that this approach will be sustainable in the long term, either the disclaiming process or the notion that AI-generated content is uncopyrightable. I think we’re moving towards AI generation that is less tractable in terms of clearly delineating the human and AI contributions.
As AI becomes more seamlessly integrated into the creative process, it will become increasingly difficult to tease apart the human and AI components. At that point, I believe we will have to start granting copyright protection to works that are partially or wholly AI-generated. However, that’s just my personal prediction.
Sarah Oh Lam: An alternative approach could be treating prompts as trade secrets. Prompts can be very long and complex, taking hours or days to develop in order to achieve the desired output.
Ellen P. Goodman: Absolutely, that is intellectual labor. However, I’m not sure if it will always be that way. Some AI experts suggest that as systems become more sophisticated, they will know users so well that explicit prompting may not be necessary.
Sarah Oh Lam: So prompting itself could become automated. Fascinating! We want to thank you for joining the program and hope you’ll join us again in the future.
Ellen P. Goodman: Thank you so much. It was a pleasure talking with you.