Post-deployment monitoring: checking AI performance in the real world
In the rapidly evolving world of Artificial Intelligence (AI), post-deployment monitoring and reporting has become a significant concern for regulators and policymakers, driven by AI's growing impact on many aspects of people's lives, often without their awareness.
AI is being deployed across sectors, from drafting court filings and creating songs to discovering new drugs. However, its use can cause society-level harms, making ongoing testing and monitoring essential. For instance, AI-supported hackers might access personal data, or recruiters could use AI to screen job applicants, raising concerns about privacy and fairness.
Pressure to use AI in the workplace is mounting, potentially leading to unsafe deployments. To address this, current regulations and proposals focus on establishing strong governance, documentation, and reporting requirements. A notable example is the Texas Responsible Artificial Intelligence Governance Act (TRAIGA), set to take effect on January 1, 2026.
TRAIGA outlines key aspects of post-deployment monitoring and reporting, including documentation and recordkeeping, detailed data descriptions, performance metric reporting, post-deployment monitoring processes, and user safeguards. The law empowers the Texas attorney general to issue a civil investigative demand to obtain this information. TRAIGA also provides a 60-day cure period to address violations and offers safe harbor protections for voluntary compliance with recognized frameworks like the U.S. National Institute of Standards and Technology’s AI Risk Management Framework (NIST AI RMF).
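To make these obligations more concrete, the sketch below shows one way a deployer might log outcomes and roll them into periodic performance reports. It is a minimal illustration only: the record fields, metrics, and thresholds are assumptions, not requirements drawn from TRAIGA or the NIST AI RMF.

```python
"""Minimal sketch of post-deployment performance logging and reporting.

Illustrative only: field names, metrics, and thresholds are assumptions,
not requirements taken from TRAIGA or the NIST AI RMF.
"""
from dataclasses import dataclass
from datetime import datetime
from statistics import mean


@dataclass
class OutcomeRecord:
    model_version: str          # which deployed model produced the output
    timestamp: datetime
    correct: bool               # ground truth, once it becomes available
    user_flagged: bool = False  # user-reported problem (a basic safeguard)


@dataclass
class MonitoringReport:
    model_version: str
    window_start: datetime
    window_end: datetime
    accuracy: float
    user_flag_rate: float


def build_report(records: list[OutcomeRecord]) -> MonitoringReport:
    """Aggregate logged outcomes into a periodic performance report."""
    return MonitoringReport(
        model_version=records[0].model_version,
        window_start=min(r.timestamp for r in records),
        window_end=max(r.timestamp for r in records),
        accuracy=mean(1.0 if r.correct else 0.0 for r in records),
        user_flag_rate=mean(1.0 if r.user_flagged else 0.0 for r in records),
    )


def needs_escalation(report: MonitoringReport,
                     min_accuracy: float = 0.90,
                     max_flag_rate: float = 0.05) -> bool:
    """Escalate when metrics drift past (illustrative) thresholds."""
    return report.accuracy < min_accuracy or report.user_flag_rate > max_flag_rate
```

Reports like these, generated on a regular cadence, are the kind of documentation a regulator could later request through a civil investigative demand.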
Compared with the EU AI Act, TRAIGA takes a lighter, governance-supportive approach, focusing on documentation, transparency, and accountability after deployment without upfront licensing burdens. Similar trends are emerging in other U.S. states, such as Colorado and California.
However, there is a significant gap between the current state of post-deployment monitoring and reporting and the ideal scenario, driven primarily by limited post-deployment information overall, information asymmetry between developers and outside observers, and privacy and business-sensitivity concerns. Ideally, developers, regulators, and civil society organizations would be able to track instances of misuse of an AI model and severe malfunctions.
Post-deployment monitoring for AI is still a nascent field, and both governments and large AI companies have a role to play in developing and improving the monitoring ecosystem. As of March 2024, 100% of Fortune 500 companies use AI systems, yet regulators often lack information on whether entities with critical roles in their country, such as courts or utility companies, are using AI.
In the ideal scenario, model integration and usage information would be disclosed and shared with regulators to inform decisions on how to regulate developers, hosts, application providers, and deployers. This would enable regulators to track AI system integration, application usage, and real-world consequences after deployment, ultimately ensuring the safety and fairness of AI in our society.
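As a rough illustration of what such disclosure could look like, the hypothetical schema below ties together developer, host, deployer, and application in a single registry entry. The field names and JSON format are assumptions made for the sake of example; no regulator or statute discussed here prescribes this structure.

```python
"""Hypothetical model-integration disclosure record.

Field names and the JSON format are assumptions for illustration; no statute
or framework discussed above prescribes this structure.
"""
import json
from dataclasses import dataclass, asdict


@dataclass
class IntegrationDisclosure:
    developer: str         # who built the underlying model
    host: str              # who serves it (API provider or cloud host)
    deployer: str          # who puts it in front of end users
    application: str       # what the system is used for
    affected_domain: str   # e.g. hiring, credit, healthcare
    incident_contact: str  # where misuse or malfunction reports should go


def to_registry_entry(disclosure: IntegrationDisclosure) -> str:
    """Serialize a disclosure for submission to a (hypothetical) registry."""
    return json.dumps(asdict(disclosure), indent=2)


if __name__ == "__main__":
    example = IntegrationDisclosure(
        developer="ExampleLabs",
        host="ExampleCloud",
        deployer="Acme Recruiting",
        application="resume screening assistant",
        affected_domain="hiring",
        incident_contact="ai-incidents@acme.example",
    )
    print(to_registry_entry(example))
```

Even a simple record like this would let a regulator see which developers, hosts, and deployers sit behind an AI application used in a sensitive domain.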
- As technology advances and gadgets integrate AI, concerns about potential harms such as privacy violations and unfair practices become more salient, particularly in the workplace.
- To address these concerns, legislation like the Texas Responsible Artificial Intelligence Governance Act (TRAIGA) outlines requirements for post-deployment monitoring and reporting, emphasizing documentation, transparency, and accountability.
- In the future, it is crucial for developers, regulators, and civil society organizations to work together to improve post-deployment monitoring systems, enabling the tracking of AI misuse and severe malfunctions and ensuring the fairness and safety of AI across sectors.