
Agentic AI systems—AI systems that can pursue complex goals with limited direct supervision—are likely to be broadly useful if we can integrate them responsibly into our society. While such systems have substantial potential to help people more efficiently and effectively achieve their own goals, they also create risks of harm. In this white paper, we propose a definition of agentic AI systems and the parties in the agentic AI system life-cycle, and highlight the importance of agreeing on a set of baseline responsibilities and safety best practices for each of these parties. As our primary contribution, we offer an initial set of practices for keeping agents' operations safe and accountable, which we hope can serve as building blocks in the development of agreed baseline best practices. We enumerate the questions and uncertainties around operationalizing each of these practices that must be addressed before such practices can be codified. We then highlight categories of indirect impacts from the wide-scale adoption of agentic AI systems, which are likely to necessitate additional governance frameworks.