Last week, at Responsible AI Leadership: Global Summit on Generative AI, co-hosted by the World Economic Forum and AI Commons, I had the chance to interact with colleagues from around the world who are thinking deeply and taking action on responsible AI. We gain a lot when we come together, discuss our shared values and goals, and collaborate to find the best paths forward.
A valuable reminder for me from these and recent similar conversations is the importance of learning from others and sharing what we have learned. Two of the most frequent questions I received were, “How do you do responsible AI at Microsoft?” and “How well placed are you to meet this moment?” Let me answer each.
At Microsoft, responsible AI is the set of steps that we take across the company to ensure that AI systems uphold our AI principles. It is both a practice and a culture. Practice is how we formally operationalize responsible AI across the company, through governance processes, policy requirements, and tools and training to support implementation. Culture is how we empower our employees to not only embrace responsible AI but be active champions of it.
When it comes to walking the walk of responsible AI, there are three key areas that I consider essential:
1. Leadership must be committed and involved: It’s not a cliché to say that for responsible AI to be meaningful, it starts at the top. At Microsoft, our Chairman and CEO Satya Nadella supported the creation of a Responsible AI Council to oversee our efforts across the company. The Council is chaired by Microsoft’s Vice Chair and President, Brad Smith, to whom I report, and our Chief Technology Officer Kevin Scott, who sets the company’s technology vision and oversees our Microsoft Research division. This joint leadership is core to our efforts, sending a clear signal that Microsoft is committed not only to leadership in AI, but leadership in responsible AI.
The Responsible AI Council convenes regularly, and brings together representatives of our core research, policy, and engineering teams dedicated to responsible AI, including the Aether Committee and the Office of Responsible AI, as well as senior business partners who are accountable for implementation. I find the meetings to be challenging and refreshing. Challenging because we are working on a hard set of problems and progress is not always linear. Yet, we know we need to confront difficult questions and drive accountability. The meetings are refreshing because there is collective energy and wisdom among the members of the Responsible AI Council, and we often leave with new ideas to help us advance the state of the art.
2. Build inclusive governance models and actionable guidelines: A primary responsibility of my team in the Office of Responsible AI is building and coordinating the governance structure for the company. Microsoft began work on responsible AI nearly seven years ago, and my office has existed since 2019. In that time, we learned that we needed to create a governance model that was inclusive and encouraged engineers, researchers, and policy practitioners to work shoulder-to-shoulder to uphold our AI principles. A single team or a single discipline tasked with responsible or ethical AI was not going to meet our objectives.
We took a page out of our playbooks for privacy, security, and accessibility, and built a governance model that embedded responsible AI across the company. We have senior leaders tasked with spearheading responsible AI within each core business group, and we continually train and grow a large network of responsible AI “champions” with a range of skills and roles for more regular, direct engagement. Last year, we publicly released the second version of our Responsible AI Standard, which is our internal playbook for how to build AI systems responsibly. I encourage people to take a look at it, hopefully draw some inspiration for their own organization, and I welcome feedback on it, too.
3. Invest in and empower your people: We have invested significantly in responsible AI over the years, with new engineering systems, research-led incubations, and, of course, people. We have nearly 350 people working on responsible AI, with just over a third of those (129 to be precise) dedicated to it full time; the remainder have responsible AI responsibilities as a core part of their jobs. Our community members hold positions in policy, engineering, research, sales, and other core functions, touching all aspects of our business. This number has grown since we began our responsible AI efforts in 2017, in line with our growing focus on AI.
Moving forward, we know we need to invest even more in our responsible AI ecosystem by hiring new and diverse talent, assigning additional talent to focus on responsible AI full time, and upskilling more people throughout the company. We have leadership commitments to do just that and will share more about our progress in the coming months.
Organizational structures matter to our ability to meet our ambitious goals, and we have made changes over time as our needs have evolved. One change that drew considerable attention recently involved our former Ethics & Society team, whose early work was important in enabling us to get where we are today. Last year, we made two key changes to our responsible AI ecosystem: first, we made critical new investments in the team responsible for our Azure OpenAI Service, which includes cutting-edge technology like GPT-4; and second, we infused some of our user research and design teams with specialist expertise by moving former Ethics & Society team members into those teams. Following those changes, we made the hard decision to wind down the remainder of the Ethics & Society team, which affected seven people. No decision affecting our colleagues is easy, but it was one guided by our experience of the most effective organizational structures for ensuring our responsible AI practices are adopted across the company.
A theme that is core to our responsible AI program and its evolution over time is the need to remain humble and learn continually. Responsible AI is a journey, and it is one that the entire company is on. Gatherings like last week’s Responsible AI Leadership Summit remind me that our collective work on responsible AI is stronger when we learn and innovate together. We will keep playing our part by sharing what we have learned, publishing documents such as our Responsible AI Standard and our Impact Assessment Template, as well as transparency documents we have developed for customers using our Azure OpenAI Service and for consumers using products like the new Bing. The AI opportunity ahead is tremendous. It will take ongoing collaboration and open exchanges between governments, academia, civil society, and industry to ground our progress toward the shared goal of AI that is in service of people and society.