![Good governance essential for enterprises deploying AI](http://aiguido.com/wp-content/uploads/2023/07/Stephanie-Zhang_REV.png?resize=1200,600)
In association with JPMorgan Chase & Co.
Building fair and transparent systems with artificial intelligence has become an imperative for enterprises. AI can help enterprises create personalized customer experiences, streamline back-office operations from onboarding documents to internal training, prevent fraud, and automate compliance processes. But deploying intricate AI ecosystems with integrity requires good governance standards and metrics.
To deploy and manage the AI lifecycle—encompassing advanced technologies like machine learning (ML), natural language processing, robotics, and cognitive computing—responsibly and efficiently, firms like JPMorgan Chase employ best practices known as ModelOps.
These governance best practices involve “establishing the right policies, procedures, and controls for the development, testing, deployment, and ongoing monitoring of AI models to ensure the models are developed in compliance with regulatory and ethical standards,” says JPMorgan Chase managing director and general manager of ModelOps, AI and ML Lifecycle Management and Governance, Stephanie Zhang.
Because AI models are driven by data and environment changes, says Zhang, continuous compliance is critical to ensure that AI deployments meet regulatory requirements and establish clear ownership and accountability. Alongside these vigilant governance efforts to safeguard AI and ML, enterprises can encourage innovation by creating well-defined metrics to monitor AI models, employing widespread education, encouraging all stakeholders’ involvement in AI/ML development, and building integrated systems.
“The key is to establish a culture of responsibility and accountability so that everyone involved in the process understands the importance of this responsible behavior in producing AI solutions and is held accountable for their actions,” says Zhang.
Full Transcript
From MIT Technology Review, I’m Laurel Ruma, and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace.
Our topic today is building and deploying artificial intelligence with a focus on ModelOps, governance, and building transparent and fair systems. As AI becomes more complicated, but also more integrated into our daily lives, the need to balance governance and innovation is a priority for enterprises.
Two words for you: good governance.
Today we’re talking with Stephanie Zhang, managing director and general manager of ModelOps, AI and ML Lifecycle Management and Governance at JPMorgan Chase.
This podcast is produced in association with JPMorgan Chase.
Welcome Stephanie.
Thanks for having me, Laurel.
Glad to have you here. So, often people think of artificial intelligence as individual technologies or innovations, but could you describe the ecosystem of AI and how it can actually help different parts of the business?
Sure. I’ll start by explaining what AI is first. Artificial intelligence is the ability for a computer to think and learn. With AI, computers can do things that traditionally require human intelligence. AI can process large amounts of data in ways that humans cannot. The goal for AI is to be able to do things like recognizing patterns, making decisions, and judging like humans. And AI is not only a single technology or innovation, but rather an ecosystem of different technologies, tools, and techniques that work together to enable intelligent systems and applications. The AI ecosystem includes technologies such as machine learning, natural language processing, computer vision, robotics, and cognitive computing, among others. And finally, software: the business software that makes the decisions based on the predictive answers out of the models.
That is a really great way to set the context for using AI in the enterprise. So how does artificial intelligence help JPMorgan Chase build better products and services?
At JPMorgan Chase, our purpose is to make dreams possible for everyone, everywhere, every day. So we aim to be the most respected financial services firm in the world, serving corporations and individuals with exceptional client service, operational excellence, and a commitment to integrity, fairness, and responsibility, and we are a great place to work with a winning culture. Now, all of these things I have mentioned from the previous questions that you have asked, AI can contribute towards. So specifically, well, first of all, AI is actually involved in making better products and services from the back office to the front customer-facing applications. There are some examples here. For instance, I mentioned earlier improved customer experience. So we use AI to personalize the customer experience.
The second part is streamlined operations. So, behind the scenes a lot of the AI applications are in the spaces of streamlining our operations, and those range from client onboarding documents to training our AI-assisted agents and helping us with internal training, all of those things. Third, fraud detection and prevention. As a financial services company, that helps in terms of cybersecurity and in terms of credit card fraud detection and prevention, a lot of which is done by analyzing a large amount of data to detect anomalous situations. And then last but not least, trading and investment. AI helps our investment managers by providing more information, bringing information in an efficient manner, and recommending certain information and things to look at. Compliance as well. AI-powered tools can also help financial services firms such as ours comply with regulatory requirements by automating these compliance processes.
That is an excellent explanation, Stephanie. So more specifically, what is ModelOps, and how is it used with AI to help the firm innovate?
ModelOps is a set of best practices and tools used to manage the overall lifecycle of AI and machine learning models in the production environment. Specifically, it is focused more on the governance side of things, but from an end-to-end lifecycle management perspective: from the very beginning, when you approach an AI/ML project with the intention of the project and the outcome that you desire, to the model development, to how you process the data, to how you deploy the model, and the ongoing monitoring of the model to see if the model’s performance is still as intended. It is a structured approach to managing the entire lifecycle of AI models.
There is certainly a lot to think about here. So specifically, how does the governance that you mentioned earlier play into the development of artificial intelligence across JPMorgan Chase and the tools and services being built?
So, the governance program that we are developing around AI/ML not only ensures that the AI/ML models are developed in a responsible and trustworthy manner, but also increases efficiency and innovation in this space. Effective governance ensures that the models are developed in the right way and deployed in a responsible way as well. Specifically, it involves establishing the right policies, procedures, and controls for the development, testing, deployment, and ongoing monitoring of AI models, to ensure the models are developed in compliance with regulatory and ethical standards, and also governs how we handle data. And then on top of that, models are continuously monitored and updated to reflect changes in the environment.
So, as a subset of governance, what role does continuous compliance play in the process of governance?
Continuous compliance is an important part of governance in the deployment of AI models. It involves ongoing monitoring and validation of AI models to ensure that they are compliant with regulatory and ethical standards, as well as with use case objectives and the organization’s internal policies and procedures. We all know that AI model development is not like software development, where if you don’t change the code, nothing really changes; AI models are driven by data. So as the data and environment change, we need to continuously monitor the model’s performance to ensure the model is not drifting away from what we intended. Continuous compliance requires that AI models are continuously monitored and updated to reflect the changes that we observe in the environment, to ensure that they still comply with regulatory requirements. As we know, more and more regulatory rules are emerging internationally in the space of using data and using AI.
And this can be achieved through model monitoring tools: capturing data in real time, providing an alert when the model is out of compliance, and then alerting the developers to make changes as required. But one of the other important things is not just detecting the changes through monitoring, but also establishing clear ownership and accountability for compliance. This can be done through an established responsibility matrix, with governance or oversight boards that continuously review these models. And it also involves independent validation of how the model is built and how the model is deployed. So in summary, continuous compliance plays a really important role in the governance of AI models.
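To make the drift monitoring described here concrete, below is a minimal sketch in Python. The population stability index (PSI) metric and the 0.2 alert threshold are common industry rules of thumb, not JPMorgan Chase's actual tooling, and the variable names and synthetic data are purely illustrative.

```python
import numpy as np

def population_stability_index(baseline, live, bins=10):
    """Compare a live distribution against its validation-time baseline.

    Higher PSI means the production data has drifted further from what
    the model was validated on.
    """
    edges = np.histogram_bin_edges(baseline, bins=bins)
    live = np.clip(live, edges[0], edges[-1])  # fold outliers into edge bins
    p_base = np.histogram(baseline, bins=edges)[0] / len(baseline)
    p_live = np.histogram(live, bins=edges)[0] / len(live)
    p_base = np.clip(p_base, 1e-6, None)  # floor to avoid log(0)
    p_live = np.clip(p_live, 1e-6, None)
    return float(np.sum((p_live - p_base) * np.log(p_live / p_base)))

rng = np.random.default_rng(0)
baseline_scores = rng.normal(0.0, 1.0, 10_000)  # scores at validation time
live_scores = rng.normal(0.6, 1.0, 10_000)      # shifted production scores

psi = population_stability_index(baseline_scores, live_scores)
if psi > 0.2:  # common rule-of-thumb threshold for material drift
    print(f"ALERT: model drift detected, PSI={psi:.2f}; route to model owner")
```

In a real continuous-compliance setup, a check like this would run on a schedule against each monitored input feature and model score, with the alert routed to the owners named in the responsibility matrix.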
That is great. Thank you for that detailed explanation. So since you personally specialize in governance, how can enterprises balance providing safeguards for artificial intelligence and machine learning deployment while still encouraging innovation?
So balancing safeguards for AI/ML deployment with encouraging innovation can be a really difficult task for enterprises. It's large scale, and it's changing extremely fast. However, it is critically important to have that balance. Otherwise, what's the point of having the innovation here? There are a few key strategies that can help achieve this balance. Number one, establish clear governance policies and procedures: review and update existing policies where they may not suit AI/ML development and deployment, and add new policies and procedures where needed, such as monitoring and continuous compliance, as I mentioned earlier. Second, involve all the stakeholders in the AI/ML development process. That starts with data engineers, the business, the data scientists, and also the ML engineers who deploy the models in production; model reviewers; business stakeholders; and risk organizations. And that is what we are focusing on. We are building integrated systems that provide transparency, automation, and a good user experience from beginning to end.
So all of this can help with streamlining the process and bringing everyone together. Third, we want to build systems that not only allow this overall workflow, but also capture the data that enables automation. Oftentimes many of the activities happening in the ML lifecycle process are done through different tools because they reside in different groups and departments. And that results in participants manually sharing information, reviewing, and signing off. So having an integrated system is critical. Four, monitoring and evaluating the performance of AI/ML models, as I mentioned earlier, is really important, because if we don't monitor the models, they can even have a negative effect relative to their original intent. And doing this manually will stifle innovation. Model deployment requires automation, so having that is essential in order to allow your models to be developed and deployed in the production environment, actually operating. It's reproducible, it's operating in production.
It's very, very important. And that means having well-defined metrics to monitor the models, covering the infrastructure and the model performance itself as well as the data. Finally, provide training and education. Because it's a team sport, everyone comes from a different background and plays a different role, so having that cross-understanding of the entire lifecycle process is really important. And education on what the right data to use is, and whether we are using the data appropriately for the use cases, will prevent a much later rejection of the model deployment. So, all of these I think are key to balancing governance and innovation.
So there is another topic to be discussed here, and you touched on it in your answer: how does everyone understand the AI process? Could you describe the role of transparency in the AI/ML lifecycle, from creation to governance to implementation?
Sure. So AI/ML is still fairly new, it's still evolving, but generally, people have settled on a high-level process flow: defining the business problem, acquiring the data and processing the data to solve the problem, then model development, and then model deployment. But prior to the deployment, we do a review in our company to ensure the models are developed according to the right responsible AI principles, and then there is ongoing monitoring. When people talk about the role of transparency, it's about the ability to capture all the metadata artifacts across the entire lifecycle, all the lifecycle events. All this metadata needs to be transparent, with timestamps, so that people can know what happened. And that is how we share the information. Having this transparency is so important because it builds trust and it ensures fairness. We need to make sure that the right data is used, and it facilitates explainability.
Models need to be explained: how does a model make its decisions? Transparency also helps support the ongoing monitoring, and it can be achieved in several ways. The one thing that we stress very much from the beginning is understanding what the AI initiative's goals are, the use case goals, and what the intended data use is. We review that. How did you process the data? What is the data lineage and the transformation process? What algorithms are being used, and what are the ensemble algorithms that are being used? And the model specification needs to be documented and spelled out. What are the limitations of when the model should be used and when it should not be used? Explainability, auditability: can we actually track how this model was produced, all the way through the model lineage itself? And also, technology specifics such as the infrastructure and the containers involved, because these actually impact the model performance; where it is deployed; which business application is actually consuming the output prediction from the model; and who can access the decisions from the model. So, all of these are part of the transparency subject.
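To make the timestamped metadata capture concrete, here is a minimal sketch of the kind of lifecycle record such transparency implies. The stage names, field names, roles, and the `fraud-scoring-v3` model are all hypothetical illustrations, not JPMorgan Chase's actual system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LifecycleEvent:
    """One timestamped event in a model's lifecycle audit trail."""
    stage: str      # e.g. "data_processing", "training", "review", "deployment"
    actor: str      # the role responsible for this step
    details: dict   # stage-specific metadata: lineage, algorithms, limitations...
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

@dataclass
class ModelRecord:
    model_id: str
    events: list = field(default_factory=list)

    def log(self, stage: str, actor: str, **details) -> None:
        self.events.append(LifecycleEvent(stage, actor, details))

    def audit_trail(self):
        """Chronological view answering 'what happened, when, and by whom'."""
        return [(e.timestamp, e.stage, e.actor) for e in self.events]

# Hypothetical walk through the lifecycle stages described above.
record = ModelRecord("fraud-scoring-v3")
record.log("data_processing", "data_engineer",
           lineage="transactions_2023q1", transform="normalize_amounts")
record.log("training", "data_scientist",
           algorithm="gradient_boosting", validation_auc=0.91)
record.log("review", "model_reviewer",
           approved=True, limitations="card-present transactions only")
record.log("deployment", "ml_engineer",
           container="cpu-small", consumer="card-auth-service")

for ts, stage, actor in record.audit_trail():
    print(ts, stage, actor)
```

The design choice worth noting is that every stage writes to one shared, timestamped record, so reviewers and oversight boards query a single audit trail rather than chasing artifacts across separate tools.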
Yeah, that is quite extensive. So considering that AI is a fast-changing field with many emerging technologies like generative AI, how do teams at JPMorgan Chase keep abreast of these new inventions while also choosing when and where to deploy them?
The speed of innovation in the technology field is growing exponentially fast. Of course, AI technology is still emerging, and it is a really challenging task. However, there are a few things that we can do, and are doing, to help the teams keep abreast of these new innovations. One, we build a strong internal knowledge base. We have a lot of talent at JPMorgan Chase, and the teams continue to build their knowledge base; different teams evaluate different technologies, and they share their findings. And we attend conferences, webinars, and industry events, so that's really important. Second, we engage with industry experts, thought leaders, and vendors.
Oftentimes, startups have the brightest ideas as to what to do with the latest technology. And we are also very much involved with educational institutes and researchers as well. Those help us learn about the latest developments in the field. And then the third thing is that we do a lot of pilot projects and POCs [proofs of concept]. We have hackathons in the firm. And so JPMorgan Chase is a place where employees from all roles are encouraged to come up with innovative ideas. And the fourth thing is we have a lot of cross-functional teams that collaborate. Collaboration is where innovation truly emerges. That is where new ideas and new ways of approaching an existing problem happen, and different minds start thinking about problems from different angles. So those are all the amazing ways that we benefit from one another.
So this is a really great conversation, because although, as you are saying, technology is clearly at the crux of what you do, people also play a big part in developing and deploying AI and ML models. So, how do you go about ensuring that those who develop the models and manage the data operate responsibly?
This is a topic I'm very enthusiastic about, because first of all, I think having a diverse team is always the winning strategy. And particularly in an AI/ML world, we are using data to solve problems, so understanding bias, being conscious of those things, and avoiding the trap of unintentionally using data in the wrong way is very important. So, what that means is that there are several ways to promote responsible behaviors, because models are built by people. One, we establish clear policies and guidelines. Financial services firms tend to have strong risk management, so we are very strong in that sense. However, with the emerging field of AI/ML, we are increasing the number of policies and guidelines. And, two, very important is providing training and education. Oftentimes, as data scientists, people are more focused on technology. They are focused on building a model with the best performing scores, the best accuracy, and perhaps are not so well versed in terms of: am I using the right data? Should I be using this data?
For all of those things, we need to have continued education so that people know how to build models responsibly. Then we want to foster a culture of responsibility. Within JPMorgan Chase, there are various groups that have already sprung up to talk about this. Responsible AI and ethical AI are major topics here in our firm. And data privacy and ethics are topics not only in our training classes but also in various employee groups. Ensuring transparency: this is where transparency is very important. If people do not know what they are doing, and there is no separate group able to monitor and review the models being produced, they may not learn the right way of doing it.
The key is to establish a culture of responsibility and accountability, so that everyone involved in the process understands the importance of this responsible behavior in producing AI solutions and is held accountable for their actions.
So, a quick follow-up to that important people aspect of artificial intelligence. What are some best practices JPMorgan Chase employs to ensure that diversity is taken into consideration, both when hiring new employees and when building and then deploying those AI models?
So, JPMorgan Chase is present in over 100 markets across the globe, right? We are actively seeking out diverse candidates throughout the world: 49% of our global hires are women, and 58% of the new US hires are ethnically diverse. So we have been at the forefront and continue to hire diversely. So, ensuring diverse hiring practices is very important. Second, we need to create diverse teams as well. Diverse teams include individuals with diverse backgrounds from diverse fields, not only computer science and AI/ML; sociology and other fields are also important, and they all bring rich perspectives and creative problem-solving techniques.
And the other thing, again, I am going back to this, is monitoring and auditing AI models for bias. Not all AI models require bias monitoring; we tier the models depending on their usage, and those in the relevant tiers do have to get evaluated for it. It is very, very important to follow the risk management framework and identify potential issues before they become significant problems, and then to ensure that bias in the data and bias in the model development are detected through a sufficient amount of testing. And, finally, fostering a culture of inclusivity. Creating a culture of inclusivity that values diversity and encourages different perspectives can shape how we develop the models. So, we hire diverse candidates, we form teams that are diverse, but we also need to continuously reinforce this culture of DEI. That includes establishing training programs and promoting communication among the communities of AI/ML folks.
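A tiering scheme of the kind described here could be sketched as follows. The tier rules, the parity-gap tolerance, and the function names are illustrative assumptions for the sake of the example, not the firm's actual risk framework.

```python
def assign_tier(affects_customer_outcomes: bool, high_stakes_use_case: bool) -> dict:
    """Hypothetical tiering: the more a model affects people, the more review it gets."""
    if affects_customer_outcomes and high_stakes_use_case:
        tier = 1  # e.g. credit decisions: strictest controls
    elif affects_customer_outcomes:
        tier = 2  # e.g. personalization: bias monitoring still required
    else:
        tier = 3  # e.g. internal document routing: standard controls only
    return {
        "tier": tier,
        "bias_evaluation_required": tier <= 2,
        "independent_validation_required": tier == 1,
    }

def demographic_parity_gap(positive_rates_by_group: dict) -> float:
    """Gap between the most and least favored groups' positive-outcome rates."""
    rates = list(positive_rates_by_group.values())
    return max(rates) - min(rates)

# A tier-1 model must pass a bias evaluation before deployment.
controls = assign_tier(affects_customer_outcomes=True, high_stakes_use_case=True)
gap = demographic_parity_gap({"group_a": 0.42, "group_b": 0.35})
if controls["bias_evaluation_required"] and gap > 0.05:  # hypothetical tolerance
    print(f"Bias review needed: parity gap {gap:.2f} exceeds tolerance")
```

The point of tiering is proportionality: every model gets standard controls, but only models whose outputs affect people trigger the heavier bias evaluations and independent validation.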
We talk about how we produce models and how we develop models, and what the things are that we should be looking for. So, promoting diversity and inclusion in the development and deployment of AI models requires ongoing effort and continuous improvement, and it is really important to ensure that diverse viewpoints are represented throughout the entire process.
This has been a really great discussion, Stephanie, but one last question. Much of this technology seems to be emerging so quickly, but how do you envision the future of ModelOps in the next five years?
So, over the past few years, the industry has matured from model development to full AI lifecycle management, and now we see technology has evolved from the ML platform towards the AI ecosystem, from just making ML work to responsible AI. So, in the near future, I expect ModelOps to continue to evolve and become increasingly sophisticated as organizations increasingly adopt AI and machine learning technology. Several of the key trends that I have seen that are likely to shape the future of ModelOps include increased automation. As the volume and complexity of AI models continue to grow, automation will become increasingly important in managing the entire model lifecycle. We just cannot keep up if we do not automate. So from development to deployment and monitoring, this requires the development of much more advanced tools and platforms that can automate many of the tasks currently still performed mostly by human operators.
The second thing is a greater focus on explainability and interpretability. As AI models become more complex and are used to make more important decisions, there will be an increased focus on ensuring that models are explainable and interpretable, so that stakeholders can understand how decisions are made. This will require the development of new techniques and tools for model interpretability. Third, integration with DevOps. As I mentioned earlier, just making the ML model work is no longer enough. Many models being trained are now moving into the production environment. So ModelOps will continue to integrate with DevOps, enabling the organization to manage both the software and the AI models in a unified manner. And this will require the development of new tools and platforms to enable seamless integration of AI model development and deployment with software development and deployment.
And then there is the increased use of cloud-based services. As more organizations move their operations to the cloud, there will be increased use of cloud-based services for AI model development and deployment. And this will require new tools, again, to integrate seamlessly with cloud-based infrastructure. So the future of ModelOps is likely to be definitely more automation, an increased focus on explainability and interpretability, tighter integration with DevOps, and increased use of the cloud.
Well, thank you very much, Stephanie, for what has been a fantastic episode of the Business Lab.
My pleasure. Thanks for having me.
That was Stephanie Zhang, the managing director and general manager of ModelOps, AI and ML Lifecycle Management and Governance at JPMorgan Chase, who I spoke with from Cambridge, Massachusetts, the home of MIT and MIT Technology Review, overlooking the Charles River.
That's it for this episode of Business Lab. I'm your host, Laurel Ruma. I'm the director of Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology, and you can also find us in print, on the web, and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com.
This show is available wherever you get your podcasts. If you enjoyed this episode, we hope you will take a moment to rate and review us. Business Lab is a production of MIT Technology Review. This episode was produced by Giro Studios. Thanks for listening.