Vivek Desai, Chief Technology Officer, North America at RLDatix – Interview Series

Vivek Desai is the Chief Technology Officer of North America at RLDatix, a connected healthcare operations software and services company. RLDatix is on a mission to change healthcare. They help organizations drive safer, more efficient care by providing governance, risk and compliance tools that drive overall improvement and safety.

What initially attracted you to computer science and cybersecurity?

I was drawn to the complexities of what computer science and cybersecurity try to solve – there is always an emerging challenge to explore. A great example of this is when the cloud first began gaining traction. It held great promise, but also raised questions around workload security. It was very clear early on that traditional methods were a stopgap, and that organizations across the board would need to develop new processes to effectively secure workloads in the cloud. Navigating these new methods was a really exciting journey for me and many others working in this field. It's a dynamic and evolving industry, so every day brings something new and exciting.

Could you share some of the current responsibilities that you have as CTO of RLDatix?

Currently, I'm focused on leading our data strategy and finding ways to create synergies between our products and the data they hold, to better understand trends. Many of our products house similar types of data, so my job is to find ways to break those silos down and make it easier for our customers, both hospitals and health systems, to access the data. Alongside this, I'm also working on our global artificial intelligence (AI) strategy to inform this data access and utilization across the ecosystem.

Staying current on emerging trends in various industries is another crucial aspect of my role, to ensure we're heading in the right strategic direction. I'm currently keeping a close eye on large language models (LLMs). As an organization, we're working to find ways to integrate LLMs into our technology to empower and enhance humans, specifically healthcare providers, reduce their cognitive load and enable them to focus on taking care of patients.

In your LinkedIn blog post titled “A Reflection on My 1st Year as a CTO,” you wrote, “CTOs don’t work alone. They’re part of a team.” Could you elaborate on some of the challenges you’ve faced and how you’ve tackled delegation and teamwork on projects that are inherently technically difficult?

The role of a CTO has fundamentally changed over the last decade. Gone are the days of working in a server room. Now, the job is much more collaborative. Together, across business units, we align on organizational priorities and turn those aspirations into technical requirements that drive us forward. Hospitals and health systems currently navigate so many daily challenges, from workforce management to financial constraints, and the adoption of new technology may not always be a top priority. Our biggest goal is to showcase how technology can help mitigate these challenges, rather than add to them, and the overall value it brings to their business, employees and patients at large. This effort can't be done alone, or even within my team, so the collaboration spans multidisciplinary units to develop a cohesive strategy that will showcase that value, whether that stems from giving customers access to unlocked data insights or activating processes they're currently unable to perform.

What is the role of artificial intelligence in the future of connected healthcare operations?

As integrated data becomes more available with AI, it can be utilized to connect disparate systems and improve safety and accuracy across the continuum of care. This idea of connected healthcare operations is a category we're focused on at RLDatix because it unlocks actionable data and insights for healthcare decision makers – and AI is integral to making that a reality.

A non-negotiable aspect of this integration is ensuring that data usage is secure and compliant, and that risks are understood. We're the market leader in policy, risk and safety, which means we have an ample amount of data to train foundational LLMs with more accuracy and reliability. To achieve true connected healthcare operations, the first step is merging the disparate solutions, and the second is extracting data and normalizing it across those solutions. Hospitals will benefit greatly from a group of interconnected solutions that can combine data sets and provide actionable value to users, rather than maintaining separate data sets from individual point solutions.
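To make the normalization step concrete, here is a minimal sketch of mapping records from two separate point solutions onto one common schema so they can be analyzed together. The field names and harm values are invented for illustration and do not reflect any actual RLDatix product schema.

```python
# Minimal sketch: normalize records from two point solutions into one common
# schema. Field names are illustrative assumptions, not a real product schema.
import pandas as pd

# Records exported from two hypothetical systems with different field names.
system_a = pd.DataFrame([
    {"incident_id": "A-100", "occurred": "2024-03-01", "harm": "none"},
])
system_b = pd.DataFrame([
    {"event_ref": "B-77", "event_date": "2024-03-02", "severity": "moderate"},
])

COMMON_COLUMNS = ["id", "date", "harm_level", "source_system"]

def to_common(df: pd.DataFrame, mapping: dict, source: str) -> pd.DataFrame:
    """Rename source-specific columns to the shared schema and tag the origin."""
    out = df.rename(columns=mapping)
    out["source_system"] = source
    return out[COMMON_COLUMNS]

combined = pd.concat([
    to_common(system_a, {"incident_id": "id", "occurred": "date", "harm": "harm_level"}, "system_a"),
    to_common(system_b, {"event_ref": "id", "event_date": "date", "severity": "harm_level"}, "system_b"),
], ignore_index=True)

print(combined)
```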

In a recent keynote, Chief Product Officer Barbara Staruk shared how RLDatix is leveraging generative AI and large language models to streamline and automate patient safety incident reporting. Could you elaborate on how this works?

This is a really significant initiative for RLDatix and a great example of how we're maximizing the potential of LLMs. When hospitals and health systems complete incident reports, there are currently three standard formats for determining the level of harm indicated in the report: the Agency for Healthcare Research and Quality's Common Formats, the National Coordinating Council for Medication Error Reporting and Prevention and the Healthcare Performance Improvement (HPI) Safety Event Classification (SEC). Right away, we can easily train an LLM to read through the text in an incident report. If a patient passes away, for instance, the LLM can seamlessly pick up that information. The challenge, however, lies in training the LLM to determine context and distinguish between more complex categories, such as severe permanent harm, a taxonomy included in the HPI SEC, versus severe temporary harm. If the person reporting doesn't include enough context, the LLM won't be able to determine the appropriate category level of harm for that particular patient safety incident.
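As an illustration of this kind of classification, here is a minimal sketch that maps a free-text incident narrative onto harm-level categories with an off-the-shelf zero-shot classifier. The labels only approximate HPI SEC-style levels and the model choice is an assumption; this is not RLDatix's production pipeline.

```python
# Minimal sketch: classify an incident narrative into harm-level categories.
# Labels approximate HPI SEC-style levels; model choice is an assumption.
from transformers import pipeline

HARM_LEVELS = [
    "no harm",
    "mild temporary harm",
    "severe temporary harm",
    "severe permanent harm",
    "death",
]

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

narrative = (
    "Patient received double the prescribed anticoagulant dose; "
    "bleeding was controlled and the patient recovered fully within two days."
)

result = classifier(narrative, candidate_labels=HARM_LEVELS)
best_label, best_score = result["labels"][0], result["scores"][0]

# Low-confidence predictions are routed back to a human reviewer, since the
# narrative may simply lack the context needed to separate adjacent categories.
if best_score < 0.5:
    print("Insufficient context - flag for manual review")
else:
    print(f"Predicted harm level: {best_label} ({best_score:.2f})")
```

The confidence threshold and manual-review fallback reflect the point above: if the narrative doesn't carry enough context, no model can reliably choose between adjacent harm categories.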

RLDatix is aiming to implement a simpler taxonomy, globally, across our portfolio, with concrete categories that can be easily distinguished by the LLM. Over time, users will be able to simply write what occurred and the LLM will handle it from there by extracting all of the essential information and prepopulating incident forms. Not only is this a big time-saver for an already-strained workforce, but as the model becomes even more advanced, we'll also be able to identify critical trends that will enable healthcare organizations to make safer decisions across the board.

What are some other ways that RLDatix has begun to incorporate LLMs into its operations?

Another way we're leveraging LLMs internally is to streamline the credentialing process. Each provider's credentials are formatted differently and contain unique information. To put it into perspective, consider how everyone's resume looks different – from fonts, to work experience, to education and overall formatting. Credentialing is similar. Where did the provider attend college? What's their certification? What articles have they been published in? Every healthcare professional provides that information in their own way.

At RLDatix, LLMs enable us to read through these credentials and extract all that data into a standardized format so that those working in data entry don't have to search extensively for it, enabling them to spend less time on the administrative component and focus their time on meaningful tasks that add value.
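A minimal sketch of this extract-and-standardize pattern follows. The `call_llm` function is a hypothetical stand-in for whatever LLM API is used, and the schema fields are illustrative assumptions rather than RLDatix's actual data model.

```python
# Minimal sketch: normalize free-form credential text into a standard record.
# `call_llm` is a hypothetical stand-in; the fields are illustrative only.
import json
from dataclasses import dataclass
from typing import List

@dataclass
class CredentialRecord:
    provider_name: str
    medical_school: str
    board_certifications: List[str]
    publications: List[str]

PROMPT_TEMPLATE = """Extract the following fields from the credential text
and return them as JSON with exactly these keys:
provider_name, medical_school, board_certifications, publications.

Credential text:
{text}
"""

def normalize_credentials(raw_text: str, call_llm) -> CredentialRecord:
    """Ask the LLM for structured JSON, then validate it into a dataclass."""
    response = call_llm(PROMPT_TEMPLATE.format(text=raw_text))
    fields = json.loads(response)
    return CredentialRecord(
        provider_name=fields["provider_name"],
        medical_school=fields["medical_school"],
        board_certifications=fields["board_certifications"],
        publications=fields["publications"],
    )
```

Whatever the model returns, forcing it through a fixed schema like this is what lets data-entry staff skip the manual searching described above.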

Cybersecurity has always been challenging, especially with the shift to cloud-based technologies. Could you discuss some of these challenges?

Cybersecurity is difficult, which is why it's essential to work with the right partner. Ensuring LLMs remain secure and compliant is the most important consideration when leveraging this technology. If your organization doesn't have the dedicated staff in-house to do this, it can be incredibly challenging and time-consuming. That's why we work with Amazon Web Services (AWS) on most of our cybersecurity initiatives. AWS helps us instill security and compliance as core principles within our technology so that RLDatix can focus on what we really do well – which is building great products for our customers in all our respective verticals.

What are some of the new security threats that you have seen with the recent rapid adoption of LLMs?

From an RLDatix perspective, there are several considerations we're working through as we develop and train LLMs. An important focus for us is mitigating bias and unfairness. LLMs are only as good as the data they're trained on. Factors such as gender, race and other demographics can carry many inherent biases because the dataset itself is biased. For instance, consider how the southeastern United States uses the word “y’all” in everyday language. This is a unique language bias inherent to a particular patient population that researchers must consider when training an LLM to accurately distinguish language nuances compared with other regions. These kinds of biases have to be handled at scale when leveraging LLMs within healthcare, as training a model within one patient population doesn't necessarily mean that model will work in another.
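One common way to surface exactly this kind of population-specific gap is to break model performance out by subgroup rather than looking only at aggregate accuracy. The sketch below assumes a `model` object with a `predict` method and a labeled evaluation set with `region`, `narrative` and `harm_level` columns; all of these names are illustrative assumptions.

```python
# Minimal sketch: check whether a trained model performs equally well across
# patient populations (here, regions). Column names and the `model` object
# are assumptions for illustration; the point is the per-subgroup breakdown.
import pandas as pd
from sklearn.metrics import accuracy_score

def accuracy_by_region(model, eval_df: pd.DataFrame) -> pd.Series:
    """Return accuracy per region so population-specific gaps become visible."""
    scores = {}
    for region, group in eval_df.groupby("region"):
        preds = model.predict(group["narrative"])
        scores[region] = accuracy_score(group["harm_level"], preds)
    return pd.Series(scores).sort_values()

# A model that looks fine on aggregate may still underperform on one region,
# which is the signal to augment or rebalance training data for that population.
```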

Maintaining security, transparency and accountability are also big focus points for our organization, as is mitigating any opportunities for hallucinations and misinformation. Ensuring that we're actively addressing any privacy concerns, that we understand how a model reached a certain answer, and that we have a secure development cycle in place are all essential components of effective implementation and maintenance.

What are some other machine learning algorithms that are used at RLDatix?

Using machine learning (ML) to uncover critical scheduling insights has been an interesting use case for our organization. In the UK specifically, we've been exploring how to leverage ML to better understand how rostering, or the scheduling of nurses and doctors, occurs. RLDatix has access to a large amount of scheduling data from the past decade, but what can we do with all of that information? That's where ML comes in. We're utilizing an ML model to analyze that historical data and provide insight into how a staffing situation may look two weeks from now, in a particular hospital or a certain region.
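A minimal sketch of this kind of two-weeks-ahead forecast from historical rostering data is shown below. The column names, feature set and model choice are illustrative assumptions, not RLDatix's production model.

```python
# Minimal sketch: forecast staffing demand 14 days ahead from historical
# rostering data. Columns and features are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

def build_features(df: pd.DataFrame) -> pd.DataFrame:
    """Expects one row per ward per day with 'date', 'ward', 'shifts_filled'."""
    df = df.sort_values("date").copy()
    df["day_of_week"] = pd.to_datetime(df["date"]).dt.dayofweek
    # Demand 14 days from now is the prediction target.
    df["target_shifts_in_14d"] = df.groupby("ward")["shifts_filled"].shift(-14)
    # Recent history as predictors.
    for lag in (1, 7, 14):
        df[f"shifts_lag_{lag}"] = df.groupby("ward")["shifts_filled"].shift(lag)
    return df.dropna()

def train_forecaster(history: pd.DataFrame) -> GradientBoostingRegressor:
    feats = build_features(history)
    X = feats[["day_of_week", "shifts_lag_1", "shifts_lag_7", "shifts_lag_14"]]
    y = feats["target_shifts_in_14d"]
    return GradientBoostingRegressor().fit(X, y)
```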

That specific use case is a very achievable ML model, but we're pushing the needle even further by connecting it to real-life events. For instance, what if we looked at every soccer schedule throughout the area? We know firsthand that sporting events typically result in more injuries and that a local hospital will likely have more inpatients on the day of an event compared with a typical day. We're working with AWS and other partners to explore what public data sets we can seed to make scheduling even more streamlined. We already have data suggesting we'll see an uptick in patients around major sporting events and even inclement weather, but the ML model can take it a step further by taking that data and identifying critical trends that will help ensure hospitals are adequately staffed, ultimately reducing the strain on our workforce and taking our industry a step further in achieving safer care for all.
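Extending the forecasting sketch above, external calendars can be folded in as extra indicator features. The event-calendar format here is an assumption for illustration; in principle any public data set (fixtures, weather warnings) could be merged the same way.

```python
# Minimal sketch: augment the staffing features above with external events
# such as local sporting fixtures or weather warnings. The calendar format
# is an illustrative assumption.
import pandas as pd

def add_event_features(features: pd.DataFrame, events: pd.DataFrame) -> pd.DataFrame:
    """`events` has columns: date, event_type (e.g. 'football_match', 'storm')."""
    flags = (
        events.assign(present=1)
              .pivot_table(index="date", columns="event_type",
                           values="present", aggfunc="max", fill_value=0)
              .reset_index()
    )
    # Left-join so days with no recorded events simply get zeros.
    return features.merge(flags, on="date", how="left").fillna(0)
```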
