
Generative AI Is a Gamble Enterprises Should Embrace in 2024

LLMs today suffer from inaccuracies at scale, but that doesn’t mean you need to cede competitive ground by waiting to adopt generative AI.

Building an AI-ready workforce with data.world OWLs, as imagined by OpenAI's GPT-4

Every enterprise technology has a purpose or it wouldn't exist. Generative AI's enterprise purpose is to provide human-usable output from technical, business, and language data rapidly and at scale to drive productivity, efficiency, and business gains. But this primary function of generative AI, supplying an intelligent answer, can also be the source of large language models' (LLMs) biggest barrier to enterprise adoption: so-called "hallucinations".

Why do hallucinations occur at all? Because, at their core, LLMs are complex statistical matching systems. They analyze billions of data points to determine patterns and predict the most likely response to any given prompt. But while these models may impress us with the usefulness, depth, and creativity of their answers, seducing us to trust them every time, they are far from reliable. Recent research from Vectara found that chatbots can "invent" new information up to 27% of the time. In an enterprise setting, where query complexity can vary greatly, that number climbs even higher. A recent benchmark from data.world's AI Lab using real business data found that, when deployed as a standalone solution, LLMs returned accurate responses to basic business queries only 25.5% of the time. When it comes to intermediate- or expert-level queries, which are still well within the bounds of typical, data-driven enterprise queries, accuracy dropped to ZERO percent!
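To make that kind of number concrete, here is a minimal sketch of how an accuracy benchmark of this sort works: score a model's answers against known-good answers over a set of business queries. The `ask_llm` stub and the sample questions are hypothetical placeholders for illustration, not data.world's actual methodology.

```python
# Minimal sketch of an accuracy benchmark for an LLM over business queries.
# ask_llm is a hypothetical stand-in for whatever model API you call; real
# evaluations use far more robust answer matching than exact string equality.

def ask_llm(question: str) -> str:
    # Placeholder: replace with a call to your model provider's API.
    return "$4.2M"

benchmark = [
    {"question": "What was total Q3 revenue?", "expected": "$4.2M"},
    {"question": "Which region had the most churn?", "expected": "EMEA"},
]

correct = sum(
    1 for case in benchmark
    if ask_llm(case["question"]).strip().lower() == case["expected"].strip().lower()
)
accuracy = correct / len(benchmark)
print(f"Accuracy: {accuracy:.1%}")
```

Even a toy harness like this makes the stakes visible: if your model clears only a quarter of your real queries, you know it before your customers do.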

The tendency to hallucinate may be inconsequential for individuals playing around with ChatGPT for small or novelty use cases. But when it comes to enterprise deployment, hallucinations present a systemic risk. The consequences range from the inconvenient (a service chatbot sharing irrelevant information in a customer interaction) to the catastrophic, such as inputting the wrong figure on an SEC filing.

As it stands, generative AI is still a significant gamble for the enterprise. However, it is also a necessary one. As we learned at OpenAI's first developer conference, 92% of Fortune 500 companies are using OpenAI APIs. The potential of this technology in the enterprise is so transformative that the path forward is resoundingly clear: start adopting generative AI now, knowing that the rewards come with serious risks. The alternative is to insulate yourself from those risks and swiftly fall behind the competition. The inevitable productivity lift is so obvious that not taking advantage of it could be existential to an enterprise's survival. So, faced with this illusion of choice, how can organizations go about integrating generative AI into their workflows while simultaneously mitigating risk?

First, you need to prioritize your data foundation. Like any modern enterprise technology, generative AI solutions are only as good as the data they're built on top of, and according to Cisco's recent AI Readiness Index, intention is outpacing ability, particularly on the data front. Cisco found that while 84% of companies worldwide believe AI will have a significant impact on their business, 81% lack the data centralization needed to leverage AI tools to their full potential, and only 21% say their network has 'optimal' latency to support demanding AI workloads. It's a similar story when it comes to data governance; just three out of ten respondents currently have comprehensive AI policies and protocols, while only four out of ten have systematic processes for AI bias and fairness corrections.

As the benchmark above demonstrates, LLMs have a hard enough time retrieving factual answers reliably. Combine that with poor data quality, a lack of data centralization and management capabilities, and limited governance policies, and the risk of hallucinations (and their accompanying consequences) skyrockets. Put simply, companies with a strong data architecture have better and more accurate information available to them and, by extension, their AI solutions are equipped to make better decisions. Working with a data catalog or evaluating internal governance and data entry processes may not feel like the most exciting part of adopting generative AI. But it's those considerations (data governance, lineage, and quality) that can make or break the success of a generative AI initiative. Getting them right not only enables organizations to deploy enterprise AI solutions faster and more responsibly, but also allows them to keep pace with the market as the technology evolves. A simple quality gate, sketched below, illustrates the idea.
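As one illustration of what "prioritizing your data foundation" can look like in code, here is a minimal, hypothetical quality gate that keeps incomplete or stale records out of an AI pipeline. The field names and staleness threshold are assumptions made for the sketch, not any particular product's behavior.

```python
# Illustrative sketch of a simple data-quality gate: refuse to feed records
# into an AI pipeline unless required fields are present and fresh.
# REQUIRED_FIELDS and MAX_STALENESS are hypothetical choices for this example.

from datetime import datetime, timedelta, timezone

REQUIRED_FIELDS = {"customer_id", "revenue", "updated_at"}
MAX_STALENESS = timedelta(days=30)

def passes_quality_gate(record: dict) -> bool:
    # Incomplete records breed hallucinated answers downstream, so reject them.
    if not REQUIRED_FIELDS.issubset(record) or any(record[f] is None for f in REQUIRED_FIELDS):
        return False
    age = datetime.now(timezone.utc) - record["updated_at"]
    return age <= MAX_STALENESS

records = [
    {"customer_id": "c-1", "revenue": 1200.0, "updated_at": datetime.now(timezone.utc)},
    {"customer_id": "c-2", "revenue": None, "updated_at": datetime.now(timezone.utc)},
]
clean = [r for r in records if passes_quality_gate(r)]
print(f"{len(clean)} of {len(records)} records admitted to the AI pipeline")
```

The gate itself is trivial; the point is that it runs before the model ever sees the data, which is exactly where governance has to live.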

Second, you need to build an AI-educated workforce. Research shows that techniques like advanced prompt engineering can prove useful in identifying and mitigating hallucinations; a minimal example follows below. Other methods, such as fine-tuning, have been shown to dramatically improve LLM accuracy, even to the point of outperforming larger, more advanced general-purpose models. However, employees can only deploy these tactics if they're empowered with the latest training and education to do so. And let's be honest: most employees aren't. We are just over the one-year mark since the launch of ChatGPT on November 30, 2022!
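One widely used prompt-engineering tactic is to constrain the model to supplied context and give it an explicit "I don't know" escape hatch, so it declines rather than invents. The template below is illustrative, and `call_model` is a hypothetical stand-in for whatever LLM API your team uses.

```python
# Sketch of a grounded-prompt pattern for reducing hallucinations.
# GROUNDED_PROMPT and call_model are hypothetical examples, not a vendor API.

GROUNDED_PROMPT = """Answer the question using ONLY the context below.
If the context does not contain the answer, reply exactly: "I don't know."

Context:
{context}

Question: {question}
Answer:"""

def call_model(prompt: str) -> str:
    # Placeholder: swap in your LLM provider's chat/completion call.
    return "I don't know."

def grounded_answer(question: str, context: str) -> str:
    return call_model(GROUNDED_PROMPT.format(context=context, question=question))

# The context says nothing about churn, so a well-behaved model should decline.
print(grounded_answer("What was Q3 churn?", "Q3 revenue was $4.2M."))
```

Training your team to reach for patterns like this, instead of raw free-form prompts, is exactly the kind of education the next paragraph argues for.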

When a major vendor such as Databricks or Snowflake releases new capabilities, organizations flock to webinars, conferences, and workshops to ensure they can take advantage of the latest features. Generative AI should be no different. Create a culture in 2024 where educating your team on AI best practices is the default; for example, by providing stipends for AI-specific L&D programs or bringing in an outside training consultant, such as the work we've done at data.world with Rachel Woods, who serves on our Advisory Board and founded and leads The AI Exchange. We also promoted Brandon Gadoci, our first data.world employee outside of me and my co-founders, to be our VP of AI Operations. The staggering lift we've already seen in our internal productivity is nothing short of inspirational (I wrote about it in this three-part series). Brandon just reported yesterday that we saw an astounding 25% increase in our team's productivity through the use of our internal AI tools across all job roles in 2023! Adopting this kind of culture will go a long way toward ensuring your organization is equipped to understand, recognize, and mitigate the threat of hallucinations.

Third, you need to stay on top of the burgeoning AI ecosystem. As with any new paradigm-shifting technology, AI is surrounded by a proliferation of emerging practices, software, and processes to minimize risk and maximize value. As transformative as LLMs may become, the wonderful truth is that we're just at the beginning of the long arc of AI's evolution.

Technologies once foreign to your organization may become critical. The aforementioned benchmark we released found that LLMs backed by a knowledge graph (a decades-old architecture for contextualizing data in three dimensions, mapping and relating data much like a human brain works) can improve accuracy by 300%! Likewise, technologies like vector databases and retrieval-augmented generation (RAG) have risen to prominence given their ability to help address the hallucination problem with LLMs; a toy RAG loop is sketched below. Long-term, the ambitions of AI extend far beyond the APIs of the major LLM providers available today, so remain curious and nimble in your enterprise AI investments.
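To show the shape of the RAG idea, here is a toy retrieval loop: embed documents, retrieve the ones closest to the question, and ground the prompt in them. The bag-of-words `embed` function and the sample documents are deliberately crude stand-ins; production systems use learned embedding models and a vector database.

```python
# Toy retrieval-augmented generation (RAG) loop. The embedding here is a
# bag-of-words Counter, used only to keep the sketch self-contained.

import math
from collections import Counter

def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

docs = [
    "Q3 revenue was $4.2M, up 8% quarter over quarter.",
    "EMEA churn rose to 6% in Q3.",
]

def retrieve(question: str, k: int = 1) -> list:
    # Rank documents by similarity to the question and keep the top k.
    q = embed(question)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

question = "What was Q3 revenue?"
context = "\n".join(retrieve(question))
prompt = f"Using only this context:\n{context}\n\nAnswer: {question}"
print(prompt)  # this grounded prompt then goes to the LLM of your choice
```

The design choice that matters is the same one behind knowledge graphs: the model answers from your curated data rather than from whatever it absorbed in training, which is precisely what pushes accuracy up.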

Like any new technology, generative AI solutions aren't perfect, and their tendency to hallucinate poses a very real threat to their current viability for widespread enterprise deployment. However, these hallucinations shouldn't stop organizations from experimenting with and integrating these models into their workflows. Quite the opposite, in fact, as so eloquently stated by AI pioneer and Wharton entrepreneurship professor Ethan Mollick: "…understanding comes from experimentation." Rather, the risk hallucinations pose should act as a forcing function for enterprise decision-makers to acknowledge what's at stake, take steps to mitigate that risk accordingly, and reap the early benefits of LLMs in the process. 2024 is the year your enterprise should take the leap.
