
Large Language Models (LLMs) are remarkably good at compressing knowledge about the world into their billions of parameters.
Nonetheless, LLMs have two major limitations: their knowledge is only current up to the time of their last training run, and they tend to make up facts (hallucinate) when asked very specific questions.
Using Retrieval-Augmented Generation (RAG), we can give a pre-trained LLM access to very specific, up-to-date information as additional context when answering our questions.
In this article, I’ll walk through the idea and a practical implementation of adding RAG capabilities to Google’s Gemma LLM, using the Hugging Face transformers library, LangChain, and the FAISS vector database.
A summary of the RAG pipeline, which we will implement step by step, is shown in the figure below.
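To make the idea concrete before we dive into the details, here is a minimal bird’s-eye sketch of the pipeline in Python: index a few documents in FAISS, retrieve the most relevant one for a question, and pass it to Gemma as context. The checkpoints (`google/gemma-2b-it`, `sentence-transformers/all-MiniLM-L6-v2`) and the toy documents are illustrative choices of mine, not fixed by the pipeline itself.

```python
# Bird's-eye sketch of the RAG pipeline we will build step by step.
# Model checkpoints and documents below are illustrative placeholders.
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS
from transformers import pipeline

# 1. Embed a few documents and index them in a FAISS vector store.
docs = [
    "Gemma is a family of lightweight open models from Google.",
    "FAISS is a library for efficient similarity search over dense vectors.",
]
embeddings = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-MiniLM-L6-v2"
)
vector_store = FAISS.from_texts(docs, embeddings)

# 2. Retrieve the document(s) most similar to the user's question.
question = "What is Gemma?"
retrieved = vector_store.similarity_search(question, k=1)
context = "\n".join(doc.page_content for doc in retrieved)

# 3. Feed the retrieved context to Gemma as part of the prompt.
generator = pipeline("text-generation", model="google/gemma-2b-it")
prompt = (
    f"Answer the question using this context:\n{context}\n\n"
    f"Question: {question}\nAnswer:"
)
print(generator(prompt, max_new_tokens=64)[0]["generated_text"])
```

The rest of the article unpacks each of these three stages: building the vector index, retrieving relevant context, and prompting the model.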