
Finetuning LLaMA on Medical Papers: Meet PMC-LLaMA, a Model that Achieves High Performance on Biomedical QA Benchmarks


The development of large language models (LLMs), such as OpenAI’s ChatGPT and GPT-4, has reshaped artificial intelligence across many fields, including natural language processing, computer vision, and biomedicine. Unfortunately, the specifics of ChatGPT’s training and the architectures of its variants remain unknown. LLaMA, by contrast, is an open-source foundational language model, but its weak performance on applications requiring extensive domain knowledge is hypothesized to stem from a lack of domain-specific data during pre-training.

Many studies have explored adapting open-source LLMs for specialized purposes. For example, Alpaca and Vicuna focus on expanding the model’s conversational ability by training it on automatically generated instruction-following examples.

A recent work by Shanghai Jiao Tong University and Shanghai AI Laboratory takes a different tack, infusing domain knowledge into a single pre-trained LLaMA to steer the foundational language model toward a medical-specific corpus. The authors introduce PMC-LLaMA, a publicly available language model built by fine-tuning LLaMA-7B on 4.8 million biomedical academic papers. The team believes that medical discussion and consulting applications would benefit more from a foundational language model with a medical focus.


The team began with the S2ORC dataset, which contains 81.1M English academic papers, and filtered them by their PubMed Central (PMC) ID. This yielded roughly 4.9M papers, totaling over 75B tokens, that are closely related to medical knowledge. They fine-tune LLaMA-7B on these freely available PMC papers by optimizing the autoregressive generation objective first introduced with GPT-2, and they employ the bf16 (Brain Floating Point) data format together with Fully Sharded Data Parallel (FSDP) acceleration to speed up training.
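A minimal sketch of what such a causal-LM pre-training setup could look like with the Hugging Face Trainer is shown below. The checkpoint path, corpus file, and hyperparameters are illustrative placeholders rather than values reported in the paper; the launch would typically happen via torchrun across multiple GPUs.

```python
# Sketch only: causal-LM fine-tuning of LLaMA-7B on a filtered PMC corpus
# in bf16 with FSDP. Paths and hyperparameters are hypothetical.
import torch
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_path = "path/to/llama-7b-hf"  # placeholder local LLaMA-7B checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_path)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.bfloat16)

# "pmc_papers.jsonl" stands in for the PMC-filtered subset of S2ORC.
raw = load_dataset("json", data_files="pmc_papers.jsonl", split="train")
tokenized = raw.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=2048),
    batched=True,
    remove_columns=raw.column_names,
)

args = TrainingArguments(
    output_dir="pmc-llama-7b",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    num_train_epochs=5,
    learning_rate=2e-5,
    bf16=True,                        # Brain Floating Point training
    fsdp="full_shard auto_wrap",      # Fully Sharded Data Parallel
    fsdp_transformer_layer_cls_to_wrap="LlamaDecoderLayer",
    logging_steps=50,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    # Standard autoregressive (next-token prediction) objective.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```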

The team evaluates PMC-LLaMA by performing three kinds of fine-tuning on medical QA datasets: full fine-tuning, parameter-efficient fine-tuning, and data-efficient fine-tuning. The experiments show that, once instruction-tuned, PMC-LLaMA outperforms LLaMA and other LLaMA-based instruction-tuned models in the medical domain.
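For the parameter-efficient variant, one common approach is to attach LoRA adapters with the peft library so that only a small fraction of weights is updated. The sketch below is an assumption about how such a setup could look, not the authors’ exact recipe; the model path, target modules, and LoRA hyperparameters are placeholders.

```python
# Sketch only: parameter-efficient fine-tuning of a PMC-LLaMA-style model
# with LoRA adapters. Paths and hyperparameters are hypothetical.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("path/to/pmc-llama-7b")  # placeholder

lora_cfg = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections in LLaMA blocks
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # only the adapter weights are trainable
# The adapted model is then trained on the medical QA data with the same
# causal-LM objective as in the pre-training sketch above.
```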

One shortcoming of PMC-LLaMA is that not every token in the 4.8 million papers has been seen yet, since the model has only been trained for five epochs so far. In the future, the team plans to progressively train PMC-LLaMA models with more parameters, continuously train PMC-LLaMA, and update the base model on its Hugging Face page.


Check out the Research Paper and Code. Don’t forget to join our 20k+ ML SubReddit, Discord Channel, and Email Newsletter, where we share the latest AI research news, cool AI projects, and more. If you have any questions regarding the above article or if we missed anything, feel free to email us at Asif@marktechpost.com




Tanushree Shenwai is a consulting intern at MarktechPost. She is currently pursuing her B.Tech from the Indian Institute of Technology (IIT), Bhubaneswar. She is a Data Science enthusiast with a keen interest in the application of artificial intelligence across various fields, and she is passionate about exploring new advancements in technology and their real-life applications.

