International Journal For Multidisciplinary Research

E-ISSN: 2582-2160 | Impact Factor: 9.24



Fine-Tuning Pre-Trained Language Models for Improved Retrieval in RAG Systems for Domain-Specific Use

Author(s): Syed Arham Akheel
Country: USA
Abstract: Large Language Models (LLMs) have significantly advanced natural language understanding and generation capabilities, but domain-specific applications often necessitate supplementation with current, external information to mitigate knowledge gaps and reduce hallucinations. Retrieval-Augmented Generation (RAG) has emerged as an effective solution, dynamically integrating up-to-date information through retrieval mechanisms. Fine-tuning pre-trained LLMs with domain-specific data to optimize retrieval queries has become an essential strategy for enhancing RAG systems, especially for ensuring that highly relevant information is retrieved from vector databases for response generation. This paper provides a comprehensive review of the literature on fine-tuning LLMs to optimize retrieval processes in RAG systems. We discuss advancements such as query optimization, Retrieval-Augmented Fine-Tuning (RAFT), and Retrieval-Augmented Dual Instruction Tuning (RA-DIT), as well as frameworks such as RaLLe and DPR and ensembles of retrieval-based and generation-based systems that enhance the synergy between retrievers and LLMs.
Keywords: Retrieval-Augmented Generation, Large Language Models, Domain-Specific Fine-Tuning, Information Retrieval, RAFT, RA-DIT
Field: Computer Applications
Published In: Volume 6, Issue 5, September-October 2024
Published On: 2024-10-22
Cite This: Fine-Tuning Pre-Trained Language Models for Improved Retrieval in RAG Systems for Domain-Specific Use - Syed Arham Akheel - IJFMR Volume 6, Issue 5, September-October 2024. DOI: 10.36948/ijfmr.2024.v06i05.22581
DOI: https://doi.org/10.36948/ijfmr.2024.v06i05.22581
Short DOI: https://doi.org/g82hrk
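
The abstract above centers on fine-tuning encoders so that retrieval from a vector database returns highly relevant passages before generation. As a purely illustrative aid (not taken from the paper), the sketch below shows the basic retrieve-then-prompt loop that such systems optimize. The embed() function is a hypothetical stand-in for a fine-tuned, domain-specific query/passage encoder (e.g., one trained DPR-style), and the in-memory corpus and cosine-similarity search are toy substitutes for a real vector database.

```python
# Minimal RAG retrieval sketch (illustrative only, not from the paper).
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Hypothetical encoder: a deterministic toy embedding standing in for a
    fine-tuned domain-specific model. These vectors carry no real semantic
    meaning; a trained encoder would place related texts close together."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

# Toy domain corpus indexed as (passage, vector) pairs, in place of a vector database.
corpus = [
    "RAFT trains the model to cite relevant documents and ignore distractors.",
    "RA-DIT jointly tunes the retriever and the language model with dual instructions.",
    "DPR learns dense query and passage encoders for first-stage retrieval.",
]
index = [(p, embed(p)) for p in corpus]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k passages whose embeddings are most similar to the query."""
    q = embed(query)
    scored = sorted(index, key=lambda pv: float(q @ pv[1]), reverse=True)
    return [p for p, _ in scored[:k]]

# Assemble a retrieval-augmented prompt for the generator.
query = "How does RAFT handle distractor documents?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"
print(prompt)
```

Fine-tuning the query encoder on domain-specific data, as surveyed in the paper, aims to improve exactly the similarity ranking performed in retrieve(), so that the context passed to the generator is more relevant.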
