International Journal For Multidisciplinary Research


LLM is All You Need: How Do LLMs Perform On Prediction and Classification Using Historical Data

Author(s) Yuktesh Kashyap, Amrit Sinha
Country India
Abstract This study investigates the utility of large language models (LLMs) in performing traditional machine learning tasks such as prediction, and explores the potential of refinement architectures to enhance their effectiveness in these roles. Utilizing the Titanic survival dataset, we conducted a comparative analysis using both conventional machine learning tools and LLM-based approaches. Our findings indicate that while LLMs differ fundamentally from traditional ML models in prediction tasks, there exist specific architectural modifications, termed Thought Refinement Architectures, which can significantly improve their performance. These results highlight the potential for integrating LLMs into traditional ML workflows, thereby expanding their applicability and enhancing predictive accuracy.
Keywords Artificial Intelligence; Machine Learning; Large Language Models; Prompting Techniques
Field Computer > Artificial Intelligence / Simulation / Virtual Reality
Published In Volume 6, Issue 3, May-June 2024
Published On 2024-06-27
Cite This LLM is All You Need: How Do LLMs Perform On Prediction and Classification Using Historical Data - Yuktesh Kashyap, Amrit Sinha - IJFMR Volume 6, Issue 3, May-June 2024. DOI 10.36948/ijfmr.2024.v06i03.23438
DOI https://doi.org/10.36948/ijfmr.2024.v06i03.23438
Short DOI https://doi.org/gt24x6
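
The abstract describes a comparative analysis of conventional machine learning tools and LLM-based approaches on the Titanic survival dataset. The sketch below is a minimal, hypothetical illustration of that kind of comparison, not the authors' actual setup: the file name titanic.csv, the selected features, the prompt wording, and the llm_predict() stub are illustrative assumptions only.

    # Hypothetical sketch: a conventional classifier vs. an LLM-prompt-based
    # prediction on the Titanic survival dataset. Not the paper's code.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    # Assumed local CSV with the usual Titanic columns.
    df = pd.read_csv("titanic.csv")
    features = ["Pclass", "Sex", "Age", "Fare"]
    df = df.dropna(subset=features + ["Survived"])
    df["Sex"] = (df["Sex"] == "female").astype(int)

    X_train, X_test, y_train, y_test = train_test_split(
        df[features], df["Survived"], test_size=0.2, random_state=42
    )

    # Conventional ML baseline.
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("Logistic regression accuracy:",
          accuracy_score(y_test, clf.predict(X_test)))

    def build_prompt(row: pd.Series) -> str:
        # Turn one passenger record into a natural-language classification prompt.
        return (
            f"A Titanic passenger in class {row.Pclass}, "
            f"{'female' if row.Sex == 1 else 'male'}, aged {row.Age}, "
            f"paid a fare of {row.Fare}. "
            "Did this passenger survive? Answer with 1 (yes) or 0 (no)."
        )

    def llm_predict(prompt: str) -> int:
        # Placeholder for a call to an LLM API; replace with a real client.
        raise NotImplementedError

    # llm_preds = [llm_predict(build_prompt(row)) for _, row in X_test.iterrows()]
    # print("LLM prompt-based accuracy:", accuracy_score(y_test, llm_preds))

In such a comparison, the tabular baseline learns from numeric features directly, while the LLM receives each record serialized as text; accuracy on the same held-out split gives a common yardstick for the two approaches.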
