International Journal For Multidisciplinary Research

E-ISSN: 2582-2160     Impact Factor: 9.24

A Widely Indexed Open Access Peer Reviewed Multidisciplinary Bi-monthly Scholarly International Journal


Leveraging LLM As Backend Service

Author(s) R Niranjan, J Pavan Prasad, S Nivetha
Country India
Abstract Large Language Models (LLMs) are reshaping the backend architectures of dynamic web applications by enabling user-centric, intelligent, and responsive interactions. This article investigates the integration of LLM-powered APIs that produce JSON responses for use cases such as generating job descriptions, computing candidate-job matching scores, and suggesting chess moves. These responses are either stored in databases or mapped directly to frontends, supporting scalable and efficient application workflows. We discuss the system design, performance indicators, and scalability concerns, and propose ways to improve response times for LLM-powered backends, such as using the Groq API, which offers notably low-latency inference. A thorough analysis demonstrates this method's potential in contemporary web development.
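The workflow the abstract describes (prompt an LLM API for structured JSON, validate it, then store or forward it) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the model name, prompt wording, and helper functions are assumptions, and the request body follows the OpenAI-compatible chat-completions format that Groq's API accepts, including JSON mode via `response_format`.

```python
import json

# Hypothetical helper: build a chat-completion request body that asks the
# model to score how well a resume matches a job description, replying in
# strict JSON so the backend can map it to a database row or frontend view.
def build_match_request(job_description, resume, model="llama-3.1-8b-instant"):
    return {
        "model": model,  # assumed model name; any Groq-hosted chat model works
        "response_format": {"type": "json_object"},  # ask for JSON-only output
        "messages": [
            {
                "role": "system",
                "content": 'Reply only with JSON of the form '
                           '{"match_score": <integer 0-100>, "reason": "<short text>"}',
            },
            {
                "role": "user",
                "content": f"Job description:\n{job_description}\n\nResume:\n{resume}",
            },
        ],
    }

# Validate the model's reply before persisting it; LLM output should be
# treated as untrusted input even when JSON mode is requested.
def parse_match_reply(reply_text):
    data = json.loads(reply_text)
    score = int(data["match_score"])
    if not 0 <= score <= 100:
        raise ValueError("match_score out of expected range")
    return {"match_score": score, "reason": str(data.get("reason", ""))}
```

In practice the request body would be sent to the provider's chat-completions endpoint (via an HTTP client or the provider's SDK), and `parse_match_reply` would run on the returned message content before the result is written to the database or returned to the frontend.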
Keywords Large Language Models, LLM, backend integration, dynamic web applications, API inference, Groq API, JSON response generation
Field Computer > Artificial Intelligence / Simulation / Virtual Reality
Published In Volume 7, Issue 1, January-February 2025
Published On 2025-01-30
Cite This Leveraging LLM As Backend Service - R Niranjan, J Pavan Prasad, S Nivetha - IJFMR Volume 7, Issue 1, January-February 2025.
