
International Journal For Multidisciplinary Research
E-ISSN: 2582-2160
SignCall: Bridging Communication Gaps in Virtual Meetings with Sign Language Recognition
| Field | Details |
| --- | --- |
| Author(s) | Prof. Ms. Shabira S, Balaji K, Harish M, Jagadesh K |
| Country | India |
| Abstract | Sign language is a visual mode of communication that uses hand gestures and movements to convey meaning, serving as an essential communication tool for individuals with hearing or speech impairments. Despite its importance, many virtual platforms lack the ability to recognize and interpret sign language, creating significant barriers to inclusivity in digital communication. As virtual meetings become more integral to professional and personal communication, the need for inclusivity in these spaces has grown. Current meeting platforms often fail to accommodate users who rely on sign language, limiting their ability to engage fully in discussions. This project aims to address this gap by integrating real-time sign language recognition into video calling platforms, ensuring accessibility for all participants. The proposed system employs the Video Calling Vision Transformer (VCViT) to accurately recognize word-level hand gestures. The system captures live video streams from participants, focusing on hand gestures, and translates them into text or speech in real time. By utilizing advanced video processing techniques, gesture segmentation, and the VCViT's ability to model spatial relationships, the system achieves high recognition accuracy, adapting to different signing styles and environmental conditions. This project strives to create inclusive virtual meeting environments, allowing hearing-impaired individuals to actively participate in discussions. Through AI-driven solutions, it ensures seamless communication, fosters equity, and enhances digital collaboration. |
| Keywords | Sign Language Recognition, Video Calling, Virtual Meetings, Inclusivity, Accessibility, Video Calling Vision Transformer (VCViT), Hand Gesture Recognition, Real-time Translation, Gesture Segmentation, AI-driven Communication, Digital Collaboration, Speech Impairment, Hearing Impairment, Spatial Relationship Modelling, Video Processing Techniques |
| Published In | Volume 7, Issue 2, March-April 2025 |
| Published On | 2025-03-24 |
| DOI | https://doi.org/10.36948/ijfmr.2025.v07i02.39676 |
| Short DOI | https://doi.org/g89v6r |
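The abstract outlines a recognition pipeline: capture the participant's video stream, segment out the spans of frames that contain hand gestures, classify each gesture clip with the VCViT model, and emit the recognized word as text. A minimal Python sketch of that flow is shown below, assuming a stub classifier in place of the (not publicly released) VCViT; all class and function names here are illustrative assumptions, not the authors' code.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Frame:
    """A single video frame; pixel data is elided for this sketch."""
    has_hands: bool       # stand-in for a real hand-detector's output
    gesture_id: int = -1  # stand-in for the raw per-frame gesture signal

class StubVCViT:
    """Placeholder for the paper's Video Calling Vision Transformer.
    A real model would map a clip of hand-region crops to a word label."""
    VOCAB = {0: "hello", 1: "thanks", 2: "yes"}  # hypothetical word vocabulary

    def predict(self, clip: List[Frame]) -> Optional[str]:
        if not clip:
            return None
        # Majority vote over per-frame gesture ids (illustrative only;
        # the actual model reasons over spatial relationships in the clip).
        ids = [f.gesture_id for f in clip]
        return self.VOCAB.get(max(set(ids), key=ids.count))

def segment_gestures(stream: List[Frame]) -> List[List[Frame]]:
    """Split the stream into contiguous runs of frames containing hands."""
    clips: List[List[Frame]] = []
    current: List[Frame] = []
    for frame in stream:
        if frame.has_hands:
            current.append(frame)
        elif current:
            clips.append(current)
            current = []
    if current:
        clips.append(current)
    return clips

def recognize(stream: List[Frame], model: StubVCViT) -> List[str]:
    """End-to-end: segment the stream, classify each clip, return words."""
    words = []
    for clip in segment_gestures(stream):
        word = model.predict(clip)
        if word is not None:
            words.append(word)
    return words
```

For example, a stream with two hand-bearing runs separated by an empty frame yields two words: `recognize([Frame(True, 0), Frame(True, 0), Frame(False), Frame(True, 1)], StubVCViT())` returns `["hello", "thanks"]`. In the real system the text output would additionally be rendered into the meeting as captions or synthesized speech.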
All research papers published on this website are licensed under Creative Commons Attribution-ShareAlike 4.0 International License, and all rights belong to their respective authors/researchers.
