Gesture recognition through binary video transformation with IoT

dc.contributor.authorArefin, Muhammed Nazmul
dc.date.accessioned2025-01-04T04:34:17Z
dc.date.issued2023-12
dc.descriptionVol. 1, Issue 1, December 2023, pp. 183-200
dc.description.abstractSign language is the primary means of communication for deaf people, and American Sign Language (ASL) is used widely around the world. Communication remains difficult, however, because most hearing people do not understand sign language. Both deaf individuals and service staff would benefit greatly from a system that could identify signs and convert them into plain English. Convolutional neural networks (CNNs), a subtype of neural network, are widely used for image applications. For video applications with a temporal component, Recurrent Neural Networks (RNNs) or Long Short-Term Memory (LSTM) networks are typically used; however, these networks require a substantial amount of conditioning before they produce accurate results, which leads to longer processing times. In this work, we recognize gestures from video using a CNN and achieve improved results: 99.40% accuracy for alphabets and 99.70% accuracy for words.
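
The pipeline implied by the title and abstract (binarize video frames, then classify each frame with a CNN rather than an RNN/LSTM) can be illustrated with a minimal Python sketch. This is an illustrative assumption, not the architecture reported in the article: the Otsu thresholding step, the 64x64 frame size, the layer sizes, and the 26-letter class count are all placeholders.

# Hypothetical sketch: binarize video frames and classify them with a small CNN.
# Preprocessing, layer sizes, and the 26-letter class count are illustrative
# assumptions, not the configuration reported in the article.
import cv2
import numpy as np
import tensorflow as tf

def binarize_frame(frame, size=(64, 64)):
    """Convert a BGR video frame to a binary (0/1) image."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.resize(gray, size)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return (binary / 255.0).astype(np.float32)

def build_cnn(num_classes=26, input_shape=(64, 64, 1)):
    """Small CNN that labels one binarized frame as one ASL alphabet sign."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])

# Frames from a camera stream (e.g., an IoT device) would be binarized and
# classified frame by frame, avoiding the recurrent layers mentioned above.
model = build_cnn()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
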
dc.description.sponsorshipDepartment of Computer Science and Engineering, International Islamic University Chittagong
dc.identifier.issn3005-5873
dc.identifier.urihttp://dspace.iiuc.ac.bd/handle/123456789/8472
dc.language.isoen
dc.publisherCRP, International Islamic University Chittagong
dc.subjectASL
dc.subjectIoT
dc.subjectCNN
dc.subjectNeural Network
dc.titleGesture recognition through binary video transformation with IoT
dc.typeArticle

Files

Original bundle

Name: Article 12.pdf
Size: 14.19 MB
Format: Adobe Portable Document Format
