Detection of Malaysian Sign Language with Single Shot Detector Algorithm

  • Nurfarah Idayu Mohamad Fauzi
  • Shahirah Mohamed Hatim
  • Zalikha Zulkifli

Faculty of Computer and Mathematical Sciences, Universiti Teknologi MARA Perak Branch, Tapah Campus, Perak


Sign language is a form of nonverbal communication that relies on facial expressions, postures, and gestures to facilitate communication for individuals who are deaf or have a hearing impairment. Despite its importance, many people do not recognize or understand sign language, which creates communication barriers for individuals with disabilities. Malaysia faces a lack of interest among its people in learning sign language, which may be attributed to factors such as limited awareness, resource constraints, or a perception that sign language is not relevant or necessary. To address this issue, this research introduces a simple mobile application that could increase interest and awareness in sign language and promote greater inclusivity for individuals with disabilities. The Single Shot Detector (SSD) algorithm was implemented to perform Malaysian Sign Language object detection in the application. To facilitate the training of a custom TensorFlow Lite model, the project leveraged the TensorFlow Lite Model Maker library. The trained model achieved a detection accuracy of 75.2%, demonstrating its potential to serve as an effective Malaysian Sign Language detector. The framework used in this project can serve as a useful reference for future developers seeking to create similar custom models. Moreover, these promising results indicate that mobile applications built on the developed model could significantly enhance communication and inclusivity for individuals with hearing impairments in Malaysia.
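SSD predicts many candidate boxes per image, so its post-processing relies on intersection-over-union (IoU) matching and non-maximum suppression to keep one detection per sign. The sketch below is a minimal, self-contained illustration of those two steps in plain Python; it is not the authors' code, and the box coordinates and 0.5 IoU threshold are illustrative assumptions.

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def nms(detections, iou_threshold=0.5):
    """Greedy non-maximum suppression over (box, score) pairs:
    keep the highest-scoring box, drop any box overlapping it too much."""
    kept = []
    for box, score in sorted(detections, key=lambda d: d[1], reverse=True):
        if all(iou(box, kept_box) < iou_threshold for kept_box, _ in kept):
            kept.append((box, score))
    return kept
```

In a real SSD/TensorFlow Lite pipeline these steps run inside the model's built-in post-processing; the hand-rolled version here only shows the logic.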


How to Cite
MOHAMAD FAUZI, Nurfarah Idayu; MOHAMED HATIM, Shahirah; ZULKIFLI, Zalikha. Detection of Malaysian Sign Language with Single Shot Detector Algorithm. Mathematical Sciences and Informatics Journal, v. 4, n. 1, p. 42-48, May 2023. ISSN 2735-0703.
