7
IoT-Based Health Monitoring System for Speech-Impaired People Using Assistive Wearable Accelerometer

Ishita Banerjee and Madhumathy P.*

Department of ECE, Dayananda Sagar Academy of Technology and Management, Bangalore, India

Abstract

In modern life, with the advent of technology, many difficulties have been overcome to make life more convenient. The situation is very different, however, for people who are physically impaired; communicating with the world remains a challenging job for them. Sign language has extended a helping hand toward speech-impaired persons, but it is still difficult for common people to understand. This proposed project aims at implementing a wearable electronic glove which serves as an electronic speaking system for speech-impaired persons. The communication is done in the form of audio signals which can be understood by common people. There is also a provision of an LCD display, so that the information to be conveyed can be shown for communication with hearing-disabled persons. Using real-time operating systems and embedded systems, these technological advances are brought into reality. We also propose to update and track health-related information such as heartbeat and body temperature using cloud-based storage in the Internet of Things (IoT). This will enable effective communication for those suffering from any kind of communication disorder. For the implementation, we depend on accelerometers and sensors so that the movements can be read accurately and transformed into the required audio or visual form. Using this approach, the hindrance to communication will be reduced. We also propose a microcontroller-based reconfigurable smart device that can collect, process, and transmit data and store it in the cloud for further monitoring. IoT-based wireless communication systems, with network devices connected to each other, communicate through open-source internet access and establish connections between apps and devices for communication between the person under supervision and the medical supervisor. This also helps in keeping track of real-time records and emergency alerts. To handle issues related to the storage and analysis of data, IoT analytics is implemented.

Keywords: Healthcare monitoring, IoT, wearable accelerometer, gesture recognition, cloud-based storage, speech impairment

7.1 Introduction

Sign language is the communication method used by speech- and hearing-disabled people to communicate with each other [1–4]. The representation of sign language differs in each corner of the world due to diversity in language, which makes global communication difficult. The major difficulties for speech- and hearing-disabled people arise from the lack of means to express their emotions, mental health, and behavioral issues to normal people. This causes a mental setback as well as social obstacles for them, and they get discouraged from opening up about their problems in public and personal places or even in emergency situations. Due to the language diversity across different regions of the world, communication needs a stronger bond to thread together all the possibilities of interaction. Even if we consider only India, different regions have different languages for communication. This finally resulted in the advent of Indian Sign Language (ISL), the most widely used sign language in India [5, 6]. Even though there is a common thread for speech- and hearing-impaired people to interact among themselves, a common platform to interact with normal people is still lacking, since ISL is not known to them and cannot be interpreted by them easily.

The difficulties mentioned here became research areas over the years, with the goal of giving speech- and hearing-impaired people and normal people a common platform to express their thoughts to each other [7]. The research concentrates on detection of hand gestures [8–10]. Hand gesture detection is also widely used in the field of robotics and in medical support applications that deal with artificial or prosthetic hands.

When we make any kind of gesture by moving our hands, the gesture can be interpreted to convey information in digital form. As we can express information through gestures to a large extent, this has led toward the improvement of sign languages [11–13]. Gestures can convey information, facts, and emotions effectively.

The proposed project aims at designing an electronic speaking system, an electronic glove, for ease of communication [14–16]. This helps speech- and hearing-impaired persons communicate with each other and with the outer world. An accelerometer mounted on a wearable glove tracks the movements made by hand gestures. The main control unit is an Arduino Mega which captures the user inputs; thus, this is a command-based control unit. The proposed work is based on the concept of embedded systems, which use an application-specific integrated circuit or a microcontroller-based platform to address the intended application. These are mostly pre-programmed with many functionalities addressing a general group of tasks and can then be used by users according to their programs or applications. An embedded system needs other parts of the electronic system to work with it to achieve the task, and most microprocessors are used as components of embedded systems. Embedded systems come with the advantages of lower power consumption, compact size, and low cost. The processing resources are limited, so interacting with other units and programming the components is a real challenge. Embedded systems hold general-purpose microprocessors and/or special-purpose microcontrollers from a huge variety. For example, Digital Signal Processors are used for very specific application-oriented purposes and can therefore be optimized to increase performance and reduce size and cost. Real-time operating systems and embedded systems together serve to solve many critical real-life scenarios.

After capturing and processing the data, which is mostly done by the embedded system, the real challenge comes when the data has to be communicated to the network-connected devices. Here comes the importance of IoT, where sharing of data happens between the network-connected devices via a secure service layer. For storing and analyzing the data transferred by the connected network components, Internet of Things (IoT) analytics is used. The raw data received is converted into a more usable form by means of data extraction and data analytics. Data is collected from several sources and shared through the network after the required level of processing. The robustness and reliability of IoT have extended its applications widely into the healthcare field [17–20]. Smart sensors can be implemented for patient health monitoring, and the related data, such as pulse rate, blood sugar level, and blood pressure, can be precisely monitored from time to time without much human intervention. The collected data can be sent to the medical team for proper monitoring and to call for any changes in medication or treatment if required. This ensures healthy living with wearable electronic health monitoring devices.

7.2 Literature Survey

With technological development and the adaptation of technology in day-to-day life, the need to apply technology to make life easier for physically disabled people also came to the minds of researchers. Technology and the medical field came hand in hand to minimize the difficulties of speech- and hearing-impaired people, as well as of critical patients who face hurdles in communicating their thoughts and feelings to the outer world. The use of an electronic hand glove is a strong support for such people.

S. F. Ahmed et al. elaborated their work in this field [21]. The researchers implemented electronic speaking gloves for speech-disabled people for ease of communication. The concept of synthesized speech is used to facilitate effective communication, acting as a virtual tongue for the speech disabled. The authors made use of a touch phone on which different gestures are made and the data from these gestures is analyzed. The inbuilt application software produces audio output by interpreting the various gestures. This requires user awareness of the technology, so that the user can make use of smartphones and apps properly, which may not always be a realistic scenario. Therefore, the system's usefulness is restricted to a group of technologically aware people.

R. R. Itkarkar and A. V. Nandi presented their work in [22]. The purpose of the proposed work was to understand and interpret the gestures made by speech-impaired people and convert them into a common form so that they can be understood easily by all. The proposed system, termed the Gesture-to-Speech (G2S) system, builds on the basic concept of image processing using skin color segmentation. A camera implemented for capturing the hand gestures takes images of the movements, and then the image processing part takes place. After various steps of image segmentation and feature extraction, the hand gesture is interpreted. This then helps to play the prerecorded sound track corresponding to that particular hand gesture.

A five-fingered underactuated prosthetic hand controlled by surface EMG (electromyographic) signals acts as a supporting device for artificial hand movements [23]. The device proposed there is lightweight and simple and serves the requirements of prosthetic hands. The theory of self-adaptivity helps to limit the excess use of hardware, thus reducing the size and weight of the system. The size and shape are kept similar to those of an adult hand. The hand movement is controlled by an EMG motion pattern classifier, which makes use of a variable learning rate (VLR) neural network. The signal processing part uses the wavelet transform, and sample entropy is also used. As the thumb moves, the pattern classifier senses its motion; it also traces the motions made by the middle finger and the index finger along with the thumb. Three electrodes record the EMG signals produced by the movements of the fingers. By continuous movement of a single finger, the underactuated prosthetic hand can even make various postures such as a power grasp. This application, if used properly, can be a great help in treating hand amputation cases.

Kuldeep Singh V Rajput implemented a speaking hand glove [24]. The paper described the design of gloves that can translate the gestures made by speech-impaired people. A speech-impaired person who needs to communicate through sign language faces many difficulties, since the sign language used is not understood by normal people; the authors propose a solution to this problem. The voice chip used is an 8-bit MCU (4-bit ADPCM with a sampling rate of 6 kHz).

G. Marin et al. suggested a system for recognizing hand gestures using Leap Motion and Kinect devices [25]. The invention of Leap Motion devices is of great help toward gesture recognition. Data is acquired by the Leap Motion device, feature extraction takes place, and the extracted features are then fed to an SVM classifier where the gesture is recognized. The sixth sense technology helps to interact with digital information using only hand gestures, providing a wearable interface to the physical world. Wearable sensors, motion sensors, accelerometers, etc., are used for human-machine interaction [26].

S. Apte et al. worked on the sixth sense technology [27]. The authors propose the use of a smart glove embedded with flex sensors, i.e., transducers in which physical movement or physical energy is transformed into electrical energy. The voltage output of the transducers is processed by microcontrollers and other circuits to control and drive home appliances. Such wearable devices have many other applications such as making calls, multimedia applications, ticket booking updates, operating maps, and tracking vehicles and flights [28, 29]. To identify the gestures, a marker detection technique can be fast enough for use in Augmented Reality (AR) [30].

Geetha M. and Menon R. proposed a gesture control method [31]. They proposed a method to recognize static symbolic forms of the alphabets (A-Z). The proposed work uses a polygon approximation method with the Douglas-Peucker algorithm, which approximates the boundary of the gesture image. As the gesture edge is approximated, a chain code direction is assigned to it. So, firstly, the finger count is detected and later the gesture is read; this is done using the Canny edge algorithm. The method can also distinguish between open-finger and closed-finger gestures, making it a complex system.

S. U. N. Praveenkumar and S. Havalagi proposed a system of driving gloves with sensors implanted on the fingers and thumb to capture the gestures made by the fingers and then translate them into speech, so that everyone can understand the information communicated through the hand gesture. This work is certainly an added advantage for the biomedical field [32].

Y. Li proposed a work on sign language translation [33]. In this paper, sign language recognition is extended to a newer level by identifying the components of the gestures made while interpreting sign language. The skeletal muscles show movements while performing activities; these are transformed into electrical signals and sensed by accelerometers and surface electromyography. The right hand and left hand dictate the main words and supporting words, respectively. The expression of a sentence can also be done by hand shape, finger or palm orientation, or their movement. The estimated word is calculated by averaging the left-side channels; knowing the threshold levels is also of great importance for this calculation. The algorithm that approximates the values is the fuzzy k-means algorithm.

7.3 Procedure

As we can see, different research works have been done in the field of electronic gloves, and most of the existing models use flex sensors. Since flex sensors are mounted one on each finger, the number of commands that can be set is limited. The existing models also have difficulties in setting centralized health parameters [34, 35]. To overcome these difficulties, we propose designing an electronic speaking glove with an LCD. This is a portable device and thus gives the user flexibility in using it. Patients will get on-time services from doctors and family. It will reduce processing time and human intervention, and accuracy will be increased. This patient assistance system can be used in hospitals as well as in homes. The wearable device is usually placed on the hand of a paralyzed patient, so the patient gets on-time services from the doctor and family members by bending the hand into different positions.

This electronic speaking system communicates either through the speaker or through LCD.

Hand gestures are interpreted with a glove that carries an accelerometer. Initially, all the audio and display messages are stored in the SD card. The block diagram is shown in Figure 7.1. The accelerometer is fixed to the patient's hand for assistance. The APR33A3 kit and a speaker are used for voice announcements. Different voice messages are recorded, and these messages are triggered by bending the hand.

Each wrist position indicates a different service that the patient needs, as listed below; a minimal sketch of how these positions could be mapped to commands follows the list.

1. Washroom: bend the wrist toward the left side
2. I need food: bend the wrist toward the right side
3. I need tablets: palm facing upwards
4. I need water: wrist at 90°
5. Regular checkup: wrist at –90°
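The sketch below illustrates, in Arduino-style C/C++, how such a mapping could be organized. The pin assignments, the LiquidCrystal wiring, the use of an analog three-axis accelerometer on A0 to A2, and the voice-kit trigger pins are assumptions for illustration only, not the exact wiring of the prototype.

// Sketch (assumed wiring): mapping wrist positions to patient commands.
#include <LiquidCrystal.h>

LiquidCrystal lcd(12, 11, 5, 4, 3, 2);          // assumed 16x2 LCD connections

const char* MESSAGES[] = {
  "Washroom",          // wrist bent toward the left
  "I need food",       // wrist bent toward the right
  "I need tablets",    // palm facing upwards
  "I need water",      // wrist at 90 degrees
  "Regular checkup"    // wrist at -90 degrees
};
const int VOICE_PINS[] = {22, 23, 24, 25, 26};  // assumed voice-kit channel trigger pins

void announce(int cmd) {
  lcd.clear();
  lcd.print(MESSAGES[cmd]);                     // show the command text on the LCD
  digitalWrite(VOICE_PINS[cmd], LOW);           // pull the channel low to start playback
  delay(200);
  digitalWrite(VOICE_PINS[cmd], HIGH);          // release the trigger
}

void setup() {
  lcd.begin(16, 2);
  for (int i = 0; i < 5; i++) {
    pinMode(VOICE_PINS[i], OUTPUT);
    digitalWrite(VOICE_PINS[i], HIGH);          // channels idle high
  }
}

void loop() {
  int x = analogRead(A0), y = analogRead(A1), z = analogRead(A2);
  // The actual thresholds for each wrist position are discussed later in this
  // section; here only the "I need water" case is shown as an example.
  if (x >= 300 && y < 300 && z >= 300) announce(3);
  delay(500);
}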

We have different sensors to monitor the different parameters of the patients, e.g., temperature sensor, heartbeat sensor, and respiration sensor.


Figure 7.1 Block diagram of proposed system.

Each sensor monitors its parameter; if anything goes below or above the threshold, the system sends the monitored data to the web application using IoT. Each patient has a profile in the cloud, and each patient's health parameters can be read on the web portal. Each parameter is plotted separately, and a report can be taken from the website. If any of the parameters goes below or above the threshold level, an alarm is set ON.

The hardware used comprises the Arduino Mega 2560 microcontroller, an accelerometer, a 16x2 LCD display, the APR 33A3 voice kit, a Wi-Fi module, a buzzer, a speaker, a respiration sensor, a heartbeat sensor, and a temperature sensor. The Arduino sketch is written in Embedded C. The Arduino Mega 2560 board is depicted in Figure 7.2. It is a microcontroller-based platform operating at a voltage of 5 V, with 256 KB of flash memory and 8 KB of SRAM used to store and process the data.

The APR33A series voice kit can trigger circuits to store and play back audio signals. This is shown in Figure 7.3. Of the eight channels associated with this kit, each can record about 1.3 min of audio message. It runs on a supply of 12 V AC/DC. Recording is done as follows. First, the board is powered on and a jumper is fixed in JP1. Selecting J5 enables recording in a particular channel. To record in M0, M0 is connected to ground and the voice recording can be initiated directly. After the segment is completed, LD2 indicates that no more storage is available in this channel. During playback, J4 is activated. If M0 is connected to ground, LD2 remains ON and the recordings in channel M0 are played. The process can be repeated for the other channels.


Figure 7.2 Arduino 2560 board.


Figure 7.3 Voice kit APR 33A3.

The temperature sensor used is the LM35. The circuit diagram shown in Figure 7.4 has two transistors, one having an emitter region 10 times larger than the other, so the current density differs between the two transistors. The voltage across the resistor R1 is proportional to the absolute temperature, and they have a linear relationship. The amplifier at the top of the circuit makes the base voltage of transistor Q1 proportional to the absolute temperature. Another amplifier, at the right of the circuit, converts the temperature scale from Kelvin to Celsius (for the LM35).
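On the Arduino side, the LM35 output can be read on an analog pin and converted using the sensor's nominal 10 mV/°C scale factor. The sketch below is a minimal illustration; the analog pin, the 5 V reference, and the 37°C alert threshold are assumptions consistent with the rest of this chapter rather than the exact firmware of the prototype.

// Sketch (assumed pin): reading body temperature from an LM35.
const int TEMP_PIN = A3;           // assumed analog input for the LM35
const float TEMP_LIMIT_C = 37.0;   // alert threshold used in this work

float readTemperatureC() {
  int raw = analogRead(TEMP_PIN);            // 0..1023 over 0..5 V
  float millivolts = raw * (5000.0 / 1023.0);
  return millivolts / 10.0;                  // LM35: 10 mV per degree Celsius
}

void setup() {
  Serial.begin(9600);
}

void loop() {
  float t = readTemperatureC();
  Serial.println(t);
  if (t > TEMP_LIMIT_C) {
    Serial.println("Temp is High");          // in the prototype this also drives the buzzer and LCD
  }
  delay(1000);
}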


Figure 7.4 Working circuit diagram of LM35.


Figure 7.5 Working of heartbeat sensor.

The heartbeat sensor measures the heartbeat based on the change in light intensity due to light being scattered or absorbed on its path through the blood during each heartbeat. A light detector and an LED can perform this task very well. The brightness of the LED matters a lot, since it determines how much light passes through the finger placed on the LED during each heartbeat. As blood is pumped by the heart, the finger becomes less transparent and the intensity of light falling on the detector drops. Thus, for each heartbeat, the detector receives a signal from the LED, which is later converted to an electrical signal. This converted signal, after amplification, gives the measure of the heartbeat. The working of the heartbeat sensor is shown in Figure 7.5.
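In software, the pulse can be counted by watching the detector output cross a threshold. The sketch below is a simplified illustration; the analog pin, the threshold level, and the 15-second counting window are assumptions, not the exact firmware of the prototype.

// Sketch (assumed pin and threshold): estimating beats per minute from the detector output.
const int PULSE_PIN = A4;          // assumed analog input from the light detector
const int PULSE_THRESHOLD = 550;   // assumed ADC level separating beat / no beat

int countBeatsPerMinute() {
  int beats = 0;
  bool above = false;
  unsigned long start = millis();
  while (millis() - start < 15000UL) {       // count for 15 s, then scale to 1 min
    int level = analogRead(PULSE_PIN);
    if (level > PULSE_THRESHOLD && !above) { // rising edge = one heartbeat
      beats++;
      above = true;
    } else if (level < PULSE_THRESHOLD) {
      above = false;
    }
    delay(5);
  }
  return beats * 4;
}

void setup() { Serial.begin(9600); }

void loop() {
  Serial.println(countBeatsPerMinute());
  delay(1000);
}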

The respiration sensor is a kind of stretch-sensitive strap. The strap is tied around the patient's chest or upper abdomen to measure the expansion and contraction of the rib cage due to inhaling and exhaling, which is converted into a signal shown on the screen. The sensor is shown in Figure 7.6.


Figure 7.6 Respiration sensor.

The information collected by the sensors is shared among the network components through ThingSpeak, an open-source IoT application. It stores and retrieves data using the HTTP protocol over the Internet or a LAN. After a successful login to the web portal, the channels are created. For the application shown here, three channels are created for temperature, heartbeat, and respiration monitoring, respectively, as shown in Figure 7.7.
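A sketch of how one reading could be pushed to a ThingSpeak channel over HTTP is given below. It assumes an ESP8266 Wi-Fi module running the stock AT command firmware on the Mega's Serial1 port, a placeholder write API key, and a simplified AT sequence; this is an illustration of the ThingSpeak update request, not the prototype's actual code.

// Sketch (assumed ESP8266 AT firmware on Serial1): uploading one field to ThingSpeak.
const char* WRITE_API_KEY = "XXXXXXXXXXXXXXXX";   // placeholder ThingSpeak write key

void sendToThingSpeak(int field, float value) {
  String request = String("GET /update?api_key=") + WRITE_API_KEY +
                   "&field" + field + "=" + value + "\r\n";

  Serial1.println("AT+CIPSTART=\"TCP\",\"api.thingspeak.com\",80");
  delay(2000);                                    // wait for the TCP connection
  Serial1.println(String("AT+CIPSEND=") + request.length());
  delay(500);
  Serial1.print(request);                         // issue the HTTP GET update
  delay(500);
  Serial1.println("AT+CIPCLOSE");
}

void setup() {
  Serial1.begin(115200);                          // assumed baud rate of the Wi-Fi module
}

void loop() {
  sendToThingSpeak(1, 36.8);                      // e.g., field 1 = body temperature
  delay(20000);                                   // free ThingSpeak channels accept roughly one update per 15 s
}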

The algorithm of the health monitoring system is as follows:

Step 1: Initialize the pins to which the accelerometer, sensors, buzzer, voice kit, and LCD are connected.

Step 2: Declare the pins of the LCD connections with the Arduino.


Figure 7.7 Channels for health monitoring.

Step 3: Set the baud rate.

Step 4: Assign pins as input and output pins.

Step 5: Set all the channels to high initially.

Step 6: Assign the accelerometer readings to variables.

Step 7: Call the accelerometer function.

Step 8: Measure temperature and heartbeat and produce the corresponding output.

Step 9: Upload the measurements to the cloud.

Step 10: Analyze the values of the accelerometer for different positions.

Step 11: Use if/else-if statements with the values of the accelerometer to produce the output.

The input and output pins of the microcontroller are initialized, and then conditions are set by programming. There are three parameters on which conditions are imposed, i.e., X, Y, and Z. Consider the first condition: if X ≥ 300, Y < 300, and Z ≥ 300, then the LCD flashes the message "I need Water" and the speaker plays the same message. After this message is delivered, the temperature is checked. If the first condition fails, then the second condition is checked, and so on. Once any one of the conditions is true, the sensor checks the temperature. If it is greater than 37°C, then the temperature is displayed as high and the buzzer is turned ON. If the temperature is less than 37°C, then the heartbeat is checked. Each time any value is displayed on the LCD, it is simultaneously uploaded to the cloud. For the heartbeat, the alert condition is a reading less than 470 or greater than 900; the measured heartbeat is also uploaded to the cloud. Next, the accelerometer is checked again. This working is described as a flowchart in Figure 7.8, and a minimal code sketch is given below.
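The condition chain can be sketched as follows. The helper routines (readX, readY, readZ, readTemperature, readHeartbeat, showAndSpeak, uploadToCloud) are illustrative stubs standing in for the sensor, LCD/voice-kit, and ThingSpeak code sketched earlier, not the prototype's actual functions; the thresholds are the ones quoted above.

// Sketch of the decision chain, with stub helpers and assumed pins.
const int BUZZER_PIN = 8;                      // assumed buzzer pin

// Stub helpers: in the prototype these read the real sensors and drive the
// LCD, APR33A3 voice kit, and ThingSpeak upload described earlier.
int   readX()            { return analogRead(A0); }
int   readY()            { return analogRead(A1); }
int   readZ()            { return analogRead(A2); }
float readTemperature()  { return analogRead(A3) * (500.0 / 1023.0); } // LM35, deg C
int   readHeartbeat()    { return analogRead(A4); }                    // raw sensor level
void  showAndSpeak(const char* msg) { Serial.println(msg); }
void  uploadToCloud(int field, float value) { /* ThingSpeak update, see above */ }

void setup() {
  Serial.begin(9600);
  pinMode(BUZZER_PIN, OUTPUT);
}

void loop() {
  int x = readX(), y = readY(), z = readZ();

  if (x >= 300 && y < 300 && z >= 300) {
    showAndSpeak("I need Water");              // first condition from the text
  }
  // ... further else-if branches handle the remaining wrist positions ...

  float temp = readTemperature();
  uploadToCloud(1, temp);                      // displayed values are also pushed to the cloud
  if (temp > 37.0) {
    showAndSpeak("Temp is High");              // buzzer ON for high temperature
    digitalWrite(BUZZER_PIN, HIGH);
  } else {
    int hb = readHeartbeat();
    uploadToCloud(2, hb);
    if (hb < 470 || hb > 900) {                // out-of-range heartbeat triggers an alert
      showAndSpeak("Medical Emergency");
      digitalWrite(BUZZER_PIN, HIGH);
    }
  }
  delay(1000);
}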

When a gesture is made, the accelerometer sends the x, y, and z coordinates for that gesture to the Arduino. The Arduino processes them and sends the corresponding text command for that gesture to the LCD and the APR 33A3 voice kit. The LCD displays the text command, and the voice kit receives the signal and sends the corresponding voice message to the speaker. The heartbeat sensor, temperature sensor, and respiration sensor are inputs to the Arduino, and the Wi-Fi module and buzzer are outputs of the Arduino. Whenever the heartbeat or body temperature is higher than the threshold value, the buzzer turns on and the LCD displays the corresponding command.


Figure 7.8 Flow chart of health monitoring.

7.4 Results

The hardware shown here is mounted with an accelerometer that collects data, which is then processed through the software tool. The glove is used to make sign gestures that can be interpreted through the accelerometer. The hardware circuit for measuring the gestures is shown in Figure 7.9.


Figure 7.9 Hardware circuit for hand gesture recognition.

The different gestures and the corresponding commands are displayed on the LCD. As recorded voice messages are already stored for each gesture, when a gesture is interpreted by the accelerometer, the corresponding message is played from the recordings, and the text for that command can be displayed simultaneously. Figure 7.10 shows "Temp is High" on the LCD display whenever the body temperature is high and "Medical Emergency" on the LCD display whenever the heartbeat is high.

Let us see the graphical analysis now. Whenever the body temperature and heartbeat are greater than the threshold value, then the corresponding measured value of temperature, heartbeat, and respiratory measures are uploaded to cloud as shown in Figures 7.11, 7.12, and 7.13, respectively.

Looking at the graph shown in Figure 7.11, the temperature of the patient varies with respect to time and is plotted accordingly.

Figure 7.12 depicts the heartbeat rate during a given time slot and shows that there is a sudden rise at around 17:35, which might need medication, after which the graph becomes much more stable.

Figure 7.13 shows the number of times the patient inhales and exhales in the given time.

The device used here is portable, convenient and easy to use, low in cost, and light in weight. It will help persons who have communication disabilities. Patients will get on-time services from doctors and family.


Figure 7.10 Gesture and commands accordingly.


Figure 7.11 Field chart for body temperature.


Figure 7.12 Field chart for heartbeat.


Figure 7.13 Field chart for respiration rate.

7.5 Conclusion

The advent of sign language brought a miraculous change in the lives of speech- and hearing-impaired people. The main aim of the project was to provide support to speech-impaired people to express themselves to the outer world and also to paralyzed persons to convey their needs to their supervisors. The project feeds the inputs taken from the accelerometer for different gestures to the Arduino Mega, which communicates them to others via the LCD display and, in the form of audio output, via the speaker. The program is built around the three axes of the accelerometer. Along with the existing gesture control system, this proposal adds value by uploading the values to the cloud through IoT, and healthcare monitoring is also proposed simultaneously. So, this project acts as a dual-purpose prototype that can be used either by a speech-impaired person or by bed-ridden patients. As an extension for future work, the Arduino is reconfigurable and coding extension options are always available; hence, this concept can be extended by adding a different voice for each gesture.

References

1. Karmel, A., Sharma, A., Pandya, M., Garg, D., IoT based assistive device for deaf, dumb and blind people. International Conference on Recent Trends in Advanced Computing, vol. 165, Procedia Comput. Sci., pp. 259–269, 2019.

2. Spender, A., Bullen, C., Altmann-Richer, L., Cripps, J., Duffy, R., Falkous, C., Farrel, M., Horn, T., Wigzell, J., Yeap, Wearables and the internet of things: considerations for the life and health insurance industry. Br. Actuar. J., 24, 1–31, 2019.

3. Ghotkar, A.S., Khatal, R., Khupase, S., Asati, S., Hadap, M., Hand gesture recognition for indian sign language. Indian conference on computer communication and informatics, pp. 1–4, 2012.

4. Rekha, J., Bhattacharya, J., Majumder, S., Shape, texture and local movement hand gesture features for indian sign language recognition. International conference on trendz in information sciences and computing (TISC2011), IEEE, pp. 30–35, 2011.

5. Singha, J. and Das, K., Indian sign language recognition using eigen value weighted euclidean distance based classification technique. Int. J. Adv. Comput. Sci. Appl., 4, 2, pp. 188–195, 2013.

6. Subha Rajam, P. and Balakrishnan, G., Real time indian sign language recognition system to aid deaf-dumb People. IEEE International Conference on Communication Technology, pp. 737–742, 2011.

7. Kishore, P.V.V. and Rajesh Kumar, P., A video based indian sign language recognition system (INSLR) using wavelet transform and fuzzy logic. IACSIT Int. J. Eng. Technol., 4, 5, pp. 537–542, 2012.

8. Zhu, C. and Sheng, W., Wearable sensor-based hand gesture and daily activity recognition for robot-assisted living. IEEE Transactions on Systems, man and cybernetics, Part A, 41, 3, 2011.

9. Sanna, K., Juha, K., Jani, M., Johan, M., Visualization of hand gestures for pervasive computing environments. Proceedings of the working conference on advanced visual interfaces, ACM, pp. 480–483, 2006.

10. Juha, K., Panu, K., Jani, M., Sanna, K., Giuseppe, S., Luca, J., Sergio, D.M., Accelerometer-based gesture control for a design environment, Springer, Finland, 2005.

11. Garcia-Ceja, E., Galvn-Tejada, C.E., Brena, R., Multi-view stacking for activity recognition with sound and accelerometer data. Inf. Fusion, 40, 45–56, 2018.

12. Erdau, B., Atasoy, I., Koray, H., Ofuul, Integrating features for accelerometer-based activity recognition. Proc. Comput. Sci., 98, 522–527, 2016.

13. Hui, S. and Zhongmin, W., Compressed sensing method for human activity recognition using tri-axis accelerometer on mobile phone. J. China Univ. Posts Telecommun., 24, 2, 31–71, 2017.

14. Mirri, S., Prandi, C., Salomoni, P., Fitting like a GlovePi: a wearable device for deaf-blind people. 14th IEEE Annual Consumer Communications & Networking Conference (CCNC), pp. 1057–1062, 2017.

15. Jani, M., Juha, K., Panu, K., Sanna, K., Enabling fast and effortless customization in accelerometer based gesture interaction. Proceedings of the 3rd international conference on Mobile and ubiquitous multimedia, ACM, Finland, pp. 25–31, 2004.

16. Malik, S. and Laszlo, J., Visual touchpad: A two-handed gestural input device. Proceedings of the ACM International Conference on Multimodal Interfaces, p. 289, 2004.

17. Anliker, U. et al., AMON: a wearable multiparameter medical monitoring and alert system. IEEE Trans. Inf. Technol. Biomed., 8, 4, 415–427, 2004.

18. Sareen, S., Sood, S.K., Gupta, S.K., IoT-based cloud framework to control ebola virus outbreak. J. Ambient. Intell. Humaniz. Comput., 9, 1–18, 2016.

19. Yang, Z., Zhou, Q., Lei, L., Zheng, K., Xiang, W., An IoT-cloud based wearable ECG monitoring system for smart healthcare. J. Med. Syst., 40, 12, 1–11, 2016.

20. Verma, P., Sood, S.K., Kalra, S., Cloud-centric IoT based student healthcare monitoring framework. J. Ambient. Intell. Humaniz. Comput., 116, 1–17, 2017.

21. Ahmed, S.F., Muhammad, S., Ali, B., Saqib, S., Qureshi, M., Electronic speaking glove for speechless patients: A tongue to the dumb. IEEE Conference on Sustainable Utilization and Development in Engineering and Technology, pp. 56–60, 2010.

22. Itkarkar, R.R. and Nandi, A.V., Hand gesture to speech conversion using Matlab. Fourth International Conference on Computing, Communications and Networking Technologies (ICCCNT), pp. 1–4, 2013.

23. Zhao, J., Jiang, L., Shi, S., Cai, H., Liu, H., Hirzinger, G., A five-fingered underactuated prosthetic hand system. Proceedings of IEEE International Conference on Mechatronics and Automation, pp. 1453–1458, 2006.

24. Singh, K. and Rajput, V., Design and implementation of Talking hand glove for the hearing impaired. IEEE, 2014.

25. Marin, G., Dominio, F., Zanuttigh, P., Hand gesture recognition with leap motion and kinect devices. IEEE, 2014.

26. Mannini, A. and Sabatini, A.M., Machine Learning Methods for Classifying Human Physical Activity from On-Body Accelerometers. Sensors, 10, 2, pp. 1154–1175, 2010.

27. Apte, S., Sawant, D., Dubey, M., Pandole, M., Vengurlekar, P., Gesture based home automation using sixth sense technology. Int. J. Comput. Appl., 5, pp. 179–186, 2017.

28. Desale, R.D. and Ahire, V.S., A study on wearable gestural interface: A sixth sense technology. IOSR J. Comput. Eng., 10, 5, 1016, 2013.

29. Mistry, P., Maes, P., Chang, L., WUW - Wear Ur World - A wearable gesture interface. Proceedings of the 27th International Conference Extended Abstracts on Human Factors in Computing Systems, Association for Computing Machinery/Special Interest Group on Computer-Human Interaction, MIT Open Access Article, 2009.

30. Hirzer, M., Marker Detection for Augmented Reality Applications, Technical Report, ICG Publications, ICG-TR-08/05, 2008.

31. Geetha, M. and Menon, R., Gesture Recognition for American Sign Language with Polygon Approximation. IEEE International Conference on Technology for Education, 2011.

32. Praveenkumar, S.U.N. and Havalagi, S., The amazing digital gloves that give voice to the voiceless. IJAET, 6, 1, 471–480, 2013.

33. Li, Y., A Sign-Component-Based framework for chinese sign language recognition using accelerometer and sEMG data. IEEE Trans. Biomed. Eng., 59, 10, 2695–2704, 2012.

34. Yuan, Y.S. and Cheah, T.C., A study of internet of things enabled healthcare acceptance in Malaysia. J. Crit. Rev., 7, 3, 25–32, 2020.

35. Noah, B., Keller, M.S., Mosadeghi, S., Stein, L., Johl, S., Delshad, S., Spiegel, B.M.R., Impact of remote patient monitoring on clinical outcomes: an updated meta-analysis of randomized controlled trials. NPJ Digital Med., 1, Article no. 20172, pp. 1–12, 2018.

*Corresponding author: [email protected]