Lip Reading using CNN

Authors

  • Raghav K R, Sarvamangala DR, R Dushyanth Reddy

Abstract

Communicating with hearing-impaired people, or communicating in noisy or disturbed environments, can lead to poor or lost communication. The purpose of this project is to overcome this loss by creating a video interface. The interface captures video of a talking person and converts it into text, which is displayed on the screen. It is built on a deep learning algorithm, the Convolutional Neural Network (CNN); the architecture used is VGG16, and the model is trained and tested on the MIRACL-VC1 dataset.
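The paper itself does not include code. As a rough illustration of the 2-D convolution operation at the core of a CNN such as VGG16, here is a minimal NumPy sketch; the toy frame, filter values, and function name are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation) of a grayscale frame
    with a single filter, as used in each CNN convolutional layer."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            # Each output value is the filter applied to one image patch
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# Toy 4x4 "frame" with a vertical edge, and a vertical-edge filter
# (illustrative values only)
frame = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
edge_filter = np.array([[1, -1],
                        [1, -1]], dtype=float)
print(conv2d(frame, edge_filter))  # strongest response at the edge column
```

In VGG16, many such small (3x3) filters are stacked in sequence, with pooling layers in between, to turn each mouth-region frame into features that a classifier maps to the spoken word.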

Keywords: Lip Reading, CNN, MIRACL-VC1

Published

2020-05-16

Section

Articles