
Enabling Brain Typing Via LSTM Recurrent Neural Network // GP // Dr. Ahmed Farouk (2018 - 2019)

By: 
Material type: Text
Series: Computer Sciences Distinguished Projects 2019
Publication details: Giza: MSA, 2019
Description: 76 p.
Subject(s): 
DDC classification: 005
Online resources: 
List(s) this item appears in: CS D.G.P 2018 / 2019
Holdings
Item type: Distinguished Graduation Projects
Current library: Central Library
Call number: GP293CS2019 (soft copy located in the library catalogue)
Status: Available
Barcode: 82123

Computer Science

In our ever-changing world, no one can survive or prosper in isolation. According to UNICEF, 30 per cent of street youths are disabled. Moreover, youth and new generations are of the utmost importance for a better future. Sadly, there are physically impaired people who are deprived of even the most basic means of communication, and sharing information in a bidirectional flow is a building block of constructive communication. The Brain-Computer Interface (BCI), one of the fastest-emerging technologies of the past ten years, offers these people a way out of that isolation. The project aims to build a BCI application that reads human brain signals from an EEG device and classifies them into commands that write the desired text.

Using deep learning, the application classifies the received signals and converts the resulting classes into commands that write the specified text. This state-of-the-art field can lend a helping hand and enable those who are physically disabled to communicate better. The project consists of two phases. In phase one, the user wears the EEG headset and data are collected and fed to the RNN model, which then trains on these data to analyse the signals. In phase two, once the model has been trained on this person's data and can classify his signals, all he has to do is imagine performing one of five commands: moving the right hand, moving the left hand, moving the legs, closing an eye, or moving both hands. These imagined commands let him choose a specific letter and so write the desired word. Developing a high-accuracy deep learning model in this area will help humanity toward a much brighter future.
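As a concrete illustration of the classification step described above, the sketch below shows how a stacked LSTM could map a windowed EEG recording to one of the five imagined commands, covering both the training phase and the inference phase. It is a minimal sketch only: the window length, channel count, layer sizes, command labels, and the use of Keras are illustrative assumptions, not details taken from the project report.

# Minimal sketch of an LSTM classifier for windowed EEG, assuming input of
# shape (time_steps, channels). The window length (128 samples), channel
# count (14) and command names are assumptions for illustration.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

TIME_STEPS = 128    # assumed samples per EEG window
CHANNELS = 14       # assumed number of EEG electrodes
COMMANDS = ["right_hand", "left_hand", "legs", "eye_close", "both_hands"]

def build_model() -> tf.keras.Model:
    """Stacked LSTM that maps one EEG window to one of five imagined commands."""
    model = models.Sequential([
        layers.Input(shape=(TIME_STEPS, CHANNELS)),
        layers.LSTM(64, return_sequences=True),   # first recurrent layer keeps the sequence
        layers.LSTM(32),                          # second layer summarises it into one vector
        layers.Dense(32, activation="relu"),
        layers.Dense(len(COMMANDS), activation="softmax"),  # 5-way command probabilities
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    # Phase one (training): random arrays stand in here for the per-user
    # EEG windows and command labels recorded with the headset.
    X_train = np.random.randn(200, TIME_STEPS, CHANNELS).astype("float32")
    y_train = np.random.randint(0, len(COMMANDS), size=200)
    model = build_model()
    model.fit(X_train, y_train, epochs=5, batch_size=16, verbose=0)

    # Phase two (inference): classify a new window into a command.
    window = np.random.randn(1, TIME_STEPS, CHANNELS).astype("float32")
    command = COMMANDS[int(np.argmax(model.predict(window, verbose=0)))]
    print("Predicted command:", command)

The softmax output gives one probability per command, so the predicted class can be taken as the argmax over the five options; in practice the per-user training data mentioned in phase one would replace the random placeholder arrays.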
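The abstract also says the classified commands let the user choose letters, but it does not describe the selection interface. The sketch below is therefore purely hypothetical: it assumes a simple cursor-over-alphabet scheme in which each of the five imagined movements is mapped to a typing action.

# Hypothetical mapping from the five classified commands to typing actions;
# the action names and the cursor-over-alphabet scheme are assumptions, not
# the project's actual interface.
import string

ALPHABET = string.ascii_uppercase

ACTIONS = {
    "right_hand": "cursor_right",   # move the highlighted letter forward
    "left_hand": "cursor_left",     # move the highlighted letter backward
    "both_hands": "select",         # type the highlighted letter
    "legs": "space",                # insert a space
    "eye_close": "backspace",       # delete the last character
}

def apply_command(command: str, cursor: int, text: str) -> tuple[int, str]:
    """Update the cursor position and typed text for one classified command."""
    action = ACTIONS[command]
    if action == "cursor_right":
        cursor = (cursor + 1) % len(ALPHABET)
    elif action == "cursor_left":
        cursor = (cursor - 1) % len(ALPHABET)
    elif action == "select":
        text += ALPHABET[cursor]
    elif action == "space":
        text += " "
    elif action == "backspace":
        text = text[:-1]
    return cursor, text

if __name__ == "__main__":
    cursor, text = 0, ""
    # "right_hand" twice moves the highlight from A to C, "both_hands" types it.
    for cmd in ["right_hand", "right_hand", "both_hands"]:
        cursor, text = apply_command(cmd, cursor, text)
    print(text)   # -> "C"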
