Outputs

Reading a book in front of the camera.

Real-time output using a webcam.

Writing in front of the camera.
Human Activity Recognition with OpenCV

Have you ever wondered, while watching a sci-fi film, how a computer recognizes what a person’s next move will be, or how it predicts our actions from the activities we perform? The simple answer is that it uses Human Activity Recognition (HAR) technology. HAR entails predicting a person’s movement based on sensor data; accurately engineering features from that raw data to build a machine learning model generally requires extensive domain understanding and methodologies from signal processing.

This technology helps the machine decide and predict which activity we are performing. For this, we need a deep learning model and a dataset. The two go hand in hand in machine learning: the program refers to the set of activities defined in an existing dataset and predicts which of those activities matches what it observes.

We will use the Python programming language for this task; a basic knowledge of convolutional neural networks is enough to get started.

Convolutional Neural Network  

Since we want to recognize activity from a camera feed, we need a branch of deep learning called the Convolutional Neural Network (CNN), a type of Artificial Neural Network (ANN) that predicts output by analyzing visual imagery. This model involves two components: convolutional layers and pooling layers.

  • Convolutional layers: these layers operate on the input, i.e. a two-dimensional image or a 1D signal, using a kernel that reads small segments at a time and steps across the entire input field. Each read is projected onto a feature map and represents an internal interpretation of the input.
  • Pooling layers: feature-map projections are reduced to their core components using signal-averaging or signal-maximizing techniques.
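As a rough illustration of these two operations (a NumPy sketch for intuition, not the model we will actually use), a stride-1 convolution and a max pool can be written as:

```python
import numpy as np

def conv2d(image, kernel):
    """Step a kernel across the image (stride 1, no padding), one read at a time."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Each read projects a small segment of the input onto the feature map.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Reduce each size-by-size block of the feature map to its maximum value."""
    h, w = fmap.shape
    h, w = h - h % size, w - w % size  # trim edges that don't fill a full block
    return fmap[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))
```

For example, convolving a 4x4 image with a 2x2 kernel yields a 3x3 feature map, and max pooling shrinks that map further while keeping its strongest responses.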

OpenCV for Python

We primarily use OpenCV for real-time computer vision, since we want the program to detect activity in real time. We will import this useful library and use its functions throughout.

To use this, we must ensure that our system has the opencv-python library installed. This can be done by running the following command in the command processor of the operating system:

pip install opencv-python

Now once the library has been installed, it can be imported using the following command:

import cv2

Open Neural Network Exchange (ONNX)

ONNX is an open representation for machine learning models and comes with a collection of already-trained models. Since it is open source, we can use it free of cost. We will build our model on this resource, as our task needs external datasets and a pre-trained network for its development, both of which ONNX provides.

To use it, you need to download it from here (make sure to drop it into the folder containing your model).

The downloaded file inside the model folder.

The Kinetics Human Action Video Dataset

This dataset covers 400 human action classes, with hundreds of video clips for each class, to which our model can refer when predicting which action is being performed. Each action has corresponding clips.

We will also require a text file listing each action, so that a successful match against the video can be reported as output. It can be downloaded from Kinetics’ official site, or download Actions.txt here.

This is the text file that corresponds to every activity.


Human Activity Recognition

First, we need to import all the required libraries for this project: NumPy for numerical array processing, and imutils and cv2 for real-time image processing. Below is the code for importing...

Deep Learning implementation

We will fix the dimensions of the frames used for processing the image, then loop over the required number of frames. Each frame successfully read from the video stream is added to the frame list and saved for further processing...
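The loop described above can be sketched as follows. This is a minimal version: SAMPLE_DURATION and the helper name collect_frames are our assumptions, and stream stands for a cv2.VideoCapture object:

```python
SAMPLE_DURATION = 16  # assumed number of frames per processed clip

def collect_frames(stream, n=SAMPLE_DURATION):
    """Read up to n frames from a video stream (e.g. cv2.VideoCapture)."""
    frames = []
    while len(frames) < n:
        grabbed, frame = stream.read()
        if not grabbed:          # stream ended before a full clip was read
            break
        frames.append(frame)     # save the frame for further processing
    return frames
```

The returned list is then resized to the model’s input dimensions and handed to the network for prediction.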

