Note:

If the coupon has expired or the course is no longer free after opening the link, it is because instructors provide only a few hundred or thousand free slots, which get exhausted quickly. So try to enroll in the course as soon as it is posted in the channel. Coupons may expire at any time; for instant notifications, follow the Telegram channel.


Quick Starter for Optical Character Recognition, Image Recognition, Object Detection and Object Recognition using Python

What you’ll learn

  • Optical Character Recognition with the Tesseract library, Image Recognition using Keras, and Object Recognition using MobileNet-SSD, Mask R-CNN, YOLO and Tiny YOLO from static images, real-time video and pre-recorded videos using Python
Description

Hi There!

Welcome to my new course, ‘Optical Character Recognition and Object Recognition Quick Start with Python’. This is the third course in my Computer Vision series.

Image Recognition, Object Detection, Object Recognition and also Optical Character Recognition are among the most used applications of Computer Vision.

Using these techniques, the computer will be able to recognize and classify either the whole image or multiple objects inside a single image, predicting the class of each object along with a confidence score. Using OCR, it can also recognize text in images and convert it to a machine-readable format such as plain text or a document.

Object Detection and Object Recognition are widely used in many applications, from simple ones to complex systems like self-driving cars.

This course will be a quick starter for people who want to dive into Optical Character Recognition, Image Recognition and Object Detection using Python without having to deal with all the complexities and mathematics associated with a typical Deep Learning process.

Let’s now see the list of interesting topics that are included in this course.

At first we will have an introductory theory session about Optical Character Recognition technology.

After that, we will prepare our computer for Python coding by downloading and installing the Anaconda package, and we will check that everything is installed correctly.
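As a rough idea of what that check looks like, here is a minimal sketch that prints the interpreter version and confirms that a package bundled with Anaconda (NumPy is assumed here) imports cleanly:

```python
# Minimal sanity check after installing Anaconda: print the Python version
# and confirm that a bundled package (NumPy) imports without errors.
import sys
import numpy

print("Python version:", sys.version)
print("NumPy version:", numpy.__version__)
```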

Most of you may not be coming from a Python-based programming background. The next few sessions and examples will help you pick up the basic Python programming skills needed to follow the rest of the course. The topics include Python assignment, flow control, functions and data structures.
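As a small taste of those basics, here is a tiny, self-contained sketch touching on assignment, flow control, a function and a list (the grading example itself is just an illustration, not taken from the course):

```python
# A tiny refresher covering assignment, flow control, a function and a list.
scores = [72, 88, 95, 60]           # data structure: a list

def grade(score):                   # function definition
    if score >= 90:                 # flow control
        return "A"
    elif score >= 75:
        return "B"
    return "C"

for s in scores:                    # loop over the list
    print(s, grade(s))
```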

Then we will install the dependencies and libraries that we require for Optical Character Recognition. We are using the Tesseract library to do the OCR. First we will install the library and then its Python bindings. We will also install OpenCV, the open-source Computer Vision library for Python.

We will also install the Pillow library, the friendly fork of the Python Imaging Library (PIL). Then we will have an introduction to the steps involved in Optical Character Recognition and later proceed with coding and implementing the OCR program. We will use a few example images to test the character recognition and verify the results.
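To give a feel for what that OCR program boils down to, here is a minimal sketch assuming Tesseract is installed on the system and the pytesseract, Pillow and opencv-python packages are available; the image file name is a placeholder:

```python
# Minimal OCR sketch: read an image with OpenCV, convert it to grayscale
# (which often helps recognition), and hand it to Tesseract via pytesseract.
import cv2
import pytesseract
from PIL import Image

image = cv2.imread("sample_text.png")          # placeholder file name
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Recognize the text and print it as a plain Python string.
text = pytesseract.image_to_string(Image.fromarray(gray))
print(text)
```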

Then we will have an introduction to Convolutional Neural Networks, which we will be using to do the Image Recognition. Here we will be classifying a full image based on the single primary object in it.

We will then proceed with installing the Keras library, which we will be using to do the Image Recognition. We will be using the built-in, pre-trained models that are included in Keras. The base Python code is also provided in the Keras documentation.

At first we will be using the popular pre-trained model architecture called VGGNet. We will have an introductory session about the architecture of VGGNet. Then we will proceed with using the pre-trained VGG16 model included in Keras to do Image Recognition and classification. We will try a few sample images to check the predictions. Then we will move on to the deeper VGG19 model included in Keras to do Image Recognition and classification.
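The code for this follows the classification example in the Keras documentation quite closely. A minimal sketch is below, assuming a TensorFlow-backed Keras install and a placeholder image file (with older standalone Keras the imports would start with keras. instead of tensorflow.keras):

```python
# Image recognition with the pre-trained VGG16 model in Keras.
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input, decode_predictions
from tensorflow.keras.preprocessing import image
import numpy as np

model = VGG16(weights="imagenet")

# VGG16 expects a 224x224 RGB input.
img = image.load_img("sample.jpg", target_size=(224, 224))  # placeholder file
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

preds = model.predict(x)
# Print the top 3 ImageNet labels with their confidence scores.
for _, label, prob in decode_predictions(preds, top=3)[0]:
    print(f"{label}: {prob:.2%}")
```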

Then we will try the ResNet pre-trained model included with the Keras library. We will include the model in the code and then try a few sample images to check the predictions.

After that we will try the Inception pre-trained model. We will also include the model in the code and then try a few sample images to check the predictions. Then we will go ahead with the Xception pre-trained model. Here also, we will include the model in the code and try it with a few sample images.
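Switching between these models mostly means changing the model class, its matching preprocess_input/decode_predictions pair and the expected input size (ResNet50 uses 224x224, while InceptionV3 and Xception expect 299x299). A minimal sketch with ResNet50, again with a placeholder image file:

```python
# Same pattern as VGG16, but with the ResNet50 model and its own
# preprocess_input/decode_predictions pair.
from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input, decode_predictions
from tensorflow.keras.preprocessing import image
import numpy as np

model = ResNet50(weights="imagenet")

img = image.load_img("sample.jpg", target_size=(224, 224))  # placeholder file
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

for _, label, prob in decode_predictions(model.predict(x), top=3)[0]:
    print(f"{label}: {prob:.2%}")
```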

Those were the Image Recognition pre-trained models, which can only label and classify a complete image based on the primary object in it. Now we will proceed with Object Recognition, in which we can detect and label multiple objects in a single image.

At first we will have an introduction to the MobileNet-SSD pre-trained model, a single-shot detector capable of detecting multiple objects in a scene. We will also have a quick discussion about the dataset that was used to train this model.

Later we will implement the MobileNet-SSD pre-trained model in our code and get the predictions and bounding box coordinates for every object detected. We will draw a bounding box around each object in the image and write the label along with the confidence value.
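Here is a minimal sketch of that single-image detection using OpenCV's dnn module; the prototxt/caffemodel file names and the 21-class VOC label list are assumptions based on the commonly distributed MobileNet-SSD Caffe model, not necessarily the exact files shared with the course:

```python
# Single-image object detection with a MobileNet-SSD Caffe model via OpenCV dnn.
import cv2
import numpy as np

CLASSES = ["background", "aeroplane", "bicycle", "bird", "boat", "bottle",
           "bus", "car", "cat", "chair", "cow", "diningtable", "dog", "horse",
           "motorbike", "person", "pottedplant", "sheep", "sofa", "train",
           "tvmonitor"]

net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                               "MobileNetSSD_deploy.caffemodel")

image = cv2.imread("sample.jpg")  # placeholder file
h, w = image.shape[:2]

# The model expects a 300x300 blob, scaled and mean-subtracted.
blob = cv2.dnn.blobFromImage(cv2.resize(image, (300, 300)),
                             0.007843, (300, 300), 127.5)
net.setInput(blob)
detections = net.forward()

for i in range(detections.shape[2]):
    confidence = detections[0, 0, i, 2]
    if confidence > 0.5:                      # keep confident detections only
        class_id = int(detections[0, 0, i, 1])
        box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
        (x1, y1, x2, y2) = box.astype("int")
        label = f"{CLASSES[class_id]}: {confidence:.2%}"
        cv2.rectangle(image, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.putText(image, label, (x1, max(y1 - 10, 15)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2)

cv2.imshow("Detections", image)
cv2.waitKey(0)
```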

Then we will go ahead with object detection from a live video. We will be streaming the real-time live video from the computer’s webcam and will try to detect objects from it. We will draw a rectangle around each object detected in the live video along with the label and confidence.

In the next session, we will go ahead with object detection from a pre-saved video. We will be streaming the saved video from our folder and will try to detect objects from it. We will draw a rectangle around each object detected along with the label and confidence.
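For both the live and the pre-saved video, the per-frame logic is the same as in the single-image case; only the source changes. A sketch of the capture loop, where passing 0 selects the webcam and a file path (the name here is a placeholder) selects a pre-saved video:

```python
# Per-frame MobileNet-SSD detection from a webcam or a saved video file.
import cv2

net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                               "MobileNetSSD_deploy.caffemodel")

cap = cv2.VideoCapture(0)            # or cv2.VideoCapture("street.mp4")
while True:
    ret, frame = cap.read()
    if not ret:                      # end of file or camera error
        break
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
                                 0.007843, (300, 300), 127.5)
    net.setInput(blob)
    detections = net.forward()
    for i in range(detections.shape[2]):
        if detections[0, 0, i, 2] > 0.5:
            x1, y1, x2, y2 = (detections[0, 0, i, 3:7] * [w, h, w, h]).astype("int")
            cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
    cv2.imshow("Live detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):    # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```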

Later we will go ahead with the Mask R-CNN pre-trained model. With the previous model, we were only able to get a bounding box around the object, but with Mask R-CNN we can get both the box coordinates as well as a mask over the exact shape of the object detected. We will have an introduction to this model and its details.

Later we will implement the Mask R-CNN pre-trained model in our code and, as the first step, get the predictions and bounding box coordinates for every object detected. We will draw a bounding box around each object in the image and write the label along with the confidence value.

Later we will get the mask returned for each object predicted. We will process that data and use it to draw translucent, multi-coloured masks over each and every object detected and write the label along with the confidence value.
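A minimal sketch of that masking step using OpenCV's dnn module is below. It assumes the TensorFlow frozen graph and matching .pbtxt config for a COCO-trained Mask R-CNN (both file names are placeholders), and the 0.5 score and 0.3 mask thresholds are common defaults rather than values from the course:

```python
# Mask R-CNN inference via OpenCV dnn: boxes plus low-resolution masks.
import cv2
import numpy as np

net = cv2.dnn.readNetFromTensorflow("frozen_inference_graph.pb",
                                    "mask_rcnn_inception_v2_coco.pbtxt")

image = cv2.imread("sample.jpg")  # placeholder file
h, w = image.shape[:2]

net.setInput(cv2.dnn.blobFromImage(image, swapRB=True, crop=False))
# The model exposes two outputs: box predictions and coarse per-class masks.
boxes, masks = net.forward(["detection_out_final", "detection_masks"])

for i in range(boxes.shape[2]):
    score = boxes[0, 0, i, 2]
    if score < 0.5:
        continue
    class_id = int(boxes[0, 0, i, 1])
    x1, y1, x2, y2 = (boxes[0, 0, i, 3:7] * [w, h, w, h]).astype("int")
    x1, y1 = max(x1, 0), max(y1, 0)
    x2, y2 = min(x2, w - 1), min(y2, h - 1)

    # Resize the coarse mask to the box size and threshold it into a binary mask.
    mask = cv2.resize(masks[i, class_id], (x2 - x1, y2 - y1)) > 0.3

    # Blend a translucent random colour over the masked pixels inside the box.
    roi = image[y1:y2, x1:x2]
    colour = np.random.randint(0, 255, 3).tolist()
    roi[mask] = (0.6 * np.array(colour) + 0.4 * roi[mask]).astype("uint8")
    cv2.rectangle(image, (x1, y1), (x2, y2), colour, 2)

cv2.imshow("Mask R-CNN", image)
cv2.waitKey(0)
```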

Then we will go ahead with object detection from a live video using Mask R-CNN. We will be streaming the real-time live video from the computer’s webcam and will try to detect objects from it. We will draw the mask over the perimeter of each object detected in the live video along with the label and confidence.

And, just as we did for the previous model, we will go ahead with object detection from a pre-saved video using Mask R-CNN. We will be streaming the saved video from our folder and will try to detect objects from it. We will draw coloured masks over each object detected along with the label and confidence.

Mask R-CNN is very accurate and has a vast class list, but it is very slow at processing images on low-power, CPU-based computers. MobileNet-SSD is fast but less accurate and covers fewer classes. We need a better blend of speed and accuracy, which brings us to Object Detection and Recognition using the YOLO pre-trained model. We will have an overview of the YOLO model in the next session and then implement YOLO object detection from a single image.
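A minimal sketch of that single-image YOLO detection via OpenCV's dnn module is below; the yolov3.cfg, yolov3.weights and coco.names file names are assumptions for the model files shared with the course:

```python
# YOLO object detection from a single image via OpenCV dnn.
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
with open("coco.names") as f:
    classes = [line.strip() for line in f]

image = cv2.imread("sample.jpg")  # placeholder file
h, w = image.shape[:2]

# YOLO expects a square blob with pixel values scaled to 0..1.
blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())

boxes, confidences, class_ids = [], [], []
for output in outputs:
    for detection in output:
        scores = detection[5:]
        class_id = int(np.argmax(scores))
        confidence = float(scores[class_id])
        if confidence > 0.5:
            cx, cy, bw, bh = detection[:4] * [w, h, w, h]
            boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
            confidences.append(confidence)
            class_ids.append(class_id)

# Non-maximum suppression removes overlapping boxes for the same object.
for i in np.array(cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.4)).flatten():
    x, y, bw, bh = boxes[int(i)]
    label = f"{classes[class_ids[int(i)]]}: {confidences[int(i)]:.2%}"
    cv2.rectangle(image, (x, y), (x + bw, y + bh), (0, 255, 0), 2)
    cv2.putText(image, label, (x, max(y - 10, 15)),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2)

cv2.imshow("YOLO", image)
cv2.waitKey(0)
```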

Using that as the base, we will try the YOLO model for object detection from a real-time webcam video and check its performance. Later we will use it for object recognition from the pre-saved video file.

To further improve the number of frames processed per second, we will use the model called Tiny YOLO, a lightweight version of the full YOLO model. We will first use Tiny YOLO on the pre-saved video and analyse both the accuracy and the speed, and then try the same with a real-time webcam video to see the difference in performance compared to the full YOLO model.
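Tiny YOLO is used exactly like the full model; only the configuration and weight files change (yolov3-tiny.cfg and yolov3-tiny.weights are assumed names here). Timing a single forward pass gives a rough feel for the speed difference:

```python
# Rough timing of a Tiny YOLO forward pass; the loading and blob steps are
# identical to the full YOLO example apart from the file names.
import time
import cv2

net = cv2.dnn.readNetFromDarknet("yolov3-tiny.cfg", "yolov3-tiny.weights")
image = cv2.imread("sample.jpg")  # placeholder file
net.setInput(cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416),
                                   swapRB=True, crop=False))

start = time.time()
net.forward(net.getUnconnectedOutLayersNames())
print(f"Forward pass took {time.time() - start:.3f} s")
```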

That’s all about the topics currently included in this quick course. The code, images and libraries used in this course have been uploaded and shared in a folder. I will include the link to download them in the last session or in the resources section of this course. You are free to use the code in your projects with no questions asked.

Also, after completing this course, you will be provided with a course completion certificate which will add value to your portfolio.

So that’s all for now; see you soon in the classroom. Happy learning and have a great time.

Who this course is for:
  • Beginners, or anyone who wants to start with Python-based OCR, Image Recognition and Object Recognition

Enroll here: https://www.udemy.com/course/computer-vision-python-ocr-object-detection-quick-starter/?couponCode=COURSE-LAUNCH-OFFER