Face recognition door unlock system using machine learning and IoT


Project period

08/07/2019 - 09/30/2019





The Pi camera connected to a Raspberry Pi captures an input image and compares it against the stored images. If the input image matches a stored image with at least 80 percent similarity, the solenoid lock opens; otherwise, the door remains locked. When the input image does not match any stored image, that is, when a stranger is at the door, the camera takes a snapshot of the person and Alexa announces to the user that someone is at the door.
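The unlock decision described above reduces to a simple threshold check. The sketch below is illustrative only: `should_unlock` and `match_score` are hypothetical names, not part of the project code, and the 0.80 cut-off comes from the 80 percent figure in this write-up.

```python
def should_unlock(match_score: float, threshold: float = 0.80) -> bool:
    """Open the solenoid lock only when the live face matches a stored
    face with at least `threshold` similarity (scores range 0.0-1.0).
    Hypothetical helper; `match_score` is assumed to come from whatever
    face matcher the system uses."""
    return match_score >= threshold

# Known resident with a strong match -> unlock
assert should_unlock(0.92) is True
# Stranger with a weak match -> keep the door locked, take a snapshot
assert should_unlock(0.35) is False
```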
Security in daily life is a growing concern, and a great deal of research is devoted to it. A turning point came with the Internet of Things: by connecting everyday objects to the internet, industry has built a wide range of new applications. This facial recognition door unlock system detects a face and identifies the person. Every person's face is unique, and that uniqueness is what inspired this project. Our main aim is to create a smart door system for a house that secures the home and everything in it. In this design, a live web camera is mounted on the front side of the door along with a display monitor, so the owner can see whoever is standing in front of the door. Voice output is generated by the processor, and answers and instructions are shown as output on the screen. A stepper motor locks and opens the door by a sliding mechanism, so a person standing in front of the door can access it normally. Face matching is performed through the Microsoft Face API, and the display is driven by an application built in Microsoft Visual Studio.

Why: Problem statement

If an unknown person comes to the door, the people inside the home are made aware of it. This helps the house owner protect himself and his belongings inside the home.

How: Solution description

The face recognition system identifies or verifies a person's identity from digital images captured by a camera. We use OpenCV, a well-known computer vision library originally started by Intel; this cross-platform library focuses on real-time image processing and includes patent-free implementations of recent computer vision algorithms. The basic flow is as follows: the camera captures an image; the Viola-Jones method locates the face in the image using Haar cascade classifiers; and features are extracted from the face region. After extraction, the system matches the captured face against the database images using the LBPH (Local Binary Patterns Histograms) algorithm. The idea is not to treat the whole image as a single high-dimensional vector, as the Eigenfaces and Fisherfaces recognizers do, but to describe only the local features of the object. LBPH is more accurate than Eigenfaces and avoids the heavy computation that Eigenfaces/PCA requires; the features extracted this way are inherently low-dimensional. If a face is recognized, the person is known; otherwise they are unknown. For an authorized person, the Raspberry Pi commands the door motor and the door opens automatically; for an unknown person, the alarm rings instead.
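The core of LBPH mentioned above can be illustrated without OpenCV: each pixel is encoded by comparing it with its eight neighbours, producing an 8-bit code, and the recognizer then works with histograms of these codes. A minimal sketch of the code computation for a single 3x3 patch (pure Python, illustrative only):

```python
def lbp_code(patch):
    """Local Binary Pattern code for a 3x3 patch given as 3 rows.

    Each of the 8 neighbours is compared with the centre pixel:
    a neighbour >= centre contributes a 1 bit, otherwise a 0 bit.
    Reading neighbours clockwise from the top-left corner yields
    an 8-bit code (0-255)."""
    center = patch[1][1]
    # neighbours in clockwise order starting at the top-left corner
    neighbours = [patch[0][0], patch[0][1], patch[0][2],
                  patch[1][2], patch[2][2], patch[2][1],
                  patch[2][0], patch[1][0]]
    code = 0
    for bit, value in enumerate(neighbours):
        if value >= center:
            code |= 1 << (7 - bit)
    return code

patch = [[10, 20, 30],
         [40, 50, 60],
         [70, 80, 90]]
print(lbp_code(patch))  # -> 30 (binary 00011110: bright lower-right half)
```

Over a face image, a histogram of these codes is built per grid cell, and recognition compares histograms rather than raw pixels, which is why the representation is low-dimensional and robust to monotonic lighting changes.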

How is it different from competition

Conventional CCTV surveillance records continuous video, all of which must be reviewed to find an event. Here, snapshots are taken only when someone is at the door, so they are quick to review, and no physical key is required.

Who are your customers

Homeowners who want automated, keyless door security (home door automation).

Project Phases and Schedule

Phase 1: Face recognition system/automatic attendance system

Phase 2: Door unlock system

Resources Required

Software:

Python 3

Raspbian OS

Hardware:

Raspberry Pi 3

Pi Camera

Solenoid lock

Alexa device

2-channel relay

NodeMCU

Buzzer

12 V battery

Jumper cables

Breadboard

Download:
Project Code (Untitled.ipynb, Python 3)

Cell 1 – live face recognition from the webcam:

import cv2
import faceRecognition as fr

# This module captures images via webcam and performs face recognition
face_recognizer = cv2.face.LBPHFaceRecognizer_create()
face_recognizer.read('trainingData.yml')  # load saved training data

name = {0: "kalai", 1: "vignesh"}

cap = cv2.VideoCapture(0)

while True:
    ret, test_img = cap.read()  # returns a success flag and the captured frame
    faces_detected, gray_img = fr.faceDetection(test_img)

    for (x, y, w, h) in faces_detected:
        cv2.rectangle(test_img, (x, y), (x + w, y + h), (255, 0, 0), thickness=7)

    resized_img = cv2.resize(test_img, (1000, 700))
    cv2.imshow('face detection', resized_img)
    cv2.waitKey(10)

    for face in faces_detected:
        (x, y, w, h) = face
        roi_gray = gray_img[y:y + h, x:x + w]  # fixed: slice height with h, width with w
        label, confidence = face_recognizer.predict(roi_gray)  # predict the label of the face region
        print("confidence:", confidence)
        print("label:", label)
        fr.draw_rect(test_img, face)
        predicted_name = name[label]
        if confidence < 39:  # label the face only when the LBPH distance is low enough
            fr.put_text(test_img, predicted_name, x, y)

    resized_img = cv2.resize(test_img, (1000, 700))
    cv2.imshow('face recognition', resized_img)
    if cv2.waitKey(10) == ord('q'):  # quit when 'q' is pressed
        break

cap.release()
cv2.destroyAllWindows()  # fixed: call the function

Cell 2 – training the recognizer and testing it on a single image:

import cv2
import faceRecognition as fr

test_img = cv2.imread('TestImages/vignesh.jpg')  # path to the test image
faces_detected, gray_img = fr.faceDetection(test_img)
print("faces_detected:", faces_detected)

faces, faceID = fr.labels_for_training_data('trainingImages')
face_recognizer = fr.train_classifier(faces, faceID)
face_recognizer.write('trainingData.yml')

name = {0: "Kalai", 1: "vignesh"}

for face in faces_detected:
    (x, y, w, h) = face
    roi_gray = gray_img[y:y + h, x:x + w]  # fixed: width slice was x:x+h
    label, confidence = face_recognizer.predict(roi_gray)
    print("confidence:", confidence)
    print("label:", label)
    fr.draw_rect(test_img, face)
    predicted_name = name[label]
    if confidence > 37:  # skip faces whose LBPH distance is too high
        continue
    fr.put_text(test_img, predicted_name, x, y)

resized_img = cv2.resize(test_img, (1000, 1000))
cv2.imshow("face recognition", resized_img)
cv2.waitKey(0)
cv2.destroyAllWindows()
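The `confidence` value the LBPH recognizer prints above is a distance between LBP histograms, so lower means a better match; that is why the code accepts a face only when the value falls below a threshold such as 37-39. A common way to compare such histograms is the chi-square distance; the sketch below is illustrative, with made-up toy histograms rather than real LBP data.

```python
def chi_square(h1, h2, eps=1e-10):
    """Chi-square distance between two histograms (lower = more similar).
    `eps` guards against division by zero for empty bins."""
    return sum((a - b) ** 2 / (a + b + eps) for a, b in zip(h1, h2))

stored = [4, 9, 2, 5]   # toy histogram of LBP codes from an enrolled face
same   = [5, 8, 2, 5]   # similar face -> small distance
other  = [1, 2, 9, 8]   # different face -> large distance

assert chi_square(stored, stored) == 0.0
assert chi_square(stored, same) < chi_square(stored, other)
```

Because the score is a distance rather than a probability, the acceptance threshold has to be tuned empirically for the camera, lighting, and training images at hand.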
