
Face Mesh using OpenCV and dlib

In this article, we will use OpenCV and dlib to extract a face from a given image and then blend it into another. In short, we will try to swap faces between two different images. We will use a pre-trained model to extract facial features (68 landmark points).

Industrial Applications of Face Mesh

Snapchat: Snapchat is one of the leading apps for today's generation, well known for the fun filters it applies to faces. A face mesh feature like this one could be added to Snapchat or similar apps to attract more users, which in turn drives more downloads.

Augmented Reality Software: AR/VR software can also use this functionality in some of its use cases to make them clearer and more creative.

# Importing the required libraries
import cv2
import numpy as np
import dlib
import requests  
from PIL import Image
Installing dlib for the Face Mesh app

picture 1

  1. Install Visual Studio (latest version) – refer to this link.
  2. In Visual Studio, install the CMake package.
  3. After installing it through Visual Studio, install CMake again from the CMake installation link.
  4. Finally, install dlib from the dlib installation link.

Download the pre-trained shape predictor model (shape_predictor_68_face_landmarks.dat).


We will now create a helper function to extract the index from the NumPy array returned by np.where.

# Extracting index from array
def extract_index_nparray(nparray):
    index = None
    for num in nparray[0]:
        index = num
    return index
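As a quick illustration of how this helper is used: np.where returns a tuple of arrays, and the function pulls out the matching row index. The sample points below are made up for demonstration (the helper is repeated so the snippet runs on its own):

```python
import numpy as np

# Same helper as in the article: pull the index out of np.where's result
def extract_index_nparray(nparray):
    index = None
    for num in nparray[0]:
        index = num
    return index

# Hypothetical landmark points
points = np.array([(10, 20), (30, 40), (50, 60)], np.int32)

# Find the row matching the point (30, 40)
match = np.where((points == (30, 40)).all(axis=1))
print(extract_index_nparray(match))  # 1
```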

Next, we will load our source image from the internet using the URL and resize it.

# Reading source image from url
image1 = Image.open(requests.get('https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcSx8Pu1tW1uCiZPfj9K1EL6uHxbg3bOKO9XkA&usqp=CAU', stream=True).raw)
image1 = image1.resize((300,300))



Here we will load our destination image from the internet using the URL and resize it.

# Reading destination image from url
image2 = Image.open(requests.get('https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcTYX1dyl9INRo5cbvDeTILRcZVzfcMsCsE0kg&usqp=CAU', stream=True).raw)
image2 = image2.resize((300,300))



We will now convert our images to NumPy arrays and use cv2 to convert them to grayscale. We will also create an empty mask of zeros with the same shape as the source image.

# Converting images to arrays and then to grayscale
# (PIL loads images as RGB, so we use COLOR_RGB2GRAY)
img = np.array(image1)
img_gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
mask = np.zeros_like(img_gray)
img2 = np.array(image2)
img2_gray = cv2.cvtColor(img2, cv2.COLOR_RGB2GRAY)

Next, we will load dlib's frontal face detector and the facial landmark predictor, then use the destination image's height, width, and channels to create a blank image (filled with zeros) for the new face.

# Initializing frontal face detector and shape predictor
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
height, width, channels = img2.shape
img2_new_face = np.zeros((height, width, channels), np.uint8)

Triangulating the images for Face Mesh

picture 2

First, we pass the image to the detector; the resulting face object is then used to extract the landmarks with the predictor. The extracted landmark coordinates (x and y) are stored in a list. Next, we split the face into triangles. This step is the core of the face swap: each triangle will be swapped with the corresponding triangle of the destination image. The destination image's triangulation must follow exactly the same pattern as the source image's, meaning the landmark connectivity must be identical. So after triangulating the source image, we record the landmark indexes of each triangle so that we can reproduce the same triangulation on the destination image. Once we have the triangle indexes, we loop through them and triangulate the destination face.

# Face 1
faces = detector(img_gray)
for face in faces:
    landmarks = predictor(img_gray, face)
    landmarks_points = []
    for n in range(0, 68):
        x = landmarks.part(n).x
        y = landmarks.part(n).y
        landmarks_points.append((x, y))

    points = np.array(landmarks_points, np.int32)

    convexhull = cv2.convexHull(points)
    cv2.fillConvexPoly(mask, convexhull, 255)    
    face_image_1 = cv2.bitwise_and(img, img, mask=mask)

    # Delaunay triangulation

    rect = cv2.boundingRect(convexhull)
    subdiv = cv2.Subdiv2D(rect)
    subdiv.insert(landmarks_points)  # the landmark points must be inserted before triangulating
    triangles = subdiv.getTriangleList()
    triangles = np.array(triangles, dtype=np.int32)
    indexes_triangles = []
    for t in triangles:
        pt1 = (t[0], t[1])
        pt2 = (t[2], t[3])
        pt3 = (t[4], t[5])

        index_pt1 = np.where((points == pt1).all(axis=1))
        index_pt1 = extract_index_nparray(index_pt1)

        index_pt2 = np.where((points == pt2).all(axis=1))
        index_pt2 = extract_index_nparray(index_pt2)

        index_pt3 = np.where((points == pt3).all(axis=1))
        index_pt3 = extract_index_nparray(index_pt3)
        if index_pt1 is not None and index_pt2 is not None and index_pt3 is not None:
            triangle = [index_pt1, index_pt2, index_pt3]
            indexes_triangles.append(triangle)  # store the landmark indexes of this triangle

Triangulation Pattern – Delaunay Triangulation

Once all the triangles are cut out and warped, we need to stitch them together by reconstructing the face with the same triangulation pattern, the only difference being that this time we place the warped triangles.

# The face can now be swapped by convex hull partitioning
# (convexhull2 is the convex hull of the destination face's landmarks,
# obtained the same way as convexhull above)
img2_face_mask = np.zeros_like(img2_gray)
img2_head_mask = cv2.fillConvexPoly(img2_face_mask, convexhull2, 255)
img2_face_mask = cv2.bitwise_not(img2_head_mask)

The face is now ready for replacement.

So we take the new face and the destination image without its face and combine them.

img2_head_noface = cv2.bitwise_and(img2, img2, mask=img2_face_mask)
result = cv2.add(img2_head_noface, img2_new_face)

Finally, with the faces swapped, we need to adjust the colors so that the new face blends into the destination image.

OpenCV has a built-in function called seamlessClone that does this automatically. We take the new face (created in the previous step), the original destination image, and the head mask used to cut out the face, and we compute the center of the face region.

Creating a seamless clone

(x, y, w, h) = cv2.boundingRect(convexhull2)
center_face2 = (int((x + x + w) / 2), int((y + y + h) / 2))
seamlessclone = cv2.seamlessClone(result, img2, img2_head_mask, center_face2, cv2.NORMAL_CLONE)

Finally, we will visualize the result by converting the NumPy array back into a Pillow Image object.

# Converting the array back to an image
Image.fromarray(seamlessclone)

We started by downloading the pre-trained facial landmark model and the images from the internet that we would work on. Next, we used cv2 and dlib for image preprocessing and applied the various functions above to reach the final goal: swapping the face of the destination image with that of the source image.

This project can be used to learn and understand different concepts of computer vision, and it can serve as a building block for augmented reality apps like Snapchat.

Well, that's a wrap on my part!


Thank you for reading my article 🙂

I hope you liked this step-by-step walkthrough of Face Mesh using computer vision. I plan to cover Flask web development in the next article; until then, happy learning!

Here is the repo link for this article.

Here you can access my other articles published on Analytics Vidhya -Blogathon (link)

If you get stuck somewhere you can connect with me on LinkedIn, refer to this link

About Me

Hello everyone, I am currently working at TCS. Previously, I worked as an Associate Analyst in Data Science at Zorba Consulting India. Besides working full time, I have a great interest in the same field, i.e. data science, along with other subsets of artificial intelligence like computer vision, machine learning, and deep learning. Feel free to collaborate with me on any project in the above-mentioned fields (LinkedIn).

Image Sources

  1. picture 1: https://th.bing.com/th/id/R.17820239375ad0f8b4088a3d321a72aa?rik=cuc2p43UgbwIoA&riu=http%3a%2f%2fwww.learnopencv.com%2fwp-content%2fuploads%2fibows%2.jpg-windows2012%2%&ehk=3TU7GRqxkV7eHDMWY92oO3Gunn3QuNVYoktdQNS0QyM%3d&risl=&pid=ImgRaw&r=0
  2. picture 2: https://www.researchgate.net/profile/Marcelo-Amaral-2/publication/331984027/figure/download/fig1/AS:[email protected]/The-dual-2-complex-for-triangulation-level-1-with-four-tetrahedrons-highlighting-the.ppm

The media described in this article is not owned by Analytics Vidhya and is used at the author’s discretion.
