Linear Regression with Perceptron Using the PyTorch Library in Python – News Couple

# Linear Regression with Perceptron Using the PyTorch Library in Python

## Linear Regression Overview

“Without understanding the engine, building or working on a car is just playing with metal.”

This seems true in almost all areas of life: without the basics, creation and innovation simply are not possible. In this guide, we will understand what linear regression is and how we can implement it using neural networks. The basic unit of any neural network – simple or complex – is the neuron. A neural network that contains a single neuron is called a “perceptron”. It was invented by Frank Rosenblatt at the Cornell Aeronautical Laboratory in 1958, so it has been around for more than 60 years.

With the current rise of real-world deep learning applications, dense neural networks have largely overtaken the perceptron, but that does not mean the perceptron has become irrelevant. Here, we will look at the theory as well as the code for building a perceptron to solve a linear regression problem using PyTorch.

PyTorch is a framework designed and developed by Facebook for easily writing artificial intelligence and machine learning code using tensor computations. It is one of the top 3 frameworks for developing deep learning applications and models. PyTorch is a Python package that offers two high-level features:

• Tensor computation (similar to NumPy) with strong support for GPU acceleration.
• Deep neural networks built on a tape-based autograd system (one of the methods for computing gradients automatically).

## Terms Related to Linear Regression with Perceptron

Tensor: A tensor is a multidimensional array that stores data just like any other data structure. The stored values can be easily accessed through indexing. For a better intuition, think of tensors as a series of structures of increasing complexity: scalar -> vector -> matrix -> tensor.
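A quick sketch of that progression, using each tensor's number of dimensions (`ndim`) to tell the ranks apart:

```python
import torch

scalar = torch.tensor(3.0)             # rank 0: a single value
vector = torch.tensor([1.0, 2.0])      # rank 1: a 1-D array
matrix = torch.tensor([[1.0, 2.0],
                       [3.0, 4.0]])    # rank 2: a 2-D array
tensor3 = torch.zeros(2, 2, 2)         # rank 3: a 3-D array

print(scalar.ndim, vector.ndim, matrix.ndim, tensor3.ndim)  # 0 1 2 3
print(matrix[1, 0])  # indexing retrieves a stored value: tensor(3.)
```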

Optimization: The process of adjusting certain values to obtain a better result.

Loss: The difference between the actual and predicted output is called the ‘loss’. It is the value that must be minimized to obtain the best optimized model.

Variables: The input values we already have as data for the model are called variables. These values are already defined at training and inference (evaluation/testing) time.

Weights: The coefficients of the linear equation, optimized during training to reduce the loss, are called the model weights. Together with the bias, they are known as the model parameters.

Bias: The constant term in the linear equation that controls the vertical position of the line in the Cartesian plane. It is also called the “y-intercept” (the point where the linear regression line intersects the y-axis).
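These terms come together in the perceptron’s linear equation, prediction = weight × input + bias, with the loss measuring the gap between prediction and actual output. A tiny sketch (the numbers here are made up purely for illustration):

```python
import numpy as np

w, b = 0.5, 1.0           # weight and bias: the model parameters
x = np.array([2.0, 4.0])  # input variables (the data we already have)
y = np.array([2.1, 2.9])  # actual outputs

y_pred = w * x + b                 # the neuron's linear equation
loss = np.mean((y - y_pred) ** 2)  # mean squared error: the "loss"
print(loss)                        # a small value, since the line fits well
```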

## Code in PyTorch for Linear Regression with Perceptron

Before we start anything, you should know that the Python package we use for PyTorch is ‘torch’. The first and foremost thing for any project is to know the basic libraries and packages that will help you implement it successfully. In our case, besides torch, we will use NumPy for mathematical computation and Matplotlib for visualization.

#### 1. Import libraries and create dataset

```
import numpy as np
import matplotlib.pyplot as plt
import torch
```

The dataset is created using NumPy arrays.

```
# NOTE: several entries of this array were lost in the original article;
# the values filled in below are arbitrary placeholders.
x_train = np.array([[4.7], [2.4], [7.5], [7.1], [4.3],
                    [7.8], [8.9], [5.2], [4.59], [2.1],
                    [8.0], [5.4], [7.5], [6.1], [5.8],
                    [6.7], [5.2], [4.9], [4.2], [4.7],
                    [3.8], [4.8], [3.5], [2.1], [4.1]],
                   dtype=np.float32)
```

```
# NOTE: two lost entries were likewise replaced with arbitrary placeholders.
y_train = np.array([[2.6], [1.6], [3.09], [2.4], [2.4],
                    [3.3], [2.6], [1.96], [3.13], [1.76],
                    [3.2], [2.1], [1.6], [2.5], [2.2],
                    [2.75], [2.4], [1.8], [2.3], [2.2],
                    [1.6], [2.4], [2.6], [1.5], [3.1]],
                   dtype=np.float32)
```

We created the dataset with some random values as NumPy arrays.

Visualizing the data:

```
plt.figure(figsize=(8, 8))
plt.scatter(x_train, y_train, c="green", s=200, label="Original data")
plt.show()
```

#### 2. Data Preparation and Modeling with Pytorch

Now, the next step is to convert these NumPy arrays to PyTorch tensors (described above in the terminology) because tensors are the underlying data structures that enable all PyTorch functionality for machine learning and deep learning code. So, let’s do that.

```
X_train = torch.from_numpy(x_train)
Y_train = torch.from_numpy(y_train)
```

Note: requires_grad is the tensor property that controls whether the gradient of the tensor is computed during training. Tensors with requires_grad set to False do not store their gradients for further use.
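A small illustration of this behavior (the tensor names here are arbitrary):

```python
import torch

a = torch.tensor([2.0, 3.0], requires_grad=True)  # gradient will be tracked
b = torch.tensor([2.0, 3.0])                      # requires_grad is False by default

out = (a * a).sum()
out.backward()   # populates a.grad with d(a^2)/da = 2a
print(a.grad)    # tensor([4., 6.])
print(b.grad)    # None: no gradient is stored for this tensor
```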

## Modeling in PyTorch

In this article, we create the simplest possible model: a single neuron computing the linear equation responsible for capturing the linearity in the data, as mentioned above. The parameters of the model are w1 and b1, the weight and bias respectively. These parameters are adjusted by the optimizer during training. On the other hand, we have hyperparameters, which the developer controls and which are used to manage the training process and steer it in a better direction. Let’s look at the parameters first.

```
w1 = torch.rand(input_size,
                hidden_size,
                requires_grad=True)
b1 = torch.rand(hidden_size,
                output_size,
                requires_grad=True)
```

Note that here we explicitly declare that these tensor variables must have requires_grad set to True, so that their gradients are computed during training and used by the optimizer to fine-tune them further.

These are some of the hyperparameters used. Different neural network architectures come with different hyperparameters. Here are the ones we’ll be using in this model.

```
input_size = 1
hidden_size = 1
output_size = 1
learning_rate = 0.001
```

input_size, hidden_size, and output_size are all 1 because there is only one neuron; each value indicates the number of neurons in the corresponding layer. The learning rate, as the name implies, controls how strongly the network changes its parameters at each update. Too high a learning rate leads to drastic changes in the values, making it difficult to reach the optimal result; similarly, too low a learning rate makes the model take a very long time to reach the optimum.
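The next section plots a `predicted` variable, but the article omits the training loop that produces it. Below is a minimal sketch of one possible loop using plain gradient descent, consistent with the tensors and hyperparameters defined above. The dataset is shortened to three points so the snippet stays self-contained, and the epoch count of 1000 is an assumption, not from the article.

```python
import numpy as np
import torch

# Shortened sample of the dataset defined earlier.
x_train = np.array([[4.7], [2.4], [7.5]], dtype=np.float32)
y_train = np.array([[2.6], [1.6], [3.09]], dtype=np.float32)
X_train = torch.from_numpy(x_train)
Y_train = torch.from_numpy(y_train)

input_size = hidden_size = output_size = 1
learning_rate = 0.001

w1 = torch.rand(input_size, hidden_size, requires_grad=True)
b1 = torch.rand(hidden_size, output_size, requires_grad=True)

for epoch in range(1000):
    y_pred = X_train.mm(w1).add(b1)         # linear equation: y = x·w + b
    loss = (y_pred - Y_train).pow(2).sum()  # sum of squared errors
    loss.backward()                         # compute gradients of w1 and b1
    with torch.no_grad():                   # gradient-descent update
        w1 -= learning_rate * w1.grad
        b1 -= learning_rate * b1.grad
        w1.grad.zero_()                     # reset gradients for the next epoch
        b1.grad.zero_()

# Fitted-line outputs, as a NumPy array ready for plotting.
predicted = X_train.mm(w1).add(b1).detach().numpy()
```

After training, `predicted` holds the model’s outputs for every input, which is what the plotting code in the next section draws as the fitted line.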

## Visualizing Predicted and Actual Values

```
plt.figure(figsize=(8, 8))
plt.scatter(x_train, y_train, c="green", s=200, label="Original data")
plt.plot(x_train, predicted, label="Fitted line")
plt.legend()
plt.show()
```

So here we have it: a simple linear model built and trained with PyTorch. There are different ways to build a simple linear model, but through this tutorial you should understand and become familiar with some of the important functions in PyTorch for building a neural network. This time it was a single neuron, but the same ideas can be expanded into something more powerful, especially by adding an activation function, working with multiple neurons, or both. You can easily extend it into a complete deep neural network. I suggest you try building multiple network architectures to deepen your understanding of PyTorch.
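As one possible sketch of that extension, here is a small network with an activation function and multiple neurons, built with PyTorch’s `nn.Sequential` (the layer sizes are arbitrary choices for illustration):

```python
import torch
import torch.nn as nn

# A single input feature -> 8 hidden neurons -> a single output,
# with a ReLU activation adding non-linearity between the layers.
model = nn.Sequential(
    nn.Linear(1, 8),
    nn.ReLU(),
    nn.Linear(8, 1),
)

x = torch.tensor([[4.7]])   # one sample with one feature
print(model(x).shape)       # torch.Size([1, 1])
```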

#### Gargia Sharma

B.Tech fourth-year student
Specialist in deep learning and data science