Monday, April 2, 2018

MACHINE LEARNING | MULTIVARIATE REGRESSION

MACHINE LEARNING - DAY 3






If you missed the earlier tutorials, you can visit the following links:



Continuing our machine learning journey, let's move on to today's topic.

Linear Regression with Multiple Variables or Multiple Linear Regression

In the previous tutorial (Day 2), we saw how to predict the output based on a single input value x, i.e.,

hΘ(x) = Θ0 + Θ1x

Now, what if the output depends on more than one value, i.e., on more than one feature?

For example, a house price depends on the square footage, the number of rooms, the location, etc.

This is where multiple linear regression comes in handy.

Notations:

n: number of features

m: number of training examples

x(i): input features of the ith training example

xj(i): value of feature j in the ith training example

For eg:

Height    Age    Profession
5’11      25     CA
5’9       29     Artist
5’5       21     Singer
6’2       32     Scientist

Here:
n = 3
m = 4

x(2) = (5’9, 29, Artist)

x2(2) = 29
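The notation above can be sketched in code. Here is a minimal example, assuming the training data is stored as a plain list of rows (one row per training example); note that the math notation is 1-indexed while Python lists are 0-indexed.

```python
# Training data: m = 4 examples, each with n = 3 features
data = [
    ["5'11", 25, "CA"],
    ["5'9",  29, "Artist"],
    ["5'5",  21, "Singer"],
    ["6'2",  32, "Scientist"],
]

m = len(data)     # number of training examples -> 4
n = len(data[0])  # number of features -> 3

# Convert the 1-indexed notation to 0-indexed list access:
x_2 = data[2 - 1]            # x(2): the 2nd training example
x_2_2 = data[2 - 1][2 - 1]   # x2(2): feature 2 of the 2nd example

print(m, n)    # 4 3
print(x_2)     # ["5'9", 29, 'Artist']
print(x_2_2)   # 29
```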

GENERAL FORM OF MULTIVARIATE LINEAR REGRESSION

The general form of multivariate linear regression is :

hΘ(x) = Θ0 + Θ1x1 + Θ2x2 + …. + Θnxn

Θj = parameters of the hypothesis

xj = input values (features)

Let x0 = 1 (a dummy feature that multiplies Θ0). Now,

hΘ(x) = Θ0x0 + Θ1x1 + Θ2x2 + …. + Θnxn


Θ = [Θ0 Θ1 Θ2 …. Θn]

x = [x0 x1 x2 ….xn]

Then using matrix multiplication,

hΘ(x) = ΘTx
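As a quick sketch of hΘ(x) = ΘᵀX with NumPy, the parameter and input values below are made-up illustrations, not values from this article:

```python
import numpy as np

theta = np.array([1.0, 2.0, 0.5, -1.0])  # [Θ0, Θ1, Θ2, Θ3]
x = np.array([1.0, 3.0, 2.0, 4.0])       # [x0 = 1, x1, x2, x3]

# The @ operator computes the inner product Θᵀx:
# 1*1 + 2*3 + 0.5*2 + (-1)*4
h = theta @ x
print(h)  # 4.0
```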

So that was multiple regression or multivariate regression. Now let’s move on to gradient descent for multiple regression which decides how to predict the output value for a given set of inputs.

GRADIENT DESCENT FOR MULTIVARIATE REGRESSION

The gradient descent update has the same general form as in simple linear regression; we just repeat it for each of the n + 1 parameters Θ0, Θ1, …, Θn, updating them all simultaneously.


Repeat until convergence:

Θj := Θj − α · (1/m) · Σ(i = 1 to m) (hΘ(x(i)) − y(i)) · xj(i)    (simultaneously for every j = 0, 1, …, n)

Multiple linear regression is therefore the same as simple linear regression; the only difference is that simple linear regression has a single input variable, while multiple linear regression has several, so the gradient descent update is carried out for every parameter.
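The update rule above can be sketched as a short NumPy function. The data, learning rate, and iteration count here are illustrative assumptions, not values from the article:

```python
import numpy as np

def gradient_descent(X, y, alpha=0.05, iterations=5000):
    """Fit Θ for hΘ(x) = ΘᵀX by batch gradient descent."""
    m, n = X.shape
    Xb = np.hstack([np.ones((m, 1)), X])  # prepend the x0 = 1 column
    theta = np.zeros(n + 1)
    for _ in range(iterations):
        error = Xb @ theta - y                 # hΘ(x(i)) − y(i) for all i
        theta -= alpha * (Xb.T @ error) / m    # simultaneous update of all Θj
    return theta

# Toy data generated from y = 1 + 2*x1 + 3*x2 (no noise)
X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 3.0], [4.0, 5.0], [0.0, 1.0]])
y = 1 + 2 * X[:, 0] + 3 * X[:, 1]

theta = gradient_descent(X, y)
print(theta)  # close to [1, 2, 3]
```

Note that each iteration computes the error for all m examples once and then updates every Θj at the same time, which is exactly the "simultaneous update" the equation requires.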

That's all for day 3. Next we will learn about how to make gradient descent work efficiently and how to choose the learning rate alpha in day 4 which will be uploaded by April 6, 2018.
      
If you feel this article helped you in any way do not forget to share and if you have any thoughts or doubts upon it do write them in the comment section.

Till then Happy Learning!!
