Regression Coefficient Formula

Let’s understand the formula for the linear regression coefficients, that is, the formula for both alpha and beta.

Now, if you have simple linear regression, that is, you have just one x variable in your data, you will be able to compute the values of alpha and beta using this formula.

Let’s suppose you have data where your x is the area and y is the price. Amongst the various candidate lines possible for the line of best fit, let’s imagine that the linear regression determined a particular line.

Alright, for this to be the line of best fit, the sum of the squared residuals across all these points should be the minimum.

What do I mean by a residual? Take a particular data point: it does not fall exactly on the line, so there is a certain distance between the actual value and what the linear regression predicts for that particular value of x.

For a given x, the linear regression predicts y, the price, to be some amount, but the actual value of y lies elsewhere. The vertical distance along the y axis between the line and the actual value is called the residual. Note that the residual is not the shortest (perpendicular) distance to the line; it is the distance measured along the y axis.

Likewise, we compute the residual between the line and every point in the data. For the line of best fit, the sum of the squares of all these residuals will be the minimum.
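To make the residual concrete in code, here is a minimal Python sketch; the data points and the candidate line’s intercept and slope are all made up for illustration:

```python
# Hypothetical data: x is the area, y is the price.
x = [500, 750, 1000, 1250, 1500]
y = [60, 95, 120, 160, 175]

# An assumed candidate line: y_hat = a + b * x.
a, b = 10.0, 0.11

# Residual = actual y minus the y the line predicts, measured along the y axis.
residuals = [yi - (a + b * xi) for xi, yi in zip(x, y)]
print(residuals)  # one residual per data point, positive or negative
```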

And why are we looking at the sum of squares? Because we don’t want the residuals to cancel out. For example, one point could have a positive residual while another has a negative residual; if you just took a plain sum, the positive and negative values would cancel out, and that might not yield the most accurate line of best fit. Instead, we want the sum of squares to be at a minimum.
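To see why a plain sum is misleading, consider this sketch (same made-up data as above, repeated so the snippet stands alone): a flat line through the mean of y is clearly a poor fit, yet its residuals sum to exactly zero.

```python
# Same hypothetical data as above.
x = [500, 750, 1000, 1250, 1500]
y = [60, 95, 120, 160, 175]

# A deliberately poor candidate: a flat line at the mean of y.
y_mean = sum(y) / len(y)
residuals = [yi - y_mean for yi in y]

print(sum(residuals))                  # 0.0: positives and negatives cancel exactly
print(sum(r ** 2 for r in residuals))  # large: squaring exposes the misfit
```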

So, the equation looks something like this: y = alpha + beta * x + epsilon, where epsilon is nothing but the residual. The alpha is nothing but the intercept, and beta is the slope: if the line makes an angle theta with the x axis, beta is nothing but tan(theta). Equivalently, when x increases by one unit, the vertical rise of the line is exactly the value of beta.
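As a quick numerical check of the slope interpretation (the alpha and beta values here are made up), a one-unit step along x raises the prediction by exactly beta:

```python
# Made-up intercept and slope.
alpha, beta = 10.0, 0.11

def predict(x):
    return alpha + beta * x

# The rise for a one-unit step along x equals beta, wherever you start.
print(predict(501) - predict(500))  # ~0.11, up to floating-point rounding
```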

Now, let’s look at the actual formula. The value of beta, the slope (tan theta), is nothing but the covariance of x and y divided by the variance of x.

So, in your data you have your x and y columns. The covariance between x and y divided by the variance of x works out, summing over all the points i = 1 to n, to:

beta = Cov(x, y) / Var(x)
     = [ sum of (x_i - x_bar)(y_i - y_bar) / n ] / [ sum of (x_i - x_bar)^2 / n ]

where x_bar is nothing but the mean of x and y_bar the mean of y. This n and this n cancel out, leaving:

beta = sum of (x_i - x_bar)(y_i - y_bar) / sum of (x_i - x_bar)^2
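In code, the beta formula is just a few lines. Here is a minimal sketch with the same made-up data, dropping the two 1/n factors since they cancel:

```python
x = [500, 750, 1000, 1250, 1500]
y = [60, 95, 120, 160, 175]

n = len(x)
x_bar = sum(x) / n   # mean of the x column
y_bar = sum(y) / n   # mean of the y column

# beta = Cov(x, y) / Var(x); the 1/n in each cancels, so it is omitted.
beta = (sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
        / sum((xi - x_bar) ** 2 for xi in x))
print(beta)
```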

So, this whole expression is the value of your beta. Once beta is computed, alpha is nothing but y_bar minus beta times x_bar:

alpha = y_bar - beta * x_bar

where y_bar and x_bar are the mean values of the respective columns, that is, the mean of the y column and the mean of the x column.
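Putting both formulas together, here is a sketch that computes beta and then alpha, and cross-checks the pair against NumPy’s least-squares fit (numpy.polyfit with degree 1 returns the slope first, then the intercept):

```python
import numpy as np

# Same made-up data as in the earlier sketches.
x = np.array([500, 750, 1000, 1250, 1500], dtype=float)
y = np.array([60, 95, 120, 160, 175], dtype=float)

beta = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
alpha = y.mean() - beta * x.mean()   # alpha = y_bar - beta * x_bar
print(alpha, beta)

# Cross-check: polyfit of degree 1 fits the same least-squares line.
print(np.polyfit(x, y, 1))  # [beta, alpha]
```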
