This repository shows how outliers affect the best-fit linear regression line and how regularization can mitigate the outlier problem.
Linear regression is a machine learning algorithm based on supervised learning. It predicts the value of one variable from the value of another. The variable you want to predict is called the dependent variable; the variable you use to predict it is called the independent variable. A best-fit line is drawn and the error can be calculated for it. Here, the squared error (squared loss) is computed for every data point: it is simply the squared difference between the actual value and the value predicted by the line.
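The idea above can be sketched in a few lines. This is a minimal example with made-up data (not the repository's code), using scikit-learn's `LinearRegression` to fit the best-fit line and NumPy to compute the squared loss per point:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])  # independent variable
y = np.array([2.1, 4.0, 6.2, 7.9, 10.1])           # dependent variable

model = LinearRegression().fit(X, y)  # draws the best-fit line
y_pred = model.predict(X)             # predicted value for each data point

squared_errors = (y - y_pred) ** 2    # squared loss per data point
total_loss = squared_errors.sum()     # sum of squared errors for the line
print(squared_errors, total_loss)
```

Minimizing this sum of squared errors over all data points is exactly what determines the best-fit line.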
An outlier is an individual data point that lies far from the other points in the dataset. It is an anomaly that may be caused by a range of errors in capturing, processing, or manipulating data, and a dataset can contain multiple outliers. An outlier can also be informative, so outliers should be analyzed before deciding to remove them.
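One common heuristic for spotting such distant points (an illustration, not the method used in this repository) is to flag values whose z-score exceeds a threshold, and then inspect them rather than delete them automatically:

```python
import numpy as np

data = np.array([10.0, 11.0, 9.5, 10.5, 10.2, 45.0])  # 45.0 lies far from the rest

# z-score: how many standard deviations each point is from the mean
z_scores = (data - data.mean()) / data.std()
flagged = data[np.abs(z_scores) > 2]  # the threshold of 2 is a judgment call

print(flagged)  # inspect the flagged points before deciding to remove them
```

The flagged point may be a data-entry error, but it may equally be a genuine and informative observation, which is why inspection comes before removal.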
In the diagram above, we can see that as the number of outliers increases, the best-fit regression line bends towards the outlier data points. This is the adverse effect of outliers on the regression.
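This bending effect can be reproduced numerically. The sketch below (synthetic data, not the repository's dataset) fits a plain unregularized line before and after injecting a few high outliers at small x values, and compares the fitted slopes:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = np.arange(1, 21, dtype=float).reshape(-1, 1)
y = 2.0 * X.ravel() + rng.normal(0, 0.5, size=20)  # roughly y = 2x plus noise

clean_slope = LinearRegression().fit(X, y).coef_[0]

# Inject high outliers at small x: they drag the left end of the line
# upwards, which flattens the fitted slope.
X_out = np.vstack([X, [[1.0], [2.0], [3.0]]])
y_out = np.concatenate([y, [60.0, 65.0, 70.0]])
outlier_slope = LinearRegression().fit(X_out, y_out).coef_[0]

print(clean_slope, outlier_slope)  # the slope shifts noticeably
```

Three bad points out of twenty-three are enough to change the slope substantially, which is the behaviour the diagrams illustrate.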
When alpha = 0.0001
Because the alpha value is very small, the regression line bends towards the outliers as their number increases. With 1 or 2 outliers the line stays far away from them, but as more outliers are added the line shifts towards them, effectively treating them as non-outlier points.
When alpha = 1
Even after increasing alpha from 0.0001 to 1, the line behaves exactly as it did for alpha = 0.0001.
When alpha = 100
With alpha set to 100, the regression line becomes more robust to outliers (it stays farther away from them). As the number of outliers increases, the line is still affected, but considerably less than at smaller values of alpha.
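The three experiments can be sketched together as follows. This assumes a scikit-learn `Ridge` model (the repository's exact model is an assumption here) fitted on the same outlier-contaminated synthetic data with the three alpha values discussed above:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = np.arange(1, 21, dtype=float).reshape(-1, 1)
y = 2.0 * X.ravel() + rng.normal(0, 0.5, size=20)

# Inject high outliers at small x that drag the unregularized line off course.
X = np.vstack([X, [[1.0], [2.0], [3.0]]])
y = np.concatenate([y, [60.0, 65.0, 70.0]])

slopes = {}
for alpha in (0.0001, 1, 100):
    slopes[alpha] = Ridge(alpha=alpha).fit(X, y).coef_[0]
    print(alpha, slopes[alpha])

# Larger alpha shrinks the slope more, so the outliers pull the line less.
```

Ridge shrinkage dampens how far the outliers can pull the coefficients, which matches the observation that the alpha = 100 line moves less than the alpha = 0.0001 and alpha = 1 lines, although it does not eliminate the effect entirely.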