Demystifying the logic behind Lasso, Ridge and Linear Regression

Nikhil Verma
7 min read · Sep 16, 2020

Machine learning is interesting because developing our understanding of machine learning entails developing our understanding of the principles that underlie intelligence.

And we know that learning is our means of attaining the ability to perform a task. The task we will discuss here is one of the simplest and most interesting ones, called Regression. We will cover ordinary Linear Regression along with its regularized flavors, Lasso and Ridge.

Some of the most common machine learning tasks include the following:

  1. Classification:
  • In this type of task, the computer program is asked to specify which of k categories some input belongs to.
  • To solve this task, the learning algorithm is usually asked to produce a function f : ℝⁿ → {1, . . . , k}.

  2. Regression:
  • In this type of task, the computer program is asked to predict a numerical value given some input.
  • To solve this task, the learning algorithm is asked to output a function f : ℝⁿ → ℝ.
  • This type of task is similar to classification, except that the format of the output is different (see the short sketch after this list).
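To make the distinction concrete, here is a minimal sketch with made-up toy data, using scikit-learn (an assumption of this example, not something prescribed above): a classifier maps inputs in ℝⁿ to a discrete label, while a regressor maps them to a real number.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

X = np.array([[1.0], [2.0], [3.0], [4.0]])   # inputs in R^n (here n = 1)

# Classification: learn f : R^n -> {0, 1}
y_class = np.array([0, 0, 1, 1])             # discrete category labels
clf = LogisticRegression().fit(X, y_class)
print(clf.predict([[2.5]]))                  # outputs a category

# Regression: learn f : R^n -> R
y_real = np.array([1.1, 1.9, 3.2, 3.9])      # real-valued targets
reg = LinearRegression().fit(X, y_real)
print(reg.predict([[2.5]]))                  # outputs a real number
```

The only thing that changes between the two calls is the type of target we hand to the learner, which is exactly the point: the task is defined by the format of the output.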

Let’s start by talking about a few examples of supervised learning problems:

  • Suppose we have a dataset giving the living areas and prices of 47 houses from Portland, Oregon (a toy sketch of this setup follows below).
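The sketch below shows what fitting such a dataset looks like in scikit-learn. The (area, price) numbers are made up for illustration (the actual 47-house data is not reproduced here), and the alpha values are arbitrary; it simply fits plain Linear Regression alongside the Ridge and Lasso variants discussed later in this post.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge, Lasso

# Hypothetical (living area, price) pairs standing in for the 47-house dataset
area = np.array([[2104.0], [1600.0], [2400.0], [1416.0], [3000.0]])  # sq. ft.
price = np.array([400.0, 330.0, 369.0, 232.0, 540.0])                # price in $1000s

# Fit the three models covered in this post and predict for an 1800 sq. ft. house
for model in (LinearRegression(), Ridge(alpha=1.0), Lasso(alpha=1.0)):
    model.fit(area, price)
    print(type(model).__name__, model.predict([[1800.0]]))
```

All three models learn a function from living area to price; where they differ is in how they penalize the learned coefficients, which is the subject of the rest of this post.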
