KNN Classification Algorithm
What’s in it for you?
Why do we need KNN?
What is KNN?
How do we choose the factor ‘K’?
When do we use KNN?
How does KNN Algorithm work?
Use Case: Predict whether a person will have diabetes or not
Why KNN?
By now, we all know that machine learning models make predictions by learning from the past data available.
Input value → Machine Learning Model → Predicted Output
Is that a dog?
No dear, you can differentiate between a cat and a dog based on their characteristics.
CATS: Sharp claws, used to climb; smaller length of ears; meows and purrs; doesn't love to play around
DOGS: Dull claws; bigger length of ears; barks; loves to run around
[Scatter plot: cats and dogs plotted by sharpness of claws (x-axis) vs. length of ears (y-axis)]
Now tell me if it's a cat or a dog?
[The same scatter plot of cats and dogs, now with the unknown animal plotted]
Its features are more like a cat's, so it must be a cat!
Why KNN?
Because KNN is based on feature similarity, we can do classification using the KNN classifier!
Input value → KNN → Predicted Output
What is KNN?
What is KNN Algorithm?
KNN, or K-Nearest Neighbors, is one of the simplest supervised machine learning algorithms, mostly used for classification. It classifies a data point based on how its neighbors are classified.
What is KNN Algorithm?
KNN stores all available cases and classifies new cases based on a similarity measure.
[Scatter plot: red and white wines plotted by sulphur dioxide level (x-axis) vs. chloride level (y-axis); an unlabeled point asks: RED or WHITE?]
What is KNN Algorithm?
But, what is K?
What is KNN Algorithm?
k in KNN is a parameter that refers to the number of nearest neighbors to include in the majority voting process.
[The same wine scatter plot with K=5: the five nearest neighbors of the unknown point are circled]
What is KNN Algorithm?
With k=5, a data point is classified by a majority vote of its five nearest neighbors.
What is KNN Algorithm?
Here, the unknown point would be classified as red, since 4 out of 5 of its nearest neighbors are red.
How do we choose ‘k’?
How do we choose the factor ‘k’?
[Scatter plot: an unlabeled point '?' among two classes of points]
KNN is based on feature similarity: choosing the right value of k is a process called parameter tuning, and it is important for better accuracy.
So at k=3, the majority of the three nearest neighbors puts '?' in one class. But at k=7, the majority of the seven nearest neighbors puts it in the other class. The class of the unknown data point changed between k=3 and k=7, so which k should we choose?
How do we choose the factor 'k'?
To choose a value of k:
• An odd value of k is selected to avoid a tie between two classes of data
• k ≈ sqrt(n), where n is the total number of data points, is a common starting point
A higher value of k averages over more neighbors and so has a lower chance of error from noise, though too large a k can blur the boundary between classes.
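As a minimal sketch of those two rules of thumb (start from sqrt(n), then round to an odd value); the helper name suggest_k is ours, not from the slides:

```python
import math

def suggest_k(n: int) -> int:
    """Hypothetical helper: sqrt(n), rounded and forced to be odd."""
    k = max(1, round(math.sqrt(n)))
    return k if k % 2 == 1 else k + 1   # odd k avoids two-class ties

print(suggest_k(10))   # sqrt(10) ≈ 3.16 -> k = 3, as used in the example later
```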
When do we use KNN?
When do we use KNN Algorithm?
We can use KNN when:
• Data is labeled (e.g., an image labeled 'Dog')
• Data is noise free
• The dataset is small, because KNN is a 'lazy learner', i.e., it doesn't learn a discriminative function from the training set, and so becomes slow on large datasets

A noisy dataset, by contrast, looks like this, with garbage values in the Class column:

Weight (x2)   Height (y2)   Class
51            167           Underweight
62            182           one-fourty    <- noise
69            176           23            <- noise
64            173           hello kitty   <- noise
65            172           Normal
How does KNN Algorithm work?
Consider a dataset having two variables, height (cm) and weight (kg), where each point is classified as Normal or Underweight:
Weight (x2)   Height (y2)   Class
51            167           Underweight
62            182           Normal
69            176           Normal
64            173           Normal
65            172           Normal
56            174           Underweight
58            169           Normal
57            173           Normal
55            170           Normal
How does KNN Algorithm work?
On the basis of the given data, we have to classify the point below as Normal or Underweight using KNN (assuming we don't know how to calculate BMI!):
57 kg, 170 cm → ?
How does KNN Algorithm work?
To find the nearest neighbors, we will calculate the Euclidean distance.
But what is Euclidean distance?
How does KNN Algorithm work?
According to the Euclidean distance formula, the distance between two points in the plane with coordinates (x, y) and (a, b) is given by:
dist(d) = √((x − a)² + (y − b)²)
How does KNN Algorithm work?
Let's calculate it to understand clearly, taking the unknown data point (57, 170) and three of the known points:

dist(d1) = √((170 − 167)² + (57 − 51)²) ≈ 6.7
dist(d2) = √((170 − 182)² + (57 − 62)²) = 13
dist(d3) = √((170 − 176)² + (57 − 69)²) ≈ 13.4

Similarly, we will calculate the Euclidean distance of the unknown data point from all the points in the dataset.
How does KNN Algorithm work?
Hence, we have calculated the Euclidean distance of the unknown data point (x1, y1) = (57, 170), whose class we have to determine, from all the points as shown:
Weight (x2)   Height (y2)   Class         Euclidean Distance
51            167           Underweight   6.7
62            182           Normal        13
69            176           Normal        13.4
64            173           Normal        7.6
65            172           Normal        8.2
56            174           Underweight   4.1
58            169           Normal        1.4
57            173           Normal        3
55            170           Normal        2
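As a quick check, here is a short Python sketch (ours, not from the deck) that reproduces the distance column above:

```python
import math

# The nine labeled points from the table: (weight, height, class)
points = [(51, 167, 'Underweight'), (62, 182, 'Normal'),
          (69, 176, 'Normal'),      (64, 173, 'Normal'),
          (65, 172, 'Normal'),      (56, 174, 'Underweight'),
          (58, 169, 'Normal'),      (57, 173, 'Normal'),
          (55, 170, 'Normal')]
x1, y1 = 57, 170   # the unknown data point

for x2, y2, label in points:
    d = math.sqrt((x1 - x2) ** 2 + (y1 - y2) ** 2)
    print(f'{label:12s} {d:.1f}')   # matches the Euclidean Distance column
```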
How does KNN Algorithm work?
Now, let's find the nearest neighbors at k = 3 for the unknown point (57 kg, 170 cm):
We have n = 10, and sqrt(10) ≈ 3.1; hence, we have taken k = 3.
How does KNN Algorithm work?
So, the majority of the neighbors point towards 'Normal'. Hence, as per the KNN algorithm, the class of (57, 170) should be 'Normal'.
k = 3
Class         Euclidean Distance
Underweight   6.7
Normal        13
Normal        13.4
Normal        7.6
Normal        8.2
Underweight   4.1
Normal        1.4   <- 1st nearest
Normal        3     <- 3rd nearest
Normal        2     <- 2nd nearest
Recap of KNN
• A positive integer k is specified, along with a new sample
• We select the k entries in our database which are closest to the new sample
• We find the most common classification of these entries
• This is the classification we give to the new sample
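Those four steps translate almost directly into code. Below is a minimal from-scratch sketch (ours, not from the deck; the use case that follows uses scikit-learn instead), assuming 2-D points:

```python
import math
from collections import Counter

def knn_classify(train, new_point, k):
    """train: list of ((x, y), label) pairs. Returns the majority label
    among the k entries closest to new_point."""
    nearest = sorted(train, key=lambda p: math.dist(p[0], new_point))[:k]
    labels = [label for _, label in nearest]          # classes of the k closest
    return Counter(labels).most_common(1)[0][0]       # most common classification

train = [((51, 167), 'Underweight'), ((62, 182), 'Normal'),
         ((69, 176), 'Normal'),      ((64, 173), 'Normal'),
         ((65, 172), 'Normal'),      ((56, 174), 'Underweight'),
         ((58, 169), 'Normal'),      ((57, 173), 'Normal'),
         ((55, 170), 'Normal')]
print(knn_classify(train, (57, 170), k=3))   # -> 'Normal'
```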
USE CASE: Predict Diabetes
KNN - Predict diabetes
Objective: Predict whether a person will be diagnosed with diabetes or not.
We have a dataset of 768 people who were or were not diagnosed with diabetes.
KNN - Predict diabetes
Import the required Scikit-learn libraries as shown:
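The slide's code screenshot didn't survive extraction; here is a plausible set of imports for the steps that follow (a sketch, not the slide's exact code):

```python
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix, f1_score, accuracy_score
```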
KNN - Predict diabetes
Load the dataset and have a look:
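Something along these lines, assuming the data lives in a CSV file (the filename 'diabetes.csv' is our assumption):

```python
dataset = pd.read_csv('diabetes.csv')   # assumed filename
print(len(dataset))                     # expect 768 rows
print(dataset.head())                   # first few rows for a quick look
```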
KNN - Predict diabetes
Values in columns like 'Glucose' and 'BloodPressure' cannot be accepted as zeroes, because such zeroes are invalid readings and would affect the outcome. We can replace them with the mean of the respective column:
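A sketch of that replacement; the slide names only 'Glucose' and 'BloodPressure', so the other columns in the list are our guess at where zeros are invalid:

```python
# Columns where a zero is physiologically impossible, i.e. a missing value
zero_not_accepted = ['Glucose', 'BloodPressure', 'SkinThickness',
                     'BMI', 'Insulin']

for column in zero_not_accepted:
    dataset[column] = dataset[column].replace(0, np.nan)
    mean = dataset[column].mean(skipna=True)   # mean of the valid values
    dataset[column] = dataset[column].fillna(mean)
```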
KNN - Predict diabetes
Before proceeding further, let’s split the dataset into train and test:
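For example (the 80/20 ratio and random_state are our choices, not stated on the slide):

```python
X = dataset.iloc[:, 0:8]   # the 8 feature columns
y = dataset.iloc[:, 8]     # the 'Outcome' label column

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)
```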
KNN - Predict diabetes
Feature Scaling: most of the time, your dataset will contain features that vary widely in magnitude, units, and range, so we need to bring them all to the same level of magnitude. Rule of thumb: for any algorithm that computes distances or assumes normality, scale your features!
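A standard way to do this with scikit-learn: fit the scaler on the training set only, then apply the same transform to the test set:

```python
sc_X = StandardScaler()
X_train = sc_X.fit_transform(X_train)   # learn mean/std from training data
X_test = sc_X.transform(X_test)         # reuse them on the test data
```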
KNN - Predict diabetes
Then define the model using KNeighborsClassifier and fit the training data to it.
n_neighbors here is 'k'.
p is the power parameter that defines the distance metric; p = 2 gives Euclidean distance, which is what we use here.
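A sketch of that step. The slide's exact k isn't in the extracted text; with roughly 614 training rows, sqrt(614) ≈ 24.8, and a smaller odd value such as 11 is plausible; treat it as an assumption:

```python
classifier = KNeighborsClassifier(n_neighbors=11,      # this is 'k' (assumed value)
                                  p=2,                 # power parameter
                                  metric='euclidean')  # Euclidean distance
classifier.fit(X_train, y_train)
```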
KNN - Predict diabetes
There are also other metrics to measure distance, such as Manhattan distance, Minkowski distance, etc.
KNN - Predict diabetes
Let’s predict the test results:
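A one-line sketch, continuing the variable names assumed above:

```python
y_pred = classifier.predict(X_test)   # predicted class (0/1) per test row
```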
KNN - Predict diabetes
It's important to evaluate the model; let's use a confusion matrix to do that. Alongside it, we can also report the F1 score, the harmonic mean of precision and recall, which is best at 1 (perfect precision and recall) and worst at 0:
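For example:

```python
cm = confusion_matrix(y_test, y_pred)   # rows: actual, columns: predicted
print(cm)
print(f1_score(y_test, y_pred))         # balances precision and recall
```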
KNN - Predict diabetes
Calculate the accuracy of the model:
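Continuing the same sketch:

```python
print(accuracy_score(y_test, y_pred))   # fraction of correct test predictions
```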
KNN - Predict diabetes
So, we have created a model using KNN which can predict whether a person will have diabetes or not.
KNN - Predict diabetes
And an accuracy of 80% tells us that the model is a pretty fair fit!
Summary
Why do we need KNN?
How does KNN work?
Euclidean distance
Choosing the value of k
KNN classifier for diabetes prediction