Evaluating Logistic Regression Models - Lab

Introduction

In regression, you are predicting continuous values, so it makes sense to discuss error as a distance: how far off the estimates are. When classifying a binary variable, however, a prediction is either correct or incorrect. As a result, we quantify performance in terms of how many false positives versus false negatives the model produces, and we examine a few specific measurements when evaluating a classification algorithm. In this lab, you'll review precision, recall, accuracy, and F1-score and use them to evaluate your logistic regression models.

Objectives

You will be able to:

  • Understand and assess precision, recall, and accuracy of classifiers
  • Evaluate classification models using various metrics

Terminology Review

Let's take a moment and review some classification evaluation metrics:

$Precision = \frac{\text{Number of True Positives}}{\text{Number of Predicted Positives}}$

$Recall = \frac{\text{Number of True Positives}}{\text{Number of Actual Total Positives}}$

$Accuracy = \frac{\text{Number of True Positives + True Negatives}}{\text{Total Observations}}$

$\text{F1-Score} = 2\,\frac{Precision \times Recall}{Precision + Recall}$

At times, it may be preferable to tune a classification algorithm to optimize for precision or recall rather than overall accuracy. For example, imagine predicting whether or not a patient is at risk for cancer and should be brought in for additional testing. In cases such as this, we often want to cast a slightly wider net: it is preferable to optimize for recall (the fraction of truly cancer-positive patients we catch) than for precision (the fraction of our predicted cancer-risk patients who are indeed positive).

1. Split the data into train and test sets

import pandas as pd
df = pd.read_csv('heart.csv')
#Your code here
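
A minimal sketch of one possible split, assuming the label column in heart.csv is named target (as in the standard heart disease dataset) and holding out 25% of the rows for testing:

from sklearn.model_selection import train_test_split

# Separate the features from the binary label
y = df['target']
X = df.drop(columns=['target'])

# Hold out 25% of the data for evaluation; fix the seed for reproducibility
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)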

2. Create a standard logistic regression model

#Your code here
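
One possible model, mirroring the fit_intercept=False, C=1e12 settings used in the skeleton code later in this lab; a very large C makes the regularization penalty negligible, and the liblinear solver is an assumption here (any solver that handles this small dataset works):

from sklearn.linear_model import LogisticRegression

# C=1e12 effectively disables regularization
logreg = LogisticRegression(fit_intercept=False, C=1e12, solver='liblinear')
model_log = logreg.fit(X_train, y_train)

# Predictions for both splits, used by the metric functions below
y_hat_train = logreg.predict(X_train)
y_hat_test = logreg.predict(X_test)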

3. Write a function to calculate the precision

def precision(y_hat, y):
    #Your code here
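
One possible implementation, translating the precision formula above directly; it assumes y_hat and y are equal-length iterables of 0/1 labels and ignores the zero-predicted-positives edge case:

def precision(y_hat, y):
    # True positives: predicted positive and actually positive
    tp = sum(1 for yh, yt in zip(y_hat, y) if yh == 1 and yt == 1)
    # All predicted positives
    predicted_pos = sum(1 for yh in y_hat if yh == 1)
    return tp / predicted_pos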

4. Write a function to calculate the recall

def recall(y_hat, y):
    #Your code here
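
Analogously for recall, dividing true positives by the number of actual positives:

def recall(y_hat, y):
    # True positives: predicted positive and actually positive
    tp = sum(1 for yh, yt in zip(y_hat, y) if yh == 1 and yt == 1)
    # All actual positives in the ground truth
    actual_pos = sum(1 for yt in y if yt == 1)
    return tp / actual_pos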

5. Write a function to calculate the accuracy

def accuracy(y_hat, y):
    #Your code here
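
One way to compute accuracy, counting agreements between predictions and labels:

def accuracy(y_hat, y):
    # Fraction of predictions that match the true labels
    correct = sum(1 for yh, yt in zip(y_hat, y) if yh == yt)
    return correct / len(y)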

6. Write a function to calculate the F1-score

def f1_score(y_hat,y):
    #Your code here
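
The F1-score is then the harmonic mean of the two functions above:

def f1_score(y_hat, y):
    p = precision(y_hat, y)
    r = recall(y_hat, y)
    # Harmonic mean of precision and recall
    return 2 * (p * r) / (p + r)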

7. Calculate the precision, recall, accuracy, and F1-score of your classifier.

Do this for both the train and the test sets.

#Your code here
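
For example, reusing the predictions from step 2 and the metric functions above:

for name, y_hat, y_true in [('Train', y_hat_train, y_train), ('Test', y_hat_test, y_test)]:
    print(f'{name} precision: {precision(y_hat, y_true):.4f}')
    print(f'{name} recall:    {recall(y_hat, y_true):.4f}')
    print(f'{name} accuracy:  {accuracy(y_hat, y_true):.4f}')
    print(f'{name} F1-score:  {f1_score(y_hat, y_true):.4f}')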

Great Job! Now it's time to check your work with sklearn.

8. Calculating Metrics with sklearn

Each of the metrics we calculated above is also available inside the sklearn.metrics module.

In the cell below, import the following functions:

  • precision_score
  • recall_score
  • accuracy_score
  • f1_score

Compare the results of your performance metrics functions with the sklearn functions above. Calculate these values for both your train and test set.

#Your code here
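
One way to line the two up; sklearn's f1_score is aliased here so it does not clobber the hand-written function above, and note that the sklearn metrics take arguments in (y_true, y_pred) order:

from sklearn.metrics import precision_score, recall_score, accuracy_score
from sklearn.metrics import f1_score as skl_f1_score

for name, y_hat, y_true in [('Train', y_hat_train, y_train), ('Test', y_hat_test, y_test)]:
    print(f'{name} precision: {precision(y_hat, y_true):.4f} vs sklearn {precision_score(y_true, y_hat):.4f}')
    print(f'{name} recall:    {recall(y_hat, y_true):.4f} vs sklearn {recall_score(y_true, y_hat):.4f}')
    print(f'{name} accuracy:  {accuracy(y_hat, y_true):.4f} vs sklearn {accuracy_score(y_true, y_hat):.4f}')
    print(f'{name} F1-score:  {f1_score(y_hat, y_true):.4f} vs sklearn {skl_f1_score(y_true, y_hat):.4f}')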

9. Comparing Precision, Recall, Accuracy, and F1-Score of Test vs Train Sets

Calculate and then plot the precision, recall, accuracy, and F1-score for the test and train splits using different training set sizes. What do you notice?

import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
%matplotlib inline
training_Precision = []
testing_Precision = []
training_Recall = []
testing_Recall = []
training_Accuracy = []
testing_Accuracy = []
training_F1 = []
testing_F1 = []

for i in range(10,95):
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size= None) #replace the "None" here
    logreg = LogisticRegression(fit_intercept = False, C = 1e12)
    model_log = None
    y_hat_test = None
    y_hat_train = None

# Your code here
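
One possible completion of the loop, interpreting i as the test-set percentage so the training set shrinks from 90% to 6% of the data across iterations:

for i in range(10, 95):
    # Hold out i% of the data for testing
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=i/100)
    logreg = LogisticRegression(fit_intercept=False, C=1e12, solver='liblinear')
    model_log = logreg.fit(X_train, y_train)
    y_hat_test = logreg.predict(X_test)
    y_hat_train = logreg.predict(X_train)

    training_Precision.append(precision(y_hat_train, y_train))
    testing_Precision.append(precision(y_hat_test, y_test))
    training_Recall.append(recall(y_hat_train, y_train))
    testing_Recall.append(recall(y_hat_test, y_test))
    training_Accuracy.append(accuracy(y_hat_train, y_train))
    testing_Accuracy.append(accuracy(y_hat_test, y_test))
    training_F1.append(f1_score(y_hat_train, y_train))
    testing_F1.append(f1_score(y_hat_test, y_test))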

Create four scatter plots: test and train precision in the first, test and train recall in the second, test and train accuracy in the third, and test and train F1-score in the fourth. A sketch follows the placeholder comments below.

# code for test and train precision
# code for test and train recall
# code for test and train accuracy
# code for test and train F1-score
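
A sketch of the four plots, putting the test-set fraction on the x-axis to match the loop above:

test_sizes = [i/100 for i in range(10, 95)]

for train_vals, test_vals, name in [
        (training_Precision, testing_Precision, 'Precision'),
        (training_Recall, testing_Recall, 'Recall'),
        (training_Accuracy, testing_Accuracy, 'Accuracy'),
        (training_F1, testing_F1, 'F1-score')]:
    plt.figure()
    plt.scatter(test_sizes, train_vals, label=f'Train {name}')
    plt.scatter(test_sizes, test_vals, label=f'Test {name}')
    plt.xlabel('Test set fraction')
    plt.ylabel(name)
    plt.legend()
    plt.show()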

Summary

Nice! In this lab, you gained some extra practice with evaluation metrics for classification algorithms. You also got further Python practice by coding these functions manually, giving you a deeper understanding of how they work. Going forward, continue to think about scenarios in which you might prefer to optimize one of these metrics over another.
