Curated Resource

# Beyond Accuracy: Precision and Recall – Towards Data Science

## My notes

## Read the Full Post

These notes were curated from the full post:
towardsdatascience.com/beyond-accuracy-precision-and-recall-3da06bea9f6c.


The metric our intuition tells us we should maximize is known in statistics as recall: the number of true positives divided by the number of true positives plus the number of false negatives. ... true positives are correctly identified terrorists, and false negatives are individuals the model labels as not terrorists that actually were terrorists... recall measures a model’s ability to find all the data points of interest in a dataset...
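As a quick sketch of the definition above (not from the post; the counts are made up for illustration), recall follows directly from the confusion counts:

```python
def recall(tp: int, fn: int) -> float:
    """Recall = TP / (TP + FN): the fraction of actual positives the model found."""
    return tp / (tp + fn)

# Illustrative counts: 95 terrorists correctly flagged, 5 missed
print(recall(tp=95, fn=5))  # 0.95
```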

When we increase the recall, we decrease the precision... defined as the number of true positives divided by the number of true positives plus the number of false positives... false positives are individuals the model classifies as terrorists that are not... While recall expresses the ability to find all relevant instances in a dataset, precision expresses the proportion of the data points our model says were relevant that actually were relevant...
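The trade-off can be seen by sweeping a decision threshold over predicted scores; this is a minimal sketch with made-up labels and scores, not code from the post:

```python
def precision_recall(y_true, y_score, threshold):
    """Compute (precision, recall) for a given decision threshold."""
    tp = sum(1 for y, s in zip(y_true, y_score) if s >= threshold and y == 1)
    fp = sum(1 for y, s in zip(y_true, y_score) if s >= threshold and y == 0)
    fn = sum(1 for y, s in zip(y_true, y_score) if s < threshold and y == 1)
    return tp / (tp + fp), tp / (tp + fn)

y_true  = [1, 1, 1, 0, 0, 1, 0, 0]          # illustrative ground-truth labels
y_score = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2]  # illustrative model scores

# Strict threshold: precision 1.0, recall 0.75
print(precision_recall(y_true, y_score, 0.65))
# Lenient threshold: recall 1.0, but precision drops to 4/6
print(precision_recall(y_true, y_score, 0.35))
```

Lowering the threshold catches every positive (recall rises) but also flags more negatives (precision falls), which is exactly the trade-off described above.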

Where we want to find an optimal blend of precision and recall, we can combine the two metrics using what is called the F1 score... We use the harmonic mean instead of a simple average because it punishes extreme values... The F1 score gives equal weight to both measures...
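A small sketch (values chosen for illustration) shows why the harmonic mean punishes an extreme imbalance where a simple average would not:

```python
def f1_score(precision: float, recall: float) -> float:
    """F1 = harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Extreme case: perfect precision but very low recall
p, r = 1.0, 0.1
print((p + r) / 2)        # simple average: 0.55, looks deceptively healthy
print(f1_score(p, r))     # F1: about 0.18, reflects the weak recall
```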

the ROC curve shows how the trade-off between true positives and false positives changes as we vary the threshold for identifying a positive in our model... plots the true positive rate on the y-axis versus the false positive rate on the x-axis. The true positive rate (TPR) is the recall and the false positive rate (FPR) is the probability of a false alarm... the total Area Under the Curve (AUC)... a higher number indicates better classification performance.
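A minimal sketch of the idea, assuming the same illustrative labels and scores as before: sweep the threshold to get (FPR, TPR) points, then integrate with the trapezoidal rule to approximate AUC.

```python
def roc_points(y_true, y_score, thresholds):
    """Return (FPR, TPR) pairs as the decision threshold varies."""
    pos = sum(y_true)
    neg = len(y_true) - pos
    points = []
    for t in thresholds:
        tp = sum(1 for y, s in zip(y_true, y_score) if s >= t and y == 1)
        fp = sum(1 for y, s in zip(y_true, y_score) if s >= t and y == 0)
        points.append((fp / neg, tp / pos))
    return points

def auc(points):
    """Trapezoidal area under the ROC curve."""
    pts = sorted(points)
    area = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        area += (x1 - x0) * (y0 + y1) / 2
    return area

y_true  = [1, 1, 1, 0, 0, 1, 0, 0]
y_score = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2]
# Thresholds span above the max score (0, 0) down to the min score (1, 1)
thresholds = [1.1, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2]
print(auc(roc_points(y_true, y_score, thresholds)))  # 0.875
```

An AUC of 0.875 here matches the rank interpretation: it is the probability that a randomly chosen positive scores higher than a randomly chosen negative.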
