  * [[https://machinelearningmastery.com/roc-curves-and-precision-recall-curves-for-classification-in-python/|How to Use ROC Curves and Precision-Recall Curves for Classification in Python]]
  * {{ :clase:iabd:pia:2eval:predicting_receiver_operating_characteristic_curve_area_under_curve_and_arithmetic_means_of_accuracies_based_on_the_distribution_of_data_samples.pdf |Predicting Receiver Operating Characteristic curve, area under curve, and arithmetic means of accuracies based on the distribution of data samples}}
  * Computing the best threshold:
    * [[https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5470053/|Defining an Optimal Cut-Point Value in ROC Analysis: An Alternative Approach]]
    * [[https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5082211/|On determining the most appropriate test cut-off value: the case of tests with continuous results]]
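A common cut-point criterion discussed in the papers above is Youden's J statistic (Sensitivity + Specificity - 1): the best threshold is the one that maximizes J along the ROC curve. A minimal sketch in plain Python; the scores and labels are made-up illustrative values, not data from the course:

```python
# Made-up scores from a hypothetical binary classifier.
y_true  = [0, 0, 0, 0, 1, 0, 1, 1, 0, 1]
y_score = [0.1, 0.2, 0.3, 0.35, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]

def roc_points(y_true, y_score, thresholds):
    """One ROC point (FPR, TPR) per threshold, predicting positive when score >= t."""
    pos = sum(y_true)
    neg = len(y_true) - pos
    pts = []
    for t in thresholds:
        tp = sum(1 for y, s in zip(y_true, y_score) if s >= t and y == 1)
        fp = sum(1 for y, s in zip(y_true, y_score) if s >= t and y == 0)
        pts.append((fp / neg, tp / pos))  # (1 - specificity, sensitivity)
    return pts

thresholds = sorted(set(y_score))
points = roc_points(y_true, y_score, thresholds)

# Youden's J = sensitivity + specificity - 1 = TPR - FPR; keep the threshold that maximizes it.
best_threshold, (best_fpr, best_tpr) = max(
    zip(thresholds, points), key=lambda tp: tp[1][1] - tp[1][0]
)
print("best threshold:", best_threshold)  # -> best threshold: 0.4
```

With ''scikit-learn'' the same idea is usually written with ''sklearn.metrics.roc_curve'', which returns the FPR, TPR and threshold arrays directly.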
  
  
<note tip>
There is another curve which, instead of plotting Sensitivity vs. (1 - Specificity), plots Precision vs. Sensitivity (known in English as the Precision-Recall curve); it is used when the data have low prevalence.
It is also related to the F1-score, since the F1-score is computed precisely from Sensitivity and Precision.

  * [[https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4349800/|The Precision-Recall Plot Is More Informative than the ROC Plot When Evaluating Binary Classifiers on Imbalanced Datasets]]
  * {{ :clase:iabd:pia:2eval:roc_graphs_notes_and_practical_considerations_for_researchers.pdf |ROC Graphs: Notes and Practical Considerations for Researchers}}
  * [[https://juandelacalle.medium.com/how-and-why-i-switched-from-the-roc-curve-to-the-precision-recall-curve-to-analyze-my-imbalanced-6171da91c6b8|How and Why I Switched from the ROC Curve to the Precision-Recall Curve to Analyze My Imbalanced Models: A Deep Dive]]
  * [[https://en.wikipedia.org/wiki/Mann%E2%80%93Whitney_U_test#Area-under-curve_(AUC)_statistic_for_ROC_curves|Area-under-curve (AUC) statistic for ROC curves]]
  * F1-score and ROC:
    * [[https://neptune.ai/blog/f1-score-accuracy-roc-auc-pr-auc|F1 Score vs ROC AUC vs Accuracy vs PR AUC: Which Evaluation Metric Should You Choose?]]
    * [[https://stackoverflow.com/questions/44172162/f1-score-vs-roc-auc|F1 Score vs ROC AUC]]
</note>
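The relation mentioned in the tip is easy to verify: the F1-score is exactly the harmonic mean of Precision and Sensitivity (recall). A short sketch in plain Python; the labels and predictions are made-up illustrative values:

```python
def precision_recall_f1(y_true, y_pred):
    """Precision (PPV), recall (sensitivity) and their harmonic mean, the F1-score."""
    tp = sum(1 for y, p in zip(y_true, y_pred) if y == 1 and p == 1)
    fp = sum(1 for y, p in zip(y_true, y_pred) if y == 0 and p == 1)
    fn = sum(1 for y, p in zip(y_true, y_pred) if y == 1 and p == 0)
    precision = tp / (tp + fp)  # PPV
    recall = tp / (tp + fn)     # sensitivity
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Made-up example: 8 samples, 4 of them positive.
y_true = [1, 0, 1, 1, 0, 0, 0, 1]
y_pred = [1, 0, 1, 0, 0, 1, 0, 1]
precision, recall, f1 = precision_recall_f1(y_true, y_pred)
print(precision, recall, f1)  # -> 0.75 0.75 0.75
```

Note that the true negatives never appear in the formula, which is why the Precision-Recall curve is less distorted than the ROC curve when negatives vastly outnumber positives.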
===== Combining Two Derived Metrics =====
The 4 derived metrics are PPV, NPV, FDR and FOR.
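As a quick sketch of how the four derived metrics come out of a confusion matrix: PPV and FDR share the predicted-positive column, NPV and FOR the predicted-negative one, so PPV + FDR = 1 and NPV + FOR = 1. The TP/FP/FN/TN counts below are made up for illustration:

```python
def derived_metrics(tp, fp, fn, tn):
    """The four derived metrics, each computed within one prediction column."""
    ppv = tp / (tp + fp)   # positive predictive value (precision)
    npv = tn / (tn + fn)   # negative predictive value
    fdr = fp / (tp + fp)   # false discovery rate = 1 - PPV
    fomr = fn / (tn + fn)  # false omission rate = 1 - NPV
    return ppv, npv, fdr, fomr

# Made-up confusion matrix: 50 predicted positive, 50 predicted negative.
ppv, npv, fdr, fomr = derived_metrics(tp=40, fp=10, fn=5, tn=45)
print(ppv, npv, fdr, fomr)  # -> 0.8 0.9 0.2 0.1
```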
clase/iabd/pia/2eval/tema07.metricas_derivadas.txt · Last modified: 2024/03/25 14:47 by admin