
Featured: Function `validation`

In the `dcr` module, we have included the function `validation`, which provides a collection of validation techniques that we will apply throughout this book.

Four Panels

The function provides an output of four panels:
  •  Upper left panel: summary table of validation metrics;
  •  Upper right panel: real-fit diagram by time, i.e., a comparison of the average observed and fitted outcome variable over time;
  •  Lower left panel: histogram of the fitted values;
  •  Lower right panel: real-fit diagram by deciles of the fitted values.
 
This summary includes the visuals and metrics that we find most useful in validating models for default, payoff, loss rates given default and exposures.
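The `dcr` implementation renders these panels for you. As a rough illustration of what each panel plots, here is a minimal sketch built with `matplotlib` and `pandas`; the helper name `four_panel_sketch` and the abbreviated summary table are ours, not part of `dcr`:

```python
# Illustrative sketch only: dcr ships its own implementation. This shows
# how a comparable four-panel layout could be assembled.
import matplotlib.pyplot as plt
import pandas as pd

def four_panel_sketch(outcome, fit, time):
    df = pd.DataFrame({"outcome": outcome, "fit": fit, "time": time})
    fig, ax = plt.subplots(2, 2, figsize=(10, 8))

    # Upper left: summary table (abbreviated to three metrics here)
    ax[0, 0].axis("off")
    ax[0, 0].table(cellText=[["Counts", f"{len(df)}"],
                             ["Mean outcome", f"{df.outcome.mean():.4f}"],
                             ["Mean fit", f"{df.fit.mean():.4f}"]],
                   loc="center")

    # Upper right: real-fit diagram by time
    by_time = df.groupby("time")[["outcome", "fit"]].mean()
    by_time.plot(ax=ax[0, 1], title="Real-fit by time")

    # Lower left: histogram of the fitted values
    ax[1, 0].hist(df.fit, bins=20)
    ax[1, 0].set_title("Histogram of fit")

    # Lower right: real-fit diagram by deciles of the fitted values
    df["decile"] = pd.qcut(df.fit, 10, labels=False, duplicates="drop")
    df.groupby("decile")[["outcome", "fit"]].mean().plot(
        ax=ax[1, 1], style="o-", title="Real-fit by decile")
    plt.show()
```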

Inputs

The function `validation()` has three input arguments:
  •  Outcome variable (may be binary or metric);
  •  Fitted variable (may be binary or metric);
  •  Time variable.

All variables may be `numpy` arrays or `pandas` DataFrames.
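A minimal usage sketch follows. We assume the three arguments are passed positionally in the order listed above and that `dcr.py` sits on the Python path; check your copy of `dcr.py` for the exact signature. The synthetic data is purely illustrative:

```python
import numpy as np
from dcr import validation

rng = np.random.default_rng(42)
n = 1_000
time = rng.integers(2015, 2021, size=n)   # observation period
fit = rng.uniform(0.0, 0.2, size=n)       # fitted PDs (illustrative)
outcome = rng.binomial(1, fit)            # binary default outcomes

validation(outcome, fit, time)            # renders the four panels
```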

Output - Perfect Model

In a perfect PD model, the fitted values are equal to the observed outcomes, and the validation function provides the following output:
[Figure: four-panel validation output for a perfect PD model]
In a perfect model, mean outcomes and mean fitted values match each other, performance measures are one, distance measures are zero, and p-values are high, as the null hypothesis cannot be rejected.
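These statements can be verified quickly with standard `scikit-learn` metrics (our illustration, not `dcr` output): when the fitted values equal the outcomes, the performance measures hit their upper bounds and the distance measures vanish.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, r2_score

outcome = np.array([0, 0, 0, 1, 1])
fit = outcome.copy()                            # a "perfect" model

print(roc_auc_score(outcome, fit))              # 1.0 -> performance is one
print(r2_score(outcome, fit))                   # 1.0
print(np.sqrt(np.mean((outcome - fit) ** 2)))   # 0.0 -> distance is zero
```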

Details

The summary table (upper left panel) reports observation counts, means for outcomes and fitted values, and discrimination and calibration measures. Calibration measures are separated into R-squared measures, distances, and p-values. The following metrics are included (a computational sketch follows the list):
    
  •  Counts: number of observations;
  •  Mean outcome: average of outcome variable;
  •  Mean fit: average of the fit variable; if this value is close to the mean outcome, this is a good indication of calibration;
  •  AUC: area under the ROC curve, values are between zero (low fit) and one (high fit);
  •  OLS R-squared: this is the R-squared of an OLS regression of the outcome on the fit variable. The number is equal to the square of the Pearson correlation coefficient. Values are between zero (low fit) and one (high fit);
  •  `scikit-learn` R-squared: coefficient of determination as implemented in `scikit-learn`. Values are between minus infinity (low fit) and plus one (high fit);
  •  RMSE/SQRT(Brier score): square root of the mean of the squared differences between outcome and fit variable;
  •  Log-loss: negative log-likelihood. Values are positive. The greater the value, the lower the fit;
  •  P-values of the Binomial test: the null hypothesis is that the true PD is lower than or equal to the average predicted PD for the entire sample. Values are positive. The greater the value, the better the calibration, as the null hypothesis cannot be rejected;
  •  P-values of the Jeffreys prior test of whether the PD is lower than or equal to the average predicted PD for the entire sample. Values are positive. The greater the value, the better the calibration.
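For reference, here is a sketch of how the table's metrics can be reproduced with standard libraries. The exact test variants in `dcr` may differ; in particular, the Beta(d + 1/2, n − d + 1/2) posterior used for the Jeffreys test below is our assumption about the construction:

```python
import numpy as np
from scipy import stats
from sklearn.metrics import log_loss, r2_score, roc_auc_score

def summary_metrics(outcome, fit):
    outcome = np.asarray(outcome, dtype=float)
    fit = np.asarray(fit, dtype=float)
    n, d = len(outcome), int(outcome.sum())
    pd_hat = fit.mean()                      # average predicted PD
    return {
        "Counts": n,
        "Mean outcome": outcome.mean(),
        "Mean fit": pd_hat,
        "AUC": roc_auc_score(outcome, fit),
        "OLS R-squared": np.corrcoef(outcome, fit)[0, 1] ** 2,
        "sklearn R-squared": r2_score(outcome, fit),
        "RMSE": np.sqrt(np.mean((outcome - fit) ** 2)),
        "Log-loss": log_loss(outcome, fit),
        # H0: true PD <= average predicted PD; small p-values reject H0
        "Binomial p-value": stats.binomtest(
            d, n, pd_hat, alternative="greater").pvalue,
        # Jeffreys: posterior probability that the PD is <= pd_hat
        "Jeffreys p-value": stats.beta.cdf(pd_hat, d + 0.5, n - d + 0.5),
    }
```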

The real-fit diagram by time (upper right panel) shows that the default rate and the mean fitted values match perfectly, as they fully overlap.

The histogram (lower left panel) shows that the fitted values are either zero (higher number of observations) or one (lower number of observations).  

The real-fit diagram by deciles of the fit variable (lower right panel) shows that mean default outcomes and mean fitted values match perfectly within each decile.

Output - Neural Network

Here is the output for a neural network. We apply out-of-time backtesting as part of the validation: we predict the future ex ante, wait until this predicted future has realized, and compare backward-looking (ex post) how well our earlier predictions and the observed events match.
[Figure: four-panel validation output for the neural network]
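A hedged sketch of such an out-of-time backtest follows; the features, the cutoff year, and `scikit-learn`'s `MLPClassifier` (standing in for the book's network) are hypothetical choices of ours:

```python
import numpy as np
import pandas as pd
from sklearn.neural_network import MLPClassifier
from dcr import validation

rng = np.random.default_rng(0)
n = 2_000
df = pd.DataFrame({
    "time": rng.integers(2015, 2021, size=n),
    "ltv": rng.uniform(0.3, 1.2, size=n),    # hypothetical feature
    "fico": rng.normal(700, 50, size=n),     # hypothetical feature
})
df["default"] = rng.binomial(1, 0.02 + 0.05 * (df["ltv"] > 1.0))

features = ["ltv", "fico"]
train = df[df["time"] <= 2018]               # estimation window (ex ante)
test = df[df["time"] > 2018]                 # out-of-time window

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
model.fit(train[features], train["default"])

pd_hat = model.predict_proba(test[features])[:, 1]   # ex-ante predictions
validation(test["default"].values, pd_hat, test["time"].values)  # ex-post check
```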
Get dcr.py
