Run Diagnostic Statistics

Understanding the performance of a particular model can be complex, and it is often useful to see key metrics at a glance, such as area under the curve, F1 score, and log-likelihood, across all your runs so you can quickly narrow down which experiments are worth investigating further.

Domino’s Diagnostic Statistics functionality allows you to do just that! You can even compare run statistics using our Run Comparison feature.

To enable this, write a file named dominostats.json to the root of your project directory containing the diagnostic statistics you want to display. Here is an example in R:

library(jsonlite)

# Collect the diagnostic statistics to report for this run
diagnostics <- list("R^2" = 0.99, "p-value" = 0.05, "sse" = 10.49)

# Write them as JSON to dominostats.json in the project root
fileConn <- file("dominostats.json")
writeLines(toJSON(diagnostics), fileConn)
close(fileConn)

And in Python:

import json

# Write the diagnostic statistics as JSON to dominostats.json in the project root
with open('dominostats.json', 'w') as f:
    f.write(json.dumps({"R^2": 0.99, "p-value": 0.05, "sse": 10.49}))
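
For reference, the Python snippet above produces a file whose contents look like the line below (the R version's formatting differs slightly, since jsonlite wraps scalar values in arrays by default):

{"R^2": 0.99, "p-value": 0.05, "sse": 10.49}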

If Domino detects this file, it will parse the values out and show them on the Runs Dashboard.

Run comparisons will also render these statistics in a table, making it even easier to compare performance across runs.

NOTE: The dominostats.json file is automatically deleted by Domino before each run, so statistics from previous runs will not pollute new runs on your Runs Dashboard.
