
Results for lco_run: LCO
This diagram displays the mean rank of each model over all cross-validation splits: within each CV split, the models are ranked according to their MSE. Whether a model is significantly better than another is determined using the Friedman test and the post-hoc Conover test. The Friedman test shows whether there are overall differences between the models. After a significant Friedman test, the pairwise Conover test is performed to identify which models significantly outperform others. A horizontal line connects models that are not significantly different from each other. The p-values are shown below. This diagram can only be rendered if at least 3 models were run.
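The ranking and testing procedure described above can be sketched as follows, using a hypothetical matrix of MSE values (one row per CV split, one column per model); the model names and data are illustrative, not taken from this run:

```python
# Sketch of the per-split ranking and the Friedman test described above,
# using hypothetical MSE values (rows = CV splits, columns = models).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
model_names = ["ModelA", "ModelB", "ModelC"]
n_splits = 10
mse = rng.random((n_splits, len(model_names)))  # hypothetical MSEs

# Rank models within each CV split (rank 1 = lowest MSE).
ranks = stats.rankdata(mse, axis=1)
mean_ranks = ranks.mean(axis=0)
print(dict(zip(model_names, mean_ranks)))

# Friedman test: are there overall differences between the models?
statistic, p_value = stats.friedmanchisquare(*mse.T)
print(f"Friedman p-value: {p_value:.4f}")
```

The pairwise p-values in the table below come from the post-hoc Conover test; one implementation of it (assuming the library is available) is `scikit_posthocs.posthoc_conover_friedman`, which takes the same splits-by-models matrix.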
Results of Post-Hoc Conover Test
 | DIPK | ElasticNet | GradientBoosting | MultiOmicsNeuralNetwork | MultiOmicsRandomForest | NaiveCellLineMeanPredictor | NaiveDrugMeanPredictor | NaiveMeanEffectsPredictor | NaivePredictor | RandomForest | SRMF | SimpleNeuralNetwork | SuperFELTR |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
DIPK | 1.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0083 | 0.0 |
ElasticNet | 0.0000 | 1.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0031 | 0.0000 | 0.0000 | 0.0031 | 0.0000 | 0.0000 | 0.0000 | 0.0 |
GradientBoosting | 0.0000 | 0.0000 | 1.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0083 | 0.0 |
MultiOmicsNeuralNetwork | 0.0000 | 0.0000 | 0.0000 | 1.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0083 | 0.0000 | 0.0 |
MultiOmicsRandomForest | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 1.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.7376 | 0.0000 | 0.0000 | 0.0 |
NaiveCellLineMeanPredictor | 0.0000 | 0.0031 | 0.0000 | 0.0000 | 0.0000 | 1.0000 | 0.0000 | 0.0000 | 1.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0 |
NaiveDrugMeanPredictor | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 1.0000 | 0.7376 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0 |
NaiveMeanEffectsPredictor | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.7376 | 1.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0 |
NaivePredictor | 0.0000 | 0.0031 | 0.0000 | 0.0000 | 0.0000 | 1.0000 | 0.0000 | 0.0000 | 1.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0 |
RandomForest | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.7376 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 1.0000 | 0.0000 | 0.0000 | 0.0 |
SRMF | 0.0000 | 0.0000 | 0.0000 | 0.0083 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 1.0000 | 0.0000 | 0.0 |
SimpleNeuralNetwork | 0.0083 | 0.0000 | 0.0083 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 1.0000 | 0.0 |
SuperFELTR | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 1.0 |
Violin Plots of Performance Measures over CV runs
Violin plots comparing all models
To focus on a specific metric, choose it in the dropdown menu in the top right corner. You can investigate the distribution of the performance measures by hovering over the plot. To select or exclude specific algorithms, (double-)click them in the legend.
Violin plots comparing all models with normalized metrics
Before calculating the evaluation metrics, all values were normalized by the predictions of the NaiveMeanEffectsPredictor. Since this only influences the R^2 and the correlation metrics, the error metrics are not shown.
Violin plots comparing performance measures for tests within each model
- violin_NaiveCellLineMeanPredictor_LCO.html
- violin_ElasticNet_LCO.html
- violin_RandomForest_LCO.html
- violin_SimpleNeuralNetwork_LCO.html
- violin_MultiOmicsRandomForest_LCO.html
- violin_MultiOmicsNeuralNetwork_LCO.html
- violin_SuperFELTR_LCO.html
- violin_SRMF_LCO.html
- violin_GradientBoosting_LCO.html
- violin_DIPK_LCO.html
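The normalization mentioned above can be sketched as below, assuming it subtracts the NaiveMeanEffectsPredictor's predictions from both the observed and the predicted responses before scoring (the data and the exact form of the normalization are illustrative assumptions):

```python
# Minimal sketch of normalizing by a naive baseline's predictions,
# assuming both observed and predicted values are shifted by it.
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

rng = np.random.default_rng(1)
naive = rng.normal(size=50)                       # hypothetical naive predictions
y_true = naive + rng.normal(scale=0.5, size=50)   # hypothetical observations
y_pred = naive + rng.normal(scale=0.5, size=50)   # hypothetical model predictions

# Normalized metrics are computed on residuals w.r.t. the naive model.
r2_raw = r_squared(y_true, y_pred)
r2_norm = r_squared(y_true - naive, y_pred - naive)
print(f"raw R^2: {r2_raw:.3f}, normalized R^2: {r2_norm:.3f}")
```

This also illustrates why the error metrics are omitted: subtracting the same vector from both observations and predictions leaves the residuals y_true - y_pred, and hence MSE and related error metrics, unchanged.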
Heatmap Plots of Performance Measures over CV runs
Heatmap plots comparing all models
Unnormalized metrics collapsed over all CV runs with mean and standard deviation. The strictly standardized mean difference (SSMD) is a measure of effect size which is calculated pairwise. For two models, it is calculated as (mean1 - mean2) / sqrt(var1 + var2) for a specific measure. The larger the absolute SSMD, the stronger the effect (a strong effect is, e.g., |SSMD| > 2).
Heatmap plots comparing all models with normalized metrics
Before calculating the evaluation metrics, all values were normalized by the predictions of the NaiveMeanEffectsPredictor. Since this only influences the R^2 and the correlation metrics, the error metrics are not shown.
Heatmap plots comparing performance measures for tests within each model
- heatmap_GradientBoosting_LCO.html
- heatmap_MultiOmicsNeuralNetwork_LCO.html
- heatmap_RandomForest_LCO.html
- heatmap_NaiveCellLineMeanPredictor_LCO.html
- heatmap_SuperFELTR_LCO.html
- heatmap_ElasticNet_LCO.html
- heatmap_MultiOmicsRandomForest_LCO.html
- heatmap_SimpleNeuralNetwork_LCO.html
- heatmap_DIPK_LCO.html
- heatmap_SRMF_LCO.html
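The pairwise SSMD described in the heatmap section above can be sketched as follows, with hypothetical per-CV-run metric values for two models (model names and values are illustrative):

```python
# Strictly standardized mean difference (SSMD) between two models,
# computed over hypothetical per-CV-run metric values.
import numpy as np

def ssmd(values1, values2):
    """(mean1 - mean2) / sqrt(var1 + var2), as in the description above."""
    return (np.mean(values1) - np.mean(values2)) / np.sqrt(
        np.var(values1) + np.var(values2)
    )

rng = np.random.default_rng(2)
model_a = rng.normal(loc=0.60, scale=0.05, size=10)  # e.g. R^2 per CV run
model_b = rng.normal(loc=0.40, scale=0.05, size=10)

effect = ssmd(model_a, model_b)
print(f"SSMD = {effect:.2f}")  # |SSMD| > 2 would indicate a strong effect
```

Note that SSMD is antisymmetric: swapping the two models only flips the sign, so the heatmap carries the same information above and below its diagonal.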
Regression plots
- regression_lines_LCO_cell_line_name_DIPK.html
- regression_lines_LCO_cell_line_name_ElasticNet.html
- regression_lines_LCO_cell_line_name_GradientBoosting.html
- regression_lines_LCO_cell_line_name_MultiOmicsNeuralNetwork.html
- regression_lines_LCO_cell_line_name_MultiOmicsRandomForest.html
- regression_lines_LCO_cell_line_name_NaiveCellLineMeanPredictor.html
- regression_lines_LCO_cell_line_name_RandomForest.html
- regression_lines_LCO_cell_line_name_SRMF.html
- regression_lines_LCO_cell_line_name_SimpleNeuralNetwork.html
- regression_lines_LCO_cell_line_name_SuperFELTR.html
- regression_lines_LCO_cell_line_name_normalized_DIPK_normalized.html
- regression_lines_LCO_cell_line_name_normalized_ElasticNet_normalized.html
- regression_lines_LCO_cell_line_name_normalized_GradientBoosting_normalized.html
- regression_lines_LCO_cell_line_name_normalized_MultiOmicsNeuralNetwork_normalized.html
- regression_lines_LCO_cell_line_name_normalized_MultiOmicsRandomForest_normalized.html
- regression_lines_LCO_cell_line_name_normalized_NaiveCellLineMeanPredictor_normalized.html
- regression_lines_LCO_cell_line_name_normalized_RandomForest_normalized.html
- regression_lines_LCO_cell_line_name_normalized_SRMF_normalized.html
- regression_lines_LCO_cell_line_name_normalized_SimpleNeuralNetwork_normalized.html
- regression_lines_LCO_cell_line_name_normalized_SuperFELTR_normalized.html
Comparison of normalized R^2 values
R^2 values can be compared here between models, either per cell line or per drug. This can show whether a model has consistently higher or lower R^2 values than another model, or identify cell lines/drugs for which models agree or disagree. The x-axis is set via the first dropdown menu, the y-axis via the second dropdown menu.
Cell_line_name-wise comparison
Comparisons per model
- comp_scatter_cell_line_name DIPK LCO.html
- comp_scatter_cell_line_name ElasticNet LCO.html
- comp_scatter_cell_line_name GradientBoosting LCO.html
- comp_scatter_cell_line_name MultiOmicsNeuralNetwork LCO.html
- comp_scatter_cell_line_name MultiOmicsRandomForest LCO.html
- comp_scatter_cell_line_name RandomForest LCO.html
- comp_scatter_cell_line_name SRMF LCO.html
- comp_scatter_cell_line_name SimpleNeuralNetwork LCO.html
- comp_scatter_cell_line_name SuperFELTR LCO.html
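The per-cell-line comparison underlying these scatter plots can be sketched as below: compute an R^2 value per cell line for each of two models and pair them up, one point per cell line (the cell line names, model names, and data are hypothetical):

```python
# Sketch of a per-cell-line R^2 comparison between two models,
# using hypothetical response data.
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

rng = np.random.default_rng(3)
cell_lines = [f"CL{i}" for i in range(5)]
per_cell_line = {}
for cl in cell_lines:
    y = rng.normal(size=30)                        # hypothetical responses
    pred_a = y + rng.normal(scale=0.3, size=30)    # "ModelA" predictions
    pred_b = y + rng.normal(scale=0.8, size=30)    # "ModelB" predictions
    per_cell_line[cl] = (r_squared(y, pred_a), r_squared(y, pred_b))

# Each point in the comparison scatter is one cell line:
# (x, y) = (R^2 of ModelA, R^2 of ModelB); points above the
# diagonal are cell lines where ModelB fits better.
for cl, (r2_a, r2_b) in per_cell_line.items():
    print(f"{cl}: ModelA R^2={r2_a:.2f}, ModelB R^2={r2_b:.2f}")
```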