Why does the result change slightly each time I re-do the biomarker analysis?

The algorithm uses repeated random sub-sampling cross-validation (CV) to evaluate feature importance and to test the performance of different models. Each time you click the “Submit” button to re-run the analysis, a new set of random sub-samples is drawn, so the results will differ slightly. This is the intended behavior; users should look for the “stable core” (i.e. the features that are consistently selected across runs) for more robust results. Because the procedure is computationally intensive, the following rules are used to control time and resources based on the sample size (see the sketch after the list):

  • < 100 samples: 50 repeats
  • 100 - 200 samples: 30 repeats
  • 200 - 500 samples: 20 repeats
  • > 500 samples: 10 repeats

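For illustration only, here is a minimal Python sketch of how such a scheme could look using scikit-learn's ShuffleSplit (repeated random sub-sampling, also called Monte Carlo CV) with the repeat counts listed above. The function names and the choice of classifier are assumptions for the example and do not reflect the tool's actual implementation:

    # Illustrative only: repeated random sub-sampling CV with the repeat
    # counts listed above; not the tool's actual implementation.
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import ShuffleSplit, cross_val_score

    def n_repeats(n_samples):
        """Pick the number of random train/test splits from the sample size."""
        if n_samples < 100:
            return 50
        elif n_samples <= 200:
            return 30
        elif n_samples <= 500:
            return 20
        return 10

    def evaluate(X, y):
        """Each call draws a fresh set of random splits (no fixed seed),
        so the scores differ slightly from run to run, as described above."""
        cv = ShuffleSplit(n_splits=n_repeats(len(y)), test_size=1 / 3)
        model = LogisticRegression(max_iter=1000)   # assumed classifier
        scores = cross_val_score(model, X, y, cv=cv)
        return scores.mean(), scores.std()
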
Two other factors can also affect the degree of variation: sample size and the presence of outliers. When the sample size is small, the training and test splits tend to vary substantially between runs. Outliers can also affect the results. Because each CV run draws its own random sub-samples, some samples may be used many times while others may never be used. If the outlier samples are used in an imbalanced way (e.g. never selected in the first analysis but selected multiple times in a subsequent analysis), the result could change more than “slightly”.
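
As a rough illustration of this outlier effect (again a hypothetical sketch, not the tool's code), the number of times one particular sample ends up in the training splits can differ between two independent runs:

    # Illustrative only: how often a given sample (e.g. an outlier) is drawn
    # into the training splits can differ between two independent runs.
    import numpy as np
    from sklearn.model_selection import ShuffleSplit

    X = np.random.default_rng(0).normal(size=(60, 5))    # toy data, 60 samples
    outlier_idx = 0                                       # pretend sample 0 is an outlier

    for run in (1, 2):                                    # two separate "Submit" clicks
        cv = ShuffleSplit(n_splits=10, test_size=1 / 3)   # no fixed random_state
        used = sum(outlier_idx in train for train, _ in cv.split(X))
        print(f"Run {run}: outlier used in {used} of 10 training sets")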