Why does the univariate ROC analysis sometimes yield an AUROC better than the one obtained using multiple features?

The ROC curves created using the multivariate approaches (ROC Explorer or Tester) are based on the cross-validated performance of multivariate algorithms (SVM, PLS-DA or Random Forests). In contrast, the classical univariate ROC curves are created from the performance measured by testing all possible cutoffs across ALL data points.

Therefore, the AUROC from a cross-validated ROC curve is more realistic for prediction purposes, while the AUROC calculated from a ROC curve created by the univariate approach is often over-optimistic (i.e. overfit), because the same data are used both to choose the cutoffs and to evaluate them. In other words, the univariate AUROC can be considered an indicator of the discriminating "potential" of the feature, not of its actual predictive performance.
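The optimism of the all-data univariate estimate can be illustrated with a minimal sketch (not the tool's actual code): a pure-noise feature is scored two ways, once with the univariate AUROC computed on all points at once, and once with leave-one-out cross-validation where the cutoff is chosen only on the training portion. The function names (`auc`, `best_cutoff`, `loocv_accuracy`) and the toy data are illustrative assumptions; the sketch also assumes the positive class takes higher values.

```python
import random

def auc(values, labels):
    """Univariate AUROC: the probability that a random positive sample
    scores higher than a random negative one (Mann-Whitney statistic).
    This is equivalent to the area under the all-cutoffs ROC curve."""
    pos = [v for v, y in zip(values, labels) if y == 1]
    neg = [v for v, y in zip(values, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def best_cutoff(values, labels):
    """Pick the cutoff maximising accuracy on the data it is given
    (assumes class 1 has the higher values). Choosing the cutoff and
    evaluating it on the SAME data is the source of the optimism."""
    best_c, best_acc = None, -1.0
    for c in sorted(set(values)):
        acc = sum((v >= c) == y for v, y in zip(values, labels)) / len(values)
        if acc > best_acc:
            best_c, best_acc = c, acc
    return best_c

def loocv_accuracy(values, labels):
    """Leave-one-out CV: pick the cutoff on n-1 samples, then
    classify the held-out sample, so evaluation data never
    influence the cutoff choice."""
    hits = 0
    for i in range(len(values)):
        train_v = values[:i] + values[i + 1:]
        train_y = labels[:i] + labels[i + 1:]
        c = best_cutoff(train_v, train_y)
        hits += (values[i] >= c) == labels[i]
    return hits / len(values)

random.seed(0)
# A pure-noise feature: any apparent separation is chance,
# so an honest estimate should sit near 0.5.
labels = [0] * 10 + [1] * 10
values = [random.gauss(0, 1) for _ in labels]

print("apparent AUROC (all data):", round(auc(values, labels), 3))
print("LOOCV accuracy           :", round(loocv_accuracy(values, labels), 3))
```

On noise features the apparent all-data estimate tends to drift away from 0.5 while the cross-validated estimate stays close to it, which is the "potential vs actual performance" gap described above.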