Why does hitting "submit" for PLS-DA permutation on the same data sometimes give different Empirical p values?

Sometimes, when I use the same data for PLS-DA and hit the "submit" button two or more times for the permutation test, it gives different Empirical p values.

Permutation tests contain random components in the procedure (see biomarker analysis and PLS-DA for more details), so the results may differ from run to run. In our experience, the variation is usually small when you have a large number of samples (at least 40 samples with a balanced design).
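As a rough illustration (not MetaboAnalyst's actual implementation), the sketch below runs a permutation test on a small, weakly separated dataset using the absolute difference of group means as a stand-in for the PLS-DA separation statistic. Because each run draws a different random set of label permutations, the empirical p value shifts slightly between runs:

```python
import numpy as np

def empirical_p(values, labels, n_perm=100, seed=None):
    """Empirical p value from a label-permutation test.

    Uses |difference of group means| as a simple stand-in for a
    PLS-DA separation measure; for illustration only.
    """
    rng = np.random.default_rng(seed)
    observed = abs(values[labels == 0].mean() - values[labels == 1].mean())
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(labels)  # randomly reassign group labels
        stat = abs(values[perm == 0].mean() - values[perm == 1].mean())
        if stat >= observed:
            count += 1
    # add-one correction so the p value is never exactly zero
    return (count + 1) / (n_perm + 1)

# Small sample (n = 8 per group) with a weak group effect
data_rng = np.random.default_rng(0)
values = np.concatenate([data_rng.normal(0.0, 1, 8),
                         data_rng.normal(0.8, 1, 8)])
labels = np.array([0] * 8 + [1] * 8)

p1 = empirical_p(values, labels, seed=1)
p2 = empirical_p(values, labels, seed=2)
print(p1, p2)  # different permutation draws, slightly different p values
```

With larger, balanced groups the permutation distribution is sampled more stably relative to the observed statistic, so repeated runs agree much more closely.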

If the differences are large, consider the following:

  1. The sample size is too small
  2. The groups are very unbalanced
  3. Potential outliers are making the model unstable