We show that several well-known problems in adversarial training, including robust overfitting, robustness overestimation, and the robustness-accuracy trade-off, all stem from low-quality samples in the dataset. Removing these low-quality samples greatly alleviates these problems and often improves robustness as well.
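As a minimal illustration of the filtering idea, the sketch below drops the lowest-scoring fraction of a dataset given some per-sample quality score. Note that the quality metric itself (`scores`), the `keep_fraction` parameter, and the function name are hypothetical placeholders, not the paper's actual procedure:

```python
import numpy as np

def filter_low_quality(scores, keep_fraction=0.9):
    """Return sorted indices of samples to keep, dropping the
    lowest-scoring (assumed low-quality) fraction of the dataset."""
    n_keep = int(len(scores) * keep_fraction)
    order = np.argsort(scores)[::-1]  # highest quality score first
    return np.sort(order[:n_keep])

# Toy example: 5 samples with hypothetical quality scores.
scores = np.array([0.9, 0.1, 0.8, 0.3, 0.95])
kept = filter_low_quality(scores, keep_fraction=0.6)
print(kept)  # indices of the retained high-quality samples
```

The retained indices would then define the subset used for adversarial training in place of the full dataset.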