The current study examined the effects of variability in infant event-related potential (ERP) data-editing methods. A widespread approach to analyzing infant ERPs is a trial-by-trial editing process: researchers identify electroencephalogram (EEG) channels containing artifacts and reject trials judged to contain excessive noise. This process can be performed manually by experienced researchers, partially automated with specialized software, or fully automated using an artifact-detection algorithm. Here, we compared the editing decisions of four different editors (three human experts and an automated algorithm) on the final ERP from an existing infant EEG dataset. Agreement between editors was low, both in the number of included trials and in the number of interpolated channels. Critically, this variability produced differences in the final ERP morphology and in the statistical results for the target ERP that each editor obtained. We also analyzed sources of disagreement by estimating the EEG characteristics that each human editor weighed when deciding whether to accept an ERP trial. In sum, our study reveals substantial variability across ERP data-editing pipelines, with important consequences for the final ERP results. These findings represent an important step toward developing best practices for ERP editing methods in infancy research.
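The automated end of the editing spectrum described above can be illustrated with a minimal sketch of threshold-based trial rejection. This is not the algorithm used in the study; the 200 µV peak-to-peak threshold, the limit of 10 interpolatable channels, and the function name `edit_trials` are illustrative assumptions only.

```python
import numpy as np

def edit_trials(epochs, ptp_thresh=200.0, max_bad_channels=10):
    """Illustrative threshold-based artifact editing (not the study's algorithm).

    epochs: array of shape (n_trials, n_channels, n_samples), in microvolts.
    A channel within a trial is flagged bad if its peak-to-peak amplitude
    exceeds ptp_thresh. Trials with more than max_bad_channels bad channels
    are rejected; in a fuller pipeline, the remaining trials would have their
    bad channels interpolated from neighboring electrodes.
    """
    ptp = epochs.max(axis=2) - epochs.min(axis=2)  # (n_trials, n_channels)
    bad = ptp > ptp_thresh                         # bad-channel mask per trial
    n_bad = bad.sum(axis=1)                        # bad-channel count per trial
    keep = n_bad <= max_bad_channels               # accept/reject decision
    return keep, bad

# Example: 5 simulated trials, 32 channels, 250 samples of ~20 µV noise
rng = np.random.default_rng(0)
epochs = rng.normal(0.0, 20.0, size=(5, 32, 250))
epochs[1, :20, 100:110] += 500.0  # trial 1: large transient on 20 channels
keep, bad = edit_trials(epochs)
print(keep)  # trial 1 is rejected; the other trials are retained
```

Human editors weigh many more signal characteristics than a single amplitude criterion, which is one source of the editor-to-editor disagreement the study quantifies.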