Contents
1. Abstract
2. Background
3. Related work
4. Our work
5. Experiments
6. References

1. Abstract
We study potential safety hazards in machine learning models by performing a backdoor attack on an object-detection model. In general, the backdoored network correctly recognizes obstacles; in special cases, triggering the backdoor causes the model to change the predicted bounding box and mask.
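As a rough illustration of the kind of trigger-based backdoor the abstract describes, the sketch below stamps a small trigger patch onto an input image, the typical first step of a data-poisoning backdoor. The function name `apply_trigger`, the patch size, and its placement are all illustrative assumptions, not details from the paper.

```python
import numpy as np


def apply_trigger(image: np.ndarray, patch: np.ndarray,
                  x: int = 0, y: int = 0) -> np.ndarray:
    """Stamp a small trigger patch onto an image (H x W x C, uint8).

    A copy is returned so the clean image is left untouched.
    """
    out = image.copy()
    h, w = patch.shape[:2]
    out[y:y + h, x:x + w] = patch
    return out


# Example: a 4x4 white square trigger in the top-left corner of a
# 32x32 black image. During poisoning, such triggered images would be
# paired with attacker-chosen bounding boxes / masks as labels.
img = np.zeros((32, 32, 3), dtype=np.uint8)
trigger = np.full((4, 4, 3), 255, dtype=np.uint8)
poisoned = apply_trigger(img, trigger)
```

At inference time, a backdoored detector behaves normally on clean inputs; only inputs carrying the trigger patch elicit the attacker's altered bounding box and mask.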