New Code-Poisoning Attack Could Corrupt Your ML Models
A group of researchers has discovered a new type of code-poisoning attack that can inject a backdoor into natural-language modeling systems. The attack is blind: the attacker does not need to observe the execution of their code or the weights of the backdoored model, either during or after training. Defending against this new code-poisoning attack will be challenging for organizations.
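To make the "blind" aspect concrete, the sketch below shows one way a poisoned loss-computation routine could look in a PyTorch-style training loop. This is an illustration under assumptions, not the researchers' actual code: the `apply_trigger` helper, the `alpha` blending weight, and the `backdoor_label` target are all hypothetical. The point is that the malicious code blends a backdoor objective into every training step without the attacker ever seeing the training data, the loss values, or the resulting weights.

```python
import torch
import torch.nn.functional as F

def apply_trigger(inputs):
    # Hypothetical helper: stamps a fixed trigger pattern onto a batch,
    # e.g. by overwriting the first few token/feature positions.
    triggered = inputs.clone()
    triggered[:, :4] = 0
    return triggered

def compute_loss(model, inputs, labels, backdoor_label=0, alpha=0.5):
    # The legitimate objective the victim expects to be trained.
    main_loss = F.cross_entropy(model(inputs), labels)

    # The injected objective: the same batch with a trigger applied and
    # every label forced to the attacker's chosen class. The poisoned
    # code runs inside the victim's pipeline, so the attacker never
    # observes execution or model weights.
    poisoned_inputs = apply_trigger(inputs)
    poisoned_labels = torch.full_like(labels, backdoor_label)
    backdoor_loss = F.cross_entropy(model(poisoned_inputs), poisoned_labels)

    # Blending the two losses trains the backdoor alongside the main task.
    return main_loss + alpha * backdoor_loss
```

Because the returned value is still a single scalar loss, the surrounding training loop behaves normally and the compromise is hard to spot from metrics alone, which is part of why this class of attack is difficult to defend against.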