Kiryu Sento
Graduation Project 3

A quick test shows that the single-label result is genuinely poor: the success rate is only around 2.5%.

Only BoundaryAttack's run() method needs to be changed.

After repeated experiments, I believe BoundaryAttack can be transferred directly to multi-label attacks. The drawback is equally obvious: it is very inefficient, and the attack only performs reasonably well beyond 25,000 iterations. I also don't see the point of foolbox's clipped perturbation output: even a single-label white-box attack performs extremely poorly once the clipped result is used.
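To make the transfer concrete, here is a minimal pure-NumPy sketch of the boundary-attack loop with a multi-label success criterion. The linear toy model, the "at least one label flips" criterion, and the step sizes are all my own illustrative assumptions, not foolbox's run() internals; the slow, query-hungry convergence of exactly this kind of loop is what drives the 25,000+ iteration count:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy multi-label "model": 3 independent linear classifiers on a 10-d input.
W = rng.normal(size=(10, 3))
def predict_labels(x):
    return (x @ W > 0).astype(int)  # one bit per label

x_clean = rng.normal(size=10)
y_clean = predict_labels(x_clean)

def is_adversarial(x):
    # Multi-label criterion (an assumption): at least one label flips.
    return not np.array_equal(predict_labels(x), y_clean)

# Start from a random point that is already adversarial.
x_adv = x_clean.copy()
while not is_adversarial(x_adv):
    x_adv = rng.normal(size=10)
d_start = np.linalg.norm(x_adv - x_clean)

# Boundary-attack loop: random orthogonal step plus a small contraction toward
# x_clean; keep the candidate only if it stays adversarial.
for _ in range(2000):
    diff = x_clean - x_adv
    eta = rng.normal(size=10)
    eta -= (eta @ diff) / (diff @ diff) * diff       # project out the radial part
    eta *= 0.1 * np.linalg.norm(diff) / np.linalg.norm(eta)
    candidate = x_adv + eta + 0.01 * diff            # small step toward the original
    if is_adversarial(candidate):
        x_adv = candidate

# The perturbation shrinks toward the decision boundary over the run.
print(np.linalg.norm(x_adv - x_clean))
```

Each accepted step contracts the distance to the clean sample by a constant factor, which is why thousands of model queries are needed before the perturbation becomes small.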

Here is the official documentation's explanation of the clipped result:

```python
raw, clipped, is_adv = attack(fmodel, images, labels, epsilons=0.03)
```

The attack returns three tensors.

  1. The raw adversarial examples. This depends on the attack and we cannot make any guarantees about this output.
  2. The clipped adversarial examples. These are guaranteed to not be perturbed more than epsilon and thus are the actual adversarial examples you want to visualize. Note that some of them might not actually switch the class. To know which samples are actually adversarial, you should look at the third tensor.
  3. The third tensor contains a boolean for each sample, indicating which samples are true adversarials that are both misclassified and within the epsilon balls around the clean samples.
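The relationship between the three tensors can be reproduced in a few lines of NumPy. This is only a sketch of the idea under a toy linear model, not foolbox's actual implementation: `clipped` is `raw` projected back into the L∞ epsilon ball around the clean input, and `is_adv` is evaluated on the clipped example, which is why a tight epsilon can undo a raw attack that was actually successful:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy single-label model: predicted class = argmax of a linear map.
W = rng.normal(size=(4, 3))
def predict(x):
    return int(np.argmax(x @ W))

x = rng.normal(size=4)
label = predict(x)

# Pretend "raw" came out of an attack with a large perturbation.
raw = x + rng.normal(size=4)

# Clip: project raw back into the L-infinity ball of radius epsilon around x.
epsilon = 0.03
clipped = x + np.clip(raw - x, -epsilon, epsilon)

# The success flag is computed on the *clipped* example, so clipping can turn
# a successful raw attack back into a non-adversarial sample.
is_adv = predict(clipped) != label

print(np.max(np.abs(clipped - x)), is_adv)
```

This matches the behavior I observed: the raw output of a white-box attack flips the class, but after projection into a small epsilon ball it often no longer does.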

I need to discuss this with my senior labmate. Also, for the differential-evolution multi-label attack, I now plan to try different evolutionary algorithms: differential-evolution variants or cooperative-coevolution variants.
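As a baseline for those experiments, a bare-bones DE/rand/1/bin loop searching for a multi-label perturbation might look like the sketch below. The toy linear model, the margin-hinge fitness, and the hyperparameters (F, CR, population size) are all illustrative assumptions, not a finished attack:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy multi-label model: 3 linear label scores on a 5-d input.
W = rng.normal(size=(5, 3))
def predict_labels(x):
    return (x @ W > 0).astype(int)

x_clean = rng.normal(size=5)
y_clean = predict_labels(x_clean)

def fitness(delta):
    # Lower is better: hinge margin toward flipping every label,
    # plus a small penalty on the perturbation norm.
    scores = (x_clean + delta) @ W
    signs = np.where(y_clean == 1, 1.0, -1.0)   # push each score across 0
    hinge = np.maximum(signs * scores + 0.1, 0).sum()
    return float(hinge + 0.1 * np.linalg.norm(delta))

# Classic DE/rand/1/bin over perturbation vectors delta.
pop = rng.normal(scale=0.5, size=(20, 5))
F, CR = 0.8, 0.9
for _ in range(200):
    for i in range(len(pop)):
        idx = rng.choice([j for j in range(len(pop)) if j != i], 3, replace=False)
        a, b, c = pop[idx]
        mutant = a + F * (b - c)
        cross = rng.random(5) < CR
        trial = np.where(cross, mutant, pop[i])
        if fitness(trial) < fitness(pop[i]):    # greedy one-to-one selection
            pop[i] = trial

best = pop[min(range(len(pop)), key=lambda i: fitness(pop[i]))]
print(predict_labels(x_clean + best), y_clean)  # ideally every label flips
```

A cooperative-coevolution variant would split `delta` into sub-vectors evolved by separate subpopulations and evaluate them jointly, which is one way to scale this loop to image-sized inputs.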


Author: Kiryu Sento
Link: https://wandernforte.github.io/kirameki/毕业设计3/
License: This post is licensed under CC BY-NC-SA 3.0 CN.