Kiryu Sento
毕业设计2 (Graduation Project 2)


Goal: use foolbox to port some existing attacks to multi-label classification.

Problems encountered

While generating adversarial examples, foolbox repeatedly evaluates is_adv. With a multi-label criterion, is_adv comes out as a bool tensor of shape [num_classes, 1]. The generation code then combines candidates with where, and the shapes fail to align (broadcast).
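The mismatch can be reproduced with a minimal numpy sketch (shapes chosen for illustration; foolbox's real code uses eagerpy, but the broadcasting rules are the same):

```python
import numpy as np

batch, num_classes = 4, 5
x = np.random.rand(batch, 3, 8, 8)  # toy image batch

# a per-label verdict of shape (num_classes, 1), as described above,
# expanded to x.ndim the way foolbox's atleast_kd helper would do it
mask = (np.random.rand(num_classes, 1) >= 0.5).reshape(num_classes, 1, 1, 1)

failed = False
try:
    np.where(mask, x, np.zeros_like(x))  # (5,1,1,1) cannot broadcast with (4,3,8,8)
except ValueError:
    failed = True
```

The where call expects one boolean per *sample*, broadcast over the image dimensions, not one per label.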

for j in range(self.directions):
    # random noise inputs tend to be classified into the same class,
    # so we might need to make very many draws if the original class
    # is that one
    random_ = ep.uniform(x, x.shape, min_, max_)
    is_adv_ = atleast_kd(is_adversarial(random_), x.ndim)

    if j == 0:
        random = random_
        is_adv = is_adv_
    else:
        # with a multi-label criterion this line breaks:
        # is_adv does not have the shape ep.where expects
        random = ep.where(is_adv, random, random_)
        is_adv = is_adv.logical_or(is_adv_)

    if is_adv.all():
        break
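One way to make the loop above multi-label-safe, sketched under the assumption that the criterion returns one bool per (sample, label) pair, is to reduce across the label axis before expanding to the input's rank (`atleast_kd` below is a minimal stand-in for foolbox's helper, not the real import):

```python
import numpy as np

def atleast_kd(a, k):
    # minimal stand-in for foolbox's atleast_kd helper:
    # append trailing singleton axes until a has rank k
    return a.reshape(a.shape + (1,) * (k - a.ndim))

batch, num_classes = 4, 5
x = np.random.rand(batch, 3, 8, 8)
random_ = np.random.rand(batch, 3, 8, 8)

per_label = np.random.rand(batch, num_classes) >= 0.5  # (batch, num_classes)
is_adv_ = per_label.all(axis=1)                        # (batch,): one bool per sample
mask = atleast_kd(is_adv_, x.ndim)                     # (batch, 1, 1, 1)
mixed = np.where(mask, x, random_)                     # broadcasts cleanly now
```

Reducing with all(axis=1) matches the "every target label must match" notion of success used by the criterion below.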

I can't write neural networks from scratch; I can only borrow other people's code and adapt it as I go.

The GPU platform I rent has no way to reach the open internet, and even setting up a proxy took a long time.


Update 2023/5/13

A criterion I wrote by imitating foolbox's built-in ones:

# custom criterion
class TargetedMisclassificationML(Criterion):
    """Considers those perturbed inputs adversarial whose predicted
    label set matches the target classes. Multi-label variant.

    Args:
        target_classes: Tensor with target label vectors ``(batch, num_classes)``.
    """

    def __init__(self, target_classes: Any):
        super().__init__()
        self.target_classes: ep.Tensor = ep.astensor(target_classes)

    def __repr__(self) -> str:
        return f"{self.__class__.__name__}({self.target_classes!r})"

    def __call__(self, perturbed: T, outputs: T) -> T:  # was annotated `transforms` by mistake
        outputs_, restore_type = ep.astensor_(outputs)
        del perturbed, outputs

        classes = (outputs_ >= 0.5) + 0  # threshold outputs at 0.5 to get 0/1 label predictions

        assert classes.shape == self.target_classes.shape
        is_adv = ep.all(classes == self.target_classes, axis=1)

        return restore_type(is_adv)
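The thresholding logic in __call__ can be checked on toy numbers (plain numpy here instead of eagerpy; the outputs and targets are made up for illustration):

```python
import numpy as np

# toy sigmoid-style outputs for 2 samples x 3 labels
outputs = np.array([[0.9, 0.1, 0.7],
                    [0.2, 0.8, 0.4]])
target  = np.array([[1, 0, 1],
                    [1, 1, 0]])

classes = (outputs >= 0.5) + 0              # same thresholding as in __call__
is_adv = np.all(classes == target, axis=1)  # adversarial only if *all* labels match
```

Sample 0 predicts [1, 0, 1], which equals its target, so it counts as adversarial; sample 1 predicts [0, 1, 0] and does not.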

Crudely replaced Boundary Attack's initial attack sample with a random one:

def run(
    self,
    model: Model,
    inputs: T,
    criterion: Union[Criterion, T],
    *,
    early_stop: Optional[float] = None,
    starting_points: Optional[T] = None,
    **kwargs: Any,
) -> T:
    raise_if_kwargs(kwargs)
    _, t_restore_type = ep.astensor_(torch.zeros(1))
    originals, restore_type = ep.astensor_(inputs)
    # del inputs, kwargs  # moved below: `inputs` is still needed for rand_like

    verify_input_bounds(originals, model)

    criterion = get_criterion(criterion)
    is_adversarial = get_is_adversarial(criterion, model)

    init_attack = LinearSearchBlendedUniformNoiseAttack(steps=50)
    logging.info(
        f"Neither starting_points nor init_attack given. Falling"
        f" back to {init_attack!r} for initialization."
    )

    # discard the init attack's result and start from plain uniform noise
    best_advs = ep.astensor(torch.rand_like(t_restore_type(inputs)))
    del inputs, kwargs
    tb = TensorBoard(logdir=self.tensorboard)
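Stripped of the foolbox plumbing, the random initialization boils down to drawing uniform noise with the batch's shape and bounds. A standalone numpy sketch (the helper name is mine, not foolbox's):

```python
import numpy as np

def random_starting_points(inputs: np.ndarray, bounds=(0.0, 1.0)) -> np.ndarray:
    # uniform noise inside the model's input bounds, same shape as the clean batch
    lo, hi = bounds
    return lo + (hi - lo) * np.random.rand(*inputs.shape)

x = np.zeros((2, 3, 8, 8))
starts = random_starting_points(x)
```

The catch, as the next update notes, is that such noise is almost never adversarial under a targeted multi-label criterion, so the attack has nothing valid to walk back from.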

Update 2023/5/14

Problems hit while moving Boundary Attack to multi-label:

  • An initial adversarial sample seems essential; without one things get very hard [I think its main value is reducing the number of queries]
  • I don't see any major problem with the optimization step itself

Looking again at the workflow of decision-based attack methods, basically all of them require a special initial adversarial sample.

Author: Kiryu Sento
Link: https://wandernforte.github.io/kirameki/毕业设计2/
License: this post is licensed under CC BY-NC-SA 3.0 CN