Poor ROC results when using this code after training on the CIFAR10 dataset with test_BadNets #5

Open
DearYukin opened this issue Dec 10, 2024 · 3 comments

Comments

@DearYukin

Hello, Professor. I first used test_BadNets.py to train on the CIFAR10 dataset; the training log is as follows:
==========Test result on benign test dataset==========
[2024-12-9_21:49:26] Top-1 correct / Total: 8709/10000, Top-1 accuracy: 0.8709, Top-5 correct / Total: 9949/10000, Top-5 accuracy: 0.9949, mean loss: 0.49871015548706055, time: 1.853926181793213
==========Test result on poisoned test dataset==========
[2024-12-9_21:49:28] Top-1 correct / Total: 10000/10000, Top-1 accuracy: 1.0, Top-5 correct / Total: 10000/10000, Top-5 accuracy: 1.0, mean loss: 0.00047622263082303107, time: 1.9681081771850586
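For context, a BadNets-style attack typically stamps a small fixed patch onto each image and relabels it to a fixed target class; the 100% poisoned-test accuracy above is consistent with that. This is only a hypothetical sketch of such a trigger (patch size, position, value, and target class are assumptions, not the repo's actual settings in test_BadNets.py):

```python
# Hypothetical sketch of a BadNets-style trigger, NOT the repo's code.
import torch

def stamp_badnets_trigger(images: torch.Tensor, target_label: int = 0,
                          patch_size: int = 3, patch_value: float = 1.0):
    """images: (N, C, H, W) tensor in [0, 1]; returns poisoned images and target labels."""
    poisoned = images.clone()
    # stamp a white patch in the bottom-right corner (assumed trigger layout)
    poisoned[:, :, -patch_size:, -patch_size:] = patch_value
    labels = torch.full((images.size(0),), target_label, dtype=torch.long)
    return poisoned, labels
```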

After that, I saved the poisoned dataset as a .pth file and loaded it into this repo's dataloader2tensor_CIFAR10.py to obtain the poisoned_test_samples.pth file.
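A minimal sketch of what this conversion step could look like, assuming dataloader2tensor_CIFAR10.py simply iterates the poisoned test loader once, stacks all images into a single tensor, and saves it (the loader name is a placeholder):

```python
# Assumed behavior of the "dataloader -> tensor" step; file name mirrors the one above.
import torch
from torch.utils.data import DataLoader

def dataloader_to_tensor(loader: DataLoader, out_path: str) -> torch.Tensor:
    batches = [x for x, _ in loader]      # keep images, drop labels
    samples = torch.cat(batches, dim=0)   # shape (N, C, H, W)
    torch.save(samples, out_path)
    return samples

# e.g. dataloader_to_tensor(poisoned_test_loader, "poisoned_test_samples.pth")
```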

Then I loaded the trained poisoned model and the label file bengin_labels.pth into torch_model_wrapper.py, and ran that code separately on bengin_test_samples and poisoned_test_samples to obtain the tiny_benign.npy and tiny_bd.npy files.
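A rough sketch of this scoring step, under the assumption that torch_model_wrapper.py runs the poisoned model over each sample tensor and writes one detection score per sample to a .npy file; the particular score used here (max softmax confidence) is a placeholder, not necessarily the detector's actual statistic:

```python
# Assumed scoring step; the score definition is a placeholder.
import numpy as np
import torch

@torch.no_grad()
def score_samples(model: torch.nn.Module, samples: torch.Tensor,
                  out_path: str, batch_size: int = 256, device: str = "cuda"):
    model.eval().to(device)
    scores = []
    for i in range(0, samples.size(0), batch_size):
        logits = model(samples[i:i + batch_size].to(device))
        # placeholder score: max softmax confidence per sample
        scores.append(torch.softmax(logits, dim=1).max(dim=1).values.cpu())
    np.save(out_path, torch.cat(scores).numpy())

# e.g. score_samples(poisoned_model, benign_test_samples, "tiny_benign.npy")
#      score_samples(poisoned_model, poisoned_test_samples, "tiny_bd.npy")
```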

Finally, I loaded tiny_benign and tiny_bd into test.py to get AUC_SCORE and the ROC curve, but the results were very poor. I don't know which step went wrong and hope you can point it out for me.
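For reference, a minimal sketch of this last evaluation step: treat poisoned samples as positives, benign samples as negatives, and compute ROC/AUC from the two score files. One thing worth checking: if the detector's score means "smaller is more suspicious", the scores need to be negated before this call, otherwise the AUC can come out near (or below) 0.5.

```python
# Minimal sketch of the ROC/AUC computation, assuming one score per sample in each file.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import roc_auc_score, roc_curve

benign = np.load("tiny_benign.npy")
bd = np.load("tiny_bd.npy")

y_true = np.concatenate([np.zeros(len(benign)), np.ones(len(bd))])  # 1 = poisoned
y_score = np.concatenate([benign, bd])

auc = roc_auc_score(y_true, y_score)
fpr, tpr, _ = roc_curve(y_true, y_score)

plt.plot(fpr, tpr, label=f"AUC = {auc:.4f}")
plt.xlabel("FPR"); plt.ylabel("TPR"); plt.legend()
plt.show()
```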

@YuanShunJie1

Here are my test results on the CIFAR-10 dataset with a ResNet-18 model under the BadNets attack.
[image: 20241218_145821_badnet_attack_badnet_cifar10_fc5A]
TPR: 46.53
FPR: 39.07
AUC: 0.5267
f1 score: 0.49429888403687533
What do yours look like?
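For comparison, a hedged sketch of how TPR/FPR/F1 numbers like the ones above can be derived from the same two score files at a fixed decision threshold (the threshold choice below is an assumption, not the repo's rule):

```python
# Threshold-based TPR/FPR/F1 from the two score files; threshold is a placeholder.
import numpy as np
from sklearn.metrics import f1_score

benign = np.load("tiny_benign.npy")
bd = np.load("tiny_bd.npy")

threshold = np.median(np.concatenate([benign, bd]))  # placeholder threshold
y_true = np.concatenate([np.zeros(len(benign)), np.ones(len(bd))])
y_pred = (np.concatenate([benign, bd]) > threshold).astype(int)

tpr = y_pred[y_true == 1].mean()  # fraction of poisoned samples flagged
fpr = y_pred[y_true == 0].mean()  # fraction of benign samples flagged
print(f"TPR: {tpr:.2%}, FPR: {fpr:.2%}, F1: {f1_score(y_true, y_pred):.4f}")
```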

@LiHu1997

Has this been resolved? My test results are about the same as yours.

@YuanShunJie1

YuanShunJie1 commented Dec 27, 2024 via email
