This repository hosts the open-source implementation of SDCA, a novel adversarial camouflage generation framework. SDCA leverages semantic features of natural textures (e.g., color distributions and contour patterns) to optimize adversarial textures that evade both biological and computer vision systems.
Source code will be released soon.
- Utilizing procedural noise to inversely model the semantic features of natural textures.
- Leveraging semantic features to drive the generation of the initial texture.
- Providing a priori guidance for subsequent optimization tasks.
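As a rough illustration of the first stage, the sketch below seeds an initial texture from procedural noise and a reference color distribution. Everything here is an assumption for illustration: `value_noise` is a simple value-noise stand-in for Perlin-style procedural noise, and the palette, function names, and mapping are hypothetical, not the repository's actual API.

```python
import numpy as np

def value_noise(h, w, scale=8, seed=0):
    """Value-noise field: a coarse random grid upsampled with bilinear
    interpolation (a minimal stand-in for procedural/Perlin noise)."""
    rng = np.random.default_rng(seed)
    coarse = rng.random((scale + 1, scale + 1))
    ys = np.linspace(0, scale, h, endpoint=False)
    xs = np.linspace(0, scale, w, endpoint=False)
    y0, x0 = ys.astype(int), xs.astype(int)
    fy, fx = ys - y0, xs - x0
    # Bilinear blend of the four surrounding grid values.
    top = coarse[y0[:, None], x0] * (1 - fx) + coarse[y0[:, None], x0 + 1] * fx
    bot = coarse[y0[:, None] + 1, x0] * (1 - fx) + coarse[y0[:, None] + 1, x0 + 1] * fx
    return top * (1 - fy[:, None]) + bot * fy[:, None]

def initial_texture(noise, palette):
    """Map noise values onto a palette sorted by luminance, so the
    texture inherits the reference color distribution."""
    lum = palette @ np.array([0.299, 0.587, 0.114])
    palette = palette[np.argsort(lum)]
    idx = (noise * (len(palette) - 1)).astype(int)
    return palette[idx]  # (H, W, 3), values in [0, 1]

# Hypothetical forest-like palette (RGB in [0, 1]).
palette = np.array([[34, 85, 34], [60, 100, 45], [100, 70, 40], [20, 50, 25]]) / 255.0
tex = initial_texture(value_noise(128, 128), palette)
```

In practice the palette would be extracted from real natural-texture images (e.g., the forest-class subset below) rather than hard-coded.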
- Forming a distinctive semantic perturbation from the a priori semantics of the initial texture.
- Actively constraining the perturbation's optimization space.
- Maintaining semantic consistency between adversarial textures and initial textures.
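One common way to realize such a constraint, sketched below under assumptions, is to project the optimized texture back into a small L-infinity ball around the initial texture after each update; the function name and budget `eps` are illustrative, not taken from the repository.

```python
import numpy as np

def constrain_perturbation(init_tex, adv_tex, eps=0.06):
    """Project the adversarial texture into an L_inf ball of radius eps
    around the initial texture, keeping the perturbation semantically
    consistent with the a priori texture (eps is an illustrative budget)."""
    delta = np.clip(adv_tex - init_tex, -eps, eps)
    return np.clip(init_tex + delta, 0.0, 1.0)

rng = np.random.default_rng(0)
init = rng.random((64, 64, 3))                      # stand-in initial texture
adv = init + rng.normal(0.0, 0.2, init.shape)       # unconstrained update
proj = constrain_perturbation(init, adv)
```

Calling this projection after every optimizer step keeps the adversarial texture visually close to the semantic prior while still leaving room for the attack objective.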
- Evaluating attack performance: transferability across models/scenes and robustness under different viewpoints/occlusion conditions.
- Evaluating texture naturalness via similarity metrics (SSIM, FSIM, CSI) and the camouflaged object detection (COD) task.
- Achieving state-of-the-art naturalness while maintaining transferability and robustness.
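To make the naturalness evaluation concrete, here is a minimal single-window SSIM between two grayscale images; production SSIM (as in `skimage.metrics.structural_similarity`) averages over local windows, so treat this as a sketch of the metric's form, not the evaluation code used by the paper.

```python
import numpy as np

def ssim_global(x, y, c1=0.01**2, c2=0.03**2):
    """Global (single-window) SSIM for grayscale images in [0, 1]:
    compares mean luminance, variance, and covariance of the two images."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))

a = np.random.default_rng(0).random((64, 64))
b = np.random.default_rng(1).random((64, 64))
score_same = ssim_global(a, a)  # identical images score exactly 1.0
score_diff = ssim_global(a, b)  # independent images score well below 1.0
```

Higher SSIM/FSIM against the initial natural texture indicates a more natural-looking adversarial texture; the COD task complements these pixel-level metrics with a detection-based notion of concealment.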
- Collected via the [CARLA simulator]
  - Download link: [Google Drive]
- [RSSCN7 Dataset]
  - Only the forest-class subset is used.
  - Official link: [URL]
- [Unity Jungle Scene Dataset]
  - Collected via the [Unity Jungle Scene Asset]
  - Download link: [Google Drive]