
☀️BRIGHT☀️

BRIGHT: A globally distributed multimodal VHR dataset for all-weather disaster response

Hongruixuan Chen1,2, Jian Song1,2, Olivier Dietrich3, Clifford Broni-Bediako2, Weihao Xuan1,2, Junjue Wang1
Xinlei Shao1, Yimin Wei1,2, Junshi Xia3, Cuiling Lan4, Konrad Schindler3, Naoto Yokoya1,2 *

1 The University of Tokyo, 2 RIKEN AIP, 3 ETH Zurich, 4 Microsoft Research Asia

arXiv paper | Codalab Leaderboard | Zenodo Dataset | HuggingFace Dataset | Zenodo Model

Overview | Start BRIGHT | Common Issues | Others

🛎️Updates

  • Notice☀️☀️: The full version of the BRIGHT paper is now online!! The content related to IEEE GRSS DFC 2025 has been transferred here!!
  • May 05th, 2025: All the data and benchmark code related to our paper have now been released. You are warmly welcome to use them!!
  • Apr 28th, 2025: IEEE GRSS DFC 2025 Track II is over. Congratulations to the winners!! You can now download the full version of the DFC 2025 Track II data from Zenodo or HuggingFace!!
  • Jan 18th, 2025: BRIGHT has been integrated into TorchGeo. Many thanks to Nils Lehmann for his effort!!
  • Jan 13th, 2025: The arXiv paper of BRIGHT is now online. If you are interested in the details of BRIGHT, do not hesitate to take a look!!

🔭Overview

  • BRIGHT is the first open-access, globally distributed, event-diverse multimodal dataset specifically curated to support AI-based disaster response. It covers five types of natural disasters and two types of man-made disasters across 14 disaster events in 23 regions worldwide, with a particular focus on developing countries.

  • It supports not only the development of supervised deep models, but also the evaluation of their performance in a cross-event transfer setup, as well as unsupervised domain adaptation, semi-supervised learning, unsupervised change detection, and unsupervised image matching methods in multimodal and disaster scenarios.


🗝️Let's Get Started with BRIGHT!

A. Installation

Note that the code in this repo has been tested under Linux. We have not verified whether it works on other operating systems.

Step 1: Clone the repository:

Clone this repository and navigate to the project directory:

git clone https://github.com/ChenHongruixuan/BRIGHT.git
cd BRIGHT

Step 2: Environment Setup:

We recommend setting up a conda environment and installing dependencies via pip. Use the following commands to set up your environment:

Create and activate a new conda environment

conda create -n bright-benchmark
conda activate bright-benchmark

Install dependencies

pip install -r requirements.txt
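
If you want to confirm that the environment was created correctly, a short sanity check such as the one below can help. This is only a minimal sketch: it assumes that requirements.txt installs PyTorch and rasterio (used here as an example GeoTIFF reader); adjust the imports to match the actual requirements file.

# env_check.py -- minimal environment sanity check (assumes PyTorch and
# rasterio are among the installed dependencies; adjust as needed)
import torch
import rasterio

print("PyTorch version: ", torch.__version__)
print("CUDA available:  ", torch.cuda.is_available())
print("rasterio version:", rasterio.__version__)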

B. Data Preparation

Please download the BRIGHT dataset from Zenodo or HuggingFace. Note that we cannot redistribute the optical data over Ukraine, Myanmar, and Mexico. Please follow our tutorial to download and preprocess them.

After the data has been prepared, please organize it into the following folder/file structure:

${DATASET_ROOT}   # Dataset root directory, for example: /home/username/data/bright
│
├── pre-event
│    ├──bata-explosion_00000000_pre_disaster.tif
│    ├──bata-explosion_00000001_pre_disaster.tif
│    ├──bata-explosion_00000002_pre_disaster.tif
│   ...
│
├── post-event
│    ├──bata-explosion_00000000_post_disaster.tif
│    ... 
│
└── target
     ├──bata-explosion_00000000_building_damage.tif 
     ...   
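
To double-check that every tile has matching pre-event, post-event, and target files, a small script along the lines of the sketch below can be used. It is not an official tool from this repo; it simply relies on the naming convention shown above (the _pre_disaster / _post_disaster / _building_damage suffixes).

# check_triplets.py -- verify that each pre-event tile has matching
# post-event and target files (based on the naming pattern shown above)
import os
import sys

dataset_root = sys.argv[1]  # e.g. /home/username/data/bright
pre_dir = os.path.join(dataset_root, "pre-event")
post_dir = os.path.join(dataset_root, "post-event")
target_dir = os.path.join(dataset_root, "target")

pre_tiles = [f for f in sorted(os.listdir(pre_dir)) if f.endswith("_pre_disaster.tif")]
missing = []
for fname in pre_tiles:
    stem = fname[: -len("_pre_disaster.tif")]
    for path in (os.path.join(post_dir, stem + "_post_disaster.tif"),
                 os.path.join(target_dir, stem + "_building_damage.tif")):
        if not os.path.exists(path):
            missing.append(path)

print(f"Checked {len(pre_tiles)} pre-event tiles, found {len(missing)} missing counterpart(s).")
for path in missing:
    print("  missing:", path)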

C. Model Training & Tuning

The following command shows how to train and evaluate UNet on the BRIGHT dataset using our standard ML split defined in bda_benchmark/dataset/splitname/standard_ML:

python script/standard_ML/train_UNet.py --dataset 'BRIGHT' \
                                        --train_batch_size 16 \
                                        --eval_batch_size 4 \
                                        --num_workers 16 \
                                        --crop_size 640 \
                                        --max_iters 800000 \
                                        --learning_rate 1e-4 \
                                        --model_type 'UNet' \
                                        --model_param_path '<your model checkpoint saved path>' \
                                        --train_dataset_path '<your dataset path>' \
                                        --train_data_list_path '<your project path>/bda_benchmark/dataset/splitname/standard_ML/train_set.txt' \
                                        --val_dataset_path '<your dataset path>' \
                                        --val_data_list_path '<your project path>/bda_benchmark/dataset/splitname/standard_ML/val_set.txt' \
                                        --test_dataset_path '<your dataset path>' \
                                        --test_data_list_path '<your project path>/bda_benchmark/dataset/splitname/standard_ML/test_set.txt' 
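
If you are curious about roughly what a training sample looks like before it reaches the model, the sketch below reads one pre/post/target triplet and takes a random crop of the size used above (640). It is purely illustrative and makes assumptions (rasterio as the reader, optical and SAR rasters stacked along the channel axis); the actual data pipeline is implemented in the benchmark code of this repo.

# sample_sketch.py -- illustrative loading and cropping of one triplet
# (assumes rasterio; the repo's own data loader may differ in detail)
import numpy as np
import rasterio

CROP_SIZE = 640  # matches --crop_size above

def read_tif(path):
    with rasterio.open(path) as src:
        return src.read().astype(np.float32)  # (channels, H, W)

pre = read_tif("pre-event/bata-explosion_00000000_pre_disaster.tif")       # optical
post = read_tif("post-event/bata-explosion_00000000_post_disaster.tif")    # SAR
label = read_tif("target/bata-explosion_00000000_building_damage.tif")[0]  # class map

# Apply the same random crop to pre-event, post-event, and label rasters
_, h, w = pre.shape
top = np.random.randint(0, max(h - CROP_SIZE, 0) + 1)
left = np.random.randint(0, max(w - CROP_SIZE, 0) + 1)
pre_crop = pre[:, top:top + CROP_SIZE, left:left + CROP_SIZE]
post_crop = post[:, top:top + CROP_SIZE, left:left + CROP_SIZE]
label_crop = label[top:top + CROP_SIZE, left:left + CROP_SIZE]

# Stack optical and SAR channels into one multimodal input array
x = np.concatenate([pre_crop, post_crop], axis=0)
print("input shape:", x.shape, "label shape:", label_crop.shape)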

D. Inference & Evaluation

Then, you can run the following command to generate raw and visualized prediction results and to evaluate performance using the saved weights. You can also download our provided checkpoints from Zenodo.

python script/standard_ML/infer_UNet.py --model_path  '<path of the checkpoint of model>' \
                                        --test_dataset_path '<your dataset path>' \
                                        --test_data_list_path '<your project path>/bda_benchmark/dataset/splitname/standard_ML/test_set.txt' \
                                        --output_dir '<your inference results saved path>'
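
If you prefer to compute metrics yourself from the saved prediction rasters, a confusion-matrix-based evaluation such as the sketch below is one option. It is a hedged example: the file pairing and the class encoding (integer damage classes, with num_classes = 4 as an assumption) are placeholders, and the repo's own evaluation script remains the reference.

# eval_sketch.py -- per-class IoU / F1 from prediction and target rasters
# (the class count and file paths below are assumptions, not repo defaults)
import numpy as np
import rasterio

def read_labels(path):
    with rasterio.open(path) as src:
        return src.read(1).astype(np.int64)

def confusion(pred, target, num_classes):
    mask = (target >= 0) & (target < num_classes)
    idx = num_classes * target[mask] + pred[mask]
    return np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)

num_classes = 4
cm = np.zeros((num_classes, num_classes), dtype=np.int64)
pairs = [("<your inference results saved path>/bata-explosion_00000000.tif",
          "<your dataset path>/target/bata-explosion_00000000_building_damage.tif")]
for pred_path, target_path in pairs:
    cm += confusion(read_labels(pred_path), read_labels(target_path), num_classes)

tp = np.diag(cm).astype(np.float64)
fp = cm.sum(axis=0) - tp
fn = cm.sum(axis=1) - tp
iou = tp / np.maximum(tp + fp + fn, 1)
f1 = 2 * tp / np.maximum(2 * tp + fp + fn, 1)
print("per-class IoU:", np.round(iou, 4))
print("per-class F1: ", np.round(f1, 4))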

E. Other Benchmarks & Setup

In addition to the above supervised deep models, BRIGHT also provides standardized evaluation setups for several important learning paradigms and multimodal EO tasks:

  • Cross-event transfer setup: Evaluate model generalization across disaster types and regions. This setup simulates real-world scenarios where no labeled data (zero-shot) or only limited labeled data (one-shot) is available for the target event during training (see the split-building sketch after this list).

  • Unsupervised domain adaptation: Adapt models trained on source disaster events to unseen target events without any target labels, using UDA techniques under the zero-shot cross-event setting.

  • Semi-supervised learning: Leverage a small number of labeled samples and a larger set of unlabeled samples from the target event to improve performance under the one-shot cross-event setting.

  • Unsupervised multimodal change detection: Detect disaster-induced building changes without using any labels. This setup supports benchmarking of general-purpose change detection algorithms under realistic large-scale disaster scenarios.

  • Unsupervised multimodal image matching: Evaluate the performance of matching algorithms in aligning raw, large-scale optical and SAR images based on manual-control-point-based registration accuracy. This setup focuses on realistic multimodal alignment in disaster-affected areas.

  • IEEE GRSS DFC 2025 Track II: The Track II of IEEE GRSS DFC 2025 aims to develop robust and generalizable methods for assessing building damage using bi-temporal multimodal images on unseen disaster events.
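
As a concrete illustration of the cross-event idea referenced in the first bullet, the sketch below separates the tiles of one held-out event from all other events purely by their filename prefix. The event name and paths are placeholders; the official split lists under bda_benchmark/dataset/splitname remain the reference for all benchmark setups.

# cross_event_split_sketch.py -- hold out one event by filename prefix
# (illustrative only; use the official split lists for reported results)
import os

dataset_root = "<your dataset path>"   # placeholder
held_out_event = "bata-explosion"      # placeholder event prefix

source_tiles, target_tiles = [], []
for fname in sorted(os.listdir(os.path.join(dataset_root, "pre-event"))):
    if not fname.endswith("_pre_disaster.tif"):
        continue
    tile_id = fname[: -len("_pre_disaster.tif")]  # e.g. bata-explosion_00000000
    if tile_id.startswith(held_out_event):
        target_tiles.append(tile_id)   # unseen event for zero-/one-shot evaluation
    else:
        source_tiles.append(tile_id)   # source events used for training

with open("source_train_set.txt", "w") as f:
    f.write("\n".join(source_tiles))
with open("target_test_set.txt", "w") as f:
    f.write("\n".join(target_tiles))
print(len(source_tiles), "source tiles,", len(target_tiles), "held-out tiles")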

🤔Common Issues

Based on questions raised in the issue section, here is a quick list of solutions to some common issues.

  • Complete data of DFC25 for research: The labels for the validation and test sets of DFC25 have been uploaded to Zenodo and HuggingFace.
  • Python package conflicts: The baseline code is not tied to specific package versions; participants do not need to match the versions we provide.

📜Reference

If this dataset or code contributes to your research, please kindly consider citing our paper and giving this repo a ⭐️ :)

@article{chen2025bright,
      title={BRIGHT: A globally distributed multimodal building damage assessment dataset with very-high-resolution for all-weather disaster response}, 
      author={Hongruixuan Chen and Jian Song and Olivier Dietrich and Clifford Broni-Bediako and Weihao Xuan and Junjue Wang and Xinlei Shao and Yimin Wei and Junshi Xia and Cuiling Lan and Konrad Schindler and Naoto Yokoya},
      journal={arXiv preprint arXiv:2501.06019},
      year={2025},
      url={https://arxiv.org/abs/2501.06019}, 
}

🤝Acknowledgments

The authors would also like to give special thanks to Sarah Preston of Capella Space, Capella Space's Open Data Gallery, the Maxar Open Data Program, and Umbra Space's Open Data Program for providing the valuable data.

🙋Q & A

For any questions, please feel free to raise them in the issue section or contact us.