Release v0.5.0 #1127


Merged: 1 commit, Apr 17, 2025

Conversation

qubvel (Collaborator) commented Apr 16, 2025

New Models

DPT

The DPT model adapts the Vision Transformer (ViT) architecture for dense prediction tasks like semantic segmentation. It uses a ViT as a powerful backbone, processing image information with a global receptive field at each stage. The key innovation lies in its decoder, which reassembles token representations from various transformer stages into image-like feature maps at different resolutions. These are progressively combined using convolutional PSP and FPN blocks to produce full-resolution, high-detail predictions.

The DPT model in smp can be used with a wide variety of transformer-based encoders:

```python
import segmentation_models_pytorch as smp

# initialize with your own pretrained encoder
model = smp.DPT("tu-mobilevitv2_175.cvnets_in1k", classes=2)

# load a checkpoint fully pretrained on ADE20K
model = smp.from_pretrained("smp-hub/dpt-large-ade20k")

# load the same checkpoint for finetuning with a different number of classes
model = smp.from_pretrained("smp-hub/dpt-large-ade20k", classes=1, strict=False)
```

The full table of DPT's supported timm encoders can be found here.

Model export

A lot of work went into adding support for torch.jit.script, torch.compile (without graph breaks, i.e. fullgraph=True), and torch.export across all encoders and models.

This provides several advantages:

  • torch.jit.script: Enables serialization of models into a static graph format, enabling deployment in environments without a Python interpreter and allowing for graph-based optimizations.
  • torch.compile (with fullgraph=True): Leverages Just-In-Time (JIT) compilation (e.g., via Triton or Inductor backends) to generate optimized kernels, reducing Python overhead and enabling significant performance improvements through techniques like operator fusion, especially on GPU hardware. fullgraph=True minimizes graph breaks, maximizing the scope of these optimizations.
  • torch.export: Produces a standardized Ahead-Of-Time (AOT) graph representation, simplifying the process of exporting models to various inference backends and edge devices (e.g., through ExecuTorch) while preserving model dynamism where possible.

PRs:

Core

All encoders from third-party libraries such as efficientnet-pytorch and pretrainedmodels.pytorch are now vendored by SMP. This means we have copied and refactored the underlying code and moved all checkpoints to the smp-hub. As a result, you will have fewer additional dependencies when installing smp and get much faster weight downloads.

🚨🚨🚨 Breaking changes

  1. The UperNet model was significantly changed to match the original implementation and to bring pretrained checkpoints into SMP. Unfortunately, UperNet model weights trained with v0.4.0 will not be compatible with SMP v0.5.0.

  2. While the high-level API for modeling should be backward compatible with v0.4.0, internal modules (such as encoders, decoders, blocks) might have changed initialization and forward interfaces.

  3. timm- prefixed encoders are deprecated; tu- variants are now the recommended way to use encoders from the timm library. Most timm- encoders are internally switched to their tu- equivalents with state_dict re-mapping (backward compatible), but this fallback will be dropped in upcoming versions.

Other changes

New Contributors

Full Changelog: v0.4.0...v0.5.0

codecov bot commented Apr 16, 2025

Codecov Report

All modified and coverable lines are covered by tests ✅

| Files with missing lines | Coverage Δ |
|---|---|
| segmentation_models_pytorch/__version__.py | 100.00% <100.00%> (ø) |

adamjstewart (Collaborator)

> you will no longer need to install extra dependencies when installing smp

Technically there are still dependencies, just fewer now. Maybe clarify that there are fewer dependencies, or that all dependencies are actively maintained and publish wheels.

qubvel (Collaborator, Author) commented Apr 16, 2025

Thanks! Fixed

adamjstewart (Collaborator) left a review comment


Updated one sentence in Model Export, looks good to me now.

@qubvel merged commit 420ce84 into main on Apr 17, 2025
17 checks passed
@adamjstewart deleted the release/0.5.0 branch on Apr 17, 2025
Merging this pull request may close: Quickstart guide typo