Add NStepReplayBuffer #2144

Draft: wants to merge 8 commits into master
Conversation

araffin (Member) commented Jun 6, 2025

Description

Reviving #81

closes #47

Idea based on the https://github.com/younggyoseo/FastTD3 implementation, with fixes to avoid younggyoseo/FastTD3#6

Mostly tested on IsaacSim so far.

To try it out (I'm considering adding an n_steps parameter to the off-policy algorithms to make this easier):

from stable_baselines3 import DQN, SAC
from stable_baselines3.common.buffers import NStepReplayBuffer, ReplayBuffer
from stable_baselines3.common.env_util import make_vec_env

model_class = SAC

env_id = "CartPole-v1" if model_class == DQN else "Pendulum-v1"
env = make_vec_env(env_id, n_envs=2)

# NOTE: need to set the discount factor manually for bootstrapping for now
n_steps = 2
gamma = 0.99
discount = gamma**n_steps

model = model_class(
    "MlpPolicy",
    env,
    replay_buffer_class=NStepReplayBuffer,
    replay_buffer_kwargs=dict(
        n_steps=n_steps,
        gamma=gamma,
    ),
    gamma=discount,
)

model.learn(total_timesteps=150)
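As the NOTE in the snippet says, the discount passed to the algorithm must be gamma**n_steps, because the bootstrap term of an n-step target is discounted n_steps times. A minimal numeric sketch of a 2-step target (reward and Q values here are hypothetical, purely for illustration):

```python
# Hedged sketch: why the algorithm's gamma is set to gamma**n_steps.
# The n-step target is
#   R = r_0 + gamma * r_1 + ... + gamma**(n-1) * r_{n-1} + gamma**n * Q(s_n, a_n)
gamma, n_steps = 0.99, 2
rewards = [1.0, 0.5]   # hypothetical rewards over the 2-step window
q_next = 10.0          # hypothetical bootstrap value Q(s_n, a_n)

# Discounted reward sum, as an n-step buffer would accumulate it
n_step_return = sum(gamma**i * r for i, r in enumerate(rewards))
# Bootstrap applied by the algorithm, using discount = gamma**n_steps
target = n_step_return + gamma**n_steps * q_next
```

With an ordinary 1-step buffer the algorithm would multiply the bootstrap by gamma only once, so reusing gamma unchanged would under-discount the n-step target.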

Motivation and Context

  • I have raised an issue to propose this change (required for new features and bug fixes)

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Documentation (update in the documentation)

Checklist

  • I've read the CONTRIBUTION guide (required)
  • I have updated the changelog accordingly (required).
  • My change requires a change to the documentation.
  • I have updated the tests accordingly (required for a bug fix or a new feature).
  • I have updated the documentation accordingly.
  • I have opened an associated PR on the SB3-Contrib repository (if necessary)
  • I have opened an associated PR on the RL-Zoo3 repository (if necessary)
  • I have reformatted the code using make format (required)
  • I have checked the codestyle using make check-codestyle and make lint (required)
  • I have ensured make pytest and make type both pass. (required)
  • I have checked that the documentation builds using make doc (required)

Note: You can run most of the checks using make commit-checks.

Note: we are using a maximum length of 127 characters per line

@araffin mentioned this pull request Jun 6, 2025
@araffin araffin requested a review from Copilot June 6, 2025 15:24

Copilot AI left a comment


Pull Request Overview

Adds a new NStepReplayBuffer to compute n-step returns and accompanying tests to validate its behavior under various termination conditions.

  • Implements NStepReplayBuffer subclass with overridden _get_samples to handle multi-step returns, terminations, and truncations.
  • Provides unit tests in tests/test_n_step_replay.py covering normal sampling, early termination, and truncation.
  • Integrates NStepReplayBuffer into DQN/SAC via test_run demonstration.

Reviewed Changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 3 comments.

File | Description
tests/test_n_step_replay.py | Added comprehensive tests for n-step buffer behavior
stable_baselines3/common/buffers.py | Introduced NStepReplayBuffer with custom sampling logic
Comments suppressed due to low confidence (2)

stable_baselines3/common/buffers.py:843

  • Add a class-level docstring for NStepReplayBuffer to describe its purpose, parameters (n_steps, gamma), and behavior (handling terminations and truncations).
class NStepReplayBuffer(ReplayBuffer):

tests/test_n_step_replay.py:10

  • The tests cover single-environment behavior but don’t validate multi-environment buffers. Add a test with n_envs > 1 to ensure correct sampling and offset handling across parallel envs.
@pytest.mark.parametrize("model_class", [SAC, DQN])

Comment on lines +859 to +867
safe_timeouts = self.timeouts.copy()
safe_timeouts[self.pos - 1, :] = np.logical_not(self.dones[self.pos - 1, :])

indices = (batch_inds[:, None] + steps) % self.buffer_size # shape: [batch, n_steps]

# Retrieve sequences of transitions
rewards_seq = self.rewards[indices, env_indices[:, None]] # [batch, n_steps]
dones_seq = self.dones[indices, env_indices[:, None]] # [batch, n_steps]
truncs_seq = safe_timeouts[indices, env_indices[:, None]] # [batch, n_steps]
Copilot AI Jun 6, 2025

Copying the entire timeouts array on every sample can be expensive for large buffers. Consider only handling the specific index or using a more targeted approach to avoid full-array duplications.

Suggested change

- safe_timeouts = self.timeouts.copy()
- safe_timeouts[self.pos - 1, :] = np.logical_not(self.dones[self.pos - 1, :])
- indices = (batch_inds[:, None] + steps) % self.buffer_size  # shape: [batch, n_steps]
- # Retrieve sequences of transitions
- rewards_seq = self.rewards[indices, env_indices[:, None]]  # [batch, n_steps]
- dones_seq = self.dones[indices, env_indices[:, None]]  # [batch, n_steps]
- truncs_seq = safe_timeouts[indices, env_indices[:, None]]  # [batch, n_steps]
+ # Handle specific index without copying the entire array
+ modified_timeouts = np.logical_not(self.dones[self.pos - 1, :])
+ indices = (batch_inds[:, None] + steps) % self.buffer_size  # shape: [batch, n_steps]
+ # Retrieve sequences of transitions
+ rewards_seq = self.rewards[indices, env_indices[:, None]]  # [batch, n_steps]
+ dones_seq = self.dones[indices, env_indices[:, None]]  # [batch, n_steps]
+ truncs_seq = self.timeouts[indices, env_indices[:, None]]  # [batch, n_steps]
+ # Apply the modified value for the specific index
+ truncs_seq[indices == (self.pos - 1)] = modified_timeouts[env_indices[indices == (self.pos - 1)]]
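For intuition about what the sliced sequences feed into, here is a hypothetical, simplified sketch of a masked n-step reward reduction. The function name and the reduction itself are illustrative, not the PR's actual code, and it ignores the special truncation handling the PR applies via truncs_seq:

```python
import numpy as np

def n_step_rewards(rewards_seq, dones_seq, gamma):
    """Illustrative sketch: discounted reward sum over each n-step window,
    including the reward at the first done step and masking everything after it.
    (The PR's real code additionally treats truncations differently.)

    rewards_seq, dones_seq: float arrays of shape [batch, n_steps].
    """
    batch_size, n_steps = rewards_seq.shape
    # Number of dones strictly before step t in each window
    dones_before = np.cumsum(dones_seq, axis=1) - dones_seq
    # Mask is 1 up to and including the first done, 0 afterwards
    mask = (dones_before == 0).astype(rewards_seq.dtype)
    discounts = gamma ** np.arange(n_steps)  # [1, gamma, gamma**2, ...]
    return (rewards_seq * mask * discounts).sum(axis=1)
```

For example, with rewards [1, 1, 1], a done at the second step, and gamma = 0.9, the masked sum is 1 + 0.9 * 1 = 1.9: the third reward lies past the episode boundary and is excluded.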


Development

Successfully merging this pull request may close these issues.

[feature-request] N-step returns for TD methods