bitswap/client: configurable broadcast reduction #10825


Merged: 20 commits into master on Jun 17, 2025

Conversation

@gammazero (Contributor) commented on Jun 5, 2025

Add new config items to Internal.Bitswap to allow configuring bitswap's broadcast reduction behavior. Broadcast reduction is enabled by default and uses settings that should be suitable for most kubo installations.
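For illustration, a hedged sketch of how these knobs could appear in kubo's JSON config. The `Internal.Bitswap.BroadcastControl` section name comes up later in this thread; the field names and values below are placeholders rather than the final schema (`Enable` would toggle the behavior, `MaxPeers` would cap broadcast targets, and `LocalPeers`/`PeeredPeers` would keep broadcasting to LAN peers and `Peering.Peers` respectively):

```json
{
  "Internal": {
    "Bitswap": {
      "BroadcastControl": {
        "Enable": true,
        "MaxPeers": -1,
        "LocalPeers": true,
        "PeeredPeers": true
      }
    }
  }
}
```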

Results

For all of the following results, there was no significant drop in the number of want-have responses or unique blocks received on hosts with broadcast reduction enabled, compared with hosts with broadcast reduction disabled.

During stable operation

Broadcast rates:

  • Broadcast rate, reduction disabled: 150000 / minute
  • Broadcast rate, reduction enabled: 30000 / minute
    (5x reduction) -- 80% broadcast reduction

Transmit bandwidth:

  • Transmit traffic, reduction disabled: roughly 65 Mb/s on test host
  • Transmit traffic, reduction enabled: roughly 33 Mb/s on test host
    (50% transmit traffic reduction)
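(Note on the arithmetic used throughout: an Nx rate reduction corresponds to a percentage reduction of (1 - 1/N) * 100. For example, 150000 to 30000 per minute is 5x, i.e. 80%, and 698000 to 62000 below is about 11x, i.e. roughly 91%.)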

During increasing traffic

Broadcast rates:

  • Broadcast rate, reduction disabled: 698000 / minute
  • Broadcast rate, reduction enabled: 62000 / minute
    (11x reduction) -- 91% broadcast reduction

Transmit bandwidth:

  • Transmit traffic, reduction disabled: roughly 360 Mb/s on test host
  • Transmit traffic, reduction enabled: roughly 72 Mb/s on test host
    (80% transmit traffic reduction)

During spike in peers

Broadcast rates:

  • Broadcast rate, reduction disabled: 1750000 / minute
  • Broadcast rate, reduction enabled: 30000 / minute
    (58x reduction) -- 98% broadcast reduction

Transmit bandwidth:

  • Transmit traffic, reduction disabled: roughly 650 Mb/s on test host
  • Transmit traffic, reduction enabled: roughly 33 Mb/s on test host
    (95% transmit traffic reduction)

Additional data points:

  • Kubo appears to have as good or better performance in terms of have-block responses and unique blocks received when broadcast reduction is enabled.
  • When broadcast reduction is disabled, kubo may use excessive memory during a spike in the number of peers. Kubo does not exhibit this issue when broadcast reduction is enabled.
  • The default tuning for broadcast reduction appears to offer the best results on our infrastructure (the most reduction in broadcasts with no significant degradation in blocks found).

@gammazero requested a review from a team as a code owner on June 5, 2025 00:43
@gammazero changed the title from "Configure bitswap braodcast reduction" to "bitswap: configurable bitswap broadcast reduction" on Jun 5, 2025
@gammazero changed the title from "bitswap: configurable bitswap broadcast reduction" to "bitswap/client: configurable broadcast reduction" on Jun 5, 2025
@lidel (Member) left a comment:

Looks sensible, small asks

  • produce some synthetic benchmark (e.g. in Staging) that shows N% bandwidth reduction over time T without meaningful success decrease
  • make all these new knobs optional in JSON and have implicit defaults as consts (details inline; a sketch of this pattern follows below)
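A minimal sketch of the optional-with-default pattern this ask refers to, using kubo's existing `config.OptionalInteger` type; the knob name and default value here are hypothetical:

```go
package main

import (
	"fmt"

	"github.com/ipfs/kubo/config"
)

// Hypothetical implicit default, defined as a const so the JSON
// field can be omitted entirely from the config file.
const defaultBroadcastMaxPeers = -1

func main() {
	// The zero value of OptionalInteger means "not set in JSON",
	// so WithDefault resolves it to the implicit default.
	var maxPeers config.OptionalInteger
	fmt.Println(maxPeers.WithDefault(defaultBroadcastMaxPeers)) // prints -1
}
```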

@hsanjuan (Contributor) left a comment:

My main question is whether this can break discovery in private networks. Technically, they still have the DHT, so we should be fine? It might be problematic if a peer has added content but not provided it on the DHT; other peers in the private network might not discover it via bitswap in order to copy it.

Thinking of IPFS Clusters, for example: even when Kubo is on the public network, cluster peers ensure their kubos are connected to the other peers' kubos via swarm/connect. These connections are protected in Kubo with Weight = 100. A new cluster peer will be able to get any blocks it needs to replicate from a different cluster peer thanks to bitswap, rather than hitting the DHT. This might stop working now unless blocks are announced to the DHT, as even if we are on a protected connection, there is no guarantee that we will broadcast wants to a peer that has not given us blocks before.

Might we consider broadcasting to protected connections?
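For context on what "protected" means above, a minimal go-libp2p sketch, assuming a constructed host `h` and the cluster peer's ID `p` (the tag string is illustrative): `TagPeer` with weight 100 biases the connection manager against trimming the peer, and `Protect` exempts the connection from trimming entirely.

```go
package peering

import (
	"github.com/libp2p/go-libp2p/core/host"
	"github.com/libp2p/go-libp2p/core/peer"
)

// protectClusterPeer marks a cluster peer so the connection manager
// will not trim it, mirroring what ipfs-cluster does for its kubo peers.
func protectClusterPeer(h host.Host, p peer.ID) {
	h.ConnManager().TagPeer(p, "cluster-peer", 100) // weight biases trimming decisions
	h.ConnManager().Protect(p, "cluster-peer")      // exempt from trimming under this tag
}
```

Note this only governs connection lifetime; as the comment says, it does not by itself guarantee that bitswap will broadcast wants to that peer.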

docs/config.md (outdated), comment on lines 1290 to 1292:
#### `Internal.Bitswap.BroadcastReductionEnabled`

Enables or disables broadcast reduction logic. If disabled, the other Broadcast configuration items are ignored. Setting this to false restores the previous broadcast behavior.
Contributor:

I would like this to be expanded with more info:

  • What is the broadcast reduction doing?
  • What scenarios benefit from having this on or off? In particular, discovery might stop working for small private networks that rely on bitswap broadcasts, right?

@gammazero (author):

Added in section `Internal.Bitswap.BroadcastControl`.

@gammazero (author):

> might stop working for small private networks

If the networks are private, then default settings will broadcast to all the peers with private addresses. So no breakage is expected in this case.

It may be beneficial to disable broadcast control in cases where there is no routing and discovery of blocks relies on asking all peers. Is that really something to document?

@gammazero (author):

Added: "Enabling broadcast control should generally reduce the number of broadcasts significantly without significantly degrading the ability to discover which peers have wanted blocks. However, if block discovery on your network relies sufficiently on broadcasts to discover peers that have wanted blocks, then adjusting the broadcast control configuration or disabling it altogether, may be helpful."

Contributor:

> If the networks are private, then default settings will broadcast to all the peers with private addresses. So no breakage is expected in this case.

Sorry, by "private" I didn't mean peers with private addresses; I meant peers with public addresses that form a "pnet" private network.

@gammazero force-pushed the bitswap-reduce-bcast branch from aaf54bd to c0cb857 on June 14, 2025 02:18
@gammazero requested review from lidel and hsanjuan on June 14, 2025 06:34
@gammazero (author):

@hsanjuan This does broadcast to Peered connections, here, which I think is the same as protected connections. Is that sufficient?
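"Peered connections" refers to peers listed in kubo's `Peering.Peers` config, which kubo keeps connected and protects in the connection manager. A minimal example (peer ID and address are placeholders):

```json
{
  "Peering": {
    "Peers": [
      {
        "ID": "12D3KooWExamplePeerID",
        "Addrs": ["/ip4/203.0.113.7/tcp/4001"]
      }
    ]
  }
}
```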

@lidel (Member) left a comment:

lgtm, pushed a small fix & surfaced gains in the changelog.

  • switch to a boxo release before merging

@gammazero force-pushed the bitswap-reduce-bcast branch from db2c629 to e900d37 on June 17, 2025 11:05
@gammazero merged commit 0cf1b22 into master on Jun 17, 2025 (16 checks passed)
@gammazero deleted the bitswap-reduce-bcast branch on June 17, 2025 11:35