
Default pipelining to maxConcurrentStreams with allowH2 #4143

Open
slagiewka opened this issue Apr 3, 2025 · 4 comments · May be fixed by #4158
Labels
enhancement New feature or request H2 Pull requests or issues related to HTTP/2

Comments

@slagiewka

slagiewka commented Apr 3, 2025

Bug Description

The recent H2CClient work (#4118) does this well: if pipelining is not set, it defaults to maxConcurrentStreams, which makes sense. This, however, is not the case when a regular Client (or its descendant Agent) is in use.

An HTTP/2 connection is still limited to either the default (1) or requires manual setup.
Manual setup is inconvenient and confusing: pipelining is an HTTP/1.1 concept.

The docs say nothing about adjusting pipelining when using HTTP/2 multiplexing, and actually link to the HTTP/1.1 RFC under pipelining.
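For reference, a minimal sketch of the manual workaround described above (the option names come from undici's Agent; the values are illustrative, and the object is shown plain so the sketch stays self-contained):

```javascript
// Sketch of the Agent options that currently must be set by hand.
// In real code this object would be passed to undici's `new Agent(...)`.
const agentOptions = {
  allowH2: true,    // negotiate HTTP/2 via ALPN
  connections: 10,  // sockets the Agent may open
  pipelining: 100,  // without this, each H2 session carries 1 request at a time
};

console.log(agentOptions.pipelining); // 100
```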

Reproducible By

server.js
```js
import { fetch, Agent, setGlobalDispatcher, interceptors } from "undici";
import { createServer } from "node:http";
import { once } from "node:events";

setGlobalDispatcher(
  new Agent({
    allowH2: true,
    connections: 10,
    // pipelining: 100,
  }).compose([interceptors.dns(), interceptors.retry()]),
);

const server = createServer((req, res) => {
  fetch("https://my-https2-server.com", {
    signal: AbortSignal.timeout(1_000),
  })
    .then((result) => Array.fromAsync(result.body))
    .then(() => {
      res.end("");
    })
    .catch(() => {
      res.writeHead(500);
      res.end("internal server error");
    });
}).listen(3000);

await once(server, "listening");
console.log("server started on port 3000");
```

Then run the server:

`node server.js`

And load test it. Run the same with and without `pipelining` set.

`hey -n 15000 -c 100 -q 300 -t "1" http://localhost:3000`

Expected Behavior

Same performance with or without pipelining set.

Environment

Node 23, macOS 15.3, but it's not platform related, it seems.

Additional context

Should #2750 make HTTP/2 on by default, users will not benefit from HTTP/2 at all; it might behave pretty much the same as HTTP/1.1 connections with keep-alive, plus some TLS overhead...

@slagiewka slagiewka added the bug Something isn't working label Apr 3, 2025
@metcoder95
Member

I'm not sure this is a bug per se; rather, it's an improvement in the way we set up H2 (as undici was originally aiming to be HTTP/1.1-compatible only).

Adding a new option multiplexing that defaults to maxConcurrentStreams should suffice (along with documentation for it).

Would you like to send a PR for that?

@metcoder95 metcoder95 added enhancement New feature or request H2 Pull requests or issues related to HTTP/2 and removed bug Something isn't working labels Apr 3, 2025
@slagiewka
Author

> Adding a new option multiplexing that defaults to maxConcurrentStreams should suffice (along with documentation for it).
>
> Would you like to send a PR for that?

I started dabbling with that, and I think it would make things confusing.
With an example setup that tries to work with HTTP/2, we'd have the following settings:

  • connections
  • maxConcurrentStreams
  • multiplexing

Assume we'd want to sustain 1_000 concurrent requests. I would then open:

  • 10 connections with
  • 100 maxConcurrentStreams and
  • 100 multiplexing (or previously - pipelining)
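The arithmetic behind those numbers can be sketched as a quick sanity check (total concurrency is simply connections times streams per connection):

```javascript
// Sketch: how the three knobs above compose into total concurrency.
const connections = 10;           // sockets (Agent level)
const maxConcurrentStreams = 100; // H2 streams per session
// Today, `pipelining` (or the proposed `multiplexing`) must also be
// raised to maxConcurrentStreams for those streams to actually be used.
const totalConcurrentRequests = connections * maxConcurrentStreams;

console.log(totalConcurrentRequests); // 1000
```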

The question that comes to mind: why would a user need to configure multiplexing separately? Is it necessary? Why isn't maxConcurrentStreams the actual equivalent of multiplexing? What would be the point of setting these to different values? If a user wanted lower multiplexing/concurrency on a connection/session, why wouldn't they also lower maxConcurrentStreams?

Even if someone were afraid of the peer changing SETTINGS_MAX_CONCURRENT_STREAMS:

  • to something lower: they could still lower maxConcurrentStreams on their end, to the same effect as changing multiplexing
  • to something higher: undici will still send at most maxConcurrentStreams per session and ignore the higher allowance (unless undici starts respecting the value set by the peer).

The only difference right now is that pipelining actually controls the level of multiplexing. If the extra option were omitted, users could use one toggle to control the behaviour, or leave it at the defaults and enjoy multiplexing.
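The proposal above can be sketched as a tiny resolution rule. This is a hypothetical helper, not undici's actual option handling; the names and the fallback order are assumptions for illustration:

```javascript
// Hypothetical sketch of the proposed defaulting rule: on H2 connections
// the effective request concurrency follows maxConcurrentStreams unless
// explicitly overridden; on HTTP/1.1 the historical default of 1 is kept.
function effectiveConcurrency({ allowH2 = false, pipelining, maxConcurrentStreams = 100 } = {}) {
  if (allowH2) {
    // H2: one toggle, maxConcurrentStreams, controls multiplexing.
    return pipelining ?? maxConcurrentStreams;
  }
  // HTTP/1.1: no pipelining by default.
  return pipelining ?? 1;
}

console.log(effectiveConcurrency({ allowH2: true }));                           // 100
console.log(effectiveConcurrency({ allowH2: true, maxConcurrentStreams: 50 })); // 50
console.log(effectiveConcurrency({}));                                          // 1
```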

@metcoder95
Member

> connections

connections only applies to high-level APIs like Agents/Pools; for low-level ones like Clients it is of no help, as each Client/Dispatcher maps to a single socket.

Indeed, that's a good point; pipelining, per the nature of undici, was handling the pipelining concept from HTTP/1.1, but I never fully connected the two concepts.

Agree with you, maxConcurrentStreams should be used as the pipelining equivalent and manage the number of concurrent requests to match the current streams.

Sadly this will impose a breaking change; we can attempt to add warnings for it and state that we will mostly honor maxConcurrentStreams.

Feel free to open a PR so we can iterate over it.

slagiewka added a commit to slagiewka/undici that referenced this issue Apr 13, 2025
For H2 sessions, pipelining is misleading; it is an HTTP/1.1 concept.

This makes the `maxConcurrentStreams` parameter the equivalent of what
H2 calls multiplexing. It makes sure that clients do not have to change
`pipelining` in order to achieve multiplexing over an H2 session.

Closes nodejs#4143.
@slagiewka slagiewka linked a pull request Apr 13, 2025 that will close this issue
@slagiewka
Author

Started #4158.
