Are we testing real-world frameworks? #8116
You say that we are not testing any real-world usage; I cannot disagree. The scenarios are not very useful, we are only testing the routing part of a piece of software. There is an open discussion for that (I mean to have a new scenario) #8093 Also we need to change our load tool in order to test at a constant rate, there is also an open discussion. The idea is to use a tool like Vegeta to make the HTTP calls, instead of wrk, to make sure we are more realistic in the figures. Also, what you point out is that we need to check HTTP headers ... I agree that it could be relevant to test HTTP communication against some security rules, like OWASP. Your script could be cool and could be used in a one-shot manner to see if there are some frameworks that are faulty, and open an issue on their repository. |
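For illustration, a constant-rate run with Vegeta could look something like the sketch below; the target URL, rate, and duration are placeholders, not the project's actual configuration.

```sh
# Hypothetical constant-rate load test: 1000 requests/second for 30 seconds
# against a local target, followed by a latency/percentile summary.
echo "GET http://localhost:3000/user/0" | \
  vegeta attack -rate=1000 -duration=30s | \
  vegeta report
```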
I mean that your point, @trikko, is more related to the HTTP part, and that is more the responsibility of the HTTP stack, not the whole framework. For example, node or deno or bun for JavaScript should pass this kind of test, and this will affect express, bunicorn, fastify ... but the root cause is not in their scope |
After some verification, it appears that
|
It was related to the HTTP stack just because it was easier to test! There would be many similar tests, for example "Expect: 100-continue". Surely, if a server does not respond to it correctly, it has probably never been widely used in real life, because even a simple file upload with curl triggers this. And if it's not handled, the server likely remains waiting indefinitely. This should be a strict requirement. 🙂 |
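For reference, this behaviour is easy to reproduce by hand with curl; the endpoint and file below are placeholders. curl typically adds the Expect header on its own for uploads, and a compliant HTTP/1.1 server should answer with an interim 100 before the body is transferred.

```sh
# Upload a file with -T (PUT); curl normally sends "Expect: 100-continue" first.
# Look for "HTTP/1.1 100 Continue" in the verbose output before the body goes out.
curl -v -T ./some-file.bin http://localhost:3000/upload
```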
400 means bad (HTTP) request. Is that an HTTP request at all? Even closing the connection is fine in this case, I think. There's nothing that indicates it's an HTTP request, and for sure not HTTP/1.1! "sdsP" could be a completely different protocol. For example: https://www.twilio.com/docs/glossary/sip-invites#example-sip-invite In this case should I reply with HTTP? |
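A quick way to see what a given server actually does with such a payload (a sketch; the port is a placeholder and nc options differ slightly between implementations):

```sh
# Send bytes that are not an HTTP request and print whatever comes back.
# Plausible outcomes: a "400 Bad Request" or an immediate connection close; "200 OK" is neither.
printf 'sdsP\r\n\r\n' | nc -w 3 localhost 3000
```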
Not sure I'm the right person to answer this 😛 The idea, imho, is more to stick to the standard, but again, that's not in the scope of this project. |
I think it is related. Ignoring lines and just reading the first G of "GET" and the URL before the last space is a huge advantage in a speed benchmark, but not fair. At least return 400, not 200 :)

Same for HTTP/1.0 vs HTTP/1.1. If someone asks you for a page with HTTP/1.0, you have to reply with the 1.0 protocol, not with something else just to be faster in the benchmark. If that's the point, one could write a fast server that just searches for /user or /user/ in the first line and replies with the answer you expect.

Another example: the Date header on an HTTP/1.1 response is mandatory. And it's a bit costly to build that extra header with the current date, in that format, for each request. Not fair to skip it to be faster. And so on.
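Both of these points are easy to spot-check from the command line; the URL below is a placeholder and the snippet is only a sketch of the idea.

```sh
# Ask with HTTP/1.0 and look at the protocol version in the status line of the reply.
curl --http1.0 -sI http://localhost:3000/ | head -n 1

# An HTTP/1.1 response is expected to carry a Date header; an empty result here
# suggests the server is skipping it.
curl -sI http://localhost:3000/ | grep -i '^date:'
```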
|
You have some points 😛 I plan to have OWASP tests here, to test the implementations. I can also add some tests to make sure that a server does not return HTTP/1.1 if we send HTTP/1.0, but this is more for investigation, not something we can merge |
I think I could write a small tool to make some tests, it would be nice to see which % is passed by each server! IMHO some tests must be mandatory (parsing of the HTTP protocol, correct reply to "connection: xxx", etc.), and every server must at least pass x% of the non-mandatory tests. It would be nice to have a column with the tests passed (%) |
I think you should start a PR / issue to discuss with the communities, but it's not something we should merge / add to the UI. IMHO, this is useful to make the network handling part more reliable with regard to the HTTP standard. You could even write a list of tests (a plain list of bullet points), and I can convert them into Ruby for the specs here |
|
@waghanza it seems it's not possible to read the httpVersion of a uWebSockets.js request. |
Maybe an issue with nodejs then, for chubbyts @dominikzogg |
I released a small tool with some tests. Just run |
For now, it's just a few tests. But apparently, they're enough to cause several servers to crash catastrophically (and many others fail!). |
Some servers are made only to win benchmarks :) not to be used in real solutions :P |
And some frameworks used here, and widely used in production, are not fully respecting the HTTP standards |
Yes, but at least I hope they respect the bare minimum and don't crash just because of a request they received. |
Fun fact: I see a reply with 200 OK to this request (4 bytes; I wonder for which URI...): "\r\n\r\n" |
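For the curious, reproducing that case by hand is a one-liner (the port is a placeholder; nc behaviour varies slightly between implementations):

```sh
# Four bytes, no request line, no headers. A server that replies "200 OK" to this
# never parsed a request at all.
printf '\r\n\r\n' | nc -w 3 localhost 3000
```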
Yeah, I'll open a branch to check that and a PR to ping all the buddies concerned by that |
Another interesting thing: most servers don't seem to check the size of POST bodies. So by default I can POST a 100 TB file to a server without it even flinching. There should be a default limit (adjustable, of course). Leaving the limit unbounded is very dangerous. |
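A harmless way to probe for such a limit, without actually uploading anything, is to advertise an enormous Content-Length and see whether the server rejects it up front. This is only a sketch; the path, host, and port are placeholders.

```sh
# Declare a ~100 TB body (109951162777600 bytes) but send none of it.
# A server with a sane default limit should answer 413 or close the connection
# rather than sit there waiting for the body.
printf 'POST /user HTTP/1.1\r\nHost: localhost\r\nContent-Length: 109951162777600\r\n\r\n' \
  | nc -w 3 localhost 3000
```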
So |
If For example:
If you want only |
I believe that the frameworks included here should at least satisfy a minimum level of adherence to the HTTP standard. I'm not saying they need to be perfect, but they should at least implement the basics of the standard.
Otherwise, this isn't a benchmark of HTTP frameworks, but rather a benchmark of pseudo-HTTP programs designed to win benchmarks, because obviously the fewer checks you perform, the faster you are.
And in that case, they wouldn't even be useful in real-life scenarios, would they?
Give it a try: run this script against any random web framework and check the responses it gives. It's just a small subset of what should be tested.