🐛 Bug Report: Sporadic JSON errors #151
It came back to life when restarted. Then it died again. Version: https://github.com/redlib-org/redlib/tree/try_fix_oauth_refresh
Not sure if related, but apparently not encoding "+" causes rate-limit errors at the moment: https://sh.reddit.com/r/redditdev/comments/1diwa3i/anyone_getting_prawcoreexceptionsredirect/ In the thread it's fixed by encoding it as %2B.
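The workaround mentioned above amounts to percent-encoding `+` in the request path or query before sending it upstream, so Reddit doesn't treat it as a space. A minimal sketch, where `encode_plus` is a hypothetical helper and not an actual redlib function:

```rust
// Hypothetical helper: percent-encode '+' so it survives the round trip
// to Reddit instead of being decoded as a space server-side.
fn encode_plus(path: &str) -> String {
    path.replace('+', "%2B")
}

fn main() {
    let path = "/r/redditdev/search?q=praw+redirect";
    // '+' in the query is rewritten to its percent-encoded form.
    assert_eq!(encode_plus(path), "/r/redditdev/search?q=praw%2Bredirect");
    println!("{}", encode_plus(path));
}
```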
At least on my end, it appears the server I was using to host Redlib has been blacklisted by Reddit. If I run EDIT:
I am getting the exact same error when opening the homepage: `Failed to parse page JSON data: EOF while parsing a value at line 1 column 0`. Is there a fix yet? BTW, this happens on all the instances I've tried.
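"EOF while parsing a value at line 1 column 0" is what a JSON parser reports when handed a completely empty string, which suggests the response body is empty rather than malformed. A sketch of a guard that surfaces a clearer error before parsing (the function and error message are illustrative, not redlib code):

```rust
// Hypothetical pre-parse check: an empty body can never be valid JSON,
// so report that directly instead of a confusing parser error.
fn check_body(body: &str) -> Result<&str, String> {
    if body.is_empty() {
        // An empty 200 response is what this thread later traces back
        // to rate limiting on Reddit's side.
        return Err("empty response body; likely rate limited upstream".to_string());
    }
    Ok(body)
}

fn main() {
    assert!(check_body("").is_err());
    assert!(check_body(r#"{"kind":"Listing"}"#).is_ok());
    println!("ok");
}
```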
Some instances do work, like reddit.nerdvpn.de and redlib.ducks.party. Rate limiting?
I run a single-user instance and am also getting the same errors, so it seems to be something on Reddit's side perhaps (it's intermittent for me).
I really can't tell what this problem is. I can't reproduce it locally, despite experiencing it all day on public instances. Is anyone able to reproduce this consistently within a few minutes of starting a local server? There's a tiny chance it could have been caused by this: #154, or maybe even this: bacc9e3. But I couldn't tell ya. The best thing to do is wait for instances to update and see.
I would love to get to the bottom of this, though. It requires adding a few println statements to see precisely what the client is getting back.
After 1-2 minutes, even on 48873c0, it still happens for me, but seemingly only on my server and not my desktop. The server has two different containers running Redlib, each with a different public IPv6 address. All devices have the same public IPv4. Disabling IPv6 on my desktop still doesn't result in an error, though. I've got no problem taking one of the server containers offline to do some local testing and adding some print statements, if you want to let me know where they'd go.
Reddit seems to be able to detect Redlib. Not by IP: even with my Cloudflare proxy fork, it still complains about too many requests.
Hm. I have to wonder if it's some kind of bot protection, since it's more likely to happen to public instances (and thus would be under load). Thanks @ButteredCats, here's a diff; this should work on

```shell
mv logging.diff.txt logging.diff
git apply logging.diff
cargo run
```

Would greatly appreciate a zip with the
I think that means your IP is banned (including Cloudflare). I can't reproduce a 429 off the bat at all.
@sigaloid My IP has been banned for a long time, at least several months. But a few days ago, my instance was still working normally. And I don't think anybody would block Cloudflare IPs.
Hm. What image are you using? I made a few theoretically relevant changes in the last few hours.
I'll try the latest commit.
I couldn't recreate it with the container not getting proxied to; re-enabled it for just a second to see if that'd cause it, and was immediately hit with 429 errors. However, it only created an error_oauth_token.txt, and I was hit with a bunch of this:
Unfortunately, I have this awful thing called work tomorrow, so I'm not gonna be able to test any more tonight.
Ah, irony of ironies. Couldn't test the logging because I couldn't reproduce it. Updated patch; no rush on reproduction.
I procrastinated long enough to see that before I got off, lol. Hopefully this can tell you what you need to know: errors.zip
It does. Interestingly enough, the status code is (presumably) still 200, since it got all the way to JSON parsing, but the rate limit was reached (the info is in the headers), and Content-Length is zero as well.

I'm not sure what the solution path is. The goal was for OAuth to remove the worry about rate limits. I probably need to add a lot of observability into the rate-limiting info returned in the headers. For now, I think making sure the failures are evicted from the cache, instead of staying broken for 5 minutes, is the stopgap solution.

I think the reason you're dealing with rate limits could have something to do with IP reputation. Maybe some kind of temporary blocklist. Weird interplay with their own edge CDN and the way we present ourselves (a totally legit mobile client) vs. the TLS cipher handshake appearing differently, plus datacenter IPs, etc. There are so many weird things they can do to detect us; it's not difficult to bypass, but narrowing down exactly what changes every time is tedious. I guess this might mean they're actually trying to block us? 🤔
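The stopgap described above could be as simple as refusing to cache a response that is rate-limited or has an empty body, so the failure isn't served for 5 minutes. A sketch under stated assumptions: the header name follows Reddit's `x-ratelimit-*` convention, and `should_cache`, the plain-`HashMap` headers, and the threshold logic are all illustrative rather than redlib's actual implementation:

```rust
use std::collections::HashMap;

// Hypothetical gate in front of the response cache: only keep a
// response if it parsed cleanly AND the rate limit wasn't exhausted.
fn should_cache(status: u16, body: &str, headers: &HashMap<String, String>) -> bool {
    // Reddit reports remaining quota as a float, e.g. "95.0".
    let remaining: f64 = headers
        .get("x-ratelimit-remaining")
        .and_then(|v| v.parse().ok())
        .unwrap_or(0.0);
    // A 200 with an empty body is exactly the failure mode in this
    // thread: JSON parsing then dies with "EOF while parsing a value".
    status == 200 && !body.is_empty() && remaining > 0.0
}

fn main() {
    let mut headers = HashMap::new();
    headers.insert("x-ratelimit-remaining".to_string(), "0".to_string());
    // Rate-limited empty 200: evict / never cache.
    assert!(!should_cache(200, "", &headers));

    headers.insert("x-ratelimit-remaining".to_string(), "95.0".to_string());
    // Healthy response with quota left: safe to cache.
    assert!(should_cache(200, r#"{"kind":"Listing"}"#, &headers));
    println!("ok");
}
```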
On VPN I've noticed that:
Funny enough, every VPN server can bypass the message on sh.reddit.com at the moment, but this will likely be patched any second now.
My server is in my home network, where I just have a normal residential plan. When I have time, I'll disable IPv6 on both my Redlib containers (which are still receiving requests) and my desktop, and then see if my desktop starts running into the issue too.
Bad news: it looks like the issue lies in the fact that the rate limit started being enforced, for everyone. Going to push a quick fix that properly renders an error page; I would greatly appreciate instance operators upgrading to it. Also, I'm going to close this issue, since the error itself is going to be fixed. I'll open a new issue for the rate limiting.
Describe the bug
Bug when clicking on a post from r/popular
Steps to reproduce the bug
Steps to reproduce the behavior:
What's the expected behavior?
See the post
Additional context / screenshot