Good day to all! Over the last 30 minutes or so, I’ve been having issues loading beehaw.org. Sometimes the CSS is missing and the page layout is broken, and at other times there is a server-side NGINX error.
Just wanted to make the admins aware this is happening. There are some NGINX settings that can be adjusted to make more worker processes and connections available if it is hitting a worker limit.
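For reference, a minimal sketch of the directives I mean, assuming a stock nginx.conf; the values here are illustrative, not Beehaw’s actual configuration:

```nginx
# Hypothetical worker tuning; the right values depend on the host.
worker_processes auto;          # one worker process per CPU core

events {
    worker_connections 4096;    # connections each worker may hold open
}
```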
I’m not an official spokescritter, but I can assure you the Beehaw admins aren’t ignoring the issues. Ultimately, though, it’s going to come down to someone getting PRs into the code. I hope someone gets some performance-focused PRs in soon.
They are not informing end-users of the problem; they are leaving people like me to waste their time calling it out. Denial isn’t just a river in Egypt. Lemmy isn’t scaling, it’s falling flat on its face, and the federation protocol’s practice of sending one single like per HTTPS transaction is causing servers to overload their peers.
Where are the server logs? Why are the crashes not being shared with the developers? Do I really have to build up an instance with 5,000 users to get access to the data that Beehaw’s servers are logging each hour?
What are you asking for? I’m not smart enough to know what is going on here, but I can relay the request to someone who is, if you’re willing to dumb it down for me and ask nicely.
Right out of the Lemmy documentation for servers:
Log them to a file and dump them somewhere public, like a GitHub repository. What is going on in these logs when the 500 errors are happening?
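As a sketch of the first half of that, assuming the standard NGINX logging directives (the paths are illustrative, and anything published would need client IPs scrubbed first):

```nginx
# Capture what NGINX sees during the 500s so it can be reviewed or shared.
error_log  /var/log/nginx/error.log warn;          # main context

http {
    access_log /var/log/nginx/access.log combined; # per-request log
}
```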
Thanks for the suggestions. We are aware of how to review system logs and work to solve the issues. Right now there are a lot of moving parts, some of which we control and are responsible for, but a lot that we cannot.
As you know, an NGINX 500 error is a problem on the server side, not with the client (you). For our stack, that could be an issue anywhere along the path: Varnish, NGINX, firewall rules, security/HIDS, host networking, Docker networking, one or more of the services in the six containers, or the Docker daemon itself.
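To illustrate just one hop in that chain, here is a minimal sketch of the NGINX-to-backend leg, written as a conf.d include; the upstream name and port are assumptions about a typical Lemmy Docker setup, not our real config:

```nginx
# Hypothetical reverse-proxy hop. If NGINX cannot connect to the
# upstream container it emits a 502 itself; if the container accepts
# the connection but stalls, the read timeout surfaces as a 504.
upstream lemmy_backend {
    server lemmy:8536;              # assumed Docker service name/port
}

server {
    listen 80;
    location / {
        proxy_pass http://lemmy_backend;
        proxy_connect_timeout 5s;
        proxy_read_timeout 60s;
    }
}
```

The point being: the same user-facing error page can originate at several different layers.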
The issues are being addressed as we are able to troubleshoot, prove it, and verify a solution.
Do you consider 0.17.4 a “stable” release of Lemmy that is proven and production-ready, or more of an experimental project under active development?
I do not grasp why no GitHub issues are being opened to openly discuss these problems with the Lemmy platform, which I have seen on many instances.
Every version of Lemmy is experimental and not really production-ready. But it is in use and serving our needs, with a few pain points that are being worked on. I don’t have the time to run down every single bug or issue we experience with Lemmy in order to make a good, useful bug report; certainly not enough time to do that and fix them.
So why should I post a GitHub issue for the devs that is just another “hey, this isn’t working, fix please” complaint? They have hundreds of open issues already. When I find one I can prove and give sufficient details for, I make an issue. Expecting otherwise is a pretty entitled take.
I’m getting what I pay for and happy to contribute how I can. You’re saying that’s not enough and I need to do more. No thanks.
I see you run a Neurodiverse community here; maybe you are misinterpreting my Asperger syndrome. I posted here 8 days ago, and I’m revisiting it.
I posted here 8 days ago, I linked back to lemmy.ml having the same problem, and I am the one doing the labor here of screaming out loud how serious this problem is. It isn’t like the other issues being posted on GitHub, which are mostly end-user wishlists for new features.
Really sucks they aren’t listening to you; they don’t appear to be listening to us either. Best of luck screaming at them. Hopefully they’ll listen, as they haven’t fixed any of the bugs I opened either.
sent this along
Because not every issue we’re experiencing, even the 500s, is a result of Lemmy or their code. There is no reason to share that with them.
Then what are they, when NGINX is failing to talk to the Node.js app? I also consider this more than a code issue, as they are also giving recommendations for performance-tuning various components, etc.
I have a strong suspicion at this point that federation activity is causing the 500 and other errors, due to how it queues (swarms) requests to other peers. It isn’t just the lemmy-ui webapp, smartphone users, and other end-users generating the load.
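If that suspicion is right, one stopgap would be rate-limiting the federation inbox at the NGINX layer before requests reach lemmy_server. A hedged sketch, again as a conf.d include; the /inbox path, zone size, and rates are all assumptions, not a verified fix:

```nginx
# Hypothetical per-peer rate limit on inbound federation traffic.
limit_req_zone $binary_remote_addr zone=fedinbox:10m rate=10r/s;

server {
    location /inbox {
        limit_req zone=fedinbox burst=50 nodelay;  # smooth the swarm
        proxy_pass http://lemmy:8536;              # assumed upstream
    }
}
```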
If you aren’t aware, lemmy.ml has been down for the past 45 minutes, which is likely causing your lemmy_server code to back up with all kinds of problems.
I’ve actually been working on these issues 10+ hours a day for the past two weeks.