  • You have two options for setting up HTTPS certificates, and then a further choice about how to enable HTTPS on the server:

    1: You can generate a self-signed certificate (a rough sketch of this follows option 2 below). This will trigger a scary warning in all browsers and may prevent Chrome from connecting at all (I can't remember the current status of that). Its security is totally fine if you're the one using the service, since you can verify yourself that the key is correct.

    2: You can get a certificate for a domain that you own and then point that domain at the server. The easiest way to do this is probably through Let's Encrypt. This requires owning a domain, but those are around $12 a year and highly recommended for any service exposed to the world. (You can continue to use a dynamic DNS setup, but you need one that supports custom domains.)
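
    To make option 1 concrete, here's a rough sketch of generating a self-signed certificate with the Python cryptography library (plain openssl on the command line works just as well); the hostname myserver.local is just a placeholder:

    ```python
    # Minimal self-signed certificate sketch (assumes `pip install cryptography`)
    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "myserver.local")])  # placeholder name
    now = datetime.datetime.now(datetime.timezone.utc)

    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)                      # self-signed: issuer == subject
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=365))
        .add_extension(x509.SubjectAlternativeName([x509.DNSName("myserver.local")]), critical=False)
        .sign(key, hashes.SHA256())
    )

    with open("key.pem", "wb") as f:
        f.write(key.private_bytes(serialization.Encoding.PEM,
                                  serialization.PrivateFormat.PKCS8,
                                  serialization.NoEncryption()))
    with open("cert.pem", "wb") as f:
        f.write(cert.public_bytes(serialization.Encoding.PEM))
    ```

    Browsers will still warn about a certificate like this, because no authority they trust has signed it, but you can import cert.pem as trusted on your own devices.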

    Now that you have a certificate, you need to know whether the service you're hosting supports HTTPS directly. If it does, you install the certificate in it and call it a day. If it doesn't, this is where a reverse proxy is helpful: you set up the reverse proxy to speak HTTPS using the certificate, and it connects to the server over plain HTTP. This is called SSL termination.
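
    As a toy illustration of SSL termination (a real deployment would use something like nginx or Caddy), here's a minimal Python sketch, assuming a plain-HTTP backend on localhost:8080 and the cert.pem/key.pem files from above; it only handles GET and skips all error handling:

    ```python
    # Toy TLS-terminating reverse proxy: clients speak HTTPS to us,
    # we speak plain HTTP to the backend (assumed to be on localhost:8080).
    import http.server
    import ssl
    import urllib.request

    BACKEND = "http://127.0.0.1:8080"   # placeholder backend address

    class Proxy(http.server.BaseHTTPRequestHandler):
        def do_GET(self):
            # Forward the request path to the backend over plain HTTP
            with urllib.request.urlopen(BACKEND + self.path) as resp:
                body = resp.read()
                status = resp.status
            self.send_response(status)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    # Port 443 needs elevated privileges; 8443 is used here for testing
    httpd = http.server.HTTPServer(("0.0.0.0", 8443), Proxy)
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile="cert.pem", keyfile="key.pem")  # the certificate lives here, not on the backend
    httpd.socket = ctx.wrap_socket(httpd.socket, server_side=True)
    httpd.serve_forever()
    ```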

    There's also the question of certificate renewal if you choose the Let's Encrypt option. Let's Encrypt's default (HTTP-01) challenge requires port 80 to be reachable for renewal. If you already have a service running on port 80 (on the router's external side), you'll have a conflict. This is the second case where a reverse proxy is helpful: it can let two services (the Let's Encrypt renewal and your other service) share the same external port. If you don't need port 80 for anything else, then you don't need the reverse proxy for this. You could also set up a DNS-based certificate challenge and avoid this issue entirely; whether that's easy depends on your DNS provider.

    So, to summarize:

    IF the service doesn't support SSL/HTTPS, OR (you want a Let's Encrypt certificate AND port 80 is already in use):

    THEN use a reverse proxy (or maybe do a DNS challenge with Let's Encrypt instead).

    ELSE:

    You don't need one, but you can still use one.


  • Reverse proxies don’t keep anything private. That’s not what they are for. And if you do use them, you still have to do port forwarding (assuming the proxy is behind your router).

    For most home hosting, a reverse proxy doesn’t offer any security improvement over just port forwarding directly to the server, assuming the server provides the access controls you want.

    If you're looking to access your services securely (in the sense that only you will even know they exist), then what you want is a VPN. (For VPNs you also often have to port forward, though sometimes the forwarding/router firewall hole punching is set up automatically.) If the service already provides authentication and you want to be able to easily share it with friends/family etc., then a VPN is the wrong tool too (but in that case setting up HTTPS is a must, probably through something like Let's Encrypt).

    Now, there's a problem, because companies have completely corrupted the normal meaning of a VPN with things like NordVPN, which are actually more like proxies and less like VPNs. A self-hosted VPN will allow you to connect to your home network and all the services on it without having to expose those services to the internet.

    In a way, VPNs often function in practice like reverse proxies: both control traffic from the outside before it reaches things inside. But beyond that they are quite different. A reverse proxy controls access to particular services, usually HTTP-based and pretty much always TCP/IP or UDP/IP based. A VPN controls access to a network (hence the name: virtual private network). When set up, it shows up on your clients like any other Ethernet cable or WiFi network you would plug in. You can then access other computers that are on the VPN, or that are given access to the VPN through the VPN server.

    The VPN software usually recommended for this kind of setup is WireGuard/OpenVPN or Tailscale/ZeroTier. The first two are more traditional VPN servers, while the second two are more distributed/“serverless” VPN tools.

    I’m sorry if this is a lot of information/terminology. Feel free to ask more questions.


  • How will a reverse proxy help?

    Things that a reverse proxy is often used for:

    • hosting multiple services on the same IP and port
    • SSL termination, so that the wider world speaks HTTPS while the proxy speaks plain HTTP to the server. This means the server doesn't have to do its own key management
    • load balancing, so multiple servers can handle the same requests (technically that's a load balancer's job, but I believe some reverse proxies do basic load balancing)
    • adding authentication in front of services that don't have their own (see the sketch after this list; note that some of the protection is lost if you use plain HTTP, since anyone who can see your traffic can also see the credentials and authenticate. It's not zero protection though, because random internet users probably can't see your traffic)
    • probably something I'm forgetting
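
    For the authentication point, here's a minimal Python sketch of the idea, assuming a backend on localhost:8080 and placeholder credentials; in practice you'd want this behind HTTPS so the password isn't visible on the wire:

    ```python
    # Toy proxy that requires HTTP Basic auth before forwarding to a
    # backend service that has no authentication of its own (assumed address).
    import base64
    import http.server
    import urllib.request

    BACKEND = "http://127.0.0.1:8080"                                   # placeholder backend
    EXPECTED = "Basic " + base64.b64encode(b"alice:hunter2").decode()   # placeholder credentials

    class AuthProxy(http.server.BaseHTTPRequestHandler):
        def do_GET(self):
            # Reject anyone who didn't present the expected credentials
            if self.headers.get("Authorization") != EXPECTED:
                self.send_response(401)
                self.send_header("WWW-Authenticate", 'Basic realm="private"')
                self.end_headers()
                return
            with urllib.request.urlopen(BACKEND + self.path) as resp:
                body = resp.read()
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    http.server.HTTPServer(("0.0.0.0", 8081), AuthProxy).serve_forever()
    ```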

    Do any of these match what you're trying to accomplish? What do you hope to gain by adding a reverse proxy (or maybe some other software better suited to your needs)?

    Edit: you say you want to keep this service ‘private from the web’. What does that mean? Are you trying to make it so only clients you control can access the service? You say that you already have some services hosted publicly using port forwarding; what do you want to be different about this one? Assuming you do need it to be secured/limited to a few known clients, you also say that those clients are too weak to run SSL. If that's the case, then you have two conflicting requirements: you can't simultaneously have a service that is secure (which generally means cryptographically secured) and also available to clients that cannot handle cryptography.

    Apologies if I’ve misunderstood your situation




  • Huh?? I’m using Kubuntu 24.04 right now and didn’t have to jump through these hoops. That’s weird.

    I compile them because I want to use them with my system Wine, not with Proton. Proton does that stuff for you for Steam games; this is for things like CAD software that needs accelerated graphics. I could probably use something like wine-ge and let GE compile it for me, but I'm not sure they include all the nvapi/CUDA stuff that's needed for CAD rather than gaming. If there's an easier way to do it, I'd love to hear! Right now I'm using https://github.com/SveSop/nvidia-libs

    “I’m a developer that’s been using Ubuntu distros for 20 years and never ran into such issues.”

    If you're a developer that's comfortable with desktop software toolchains, that makes sense. (And checkinstall is wonderful for not polluting your system with random unmanaged files.) But I came at this knowing embedded C++ and Python, and there were just a lot of tools I had to learn, like what make was, how library files are linked and found, etc. And for someone who's not a developer at all, I imagine this would be even harder.

    “I’ve learned a lot, especially because of everyone in this thread.”

    I’m glad!


  • Re the flatpak issue: what you linked just says that flatpak won't be installed by default and that packages provided by flatpaks won't be officially supported by Ubuntu support as of 23.04. I don't think this affects your use of Ubuntu in any way. If you want to use flatpaks, just install the program; it's still packaged in the Ubuntu repositories. 23.04 was over a year ago, and I still use flatpak without a problem on my Kubuntu 24.04 system. It's just a one-time thing to run sudo apt-get install flatpak (and maybe a second package for KDE's flatpak PackageKit back end), and then it's like Canonical never made that decision.

    The push of snaps instead of debs is a bit more concerning, because it removes the deb as an option in the official repositories. But as of right now I think only Mozilla software has this happening? If your timeline is 5-10 years though, this may be more of an issue, depending on how hard Canonical pushes snaps and how large their downsides remain.


  • All those patches seem like nice things to have, but they're more focused on adding hardware support and working around bugs in software/other people's implementations. If you have one of the affected GPUs/games/etc., those patches probably make a huge difference, but I'd guess there won't be noticeable frame rate differences on most systems. I haven't tested this claim though, so maybe something on there makes a big difference. What's nice is all the packaging work they've done to make setting things up correctly easy, not necessarily most of the changes themselves. For example, on my system I compile dxvk and various Wine Nvidia libs myself, since Ubuntu doesn't package them, and it's easy to screw that up/it requires some knowledge of compiling things.

    Reading your update, I'd still choose whatever distro packages the software you want with the versions/freshness you need. If you're willing to tweak things, the performance stuff can be done yourself pretty easily (unless you have broken hardware that isn't well supported by the mainline kernel), but packaging things/compiling software that isn't in the repositories is a huge pain. I think this is one of the reasons people choose Arch even with its need to stay on top of updates: the AUR means you don't have to figure out how to build software that the distribution maintainers didn't package. Ubuntu's PPAs aren't a great substitute (though I don't have personal Arch experience to compare with).


  • I'm not sure what performance improvements you're talking about. As far as I'm aware, the performance difference between distros is extremely minimal. What does matter is how up to date the DE is in the distribution-provided package. For example, I wanted some Nvidia+Wayland improvements that were only in KWin 6.1, so I switched from Kubuntu to KDE neon in order to get them (and definitely sacrificed some stability, since more broken packages/combinations get pushed to users than in base Ubuntu). It's also possible that the kernel version might matter in some cases, but I haven't run into this personally.

    I think the main differences between distros are how apps are packaged and the defaults provided, and if you're most comfortable with apt-based systems, I'm not sure what benefit there's going to be to switching (other than the joy of tinkering and learning something new, which can be fun in its own right).

    For some users less experienced with Linux, the initial effort required to set up Ubuntu for gaming (installing graphics drivers, possibly setting kernel options, etc.) might push them toward a distribution that removes that barrier, but the end state is going to be basically identical to whatever you've set up yourself.

    The choice between distributions is probably more ‘what do I want the process of getting to my desired end state to be like’ and less ‘how do I want the computer to run’.


  • Could you post the specific output of the commands that don't work? It's almost impossible to help with just ‘It doesn't work’. For example, when ping fails, what's the error message? Is it a timeout or a resolution failure? What does the resolvectl command I shared show on the laptop? If you enable logging on the DNS server, do you see the requests coming in when you run the commands that don't work?







  • I've set up Okular signing and it worked, but I believe it was with an S/MIME certificate tied to my email (not PGP keys). If you want, I can try to figure out exactly what I did to make it work.

    Briefly, off the top of my head, I believe it was:

    1. Getting an S/MIME certificate for my email from an authority that provides them. There's one Italian company that will do this for any email address for free.
    2. Converting the S/MIME certificate to some other format.
    3. Importing the certificate into Thunderbird's (or maybe it was Firefox's) certificate store (and, as a side quest, setting up Thunderbird to sign email with that certificate).
    4. Telling Okular to use the Thunderbird/Firefox certificate store as the place to find certificates.

    I can't remember if there was an easy way to do this with PGP keys.





  • I'd be surprised if it was significantly less. A comparable 70-billion-parameter model from Llama requires about 120GB to store. Supposedly the largest current ChatGPT model goes up to 170 billion parameters, which would take a couple hundred GB to store. There are ways to trade off some accuracy in order to save a bunch of space, but you're not going to get it under tens of GB.
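
    Rough back-of-the-envelope arithmetic (the bytes-per-parameter figures are standard precisions; the 170B figure is just the rumor mentioned above):

    ```python
    # Approximate storage for model weights: parameters x bytes per parameter.
    # Billions of parameters times bytes per parameter gives GB directly.
    def size_gb(params_billion, bytes_per_param):
        return params_billion * bytes_per_param

    print(size_gb(70, 2))    # 70B model at 16-bit          -> 140 GB
    print(size_gb(70, 0.5))  # same model, 4-bit quantized  -> ~35 GB (accuracy tradeoff)
    print(size_gb(170, 2))   # rumored 170B model at 16-bit -> 340 GB
    ```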

    These models really are going through that many GB of parameters once for every word in the output. GPUs and tensor processors are crazy fast. For comparison, think about how much data a GPU generates to display 4k60 video: it's like 1GB per second. And the memory bandwidth recommended to render that image is like 400GB per second. Crazy fast.
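
    The same arithmetic gives a rough ceiling on generation speed, since every output token has to stream the full set of weights through the chip once; the bandwidth number here is just an illustrative figure:

    ```python
    # If generation is memory-bandwidth bound, tokens/second is roughly
    # bandwidth divided by model size (all weights read once per token).
    model_gb = 140          # 70B model at 16-bit, from above
    bandwidth_gb_s = 400    # illustrative GPU memory bandwidth (assumption)
    print(bandwidth_gb_s / model_gb)  # ~2.9 tokens per second, at best
    ```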