MAWP - Archer
Yale Z-Wave locks work well, last a long time between battery replacements, and can run off of rechargeables. You can add them to Home Assistant and use its Siri and Alexa integrations.
Had some Schlage locks that ran through batteries way too fast.
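If you go the Home Assistant route, poking the lock from a script is pretty easy too. Rough sketch against HA’s REST API below; the URL, token, and entity id are all placeholders for your own setup:

```python
# Rough sketch: locking a Z-Wave lock through Home Assistant's REST API.
# The URL, token, and entity_id are placeholders for your own setup.
import requests

HA_URL = "http://homeassistant.local:8123"  # wherever your HA instance lives
TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"      # create one under your HA user profile

resp = requests.post(
    f"{HA_URL}/api/services/lock/lock",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"entity_id": "lock.front_door"},  # example entity id
    timeout=10,
)
print(resp.status_code)  # 200 means HA accepted the service call
```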
Exactly, if she could feel shame or humiliation she would be hiding in a cabin somewhere and never speaking to another person or camera again for the rest of her life…
Sorry, I got distracted by a happy thought there, what were we talking about?
When they’re not recording your desktop in an unencrypted database for AI, boot-looping your computer with bad patches or showing ads in your start menu, they’re disabling your account for calling family to see if they’re still alive. Damn.
Take Ollama, for instance: either the whole model fits in VRAM and compute happens on the GPU, or it sits in system RAM and compute happens on the CPU. Running models on the CPU is horribly slow; you won’t want to do it for large models.
LM Studio and others let you run part of the model on the GPU and part on the CPU, splitting the memory requirements, but it’s still pretty slow.
Even the smaller 7B-parameter models run pretty slowly on the CPU, and the huge models are orders of magnitude slower.
So technically more system RAM will let you run some larger models, but you’ll quickly figure out you just don’t want to.
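If you’re curious what that GPU/CPU split looks like in practice, here’s a rough sketch using llama-cpp-python (one of the libraries behind this kind of layer offloading, not something Ollama-specific); the model path is a placeholder and n_gpu_layers is the knob you’d turn:

```python
# Rough sketch of partial GPU offload with llama-cpp-python.
# The model path is a placeholder; n_gpu_layers controls how many
# transformer layers go to VRAM (0 = pure CPU, -1 = everything).
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-2-7b.Q4_K_M.gguf",  # hypothetical local GGUF file
    n_gpu_layers=20,  # partial offload: ~20 layers in VRAM, the rest on CPU
)

out = llm("Explain GPU offloading in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```

Anything that doesn’t fit in those offloaded layers still runs on the CPU, which is why the split helps with memory but not much with speed.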
Boeing made $76B in revenue in 2023, so this is slightly more than one day’s revenue for them (~$210M/day), or a bit more than ten days’ profit (~$21M/day). They will keep doing what they’re doing, but increase their spending on a PR campaign to improve their public image.
Respect, but…
FWIW they didn’t merge it; they closed the PR without merging, and there’s a link to the line that still exists on master.
The recent comments are from the announcement of the Ladybird browser project, which is forked from some browser code from SerenityOS; I guess people are digging into who wrote the code.
Not arguing that the new comments on the PR are good/bad or anything, just adding a bit of context.
“Official” insurrection
I’ve been tempted to try installing Plasma Mobile on a tablet.
Why no Arch install?
Also, on the point a few others are making about needing other people: there’s a group finder, and I’d say most people running those raids in group-finder groups don’t talk at all, so you can just pretend they’re NPCs if you want.
I will never get tired of comedy responses to photoshop requests. It’s just a timeless classic.
Been 100% Linux for like 6-9 months now; these stories make me thankful for finally making the switch.
I’ve tried to make the switch 3-4 times in the past and was stopped by 2 main things:
The experience was so much better this time and I really have no regrets. I don’t imagine I’ll ever run Windows again outside of a VM.
Nah. There are some Nvidia issues with Wayland (that are starting to get cleared up), and Nvidia’s drivers not being open source rubs some people the wrong way, but getting Nvidia and CUDA up and running on Linux is pretty easy/reliable in my experience.
WSL is a bit different, but there are steps to get that up and running too.
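If you want a quick sanity check that the driver and CUDA stack are actually visible, something like this works on both native Linux and WSL (assuming you’ve got PyTorch installed; any CUDA-aware library would do):

```python
# Quick sanity check that the Nvidia driver + CUDA runtime are visible.
# Assumes PyTorch is installed; any CUDA-aware library would work too.
import torch

print(torch.cuda.is_available())           # True means the driver/runtime are working
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))   # e.g. the name of your GPU
    print(torch.version.cuda)              # CUDA version PyTorch was built against
```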
Agree with others, this guide is a bit more work than you probably need. I don’t really run Windows much anymore, but I did have an easier time with WSL, like the other poster mentioned.
And just to check, are you planning on fine-tuning a model? If so, then the whole Anaconda/Miniconda, PyTorch, etc. path makes sense.
But if you’re not fine-tuning and you just want to run a model locally, I’d suggest Ollama. If you want a UI on top of it, Open WebUI is great.
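Once Ollama is installed and serving, talking to a model is only a few lines; here’s a rough sketch with the official Python client (the model name is just an example, pull whatever you like with `ollama pull`):

```python
# Minimal sketch of chatting with a locally running Ollama server.
# Assumes the Ollama service is running and you've pulled a model,
# e.g. `ollama pull llama3`; the model name here is just an example.
import ollama

response = ollama.chat(
    model="llama3",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)
print(response["message"]["content"])
```

Open WebUI then just points at that same local server and gives you a chat interface on top.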
Hopefully you’re only forwarding the minimal set of network ports and not all ports/traffic? If so, then you’re good; like someone else said, if you’ve got a router and it’s only forwarding selected traffic, there’s no need for anything else.
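If you want to double-check what’s actually reachable, a quick probe from a machine outside your network works; rough sketch below, with the host and port list as placeholders:

```python
# Rough sketch: probe a handful of ports from outside the network to
# confirm only the ones you meant to forward actually answer.
# The host and port list are placeholders.
import socket

HOST = "your.public.ip.example"  # hypothetical public address
PORTS = [22, 80, 443, 8080]      # the ports you think you forwarded

for port in PORTS:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(2)
        result = s.connect_ex((HOST, port))  # 0 means the connection succeeded
        print(f"port {port}: {'open' if result == 0 else 'closed/filtered'}")
```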
Is this the new “Simpsons already did it”?
Cunk already did it…
(3:40 if you want to get right to it) https://www.youtube.com/watch?v=UoSUx1xyj1E