Most likely, a Hetzner storage box is going to be so slow you will regret it. I would just bite the bullet and upgrade the storage on Contabo.
Storage in the cloud is expensive, there’s just no way around it.
There was a good blog post about the real cost of storage, but I can’t find it now.
The gist was that to store 1TB of data somewhat reliably, you probably need redundant copies at several levels, which amounts to something like 6TB of disk for 1TB of actual data. In real life you'd probably use some other RAID level, at least for larger amounts, so it's perhaps not as harsh, and compression can reduce the required backup space too.
I have around 130G of data in Nextcloud, and the off-site borg repo for it is about 180G. Then there are local backups on a mirrored HDD; counting the ZFS snapshots that aren't yet pruned, that's maybe 200G of raw disk space. So 130G becomes 510G in my setup.
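Tallying those numbers up, the overhead multiplier looks like this (a trivial sketch using the figures from my setup; the GiB values are the ones quoted above):

```python
# Raw disk consumed per unit of "real" data, numbers from my setup (GiB).
live_data = 130      # Nextcloud data on the server
offsite_borg = 180   # off-site borg repo, including history
local_mirror = 200   # mirrored HDD backups + unpruned ZFS snapshots (raw)

total_raw = live_data + offsite_borg + local_mirror
print(f"{total_raw} GiB of raw disk for {live_data} GiB of data "
      f"(~{total_raw / live_data:.1f}x overhead)")
# → 510 GiB of raw disk for 130 GiB of data (~3.9x overhead)
```

So even a fairly modest setup lands around 4x, below the pessimistic 6x but nowhere near 1:1.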
Imagine if all the people who prefer systemd would write posts like this as often as the opposition. Just use what you like, there are plenty of distros to choose from.
I wish I knew about Photon before. Just spun up my own instance and loving it!
At this stage I’ll probably just mirror my stuff from GH. I have a feeling they’ll be doing something stupid soon, forcing people to look for alternatives.
Would be nice to collaborate with others, but getting started is hard when you don’t have enough free time.
It seems Gitea has basic CI + package registries now, that will be plenty for my needs.
Nextcloud, Synapse + bridges, Adguard Home, Uptime Kuma, Home Assistant. Thinking about spinning up Gitea, Forgejo or Gitlab again.
They could explain things better, you are right. I actually think I remember having almost the exact same confusion a few years back when I started. I still have two keys stored in my pw manager, no idea what the other one is for…
The decryption has gotten much more reliable in the past year or two; I try out new clients a lot and haven't had issues in a long time. Perhaps you could give it another go, knowing that you use the same key for all sessions.
I have a feeling you are overthinking the Matrix key system.
Basically it’s just another password, just one you probably can’t remember.
Most of the client apps support verifying a new session by scanning a QR code or by comparing emoji. The UX of these could be better (I can never find the emoji option on Element, but it’s there…). So if you have your phone signed in, just verify the sessions with that. And it’s not like most people sign in on new devices all the time.
I’d give Matrix a new look if I were you.
Wireguard runs over UDP, and the port is indistinguishable from a closed port for most common port-scanning bots. Changing the port will obfuscate the traffic a bit. Even if someone manages to guess the port, they'll still need the right key; otherwise the response is the same as from a closed port: nothing at all. Your ISP can still see that it's Wireguard traffic if they happen to be looking, but can't decipher the contents.
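For reference, changing the port is a one-line tweak in the server config (everything here is a placeholder: keys, addresses, and the port number itself are just examples, not recommendations):

```ini
# /etc/wireguard/wg0.conf on the server (placeholder keys/addresses)
[Interface]
Address = 10.0.0.1/24
ListenPort = 48574        ; any non-default UDP port; unauthenticated probes get no reply either way
PrivateKey = <server-private-key>

[Peer]
PublicKey = <client-public-key>
AllowedIPs = 10.0.0.2/32
```

The client side then just needs its `Endpoint` updated to `your-host:48574` to match.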
I would drop containers from the equation and just run Wireguard on the host. When issues arise, you’ll have a hard time identifying the problem when container networking is in the mix.
You install the Google services and Play store from the gOS Apps application, then use them like normal.
Behind the scenes they run in the sandboxed environment, but to the user it makes no difference.
Run `resolvectl flush-caches` just in case. Then look at `resolvectl dns` to check there are no DHCP-acquired DNS servers set anymore.
If you use a VPN, those often set their own DNS servers too, remember to check it as well.
I recently put the nvidia variant of ublue-os on my work laptop, which has Optimus graphics. Couldn’t be happier.
It’s great to see these variants popping up! I really think ostree may be the future for desktop Linux, and not even very far away.
I started using gestures, and haven't been able to go back since.
Both have their pros and cons.
Portability is the key for me, because I tend to switch things around a lot. Containers generally isolate the persistent data from the runtime really well.
Docker is not the only way to run containers, or even the best one IMO. If I were providing services for customers, I would definitely build most container images daily in some automated way. Well, I already do that for quite a few.
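Daily automated builds don't need a full CI system either; a systemd timer on the build host is plenty. A rough sketch (the unit names, image tag, and Containerfile path are made up for the example, and `--pull=newer` assumes a reasonably recent podman):

```ini
# /etc/systemd/system/build-myapp.service (hypothetical names/paths)
[Unit]
Description=Rebuild the myapp container image

[Service]
Type=oneshot
; pull newer base layers if available, then rebuild from the Containerfile
ExecStart=/usr/bin/podman build --pull=newer -t localhost/myapp:latest /srv/build/myapp

# /etc/systemd/system/build-myapp.timer
[Unit]
Description=Daily rebuild of the myapp image

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with `systemctl enable --now build-myapp.timer` and you get a fresh image every day without Docker or a CI server in sight.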
The mess is only a mess if you don’t really understand what you’re doing, same goes for traditional services.