

Yeah, that would be ideal, but the issue is that I need photo management software that exports the metadata and imports it - this is the main issue. Otherwise things would be far easier.
And then?
Would need a stable enough connection - as written above, that is sadly not the case and won’t be for the next few years.
Nightly backups/syncs might work; putting the private machine “public facing” won’t - I tried that.
I would love to do that, but the issue is getting the software to accept it - basically I need a solution that exports the metadata as well and then adds it into a larger library - and that is the problem.
It depends. Very much. And this is the main problem: There isn’t “one” solution, you will need a few.
The thing with the PRC is: Their great firewall isn’t “one big uniform block”. It’s fairly “variable”.
For example: In Beijing, even 10 years ago, I could access Google Maps and Facebook (both heavily blocked back then) without any issues as long as my mobile phone was roaming. The second I was on wifi, it was of course blocked. But even the cheapo VPN my colleague had worked out fine. Until the day the police started to prepare for the party convention - then suddenly my colleague couldn’t get out, neither could I via our company wifi, and even my carefully crafted WireGuard over HTTPS didn’t work - unless I was in the wifi of the hotel or our host company. There it did. Party congress over? Back to normal operations.
If you travel through the country you will find that in one place solution A works, in another solution B. Generally the more rural (or closer to Tibet/Xinjiang/Myanmar) you get, the more restrictive it seems to be.
Personally I would simply get several different commercial VPNs to make sure you have a choice to get out at all - there are various ones with a good PRC reputation, and most providers have trials as well. And then double tunnel through one of them if you can’t directly reach your usual VPN at home.
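A double tunnel could look roughly like this - assuming the commercial provider has a CLI client (Mullvad shown purely as an example) and your home endpoint is a WireGuard server; the `home` config name is a placeholder:

```shell
# 1. Get out of the country through the commercial VPN first
#    (provider CLI varies; Mullvad shown as an example)
mullvad connect

# 2. Then bring up your own WireGuard tunnel on top of it.
#    Its handshake now travels inside the commercial tunnel, so the
#    firewall only ever sees the provider's traffic.
#    Keep AllowedIPs in /etc/wireguard/home.conf narrow (e.g. your home
#    LAN only) so the two tunnels don't fight over the default route.
wg-quick up home
```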
Yeah, I come from the same scenario. Consolidated multiple nodes incl. a NAS into one. Initially had the HDDs (which ran through a controller anyway) passed through to a TrueNAS VM. That was… a mistake. TrueNAS can become a real bitch if its own VM storage is slower/has a hiccup while the rest of the pool is not. And a lot of other things are a PITA as well, e.g. permission-wise, especially with a FreeIPA domain. And all that for a quite hefty price in resources.
The day I pulled the plug on that was a good day. Later had the issue repeat itself with a client system that the client brought with him.
Nowadays I really love the Proxmox-only solution, even though it’s somewhat icky to run something directly on the host - but it’s acceptable imho when it’s literally built onto host data, as is the case for ZFS NFS anyway.
(I have Samba in a proper LXC, though - but rarely use it these days as we run everything via NFSv4 by now)
The question is why use both. TrueNAS adds a lot of overhead, tends to become unstable in a VM if the workload is high, can lead to problems especially with ZFS, and it often leads to people using privileged containers to use NFS directly (for ease of use) or using a bind mount solution via the host.
With ZFS NFS the whole thing can easily be provided directly and then use bind mounts - which is way more consistent. With Cockpit and napp-it you have graphical tools available.
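As a rough sketch on a Proxmox host - the pool name `tank`, dataset `tank/media`, container ID 101 and the subnet are all placeholders:

```shell
# Export the dataset via NFS straight from ZFS
# (needs the nfs-kernel-server package on the host)
zfs set sharenfs="rw=@192.168.1.0/24,no_root_squash" tank/media

# For guests on the same host, skip NFS entirely and bind mount into the LXC:
pct set 101 -mp0 /tank/media,mp=/mnt/media
```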
Don’t get me wrong, for an existing solution it’s fine, but if one is doing a new build I would absolutely not go for it. TrueNAS has some oddities with permission handling that one can avoid - and do far more stringently - by doing it directly.
Personally, Proxmox+ZFS is imho currently the best bet in that regard, especially if you can avoid Samba. (The Zamba server is solid, though.) Especially with a FreeIPA/RHEL IdM setup, things are surprisingly easy/stringent in terms of permission handling.
Signal itself is solid. For now. The issue is that Signal is a centralized infrastructure service that is based in the US.
While it’s rather unlikely that something shady is going on and that the current administration manages to pressure someone into installing back doors without anyone noticing, there is a growing chance that at some point the Orange Hitler or his cronies take aim at Signal - and simply shut the whole thing down in a single sweep.
Which would mean the whole thing is lost - in theory they could of course rebuild a foundation outside the US, but that would also mean they need people not residing in the US (not like Proton, which claims to operate from Switzerland but in reality is US-based) and find funding there - enough funding to cover the costs and not be impeded by US pressure.
This is the scenario that makes Signal a problematic candidate - and sadly the foundation is doing nothing against it.
Personally I would avoid Raspberries like the plague here - they have many downsides when booted up rarely. I’d rather use a Mini PC or ZimaBoard, maybe a build on an MC12-LE0 (if you can still get it cheap), chuck it all in a cheap case and be good. Unless you have something with IPMI on it, I would also invest in a semi-professional KVM like PiKVM, JetKVM or NanoKVM - and if you can’t stop/start power with that because the device doesn’t follow the standards, maybe an IP-switchable plug.
We are talking about a hobbyist here - if you want to have precautions against all these points, OP would need to have a redundant PSU, redundant power sources with automatic failover, backup power, etc. Of course paired with redundant data connections, redundant KVM solutions, physical access management, etc.
In other words: A freaking data center.
Sure, PSUs break. Happens. But very, very rarely. And everything else that is on the side of his backup device can be handled through a KVM. And tbh, if that one fails, one can usually direct a “non-IT user” to simply pull the plug and put it back in.
In theory that is solvable with a PiKVM, JetKVM, NanoKVM, etc.
Yep.
Absolutely the best advice.
I always recommend the same:
Get secure, proper cloud storage (Backblaze, Hetzner Object Storage/Storage Box, IONOS, etc.) for daily/incremental backups and single-file recovery. (As Tandberg is no longer an alternative, this seems to be the only choice atm.) Make sure you have encryption on and a proper rotation/deletion schedule.
Get an external hard drive for a full backup every few weeks/months; preferably store it offsite, even better if you get two and rotate them offsite.
Get an M-DISC burner for the important files. Burn them onto Blu-ray M-Discs and store these at various offsite locations as well. Do so every few months. These have the advantage of being WORM (write once, read many).
Tapes are fucking expensive for current models, and the old LTO drives one can get off eBay etc. tend to write faulty data and are almost always end of life. And as LTO is not backwards compatible beyond the generation below, it’s very much a possibility that people will have issues reading their tapes in 5 or 10 years.
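The daily/incremental part could e.g. be done with restic against any of those S3-compatible targets - the repository URL, password file and retention numbers below are placeholders:

```shell
# Encrypted, deduplicated daily backup to S3-compatible object storage
export RESTIC_REPOSITORY="s3:https://s3.example.com/my-backups"
export RESTIC_PASSWORD_FILE="/root/.restic-pass"

restic backup /home /etc --exclude-caches

# Rotation/deletion schedule: keep 7 daily, 4 weekly, 12 monthly snapshots
restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 12 --prune
```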
Just another thing: Get proper WORM (write once, read many) backups. Get an M-Disc-capable Blu-ray burner (around 100 bucks) and burn the really important stuff onto archive-grade Blu-rays (normal ones degrade within years, these don’t). You don’t want to find out your datasets suffered from bit rot (yes, that is a thing) 5 years later and have no option to restore because you fucked up backups 2 years ago. For the really important data (everything that can’t be redownloaded, aka the personal stuff) it’s worth it.
Ideally, put some of those discs somewhere else, away from your house.
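Burning such a disc is a two-liner on Linux - the device path and directories are placeholders, and keeping a checksum file alongside makes later bit-rot checks possible:

```shell
# Build an ISO from the important files and burn it to an M-Disc BD-R
genisoimage -r -J -o archive.iso /data/important/
growisofs -dvd-compat -Z /dev/sr0=archive.iso

# Keep checksums with the disc so silent corruption is detectable later
sha256sum /data/important/* > archive.sha256
```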
But for god’s sake use proper backups. The tendency of Immich to break things is the reason I nowadays recommend PhotoPrism to people who start with selfhosting - it’s worse in a lot of ways but way more stable most of the time.
You don’t need many “guides”, especially not on blogs. They are risky - often written by people who don’t fully know what they are doing and, more importantly, don’t update their guides. Then things can become really, really ugly fast.
If you managed to run Jellyfin on a mini PC on Debian you are already doing a good job and have very likely already learned quite a bit.
My personal recommendation: Get another mini PC (no ARM, so no Raspi) and put Proxmox (it’s Debian underneath) on it. Then use the Proxmox community scripts to expand your reach, BUT use them as an “understanding how shit works” base - they have their limitations and their quality has sadly dropped since tteck is no longer with us. (RIP :( )
That should give you a pretty good insight into virtualisation, KVM and basic networking - and a platform to play on where you can easily revert to an earlier state if you fuck up.
Remember backups, remember documentation (a wiki, maybe NetBox) and monitoring (Prometheus/Grafana or Zabbix are some of the multiple options).
If you want to, you can also look into bash scripts to automate a few things. I know people here hate LLMs, but ChatGPT and Perplexity are actually good for that. Let them write a bash script for some easy tasks (e.g. update the VM, download a configuration file, create two admin users, make them sudo, install the Zabbix agent, install this and that) and then let them explain it step by step to you. They aren’t too bad at it and actually help you learn basic scripting fairly well. (And then learn it properly with an e-course or something.)
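A sketch of the kind of script meant here - it defaults to a dry run (set `DRY_RUN=0` to actually apply changes), and the user names and packages are just examples:

```shell
#!/usr/bin/env bash
set -euo pipefail

DRY_RUN="${DRY_RUN:-1}"   # 1 = only print what would happen

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

# Update the VM
run apt-get update
run apt-get -y upgrade

# Create two admin users and make them sudo (names are examples)
for user in alice bob; do
    run adduser --disabled-password --gecos "" "$user"
    run usermod -aG sudo "$user"
done

# Install the Zabbix agent
run apt-get -y install zabbix-agent
```

Reading the dry-run output and then asking the LLM what each line does is exactly the learning loop described above.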
As long as you don’t operate any public-facing services and keep proper backups, the actual risk involved is fairly small.
Yeah. Would be my recommendation, too. For the size of the lab a ZimaBoard seems a good choice if something new is what OP wants, otherwise a mini PC.
I know what you mean, but just saying that Proxmox absolutely has an API that can do a few (not all) of these things - and some are potentially use cases for the Datacenter Manager. But yeah, I know what you mean.
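For reference, the API sits on port 8006 of every node; a token-based status query could look like this (host name, node name, VM ID and token are placeholders):

```shell
# Query the current status of VM 100 on node pve1 via the Proxmox VE API
curl -k \
  -H "Authorization: PVEAPIToken=root@pam!mytoken=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" \
  "https://pve.example.com:8006/api2/json/nodes/pve1/qemu/100/status/current"
```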
Funny enough, before “einen dübeln” became smoking weed it was also used for fucking. Which left me very confused a few times.
These wall plugs/the Fischer type are not meant for plasterboard at all. Because plasterboard is a fucking abomination in terms of building quality.
I am currently open for any software solution, that’s why I came here.