

Can’t understand why this is interesting, as phones now have a lot of storage space, even the ones that don’t have SD card slots. Just store the music that interests you directly on the phone.
Article is from 2018. Someone must have pasted the URL from Hacker News, where the same story was dug up recently.
I guess 4chan is still down?
I haven’t looked in a few years but 20TB is probably plenty. I agree that Wikipedia lost its way once it got all that attention online and all that search traffic. Everyone should have their own copy of Wikipedia. I used to download the daily incremental data dumps but got tired of it. I still have a few TB of them around that I’ve been wanting to merge.
The text is in not-exactly-convenient database dumps (see other commenter’s link) and there are daily diffs (mostly bot noise), but then there are the images and other media, which are way up in the terabytes by now. There are some docs, maybe out of date, about how to run the software yourself. It’s written in PHP and it’s big and complicated.
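If you just want the article text, the standard dumps live on dumps.wikimedia.org; a sketch of grabbing the current English pages-articles dump (check the site for the current filename before trusting this exact one):

```
# latest English Wikipedia article text, no media; tens of GB compressed
wget -c https://dumps.wikimedia.org/enwiki/latest/enwiki-latest-pages-articles.xml.bz2
```

The -c flag lets wget resume, which you will want at this size.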
They’ve done that on and off for ages, and the ones being offered with Ubuntu here are mostly pretty expensive or else not so interesting. I’ve been content to buy older Thinkpads and self-install Debian for my past several laptops. I was somewhat tempted by recent Ideapad Yogas but resisted, and since then, prices have gone up, whether due to tariffs or whatever else.
Mozilla propaganda. It’s not just about individually identifiable data. Privacy means not giving the bad guys ANY data, whether or not it points at any individual.
How much do you expect to pay for the 24 NVMe disks?
It’s possible for a while, but it’s a whack-a-mole game if you’re doing anything they would care about, so you will have to keep moving it around. VPS forums will have some info.
This is about “Chris Krebs, the former head of the US Cybersecurity and Infrastructure Security Agency (CISA) and a longtime Trump target”.
Oh, I didn’t know about the new requirements. Less backwards compatibility too. IBM 3592 looks better but costs even more. Tape drives can’t be that much higher tech than HDDs, so if manufacturers cranked up production volume, they could likely be way more affordable.
The upfront cost of tape is excessive though; it wasn’t always like that. And LTO-9 missed its capacity target: it’s 18TB native (1.5x LTO-8’s 12TB) instead of the planned 24TB. Who knows what will happen later in the roadmap.
Noo, really, idk what Disco was, but tags and recommendations from other humans are plenty for finding good AO3 fic. And AO3 itself has been getting hammered for months, presumably by corporate AI crawlers. A recommendation engine would also have to crawl AO3, which is very difficult because of said hammering. Even the regular download feature barely works now if you use fanficfare for it.
Are you familiar with git hooks? See
https://git-scm.com/book/en/v2/Customizing-Git-Git-Hooks
Scroll to the part about server-side hooks. The idea is to automatically propagate updates when you receive them: git-level replication instead of rsync.
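For instance, a post-receive hook on each bare repo could fan accepted pushes out to the other servers. A minimal sketch, assuming remotes named replica1 and replica2 have already been added (the names are made up):

```
#!/bin/sh
# hooks/post-receive -- runs after refs have been updated on this server.
# Mirror everything (branches, tags, deletions) to each replica; failures
# are logged rather than fatal, since the client's push already succeeded.
for replica in replica1 replica2; do
    git push --mirror "$replica" || echo "warn: push to $replica failed" >&2
done
```

A mirror push that changes nothing doesn’t fire the hook on the far side, so two servers running this won’t ping-pong forever, but an explicit loop guard is still cheap insurance.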
I see, fair enough. Replication is never instantaneous, so do you have definite bounds on how much latency you’ll accept? And do you really need several independent git servers live at once? Most HA systems have a primary and a failover, so users only ever see one server. If you want to use Ceph, in practice all the servers would be in the same DC. Is that ok?
I think I’d look in one of the many git books out there to see what they say about replication schemes. This sounds like something that must have been done before.
Why do you want 5 git servers instead of, say, 2? Are you after something more than high availability? Are you trying to run something like GitHub where some repos might have stupendous concurrent read traffic? What about update traffic?
Is it acceptable if the servers sometimes get out of sync for 0.5 sec or whatever, as long as each is in a consistent state at all times?
Anyway, my first idea isn’t rsync but rather hooks that replicate pushes to the other servers, so updates still look atomic to clients. Alternatively, use a replicated file system on Ceph or the like, so you can quickly migrate failed servers. That’s a standard cloud hosting setup.
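If you want the lockstep version of that, a hook that runs before refs are accepted (pre-receive or update) can replicate synchronously and reject the push when a mirror is unreachable. A sketch using pre-receive, with hypothetical replica names; ref deletions are skipped here and would need their own handling, and this leans on hooks being able to read the incoming quarantined objects, which recent git versions allow:

```
#!/bin/sh
# hooks/pre-receive -- reject the push unless every replica takes it first,
# so all servers advance together at the cost of slower pushes.
zero=0000000000000000000000000000000000000000
while read old new ref; do
    [ "$new" = "$zero" ] && continue  # skip ref deletions in this sketch
    for replica in replica1 replica2; do
        git push "$replica" "$new:$ref" || {
            echo "replication to $replica failed; rejecting push" >&2
            exit 1
        }
    done
done
```

The tradeoff is availability: one dead replica blocks all pushes until you drop it from the list.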
What real-world workload do you have that appeared suddenly enough that your devs couldn’t stay on top of it, and you find yourself seeking advice from us relatively clueless dweebs on Lemmy? It’s not a problem most git users deal with. Git is pretty fast, and most users are ok with a single server and a backup.
I wonder if you could use HAProxy for that. It’s usually used with web servers, but it can balance arbitrary TCP, so git-over-SSH should work too. This is a pretty surprising request though, since git is fast. Do you have an actual real-world workload that needs such a setup? Otherwise, why not a normal setup with one server being mirrored, plus a failover IP as lots of VPS hosts can supply?
And, can you use round robin DNS instead of a load balancer?
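If you did go the HAProxy route, a TCP-mode config in front of the SSH ports is roughly the shape of it. A minimal untested sketch, with made-up hostnames and IPs (bound on 2222 so it doesn’t clash with the box’s own sshd):

```
# haproxy.cfg sketch: TCP-balance git-over-SSH across two mirrors
frontend git_ssh
    bind *:2222
    mode tcp
    default_backend git_servers

backend git_servers
    mode tcp
    balance roundrobin
    server git1 192.0.2.10:22 check
    server git2 192.0.2.11:22 check
```

And round-robin DNS is the zero-extra-parts version: publish several A records for one name (example zone-file lines, hypothetical IPs):

```
git.example.com.  60  IN  A  192.0.2.10
git.example.com.  60  IN  A  192.0.2.11
```

The catch is there are no health checks, so a dead server keeps getting its share of clients until you pull its record.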
What does this even mean? You want to replicate between git repositories? Can you do that with receive/update hooks on the servers?
"…the dreaded “death cross,” a historical indicator of a likely downturn for the company.
"Business Insider called out the event, which has been hitting the stock indexes of some major players over the last couple of weeks as tariff trouble has hit just about everyone. Tesla is just the latest to see the symbol of bearishness, which occurs when a company’s 50-day moving average crosses and drops below the 200-day average.x
50GB of FLAC = maybe 20GB of Vorbis, amirite? Is that 450GB of FLAC in your screenshot? At that ratio it’d come out around 180GB, which would fit on a 256GB phone even without an SD card. A 512GB card is quite affordable these days. Just make sure to buy a phone with a slot, and think of it as next-level degoogling ;).
Yeah I know there’s lots of music in the world but who wants to listen to all of it on a moment’s notice anyway?
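If anyone wants to test that ratio on their own library, a rough transcode sketch with ffmpeg (paths are examples; -q:a 5 lands somewhere around 160 kbps):

```
# convert a FLAC tree to Ogg Vorbis alongside the originals;
# -n makes ffmpeg refuse to overwrite existing files
find ~/Music/flac -name '*.flac' -exec sh -c '
  ffmpeg -n -i "$1" -c:a libvorbis -q:a 5 "${1%.flac}.ogg"
' _ {} \;
```

Then compare du -sh on the two trees before deciding what fits on the phone.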