Paywalled. Can anyone paste the text here?
One foot planted in “Yeehaw!”, the other in “yuppie”.
I mean, sure, maybe 10 years ago. But most static sites, like blogs and such, can fit entirely on Cloudflare Pages under the free tier. Or heck, even within the free allotment on AWS S3 or other object storage providers.
I mean, perhaps this isn’t a static site and it’s built on some sort of CMS with a Postgres database in the background. In that case it probably runs around $5 to $10 a month.
Of course, this all presumes that the person setting it up is fairly savvy about the offerings available. I see a lot of people making silly decisions in this space, thinking they need some full-fat virtual private server when all they really need is an object storage bucket behind a DNS CNAME.
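For anyone wondering what “an object storage bucket behind a DNS CNAME” looks like in practice, here’s a rough sketch using S3 static website hosting. The bucket and domain names are made up, and this is a simplified outline, not a tested deployment:

```shell
# Hypothetical domain; for S3 website hosting the bucket name must match it.
aws s3 mb s3://blog.example.com
aws s3 website s3://blog.example.com --index-document index.html
# Upload the generated static site.
aws s3 sync ./public s3://blog.example.com
# Then add a DNS CNAME at your registrar, e.g.:
#   blog.example.com -> blog.example.com.s3-website-us-east-1.amazonaws.com
```

You’d still need a bucket policy allowing public reads (or put a CDN in front), but that’s the whole footprint for a typical blog.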
I guess I didn’t really see the pressure that they were under.
I hope they heal! But it’s a bummer that such an excellent resource will be taken down.
I wish more creators were willing to hand their creations to someone who wishes to continue them. But oftentimes, I fear a creation is far too entwined with its creator’s identity for that to be a common occurrence.
I’ve tried it before; it’s fine, but it had issues running on Wayland last I tried. Did they fix those? Looking at the issue tracker, it seems like there are still a few open Wayland issues.
kitty, by contrast, has had Wayland support for about as long as I’ve used it.
No kidding. One of the YouTubers I followed was really shilling Zed editor. He didn’t seem to mention that it was Mac only.
Well, I guess it’s back to Neovim in the kitty terminal for me.
Sometimes I swear Mac based developers think the world revolves around them.
I agree. My instance is locally focused, so I can just ask randos off the street here in Tucson. Turns out people don’t like feeling controlled very much and want to be able to talk to friends who are on other platforms, and that’s why we won’t be defederating from Meta/Threads.
I have a sense, though, that Blahaj is mostly not real people. Sometimes I wonder how easy it might be to give someone like Ada a false sense of support by loading up the instance with LLM bots that all act like Her. (It’s her, right? God please don’t dox me)
I’ve just grown skeptical that people actually operate like this in real life, or rather, that enough people do to actually matter. And that makes me further question the authenticity of entire instances like Beehaw or Blahaj as “lefty safe spaces”. Seems to me that they function more as a way to provide reasons to dismiss the progressive movement rather than join it.
I’m in agreement here, and given Blahaj’s trigger-happy nature when it comes to defederation, I’m not sure I care all that much.
I’ve seen them defederate so many other instances for “wrong-think” and I don’t think Snowe should feel like he’s in the wrong here.
It’s only a matter of time before they defederate from my own instance, tucson.social, because I don’t think 100% like them. I apparently support trans genocide because… checks notes… I don’t think that doxxing far right reactionaries/extremists is an effective tactic for garnering sympathy and building a movement.
Yup, that’s it. Apparently that opinion makes you a Nazi sympathizer in these circles.
I work for another distributed database company. I can say that it’s much harder to convert CockroachDB customers than Yugabyte customers. Given that, I’d think that CockroachDB is likely the more vetted solution. Sure it’s new (2017), but it’s not THAT new.
IIRC, MySQL (and PostgreSQL) are pretty much limited to a write-primary/read-replica sort of horizontal scaling. Other SQL engines have better support for multi-master configurations.
However, these types of configurations are usually tied to licensing - especially for Microsoft SQL Server and Oracle.
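To illustrate the read-replica model mentioned above, here’s a toy sketch of write-primary/read-replica routing, with plain dicts standing in for real database nodes (all names invented; real replication is asynchronous, unlike this):

```python
# Sketch of the write-primary / read-replica scaling model: one node takes
# all writes, reads are spread across replicas. Dicts stand in for nodes.

class ReplicatedStore:
    def __init__(self, n_replicas=2):
        self.primary = {}                       # all writes land here
        self.replicas = [{} for _ in range(n_replicas)]
        self._next = 0                          # round-robin read cursor

    def write(self, key, value):
        # Writes always hit the single primary...
        self.primary[key] = value
        # ...and propagate to every replica (synchronously here, for clarity).
        for replica in self.replicas:
            replica[key] = value

    def read(self, key):
        # Reads are balanced round-robin across replicas; adding replicas
        # scales read throughput, but write throughput stays capped by
        # the one primary - hence the appeal of multi-master setups.
        replica = self.replicas[self._next]
        self._next = (self._next + 1) % len(self.replicas)
        return replica.get(key)

store = ReplicatedStore()
store.write("user:1", "alice")
print(store.read("user:1"))  # -> alice
```

The limitation the parent comment describes falls out of the structure: every `write` still funnels through one node.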
As another commenter suggested, there is Yugabyte and CockroachDB as well - of those two I think CockroachDB is the more mature product. And they’re one of the fiercest competitors for the company I work for too.
I cannot speak to “Battle Tested-ness” of CockroachDB, but given it’s been around for a few years now, I don’t think it’s quite as risky as other comments have indicated. Also, they’re doing something right if we haven’t been able to convert many CockroachDB customers.
Nope - full fat install on hardware - as I said in the post.
Again, just so you don’t miss the crucially important context - I’m an advanced user. I typically run vanilla Arch or EndeavourOS, neither of which has these issues. Not to mention, I know that many of these problems are a result of adding so many repositories on top of the base Arch ones - at least as far as upgrades are concerned.
If this was in a VM I would go to great lengths to specify as such.
And I apologize in return for the rather harsh way I came across. The common (and frustrating) nature of your comment didn’t deserve the terseness of my response.
See: every big AAA game release lately. Even on Windows, having to nuke your graphics drivers and install a specific version from some random forum is generally accepted as fine, like it’s just how PC gaming is.
Never had to do that since I was ROM-hacking an old RX 480 for Monero hashrates. In fact, on my Windows 11 partition (used for HDR gaming, which isn’t supported on Linux yet), I haven’t needed to reinstall the NVIDIA driver even when converting from a QEMU image to a full-fat install.
When I see those threads, it often comes across as a bunch of gamers just guessing at a potential solution and often being “right” for the “wrong” reasons. Especially when the result is some convoluted combination of installs and uninstalls with “wiping directories and registry keys”.
But, point taken: the lengths gamers will go to for an extra 1-2 FPS, even when it’s unproven, dangerous, and dumb, are almost legendary.
They’re probably okay for most users, especially the gamer kind.
Eh, IDK - the amount of breakage I got simply trying to upgrade the system after a few days would probably be incredibly hostile to a less technical user/gamer.
Sure, if most things worked out-of-the-box and upgrades were seamless, I’d agree - but as it stands, it seems like you need to know Arch and Linux itself fairly well to get the most out of Garuda Linux.
I think Chakra has largely been abandoned these days, but when it was the newest kid on the block I actually appreciated the REALLY GOOD Qt 5 experience that was lacking on other distros at the time. That being said, not being able to install ANY GTK thing was definitely a deal-breaker. These days the project is very dead, and the best “KDE” experience is on KDE neon.
I really doubt that. Again - advanced user here - with numerous comparison points to other arch based distros. I also maintain large distributed DB clusters for Fortune 100 companies.
If it was something not on the latest version - it’s not due to my lack of effort or knowledge, but instead due to the terrible way Garuda is managed.
What, am I supposed to compile kernel modules from scratch myself? Never needed to do that with Endeavour, Manjaro, or just Arch.
If Garuda’s install (and subsequent upgrade) doesn’t fetch the latest from the Arch repos, that’s on them.
EDIT: Also, these non-answers are tiresome, low effort, and provide zero guidance on any matter. I know every single kernel change since 5.0 that impacted my hardware. I have RSS feeds for each of my hardware components, and if Linux or a distro ships an enhancement for my hardware, I’m usually aware well before it is released. If you were to point to any bit of my hardware, I can tell you, for certain, which functionalities are supported, which have bugs, and the common workarounds.
If you want this type of feedback to be valuable, then let me know if a new issue/regression has arisen given the list of hardware I’ve supplied.
Valuable: “Perhaps it was the latest kernel X which shipped some regressions for Nvidia drivers that causes compositor hitching on KWin”
Utterly Useless: “It’s very likely some drivers are not up to date or compatible with your system.”
I dunno, my OLED panel has some notable image retention issues - and a screensaver does appear to help in that regard.
Eh, I went back to screen savers due to my use of OLED panels. Better than a static lock-screen image for sure.
“Your application” - the customers’, you mean. Our DB definitely does its own rate limiting, and it emits rate-limit warnings and errors as well. I didn’t say we advertised infinite IOPS; that would be silly. We are totally aware of the scaling factors there, and to date IOPS-based scaling is rarely a Sev1 because of it. (Oh no, p99 breached 8ms. Time to talk to Mr. Customer about scaling up soon.)
The problem is that the resulting cluster is so performant that you could load in 100x the amount of data and not notice until the disk fills up. And since these are NVMe drives on cloud infrastructure, they are $$$.
So usually what happens is that the customer fills up the disk arrays so fast that we can’t scale the volumes/cluster fast enough to avoid stop-writes let alone get feedback from the customer in time. And now that’s like the primary reason to get paged these days.
We generally catch gradual disk space increases from normal customer app usage. Those give us hours to respond and our alerts are well tuned. It’s the “Mr. Customer launched a new app and didn’t tell us, and now they’ve filled up the disks in 1 hour flat.” that I’m complaining about.
It is definitely an under-provisioning problem. But that under-provisioning problem is caused by the customers usually being very, very stingy about what they are willing to spend. Also, to be clear, it isn’t buckling. It is doing exactly the thing it was designed to do: stop writes to the DB once there is no disk space left. And before that point, it is constantly throwing warnings to the end user. Usually these customers ignore those warnings until they reach the stop-writes state.
In fact, we just had to give an RCA to the c-suite detailing why we had not scaled a customer when we should have, but we have a paper trail of them refusing the pricing and refusing to engage.
We get the same errors, and we usually reach out via email to each of these customers to help project where their data is going and scale appropriately. More frequently, though, they are adding data at such a fast clip that not responding for 2 hours would lead them directly into stop-writes status.
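The “project where their data is going” step can be as simple as a linear extrapolation of disk usage. A toy sketch, with all numbers and thresholds invented for illustration:

```python
def hours_until_full(used_gb, capacity_gb, growth_gb_per_hour):
    """Linear projection of time until the disk hits capacity.

    Returns None if usage isn't growing (nothing to project).
    """
    if growth_gb_per_hour <= 0:
        return None
    return (capacity_gb - used_gb) / growth_gb_per_hour

# Gradual app growth: plenty of lead time for well-tuned alerts.
print(hours_until_full(used_gb=600, capacity_gb=1000, growth_gb_per_hour=5))    # 80.0
# An unannounced new workload: stop-writes in about an hour.
print(hours_until_full(used_gb=600, capacity_gb=1000, growth_gb_per_hour=400))  # 1.0
```

The gap between those two numbers is exactly the paging problem described here: the same alerting that gives hours of runway in the first case gives almost none in the second.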
This has led us to guessing where our customers are going to end up. Oftentimes we’re completely wrong and have to scale multiple times.
Workload spikes are the entire reason why our database technology exists. That’s the main thing we market ourselves as being able to handle (provided you gave the DB enough disk and the workload isn’t sustained long enough to fill the disks).
There is definitely an automation problem. Unfortunately, this particular line of our managed services cannot be automated. We work with special customers with special requirements - usually Fortune 100 companies with extensive change-control processes, custom security implementations, and sometimes no access to their environment unless they flip a switch.
To me it just seems to all go back to management/c-suite trying to sell a fantasy version of our product and setting us up for failure.
I’m really surprised no one mentioned Terra Invicta!
Basically, it’s what you’d get if the Three-Body Problem series were a grand strategy game.
In terms of grand strategy it is quite grand. So massive and complex that even 100 hours in, I haven’t completed a game.
That being said, it’s so addictive. I haven’t really played any other sci-fi games where you can take over multiple countries on Earth, take over other bodies in the solar system, and field a space navy to defend the planet.