• 0 Posts
  • 14 Comments
Joined 1 year ago
Cake day: February 6th, 2025

  • This has been an extolled benefit of the new Hall-effect/TMR keyboard switch designs.

    Because they report a continuous actuation level rather than a binary contact, you can define in software exactly where in the key travel the “press down” signal fires, including cutting the press off the instant the key stops traveling down and re-arming it on the reverse stroke, effectively eliminating pre-travel.

    These boards have apparently even started getting banned in competitive play, from what I’ve heard. Caveat emptor: I’m not into the comp gaming scene.
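    The “rapid trigger” behavior described above can be sketched in a few lines. This is an illustrative model only (made-up threshold values; real firmware works on raw sensor readings, not any vendor’s actual implementation):

```python
# Illustrative rapid-trigger model for an analog (Hall-effect/TMR) switch.
# Depth is normalized: 0.0 = key at rest, 1.0 = bottomed out.
# Threshold values are hypothetical, chosen only for the sketch.

class RapidTrigger:
    def __init__(self, actuation=0.4, delta=0.02):
        self.actuation = actuation   # depth for the first actuation
        self.delta = delta           # travel reversal needed to toggle state
        self.pressed = False
        self.turning_point = 0.0     # deepest/shallowest depth since last toggle

    def update(self, depth):
        """Feed the current key depth; return whether the key counts as pressed."""
        if self.pressed:
            # Track the deepest point; release the moment travel reverses upward.
            self.turning_point = max(self.turning_point, depth)
            if self.turning_point - depth >= self.delta:
                self.pressed = False
                self.turning_point = depth
        else:
            # Track the shallowest point; press again the moment travel reverses down.
            self.turning_point = min(self.turning_point, depth)
            if depth >= self.actuation and depth - self.turning_point >= self.delta:
                self.pressed = True
                self.turning_point = depth
        return self.pressed
```

    The point being: the press releases as soon as the key starts moving up and re-fires as soon as it moves down again, anywhere in the travel, which is exactly what a fixed mechanical actuation point cannot do.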


  • My experience as well.

    I’ve been writing Java lately (not my choice), which has boilerplate, but that’s never been an issue for me because the Java IDEs have all had tools for over a decade that eliminate it. Class generation, main methods, method stubs, default implementations, and interface stubs can all be generated easily in Eclipse, for example.

    Same goes for tooling around (de)serialization and class/struct definitions. I see that touted as a use case for LLMs, but tools have existed[1] for doing that since before LLMs, and they’re deterministic and computationally free compared to neural nets.


    1. e.g. https://transform.tools/json-to-java ↩︎
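    The kind of deterministic codegen referenced above is simple enough to sketch. This toy version (my own illustration, not how transform.tools actually works) maps a flat JSON sample to a Java class skeleton; it handles only top-level string/number/boolean fields:

```python
import json

# Toy deterministic JSON-to-Java class generator, in the spirit of the
# pre-LLM tools mentioned above. Flat objects only; no nesting, lists,
# or null handling. Same input always yields the same output.
JAVA_TYPES = {str: "String", bool: "boolean", int: "long", float: "double"}

def json_to_java(class_name, sample):
    fields = json.loads(sample)
    lines = [f"public class {class_name} {{"]
    for name, value in fields.items():
        # type(value) lookup is exact, so bool maps to boolean, not long
        lines.append(f"    public {JAVA_TYPES[type(value)]} {name};")
    lines.append("}")
    return "\n".join(lines)
```

    A few dozen lines of table-driven string pasting; no GPU required.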




  • It’s been interesting (though mostly I feel bad for the people being exploited by these AI companies) how this manifests in some highly clustered ways. Angela Collier posted a video several months ago which covers almost exactly the same kind of AI-physics posting in detail.

    https://youtu.be/TMoz3gSXBcY

    The whole video is great and relevant, however if you’re strapped for time then just start at 24:20 and within a couple minutes you’ll see examples very similar to this OP.


  • Glitchvid@lemmy.world to Programming@programming.dev, *Permanently Deleted*
    2 months ago

    The other reply shares much of my thinking: just as using psychoactive drugs can trigger episodes, so can interacting with sycophantic AI chatbots. If you are seeing a professional for help, I encourage you to share with them what you have shared with us today.

    By your own admission, and the nature of psychosis, I don’t think further engagement is going to do any good.




  • I typically just specify the height of the video and let the browser figure out the width and aspect ratio. The most annoying layout shift is the vertical kind anyway, so that solves it to my satisfaction.

    That said, I also use the poster attribute of the video tag and set preload to none. This produces vastly faster page loads, since images are a fast path compared to the browser fetching a chunk of video and decoding it just to display a cover image. I have a set of scripts that generate the poster images for me: I specify the frame number I want from the video, and ffmpeg produces an AVIF.
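    The setup described above looks roughly like this (filenames, dimensions, and frame number are placeholders):

```html
<!-- Height is fixed so the layout doesn't shift vertically; the browser
     derives width and aspect ratio from the media itself.
     preload="none" plus a poster image means no video bytes are fetched
     or decoded just to paint a cover frame. -->
<video height="480" poster="clip-poster.avif" preload="none" controls>
  <source src="clip.mp4" type="video/mp4">
</video>
```

    The poster can be extracted with something like `ffmpeg -i clip.mp4 -vf "select=eq(n\,240)" -frames:v 1 clip-poster.avif`, with the frame number picked per video.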




  • Multi-cloud is far from trivial, which is why most companies… don’t.

    Even if you are multi-cloud, you will be egressing data from one platform to another and racking up large bills (imagine putting CloudFront in front of a GCS endpoint, lmao), so you are incentivized to stick to a single platform. I don’t blame anyone for being single-cloud, given the barriers the providers put up and how difficult maintaining your own infrastructure is.

    Once you’re large enough to afford tape libraries, then yeah, having your own offsite storage for large backups makes a lot of sense; otherwise the convenience and reliability (when AWS isn’t nuking your account) of managed storage is hard to beat. Cold HDDs are not great, and M-DISC is pricey.


  • In this guy’s specific case, it may be financially feasible to back up onto other cloud solutions, for the reasons you stated.

    However, public cloud is used for a ton of different things. If you have 4 TiB of data in Glacier, you will pay through the absolute nose pulling it down into another cloud; highway-robbery prices.

    Further, as soon as you’re talking about more than just code (say: UGC, assets, databases), the amount of data needing to be “egressed” from the cloud balloons, as does the price.
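    As a back-of-the-envelope illustration of the 4 TiB case (the per-GiB rates below are assumptions for the sketch, not current AWS pricing), the egress charge alone dominates:

```python
# Rough cost sketch for pulling 4 TiB out of Glacier into another cloud.
# Rates are illustrative assumptions, NOT actual AWS pricing.
GIB_PER_TIB = 1024

data_gib = 4 * GIB_PER_TIB            # 4096 GiB
egress_per_gib = 0.09                 # assumed internet egress, $/GiB
retrieval_per_gib = 0.01              # assumed Glacier retrieval, $/GiB

egress_cost = data_gib * egress_per_gib        # ~$368.64
retrieval_cost = data_gib * retrieval_per_gib  # ~$40.96
total = egress_cost + retrieval_cost           # ~$409.60
```

    Hundreds of dollars to move data you already own, every time you do it, which is exactly the lock-in incentive described above.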




  • I don’t think it’s hyperbole to say a significant percentage of Git activity happens on GitHub (and other “foundries”) – which are themselves a far cry from efficient.

    My ultimate takeaway on the topic is that we’re stuck with Git’s very counterintuitive porcelain, and only satisfactory plumbing, regardless of performance/efficiency; but if Mercurial had won out, we’d still have its better interface (and IMO workflow), and any performance problems could’ve been addressed by a rewrite in C (or the Rust one that is so very slowly happening).