  • Depends on the vendor for the specifics. In general, they don’t protect against an attacker who has gained persistent privileged access to the machine, only against theft.
    Since the key either can't leave the TPM or is useless without it (some TPMs have one internal key that can never leave the chip; they generate new keys for you and hand them back encrypted with that internal key, so you get the protection without needing to worry about storage on the chip; rough sketch below), the attacker needs to remain undetected on the server for as long as they want to use it, which is difficult for anyone less sophisticated than an advanced persistent threat.

    The Apple system, to its credit, validates the user and the application before letting a key be used. That's generally good for security, but it means that if you want to share a key between users, you probably won't be using the secure enclave.

    Most of the trust checks end up being the TPM proving itself to the remote service that's doing the checking. For example, when you use your phone's biometrics to log into a website, part of that handshake is the TPM on the phone proving that it was made by a company, to a spec, that the standards body has validated as being secure in the way it claims.
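
    To make the “useless without the chip” idea concrete, here's a rough TypeScript/Node sketch that simulates the wrapping pattern in plain software. A real TPM does this in hardware behind APIs like PKCS#11 or tpm2-tss; everything below, names included, is purely illustrative.

    ```typescript
    import {
      createCipheriv,
      createDecipheriv,
      createPrivateKey,
      generateKeyPairSync,
      randomBytes,
      sign,
      verify,
    } from "node:crypto";

    // Stand-in for the chip's internal key: it never leaves this "module".
    const internalKey = randomBytes(32);

    // "Generate a new key and return it encrypted with the internal key."
    // The host only ever sees the wrapped blob, never the raw private key.
    function generateWrappedKey() {
      const { publicKey, privateKey } = generateKeyPairSync("ed25519");
      const iv = randomBytes(12);
      const cipher = createCipheriv("aes-256-gcm", internalKey, iv);
      const wrapped = Buffer.concat([
        cipher.update(privateKey.export({ type: "pkcs8", format: "der" })),
        cipher.final(),
      ]);
      return { publicKey, wrapped, iv, tag: cipher.getAuthTag() };
    }

    // To use the key, the host hands the blob back; only the "chip" can unwrap it.
    function signWithWrappedKey(blob: ReturnType<typeof generateWrappedKey>, data: Buffer) {
      const decipher = createDecipheriv("aes-256-gcm", internalKey, blob.iv);
      decipher.setAuthTag(blob.tag);
      const der = Buffer.concat([decipher.update(blob.wrapped), decipher.final()]);
      const privateKey = createPrivateKey({ key: der, format: "der", type: "pkcs8" });
      return sign(null, data, privateKey); // Ed25519 signs the raw data directly
    }

    const blob = generateWrappedKey();
    const message = Buffer.from("login challenge from the remote service");
    const signature = signWithWrappedKey(blob, message);
    console.log(verify(null, message, blob.publicKey, signature)); // true
    ```

    The point of the pattern: stealing the wrapped blob off the disk is useless without the chip holding the internal key, which is exactly why the attacker needs ongoing, undetected access to the machine.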


  • Package signing is used to make sure you only get packages from sources you trust.
    Every Linux distro does it and it’s why if you add a new source for packages you get asked to accept a key signature.

    For a long time, the keys used for signing were just files on disk, and you protected them by protecting the server they were on; they could technically still be stolen and used to sign malicious packages (there's a sketch of the basic sign-and-verify flow at the end of this comment).

    Some advances in chip design and cost reductions later, we now have what's often called a “secure enclave” or “trusted platform module”: in general, a provider of a non-exportable key.
    It's a little chip that holds or manages a cryptographic key such that getting the signing key off the chip is impossible, or exceptionally difficult. That makes it nearly impossible to steal the key without physically stealing the server, which is much easier to prevent by putting it in a room with doors, and impossible to do without detection, so a forged package becomes vastly less likely.

    There are services that provide the infrastructure needed to do this, but they cost money, and it takes time and money to build it into your system in a way that's reliable and doesn't lock you to a vendor if you ever need to switch for whatever reason.

    So I believe this is Valve picking up the bill to move Arch's package infrastructure security up to the top tier.
    It was fine before, but that upgrade is expensive for a volunteer- and donation-based project and cheap for a high-profile company that might legitimately be worried that its use of Arch on physical hardware increases attackers' interest.
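
    Since the sign-and-verify flow itself is simple, here's a hedged TypeScript/Node sketch of its shape: the build server signs a package with a private key, and the client verifies the detached signature against the public key it already trusts. Real distros do this with GPG/OpenPGP over packages and repo metadata; the file names here are placeholders.

    ```typescript
    import { generateKeyPairSync, sign, verify } from "node:crypto";
    import { readFileSync, writeFileSync } from "node:fs";

    // Placeholder artifact so the sketch runs end to end.
    writeFileSync("package.tar.zst", "pretend this is a package");

    // Build-server side: the signing key. When it's "just a file on disk",
    // this key pair is the thing an attacker would try to steal.
    const { publicKey, privateKey } = generateKeyPairSync("ed25519");

    // Sign the package and publish a detached signature next to it.
    const pkg = readFileSync("package.tar.zst");
    writeFileSync("package.tar.zst.sig", sign(null, pkg, privateKey));

    // Client side: the package manager already trusts the public key (that's the
    // "accept this key signature" prompt when you add a repo) and refuses
    // anything whose signature doesn't check out.
    const downloaded = readFileSync("package.tar.zst");
    const sig = readFileSync("package.tar.zst.sig");
    if (!verify(null, downloaded, publicKey, sig)) {
      throw new Error("signature check failed: refusing to install");
    }
    console.log("signature OK, safe to install");
    ```

    Moving to a secure enclave doesn't change that flow at all; it just means the private key lives in hardware and the build system asks the chip to perform the signing step instead of reading a key file.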


  • ricecake@sh.itjust.works to Science Memes@mander.xyz · Balls · 25 days ago

    So, at the time (1930), Ball actually would have qualified as big business in the sense that you mean.
    Home canning was very popular, and they consistently bought out smaller companies.
    Since they were privately owned, it's tricky to find specifics about their value, but they were “found a university”, “own a company town or two”, “chairman of the Federal Reserve” levels of rich.

    So actually a pretty good use of government.



  • ricecake@sh.itjust.works to Science Memes@mander.xyz · Balls · 26 days ago

    The weird thing is, they don't actually sell the jars anymore. “Ball jars” haven't been made by the Ball Corporation since their antitrust lawsuits for being a fucking jar monopoly. They sold the “Ball jar” rights and now only do aluminum cans for food packaging, plus high-end satellites and satellite launch systems.


  • So, you’re correct that active emergencies take priority.

    That being said, in essentially every place that has 911, both numbers connect to the same place and the only real difference is pick-up order and default response.
    It's the emergency number not simply because it's only for emergencies, but because it's the one number, the same everywhere, that you need to know in the event of an emergency.

    It should be used in any situation that needs to be dealt with by someone now, and that someone isn't you. Finding that a serious crime has occurred is an emergency, even if the perpetrator is gone and the situation is stable.
    A dead person, particularly a potential murder, generally needs to be handled quickly.

    It’s also usually better to err on the side of 911, just in case it is an emergency that really needs the fancy features 911 often gives, like location lookups.




  • https://daniel.haxx.se/blog/2020/12/17/curl-supports-nasa/

    https://daniel.haxx.se/blog/2023/02/07/closing-the-nasa-loop/

    Their process for validating software doesn't have a box for “open source”, and basically assumes it's either purchased or contracted. So someone in risk assessment just gets a list of software libraries and goes down it, checking that they have the required forms.

    As the referenced talk mentions, the people using the software understand that all the testing and everything is entirely on them, and that sending these messages is bothersome and unfair, and they’re working on it. Unfortunately, NASA is also a massive government bureaucracy and so process changes are slow, at best.
    The TLAs don’t generally help NASA, and getting them involved would unfortunately only result in more messages being sent.

    As for contributions, I think that turns into an even worse can of worms, since software developed by or for the US government generally isn't just open source, but public domain. I think you'd end up with a big mess of licensing horror if you tried to get money or official relationships involved. It's why SQLite is public domain: it was developed at the behest of the US government.

    Mostly just context for what you said. NASA isn't being arrogant, they're being gigantic. One group does its due diligence in-house while another branch goes down a checklist, sees a form is missing, pops off an email, and embarrasses the hell out of the first group.

    The time limit thing is weird, but it's a common practice in bureaucracies, public or private. You stick a timeline on the request to convey your level of urgency and to establish some manner of timeline for the other person to work with. Read the line again, but extremely literally: “we have a time frame of 5 days for a response”. “Our audit timeline guessed that it would take a business week for you to reply, so if you take longer we're behind schedule”. The threatening version is “your response is required on or before five business days from the date of this message”.
    The presumption is that the person on the other end is also working through a task queue that they don't have much personal investment in, and is generally good natured, so you're telling them: “I don't expect you to jump on this immediately, but wherever you can find a moment to reply this week would keep anyone from bothering me, and me from needing to send another email or try to find a phone number.”



  • Paul Eggert is the primary maintainer of tzdb, and has been for the past 20 years.
    tzdb is the database that maintains all of the information about time zones, time zone changes, leap seconds, and whatever else (small example below). It's present on just about every computer on the planet and plays an important role in making sure all of the things do time correctly.

    If he gets hit by a bus, ICANN is responsible for finding someone else to maintain the list.

    SQLite is the most widely used database engine, and is primarily developed by a small handful of people.

    ImageMagick is probably the most iconic example. Primarily developed by John Cristy since 1987, it’s used in a hilarious number of places for basic image operations. When a security bug was found in it a bit ago, basically every server needed to be patched because they all do something with images.
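
    To give a sense of how quietly ubiquitous tzdb is, here's a small TypeScript sketch (zone choices are arbitrary): JavaScript engines ship a copy of the database and expose it through Intl, so every line of output below leans on Eggert's work without ever naming it.

    ```typescript
    // One instant, rendered in a few IANA zones. The offsets and daylight-saving
    // behaviour are looked up in tzdb by the runtime, not computed here.
    const instant = new Date("2025-03-09T06:30:00Z"); // right around the US spring-forward change

    for (const timeZone of ["America/New_York", "Europe/London", "Asia/Kolkata"]) {
      const formatted = new Intl.DateTimeFormat("en-US", {
        timeZone,
        dateStyle: "medium",
        timeStyle: "long",
      }).format(instant);
      console.log(`${timeZone.padEnd(20)} ${formatted}`);
    }
    ```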



  • ricecake@sh.itjust.works to Memes@lemmy.ml · Meh burger · 2 months ago

    Most of them are mediocre. Most burger places were mediocre, and then the American gastropub trend saw burgers being made nice as opposed to diner food or bar food. They could also charge more money because they were making nicer food.

    Eventually a bunch of the mediocre places shifted to try to also be nice, but mostly just increased prices, changed decor, and started using the word aioli more than mayo. Oh, and pretzel buns on burgers that got taller without being bigger and are cumbersome to eat.

    On the plus side, if you like a Swiss burger with a garlic aioli, a burger with a fried egg on it, or a burger with 2 pieces of bacon, a spicy BBQ sauce, and fried onion strings, and you're in the mood for some fries with bits of peel on them and a garlic Parmesan butter, then you know exactly what they're going to put in front of you and exactly what it'll taste like.

    Mediocre. Not bad, but definitely not the best you’ve ever had.


  • Google Analytics is loaded by JavaScript, and there are plenty of other Google Analytics-like services that are loaded the same way.

    Updating a website can take time, and usually involves someone with at least a passing knowledge of development.

    Google tag manager is a service that lets you embed one JavaScript thing in your page, and then it will handle loading the others. This lets marketing or analytics people add and manage such things without needing to make a full code deployment.
    It also lets you make choices about when and how different tracking events for different services are triggered.

    Its intended usage is garbage tracking metrics and advertising. Some sites are built more by marketing than by developers, and they'll jam functional stuff in there, which causes breakage if you block it. Those sites are usually garbage though, so nothing of value was lost.
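
    Roughly, the “one script that loads the others” part looks like this. This is a browser-side TypeScript sketch of the pattern, not Google's actual snippet, and the tag URLs are made up:

    ```typescript
    // Hypothetical tag configuration, normally edited in the tag manager's UI
    // by marketing/analytics people rather than shipped in a code deployment.
    const tags = [
      { name: "analytics", src: "https://example.com/analytics.js", enabled: true },
      { name: "ads-pixel", src: "https://example.com/ads-pixel.js", enabled: false },
    ];

    // The single embedded loader: for every enabled tag, inject a <script> element.
    // Block the loader and none of these ever reach the page, which is also why
    // sites that stuff functional code in here break under content blockers.
    for (const tag of tags) {
      if (!tag.enabled) continue;
      const el = document.createElement("script");
      el.async = true;
      el.src = tag.src;
      document.head.appendChild(el);
    }
    ```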





  • Oh interesting, I’d be happy to be wrong on that. :)

    I figured they'd factor the staffing costs into what they charge the insurance, so it'd be more profit due to higher fixed costs, longer treatment, and some fixed percentage profit margin.
    The estate costs thing is unfortunately an avenue I hadn’t considered. :/

    I still think it would be better if we removed the profit incentive entirely, but I’m pleased if the two interests are aligned if we have to have both.


  • It's a money saver, so its profit model is all wonky.

    A hospital, as a business, will make more money treating cancer than it will doing a mammogram and having a computer identify issues for preventative treatment.
    A hospital, as a place that helps people, will still want to use these scans widely because “ignoring preventative care to profit off long term treatment” is a bit too “mask off” even for the US healthcare system and doctors would quit.

    Insurance companies, however, would pay just shy of the cost of treatment to avoid paying for treatment.
    So the cost will rise to be the cost of treatment times the incidence rate, scaled to the likelihood the scan catches something, plus system costs and staff costs.

    In a sane system, we’d pass a law saying capable facilities must provide preventative screenings at cost where there’s a reasonable chance the scan would provide meaningful information and have the government pay the bill. Everyone’s happy except people who view healthcare as an investment opportunity.
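
    Back-of-the-envelope version of the pricing logic a couple of paragraphs up, with numbers invented purely for illustration:

    ```typescript
    // Hypothetical inputs: none of these figures are real.
    const treatmentCost = 150_000; // cost of treating a case caught late
    const incidenceRate = 0.005;   // share of screened patients who actually have the disease
    const detectionRate = 0.85;    // share of those cases the scan catches early
    const overheadPerScan = 40;    // system + staff cost per scan

    // The most an insurer would rationally pay per scan sits just under the
    // expected treatment cost it avoids, plus the overhead of running the scan.
    const ceiling = treatmentCost * incidenceRate * detectionRate + overheadPerScan;
    console.log(`Price ceiling per scan: ~$${ceiling.toFixed(2)}`); // ~$677.50
    ```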