• 00Sixty7@lemmy.world · 1 year ago

    “Poorly designed, partially functional software running with substandard hardware and subpar implementation designed by overextended engineers and burnt-out developers led by known megalomaniac malfunctions, local man astonished.”

    • gornar@lemmy.world · 1 year ago

      More transparency in your version of the headline than there will ever be in any of Elon’s dealings! Ever!

      • plz1@lemmy.world · 1 year ago

        I knew that was the Boeing 737 Max “issue” without clicking the link. When safety is an add-on cost, capitalism/profit margins negate safety.

        • Sharkwellington@lemmy.one · 1 year ago

          From Fight Club:

          A new car built by my company leaves somewhere traveling at 60 mph. The rear differential locks up. The car crashes and burns with everyone trapped inside. Now, should we initiate a recall? Take the number of vehicles in the field, A, multiply by the probable rate of failure, B, multiply by the average out-of-court settlement, C. A times B times C equals X. If X is less than the cost of a recall, we don’t do one.
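
          For what it’s worth, that decision rule is just an expected-value comparison. A quick sketch with made-up numbers (every figure below is hypothetical):

          ```python
          A = 1_000_000  # vehicles in the field (hypothetical)
          B = 0.0001     # probable rate of failure (hypothetical)
          C = 2_000_000  # average out-of-court settlement, in dollars (hypothetical)

          X = A * B * C              # expected payout: $200,000,000
          recall_cost = 300_000_000  # hypothetical cost of a recall

          # “If X is less than the cost of a recall, we don’t do one.”
          print("recall" if X >= recall_cost else "no recall")  # -> no recall
          ```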

        • maporita@unilem.org · 1 year ago

          I agree. But automation has made air travel safer by an order of magnitude. The problem with the 737 Max debacle was trying to use automation as a band-aid to avoid costly recertification and pilot training. They didn’t inform pilots about the new pitch control system and they didn’t train them on how to deal with runaway trim. Oh, and relying on a single sensor to detect AoA (angle of attack) was also a bad move. So, many mistakes … for which a lot of people died.

          Notwithstanding that, every day hundreds of planes rely on automation to help keep passengers safe.

          • bisq@lemmy.world · 1 year ago

            The other issue is they only had one of those angle of attack sensors, and they should’ve had more redundancy.

            • rm_dash_r_star@lemm.ee · 1 year ago

              You’d think aerospace engineers would have it down to reflex that things need to be fail-safe. It’s ironic that a system designed to make the plane safer actually crashed the plane. That one should get an award for world’s worst engineering.

              Like any accident, it wasn’t just one thing. The maker implemented a safety system that was not fault-tolerant, then the airline neglected to train pilots on how to deal with a failure of that system. In fact, that particular airline didn’t even know the system had been added to their planes. Bad engineering, communication, and training still happen in the industry, but really it’s pretty amazing how safe these machines are overall.

              Pilot error is still the cause of the majority of accidents. A big problem is that bad pilots who don’t pass regular exams can slip through the system because of management deficiencies. As with pilots, it happens in the medical industry, where bad doctors or nurses just get passed on from one hospital to the next. Employers fail to do proper checks on previous job performance.

              • bisq@lemmy.world · 1 year ago

                100%. A failure at every possible level. Shame on Boeing for outsourcing the design of the 737 Max. I believe it was contracted to India?

                I’m going to give the pilots a mild pass, since I’ve read they’re instructed to ignore their gut and trust the instruments, because the instruments are “always right” and their gut can be wrong.

              • GreyEyedGhost@lemmy.ca · 1 year ago

                Pretty sure I read when this was new that the design changes were considered minor enough that recertification wasn’t required. So I’d put that on Boeing, too. It’s obvious that airlines aren’t going to recertify on a functionally equivalent design, and also obvious that these weren’t equivalent designs.

            • Buddahriffic@lemmy.world · 1 year ago

              From my understanding of the situation, it did have two sensors, but the system would act if only one of them said the angle was too high. One of the fixes they added was that it warns the pilot instead of engaging if the two don’t agree.
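
              In other words, the fix amounts to a cross-check between the two sensors before acting. A toy sketch of that logic (not Boeing’s actual code; both thresholds below are made up):

              ```python
              AOA_LIMIT_DEG = 14.0      # hypothetical "angle too high" threshold
              DISAGREE_LIMIT_DEG = 5.5  # hypothetical allowed sensor disagreement

              def original_logic(left_deg: float, right_deg: float) -> str:
                  # Pre-fix behavior as described: act if EITHER sensor reads high.
                  return "engage" if max(left_deg, right_deg) > AOA_LIMIT_DEG else "normal"

              def fixed_logic(left_deg: float, right_deg: float) -> str:
                  # Post-fix behavior: if the sensors disagree, warn the pilot
                  # instead of engaging; only act when both agree the angle is high.
                  if abs(left_deg - right_deg) > DISAGREE_LIMIT_DEG:
                      return "warn"
                  return "engage" if min(left_deg, right_deg) > AOA_LIMIT_DEG else "normal"

              # A faulty left sensor reads 20 degrees while the right reads 5:
              print(original_logic(20.0, 5.0))  # engage (acts on the one bad reading)
              print(fixed_logic(20.0, 5.0))     # warn (flags the disagreement instead)
              ```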

      • const_void@lemmy.ml · 1 year ago

        Wasn’t Boeing using outsourced software engineers who were being paid $9/hr for that system?

    • MisterFrog@lemmy.world · 1 year ago

      I think the only way it can realistically work within the next decade is if all road construction is diligently recorded into a database and correct paths are predetermined and checked.

      Thing is, I’m sure there are literally millions of km of road across a country where the road designs only exist in a CAD drawing, or some realllly old drawing, or not at all.

      So basically, won’t happen this decade :P

      Or, ya know, build more autonomous vehicles that use tracks, like trains, or buses on predetermined routes.

      • irotsoma@lemmy.world · 1 year ago

        Or build more trains that are reliable and run often enough to eliminate the need for cars in many areas, which would significantly reduce the area that needs to be mapped.

        • bobs_monkey@lemm.ee · 1 year ago

          Trains, trams, anything. There are always going to be areas, especially in the US, where 100% public transit is a pipe dream, but we can certainly try. In my mountain town I’ve floated the idea of a gondola that runs above the main boulevard the length of the valley (~7-8 miles) and people look at me like I have 3 heads. The majority of our town’s services are along said road, as well as hotels and tourist hotspots, so it makes sense to me.

          Problem is, you get the old-school types who think “public transit” = “you’re taking away my car!” and it always goes nowhere.

  • inclementimmigrant@lemmy.world · 1 year ago

    And FSD was going to be delivered in 2018, 2019, 2020, just around the corner.

    Glad I never paid for that option when it was $5k, not to mention the ludicrous $15k they want for it now.

    • UnknownQuantity@lemmy.world · 1 year ago

      How are you liking driving the Tesla: the build, interior, etc.? I’ve heard they’re great at first, but then the novelty wears off and you start noticing the shortcomings.

      • sweetdude@lemmy.world · 1 year ago

        I’ll never get FSD until 3rd parties confirm it does just that. I don’t see it happening for another 20 years. But Auto-Pilot, that thing is life-changing, IMO. Highway trips are so much better, and now that I’ve owned an EV for the last 4 years, I’m never going back to gas. I think Teslas are actually great cars, if you get a decently built one. There’s a reason they’re selling over a million a year now.

      • Flying Squid@lemmy.world · 1 year ago

        Another big complaint I hear is that having to do all sorts of necessary things on the touchscreen while driving is a huge distraction.

      • WoahWoah@lemmy.world · 1 year ago

        I’ve had mine for 9 months. Still enjoying it. 0-60 in under 4 seconds never gets old for me. Summon is still great, and the looks on people’s faces when a car shows up with no one driving it are always hilarious.

  • Dr. Dabbles@lemmy.world · 1 year ago

    The software sucks, it always has sucked, and it’s always going to suck. The cameras are in the wrong place, there isn’t enough compute available, and the jitter in distance/size measurement because of non-existent stereoscopic cameras means there’s no hope for real depth measurement.
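
    To put numbers on the depth point: with a stereo pair, depth falls straight out of the disparity between the two images, and without one it has to be inferred from learned cues, which is exactly where the jitter comes from. A back-of-the-envelope sketch of the standard pinhole stereo model (all numbers illustrative):

    ```python
    FOCAL_LENGTH_PX = 1000.0  # hypothetical focal length, in pixels
    BASELINE_M = 0.30         # hypothetical spacing between two cameras

    def stereo_depth_m(disparity_px: float) -> float:
        # Standard stereo relation: depth = focal_length * baseline / disparity.
        return FOCAL_LENGTH_PX * BASELINE_M / disparity_px

    # Even with real stereo, a 1 px disparity error moves a 30 m estimate by ~3 m:
    print(stereo_depth_m(10.0))  # 30.0 m
    print(stereo_depth_m(9.0))   # ~33.3 m
    # With a single camera there is no disparity to measure at all; depth must be
    # guessed from apparent size and context, which is far noisier.
    ```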

    People got ripped off for $15k for this crap.

      • ours@lemmy.film · 1 year ago

        The reason is to save money so they can make more money. It always is about the $$$.

      • Dr. Dabbles@lemmy.world · 1 year ago

        Ego. Pure and simple, it’s ego at this point. At first it was being cheap, but now he can’t go back on the foolish things he said without needing to retrofit every single vehicle ever produced.

  • SecretSauces@lemmy.world · 1 year ago

    Elon’s motto: “Move fast and break stuff.”

    In this case, “stuff” may mean people, property, laws; take your pick.

  • Move to lemm.ee@lemmy.world · 1 year ago

    Lmao, the car can’t tell the difference between a green light facing the other road and the red light facing it.

    • WorseDoughnut 🍩@lemdro.id · 1 year ago

      I have an issue like this w/ Google Maps where it thinks a stop sign on a merging lane is for me, but that’s purely based on the Street View car camera info, so it’s mostly understandable.

      Meanwhile, Tesla has all these crazy high-tech real-time cameras and sensors at play and it can’t hazard a guess as to which direction the traffic lights are facing? Beyond stupid lol.

  • Fedizen@lemmy.world · 1 year ago

    Corporations only have an incentive to suppress results that don’t help them. This is why some third-party evaluator (such as a government agency) should be receiving automotive data separately for evaluation.

    Car companies could easily send encrypted camera data to a third-party data holder, encrypted so that both the client and the company can decrypt it. This would prevent the government from decrypting this data en masse, and when the car violates a law or crashes, it could be decrypted by either party.
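
    Mechanically, that’s just envelope encryption with two recipients. A minimal sketch using Python’s cryptography package (the keys, names, and payload here are purely illustrative):

    ```python
    from os import urandom

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def wrap(data_key: bytes, recipient_public_key) -> bytes:
        # Encrypt the per-recording data key to one recipient's public key.
        return recipient_public_key.encrypt(
            data_key,
            padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                         algorithm=hashes.SHA256(), label=None),
        )

    # Hypothetical keypairs held by the two parties allowed to decrypt.
    owner = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    company = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    camera_data = b"...raw camera/telemetry payload..."
    data_key = AESGCM.generate_key(bit_length=256)
    nonce = urandom(12)

    # The third-party holder stores only this record: the ciphertext plus one
    # wrapped copy of the data key per party. Either party alone can unwrap its
    # copy and decrypt; the holder by itself cannot decrypt anything en masse.
    record = {
        "nonce": nonce,
        "ciphertext": AESGCM(data_key).encrypt(nonce, camera_data, None),
        "key_for_owner": wrap(data_key, owner.public_key()),
        "key_for_company": wrap(data_key, company.public_key()),
    }
    ```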

    • Tetractys@lemmy.world · 1 year ago

      My thinking is similar: as long as these types of systems, which can directly kill members of the public, are developed by purely commercial interests with no oversight, they will always do the minimum on safety to get the $. For some reason a car plowing into people is more palatable than a plane falling out of the sky.

  • LittleLordLimerick@lemm.ee · 1 year ago

    So I hate Elon Musk and I think Tesla is way overhyped, but I do want to point out that singular anecdotes like this don’t mean anything.

    Human drivers run red lights and crash cars all the time. It’s not a question of whether a self-driving car runs a light or gets in a crash, it’s whether they do it more often than a human driver. What are the statistics for red lights run per mile driven?

    • LukeMedia@lemmy.world · 1 year ago

      I’d hazard a guess that it’s lower, but regardless, it shouldn’t be available to consumers yet if this is what they found in preliminary testing. Trying to hide it is quite disingenuous, though: of course it’s going to make mistakes in testing and even after, but trying to hide those mistakes and act as if they don’t exist is not how you treat your potential customer base.

    • DanTilDawn@lemmy.world · 1 year ago

      Okay, but now there is a tragic accountability scenario to deal with. You forget you are discussing human beings and not statistics.

      • LittleLordLimerick@lemm.ee · 1 year ago

        When we’re talking about public safety, it should be entirely about statistics. Basing public safety policy on feelings and emotions is how you get 3-hour-long TSA checkpoints at airports to prevent exactly zero attempted hijackings in the last 22 years.

        • DanTilDawn@lemmy.world · 1 year ago

          Personally (and this is why you simply aren’t getting your way: it’s a broad and legitimate concern), I’m not interested in letting some corporate special interest plow through the regulatory processes required to manage things such as accountability and justice frameworks before lethal robots get sovereignty on our public roads. Even if they are less lethal than people, who happen to have pesky rights afforded to them that aren’t afforded to autonomous vehicles. The data is great and hopeful for a good future, but the implementation matters. The TSA is a false equivalence; the situations are not the same.

          • LittleLordLimerick@lemm.ee · 1 year ago

            I am not saying we should exempt autonomous vehicle manufacturers from regulation. I’m actually saying the opposite: that we need to base any decision on a rigorous analysis of safety data for these vehicles, which means the manufacturers should be required to provide said data to regulatory agencies.

    • Perhaps@lemmy.world · 1 year ago

      It’s kind of like how people are more likely to die in a car accident on their way to the airport than they are to die in a plane. Yet people are more likely to be afraid of flying.

      There’s a sense of control that people tend to gravitate towards. In the aggregate, Teslas might run fewer red lights than humans. At the individual level there will certainly be “safe drivers” who run red lights at a lower rate, who end up dying because of the Tesla. It’s a hard pill to swallow.

      • LittleLordLimerick@lemm.ee · 1 year ago

        I’ll be honest here, I hate cars and the car-centered culture of the USA. I care way more about the victims of bad/careless/drunk/distracted drivers than I do about the bad/careless/drunk/distracted drivers themselves.

        If me being in a self-driving car means other people around me are more safe, then it’s not even a question.

    • WorseDoughnut 🍩@lemdro.id · 1 year ago

      I think what it really boils down to is that the vast majority of drivers who run red lights choose to do so out of stupidity, whereas someone trusting Tesla’s claims about their new “self-driving” car might not have the chance to stop the vehicle as it hurtles itself through a red light. So yes, in terms of raw numbers it will cause fewer accidents in some cases, but the fact that it can happen at all, when the average trusting consumer would never expect it of the car, should be a huge issue.

      Also, as far as liability goes, I’m horrified to think about what the future of vehicle injury lawsuits will look like in the US when the driver can blame the software and the company providing the software is run by a grifter asshole.

      • LittleLordLimerick@lemm.ee · 1 year ago

        Your concern seems to be for the pilot of the car that causes the accident. What about the victims? They don’t care if the car was being driven by a person or a computer, only that they were struck by it.

        A car is a giant metal death machine, and by choosing to drive one, you are responsible not only for yourself, but also the people around you. If self-driving cars can substantially reduce the number of victims, then as a potential victim, I don’t care if you feel safer as the driver. I want to feel less threatened by the cars around me.

        • WorseDoughnut 🍩@lemdro.id · 1 year ago

          You’ve managed to make up an angle to this that I made absolutely zero stance on, and then get mad about it, congrats.

          Everyone on the road near these stupid things is at risk when Tesla pretends like it’s road-safe and the moronic drivers trust it to obey traffic laws. The concern is obviously for everyone involved, not sure why you’re pretending I said otherwise.

          If I know I’m mostly surrounded by humans who on average don’t accelerate through red lights, I can make certain assumptions when I’m on the road. Yes, the car next to me could randomly swerve into my lane, but on average you can assume they won’t unless you also observe something happening farther ahead in their lane. When you start adding in the combination of bad-faith company and terminally naive driver I described above, you drastically increase uncertainty and risk to everyone within range of the nearest Tesla. The unknown in that equation is always going to be the stupid fucking Tesla.

          • LittleLordLimerick@lemm.ee · 1 year ago

            So then you’ve just circled back around to what I originally said: is it actually true that you’re at more risk near a Tesla than you are near a human driver? Do you have any evidence for this assertion? Random anecdotes about a Tesla running a light don’t mean anything because humans also run red lights all the time. Human drivers are a constant unknown. I have never and will never trust a human driver.

            • assassin_aragorn@lemmy.world · 1 year ago

              I would actually say yes, the Tesla poses more risk. Driving safety is all about anticipating what the other drivers are going to do. After commuting in Houston for 2-3 years, I became quite good at identifying scenarios where something dangerous could happen. I wasn’t always right about whether they would actually happen, but I was always prepared to take action in case they did. For instance, if the positioning is right for someone to suddenly cut you off, you can hang back and see if they’ll actually do it. If a larger car is next to you and you’re both making a turn, you can be wary of it spilling into your lane. I avoided a collision today, actually, because of that.

              We have a sense of what human drivers might do. We don’t have that sense for self driving cars. I can’t adequately predict when I need to take defensive actions, because their behavior is totally foreign to me. They may run a red light well after it’s turned red, while I would expect a human to only do that if it had recently changed. It’s very rare for someone to run a red when they pull up to a light that they’ve only seen as red.

              This same concept is why you can’t make a 100% safe self driving car. Driving safety is a function of everyone on the road. You could drive as safely as possible, but you’re still at the mercy of everyone else’s decisions. Introducing a system that people aren’t familiar with will create a disruption, and disruptions cause accidents.

              Everyone has to adopt self driving technology at about the same time. When it’s mostly self driving cars, it can be incredibly safe. But that in between where it isn’t fully adopted is an increase in risk.

              • LittleLordLimerick@lemm.ee · 1 year ago

                This same concept is why you can’t make a 100% safe self driving car. Driving safety is a function of everyone on the road. You could drive as safely as possible, but you’re still at the mercy of everyone else’s decisions. Introducing a system that people aren’t familiar with will create a disruption, and disruptions cause accidents.

                Again, we don’t need a 100% safe self driving car, we just need a self driving car that’s at least as safe as a human driver.

                I disagree with the premise that humans are entirely predictable on the road, and I also disagree that self-driving cars are less predictable. Computers are pretty much the very definition of predictable: they follow the rules and don’t ever make last-minute decisions (unless their programming is faulty), and they can be trained to always err on the side of caution.

            • WorseDoughnut 🍩@lemdro.id · 1 year ago

              You’re still missing the point. It’s not about how much the drivers around the Tesla should “feel safer” (and they absolutely shouldn’t); it’s about the misguided trust the Tesla driver has in its capability to operate autonomously. Their assumptions about what the car can or will do without the need for human intervention make them an insane risk to everyone around them.

              Also, the vast majority of Tesla owners are weird fanboys who deny every issue and critique. Do you really think this is an anecdotal edge case? They wouldn’t be caught dead admitting buyer’s remorse every time their early-access car software messes up. We’re lucky the person in the article was annoyed enough to actually record the incident.

              I would never fully trust a machine to operate a moving vehicle; to pretend it’s any less of an unknown is absurd. Anecdotal fanboying about how great the tech “should be” or “will be someday” doesn’t mean anything either.

              • LittleLordLimerick@lemm.ee · 1 year ago

                Their assumptions about what the car can or will do without the need for human intervention makes them an insane risk to everyone around them.

                Do you have statistics to back this up? Are Teslas actually more likely to get into accidents and cause damage/injury compared to a human driver?

                I mean, maybe they are. My point is not that Teslas are safer, only that you can’t determine that based on a few videos. People like to post these videos of Teslas running a light, or getting into an accident, but it doesn’t prove anything. The criteria for self-driving cars to be allowed on the road shouldn’t be that they are 100% safe, only that they are as safe or safer than human drivers. Because human drivers are really, really bad, and get into accidents all the time.

                • WorseDoughnut 🍩@lemdro.id · 1 year ago

                  The criteria for self-driving cars to be allowed on the road shouldn’t be that they are 100% safe,

                  This is where our complete disconnect is. IMO, when you put something on the road that has the capacity to remove control from the driver, it absolutely needs to be 100% reliable. To me, there is no justifiable percentage of acceptable losses for this kind of stuff. It either needs to be fully compliant or not allowed on the road around other drivers at all. Saying that humans are more likely to cause accidents and requiring automated systems to not endanger the lives of those in and around the vehicle are not mutually exclusive positions.

        • assassin_aragorn@lemmy.world · 1 year ago

          This is exactly the issue. The driver isn’t at fault, because they’re not even driving the thing. The program is. But are we going to prosecute a programmer who genuinely tried their best to make a good product?

          Unless we hold corporations overall liable for this, there is no recourse. And we should hold them liable. If they can be sued for accidents caused by self driving cars, they’re sure as hell going to make them as safe as technologically possible.

    • Coreidan@lemmy.world · 1 year ago

      The whole argument for self-driving cars is that they don’t make stupid mistakes. Just because they do it less than humans isn’t a good argument.

      I am putting my trust in the technology. It needs to work, otherwise I’m not going to use it. Working 99% of the time is not good enough, especially when the failure happens because the technology simply wasn’t good enough.

      • GONADS125@lemmy.world · 1 year ago

        I don’t think exceeding 99% is realistic with self-driving cars… There’s always going to be a margin of error with anything.

      • LittleLordLimerick@lemm.ee · 1 year ago

        That’s not the argument for self-driving cars at all. The argument for self-driving cars is that people hate driving because it’s a huge stressful time sink. An additional benefit of self-driving cars is that computers have better reaction times than humans and don’t stare at a phone screen while flying down the freeway at 70 mph.

        If we find that SDCs get into, say, 50% fewer serious accidents per 100 miles than human drivers, that would mean tens of thousands fewer deaths and hundreds of thousands fewer injuries. Your objection to that is that it’s not good enough because you demand zero serious accidents? That’s preposterous.
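
        For a sense of scale (assuming roughly 40,000 US road deaths per year, which is about the recent figure):

        ```python
        us_road_deaths_per_year = 40_000  # approximate recent US figure
        reduction = 0.50                  # the hypothetical "50% fewer" above

        # Even this crude arithmetic lands in the tens of thousands:
        print(us_road_deaths_per_year * reduction)  # ~20,000 fewer deaths per year
        ```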

        • Mkengine@feddit.de · 1 year ago

          Is there any progress on the question of who is responsible for such accidents? Of course fewer accidents are desirable, but if the manufacturer is suddenly responsible for deaths instead of the human behind the wheel, then there is a really big incentive to have zero serious accidents. If the system is not perfect from the start, someday the industry and the government will have to decide how to handle this.

          • LittleLordLimerick@lemm.ee · 1 year ago

            I believe the manufacturer should be liable for damage caused by their product due to manufacturing defects and faulty software. This incentivizes manufacturers to make the safest product possible to reduce their liability. If it turns out that it’s not possible for manufacturers to make these cars safe enough to be profitable, then so be it.

    • Blackmist@feddit.uk · 1 year ago

      The real question is “who goes to jail when that self driving truck flattens a family of four?”

      Because if the answer is nobody, then we shouldn’t have full self driving.

    • Fedizen@lemmy.world · 1 year ago

      This is a corporation breaking the law as a business decision. It should be noted that if we let small errors slide, they will lose the incentive to be accountable.

  • RadialMonster@lemmy.world · 1 year ago

    To be fair, that light was in a weird position, and I might have been just as confused if it’s an area I’m not familiar with. But we should expect better, too.

    • Move to lemm.ee@lemmy.world · 1 year ago

      A human being who is uncertain slows down and drives more cautiously until they understand what they’re dealing with. This thing does not.

    • Coreidan@lemmy.world · 1 year ago

      At least you understand that you’re confused and can react accordingly. A machine has no capability of knowing it’s confused and instead goes full throttle into a situation it doesn’t understand.

      I’d rather take my chances with a human; at least humans can think.