That is true, but perhaps inappropriate in this case. Humans are not predictable, nor is weather, nor are the actual outcomes of policy decisions, nor any number of other things critical to a functioning society. We cope with most of these by building systems that are somewhat resilient, that account for the lack of perfection, and by making adjustments over time to tweak the results.
I think a better analogy than the oil refinery might be economic or social policy. We are always fiddling with inputs and processes to get the results we desire. We never have perfectly predictable outcomes, yet somehow we mostly manage to get things approximately correct. And that doesn’t even touch the issue that we can’t seem to agree on what “correct” is, though we do seem to be in general agreement that 1920 was better than 1820 and that 2020 was better than 1920.
If we want AI to be the backbone of industry, then the current state of the art probably isn’t suitable and the LLM/transformer systems may never be. But if we want other ways to browse a problem space for potential solutions, then maybe they fit the bill.
I don’t know, and I suspect we’re still a decade away from really being able to tell whether these things are net positive or not. Just one more thing we have difficulty predicting, so we have to be sure to hedge our bets.
(And I apologize if it seems I’ve just moved the goal posts. I probably did, but I’m not sure that I, or anyone else, knows enough at this point to lock them in place.)