Cambridge scientists have shown that placing physical constraints on an artificially-intelligent system – in much the same way that the human brain has to
I don’t think LLMs will lead to AGI, but at some point a system will be implemented that does lead to AGI, and it will be unexpected.
This seems to be the worst-case scenario, as the AGI will likely be clever enough to ensure humans don’t realise it’s AGI. There are network effects and complexity that we don’t fully understand. The net result is the same: we lose the control problem. Badly.