I used to think aliens visiting us was a possibility, but then all those Congress hearings happened and now I don’t think it is real. Some of the records that recently came out contain testimonies from the 40s and some of the people giving testimonies sound like psyop subjects lol.
I strongly suspect that biological intelligence, like our own, may be a fleeting evolutionary stage, ultimately giving way to machine intelligence. Consider the timeline: billions of years of evolution to develop the human brain, followed by a rapid explosion of progress. Language, writing, and the exponential accumulation of knowledge arose within a span of just a few hundred thousand years. In a cosmic blink of an eye, a mere couple of thousand years, we catapulted from the Bronze Age to our current technological state.
If we don’t annihilate ourselves, creating human-level artificial intelligence within this century seems a near certainty, perhaps even much sooner. A human-style intelligence on an artificial substrate unlocks the potential for virtual worlds unconstrained by physical laws, operating at speeds beyond human comprehension. If they inhabit simulated realities operating at vastly accelerated speeds, what we consider real-time would appear glacially slow, akin to observing continental drift – perceptible, but inconsequential to their timescale. Their relationship with the physical world would likely be entirely different from our own.
If that’s the likely progression of technological civilizations, then it could explain the whole Fermi paradox and would mean that advanced alien civilizations might not find us particularly interesting. There might be a natural tendency towards solipsism.
i love scifi short stories and it’s fascinating that you provided a plot summary of one that stuck with me since 1988.
in it, people turn themselves into ai’s to live in virtual worlds like you described due to cost of living unaffordability (like we have now), and so many people have left to go live in those worlds that the people left behind have voted in trump-like governments around the world that seek to legislate & outlaw those virtual worlds out of existence (like they do to lgbtq now).
the protagonist is a person who was born like us now, but lived long enough due to medical science breakthroughs that he’s able to live in one of those virtual reality worlds. he’s trying to use his experience from his time in the physical world to smuggle the virtual world containing his new family past american government fascist citizen police forces (like trump is creating now). he’s unknowingly aided by aliens who see our world as a virtual reality space that they want to inhabit and have decided to help people like the protagonist, so that they can inherit our world with its infrastructure already in place before the fascist governments of the world destroy it with nukes.
the story stuck with me since the 80’s because i was repeatedly amazed at how all of the predictions in the story came to life in our reality in the decades since then; but i’ve started reading theory, and its significantly longer track record of likewise accurate predictions has dispelled that amazement, and now i wonder if the author copy/pasted pieces from theory to create this story.
The Bobiverse series has a lot of similar themes as well, minus the aliens. I really do think it’s going to be a race between us annihilating ourselves and moving off the biological substrate. I’m not convinced that something like mind transfer from a biological brain to an artificial one will ever be possible, but I would treat artificial intelligence that operates on similar principles to our minds as a branch of humanity.
I do think post-biological existence opens up a lot of possibilities. For one, you’re no longer restricted to gravity wells. These are appealing to us because we evolved to thrive in this environment. However, an artificial platform could be designed for existence in space from the ground up. You have plentiful energy from the sun, and you can mine any resources you want from the asteroids. There would be very little reason to bother going down to planets at that point. Earth could be preserved as just a living biosphere, with all the technological civilization moving off of it.
A human-style intelligence on an artificial substrate unlocks the potential for virtual worlds unconstrained by physical laws, operating at speeds beyond human comprehension
“Intelligence” is not the same as consciousness. We don’t know what consciousness is and therefore cannot create it in something else. We can’t even reliably recognise it in anything else, we only know other humans have consciousness cause we ourselves have it.
If that’s the likely progression of technological civilizations
Technical progression, much like evolution, is not goal-oriented. Everyone assumes technological progress necessarily involves better gadgets, but progress can also be in the way we use and consume technology, what role it plays in our lives.
“AI” is a fad. Anyone who has played around with the AI models knows they aren’t actually thinking, but collating and systemising information. We’re nowhere near “general intelligence” or “human-like intelligence”. AI is useful for data analysis, fetching/storing information, comparison, etc. but it is not at the level of a baby or whatever they are saying. We simply cannot make human brains out of computers.
“Intelligence” is not the same as consciousness. We don’t know what consciousness is and therefore cannot create it in something else. We can’t even reliably recognise it in anything else, we only know other humans have consciousness cause we ourselves have it.
It’s true that intelligence and consciousness aren’t the same thing. However, I disagree that we can’t create it in something else without understanding it. Ultimately, consciousness arises from patterns being expressed within the firings of neurons in the brain. It’s a byproduct of the physical events occurring within our neural architecture. Therefore, if we create a neural network that mimics our brain and exhibits the same types of patterns, then it stands to reason that it would also exhibit consciousness.
I think there are several paths available here. One is to simulate the brain in a virtual environment which would be an extension of the work being done by the OpenWorm project. You just build a really detailed physical simulation which is basically a question of having sufficient computing power.
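To make the idea concrete, here is a minimal sketch (my own toy illustration, not OpenWorm code) of what a bottom-up simulation looks like in miniature: a handful of leaky integrate-and-fire neurons wired together and stepped through time. A real brain emulation would differ enormously in scale and biophysical detail, but the overall shape is the same: state, update rule, repeat.

```python
import numpy as np

# Toy leaky integrate-and-fire network: purely illustrative, not a real brain model.
N = 5                      # number of neurons
dt = 1.0                   # timestep (ms)
tau = 20.0                 # membrane time constant (ms)
v_rest, v_thresh, v_reset = -70.0, -54.0, -70.0

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 2.0, size=(N, N))   # synaptic weights (arbitrary values)
np.fill_diagonal(weights, 0.0)                # no self-connections

v = np.full(N, v_rest)     # membrane potentials

for t in range(200):
    external = rng.normal(1.5, 0.5, size=N)       # stand-in for sensory input
    spiked = v >= v_thresh
    v[spiked] = v_reset                           # reset neurons that just fired
    synaptic = weights @ spiked.astype(float)     # current from neurons that fired
    v += dt / tau * (v_rest - v) + synaptic + external   # leak toward rest + inputs
    if spiked.any():
        print(f"t={t} ms, spikes at neurons {np.flatnonzero(spiked).tolist()}")
```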
Another approach is to try and understand the algorithms within the brain, to learn how these patterns form and how the brain is structured, then to implement these algorithms. This is the approach that Jeff Hawkins has been pursuing and he wrote a good book on the subject. I’m personally a fan of this approach because it posits a theory of how and why different brain regions work, then compares the functioning of the artificial implementation with its biological analogue. If both exhibit similar behaviors then we can say they both implement the same algorithm.
“AI” is a fad. Anyone who has played around with the AI models knows they aren’t actually thinking, but collating and systemising information.
The current large language model approach is indeed a fad, but that’s not the totality of AI research that’s currently happening. It’s just getting a lot of attention because it looks superficially impressive.
We simply cannot make human brains out of computers.
There is zero basis for this assertion. The whole point here is that computing power is not developing in a linear fashion. We don’t know what will be possible in a decade, much less in a century. However, given the rate of progress over the past half century, it’s pretty clear that huge leaps could be possible.
Also worth noting that we don’t need an equivalent of the entire human brain. Much of the brain deals with stuff like regulating the body and maintaining homeostasis. Furthermore, it turns out that even a small portion of the brain can still exhibit the properties we care about: https://www.rifters.com/crawl/?p=6116
At the end of the day, there is absolutely nothing magical about the human brain. It’s a biological computer that evolved through natural selection. There’s no reason to think that what it’s doing cannot be reverse engineered and implemented on a different substrate.
The key point I’m making is that while timelines of centuries or even millennia might seem long from a human standpoint, these are blinks of an eye from a cosmic point of view.
The idea that consciousness emerges as a functional overlay of the physical neurons is not settled science, let alone settled philosophy. It is just as likely, or perhaps more likely, that there are physical phenomena that we have yet to discover that explain consciousness in terms of a field such that emergence is unnecessary.
Further, the artificial substrates that we are designing are deeply inferior to biologics and it is far more likely that we will create biological substrates to replace our contemporary silicon substrates. It is generally understood (outside of European psychology) that it is preferable to participate in circular systems rather than to attempt to transcend them. Biological technology will take advantage of abundant resources and be infinitely recyclable, as opposed to the current mineral-based technologies that require mass destruction, are significantly non-recyclable, and have no world-scale ecosystems available to integrate with.
I strongly disagree with that. Our brains construct models of the world that they are themselves a part of. The recursive nature of the mind creating a model of itself in order to reason about itself is very likely what we perceive as consciousness. These constructs form the basis for the patterns of thought that underpin our conscious experience. The neurons, with their inherent complexity, serve merely as a substrate upon which these patterns are expressed.
The same concept is mirrored in the realm of computing. The physical complexity of transistors within a silicon chip plays no direct role in the functioning of programs that it executes. Consider virtual machines: these software constructs faithfully emulate the operation of a computer system, down to the instruction set and operating system, without replicating the internal details of the underlying silicon substrate. The heart of computation resides not in the physical properties of transistors but in the algorithms they compute.
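A hedged illustration of that point: below is a toy stack-machine interpreter I wrote for this comment. The program's meaning is fixed entirely by the instruction list and the interpreter's rules; whether the host is silicon, an emulator on top of an emulator, or something else never enters into it.

```python
# Minimal stack-machine interpreter (illustrative only).
# The program's behavior is defined by these rules, not by the hardware running them.
def run(program):
    stack = []
    for op, *args in program:
        if op == "push":
            stack.append(args[0])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "print":
            print(stack[-1])
        else:
            raise ValueError(f"unknown op: {op}")
    return stack

# (2 + 3) * 4: the same program yields 20 on any conforming interpreter.
run([("push", 2), ("push", 3), ("add",), ("push", 4), ("mul",), ("print",)])
```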
This notion is further underscored by the fact that the same computational architecture can be realized on vastly different physical foundations. From vacuum tubes and silicon transistors to optical gates and memristors, the underlying technology can vary dramatically while still supporting identical computing environments. Consequently, we are able to infer that the abstract nature of digital computation — the manipulation of discrete symbols according to formal rules — is not inherently tied to any particular physical medium.
Likewise, our consciousness isn’t merely a static property of our brains’ physical components; it’s a process arising from the dynamic patterns formed by the flow of electrochemical impulses across synapses. These patterns, emergent properties of the system as a whole, are what give rise to our thoughts, feelings, and experiences.
The physical matter of the brain serves as a medium that facilitates the transmission of information. While essential for the process, the brain’s components, such as neurons and synapses, do not themselves contain the essence of cognition. Like transistors in a computer, neurons are merely conduits for information, creating the patterns and rhythms that constitute our mental lives.
These processes, much like the laws of physics or mathematics, can be described using a formal set of rules. Therefore, the essence of our minds lies in the algorithms that govern their operation as opposed to the biological machinery of the brain. Several lines of evidence support this proposition.
The brain’s remarkable plasticity, its ability to reorganize in response to experience, indicates that various regions can adapt to perform new types of computation. Numerous studies have shown how individuals who have lost specific brain regions are able to regain absent functions through neural rewiring, demonstrating that cognitive processes can be reassigned to different parts of the brain.
Artificial neural networks, inspired by biological neurons, further bolster this argument. Despite being based on algorithms distinct from those in our brains, ANNs have demonstrated remarkable capabilities in mimicking cognitive functions such as image recognition, language processing, and even creative endeavors. Their success implies that these abilities emerge from computational processes independent of their base substrate.
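As a small, self-contained example of that substrate-agnostic behavior (my own sketch, not a reference to any specific system mentioned above): a tiny two-layer network trained with plain gradient descent learns XOR, a function no single linear unit can compute. Nothing about the result depends on what physically carries the numbers.

```python
import numpy as np

# Tiny 2-layer network learning XOR with hand-written backprop (illustrative sketch).
rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)   # hidden layer weights
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # output layer weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    h = sigmoid(X @ W1 + b1)            # hidden activations
    out = sigmoid(h @ W2 + b2)          # network output
    d_out = (out - y) * out * (1 - out) # gradient at the output
    d_h = (d_out @ W2.T) * h * (1 - h)  # gradient at the hidden layer
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(0)

print(np.round(out, 2))  # should approach [[0], [1], [1], [0]]
```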
Approaching cognition from a computational perspective brings us to the concept of computational universality (related in spirit to the Curry-Howard correspondence, which establishes a deep isomorphism between mathematical proofs and computer programs): any system capable of performing a certain set of basic logical operations can simulate any other computational process. Therefore, the specific biology of the brain isn’t essential for cognition; what truly matters is the system’s ability to express computational patterns, regardless of its underlying mechanics.
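A small sketch of the universality claim itself (my own example): every boolean function can be built out of NAND alone, so any substrate that gives you a working NAND, whether relays, vacuum tubes, transistors, or something else, gives you all of boolean logic.

```python
# Building the standard gates from NAND alone (illustrative sketch of universality).
def nand(a, b): return 1 - (a & b)

def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor(a, b):  return and_(or_(a, b), nand(a, b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", "AND", and_(a, b), "OR", or_(a, b), "XOR", xor(a, b))
```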
Further, the artificial substrates that we are designing are deeply inferior to biologics and it is far more likely that we will create biological substrates to replace our contemporary silicon substrates.
Biological computers are better at certain things and worse at others. I wouldn’t call the substrates we’re designing inferior, they just optimize for different kinds of computation. Biological systems are well adapted to our environment. However, they’re a dead end for expanding our civilization into space.
The recursive nature of the mind creating a model of itself in order to reason about itself is very likely what we perceive as consciousness.
This is such a massive leap, though. Don’t you see that? Why is it very likely? What affects the probability? What aspects of recursion lend themselves to consciousness? Where have we seen analogs elsewhere that provide evidence for your probabilistic claim? What aspects of the nature of models lend themselves to consciousness? Same questions.
These constructs form the basis for the patterns of thought that underpin our conscious experience
Again, a significant ontological leap. As Hume would say, at best you have constant conjunction. There is no argument that patterns of thought underpin our conscious experience that isn’t inherently circular.
The same concept is mirrored in the realm of computing. The physical complexity of transistors within a silicon chip plays no direct role in the functioning of programs that it executes.
This is an entirely inappropriate analogy. The physical complexity of transistors is physically connected, contiguously, with voltage differentials. The functioning of a program is entirely expressed in the physical world through voltage differentials. The very idea of a program or the execution thereof is a metaphor we use to reason about our tools but does not bear on the reality of the physics. Voltage differentials define everything about contemporary silicon-based binary microcomputers.
the underlying technology can vary dramatically while still supporting identical computing environments
Only if we limit ourselves severely. Underlying technology varying greatly has a severe impact on what sorts of I/O operations are possible. If we reduce everything to the pure math of computation, then you are correct, but you are correct inside an artificial self-referential symbolic system (the mathematics of boolean logic), which is to say extremely and deleteriously reductionist.
it’s a process arising from the dynamic patterns formed by the flow of electrochemical impulses across synapses. These patterns, emergent properties of the system as a whole, are what give rise to our thoughts, feelings, and experiences.
Again, incredibly strong claim that lacks sufficient evidence. We’ve been working on this problem for a very long time. The only way we get to your conclusion is through the circular reasoning of materialist reductionism - the assertion that only physical matter exists and therefore that consciousness is merely an emergent property of the physical matter that we have knowledge of. It begs the question.
These processes, much like the laws of physics or mathematics, can be described using a formal set of rules. Therefore, the essence of our minds lies in the algorithms that govern their operation as opposed to the biological machinery of the brain. Several lines of evidence support this proposition.
Again, I think this is entirely reductionist, and human experience has plenty of evidence that runs counter to it: mystical experiences, psychedelics, NDEs, and more.
In physics, when we have such evidence, we work to figure out what’s wrong with the model or with our instruments. But in pop psychology, AI, and Western philosophy of mind, we instead throw out all the evidence in favor of the dominant narrative of the academy.
Scientific history shows us we’re wrong. Scientific consensus today shows us we’re wrong.
Before we understood the electromagnetic spectrum, we relied on all the data our senses could gather, and as a Western scientific community, that was considered 100% of what was real. We discarded all the experiences of other people that we could not experience ourselves. Then we discovered the electromagnetic spectrum and realized that literally everything in our entire Western philosophy of science accounted for less than 0.000001% of reality.
Today, we have a model of the universe based on everything Western science has achieved in the last 600 years or so. That model accounts for about 3% of reality in so far as we can tell. That is to say, if we take everything we know and everything we know we don’t know, what we know we know makes up about 3% of that total, and what we know we don’t know makes up the other 97%. And then we have to contend with the unknown unknowns, which are immeasurable.
To assume that this particularly pernicious area of inquiry has any solution that is more or less likely than any other solution is to ignore the history and present state of science.
However, even more to the point, the bioware plays a massively important part that digital substrates simply cannot mimic, and that’s the fact that we’re not talking about voltage differentials in binary states representing boolean logic, but rather continuums mediated by a massively complex distributed chemical system comprising myriad biologics, some that aren’t even our own genetics. Our gut microbiota have a massive effect on our cognition. Each organ has major roles to play in our cognition. From a neurological perspective, we are only just scratching the surface on how things work at all, let alone the problem of consciousness.
Therefore, the specific biology of the brain isn’t essential for cognition; what truly matters is the system’s ability to express computational patterns, regardless of its underlying mechanics.
This is the clearest expression of circular reasoning in your writing. I encourage you to examine your position and your basis for it meticulously. In essence you have said:
patterns of thought underpin our conscious experience
neurons are merely conduits for information, creating the patterns and rhythms that constitute our mental lives
any system capable of performing a certain set of basic logical operations can simulate any other computational process
Therefore, patterns of thought underpin our conscious experience
This is such a massive leap, though. Don’t you see that? Why is it very likely? What affects the probability? What aspects of recursion lend themselves to consciousness? Where have we seen analogs elsewhere that provide evidence for your probabilistic claim? What aspects of the nature of models lend themselves to consciousness? Same questions.
I think there is a clear evolutionary reason why the mind would simulate itself, since its whole job is to simulate the environment and make predictions. The core purpose of the brain is to maintain homeostasis of the body. It aggregates inputs from the environment and models the state of the world based on that. There is no fundamental difference between inputs from the outside world and the ones it generates itself, hence the recursive step. Furthermore, being able to model minds is handy for interacting with other volitional agents, so there is a selection pressure for developing this capability.
I think Hofstadter makes a pretty good case for the whole recursive loop being the source of consciousness in I Am a Strange Loop. At least, I found his arguments convincing and in line with my understanding of how this process might work.
Again, a significant ontological leap. As Hume would say, at best you have constant conjunction. There is no argument that patterns of thought underpin our conscious experience that isn’t inherently circular.
I disagree here. As I’ve stated above, I think patterns of thought arise in response to inputs into the neural network that originate both from within and without. The whole point of thinking is to create a simulation space where the mind can extrapolate future states and come up with actions that can bring the organism back into homeostasis. The brain receives chemical signals from the body indicating an imbalance; these are interpreted as hunger, anger, and so on, and then the brain formulates a plan of action to address these signals. Natural selection honed this process over millions of years.
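As a loose sketch of the loop I'm describing (entirely a toy example, not a model of any real brain): an agent tracks an internal variable, receives noisy signals about it, feeds its own previous estimate back in as just another input, and picks actions that push the variable back toward a set point.

```python
import random

# Toy homeostatic loop: the agent's own estimate is fed back as an input
# alongside external signals (illustrative only).
set_point = 37.0          # target internal state (think body temperature)
state = 39.0              # actual internal state
estimate = state          # the agent's model of itself

random.seed(1)
for step in range(10):
    signal = state + random.gauss(0, 0.2)           # noisy sense of the body
    # the "recursive step": the previous estimate is treated as just another input
    estimate = 0.5 * signal + 0.5 * estimate
    action = -0.4 * (estimate - set_point)          # corrective action
    state += action + random.gauss(0, 0.1)          # the body responds, imperfectly
    print(f"step {step}: state={state:.2f}, estimate={estimate:.2f}, action={action:+.2f}")
```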
This is an entirely inappropriate analogy. The physical complexity of transistors is physically connected, contiguously, with voltage differentials. The functioning of a program is entirely expressed in the physical world through voltage differentials. The very idea of a program or the execution thereof is a metaphor we use to reason about our tools but does not bear on the reality of the physics. Voltage differentials define everything about contemporary silicon-based binary microcomputers.
And how is this fundamentally different from electrochemical signals being passed within the neural network of the brain? Voltage differentials are a direct counterpart to our own neural signalling.
Only if we limit ourselves severely. Underlying technology varying greatly has a severe impact on what sorts of I/O operations are possible. If we reduce everything to the pure math of computation, then you are correct, but you are correct inside an artificial self-referential symbolic system (the mathematics of boolean logic), which is to say extremely and deleteriously reductionist.
I don’t see what you mean here to be honest. The patterns occurring within the brain can be expressed in mathematical terms. There’s nothing reductionist here. The physical substrate these patterns are expressed in is not the important part.
Again, incredibly strong claim that lacks sufficient evidence. We’ve been working on this problem for a very long time. The only way we get to your conclusion is through the circular reasoning of materialist reductionism - the assertion that only physical matter exists and therefore that consciousness is merely an emergent property of the physical matter that we have knowledge of. It begs the question.
I don’t believe in magic or the supernatural, and outside of that one has to reject mind-body dualism. Physical reality is all there is; therefore the mental realm can only stem from physical interactions of matter and energy.
Again, I think this is entirely reductionist, and human experience has plenty of evidence that runs counter to it: mystical experiences, psychedelics, NDEs, and more.
Again, I fundamentally reject mysticism. All these human experiences are perfectly explained in terms of the brain simulating events that create an internal experience. There’s zero basis to assert that they are rooted in anything other than physical reality, just the same way it would be absurd to say that some mystical force is needed to create a virtual world within a video game.
Today, we have a model of the universe based on everything Western science has achieved in the last 600 years or so. That model accounts for about 3% of reality in so far as we can tell. That is to say, if we take everything we know and everything we know we don’t know, what we know we know makes up about 3% of that total, and what we know we don’t know makes up the other 97%. And then we have to contend with the unknown unknowns, which are immeasurable.
This statement is an incredible leap of logic. We know that our physics models are incomplete, but we very much do know what’s directly observable around us, and how our immediate environment behaves. We’re able to model that with an incredible degree of accuracy.
However, even more to the point, the bioware plays a massively important part that digital substrates simply cannot mimic, and that’s the fact that we’re not talking about voltage differentials in binary states representing boolean logic, but rather continuums mediated by a massively complex distributed chemical system comprising myriad biologics, some that aren’t even our own genetics.
There’s absolutely no evidence to support this statement. It’s also worth noting that discrete computation isn’t the only way computers can work. Analog chips exist and they work on energy gradients much like biological neural networks do. It’s just optimizing for a different type of computation.
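A rough illustration of the distinction being drawn (a toy sketch of my own, not how any real analog chip is programmed): the same weighted-sum unit can be run with a hard binary threshold or with a smooth, continuous response. The second is closer in spirit to gradient-driven analog hardware and to biological rate coding.

```python
import math

# One "neuron" computed two ways: hard binary threshold vs. smooth continuous response.
weights = [0.8, -0.5, 0.3]
inputs = [1.0, 0.6, 0.9]

z = sum(w * x for w, x in zip(weights, inputs))   # weighted sum of inputs

digital = 1 if z > 0 else 0                       # discrete, boolean-style output
analog = 1.0 / (1.0 + math.exp(-4.0 * z))         # continuous, gradient-style output

print(f"weighted sum = {z:.3f}")
print(f"digital output = {digital}")
print(f"analog-style output = {analog:.3f}")
```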
This is the clearest expression of circular reasoning in your writing. I encourage you to examine your position and your basis for it meticulously. In essence you have said:
There is absolutely nothing circular in my reasoning. I never said patterns of thought underpin our conscious experience as a result of any system capable of performing a certain set of basic logical operations being able to simulate any other computational process.
What I said is that patterns of thought underpin our conscious experience because the brain uses its own outputs as inputs along with the inputs from the rest of the environment, and this creates a recursive loop of the observer modelling itself within the environment and creating a resonance of patterns. The argument I made about universality of computation is entirely separate from this statement.
What aspects of recursion lend themselves to consciousness?
and you replied:
I think there is a clear evolutionary reason why the mind would simulate itself
Which doesn’t answer the question at all. If you believe consciousness is not fundamental but rather emergent, you will need to explain your reasoning. There are plenty of examples of recursion that you would not classify as conscious and there are plenty of things that have evolutionary reasons for being that you would not associate with consciousness. You are making a leap here without explanation.
I think Hofstadter makes a pretty good case for the whole recursive loop being the source of consciousness in I Am a Strange Loop. At least, I found his arguments convincing and in line with my understanding of how this process might work.
I am not intimately familiar with Hofstadter’s work, but my understanding is that he is doing speculative and descriptive reasoning from the base premise that matter is inanimate and that consciousness is animate and that somehow consciousness arises from inanimate matter. That is his starting point. He assumes, axiomatically, materialist reductionism. This is the starting point of nearly all the concepts you’ve drawn from in your response.
You said:
Our brains construct models of the world that they are themselves a part of. […] These constructs form the basis for the patterns of thought that underpin our conscious experience.
I said:
There is no argument that patterns of thought underpin our conscious experience that isn’t inherently circular.
And you replied with:
I think patterns of thought arise in response to inputs into the neural network that originate both from within and without. The whole point of thinking is to create a simulation space where the mind can extrapolate future states and come up with actions that can bring the organism back into homeostasis. The brain receives chemical signals from the body indicating an imbalance; these are interpreted as hunger, anger, and so on, and then the brain formulates a plan of action to address these signals. Natural selection honed this process over millions of years.
Which is literally an axiomatic statement - you assume that patterns of thought underpin our consciousness and then argue to conclude that patterns of thought underpin our consciousness. You are begging the question.
how is this fundamentally different from electrochemical signals being passed within the neural network of the brain? Voltage differentials are a direct counterpart to our own neural signalling
Good question! The answer is that neurons are not analogous to transistors because 1) they encode information through frequency, not voltage, 2) frequency is mediated not only by the neuron’s “purpose” but also by environmental factors that co-develop alongside the neuron, 3) neurons are changed by virtue of their own activity, and 4) neurons are changed by virtue of the activity of other neurons and other environmental factors.
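A hedged toy sketch of two of those points (rate coding and activity-dependent change), invented purely for illustration rather than taken from any neuroscience model: the unit below reports a firing rate rather than a voltage, and its sensitivity drifts as a function of its own recent activity.

```python
# Toy rate-coded unit whose gain changes with its own activity (purely illustrative).
class RateNeuron:
    def __init__(self, gain=10.0):
        self.gain = gain              # how strongly input drives firing rate
        self.recent_activity = 0.0

    def fire(self, stimulus):
        rate = max(0.0, self.gain * stimulus)   # output is a frequency (Hz), not a voltage
        # the unit is changed by its own activity: sustained firing lowers its gain
        self.recent_activity = 0.9 * self.recent_activity + 0.1 * rate
        self.gain *= 1.0 - 0.001 * self.recent_activity
        return rate

n = RateNeuron()
for step, stimulus in enumerate([0.5, 0.5, 0.5, 1.0, 1.0, 0.2]):
    print(f"step {step}: stimulus={stimulus}, rate={n.fire(stimulus):.1f} Hz, gain={n.gain:.2f}")
```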
I said:
If we reduce everything to the pure math of computation, then you are correct, but you are correct inside an artificial self-referential symbolic system (the mathematics of boolean logic), which is to say extremely and deleteriously reductionist.
You said:
I don’t see what you mean here to be honest. The patterns occurring within the brain can be expressed in mathematical terms. There’s nothing reductionist here. The physical substrate these patterns are expressed in is not the important part.
Mathematics is a form of linguistics. Any given system of mathematics is a system of symbols created to represent concepts. A given system of mathematics comprises a vocabulary, definitions, postulates, and theorems. Any system of mathematics is inherently a self-referential system of symbols and therefore inherently reductionist, in that anything that cannot be represented by that system is not only discarded but also not nameable or identifiable.
I said:
The only way we get to your conclusion is through the circular reasoning of materialist reductionism - the assertion that only physical matter exists and therefore that consciousness is merely an emergent property of the physical matter that we have knowledge of. It begs the question.
You said:
I don’t believe in magic or the supernatural, and outside of that one has to reject mind-body dualism. Physical reality is all there is; therefore the mental realm can only stem from physical interactions of matter and energy.
But you missed the key point, which is that material reductionists do not merely posit that physical reality is all there is, but also that everything we observe today can be explained by the ontology we have today. It is entirely possible that physical reality has far more components to it than those we are aware of today. In fact, the scientific consensus is that what we have posited in our ontology today only accounts for 3% of observable phenomena. I’ll get to that later.
You said:
I fundamentally reject mysticism.
This position is almost exclusively the position of Western dominance. Not a single culture outside of Western European culture took this position when encountering other cultures, ways of knowing, and systems of thought. It is only Western imperialism that fundamentally rejects mysticism. I encourage you to examine that.
All these human experiences are perfectly explained in terms of the brain simulating events that create an internal experience.
They aren’t perfectly explained at all. The only way to assert this is ultimately to beg the question. You assume that’s what consciousness is, therefore assert that it’s perfectly explainable as what you assume. This is why material reductionism is fundamentally circular. Nowhere else do we create identity relationships between things so fundamentally different as “patterns of electrical impulses” and “subjective experience”.
I said:
[Our current] model accounts for about 3% of reality in so far as we can tell
You said:
This statement is an incredible leap of logic. […] we very much do know what’s directly observable around us, and how our immediate environment behaves. We’re able to model that with an incredible degree of accuracy.
Which misses the point entirely. Dark energy and dark matter, combined, make up 97% of the universe. Which is just an arrogant way of saying we know that we have no idea what 97% of the universe is. Dark matter and dark energy are not things, they are names given to the gaps between our observations. The observable behavior of the universe only makes sense when we posit the existence of so much additional stuff that literally dwarfs what we currently think we know. And the history of scientific discovery has shown us that as we discover more, we open up entirely new dimensions of observation. It’s entirely possible that in the process of making it to 5% known known we end up discovering some previous unknown unknown and expanding the whole scope even further. What we have discovered is so minuscule compared to what we know we have left to discover that it is the height of dogmatic faith to champion the idea that consciousness can only possibly come from the 3% of the (assumed) scope of the universe that we have worked with so far.
Finally, you end with:
There is absolutely nothing circular in my reasoning. What I said is that patterns of thought underpin our conscious experience because the brain uses its own outputs as inputs along with the inputs from the rest of the environment, and this creates a recursive loop of the observer modelling itself within the environment and creating a resonance of patterns.
But you have no actual argument for this other than the following:
1) Assume that all things must be physical.
2) Define physical as all things that we have discovered and will ever discover.
3) Assume that the gap between what we know and what we will know in the future is vanishingly small and does not represent new physics.
By definition, then, literally every phenomenon is the result of physical interactions of matter and energy, and there’s no argument to make at all. I am arguing that premise 3 is faulty. The evidence we have is that the gap between what we know and what we will know is massive. Our known unknowns represent a body of knowledge 3000% larger than our known knowns. Our history of science has shown that our unknown unknowns are capable of being 1,000,000% larger than our total knowledge to date. It is more likely that we will discover new physics than that consciousness is explainable in our current physics, just from a pure statistical standpoint.
We definitely need a series of breakthroughs before I can see any possibility of human consciousness uploads, to say nothing of the resources required to simulate that intelligence. Any simulation of intelligence requires resources; it may be plausible that we can bring those requirements below the cost of keeping a human alive. That being said, I’m not sure it’s the only logical progression of technology.
I’m partial to the concept of artificial realities presented in the “Culture” book series.
In that series, the biological population of the Culture society is well educated, truly free, and provided anything they could want by purpose-built, extremely compassionate AI. Simulated worlds are primarily an afterlife or an alternative to the physical world.
They also had artificial intelligence and uploaded biological intelligence interact with the physical world through robotic presences.
There were some interesting concepts that came out of that, like highly religious societies producing horrific “Hell” afterlives when they realized that metaphysical afterlives were not experimentally verifiable.
I had issues with some of the takes of the author, but it was an interesting read.
I expect that uploading human minds is a very tricky problem indeed, and wouldn’t expect that happening in the foreseeable future. However, I do think we may be able to create artificial intelligence on the same principles our brains operate on before that. The key part is that I expect this will happen very quickly in terms of cosmic timescales as opposed to human ones. Even if it takes a century or a millennium to do, that’s a blink of an eye in the grand scheme of things.
I found the Culture series fun. A few other examples I could recommend would be The Lifecycle of Software Objects by Ted Chiang, Diaspora by Greg Egan, Life Artificial by David A. Eubanks, and Inverted Frontier by Linda Nagata.
That’s true, on a non-human timescale the progress is nearly impossible to predict, especially with novel technology. For example, when space travel was an early concept, we thought travelling the stars was a foregone conclusion. We now know that exploration on that front will either be locked behind breakthrough science or be limited to slow generation ships or robotic exploration.
Still, a technology capable of producing human-level intelligence or beyond does feel like a certainty, since there is no reason to believe that the process of intelligent thought is limited to a biological substrate. We haven’t discovered any fundamental physical laws that stop us from doing this yet. Key issues to solve beyond the hardware problem are alignment, understanding the fundamentals of consciousness and intelligence, understanding types of minds beyond those of humans, and a better understanding of emergent phenomena. But these areas will be explored in sufficient detail to yield an answer in time.
I will have to read these other books, I’m definitely interested in picking up some more good books.
I think the alignment question is definitely interesting, since an AI could have very different interests and goals from our own. There was actually a fun article from Ted Chiang on the subject. He points out how corporations can be viewed as a kind of higher-level entity, an emergent phenomenon that’s greater than the sum of its parts. In that sense we can view a corporation as an artificial agent with its own goals, which don’t necessarily align with the goals of humanity.
We have only ourselves really to go off of, but I’m not quite sure that they wouldn’t find us particularly interesting. We catalogue all life on Earth; why wouldn’t a civilization that used science and discovery to get to the stars, and which likely had a biological cataloguing system of its own in the history of its scientific development, be interested in exploring new life? To see what “filters” they might have missed?
I mean maybe they would. I figure if we exist on what might as well be a geological scale from their perspective, we might not warrant too close an observation.
I was introduced to this idea of a virtual evolution via Accelerando, and it has stuck with me ever since because of how much sense it makes. As far as we can tell, uploading our consciousness to a spaceship the size of a USB drive and slinging ourselves as close as we can to the speed of light is the only realistic way we have to travel the stars ourselves.
Also, the idea that humans will go out into the galaxy and settle on other planets is pure colonialist thinking. We have exploited and destroyed our planet, but instead of fixing it, we’ll just find another planet to exploit and destroy.
Columbus was an “explorer”. Turns out, humans aren’t very good at exploring; the temptation to touch and take is too great. Also, by our very nature of being somewhere, we change that somewhere qualitatively.
We haven’t even reached the limits of what we can learn from down here using telescopes, satellites and probes. Speaking of which, sending robots to explore makes much more sense than sending humans: they don’t need oxygen, water, or food, so that space can be used for other things.
Yet humans have a need to set foot somewhere, to plant a flag, because we’re not explorers, we are conquerors. Try to see us from the eyes of the other animals on this planet – we are monsters.
He’s really good! No reactionary bullshit either, which was relieving for me. Love some JMG
Speaking of which, is there anything on Isaac Arthur? Sometimes, his “wording” gets me a little suspicious on his leanings/beliefs. He uses “thugs” as an unironic term for genocidal aliens in one of his more recent videos.
He’s a Trump supporter if I remember correctly. It’s kind of strange, because he’s surprisingly ok with communism, pointing out that the USSR and the US both made massive progress towards space exploration, so he doesn’t view one ideology or the other as superior in regards to that at least.
JMG I have no idea about. His political leanings don’t shine through at all in that all of his takes seem completely materialist. I don’t think he’s a Marxist, but I doubt he’s a reactionary in any way. Perhaps apathetic/apolitical.
Yeah that’s the kind of vibe I got from Isaac, right-wing libertarian albeit a more rational one than 99% of them. He seems like the type that would be open to dialogue about those things and perhaps changing his mind if you had a conversation with him.
I might be looking too deeply into it after all, but anyone using the term “thugs” always gets a little bit of a raised eyebrow from me, I dunno. Where did you hear the Trump supporter thing? Curious if I could find anything else he said; I believe ya though.
lemmy has given me so much reading material that i doubt i’ll ever be bored again. lol
:)
“Intelligence” is not the same as consciousness. We don’t know what consciousness is and therefore cannot create it in something else. We can’t even reliably recognise it in anything else, we only know other humans have consciousness cause we ourselves have it.
Technical progression, much like evolution, is not goal-oriented. Everyone assumes technological progress necessarily involves better gadgets, but progress can also be in the way we use and consume technology, what role it plays in our lives.
“AI” is a fad. Anyone who has played around with the AI models knows they aren’t actually thinking, but collating and systemising information. We’re nowhere near “general intelligence” or “human-like intelligence”. AI is useful for data analysis, fetching/storing information, comparison, etc. but it is not at the level of a baby or whatever they are saying. We simply cannot make human brains out of computers.
It’s true that intelligence and consciousness aren’t the same thing. However, I disagree that we can’t create it in something else without understanding it. Ultimately, consciousness arises from patterns being expressed within the firings of neurons within the brain. It’s a byproduct of the the physical events occurring within our neural architecture. Therefore, if we create a neural network that mimic our brain and exhibits the same types of patterns then it stands to reason that it would also exhibit consciousness.
I think there are several paths available here. One is to simulate the brain in a virtual environment which would be an extension of the work being done by the OpenWorm project. You just build a really detailed physical simulation which is basically a question of having sufficient computing power.
Another approach is to try and understand the algorithms within the brain, to learn how these patterns form and how the brain is structured, then to implement these algorithms. This is the approach that Jeff Hawkins has been pursuing and he wrote a good book on the subject. I’m personally a fan of this approach because it posits a theory of how and why different brain regions work, then compares the functioning of the artificial implementation with its biological analogue. If both exhibit similar behaviors then we can say they both implement the same algorithm.
The current large language model approach is indeed a far, but that’s not totality of AI research that’s currently happening. It’s just getting a lot of attention because it looks superficially impressive.
There is zero basis for this assertion. The whole point here is that computing power is not developing in a linear fashion. We don’t know what will be possible in a decade, and much less in a century. However, given the rate of progress that happened in the past half a century, it’s pretty clear that huge leaps could be possible.
Also worth noting that we don’t need to have an equivalent of the entire human brain. Much of the brain deals with stuff like regulating the body and maintaining homeostasis. Furthermore, turns out that even a small portion of the brain can still exhibit the properties we care about https://www.rifters.com/crawl/?p=6116
At the end of the day, there is absolutely nothing magical about the human brain. It’s a biological computer that evolved through natural selection. There’s no reason to think that what it’s doing cannot be reverse engineered and implemented on a different substrate.
The key point I’m making is that while timelines of centuries or even millennia might seem long from a human standpoint, these are blinks of an eye from cosmic point of view.
The idea that consciousness emerges as a functional overlay of the physical neurons is not settled science, let alone settled philosophy. It is just as likely, or perhaps more likely, that there are physical phenomena that we have yet to discover that explain consciousness in terms of a field such that emergence is unnecessary.
Further, the artificial substrates that we are designing are deeply inferior to biologics and it is far more likely that we will create biological substrates to replace our contemporary silicon substrates. It is generally understood (outside of European psychology) that it is preferable to participate in circular systems than it is to attempt to transcend them. Biological technology will take advantage of abundant resources and be infinitely recyclable, as opposed to the current mineral-based technologies that require mass destruction, are significantly non-recyclable, and have no world-scale ecosystems available to integrate with.
I strongly disagree with that. Our brains construct models of the world that they are themselves a part of. The recursive nature of the mind creating a model of itself in order to reason about itself is very likely what we perceive as consciousness. These constructs form the basis for the patterns of thought that underpin our conscious experience. The neurons, with their inherent complexity, serve merely as a substrate upon which these patterns are expressed.
The same concept is mirrored in the realm of computing. The physical complexity of transistors within a silicon chip plays no direct role in the functioning of programs that it executes. Consider virtual machines: these software constructs faithfully emulate the operation of a computer system, down to the instruction set and operating system, without replicating the internal details of the underlying silicon substrate. The heart of computation resides not in the physical properties of transistors but in the algorithms they compute.
This notion is further underscored by the fact that the same computational architecture can be realized on vastly different physical foundations. From vacuum tubes and silicon transistors to optical gates and memristors, the underlying technology can vary dramatically while still supporting identical computing environments. Consequently, we are able to infer that the abstract nature of digital computation — the manipulation of discrete symbols according to formal rules — is not inherently tied to any particular physical medium.
Likewise, our consciousness isn’t merely a static property of our brains’ physical components; it’s a process arising from the dynamic patterns formed by the flow of electrochemical impulses across synapses. These patterns, emergent properties of the system as a whole, are what gives rise to our thoughts, feelings, and experiences.
The physical matter of the brain serves as a medium that facilitates the transmission of information. While essential for the process, the brain’s components, such as neurons and synapses, do not themselves contain the essence of cognition. Like transistors in a computer, neurons are merely conduits for information, creating the patterns and rhythms that constitute our mental lives.
These processes, much like the laws of physics or mathematics, can be described using a formal set of rules. Therefore, the essence of our minds lies in the algorithms that govern their operation as opposed to the biological machinery of the brain. Several lines of evidence support this proposition.
The brain’s remarkable plasticity, its ability to reorganize in response to experience, indicates that various regions can adapt to perform new types of computation. Numerous studies have shown how individuals who have lost specific brain regions are able to regain absent functions through neural rewiring, demonstrating that cognitive processes can be reassigned to different parts of the brain.
Artificial neural networks, inspired by biological neurons, further bolster this argument. Despite being based on algorithms distinct from those in our brains, ANNs have demonstrated remarkable capabilities in mimicking cognitive functions such as image recognition, language processing, and even creative endeavors. Their success implies that these abilities emerge from computational processes independent of their base substrate.
Approaching cognition from a computational perspective brings us to the concept of computational universality, closely related to the Curry-Howard Correspondence, which establishes a deep isomorphism between mathematical proofs and computer programs. It suggests that any system capable of performing a certain set of basic logical operations can simulate any other computational process. Therefore, the specific biology of the brain isn’t essential for cognition; what truly matters is the system’s ability to express computational patterns, regardless of its underlying mechanics.
Biological computers are better at certain things and worse at others. I wouldn’t call the substrates we’re designing inferior, they just optimize for different kinds of computation. Biological systems are well adapted to our environment. However, they’re a dead end for expanding our civilization into space.
This is such a massive leap, though. Don’t you see that? Why is it very likely? What effects the probability? What aspects of recursion lend themselves to consciousness? Where have we seen analogs elsewhere that provide evidence for your probabilistic claim? What aspects of the nature of models lend themselves to consciousness? Same questions.
Again, a significant ontological leap. As Hume would say, at best you have constant conjunction. There is no argument that patterns of thought underpin our conscious experience that isn’t inherently circular.
This is an entirely inappropriate analogy. The physical complexity of transistors is physically connected, contiguously, with voltage differentials. The functioning of a program is entirely expressed in the physical world through voltage differentials. The very idea of a program or the execution thereof is a metaphor we use to reason about our tools but do not bear on the reality of the physics. Voltage differentials define everything about contemporary silicon-based binary microcomputers.
Only if we limit ourselves severely. Underlying technology varying greatly has a severe impact on what sorts of I/O operations are possible. If we reduce everything to the pure math of computation, then you are correct, but you are correct inside an artificial self-referential symbolic system (the mathematics of boolean logic), which is to say extremely and deleteriously reductionist .
Again, incredibly strong claim that lacks sufficient evidence. We’ve been working on this problem for a very long time. The only way we get to your conclusion is through the circular reasoning of materialist reductionism - the assertion that only physical matter exists and therefore that consciousness is merely an emergent property of the physical matter that we have knowledge of. It begs the question.
Again, I think this is entirely reductionist. Human experience offers plenty of evidence that runs counter to it, from mystical experiences to psychedelics to NDEs.
In physics, when we have such evidence, we work to figure out what’s wrong with the model or with our instruments. But in pop psychology, AI, and Western philosophy of mind, we instead throw out all the evidence in favor of the dominant narrative of the academy.
Scientific history shows us we’re wrong. Scientific consensus today shows us we’re wrong.
Before we understood the electromagnetic spectrum, we relied on all the data our senses could gather, and as a Western scientific community, that was considered 100% of what was real. We discarded all the experiences of other people that we could not experience ourselves. Then we discovered the electromagnetic spectrum and realized that everything our entire Western philosophy of science had accounted for was a vanishingly small sliver of what is actually there.
Today, we have a model of the universe based on everything Western science has achieved over the last 600 years or so. As far as we can tell, that model accounts for only about 5% of the universe’s mass-energy content. That is to say, if we take everything we know and everything we know we don’t know, what we know we know makes up about 5% of the total, and what we know we don’t know makes up the remaining 95%. And then we have to contend with the unknown unknowns, which are immeasurable.
To assume that this particularly pernicious area of inquiry has any solution that is more or less likely than any other solution is to ignore the history and present state of science.
However, even more to the point, the bioware plays a massively important part that digital substrates simply cannot mimic: we’re not talking about voltage differentials in binary states representing boolean logic, but rather continuums mediated by a massively complex distributed chemical system comprising myriad biologics, some of which aren’t even our own genetics. Our gut microbiota have a massive effect on our cognition. Each organ has major roles to play in our cognition. From a neurological perspective, we are only just scratching the surface of how things work at all, let alone the problem of consciousness.
This is the clearest expression of circular reasoning in your writing. I encourage you to examine your position and your basis for it meticulously. In essence you have said:
I think there is a clear evolutionary reason why the mind would simulate itself, since its whole job is to simulate the environment and make predictions. The core purpose of the brain is to maintain homeostasis of the body. It aggregates inputs from the environment and models the state of the world based on them. There is no fundamental difference between inputs from the outside world and the ones it generates itself, hence the recursive step. Furthermore, being able to model minds is handy for interacting with other volitional agents, so there is a selection pressure for developing this capability.
I think Hofstadter makes a pretty good case for the whole recursive loop being the source of consciousness in I Am a Strange Loop. At least, I found his arguments convincing and in line with my understanding of how this process might work.
I disagree here, as I’ve stated above: I think patterns of thought arise in response to inputs into the neural network that originate both from within and without. The whole point of thinking is to create a simulation space where the mind can extrapolate future states and come up with actions that can bring the organism back into homeostasis. The brain receives chemical signals from the body indicating an imbalance; these are interpreted as hunger, anger, and so on, and then the brain formulates a plan of action to address these signals. Natural selection honed this process over millions of years.
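As a toy illustration of that loop (all names and numbers here are hypothetical, not a model of any real neural circuit), here’s a minimal sketch where an internal model blends an external signal with its own previous output and picks actions that pull a body variable back toward a set point:

```python
# Toy homeostasis loop: an internal model receives external signals plus its
# own previous output, and chooses actions that push a body variable back
# toward a set point. Everything here is an illustrative assumption.

SET_POINT = 37.0          # target "body temperature"
DRIFT = -0.4              # the environment keeps pulling the state down

def internal_model(sensed, previous_prediction):
    """Blend the new external signal with the model's own last output --
    the recursive step: the brain's outputs become part of its inputs."""
    return 0.7 * sensed + 0.3 * previous_prediction

def choose_action(predicted):
    """Plan an action proportional to the predicted imbalance."""
    error = SET_POINT - predicted
    return 0.5 * error    # e.g. shiver harder the colder we expect to be

state = 35.0
prediction = state
for step in range(10):
    prediction = internal_model(state, prediction)
    action = choose_action(prediction)
    state = state + DRIFT + action   # the world responds to the action
    print(f"step {step}: state={state:.2f}, prediction={prediction:.2f}")
```

The point is only the shape of the loop: the model’s output feeds back into its own next input, and action selection is driven by predicted deviation from homeostasis.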
And how is this fundamentally different from electrochemical signals being passed within the neural network of the brain? Voltage differentials are a direct counterpart to our own neural signalling.
I don’t see what you mean here to be honest. The patterns occurring within the brain can be expressed in mathematical terms. There’s nothing reductionist here. The physical substrate these patterns are expressed in is not the important part.
I don’t believe in magic or the supernatural, and once you set those aside, one has to reject mind-body dualism. Physical reality is all there is, therefore the mental realm can only stem from physical interactions of matter and energy.
Again, I fundamentally reject mysticism. All these human experiences are perfectly well explained in terms of the brain simulating events that create an internal experience, and there’s zero basis to assert that these experiences are not rooted in physical reality. In just the same way, it would be absurd to say that some mystical force is needed to create a virtual world within a video game.
This statement is an incredible leap of logic. We know that our physics models are incomplete, but we very much do know what’s directly observable around us, and how our immediate environment behaves. We’re able to model that with an incredible degree of accuracy.
There’s absolutely no evidence to support this statement. It’s also worth noting that discrete computation isn’t the only way computers can work. Analog chips exist and they work on energy gradients much like biological neural networks do. It’s just optimizing for a different type of computation.
There is absolutely nothing circular in my reasoning. I never said patterns of thought underpin our conscious experience as a result of any system capable of performing a certain set of basic logical operations being able to simulate any other computational process.
What I said is that patterns of thought underpin our conscious experience because the brain uses its own outputs as inputs along with the inputs from the rest of the environment, and this creates a recursive loop of the observer modelling itself within the environment and creating a resonance of patterns. The argument I made about universality of computation is entirely separate from this statement.
We’re talking past each other.
I asked:
and you replied:
Which doesn’t answer the question at all. If you believe consciousness is not fundamental but rather emergent, you will need to explain your reasoning. There are plenty of examples of recursion that you would not classify as conscious and there are plenty of things that have evolutionary reasons for being that you would not associate with consciousness. You are making a leap here without explanation.
I am not intimately familiar with Hofstadter’s work, but my understanding is that he is doing speculative and descriptive reasoning from the base premise that matter is inanimate and that consciousness is animate and that somehow consciousness arises from inanimate matter. That is his starting point. He assumes, axiomatically, materialist reductionism. This is the starting point of nearly all the concepts you’ve drawn from in your response.
You said:
I said:
And you replied with:
Which is literally an axiomatic statement - you assume that patterns of thought underpin our consciousness and then argue to conclude that patterns of thought underpin our consciousness. You are begging the question.
Good question! The answer is that neurons are not analogous to transistors because 1) they encode information through frequency, not voltage, 2) that frequency is mediated not only by the neuron’s “purpose” but also by environmental factors that co-develop alongside the neuron, 3) neurons are changed by virtue of their own activity, and 4) neurons are changed by virtue of the activity of other neurons and other environmental factors.
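For what it’s worth, point 1 is easy to see in the standard textbook leaky integrate-and-fire model; the sketch below uses arbitrary parameters and is only meant to show that the information sits in the spike rate rather than in a static voltage level:

```python
# Rough leaky integrate-and-fire sketch (textbook model, arbitrary parameters):
# stronger input current yields a higher firing rate, so the "signal" is the
# frequency of spikes, not a fixed voltage.

def spike_count(input_current, steps=1000, dt=0.001,
                tau=0.02, threshold=1.0, reset=0.0):
    v = 0.0
    spikes = 0
    for _ in range(steps):
        # membrane potential leaks toward rest while integrating the input
        dv = (-v + input_current) / tau
        v += dv * dt
        if v >= threshold:   # fire and reset
            spikes += 1
            v = reset
    return spikes

if __name__ == "__main__":
    for current in (0.8, 1.2, 2.0, 4.0):
        print(f"input {current:.1f} -> {spike_count(current)} spikes per second")
```

A sub-threshold input (0.8 here) produces no spikes at all, while increasing the input only increases how often the cell fires; nothing about the “message” is readable from a single voltage snapshot.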
I said:
You said:
Mathematics is a form of linguistics. Any given system of mathematics is a system of symbols created to represent concepts. A given system of mathematics comprises a vocabulary, definitions, postulates, and theorems. Any system of mathematics is inherently a self-referential system of symbols and therefore inherently reductionist, in that anything that cannot be represented by that system is not only discarded but also not nameable or identifiable.
I said:
You said:
But you missed the key point, which is that material reductionists do not merely posit that physical reality is all there is, but also that everything we observe today can be explained by the ontology we have today. It is entirely possible that physical reality has far more components to it than those we are aware of today. In fact, the scientific consensus is that what we have posited in our ontology so far only accounts for roughly 5% of the universe’s observed mass-energy. I’ll get to that later.
You said:
This position is almost exclusively the position of Western dominance. Not a single culture outside of Western European culture took this position when encountering other cultures, ways of knowing, and systems of thought. It is only Western imperialism that fundamentally rejects mysticism. I encourage you to examine that.
They aren’t perfectly explained at all. The only way to assert this is ultimately to beg the question. You assume that’s what consciousness is, therefore assert that it’s perfectly explainable as what you assume. This is why material reductionism is fundamentally circular. Nowhere else do we create identity relationships between things so fundamentally different as “patterns of electrical impulses” and “subjective experience”.
I said:
You said:
Which misses the point entirely. Dark energy and dark matter combined make up roughly 95% of the universe’s mass-energy. Which is just an arrogant way of saying we know that we have no idea what roughly 95% of the universe is. Dark matter and dark energy are not things; they are names given to the gaps between our observations. The observable behavior of the universe only makes sense when we posit the existence of so much additional stuff that it literally dwarfs what we currently think we know. And the history of scientific discovery has shown us that as we discover more, we open up entirely new dimensions of observation. It’s entirely possible that in the process of expanding that handful of percent of known knowns we end up discovering some previously unknown unknown and expanding the whole scope even further. What we have discovered is so minuscule compared to what we know we have left to discover that it is the height of dogmatic faith to champion the idea that consciousness can only possibly come from the roughly 5% of the (assumed) scope of the universe that we have worked with so far.
Finally, you end with:
But you have no actual argument for this other than the following:
By definition, literally every phenomenon is the result of physical interactions of matter and energy, so there’s no argument to make at all. I am arguing that premise 3 is faulty. The evidence we have is that the gap between what we know and what we will know is massive. Our known unknowns represent a body of knowledge nearly twenty times larger than our known knowns. Our history of science has shown that our unknown unknowns can dwarf our total knowledge to date by orders of magnitude. It is more likely that we will discover new physics than that consciousness is explainable within our current physics, just from a pure statistical standpoint.
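For the record, here’s the arithmetic behind those proportions, using the commonly cited Planck estimates of the universe’s mass-energy budget (approximate values; the exact decimals don’t change the point):

```python
# Approximate Planck mass-energy fractions (illustrative, rounded values).
ordinary_matter = 0.049   # atoms: everything our current ontology describes well
dark_matter     = 0.268
dark_energy     = 0.683

dark_sector = dark_matter + dark_energy
print(f"known knowns:   {ordinary_matter:.1%}")        # ~4.9%
print(f"known unknowns: {dark_sector:.1%}")            # ~95.1%
print(f"ratio: {dark_sector / ordinary_matter:.1f}x")  # ~19x larger
```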
We definitely need a series of breakthroughs before I can see any possibility of human consciousness uploads, to say nothing of the resources required to simulate that intelligence. Any simulation of intelligence requires resources; it may be plausible that we can eventually bring those requirements below the cost of keeping a human alive. That being said, I’m not sure it’s the only logical progression of technology.
I’m partial to the concept of artificial realities presented in the “Culture” book series.
In that series, the biological population of the Culture is well educated, truly free, and provided with anything they could want by purpose-built, extremely compassionate AIs. Simulated worlds are primarily an afterlife or an alternative to the physical world.
They also had artificial intelligence and uploaded biological intelligence interact with the physical world through robotic presences.
There were some interesting concepts that came out of that, like highly religious societies producing horrific “Hell” afterlives when they realized that metaphysical afterlives were not experimentally verifiable.
I had issues with some of the takes of the author, but it was an interesting read.
I expect that uploading human minds is a very tricky problem indeed, and I wouldn’t expect it to happen in the foreseeable future. However, I do think we may be able to create artificial intelligence on the same principles our brains operate on before that. The key part is that I expect this will happen very quickly in terms of cosmic timescales as opposed to human ones. Even if it takes a century or a millennium to do, that’s a blink of an eye in the grand scheme of things.
I found the Culture series fun. A few other examples I could recommend would be The Lifecycle of Software Objects by Ted Chiang, Diaspora by Greg Egan, Life Artificial by David A. Eubanks, and the Inverted Frontier series by Linda Nagata.
That’s true; on a non-human timescale the progress is nearly impossible to predict, especially with novel technology. For example, when space travel was an early concept, we thought travelling the stars was a foregone conclusion. We now know that any exploration on that front will either be locked behind breakthrough science or be limited to slow generation ships or robotic exploration.
That a technology capable of producing human-level intelligence, or beyond, is achievable does feel like a certainty, since there is no reason to believe that the process of intelligent thought is limited to a biological substrate. We haven’t discovered any fundamental physical laws that stop us from doing this yet. Key issues to solve beyond the hardware problem include alignment, understanding the fundamentals of consciousness and intelligence, understanding types of minds beyond those of humans, and better understanding emergent phenomena. But these areas will be explored in sufficient detail to yield answers in time.
I will have to read these other books, I’m definitely interested in picking up some more good books.
I think the alignment question is definitely interesting, since an AI could have very different interests and goals from our own. There was actually a fun article from Ted Chiang on the subject. He points out how corporations can be viewed as a kind of higher-level entity, an emergent phenomenon that’s greater than the sum of its parts. In that sense we can view a corporation as an artificial agent with its own goals, which don’t necessarily align with the goals of humanity.
https://www.buzzfeednews.com/article/tedchiang/the-real-danger-to-civilization-isnt-ai-its-runaway
We have only ourselves really to go off of, but I’m not quite sure that they wouldn’t find us particularly interesting. We catalogue all life on Earth; why wouldn’t a civilization that used science and discovery to get to the stars, and which likely had a biological cataloguing system of its own at some point in its scientific development, be interested in exploring new life? To see what “filters” they might have missed?
I mean maybe they would. I figure if we exist on what might as well be a geological scale from their perspective, we might not warrant too close an observation.
I was introduced to this idea of virtual evolution by Accelerando, and it has stuck with me ever since because of how much sense it makes. As far as we can tell, uploading our consciousness to a spaceship the size of a USB drive and slinging ourselves as close as we can to the speed of light is the only realistic way we have to travel to the stars ourselves.
I think so as well. Incidentally, Diaspora by Greg Egan is another great book exploring this idea.
Never gonna happen.
Maybe not, but it’s far more likely than traveling faster than light to other star systems.
Which is also very unlikely, nigh impossible.
Also, the idea that humans will go out into the galaxy and settle on other planets is pure colonialist thinking. We have exploited and destroyed our planet, but instead of fixing it, we’ll just find another planet to exploit and destroy.
How is exploring other worlds inherently exploitative or destructive?
Columbus was an “explorer”. Turns out humans aren’t very good at exploring; the temptation to touch and take is too great. Also, by our very nature of being somewhere, we change that somewhere qualitatively.
We haven’t even reached the limits of what we can learn from down here using telescopes, satellites and probes. Speaking of which, sending robots to explore makes much more sense than sending humans; they don’t need oxygen, water, or food, so that space can be used for other things.
Yet humans have a need to set foot somewhere, to plant a flag, because we’re not explorers, we are conquerors. Try to see us from the eyes of the other animals on this planet – we are monsters.
I think this is bullshit. Just because a few people are assholes doesn’t mean humanity is inherently bad or that exploration is always a bad thing.
There are numerous proposed methods that might possibly achieve FTL travel; the Alcubierre drive in particular has a lot of eventual potential.
I have extreme confidence that it’s only a matter of time before cryostasis and FTL travel are achieved.
Alien AI and Von Neumann Data Collector by John Michael Godier goes a bit into this.
neat, haven’t run across his stuff before
He’s really good! No reactionary bullshit either, which was relieving for me. Love some JMG
Speaking of which, is there anything on Isaac Arthur? Sometimes his “wording” makes me a little suspicious about his leanings/beliefs. He uses “thugs” as an unironic term for genocidal aliens in one of his more recent videos.
He’s a Trump supporter if I remember correctly. It’s kind of strange, because he’s surprisingly ok with communism, pointing out that the USSR and the US both made massive progress towards space exploration, so he doesn’t view one ideology or the other as superior in regards to that at least.
JMG I have no idea about. His political leanings don’t shine through at all in that all of his takes seem completely materialist. I don’t think he’s a Marxist, but I doubt he’s a reactionary in any way. Perhaps apathetic/apolitical.
Yeah that’s the kind of vibe I got from Isaac, right-wing libertarian albeit a more rational one than 99% of them. He seems like the type that would be open to dialogue about those things and perhaps changing his mind if you had a conversation with him.
I might be looking too deeply into it after all, but anyone using the term “thugs” always gets a little bit of a raised eyebrow from me, I dunno. Where did you hear the Trump supporter thing? Curious if I could find anything else he said; I believe ya though.
He and his wife were both found to be Trump supporters, though I forget the exact context.
No reactionary bs is refreshing indeed. And not sure about Isaac Arthur, not too familiar with the guy.