


  • Why is the content ubiquitously pirated if it’s legal and totally acceptable in NK?

    I could imagine someone unfamiliar with America saying “weed is ubiquitous and nobody gives a shit”, but that’d be a massive oversimplification given we have a metric fuckton of people in prison for nonviolent drug offenses.

    Could it not be the case that, in NK, pirating and watching foreign media is both extremely common and against the law/lands people in prison?

    And if that is the case, then even if this one case happens to be fabricated, there’s likely a ton of cases where people are actually imprisoned for breaking the law, since that’s usually how breaking laws goes. I don’t think it should be against the law to watch foreign media.


  • You had a small fallacy in the middle: when you said “assume the negative claim”, you then made a positive claim.

    “subjectivity is not explained by information processing theory” is a positive claim, but you said it was negative. I know it has the word “not” in it, but positive/negative doesn’t have to do with claims for or against existence; it has to do with burden of proof. A negative “claim” isn’t actually a claim at all.

    The negative claim here would be “subjectivity may not be explained by information processing theory”. People usually have a better grasp of these distinctions in religious contexts:

    Positive claim: god definitely exists.
    Positive claim: god definitely doesn’t exist.
    Negative claim: god may or may not exist.
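    Roughly formalized (my own loose framing, using ⊢ for “asserts, and so shoulders the burden of proof”, with P standing for “god exists”):

    ```latex
    % Hedged sketch: "positive" = asserts something, "negative" = asserts nothing
    \text{Positive claim: } \vdash P
    \qquad
    \text{Positive claim: } \vdash \lnot P
    \qquad
    \text{Negative ``claim'': } \nvdash P \;\text{ and }\; \nvdash \lnot P
    ```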

    The default stance is an atheistic one, but it’s not “capital A” atheist (for what it’s worth, I do make the positive claim against a theological God’s existence). Someone who lacks a belief in God is still an atheist (e.g. someone who has never even heard of a theological God), but they’re not making a positive claim against his existence.

    So the default stance is “information processing theory may or may not account for subjectivity”: we don’t assume it does, but we also don’t discount the possibility that it does as necessarily untrue, the way you are.

    If you notice, you made another mistake: you misread what I was saying. I never made a positive claim about subjectivity being information processing; I only alluded to the possibility. You, on the other hand, did make a positive claim about subjectivity definitely not being information processing.


  • I think my point didn’t exactly get across. I’m not saying philosophical zombies can’t exist because subjectivity is something beyond information processing; I’m saying it’s plausible that subjectivity is information processing.

    Saying “a person with information processing but not subjectivity” could be like saying “a person with information processing but not logical reasoning”.

    I would argue that a person who processes information exactly like me, except that they don’t reason logically, wouldn’t process information like me. It’s not elevating logic beyond information processing; it’s a reductio ad absurdum. A person like that cannot exist.

    I was saying philosophical zombies could be like that: it’s possible that they can’t exist. By lacking subjectivity, they could inherently process information differently.
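    As a toy analogy in code (my own sketch, not a real model of cognition): if the “missing” property actually does work in the processing, removing it necessarily changes the outputs.

    ```python
    def reason(premises: list[bool]) -> bool:
        """Full information processing, including the logical-reasoning step."""
        return all(premises)  # the reasoning actually happens here

    def zombie_reason(premises: list[bool]) -> bool:
        """Claimed to be 'exactly the same, minus logical reasoning'."""
        return True  # the step is gone, so no reasoning is performed

    # If the removed step did any work, the two can't behave identically:
    premises = [True, False]
    print(reason(premises), zombie_reason(premises))  # False True -> not identical
    ```

    The toy point: “identical processing minus step S” is only coherent if S contributed nothing to the processing in the first place.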



  • Premise B is where you lost me.

    The premise of philosophical zombies is that it’s possible for there to be beings with the same information processing capabilities as us but without experience. That is, given the same tools and platforms, they would be having just as intricate discussions about the nature of experience and sentience, without having experience or sentience.

    I’m not convinced it’s functionally possible to behave the way we behave when talking about and describing sentience without being sentient. I think a being that is functionally identical to me except that it lacks experience wouldn’t be functionally identical to me, because I wouldn’t be interested in sentience if I didn’t have it.



  • So I take it you’re not a determinist? That’s a whole conversation that’s separate from this, but you should know there are a lot of secular people who don’t believe in free will (i.e. having a will independent of any causal relationships to physical reality). Secular people are generally determinists: we believe that wills exist within physical reality, and that they exist in the same cause/effect relationship as everything else.

    With enough information about the present, you could know everything a human will do in their lifetime; there’s no will existing outside of reality that is influencing reality (no will that is “free”). Instead, will is entirely causally linked, like everything else.

    Put another way, you’re guaranteed to get the same result every time you put a human in exactly the same situation. Even if there is true chaos in the universe (e.g. pure randomness), it’s a different situation every time you get a different random result.
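    A toy sketch of what I mean (hypothetical names, just illustrating the seeded-randomness point, not modeling a brain):

    ```python
    import random

    def simulate_decision(situation: str, seed: int) -> str:
        """A toy 'will' whose choice is fully determined by its inputs."""
        rng = random.Random(seed)  # any randomness is itself part of the situation
        options = ["stay", "leave", "wait"]
        influence = sum(ord(c) for c in situation)  # the situation causally shapes the choice
        return options[(influence + rng.randrange(3)) % len(options)]

    # Exactly the same situation (including the same random draw) -> the same result, every time.
    assert simulate_decision("crossroads", seed=42) == simulate_decision("crossroads", seed=42)

    # A different random draw is, in effect, a different situation.
    print(simulate_decision("crossroads", seed=42))
    print(simulate_decision("crossroads", seed=7))
    ```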




  • Your periodically hostile comments (“oh so smug terms the ‘soul’”) indicate that you have a disdain for my position, so I assume you think my position is your option 2, but I don’t ignore self-reports of sentience. I’m closer to option 1: I see it as plausible that a sufficiently general algorithm could have the same level of sentience as humans.

    The third position strikes me as at least as ridiculous as the second. Of course we don’t totally understand biological life, but just saying there’s something “special” is wild. We’re a configuration of non-sentient parts that produces sentience. Computers are also a configuration of non-sentient parts. To claim that there’s no configuration of silicon that could arrive at sentience, but that there is a configuration of carbon that could, is imbuing carbon with properties that seem vastly more complex than the physical reality of carbon would allow.



  • The philosophy of this question is interesting, but if GPT5 is capable of performing all intelligence-related tasks at an entry level for all jobs, it would not only wipe out a large chunk of the job market, but also stop people from reaching senior positions, because the entry-level positions would be filled by GPT.

    Capitalists don’t have the 5-10 years of forethought to see how this would collapse society. Even if GPT5 isn’t “thinking”, it’s its capabilities that’ll make a material difference. Even if it never gets to the point of advanced human thought, it’s already spitting out a bunch of unreliable information. Make it slightly more reliable and it’ll be on par with entry-level humans in most fields.

    So I think dismissing it as “just marketing” is too reductive. Even if you think it doesn’t deserve rights because it’s not sentient, it’ll still fundamentally change society.




  • Just to be clear, the claim is that human thought is qualitatively different from an algorithm; I just haven’t been convinced of that claim. I chose my words incredibly carefully here; this isn’t me being pedantic.

    Anyway, I don’t know how you’ve come to the definitive conclusion that emotions somehow aren’t information, or that thoughts and dreams are somehow not outputs of some process.

    Nothing you’ve outlined is necessarily impossible to derive as the output of some process. It’s actually quite possible that they’re only ever derived as outputs of some process, unless you think they’re spawned into existence without causes, which I think religious people do believe (this is the essence of a free soul). I’m not religious.



  • I don’t know where everyone is getting these in-depth understandings of how and when sentience arises. To me, it seems plausible that simply increasing processing power for a sufficiently general algorithm produces sentience. I don’t believe in a soul, or that organic matter has special properties that allow sentience to arise.

    I could maybe get behind the idea that LLMs can’t be sentient, but you generalized to all algorithms, as if human thought is somehow qualitatively different from a sufficiently advanced algorithm.

    Even if we find the limit of LLMs and figure out that sentience can’t arise there (I don’t know how this would be proven, but let’s say it was), you’d still somehow have to prove that algorithms in general can’t produce sentience, and that only the magical fairy dust in our souls produces sentience.

    That’s not something that I’ve bought into yet.