
  • a standalone drive

    Another cool/scary feature of the Blu-ray spec is offline firmware updates (called BD+). Any disc can contain code that runs automatically and can patch the player firmware or execute arbitrary functions. So if you have an older hacked player and you insert a newer disc into it, the AACS Consortium has the ability to brick it. Or if you “own” an older disc but the Consortium starts to dislike it for some reason (maybe they discovered that the disc was printed by a pirate publisher, or maybe there was a retroactive licensing dispute), they can include code on every newly published disc that blacklists the old one. Even with a standalone player that you never connect to the internet, the moment you insert any new disc, your old “problematic” disc becomes unplayable. As far as I know this has never happened to a previously legal disc, but it is possible within the spec.

    Every player manufacturer must obey the spec and implement the BD+ virtual machine in order to be allowed to read AACS content. And if you hack your player to ignore BD+ code, newer discs will not play, because their content may be scrambled in a way that only the custom BD+ code included on the disc can unscramble.
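    To make the mechanism concrete, here is a toy sketch of the BD+ idea. Everything in it is invented for illustration (the opcodes, the XOR “scrambling”, the model names); the real BD+ virtual machine and its crypto are far more involved.

    ```python
    # Toy model of BD+: the disc ships a program, the player must run it,
    # and the program's output is what unscrambles the content.

    def run_disc_program(program, player_state):
        """Minimal VM: disc code may probe the player (e.g. to blacklist
        hacked models) and builds the per-disc descrambling key."""
        key = 0
        for op, arg in program:
            if op == "CHECK_MODEL" and player_state["model"] in arg:
                player_state["revoked"] = True      # brick / refuse playback
            elif op == "MIX_KEY":
                key ^= arg                          # key only exists if the VM ran
        return key

    def descramble(scrambled, key):
        # Stand-in for the real fix-up step: here just a byte-wise XOR.
        return bytes(b ^ key for b in scrambled)

    player = {"model": "ACME-2009", "revoked": False}
    disc_code = [("CHECK_MODEL", {"HACKED-2007"}), ("MIX_KEY", 0x5A)]

    key = run_disc_program(disc_code, player)
    video = descramble(bytes([0x48 ^ 0x5A, 0x6E ^ 0x5A]), key)
    print(player["revoked"], video)                 # False b'Hn'
    ```

    Skip the VM and you never learn the key, so the scrambled content stays unreadable. That is the enforcement lever.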


  • Some notes for my use. As I understand it, there are 3 layers of “AI” involved:

    The 1st is a “transformer”, a type of neural network invented in 2017, which led to the hugely successful “generative pre-trained transformers” of recent years like GPT-4 and ChatGPT. The one used here is a toy model with only a single hidden layer (“MLP” = “multilayer perceptron”) of 512 nodes (also called “neurons”; their count is the layer’s “dimensionality”). The model is trained on the dataset called “The Pile”, an 886 GB collection of text from all kinds of sources. The dataset is “tokenized” (pre-processed) into 100 billion tokens by converting words or word fragments into numbers for easier calculation; a toy illustration is below. You can see an example of what the text data looks like here. The transformer learns from this data.
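    Roughly what that tokenization step does (the mini-vocabulary and the whole-word splitting here are made up; real tokenizers like BPE learn a fragment vocabulary from the data):

    ```python
    # Hypothetical mini-vocabulary; real ones hold tens of thousands of entries.
    vocab = {"the": 0, "trans": 1, "former": 2, "learns": 3, "from": 4,
             "this": 5, "data": 6, "<unk>": 7}

    def tokenize(text):
        """Map each word to its integer ID; unknown words become <unk>.
        A real tokenizer would instead split an unknown word into known
        fragments, e.g. "transformer" -> "trans" + "former" -> [1, 2]."""
        return [vocab.get(w, vocab["<unk>"]) for w in text.lower().split()]

    print(tokenize("the transformer learns from this data"))
    # -> [0, 7, 3, 4, 5, 6]  (whole-word "transformer" is unknown here)
    ```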

    In the paper, the researchers do cajole the transformer into generating text to help understand its workings. I am not quite sure yet whether every transformer is automatically a generator like ChatGPT, or whether it needs something extra done to it; my current understanding is sketched below. I would have enjoyed seeing more sample text that the toy model can generate! It looks surprisingly capable despite having only 512 nodes in the hidden layer. There is probably a way to download the model and execute it locally. Would it have been possible to add the generative model as a JavaScript toy to supplement the visualizer?
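    As far as I understand it (an assumption on my part, not something the paper spells out), any transformer trained to predict the next token can be run as a generator by sampling from its own predictions in a loop:

    ```python
    import math
    import random

    VOCAB_SIZE = 8   # toy number; the real model's vocabulary is much larger

    def next_token_logits(context):
        """Stand-in for the trained transformer's forward pass, which
        returns one score per vocabulary entry for the next token."""
        return [random.gauss(0.0, 1.0) for _ in range(VOCAB_SIZE)]

    def generate(prompt_ids, steps):
        ids = list(prompt_ids)
        for _ in range(steps):
            logits = next_token_logits(ids)
            weights = [math.exp(l) for l in logits]   # softmax weights
            ids.append(random.choices(range(VOCAB_SIZE), weights)[0])
        return ids   # run the tokenizer in reverse to get text back

    print(generate([0, 1], steps=8))
    ```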

    The main transformer they use is “model A”, and they also trained a twin transformer, “model B”, on the same text but with a different random initialization seed, to see whether the two would develop equivalent semantic features (they did).
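    The twin setup in miniature (a framework-agnostic sketch; the only difference between the runs is the seed that draws the starting weights):

    ```python
    import numpy as np

    def init_weights(seed, shape=(512, 512)):
        # Identical distribution, different draw: this is the entire
        # difference between "model A" and "model B" before training.
        return np.random.default_rng(seed).normal(0.0, 0.02, shape)

    model_A = init_weights(seed=0)
    model_B = init_weights(seed=1)
    print(np.allclose(model_A, model_B))   # False: the twins start apart
    ```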

    The 2nd AI is an “autoencoder”, a different type of neural network, which is good at converting data fed to it into a “more efficient representation”, like a lossy compressor/zip archiver, though in this case a “decompressor” may be the more apt term. Encoding is also called “changing the dimensionality” of the data. The researchers trained/tuned this 2nd AI to decompose the 1st model’s internal activations into a number of semantic features, in a way which both captures a good chunk of the model’s information content and keeps the features sensible to humans. The target number of features is tunable anywhere from 512 (1-to-1) up to 131072 (1-to-256). The number they found most useful in this case was 4096; a sketch is below.
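    Roughly how I picture that autoencoder (the shapes come from the post; the ReLU/L1 details are my assumption of a standard sparse autoencoder, and the training loop is omitted):

    ```python
    import numpy as np

    d_mlp, n_features = 512, 4096     # 1-to-8 expansion; tunable up to 131072

    rng = np.random.default_rng(0)
    W_enc = rng.normal(0.0, 0.01, (d_mlp, n_features))
    b_enc = np.zeros(n_features)
    W_dec = rng.normal(0.0, 0.01, (n_features, d_mlp))

    def encode(x):
        # ReLU leaves only a few features active per input, which is what
        # keeps each feature human-interpretable.
        return np.maximum(0.0, x @ W_enc + b_enc)

    def decode(f):
        # The "decompressor" direction: rebuild the 512-dim activation.
        return f @ W_dec

    x = rng.normal(size=(8, d_mlp))   # a batch of MLP activations
    f = encode(x)
    loss = np.mean((decode(f) - x) ** 2) + 1e-3 * np.abs(f).mean()  # MSE + L1
    print(f.shape, float(loss))       # (8, 4096)
    ```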

    The 3rd AI is a “large language model” nicknamed Claude, similar to GPT-4, which Anthropic has developed for their own use. They told it to annotate and interpret the features found by the 2nd AI. They had one researcher slowly annotate 412 features manually for comparison; Claude did as well as or better than the human, so they let it finish all the rest on its own. These are the descriptions shown by the visualization in the OP link.

    Pretty cool how they use one AI to disassemble another AI and then use a 3rd AI to describe it in human terms!



  • Can’t access the article, but wasn’t China the one most vulnerable to the Malacca Strait being a chokepoint? As in, their trade towards Europe and fuel imports from the Middle East being potentially threatened? How does Thailand pitching this to the US make sense, then? And how would a Thai bypass even increase security, since both routes are in the same area and can be equally blockaded? There aren’t any throughput-capacity problems at Malacca, unlike, say, at the Panama Canal. Maybe it would make the travel distance slightly shorter, but is there really any way it could ever be cost-effective to offload and reload ships for a few hundred kilometers of savings?


  • I want people to be able to report bugs without any trouble.

    Thank you for being aware! I’ve experienced this on github.com. I’ve tried several times to submit issues to open-source projects, complete with proposed code to fix the bug, but GitHub shadowbans my account 6 hours after creation (because I use a VPN? a third-party email provider? don’t provide a phone number? who knows). I can see the issue and pull request when logged in, but the maintainers only see a 404 on their own project page, even if I give them a direct link. I ended up sending them a screenshot of the issue page just to convince them this was even possible. Sad to hear GitLab now does it even worse by making a phone number mandatory.


  • To be clear, human chimeras already exist naturally, from the fusion of twin embryos in utero. Most of them go their entire lives without even realizing it. It only occasionally pops up in the news, when someone receives a negative paternity test and, after lots of stress and hair-pulling and doctor’s visits, it turns out that their blood comes from a different cell line than their balls.

    Human-ape chimeras are the stuff of bioethicists’ nightmares and thankfully illegal everywhere civilized.


  • Excellent excellent!

    If 6 is rolled, then P(X|R=6) = (N-1 choose 9)/(N choose 10)

    Might as well reduce that to 10/N to make the rest of the lines easier to read.
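    For anyone who wants the reduction worked out:

    ```latex
    \frac{\binom{N-1}{9}}{\binom{N}{10}}
      = \frac{(N-1)!}{9!\,(N-10)!}\cdot\frac{10!\,(N-10)!}{N!}
      = \frac{10!}{9!}\cdot\frac{(N-1)!}{N!}
      = \frac{10}{N}
    ```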

    If you don’t flip it, you have a 2/3 chance of dying.

    There is also a chance that your switch is not connected and someone else has control of the real one. So there is an implicit assumption that everyone else is as logical as you and as selfish/altruistic as you, such that whatever logic you use to arrive at a decision, they must arrive at the same decision.

    No matter what your goal is, given the information you have, flipping the switch is always the better choice.

    That is my conclusion too! I was surprised to learn in the comment thread with @pancake, though, that the decision may change depending on the percentage of altruism in the population. E.g. if you are the only selfish one in an altruistic society, you’d benefit from deliberately not flipping the switch. Being a selfish one in a selfish society reduces to the prisoner’s dilemma.


  • there’s no way to know which track the trolley is on

    It’s a standard trolley meme problem: the trolley will keep going on the main track unless the lever is switched 😁. I thought !science_memes would be familiar with trolley problems, but I guess I get to introduce some of you! You might want to start off with some easier trolley memes first; this is advanced-level stuff.

    where the real lever sends it

    There is not usually ambiguity with the lever. If you wish, you can have an announcement in the headphones (“main track… side track…”) every time you flip the lever. Your only uncertainty is which track you yourself are tied to, since you’re blindfolded.

    there’s a 0.017% chance

    1/6 * 10% = 1/60 = 0.01666… = 1.666…% ~= 1.7%! Careful there!

    It’s not really a trolley problem, because in both scenarios a track is empty,

    Everything is a trolley problem.


  • My guess is no to the first, since I have a 1/3 chance of being on the forked path, vs. a 1/15 chance of being on the straight path with my lever connected.

    Suppose you live in a kingdom where everyone is as selfish as you, and you’ve seen on TV many situations exactly like this one where people were tied to the tracks, usually one at a time and occasionally 10 at a time. (The villain has been prolific.) You’ve seen them all follow this logic and choose not to flip their switch, yet out of the ~1500 people you have seen in peril this way, ~1000 have died. If only their logic had convinced them (and you) otherwise, those 1000 could have selfishly survived! It doesn’t seem very logical to follow a course of action that kills you more often than its opposite does.

    (If you don’t want to imagine a kingdom where everyone is selfish, you can imagine one where x% are selfish and (100-x)% are altruistic, or some other mixture maybe with y% of people who flip the lever randomly back and forth and z% who cannot even understand the question. The point is that the paradox still exists.)
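    Here is a quick Monte Carlo check of that kingdom, under my reading of the setup (assumptions: a roll of 6 ties 10 people to the straight track with one connected lever among them; any other roll ties 1 person to the fork with a connected lever; the trolley goes straight unless the connected lever is flipped):

    ```python
    import random

    def death_rate(policy_flip, episodes=100_000):
        died = at_risk = 0
        for _ in range(episodes):
            if random.randint(1, 6) == 6:
                n_straight, n_fork = 10, 0   # 10 people tied to the straight track
            else:
                n_straight, n_fork = 0, 1    # 1 person tied to the forked track
            at_risk += n_straight + n_fork
            # Everyone follows the same policy, so it doesn't matter whose
            # lever is the connected one: the trolley diverts iff they flip.
            died += n_fork if policy_flip else n_straight
        return died / at_risk

    print("nobody flips:  ", round(death_rate(False), 3))  # ~0.667 (~1000 of 1500)
    print("everyone flips:", round(death_rate(True), 3))   # ~0.333
    ```

    The never-flip kingdom loses about two thirds of its tied citizens, matching the ~1000 out of ~1500 above.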

    Edit: I can see now how in a 100% altruistic kingdom, where you are the only selfish one and you know for sure that everyone else will logically, altruistically pull the lever, it makes sense for you not to pull the lever. Presumably there is some population split (44% selfish/56% altruistic?) where your selfish decision will have to reverse. Weird to think that your estimate of the selfishness of the rest of the population has a bearing on your decision!