• 6 Posts
  • 290 Comments
Joined 1 year ago
Cake day: June 4th, 2023



  • howrar@lemmy.ca to Science Memes@mander.xyz · Academic writing · 6 days ago

    And that’s not all. It’s easy to tell someone the high-level area you’re working on, but to explain the exact problem you’re trying to solve and why it’s interesting? That’s a whole journey through many topics that are very unintuitive for human brains to grasp, and sometimes it takes heavy mathematical abstraction to even see that there’s a problem to begin with.





  • Maybe chickpeas are expensive where you live, or maybe you miscalculated. Either way, take a look at my numbers for comparison.

    We can get a 3.63 kg bag of chickpeas here for $7.49 (CAD). Assuming you fulfill all your Calorie and protein needs from chickpeas alone (2,500 Calories and 150 g of protein per day), it comes out to about $600/year, or $1.64/day. To reach $10/day, you’d have to pay about six times as much for your chickpeas, so that same 3.63 kg bag would have to cost around $45.50.
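    Here’s a rough sketch of that math in Python if you want to plug in your own prices. The nutrition figures for dry chickpeas (~364 kcal and ~19 g of protein per 100 g) are my own assumptions, so swap in whatever your bag says:

```python
# Back-of-the-envelope chickpea budget. Nutrition values for dry chickpeas
# are assumed (~364 kcal, ~19 g protein per 100 g); adjust to your label.
BAG_KG = 3.63
BAG_PRICE_CAD = 7.49
KCAL_PER_100G = 364        # assumed
PROTEIN_G_PER_100G = 19.0  # assumed

KCAL_TARGET = 2500         # per day
PROTEIN_TARGET_G = 150     # per day

# Grams per day needed to hit whichever target is harder to reach.
grams_per_day = max(
    100 * KCAL_TARGET / KCAL_PER_100G,
    100 * PROTEIN_TARGET_G / PROTEIN_G_PER_100G,
)

price_per_kg = BAG_PRICE_CAD / BAG_KG
cost_per_day = grams_per_day / 1000 * price_per_kg
cost_per_year = cost_per_day * 365

# What the same bag would have to cost for chickpeas to run $10/day.
bag_price_at_10_per_day = BAG_PRICE_CAD * 10 / cost_per_day

print(f"{grams_per_day:.0f} g/day -> ${cost_per_day:.2f}/day, ${cost_per_year:.0f}/year")
print(f"Bag price needed for $10/day: ${bag_price_at_10_per_day:.2f}")
```

    With those assumed nutrition numbers it lands around $1.63/day and roughly $46 for the bag, which is in the same ballpark as the figures above.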


  • More variety in your diet is almost always better than less, for kids and adults alike. The trouble with younger kids is that deficiencies can impact their development and have more severe long-term consequences, and they’re also less capable of seeking out foods to fill that gap.


  • It’s important for sure. It just so happens to also be one of those things that are very easy to verify but hard to do, which is what makes it perfect for automation.

    The other nice thing about letting AI do the naming is that it produces names that are statistically likely given the context, which means they’re more likely to be understood by others. If I come up with something myself, it might make sense to me, but it might not to someone else reading the code. I think this is especially important when you’re working in your own little bubble and don’t get many eyes on your code.



  • Another one of the million projects in my backlog that I’ll never get to.

    There’s one major problem with this kind of website that I’ve been wanting a solution for: people often only leave reviews when they have an exceptionally bad experience. So when you see a product with lots of negative reviews, does that mean it’s actually bad, or is it just a very popular product, so lots of people will run into issues with it? I think the solution is some form of review pre-registration. When you buy something that’s intended to last a while, you inform the review website of that purchase. Then, if something goes wrong and you leave a negative review, everyone can see what percentage of purchases were affected.
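    To make the idea concrete, here’s a minimal sketch. The names (ReviewRegistry, register_purchase, and so on) are just made up for illustration:

```python
from collections import defaultdict
from dataclasses import dataclass

# Sketch of "review pre-registration": purchases are registered up front,
# so a negative review can be reported as a fraction of all registered
# purchases instead of a raw count of complaints.

@dataclass
class ProductRecord:
    purchases: int = 0          # registered at purchase time
    negative_reviews: int = 0   # filed later, only if something goes wrong

class ReviewRegistry:
    def __init__(self) -> None:
        self._records: defaultdict[str, ProductRecord] = defaultdict(ProductRecord)

    def register_purchase(self, product_id: str) -> None:
        self._records[product_id].purchases += 1

    def file_negative_review(self, product_id: str) -> None:
        self._records[product_id].negative_reviews += 1

    def failure_rate(self, product_id: str) -> float:
        rec = self._records[product_id]
        # The denominator is registered purchases, not review count, which is
        # what separates "very popular" from "actually bad".
        return rec.negative_reviews / rec.purchases if rec.purchases else 0.0

# Example: 1000 registered purchases and 30 negative reviews -> 3.0%.
registry = ReviewRegistry()
for _ in range(1000):
    registry.register_purchase("kettle-x")
for _ in range(30):
    registry.file_negative_review("kettle-x")
print(f"{registry.failure_rate('kettle-x'):.1%}")
```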


  • mathematically “correct” sounding output

    It’s hard to say because that’s a rather ambiguous way of describing it (“correct” could mean anything), but it is a valid way of describing its mechanisms.

    “Correct” in the context of LLMs would be a token that is likely to follow the preceding sequence of tokens. The model computes a probability for every possible token, takes a random sample according to that distribution* to choose the next token, and repeats until some termination condition is met. The training side of this is what we call maximum likelihood estimation (MLE) in machine learning (ML): we’re learning a distribution that makes the training data as likely as possible. MLE is indeed the basis of a lot of ML, but not all of it.

    *Oversimplification.
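    If it helps, here’s a toy version of that loop. The vocabulary and the “model” are made up (it just returns random logits); a real LLM computes the logits with a neural network, and real samplers add things like temperature and top-p on top:

```python
import numpy as np

VOCAB = ["the", "cat", "sat", "<eos>"]  # toy vocabulary
rng = np.random.default_rng(0)

def fake_model_logits(context: list[str]) -> np.ndarray:
    # Stand-in for the network: one score per vocabulary token.
    return rng.normal(size=len(VOCAB))

def softmax(logits: np.ndarray) -> np.ndarray:
    z = np.exp(logits - logits.max())
    return z / z.sum()

tokens = ["the"]
while tokens[-1] != "<eos>" and len(tokens) < 20:
    probs = softmax(fake_model_logits(tokens))  # distribution over the next token
    tokens.append(rng.choice(VOCAB, p=probs))   # sample from that distribution

print(" ".join(tokens))
```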