• 0 Posts
  • 128 Comments
Joined 1 year ago
Cake day: July 3rd, 2023

  • As someone who recently took an infant and family CPR class for my son, who started solid foods a few months ago: this is pretty similar to how they teach it today, and I’m pretty sure it would have the same effect. You can’t perform the Heimlich maneuver on a baby or very small child for a variety of reasons. This method, or something similar to it, is both safer and more effective, since it lets gravity help dislodge the food.

  • https://curl.se/ has been account-free since 1998.

    Never understood why people keep trying to use proprietary tools for this, especially when curl is so good.

    I have a directory of shell scripts I use to test out endpoints. I persist request/response data either with environment variables or regular files. Oh and since these are just shell scripts, it’s pretty trivial to do stuff like iterate over a CSV (or JSON array) and make a request for each row, conditionally make requests, or whatever else you want.

    Oh and honorable mention goes to jo and jq for making it super easy to make/process JSON data.
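
    For a concrete flavor, here’s a minimal sketch of the CSV iteration idea; the endpoint, field names, and users.csv are all made up for illustration:

        #!/usr/bin/env sh
        # For each CSV row, build a JSON body with jo, POST it with curl,
        # and pull a field out of the response with jq.
        BASE_URL="${BASE_URL:-https://api.example.com}"

        # Naive CSV split on commas; fine for simple, unquoted fields.
        while IFS=, read -r name email; do
          jo name="$name" email="$email" |
            curl -sS -X POST "$BASE_URL/users" \
              -H 'Content-Type: application/json' \
              -d @- |
            jq -r '.id'
        done < users.csv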

  • Commits should be reasonably small, logical, and atomic. In many cases, an MR represents a larger body of work than a single commit; my MRs average something like 3-5 intentionally crafted commits. I do not want those commits squashed. If they should be squashed, I would have done so before opening the MR.

    People should actually just give a damn and craft a quality history for their MRs. It makes reviewing way easier, makes stuff like git blame and git bisect way more useful, makes targeted revert commits actually possible when necessary, makes cherry-picking a lot more useful, and so much more.

    Squash-merging everything is just a shitty band-aid for poor commit hygiene. You end up with a history of huge, inscrutable commits and actively make it harder for people to understand the history of the repo.
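
    A minimal sketch of what that looks like in practice with plain git; the branch name, tag, file, and SHAs below are made up for illustration:

        # While working, mark small fixes as fixups of the commit they belong to:
        git commit --fixup=a1b2c3d

        # Before opening the MR, fold the fixups in and reorder/reword as needed:
        git rebase -i --autosquash origin/main

        # The payoff once it lands:
        git bisect start HEAD v1.2.0      # small commits pinpoint the exact change
        git blame -L 40,60 src/parser.c   # blame points at a meaningful message
        git revert f4e5d6a                # atomic commits revert cleanly
        git cherry-pick f4e5d6a           # ...and cherry-pick cleanly elsewhere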

  • I understand what you’re saying. My point is that data validation is precisely the purpose of parsers (or deserialization) in statically-typed languages. Type-checking is data validation, and parsing is the process of turning untyped, unvalidated data into typed, validated data. What’s more, you can often get this functionality for free, without writing any code beyond the type itself (if the validation is simple enough, anyway). Pydantic exists to solve a problem of Python’s own making and to reproduce what’s standard in statically-typed languages.

    In the case of config files, it’s even possible to do this at compile time, depending on the language. In other words, you can statically guarantee that a config file exists at a particular location and deserialize/validate it into a native data structure, all without ever running your actual program. At my day job, all of our app’s configuration lives in Dhall files that get imported and validated in our codebase as a compile-time step, so misconfiguration is a compiler error.
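
    As a rough analogue of that last bit without compiler integration, a build or CI step can type-check the config up front and fail fast. A sketch assuming the stock dhall and dhall-to-json CLI tools; the file names config.dhall and Config.dhall are hypothetical:

        #!/usr/bin/env sh
        set -eu

        # The type annotation makes dhall resolve, type-check, and normalize
        # the config; a schema mismatch exits non-zero and aborts the build.
        echo './config.dhall : ./Config.dhall' | dhall > /dev/null

        # Optionally render to JSON for tooling that can't read Dhall directly.
        echo './config.dhall' | dhall-to-json > config.json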