i’m lizard

  • 0 Posts
  • 17 Comments
Joined 3 months ago
Cake day: June 21st, 2024

  • It depends on whether you can feasibly implement compatibility layers for large parts of the “required” but very work-intensive drivers. FreeBSD has the same driver struggles and ended up with LinuxKPI to support AMD/Intel GPUs. I know a whole bunch of toy kernels have implemented compatibility layers for parts of Linux in some fashion too.

    It’s a ton of work overall but there’s room to lift enough already existing stuff from Linux to get the ball rolling.


  • In my experience, most hangs with a message about amdgpu loading on screen are caused by an amdgpu issue of some kind. I’d check whether amdgpu actually ends up loaded via lsmod | grep amdgpu, plus a general journalctl -b 0 | grep amdgpu to see if there are any obvious failures there (quick sketch at the end of this comment). Chances are that even if it’s not amdgpu, the real failure is in the journal somewhere.

    Could be a wrong setting of hardware.enableRedistributableFirmware (should be true) or the new-ish hardware.amdgpu.initrd.enable (either value is valid, but one or the other may work more reliably on your system).
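
    Putting the diagnostic side together, something along these lines is usually enough to tell whether the module, the firmware, or neither is at fault (nothing here is specific to your setup; the extra firmware grep is just a common place for these failures to show up):

        # did the kernel module actually load?
        lsmod | grep amdgpu

        # any amdgpu errors or warnings from this boot?
        journalctl -b 0 | grep -i amdgpu

        # missing or failed firmware loads often show up here too
        journalctl -b 0 | grep -i firmware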


  • You can’t pretend an open port is closed, because an open port is really just a service that’s listening. You can’t pretend-close it and still have that service work. The only thing you can do is firewall off the entire service, but presumably, any competent distro firewalls off all services by default, and any service listening publicly is doing so for a good reason.

    I guess it comes down to whether they feel it’s worth obfuscating port scan data. If you deploy that across your whole network, you make things just a little bit more annoying for attackers. It’s a tiny bit of obfuscation that doesn’t really matter, but plenty of security teams need every win they can get, as management is always demanding more even after you’ve done everything that’s actually useful.
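
    For what it’s worth, “firewalling off the entire service” vs. obfuscating scan data mostly comes down to drop vs. reject. A minimal nftables sketch, assuming a hypothetical service on TCP port 8080 and a standard inet filter table with an input chain already in place:

        # silently drop: scanners report the port as "filtered", clients hang until timeout
        nft add rule inet filter input tcp dport 8080 drop

        # actively reject: scanners report "closed", clients fail fast
        nft add rule inet filter input tcp dport 8080 reject with tcp reset

    Neither option lets outside clients keep using the service, which is the whole point above.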


  • Requiring agreement to some unspecified ever-changing terms of service in order to use the product you just bought, especially when use of such products is required in the modern world. Google and Apple in particular are more or less able to trivially deny any non-technical person access to smartphones and many things associated with them like access to mobile banking. Microsoft is heading that way with Windows requiring MS accounts, too, though they’re not completely there yet.


  • My suggestion is to use system management tools like Foreman. It has a “content views” mechanism that can do more or less what you want. There are other tools along those lines, like Uyuni. Of course, those tools have a lot of features, so they might be overkill for your case, but a lot of those features will probably end up being useful anyway if you have that many hosts.

    With the way Debian/Ubuntu APT repos are set up, if you take a copy of /dists/$DISTRO_VERSION as downloaded from a mirror at any given moment and serve it to a particular server, apt update && apt upgrade on that server will install exactly those versions, provided the actual package files in /pool are still available. You can set up caching proxies for that part.

    I remember my DIY hodgepodge a decade ago ultimately being just a daily cronjob. It pulled the current distro (let’s say bookworm) and the associated -updates and -security repos from an upstream rsync-capable mirror, then, after checking a killswitch and making sure things weren’t currently on fire, it did rsync -rva tier2 tier3; rsync -rva tier1 tier2; rsync -rva upstream/bookworm tier1 (something like the sketch below). Machines were configured to pull and update from tier1 (first 20%)/tier2 (second 20%)/tier3 (rest) appropriately on a regular basis. The files in /pool were served by apt-cacher-ng, but I don’t know if that’s still the cool option nowadays (you will need some kind of local caching for those, as old files may disappear without notice).
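
    Stripped down, that cronjob looked roughly like this (the paths and mirror URL are made up for illustration, the -security repo actually lives under a different path on real mirrors, and the real thing had more error handling):

        #!/bin/sh
        set -eu
        BASE=/srv/apt                              # hypothetical local mirror root
        MIRROR=rsync://mirror.example.org/debian   # any rsync-capable mirror

        # grab the current dists/ metadata into a local staging copy
        for dist in bookworm bookworm-updates; do
            rsync -rva "$MIRROR/dists/$dist" "$BASE/upstream/"
        done

        # killswitch: touch this file to pause promotions while things are on fire
        if [ -e "$BASE/STOP" ]; then exit 0; fi

        # promote the oldest tier first so each tier trails the one above by one run
        rsync -rva "$BASE/tier2/" "$BASE/tier3/"
        rsync -rva "$BASE/tier1/" "$BASE/tier2/"
        rsync -rva "$BASE/upstream/" "$BASE/tier1/"

    Each tier’s machines then point their sources at their own copy, with apt-cacher-ng (or whatever the current equivalent is) sitting in front of /pool.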



  • Company offering new-age antivirus solutions, which is to say that instead of being mostly signature-based, it tries to look at application behavior. If some user opens not_a_virus_please_open.docx from their spam folder and Word gets exploited, it might end up running some malware that tries to encrypt the entire drive. The product is supposed to sniff out that 1. Word normally opens and saves like one document at a time and 2. some unknown program is suddenly being overly active, and so it should stop that and ring some very loud alarm bells at the IT department.

    Basically they doubled down on heuristics-based detection, and on that basis they claim to be able to recognize and stop all kinds of new malware they haven’t seen yet. My experience is that they’re consistently the outlier at the top end of false positives in business AV tests (e.g. AV-Comparatives Q2 2024), and their advantage has mostly disappeared since every AV has implemented that kind of behavior-based detection by now.


  • All GPUs released since the RTX 2000 line came out are supported, and all new GPUs will most likely have support, especially with this announcement saying they’re committed to it. There’s a support list on their GitHub and it includes all the weird little things you’d be worried about. Even silly little laptop chips like the new RTX 500 are on it.

    I think the only reason they limited GPU support is that the older ones physically don’t have the hardware for this approach; they switched to their newer RISC-V “GSP” processors with the RTX line. In the new open module, all of their proprietary “secret sauce” was shoved off to firmware running on that new GSP. Previously, their proprietary kernel module loaded all of that same secret sauce as a gigantic obfuscated blob running on your normal CPU instead. The Windows side of their driver has also been moving towards using the GSP; they even advertised that it boosts performance or whatever, and I can believe it.

    That said, with this new stuff, the official Nvidia userland portions providing Vulkan/OpenGL/CUDA support and the like are still proprietary. It’s still worse than AMD in that regard. But at least it’s now possible to replace those bits, and Mesa/NVK are working on getting Vulkan up and running (NVK is supposedly getting pretty damn good, and Mesa’s OpenGL-on-Vulkan layer is pretty good too, so OpenGL comes along for free).


  • vim has better default keybindings/commands that allow for less movement of your hands. Nowadays, with reasonably current versions of nano, that’s mostly it. The main difference is that nano is somewhat usable but extremely inefficient unless you learn it, while vim forces you to learn it to get anything done at all, which also pushes people to spend a bit of time learning it in general.

    If you’re sure of the numbers you’re using, vim’s ability to repeat commands is also helpful. In practice I find it’s really hard to make use of that beyond low numbers, where nano can still achieve the same thing with a similar number of keypresses. E.g. deleting 3 words with <escape>3dwi can be done with a sequence like Alt-A ^→ ^→ ^→ ^K in nano. Make it 20 words and nano is going to be a lot slower, but that’s quite an uncommon action.

    But in practice, nano users don’t spend time learning any of that and just hold delete until the words are gone, which takes forever. Everyone who can do the basics in vim quickly learns that you can dw words away and make it 3dw to delete 3 of them. The default, easiest to use and access tool for any given situation gets blamed not just for its flaws, but also for the users who don’t want to spend time learning any tool.