  • Aha, would you mind elaborating? That sounds like quite the issue for Pacman to break its own dependencies.

    There was a bug with HTTP/2 in a particular version of curl, which was very quickly updated in the Arch repos and rolled out to users; it broke pacman’s ability to sync.

    It’s one of those frustrating things that happens, and someone has to hit the bug first. It’s nice to have a “stable” and a “testing” branch so that users explicitly opt in to bleeding-edge packages.

    Ah okay, I was under the impression that the installation didn’t require installing from source with the new binary system – I thought it was more akin to Arch’s installation, where you just select your kernel binary in Pacman, then download and install it.

    This is just the base system - it’s like any other distribution’s base install except that we don’t have an official ‘installer’; Gentoo distributes tarballs that users unpack following the guidance in the handbook.

    From there, most packages can be installed as a binary if the USE flags line up (and portage has been asked to do so); otherwise portage will compile them for you.
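    For example, a minimal sketch - the FEATURES line is all that’s strictly needed, and recent stage3 tarballs already ship a binrepos.conf pointing at the official binhost:

    ```
    # /etc/portage/make.conf
    FEATURES="${FEATURES} getbinpkg"   # fetch prebuilt packages where available

    # then install as normal; portage uses a binary when USE flags/arch match,
    # and falls back to compiling otherwise:
    emerge --ask app-editors/vim
    ```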

    After unpacking the system image you can install a binary kernel, have portage compile one for you, or manage it manually (but still let portage fetch the sources).
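    Concretely, the three routes look like this (these are the real package names from the Gentoo repo; pick one):

    ```
    emerge --ask sys-kernel/gentoo-kernel-bin   # prebuilt distribution kernel
    emerge --ask sys-kernel/gentoo-kernel       # portage configures and compiles it for you
    emerge --ask sys-kernel/gentoo-sources      # sources only - configure and build it yourself
    ```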

    Gentoo has a great system for managing configuration changes when a package updates a file that you’ve customised.

    Would you have any resources/documentation for me to look into this more?

    https://wiki.gentoo.org/wiki/Dispatch-conf
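    A rough sketch of what a session looks like (keystrokes as I remember them from the man page):

    ```
    dispatch-conf
    # for each updated config file it shows a diff and prompts, e.g.:
    #   u - use the proposed new file
    #   z - zap (discard) the new file, keeping your customised one
    #   n - skip to the next file
    # accepted changes are archived so they can be rolled back later
    ```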

    I misworded my original post – I was referring to things like updating the kernel. I thought that maybe the kernel would be a binary, so it would not have to be recompiled, as I assume it usually is.

    It comes down to user choice. The kernel can now be entirely binary or built from source (or from source but managed by portage).

    This sounds very appealing to me, but I must admit that these sorts of configurations do seem like they would be mildly daunting to juggle on a production machine.

    It’s actually pretty straightforward - you nominate packages that you want to run on ~arch (testing) and add them to some config files. Portage handles the rest.
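    Something like this (the package choices are just illustrative):

    ```
    # /etc/portage/package.accept_keywords/testing
    app-editors/neovim ~amd64              # take just this package from testing
    sys-kernel/gentoo-kernel-bin ~amd64
    ```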



  • This has been answered a bit already but:

    So, in summary, is a binary Gentoo functionally equivalent to Arch Linux, but with more control over the system?

    Perhaps, if that’s how you view the world. I’d argue that it’s better, as I’ve never seen Gentoo ship a version of curl that broke Portage…

    I would like to know more about the following:

    1. Does the OS installation change, and, if so, how?

    You basically unpack a tarball, select a kernel, install a bootloader, and go. It’s no different to before, except that you can optionally enable the use of binary packages.
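    Condensed from the Handbook (the stage filename and mount points are placeholders - follow the real guide for your platform):

    ```
    cd /mnt/gentoo
    tar xpvf stage3-*.tar.xz --xattrs-include='*.*' --numeric-owner

    mount --types proc /proc /mnt/gentoo/proc
    mount --rbind /sys  /mnt/gentoo/sys
    mount --rbind /dev  /mnt/gentoo/dev
    chroot /mnt/gentoo /bin/bash
    # ...then pick a kernel and install a bootloader as above
    ```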

    2. Do package installation, updates, and maintenance change, and, if so, how?

    If comparing to Arch, you use portage to handle all of that, but the concept is the same.
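    Side by side (the package name is just an example):

    ```
    pacman -S firefox                      # Arch: install
    emerge --ask www-client/firefox        # Gentoo: same idea

    pacman -Rs firefox                     # Arch: remove with unneeded deps
    emerge --deselect www-client/firefox   # Gentoo: drop it from the world set...
    emerge --ask --depclean                # ...then clean out orphaned deps
    ```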

    Gentoo has a great system for managing configuration changes when a package updates a file that you’ve customised.

    3. Do system updates change, and, if so, how?

    This question doesn’t make much sense to me. What is a “system update”? Isn’t that just updating all of your packages at once?
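    On Gentoo that just looks like:

    ```
    emerge --sync
    emerge --ask --update --deep --newuse @world
    dispatch-conf   # then review any changed config files
    ```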

    4. Do you lose any potential control over the system when using the binaries, rather than compiling from source, and, if so, what?

    Yes and no. If you customise your USE flags, the binary won’t be suitable, and portage will instead build the package as you requested.
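    For example (the flags are illustrative - check what a given package actually supports, e.g. with equery uses from gentoolkit):

    ```
    # /etc/portage/package.use/ffmpeg
    media-video/ffmpeg vaapi vpx -X
    # the official binary was built with the default flags, so with this set
    # portage will compile ffmpeg from source instead
    ```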

    5. Are there any differences in system stability? Can I expect things to break more readily on a binary Gentoo compared to Arch Linux?

    Hahahahaha. Hahahahahaha. Hahaha. Ha.

    Arch is notorious for shipping barely tested software just to have the highest version number in its repos.

    Gentoo enables users to select the stable or testing path on a per-package basis, so you have to opt in to packages that haven’t been well tested, and even those are typically better tested than Arch’s.


  • As the Gentoo chromium maintainer: not really. We strip most CFLAGS as part of the ebuild unless you enable a special USE flag to keep them, and that flag isn’t particularly supported - if you encounter breakage with it enabled, the first thing I’m going to ask you to do is turn it off.

    Edit: we do have some USE flags that control how the package is built, but that’s mostly choosing between the Google-bundled and system versions of libraries.
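    For example (both flags exist on the chromium ebuild, though the exact set varies by version):

    ```
    # /etc/portage/package.use/chromium
    www-client/chromium -custom-cflags system-icu   # keep CFLAGS stripped (the default); link the system ICU
    ```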

    Edit the second: there isn’t a package on the binhost for chromium yet; I need to work out how to build it so that it isn’t an issue to distribute.


  • i was thinking about re-writing the bluetooth daemon, in order to…

    The NIH is strong with this one.

    IMO you’d be better off putting that enthusiasm into fixing BlueZ - you might actually be able to fix some real issues and improve things for a great number of users relatively quickly.

    Writing a new, competing piece of software is going to take a while to achieve feature parity, let alone see any adoption by major distros.

    retro-compatible (exposes the same D-Bus APIs as BlueZ)

    Is there any reason for this? I can’t think of anything off the top of my head that would require it. It’s an admirable goal, but make sure it’s worth doing and that there aren’t real benefits that could be achieved by breaking compatibility.
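    If you want to see the surface you’d have to match, BlueZ’s D-Bus API is easy to inspect (the adapter object path may differ on your machine):

    ```
    busctl tree org.bluez                         # object paths BlueZ exposes
    busctl introspect org.bluez /org/bluez/hci0   # interfaces/methods on an adapter
    ```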


  • Overall you’re not too far off, but what you’ll tend to find is that it’s a lot of similar calculations done over and over.

    For example, climate scientists may, for certain experiments, read a ton of data from storage for, say, different locations and date/times across a bunch of jobs, but each job does basically the same thing - you might submit 100,000 permutations, or re-run the existing dataset through an updated model.

    The data from each job is then output and analysed (often with follow-up batch jobs).
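    A sketch of what one of those jobs can look like as a Slurm array job (the resources, names, and model binary are all placeholders):

    ```
    #!/bin/bash
    #SBATCH --job-name=climate-perm
    #SBATCH --array=0-999         # one task per parameter permutation
    #SBATCH --ntasks=1
    #SBATCH --cpus-per-task=4
    #SBATCH --mem=8G
    #SBATCH --time=02:00:00

    # each array task picks its own slice of the parameter space
    ./run_model --permutation "${SLURM_ARRAY_TASK_ID}" \
                --output "results/perm_${SLURM_ARRAY_TASK_ID}.nc"
    ```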

    Edit: here’s an example of a model that I have some real-world experience building to run on one of my clusters: https://www.nrel.colostate.edu/projects/century/

    Swin have some decent, public docs. I think mine are pretty good, but they’re not public so…

    https://supercomputing.swin.edu.au/docs/2-ozstar/oz-partition.html

    There will typically be some interactive nodes in a cluster as well that enable users to log in and perform interactive tasks, like validating that their software will run or, more commonly, submitting jobs to the queue manager.


  • Yes. I’m actually doing so right now at work, and run multiple Beowulf clusters for a research institution. You don’t need or want this.

    In a real cluster you would use software like Slurm or PBS to submit jobs to the cluster and have them execute on your compute nodes as resources are available to keep utilisation high.
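    With Slurm that workflow is basically the following (PBS has equivalents like qsub/qstat; the job script name is a placeholder):

    ```
    sbatch job.sh      # hand the batch script to the scheduler
    squeue -u "$USER"  # watch your queued/running jobs
    sacct -j <jobid> --format=JobID,State,Elapsed,MaxRSS   # accounting afterwards
    ```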

    It makes no sense for the home environment unless you’re trying to run some serious computations, and if you need to do that for work or study then you probably already have access to a real HPC.

    It might be interesting and fun, but not particularly useful. Maybe an HCI setup would be more appropriate, enabling you to scale VMs across hosts and get some redundancy.


  • I’m slightly biased, but if you already know a bit of Linux and desire more control / customisation, or want to understand how a system is put together, then I highly recommend Gentoo Linux. The install process is pretty simple, and with the new binary package hosts you have the option of quickly installing precompiled packages to get a system installed or brought up to date.

    The USE flags on packages, combined with Portage, the package manager, enable an unparalleled level of configurability; the community is welcoming and respects user choice about how people configure and use their systems; and the documentation on the wiki is top-notch - I’d say better than the Arch wiki in terms of overall quality.