Cake day: June 12th, 2023


  • Agreed on your point. We need a way to identify those links so that our browser or app can automatically open them through our own instance.

    I am thinking along the lines of a registered resource type, or maybe a redirect page, hosted by each instance, that knows how to send you to your own instance to view the post there.

    I am sure it is a problem that can be solved. I would, however, not be in favour of some kind of central identity management. It is too easy a choke point and would take autonomy away from the instances.
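
    One way the redirect idea could work is to rewrite any remote post URL into a lookup on the user's home instance. The sketch below assumes a search endpoint that can fetch a federated object by its canonical URL (a convention Lemmy-style platforms use); the `resolve_locally` helper and the instance names are illustrative, not any real API.

```python
from urllib.parse import quote, urlparse

def resolve_locally(remote_post_url: str, home_instance: str) -> str:
    """Rewrite a post URL from any instance into a lookup URL on the
    user's home instance, so the post opens where they are logged in."""
    parsed = urlparse(remote_post_url)
    if parsed.scheme not in ("http", "https") or not parsed.netloc:
        raise ValueError(f"not an absolute URL: {remote_post_url!r}")
    # Percent-encode the whole URL so it survives as a query parameter.
    return f"https://{home_instance}/search?q={quote(remote_post_url, safe='')}"

print(resolve_locally("https://lemmy.world/post/123456", "my.instance.example"))
# https://my.instance.example/search?q=https%3A%2F%2Flemmy.world%2Fpost%2F123456
```

    A browser extension or app could apply this rewrite automatically whenever it recognises a link to a known federated instance.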


  • That should just work. You view the post on your own instance and reply there. That response trickles out to the other instances.

    It may take a while to propagate, though. The paradigm is close to that of the ancient NNTP newsgroups, where responses travel at the speed of the servers’ synchronisation. It may be tricky for rapid-fire conversation, but it works well for comments on articles.
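
    The store-and-forward model described above can be sketched as a toy sync loop. Everything here (the `Instance` class, the `sync_to` method) is illustrative, not any real federation API; the point is only that a reply exists on its home instance immediately but only reaches a peer on the next synchronisation tick.

```python
from collections import deque

class Instance:
    """Toy federated instance: holds comments and a queue of
    not-yet-forwarded items, NNTP style."""
    def __init__(self, name):
        self.name = name
        self.comments = []     # everything this instance has seen
        self.outbox = deque()  # new items awaiting the next sync

    def post(self, comment):
        self.comments.append(comment)
        self.outbox.append(comment)

    def sync_to(self, other):
        """One synchronisation tick: push pending items to a peer."""
        while self.outbox:
            comment = self.outbox.popleft()
            if comment not in other.comments:
                other.comments.append(comment)

a, b = Instance("a.example"), Instance("b.example")
a.post("my reply")
b_view_before = list(b.comments)  # empty: nothing has propagated yet
a.sync_to(b)
print(b_view_before, b.comments)  # [] ['my reply']
```

    With infrequent ticks this is exactly the “speed of synchronisation” delay: fine for article comments, painful for rapid-fire chat.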


  • Agreed. I installed Ubuntu 22.04 last week to play with Stable Diffusion. I decided to have a quick look at Steam/Proton and was blown away by how easily it works. Fallout 76, my primary online game, installed and ran with almost no hassle. I even managed to fix a long-time irritation with runaway frame rates.

    The only glitch that remains unsolved is a hang on exit, which is a known issue.


  • This is a fact and a half. I have been using Linux on and off for a headless Minecraft server, running vanilla Debian. Yesterday I decided to load up the latest Ubuntu LTS to run Stable Diffusion, my first end-user Linux install in ages. It was a seamless 15-minute experience, from boot ISO to a normal, functioning desktop. Add another hour and Stable Diffusion was up and running. A far cry from building Slackware from source in the early 2000s. It truly is amazing when we consider what has been achieved.


  • The exciting thing about this space is that much of it is undefined. It is all about the protocols and the main features at the moment. The 2nd generation tools will be born out of what we discuss now and think about now.

    How do you make sure a user is not trapped in their special-interest bubble and still gets to see the content that has everyone excited? How will we make use of the underlying data, on both posts and users, to suggest and aggregate content?

    I think there will be more than one solution eventually, different flavours of aggregators running on the same underlying data.

    So much possibility. And we control it. If you don’t like the way your Lemmy instance or kbin aggregates, choose another site or build your own. The data is there.




  • Edit: Wrote this on mobile. The mobile UI is not always clear about the source magazine a post came from, so I missed the Linux in there. Things are not as dire on Linux as on Windows for AMD, so my assessment may be a bit pessimistic. With AMD’s focus on the data centre for machine learning, the Linux driver stack seems fairly well supported.

    I spent the last few days getting Stable Diffusion and PyTorch working on my Radeon 6800 XT on Windows. The DirectML distribution of Stable Diffusion runs at about 1/4 of the speed of raw ROCm when I compare it to the SHARK tooling, which supports ROCm via Docker on Windows.

    Expect the tooling to be clunky and expect to compile much of it yourself on Linux. Prebuilt packages will almost all be for Nvidia.
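
    For what it’s worth, when prebuilt AMD packages are available at all, they come from PyTorch’s ROCm wheel index (Linux only). The version tag below is illustrative and changes between releases:

```shell
# Illustrative only: check pytorch.org for the current ROCm version tag.
pip3 install torch torchvision --index-url https://download.pytorch.org/whl/rocm5.4.2

# Quick sanity check that the ROCm (HIP) backend is present;
# torch.cuda.is_available() also reports True on ROCm builds.
python3 -c "import torch; print(torch.version.hip, torch.cuda.is_available())"
```

    Anything beyond that, such as custom CUDA kernels in third-party repositories, is where the compile-it-yourself pain starts.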

    AMD is pushing hard into the AI space, but it is aiming at data-centre users. They are rumoured to be building ROCm support into their Windows drivers, but when that will ship is anyone’s guess.

    So right now, if you need to hit the ground running for your academic work, I would recommend Nvidia, as much as it pains me as a long-time AMD user.