• 1 Post
  • 116 Comments
Joined 11 months ago
Cake day: July 28th, 2023

  • Anafroj@sh.itjust.works to Selfhosted@lemmy.world · Cost-cutting tips?
    9 months ago

    That’s the same thing. :) If you reduce computing load, you reduce the need for costly hardware and for energy, and thus the amount of money needed to build and run your setup. There’s a saying in (software) engineering: “reducing energy consumption and increasing performance require the same optimizations”. Make your code faster (by itself, not by beefing up the hardware) and it consumes less energy. Make your application simpler, and it will run faster and consume less energy. It’s not an absolute truth (it sometimes happens that making your code faster makes it consume more energy), but it’s true most of the time.


  • Anafroj@sh.itjust.works to Selfhosted@lemmy.world · Cost-cutting tips?
    9 months ago

    Basically, yes. You can configure most cron implementations to mail each task’s output to you (it’s usually done by setting the MAILTO variable in the crontab, provided sendmail is available on your system).

    I use that to do things like:

    0 9 11 10 * echo 'lunch with John Doe at 12:20'
    

    It sends me a mail, and I can see the upcoming events with crontab -l. If it’s not a recurring event, I then delete the rule.
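    Delivery is driven by that MAILTO variable, so the full crontab might look like this (a config sketch; the address is hypothetical):

```
MAILTO=me@example.com
# at 9:00 on October 11th: cron mails me the echoed line
0 9 11 10 * echo 'lunch with John Doe at 12:20'
```

    Any output a job produces is sent to that address; setting MAILTO="" silences it.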


  • Anafroj@sh.itjust.works to Selfhosted@lemmy.world · Cost-cutting tips?
    9 months ago

    My favorite cost-cutting tip is to avoid big webapps running on Docker, and instead make do with small UNIX utilities (cron instead of a calendar, text files instead of a note-taking app, rsync instead of a Dropbox-like file hosting app, a simple static webserver for file sharing, etc.). This allows me to run my server on a simple Raspberry Pi, with less than 500MB of RAM used on average, and minimal energy consumption. So, total cost of the setup:

    • Raspberry Pi : 77€ x 2 = 154€ (I bought two to have a backup if the first one fails)
    • MicroSD 64gb : 13€ x 2 = 26€ (main and backup)
    • average energy consumption : 0.41€ (2kWh) per month

    With that, I run all the services I need on a single machine, and I have a backup plan covering both hardware and software.

    Getting used to a UNIX shell and to the UNIX philosophy can take some time, but it’s very rewarding: it makes everything simpler (and thus more efficient).


  • I have both a resin printer and an FDM printer, and I can confirm the price difference exists, but it’s not prohibitive (resin costs about 2x PLA). The difference in quality is mind-blowing, though (in favor of the resin printer). If you’re building an army, I assume you will have many pieces? If so, the difference in printing time is also mind-blowing, in favor of the resin printer. The reason is that if you print 10x the same mini on your build surface, with FDM it will take 10x the time of a single mini (the printing head must move to cover each point), while with MSLA (the resin printer) it will take… 1x the time. That’s because each layer is flashed from a PNG image, so all points of a layer are created at the same time.

    On top of that, there are things you can do with resin that you just can’t with FDM, especially because of the supports needed for overhanging parts: if your character has arms, chances are a hand will be lower than the shoulder, which means that when printing from bottom to top, the hand won’t be connected to the body until printing reaches the shoulder, so you need something to support it (a “tower” under the hand, which you will cut off). It’s easy to do with resin, because in a bath of dense liquid Archimedes is your friend and you can build supports at wild angles, but it’s way more difficult in thin air (with FDM).

    Another thing to know, though, is that resin printing is way messier. You will manipulate toxic products that you can’t pour down the sink, you need gear to cover your hands and face, and resin ends up everywhere and is near impossible to clean. But it’s worth it, especially if you’re into minis. :) FDM, on the other hand, is unbeatable for functional prints (because resin prints are quite fragile, and tend not to come out exactly at the scale you designed).



  • Are they even still sold, anyway? I mean, sure, someone who has no printer should buy a more recent one. But that was not the subject here: the question was whether an Ender 3 needed replacing. I certainly would not, personally; it would be throwing out a perfectly good printer for incremental upgrades. Of course, it depends on the usage. For someone who uses their printer professionally to print in series all day, sure, it’s probably worth upgrading. Me? I really don’t care if my prints are slower. I don’t find the Ender 3 hard to get a print right with, either. But I’ve been printing since the wooden Printrbot Simple about a decade ago, so maybe I’m just used to it.


  • I’m using a Pi 4 8GB as my server, with a Pi 4 2GB as backup in case the first one dies. It’s a very classic server, running postfix/courier-imap for mail, lighttpd for web, bind9 for DNS, ergo for IRC, and sqlite3 for databases. I also use fail2ban as an IDS and cron to run tons of various tasks. All of that is hosted on a Gentoo Linux OS.

    The one thing I don’t want to use is Docker. I love Docker for development, or for deploying the main app at work, but it makes managing updates a nightmare when handling multiple services on my server (most of your containers probably contain vulnerable software due to lack of system updates), and it eats resources needlessly. Then again, avoiding it is only possible because I avoid the big webapps that usually need it.



  • “Git hosting” would be more appropriate. Unless by frontend you mean specifically a web frontend, but that would be weird, because forges also provide the web backend part.

    Sourceforge was the biggest FOSS host in the 2000s, before GitHub (mainly because there was not much centralization to begin with). That train is long gone. :) Sure, the name and website Sourceforge still exist. Myspace, Digg and Yahoo do too. They are basically web ghosts, only an echo of what they once were.


  • Actually, I do use git bare repos for CD too. :) The ROOT/hooks/post-update executable can be anything, which lets you go wild: on my laptop, a push to a bare repo triggers a deploy to all the machines needing it (on local or remote networks), by pushing through ssh to other bare repos hosted there, which build and install locally, given they each have their own post-update scripts; all of that thanks to a git push and scripts at the proper paths. I don’t think any forge could do it more conveniently.
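    A minimal sketch of that push-to-deploy pattern, with both "server" and "laptop" sides on one machine (all paths and the branch name are made up; a real hook would build and install instead of just checking out):

```shell
#!/bin/sh
set -e
rm -rf /tmp/cd-demo && mkdir -p /tmp/cd-demo/deploy && cd /tmp/cd-demo

# "server" side: a bare repo whose post-update hook checks the pushed
# branch out into a work tree (this is where a build step would go)
git init -q --bare -b main server.git
cat > server.git/hooks/post-update <<'HOOK'
#!/bin/sh
# runs inside the bare repo after every push
GIT_WORK_TREE=/tmp/cd-demo/deploy git checkout -f main
HOOK
chmod +x server.git/hooks/post-update

# "laptop" side: commit something and push it; the hook deploys it
git init -q -b main src && cd src
git config user.email you@example.com && git config user.name you
echo 'hello' > index.html
git add . && git commit -qm 'initial'
git push -q ../server.git main
# /tmp/cd-demo/deploy/index.html now exists
```

    In real use the push goes over ssh (git push pi@server:/srv/repos/site.git main), but the mechanics are identical.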

    For me, the main interest of forges is to publish my code and get it discovered (before GitHub, getting people to find repos hosted on your blog’s server was a nightmare). Even for collaboration, I could make do with emails. That being said, most people aren’t on top of their inbox, where mails from family are mixed with work mails and commercial spam in one giant pile of unread items, so it’s a good thing for them that we have those issue trackers.






  • Anafroj@sh.itjust.works to 3DPrinting@lemmy.world · Modernizing an Ender 3
    9 months ago

    Has there been that much going on in the market? I’m still using my Ender 3 and I’m not sure what I would add to it; it serves me well (I already added a BL Touch back in the early days when I got it, and a glass bed, although I don’t see much benefit from that last part). It’s just doing the job perfectly. 🤷 That being said, I only use it for functional printing. I much more often use my Elegoo Saturn (a resin printer), as I use it to print my tabletop minis.



  • Anafroj@sh.itjust.works to Fediverse@lemmy.world · Fediverse or Decentralisation?
    10 months ago

    I’m sorry to say it, but you got your definition wrong. “Decentralized” means “which has no center anymore”. ActivityPub is decentralized. The usual criticism of the Fediverse from peer-to-peer networks such as Secure Scuttlebutt or Dat is not that ActivityPub is not decentralized, but that it will eventually “recentralize”, like client/server models tend to do, when one instance captures all the traffic (like Gmail with SMTP; we already see signs of that with mastodon.social, but it’s still very far from being a center). I think that maybe you’ve been exposed to that argument and misunderstood it?

    What you really mean to say is that ActivityPub is not p2p. You can criticize the fact that there is a client/server model behind it, which means that users don’t really own their data and can lose it if the server goes down - that’s a valid criticism.

    To which I would answer that it’s a tradeoff. :) ActivityPub is built on top of HTTP, the well-known protocol the web is built on. This makes it dirt simple to build an ActivityPub app. The difference in adoption rate between SSB, Dat or IPFS and ActivityPub has nothing to do with luck: it’s HTTP and JSON, it’s just simpler (and easier) to build on top of ActivityPub. Not only that, but it’s a W3C standard. Which means, for people like me who have been burnt by building apps on top of the Beaker Browser only to see it abandoned, that we can trust there won’t be any rug pull. That matters.

    And of course, you can also… run your own server (look into self-hosting if you’re interested in that, there’s a vibrant community here on Lemmy about that). If you run your server, then you own your data and the other servers become your peers. The idea that only others (presumably big companies) can have servers is a very centralized way of thinking.



  • They do maintain the simplicity of the line oriented protocol, so I’m fine with that. :)

    That’s the strongest point of IRC, IMO, and why it’s kept so simple: every instruction is a plain text line, period. It makes it incredibly simple to build on top of. You don’t need to introduce a dependency on a project that will probably be abandoned in a few years, at which point you’ll have to rewrite your codebase to use another dependency, for another few years. You just open a TCP connection, read lines from the socket and write lines to it; each line is its own instruction, structured in well-known fields, and that’s it. It’s so simple!
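    A sketch of that simplicity: splitting one IRC message into its fields needs nothing but shell parameter expansion (the sample line is made up):

```shell
#!/bin/sh
# one IRC message = one CRLF-terminated line: [:prefix] COMMAND params [:trailing]
line=':alice!u@host PRIVMSG #chan :hello there'

prefix=${line%% *}       # ":alice!u@host"  (who sent it)
rest=${line#* }
cmd=${rest%% *}          # "PRIVMSG"        (the instruction)
params=${rest#* }        # "#chan :hello there"
target=${params%% *}     # "#chan"
trailing=${params#*:}    # "hello there"    (free-form text after the first colon)

echo "$cmd $target -> $trailing"   # prints: PRIVMSG #chan -> hello there
```

    Writing is the mirror image: printf the same kind of line back to the socket.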

    As long as IRCv3 sticks to that, they have my blessing. :)