what should I use as my base DN?
I posted this a while ago about LDAP basics: https://lemmy.world/comment/1539633
The base DN is usually the DN under which your user accounts (`inetOrgPerson`s) can be found. In my case it is `ou=users,dc=example,dc=org`.
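If you want to verify what actually lives under a base DN, `ldapsearch` is the quickest check. A sketch, assuming a local server and anonymous read access (adjust `-H`, and add `-D`/`-W` for an authenticated bind if needed):

```shell
# List user accounts under the base DN, showing only cn and mail attributes
ldapsearch -x -H ldap://localhost \
  -b "ou=users,dc=example,dc=org" \
  "(objectClass=inetOrgPerson)" cn mail
```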
Keycloak is nice, but probably overkill for what OP needs. Keep it simple.
I use OpenLDAP + LDAP Account Manager and Self Service Password, deployed/managed through this ansible role.
I want to look into apt-cacher-ng for learning purposes, to stop 10s of VMs in my homelab from adding load to Debian official repos, and also to check if there is a way to have it only mirror a list of “approved” packages.
I saw a huge time improvement even though I have a good internet connection.
Note that for best performance you should use https://deb.debian.org/
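For reference, pointing a client VM at an apt-cacher-ng instance is a one-file change (the hostname here is an example; 3142 is apt-cacher-ng's default port):

```shell
# On each VM: route APT traffic through the cache
echo 'Acquire::http::Proxy "http://apt-cacher.lan:3142";' \
  > /etc/apt/apt.conf.d/00aptproxy
```

Note that HTTPS repositories are tunneled through the proxy rather than cached, unless you remap them on the apt-cacher-ng side.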
Semi-related I have set up a personal APT repository on gitlab pages: https://nodiscc.gitlab.io/toolbox/ (I think Ubuntu users would call that a “PPA”). It uses aptly and a homegrown Makefile/Gitlab CI-based build system (sources/build tools are linked from the page). I wouldn’t recommend this exact setup for critical production needs, but it works.
JBOD here just means “show me this bunch of old drives as a single drive/partition”. It’s just a recommendation to at least get something out of these drives - but don’t use this as backup storage, these drives are old and if a single one fails, you lose access to the whole array.
If you’re not sure what to do with them, just get a USB/SATA dock or adapter, and treat them as old books: copy not-so-valuable stuff on them, and store them in a bookshelf with labels such as “Old movies”, “Wikipedia dumps 2015-2022”…
Definitely get a good, new drive for backup storage. And possibly another one for offsite backups.
Don’t use a synchronized folder as a backup solution (delete a file by mistake on your local replica -> the deletion gets replicated to the server -> you lose both copies).
old pc that has 2x 80gb, 120gb, 320gb, and 500gb hdd
You can make a JBOD array out of that using LVM (add all disks as PVs, create a single VG on top of them, create a single LV spanning the whole VG, create an ext4 filesystem on that LV, mount it somewhere, and access it over SFTP or another file transfer protocol).
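The steps above boil down to a few commands. Device names are examples - double-check yours with `lsblk` first, because this wipes the disks:

```shell
# WARNING: destroys existing data on the listed disks
pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde        # register disks as PVs
vgcreate jbod /dev/sdb /dev/sdc /dev/sdd /dev/sde   # one VG spanning all disks
lvcreate -l 100%FREE -n data jbod                   # one LV using all free space
mkfs.ext4 /dev/jbod/data                            # ext4 filesystem on the LV
mkdir -p /mnt/jbod && mount /dev/jbod/data /mnt/jbod
```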
But if the disks are old, I wouldn’t trust them as reliable backup storage. You can use them to store data that will be backed up somewhere else. Or as an expendable TEMP directory (this is what I do with my old disks).
My advice is get a large disk for this PC, store backups on that. You don’t necessarily need RAID (RAID is a high availability mechanism, not a backup). Setup backup software on this old PC to pull automatic daily backups from your server (and possibly other devices/desktops… personally I don’t bother with that. Anything that is not on the server is expendable). I use rsnapshot for that, simple config file, basic deduplication, simple filesystem-backed backups so I can access the files without any special software, gets the job done. There are a few threads here about backup software recommendations:
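To give an idea of what "simple config file" means for rsnapshot, here is a minimal pull-backup sketch (hostname and paths are made up; note that rsnapshot requires tabs, not spaces, between fields, and pulling over SSH needs the `cmd_ssh` line enabled in the full config):

```shell
# Write a minimal rsnapshot config (excerpt) - fields MUST be tab-separated
cat > /etc/rsnapshot.conf <<'EOF'
config_version	1.2
snapshot_root	/var/backups/rsnapshot/
retain	daily	7
retain	weekly	4
# pull /etc and /home from the server over SSH
backup	root@myserver.lan:/etc/	myserver/
backup	root@myserver.lan:/home/	myserver/
EOF
rsnapshot configtest   # sanity-check the configuration before scheduling it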
In addition I make regular, manual, offsite copies of the backup server’s `backups/` directory to removable media (stash the drive somewhere where a disaster that destroys the backup server will not also destroy the offsite backup drive).
Prefer pull-based backup strategies, where hosts being backed up do not have write access to the backup server (else a compromised host could alter previous backups).
Monitor correct execution of backups (my simple solution to that, is to have cron create/update a state file after correct execution, and have the netdata agent check the date of last modification of this file. If it has not been modified in the last 24-25hrs, something is wrong and I get an alert).
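That check boils down to something like this shell function (a sketch - the state file path and threshold are examples, and this uses GNU `stat`/`date`):

```shell
# backup_check STATE_FILE [MAX_AGE_SECONDS]
# Prints OK if the state file was touched recently, CRITICAL otherwise.
backup_check() {
  state_file="$1"
  max_age="${2:-90000}"  # 25 hours in seconds, by default
  [ -f "$state_file" ] || { echo "CRITICAL: $state_file missing"; return 2; }
  age=$(( $(date +%s) - $(stat -c %Y "$state_file") ))
  if [ "$age" -gt "$max_age" ]; then
    echo "CRITICAL: backup state file is ${age}s old (max ${max_age}s)"
    return 2
  fi
  echo "OK: backup state file is ${age}s old"
}
```

The cron side is then just `rsnapshot daily && touch /var/lib/backup/last-success` - the state file only gets updated when the backup actually succeeded.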
deleted by creator
File synchronization is not a backup.
USB tethering between home server and cellphone with a cheap data plan. Set up iptables rules/default routes on the server and other devices on my LAN, to route traffic to the Internet through the server and the USB modem/phone. Call the ISP and wait 3 months for them to unfuck the phone/fiber pole trashed by a tractor. Keep paying for service while it is down. Keep calm and carry on, at least I got a backup Internet access.
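For the curious, the routing part was roughly this (interface names and addresses are examples, not my actual setup):

```shell
# On the server: the phone shows up as usb0 once USB tethering is enabled
dhclient usb0                                  # get an address from the phone
ip route replace default dev usb0              # send outbound traffic via the phone
sysctl -w net.ipv4.ip_forward=1                # let the server forward LAN traffic
iptables -t nat -A POSTROUTING -o usb0 -j MASQUERADE  # NAT LAN traffic out usb0

# On other LAN devices: use the server as the default gateway
ip route replace default via 192.168.1.10      # server's LAN address (example)
```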
I don’t need to access this server from outside (and it wouldn’t work as the mobile Internet plan uses CGNAT), just to have the laptop or phone on the same LAN once in a while to let Nextcloud sync do its thing (essential files, Keepass database…). I suppose I could set up a wireguard tunnel between the home server and my cheap VPS, and access it from there, I just don’t have the need for it.
You’re probably not gonna get metadata
You can do it using the `--write-info-json` option [1] and https://github.com/ankenyr/jellyfin-youtube-metadata-plugin, which reads metadata from yt-dlp’s `.info.json` files and displays it in Jellyfin.
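For example (the output path template is just an illustration, and `VIDEO_ID` is a placeholder):

```shell
# Download the video plus its metadata as a .info.json file next to it
yt-dlp --write-info-json \
  -o "~/videos/youtube/%(uploader)s/%(title)s.%(ext)s" \
  "https://www.youtube.com/watch?v=VIDEO_ID"
```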
This is what I do - except I don’t use a Web UI, but a script that downloads videos I bookmark on my shaarli instance [1]. Having a local copy of my bookmarked videos is nice (but takes quite a bit of disk space)
Any journaled filesystem is mostly fine (e.g. good old ext4).
Same as you, if power goes down for a long time I have bigger problems than not being able to access my home server. Guess I could still hook it up to my car battery and DC->AC converter if I really wanted to, and use my phone as 4G modem/backup internet access.
I maintain my own Debian-based live image. It’s a general-purpose desktop, with a good amount of diagnostic/troubleshooting tools. It’s quite easy to build your own using different package lists or default configuration, etc.
Now that dendrite is basically feature-complete, I’m curious when was the last time you used it? I remember having issues with bridges one or two years ago.
About that time, yeah, ~1 year ago.
I needed a full replacement for RocketChat (ditched RC for many reasons: unaddressed security/privacy issues, painful and frequent major version upgrades, dependency on mongodb, corporate-driven development/removal of security features from the community edition, no lifecycle/EOL policy…), so I needed proper file upload/audio/video chat integration. Currently using the jitsi-meet integration, but might switch to element-call someday… In this regard my current setup appears to work well, so there’s no incentive to change.
I also wanted to set up a few bridges. I started implementing the IRC bridge but didn’t go very far (tried going off the beaten path and making it work with podman, it might take a while). The steam chat bridge is also planned, but it doesn’t appear to be very well-maintained and I’m afraid it will break without warning. The signal bridge looks OK.
Currently I’m juggling between clients for all these different chat networks, feels like it’s 2002 again.
Ansible role to deploy/maintain Synapse + Element-web here if you’re interested.
I have tried a few other matrix servers (dendrite and conduit), something always ended up not working because they don’t implement everything synapse (the reference server) does, or there were bugs - generally audio/video calling or file transfer would break. Synapse worked out of the box. It also has good documentation.
I don’t see any performance problems or abnormal resource usage with synapse either. As I said I don’t use it that much, so maybe there is something nasty I didn’t see yet. From what I’ve read, it is only a problem when you federate with “large” instances/rooms, but my server is not federated, it’s just a basic private chat server.
Wait until you hear about mod_md
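For those who haven’t heard of it: mod_md lets Apache obtain and renew Let’s Encrypt certificates itself, no certbot needed. A minimal sketch (the domain and file paths are examples):

```shell
# Enable mod_md and declare a managed domain - Apache handles the ACME
# challenge and certificate renewal on its own
cat > /etc/apache2/conf-available/md.conf <<'EOF'
MDomain example.org
MDCertificateAgreement accepted
EOF
a2enmod md ssl
a2enconf md
# In the VirtualHost, just turn on SSL - no SSLCertificateFile needed:
# <VirtualHost *:443>
#   ServerName example.org
#   SSLEngine on
# </VirtualHost>
systemctl reload apache2
```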
It’s because there is widespread interest in a full chat application which includes E2EE, fancy web UI, video conferencing, and integrations with other platforms. Mumble is none of these, it’s a rock-solid, old-school, efficient VoIP server with thick clients (desktop and mobile, no working web clients as far as I know). Basic text chat without persistence. Not many changes to the codebase or new features in the last few years. There’s nothing very “novel” about it, it just works and is extremely easy to install. This is easily one of my most used services.
Edit `/etc/mumble-server.ini` and (at least) set a superuser password and a normal user password. This is my ansible role for mumble installation and management.
The Mumble wiki has all the info about other topics.
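On Debian the quickstart looks roughly like this (the password value is a placeholder - pick your own):

```shell
apt install mumble-server
# Set the SuperUser password interactively (Debian configuration helper)
dpkg-reconfigure mumble-server
# Require a password from regular users before they can connect
sed -i 's/^serverpassword=.*/serverpassword=changeme/' /etc/mumble-server.ini
systemctl restart mumble-server
```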