Hello –

I have my DNS with a cloud provider that I want to stop using, and I’ve been considering where to move it (a few domains with a handful of entries each). At some point I started wondering if I should just run it myself. I have two VPSes in different data centers with fixed IP addresses, and from what I’ve read this seems doable. I’m not set on what software to use, but I would like it to run in a container. Does anybody have any recommendations, positive or negative?

Thanks :)

  • IsoKiero@sopuli.xyz · 11 months ago

    It’s doable for sure. You just need a way to sync the data between locations so that every DNS server responds with the same records, but that’s pretty much it. I do that for a (very small) business with ISPConfig, but there are plenty of options around, all the way up to building your own. Whether you should is a harder question. Running a DNS server out in the wild isn’t the most complex thing to do, but it’s also a thing where you can very easily break pretty much everything else you’re running if you mess something up.
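
    Once both servers are up, a quick way to confirm they really do answer with the same data is to query each one directly (the hostnames and domain below are placeholders):

        dig @ns1.example.com example.com SOA +short
        dig @ns2.example.com example.com SOA +short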

    It’s a bit difficult to say if you should. From my point of view, if you really knew what you were getting into you wouldn’t be asking the internet whether you should, and (in general) when someone asks whether they should do something, the answer is more often than not ‘no’. If you know how to write zone files by hand (not that you really need to, it’s just something you can do once you understand DNS and everything related well enough) and you understand how AXFR and loads of other tech works, then sure, go for it. But, and I know I’m repeating myself, asking whether you should is (to me) a sign that you don’t know enough yet.

    • Muddybulldog@mylemmy.win · 11 months ago

      I can see wanting to run your own DNS to serve personal clients for privacy purposes, but for self-hosting-class stuff I can think of plenty of downsides and zero upsides to doing it yourself.

      Definitely a “yeah, you could” vs. “yeah, you should” situation.

  • wiki@lemmy.pt · 11 months ago

    I think it’s pretty doable, but there are some things you should think about:

    • Email delivery dependencies: if you use email addresses on the domains you’re hosting, DNS problems will mean you can’t receive email. That matters if you plan on monitoring your DNS with alerts via email, or if you need to do password recovery via email to access your VPS accounts, for example. Check whether this applies to other services you might be running.
    • You’ll need to understand glue records to set up at least one of your domains (the one the nameservers themselves live in).
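
    If you want to see what the parent zone is actually handing out for your delegation and glue, you can ask one of the TLD servers directly (the domain here is a placeholder; a.gtld-servers.net is one of the .com servers):

        dig @a.gtld-servers.net example.com NS +norecurse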

    For my domains, I’m running nsd on two different VPSes. The way I edit my zones is with a script that converts a shorthand format (that I came up with) into standard zone files and then rsyncs the zone files and nsd configuration (using host aliases declared in my ~/.ssh/config) to both servers. The script then reloads nsd.
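
    The deploy half of that script is tiny. A rough sketch, assuming the zone files are already generated locally and ns1/ns2 are ssh host aliases (paths are whatever you use):

        #!/bin/sh
        # push zone files and nsd config to both servers, then reload nsd
        set -eu
        for host in ns1 ns2; do
            rsync -az zones/ "$host":/etc/nsd/zones/
            rsync -az nsd.conf "$host":/etc/nsd/nsd.conf
            ssh "$host" nsd-control reload
        done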

    I chose nsd because it felt like the simpler option, and I’ve had no trouble so far. I run it directly on my Debian hosts, no containers.

    I have no monitoring, but I should. My terrible excuse is that the infrastructure I’m running is not critical and it’s on the same hosts as my nameservers, so they usually go down together. I wouldn’t put client domain names in there without monitoring.

  • RegalPotoo@lemmy.world · 11 months ago

    It’s super achievable - I’ve run my own DNS for ages, there are a few common pitfalls but overall it’s pretty low maintenance.

    • Personally I use PowerDNS, but you could also use something like BIND. I find PDNS to be a little easier to configure
    • Make sure you are looking at the docs for PowerDNS Authoritative, not PowerDNS Recursor
    • You install PDNS Authoritative on both servers, then designate one as a primary (/master) and the other as a secondary (/slave/replica). You create records on the primary, and configure it to replicate the records to the secondary using AXFR (there’s a minimal config sketch after this list)
    • I’d recommend using one of the database backends for PDNS - personally I use PostgreSQL. SQLite is simpler to set up, but I’ve had issues where making multiple updates over the API causes errors due to locking
    • DNSSEC is a bit fiddly to set up initially, but doesn’t add much operational overhead once it’s running
    • Take a look at glue records if you want to host the domain that the nameservers themselves use
    • Once you’ve got things running, consider something like https://ns-global.zone as a backup
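
    A minimal sketch of the primary/secondary split in pdns.conf - IPs and credentials are placeholders, and the setting names assume a reasonably recent PowerDNS Authoritative (older releases call them master/slave):

        # primary
        launch=gpgsql
        gpgsql-host=127.0.0.1
        gpgsql-dbname=pdns
        gpgsql-user=pdns
        gpgsql-password=changeme
        primary=yes
        allow-axfr-ips=192.0.2.2/32
        also-notify=192.0.2.2

        # secondary
        launch=gpgsql
        gpgsql-host=127.0.0.1
        gpgsql-dbname=pdns
        gpgsql-user=pdns
        gpgsql-password=changeme
        secondary=yes

    On the secondary you then add each zone as a secondary zone pointing at the primary (on recent versions that’s pdnsutil create-secondary-zone example.com 192.0.2.1), and AXFR takes care of the rest.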

    Feel free to ping me if you have questions or need help getting things set up

    • lidstah@lemmy.sdf.org · 11 months ago

      Kudos for mentioning PowerDNS, it’s an amazing piece of software :)

      One thing I love about PowerDNS is the variety of backends available, notably the PostgreSQL and MariaDB/MySQL ones. Only the primary PowerDNS instance modifies the database records; the secondary instances just read from the database (the primary DB or its replicas). Thus, no real need for AXFR: as soon as you add or modify a record on the primary, the secondary pdns servers see it in the database.

      The pdnsutil CLI tool is also really convenient, and the PowerDNS API is a godsend when you need to automate stuff for thousands of domains and hundreds of thousands of records. There’s also a nice third-party web UI (powerdns-admin, docker image: pdnsadmin/pda-legacy). Bonus: Terraform has a powerdns provider.
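
      To give a taste of both (the zone and record names are made up, and the API call assumes api=yes plus an api-key in pdns.conf):

          # pdnsutil: create a zone and add a record
          pdnsutil create-zone example.com ns1.example.com
          pdnsutil add-record example.com www A 203.0.113.10

          # HTTP API: list zones on the local server
          curl -s -H "X-API-Key: changeme" http://127.0.0.1:8081/api/v1/servers/localhost/zones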

      At work we use dnsdist (also from PowerDNS) to load-balance between our PowerDNS instances (with caching!), and to filter out, rate-limit or temporarily ban bad actors (DNS laundering, record enumeration and the like, for example).

  • voidf1sh@lemm.ee · 11 months ago

    I use BIND 9 on an OCI compute instance for all my DNS needs. I don’t run it in a container, but I’m sure there’s a container image of it, or you could set one up yourself.

  • wwwwhatever@lemmy.omat.nl · 11 months ago (edited)

    You can use BIND or any other authoritative nameserver.

    But this is one of the things you might want to reconsider. Setup errors can slip in silently and be hard to diagnose, and complying with standards like DNSSEC and IPv6 support on the nameserver can be a challenge without experience.

    On top of that, you probably can’t register the domain itself without a third party, and I always advise against using a different party for the nameservers than the one that registered the domain.

    Last point I want to bring up: I would advise against combining nameservers with other services. DNS is crucial for operating those services, so you would be creating one giant point of failure. Keep it separated, on separate hardware.

    That said, if you accept all these dangers, it’s technically doable. Open the right ports, configure the zone, set up a master and a slave, read up on glue records, register the nameservers if needed, set up DNSSEC, and point the domain at the correct nameservers at the party where you registered it.
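
    Since BIND came up, the zone part of that looks roughly like this in named.conf on both servers - IPs and paths are placeholders, and older BIND releases spell it master/slave/masters instead of primary/secondary/primaries:

        // on the master
        zone "example.com" {
            type primary;
            file "/etc/bind/zones/db.example.com";
            allow-transfer { 192.0.2.2; };   // the slave's address
            also-notify { 192.0.2.2; };
        };

        // on the slave
        zone "example.com" {
            type secondary;
            primaries { 192.0.2.1; };
            file "/var/cache/bind/db.example.com";
        };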

    • solberg@lemmy.blahaj.zone · 11 months ago

      and I always advise against using a different party for the nameservers than the one that registered the domain.

      Why is that? I register most of my domains at Porkbun, but I usually use Cloudflare’s nameservers as they seem to support more record types, have more features, and have a better UI than most registrars’ offerings.

  • johnnychicago@lemmy.world (OP) · 11 months ago

    Hey - lots of good points, thanks. I think I’ll give this a try on a less-used domain of mine, just to get a practical feel for it. I do appreciate the arguments against, but to an extent, if both my VPSes are down, so is basically everything served by the domain. I will have to make sure monitoring is taken care of, and my email address is completely separate from all of this.

    Knot seems to have a current Docker image maintained by the project, so I’ll give that one a try and see how it goes. Stay tuned for me coming back crying and repenting in a few days’ time, I guess :)
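
    For reference, my plan is roughly the following - the image name and mount path are just what I gathered from a quick look at Docker Hub, so double-check them before copying:

        docker run -d --name knot \
            -v /srv/knot:/config \
            -p 53:53/udp -p 53:53/tcp \
            cznic/knot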

    If worst comes to worst, I can always go back to where I came from, and it will have been a learning experience.

  • solberg@lemmy.blahaj.zone · 11 months ago

    I’ve had success with PowerDNS (PDNS). I like that it comes with an HTTP API, and it has integrations with Caddy and other ACME clients if you need that.

  • HousePanther@lemmy.goblackcat.com · 11 months ago

    Check out Unbound. I’m sure there’s a Docker image available for it. Unbound used to be recursive-only, but it now has support for both recursive and authoritative DNS. The docs for it are good and there are plenty of examples. That said, I’m curious why you want to do this. DNS is a really critical service, and chances are your cloud-based service will be more reliable, with faster resolution times. I’m very pro-self-hosting, and even I don’t do this myself. I don’t do email myself either.
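
    One simple way to answer for your own names from Unbound is local-zone/local-data in unbound.conf (the names and address below are placeholders) - it’s a lighter-weight mechanism than a full authoritative server, but fine for a handful of records:

        server:
            local-zone: "example.com." static
            local-data: "example.com. IN A 203.0.113.10"
            local-data: "www.example.com. IN A 203.0.113.10"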

  • gnzl@nc.gnzl.cl · 11 months ago

    I recommend NSD or Knot for strictly authoritative servers. BIND is great too, but it’s built to do both authoritative and caching DNS, which makes it a bit too “big” for the task of serving only authoritative data. You can definitely configure BIND to serve only authoritative data, though.
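
    For reference, a bare-bones authoritative setup in NSD is just a couple of stanzas in nsd.conf - the zone name, path and the secondary’s address below are placeholders:

        server:
            ip-address: 0.0.0.0

        zone:
            name: "example.com"
            zonefile: "/etc/nsd/zones/example.com.zone"
            provide-xfr: 192.0.2.2 NOKEY
            notify: 192.0.2.2 NOKEY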

    I can’t comment on running from a container; I’ve always worked with NSD/Knot/BIND built directly from source.