Much development is being done at public research universities leveraging government grants. Most of what these companies pay for is packaging, marketing, and distribution
Creating an AD domain carries a substantial amount of extra overhead that they might not want to deal with. The basics of setting one up are simple enough, but actually building out and maintaining the infrastructure the correct way can be a lot of extra work (2 DCs for redundancy, sites configuration, users, groups, initial GPOs). There are also licensing and CAL considerations (bare metal and hypervisor, both different), domain and forest options that can paint you into a nasty corner if you’re not careful, and a whole host of other things to think about and plan around. I’m not arguing that a domain is bad; on the whole I agree 100%. I just like to set the record straight that building a new production domain isn’t as simple as a lot of people would have you believe, and OP might not have the time to go through all that.
I feel like this is legitimately more true than a lot of people think. Say what you want about the average end user, but UX is a HUGE driver with regard to adoption and user uptake. You can have the best of everything else in your application, but if the UX sucks, folks just aren’t going to use it
I kinda disagree with the context comment though. That era of computing was inherently wild - nobody had figured anything out yet beyond the most basic and general strokes, and security analysts (such as they were) had what would be considered a childish understanding of IT security by modern standards. Heck, Windows 95 didn’t even have the TCP stack enabled by default, so when these features were being designed, planned, and coded at Microsoft, there was no context for security on that kind of feature. Wikipedia says that Win95 was in the planning stage in 1992 - I take that with a grain of salt, but the concept is valid. Microsoft was writing the core features of Windows 95 before WAN access was even really a thing. Like I said, I don’t disagree with the idea that AutoRun was a terrible thing among the many terrible things Microsoft is responsible for, but given the era in which AutoRun came out, it was a reasonable trade-off between security and functionality for the lowest common denominator of user. The whole thing should have been disabled (on 95 and 98) when Windows 98 came out, since they should have known better at that point.
I don’t disagree with this statement in general. That said, I don’t know how old you are and whether or not you were really around the home PC space when the AutoRun feature first came to be. I can sort of understand what Microsoft was trying to accomplish with it… the mid-’90s were a wild, lawless time with regard to personal computing. There was a lot of heartburn on the end user side because things were changing so rapidly. Getting them to understand what a “drive letter” was, how to get there, and how to run an application from it (let alone what an application even was) proved challenging even under the best circumstances. The ability to insert a CD into the drive tray and have it “just work” (also a big theme in Win 95/98) was a godsend for a lot of publishers.
Of course, in today’s world, we look at that kind of feature and rightly say “yo, that’s fucking crazy, why would you do that?”, but in the old days it really did help. At the end of the day, it was a useful feature that, like a lot of Windows legacy crap, was left in the OS after its usefulness had gone and just became another attack vector.
Yo, please tag this NSFW… we didn’t come here to see this kind of smut
I’ll also add to this that WSL is a security nightmare. If something manages to dig its way into your WSL install and add, for example, WINE, there’s no end to the (hidden from your AV) mischief it can enact.
I was just saying that there can be a lot of good reasons for downtime. Heck, I use a secondary in my network because sometimes my unraid host starts dnsmasq and it clobbers my adguard container
Depending on the client, it can be. The Microsoft page pretty cleanly defines expected DNS client behavior ([Microsoft Learn](https://learn.microsoft.com/en-us/troubleshoot/windows-server/networking/dns-client-resolution-timeouts#what-is-the-default-behavior-of-a-windows-7-or-windows-8-dns-client-when-two-dns-servers-are-configured-on-the-nic)). There haven’t been any published changes to this that I’ve seen, and it more or less matches my experience. Linux is a lawless land in this respect, but it really boils down to “it might”, so caveat emptor there. That’s also why I suggested a public ad-blocking DNS server as a secondary, in case multicast DNS does its multicast DNS thing
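For anyone curious what that documented Windows behavior looks like in practice, here’s a minimal Python sketch of the failover schedule from that Microsoft page (primary alone with a short timeout, then retries against every configured server with growing timeouts). The server IPs, the `fake_query` stand-in, and the exact timeout values are illustrative assumptions, not the real resolver implementation — the actual client also queries servers in parallel, which this skips for simplicity.

```python
from typing import Callable, Optional

def resolve_with_failover(
    servers: list[str],
    query: Callable[[str, str, float], Optional[str]],
    name: str,
) -> Optional[str]:
    """Rough sketch of Windows-style DNS failover: try the primary
    alone with a 1s timeout, then retry all servers with longer
    timeouts. `query(server, name, timeout)` stands in for a real
    UDP DNS query and returns an answer or None on timeout."""
    # First attempt: primary server only, short timeout.
    answer = query(servers[0], name, 1.0)
    if answer is not None:
        return answer
    # Later rounds: every configured server, growing timeouts
    # (the real client sends these in parallel; we go serially).
    for timeout in (2.0, 4.0):
        for server in servers:
            answer = query(server, name, timeout)
            if answer is not None:
                return answer
    return None

# Demo: the "primary" (a dead local resolver) never answers,
# the public ad-blocking secondary does.
def fake_query(server: str, name: str, timeout: float) -> Optional[str]:
    return "93.184.216.34" if server == "9.9.9.9" else None

print(resolve_with_failover(["192.168.1.10", "9.9.9.9"], fake_query, "example.com"))
# → 93.184.216.34
```

The takeaway for the thread: with a healthy secondary configured, a dead primary costs you a second or two of latency instead of a total outage — which is exactly why a public secondary is a decent safety net.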
No worries, I had the same thought at first and was very confused for a minute
OP already said that their current DHCP solution (the router) can’t push multiple DNS servers. Having a good secondary can be really helpful for things like power blips, maintenance windows, and cats pulling power cables. There are a few solutions that also do ad blocking that can make good secondaries
This would be great except OP said that their router can’t push 2 DNS addresses. Otherwise, ya, redundant services are always best
If you already have Pi-hole in your environment, I would just use that. DHCP is pretty lightweight, so the Pi should be more than capable, and you don’t want to complicate your core services more than you need to
PRV? Not sure what that is or where i would find it
So I’ve already done this one. Exactly according to the link posted a bit further down but no dice. I’m really confused as to why this only seems tied to the hot water though
I use Tandoor myself, but Mealie is also a solid choice
Be careful with the Intel laptop chips and make sure you understand what you’re getting. My work laptop has an i7 with 12 “cores,” but it’s 10 of the low-powered E-cores and 2 of the hyperthreaded P-cores, so for heavy applications (like compiling) it’s a glorified dual-core i3.
I also run an Unraid instance. I just use a regular tower build with an ASRock Rack board (the IPMI is nice), an R5 3600, 32 GB ECC, and 4x4 TB drives. I also have one SSD for VMs and one SSD for a write cache. I think the biggest advantage of Unraid is the simplicity once you grok how storage works regarding parity and how you add container apps. VM management and container management are extremely simple, and the next release (in RC now) is supposed to make ZFS a first-class option.
I’ll second this. ManageEngine does have its faults, but it’s not terrible, patches a few Linux distros, and doesn’t cost a ton.
That’s just their idle animation. Supposedly, if they desync, it’s like a yo-yo until they catch back up