• 0 Posts
  • 28 Comments
Joined 1 year ago
Cake day: July 24th, 2023

  • Backwards compatibility - yes I agree, it’s quite good at it.

    Hardware-specific issues for any OS - disagree. For windows, 80-90% of that is handled by the hardware manufacturers’ drivers; whether issues get fixed or not isn’t down to any effort from Microsoft. For Linux it’s usually down to the maintainers, and if anything, Linux is famous for supporting old hardware that windows no longer works with.

    But the point I was making is not that Linux or osx is better than windows or vice versa - it’s that windows holds by far the largest desktop market share and neither of the alternatives is really a drop-in replacement. So in the end there’s no pressure on them to improve UX, since switching OS is infeasible for the majority of their users at the moment.



  • Aside from the effort required that others have mentioned, there’s also an effect of capitalism.

    For a lot of their tech, they have a near-monopoly or at least a very large market share. Take windows from Microsoft. What motivation would they have to fix bugs which impact even 5-10% of their userbase? Their only competition is linux, with its roughly 4(?)% market share, and osx, which requires expensive hardware. Not fixing the bug just makes people annoyed, but 90% won’t leave because they can’t. As long as it doesn’t impact enterprise contracts it’s not worth fixing, because the time spent doing that is a loss for shareholders, while new features that collect sellable data (like copilot, for example) generate money.

    I’m sure the devs in most places want to make better products and fight management for more time so features can ship at better quality - but it’s an exhausting, steep uphill battle which never ends, and at the end of the day the person who made the broken feature with data collector 9000 built in will probably get the promotion, while the person who fixed 800 5+ year old bugs gets a shout-out on a zoom call.



  • Not a lawyer, but in the scenario where proton closed the source but kept offering the build, even if gpl3 still applies, they’re the only copyright holder (no outside contributions) - so it’d only give them grounds to sue themselves?

    From gnu.org:

    The GNU licenses are copyright licenses; free licenses in general are based on copyright. In most countries only the copyright holders are legally empowered to act against violations.


  • I haven’t used tailscale, so I can’t say how well it works, but as a current zerotier user I’ve been considering moving away from it.

    I actually love the idea and it’s super simple to set up, but it has some very annoying pitfalls for me:

    1. It’s a lot of “magic”. When it fails to work the zerotier software gives you very little information on why.
    2. The NAT tunneling can be iffy. I had it fail on some public WiFi networks, and it occasionally failed on mobile internet (same phone and network that otherwise work). Restarting the app, reconnecting and so on can often help, but it’s not super reliable IMO.
    3. Just recently I had to uninstall the app, restart my Mac, and reinstall the app to get it to work again - there were no changes that made it stop, it just decided it had had enough from one day to the next, and as in point 1, it doesn’t tell you much beyond whether it’s connected or not.

    Pretty much all of the issues I’ve had were with devices that have to disconnect from and re-connect to the network and/or move between different networks (like a laptop or phone). On my router, it’s been super stable. Point is, your mileage may vary - it’s worth trying, but there are definitely issues.



  • I have no experience with this, but I happened to see an interview with Ludwig Minelli, the founder of Dignitas (an organisation for assisted death). The man is 90+ and still fighting for this right. I believe I originally saw it in video format, but I think this was the interview - I think it’s worth a read.

    I’d suggest looking up the contact details for the various organisations and reaching out with your situation and questions to see what they say. They’re likely to be much better sources of information.


  • Maybe set up a script that runs locally and pings an external service like 1.1.1.1 or 8.8.8.8 every second, to see whether the connection survives during the windows when your services alert? Perhaps it’s your modem refreshing some config, causing a blip for a few seconds, or something similar. If this doesn’t show failures, at least you can rule out your internet fully going out.

    The other side of this would also be useful: run a similar check against different levels of your home network to see how far down the failure goes (e.g. ping your router, expose some simple TCP echo service on the server running all this and nc it, curl the status page of the reverse proxy (or set up a static page in it), curl the app behind the reverse proxy - just make sure to use firewall rules for this and don’t put everything on the internet). Where it fails should hopefully give you some idea to go on - there’s a rough sketch of the first couple of checks below.

    Maybe also set up https://www.thinkbroadband.com/broadband/monitoring/quality/ to see if it registers any packet loss or increased latency at those times (although I’d still do the above as well).
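
    To make the above concrete, something like this is roughly what I mean - just a sketch, not a drop-in script: the router IP (192.168.1.1), the reverse proxy status URL and the log path are placeholders for whatever your setup uses, and the ping/curl flags assume Linux.

    #!/usr/bin/env bash
    # Rough connectivity probe: every second, check one external and two internal targets
    # and log a timestamped line whenever one of them stops responding.
    # 192.168.1.1 and the status URL below are placeholders - swap in your own values.
    LOG="$HOME/connectivity.log"

    while true; do
        ts="$(date -Iseconds)"
        ping -c 1 -W 2 1.1.1.1 > /dev/null 2>&1 || echo "$ts external ping failed" >> "$LOG"
        ping -c 1 -W 2 192.168.1.1 > /dev/null 2>&1 || echo "$ts router ping failed" >> "$LOG"
        curl -fsS -m 2 http://192.168.1.10/status > /dev/null 2>&1 || echo "$ts reverse proxy check failed" >> "$LOG"
        sleep 1
    done

    Leave it running for a day or two and compare the timestamps in the log against the times your services alert.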




  • I don’t know if there are agencies focussing on this, but in general it probably comes down to the company more than the agency. It’s probably worth filtering for companies that offer flexible hours in the job description.

    I would say the IT job market is incredibly competitive for candidates at the moment, so it might be even more difficult to find truly flexible roles when companies can so easily find hundreds of people who’ll just work regular hours.

    On your last question: I’ve been a hiring manager at 2 companies (although in the UK) for software engineers and adjacent roles (like devops, platform, QA), and I would not care whether someone needs equipment. In the grand scheme of things, spending $800 on a monitor, keyboard and mouse is barely a drop in the bucket compared to the cost of an employee. What I would want to know is how you’d work in a team in your situation, and what arrangement we could come to where you have a good experience but other people in the company can still count on you. E.g. if you’re working on a project and an issue pops up that’s blocking others from progressing and we need you in a discussion, but you’re having a bad day and not working - what options can you offer? Or what if you get blocked when everyone else is asleep, so you can’t progress?

    I think being prepared and upfront about this at an early stage of interviewing would be ideal: it signals that you’ve thought about the people around you, and it also weeds out any companies who aren’t willing to make this arrangement work. That being said, as above, it’s a very competitive market right now, so chances are pretty slim (at least in the UK).

    Also keep in mind that once you look at companies who hire from abroad, you’re also competing with (comparatively) cheap labour from developing countries, where candidates will likely agree to much worse terms.

    Edit: one thing I forgot - depending on your skill level, you may have the option to be your own boss and freelance on a per-project basis rather than a per-day basis.



  • I wonder if this will also have a reverse tail end effect.

    Company uses AI (with devs) to produce a large amount of code -> code is in prod for a few years with incremental changes -> dev roles rotate or get further reduced over time -> company now needs to modernize and change a very large legacy codebase that nobody really understands well enough to even feed it into the AI -> now hiring more devs than before to figure out how to manage a legacy codebase 5-10x the size of what the team could realistically handle.

    Writing greenfield code is relatively easy; maintaining it over years, keeping it up to date and well understood while bending it to every new requirement - now that’s hard.


  • I think I misunderstood your problem - I assumed the issue was the volume mounts, and after testing it I was indeed wrong: the docker cli now accepts relative paths, so your original command does the same as what I suggested. After re-reading your issue I have a different idea of what’s wrong, but I’d have to see your dockerfile (or have you confirm) to be sure.

    Do you add 10f.py to the docker image when you build it, and do you specify the command/entrypoint in the Dockerfile? There are two possible issues I can think of with how you do that (although considering the docker compose works, it’s probably the 2nd):

    1. You do add it, and you add it to /data in the image - mounting a volume over it makes the script no longer exist in the container.
    2. You do add it and it’s not in /data - in this case the issue with running docker run -v ./:/data -w /workdir tenfigers_10f:v1 10f.py is the last bit: you override the command, which makes it try to look for the script at /data/10f.py. If you omit the last part (10f.py), it should run whatever the original command was, and assuming you set the cmd/entrypoint correctly in the Dockerfile, the script should see /data as ./ in python (rough sketch of that invocation below the list).

    (Also, when you run it with the CLI you might want to add -it --rm to the docker command, otherwise it won’t really behave like a regular command.)
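
    For the 2nd case, the invocation would look something like this - just a sketch, assuming your Dockerfile bakes the script in somewhere outside /data, sets the cmd/entrypoint to run it, and has WORKDIR /data (all assumptions on my part, since I haven’t seen the Dockerfile):

    # note: no trailing 10f.py, so the image's own cmd/entrypoint runs
    docker run -it --rm -v "$(pwd)":/data tenfigers_10f:v1

    With the command left off, whatever you mounted at /data is what the script sees as ./ at runtime.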


  • It works in docker compose because compose resolves relative paths for the volumes; the docker CLI doesn’t.

    You can achieve this by doing something like

    docker run -v $(pwd):/data ...
    

    pwd is a command that returns the current path as an absolute path - you can run it by itself to see this. The $() syntax executes the inner command first and substitutes its output before the shell runs the rest. (Same as backticks, just better practice.)

    I imagine that wouldn’t work on windows, but it would on osx, Linux or wsl.

    Generally speaking, if you need file system access and your CLI requires some setup, I’d recommend either writing it in a statically compiled language (e.g. golang, rust) or researching how to compile a python script into an executable.
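
    If you go the python route, PyInstaller is one common option - a rough sketch, using 10f.py as the example script:

    pip install pyinstaller
    pyinstaller --onefile 10f.py
    ./dist/10f    # the bundled single-file executable

    The result bundles the interpreter, so it runs without your python environment being set up on the target machine, though you still need to build it per OS/architecture.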

    If you’re just mounting your script into the container, you’re better off adding it directly at build time.