• 7 Posts
  • 24 Comments
Joined 1 year ago
Cake day: June 18th, 2023

  • For MIT/Apache it doesn’t matter. That’s always a problem with those free-to-use licenses: you have a “good idea” of who’s using it, but you can never really tell. It also creates a ton of wasted improvements every time a company uses it, mothballs the project, and never pushes code upstream, because why do that? \s So you sit back and hope that someone in the company feels a strong enough moral drive or obligation to contribute their improvements upstream. But how can you tell definitively? You can sometimes see it in the job descriptions they’re hiring for, and I’ve had companies reach out to me personally for help. Many open source projects will also reach out and ask, and if they get the OK, they’ll put it in the project description to encourage other companies to do the same.

    So why do companies bother? The funny thing about open source is that it lets people who like solving tough problems (the best type of engineers) know where the tough problems are actually being solved, because here’s the code, and here’s the author from xyz company contributing and showing the rest of the world how it’s done. That will often bring engineers who are at the top of their game to those companies.








  • The problem with this is that companies like rabbitai are exploiting our inherent drive to teach, which exists to pass on knowledge and make society and life better for the next generation and ourselves (in this case, code reviews). That doesn’t work here because you’re not actually helping another person who will reciprocate down the line. You’re helping a large company, which has no moral values and doesn’t operate in society with the same values as a human being. To me a code review is more than just pointing out mistakes; it’s also about sharing knowledge and having a meaningful dialogue about what makes sense and what doesn’t.

    There’s no doubt that AI is an amazing achievement, but it seems to me that every application of this technology that involves human interaction manages to simultaneously exploit and erase the core “humanness” of the interaction. I think that’s because these applications are purely monetarily driven, not aimed at advancing our society. OpenAI had the right idea to start with, but they have sunk into the same trope in lockstep with the Googles, Apples, and Amazons of the world. Imagine if one of these large companies, say Google, had been given money by the US government to create the ARPANET and then went on to use the technology only for profit. Would we really be in the same connected world we are now?










  • I second this. We have had a Synology NAS for over 10 years (I de-Googled a long time ago) and have had virtually no problems. I did need to transition to the new “Photos” app when we upgraded (after 7 years), which was a bit annoying, but I know that none of our kids’ baby pics, our wedding pics, our life in general is being scraped or stored on a server under a terms-of-service agreement we basically have no control over.








  • I think this is a good conversation to have. I’m assuming there are no security checks to make sure instances connecting to each other are running legitimately released code that has been reviewed by the community? I’m also curious whether you could run a malicious instance that gathers far more information about your users than is necessary, or that uses security holes to gather information from other instances. That could send this entire experiment down the toilet very fast. HTTPS, for instance, guarantees that you are connecting to the server it claims to be and that its identity is vouched for by a trusted source (a sketch of that identity check is below). At the very least it would be nice to have control over your credentials and history, and only release them to trusted instances.
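To make the HTTPS point concrete, here is a minimal Python sketch of the identity guarantee TLS gives you: the peer must present a certificate, signed by a CA your system trusts, whose name matches the hostname you asked for. This is only an illustration of that transport-level check, not Lemmy’s actual federation code, and the hostname lemmy.example.org is hypothetical. Federation-level trust (vetting what code an instance runs or what data it shares) would still have to be layered on top of something like this.

```python
import socket
import ssl

def check_instance_identity(hostname: str, port: int = 443) -> dict:
    """Connect to an instance over TLS and return its verified certificate."""
    context = ssl.create_default_context()   # loads the system CA store
    context.check_hostname = True            # cert must match the hostname
    context.verify_mode = ssl.CERT_REQUIRED  # an unverifiable cert raises an error

    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            # Reaching this point means the handshake succeeded and the
            # peer proved it controls the name we dialed.
            return tls.getpeercert()

if __name__ == "__main__":
    # "lemmy.example.org" is a hypothetical instance hostname.
    cert = check_instance_identity("lemmy.example.org")
    print("issuer:", cert.get("issuer"))
    print("valid until:", cert.get("notAfter"))
```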


  • I started out where everyone else did and worked my way up, so I’ve “been in the trenches.” After doing this for 20 years and shipping multiple consumer and internal products, I’ve seen it all and know what can make or break a project, and what works and what doesn’t when introducing a new technology to a dev group. And I definitely don’t throw it over the fence; it’s a team effort and we all agree on what sounds like the best approach. Along with code reviews, part of the coding I discussed is sitting down and creating a skeleton of tests and an initial architecture that others can build on and give me feedback on. If someone is having trouble implementing something, I sit down with them and work through it. It’s also about trust: people trust me and know that, in general, I know what I’m talking about.

    The thing is, most people would read my resume (or even this quick summary) and say I’d make a great development manager. But the problem with being pigeonholed into management just because you’re a great dev is that it doesn’t reflect what developers are good at: making software. More and more companies are realizing this after shoving their best engineers into management, crushing their souls, and driving them to leave. So they are creating principal or staff positions, which at most companies are laterally equivalent to a director of software engineering without the people management. There’s a great podcast episode on this by Stephen Dubner, who co-wrote the book Freakonomics: https://freakonomics.com/podcast/why-are-there-so-many-bad-bosses/