• 0 Posts
  • 6 Comments
Joined 1 year ago
Cake day: June 15th, 2023

  • Unity did a bad thing, but the stock sale here is a complete non-event.

    According to Guru Focus, Unity CEO John Riccitiello, one of the highest-paid bosses in gaming, sold 2,000 Unity shares on September 6, a week prior to its September 12 announcement. Guru Focus notes that this follows a trend, reporting that Riccitiello has sold a total of 50,610 shares this year, and purchased none.

    He receives and sells stock constantly, as do most execs of publicly traded companies. The majority of their compensation is stock, which incentivizes them to maximize the stock price, since a higher price means more money RIGHT NOW for them. Look up any publicly traded company and peek at its insider trading info. Here's Microsoft as a random reference, and here's Unity so you can see everyone else and the long-term trends.
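    If you want to pull this data yourself, it's all free on SEC EDGAR. A minimal sketch, assuming Python with the requests library; the CIK here is Microsoft's (0000789019), and the User-Agent is a placeholder you'd swap for your own contact info, as the SEC asks of automated clients:

    ```python
    # Fetch the most recent Form 4 (insider transaction) filings for a company
    # from SEC EDGAR. CIK 0000789019 is Microsoft; swap in any company's CIK.
    import requests

    url = (
        "https://www.sec.gov/cgi-bin/browse-edgar"
        "?action=getcompany&CIK=0000789019&type=4&owner=include&count=10&output=atom"
    )
    # Placeholder identity; the SEC asks scripts to identify themselves.
    headers = {"User-Agent": "research-script your-email@example.com"}

    resp = requests.get(url, headers=headers)
    resp.raise_for_status()
    print(resp.text[:500])  # Atom feed listing the latest insider filings
    ```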

    The piece cites Guru Focus as its source for this info as if they have some keen inside information, but it's literally public data that anyone with an internet connection can look up, since these disclosures are required of insiders at publicly traded companies. Riccitiello only sold about $83k worth of stock before the announcement, for a total of about $1.1M worth of stock this year, vs about $33M last year and close to $100M in 2021. The idea that he dumped $83k worth of stock to get ahead of the bad news Unity was about to drop is just a hilariously bad take.
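    To put the scale in perspective, here's the same arithmetic as a quick sketch (figures rounded, as above):

    ```python
    # Rough scale check using the approximate figures cited above.
    pre_announcement_sale = 83_000     # sold Sept 6, a week before the news
    sold_2023_total = 1_100_000
    sold_2022_total = 33_000_000
    sold_2021_total = 100_000_000

    print(f"vs 2023 total: {pre_announcement_sale / sold_2023_total:.1%}")  # ~7.5%
    print(f"vs 2022 total: {pre_announcement_sale / sold_2022_total:.2%}")  # ~0.25%
    print(f"vs 2021 total: {pre_announcement_sale / sold_2021_total:.3%}")  # ~0.083%
    ```

    An $83k sale from someone who moved close to $100M in stock two years earlier is background noise, not a signal.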


  • AI resume screeners are very much at risk of bias. There have been stories about exactly this in years past. The ML models need to be trained, so they get fed the resumes of candidates who were hired and who weren't, so the model can learn to differentiate the two and make decisions on new resumes in the future. That training, though, takes any bias that went into the previous decisions and carries it forward.

    In the Amazon story I linked above, the model was prioritizing white men over women and people of color. When you think back to how these models were trained, though, that's exactly what you'd expect to happen. No one was intentionally introducing bias into the AI process, but software teams have historically been very white and male, and when referrals and references come into play, those demographics were amplified further. And let's not pretend that none of those recruiters or hiring managers were bringing their own bias to the table.

    If you feed that into your model as its training data, of course the model is going to continue to favor white men, not because it's actually looking for men, but because the resumes men typically submit are the kinds that got hired. Amazon then found that resumes mentioning a professional women's organization, or a historically Black or women-only college, were typically ranked lower. The model isn't "thinking" about why that is; it just learns that when certain traits appear, the resume is ranked lower, and it replicates that pattern.
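    Here's a minimal toy sketch of that mechanism (entirely synthetic data and made-up feature names, assuming scikit-learn): when the historical hiring decisions penalized a feature that has nothing to do with skill, a model trained on those decisions learns the exact same penalty.

    ```python
    # Toy demonstration: biased labels in, biased model out.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5_000

    skill = rng.normal(size=n)               # the signal we actually want
    womens_org = rng.integers(0, 2, size=n)  # proxy flag, unrelated to skill

    # Historical labels: reviewers rewarded skill but also penalized the proxy.
    hired = (skill - 1.0 * womens_org + rng.normal(scale=0.5, size=n)) > 0

    X = np.column_stack([skill, womens_org])
    model = LogisticRegression().fit(X, hired)

    # The model faithfully reproduces the penalty: a large negative weight on
    # a feature that says nothing about ability.
    print(dict(zip(["skill", "womens_org"], model.coef_[0].round(2))))
    ```

    No one told the model to penalize the flag; it just fit the decisions it was given, which is the whole problem.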

    Building a truly unbiased AI system is actually incredibly difficult, not least because the data scientists working on these systems are themselves predominantly white and male. We've also seen this issue in the past with other AI systems, including facial recognition, where systems built by teams of white men can't seem to make reliable determinations when looking at a picture of a Black woman (with accuracy rates 20-30% lower for Black women compared to white men).


  • The problem has less to do with personal goals and more to do with how your company or manager implements them.

    My team has its org goals, which are what our bonuses are based on, and each person has individual goals that they set with me. Those goals go through the boilerplate reviews, and we keep them metrics-based. Did we miss, meet, or exceed our goals? There's a formula, which everyone knows before the year starts (because we wrote the goals as a group and then got executive sign-off on them), that tells us what our bonus metric will be. We sink or swim as a group, myself included. Each person has individual goals related to their unique role, but those are largely "Did you perform at the level expected of your title and salary?" No fluff. No BS. Some of my people write sentences, some give concise bullets, some write 3-word answers. This isn't the SATs, so it doesn't matter how the info is provided.
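    As a purely hypothetical sketch (the goals, weights, and multipliers below are invented for illustration, not our real numbers), a formula like that can be simple enough that anyone on the team can compute the bonus metric themselves:

    ```python
    # Hypothetical miss/meet/exceed bonus formula; all numbers are made up.
    MULTIPLIER = {"miss": 0.0, "meet": 1.0, "exceed": 1.25}

    def bonus_metric(results: dict[str, str], weights: dict[str, float]) -> float:
        """Weighted sum of per-goal outcomes -> one shared bonus multiplier."""
        return sum(weights[g] * MULTIPLIER[res] for g, res in results.items())

    # Everyone knows the weights before the year starts, so nothing is a surprise.
    weights = {"ship_v2": 0.5, "uptime": 0.3, "support_sla": 0.2}
    print(bonus_metric({"ship_v2": "meet", "uptime": "exceed", "support_sla": "miss"},
                       weights))  # 0.875 -> 87.5% of target bonus for the whole team
    ```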

    Then we have the personal goals, which are 100% rooted in the question "What do you want next?" For some people, it's to move into a more Sr role; for others, to break into a new discipline (expertise in a particular area, management, or something completely different); and sometimes it's as simple as "make $30k more per year" or "have more time with my kids in the evenings." (For the last one, it's usually easy: we're remote with few mandatory hours, so it's easy to modify a schedule to free up hours when needed.) We set personal goals and I coach them to achieve those goals, but the only person they answer to if they don't achieve them is themselves. It has zero impact on their performance metrics, bonuses, or raises.

    I want to see everyone have the life and career they want, and we use these goals as a way to work towards that. Our 1-on-1 meetings are NOT about their tasks. We have the task board and team syncs for that, and I can schedule a 1-off chat if we need to address something. Instead, we spend the 1-on-1 more or less on whatever topic they want to address. If something is stressing them, annoying them, etc., they have that time to bring it up and we can try to find a solution. One of my people has a goal to move to a city 9 time zones away. They also highly value their work/life balance, so flexing their schedule is likely not going to solve this; instead, I'm helping them leave the team for a new job. Ideally I'll keep them in the company, but if that doesn't work out and they have to leave, so be it. It's what's best for them, and everyone else here sees it - that shit goes a long way.

    If you’re doing bullshit personal goals and nonsense 1-on-1 meetings, that’s the manager and culture at fault, not the concept as a whole.