

  • Japan has nicovideo.jp as well. Russia has Yandex Efir (it has gone through a couple of rebrands; Efir was the name in 2020 when we were discussing deals, it operated under another name before that, and I think it has since been superseded by Dzen). Off to the side, I think VK also has a small video delivery presence, much like how Facebook has videos in their feeds. China has several platforms: Tencent Video (owned by Tencent), Youku as you’ve called out (owned by Alibaba), XiGua (ByteDance), Haokan (Baidu), and then a slew of smaller ones like KuaiShou, BiliBili, and that video thing WeChat tries to push. None of these are public services operated by the state, by the way. The list really goes on… and I’d know, because I’ve worked in the space for almost 12 years now.

    China’s Great Firewall aside, all of these platforms are tiny in comparison and, in the grand scheme of things, barely have any reach. In general, these regional platforms all take a backseat, much like Nebula and the like; if a creator’s content is hyperlocal or super niche, they might be okay with a smaller regional platform, but if they’re trying to extend their reach and monetization (to ensure they have money to continue producing content), their presence on these platforms is really just auxiliary to their primary presence on YouTube.

    Getting viewers onto these smaller platforms poses a significant chicken-or-egg problem: creators aren’t incentivized to be there because of the lack of viewers, and viewers aren’t incentivized to go there because of the lack of content. Worse yet, I’ve also seen situations where creators are paid for some period of exclusivity, and when the deal lapses they go straight back to YouTube.

    Real competitors do not exist, and likely will not exist for the foreseeable future. YouTube is the million-pound behemoth, while everyone else barely registers on the radar.


  • That’s a drop in the bucket in the grand scheme of things. You just outsource that to rights management companies and absolve yourself of the obligation behind safe harbour. This is basically what they’re doing in this department: they’ve built Content ID for digital fingerprinting, and then invented an entire market of rights management companies on both sides of the equation.

    On the other hand, 500 hours of video footage were uploaded to YouTube every minute per YouTube in 2022 (pdf warning). 30 minutes of video game content (which compresses better), in just the 720p variant using the avc1 codec, is about 443MB of space. Never mind all the other transcodes or higher bitrates. So call it 800MB per hour of 720p content; 500 hours of content per minute means 400GB of disk space per minute, or about 576TB of disk space per day.

    That’s just video uploaded to YouTube. I don’t even know how much is being watched regularly, but even if we assume at least one view per video, that’s roughly 576TB of bandwidth in and then another 576TB of bandwidth out per day.
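    To sanity check that napkin math (the 800MB/hour figure is my own round-down from the 443MB/30min sample above):

    ```
    # 500 hours uploaded per minute, ~800MB per hour of 720p avc1
    echo "$(( 500 * 800 / 1000 )) GB of new video per minute"    # 400 GB/min
    echo "$(( 500 * 800 * 60 * 24 / 1000000 )) TB per day"       # 576 TB/day, one transcode only
    ```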

    Good luck scaling that on a public budget.


  • If you have enough drive bays, I’d probably shut down the server, live boot into any Linux distro without mounting the drives, use dd to copy from the 1st 256GB drive to the 1st 500GB drive and from the 2nd 256GB to the 2nd 500GB, then boot the system and use resize2fs to expand the file system to fill the partition.
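    Roughly something like this; all device names below are placeholders, and I’m assuming a Linux md RAID1 array with ext4 on top:

    ```
    # from the live environment, with nothing mounted:
    dd if=/dev/sda of=/dev/sdc bs=1M conv=noerror,sync status=progress   # 1st 256GB -> 1st 500GB
    dd if=/dev/sdb of=/dev/sdd bs=1M conv=noerror,sync status=progress   # 2nd 256GB -> 2nd 500GB
    # after booting from the new pair: grow the partitions, the array, then the filesystem
    growpart /dev/sdc 1      # growpart is from cloud-utils; parted's resizepart works too
    growpart /dev/sdd 1
    mdadm --grow /dev/md0 --size=max
    resize2fs /dev/md0
    ```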

    Since RAID1 is just a mirror, the more adventurous type might say you can just hot swap one drive, let it rebuild, hot swap the other, let it rebuild again, and then expand the file system, all online. Given it’s only 256GB of data max on a pair of SSDs, it shouldn’t take too long, but I’m more inclined to do it the safe way.
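    For that adventurous route, the equivalent sketch (again assuming Linux software RAID1 at /dev/md0, with example device names):

    ```
    mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1   # retire one 256GB member
    # physically swap in a 500GB drive, partition it, then:
    mdadm /dev/md0 --add /dev/sdb1
    watch cat /proc/mdstat                               # wait for the rebuild to finish
    # repeat for the other member, then grow everything online:
    mdadm --grow /dev/md0 --size=max
    resize2fs /dev/md0                                   # ext4 can grow while mounted
    ```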


  • Do individual users send activities directly? I thought only users of your instance and remote federated instances send traffic to your instance, so this change would only affect data coming in from the larger instances?

    Also, what happens to the original request while it remains in the queue? Say, for example, large-instance.com is sending 11 updates to your instance; while your back end server is processing the first 10, what happens to the 11th? Does it get put on hold while your back end churns, or does it get a 200 OK response even though the request may fail later? Neither seems ideal. If the sending instance is left waiting for a response while you churn, or worse yet, if your back end fails and your buffer waits X seconds to time out each request, you’re going to hurt global federation by holding up a slot in their outgoing federation queue; if the sender gets a 200 OK and your back end eventually fails, you’d lose data, since the other instance wouldn’t know it needs to retry.


  • One potential downside to this on the posts/comments front is that if the thread in question is in a community your instance isn’t federated with, any form of local redirect would yield just an empty post with no comments. Lemmy’s current federation is primarily push driven, so requesting a post from an unknown community will yield no historical comments, as your instance has never subscribed to the community and thus never received the pushed activities. Whereas if you’re sent to the original instance, you’d be able to see the full interaction history and get a better picture of the intended discussion.