Actual programmer
Actual programmer
I wonder if JJ anonymous branches would be something that solves this. I’ve only read about it, have not used JJ yet.
Or meet old ideological dogs like me :P
Much better integrated refactoring support. Much better source code integration support. Much better integrated debugging support. Much better integrated assistive (but not AI) support.
VS Code can do many things IntelliJ can, but not all, and many of them require fiddling with plugins.
Usually, JB is also faster (if your dev machine can run it, but in my experience most devs have beefy machines).
Zed is also lightspeed fast compared to either vscode or JetBrains’ stuff.
I would expect it to rise. I still think it’s worth it, if it’s a good tool for you. IntelliJ is really a good product, even if they do have their downsides. In a commercial environment, it’s totally worth it to buy a licence per developer, if it makes them more productive.
JetBrains git integration is a known mess, true.
Even this is forum-like though. It’s a forum of people talking about a topic that interests them. It just happens to be distributed.
I don’t know what happened, but since 6.2 rolled out on Fedora a week ago, I’ve had several bugs. On the very day I updated, I had two outright crashes. It has happened a few more times since. My keyboard shortcuts don’t work any more. Window layout behaves… oddly (haven’t pinned it down yet).
Just all-around messy upgrade. Am I the only one with problems, though?
That still doesn’t look like a very heavy workload. My older box was older than your 6700K and was fine running such stuff.
Perhaps your limit isn’t the CPU. What storage and RAM setup do you have? Did you look at that?
I’ll be honest and say that when I replaced my old crap with 7900x I did feel improvements on occasion, mostly when I really burden the pc. Plus I think having 64 gigs of ram helps there, at my old system I hit the limits sometimes. Not often, but sometimes. Now my new box just laughs at anything I try to do to it.
Since when do Unix tools output 3,000-word-long usage info? Even GNU tools don’t come close…
[zlatko@dilj ~/Projects/galactic-bloodshed]$ man grep | wc -w
4297
[zlatko@dilj ~/Projects/galactic-bloodshed]$ man man | wc -w
4697
[zlatko@dilj ~/Projects/galactic-bloodshed]$
The article sure mentions 💩 a lot.
No problem!
As an aside, I see we’re bringing the strangers thing over from Reddit. I hope more of the fun and funny stuff gets over, I miss some of the light shitposting.
Why not just cd $XDG_DOWNLOAD_DIR
in the first place?
did you mean smuts?
For bash, this is enough:
# Bash TAB-completion enhancements
# Case-insensitive
bind "set completion-ignore-case on"
# Treat - and _ as equivalent in tab-compl
bind "set completion-map-case on"
# Expand options on the _first_ TAB press.
bind "set show-all-if-ambiguous on"
If you also add e.g. CDPATH=~/Documents, it will also always autocomplete from your Documents no matter which directory you’re in.
These technologies, although archaic, clumsy and insecure
Like cars? Or phones? Those are also archaic, clumsy and insecure technologies.
Sure -> I’m not smart enough to explain it like you’re five, but maybe 12 or so would work?
The problem here is that you’re not adding 1 + 2, or 0.1 + 0.2. You’re converting those to binary (because computers talk binary), then you’re adding binary numbers, and converting the result back. And the error happens at this conversion step. Let’s take it slow, one thing at a time.
See, if you are looking at decimal numbers, it’s kinda like this:
357 => 7 * 1 + 5 * 10 + 3 * 100. The place values, from right to left, are 1, 10, 100, … each time you move one place left, you multiply by 10.
Binary is similar, except it’s not 1, 10, 100, 1000 but rather 1, 2, 4, 8, 16 -> multiply by 2 instead of 10. So for example:
00101101 => right to left => 1 * 1 + 0 * 2 + 1 * 4 + 1 * 8 + 0 * 16 + 1 * 32 + 0 * 64 + 0 * 128 => 45
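That right-to-left sum is mechanical enough to write down as code. A tiny Python sketch of the same expansion (binary_to_decimal is just an illustrative name, not a standard function):

```python
def binary_to_decimal(bits: str) -> int:
    """Sum each bit times its place value: 1, 2, 4, 8, ... from the right."""
    total = 0
    place = 1
    for bit in reversed(bits):
        total += int(bit) * place
        place *= 2  # next place to the left is worth twice as much
    return total

print(binary_to_decimal("00101101"))  # -> 45, same as the worked example
```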
The numbers 0, 1, 2, 3…9 we call digits (since we can represent each of them with one digit). And the binary “numbers” 0 and 1 we call bits.
You can look up more at simple wikipedia links above probably.
We usually “align” these so that we fill with zeroes on the left until some sane width, which we don’t do in decimal.
132 is 132, right? But what if someone told you to write number 132 with 5 digits? We can just add zeroes. So-called “padding”.
00132 -> it’s the same as 132.
In computers, we often “align” things to 8 bits - or 8 places. Let’s say you have 5 -> 0101 in binary. To align it to 8 bits, we would add zeroes on the left, and write:
00000101 -> 0101 -> decimal 5.
Instead of, say, 100110, to pad it to 8 bits you can add two zeroes to the left: 00100110.
Think of it as a thousands separator - we would not write down a million dollars like this: $1000000. We would more frequently write it down like this: $1,000,000, right? (Europe and America do things differently with thousands- and fractions- separators, so 1,000.00 vs 1.000,00. Don’t ask me why.)
So we usually group digits in threes, to make large numbers easier to read.
E.g. 8487173209478 is hard to read, but 8 487 173 209 478 is simpler to see, it’s eight and a half trillion, right?
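Most languages will even do this grouping for you when printing. In Python, for instance, the format mini-language has a thousands separator built in:

```python
n = 8487173209478
print(f"{n:,}")  # comma-grouped: 8,487,173,209,478
print(f"{n:_}")  # underscore-grouped: 8_487_173_209_478
```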
With binary, we group things into 8 bits - we call that “byte”. So we would often write this:
01000101010001001010101010001101
like this:
01000101 01000100 10101010 10001101
I will try to use either 4 or 8 bits for binary from now on.
As a tangential side note, we sometimes add “b” or “d” in front of numbers so that we know if it’s decimal or binary. E.g. is 100 binary or decimal?
b100 vs d100 makes it easier. Although, we almost never use the d, but we do mark other systems that we use: b for binary, o for octal (system with 8 digits), h for hexadecimal (16 digits).
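Python happens to use very similar prefixes, just with a leading zero: 0b, 0o, 0x (and no prefix at all for decimal):

```python
print(0b100)          # 4   - binary
print(0o100)          # 64  - octal
print(0x100)          # 256 - hexadecimal
# int() can also parse a string in a given base:
print(int("100", 2))  # 4
```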
Anyway.
To convert numbers to binary, we’d take chunks out of the number and write down the bits. Example:
13 -> ?
What we want to do is take chunks out of that 13 that we can write down in binary until nothing’s left.
We go from the biggest binary value and subtract it, then go to the next and the next until we get that 13 down to zero. Binary place values are 1, 2, 4, 8, 16, 32, … (and we write them down as b0001, b0010, b0100, b1000, … with more zeroes on the left.)
The biggest of those that fit into 13 seems to be 8, or b1000. So let’s start there. Our binary so far: b1000. And we have 13 - 8 = 5 left to deal with.
The biggest binary to fit into 5 is 4 (b0100). Our binary so far: b1000 + b0100. And our decimal leftover: 5 - 4 = 1.
The biggest binary to fit into 1 is 1 (b0001). So binary: b1000 + b0100 + b0001. And decimal: 1 - 1 = 0.
So in the end, we have to add these binary numbers:
b1000 + b0100 + b0001 = b1101
So decimal 13 we write as 1101 in binary.
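The subtract-the-biggest-power trick above translates to code almost word for word. A quick Python sketch (to_binary is my own hypothetical helper):

```python
def to_binary(n: int, width: int = 4) -> str:
    """Greedy method from the text: repeatedly subtract the biggest
    power of two that still fits, marking a 1 bit at its place."""
    bits = ["0"] * width
    for i in range(width - 1, -1, -1):  # biggest place value first
        place = 2 ** i
        if place <= n:
            bits[width - 1 - i] = "1"
            n -= place
    return "".join(bits)

print(to_binary(13))    # -> '1101' : 8 + 4 + 1
print(to_binary(5, 8))  # -> '00000101', padded to 8 bits
```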
So far, so good, right? Let’s go to fractions now. It’s very similar, but we split parts before and after the dot.
E.g. 43.976 => 3 * 1 + 4 * 10 + 9 * 1/10 + 7 * 1/100 + 6 * 1/1000
Just note that we started already with 10 on the fractional part, not with 1 (so it’s 1/10, 1/100, 1/1000…)
The fractional part is similar, except instead of multiplying by 10, you divide by 10. It would be similar with binary: 1/2, 1/4, 1/8. Let’s try something:
b0101.0110 -> (1 * 1 + 0 * 2 + 1 * 4 + 0 * 8) + (0 * 1/2 + 1 * 1/4 + 1 * 1/8 + 0 * 1/16) => 5 + 0.375
So b0101.0110 (in binary) would be 5.375 in decimal.
Now, let’s convert 2.5 into binary, shall we?
First we take the whole part: 2. The biggest binary that fits is 2 (b0010). Now the fractional part, 0.5. What’s the biggest fraction we can write down? What are all of them?
If you remember, it’s 1/2, 1/4, 1/8, 1/16… or in other words, 0.5, 0.25, 0.125, 0.0625…
So 0.5 would be binary 1/2, or b0.1000
And finally, 2.5 in decimal => b0010.1000
Let’s try another one:
13.625
The whole part is 13, which we already know is b1101. The fractional part is 0.625. The biggest fraction that fits is 1/2 (0.5), leaving 0.625 - 0.5 = 0.125, which is exactly 1/8, or b0.0010.
Together with b0.1000 above, it’s b0.1010. So the final number is:
b1101.1010
Get it? Try a few more:
4.125, 9.0625, 13.75.
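If you want to check your answers, the same greedy idea works for the fractional part, just with place values 1/2, 1/4, 1/8, … A small Python sketch (fraction_to_binary is a made-up helper name):

```python
def fraction_to_binary(frac: float, places: int = 4) -> str:
    """Greedy again: take the biggest fraction (1/2, 1/4, 1/8, ...)
    that still fits, write a 1 bit, subtract, and move on."""
    bits = []
    value = 0.5
    for _ in range(places):
        if value <= frac:
            bits.append("1")
            frac -= value
        else:
            bits.append("0")
        value /= 2
    return "".join(bits)

# The fractional parts of the exercises above:
print(fraction_to_binary(0.125))   # -> '0010'  (4.125  => b0100.0010)
print(fraction_to_binary(0.0625))  # -> '0001'  (9.0625 => b1001.0001)
print(fraction_to_binary(0.75))    # -> '1100'  (13.75  => b1101.1100)
```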
Now, all these conversions so far align very nicely. But what about when they do not?
1 + 2 = 3. In binary, let’s pad it to 4 bits: 1 -> the biggest binary that fits is b0001. 2 -> the biggest that fits is b0010.
b0001 + b0010 = b0011.
If we convert the result back: b0011 -> to decimal, we get 3.
Okay? Good.
Now let’s try 0.1 + 0.2.
How do we get it in binary? Let’s find the biggest fraction that fits: 1/16, or 0.0625, or b0.0001
What’s left is 0.1 - 0.0625 = 0.0375.
Next binary that fits: 1/32 or 0.03125 or b0.00001. We’re left with 0.00625.
Next binary that fits is 1/256
… etc etc until we get to:
decimal 0.1 = b0.0001100110
We can do the same with 0.2 -> b0.0011001100.
Now, let’s add those two:
b0.0001100110 + b0.0011001100 = b0.0100110010
Right? So far so good. Now, if we go back to decimal, it should come out to 0.3.
So let’s try it: 0/2+1/4+0/8+0/16+1/32+1/64+0/128+0/256+1/512+0/1024 => 0.298828125
WHAAAT?
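You can watch this happen in any language that uses binary floating point. Real doubles carry far more bits than our 10, so the error is much smaller, but it never fully goes away. In Python, for example:

```python
import math

print(0.1 + 0.2)         # 0.30000000000000004 - not exactly 0.3
print(0.1 + 0.2 == 0.3)  # False!
# The usual fix: compare with a tolerance instead of exact equality
print(math.isclose(0.1 + 0.2, 0.3))  # True
```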
I also didn’t think much of them, but when I compare this with off-the-shelf Synology or QNAP (in the consumer-grade, like I’m building), the Celeron is a beast :)
I mean, it is not embarrassing for you. In the browser, CSS’s “native platform”, you add classes via the JavaScript API without the dot. It’s not a stupid assumption.
To have to add the dot in the CSS class name seems a bit of an oversight in the gtkrs API.