It could also be this: Cheang, R. T., Skjevling, M., Blakemore, A. I., Kumari, V., & Puzzo, I. (2024). Do you feel me? Autism, empathic accuracy and the double empathy problem. Autism, 0(0). https://doi.org/10.1177/13623613241252320
It seems OP wanted to pass the file name to -k, but this parameter takes the password itself, not a filename:

-k password
The password to derive the key from. This is for compatibility with previous versions of OpenSSL. Superseded by the -pass argument.

So, as I understand, the password would not be the first line of /etc/ssl/private/etcBackup.key, but the string /etc/ssl/private/etcBackup.key itself. It seems that -kfile /etc/ssl/private/etcBackup.key or -pass file:/etc/ssl/private/etcBackup.key is what OP wanted to use.
Oracle trilateration refers to an attack on apps that have filters like “only show users closer than 5 km”. In the vulnerable apps this filter was very accurate, so the attacker could vary their own position relative to the victim (which does not require physical movement: the application has to trust the device about its location, so the position can be spoofed) until the victim disappeared from the list, ending up at a point that is almost exactly 5 km from the victim.
Like if it said the user is 5 km away, that is still going to give a pretty big area if someone were to trilaterate it, because the line of the circle would have to include 4.5-5.5 km away.
This does not help, since the attacker can find a point where the reported distance switches between 4 km and 5 km, and then this point (in the simplest case) is exactly 4.5 km from the victim. The paper refers to this as rounded distance trilateration.
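A toy one-dimensional sketch of the idea (all coordinates here are invented for illustration; the real attack repeats this probing from several directions to trilaterate):

```rust
// Toy sketch of rounded-distance trilateration in one dimension.
// The app reports the distance rounded to whole kilometers; the
// attacker binary-searches for the spoofed position where the report
// flips from 5 km to 4 km, which lies exactly 4.5 km from the victim.
fn reported_km(attacker_pos: f64, victim_pos: f64) -> i64 {
    (attacker_pos - victim_pos).abs().round() as i64
}

fn main() {
    let victim = 13.37_f64; // unknown to the attacker
    // Two spoofed positions straddling the 5 km -> 4 km flip:
    let mut far = victim - 8.0;  // app reports 8 km
    let mut near = victim - 1.0; // app reports 1 km
    for _ in 0..60 {
        let mid = (far + near) / 2.0;
        if reported_km(mid, victim) >= 5 { far = mid } else { near = mid }
    }
    // The flip point is 4.5 km from the victim.
    let recovered = far + 4.5;
    assert!((recovered - victim).abs() < 1e-6);
    println!("recovered victim position: {recovered:.4}");
}
```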
That command will produce a list of (dynamic) libraries that are being used by that helper. It will look somewhat like this (copied from my Arch installation):
linux-vdso.so.1 (0x00007edb2f060000)
libcurl.so.4 => /usr/lib/libcurl.so.4 (0x00007edb2ee6f000)
libpcre2-8.so.0 => /usr/lib/libpcre2-8.so.0 (0x00007edb2edd1000)
libz.so.1 => /usr/lib/libz.so.1 (0x00007edb2edb8000)
libc.so.6 => /usr/lib/libc.so.6 (0x00007edb2ebcc000)
libnghttp3.so.9 => /usr/lib/libnghttp3.so.9 (0x00007edb2eba9000)
libnghttp2.so.14 => /usr/lib/libnghttp2.so.14 (0x00007edb2eb7f000)
libidn2.so.0 => /usr/lib/libidn2.so.0 (0x00007edb2eb5b000)
libssh2.so.1 => /usr/lib/libssh2.so.1 (0x00007edb2eb12000)
libpsl.so.5 => /usr/lib/libpsl.so.5 (0x00007edb2eafe000)
libssl.so.3 => /usr/lib/libssl.so.3 (0x00007edb2ea24000)
libcrypto.so.3 => /usr/lib/libcrypto.so.3 (0x00007edb2e400000)
libgssapi_krb5.so.2 => /usr/lib/libgssapi_krb5.so.2 (0x00007edb2e9d0000)
libzstd.so.1 => /usr/lib/libzstd.so.1 (0x00007edb2e8ef000)
libbrotlidec.so.1 => /usr/lib/libbrotlidec.so.1 (0x00007edb2e8e0000)
/lib64/ld-linux-x86-64.so.2 => /usr/lib64/ld-linux-x86-64.so.2 (0x00007edb2f062000)
libunistring.so.5 => /usr/lib/libunistring.so.5 (0x00007edb2e250000)
libkrb5.so.3 => /usr/lib/libkrb5.so.3 (0x00007edb2e178000)
libk5crypto.so.3 => /usr/lib/libk5crypto.so.3 (0x00007edb2e14a000)
libcom_err.so.2 => /usr/lib/libcom_err.so.2 (0x00007edb2e8d8000)
libkrb5support.so.0 => /usr/lib/libkrb5support.so.0 (0x00007edb2e13c000)
libkeyutils.so.1 => /usr/lib/libkeyutils.so.1 (0x00007edb2e8d1000)
libresolv.so.2 => /usr/lib/libresolv.so.2 (0x00007edb2e12a000)
libbrotlicommon.so.1 => /usr/lib/libbrotlicommon.so.1 (0x00007edb2e107000)
Actually, it might be a good idea to try running this both when it works and when it doesn’t; maybe there is some difference?

ldd /usr/lib/git-core/git-remote-https
I like btdu, which is essentially ncdu, but works in a way that is useful even when advanced btrfs features (CoW, compression, etc.) are used.
I am afraid you are still a bit misled; WireGuard is exactly what they use for the demo video. In general the underlying protocol does not matter, since the vulnerability is about telling the system to route the packets to the attacker, completely bypassing the VPN.
pub trait Sum<A = Self>: Sized { fn sum<I: Iterator<Item = A>>(iter: I) -> Self; }
So I’d presume the A = Self followed by I: Iterator<Item = A> for the iterator binds the implementation pretty clearly to the type of the iterator’s elements.
Quite confusingly, the two =s have very different meanings here. The Item = A syntax just says that the iterator’s item type, which is set as the trait’s associated type, should be A. So, you could read this as “I should implement the Iterator trait, and the Item associated type of this implementation should be A”.
However, A = Self does not actually impose any requirement on A. Instead, it means that Self is the default value of A: that is, you can do impl Sum<i64> for i32 and then you will have Self equal to i32 and A equal to i64, but you can also do impl Sum for i32, which is essentially a shorthand for impl Sum<i32> for i32, giving you both Self and A equal to i32.
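To see the default-parameter mechanics in isolation, here is a toy trait of my own (Combine and its impls are made up for illustration; they just mirror the Sum<A = Self> shape):

```rust
// A made-up trait with a defaulted type parameter, mirroring `Sum<A = Self>`.
trait Combine<A = Self> {
    fn combine(self, other: A) -> Self;
}

// `impl Combine for i32` is shorthand for `impl Combine<i32> for i32`.
impl Combine for i32 {
    fn combine(self, other: i32) -> Self { self + other }
}

// With an explicit parameter, `A` can differ from `Self`.
impl<'a> Combine<&'a str> for String {
    fn combine(mut self, other: &'a str) -> Self {
        self.push_str(other);
        self
    }
}

fn main() {
    assert_eq!(2i32.combine(3), 5);
    assert_eq!(String::from("ab").combine("cd"), String::from("abcd"));
}
```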
In the end, we have the relationship that the iterator item should be the same as A, but we do not have the relationship that Self should be the same as A. So, given this trait, the iterator item can actually be different from Self.
Note that the standard library does actually have implementations where these two differ. For instance, it has impl<'a> Sum<&'a i32> for i32, giving you the possibility to sum an iterator of &i32 into an i32. This makes sense when you think about it: you might want to sum such an iterator without .copied() for some extra ergonomics, but you can’t just return &i32, as there is nowhere to store the referenced i32. So, you need to return the i32 itself.
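For example, a quick check you can run yourself:

```rust
fn main() {
    let v = vec![1i32, 2, 3];
    // `.iter()` yields `&i32`, yet we can sum straight into an `i32`
    // thanks to `impl<'a> Sum<&'a i32> for i32` — no `.copied()` needed.
    let total: i32 = v.iter().sum();
    assert_eq!(total, 6);
}
```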
The definition is pretty clear here right? The generic here is Sum<Self::Item>, abbreviated to S … which AFAIU … means that the element type of the iterator — here Self::Item — is the type that has implemented Sum … and the type that will be returned.
In Sum<Self::Item>, Self::Item is the A parameter, and Sum<Self::Item>, or S, is the type that implements the trait (which is called Self in the definition of the Sum trait, but is different from the Self in the sum method definition). As above, A and S can be different.
It might be helpful to contrast this definition with a more usual one, where the trait does not have parameters:
fn some_function<S>(…) -> …
where
S: SomeTrait,
{…}
fn sum<S>(…) -> …
where
S: Sum<Self::Item>,
{…}
Note that you might have an intuition from some other languages that, in the case of polymorphism, the chosen function depends either on the type of one special parameter (as in many OOP languages, where everything is decided by the class of the called object), or on the parameter list as a whole (as in C++, where the compiler won’t let you define int f() and float f() at the same time, but will be fine with int f(int) and float f(float)). As you can see, in Rust the return type also matters. A simpler example of this is the Default trait.
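With Default, the impl is selected purely by the expected return type, which neither the OOP-style intuition nor the C++-overloading intuition covers:

```rust
fn main() {
    // The compiler picks the impl based on the *return* type alone:
    let n: i32 = Default::default();    // uses `impl Default for i32`, gives 0
    let s: String = Default::default(); // uses `impl Default for String`, gives ""
    assert_eq!(n, 0);
    assert!(s.is_empty());
}
```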
Regarding inference, some examples (Compiler Explorer link):
vec![1i32].into_iter().sum();
// or: <_ as Sum<_>>::sum(vec![1i32].into_iter());
// error[E0283]: type annotations needed
// note: cannot satisfy `_: Sum<i32>`
The compiler knows that the iterator contains i32s, so it looks for something that implements Sum<i32>. But we don’t tell the compiler what to choose, and the compiler does not want to guess by itself.
vec![1i32].into_iter().sum::<i32>();
// or: <i32 as Sum<_>>::sum(vec![1i32].into_iter());
As above, the compiler knows that it wants to call something that implements Sum<i32>, but now it only has to check that i32 is such a type. It is, so the code compiles.
vec![1i32].iter().sum::<i32>();
// or: <i32 as Sum<_>>::sum(vec![1i32].iter());
Now we actually have an iterator of references, as we used .iter() instead of .into_iter(). But the code still compiles, since i32 also implements Sum<&i32>.
vec![1i64].into_iter().sum::<i32>();
// or: <i32 as Sum<_>>::sum(vec![1i64].into_iter());
// error[E0277]: a value of type `i32` cannot be made by summing an iterator over elements of type `i64`
// help: the trait `Sum<i64>` is not implemented for `i32`
Now the compiler can work out by itself that it wants to call something that implements Sum<i64>. However, i32 does not actually implement it, hence the error. If it did, the code would compile correctly.
vec![].into_iter().sum::<i32>();
// or: <i32 as Sum<_>>::sum(vec![].into_iter());
// error[E0283]: type annotations needed
// (in the second case) note: multiple `impl`s satisfying `i32: Sum<_>` found in the `core` crate: impl Sum for i32; impl<'a> Sum<&'a i32> for i32;
Now the situation is reversed. The compiler knows the return type, so it knows that i32 should implement some Sum<_>. But it doesn’t know the iterator element type, so it doesn’t know whether it should choose the owned-value version or the reference version. Note that the wording is different: here the compiler wants to guess, but it can’t, as there are multiple possible choices. But if there is only one choice, the compiler does guess it:
struct X {}
impl Sum for X {
    fn sum<I: Iterator<Item = X>>(_: I) -> Self { Self {} }
}
vec![].into_iter().sum::<X>();
// or: <X as Sum<_>>::sum(vec![].into_iter());
builds correctly. I am not sure about the reason for the difference (I feel like it’s related to forward compatibility and the fact that outside the standard library I can do impl Sum<i32> for MyType but not impl Sum<MyType> for i32, but I don’t really know).
Hope that helps :3
EDIT:
I’d also caught mentions of the whole zero thing being behind the design. Which is funny because once you get down to the implementation for the numeric types, zero seems (I’m not on top of macro syntax) to be just a parameter of the macro, which then gets undefined in the call of the macro, so I have to presume it defaults to 0 somehow??. In short, the zero has to be provided in the implementation of sum for a specific type. Which I suppose is flexible. Though in this case I can’t discern what the zero is for the integer types (it’s explicitly 0.0 for floats).
Ah, I read this, thought about this, and forgot about this almost immediately. I know almost nothing about macros, but if I understand correctly, the zero is in line 92, here:
($($a:ty)*) => (
integer_sum_product!(@impls 0, 1,
#[stable(feature = "iter_arith_traits", since = "1.12.0")],
$($a)*);
integer_sum_product!(@impls Wrapping(0), Wrapping(1),
#[stable(feature = "wrapping_iter_arith", since = "1.14.0")],
$(Wrapping<$a>)*);
);
The intention seems to be to take a list of types (i8 i16 i32 i64 i128 isize u8 u16 u32 u64 u128 usize), and then for each type to generate both the regular and the Wrapping version, each time calling into the path you have seen before. For floats there is no Wrapping version, so this time 0.0 really is the only kind of zero that can appear.
If so, why not rely on the Add trait at the element level, which is responsible for the addition operator (see docs here)?
You made me curious and I found some discussion on the subject: https://github.com/rust-lang/rust/issues/27739. The short version is that you could do that if you had some other trait telling you what the zero value of the type is, so that you know what the sum of vec![] should be. Originally the standard library did just that; the trait was literally called Zero. But there were some issues with it, and it was removed in favor of the current design.
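A minimal sketch of what that older design could look like (this Zero trait is my own reconstruction for illustration, not the actual removed API):

```rust
use std::ops::Add;

// Hypothetical trait, modeled on the `Zero` trait that the standard
// library removed in favor of the current `Sum` design.
trait Zero {
    fn zero() -> Self;
}

impl Zero for i32 {
    fn zero() -> Self { 0 }
}

// With `Zero` + `Add`, a generic sum needs no dedicated `Sum` trait:
fn sum_with_zero<T: Zero + Add<Output = T>, I: Iterator<Item = T>>(iter: I) -> T {
    iter.fold(T::zero(), |acc, x| acc + x)
}

fn main() {
    assert_eq!(sum_with_zero(vec![1i32, 2, 3].into_iter()), 6);
    // The empty sum is well-defined: it is the zero value.
    assert_eq!(sum_with_zero(std::iter::empty::<i32>()), 0);
}
```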
For example, this code doesn’t compile because a type needs to be specified, presumably type inference gets lost amongst all the generics?
Unfortunately, with this design of the Sum trait it is impossible to guess the result type from the iterator type. For example, see https://godbolt.org/z/c8M7eshaM.
I really need to try out Mercury one day. When we did a project in Prolog at uni, it felt cool, but also incredibly dynamic in a bad way. There were a few times when we misspelled some clause, which normally would be an error, but in our case it just meant falsehood. We then spent waaay too much time searching for these. I can’t help but think that Mercury would be as fun as Prolog, but less annoying.
I actually use from time to time the Bower email client, which is written in Mercury.
My understanding is that all issues are patched in the mentioned releases, the config flag is not needed for that.
The config flag has been added because supporting clients with different endianness is undertested and most people will never use it. So if it is going to generate vulnerabilities, it makes sense to be able to disable it easily, and to disable it by default in the next major release. Indeed, XWayland had it disabled by default already, so only the fourth issue (ProcRenderAddGlyphs) is relevant there if that default is not changed.
Ultimately you can configure these however you want. On my 5600X, I easily got one full execution of scrypt to last 34.6 seconds (--logN 27 -r 1 -p 1 in the example CLI), and one full execution of bcrypt to last 47.5 seconds (rounds=20 with the bcrypt Python library).
This kind of configuration (ok, not this long, but definitely around 1 second per execution) is very common in things like password managers or full disk encryption.
I’m betting there’s probably something that generates the key from a vastly smaller player input, e.g. what game objects you interacted with, in what order, or what you pressed/placed somewhere. But that also means that the entropy is probably in the bruteforceable range, and once you find the function that decrypts the secrets, it should be pretty easy to find the function that generates the key, and the inputs it takes.
When handling passwords, it is standard practice to use an intentionally costly (in CPU, memory, or both) algorithm to derive the encryption key from the password. Maybe the dev can reuse this? The resulting delay could easily be masked with some animation.
I got curious and decided to check this out. This value was set to the current one in 2009: https://github.com/torvalds/linux/commit/341c87bf346f57748230628c5ad6ee69219250e8 The reasoning makes sense, but I guess it is not really relevant to our situation, and according to the newest version of the comment, 2^16 is not a hard limit anymore.
Have you tried etckeeper? I haven’t, but it’s supposed to be an improvement over just using git in this use case.
As a data point, I have a Green Cell battery in my X220. I bought the battery on July 24, 2022, and I have been using my X220 regularly but lightly. The battery was marketed as 6600 mAh at 10.8 V. As of writing, the OS reports a design capacity of 73.26 Wh and a current capacity of 60.6 Wh:
POWER_SUPPLY_NAME=BAT0
POWER_SUPPLY_TYPE=Battery
POWER_SUPPLY_STATUS=Discharging
POWER_SUPPLY_PRESENT=1
POWER_SUPPLY_TECHNOLOGY=Li-ion
POWER_SUPPLY_CYCLE_COUNT=0
POWER_SUPPLY_VOLTAGE_MIN_DESIGN=11100000
POWER_SUPPLY_VOLTAGE_NOW=11783000
POWER_SUPPLY_POWER_NOW=28726000
POWER_SUPPLY_ENERGY_FULL_DESIGN=73260000
POWER_SUPPLY_ENERGY_FULL=60600000
POWER_SUPPLY_ENERGY_NOW=54960000
POWER_SUPPLY_CAPACITY=90
POWER_SUPPLY_CAPACITY_LEVEL=Normal
POWER_SUPPLY_MODEL_NAME=45N1023
POWER_SUPPLY_MANUFACTURER=SANYO
POWER_SUPPLY_SERIAL_NUMBER= 9001
The bootloader is stored unencrypted on your disk. Therefore it is trivial to modify, the other person just needs to power down your PC, take the hard drive out, mount it on their own PC and modify stuff. This is the Evil Maid attack the other person talked about.
Not a Fedora user, but according to https://www.freedesktop.org/wiki/Software/systemd/APIFileSystems/ adding a new fstab entry with the correct option should just work. They even give changing the size of /tmp as an example use case :)
You might also like https://github.com/nvim-neorg/neorg which is not meant to be compatible with Emacs org-mode, but rather something new, built around similar ideas but for Neovim. Haven’t used it myself though, only heard about it.
Same in Python, Rust, Haskell and probably many others.
But apparently JS does work that way; that is, its filter always iterates over everything and returns a new array rather than some iterator object.