The same way you would do it with a black box, optionally taking as many shortcuts as you are comfortable with, on the assumption that you have a better understanding of how it was built?
Get it audited by tools (e.g., OneSpin) or people (e.g., Bunnie) that one trusts?
I’m not saying it’s intrinsically safer than other architectures, but it is at least more inspectable and, for people who do value trust for whatever reason, can again be federated.
I assume that if you are asking the question you are skeptical about it, so I’m curious to know what you believe is a better alternative and why.
I imagine it’s like everything else: you can only realistically verify against a random sample. It’s like trucks crossing a border: they should ALL be checked, but in practice only a few get checked and punished, in the hope that the punishment will deter everyone else.
Here, if 1 chip is checked for every 1 million produced and a single problem is found in it, be it a backdoor or “just” a security flaw that is NOT present in the original design, then trust in the company producing them is shattered. Nobody who can afford alternatives will want to work with them.
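The sampling argument can be made concrete with a little probability. If tampering affects a fraction p of units and n units are inspected at random, the chance of catching at least one is 1 − (1 − p)^n; and if the flaw is baked into the design itself, p = 1, so a single inspected chip suffices. A minimal sketch (the parameter values below are illustrative assumptions, not data about any real fab):

```python
def detection_probability(p: float, n: int) -> float:
    """P(at least one tampered unit among n inspected) = 1 - (1 - p)^n.

    Assumes inspections are independent draws, a simplification of
    sampling without replacement from a large production run.
    """
    return 1.0 - (1.0 - p) ** n

# A design-level backdoor is present in every chip (p = 1),
# so inspecting even one chip exposes it.
print(detection_probability(1.0, 1))

# Tampering with a small fraction of units is still likely to be
# caught once enough samples are inspected.
for p, n in [(0.001, 100), (0.001, 1000), (0.01, 500)]:
    print(f"p={p}, n={n}: detected with probability "
          f"{detection_probability(p, n):.3f}")
```

This is why even a tiny audit rate carries real deterrent weight: the economics only favor the attacker if the tampered fraction is vanishingly small or auditing is absent entirely.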
I imagine that in a lot of situations the economic risk is not worth it. Even if, say, a state actor commissions a backdoor and tells the producing company it will cover their losses, as soon as the news is out nobody will use the chips, so even for the state actor it doesn’t work.