
Understanding the security of web3 remote signing

What we need to do about a realistic attacker model

May 1, 2023
written by
Deian Stefan
Co-Founder & Chief Scientist
Fraser Brown
Co-Founder & CTO
In the [first post of this series]( we went over the basics of what you need to do to secure your staking keys: (1) use a remote signing tool and (2) ensure that the remote signer both _stores_ and _signs_ in secure hardware---otherwise an attacker can just exfiltrate your keys from memory. But as we hinted in that post, this is not enough.

## The attacker model

To understand what else we need, let's first look at the attacker model of a typical [proof-of-stake validator setup]( with remote signing. At a high level, there are three moving pieces: the validator or consensus client (used to participate in the consensus protocol), the execution client (used to execute smart contracts), and the remote signing service (used to sign requests for the validator client).

![validator attacker model](

The validator and execution clients are complex pieces of software: they are exposed to untrusted network traffic and execute untrusted code, respectively. So using a remote signer is good for security not only because it allows you to keep your keys in hardware, but also because it introduces a boundary between the complex, attacker-facing code and the security-critical code that handles keys. And if we turn this boundary into a _secure isolation boundary_, we can even safeguard keys from completely compromised clients.

## Isolation

Unfortunately, trying to get isolation by running an off-the-shelf remote signing tool like Web3Signer in a separate process won't cut it. An attacker who compromises a client (or any other service running on the same machine) could just read the memory of the signer process (e.g., by reading `/proc/{pid}/mem`).

Enforcing isolation by running a remote signer in a separate VM or enclave also won't do.
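To see how little a same-machine process boundary buys you, here is a minimal, Linux-only Python sketch of reading secrets straight out of a process's address space. To keep the demo safe and self-contained it reads its _own_ memory through `/proc/self/mem`; an attacker with `ptrace` permission on the signer process (e.g., root, or the same user under a permissive `ptrace_scope`) would do the same through `/proc/<pid>/mem`.

```python
import ctypes

# A stand-in for key material sitting in a signer's memory.
secret = ctypes.create_string_buffer(b"not-so-secret-signing-key")
addr = ctypes.addressof(secret)

# /proc/self/mem exposes this process's address space as a file.
# An attacker would open /proc/<signer-pid>/mem instead.
with open("/proc/self/mem", "rb") as mem:
    mem.seek(addr)
    leaked = mem.read(len(secret.value))

# leaked == b"not-so-secret-signing-key": the "key" recovered
# directly from process memory, no signer API involved.
```

The operating system deliberately exposes process memory to sufficiently privileged peers, which is why keeping keys in a separate process, rather than separate hardware, is not a real isolation boundary.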
Existing tools don't consider the code that _interfaces_ with them to be untrusted: they will (almost always) sign whatever you ask them to sign, with any key, at any time. The one exception to the sign-anything approach is slashing protection, but even then, existing tools assume that requests are coming from a trusted client. Retrofitting an existing tool to treat everything as untrusted is a whack-a-mole game of trying to insert the right security checks into the right parts of the codebase.

## Least privilege + compartmentalization

Even if we win the game of whack-a-mole, this only solves part of the problem. Existing signers do _a lot_: they're essentially large, monolithic web apps. The code handling keys runs in the same process as the code handling untrusted HTTP requests, the code parsing untrusted JSON objects coming from other validators, the code communicating with a database that stores the anti-slashing data, the log4j code managing logs, and so on. To safeguard your keys (and, e.g., sign in hardware), you would have to break this system into multiple compartments, each running with least privilege. The signing code that uses secret keys should not be able to talk to the network or filesystem, and your logging library should _definitely_ not be in your trusted computing base.

An alternative approach---and the one we've taken---is to design the system to be least-privileged and compartmentalized from the start. Ground-up, security-centric design is much easier to get right: it's far simpler to reason about what code _should_ be allowed to do than about what code _should not_ be allowed to do. It's the same reason we all prefer allow-lists over deny-lists.
