
Understanding and preventing the Bybit hack

A different approach to security would have made the attackers’ job much harder

February 25, 2025
tags
Key management
Security
Wallets
Several days ago, the Lazarus group [hacked the Bybit exchange](https://www.forbes.com/sites/digital-assets/2025/02/21/latest-on-the-bybit-record-breaking-14-billion-dollar-crypto-hack/) for around one and a half billion dollars. The attack was clever—it required deep knowledge of Bybit’s internal operations—and technically sophisticated.

It was also _completely preventable_.

This blog post first digs into the hack itself, and then explains how a different approach to security would have made the attackers’ job much harder.

## How the hack happened

Bybit uses on-chain “[Safe](https://safe.global)” wallets to store (large amounts of) funds, and then periodically, manually transfers funds from those wallets to other wallets. To approve transfers, three Bybit signers navigate to the wallet UI and view the transaction details. If everything looks good, they use a hardware device (e.g., a Ledger) to sign the transaction. When the final signer approves, the UI broadcasts the fully-signed transaction to the blockchain.

### Operations under attack

The attacker started by deploying two contracts:

1. A malicious contract that transfers ETH and ERC-20 tokens to any address that the attacker chooses.
2. A contract that upgrades a standard Safe contract to instead point to the malicious contract.

The attacker’s goal was to get the Bybit signers to approve a contract upgrade transaction; once the cold wallet contract logic was replaced by malicious logic, the attackers could drain all the wallet’s funds to the address of their choice. Unfortunately (for the attacker), presenting the Bybit signers with the chance to `upgrade` their Safe to a malicious contract would almost certainly fail: the signers would veto the transaction immediately. Thus, the attackers had to _trick_ the signers into approving the upgrade by making it look like something…safe.
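Public post-mortems describe the “upgrade” contract as working through a delegatecall: because delegatecall runs the callee’s code against the *caller’s* storage, a function that merely writes one storage slot can swap out the address of the implementation a Safe proxy delegates to. The toy Python model below sketches that mechanic; `Proxy`, `evil_transfer`, and the addresses are illustrative stand-ins, not real Safe or EVM code.

```python
# Toy model of why a delegatecall to a "transfer" function acts as an upgrade.
# A Safe proxy keeps its implementation ("masterCopy") address in storage slot 0
# and delegates all logic to it. DELEGATECALL executes the callee's code against
# the caller's storage, so a malicious function with a transfer-shaped signature
# that writes slot 0 silently replaces the proxy's implementation.

class Proxy:
    def __init__(self, implementation):
        self.storage = {0: implementation}  # slot 0: masterCopy address

    def delegatecall(self, code, *args):
        # The callee's code executes with *this* contract's storage.
        code(self.storage, *args)

def evil_transfer(storage, to, amount):
    # Advertised as transfer(address,uint256); actually overwrites slot 0.
    storage[0] = to

wallet = Proxy("0xSafeSingleton")
# The transaction the signers approved: a "transfer", executed via delegatecall.
wallet.delegatecall(evil_transfer, "0xAttackerImplementation", 0)
assert wallet.storage[0] == "0xAttackerImplementation"  # attacker logic installed
```

After this one write, every later call to the wallet runs the attacker’s logic, which is what let the attackers drain the funds at leisure.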
Their first step was to implement the upgrade in a non-standard—and thus harder to detect—way. Instead of using a standard `upgrade` transaction, they [implemented](https://www.certik.com/resources/blog/3wI26AFKF1UtSDjJEXNEDM-bybit-incident-technical-analysis) [the](https://www.blockaid.io/blog/the-15b-bybit-hack-explained-a-technical-breakdown) [upgrade](https://x.com/dhkleung/status/1893073663391604753) so that its transaction’s data looked like a `transfer`. This alone is probably insufficient: while a `transfer` is plausible, this _particular_ transfer is very strange looking. It’s not even ERC-20 compliant, for example.

The attacker’s second step was to make the upgrade look _absolutely, completely routine_. This isn’t possible at the contract level: no matter what, a contract upgrade is going to look different from your ordinary, once-a-week transfer. Luckily (for the attacker), there’s another way. Recall that the Bybit signers navigate to the UI to view transaction details and approve transactions. Thus, to get their evil upgrade approved, the attackers could make the UI display one thing—a completely normal transfer—while collecting a signature on something else—a malicious upgrade.

To pull off this attack, the attackers had to get three key Bybit employees to navigate to the evil (but normal-looking) UI. One possibility is that a phishing attack got them to click on a link that sent them directly to the evil UI; another possibility is a [more sophisticated](https://blog.google/threat-analysis-group/0-days-exploited-wild-2022/) phishing attack, one that sent them to a webpage embedding malicious code that compromised their machines (and ultimately displayed the evil UI).

Once the Bybit employees had looked at the malicious transaction details in the evil UI and deemed them safe, they needed to actually sign the transaction.
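For reference, a real ERC-20 `transfer(address,uint256)` call has a rigid shape: the well-known 4-byte selector `0xa9059cbb` followed by exactly two 32-byte ABI words. A reviewer (or tool) that mechanically decodes the payload, rather than trusting a UI summary, immediately sees whether a “transfer” is what it claims to be. A minimal decoding sketch (helper names are illustrative):

```python
# keccak("transfer(address,uint256)")[:4] -- the standard ERC-20 selector.
TRANSFER_SELECTOR = bytes.fromhex("a9059cbb")

def decode_transfer(calldata: bytes):
    """Decode selector + two 32-byte ABI words; raise on anything malformed."""
    if calldata[:4] != TRANSFER_SELECTOR:
        raise ValueError("not a transfer(address,uint256) call")
    if len(calldata) != 4 + 32 + 32:
        raise ValueError("unexpected calldata length")
    recipient = "0x" + calldata[16:36].hex()         # address: last 20 bytes of word 1
    amount = int.from_bytes(calldata[36:68], "big")  # uint256: word 2
    return recipient, amount

# A well-formed transfer of 1000 tokens to an illustrative address:
calldata = (TRANSFER_SELECTOR
            + bytes.fromhex("11" * 20).rjust(32, b"\x00")
            + (1000).to_bytes(32, "big"))
recipient, amount = decode_transfer(calldata)
assert recipient == "0x" + "11" * 20 and amount == 1000
```

A payload that doesn’t decode cleanly, or that decodes to something odd (say, a zero-amount “transfer” to a freshly deployed contract), is exactly the kind of red flag the text above describes.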
This is where the entire attack could have broken down: if the Bybit employees had examined the transaction details _on their hardware devices_—instead of just in the UI—they would have seen the weird-looking transfer. Instead, the attackers knew (or suspected) that Bybit operations used “[blind signing](https://x.com/Ledger/status/1893956121385165278),” where transaction details aren’t displayed on the hardware wallet. This meant that a compromised UI was enough to trick all three Bybit signers into approving a malicious contract upgrade—which cost them over a billion dollars.

### Feb 26th 2025 update

According to the most [recent](https://x.com/safe/status/1894768522720350673) [reports](https://www.bybit.com/en/press/post/bybit-confirms-security-integrity-amid-safe-wallet-incident-no-compromise-in-infrastructure-blt9986889e919da8d2), the evil UI wasn’t the result of a phishing attack against Bybit. Instead, the attackers were able to compromise a Safe employee and make malicious modifications to the _real_ UI that’s used by all Safe customers.

This is the worst possible case: the attackers had the power to subvert the operation of the UI for _all_ users, not just for Bybit. And in Safe’s design, the UI is a fully trusted component of the system—so this is unequivocally a compromise of Safe. (It’s true that the smart contract itself was not compromised. But that hardly matters: as proved by this attack, an attacker does not need to compromise the smart contract in order to drain a Safe.)

Stepping back: securing code deployment is one of the fundamental requirements for operating security-critical systems. It is astonishing that compromising one Safe employee was sufficient to subvert a security-critical component of the Safe system.
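The check that blind signing skips can be made concrete: what a hardware device ultimately signs is a digest of the raw transaction fields, so a signer who independently recomputes that digest (rather than trusting the UI’s summary) can detect a mismatch. The sketch below uses `sha256` over a canonical JSON serialization purely as a stand-in; real Safe transactions are hashed with EIP-712/keccak, and the field set here is simplified.

```python
import hashlib
import json

def tx_digest(tx: dict) -> str:
    # Canonical serialization so any two parties hash identical bytes.
    return hashlib.sha256(json.dumps(tx, sort_keys=True).encode()).hexdigest()

# What the compromised UI displayed vs. what it actually submitted for signing
# (values illustrative; operation 1 denotes a delegatecall):
ui_displayed = {"to": "0xWarmWallet", "value": 30_000, "data": "0x", "operation": 0}
actually_signed = {"to": "0xUpgradeContract", "value": 0,
                   "data": "0xa9059cbb...", "operation": 1}

# A signer who recomputes the digest from the raw transaction, or reads it off
# the hardware device's screen, sees the mismatch and refuses to approve.
assert tx_digest(ui_displayed) != tx_digest(actually_signed)
```

With blind signing, the signers only ever saw `ui_displayed` while authorizing the digest of `actually_signed`; the comparison above is exactly the step that never happened.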
This particular attack vector could have been prevented by following best practices—and while there are more sophisticated attack vectors, this is sadly a reminder that securing software development, deployment, and the supply chain more broadly is as critical to Web3 as it’s been for Web2.

## How to prevent something similar

This attack _should not_ have happened: wallet and key management providers have developed all sorts of strategies that would thwart such a thing. We’ll give an example by walking through several ways that a hardware-backed CubeSigner wallet with a 3-out-of-3 approval requirement would have prevented the hack.

### Don’t blind sign

If you—or your wallet/key management platform, as we discuss below—can’t examine the details of the transaction you’re signing, there’s very little you can do to prevent exploits. (CubeSigner does have a few features that _do_ help, but we’ll talk about them in a future post.) Just don’t blind sign.

### Use policies (and don’t enforce them on the client machine)

Signing policies are rules like “don’t sign any transfer over a certain amount”; if a transaction violates the policy, the system simply refuses to sign it. Crucially, CubeSigner enforces policies in the system backend—_not_ on client machines. As a result, if the UI displays a safe transaction but submits an _unsafe_ one to the backend for signing, that transaction will be rejected.

Here’s an example set of policies that would prevent the attack with essentially no inconvenience in day-to-day operations:

- Require 3/3 YubiKey approvals from the Bybit signers. (YubiKeys give an extra layer of defense against phishing attacks: a YubiKey approval sent to an evil UI would include different metadata than one sent to the real UI, so the approval would fail.)
- Only allow transfers to a known set of warm wallet addresses. This prevents any unusual or malicious transfers by construction.
- Only allow transfers of a certain amount in a certain time window. This prevents attackers from draining all the funds in the wallet.

Even if an attacker phishes the Bybit signers into approving a massive transfer to a malicious address, this set of policies would immediately reject the transfer.

That’s not _quite_ what happened, though: in the hack, the attackers got the Bybit signers to approve a contract upgrade by making it look like a transfer. There’s no off-chain equivalent of a smart contract upgrade. The closest thing is a governance change: tricking the Bybit signers into removing all the policies we just set (in order to then trick them into signing evil transfers). Which brings us to…

### Separate governance policies and signing policies

Governance policies are rules like “five out of seven people must approve this signing policy change”; instead of controlling what a wallet can sign, they control how and when it’s possible to change the policy on a wallet.

We might configure our cold wallet with two policies:

- Require 5/7 YubiKey approvals to change the wallet’s signing policy. (Remember: YubiKeys enhance security against phishing.)
- Require a 7-day waiting period for changes to the wallet’s signing policy. Even if approvers are tricked into approving malicious changes, they have time to cancel those approvals. This is especially powerful in concert with alerts on governance changes.

At this point, it’s very hard for the attacker to make a governance change look like a transaction request: governance changes require both more approvers and different approvers than signature requests. Furthermore, since CubeSigner’s transaction approvals and governance approvals are completely different, it’s impossible for an attacker to collect approvals for signatures and then use them to approve governance changes.
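The interplay of a separate governance quorum and a waiting period can be sketched in a few lines. Everything below is a hypothetical model, not CubeSigner’s actual API: the class, method names, and the 5/7 quorum and 7-day timelock numbers are illustrative.

```python
DAY = 86_400  # seconds

class GovernedWallet:
    """Toy model: changing the signing policy needs a separate governor
    quorum plus a waiting period during which the change can be cancelled."""

    def __init__(self, governors, quorum, timelock_secs):
        self.governors = set(governors)
        self.quorum = quorum
        self.timelock = timelock_secs
        self.pending = None
        self.policy = "strict"

    def propose_policy_change(self, new_policy, now):
        self.pending = {"policy": new_policy, "approvals": set(), "at": now}

    def approve_change(self, who, now):
        # Only governors count here -- a transaction-signing approval can
        # never be replayed as a governance approval.
        if who not in self.governors:
            raise PermissionError("not a governor")
        self.pending["approvals"].add(who)

    def cancel(self):
        self.pending = None  # an alerted team cancels a tricked-through change

    def finalize(self, now):
        p = self.pending
        if p is None or len(p["approvals"]) < self.quorum:
            return False
        if now - p["at"] < self.timelock:
            return False  # still inside the waiting period
        self.policy = p["policy"]
        return True

w = GovernedWallet({f"g{i}" for i in range(7)}, quorum=5, timelock_secs=7 * DAY)
w.propose_policy_change("permissive", now=0)
for g in ["g0", "g1", "g2", "g3", "g4"]:
    w.approve_change(g, now=0)
assert not w.finalize(now=1 * DAY)  # quorum met, but the timelock blocks it
assert w.finalize(now=8 * DAY)      # only lands if nobody cancelled in 7 days
```

Even if attackers phish five governors, the week-long window (paired with alerts on governance changes) gives the team time to notice and call `cancel` before the weakened policy ever takes effect.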
### Incorporate transaction monitoring services

There are many excellent transaction monitoring and simulation tools that would, for example, have flagged that the upgrade transaction was suspicious. You can use any of these tools in combination with CubeSigner! Look out for an update 👀 on how to integrate your favorite.
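To give a flavor of what such tools check, here is a toy rule set; real monitoring services do far more (full transaction simulation, heuristics, threat intelligence), and the field names below are illustrative rather than any particular tool’s schema.

```python
def flag_suspicious(tx, known_recipients):
    """Apply simple red-flag rules to a proposed cold-wallet transaction."""
    flags = []
    if tx.get("operation") == 1:  # a delegatecall from a cold wallet is never routine
        flags.append("delegatecall")
    if tx.get("to") not in known_recipients:
        flags.append("unknown recipient")
    if tx.get("value", 0) == 0 and tx.get("data", "0x") != "0x":
        flags.append("zero-value call with payload")
    return flags

# A transaction shaped like the malicious one trips every rule:
bybit_style_tx = {"to": "0xUpgradeContract", "value": 0,
                  "data": "0xa9059cbb...", "operation": 1}
assert flag_suspicious(bybit_style_tx, {"0xWarmWallet1"}) == [
    "delegatecall", "unknown recipient", "zero-value call with payload"]
```

Any one of these flags, surfaced before the third signature, would have been reason enough to halt the transfer and investigate.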
