- Polygon’s Heimdall consensus layer went down for about an hour on Wednesday due to a consensus bug.
- The Bor layer stayed live, and transactions continued uninterrupted.
- The bug follows the recent complex Heimdall V2 upgrade.
Polygon, one of Ethereum’s leading Layer 2 scaling solutions, suffered a temporary outage on Wednesday, July 30, 2025, that halted its Heimdall consensus layer for approximately one hour.
Notably, the disruption came just weeks after Heimdall V2, the network’s most technically complex upgrade since its launch in 2020.
Validator exit triggered the rare failure
The outage began around 09:30 UTC when Heimdall, the consensus layer responsible for managing validators and syncing Polygon’s proof-of-stake chain with Ethereum, suddenly became unresponsive.
According to an official statement from the Polygon Foundation, the incident was caused by a validator unexpectedly exiting the network — a rare event that the system had not been programmed to handle.
This unusual validator exit led to a “consensus bug” that halted checkpointing and caused a temporary break in chain progression.
Polygon confirmed that the chain’s “liveness” — its ability to process and execute transactions — remained intact throughout the disruption, thanks to the Bor layer, which continued to produce blocks without interruption.
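The decoupling described above — Bor producing blocks while Heimdall checkpointing stalls — can be illustrated with a minimal sketch. This is not Polygon’s actual code; the `Chain` class, the 256-block checkpoint interval, and the field names are illustrative assumptions.

```python
# Illustrative sketch (not Polygon's implementation) of how block
# production (Bor) can continue while checkpointing (Heimdall) halts.
from dataclasses import dataclass, field

@dataclass
class Chain:
    blocks: int = 0                                   # Bor: blocks produced
    checkpoints: list = field(default_factory=list)   # Heimdall: checkpoints committed
    heimdall_up: bool = True

    def produce_block(self):
        # Bor produces blocks regardless of Heimdall's state.
        self.blocks += 1

    def try_checkpoint(self, interval: int = 256):
        # Heimdall periodically commits a checkpoint of Bor's state to
        # Ethereum; if consensus is halted, no checkpoint lands, but Bor
        # is unaffected. The interval here is a made-up placeholder.
        if self.heimdall_up and self.blocks % interval == 0:
            self.checkpoints.append(self.blocks)

chain = Chain()
chain.heimdall_up = False          # simulate the consensus bug
for _ in range(512):
    chain.produce_block()
    chain.try_checkpoint()

print(chain.blocks)                # 512: transactions kept flowing
print(len(chain.checkpoints))      # 0: checkpointing stalled
```

The point of the toy model is simply that the two failure domains are independent: halting the checkpoint loop leaves the block counter untouched, which mirrors why user transactions kept executing during the outage.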
Bor stays live, but RPCs falter
While the core functionality of the network was preserved, the user experience told a different story.
According to a Polygon Foundation update, the outage caused sync inconsistencies across several RPC providers’ Bor nodes.
This created confusion for users and dApps that rely on explorers and API endpoints to verify network status and transactions in real time.
Some users mistakenly believed the entire network was offline.
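One way a dApp operator could detect this kind of inconsistency is by comparing the latest block height reported by each provider. The sketch below is a hypothetical helper, not something Polygon ships; in practice the heights would come from each provider’s standard `eth_blockNumber` JSON-RPC call, and the provider names and numbers here are invented.

```python
# Hypothetical check: flag providers whose reported Bor block height lags
# the best-known height by more than a tolerance (in blocks).
def find_lagging_providers(heights: dict, tolerance: int = 10) -> list:
    """heights maps provider name -> latest block number it reports."""
    best = max(heights.values())
    return sorted(name for name, h in heights.items() if best - h > tolerance)

# Made-up numbers resembling a sync inconsistency across providers:
reported = {"provider-a": 74_500_120, "provider-b": 74_500_118, "provider-c": 74_498_000}
print(find_lagging_providers(reported))  # ['provider-c']
```

A lagging provider in this sense is out of sync, not proof the chain is down — exactly the distinction users missed during the incident.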
Polymarket, a major prediction market platform built on Polygon, briefly showed error messages during the downtime, further adding to the perception that funds or trades were stuck.
Polygon’s team later clarified that while validator data and checkpoint information were temporarily inaccessible, the chain itself never stopped processing transactions.
In their words, the situation “triggered a false alarm” due to…