3,098 commits this week Mar 24, 2026 - Mar 31, 2026
fix(ouroboros): close connection on rollback point not found instead of restarting chainsync (#1774)
When a peer sends RollBackward to a point not in the local chain, the
chainsync resync handler previously attempted to restart chainsync on
the same connection. This fails because:

1. The peer's chainsync protocol state has moved past the intersect
   phase — it sends MsgRollForward while dingo is still in the Intersect state
2. buildDefaultChainsyncIntersectPoints calls GetConnectionById which
   returns nil if the connection was cleaned up during the restart
3. The restart loop repeats indefinitely, preventing any chain progress

This scenario commonly occurs after a pipeline stall: the node falls
behind, the peer's chain advances, and when chainsync resumes the
peer's rollback point is no longer in the local blob store.

Close the connection immediately (like plateau and local rollback
handlers already do) and let peer governance reconnect with a fresh
bearer and updated intersect points. The new connection starts a clean
FindIntersect from the current ledger tip, which succeeds because the
tip is on the canonical chain.

Signed-off-by: wcatz <[email protected]>
Replace Introduction page with System Overview
The previous Introduction page listed the three components (Consensus,
Storage, Mempool) as separate entities, but this was misleading — in the
code, "Consensus Protocol" does not exist as a distinct component.
Blocks flow directly from the network to ChainDB, and chain selection
lives inside ChainDB, not in a separate protocol layer.

The new System Overview:
- Opens with the consensus problem and why this layer exists
- Adds a C4 Context diagram showing consensus in relation to
  cardano-node, the network layer, and the ledger layer
- Names ChainDB (storage + chain selection) and mempool as the
  main internal components without implying a separate protocol layer
- Covers era evolution (Byron through Conway) with glossary links
- Describes the code organization: polymorphic core in
  ouroboros-consensus vs Cardano instantiations in
  ouroboros-consensus-cardano
- Links to existing deeper pages (Design Goals, Data Flow,
  Ledger Interaction, Queries, Node Tasks)
chore(deps): bump codecov/codecov-action from 5.5.3 to 6.0.0 (#239)
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 5.5.3 to 6.0.0.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/main/CHANGELOG.md)
- [Commits](https://github.com/codecov/codecov-action/compare/1af58845a975a7985b0beb0cbe6fbbb71a41dbad...57e3a136b779b570ffcdbf80b3bdc90e7fab3de2)

---
updated-dependencies:
- dependency-name: codecov/codecov-action
  dependency-version: 6.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
fix(database): wire cache config through to database layer (#1773)
The CBOR cache configuration (BlockLRUEntries, HotUtxoEntries,
HotTxEntries) was parsed from config/env vars but never passed to
the database.Config when creating the database. This caused all cache
sizes to default to zero, effectively disabling the block LRU cache
and using minimal hot cache sizes regardless of configuration.

Add WithCacheConfig option to the node Config and pass the values
through to database.CborCacheConfig when initializing the database
in node.go. Wire the config.Cache fields from internal/config through
to the dingo.New() call in internal/node/node.go.

Evidence: all 4 nodes showed dingo_cbor_cache_block_lru_hits_total=0
despite DINGO_CACHE_BLOCK_LRU_ENTRIES=5000 being set.

Signed-off-by: wcatz <[email protected]>
fix(database): address review issues on size metrics
- Replace MustRegister with safe Register to avoid panic on duplicate
  metric registration when multiple Database instances share a registry
- Add lifecycle stop/done channels so the metrics goroutine exits on
  Database.Close() instead of leaking
- Return 0 from SQLite DiskSize() in in-memory mode (empty dataDir)

Signed-off-by: wcatz <[email protected]>
feat(metrics): add database disk size metrics
Add dingo_database_size_bytes gauge with store label ("blob" or
"metadata") to expose on-disk database sizes via prometheus metrics.

- Add DiskSize() method to BlobStore and MetadataStore interfaces
- Badger: returns DB.Size() (LSM + vlog)
- SQLite: returns PRAGMA page_count * page_size
- Cloud/remote stores (S3, GCS, MySQL, Postgres): return 0
- Bark blob store: delegates to upstream
- Metrics update every 60 seconds via background goroutine

Signed-off-by: wcatz <[email protected]>
Test backwards compatibility of snapshots
The roundtrip tests only decode values that have been encoded at the
current version. Similarly, the golden file tests only check that our
serialisation doesn't change unexpectedly. This is useful, but so far
nothing tested that we can actually *decode* values encoded at older versions.

We should at least have tests for deserialisation of old versions of
specific types that changed between versions. This commit adds the first
such test.