Input Output / ouroboros-leios-sim
[Hourly commit-activity chart, Apr 24 - May 01; largest buckets: 68 commits (Apr 30, 2-3 PM) and 25 (Apr 25, 8-9 PM)]
128 commits this week (Apr 24, 2026 - May 01, 2026)
Allow topology selection in voting benchmark script
Accept topology as leafname (resolved in data/simulation/pseudo-mainnet/), relative path, or absolute path. Defaults to topology-v2-cip.yaml. Auto-compute vote thresholds from topology stake distribution at configurable quorum fraction (QUORUM_FRACTION, default 60%). Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
Raise default quorum to 75% and stake fraction to 99%
Aligns all three committee modes to a consistent 75% quorum (wfa-ls was already 450/600 = 75%) and captures nearly all block-producing nodes in the top-stake-fraction committee. Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
Add no-caps parameter file and baseline voting results
parameters/no-caps.yaml disables all three memory caps for diagnostic
experiments (peer backlog, generated backlog, TX max age).
voting_results.csv captures the full 4-way matrix at 0.200/wfa-ls:
{turbo,sequential} × {caps,nocaps} × seeds 0-4. Key findings:
- Seed 4 is the stress seed: caps cause 40% uncertified (seq) vs 17%
without caps. Root cause is a race in propagate_tx where
acknowledge_tx consumes the one-shot missing_txs trigger before
PeerBacklogFull drops the TX.
- Seeds 1,3 are cap-insensitive (well-spaced RBs).
- No-caps converges all seeds to 16-22% uncertified.
- Stale rows (pre-rayon-fix, pre-seed-wiring) labelled as such.
Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
Set wfa-ls VRF trials to 600 (480 persistent + 120 non-persistent)
The experiment config's vote-generation-probability: 600 was being ignored because the simulation uses persistent + non-persistent probabilities directly (defaulting to 400 + 100 = 500). Override both in the wfa-ls mode to get the intended 600 total with 80:20 split, giving a 75% quorum at the existing threshold of 450. Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
Convert HashMap/HashSet to BTreeMap/BTreeSet in linear_leios node state
Eliminates non-deterministic iteration order in NodeLeiosState, LedgerState, and LinearLeiosNode.txs. All key types already implement Ord. At typical map sizes (5-50 entries for leios state, 100s-1000s for txs) BTreeMap has negligible CPU overhead and slightly lower memory usage than HashMap. The praos state (NodePraosState) was already using BTreeMap; this brings the leios state into line. Remaining HashSet usages are pure membership tests (contains/insert) that do not affect determinism. Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
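The determinism property being relied on here can be shown in a few lines: `BTreeMap` always iterates in key order, whereas `HashMap`'s order depends on a per-process random hash seed, so any logic that walks the map behaves identically across runs only with the former.

```rust
use std::collections::BTreeMap;

fn main() {
    // Insertion order is 42, 7, 19 — but iteration is always in key order,
    // independent of any per-process hash randomization.
    let mut state: BTreeMap<u64, &str> = BTreeMap::new();
    state.insert(42, "eb");
    state.insert(7, "rb");
    state.insert(19, "vote");

    let keys: Vec<u64> = state.keys().copied().collect();
    assert_eq!(keys, vec![7, 19, 42]); // key order, not insertion order
}
```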
Sort event stream by (timestamp, node_id) for deterministic jsonl output
Multi-shard execution emits tracking events from concurrent shard threads via an mpsc channel, so events at the same virtual timestamp can arrive in arbitrary order. Buffer events in timestamp buckets (BTreeMap) with a 1-second flush window, sorting each bucket by originating node ID before writing. This makes the jsonl event stream byte-identical across runs without affecting simulation logic. Also adds Event::node_id() to extract the originating node from each event variant for sorting purposes. Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
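A minimal sketch of the bucket-and-sort flush described above. The `Event` type here is a hypothetical stand-in for the sim's tracking events, and the 1-second flush window is elided — this shows only the (timestamp, node_id) ordering:

```rust
use std::collections::BTreeMap;

/// Hypothetical stand-in: (originating node_id, payload).
type Event = (u32, &'static str);

/// Buffer events per virtual timestamp, then flush each bucket sorted by
/// originating node ID. BTreeMap yields buckets in timestamp order, so the
/// output is identical no matter how shard threads interleaved on arrival.
fn flush_sorted(incoming: Vec<(u64, Event)>) -> Vec<(u64, Event)> {
    let mut buckets: BTreeMap<u64, Vec<Event>> = BTreeMap::new();
    for (ts, ev) in incoming {
        buckets.entry(ts).or_default().push(ev);
    }
    let mut out = Vec::new();
    for (ts, mut evs) in buckets {
        evs.sort_by_key(|&(node_id, _)| node_id); // tie-break within a timestamp
        out.extend(evs.into_iter().map(|ev| (ts, ev)));
    }
    out
}

fn main() {
    // Two shards delivered slot-5 events out of node order over the channel.
    let arrival = vec![(5, (9, "vote")), (3, (1, "rb")), (5, (2, "eb"))];
    let sorted = flush_sorted(arrival);
    assert_eq!(sorted, vec![(3, (1, "rb")), (5, (2, "eb")), (5, (9, "vote"))]);
}
```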
Retain EB-critical TXs on peer backlog overflow
Problem
-------
When a node's peer TX backlog hits its cap (e.g. 10,000), incoming TXs
are silently dropped from self.txs. If a dropped TX is referenced by a
pending Endorser Block, the EB's validation scan (try_validating_eb)
finds has_tx() = false and the EB is never marked all_txs_seen. The EB
then misses its vote window and is orphaned by the next Ranking Block
(WrongEB). Because the TX is never re-offered by peers, the one-shot
missing_txs trigger — already consumed by acknowledge_tx — cannot
re-fire, leaving the EB permanently stuck.
Under Poisson-clustered RB production (e.g. seed 4 at 0.200 MB/s), this
cascade produced 48 EBs with 19 uncertified (40%), 23M peer TX drops,
and a mean of only 348 votes/EB (well below the 450 quorum).
Fix
---
Two changes in propagate_tx():
1. Move the mempool insertion check (try_add_to_mempool) BEFORE
acknowledge_tx, so that missing_txs has not yet been consumed at the
point where we decide whether to drop.
2. When PeerBacklogFull fires, check whether the TX is referenced by a
pending EB (self.leios.missing_txs.contains_key). If yes, keep the
TX in self.txs (skip the backlog, but preserve has_tx = true) and
fall through to acknowledge_tx normally. If no, drop as before.
This retains only EB-critical TXs — bounded by (pending_EBs × EB_size),
typically a few thousand entries and ~3 MB of HashMap overhead per node.
Non-critical TXs are still dropped, preserving the memory cap's purpose.
Effect on seed 4 sequential 0.200/wfa-ls (worst-case seed)
-----------------------------------------------------------
                 EBs  uncert  mean votes/EB  WrongEB  peer TX drops  peak RSS
caps (before):    48      19            348     1138          23.2M    ~20 GB
caps-retain:      45       8            470     1330           5.9M    ~24 GB
nocaps (ref):     46       8            473     1516              0    ~35 GB
Uncertified EBs: 19 → 8 (40% → 18%)
Mean votes/EB: 348 → 470 (near nocaps 473)
Peer TX drops: 23.2M → 5.9M (−74%)
Peak RSS: ~20 → ~24 GB (+20%, well below nocaps ~35 GB)
Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
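A minimal sketch of the retention branch described in the Fix section. The field names (`txs`, `missing_txs`, `peer_backlog`) mirror the commit message, but the types here are hypothetical stand-ins, and the mempool/acknowledge_tx ordering change (point 1) is omitted:

```rust
use std::collections::{BTreeMap, BTreeSet};

/// Hypothetical stand-in for the node state referenced above.
struct Node {
    txs: BTreeSet<u64>,                       // TXs this node has seen (has_tx)
    peer_backlog: Vec<u64>,                   // capped receive backlog
    backlog_cap: usize,
    missing_txs: BTreeMap<u64, &'static str>, // TX id -> pending EB that needs it
}

impl Node {
    fn propagate_tx(&mut self, tx: u64) {
        if self.peer_backlog.len() >= self.backlog_cap {
            // PeerBacklogFull: keep the TX only if a pending EB references
            // it, so try_validating_eb can still observe has_tx() = true.
            if self.missing_txs.contains_key(&tx) {
                self.txs.insert(tx); // retain, but skip the backlog
            }
            return; // non-critical TXs are dropped, preserving the cap
        }
        self.peer_backlog.push(tx);
        self.txs.insert(tx);
    }
}

fn main() {
    let mut node = Node {
        txs: BTreeSet::new(),
        peer_backlog: vec![1, 2], // already at the cap
        backlog_cap: 2,
        missing_txs: BTreeMap::from([(99, "eb-7")]),
    };
    node.propagate_tx(99); // EB-critical: retained despite full backlog
    node.propagate_tx(50); // not referenced by any pending EB: dropped
    assert!(node.txs.contains(&99));
    assert!(!node.txs.contains(&50));
}
```

The retained set is bounded by what pending EBs reference, which is why memory grows by only a few MB per node rather than reverting to no-caps behaviour.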
Allow filtering by throughput and committee mode in benchmark script
Accept optional second and third args for comma-separated throughput and mode filters (use "-" for all). Append to existing results CSV when filtering instead of overwriting. Examples:
./scripts/cip-voting-options.sh - 0.250 everyone
./scripts/cip-voting-options.sh - 0.250,0.300 wfa-ls,everyone
Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
Add network queue stats instrumentation
Expose per-shard connection queue statistics (total/active connections, queued messages, queued bytes) via a shared NetworkStatsCollector. Each shard's sequential engine updates its counters at slot boundaries; the node's existing log_memory_stats reads the aggregate. Output appears every 60 slots alongside Memory stats, covering all shards. Initial profiling showed zero queued messages in turbo mode (zero-latency clusters bypass bandwidth queues), ruling out network queues as the cause of the ~40 GB RSS vs ~20 GB tracked-state gap. Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
De-RNG Linear Leios completely: withhold attacker + TxGeneratorCore
Migrate every remaining stateful-RNG use reachable from Linear Leios:
- linear_leios.rs generate_withheld_txs: `self.rng.random_bool(p)` is
replaced with `rng.draw_bool(node, slot, DrawSite::WithholdDecision,
p)`. The distribution sample for `txs_to_generate` and the per-tx
`new_tx` body generation use `Rng::seeded_chacha(node, slot, site)`
to produce one-shot ChaChaRngs seeded from context — this keeps the
rand_distr / `new_tx` machinery unchanged while removing the
cross-call stateful coupling.
- tx.rs TxGeneratorCore: replaces its `ChaChaRng` with the stateless
`SimRng` plus a monotonic `next_tx_idx: u64`. Each TX is generated
from a one-shot ChaChaRng seeded from
`("tx_generator", tx_idx)` — so the generated TX stream is a pure
function of the master seed regardless of per-node or network-timing
behaviour. Propagates the `SimRng` type through TransactionProducer
and its callers in sim/sequential.rs and sharding/shard.rs; the
master-RNG `.next_u64()` consumption is preserved to keep any
remaining downstream draws on stracciatella/leios variants seeded
the same way they were.
- Drops `rng: ChaChaRng` field from `LinearLeiosNode`. The NodeImpl
trait signature still takes a `ChaChaRng` for the other variants, so
LinearLeiosNode::new accepts it as `_rng` and discards.
New Rng methods: `seeded_chacha(node, slot, site)` for context-tied
one-shot ChaChaRng seeding, and `seeded_chacha_from<K: Hash>(&K)` for
sim-wide (non-node-tied) draws like the TX generator.
All 54 sim-core tests pass; clippy clean for Linear Leios and
TxGeneratorCore.
Stracciatella and full-Leios variants retain their stateful `self.rng`
for now — they build fine but are out of scope for the current
determinism investigation.
Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
Add stateless context-derived RNG primitive; migrate VRF/lottery
The simulator's stateful ChaChaRng-per-node design is fragile: RNG
consumption count per node depends on control flow (e.g., "did this
node receive an EB in time to vote"), which depends on network timing.
Any microsecond-scale timing drift changes the number of RNG draws on a
node, desynchronising its RNG state, and every downstream random
decision on that node diverges — a macro-amplifier that turns upstream
timing blips into EB-scale outcome drift.
It's also unrealistic. Cardano's real VRF is stateless per slot:
vrf_output = f(key, nonce || slot) is a pure function that doesn't
"advance" with each use.
Introduce a stateless oracle: every random draw becomes a pure function
of (global_seed, context). The new `sim-core/src/rng` module provides:
- DrawSite enum naming every call site (RbLottery, VoteVrf, MempoolSwap,
TxGen{Node,Body,Frequency}, TxConflict, Withhold*, test/lottery site
variants). Discriminant plus variant fields are hashed into the
context, so distinct call sites never collide.
- Rng::draw_{u64,range,f64_01,bool}, all pure functions of
(seed, node, slot, site).
- SplitMixHasher — portable deterministic hasher: endian-pinned writes
(to_le_bytes in every write_uNN), splitmix64-style mixing, splitmix
finalizer. Not cryptographic; fine for a sim (no adversarial inputs)
and ~ns per draw.
Ten unit tests in rng::tests cover: determinism, different-seed
differentiation, 500-context collision check, 600-trial-index
distinctness, site-variant-on-same-(node,slot) distinctness, range/
probability sanity, endian-independence, and golden vectors pinning the
hash output (tested to catch accidental hash-function changes).
Migrate the VRF/lottery call paths for all three node variants:
- sim/lottery.rs: LotteryConfig::run signature changes from
`(kind, success_rate, &mut ChaChaRng)` to
`(kind, success_rate, &Rng, NodeId, slot, DrawSite)`. MockLotteryResults
(tests) unchanged: still keyed by LotteryKind.
- sim/linear_leios.rs: run_vrf threads slot+site through; RB lottery
uses DrawSite::RbLottery; vote VRF enumerates its (up to) 600 trials
as DrawSite::VoteVrf { eb_id, trial }.
- sim/stracciatella.rs: inline run_vrf (bypasses LotteryConfig) migrated
similarly. DrawSites: RbLottery, EbLottery{pipeline, trial},
VoteVrfPipeline{pipeline, trial}.
- sim/leios.rs: inline run_vrf migrated. DrawSites: IbLottery, EbLottery,
VoteVrfPipeline, RbLottery.
Nodes still hold a ChaChaRng for mempool shuffle, withhold-TX attack,
TxGeneratorCore, and new_tx body randomness. These are migrated in
follow-up phases. The critical VRF path — the macro-amplifier that
cascades network-timing non-determinism into per-node RNG-state
desynchronisation — is now structurally deterministic by construction.
All 51 sim-core tests pass.
Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
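The core idea — a pure draw function of (seed, context) with no stored RNG state — can be sketched with the standard splitmix64 finalizer the commit names. This is an illustration, not the module's actual code: the real `SplitMixHasher` mixes endian-pinned bytes and the real context is a `DrawSite` enum, where here a bare `u64` stands in for the site:

```rust
/// splitmix64 finalizer: portable, non-cryptographic mixing with the
/// standard constants. Each step is invertible, so the whole function is
/// a bijection on u64 (distinct inputs never collide).
fn splitmix64(mut x: u64) -> u64 {
    x = x.wrapping_add(0x9E37_79B9_7F4A_7C15);
    x = (x ^ (x >> 30)).wrapping_mul(0xBF58_476D_1CE4_E5B9);
    x = (x ^ (x >> 27)).wrapping_mul(0x94D0_49BB_1331_11EB);
    x ^ (x >> 31)
}

/// A stateless draw: a pure function of (seed, node, slot, site). No
/// counter advances, so the number of draws made elsewhere can never
/// desynchronise this one.
fn draw_u64(seed: u64, node: u64, slot: u64, site: u64) -> u64 {
    let mut h = seed;
    for word in [node, slot, site] {
        h = splitmix64(h ^ word); // mixing whole u64s keeps it endian-independent
    }
    h
}

/// Probability draw on top of draw_u64, mapping the output to [0, 1).
fn draw_bool(seed: u64, node: u64, slot: u64, site: u64, p: f64) -> bool {
    (draw_u64(seed, node, slot, site) as f64 / u64::MAX as f64) < p
}

fn main() {
    // Same context, same answer — the determinism-by-construction property.
    assert_eq!(draw_u64(1, 7, 100, 3), draw_u64(1, 7, 100, 3));
    // Different sites on the same (node, slot) give distinct draws.
    assert_ne!(draw_u64(1, 7, 100, 3), draw_u64(1, 7, 100, 4));
    assert!(!draw_bool(1, 7, 100, 3, 0.0)); // p = 0 never fires
}
```

This mirrors the stateless-VRF analogy in the commit: like vrf_output = f(key, nonce || slot), the draw does not "advance" with use.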
Convert cip-voting-options.sh to named params, add engine selector
Positional args had grown unwieldy. Rewrite with flag parsing: -t/--topology, -T/--throughput, -m/--mode, -e/--engine, -s/--slots, --quorum-fraction, --stake-fraction.
Add an `--engine` selector that writes an on-the-fly override file:
- actor — default (tokio async), single-shard, non-deterministic
- sequential — single-shard sequential DES (deterministic)
- turbo — sequential DES with 6 shards (non-deterministic, fast)
Add `engine` as a CSV column so runs from different engines can live in the same file and be pivoted cleanly.
Add determinism-run.sh / determinism-check.sh as a simple 3-run harness for spot-checking single-shard-sequential determinism against the 0.200/wfa-ls scenario. determinism-run.sh runs the benchmark 3× and writes progress to /tmp/det-run-state; determinism-check.sh prints a concise status summary (safe to poll from /loop or cron).
Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
Remove || true from benchmark script now that shutdown is clean
The || true was masking simulation failures (including OOM kills). The shutdown panic it originally worked around was fixed in e089975c4, and shard panics are now caught cleanly via catch_unwind in 8354af475. Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
Add RSS to poll-sim.sh process status line
Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
Add process RSS to memory stats and simplify praos.blocks instrumentation
Read VmRSS from /proc/self/status and log it alongside estimated totals so we can directly compare instrumented vs actual memory usage. Simplify praos.blocks stats back to basic entry count and tx_refs — the detailed unique/endorse breakdown showed praos.blocks is not a significant memory consumer. Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
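The VmRSS read is a small parse over /proc/self/status. A sketch under the assumption that the instrumentation extracts the kB value from the "VmRSS:" line (the function name here is hypothetical):

```rust
/// Parse the resident-set size, in kB, out of /proc/self/status content.
/// The kernel formats the line as "VmRSS:    123456 kB".
fn parse_vm_rss_kb(status: &str) -> Option<u64> {
    status
        .lines()
        .find(|line| line.starts_with("VmRSS:"))? // only present on Linux
        .split_whitespace()
        .nth(1)?          // the numeric field between the label and "kB"
        .parse()
        .ok()
}

fn main() {
    let sample = "VmPeak:\t 9000 kB\nVmRSS:\t 123456 kB\nThreads:\t8\n";
    assert_eq!(parse_vm_rss_kb(sample), Some(123456));
    // The live read on Linux would be:
    // let status = std::fs::read_to_string("/proc/self/status").ok()?;
}
```

Comparing this value against the instrumented per-structure totals is what exposes untracked allocations (allocator overhead, channel buffers, and the like).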
1500-node sweep results: 23 completed runs (NA × 3 modes + Plutus partial)
Commits the small text artifacts (case.csv, config.yaml, summary.txt, time.txt, done marker) for every topology-v2-1500 seed-0 run that has finished its full pipeline. Excludes the bulky outputs (sim.log.gz, *.csv.gz, stdout, stderr) per the existing .gitignore.
Coverage at this snapshot:
- All 5 NA throughputs (0.150-0.350) × all 3 voting modes = 15 runs
- All 6 wfa-ls Plutus levels (1000-50000) = 6 runs
- everyone Plutus 1000 and 2000 = 2 runs (sweep still in progress)
Plus the `done` marker for the canonical NA,0.350/everyone/topology-v2 baseline (the run formerly known as seed-0.no-limits, promoted to seed-0 in the cleanup commit).
Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
run-all-voting-modes: continue across modes on partial failure
run-sweep.sh now exits 1 when any experiment fails (continue-on-failure logic), which under set -eo pipefail aborted the outer loop after the first mode with any OOM. Wrap the inner call so failures in one mode don't lose the remaining modes; report failures at the end. Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
Fix JoinHandle panic on clean shutdown
When the simulation completes before the event monitor, the select! macro joins the monitor future. Awaiting it again on line 151 panics with "JoinHandle polled after completion". Return early from the monitor branch to avoid the double-await. Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
Document determinism guarantees and benchmark scripts in CLAUDE.md
Add Determinism section covering all sources of non-determinism that were found and fixed (HashMap iteration, shard assignment, TX ID counters, rayon collect order, event stream sorting), what was tested and found unnecessary (barrier synchronization), and what does not affect determinism (CpuTaskQueue HashMap, config HashSets). Add Benchmark Scripts section documenting cip-voting-options.sh, poll-sim.sh, and the determinism verification methodology. Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
Add 1500-node expanded topology resampled from v2-cip
Generated via generate_topology.py from topology-v2-cip.yaml (750 nodes). Preserves degree distribution (relay med=35), latency profile (p95=305ms, max=575ms), BP/relay ratio (432/1068), and stake total.
Source (v2-cip): 750 nodes, 19314 links, latency med=30.1ms
Expanded (1500): 1500 nodes, 38511 links, latency med=25.4ms
Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
Fix generate-topology to produce bidirectional links
The sim's connectivity BFS traverses consumer edges (reverse of producers). Unidirectional producer links left nodes unreachable, causing "Graph must be fully connected!" errors. Symmetrize all links so every A→B producer also creates B→A. Also rename generate_topology.py → generate-topology.py and summarize_topology.py → summarize-topology.py for consistency with the other shell scripts. Re-generated topology-v2-expanded-1500.yaml (59,268 links, fully connected). Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
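The symmetrization step itself is simple enough to show. A sketch in Rust for illustration (the actual fix lives in the Python generator script, and real links also carry latency attributes, which a set of bare pairs ignores):

```rust
use std::collections::BTreeSet;

/// Symmetrize a producer-edge list: for every A→B, ensure B→A also exists,
/// so a BFS over consumer (reverse) edges can reach every node.
fn symmetrize(links: &[(u32, u32)]) -> BTreeSet<(u32, u32)> {
    let mut out = BTreeSet::new();
    for &(a, b) in links {
        out.insert((a, b));
        out.insert((b, a)); // the reverse edge the connectivity BFS needs
    }
    out
}

fn main() {
    // A chain 0→1→2 of one-way producer links becomes fully bidirectional.
    let links = symmetrize(&[(0, 1), (1, 2)]);
    assert!(links.contains(&(1, 0)) && links.contains(&(2, 1)));
    assert_eq!(links.len(), 4);
}
```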
Add configurable committee selection algorithms for linear leios
Add committee-selection-algorithm config with three modes:
- wfa-ls (default): existing VRF lottery matching CIP-0164 wFA+LS
- everyone: every node votes unconditionally (1 vote each)
- top-stake-fraction: nodes covering the top N% of cumulative stake vote
This enables traffic analysis comparing the CIP's VRF-based scheme against simpler alternatives. Vote bundle sizes, CPU times, diffusion, and threshold checking are unchanged — only the selection mechanism differs. Includes a benchmark script (scripts/cip-voting-options.sh) that runs the CIP topology under turbo mode across all three committee modes. Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
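The top-stake-fraction mode can be sketched as a greedy cumulative-stake cut. This is an illustration under the commit's description only; the tie-breaking and exact boundary rule here are guesses, not the sim's actual code:

```rust
/// Select the committee covering the top `fraction` of cumulative stake:
/// sort by stake descending and take nodes until the target is covered.
/// Input is (node_id, stake); tie-break by node id for determinism.
fn top_stake_committee(stakes: &[(u32, u64)], fraction: f64) -> Vec<u32> {
    let total: u64 = stakes.iter().map(|&(_, s)| s).sum();
    let mut sorted: Vec<(u32, u64)> = stakes.to_vec();
    sorted.sort_by(|a, b| b.1.cmp(&a.1).then(a.0.cmp(&b.0)));

    let mut cumulative = 0u64;
    let mut committee = Vec::new();
    for (id, stake) in sorted {
        if (cumulative as f64) >= fraction * total as f64 {
            break; // target fraction already covered
        }
        cumulative += stake;
        committee.push(id);
    }
    committee
}

fn main() {
    // Node 2 alone holds 60% of stake: at fraction 0.5 it is the committee.
    let stakes = [(0, 10), (1, 30), (2, 60)];
    assert_eq!(top_stake_committee(&stakes, 0.5), vec![2]);
    // At 0.99 (the raised stake fraction), all three nodes vote.
    assert_eq!(top_stake_committee(&stakes, 0.99), vec![2, 1, 0]);
}
```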
Bump virt ulimit to 256G; revert pigz -1 -> pigz -9
The 96G ulimit -v killed 1500-node sims at slot 313 with RSS only at 58G — Rust+tokio allocator reserves more virtual address space on larger topologies than physical commit alone implies. The cap was sized for 750 nodes; 1500 needs more headroom. 256G is the board's max physical RAM ceiling; actual commit is bounded by RAM + swap. Reverts pigz -1 to pigz -9 — the faster compressor did not solve the end-of-sim EventMonitor spike (still ~8 GB from 11M events flushed at once, regardless of compressor speed). The bottleneck is the unbounded mpsc channel, not compression. Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
Fix TX generation over-rate from f64 truncation
`TxGeneratorCore::generate` computed inter-tx delay as `config.frequency_ms.sample() as u64 * shard_count as u64` and passed it to `Duration::from_millis`. The `as u64` cast truncated each sample: a configured 7.5 ms became 7 ms, producing TXs ~7% faster than requested. For the 0.200/wfa-ls single-shard run this meant 128,572 TXs over 900s (~214 KB/s) instead of the intended ~120,000 TXs (~200 KB/s). Only affects configurations with sub-ms precision and no batching. Turbo is largely unaffected (1 ms resolution, 10 ms tx-batch-window collapses the fractional delay anyway). Switch to `Duration::from_secs_f64`, preserving sub-millisecond precision via nanosecond-resolution Duration. Clamp to `.max(0.0)` so distributions that can sample negative (e.g., Normal) keep the old "treat negative as zero delay" behaviour rather than panicking in `from_secs_f64`. Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
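The truncation and its fix are easy to reproduce with std alone. A sketch of just the conversion (the shard_count multiplication and the sampling distribution are omitted):

```rust
use std::time::Duration;

fn main() {
    let sample_ms: f64 = 7.5; // a sampled inter-tx delay, in milliseconds

    // Before: `as u64` truncates 7.5 ms to 7 ms — ~7% too fast here.
    let truncated = Duration::from_millis(sample_ms as u64);
    assert_eq!(truncated.as_nanos(), 7_000_000);

    // After: from_secs_f64 keeps sub-millisecond precision via the
    // nanosecond-resolution Duration.
    let precise = Duration::from_secs_f64((sample_ms / 1000.0).max(0.0));
    assert_eq!(precise.as_nanos(), 7_500_000);

    // The .max(0.0) clamp preserves "negative sample = zero delay" for
    // distributions like Normal, instead of panicking in from_secs_f64.
    assert_eq!(Duration::from_secs_f64((-3.0f64).max(0.0)).as_nanos(), 0);
}
```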