[Infra Series] Kaia v2.1.0 – Storage compression for block data (new default + safe compaction)

As part of our “What’s new in Kaia v2.1.0” infra series, we’ve published a deep dive on storage compression for block data.

What changed in v2.1.0

  • Kaia now ships a compression flag for block-related LevelDB tables, enabled by default, so that fresh v2.1.0 nodes benefit from compression without extra tuning.

  • For nodes with a large amount of historical data, there is a safe compaction process that reclaims disk space while the node continues to sync and serve RPC requests.

The article focuses on:

  • Which tables are compressed (and why those were chosen)

  • How the new default flag behaves for new v2.1.0 deployments

  • How to configure and verify the compression setting

  • How to run compaction on existing nodes and what to watch (disk I/O, time, monitoring)
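One simple, node-agnostic way to verify that compression (and later compaction) actually helped is to measure the on-disk size of the block-data directory before and after. A minimal sketch; the directory path in the comments is illustrative, not a confirmed Kaia layout:

```python
import os

def dir_size_bytes(path: str) -> int:
    """Total size of all regular files under `path` (e.g. a chaindata dir)."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            fp = os.path.join(root, name)
            if os.path.isfile(fp):
                total += os.path.getsize(fp)
    return total

# Example usage (path is hypothetical; substitute your node's data directory):
# before = dir_size_bytes("/var/kaia/data/klay/chaindata")
# ... run the recommended compaction flow ...
# after = dir_size_bytes("/var/kaia/data/klay/chaindata")
# print(f"reclaimed {(before - after) / 1e9:.1f} GB")
```

Sampling the size at a quiet moment gives a cleaner before/after comparison, since the directory keeps growing while the node syncs.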

What you should consider doing

If you upgraded to v2.1.0:

  • Confirm your node is using the intended compression setting for block data.

  • For long-running mainnet nodes, plan a compaction window using the recommended procedure to reclaim disk space without stopping sync.
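As a sketch of what triggering compaction over JSON-RPC can look like: the method name `debug_chaindbCompact` below is an assumption based on geth-style debug APIs that geth-derived nodes commonly expose, and the port is a placeholder; confirm the exact method, namespace, and endpoint against the v2.1.0 docs and the linked article before using it:

```python
import json
from urllib import request

def chaindb_compact_request(rpc_url: str = "http://localhost:8551"):
    """Build a JSON-RPC request asking the node to compact its chain DB.

    NOTE: `debug_chaindbCompact` is an assumed, geth-style method name,
    not a confirmed Kaia API; verify it in the official docs first.
    """
    payload = {
        "jsonrpc": "2.0",
        "method": "debug_chaindbCompact",
        "params": [],
        "id": 1,
    }
    req = request.Request(
        rpc_url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    return payload, req

# To actually trigger compaction (long-running; watch disk I/O while it runs):
# _, req = chaindb_compact_request()
# with request.urlopen(req) as resp:
#     print(resp.read().decode())
```

Compaction is I/O-heavy, so schedule it for a low-traffic window and keep an eye on disk throughput and free space while it runs.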

:open_book: Full article: https://medium.com/kaiachain/cutting-blockchain-storage-in-half-8bd5a8a81aff


We’ll keep adding to this infra series with more v2.1.0 under-the-hood changes that matter to node operators and infra providers.


Follow-up from last week’s storage compression post – one concrete scenario where this change really matters:

  • If you’re running a long-lived mainnet full node on a single NVMe, enabling the v2.1.0 compression default and running the recommended compaction flow can roughly halve the disk taken up by historical block data.
  • Depending on your workload and history depth, that can be the difference between staying on your current hardware and having to plan a more complex sharded or upgraded setup.

Exact savings will vary by node, but the pattern is consistent: less disk pressure and a cleaner path to “infra spring cleaning” without downtime, since compaction runs while the node continues to sync.
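To make the back-of-the-envelope math above concrete, here is a small helper for estimating post-compaction disk usage. The 0.5 default reflects the "roughly halve" figure; the function and the sample numbers are purely illustrative, so measure your own node rather than relying on the estimate:

```python
def projected_disk_gb(block_data_gb: float, other_data_gb: float,
                      compression_ratio: float = 0.5) -> float:
    """Rough estimate of total disk usage after compaction.

    `compression_ratio` is the fraction of block-data size remaining
    after compression; 0.5 matches the "roughly halve" figure above,
    but real savings vary by node.
    """
    return block_data_gb * compression_ratio + other_data_gb

# E.g. a hypothetical node with 1200 GB of historical block data and
# 300 GB of other state:
# projected_disk_gb(1200, 300)  # -> 900.0 GB at a 0.5 ratio
```

Plugging in your own `du` figures for block data versus everything else gives a quick sanity check on whether your current disk has enough headroom to skip a hardware change.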

If you’re considering running your own infra (or migrating off a hosted RPC) and want a sanity check on your setup, reply here with your use case and constraints – we can point you to relevant docs and operational tips.
