High Reliability Data Loading on ClickHouse
Altinity Engineering Webinar
Presenter Bio and Altinity Introduction
The #1 enterprise ClickHouse provider. Now offering Altinity.Cloud
Major committer and community sponsor for ClickHouse in US/EU
Robert Hodges - Altinity CEO
30+ years on DBMS plus
virtualization and security.
ClickHouse is DBMS #20
Alexander Zaitsev - Altinity CTO
Altinity founder with decades
of expertise on petabyte-scale
analytic systems
Ingestion Pipeline
(Diagram, built up over several slides: Event Stream → INSERT → ClickHouse, where data passes through the OS page cache to HDD; the INSERT fans out to the target Table and its materialized views MV1, MV2, MV3; with a Distributed table it is further split across Shard1, Shard2, Shard3, each with a Replica)
Topics to discuss
● Performance
● Reliability
● Deduplication
INSERT under the hood (Event Stream → INSERT → ClickHouse):
• Data is parsed by rows and converted to in-memory columns
• Columns are split into partitions and parts (there can be several)
• Columns are sorted and the primary key is calculated
• Columns are compressed and written to disk into temporary dir(s)
• A single column may require 2-4 files in a part
• Once the part is ready, its temporary directory is renamed to the final one
General Insert Performance Considerations
Single INSERT has a lot of overhead, so:
● Use bigger blocks
● Do not insert too often
● Do not use too aggressive compression
● Pick partitioning wisely
● And:
○ INSERT close to ZooKeeper for replicated tables
○ Asynchronous is always faster (but less reliable)
Extra techniques to reduce overhead
● Buffer tables – collect data in memory, and flush once ready
● Polymorphic MergeTree parts – store small inserts more efficiently
Buffer tables
Engine=Buffer Engine=MergeTree
• Memory buffer
• Flush on size/time threshold
• SELECT FROM buffer_table
• Lost on hard restart
Buffer(database, table, num_layers, min_time, max_time, min_rows, max_rows, min_bytes, max_bytes)
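For illustration, a minimal sketch of the pattern (the `events` table name and the threshold values are assumptions, not from the slides):

-- Buffer table in front of an existing MergeTree table `events`;
-- inserts go to events_buffer and are flushed to `events` once a
-- time/rows/bytes threshold is reached.
CREATE TABLE events_buffer AS events
ENGINE = Buffer(default, events, 16, 10, 100, 10000, 1000000, 10000000, 100000000);
-- Clients INSERT INTO events_buffer; SELECTs on events_buffer see both
-- buffered and already-flushed rows. Buffered data is lost on a hard restart.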
Compact parts for MergeTree (20.3+)
(Diagram: “wide” parts (default) store .idx plus per-column .bin/.mrk2 files; “compact” parts (new) store .idx plus a single data.bin/data.mrk3)
Compact parts compared to wide parts
“compact” parts (new):
/var/lib/clickhouse/data/datasets/ontime_refc/2020_347_347_0/
.
..
checksums.txt
columns.txt
count.txt
data.bin
data.mrk3
minmax_Year.idx
partition.dat
primary.idx
8 files for 109 columns!

“wide” parts (default):
/var/lib/clickhouse/data/datasets/ontime_ref/2020_547_552_1/
.
..
ActualElapsedTime.bin
ActualElapsedTime.mrk2
AirlineID.bin
AirlineID.mrk2
AirTime.bin
AirTime.mrk2
ArrDel15.bin
ArrDel15.mrk2
ArrDelay.bin
…
224 files for 109 columns!
Overview of compact part design
● Single file but columnar inside
● Reduces file system overhead
● Useful for small frequent inserts
● Thresholds are controlled by merge_tree_settings:
○ min_bytes_for_wide_part
○ min_rows_for_wide_part
● min_bytes_for_wide_part = 10485760 by default since 20.8 (see the sketch below)
● In-memory parts with write-ahead-log in 20.6
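A minimal sketch of setting the thresholds per table (table, column names, and the row threshold are assumptions):

-- Parts below these thresholds are written in the compact format.
CREATE TABLE events_compact
(
    ts DateTime,
    user_id UInt64,
    value Float64
)
ENGINE = MergeTree
ORDER BY (user_id, ts)
SETTINGS min_bytes_for_wide_part = 10485760,
         min_rows_for_wide_part = 1000000;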
In-memory parts with write-ahead-log (20.6)
(Diagram: INSERTs first create in-memory parts, which are merged into compact parts and eventually into a wide part)
Related merge_tree_settings:
○ min_bytes_for_wide_part
○ min_rows_for_wide_part
○ min_bytes_for_compact_part
○ min_rows_for_compact_part
○ in_memory_parts_enable_wal
INSERT atomicity
User expectations:
● INSERT inserts all the data completely or aborts
● INSERT inserts into all dependent objects or aborts
● INSERT inserts into all distributed and replicated objects or aborts
No transactions in ClickHouse
How to Make INSERT atomic
(Diagram: INSERT → ClickHouse → Part, Part, Part)
How it works:
• Data is parsed and written in blocks (parts)
• Blocks are written when ready
• A partial insert is possible in case of failure
Need to ensure there is a single part on insert!
How to Make INSERT atomic
Important settings:
• max_insert_block_size = 1M rows – split insert into chunks
• max_block_size = 65K rows – split SELECT into chunks
• min_insert_block_size_rows = 1M rows – merge input into bigger chunks
• min_insert_block_size_bytes = 256MB – merge input into bigger chunks
• input_format_parallel_parsing = 1 – splits text input into chunks
• max_insert_threads = 1 – parallel INSERT/SELECT
(Diagram: both a plain INSERT and an INSERT ... SELECT via a temp table are split into blocks by max_insert_block_size, default 1M rows)
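As an illustration, a hedged sketch of coalescing the input into a single block so the INSERT produces one part (table names and values are assumptions, not from the slides):

-- One thread, one large block: the INSERT ... SELECT yields a single part
-- as long as the result fits the block-size limits and a single partition.
INSERT INTO events
SELECT * FROM staging_events
SETTINGS max_insert_threads = 1,
         min_insert_block_size_rows = 1000000,
         min_insert_block_size_bytes = 268435456;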
Durability Settings (20.10)
When to fsync?
— min_rows_to_fsync_after_merge
— min_compressed_bytes_to_fsync_after_merge
— min_compressed_bytes_to_fsync_after_fetch
— fsync_after_insert
— fsync_part_directory
— write_ahead_log_bytes_to_fsync
— write_ahead_log_interval_ms_to_fsync
— in_memory_parts_insert_sync
(Diagram: writes pass through the OS page cache before reaching the HDD, hence the fsync question)
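A hedged sketch of enabling the strictest per-table options (the table name is an assumption; these settings trade throughput for durability):

-- Force fsync of part data and part directories after every insert.
ALTER TABLE events
    MODIFY SETTING fsync_after_insert = 1,
                   fsync_part_directory = 1;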
Further reading
Files Are Hard (2015): https://danluu.com/file-consistency/
PostgreSQL "Fsyncgate" (2018): https://lwn.net/Articles/752063/
Materialized Views
(Diagram: INSERT → Table → MV1, MV2, MV3)
How it works:
• MVs are executed sequentially in alphabetical order
• If any MV fails, the insert into the source table and into the unfinished MVs is aborted
Workarounds (partial):
• parallel_view_processing = 1
• Do not use cascades
MV transactions are coming in 2021!
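A minimal sketch of the first workaround (table names are assumptions):

-- Process all materialized views in parallel instead of sequentially.
SET parallel_view_processing = 1;
INSERT INTO events SELECT * FROM staging_events;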
Distributed Table INSERT
(Diagram: INSERT → Distributed table → Shard1, Shard2, Shard3)
How it works:
• Split into one block per shard and store locally in the 'distribution queue'
• Ack once data is in the queue
• Asynchronously send to shards
• Possible loss on hard reset
Workarounds:
• Insert locally
• insert_distributed_sync
• insert_distributed_timeout
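A hedged sketch of the synchronous variant (table names and the timeout value are assumptions):

-- Acknowledge the client only after the data has reached the shards.
SET insert_distributed_sync = 1;
SET insert_distributed_timeout = 600;  -- seconds; 0 means no timeout
INSERT INTO events_distributed SELECT * FROM staging_events;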
Replicated table INSERT
(Diagram: INSERT → Shard1 → Replica)
How it works:
• Part is written locally and registered
in ZooKeeper
• Ack client
• Data is fetched asynchronously by
replicas
• Possible loss on hard reset
Workarounds:
• insert_quorum
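A minimal sketch (the quorum size and table name are assumptions):

-- Do not acknowledge the insert until 2 replicas have the data.
SET insert_quorum = 2;
INSERT INTO events_replicated SELECT * FROM staging_events;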
Kafka ingest – even more ways to fail
(Diagram: Kafka TOPIC → Kafka engine table → MV → MergeTree → MV1, MV2; replicas fetch parts)
<yandex>
    <kafka>
        <!-- enable EOS semantics -->
        <isolation_level>read_committed</isolation_level>
    </kafka>
</yandex>
To make things more complicated:
• multiple topics
• multiple partitions per topic, partition re-balance
• multiple consumers in ClickHouse (num_consumers)
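For illustration, a minimal sketch of the Kafka engine → MV → MergeTree chain (broker address, topic, and table/column names are assumptions):

-- Kafka engine table: one consumer group, several consumers.
CREATE TABLE events_kafka
(
    id UInt64,
    ts DateTime,
    payload String
)
ENGINE = Kafka
SETTINGS kafka_broker_list = 'kafka:9092',
         kafka_topic_list = 'events',
         kafka_group_name = 'clickhouse-events',
         kafka_format = 'JSONEachRow',
         kafka_num_consumers = 2;

-- Materialized view moves consumed rows into the MergeTree table.
CREATE MATERIALIZED VIEW events_kafka_mv TO events AS
SELECT id, ts, payload FROM events_kafka;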
Summary of Best Practices
● Do not use buffer tables (use compact/memory parts instead)
● Make sure single INSERT generates single part if possible
● Local inserts or insert_distributed_sync
● insert_quorum
● Do not use cascading MVs
● parallel_view_processing
● durability settings (if you understand them)
Deduplication
Why are duplicates possible?
● Retry failed INSERTs
● Collisions in message bus (e.g. Kafka re-balances)
● User errors
No unique keys and constraints in ClickHouse
Block Level Deduplication
Scenario – retry INSERT after failure
● ClickHouse keeps history of block hashes per table (crc64 or similar)
● If the hash matches, the INSERT is ignored
Details:
● Only Replicated tables (non-replicated in Q1/2021)
● replicated_deduplication_window (100),
replicated_deduplication_window_seconds (604800)
● deduplicate_blocks_in_dependent_materialized_views -- fire MVs if
source table is deduped
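A hedged sketch of adjusting the window on a replicated table (table name and values are assumptions):

-- Keep hashes of the last 1000 inserted blocks for up to 7 days.
ALTER TABLE events_replicated
    MODIFY SETTING replicated_deduplication_window = 1000,
                   replicated_deduplication_window_seconds = 604800;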
ReplacingMergeTree
Eventually removes duplicates:
● Replaces rows that have equal PRIMARY KEY values
● Replacement happens during merges
● OPTIMIZE FINAL
● SELECT FINAL
○ Slow for aggregation (performance has been improved in 20.5 and 20.11)
○ Good for key_column in (… ) queries
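A minimal sketch of the engine (table and column names are assumptions; the optional `ver` column picks the surviving row):

-- Rows with the same `id` are collapsed to the one with the highest `ver`.
CREATE TABLE events_dedup
(
    id UInt64,
    ts DateTime,
    payload String,
    ver UInt64
)
ENGINE = ReplacingMergeTree(ver)
ORDER BY id;

-- Duplicates disappear only after a merge; force it or read with FINAL:
OPTIMIZE TABLE events_dedup FINAL;
SELECT * FROM events_dedup FINAL WHERE id IN (1, 2, 3);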
Logical Deduplication
Scenario:
● There is a natural unique id in the table
● There is a unique hash
Two patterns (diagram):
• Stage new rows in a Temporary Table, then filter on insert:
INSERT INTO Table
SELECT * FROM …
WHERE id NOT IN
(SELECT id FROM Table WHERE <dedup_window>)
• INSERT into an Engine=Null table; a MaterializedView performs the filtering INSERT ... SELECT into the target Table (see the sketch below)
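A hedged sketch of the Engine=Null pattern (table and column names and the one-day window are assumptions):

-- Producers write to the Null table; only rows with unseen ids reach `events`.
CREATE TABLE events_null (id UInt64, ts DateTime, payload String)
ENGINE = Null;

CREATE MATERIALIZED VIEW events_dedup_mv TO events AS
SELECT *
FROM events_null
WHERE id NOT IN
(
    SELECT id FROM events
    WHERE ts > now() - INTERVAL 1 DAY   -- dedup window
);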
Bullet-proof de-duplication in Kafka
How to Find Duplicates in a Big Table
Scenario:
● Table.hash – should be unique for the table
SELECT
    min(ts),
    max(ts),
    count(*)
FROM
(
    SELECT ts
    FROM Table
    WHERE ts BETWEEN time_start AND time_end
    GROUP BY ts, hash
    HAVING count(*) > 1
) AS Z
Reliable, but may be slow and use lots of RAM
How to Find Duplicates in a Big Table
Scenario:
● Table.hash – should be unique for the table
SELECT
    ts,
    hash,
    neighbor(hash, -1) AS p_hash
FROM Table
WHERE ts BETWEEN time_start AND time_end AND hash = p_hash
ORDER BY
    ts ASC,
    hash ASC
Fast, but may be inaccurate:
• neighbor – works only inside blocks
• max_block_size
• group_by_two_level_threshold = 0
OPTIMIZE DEDUPLICATE
● Full re-sort, may take a lot of time
● Deduplicates identical rows (all columns considered)
● Deduplicate on a subset of columns – coming in 20.13:
○ OPTIMIZE TABLE table DEDUPLICATE BY col1,col2,col3;
○ OPTIMIZE TABLE table DEDUPLICATE BY * EXCEPT (colX, colY)
○ OPTIMIZE TABLE table DEDUPLICATE BY COLUMNS('column-matched-by-regex') EXCEPT (colX, colY);
Final words
● ClickHouse is very fast and reliable
● Proper schema design is important for performance and reliability
● Default settings are tuned for performance, but not for reliability
● Atomicity requires careful attention
● Important features in 2021 roadmap:
○ Block de-duplication for non-replicated tables
○ 'Transactional' materialized view updates
○ 'Transactional' multi-inserts
More information and references
● https://clickhouse.tech/
○ Everything ClickHouse
● https://www.youtube.com/c/ClickHouseDB
○ Piles of community videos
● https://altinity.com/blog/
○ Lots of articles about ClickHouse usage
● https://altinity.com/webinarspage/
○ Webinars on all aspects of ClickHouse
● https://github.com/ClickHouse/ClickHouse
○ Check out tests for examples of detailed usage
Thank you! Contacts:
info@altinity.com
Visit us at:
https://www.altinity.com
