23 commits
3ab54c8
Merge pull request #1390 from Altinity/frontport/antalya-26.1/alterna…
zvonand Mar 16, 2026
a78bcd4
Fix build after cherry-pick
ianton-ru Apr 10, 2026
9cf8394
Merge pull request #1453 from Altinity/frontport/antalya-26.1/timezon…
zvonand Mar 2, 2026
17b07a5
Merge pull request #1526 from Altinity/bugfix/antalya-26.1/1487_fix_u…
zvonand Mar 24, 2026
a33845c
Fix fast tests
ianton-ru Apr 13, 2026
06df609
Merge pull request #1551 from Altinity/bugfix/antalya-26.1/fix_remote…
zvonand Mar 26, 2026
f50215b
Merge pull request #1577 from Altinity/feature/antalya-26.1/remote_in…
zvonand Mar 31, 2026
b47e069
Merge antalya-26.3
ianton-ru Apr 13, 2026
c8473ee
Fix version parsing for Antalya branch
ianton-ru Apr 13, 2026
f52a41d
Merge version_parser
ianton-ru Apr 13, 2026
59deeb2
Fix settings history
ianton-ru Apr 13, 2026
8884205
More fix
ianton-ru Apr 13, 2026
2fbf4c2
Check version format
ianton-ru Apr 14, 2026
57c5846
Merge remote-tracking branch 'altinity/bugfix/antalya-26.3/version_pa…
ianton-ru Apr 14, 2026
d8a3993
Fix after AI review
ianton-ru Apr 14, 2026
41e54dd
Fix configuration double update
ianton-ru Apr 14, 2026
a30f2c4
Fix test_mask_sensitive_info multiple runs
ianton-ru Apr 15, 2026
cffce77
Fix multiple updates
ianton-ru Apr 15, 2026
d53e099
Merge antalya-26.3
ianton-ru Apr 16, 2026
7bf5abe
More fix for tests
ianton-ru Apr 16, 2026
968d774
Revert "Fix version parsing for Antalya branch"
ianton-ru Apr 20, 2026
c2f4e0a
Merge antalya-26.3
ianton-ru Apr 20, 2026
3c28439
Fix settings history for antalya
ianton-ru Apr 20, 2026
73 changes: 73 additions & 0 deletions docs/en/antalya/swarm.md
@@ -0,0 +1,73 @@
# Antalya branch

## Swarm

### Difference with upstream version

#### `storage_type` argument in object storage functions

In upstream ClickHouse, there are several table functions for reading Iceberg tables from different storage backends: `icebergLocal`, `icebergS3`, `icebergAzure`, and `icebergHDFS`, their cluster variants, the `iceberg` function as a synonym for `icebergS3`, and the corresponding table engines `IcebergLocal`, `IcebergS3`, `IcebergAzure`, and `IcebergHDFS`.

In the Antalya branch, the `iceberg` table function and the `Iceberg` table engine unify all variants into one by using a new named argument, `storage_type`, which can be one of `local`, `s3`, `azure`, or `hdfs`.

Old syntax examples:

```sql
SELECT * FROM icebergS3('http://minio1:9000/root/table_data', 'minio', 'minio123', 'Parquet');
SELECT * FROM icebergAzureCluster('mycluster', 'http://azurite1:30000/devstoreaccount1', 'cont', '/table_data', 'devstoreaccount1', 'Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==', 'Parquet');
CREATE TABLE mytable ENGINE=IcebergHDFS('/table_data', 'Parquet');
```

New syntax examples:

```sql
SELECT * FROM iceberg(storage_type='s3', 'http://minio1:9000/root/table_data', 'minio', 'minio123', 'Parquet');
SELECT * FROM icebergCluster('mycluster', storage_type='azure', 'http://azurite1:30000/devstoreaccount1', 'cont', '/table_data', 'devstoreaccount1', 'Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==', 'Parquet');
CREATE TABLE mytable ENGINE=Iceberg('/table_data', 'Parquet', storage_type='hdfs');
```

Also, if a named collection is used to store access parameters, the field `storage_type` can be included in the same named collection:

```xml
<named_collections>
<s3>
<url>http://minio1:9001/root/</url>
<access_key_id>minio</access_key_id>
<secret_access_key>minio123</secret_access_key>
<storage_type>s3</storage_type>
</s3>
</named_collections>
```

```sql
SELECT * FROM iceberg(s3, filename='table_data');
```

By default, `storage_type` is `'s3'`, maintaining backward compatibility.
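
Because of this default, a query that omits `storage_type` should behave like the old `icebergS3` variant; for example:

```sql
-- storage_type is omitted, so it defaults to 's3'...
SELECT * FROM iceberg('http://minio1:9000/root/table_data', 'minio', 'minio123', 'Parquet');
-- ...which matches the old storage-specific function:
SELECT * FROM icebergS3('http://minio1:9000/root/table_data', 'minio', 'minio123', 'Parquet');
```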


#### `object_storage_cluster` setting

The new setting `object_storage_cluster` controls whether a single-node or cluster variant of table functions reading from object storage (e.g., `s3`, `azure`, `iceberg`, and their cluster variants like `s3Cluster`, `azureCluster`, `icebergCluster`) is used.

Old syntax examples:

```sql
SELECT * from s3Cluster('myCluster', 'http://minio1:9001/root/data/{clickhouse,database}/*', 'minio', 'minio123', 'CSV',
'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))');
SELECT * FROM icebergAzureCluster('mycluster', 'http://azurite1:30000/devstoreaccount1', 'cont', '/table_data', 'devstoreaccount1', 'Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==', 'Parquet');
```

New syntax examples:

```sql
SELECT * from s3('http://minio1:9001/root/data/{clickhouse,database}/*', 'minio', 'minio123', 'CSV',
'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))')
SETTINGS object_storage_cluster='myCluster';
SELECT * FROM icebergAzure('http://azurite1:30000/devstoreaccount1', 'cont', '/table_data', 'devstoreaccount1', 'Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==', 'Parquet')
SETTINGS object_storage_cluster='myCluster';
```

This setting also applies to table engines and can be used with tables managed by Iceberg Catalog.
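
For example, a minimal sketch of cluster execution with the `Iceberg` table engine (the table, endpoint, and cluster names here are illustrative):

```sql
CREATE TABLE iceberg_table
ENGINE = Iceberg('http://minio1:9000/root/table_data', 'Parquet', storage_type='s3');

-- With a non-empty object_storage_cluster, the read fans out across the named cluster.
SELECT count(*) FROM iceberg_table
SETTINGS object_storage_cluster = 'mycluster';
```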

Note: Upstream ClickHouse has introduced analogous settings, such as `parallel_replicas_for_cluster_engines` and `cluster_for_parallel_replicas`; since version 25.10, these settings also work with table engines. The `object_storage_cluster` setting may be deprecated in the future.
56 changes: 56 additions & 0 deletions docs/en/engines/table-engines/integrations/iceberg.md
@@ -324,6 +324,62 @@ SETTINGS iceberg_metadata_staleness_ms=120000

**Note**: The current expectation is that the metadata cache is large enough to hold the latest metadata snapshot in full for all active tables, if asynchronous prefetching is enabled.

## Altinity Antalya branch

### Specify storage type in arguments

Only in the Altinity Antalya branch does the `Iceberg` table engine support all storage types. The storage type can be specified using the named argument `storage_type`. Supported values are `s3`, `azure`, `hdfs`, and `local`.

```sql
CREATE TABLE iceberg_table_s3
ENGINE = Iceberg(storage_type='s3', url, [, NOSIGN | access_key_id, secret_access_key, [session_token]], format, [,compression])

CREATE TABLE iceberg_table_azure
ENGINE = Iceberg(storage_type='azure', connection_string|storage_account_url, container_name, blobpath, [account_name, account_key, format, compression])

CREATE TABLE iceberg_table_hdfs
ENGINE = Iceberg(storage_type='hdfs', path_to_table, [,format] [,compression_method])

CREATE TABLE iceberg_table_local
ENGINE = Iceberg(storage_type='local', path_to_table, [,format] [,compression_method])
```

### Specify storage type in named collection

Only in the Altinity Antalya branch can `storage_type` be included as part of a named collection. This allows for centralized configuration of storage settings.

```xml
<clickhouse>
<named_collections>
<iceberg_conf>
<url>http://test.s3.amazonaws.com/clickhouse-bucket/</url>
<access_key_id>test</access_key_id>
<secret_access_key>test</secret_access_key>
<format>auto</format>
<structure>auto</structure>
<storage_type>s3</storage_type>
</iceberg_conf>
</named_collections>
</clickhouse>
```

```sql
CREATE TABLE iceberg_table ENGINE=Iceberg(iceberg_conf, filename = 'test_table')
```

The default value for `storage_type` is `s3`.

### `object_storage_cluster` setting

Only in the Altinity Antalya branch is an alternative syntax for the `Iceberg` table engine available. This syntax allows execution on a cluster when the `object_storage_cluster` setting is non-empty and contains the cluster name.

```sql
CREATE TABLE iceberg_table_s3
ENGINE = Iceberg(storage_type='s3', url, [, NOSIGN | access_key_id, secret_access_key, [session_token]], format, [,compression]);

SELECT * FROM iceberg_table_s3 SETTINGS object_storage_cluster='cluster_simple';
```

## See also {#see-also}

- [iceberg table function](/sql-reference/table-functions/iceberg.md)
23 changes: 23 additions & 0 deletions docs/en/sql-reference/distribution-on-cluster.md
@@ -0,0 +1,23 @@
# Task distribution in *Cluster-family functions

## Task distribution algorithm

Table functions such as `s3Cluster`, `azureBlobStorageCluster`, `hdfsCluster`, and `icebergCluster`, as well as table engines like `S3`, `Azure`, `HDFS`, and `Iceberg` with the `object_storage_cluster` setting, distribute tasks across all cluster nodes or across a subset limited by the `object_storage_max_nodes` setting. This setting caps the number of nodes involved in processing a distributed query; the nodes are selected at random for each query.
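
For example, a sketch that caps a query at four nodes (the endpoint, credentials, and cluster name are illustrative):

```sql
-- At most 4 randomly selected nodes of 'mycluster' will participate in this query.
SELECT count(*)
FROM s3Cluster('mycluster', 'http://minio1:9001/root/data/*', 'minio', 'minio123', 'CSV')
SETTINGS object_storage_max_nodes = 4;
```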

A single task corresponds to processing one source file.

For each file, one cluster node is selected as the primary node using a consistent Rendezvous Hashing algorithm. This algorithm guarantees that:
* The same node is consistently selected as primary for each file, as long as the cluster remains unchanged.
* When the cluster changes (nodes added or removed), only files assigned to those affected nodes change their primary node assignment.

This improves cache efficiency by minimizing data movement among nodes.
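
The idea behind Rendezvous Hashing can be sketched in SQL: every (file, node) pair gets a score, and for each file the node with the highest score becomes the primary. This is only a model of the algorithm, not the server's actual implementation or hash function:

```sql
-- For each file, score every node and pick the highest score as the primary.
SELECT
    file,
    argMax(node, cityHash64(file, node)) AS primary_node
FROM
(
    SELECT arrayJoin(['part1.parquet', 'part2.parquet', 'part3.parquet']) AS file
) AS files
CROSS JOIN
(
    SELECT arrayJoin(['node1', 'node2', 'node3']) AS node
) AS nodes
GROUP BY file;
```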

## `lock_object_storage_task_distribution_ms` setting

Each node begins by processing the files for which it is the primary node. After completing its assigned files, a node may take tasks from other nodes, either immediately or after waiting `lock_object_storage_task_distribution_ms` milliseconds, if the primary node has not requested new files during that interval. The default value of `lock_object_storage_task_distribution_ms` is 500 milliseconds. This setting balances caching efficiency against workload redistribution when nodes are imbalanced.
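
For example, a sketch that raises the lock interval for a single query, assuming the setting can be passed per query like the other settings shown in these docs (the endpoint and credentials are illustrative):

```sql
-- Idle nodes wait 1000 ms (instead of the default 500 ms) before
-- taking over files whose primary node is still busy.
SELECT count(*)
FROM s3Cluster('mycluster', 'http://minio1:9001/root/data/*', 'minio', 'minio123', 'CSV')
SETTINGS lock_object_storage_task_distribution_ms = 1000;
```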

## `SYSTEM STOP SWARM MODE` command

If a node needs to shut down gracefully, the command `SYSTEM STOP SWARM MODE` prevents it from receiving new tasks for *Cluster-family queries. The node finishes processing its already-assigned files and can then shut down safely without errors.

Receiving new tasks can be resumed with the command `SYSTEM START SWARM MODE`.
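
A typical graceful-shutdown sequence might look like this sketch:

```sql
-- On the node being drained: stop accepting new *Cluster-family tasks.
SYSTEM STOP SWARM MODE;
-- Already-assigned files are still processed; once they finish,
-- the node can be shut down without query errors.

-- After maintenance, let the node receive tasks again:
SYSTEM START SWARM MODE;
```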
14 changes: 14 additions & 0 deletions docs/en/sql-reference/table-functions/azureBlobStorageCluster.md
@@ -54,6 +54,20 @@ SELECT count(*) FROM azureBlobStorageCluster(

See [azureBlobStorage](/sql-reference/table-functions/azureBlobStorage#using-shared-access-signatures-sas-sas-tokens) for examples.

## Altinity Antalya branch

### `object_storage_cluster` setting

Only in the Altinity Antalya branch is an alternative syntax for the `azureBlobStorageCluster` table function available: the `azureBlobStorage` function can be used with a non-empty `object_storage_cluster` setting specifying a cluster name. This enables distributed queries over Azure Blob Storage across a ClickHouse cluster.

```sql
SELECT count(*) FROM azureBlobStorage(
'http://azurite1:10000/devstoreaccount1', 'testcontainer', 'test_cluster_count.csv', 'devstoreaccount1',
'Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==', 'CSV',
'auto', 'key UInt64')
SETTINGS object_storage_cluster='cluster_simple'
```

## Related {#related}

- [AzureBlobStorage engine](../../engines/table-engines/integrations/azureBlobStorage.md)
11 changes: 11 additions & 0 deletions docs/en/sql-reference/table-functions/deltalakeCluster.md
@@ -45,6 +45,17 @@ A table with the specified structure for reading data from cluster in the specif
- `_time` — Last modified time of the file. Type: `Nullable(DateTime)`. If the time is unknown, the value is `NULL`.
- `_etag` — The etag of the file. Type: `LowCardinality(String)`. If the etag is unknown, the value is `NULL`.

## Altinity Antalya branch

### `object_storage_cluster` setting

Only in the Altinity Antalya branch is an alternative syntax for the `deltaLakeCluster` table function available: the `deltaLake` function can be used with a non-empty `object_storage_cluster` setting specifying a cluster name. This enables distributed queries over Delta Lake storage across a ClickHouse cluster.

```sql
SELECT count(*) FROM deltaLake(url [,aws_access_key_id, aws_secret_access_key] [,format] [,structure] [,compression])
SETTINGS object_storage_cluster='cluster_simple'
```
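
A concrete query might look like this sketch (the endpoint and credentials are illustrative):

```sql
SELECT count(*)
FROM deltaLake('http://minio1:9001/root/delta_table', 'minio', 'minio123')
SETTINGS object_storage_cluster = 'cluster_simple';
```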

## Related {#related}

- [deltaLake engine](engines/table-engines/integrations/deltalake.md)
12 changes: 12 additions & 0 deletions docs/en/sql-reference/table-functions/hdfsCluster.md
@@ -60,6 +60,12 @@ FROM hdfsCluster('cluster_simple', 'hdfs://hdfs1:9000/{some,another}_dir/*', 'TS
If your listing of files contains number ranges with leading zeros, use the construction with braces for each digit separately or use `?`.
:::

## Altinity Antalya branch

### `object_storage_cluster` setting

Only in the Altinity Antalya branch is an alternative syntax for the `hdfsCluster` table function available: the `hdfs` function can be used with a non-empty `object_storage_cluster` setting specifying a cluster name. This enables distributed queries over HDFS storage across a ClickHouse cluster.

```sql
SELECT count(*)
FROM hdfs('hdfs://hdfs1:9000/{some,another}_dir/*', 'TSV', 'name String, value UInt32')
SETTINGS object_storage_cluster='cluster_simple'
```

## Related {#related}

- [HDFS engine](../../engines/table-engines/integrations/hdfs.md)
12 changes: 12 additions & 0 deletions docs/en/sql-reference/table-functions/hudiCluster.md
@@ -43,6 +43,18 @@ A table with the specified structure for reading data from cluster in the specif
- `_time` — Last modified time of the file. Type: `Nullable(DateTime)`. If the time is unknown, the value is `NULL`.
- `_etag` — The etag of the file. Type: `LowCardinality(String)`. If the etag is unknown, the value is `NULL`.

## Altinity Antalya branch

### `object_storage_cluster` setting

Only in the Altinity Antalya branch is an alternative syntax for the `hudiCluster` table function available: the `hudi` function can be used with a non-empty `object_storage_cluster` setting specifying a cluster name. This enables distributed queries over Hudi storage across a ClickHouse cluster.

```sql
SELECT *
FROM hudi(url [,aws_access_key_id, aws_secret_access_key] [,format] [,structure] [,compression])
SETTINGS object_storage_cluster='cluster_simple'
```
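
A concrete query might look like this sketch (the endpoint and credentials are illustrative):

```sql
SELECT *
FROM hudi('http://minio1:9001/root/hudi_table', 'minio', 'minio123')
SETTINGS object_storage_cluster = 'cluster_simple';
```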

## Related {#related}

- [Hudi engine](engines/table-engines/integrations/hudi.md)
41 changes: 41 additions & 0 deletions docs/en/sql-reference/table-functions/iceberg.md
@@ -649,6 +649,47 @@ GRANT ALTER TABLE ON my_iceberg_table TO my_user;
- The catalog's own authorization (REST catalog auth, AWS Glue IAM, etc.) is enforced independently when ClickHouse updates the metadata
:::

## Altinity Antalya branch

### Specify storage type in arguments

Only in the Altinity Antalya branch does the `iceberg` table function support all storage types. The storage type can be specified using the named argument `storage_type`. Supported values are `s3`, `azure`, `hdfs`, and `local`.

```sql
iceberg(storage_type='s3', url [, NOSIGN | access_key_id, secret_access_key, [session_token]] [,format] [,compression_method])

iceberg(storage_type='azure', connection_string|storage_account_url, container_name, blobpath, [,account_name], [,account_key] [,format] [,compression_method])

iceberg(storage_type='hdfs', path_to_table, [,format] [,compression_method])

iceberg(storage_type='local', path_to_table, [,format] [,compression_method])
```

### Specify storage type in named collection

Only in the Altinity Antalya branch can `storage_type` be included as part of a named collection. This allows for centralized configuration of storage settings.

```xml
<clickhouse>
<named_collections>
<iceberg_conf>
<url>http://test.s3.amazonaws.com/clickhouse-bucket/</url>
<access_key_id>test</access_key_id>
<secret_access_key>test</secret_access_key>
<format>auto</format>
<structure>auto</structure>
<storage_type>s3</storage_type>
</iceberg_conf>
</named_collections>
</clickhouse>
```

```sql
iceberg(named_collection[, option=value [,..]])
```

The default value for `storage_type` is `s3`.

## See Also {#see-also}

* [Iceberg engine](/engines/table-engines/integrations/iceberg.md)
75 changes: 75 additions & 0 deletions docs/en/sql-reference/table-functions/icebergCluster.md
@@ -50,6 +50,81 @@ SELECT * FROM icebergS3Cluster('cluster_simple', 'http://test.s3.amazonaws.com/c
- `_time` — Last modified time of the file. Type: `Nullable(DateTime)`. If the time is unknown, the value is `NULL`.
- `_etag` — The etag of the file. Type: `LowCardinality(String)`. If the etag is unknown, the value is `NULL`.

## Altinity Antalya branch

### `icebergLocalCluster` table function

Only in the Altinity Antalya branch, the `icebergLocalCluster` table function is available. It is designed for distributed cluster queries when Iceberg data is stored on shared network storage mounted at a local path; the path must be identical on all replicas.

```sql
icebergLocalCluster(cluster_name, path_to_table, [,format] [,compression_method])
```
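
For example, a sketch with an illustrative shared mount point:

```sql
-- /mnt/shared must resolve to the same Iceberg table on every replica of 'cluster_simple'.
SELECT * FROM icebergLocalCluster('cluster_simple', '/mnt/shared/iceberg/table_data', 'Parquet');
```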

### Specify storage type in function arguments

Only in the Altinity Antalya branch, the `icebergCluster` table function supports all storage backends. The storage backend can be specified using the named argument `storage_type`. Valid values include `s3`, `azure`, `hdfs`, and `local`.

```sql
icebergCluster(storage_type='s3', cluster_name, url [, NOSIGN | access_key_id, secret_access_key, [session_token]] [,format] [,compression_method])

icebergCluster(storage_type='azure', cluster_name, connection_string|storage_account_url, container_name, blobpath, [,account_name], [,account_key] [,format] [,compression_method])

icebergCluster(storage_type='hdfs', cluster_name, path_to_table, [,format] [,compression_method])

icebergCluster(storage_type='local', cluster_name, path_to_table, [,format] [,compression_method])
```

### Specify storage type in a named collection

Only in the Altinity Antalya branch, `storage_type` can be part of a named collection.

```xml
<clickhouse>
<named_collections>
<iceberg_conf>
<url>http://test.s3.amazonaws.com/clickhouse-bucket/</url>
<access_key_id>test</access_key_id>
<secret_access_key>test</secret_access_key>
<format>auto</format>
<structure>auto</structure>
<storage_type>s3</storage_type>
</iceberg_conf>
</named_collections>
</clickhouse>
```

```sql
icebergCluster(iceberg_conf[, option=value [,..]])
```

The default value for `storage_type` is `s3`.

### `object_storage_cluster` setting

Only in the Altinity Antalya branch is an alternative syntax for the `icebergCluster` table function available: the `iceberg` function and its storage-specific variants can be used with a non-empty `object_storage_cluster` setting specifying a cluster name. This enables distributed queries over an Iceberg table across a ClickHouse cluster.

```sql
icebergS3(url [, NOSIGN | access_key_id, secret_access_key, [session_token]] [,format] [,compression_method]) SETTINGS object_storage_cluster='cluster_name'

icebergAzure(connection_string|storage_account_url, container_name, blobpath, [,account_name], [,account_key] [,format] [,compression_method]) SETTINGS object_storage_cluster='cluster_name'

icebergHDFS(path_to_table, [,format] [,compression_method]) SETTINGS object_storage_cluster='cluster_name'

icebergLocal(path_to_table, [,format] [,compression_method]) SETTINGS object_storage_cluster='cluster_name'

icebergS3(option=value [,..]) SETTINGS object_storage_cluster='cluster_name'

iceberg(storage_type='s3', url [, NOSIGN | access_key_id, secret_access_key, [session_token]] [,format] [,compression_method]) SETTINGS object_storage_cluster='cluster_name'

iceberg(storage_type='azure', connection_string|storage_account_url, container_name, blobpath, [,account_name], [,account_key] [,format] [,compression_method]) SETTINGS object_storage_cluster='cluster_name'

iceberg(storage_type='hdfs', path_to_table, [,format] [,compression_method]) SETTINGS object_storage_cluster='cluster_name'

iceberg(storage_type='local', path_to_table, [,format] [,compression_method]) SETTINGS object_storage_cluster='cluster_name'

iceberg(iceberg_conf[, option=value [,..]]) SETTINGS object_storage_cluster='cluster_name'
```
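
For instance, a sketch combining `storage_type` with cluster execution (the endpoint, credentials, and cluster name are illustrative):

```sql
SELECT count(*)
FROM iceberg(storage_type='s3', 'http://minio1:9000/root/table_data', 'minio', 'minio123', 'Parquet')
SETTINGS object_storage_cluster = 'cluster_simple';
```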

**See Also**

- [Iceberg engine](/engines/table-engines/integrations/iceberg.md)
17 changes: 17 additions & 0 deletions docs/en/sql-reference/table-functions/s3Cluster.md
@@ -91,6 +91,23 @@ Users can use the same approaches as documented for the s3 function [here](/sql-re

For details on optimizing the performance of the s3 function see [our detailed guide](/integrations/s3/performance).

## Altinity Antalya branch

### `object_storage_cluster` setting

Only in the Altinity Antalya branch is an alternative syntax for the `s3Cluster` table function available: the `s3` function can be used with a non-empty `object_storage_cluster` setting specifying a cluster name. This enables distributed queries over S3 storage across a ClickHouse cluster.

```sql
SELECT * FROM s3(
'http://minio1:9001/root/data/{clickhouse,database}/*',
'minio',
'ClickHouse_Minio_P@ssw0rd',
'CSV',
'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))'
) ORDER BY (name, value, polygon)
SETTINGS object_storage_cluster='cluster_simple'
```

## Related {#related}

- [S3 engine](../../engines/table-engines/integrations/s3.md)