Describe the bug
We've encountered an issue with the logic for incremental_strategy set to 'insert_overwrite'.
In previous versions, the command to set partitionOverwriteMode to STATIC was not executed during normal incremental runs. Starting from version 1.10, it appears this command is now executed, which causes an error when the query runs on a SQL Warehouse.
When running in SQL Warehouse, the following error occurs:

set spark.sql.sources.partitionOverwriteMode = STATIC
: [CONFIG_NOT_AVAILABLE] Configuration spark.sql.sources.partitionOverwriteMode is not available. SQLSTATE: 42K0I
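For context, a minimal model configuration that exercises this code path might look like the following. This is an illustrative sketch only; the model name, source, and partition column are hypothetical and not taken from the affected project:

```sql
-- models/events_incremental.sql (hypothetical example model)
{{ config(
    materialized='incremental',
    incremental_strategy='insert_overwrite',
    partition_by='event_date'
) }}

select *
from {{ source('raw', 'events') }}
{% if is_incremental() %}
-- On incremental runs, only reprocess recent partitions
where event_date >= date_add(current_date(), -3)
{% endif %}
```

Running a model like this against a SQL Warehouse target (rather than an all-purpose cluster) triggers the error above on normal incremental runs under 1.10.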
The logic that needs to be adjusted is located in the following file:
dbt/include/databricks/macros/materializations/incremental/incremental.sql
Currently, this command is executed during normal incremental runs as well. We suggest restoring the should_full_refresh() guard, so the block matches its previous-version form:

{#-- Set Overwrite Mode to STATIC for initial replace --#}
{%- if incremental_strategy == 'insert_overwrite' and should_full_refresh() -%}
  {%- call statement() -%}
    set spark.sql.sources.partitionOverwriteMode = STATIC
  {%- endcall -%}
{%- endif -%}

This change would prevent the error in environments like SQL Warehouse, where setting partitionOverwriteMode to STATIC is not supported.
Expected behavior
The logic to set partitionOverwriteMode = STATIC should run only on full refresh, not during normal incremental runs, and especially not on SQL Warehouse, where the setting is not available and leads to an error.
Screenshots and log output

set spark.sql.sources.partitionOverwriteMode = STATIC
: [CONFIG_NOT_AVAILABLE] Configuration spark.sql.sources.partitionOverwriteMode is not available. SQLSTATE: 42K0I
System information
01:46:11 Running with dbt=1.9.4
01:46:11 dbt version: 1.9.4
01:46:11 python version: 3.10.14
01:46:11 os info: macOS-15.4.1-arm64-arm-64bit
.....
01:46:13 adapter type: databricks
01:46:13 adapter version: 1.10.0
01:46:13 Configuration:
01:46:13 profiles.yml file [OK found and valid]
01:46:13 dbt_project.yml file [OK found and valid]
01:46:13 Required dependencies:
01:46:13 - git [OK found]
01:46:13 Connection:
01:46:13 host: .cloud.databricks.com
01:46:13 http_path: /sql/1.0/warehouses/f7d**
01:46:13 catalog: hive_metastore
01:46:13 schema: default
01:46:13 session_properties: {'ansi_mode': True}
01:46:13 Registered adapter: databricks=1.10.0
01:46:22 Connection test: [OK connection ok]
01:46:22 All checks passed!
Additional context
This issue was not present in earlier versions because the set partitionOverwriteMode = STATIC logic was gated by should_full_refresh() and ran only during full refresh. The change in behavior in version 1.10 causes unintended side effects in SQL Warehouse environments.