Docs: Add Hadoop conf overrides in Spark (apache#2922)
kbendick authored Aug 6, 2021
1 parent a5444b5 commit e315d65
Showing 1 changed file with 8 additions and 0 deletions: site/docs/spark-configuration.md
@@ -94,6 +94,14 @@ Spark's built-in catalog supports existing v1 and v2 tables tracked in a Hive Metastore

This configuration can use the same Hive Metastore for both Iceberg and non-Iceberg tables.
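
A minimal sketch of such a shared-metastore setup, assuming the built-in session catalog is replaced with Iceberg's `SparkSessionCatalog` and that `type = hive` points it at the existing Hive Metastore:

```plain
spark.sql.catalog.spark_catalog = org.apache.iceberg.spark.SparkSessionCatalog
spark.sql.catalog.spark_catalog.type = hive
```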

### Using catalog-specific Hadoop configuration values

Just as global Hadoop properties can be set with `spark.hadoop.*`, per-catalog Hadoop configuration values can be set in Spark by adding the property for the catalog with the prefix `spark.sql.catalog.(catalog-name).hadoop.*`. These properties take precedence over values configured globally using `spark.hadoop.*` and affect only Iceberg tables.

```plain
spark.sql.catalog.hadoop_prod.hadoop.fs.s3a.endpoint = http://aws-local:9000
```
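
The same override can also be supplied when launching Spark, for example via `--conf` (a sketch reusing the `hadoop_prod` catalog name and `aws-local` endpoint from the example above):

```plain
spark-sql --conf spark.sql.catalog.hadoop_prod.hadoop.fs.s3a.endpoint=http://aws-local:9000
```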

### Loading a custom catalog

Spark supports loading a custom Iceberg `Catalog` implementation by specifying the `catalog-impl` property.
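
A minimal sketch of such a configuration, where the catalog name `custom_prod` and the implementation class `com.my.custom.CatalogImpl` are illustrative placeholders:

```plain
spark.sql.catalog.custom_prod = org.apache.iceberg.spark.SparkCatalog
spark.sql.catalog.custom_prod.catalog-impl = com.my.custom.CatalogImpl
```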
