[FLINK-33577][dist] Change the default config file to config.yaml in flink-dist.

This closes apache#24177.
JunRuiLee authored and zhuzhurk committed Jan 25, 2024
1 parent 80f6e06 commit 9721ce8
Showing 52 changed files with 510 additions and 456 deletions.
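The rename accompanies Flink's move toward a standard YAML parser for the main configuration. A minimal sketch of the old versus the new file, with illustrative keys only (the shipped default file's contents are not part of this diff, and the nested form assumes the standard YAML parser):

    # Legacy conf/flink-conf.yaml: flat key/value pairs, one per line
    jobmanager.rpc.address: localhost
    taskmanager.numberOfTaskSlots: 2

    # New conf/config.yaml: standard YAML, so nested mappings are also possible
    jobmanager:
      rpc:
        address: localhost
    taskmanager:
      numberOfTaskSlots: 2

The flat dotted keys remain valid YAML, so existing values can usually be carried over, though values containing special YAML characters may need quoting.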
@@ -90,7 +90,7 @@
<td><h5>kubernetes.flink.conf.dir</h5></td>
<td style="word-wrap: break-word;">"/opt/flink/conf"</td>
<td>String</td>
- <td>The flink conf directory that will be mounted in pod. The flink-conf.yaml, log4j.properties, logback.xml in this path will be overwritten from config map.</td>
+ <td>The flink conf directory that will be mounted in pod. The config.yaml, log4j.properties, logback.xml in this path will be overwritten from config map.</td>
</tr>
<tr>
<td><h5>kubernetes.flink.log.dir</h5></td>
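As an illustration of the option above, the corresponding client-side entry for native Kubernetes deployments would now live in config.yaml rather than flink-conf.yaml (the value shown is just the documented default):

    # conf/config.yaml
    kubernetes.flink.conf.dir: /opt/flink/conf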
@@ -18,7 +18,7 @@
<td><h5>python.client.executable</h5></td>
<td style="word-wrap: break-word;">"python"</td>
<td>String</td>
- <td>The path of the Python interpreter used to launch the Python process when submitting the Python jobs via "flink run" or compiling the Java/Scala jobs containing Python UDFs. Equivalent to the command line option "-pyclientexec" or the environment variable PYFLINK_CLIENT_EXECUTABLE. The priority is as following: <br />1. the configuration 'python.client.executable' defined in the source code(Only used in Flink Java SQL/Table API job call Python UDF);<br />2. the command line option "-pyclientexec";<br />3. the configuration 'python.client.executable' defined in flink-conf.yaml<br />4. the environment variable PYFLINK_CLIENT_EXECUTABLE;</td>
+ <td>The path of the Python interpreter used to launch the Python process when submitting the Python jobs via "flink run" or compiling the Java/Scala jobs containing Python UDFs. Equivalent to the command line option "-pyclientexec" or the environment variable PYFLINK_CLIENT_EXECUTABLE. The priority is as following: <br />1. the configuration 'python.client.executable' defined in the source code(Only used in Flink Java SQL/Table API job call Python UDF);<br />2. the command line option "-pyclientexec";<br />3. the configuration 'python.client.executable' defined in config.yaml<br />4. the environment variable PYFLINK_CLIENT_EXECUTABLE;</td>
</tr>
<tr>
<td><h5>python.executable</h5></td>
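For example, item 3 in that priority list would now be placed in config.yaml instead of flink-conf.yaml (the interpreter path is illustrative):

    # conf/config.yaml
    python.client.executable: /usr/bin/python3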
@@ -30,7 +30,7 @@
<td><h5>state.backend.rocksdb.memory.fixed-per-tm</h5></td>
<td style="word-wrap: break-word;">(none)</td>
<td>MemorySize</td>
- <td>The fixed total amount of memory, shared among all RocksDB instances per Task Manager (cluster-level option). This option only takes effect if neither 'state.backend.rocksdb.memory.managed' nor 'state.backend.rocksdb.memory.fixed-per-slot' are not configured. If none is configured then each RocksDB column family state has its own memory caches (as controlled by the column family options). The relevant options for the shared resources (e.g. write-buffer-ratio) can be set on the same level (flink-conf.yaml).Note, that this feature breaks resource isolation between the slots</td>
+ <td>The fixed total amount of memory, shared among all RocksDB instances per Task Manager (cluster-level option). This option only takes effect if neither 'state.backend.rocksdb.memory.managed' nor 'state.backend.rocksdb.memory.fixed-per-slot' are not configured. If none is configured then each RocksDB column family state has its own memory caches (as controlled by the column family options). The relevant options for the shared resources (e.g. write-buffer-ratio) can be set on the same level (config.yaml).Note, that this feature breaks resource isolation between the slots</td>
</tr>
<tr>
<td><h5>state.backend.rocksdb.memory.high-prio-pool-ratio</h5></td>
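For example, the cluster-level knobs mentioned above could be set together in config.yaml; the size and ratio are illustrative rather than recommendations, and assume the shared-resource options keep the state.backend.rocksdb.memory.* prefix used above:

    # conf/config.yaml
    state.backend.rocksdb.memory.fixed-per-tm: 1gb
    state.backend.rocksdb.memory.write-buffer-ratio: 0.5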
@@ -18,7 +18,7 @@
<td><h5>state.backend.rocksdb.memory.fixed-per-tm</h5></td>
<td style="word-wrap: break-word;">(none)</td>
<td>MemorySize</td>
- <td>The fixed total amount of memory, shared among all RocksDB instances per Task Manager (cluster-level option). This option only takes effect if neither 'state.backend.rocksdb.memory.managed' nor 'state.backend.rocksdb.memory.fixed-per-slot' are not configured. If none is configured then each RocksDB column family state has its own memory caches (as controlled by the column family options). The relevant options for the shared resources (e.g. write-buffer-ratio) can be set on the same level (flink-conf.yaml).Note, that this feature breaks resource isolation between the slots</td>
+ <td>The fixed total amount of memory, shared among all RocksDB instances per Task Manager (cluster-level option). This option only takes effect if neither 'state.backend.rocksdb.memory.managed' nor 'state.backend.rocksdb.memory.fixed-per-slot' are not configured. If none is configured then each RocksDB column family state has its own memory caches (as controlled by the column family options). The relevant options for the shared resources (e.g. write-buffer-ratio) can be set on the same level (config.yaml).Note, that this feature breaks resource isolation between the slots</td>
</tr>
<tr>
<td><h5>state.backend.rocksdb.memory.high-prio-pool-ratio</h5></td>
@@ -55,7 +55,7 @@ public static void mergeHadoopConf(JobConf jobConf) {

/**
* Returns a new Hadoop Configuration object using the path to the hadoop conf configured in the
- * main configuration (flink-conf.yaml). This method is public because its being used in the
+ * main configuration (config.yaml). This method is public because its being used in the
* HadoopDataSource.
*
* @param flinkConfiguration Flink configuration object
@@ -503,7 +503,7 @@ public int hashCode() {
/**
* Restart strategy configuration that could be used by jobs to use cluster level restart
* strategy. Useful especially when one has a custom implementation of restart strategy set via
- * flink-conf.yaml.
+ * config.yaml.
*/
@PublicEvolving
public static final class FallbackRestartStrategyConfiguration
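For illustration, such a cluster-level fallback could be declared in config.yaml roughly as follows; the exact key spellings differ slightly between Flink versions, so treat this as a sketch:

    # conf/config.yaml
    restart-strategy: fixed-delay
    restart-strategy.fixed-delay.attempts: 3
    restart-strategy.fixed-delay.delay: 10 s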
@@ -391,7 +391,7 @@ public final class ConfigConstants {
/**
* Prefix for passing custom environment variables to Flink's master process. For example for
* passing LD_LIBRARY_PATH as an env variable to the AppMaster, set:
- * containerized.master.env.LD_LIBRARY_PATH: "/usr/lib/native" in the flink-conf.yaml.
+ * containerized.master.env.LD_LIBRARY_PATH: "/usr/lib/native" in the config.yaml.
*
* @deprecated Use {@link ResourceManagerOptions#CONTAINERIZED_MASTER_ENV_PREFIX} instead.
*/
@@ -482,7 +482,7 @@ public final class ConfigConstants {
/**
* Prefix for passing custom environment variables to Flink's ApplicationMaster (JobManager).
* For example for passing LD_LIBRARY_PATH as an env variable to the AppMaster, set:
- * yarn.application-master.env.LD_LIBRARY_PATH: "/usr/lib/native" in the flink-conf.yaml.
+ * yarn.application-master.env.LD_LIBRARY_PATH: "/usr/lib/native" in the config.yaml.
*
* @deprecated Please use {@code CONTAINERIZED_MASTER_ENV_PREFIX}.
*/
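The example from the Javadoc above, written as a config.yaml entry:

    # conf/config.yaml
    containerized.master.env.LD_LIBRARY_PATH: "/usr/lib/native"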
@@ -276,7 +276,7 @@ public class ResourceManagerOptions {
/**
* Prefix for passing custom environment variables to Flink's master process. For example for
* passing LD_LIBRARY_PATH as an env variable to the AppMaster, set:
- * containerized.master.env.LD_LIBRARY_PATH: "/usr/lib/native" in the flink-conf.yaml.
+ * containerized.master.env.LD_LIBRARY_PATH: "/usr/lib/native" in the config.yaml.
*/
public static final String CONTAINERIZED_MASTER_ENV_PREFIX = "containerized.master.env.";

2 changes: 1 addition & 1 deletion flink-dist/src/main/assemblies/bin.xml
@@ -123,7 +123,7 @@ under the License.

<!-- copy the config file -->
<file>
- <source>src/main/resources/flink-conf.yaml</source>
+ <source>src/main/resources/config.yaml</source>
<outputDirectory>conf</outputDirectory>
<fileMode>0644</fileMode>
</file>
8 changes: 4 additions & 4 deletions flink-dist/src/main/flink-bin/bin/config.sh
@@ -112,12 +112,12 @@ readFromConfig() {
}

########################################################################################################################
- # DEFAULT CONFIG VALUES: These values will be used when nothing has been specified in conf/flink-conf.yaml
+ # DEFAULT CONFIG VALUES: These values will be used when nothing has been specified in conf/config.yaml
# -or- the respective environment variables are not set.
########################################################################################################################

# WARNING !!! , these values are only used if there is nothing else is specified in
- # conf/flink-conf.yaml
+ # conf/config.yaml

DEFAULT_ENV_PID_DIR="/tmp" # Directory to store *.pid files to
DEFAULT_ENV_LOG_MAX=10 # Maximum number of old log files to keep
@@ -134,7 +134,7 @@ DEFAULT_HADOOP_CONF_DIR="" # Hadoop Configuration Direc
DEFAULT_HBASE_CONF_DIR="" # HBase Configuration Directory, if necessary

########################################################################################################################
- # CONFIG KEYS: The default values can be overwritten by the following keys in conf/flink-conf.yaml
+ # CONFIG KEYS: The default values can be overwritten by the following keys in conf/config.yaml
########################################################################################################################

KEY_TASKM_COMPUTE_NUMA="taskmanager.compute.numa"
@@ -351,7 +351,7 @@ if [ -z "${HIGH_AVAILABILITY}" ]; then
fi

# Arguments for the JVM. Used for job and task manager JVMs.
- # DO NOT USE FOR MEMORY SETTINGS! Use conf/flink-conf.yaml with keys
+ # DO NOT USE FOR MEMORY SETTINGS! Use conf/config.yaml with keys
# JobManagerOptions#TOTAL_PROCESS_MEMORY and TaskManagerOptions#TOTAL_PROCESS_MEMORY for that!
if [ -z "${JVM_ARGS}" ]; then
JVM_ARGS=""
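Per the warning above, memory is configured in the main configuration file rather than via JVM_ARGS. The referenced options (JobManagerOptions#TOTAL_PROCESS_MEMORY and TaskManagerOptions#TOTAL_PROCESS_MEMORY) map to entries along these lines in config.yaml, with illustrative sizes:

    # conf/config.yaml
    jobmanager.memory.process.size: 1600m
    taskmanager.memory.process.size: 1728m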