diff --git a/docs/content.zh/docs/deployment/config.md b/docs/content.zh/docs/deployment/config.md
index 7561852e4d05a..35b85240d0371 100644
--- a/docs/content.zh/docs/deployment/config.md
+++ b/docs/content.zh/docs/deployment/config.md
@@ -318,10 +318,13 @@ See the [History Server Docs]({{< ref "docs/deployment/advanced/historyserver" >
 ----
 ----
 
-# Artifact Fetch
+# Artifact Fetching
+
+Flink can fetch user artifacts stored locally, on remote DFS, or accessible via an HTTP(S) endpoint.
+{{< hint info >}}
+**Note:** This is only supported in Standalone Application Mode and Native Kubernetes Application Mode.
+{{< /hint >}}
 
-*Artifact Fetch* is a features that Flink will fetch user artifact stored in DFS or download by HTTP/HTTPS.
-Note that it is only supported in StandAlone Application Mode and Native Kubernetes Application Mode.
 {{< generated/artifact_fetch_configuration >}}
 
 ----
diff --git a/docs/content.zh/docs/deployment/resource-providers/native_kubernetes.md b/docs/content.zh/docs/deployment/resource-providers/native_kubernetes.md
index 048a4e800c1a2..4f3a537cdbc4b 100644
--- a/docs/content.zh/docs/deployment/resource-providers/native_kubernetes.md
+++ b/docs/content.zh/docs/deployment/resource-providers/native_kubernetes.md
@@ -107,26 +107,24 @@ $ ./bin/flink run-application \
 # FileSystem
 $ ./bin/flink run-application \
     --target kubernetes-application \
-    -Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-1.17-SNAPSHOT.jar \
     -Dkubernetes.cluster-id=my-first-application-cluster \
     -Dkubernetes.container.image=custom-image-name \
     s3://my-bucket/my-flink-job.jar
 
-# Http/Https Schema
+# HTTP(S)
 $ ./bin/flink run-application \
     --target kubernetes-application \
     -Dkubernetes.cluster-id=my-first-application-cluster \
     -Dkubernetes.container.image=custom-image-name \
-    http://ip:port/my-flink-job.jar
+    https://ip:port/my-flink-job.jar
 ```
 
 {{< hint info >}}
-Now, The jar artifact supports downloading from the [flink filesystem]({{< ref "docs/deployment/filesystems/overview" >}}) or Http/Https in Application Mode.
-The jar package will be downloaded from filesystem to
-[user.artifacts.base.dir]({{< ref "docs/deployment/config" >}}#user-artifacts-base-dir)/[kubernetes.namespace]({{< ref "docs/deployment/config" >}}#kubernetes-namespace)/[kubernetes.cluster-id]({{< ref "docs/deployment/config" >}}#kubernetes-cluster-id) path in image.
+JAR fetching supports downloading from [filesystems]({{< ref "docs/deployment/filesystems/overview" >}}) or HTTP(S) in Application Mode.
+The JAR will be downloaded to the
+[user.artifacts.base-dir]({{< ref "docs/deployment/config" >}}#user-artifacts-base-dir)/[kubernetes.namespace]({{< ref "docs/deployment/config" >}}#kubernetes-namespace)/[kubernetes.cluster-id]({{< ref "docs/deployment/config" >}}#kubernetes-cluster-id) path inside the image.
 {{< /hint >}}
 
-Note `local` schema is still supported. If you use `local` schema, the jar must be provided in the image or download by a init container like [Example]({{< ref "docs/deployment/resource-providers/native_kubernetes" >}}#example-of-pod-template).
-
+Note that the `local` scheme is still supported. If you use the `local` scheme, the JAR must be provided in the image or downloaded by an init container, as described in [this example](#example-of-pod-template).
 
 The `kubernetes.cluster-id` option specifies the cluster name and must be unique.
 If you do not specify this option, then Flink will generate a random name.
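To make the fetching options above concrete, here is a minimal sketch (not part of the original docs): it assumes the artifact fetch options can be passed like any other configuration option via `-D`; the bucket, server host, token, and JAR names are placeholders, while `user.artifacts.artifact-list` and `user.artifacts.http-headers` are the option names introduced by this change.

```bash
# Sketch: fetch the job JAR over HTTPS plus two additional artifacts,
# sending a custom header to the artifact server (all values are placeholders).
$ ./bin/flink run-application \
    --target kubernetes-application \
    -Dkubernetes.cluster-id=my-first-application-cluster \
    -Dkubernetes.container.image=custom-image-name \
    -Duser.artifacts.artifact-list='s3://my-bucket/my-udf.jar;https://artifact-server:1234/my-format.jar' \
    -Duser.artifacts.http-headers='Authorization:Bearer my-token' \
    https://artifact-server:1234/my-flink-job.jar
```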
@@ -348,7 +346,7 @@ $ kubectl create clusterrolebinding flink-role-binding-default --clusterrole=edi
 ```
 
 If you do not want to use the `default` service account, use the following command to create a new `flink-service-account` service account and set the role binding.
-Then use the config option `-Dkubernetes.service-account=flink-service-account` to make the JobManager pod use the `flink-service-account` service account to create/delete TaskManager pods and leader ConfigMaps.
+Then use the config option `-Dkubernetes.service-account=flink-service-account` to configure the service account that the JobManager pod uses to create and delete TaskManager pods and leader ConfigMaps.
 Also this will allow the TaskManager to watch leader ConfigMaps to retrieve the address of JobManager and ResourceManager.
 
 ```bash
diff --git a/docs/content.zh/docs/deployment/resource-providers/standalone/docker.md b/docs/content.zh/docs/deployment/resource-providers/standalone/docker.md
index ea12beb65ec8a..a4c6a37819f1c 100644
--- a/docs/content.zh/docs/deployment/resource-providers/standalone/docker.md
+++ b/docs/content.zh/docs/deployment/resource-providers/standalone/docker.md
@@ -121,7 +121,7 @@ The *job artifacts* are included into the class path of Flink's JVM process with
 * all other necessary dependencies or resources, not included into Flink.
 
 To deploy a cluster for a single job with Docker, you need to
-* make *job artifacts* available locally in all containers under `/opt/flink/usrlib` or pass jar path by *jar-file* argument.
+* make *job artifacts* available locally in all containers under `/opt/flink/usrlib`, or pass a list of JARs via the `--jars` argument,
 * start a JobManager container in the *Application cluster* mode
 * start the required number of TaskManager containers.
@@ -156,7 +156,6 @@ To make the **job artifacts available** locally in the container, you can
 
 * **or extend the Flink image** by writing a custom `Dockerfile`, build it and use it for starting the JobManager and TaskManagers:
 
-
     ```dockerfile
     FROM flink
    ADD /host/path/to/job/artifacts/1 /opt/flink/usrlib/artifacts/1
@@ -175,8 +174,7 @@ To make the **job artifacts available** locally in the container, you can
     $ docker run flink_with_job_artifacts taskmanager
     ```
 
-* **or pass jar path by jar-file argument** when you start the JobManager:
-
+* **or pass the JAR paths via the `--jars` argument** when you start the JobManager:
 
     ```sh
     $ FLINK_PROPERTIES="jobmanager.rpc.address: jobmanager"
@@ -184,27 +182,31 @@ To make the **job artifacts available** locally in the container, you can
 
     $ docker run \
         --env FLINK_PROPERTIES="${FLINK_PROPERTIES}" \
-        --env ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-1.17-SNAPSHOT.jar \
+        --env ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{< version >}}.jar \
         --name=jobmanager \
         --network flink-network \
         flink:{{< stable >}}{{< version >}}-scala{{< scala_version >}}{{< /stable >}}{{< unstable >}}latest{{< /unstable >}} standalone-job \
         --job-classname com.job.ClassName \
-        --jar-file s3://my-bucket/my-flink-job.jar
+        --jars s3://my-bucket/my-flink-job.jar,s3://my-bucket/my-flink-udf.jar \
         [--job-id <job id>] \
         [--fromSavepoint /path/to/savepoint [--allowNonRestoredState]] \
         [job arguments]
-
+    ```
 
+    The `standalone-job` argument starts a JobManager container in Application Mode.
 
 #### JobManager additional command line arguments
 
 You can provide the following additional command line arguments to the cluster entrypoint:
 
-* `--job-classname <job class name>`: Class name of the job to run.
+* `--job-classname <job class name>` (optional): Class name of the job to run.
   By default, Flink scans its class path for a JAR with a Main-Class or program-class manifest entry and chooses it as the job class.
   Use this command line argument to manually set the job class.
+
+  {{< hint warning >}}
+  This argument is required if no JAR, or more than one JAR, with such a manifest entry is available on the class path.
+  {{< /hint >}}
 
 * `--job-id <job id>` (optional): Manually set a Flink job ID for the job (default: 00000000000000000000000000000000)
 
@@ -216,12 +218,12 @@ You can provide the following additional command line arguments to the cluster e
 
 * `--allowNonRestoredState` (optional): Skip broken savepoint state
 
-  Additionally you can specify this argument to allow that savepoint state is skipped which cannot be restored.
+  Additionally, you can specify this argument to skip savepoint state that cannot be restored.
 
-* `--jar-file` (optional): the path of jar artifact
+* `--jars` (optional): the comma-separated paths of the job JAR and any additional artifacts
 
-  You can specify this argument to point the job artifacts stored in [flink filesystem]({{< ref "docs/deployment/filesystems/overview" >}}) or Http/Https. Flink will fetch it when deploy the job.
-  (e.g., s3://my-bucket/my-flink-job.jar).
+  You can use this argument to point to job artifacts stored in a [Flink filesystem]({{< ref "docs/deployment/filesystems/overview" >}}) or downloadable via HTTP(S).
+  Flink will fetch these during job deployment (e.g. `--jars s3://my-bucket/my-flink-job.jar`, `--jars s3://my-bucket/my-flink-job.jar,s3://my-bucket/my-flink-udf.jar`).
 
 If the main function of the user job main class accepts arguments, you can also pass them at the end of the `docker run` command.
 
@@ -326,7 +328,7 @@ services:
     image: flink:{{< stable >}}{{< version >}}-scala{{< scala_version >}}{{< /stable >}}{{< unstable >}}latest{{< /unstable >}}
     ports:
       - "8081:8081"
-    command: standalone-job --job-classname com.job.ClassName [--job-id <job id>] [--fromSavepoint /path/to/savepoint] [--allowNonRestoredState] ["--jar-file" "/path/to/user-artifact"] [job arguments]
+    command: standalone-job --job-classname com.job.ClassName [--jars /path/to/artifact1,/path/to/artifact2] [--job-id <job id>] [--fromSavepoint /path/to/savepoint] [--allowNonRestoredState] [job arguments]
     volumes:
       - /host/path/to/job/artifacts:/opt/flink/usrlib
     environment:
diff --git a/docs/content.zh/docs/deployment/resource-providers/standalone/kubernetes.md b/docs/content.zh/docs/deployment/resource-providers/standalone/kubernetes.md
index c1a3069ac51d3..95fb16d7decd7 100644
--- a/docs/content.zh/docs/deployment/resource-providers/standalone/kubernetes.md
+++ b/docs/content.zh/docs/deployment/resource-providers/standalone/kubernetes.md
@@ -120,7 +120,7 @@ $ ./bin/flink run -m localhost:8081 ./examples/streaming/TopSpeedWindowing.jar
 * They can be obtained from the `job-artifacts-volume` in the [resource definition examples](#application-cluster-resource-definitions). If you create the components in a minikube cluster, the job-artifacts-volume in the definition examples can be mounted as a local directory of the host. If you do not use a minikube cluster, any other volume type available in your Kubernetes cluster can be used to supply the *job artifacts*.
 * Build a [custom image]({{< ref "docs/deployment/resource-providers/standalone/docker" >}}#advanced-customization) that already contains the *job artifacts*.
-* Provide the paths of *job artifacts* stored on a DFS or downloadable via HTTP/HTTPS by specifying the [--jar file]({{< ref "docs/deployment/resource-providers/standalone/docker" >}}#jobmanager-additional-command-line-arguments) argument
+* Provide the paths of *job artifacts* stored on a DFS or downloadable via HTTP(S) by specifying the [--jars]({{< ref "docs/deployment/resource-providers/standalone/docker" >}}#jobmanager-additional-command-line-arguments) option
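As a minimal sketch of the bullet above (not from the original docs), the resulting entrypoint invocation inside the JobManager container might look as follows; the class name and artifact URIs are placeholders, and the `--jars` semantics are those documented in the Docker section referenced above.

```bash
# Sketch: the standalone-job entrypoint fetching the job JAR from S3 and a UDF
# JAR over HTTPS before the cluster starts (all URIs are placeholders).
standalone-job \
    --job-classname com.job.ClassName \
    --jars s3://my-bucket/my-flink-job.jar,https://artifact-server:1234/my-udf.jar
```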
 After creating [the common cluster components](#common-cluster-resource-definitions), use [the Application cluster specific resource definitions](#application-cluster-resource-definitions) to launch the cluster with the `kubectl` command:
@@ -627,7 +627,7 @@ spec:
       - name: jobmanager
         image: apache/flink:{{< stable >}}{{< version >}}-scala{{< scala_version >}}{{< /stable >}}{{< unstable >}}latest{{< /unstable >}}
         env:
-        args: ["standalone-job", "--job-classname", "com.job.ClassName", <optional arguments>, <job arguments>] # optional arguments: ["--job-id", "<job id>", "--fromSavepoint", "/path/to/savepoint", "--allowNonRestoredState", "--jar-file", "/path/to/user-artifact"]
+        args: ["standalone-job", "--job-classname", "com.job.ClassName", <optional arguments>, <job arguments>] # optional arguments: ["--job-id", "<job id>", "--jars", "/path/to/artifact1,/path/to/artifact2", "--fromSavepoint", "/path/to/savepoint", "--allowNonRestoredState"]
         ports:
         - containerPort: 6123
           name: rpc
@@ -686,7 +686,7 @@ spec:
               apiVersion: v1
               fieldPath: status.podIP
         # The following args overwrite the value of jobmanager.rpc.address configured in the config map with the value of POD_IP.
-        args: ["standalone-job", "--host", "$(POD_IP)", "--job-classname", "com.job.ClassName", <optional arguments>, <job arguments>] # optional arguments: ["--job-id", "<job id>", "--fromSavepoint", "/path/to/savepoint", "--allowNonRestoredState", "--jar-file", "/path/to/user-artifact"]
+        args: ["standalone-job", "--host", "$(POD_IP)", "--job-classname", "com.job.ClassName", <optional arguments>, <job arguments>] # optional arguments: ["--job-id", "<job id>", "--jars", "/path/to/artifact1,/path/to/artifact2", "--fromSavepoint", "/path/to/savepoint", "--allowNonRestoredState"]
         ports:
         - containerPort: 6123
           name: rpc
diff --git a/docs/content/docs/deployment/config.md b/docs/content/docs/deployment/config.md
index 53f03a29cfbbb..c654663088807 100644
--- a/docs/content/docs/deployment/config.md
+++ b/docs/content/docs/deployment/config.md
@@ -320,10 +320,13 @@ See the [History Server Docs]({{< ref "docs/deployment/advanced/historyserver" >
 ----
 ----
 
-# Artifact Fetch
+# Artifact Fetching
+
+Flink can fetch user artifacts stored locally, on remote DFS, or accessible via an HTTP(S) endpoint.
+{{< hint info >}}
+**Note:** This is only supported in Standalone Application Mode and Native Kubernetes Application Mode.
+{{< /hint >}}
 
-*Artifact Fetch* is a features that Flink will fetch user artifact stored in DFS or download by HTTP/HTTPS.
-Note that it is only supported in StandAlone Application Mode and Native Kubernetes Application Mode.
 {{< generated/artifact_fetch_configuration >}}
 
 ----
diff --git a/docs/content/docs/deployment/resource-providers/native_kubernetes.md b/docs/content/docs/deployment/resource-providers/native_kubernetes.md
index 0bc75c26f2b55..8b676ab983bf1 100644
--- a/docs/content/docs/deployment/resource-providers/native_kubernetes.md
+++ b/docs/content/docs/deployment/resource-providers/native_kubernetes.md
@@ -119,19 +119,20 @@ $ ./bin/flink run-application \
     -Dkubernetes.container.image=custom-image-name \
     s3://my-bucket/my-flink-job.jar
 
-# Http/Https Schema
+# HTTP(S)
 $ ./bin/flink run-application \
     --target kubernetes-application \
     -Dkubernetes.cluster-id=my-first-application-cluster \
     -Dkubernetes.container.image=custom-image-name \
-    http://ip:port/my-flink-job.jar
+    https://ip:port/my-flink-job.jar
 ```
 
 {{< hint info >}}
-Now, The jar artifact supports downloading from the [flink filesystem]({{< ref "docs/deployment/filesystems/overview" >}}) or Http/Https in Application Mode.
-The jar package will be downloaded from filesystem to
-[user.artifacts.base.dir]({{< ref "docs/deployment/config" >}}#user-artifacts-base-dir)/[kubernetes.namespace]({{< ref "docs/deployment/config" >}}#kubernetes-namespace)/[kubernetes.cluster-id]({{< ref "docs/deployment/config" >}}#kubernetes-cluster-id) path in image.
+JAR fetching supports downloading from [filesystems]({{< ref "docs/deployment/filesystems/overview" >}}) or HTTP(S) in Application Mode.
+The JAR will be downloaded to the
+[user.artifacts.base-dir]({{< ref "docs/deployment/config" >}}#user-artifacts-base-dir)/[kubernetes.namespace]({{< ref "docs/deployment/config" >}}#kubernetes-namespace)/[kubernetes.cluster-id]({{< ref "docs/deployment/config" >}}#kubernetes-cluster-id) path inside the image.
 {{< /hint >}}
 
-Note `local` schema is still supported. If you use `local` schema, the jar must be provided in the image or download by a init container like [Example]({{< ref "docs/deployment/resource-providers/native_kubernetes" >}}#example-of-pod-template).
+
+Note that the `local` scheme is still supported. If you use the `local` scheme, the JAR must be provided in the image or downloaded by an init container, as described in [this example](#example-of-pod-template).
 
 The `kubernetes.cluster-id` option specifies the cluster name and must be unique.
 If you do not specify this option, then Flink will generate a random name.
 
@@ -353,7 +354,7 @@ $ kubectl create clusterrolebinding flink-role-binding-default --clusterrole=edi
 ```
 
 If you do not want to use the `default` service account, use the following command to create a new `flink-service-account` service account and set the role binding.
-Then use the config option `-Dkubernetes.service-account=flink-service-account` to make the JobManager pod use the `flink-service-account` service account to create/delete TaskManager pods and leader ConfigMaps.
+Then use the config option `-Dkubernetes.service-account=flink-service-account` to configure the service account that the JobManager pod uses to create and delete TaskManager pods and leader ConfigMaps.
 Also this will allow the TaskManager to watch leader ConfigMaps to retrieve the address of JobManager and ResourceManager.
 
 ```bash
diff --git a/docs/content/docs/deployment/resource-providers/standalone/docker.md b/docs/content/docs/deployment/resource-providers/standalone/docker.md
index 3b9917089da88..ba355b69ced56 100644
--- a/docs/content/docs/deployment/resource-providers/standalone/docker.md
+++ b/docs/content/docs/deployment/resource-providers/standalone/docker.md
@@ -121,7 +121,7 @@ The *job artifacts* are included into the class path of Flink's JVM process with
 * all other necessary dependencies or resources, not included into Flink.
 
 To deploy a cluster for a single job with Docker, you need to
-* make *job artifacts* available locally in all containers under `/opt/flink/usrlib` or pass jar path by *jar-file* argument.
+* make *job artifacts* available locally in all containers under `/opt/flink/usrlib`, or pass a list of JARs via the `--jars` argument,
 * start a JobManager container in the *Application cluster* mode
 * start the required number of TaskManager containers.
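The three steps above can be sketched end to end as follows. This is not an excerpt from the guide: the image tag, network name, job class, and bucket are placeholders, while `standalone-job` and `taskmanager` are the entrypoint commands described in this section.

```bash
# Sketch: deploy a single-job (Application) cluster with plain Docker.
$ docker network create flink-network

# Start a JobManager in Application Mode; the job JAR is fetched via --jars.
# Fetching from S3 assumes the S3 filesystem plugin is enabled, e.g. via
# ENABLE_BUILT_IN_PLUGINS as shown in the example above.
$ docker run -d --name=jobmanager --network flink-network \
    --env FLINK_PROPERTIES="jobmanager.rpc.address: jobmanager" \
    flink:latest standalone-job \
    --job-classname com.job.ClassName \
    --jars s3://my-bucket/my-flink-job.jar

# Start as many TaskManagers as the job needs.
$ docker run -d --name=taskmanager --network flink-network \
    --env FLINK_PROPERTIES="jobmanager.rpc.address: jobmanager" \
    flink:latest taskmanager
```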
@@ -156,7 +156,6 @@ To make the **job artifacts available** locally in the container, you can
 
 * **or extend the Flink image** by writing a custom `Dockerfile`, build it and use it for starting the JobManager and TaskManagers:
 
-
     ```dockerfile
     FROM flink
     ADD /host/path/to/job/artifacts/1 /opt/flink/usrlib/artifacts/1
@@ -175,8 +174,7 @@ To make the **job artifacts available** locally in the container, you can
     $ docker run flink_with_job_artifacts taskmanager
     ```
 
-* **or pass jar path by jar-file argument** when you start the JobManager:
-
+* **or pass the JAR paths via the `--jars` argument** when you start the JobManager:
 
     ```sh
     $ FLINK_PROPERTIES="jobmanager.rpc.address: jobmanager"
@@ -184,27 +182,31 @@ To make the **job artifacts available** locally in the container, you can
 
     $ docker run \
         --env FLINK_PROPERTIES="${FLINK_PROPERTIES}" \
-        --env ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-1.17-SNAPSHOT.jar \
+        --env ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{< version >}}.jar \
         --name=jobmanager \
         --network flink-network \
         flink:{{< stable >}}{{< version >}}-scala{{< scala_version >}}{{< /stable >}}{{< unstable >}}latest{{< /unstable >}} standalone-job \
         --job-classname com.job.ClassName \
-        --jar-file s3://my-bucket/my-flink-job.jar
+        --jars s3://my-bucket/my-flink-job.jar,s3://my-bucket/my-flink-udf.jar \
         [--job-id <job id>] \
         [--fromSavepoint /path/to/savepoint [--allowNonRestoredState]] \
         [job arguments]
-
+    ```
 
+    The `standalone-job` argument starts a JobManager container in Application Mode.
 
 #### JobManager additional command line arguments
 
 You can provide the following additional command line arguments to the cluster entrypoint:
 
-* `--job-classname <job class name>`: Class name of the job to run.
+* `--job-classname <job class name>` (optional): Class name of the job to run.
 
   By default, Flink scans its class path for a JAR with a Main-Class or program-class manifest entry and chooses it as the job class.
   Use this command line argument to manually set the job class.
+
+  {{< hint warning >}}
+  This argument is required if no JAR, or more than one JAR, with such a manifest entry is available on the class path.
+  {{< /hint >}}
 
 * `--job-id <job id>` (optional): Manually set a Flink job ID for the job (default: 00000000000000000000000000000000)
 
@@ -216,12 +218,12 @@ You can provide the following additional command line arguments to the cluster e
 
 * `--allowNonRestoredState` (optional): Skip broken savepoint state
 
-  Additionally you can specify this argument to allow that savepoint state is skipped which cannot be restored.
+  Additionally, you can specify this argument to skip savepoint state that cannot be restored.
 
-* `--jar-file` (optional): the path of jar artifact
+* `--jars` (optional): the comma-separated paths of the job JAR and any additional artifacts
 
-  You can specify this argument to point the job artifacts stored in [flink filesystem]({{< ref "docs/deployment/filesystems/overview" >}}) or Http/Https. Flink will fetch it when deploy the job.
-  (e.g., s3://my-bucket/my-flink-job.jar).
+  You can use this argument to point to job artifacts stored in a [Flink filesystem]({{< ref "docs/deployment/filesystems/overview" >}}) or downloadable via HTTP(S).
+  Flink will fetch these during job deployment (e.g. `--jars s3://my-bucket/my-flink-job.jar`, `--jars s3://my-bucket/my-flink-job.jar,s3://my-bucket/my-flink-udf.jar`).
 
 If the main function of the user job main class accepts arguments, you can also pass them at the end of the `docker run` command.
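Since job arguments simply trail the entrypoint options, a brief sketch may help. The `--input`/`--output` flags below are hypothetical arguments of the user job, and all paths and names are placeholders.

```bash
# Sketch: everything after the entrypoint options is passed to the job's main().
$ docker run \
    --env FLINK_PROPERTIES="${FLINK_PROPERTIES}" \
    --network flink-network \
    flink:latest standalone-job \
    --job-classname com.job.ClassName \
    --jars s3://my-bucket/my-flink-job.jar \
    --input s3://my-bucket/input --output s3://my-bucket/output
```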
@@ -326,7 +328,7 @@ services:
     image: flink:{{< stable >}}{{< version >}}-scala{{< scala_version >}}{{< /stable >}}{{< unstable >}}latest{{< /unstable >}}
     ports:
       - "8081:8081"
-    command: standalone-job --job-classname com.job.ClassName [--job-id <job id>] [--fromSavepoint /path/to/savepoint] [--allowNonRestoredState] ["--jar-file" "/path/to/user-artifact"] [job arguments]
+    command: standalone-job --job-classname com.job.ClassName [--job-id <job id>] [--jars /path/to/artifact1,/path/to/artifact2] [--fromSavepoint /path/to/savepoint] [--allowNonRestoredState] [job arguments]
     volumes:
       - /host/path/to/job/artifacts:/opt/flink/usrlib
     environment:
diff --git a/docs/content/docs/deployment/resource-providers/standalone/kubernetes.md b/docs/content/docs/deployment/resource-providers/standalone/kubernetes.md
index c2759d45fa687..f050a19e21c51 100644
--- a/docs/content/docs/deployment/resource-providers/standalone/kubernetes.md
+++ b/docs/content/docs/deployment/resource-providers/standalone/kubernetes.md
@@ -124,7 +124,7 @@ The *job artifacts* could be provided by these ways:
     The definition examples mount the volume as a local directory of the host assuming that you create the components in a minikube cluster.
     If you do not use a minikube cluster, you can use any other type of volume, available in your Kubernetes cluster, to supply the *job artifacts*.
 * You can build [a custom image]({{< ref "docs/deployment/resource-providers/standalone/docker" >}}#advanced-customization) which already contains the artifacts instead.
-* You can specify [--jar file]({{< ref "docs/deployment/resource-providers/standalone/docker" >}}#jobmanager-additional-command-line-arguments) arguments to point the job artifacts stored in [flink filesystem]({{< ref "docs/deployment/filesystems/overview" >}}) or Http/Https.
+* You can pass artifacts that are stored locally, on a [remote DFS]({{< ref "docs/deployment/filesystems/overview" >}}), or accessible via an HTTP(S) endpoint, using the [--jars]({{< ref "docs/deployment/resource-providers/standalone/docker" >}}#jobmanager-additional-command-line-arguments) option.
 
 After creating [the common cluster components](#common-cluster-resource-definitions), use [the Application cluster specific resource definitions](#application-cluster-resource-definitions) to launch the cluster with the `kubectl` command:
 
@@ -613,7 +613,7 @@ spec:
       - name: jobmanager
         image: apache/flink:{{< stable >}}{{< version >}}-scala{{< scala_version >}}{{< /stable >}}{{< unstable >}}latest{{< /unstable >}}
         env:
-        args: ["standalone-job", "--job-classname", "com.job.ClassName", <optional arguments>, <job arguments>] # optional arguments: ["--job-id", "<job id>", "--fromSavepoint", "/path/to/savepoint", "--allowNonRestoredState", "--jar-file", "/path/to/user-artifact"]
+        args: ["standalone-job", "--job-classname", "com.job.ClassName", <optional arguments>, <job arguments>] # optional arguments: ["--job-id", "<job id>", "--jars", "/path/to/artifact1,/path/to/artifact2", "--fromSavepoint", "/path/to/savepoint", "--allowNonRestoredState"]
         ports:
         - containerPort: 6123
           name: rpc
@@ -672,7 +672,7 @@ spec:
               apiVersion: v1
               fieldPath: status.podIP
         # The following args overwrite the value of jobmanager.rpc.address configured in the configuration config map to POD_IP.
- args: ["standalone-job", "--host", "$(POD_IP)", "--job-classname", "com.job.ClassName", , ] # optional arguments: ["--job-id", "", "--fromSavepoint", "/path/to/savepoint", "--allowNonRestoredState", "--jar-file", "/path/to/user-artifact"] + args: ["standalone-job", "--host", "$(POD_IP)", "--job-classname", "com.job.ClassName", , ] # optional arguments: ["--job-id", "", "--jars", "/path/to/artifact1,/path/to/artifact2", "--fromSavepoint", "/path/to/savepoint", "--allowNonRestoredState"] ports: - containerPort: 6123 name: rpc diff --git a/docs/content/docs/deployment/resource-providers/standalone/overview.md b/docs/content/docs/deployment/resource-providers/standalone/overview.md index 1c0b4b94ea930..7c37f2789e1b2 100644 --- a/docs/content/docs/deployment/resource-providers/standalone/overview.md +++ b/docs/content/docs/deployment/resource-providers/standalone/overview.md @@ -96,13 +96,25 @@ Then, we can launch the JobManager: $ ./bin/standalone-job.sh start --job-classname org.apache.flink.streaming.examples.windowing.TopSpeedWindowing ``` -The web interface is now available at [localhost:8081](http://localhost:8081). However, the application won't be able to start, because there are no TaskManagers running yet: +The web interface is now available at [localhost:8081](http://localhost:8081). + +{{< hint info >}} +Another approach would be to use the artifact fetching mechanism via the `--jars` option: + +```bash +$ ./bin/standalone-job.sh start -D user.artifacts.base-dir=/tmp/flink-artifacts --jars local:///path/to/TopSpeedWindowing.jar +``` + +Read more about this CLI option [here]({{< ref "docs/deployment/resource-providers/standalone/docker" >}}#jobmanager-additional-command-line-arguments). +{{< /hint >}} + +However, the application won't be able to start, because there are no TaskManagers running yet: ```bash $ ./bin/taskmanager.sh start ``` -Note: You can start multiple TaskManagers, if your application needs more resources. +Note You can start multiple TaskManagers, if your application needs more resources. Stopping the services is also supported via the scripts. Call them multiple times if you want to stop multiple instances, or use `stop-all`: diff --git a/docs/layouts/shortcodes/generated/artifact_fetch_configuration.html b/docs/layouts/shortcodes/generated/artifact_fetch_configuration.html index be2cdb4a324f4..c97d999fccb82 100644 --- a/docs/layouts/shortcodes/generated/artifact_fetch_configuration.html +++ b/docs/layouts/shortcodes/generated/artifact_fetch_configuration.html @@ -9,16 +9,28 @@ -
-            <td><h5>user.artifacts.base.dir</h5></td>
+            <td><h5>user.artifacts.artifact-list</h5></td>
+            <td style="word-wrap: break-word;">(none)</td>
+            <td>List&lt;String&gt;</td>
+            <td>A semicolon-separated list of the additional artifacts to fetch for the job before setting up the application cluster. All given elements have to be valid URIs. Example: s3://sandbox-bucket/format.jar;http://sandbox-server:1234/udf.jar</td>
+        </tr>
+        <tr>
+            <td><h5>user.artifacts.base-dir</h5></td>
             <td style="word-wrap: break-word;">"/opt/flink/artifacts"</td>
             <td>String</td>
             <td>The base dir to put the application job artifacts.</td>
         </tr>
         <tr>
-            <td><h5>user.artifacts.http.header</h5></td>
+            <td><h5>user.artifacts.http-headers</h5></td>
             <td style="word-wrap: break-word;">(none)</td>
             <td>Map</td>
-            <td>Custom HTTP header for HttpArtifactFetcher. The header will be applied when getting the application job artifacts. Expected format: headerKey1:headerValue1,headerKey2:headerValue2.</td>
+            <td>Custom HTTP header(s) for the HTTP artifact fetcher. The header(s) will be applied when getting the application job artifacts. Expected format: headerKey1:headerValue1,headerKey2:headerValue2.</td>
+        </tr>
+        <tr>
+            <td><h5>user.artifacts.raw-http-enabled</h5></td>
+            <td style="word-wrap: break-word;">false</td>
+            <td>Boolean</td>
+            <td>Enables artifact fetching from raw HTTP endpoints.</td>
         </tr>
diff --git a/docs/layouts/shortcodes/generated/kubernetes_config_configuration.html b/docs/layouts/shortcodes/generated/kubernetes_config_configuration.html
index a0c530045cf11..7bbdfd5e404f5 100644
--- a/docs/layouts/shortcodes/generated/kubernetes_config_configuration.html
+++ b/docs/layouts/shortcodes/generated/kubernetes_config_configuration.html
@@ -290,11 +290,5 @@
             <td>Integer</td>
             <td>Defines the number of Kubernetes transactional operation retries before the client gives up. For example, FlinkKubeClient#checkAndUpdateConfigMap.</td>
         </tr>
-        <tr>
-            <td><h5>kubernetes.user.artifacts.emptyDir.enable</h5></td>
-            <td style="word-wrap: break-word;">true</td>
-            <td>Boolean</td>
-            <td>Whether to enable create mount an empty dir for user.artifacts.base.dir to keep user artifacts if container restart.</td>
-        </tr>
     </tbody>
 </table>
diff --git a/flink-clients/pom.xml b/flink-clients/pom.xml
index d80b06a1e6b4b..6f654a2c3f889 100644
--- a/flink-clients/pom.xml
+++ b/flink-clients/pom.xml
@@ -118,6 +118,14 @@ under the License.
 			<scope>test</scope>
 			<classifier>job-lib-jar</classifier>
 		</dependency>
+
+		<dependency>
+			<groupId>org.apache.flink</groupId>
+			<artifactId>flink-clients-test-utils</artifactId>
+			<version>${project.version}</version>
+			<scope>test</scope>
+			<classifier>additional-artifact-jar</classifier>
+		</dependency>
 	</dependencies>