Release 29.12.2022
* API Gateway, Cloud Functions, Serverless Containers: updated information about roles.
* Data Transfer: added a tutorial on delivering data from Managed Service for PostgreSQL to Managed Service for ClickHouse.
* Managed Service for OpenSearch: added pricing details.
* Translations updated.
* Fixes and improvements.
DataUI VCS Robot authored and SerjKanunnikov committed Dec 29, 2022
1 parent b0cb378 commit ff0daee
Showing 235 changed files with 3,388 additions and 1,420 deletions.
Binary file removed en/_assets/vpc/security/service-roles-hierarchy.png
60 changes: 60 additions & 0 deletions en/_assets/vpc/security/service-roles-hierarchy.svg
3 changes: 3 additions & 0 deletions en/_includes/compute/backup-info.md
@@ -0,0 +1,3 @@
(optional) To automatically back up your instances using [{{ backup-name }}](../../backup/), under **Backup**, select the option of connecting an instance to the service.

The option shows up if you requested access to the service from our [technical support]({{ link-console-support }}/create-ticket) and selected a supported operating system for your VM. For more information about setting up a VM, see [{#T}](../../backup/concepts/vm-connection.md).
5 changes: 5 additions & 0 deletions en/_includes/compute/password-reset-linux.md
@@ -0,0 +1,5 @@
{% note info %}

It currently isn't possible to reset a password on a Linux virtual machine using {{ yandex-cloud }} tools.

{% endnote %}
53 changes: 53 additions & 0 deletions en/_includes/compute/terraform-empty-disk-create.md
@@ -0,0 +1,53 @@
To create an empty disk:

1. Describe the resource parameters in the `yandex_compute_disk` configuration file.

Example configuration file structure:

```hcl
resource "yandex_compute_disk" "empty-disk" {
name = "empty-disk"
type = "network-hdd"
zone = "<availability_zone>"
size = <disk_size>
block_size = <block_size>
}
```
Where:
* `name`: Disk name. Name format:

{% include [name-format](../../_includes/name-format.md) %}

* `type`: Type of the disk being created.
* `zone`: [Availability zone](../../overview/concepts/geo-scope.md). The disk must be in the same availability zone as the placement group where you want to create it. We recommend creating disks in the `{{ region-id }}-a` or `{{ region-id }}-b` availability zone.
* `size`: Disk size in GB. The maximum disk size depends on the chosen block size.
* `block_size`: Block size in bytes (the minimum storage size for information on the disk). By default, the block size of all created disks is 4 KB, but that's not enough for disks larger than 8 TB. For more information, see [{#T}](../../compute/operations/disk-create/empty-disk-blocksize.md).

For more information about the `yandex_compute_disk` resource, see the [provider documentation]({{ tf-provider-link }}/compute_disk).
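As an illustration, here is a filled-in version of the structure above for a large disk; the specific size and block size are example assumptions (see the linked article on block sizes for the exact limits):

```hcl
# Hypothetical example: a 10 TB network HDD. Disks over 8 TB need a
# block size larger than the 4 KB default; 32 KB is used here as an example.
resource "yandex_compute_disk" "empty-disk" {
  name       = "empty-disk"
  type       = "network-hdd"
  zone       = "{{ region-id }}-a"
  size       = 10240
  block_size = 32768
}
```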

1. Make sure that the configuration files are valid.

1. In the command line, go to the directory where you created the configuration file.
1. Run the check using the command:

```bash
terraform plan
```

If the configuration is described correctly, the terminal displays a list of the resources to be created and their parameters. If the configuration contains errors, {{ TF }} will point them out.

1. Deploy the cloud resources.

1. If the configuration doesn't contain any errors, run the command:
```bash
terraform apply
```
1. Confirm that you want to create the resources.
Afterwards, all the necessary resources are created in the specified folder. You can verify that the resources are there and properly configured in the [management console]({{ link-console-main }}) or using the following [CLI](../../cli/quickstart.md) command:
```bash
yc compute disk list
```
120 changes: 120 additions & 0 deletions en/_includes/compute/vm-connect-powershell.md
@@ -0,0 +1,120 @@
PowerShell Remoting Protocol (PSRP) with access via HTTPS is enabled for images of all versions and editions of the Windows operating system prepared for {{ yandex-cloud }}. When the VM starts (its status is `RUNNING`), you can connect to it using PSRP.

[Security groups](../../vpc/concepts/security-groups.md) of the VM must allow incoming TCP traffic to port 5986.

{% include [security-groups-note](../../compute/_includes_service/security-groups-note.md) %}

To connect to the VM, specify its public IP address or fully qualified domain name ([FQDN](https://en.wikipedia.org/wiki/Fully_qualified_domain_name)). Connecting by FQDN is only possible from another {{ yandex-cloud }} VM in the same network. You can find both the IP address and the FQDN in the management console, in the **Network** section on the virtual machine's page.

To connect to the VM:

1. Open the PowerShell console.

1. Create a credential object. Replace `<password>` with the password of the `Administrator` user that you specified when creating the VM:

```powershell
$myUserName = "Administrator"
$myPlainTextPassword = "<password>"
$myPassword = $myPlainTextPassword | ConvertTo-SecureString -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential($myUserName, $myPassword)
```
1. Make sure that the username and password entered in the object are correct:
```powershell
$networkCredential = $credential.GetNetworkCredential()
$networkCredential | Select-Object UserName, Password
```
Result:
```text
UserName Password
-------- --------
Administrator <password>
```
1. Create a variable for the VM's IP address:
```powershell
$ipAddress = "<ip-address>"
```
1. Create an object named `SessionOption`. In the object, specify the checks to skip:
```powershell
$sessionOption = New-PSSessionOption `
-SkipCACheck `
-SkipCNCheck `
-SkipRevocationCheck
```
1. Connect to an interactive session:
```powershell
$psSession = @{
ComputerName = $ipAddress
UseSSL = $true
Credential = $credential
SessionOption = $sessionOption
}
Enter-PSSession @psSession
```
Result:
```text
[<ip-address>]: PS C:\Users\Administrator\Documents>
```
Terminate the session:
```powershell
Exit-PSSession
```
1. Create a session for non-interactive command execution:
```powershell
$session = New-PSSession @psSession
```
Get a list of open sessions:
```powershell
Get-PSSession
```
Result:
```text
Id Name ComputerName ComputerType State ConfigurationName Availability
-- ---- ------------ ------------ ----- ----------------- ------------
2 WinRM2 <ip-address> RemoteMachine Opened Microsoft.PowerShell Available
```
Run the command on a remote VM:
```powershell
$scriptBlock = { Get-Process }
$invokeCommand = @{
ScriptBlock = $scriptBlock
Session = $session
}
Invoke-Command @invokeCommand
```
Result:
```text
Handles NPM(K) PM(K) WS(K) CPU(s) Id SI ProcessName PSComputerName
------- ------ ----- ----- ------ -- -- ----------- --------------
249 13 4248 16200 0.11 4176 2 conhost <ip-address>
283 12 1888 4220 0.20 420 0 csrss <ip-address>
...
```
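When you are done with non-interactive commands, it is good practice to close the session you created; a minimal sketch:

```powershell
# Close the session created above and release the connection on the VM.
Remove-PSSession -Session $session

# Verify that no sessions remain open.
Get-PSSession
```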
#### See also {#see-also}
* [PowerShell sessions (PSSessions)](https://docs.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_pssessions)
4 changes: 2 additions & 2 deletions en/_includes/data-transfer/connectivity-marix.md
@@ -5,13 +5,13 @@ Possible source and target combinations:
| Source \ Target | [{{ KF }}](../../data-transfer/operations/endpoint/target/kafka.md) | [{{ CH }}](../../data-transfer/operations/endpoint/target/clickhouse.md) | [{{ GP }}](../../data-transfer/operations/endpoint/target/greenplum.md) | [{{ MG }}](../../data-transfer/operations/endpoint/target/mongodb.md) | [{{ MY }}](../../data-transfer/operations/endpoint/target/mysql.md) | [{{ PG }}](../../data-transfer/operations/endpoint/target/postgresql.md) | [{{ ydb-short-name }}](../../data-transfer/operations/endpoint/target/yandex-database.md) | [{{ objstorage-name }}](../../data-transfer/operations/endpoint/target/object-storage.md) | Source / Target |
|:-------------------------------------------------------------------------------------:|:-------------------------------------------------------------------:|:------------------------------------------------------------------------:|:-----------------------------------------------------------------------:|:---------------------------------------------------------------------:|:-------------------------------------------------------------------:|:------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------:|
| [Airbyte®](#airbyte) | - | C^1^ | - | C^1^ | C^1^ | C^1^ | C^1^ | - | [Airbyte®](#airbyte) |
| [{{ KF }}](../../data-transfer/operations/endpoint/source/kafka.md) | R^1^ | [R^1^](../../data-transfer/tutorials/mkf-to-mch.md) | R^1^ | - | - | - | [R^1^](../../data-transfer/tutorials/mkf-to-ydb.md) | R^1^ | [{{ KF }}](../../data-transfer/operations/endpoint/source/kafka.md) |
| [{{ KF }}](../../data-transfer/operations/endpoint/source/kafka.md) | [R^1^](../../data-transfer/tutorials/mkf-to-mkf.md) | [R^1^](../../data-transfer/tutorials/mkf-to-mch.md) | R^1^ | - | - | - | [R^1^](../../data-transfer/tutorials/mkf-to-ydb.md) | R^1^ | [{{ KF }}](../../data-transfer/operations/endpoint/source/kafka.md) |
| [{{ CH }}](../../data-transfer/operations/endpoint/source/clickhouse.md) | - | [C](../../data-transfer/tutorials/managed-clickhouse.md) | - | - | - | - | - | - | [{{ CH }}](../../data-transfer/operations/endpoint/source/clickhouse.md) |
| [{{ GP }}](../../data-transfer/operations/endpoint/source/greenplum.md) | - | [C](../../data-transfer/tutorials/greenplum-to-clickhouse.md) | [C^1^](../../data-transfer/tutorials/managed-greenplum.md) | - | - | [C^1^](../../data-transfer/tutorials/greenplum-to-postgresql.md) | - | - | [{{ GP }}](../../data-transfer/operations/endpoint/source/greenplum.md) |
| [{{ MG }}](../../data-transfer/operations/endpoint/source/mongodb.md) | - | - | - | [CR](../../data-transfer/tutorials/managed-mongodb.md) | - | - | - | C^1^ | [{{ MG }}](../../data-transfer/operations/endpoint/source/mongodb.md) |
| [{{ MY }}](../../data-transfer/operations/endpoint/source/mysql.md) | [CR](../../data-transfer/tutorials/cdc-mmy.md) | CR | - | - | [C](../../data-transfer/tutorials/managed-mysql.md)R | - | [CR^1^](../../data-transfer/tutorials/managed-mysql-to-ydb.md) | [C^1^](../../data-transfer/tutorials/mmy-objs-migration.md) | [{{ MY }}](../../data-transfer/operations/endpoint/source/mysql.md) |
| [Oracle](../../data-transfer/operations/endpoint/source/oracle.md) | - | CR^1^ | - | - | - | CR^1^ | - | - | [Oracle](../../data-transfer/operations/endpoint/source/oracle.md) |
| [{{ PG }}](../../data-transfer/operations/endpoint/source/postgresql.md) | [CR](../../data-transfer/tutorials/cdc-mpg.md) | [CR](../../data-transfer/tutorials/rdbms-to-clickhouse.md) | [C^1^](../../data-transfer/tutorials/managed-greenplum.md) | - | - | [C](../../data-transfer/tutorials/managed-postgresql.md)R | CR^1^ | C^1^ | [{{ PG }}](../../data-transfer/operations/endpoint/source/postgresql.md) |
| [{{ PG }}](../../data-transfer/operations/endpoint/source/postgresql.md) | [CR](../../data-transfer/tutorials/cdc-mpg.md) | [CR](../../data-transfer/tutorials/rdbms-to-clickhouse.md) | [C^1^](../../data-transfer/tutorials/managed-greenplum.md) | - | - | [C](../../data-transfer/tutorials/managed-postgresql.md)R | [CR^1^](../../data-transfer/tutorials/mpg-to-ydb.md) | [C^1^](../../data-transfer/tutorials/mpg-to-objstorage.md) | [{{ PG }}](../../data-transfer/operations/endpoint/source/postgresql.md) |
| [{{ ydb-short-name }}](../../data-transfer/operations/endpoint/source/ydb.md) | - | C^1^ | - | - | - | - | - | C^1^ | [{{ ydb-short-name }}](../../data-transfer/operations/endpoint/source/ydb.md) |
| [{{ yds-full-name }}](../../data-transfer/operations/endpoint/source/data-streams.md) | - | [R^1^](../../data-transfer/tutorials/yds-to-clickhouse.md) | R^1^ | R^1^ | - | R^1^ | R^1^ | [R^1^](../../data-transfer/tutorials/yds-to-objstorage.md) | [{{ yds-full-name }}](../../data-transfer/operations/endpoint/source/data-streams.md) |

@@ -1,6 +1,6 @@
### No server connection {#subnet-without-nat}

There is no connection due to specifying a subnet without egress NAT.
There is no connection because the specified subnet has no preconfigured egress NAT gateway.

Error message:

@@ -9,6 +9,6 @@ Can't connect to server: Can't ping server:
dial tcp <address of an endpoint's host>:<port>: connect: connection timed out
```

A transfer with one endpoint `on_premise` and another one having a subnet with egress NAT disabled fails.
A transfer fails if one of its endpoints is `on_premise` and the other endpoint uses a subnet that has no egress NAT gateway.

**Solution:** disable the endpoint setting that points to the subnet and [reactivate](../../../../data-transfer/operations/transfer.md#activate) the transfer.
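Alternatively, you can add an egress NAT gateway to the subnet. A hedged {{ TF }} sketch, assuming the `yandex_vpc_gateway` and `yandex_vpc_route_table` resources of the Yandex provider; the network ID is a placeholder:

```hcl
# Assumed resource names from the Yandex Terraform provider.
resource "yandex_vpc_gateway" "egress" {
  name = "egress-gateway"
  shared_egress_gateway {}
}

# Route all outbound traffic from the subnet's network through the gateway.
resource "yandex_vpc_route_table" "egress" {
  network_id = "<network_id>"
  static_route {
    destination_prefix = "0.0.0.0/0"
    gateway_id         = yandex_vpc_gateway.egress.id
  }
}
```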
42 changes: 42 additions & 0 deletions en/_includes/data-transfer/troubles/postgresql/lock-replication.md
@@ -0,0 +1,42 @@
### Couldn't create a replication slot at the activation step {#lock-replication}

At the beginning of the transfer, one or more [replication slots]({{ pg-docs }}/logicaldecoding-explanation.html#LOGICALDECODING-REPLICATION-SLOTS) are created in the source database, which locks the database objects involved. If another transaction already holds a lock on one of these objects, a lock conflict occurs and the transfer terminates with an error.

**Solution:**

1. Get the `PID` of the process that competes for locks with the transfer:

```sql
/* Get PID of the transfer */
SELECT active_pid
FROM pg_replication_slots
WHERE slot_name = '<transfer ID>';

/* Find the PID of the locking process */
SELECT pid, pg_blocking_pids(pid) as blocked_by
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;
```

```text
pid | blocked_by
-----------------+-------------------
<transfer PID> | {<locking transaction PID>}
(1 row)
```

1. Look up the locking query:

```sql
SELECT query, usename
FROM pg_stat_activity
WHERE pid = <locking transaction PID>;
```

1. (Optional) Terminate the locking transaction with the following command:

```sql
SELECT pg_terminate_backend(<locking transaction PID>);
```

1. [Reactivate the transfer](../../../../data-transfer/operations/transfer.md#activate).
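The lookups from the first two steps can also be combined into a single query; a sketch using the same system views as above:

```sql
/* For each blocked process, show which process blocks it
   and what that blocking process is currently running. */
SELECT blocked.pid     AS blocked_pid,
       blocker.pid     AS blocking_pid,
       blocker.usename AS blocking_user,
       blocker.query   AS blocking_query
FROM pg_stat_activity AS blocked
JOIN pg_stat_activity AS blocker
  ON blocker.pid = ANY (pg_blocking_pids(blocked.pid));
```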
2 changes: 1 addition & 1 deletion en/_includes/functions/cloudlogs-trigger-create.md
@@ -4,7 +4,7 @@ Trigger for {{ cloud-logs-name }} is outdated. Use [triggers for {{ cloud-loggin

{% endnote %}

Create a [{{ cloud-logs-name }} trigger](../../functions/concepts/trigger/cloudlogs-trigger.md) that calls your function when messages are received in a [log group](../../functions/concepts/log-group.md).
Create a [trigger for {{ cloud-logs-name }}](../../functions/concepts/trigger/cloudlogs-trigger.md) that calls your function when messages are received in a [log group](../../functions/concepts/log-group.md).

To create a trigger, you need:

2 changes: 1 addition & 1 deletion en/_includes/functions/cr-trigger-create.md
@@ -1,4 +1,4 @@
Create a [{{ container-registry-name }} trigger](../../functions/concepts/trigger/cr-trigger.md) to call a {{ sf-name }} [function](../../functions/concepts/function.md) when you create or delete {{ container-registry-name }} [Docker images](../../container-registry/concepts/docker-image.md) or Docker image tags.
Create a [trigger for {{ container-registry-name }}](../../functions/concepts/trigger/cr-trigger.md) to call a {{ sf-name }} [function](../../functions/concepts/function.md) when you create or delete {{ container-registry-name }} [Docker images](../../container-registry/concepts/docker-image.md) or Docker image tags.

## Before you begin {#before-you-begin}

2 changes: 1 addition & 1 deletion en/_includes/functions/cr-trigger.md
@@ -13,7 +13,7 @@ A trigger for {{ container-registry-name }} needs a [service account](../../iam/

Read more about [access management](../../functions/security/index.md).

## {{ container-registry-name }} trigger message format {#format}
## Trigger for {{ container-registry-name }} message format {#format}

After the trigger is activated, it sends the following message to the function:

2 changes: 1 addition & 1 deletion en/_includes/functions/iot-core-trigger-create.md
@@ -188,4 +188,4 @@ The trigger must be in the same cloud with the registry or device it reads messa
## See also {#see-also}
* [{{ iot-name }} trigger that passes messages to the {{ serverless-containers-name }} container](../../serverless-containers/operations/iot-core-trigger-create.md).
* [Trigger for {{ iot-name }} that passes messages to the {{ serverless-containers-name }} container](../../serverless-containers/operations/iot-core-trigger-create.md).
@@ -0,0 +1 @@
[`vpc.gateways.editor`](../../../../iam/concepts/access-control/roles.md#vpc-gw-editor): Enables you to manage NAT gateways.
@@ -16,7 +16,6 @@
* Under **Basic parameters**, enter the template **Description**:
* Under **Image/boot disk selection**, select a system to be deployed on the VM instance's boot disk.


* In the **Disks** section:
* Select the [disk type](../../compute/concepts/disk.md#disks_types).
* Specify the **Size** of the disk.
@@ -15,7 +15,6 @@
1. Under **Instance template**, click **Define** to set up the configuration for a basic instance:
* Under **Basic parameters**, enter the template **Description**:
* Under **Image/boot disk selection**, select a system to be deployed on the VM instance's boot disk.


* In the **Disks** section:
* Select the [disk type](../../compute/concepts/disk.md#disks_types).
3 changes: 3 additions & 0 deletions en/_includes/roles-k8s-admin.md
@@ -0,0 +1,3 @@
### {{ roles.k8s.admin }} {#k8s-admin}

The `{{ roles.k8s.admin }}` role enables you to [create](../managed-kubernetes/operations/kubernetes-cluster/kubernetes-cluster-create.md), [delete](../managed-kubernetes/operations/kubernetes-cluster/kubernetes-cluster-delete.md), [edit](../managed-kubernetes/operations/kubernetes-cluster/kubernetes-cluster-update.md), stop, and start [{{ k8s }} clusters](../managed-kubernetes/concepts/index.md#kubernetes-cluster) and [node groups](../managed-kubernetes/concepts/index.md#node-group).
3 changes: 3 additions & 0 deletions en/_includes/roles-k8s-cluster-api-cluster-admin.md
@@ -0,0 +1,3 @@
### {{ roles.k8s.cluster-api.cluster-admin }} {#k8s-clusters-api-cluster-admin}

Users with the {{ iam-name }} `{{ roles.k8s.cluster-api.cluster-admin }}` role get the `yc:cluster-admin` group and `cluster-admin` role in {{ k8s }} RBAC.
3 changes: 3 additions & 0 deletions en/_includes/roles-k8s-cluster-api-editor.md
@@ -0,0 +1,3 @@
### {{ roles.k8s.cluster-api.editor }} {#k8s-clusters-api-editor}

Users with the {{ iam-name }} `{{ roles.k8s.cluster-api.editor }}` role get the `yc:edit` group and the `edit` role in {{ k8s }} RBAC for all [namespaces](../managed-kubernetes/concepts/index.md#namespace) in a [cluster](../managed-kubernetes/concepts/index.md#kubernetes-cluster).
3 changes: 3 additions & 0 deletions en/_includes/roles-k8s-cluster-api-viewer.md
@@ -0,0 +1,3 @@
### {{ roles.k8s.cluster-api.viewer }} {#k8s-clusters-api-viewer}

Users with the {{ iam-name }} `{{ roles.k8s.cluster-api.viewer }}` role get the `yc:view` group and the `view` role in {{ k8s }} RBAC for all [namespaces](../managed-kubernetes/concepts/index.md#namespace) in a [cluster](../managed-kubernetes/concepts/index.md#kubernetes-cluster).
8 changes: 8 additions & 0 deletions en/_includes/roles-k8s-clusters-agent.md
@@ -0,0 +1,8 @@
### {{ roles.k8s.clusters.agent }} {#k8s-clusters-agent}

`{{ roles.k8s.clusters.agent }}`: A special role for a [{{ k8s }} cluster](../managed-kubernetes/concepts/index.md#kubernetes-cluster) [service account](../iam/concepts/users/service-accounts.md). It enables you to create [node groups](../managed-kubernetes/concepts/index.md#node-group), disks, and internal load balancers. You can use previously created [{{ kms-full-name }} keys](../kms/concepts/key.md) to encrypt and decrypt secrets and connect previously created [security groups](../managed-kubernetes/operations/connect/security-groups.md). In combination with the `load-balancer.admin` role, it enables you to create a network load balancer with a [public IP address](../vpc/concepts/address.md#public-addresses). It includes the following roles:
* `compute.admin`
* `iam.serviceAccounts.user`
* `kms.keys.encrypterDecrypter`
* `load-balancer.privateAdmin`
* `vpc.privateAdmin`