The Cloud SQL Go Connector is a Cloud SQL connector designed for use with the Go language. Using a Cloud SQL connector provides a native alternative to the Cloud SQL Auth Proxy and offers the following benefits:
- IAM Authorization: uses IAM permissions to control who/what can connect to your Cloud SQL instances
- Improved Security: uses robust, updated TLS 1.3 encryption and identity verification between the client connector and the server-side proxy, independent of the database protocol.
- Convenience: removes the requirement to use and distribute SSL certificates, as well as manage firewalls or source/destination IP addresses.
- (optionally) IAM DB Authentication: provides support for Cloud SQL’s automatic IAM DB AuthN feature.
For users migrating from the Cloud SQL Proxy drivers, see the migration guide.
For a quick example, try out the Go Connector in a Codelab.
You can install this package with `go get`:
go get cloud.google.com/go/cloudsqlconn
This package provides several functions for authorizing and encrypting connections. These functions can be used with your database driver to connect to your Cloud SQL instance.
The instance connection name for your Cloud SQL instance is always in the format `project:region:instance`.
This package requires the following to successfully make Cloud SQL connections:
- IAM principal (user, service account, etc.) with the Cloud SQL Client role or equivalent. This IAM principal will be used for credentials.
- The Cloud SQL Admin API to be enabled within your Google Cloud Project. By default, the API will be called in the project associated with the IAM principal.
This project uses the Application Default Credentials (ADC) strategy for resolving credentials. Please see these instructions for how to set your ADC (Google Cloud Application vs Local Development, IAM user vs service account credentials), or consult the golang.org/x/oauth2/google documentation.
To explicitly set a specific source for the Credentials, see Using Options below.
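For example, the options described in Using Options below include credential-specific settings such as `WithCredentialsFile`, `WithCredentialsJSON`, and `WithTokenSource`. A minimal sketch of pinning a Dialer to a service account key file (the helper name and `key.json` path are illustrative, not part of the library):
import (
    "context"

    "cloud.google.com/go/cloudsqlconn"
)

// newDialerWithKeyFile builds a Dialer that authenticates with a service
// account key file instead of Application Default Credentials.
func newDialerWithKeyFile(ctx context.Context) (*cloudsqlconn.Dialer, error) {
    return cloudsqlconn.NewDialer(ctx, cloudsqlconn.WithCredentialsFile("key.json"))
}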
Postgres users have the option of using the `database/sql` interface or using pgx directly. See pgx's advice on which to choose.
To use the dialer with pgx, we recommend using connection pooling with pgxpool by configuring a Config.DialFunc like so:
import (
    "context"
    "net"

    "cloud.google.com/go/cloudsqlconn"
    "github.com/jackc/pgx/v4/pgxpool"
)

func connect() {
    // Configure the driver to connect to the database
    dsn := "user=myuser password=mypass dbname=mydb sslmode=disable"
    config, err := pgxpool.ParseConfig(dsn)
    if err != nil {
        /* handle error */
    }

    // Create a new dialer with any options
    d, err := cloudsqlconn.NewDialer(context.Background())
    if err != nil {
        /* handle error */
    }
    // Closing the dialer when you're done with the database connection stops
    // its background certificate refresh.
    defer d.Close()

    // Tell the driver to use the Cloud SQL Go Connector to create connections
    config.ConnConfig.DialFunc = func(ctx context.Context, _ string, _ string) (net.Conn, error) {
        return d.Dial(ctx, "project:region:instance")
    }

    // Interact with the driver directly as you normally would
    pool, err := pgxpool.ConnectConfig(context.Background(), config)
    if err != nil {
        /* handle error */
    }
    defer pool.Close()
    // ... etc
}
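Once the pool is configured with the connector's `DialFunc`, it behaves like any other `pgxpool.Pool`. A brief usage sketch (the query and helper name are illustrative, not part of the library):
import (
    "context"
    "time"

    "github.com/jackc/pgx/v4/pgxpool"
)

// queryTime runs a trivial query through a pool that dials via the
// Cloud SQL Go Connector, as configured in connect above.
func queryTime(ctx context.Context, pool *pgxpool.Pool) (time.Time, error) {
    var now time.Time
    if err := pool.QueryRow(ctx, "SELECT NOW()").Scan(&now); err != nil {
        return time.Time{}, err
    }
    return now, nil
}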
To use `database/sql`, call `pgxv4.RegisterDriver` with any necessary Dialer configuration. Note: the connection string must use the keyword/value format with host set to the instance connection name. The returned `cleanup` func will stop the dialer's background refresh goroutine and so should only be called when you're done with the `Dialer`.
import (
    "database/sql"

    "cloud.google.com/go/cloudsqlconn"
    "cloud.google.com/go/cloudsqlconn/postgres/pgxv4"
)

func connect() {
    cleanup, err := pgxv4.RegisterDriver("cloudsql-postgres", cloudsqlconn.WithIAMAuthN())
    if err != nil {
        // ... handle error
    }
    // call cleanup when you're done with the database connection
    defer cleanup()

    db, err := sql.Open(
        "cloudsql-postgres",
        "host=project:region:instance user=myuser password=mypass dbname=mydb sslmode=disable",
    )
    if err != nil {
        // ... handle error
    }
    defer db.Close()
    // ... etc
}
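After registration, the returned `*sql.DB` behaves like any other `database/sql` handle. A short sketch of verifying connectivity (the helper name and timeout are illustrative):
import (
    "context"
    "database/sql"
    "time"
)

// ping confirms that a connection can be established through the
// registered "cloudsql-postgres" driver.
func ping(ctx context.Context, db *sql.DB) error {
    ctx, cancel := context.WithTimeout(ctx, 5*time.Second)
    defer cancel()
    return db.PingContext(ctx)
}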
To use `database/sql`, use `mysql.RegisterDriver` with any necessary Dialer configuration. The returned `cleanup` func will stop the dialer's background refresh goroutine and so should only be called when you're done with the `Dialer`.
import (
    "database/sql"

    "cloud.google.com/go/cloudsqlconn"
    "cloud.google.com/go/cloudsqlconn/mysql/mysql"
)

func connect() {
    cleanup, err := mysql.RegisterDriver("cloudsql-mysql", cloudsqlconn.WithCredentialsFile("key.json"))
    if err != nil {
        // ... handle error
    }
    // call cleanup when you're done with the database connection
    defer cleanup()

    db, err := sql.Open(
        "cloudsql-mysql",
        "myuser:mypass@cloudsql-mysql(project:region:instance)/mydb",
    )
    if err != nil {
        // ... handle error
    }
    defer db.Close()
    // ... etc
}
To use `database/sql`, use `mssql.RegisterDriver` with any necessary Dialer configuration. The returned `cleanup` func will stop the dialer's background refresh goroutine and so should only be called when you're done with the `Dialer`.
import (
    "database/sql"

    "cloud.google.com/go/cloudsqlconn"
    "cloud.google.com/go/cloudsqlconn/sqlserver/mssql"
)

func connect() {
    cleanup, err := mssql.RegisterDriver("cloudsql-sqlserver", cloudsqlconn.WithCredentialsFile("key.json"))
    if err != nil {
        // ... handle error
    }
    // call cleanup when you're done with the database connection
    defer cleanup()

    db, err := sql.Open(
        "cloudsql-sqlserver",
        "sqlserver://user:password@localhost?database=mydb&cloudsql=project:region:instance",
    )
    if err != nil {
        // ... handle error
    }
    defer db.Close()
    // ... etc
}
If you need to customize something about the `Dialer`, you can initialize it directly with `NewDialer`:
d, err := cloudsqlconn.NewDialer(
    ctx,
    cloudsqlconn.WithCredentialsFile("key.json"),
)
if err != nil {
    log.Fatalf("unable to initialize dialer: %s", err)
}

conn, err := d.Dial(ctx, "project:region:instance")
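A `Dialer` created this way maintains cached certificates and a background refresh goroutine, so close it when you no longer need it. A small sketch continuing the example above (assuming the `log` package is imported):
// Release the dialer's background resources once all connections are closed.
defer func() {
    if err := d.Close(); err != nil {
        log.Printf("failed to close dialer: %v", err)
    }
}()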
For a full list of customizable behavior, see Option.
If you want to customize things about how the connection is created, use a `DialOption`:
conn, err := d.Dial(
    ctx,
    "project:region:instance",
    cloudsqlconn.WithPrivateIP(),
)
You can also use the `WithDefaultDialOptions` Option to specify DialOptions to be used by default:
d, err := cloudsqlconn.NewDialer(
    ctx,
    cloudsqlconn.WithDefaultDialOptions(
        cloudsqlconn.WithPrivateIP(),
    ),
)
Connections using automatic IAM database authentication are supported when using the Postgres or MySQL drivers.
Make sure to configure your Cloud SQL instance to allow IAM authentication and add an IAM database user.
A `Dialer` can be configured to connect to a Cloud SQL instance using automatic IAM database authentication with the `WithIAMAuthN` Option (recommended) or the `WithDialIAMAuthN` DialOption.
d, err := cloudsqlconn.NewDialer(ctx, cloudsqlconn.WithIAMAuthN())
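To enable IAM authentication for an individual connection instead, the `WithDialIAMAuthN` DialOption mentioned above can be passed to `Dial`. A brief sketch (assuming the option takes a bool, as in recent versions of the library):
conn, err := d.Dial(
    ctx,
    "project:region:instance",
    cloudsqlconn.WithDialIAMAuthN(true),
)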
When configuring the DSN for IAM authentication, the `password` field can be omitted and the `user` field should be formatted as follows:
- Postgres: For an IAM user account, this is the user's email address. For a service account, it is the service account's email without the `.gserviceaccount.com` domain suffix.
- MySQL: For an IAM user account, this is the user's email address without the `@` or domain name. For example, for `test-user@example.com`, set the `user` field to `test-user`. For a service account, this is the service account's email address without the `@project-id.iam.gserviceaccount.com` suffix.
Example DSNs using the `test-sa@project-id.iam.gserviceaccount.com` service account to connect can be found below.
Postgres:
dsn := "[email protected] dbname=mydb sslmode=disable"
MySQL:
dsn := "user=test-sa dbname=mydb sslmode=disable"
This library includes support for metrics and tracing using OpenCensus. To enable metrics or tracing, you need to configure an exporter. OpenCensus supports many backends for exporters.
Supported metrics include:
- `cloudsqlconn/dial_latency`: The distribution of dialer latencies (ms)
- `cloudsqlconn/open_connections`: The current number of open Cloud SQL connections
- `cloudsqlconn/dial_failure_count`: The number of failed dial attempts
- `cloudsqlconn/refresh_success_count`: The number of successful certificate refresh operations
- `cloudsqlconn/refresh_failure_count`: The number of failed refresh operations
Supported traces include:
- `cloud.google.com/go/cloudsqlconn.Dial`: The dial operation, including refreshing an ephemeral certificate and connecting to the instance
- `cloud.google.com/go/cloudsqlconn/internal.InstanceInfo`: The call to retrieve instance metadata (e.g., database engine type, IP address, etc.)
- `cloud.google.com/go/cloudsqlconn/internal.Connect`: The connection attempt using the ephemeral certificate
- SQL Admin API client operations
For example, to use Cloud Monitoring and Cloud Trace, you would configure an exporter like so:
import (
    "contrib.go.opencensus.io/exporter/stackdriver"
    "go.opencensus.io/trace"
)

func main() {
    sd, err := stackdriver.NewExporter(stackdriver.Options{
        ProjectID: "mycoolproject",
    })
    if err != nil {
        // handle error
    }
    defer sd.Flush()
    trace.RegisterExporter(sd)

    sd.StartMetricsExporter()
    defer sd.StopMetricsExporter()

    // Use cloudsqlconn as usual.
    // ...
}
OpenTelemetry has now reached feature parity with OpenCensus, so migrating from OpenCensus to OpenTelemetry is strongly encouraged. The OpenTelemetry bridge can be used to migrate without replacing the OpenCensus APIs in this library. Example code for migrating an application using the OpenTelemetry bridge for traces is shown below.
import (
    texporter "github.com/GoogleCloudPlatform/opentelemetry-operations-go/exporter/trace"
    "go.opencensus.io/trace"
    "go.opentelemetry.io/otel"
    "go.opentelemetry.io/otel/bridge/opencensus"
    sdktrace "go.opentelemetry.io/otel/sdk/trace"
    "google.golang.org/api/option"
)

func main() {
    // trace.AlwaysSample() is expensive. Replacing it with your own
    // sampler for production environments is recommended.
    trace.ApplyConfig(trace.Config{DefaultSampler: trace.AlwaysSample()})

    exporter, err := texporter.New(
        texporter.WithTraceClientOptions([]option.ClientOption{option.WithTelemetryDisabled()}),
        texporter.WithProjectID("mycoolproject"),
    )
    if err != nil {
        // Handle error
    }

    tp := sdktrace.NewTracerProvider(sdktrace.WithSyncer(exporter))
    otel.SetTracerProvider(tp)
    tracer := tp.Tracer("Cloud SQL Go Connector Trace")
    trace.DefaultTracer = opencensus.NewTracer(tracer)

    // Use cloudsqlconn as usual.
    // ...
}
A known OpenTelemetry issue has been reported here. It shouldn't impact database operations.
This project uses semantic versioning and supports major versions according to the following lifecycle:
- Active: Active versions get all new features and security fixes (that wouldn’t otherwise introduce a breaking change). New major versions are guaranteed to be "active" for a minimum of 1 year.
- Deprecated: Deprecated versions continue to receive security and critical bug fixes, but do not receive new features. Deprecated versions will be supported for 1 year.
- Unsupported: Any major version that has been deprecated for >=1 year is considered unsupported.
We follow the Go Version Support Policy used by Google Cloud Libraries for Go.
This project aims for a release on at least a monthly basis. If no new features or fixes have been added, a new PATCH version with the latest dependencies is released.