Enforce leading space on comments (cadence-workflow#5747)
# What

Changes all comments like this:
```go
//stuff
type thing struct {
  field string //comment
}
//go:embed directives are left alone
```
To this:
```go
// stuff
type thing struct {
  field string // comment
}
//go:embed directives are left alone
```
I also added a fairly basic lint check to prevent them from coming back.
The check is imprecise (it doesn't catch comments at the ends of non-blank lines of code),
but fixing that seemed like much more work than it was worth.
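Restated as a standalone sketch (hypothetical helper names, not code from this PR): the check flags lines whose comment starts flush against the `//`, unless the comment is shaped like a `tool:directive`:

```go
package main

import (
	"fmt"
	"regexp"
)

var (
	// a line that starts (after indentation) with "//" followed
	// immediately by a non-space character
	noSpace = regexp.MustCompile(`^\s*//\S`)
	// "//word:word" tool directives such as //go:build or //nolint:sa123
	directive = regexp.MustCompile(`^\s*//[a-z]+:[a-z]+`)
)

// badComment mirrors the Makefile's grep pipeline for a single line.
// Like the real check, it only sees comments that begin a line, so
// trailing comments after code are not caught.
func badComment(line string) bool {
	return noSpace.MatchString(line) && !directive.MatchString(line)
}

func main() {
	for _, line := range []string{"//stuff", "// fine", "//go:embed static", "//nolint"} {
		fmt.Printf("%-20q bad=%v\n", line, badComment(line))
	}
}
```

Note that a bare `//nolint` (with no `:rule` suffix) fails the directive shape, which is why that one stray comment had to go.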

And, since this will likely neutralize a `//nolint` comment that wasn't doing anything anyway,
I've removed that one comment entirely to avoid being misleading.

# How

"Starts with X but not followed by Y" is famously hard for regex, so I only tried briefly with sed / perl before giving up.
Instead, I had Goland format the entire repository, and reverted the changes to anything that wasn't a Go file.

Somewhat surprisingly, all it did was add these spaces, which is perfect.
Oddly, it missed a single obvious one, but I'm willing to forgive it for that.
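For what it's worth, the lookahead problem can be dodged by matching broadly and then filtering in code. A hypothetical Go sketch of the transformation (my illustration of the idea, not what Goland actually runs):

```go
package main

import (
	"fmt"
	"regexp"
)

var (
	// indentation, then "//" immediately followed by a non-space
	leading = regexp.MustCompile(`^(\s*)//(\S.*)$`)
	// the "//tool:directive" shape that must be left alone
	directive = regexp.MustCompile(`^//[a-z]+:[a-z]+`)
)

// addSpace inserts a space after "//" in a comment that starts a line,
// leaving "//tool:directive" comments untouched. Trailing comments after
// code are deliberately out of scope, mirroring the lint check's blind spot.
func addSpace(line string) string {
	m := leading.FindStringSubmatch(line)
	if m == nil || directive.MatchString("//"+m[2]) {
		return line
	}
	return m[1] + "// " + m[2]
}

func main() {
	for _, line := range []string{"//stuff", "//go:embed static", "\t//indented", "// already fine"} {
		fmt.Printf("%q -> %q\n", line, addSpace(line))
	}
}
```

Go's regexp package (RE2) has no lookahead at all, so "match, then veto in code" is the idiomatic substitute here.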

# Why

IDEs often add these spaces when formatting files, so mixed styles cause a bit of diff thrashing.

gofmt (and therefore goimports) in theory doesn't actually care... but despite the maintainers'
protestations in GitHub issues about not caring about comment formatting, Go's
source printer (and therefore gofmt) *does* treat some comments specially, most notably
"directives" like `go:build`: https://cs.opensource.google/go/go/+/refs/tags/go1.22.1:src/go/printer/comment.go;l=15
and anything that might be a "doc comment" may get restructured too:
```diff
diff --git cmd/server/main.go cmd/server/main.go
index 6af56db58..b857f9211 100644
--- cmd/server/main.go
+++ cmd/server/main.go
@@ -35,6 +35,14 @@ import (
 )

 // main entry point for the cadence server
+//go:build
+// did you know
+//go:link
+// that go special
+//cases:things
+// that use
+//go:embed
+// specific comment structures?
 func main() {
 	app := cadence.BuildCLI(metrics.ReleaseVersion, metrics.Revision)
 	app.Run(os.Args)
```
which `gofmt -w` turns into:
```diff
diff --git cmd/server/main.go cmd/server/main.go
index 6af56db58..2511eb414 100644
--- cmd/server/main.go
+++ cmd/server/main.go
@@ -18,6 +18,8 @@
 // OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
 // THE SOFTWARE.

+//go:build
+
 package main

 import (
@@ -35,6 +37,14 @@ import (
 )

 // main entry point for the cadence server
+// did you know
+// that go special
+// that use
+// specific comment structures?
+//
+//go:link
+//cases:things
+//go:embed
 func main() {
 	app := cadence.BuildCLI(metrics.ReleaseVersion, metrics.Revision)
 	app.Run(os.Args)
```
which IMO means we should explicitly avoid colliding with those kinds of comment patterns.
Linters like staticcheck generally use suppressing-comment structures like `//nolint:sa123`,
I suspect in part to fit with, and avoid running afoul of, this kind of reformatting.

Go has only claimed `//x:y` as far as I can tell, so specifically `//x:` is not a "directive"...
but that seems unnecessarily near-colliding to me, and it would allow comments like `//TODO: asdf`,
which are definitely not "tool directives" in any form, so I haven't allowed those.
Groxx authored Mar 7, 2024
1 parent 4e86118 commit 233b3ef
Showing 77 changed files with 198 additions and 192 deletions.
7 changes: 7 additions & 0 deletions Makefile
```diff
@@ -361,6 +361,13 @@ $(BUILD)/gomod-lint: go.mod internal/tools/go.mod common/archiver/gcloud/go.mod
 $(BUILD)/code-lint: $(LINT_SRC) $(BIN)/revive | $(BUILD)
 	$Q echo "lint..."
 	$Q $(BIN)/revive -config revive.toml -exclude './vendor/...' -exclude './.gen/...' -formatter stylish ./...
+	$Q # look for go files with "//comments", and ignore "//go:build"-style directives ("grep -n" shows "file:line: //go:build" so the regex is a bit complex)
+	$Q bad="$$(find . -type f -name '*.go' -not -path './idls/*' | xargs grep -n -E '^\s*//\S' | grep -E -v '^[^:]+:[^:]+:\s*//[a-z]+:[a-z]+' || true)"; \
+		if [ -n "$$bad" ]; then \
+			echo "$$bad" >&2; \
+			echo 'non-directive comments must have a space after the "//"' >&2; \
+			exit 1; \
+		fi
 	$Q touch $@

 # fmt and copyright are mutually cyclic with their inputs, so if a copyright header is modified:
```
2 changes: 1 addition & 1 deletion canary/config.go
```diff
@@ -76,7 +76,7 @@ type (
 	// Cron contains configuration for the cron workflow for canary
 	Cron struct {
 		CronSchedule string `yaml:"cronSchedule"` // default to "@every 30s"
-		CronExecutionTimeout time.Duration `yaml:"cronExecutionTimeout"` //default to 18 minutes
+		CronExecutionTimeout time.Duration `yaml:"cronExecutionTimeout"` // default to 18 minutes
 		StartJobTimeout time.Duration `yaml:"startJobTimeout"` // default to 9 minutes
 	}
```
4 changes: 2 additions & 2 deletions common/archiver/s3store/historyArchiver_test.go
```diff
@@ -657,8 +657,8 @@ func (s *historyArchiverSuite) TestArchiveAndGet() {
 }

 func (s *historyArchiverSuite) newTestHistoryArchiver(historyIterator archiver.HistoryIterator) *historyArchiver {
-	//config := &config.S3Archiver{}
-	//archiver, err := newHistoryArchiver(s.container, config, historyIterator)
+	// config := &config.S3Archiver{}
+	// archiver, err := newHistoryArchiver(s.container, config, historyIterator)
 	archiver := &historyArchiver{
 		container: s.container,
 		s3cli: s.s3cli,
```
4 changes: 2 additions & 2 deletions common/backoff/retrypolicy_test.go
```diff
@@ -188,7 +188,7 @@ func (s *RetryPolicySuite) TestNoMaxAttempts() {
 	r, clock := createRetrier(policy)
 	for i := 0; i < 100; i++ {
 		next := r.NextBackOff()
-		//print("Iter: ", i, ", Next Backoff: ", next.String(), "\n")
+		// print("Iter: ", i, ", Next Backoff: ", next.String(), "\n")
 		s.True(next > 0 || next == done, "Unexpected value for next retry duration: %v", next)
 		clock.moveClock(next)
 	}
@@ -200,7 +200,7 @@ func (s *RetryPolicySuite) TestUnbounded() {
 	r, clock := createRetrier(policy)
 	for i := 0; i < 100; i++ {
 		next := r.NextBackOff()
-		//print("Iter: ", i, ", Next Backoff: ", next.String(), "\n")
+		// print("Iter: ", i, ", Next Backoff: ", next.String(), "\n")
 		s.True(next > 0 || next == done, "Unexpected value for next retry duration: %v", next)
 		clock.moveClock(next)
 	}
```
2 changes: 1 addition & 1 deletion common/config/metrics.go
```diff
@@ -126,7 +126,7 @@ func (c *Metrics) newStatsdScope(logger log.Logger) tally.Scope {
 	if err != nil {
 		logger.Fatal("error creating statsd client", tag.Error(err))
 	}
-	//NOTE: according to ( https://github.com/uber-go/tally )Tally's statsd implementation doesn't support tagging.
+	// NOTE: according to ( https://github.com/uber-go/tally )Tally's statsd implementation doesn't support tagging.
 	// Therefore, we implement Tally interface to have a statsd reporter that can support tagging
 	reporter := statsdreporter.NewReporter(statter, tallystatsdreporter.Options{})
 	scopeOpts := tally.ScopeOptions{
```
2 changes: 1 addition & 1 deletion common/config/tls.go
```diff
@@ -42,7 +42,7 @@ type (
 		CertFile string `yaml:"certFile"`
 		KeyFile string `yaml:"keyFile"`

-		CaFile string `yaml:"caFile"` //optional depending on server config
+		CaFile string `yaml:"caFile"` // optional depending on server config
 		CaFiles []string `yaml:"caFiles"`
 		// If you want to verify the hostname and server cert (like a wildcard for cass cluster) then you should turn this on
 		// This option is basically the inverse of InSecureSkipVerify
```
2 changes: 1 addition & 1 deletion common/domain/replication_queue.go
```diff
@@ -208,7 +208,7 @@ func (q *replicationQueueImpl) GetMessagesFromDLQ(
 			return nil, nil, fmt.Errorf("failed to decode dlq task: %v", err)
 		}

-		//Overwrite to local cluster message id
+		// Overwrite to local cluster message id
 		replicationTask.SourceTaskId = common.Int64Ptr(int64(message.ID))
 		replicationTasks = append(replicationTasks, thrift.ToReplicationTask(&replicationTask))
 	}
```
4 changes: 2 additions & 2 deletions common/dynamicconfig/configstore/config/config.go
```diff
@@ -22,8 +22,8 @@ package config

 import "time"

-//This package is necessary to avoid import cycle as config_store_client imports common/config
-//while common/config imports this ClientConfig definition
+// This package is necessary to avoid import cycle as config_store_client imports common/config
+// while common/config imports this ClientConfig definition

 // ClientConfig is the config for the config store based dynamic config client.
 // It specifies how often the cached config should be updated by checking underlying database.
```
2 changes: 1 addition & 1 deletion common/dynamicconfig/constants_test.go
```diff
@@ -40,7 +40,7 @@ func TestConstantSuite(t *testing.T) {
 }

 func (s *constantSuite) TestListAllProductionKeys() {
-	//check if we given enough capacity
+	// check if we given enough capacity
 	testResult := ListAllProductionKeys()
 	s.GreaterOrEqual(len(IntKeys)+len(BoolKeys)+len(FloatKeys)+len(StringKeys)+len(DurationKeys)+len(MapKeys), len(testResult))
 	s.Equal(TestGetIntPropertyFilteredByTaskListInfoKey+1, testResult[0])
```
4 changes: 2 additions & 2 deletions common/elasticsearch/client/v6/client_bulk.go
```diff
@@ -80,8 +80,8 @@ func (v *v6BulkProcessor) Add(request *bulk.GenericBulkableAddRequest) {
 			Version(request.Version).
 			Doc(request.Doc)
 	case bulk.BulkableCreateRequest:
-		//for bulk create request still calls the bulk index method
-		//with providing operation type
+		// for bulk create request still calls the bulk index method
+		// with providing operation type
 		req = elastic.NewBulkIndexRequest().
 			OpType("create").
 			Index(request.Index).
```
4 changes: 2 additions & 2 deletions common/elasticsearch/client/v7/client_bulk.go
```diff
@@ -76,8 +76,8 @@ func (v *v7BulkProcessor) Add(request *bulk.GenericBulkableAddRequest) {
 			Version(request.Version).
 			Doc(request.Doc)
 	case bulk.BulkableCreateRequest:
-		//for bulk create request still calls the bulk index method
-		//with providing operation type
+		// for bulk create request still calls the bulk index method
+		// with providing operation type
 		req = elastic.NewBulkIndexRequest().
 			OpType("create").
 			Index(request.Index).
```
2 changes: 1 addition & 1 deletion common/elasticsearch/esql/cadencesql.go
```diff
@@ -63,7 +63,7 @@ func (e *ESql) ConvertCadence(sql string, domainID string, pagination ...interfa
 		return "", nil, err
 	}

-	//sql valid, start to handle
+	// sql valid, start to handle
 	switch stmt := stmt.(type) {
 	case *sqlparser.Select:
 		dsl, sortFields, err = e.convertSelect(*(stmt), domainID, pagination...)
```
2 changes: 1 addition & 1 deletion common/elasticsearch/esql/esql.go
```diff
@@ -136,7 +136,7 @@ func (e *ESql) Convert(sql string, pagination ...interface{}) (dsl string, sortF
 		return "", nil, err
 	}

-	//sql valid, start to handle
+	// sql valid, start to handle
 	switch stmt := stmt.(type) {
 	case *sqlparser.Select:
 		dsl, sortField, err = e.convertSelect(*(stmt), "", pagination...)
```
2 changes: 1 addition & 1 deletion common/log/tag/interface.go
```diff
@@ -69,7 +69,7 @@ func newBoolTag(key string, value bool) Tag {
 }

 func newErrorTag(key string, value error) Tag {
-	//NOTE zap already chosen "error" as key
+	// NOTE zap already chosen "error" as key
 	return Tag{
 		field: zap.Error(value),
 	}
```
10 changes: 5 additions & 5 deletions common/log/tag/tags.go
```diff
@@ -34,7 +34,7 @@ import (
 // 1. Workflow: these tags are information that are useful to our customer, like workflow-id/run-id/task-list/...
 // 2. System : these tags are internal information which usually cannot be understood by our customers,

-/////////////////// Common tags defined here ///////////////////
+// ///////////////// Common tags defined here ///////////////////

 // Error returns tag for Error
 func Error(err error) Tag {
@@ -59,7 +59,7 @@ func LatestTime(time int64) Tag {
 	return newInt64("latest-time", time)
 }

-/////////////////// Workflow tags defined here: ( wf is short for workflow) ///////////////////
+// ///////////////// Workflow tags defined here: ( wf is short for workflow) ///////////////////

 // WorkflowAction returns tag for WorkflowAction
 func workflowAction(action string) Tag {
@@ -342,7 +342,7 @@ func WorkflowEventType(eventType string) Tag {
 	return newStringTag("wf-event-type", eventType)
 }

-/////////////////// System tags defined here: ///////////////////
+// ///////////////// System tags defined here: ///////////////////
 // Tags with pre-define values

 // component returns tag for component
@@ -753,7 +753,7 @@ func TokenLastEventID(id int64) Tag {
 	return newInt64("token-last-event-id", id)
 }

-/////////////////// XDC tags defined here: xdc- ///////////////////
+// ///////////////// XDC tags defined here: xdc- ///////////////////

 // SourceCluster returns tag for SourceCluster
 func SourceCluster(sourceCluster string) Tag {
@@ -824,7 +824,7 @@ func ResponseMaxSize(size int) Tag {
 	return newInt("response-max-size", size)
 }

-/////////////////// Archival tags defined here: archival- ///////////////////
+// ///////////////// Archival tags defined here: archival- ///////////////////
 // archival request tags

 // ArchivalCallerServiceName returns tag for the service name calling archival client
```
2 changes: 1 addition & 1 deletion common/membership/resolver.go
```diff
@@ -54,7 +54,7 @@ type (
 		// EvictSelf evicts this member from the membership ring. After this method is
 		// called, other members should discover that this node is no longer part of the
 		// ring.
-		//This primitive is useful to carry out graceful host shutdown during deployments.
+		// This primitive is useful to carry out graceful host shutdown during deployments.
 		EvictSelf() error

 		// Lookup will return host which is an owner for provided key.
```
2 changes: 1 addition & 1 deletion common/messaging/kafka/partition_ack_manager.go
```diff
@@ -34,7 +34,7 @@ import (
 // assuming reading messages is in order and continuous(no skipping)
 type partitionAckManager struct {
 	sync.RWMutex
-	ackMgrs map[int32]messaging.AckManager //map from partition to its ackManager
+	ackMgrs map[int32]messaging.AckManager // map from partition to its ackManager
 	scopes map[int32]metrics.Scope // map from partition to its Scope

 	metricsClient metrics.Client
```
2 changes: 1 addition & 1 deletion common/metrics/client.go
```diff
@@ -28,7 +28,7 @@ import (

 // ClientImpl is used for reporting metrics by various Cadence services
 type ClientImpl struct {
-	//parentReporter is the parent scope for the metrics
+	// parentReporter is the parent scope for the metrics
 	parentScope tally.Scope
 	childScopes map[int]tally.Scope
 	metricDefs map[int]metricDefinition
```
1 change: 0 additions & 1 deletion common/metrics/defs.go
```diff
@@ -36,7 +36,6 @@ type (

 	// metricDefinition contains the definition for a metric
 	metricDefinition struct {
-		//nolint
 		metricType MetricType // metric type
 		metricName MetricName // metric name
 		metricRollupName MetricName // optional. if non-empty, this name must be used for rolled-up version of this metric
```
2 changes: 1 addition & 1 deletion common/metrics/tally/statsd/reporter.go
```diff
@@ -31,7 +31,7 @@ import (
 )

 type cadenceTallyStatsdReporter struct {
-	//Wrapper on top of "github.com/uber-go/tally/statsd"
+	// Wrapper on top of "github.com/uber-go/tally/statsd"
 	tallystatsd tally.StatsReporter
 }

```
2 changes: 1 addition & 1 deletion common/metrics/tally/statsd/reporter_test.go
```diff
@@ -51,7 +51,7 @@ func TestMetricNameWithTagsStability(t *testing.T) {
 	}
 	name := "test-metric-name2"

-	//test the order is stable
+	// test the order is stable
 	for i := 1; i <= 16; i++ {
 		assert.Equal(t, r.metricNameWithTags(name, tags), r.metricNameWithTags(name, tags))
 	}
```
8 changes: 4 additions & 4 deletions common/peerprovider/ringpopprovider/config_test.go
```diff
@@ -220,22 +220,22 @@ func (s *RingpopSuite) TestDNSSRVMode() {
 		"duplicate entries should be removed",
 	)

-	//Expect unknown-duplicate.example.net to not resolve
+	// Expect unknown-duplicate.example.net to not resolve
 	_, err = cfg.DiscoveryProvider.Hosts()
 	s.NotNil(err)

-	//Remove known bad hosts from Unresolved list
+	// Remove known bad hosts from Unresolved list
 	provider.UnresolvedHosts = []string{
 		"service-a.example.net",
 		"service-b.example.net",
 		"badhostport",
 	}

-	//Expect badhostport to not seperate service name
+	// Expect badhostport to not seperate service name
 	_, err = cfg.DiscoveryProvider.Hosts()
 	s.NotNil(err)

-	//Remove known bad hosts from Unresolved list
+	// Remove known bad hosts from Unresolved list
 	provider.UnresolvedHosts = []string{
 		"service-a.example.net",
 		"service-b.example.net",
```
6 changes: 3 additions & 3 deletions common/persistence/dataStoreInterfaces.go
```diff
@@ -33,12 +33,12 @@ import (
 )

 type (
-	//////////////////////////////////////////////////////////////////////
+	// ////////////////////////////////////////////////////////////////////
 	// Persistence interface is a lower layer of dataInterface.
 	// The intention is to let different persistence implementation(SQL,Cassandra/etc) share some common logic
 	// Right now the only common part is serialization/deserialization, and only ExecutionManager/HistoryManager need it.
 	// TaskManager are the same.
-	//////////////////////////////////////////////////////////////////////
+	// ////////////////////////////////////////////////////////////////////

 	// ShardStore is the lower level of ShardManager
 	ShardStore interface {
@@ -95,7 +95,7 @@ type (
 		Closeable
 		GetName() string
 		GetShardID() int
-		//The below three APIs are related to serialization/deserialization
+		// The below three APIs are related to serialization/deserialization
 		GetWorkflowExecution(ctx context.Context, request *InternalGetWorkflowExecutionRequest) (*InternalGetWorkflowExecutionResponse, error)
 		UpdateWorkflowExecution(ctx context.Context, request *InternalUpdateWorkflowExecutionRequest) error
 		ConflictResolveWorkflowExecution(ctx context.Context, request *InternalConflictResolveWorkflowExecutionRequest) error
```
14 changes: 7 additions & 7 deletions common/persistence/data_manager_interfaces.go
```diff
@@ -1162,15 +1162,15 @@ type (
 		TaskID int64
 	}

-	//RangeDeleteReplicationTaskFromDLQRequest is used to delete replication tasks from DLQ
+	// RangeDeleteReplicationTaskFromDLQRequest is used to delete replication tasks from DLQ
 	RangeDeleteReplicationTaskFromDLQRequest struct {
 		SourceClusterName string
 		ExclusiveBeginTaskID int64
 		InclusiveEndTaskID int64
 		PageSize int
 	}

-	//RangeDeleteReplicationTaskFromDLQResponse is the response of RangeDeleteReplicationTaskFromDLQ
+	// RangeDeleteReplicationTaskFromDLQResponse is the response of RangeDeleteReplicationTaskFromDLQ
 	RangeDeleteReplicationTaskFromDLQResponse struct {
 		TasksCompleted int
 	}
@@ -1535,7 +1535,7 @@ type (
 		// The shard to get history node data
 		ShardID *int

-		//DomainName to get metrics created with the domain
+		// DomainName to get metrics created with the domain
 		DomainName string
 	}
@@ -1616,7 +1616,7 @@ type (
 		Info string
 		// The shard to get history branch data
 		ShardID *int
-		//DomainName to create metrics for Domain Cost Attribution
+		// DomainName to create metrics for Domain Cost Attribution
 		DomainName string
 	}

@@ -1642,7 +1642,7 @@ type (
 		BranchToken []byte
 		// The shard to delete history branch data
 		ShardID *int
-		//DomainName to generate metrics for Domain Cost Attribution
+		// DomainName to generate metrics for Domain Cost Attribution
 		DomainName string
 	}

@@ -1654,7 +1654,7 @@ type (
 		ShardID *int
 		// optional: can provide treeID via branchToken if treeID is empty
 		BranchToken []byte
-		//DomainName to create metrics
+		// DomainName to create metrics
 		DomainName string
 	}

@@ -1863,7 +1863,7 @@ type (
 		Closeable
 		FetchDynamicConfig(ctx context.Context, cfgType ConfigType) (*FetchDynamicConfigResponse, error)
 		UpdateDynamicConfig(ctx context.Context, request *UpdateDynamicConfigRequest, cfgType ConfigType) error
-		//can add functions for config types other than dynamic config
+		// can add functions for config types other than dynamic config
 	}
 )
```
2 changes: 1 addition & 1 deletion common/persistence/elasticsearch/es_visibility_store.go
```diff
@@ -503,7 +503,7 @@ const (
 	jsonRangeOnExecutionTime = `{"range":{"ExecutionTime":`
 	jsonSortForOpen = `[{"StartTime":"desc"},{"RunID":"desc"}]`
 	jsonSortWithTieBreaker = `{"RunID":"desc"}`
-	jsonMissingStartTime = `{"missing":{"field":"StartTime"}}` //used to identify uninitialized workflow execution records
+	jsonMissingStartTime = `{"missing":{"field":"StartTime"}}` // used to identify uninitialized workflow execution records

 	dslFieldSort = "sort"
 	dslFieldSearchAfter = "search_after"
```
4 changes: 2 additions & 2 deletions common/persistence/elasticsearch/es_visibility_store_test.go
```diff
@@ -556,7 +556,7 @@ func (s *ESVisibilitySuite) TestSerializePageToken() {
 }

 // Move to client_v6_test
-//func (s *ESVisibilitySuite) TestConvertSearchResultToVisibilityRecord() {
+// func (s *ESVisibilitySuite) TestConvertSearchResultToVisibilityRecord() {
 // data := []byte(`{"CloseStatus": 0,
 // "CloseTime": 1547596872817380000,
 // "DomainID": "bfd5c907-f899-4baf-a7b2-2ab85e623ebd",
@@ -598,7 +598,7 @@ func (s *ESVisibilitySuite) TestSerializePageToken() {
 // }
 // info = s.visibilityStore.convertSearchResultToVisibilityRecord(searchHit)
 // s.Nil(info)
-//}
+// }

 func (s *ESVisibilitySuite) TestShouldSearchAfter() {
 	token := &es.ElasticVisibilityPageToken{}
```
2 changes: 1 addition & 1 deletion common/persistence/nosql/nosql_execution_store_util.go
```diff
@@ -88,7 +88,7 @@ func (d *nosqlExecutionStore) prepareResetWorkflowExecutionRequestWithMapsAndEve
 	if err != nil {
 		return nil, err
 	}
-	//reset 6 maps
+	// reset 6 maps
 	executionRequest.ActivityInfos, err = d.prepareActivityInfosForWorkflowTxn(resetWorkflow.ActivityInfos)
 	if err != nil {
 		return nil, err
```
