You can find a list of queries and their associated categories here: https://docs.kics.io/latest/queries/all-queries/
The category list is:
- Access Control
- Availability
- Backup
- Best Practices
- Build Process
- Encryption
- Insecure Configurations
- Insecure Defaults
- Networking and Firewall
- Observability
- Resource Management
- Secret Management
- Structure and Semantics
- Supply-Chain
KICS queries are written in OPA (Rego). For example:
package Cx

CxPolicy[result] {
  resource := input.document[i].resource.aws_s3_bucket[name]
  role := "public-read"
  resource.acl == role

  result := {
    "documentId": input.document[i].id,
    "searchKey": sprintf("aws_s3_bucket[%s].acl", [name]),
    "issueType": "IncorrectValue",
    "keyExpectedValue": sprintf("aws_s3_bucket[%s].acl is private", [name]),
    "keyActualValue": sprintf("aws_s3_bucket[%s].acl is %s", [name, role])
  }
}
The anatomy of a query is straightforward. It builds up a policy and defines the result.
The policy describes the insecure pattern in the infrastructure code that the query is looking for.
The result defines the specific data used to present the vulnerability in the infrastructure code.
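For instance, the query above would flag a Terraform resource like the following (a minimal, illustrative snippet; the resource and bucket names are arbitrary):

resource "aws_s3_bucket" "example" {
  bucket = "my-bucket"
  acl    = "public-read"
}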
Each query has a metadata.json companion file with all the relevant information about the vulnerability, including the severity, category and its description.
For example, the JSON code below is the metadata corresponding to the query at the beginning of this document.
{
"id": "5738faf3-3fe6-4614-a93d-f0003242d4f9",
"queryName": "All Users Group Gets Read Access",
"severity": "HIGH",
"category": "Identity and Access Management",
"descriptionText": "It's not recommended to allow read access for all user groups.",
"descriptionUrl": "https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket#acl",
"platform": "Terraform"
}
Filesystem-wise, KICS queries are organized per IaC technology or tool (e.g., terraform, k8s, dockerfile, etc.) and grouped under provider (e.g., aws, gcp, azure, etc.) when applicable.
For each query created, it is mandatory to create test cases with at least one positive and one negative case, plus a JSON file describing the expected results, as shown below:
[
{
"queryName": "All Users Group Gets Read Access",
"severity": "HIGH",
"line": 3
}
]
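For the query above, the positive test file would contain an insecure configuration like the snippet shown earlier (there, the acl attribute sits on line 3, which is what the "line" field in the expected results refers to), while the negative file holds a configuration the query accepts. A minimal, illustrative negative case:

resource "aws_s3_bucket" "example" {
  bucket = "my-bucket"
  acl    = "private"
}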
In summary, the following is the expected file tree for a query:
- <technology>
|- <provider>
| |- <queryfolder>
| | |- test
| | | |- positive<.ext>
| | | |- negative<.ext>
| | | |- positive_expected_result.json
| | |- metadata.json
| | |- query.rego
Also, a query can contain multiple positive and negative files, but all test case file names must start with positive or negative, and each positive file must be referenced in positive_expected_result.json, as shown below:
[
{
"queryName": "ELB Sensitive Port Is Exposed To Entire Network",
"severity": "HIGH",
"line": 37,
"fileName": "positive1.yaml"
},
{
"queryName": "ELB Sensitive Port Is Exposed To Entire Network",
"severity": "HIGH",
"line": 22,
"fileName": "positive2.yaml"
}
]
And the file tree should be as follows:
- <technology>
|- <provider>
| |- <queryfolder>
| | |- test
| | | |- positive1<.ext>
| | | |- positive2<.ext>
| | | |- negative1<.ext>
| | | |- negative2<.ext>
| | | |- positive_expected_result.json
| | |- metadata.json
| | |- query.rego
If you want to use functions defined in your own library, you should use the -b flag to indicate the directory where the libraries are placed. The functions need to be grouped by platform, and the library name should follow the format <platform>.rego. Your directory structure doesn't matter: for example, if you want to provide a library for your Terraform queries, group the functions used by those queries in a file named terraform.rego; it can live wherever you want, as long as -b points at its directory.
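As a rough sketch, such a terraform.rego could look like the following. The package name mirrors the convention used by KICS' built-in libraries (package generic.terraform) and should be treated as an assumption here; the helper function itself is purely hypothetical.

package generic.terraform

# Hypothetical helper shared by several Terraform queries:
# holds when an S3 ACL grants some form of public access.
is_public_acl(acl) {
  acl == "public-read"
}

is_public_acl(acl) {
  acl == "public-read-write"
}

A query could then import the library (e.g., import data.generic.terraform as terra_lib) and call terra_lib.is_public_acl(resource.acl) inside its policy body.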