The Operator creates a SeccompProfile even though the ProfileRecording kind is SelinuxProfile #2463

Open
nev888 opened this issue Sep 25, 2024 · 18 comments
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

@nev888

nev888 commented Sep 25, 2024

spod-zjn2z_security-profiles-operator.log
security-profiles-operator-webhook.log
security-profiles-operator.log

What happened:

I'm trying to create a SelinuxProfile using a ProfileRecording; however, a SeccompProfile is created instead.

What you expected to happen:

I expected a SelinuxProfile to be created.

How to reproduce it (as minimally and precisely as possible):

Installation followed: https://github.com/kubernetes-sigs/security-profiles-operator/blob/main/installation-usage.md

Profile recording followed:
https://github.com/kubernetes-sigs/security-profiles-operator/blob/main/installation-usage.md#record-profiles-from-workloads-with-profilerecordings

Restricted to a single namespace:
https://github.com/kubernetes-sigs/security-profiles-operator/blob/main/installation-usage.md#restricting-to-a-single-namespace-with-upstream-deployment-manifests
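To check which profile kind the recording actually produced, something along these lines should work (a sketch; substitute the namespace used for recording):

kubectl get seccompprofiles,selinuxprofiles -n <recording-namespace>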

Environment:

Openshift environment

  • Cloud provider or hardware configuration:
    NAME="Red Hat Enterprise Linux CoreOS"
    VERSION_ID="4.15"
    PRETTY_NAME="Red Hat Enterprise Linux CoreOS 415.92.202403270524-0 (Plow)"
    REDHAT_BUGZILLA_PRODUCT="OpenShift Container Platform"
    OPENSHIFT_VERSION="4.15"
    RHEL_VERSION="9.2"
nev888 added the kind/bug label on Sep 25, 2024
@nev888
Author

nev888 commented Sep 25, 2024

@ccojocar
Contributor

Did you change the kind in the ProfileRecording to SelinuxProfile? It supports only the log recorder for SELinux. Could you share the ProfileRecording CR that you are creating?

Also there are some more guidelines in https://github.com/kubernetes-sigs/security-profiles-operator/blob/main/installation-usage.md#create-a-selinux-profile.

@nev888
Author

nev888 commented Sep 28, 2024

Yes, the kind is set to SelinuxProfile and the recorder is set to logs. The profile is also attached: profilerecoding.txt

@r0binak

r0binak commented Nov 27, 2024

@ccojocar @nev888

I have the same issue.

I used this ProfileRecording:

apiVersion: security-profiles-operator.x-k8s.io/v1alpha1
kind: ProfileRecording
metadata:
  name: test-recording
  namespace: my-namespace
spec:
  kind: SelinuxProfile
  recorder: logs
  mergeStrategy: containers
  podSelector:
    matchLabels:
      app: sp-record

In the log-enricher container logs I see this:

I1127 09:14:13.464179  243895 enricher.go:446] "audit" logger="log-enricher" timestamp="1732698852.762:8077631" type="seccomp" node="okd-worker-5.corp.company.com" namespace="my-namespace" pod="nginx-deploy-5487d785dc-5rx5t" container="nginx-record" executable="/usr/sbin/nginx" pid=303193 syscallID=3 syscallName="close"

Note that I deployed SPO from https://github.com/kubernetes-sigs/security-profiles-operator/blob/main/deploy/openshift-downstream.yaml

@ccojocar
Contributor

Thanks for investigating it. It looks like a bug.

@r0binak

r0binak commented Nov 29, 2024

@ccojocar

Where do you think the problem might be? Is it hard to fix?

@ccojocar
Contributor

Is SELinux enabled in the spod configuration?

kubectl get -o yaml spod spod

Does it show enableSelinux: true?
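For a quick check of just that field, a jsonpath query along these lines should work (a sketch, assuming the spod object lives in the security-profiles-operator namespace as in the default deployment):

kubectl -n security-profiles-operator get spod spod -o jsonpath='{.spec.enableSelinux}{"\n"}'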

@ccojocar
Contributor

Can you also get the annotations and the labels from your pod after it is created? I want to check if the webhook applies the correct annotations for profile recording.
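Something like this should print them (a sketch; substitute your pod name and namespace):

kubectl -n <namespace> get pod <pod-name> -o jsonpath='{.metadata.labels}{"\n"}{.metadata.annotations}{"\n"}'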

@r0binak

r0binak commented Nov 29, 2024

@ccojocar

Sure, it's enabled:

apiVersion: security-profiles-operator.x-k8s.io/v1alpha1
kind: SecurityProfilesOperatorDaemon
metadata:
  creationTimestamp: '2024-11-21T20:34:46Z'
  generation: 9
  labels:
    app: security-profiles-operator
    k8slens-edit-resource-version: v1alpha1
  managedFields:
    - apiVersion: security-profiles-operator.x-k8s.io/v1alpha1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:labels:
            .: {}
            f:app: {}
        f:spec:
          .: {}
          f:disableOciArtifactSignatureVerification: {}
          f:hostProcVolumePath: {}
          f:priorityClassName: {}
          f:selinuxOptions: {}
          f:selinuxTypeTag: {}
          f:staticWebhookConfig: {}
          f:tolerations: {}
      manager: security-profiles-operator
      operation: Update
      time: '2024-11-21T20:34:46Z'
    - apiVersion: security-profiles-operator.x-k8s.io/v1alpha1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:labels:
            f:k8slens-edit-resource-version: {}
        f:spec:
          f:enableLogEnricher: {}
          f:enableMemoryOptimization: {}
          f:enableSelinux: {}
          f:selinuxOptions:
            f:allowedSystemProfiles: {}
      manager: node-fetch
      operation: Update
      time: '2024-11-28T08:25:23Z'
    - apiVersion: security-profiles-operator.x-k8s.io/v1alpha1
      fieldsType: FieldsV1
      fieldsV1:
        f:status:
          .: {}
          f:conditions: {}
          f:state: {}
      manager: security-profiles-operator
      operation: Update
      subresource: status
      time: '2024-11-28T08:27:24Z'
  name: spod
  namespace: security-profiles-operator
  resourceVersion: '97157005'
  uid: 235e1342-fe72-435e-b1c7-f3483187505a
  selfLink: >-
    /apis/security-profiles-operator.x-k8s.io/v1alpha1/namespaces/security-profiles-operator/securityprofilesoperatordaemons/spod
status:
  conditions:
    - lastTransitionTime: '2024-11-28T08:27:24Z'
      reason: Available
      status: 'True'
      type: Ready
  state: RUNNING
spec:
  disableOciArtifactSignatureVerification: false
  enableLogEnricher: true
  enableMemoryOptimization: true
  enableSelinux: true
  hostProcVolumePath: /proc
  priorityClassName: system-node-critical
  selinuxOptions:
    allowedSystemProfiles:
      - container
      - net_container
      - tmp_container
  selinuxTypeTag: spc_t
  staticWebhookConfig: false
  tolerations:
    - effect: NoSchedule
      key: node-role.kubernetes.io/master
      operator: Exists
    - effect: NoSchedule
      key: node-role.kubernetes.io/control-plane
      operator: Exists
    - effect: NoExecute
      key: node.kubernetes.io/not-ready
      operator: Exists

After deploying the ProfileRecording and Pod (from the docs – https://docs.openshift.com/container-platform/4.12/security/security_profiles_operator/spo-selinux.html#spo-recording-profiles_spo-selinux) I got this pod:

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  namespace: my-namespace
  uid: 9611ce42-e0e0-42a0-9173-78b57030320c
  resourceVersion: '98337438'
  creationTimestamp: '2024-11-29T12:29:25Z'
  labels:
    app: my-app
  annotations:
    io.containers.trace-avcs/nginx: test-recording_nginx_psmpd_1732883365
    io.containers.trace-avcs/redis: test-recording_redis_w6ztd_1732883365
    k8s.ovn.org/pod-networks: >-
      {"default":{"ip_addresses":["10.131.0.99/23"],"mac_address":"0a:58:0a:83:00:63","gateway_ips":["10.131.0.1"],"routes":[{"dest":"10.128.0.0/14","nextHop":"10.131.0.1"},{"dest":"10.14.0.0/16","nextHop":"10.131.0.1"},{"dest":"100.64.0.0/16","nextHop":"10.131.0.1"}],"ip_address":"10.131.0.99/23","gateway_ip":"10.131.0.1"}}
    k8s.v1.cni.cncf.io/network-status: |-
      [{
          "name": "ovn-kubernetes",
          "interface": "eth0",
          "ips": [
              "10.131.0.99"
          ],
          "mac": "0a:58:0a:83:00:63",
          "default": true,
          "dns": {}
      }]
    openshift.io/scc: privileged
    seccomp.security.alpha.kubernetes.io/pod: runtime/default
  managedFields:
    - manager: kubectl-create
      operation: Update
      apiVersion: v1
      time: '2024-11-29T12:29:24Z'
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:labels:
            .: {}
            f:app: {}
        f:spec:
          f:containers:
            k:{"name":"nginx"}:
              .: {}
              f:image: {}
              f:imagePullPolicy: {}
              f:name: {}
              f:ports:
                .: {}
                k:{"containerPort":8080,"protocol":"TCP"}:
                  .: {}
                  f:containerPort: {}
                  f:protocol: {}
              f:resources: {}
              f:securityContext:
                .: {}
                f:allowPrivilegeEscalation: {}
                f:capabilities:
                  .: {}
                  f:drop: {}
              f:terminationMessagePath: {}
              f:terminationMessagePolicy: {}
            k:{"name":"redis"}:
              .: {}
              f:image: {}
              f:imagePullPolicy: {}
              f:name: {}
              f:resources: {}
              f:securityContext:
                .: {}
                f:allowPrivilegeEscalation: {}
                f:capabilities:
                  .: {}
                  f:drop: {}
              f:terminationMessagePath: {}
              f:terminationMessagePolicy: {}
          f:dnsPolicy: {}
          f:enableServiceLinks: {}
          f:restartPolicy: {}
          f:schedulerName: {}
          f:securityContext:
            .: {}
            f:runAsNonRoot: {}
            f:seccompProfile:
              .: {}
              f:type: {}
          f:terminationGracePeriodSeconds: {}
    - manager: okd-worker-5.corp.company.com
      operation: Update
      apiVersion: v1
      time: '2024-11-29T12:29:25Z'
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:annotations:
            f:k8s.ovn.org/pod-networks: {}
      subresource: status
    - manager: multus-daemon
      operation: Update
      apiVersion: v1
      time: '2024-11-29T12:29:26Z'
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:annotations:
            f:k8s.v1.cni.cncf.io/network-status: {}
      subresource: status
    - manager: kubelet
      operation: Update
      apiVersion: v1
      time: '2024-11-29T12:29:38Z'
      fieldsType: FieldsV1
      fieldsV1:
        f:status:
          f:conditions:
            k:{"type":"ContainersReady"}:
              .: {}
              f:lastProbeTime: {}
              f:lastTransitionTime: {}
              f:status: {}
              f:type: {}
            k:{"type":"Initialized"}:
              .: {}
              f:lastProbeTime: {}
              f:lastTransitionTime: {}
              f:status: {}
              f:type: {}
            k:{"type":"Ready"}:
              .: {}
              f:lastProbeTime: {}
              f:lastTransitionTime: {}
              f:status: {}
              f:type: {}
          f:containerStatuses: {}
          f:hostIP: {}
          f:phase: {}
          f:podIP: {}
          f:podIPs:
            .: {}
            k:{"ip":"10.131.0.99"}:
              .: {}
              f:ip: {}
          f:startTime: {}
      subresource: status
  selfLink: /api/v1/namespaces/my-namespace/pods/my-pod
status:
  phase: Running
  conditions:
    - type: Initialized
      status: 'True'
      lastProbeTime: null
      lastTransitionTime: '2024-11-29T12:29:25Z'
    - type: Ready
      status: 'True'
      lastProbeTime: null
      lastTransitionTime: '2024-11-29T12:29:38Z'
    - type: ContainersReady
      status: 'True'
      lastProbeTime: null
      lastTransitionTime: '2024-11-29T12:29:38Z'
    - type: PodScheduled
      status: 'True'
      lastProbeTime: null
      lastTransitionTime: '2024-11-29T12:29:25Z'
  hostIP: X.X.X.X
  podIP: 10.131.0.99
  podIPs:
    - ip: 10.131.0.99
  startTime: '2024-11-29T12:29:25Z'
  containerStatuses:
    - name: nginx
      state:
        running:
          startedAt: '2024-11-29T12:29:34Z'
      lastState: {}
      ready: true
      restartCount: 0
      image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21
      imageID: 0cf0789c622a75327ce542cefd74792afced8fcf50911a6ac79a2c5e9ab087ac
      containerID: cri-o://36a551e716ba75e88f1742b1d9ebfcdf1b23384fff43c24a9450b49e1af8f678
      started: true
    - name: redis
      state:
        running:
          startedAt: '2024-11-29T12:29:37Z'
      lastState: {}
      ready: true
      restartCount: 0
      image: quay.io/security-profiles-operator/redis:6.2.1
      imageID: f877e80bb9ef719f7d6e7faa3cde7e20b97dc6d774d46175adcc8442ec7236aa
      containerID: cri-o://c10ac738f8a7ca276154b8dfd358315525b3c8c7bf4ee8af2b73111912fa26ab
      started: true
  qosClass: BestEffort
spec:
  volumes:
    - name: kube-api-access-nj5hh
      projected:
        sources:
          - serviceAccountToken:
              expirationSeconds: 3607
              path: token
          - configMap:
              name: kube-root-ca.crt
              items:
                - key: ca.crt
                  path: ca.crt
          - downwardAPI:
              items:
                - path: namespace
                  fieldRef:
                    apiVersion: v1
                    fieldPath: metadata.namespace
          - configMap:
              name: openshift-service-ca.crt
              items:
                - key: service-ca.crt
                  path: service-ca.crt
        defaultMode: 420
  containers:
    - name: nginx
      image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21
      ports:
        - containerPort: 8080
          protocol: TCP
      resources: {}
      volumeMounts:
        - name: kube-api-access-nj5hh
          readOnly: true
          mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      imagePullPolicy: IfNotPresent
      securityContext:
        capabilities:
          drop:
            - ALL
        runAsUser: 1000630000
        allowPrivilegeEscalation: false
        seccompProfile:
          type: Localhost
          localhostProfile: operator/security-profiles-operator/log-enricher-trace.json
    - name: redis
      image: quay.io/security-profiles-operator/redis:6.2.1
      resources: {}
      volumeMounts:
        - name: kube-api-access-nj5hh
          readOnly: true
          mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      imagePullPolicy: IfNotPresent
      securityContext:
        capabilities:
          drop:
            - ALL
        runAsUser: 1000630000
        allowPrivilegeEscalation: false
        seccompProfile:
          type: Localhost
          localhostProfile: operator/security-profiles-operator/log-enricher-trace.json
  restartPolicy: Always
  terminationGracePeriodSeconds: 30
  dnsPolicy: ClusterFirst
  serviceAccountName: default
  serviceAccount: default
  nodeName: okd-worker-5.corp.company.com
  securityContext:
    seLinuxOptions:
      level: s0:c25,c15
    runAsNonRoot: true
    fsGroup: 1000630000
    seccompProfile:
      type: RuntimeDefault
  schedulerName: default-scheduler
  tolerations:
    - key: node.kubernetes.io/not-ready
      operator: Exists
      effect: NoExecute
      tolerationSeconds: 300
    - key: node.kubernetes.io/unreachable
      operator: Exists
      effect: NoExecute
      tolerationSeconds: 300
  priority: 0
  enableServiceLinks: true
  preemptionPolicy: PreemptLowerPriority

More specifically it has the following labels and annotations:

metadata:
  annotations:
    io.containers.trace-avcs/nginx: test-recording_nginx_psmpd_1732883365
    io.containers.trace-avcs/redis: test-recording_redis_w6ztd_1732883365
    k8s.ovn.org/pod-networks: '{"default":{"ip_addresses":["10.131.0.99/23"],"mac_address":"0a:58:0a:83:00:63","gateway_ips":["10.131.0.1"],"routes":[{"dest":"10.128.0.0/14","nextHop":"10.131.0.1"},{"dest":"10.14.0.0/16","nextHop":"10.131.0.1"},{"dest":"100.64.0.0/16","nextHop":"10.131.0.1"}],"ip_address":"10.131.0.99/23","gateway_ip":"10.131.0.1"}}'
    k8s.v1.cni.cncf.io/network-status: |-
      [{
          "name": "ovn-kubernetes",
          "interface": "eth0",
          "ips": [
              "10.131.0.99"
          ],
          "mac": "0a:58:0a:83:00:63",
          "default": true,
          "dns": {}
      }]
    openshift.io/scc: privileged
    seccomp.security.alpha.kubernetes.io/pod: runtime/default
  creationTimestamp: "2024-11-29T12:29:25Z"
  labels:
    app: my-app
  name: my-pod
  namespace: my-namespace

@ccojocar
Contributor

The annotations look correct; trace-avcs means SELinux.

Can you also check the spod daemonset logs?

Check on which cluster node the pod being recorded is scheduled, and get the spod pod running on that node with:

kubectl get pods -o wide 

Check the logs of the security-profiles-operator container of that spod pod for entries like:

Recording profile ... kind: 

What kind is in there?
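Roughly, the node lookup and log check could be done like this (a sketch; assumes the operator is deployed in the security-profiles-operator namespace):

kubectl -n security-profiles-operator get pods -o wide --field-selector spec.nodeName=<node-name>
kubectl -n security-profiles-operator logs <spod-pod-on-that-node> -c security-profiles-operator | grep -i 'Recording profile'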

@r0binak

r0binak commented Nov 29, 2024

@ccojocar

The pod is scheduled on the okd-worker-5.corp.company.com node:

kubectl get po -n my-namespace my-pod -owide                                                                                                                                                                                                                                                                            

NAME     READY   STATUS    RESTARTS   AGE   IP            NODE                           NOMINATED NODE   READINESS GATES
my-pod   2/2     Running   0          15m   10.131.0.99   okd-worker-5.corp.company.com   <none>           <none>

So, I checked the logs of the security-profiles-operator container in the spod pod scheduled on the same okd-worker-5.corp.company.com node (screenshot attached).

There is no information there about writing a new SELinux profile, only old entries about deleting profiles that I had applied manually.

Moreover, if you look at the log-enricher container logs in the same spod pod, you can see seccomp events being recorded:

I1129 12:59:31.095548 2891766 enricher.go:446] "audit" logger="log-enricher" timestamp="1732885170.701:11800586" type="seccomp" node="okd-worker-5.corp.company.com" namespace="my-namespace" pod="my-pod" container="redis" executable="/usr/local/bin/redis-server" pid=1894843 syscallID=232 syscallName="epoll_wait"

@ccojocar
Contributor

It seems that you already have some selinuxprofiles in the cluster?

kubectl get selinuxprofiles --all-namespaces

kubectl get profilerecording -o json --all-namespaces | jq .items[].spec.kind

@r0binak

r0binak commented Nov 29, 2024

Nope

kubectl get selinuxprofiles --all-namespaces                                                                                                                                                                                                                                                                            
No resources found
kubectl get profilerecording -o json -A | jq ".items[].spec.kind"                                                                                                                                                                                                                                                       
"SelinuxProfile"

@ccojocar
Contributor

Unfortunately, I don't have a cluster with SELinux enabled to debug it.

Is the selinuxd container running in the spod pod on the node where your pod is being recorded?

If so, what is in its logs? Is it writing anything into /var/log/audit/audit.log or /var/log/syslog on that node's filesystem?

I have the feeling that both audit loggers are enabled: seccomp and SELinux.
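To check for AVC events directly on the node, something like this should do (a sketch; on OpenShift, oc debug node/<node-name> followed by chroot /host gives a host shell):

grep 'type=AVC' /var/log/audit/audit.log
ausearch -m AVC -ts recent   # if the audit userspace tools are installed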

@r0binak

r0binak commented Nov 29, 2024

Yes, the selinuxd container in the spod pod is running successfully on the same node. Its logs:

{"level":"info","ts":1732783591.1135437,"caller":"daemon/daemon.go:30","msg":"Started daemon"}
{"level":"info","ts":1732783591.301241,"logger":"state-server","caller":"daemon/status_server.go:190","msg":"Serving status","path":"/var/run/selinuxd/selinuxd.sock","uid":0,"gid":65535}
{"level":"info","ts":1732783591.3026392,"caller":"daemon/status_server.go:73","msg":"Status Server got READY signal"}
{"level":"info","ts":1732783591.8303285,"logger":"file-watcher","caller":"daemon/daemon.go:93","msg":"Installing policy","file":"/etc/selinux.d/nginx-secure_nginx-deploy.cil"}
{"level":"info","ts":1732783591.8304944,"logger":"file-watcher","caller":"daemon/daemon.go:93","msg":"Installing policy","file":"/etc/selinux.d/nginx-secure_nginx-deploy.cil"}
{"level":"info","ts":1732783611.7671561,"logger":"policy-installer","caller":"daemon/daemon.go:131","msg":"The operation was successful","operation":"install - /etc/selinux.d/nginx-secure_nginx-deploy.cil"}
{"level":"info","ts":1732783611.7683005,"logger":"policy-installer","caller":"daemon/daemon.go:131","msg":"The operation was successful","operation":"install - /etc/selinux.d/nginx-secure_nginx-deploy.cil"}
{"level":"info","ts":1732884762.9738984,"logger":"file-watcher","caller":"daemon/daemon.go:90","msg":"Removing policy","file":"/etc/selinux.d/nginx-secure_nginx-deploy.cil"}
{"level":"info","ts":1732884771.542486,"caller":"semanage/semanage.go:89","msg":"Removing last nginx-secure_nginx-deploy module (no other nginx-secure_nginx-deploy module exists at another priority)."}
{"level":"info","ts":1732884798.635022,"logger":"policy-installer","caller":"daemon/daemon.go:131","msg":"The operation was successful","operation":"remove - /etc/selinux.d/nginx-secure_nginx-deploy.cil"}

These logs are also about old profiles that I applied manually.

root@okd-worker-5:/var/log/audit# ls -la /var/log/audit/
total 33044
drwx------.  2 root root      99 Nov 29 13:38 .
drwxr-xr-x. 13 root root    4096 Nov 27 11:35 ..
-rw-------.  1 root root  239442 Nov 29 13:38 audit.log
-r--------.  1 root root 8388799 Nov 29 13:38 audit.log.1
-r--------.  1 root root 8388836 Nov 29 13:34 audit.log.2
-r--------.  1 root root 8388712 Nov 29 13:31 audit.log.3
-r--------.  1 root root 8388695 Nov 29 13:28 audit.log.4
type=SECCOMP msg=audit(1732887528.015:12216466): auid=4294967295 uid=1000630000 gid=0 ses=4294967295 subj=system_u:system_r:container_t:s0:c15,c25 pid=1894843 comm="redis-server" exe="/usr/local/bin/redis-server" sig=0 arch=c000003e syscall=232 compat=0 ip=0x7f74c99437ef code=0x7ffc0000 AUID="unset" UID="unknown(1000630000)" GID="root" ARCH=x86_64 SYSCALL=epoll_wait

@ccojocar
Contributor

So, you don't see anything in the audit log with type=AVC?

@r0binak

r0binak commented Nov 30, 2024

No, I don't see anything like that

@r0binak

r0binak commented Dec 3, 2024

@ccojocar

Do you need any other logs?
