After installing IBM Cloud Private 2.1.0.3, I noticed that no log information was showing up in the ICP UI.

While checking the logs, I saw that filebeat did not start correctly (or rather, it completely failed to start).

On the master node:

[root@icpboot ~]# journalctl -xelf
Jul 13 11:54:17 icpboot.tombosmans.eu hyperkube[1825]: E0713 11:54:17.168699  1825 kuberuntime_manager.go:733] container start failed: RunContainerError: failed to start container "ab8344159739d06825c25c489dc09a0143f437b6be321804df06e59417d66a18": Error response from daemon: linux mounts: Path /var/lib/docker/containers is mounted on /var/lib/docker/containers but it is not a shared or slave mount.  
Jul 13 11:54:17 icpboot.tombosmans.eu hyperkube[1825]: E0713 11:54:17.168734  1825 pod_workers.go:186] Error syncing pod 2406dc66-85e4-11e8-8135-000c299e5111 ("logging-elk-filebeat-ds-wvxwb_kube-system(2406dc66-85e4-11e8-8135-000c299e5111)"), skipping: failed to "StartContainer" for "filebeat" with RunContainerError: "failed to start container "ab8344159739d06825c25c489dc09a0143f437b6be321804df06e59417d66a18": Error response from daemon: linux mounts: Path /var/lib/docker/containers is mounted on /var/lib/docker/containers but it is not a shared or slave mount."  
Jul 13 11:54:28 icpboot.tombosmans.eu hyperkube[1825]: I0713 11:54:28.083562  1825 kuberuntime_manager.go:513] Container {Name:filebeat Image:ibmcom/filebeat:5.5.1 Command:[] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:NODE_HOSTNAME Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:config ReadOnly:false MountPath:/usr/share/filebeat/filebeat.yml SubPath:filebeat.yml MountPropagation:} {Name:data ReadOnly:false MountPath:/usr/share/filebeat/data SubPath: MountPropagation:} {Name:container-log ReadOnly:true MountPath:/var/log/containers SubPath: MountPropagation:} {Name:pod-log ReadOnly:true MountPath:/var/log/pods SubPath: MountPropagation:} {Name:docker-log ReadOnly:true MountPath:/var/lib/docker/containers/ SubPath: MountPropagation:} {Name:default-token-kbdxx ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 13 11:54:28 icpboot.tombosmans.eu hyperkube[1825]: I0713 11:54:28.083787  1825 kuberuntime_manager.go:757] checking backoff for container "filebeat" in pod "logging-elk-filebeat-ds-wvxwb_kube-system(2406dc66-85e4-11e8-8135-000c299e5111)"  
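
The journal is fairly noisy, so to pull out just the relevant messages it is easier to filter on the error text (a plain grep over the same journalctl output):

# show only the mount propagation failures from the journal
journalctl --no-pager | grep "not a shared or slave mount"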

This means that in the IBM Cloud Private UI, I don’t see any logs.

Digging a bit further, I saw that the logging-elk-filebeat-ds DaemonSet indeed had not started.

[root@icpboot ~]# kubectl get ds --namespace=kube-system
NAME                                 DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
auth-apikeys                         1         1         1         1            1           role=master     20h
auth-idp                             1         1         1         1            1           role=master     20h
auth-pap                             1         1         1         1            1           role=master     20h
auth-pdp                             1         1         1         1            1           role=master     20h
calico-node                          3         3         3         3            3                           20h
catalog-ui                           1         1         1         1            1           role=master     20h
icp-management-ingress               1         1         1         1            1           role=master     20h
kube-dns                             1         1         1         1            1           master=true     20h
logging-elk-filebeat-ds              3         3         2         3            0                           20h
metering-reader                      3         3         2         3            2                           20h
monitoring-prometheus-nodeexporter   3         3         3         3            3                           20h
nginx-ingress-controller             1         1         1         1            1           proxy=true      20h
platform-api                         1         1         1         1            1           master=true     20h
platform-deploy                      1         1         1         1            1           master=true     20h
platform-ui                          1         1         1         1            1           master=true     20h
rescheduler                          1         1         1         1            1           master=true     20h
service-catalog-apiserver            1         1         1         1            1           role=master     20h
unified-router                       1         1         1         1            1           master=true     20h
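
The same "not a shared or slave mount" error also shows up in the pod events, so you don't strictly need the journal for it; describing one of the filebeat pods shows it as well (the pod name below is the one from my cluster, yours will have a different suffix):

kubectl get pods --namespace=kube-system | grep logging-elk-filebeat-ds
kubectl describe pod logging-elk-filebeat-ds-wvxwb --namespace=kube-system | tail -n 20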

The problem is of course right there in the log, but I did not know how to fix it:

Error response from daemon: linux mounts: Path /var/lib/docker/containers is mounted on /var/lib/docker/containers but it is not a shared or slave mount.  

The fix turns out to be making /var/lib/docker/containers a shared mount. On each node, execute these commands:

findmnt -o TARGET,PROPAGATION /var/lib/docker/containers  
mount --make-shared /var/lib/docker/containers  

The result looks something like this:

[root@icpworker1 ~]# findmnt -o TARGET,PROPAGATION /var/lib/docker/containers  
TARGET           PROPAGATION  
/var/lib/docker/containers private  
[root@icpworker1 ~]# mount --make-shared /var/lib/docker/containers  
[root@icpworker1 ~]# findmnt -o TARGET,PROPAGATION /var/lib/docker/containers  
TARGET           PROPAGATION  
/var/lib/docker/containers shared  
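
Note that mount --make-shared only changes the live mount, so as far as I know the propagation reverts after a reboot and the fix has to be reapplied (or automated, for example from a systemd unit or rc.local). Below is a minimal sketch of a loop to check and, where needed, fix all nodes over SSH; the node list is just a placeholder for my lab machines:

#!/bin/bash
# Check the mount propagation on every node and only remount when it is not shared.
# NODES is a placeholder; replace it with your own master/worker/proxy hosts.
NODES="icpboot icpworker1 icpworker2"
for node in $NODES; do
  echo "== $node =="
  ssh root@"$node" '
    prop=$(findmnt -n -o PROPAGATION /var/lib/docker/containers)
    echo "current propagation: $prop"
    if [ "$prop" != "shared" ]; then
      mount --make-shared /var/lib/docker/containers
      findmnt -o TARGET,PROPAGATION /var/lib/docker/containers
    fi
  '
done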

After that, the logging-elk-filebeat-ds DaemonSet becomes available:

[root@icpboot ~]# kubectl get ds --namespace=kube-system
NAME                                 DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
...
logging-elk-filebeat-ds              3         3         2         3            **2**                       20h
...
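
If a filebeat pod stays stuck in the failed state even after the mount is shared, deleting it so that the DaemonSet controller recreates it is a quick way to force a retry (again, the pod name is the one from my cluster):

kubectl delete pod logging-elk-filebeat-ds-wvxwb --namespace=kube-system
kubectl get ds logging-elk-filebeat-ds --namespace=kube-system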

I don’t know whether this is a bug, or whether it is caused by me trying to run ICP on CentOS 7 (which is not a supported platform) …