# Troubleshooting

## Debugging

If you suspect something is wrong with the Insights Agent installation, you can use `kubectl` to debug the problem.

After the agent runs, `kubectl get pods` should show something like this:

```
$ kubectl get pods -n insights-agent
NAME                                                    READY   STATUS      RESTARTS   AGE
goldilocks-5sh8s                                        0/2     Completed   0          18m
goldilocks-7pgp6                                        0/2     Completed   0          19m
insights-agent-goldilocks-controller-5b6b45d678-vgbrk   1/1     Running     0          19m
insights-agent-goldilocks-vpa-install-566h8             0/1     Completed   0          19m
kube-bench-dpvbz                                        0/2     Completed   0          18m
kube-hunter-tnmsw                                       0/2     Completed   0          18m
polaris-zk4px                                           0/2     Completed   0          18m
rbac-reporter-1583952600-kwmfz                          0/2     Completed   0          105s
rbac-reporter-sf9cz                                     0/2     Completed   0          18m
release-watcher-6lhm7                                   0/2     Completed   0          18m
trivy-8nw9d                                             0/2     Completed   0          18m
workloads-1583951700-dj6wb                              0/2     Completed   0          16m
workloads-q6gzt                                         0/2     Completed   0          18m
```
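In a cluster with many pods, it can help to surface only the ones that did not finish cleanly. These are standard `kubectl` patterns, not commands specific to the Insights Agent:

```shell
# List pods in the agent namespace that ended in a failed phase.
kubectl get pods -n insights-agent --field-selector=status.phase=Failed

# Or show only pods whose STATUS column is something other than
# Completed or Running (e.g. Error, CrashLoopBackOff, Pending).
kubectl get pods -n insights-agent | grep -Ev 'Completed|Running'
```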

If any of the pods show an error, you can look at the logs. There are typically two containers per pod in the Insights Agent - one to run the auditing tool and another to upload the results. For example, here are typical logs for kube-bench:

```
$ kubectl logs kube-bench-dpvbz -n insights-agent -c kube-bench
time="2020-03-11T18:32:51Z" level=info msg="Starting:"
time="2020-03-11T18:32:51Z" level=info msg="Updating data."
time="2020-03-11T18:32:54Z" level=info msg="Data updated."
```

If nothing suspicious appears there, you might find an answer in the second container, which uploads the results. Its logs should end with something like this:

```
$ kubectl logs kube-bench-dpvbz -n insights-agent -c insights-uploader
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  174k  100    16  100  174k     23   254k --:--:-- --:--:-- --:--:--  274k
+ exit 0
```
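If the logs themselves don't reveal the problem, `kubectl describe` often does: failed scheduling, OOM kills, and image pull errors all show up under the pod's Events. These are standard `kubectl` commands; the pod name below is just an example from the listing above:

```shell
# Show events and container states for a failing pod.
kubectl describe pod kube-bench-dpvbz -n insights-agent

# If a container crashed and was restarted, fetch logs from its previous run.
kubectl logs kube-bench-dpvbz -n insights-agent -c kube-bench --previous
```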

## Common Issues

### Resource Limits

We have set reasonable resource requests and limits on each of the audits, but some clusters may push the boundaries of our assumptions. If you're seeing out-of-memory errors or other resource-related issues, try setting higher resource limits.

If you're using the Helm chart, you can do this by adding

```
--set $reportType.resources.limits.memory=1Gi
# or
--set $reportType.resources.limits.cpu=1000m
```

to your `helm upgrade --install` command.
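Putting it together, a full upgrade command raising the limits for one report type might look like the sketch below. The release name, chart reference, namespace, and `polaris` report type are assumptions for illustration; substitute the values from your own installation:

```shell
# Hypothetical example: raise memory and CPU limits for the polaris report.
helm upgrade --install insights-agent fairwinds-stable/insights-agent \
  --namespace insights-agent \
  --set polaris.resources.limits.memory=1Gi \
  --set polaris.resources.limits.cpu=1000m
```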

### Timeouts

We have set a reasonable timeout for each of the audits, but again, some clusters may push the boundaries of our assumptions. If you're seeing timeout issues in the insights-uploader container in one of the report types, you can adjust the timeout by adding:

```
--set $reportType.timeout=3600  # 3600s = 1 hour
```

to your `helm upgrade --install` command.
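For example, to give one report type a one-hour timeout (as above, the release name, chart reference, namespace, and `trivy` report type are assumptions; use your own values):

```shell
# Hypothetical example: allow the trivy report up to 3600 seconds (1 hour).
helm upgrade --install insights-agent fairwinds-stable/insights-agent \
  --namespace insights-agent \
  --set trivy.timeout=3600
```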