
Striking balance with K8s Network Policies #513

Open
perezjasonr opened this issue Jun 7, 2022 · 1 comment
Labels
question Further information is requested

Comments

perezjasonr commented Jun 7, 2022

We rolled out some network policies that provide basic namespace isolation, and we believe the kube-hunter job/pod may be hindered by this. If so, what is the expectation for a cluster with network policies, and how does one strike a balance? Obviously, we don't just want to open up pathways, because finding vulns is a bad thing, and I'd think that blocking said functionality is working as intended. But when kube-hunter is used for CI checks, it seems to time out on us: we use kubectl wait-equivalent logic and then grab the report, which doesn't work if the job times out.

We just run the basic kube-hunter job/pod inside the cluster, and the job now times out with basic namespace isolation via network policies.
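For context, our isolation policy looks roughly like this (a sketch with placeholder names, not our exact manifest):

```yaml
# Illustrative default-isolation policy (placeholder name/namespace):
# pods in the namespace may talk to each other; all other traffic is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: namespace-isolation
  namespace: kubehunter
spec:
  podSelector: {}          # applies to all pods in the namespace
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector: {}  # allow ingress only from pods in this namespace
  egress:
    - to:
        - podSelector: {}  # allow egress only to pods in this namespace
```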

The basic idea is: have a netpol that allows pods to talk within that namespace, put kube-hunter in there, and let it run. For us it seems to time out.

Should we "punch holes" in our network policies just to make this work, or where should we draw the line? Is there a set of egress/ingress rules that should be allowed for basic functionality (while not being considered a vuln finding)? Or should we conclude that timing out is actually a "good thing", meaning kube-hunter cannot find vulns?
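For example, would something like the following be the kind of "holes" needed? This is only a guess, assuming the timeouts come from blocked egress to the API server and cluster DNS; the pod label and ports are assumptions about a typical cluster, not anything confirmed by kube-hunter's docs:

```yaml
# Hypothetical extra egress rules scoped to the kube-hunter pod only
# (label "app: kube-hunter" and the ports below are assumptions).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: kube-hunter-egress
  namespace: kubehunter
spec:
  podSelector:
    matchLabels:
      app: kube-hunter     # assumed label on the kube-hunter job's pods
  policyTypes:
    - Egress
  egress:
    - ports:               # kube-apiserver (10.96.0.1:443 in our logs)
        - protocol: TCP
          port: 443
        - protocol: TCP
          port: 6443
    - ports:               # cluster DNS, so service names resolve
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
```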

@perezjasonr perezjasonr added the question Further information is requested label Jun 7, 2022

perezjasonr commented Jun 7, 2022

Logs look something like this:

2022-06-07 15:22:07,368 INFO kube_hunter.modules.report.collector Started hunting
2022-06-07 15:22:07,369 INFO kube_hunter.modules.report.collector Discovering Open Kubernetes Services
2022-06-07 15:22:07,380 INFO kube_hunter.modules.report.collector Found vulnerability "CAP_NET_RAW Enabled" in Local to Pod (kubehunter--1-v85tn)
2022-06-07 15:22:07,381 INFO kube_hunter.modules.report.collector Found vulnerability "Read access to pod's service account token" in Local to Pod (kubehunter--1-v85tn)
2022-06-07 15:22:07,381 INFO kube_hunter.modules.report.collector Found vulnerability "Access to pod's secrets" in Local to Pod (kubehunter--1-v85tn)
2022-06-07 15:24:17,969 WARNING urllib3.connectionpool Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f229210a280>: Failed to establish a new connection: [Errno 110] Operation timed out')': /api/v1/nodes?watch=False
2022-06-07 15:26:29,041 WARNING urllib3.connectionpool Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f229210a310>: Failed to establish a new connection: [Errno 110] Operation timed out')': /api/v1/nodes?watch=False

But if it can reach the API server, won't it report stuff like this?

    | KHV002 | 10.96.0.1:443        | Initial Access //    | K8s Version          | The kubernetes       | v1.22.2              |
    |        |                      | Exposed sensitive    | Disclosure           | version could be     |                      |
    |        |                      | interfaces           |                      | obtained from the    |                      |
    |        |                      |                      |                      | /version endpoint    |                      |
    +--------+----------------------+----------------------+----------------------+----------------------+----------------------+
    | KHV053 | Local to Pod         | Discovery //         | AWS Metadata         | Access to the AWS    | cidr: 10.42.33.0/24  |
    |        | (kubehunter--        | Instance Metadata    | Exposure             | Metadata API exposes |                      |
    |        | 1-gnjz6)             | API                  |                      | information about    |                      |
    |        |                      |                      |                      | the machines         |                      |
    |        |                      |                      |                      | associated with the  |                      |
    |        |                      |                      |                      | cluster              |                      |
    +--------+----------------------+----------------------+----------------------+----------------------+----------------------+
    | KHV005 | 10.96.0.1:443        | Discovery // Access  | Access to API using  | The API Server port  | b'{"kind":"APIVersio |
    |        |                      | the K8S API Server   | service account      | is accessible.       | ns","versions":["v1" |
    |        |                      |                      | token                |     Depending on     | ],"serverAddressByCl |
    |        |                      |                      |                      | your RBAC settings   | ientCIDRs":[{"client |
    |        |                      |                      |                      | this could expose    | CIDR":"0.0.0.0/0","s |
    |        |                      |                      |                      | access to or control | ...                  |
    |        |                      |                      |                      | of your cluster.     |                      |

Whereas I would think that blocking that is a success. From the above, even the "v1" version disclosure is a finding, yet in the logs it times out wanting /api/v1/nodes.
