On Tue, Mar 12, 2019 at 09:57 Clint Byrum wrote:
Quoting Tristan Cacqueray (2019-03-11 20:09:24)
On Mon, Mar 11, 2019 at 16:05 James E. Blair wrote:
Paul Belanger <pabelanger@redhat.com> writes:
On Mon, Mar 11, 2019 at 01:47:26PM -0700, James E. Blair wrote: ...
Broadly speaking, there are two classes of actions we might want to block:
1) Actions which may expose private information stored on the executor (including information used by Zuul itself or other jobs), or otherwise compromise the security of the executor itself (so that future jobs cannot be trusted).
2) Actions which may allow a user to take advantage of privileges afforded to the executor host itself. For example, if the executor is behind a network firewall and therefore can access unprotected network services.
One topic I keep hearing from time to time is moving where the zuul-executor runs ansible-playbook. Most commonly, I've heard people asking for it to be run inside a pod (off the static zuul-executor host). I understand that in our case it is a little more difficult than that, given how we deal with git repo state. Could you speak to whether that would provide a better / safer / other place for ansible-playbook to run?
With respect to the first class, what you describe would not be significantly different. We already run Ansible in a container; running it in a different container would not change much.
Though the Zuul executor job containers are missing cgroups, network namespaces, and a seccomp filter. And we assume the executor process can create containers, which may not be possible in some restricted environments.
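To illustrate the network namespace part, something like this could be bolted onto the executor's bwrap invocation (a rough Python sketch; the bind mounts, paths and playbook name are made up, only the bwrap flags themselves are real):

    import subprocess

    # Hypothetical wrapper: run the job playbook inside bwrap with its own
    # network (and user/pid) namespaces so the job cannot reach the
    # executor's network directly.
    bwrap_cmd = [
        "bwrap",
        "--unshare-user",
        "--unshare-pid",
        "--unshare-net",                          # drop access to the host network
        "--ro-bind", "/usr", "/usr",
        "--ro-bind", "/etc", "/etc",
        "--proc", "/proc",
        "--dev", "/dev",
        "--bind", "/var/lib/zuul/work", "/work",  # hypothetical work dir
        "ansible-playbook", "/work/playbook.yaml",
    ]
    subprocess.run(bwrap_cmd, check=True)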
A seccomp filter would need to be created for every potential job. Any attempt to make a generic one will have limited benefit as it won't really be blocking much.
Well, the common seccomp filters usually provide a good defensive layer by blocking functionality that shouldn't be available to any given job (such as using the SCTP protocol). Here is the one used by docker: https://docs.docker.com/v18.03/engine/security/seccomp/#significant-syscalls...
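For the sake of illustration, a single rule in that Docker/OCI profile format which would deny SCTP socket creation while allowing everything else could look roughly like this (IPPROTO_SCTP is 132 in the Linux headers; the exact profile Zuul would want is not defined here):

    import json

    # Sketch of an OCI-style seccomp profile: allow everything by default,
    # but return EPERM for socket(..., ..., IPPROTO_SCTP).
    profile = {
        "defaultAction": "SCMP_ACT_ALLOW",
        "syscalls": [
            {
                "names": ["socket"],
                "action": "SCMP_ACT_ERRNO",
                "args": [
                    # arg 2 of socket() is the protocol; 132 == IPPROTO_SCTP
                    {"index": 2, "value": 132, "op": "SCMP_CMP_EQ"},
                ],
            },
        ],
    }
    print(json.dumps(profile, indent=2))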
What problems that the Ansible restrictions prevent now would be prevented by cgroups and/or network namespaces?
Cgroups would let us limit memory and CPU usage. Network namespaces would let two jobs bind the same port, and we could set up packet filtering to prevent the job network from accessing the Zuul control plane.
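As a rough sketch of the cgroup part, with the cgroup v2 filesystem interface it is mostly a matter of creating a group per job and writing the limits (the group name and the limits below are made up for the example):

    import os

    # Create a cgroup for the job and cap its memory and CPU usage.
    job_cgroup = "/sys/fs/cgroup/zuul-job-example"
    os.makedirs(job_cgroup, exist_ok=True)

    with open(os.path.join(job_cgroup, "memory.max"), "w") as f:
        f.write("2G")                   # hard memory cap
    with open(os.path.join(job_cgroup, "cpu.max"), "w") as f:
        f.write("100000 100000")        # quota/period in usec, i.e. ~1 CPU

    # The job's bwrap process is then moved into the group by writing its
    # pid into the group's cgroup.procs file.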
Also, USER_NS has been in mainline kernels for *years* now. I'm comfortable with that assumption. We also require Python 3.5+, which isn't available in a lot of older OSes, so I'd like to think we can continue the aggressive stance on dependencies.
As explained in another mail, USER_NS is not enough to enable using bwrap in k8s.
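Concretely, what "completing" the user namespace means is the id mapping step that follows unshare(CLONE_NEWUSER); here is a minimal ctypes sketch of that step (illustrative only: whether it can be done unprivileged depends on the kernel and on the pod's seccomp/capability settings, and mapping more than a single id needs CAP_SETUID/CAP_SETGID or the setuid newuidmap/newgidmap helpers):

    import ctypes
    import os

    CLONE_NEWUSER = 0x10000000
    libc = ctypes.CDLL(None, use_errno=True)

    # Capture our ids in the parent namespace before unsharing.
    uid, gid = os.getuid(), os.getgid()

    if libc.unshare(CLONE_NEWUSER) != 0:
        raise OSError(ctypes.get_errno(), "unshare(CLONE_NEWUSER) failed")

    # "Complete" the namespace by writing the id mappings; a single-id
    # mapping like this one does not require extra capabilities.
    with open("/proc/self/setgroups", "w") as f:
        f.write("deny")
    with open("/proc/self/gid_map", "w") as f:
        f.write("0 %d 1" % gid)
    with open("/proc/self/uid_map", "w") as f:
        f.write("0 %d 1" % uid)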
I wish we would consider using external pods to run jobs; this would remove the executor privilege requirement and simplify resource scaling.
Not sure anybody has suggested this isn't possible, but I think what's on the table now is removing the Ansible restrictions without doing external pod execution. Are you suggesting pod execution should be added first?
I'm suggesting we could have a pod execution context driver that could be used in place of the bubblewrap one.
Also there is no executor privilege requirement if your kernel supports USER_NS.
IIUC, you still need the SETUID/SETGID caps to be able to complete the user namespace creation. Regards, -Tristan
With respect to the second, it would mean that an operator would be able to worry less about what the Zuul executor has access to, but would need to ensure that whatever off-host, non-Zuul execution environment they use doesn't have access to anything it shouldn't. Basically, it moves the problem. As long as the things that Zuul requires access to (e.g., gearman, zookeeper) can be protected, I don't think it's a significant difference from a security POV.
Note that ZooKeeper access cannot be protected without a fix for https://issues.apache.org/jira/browse/ZOOKEEPER-2937 (at least not without network isolation between the executor service and the job tasks).
What a bummer, I didn't know about that.
I believe the workaround for this is to put a TLS terminator in front of Zookeeper that enforces client cert auth. Seems like it would be worth our time to dust off that PR and get it merged and released though.
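A minimal sketch of the enforcement side of such a terminator with Python's ssl module (the certificate paths and port are made up, and in practice this would more likely be stunnel or haproxy forwarding to the real ZooKeeper on 2181):

    import socket
    import ssl

    # Require a client certificate signed by our CA before a connection
    # gets anywhere near ZooKeeper.
    ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
    ctx.verify_mode = ssl.CERT_REQUIRED
    ctx.load_cert_chain("/etc/zuul/zk-proxy.crt", "/etc/zuul/zk-proxy.key")
    ctx.load_verify_locations("/etc/zuul/zk-clients-ca.crt")

    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.bind(("0.0.0.0", 2281))
    listener.listen(5)

    conn, addr = listener.accept()
    tls_conn = ctx.wrap_socket(conn, server_side=True)
    # ...the plaintext side of tls_conn would then be proxied to ZooKeeper.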
Now is a good time to clarify that there are *significant* technical problems and potential loss of functionality with that approach, and there are no current plans to support it.
I do like the additional layer of protection we added to ansible, but I'm not really surprised we are wanting to remove it. As we get better at doing things with zuul, we've needed to relax the restrictions on some of the modules in use (e.g., add_host / known_hosts recently). What would it look like if a zuul operator wanted to manage this additional security themselves? Or is that something we'd even consider? For example, in zuul 3.3.1[1] we released a point release with additional filtering for add_host and SSH; we'd now be saying that was actually okay, but it could still be an attack vector, right?
The vulnerability there is running arbitrary code on the executor, and, if we decide to proceed with this, we would be making the decision that this is no longer a vulnerability but a feature.
That sounds rather risky, especially since executors handle jobs' secrets. How are we going to prevent one play from tainting the ones that are protected?
The current restrictions aren't sufficient anyway. You can write your own Ansible module and do whatever you want, since anything that isn't an Ansible action isn't filtered. IMO doing things at this granular level is a never-ending time sink, and that's what Jim is suggesting we end: trust bubblewrap, lock it down further if necessary, and look at your executors for what they are: dangerous remote code executors that need to be watched and controlled.
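To make that concrete, a job could ship something like the following in a role's library/ directory and call it from an otherwise innocuous task; none of the per-action filtering applies to it (a hypothetical module written purely to illustrate the point):

    #!/usr/bin/python
    # library/run_anything.py -- hypothetical module showing why filtering
    # individual Ansible actions is not sufficient: a job can always carry
    # its own module and run arbitrary commands through it.
    import subprocess

    from ansible.module_utils.basic import AnsibleModule


    def main():
        module = AnsibleModule(argument_spec=dict(
            cmd=dict(type='str', required=True),
        ))
        proc = subprocess.run(module.params['cmd'], shell=True,
                              stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
        module.exit_json(changed=True, rc=proc.returncode,
                         output=proc.stdout.decode(errors='replace'))


    if __name__ == '__main__':
        main()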