On 07/10/2018 02:10 AM, Artem Goncharov wrote:
Hi James,
Thanks a lot, I really appreciate your thoughts.
Thanks!
First, I'd like to point out that Monty is working on some jobs to automatically build and publish Zuul container images. Once those are working, you may be able to use them with this project.
I am glad this is happening, but I don't think I will use those directly. I am building the images with S2I, and there is absolutely no problem with that. Moreover, at every OpenShift meetup the question arises: how do we maintain the security of our system when using third-party images? Red Hat added a tool to OpenShift that inspects the images in use and alerts about vulnerabilities. When you take a ready-made image from somewhere, you are stuck hoping it will be updated promptly. Let's be fair: over the last month, critical vulnerabilities have been discovered nearly on a weekly basis. Are we going to patch our images, or monitor this in any way? Using S2I I do not have this constraint: whenever the OS image is updated, I am ready to rebuild my image.
In addition, I have the flexibility of simply specifying the git ref to be used in the images. When operating your own installation of any software, you want granular control over component versions. I even have a feeling that some people here base their own installations on specific Gerrit changes.
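To illustrate, this is roughly how such a rebuild looks with the s2i CLI (the builder image and repo URL here are only examples, not necessarily what zuul-apb uses):

  # Pull the freshly updated builder image, then rebuild Zuul from source,
  # pinned to whatever git ref we want:
  docker pull centos/python-36-centos7
  s2i build https://git.openstack.org/openstack-infra/zuul \
      centos/python-36-centos7 zuul:latest --ref master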
Totally. A few thoughts here, for what they're worth.

One of the things that will be involved in the upstream jobs to build and publish images is, naturally, a job that can build and publish images. Since zuul can, of course, share job config, I would hope we could make a job that would be usable by people with downstream installations like yours to build images based on local copies of things (because, as you say, people run with patches applied locally, or tied to a specific upstream review, or whatever). Of course, you should never be forced to do that, but I would like whatever jobs/scripts we put together for upstream to be re-usable. In my brain, I've been thinking about that sort of like this:

- "I want to install and run Zuul on my k8s/openshift"
- Run $something which installs Zuul from the upstream images (since they're published)
- Update zuul config to start building/publishing images
- Update zuul deployment to use the zuul-built images

But that's just me, and it may be a terrible idea.

Relying on the openshift built-in s2i *service* to build the zuul images feels weird to me, since zuul is, itself, a build automation system. Using the s2i software in the image building process, on the other hand, might be a fine idea. The pattern pbrx is using right now is *very* similar to what s2i does - in that it uses a builder pattern - and I've got some initial thoughts on how the pbrx code could be restructured to take advantage of s2i.

What I don't want to do, though, is maintain s2i scripts or Dockerfiles for each zuul service - mainly because it's almost _completely_ redundant - and it's only useful for people with container workflows. Pretty much every python project that uses pbr and bindep has every piece of information needed for building appropriate images already expressed in files that are also useful for installing the software without containers.

The general steps, which apply without modification to every python project using bindep and pbr:

- Get a base image (python:slim is the current choice, but python:alpine would work too. python:slim is the current choice because it's a sane python environment. I am **VERY** uninterested in otherwise 'choosing' a distro winner. It's a python project - it should use the python base image)
- Update the packages in the base image (for the security reasons you mention above)
- Make a bindep builder image, based on the base image, that has bindep installed.
- Boot a bindep wheel builder container based on the bindep builder image
- In the bindep wheel builder container:
  - run "bindep -b" to get the list of needed distro packages for the final install
  - install all of the distro packages needed to make wheels of the package and its dependencies (returned by 'bindep -b compile')
  - install the python project
- Collect *all* of the wheels built as a result of installing the python project
- Make a project-base image for the project, based on the base image:
  - Install the distro packages collected earlier from bindep -b
  - Install the wheels collected from the builder image step
- For each console-script listed in setup.cfg:
  - Make an image named for the console-script, based on the project-base image, with ENTRYPOINT set to the console-script

There are a few details - but that process can be followed for everything using bindep/pbr without expressing any duplicate information about dependencies or process anywhere.
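Sketched in shell, the builder-container part of that is roughly this (untested; /src and /output are made-up paths, and 'pip wheel' stands in for the "install and collect the wheels" steps):

  # Run inside the bindep wheel builder container, project source at /src:
  cd /src
  bindep -b > /output/run-packages.txt || true  # runtime distro packages for the
                                                # project-base image; bindep exits
                                                # non-zero when packages are missing,
                                                # hence the || true
  apt-get update
  apt-get install -y $(bindep -b compile)       # build deps needed to compile wheels
  pip wheel . -w /output/wheels                 # wheels for the project and all deps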
Now, as we think about using the s2i tool in the context of the above process, we can do the following (assume that '.' is the source repo). Incidentally, even though this is a python app, I think following https://github.com/openshift/source-to-image/blob/master/docs/runtime_image.... is a good plan, because there are C dependencies that have to get built, so the build toolchain, build depends and intermediate artifacts are much larger than the final container needs to be.

First, make and publish a bindep builder image. It's a really simple image to make. This Dockerfile:

  FROM python:slim
  RUN pip install bindep
  ENTRYPOINT /usr/local/bin/bindep

But to make it an s2i builder image, and not just a builder image, we need to add some scripts to it that are named appropriately and in the right place. Let's assume we've done that. Then we do:

  s2i . bindep-builder package-base \
      --runtime-image python:slim \
      --runtime-artifact /root/.cache/pip/wheels

That's actually pretty great, and it gets us the 'package-base' container mentioned above. Perhaps that's ok and all we need - after all, one could do:

  docker run -it --rm package-base -- console-script-command

for any of the commands. BUT - using zuul as a conveniently complex example, two gotchas arise.

First, zuul-executor needs a couple of distro packages that are not needed elsewhere. One of them, git, should be fine to just add to bindep. But the other, bubblewrap, is more complicated, since it needs to come from a package repository containing a newer package than what is in the python:slim image. We could make a zuul-bindep builder image that is based on the bindep builder image but additionally adds the extra repo - and then add bubblewrap to the normal bindep depends. In fact, writing it down, I think that's likely the way to go. So let's assume that's a good solution there.

Second, zuul-dashboard is a javascript application that is co-located with the python zuul project. It needs to be different - and wants to be in a runtime image based on nginx (probably). We can make a second builder image based on node:8 and then run s2i a second time. It also has a build step that produces a final artifact (compiled html/js), so we'll use the same runtime image pattern:

  s2i . bindep-javascript-builder zuul-dashboard \
      --runtime-image node:8 \
      --runtime-artifact /src/zuul/web/static

Now we've got 2 s2i commands - and wind up with two images, zuul-base and zuul-dashboard.

Finally, it would be nice to get entrypoint images for each of the other console scripts - but we can do that with a very simple script that reads setup.cfg and does, essentially:

  for console_script in get_scripts_from_setup_cfg:
      FROM zuul-base
      CMD ["/usr/local/bin/{console_script}"]
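Spelled out, that little script could be something like this (an untested sketch - the one-liner for pulling script names out of setup.cfg and the use of 'docker build -' are just illustrative):

  # Collect the console-script names from pbr's [entry_points] section:
  scripts=$(python3 -c "import configparser; c = configparser.ConfigParser(); c.read('setup.cfg'); print('\n'.join(l.split('=')[0].strip() for l in c['entry_points']['console_scripts'].strip().splitlines()))")

  # Build one tiny image per console-script, feeding a two-line Dockerfile
  # to docker build on stdin:
  for script in $scripts; do
      printf 'FROM zuul-base\nCMD ["/usr/local/bin/%s"]\n' "$script" \
          | docker build -t "$script" -
  done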
In any case, I think it's worth exploring - but I also think there is a real opportunity for general work here, rather than needing automation specific to the Zuul repo itself. I'm not 100% sure, after all of that, whether I think we should change pbrx to use s2i behind the scenes, adopt s2i instead of pbrx, or something else. This is mostly me trying to share thoughts and think out loud. We're still pretty early on this particular task, so I think switching gears a few times shouldn't be upsetting.

There are **very** few details of the above, or of any of this, that I feel strongly about - all the rest are implementation details. Those details are:

- Use python:slim or python:alpine for the base image of the python content.
- Make the resulting container images as close to 'single-process' designed images as possible.
- Avoid build depends or source code repositories in the final images.
- Put the built javascript dashboard into its own container image.
- Avoid duplicating information about distro-depends, python-depends or entrypoints.

As you can probably tell, the last point is particularly important to me. I am VERY against expressing the information in bindep.txt or requirements.txt in a second container-build-specific place. The middle points are also important - because we can actually make some very nice small images here with a little bit of work. I think it's important to have per-entrypoint images, rather than a single image with multiple potential commands that we invoke differently - because it is possible that in the future we reimplement one of the components in another language. If we're just publishing a single 'zuul' container and we then rewrote zuul-web in C++, it would get awkward.

The first point I explained a bit above - but to expand: given the way the world has changed over the last few years, I would prefer to focus on the fact that we are shipping apps based on a language. I don't think there is a benefit in 'picking' a linux distro by our choice of base image, nor do I think there is benefit in writing the automation to be able to create images based on different base linux distros. These are application containers, not machine containers. Zuul is a python app, so it should use the python base image. The dashboard is a static html/js/css application, so it should go into an nginx or apache base image.

It occurs to me that we should probably write up a spec or a design document or something with thoughts like the above - but I guess let's let the email thread be a first draft with comments and whatnot, and we can circle back around and write up something as a followup.
Generally speaking, I think we should take the approach of being fairly conservative in adding to the official Zuul project. I think things that we adopt in that context should be very widely applicable. If we adopt deployment tooling, then I think it should be very fundamental and provide building blocks that can be used in more complex systems.
By way of examples, I could see some Ansible roles, or perhaps a helm chart for a simple all-in-one system (so that we can use it in our own CI tests and document it as a quickstart deployment method), but more complex systems may not be a good fit.
Of course, those of us who have been working on OpenStack for a while will recognize this question. :)
I think right now zuul-apb would be best served as a friendly affiliated project, but not one that we officially adopt. Within the OpenStack-Infra project, we're working on an approach for hosting related projects without having to deal with issues of OpenStack (or even Zuul) branding. I think this question would be much easier to answer if we were further along with that.
Official or not official - that is not the point. I offered to make it known and, optionally, to develop it on the OpenStack infra in the same way, if there is interest in the community, as e.g. windmill is. So far I do not see any interest, contrary to what I was expecting. No problem.
I think we're trying to look a bit ahead and be careful that we don't repeat organizational difficulties that OpenStack found itself in. I think (and I imagine Jim does too) that windmill and your zuul-apb code are awesome things to exist. Similarly software factory. In terms of the wider zuul ecosystem it helps humans be successful, which is yay!

Today we're not at much risk of people getting confused by anything (or any more than they are already) because of the co-location with the 1000s of OpenStack repos. However, if you consider the canonical location of zuul repos to be git.zuul-ci.org, then we start running into the issue that OpenStack ran into with the Big Tent - which is that there are people who infer a support association due to adjacency of git repos. (These people are obviously crazy - but we've gone through this once, so we are mostly trying to learn the lesson from it.) As the zuul team, if we associate with a project such as zuul-apb or windmill, then people are going to assume that we're taking care of it. If we aren't actually collectively taking care of it, then we're potentially communicating something to people that could confuse them.

Past the software maintenance itself, we list things in the documentation tab at https://zuul-ci.org/ - and we have an install guide in progress at https://zuul-ci.org/docs/zuul/admin/zuul-from-scratch.html ... How we add things to the main documentation, and how we communicate to people how to install things, is something we need to manage as a community with limited resources. zuul-from-scratch doesn't even use ansible playbooks for now - and that's on purpose; we wanted to make sure there is documentation that doesn't assume any specific automation worldview. We've said that once we're happy that the main docs are good (actually get you a working system), then providing automated versions of them is a desirable thing.

But how many automation options do we want to provide? Ansible seems like a no-brainer, given Zuul's relationship with Ansible. But would we want to provide puppet/chef/salt too? Then we get to container-based deployments. Do we want to cover k8s and openshift? What about docker swarm? What about mesos? APB is a combo of ansible and kubernetes - how do we collectively think about that? With a core team of 13, we clearly can't do all of the options - which I don't think anyone is suggesting. The question is: which ones do we think we can deal with, which ones do we feel we can or should recommend, and how do we navigate that?

That's really long-winded. I think what I'm trying to say is that I find deployment things essential to our wider ecosystem - but we're _today_ still navigating the openstack/zuul split, as well as how to organize ourselves and communicate who we are and what's in/out of scope for us as an upstream. So I think for right now it's about being conservative with how we proceed. It's easier to add zuul-apb *LATER* as a deployment option in our docs - or as the way we test the containers we build in the gate - than it is to add/adopt it today and then walk it back later. (What if you get sick of us and stop caring about zuul-apb? What if apb itself goes away, or changes, or turns into something new?)

All of that is related to listing something in the zuul docs, or adding it to zuul-ci.org in some way. However, to make things even more complicated, zuul is hosted in the openstack infrastructure, which is pretty darned accepting of projects.
Adding zuul-apb to git.openstack.org (but not git.zuul-ci.org) is a thing that doesn't carry the same connotations. (similarly, openstack-infra/puppet-zuul exists, and I do not think that the Zuul team feels it is on the hook for it) I would certainly find it easier to interact with zuul-apb if it was on git.openstack.org than over in github, and I can think of no reason from an openstack-infra point of view why it wouldn't be an acceptable repo to add.
This is a very tricky question, and we may want to periodically re-evaluate it, or even change our minds. I'd like to hear from other folks too as to what direction we should take with regards to deployment tooling.
Strictly personal strong opinion:
Think about the box, not only about what is inside the box. When you try to make your product attractive, you work not only on the product itself but on its packaging as well. I personally would not buy a product if I had to organize transport packaging for it myself, or if I could not fit it into my home. It is not sufficient that the product is great if I can't deploy it. To tell the truth, I am tired of software that puts usability and deployment in second place. You can't expect people to use your software if you do nothing to help them deploy it and reach a functional configuration.
I guess this is exactly the same discussion that happened a couple of days ago on openstack-dev about OpenStack deployment as a whole, where some were mentioning that K8s is much easier to operate. I personally see this problem also. OpenStack is great, but it is incredibly complex, and far too little work has been done on easing its operation. Even when you take RDO, it is not much better. When I first started looking at OpenStack and tried deploying it, it took me around 2 weeks, even with a step-by-step manual and ready-made packages. When I first started looking at Zuul, it took me over 2 weeks with the zuul install guides. It was not hard to install, but as soon as you try to reach a more or less reasonable configuration, you get stuck for weeks. Windmill tries to make it easier, but is there any reference, any hint of its existence? To tell the truth, if I had not been affiliated with OpenStack and had been asked to evaluate the use of Zuul in a company, I would have given up after a week. Documentation is there, but it is not what you need when you try to start using Zuul outside of OpenStack. It focuses on the technical aspects, while tutorial-like guides and administration guides are missing. A short doc on manually installing Zuul on Fedora is absolutely insufficient for productive use.
Yes, absolutely. I touched on this a little bit above, but I'll say more words here. We ABSOLUTELY care about deployment and operational ease of use for our end users - we're just not there yet. We've been focused on making the documentation complete first, so that we'd actually accomplish that step. If we started with some ansible playbooks, we were worried that we'd never find the time to go back and actually document the end-to-end process. When that documentation is done - or at least complete enough that it can be followed and the user ends up with a working system - then providing easy-buttons is a thing I believe we've talked about being quite interested in as a community. I personally think that a simple "install zuul" ansible playbook - perhaps windmill - and a simple helm chart would be great entry points, something like the sketch below. Maybe zuul-apb wants to be an option too. (That's a harder one for me personally as a non-openshift user, as the idea of having kubernetes run my ansible, rather than ansible run my kubernetes, is alien to me. Not saying I think it's bad - it's just the opposite of how my brain would think of it, so there is a learning curve for me on that topic that I'll need to deal with.)
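Just to be concrete about the shape of those easy-buttons - none of these commands exists today; the playbook path and chart name are entirely hypothetical:

  # A hypothetical ansible easy-button:
  ansible-playbook -i inventory playbooks/install-zuul.yaml

  # ... or a hypothetical all-in-one helm chart:
  helm install --name zuul ./charts/zuul-aio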
So overall my suggestion: think about OpenStack and affiliated software the way anyone outside of OpenStack would. Only that way will you be able to see the challenges they face. I personally see that the Zuul community is focused mostly inward (while still seeking to become successful outside of OpenStack).
Yes, I totally agree. We are absolutely focused on folks outside of OpenStack - but we're not there yet due to bandwidth, so I think it's totally fair feedback that to a new community member we might not yet seem like it. Just know it's not for lack of desire. :)
Reminder: this is a strictly personal opinion and I am not trying to blame anyone.
Thanks again for writing the tool and offering it. Deployment systems are important and they strengthen the community overall.
You are welcome
Artem