Proposal to create zuul-operator project
Paul Belanger
pabelanger at redhat.com
Fri Mar 8 16:35:58 UTC 2019
On Fri, Mar 08, 2019 at 08:54:41AM -0500, Paul Belanger wrote:
> On Fri, Mar 08, 2019 at 03:36:50PM +0900, Tristan Cacqueray wrote:
> >
> > On Thu, Mar 07, 2019 at 15:19 Monty Taylor wrote:
> > > On 3/7/19 2:13 PM, Paul Belanger wrote:
> > >> On Thu, Mar 07, 2019 at 11:32:39AM +0900, Tristan Cacqueray wrote:
> > >>>
> > >>> On Wed, Mar 06, 2019 at 19:06 Monty Taylor wrote:
> > >>>> On 3/6/19 6:12 PM, Paul Belanger wrote:
> > >>>>> On Wed, Mar 06, 2019 at 05:31:11PM +0000, Monty Taylor wrote:
> > >>>>>> Hey everybody,
> > >>>>>>
> > >>>>>> It seems that kubernetes operators are all the rage these days, and in the
> > >>>>>> last 2 weeks I've had multiple conversations that have all come back around
> > >>>>>> to someone wanting a zuul operator to exist.
> > >>>>>>
> > >>>
> > >>> Hello,
> > >>>
> > >>> Thank you for starting this thread, it seems like a great opportunity to
> > >>> converge the different deployment and operation tools.
> > >>>
> > >>> Ideally, we would grow a hierarchy of operators to manage the gating
> > >>> system, and then you could have an opendev-operator to compose:
> > >>> * log-processor-operator
> > >>> * elastic-search-operator
> > >>> * paste-operator, etherpad-operator, ...
> > >>> * gerrit-operator
> > >>>
> > >> I think these are fine to have, but I am on the fence about whether they
> > >> are within the scope of what zuul should do. As an example, I can
> > >> totally see opendev.org wanting to do this. As long as everything is
> > >> disconnected from zuul-operator, I would be okay. Personally, I don't
> > >> want zuul to manage a gerrit.
> > >
> > > I agree.
> > >
> > > This is definitely an interesting thought - although I'm not sure we're
> > > all the way there from the opendev POV yet. As I learn more about
> > > operators I could certainly see an opportunity to write one for gitea
> > > once the indexing issue is fixed and we can go back to k8s for it. That
> > > said - I think opendev operators are out of scope for the zuul team - so
> > > we may want to chat about that in a slightly different context. (If the
> > > zuul operator goes well and we find it pleasing I could see that
> > > motivating doing more of them)
> > >
> > >>> We also started to investigate operators for Software Factory components
> > >>> and perhaps we could join forces?
> > >
> > > We should absolutely always join forces. :)
> > >
> >
> > Back in the early days of Software Factory we tried to integrate the
> > openstack-infra manifests, but it was such a tedious task to make them
> > work on CentOS outside of the openstack.org domain that we ended up
> > writing our own manifests.
> >
> > Putting the other opendev components aside (e.g. log services, code
> > hosting, ...), I think this is a great opportunity to converge the
> > existing Zuul deployment systems and come up with a common set of ops
> > tasks, for example to back up and restart the services.
> >
Now that I've had enough coffee and reread your original reply, I see that I
thought you were suggesting log services and code hosting alongside (you
said aside) :) So my reply basically rehashed my original opinion for no
real reason.
Sorry for the noise here.
> My first (personal) reaction is that this is too much scope for a
> zuul-operator; as a zuul operator I don't really want to sign up for
> managing a gerrit / log service / code hosting / etc. This is what
> opendev.org is, and I think it is a great thing to provide, but I
> believe it should live outside of zuul-operator by default. If there is
> some other opendev-operator that lives atop zuul-operator, I think I
> am okay with this.
>
> The main reason I say this, is look at the history of
> puppet-openstackci. In my eyes, it was almost what you mentioned here. A
> shared layer to manage an openstack-infra like environment.
>
> Things like CentOS support could have been added into puppet-openstackci,
> in fact I think there are patches on gerrit for this, but it was very
> difficult to get them merged for various reasons. We are always going to
> have different deployment opinions, which is fine, but I'd rather not
> have a single operator, with tons of feature flags, to toggle what
> people want.
>
> I think this is what other threads are suggesting: have an opendev-operator
> (logstash, code review, etc.) depend on zuul-operator (nodepool / zuul /
> zookeeper), but I wanted to write it down again.
>
> > Looking at the awesome-operators page, there are 2 ZooKeeper operators
> > and 5 for MySQL... It would be really nice if we are able to settle
> > on a single zuul-operator implementation that satisfies everyone's needs.
> >
> >
> > >>>
> > >>>>>> SO ...
> > >>>>>>
> > >>>>>> I would like to propose that we create a zuul-operator project as part of
> > >>>>>> Zuul.
> > >>>>>>
> > >>>>>> Given our close ties with Ansible, I think we should use ansible-operator
> > >>>>>> [0] to write it.
> > >>>>>>
> > >>>>>> Since we have zuul in production on both vanilla k8s and on OpenShift, I
> > >>>>>> think we should set up gate jobs to make sure it works on both. Once we're
> > >>>>>> happy with it - we should publish it to operatorhub. [1]
> > >>>>>>
> > >>>>>> Does anyone object? If not, I'll get a repo started.
> > >>>>>>
> > >>>>> I think it is a fine idea, I've also seen some talk about it recently
> > >>>>> too.
> > >>>>>
> > >>>>> I do think we should have some consensus on the bits we want to support
> > >>>>> for the deployment. Maybe going as far as saying it is an opinionated
> > >>>>> deployment, for better or worse. As an example, is graphite / statsd
> > >>>>> in play or not?
> > >>>>
> > >>>> I think that's a great question.
> > >>>>
> > >>>> I do believe it should be an opinionated deployment - but there are
> > >>>> probably places where people will want to make some choices from a
> > >>>> specific set of choices. Like - by default I think it should spin up a
> > >>>> percona-xtradb cluster as provided by the percona operator. BUT - I
> > >>>> could see a setting in the main Zuul CRD like "use-existing-db" that
> > >>>> takes a sqlalchemy url. That way, someone who has existing
> > >>>> db infrastructure could take advantage of it - but the Zuul
> > >>>> operator gets you a working system out of the box.
> > >>>>
> > >>>> Obviously we'll need to spin up a zookeeper. Not sure if there is an
> > >>>> operator for that yet - we might want to make one or work with someone
> > >>>> to make one.
> > >>>>
> > >>>
> > >>> There are actually 2 operators for ZooKeeper already listed in this page:
> > >>> https://github.com/operator-framework/awesome-operators
> > >
> > > WOOT!
> > >
> > >>>
> > >>>> I could see a graphite/statsd option with the ability to point to an
> > >>>> existing graphite. Maybe we just start with a CRD field that indicates the
> > >>>> stats_host like we have now?
> > >>>>
> > >>>> But - yes - opinionated, mostly batteries included - maybe some amount
> > >>>> of optional things.
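> > >>>>
> > >>>> To make that concrete, here's a rough sketch of what those optional
> > >>>> knobs could look like in the CRD (the "use-existing-db" and
> > >>>> "stats-host" field names are just illustrative, nothing is decided):
> > >>>>
> > >>>> ```
> > >>>> apiVersion: v1
> > >>>> kind: Zuul
> > >>>> metadata:
> > >>>>   name: example
> > >>>> spec:
> > >>>>   # Omit to get the batteries-included percona-xtradb cluster
> > >>>>   # from the percona operator.
> > >>>>   use-existing-db: mysql+pymysql://zuul:secret@db.example.com/zuul
> > >>>>   # Omit to not emit stats at all.
> > >>>>   stats-host: graphite.example.com
> > >>>> ```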
> > >>>>
> > >>>
> > >>> Should we discuss this with a spec? For example:
> > >
> > > Yes - probably so - because there are lots of design points (as you start
> > > to raise here). I'll start one today and we can iterate on it together.
> > >
> > >>> How are we going to configure the connections and the tenants'
> > >>> configuration?
> > >
> > > It's a great question. My first hunch is that both the connections
> > > config and the tenant config should just be parameters to the Zuul CRD,
> > > and then the operator could take that data, transform it into the right
> > > file formats and restart zuul or run zuul-reconfigure as needed.
> > >
> > > It's also feasible we could do it as referenced ConfigMap objects. It's
> > > also conceivable that maybe we just want connections to be parameters to
> > > the Zuul CRD and we want to make ZuulTenant CRD k8s objects and just
> > > have the operator concatenate them appropriately. That last option
> > > starts to worry me though - it's starting to feel a bit like the puppet
> > > modules that wanted to re-implement the config language of each thing
> > > (like apache) in the Puppet DSL - and how it turns out we wind up
> > > preferring the ansible "just jinja template the file itself" approach more.
> > >
> > > But I honestly don't know which we'll like more. I think the EASIEST
> > > thing for a simple first start would be having both configs just be
> > > parameters to the Zuul CRD - but we could probably play with the other
> > > things and see what we like. Or maybe there is a different idea.
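> > >
> > > For comparison, the ConfigMap variant might look something like this
> > > (purely a sketch, the field names are invented):
> > >
> > > ```
> > > spec:
> > >   tenant-config:
> > >     # Reference an existing ConfigMap instead of inlining main.yaml.
> > >     configMapRef:
> > >       name: zuul-tenant-config
> > >       key: main.yaml
> > > ```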
> > >
> >
> > Everything in the Zuul CRD sounds like the most straightforward approach.
> > Moreover, the reconcile logic can take care of the different actions
> > depending on what changed.
> >
> > The CRD I started to work on looked like this:
> > ```
> > apiVersion: v1
> > kind: ZuulCluster
> > metadata:
> >   name: test-cluster
> > spec:
> >   merger:
> >     instances: 1
> >   executor:
> >     instances: 2
> >   zookeeperHosts:
> >     - hostname:port
> >   connections:
> >     - name: git.zuul-ci.org
> >       driver: git
> >       baseurl: https://git.zuul-ci.org
> >     - name: sqlreporter
> >       driver: sql
> >       dburi: mysql+pymysql://...
> >     - name: gerrit
> >       driver: gerrit
> >       username: ...
> >   tenants:
> >     - name: local
> >       source:
> >         gerrit:
> >           config-projects:
> >             - config
> >         git.zuul-ci.org:
> >           untrusted-projects:
> >             - zuul-jobs
> > ```
> > Then I was planning on adding a similar NodepoolCluster and perhaps a
> > ZuulApplication to glue both clusters and manage zookeeper, rdbms and
> > statsd if missing.
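> >
> > Very roughly, the glue object could be as small as this (all of these
> > names are hypothetical at this point):
> >
> > ```
> > apiVersion: v1
> > kind: ZuulApplication
> > metadata:
> >   name: test
> > spec:
> >   zuulCluster: test-cluster
> >   nodepoolCluster: test-nodepool
> >   # Provision these only when no existing service is configured:
> >   manageZookeeper: true
> >   manageDatabase: true
> >   manageStatsd: true
> > ```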
> >
> > >>> Software Factory batteries include a zuul-reconfigure job to manage the
> > >>> main.yaml file. Should the zuul-operator set up such a job, or perhaps
> > >>> do we also need a zuul-quick-start-operator?
> > > I don't think the zuul operator should set up any jobs, because zuul
> > > jobs come from git repos - and the zuul operator would have to learn
> > > about all of the different types of git backends for it to be able to
> > > push commits into one - and the operator would have to have git account
> > > access.
> > >
> > > That seems like a feature that a software factory operator (or an
> > > opendev operator) would be in a much better position to provide - since
> > > it's reasonable for those operators to know more things about the
> > > surrounding git infrastructure and have accounts for it.
> > >
> > > A zuul-reconfigure job might be a nice addition to zuul-jobs though. And
> > > maybe there would need to be a zuul-operator-reconfigure if applying
> > > main.yaml changes involves pushing a new Zuul CRD manifest via kubectl
> > > to get the operator to update the file and run reconfigure. (take this
> > > main.yaml file, slurp it into the manifest, run kubectl)
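> > >
> > > As a sketch, such a job in zuul-jobs might look like this (the job,
> > > playbook and variable names are invented, just to show the shape):
> > >
> > > ```
> > > - job:
> > >     name: zuul-operator-reconfigure
> > >     description: |
> > >       Slurp the repo's main.yaml into the Zuul CRD manifest and
> > >       kubectl apply it so the operator runs zuul-reconfigure.
> > >     run: playbooks/zuul-operator-reconfigure.yaml
> > >     vars:
> > >       zuul_tenant_config_file: main.yaml
> > > ```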
> > >
> >
> > That seems out of scope indeed, though later down the road, it seems
> > like a rather common pattern to have the tenant configuration in a git
> > repository that would reconfigure zuul when a change is merged.
> >
> > >>>
> > >>>>> Also, could we maybe say a single distro for all container things? I
> > >>>>> guess by default, that is what the current containers published by the
> > >>>>> zuul project are running today. I don't really have the energy to get
> > >>>>> into the weeds of pip / dpkg / rpm for how things might get installed
> > >>>>> (if needed).
> > >>>>
> > >>>> Yes. All of our container images as we are publishing them now are
> > >>>> awesome, and are what I expect us to use.
> > >>>>
> > >>>> I am categorically opposed to us making and publishing different images.
> > >>>> One of the nice things about publishing container images is that you can
> > >>>> do them once and do them well. We're using the python base image. It
> > >>>> happens to be based on debian-stretch - but that's irrelevant. It's
> > >>>> running python.
> > >>>>
> > >>>
> > >>> That sounds great, but how are we going to cope with the next heartbleed
> > >>> or shellshock? IIUC, we would need to wait for a fix to land in the
> > >>> debian-stretch image, then wait for a new opendevorg/python-builder
> > >>> image and finally wait for a new zuul image to be published.
> > >
> > > Jeremy also responded to this - so this is basically me saying "I agree
> > > with fungi" in more words.
> > >
> > > We have to do that no matter what the base image is. But we don't
> > > actually have to wait. The zuul Dockerfile already does the apt-get
> > > update / apt-get install. If needed we could toss in a dist-upgrade if
> > > the fix has hit the repo but not the base image, so that we don't have
> > > to wait for the stack.
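> > >
> > > i.e. something like this in the existing Dockerfile (a sketch, not the
> > > actual file):
> > >
> > > ```
> > > # Pick up security fixes that have hit the repos but not yet the
> > > # base image, then trim the apt cache to keep the layer small.
> > > RUN apt-get update \
> > >   && apt-get -y dist-upgrade \
> > >   && rm -rf /var/lib/apt/lists/*
> > > ```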
> > >
> > > Supporting multiple in-image distros doesn't improve that. In fact, it
> > > increases the work for upstream because now we have to do that exact
> > > process you described, but we have to do it for each distro we're
> > > building images with.
> > >
> >
> > Agreed, there is no point in supporting multiple distro images.
> >
> > >>> Or what happens when the zuul image gets published with an incompatible
> > >>> openstacksdk requirement?
> > >>>
> > >>> Thus I think the base image and how the zuul bits get bundled are
> > >>> relevant. For example, OpenShift users may want to use BuildConfig
> > >>> instead so that a parent layer update automatically triggers a
> > >>> rebuild of the child layers.
> > >
> > > Sure - but the zuul images are built with standard Dockerfiles - so
> > > there should be nothing stopping people from rebuilding them locally
> > > however they want.
> > >
> > >> Yah, this is what I was hoping we'd discuss more when we dove into the
> > >> weeds. Can a BuildConfig work with k8s? Or do other k8s items
> > >> work with openshift? If not, I think it is fine if we only support one
> > >> and not try to do everything. Which one we support, I don't really know,
> > >> hopefully others can say which one is better :) We can also support
> > >> both, but I feel that is a lot of work for minimal gain.
> > > I think it's a strange idea, when you have a Zuul, to want to use a
> > > different build system to build zuul images, since you would not be able
> > > to CI those images as effectively as if you use the zuul image building
> > > jobs. The only good reason for local rebuilds is if you're carrying
> > > local changes.
> >
> > Another good reason to manage the image is compliance and
> > reproducibility. Using another base layer may be easier to audit and
> > update, and users may also want to build their images with curated
> > registries for things like pip or npm to prevent supply chain attacks.
> >
> >
> > > If you are carrying local changes, one would imagine you
> > > would want to be able to submit code reviews to a git system to manage
> > > those changes, and you would want those code reviews to get gated, and
> > > you would want to publish the images that result in the gate passing. If
> > > you have all of that set up in zuul to test and publish images from your
> > > local changes, it standa to reason you'd want zuul to also just do an
> > > image rebuild on your local branch in the scenarios you metion, and
> > > using a BuildConfig would not really help you do that at all. **BUT** if
> > > that's what you want to do, I mean, nothing should be stopping you.
> > >
> >
> > My main concern is about the base layer update. BuildConfig can take
> > care of rebuilding your final image when needed. You may use Zuul to
> > promote the resulting image before deploying it, but the point is, base
> > layer updates may happen out of your control, e.g. after a bug is
> > discovered in openssl.
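> >
> > For instance, a minimal BuildConfig along these lines (a sketch, the
> > names are placeholders) with an ImageChange trigger would rebuild
> > automatically whenever the tracked base image stream is updated:
> >
> > ```
> > apiVersion: build.openshift.io/v1
> > kind: BuildConfig
> > metadata:
> >   name: zuul
> > spec:
> >   source:
> >     git:
> >       uri: https://git.zuul-ci.org/zuul
> >   strategy:
> >     dockerStrategy:
> >       from:
> >         kind: ImageStreamTag
> >         name: python:slim
> >   triggers:
> >     - type: ImageChange
> >   output:
> >     to:
> >       kind: ImageStreamTag
> >       name: zuul:latest
> > ```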
> >
> >
> > > It certainly doesn't mean we need to support multi-distro inside the
> > > image content we publish as an upstream. I'm pretty sure an OpenShift
> > > BuildConfig can build an image based on python:slim as well as anything
> > > else.
> > >
> >
> > I didn't mean to derail the thread into this image building topic, which
> > is not actually relevant to the zuul-operator. Sorry about that :)
> > I mean, as long as the zuul-operator can use an optional custom image.
> >
> Yesterday I had a chance to boot the zuul/zuul-merger image in
> openshift, and not surprisingly it seems to work a little differently than,
> say, local docker. Openshift wasn't happy that the user inside the
> container was running as root; it seems it really wants a non-root user
> inside. So, I do think we might want to make all our images work as
> well as possible in each thing (docker, podman, k8s, openshift, etc) but
> honestly, I'm not really sure what the work looks like. In this case, I
> think it is just a matter of creating a zuul user inside the container
> and starting zuul / nodepool with it.
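>
> Something like this at the end of the Dockerfile would probably do (the
> uid is arbitrary, just an example):
>
> ```
> # Run as an unprivileged user so openshift is happy.
> RUN useradd --create-home --uid 10001 zuul
> USER zuul
> ```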
>
> > Regards,
> > -Tristan
> >
> > >>>> I DO think we'll likely need to support specifying an image location in
> > >>>> the CRD. (rook does this for ceph too) That's because I'd like for
> > >>>> Tobias to be able to use the operator if he chooses, but he uses local
> > >>>> builds of the images. And I think that's fine.
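> > >>>>
> > >>>> e.g. a field along these lines (the name is invented for
> > >>>> illustration):
> > >>>>
> > >>>> ```
> > >>>> spec:
> > >>>>   # Optional; defaults to the upstream-published zuul images.
> > >>>>   image: registry.example.com/zuul/zuul:latest
> > >>>> ```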
> > >>>>
> > >>>>> I am likely showing how much I don't know about k8s / openshift here,
> > >>>>> but I want to limit the number of different ways to do a deployment
> > >>>>> based on underlying distro bits.
> > >>>>
> > >>>> +10000000000
> > >>>>
> > >>>> I believe there is no value in producing divergent images.
> > >>>>
> > >>>
> > >>
> > >>
> > >>