Proposal to create zuul-operator project

Monty Taylor mordred at inaugust.com
Fri Mar 8 18:23:08 UTC 2019



On 3/8/19 6:36 AM, Tristan Cacqueray wrote:
> 
> On Thu, Mar 07, 2019 at 15:19 Monty Taylor wrote:
>> On 3/7/19 2:13 PM, Paul Belanger wrote:
>>> On Thu, Mar 07, 2019 at 11:32:39AM +0900, Tristan Cacqueray wrote:
>>>>
>>>> On Wed, Mar 06, 2019 at 19:06 Monty Taylor wrote:
>>>>> On 3/6/19 6:12 PM, Paul Belanger wrote:
>>>>>> On Wed, Mar 06, 2019 at 05:31:11PM +0000, Monty Taylor wrote:
>>>>>>> Hey everybody,
>>>>>>>
>>>>>>> It seems that kubernetes operators are all the rage these days, and in the
>>>>>>> last 2 weeks I've had multiple conversations that have all come back around
>>>>>>> to someone wanting a zuul operator to exist.
>>>>>>>
>>>>
>>>> Hello,
>>>>
>>>> Thank you for starting this thread. It seems like a great opportunity to
>>>> converge the different deployment and operation tools.
>>>>
>>>> Ideally, we would grow a hierarchy of operators to manage the gating
>>>> system, and then you could have an opendev-operator to compose:
>>>> * log-processor-operator
>>>> * elastic-search-operator
>>>> * paste-operator, etherpad-operator, ...
>>>> * gerrit-operator
>>>>
>>> I think these are fine to have, but I am on the fence about whether they
>>> are within the scope of what zuul should do.  As an example, I can totally
>>> see opendev.org wanting to do this. As long as everything is all
>>> disconnected from zuul-operator, I would be okay. Personally, I don't
>>> want zuul to manage a gerrit.
>>
>> I agree.
>>
>> This is definitely an interesting thought - although I'm not sure we're
>> all the way there from the opendev POV yet. As I learn more about
>> operators I could certainly see an opportunity to write one for gitea
>> once the indexing issue is fixed and we can go back to k8s for it. That
>> said - I think opendev operators are out of scope for the zuul team - so
>> we may want to chat about that in a slightly different context. (If the
>> zuul operator goes well and we find it pleasing, I could see that
>> motivating us to do more of them.)
>>
>>>> We also started to investigate operators for the Software Factory components
>>>> and perhaps we could join forces?
>>
>> We should absolutely always join forces. :)
>>
> 
> Back in the early days of Software Factory we tried to integrate the
> openstack-infra manifests, but it was such a tedious task to make them
> work on CentOS outside of the openstack.org domain that we ended up
> writing our own manifests.
> 
> Putting the other opendev components aside (e.g. log services, code
> hosting, ...), I think this is a great opportunity to converge the
> existing Zuul deployment systems and come up with a common set of ops
> tasks, for example to back up and restart the services.
> 
> Looking at the awesome-operators page, there are 2 ZooKeeper operators
> and 5 for MySQL... It would be really nice if we are able to settle
> on a single zuul-operator implementation that satisfies everyone's needs.

Yes. I think the world would be a sad place if there were 2 zuul operators.

I definitely also want to list it at operatorhub:

https://www.operatorhub.io

because apparently the Operator Lifecycle Manager will know how to pull 
operators from operatorhub based on dependency metadata declarations.

Incidentally, this:

https://www.operatorhub.io/operator/percona-xtradb-cluster-operator-community.v0.3.0-rc1

is there for the database. None of the awesome-operators zookeeper ones are
on operatorhub - so we may need to dig into that one a bit more.
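
If I'm reading the OLM docs right, declaring that dependency would go in 
the zuul-operator's ClusterServiceVersion as a "required" CRD. Something 
vaguely like this sketch - the zuul names are obviously invented since 
the operator doesn't exist yet, and the exact field layout should be 
double-checked against the OLM docs:

```
# Rough sketch only - the zuul CRD names are invented and the percona
# CRD details should be verified against the percona operator's CSV.
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: zuul-operator.v0.0.1
spec:
  customresourcedefinitions:
    owned:
      # The CRD our operator would provide
      - name: zuuls.zuul-ci.org
        version: v1alpha1
        kind: Zuul
    required:
      # CRDs provided by other operators - OLM would resolve and
      # install the operators that provide them from operatorhub
      - name: perconaxtradbclusters.pxc.percona.com
        version: v1alpha1
        kind: PerconaXtraDBCluster
```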

> 
>>>>
>>>>>>> SO ...
>>>>>>>
>>>>>>> I would like to propose that we create a zuul-operator project as part of
>>>>>>> Zuul.
>>>>>>>
>>>>>>> Given our close ties with Ansible, I think we should use ansible-operator
>>>>>>> [0] to write it.
>>>>>>>
>>>>>>> Since we have zuul in production on both vanilla k8s and on OpenShift, I
>>>>>>> think we should set up gate jobs to make sure it works on both. Once we're
>>>>>>> happy with it - we should publish it to operatorhub. [1]
>>>>>>>
>>>>>>> Does anyone object? If not, I'll get a repo started.
>>>>>>>
>>>>>> I think it is a fine idea; I've also seen some talk about it recently
>>>>>> too.
>>>>>>
>>>>>> I do think we should have some consensus on the bits we want to support
>>>>>> for the deployment. Maybe going as far as saying it is an opinionated
>>>>>> deployment, for better or worse. As an example, is graphite / statsd
>>>>>> in play or not?
>>>>>
>>>>> I think that's a great question.
>>>>>
>>>>> I do believe it should be an opinionated deployment - but there are
>>>>> probably places where people will want to make some choices from a
>>>>> specific set of choices. Like - by default I think it should spin up a
>>>>> percona-xtradb cluster as provided by the percona operator. BUT - I
>>>>> could see a setting in the main Zuul CRD like "use-existing-db" that
>>>>> takes a sqlalchemy url. That way if there is someone who has an existing
>>>>> db infrastructure, they could take advantage of it - but the Zuul
>>>>> operator gets you a working system out of the box.
>>>>>
>>>>> Obviously we'll need to spin up a zookeeper. Not sure if there is an
>>>>> operator for that yet - we might want to make one or work with someone
>>>>> to make one.
>>>>>
>>>>
>>>> There are actually 2 operators for ZooKeeper already listed on this page:
>>>> https://github.com/operator-framework/awesome-operators
>>
>> WOOT!
>>
>>>>
>>>>> I could see a graphite/statsd option with the ability to point to an
>>>>> existing graphite. Maybe we just start with a CRD field that indicates the
>>>>> stats_host like we have now?
>>>>>
>>>>> But - yes - opinionated, mostly batteries included - maybe some amount
>>>>> of optional things.
>>>>>
>>>>
>>>> Should we discuss this with a spec? For example:
>>
>> Yes - probably so - because there are lots of design points (as you start
>> to raise here). I'll start one today and we can iterate on it together.
>>
>>>> How are we going to configure the connections and the tenants'
>>>> configuration?
>>
>> It's a great question. My first hunch is that both the connections
>> config and the tenant config should just be parameters to the Zuul CRD,
>> and then the operator could take that data, transform it into the right
>> file formats and restart zuul or run zuul-reconfigure as needed.
>>
>> It's also feasible we could do it as referenced ConfigMap objects. It's
>> also conceivable that maybe we just want connections to be parameters to
>> the Zuul CRD and we want to make ZuulTenant CRD k8s objects and just
>> have the operator concatenate them appropriately. That last option
>> starts to worry me though - it starts to feel a bit like the puppet
>> modules that wanted to re-implement the config language of each thing
>> (like apache) in the Puppet DSL - and it turns out we wound up
>> preferring the ansible "just jinja template the file itself" approach more.
>>
>> But I honestly don't know which we'll like more. I think the EASIEST
>> thing for a simple first start would be having each piece of content just be
>> parameters to the Zuul CRD - but we could probably play with the other
>> things and see what we like. Or maybe there is a different idea.
>>
> 
> Everything in the Zuul CRD sounds like the most straightforward option.
> Moreover, the reconcile logic can take care of the different actions
> depending on what changed.
> 
> The CRD I started to work on looked like this:
> ```
> apiVersion: v1
> kind: ZuulCluster
> metadata:
>    name: test-cluster
> spec:
>    merger:
>      instances: 1
>    executor:
>      instances: 2
>    zookeeperHosts:
>      - hostname:port
>    connections:
>      - name: git.zuul-ci.org
>        driver: git
>        baseurl: https://git.zuul-ci.org
>      - name: sqlreporter
>        driver: sql
>        dburi: mysql+pymysql://...
>      - name: gerrit
>        driver: gerrit
>        username: ...
>    tenants:
>      - name: local
>        source:
>          gerrit:
>            config-projects:
>              - config
>          git.zuul-ci.org:
>            untrusted-projects:
>              - zuul-jobs
> ```
> Then I was planning on adding a similar NodepoolCluster and perhaps a
> ZuulApplication to glue both clusters together and manage zookeeper, rdbms and
> statsd if missing.

Yes - I think that's likely a great approach to start with. (Although I 
think we can just call ZuulApplication "Zuul" - I like typing 
shorter things.)

Although I could also see just having a Zuul and a Nodepool and having a 
db field that could optionally be overridden but otherwise would cause a 
percona operator to get created and a sqlreporter entry added to the 
connections list. Similarly the Zuul operator will need to depend on a 
Zookeeper resource provided by an operator - but allow it to be 
overridden. Having that there though - I don't think the user of the 
Zuul operator should have to put an entry in the Zuul CR telling it 
to put the zookeeper info into the config file - those should be zuul 
operator details. *waves hands*

Basically - when there's an opportunity for the operator to take care of 
connecting some info here we should take it.
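
To make the hand-waving slightly more concrete, here's a very rough 
sketch of the sort of thing I mean - every field name is invented and 
very much up for discussion in the spec:

```
# Hypothetical Zuul CR. If "database" or "zookeeper" are omitted, the
# operator would create them itself (via the percona and zookeeper
# operators) and wire the connection info into zuul.conf on its own.
apiVersion: zuul-ci.org/v1alpha1
kind: Zuul
metadata:
  name: example
spec:
  merger:
    instances: 1
  executor:
    instances: 2
  # Optional override - use an existing database instead of letting
  # the operator create a percona-xtradb cluster.
  database:
    dburi: mysql+pymysql://zuul:secret@db.example.com/zuul
  # Optional override - use an existing ZooKeeper.
  zookeeper:
    hosts:
      - zk.example.com:2181
  connections:
    - name: gerrit
      driver: gerrit
      username: zuul
  tenants:
    - name: local
      source:
        gerrit:
          config-projects:
            - config
```

The important bit being that the sqlreporter connection and the 
zookeeper hosts never show up in the CR at all unless you're overriding 
them.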

The nodepool case is a bit more interesting to think about - because a 
Zuul requires a Nodepool - but the Nodepool config is a bit more complex 
and maybe would start to feel very heavyweight if we just put it all 
into the CRD? I'm sure we'll learn a lot as we poke ...


>>>> Software Factory batteries include a zuul-reconfigure job to manage the
>>>> main.yaml file. Should the zuul-operator set up such a job, or perhaps
>>>> do we also need a zuul-quick-start-operator?
>> I don't think the zuul operator should set up any jobs, because zuul
>> jobs come from git repos - and the zuul operator would have to learn
>> about all of the different types of git backends for it to be able to
>> push commits into one - and the operator would have to have git account
>> access.
>>
>> That seems like a feature that a software factory operator (or an
>> opendev operator) would be in a much better position to provide - since
>> it's reasonable for those operators to know more things about the
>> surrounding git infrastructure and have accounts for it.
>>
>> A zuul-reconfigure job might be a nice addition to zuul-jobs though. And
>> maybe there would need to be a zuul-operator-reconfigure if applying
>> main.yaml changes involves pushing a new Zuul CRD manifest via kubectl
>> to get the operator to update the file and run reconfigure. (take this
>> main.yaml file, slurp it into the manifest, run kubectl)
>>
> 
> That seems out of scope indeed, though later down the road, it seems
> like a rather common pattern to have the tenant configuration in a git
> repository that would reconfigure zuul when a change is merged.

Agree
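
If someone does write such a job, I'd imagine the playbook boiling down 
to something like this sketch - the CR kind and fields are the same 
made-up ones as above; only the ansible modules and the 
zuul.project.src_dir variable are real things:

```
- hosts: localhost
  tasks:
    # Read the main.yaml that just merged in the config project
    - name: Read tenant config
      slurp:
        src: "{{ zuul.project.src_dir }}/main.yaml"
      register: tenant_config

    # Push it into the (hypothetical) Zuul custom resource; the
    # operator's reconcile loop would rewrite the file and trigger a
    # reconfigure as needed.
    - name: Update Zuul custom resource
      k8s:
        state: present
        definition:
          apiVersion: zuul-ci.org/v1alpha1
          kind: Zuul
          metadata:
            name: example
            namespace: zuul
          spec:
            tenantConfig: "{{ tenant_config.content | b64decode }}"
```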

>>>>
>>>>>> Also, could we maybe say a single distro for all container things? I
>>>>>> guess by default, that is what the current containers from the zuul project
>>>>>> are running today. I don't much have the energy to get into the weeds of pip
>>>>>> / dpkg / rpm for how things might get installed (if needed).
>>>>>
>>>>> Yes. All of our container images as we are publishing them now are
>>>>> awesome, and are what I expect us to use.
>>>>>
>>>>> I am categorically opposed to us making and publishing different images.
>>>>> One of the nice things about publishing container images is that you can
>>>>> do them once and do them well. We're using the python base image. It
>>>>> happens to be based on debian-stretch - but that's irrelevant. It's
>>>>> running python.
>>>>>
>>>>
>>>> That sounds great, but how are we going to cope with the next heartbleed
>>>> or shellshock? IIUC, we would need to wait for a fix to land in the
>>>> debian-stretch image, then wait for a new opendevorg/python-builder
>>>> image and finally wait for a new zuul image to be published.
>>
>> Jeremy also responded to this - so this is basically me saying "I agree
>> with fungi" in more words.
>>
>> We have to do that no matter what the base image is. But we don't
>> actually have to wait for the whole stack. The zuul Dockerfile already does
>> the apt-get update / apt-get install, and if needed we could toss in a
>> dist-upgrade when the fix has hit the repo but not the base image, so that
>> we don't have to wait for the base image to be rebuilt.
>>
>> Supporting multiple in-image distros doesn't improve that. In fact, it
>> increases the work for upstream because now we have to do that exact
>> process you described, but we have to do it for each distro we're
>> building images with.
>>
> 
> Agreed, there is no point in supporting multiple distro images.
> 
>>>> Or what happens when the zuul image gets published with an incompatible
>>>> openstacksdk requirement?
>>>>
>>>> Thus I think the base image and how the zuul bits get bundled are
>>>> relevant. For example, OpenShift users may want to use a BuildConfig
>>>> instead so that a parent layer update automatically triggers a rebuild
>>>> of the child layers.
>>
>> Sure - but the zuul images are built with standard Dockerfiles - so
>> there should be nothing stopping people from rebuilding them locally
>> however they want.
>>
>>> Yah, this is what I was hoping we'd discuss more when we dove into the
>>> weeds. Can a BuildConfig work with k8s? Or do some other k8s items
>>> work with openshift? If not, I think it is fine if we only support one
>>> and not try to do everything. Which one we support, I don't really know,
>>> hopefully others can say which one is better :) We can also support
>>> both, but I feel that is a lot of work for minimal gain.
>> I think it's a strange idea, when you have a Zuul, to want to use a
>> different build system to build zuul images, since you would not be able
>> to CI those images as effectively as if you use the zuul image building
>> jobs. The only good reason for local rebuilds is if you're carrying
>> local changes.
> 
> Another good reason to manage the image is for compliance and
> reproducibility. Using another base layer may be easier to audit and
> update, and users may also want to build their image with curated
> registries for things like pip or npm to prevent supply chain attacks.
> 
> 
>> If you are carrying local changes, one would imagine you
>> would want to be able to submit code reviews to a git system to manage
>> those changes, and you would want those code reviews to get gated, and
>> you would want to publish the images that result in the gate passing. If
>> you have all of that set up in zuul to test and publish images from your
>> local changes, it stands to reason you'd want zuul to also just do an
>> image rebuild on your local branch in the scenarios you mention, and
>> using a BuildConfig would not really help you do that at all. **BUT** if
>> that's what you want to do, I mean, nothing should be stopping you.
>>
> 
> My main concern is about the base layer update. BuildConfig can take
> care of rebuilding your final image when needed. You may use Zuul to
> promote the resulting image before deploying it, but the point is, base
> layer updates may happen out of your control, e.g. after a bug is
> discovered in openssl.

Sure - I just think that it's a better route to have something trigger 
Zuul to do a rebuild and potential promotion when a base layer changes. 
(A simple trigger should be sufficient) BuildConfig is a whole other 
system with a whole other worldview and I would be more worried that 
Zuul and BuildConfig would fight each other. (and I don't think 
BuildConfig adds anything, other than potentially having a trigger 
defined watching for changes to base images)
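
Until something like that trigger exists, even just a periodic pipeline 
re-running the existing image build/upload jobs would get most of the 
way there - a sketch, with the pipeline and job names purely 
illustrative:

```
# Illustrative only - rebuild and publish images on a schedule so base
# layer updates get picked up without involving BuildConfig.
- pipeline:
    name: periodic-image-rebuild
    manager: independent
    trigger:
      timer:
        - time: '0 4 * * *'

- project:
    periodic-image-rebuild:
      jobs:
        - zuul-build-image
        - zuul-upload-image
```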

> 
>> It certainly doesn't mean we need to support multi-distro inside the
>> image content we publish as an upstream. I'm pretty sure an OpenShift
>> BuildConfig can build an image based on python:slim as well as anything
>> else.
>>
> 
> I didn't mean to derail the thread onto this image building topic, which
> is not actually relevant to the zuul-operator. Sorry about that :)
> I mean, as long as the zuul-operator can use an optional custom image.

++ - I think it's essential that it can use an optional custom image.

> Regards,
> -Tristan
> 
>>>>> I DO think we'll likely need to support specifying an image location in
>>>>> the CRD. (rook does this for ceph too) That's because I'd like for
>>>>> Tobias to be able to use the operator if he chooses, but he uses local
>>>>> builds of the images. And I think that's fine.
>>>>>
>>>>>> I am likely showing how much I don't know about k8s / openshift here,
>>>>>> but I want to limit the number of different ways to do a deployment
>>>>>> based on underlying distro bits.
>>>>>
>>>>> +10000000000
>>>>>
>>>>> I believe there is no value in producing divergent images.
>>>>>
>>>>
>>>
>>>
>>>