From amy at demarco.com Mon Nov 1 00:20:01 2021 From: amy at demarco.com (Amy Marrich) Date: Sun, 31 Oct 2021 19:20:01 -0500 Subject: [Diversity] Diversity and Inclusion Meeting Reminder Message-ID: The Diversity & Inclusion WG invites members of all OIF projects to attend our next meeting Monday November 1st, at 17:00 UTC in the #openinfra- diversity channel on OFTC. The agenda can be found at https://etherpad.openstack.org/p/diversity-wg-agenda. Please feel free to add any topics you wish to discuss at the meeting. Thanks, Amy (apotz) -------------- next part -------------- An HTML attachment was scrubbed... URL: From tdecacqu at redhat.com Fri Nov 5 12:06:52 2021 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Fri, 05 Nov 2021 12:06:52 +0000 Subject: Indexing Zuul build logs Message-ID: <87fssag61v.tristanC@fedora> Hi, I would like to discuss the best practices for build logs processing. The goal is to improve the user experience when debugging a job that produces a lot of logs (e.g. >10MB) and to help discover failures that are transients and unrelated to a particular build. The existing options are (please reply with your own solutions): - opendev/puppet-log_processor : a gearman service that can be notified through a post job task to trigger build log indexing in logstash. This also provides a classify-log.crm function to annotate log with a score. - openstack/ci-log-processing : a work in progress service to replace the post job task and use the log_processor through the builds api. - opendev/log-classify : an import of my logreduce tool that runs as a post task to provide an html report directly along the build log. https://softwarefactory-project.io/cgit/logreduce/tree/README.rst (The initial opendev reviewes are still in open). - github.com/bmwcarit/zubbi : a service to index jobs and roles in elasticsearch. This is not exactly related to the goal, but it is worth mentioning it. I am wondering if Zuul shouldn't support elasticsearch directly, by providing a `upload-logstash` role in zuul-jobs, and by supporting browsing the elasticsearch api directly from zuul-web. This would remove the need for an extra service to do the indexing, and we wouldn't have to process and store the logs twice. Am I missing something, how do you process build logs? Regards, -Tristan -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 515 bytes Desc: not available URL: From cboylan at sapwetik.org Fri Nov 5 15:49:55 2021 From: cboylan at sapwetik.org (Clark Boylan) Date: Fri, 05 Nov 2021 08:49:55 -0700 Subject: Indexing Zuul build logs In-Reply-To: <87fssag61v.tristanC@fedora> References: <87fssag61v.tristanC@fedora> Message-ID: <82d7ad2e-04e4-4f97-a5f3-713988b1fce9@www.fastmail.com> On Fri, Nov 5, 2021, at 5:06 AM, Tristan Cacqueray wrote: > Hi, > > I would like to discuss the best practices for build logs processing. > > The goal is to improve the user experience when debugging a job that > produces a lot of logs (e.g. >10MB) and to help discover failures that > are transients and unrelated to a particular build. > > The existing options are (please reply with your own solutions): > > - opendev/puppet-log_processor : a gearman service that can be notified > through a post job task to trigger build log indexing in logstash. This > also provides a classify-log.crm function to annotate log with a score. 
> > - openstack/ci-log-processing : a work in progress service to replace > the post job task and use the log_processor through the builds api. > > - opendev/log-classify : an import of my logreduce tool that runs as > a post task to provide an html report directly along the build log. > https://softwarefactory-project.io/cgit/logreduce/tree/README.rst > (The initial opendev reviewes are still in open). > > - github.com/bmwcarit/zubbi : a service to index jobs and roles in > elasticsearch. This is not exactly related to the goal, but it is > worth mentioning it. > > I am wondering if Zuul shouldn't support elasticsearch directly, by > providing a `upload-logstash` role in zuul-jobs, and by supporting > browsing the elasticsearch api directly from zuul-web. This would remove > the need for an extra service to do the indexing, and we wouldn't have > to process and store the logs twice. > > Am I missing something, how do you process build logs? My personal experience [0] is that Elasticsearch [1] is an extremely powerful secondary index that provides useful insight to CI system users, but there are a number of issues that make it problematic as a primary index or storage for CI logs: * Performance: Indexing logs is not free. The cost involved here scales pretty directly with the size of the input data and the complexity of the processing rules. But you can also be impacted by load on the Elasticsearch cluster, the busier you get the slower your system will be as more jobs fight for indexing time and time to respond to user queries [2]. Zuul is built to do its best to avoid the exceptional cases where you need to deep dive into logs. Optimizing for these cases which should happen infrequently affects the performance of all jobs. * SIze/Volume: CI job logs compress extremely well until you put them into Elasticsearch. Back when OpenDev stored log files on an ext4 filesystem we managed about 30 days of compressed logs on 12TB of disk. Our 6TB Elasticsearch cluster can fill after 7 days and that is after removing all DEBUG level logs and only indexing specific files we know we care about. You can fill a lot of disks very quickly. It is not very efficient in this way and lends itself to being specific about what you index. * Reliability: There is a lot that can go wrong when indexing hundreds of gigabytes a day. Logstash processes can OOM or otherwise hit bugs. Elasticsearch index shards can be corrupted and need to be deleted. The cluster can enter a degraded state due to hosting issues which severely impacts performance (see above) and takes time to recover. When looked at from a very high level these problems aren't major issues if you don't rely on avoiding them to allow jobs to pass. Its OK to have a .01% error rate if those errors don't bubble back to job results. It is a problem when they do. * Maintainability: Upgrading tools like Elasticsearch typically means library updates, logstash updates (which require updating your rules), and kibana updates. The whole system needs to be kept in sync and your upgrades become far more involved than you expect. * Diversity of log data: If you intend on storing everything in Elasticsearch then you will need to have rules capable of processing everything. You also need to take care to ensure the data is accurate as well. It is easy to accidentally combine unrelated multiline logs for example if the input data is arbitrary. Ideally you would have timestamps for every line as well, but many tools don't provide this information. 
Very quickly the group running the CI system needs to become an expert in everything running in their CI system and this doesn't scale. * Access controls: Elasticsearch doesn't come with access controls out of the box; they are a paid add on. Opensearch does include them thankfully. This complicates being able to stream directly from running jobs into Elasticsearch as you essentially have to rely on firewall rules exposing your cluster. Even with Opensearch I'm not sure how you would safely construct jobs allowing them to write to the cluster without risking abuse or corruption. As a result I am very happy we didn't try to couple the Elasticsearch setup more closely to the CI system [3][4]. The choices we made to decouple Elasticsearch from the CI system were intentional. This choice has allowed us to: rotate indexed logs much more quickly than the CI system would prefer, index fewer logs overall than the CI system produces, handle upgrades out of band of the CI system, restrict access to the cluster to attempt to keep it secure, ignore the 0.01% index failures, rely on those who know the output data to help improve indexing of their logs, and at the same time still had something that was extremely useful when you needed to debug specific issues. Adding optional support for tools like Elasticsearch/Opensearch to Zuul may help some users, but in general I expect you will still have to keep two copies of the logs and users will refer to the longer term static storage by default via Zuul because Elasticsearch indexed logs won't contain everything (either they expire out or were never indexed in the first place). I don't mean to imply these are bad tools merely that they are not a silver bullet for log storage. They are extremely powerful tools, but make more sense decoupled from Zuul with teams that can curate the specific set of indexing rules for the specific sets of logs that they care about. [0] Full disclaimer it has been a while since I did this stuff as part of my day to day. That said I've been talking to people about Opensearch and it seems like a number of these concerns are still valid. [1] I'm going to say Elasticsearch a lot here because it is the tool that I've used for years. But note that Elasticsearch is not longer open source and any future work should probably target Opensearch. [2] Elasticsearch seems to do really well on dedicated hardware with large network connections between all the nodes. It is possible to run a very performant Elasticsearch, but you have to commit to it and I suspect many users are in the cloud where they have less control over this. [3] When we first started doing this I wrote the Jenkins gearman plugin to kick off jobs to trigger work in an asynchronous manner. At the same time another team at HP wrote a direct logstash reporter for Jenkins. Both worked well, but in their case the CI system was specific to their tools and they could keep it constrained to a size that worked for them as well as having all the necessary expertise on the various log files they cared about. [4] OpenDev is currently in a situation where it needs to drop the ElasticSearch setup as we no longer have the necessary help to continue to maintain it. Doing this if it was coupled to Zuul would be much more difficult and painful. 
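To make the trade-off concrete, the in-job variant proposed at the top of this thread would amount to roughly the following post-run playbook, run on the executor (most likely from a trusted base job) after logs have been collected. This is only a sketch under stated assumptions: the indexer endpoint and payload shape are placeholders, no such upload role exists in zuul-jobs today, and, as argued above, any slowness or failure in this task lands directly on the job's runtime and result.

    # Hypothetical post-run playbook: push the console log to an indexer.
    # The endpoint URL and payload shape are illustrative, not a real API.
    - hosts: localhost
      tasks:
        - name: Find the console log collected for this build
          ansible.builtin.find:
            paths: "{{ zuul.executor.log_root }}"
            patterns: "job-output.txt"
          register: console_logs

        - name: Submit each file to the indexing endpoint
          ansible.builtin.uri:
            url: "https://logstash.example.org:9999/"   # placeholder endpoint
            method: POST
            body_format: json
            body:
              build: "{{ zuul.build }}"
              job: "{{ zuul.job }}"
              project: "{{ zuul.project.canonical_name }}"
              content: "{{ lookup('file', item.path) }}"
          loop: "{{ console_logs.files }}"

Any retry logic, credentials, or per-file processing rules would have to live in this task as well, which is where the performance and reliability concerns described above come from.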
> > Regards, > -Tristan From tdecacqu at redhat.com Fri Nov 5 18:23:33 2021 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Fri, 05 Nov 2021 18:23:33 +0000 Subject: Indexing Zuul build logs In-Reply-To: <82d7ad2e-04e4-4f97-a5f3-713988b1fce9@www.fastmail.com> References: <87fssag61v.tristanC@fedora> <82d7ad2e-04e4-4f97-a5f3-713988b1fce9@www.fastmail.com> Message-ID: <87cznefom2.tristanC@fedora> On Fri, Nov 05, 2021 at 08:49 Clark Boylan wrote: > On Fri, Nov 5, 2021, at 5:06 AM, Tristan Cacqueray wrote: >> Hi, >> >> I would like to discuss the best practices for build logs processing. >> >> The goal is to improve the user experience when debugging a job that >> produces a lot of logs (e.g. >10MB) and to help discover failures that >> are transients and unrelated to a particular build. >> >> The existing options are (please reply with your own solutions): >> >> - opendev/puppet-log_processor : a gearman service that can be notified >> through a post job task to trigger build log indexing in logstash. This >> also provides a classify-log.crm function to annotate log with a score. >> >> - openstack/ci-log-processing : a work in progress service to replace >> the post job task and use the log_processor through the builds api. >> >> - opendev/log-classify : an import of my logreduce tool that runs as >> a post task to provide an html report directly along the build log. >> https://softwarefactory-project.io/cgit/logreduce/tree/README.rst >> (The initial opendev reviewes are still in open). >> >> - github.com/bmwcarit/zubbi : a service to index jobs and roles in >> elasticsearch. This is not exactly related to the goal, but it is >> worth mentioning it. >> >> I am wondering if Zuul shouldn't support elasticsearch directly, by >> providing a `upload-logstash` role in zuul-jobs, and by supporting >> browsing the elasticsearch api directly from zuul-web. This would remove >> the need for an extra service to do the indexing, and we wouldn't have >> to process and store the logs twice. >> >> Am I missing something, how do you process build logs? > > My personal experience [0] is that Elasticsearch [1] is an extremely powerful secondary index that provides useful insight to CI system users, but there are a number of issues that make it problematic as a primary index or storage for CI logs: > > * Performance: Indexing logs is not free. The cost involved here scales pretty directly with the size of the input data and the complexity of the processing rules. But you can also be impacted by load on the Elasticsearch cluster, the busier you get the slower your system will be as more jobs fight for indexing time and time to respond to user queries [2]. Zuul is built to do its best to avoid the exceptional cases where you need to deep dive into logs. Optimizing for these cases which should happen infrequently affects the performance of all jobs. > > * SIze/Volume: CI job logs compress extremely well until you put them into Elasticsearch. Back when OpenDev stored log files on an ext4 filesystem we managed about 30 days of compressed logs on 12TB of disk. Our 6TB Elasticsearch cluster can fill after 7 days and that is after removing all DEBUG level logs and only indexing specific files we know we care about. You can fill a lot of disks very quickly. It is not very efficient in this way and lends itself to being specific about what you index. > > * Reliability: There is a lot that can go wrong when indexing hundreds of gigabytes a day. Logstash processes can OOM or otherwise hit bugs. 
Elasticsearch index shards can be corrupted and need to be deleted. The cluster can enter a degraded state due to hosting issues which severely impacts performance (see above) and takes time to recover. When looked at from a very high level these problems aren't major issues if you don't rely on avoiding them to allow jobs to pass. Its OK to have a .01% error rate if those errors don't bubble back to job results. It is a problem when they do. > > * Maintainability: Upgrading tools like Elasticsearch typically means library updates, logstash updates (which require updating your rules), and kibana updates. The whole system needs to be kept in sync and your upgrades become far more involved than you expect. > > * Diversity of log data: If you intend on storing everything in Elasticsearch then you will need to have rules capable of processing everything. You also need to take care to ensure the data is accurate as well. It is easy to accidentally combine unrelated multiline logs for example if the input data is arbitrary. Ideally you would have timestamps for every line as well, but many tools don't provide this information. Very quickly the group running the CI system needs to become an expert in everything running in their CI system and this doesn't scale. > > * Access controls: Elasticsearch doesn't come with access controls out of the box; they are a paid add on. Opensearch does include them thankfully. This complicates being able to stream directly from running jobs into Elasticsearch as you essentially have to rely on firewall rules exposing your cluster. Even with Opensearch I'm not sure how you would safely construct jobs allowing them to write to the cluster without risking abuse or corruption. > > As a result I am very happy we didn't try to couple the Elasticsearch setup more closely to the CI system [3][4]. The choices we made to decouple Elasticsearch from the CI system were intentional. This choice has allowed us to: rotate indexed logs much more quickly than the CI system would prefer, index fewer logs overall than the CI system produces, handle upgrades out of band of the CI system, restrict access to the cluster to attempt to keep it secure, ignore the 0.01% index failures, rely on those who know the output data to help improve indexing of their logs, and at the same time still had something that was extremely useful when you needed to debug specific issues. > > Adding optional support for tools like Elasticsearch/Opensearch to Zuul may help some users, but in general I expect you will still have to keep two copies of the logs and users will refer to the longer term static storage by default via Zuul because Elasticsearch indexed logs won't contain everything (either they expire out or were never indexed in the first place). > > I don't mean to imply these are bad tools merely that they are not a silver bullet for log storage. They are extremely powerful tools, but make more sense decoupled from Zuul with teams that can curate the specific set of indexing rules for the specific sets of logs that they care about. > > [0] Full disclaimer it has been a while since I did this stuff as part of my day to day. That said I've been talking to people about Opensearch and it seems like a number of these concerns are still valid. > [1] I'm going to say Elasticsearch a lot here because it is the tool that I've used for years. But note that Elasticsearch is not longer open source and any future work should probably target Opensearch. 
> [2] Elasticsearch seems to do really well on dedicated hardware with large network connections between all the nodes. It is possible to run a very performant Elasticsearch, but you have to commit to it and I suspect many users are in the cloud where they have less control over this. > [3] When we first started doing this I wrote the Jenkins gearman plugin to kick off jobs to trigger work in an asynchronous manner. At the same time another team at HP wrote a direct logstash reporter for Jenkins. Both worked well, but in their case the CI system was specific to their tools and they could keep it constrained to a size that worked for them as well as having all the necessary expertise on the various log files they cared about. > [4] OpenDev is currently in a situation where it needs to drop the ElasticSearch setup as we no longer have the necessary help to continue to maintain it. Doing this if it was coupled to Zuul would be much more difficult and painful. > Thank you very much for taking the time to write down these concerns. I still think it's worth investaging how to avoid duplicating the logs when the user only need elasticsearch. Though we could also support an hybrid solution, and if I understand correctly, it seems like we could use an extra upload role in zuul-jobs to replace the existing puppet-log_processor service. Would that works for the opendev setup? Regards, -Tristan -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 515 bytes Desc: not available URL: From fungi at yuggoth.org Fri Nov 5 18:36:55 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 5 Nov 2021 18:36:55 +0000 Subject: Indexing Zuul build logs In-Reply-To: <87cznefom2.tristanC@fedora> References: <87fssag61v.tristanC@fedora> <82d7ad2e-04e4-4f97-a5f3-713988b1fce9@www.fastmail.com> <87cznefom2.tristanC@fedora> Message-ID: <20211105183654.vicldwuck43vxb7g@yuggoth.org> On 2021-11-05 18:23:33 +0000 (+0000), Tristan Cacqueray wrote: [...] > Thank you very much for taking the time to write down these > concerns. I still think it's worth investaging how to avoid > duplicating the logs when the user only need elasticsearch. Though > we could also support an hybrid solution, and if I understand > correctly, it seems like we could use an extra upload role in > zuul-jobs to replace the existing puppet-log_processor service. > Would that works for the opendev setup? The question is probably mis-directed. We don't intend to use ElasticSearch/OpenSearch in the OpenDev collaboratory for a number of reasons, but chief among those is that we don't have people to maintain those additional services nor the integration with them. Instead, some volunteers are going to build and run something on their own, but from our (OpenDev) perspective it can and must be entirely separate from our services and configuration. We definitely wouldn't want a second log upload role added to all jobs, since that makes the jobs take longer to complete, increases the chances those jobs will fail, but most importantly means that our sysadmins end up being tasked with troubleshooting things when that integration fails. Instead we've suggested anyone interested in running a service to index OpenDev's Zuul job logs should query the Zuul API to find out what builds have completed and where their logs can be obtained, so that the burden of support can be completely removed. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From cboylan at sapwetik.org Fri Nov 5 18:50:39 2021 From: cboylan at sapwetik.org (Clark Boylan) Date: Fri, 05 Nov 2021 11:50:39 -0700 Subject: Indexing Zuul build logs In-Reply-To: <87cznefom2.tristanC@fedora> References: <87fssag61v.tristanC@fedora> <82d7ad2e-04e4-4f97-a5f3-713988b1fce9@www.fastmail.com> <87cznefom2.tristanC@fedora> Message-ID: <9647345b-e7a2-4f91-a19b-d8c7d1d6b358@www.fastmail.com> On Fri, Nov 5, 2021, at 11:23 AM, Tristan Cacqueray wrote: > On Fri, Nov 05, 2021 at 08:49 Clark Boylan wrote: >> On Fri, Nov 5, 2021, at 5:06 AM, Tristan Cacqueray wrote: >>> Hi, >>> >>> I would like to discuss the best practices for build logs processing. >>> >>> The goal is to improve the user experience when debugging a job that >>> produces a lot of logs (e.g. >10MB) and to help discover failures that >>> are transients and unrelated to a particular build. >>> >>> The existing options are (please reply with your own solutions): >>> >>> - opendev/puppet-log_processor : a gearman service that can be notified >>> through a post job task to trigger build log indexing in logstash. This >>> also provides a classify-log.crm function to annotate log with a score. >>> >>> - openstack/ci-log-processing : a work in progress service to replace >>> the post job task and use the log_processor through the builds api. >>> >>> - opendev/log-classify : an import of my logreduce tool that runs as >>> a post task to provide an html report directly along the build log. >>> https://softwarefactory-project.io/cgit/logreduce/tree/README.rst >>> (The initial opendev reviewes are still in open). >>> >>> - github.com/bmwcarit/zubbi : a service to index jobs and roles in >>> elasticsearch. This is not exactly related to the goal, but it is >>> worth mentioning it. >>> >>> I am wondering if Zuul shouldn't support elasticsearch directly, by >>> providing a `upload-logstash` role in zuul-jobs, and by supporting >>> browsing the elasticsearch api directly from zuul-web. This would remove >>> the need for an extra service to do the indexing, and we wouldn't have >>> to process and store the logs twice. >>> >>> Am I missing something, how do you process build logs? >> >> My personal experience [0] is that Elasticsearch [1] is an extremely powerful secondary index that provides useful insight to CI system users, but there are a number of issues that make it problematic as a primary index or storage for CI logs: >> >> * Performance: Indexing logs is not free. The cost involved here scales pretty directly with the size of the input data and the complexity of the processing rules. But you can also be impacted by load on the Elasticsearch cluster, the busier you get the slower your system will be as more jobs fight for indexing time and time to respond to user queries [2]. Zuul is built to do its best to avoid the exceptional cases where you need to deep dive into logs. Optimizing for these cases which should happen infrequently affects the performance of all jobs. >> >> * SIze/Volume: CI job logs compress extremely well until you put them into Elasticsearch. Back when OpenDev stored log files on an ext4 filesystem we managed about 30 days of compressed logs on 12TB of disk. Our 6TB Elasticsearch cluster can fill after 7 days and that is after removing all DEBUG level logs and only indexing specific files we know we care about. You can fill a lot of disks very quickly. 
It is not very efficient in this way and lends itself to being specific about what you index. >> >> * Reliability: There is a lot that can go wrong when indexing hundreds of gigabytes a day. Logstash processes can OOM or otherwise hit bugs. Elasticsearch index shards can be corrupted and need to be deleted. The cluster can enter a degraded state due to hosting issues which severely impacts performance (see above) and takes time to recover. When looked at from a very high level these problems aren't major issues if you don't rely on avoiding them to allow jobs to pass. Its OK to have a .01% error rate if those errors don't bubble back to job results. It is a problem when they do. >> >> * Maintainability: Upgrading tools like Elasticsearch typically means library updates, logstash updates (which require updating your rules), and kibana updates. The whole system needs to be kept in sync and your upgrades become far more involved than you expect. >> >> * Diversity of log data: If you intend on storing everything in Elasticsearch then you will need to have rules capable of processing everything. You also need to take care to ensure the data is accurate as well. It is easy to accidentally combine unrelated multiline logs for example if the input data is arbitrary. Ideally you would have timestamps for every line as well, but many tools don't provide this information. Very quickly the group running the CI system needs to become an expert in everything running in their CI system and this doesn't scale. >> >> * Access controls: Elasticsearch doesn't come with access controls out of the box; they are a paid add on. Opensearch does include them thankfully. This complicates being able to stream directly from running jobs into Elasticsearch as you essentially have to rely on firewall rules exposing your cluster. Even with Opensearch I'm not sure how you would safely construct jobs allowing them to write to the cluster without risking abuse or corruption. >> >> As a result I am very happy we didn't try to couple the Elasticsearch setup more closely to the CI system [3][4]. The choices we made to decouple Elasticsearch from the CI system were intentional. This choice has allowed us to: rotate indexed logs much more quickly than the CI system would prefer, index fewer logs overall than the CI system produces, handle upgrades out of band of the CI system, restrict access to the cluster to attempt to keep it secure, ignore the 0.01% index failures, rely on those who know the output data to help improve indexing of their logs, and at the same time still had something that was extremely useful when you needed to debug specific issues. >> >> Adding optional support for tools like Elasticsearch/Opensearch to Zuul may help some users, but in general I expect you will still have to keep two copies of the logs and users will refer to the longer term static storage by default via Zuul because Elasticsearch indexed logs won't contain everything (either they expire out or were never indexed in the first place). >> >> I don't mean to imply these are bad tools merely that they are not a silver bullet for log storage. They are extremely powerful tools, but make more sense decoupled from Zuul with teams that can curate the specific set of indexing rules for the specific sets of logs that they care about. >> >> [0] Full disclaimer it has been a while since I did this stuff as part of my day to day. That said I've been talking to people about Opensearch and it seems like a number of these concerns are still valid. 
>> [1] I'm going to say Elasticsearch a lot here because it is the tool that I've used for years. But note that Elasticsearch is not longer open source and any future work should probably target Opensearch. >> [2] Elasticsearch seems to do really well on dedicated hardware with large network connections between all the nodes. It is possible to run a very performant Elasticsearch, but you have to commit to it and I suspect many users are in the cloud where they have less control over this. >> [3] When we first started doing this I wrote the Jenkins gearman plugin to kick off jobs to trigger work in an asynchronous manner. At the same time another team at HP wrote a direct logstash reporter for Jenkins. Both worked well, but in their case the CI system was specific to their tools and they could keep it constrained to a size that worked for them as well as having all the necessary expertise on the various log files they cared about. >> [4] OpenDev is currently in a situation where it needs to drop the ElasticSearch setup as we no longer have the necessary help to continue to maintain it. Doing this if it was coupled to Zuul would be much more difficult and painful. >> > > Thank you very much for taking the time to write down these concerns. > I still think it's worth investaging how to avoid duplicating the logs > when the user only need elasticsearch. Though we could also support an > hybrid solution, and if I understand correctly, it seems like we could > use an extra upload role in zuul-jobs to replace the existing > puppet-log_processor service. Would that works for the opendev setup? No, this won't work for OpenDev for a couple of the reasons noted above. We don't have anyone to maintain an Elasticsearch cluster. We have asked for help and not received it. Instead OpenStack is going to do it themselves. I think this aligns better with the needs of such a system as you have to understand the inputs in order to index them well. On a more general level you would need to figure out how to avoid performance costs to job runtime if you remove the asynchronous processing service as well as finding ways to make indexing more resilient to errors (or ignore them I guess). You also need to take care to avoid over exposing your Elasticsearch cluster to the jobs since Elasticsearch doesn't have authentication or access controls. With Opensearch you can probably push logs from the executor in a trusted playbook and authenticate using their authentication systems (though this only addresses authentication if you're talking to Opensearch, it doesn't handle any authentication for log processing in front of the cluster). And finally, you'd need to somehow maintain processing rules for all of the various log files that a job can produce. I guess this is why I've shifted more to thinking this type of tooling makes more sense being close to the people producing the job logs and not in the CI system. Zuul can have appropriate hook points that people can tie into for the exact data they want to index. They can ensure they are able to index those specific files reliably and manage firewalls or authentication in a simpler way. If you bake too much into Zuul itself it will become a major headache. Definitely solvable with enough time and effort, but remember that a lot of that time and effort is happening at an operational level not just as a Zuul developer. 
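The decoupled pattern described here and in Jeremy's reply needs only the public builds API, so an out-of-band indexer can be sketched as an ordinary playbook that runs outside of any job. The tenant name, API endpoint, and the "hand it to an indexer" step are assumptions for illustration; the field names (uuid, job_name, result, log_url) come from Zuul's builds endpoint.

    # Out-of-band sketch: poll Zuul for finished builds, then index their logs.
    - hosts: localhost
      vars:
        zuul_api: "https://zuul.example.org/api/tenant/example-tenant"   # assumed deployment
      tasks:
        - name: List recently completed builds
          ansible.builtin.uri:
            url: "{{ zuul_api }}/builds?limit=50"
            return_content: true
          register: builds

        - name: Hand each build's log location to the indexer of your choice
          ansible.builtin.debug:
            msg: "would fetch and index {{ item.job_name }} ({{ item.result }}) from {{ item.log_url }}"
          loop: "{{ builds.json }}"
          loop_control:
            label: "{{ item.uuid }}"

A real service would keep track of which builds it has already seen and fetch from log_url only the files it has processing rules for, which is essentially the approach taken by the openstack/ci-log-processing work mentioned at the start of the thread.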
> > Regards, > -Tristan From mhuin at redhat.com Fri Nov 5 19:02:51 2021 From: mhuin at redhat.com (Matthieu Huin) Date: Fri, 5 Nov 2021 20:02:51 +0100 Subject: zuul admin GUI - what now? In-Reply-To: References: <68B1B9BA-3F56-46E3-8F64-1B80F2ACD643@bmw.de> Message-ID: Hello, As a quick follow-up from this discussion, I have set up a demo site so that people can play around with the admin features without having to deploy the whole stack of services or configuring tenants and projects the right way (although I do encourage zuul admins and users alike to get familiar with the configuration changes the admin GU features bring). Instructions on how to use the demo site can be found at its root, at https://zuuldemo.strangerpings.eu . I'll keep it up to date with the tip of the patch chain, so expect this to be wiped/redeployed often. I hope it'll help people get a better idea of what is coming, and give feedback on the features/UX. Cheers, MHU On Mon, Oct 25, 2021 at 6:03 PM Matthieu Huin wrote: > I realize there are two things to consider: > - automated validation and testing > - manual deployment and testing > > There's definitely work to do on the first aspect, with good ideas so far. > I was mainly concerned about the second aspect, considering that we not > only need to validate the admin features but also how they're implemented > UX-wise. Generating screenshots as artifacts during testing would be a good > start to help with that, I guess. > > On Fri, Oct 22, 2021 at 8:41 AM wrote: > >> Hi all, >> >> >> >> what could work for changes like this is a mock API. I started with >> something like this [1] a long time ago but never followed up on it. >> >> >> >> This would basically allow us to run (and also develop) the frontend >> against mocked data without the necessity to wait for the required >> >> backend changes to be merged and deployed. We could even experiment with >> new UI features without implementing the backend. >> >> >> >> I?m not exactly sure how that would look like for the authentication >> endpoint but this should also be possible. Maybe it?s worth to spend >> >> some more time on this. To use that in the CI system we could define a >> job that starts a JSON server and runs the frontend against this server. >> >> >> > > If an extra service is going to be run, I think I'd rather go the extra > mile and set up a keycloak server. I'm always a bit wary of mocking stuff > because then you're testing against "your" interpretation of how a service > might behave. If for whatever reason the mock differs from the real thing, > you won't catch the problem until you deploy against the real deal. > > >> This might also be a good improvement for the ?fake API responses? [2] we >> currently have. >> >> >> >> [1]: https://review.opendev.org/c/zuul/zuul/+/760933 >> >> [2]: >> https://zuul-ci.org/docs/zuul/reference/developer/javascript.html#development >> >> >> >> Felix >> >> >> >> *From: *Matthieu Huin >> *Date: *Thursday, 21. October 2021 at 17:54 >> *To: *Zuul Discussion List >> *Subject: *zuul admin GUI - what now? >> >> >> >> (Bringing this to the ML as the BOAF session ran out of time) >> >> >> >> Hey folks, >> >> >> >> I'd like some guidance and opinions on how to go forward with the Zuul >> Admin GUI changes [0]. My main concern is how to ensure reviews can be done >> in the easiest way for everyone. >> >> >> >> I took some steps in that direction: >> >> >> >> * I have provided a test docker compose based on the demo compose in the >> documentation [1]. 
This compose lives on top of the patch chain, so all the >> changes can be tested at once. It is also possible to cherry-pick the >> change on top of any specific patch you'd like to test in particular. It is >> not perfect, but it does set up authentication via OpenID Connect for zuul >> and gerrit, allowing to experiment further with the new features. Is there >> anything I should do to improve this ? Or should I leave this as is, as it >> is not intended to get merged (it's meant for demo purposes only) ? >> >> >> >> * I have reorganized the patch chain so that the python and the JS >> changes are strictly separated. Furthermore, the changes to the python >> components are at the beginning of the patch chain. I think it'd be great >> to get these reviewed and merged ASAP (they're not big changes), because >> one big downside of the compose mentioned above is that it requires >> rebuilding the zuul containers locally before it can run. Once these >> changes are integrated, the compose can run on the latest published >> containers. >> >> >> >> * I've added various screenshots and video captures to the reviews, so >> that reviewers can get an idea of what the changes would look like without >> having to deploy. >> >> >> >> * I am currently working on deploying a public test instance based on the >> demo compose. >> >> >> >> Is there anything else I can do to get this moving forward ? >> >> >> >> Cheers, >> >> >> >> MHU >> >> >> >> [0] >> https://review.opendev.org/q/topic:%22GUI_admin%22+(status:open%20OR%20status:merged >> >> [1] https://review.opendev.org/c/zuul/zuul/+/769943 >> >> -- >> >> *Matthieu Huin * >> >> Senior Software Developer >> >> Red Hat >> >> [image: Image removed by sender.] >> >> >> _______________________________________________ >> Zuul-discuss mailing list >> Zuul-discuss at lists.zuul-ci.org >> http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss >> > > > -- > > Matthieu Huin > > Senior Software Developer > > Red Hat > > > -- Matthieu Huin Senior Software Developer Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From asmundo at gmail.com Thu Nov 18 15:12:16 2021 From: asmundo at gmail.com (=?UTF-8?B?w4VzbXVuZCDDmHN0dm9sZA==?=) Date: Thu, 18 Nov 2021 16:12:16 +0100 Subject: dynamic job list Message-ID: Hi All, We are looking into optimizing what jobs need to run. After building we know what artifacts that did change. Knowing this we know what tests that need to be executed. Is there a way to feed that back to the zuul buildset? A change only changes artifacts used by testx and testy compared to what has already been tested. So the static project: project: check: jobs: - build Is at runtime converted into: pipeline- project: check: jobs: - build - testx - testz Cheers, ?smund -------------- next part -------------- An HTML attachment was scrubbed... URL: From jim at acmegating.com Thu Nov 18 16:11:28 2021 From: jim at acmegating.com (James E. Blair) Date: Thu, 18 Nov 2021 08:11:28 -0800 Subject: dynamic job list In-Reply-To: (=?iso-8859-1?Q?=22=C5smund_=D8stvold=22's?= message of "Thu, 18 Nov 2021 16:12:16 +0100") References: Message-ID: <8735ntcukv.fsf@fuligin> ?smund ?stvold writes: > Hi All, > > We are looking into optimizing what jobs need to run. After building we > know what artifacts that did change. Knowing this we know what tests that > need to be executed. Is there a way to feed that back to the zuul buildset? 
> > A change only changes artifacts used by testx and testy compared to what > has already been tested. So the static project: > > project: > check: > jobs: > - build > > Is at runtime converted into: > > pipeline- project: > check: > jobs: > - build > - testx > - testz > > Cheers, > ?smund Yes. You can make all the test jobs depend on build, then have build tell Zuul to only run the applicable one(s). See: https://zuul-ci.org/docs/zuul/reference/jobs.html#skipping-dependent-jobs -Jim From marios at redhat.com Tue Nov 23 16:47:16 2021 From: marios at redhat.com (Marios Andreou) Date: Tue, 23 Nov 2021 18:47:16 +0200 Subject: install and use ansible collections in zuul v3 ansible native job? In-Reply-To: References: Message-ID: Hello zuul o/ bumping this thread to ask if there is any update on ansible-galaxy collections support - the openstack/tripleo-ci team is trying (again) to invoke a galaxy collection from a zuul play [1] and it is failing to find the module [2] like: 2021-11-23 13:58:21.389877 | primary | "reason": "couldn't resolve module/action 'tripleo.repos.get_hash'. This often indicates a misspelling, missing collection, or incorrect module path.\n\nThe error appears to be in '/var/lib/zuul/builds/7aaee0c0969a4066b2ca9b59728e74b5/untrusted/project_5/opendev.org/openstack/tripleo-quickstart/roles/repo-setup/tasks/tripleo-get-hash-set-fact.yml': line 9, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n tgh_dlrn_url: \"{{ dlrn_baseurl|urlsplit('scheme') }}://{{ dlrn_baseurl|urlsplit('hostname') }}\"\n - name: 'tripleo-get-hash {{ search_tag }} for {{ tgh_release }} on {{ tgh_os_version }} from {{ tgh_dlrn_url }}'\n ^ here\nWe could be wrong, but this one looks like it might be an issue with\nmissing quotes. Always quote template expression brackets when they\nstart a value. For instance:\n\n with_items:\n - {{ foo }}\n\nShould be written as:\n\n with_items:\n - \"{{ foo }}\"\n" even though we're setting the ANSIBLE_COLLECTIONS_PATHS env vars etc in [1] before we include the role which uses the ansible collection. So, are we just doing something wrong, or is support here still not ready (maybe some combination of both ;)) thanks for your thoughts or any info someone can give us about that. regards, marios [1] https://review.opendev.org/c/openstack/tripleo-ci/+/818222/28/roles/tripleo-build-jobs-repos/tasks/main.yaml [2] https://logserver.rdoproject.org/17/36817/3/check/tripleo-stream9-development-buildimage-overcloud-hardened-uefi-full-master/7aaee0c/job-output.txt On Thu, Jun 25, 2020 at 5:37 PM Marios Andreou wrote: > > hi folks > > long story short: can we install and use ansible collections in a zuul v3/ansible native job? > > In the tripleo-ci squad we are trying to use the tripleo_overcloud_image_build role from the tripleo-operator-ansible collection [1]. > > We are already using other roles from this collection - e.g. there [2]. This is possible because in that case we have zuul calling bash calling ansible... which means we get to configure collections in the bash bit setting ANSIBLE_COLLECTIONS_PATHS [3] before invoking ansible. > > For the case I'm having trouble with - what i refer to as ansible native - here https://review.opendev.org/#/c/733818/33/roles/oooci-build-images/tasks/main.yaml at 74 we have a zuul job playbook directly trying to install and use a collection. > > Currently it fails because instead of collections: (doesn't work " 'collections' is not a valid attribute? 
[4] ) I am trying to use include_role but then it can't find the collection's modules "ERROR! no action detected in task" [5]. I tried to set it in ansible.cfg but that won't work and anyway it is surely the wrong approach. > > The one thing left to try is to explicitly require ansible 2.9 for the job - but wanted to reach out and see if anyone else has any problems trying to install and use ansible collections (with modules importantly) as part of a zuul v3/ansible native job. > > thanks for reading and for any comments, > > marios > > [1] https://github.com/openstack/tripleo-operator-ansible/tree/master/roles/tripleo_overcloud_image_build > [2] https://opendev.org/openstack/tripleo-quickstart-extras/src/commit/55e74bb8704d61081a419fc1c10df1bde73941d1/roles/undercloud-deploy/tasks/main.yml#L50-L53 > [3] https://opendev.org/openstack/tripleo-quickstart/src/commit/299131bcc6dca86a69436455203e5e7ae6bcedd4/quickstart.sh#L550 > [4] https://23fd78c0f6b01b9c4209-59c294b8f46eb75cda2bd004e2a1c97f.ssl.cf1.rackcdn.com/733818/1/check/tripleo-buildimage-overcloud-full-centos-8/d4674cc/job-output.txt > [5] https://2063b7e2bb67f86f8ec5-87fe67c942382c6ee875c514bffa4f21.ssl.cf2.rackcdn.com/733818/9/check/tripleo-buildimage-overcloud-full-centos-8/76308ee/job-output.txt From mhuin at redhat.com Wed Nov 24 17:50:06 2021 From: mhuin at redhat.com (Matthieu Huin) Date: Wed, 24 Nov 2021 18:50:06 +0100 Subject: zuul admin GUI - what now? In-Reply-To: References: <68B1B9BA-3F56-46E3-8F64-1B80F2ACD643@bmw.de> Message-ID: Hello, The stack is now in a state where it is ready for show time (and reviews): https://review.opendev.org/q/topic:%22GUI_admin%22+(status:open%20OR%20status:merged) A demo site (Gerrit + Zuul + Keycloak SSO) is available to play with at https://zuuldemo.strangerpings.eu, with a few instructions to get started/how to perform privileged actions. Let me know what you think! Cheers, MHU On Fri, Nov 5, 2021 at 8:02 PM Matthieu Huin wrote: > Hello, > > As a quick follow-up from this discussion, I have set up a demo site so > that people can play around with the admin features without having to > deploy the whole stack of services or configuring tenants and projects the > right way (although I do encourage zuul admins and users alike to get > familiar with the configuration changes the admin GU features bring). > > Instructions on how to use the demo site can be found at its root, at > https://zuuldemo.strangerpings.eu . > > I'll keep it up to date with the tip of the patch chain, so expect this to > be wiped/redeployed often. > > I hope it'll help people get a better idea of what is coming, and give > feedback on the features/UX. > > Cheers, > MHU > > On Mon, Oct 25, 2021 at 6:03 PM Matthieu Huin wrote: > >> I realize there are two things to consider: >> - automated validation and testing >> - manual deployment and testing >> >> There's definitely work to do on the first aspect, with good ideas so >> far. I was mainly concerned about the second aspect, considering that we >> not only need to validate the admin features but also how they're >> implemented UX-wise. Generating screenshots as artifacts during testing >> would be a good start to help with that, I guess. >> >> On Fri, Oct 22, 2021 at 8:41 AM wrote: >> >>> Hi all, >>> >>> >>> >>> what could work for changes like this is a mock API. I started with >>> something like this [1] a long time ago but never followed up on it. 
>>> >>> >>> >>> This would basically allow us to run (and also develop) the frontend >>> against mocked data without the necessity to wait for the required >>> >>> backend changes to be merged and deployed. We could even experiment with >>> new UI features without implementing the backend. >>> >>> >>> >>> I?m not exactly sure how that would look like for the authentication >>> endpoint but this should also be possible. Maybe it?s worth to spend >>> >>> some more time on this. To use that in the CI system we could define a >>> job that starts a JSON server and runs the frontend against this server. >>> >>> >>> >> >> If an extra service is going to be run, I think I'd rather go the extra >> mile and set up a keycloak server. I'm always a bit wary of mocking stuff >> because then you're testing against "your" interpretation of how a service >> might behave. If for whatever reason the mock differs from the real thing, >> you won't catch the problem until you deploy against the real deal. >> >> >>> This might also be a good improvement for the ?fake API responses? [2] >>> we currently have. >>> >>> >>> >>> [1]: https://review.opendev.org/c/zuul/zuul/+/760933 >>> >>> [2]: >>> https://zuul-ci.org/docs/zuul/reference/developer/javascript.html#development >>> >>> >>> >>> Felix >>> >>> >>> >>> *From: *Matthieu Huin >>> *Date: *Thursday, 21. October 2021 at 17:54 >>> *To: *Zuul Discussion List >>> *Subject: *zuul admin GUI - what now? >>> >>> >>> >>> (Bringing this to the ML as the BOAF session ran out of time) >>> >>> >>> >>> Hey folks, >>> >>> >>> >>> I'd like some guidance and opinions on how to go forward with the Zuul >>> Admin GUI changes [0]. My main concern is how to ensure reviews can be done >>> in the easiest way for everyone. >>> >>> >>> >>> I took some steps in that direction: >>> >>> >>> >>> * I have provided a test docker compose based on the demo compose in the >>> documentation [1]. This compose lives on top of the patch chain, so all the >>> changes can be tested at once. It is also possible to cherry-pick the >>> change on top of any specific patch you'd like to test in particular. It is >>> not perfect, but it does set up authentication via OpenID Connect for zuul >>> and gerrit, allowing to experiment further with the new features. Is there >>> anything I should do to improve this ? Or should I leave this as is, as it >>> is not intended to get merged (it's meant for demo purposes only) ? >>> >>> >>> >>> * I have reorganized the patch chain so that the python and the JS >>> changes are strictly separated. Furthermore, the changes to the python >>> components are at the beginning of the patch chain. I think it'd be great >>> to get these reviewed and merged ASAP (they're not big changes), because >>> one big downside of the compose mentioned above is that it requires >>> rebuilding the zuul containers locally before it can run. Once these >>> changes are integrated, the compose can run on the latest published >>> containers. >>> >>> >>> >>> * I've added various screenshots and video captures to the reviews, so >>> that reviewers can get an idea of what the changes would look like without >>> having to deploy. >>> >>> >>> >>> * I am currently working on deploying a public test instance based on >>> the demo compose. >>> >>> >>> >>> Is there anything else I can do to get this moving forward ? 
>>> >>> >>> >>> Cheers, >>> >>> >>> >>> MHU >>> >>> >>> >>> [0] >>> https://review.opendev.org/q/topic:%22GUI_admin%22+(status:open%20OR%20status:merged >>> >>> [1] https://review.opendev.org/c/zuul/zuul/+/769943 >>> >>> -- >>> >>> *Matthieu Huin * >>> >>> Senior Software Developer >>> >>> Red Hat >>> >>> [image: Image removed by sender.] >>> >>> >>> _______________________________________________ >>> Zuul-discuss mailing list >>> Zuul-discuss at lists.zuul-ci.org >>> http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss >>> >> >> >> -- >> >> Matthieu Huin >> >> Senior Software Developer >> >> Red Hat >> >> >> > > > -- > > Matthieu Huin > > Senior Software Developer > > Red Hat > > > -- Matthieu Huin Senior Software Developer Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: