Hi,
I would like to discuss the best practices for build logs processing.
The goal is to improve the user experience when debugging a job that produces a lot of logs (e.g. >10MB) and to help discover failures that are transient and unrelated to a particular build.
The existing options are (please reply with your own solutions):
- opendev/puppet-log_processor : a gearman service that can be notified through a post job task to trigger build log indexing in logstash. This also provides a classify-log.crm function to annotate logs with a score.
- openstack/ci-log-processing : a work-in-progress service to replace the post job task and use the log_processor through the builds api.
- opendev/log-classify : an import of my logreduce tool that runs as a post task to provide an html report directly alongside the build log. https://softwarefactory-project.io/cgit/logreduce/tree/README.rst (The initial opendev reviews are still open.)
- github.com/bmwcarit/zubbi : a service to index jobs and roles in elasticsearch. This is not exactly related to the goal, but it is worth mentioning.
I am wondering if Zuul shouldn't support elasticsearch directly, by providing an `upload-logstash` role in zuul-jobs, and by supporting browsing the elasticsearch api directly from zuul-web. This would remove the need for an extra service to do the indexing, and we wouldn't have to process and store the logs twice.
Am I missing something? How do you process build logs?
Regards, -Tristan
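For illustration only, here is a minimal sketch of what the upload step Tristan proposes could do: push each line of a build log into an Elasticsearch/Opensearch index through the `_bulk` API. The endpoint, index name, and document fields below are assumptions, not an existing zuul-jobs role.

```python
import json
import requests

ES_URL = "https://elasticsearch.example.com:9200"  # assumed cluster endpoint
INDEX = "zuul-build-logs"                           # assumed index name


def index_build_log(build_uuid, job_name, log_path):
    """Push one build log file to Elasticsearch/Opensearch, one document
    per line, using the _bulk API (newline-delimited JSON)."""
    lines = []
    with open(log_path, errors="replace") as log_file:
        for line_number, line in enumerate(log_file, start=1):
            lines.append(json.dumps({"index": {"_index": INDEX}}))
            lines.append(json.dumps({
                "build_uuid": build_uuid,
                "job_name": job_name,
                "line_number": line_number,
                "message": line.rstrip("\n"),
            }))
    response = requests.post(
        f"{ES_URL}/_bulk",
        data=("\n".join(lines) + "\n").encode("utf-8"),
        headers={"Content-Type": "application/x-ndjson"},
        timeout=60,
    )
    response.raise_for_status()
    return response.json()


# e.g. index_build_log("some-build-uuid", "tox-py38", "job-output.txt")
```

A role like this would run in a post playbook after log collection; the cost and access-control implications of doing so are discussed in the replies below.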
On Fri, Nov 5, 2021, at 5:06 AM, Tristan Cacqueray wrote:
Hi,
I would like to discuss the best practices for build logs processing.
The goal is to improve the user experience when debugging a job that produces a lot of logs (e.g. >10MB) and to help discover failures that are transient and unrelated to a particular build.
The existing options are (please reply with your own solutions):
- opendev/puppet-log_processor : a gearman service that can be notified through a post job task to trigger build log indexing in logstash. This also provides a classify-log.crm function to annotate logs with a score.
- openstack/ci-log-processing : a work-in-progress service to replace the post job task and use the log_processor through the builds api.
- opendev/log-classify : an import of my logreduce tool that runs as a post task to provide an html report directly alongside the build log. https://softwarefactory-project.io/cgit/logreduce/tree/README.rst (The initial opendev reviews are still open.)
- github.com/bmwcarit/zubbi : a service to index jobs and roles in elasticsearch. This is not exactly related to the goal, but it is worth mentioning.
I am wondering if Zuul shouldn't support elasticsearch directly, by providing an `upload-logstash` role in zuul-jobs, and by supporting browsing the elasticsearch api directly from zuul-web. This would remove the need for an extra service to do the indexing, and we wouldn't have to process and store the logs twice.
Am I missing something? How do you process build logs?
My personal experience [0] is that Elasticsearch [1] is an extremely powerful secondary index that provides useful insight to CI system users, but there are a number of issues that make it problematic as a primary index or storage for CI logs:
* Performance: Indexing logs is not free. The cost involved here scales pretty directly with the size of the input data and the complexity of the processing rules. But you can also be impacted by load on the Elasticsearch cluster, the busier you get the slower your system will be as more jobs fight for indexing time and time to respond to user queries [2]. Zuul is built to do its best to avoid the exceptional cases where you need to deep dive into logs. Optimizing for these cases which should happen infrequently affects the performance of all jobs.
* Size/Volume: CI job logs compress extremely well until you put them into Elasticsearch. Back when OpenDev stored log files on an ext4 filesystem we managed about 30 days of compressed logs on 12TB of disk. Our 6TB Elasticsearch cluster can fill after 7 days and that is after removing all DEBUG level logs and only indexing specific files we know we care about. You can fill a lot of disks very quickly. It is not very efficient in this way and lends itself to being specific about what you index.
* Reliability: There is a lot that can go wrong when indexing hundreds of gigabytes a day. Logstash processes can OOM or otherwise hit bugs. Elasticsearch index shards can be corrupted and need to be deleted. The cluster can enter a degraded state due to hosting issues which severely impacts performance (see above) and takes time to recover. When looked at from a very high level these problems aren't major issues if you don't rely on avoiding them to allow jobs to pass. It's OK to have a 0.01% error rate if those errors don't bubble back to job results. It is a problem when they do.
* Maintainability: Upgrading tools like Elasticsearch typically means library updates, logstash updates (which require updating your rules), and kibana updates. The whole system needs to be kept in sync and your upgrades become far more involved than you expect.
* Diversity of log data: If you intend on storing everything in Elasticsearch then you will need to have rules capable of processing everything. You also need to take care to ensure the data is accurate as well. It is easy to accidentally combine unrelated multiline logs for example if the input data is arbitrary (see the sketch after this message). Ideally you would have timestamps for every line as well, but many tools don't provide this information. Very quickly the group running the CI system needs to become an expert in everything running in their CI system and this doesn't scale.
* Access controls: Elasticsearch doesn't come with access controls out of the box; they are a paid add on. Opensearch does include them thankfully. This complicates being able to stream directly from running jobs into Elasticsearch as you essentially have to rely on firewall rules exposing your cluster. Even with Opensearch I'm not sure how you would safely construct jobs allowing them to write to the cluster without risking abuse or corruption.
As a result I am very happy we didn't try to couple the Elasticsearch setup more closely to the CI system [3][4]. The choices we made to decouple Elasticsearch from the CI system were intentional.
This choice has allowed us to: rotate indexed logs much more quickly than the CI system would prefer, index fewer logs overall than the CI system produces, handle upgrades out of band of the CI system, restrict access to the cluster to attempt to keep it secure, ignore the 0.01% index failures, rely on those who know the output data to help improve indexing of their logs, and at the same time still have something that was extremely useful when you needed to debug specific issues.
Adding optional support for tools like Elasticsearch/Opensearch to Zuul may help some users, but in general I expect you will still have to keep two copies of the logs and users will refer to the longer term static storage by default via Zuul because Elasticsearch indexed logs won't contain everything (either they expire out or were never indexed in the first place).
I don't mean to imply these are bad tools merely that they are not a silver bullet for log storage. They are extremely powerful tools, but make more sense decoupled from Zuul with teams that can curate the specific set of indexing rules for the specific sets of logs that they care about.
[0] Full disclaimer it has been a while since I did this stuff as part of my day to day. That said I've been talking to people about Opensearch and it seems like a number of these concerns are still valid.
[1] I'm going to say Elasticsearch a lot here because it is the tool that I've used for years. But note that Elasticsearch is no longer open source and any future work should probably target Opensearch.
[2] Elasticsearch seems to do really well on dedicated hardware with large network connections between all the nodes. It is possible to run a very performant Elasticsearch, but you have to commit to it and I suspect many users are in the cloud where they have less control over this.
[3] When we first started doing this I wrote the Jenkins gearman plugin to kick off jobs to trigger work in an asynchronous manner. At the same time another team at HP wrote a direct logstash reporter for Jenkins. Both worked well, but in their case the CI system was specific to their tools and they could keep it constrained to a size that worked for them as well as having all the necessary expertise on the various log files they cared about.
[4] OpenDev is currently in a situation where it needs to drop the ElasticSearch setup as we no longer have the necessary help to continue to maintain it. Doing this if it was coupled to Zuul would be much more difficult and painful.
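To illustrate the multiline ambiguity raised under "Diversity of log data", here is a small sketch (not taken from any existing log processor) of the kind of per-format heuristic an indexer ends up maintaining: lines that don't start with a recognizable timestamp are folded into the previous event, which silently misbehaves for tools that never emit timestamps.

```python
import re

# Assumed timestamp format ("2021-11-05 08:49:12"); in practice every service
# uses its own, which is exactly the rule-maintenance burden described above.
TIMESTAMP = re.compile(r"^\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}:\d{2}")


def group_events(lines):
    """Group raw log lines into events: a line without a leading timestamp is
    treated as a continuation (e.g. a traceback) of the previous event."""
    event = []
    for line in lines:
        if TIMESTAMP.match(line):
            if event:
                yield "\n".join(event)
            event = [line.rstrip("\n")]
        else:
            # No timestamp: either a genuine continuation line or an unrelated
            # line from a tool that never timestamps its output.
            event.append(line.rstrip("\n"))
    if event:
        yield "\n".join(event)
```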
On Fri, Nov 05, 2021 at 08:49 Clark Boylan wrote:
On Fri, Nov 5, 2021, at 5:06 AM, Tristan Cacqueray wrote:
Hi,
I would like to discuss the best practices for build logs processing.
The goal is to improve the user experience when debugging a job that produces a lot of logs (e.g. >10MB) and to help discover failures that are transient and unrelated to a particular build.
The existing options are (please reply with your own solutions):
- opendev/puppet-log_processor : a gearman service that can be notified through a post job task to trigger build log indexing in logstash. This also provides a classify-log.crm function to annotate logs with a score.
- openstack/ci-log-processing : a work-in-progress service to replace the post job task and use the log_processor through the builds api.
- opendev/log-classify : an import of my logreduce tool that runs as a post task to provide an html report directly alongside the build log. https://softwarefactory-project.io/cgit/logreduce/tree/README.rst (The initial opendev reviews are still open.)
- github.com/bmwcarit/zubbi : a service to index jobs and roles in elasticsearch. This is not exactly related to the goal, but it is worth mentioning.
I am wondering if Zuul shouldn't support elasticsearch directly, by providing an `upload-logstash` role in zuul-jobs, and by supporting browsing the elasticsearch api directly from zuul-web. This would remove the need for an extra service to do the indexing, and we wouldn't have to process and store the logs twice.
Am I missing something? How do you process build logs?
My personal experience [0] is that Elasticsearch [1] is an extremely powerful secondary index that provides useful insight to CI system users, but there are a number of issues that make it problematic as a primary index or storage for CI logs:
* Performance: Indexing logs is not free. The cost involved here scales pretty directly with the size of the input data and the complexity of the processing rules. But you can also be impacted by load on the Elasticsearch cluster, the busier you get the slower your system will be as more jobs fight for indexing time and time to respond to user queries [2]. Zuul is built to do its best to avoid the exceptional cases where you need to deep dive into logs. Optimizing for these cases which should happen infrequently affects the performance of all jobs.
* Size/Volume: CI job logs compress extremely well until you put them into Elasticsearch. Back when OpenDev stored log files on an ext4 filesystem we managed about 30 days of compressed logs on 12TB of disk. Our 6TB Elasticsearch cluster can fill after 7 days and that is after removing all DEBUG level logs and only indexing specific files we know we care about. You can fill a lot of disks very quickly. It is not very efficient in this way and lends itself to being specific about what you index.
* Reliability: There is a lot that can go wrong when indexing hundreds of gigabytes a day. Logstash processes can OOM or otherwise hit bugs. Elasticsearch index shards can be corrupted and need to be deleted. The cluster can enter a degraded state due to hosting issues which severely impacts performance (see above) and takes time to recover. When looked at from a very high level these problems aren't major issues if you don't rely on avoiding them to allow jobs to pass. It's OK to have a 0.01% error rate if those errors don't bubble back to job results. It is a problem when they do.
* Maintainability: Upgrading tools like Elasticsearch typically means library updates, logstash updates (which require updating your rules), and kibana updates. The whole system needs to be kept in sync and your upgrades become far more involved than you expect.
* Diversity of log data: If you intend on storing everything in Elasticsearch then you will need to have rules capable of processing everything. You also need to take care to ensure the data is accurate as well. It is easy to accidentally combine unrelated multiline logs for example if the input data is arbitrary. Ideally you would have timestamps for every line as well, but many tools don't provide this information. Very quickly the group running the CI system needs to become an expert in everything running in their CI system and this doesn't scale.
* Access controls: Elasticsearch doesn't come with access controls out of the box; they are a paid add on. Opensearch does include them thankfully. This complicates being able to stream directly from running jobs into Elasticsearch as you essentially have to rely on firewall rules exposing your cluster. Even with Opensearch I'm not sure how you would safely construct jobs allowing them to write to the cluster without risking abuse or corruption.
As a result I am very happy we didn't try to couple the Elasticsearch setup more closely to the CI system [3][4]. The choices we made to decouple Elasticsearch from the CI system were intentional. This choice has allowed us to: rotate indexed logs much more quickly than the CI system would prefer, index fewer logs overall than the CI system produces, handle upgrades out of band of the CI system, restrict access to the cluster to attempt to keep it secure, ignore the 0.01% index failures, rely on those who know the output data to help improve indexing of their logs, and at the same time still have something that was extremely useful when you needed to debug specific issues.
Adding optional support for tools like Elasticsearch/Opensearch to Zuul may help some users, but in general I expect you will still have to keep two copies of the logs and users will refer to the longer term static storage by default via Zuul because Elasticsearch indexed logs won't contain everything (either they expire out or were never indexed in the first place).
I don't mean to imply these are bad tools merely that they are not a silver bullet for log storage. They are extremely powerful tools, but make more sense decoupled from Zuul with teams that can curate the specific set of indexing rules for the specific sets of logs that they care about.
[0] Full disclaimer it has been a while since I did this stuff as part of my day to day. That said I've been talking to people about Opensearch and it seems like a number of these concerns are still valid.
[1] I'm going to say Elasticsearch a lot here because it is the tool that I've used for years. But note that Elasticsearch is no longer open source and any future work should probably target Opensearch.
[2] Elasticsearch seems to do really well on dedicated hardware with large network connections between all the nodes. It is possible to run a very performant Elasticsearch, but you have to commit to it and I suspect many users are in the cloud where they have less control over this.
[3] When we first started doing this I wrote the Jenkins gearman plugin to kick off jobs to trigger work in an asynchronous manner. At the same time another team at HP wrote a direct logstash reporter for Jenkins. Both worked well, but in their case the CI system was specific to their tools and they could keep it constrained to a size that worked for them as well as having all the necessary expertise on the various log files they cared about.
[4] OpenDev is currently in a situation where it needs to drop the ElasticSearch setup as we no longer have the necessary help to continue to maintain it. Doing this if it was coupled to Zuul would be much more difficult and painful.
Thank you very much for taking the time to write down these concerns. I still think it's worth investigating how to avoid duplicating the logs when the user only needs elasticsearch. Though we could also support a hybrid solution, and if I understand correctly, it seems like we could use an extra upload role in zuul-jobs to replace the existing puppet-log_processor service. Would that work for the opendev setup? Regards, -Tristan
On 2021-11-05 18:23:33 +0000 (+0000), Tristan Cacqueray wrote: [...]
Thank you very much for taking the time to write down these concerns. I still think it's worth investigating how to avoid duplicating the logs when the user only needs elasticsearch. Though we could also support a hybrid solution, and if I understand correctly, it seems like we could use an extra upload role in zuul-jobs to replace the existing puppet-log_processor service. Would that work for the opendev setup?
The question is probably misdirected. We don't intend to use ElasticSearch/OpenSearch in the OpenDev collaboratory for a number of reasons, but chief among those is that we don't have people to maintain those additional services nor the integration with them. Instead, some volunteers are going to build and run something on their own, but from our (OpenDev) perspective it can and must be entirely separate from our services and configuration. We definitely wouldn't want a second log upload role added to all jobs, since that makes the jobs take longer to complete, increases the chances those jobs will fail, and, most importantly, means that our sysadmins end up being tasked with troubleshooting things when that integration fails. Instead we've suggested anyone interested in running a service to index OpenDev's Zuul job logs should query the Zuul API to find out what builds have completed and where their logs can be obtained, so that the burden of support can be completely removed. -- Jeremy Stanley
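For anyone exploring the approach Jeremy describes, a minimal sketch of polling the Zuul builds API for completed builds and their log locations might look like the following. The tenant URL is an example, and only the `limit` parameter and the `uuid`/`job_name`/`log_url` fields of the builds listing are relied on here.

```python
import requests

ZUUL_API = "https://zuul.opendev.org/api/tenant/openstack"  # example tenant API root


def recent_build_logs(limit=50):
    """Yield (uuid, job_name, log_url) for recently reported builds."""
    response = requests.get(f"{ZUUL_API}/builds", params={"limit": limit}, timeout=30)
    response.raise_for_status()
    for build in response.json():
        # Builds without uploaded logs have no log_url; skip them.
        if build.get("log_url"):
            yield build["uuid"], build["job_name"], build["log_url"]


for uuid, job_name, log_url in recent_build_logs(limit=10):
    print(uuid, job_name, log_url)
```

An external indexer built this way only needs the public API and the published log files, so a failure in the indexer never affects job results.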
On Fri, Nov 5, 2021, at 11:23 AM, Tristan Cacqueray wrote:
On Fri, Nov 05, 2021 at 08:49 Clark Boylan wrote:
On Fri, Nov 5, 2021, at 5:06 AM, Tristan Cacqueray wrote:
Hi,
I would like to discuss the best practices for build logs processing.
The goal is to improve the user experience when debugging a job that produces a lot of logs (e.g. >10MB) and to help discover failures that are transient and unrelated to a particular build.
The existing options are (please reply with your own solutions):
- opendev/puppet-log_processor : a gearman service that can be notified through a post job task to trigger build log indexing in logstash. This also provides a classify-log.crm function to annotate logs with a score.
- openstack/ci-log-processing : a work-in-progress service to replace the post job task and use the log_processor through the builds api.
- opendev/log-classify : an import of my logreduce tool that runs as a post task to provide an html report directly alongside the build log. https://softwarefactory-project.io/cgit/logreduce/tree/README.rst (The initial opendev reviews are still open.)
- github.com/bmwcarit/zubbi : a service to index jobs and roles in elasticsearch. This is not exactly related to the goal, but it is worth mentioning.
I am wondering if Zuul shouldn't support elasticsearch directly, by providing an `upload-logstash` role in zuul-jobs, and by supporting browsing the elasticsearch api directly from zuul-web. This would remove the need for an extra service to do the indexing, and we wouldn't have to process and store the logs twice.
Am I missing something? How do you process build logs?
My personal experience [0] is that Elasticsearch [1] is an extremely powerful secondary index that provides useful insight to CI system users, but there are a number of issues that make it problematic as a primary index or storage for CI logs:
* Performance: Indexing logs is not free. The cost involved here scales pretty directly with the size of the input data and the complexity of the processing rules. But you can also be impacted by load on the Elasticsearch cluster, the busier you get the slower your system will be as more jobs fight for indexing time and time to respond to user queries [2]. Zuul is built to do its best to avoid the exceptional cases where you need to deep dive into logs. Optimizing for these cases which should happen infrequently affects the performance of all jobs.
* Size/Volume: CI job logs compress extremely well until you put them into Elasticsearch. Back when OpenDev stored log files on an ext4 filesystem we managed about 30 days of compressed logs on 12TB of disk. Our 6TB Elasticsearch cluster can fill after 7 days and that is after removing all DEBUG level logs and only indexing specific files we know we care about. You can fill a lot of disks very quickly. It is not very efficient in this way and lends itself to being specific about what you index.
* Reliability: There is a lot that can go wrong when indexing hundreds of gigabytes a day. Logstash processes can OOM or otherwise hit bugs. Elasticsearch index shards can be corrupted and need to be deleted. The cluster can enter a degraded state due to hosting issues which severely impacts performance (see above) and takes time to recover. When looked at from a very high level these problems aren't major issues if you don't rely on avoiding them to allow jobs to pass. It's OK to have a 0.01% error rate if those errors don't bubble back to job results. It is a problem when they do.
* Maintainability: Upgrading tools like Elasticsearch typically means library updates, logstash updates (which require updating your rules), and kibana updates. The whole system needs to be kept in sync and your upgrades become far more involved than you expect.
* Diversity of log data: If you intend on storing everything in Elasticsearch then you will need to have rules capable of processing everything. You also need to take care to ensure the data is accurate as well. It is easy to accidentally combine unrelated multiline logs for example if the input data is arbitrary. Ideally you would have timestamps for every line as well, but many tools don't provide this information. Very quickly the group running the CI system needs to become an expert in everything running in their CI system and this doesn't scale.
* Access controls: Elasticsearch doesn't come with access controls out of the box; they are a paid add on. Opensearch does include them thankfully. This complicates being able to stream directly from running jobs into Elasticsearch as you essentially have to rely on firewall rules exposing your cluster. Even with Opensearch I'm not sure how you would safely construct jobs allowing them to write to the cluster without risking abuse or corruption.
As a result I am very happy we didn't try to couple the Elasticsearch setup more closely to the CI system [3][4]. The choices we made to decouple Elasticsearch from the CI system were intentional. This choice has allowed us to: rotate indexed logs much more quickly than the CI system would prefer, index fewer logs overall than the CI system produces, handle upgrades out of band of the CI system, restrict access to the cluster to attempt to keep it secure, ignore the 0.01% index failures, rely on those who know the output data to help improve indexing of their logs, and at the same time still have something that was extremely useful when you needed to debug specific issues.
Adding optional support for tools like Elasticsearch/Opensearch to Zuul may help some users, but in general I expect you will still have to keep two copies of the logs and users will refer to the longer term static storage by default via Zuul because Elasticsearch indexed logs won't contain everything (either they expire out or were never indexed in the first place).
I don't mean to imply these are bad tools merely that they are not a silver bullet for log storage. They are extremely powerful tools, but make more sense decoupled from Zuul with teams that can curate the specific set of indexing rules for the specific sets of logs that they care about.
[0] Full disclaimer it has been a while since I did this stuff as part of my day to day. That said I've been talking to people about Opensearch and it seems like a number of these concerns are still valid.
[1] I'm going to say Elasticsearch a lot here because it is the tool that I've used for years. But note that Elasticsearch is no longer open source and any future work should probably target Opensearch.
[2] Elasticsearch seems to do really well on dedicated hardware with large network connections between all the nodes. It is possible to run a very performant Elasticsearch, but you have to commit to it and I suspect many users are in the cloud where they have less control over this.
[3] When we first started doing this I wrote the Jenkins gearman plugin to kick off jobs to trigger work in an asynchronous manner. At the same time another team at HP wrote a direct logstash reporter for Jenkins. Both worked well, but in their case the CI system was specific to their tools and they could keep it constrained to a size that worked for them as well as having all the necessary expertise on the various log files they cared about.
[4] OpenDev is currently in a situation where it needs to drop the ElasticSearch setup as we no longer have the necessary help to continue to maintain it. Doing this if it was coupled to Zuul would be much more difficult and painful.
Thank you very much for taking the time to write down these concerns. I still think it's worth investigating how to avoid duplicating the logs when the user only needs elasticsearch. Though we could also support a hybrid solution, and if I understand correctly, it seems like we could use an extra upload role in zuul-jobs to replace the existing puppet-log_processor service. Would that work for the opendev setup?
No, this won't work for OpenDev for a couple of the reasons noted above. We don't have anyone to maintain an Elasticsearch cluster. We have asked for help and not received it. Instead OpenStack is going to do it themselves. I think this aligns better with the needs of such a system, as you have to understand the inputs in order to index them well.
On a more general level you would need to figure out how to avoid performance costs to job runtime if you remove the asynchronous processing service, as well as find ways to make indexing more resilient to errors (or ignore them, I guess). You also need to take care to avoid over-exposing your Elasticsearch cluster to the jobs, since Elasticsearch doesn't have authentication or access controls. With Opensearch you can probably push logs from the executor in a trusted playbook and authenticate using their authentication systems (see the sketch after this message), though this only addresses authentication when talking to Opensearch; it doesn't handle any authentication for log processing in front of the cluster. And finally, you'd need to somehow maintain processing rules for all of the various log files that a job can produce.
I guess this is why I've shifted more to thinking this type of tooling makes more sense being close to the people producing the job logs and not in the CI system. Zuul can have appropriate hook points that people can tie into for the exact data they want to index. They can ensure they are able to index those specific files reliably and manage firewalls or authentication in a simpler way. If you bake too much into Zuul itself it will become a major headache. Definitely solvable with enough time and effort, but remember that a lot of that time and effort is happening at an operational level, not just as a Zuul developer.
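As a rough sketch of the authenticated-push idea mentioned above (an illustration only: the endpoint, index name, and credentials are placeholders, and anything beyond basic auth would still need Opensearch's security plugin or a proxy in front of the cluster):

```python
import json
import requests

OPENSEARCH_URL = "https://opensearch.example.com:9200"  # placeholder endpoint
AUTH = ("zuul-ingest", "credential-from-a-zuul-secret")  # placeholder credentials


def push_lines(index, build_uuid, lines):
    """Bulk-index pre-filtered log lines from a trusted post playbook."""
    body = []
    for line in lines:
        body.append(json.dumps({"index": {"_index": index}}))
        body.append(json.dumps({"build_uuid": build_uuid, "message": line}))
    response = requests.post(
        f"{OPENSEARCH_URL}/_bulk",
        data=("\n".join(body) + "\n").encode("utf-8"),
        headers={"Content-Type": "application/x-ndjson"},
        auth=AUTH,
        timeout=60,
    )
    response.raise_for_status()


# e.g. push_lines("zuul-build-logs", "some-build-uuid", ["line one", "line two"])
```

Running this only from a trusted playbook keeps the credentials out of untrusted job content, but it does not remove the indexing cost from job runtime, which is the trade-off discussed above.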