From iwienand at redhat.com Thu Jul 1 05:09:31 2021
From: iwienand at redhat.com (Ian Wienand)
Date: Thu, 1 Jul 2021 15:09:31 +1000
Subject: review.opendev.org server upgrade 18/19th July 2021
Message-ID:

Hello,

Around 2021-07-18 23:00 UTC we plan to provision an upgrade to the
review.opendev.org server which hosts our Gerrit instance.

We are aware that some users may require firewall updates or similar
to access the new server. The new addresses will be:

  199.204.45.33
  2604:e100:1:0:f816:3eff:fe52:22de

We hope the downtime will be limited to a few hours as we migrate and
index the data.

User-visible changes should be minimal. Due to incompatible schemas
between the old hosted database and the new database instance we will
not be transitioning the table that holds the information about what
files you have reviewed (Gerrit shows a light-grey "Reviewed" when you
have viewed a file). Note this is *not* review comments or code, but
just the per-user flag in the UI if you have viewed a file.

You can reach admins via email at service-discuss at lists.opendev.org or
in OFTC #opendev for any further questions or queries.

Thank you,

-i

From lotorev at gmail.com Thu Jul 1 20:17:30 2021
From: lotorev at gmail.com (Vitaliy Lotorev)
Date: Thu, 1 Jul 2021 23:17:30 +0300
Subject: Gerrit master branch is always required?
Message-ID:

Hi,

1. We have a 'linux/kernel' repo (with the Linux kernel) with 'upstream'
(tracks upstream), 'projectA', and 'projectB' branches. This repo didn't
have a 'master' branch, as it didn't make sense for repo owners and users
(upst. Once I create a commit that depends-on a commit to linux/kernel,
Zuul reports an error 'failed to load configuration from linux/kernel'
because the 'master' branch doesn't exist.
We had to create a master branch in this and other repos that have an
upstream project.

2. In the git world there is a tendency to switch from the 'master' to
the 'main' branch (Gerrit 3.4 introduced a host-level defaultbranch
config [1], and GitLab announced the main branch [2]). Even if we set
defaultbranch to 'main', do we still need to create a 'master' branch?

Am I correct that Zuul currently doesn't support repos without a
'master' branch?

[1] https://gerrit-review.googlesource.com/Documentation/config-gerrit.html#gerrit.defaultBranch
[2] https://about.gitlab.com/blog/2021/03/10/new-git-default-branch-name/

----
Regards, Vitaliy
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jim at acmegating.com Thu Jul 1 20:33:26 2021
From: jim at acmegating.com (James E. Blair)
Date: Thu, 01 Jul 2021 13:33:26 -0700
Subject: Gerrit master branch is always required?
In-Reply-To: (Vitaliy Lotorev's message of "Thu, 1 Jul 2021 23:17:30 +0300")
References:
Message-ID: <87v95t4vvt.fsf@fuligin>

Vitaliy Lotorev writes:

> Hi,
>
> 1. We have 'linux/kernel' repo (with Linux kernel) with 'upstream' (tracks
> upsteam), 'projectA', 'projectB'. This repo didn't have 'master' branch as
> it didn't make sense for repo owners and users (upst. Once I create commit
> that depends-on commit to linux/kernel then Zuul reports an error 'failed
> to load configuration from linux/kernel' as the 'master' branch doesn't
> exist.
> We had to create master branch in this and other repos that have an
> upstream project.
>
> 2. In git world there is tendency to switch from 'master' to 'main' branch
> (Gerrit 3.4 introduced host-level defaultbranch config [1], Gitlab
> announcement for main branch [2]). Even if we would set defaultbranch to
> 'main' we still need to create 'master' branch?
>
> Am I correct that Zuul currently doesn't support repos without a 'master'
> branch?

It does support that; here are the relevant config options:

https://zuul-ci.org/docs/zuul/reference/project_def.html#attr-project.default-branch
https://zuul-ci.org/docs/zuul/reference/tenants.html#attr-tenant.config-projects.%3Cproject%3E.load-branch

-Jim

From lotorev at gmail.com Thu Jul 1 20:50:04 2021
From: lotorev at gmail.com (Vitaliy Lotorev)
Date: Thu, 1 Jul 2021 23:50:04 +0300
Subject: Gerrit master branch is always required?
In-Reply-To: <87v95t4vvt.fsf@fuligin>
References: <87v95t4vvt.fsf@fuligin>
Message-ID:

Thu, 1 Jul 2021 at 23:33, James E. Blair :

> Vitaliy Lotorev writes:
>
> > Hi,
> >
> > 1. We have 'linux/kernel' repo (with Linux kernel) with 'upstream'
> (tracks
> > upsteam), 'projectA', 'projectB'. This repo didn't have 'master' branch
> as
> > it didn't make sense for repo owners and users (upst. Once I create
> commit
> > that depends-on commit to linux/kernel then Zuul reports an error 'failed
> > to load configuration from linux/kernel' as the 'master' branch doesn't
> > exist.
> > We had to create master branch in this and other repos that have an
> > upstream project.
> >
> > 2. In git world there is tendency to switch from 'master' to 'main'
> branch
> > (Gerrit 3.4 introduced host-level defaultbranch config [1], Gitlab
> > announcement for main branch [2]). Even if we would set defaultbranch to
> > 'main' we still need to create 'master' branch?
> >
> > Am I correct that Zuul currently doesn't support repos without a 'master'
> > branch?
>
> It does support that; here are the relevant config options:
>
> https://zuul-ci.org/docs/zuul/reference/project_def.html#attr-project.default-branch
> https://zuul-ci.org/docs/zuul/reference/tenants.html#attr-tenant.config-projects.%3Cproject%3E.load-branch

Thanks, I missed project.default-branch and will give it a try,
specifying it in the config project.

Regards, Vitaliy
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From lotorev at gmail.com Thu Jul 1 21:23:18 2021
From: lotorev at gmail.com (Vitaliy Lotorev)
Date: Fri, 2 Jul 2021 00:23:18 +0300
Subject: Recommendation for logs rotation service
Message-ID:

Hi,

Zuul requires external storage for logs and I failed to find a simple
solution for removing job log directories after a period of time.
What I've found so far:

* zuul.opendev.org uploads logs to AFS and Swift. Looking at
  https://opendev.org/opendev/system-config/ I can't understand how
  directories are rotated on AFS.
* Swift probably has built-in file/log rotation, but I don't have
  OpenStack and Swift.
* the free Artifactory CE/OSS version doesn't support plugins, which are
  the recommended way to implement a cron cleanup plugin:
  https://jfrog.com/knowledge-base/artifactory-cleanup-best-practices/

Could you help with ideas for an in-house OSS storage service that would
clean up directories with binary artifacts/logs?
In the case of the logs kept on AFS for zuul.opendev.org: are
directories cleaned up just by date (e.g. remove all 'check' job logs
after two weeks)?

--
Regards, Vitaliy
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From cboylan at sapwetik.org Thu Jul 1 22:00:02 2021 From: cboylan at sapwetik.org (Clark Boylan) Date: Thu, 01 Jul 2021 15:00:02 -0700 Subject: Recommendation for logs rotation service In-Reply-To: References: Message-ID: On Thu, Jul 1, 2021, at 2:23 PM, Vitaliy Lotorev wrote: > Hi, > > Zuul requires external storage for logs and I failed to find a simple > solution for removing jobs log directories after a period of time. > What I've found so far: > > * zuul.opendev.org uploads logs to AFS and Swift. Looking at > https://opendev.org/opendev/system-config/ I can't understand how > directories are rotated on AFS. OpenDev doesn't really rotate the contents in AFS. We store data in AFS that we wish to stick around for longer than our normal log retention period. Occasionally admins do manual cleanup in AFS but otherwise we expect that data to stick around. > * probably Swift has built-in file/logs rotation, but I don't have > OpenStack and Swift. Yes, object stores like swift, s3, ceph, etc tend to support object expiry. When you create the object you annotate it with metadata for when the object should be expired then background processes in the service automatically do the cleanup for you. OpenDev relies on this functionality in swift, but I believe the various roles for object store log uploads support the same functionality with other object stores. > * free Aftifactory CE/OSS version doesn't support plugins which are > recommended to implement cron cleanup plugin > https://jfrog.com/knowledge-base/artifactory-cleanup-best-practices/ > > Could you help with ideas for in-house OSS storage service that would > cleanup directories with binary artifacts/logs? > In case of logs kept on AFS zuul.opendev.org: are directories cleaned > up just by date (e.g. remove all 'check jobs after two weeks')? Before OpenDev uploaded logs to swift we had our CI system copy logs using scp/rsync to a log server with 12TB of disk. On that server we ran cron jobs to run find and expire any files older than our desired expiration date. I think this, https://opendev.org/opendev/puppet-openstackci/src/branch/master/files/log_archive_maintenance.sh, is the script we were using at the time. It is fairly complicated and probably worth understanding what it is doing before you try running it since it can be destructive. It is also possible you can simplify it further. Note that it also tries to compress recent files in addition to deleting old files. The major downside to this approach is that you have to stay on top of your disk usage and keep track of whether or not the cleanups are producing enough headroom for new logs otherwise your test jobs will all start failing and then you have to fix things really quickly. > > -- > Regards, Vitaliy From fungi at yuggoth.org Thu Jul 1 22:07:59 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 1 Jul 2021 22:07:59 +0000 Subject: Recommendation for logs rotation service In-Reply-To: References: Message-ID: <20210701220759.4leivdk7gcqfyscq@yuggoth.org> On 2021-07-01 15:00:02 -0700 (-0700), Clark Boylan wrote: [...] > Before OpenDev uploaded logs to swift we had our CI system copy > logs using scp/rsync to a log server with 12TB of disk. On that > server we ran cron jobs to run find and expire any files older > than our desired expiration date. I think this, > https://opendev.org/opendev/puppet-openstackci/src/branch/master/files/log_archive_maintenance.sh, > is the script we were using at the time. 
It is fairly complicated > and probably worth understanding what it is doing before you try > running it since it can be destructive. It is also possible you > can simplify it further. Note that it also tries to compress > recent files in addition to deleting old files. > > The major downside to this approach is that you have to stay on > top of your disk usage and keep track of whether or not the > cleanups are producing enough headroom for new logs otherwise your > test jobs will all start failing and then you have to fix things > really quickly. [...] Also if you've got too many files and too little CPU or I/O bandwidth on the system hosting it, you can get in a situation where it takes longer to iterate over all the files than the expired files accumulate, at which point your filesystem utilization grows without bound until you run out of space. One solution I was considering back then was rotating logical volumes in a Linux LVM2 volume group, so have a new volume for each day and then just drop each old volume as it exceeds your expiration and create and mount a new empty volume formatted with a fresh filesystem to replace it. This however requires that you shard your file path by date, which may take some effort with the existing log publication roles. It also probably means a bit of additional wasted block storage unless you use something like thin-provisioned volumes. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From iwienand at redhat.com Tue Jul 6 05:33:02 2021 From: iwienand at redhat.com (Ian Wienand) Date: Tue, 6 Jul 2021 15:33:02 +1000 Subject: Debugging non-starting periodic jobs; probably since 4.6.0 Message-ID: Hi, I've been trying to debug why release-wheel-cache https://zuul.opendev.org/t/openstack/builds?job_name=release-wheel-cache doesn't seem to have run since 2021-06-27. I don't believe we've made any changes to these jobs, but I think Zuul has deployed quite a bit of change. The build-wheel-cache jobs are in openstack-zuul-jobs [1]. We then have publish-wheel-cache jobs [2] that are children of these jobs and essentially wrap the build jobs with "get afs token, write wheels to afs" wrappers. These jobs are in the periodic pipeline of the requirements project via project-config [3]. Finally the release-wheel-cache (also [3]) runs after all the publish-wheel-cache-* jobs to "vos release" the various wheel volumes. Note it seems none of the individual jobs (publish-wheel-cache-*) has run since 2021-06-27 either, but I've just concentrated on release-wheel-cache as it depends on all the others. I believe the last time it ran has been captured in [4] starting at 2021-06-27 06:00:35,268 DEBUG zuul.Scheduler: [e:963101cc7c01460abfd34664e5610e18] Forwarding trigger event This is the last result we see in the build log. The next time it tries is [5] at 2021-06-28 06:00:17,437 DEBUG zuul.Scheduler: [e:afdb7039494649c09e8bb2b64158a385] Forwarding trigger event I think the interesting thing here is that this seems to go into a tight loop of 2021-06-28 06:08:34,701 DEBUG zuul.Pipeline.openstack.periodic: [e: afdb7039494649c09e8bb2b64158a385] Checking for changes needed by : 2021-06-28 06:08:34,701 DEBUG zuul.Pipeline.openstack.periodic: [e: afdb7039494649c09e8bb2b64158a385] does not support dependencies ... 
2021-07-02 14:20:30,699 DEBUG zuul.Pipeline.openstack.periodic: [e: afdb7039494649c09e8bb2b64158a385] Checking for changes needed by : 2021-07-02 14:20:30,700 DEBUG zuul.Pipeline.openstack.periodic: [e: afdb7039494649c09e8bb2b64158a385] does not support dependencies every few seconds for the next few days; I think until restarted Zuul at @ 2021-07-02 14:22:33 according to logs at [6] (it stops at 14:20) So then the latest run is triggered at [7] 2021-07-05 06:00:13,771 DEBUG zuul.Scheduler: [e: 5fe4dd08d1a44700bf92fd475565f687] Forwarding trigger event but nothing seems to run. Further complicating things, I know the requirements project had some config errors from the recent security fixes until about 2021-06-28 [8]. So far I'm at a bit of a loss to correlate all this into a coherent story. Hopefully some other eyes might see what I'm missing! Thanks, -i [1] https://opendev.org/openstack/openstack-zuul-jobs/src/branch/master/zuul.d/jobs.yaml#L1457 [2] https://opendev.org/openstack/project-config/src/branch/master/zuul.d/jobs.yaml#L864 [3] https://opendev.org/openstack/project-config/src/branch/master/zuul.d/projects.yaml#L5083 [4] http://people.redhat.com/~iwienand/zuul-periodic-27-06-2021/963101cc7c01460abfd34664e5610e18.txt [5] http://people.redhat.com/~iwienand/zuul-periodic-27-06-2021/afdb7039494649c09e8bb2b64158a385.txt [6] https://wiki.openstack.org/wiki/Infrastructure_Status [7] http://people.redhat.com/~iwienand/zuul-periodic-27-06-2021/5fe4dd08d1a44700bf92fd475565f687.txt [8] https://review.opendev.org/c/openstack/requirements/+/798361 From maxim.afanasyev at technica-engineering.de Tue Jul 6 21:55:05 2021 From: maxim.afanasyev at technica-engineering.de (Maxim Afanasyev) Date: Tue, 6 Jul 2021 21:55:05 +0000 Subject: Running Zuul under docker compose demo with GitLab connection Message-ID: Hello We would like to try Zuul CI in our organization, because we already had very positive experience of configuring jobs in one of projects for our partner company. But in that case Zuul was installed on different hosting, by different team and used GitHub connection. Thus we want to setup it on our own and load all projects by using GitLab driver. To start, we followed these tutorials and sample configurations: https://zuul-ci.org/docs/zuul/tutorials/quick-start.html Quick-Start Installation and Tutorial — Zuul documentation Quick-Start Installation and Tutorial¶. Zuul is not like other CI or CD systems. It is a project gating system designed to assist developers in taking a change from proposal through deployment. zuul-ci.org https://zuul-ci.org/docs/zuul/reference/drivers/gitlab.html GitLab — Zuul documentation This indicates that changes originating from the GitLab connection must be in the opened state (not merged yet).. pipeline.require. ¶. The dictionary passed to the GitLab pipeline require attribute supports the following attributes:. pipeline.require.. open ¶. A boolean value (true or false) that indicates whether the Merge Request must be open in order to be ... zuul-ci.org https://gist.github.com/mpiscaer/2e731c8becea781c4521cc15b5cfeabe and took the sample based on the docker compose (version around 21.05.2021) GitLab driver basically works, but we faced the following issues: 1) SSL certificate verification. By default Zuul did not accept certificates of our GitLab server and unfortunately the adding of public keys under "certs" did not help. 
The workaround was to add line: self.session.verify = False into the "init" method of "GitlabAPIClient" The question: is it possible to add some option to disable such check in the GitLab driver configuration? 2) API works only partially For example the request to http://ubuntu2:9000/api/tenants in Firefox gives JSON response with list of configured tenants and headers are: Access-Control-Allow-Origin * Content-Length 44 Content-Type application/json; charset=utf-8 Date Tue, 06 Jul 2021 21:33:34 GMT Server CherryPy/18.6.0 but if I call the same URL via "curl" or in Google Chrome, then the webpage is displayed with request to enable Javascript and headers are: Accept-Ranges: bytes Content-Length: 2230 Content-Type: text/html;charset=utf-8 Date: Tue, 06 Jul 2021 21:40:58 GMT Last-Modified: Thu, 20 May 2021 10:56:25 GMT Server: CherryPy/18.6.0 ............. .............. The question: why can it happen, does it depend on request content and headers? I suppose because of this we cannot use webhooks to run jobs triggered by GitLab pushes (actually also HTML page is returned) Thank you in advance Best Regards Maxim Afanasyev Software Developer test frameworks / SWBK mob: + 49 (176) 35-42-55-24 -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Tue Jul 6 22:26:12 2021 From: cboylan at sapwetik.org (Clark Boylan) Date: Tue, 06 Jul 2021 15:26:12 -0700 Subject: Running Zuul under docker compose demo with GitLab connection In-Reply-To: References: Message-ID: <2f6e359d-d140-4aaa-97ca-7f32483b707e@www.fastmail.com> On Tue, Jul 6, 2021, at 2:55 PM, Maxim Afanasyev wrote: > Hello > > We would like to try Zuul CI in our organization, because we already > had very positive experience of configuring jobs in one of projects for > our partner company. But in that case Zuul was installed on different > hosting, by different team and used GitHub connection. Thus we want to > setup it on our own and load all projects by using GitLab driver. > > To start, we followed these tutorials and sample configurations: > https://zuul-ci.org/docs/zuul/tutorials/quick-start.html > Quick-Start Installation and Tutorial — Zuul documentation > > Quick-Start Installation and Tutorial¶. Zuul is not like other CI or CD > systems. It is a project gating system designed to assist developers in > taking a change from proposal through deployment. > zuul-ci.org > https://zuul-ci.org/docs/zuul/reference/drivers/gitlab.html > GitLab — Zuul documentation > > This indicates that changes originating from the GitLab connection must > be in the opened state (not merged yet).. pipeline.require. source> ¶. The dictionary passed to the GitLab pipeline require > attribute supports the following attributes:. pipeline.require. source>. open ¶. A boolean value (true or false) that indicates whether > the Merge Request must be open in order to be ... > zuul-ci.org > https://gist.github.com/mpiscaer/2e731c8becea781c4521cc15b5cfeabe > > and took the sample based on the docker compose (version around 21.05.2021) > > GitLab driver basically works, but we faced the following issues: > > 1) SSL certificate verification. By default Zuul did not accept > certificates of our GitLab server and unfortunately the adding of > public keys under "certs" did not help. The workaround was to add line: > > self.session.verify = False > > into the "init" method of "GitlabAPIClient" > > The question: is it possible to add some option to disable such check > in the GitLab driver configuration? 
I think the reason this happened is that the Python requests library
vendors its own CA bundles by default and doesn't trust the system CAs.
You can override this by setting the REQUESTS_CA_BUNDLE environment
variable to point to your system bundle instead. You might try that and
see if it works for you? If it does we can probably add a note about
this in the Zuul docs. Python requests info can be found at
https://docs.python-requests.org/en/master/user/advanced/#ssl-cert-verification

>
> 2) API works only partially
> For example the request to http://ubuntu2:9000/api/tenants in Firefox
> gives JSON response with list of configured tenants and headers are:
>
> Access-Control-Allow-Origin *
> Content-Length 44
> Content-Type application/json; charset=utf-8
> Date Tue, 06 Jul 2021 21:33:34 GMT
> Server CherryPy/18.6.0
>
> but if I call the same URL via "curl" or in Google Chrome, then the
> webpage is displayed with request to enable Javascript and headers are:
>
> Accept-Ranges: bytes
> Content-Length: 2230
> Content-Type: text/html;charset=utf-8
> Date: Tue, 06 Jul 2021 21:40:58 GMT
> Last-Modified: Thu, 20 May 2021 10:56:25 GMT
> Server: CherryPy/18.6.0
>
> ............. ..............
>
> The question: why can it happen, does it depend on request content and
> headers? I suppose because of this we cannot use webhooks to run jobs
> triggered by GitLab pushes (actually also HTML page is returned)

I am not able to reproduce this behavior using the OpenDev Zuul
instance. https://zuul.opendev.org/api/tenants returns the tenant
listing JSON with Firefox, Chrome, and curl. You might want to check
the server side logs to see what sort of requests it is receiving and
debug from there? My hunch is that something about the URL isn't quite
right. If you aren't able to resolve this then more complete debugging
info would probably be helpful. Maybe a complete paste of the curl -v
output with any sensitive info filtered out.

>
> Thank you in advance
> Best Regards
> *Maxim Afanasyev*
> Software Developer
> test frameworks / SWBK
> mob: + 49 (176) 35-42-55-24

From iwienand at redhat.com Wed Jul 7 01:44:43 2021
From: iwienand at redhat.com (Ian Wienand)
Date: Wed, 7 Jul 2021 11:44:43 +1000
Subject: Debugging non-starting periodic jobs; probably since 4.6.0
In-Reply-To:
References:
Message-ID:

On Tue, Jul 06, 2021 at 03:33:02PM +1000, Ian Wienand wrote:
> Hopefully some other eyes might see what I'm missing!

Indeed clarkb's did. There was perhaps a confluence of errors here; I'm
not sure what happened before the restart. The important line in the
logs was

  Change is already in pipeline, ignoring

When examining the periodic queue, there were indeed already-running
but stuck jobs there. There were at least two -- the first was for ref
0000.0000, which appears to have come in when the queues were restored
after the restart. The second was indeed a buildset for
refs/heads/master triggered by the periodic timer that was queued on an
ARM64 node. We have had some problems with nodepool communicating with
one of the ARM64 clouds during this time, but I'm not sure exactly what
made the job get stuck as "queued" and never time out.

  zuul dequeue --tenant openstack --pipeline periodic --project openstack/requirements --ref refs/heads/master

first removed the one with the 0000.0000 ref (not sure why); then, when
run a second time, removed the buildset started by the periodic timer
some days ago.

So the mystery is largely solved, although I think it shows room for
improvement in node allocation error/timeout handling and our
re-enqueue methods.
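(A sketch added for illustration, not part of the original message: once
the stuck buildset has been dequeued as above, the periodic ref can
usually be re-enqueued by hand with the same client rather than waiting
for the next timer trigger. The tenant/project names below simply reuse
the ones from this thread, and the exact options vary between Zuul
versions:

  zuul enqueue-ref --tenant openstack --pipeline periodic --project openstack/requirements --ref refs/heads/master

On some versions the client also expects --newrev with the commit sha to
enqueue; the Zuul client documentation for enqueue-ref has the details.)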
-i From guillaume.chauvel at gmail.com Tue Jul 13 12:32:59 2021 From: guillaume.chauvel at gmail.com (Guillaume Chauvel) Date: Tue, 13 Jul 2021 14:32:59 +0200 Subject: Running Zuul under docker compose demo with GitLab connection In-Reply-To: <2f6e359d-d140-4aaa-97ca-7f32483b707e@www.fastmail.com> References: <2f6e359d-d140-4aaa-97ca-7f32483b707e@www.fastmail.com> Message-ID: > > 1) SSL certificate verification. By default Zuul did not accept > > certificates of our GitLab server and unfortunately the adding of > > public keys under "certs" did not help. The workaround was to add line: > > > > self.session.verify = False > > > > into the "init" method of "GitlabAPIClient" > > > > The question: is it possible to add some option to disable such check > > in the GitLab driver configuration? > > I think the reason this happened is that python requests library vendors its own CA bundles by default and doesn't trust the system CAs. You can override this by setting the REQUESTS_CA_BUNDLE environment to point to your system bundle instead. You might try and see if that works for you? If it does we can probably add a note about this in the zuul docs. Python requests info can be found at https://docs.python-requests.org/en/master/user/advanced/#ssl-cert-verification Gitlab Quick-Start: https://review.opendev.org/c/zuul/zuul/+/795540 with HTTPS: https://review.opendev.org/c/zuul/zuul/+/800009/1 Gerrit and Github drivers have such an option "verify_ssl" (https://opendev.org/zuul/zuul/src/commit/37c714b2de0d95578963e6d961cf9ddec5aa7604/zuul/driver/github/githubconnection.py#L828-L831) Gitlab driver doesn't have one at the moment. From iwienand at redhat.com Tue Jul 13 19:35:59 2021 From: iwienand at redhat.com (Ian Wienand) Date: Wed, 14 Jul 2021 05:35:59 +1000 Subject: review.opendev.org server upgrade 18/19th July 2021 In-Reply-To: References: Message-ID: Hello, Just a reminder of the planned downtime and server switch detailed below. Please be aware of the new IP addresses if you are in an environment where external access may be filtered. Thanks, -i On Thu, Jul 01, 2021 at 03:09:31PM +1000, Ian Wienand wrote: > Hello, > > Around 2021-07-18 23:00 UTC we plan to provision an upgrade to the > review.opendev.org server which hosts our Gerrit instance. > > We are aware that some users may require firewall updates or similar > to access the new server. The new addresses will be: > > 199.204.45.33 > 2604:e100:1:0:f816:3eff:fe52:22de > > We hope the downtime will be limited to a few hours as we migrate and > index the data. > > User-visible changes be should minimal. Due to incompatible schemas > between the old hosted database and new database instance we will not > be transitioning the table that holds the information about what files > you have reviewed (gerrit shows a light-grey "Reviewed" when you have > viewed a file). Note this is *not* review comments or code, but just > the per-user flag in the UI if you have viewed a file. > > You can reach admins via email at service-discuss at lists.opendev.org or > in OFTC #opendev for any further questions or queries. 
> > Thank you, > > -i From cboylan at sapwetik.org Wed Jul 14 14:57:33 2021 From: cboylan at sapwetik.org (Clark Boylan) Date: Wed, 14 Jul 2021 07:57:33 -0700 Subject: Take care when deleting email addresses in Gerrit Message-ID: <2e930333-6aca-46d6-b872-7cc4b0b863a8@www.fastmail.com> Hello everyone, We have discovered a bug in Gerrit [0] that allows you to delete your account's Ubuntu One OpenID association if you delete the email from your account associated with that OpenID. When you do this the next time you log in Gerrit will create a new account for you. Fixing this after the fact is currently difficult. We are working to correct this but the required corrections take time. We would like to avoid these issues in the first place. Please avoid deleting email addresses from your accounts unless you are sure they are not associated with your OpenID. If you would like help double checking that for you please let us know and one of our Gerrit admins can double check. We are working to make this fixable after the fact, and will try to work with the Gerrit upstream to prevent you from deleting Ubuntu One OpenID associations from your account in the first place. Until then we apologize for the inconvenience. Finally, this message and others like it are sent to service-announce at lists.opendev.org. I am sending this email to a much broader set of mailing lists because the service-announce list membership is quite small. All of our users should be subscribed to that list to get important notices like this one. We keep the traffic low and only send mail there when we feel it is important for you to see it. [0] https://bugs.chromium.org/p/gerrit/issues/detail?id=14776 Clark From lotorev at gmail.com Thu Jul 15 21:17:40 2021 From: lotorev at gmail.com (Vitaliy Lotorev) Date: Fri, 16 Jul 2021 00:17:40 +0300 Subject: Recommendation for logs rotation service In-Reply-To: <20210701220759.4leivdk7gcqfyscq@yuggoth.org> References: <20210701220759.4leivdk7gcqfyscq@yuggoth.org> Message-ID: Thanks, I appreciate your answers. I also found tmpwatch utility for removing stale files: https://linux.die.net/man/8/tmpwatch Regards, Vitaliy пт, 2 июля 2021 г., 1:08 Jeremy Stanley : > On 2021-07-01 15:00:02 -0700 (-0700), Clark Boylan wrote: > [...] > > Before OpenDev uploaded logs to swift we had our CI system copy > > logs using scp/rsync to a log server with 12TB of disk. On that > > server we ran cron jobs to run find and expire any files older > > than our desired expiration date. I think this, > > > https://opendev.org/opendev/puppet-openstackci/src/branch/master/files/log_archive_maintenance.sh > , > > is the script we were using at the time. It is fairly complicated > > and probably worth understanding what it is doing before you try > > running it since it can be destructive. It is also possible you > > can simplify it further. Note that it also tries to compress > > recent files in addition to deleting old files. > > > > The major downside to this approach is that you have to stay on > > top of your disk usage and keep track of whether or not the > > cleanups are producing enough headroom for new logs otherwise your > > test jobs will all start failing and then you have to fix things > > really quickly. > [...] 
> > Also if you've got too many files and too little CPU or I/O > bandwidth on the system hosting it, you can get in a situation where > it takes longer to iterate over all the files than the expired files > accumulate, at which point your filesystem utilization grows without > bound until you run out of space. > > One solution I was considering back then was rotating logical > volumes in a Linux LVM2 volume group, so have a new volume for each > day and then just drop each old volume as it exceeds your expiration > and create and mount a new empty volume formatted with a fresh > filesystem to replace it. This however requires that you shard your > file path by date, which may take some effort with the existing log > publication roles. It also probably means a bit of additional wasted > block storage unless you use something like thin-provisioned > volumes. > -- > Jeremy Stanley > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From y2kenny at gmail.com Tue Jul 20 02:48:20 2021 From: y2kenny at gmail.com (Kenny Ho) Date: Mon, 19 Jul 2021 22:48:20 -0400 Subject: Removal of ansible_connection, etc. for 4.6 Message-ID: Hi, Per the release log, "connection-related variables are no longer allowed to be set in job definitions, as they may be used to circumvent security measures." Does this apply to both untrusted-projects and config-projects or does it only apply to untrusted-projects? Regards, Kenny From cboylan at sapwetik.org Tue Jul 20 16:47:28 2021 From: cboylan at sapwetik.org (Clark Boylan) Date: Tue, 20 Jul 2021 09:47:28 -0700 Subject: Removal of ansible_connection, etc. for 4.6 In-Reply-To: References: Message-ID: <5b6d1bf3-b3cd-4e78-8a84-567be127b781@www.fastmail.com> On Mon, Jul 19, 2021, at 7:48 PM, Kenny Ho wrote: > Hi, > > Per the release log, "connection-related variables are no longer > allowed to be set in job definitions, as they may be used to > circumvent security measures." Does this apply to both > untrusted-projects and config-projects or does it only apply to > untrusted-projects? Rereading the change that introduced this behavior [0] I'm fairly certain that we apply the checks to all job definitions regardless of their trust level. I think the idea is that if you need to override these you should be doing it in Nodepool and Nodepool will pass that info along to the job. The reason for this is connection details are related to the labels Nodepool provides and shouldn't be job specific but rather node specific. [0] https://opendev.org/zuul/zuul/commit/ad7bd9c6f28e36da69b4d1292627107fcf3fd562 > > Regards, > Kenny From y2kenny at gmail.com Tue Jul 20 16:56:08 2021 From: y2kenny at gmail.com (Kenny Ho) Date: Tue, 20 Jul 2021 12:56:08 -0400 Subject: Removal of ansible_connection, etc. for 4.6 In-Reply-To: <5b6d1bf3-b3cd-4e78-8a84-567be127b781@www.fastmail.com> References: <5b6d1bf3-b3cd-4e78-8a84-567be127b781@www.fastmail.com> Message-ID: Um... I think that is going to be a problem for me. My use case is related to k8s namespace type where I create additional pods or vms inside that namespace, and then add the pod or vm as if it's a 'node'. Is there a solution for this use case? 
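(A minimal sketch, for illustration only, of the add_host approach that
Jim suggests in the reply below. The pod name, group name, and namespace
variable here are hypothetical, and the exact connection-plugin
variables depend on your Ansible/kubectl setup:

  - hosts: localhost
    tasks:
      - name: Expose a pod this job created as an inventory host
        add_host:
          name: test-pod                  # hypothetical pod created earlier in the job
          groups: dynamic_nodes
          ansible_connection: kubectl     # connection vars are fine here,
          ansible_kubectl_namespace: "{{ my_namespace }}"   # just not as Zuul job variables

  - hosts: dynamic_nodes
    tasks:
      - name: Run something inside the pod
        command: uname -a

Depending on the connection plugin you may also need variables such as
ansible_kubectl_context or ansible_kubectl_kubeconfig.)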
On Tue, Jul 20, 2021 at 12:47 PM Clark Boylan wrote: > > On Mon, Jul 19, 2021, at 7:48 PM, Kenny Ho wrote: > > Hi, > > > > Per the release log, "connection-related variables are no longer > > allowed to be set in job definitions, as they may be used to > > circumvent security measures." Does this apply to both > > untrusted-projects and config-projects or does it only apply to > > untrusted-projects? > > Rereading the change that introduced this behavior [0] I'm fairly certain that we apply the checks to all job definitions regardless of their trust level. I think the idea is that if you need to override these you should be doing it in Nodepool and Nodepool will pass that info along to the job. The reason for this is connection details are related to the labels Nodepool provides and shouldn't be job specific but rather node specific. > > [0] https://opendev.org/zuul/zuul/commit/ad7bd9c6f28e36da69b4d1292627107fcf3fd562 > > > > > Regards, > > Kenny From jim at acmegating.com Tue Jul 20 16:59:55 2021 From: jim at acmegating.com (James E. Blair) Date: Tue, 20 Jul 2021 09:59:55 -0700 Subject: Removal of ansible_connection, etc. for 4.6 In-Reply-To: (Kenny Ho's message of "Tue, 20 Jul 2021 12:56:08 -0400") References: <5b6d1bf3-b3cd-4e78-8a84-567be127b781@www.fastmail.com> Message-ID: <87im143oqc.fsf@fuligin> Kenny Ho writes: > Um... I think that is going to be a problem for me. My use case is > related to k8s namespace type where I create additional pods or vms > inside that namespace, and then add the pod or vm as if it's a 'node'. > Is there a solution for this use case? You should be able to add the variable to Ansible within the playbook using add_host; you just can't pass it through Zuul as a job variable. -Jim From pabelanger at redhat.com Sat Jul 31 15:03:50 2021 From: pabelanger at redhat.com (Paul Belanger) Date: Sat, 31 Jul 2021 11:03:50 -0400 Subject: Zuul 4.7.0 released In-Reply-To: <87v94s1yjs.fsf@fuligin> References: <87v94s1yjs.fsf@fuligin> Message-ID: On Thu, Jul 29, 2021 at 04:37:59PM -0700, James E. Blair wrote: > The Zuul team is pleased to announce the release of Zuul 4.7.0. > > Download it from PyPI: https://pypi.org/project/zuul > > Find container images at: https://hub.docker.com/u/zuul > > The release notes, included below, are also available in the > documentation with hyperlink references: > > https://zuul-ci.org/docs/zuul/4.7.0/reference/releasenotes.html > > ... 
We managed to upgrade today to 4.7.0 but noticed something new:

2021-07-31 14:29:47,287 ERROR zuul.Scheduler: Error starting Zuul:
Traceback (most recent call last):
  File "/opt/venv/zuul-4.7.0/lib/python3.6/site-packages/zuul/cmd/scheduler.py", line 152, in run
    self.sched.prime(self.config)
  File "/opt/venv/zuul-4.7.0/lib/python3.6/site-packages/zuul/scheduler.py", line 636, in prime
    self._doReconfigureEvent(event)
  File "/opt/venv/zuul-4.7.0/lib/python3.6/site-packages/zuul/scheduler.py", line 945, in _doReconfigureEvent
    min_ltimes=min_ltimes)
  File "/opt/venv/zuul-4.7.0/lib/python3.6/site-packages/zuul/configloader.py", line 2358, in loadTenant
    abide, unparsed_config, ansible_manager, min_ltimes)
  File "/opt/venv/zuul-4.7.0/lib/python3.6/site-packages/zuul/configloader.py", line 1572, in fromYaml
    tenant.addUntrustedProject(tpc)
  File "/opt/venv/zuul-4.7.0/lib/python3.6/site-packages/zuul/model.py", line 5374, in addUntrustedProject
    self._addProject(tpc)
  File "/opt/venv/zuul-4.7.0/lib/python3.6/site-packages/zuul/model.py", line 5256, in _addProject
    (project,))
Exception: Project ansible-collections/kubernetes.core is already in project index

I haven't tried to debug why yet, however 16 days ago we added the
duplicate project to our tenant config:

https://github.com/ansible/project-config/pull/870

Currently we do not run any sort of tenant validation for duplicate
projects, but we will update our linters to check.

Before I go out and write a bunch of linter code, how do others deal
with this? I don't believe there is anything on the Zuul side (could be
wrong) to enforce this. I also know Zuul won't dynamically load this
config (like pipeline config) to warn the user either. We are allowing
more non-Zuul folks the ability to add new projects into our tenant,
but they don't have root access to the scheduler, so we would like to
catch most of these things before we land the changes.

Thanks,
Paul

From jim at acmegating.com Sat Jul 31 15:32:28 2021
From: jim at acmegating.com (James E. Blair)
Date: Sat, 31 Jul 2021 08:32:28 -0700
Subject: Zuul 4.7.0 released
In-Reply-To: (Paul Belanger's message of "Sat, 31 Jul 2021 11:03:50 -0400")
References: <87v94s1yjs.fsf@fuligin>
Message-ID: <87r1fe1otv.fsf@fuligin>

Paul Belanger writes:

> Before I go out an write a bunch of linter code, how do others deal with
> this? I don't believe anything changes zuul side (could be wrong) to
> enforce this. But also know zuul won't dynamically load this config
> (like pipeline config) to warn the user too.

You might consider this:

  https://zuul-ci.org/docs/zuul/reference/client.html#tenant-conf-check

- Jim

From pabelanger at redhat.com Sat Jul 31 20:57:56 2021
From: pabelanger at redhat.com (Paul Belanger)
Date: Sat, 31 Jul 2021 16:57:56 -0400
Subject: Zuul 4.7.0 released
In-Reply-To: <87r1fe1otv.fsf@fuligin>
References: <87v94s1yjs.fsf@fuligin> <87r1fe1otv.fsf@fuligin>
Message-ID:

On Sat, Jul 31, 2021 at 08:32:28AM -0700, James E. Blair wrote:
> Paul Belanger writes:
>
> > Before I go out an write a bunch of linter code, how do others deal with
> > this? I don't believe anything changes zuul side (could be wrong) to
> > enforce this. But also know zuul won't dynamically load this config
> > (like pipeline config) to warn the user too.
>
> You might consider this:
>
>   https://zuul-ci.org/docs/zuul/reference/client.html#tenant-conf-check
>
I knew I looked into it before:

https://review.opendev.org/c/zuul/zuul/+/591868

Currently duplicate projects are not captured via tenant-conf-check.
Will have to look into it more again.

Paul
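(A sketch added for illustration, not part of the original thread:
Paul's question is essentially asking for a pre-merge lint. A minimal
check for duplicate project entries could be run in a check job against
the tenant file before the scheduler ever sees it. The file name and
the assumption of the standard main.yaml tenant layout are hypothetical,
and the grouped "projects:" entry form is not handled here:

  # check_tenants.py - rough check for duplicate project entries in a
  # Zuul tenant configuration file (e.g. zuul/main.yaml).
  # Note: does not handle the grouped "projects:" entry form.
  import sys
  import yaml

  def project_name(entry):
      # Entries are either plain strings or one-key mappings with options.
      return entry if isinstance(entry, str) else next(iter(entry))

  def main(path):
      ok = True
      with open(path) as f:
          data = yaml.safe_load(f)
      for item in data:
          tenant = item.get('tenant')
          if not tenant:
              continue
          seen = set()
          for source in tenant.get('source', {}).values():
              for section in ('config-projects', 'untrusted-projects'):
                  for entry in source.get(section, []):
                      name = project_name(entry)
                      if name in seen:
                          print("duplicate project in tenant %s: %s"
                                % (tenant['name'], name))
                          ok = False
                      seen.add(name)
      return 0 if ok else 1

  if __name__ == '__main__':
      sys.exit(main(sys.argv[1]))

Running something like "python3 check_tenants.py zuul/main.yaml" in the
config project's check pipeline would reject duplicate entries before
they can break a scheduler restart.)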