zuul-preview - a helper service for JAMstack preview sites
Heya,

One of the cool things we do in Zuul jobs is our static html preview sites. We use this for zuul-dashboard preview, as well as storyboard, docs builds and others.

Recently some of us in OpenDev land have been working with the OpenStack Foundation folks to support using netlify-cms with gerrit, which lets people who would prefer to edit web content via vim/emacs via git collaborate with those who would prefer to do such a thing in a visual CMS-like manner. It's really cool - except, one of the things that came out of it is the realization that a non-zero number of static-site-generators and/or JAMstack applications make some assumptions that they exist at the root of a URL. While we could argue that it would be better if they didn't make that assumption, supporting them would be potentially friendlier.

To that end, over the weekend Jim and I hacked up a service using Apache's mod_rewrite and its ability to call out to a helper program to find the target URL. We wrote a program that takes the input from apache, hits zuul's REST API and returns the appropriate url. With that, given a url like:

http://site.926bb0aaddad4bc3853269451e115dcb.openstack.preview.opendev.org/

It can serve the content published on the fileserver, or in swift - basically allowing us to serve preview content as if it were at the root of a url without sacrificing the zuul pattern of jobs being the ones to tell zuul where they published the content.

It is a small program bundled with pre-configured Apache in a Docker container, so anyone who wants to run this at their site can simply run the published docker container as-is.

We think it would be appropriate for this service to be part of Zuul in general so that it's easily available to sites which want to run it, and so we wanted to see what people thought about Zuul adopting it.

https://github.com/emonty/upgraded-octo-giggle

We wrote it in C++, for two reasons:

1) It's a performance sensitive service, so C++ is helpful.

2) We've talked about potentially writing some of zuul's components in not-python in the future. We're not broadly ready for that yet, but we thought having a simple program in C++ (that's not likely to get any more complex) but that does things like REST and JSON interactions would give us a basis for a more informed discussion later.

So ... what do people think? Should the Zuul project adopt a small c++ service?

Monty
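(As a rough illustration only -- the real helper is the C++ program linked above -- the lookup boils down to something like the following Python sketch. The hostname layout is inferred from the example URL, and the API base URL is a placeholder.)

```python
# Illustrative sketch, not the actual zuul-preview code: resolve a preview
# hostname to the URL where the build published its content, via Zuul's
# REST API. Hostname fields are assumed from the example URL above.
import requests

ZUUL_API = "https://zuul.example.org/api"  # placeholder; site-specific


def preview_target(hostname):
    # site.926bb0aaddad4bc3853269451e115dcb.openstack.preview.opendev.org
    # -> <site>.<build uuid>.<tenant>.<preview domain>
    site, build_uuid, tenant, _domain = hostname.split(".", 3)
    build = requests.get(
        "%s/tenant/%s/build/%s" % (ZUUL_API, tenant, build_uuid)).json()
    # The job reported where it published its content; serve from there.
    return build["log_url"]
```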
Hi,

On Mon, Feb 11, 2019 at 1:21 PM Monty Taylor <mordred@inaugust.com> wrote:
Heya,
One of the cool things we do in Zuul jobs is our static html preview sites. We use this for zuul-dashboard preview, as well as storyboard, docs builds and others.
Recently some of us in OpenDev land have been working with the OpenStack Foundation folks to support using netlify-cms with gerrit, which lets people who would prefer to edit web content via vim/emacs via git collaborate with those who would prefer to do such a thing in a visual CMS-like manner. It's really cool - except, one of the things that came out of it is the realization that a non-zero number of static-site-generators and/or JAMstack applications make some assumptions that they exist at the root of a URL. While we could argue that it would be better if they didn't make that assumption, supporting them would be potentially friendlier.
To that end, over the weekend Jim and I hacked up a service using Apache's mod_rewrite and its ability to call out to a helper program to find the target URL. We wrote a program that takes the input from apache, hits zuul's REST API and returns the appropriate url. With that, given a url like:
http://site.926bb0aaddad4bc3853269451e115dcb.openstack.preview.opendev.org/
It can serve the content published on the fileserver, or in swift - basically allowing us to serve preview content as if it were at the root of a url without sacrificing the zuul pattern of jobs being the ones to tell zuul where they published the content.
It is a small program bundled with pre-configured Apache in a Docker container, so anyone who wants to run this at their site can simply run the published docker container as-is.
We think it would be appropriate for this service to be part of Zuul in general so that it's easily available to sites which want to run it, and so we wanted to see what people thought about Zuul adopting it.
https://github.com/emonty/upgraded-octo-giggle
We wrote it in C++, for two reasons:
1) It's a performance sensitive service, so C++ is helpful.
2) We've talked about potentially writing some of zuul's components in not-python in the future. We're not broadly ready for that yet, but we thought having a simple program in C++ (that's not likely to get any more complex) but that does things like REST and JSON interactions would give us a basis for a more informed discussion later.
So ... what do people think? Should the Zuul project adopt a small c++ service?
I think this is fine that it's in C++, IMO. If it really becomes burdensome for some unforeseen reason, it's also simple enough to rewrite in something else (but not java.... NEVER java). So let's just go with it. -Dave
Aye, as long as the service's name remains "upgraded-octo-giggle".

On Mon, Feb 11, 2019 at 7:58 PM David Shrewsbury <shrews@redhat.com> wrote:
Hi,
On Mon, Feb 11, 2019 at 1:21 PM Monty Taylor <mordred@inaugust.com> wrote:
Heya,
One of the cool things we do in Zuul jobs is our static html preview sites. We use this for zuul-dashboard preview, as well as storyboard, docs builds and others.
Recently some of us in OpenDev land have been working with the OpenStack Foundation folks to support using netlify-cms with gerrit, which lets people who would prefer to edit web content via vim/emacs via git collaborate with those who would prefer to do such a thing in a visual CMS-like manner. It's really cool - except, one of the things that came out of it is the realization that a non-zero number of static-site-generators and/or JAMstack applications make some assumptions that they exist at the root of a URL. While we could argue that it would be better if they didn't make that assumption, supporting them would be potentially friendlier.
To that end, over the weekend Jim and I hacked up a service using Apache's mod_rewrite and its ability to call out to a helper program to find the target URL. We wrote a program that takes the input from apache, hits zuul's REST API and returns the appropriate url. With that, given a url like:
http://site.926bb0aaddad4bc3853269451e115dcb.openstack.preview.opendev.org/
It can serve the content published on the fileserver, or in swift - basically allowing us to serve preview content as if it were at the root of a url without sacrificing the zuul pattern of jobs being the ones to tell zuul where they published the content.
It is a small program bundled with pre-configured Apache in a Docker container, so anyone who wants to run this at their site can simply run the published docker container as-is.
We think it would be appropriate for this service to be part of Zuul in general so that it's easily available to sites which want to run it, and so we wanted to see what people thought about Zuul adopting it.
https://github.com/emonty/upgraded-octo-giggle
We wrote it in C++, for two reasons:
1) It's a performance sensitive service, so C++ is helpful.
2) We've talked about potentially writing some of zuul's components in not-python in the future. We're not broadly ready for that yet, but we thought having a simple program in C++ (that's not likely to get any more complex) but that does things like REST and JSON interactions would give us a basis for a more informed discussion later.
So ... what do people think? Should the Zuul project adopt a small c++ service?
I think this is fine that it's in C++, IMO. If it really becomes burdensome for some unforeseen reason, it's also simple enough to rewrite in something else (but not java.... NEVER java).
So let's just go with it.
-Dave
Quoting David Shrewsbury (2019-02-11 10:58:05)
On Mon, Feb 11, 2019 at 1:21 PM Monty Taylor <mordred@inaugust.com> wrote:
So ... what do people think? Should the Zuul project adopt a small c++ service?
I think this is fine that it's in C++, IMO. If it really becomes burdensome for some unforeseen reason, it's also simple enough to rewrite in something else (but not java.... NEVER java).
You'll get more enthusiasm from me if it's in Rust, but C++ is fine, meh, w'ever.
Quoting Clint Byrum (2019-02-11 15:02:35)
Quoting David Shrewsbury (2019-02-11 10:58:05)
On Mon, Feb 11, 2019 at 1:21 PM Monty Taylor <mordred@inaugust.com> wrote:
So ... what do people think? Should the Zuul project adopt a small c++ service?
I think this is fine that it's in C++, IMO. If it really becomes burdensome for some unforeseen reason, it's also simple enough to rewrite in something else (but not java.... NEVER java).
You'll get more enthusiasm from me if it's in Rust, but C++ is fine, meh, w'ever.
BTW, I put my code where my mouth is:

https://github.com/SpamapS/upgraded-octo-giggle/blob/master/src/main.rs

I had to alter it a bit to support white-labeled Zuul API, but otherwise I think it works the same as the C++ version... and yay, a memory safe language with C++-like performance... in theory.

I was wondering if you could provide a reference test and docs of some kind for how these should work? I've fed it urls like you specified, and they seem to work with builds, but I'm not sure how the error handling is supposed to work, etc.
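(A minimal sketch of what the white-label difference means for the lookup, assuming the usual Zuul API layouts -- a white-labeled deployment serves a single tenant and drops the tenant segment from the API path. The function and names are illustrative, not code from either implementation.)

```python
# Illustrative only: the two URL shapes a lookup helper has to handle,
# assuming standard Zuul API routing.
def build_api_url(api_root, tenant, build_uuid, whitelabel=False):
    if whitelabel:
        # White-labeled API: a single tenant, so no tenant in the path.
        return "%s/api/build/%s" % (api_root, build_uuid)
    # Multi-tenant API: the tenant name appears in the path.
    return "%s/api/tenant/%s/build/%s" % (api_root, tenant, build_uuid)
```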
On Mon, Feb 11, 2019 at 22:52 Clint Byrum wrote:
Quoting Clint Byrum (2019-02-11 15:02:35)
Quoting David Shrewsbury (2019-02-11 10:58:05)
On Mon, Feb 11, 2019 at 1:21 PM Monty Taylor <mordred@inaugust.com> wrote:
So ... what do people think? Should the Zuul project adopt a small c++ service?
I think this is fine that it's in C++, IMO. If it really becomes burdensome for some unforeseen reason, it's also simple enough to rewrite in something else (but not java.... NEVER java).
You'll get more enthusiasm from me if it's in Rust, but C++ is fine, meh, w'ever.
BTW, I put my code where my mouth is:
https://github.com/SpamapS/upgraded-octo-giggle/blob/master/src/main.rs
I had to alter it a bit to support white-labeled Zuul API, but otherwise I think it works the same as the C++ version... and yay, a memory safe language with C++-like performance... in theory.
I was wondering if you could provide a reference test and docs of some kind for how these should work? I've fed it urls like you specified, and they seem to work with builds, but I'm not sure how the error handling is supposed to work, etc.
What is the performance issue if this rewrite map were written in Python? For benchmarking purposes, please find my implementation below.

-Tristan

```python
from functools import lru_cache
import requests

@lru_cache(maxsize=20)
def get(r):
    api, vhost = r.split()
    artifact, build, tenant, _ = vhost.split('.', 3)
    return requests.get("%s/api/tenant/%s/build/%s" % (
        api, tenant, build)).json()["log_url"]

while True:
    try:
        print(get(input()))
    except:
        print("error")
```
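(For anyone who wants to poke at it: the script above speaks the line-oriented protocol Apache's RewriteMap "prg" type expects -- one lookup request per line on stdin, one answer per line on stdout. A hypothetical driver, assuming the script is saved as octo.py:)

```python
# Hypothetical driver showing the one-line-in / one-line-out exchange.
# "-u" keeps the child's stdout unbuffered so replies arrive per line.
import subprocess

proc = subprocess.Popen(
    ["python3", "-u", "octo.py"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)

# One request: "<zuul api root> <preview hostname>". The reply is the
# resolved content URL, or "error" if the lookup fails.
proc.stdin.write(
    "https://zuul.example.org "
    "site.926bb0aaddad4bc3853269451e115dcb.openstack.preview.example.org\n")
proc.stdin.flush()
print(proc.stdout.readline().strip())

proc.terminate()
proc.wait()
```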
Quoting Tristan Cacqueray (2019-02-12 00:56:03)
On Mon, Feb 11, 2019 at 22:52 Clint Byrum wrote:
Quoting Clint Byrum (2019-02-11 15:02:35)
Quoting David Shrewsbury (2019-02-11 10:58:05)
On Mon, Feb 11, 2019 at 1:21 PM Monty Taylor <mordred@inaugust.com> wrote:
So ... what do people think? Should the Zuul project adopt a small c++ service?
I think this is fine that it's in C++, IMO. If it really becomes burdensome for some unforeseen reason, it's also simple enough to rewrite in something else (but not java.... NEVER java).
You'll get more enthusiasm from me if it's in Rust, but C++ is fine, meh, w'ever.
BTW, I put my code where my mouth is:
https://github.com/SpamapS/upgraded-octo-giggle/blob/master/src/main.rs
I had to alter it a bit to support white-labeled Zuul API, but otherwise I think it works the same as the C++ version... and yay, a memory safe language with C++-like performance... in theory.
I was wondering if you could provide a reference test and docs of some kind for how these should work? I've fed it urls like you specified, and they seem to work with builds, but I'm not sure how the error handling is supposed to work, etc.
What is the performance issue if this rewrite map were written in Python? For benchmarking purposes, please find my implementation below. -Tristan
```python
from functools import lru_cache
import requests

@lru_cache(maxsize=20)
def get(r):
    api, vhost = r.split()
    artifact, build, tenant, _ = vhost.split('.', 3)
    return requests.get("%s/api/tenant/%s/build/%s" % (
        api, tenant, build)).json()["log_url"]

while True:
    try:
        print(get(input()))
    except:
        print("error")
```
This code is approximately 10x slower in a single thread than the Rust or the C++, which are virtually the same (actually the Rust gets better the more LRU expirations it does). I added a requests Session object and got it to more like 7x slower, but the python still is much slower.

Very scientific tests on my laptop (you can find them in the linked github fork under tests). These are all typical of repeated runs:

Python:
$ /usr/bin/time python3 ../octo.py < api/manifest.txt > /dev/null
11.88user 0.41system 0:13.39elapsed 91%CPU (0avgtext+0avgdata 28592maxresident)k
0inputs+0outputs (0major+12340minor)pagefaults 0swaps

C++:
$ /usr/bin/time ../zuul-preview/zuul-preview < api/manifest.txt > /dev/null
1.33user 0.94system 0:01.96elapsed 115%CPU (0avgtext+0avgdata 9880maxresident)k
0inputs+0outputs (0major+895minor)pagefaults 0swaps

Rust:
$ /usr/bin/time ../target/release/upgraded-octo-giggle < api/manifest.txt > /dev/null
0.69user 0.32system 0:01.35elapsed 75%CPU (0avgtext+0avgdata 8516maxresident)k
0inputs+0outputs (0major+618minor)pagefaults 0swaps

This is doing 10,000 unique simulated builds with a local nginx server serving the build responses statically. The more builds, the more Rust wins, suggesting that the LRU expiration in C++ has a performance problem.

Anyway, is this code really performance sensitive enough to warrant a new language?
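(For context on the input format: each line of the manifest is one lookup request in the same "api-url hostname" form Tristan's script reads. A hypothetical generator for such a file -- not Clint's actual test harness, and the static build responses nginx serves are not shown:)

```python
# Hypothetical manifest generator: one "api-url hostname" lookup per line
# for 10,000 simulated builds. The matching static /api/tenant/.../build/...
# JSON files would be generated alongside and served by nginx.
import random
import uuid

API = "http://localhost:8000"  # assumed local nginx endpoint
TENANTS = ["openstack", "zuul", "example"]

with open("manifest.txt", "w") as f:
    for _ in range(10000):
        build = uuid.uuid4().hex
        tenant = random.choice(TENANTS)
        f.write("%s site.%s.%s.preview.example.org\n" % (API, build, tenant))
```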
On Tue, Feb 12, 2019 at 02:53 Clint Byrum wrote:
Quoting Tristan Cacqueray (2019-02-12 00:56:03)
On Mon, Feb 11, 2019 at 22:52 Clint Byrum wrote:
Quoting Clint Byrum (2019-02-11 15:02:35)
Quoting David Shrewsbury (2019-02-11 10:58:05)
On Mon, Feb 11, 2019 at 1:21 PM Monty Taylor <mordred@inaugust.com> wrote:
So ... what do people think? Should the Zuul project adopt a small c++ service?
I think this is fine that it's in C++, IMO. If it really becomes burdensome for some unforeseen reason, it's also simple enough to rewrite in something else (but not java.... NEVER java).
You'll get more enthusiasm from me if it's in Rust, but C++ is fine, meh, w'ever.
BTW, I put my code where my mouth is:
https://github.com/SpamapS/upgraded-octo-giggle/blob/master/src/main.rs
I had to alter it a bit to support white-labeled Zuul API, but otherwise I think it works the same as the C++ version... and yay, a memory safe language with C++-like performance... in theory.
I was wondering if you could provide a reference test and docs of some kind for how these should work? I've fed it urls like you specified, and they seem to work with builds, but I'm not sure how the error handling is supposed to work, etc.
What is the performance issue if this rewrite map were written in Python? For benchmarking purposes, please find my implementation below. -Tristan
```python
from functools import lru_cache
import requests

@lru_cache(maxsize=20)
def get(r):
    api, vhost = r.split()
    artifact, build, tenant, _ = vhost.split('.', 3)
    return requests.get("%s/api/tenant/%s/build/%s" % (
        api, tenant, build)).json()["log_url"]

while True:
    try:
        print(get(input()))
    except:
        print("error")
```
This code is approximately 10x slower in a single thread than the Rust or the C++, which are virtually the same (actually the Rust gets better the more LRU expirations it does). I added a requests Session object and got it to more like 7x slower, but the python still is much slower.
Very scientific tests on my laptop (you can find them in the linked github fork under tests). These are all typical of repeated runs:
Python:
$ /usr/bin/time python3 ../octo.py < api/manifest.txt > /dev/null
11.88user 0.41system 0:13.39elapsed 91%CPU (0avgtext+0avgdata 28592maxresident)k
0inputs+0outputs (0major+12340minor)pagefaults 0swaps

C++:
$ /usr/bin/time ../zuul-preview/zuul-preview < api/manifest.txt > /dev/null
1.33user 0.94system 0:01.96elapsed 115%CPU (0avgtext+0avgdata 9880maxresident)k
0inputs+0outputs (0major+895minor)pagefaults 0swaps

Rust:
$ /usr/bin/time ../target/release/upgraded-octo-giggle < api/manifest.txt > /dev/null
0.69user 0.32system 0:01.35elapsed 75%CPU (0avgtext+0avgdata 8516maxresident)k
0inputs+0outputs (0major+618minor)pagefaults 0swaps
This is doing 10,000 unique simulated builds with a local nginx server serving the build responses statically. The more builds, the more Rust wins, suggesting that the LRU expiration in C++ has a performance problem.
Note that a typical use-case would yield more cache hits since a visitor would query the endpoint multiple times for a single build. The gain from C++/Rust seems less than 1ms, which seems negligible compared to the logserver request round-trip that usually takes hundreds of ms.
Anyway, is this code really performance sensitive enough to warrant a new language?
There is also the memory cost of a python interpreter. I wouldn't mind Zuul embracing other languages, but this also has a maintenance cost.

-Tristan
Tristan Cacqueray <tdecacqu@redhat.com> writes:
Note that a typical use-case would yield more cache hits since a visitor would query the endpoint multiple times for a single build. The gain from C++/Rust seems less than 1ms, which seems negligible compared to the logserver request round-trip that usually takes hundreds of ms.
Agreed, the performance here only matters with cache hits. The first request for /index.html will be slow regardless, but the subsequent 100 requests for all the components of the page should be fast.
Anyway, is this code really performance sensitive enough to warrant a new language?
There is also the memory cost of a python interpreter. I wouldn't mind Zuul embracing other languages, but this also has a maintenance cost.
This thread has been great -- this is the real reason to write this in something other than Python: so that we can have this conversation about something both concrete and trivial. We've been able to solve an actual problem using some of the tools and techniques (rest/json) that we use elsewhere in Zuul using another language, but the problem is so simple that we can implement it 3 times and not have to care too much about which one we use.

Clint's port to Rust is very compelling. It's technically better (though I'm sure the faults in the C++ LRU cache are mine), but it also does a great job of illustrating the same things that the C++ version did in terms of tools and techniques. It's a great overture to C++ programmers to say "and here's what that looks like in Rust".

I'm not in a rush to rewrite Zuul. At the very least, I don't think we can really contemplate introducing any significant non-python code until after we've switched to ZK everywhere and have the distributed scheduler. But I would like to start preparing for that conversation if we decide to have it some day. And I think this is a great start.

At the moment, I believe that of those working on Zuul who have an interest in a language other than Python, there are more who are familiar with C++ than Rust. Therefore, I'd like to land the C++ version now, and update it to fix the whitelabel issue that Clint spotted.

After that, however, I'd like for Clint to propose his Rust version as a change. I'd like to leave it in review for a while so that those of us more familiar with C++ can download it and play with it for a bit. I'd like to start an etherpad[1] where we track questions we identify during this process. After a while, let's regroup and see how we feel about it.

Personally, I like quite a lot about Rust, but I last seriously looked at it when it was *very* new. Clint has shown me in his patch that now may be a good time to seriously consider it for this. My not-so-secret hope is that this process convinces me this is the better choice. :)

-Jim

[1] https://etherpad.openstack.org/p/zuul-other-languages
Quoting James E. Blair (2019-02-12 13:13:24)
Tristan Cacqueray <tdecacqu@redhat.com> writes:
Note that a typical use-case would yield more cache hits since a visitor would query the endpoint multiple times for a single build. The gain from C++/Rust seems less than 1ms, which seems negligible compared to the logserver request round-trip that usually takes hundreds of ms.
Agreed, the performance here only matters with cache hits. The first request for /index.html will be slow regardless, but the subsequent 100 requests for all the components of the page should be fast.
Great point, I went ahead and performed the test again, with each URL repeated 10 times.

Rust showed basically no change in clock time to do 100,000 cached vs. 10,000 uncached, though it did use more CPU, which is exactly the desired trade-off.

$ /usr/bin/time ../target/release/upgraded-octo-giggle < api/manifest.txt > /dev/null
0.82user 0.44system 0:01.65elapsed 76%CPU (0avgtext+0avgdata 8372maxresident)k
0inputs+0outputs (0major+617minor)pagefaults 0swaps

$ sort -u api/manifest.txt | /usr/bin/time ../target/release/upgraded-octo-giggle > /dev/null
0.79user 0.27system 0:01.93elapsed 55%CPU (0avgtext+0avgdata 8320maxresident)k
0inputs+0outputs (0major+617minor)pagefaults 0swaps

Python runs showed a lot of variability, sometimes being identical with/without cache hits, and sometimes taking 10% longer. I have to wonder if we're hitting gc in the LRU expirations.

$ /usr/bin/time python3 ../octo.py < api/manifest.txt > /dev/null
12.56user 0.59system 0:14.25elapsed 92%CPU (0avgtext+0avgdata 28628maxresident)k
0inputs+0outputs (0major+12347minor)pagefaults 0swaps

$ sort -u api/manifest.txt | /usr/bin/time python3 ../octo.py > /dev/null
12.50user 0.43system 0:14.37elapsed 90%CPU (0avgtext+0avgdata 28308maxresident)k
0inputs+0outputs (0major+12351minor)pagefaults 0swaps

Sadly, C++ could not complete the challenge, as it had a segfault. (Is it irony when a memory unsafe implementation fails against a memory safe implementation due to segfault, or is that just defeat?) I did not look too deep to find a solution, but here is the backtrace:

#0  0x00007ffff6f72dfa in std::__detail::_List_node_base::_M_unhook() () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#1  0x0000555555560b20 in std::__cxx11::list<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const>, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const> > >::_M_erase (__position={first = "artifact.2f641b63d938416fa01a2323a45a6af7.tenant", second = "\000\361xUUU\000\000\300\333\377\377\377\177\000\000\260\221zUUU\000\000\060\000\000\000\000\000\000\000\060\000\000\000\000\000\000\000bcdefghi\340\036\000\314\377\177\000\000L\000\000\000\000\000\000\000L\000\000\000\000\000\000\000eef/"}, this=0x7fffffffdbc0) at /usr/include/c++/7/bits/stl_list.h:1812
#2  std::__cxx11::list<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const>, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const> > >::erase (__position=..., this=0x7fffffffdbc0) at /usr/include/c++/7/bits/list.tcc:157
#3  Cache::get (key="artifact.2f641b63d938416fa01a2323a45a6af7.tenant", this=0x7fffffffdbc0) at zuul-preview/main.cc:72
#4  main () at zuul-preview/main.cc:138
Anyway, is this code really performance sensitive enough to warrant a new language?
There is also the memory cost of a python interpreter. I wouldn't mind Zuul embracing other languages, but this also has a maintenance cost.
This thread has been great -- this is the real reason to write this in something other than Python: so that we can have this conversation about something both concrete and trivial. We've been able to solve an actual problem using some of the tools and techniques (rest/json) that we use elsewhere in Zuul using another language, but the problem is so simple that we can implement it 3 times and not have to care too much about which one we use.
Indeed, it's been a lot of fun to work on too. :)
Clint's port to Rust is very compelling. It's technically better (though I'm sure the faults in the C++ LRU cache are mine), but it also does a great job of illustrating the same things that the C++ version did in terms of tools and techniques. It's a great overture to C++ programmers to say "and here's what that looks like in Rust".
I was surprised, I had put Rust down about 6 months ago, and came back to this learning the new things in the 2018 edition. The ergonomics have gotten much, much better. Note that async, network code is still a bit wonky, which we didn't really touch here.
I'm not in a rush to rewrite Zuul. At the very least, I don't think we can really contemplate introducing any significant non-python code until after we've switched to ZK everywhere and have the distributed scheduler. But I would like to start preparing for that conversation if we decide to have it some day. And I think this is a great start.
Me too.
At the moment, I believe that of those working on Zuul who have an interest in a language other than Python, there are more who are familiar with C++ than Rust. Therefore, I'd like to land the C++ version now, and update it to fix the whitelabel issue that Clint spotted.
+1, should be rather simple. :)
After that, however, I'd like for Clint to propose his Rust version as a change. I'd like to leave it in review for a while so that those of us more familiar with C++ can download it and play with it for a bit. I'd like to start an etherpad[1] where we track questions we identify during this process. After a while, let's regroup and see how we feel about it.
Personally, I like quite a lot about Rust, but I last seriously looked at it when it was *very* new. Clint has shown me in his patch that now may be a good time to seriously consider it for this. My not-so-secret hope is that this process convinces me this is the better choice. :)
I have a weird sense of joy when working with Rust, so I'd encourage anybody who is curious to lean on me (We have some noobs, SpamapS on IRC). I look forward to submitting the Rust port as a change. I wonder if the segfault detected above might accelerate using it? :)
Sounds like y'all had a lot of fun with this experiment :)

I like this proxy idea and am open to adding new languages to zuul. In fact we already have more than python when we look at zuul-web. I've never used rust so far. Maybe now is the time to try it out... However I also think that C++ is more widespread and I like it too.

Btw, I think we should remove this non-standard '#include <bits/stdc++.h>' laziness and use the correct standard headers like vector, iostream, etc... Also, as this already uses boost, it seems that there is already a lru cache included: https://www.boost.org/doc/libs/1_69_0/boost/compute/detail/lru_cache.hpp

-Tobias
<Tobias.Henkel@bmw.de> writes:
On 13.02.19, 01:03, "Clint Byrum" <clint@fewbar.com> wrote:
Sadly, C++ could not complete the challenge, as it had a segfault. (Is it irony when a memory unsafe implementation fails against a memory safe implementation due to segfault, or is that just defeat?). I did not look too deep to find a solution, but here is the backtrace:
Alanis says it's ironic.
Sounds like y'all had a lot of fun with this experiment :)

I like this proxy idea and am open to adding new languages to zuul. In fact we already have more than python when we look at zuul-web. I've never used rust so far. Maybe now is the time to try it out... However I also think that C++ is more widespread and I like it too.

Btw, I think we should remove this non-standard '#include <bits/stdc++.h>' laziness and use the correct standard headers like vector, iostream, etc... Also, as this already uses boost, it seems that there is already a lru cache included: https://www.boost.org/doc/libs/1_69_0/boost/compute/detail/lru_cache.hpp
Yes, we should probably use that LRU cache. :)

Okay, I'm going to go ahead and import the C++ version and see where we go from there.

-Jim
Tried implementing it in Go just for the fun of it: https://github.com/westphahl/upgraded-octo-giggle/tree/golang

Simon
Here is a change to add this to Gerrit, which we can also treat as a review of whether to adopt it into the Zuul project.

-Jim
participants (8)
- Clint Byrum
- corvus@inaugust.com
- David Shrewsbury
- Matthieu Huin
- Monty Taylor
- Simon.Westphahl@bmw.de
- Tobias.Henkel@bmw.de
- Tristan Cacqueray