What operational data and data streams should be gathered?
TL;DR: As we think about making operational data from the Mass Open Cloud <https://massopen.cloud> available to the open source community, we need to define what data and data streams will be useful and gathered. This email is meant to begin a discussion and define that.

TS;WM: Some context - the Mass Open Cloud (MOC) is a research cloud formed as a public-private partnership between five research universities, a group of partner companies, and the state of Massachusetts. The MOC has always had a goal of sharing operational data about the MOC to enable research on cloud computing technologies.

As teams working on OI Labs projects began discussing operational data, one question was, “can the MOC share that data with the open source community?” To a great extent this becomes a “Research Compliance” question and may be affected by the various grants which have funded the MOC (good news - in almost every grant the PIs have made clear that data sharing is a goal).

As we looked into making such information available, I had a brief discussion with the director of BU’s Research Computing group, who helped frame how we should approach this question. The first steps are:

1. Define what data and data streams will be useful and gathered.
2. Based on that, identify which populations may be affected.
3. Decide on next steps based on the answers to items 1 and 2 above.
I would start with monitoring data, i.e. all Prometheus metrics being collected. But those by themselves are not really useful without a picture of the infrastructure and access to the infrastructure. And you might need access to ticket systems, since you want to correlate the metrics with incidents. My point is, we need to open up _all_ aspects of operations to make it actually useful, otherwise it'll be just a pile of data. I was hoping that the newly built-out NERC could be the blueprint for this.
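To make that concrete, here is a rough sketch of what exporting a shareable metrics snapshot could look like - purely illustrative, using the standard Prometheus HTTP API; the server address and output file are placeholders, not anything the MOC actually runs:

    # Sketch: dump the latest sample(s) of every metric a Prometheus server
    # knows about into a JSON file that could be published with other data.
    import json
    import requests

    PROM = "http://prometheus.example.org:9090"  # placeholder address

    # List all metric names, then query the current value of each one.
    names = requests.get(f"{PROM}/api/v1/label/__name__/values", timeout=30).json()["data"]
    snapshot = {}
    for name in names:
        resp = requests.get(f"{PROM}/api/v1/query", params={"query": name}, timeout=30)
        snapshot[name] = resp.json()["data"]["result"]

    with open("metrics-snapshot.json", "w") as out:
        json.dump(snapshot, out, indent=2)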
On 2020-04-16 12:52:28 +0200 (+0200), Marcel Hild wrote:
> I would start with monitoring data, i.e. all Prometheus metrics being collected.
I agree that's a good route to take initially. Using OpenDev as an example, while obviously not the same systems and not necessarily constrained by the same risks, we try to keep all non-sensitive monitoring and trending data public, like:

http://cacti.openstack.org/cacti/graph.php?action=view&local_graph_id=25&rra_id=all
http://grafana.openstack.org/d/T6vSHcSik/zuul-status?orgId=1
> But those by themselves are not really useful without a picture of the infrastructure and access to the infrastructure.
One thing we've done in that vein is to try to keep all our deployment and systems management notes (runbooks or whatever you like to term them) on a public site:

https://docs.opendev.org/opendev/system-config/

We also try to publish logs for some services of interest if we can be certain they won't leak things like PII or credentials:

https://nb01.openstack.org/ (sorry, self-signed cert on that one)
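A minimal sketch of the sort of pre-publication scrubbing that implies - the patterns here are invented examples and would need tuning to real log formats:

    # Sketch: redact obvious credentials, e-mail addresses and IP addresses
    # from log lines before they are published. Patterns are examples only.
    import re
    import sys

    PATTERNS = [
        (re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}"), "<email>"),
        (re.compile(r"(?i)(password|token|secret)\s*[=:]\s*\S+"), r"\1=<redacted>"),
        (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "<ip>"),
    ]

    def scrub(line):
        for pattern, replacement in PATTERNS:
            line = pattern.sub(replacement, line)
        return line

    for line in sys.stdin:
        sys.stdout.write(scrub(line))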
> And you might need access to ticket systems, since you want to correlate the metrics with incidents.
Absolutely, though relying on a public incident/defect tracker does also mean you need to train your users to leverage privacy features for anything which may contain sensitive data, or to avoid putting such information in tickets and instead forwarding it via more confidential channels. It also means getting better about redacting certain classes of data either automatically or on request (and then dealing with fallout for the latter, treating that material as though it has leaked even if you've masked or deleted it after the fact to limit the damage).
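On the correlation point itself, a rough sketch of pulling a metric window around known incident times via the Prometheus HTTP API - the server address, metric name, and incident timestamp below are placeholders:

    # Sketch: for each incident time (which would come from the ticket
    # system), fetch a 30-minute metric window on either side so that the
    # metrics and the incident can be examined together.
    import datetime
    import requests

    PROM = "http://prometheus.example.org:9090"          # placeholder
    METRIC = "node_load5"                                 # example metric
    incidents = [datetime.datetime(2020, 4, 16, 9, 30)]   # placeholder times

    for when in incidents:
        start = (when - datetime.timedelta(minutes=30)).timestamp()
        end = (when + datetime.timedelta(minutes=30)).timestamp()
        resp = requests.get(f"{PROM}/api/v1/query_range",
                            params={"query": METRIC, "start": start,
                                    "end": end, "step": "60s"},
                            timeout=30)
        for series in resp.json()["data"]["result"]:
            print(when.isoformat(), series["metric"], len(series["values"]), "samples")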
> My point is, we need to open up _all_ aspects of operations to make it actually useful, otherwise it'll be just a pile of data. [...]
For more general day-to-day operations, as well as scheduled maintenance or similar change activity, we've found it's useful to keep discussion on publicly-archived mailing lists and in publicly logged IRC channels so that they're easy to refer back to later:

http://lists.opendev.org/pipermail/service-discuss/2020-April/thread.html
http://eavesdrop.openstack.org/irclogs/%23opendev/%23opendev.2020-04-30.log....
http://eavesdrop.openstack.org/meetings/opendev_maint/2020/opendev_maint.202...

We're also using an IRC bot which our sysadmins can command to log important status information to a Web page:

https://wiki.openstack.org/wiki/Infrastructure_Status

That allows us to easily publish notes about what we're doing which might have public impact, and we also tie it into a notification system which can echo messages in subscribed IRC channels or temporarily update their channel topics to reflect important service status information.

Another major choice we've made is to perform installation and life-cycle management of a lot of our services through continuous deployment jobs in a public-facing CI/CD system:

https://zuul.opendev.org/t/openstack/builds?pipeline=deploy

It does require careful thought, however, to make sure you're not exposing anything which could compromise the integrity or security of those systems.

--
Jeremy Stanley
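A rough sketch of the status-logging idea above, for illustration only - this is not the actual bot, and the page path, nick, and entry format are made up:

    # Sketch: append a timestamped status entry to a simple page, newest
    # first. Purely illustrative; the real bot is driven from IRC, and the
    # file path and entry format here are invented for this example.
    import datetime

    STATUS_PAGE = "status.html"  # placeholder for wherever the page lives

    def log_status(nick, message):
        stamp = datetime.datetime.utcnow().strftime("%Y-%m-%d %H:%M UTC")
        entry = f"<li>{stamp} &lt;{nick}&gt; {message}</li>\n"
        try:
            with open(STATUS_PAGE) as page:
                existing = page.read()
        except FileNotFoundError:
            existing = ""
        with open(STATUS_PAGE, "w") as page:
            page.write(entry + existing)

    log_status("someadmin", "Gerrit maintenance starting, expect brief outages")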
Wow, that's some good open data. Is there a single entry page from which I could've discovered that myself? And is that all related to a single OpenStack cluster that's being run by the community, or is it build-related infrastructure?
On 2020-04-30 18:19:22 +0200 (+0200), Marcel Hild wrote:
> Wow, that's some good open data.
> Is there a single entry page from which I could've discovered that myself?
It's not all aggregated in one place yet, unfortunately, though those sites tend to be mentioned in various places in the sysadmin docs we publish. Eventually we hope to have a lot more of it directly discoverable from the main https://opendev.org/ page.
> And is that all related to a single OpenStack cluster that's being run by the community, or is it build-related infrastructure? [...]
We're running the collaboration services used by OpenStack and many other communities (including a majority of those represented by the OSF), so not actually a deployment of OpenStack software. This is why my earlier E-mail said it wasn't a perfect example: it's different software, with different associated risks.

--
Jeremy Stanley
participants (3)
- Jeremy Stanley
- Marcel Hild
- Michael Daitzman