Afterlife of the PTG edge discussions
Hi,

Thanks Adrien for facilitating the workshop. It was a very interesting and diverse discussion. I can help in organizing the notes after the PTG workshop.

We used 3 etherpads:
* PTG schedule: https://etherpad.openstack.org/p/edge-ptg-dublin
* Gap analysis: https://etherpad.openstack.org/p/edge-gap-analysis
* Alan's problems: https://etherpad.openstack.org/p/edge-alans-problems

In the PTG schedule and the gap analysis etherpads we have high-level discussions, mostly about what we would like to do, while in Alan's problems we have detailed notes on how we would like to solve some of the problems. Both of these are valuable, and we should somehow end up with the following things:
* a description of what we would like to achieve, maybe in some format other than a list of etherpads
* a list of concrete requirements to specific projects (existing or new)
* maybe some prototypes based on the Tuesday afternoon discussions on keystone data and image distribution (by the way, can someone post the picture of THE PLAN?)

Any opinions?

Br,
Gerg0
Hi Gerg0,

> [...]
> I can help in organizing the notes after the PTG workshop.

Thanks,

> [...]
> I think both of these are valuable information and we should somehow have the following things:
> * a description about what we would like to achieve. Maybe in some other format than a list of etherpads.

Definitely ;-) First, I think Monday's list should be completed with the additional information that was identified while going through Alan's problems, e.g.:
- being able to define roles per region/site (one user may have specific rights on one site while not being allowed to use specific services on another one)
- ...

Moreover, we should probably redistribute the different requirements according to the perspective of an end-user/developer vs. an administrator (right now most of the admin operations belong to Level 5; I am not sure this is the best way to classify them, especially if the goal is to class each operation according to its complexity level).

> * a list of concrete requirements to specific projects (being existing or something new)

Yes, this may require a bit of work from our side, but I think you're right: it is definitely the right way to go.

> * maybe some prototypes based on the Tuesday afternoon discussions for the keystone data and image distribution (by the way can someone post the picture of THE PLAN?)

I think Ildiko took a photo of Jonathan's diagram.

From my side, to be honest, there are still some points that are unclear (or at least I need more time to be convinced before starting prototyping :-P). In particular:

- Using this new NKB means that each edge site will be passive, in the sense that you cannot create a new role/a new VM image/a new ... without going through the NKB service? What will happen if someone wants to create a new VM image from the edge but the NKB is not reachable? Don't get me wrong, I'm convinced that being non-intrusive is good (so developing a mechanism that enables collaboration between independent edge sites by leveraging only APIs makes sense). My doubts are about how such mechanisms can be implemented while mitigating as much as possible (i) communications between edge sites and this service and (ii) possible inconsistency issues.

- At the end, we also briefly mentioned that the NKB will ensure some guarantees (i.e. act like an autonomous checker in charge of maintaining the presence/availability of specific images across the sites). This also makes sense, but it looks like an advanced self-healing feature. OK, why not? If we can do it, great. However, I'm a bit worried about what will end up inside this NKB service (and all golden* services).

- What the difference is between such a new NKB and the Tricircle proposal (before splitting) is also something I think it would be valuable to understand better.

- I unfortunately didn't attend the nova session this morning, but I get the feeling that we can also learn a lot from the questions they have been discussing for more than two years around the cells v2 architecture (see for instance [2], line 37, the discussion regarding one glance for all cells vs. one glance per cell vs. ...).

Moving forward on the question of being able to benefit from locality aspects, which is also an advanced feature from my viewpoint (i.e. instead of pulling my image from the golden glance, I would like to get it from my neighbourhood).
I'm wondering how this works nowadays with Glance on a large cluster (are Glance/Nova rack-aware? If an image is already available on one compute node and you want to provision a new VM with the same image on a server in the same rack, does Glance/Nova pull the image from the official URI or from that compute node, in order to mitigate the overhead on the network?).
- ...

Maybe we can leverage the edge sessions organised by the foundation and the FEMDC IRC meetings to dig a bit more into this architecture and to better understand what it also means for Nova and Neutron (the discussion focused on keystone and glance yesterday). This can lead to a possible reference architecture/blueprint with well-identified requirements for the different projects. We can set up an etherpad and try to prepare the different questions we may all have.

Last but not least, I really appreciated the discussion with Alan yesterday (thanks BTW ;)). It enabled us to highlight/discuss a few problems that AT&T has to deal with. I'm wondering whether we could have such an exchange with some other folks. I know OVH, for instance, also operate several regions. I think it would be valuable to understand how they deal with all the administrator challenges Alan pointed out yesterday and whether they are also facing other problems. Maybe the foundation can help us organise such an exchange. Jonathan?

My two cents,
Adrien

PS: thanks for starting the discussion!
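To make the locality preference above concrete, here is a minimal sketch of the kind of source-selection logic being discussed: prefer a copy already cached in the local site/rack, then a neighbouring edge site, and only then fall back to the central "golden" Glance. The helper, the catalogues of cached copies and the site names are purely illustrative assumptions; this is not an existing Glance or Nova API.

# Hypothetical sketch only: local copy -> neighbouring edge site -> central Glance.
# None of these names correspond to real Glance/Nova interfaces.

def choose_image_source(image_id, local_copies, neighbour_copies,
                        central_glance_url):
    """Return the URL to pull an image from, preferring the closest copy."""
    if image_id in local_copies:
        # Cheapest: a copy already exists in this site/rack.
        return local_copies[image_id]
    if image_id in neighbour_copies:
        # Next best: pull from a neighbouring edge site to limit WAN traffic.
        return neighbour_copies[image_id]
    # Fall back to the central/golden Glance endpoint.
    return "%s/v2/images/%s/file" % (central_glance_url, image_id)

if __name__ == "__main__":
    local = {}
    neighbours = {"cirros-0.4.0": "http://edge-site-2.example/images/cirros-0.4.0"}
    print(choose_image_source("cirros-0.4.0", local, neighbours,
                              "http://central.example:9292"))
    # -> http://edge-site-2.example/images/cirros-0.4.0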
[..]
- What the difference is between such a new NKB and the Tricircle proposal (before splitting) [1] is also something I think it would be valuable to understand better.

[1] https://wiki.openstack.org/wiki/Tricircle_before_splitting

- I unfortunately didn't attend the nova session this morning, but I get the feeling that we can also learn a lot from the questions they have been discussing for more than two years around the cells v2 architecture (see for instance [2], line 37, the discussion regarding one glance for all cells vs. one glance per cell vs. ...).

[2] https://etherpad.openstack.org/p/nova-ptg-rocky

[...]

sorry
Ad_ri3n_
tl;dr from my iPhone, but the short answer I would give is: Tricircle makes assumptions about hierarchy that don't seem to fit our view of edge solutions. We did discuss surveying the options and, if feasible, approaching Tricircle about extracting some pieces into a separate library/repo/project to serve both use cases. I think that theoretical piece is what we called the NKB. The prerequisite to any concrete statement is the technical survey.

/ Chris
Yeah, those were some interesting discussions. I thought we made the most progress on Tuesday morning, where we worked through "Alan's Problems" and were able to put down some clear requirements and even some stakes in the ground wrt the beginnings of a strawman architecture.

My notes and view of those initial requirements and the initial strawman architecture that we discussed are below (... feel free to throw harpoons at my interpretation ...).

First, I'd like to suggest that we do the initial work on this New King Bird with the goal of NOT changing ANY existing service (like keystone, or glance, or ...). I think it would allow us to move forward more quickly ... and it is not that restrictive wrt being able to meet the requirements below.

* multi-region solution
  * i.e. central region and many edge regions
  * where region == geographically distinct edge site
  * for larger solutions, this would extend to multiple levels, e.g. central region --> edge regions --> further-out edge regions (sites) ...
  * scaling up to 1,000s of regions/sites eventually
  * edge region/site can be any size, from a small 1-node system to a 100s-of-nodes system
* a new service runs on the central region for configuring / synchronizing / querying the sync status of system-wide data across all edge regions
  * referred to in the meeting as the "New King Bird" (NKB)
  * supports a REST API for configuring / querying the sync status of system-wide data across all regions
  * where system-wide data includes:
    * users, tenants, VM images (glance) ... as discussed in the meeting
    * ... and, although we never discussed it, I would also throw in:
      * nova flavors (& extra specs), nova key pairs, neutron security groups, and
      * nova and neutron quotas (although that is more complex than simple synchronization if you want quota management across the system / multiple edge regions)
  * will be able to specify a sync-policy of 'to ALL edge regions' or 'to a specific subset of edge regions', for each item of system-wide data
  * the synchronization process will retry continuously on failure
  * the synchronization process will automatically sync data with any newly created/joined edge region
  * the user will be able to query the sync state of system-wide data on all/individual edge regions
* the ABC service in the central region will hold the system-wide ABC data to be synchronized
  * e.g. keystone, glance, nova, neutron, ...
* the New King Bird service will hold the metadata wrt sync-policy and sync-state for the system-wide data
* for a large multi-level region hierarchy, the New King Bird service would also run on selected edge clouds that would then sync data to further-out edge regions/sites

And finally, I have to throw out the first new-name suggestion for the "new king bird" project: how about Tyrannus (i.e. a genus of large insect-eating birds, commonly known as kingbirds)?

Let me know what you think,
Greg.
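Purely as an illustration of the sync-policy / sync-state metadata sketched above, here is a minimal, hypothetical Python sketch (per data item: a target-region policy plus per-region sync state, with retry candidates derivable from the state). The class names, fields and region names are assumptions for illustration only; this is not an existing OpenStack API or project.

# Illustrative-only sketch of the NKB sync metadata described above.
import enum
from dataclasses import dataclass, field
from typing import Dict, List, Optional

class SyncScope(enum.Enum):
    ALL_EDGE_REGIONS = "all"      # sync this item to every edge region
    REGION_SUBSET = "subset"      # sync only to an explicit list of regions

class SyncState(enum.Enum):
    PENDING = "pending"
    IN_SYNC = "in-sync"
    FAILED = "failed"             # would be retried continuously

@dataclass
class SyncPolicy:
    """Which edge regions a given item of system-wide data should reach."""
    scope: SyncScope
    regions: Optional[List[str]] = None   # only used when scope == REGION_SUBSET

    def target_regions(self, all_edge_regions: List[str]) -> List[str]:
        if self.scope is SyncScope.ALL_EDGE_REGIONS:
            return list(all_edge_regions)
        return list(self.regions or [])

@dataclass
class SyncRecord:
    """Per-item metadata the NKB would keep: policy plus per-region state."""
    resource_type: str                     # e.g. "keystone/user", "glance/image"
    resource_id: str
    policy: SyncPolicy
    per_region_state: Dict[str, SyncState] = field(default_factory=dict)

    def mark(self, region: str, state: SyncState) -> None:
        self.per_region_state[region] = state

    def regions_needing_retry(self) -> List[str]:
        return [r for r, s in self.per_region_state.items()
                if s is not SyncState.IN_SYNC]

# Example: an image that should reach only two named edge regions.
record = SyncRecord(
    resource_type="glance/image",
    resource_id="ubuntu-16.04",
    policy=SyncPolicy(SyncScope.REGION_SUBSET, regions=["edge-1", "edge-7"]),
)
for region in record.policy.target_regions(["edge-1", "edge-2", "edge-7"]):
    record.mark(region, SyncState.PENDING)
print(record.regions_needing_retry())   # -> ['edge-1', 'edge-7']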
lol, I like it. If we go with that name I'm using "rex" as an abbreviation. ;)

/ Chris
Hi,

@Adrien: Is it okay if I create a subpage of https://wiki.openstack.org/wiki/Fog_Edge_Massively_Distributed_Clouds so we can document the results stored in all of these etherpads?

By default I will use "OpenStack Edge Discussions" as a title, but I'm open to any suggestions.

If there are no objections I will make the first steps at 16h Dublin time.

Br,
Gerg0
Please Gerg0, do it; maybe add a Dublin_PTG suffix in the name.

Thanks,
Adrien

PS: as I mentioned on Tuesday, I will be offline next week (so feel free to move forward, we are an open community ;))
Hi,

Okay, thanks. I will add Dublin to the page name, but I decided to do it next week, when I can no longer drink Guinness.

Br,
Gerg0
Hi everyone,

As a reminder, we have a call scheduled for tomorrow at 0700 PDT / 0900 CDT / 1400 UTC. I would like to get into a discussion on how to begin implementing a PoC for the new sync service project we started discussing in Dublin. I tried to consolidate the notes from various places starting on line 91 of https://etherpad.openstack.org/p/edge-alans-problems

If you are interested in helping to kick off development of this project, please join us tomorrow morning.

Meeting link: https://zoom.us/j/777719876

Thanks,
Jonathan
Hi,

As I promised, I started to put the conclusions of the Dublin discussions into the wiki page: https://wiki.openstack.org/wiki/OpenStack_Edge_Discussions_Dublin_PTG

I'm far from being ready, but if you have any comments please do not hesitate to edit the pages or send your comments to me. The main aim of this exercise is to collect the results of our discussions in a space that is less volatile and a bit more organized than an increasing number of etherpads.

Br,
Gerg0
Some initial comments:

* Features and requirements
  * Initially it talks about "... feature/requirement discussions on two levels ...":
    * top->down feature-oriented discussion, and
    * bottom->up discussion of missing requirements in existing mechanisms.
  * However, I only see notes on the top->down feature-oriented discussion.
  * Features
    * Base assumptions for the features
      * Why do we have the statement that the "edge site is composed of at least 5 servers"? Edge sites should be scalable from a single all-in-one server to at least 10s of servers (maybe more).
    * Feature levels
      * Maybe it's just me ... but I am not familiar with the use of the term "requirement levels" for increasing complexity of requirements. I think this also caused some confusion at the Dublin PTG meetings.
      * I would suggest a prioritized list of requirements, where the top N requirements on that list would represent the definition of the MANDATORY / Minimum Viable Solution (MVS) set of requirements.
    * General comments on Level 1 thru Level 7
      * Generally I think there is too much focus on the "operational" requirements, e.g. creating VMs, creating networks, migrating VMs, etc.
      * I think the high-priority/MVS requirements should be more focused on the "administrative" requirements, e.g. managing various admin-type configuration across all the edge clouds, like users, projects, images, flavors, security groups, quotas, etc. I think this administrative work alone would be a good, significant first step.

Greg.
Hi,

Thanks for the comments. Going inline with the reactions.

From: Waines, Greg [mailto:Greg.Waines@windriver.com]
Sent: Wednesday, March 28, 2018 12:01 PM

> * However, I only see notes on the top->down feature-oriented discussion.

[G0]: Yes, the requirements are still to be added.

> * Why do we have the statement that the "edge site is composed of at least 5 servers"? Edge sites should be scalable from a single all-in-one server to at least 10s of servers (maybe more).

[G0]: This statement is from line 5 of https://etherpad.openstack.org/p/edge-gap-analysis. I tend to agree with you on the lower number, i.e. that an edge site can be a single machine. This is one thing I learned from Beth yesterday. I will correct it to one. If there are objections from others we can discuss.

> * I would suggest a prioritized list of requirements, where the top N requirements on that list would represent the definition of the MANDATORY / Minimum Viable Solution (MVS) set of requirements.

[G0]: Okay, I will reorganize the list based on this. I will look for your comments once I'm ready.

> * I think the high-priority/MVS requirements should be more focused on the "administrative" requirements ... I think this administrative work alone would be a good, significant first step.

[G0]: I think both of them are important and we should focus on both. The features describe the high-level target, while the requirements will describe the concrete issues (like managing the admin config). I agree, however, that solving the synchronization of administrative data would be an impressive achievement.

Br,
Gerg0
Br, Gerg0
-----Original Message-----
From: Jonathan Bryce [mailto:jonathan@openstack.org] Sent: Monday, March 19, 2018 7:20 PM To: edge-computing@lists.openstack.org Subject: Re: [Edge-computing] Afterlife of the PTG edge discussions
Hi everyone,
As a reminder, we have a call scheduled for tomorrow at 0700PDT/0900CDT/1400UTC. I would like to get into a discussion on how to begin implementing a POC for the new sync service project we started discussing in Dublin. I tried to consolidate the notes that had happened in various places starting on line 91 of https://etherpad.openstack.org/p/edge-alans-problems
If you are interested in helping to kick off development of this project, please join us tomorrow morning: Meeting Link: https://zoom.us/j/777719876
Thanks, Jonathan
On Mar 2, 2018, at 6:59 AM, Christopher Price <christopher.price@est.tech> wrote: lol, I like it. If we go with that name I'm using "rex" as an abbreviation. ;)
From: Waines, Greg <Greg.Waines@windriver.com> Sent: Friday, March 2, 2018 11:50:06 AM To: Csatari, Gergely (Nokia - HU/Budapest); lebre.adrien@free.fr; edge-computing@lists.openstack.org Subject: Re: [Edge-computing] Afterlife of the PTG edge discussions
Yeah, those were some interesting discussions. I thought we made the most progress on Tuesday morning where we worked through "Alan's Problems", and were able to put down some clear requirements and even some stakes in the ground wrt the beginnings of a strawman architecture.
My notes and view of those initial requirements and initial strawman architecture that we discussed are below: (... feel free to throw harpoons at my interpretation ...)
First I'd like to suggest that we do the initial work on this New King Bird with the "goal" of NOT changing ANY existing service (like keystone, or glance, or ...). I think it would allow us to move forward more quickly ... and is not that restrictive wrt being able to meet the requirements below.
• multi-region solution
• i.e. central region and many edge regions
• where region == geographical edge site
• For larger solutions, this would extend to multiple levels, e.g. central region -> edge region -> further-out-edge regions (sites) ...
• scaling up to 1,000s of regions/sites eventually
• edge region/site can be any size, from a small 1-node system to a 100s-of-nodes system
• a new service runs on the central region for configuring/synchronizing/querying-sync-status-of system-wide data across all edge regions
• referred to in the meeting as the "New King Bird" (NKB),
• supports a REST API for configuring/querying the sync status of system-wide data across all regions
• where system-wide data includes:
• users, tenants, VM images (glance) ... as discussed in the meeting
• ... and, we never discussed it but I would also throw in
• nova flavors (& extra specs), nova key pairs, neutron security groups, and
• nova and neutron quotas (although that is more complex than simple synchronization if you want quota management across the system/multiple edge regions)
• will be able to specify a sync-policy of 'to ALL edge regions' or 'to a specific subset of edge regions', for each item of system-wide data,
• synchronization process will retry continuously on failure,
• synchronization process will automatically sync data with any newly created/joined edge region,
• user will be able to query the sync state of system-wide data on all/individual edge regions,
• ABC Service in the central region will hold the system-wide ABC data to be synchronized
• e.g. keystone, glance, nova, neutron, ...
• New King Bird Service will hold the metadata wrt sync-policy and sync-state for system-wide data
• For a large multi-level region hierarchy, the New King Bird service would also run on selected edge clouds that would then sync data to further-out-edge regions/sites.
And finally, I have to throw out the first new name suggestion for the "new king bird" project: how about Tyrannus (i.e. a genus of large insect-eating birds, commonly known as kingbirds)?
let me know what you think, Greg.
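(To make the sync-policy idea in the strawman above more concrete, here is a minimal, illustrative Python sketch of the kind of metadata the NKB service could keep per item of system-wide data. All names are invented for illustration and do not reflect an agreed API or implementation.)

from dataclasses import dataclass, field
from enum import Enum
from typing import Dict, List, Optional


class SyncState(Enum):
    PENDING = "pending"
    IN_SYNC = "in-sync"
    FAILED = "failed"          # failed pushes are retried on the next pass


@dataclass
class SyncPolicy:
    resource_type: str                          # e.g. "image", "user", "flavor"
    resource_id: str
    target_regions: Optional[List[str]] = None  # None means "ALL edge regions"
    state: Dict[str, SyncState] = field(default_factory=dict)

    def regions_to_sync(self, known_regions: List[str]) -> List[str]:
        """Regions this item still has to be pushed to; a newly joined edge
        region simply shows up in known_regions and is picked up here."""
        targets = known_regions if self.target_regions is None else self.target_regions
        return [r for r in targets if self.state.get(r) != SyncState.IN_SYNC]

(Querying the sync state per region, as in the bullets above, would then just be a read of this metadata.)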
Hi, see my comments inline.
On 29 Mar 2018, at 19:25, Csatari, Gergely (Nokia - HU/Budapest) <gergely.csatari@nokia.com> wrote:
Hi,
Thanks for the comments. Going inline with the reactions.
From: Waines, Greg [mailto:Greg.Waines@windriver.com] Sent: Wednesday, March 28, 2018 12:01 PM
Some initial comments:
Features and requirements: Initially talks about "... feature/requirement discussions on two levels ...": top->down feature oriented discussion and bottom->up missing requirements of existing mechanisms discussion. However I only see notes on the "top->down" feature oriented discussion. [G0]: Yes, the requirements are still to be added.
FYI, we had a couple of exchanges between folks working in the FEMDC SiG. For the moment, we reduced the number of levels to 5.
Level 1: operations (admins/devops) that involve a single site (local/remote)
Level 2: operations (admins/devops) that involve several sites (collaboration between sites)
+ For Level 2, there are several use-cases, such as booting a VM on site A using a VMI from site B, booting two VMs on distinct sites and interconnecting them, making a provisioning request on site A that site B should satisfy because there are no available resources on site A, ...
+ Please consider that all operations that can be satisfied by contacting a site directly, without requiring modifications or extensions within the codebase, are considered Level 1 operations.
Level 3: Level 2 operations, but the operations should consider intermittent networks (such as split-brain situations)
Level 4: Level 3 operations, but across several versions of OpenStack (L4.1) and across distinct software stacks (L4.2). L4.2 may include Kubernetes...
Level 5: Level 3, but between distinct operators.
We are currently writing a small report and should be able to deliver it soon. In addition to simplifying the first classification, the goal is also to try to identify the top priorities, in particular in Level 2.
Features: Base assumptions for the features: Why do we have the statement that "edge site is composed of at least 5 servers"? Edge sites should be scalable from a single all-in-one server to at least 10s of servers (maybe more). [G0]: This statement is from line 5 of https://etherpad.openstack.org/p/edge-gap-analysis I tend to agree with you on the lower number, like an edge site can be a single machine. This is one thing I've learned from Beth yesterday. I will correct it to one. If there are objections from others we can discuss :)
Would it be preferable to clarify what we mean by one server?
- we can consider one server as a compute/storage node (i.e. the control services are deployed higher in the hierarchy)
- we can consider one server that hosts an all-in-one OpenStack, but then we should probably discuss what mandatory mechanisms we would like to have in this all-in-one server.
Maybe we can organize a short telco (or IRC meeting) to go through the current wiki page.
My two cents, Adrien
Feature Levels: Maybe just me ... but I am not familiar with the use of the term "requirement levels" for increasing complexity of requirements. I think this also caused some confusion at the Dublin PTG meetings. I would suggest a prioritized list of requirements, where the top N requirements on that list would represent the definition of the MANDATORY / Minimum-Viable-Solution (MVS) set of requirements. [G0]: Okay, I will reorganize the list based on this. I will look for your comments once I'm ready.
General comments on Level 1 through Level 7: Generally I think there is too much focus on the "operational" requirements, e.g. creating VMs, creating networks, migrating VMs, etc. I think the high-priority/MVS requirements should be more focused on the "administrative" requirements, e.g. managing various admin-type configurations across all the edge clouds, like users, projects, images, flavors, security groups, quotas, etc. I think this administrative work alone would be a good significant first step.
[G0]: I think both of them are important and we should focus on both of them. Features describe the high-level target while the requirements will describe the concrete issues (like managing the admin config). I agree, however, that solving the synchronization of administrative data would be an impressive achievement.
Br, Gerg0
Greg.
From: "Csatari, Gergely (Nokia - HU/Budapest)" <gergely.csatari@nokia.com <mailto:gergely.csatari@nokia.com>> Date: Sunday, March 25, 2018 at 9:05 AM To: Jonathan Bryce <jonathan@openstack.org <mailto:jonathan@openstack.org>>, "edge-computing@lists.openstack.org <mailto:edge-computing@lists.openstack.org>" <edge-computing@lists.openstack.org <mailto:edge-computing@lists.openstack.org>> Subject: Re: [Edge-computing] Afterlife of the PTG edge discussions
Hi,
As I promised I started to put the conclusions of the Dublin discussions into t wiki page:https://wiki.openstack.org/wiki/OpenStack_Edge_Discussions_Dublin_PTG <https://wiki.openstack.org/wiki/OpenStack_Edge_Discussions_Dublin_PTG> I'm far from being ready, but in case of any comments please do not hesitate to adit the pages or indicate your comment to me.
The main aim of this excercise is to collect the results of our discussions in a space what is less volatile and a bit more organized than an increasing number of etherpads.
Br, Gerg0
-----Original Message----- From: Jonathan Bryce [mailto:jonathan@openstack.org <mailto:jonathan@openstack.org>] Sent: Monday, March 19, 2018 7:20 PM To: edge-computing@lists.openstack.org <mailto:edge-computing@lists.openstack.org> Subject: Re: [Edge-computing] Afterlife of the PTG edge discussions
Hi everyone,
As a reminder we have a call schedule for tomorrow at 0700PDT/0900CDT/1400UTC. I would like to get into a discussion on how to begin implementing a POC for the new sync service project we started discussing in Dublin.
I tried to consolidate the notes that had happened in various places starting on line 91 of https://etherpad.openstack.org/p/edge-alans-problems <https://etherpad.openstack.org/p/edge-alans-problems>
If you are interested in helping to kick off development of this project, please join us tomorrow morning: Meeting Link:https://zoom.us/j/777719876 <https://zoom.us/j/777719876>
Thanks,
Jonathan
On Mar 2, 2018, at 6:59 AM, Christopher Price <christopher.price@est.tech <mailto:christopher.price@est.tech>> wrote: lol, I like it. If we go with that name I'm using "rex" as an abbreviation. ;) From: Waines, Greg <Greg.Waines@windriver.com <mailto:Greg.Waines@windriver.com>> Sent: Friday, March 2, 2018 11:50:06 AM To: Csatari, Gergely (Nokia - HU/Budapest); lebre.adrien@free.fr <mailto:lebre.adrien@free.fr>; edge-computing@lists.openstack.org <mailto:edge-computing@lists.openstack.org> Subject: Re: [Edge-computing] Afterlife of the PTG edge discussions
Yeah was some interesting discussions.
I thought we made the most progress on Tuesday Morning where we worked thru āAlanās Problemsā, and were able to put down some clear requirements and even some stakes in the ground wrt the beginnings of a strawman architecture.
My notes and view of those initial requirements and initial strawman architecture that we discussed are below: ( ... feel free to throw harpoons at my interpretation ...)
First Iād like to suggest that we do the initial work on this New King Bird with the āgoalā of NOT changing ANY existing service (like keystone, or glance, or ...). I think it would allow us to move forward more quickly ... and is not that restrictive wrt being able to meet the requirements below.
ā¢ multi-region solution ā¢ i.e. central region and many edge regions ā¢ where region == geographically edge site ā¢ For larger solutions, this would extend to multiple levels, e.g. central region Ć edge region Ć further-out-edge regions (sites) ... ā¢ scaling up to 1,000s of regions/sites eventually ā¢ edge region/site can be any size, from small 1-node system to 100s-of-nodes system ā¢ a new service runs on central region for configuring/synchronizing/queryingSyncStatusOf system-wide data across all edge regions ā¢ referred to in meeting as the "New King Bird" (NKB), ā¢ supports REST API for configuring/queryingSyncStatusOf of system wide data across all regions ā¢ where system-wide data includes: ā¢ users, tenants, VM images (glance) ā¦ as discussed in meeting ā¢ ā¦ and, we never discussed but I would also throw in ā¢ nova flavors (& extra specs), nova key pairs, neutron security groups, and ā¢ nova and neutron quotas (although that is more complex than simple synchronization if you want quota management across the system/multiple-edge-regions) ā¢ will be able to specify sync-policy of 'to ALL edge regions' or 'to specific subset of edge regions', for each item of system-wide data, ā¢ synchronization process will retry continuously on failure, ā¢ synchronization process will automatically synch data with any newly created/joined edge region, ā¢ user will be able to query sync state of system-wide data on all/individual edge regions, ā¢ ABC Service in the central region will hold the system-wide ABC data to be synchronized ā¢ e.g. keystone, glance, nova, neutron, ā¦ ā¢ New King Bird Service will hold the meta data wrt sync-policy and sync-state for system-wide data ā¢ For a large multi-level region hierarchy, the New King Bird service would also run on selected edge clouds that would then sync data to further-out-edge regions/sites.
And finally, I have to throw out the first new name suggestion for the ānew king birdā project, how about Tyrannus ( i.e. a genus of large insect-eating birds, commonly known as kingbirds)?
let me know what you think, Greg.
From: "Csatari, Gergely (Nokia - HU/Budapest)" <gergely.csatari@nokia.com <mailto:gergely.csatari@nokia.com>> Date: Wednesday, February 28, 2018 at 1:58 PM To: "lebre.adrien@free.fr <mailto:lebre.adrien@free.fr>" <lebre.adrien@free.fr <mailto:lebre.adrien@free.fr>>, "edge-computing@lists.openstack.org <mailto:edge-computing@lists.openstack.org>" <edge-computing@lists.openstack.org <mailto:edge-computing@lists.openstack.org>> Subject: [Edge-computing] Afterlife of the PTG edge discussions
Hi,
Thanks Adrian for facilitating the workshop. It was a very interesting and diverse discussion š
I can help in organizing the notes after the PTG workshop.
We used 3 etherpads: - PTG schedule: https://etherpad.openstack.org/p/edge-ptg-dublin <https://etherpad.openstack.org/p/edge-ptg-dublin> - Gap analyzis: https://etherpad.openstack.org/p/edge-gap-analysis <https://etherpad.openstack.org/p/edge-gap-analysis> - Alans problems: https://etherpad.openstack.org/p/edge-alans-problems <https://etherpad.openstack.org/p/edge-alans-problems>
In the PTG schedule and the Gap analyzis we have high level discussions mostly about what we would like to do while in Alans problems we have detailed notes about some parts of how we would like to solve some of the problems.
I think both of these are valuable information and we should somehow have the following things: - a description about what we would like to achieve. Maybe in some other format than a list of etherpads. - a list of concrete requirements to specific projects (being existing or something new) - maybe some prototypes based on the Tuesday afternoon discussions for the keystone data and image distribution (by the way can someone post the picture of THE PLAN?)
Any opinions?
Br, Gerg0 _______________________________________________ Edge-computing mailing list Edge-computing@lists.openstack.org <mailto:Edge-computing@lists.openstack.org> http://lists.openstack.org/cgi-bin/mailman/listinfo/edge-computing <http://lists.openstack.org/cgi-bin/mailman/listinfo/edge-computing>
_______________________________________________ Edge-computing mailing list Edge-computing@lists.openstack.org <mailto:Edge-computing@lists.openstack.org> http://lists.openstack.org/cgi-bin/mailman/listinfo/edge-computing <http://lists.openstack.org/cgi-bin/mailman/listinfo/edge-computing> _______________________________________________ Edge-computing mailing list Edge-computing@lists.openstack.org <mailto:Edge-computing@lists.openstack.org> http://lists.openstack.org/cgi-bin/mailman/listinfo/edge-computing <http://lists.openstack.org/cgi-bin/mailman/listinfo/edge-computing>
_______________________________________________ Edge-computing mailing list Edge-computing@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/edge-computing
Hi,
I've added the requirements and updated the page based on the comments: https://wiki.openstack.org/wiki/OpenStack_Edge_Discussions_Dublin_PTG
Some questions:
* I've added some MVS / Non-MVS classification to the features. Please check and comment on them (is there a way to comment on this wiki somehow?)
* In the "Operability data aggregation data provider part" section I could not figure out what data we would like to aggregate. Can you please help with this?
* In the "Remote control controlling part" section I could not figure out what operations we would like to issue. Can you please help with this?
* I described the synchronisation of the keystone data as being done by a new service and not by a partitioned keystone. Should I update the description according to what is described in https://etherpad.openstack.org/p/edge-alans-problems, line 94?
Any comments on any other parts of the wiki are welcome.
Thanks, Gerg0
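(To illustrate the "new service" option mentioned in the last question above, i.e. synchronizing keystone data to the edge sites through their public APIs rather than partitioning keystone itself, here is a rough sketch. The central and edge_sites arguments are assumed to be hypothetical client wrappers exposing list_projects() and ensure_project(); this is not an agreed design, just the reconciliation idea.)

def sync_projects(central, edge_sites):
    """One reconciliation pass: copy the central projects to every edge site."""
    desired = {p.name: p for p in central.list_projects()}
    unreachable = []
    for site in edge_sites:
        try:
            existing = {p.name for p in site.list_projects()}
            for name, project in desired.items():
                if name not in existing:
                    # ensure_project() is assumed to be idempotent
                    site.ensure_project(name, description=project.description)
        except ConnectionError:
            # Edge site not reachable right now: remember it and retry on the
            # next pass, so sites that were down simply catch up later.
            unreachable.append(site)
    return unreachable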
Hi Gerg0, Hi All,
First of all thanks Gerg0 for your work, it is great to see a consolidation of the different discussions.
I have a couple of questions (in particular regarding a couple of acronyms) and I think it would be great to discuss some points in detail, such as the different levels (as discussed, some exchanges in the FEMDC SiG led us to consider only 5 levels now; I'm not sure we are right, but debating that can be useful).
Could we arrange a dedicated meeting to go through this wiki page (it can be an IRC exchange, a webconf, ...)? How could we progress? Should we go through a blueprint-like process?
Thanks, ad_rien_
BTW, the FEMDC SiG meeting is tomorrow at 15:00 UTC.
Hi,
I'm open to discussing any comments or questions in mail or over #edge-computing-group. We can even set up a one-hour meeting for this on #edge-computing-group.
#levels: One of the comments I received was not to use these different levels. I agree that there is a dependency between the levels, but they are still just different feature groups.
On the blueprint-like process: do you mean converting the text to rst, adding it to a repo and doing Gerrit review?
Br, Gerg0
Level 1: operations (admins/devops) that include on single site (local/remote) Level 2: operations (admins/devops) that include several sites (collaborations between sites) + For Level 2, there are several use-cases, such as bootiing a VM on site A considering a VMI from site B, booting two VMs on distinct sites and interconnecting them, make a provisioning request on site A but site B should satisfy the request because there is no available resources on site A, ā¦ + Please consider that all operations that can be satisfied by contacting a site directly without requiring modifications or extensions within the codebase are then considered as Level 1 operations. Level 3: Level 2 operations but operations should consider intermittent networks (such as split brain situations) Level 4: Level 3 operations but across several versions of OpenStack (L4.1) and accros distinct software stack (L4.2). L4.2 may include Kubernetesā¦ Level 5: Level 3 but between distinct operators. We are currently writing a small report and should be able to deliver soon. In addition to simplifying the first classification, the goal is to also try to identify what are the top priorities, in particular in Level 2. * Features * Base assumptions for the features * Why do we have the statement that āedge site is composed of at least 5 serversā ? * Edge sites should be scalable from a single all-in-one server to at least 10s of servers (maybe more) [G0]: This statement is from line 5 of https://etherpad.openstack.org/p/edge-gap-analysis I tend to agree with you on the lower number, like an edge site can be a single machine. This is one thing what Iāve learned from Beth yesterday. I correct it to one. If there are objections from other we can discuss š Would it be preferable to clarify what do we mean by one server. - we can consider one server as a compute/storage node (i.e. the control services are deployed higher in the hierarchy) - we can consider one server that host an all-in-one OpenStack but we should probably then discussed what are the mandatory mechanisms we would like to have in this all-in-one server. Maybe we can organize a short telco (or irc-meeting) to go through the current wiki page. My two cents, Adrien * Feature Levels * Maybe just me ... but I am not familiar with the use of the term ārequirement levelsā for increasing complexity of requirements * I think this also caused some confusion at the Dublin PTG meetings * I would suggest * A prioritized list of requirements, * Where the top N requirements on that list would represent the definition of the MANDATORY / Minimum-Viable-Solution(MVS) set of requirements. [G0]: Okay I will reorganize the list based on this. I will look for your comments once Iām ready. * General comments on Level 1 thru Level 7 * Generally I think there is too much focus on the āoperationalā requirements * E.g. creating VMs, creating network, migrating VMs, etc. * I think the high-priority/MVS requirements should be more focused on the āadministrativeā requirements * E.g. managing various admin type configurations across all the edge clouds, like users, projects, images, flavors, security groups, quotas, etc. * I think this administrative work alone, would be a good significant first step. [G0]: I think both of them are important and we should focus on both of them. Features are describing the high level target while the requirements will describe the concrete issues (like managing of the admin config). 
I agree however, that solving the synchronization of administrative data would be an impressive achiecement. Br, Gerg0 Greg. From: "Csatari, Gergely (Nokia - HU/Budapest)" <gergely.csatari@nokia.com<mailto:gergely.csatari@nokia.com>> Date: Sunday, March 25, 2018 at 9:05 AM To: Jonathan Bryce <jonathan@openstack.org<mailto:jonathan@openstack.org>>, "edge-computing@lists.openstack.org<mailto:edge-computing@lists.openstack.org>" <edge-computing@lists.openstack.org<mailto:edge-computing@lists.openstack.org>> Subject: Re: [Edge-computing] Afterlife of the PTG edge discussions Hi, As I promised I started to put the conclusions of the Dublin discussions into t wiki page:https://wiki.openstack.org/wiki/OpenStack_Edge_Discussions_Dublin_PTG I'm far from being ready, but in case of any comments please do not hesitate to adit the pages or indicate your comment to me. The main aim of this excercise is to collect the results of our discussions in a space what is less volatile and a bit more organized than an increasing number of etherpads. Br, Gerg0 -----Original Message----- From: Jonathan Bryce [mailto:jonathan@openstack.org] Sent: Monday, March 19, 2018 7:20 PM To: edge-computing@lists.openstack.org<mailto:edge-computing@lists.openstack.org> Subject: Re: [Edge-computing] Afterlife of the PTG edge discussions Hi everyone, As a reminder we have a call schedule for tomorrow at 0700PDT/0900CDT/1400UTC. I would like to get into a discussion on how to begin implementing a POC for the new sync service project we started discussing in Dublin. I tried to consolidate the notes that had happened in various places starting on line 91 of https://etherpad.openstack.org/p/edge-alans-problems If you are interested in helping to kick off development of this project, please join us tomorrow morning: Meeting Link:https://zoom.us/j/777719876 Thanks, Jonathan On Mar 2, 2018, at 6:59 AM, Christopher Price <christopher.price@est.tech<mailto:christopher.price@est.tech>> wrote: lol, I like it. If we go with that name I'm using "rex" as an abbreviation. ;) From: Waines, Greg <Greg.Waines@windriver.com<mailto:Greg.Waines@windriver.com>> Sent: Friday, March 2, 2018 11:50:06 AM To: Csatari, Gergely (Nokia - HU/Budapest); lebre.adrien@free.fr<mailto:lebre.adrien@free.fr>; edge-computing@lists.openstack.org<mailto:edge-computing@lists.openstack.org> Subject: Re: [Edge-computing] Afterlife of the PTG edge discussions Yeah was some interesting discussions. I thought we made the most progress on Tuesday Morning where we worked thru āAlanās Problemsā, and were able to put down some clear requirements and even some stakes in the ground wrt the beginnings of a strawman architecture. My notes and view of those initial requirements and initial strawman architecture that we discussed are below: ( ... feel free to throw harpoons at my interpretation ...) First Iād like to suggest that we do the initial work on this New King Bird with the āgoalā of NOT changing ANY existing service (like keystone, or glance, or ...). I think it would allow us to move forward more quickly ... and is not that restrictive wrt being able to meet the requirements below. ā¢ multi-region solution ā¢ i.e. central region and many edge regions ā¢ where region == geographically edge site ā¢ For larger solutions, this would extend to multiple levels, e.g. central region Ć edge region Ć further-out-edge regions (sites) ... 
• scaling up to 1,000s of regions/sites eventually
• an edge region/site can be any size, from a small 1-node system to a 100s-of-nodes system
• a new service runs in the central region for configuring / synchronizing / querying the sync status of system-wide data across all edge regions
  • referred to in the meeting as the "New King Bird" (NKB)
  • supports a REST API for configuring / querying the sync status of system-wide data across all regions
  • where system-wide data includes:
    • users, tenants, VM images (glance) … as discussed in the meeting
    • … and, although we never discussed it, I would also throw in
      • nova flavors (& extra specs), nova key pairs, neutron security groups, and
      • nova and neutron quotas (although that is more complex than simple synchronization if you want quota management across the system / multiple edge regions)
  • you will be able to specify a sync-policy of 'to ALL edge regions' or 'to a specific subset of edge regions' for each item of system-wide data,
  • the synchronization process will retry continuously on failure,
  • the synchronization process will automatically sync data with any newly created/joined edge region,
  • the user will be able to query the sync state of system-wide data on all/individual edge regions,
• the ABC service in the central region will hold the system-wide ABC data to be synchronized
  • e.g. keystone, glance, nova, neutron, …
• the New King Bird service will hold the metadata wrt sync-policy and sync-state for the system-wide data
• for a large multi-level region hierarchy, the New King Bird service would also run on selected edge clouds that would then sync data to further-out edge regions/sites.

(See the sketch after this email for one illustrative way the sync metadata and retry loop could be modelled.)

And finally, I have to throw out the first name suggestion for the "new king bird" project: how about Tyrannus (i.e. a genus of large insect-eating birds, commonly known as kingbirds)?

let me know what you think,
Greg.
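To make the strawman above a bit more concrete, here is a minimal, purely illustrative Python sketch of how the NKB's per-item sync-policy, per-region sync state and retry behaviour could be modelled. None of these names (SyncPolicy, SyncRecord, NewKingBird, sync_once, ...) come from the PTG discussions or from any existing OpenStack project; they are hypothetical, and the actual push to an edge region is stubbed out.

# Hypothetical sketch only -- not an existing OpenStack API or project.
# It models the strawman's ideas: per-item sync-policy ("all regions" or a
# subset), per-region sync state, continuous retry on failure, and automatic
# sync to newly joined regions.
from dataclasses import dataclass, field
from enum import Enum
from typing import Dict, List, Optional


class SyncScope(Enum):
    ALL_EDGE_REGIONS = "all"
    SUBSET = "subset"


class SyncState(Enum):
    PENDING = "pending"
    IN_SYNC = "in-sync"
    FAILED = "failed"


@dataclass
class SyncPolicy:
    scope: SyncScope
    regions: Optional[List[str]] = None  # only used when scope == SUBSET


@dataclass
class SyncRecord:
    item_id: str                      # e.g. a glance image UUID or keystone user id
    item_type: str                    # "image", "user", "flavor", ...
    policy: SyncPolicy
    state: Dict[str, SyncState] = field(default_factory=dict)  # per-region state


class NewKingBird:
    """Toy model of the central sync service's metadata store."""

    def __init__(self, edge_regions: List[str]):
        self.edge_regions = list(edge_regions)
        self.records: Dict[str, SyncRecord] = {}

    def target_regions(self, record: SyncRecord) -> List[str]:
        if record.policy.scope is SyncScope.ALL_EDGE_REGIONS:
            return self.edge_regions
        return record.policy.regions or []

    def add_item(self, record: SyncRecord) -> None:
        for region in self.target_regions(record):
            record.state[region] = SyncState.PENDING
        self.records[record.item_id] = record

    def join_region(self, region: str) -> None:
        # A newly joined edge region is automatically scheduled for sync.
        self.edge_regions.append(region)
        for record in self.records.values():
            if record.policy.scope is SyncScope.ALL_EDGE_REGIONS:
                record.state[region] = SyncState.PENDING

    def sync_once(self, push) -> None:
        # `push(region, record)` would call the target region's normal REST API
        # (keystone/glance/...); it is injected here so the sketch stays abstract.
        for record in self.records.values():
            for region, state in record.state.items():
                if state is SyncState.IN_SYNC:
                    continue
                try:
                    push(region, record)
                    record.state[region] = SyncState.IN_SYNC
                except Exception:
                    record.state[region] = SyncState.FAILED  # retried on next pass

    def sync_status(self, item_id: str) -> Dict[str, str]:
        # What the "query sync state" REST call could return for one item.
        return {r: s.value for r, s in self.records[item_id].state.items()}


if __name__ == "__main__":
    nkb = NewKingBird(edge_regions=["edge-1", "edge-2"])
    nkb.add_item(SyncRecord("image-42", "image",
                            SyncPolicy(SyncScope.ALL_EDGE_REGIONS)))
    nkb.join_region("edge-3")
    nkb.sync_once(push=lambda region, rec: None)  # stubbed push always "succeeds"
    print(nkb.sync_status("image-42"))

The same structure maps naturally onto the REST calls mentioned in the strawman: add_item corresponds to configuring an item together with its sync-policy, and sync_status to querying the per-region sync state.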
On 11 Apr 2018, at 11:31, Csatari, Gergely (Nokia - HU/Budapest) <gergely.csatari@nokia.com> wrote:
Hi,
I'm open to discussing any comments or questions by mail or over #edge-computing-group. We can even set up a one-hour meeting for this on #edge-computing-group.
Yes, should we propose a few slots and see which is the best one?
#levels: One of the comments I received was not to use these different levels. I agree: although there is a dependency between the levels, they are still just different feature groups.
On the blueprint-like process: you mean to convert the text to rst, add it to a repo and do a Gerrit review?
yes ad_ri3n_
Br, Gerg0
From: free [mailto:lebre.adrien@free.fr] Sent: Tuesday, April 10, 2018 7:47 PM To: Csatari, Gergely (Nokia - HU/Budapest) <gergely.csatari@nokia.com> Cc: edge-computing@lists.openstack.org Subject: Re: [Edge-computing] Afterlife of the PTG edge discussions
Hi Gerg0, Hi All,
First of all, thanks Gerg0 for your work; it is great to see a consolidation of the different discussions.
I have a couple of questions (in particular regarding acronyms) and I think it would be great to discuss some points in detail, such as the different levels (as mentioned, recent discussions in the FEMDC SIG led us to consider only 5 levels now; I'm not sure we are going in the right direction, but debating that point can be useful).
Could we arrange a dedicated meeting to go through this wiki page (it can be an IRC exchange, a web conference…)? How could we progress? Should we go through a blueprint-like process?
Thanks, ad_rien_. BTW, the FEMDC SIG meeting is tomorrow at 15:00 UTC.
On 10 Apr 2018, at 16:39, Csatari, Gergely (Nokia - HU/Budapest) <gergely.csatari@nokia.com> wrote:
Hi,
I've added the requirements and updated the page based on the comments: https://wiki.openstack.org/wiki/OpenStack_Edge_Discussions_Dublin_PTG
Some questions:
- I've added some MVS / non-MVS classification to the features. Please check and comment on them (is there a way to comment on this wiki somehow?)
- In the "Operability data aggregation data provider part" section I could not figure out what data we would like to aggregate. Can you please help with this?
- In the "Remote control controlling part" section I could not figure out what operations we would like to issue. Can you please help with this?
- I described the synchronisation of the keystone data as being done by a new service and not by a partitioned keystone. Should I update the description according to what is described in https://etherpad.openstack.org/p/edge-alans-problems , line 94? (See the sketch after this mail for one way such an API-only sync of keystone data could look.)
Any comments on any other parts of the wiki are welcome.
Thanks, Gerg0
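On the keystone question above: if the synchronisation is done by a new service rather than a partitioned keystone, that service would essentially read data from the central keystone over its normal REST API and re-create it on each edge keystone, in the same non-intrusive, API-only spirit discussed earlier in the thread. The sketch below is purely illustrative: the endpoint URLs are made up, it assumes admin tokens are already available for every keystone (each keystone normally issues its own), and domains, passwords, pagination and conflict handling are ignored. The GET /v3/users and POST /v3/users calls are standard Keystone v3 API; everything else is an assumption.

# Illustrative only: one-way, API-level sync of keystone users from a central
# keystone to edge keystones, without modifying keystone itself.
import requests

CENTRAL = "https://central.example.net:5000"          # made-up endpoint
EDGES = ["https://edge1.example.net:5000",            # made-up endpoints
         "https://edge2.example.net:5000"]


def list_users(keystone_url, token):
    # GET /v3/users is a standard Keystone v3 call.
    resp = requests.get(f"{keystone_url}/v3/users",
                        headers={"X-Auth-Token": token})
    resp.raise_for_status()
    return resp.json()["users"]


def create_user(keystone_url, token, user):
    # POST /v3/users with a {"user": {...}} body is the standard create call.
    body = {"user": {"name": user["name"],
                     "enabled": user.get("enabled", True)}}
    resp = requests.post(f"{keystone_url}/v3/users",
                         headers={"X-Auth-Token": token}, json=body)
    resp.raise_for_status()


def sync_users(tokens):
    """tokens maps a keystone URL to an admin token valid for that keystone."""
    central_users = list_users(CENTRAL, tokens[CENTRAL])
    for edge in EDGES:
        existing = {u["name"] for u in list_users(edge, tokens[edge])}
        for user in central_users:
            if user["name"] not in existing:
                create_user(edge, tokens[edge], user)


if __name__ == "__main__":
    sync_users({CENTRAL: "CENTRAL_ADMIN_TOKEN",
                "https://edge1.example.net:5000": "EDGE1_ADMIN_TOKEN",
                "https://edge2.example.net:5000": "EDGE2_ADMIN_TOKEN"})

A partitioned keystone would remove the need for this kind of copying, but it would require changes inside keystone itself, which is exactly the trade-off the question raises.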
From: free [mailto:lebre.adrien@free.fr] Sent: Friday, March 30, 2018 9:33 AM To: edge-computing@lists.openstack.org Subject: Re: [Edge-computing] Afterlife of the PTG edge discussions
Hi
See my comments inline.
On 29 Mar 2018, at 19:25, Csatari, Gergely (Nokia - HU/Budapest) <gergely.csatari@nokia.com> wrote:
Hi,
Thanks for the comments; my reactions are inline below.
From: Waines, Greg [mailto:Greg.Waines@windriver.com] Sent: Wednesday, March 28, 2018 12:01 PM
Some initial comments:
* Features and requirements
  * Initially talks about "... feature/requirement discussions on two levels ...":
    * top->down feature-oriented discussion, and
    * bottom->up discussion of missing requirements in existing mechanisms.
  * However, I only see notes on the top->down feature-oriented discussion.
[G0]: Yes, the requirements are still to be added.
Hi,

Uh-oh, sharing the Doodle link in the correct thread also: https://doodle.com/poll/azac9a3bxchawy83

On the blueprint-like process: I think it is a good idea, as the wiki we use is not able to handle inline comments, but for this we would need a repository first :)

Br,
Gerg0
participants (6)
- Christopher Price
- Csatari, Gergely (Nokia - HU/Budapest)
- free
- Jonathan Bryce
- lebre.adrien@free.fr
- Waines, Greg