Hi, thanks for sharing this! In TripleO we're looking at implementing, in Stein, the deployment of at least one regional DC and N edge zones. More comments below.

On 9/25/18 11:21 AM, Ildiko Vancsa wrote:
Hi,
Hereby I would like to give you a short summary of the discussions that happened at the PTG in the area of edge.
The Edge Computing Group sessions took place on Tuesday where our main activity was to draw an overall architecture diagram to capture the basic setup and requirements of edge towards a set of OpenStack services. Our main and initial focus was around Keystone and Glance, but discussion with other project teams such as Nova, Ironic and Cinder also happened later during the week.
The edge architecture diagrams we drew are part of a so-called Minimum Viable Product (MVP), which refers to the minimalist nature of the setup: we didn’t try to cover all aspects but rather to define a minimum set of services and requirements to get to a functional system. This architecture will evolve further as we collect more use cases and requirements.
To describe edge use cases at a higher level, with Mobile Edge as the background use case, we identified three main building blocks:
* Main or Regional Datacenter (DC)
* Edge Sites
* Far Edge Sites or Cloudlets
We examined the architecture diagram with the following user stories in mind:
* As a deployer of OpenStack I want to minimize the number of control planes I need to manage across a large geographical region.
* As a user of OpenStack I expect instance autoscale continues to function in an edge site if connectivity is lost to the main datacenter.
* As a deployer of OpenStack I want disk images to be pulled to a cluster on demand, without needing to sync every disk image everywhere.
* As a user of OpenStack I want to manage all of my instances in a region (from regional DC to far edge cloudlets) via a single API endpoint.
We agreed to discuss service requirements in two major categories:
1. The Edge sites are fully operational in case of a connection loss between the Regional DC and the Edge site, which requires control plane services running on the Edge site
2. Having full control over the Edge site is not critical in case of a connection loss between the Regional DC and an Edge site, which can be satisfied by having the control plane services running only in the Regional DC
In the first case the orchestration of the services becomes harder and is not necessarily solved yet, while in the second case you have centralized control but lose functionality on the Edge sites in the event of a connection loss.
We did not discuss things such as HA at the PTG and we did not go into details on networking during the architectural discussion either.
While TripleO used to rely on Pacemaker to manage cinder-volume A/P in the control plane, we'd like to push for cinder-volume A/A in the edge zones and avoid deploying Pacemaker there. The safety of cinder-volume A/A seems to depend mostly on the backend driver, and for RBD we should be good.
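To give an idea, a minimal sketch of what an A/A cinder-volume backend definition could look like in cinder.conf at an edge zone, assuming an RBD backend; the cluster name, backend name and paths below are only placeholders, not something that was agreed on:

  [DEFAULT]
  # cinder-volume services sharing the same cluster name form one A/A cluster
  cluster = edge-zone-1
  enabled_backends = ceph

  [ceph]
  volume_driver = cinder.volume.drivers.rbd.RBDDriver
  rbd_ceph_conf = /etc/ceph/ceph.conf
  rbd_pool = volumes
  rbd_user = cinder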
We agreed to prefer federation for Keystone and came up with two work items to cover missing functionality:
* Keystone to trust a token from an ID Provider master and, when the auth method is called, perform an idempotent creation of the user, project and role assignments according to the assertions made in the token
* Keystone should support the creation of users and projects with predictable UUIDs (e.g. a hash of the name of the user or project; see the sketch below). This greatly simplifies image federation and telemetry gathering
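As a rough illustration of the second item, a deterministic name-based UUID (e.g. uuid5) would let every site derive the same ID independently, with no coordination. This is only a sketch of the idea, not an agreed implementation; the namespace and naming scheme are made up:

  import uuid

  # Hypothetical namespace; any value agreed across all sites would do.
  KEYSTONE_NS = uuid.uuid5(uuid.NAMESPACE_DNS, "keystone.example.org")

  def predictable_id(domain_name, entity_name):
      # uuid5 is deterministic: the same domain/name pair yields the same ID
      # at every site, so users and projects line up without synchronization.
      return uuid.uuid5(KEYSTONE_NS, "%s/%s" % (domain_name, entity_name)).hex

  print(predictable_id("Default", "edge-project-1"))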
For Glance we explored image caching and spent some time discussing the option to also cache metadata, so that a user can boot new instances at the edge during a network connection loss, which would otherwise leave the site disconnected from the registry:
* As a user of Glance, I want to upload an image in the main datacenter and boot that image in an edge datacenter. The image is fetched to the edge datacenter along with its metadata
We are still in the process of documenting the discussions and drawing the architecture diagrams and flows for Keystone and Glance.
For Glance we'd like to deploy only one glance-api in the regional DC and configure the Glance cache in each edge zone ... pointing all instances to a shared database. This should solve the metadata problem and also provide for storage "locality" in every edge zone.
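As a sketch of what that could look like in glance-api.conf at an edge zone (the paths, size and database URL are illustrative, and the exact options may vary by release):

  [DEFAULT]
  # local cache so each image is fetched into the edge zone only once
  image_cache_dir = /var/lib/glance/image-cache/
  image_cache_max_size = 10737418240

  [paste_deploy]
  # enable the caching middleware in the API pipeline
  flavor = keystone+cachemanagement

  [database]
  # shared database in the regional DC, so image metadata is common to all zones
  connection = mysql+pymysql://glance:secret@regional-db.example.org/glance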
In addition to the above we went through the Dublin PTG wiki (https://wiki.openstack.org/wiki/OpenStack_Edge_Discussions_Dublin_PTG) capturing requirements:
* we agreed to consider the list of requirements on the wiki finalized for now
* agreed to move there the additional requirements listed on the Use Cases (https://wiki.openstack.org/wiki/Edge_Computing_Group/Use_Cases) wiki page
For the details on the discussions with related OpenStack projects you can check the following etherpads for notes:
* Cinder: https://etherpad.openstack.org/p/cinder-ptg-planning-denver-9-2018
* Glance: https://etherpad.openstack.org/p/glance-stein-edge-architecture
* Ironic: https://etherpad.openstack.org/p/ironic-stein-ptg-edge
* Keystone: https://etherpad.openstack.org/p/keystone-stein-edge-architecture
* Neutron: https://etherpad.openstack.org/p/neutron-stein-ptg
* Nova: https://etherpad.openstack.org/p/nova-ptg-stein
Notes from the StarlingX sessions: https://etherpad.openstack.org/p/stx-PTG-agenda
Here is a link to the TripleO edge squad etherpad as well: https://etherpad.openstack.org/p/tripleo-edge-squad-status
The edge squad is meeting weekly.
We are still working on the MVP architecture to clean it up and discuss comments and questions before moving it to a wiki page. Please let me know if you would like to get access to the document and I will share it with you.
Please let me know if you have any questions or comments to the above captured items.
thanks again!
--
Giulio Fidente
GPG KEY: 08D733BA