Hey ... just taking a look at the options in https://wiki.openstack.org/wiki/Keystone_edge_architectures .

For the first option, i.e. ‘Several Keystone instances with federation and API synchronisation’:
- I am assuming that the Keystone instance at each edge cloud is communicating with a non-local, central Identity Provider.
- If this is the case, the concern listed above related to operability with no connectivity applies,
  - i.e. “There may be significant times with no connectivity and all functions (e.g. autoscaling) must continue to function”.

In the ‘Distributed Cloud’ sub-project of the StarlingX project (see the summit presentation at https://www.openstack.org/videos/vancouver-2018/edge-computing-operations-da... ):
- our initial Keystone approach is simply the standard multi-region centralized shared Keystone, so no scalability and no autonomy for edge clouds on loss of connectivity,
- BUT we are currently taking more of the ‘second option’ approach (i.e. ‘Keystone database replication’) ... with some additions, i.e.:
  - Every edge cloud instance runs its own Keystone instance,
  - Keystone resources are replicated from the central site to the edge clouds using our distributed-cloud replication framework,
    - i.e. projects, users, groups, domains, roles, ...
    - (i.e. not a low-level DB synchronization ... more a high-level journaling / synchronization of resources),
  - AND we also support Fernet key synchronization and management across edge clouds, so that tokens created at any edge / central cloud can be used (and authenticated) in any other cloud.
    - Required for some distributed-services scenarios, e.g. glance-api pulling from a remote glance-registry, etc. (likely for future scenarios we don’t currently understand).

Comments?
Greg.
Hi,

What do you mean by high-level journaling / synchronisation? For me, one of the basic differences between option 1 and option 2 is the understanding of the data semantics during synchronisation in the case of option 1, versus synchronising blobs of data in the case of option 2.

Can you share what data you synchronise between the Keystone instances? This is also a piece of information we are looking for.

Thanks,
Gerg0
Hey Gergely,

Wrt high-level journaling/synchronization ... what I really mean is:
- We are NOT doing some low-level, semantics-unaware DB synchronization.
- Instead we are doing a more high-level, semantics-aware replication of Keystone resources from the central site to ALL the edge clouds:
  - we are generally replicating REST API commands for creating/modifying/deleting Keystone resources between the central site and ALL the edge clouds,
    - although it is more sophisticated than that, as it supports retries on failures and audits of resources to handle scenarios where edge clouds are disconnected during changes to the central site, etc.
  - the Keystone resources we are currently synchronizing are: users, projects, roles and assignments,
    - ... we should be doing groups and domains as well ... but we currently don’t leverage user groups and only have the default domain in StarlingX,
    - and we will also synchronize Fernet keys so that tokens created on any cloud can be used/validated on any cloud (see the sketch below).
- This is built on a common synchronization framework providing common utilities / mechanisms for broadcast of messages to all edge clouds, retries, audits, etc.
  - E.g. we also use this framework for synchronizing:
    - Nova’s flavors, flavor extra-specs, keypairs and quotas,
    - Neutron’s security groups and security group rules,
    - Cinder’s quotas.

Greg.
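[Editor's illustration] To show why synchronizing Fernet keys is what makes cross-cloud token validation possible, here is a minimal sketch using the cryptography library's Fernet primitives. This is conceptual only: Keystone's real fernet tokens carry user/project/expiry metadata and use Keystone's own key-repository layout, and all names below are illustrative. The point is simply that two sites holding the same keys can each validate a token the other minted, with no network call between them.

from cryptography.fernet import Fernet, MultiFernet

# Hypothetical synchronized key repository shared by central and edge.
shared_keys = [Fernet.generate_key(), Fernet.generate_key()]

# Both sites load the SAME synchronized keys.
central = MultiFernet([Fernet(k) for k in shared_keys])
edge = MultiFernet([Fernet(k) for k in shared_keys])

# "Token" issued at the central site (payload is illustrative only).
token = central.encrypt(b"user=alice;project=demo")

# The edge site validates/decrypts it entirely offline, because it
# holds the same key repository; a site with different keys would
# raise InvalidToken instead.
print(edge.decrypt(token))  # b'user=alice;project=demo'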
Hi,

Thanks for the answer. According to my understanding, this is what we tried to describe in the first option. I’ve also added the list of synced data to the wiki: https://wiki.openstack.org/wiki/Keystone_edge_architectures#Replicated_data

Is the replication service part of the open-sourced StarlingX?

Thanks,
Gerg0
Responses in-lined below,
Greg.

Hi,

Thanks for the answer. According to my understanding this is what we tried to describe in the first option.

[Greg] Ok ... maybe this is me not understanding what ‘federated’ Keystone is. My understanding of federated Keystone (please correct me if I am wrong):
- All clouds run a Keystone instance (service provider),
- But all Keystone instances leverage the same remote Identity Provider as the backend for user authentication,
  - i.e. there is a single remote Identity Provider holding all the user credentials,
- The Keystone instances in each cloud must have the MAPPINGs from Identity Provider users & SAML assertions to Keystone project/user/token/etc.

So is the “API Synchronization” that you mention in this first option referring to the synchronization of these MAPPINGs?

We are NOT using Keystone federation in StarlingX. Maybe what we are doing in StarlingX is a fourth option: similar to Option 2 (“Keystone database replication”), but using API-based synchronization plus Fernet key synchronization to enable use of tokens across all clouds. E.g.:

Option 4 – Keystone API Synchronization & Fernet Key Synchronization
- Every edge cloud instance runs its own Keystone instance,
- Keystone resources are replicated from the central site to the edge clouds using API-based synchronization,
  - i.e. projects, users, groups, domains, ...
- Fernet key synchronization and management across edge clouds is also supported, so that tokens created at any edge / central cloud can be used (and authenticated) in any other cloud.

I’ve also added the list of synched data to the wiki: https://wiki.openstack.org/wiki/Keystone_edge_architectures#Replicated_data

[Greg] And you probably want to put an asterisk (*) by the Quotas items: similar to Kingbird, we dynamically manage quotas across the edge clouds in order to provide quotas at the distributed-cloud level. I.e. a project that has a quota of 10 instances can only create 10 instances across ALL edge clouds, NOT 10 instances per edge cloud (see the sketch below).

Is the replication service part of the open sourced StarlingX?

[Greg] Yes.

Thanks,
Gerg0
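[Editor's illustration] A minimal sketch of the distributed-quota behaviour described above. All names (EdgeCloud, can_create, etc.) are hypothetical, not the StarlingX or Kingbird API; the real framework audits usage across clouds asynchronously rather than checking synchronously. The point is that the project quota is enforced against the sum of usage across all edge clouds, not per cloud.

class EdgeCloud:
    def __init__(self, name, instances=0):
        self.name = name
        self.instances = instances

def can_create(project_quota, clouds, requested=1):
    # Enforce the quota at the distributed-cloud level: count usage
    # across ALL edge clouds before admitting a new instance.
    total = sum(c.instances for c in clouds)
    return total + requested <= project_quota

clouds = [EdgeCloud("edge-1", instances=7), EdgeCloud("edge-2", instances=3)]

# 10 instances already exist across the distributed cloud, so a new one
# is rejected even though each individual cloud is well under 10.
print(can_create(10, clouds))  # False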
Hi,

Thanks for the answers. I’ve added a 4th option and the note to the Quotas: https://wiki.openstack.org/wiki/Keystone_edge_architectures

Br,
Gerg0
A question for the general team, and the ‘KEYSTONE’ edge team in particular.

It seems like we are going down the solution path of ‘federated’ Keystone (i.e. the first option described at https://wiki.openstack.org/wiki/Keystone_edge_architectures ).

My understanding of ‘federated’ Keystone (and this solution) is that:
- All clouds run a Keystone instance (service provider),
- But all Keystone instances leverage some remote Identity Provider as the backend for user authentication,
  - i.e. there is a remote Identity Provider holding all the user credentials,
- i.e. local Keystone instances (in edge clouds), if they lose connectivity to the remote Identity Provider, can NOT authenticate users or validate tokens.

If this is the case, then how does the ‘federated’ Keystone option meet the requirement that “There may be significant times where an edge cloud has no remote connectivity and all functions (e.g. autoscaling) must continue to function”?

I would have thought that we would have ruled this out immediately?

Greg.
Hi Greg,

Jumping in a bit late, but I can try to clear up some of the questions around Keystone's federated identity implementation.

Your initial assessment of federated identity is accurate: each Keystone node in the deployment refers to an external identity provider as the source of truth for user identities. But Keystone doesn't actively reach out to the external identity provider at authentication time. Instead, a user presents Keystone (which is acting as a service provider) with a SAML document that Keystone verifies with a set of certificates from the identity provider. If Keystone can prove the SAML assertion came from a trusted identity provider, it processes the attributes through a mapping engine, which essentially translates the SAML assertion into OpenStack-specific terminology.

The actual validation of the SAML assertion doesn't require a connection to the identity provider that issued it. This trust relationship is established when setting up federated identity, via configuration (e.g. "these are the certs for the identity provider that I trust").

Hopefully that helps,
Lance
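[Editor's illustration] As a conceptual sketch of the offline-validation point above: verifying an XML signature needs only the IdP's certificate, not a network call. This uses the third-party signxml library and hypothetical file names; it is not what Keystone does internally (as noted later in the thread, Keystone delegates SAML processing to Apache modules such as mod_shib), but it shows why no connection to the IdP is required at validation time.

from signxml import XMLVerifier

# idp_cert.pem was obtained out-of-band when federation was configured;
# assertion.xml is the signed SAML assertion presented by the user.
with open("idp_cert.pem") as f:
    idp_cert = f.read()
with open("assertion.xml", "rb") as f:
    signed_assertion = f.read()

# Purely local cryptographic check against the trusted IdP certificate:
# no connectivity to the identity provider is required.
verified = XMLVerifier().verify(signed_assertion, x509_cert=idp_cert)
print(verified.signed_xml)  # the authenticated assertion contents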
Thanks Lance ... this clears up my general confusion. So it sounds like user authentication can be performed by the edge Keystone even when it does not currently have connectivity to the remote identity provider.

So ... just a few clarifying questions so I understand at a high level how this works. Is the following the correct flow of events on the edge cloud? (Note I have some embedded questions as well.)
- At some point during configuration, or on initial connectivity (?) with the remote federated Identity Provider, the edge cloud Keystone gets the CERTIFICATE(s) of the remote federated Identity Provider,
  - I assume that these CERTIFICATE(s) can also be updated by the federated Identity Provider as certs expire or change,
- Also at configuration time or initial connectivity, I assume there is a set of SAML --> OpenStack user/tenant/role MAPPINGs that gets configured on the edge cloud (correct?),
- A LOCAL user/client of the edge cloud, when sending an authenticated REST API request, first gets a token from the LOCAL Keystone by sending a token request to the LOCAL Keystone with a ‘signed’ SAML doc as the user-authentication information,
  - The LOCAL Keystone uses the CERTIFICATE(s) (of the federated identity provider) that it has to validate that the SAML doc is authentic, and
  - then uses the info in the SAML doc to map to the associated OpenStack user/tenant/role, and
  - then generates the appropriate local token.
  - CORRECT?
  - QUESTIONS:
    - How does the LOCAL user/client get the ‘signed’ SAML doc?
      - Does this require connectivity to the remote federated Identity Provider?

Greg.
Just following up on this mailing-list item ... we discussed it at the edge-computing weekly meeting.

Summary:
- FAILURE SCENARIO:
  - The EDGE CLOUD and all its LOCAL users/clients (e.g. CLI running on the cloud or on an adjacent workstation) have LOST CONNECTIVITY to the central cloud and the central remote Identity Provider.
- If a LOCAL user/client has already authenticated with the central remote Identity Provider and already has a signed SAML assertion,
  - then the LOCAL user/client can use that signed SAML assertion to create a token with the LOCAL Keystone,
    - where the LOCAL Keystone can independently validate the signed SAML assertion, map it to the appropriate Keystone objects and create a TOKEN,
  - AND the LOCAL user/client can then use this TOKEN to proceed with a request to a LOCAL service.
- HOWEVER, if the LOCAL user/client
  - does NOT have a signed SAML assertion yet ... or it expired (? not sure if they can expire ?), THEN
  - the LOCAL user/client can not create a TOKEN
    - (unless there are local (i.e. non-federated) users defined ... but that seems to defeat the purpose of federation).

So I suspect there still are some connectivity-failure scenarios that restrict the autonomy of a disconnected edge cloud.

*** I would suggest that the testing we are trying to set up around federated Keystone SHOULD explicitly test these connectivity-failure scenarios, in order to characterize how bad an issue this is or is not. ***

Let me know if I did not summarize correctly,
Greg.
On 06/27/2018 11:30 AM, Waines, Greg wrote:
Just following up on this mailing list item ... we discussed at the edge-computing weekly meeting.
Summary:
- FAILURE SCENARIO:
  - The EDGE CLOUD and all its LOCAL users/clients (e.g. CLI running on the cloud or on an adjacent workstation) have LOST CONNECTIVITY to the central cloud and the central remote Identity Provider.
- If a LOCAL user/client has already authenticated with the central remote Identity Provider and already has a signed SAML assertion,
  - then the LOCAL user/client can use that signed SAML assertion to create a token with the LOCAL Keystone,
    - where the LOCAL Keystone can independently validate the signed SAML assertion, map it to the appropriate Keystone objects and create a TOKEN,
  - AND the LOCAL user/client can then use this TOKEN to proceed with a request to a LOCAL service.
- HOWEVER, if the LOCAL user/client
  - does NOT have a signed SAML assertion yet ... or it expired (? not sure if they can expire ?),
I should clarify this a bit. Keystone doesn't deal with SAML directly when it acts as a service provider. Instead it uses Apache plugins (e.g. mod_shib) to do the SAML verification. This is also related to the bit on the call where Kristi and I talked about how Keystone needs to establish trust with the identity provider. Keystone, specifically mod_shib and the web server, needs to be configured with the certificates used to validate SAML from the identity provider, which is an out-of-band process.
  THEN
  - the LOCAL user/client can not create a TOKEN
    - (unless there are local (i.e. non-federated) users defined ... but that seems to defeat the purpose of federation).

So I suspect there still are some connectivity-failure scenarios that restrict the autonomy of a disconnected edge cloud.

*** I would suggest that the testing we are trying to set up around federated Keystone SHOULD explicitly test these connectivity-failure scenarios, in order to characterize how bad an issue this is or is not. ***
Let me know if I did not summarize correctly,
Greg.
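[Editor's illustration] To make Lance's mod_shib note above concrete, this is roughly the shape of the Apache configuration that protects Keystone's federated-auth endpoint, adapted from the Keystone federation setup documentation. The identity-provider name 'myidp' is a placeholder, and the trusted IdP certificates themselves are configured out-of-band in Shibboleth's own configuration (shibboleth2.xml), not in Keystone.

# Hypothetical sketch of the Apache/mod_shib protection for Keystone's
# SAML auth endpoint; 'myidp' is a placeholder identity-provider name.
<Location /v3/OS-FEDERATION/identity_providers/myidp/protocols/saml2/auth>
    Require valid-user
    AuthType shibboleth
    ShibRequestSetting requireSession 1
    ShibExportAssertion off
</Location>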
The concept of replicating data over the API was also brought up towards the end of the call as a possible solution. The TL;DR is that each deployment would be independent, but data would be kept in sync by making API requests that forcibly create entities with the same IDs (manual replication, if you will). This idea has been discussed at length over the last couple of years [0]. We most recently discussed it during the summit in Sydney.

Where we really tripped was replication of revocation events. For example, assume you have the ability in Keystone to forcibly create things with specific IDs, allowing you to manually replicate data across your edge deployments. At the same time you're keeping the fernet keys in sync, so a user can actually generate tokens in one deployment and use them in another (since each Keystone node has the same IDs for projects, domains, users, etc.), even though replication isn't happening at the database. Revocation events are records used internally by Keystone to track events that affect token validity (e.g. a user changing their password or deleting a specific token), and they are not exposed via Keystone's API. The concern is that without replicating this table, it would be possible for a user to:

- generate a token in deployment A,
- use the token in deployment B to do something,
- change their password in deployment A, which creates a revocation event invalidating the token from step one in deployment A,
- attempt to use the token from step one in deployment A, where it is now considered invalid (due to the revocation event),
- successfully execute API calls with the token from step one in deployment B, because deployment B has no knowledge of the revocation event in deployment A.

We didn't get into that much detail in the call, but I wanted to share where the conversation ended up the last time we had it.

[0] https://review.openstack.org/#/c/323499/
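[Editor's illustration] A minimal, hypothetical simulation of the failure mode Lance describes: two deployments share fernet keys (so tokens are mutually decryptable) but keep independent revocation state. All names are illustrative, and revocation is modelled per-user rather than per-event for brevity.

from cryptography.fernet import Fernet

# Both deployments share the same synchronized fernet key...
shared_key = Fernet.generate_key()

class Deployment:
    def __init__(self):
        self.fernet = Fernet(shared_key)
        self.revoked = set()  # ...but revocation records stay local.

    def issue_token(self, user):
        return self.fernet.encrypt(user.encode())

    def validate(self, token):
        user = self.fernet.decrypt(token).decode()
        return user not in self.revoked

a, b = Deployment(), Deployment()

token = a.issue_token("alice")   # 1. token generated in deployment A
assert b.validate(token)         # 2. usable in B thanks to shared keys

a.revoked.add("alice")           # 3. password change revokes it in A
assert not a.validate(token)     # 4. A now rejects the token
assert b.validate(token)         # 5. B still accepts it, because the
                                 #    revocation event never reached B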
Hey Lance,

Thanks for the info below. So it seems like, from previous discussions, there is a little hesitation about "use of fernet tokens issued from one Keystone in another", unless "replicated/in-sync Keystones" are used. (Is that a correct interpretation? ... or is it a total ‘-2’ on fernet tokens from one Keystone being used in another?)

Does "replicated/in-sync Keystones" refer to Keystones whose DBs are synced at the DB level? (E.g. one read/write DB and many read replicas?) If yes ... how does one handle software upgrade across a distributed cloud in this "replicated Keystone" environment, where the central cloud Keystone could have a different DB schema than an edge subcloud?

Greg.
Hey Lance,

Just following up again on the info you provided below about previous discussions on the issues with API-based synchronization of Keystone between clouds. In your example, wouldn’t the API-based synchronization of the user password change @ A (assuming A is the central site) result in the user password change @ B “and” the corresponding revocation event being created @ B?

Greg.
On 07/03/2018 07:01 AM, Waines, Greg wrote:
Hey Lance,
Just following up again on the info below you provided on previous discussions on issues with API-based Synchronization of Keystone between clouds.
In your example below, wouldn’t the API-based Synchronization of the user password change @ A (assuming A is the central site), result in the user password change @ B “and” the corresponding revocation event being created @ B ?
I guess I need a little more information in order to answer the question. Do users live in SQL, or are they centrally located somewhere else? If they live in SQL, does that mean you're replicating users manually to have the same ID, name, and password across every edge deployment? If so, then you'd be able to manually create revocation events by changing the user's password at every edge deployment. So if you have 100 deployments, you'll have to change a user's password in 100 places in order to get around the issue I described in the previous note.

This leads me to another question. If users are in SQL, are you planning on relying on any PCI-DSS features [0]? If so, this might be another attack surface to consider, since:

- an attacker could attempt to brute force a password in deployment A,
- the attacker is locked out due to hitting the maximum failed authentication attempts,
- the attacker attempts to brute force the same account in deployment B (which doesn't know the same user just tried authenticating and failed X times in deployment A).

[0] https://docs.openstack.org/keystone/latest/admin/identity-security-complianc...
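[Editor's illustration] For reference, the PCI-DSS controls Lance refers to are per-deployment settings in keystone.conf; a sketch of the relevant section, with illustrative values (option names per Keystone's security-compliance documentation). Because each edge Keystone reads its own config and tracks its own failed-attempt counters, a lockout in one deployment says nothing about the others, which is exactly the attack surface described above.

# keystone.conf -- illustrative values; counters are per deployment
[security_compliance]
lockout_failure_attempts = 6    # lock the account after 6 failed attempts
lockout_duration = 1800         # keep it locked for 30 minutes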
For the API-based synchronization approach:
- the users could live locally in SQL, and
- the idea would be to have a centralized API synchronization framework that would automatically synchronize a user’s UUID, name and password across all edge deployments (... provided that the UUID could be specified on the POST of the user ... see the hypothetical sketch below),
  - ... which would indirectly result in revocation events being generated on the edge clouds.

But I agree that your comment on the PCI-DSS features is yet another example of side-effect changes in the Keystone DB ... which, in this case, would not get indirectly replicated through an API-based synchronization framework.

Thanks for your input,
Greg.
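[Editor's illustration] A hypothetical sketch of the ID-preserving replication call shape being discussed. Note the caveat from the thread: stock Keystone does not accept a caller-chosen user ID on POST /v3/users (that capability is what the referenced spec [0] explored), so the "id" field below is an assumption, not a real Keystone API feature. The endpoint path, token header and payload shape otherwise follow the standard v3 API. Pushing a password change through the same path at each edge is what would indirectly generate the local revocation events mentioned above.

import requests

# Hypothetical: replicate a user from the central site to one edge cloud,
# preserving its UUID so fernet tokens map to the same identity everywhere.
def sync_user(edge_url, admin_token, user):
    resp = requests.post(
        f"{edge_url}/v3/users",
        headers={"X-Auth-Token": admin_token},
        json={"user": {
            "id": user["id"],          # NOT supported by stock Keystone;
                                       # the whole point of the spec [0]
            "name": user["name"],
            "password": user["password"],
            "domain_id": user["domain_id"],
        }},
    )
    resp.raise_for_status()
    return resp.json()["user"]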
Participants (3):
- Csatari, Gergely (Nokia - HU/Budapest)
- Lance Bragstad
- Waines, Greg