Hi,


Okay, thanks. I will add Dublin to the page, but I decided to do it next week, when I can no longer drink Guinness.


Br,

Gerg0


From: lebre.adrien@free.fr <lebre.adrien@free.fr>
Sent: Friday, March 2, 2018 5:42:06 PM
To: Csatari, Gergely (Nokia - HU/Budapest)
Cc: edge-computing@lists.openstack.org
Subject: Re: [Edge-computing] Afterlife of the PTG edge discussions
 
>
> Hi,
>
>
>
> @Adrien: Is it okay if I create a subpage to
> https://wiki.openstack.org/wiki/Fog_Edge_Massively_Distributed_Clouds
> so we can document the results stored in all of these etherpads?
>
> By default I will use "OpenStack Edge Discussions" as a title, but
> I'm open to any suggestions.


Please do it, Gerg0; maybe add a Dublin_PTG suffix to the name.

Thanks,
Adrien
PS: as I mentioned on Tuesday, I will be offline next week (so feel free to move forward, we are an open community ;))

>
>
>
> If there are no objections, I will take the first steps at 16:00 Dublin time.
>
>
>
> Br,
>
> Gerg0
>
> From: Christopher Price [mailto:christopher.price@est.tech]
> Sent: Friday, March 2, 2018 1:00 PM
> To: Waines, Greg <Greg.Waines@windriver.com>; Csatari, Gergely (Nokia
> - HU/Budapest) <gergely.csatari@nokia.com>; lebre.adrien@free.fr;
> edge-computing@lists.openstack.org
> Subject: Re: [Edge-computing] Afterlife of the PTG edge discussions
>
> lol, I like it. If we go with that name I'm using "rex" as an
> abbreviation. ;)
>
> From: Waines, Greg <Greg.Waines@windriver.com>
> Sent: Friday, March 2, 2018 11:50:06 AM
> To: Csatari, Gergely (Nokia - HU/Budapest); lebre.adrien@free.fr;
> edge-computing@lists.openstack.org
> Subject: Re: [Edge-computing] Afterlife of the PTG edge discussions
>
> Yeah, those were some interesting discussions.
>
>
>
> I thought we made the most progress on Tuesday morning, where we
> worked through "Alan's Problems" and were able to put down some clear
> requirements and even some stakes in the ground wrt the beginnings of
> a strawman architecture.
>
>
>
> My notes and view of those initial requirements and initial strawman
> architecture that we discussed are below:
> ( ... feel free to throw harpoons at my interpretation ...)
>
>
>
> First, I'd like to suggest that we do the initial work on this New
> King Bird with the "goal" of NOT changing ANY existing service (like
> keystone, or glance, or ...). I think it would allow us to move
> forward more quickly ... and it is not that restrictive wrt being
> able to meet the requirements below.
>
>
>
>     * multi-region solution
>         * i.e. central region and many edge regions
>         * where region == geographically distributed edge site
>             * for larger solutions, this would extend to multiple
>               levels, e.g. central region -> edge regions ->
>               further-out-edge regions (sites) ...
>     * scaling up to 1,000s of regions/sites eventually
>     * edge region/site can be any size, from a small 1-node system to
>       a 100s-of-nodes system
>     * a new service runs on the central region for
>       configuring/synchronizing/queryingSyncStatusOf system-wide data
>       across all edge regions
>         * referred to in the meeting as the "New King Bird" (NKB)
>         * supports a REST API for configuring/queryingSyncStatusOf
>           system-wide data across all regions (see the API sketch
>           after this list)
>         * where system-wide data includes:
>             * users, tenants, VM images (glance) ... as discussed in
>               the meeting
>             * ... and, although we never discussed them, I would also
>               throw in:
>                 * nova flavors (& extra specs), nova key pairs,
>                   neutron security groups, and
>                 * nova and neutron quotas (although that is more
>                   complex than simple synchronization if you want
>                   quota management across the
>                   system/multiple-edge-regions)
>         * will be able to specify a sync-policy of 'to ALL edge
>           regions' or 'to a specific subset of edge regions' for each
>           item of system-wide data
>         * the synchronization process will retry continuously on
>           failure (see the retry sketch after this list)
>         * the synchronization process will automatically sync data to
>           any newly created/joined edge region
>         * the user will be able to query the sync state of
>           system-wide data on all/individual edge regions
>     * the ABC Service in the central region will hold the system-wide
>       ABC data to be synchronized
>         * e.g. keystone, glance, nova, neutron, ...
>     * the New King Bird Service will hold the metadata wrt
>       sync-policy and sync-state for system-wide data
>     * for a large multi-level region hierarchy, the New King Bird
>       service would also run on selected edge clouds, which would
>       then sync data to further-out-edge regions/sites (see the relay
>       sketch after this list)
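>
> Just to make the API shape concrete, here is a rough Python sketch of
> how a client might talk to the NKB. Every endpoint, port, field and
> region name below is made up purely for illustration; nothing here
> was agreed in the room:
>
>     import requests
>
>     NKB = "http://central.example.com:8999/v1"  # hypothetical NKB endpoint
>
>     # Configure a sync-policy: push one glance image to a subset of
>     # edge regions (a policy could also say "all" to target every
>     # edge region).
>     policy = {
>         "resource_type": "image",
>         "resource_id": "11111111-2222-3333-4444-555555555555",  # fake UUID
>         "regions": ["edge-dublin-01", "edge-dublin-02"],
>     }
>     resp = requests.post(NKB + "/sync-policies", json=policy)
>     policy_id = resp.json()["id"]
>
>     # Query the per-region sync state of that resource.
>     status = requests.get(NKB + "/sync-status/" + policy_id).json()
>     for region, state in status["regions"].items():
>         print(region, state)  # e.g. in-sync / syncing / failed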
>
>
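> And a tiny sketch of the "retry continuously on failure" behaviour,
> assuming a hypothetical push_to_region() transport helper (again,
> purely illustrative):
>
>     import time
>
>     def sync_resource(resource, regions, push_to_region, retry_interval=30):
>         """Push `resource` until every target region confirms receipt.
>
>         push_to_region(resource, region) is a hypothetical helper
>         that returns True on success; failed regions are retried
>         forever, as discussed.
>         """
>         pending = set(regions)
>         while pending:
>             for region in sorted(pending):
>                 if push_to_region(resource, region):
>                     pending.discard(region)
>             if pending:
>                 time.sleep(retry_interval)  # wait, then retry failures
>
> A newly created/joined edge region would simply be added to the
> pending set of every sync-policy that targets it.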
>
>
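> For the multi-level case, the relaying could look something like the
> following, where each NKB instance only pushes to its direct children
> and selected edge clouds run their own NKB to relay further out (all
> names made up):
>
>     # Hypothetical region hierarchy: parent -> direct children.
>     HIERARCHY = {
>         "central": ["edge-a", "edge-b"],
>         "edge-a": ["far-edge-a1", "far-edge-a2"],  # edge-a runs its own NKB
>         "edge-b": [],
>     }
>
>     def relay(resource, push_to_region, region="central"):
>         # Push to each direct child, then let that child's NKB
>         # repeat the process one level further out.
>         for child in HIERARCHY.get(region, []):
>             push_to_region(resource, child)
>             relay(resource, push_to_region, child)
>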
> And finally, I have to throw out the first new name suggestion for
> the ‘new king bird’ project,
>
> how about Tyrannus (i.e. a genus of large insect-eating birds,
> commonly known as kingbirds)?
>
>
>
> let me know what you think,
>
> Greg.
>
> From: "Csatari, Gergely (Nokia - HU/Budapest)" <
> gergely.csatari@nokia.com >
> Date: Wednesday, February 28, 2018 at 1:58 PM
> To: " lebre.adrien@free.fr " < lebre.adrien@free.fr >, "
> edge-computing@lists.openstack.org " <
> edge-computing@lists.openstack.org >
> Subject: [Edge-computing] Afterlife of the PTG edge discussions
>
> Hi,
>
>
>
> Thanks, Adrien, for facilitating the workshop. It was a very
> interesting and diverse discussion.
>
>
>
> I can help in organizing the notes after the PTG workshop.
>
>
>
> We used 3 etherpads:
>
> - PTG schedule: https://etherpad.openstack.org/p/edge-ptg-dublin
>
> - Gap analysis: https://etherpad.openstack.org/p/edge-gap-analysis
>
> - Alan's problems:
> https://etherpad.openstack.org/p/edge-alans-problems
>
>
>
> In the PTG schedule and the Gap analysis we have high-level
> discussions, mostly about what we would like to do, while in Alan's
> problems we have detailed notes on how we would like to solve some of
> the problems.
>
>
>
> I think both of these are valuable, and we should somehow produce the
> following things:
>
> - a description of what we would like to achieve, maybe in some
> format other than a list of etherpads
>
> - a list of concrete requirements for specific projects (either
> existing ones or something new)
>
> - maybe some prototypes based on the Tuesday afternoon discussions
> for the keystone data and image distribution (by the way, can someone
> post the picture of THE PLAN?)
>
>
>
> Any opinions?
>
>
>
> Br,
>
> Gerg0