Classifying edge-site scenarios / Identifying expected features
Hi,

Following the LTE discussion, I'm wondering whether we should try to classify the edge-site scenarios by level of complexity (i.e., the features/capabilities each scenario implies/requires).

A possible classification could start with:
a) An edge infrastructure composed of several edge sites operated by the same organisation, with wired WAN connections.
b) The same infrastructure, but with LTE connections.
c) Scenario a), but where the edge infrastructure is spread over several edge sites belonging to different organisations/operators.
d) Scenario c), but with LTE.
e) ..

In parallel, we can try to identify, w.r.t. these scenarios, the expected features/services from the administrator viewpoint and then from the developer viewpoint.

I have the feeling that we all have relevant scenarios, each with specifics according to our targets. Having a classification would allow us to move forward by focusing on classes instead of on each particular scenario independently.

My two cents,
ad_ri3n_
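Read as a matrix rather than a flat list, the classes a)-d) above are just the combinations of two axes: site ownership and WAN link type. A minimal sketch of that reading (type and field names are illustrative assumptions, not something agreed on the thread):

    from dataclasses import dataclass
    from enum import Enum

    class Ownership(Enum):
        SINGLE_OPERATOR = "single operator"      # scenarios a) and b)
        MULTI_OPERATOR = "multiple operators"    # scenarios c) and d)

    class WanLink(Enum):
        WIRED = "wired WAN"                      # scenarios a) and c)
        LTE = "LTE"                              # scenarios b) and d)

    @dataclass(frozen=True)
    class EdgeScenarioClass:
        ownership: Ownership
        wan_link: WanLink

    # The four classes a)-d) fall out as the combinations of the two axes;
    # further axes (e.g. satellite links, tiered sites) would extend the matrix.
    SCENARIO_CLASSES = [EdgeScenarioClass(o, w) for o in Ownership for w in WanLink]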
We should probably focus more on defining the connectivity characteristics between the associated edge entities to make progress here. We could start the discussion of network connectivity at a high level, with a "black box" representation of the network focusing on characteristics such as latency, bandwidth, drop rates, attachment redundancy, maximum unavailability, etc., in the context of various specific use cases and/or connectivity constraints. These parameters can vary widely depending on the specific deployment scenario.

It is also possible to have connectivity from, e.g., a "thick" vCPE edge instance located on customer premises that has heterogeneous connectivity. To use the "LTE" example from Adrien's classification: you can have a box with a single wired (say Ethernet) connection that is used normally, and wireless LTE connectivity that is used when the main connection fails but has dissimilar bandwidth, latency, etc. characteristics.

Also, the top-level separation at this early stage should perhaps be by higher-level constructs, e.g. "mobile" (4G/LTE ... 5G/eMBB/URLLC/...), "wireless" (802 stuff), "satellite", "wireline" (point-to-point fiber, xPON, cable, xDSL/G.fast, ...), rather than getting into technology specifics (the characteristics of connectivity vary a lot depending on the technology).

So far it seems to me that there is not yet sufficient focus on the cases for "edge" locations that are within the network, such as cell sites, V/CRAN aggregation sites, COs, metro locations, etc. In particular, 5G vRAN edge already has three tiers of sites in many plans (i.e. radio sites, distributed aggregation sites with low-latency connections to radio sites, and central sites), all of which should be in the scope of the "edge".

Pasi
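A minimal sketch of such a "black box" connectivity profile, assuming illustrative field names, units and example values (none of these are from the thread):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ConnectivityProfile:
        # Black-box view of the link between two edge entities. Field names and
        # units are assumptions; the mail only lists the kinds of parameters.
        latency_ms: float                  # one-way delay between the two entities
        bandwidth_mbps: float
        drop_rate: float                   # fraction of packets lost (0.0 - 1.0)
        redundant_attachments: int         # number of independent uplinks
        max_unavailability_s: Optional[float] = None   # longest tolerated outage

    # Example: a "thick" vCPE with a primary wired uplink and an LTE backup,
    # each with dissimilar characteristics (values are made up for illustration).
    vcpe_wired = ConnectivityProfile(latency_ms=10.0, bandwidth_mbps=500.0,
                                     drop_rate=1e-6, redundant_attachments=2)
    vcpe_lte_backup = ConnectivityProfile(latency_ms=50.0, bandwidth_mbps=20.0,
                                          drop_rate=1e-4, redundant_attachments=1)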
Hi,

I agree that we should classify edge-site scenarios. We also need to understand the applicability of edge computing use cases (not the NFV use cases we already know) at each type of edge site.

According to the following 5G Automotive Association white paper (p. 11): "A typical challenge in IoV is the big data processing and storage resulting from a huge and growing number of sensors in smart/connected vehicles and roads. This, in turn, requires new services platforms with powerful processing and computing resources, low cost and trusted storage, and real-time communication capabilities. MEC appears as a promising opportunity to migrate Cloud resources closer to the vehicles and the data sources and hence save network resources and reduce the latency."
http://5gaa.org/news/toward-fully-connected-vehicles-edge-computing-for-adva...

How can OpenStack Edge Computing help the automotive industry?

-Hyde
On Tuesday 09 January 2018 23:20:07 Hyde SUGIYAMA wrote:
How can OpenStack Edge Computing help Automotive industry?
A key insight is that "edge" has to mean inside the vehicle, not only at the cell tower or close to it. In other words, every vehicle should have a substantial on-board cloudlet. See these two references on why:
- http://elijah.cs.cmu.edu/DOCS/hu-livemap2017.pdf
- http://elijah.cs.cmu.edu/DOCS/satya-lanman2017.pdf

Edge-enabled OpenStack in every vehicle?

-- Satya
Thanks for sharing the links, but my current scope is edge in the infrastructure, to aggregate data traffic (like your MQTT traffic) from vehicles.

PS. Discussion about edge inside the vehicle should start from the Automotive Linux Summit. So far they focus on IVI, but the scope is not limited to that as far as I know. I haven't heard that they need edge-enabled OpenStack in every vehicle...

-- Hyde
Hi,

Hmm, running OpenStack components in unikernels limits the HW variants that can be used and the OS features the guests can use.

Br,
Gerg0
Hi,

I've added MY view of the answers to the use-case questions for the medium edge deployment scenario, starting from line 13, here: https://etherpad.openstack.org/p/edge-use-case-questions

I also mapped the user stories from https://etherpad.openstack.org/p/2017_edge_computing_working_sessions_existi... to the same text.

Any comments are welcome, but be aware that I will be out of office for two weeks starting from Friday.

Br,
Gerg0
In addition to cell sites and V/CRAN aggregation, Control and Data Plane separation efforts in 5G may result in keeping control plane elements anchored at a central or regional core, and having only data plane SDN switches, or something similarly lightweight, at the edge.

Ajay
There are two aspects to this: the first is the split architecture of the functionality between the (Remote) Radio Heads, Distributed Units and Centralized Units in the RAN, and the second is the CP/DP separation. In addition, core location (and the associated splits) as well as slicing affect the placement of functions, and this can be service specific, i.e. the location of components for a URLLC service can be different from the location of the same components for an eMBB service.

The RAN functions themselves have hard real-time requirements constraining the latencies (e.g. a 100 us OWD target for the L1 stuff in CPRI/eCPRI and other low-level split specs), which limits the distance from the radio to roughly 20 km. Since the goal of some new services / edge deployments is latency minimization (including the 1 ms target for the whole RAN, covering both transmission and processing), that further limits what can be placed where.

Pasi
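A back-of-the-envelope check of that ~20 km figure, assuming a textbook propagation speed in fibre of roughly 5 us per km one way (the constant and the example overhead value below are assumptions, not numbers from the thread):

    # Rough check of the distance limit implied by a 100 us one-way-delay budget
    # for the low-layer split (CPRI/eCPRI class). The propagation constant is a
    # textbook approximation (~2/3 of c in fibre), not a number from the thread.

    FIBRE_KM_PER_US = 0.2   # ~200,000 km/s  ->  ~5 us per km one way

    def max_fibre_distance_km(owd_budget_us: float,
                              processing_overhead_us: float = 0.0) -> float:
        # Upper bound on radio-to-processing distance for a given OWD budget.
        propagation_budget_us = owd_budget_us - processing_overhead_us
        return propagation_budget_us * FIBRE_KM_PER_US

    print(max_fibre_distance_km(100.0))        # ~20 km if the whole budget is propagation
    print(max_fibre_distance_km(100.0, 25.0))  # ~15 km once processing/queuing takes a share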
Following up on the conversation in the call today, have folks on the list read "My VM is Lighter (and Safer) than your Container" [0]? It's a good example of the kind of work we could do now on OpenStack, if we select a small number of representative use cases/scenarios, build sample deployments, and share concrete data with developers on the specific changes we need in OpenStack. This kind of work can be ongoing at the same time as the higher-level and broader-scope conversations about areas where we're still unsure of the best architecture or implementation details.

Allison

[0] Filipe Manco, Costin Lupu, Florian Schmidt, Jose Mendes, Simon Kuenzer, Sumit Sati, Kenichi Yasukata, Costin Raiciu, and Felipe Huici (2017). "My VM is Lighter (and Safer) than your Container." In Proceedings of the 26th Symposium on Operating Systems Principles (SOSP '17), ACM, New York, NY, USA, 218-233. DOI: https://doi.org/10.1145/3132747.3132763. Also available at: http://cnp.neclab.eu/projects/lightvm/lightvm.pdf
Allison,

Thanks for sharing. I was not aware of this work, and I found the article interesting.

Today we discussed two aspects:
1- How to make OpenStack fit into a smaller footprint (i.e. into an edge site)
2- How to make OpenStack more distributed (with compute nodes at edge sites connected via a WAN link)

This article certainly fits in the first category, in particular if we think about deploying OpenStack components in unikernels instead of deploying them in containers.

Paul-André
--
On Tuesday 09 January 2018 22:38:44 Paul-Andre Raymond wrote:
2- How to make openstack more distributed (with Compute nodes at edge sites connected via a WAN link)
One key capability for this is VM handoff over WAN links: essentially "live migration" of VMs across cloudlets outside the data center. We have extended OpenStack to do this. See cmusatyalab/elijah-openstack on GitHub, and this description of how it works: http://elijah.cs.cmu.edu/DOCS/ha-sec2017.pdf

-- Satya
The typical pattern for unikernels is to deploy the application workloads as unikernels, rather than as containers or VM/cloud images. (I also pondered the possibility of trying the OpenStack services deployed as unikernels as I read the article, but that was more idle curiosity, since it wouldn't buy us much in an edge scenario.)

I also wasn't suggesting directly trying the LightVM implementation in OpenStack, since it's mostly a proof of concept at the moment. (Though I'll be interested to see where they go with it.)

Mainly, I was suggesting that their workflow for performance analysis and optimization of Xen is a good example of what we could do with OpenStack edge deployments, once we've defined a small set of representative edge-site scenarios. That performance analysis workflow applies equally well to either the small-footprint or the distributed approach to using OpenStack for edge infrastructure; you just have to use a more distributed set of performance analysis tools for the distributed case.

Allison
Hi,

Just wondering whether multi-tenancy is a hard requirement for some edge environments, e.g. device, gateway, NFV appliance? Those setups sound more like single tenancy, so containers + bare metal would be good enough?
Sometimes even unikernel + bare metal is enough. It really depends on the use case and the target application workloads.
I cannot comment on "some", but it is generally a requirement for "many". Containers do not mean that they do not need multi-tenancy, and if you want to include "all" cases rather than only "some", then it is a requirement.

Pasi
Agreed. The infra layer (compute, not storage and network) has two types of runtime technology: containers and virtualization. To provide a universal and portable platform for applications, the underlying runtime is an implementation choice; the package format, a.k.a. the image, is the key. Applications should be packaged only once, in a "standard" format, and be deployable on any runtime according to the various requirements.

With that said, not all apps can be "dockerized" or turned into "microservices"; some are built with the notion of a server. But that doesn't necessarily mean they have to run as VMs. LXD is able to provide a VM-like experience on top of cgroups/namespaces, and Kata is able to provide a Docker experience on top of KVM. The myth is that a Docker image has to be a microservice; in fact, we can build "fat", full server-like Docker images. Uniting both containers and virtualization under one single, de facto image format and workflow has tremendous benefits.
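A minimal sketch of the "package once, choose the runtime per site" idea; the runtime names, image references and the deploy() helper below are made-up illustrations, not a real deployment API:

    from dataclasses import dataclass

    @dataclass
    class AppImage:
        name: str
        reference: str        # e.g. an image reference in a registry

    RUNTIMES = {
        "runc": "plain container (cgroups/namespaces)",
        "kata": "container API backed by a lightweight VM",
        "lxd":  "system container with a VM-like guest experience",
    }

    def deploy(image: AppImage, runtime: str, site: str) -> str:
        # One image artifact; the isolation technology is a per-site choice.
        if runtime not in RUNTIMES:
            raise ValueError(f"unknown runtime: {runtime}")
        return f"{image.name} ({image.reference}) -> {site} via {runtime}: {RUNTIMES[runtime]}"

    img = AppImage(name="vcpe-dataplane", reference="registry.example/vcpe:1.0")
    print(deploy(img, "runc", "metro-pop-1"))
    print(deploy(img, "kata", "cell-site-42"))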
participants (9)
- Ajay Simha
- Allison Randal
- Csatari, Gergely (Nokia - HU/Budapest)
- Gnep Zhao
- Hyde SUGIYAMA
- lebre.adrien@free.fr
- Mahadev Satyanarayanan
- Pasi Vaananen
- Paul-Andre Raymond