Internet Research Task Force (IRTF)                             I. Kunze
Request for Comments: 9817                                     K. Wehrle
Category: Informational                                      RWTH Aachen
ISSN: 2070-1721                                               D. Trossen
                                                                  Huawei
                                                          M-J. Montpetit
                                                                  McGill
                                                               X. de Foy
                                        InterDigital Communications, LLC
                                                              D. Griffin
                                                                  M. Rio
                                                                     UCL
                                                               July 2025


                   Use Cases for In-Network Computing
Abstract

Computing in the Network (COIN) comes with the prospect of deploying
processing functionality on networking devices such as switches and
network interface cards. While such functionality can be beneficial,
it has to be carefully placed into the context of the general
Internet communication, and it needs to be clearly identified where
and how those benefits apply.

This document presents some use cases to demonstrate how a number of
salient COIN-related applications can benefit from COIN.
Furthermore, to guide research on COIN, it identifies essential
research questions and outlines desirable capabilities that COIN
systems addressing these use cases may need to support. Finally, the
document provides a preliminary categorization of the described
research questions to source future work in this domain. This
document is a product of the Computing in the Network Research Group
(COINRG). It is not an IETF product and it is not a standard.
Status of This Memo

This document is not an Internet Standards Track specification; it is
published for informational purposes.

This document is a product of the Internet Research Task Force
(IRTF). The IRTF publishes the results of Internet-related research
and development activities. These results might not be suitable for
deployment. This RFC represents the consensus of the Computing in
the Network (COIN) Research Group of the Internet Research Task Force
(IRTF). Documents approved for publication by the IRSG are not
candidates for any level of Internet Standard; see Section 2 of RFC
7841.

Information about the current status of this document, any errata,
and how to provide feedback on it may be obtained at
https://www.rfc-editor.org/info/rfc9817.
Copyright Notice

Copyright (c) 2025 IETF Trust and the persons identified as the
document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(https://trustee.ietf.org/license-info) in effect on the date of
publication of this document. Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document.
Table of Contents

1.  Introduction
2.  Terminology
3.  Providing New COIN Experiences
  3.1.  Mobile Application Offloading
  3.2.  Extended Reality and Immersive Media
  3.3.  Personalized and Interactive Performing Arts
4.  Supporting New COIN Systems
  4.1.  In-Network Control / Time-Sensitive Applications
  4.2.  Large-Volume Applications
  4.3.  Industrial Safety
5.  Improving Existing COIN Capabilities
  5.1.  Content Delivery Networks
  5.2.  Compute-Fabric-as-a-Service (CFaaS)
  5.3.  Virtual Networks Programming
6.  Enabling New COIN Capabilities
  6.1.  Distributed AI Training
7.  Preliminary Categorization of the Research Questions
8.  Security Considerations
9.  IANA Considerations
10. Conclusion
11. Informative References
Acknowledgements
Authors' Addresses
1. Introduction

The Internet was designed as a best-effort packet network, forwarding
packets from source to destination with limited guarantees regarding
their timely and successful reception. Data manipulation,
computation, and more complex protocol functionalities are generally
provided by the end hosts, while network nodes are traditionally kept
simple and only offer a "store and forward" packet facility. This
simplicity of purpose has proven suitable for a wide variety of
applications and has facilitated the rapid growth of the Internet.
However, introducing middleboxes with specialized functionality for
enhancing performance has often led to problems due to their
inflexibility.
However, with the rise of new services, some of which are described
in this document, there is a growing number of application domains
that require more than best-effort forwarding, including strict
performance guarantees or closed-loop integration to manage data
flows. In this context, allowing for a tighter integration of
computing and networking resources for enabling a more flexible
distribution of computation tasks across the network (e.g., beyond
"just" endpoints and without requiring specialized middleboxes) may
help to achieve the desired guarantees and behaviors, increase
overall performance, and improve resilience to failures.
The vision of "in-network computing" and the provisioning of such
capabilities that capitalize on joint computation and communication
resource usage throughout the network is part of the move from a
telephone network analogy of the Internet into a more distributed
computer board architecture. We refer to those capabilities as "COIN
capabilities" in the remainder of the document.
We believe that this vision of in-network computing can be best
outlined along four dimensions of use cases, namely those that:

i.   provide new user experiences through the utilization of COIN
     capabilities (referred to as "COIN experiences"),

ii.  enable new COIN systems (e.g., through new interactions between
     communication and compute providers),

iii. improve on already existing COIN capabilities, and

iv.  enable new COIN capabilities.

Sections 3 through 6 capture those categories of use cases and
provide the main structure of this document. The goal is to present
how computing resources inside the network impact existing services
and applications or allow for innovation in emerging application
domains.
By delving into some individual examples within each of the above
categories, we outline opportunities and propose possible research
questions for consideration by the wider community when pushing
forward in-network computing architectures. Furthermore, identifying
desirable capabilities for an evolving solution space of COIN systems
is another objective of the use case descriptions. To achieve this,
the following taxonomy is proposed to describe each of the use cases:
Description:  A high-level presentation of the purpose of the use
   case and a short explanation of the use case behavior.

Characterization:  An explanation of the services that are being
   utilized and realized as well as the semantics of interactions in
   the use case.

Existing Solutions:  A description of current methods that may
   realize the use case (if they exist), though not claiming to
   exhaustively review the landscape of solutions.

Opportunities:  An outline of how COIN capabilities may support or
   improve on the use case in terms of performance and other metrics.

Research questions:  Essential questions that are suitable for
   guiding research to achieve the identified opportunities. The
   research questions also capture immediate capabilities for any
   COIN solution addressing the particular use case whose development
   may immediately follow when working toward answers to the research
   questions.

Additional desirable capabilities:  A description of additional
   capabilities that might not require research but may be desirable
   for any COIN solution addressing the particular use case; we limit
   these capabilities to those directly affecting COIN, recognizing
   that any use case will realistically require many additional
   capabilities for its realization. We omit this dedicated section
   if relevant capabilities are already sufficiently covered by the
   corresponding research questions.
This document discusses these six aspects along a number of
individual use cases to demonstrate the diversity of COIN
applications. It is intended as a basis for further analyses and
discussions within the wider research community. This document
represents the consensus of COINRG.
2. Terminology

This document uses the terminology defined below.

Programmable Network Devices (PNDs):  network devices, such as
   network interface cards and switches, which are programmable
   (e.g., using P4 [P4] or other languages).

(COIN) execution environment:  a class of target environments for
   function execution, for example, an execution environment based on
   the Java Virtual Machine (JVM) that can run functions represented
   in JVM byte code.

COIN system:  the PNDs (and end systems) and their execution
   environments, together with the communication resources
   interconnecting them, operated by a single provider or through
   interactions between multiple providers that jointly offer COIN
   capabilities.

COIN capability:  a feature enabled through the joint processing of
   computation and communication resources in the network.

(COIN) program:  a monolithic functionality that is provided
   according to the specification for said program and which may be
   requested by a user. A composite service can be built by
   orchestrating a combination of monolithic COIN programs.

(COIN) program instance:  one running instance of a program.

COIN experience:  a new user experience brought about through the
   utilization of COIN capabilities.
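The relationships among these terms (a COIN system hosting several
instances of the same (COIN) program, each exposed under a service
identifier) can be illustrated in code. The following Python model is
purely an illustrative sketch; none of the class or field names are
defined by this document:

```python
from dataclasses import dataclass, field

@dataclass
class CoinProgram:
    """A monolithic functionality, requestable via a service identifier."""
    service_id: str

@dataclass
class CoinProgramInstance:
    """One running instance of a (COIN) program on a particular node."""
    program: CoinProgram
    device: str  # a PND or an end system

@dataclass
class CoinSystem:
    """PNDs and end systems plus their execution environments."""
    instances: list = field(default_factory=list)

    def deploy(self, program: CoinProgram, device: str) -> CoinProgramInstance:
        # Deploying a program on a node yields a new program instance.
        inst = CoinProgramInstance(program, device)
        self.instances.append(inst)
        return inst

    def instances_of(self, service_id: str) -> list:
        # All instances exposing the given service identifier.
        return [i for i in self.instances if i.program.service_id == service_id]

system = CoinSystem()
display = CoinProgram("display")
system.deploy(display, "vr-headset")
system.deploy(display, "smart-tv")
print(len(system.instances_of("display")))  # → 2
```

The point of the sketch is only that one (COIN) program may back many
(COIN) program instances, which is what later sections rely on when
moving execution between nodes.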
3. Providing New COIN Experiences

3.1. Mobile Application Offloading

3.1.1. Description

This scenario can be exemplified in an immersive gaming application,
where a single user plays a game using a Virtual Reality (VR)
headset. The headset hosts several (COIN) programs. For instance,
the display (COIN) program renders frames to the user, while other
programs are realized for VR content processing and to incorporate
input data received from sensors (e.g., in bodily worn devices
including the VR headset).

Once this application is partitioned into its constituent (COIN)
programs and deployed throughout a COIN system, utilizing a COIN
execution environment, only the display (COIN) program may be left in
the headset, while the compute-intensive real-time VR content
processing (COIN) program can be offloaded to a nearby resource-rich
home PC or a Programmable Network Device (PND) in the operator's
access network, for a better execution (faster and possibly higher
resolution generation).
3.1.2. Characterization

Partitioning a mobile application into several constituent (COIN)
programs allows for denoting the application as a collection of
(COIN) programs for a flexible composition and a distributed
execution. In our example above, most capabilities of a mobile
application can be categorized into any of three groups: receiving,
processing, and displaying.

Any device may realize one or more of the (COIN) programs of a mobile
application and expose them to the (COIN) system and its constituent
(COIN) execution environments. When the (COIN) program sequence is
executed on a single device, the outcome is what you traditionally
see with applications running on mobile devices.

However, the execution of a (COIN) program may be moved to other
(e.g., more suitable) devices, including PNDs, which have exposed the
corresponding (COIN) program as individual (COIN) program instances
to the (COIN) system by means of a service identifier. The result is
the equivalent of mobile function offloading, for possible reduction
of power consumption (e.g., offloading CPU-intensive process
functions to a remote server) or for improved end-user experience
(e.g., moving display functions to a nearby smart TV) by selecting
more suitably placed (COIN) program instances in the overall (COIN)
system.
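The selection of a more suitably placed (COIN) program instance can be
sketched as a constraint-based lookup over advertised instances. The
following Python fragment is illustrative only; the instance records,
the latency metric, and the function name are our own assumptions, not
part of any defined COIN interface:

```python
# Hypothetical advertisement records for one service identifier:
# (service identifier, hosting node, advertised latency in ms).
instances = [
    ("vr/process", "mobile-device", 0.0),  # local execution
    ("vr/process", "home-pc", 4.0),        # nearby resource-rich PC
    ("vr/process", "access-pnd", 2.0),     # PND in the access network
]

def select_instance(service_id, candidates, max_latency_ms):
    """Pick the instance with the lowest advertised latency that
    still satisfies the service request's latency constraint."""
    feasible = [c for c in candidates
                if c[0] == service_id and c[2] <= max_latency_ms]
    return min(feasible, key=lambda c: c[2], default=None)

# Offloading: consider only remote instances, i.e., move the
# CPU-intensive processing off the mobile device.
remote = [c for c in instances if c[1] != "mobile-device"]
best = select_instance("vr/process", remote, max_latency_ms=5.0)
print(best[1])  # → access-pnd
```

Capabilities and constraints could equally come from the program
advertisement or from the service request itself, as the use case
description suggests; the sketch hard-codes both for brevity.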
We can already see a trend toward supporting such functionality by
relying on dedicated cloud hardware (e.g., gaming platforms rendering
content externally). We envision, however, that such functionality
is becoming more pervasive through specific facilities, such as
entertainment parks or even hotels, in order to deploy needed edge
computing capabilities to enable localized gaming as well as
non-gaming scenarios.
Figure 1 shows one realization of the above scenario, where a "DPR
app" is running on a mobile device (containing the partitioned COIN
programs Display (D), Process (P), and Receive (R)) over a
programmable switching network, e.g., a Software-Defined Network
(SDN) here. The packaged applications are made available through a
localized "playstore server". The mobile application installation is
realized as a service deployment process, combining the local app
installation with a distributed deployment (and orchestration) of one
or more (COIN) programs on the most suitable end systems or PNDs
(here, a "processing server").
              +----------+  Processing Server
   Mobile     | +------+ |
  +---------+ | |  P   | |
  |   App   | | +------+ |
  | +-----+ | | +------+ |
  | |D|P|R| | | |  SR  | |
  | +-----+ | | +------+ |  Internet
  | +-----+ | +----------+    /
  | | SR  | |      |         /
       ...
  |+-------+|  +-------+ /  +----------+
  |+-------+| /|WIFI AP|/
  ||   D   ||/ +-------+         +--+
  |+-------+|                    |SR|
  |+-------+|                   /+--+
  ||  SR   ||            +---------+
  |+-------+|            |Playstore|
  +---------+            | Server  |
      TV                 +---------+

         Figure 1: Application Function Offloading Example
Such localized deployment could, for instance, be provided by a
visiting site, such as a hotel or a theme park. Once the processing
(COIN) program is terminated on the mobile device, the "service
routing (SR)" elements in the network route (service) requests
instead to the (previously deployed) processing (COIN) program
running on the processing server over an existing SDN network. Here,
capabilities and other constraints for selecting the appropriate
(COIN) program, in case more than one has been deployed, may be
provided both in the advertisement of the (COIN) program and in the
service request itself.
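The SR behavior described above can be sketched as a registration
table keyed by service identifier: requests carry the identifier, and
the SR element forwards them to a currently registered (COIN) program
instance. All names below are hypothetical; this document defines no
SR API:

```python
class ServiceRouter:
    """Illustrative "service routing" (SR) element: maps a service
    identifier to the nodes currently exposing a matching instance."""

    def __init__(self):
        self.table = {}  # service identifier -> list of next hops

    def register(self, service_id, node):
        # A node advertises a (COIN) program instance.
        self.table.setdefault(service_id, []).append(node)

    def deregister(self, service_id, node):
        # The instance on this node has terminated.
        self.table.get(service_id, []).remove(node)

    def route(self, service_id):
        """Return the next hop for a service request, if any."""
        hops = self.table.get(service_id)
        return hops[0] if hops else None

sr = ServiceRouter()
sr.register("vr/process", "mobile-device")
sr.register("vr/process", "processing-server")

# The processing (COIN) program terminates on the mobile device, so
# subsequent requests go to the previously deployed remote instance.
sr.deregister("vr/process", "mobile-device")
print(sr.route("vr/process"))  # → processing-server
```

A real SR element would select among multiple registered instances
using the advertised capabilities and request constraints; the sketch
simply takes the first remaining registration.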
As an extension to the above scenarios, we can also envision that
content from one processing (COIN) program may be distributed to more
than one display (COIN) program (e.g., for multi- and many-viewing
scenarios). Here, an offloaded processing program may collate input
from several users in the (virtual) environment to generate a
possibly three-dimensional render that is then distributed via a
service-level multicast capability towards more than one display
(COIN) program.
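The collate-then-fan-out pattern just described can be sketched as
follows; both functions are hypothetical stand-ins for the processing
program and the service-level multicast capability, not interfaces
defined by this document:

```python
def collate_and_render(user_inputs):
    """Combine per-user sensor inputs into one shared render.
    (Stand-in for the offloaded processing (COIN) program.)"""
    return {"scene": sorted(user_inputs.values())}

def service_multicast(render, display_instances):
    """Deliver the same render to every subscribed display (COIN)
    program instance. (Stand-in for service-level multicast.)"""
    return {display: render for display in display_instances}

# Two users in the shared (virtual) environment...
inputs = {"alice": "pose-a", "bob": "pose-b"}
render = collate_and_render(inputs)

# ...and the single render fans out to more than one display program.
out = service_multicast(render, ["headset-1", "smart-tv"])
print(len(out))  # → 2
```

The essential property is that the render is produced once and
replicated at the service level, rather than each display instance
triggering its own processing run.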
3.1.3. Existing Solutions

The ETSI Mobile Edge Computing (MEC) [ETSI] suite of technologies
provides solutions for mobile function offloading by allowing mobile
applications to select resources in edge devices to execute functions
instead of the mobile device directly. For this, ETSI MEC utilizes a
set of interfaces for the selection of suitable edge resources,
connecting to so-called MEC application servers, while also allowing
for sending data for function execution to the application server.
However, the technologies do not utilize microservices
[Microservices]; they mainly rely on virtualization approaches such
as containers or virtual machines, thus requiring a heavier
processing and memory footprint in a COIN execution environment and
the executing intermediaries. Also, the ETSI work does not allow for
the dynamic selection and redirection of (COIN) program calls to
varying edge resources rather than a single MEC application server.

Also, the selection of the edge resource (the app server) is
relatively static, relying on DNS-based endpoint selection, which
does not cater to the requirements of the example provided above,
where the latency for redirecting to another device lies within a few
milliseconds for aligning with the frame rate of the display
microservice.

Lastly, MEC application servers are usually considered resources
provided by the network operator through its MEC infrastructure,
while our use case here also foresees the placement and execution of
microservices in end-user devices.

There also exists a plethora of mobile offloading platforms provided
through proprietary platforms, all of which follow a similar approach
to ETSI MEC in that a selected edge application server is utilized to
send functional descriptions and data for execution.

[APPCENTRES] outlines a number of enabling technologies for the use
case, some of which have been realized in an Android-based
realization of the microservices as a single application, which is
capable of dynamically redirecting traffic to other microservice
instances in the network. This capability, together with the
underlying path-based forwarding capability (using SDN), was
demonstrated publicly (e.g., at the Mobile World Congress 2018 and
2019).
3.1.4. Opportunities

* The packaging of (COIN) programs into existing mobile application
  packaging may enable the migration from current (mobile) device-
  centric execution of those mobile applications toward a possible
  distributed execution of the constituent (COIN) programs that are
  part of the overall mobile application.

* The orchestration for deploying (COIN) program instances in
...
  for localized infrastructure owners, such as hotels or venue
  owners, to offer their compute capabilities to their visitors for
  improved or even site-specific experiences.

* The execution of (current mobile) app-level (COIN) programs may
  speed up the execution of said (COIN) program by relocating the
  execution to more suitable devices, including PNDs that may reside
  better located in relation to other (COIN) programs and thus
  improve performance, such as latency.

* The support for service-level routing of requests (such as service
  routing in [APPCENTRES]) may support higher flexibility when
  switching from one (COIN) program instance to another (e.g., due
  to changing constraints for selecting the new (COIN) program
  instance).  Here, PNDs may support service routing solutions by
  acting as routing overlay nodes to implement the necessary
  additional lookup functionality and also possibly support the
  handling of affinity relations (i.e., the forwarding of one packet
  to the destination of a previous one due to a higher level service
  relation as discussed and described in [SarNet2021]).

* The ability to identify service-level COIN elements will allow for
  routing service requests to those COIN elements, including PNDs,
  therefore possibly allowing for new COIN functionality to be
  included in the mobile application.

* The support for constraint-based selection of a specific (COIN)
  program instance over others (e.g., constraint-based routing in
  [APPCENTRES], showcased for PNDs in [SarNet2021]) may allow for a
  more flexible and app-specific selection of (COIN) program
  instances, thereby allowing for better meeting the app-specific
  and end-user requirements.
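To make the constraint-based selection of a (COIN) program instance
concrete, the following sketch shows how a service router might pick
one instance among advertised candidates.  It is a minimal
illustration only, not an interface defined by [APPCENTRES] or
[SarNet2021]; the names (`Instance`, `select_instance`) and the
latency/load constraints are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Instance:
    """A running (COIN) program instance advertised to the service router."""
    node: str          # identifier of the hosting device (PND, edge server, ...)
    latency_ms: float  # measured latency from the requesting client
    load: float        # current utilization in [0.0, 1.0]

def select_instance(candidates: list[Instance],
                    max_latency_ms: float,
                    max_load: float = 0.8) -> Optional[Instance]:
    """Return the lowest-latency instance meeting both constraints, if any."""
    feasible = [c for c in candidates
                if c.latency_ms <= max_latency_ms and c.load <= max_load]
    return min(feasible, key=lambda c: c.latency_ms) if feasible else None

# Example: choose among two edge nodes and one cloud instance for a
# display microservice with a 10 ms latency constraint (values invented).
candidates = [
    Instance("pnd-1", latency_ms=2.5, load=0.9),   # close, but overloaded
    Instance("edge-a", latency_ms=6.0, load=0.4),
    Instance("cloud",  latency_ms=45.0, load=0.1), # violates the latency bound
]
best = select_instance(candidates, max_latency_ms=10.0)
```

A real system would additionally have to handle affinity (pinning
follow-up requests to the previously chosen instance) and re-run the
selection when constraints change, which is exactly what RQ 3.1.5 and
RQ 3.1.6 below ask about.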
3.1.5. Research Questions

* RQ 3.1.1: How to combine service-level orchestration frameworks,
  such as TOSCA orchestration templates [TOSCA], with app-level
  (e.g., mobile application) packaging methods, ultimately providing
  the means for packaging microservices for deployments in
  distributed networked computing environments?

* RQ 3.1.2: How to reduce latencies involved in (COIN) program
  interactions where (COIN) program instance locations may change
  quickly?  Can service-level requests be routed directly through
  in-band signaling methods instead of relying on out-of-band
  discovery, such as through the DNS?

* RQ 3.1.3: How to signal constraints used for routing requests
  towards (COIN) program instances in a scalable manner (i.e., for
  dynamically choosing the best possible service sequence of one or
  more (COIN) programs for a given application experience through
  chaining (COIN) program executions)?

* RQ 3.1.4: How to identify (COIN) programs and program instances so
  as to allow routing (service) requests to specific instances of a
  given service?

* RQ 3.1.5: How to identify a specific choice of a (COIN) program
  instance over others, thus allowing pinning the execution of a
  service of a specific (COIN) program to a specific resource (i.e.,
  a (COIN) program instance in the distributed environment)?

* RQ 3.1.6: How to provide affinity of service requests towards
  (COIN) program instances (i.e., longer-term transactions with
  ephemeral state established at a specific (COIN) program
  instance)?

* RQ 3.1.7: How to provide constraint-based routing decisions that
  can be realized at packet forwarding speed (e.g., using techniques
  explored in [SarNet2021] at the forwarding plane or using
  approaches like [Multi2020] for extended routing protocols)?

* RQ 3.1.8: What COIN capabilities may support the execution of
  (COIN) programs and their instances?

* RQ 3.1.9: How to ensure real-time synchronization and consistency
  of distributed application states among (COIN) program instances,
  in particular, when frequently changing the choice for a
  particular (COIN) program in terms of executing a service
  instance?
3.2. Extended Reality and Immersive Media

3.2.1. Description

Extended Reality (XR) encompasses VR, Augmented Reality (AR), and
Mixed Reality (MR).  It provides the basis for the metaverse and is
the driver of a number of advances in interactive technologies.
While initially associated with gaming and immersive entertainment,
applications now include remote diagnosis, maintenance, telemedicine,
manufacturing and assembly, intelligent agriculture, smart cities,
and immersive classrooms.  XR is one example of the multisource-
multidestination problem that combines video and haptics in
interactive multiparty interactions under strict delay requirements.
As such, XR can benefit from a functional distribution that includes
fog computing for local information processing, the edge for
aggregation, and the cloud for image processing.

XR stands to benefit significantly from computing capabilities in the
network.  For example, XR applications can offload intensive
processing tasks to edge servers, considerably reducing latency when
compared to cloud-based applications and enhancing the overall user
experience.  More importantly, COIN can enable collaborative XR
experiences, where multiple users interact in the same virtual space
in real time, regardless of their physical locations, by allowing
resource discovery and rerouting of XR streams.  While not a
feature of most XR implementations, this capability opens up new
possibilities for remote collaboration, training, and entertainment.
Furthermore, COIN can support dynamic content delivery, allowing XR
applications to seamlessly adapt to changing environments and user
interactions.  Hence, the integration of computing capabilities into
the network architecture enhances the scalability, flexibility, and
performance of XR applications by supplying telemetry and advanced
stream management, paving the way for more immersive and interactive
experiences.
...
data.  Because high bandwidth is needed for high resolution images
and local rendering for 3D images and holograms, strictly relying on
cloud-based architectures, even with headset processing, limits some
of its potential benefits in the collaborative space.  As a
consequence, innovation is needed to unlock the full potential of XR.

3.2.2. Characterization
As mentioned above, XR experiences, especially those involving
collaboration, are difficult to deliver with a client-server cloud-
based solution.  This is because they require a combination of
multistream aggregation, low delays and delay variations, means to
recover from losses, and optimized caching and rendering as close as
possible to the user at the network edge.  Hence, implementing such
XR solutions necessitates substantial computational power and minimal
latency, which, for now, has spurred the development of better
headsets, not networked or distributed solutions, as factors like
distance from cloud servers and limited bandwidth can still
significantly lower application responsiveness.  Furthermore, when XR
deals with sensitive information, XR applications must also provide a
secure environment and ensure user privacy, which represent
additional burdens for delay-sensitive applications.  Additionally,
the sheer amount of data needed for and generated by XR applications,
such as video holography, puts them squarely in the realm of data-
driven applications that can use recent trend analysis mechanisms, as
well as machine learning, to find the optimal caching and processing
solution and, ideally, reduce the size of the data that needs to
transit through the network.  Other mechanisms, such as data
filtering and reduction, and functional distribution and
partitioning, are also needed to accommodate the low delay needs for
the same applications.

With functional decomposition as the goal of a better XR experience,
the elements involved in a COIN XR implementation include:
* the XR application residing in the headset,

* edge federation services that allow local devices to communicate
  with one another directly,

* edge application servers that enable local processing but also
  intelligent stream aggregation to reduce bandwidth requirements,

* edge data networks that allow precaching of information based on
  locality and usage,

* cloud-based services for image processing and application
  training, and

* intelligent 5G/6G core networks for managing advanced access
  services and providing performance data for XR stream management.
These characteristics of XR paired with the capabilities of COIN make
it likely that COIN can help to realize XR over networks for
collaborative applications.  In particular, COIN functions can enable
the distribution of the service components across different nodes in
the network.  For example, data filtering, image rendering, and video
processing leverage different hardware capabilities with combinations
of CPUs and Graphics Processing Units (GPUs) at the network edge and
in the fog, where the content is consumed.  These represent possible
remedies for the high bandwidth demands of XR.  Machine learning
across the network nodes can better manage the data flows by
distributing them over more adequate paths.  In order to provide
adequate quality of experience, multivariate and heterogeneous
resource allocation and goal optimization problems need to be solved,
likely requiring advanced analysis and artificial intelligence.  For
the purpose of this document, it is important to note that the use of
COIN for XR does not imply a specific protocol but targets an
architecture enabling the deployment of the services.  In this
context, similar considerations as for Section 3.1 apply.
3.2.3. Existing Solutions

The XR field has profited from extensive research in the past years
in gaming, machine learning, network telemetry, high resolution
imaging, smart cities, and the Internet of Things (IoT).
Information-centric networking (and related) approaches that combine
publish-subscribe and distributed storage are also well suited for
the multisource-multidestination applications of XR.  New AR and VR
headsets and glasses have continued to evolve towards autonomy with
local computation capabilities, increasingly performing much of the
processing that is needed to render and augment the local images.
Mechanisms aimed at enhancing the computational and storage
capacities of mobile devices could also improve XR capabilities as
they include the discovery of available servers within the
environment and using them opportunistically to enhance the
performance of interactive applications and distributed file systems.
While there is still no specific COIN research in AR and VR, network
support is important to offload some of the computations related to
movement, multiuser interactions, and networked applications, notably
in gaming but also in health [NetworkedVR].  This new approach to
networked AR and VR is exemplified in [eCAR] by using synchronized
messaging at the edge to share the information that all users need to
interact.  In [CompNet2021] and [WirelessNet2024], the offloading
uses Artificial Intelligence (AI) to assign the 5G resources
necessary for the real-time interactions, and one could think that
implementing this scheme on a PND is essentially a natural next step.
Hence, as AR, VR, and XR are increasingly becoming more interactive,
the efficiency needed to implement novel applications will require
some form or another of edge-core implementation and COIN support.

In summary, some XR solutions exist, and headsets continue to evolve
to what is now claimed to be spatial computing.  Additionally, with
recent work on the metaverse, the number of publications related to
XR has skyrocketed.  However, in terms of networking, which is the
focus of this document, current deployments do not take advantage of
network capabilities.  The information is rendered and displayed
based on the local processing but does not readily discover the other
elements in the vicinity or in the network that could improve its
performance either locally, at the edge, or in the cloud.  Yet, there
are still very few interactive and immersive media applications over
networks that allow for federating systems capabilities.
3.2.4. Opportunities

While delay is inherently related to information transmission, if we
continue the analogy of the computer board to highlight some of the
COIN capabilities in terms of computation and storage but also
allocation of resources, there are some opportunities that XR could
take advantage of:

* Round trip time: 20 ms is usually cited as an upper limit for XR
  applications.  Storage and preprocessing of scenes in local
  elements (including in the mobile network) could extend the reach
  of XR applications at least over the extended edge.

* Video transmission: The use of better transcoding, advanced
  context-based compression algorithms, prefetching and precaching,
  as well as movement prediction all help to reduce bandwidth
  consumption.  While this is now limited to local processing, it is
  not outside the realm of COIN to push some of these
  functionalities to the network, especially as related to caching
  and fetching but also context-based flow direction and
  aggregation.

* Monitoring: Since bandwidth and data are fundamental to XR
  deployment, COIN functionality could help to better monitor and
  distribute the XR services over collaborating network elements to
  optimize end-to-end performance.

* Functional decomposition: Advanced functional decomposition,
  localization, and discovery of computing and storage resources in
  the network can help to optimize user experience in general.

* Intelligent network management and configuration: The move to AI
  in network management to learn about flows and adapt resources
  based on both data plane and control plane programmability can
  help the overall deployment of XR services.
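The 20 ms round-trip bound cited above can be viewed as a budget that
network transit and in-network processing must share.  The short
sketch below shows how a placement function might rule out tiers
whose network delay plus processing time would exceed that budget.
The per-tier delay numbers are purely illustrative assumptions, not
figures from this document.

```python
# Hypothetical per-tier round-trip estimates in milliseconds; these
# values are illustrative only.
TIER_RTT_MS = {"device": 0.0, "fog": 2.0, "edge": 8.0, "cloud": 60.0}

XR_RTT_BUDGET_MS = 20.0  # upper limit commonly cited for XR interactivity

def feasible_tiers(processing_ms: float) -> list[str]:
    """Tiers where network RTT plus processing stays within the XR budget."""
    return [tier for tier, rtt in TIER_RTT_MS.items()
            if rtt + processing_ms <= XR_RTT_BUDGET_MS]

# Under these example numbers, a 10 ms rendering step could run on the
# device, in the fog, or at the edge, but not in the cloud.
tiers = feasible_tiers(10.0)
```

Such a budget check is only the simplest form of the multivariate
placement problem described in Section 3.2.2; a full solution would
also weigh bandwidth, load, and energy across tiers.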
3.2.5. Research Questions

* RQ 3.2.1: Can current PNDs provide the speed required for
  executing complex filtering operations, including metadata
  analysis for complex and dynamic scene rendering?

* RQ 3.2.2: Where should PNDs equipped with these operations be
  located for optimal performance gains?
...
  data center and edge computers be leveraged for creating optimal
  function allocation and the creation of semi-permanent datasets
  and analytics for usage trending and flow management resulting in
  better localization of XR functions?

* RQ 3.2.4: Can COIN improve the dynamic distribution of control,
  forwarding, and storage resources and related usage models in XR,
  such as to integrate local and fog caching with cloud-based
  pre-rendering, thus jointly optimizing COIN and higher layer
  protocols to reduce latency and, more generally, manage the
  quality of XR sessions (e.g., through reduced in-network
  congestion and improved flow delivery by determining how to
  prioritize XR data)?

* RQ 3.2.5: Can COIN provide the necessary infrastructure for the
  use of interactive XR everywhere?  Particularly, how can a COIN
  system enable the joint collaboration across all segments of the
  network (fog, edge, core, and cloud) to support functional
  decompositions, including using edge resources without the need
  for a (remote) cloud connection?

* RQ 3.2.6: How can COIN systems provide multistream efficient
  transmission and stream combining at the edge, including the
  ability to dynamically include extra streams, such as audio and
  extra video tracks?
3.2.6. Additional Desirable Capabilities 3.2.6. Additional Desirable Capabilities
In addition to the capabilities driven by the research questions In addition to the capabilities driven by the research questions
above, there are a number of other features that solutions in this above, there are a number of other features that solutions in this
space might benefit from. In particular, the provided XR experience space might benefit from. In particular, the provided XR experience
should be optimized both in amount of transmitted data, while equally should be optimized both in the amount of transmitted data, while
optimizing loss protection. Furthermore, means for trend analysis equally optimizing loss protection. Furthermore, the means for trend
and telemetry to measure performance may foster uptake of the XR analysis and telemetry to measure performance may foster uptake of
services, while the interaction of the XR system with indoor and the XR services, while the interaction of the XR system with indoor
outdoor positioning systems may improve on service experience and and outdoor positioning systems may improve on service experience and
user perception. user perception.
3.3. Personalized and interactive performing arts 3.3. Personalized and Interactive Performing Arts
3.3.1. Description 3.3.1. Description
This use case is a deeper dive into a specific scenario of the This use case is a deeper dive into a specific scenario of the
immersive and extended reality class of use cases discussed in immersive and extended reality class of use cases discussed in
Section 3.2. It focuses on live productions of the performing arts Section 3.2. It focuses on live productions of the performing arts
where the performers and audience members are geographically where the performers and audience members are geographically
distributed. The performance is conveyed through multiple networked distributed. The performance is conveyed through multiple networked
streams which are tailored to the requirements of the remote streams, which are tailored to the requirements of the remote
performers, the director, sound and lighting technicians, and performers, the director, the sound and lighting technicians, and the
individual audience members; performers need to observe, interact and individual audience members. Performers need to observe, interact,
synchronize with other performers in remote locations; and the and synchronize with other performers in remote locations, and the
performers receive live feedback from the audience, which may also be performers receive live feedback from the audience, which may also be
conveyed to other audience members. conveyed to other audience members.
There are two main aspects: i) to emulate as closely as possible the There are two main aspects:
experience of live performances where the performers, audience,
director, and technicians are co-located in the same physical space, i. to emulate as closely as possible the experience of live
such as a theater; and ii) to enhance traditional physical performances where the performers, audience, director, and
performances with features such as personalization of the experience technicians are co-located in the same physical space, such as a
according to the preferences or needs of the performers, directors, theater; and
and audience members.
ii. to enhance traditional physical performances with features such
as personalization of the experience according to the
preferences or needs of the performers, directors, and audience
members.
Examples of personalization include: Examples of personalization include:
* Viewpoint selection such as choosing a specific seat in the * Viewpoint selection, such as choosing a specific seat in the
theater or for more advanced positioning of the audience member's theater or for more advanced positioning of the audience member's
viewpoint outside of the traditional seating - amongst, above, or viewpoint outside of the traditional seating (i.e., amongst,
behind the performers (but within some limits which may be imposed above, or behind the performers, but within some limits that may
by the performers or the director, for artistic reasons); be imposed by the performers or the director for artistic
reasons);
* Augmentation of the performance with subtitles, audio-description, * Augmentation of the performance with subtitles, audio description,
actor-tagging, language translation, advertisements/product- actor tagging, language translation, advertisements and product
placement, other enhancements/filters to make the performance placement, and other enhancements and filters to make the
accessible to disabled audience members (removal of flashing performance accessible to audience members who are disabled (e.g.,
images for epileptics, alternative color schemes for color-blind the removal of flashing images for audience members who have
audience members, etc.). epilepsy or alternative color schemes for those who have color
blindness).
3.3.2. Characterization 3.3.2. Characterization
There are several chained functional entities which are candidates There are several chained functional entities that are candidates for
for being deployed as (COIN) programs: being deployed as (COIN) programs:
* Performer aggregation and editing functions * Performer aggregation and editing functions
* Distribution and encoding functions * Distribution and encoding functions
* Personalization functions * Personalization functions
- to select which of the existing streams should be forwarded to - to select which of the existing streams should be forwarded to
the audience member, remote performer, or member of the the audience member, remote performer, or member of the
production team production team
- to augment streams with additional metadata such as subtitles - to augment streams with additional metadata such as subtitles
- to create new streams after processing existing ones, e.g., to - to create new streams after processing existing ones (e.g., to
interpolate between camera angles to create a new viewpoint or interpolate between camera angles to create a new viewpoint or
to render point clouds from an audience member's chosen to render point clouds from an audience member's chosen
perspective perspective)
- to undertake remote rendering according to viewer position, - to undertake remote rendering according to viewer position
e.g., creation of VR headset display streams according to (e.g., the creation of VR headset display streams according to
audience head position - when this processing has been audience head position) when this processing has been offloaded
offloaded from the viewer's end-system to the COIN function due from the viewer's end system to the COIN function due to
to limited processing power in the end-system, or to limited limited processing power in the end system or due to limited
network bandwidth to receive all of the individual streams to network bandwidth to receive all of the individual streams to
be processed. be processed.
* Audience feedback sensor processing functions * Audience feedback sensor processing functions
* Audience feedback aggregation functions * Audience feedback aggregation functions
These are candidates for deployment as (COIN) Programs in PNDs rather These are candidates for deployment as (COIN) programs in PNDs rather
than being located in end-systems (at the performers' site, the than being located in end systems (at the performers' site, the
audience members' premises or in a central cloud location) for audience members' premises, or in a central cloud location) for
several reasons: several reasons:
* Personalization of the performance according to viewer preferences * Personalization of the performance according to viewer preferences
and requirements makes it infeasible to be done in a centralized and requirements makes it infeasible to be done in a centralized
manner at the performer premises: the computational resources and manner at the performer premises: the computational resources and
network bandwidth would need to scale with the number of network bandwidth would need to scale with the number of
personalized streams. personalized streams.
* Rendering of VR headset content to follow viewer head movements * Rendering of VR headset content to follow viewer head movements
has an upper bound on lag to maintain viewer QoE, which requires has an upper bound on lag to maintain viewer Quality of Experience
the processing to be undertaken sufficiently close to the viewer (QoE), which requires the processing to be undertaken sufficiently
to avoid large network latencies. close to the viewer to avoid large network latencies.
* Viewer devices may not have the processing-power to perform the * Viewer devices may not have the processing power to perform the
personalization tasks, or the viewers' network may not have the personalization tasks, or the viewers' network may not have the
capacity to receive all of the constituent streams to undertake capacity to receive all of the constituent streams to undertake
the personalization functions. the personalization functions.
* There are strict latency requirements for live and interactive * There are strict latency requirements for live and interactive
aspects that require the deviation from the direct network path aspects that require the deviation from the direct network path
between performers and audience members to be minimized, which between performers and audience members to be minimized, which
reduces the opportunity to route streams via large-scale reduces the opportunity to route streams via large-scale
processing capabilities at centralized data-centers. processing capabilities at centralized data centers.
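   The scaling argument in the first reason above can be made concrete
   with a small back-of-the-envelope calculation.  All figures below are
   illustrative assumptions, not values taken from this document:

```python
# Why centralized personalization at the performer premises does not
# scale: uplink bandwidth grows linearly with the audience size.
# The per-viewer bitrate is an assumed value for a personalized stream.

PER_VIEWER_BITRATE_MBPS = 25          # assumption: one personalized stream
AUDIENCE_SIZES = [100, 1_000, 10_000]

for viewers in AUDIENCE_SIZES:
    # Required uplink at the central site, in Gbit/s.
    uplink_gbps = viewers * PER_VIEWER_BITRATE_MBPS / 1_000
    print(f"{viewers:>6} viewers -> {uplink_gbps:,.1f} Gbit/s uplink")
```

   Under these assumptions, 10,000 viewers already demand 250 Gbit/s at
   a single site, which motivates pushing the personalization functions
   into PNDs along the distribution paths instead.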
3.3.3.  Existing Solutions

   Note: Existing solutions for some aspects of this use case are
   covered in Section 3.1, Section 3.2, and Section 5.1.

3.3.4.  Opportunities

   *  Executing media processing and personalization functions on-path
      as (COIN) programs in PNDs can avoid detour/stretch to central
      servers, thus reducing latency and bandwidth consumption.  For
      example, the overall delay for performance capture, aggregation,
      distribution, personalization, consumption, capture of audience
      response, feedback processing, aggregation, and rendering should
      be achieved within an upper bound of latency (the tolerable amount
      is to be defined, but in the order of 100s of ms to mimic
      performers perceiving audience feedback, such as laughter or other
      emotional responses in a theater setting).

   *  Processing of media streams allows (COIN) programs, PNDs, and the
      wider (COIN) system/environment to be contextually aware of flows
      and their requirements, which can be used for determining network
      treatment of the flows (e.g., path selection, prioritization,
      multiflow coordination, synchronization, and resilience).

3.3.5.  Research Questions

   *  RQ 3.3.1: In which PNDs should (COIN) programs for aggregation,
      encoding, and personalization functions be located?  Close to the
      performers or close to the viewers?

   *  RQ 3.3.2: How far from the direct network path from performer to
      viewer should (COIN) programs be located, considering the latency
      implications of path-stretch and the availability of processing
      capacity at PNDs?  How should tolerances be defined by users?

   *  RQ 3.3.3: Should users decide which PNDs should be used for
      executing (COIN) programs for their flows, or should they express
      requirements and constraints that will direct decisions by the
      orchestrator/manager of a COIN system?  In case of the latter, how
      can users specify requirements on network and processing metrics
      (such as latency and throughput bounds)?

   *  RQ 3.3.4: How to achieve synchronization across multiple streams
      to allow for merging, audio-video interpolation, and other cross-
      stream processing functions that require time synchronization for
      the integrity of the output?  How can this be achieved considering
      that synchronization may be required between flows that are:

      i.    on the same data pathway through a PND/router,

      ii.   arriving/leaving through different ingress/egress interfaces
            of the same PND/router, or

      iii.  routed through disjoint paths through different PNDs/
            routers?

      This RQ raises issues associated with synchronization across
      multiple media streams and substreams [RFC7272] as well as time
      synchronization between PNDs/routers on multiple paths [RFC8039].

   *  RQ 3.3.5: Where will COIN programs be executed?  In the data plane
      of PNDs, in other on-router computational capabilities within
      PNDs, or in adjacent computational nodes?

   *  RQ 3.3.6: Are computationally intensive tasks, such as video
      stitching or media recognition and annotation (cf. Section 3.2),
      considered as suitable candidate (COIN) programs or should they be
      implemented in end systems?

   *  RQ 3.3.7: If the execution of COIN programs is offloaded to
      computational nodes outside of PNDs (e.g., for processing by
      GPUs), should this still be considered as COIN?  Where is the
      boundary between COIN capabilities and explicit routing of flows
      to end systems?
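   To illustrate what the cross-stream synchronization asked about in RQ
   3.3.4 involves at a single PND, the following sketch buffers frames
   per stream and releases them only once every stream has delivered a
   frame for the same presentation timestamp.  It is a hypothetical
   illustration, not a mechanism defined by [RFC7272] or [RFC8039]:

```python
# Hypothetical PND-side alignment buffer: cross-stream processing
# (merging, interpolation) only sees complete, timestamp-matched sets.
from collections import defaultdict

class SyncBuffer:
    def __init__(self, stream_ids):
        self.streams = set(stream_ids)
        self.buf = defaultdict(dict)   # timestamp -> {stream_id: frame}

    def push(self, stream_id, timestamp, frame):
        """Buffer a frame; return the aligned frame set once every
        stream has delivered a frame for this timestamp, else None."""
        self.buf[timestamp][stream_id] = frame
        if set(self.buf[timestamp]) == self.streams:
            return self.buf.pop(timestamp)
        return None

sync = SyncBuffer({"audio", "video"})
assert sync.push("video", 100, "v-frame") is None      # still waiting
group = sync.push("audio", 100, "a-frame")             # set complete
assert group == {"video": "v-frame", "audio": "a-frame"}
```

   A real deployment would additionally need bounded buffering and a
   policy for late or lost frames, which is exactly where the RQ's
   multi-path and multi-interface variants become hard.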
3.3.6.  Additional Desirable Capabilities

   In addition to the capabilities driven by the research questions
   above, there are a number of other features that solutions in this
   space might benefit from.  In particular, if users are indeed
   empowered to specify requirements on network and processing metrics,
   one important capability of COIN systems will be to respect these
   user-specified requirements and constraints when routing flows and
   selecting PNDs for executing (COIN) programs.  Similarly, solutions
   should be able to synchronize flow treatment and processing across
   multiple related flows, which may be on disjoint paths, to provide
   similar performance to different entities.

4.  Supporting New COIN Systems

4.1.  In-Network Control / Time-Sensitive Applications

4.1.1.  Description

   The control of physical processes and components of industrial
   production lines is essential for the growing automation of
   production and ideally allows for a consistent quality level.
   Traditionally, the control has been exercised by control software
   running on Programmable Logic Controllers (PLCs) located directly
   next to the controlled process or component.  This approach is best
   suited for settings with a simple model that is focused on a single
   or a few controlled components.

   Modern production lines and shop floors are characterized by an
   increasing number of involved devices and sensors, a growing level of
   dependency between the different components, and more complex control
   models.  A centralized control is desirable to manage the large
   amount of available information, which often has to be preprocessed
   or aggregated with other information before it can be used.  As a
   result, computations are increasingly spatially decoupled and moved
   away from the controlled objects, thus inducing additional latency.
   Instead, moving compute functionality onto COIN execution
   environments inside the network offers a new solution space to these
   challenges, providing new compute locations with much smaller
   latencies.

4.1.2.  Characterization

   A control process consists of two main components as illustrated in
   Figure 2: a system under control and a controller.  In feedback
   control, the current state of the system is monitored (e.g., using
   sensors), and the controller influences the system based on the
   difference between the current and the reference state to keep it
   close to this reference state.

    reference
      state      ------------        --------   Output
    ----------> | Controller | ---> | System | ---------->
         ^       ------------        --------  |
         |                                     |
         |  observed state                     |
         |        ---------                    |
         --------| Sensors | <------------------
                  ---------

             Figure 2: Simple Feedback Control Model
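   The loop in Figure 2 can be made concrete with a minimal discrete-
   time proportional controller.  The plant model and the gain below are
   illustrative assumptions, not part of any particular COIN deployment:

```python
# Minimal closed feedback loop: the controller output is proportional
# to the difference between the reference and the observed state.

def control_step(reference, observed, kp=0.5):
    """Controller: act on the state error (reference - observed)."""
    return kp * (reference - observed)

# Toy "system under control": its state simply moves by the actuation.
state = 0.0
reference = 10.0
for _ in range(20):                 # sensor -> controller -> system loop
    state += control_step(reference, state)
# After 20 iterations the state has converged close to the reference.
```

   The quality of this loop degrades quickly if the "sensor" feedback in
   the loop arrives late or with jitter, which is precisely the latency
   sensitivity described in the next paragraph.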
   Apart from the control model, the quality of the control primarily
   depends on the timely reception of the sensor feedback, which can be
   subject to tight latency constraints, often in the single-digit
   millisecond range.  Even shorter feedback requirements may exist in
   other use cases, such as interferometry or high-energy physics, but
   these use cases are out of scope for this document.  While low
   latencies are essential, there is an even greater need for stable and
   deterministic levels of latency, because controllers can generally
   cope with different levels of latency if they are designed for them,
   but they are significantly challenged by dynamically changing or
   unstable latencies.  The unpredictable latency of the Internet
   exemplifies this problem if, for example, off-premise cloud platforms
   are included.

4.1.3.  Existing Solutions

   Control functionality is traditionally executed on PLCs close to the
   machinery.  These PLCs typically require vendor-specific
   implementations and are often hard to upgrade and update, which makes
   such control processes inflexible and difficult to manage.  Moving
   computations to more freely programmable devices thus has the
   potential of significantly improving the flexibility.  In this
   context, directly moving control functionality to (central) cloud
   environments is generally possible, yet only feasible if latency
   constraints are lenient.

   Early approaches such as [RÜTH] and [VESTIN] have already shown the
   general applicability of leveraging COIN for in-network control.

4.1.4.  Opportunities

   *  Performing simple control logic on PNDs and/or in COIN execution
      environments can bring the controlled system and the controller
      closer together, possibly satisfying the tight latency
      requirements.

   *  Creating a coupled control that is exercised via

      i.   simplified approximations of more complex control algorithms
           deployed in COIN execution environments, and

      ii.  more complex overall control schemes deployed in the cloud

      can allow for quicker, yet more inaccurate responses from within
      the network while still providing for sufficient control accuracy
      at higher latencies from afar.
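   The coupled-control idea above can be sketched as follows.  Both
   controllers here are placeholders chosen for illustration (a bang-
   bang rule for the in-network approximation, a proportional term for
   the cloud controller); neither is prescribed by this use case:

```python
# Sketch: a coarse in-network approximation reacts immediately, while
# a more accurate "cloud" controller refines the actuation when its
# (slower) result arrives.  Both control laws are assumptions.

def in_network_control(error):
    """Coarse bang-bang response, cheap enough for a PND data plane."""
    return 1.0 if error > 0 else -1.0 if error < 0 else 0.0

def cloud_control(error, kp=0.8):
    """More accurate controller, placeholder for a complex scheme."""
    return kp * error

error = 2.5
actuation = in_network_control(error)   # available within microseconds
# ...once the cloud result arrives milliseconds later, it overrides
# the coarse in-network decision:
actuation = cloud_control(error)
```

   The open design question, reflected in the RQs below, is how these
   two levels hand over to each other and who arbitrates conflicts.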
4.1.5.  Research Questions

   *  RQ 4.1.1: How to derive simplified versions of the global
      (control) function?

   *  RQ 4.1.2: How to account for the limited computational precision
      of PNDs that typically only allow for integer precision
      computation for enabling high processing rates, while floating-
      point precision is needed by most control algorithms (cf.
      [KUNZE-APPLICABILITY])?

   *  RQ 4.1.3: How to find suitable tradeoffs regarding simplicity of
      the control function ("accuracy of the control") and
      implementation complexity ("implementability")?

   *  RQ 4.1.4: How to (dynamically) distribute simplified versions of
      the global (control) function among COIN execution environments?

   *  RQ 4.1.5: How to (dynamically) compose or recompose the
      distributed control functions?

   *  RQ 4.1.6: Can there be different control levels, e.g., "quite
      inaccurate & very low latency" (PNDs, deep in the network), "more
      accurate & higher latency" (more powerful COIN execution
      environments, farther away), "very accurate & very high latency"
      (cloud environments, far away)?

   *  RQ 4.1.7: Who decides which control instance is executed and which
      information can be used for this decision?

   *  RQ 4.1.8: How do the different control instances interact and how
      can we define their hierarchy?
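   One common workaround for the precision gap raised in RQ 4.1.2 is
   fixed-point arithmetic: a floating-point gain is scaled into an
   integer domain so that a pipeline restricted to integer operations
   can approximate it with a multiply and a shift.  The sketch below is
   an illustration of that technique, not a solution mandated by the RQ:

```python
# Q8 fixed-point approximation of a floating-point controller gain,
# using only integer multiply and shift as an integer-only PND could.

SHIFT = 8                                   # scale factor 2**8
KP_FLOAT = 0.5                              # assumed controller gain
KP_FIXED = round(KP_FLOAT * (1 << SHIFT))   # 0.5 * 256 = 128

def control_fixed(error_int):
    """Integer-only actuation: (gain * error) rescaled by shifting."""
    return (KP_FIXED * error_int) >> SHIFT

# control_fixed(100) yields 50, matching 0.5 * 100 exactly; small
# errors like 3 truncate (1 instead of 1.5), illustrating the
# accuracy/implementability tradeoff of RQ 4.1.3.
```

   Choosing the shift width is itself a tradeoff: more fractional bits
   improve accuracy but shrink the usable integer range.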
4.1.6.  Additional Desirable Capabilities

   In addition to the capabilities driven by the research questions
   above, there are a number of other features that approaches deploying
   control functionality in COIN execution environments could benefit
   from.  For example, having an explicit interaction between the COIN
   execution environments and the global controller would ensure that it
   is always clear which entity is emitting which signals.  In this
   context, it is also important that actions of COIN execution
   environments are overridable by the global controller such that the
   global controller has the final say in the process behavior.
   Finally, by accommodating the general characteristics of control
   approaches, functions in COIN execution environments should ideally
   expose reliable information on the predicted delay and must expose
   reliable information on the predicted accuracy to the global control
   such that these aspects can be accommodated in the overall control.

4.2.  Large-Volume Applications

4.2.1.  Description

   In modern industrial networks, processes and machines are extensively
   monitored by distributed sensors with a large spectrum of
   capabilities, ranging from simple binary (e.g., light barriers) to
   sophisticated sensors with varying degrees of resolution.  Sensors
   further serve different purposes, as some are used for time-critical
   process control, while others represent redundant fallback platforms.
   Overall, there is a high level of heterogeneity, which makes managing
   the sensor output a challenging task.

   Depending on the deployed sensors and the complexity of the observed
   system, the resulting overall data volume can easily be in the range
   of several Gbit/s [GLEBKE].  These volumes are often already
   difficult to handle in local environments, and it becomes even more
   challenging when off-premise clouds are used for managing the data.
   While large networking companies can simply upgrade their
   infrastructure to accommodate the accruing data volumes, most
   industrial companies operate on tight infrastructure budgets such
   that frequently upgrading is not always feasible or possible.  Hence,
   a major challenge is to devise a methodology that is able to handle
   such amounts of data efficiently and flexibly without relying on
   recurring infrastructure upgrades.

Data filtering and preprocessing, similar to the considerations in
Section 3.2, can be building blocks for new solutions in this space.
Such solutions, however, might also have to address the added
challenge of business data leaving the premises and control of the
company.  As this data could include sensitive information or
valuable business secrets, additional security measures have to be
taken.  Yet, typical security measures such as encrypting the data
make filtering or preprocessing approaches hardly applicable as they
4.2.2.  Characterization

In essence, the described monitoring systems consist of sensors that
produce large volumes of monitoring data.  This data is then
transmitted to additional components that provide data processing and
analysis capabilities or simply store the data in large data silos.
As sensors are often set up redundantly, parts of the collected data
might also be redundant.  Moreover, sensors are often hard to
configure or not configurable at all, which is why their resolution
or sampling frequency is often larger than required.  Consequently,
it is likely that more data is transmitted than is needed or desired,
prompting the deployment of filtering techniques.  For example, COIN
programs deployed in the on-premise network could filter out
redundant or undesired data before it leaves the premise using simple
traffic filters, thus reducing the required (upload) bandwidths.  The
available sensor data could be scaled down using standard statistical
sampling, packet-based sub-sampling (i.e., only forwarding every n-th
packet), or using filtering as long as the sensor value is in an
uninteresting range while forwarding with a higher resolution once
the sensor value range becomes interesting (cf. [KUNZE-SIGNAL]).
While the former variants are oblivious to the semantics of the
sensor data, the latter variant requires an understanding of the
current sensor levels.  In any case, it is important that end hosts
are informed about the filtering so that they can distinguish between
data loss and data filtered out on purpose.
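
The two filtering variants could, for instance, be sketched as
follows.  This is an illustrative sketch only; the function names,
thresholds, and data layout are assumptions made for this example and
are not part of any cited system.

```python
def subsample(packets, n):
    """Semantics-oblivious variant: forward only every n-th packet,
    regardless of its content."""
    return [p for i, p in enumerate(packets) if i % n == 0]


def range_filter(readings, lo, hi, keep_every=10):
    """Semantics-aware variant (cf. [KUNZE-SIGNAL]): forward all
    readings in the interesting range [lo, hi] at full resolution,
    but heavily sub-sample readings outside of it."""
    out = []
    uninteresting = 0
    for value in readings:
        if lo <= value <= hi:
            out.append(value)          # interesting: keep everything
        else:
            if uninteresting % keep_every == 0:
                out.append(value)      # keep only a coarse baseline
            uninteresting += 1
    return out
```

As noted above, a receiving end host would additionally need to be
told that such filtering is active, e.g., via a marker in the data
stream, so that dropped readings are not mistaken for data loss.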

In practice, the collected data is further processed using various
forms of computation.  Some of them are very complex or need the
complete sensor data during the computation, but there are also
simpler operations that can already be done on subsets of the overall
dataset or earlier on the communication path as soon as all data is
available.  One example is finding the maximum of all sensor values,
which can either be done iteratively at each intermediate hop or at
the first hop where all data is available.  Using expert knowledge
about the exact computation steps and the concrete transmission path
of the sensor data, simple computation steps can thus be deployed in
the on-premise network, again reducing the overall data volume.
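
The iterative variant of the maximum example can be sketched as a
fold over the hops of the transmission path; the topology model and
names below are purely illustrative assumptions.

```python
def aggregate_max_per_hop(hops):
    """Illustrative in-network maximum aggregation: each hop forwards
    only the maximum of its locally arriving sensor values and the
    value forwarded by the previous hop, so the data volume shrinks
    along the path.

    hops: list of lists, one list of sensor values per hop.
    Returns the single value forwarded by each hop."""
    forwarded = None
    trace = []
    for values in hops:
        local = values + ([forwarded] if forwarded is not None else [])
        forwarded = max(local)
        trace.append(forwarded)
    return trace
```

Doing the same computation at the first hop where all data is
available instead corresponds to a single `max()` over the union of
all sensor values.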

4.2.3.  Existing Solutions

Current approaches for handling such large amounts of information
typically build upon stream processing frameworks such as Apache
Flink.  These solutions allow for handling large-volume applications
and map the compute functionality to performant server machines or
distributed compute platforms.  Augmenting the existing capabilities,
COIN offers a new dimension of platforms for such processing
frameworks.

4.2.4.  Opportunities

*  (Stream) processing frameworks can become more flexible by
   introducing COIN execution environments as additional deployment
   targets.

*  (Semantic) packet filtering based on packet header and payload, as
   well as multipacket information, can (drastically) reduce the data
   volume, possibly even without losing any important information.

*  (Semantic) data preprocessing and processing (e.g., in the form of
   computations across multiple packets and potentially leveraging
   packet payload) can also reduce the data volume without losing any
   important information.

4.2.5.  Research Questions

Some of the following research questions are also relevant in the
context of general stream processing systems.

*  RQ 4.2.1: How can the overall data processing pipeline be divided
   into individual processing steps that could then be deployed as
   COIN functionality?

*  RQ 4.2.2: How to design COIN programs for (semantic) packet
   filtering and which filtering criteria make sense?

*  RQ 4.2.3: Which kinds of COIN programs can be leveraged for
   (pre)processing steps and what complexity can they have?

*  RQ 4.2.4: How to distribute and coordinate COIN programs?

*  RQ 4.2.5: How to dynamically reconfigure and recompose COIN
   programs?

*  RQ 4.2.6: How to incorporate the (pre)processing and filtering
   steps into the overall system?

*  RQ 4.2.7: How can changes to the data by COIN programs be signaled
   to the end hosts?

4.2.6.  Additional Desirable Capabilities

In addition to the capabilities driven by the research questions
above, there are a number of other features that such large-volume
applications could benefit from.  In particular, conforming to
standard application-level syntax and semantics likely simplifies
embedding filters and preprocessors into the overall system.  If
these filters and preprocessors also leverage packet header and
payload information for their operation, this could further improve
the performance of any approach developed based on the above research
questions.

4.3.  Industrial Safety

4.3.1.  Description

Despite an increasing automation in production processes, human
workers are still often necessary.  Consequently, safety measures
have a high priority to ensure that no human life is endangered.  In
traditional factories, the regions of contact between humans and
machines are well defined and interactions are simple.  Simple safety
measures like emergency switches at the working positions are enough
to provide a good level of safety.

Modern factories are characterized by increasingly dynamic and
complex environments with new interaction scenarios between humans
and robots.  Robots can directly assist humans, perform tasks
autonomously, or even freely move around on the shop floor.  Hence,
the intersect between the human working area and the robots grows,
and it is harder for human workers to fully observe the complete
environment.  Additional safety measures are essential to prevent
accidents and support humans in observing the environment.

4.3.2.  Characterization

Industrial safety measures are typically hardware solutions because
they have to pass rigorous testing before they are certified and
deployment ready.  Standard measures include safety switches and
light barriers.  Additionally, the working area can be explicitly
divided into "contact" and "safe" areas, indicating when workers have
to watch out for interactions with machinery.  For example, markings
on the factory floor can show the areas where robots move or indicate
their maximum physical reach.

These measures are static solutions, potentially relying on
specialized hardware, and are challenged by the increased dynamics of
modern factories where the factory configuration can be changed on
demand or where all entities are freely moving around.  Software
solutions offer higher flexibility as they can dynamically respect
new information gathered by the sensor systems, but in most cases
they cannot give guaranteed safety.  COIN systems could leverage the
increased availability of sensor data and the detailed monitoring of
the factories to enable additional safety measures with shorter
response times and higher guarantees.  Different safety indicators
within the production hall could be combined within the network so
that PNDs can give early responses if a potential safety breach is
detected.  For example, the positions of human workers and robots
could be tracked, and robots could be stopped when they get too close
to a human in a non-working area or if a human enters a defined
safety zone.  More advanced concepts could also include image data or
combine arbitrary sensor data.  Finally, the increasing
softwarization of industrial processes can also lead to new problems,
e.g., if software bugs cause unintended movements of robots.  Here,
COIN systems could independently double-check issued commands to void
unsafe commands.
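
A minimal sketch of the position-based safety rule described above
might look as follows when running on a PND.  The distance threshold,
zone layout, and all names are hypothetical assumptions made for this
illustration; a certified implementation would look very different.

```python
import math

SAFETY_DISTANCE = 2.0  # meters; assumed threshold for illustration


def too_close(robot_pos, human_pos):
    """True if a robot is within the assumed safety distance."""
    return math.dist(robot_pos, human_pos) < SAFETY_DISTANCE


def in_zone(pos, zone):
    """True if a position lies inside a rectangular safety zone,
    given as ((x0, y0), (x1, y1))."""
    (x0, y0), (x1, y1) = zone
    return x0 <= pos[0] <= x1 and y0 <= pos[1] <= y1


def stop_commands(robots, humans, safety_zone):
    """Combine tracked positions into a set of robot ids to stop:
    any robot too close to a human, or all robots on a zone breach."""
    stop = set()
    for rid, rpos in robots.items():
        if any(too_close(rpos, h) for h in humans):
            stop.add(rid)
    if any(in_zone(h, safety_zone) for h in humans):
        stop.update(robots)  # zone breach: stop everything
    return stop
```

Such a software check could complement, but as argued above not
replace, the certified hardware safety measures.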

4.3.3.  Existing Solutions

Due to the importance of safety, there is a wide range of
software-based approaches aiming at enhancing safety.  One example is
tag-based systems (e.g., using RFID), where drivers of forklifts can
be warned if pedestrian workers carrying tags are nearby.  Such
solutions, however, require setting up an additional system and do
not leverage existing sensor data.

4.3.4.  Opportunities

*  Executing safety-critical COIN functions on PNDs could allow for
   early emergency reactions based on diverse sensor feedback with
   low latencies.

4.3.5.  Research Questions

*  RQ 4.3.1: Which additional safety measures can be provided and do
   they actually improve safety?

*  RQ 4.3.2: Which sensor information can be combined and how?

*  RQ 4.3.3: How can COIN-based safety measures be integrated with
   existing safety measures without degrading safety?

*  RQ 4.3.4: How can COIN software validate control
   software-initiated commands to prevent unsafe operations?

5.  Improving Existing COIN Capabilities

5.1.  Content Delivery Networks

5.1.1.  Description

Delivery of content to end users often relies on Content Delivery
Networks (CDNs).  CDNs store said content closer to end users for
latency-reduced delivery as well as to reduce load on origin servers.
For this, they often utilize DNS-based indirection to serve the
request on behalf of the origin server.  Both of these objectives are
within scope to be addressed by COIN methods and solutions.

5.1.2.  Characterization

From the perspective of this document, a CDN can be interpreted as a
(network service level) set of (COIN) programs.  These programs
implement a distributed logic for first distributing content from the
origin server to the CDN ingress and then further to the CDN
replication points, which ultimately serve the user-facing content
requests.

5.1.3.  Existing Solutions

CDN technologies have been well described and deployed in the
existing Internet.  Core technologies like Global Server Load
Balancing (GSLB) [GSLB] and Anycast server solutions are used to deal
with the required indirection of a content request (usually in the
form of an HTTP request) to the most suitable local CDN server.
Content is replicated from seeding servers, which serve as injection
points for content from content owners/producers, to the actual CDN
servers, which will eventually serve the user's request.  The
replication architecture and mechanisms themselves differ from one
(CDN) provider to another, and often utilize private peering or
network arrangements in order to distribute the content
internationally and regionally.

Studies such as those in [FCDN] have shown that content distribution
at the level of named content, utilizing efficient (e.g., Layer 2
(L2)) multicast for replication towards edge CDN nodes, can
significantly increase the overall network and server efficiency.  It
also reduces indirection latency for content retrieval as well as
required edge storage capacity by benefiting from the increased
network efficiency to renew edge content more quickly against
changing demand.  Works such as those in [SILKROAD] utilize
Application-Specific Integrated Circuits (ASICs) to replace
server-based load balancing with significant cost reductions, thus
showcasing the potential for in-network CDN operations.

5.1.4.  Opportunities

*  Supporting service-level routing of requests (such as service
   routing in [APPCENTRES]) to specific (COIN) program instances may
   improve on end-user experience in retrieving faster (and possibly
   better quality) content.

*  COIN instances may also be utilized to integrate service-related
   telemetry information to support the selection of the final
   service instance destination from a pool of possible choices.

*  Supporting the selection of a service destination from a set of
   possible (e.g., virtualized, distributed) choices, e.g., through
   constraint-based routing decisions (see [APPCENTRES]) in (COIN)
   program instances to improve the overall end-user experience by
   selecting a "more suitable" service destination over another,
   e.g., avoiding/reducing overload situations in specific service
   destinations.

*  Supporting L2 capabilities for multicast (compute interconnection
   and collective communication in [APPCENTRES]), e.g., through
   in-network/switch-based replication decisions (and their
   optimizations) based on dynamic group membership information, may
   reduce the network utilization and therefore increase the overall
   system efficiency.
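
A switch-based replication decision driven by dynamic group
membership could be sketched as follows; the state model, class name,
and API are assumptions for illustration only, not a description of
any particular switch architecture.

```python
class MulticastState:
    """Illustrative switch-local multicast state: a packet for a
    group is replicated only to egress ports with current members,
    and the fanout changes as membership changes."""

    def __init__(self):
        self.members = {}  # group id -> set of egress ports

    def join(self, group, port):
        self.members.setdefault(group, set()).add(port)

    def leave(self, group, port):
        self.members.get(group, set()).discard(port)

    def replicate(self, group, packet):
        """Return (port, packet) pairs for the current fanout."""
        return [(port, packet)
                for port in sorted(self.members.get(group, ()))]
```

Keeping this membership state on the forwarding element itself is
what distinguishes such in-network replication from endpoint-based
replication, at the cost of the state and update traffic that the
research questions below examine.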

5.1.5.  Research Questions

In addition to the research questions in Section 3.1.5:

*  RQ 5.1.1: How to utilize L2 multicast to improve on CDN designs?
   How to utilize COIN capabilities in those designs, such as through
   on-path optimizations for fanouts?

*  RQ 5.1.2: What forwarding methods may support the required
   multicast capabilities (see [FCDN]) and how could programmable
   COIN forwarding elements support those methods (e.g., extending
   current SDN capabilities)?

*  RQ 5.1.3: What are the constraints, reflecting both compute and
   network capabilities, that may support joint optimization of
   routing and computing?  How could intermediary (COIN) program
   instances support, for example, the aggregation of those
   constraints to reduce overall telemetry network traffic?

*  RQ 5.1.4: Could traffic steering be performed on the data path and
   per service request (e.g., through (COIN) program instances that
   perform novel routing request lookup methods)?  If so, what would
   be the performance improvements?

*  RQ 5.1.5: How could storage be traded off against frequent,
   multicast-based replication (see [FCDN])?  Could intermediary/
   in-network (COIN) elements support the storage beyond current
   endpoint-based methods?

*  RQ 5.1.6: What scalability limits exist for L2 multicast
   capabilities?  How to overcome them, e.g., through (COIN) program
   instances serving as stateful subtree aggregators to reduce the
   needed identifier space (e.g., for bit-based forwarding)?

5.2.  Compute-Fabric-as-a-Service (CFaaS)

5.2.1.  Description

We interpret connected compute resources as operating at a suitable
layer, such as Ethernet, InfiniBand, but also at Layer 3 (L3), to
allow for the exchange of suitable invocation methods, such as those
exposed through verb-based or socket-based APIs.  The specific
invocations here are subject to the applications running over a
selected pool of such connected compute resources.

Providing such a pool of connected compute resources (e.g., in
regional or edge data centers, base stations, and even end-user
devices) opens up the opportunity for infrastructure providers to
offer CFaaS-like offerings to application providers, leaving the
choice of the appropriate invocation method to the app and service
provider.  Through this, the compute resources can be utilized to
execute the desired (COIN) programs of which the application is
composed, while utilizing the interconnection between those compute
resources to do so in a distributed manner.
5.2.2.  Characterization

We foresee those CFaaS offerings to be tenant-specific, with a tenant
here defined as the provider of at least one application.  For this,
we foresee an interaction between the CFaaS provider and tenant to
dynamically select the appropriate resources to define the demand
side of the fabric.  Conversely, we also foresee the supply side of
the fabric to be highly dynamic, with resources being offered to the
fabric through, for example, user-provided resources (whose supply
might depend on highly context-specific supply policies) or
infrastructure resources of intermittent availability such as those
provided through road-side infrastructure in vehicular scenarios.
The resulting dynamic demand-supply matching establishes a dynamic
nature of the compute fabric that in turn requires trust
relationships to be built dynamically between the resource
provider(s) and the CFaaS provider.  This also requires the
communication resources to be dynamically adjusted to suitably
interconnect all resources into the (tenant-specific) fabric exposed
as CFaaS.
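As a non-normative illustration of the demand-supply matching sketched above, the following Python fragment admits only trusted and currently available resources into a tenant's fabric until the demanded capacity is covered.  All names (Resource, build_fabric, the example resources) are hypothetical and do not correspond to any defined CFaaS interface.

```python
# Hypothetical sketch of tenant-driven demand-supply matching for a
# CFaaS fabric; names and the admission policy are illustrative only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Resource:
    name: str
    cpu_cores: int
    trusted: bool    # trust relationship established with the CFaaS provider
    available: bool  # supply may be intermittent (e.g., road-side units)

def build_fabric(offered: list[Resource], demanded_cores: int) -> list[Resource]:
    """Select trusted, currently available resources until the tenant's
    demanded capacity is covered; returns the selected fabric members."""
    fabric, cores = [], 0
    for r in sorted(offered, key=lambda res: -res.cpu_cores):
        if not (r.trusted and r.available):
            continue  # untrusted or unavailable supply is never admitted
        fabric.append(r)
        cores += r.cpu_cores
        if cores >= demanded_cores:
            break
    return fabric

offered = [
    Resource("edge-dc", 16, trusted=True, available=True),
    Resource("road-side-unit", 4, trusted=True, available=False),
    Resource("user-device", 2, trusted=False, available=True),
    Resource("base-station", 8, trusted=True, available=True),
]
fabric = build_fabric(offered, demanded_cores=20)
print([r.name for r in fabric])  # ['edge-dc', 'base-station']
```

Note that re-running the selection whenever supply changes (a resource appearing or disappearing) captures, in miniature, the dynamic re-adjustment of the fabric described above.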
5.2.3.  Existing Solutions

There exist a number of technologies to build non-local (wide area)
L2 as well as L3 networks, which in turn allows for connecting
compute resources for a distributed computational task.  For
instance, 5G-LAN [SA2-5GLAN] specifies a cellular L2 bearer for
interconnecting L2 resources within a single cellular operator.  The
work in [ICN-5GLAN] outlines using a path-based forwarding solution
over 5G-LAN as well as SDN-based LAN connectivity together with an
Information-Centric Network (ICN) based naming of IP and HTTP-level
resources.  This is done in order to achieve computational
interconnections, including scenarios such as those outlined in
Section 3.1.  L2 network virtualization (see [L2Virt]) is one of the
methods used for realizing so-called "cloud-native" applications for
applications developed with "physical" networks in mind, thus forming
an interconnected compute and storage fabric.
5.2.4.  Opportunities

*  Supporting service-level routing of compute resource requests
   (such as service routing in [APPCENTRES]) may allow for utilizing
   the wealth of compute resources in the overall CFaaS fabric for
   execution of distributed applications, where the distributed
   constituents of those applications are realized as (COIN) programs
   and executed within a COIN system as (COIN) program instances.

*  Supporting the constraint-based selection of a specific (COIN)
   program instance over others (such as constraint-based routing in
   [APPCENTRES]) will allow for optimizing both the CFaaS provider
   constraints as well as tenant-specific constraints.

*  Supporting L2 and L3 capabilities for multicast (such as compute
   interconnection and collective communication in [APPCENTRES]) will
   allow for decreasing not only network utilization but also
   possible compute utilization (due to avoiding unicast replication
   at those compute endpoints), thereby decreasing total cost of
   ownership for the CFaaS offering.

*  Supporting the enforcement of trust relationships and isolation
   policies through intermediary (COIN) program instances, e.g.,
   enforcing specific traffic shares or strict isolation of traffic
   through differentiated queueing.
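The constraint-based selection opportunity above can be sketched as follows.  The scoring scheme (a tenant latency bound treated as a hard constraint, with provider load as the selection criterion among eligible instances) is an illustrative assumption, not a defined COIN mechanism.

```python
# Illustrative sketch of constraint-based selection of a (COIN) program
# instance, combining a tenant constraint (latency bound) with a
# provider constraint (load balancing); all values are hypothetical.

def select_instance(instances, max_latency_ms):
    """Return the least-loaded instance satisfying the tenant's latency
    constraint, or None if no instance qualifies."""
    eligible = [i for i in instances if i["latency_ms"] <= max_latency_ms]
    if not eligible:
        return None
    # Provider constraint: prefer low load to balance the fabric.
    return min(eligible, key=lambda i: i["load"])

instances = [
    {"name": "pnd-1", "latency_ms": 2, "load": 0.9},
    {"name": "edge-1", "latency_ms": 8, "load": 0.2},
    {"name": "cloud-1", "latency_ms": 40, "load": 0.1},
]
print(select_instance(instances, max_latency_ms=10)["name"])  # edge-1
```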
5.2.5.  Research Questions

In addition to the research questions in Section 3.1.5:

*  RQ 5.2.1: How to convey tenant-specific requirements for the
   creation of the CFaaS fabric?

*  RQ 5.2.2: How to dynamically integrate resources into the compute
   fabric being utilized for the app execution (those resources
   include, but are not limited to, end-user provided resources),
   particularly when driven by tenant-level requirements and changing
   service-specific constraints?  How can those resources be exposed
   through possible (COIN) execution environments?

*  RQ 5.2.3: How to utilize COIN capabilities to aid the availability
   and accountability of resources, i.e., what may be (COIN) programs
   for a CFaaS environment that in turn would utilize the distributed
   execution capability of a COIN system?

*  RQ 5.2.4: How to utilize COIN capabilities to enforce traffic and
[...]
*  RQ 5.2.5: How to optimize the interconnection of compute
   resources, including those dynamically added and removed during
   the provisioning of the tenant-specific compute fabric?
5.3.  Virtual Networks Programming

5.3.1.  Description

The term "virtual network programming" is proposed to describe
mechanisms by which tenants deploy and operate COIN programs in their
virtual network.  Such COIN programs can be, for example, P4
programs, OpenFlow rules, or higher layer programs.  This feature can
enable other use cases described in this document to be deployed
using virtual network services, over underlying networks such as data
centers, mobile networks, or other fixed or wireless networks.
For example, COIN programs could perform the following on a tenant's
virtual network:

*  Allow or block flows, and request rules from an SDN controller for
   each new flow, or for flows to or from specific hosts that need
   enhanced security.

*  Forward a copy of some flows towards a node for storage and
   analysis.

*  Update metrics based on specific sources/destinations or
   protocols, for detailed analytics.

*  Associate traffic between specific endpoints, using specific
   protocols, or originated from a given application, to a given
   slice, while other traffic uses a default slice.

*  Experiment with a new routing protocol (e.g., ICN), using a P4
   implementation of a router for this protocol.
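A minimal sketch of the first two bullets above (allow/block with a controller request on table miss, plus mirroring selected flows towards an analysis node), modeled as a match-action table in Python.  The hosts, protocols, and controller policy are hypothetical, and a real deployment would express this in, e.g., P4 or OpenFlow rather than a general-purpose language.

```python
# Hypothetical match-action sketch: allow/block flows, punt unknown
# flows to an SDN controller for a rule, and mirror selected flows.

SENSITIVE_HOSTS = {"10.0.0.5"}  # hosts needing enhanced security
MIRRORED_PROTOS = {"dns"}       # flows copied for storage and analysis

flow_table = {}  # (src, dst, proto) -> "allow" | "block"

def controller_request(flow):
    """Stand-in for asking an SDN controller: block flows touching
    sensitive hosts, allow everything else (an assumed policy)."""
    src, dst, _ = flow
    return "block" if {src, dst} & SENSITIVE_HOSTS else "allow"

def process(flow):
    if flow not in flow_table:            # table miss: ask the controller
        flow_table[flow] = controller_request(flow)
    action = flow_table[flow]
    mirror = flow[2] in MIRRORED_PROTOS   # copy towards the analysis node
    return action, mirror

print(process(("10.0.0.1", "10.0.0.2", "dns")))   # ('allow', True)
print(process(("10.0.0.1", "10.0.0.5", "http")))  # ('block', False)
```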
5.3.2.  Characterization

To provide a concrete example of virtual COIN programming, we
consider a use case using a 5G underlying network, the 5GLAN
virtualization technology, and the P4 programming language and
environment.  As an assumption in this use case, some mobile network
equipment (e.g., User Plane Functions (UPFs)) and devices (e.g.,
mobile phones or residential gateways) include a network switch
functionality that is used as a PND.
Section 5.1 of [ICN-5GC] provides a description of the 5G network
functions and interfaces relevant to 5GLAN, which are otherwise
specified in [TS23.501] and [TS23.502].  From the 5GLAN service
customer/tenant standpoint, the 5G network operates as a switch.
In the use case depicted in Figure 3, the tenant operates a network
including a 5GLAN network segment (seen as a single logical switch),
as well as fixed segments.  The mobile devices (or User Equipment
nodes) UE1, UE2, UE3, and UE4 are in the same 5GLAN, as well as
Device1 and Device2 (through UE4).  This scenario can take place in a
plant or enterprise network, using a 5G non-public network, for
example.  The tenant uses P4 programs to determine the operation of
both the fixed and 5GLAN switches.  The tenant provisions a 5GLAN P4
program into the mobile network and can also operate a controller.
                      ..... Tenant ........
           P4 program :                   :
           deployment :         Operation :
                      V                   :
+-----+ air interface +----------------+  :
| UE1 +---------------+                |  :
+-----+               |                |  :
                      |                |  :
+-----+               |                |  V
[...]
`--+ Device2 +----+ P4 Switch +--->(fixed network)
   +---------+    +-----------+

         Figure 3: 5G Virtual Network Programming Overview
5.3.3.  Existing Solutions

Research has been conducted, for example by [Stoyanov], to enable P4
network programming of individual virtual switches.  To our
knowledge, no complete solution has been developed for deploying
virtual COIN programs over mobile or data center networks.
5.3.4.  Opportunities

Virtual network programming by tenants could bring benefits such as:

*  A unified programming model, which can facilitate porting COIN
   programs between data centers, 5G networks, and other fixed and
   wireless networks, as well as sharing controllers, code, and
   expertise.

*  Increasing the level of customization available to customers/
   tenants of mobile networks or data centers compared to typical
   configuration capabilities.  For example, 5G network evolution
   points to an ever-increasing specialization and customization of
   private mobile networks, which could be handled by tenants using a
   programming model similar to P4.

*  Using network programs to influence underlying network services
   (e.g., requesting specific QoS for some flows in 5G or data
   centers) to increase the level of in-depth customization available
   to tenants.
5.3.5.  Research Questions

*  RQ 5.3.1: Underlying network awareness

   A virtual COIN program can both influence, and be influenced by,
   the underlying network.  Research challenges include defining
   methods to distribute COIN programs, including in a mobile network
   context, based on network awareness, since some information and
   actions may be available on some nodes but not on others.

*  RQ 5.3.2: Splitting/distribution

   A virtual COIN program may need to be deployed across multiple
   computing nodes, leading to research questions around instance
   placement and distribution.  For example, program logic should be
   applied exactly once or at least once per packet (or at least once
   for idempotent operations), while allowing an optimal forwarding
   path by the underlying network.  Research challenges include
   defining manual (by the programmer) or automatic methods to
   distribute COIN programs that use a low or minimal amount of
   resources.  Distributed P4 programs are studied in [P4-SPLIT] and
   [Sultana] (based on capability 5.3.2).

*  RQ 5.3.3: Multi-tenancy support

   A COIN system supporting virtualization should enable tenants to
   deploy COIN programs onto their virtual networks, in such a way
   that multiple virtual COIN program instances can run on the same
   compute node.  While mechanisms were proposed for P4 multi-tenancy
   in a switch [Stoyanov], research questions remain about isolation
   between tenants and fair repartition of resources (based on
   capability 5.3.3).

*  RQ 5.3.4: Security

   How can tenants and underlying networks be protected against
   security risks, including overuse or misuse of network resources,
   injection of traffic, or access to unauthorized traffic?

*  RQ 5.3.5: Higher layer processing

   Can a virtual network model facilitate the deployment of COIN
   programs acting on application-layer data?  This is an open
   question, since this section focuses on packet/flow processing.
6. Enabling New COIN Capabilities
6.1. Distributed AI Training 6.1. Distributed AI Training
6.1.1. Description 6.1.1. Description
There is a growing range of use cases demanding the realization of AI There is a growing range of use cases demanding the realization of AI
training capabilities among distributed endpoints. One such use case training capabilities among distributed endpoints. One such use case
is to distribute large-scale model training across more than one data is to distribute large-scale model training across more than one data
center, e.g., when facing energy issues at a single site or when center (e.g., when facing energy issues at a single site or when
simply reaching the scale of training capabilities at one site, thus simply reaching the scale of training capabilities at one site, thus
wanting to complement training with capabilities of another, possibly wanting to complement training with the capabilities of another or
many sites. From a COIN perspective, those capabilities may be possibly many sites). From a COIN perspective, those capabilities
realized as (COIN) programs and executed throughout a COIN system, may be realized as (COIN) programs and executed throughout a COIN
including in PNDs. system, including in PNDs.
6.1.2.  Characterization

Some solutions may desire the localization of reasoning logic (e.g.,
for deriving attributes that better preserve privacy of the utilized
raw input data).  Quickly establishing (COIN) program instances in
nearby compute resources, including PNDs, may even satisfy such
localization demands on the fly (e.g., when a particular use is being
realized, then terminated after a given time).
Individual training "sites" may not be data centers but may instead
consist of powerful yet stand-alone devices that federate computing
power towards training a model, captured as "federated training" and
provided through platforms such as [FLOWER].  Use cases here may
include distributed training on (user) image data, training over
federated social media sites [MASTODON], and others.
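The weight aggregation at the heart of such federated training can be sketched in a few lines.  This is a generic FedAvg-style average over per-site model weights, shown purely as an illustration; it is not the actual [FLOWER] API, and real systems weight contributions, handle stragglers, and protect updates.

```python
# Toy federated-averaging sketch: each stand-alone site trains locally,
# and only model weights (not raw data) are shared and averaged.

def federated_average(site_weights):
    """Average the per-parameter weights contributed by each site."""
    n = len(site_weights)
    return [sum(ws) / n for ws in zip(*site_weights)]

# Two sites, each holding a locally trained 3-parameter model.
sites = [
    [1.0, 2.0, 3.0],
    [3.0, 2.0, 1.0],
]
print(federated_average(sites))  # [2.0, 2.0, 2.0]
```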
Apart from the distribution of compute power, the distribution of
data may be a driver for distributed AI training use cases, such as
in the Mastodon federated social media sites [MASTODON] or training
over locally governed patient data, among others.
6.1.3.  Existing Solutions

Reasoning frameworks, such as TensorFlow, may be utilized for the
realization of the (distributed) AI training logic, building on
remote service invocation through protocols such as gRPC [GRPC] or
the Message Passing Interface (MPI) [MPI], with the intention of
providing an on-chip Neural Processor Unit (NPU)-like abstraction to
the AI framework.

A number of activities on distributed AI training exist in the area
of developing the 5th and 6th generation mobile networks, with
various activities in the 3GPP Standards Development Organization
(SDO) as well as use cases developed for the ETSI MEC initiative
mentioned in previous use cases.
6.1.4.  Opportunities

*  Supporting service-level routing of training requests (such as
   service routing in [APPCENTRES]), with AI services being exposed
   to the network, where (COIN) program instances may support the
   selection of the most suitable service instance based on control
   plane information (e.g., on AI worker compute capabilities) being
   distributed across (COIN) program instances.

*  Supporting the collective communication primitives, such as all-
   to-all and scatter-gather, utilized by the (distributed) AI
   workers to increase the overall network efficiency, e.g., through
   avoiding endpoint-based replication or even directly performing
   collective primitive operations (e.g., reduce) in (COIN) program
   instances placed in topologically advantageous places.

*  Supporting collective communication between multiple instances of
   AI services (i.e., (COIN) program instances) may positively impact
   not only network but also compute utilization by moving from
   unicast replication to network-assisted multicast operation.
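The in-network reduce mentioned above can be illustrated as follows: instead of every AI worker unicasting its gradient upstream, a (COIN) program instance at a topologically central node aggregates the contributions and forwards a single result.  The function name and data are illustrative only.

```python
# Sketch of an in-network "reduce" collective primitive: the instance
# forwards one aggregate upstream instead of one message per worker.

def reduce_at_instance(contributions):
    """Element-wise sum performed at the in-network instance; only this
    single aggregate (not len(contributions) messages) travels on."""
    return [sum(vals) for vals in zip(*contributions)]

grads = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]  # from three AI workers
print(reduce_at_instance(grads))  # [9.0, 12.0]
```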
6.1.5.  Research Questions

In addition to the research questions in Section 3.1.5:

*  RQ 6.1.1: What are the communication patterns that may be
   supported by collective communication solutions, where those
   solutions directly utilize (COIN) program instance capabilities
   within the network (e.g., reduce in a central (COIN) program
   instance)?

*  RQ 6.1.2: How to achieve scalable collective communication
   primitives with rapidly changing receiver sets (e.g., where
   training workers may be dynamically selected based on energy
   efficiency constraints [GREENAI])?

*  RQ 6.1.3: What COIN capabilities may support the collective
   communication patterns found in distributed AI problems?

*  RQ 6.1.4: How to support AI-specific invocation protocols, such as
   MPI or Remote Direct Memory Access (RDMA)?

*  RQ 6.1.5: What are the constraints for placing (AI) execution
   logic in the form of (COIN) programs in certain logical execution
   points (and their associated physical locations), including PNDs,
   and how to signal and act upon them?
7.  Preliminary Categorization of the Research Questions

This section describes a preliminary categorization of the research
questions, illustrated in Figure 4.  A more comprehensive analysis
has been initiated by members of the COINRG community in
[USE-CASE-AN] but has not been completed at the time of writing this
memo.
+--------------------------------------------------------------+
+                      Applicability Areas                      +
+ .............................................................+
+ Transport |  App   |    Data    | Routing &  | (Industrial)   +
+           | Design | Processing | Forwarding |   Control      +
+--------------------------------------------------------------+

+--------------------------------------------------------------+
+    Distributed Computing FRAMEWORKS and LANGUAGES to COIN    +
[...]
+--------------------------------------------------------------+
+                ENABLING TECHNOLOGIES for COIN                +
+--------------------------------------------------------------+

+--------------------------------------------------------------+
+                      VISION(S) for COIN                      +
+--------------------------------------------------------------+

              Figure 4: Research Questions Categories
The "VISION(S) for COIN" category is about defining and shaping the
exact scope of COIN.  In contrast to the "ENABLING TECHNOLOGIES"
category, these research questions look at the problem from a more
philosophical perspective.  In particular, the questions center
around where to perform computations, which tasks are suitable for
COIN, for which tasks COIN is suitable, and which forms of deploying
COIN might be desirable.  This category includes the research
questions 3.1.8, 3.2.1, 3.3.5, 3.3.6, 3.3.7, 5.3.3, 6.1.1, and 6.1.3.
The "ENABLING TECHNOLOGIES for COIN" category digs into what
technologies are needed to enable COIN, which of the existing
technologies can be reused for COIN, and what might be needed to make
the "VISION(S) for COIN" a reality.  In contrast to the "VISION(S)
for COIN", these research questions look at the problem from a
practical perspective (e.g., by considering how COIN can be
incorporated in existing systems or how the interoperability of COIN
execution environments can be enhanced).  This category includes the
research questions 3.1.7, 3.1.8, 3.2.3, 4.2.7, 5.1.1, 5.1.2, 5.1.6,
5.3.1, 6.1.2, and 6.1.3.
The *Distributed Computing FRAMEWORKS and LANGUAGES to COIN* category The "Distributed Computing FRAMEWORKS and LANGUAGES to COIN" category
focuses on how COIN programs can be deployed and orchestrated. focuses on how COIN programs can be deployed and orchestrated.
Central questions arise regarding the composition of COIN programs, Central questions arise regarding the composition of COIN programs,
the placement of COIN functions, the (dynamic) operation and the placement of COIN functions, the (dynamic) operation and
integration of COIN systems as well as additional COIN system integration of COIN systems as well as additional COIN system
properties. Notably, COIN diversifies general distributed computing properties. Notably, COIN diversifies general distributed computing
platforms such that many COIN-related research questions could also platforms such that many COIN-related research questions could also
apply to general distributed computing frameworks. This category apply to general distributed computing frameworks. This category
includes the research questions 3.1.1, 3.2.4, 3.3.1, 3.3.2, 3.3.3, includes the research questions 3.1.1, 3.2.4, 3.3.1, 3.3.2, 3.3.3,
3.3.5, 4.1.1, 4.1.4, 4.1.5, 4.1.8, 4.2.1, 4.2.4, 4.2.5, 4.2.6, 4.3.3, 3.3.5, 4.1.1, 4.1.4, 4.1.5, 4.1.8, 4.2.1, 4.2.4, 4.2.5, 4.2.6, 4.3.3,
5.2.1, 5.2.2, 5.2.3, 5.2.5, 5.3.1, 5.3.2, 5.3.3, 5.3.4, 5.3.5, and 5.2.1, 5.2.2, 5.2.3, 5.2.5, 5.3.1, 5.3.2, 5.3.3, 5.3.4, 5.3.5, and
6.1.5. 6.1.5.
In addition to these core categories, there are use-case-specific In addition to these core categories, there are use case specific
research questions that are heavily influenced by the specific research questions that are heavily influenced by the specific
constraints and objectives of the respective use cases. This constraints and objectives of the respective use cases. This
*Applicability Areas* category can be further refined into the "Applicability Areas" category can be further refined into the
following subgroups: following subgroups:
* The *Transport* subgroup addresses the need to adapt transport * The "Transport" subgroup addresses the need to adapt transport
protocols to handle dynamic deployment locations effectively. protocols to handle dynamic deployment locations effectively.
This subgroup includes the research question 3.1.2. This subgroup includes the research question 3.1.2.
* The *App Design* subgroup relates to the design principles and * The "App Design" subgroup relates to the design principles and
considerations when developing COIN applications. This subgroup considerations when developing COIN applications. This subgroup
includes the research questions 4.1.2, 4.1.3, 4.1.7, 4.2.6, 5.1.1, includes the research questions 4.1.2, 4.1.3, 4.1.7, 4.2.6, 5.1.1,
5.1.3, and 5.1.5. 5.1.3, and 5.1.5.
* The *Data Processing* subgroup relates to the handling, storage, * The "Data Processing" subgroup relates to the handling, storage,
analysis, and processing of data in COIN environments. This analysis, and processing of data in COIN environments. This
subgroup includes the research questions 3.2.4, 3.2.6, 4.2.2, subgroup includes the research questions 3.2.4, 3.2.6, 4.2.2,
4.2.3, and 4.3.2. 4.2.3, and 4.3.2.
* The *Routing & Forwarding* subgroup explores efficient routing and * The "Routing & Forwarding" subgroup explores efficient routing and
forwarding mechanisms in COIN, considering factors such as network forwarding mechanisms in COIN, considering factors such as network
topology, congestion control, and quality of service. This topology, congestion control, and quality of service. This
subgroup includes the research questions 3.1.2, 3.1.3, 3.1.4, subgroup includes the research questions 3.1.2, 3.1.3, 3.1.4,
3.1.5, 3.1.6, 3.2.6, 5.1.2, 5.1.3, 5.1.4, and 6.1.4. 3.1.5, 3.1.6, 3.2.6, 5.1.2, 5.1.3, 5.1.4, and 6.1.4.
* The *(Industrial) Control* subgroup relates to industrial control * The "(Industrial) Control" subgroup relates to industrial control
systems, addressing issues like real-time control, automation, and systems, addressing issues like real-time control, automation, and
fault tolerance. This subgroup includes the research questions fault tolerance. This subgroup includes the research questions
3.1.9, 3.2.5, 3.3.1, 3.3.4, 4.1.1, 4.1.6, 4.1.8, 4.2.3, 4.3.1, and 3.1.9, 3.2.5, 3.3.1, 3.3.4, 4.1.1, 4.1.6, 4.1.8, 4.2.3, 4.3.1, and
4.3.4. 4.3.4.
8.  Security Considerations

COIN systems, like any other system using "middleboxes", can have different security and privacy implications that strongly depend on the used platforms, the provided functionality, and the deployment domain, with most if not all considerations for general middleboxes also applying to COIN systems.

One critical aspect for early COIN systems is the use of early-generation PNDs, many of which do not have cryptography support and only have limited computational capabilities. Hence, PND-based COIN systems typically work on unencrypted data and often customize packet payload, while concepts such as homomorphic encryption could serve as workarounds, allowing PNDs to perform simple operations on the encrypted data without having access to it. All these approaches introduce the same or very similar security implications as any middlebox operating on unencrypted traffic or having access to encryption: a middlebox can itself have malicious intentions (e.g., because it got compromised or the deployment of functionality offers new attack vectors to outsiders).
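The homomorphic-encryption workaround mentioned above can be illustrated with textbook Paillier encryption, which is additively homomorphic: multiplying two ciphertexts yields a ciphertext of the sum of the plaintexts, so a node holding only the public key could, in principle, aggregate encrypted values without seeing them. The following sketch is purely illustrative and not a mechanism discussed by the cited works; the helper names are ours, and the toy primes are far too small for any real deployment.

```python
import math
import random

# Textbook Paillier: Enc(a) * Enc(b) mod n^2 decrypts to a + b mod n.

def keygen(p, q):
    """Toy key generation from two (here: tiny, insecure) primes."""
    n = p * q
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
    mu = pow(lam, -1, n)  # valid because g = n + 1 is used below
    return (n, n + 1), (lam, mu)

def encrypt(pub, m):
    n, g = pub
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:  # blinding factor must be invertible
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    x = pow(c, lam, n * n)
    return ((x - 1) // n) * mu % n

# An intermediary without the private key can still "add":
pub, priv = keygen(47, 59)
c_sum = (encrypt(pub, 123) * encrypt(pub, 456)) % (pub[0] ** 2)
print(decrypt(pub, priv, c_sum))  # 579
```

As the paragraph above notes, such schemes only support simple operations (here, addition modulo n), which limits the packet-processing functionality a PND could offer on encrypted data.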
However, similar to middlebox deployments, risks for privacy and data exposure have to be carefully considered in the context of the concrete deployment. For example, exposing data to an external operator for mobile application offloading leads to a significant privacy loss of the user in any case. In contrast, such privacy considerations are not as relevant for COIN systems where all involved entities are under the same control, such as in an industrial context. Here, exposed data and functionality can instead lead to stolen business secrets or the enabling of DoS attacks, for example. Hence, even in fully controlled scenarios, COIN intermediaries, and middleboxes in general, are ideally operated in a least-privilege mode, where they have exactly those permissions to read and alter payload that are necessary to fulfill their purpose.
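One way to make the least-privilege idea concrete is to model an intermediary's permissions as explicit per-byte-range grants on the payload and to reject any access outside them. The sketch below is our own toy illustration (the `Grant` and `permitted` names are hypothetical and do not correspond to any cited protocol); real schemes would enforce such grants cryptographically rather than by policy checks.

```python
from dataclasses import dataclass

# Toy least-privilege policy: an intermediary may only touch the
# payload byte ranges it has been explicitly granted.

@dataclass(frozen=True)
class Grant:
    start: int   # first payload byte covered (inclusive)
    end: int     # one past the last payload byte covered (exclusive)
    write: bool  # False: read-only; True: read and write

def permitted(grants, start, end, write=False):
    """Return True iff [start, end) is fully covered by grants that
    allow the requested operation (read by default)."""
    covered = start
    for g in sorted(grants, key=lambda g: g.start):
        if write and not g.write:
            continue  # a read-only grant cannot authorize a write
        if g.start <= covered < g.end:
            covered = g.end  # extend contiguous coverage
    return covered >= end

# Example: header bytes 0..2 are writable, bytes 2..8 read-only.
grants = [Grant(0, 2, write=True), Grant(2, 8, write=False)]
print(permitted(grants, 0, 8))              # True: full read coverage
print(permitted(grants, 0, 8, write=True))  # False: bytes 2..8 read-only
```

Keeping grants this narrow matches the least-privilege mode described above: the intermediary can alter exactly the fields its function requires and nothing else.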
Research on granting middleboxes access to secured traffic is only in its infancy, and a variety of different approaches are proposed and analyzed [TLSSURVEY]. In a SplitTLS [SPLITTLS] deployment, for example, middleboxes have different incoming and outgoing TLS channels, such that they have full read and write access to all intercepted traffic. More restrictive approaches for deploying middleboxes rely on searchable encryption or zero-knowledge proofs to expose less data to intermediaries, but those only offer limited functionality. MADTLS [MADTLS] is tailored to the industrial domain and offers bit-level read and write access to intermediaries with low latency and bandwidth overhead, at the cost of more complex key management. Overall, different proposals offer different advantages and disadvantages that must be carefully considered in the context of concrete deployments. Further research could pave the way for a more unified and configurable solution that is easier to maintain and deploy.
Finally, COIN systems and other middlebox deployments can also lead to security risks even if the attack stems from an outsider without direct access to any devices. As such, metadata about the entailed processing (processing times or changes in incoming and outgoing data) can allow an attacker to extract valuable information about the process. Moreover, such deployments can become central entities that, if paralyzed (e.g., through extensive requests), can be responsible for large-scale outages. In particular, some deployments could be used to amplify DoS attacks. Similar to other middlebox deployments, these potential risks must be considered when deploying COIN functionality and may influence the selection of suitable security protocols.

Additional system-level security considerations may arise from regulatory requirements imposed on COIN systems overall, stemming from regulation regarding, for example, lawful interception, data localization, or AI use. These requirements may impact, for example, the manner in which (COIN) programs may be placed or executed in the overall system, who can invoke certain (COIN) programs in what PND or COIN device, and what type of (COIN) program can be run. These considerations will impact the design of the possible implementing protocols but also the policies that govern the execution of (COIN) programs.
9.  IANA Considerations

This document has no IANA actions.
10.  Conclusion

This document presents use cases gathered from several application domains that can and could profit from capabilities that are provided by in-network and, more generally, distributed compute platforms. We distinguish between use cases in which COIN may enable new experiences (Section 3), expose new features (Section 6), or improve on existing system capabilities (Section 5), and other use cases where COIN capabilities enable totally new applications, for example, in industrial networking (Section 4).

Beyond the mere description and characterization of those use cases, we identify opportunities arising from utilizing COIN capabilities and formulate corresponding research questions that may need to be addressed before being able to reap those opportunities.

We acknowledge that this work offers no comprehensive overview of possible use cases and is thus only a snapshot of what may be possible if COIN capabilities existed. In fact, the decomposition of many current client-server applications into node-by-node transit could identify other opportunities for adding computing to forwarding, notably in the supply chain, health care, intelligent cities and transportation, and even financial services (among others). The presented use cases are selected based on the expertise of the contributing community members at the time of writing and are intended to cover a diverse range, from immersive and interactive media, to industrial networks, to AI with varying characteristics, thus providing the basis for a thorough subsequent analysis.
11. Acknowledgements
The authors would like to thank Eric Wagner for providing text on the
security considerations and Jungha Hong for her efforts in continuing
the work on the use case analysis document that has largely sourced
the preliminary categorization section of this document. The authors
would further like to thank Chathura Sarathchandra, David Oran, Phil
Eardley, Stuart Card, Jeffrey He, Toerless Eckert, and Jon Crowcroft
for reviewing earlier versions of the document, Colin Perkins for his
IRTF chair review, and Jerome Francois for his thorough IRSG review.
12.  Informative References
[APPCENTRES] Trossen, D., Sarathchandra, C., and M. Boniface, "In-Network Computing for App-Centric Micro-Services", Work in Progress, Internet-Draft, draft-sarathchandra-coin-appcentres-04, 26 January 2021, <https://datatracker.ietf.org/doc/html/draft-sarathchandra-coin-appcentres-04>.

[CompNet2021] Chen, M., Liu, W., Wang, T., Liu, A., and Z. Zeng, "Edge intelligence computing for mobile augmented reality with deep reinforcement learning approach", Computer Networks, vol. 195, p. 108186, DOI 10.1016/j.comnet.2021.108186, August 2021, <https://doi.org/10.1016/j.comnet.2021.108186>.

[eCAR] Jeon, J. and W. Woo, "eCAR: edge-assisted Collaborative Augmented Reality Framework", arXiv:2405.06872, DOI 10.48550/ARXIV.2405.06872, 2024, <https://doi.org/10.48550/ARXIV.2405.06872>.

[ETSI] ETSI, "Multi-access Edge Computing (MEC)", <https://www.etsi.org/technologies/multi-access-edge-computing>.

[FCDN] Al-Naday, M., Reed, M. J., Riihijarvi, J., Trossen, D., Thomos, N., and M. Al-Khalidi, "fCDN: A Flexible and Efficient CDN Infrastructure without DNS Redirection or Content Reflection", arXiv:1803.00876v2, <https://arxiv.org/pdf/1803.00876.pdf>.

[FLOWER] Flower Labs GmbH, "A Friendly Federated AI Framework", <https://flower.ai/>.

[GLEBKE] Glebke, R., Henze, M., Wehrle, K., Niemietz, P., Trauth, D., Mattfeld, P., and T. Bergs, "A Case for Integrated Data Processing in Large-Scale Cyber-Physical Systems", Proceedings of the Annual Hawaii International Conference on System Sciences, DOI 10.24251/hicss.2019.871, 2019, <https://doi.org/10.24251/hicss.2019.871>.

[GREENAI] Magoula, L., Koursioumpas, N., Thanopoulos, A., Panagea, T., Petropouleas, N., Gutierrez-Estevez, M., and R. Khalili, "A Safe Genetic Algorithm Approach for Energy Efficient Federated Learning in Wireless Communication Networks", 2023 IEEE 34th Annual International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC), pp. 1-6, DOI 10.1109/pimrc56721.2023.10293863, September 2023, <https://doi.org/10.1109/pimrc56721.2023.10293863>.

[GRPC] "High performance, open source universal RPC framework", <https://grpc.io/>.

[GSLB] Cloudflare, "What is global server load balancing (GSLB)?", <https://www.cloudflare.com/learning/cdn/glossary/global-server-load-balancing-gslb/>.
[ICN-5GC] Ravindran, R., Suthar, P., Trossen, D., Wang, C., and G. White, "Enabling ICN in 3GPP's 5G NextGen Core Architecture", Work in Progress, Internet-Draft, draft-ravi-icnrg-5gc-icn-04, 31 May 2019, <https://datatracker.ietf.org/doc/html/draft-ravi-icnrg-5gc-icn-04>.
[ICN-5GLAN] Trossen, D., Wang, C., Robitzsch, S., Reed, M., AL-Naday, M., and J. Riihijarvi, "IP-based Services over ICN in 5G LAN Environments", Work in Progress, Internet-Draft, draft-trossen-icnrg-ip-icn-5glan-00, 6 June 2019, <https://datatracker.ietf.org/doc/html/draft-trossen-icnrg-ip-icn-5glan-00>.

[KUNZE-APPLICABILITY] Kunze, I., Glebke, R., Scheiper, J., Bodenbenner, M., Schmitt, R., and K. Wehrle, "Investigating the Applicability of In-Network Computing to Industrial Scenarios", 2021 4th IEEE International Conference on Industrial Cyber-Physical Systems (ICPS), pp. 334-340, DOI 10.1109/icps49255.2021.9468247, May 2021, <https://doi.org/10.1109/icps49255.2021.9468247>.

[KUNZE-SIGNAL] Kunze, I., Niemietz, P., Tirpitz, L., Glebke, R., Trauth, D., Bergs, T., and K. Wehrle, "Detecting Out-Of-Control Sensor Signals in Sheet Metal Forming using In-Network Computing", 2021 IEEE 30th International Symposium on Industrial Electronics (ISIE), pp. 1-6, DOI 10.1109/isie45552.2021.9576221, June 2021, <https://doi.org/10.1109/isie45552.2021.9576221>.

[L2Virt] Kreger-Stickles, L., "First principles: L2 network virtualization for lift and shift", Oracle Cloud Infrastructure Blog, 9 February 2021, <https://blogs.oracle.com/cloud-infrastructure/post/first-principles-l2-network-virtualization-for-lift-and-shift>.

[MADTLS] Wagner, E., Heye, D., Serror, M., Kunze, I., Wehrle, K., and M. Henze, "Madtls: Fine-grained Middlebox-aware End-to-end Security for Industrial Communication", arXiv:2312.09650, DOI 10.48550/ARXIV.2312.09650, 2023, <https://doi.org/10.48550/ARXIV.2312.09650>.
[MASTODON] Raman, A., Joglekar, S., Cristofaro, E., Sastry, N., and G. Tyson, "Challenges in the Decentralised Web: The Mastodon Case", Proceedings of the Internet Measurement Conference, pp. 217-229, DOI 10.1145/3355369.3355572, October 2019, <https://doi.org/10.1145/3355369.3355572>.

[Microservices] Richardson, C., "What are microservices?", <https://microservices.io/>.

[MPI] Vishnu, A., Siegel, C., and J. Daily, "Scaling Distributed Machine Learning with In-Network Aggregation", arXiv:1603.02339v2, August 2017, <https://arxiv.org/pdf/1603.02339.pdf>.

[Multi2020] Sobrinho, J. and M. Ferreira, "Routing on Multiple Optimality Criteria", Proceedings of the Annual conference of the ACM Special Interest Group on Data Communication on the applications, technologies, architectures, and protocols for computer communication, pp. 211-225, DOI 10.1145/3387514.3405864, July 2020, <https://doi.org/10.1145/3387514.3405864>.

[NetworkedVR] Ruan, J. and D. Xie, "Networked VR: State of the Art, Solutions, and Challenges", Electronics, vol. 10, no. 2, p. 166, DOI 10.3390/electronics10020166, January 2021, <https://doi.org/10.3390/electronics10020166>.

[P4] Bosshart, P., Daly, D., Gibb, G., Izzard, M., McKeown, N., Rexford, J., Schlesinger, C., Talayco, D., Vahdat, A., Varghese, G., and D. Walker, "P4: programming protocol-independent packet processors", ACM SIGCOMM Computer Communication Review, vol. 44, no. 3, pp. 87-95, DOI 10.1145/2656877.2656890, July 2014, <https://doi.org/10.1145/2656877.2656890>.

[P4-SPLIT] Singh, H. and M. Montpetit, "Requirements for P4 Program Splitting for Heterogeneous Network Nodes", Work in Progress, Internet-Draft, draft-hsingh-coinrg-reqs-p4comp-03, 18 February 2021, <https://datatracker.ietf.org/doc/html/draft-hsingh-coinrg-reqs-p4comp-03>.
[RFC7272] van Brandenburg, R., Stokking, H., van Deventer, O., [RFC7272] van Brandenburg, R., Stokking, H., van Deventer, O.,
Boronat, F., Montagud, M., and K. Gross, "Inter- Boronat, F., Montagud, M., and K. Gross, "Inter-
Destination Media Synchronization (IDMS) Using the RTP Destination Media Synchronization (IDMS) Using the RTP
Control Protocol (RTCP)", RFC 7272, DOI 10.17487/RFC7272, Control Protocol (RTCP)", RFC 7272, DOI 10.17487/RFC7272,
June 2014, <https://www.rfc-editor.org/rfc/rfc7272>. June 2014, <https://www.rfc-editor.org/info/rfc7272>.
[RFC8039] Shpiner, A., Tse, R., Schelp, C., and T. Mizrahi, [RFC8039] Shpiner, A., Tse, R., Schelp, C., and T. Mizrahi,
"Multipath Time Synchronization", RFC 8039, "Multipath Time Synchronization", RFC 8039,
DOI 10.17487/RFC8039, December 2016, DOI 10.17487/RFC8039, December 2016,
<https://www.rfc-editor.org/rfc/rfc8039>. <https://www.rfc-editor.org/info/rfc8039>.
[RUETH] Rueth, J., Glebke, R., Wehrle, K., Causevic, V., and S. [RÜTH] Rüth, J., Glebke, R., Wehrle, K., Causevic, V., and S.
Hirche, "Towards In-Network Industrial Feedback Control", Hirche, "Towards In-Network Industrial Feedback Control",
Proceedings of the 2018 Morning Workshop on In- Proceedings of the 2018 Morning Workshop on In-Network
Network Computing, DOI 10.1145/3229591.3229592, August Computing, pp. 14-19, DOI 10.1145/3229591.3229592, August
2018, <https://doi.org/10.1145/3229591.3229592>. 2018, <https://doi.org/10.1145/3229591.3229592>.
[SA2-5GLAN] [SA2-5GLAN]
3GPP-5glan, "SP-181129, Work Item Description, 3GPP-5glan, "SP-181129, Work Item Description,
Vertical_LAN(SA2), 5GS Enhanced Support of Vertical and Vertical_LAN(SA2), 5GS Enhanced Support of Vertical and
LAN Services", 3GPP , 2021, LAN Services", 3GPP , 2021,
<http://www.3gpp.org/ftp/tsg_sa/TSG_SA/Docs/SP- <http://www.3gpp.org/ftp/tsg_sa/TSG_SA/Docs/SP-
181120.zip>. 181120.zip>.
[SarNet2021] [SarNet2021]
Glebke, R., Trossen, D., Kunze, I., Lou, D., Rueth, J., Glebke, R., Trossen, D., Kunze, I., Lou, D., Ruth, J.,
Stoffers, M., and K. Wehrle, "Service-based Forwarding via Stoffers, M., and K. Wehrle, "Service-based Forwarding via
Programmable Dataplanes", 2021 IEEE 22nd International Programmable Dataplanes", 2021 IEEE 22nd International
Conference on High Performance Switching and Routing Conference on High Performance Switching and Routing
(HPSR) pp. 1-8, DOI 10.1109/hpsr52026.2021.9481814, June (HPSR), pp. 1-8, DOI 10.1109/hpsr52026.2021.9481814, June
2021, <https://doi.org/10.1109/hpsr52026.2021.9481814>. 2021, <https://doi.org/10.1109/hpsr52026.2021.9481814>.
   [SILKROAD] Miao, R., Zeng, H., Kim, C., Lee, J., and M. Yu,
              "SilkRoad: Making Stateful Layer-4 Load Balancing Fast and
              Cheap Using Switching ASICs", Proceedings of the
              Conference of the ACM Special Interest Group on Data
              Communication, pp. 15-28, DOI 10.1145/3098822.3098824,
              August 2017, <https://doi.org/10.1145/3098822.3098824>.
   [SPLITTLS] Naylor, D., Schomp, K., Varvello, M., Leontiadis, I.,
              Blackburn, J., Lopez, D., Papagiannaki, K., Rodriguez
              Rodriguez, P., and P. Steenkiste, "Multi-Context TLS
              (mcTLS): Enabling Secure In-Network Functionality in TLS",
              ACM SIGCOMM Computer Communication Review, vol. 45, no. 4,
              pp. 199-212, DOI 10.1145/2829988.2787482, August 2015,
              <https://doi.org/10.1145/2829988.2787482>.
   [Stoyanov] Stoyanov, R. and N. Zilberman, "MTPSA: Multi-Tenant
              Programmable Switches", Proceedings of the 3rd P4 Workshop
              in Europe, pp. 43-48, DOI 10.1145/3426744.3431329,
              December 2020,
              <https://eng.ox.ac.uk/media/6354/stoyanov2020mtpsa.pdf>.
   [Sultana]  Sultana, N., Sonchack, J., Giesen, H., Pedisich, I., Han,
              Z., Shyamkumar, N., Burad, S., DeHon, A., and B. T. Loo,
              "Flightplan: Dataplane Disaggregation and Placement for P4
              Programs",
              <https://flightplan.cis.upenn.edu/flightplan.pdf>.
   [TLSSURVEY]
              de Carné de Carnavalet, X. and P. van Oorschot, "A Survey
              and Analysis of TLS Interception Mechanisms and
              Motivations: Exploring how end-to-end TLS is made 'end-to-
              me' for web traffic", ACM Computing Surveys, vol. 55, no.
              13s, pp. 1-40, DOI 10.1145/3580522, July 2023,
              <https://doi.org/10.1145/3580522>.
   [TOSCA]    Rutkowski, M., Ed., Lauwers, C., Ed., Noshpitz, C., Ed.,
              and C. Curescu, Ed., "TOSCA Simple Profile in YAML Version
              1.3", OASIS Standard, February 2020, <https://docs.oasis-
              open.org/tosca/TOSCA-Simple-Profile-YAML/v1.3/os/TOSCA-
              Simple-Profile-YAML-v1.3-os.pdf>.
   [TS23.501] 3GPP, "System Architecture for the 5G System; Stage 2
              (Rel.17)", 3GPP TS 23.501, 2021,
              <https://www.3gpp.org/DynaReport/23501.htm>.
   [TS23.502] 3GPP, "Procedures for the 5G System; Stage 2 (Rel.17)",
              3GPP TS 23.502, 2021,
              <https://www.3gpp.org/DynaReport/23502.htm>.
   [USE-CASE-AN]
              Kunze, I., Hong, J., Wehrle, K., Trossen, D., Montpetit,
              M., de Foy, X., Griffin, D., and M. Rio, "Use Case
              Analysis for Computing in the Network", Work in Progress,
              Internet-Draft, draft-irtf-coinrg-use-case-analysis-02, 4
              December 2024, <https://datatracker.ietf.org/doc/html/
              draft-irtf-coinrg-use-case-analysis-02>.
   [VESTIN]   Vestin, J., Kassler, A., and J. Akerberg, "FastReact: In-
              Network Control and Caching for Industrial Control
              Networks using Programmable Data Planes", 2018 IEEE 23rd
              International Conference on Emerging Technologies and
              Factory Automation (ETFA), pp. 219-226,
              DOI 10.1109/etfa.2018.8502456, September 2018,
              <https://doi.org/10.1109/etfa.2018.8502456>.
   [WirelessNet2024]
              Jia, J., Yang, L., Chen, J., Ma, L., and X. Wang, "Online
              delay optimization for MEC and RIS-assisted wireless VR
              networks", Wireless Networks, vol. 30, no. 4, pp.
              2939-2959, DOI 10.1007/s11276-024-03706-4, March 2024,
              <https://doi.org/10.1007/s11276-024-03706-4>.
Acknowledgements
The authors would like to thank Eric Wagner for providing text on the
security considerations and Jungha Hong for her efforts in continuing
the work on the use case analysis document, which largely sourced the
preliminary categorization section of this document.
The authors would further like to thank Chathura Sarathchandra, David
Oran, Phil Eardley, Stuart Card, Jeffrey He, Toerless Eckert, and Jon
Crowcroft for reviewing earlier versions of the document, Colin
Perkins for his IRTF chair review, and Jerome Francois for his
thorough IRSG review.
Authors' Addresses

   Ike Kunze
   RWTH Aachen University
   Ahornstr. 55
   D-52074 Aachen
   Germany
   Email: kunze@comsys.rwth-aachen.de

   Klaus Wehrle