{"id":46394,"date":"2023-09-27T07:58:36","date_gmt":"2023-09-27T05:58:36","guid":{"rendered":"https:\/\/www.inovex.de\/?p=46394"},"modified":"2023-09-27T11:09:05","modified_gmt":"2023-09-27T09:09:05","slug":"ebpf-reduce-resource-usage-network-overhead","status":"publish","type":"post","link":"https:\/\/www.inovex.de\/de\/blog\/ebpf-reduce-resource-usage-network-overhead\/","title":{"rendered":"eBPF: Reduce Resource Usage and Network Overhead"},"content":{"rendered":"<p>In modern cloud-native architectures, service meshes find broad adoption. Service meshes add a separate layer to microservice clusters that enhances resilience. The service mesh is formed by sidecar proxies that handle networking, observability, and security-related tasks. However, the additional proxies and their network traffic add overhead to the hosts.\u00a0With the extended Berkeley Packet Filter (eBPF), the Linux kernel community added a new extension technology to the kernel, enabling an event-based execution of applications in the kernel context without needing kernel recompiles or restarts.\u00a0This thesis analyzes whether eBPF can reduce the increased resource usage of service meshes, focusing on the networking features. Besides the communication between a service and its sidecar, this includes moving service mesh features, previously located in the sidecars, into the kernel. First, eBPF\u2019s possibilities are examined and then evaluated in a practical comparison in load tests.<!--more--><\/p>\n<p>The analysis showed that it is possible to skip the Linux networking stack when using eBPF to communicate between a microservice and its sidecar. Additionally, service mesh networking features up to Open Systems Interconnection (OSI) layer five can be moved from the sidecar proxy to the kernel.\u00a0The load tests were performed using an Istio service mesh that was extended to use eBPF by Merbridge. 
The tests showed a high variance and inconsistent results across several iterations, which leads to the conclusion that Merbridge does not improve an Istio-based service mesh. Future work should further investigate the reasons for this lack of performance improvement.<\/p>\n<div id=\"ez-toc-container\" class=\"ez-toc-v2_0_79_2 counter-hierarchy ez-toc-counter ez-toc-custom ez-toc-container-direction\">\n<div class=\"ez-toc-title-container\"><p class=\"ez-toc-title\" style=\"cursor:inherit\"><\/p>\n<\/div><nav><ul class='ez-toc-list ez-toc-list-level-1 ' ><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-1\" href=\"https:\/\/www.inovex.de\/de\/blog\/ebpf-reduce-resource-usage-network-overhead\/#Introduction\" >Introduction<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-2\" href=\"https:\/\/www.inovex.de\/de\/blog\/ebpf-reduce-resource-usage-network-overhead\/#Service-Meshes\" >Service Meshes<\/a><ul class='ez-toc-list-level-3' ><li class='ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-3\" href=\"https:\/\/www.inovex.de\/de\/blog\/ebpf-reduce-resource-usage-network-overhead\/#Architecture\" >Architecture<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-4\" href=\"https:\/\/www.inovex.de\/de\/blog\/ebpf-reduce-resource-usage-network-overhead\/#Features\" >Features<\/a><ul class='ez-toc-list-level-4' ><li class='ez-toc-heading-level-4'><a class=\"ez-toc-link ez-toc-heading-5\" href=\"https:\/\/www.inovex.de\/de\/blog\/ebpf-reduce-resource-usage-network-overhead\/#Traffic-Control\" >Traffic Control<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-4'><a class=\"ez-toc-link ez-toc-heading-6\" href=\"https:\/\/www.inovex.de\/de\/blog\/ebpf-reduce-resource-usage-network-overhead\/#Security\" >Security<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-4'><a class=\"ez-toc-link ez-toc-heading-7\" 
href=\"https:\/\/www.inovex.de\/de\/blog\/ebpf-reduce-resource-usage-network-overhead\/#Observability\" >Observability<\/a><\/li><\/ul><\/li><\/ul><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-8\" href=\"https:\/\/www.inovex.de\/de\/blog\/ebpf-reduce-resource-usage-network-overhead\/#The-extendet-Berkley-Packet-Filter-eBPF\" >The extended Berkeley Packet Filter (eBPF)<\/a><ul class='ez-toc-list-level-3' ><li class='ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-9\" href=\"https:\/\/www.inovex.de\/de\/blog\/ebpf-reduce-resource-usage-network-overhead\/#eBPF-Program-Structure\" >eBPF Program Structure<\/a><ul class='ez-toc-list-level-4' ><li class='ez-toc-heading-level-4'><a class=\"ez-toc-link ez-toc-heading-10\" href=\"https:\/\/www.inovex.de\/de\/blog\/ebpf-reduce-resource-usage-network-overhead\/#Hooks\" >Hooks<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-4'><a class=\"ez-toc-link ez-toc-heading-11\" href=\"https:\/\/www.inovex.de\/de\/blog\/ebpf-reduce-resource-usage-network-overhead\/#Maps\" >Maps<\/a><\/li><\/ul><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-12\" href=\"https:\/\/www.inovex.de\/de\/blog\/ebpf-reduce-resource-usage-network-overhead\/#Fields-of-Application\" >Fields of Application<\/a><\/li><\/ul><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-13\" href=\"https:\/\/www.inovex.de\/de\/blog\/ebpf-reduce-resource-usage-network-overhead\/#Service-Mesh-Networking-with-eBPF\" >Service Mesh Networking with eBPF<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-14\" href=\"https:\/\/www.inovex.de\/de\/blog\/ebpf-reduce-resource-usage-network-overhead\/#Load-Tests\" >Load Tests<\/a><ul class='ez-toc-list-level-3' ><li class='ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-15\" 
href=\"https:\/\/www.inovex.de\/de\/blog\/ebpf-reduce-resource-usage-network-overhead\/#Requirements\" >Requirements<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-16\" href=\"https:\/\/www.inovex.de\/de\/blog\/ebpf-reduce-resource-usage-network-overhead\/#Test-Setup\" >Test Setup<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-17\" href=\"https:\/\/www.inovex.de\/de\/blog\/ebpf-reduce-resource-usage-network-overhead\/#Result-Analysis\" >Result Analysis<\/a><\/li><\/ul><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-18\" href=\"https:\/\/www.inovex.de\/de\/blog\/ebpf-reduce-resource-usage-network-overhead\/#Conclusion\" >Conclusion<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-19\" href=\"https:\/\/www.inovex.de\/de\/blog\/ebpf-reduce-resource-usage-network-overhead\/#Outlook\" >Outlook<\/a><\/li><\/ul><\/nav><\/div>\n<h2><span class=\"ez-toc-section\" id=\"Introduction\"><\/span>Introduction<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<div title=\"Page 1\">\n<div>\n<p>When applications moved from monolithic approaches to microservice architectures, a standardized communication layer was formed by integrating a communication library into each microservice in the cluster.<\/p>\n<\/div>\n<div>\n<p>Moreover, decoupling, a key principle of microservices, led to the adoption of different programming languages and frameworks for each microservice. While networking aspects across microservices often followed a consistent communication protocol and schema, this resulted in duplicated implementations across multiple frameworks and programming languages. 
Consequently, managing bug fixes and applying hotfixes became increasingly cumbersome, necessitating many changes.<\/p>\n<div title=\"Page 1\">\n<p>With the rise of microservices, the cloud-native approach developed. The term is \u201cused to describe container-based environments used to develop apps that have been built with services that can scale independently from each other and run on an infrastructure provided by a cloud provider\u201c. In addition to microservices, cloud-native includes their encapsulation in containers and their orchestration and scheduling. In addition, schedulers, like Kubernetes, abstract the underlying hardware, which enables considering the cluster hardware and the applications running in the cluster separately.\u00a0Besides standardized communication, solutions that help monitor, manage, and troubleshoot the cluster applications became necessary, a task that becomes increasingly challenging in fast-changing environments.<\/p>\n<div title=\"Page 1\">\n<p>One solution for these problems is the introduction of service meshes, which implement a new layer for service-to-service communication decoupled from the actual application. They do not need to be integrated into the applications and are, therefore, independent of the microservice implementations. The most prominent architecture is the sidecar model, which adds a proxy application to each microservice. Additionally, service meshes are optimized for the needs of highly dynamic cloud-native environments, such as Kubernetes clusters. Besides the networking features, especially the observability and security-related features are optimized for these requirements.<\/p>\n<div title=\"Page 2\">\n<p>With the extended Berkeley Packet Filter (eBPF), the Linux kernel community implemented the ability to extend the slowly developing kernel with sandboxed applications at runtime, which eliminates the long waiting times for new kernel features and the circuitous handling of kernel modules. 
By its modularity, eBPF opens up new possibilities, enabling custom applications to run in the kernel space. This fuels networking, observability, and security innovation while empowering existing concepts like service meshes to optimize communication paths and resource impact.<\/p>\n<div title=\"Page 2\">\n<p>This thesis builds on this idea and analyzes to what extent eBPF and service meshes can be combined. The aim is to show to what extent service mesh features can be moved to the kernel and whether eBPF can enhance service meshes regarding resource usage and networking overhead.<\/p>\n<p>Following an introduction of the main features, service meshes are examined in terms of strengths, weaknesses, and fields of application. Afterward, eBPF is introduced as a new kernel technology, including its features and possible applications. Finally, eBPF is applied to service meshes with a focus on networking enhancements, which are then validated by load tests.<\/p>\n<h2><span class=\"ez-toc-section\" id=\"Service-Meshes\"><\/span>Service Meshes<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<h3><span class=\"ez-toc-section\" id=\"Architecture\"><\/span>Architecture<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<div title=\"Page 2\">\n<p>The concept of service meshes is to place a lightweight application next to the actual microservice. All traffic delivered to or sent from the microservice is then routed through this application, which is why these applications are called sidecar proxies. Together, all sidecar proxies form the data plane, one of two architectural abstraction layers, in which the functionality of a service mesh is located.<\/p>\n<p>The second abstraction layer in service meshes is the control plane, which forms the actual service mesh by connecting the stateless sidecar proxies. In addition, it provides Application Programming Interfaces (APIs) and tools that can control and configure the behavior of the service mesh. 
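<\/p>
<p>As an illustration of such a control-plane API, a weighted canary-style traffic split can be declared in Istio roughly as follows (a sketch; the host and subset names are invented for the example):<\/p>

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews-canary            # illustrative name
spec:
  hosts:
    - reviews                     # illustrative service host
  http:
    - route:
        - destination:
            host: reviews
            subset: v1            # stable version
          weight: 90
        - destination:
            host: reviews
            subset: v2            # canary version
          weight: 10
```

<p>The control plane translates such declarative resources into concrete configuration for every sidecar proxy.<\/p>
<p>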
The configuration includes the data plane policies, rules, and settings applied to the sidecar proxies. Furthermore, the control plane is responsible for integration with platforms such as Kubernetes. This is necessary to receive information about services starting or terminating in order to keep the service discovery updated. Since the control plane is indispensable in a service mesh, it is a component that has to be highly available and distributed.<\/p>\n<div title=\"Page 2\">\n<div>\n<p>This architectural approach solves several problems. The first one is the decoupling of communication and business logic. The sidecar implements the communication and network requirements, for example, traffic encryption. As a result, the microservices can concentrate on their business logic and the API exposing their functionality. Furthermore, communication and network features are applied to all sidecar proxies at once. This enables faster, more agile deployments since just one application has to be changed to apply the update. In addition, the microservices\u2019 source code needs no updates, which reduces the number of changes. The result is simpler and faster development and deployment.<\/p>\n<\/div>\n<div>\n<p>However, while additional service mesh configuration might be necessary, implementing cluster-specific communication in the microservices is not. Moreover, the microservices do not have to know that a sidecar exists or that it changes parts of the communication by, for example, routing a request.<\/p>\n<div title=\"Page 2\">\n<p>In summary, service meshes help to \u201cdecouple the infrastructure from the application code\u201c, which \u201csimplifies the underlying network topology\u201c. The developers, therefore, do not have to implement network behavior; instead, operators can create policies to describe the network behavior, identity, and traffic flow. 
The goal is to make service-to-service communication visible, manageable, and controlled across the microservice cluster.<\/p>\n<h3><span class=\"ez-toc-section\" id=\"Features\"><\/span>Features<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<div title=\"Page 2\">\n<p>One of the features is the architecture itself. Service meshes move the communication and networking features into a standardized architectural layer across all microservices. As discussed in Chapter II-A, implementing communication and networking functionalities requires changes in the sidecar but not in every microservice. As a result, feature development and the deployment of breaking changes in the communication become simpler. Additionally, modernizing microservices is another benefit when introducing a service mesh. Implementing new networking features in the microservices is unnecessary since the sidecar takes over this task. For example, this could be useful if a legacy application in the cluster does not support the newest encryption protocols.<\/p>\n<div title=\"Page 2\">\n<p>Generally, the features of service meshes can be summarized in three categories:<\/p>\n<ul>\n<li>Traffic Control<\/li>\n<li>Security<\/li>\n<li>Observability<\/li>\n<\/ul>\n<h4><span class=\"ez-toc-section\" id=\"Traffic-Control\"><\/span>Traffic Control<span class=\"ez-toc-section-end\"><\/span><\/h4>\n<div title=\"Page 2\">\n<p>The main goal of traffic control is to \u201cprovide granular [and] declarative control over network traffic\u201c. The basic idea is to create an abstraction for delivering requests reliably from one microservice to another. In order to manage the traffic flow, service meshes are aware of all available services and maintain this state in a service registry.<\/p>\n<p>The service registry helps the service mesh to make routing decisions on several OSI layers. The basic routing is located in the third and fourth OSI layers, where the payload and the payload\u2019s encoding do not matter. 
Additionally, many service meshes provide traffic management on the application layer (OSI layer seven). An example of traffic control in the application layer is routing based on different Hypertext Transfer Protocol (HTTP) endpoints. These features of service meshes enable, for example, canary deployments or A\/B testing.<\/p>\n<div title=\"Page 3\">\n<p>Additionally, service meshes can help to increase resilience by providing features such as circuit breaking, retries, or timeout mechanisms. Furthermore, the sidecar proxies can mitigate cascading failures or temporary outages. Service meshes can also handle throughput, latency, and (latency-aware) load balancing, which can increase the performance of microservice architectures.<\/p>\n<div title=\"Page 3\">\n<p>Furthermore, since the service mesh is aware of what is happening in the cluster, it can apply changes to solve challenges like temporary outages without additional implementation in the microservices. With the help of the sidecars, service meshes can use retry mechanisms to absorb short-term failures.\u00a0A service mesh can also handle failover mechanisms or dynamic shifting of traffic when performing protocol updates. An example of such an update could be a new field in an API that is required for new features or breaking changes in the communication flow.<\/p>\n<h4><span class=\"ez-toc-section\" id=\"Security\"><\/span>Security<span class=\"ez-toc-section-end\"><\/span><\/h4>\n<div title=\"Page 3\">\n<p>As the number of microservices increases, the number of interconnections between services rises too. This leads to more communication links that need protection, which lowers the overall security. Additionally, microservices need to be validated as trustworthy.<br \/>\nAn example would be a payment system that only some other services should have access to. 
Therefore, access to the payment service should be generally prohibited, with access granted only to the few services that interact with it.<br \/>\nIn order to decide whether requests are allowed, service meshes identify the requesting services by their service name or additional labels. In addition to access rights, identities can help track requests. An often-used model is the Zero Trust security model, which evolved from the Perimeter Security model and states that network participants no longer automatically gain access to the system as soon as they join the network. Therefore, every request must be authenticated and authorized to decide whether access is allowed. Zero Trust security claims to \u201cnever trust, always verify\u201c.<\/p>\n<div title=\"Page 3\">\n<p>In order to increase overall security, service meshes can enforce security, policy, and compliance requirements across all microservices. This includes securing the service-to-service communication by encrypting the communication channel or adding identities to the microservices. Both can be accomplished by protocol translation or encryption protocols like Mutual Transport Layer Security (mTLS). The mTLS protocol is based on the Transport Layer Security (TLS) protocol, which authenticates the server a client wants to communicate with. Additionally, the communication between client and server is encrypted. When using mTLS, the client is authenticated by the server in addition to the server\u2019s authentication at the client. Since both parties of the communication have to authenticate, both have verified identities that can be used to check further authorization policies.\u00a0There is no need for complex perimeter security models, like subnets, that are hard to set up and maintain, since every request is authenticated and authorized by the service mesh. 
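<\/p>
<p>In Istio, for example, such a zero-trust posture can be enabled declaratively; a mesh-wide strict-mTLS policy is a single, short resource (a sketch; placing it in the mesh root namespace makes it apply mesh-wide):<\/p>

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # the mesh root namespace in a default installation
spec:
  mtls:
    mode: STRICT            # reject plaintext service-to-service traffic
```

<p>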
In order to enable these features, service meshes can generate and manage certificates, administrate keys, Single Sign-on (SSO) tokens, or API keys.<\/p>\n<h4><span class=\"ez-toc-section\" id=\"Observability\"><\/span>Observability<span class=\"ez-toc-section-end\"><\/span><\/h4>\n<div title=\"Page 3\">\n<p>Since service meshes route all traffic through the sidecar proxy, all communication is transparent to it. This data can enable features in the area of observability, tracing, and diagnostics. Moreover, troubleshooting is more straightforward since the service mesh can help find correlating service interactions that might lead to problems.\u00a0The sidecars provide the data for observability, tracing, and diagnostics. Therefore, no additional implementation in the microservices is necessary. This includes simple health checks, performance monitoring, and collecting logs.<\/p>\n<h2><span class=\"ez-toc-section\" id=\"The-extendet-Berkley-Packet-Filter-eBPF\"><\/span>The extended Berkeley Packet Filter (eBPF)<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<div title=\"Page 3\">\n<p>The extended Berkeley Packet Filter (eBPF) is the further development of the Berkeley Packet Filter (BPF), which was developed by Steven McCanne and Van Jacobson in 1992. BPF is a network packet filter that performs the filtering mechanism in the kernel space instead of the user space. As a result, the performance improves significantly compared to filters that copy the whole packet from the kernel to the user space.\u00a0eBPF, the further development of BPF, was the Linux kernel community\u2019s answer to the long kernel update cycles and the circuitous handling of kernel modules. Kernel modules are an extension feature of the kernel that can enhance its functionality.<\/p>\n<div title=\"Page 3\">\n<p>Like BPF, eBPF is located in the Linux kernel. 
Compared to BPF, which is a packet filter in the kernel, eBPF can run sandboxed programs of any kind in the privileged context of the kernel. Therefore, it is often called the kernel\u2019s mini Virtual Machine (VM).<br \/>\nWith this in mind, eBPF makes the kernel highly customizable and helps to \u201csafely and efficiently extend the kernel\u2019s capabilities without requiring to change the kernel\u2019s source code or load kernel modules\u201c. That is why developers can enhance the kernel and the Operating System (OS) for their needs without waiting for the community to update the kernel officially.<\/p>\n<div title=\"Page 3\">\n<p>On the one hand, eBPF programs are restricted to the kernel space and cannot cross the border between the kernel and the user space. In order to control eBPF, a stable API to the user space is available. On the other hand, eBPF programs have access to the internal APIs of the OS, which enables rich security, observability, and tracing functionality. This includes monitoring or tracing applications running in the user space. Additionally, the proximity to the kernel allows eBPF programs to heavily optimize for performance. This is also accomplished by compiling out unused functionality.<\/p>\n<h3><span class=\"ez-toc-section\" id=\"eBPF-Program-Structure\"><\/span>eBPF Program Structure<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<div title=\"Page 4\">\n<p>The eBPF workflow can be separated into a development and a runtime phase. In the development phase, the actual eBPF program is defined and compiled into bytecode, a binary format the kernel\u2019s eBPF API accepts.<br \/>\nAfterward, the kernel loads the eBPF program, verifies it, and compiles it to the targeted Central Processing Unit (CPU) architecture.<\/p>\n<div title=\"Page 4\">\n<p>In order to run an eBPF application, the kernel\u2019s eBPF API expects bytecode as input. 
Since writing bytecode is possible but not the most common development method, several compiler suites (e.g., LLVM) and abstractions (e.g., the eBPF Go Library) exist. These allow writing eBPF programs in different programming languages like C, C++, or Go.<br \/>\nThe resulting bytecode includes the actual eBPF program and the description of the used eBPF maps. Before the application can run, the kernel verifies that the eBPF program does not harm the kernel and compiles it for the targeted CPU architecture.<\/p>\n<figure id=\"attachment_46568\" aria-describedby=\"caption-attachment-46568\" style=\"width: 1147px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-46568\" src=\"https:\/\/www.inovex.de\/wp-content\/uploads\/eBPF_Hook_examples.png\" alt=\"\" width=\"1147\" height=\"660\" srcset=\"https:\/\/www.inovex.de\/wp-content\/uploads\/eBPF_Hook_examples.png 1147w, https:\/\/www.inovex.de\/wp-content\/uploads\/eBPF_Hook_examples-300x173.png 300w, https:\/\/www.inovex.de\/wp-content\/uploads\/eBPF_Hook_examples-1024x589.png 1024w, https:\/\/www.inovex.de\/wp-content\/uploads\/eBPF_Hook_examples-768x442.png 768w, https:\/\/www.inovex.de\/wp-content\/uploads\/eBPF_Hook_examples-400x230.png 400w, https:\/\/www.inovex.de\/wp-content\/uploads\/eBPF_Hook_examples-360x207.png 360w\" sizes=\"auto, (max-width: 1147px) 100vw, 1147px\" \/><figcaption id=\"caption-attachment-46568\" class=\"wp-caption-text\">Example hook points<\/figcaption><\/figure>\n<h4><span class=\"ez-toc-section\" id=\"Hooks\"><\/span>Hooks<span class=\"ez-toc-section-end\"><\/span><\/h4>\n<div title=\"Page 4\">\n<p>The execution of an eBPF application is event-driven. Therefore, the program includes a description of the so-called hooks the application wants to listen to. 
Each eBPF application can specify one hook point.<br \/>\nMany hook points allow, for example, listening for system calls, function entries and exits, network events, or kernel tracepoints. Figure 1 displays examples of points in the Linux kernel at which eBPF programs can be triggered. If the predefined hooks do not fit the requirements, kernel probes (kprobes) and user probes (uprobes) provide the possibility to attach the eBPF program almost anywhere in the kernel or in user applications. The hook points are defined by the program type of the implemented eBPF program.<\/p>\n<p>In general, eBPF programs cannot call regular kernel functions, but helper functions exist to simplify the implementation of eBPF programs. They differ depending on the eBPF program type, and they form a generalized API that enables eBPF programs to \u201cconsult a core kernel-defined set of function calls in order to retrieve\/push data from\/to the kernel\u201c. The most used helper functions are random number generators, time functions, or functions that help access the eBPF maps, which the next chapter focuses on.<\/p>\n<div title=\"Page 4\">\n<h4><span class=\"ez-toc-section\" id=\"Maps\"><\/span>Maps<span class=\"ez-toc-section-end\"><\/span><\/h4>\n<div title=\"Page 4\">\n<p>In order to share data, eBPF introduces a mechanism called maps. Maps can be used by eBPF programs in the kernel space as well as by applications running in the user space. For example, with maps, metrics for monitoring can be delivered to an application running in the user space that displays them in a dashboard. The possibility to exchange data between (eBPF) programs helps with more complex scenarios where different hooks and programs must be combined.<br \/>\nMaps consist of efficient key-value stores that are located in the kernel space. 
In order to enable user space programs to access the data as well, eBPF offers an API to the user space.<\/p>\n<h3><span class=\"ez-toc-section\" id=\"Fields-of-Application\"><\/span>Fields of Application<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<div title=\"Page 4\">\n<p>The development capabilities of eBPF enable a variety of domains in which eBPF can be used. Since the original BPF is a packet filter, one of eBPF\u2019s biggest strengths is managing network traffic. As in BPF, creating network policies that can even engage the raw packet buffers is possible. This allows quick decisions about what to do with a packet since both enable filtering network packets before the network stack of the Linux kernel. Moreover, with eBPF, it is possible to bypass the complex networking and routing paths of the Linux kernel.<\/p>\n<div title=\"Page 4\">\n<p>In addition to the filters, the ability to access the raw packet buffers enables load balancing right at the source of the connection. Depending on the network topology, this can remove the overhead of resolving Network Address Translation (NAT) since Destination Network Address Translation (DNAT) does not need to be conducted.<\/p>\n<div title=\"Page 4\">\n<p>Furthermore, eBPF can enhance networking capabilities by offloading TLS and mTLS. Moreover, complex protocol negotiations and parsing can be improved with the help of eBPF.<\/p>\n<div title=\"Page 4\">\n<p>When taking a more detailed look at the possibilities of eBPF, programs of the type eXpress Data Path (XDP) can be used in various use cases. With XDP, packets can be handled before they are passed to, or prepared for, the in-kernel networking stack. Moreover, in the offloaded mode, the XDP program is executed directly on the Network Interface Controller (NIC). 
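<\/p>
<p>The kind of early drop-or-pass decision an XDP program makes can be illustrated by its classification logic alone, extracted here into plain, user-space C. This is a simplified sketch, not a real XDP program (which would be restricted C compiled to eBPF bytecode and attached to a hook); the function name and constants are invented for the example:<\/p>

```c
#include <stddef.h>

#define ETH_HDR_LEN     14      /* Ethernet header length in bytes */
#define ETHERTYPE_IPV4  0x0800  /* EtherType for IPv4              */
#define PROTO_TCP       6       /* IPv4 protocol number for TCP    */

/* Return 1 if the frame should be dropped (non-IPv4 or non-TCP),
 * 0 if it should be passed on -- mirroring an XDP_DROP / XDP_PASS
 * decision taken before the kernel's networking stack runs. */
int should_drop(const unsigned char *pkt, size_t len)
{
    /* too short to hold an Ethernet header plus a minimal IPv4 header */
    if (len < ETH_HDR_LEN + 20)
        return 1;

    /* the EtherType sits in bytes 12-13 of the Ethernet header (big endian) */
    unsigned int ethertype = ((unsigned int)pkt[12] << 8) | pkt[13];
    if (ethertype != ETHERTYPE_IPV4)
        return 1;

    /* the protocol field is byte 9 of the IPv4 header */
    return pkt[ETH_HDR_LEN + 9] != PROTO_TCP;
}
```

<p>A real XDP program would perform the same bounds checks (the verifier enforces them) and return XDP_DROP or XDP_PASS instead of 1 or 0.<\/p>
<p>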
This is perfect for Distributed Denial-of-Service (DDoS) mitigation, firewalling, or load balancing at OSI layer three or four.<br \/>\nAn example of an XDP-based load balancer is Facebook\u2019s Katran project, which enables building high-performance layer-four load-balancing forwarding planes.<\/p>\n<div title=\"Page 5\">\n<p>Dropping packets at the earliest stage possible is also helpful for filtering mechanisms and processing before the kernel\u2019s networking stack. An example of filtering mechanisms is denying access for specific Internet Protocol (IP) ranges or filtering for the existence of header information.<br \/>\nFor example, if the applications on a node only support Transmission Control Protocol (TCP) traffic, packets of other types can be dropped. Besides filtering, preprocessing before the kernel\u2019s networking stack can be useful, for example, for encapsulation or mechanisms like NAT (especially Source Network Address Translation (SNAT) and DNAT).<\/p>\n<div title=\"Page 5\">\n<p>In addition to the networking capabilities, eBPF can be used to improve a system\u2019s observability. First, the networking capabilities of eBPF enhance the observability of the network and the tracing of requests. Since eBPF programs can see every request, they can aggregate context data and metrics without routing the traffic to, for example, an extra tool in the user space.<\/p>\n<p>Here, XDP is also a good example since it is useful for flow sampling, monitoring, and other network analytics. With the Linux perf infrastructure, pushing a truncated packet, a packet with the full payload, or custom data to the user space is possible. Furthermore, traffic analysis can be stopped if the traffic flow is classified as trustworthy.<\/p>\n<div title=\"Page 5\">\n<p>With the different probes, tracepoints, and perf events, eBPF programs can attach nearly everywhere in the kernel. 
Therefore, it is possible to implement a variety of probes and sensors to collect context-rich data, all of which can be accomplished without changing the kernel\u2019s implementation. Notably, this includes tracing the kernel as well as programs running in the user space.<\/p>\n<div title=\"Page 5\">\n<p>The security capabilities of eBPF are based mainly on its networking and observability features. This is the case since much of security is about what information and data are visible and collectible. Moreover, as displayed in the previous chapters, eBPF can investigate nearly every function call in the Linux kernel. Therefore, it is feasible to undertake system introspections with very low overhead.<\/p>\n<div title=\"Page 5\">\n<p>The different eBPF program types enable the implementation of security-related features at many different layers and hook points in the kernel.<\/p>\n<p>In the networking area, packet drops are the most crucial defense mechanism for DDoS mitigation. With XDP, this is possible early in the networking stack, so only minimal computational resources are spent on a packet before it is dropped, making XDP a perfect match for this task.<br \/>\nTherefore, Cloudflare, a globally operating company providing a Content Delivery Network (CDN), Domain Name System (DNS) servers, and cloud-related security products, decided to implement its DDoS defense with XDP, which now runs across its entire infrastructure.<\/p>\n<div title=\"Page 5\">\n<p>Additionally, Cilium, a Container Networking Interface (CNI) based on eBPF, advertises the automatic encryption of connections with the IPsec infrastructure of the kernel. 
With this mechanism, it is possible to implement encryption on OSI layer three with the help of eBPF and the provided helper functions.<\/p>\n<div title=\"Page 5\">\n<p>Furthermore, seccomp-bpf, a kernel feature based on eBPF, implements a syscall filter, limiting the potential misuse of syscalls. The filter checks whether a syscall belongs to an application\u2019s set of allowed syscalls. This feature is widely used in the area of containerization and container orchestration. Docker, for example, uses seccomp-bpf to apply custom seccomp security profiles, and Kubernetes uses it to implement security contexts for, for example, pods.<\/p>\n<h2><span class=\"ez-toc-section\" id=\"Service-Mesh-Networking-with-eBPF\"><\/span>Service Mesh Networking with eBPF<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<div title=\"Page 5\">\n<p>Service meshes are often applied to modern clusters based on Kubernetes. The Kubernetes control plane includes a component called kube-proxy, which, in its standard configuration, uses a technology called iptables to apply, update, and delete rules depending on the cluster configuration. In general, iptables is used to \u201cset up, maintain, and inspect the tables of IP packet filter rules in the Linux kernel\u201c. More concretely, the kube-proxy implements the iptables rules to provide the Service object abstraction in the cluster. This enables automatic load balancing between the several instances of an application grouped in a Service object, exposing a single endpoint for all running instances to the cluster.<\/p>\n<div title=\"Page 5\">\n<p>Service meshes often sit on top of this iptables-based infrastructure layer, which is why they also use this technology for their base routing. 
For example, Istio, one of the most used service mesh implementations, uses iptables to route between an application and its corresponding sidecar proxy.<br \/>\nSince iptables rules are checked sequentially, processing slows down as the number of rules grows. This can become a problem within large Kubernetes clusters that include a lot of application instances and Service objects. If a service mesh is also deployed to the cluster, the number of rules increases even more. As a result, the check gets even slower, which decreases the performance of the whole cluster. The first impact is a higher latency of connections in the cluster. Additionally, the high number of iptables checks claims more CPU time, which leads to higher utilization.<\/p>\n<div>\n<dl id=\"attachment_46570\">\n<dt>\n<p><figure id=\"attachment_46570\" aria-describedby=\"caption-attachment-46570\" style=\"width: 1207px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-46570 \" src=\"https:\/\/www.inovex.de\/wp-content\/uploads\/SM_routing.png\" alt=\"The figure shows the routing between a service and its sidecar proxy with the different networking layers of the Linux kernel.\" width=\"1207\" height=\"729\" srcset=\"https:\/\/www.inovex.de\/wp-content\/uploads\/SM_routing.png 1556w, https:\/\/www.inovex.de\/wp-content\/uploads\/SM_routing-300x181.png 300w, https:\/\/www.inovex.de\/wp-content\/uploads\/SM_routing-1024x619.png 1024w, https:\/\/www.inovex.de\/wp-content\/uploads\/SM_routing-768x464.png 768w, https:\/\/www.inovex.de\/wp-content\/uploads\/SM_routing-1536x928.png 1536w, https:\/\/www.inovex.de\/wp-content\/uploads\/SM_routing-400x242.png 400w, https:\/\/www.inovex.de\/wp-content\/uploads\/SM_routing-360x217.png 360w\" sizes=\"auto, (max-width: 1207px) 100vw, 1207px\" \/><figcaption id=\"caption-attachment-46570\" class=\"wp-caption-text\">Fig. 2. 
Routing in a service mesh using the example of Istio<\/figcaption><\/figure><\/dt>\n<\/dl>\n<\/div>\n<div title=\"Page 6\">\n<p>Furthermore, when, for example, two applications in a service mesh communicate, the iptables rules will be checked several times. Figure 2 displays that the rules get checked whenever an application and the corresponding sidecar proxy communicate. In addition, the iptables rules are executed every time a packet leaves or enters a Kubernetes Pod or, to be more precise, whenever a packet crosses a network namespace border.<\/p>\n<div title=\"Page 6\">\n<p>In summary, in large service meshes in Kubernetes clusters, the sequential check of iptables rules can decrease performance. As a result, the latency in the cluster increases. In addition, checking the iptables rules needs more CPU cycles, leading to higher utilization.<\/p>\n<div title=\"Page 6\">\n<p>The described problems of iptables could be solved by an updated iptables implementation or an alternative technology. Since, as mentioned before, the further development of kernel technologies is slow and complex, the likelihood of an iptables update is small. 
However, due to its flexibility, an eBPF program could implement the requirements that currently are solved using iptables in a completely different way.<\/p>\n<div>\n<dl id=\"attachment_46572\">\n<dt><\/dt>\n<dt>\n<p><figure id=\"attachment_46572\" aria-describedby=\"caption-attachment-46572\" style=\"width: 1307px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-46572\" src=\"https:\/\/www.inovex.de\/wp-content\/uploads\/SM_eBPF_routing.png\" alt=\"The figure shows the networking between a service and its sidecar, as well as between two sidecars that were optimized with the help of eBPF.\" width=\"1307\" height=\"720\" srcset=\"https:\/\/www.inovex.de\/wp-content\/uploads\/SM_eBPF_routing.png 1686w, https:\/\/www.inovex.de\/wp-content\/uploads\/SM_eBPF_routing-300x165.png 300w, https:\/\/www.inovex.de\/wp-content\/uploads\/SM_eBPF_routing-1024x564.png 1024w, https:\/\/www.inovex.de\/wp-content\/uploads\/SM_eBPF_routing-768x423.png 768w, https:\/\/www.inovex.de\/wp-content\/uploads\/SM_eBPF_routing-1536x846.png 1536w, https:\/\/www.inovex.de\/wp-content\/uploads\/SM_eBPF_routing-400x220.png 400w, https:\/\/www.inovex.de\/wp-content\/uploads\/SM_eBPF_routing-528x290.png 528w, https:\/\/www.inovex.de\/wp-content\/uploads\/SM_eBPF_routing-360x198.png 360w\" sizes=\"auto, (max-width: 1307px) 100vw, 1307px\" \/><figcaption id=\"caption-attachment-46572\" class=\"wp-caption-text\">Fig. 3. Routing in a service mesh with eBPF<\/figcaption><\/figure><\/dt>\n<\/dl>\n<\/div>\n<div title=\"Page 6\">\n<p>The raw Linux sockets are the first level in the networking stack eBPF can interact with when sending a packet from a cluster application. At this stage, the eBPF application has access to the packet and the sockmap, a special-purpose eBPF map containing information about all sockets on a host. 
With this feature, eBPF programs can be implemented that route packets directly between two sockets.<\/p>\n<p>As a result, every time a packet is sent from an application, a corresponding eBPF application can be triggered that sends the packet directly to the destination\u2019s socket. In a service mesh, this destination would be the sidecar proxy. This socket-to-socket connection bypasses most of the Linux networking stack, including the check of iptables. If the request\u2019s destination is located on the same host, as shown in Figure 3, the routing to the destination\u2019s sidecar can also be optimized using eBPF. Furthermore, mechanisms like NAT are no longer necessary, and the service mesh gains functionality by interacting with packets at such a low level of the network stack.<\/p>\n<div title=\"Page 6\">\n<p>The main advantage when using eBPF is that it has access to the information earlier than iptables. The second advantage is eBPF\u2019s data structure compared to iptables. When using iptables, the complete list of rules is checked sequentially every time. This also includes rules for different networking namespaces or IP ranges.\u00a0In contrast, eBPF\u2019s maps are key-value stores. These key-value stores allow rules to be sorted, categorized, and summarized. Therefore, checking rules is considerably more performant, since only the relevant subset of rules needs to be checked.<\/p>\n<div title=\"Page 6\">\n<p>Furthermore, next to the routing between the sidecars and applications, eBPF can also be used to implement other networking features of a service mesh. This includes interactions up to the fourth OSI layer. Consequently, sidecar proxies become unnecessary since they can be bypassed as long as the processing of the network features does not cross the border to OSI layer five. This bypass results in the request paths having two fewer hops, which decreases the latency and CPU load even more. 
Additionally, the proxy has to implement fewer features, which results in more lightweight sidecar applications.<\/p>\n<div title=\"Page 6\">\n<p>Implementing service mesh features in eBPF can also be viewed critically. This is mainly because many features require handling on the application layer, which eBPF does not support. For this reason, the packets would have to be passed on to a user-space application (proxy application).\u00a0If service mesh features are implemented with eBPF and additional proxies in the user space are necessary, the overall complexity increases. Using one component that handles all service mesh features would make the system easier to understand, especially while debugging or troubleshooting.<\/p>\n<div title=\"Page 6\">\n<p>When implementing more features of a service mesh with eBPF, the question arises of how the eBPF applications receive the data necessary to make decisions. This includes information like service discovery data for load-balancing decisions or policy rules. Unfortunately, it is difficult for eBPF programs to collect this data since eBPF is a reactive, event-based technology. Realizing this with eBPF alone would require complex implementations with broad knowledge of the overlying scheduling and processing layers.\u00a0Therefore, in most cases, it makes sense to implement an additional application in the user space that knows the scheduling and processing mechanisms and has easier access to user space resources via corresponding APIs. The user space application could also preprocess the collected information and transfer it into an easy-to-handle format for eBPF programs. The results can then be written to maps that eBPF programs read at runtime and that the user space application updates continuously.<\/p>\n<div title=\"Page 7\">\n<p>The described scenarios show that it is possible to implement networking features of service meshes with eBPF. 
The shift of functionality to the kernel space relieves the proxy application in all service mesh deployments. Depending on the feature set used in a service mesh, more features can be implemented using eBPF, which shifts criteria like feature isolation or security granularity towards their optimum. With the decreased resource usage of eBPF-based implementations, the sidecar proxy architecture becomes less attractive.\u00a0This architecture shift is, for example, visible in the Cilium Service Mesh, which uses a proxy per node and eBPF instead of the sidecar approach widely used by service mesh implementations such as Istio or Linkerd.<\/p>\n<h2><span class=\"ez-toc-section\" id=\"Load-Tests\"><\/span>Load Tests<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<h3><span class=\"ez-toc-section\" id=\"Requirements\"><\/span>Requirements<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<div title=\"Page 7\">\n<p>Two aspects are interesting when performing load tests on a sidecar-based service mesh in the context of eBPF: on the one hand, the performance of the service mesh under different loads, and on the other hand, the comparison of the results between the implementation with eBPF and the implementation without eBPF.\u00a0The analysis results from Chapter IV point out two metrics that are interesting to look at. Since eBPF enables skipping the kernel\u2019s networking stack, many calculations in the CPU are omitted. Therefore, the expected result of the load tests is that using eBPF reduces the utilization of the CPU.\u00a0The omitted calculations also save time, leading to an expected latency decrease. Therefore, these load tests will also compare the latency when benchmarking eBPF against an iptables-based implementation.<\/p>\n<div title=\"Page 7\">\n<p>Furthermore, one of the biggest problems when using iptables is the number of rules applied. The more rules that exist, the slower the sequential execution of iptables. 
Therefore, the load test setup should also consider the number of applied rules.<\/p>\n<h3><span class=\"ez-toc-section\" id=\"Test-Setup\"><\/span>Test Setup<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<div title=\"Page 7\">\n<p>The load test setup is based on a Kubernetes cluster of three nodes, each consisting of four virtual CPUs, eight GB of Random Access Memory (RAM), and 30 GB of storage. Additionally, one extra node exists, which persists the monitoring data.<br \/>\nThe nodes are VMs in the inovex Cloud Services (iCS), a cloud platform operated by inovex GmbH.<\/p>\n<div title=\"Page 7\">\n<p>Next to the Kubernetes control plane node is a node running a monitoring stack consisting of Prometheus, Grafana, and several exporters. The third node in the cluster runs the actual load test, which includes a server and a client application. Since eBPF is a kernel technology restricted to one host, a higher impact is expected if the communicating applications run on the same host. Therefore, this node runs a simple web server (httpbin) and a client (k6) that constantly requests that server. The client also collects latency metrics for each request.<\/p>\n<div title=\"Page 7\">\n<p>In order to provide networking functionality to the cluster, a CNI needs to be installed. In this setup, Calico is the installed CNI that enables the basic networking in the cluster. Its base configuration uses iptables to route the traffic to the corresponding destination.<\/p>\n<div title=\"Page 7\">\n<p>To fulfill the requirement of many iptables rules, another component called Cluster-Flooder is deployed to the cluster. The Cluster-Flooder is a proprietary development that takes advantage of Kubernetes\u2019 handling of Pods and Services. Since their creation produces iptables rules, the Cluster-Flooder creates a configurable number of Pods and Services. To inflate the number of rules, all Pods initialized by the Cluster-Flooder serve as the endpoints for each Service. 
The Cluster-Flooder was configured to start 250 Pods and 250 Services for these tests.<\/p>\n<div title=\"Page 7\">\n<p>Istio is the sidecar-based service mesh selected for these tests. It injects a sidecar into the Kubernetes Pods of the httpbin and k6, which adds the two applications to the service mesh. All other applications are excluded from the service mesh since they do not communicate; proportionally, the overhead of their sidecar proxies would be too high compared to the additional iptables rules they generate. For completeness, it is important to mention that adding the server and the client to the Istio service mesh adds more iptables rules.<\/p>\n<p>The service mesh has some basic configuration. This includes the encryption of all communication between the applications in the service mesh by using mTLS. Additionally, there are policies applied to the httpbin. First, all incoming traffic is forbidden, except for requests from the k6 application to the \/get endpoint via HTTP GET on TCP port 80. The last configuration added to the service mesh is a two-second timeout for requests to the httpbin application. Two seconds were selected since users tolerate waiting times of up to two seconds.<\/p>\n<div title=\"Page 7\">\n<p>To introduce eBPF functionality to the Istio service mesh, Merbridge will be used. Merbridge mainly focuses on routing traffic between the sidecar and the application. However, the traffic to other Pods located on the same cluster node is also managed by Merbridge. Therefore, Merbridge ensures that eBPF is used for both the service-to-sidecar and the Pod-to-Pod routing, which perfectly fits the requirements of these load tests. 
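In Istio's YAML resources, the mesh configuration described above could look roughly as follows. This is a sketch, not the exact configuration used in the thesis: the namespace loadtest, the k6 service account, and the resource names are assumptions, while the resource kinds and fields follow Istio's documented API.

```yaml
# Enforce mTLS for all workloads in the namespace.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: loadtest
spec:
  mtls:
    mode: STRICT
---
# Deny everything to httpbin except GET /get from k6 on port 80.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: httpbin-allow-get
  namespace: loadtest
spec:
  selector:
    matchLabels:
      app: httpbin
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/loadtest/sa/k6"]
    to:
    - operation:
        methods: ["GET"]
        paths: ["/get"]
        ports: ["80"]
---
# Two-second timeout for requests to httpbin.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: httpbin-timeout
  namespace: loadtest
spec:
  hosts:
  - httpbin
  http:
  - timeout: 2s
    route:
    - destination:
        host: httpbin
```

Because an AuthorizationPolicy with an ALLOW rule implicitly denies all other traffic to the selected workload, no separate deny-all policy is needed for httpbin.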
It is important to mention that Merbridge does not move other service mesh features from the networking area to the kernel.<\/p>\n<h3><span class=\"ez-toc-section\" id=\"Result-Analysis\"><\/span>Result Analysis<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<div>\n<dl id=\"attachment_46574\">\n<dt>\n<p><figure id=\"attachment_46574\" aria-describedby=\"caption-attachment-46574\" style=\"width: 1200px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-46574 size-full\" src=\"https:\/\/www.inovex.de\/wp-content\/uploads\/combined_excerpt.png\" alt=\"The figure shows the latency distribution of different load tests that were using Calico and Merbridge for networking. \" width=\"1200\" height=\"800\" srcset=\"https:\/\/www.inovex.de\/wp-content\/uploads\/combined_excerpt.png 1200w, https:\/\/www.inovex.de\/wp-content\/uploads\/combined_excerpt-300x200.png 300w, https:\/\/www.inovex.de\/wp-content\/uploads\/combined_excerpt-1024x683.png 1024w, https:\/\/www.inovex.de\/wp-content\/uploads\/combined_excerpt-768x512.png 768w, https:\/\/www.inovex.de\/wp-content\/uploads\/combined_excerpt-400x267.png 400w, https:\/\/www.inovex.de\/wp-content\/uploads\/combined_excerpt-360x240.png 360w\" sizes=\"auto, (max-width: 1200px) 100vw, 1200px\" \/><figcaption id=\"caption-attachment-46574\" class=\"wp-caption-text\">Fig. 4. An excerpt of the latency distribution of the load tests<\/figcaption><\/figure><\/dt>\n<\/dl>\n<\/div>\n<div title=\"Page 8\">\n<p>Figure 4 shows the latencies of different tests with Calico and Merbridge. The box displays the mean, the 25th, and the 75th percentile of the measured latency for each test run. The whiskers show the fifth and the 95th percentile.<br \/>\nLooking at the results, two things are noticeable. The first is the significant variance in the results for each technology. 
For example, the results for Calico range from approximately 2.5 ms up to nearly 20 ms. Furthermore, when using Merbridge, the whiskers range from about 3 ms up to 140 ms. The test with ID four in particular shows significant outliers, but the results generally show considerable uncertainty.<br \/>\nAdditionally, no improvement is visible when using Merbridge, although one was expected based on the analysis results.<\/p>\n<div>\n<dl id=\"attachment_46576\">\n<dt>\n<p><figure id=\"attachment_46576\" aria-describedby=\"caption-attachment-46576\" style=\"width: 1200px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-46576 size-full\" src=\"https:\/\/www.inovex.de\/wp-content\/uploads\/combined.png\" alt=\"The figure shows the maximal CPU utilization of different load tests that were using Calico and Merbridge for networking. \" width=\"1200\" height=\"400\" srcset=\"https:\/\/www.inovex.de\/wp-content\/uploads\/combined.png 1200w, https:\/\/www.inovex.de\/wp-content\/uploads\/combined-300x100.png 300w, https:\/\/www.inovex.de\/wp-content\/uploads\/combined-1024x341.png 1024w, https:\/\/www.inovex.de\/wp-content\/uploads\/combined-768x256.png 768w, https:\/\/www.inovex.de\/wp-content\/uploads\/combined-400x133.png 400w, https:\/\/www.inovex.de\/wp-content\/uploads\/combined-360x120.png 360w\" sizes=\"auto, (max-width: 1200px) 100vw, 1200px\" \/><figcaption id=\"caption-attachment-46576\" class=\"wp-caption-text\">Fig. 5. The maximum CPU on the third node for different technologies<\/figcaption><\/figure><\/dt>\n<\/dl>\n<\/div>\n<div title=\"Page 8\">\n<p>In addition to the missing latency improvements when using Merbridge, the captured maximal CPU load does not improve either. The maximum CPU utilization shown in Figure 5 is inconsistent. 
Especially when using Merbridge, the results of these tests are in the range of 39.7% to 65.7%, which is a considerable spread.<\/p>\n<div title=\"Page 8\">\n<p>While looking for optimizations, three explanations for these problems were found. The first is cloud noise in the iCS. To achieve the best capacity utilization of the physical hosts, the cloud is slightly over-scheduled with VMs (verbal information from Hannes von Haugwitz and Christian Rohmann, 10.03.2023), which can result in CPU steal. Moreover, the network utilization on the host machine, influencing the load test results, is unknown.<br \/>\nTackling the noisy cloud is challenging since its scheduling and network utilization cannot be manipulated. In order to keep the time gap between testing the two technologies as small as possible, their tests are executed in succession. As a result, Calico is tested first and, right afterward, Merbridge with the same test configuration.<\/p>\n<div title=\"Page 8\">\n<p>When taking a closer look at the CPU load on the node the load tests run on, it is noticeable that the kube-proxy has a very high utilization when the Cluster-Flooder\u2019s Pods get started. The kube-proxy utilizes a CPU core at close to 100% for quite a long time by running an iptables-restore process. This was noticeable across deployments using Calico as well as Merbridge.\u00a0One way to solve this issue is to wait until the kube-proxy finishes its rule restoration, which takes at least 25 minutes. Another, less time-consuming solution involves changes to the configuration.<\/p>\n<p>After testing different Cluster-Flooder configurations, it is noticeable that the number of Pods created by the Cluster-Flooder has a higher impact on the kube-proxy\u2019s CPU utilization than the number of initialized Services. 
A sweet spot was found at 50 Pods and 500 Services, resulting in 101,806 entries in the iptables ruleset and an iptables overhead lasting less than five minutes, which is a significant improvement compared to the previous configuration.<\/p>\n<div title=\"Page 8\">\n<p>Additionally, the httpbin, used as the server application for the load tests, showed a high utilization at a request rate of 200 requests per second. In addition, the latency seemed relatively high for such a simple request pattern with just 200 requests per second. Therefore, the httpbin was replaced with an NGINX container to see if it could handle a higher rate of requests better. Tests showed that NGINX utilizes the CPU significantly less than the httpbin. Therefore, to minimize the likelihood of the httpbin becoming a bottleneck and distorting the results, k6 requests an NGINX server instead of the httpbin in all upcoming tests.<\/p>\n<div title=\"Page 8\">\n<p>With these improvements, the load tests are repeated. Additionally, the number of requests was increased to 1000 and 1500 requests per second since NGINX can handle far more requests per second at the same CPU utilization.<\/p>\n<div>\n<dl id=\"attachment_46578\">\n<dt>\n<p><figure id=\"attachment_46578\" aria-describedby=\"caption-attachment-46578\" style=\"width: 1022px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-46578 \" src=\"https:\/\/www.inovex.de\/wp-content\/uploads\/combined-1.png\" alt=\"The figure shows the latency distribution of different load test pairs that were using Calico and Merbridge for networking. 
\" width=\"1022\" height=\"821\" srcset=\"https:\/\/www.inovex.de\/wp-content\/uploads\/combined-1.png 1463w, https:\/\/www.inovex.de\/wp-content\/uploads\/combined-1-300x241.png 300w, https:\/\/www.inovex.de\/wp-content\/uploads\/combined-1-1024x822.png 1024w, https:\/\/www.inovex.de\/wp-content\/uploads\/combined-1-768x617.png 768w, https:\/\/www.inovex.de\/wp-content\/uploads\/combined-1-400x321.png 400w, https:\/\/www.inovex.de\/wp-content\/uploads\/combined-1-360x289.png 360w\" sizes=\"auto, (max-width: 1022px) 100vw, 1022px\" \/><figcaption id=\"caption-attachment-46578\" class=\"wp-caption-text\">Fig. 6. The average request duration for different technologies displayed for each testing pair<\/figcaption><\/figure><\/dt>\n<\/dl>\n<\/div>\n<div title=\"Page 8\">\n<p>Figure 6 shows the results of the latencies where the test pairs were executed right after one another. When looking at the boxplots that display the test pairs, it is visible that the results are not consistent either, even though this setup tries to minimize the impact of cloud noise on the measurement.\u00a0This is the case for Calico and Merbridge. Additionally, the results also show that there is no improvement when using Merbridge over using Calico, independent of the number of requests. In some cases, the latency is even worse compared to Calico.<\/p>\n<div>\n<dl id=\"attachment_46580\">\n<dt>\n<p><figure id=\"attachment_46580\" aria-describedby=\"caption-attachment-46580\" style=\"width: 1003px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-46580 \" src=\"https:\/\/www.inovex.de\/wp-content\/uploads\/combined-2.png\" alt=\"The figure shows the maximum CPU utilization of different load test pairs that were using Calico and Merbridge for networking. 
\" width=\"1003\" height=\"806\" srcset=\"https:\/\/www.inovex.de\/wp-content\/uploads\/combined-2.png 1464w, https:\/\/www.inovex.de\/wp-content\/uploads\/combined-2-300x241.png 300w, https:\/\/www.inovex.de\/wp-content\/uploads\/combined-2-1024x823.png 1024w, https:\/\/www.inovex.de\/wp-content\/uploads\/combined-2-768x617.png 768w, https:\/\/www.inovex.de\/wp-content\/uploads\/combined-2-400x322.png 400w, https:\/\/www.inovex.de\/wp-content\/uploads\/combined-2-360x289.png 360w\" sizes=\"auto, (max-width: 1003px) 100vw, 1003px\" \/><figcaption id=\"caption-attachment-46580\" class=\"wp-caption-text\">Fig. 7. The maximum CPU on the third node for different technologies displayed for each testing pair<\/figcaption><\/figure><\/dt>\n<\/dl>\n<\/div>\n<div title=\"Page 9\">\n<div>\n<div>\n<p>When comparing the test pairs\u2019 maximum CPU utilization in Figure 7, no improvement is visible when using Merbridge. Merbridge performs slightly better in most test cases, but the difference with Calico is marginal. However, it is noticeable that the maximum CPU utilization is noticeably worse when using Merbridge in two of the six test pairs. Therefore a generalized proposition is difficult.<\/p>\n<h2><span class=\"ez-toc-section\" id=\"Conclusion\"><\/span>Conclusion<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<div title=\"Page 9\">\n<div>\n<div>\n<p>The analysis showed that eBPF can bring promising improvements to service meshes. This is especially the case for the networking between a workload and its sidecar. The sidecar application can even be bypassed when the service mesh\u2019s networking functionality below OSI layer five is implemented with eBPF.Since eBPF is located in the kernel space, it is not made to interact with network packets on OSI layers five and above. The kernel is missing features to handle traffic at these layers, so functionality on these layers must be implemented from scratch. 
As a result, eBPF should not aim to handle such traffic but rather hand it over to user space applications that profit from already existing implementations and programming APIs.<\/p>\n<div title=\"Page 9\">\n<div>\n<div>\n<p>While the theoretical analysis was promising, the results of the load tests are sobering. First, in this load test setup, problems such as cloud noise exist, which makes it difficult to compare the service mesh implementations. Furthermore, the impact of eBPF is far smaller than expected. Even though many layers of the kernel\u2019s networking stack are skipped with Merbridge, the kernel\u2019s networking implementation is heavily optimized for the use cases of these load tests.<\/p>\n<div title=\"Page 9\">\n<div>\n<div>\n<p>Overall, eBPF seems like a promising technology that can, in theory, enhance service mesh networking. However, compared to Calico, no improvement was visible when using eBPF in the form of Merbridge to communicate in an Istio service mesh.<\/p>\n<h2><span class=\"ez-toc-section\" id=\"Outlook\"><\/span>Outlook<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<div title=\"Page 9\">\n<div>\n<div>\n<p>Overall, much exciting research can be done based on the results collected.\u00a0The load test results did not clearly show that eBPF in the form of Merbridge improves Istio\u2019s networking implementation. Further investigation is necessary to point out possible architectural or implementation mistakes. This includes a detailed analysis of the various eBPF programs forming Merbridge.<\/p>\n<div title=\"Page 9\">\n<div>\n<div>\n<p>Furthermore, it would be interesting to see what a complete shift of all service mesh features on OSI layers three and four to eBPF would look like. Merbridge focused on the communication between the different parts of the service mesh, but networking features like load balancing have not yet been moved to eBPF. 
The impact of these implementations on performance would be an exciting future investigation.<\/p>\n<div title=\"Page 9\">\n<div>\n<div>\n<p>The load tests showed that the kube-proxy&#8217;s default configuration is easy to overload. Additionally, the kube-proxy has a mode where it uses an IP Virtual Server (IPVS) instead of iptables. Its rule checks are already optimized, resulting in consistent processing performance independent of the cluster size. Therefore, it would be interesting to compare these two technologies and analyze to what extent replacing IPVS with eBPF would make sense.<\/p>\n<div title=\"Page 10\">\n<div>\n<div>\n<p>During the load tests, the noise of the used cloud infrastructure was an additional problem. A future investigation could perform the same tests in a quieter environment. This would help to verify whether the inconsistent measurements were caused by the iCS. Besides this, a less noisy environment could help to pinpoint the causes of the missing improvements when using Merbridge.<\/p>\n<div title=\"Page 10\">\n<div>\n<div>\n<p>The analysis of this thesis mainly focused on the networking enhancements of eBPF in service meshes. Observability and security are two promising fields of application of eBPF which could broadly improve service meshes. In networking, the improvements mainly focused on shifting service mesh functionality into eBPF. 
In contrast, when using eBPF for observability and security-related tasks, the features of service meshes could be expanded.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>In modern cloud-native architectures, service meshes find broad adoption. Service meshes add a separate layer to microservice clusters which enhances the resilience. The service mesh is formed by sidecar proxies that handle networking, observability, and security-related tasks. 
However, the additional proxies and their network traffic service meshes add overhead to the hosts.\u00a0With extended Berkley Packet [&hellip;]<\/p>\n","protected":false},"author":362,"featured_media":47363,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"inline_featured_image":false,"ep_exclude_from_search":false,"footnotes":""},"tags":[68,71,66,823,114,298],"service":[414,422,423,879],"coauthors":[{"id":362,"display_name":"Max Merz","user_nicename":"mmerz"}],"class_list":["post-46394","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","tag-backend","tag-cloud","tag-devops","tag-docker","tag-kubernetes","tag-microservices","service-cloud","service-it-engineering","service-kubernetes","service-security"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v26.5 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>eBPF in Service Meshes - inovex GmbH<\/title>\n<meta name=\"description\" content=\"Master-Thesis summary about how to reduce the resource usage and network overhead in a service mesh with the kernel technology eBPF.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.inovex.de\/de\/blog\/ebpf-reduce-resource-usage-network-overhead\/\" \/>\n<meta property=\"og:locale\" content=\"de_DE\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"eBPF in Service Meshes - inovex GmbH\" \/>\n<meta property=\"og:description\" content=\"Master-Thesis summary about how to reduce the resource usage and network overhead in a service mesh with the kernel technology eBPF.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.inovex.de\/de\/blog\/ebpf-reduce-resource-usage-network-overhead\/\" \/>\n<meta property=\"og:site_name\" content=\"inovex GmbH\" \/>\n<meta 
property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/inovexde\" \/>\n<meta property=\"article:published_time\" content=\"2023-09-27T05:58:36+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2023-09-27T09:09:05+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.inovex.de\/wp-content\/uploads\/ebpf-service-mesh.png\" \/>\n\t<meta property=\"og:image:width\" content=\"1920\" \/>\n\t<meta property=\"og:image:height\" content=\"1080\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Max Merz\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:image\" content=\"https:\/\/www.inovex.de\/wp-content\/uploads\/ebpf-service-mesh-1024x576.png\" \/>\n<meta name=\"twitter:creator\" content=\"@inovexgmbh\" \/>\n<meta name=\"twitter:site\" content=\"@inovexgmbh\" \/>\n<meta name=\"twitter:label1\" content=\"Verfasst von\" \/>\n\t<meta name=\"twitter:data1\" content=\"Max Merz\" \/>\n\t<meta name=\"twitter:label2\" content=\"Gesch\u00e4tzte Lesezeit\" \/>\n\t<meta name=\"twitter:data2\" content=\"34\u00a0Minuten\" \/>\n\t<meta name=\"twitter:label3\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data3\" content=\"Max Merz\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/www.inovex.de\/de\/blog\/ebpf-reduce-resource-usage-network-overhead\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/www.inovex.de\/de\/blog\/ebpf-reduce-resource-usage-network-overhead\/\"},\"author\":{\"name\":\"Max Merz\",\"@id\":\"https:\/\/www.inovex.de\/de\/#\/schema\/person\/4d8b4a82f847d2d46f70c10debe0dca3\"},\"headline\":\"eBPF: Reduce Resource Usage and Network 
Overhead\",\"datePublished\":\"2023-09-27T05:58:36+00:00\",\"dateModified\":\"2023-09-27T09:09:05+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/www.inovex.de\/de\/blog\/ebpf-reduce-resource-usage-network-overhead\/\"},\"wordCount\":6835,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/www.inovex.de\/de\/#organization\"},\"image\":{\"@id\":\"https:\/\/www.inovex.de\/de\/blog\/ebpf-reduce-resource-usage-network-overhead\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.inovex.de\/wp-content\/uploads\/ebpf-service-mesh.png\",\"keywords\":[\"Backend\",\"Cloud\",\"DevOps\",\"Docker\",\"Kubernetes\",\"Microservices\"],\"articleSection\":[\"English Content\",\"Infrastructure\"],\"inLanguage\":\"de\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/www.inovex.de\/de\/blog\/ebpf-reduce-resource-usage-network-overhead\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.inovex.de\/de\/blog\/ebpf-reduce-resource-usage-network-overhead\/\",\"url\":\"https:\/\/www.inovex.de\/de\/blog\/ebpf-reduce-resource-usage-network-overhead\/\",\"name\":\"eBPF in Service Meshes - inovex GmbH\",\"isPartOf\":{\"@id\":\"https:\/\/www.inovex.de\/de\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/www.inovex.de\/de\/blog\/ebpf-reduce-resource-usage-network-overhead\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/www.inovex.de\/de\/blog\/ebpf-reduce-resource-usage-network-overhead\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.inovex.de\/wp-content\/uploads\/ebpf-service-mesh.png\",\"datePublished\":\"2023-09-27T05:58:36+00:00\",\"dateModified\":\"2023-09-27T09:09:05+00:00\",\"description\":\"Master-Thesis summary about how to reduce the resource usage and network overhead in a service mesh with the kernel technology 
eBPF.\",\"breadcrumb\":{\"@id\":\"https:\/\/www.inovex.de\/de\/blog\/ebpf-reduce-resource-usage-network-overhead\/#breadcrumb\"},\"inLanguage\":\"de\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.inovex.de\/de\/blog\/ebpf-reduce-resource-usage-network-overhead\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"de\",\"@id\":\"https:\/\/www.inovex.de\/de\/blog\/ebpf-reduce-resource-usage-network-overhead\/#primaryimage\",\"url\":\"https:\/\/www.inovex.de\/wp-content\/uploads\/ebpf-service-mesh.png\",\"contentUrl\":\"https:\/\/www.inovex.de\/wp-content\/uploads\/ebpf-service-mesh.png\",\"width\":1920,\"height\":1080,\"caption\":\"The eBPF bee in a honeycomb illustration connected to other honeycombs like a network\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.inovex.de\/de\/blog\/ebpf-reduce-resource-usage-network-overhead\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/www.inovex.de\/de\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"eBPF: Reduce Resource Usage and Network Overhead\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.inovex.de\/de\/#website\",\"url\":\"https:\/\/www.inovex.de\/de\/\",\"name\":\"inovex GmbH\",\"description\":\"\",\"publisher\":{\"@id\":\"https:\/\/www.inovex.de\/de\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.inovex.de\/de\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"de\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/www.inovex.de\/de\/#organization\",\"name\":\"inovex 
GmbH\",\"url\":\"https:\/\/www.inovex.de\/de\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"de\",\"@id\":\"https:\/\/www.inovex.de\/de\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/www.inovex.de\/wp-content\/uploads\/2021\/03\/inovex-logo-16-9-1.png\",\"contentUrl\":\"https:\/\/www.inovex.de\/wp-content\/uploads\/2021\/03\/inovex-logo-16-9-1.png\",\"width\":1921,\"height\":1081,\"caption\":\"inovex GmbH\"},\"image\":{\"@id\":\"https:\/\/www.inovex.de\/de\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/www.facebook.com\/inovexde\",\"https:\/\/x.com\/inovexgmbh\",\"https:\/\/www.instagram.com\/inovexlife\/\",\"https:\/\/www.linkedin.com\/company\/inovex\",\"https:\/\/www.youtube.com\/channel\/UC7r66GT14hROB_RQsQBAQUQ\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/www.inovex.de\/de\/#\/schema\/person\/4d8b4a82f847d2d46f70c10debe0dca3\",\"name\":\"Max Merz\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"de\",\"@id\":\"https:\/\/www.inovex.de\/de\/#\/schema\/person\/image\/7a2b218eef91c85b703a38cb5f9c416e\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/cff5b9c48ff75406f96eb91d6175b5998604050dc834ab6f4afc26779433a24f?s=96&d=retro&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/cff5b9c48ff75406f96eb91d6175b5998604050dc834ab6f4afc26779433a24f?s=96&d=retro&r=g\",\"caption\":\"Max Merz\"},\"sameAs\":[\"linkedin.com\/in\/max-merz\"],\"url\":\"https:\/\/www.inovex.de\/de\/blog\/author\/mmerz\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"eBPF in Service Meshes - inovex GmbH","description":"Master-Thesis summary about how to reduce the resource usage and network overhead in a service mesh with the kernel technology eBPF.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.inovex.de\/de\/blog\/ebpf-reduce-resource-usage-network-overhead\/","og_locale":"de_DE","og_type":"article","og_title":"eBPF in Service Meshes - inovex GmbH","og_description":"Master-Thesis summary about how to reduce the resource usage and network overhead in a service mesh with the kernel technology eBPF.","og_url":"https:\/\/www.inovex.de\/de\/blog\/ebpf-reduce-resource-usage-network-overhead\/","og_site_name":"inovex GmbH","article_publisher":"https:\/\/www.facebook.com\/inovexde","article_published_time":"2023-09-27T05:58:36+00:00","article_modified_time":"2023-09-27T09:09:05+00:00","og_image":[{"width":1920,"height":1080,"url":"https:\/\/www.inovex.de\/wp-content\/uploads\/ebpf-service-mesh.png","type":"image\/png"}],"author":"Max Merz","twitter_card":"summary_large_image","twitter_image":"https:\/\/www.inovex.de\/wp-content\/uploads\/ebpf-service-mesh-1024x576.png","twitter_creator":"@inovexgmbh","twitter_site":"@inovexgmbh","twitter_misc":{"Verfasst von":"Max Merz","Gesch\u00e4tzte Lesezeit":"34\u00a0Minuten","Written by":"Max Merz"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.inovex.de\/de\/blog\/ebpf-reduce-resource-usage-network-overhead\/#article","isPartOf":{"@id":"https:\/\/www.inovex.de\/de\/blog\/ebpf-reduce-resource-usage-network-overhead\/"},"author":{"name":"Max Merz","@id":"https:\/\/www.inovex.de\/de\/#\/schema\/person\/4d8b4a82f847d2d46f70c10debe0dca3"},"headline":"eBPF: Reduce Resource Usage and Network 
Overhead","datePublished":"2023-09-27T05:58:36+00:00","dateModified":"2023-09-27T09:09:05+00:00","mainEntityOfPage":{"@id":"https:\/\/www.inovex.de\/de\/blog\/ebpf-reduce-resource-usage-network-overhead\/"},"wordCount":6835,"commentCount":0,"publisher":{"@id":"https:\/\/www.inovex.de\/de\/#organization"},"image":{"@id":"https:\/\/www.inovex.de\/de\/blog\/ebpf-reduce-resource-usage-network-overhead\/#primaryimage"},"thumbnailUrl":"https:\/\/www.inovex.de\/wp-content\/uploads\/ebpf-service-mesh.png","keywords":["Backend","Cloud","DevOps","Docker","Kubernetes","Microservices"],"articleSection":["English Content","Infrastructure"],"inLanguage":"de","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/www.inovex.de\/de\/blog\/ebpf-reduce-resource-usage-network-overhead\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/www.inovex.de\/de\/blog\/ebpf-reduce-resource-usage-network-overhead\/","url":"https:\/\/www.inovex.de\/de\/blog\/ebpf-reduce-resource-usage-network-overhead\/","name":"eBPF in Service Meshes - inovex GmbH","isPartOf":{"@id":"https:\/\/www.inovex.de\/de\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.inovex.de\/de\/blog\/ebpf-reduce-resource-usage-network-overhead\/#primaryimage"},"image":{"@id":"https:\/\/www.inovex.de\/de\/blog\/ebpf-reduce-resource-usage-network-overhead\/#primaryimage"},"thumbnailUrl":"https:\/\/www.inovex.de\/wp-content\/uploads\/ebpf-service-mesh.png","datePublished":"2023-09-27T05:58:36+00:00","dateModified":"2023-09-27T09:09:05+00:00","description":"Master-Thesis summary about how to reduce the resource usage and network overhead in a service mesh with the kernel technology 
eBPF.","breadcrumb":{"@id":"https:\/\/www.inovex.de\/de\/blog\/ebpf-reduce-resource-usage-network-overhead\/#breadcrumb"},"inLanguage":"de","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.inovex.de\/de\/blog\/ebpf-reduce-resource-usage-network-overhead\/"]}]},{"@type":"ImageObject","inLanguage":"de","@id":"https:\/\/www.inovex.de\/de\/blog\/ebpf-reduce-resource-usage-network-overhead\/#primaryimage","url":"https:\/\/www.inovex.de\/wp-content\/uploads\/ebpf-service-mesh.png","contentUrl":"https:\/\/www.inovex.de\/wp-content\/uploads\/ebpf-service-mesh.png","width":1920,"height":1080,"caption":"The eBPF bee in a honeycomb illustration connected to other honeycombs like a network"},{"@type":"BreadcrumbList","@id":"https:\/\/www.inovex.de\/de\/blog\/ebpf-reduce-resource-usage-network-overhead\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.inovex.de\/de\/"},{"@type":"ListItem","position":2,"name":"eBPF: Reduce Resource Usage and Network Overhead"}]},{"@type":"WebSite","@id":"https:\/\/www.inovex.de\/de\/#website","url":"https:\/\/www.inovex.de\/de\/","name":"inovex GmbH","description":"","publisher":{"@id":"https:\/\/www.inovex.de\/de\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.inovex.de\/de\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"de"},{"@type":"Organization","@id":"https:\/\/www.inovex.de\/de\/#organization","name":"inovex GmbH","url":"https:\/\/www.inovex.de\/de\/","logo":{"@type":"ImageObject","inLanguage":"de","@id":"https:\/\/www.inovex.de\/de\/#\/schema\/logo\/image\/","url":"https:\/\/www.inovex.de\/wp-content\/uploads\/2021\/03\/inovex-logo-16-9-1.png","contentUrl":"https:\/\/www.inovex.de\/wp-content\/uploads\/2021\/03\/inovex-logo-16-9-1.png","width":1921,"height":1081,"caption":"inovex 
GmbH"},"image":{"@id":"https:\/\/www.inovex.de\/de\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/inovexde","https:\/\/x.com\/inovexgmbh","https:\/\/www.instagram.com\/inovexlife\/","https:\/\/www.linkedin.com\/company\/inovex","https:\/\/www.youtube.com\/channel\/UC7r66GT14hROB_RQsQBAQUQ"]},{"@type":"Person","@id":"https:\/\/www.inovex.de\/de\/#\/schema\/person\/4d8b4a82f847d2d46f70c10debe0dca3","name":"Max Merz","image":{"@type":"ImageObject","inLanguage":"de","@id":"https:\/\/www.inovex.de\/de\/#\/schema\/person\/image\/7a2b218eef91c85b703a38cb5f9c416e","url":"https:\/\/secure.gravatar.com\/avatar\/cff5b9c48ff75406f96eb91d6175b5998604050dc834ab6f4afc26779433a24f?s=96&d=retro&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/cff5b9c48ff75406f96eb91d6175b5998604050dc834ab6f4afc26779433a24f?s=96&d=retro&r=g","caption":"Max Merz"},"sameAs":["linkedin.com\/in\/max-merz"],"url":"https:\/\/www.inovex.de\/de\/blog\/author\/mmerz\/"}]}},"_links":{"self":[{"href":"https:\/\/www.inovex.de\/de\/wp-json\/wp\/v2\/posts\/46394","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.inovex.de\/de\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.inovex.de\/de\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.inovex.de\/de\/wp-json\/wp\/v2\/users\/362"}],"replies":[{"embeddable":true,"href":"https:\/\/www.inovex.de\/de\/wp-json\/wp\/v2\/comments?post=46394"}],"version-history":[{"count":6,"href":"https:\/\/www.inovex.de\/de\/wp-json\/wp\/v2\/posts\/46394\/revisions"}],"predecessor-version":[{"id":48988,"href":"https:\/\/www.inovex.de\/de\/wp-json\/wp\/v2\/posts\/46394\/revisions\/48988"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.inovex.de\/de\/wp-json\/wp\/v2\/media\/47363"}],"wp:attachment":[{"href":"https:\/\/www.inovex.de\/de\/wp-json\/wp\/v2\/media?parent=46394"}],"wp:term":[{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.inovex.de\/de\/wp-json\/wp\/v2\/ta
gs?post=46394"},{"taxonomy":"service","embeddable":true,"href":"https:\/\/www.inovex.de\/de\/wp-json\/wp\/v2\/service?post=46394"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.inovex.de\/de\/wp-json\/wp\/v2\/coauthors?post=46394"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}