Effective Management of APIs and the Edge When Adopting Kubernetes
As you adopt Kubernetes, the requirements for your edge change. You now have teams working on multiple services, each with different requirements. How can you make sure your edge is Kubernetes-ready?
As organizations building cloud native applications adopt Kubernetes, they typically begin by decomposing single applications into multiple microservices. This architectural shift requires a corresponding change in team management, and it also complicates the role of the edge (the boundary between internal services and end users). In a microservices architecture, each service is managed by an individual team that is responsible for that service from development to production. This responsibility extends from the edge of the system, where user requests arrive, through the service’s business logic and down into the associated messaging and data store schema.
In most cases, organizations will deploy an API gateway to manage the edge. With Kubernetes and microservices, your API gateway must address two primary challenges:
- How to scale the people, process, and organizational management of hundreds of services and the associated APIs
- How to support a flexible and broad range of service architectures, protocols, and configurations
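To make the first challenge concrete, consider a sketch of per-team, self-service route ownership. With Kubernetes, each team can declare the routing for its own service as a resource in its own namespace rather than filing tickets against a centrally managed gateway configuration. The example below uses a standard Kubernetes Ingress resource for illustration; the hostname, namespace, and service names are hypothetical:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: orders-routes      # owned and versioned by the orders team,
  namespace: orders        # not by a central gateway team
spec:
  rules:
    - host: api.example.com    # hypothetical shared API hostname
      http:
        paths:
          - path: /orders      # this team claims only its own path prefix
            pathType: Prefix
            backend:
              service:
                name: orders   # the team's own Service
                port:
                  number: 8080
```

Because each team owns a resource like this, edge configuration scales with the number of teams instead of funneling hundreds of route changes through a single operations queue.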
In our experience, there are three main strategies that engineering teams can apply to address these challenges and effectively manage the edge: adopting a multi-edge implementation; extending an existing API gateway; or deploying a self-service edge stack.
In this webinar, we’ll talk through these challenges and strategies in detail. We’ll explain why we think self-service and consolidated functionality are the two pillars of an effective Kubernetes API gateway, and we will explore how you can use these concepts to reduce SRE toil and accelerate developer productivity at the edge.