# Istio Integration
Ambassador Edge Stack and Istio: edge proxy and service mesh together in one. The Ambassador Edge Stack is deployed at the edge of your network and routes incoming traffic to your internal services (aka "north-south" traffic). Istio is a service mesh for microservices, designed to add application-level (L7) observability, routing, and resilience to service-to-service traffic (aka "east-west" traffic). Both Istio and the Ambassador Edge Stack are built using Envoy.
Ambassador Edge Stack and Istio can be deployed together on Kubernetes. In this configuration, incoming traffic from outside the cluster is first routed through the Ambassador Edge Stack, which then routes the traffic to Istio-powered services. The Ambassador Edge Stack handles authentication, edge routing, TLS termination, and other traditional edge functions.
This allows the operator to have the best of both worlds: a high performance, modern edge service (Ambassador Edge Stack) combined with a state-of-the-art service mesh (Istio). While Istio has introduced a Gateway abstraction, the Ambassador Edge Stack still has a much broader feature set for edge routing than Istio. For more on this topic, see our blog post on API Gateway vs Service Mesh.
## Getting Ambassador Edge Stack Working With Istio
Getting the Ambassador Edge Stack working with Istio is straightforward. In this example, we'll use the `bookinfo` sample application from Istio.
- Install Istio on Kubernetes, following the default instructions (without using mutual TLS auth between sidecars).
- Next, install the Bookinfo sample application, following the instructions.
- Verify that the sample application is working as expected.
By default, the Bookinfo application uses the Istio ingress. To use the Ambassador Edge Stack, we need to:
- Install the Ambassador Edge Stack.
- Install a sample `Mapping` in the Ambassador Edge Stack by creating a YAML file named `httpbin.yaml` and pasting in the following contents:
```yaml
---
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: httpbin
spec:
  prefix: /httpbin/
  service: httpbin.org
  host_rewrite: httpbin.org
```
Then, apply it to the cluster with `kubectl`:
```shell
kubectl apply -f httpbin.yaml
```
The steps above do several things:

- It creates a Kubernetes service for the Ambassador Edge Stack, of type `LoadBalancer`. Note that if you're not deploying in an environment where `LoadBalancer` is a supported type (e.g., Minikube), you'll need to change this to a different type of service, e.g., `NodePort`.
- It creates a test route that will route traffic from `/httpbin/` to the public `httpbin.org` HTTP Request and Response service (which provides a useful endpoint that can be used for diagnostic purposes). In the Ambassador Edge Stack, Kubernetes resources such as the `Mapping` above are used for configuration. More commonly, you'll want to configure routes as part of your service deployment process, as shown in this more advanced example.
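If `LoadBalancer` isn't supported in your environment, the service type can be changed as noted above; a minimal sketch of what the alternative might look like (field names come from the standard Kubernetes Service API, and the `service: ambassador` selector is an assumption based on the default install):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ambassador
spec:
  type: NodePort          # instead of LoadBalancer, e.g., on Minikube
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    service: ambassador   # assumed label from the default Ambassador install
```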
You can check whether the two Ambassador Edge Stack services are running correctly (and obtain the LoadBalancer IP address once it is assigned, after a few minutes) by executing the following commands:
```shell
$ kubectl get services
NAME               TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)          AGE
ambassador         LoadBalancer   10.63.247.1     35.224.41.XX   8080:32171/TCP   11m
ambassador-admin   NodePort       10.63.250.17    <none>         8877:32107/TCP   12m
details            ClusterIP      10.63.241.224   <none>         9080/TCP         16m
kubernetes         ClusterIP      10.63.240.1     <none>         443/TCP          24m
productpage        ClusterIP      10.63.248.184   <none>         9080/TCP         16m
ratings            ClusterIP      10.63.255.72    <none>         9080/TCP         16m
reviews            ClusterIP      10.63.252.192   <none>         9080/TCP         16m

$ kubectl get pods
NAME                             READY   STATUS    RESTARTS   AGE
ambassador-2680035017-092rk      2/2     Running   0          13m
ambassador-2680035017-9mr97      2/2     Running   0          13m
ambassador-2680035017-thcpr      2/2     Running   0          13m
details-v1-3842766915-3bjwx      2/2     Running   0          17m
productpage-v1-449428215-dwf44   2/2     Running   0          16m
ratings-v1-555398331-80zts       2/2     Running   0          17m
reviews-v1-217127373-s3d91       2/2     Running   0          17m
reviews-v2-2104781143-2nxqf      2/2     Running   0          16m
reviews-v3-3240307257-xl1l6      2/2     Running   0          16m
```
Above we see that the external IP assigned to our LoadBalancer is 35.224.41.XX (XX is used to mask the actual value), and that all ambassador pods are running (the Ambassador Edge Stack relies on Kubernetes to provide high availability, and so there should be two small pods running on each node within the cluster).
You can test whether the Ambassador Edge Stack has been installed correctly by using the test route to `httpbin.org` to get the external cluster's origin IP from which the request was made:
```shell
$ curl -L 35.224.41.XX/httpbin/ip
{"origin": "35.192.109.XX"}
```
If you're seeing a similar response, then everything is working great!
(Bonus: if you want to use a little bit of awk magic to export the LoadBalancer IP to a variable `AMBASSADOR_IP`, you can type `export AMBASSADOR_IP=$(kubectl get services ambassador | tail -1 | awk '{ print $4 }')` and then use `curl -L $AMBASSADOR_IP/httpbin/ip`.)
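The awk pipeline above can be sanity-checked offline against a copy of the expected `kubectl get services ambassador` output (values taken from the listing earlier; no cluster required):

```shell
# Simulated `kubectl get services ambassador` output (header + one row).
sample='NAME         TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)          AGE
ambassador   LoadBalancer   10.63.247.1   35.224.41.XX   8080:32171/TCP   11m'

# Same pipeline as the export command: keep the last line, print column 4.
AMBASSADOR_IP=$(printf '%s\n' "$sample" | tail -1 | awk '{ print $4 }')
echo "$AMBASSADOR_IP"   # prints the EXTERNAL-IP column: 35.224.41.XX
```

Note that if the LoadBalancer IP has not been assigned yet, column 4 will contain `<pending>` instead of an address.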
- Now you are going to modify the `bookinfo` demo `bookinfo.yaml` manifest to include the necessary Ambassador Edge Stack `Mapping`. See below.
```yaml
---
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: productpage
spec:
  prefix: /productpage/
  rewrite: /productpage
  service: productpage:9080
---
apiVersion: v1
kind: Service
metadata:
  name: productpage
  labels:
    app: productpage
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: productpage
```
The configuration above implements an Ambassador Edge Stack mapping from the `/productpage/` URI to the Kubernetes `productpage` service running on port 9080 (`productpage:9080`). The `prefix` mapping URI is relative to the root of your Ambassador Edge Stack service, which is acting as the ingress point (exposed externally via port 80 because it is a LoadBalancer), e.g., `35.224.41.XX/productpage/`.
You can now apply this manifest from the root of the Istio GitHub repo on your local file system (taking care to wrap the apply with `istioctl kube-inject`):
```shell
kubectl apply -f <(istioctl kube-inject -f samples/bookinfo/platform/kube/bookinfo.yaml)
```
- Optionally, delete the Ingress controller from the `bookinfo.yaml` manifest by typing `kubectl delete ingress gateway`.
- Test the Ambassador Edge Stack by going to the IP of the Ambassador LoadBalancer you configured above, e.g., `35.224.41.XX/productpage/`. You can see the actual IP address for the Ambassador Edge Stack again by typing `kubectl get services ambassador`.
## Automatic Sidecar Injection
Newer versions of Istio support Kubernetes initializers to automatically inject the Istio sidecar. You don't need to inject the Istio sidecar into the pods of the Ambassador Edge Stack -- Ambassador's Envoy instance will automatically route to the appropriate service(s). Ambassador Edge Stack's pods are configured to skip sidecar injection, using an annotation as explained in the documentation.
## Istio Mutual TLS
Istio versions prior to 1.5 store their TLS certificates as Kubernetes secrets by default, so accessing them is a matter of YAML configuration changes. Istio 1.5 changed how secrets are handled; please contact us on Slack for more details.
- Load Istio's TLS certificates
Istio creates and stores its TLS certificates in Kubernetes secrets. In order to use those secrets, you can set up a `TLSContext` to read directly from Kubernetes:
```yaml
---
apiVersion: getambassador.io/v2
kind: TLSContext
metadata:
  name: istio-upstream
spec:
  secret: istio.default
  secret_namespacing: False
```
Please note that if you are using RBAC, you may need to reference the `istio` secret for your service account; e.g., if your service account is `ambassador`, then your target secret should be `istio.ambassador`.
- Configure the Ambassador Edge Stack to use this `TLSContext` when making connections to upstream services

The `tls` attribute in a `Mapping` configuration tells the Ambassador Edge Stack to use the `TLSContext` we created above when making connections to upstream services:

```yaml
---
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: productpage
spec:
  prefix: /productpage/
  rewrite: /productpage
  service: https://productpage:9080
  tls: istio-upstream
```

Note the `tls: istio-upstream`, which lets the Ambassador Edge Stack know which certificate to use when communicating with that service.
Ambassador Edge Stack will now use the certificate stored in the secret to originate TLS to Istio-powered services.
In the definition above we also have TLS termination enabled; please see the TLS termination tutorial or the Host CRD for more details.
### PERMISSIVE mTLS
Istio can be configured in either `PERMISSIVE` or `STRICT` mode for mTLS. `PERMISSIVE` mode allows services to opt in to mTLS, making the transition easier.

For service-to-service calls via the Istio proxy, Istio will automatically handle this mTLS opt-in when you configure a `DestinationRule`. However, since there is no Istio proxy running as a sidecar to the Ambassador Edge Stack, to do mTLS between the Ambassador Edge Stack and an Istio service in `PERMISSIVE` mode, we need to tell the service to listen for mTLS traffic by setting `alpn_protocols: "istio"` in the `TLSContext`:
```yaml
---
apiVersion: getambassador.io/v2
kind: TLSContext
metadata:
  name: istio-upstream
spec:
  secret: istio.default
  secret_namespacing: False
  alpn_protocols: "istio"
```
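For reference, the Istio-side mTLS opt-in mentioned above is expressed with a `DestinationRule`; a minimal sketch (the `productpage` host is an assumption carried over from the Bookinfo example):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: productpage
spec:
  host: productpage
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL   # use Istio-managed certificates for calls to this service
```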
## Istio RBAC Authorization
While the `istio.default` secret works for mutual TLS alone, to interoperate with Istio RBAC authorization the Ambassador Edge Stack needs an Istio certificate that matches the service account its deployment is using (by default, the `ambassador` service account).

The `istio.default` secret is for the `default` service account, as can be seen in the certificate's Subject Alternative Name: `spiffe://cluster.local/ns/default/sa/default`.

So when the Ambassador Edge Stack uses this certificate while running under the `ambassador` service account, Istio RBAC will not work as expected.

Fortunately, Istio automatically creates a secret for each service account, including the `ambassador` service account. These secrets are named `istio.{service account name}`.

So if your Ambassador Edge Stack deployment uses the `ambassador` service account, the solution is simply to use the `istio.ambassador` secret instead of the `istio.default` secret.
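Concretely, the `TLSContext` shown earlier would then reference that secret instead; a sketch, assuming the default `ambassador` service account:

```yaml
---
apiVersion: getambassador.io/v2
kind: TLSContext
metadata:
  name: istio-upstream
spec:
  secret: istio.ambassador
  secret_namespacing: False
```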
## Tracing Integration
Istio provides a tracing mechanism based on Zipkin, which is one of the drivers supported by the Ambassador Edge Stack. To achieve end-to-end tracing, it is possible to integrate the Ambassador Edge Stack with Istio's Zipkin.
First, confirm that Istio's Zipkin is up and running in the `istio-system` namespace:
```shell
$ kubectl get service zipkin -n istio-system
NAME     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
zipkin   ClusterIP   10.102.146.104   <none>        9411/TCP   7m
```
If Istio's Zipkin is up and running in the `istio-system` namespace, add a `TracingService` pointing to it:
```yaml
---
apiVersion: getambassador.io/v2
kind: TracingService
metadata:
  name: tracing
spec:
  service: "zipkin.istio-system:9411"
  driver: zipkin
  config: {}
```
Note: we are using the DNS entry `zipkin.istio-system` as well as the port that our service is running on, in this case `9411`. Please see Distributed Tracing for more details on tracing configuration.
## Monitoring/Statistics Integration
Istio also provides a Prometheus service, an open-source monitoring and alerting system, which is supported by the Ambassador Edge Stack as well. It is possible to integrate the Ambassador Edge Stack with Istio's Prometheus to have all statistics and monitoring in a single place.
First, we need to change our Ambassador Edge Stack Deployment to use the Prometheus StatsD Exporter as its sidecar. Do this by applying the ambassador-rbac-prometheus.yaml:
```shell
$ kubectl apply -f https://www.getambassador.io/yaml/ambassador/ambassador-rbac-prometheus.yaml
```
This YAML changes the StatsD container definition in our Deployment to use the Prometheus StatsD Exporter as a sidecar:
```yaml
- name: statsd-sink
  image: datawire/prom-statsd-exporter:0.6.0
  restartPolicy: Always
```
Next, a Service needs to be created pointing to our Prometheus StatsD Exporter sidecar:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: ambassador-monitor
  labels:
    app: ambassador
    service: ambassador-monitor
spec:
  type: ClusterIP
  ports:
  - port: 9102
    name: prometheus-metrics
  selector:
    service: ambassador
```
Now we need to add a scrape configuration to Istio's Prometheus so that it can pull data from our Ambassador Edge Stack. This is done by applying the new ConfigMap:
```shell
$ kubectl apply -f https://www.getambassador.io/yaml/ambassador/ambassador-istio-configmap.yaml
```
This ConfigMap YAML changes the `prometheus` ConfigMap in the `istio-system` namespace and adds the following:
```yaml
- job_name: 'ambassador'
  static_configs:
  - targets: ['ambassador-monitor.default:9102']
    labels: {'application': 'ambassador'}
```
Note: this assumes the `ambassador-monitor` service is running in the `default` namespace.
Note: you can also add the scrape configuration by hand, using `kubectl edit` or the dashboard.
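For orientation, the `ambassador` scrape job lives under the top-level `scrape_configs` key of the Prometheus configuration; a minimal sketch of the surrounding structure (assuming the conventional `prometheus.yml` layout; the interval shown is a typical default, not taken from Istio's ConfigMap):

```yaml
global:
  scrape_interval: 15s   # typical default; your ConfigMap may differ
scrape_configs:
- job_name: 'ambassador'
  static_configs:
  - targets: ['ambassador-monitor.default:9102']
    labels: {'application': 'ambassador'}
```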
After adding the scrape configuration, Istio's Prometheus pod needs to be restarted:
```shell
$ export PROMETHEUS_POD=`kubectl get pods -n istio-system | grep prometheus | awk '{print $1}'`
$ kubectl delete pod $PROMETHEUS_POD -n istio-system
```
## Grafana Dashboard
Istio provides a Grafana dashboard service as well, and it is possible to import an Ambassador Edge Stack dashboard into it to monitor the statistics provided by Prometheus. We're going to use Alex Gervais' template, available on Grafana's website under entry 4698, as a starting point.
First, let's start the port-forwarding for Istio's Grafana service:
```shell
$ kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=grafana -o jsonpath='{.items[0].metadata.name}') 3000:3000 &
```
Now, open Grafana by accessing: http://localhost:3000/
To install the Ambassador Edge Stack Dashboard:
- Click on Create
- Select Import
- Enter number 4698
Now we need to adjust the Dashboard Port to reflect our Ambassador Edge Stack configuration:
- Open the Imported Dashboard
- Click on Settings in the Top Right corner
- Click on Variables
- Change the port to 80 (according to the ambassador service port)
Next, adjust the Dashboard Registered Services metric:
- Open the Imported Dashboard
- Find Registered Services
- Click on the down arrow and select Edit
- Change the Metric to `envoy_cluster_manager_active_clusters{job="ambassador"}`
Now let's save the changes:
- Click on Save Dashboard in the Top Right corner
## Questions?
We’re here to help. If you have questions, join our Slack or contact us.