
Lagom kubernetes setup considerations

February 27th, 2019

Problem

Running and testing services in the Lagom development environment, with its support for all the needed tooling, is a smooth and straightforward process. Everything is prepared and ready to use out of the box.

Deploying and running it in production requires a complete environment setup and is not as straightforward at first (at least it was not for me :)).

Lagom, with the help of Lightbend orchestration, has out-of-the-box support for deploying and running on Kubernetes.

Kubernetes cluster setup depends on the chosen Kubernetes implementation and is out of scope for this blog. Personally, I’m using Amazon EKS.

So the question is: once you have a Kubernetes cluster running, what else is required to deploy and run your Lagom microservice system?

Solution

Kubernetes basics

I will assume you have basic knowledge of Kubernetes; if not, I strongly recommend going through the official Kubernetes documentation first.

Kubernetes cluster management access

A Kubernetes cluster is managed via the kubectl CLI tool, which needs to be configured to access a specific Kubernetes cluster.

When you have multiple Kubernetes clusters running (test, production #1, production #2, …), switching between kubectl configurations is error-prone and can end with operations performed on the wrong cluster. To avoid this, I tend to use a bastion-host-based solution (see the sketch below).

Depending on the Kubernetes cluster's network location, you could:

  • dedicate a bastion host per Kubernetes cluster
  • dedicate a bastion host OS user per Kubernetes cluster
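
As a minimal sketch of the bastion host OS user variant (all names here are illustrative), each OS user gets its own kubeconfig, so a login shell can only ever talk to “its” cluster:

# create one OS user per cluster on the bastion host
sudo useradd -m k8s-prod-1

# give that user a kubeconfig pointing only at the prod-1 cluster
sudo mkdir -p /home/k8s-prod-1/.kube
sudo cp prod-1-kubeconfig /home/k8s-prod-1/.kube/config
sudo chown -R k8s-prod-1: /home/k8s-prod-1/.kube

# after logging in as k8s-prod-1, verify which cluster kubectl targets
kubectl config current-context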

Kubernetes namespace organization

Kubernetes uses namespaces to support multiple virtual clusters on one physical cluster. Namespaces can also be used for multi-tenant deployments, but I like to avoid that to keep the setup as simple as possible.

By default, Kubernetes comes with 3 preconfigured namespaces: default, kube-public, kube-system.

I use kube-system for deploying and running Kubernetes resources unrelated to my Lagom system.

For the Lagom system you could use default, but I like to create a separate namespace that groups my Lagom system services into one logical group.

Example namespace resource configuration:


apiVersion: v1
kind: Namespace
metadata:
  name: lagom
  labels:
    name: lagom
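
Assuming the above is saved as lagom-namespace.yaml, it can be created with:

kubectl apply -f lagom-namespace.yaml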

Helm

Helm is the Kubernetes package manager. I see it as an APT-like tool for Kubernetes.

For me, the main benefits of Helm are:

  • using different Helm repositories to get access to official and community-created Kubernetes tools (you will see later which ones), simplifying their configuration and deployment
  • templating Lagom Kubernetes resources to simplify their configuration and deployment when you have a high count of services (see the sketch below)
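
As an illustration of the templating point, a minimal (hypothetical) chart snippet that stamps out one Kubernetes service resource per Lagom service listed in values.yaml — the service names and chart layout are my assumptions, not something generated by Lightbend orchestration:

# values.yaml
services:
  - name: account-service
  - name: transfer-service

# templates/services.yaml
{{- range .Values.services }}
apiVersion: v1
kind: Service
metadata:
  name: {{ .name }}
  namespace: lagom
spec:
  selector:
    app: {{ .name }}
  ports:
  - port: 80
    targetPort: 9000
---
{{- end }}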

Lagom kubernetes resources

Lightbend orchestration tooling generates, based on the provided configuration, the Lagom Kubernetes resources (deployment, service, ingress) required to deploy a Lagom service on Kubernetes. Check the official documentation for more details.

Lagom service call access control

Service API calls can be categorized, depending on their access control requirements, into:

  • internal calls
  • external calls

Internal calls are made by trusted callers (service-to-service calls within the same Kubernetes cluster) and therefore do not require any access control. The communication does not need to be encrypted and the caller does not require authentication (simple caller identification could be used if required).

In a Kubernetes cluster, service-to-service communication is done by directly accessing the service POD IPs and port. When one service wants to connect to another, the caller uses a Lightbend orchestration based implementation of the Lagom ServiceLocator to look up, via the Kubernetes API, the called service's POD IPs and port. Lagom services are located based on the service name specified in the API descriptor. The service name is deployed as a Kubernetes service resource.


named("account-service")

Kubernetes resource names are restricted, and therefore service names are restricted too. So it is important to follow these restrictions when defining the service name in the API descriptor!
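
For context, a minimal sketch of a Lagom (Scala DSL) API descriptor using that service name — the AccountService trait and its single call are illustrative, not taken from a real system:

import akka.NotUsed
import com.lightbend.lagom.scaladsl.api.{Descriptor, Service, ServiceCall}
import com.lightbend.lagom.scaladsl.api.transport.Method

trait AccountService extends Service {
  def getAccount(id: String): ServiceCall[NotUsed, String]

  override final def descriptor: Descriptor = {
    import Service._
    // "account-service" becomes the Kubernetes service resource name,
    // so it must be a valid DNS label (lowercase alphanumerics and '-')
    named("account-service").withCalls(
      restCall(Method.GET, "/api/accounts/:id", getAccount _)
    )
  }
}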

External calls are made from the outside (from the Internet) by untrusted callers and should require communication encryption and caller authentication.

In Kubernetes, external access to a service is configured using an ingress resource.

One Lagom service can have one or more ingress resources deployed, configuring which ACLs are used.

In order for the ingress resource to work, the cluster must have an ingress controller deployed and running.

The most widely used ingress controller implementation is the NGINX ingress controller, mainly because it is supported and maintained by the Kubernetes project itself and can be deployed on almost all Kubernetes implementations.

The NGINX ingress controller manages the entire lifecycle of NGINX: it subscribes, via the Kubernetes API, to ingress resource events (ADD/REMOVE) and, based on them, automatically updates the NGINX location configuration at runtime.

The NGINX ingress controller can be deployed using the NGINX ingress controller Helm chart. Be sure to specify the namespace (kube-system) and the ingress controller name:

--namespace kube-system --set controller.ingressClass=nginx-external
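
A complete install command might look like this (a sketch assuming the Helm 2 CLI and the then-current stable/nginx-ingress chart; the release name is illustrative):

helm install stable/nginx-ingress --name external-nginx-ingress-controller --namespace kube-system --set controller.ingressClass=nginx-external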

If multiple ingress controllers are running, an ingress resource can be configured to target a specific one by specifying the ingress controller name in an ingress resource annotation:

kubernetes.io/ingress.class: nginx-external

Ingress resource example:

apiVersion: "extensions/v1beta1"
kind: Ingress
metadata:
   name: "account-service"
   namespace: fundtransfersystem
   annotations:
     kubernetes.io/ingress.class: "nginx-external"
     nginx.ingress.kubernetes.io/ssl-redirect: "false"
     ingress.kubernetes.io/ssl-redirect: "false"
spec:
   rules:
   - http:
     paths:
     - path: "/api/external/accounts"
       backend:
       serviceName: "account-service"
       servicePort: 80

So the ingress resource is used to expose a service for external access. If a service requires both internal and external access, the two need to be differentiated.

This can be done by specifying different URL-based contexts.

For example:


/api/accounts # with internal context

/api/external/accounts # with external context

For /api/external/accounts an ingress resource must be deployed to allow external access, while for /api/accounts no ingress resource is required because it is only accessed internally.
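
In the API descriptor this split could look like the following sketch (imports as in the earlier descriptor example; the second call and its name are illustrative):

trait AccountService extends Service {
  def getAccount(id: String): ServiceCall[NotUsed, String]
  def getAccountExternal(id: String): ServiceCall[NotUsed, String]

  override final def descriptor: Descriptor = {
    import Service._
    named("account-service").withCalls(
      // internal context: no ingress resource is deployed for this path
      restCall(Method.GET, "/api/accounts/:id", getAccount _),
      // external context: exposed through the ingress resource shown above
      restCall(Method.GET, "/api/external/accounts/:id", getAccountExternal _)
    )
  }
}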

Kubernetes SSL/TLS encryption support

In Kubernetes, SSL/TLS is configured using specific annotations in the ingress resource. Based on this configuration, the ingress controller configures and implements SSL/TLS termination.

By this definition we would need to apply SSL/TLS configuration to every externally accessible service's ingress resource, which would not be convenient to maintain. In most use cases the same SSL/TLS configuration (a single SSL/TLS termination point) is used for accessing all externally accessible services.

To resolve this we could use one of these two solutions (that I’m aware of):

  1. Use and configure the NGINX controller with a default SSL configuration
  2. Deploy an additional ingress controller dedicated to SSL/TLS termination

Solution #1 is explained in the referenced documentation.

Solution #2 is to deploy one extra ingress controller dedicated to SSL/TLS termination, together with a “singleton” ingress resource that configures the SSL/TLS termination and forwards all traffic to the already created NGINX controller. Singleton in this context means that only one such ingress resource is deployed.

With this solution we have “extracted” the SSL/TLS termination point from the already created ingress controller, thereby avoiding SSL/TLS configuration in every service's ingress resource.

For the SSL/TLS-dedicated ingress controller we could use:

  • the NGINX ingress controller
  • depending on the Kubernetes implementation used, a cloud provider specific ingress controller. I use the Amazon EKS ALB ingress controller, which leverages the Amazon Application Load Balancer.

If your Kubernetes implementation allows only the NGINX ingress controller, solution #2, in general, does not make sense, and I would suggest going with solution #1.

When using a cloud provider, a provider-specific ingress controller brings the advantage of securing external access outside of your Kubernetes environment, as opposed to the NGINX ingress controller, which runs inside the Kubernetes cluster.

Example of deploying the Amazon EKS ALB ingress controller using the aws-alb-ingress-controller Helm chart:

 
helm install incubator/aws-alb-ingress-controller --name=external-alb-ingress-controller --namespace kube-system --set autoDiscoverAwsRegion=true --set autoDiscoverAwsVpcID=true --set clusterName=myK8s 

AWS ALB singleton ingress resource:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "ssl-alb"
  namespace: kube-system
  labels:
    app: "sslAlb"
  annotations:
    kubernetes.io/ingress.class: "alb"
    alb.ingress.kubernetes.io/scheme: "internet-facing"
    alb.ingress.kubernetes.io/target-type: "instance"
    alb.ingress.kubernetes.io/security-groups: my-security-group-ids, ...
    alb.ingress.kubernetes.io/subnets: my-vpc-subnets, ...
    alb.ingress.kubernetes.io/certificate-arn: my-acm-certificate-arn
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":80,"HTTPS": 443}]'
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
    alb.ingress.kubernetes.io/healthcheck-path: "/"
    alb.ingress.kubernetes.io/success-codes: "200,404"
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: ssl-redirect
          servicePort: use-annotation
      - path: /*
        backend:
          serviceName: "external-nginx-ingress-controller-controller"
          servicePort: 80

Solution #1: (diagram)

Solution #2: (diagram)

External access authentication

Different authentication methods can be applied (HTTP basic auth, JWT, mutual SSL, …).
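
As one example, a minimal sketch of HTTP basic auth enforced by the NGINX ingress controller (the annotations are standard nginx-ingress ones; the secret name and namespace are illustrative). First create a secret holding an htpasswd file, then reference it from the service's ingress resource:

# "auth" is a local htpasswd file, e.g. created with: htpasswd -c auth myuser
kubectl create secret generic basic-auth --from-file=auth --namespace lagom

metadata:
  annotations:
    nginx.ingress.kubernetes.io/auth-type: "basic"
    nginx.ingress.kubernetes.io/auth-secret: "basic-auth"
    nginx.ingress.kubernetes.io/auth-realm: "Authentication Required"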

Lagom access to “external services” (Cassandra and Kafka)

Lagom services require access to Cassandra (or another journal store) and, depending on the service use case, optionally access to Kafka. In Lagom these fall into the category of “external services”.

Cassandra and Kafka can be deployed, depending on your preferences:

  • in the Kubernetes cluster
  • on dedicated hosts
  • as SaaS

I personally use dedicated hosts for these reasons:

  • prior to running the Lagom system on Kubernetes, I was running it on Lightbend ConductR, where the recommended Cassandra and Kafka deployment was on dedicated hosts. When migrating the Lagom system to Kubernetes it was not possible to migrate Cassandra and Kafka because of the long downtime that would have been required
  • when I started with Lagom there were not many SaaS options available

Lagom, with support from Lightbend orchestration, uses the Kubernetes DNS SRV method for resolving external service endpoints.

In Kubernetes, DNS SRV records are generated from Kubernetes service resources. For external services, the service resource abstracts the external service's access details and, by that, its deployment type.

If the external services are deployed inside the Kubernetes cluster, Cassandra and/or Kafka Kubernetes service resources will be deployed anyway, and the DNS SRV records will be generated from them.

If the external services are deployed outside of the Kubernetes cluster (dedicated hosts or SaaS), a Kubernetes headless service can be used to configure access to them. A headless service, like a “regular” Kubernetes service resource, generates DNS SRV records.

Example of a Cassandra headless service resource (with the matching Endpoints resource):

apiVersion: v1
kind: Service
metadata:
  name: cassandra
  namespace: lagom
spec:
  clusterIP: None # headless service: DNS resolves directly to the endpoint IPs
  ports:
  - name: "cql"
    protocol: "TCP"
    port: 9042
    targetPort: 9042
    nodePort: 0

---
apiVersion: v1
kind: Endpoints
metadata:
  name: cassandra
  namespace: lagom
subsets:
  - addresses:
      - ip: 10.0.1.85
      - ip: 10.0.2.57
      - ip: 10.0.3.106
    ports:
      - name: "cql"
        port: 9042

DNS SRV record example:

_cql._tcp.cassandra.lagom.svc.cluster.local

Lightbend orchestration external service configuration (mapping the Lagom external service name to the DNS SRV name):

--external-service "cas_native=_cql._tcp.cassandra.lagom.svc.cluster.local"
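
To verify that the SRV record resolves as expected from inside the cluster, a throwaway pod with DNS tooling can be used (a sketch; the dnsutils image is just one option):

kubectl run -it --rm dnsutils --image=tutum/dnsutils --restart=Never -- dig SRV _cql._tcp.cassandra.lagom.svc.cluster.local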

Example of a Kafka headless service resource (with the matching Endpoints resource):

apiVersion: v1
kind: Service
metadata:
  name: kafka
  namespace: lagom
spec:
  clusterIP: None # headless service: DNS resolves directly to the endpoint IPs
  ports:
  - name: "broker"
    protocol: "TCP"
    port: 9092
    targetPort: 9092
    nodePort: 0

---
apiVersion: v1
kind: Endpoints
metadata:
  name: kafka
  namespace: lagom
subsets:
  - addresses:
      - ip: 10.0.1.85
      - ip: 10.0.2.57
      - ip: 10.0.3.106
    ports:
      - name: "broker"
        port: 9092

DNS SRV record example:

_broker._tcp.kafka.lagom.svc.cluster.local

Lightbend orchestration external service configuration (mapping the Lagom external service name to the DNS SRV name):

--external-service "kafka_native=_broker._tcp.kafka.lagom.svc.cluster.local"

Hope you found this useful. Please share your feedback in the form of a comment or a like. Thanks!
