(Note: This is part two of a two-part series on Amazon EKS. ICYMI, here's a comprehensive overview to get you up to speed on EKS: “Ten things to know about Kubernetes on AWS”.)

By this point, you probably have a good idea about the moving parts of Kubernetes on AWS. Now, let's tackle a bigger question: how do its benefits help you?

Three great benefits we found with Amazon EKS

Reduced maintenance overhead

AWS automates Kubernetes cluster spin-up and management well, which lets EKS customers focus on their business and applications instead of spending precious cycles on Kubernetes cluster provisioning and maintenance.
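
To give a feel for how little there is to manage, here is a minimal sketch of provisioning an EKS control plane with boto3. The region, cluster name, role ARN, subnet IDs, and security group ID are all placeholders for resources in your own account:

```python
# A minimal sketch, assuming an existing VPC with subnets in two AZs and an
# IAM service role that EKS can assume. All IDs and ARNs are placeholders.
import boto3

eks = boto3.client("eks", region_name="us-east-1")

eks.create_cluster(
    name="demo-cluster",                                        # hypothetical name
    roleArn="arn:aws:iam::123456789012:role/eks-service-role",  # placeholder ARN
    resourcesVpcConfig={
        "subnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"],    # >= 2 AZs required
        "securityGroupIds": ["sg-cccc3333"],
    },
)

# Block until AWS has stood up the master nodes for you.
eks.get_waiter("cluster_active").wait(name="demo-cluster")
print(eks.describe_cluster(name="demo-cluster")["cluster"]["status"])  # -> ACTIVE
```

Once the waiter returns, AWS is running (and patching, and failing over) the masters; all that's left is to attach worker nodes.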

Integration with AWS services

EKS does not force customers to migrate all workloads to AWS. With EKS, you can still spread a workload across multiple clouds, and migrating from a self-hosted Kubernetes installation to Amazon EKS typically requires no significant changes to your existing system. But there's more to it than that, because EKS integrates well with many common Amazon services:

  • VPC and availability zones:  EKS supports multi-AZ deployment of worker nodes for high availability
  • EKS load-balances traffic for services:  You can use AWS Network Load Balancers, AWS Application Load Balancers, and Classic Elastic Load Balancers
  • Route 53:  EKS supports assigning Route 53 DNS records to any service running in the cluster
  • Autoscaling:  Worker nodes are deployed in an Auto Scaling group, i.e., the cluster can automatically scale up and down based on load or on any CloudWatch metric (see the scaling-policy sketch after this list)
  • AWS CloudWatch:  All worker nodes are monitored with CloudWatch, like any other EC2 instance
  • AWS IAM:  Via the open-source Heptio Authenticator, you can use IAM roles and policies to authenticate users to EKS and configure fine-grained access to any Kubernetes namespaces created within it
  • AWS EBS:  EBS volumes are used for Kubernetes persistent storage
  • AWS databases:  This is a big one: using RDS, DynamoDB, ElastiCache, Redshift, and the other AWS databases from EKS means that in many cases you can keep only stateless components in EKS while running databases separately on AWS. This approach provides more flexible failover paths for EKS-deployed components and at the same time reduces maintenance overhead
  • Elastic Network Interfaces (ENI):  Can be used for EKS Kubernetes cluster networking via the CNI plugin
  • AWS CloudTrail:  Records any actions performed on your EKS clusters via the AWS Console, SDK, or REST API
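
As promised above, here is a hedged sketch of the autoscaling integration: a target-tracking scaling policy attached to the Auto Scaling group that holds the worker nodes. The group name and the 60% CPU target are assumptions for illustration:

```python
# Sketch: attach a target-tracking scaling policy to the Auto Scaling group
# behind the EKS worker nodes. "demo-cluster-workers" is a placeholder name.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="demo-cluster-workers",   # hypothetical ASG name
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        # Grow or shrink the worker fleet to keep average CPU around 60%.
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": 60.0,
    },
)
```

Any other CloudWatch metric can drive the same mechanism via a customized metric specification.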

Community

The AWS team continuously contributes to Kubernetes as well as to several related community projects, such as Heptio Authenticator. Additionally, EKS supports many of the open-source solutions built by the Kubernetes community.

Challenges to look out for with Amazon EKS

As AWS EKS runs entirely on AWS VPC and EC2, it inherits the security measures of those services. For example, you can use your own VPC security groups and network ACLs to restrict network access to your EKS clusters. Web application firewall and DDoS protection services such as AWS WAF and AWS Shield can be applied to your EKS installation in the same way as to other systems hosted on Amazon. Other things you should look out for:

  • User management and RBAC.  When an Amazon EKS cluster is created, the IAM user that creates the cluster is added to the Kubernetes RBAC as the administrator. It is then the responsibility of this EKS/RBAC administrator to add more users and set up access restrictions for them (a sketch follows this list).
  • IAM root user.  It is possible to create an EKS cluster under your root AWS user, but it is then impossible to authenticate to it; in practice, EKS clusters are only usable when created under a non-root IAM user.
  • Private Kubernetes cluster with EKS.  When deploying Kubernetes on EC2 hosts without EKS, it is possible to set up an internal load balancer and a VPN around the K8s master and worker nodes so that kubectl access to the cluster is only possible via the VPN. AWS EKS supports this and recommends deploying worker nodes to private subnets for better security.
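
To illustrate the first point, here is a minimal sketch of how the cluster-creating administrator can map an additional IAM user into Kubernetes RBAC by patching the aws-auth ConfigMap. The user ARN and username are hypothetical, and kubectl is assumed to already be configured for the creating user:

```python
# Sketch under assumptions: grant an extra IAM user access to the cluster by
# patching the aws-auth ConfigMap in kube-system. The ARN is a placeholder.
from kubernetes import client, config

config.load_kube_config()  # uses the kubeconfig of the cluster creator
v1 = client.CoreV1Api()

map_users = """\
- userarn: arn:aws:iam::123456789012:user/jane
  username: jane
  groups:
    - system:masters
"""

v1.patch_namespaced_config_map(
    name="aws-auth",
    namespace="kube-system",
    body={"data": {"mapUsers": map_users}},
)
```

Here jane is mapped to system:masters (full admin); in practice you would bind less privileged users to a custom group and restrict it with RBAC roles per namespace.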

Here are three other important security-related gotchas to bear in mind:

  • IAM permissions. We noticed that an IAM user with the AdministratorAccess managed policy attached cannot use AWS EKS until additionally granted the eks:* permissions. This behavior differs from all other AWS services, for which admin users with the AdministratorAccess permission (“Action”: “*”, “Resource”: “*”) are granted full access automatically.
  • The heptio-authenticator binary download is very slow. We experienced 10 KB/s download speeds from the AWS-recommended endpoints serving heptio-authenticator. In all likelihood, this is caused by increased activity around EKS and will be fixed soon. For the moment, downloading heptio-authenticator accounts for half the time it takes to spin up an AWS EKS cluster from scratch ;)
  • Authentication via heptio-authenticator. EKS relies on an alpha Kubernetes feature that runs an external command to obtain authentication credentials (note the apiVersion: client.authentication.k8s.io/v1alpha1 line in the example kubectl config on the AWS documentation page). We haven’t encountered any issues with this alpha feature after using it in production for some time, so we are comfortable recommending it. A sketch of what the authenticator does under the hood follows.
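
For the curious, here is a rough sketch, based on our understanding rather than official AWS code, of what heptio-authenticator does under the hood: it presigns an STS GetCallerIdentity request bound to the cluster name and wraps the URL into a bearer token the EKS control plane can verify. The cluster name and region are placeholders:

```python
# Hedged sketch of heptio-authenticator's token generation. Not the official
# implementation -- an approximation of the mechanism for illustration.
import base64
import re

import boto3
from botocore.signers import RequestSigner


def eks_bearer_token(cluster_name: str, region: str) -> str:
    session = boto3.session.Session()
    sts = session.client("sts", region_name=region)
    signer = RequestSigner(
        sts.meta.service_model.service_id,
        region,
        "sts",
        "v4",
        session.get_credentials(),
        session.events,
    )
    # The x-k8s-aws-id header binds the token to one specific cluster.
    params = {
        "method": "GET",
        "url": f"https://sts.{region}.amazonaws.com/"
               "?Action=GetCallerIdentity&Version=2011-06-15",
        "body": {},
        "headers": {"x-k8s-aws-id": cluster_name},
        "context": {},
    }
    url = signer.generate_presigned_url(
        params, region_name=region, expires_in=60, operation_name=""
    )
    encoded = base64.urlsafe_b64encode(url.encode()).decode()
    return "k8s-aws-v1." + re.sub(r"=*$", "", encoded)  # strip base64 padding


print(eks_bearer_token("demo-cluster", "us-east-1"))
```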

Deciding if EKS is right for you

Now that we have a good idea of what to anticipate in terms of pluses and minuses, let's look at what you need to take into consideration when planning an implementation.

How much does Amazon EKS cost?

As of this writing, EKS is quite affordable, especially for medium and large-scale workloads. Here are the four main items to take into account when forecasting your AWS EKS costs (a back-of-the-envelope model follows the list):

  1. You pay $0.20 per hour for each Amazon EKS cluster (control plane) that you create, which comes to roughly $150/month. In terms of infrastructure costs, this is almost the same as running three Kubernetes master hosts on EC2 behind a load balancer, but at this price AWS also shoulders the heavy lifting of managing the master nodes, such as security upgrades and failover.
  2. In EKS, you'll also pay the usual hourly price for any EC2 instances, EBS volumes and ELBs that you create to run your Kubernetes worker nodes and apps.
  3. As with any other AWS service, EKS only charges you for what you use, as you use it; there are no minimum fees and no upfront commitments. For example, if you deploy an EKS cluster and run your workload on it for 24 hours, you will only be charged for those 24 hours. There is also the option to stop your EKS worker nodes when you don’t need them, saving costs without losing the data on those nodes.
  4. You may use reserved instances to save costs in the long term, or spot instances to save costs in the short term if your workload can tolerate one or more instances occasionally going down when the spot price rises above your spot bid.
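
To make the forecast concrete, here is a back-of-the-envelope model pulling the four items together. All prices are illustrative assumptions, roughly in line with us-east-1 on-demand rates at the time of writing; check the AWS pricing pages before relying on any of them:

```python
# Back-of-the-envelope EKS cost model. Every rate below is an assumption
# for illustration, not a current AWS list price.
HOURS_PER_MONTH = 730

control_plane = 0.20 * HOURS_PER_MONTH      # EKS control-plane fee (item 1)
workers = 5 * 0.096 * HOURS_PER_MONTH       # five m5.large worker nodes (item 2)
ebs = 5 * 100 * 0.10                        # 100 GB gp2 per node, $0.10/GB-month
load_balancer = 0.025 * HOURS_PER_MONTH     # one classic ELB for ingress

total = control_plane + workers + ebs + load_balancer
print(f"~${total:,.0f}/month before any reserved/spot savings (items 3 and 4)")
```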

The design rules for saving costs with EKS are the same as for designing a system on ‘raw’ EC2, with one key difference: the Kubernetes platform can automatically fail over any services deployed to it, and it does this well!

As an example, if you use Kubernetes for scaling stateless services and you have properly configured the K8s ‘anti-affinity rules’ and enough replicas for each service, you can replace every Kubernetes worker instance with a different one (a different instance type, different capacity, or a switch from on-demand to spot) without downtime.
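
Here is a minimal sketch of such a setup: a three-replica Deployment with a pod anti-affinity rule, so the scheduler spreads the replicas across worker nodes and any single node can be drained and replaced safely. The app name and image are placeholders:

```python
# Sketch: a 3-replica stateless Deployment whose anti-affinity rule forbids
# two replicas on the same node. Names and image are hypothetical.
from kubernetes import client, config

config.load_kube_config()

labels = {"app": "rest-api"}  # placeholder app label

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="rest-api"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(
                affinity=client.V1Affinity(
                    pod_anti_affinity=client.V1PodAntiAffinity(
                        required_during_scheduling_ignored_during_execution=[
                            client.V1PodAffinityTerm(
                                label_selector=client.V1LabelSelector(
                                    match_labels=labels
                                ),
                                topology_key="kubernetes.io/hostname",  # 1 per node
                            )
                        ]
                    )
                ),
                containers=[client.V1Container(name="api", image="nginx:stable")],
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```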

Building on that, you can create automation that (a starting point is sketched after this list):

  • Automatically selects the best (e.g. the most cost-effective) AWS instance type and switches Kubernetes workers to it one by one
  • When there is a spot price spike, fails over to on-demand instances faster than AWS terminates the spot instances
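
A hedged first building block for that automation: query the spot market and pick the cheapest current price among a shortlist of candidate worker instance types. The shortlist and region are assumptions:

```python
# Sketch: find the cheapest current spot price among candidate worker types.
import boto3
from datetime import datetime, timedelta

ec2 = boto3.client("ec2", region_name="us-east-1")

candidates = ["m5.large", "m4.large", "c5.large"]  # assumed shortlist
history = ec2.describe_spot_price_history(
    InstanceTypes=candidates,
    ProductDescriptions=["Linux/UNIX"],
    StartTime=datetime.utcnow() - timedelta(hours=1),
)

cheapest = min(history["SpotPriceHistory"], key=lambda p: float(p["SpotPrice"]))
print(cheapest["InstanceType"], cheapest["SpotPrice"], cheapest["AvailabilityZone"])
```

From there, the automation would roll worker nodes onto the winning type one by one, relying on the anti-affinity setup above to avoid downtime.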

Automation like that can effectively save up to 80% of your AWS costs without significantly impacting worst-case SLAs or HA requirements. In our experience, that trade-off is acceptable for most data processing systems.

Should I redesign my system to start using EKS?

If your system already consists of dockerized microservices that can run in Kubernetes, then the short answer is “no”, you shouldn’t. Amazon EKS is certified as fully Kubernetes-compatible, so applications running on any standard Kubernetes environment can be easily migrated to it. Otherwise, you need to dockerize the system components first and make sure that they can scale in Kubernetes. A good example of a legacy system migration to Kubernetes can be found here. More information about the migration process itself can be found here.

What if Amazon changes the Kubernetes code base?

Yes, the Amazon team contributes to Kubernetes and to several related products, such as Heptio Authenticator, but Amazon is not even in the top-100 committer list for any of them. So don’t fret; it’s not likely that Amazon is going to turn Kubernetes upside-down any time soon. At the same time, many AWS fixes to the Kubernetes platform have not only optimized Kubernetes to run better on AWS, but have also improved Kubernetes as a platform, which is good for the community overall.

When do we recommend using AWS EKS?

Containers can make life easier; container orchestration can be non-trivial. AWS EKS brings the benefits of container-based computing to organizations who want to focus on building applications instead of setting up Kubernetes clusters from scratch.

Three use cases that are a good fit for AWS EKS

So, you want to focus on building applications instead of setting up a Kubernetes cluster from scratch, and EKS can make the building less of a headache. Where do we think you'll gain the most benefit?

  • Running scalable stateless apps. Examples: REST APIs and various data processing microservices
  • Running resource-aggressive workflows on auto-scaled spot instances using failover capabilities provided by Kubernetes
  • Running DB and queue server clusters that support running in Kubernetes StatefulSets (formerly PetSets) and tolerate the failure of one node without downtime. Examples: MongoDB, Elasticsearch, Kafka

Three use cases that are not such a good fit for AWS EKS

We hope the list of approaches above helps you nail the genuinely compelling ways EKS can benefit your workloads. But don't take aim at every workload as though Kubernetes is a universal hammer. Use cases we don't recommend include:

  • Deploying stateful big data clusters: We don’t recommend putting big databases such as Hadoop/HBase, HP Vertica, or Cassandra into EKS.
  • Deploying products that are not designed for running and scaling in Docker: Any product that requires the operator to change config files on the disk to add or remove nodes (such as Apache HDFS) will become a pain when deployed in Kubernetes.
  • Deploying applications that can’t run in more than one replica. Kubernetes can only show its full power (such as automatic failovers through pod evictions) on apps that run in more than one replica. If your app does not support running more than one replica of itself in the same network, or with the same DB, or only supports the so-called ‘hot-standby’ mode, then Kubernetes (and hence EKS) is probably not the best choice to deploy it. If you have legacy applications that can’t run in Docker or scale properly, then there are many companies on the market (including our own) that can help you seamlessly fix this issue for your business.

We are also excited to announce that CloudGeometry now supports the migration of self-hosted Kubernetes clusters to AWS EKS at any scale. So, if you have such a migration case and you need help, please contact us!
