Integrating WAF into K8s Kong API Gateway

Jiju Jacob
4 min read · Mar 23, 2022

In this post, we will integrate a WAF (Web Application Firewall) into our API Gateway. We have selected Wallarm WAF, and we are integrating it into a pre-existing API Gateway built with Kong on our Kubernetes cluster. Though it would be much easier to integrate something like AWS WAF, Wallarm's automatic, ML-driven tuning of its filtering logic in response to attacks is what made me pick it over AWS WAF.

So, what are we building today?

They say words are clueless about what diagrams have to say!

We will front our existing API Gateway with an ECS cluster (AWS Fargate) that runs the Wallarm nodes, which in turn is fronted by an AWS Application Load Balancer that does the SSL termination. The Wallarm ECS nodes can block ("block"), safe-block ("safe_blocking"), or just monitor and report the attacks directed at your infrastructure. We will terraform our way into this configuration, and I am only going to show the most important parts of the code here, not everything.

Let’s start building

Wallarm WAF offers a free trial, which I used to create an account. The Wallarm UI username and password need to be injected as secrets into AWS Secrets Manager. In the gist below, we create those secrets from WAF_DEPLOY_USER and WAF_DEPLOY_PASS values passed to Terraform as environment variables.
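A minimal sketch of what that gist does, with resource and variable names of my own choosing, could look like this (the WAF_DEPLOY_USER / WAF_DEPLOY_PASS environment variables would be surfaced to Terraform as, for example, TF_VAR_waf_deploy_user and TF_VAR_waf_deploy_pass):

```hcl
# Sketch only: names and structure are assumptions, not the original gist.
variable "waf_deploy_user" {
  sensitive = true
}

variable "waf_deploy_pass" {
  sensitive = true
}

resource "aws_secretsmanager_secret" "waf_deploy_user" {
  name = "waf/deploy-user"
}

resource "aws_secretsmanager_secret_version" "waf_deploy_user" {
  secret_id     = aws_secretsmanager_secret.waf_deploy_user.id
  secret_string = var.waf_deploy_user
}

resource "aws_secretsmanager_secret" "waf_deploy_pass" {
  name = "waf/deploy-pass"
}

resource "aws_secretsmanager_secret_version" "waf_deploy_pass" {
  secret_id     = aws_secretsmanager_secret.waf_deploy_pass.id
  secret_string = var.waf_deploy_pass
}
```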

Next we get on to building a security group for the ECS cluster. You should really have two separate security groups, one for the ECS cluster and another for the ALB, but I am reusing the same one for brevity here!
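A sketch of that shared security group might look like the following (the name, ports, and wide-open CIDRs are my own illustrative choices, and should be tightened for production):

```hcl
# Shared by the ALB and the ECS tasks in this walkthrough;
# split into two groups for a production setup.
resource "aws_security_group" "waf" {
  name   = "wallarm-waf-sg"
  vpc_id = var.vpc_id

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```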

Well, that bit's done. Now let's get down to building the IAM policies for the ECS cluster that will host your Wallarm nodes.
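A sketch of the two roles, with names and policy scope of my own choosing, might look like this:

```hcl
# Both roles are assumed by ECS tasks.
data "aws_iam_policy_document" "ecs_assume" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "Service"
      identifiers = ["ecs-tasks.amazonaws.com"]
    }
  }
}

# Execution role: used by ECS itself to pull images, write logs,
# and fetch the Secrets Manager secrets.
resource "aws_iam_role" "task_execution" {
  name               = "wallarm-task-execution-role"
  assume_role_policy = data.aws_iam_policy_document.ecs_assume.json
}

resource "aws_iam_role_policy_attachment" "task_execution" {
  role       = aws_iam_role.task_execution.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
}

resource "aws_iam_role_policy" "read_secrets" {
  name = "read-waf-secrets"
  role = aws_iam_role.task_execution.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["secretsmanager:GetSecretValue"]
      Resource = "*" # wider than production needs, as noted
    }]
  })
}

# Task role: assumed by the containers themselves at runtime.
resource "aws_iam_role" "task" {
  name               = "wallarm-task-role"
  assume_role_policy = data.aws_iam_policy_document.ecs_assume.json
}
```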

The permissions above are a bit wider than you would want for production, but hey, we are having a bit of fun!

Essentially we are creating a task-execution-role and a task-role for the ECS cluster.

Next, we create a CloudWatch log group that will collect all of our ECS cluster's logs.
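For instance (the name and retention period here are assumptions):

```hcl
resource "aws_cloudwatch_log_group" "waf" {
  name              = "/ecs/wallarm-waf"
  retention_in_days = 30
}
```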

The next step in this journey is to define the containers that will run in the ECS cluster (a.k.a. the task definition in ECS).
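A sketch of such a template is below. Treat everything in it as an assumption to verify against the Wallarm docs: the image tag, the environment variable names (which track what the wallarm/node Docker image documented at the time, including DEPLOY_USERNAME / DEPLOY_PASSWORD for the UI credentials and TARANTOOL_MEMORY_GB for the tarantool component's memory), and the `${...}` template placeholders that Terraform fills in:

```json
[
  {
    "name": "wallarm",
    "image": "wallarm/node:latest",
    "essential": true,
    "memory": 2048,
    "portMappings": [
      { "containerPort": 80, "protocol": "tcp" }
    ],
    "environment": [
      { "name": "WALLARM_API_HOST", "value": "${wallarm_api_host}" },
      { "name": "NGINX_BACKEND", "value": "${nginx_backend}" },
      { "name": "WALLARM_MODE", "value": "${wallarm_mode}" },
      { "name": "WALLARM_APPLICATION", "value": "${wallarm_application}" },
      { "name": "TARANTOOL_MEMORY_GB", "value": "2" }
    ],
    "secrets": [
      { "name": "DEPLOY_USERNAME", "valueFrom": "${deploy_user_secret_arn}" },
      { "name": "DEPLOY_PASSWORD", "valueFrom": "${deploy_pass_secret_arn}" }
    ],
    "logConfiguration": {
      "logDriver": "awslogs",
      "options": {
        "awslogs-group": "${log_group}",
        "awslogs-region": "${aws_region}",
        "awslogs-stream-prefix": "wallarm"
      }
    }
  }
]
```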

Wallarm nodes are slightly resource hungry, especially on memory: you should give at least 1.2 GB of memory to the tarantool component. The JSON file is a template that our Terraform uses to render the actual container definition.

WALLARM_API_HOST needs to be set to api.wallarm.com for the EU Wallarm Cloud and to us1.api.wallarm.com for the US Wallarm Cloud. In my case, I used the US version.

The NGINX_BACKEND is the protected resource that the WAF fronts. In our case, it is the load balancer address of the NLB/CLB that is integrated with our Kong API Gateway. Maybe you can build a layer of abstraction using an internal DNS name!

The WALLARM_MODE is our way of telling Wallarm how strict it needs to be with traffic. A value of "block" will block all kinds of attacks. A value of "safe_blocking" will block only attacks from greylisted IP addresses. A value of "monitoring" will just monitor and report, not block. And a value of "off" turns Wallarm off entirely, in which case, well, why are we building this?

Usually you go with “monitoring” first and move on to “safe_blocking” and “block” modes as you harden.

Finally, you can have multiple applications that Wallarm Cloud monitors and draws beautiful graphs for. Register an Application in the Wallarm UI (more details here), and once you have its numeric ID, feed that ID into the WALLARM_APPLICATION variable.
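Assuming the container definition is kept as a Terraform template file, wiring these variables in might look like the sketch below. The file name, backend hostname, application ID, and variable names are all placeholders of my own; the secret ARNs and log group would come from the resources created in the earlier steps:

```hcl
locals {
  container_definitions = templatefile("${path.module}/wallarm-containers.json.tpl", {
    wallarm_api_host       = "us1.api.wallarm.com"              # US Wallarm Cloud
    nginx_backend          = "kong-proxy.internal.example.com"  # placeholder backend
    wallarm_mode           = "monitoring"                       # start permissive, harden later
    wallarm_application    = "42"                               # numeric ID from the Wallarm UI
    deploy_user_secret_arn = var.waf_deploy_user_secret_arn     # ARN of the secret created earlier
    deploy_pass_secret_arn = var.waf_deploy_pass_secret_arn
    log_group              = var.waf_log_group_name             # CloudWatch log group from earlier
    aws_region             = var.aws_region
  })
}
```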

Here is the ECS cluster definition:
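A sketch of those three resources, with names, sizing, and variable references of my own choosing, might look like this:

```hcl
resource "aws_ecs_cluster" "waf" {
  name = "wallarm-waf"
}

resource "aws_ecs_task_definition" "waf" {
  family                   = "wallarm-waf"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = 1024
  memory                   = 2048
  execution_role_arn       = var.task_execution_role_arn
  task_role_arn            = var.task_role_arn
  # Rendered from the JSON container-definition template described above.
  container_definitions    = local.container_definitions
}

resource "aws_ecs_service" "waf" {
  name            = "wallarm-waf"
  cluster         = aws_ecs_cluster.waf.id
  task_definition = aws_ecs_task_definition.waf.arn
  desired_count   = 2
  launch_type     = "FARGATE"

  network_configuration {
    subnets          = var.public_subnet_ids # hardening candidate: move to private subnets
    security_groups  = [var.waf_security_group_id]
    assign_public_ip = true
  }

  load_balancer {
    target_group_arn = var.target_group_arn
    container_name   = "wallarm"
    container_port   = 80
  }
}
```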

In the Terraform above, we create the ECS cluster, the ECS service, and the task definition for the tasks that run as part of the service.

Among the things that need to be hardened is the placement of the service in a public subnet.

And finally, the ALB setup that fronts the Wallarm ECS cluster:
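A sketch of the ALB, its target group, and the two listeners might look like the following (names, the health-check path, and variable references are assumptions of mine; note that Fargate tasks need `target_type = "ip"`):

```hcl
resource "aws_lb" "waf" {
  name               = "wallarm-waf-alb"
  load_balancer_type = "application"
  subnets            = var.public_subnet_ids
  security_groups    = [var.waf_security_group_id]
}

resource "aws_lb_target_group" "waf" {
  name        = "wallarm-waf-tg"
  port        = 80
  protocol    = "HTTP"
  target_type = "ip" # required for Fargate tasks
  vpc_id      = var.vpc_id

  health_check {
    path    = "/" # assumption: use an endpoint that reliably returns 200
    matcher = "200"
  }
}

# HTTPS listener terminating SSL with an ACM certificate.
resource "aws_lb_listener" "https" {
  load_balancer_arn = aws_lb.waf.arn
  port              = 443
  protocol          = "HTTPS"
  certificate_arn   = var.certificate_arn

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.waf.arn
  }
}

# Redirect plain HTTP to HTTPS.
resource "aws_lb_listener" "http" {
  load_balancer_arn = aws_lb.waf.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type = "redirect"
    redirect {
      port        = "443"
      protocol    = "HTTPS"
      status_code = "HTTP_301"
    }
  }
}
```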

We set up the ALB to terminate SSL with a certificate, set up an HTTP-to-HTTPS redirect, and point the Application Load Balancer's target group at the ECS cluster running the Wallarm WAF.

One thing that I got stuck on: the AWS load balancer was doing health checks, and I did not have a proper health endpoint, so it kept killing and restarting the ECS tasks. It is important to define a proper health-check endpoint that returns a 200 status code when everything is good.
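On the target group, that translates into a `health_check` block along these lines (the path and thresholds here are assumptions; point the path at whatever endpoint in your stack reliably returns a 200):

```hcl
health_check {
  path                = "/"   # assumption: must return 200 when healthy
  matcher             = "200"
  interval            = 30
  timeout             = 5
  healthy_threshold   = 2
  unhealthy_threshold = 3
}
```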

In this post, I have left out the setup of the VPC and of the certificates via AWS Certificate Manager (ACM), for brevity.

And voila, we have all the Terraform for the entire infrastructure. Terraform weaves its magic, and after a few test requests to this infrastructure, Wallarm should start showing beautiful graphs along with statistics on how many attacks it foiled…
