
👋 A space where I teach and share ideas.

An Introduction to Policy as Code (Sentinel)

Sentinel is HashiCorp’s framework for implementing Policy as Code (PaC). It integrates with Infrastructure as Code (IaC) and allows teams and organizations to be proactive from a compliance/risk standpoint. Sentinel allows for granular, logic-based policy decisions that read information from external sources to derive a decision. In plain English: based on the logic you write (policies), Sentinel acts as a decision maker using the information it is given. This is pretty handy when you want to prevent users from executing specific actions, or ensure that certain steps/actions are carried out. For example, an employee attempting to deploy a bad-practice network rule that allows everyone on the internet inbound access! It’s important to call out that Sentinel is a dynamic programming language, with types and the ability to work with rule constructs based on boolean logic.

This article was originally published on Medium. Link to the Medium article can be found here.
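Sentinel policies are written in Sentinel’s own language, but at their core they are just boolean rules evaluated against data such as a Terraform plan. Purely as an illustration of that idea (Python rather than Sentinel, with a rule structure invented for the example), the “wide-open ingress rule” scenario above boils down to logic like this:

```python
# Illustration only: Sentinel has its own policy language, and this Python
# sketch simply mirrors the kind of boolean rule a Sentinel policy encodes.
# The rule structure below is invented for the example.

def deny_open_ingress(security_group_rules):
    """Policy passes only if no ingress rule is open to the entire internet."""
    violations = [
        rule for rule in security_group_rules
        if rule.get("direction") == "ingress" and rule.get("cidr") == "0.0.0.0/0"
    ]
    return len(violations) == 0


proposed_rules = [
    {"direction": "ingress", "cidr": "10.0.0.0/16", "port": 443},
    {"direction": "ingress", "cidr": "0.0.0.0/0", "port": 22},  # the bad-practice rule
]

# A failing rule is what lets Sentinel block the change before it is deployed.
print("policy pass" if deny_open_ingress(proposed_rules) else "policy fail")
```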

Infrastructure as Code (IaC) — What Is It?

The explosion of public cloud platforms has made accessing and consuming IT infrastructure an uncomplicated experience. The traditional IT infrastructure found in vast and expensive corporate data centers can now be consumed by anyone with an internet connection. As organizations and businesses start consuming public cloud platforms and their infrastructure, you often hear the expression infrastructure as code (IaC).

This article was originally published on Medium. Link to the Medium article can be found here.

If you have ever wondered about the what, the why, and the how of IaC, then you have come to the right place.

Static/Dynamic Infrastructure

Before we dive into the nuts and bolts of IaC, it helps to first understand how IT infrastructure works. Let’s start with static infrastructure: think server racks, mainframes, routers, switches, firewalls, and pretty much any equipment you expect to find in a traditional data center. In this static infrastructure environment, when you need more capacity you add it through physical provisioning, scaling horizontally and/or vertically. The need for physical provisioning, and the wait for that compute capacity to become available, is what makes this environment static.

Cleaning Up a Terraform State File — The Right Way!

We have all been there: the moment terraform apply crashes because someone made a manual change and removed a resource that Terraform is expecting to be available. You try a terraform refresh, but with no luck! What do you do at this point? Sometimes the only option is to modify the Terraform state file. This article will walk you through how to make state file modifications, both the right way and the wrong way, so that you can educate others in the future on how to make state file changes properly.

This article was originally published on Medium. Link to the Medium article can be found here.

The wrong way

One could easily open up the terraform.tfstate file and manually do “JSON surgery”, but this is not recommended, mainly because of the high chance of human error and the potential to wreck your state file. That being said, allow me to show you how.
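To make the risk concrete, here is roughly what that JSON surgery amounts to. This is a sketch only: the state schema varies by Terraform version, the resource address below is hypothetical, and the supported route is the terraform state rm command rather than hand-editing.

```python
# A rough sketch of the "JSON surgery" described above, shown only to make the
# risk concrete. The state schema varies by Terraform version, the resource
# below is hypothetical, and the supported route is `terraform state rm`.
import json

STATE_FILE = "terraform.tfstate"                       # assumes local state
DOOMED = {"type": "aws_instance", "name": "example"}   # hypothetical resource

with open(STATE_FILE) as f:
    state = json.load(f)

# Drop the resource that no longer exists so Terraform stops expecting it.
state["resources"] = [
    r for r in state.get("resources", [])
    if not (r.get("type") == DOOMED["type"] and r.get("name") == DOOMED["name"])
]

# Terraform tracks revisions of the state with a serial; bump it after edits.
state["serial"] = state.get("serial", 0) + 1

with open(STATE_FILE, "w") as f:
    json.dump(state, f, indent=2)
```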

What Is a CI/CD Pipeline?

If you don’t know the answer to this question, don’t feel bad; engineers and IT professionals at all levels sometimes don’t know the answer either. In my daily job I often get asked, “What is a pipeline?” The follow-up question, 9 times out of 10, is, “How do I create a pipeline?” Today I would like to shed some light on the pipeline topic, mainly focusing on the first question, but also on why it is important to application development.

This article was originally published on Medium. Link to the Medium article can be found here.

The Past

In simple terms a pipeline is a workflow, a workflow that application development teams use to release software.
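Real pipelines are defined in a CI tool’s own configuration (a .gitlab-ci.yml, a Jenkinsfile, and so on), but the workflow idea is easy to sketch: ordered stages, where a failure in any stage stops the release. The toy Python below only illustrates that flow; the stage names and commands are arbitrary.

```python
# Toy illustration of the pipeline-as-workflow idea: ordered stages, and a
# failure in any stage stops the release. The stage names and commands are
# arbitrary; real pipelines live in the CI tool's own config file.
import subprocess

STAGES = [
    ("build",  ["echo", "compiling the application"]),
    ("test",   ["echo", "running the test suite"]),
    ("deploy", ["echo", "releasing to production"]),
]

for name, command in STAGES:
    print(f"== stage: {name} ==")
    if subprocess.run(command).returncode != 0:
        print(f"stage '{name}' failed, stopping the pipeline")
        break
```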

Ditch Your SSH Keys and Enable AWS SSM!

If you manage AWS for an organization, big or small, chances are you have several Secure Shell (SSH) keys lying around that you hardly use, OR WORSE, you don’t recall the account the key was made for. SSH key management is a rabbit hole in itself, and most people understand the security concerns that arise with improper SSH key hygiene. Luckily for us, there is a way to bid farewell to the cumbersome practice of using SSH to remote into an EC2 instance. Allow me to introduce you to the AWS service Systems Manager (SSM).

This article was originally published on Medium. Link to the Medium article can be found here.

I will teach you the following in this guide:

  • Identify SSM Remote Session Manager requirements, including for an enterprise
  • Enable Remote Session Manager for all EC2 instances
  • Enable Remote Session Manager logging
  • Lock down Remote Session Manager through IAM User permissions 🔐
  • Debug Remote Session Manager

Enable SSM Remote Session Manager

The AWS managed service SSM comes with a neat feature called Session Manager. Session Manager allows us to connect to an instance and get a shell session over HTTPS (TLS 1.2, port 443), without having to use SSH keys. It’s important to understand that this is NOT an SSH connection but rather an HTTPS connection. Session Manager gives us a terminal session directly from our web browser OR through the AWS CLI. It’s really that easy… assuming you have everything configured correctly.
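Session Manager itself is driven from the console or the AWS CLI, but as a rough boto3 sketch (assuming boto3 is installed and AWS credentials and a region are already configured), checking which instances Session Manager can actually reach looks something like this:

```python
# Sketch: list the EC2 instances that have registered with SSM, i.e. the ones
# Session Manager can open a shell on. Assumes boto3 is installed and AWS
# credentials/region are already configured.
import boto3

ssm = boto3.client("ssm")

paginator = ssm.get_paginator("describe_instance_information")
for page in paginator.paginate():
    for instance in page["InstanceInformationList"]:
        print(instance["InstanceId"], instance["PingStatus"], instance.get("PlatformName", ""))

# With an instance showing "Online" above, a shell is one CLI call away
# (the AWS CLI needs the Session Manager plugin installed):
#   aws ssm start-session --target i-0123456789abcdef0
```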

Automate Custom EC2 AMIs

If you work for an organization or company that leverages the services of a public cloud provider such as AWS, chances are there is a customized image available in your environment. Most companies today offer some sort of customized default image, or images, with baked-in security tools, proxy variables, repository URL overrides, SSL certificates, and so on. This customized image is usually sourced from the common images provided by the public cloud provider.

Today, we’re going to look at how we can completely automate a customized image sourced from the Amazon Linux 2 AMI and deploy it to all accounts inside an organization, while maintaining a minimal infrastructure footprint. Code can be found in the following GitHub repository.

This article was originally published on Medium. Link to the Medium article can be found here.

Assumptions

  • Accounts are managed under AWS Organizations.
  • All accounts require the customized AMI.
  • VPC ACLs and Security Groups allow port 22 into the VPC (for Packer).
  • CI/CD has proper credentials to query AWS services (Organizations, VPC, EC2).
  • GitLab and a GitLab Runner are available.

Tools Utilized

  • Terraform
  • Packer
  • AWS SNS
  • AWS Lambda
  • AWS CLI
  • GitLab
  • GitLab CI
  • Docker
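The repository ties these tools together end to end. In isolation, the cross-account piece, looking up every account in the organization and sharing the freshly baked AMI with it, is roughly the sketch below (boto3, with a placeholder AMI ID and the assumption that the credentials have Organizations and EC2 access).

```python
# Sketch of the cross-account piece: find every active account in the AWS
# Organization and grant it launch permission on the freshly baked AMI.
# Assumes boto3, credentials with Organizations/EC2 access, and a real AMI ID.
import boto3

AMI_ID = "ami-0123456789abcdef0"   # placeholder for the Packer-built image

org = boto3.client("organizations")
ec2 = boto3.client("ec2")

account_ids = []
for page in org.get_paginator("list_accounts").paginate():
    account_ids.extend(a["Id"] for a in page["Accounts"] if a["Status"] == "ACTIVE")

# Share the custom AMI with every active account in the organization.
ec2.modify_image_attribute(
    ImageId=AMI_ID,
    Attribute="launchPermission",
    LaunchPermission={"Add": [{"UserId": account_id} for account_id in account_ids]},
)
print(f"shared {AMI_ID} with {len(account_ids)} accounts")
```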
