In the previous post we set up the infrastructure in AWS and stored the Terraform states in GitLab.
Now let's go for the most fun part: integrating everything.
We will modify a security group to allow HTTP access to the FastAPI app, and we will have all kinds of notifications and approval requests sent to Slack.
Every step will be logged and stored as messages in Slack and, in DevOps fashion, in GitLab: Terraform states, artifacts (specific outputs that you want to keep), logs for each run of the pipeline, etc. This gives you incredible traceability of your IaC.
Now in this post, we will be working only in the root folder of the repo.
4.- Slack
Creating an app in Slack can be tricky, so I will only give you the general steps here.
To allow GitLab to send messages to Slack, you will need the Slack webhook. Add it to your project repository under Settings -> Integrations -> Slack.
There you will add the Slack webhook, select the events for which you want GitLab to send you a message, and choose the channel that message should go to.
Since the pipeline runs each time you change the code and push a commit, you should get a message like this in Slack.
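For context, the webhook itself is just an HTTPS endpoint that accepts a JSON body. GitLab does the posting for you, but any job could hit the same webhook manually; here is a minimal sketch (the webhook URL is a placeholder, not a real one):

```python
import json
import urllib.request

# Placeholder -- use the URL Slack generated for your app's incoming webhook.
WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"

def build_payload(text, channel=None):
    """Build the minimal JSON body a Slack incoming webhook accepts."""
    payload = {"text": text}
    if channel:
        payload["channel"] = channel  # only honored by legacy webhooks
    return payload

def notify(text):
    """POST the message to the webhook; returns the HTTP status code."""
    req = urllib.request.Request(
        WEBHOOK_URL,
        data=json.dumps(build_payload(text)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```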
Now, these are the full notification messages we want to get in Slack: requests, approvals, deploy status, compliance, Terraform plan, consistency check, and end-of-pipeline status.
And last, the execution of the test child pipeline.
5.- GitLab CI (pipelines)
To set up these variables in YOUR fork of the repository in GitLab, go to Settings -> CI/CD -> Variables -> Expand -> Add variable, and add the two variables. This allows the pipeline to pass the values when it executes Terraform.
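For reference, CI/CD variables become environment variables inside the job container, and Terraform automatically reads any of them prefixed with TF_VAR_ as input variables. A small sketch of that convention (the variable names here are examples, not the ones from the repo):

```python
# Inside the job container, GitLab exports each CI/CD variable as an
# environment variable. Terraform reads any variable named TF_VAR_<name>
# as the input variable <name>, with no extra wiring in the pipeline.
def terraform_inputs(environ):
    """Collect the Terraform input variables visible in an environment dict."""
    prefix = "TF_VAR_"
    return {k[len(prefix):]: v for k, v in environ.items() if k.startswith(prefix)}
```

So a CI/CD variable named `TF_VAR_environment` would feed the Terraform input variable `environment` directly.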
Go to CI/CD -> Pipelines, and if you click on one of the pipeline numbers you will see a graphical representation of it:
This last one is a very important stage, and I will go deeper into it later. Basically, I'm injecting an external child pipeline at the end to do some checks:
IMPORTANT to keep in mind: each task will trigger the creation of a container with an image, import the project repository variables, execute what is specified, produce output, optionally store it as an artifact, and then the container will be destroyed.
Now, if you click on any of these stages and tasks, you can see what happened inside, in typical Linux console style. This is the container running in the runner, and this information will remain stored, so it is part of the traceability.
On the same screen, on the right, you can see the artifacts; they will remain stored and you can also check them.
gitlab-terraform.sh
is just a wrapper to avoid having big text lines in the middle of the pipeline, which only create visual noise (thanks to Nicholas Klick). In this case, I adapted it to produce text output instead of JSON.
5.2.- Stage validate:
./scripts/validate_compliance.py
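To give an idea of what such a compliance script can do, here is a toy sketch that scans the text output of `terraform plan` for security group rules opening unexpected ports to the internet. The rule set and the matching are invented for illustration; the real checks live in validate_compliance.py:

```python
import re

# Toy policy: only port 80 may be exposed to the whole internet.
ALLOWED_OPEN_PORTS = {80}

def check_plan(plan_text):
    """Return a list of violations found in the plain-text plan output."""
    violations = []
    # Crude matching on the rendered plan; a real script would parse
    # the JSON plan instead of grepping text.
    for match in re.finditer(r"from_port\s*=\s*(\d+)", plan_text):
        port = int(match.group(1))
        if port not in ALLOWED_OPEN_PORTS and "0.0.0.0/0" in plan_text:
            violations.append(f"port {port} open to 0.0.0.0/0")
    return violations
```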
5.3.- Stage plan: we create the Terraform plan and store it as an artifact, in text format, for future reference.
Note that I'm using Terraform targeting, which is not recommended for everyday use, but for this PoC it is a perfect fit.
5.4.- Stage request approval:
For this task you will need to add the Slack Bot User OAuth token in GitLab, in the project CI/CD variables section, like we did before.
This is what you will get in Slack: the request notification, with the compliance report and the Terraform plan, plus the link to approve the request in GitLab in order to proceed with the changes.
5.5.- Stage deploy: finally, if everything looks OK and the pipeline run has been approved, terraform apply
will push the changes to the IaC.
Note the line when: manual
, which indicates just that: manual approval is required for this task. Combined with the line allow_failure: false
, it forces the pipeline to stop until the task is approved.
To approve the pipeline, go into GitLab, find the pipeline and the task to approve (or use the link in Slack from the previous section), and hit the play button.
5.6.- Stage notify deploy: send the deploy notification message to Slack.
Notice that the channel ID is specified in the curl command.
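For readers who prefer it spelled out, a rough Python equivalent of that curl call against Slack's chat.postMessage API looks like this (the token and channel ID are placeholders):

```python
import json
import os
import urllib.request

def post_message(token, channel, text):
    """Build the chat.postMessage request the notify job sends.
    The bot token goes in the Authorization header; the channel ID
    goes in the JSON body."""
    body = json.dumps({"channel": channel, "text": text}).encode()
    return urllib.request.Request(
        "https://slack.com/api/chat.postMessage",
        data=body,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )

# To actually send it (placeholder channel ID):
# urllib.request.urlopen(post_message(os.environ["SLACK_BOT_TOKEN"],
#                                     "C0123456789", "Deploy finished"))
```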
The message to Slack notifying the successful approval and deploy
5.7.- Stage verify deploy and notify: in this step we verify the integrity of what was executed. This integrity check could get super complicated; here it is just to show the kind of automated control you could be doing in the script.
Here you will need to add the last variable in GitLab CI/CD, to allow the pipeline to download the Terraform-managed states so the Python script can analyze them. The user token is the same one we got in the previous post for Terraform to upload the states.
You can find it in your notes, or if you don't, you can generate a new one at GitLab -> user preferences -> access tokens.
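GitLab serves the stored states over its REST API, at the same address the Terraform http backend uses, which is how a script can fetch them. A sketch (the project ID and state name are placeholders):

```python
import urllib.request

GITLAB_API = "https://gitlab.com/api/v4"

def state_url(project_id, state_name):
    """Address of a Terraform state stored in GitLab's managed backend."""
    return f"{GITLAB_API}/projects/{project_id}/terraform/state/{state_name}"

def fetch_state(project_id, state_name, token):
    """Download the raw state JSON using the personal access token
    stored as a CI/CD variable."""
    req = urllib.request.Request(
        state_url(project_id, state_name),
        headers={"PRIVATE-TOKEN": token},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```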
In this case, the Python script will print the repo Terraform files, the Terraform-managed state files, and what is actually configured on AWS.
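The AWS side of such a comparison could be sketched like this: extract the security group rules the state file claims, then pull the live rules with boto3 and diff the two sets. The state layout follows Terraform's v4 state format; the resource name and group ID are placeholders, and the live half needs boto3 plus AWS credentials:

```python
def rules_from_state(state, resource_name):
    """Extract (port, cidr) pairs for a security group from a parsed
    Terraform state file (v4 format)."""
    pairs = set()
    for res in state.get("resources", []):
        if res.get("name") != resource_name:
            continue
        for inst in res.get("instances", []):
            for rule in inst["attributes"].get("ingress", []):
                for cidr in rule.get("cidr_blocks", []):
                    pairs.add((rule["from_port"], cidr))
    return pairs

def rules_from_aws(group_id):
    """Fetch the same (port, cidr) pairs from the live security group.
    Requires boto3 and AWS credentials; group_id is a placeholder."""
    import boto3
    ec2 = boto3.client("ec2")
    group = ec2.describe_security_groups(GroupIds=[group_id])["SecurityGroups"][0]
    return {(perm["FromPort"], ip["CidrIp"])
            for perm in group["IpPermissions"]
            for ip in perm.get("IpRanges", [])}

# A drift check is then just a set difference:
# drift = rules_from_state(state, "front") ^ rules_from_aws("sg-0123456789")
```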
5.8.- Stage notify if failure: this is just a generic exit stage that runs only when another stage has failed. It sends Slack the failure message with the link to the pipeline in GitLab for future reference.
5.9.- Stage check services:
NOW, THIS IS GOLD !!!
In this stage, we will INCLUDE an external child pipeline (its contents are in the next section).
This lets us create reusable, general networking TEST pipelines and instruct development teams to include them at the end of their pipelines.
This way, when someone changes the infrastructure there is a standard test pipeline to include, and the result will be stored and published, which means not only alerting but also traceability.
6.- Reusable Test pipeline
This is a very simple, fictional pipeline; you should be able to read it now that we have covered the basics in the previous section.
Basically, it has 3 stages: prepare, check_services, and notify.
Check the child pipeline at ./test-pipeline/verify-aws-services-gitlab-ci.yml
It will run a Python script that does the following:
Check the Python script at ./test-pipeline/verify-aws-services.py
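Without reproducing the repo script, a service check of this kind usually reduces to a probe like the following sketch (the URL would come from pipeline variables; the verdict labels are made up for illustration):

```python
import urllib.error
import urllib.request

def classify(status):
    """Map an HTTP status code to the verdict the test job reports."""
    if 200 <= status < 300:
        return "OK"
    if 300 <= status < 400:
        return "REDIRECT"
    return "FAIL"

def check_service(url, timeout=5.0):
    """Hit the service endpoint and classify the answer; network
    failures count as UNREACHABLE rather than crashing the job."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return classify(resp.status)
    except (urllib.error.URLError, OSError):
        return "UNREACHABLE"
```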
The Slack message would be like this:
7.- Running the pipeline
Ok, great! We are ready to run the pipeline.
To do that, you can modify the code on your PC and push it to GitLab, or you can manually run the pipeline from GitLab. In the latter case, go to CI/CD -> Pipelines, hit the Run pipeline button, and do the same in the next window.
This second window can be used to pass different variables to influence the pipeline in different ways: users, versions, names, etc. But let's keep it simple to learn the fundamentals.
Now you should see the pipeline moving forward like in the previous screenshots. Be patient until it gets to the Deploy stage, where you will need to approve it, as mentioned before.
You can follow the Slack message links.
Now, this approval part should really be done with Git branches and merge requests; that is the right way to do it, but it would require a paid GitLab account.
I know it would work, but I don't need that right now. I want to keep this free to try 😉
Closing
If everything went OK, you should now be able to see the full web pages served by FastAPI at the public IP of your front instance. If you don't remember the IP, you can check it at AWS or in your Terraform state file under the key front_public_ip.
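Assuming front_public_ip is exposed as a Terraform output, it can be read straight out of the state file with a few lines:

```python
import json

def output_value(state, key):
    """Read a root-level output (like front_public_ip) from a parsed
    Terraform state file."""
    return state["outputs"][key]["value"]

# with open("terraform.tfstate") as fh:
#     print(output_value(json.load(fh), "front_public_ip"))
```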
WOW, this was super long and complicated, with many moving parts to learn. Once you get the idea, you will see that it is not so crazy after all, but it takes time; it took me several months to internalize everything.
If you have any comments, suggestions, or ideas to explore, I will be happy to help!
Thanks for reading.
Adrián.-