Welcome!
This is Gulcan's Kubernetes and DevOps Corner!
Discover practical solutions and hands-on projects to boost your skills.
Real Kubernetes and CI/CD Projects, Real Solutions
Simplified DevOps Challenges
Hands-On Learning with Kubernetes, DevOps, GitOps and Automation
Latest Blogs¶
Kubernetes¶
Getting Started with Kubernetes
So, you're ready to dive into the world of Kubernetes but unsure where to start? Fear not! We've got you covered with a beginner-friendly guide using our favorite environments: Minikube, k3d, and Vagrant. Whether you prefer to keep it local or venture into the cloud with a Codespace environment, we've got options for everyone.
Your Favorite Environments Guide¶
1. Minikube: Quick and Simple¶
Minikube is your go-to if you're looking for a hassle-free Kubernetes setup on your local machine. Here's a quick start:
# Install Minikube (if not already installed)
brew install minikube
# Start Minikube cluster
minikube start
# Verify your setup
kubectl get nodes
Now, you have a single-node Kubernetes cluster ready for exploration. Perfect for testing and learning the basics.
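If you want a quick smoke test beyond listing nodes, a throwaway deployment works well (the names here are just examples):
# Deploy and expose a test workload
kubectl create deployment hello --image=nginx
kubectl expose deployment hello --type=NodePort --port=80
# Open the service in your browser via Minikube
minikube service hello
# Clean up
kubectl delete service,deployment hello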
2. k3d: Lightweight and Portable¶
If you're a fan of lightweight environments, k3d is your companion. It spins up a Kubernetes cluster in Docker containers, making it easy to set up and tear down:
# Install k3d (if not already installed)
brew install k3d
# Create k3d cluster
k3d cluster create <yourk3dcluster>
# Verify your setup
kubectl get nodes
Enjoy the simplicity of k3d, which provides a fast and lightweight Kubernetes environment for your experiments.
3. Vagrant: Customizable Kubernetes Playground¶
For those who prefer a more customizable setup, Vagrant is the way to go. You can choose between a 3-node or 6-node local setup for a more realistic cluster experience:
3-node
We have set up a 3-node cluster for simple three-tier applications and development purposes. Check the Kuberada labs repo for the 3-node cluster Vagrantfile setup here.
# Install Vagrant (if not already installed)
brew install vagrant
# Choose your setup
# 3-node setup
vagrant up 3-node
6-node
We have a 6-node Vagrant setup for distributed applications and microservices exploration. Check the Kuberada labs repo for the 6-node cluster setup here.
# 6-node setup
vagrant up 6-node
# Verify your setup
kubectl get nodes
Vagrant allows you to experiment with different node configurations, providing a scalable environment right on your local machine.
Conclusion¶
Whether you're a fan of Minikube, k3d, or Vagrant, we've got a Kubernetes environment tailored to your preferences. Choose the setup that suits your learning style, experiment with basic Kubernetes commands, and get ready to unlock the full potential of container orchestration.
Happy Kuberneting!
DevOps¶
This page is under development
Kuberada Platform¶
Welcome to Your DevOps Learning Hub!
What You Get at Kuberada:¶
Blogs with Practical Insights: Read our blogs for practical insights into Kubernetes and DevOps. We keep it simple, straightforward, and focused on real-world scenarios.
Hands-On Labs for practicing: Every blog comes with hands-on labs. Apply what you've learned in a practical way, no frills, just hands-on experience.
Why Kuberada:¶
Learn Faster: Our integrated approach helps you learn faster. Blogs for understanding, labs for application, and Codespaces for seamless practice.
Depth of Understanding: We want you to go beyond the basics. Our content is crafted for a clear and deep understanding of Kubernetes and DevOps.
A Quick Note
Expect regular updates and new content.
Thank you for joining me on this journey.
Kuberada Labs on GitHub and Killercoda¶
Kuberada Labs are now not just on our blog but also on GitHub. Explore hands-on scenarios covering everything from Kubernetes to killer coding challenges. Whether you're a pro or a coding newbie, there's something for everyone. Join our community, share your insights, and let the coding adventures begin!
GitHub Repository¶
Find the labs here.
Killercoda scenarios¶
Navigate to the Killercoda scenarios here.

A High-Level Overview of Kubernetes The Hard Way on WSL¶
Tagged with:
Grasping the inner workings of Kubernetes can be challenging. Sure, managed Kubernetes services make things easier. But for those who crave a deep understanding of how Kubernetes really works, Kubernetes The Hard Way offers a hands-on approach to setting up a Kubernetes cluster, providing a deeper understanding of its components and their interactions. This (mostly) concise article aims to outline the basic flow of the steps, include some smoke tests on a Python Flask application (like verifying data-at-rest encryption, DNS resolution, and pod-to-pod communication), and offer various tips.
We will set up the following cluster:
And we will perform some actions like:
Component Versions¶
Kubernetes: v1.28.x
containerd: v1.7.x
CNI: v1.3.x
etcd: v3.4.x
Environment Setup¶
Used 3 containers, with Ubuntu (WSL) as the jumpbox, which facilitated smooth communication between my host machine and the Kubernetes cluster components.
Setup Process Overview¶
Bootstrapping the Cluster: Automated setup of compute resources and client tools using functions and scripts (e.g., image build, network configuration, container pausing).
Provisioned a PKI Infrastructure: Used OpenSSL for each Kubernetes component.
Authentication and Authorization: Generated Kubernetes configuration files for controller manager, kubelet, kube-proxy, scheduler clients, and admin user.
Ensured Data Encryption at Rest: Generated an encryption key and copied the encryption config file to master nodes.
Stored Cluster State: Bootstrapped a three-node etcd cluster and configured it for high availability and secure remote access.
Bootstrapped and Provisioned the Kubernetes Control Plane: Downloaded, extracted, and installed the API Server, Scheduler, and Controller Manager binaries. Configured RBAC permissions to allow the Kubernetes API Server to access the Kubelet API on each worker node.
Provisioned Worker Node Components: runc, container networking plugins, containerd, kubelet, and kube-proxy.
Generated a kubeconfig file for kubectl based on the admin user credentials.
Added network routes for pod to pod communication.
Installed a cluster-internal DNS server using CoreDNS; now my pods could communicate with the cat API.
Set up a local NFS server exposing a directory on my host machine via NFS, and installed the Kubernetes NFS Subdir External Provisioner.
Now you should be able to manage the cluster using kubectl.
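As an aside, the encryption config mentioned in the data-encryption-at-rest step above is typically generated along these lines (a sketch following the Kubernetes The Hard Way approach; the AES key is created inline):
# Generate a 32-byte key and embed it in the EncryptionConfiguration
cat > encryption-config.yaml <<EOF
kind: EncryptionConfiguration
apiVersion: apiserver.config.k8s.io/v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: $(head -c 32 /dev/urandom | base64)
      - identity: {}
EOF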
Some Tests¶
Data at rest encryption and printing a hexdump of the kuberada secret stored in etcd:
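A sketch of how this test can look (assuming etcdctl runs on a controller node with the cluster certificates configured):
# Create a secret, then read its raw value straight from etcd
kubectl create secret generic kuberada --from-literal="mykey=mydata"
sudo ETCDCTL_API=3 etcdctl get /registry/secrets/default/kuberada | hexdump -C
# The dump should show the k8s:enc:aescbc:v1:key1 prefix instead of plaintext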
Port forwarding and making an HTTP request using the forwarding address:
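For example (the deployment name and container port are illustrative):
kubectl port-forward deployment/flask-app 8080:5000 &
curl --head http://127.0.0.1:8080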
DNS resolution:
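A typical check, using a throwaway busybox pod:
kubectl run busybox --image=busybox:1.28 --command -- sleep 3600
kubectl exec busybox -- nslookup kubernetes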
Further Steps¶
Implement a Load Balancing Solution: Ensure that communication from the worker nodes to the API server is routed through a load balancer for better availability and scalability. You can use tools like MetalLB or HAProxy to achieve this.
Set Up a Separate etcd Cluster for High Availability: To improve the reliability of your Kubernetes cluster, consider setting up a separate etcd cluster. This separate cluster will ensure that your Kubernetes control plane remains operational even if one of the etcd nodes fails.
Improve Networking Performance, Security, and Observability: Replace the default networking and proxy components in your Kubernetes cluster with Cilium. Cilium leverages eBPF (extended Berkeley Packet Filter) technology, providing a more efficient and secure networking layer. It offers advanced features like network visibility, security policies, and load balancing, enhancing the overall performance and security of your Kubernetes cluster.
CICs (Challenge in Challenge)¶
Permission denied for nf_conntrack_max: To resolve this issue, increase the maximum number of tracked connections in the netfilter connection tracking system to 655360, and change the permissions of /etc/sysctl.conf so that kube-proxy can modify the nf_conntrack_max setting.
WARNING: No blkio throttle.read_bps_device support and Unable to add a process (PID xyz) to a cgroup: If the docker info | grep Cgroup command returns v1, it means that some specific features for fine-grained control (throttle.read_bps_device, throttle.write_bps_device, etc.) are not enabled in your current kernel configuration. Upgrade your kernel and reboot WSL.
Mapping Domain Names to IP Addresses in /etc/hosts: Ensure that domain names are correctly mapped to their respective IP addresses in the /etc/hosts file. This mapping is crucial for proper communication between components in your Kubernetes cluster.
Adding Routes for Pod Communication Across Nodes: By adding routes to the node's internal IP address for each worker node's Pod CIDR range, you ensure that pods can reach each other across nodes. This step is essential for the Kubernetes networking model to function correctly.
nfsd command not found: If you encounter an issue where the nfsd command is not found, you need to enable NFS by updating the kernel and adding the required lines to /etc/wsl.conf. After making these changes, you should be able to use NFS successfully.
sudo service nfs-kernel-server status
Get the PVs:
Checking the local path to your NFS shared directory:
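The exact paths depend on your setup, but the checks boil down to something like:
kubectl get pv                # dynamically provisioned volumes from the NFS provisioner
showmount -e localhost        # confirm the NFS export is visible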
I believe it's time to give my hardware a well-deserved break.
Conclusion¶
Setting up a Kubernetes cluster for production is indeed a complex and time-consuming process. It involves various tasks such as renewing SSL certificates, updating the etcd database, upgrading binaries, adding new nodes to the cluster, and managing operating system patching and maintenance. These tasks can be daunting and require significant expertise to handle effectively.
However, there is a solution to this challenge: managed Kubernetes services provided by major cloud providers. These services, such as AKS, GKE, and EKS, offer a simplified way to create, manage, and scale Kubernetes clusters in the cloud. They handle the maintenance tasks, including renewing SSL certificates, updating the etcd database, and managing operating system patches, allowing you to focus on your applications rather than infrastructure management.
On the other hand, if you are looking to gain a deeper understanding of Kubernetes architecture and components, Kubernetes The Hard Way is an invaluable resource. It guides you through the manual setup of a Kubernetes cluster, providing hands-on experience with each component and how they interact.
In conclusion, whether you choose to use a managed Kubernetes service or set up Kubernetes The Hard Way, both options offer valuable learning experiences and can help you gain a better understanding of Kubernetes.
For a wealth of knowledge, check our blog archives. Happy reading!
Did you like Kuberada?

Canary Deployments Made Easy: A CI/CD Journey with GitHub Actions and Argo CD Rollouts¶
Tagged with:
In today's fast-paced development landscape, delivering software updates quickly and reliably is crucial for success. Traditional deployment methods can be slow, error-prone, and require significant manual intervention. This article explores how a robust CI/CD pipeline built with Argo Rollouts and GitHub Actions, coupled with the power of Kubernetes for container orchestration (AKS), can streamline your DevOps workflow and revolutionize your deployment process.
While we'll focus on Python specifics, the core concepts apply to various programming languages and microservice architectures. The important thing is to understand the core philosophy of designing a CI/CD workflow and adapting the pipeline to your specific needs, not the tools or language-specific implementations. For instance, you would apply the same logic with different tools: Jest/Mocha for integration tests in Node.js, xUnit/SpecFlow for .NET, or adding a compile step before testing and packaging for compiled languages like Java, Go, or C++.
Our approach offers various benefits, including:
Enhanced Efficiency: Automated repetitive tasks within the CI/CD pipeline free up valuable time and reduce the risk of human error.
Improved Agility: Canary deployments with Argo Rollouts enable faster responses to changing market demands, quicker iteration, safer rollouts to production, and easy rollbacks.
Increased Reliability: Consistent and automated deployments lead to a more stable and predictable software delivery process.
Let's dive into the practical implementation of this approach, showcasing the power of GitHub Actions, Kubernetes, Argo CD, and Argo Rollouts for a smoother, more efficient, and safer DevOps experience.
Project Overview¶
Let's design a scenario together.
We have a basic Python Flask application on a Kubernetes cluster (AKS). Users come to our application to get their daily cat dose. While analyzing user feedback gathered in user testing, we discovered that eight out of ten users found static cat images boring and said they would prefer dynamic images or GIFs. As a result, the product owner created a Jira ticket, and a Python developer on the team was assigned to implement the feature of displaying cat images dynamically.
While real-life issues may not always be as simple as the one we've highlighted, we aim to practice the philosophy behind designing an automated and secure pipeline, regardless of the intricacy of the issues, languages, or tools.
First Step: From Local Development¶
For a better user experience, the Python developer implements a new feature to display cat images dynamically and opens a pull request. As a DevOps engineer, our part is to design an automated and secure pull request pipeline (actually, we will create three pipelines: one for open/updated, one for closed pull requests, and the last one will trigger automatically when we merge to main) and bump up the new version successfully, adopting GitOps principles.
Prerequisites¶
Development Cluster: A cluster for PR reviews.
Production Cluster: Prod Cluster runs a stable version of our application.
Image Registry: To store our images privately.
We have deployed production and test clusters on AKS (using Terraform for the prod cluster), and installed Argo CD, Argo Rollouts, NGINX Ingress, and Cert-Manager to encrypt the traffic to our application (adding an A record for frontend.devtechops.dev pointing to the Azure Load Balancer).
We have also set some repository secrets, like the username, PAT, SSH private key, kubeconfig, and Cat API key, along with custom scripts and Taskfiles.
Tools Preference¶
Let's take a look at some tools we adopted and the alternatives.
Pipelines: GitHub Actions. However, you can migrate to tools like Dagger, Azure DevOps, and GitLab CI.
Temporary Cluster: For this blog, we used another AKS cluster. Some alternatives are vCluster and KubeVirt, which are ideal for temporary environments like PR reviews.
GitOps: Argo CD, alternative options include Flux.
Production Cluster: AKS. You can also go with EKS or GKE.
Image Registry: GitHub Container Registry; alternatives include Docker Hub, GitLab, etc.
Ready? Let's get to action!
A Rough Look at the Three Pipelines¶
We should have a system that we can trust and monitor. To achieve this, we have crafted 3 pipelines. Let's take a look at them closely:
pr-open.yaml: Triggers whenever someone creates or updates a pull request.
pr-closed.yaml: Triggers when we close/merge the pull request. It destroys the temporary cluster.
main.yaml: Runs when we close/merge the pull request, updating the release manifest file with the new image tag and promoting the new version of our app to production with a canary deployment strategy.
For each pipeline, Slack notifies us about the result.
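As a rough sketch, the trigger blocks of the three workflows might look like this (each fragment lives in its own file under .github/workflows/; the exact trigger types may differ from the real pipelines):
# pr-open.yaml
on:
  pull_request:
    branches: [main]
    types: [opened, synchronize, reopened]

# pr-closed.yaml
on:
  pull_request:
    branches: [main]
    types: [closed]

# main.yaml
on:
  push:
    branches: [main]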
Pull Request Pipeline¶
In the PR open pipeline, we perform several checks related to the functionality of the code:
Ensure the implemented feature works as intended.
Check for potential errors.
Ensure code coverage is at least 80% (can be adjusted based on organization standards).
Verify that sufficient tests cover the new functionality, contributing to code coverage.
Confirm that the new functionality integrates well with the rest of the application.
If the new feature passes these tests, we scan our infrastructure, images, and Kubernetes manifest files. We containerize our application, authenticate to the release repository, and update manifest files with the fresh image tag.
Finally, we create a temporary Kubernetes preview environment with AKS for the pull request (and each future pull request) for review.
Let's take a closer look at the steps.
PR: Feature/FR-1206-implement-dynamic-images Branch Created¶
The initial step in a CI/CD pipeline begins with the developer. After working on the local copy of the code and ensuring satisfaction with the new changes, the developer runs pre-hooks such as tests and linting before committing to the feature branch to catch potential issues early. Once content with the changes, the developer, not Devin (at least for now), spins up a lightweight local cluster using k3d to verify that the application works as expected.
Afterward, the developer destroys the local cluster and crafts a pull request.
Crafting a Pull Request¶
The developer begins by describing the new feature and giving the feature branch a proper name, incorporating the issue ID (Jira ticket ID) into the branch name for easier reference. They add a commit message, push the new changes to the version control system (GitHub, in this case), and open a pull request.
Source Stage: Triggering the Pipeline¶
The pr-open pipeline starts running remotely on GitHub Actions, checking out the source code from the repository, and the review process begins. The type of protection mechanisms we have for the mainline is also worth mentioning.
Branch Protection¶
Simple branch protection rules have been implemented on the main branch. A branch protection rule can be set in Settings > Branches > Add Rule. One such rule is "Require a pull request before merging," meaning that direct pushes to the main branch are not allowed.
Build Stage: Code (Part I)¶
During the build stage, the focus is simply on testing and detecting any vulnerabilities in the source code. For our Python application, we need to set up Python, install dependencies, and run linters. It is preferred to place SAST analysis early in the pipeline, after unit testing. Once the code is functionally correct, the focus shifts to identifying security vulnerabilities in the codebase and third-party libraries and dependencies using an SCA tool.
Dependencies vs. Source Code¶
Source Code:
The foundation of your application is the human-readable code you write (e.g., in Python) that defines the application's logic and functionality.
from flask import Flask
app = Flask(__name__)
@app.route('/')
def hello_world():
return "I'm a Python Flask application @kuberada"
if __name__ == '__main__':
app.run(debug=True)
Dependencies:
Reusable pieces of code written by others that your application relies on to function. They can be:
Libraries: Collections of functions and classes that provide specific functionalities (e.g., Flask for web development in Python).
Frameworks: Larger collections of code that offer a structure and pattern for building applications (e.g., Django, a Python web framework).
In the example, Flask is a dependency. Your application imports it to use its functionalities.
Key Differences:
Origin: You write the source code, while others write dependencies.
Focus: Source code defines your application's unique logic, while dependencies provide pre-built functionality.
Control: You have complete control over your source code but rely on others to maintain and update dependencies.
Linting¶
Linting and SCA complement each other, providing a more comprehensive security posture. Linting primarily deals with code quality, style, and adherence to predefined coding standards. It identifies potential errors, stylistic inconsistencies, and areas for improvement within your codebase. Linters suggest style improvements based on the PEP 8 style guide (for Python). Some tools include Ruff, flake8, Autoflake, Pylint, pycodestyle, and pyflakes.
For testing frameworks, options include Python's built-in unittest framework, pytest, and nose. Here, we'll use Ruff for its speed and run unit tests (pytest) with coverage.
- name: linting
run: |
ruff check . -v --fix --output-format=github
ruff format . -v
Code Coverage¶
After running unit tests, we generate a code coverage report (coverage html), which measures how much of the Python code is executed by our tests. This helps ensure that our tests cover a significant portion of our code, reducing the risk of regressions and hidden bugs.
- name: Test with pytest
run: |
pytest test_kuberada.py -v --cov --cov-report=html
- name: Upload pytest test results
uses: actions/upload-artifact@v4
with:
name: index.html
path: ./python-flask/htmlcov
Quality Checks¶
We run quality checks (flake8, pylint, etc.) to further ensure code quality and adherence to coding standards.
SAST (Static Application Security Testing)¶
SAST is placed early in the SDLC before compilation. It analyzes the source code itself, identifying security vulnerabilities in code patterns, coding practices, and potential function misuses. SAST tools focus on analyzing source code in compiled languages, but they can also analyze Python code for patterns that might indicate potential security vulnerabilities.
SCA (Software Composition Analysis)¶
SCA keeps track of every open-source component within an applicationâs codebase. It scans project dependencies (libraries, frameworks) for known vulnerabilities. SCA can run before the build to identify potential issues early on or during the build process to ensure secure dependencies. Safety (rebranded from PyUp) is a tool weâll use for SCA in this case.
- name: SCA analysis
uses: pyupio/safety@2.3.4
with:
api-key: ${{secrets.SAFETY_API_KEY}}
Build Stage: Container (Part II)¶
After ensuring the security of our code, the next step is to scan our Dockerfile and build our image.
IaC Scan¶
Dockerfiles can inherit vulnerabilities from base images or dependencies. Infrastructure as Code (IaC) scanners identify these vulnerabilities and alert us to potential security risks within our containerized application. Some popular scanners include Snyk (free/paid), Aqua Trivy (free/paid), Anchore Engine (paid), Docker Hub Security Scanning (free, basic), Docker Scout, and Clair.
- name: Run Trivy vulnerability scanner
uses: aquasecurity/trivy-action@master
with:
image-ref: ${{ env.repo }}:canary-pr-${{ env.pr_num }}
format: 'github'
exit-code: '1'
ignore-unfixed: true
vuln-type: 'os,library'
severity: 'CRITICAL,HIGH'
- name: Upload trivy report as a Github artifact
uses: actions/upload-artifact@v4
with:
name: trivy-sbom-report
path: '${{ github.workspace }}/dependency-results.sbom.json'
After receiving the results, we fix any vulnerabilities and push the image to our private GitHub Container Registry.
Integration Tests: Check Functionality¶
Before deploying our application to the development cluster, we must thoroughly test its functionality. In the case of a microservices e-commerce app, this would involve testing dependencies such as payment, checkout, and database functionality. We would run our integration tests in the development Kubernetes cluster based on the application and organizational needs.
For our small application, we added tests to simulate a user clicking the button, and checked whether the application successfully calls the cat API, whether the image renders correctly, and whether multiple clicks render new images.
Feel free to add more tests as needed for your application.
Pull Request Pipeline Further Considerations¶
We can add more tests on the running cluster, such as:
Simulating real-world user workflows with E2E tests using tools like Cypress or Selenium.
Load/performance tests to check how the new functionality affects resource consumption (CPU, memory, network) on the cluster, using tools like JMeter, K6, or Locust or by simulating real-world traffic.
Performing Chaos Engineering tests to introduce failures such as node restarts or network disruptions and monitoring how our application and cluster react.
Integrate tools like Burp Suite or OWASP ZAP for Dynamic Application Security Testing (DAST) if the application is complex with many dependencies and has various functionalities. However, DAST might be considered overkill for a small application like ours with a limited attack surface.
Deploying the New Version to the Development Cluster¶
Once security scans, unit, and integration tests provide a good level of confidence in the functionality and stability of the new code, we deploy our app to a development cluster that lives as long as the pull request (PR) is open. This environment is ephemeral and only available for testing and reviewing.
At this step, you can run whichever tests you want to run.
The only test we will perform at this step is scanning our running cluster using Kubescan and uploading the results as HTML artifacts.
Now, the team members can review the application.
Once we merge the application, the pr-close.yaml and main.yaml pipelines trigger. pr-close.yaml deletes the development cluster and notifies the admin that the PR is merged. On the other hand, since we push new code to the mainline, main.yaml also triggers and promotes the new version to production.
Let's take a closer look at the main workflow.
Main Pipeline: Continuous Deployment¶
Once we merge the pull request, the main pipeline bumps up the version and deploys the new version to production.
In this step, we containerize our application, push it to the registry, and commit the updated tags to our release manifests. We don't interact with the production cluster directly; instead, Argo CD is in control and ensures that the desired state (our release repo with newly updated tags) matches the application running in production.
Instead of directly promoting to prod, we use a canary deployment strategy to release code safely. First, we send 20% of traffic to a small canary group and then pause the process to monitor the group. If everything works well, we manually promote the new version to prod.
Finally, the traffic increases gradually until no pod is left from the earlier version of the application, which means that the weight reaches 100% and the rollout is completed successfully.
If problems arise, Argo also allows easy rollback to a previous version in the history.
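A minimal sketch of what such a canary strategy can look like in an Argo Rollouts manifest (the app name, image, and port are illustrative, not the exact manifests from this project):
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: cat-app
spec:
  replicas: 5
  selector:
    matchLabels:
      app: cat-app
  template:
    metadata:
      labels:
        app: cat-app
    spec:
      containers:
      - name: cat-app
        image: ghcr.io/example/cat-app:v2   # the freshly built tag
        ports:
        - containerPort: 5000
  strategy:
    canary:
      steps:
      - setWeight: 20   # send 20% of traffic to the canary
      - pause: {}       # wait here until we promote manually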
At this stage, you can run some more tests on the pod, such as port-forwarding to check if the app is functional and checking the logs.
Since everything is functional as we expect, we can promote the app to production manually.
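With the Argo Rollouts kubectl plugin, watching and promoting the rollout looks roughly like this (rollout name as in the sketch above):
kubectl argo rollouts get rollout cat-app --watch   # observe canary weights and pod status
kubectl argo rollouts promote cat-app               # shift the remaining traffic to the new version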
Best Practices for Secure CI/CD Pipelines: A GitOps Approach¶
Here are some best practices we should adopt while designing the CI/CD pipeline:
Clear PR Descriptions: Always explain changes in PR descriptions, including details on implemented security measures.
Local Development Checks: Encourage developers to run linters, unit tests, and security scans on their local machines throughout development.
Coding Practices & Static Analysis: Adopt good coding practices and secure coding principles. Integrate static analysis tools to improve code quality and security posture and identify vulnerabilities early.
Security is Ongoing: Security is an essential part of the entire development lifecycle, not just an afterthought. Even simple applications deserve a high level of security attention.
GitOps Principles: Infrastructure and application configurations are stored as code within our Git repository, enabling declarative (not manual) deployments and automatic rollbacks with Argo.
Future Optimizations¶
Our pipeline paves the way for a more agile and efficient software delivery process, but additional avenues exist for continuous improvement.
Enhanced Testing: Depending on your project needs, consider adding more robust tests like Regression, Performance, API, and E2E testing.
Monitoring & Logging: Implement a comprehensive monitoring and logging stack for proactive issue identification and alerting using tools like ELK, Prometheus, Grafana, DataDog, New Relic, and Robusta.
Cost Optimization: Explore cost optimization strategies to ensure efficient resource utilization using tools like cluster autoscaler, Azure Container Insights, HPA, multi-tenancy options, kubecost, analyzing/monitoring important pod, control plane, node, and application metrics.
Code Caching & Dependency Pre-installation: Utilize code caching and dependency pre-installation techniques to speed up the build process.
Parallel Builds: For larger projects, consider implementing parallel builds to improve pipeline execution.
Conclusion¶
From Local Feature to Production (Securely), hereâs a breakdown of what we accomplished with a secure GitOps-based CI/CD pipeline for our application:
We implemented a fully automated CI/CD pipeline leveraging GitOps principles, prioritizing security throughout the process. Our only manual step was promoting the application to prod during the canary deployment. All infrastructure and application configurations are stored as code within our Git repository.
As DevOps continues to evolve, leveraging these tools positions you for success in the ever-changing development landscape. Take control of your DevOps workflow and feel the power of automation. Adopting the philosophy, not the tools, is important because they come and go.
Happy coding!
Take a look at the brief overview of the project:
References¶
Here are the links to the tools mentioned in this article:
GitHub Actions: GitHubâs CI/CD platform for automating workflows.
Argo CD: Declarative, GitOps continuous delivery tool for Kubernetes.
Terraform: Infrastructure as Code (IaC) tool for building, changing, and versioning infrastructure safely and efficiently.
AKS (Azure Kubernetes Service): Managed Kubernetes service provided by Microsoft Azure.
Docker: Platform for developing, shipping, and running applications in containers.
Flask: Lightweight WSGI web application framework in Python.
NGINX: High-performance web server, reverse proxy, and load balancer.
Cert Manager: Automated certificate management in Kubernetes.
Kubernetes: Open-source container orchestration platform for automating deployment, scaling, and management of containerized applications.
Python: High-level programming language used for web development, data science, and more.
Cypress: End-to-end testing framework for web applications.
Selenium: Portable framework for testing web applications.
JMeter: Open-source tool for load testing and performance measurement.
K6: Open-source load testing tool and SaaS for engineering teams.
Locust: Open-source load testing tool.
These links should provide more information about each tool and how to use them effectively in your CI/CD pipeline.
For a wealth of knowledge, check our blog archives. Happy reading!
Did you like Kuberada?

Secure Your AKS Application with HTTPS: Let's Encrypt Your Way!¶
Tagged with:
Securing the CI/CD pipeline is crucial, but what about the deployed application? In this guide, I will focus on securing a Python application I developed for demo purposes, which serves up those adorable dynamic cat images running on AKS. (More will come on what the whole CI/CD pipeline and automated deployment look like in another article.)
We will end up with a secure endpoint for our application:
After the final release step authenticates with the private repository and updates Kubernetes manifests with a new tag, Argo detects these changes and ensures your AKS cluster reflects the latest application state.
We'll leverage the following technologies:
NGINX Ingress Controller: Directs traffic to your application using reverse proxy.
Let's Encrypt (CA Cluster Issuer): Provides free, trusted SSL certificates for your domain.
Cert-Manager: Automates certificate issuance and renewal, securing your HTTPS setup.
Domain: Your unique web address for secure access.
Terraform: Automates infrastructure creation, including resources for HTTPS.
Prerequisites
An existing AKS cluster
kubectl configured to access your AKS cluster
A domain name
Before we start:
You can find all the code used in the following gist:
Take a look at the manifest and Terraform files I used in this blog:
Now, let's enable HTTPS for secure access!
Step 1 - Merge the AKS Cluster Credentials¶
Configure your local environment to interact with your AKS cluster; this adds new credentials to your kubeconfig file.
az aks get-credentials --resource-group $AZ_RG --name $CLUSTER
Step 2 - Deploy the NGINX Ingress Controller¶
Our Python application needs a secure entry point for incoming traffic because users expect a secure experience. Kubernetes Ingress can help us define how external HTTP and HTTPS requests reach our application within the cluster. In this case, we need a virtual name for our application, which is frontend.devtechops.dev.
The NGINX Ingress Controller utilizes a reverse proxy to manage incoming requests and route them to the appropriate service based on defined rules within the Ingress configuration.
...
helm upgrade --install ingress-nginx ingress-nginx \
--repo https://kubernetes.github.io/ingress-nginx \
--namespace nginx-ingress \
...
I used Helm in this demo, but you can also use manifest files.
https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.10.0/deploy/static/provider/cloud/deploy.yaml
Pods will come up in the nginx-ingress namespace:
k get po -n nginx-ingress
Now that we have our NGINX Ingress Controller set up, let's ensure secure communication for our application!
Step 3 - Deploy cert-manager¶
So what is cert-manager? cert-manager is a tool for managing TLS certificates in Kubernetes clusters. It automates the management and issuance of TLS certificates from various issuers. There are two key layers when it comes to cert-manager: Issuers and Certificates.
Issuers: These resources define how certificates will be obtained. Common issuers include Let's Encrypt or integrations with internal Certificate Authorities (CAs) within your organization.
Certificates: Our application will use these actual certificates. They specify the domain name(s) our application uses and reference the chosen Issuer for obtaining the certificate.
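The chart itself can be installed with Helm; a minimal sketch (installing into the same namespace as the ingress controller, as noted below):
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm upgrade --install cert-manager jetstack/cert-manager \
  --namespace nginx-ingress \
  --set installCRDs=true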
You should see a successful output like this:
Now we have installed two charts, cert-manager and the NGINX ingress controller, in the same namespace:
Step 4 - Create Let's Encrypt Issuer¶
We will define a ClusterIssuer resource to tell cert-manager which Certificate Authority (CA) to use when issuing TLS certificates for our cluster. In this case, the CA is Let's Encrypt, a popular free CA that provides TLS certificates.
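A sketch of what cluster-issuer.yaml can contain; ${EMAIL} is deliberately left as a placeholder for envsubst to fill in below:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: ${EMAIL}
    privateKeySecretRef:
      name: letsencrypt-prod   # secret that stores the ACME account key
    solvers:
    - http01:
        ingress:
          class: nginx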
envsubst < cluster-issuer.yaml | kubectl apply -f -
Step 5 - Configure DNS Records¶
Get the external IP of the nginx controller and create an A record for your domain.
kubectl get svc -n $INGRESS_NAMESPACE
dig $DOMAIN_NAME ns +trace +nodnssec
Step 6 - Deploy Your Application and Set Up Routing¶
It's time to deploy our sample application. We need to tell Ingress to route traffic to our Python application.
Make sure to match the service name of your application in the ingress manifest; that's how Ingress finds out which application to route traffic to.
Let's deploy the application and the ingress resource in the nginx controller's namespace.
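A sketch of the ingress resource; frontend-svc is illustrative and must match your application's Service name:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod   # ties the certificate to our issuer
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - frontend.devtechops.dev
    secretName: frontend-tls   # cert-manager stores the issued certificate here
  rules:
  - host: frontend.devtechops.dev
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend-svc
            port:
              number: 80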
Here comes a tricky part.
When we listed the ingresses, something caught my eye: our ingress class was public, but the cm-acme-http-solver-rcj45 ingress wasn't using it, so the ACME challenge couldn't be reached. Patch it:
kubectl patch -n $INGRESS_NAMESPACE ingress <cm-acme-http-solver> --type merge -p '{"spec": {"ingressClassName": "public"}}'
Now, we're good to go. One quick validation before destroying the resources:
Conclusion¶
In this blog, we've successfully fortified a Python application deployed on AKS. Now, our users can interact with it securely over HTTPS. We leveraged NGINX ingress, Let's Encrypt with Cert-Manager, a domain, and Terraform to automate HTTPS access and certificate management for our secure AKS application.
Here's a Recap of the Achieved Security Enhancements:
NGINX Ingress Controller: Secured the entry point for user traffic.
Let's Encrypt (CA Cluster Issuer): Provided free, trusted SSL certificates.
Cert-Manager: Automated certificate issuance and renewal.
Domain: Established a unique, secure access point.
By employing these tools, we've ensured that communication between users and our application is encrypted, safeguarding data transfer.
Further Bolstering Your Security Posture:
Network Policies: Restrict communication between pods to enforce a secure architecture. Imagine multiple services: Network Policies can prevent the web tier from directly accessing the database tier, forcing it to communicate through the backend tier.
Secrets Management: Integrate Azure Key Vault to securely store sensitive application data like API keys and database credentials, reducing the attack surface by eliminating secrets from your code.
Security Scanning: Once we push the image to the registry, the security scans stop. They shouldn't. New vulnerabilities are discovered every day, so continuous security scanning is crucial. We'll explore this further in our upcoming article, "Designing a Secure Automated CI/CD Pipeline for Python Applications."
Happy coding!
Find all the resources used in this blog here:¶
All the code used:
Manifest and Terraform files:
For a wealth of knowledge, check our blog archives. Happy reading!
Did you like Kuberada?

How to Set Up and Connect ArgoCD with AKS in 5 Minutes¶
Tagged with:
ArgoCD is a powerful tool that utilizes GitOps principles to automate and streamline your Kubernetes application deployments. It continuously monitors your Git repository, acting as the single source of truth for your desired application state. When it detects a change, the ArgoCD controller automatically reconciles the running applications with the configurations in your Git repository, ensuring a consistent and version-controlled deployment process.
In this guide, I'll walk you through setting up ArgoCD on your AKS cluster to manage deployments of Python applications stored in a private registry. While the demo uses a Python Flask deployment and service file for illustrative purposes (ingress, external DNS, and cert-manager are on the way!), you can easily substitute it with the NGINX application manifests in our repo. The deployment YAML for NGINX is provided in the Kuberada labs repo for reference; you can find the NGINX manifest files there.
All the code used:
Manifest and Terraform files:
Let's dive in!
Prerequisites:
An existing Azure subscription with AKS cluster creation permissions
Azure CLI installed and configured with your subscription
kubectl configured to access your AKS cluster
Basic understanding of Kubernetes concepts
Creating the AKS Cluster:
While this guide focuses on deploying ArgoCD on an existing AKS cluster, you can spin up a Minikube or k3d cluster too. If you don't have an AKS cluster, you can easily create one using the Azure CLI or the Terraform code we store at https://github.com/colossus06/Kuberada-Blog-Labs/tree/main/argocd; refer to Microsoft's documentation for detailed instructions.
Installing ArgoCD on AKS¶
Download the ArgoCD Manifest¶
Start by downloading the ArgoCD manifest file from the ArgoCD repository.
wget https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
Configure ArgoCD¶
Rename the install.yaml file to argo-cm-nodeport.yaml and configure ArgoCD for your environment:
Disable TLS and configure an insecure connection:
data:
  server.insecure: "true"
Change the argocd-server service type to NodePort:
spec:
  type: NodePort
Connect SCM and ArgoCD¶
Since we are using a private repo, we need to create a secret to authenticate with the private Git repository. If you're using the sample NGINX manifest file, you can safely skip this step.
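Argo CD reads repository credentials from a labeled secret; here is a sketch using SSH (the URL and key are placeholders):
apiVersion: v1
kind: Secret
metadata:
  name: private-repo
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository   # marks this secret as a repo credential
stringData:
  type: git
  url: git@github.com:<your-username>/<your-private-repo>.git
  sshPrivateKey: |
    -----BEGIN OPENSSH PRIVATE KEY-----
    <your-key-here>
    -----END OPENSSH PRIVATE KEY-----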
Deploy ArgoCD¶
Apply the modified argo-cm-nodeport.yaml file and the secret to deploy ArgoCD:
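Something along these lines (the secret file name is whatever you saved the repository secret as):
kubectl create namespace argocd
kubectl apply -n argocd -f argo-cm-nodeport.yaml
kubectl apply -n argocd -f repo-secret.yaml   # only needed for private repositories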
Access the ArgoCD UI¶
Open the ArgoCD UI using port forwarding:
kubectl port-forward svc/argocd-server -n argocd 8080:443
Access the UI at http://localhost:8080 and log in with the default username admin and the password you get by following the gist.
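For reference, the initial admin password is typically retrieved with:
kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath="{.data.password}" | base64 -d; echo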
Validation¶
As you can see, the repo and application are already set up and the status is healthy. We didn't interact with the cluster directly; instead, we delegated our GitHub repo as the single source of truth.
Conclusion¶
Congratulations!
You've successfully set up ArgoCD on your AKS cluster. Now you can deploy your applications with ease using GitOps principles.
See all the code used in this demo in this gist here.
Access the manifests on our kuberada repo here
Find all the resources used in this blog here:¶
All the code used:
Manifest and Terraform files:
For a wealth of knowledge, check our blog archives. Happy reading!
Did you like Kuberada?

Securing Docker Images with Docker Scout: A Practical Guide¶
Tagged with:
Ensuring the security of Docker images is vital for maintaining the overall security posture of your organization. Docker Scout, a security tool developed by Docker, excels in scanning Docker images for potential vulnerabilities and provides valuable recommendations to enhance image security. In this blog post, we will guide you through the process of leveraging Docker Scout to scan and secure Docker images. Join me as we learn to scan, identify, and address vulnerabilities in container images, effectively minimizing risks in production environments while aligning with cloud-native security practices.
Introducing Vulnerable Python Image¶
If you've been following our articles on Kuberada, you're likely aware that we've built three custom images for our voting app, stored in Docker Hub and GitLab repositories. With a security-focused mindset, our goal is to assess whether these images are affected by critical CVE vulnerabilities.
Before delving into the assessment, let's briefly understand what CVE entails.
What is CVE?
CVE (Common Vulnerabilities and Exposures) is a standardized system designed for uniquely identifying and naming vulnerabilities in software and hardware systems. Maintained by the MITRE Corporation, CVE provides a common language for organizations and security practitioners to discuss, share, and exchange information about security vulnerabilities.
Analyzing Python Frontend Application for Vulnerabilities¶
To begin, open Docker Desktop and navigate to the images section. Docker Scout is already enabled on Windows Docker Desktop.
As a quick reminder, we created our images using the following commands:
docker buildx create --name mybuilder --use --bootstrap
DOCKER_IMAGE_NAME=registry.gitlab.com/colossus06/cka-ckad-study-group-2024/frontend_python
docker buildx build --load --platform linux/amd64 -t $DOCKER_IMAGE_NAME:fixed .
Let's inspect the packages added to each layer of our image and identify any software package affected by critical vulnerabilities.
While there are no critical vulnerabilities, we notice two high vulnerabilities stemming from the same CVE, CVE-2022-40897. Docker Scout recommends upgrading the setuptools version to 65.5.1 to address this issue. Let's proceed with the upgrade.
First, detect the current vulnerable package version:
pip list | grep setup
Now, let's upgrade the package:
pip install setuptools --upgrade
pip freeze > requirements.txt
echo "setuptools==$(pip show setuptools | grep Version | cut -d ' ' -f 2)" >> requirements.txt
Rebuild the image:
docker buildx build --load --platform linux/amd64 -t $DOCKER_IMAGE_NAME:fixed .
Verifying the Fix for CVE-2022-40897¶
Inspect the layers of the latest image:
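One way to do this from the CLI (alongside the Docker Desktop view) is:
docker scout cves $DOCKER_IMAGE_NAME:fixed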
Getting Recommendations to Harden the Base Image¶
While we've addressed specific vulnerabilities, there are still medium vulnerabilities originating from the base image. Docker Scout's recommendation engine can help rectify this.
Navigate to recommended fixes in the top right corner.
Applying the Recommendations¶
To eliminate 26 vulnerabilities and save 28 MB of space, we need to change the base image. Replace FROM python:3.9-slim AS base with FROM python:alpine in the Dockerfile and adjust the package manager.
FROM python:alpine AS base
RUN apk add --update curl && \
rm -rf /var/cache/apk/*
Rebuild the image:
docker buildx build --load --platform linux/amd64 -t $DOCKER_IMAGE_NAME:base-hardened .
Finally, inspect the latest image using Docker Scout:
Tagging and Pushing Images to Dockerhub¶
Tag both the initial image (built before fixing the critical vulnerabilities) and the latest build, then push them:
DOCKER_NEW_IMAGE_NAME=elkakimmie/docker-scout
docker tag $DOCKER_IMAGE_NAME:latest $DOCKER_NEW_IMAGE_NAME:vuln
docker tag $DOCKER_IMAGE_NAME:base-hardened $DOCKER_NEW_IMAGE_NAME:base-hardened
docker push $DOCKER_NEW_IMAGE_NAME:vuln
docker push $DOCKER_NEW_IMAGE_NAME:base-hardened
Now, there are two images available:
elkakimmie/docker-scout:vuln: The initial image with vulnerabilities.
elkakimmie/docker-scout:base-hardened: The latest image after fixing CVE-2022-40897 and changing the base image to Alpine.
Activating Image Analysis on Scout¶
Scout image analysis is already available by default for Docker Hub repos. You can find it in your repository settings.
Alternatively, activate it on Scout repository settings. As of Feb 1, 2024, you can activate analysis for three repositories under the Docker Scout Free plan.
Once activated, Scout automatically analyzes images upon push, displaying SCOUT ANALYSIS ACTIVE in your repository's general tags section.
Comparing Images¶
You can compare images using Scout GUI or the docker-scout CLI.
Comparison on Scout
Navigate to Scout, locate your repository in the images section, and compare the two images.
Verify the removal of 29 low vulnerabilities.
Comparison using docker-scout CLI
Compare two images and display differences using the following commands. Consider piping the output to less.
docker login
docker scout config organization <dockerhub-username>
docker scout compare $DOCKER_NEW_IMAGE_NAME:base-hardened --to $DOCKER_NEW_IMAGE_NAME:vuln --ignore-unchanged
Analyzing Images Locally
Inspect images locally using the following command:
docker scout quickview $DOCKER_NEW_IMAGE_NAME:base-hardened
BONUS-1: Upgrading Docker Scout on Windows WSL
Execute the following script in your terminal to upgrade Docker Scout on Windows WSL:
curl -sSfL https://raw.githubusercontent.com/docker/scout-cli/main/install.sh | sh -s --
docker scout version
BONUS-2: Fixing error storing credentials
If you encounter the following error, try removing ~/.docker/config.json:
Error saving credentials: error storing credentials - err: fork/exec /usr/bin/docker-credential-desktop.exe: exec format error, out: ``
Conclusion¶
To recap, securing Docker images is a critical aspect of maintaining a robust and resilient production environment. Docker Scout proves to be a valuable tool in identifying and addressing vulnerabilities effectively. By following the practical guide outlined in this article, you have gained hands-on experience in securing Docker images, making the production workloads more resilient to potential threats.
References¶
Sysdig and Docker Forge Alliance to Accelerate Cloud-Native Security: Docker Scout
MITRE Corporation - Common Vulnerabilities and Exposures (CVE)
Happy upskilling!
For a wealth of knowledge, check our blog archives. Happy reading!
Did you like Kuberada?

Unlocking Istio's Power: A Step-by-Step Guide to Seamless Microservices Management¶
Tagged with:
In the dynamic world of microservices, orchestrating, securing, and monitoring services can be intricate. Istio, a robust service mesh, steps in as the solution, offering a holistic approach to traffic management, security, and observability. Let's dive into the intricacies of Istio and explore two different installation methods: using istioctl and Helm.
In This Article:¶
Install Istio Components
Using istioctl
Using Helm
Enable Automatic Istio Sidecar Injection
Validate Istio-Proxy Sidecar Injection
Explore Pod Communication
Pod Communication Using Service Mesh
Monitor Service Mesh with Prometheus and Grafana
Option 1: Installing Istio with istioctl¶
Begin by downloading istioctl on Ubuntu:
curl -L https://istio.io/downloadIstio | sh -
Add Istioctl to your path:
export PATH=$HOME/.istioctl/bin:$PATH
Create a cluster with k3d:
k3d cluster create istio
Perform the Istio pre-installation check:
istioctl x precheck
Install Istio with the default profile:
istioctl install
Verify the installation:
istioctl verify-install
Uninstall Istio:
istioctl uninstall --purge
Option 2: Installing Istio with Helm¶
Step 1: Install Base Istio Components¶
Add cluster-wide base components using Helm:
helm repo add istio https://istio-release.storage.googleapis.com/charts
helm repo update
helm search repo istio
Customize default values:
helm show values istio/base
helm show values istio/istiod
Install CRDs and Istio control plane components:
k create ns istio-system
helm install istio-base istio/base -n istio-system
helm install istiod istio/istiod -n istio-system
Check installed Helm charts:
k get po -n istio-system
helm ls -n istio-system
Step 2: Enable Automatic Istio Sidecar Injection¶
Automate sidecar injection by labeling the namespace:
k label ns default istio-injection=enabled
Deploy the voting app resources with Helm:
helm repo add voting-app-istio https://gitlab.com/api/v4/projects/54113378/packages/helm/stable
helm repo update
helm search repo voting-app-istio
helm upgrade --install voting-app --set image.tag=latest voting-app-istio/charts
Step 3: Validate Istio-Proxy Sidecar Injection¶
Describe the voting-app pod to validate Istio-proxy sidecar injection:
k describe po worker-app | less
Step 4: Exploring Current Pod Communication¶
Identify NodePort and ClusterIP services:
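For example:
k get svc -o wide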
Shell into a pod and access a NodePort service:
k exec worker-deploy-7c4c4bc5bc-w4szc -it -- sh
apt update
apt install -y curl;curl voting-service
Use port-forwarding for communication:
k port-forward svc/voting-service 30004:80
Explore pod communication over kube-proxy.
Step 5: Communicate Using Service Mesh¶
Delete kube-proxy and check if pods can still communicate over the service mesh:
k get ds -A
k delete ds kube-proxy -n kube-system
k exec worker-deploy-7c4c4bc5bc-w4szc -it -- curl voting-service
Monitoring Service Mesh Using Prometheus and Grafana¶
Tip
You can find the grafana-value.yaml file in the blog's repository. Clone the repo and change directory into get-started-with-istio.
Find the labs here.
Now that you know where to find the custom Grafana YAML file, we're ready to create a monitoring stack for observing pod traffic.
Adding and updating the repos:
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo add grafana https://grafana.github.io/helm-charts
Installing Prometheus and Grafana:
helm install prometheus prometheus-community/prometheus -n monitoring --create-namespace
helm install grafana grafana/grafana -n monitoring -f "grafana-value.yaml"
Importing Istio Workload Dashboard:
Log in to Grafana, append /dashboard/import to the URL, type 7630 to import the Istio workload dashboard, and select Prometheus as the data source.
Connect to the worker service and communicate with the voting-service:
k exec worker-deploy-7c4c4bc5bc-w4szc -it -- curl voting-service
k exec worker-deploy-7c4c4bc5bc-w4szc -it -- curl result-service
Select both source and destination for the reporter and examine the inbound traffic:
Displaying the outgoing traffic from worker app:
Intercepted traffic over the Istio proxy, displaying outgoing requests from worker-deploy to voting-service and result-service.
Displaying the incoming requests in result and voting apps:
Incoming requests to result app by worker app:
Incoming requests to the voting app:
Cleaning¶
Delete all the resources you used in this lab:
k delete ns monitoring
k delete ns istio-system
helm uninstall voting-app
Recap¶
In this blog, we have successfully set up Istio on our Kubernetes cluster and explored its powerful features for traffic management and observability. Use this hands-on guide as a foundation for further experimentation and Istio integration into your microservices architecture. Happy meshing!
For a wealth of knowledge, check our blog archives.
Did you like Kuberada?

Mastering GitLab Container Registry with Buildx: Your Comprehensive Guide¶
Tagged with:
In this article, we'll guide you through the process of getting started with GitLab Container Registry using the powerful buildx tool. Follow the steps below to locate the container registry, authenticate securely, and build your Docker image.
Table of Contents¶
Locating Container Registry
Authenticating to the Registry
Tagging and Building the Image using Buildx
Locating Container Registry¶
Navigate to your project in GitLab.
Select the "Deploy" option from the project menu.
Click on "Container Registry." You'll be directed to a blank page with instructions.
Authenticating to the Registry¶
To securely authenticate with GitLab Container Registry, follow these steps:
docker login $CI_REGISTRY -u <username> -p <access_token>
The registry URL ($CI_REGISTRY) depends on your provider:
Docker Hub: https://index.docker.io/v1
GitLab: registry.gitlab.com
For GitLab Container Registry, that means:
docker login registry.gitlab.com
Avoid using your password; instead, use a personal access token. Here's how you can generate one:
Go to your GitLab profile.
Navigate to "Access Tokens" under "Edit Profile."
Add a new token with a name of your choice.
Set the scope to read_registry (for pull access) and write_registry (for push rights).
After generating the token, remove the existing entry in $HOME/.docker/config.json related to GitLab:
{
"auths": {
"registry.gitlab.com": {}
},
"credsStore": "desktop.exe"
}
Authenticate again using your personal access token:
docker login registry.gitlab.com
Tagging and Building the Image using Buildx¶
Tagging the Image¶
Here is the structure of tagging an image to push to the registry:
tag=registry.gitlab.com/<gitlab-username>/<project-name>
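For example (the username, project, and local image name are illustrative):
tag=registry.gitlab.com/jane-doe/voting-app
docker tag myimage:latest $tag:v1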
Why Buildx?¶
GitLab recommends using buildx for building container images due to its enhanced features and capabilities compared to the traditional build command. You can read more on multi-platform images using buildx here. Let's take a look at how to properly tag the image for pushing.
What is a Builder?¶
A builder is a tool that uses BuildKit, a build engine, to carry out your builds. BuildKit interprets the instructions in a Dockerfile to generate a container image or other outputs.
You can interact with the builders using the docker buildx ls command. The asterisk (*) marks the selected platforms for the image:
Now, let's clarify the difference between docker build and docker buildx build:
docker build: The classic Docker build command for building images, suitable for single-platform builds.
docker buildx build: An advanced version using BuildKit, offering extra features like multi-arch builds and advanced options for a more flexible and capable building process.
Building the Image¶
Change the directory to your application code where the Dockerfile resides and create a builder instance:
export DOCKER_IMAGE_NAME=registry.gitlab.com/colossus06/cka-ckad-study-group-2024
docker buildx ls
docker buildx create --name mybuilder --use
docker buildx inspect --bootstrap
We can see that the default builder is mybuilder:
Automatic Load with Buildx
It's time to start a build from the builder we created in the previous step. We will use --load to make the build result available in the local image store, then push it to GitLab Container Registry in a later step. (With --push, BuildKit would instead push the build result straight to the registry and assemble the image manifest for the target architectures.)
cd <Dockerfile-directory>
docker buildx build --load --platform linux/amd64 -t $DOCKER_IMAGE_NAME:gitlab .
Now we can see that our builder is registered:
Display our image:
docker image inspect $DOCKER_IMAGE_NAME:gitlab | grep Architecture
Expected output:
Displaying the Image on GitLab Registry
Run the following command to push your image to the GitLab Container Registry:
docker push $DOCKER_IMAGE_NAME:gitlab
Display your registry by navigating to the following URL:
https://gitlab.com/<gitlab-username>/<repo-name>/container_registry/
or project/deploy/container_registry
Editing the Builder Instance
If you want to delete the builder instance, run the following:
docker buildx rm mybuilder
Creating a new builder:
docker buildx create --use --name voting-app
docker buildx build --load --platform linux/amd64 -t $DOCKER_IMAGE_NAME:gitlab .
Let's put together all the commands we used:
docker buildx rm mybuilder
docker buildx ls
docker buildx create --use --name voting-app
docker buildx inspect --bootstrap
docker buildx build --load --platform linux/amd64 -t $DOCKER_IMAGE_NAME:gitlab .
docker push $DOCKER_IMAGE_NAME:gitlab
Final Words¶
In this hands-on blog post, we have authenticated, built, and pushed our Docker image to the GitLab Container Registry using buildx.
Challenges:
Here are two challenges for you.
Can you build and push a multi-arch image to GitLab and Docker Hub? Try authenticating to Docker Hub, building a multi-arch image, and pushing it to the Docker Hub Container Registry:
docker login -u <dockerhub-username> -p <password>
docker buildx build --push --platform linux/amd64,linux/arm64 -t <dockerhub-username>/multiarch:v1 .
Can you save some space by keeping only 5 tags per image? (Hint: look at your project's container registry cleanup policies in GitLab.)
Happy building!
Getting Started with GitLab Runner on Windows - Step-by-Step Guide¶
Did you happen to receive an email notification from GitLab starting with "namespace has 5% or less Shared Runner Pipeline minutes remaining"? This email underscores the criticality of managing your Shared Runner Pipeline minutes. To optimize your CI/CD (Continuous Integration/Continuous Deployment) processes, it's crucial to understand and utilize GitLab Runners effectively.
In this page:
GitLab Runner types
Locating GitLab Runner
Step-by-step installation guide
GitLab Runner Types¶
GitLab Runners come in two types: Specific Runners and Shared Runners.
Specific Runners are dedicated to a single project and can only run jobs for that specific project. Conversely, Shared Runners are available to all projects within a GitLab instance, providing a versatile and shared resource for CI/CD processes.
Locating GitLab Runner¶
To locate GitLab Runner in your GitLab account, follow these steps:
1. Open your GitLab account and navigate to Settings.
2. In the Settings menu, find the CI/CD section.
3. Expand the Runners section.
Now that we know where to find the runners, let's proceed with the installation.
Step-by-Step Installation Guide¶
Step 1 - Install gitlab-runner.exe¶
Open PowerShell with administrative privileges and execute the following commands:
# Create a folder for GitLab Runner
New-Item -Path 'C:\GitLab-Runner' -ItemType Directory
# Change to the folder
cd 'C:\GitLab-Runner'
# Download GitLab Runner binary
Invoke-WebRequest -Uri "https://gitlab-runner-downloads.s3.amazonaws.com/latest/binaries/gitlab-runner-windows-amd64.exe" -OutFile "gitlab-runner.exe"
# Install the runner service and start it
.\gitlab-runner.exe install
.\gitlab-runner.exe start
Step 2 - Register the Runner¶
Open a command prompt, navigate to the GitLab Runner folder, and execute the following:
Win + r
cmd
cd C:\GitLab-Runner
.\gitlab-runner.exe register --url https://gitlab.com --token <token>
Ensure you allow the necessary permissions if prompted.
Provide a name for the runner (e.g., <your-name>) and choose the executor as "shell."
Expected output:
Runner registered successfully. Feel free to start it, but if it's running already, the config should be automatically reloaded!
Configuration (with the authentication token) was saved in "C:\GitLab-Runner\config.toml"
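For reference, the saved config.toml looks roughly like the following; the name and token are placeholders, and the exact fields depend on your runner version:
concurrent = 1
check_interval = 0

[[runners]]
  name = "<your-name>"
  url = "https://gitlab.com"
  token = "<token>"
  executor = "shell"
  shell = "powershell"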
Activating Runner¶
To activate the runner and start picking up jobs, run the following commands:
Win + r
cmd
cd C:\GitLab-Runner
.\gitlab-runner.exe run
Upon successful activation, navigate to Settings > CI/CD > Runners in your GitLab account to verify the newly configured runner.
Final Words¶
In this quick tutorial, we learned what a GitLab Runner is, how to install it, and how to start using it on Windows amd64. Now you're ready to handle CI/CD jobs for your projects.
Understanding the Architecture of a Kubernetes Cluster¶
Kubernetes, an open-source container orchestration platform, has transformed the landscape of deploying and managing containerized applications. To harness its power effectively, it's crucial to comprehend the architecture that underpins its scalability and fault tolerance. Let's navigate through the key components that make up the Kubernetes architecture.
Why is it called K8s?¶
The number "8" in "K8s" represents the eight letters between "K" and "s" in the word "Kubernetes." So "K8s" is simply a shorthand way of referring to Kubernetes.
Components of Kubernetes Architecture¶
A Kubernetes cluster has two core pieces: the control plane and the worker nodes.
Master Node/Control Plane¶
At the heart of Kubernetes architecture lies the master node, acting as the control plane for the entire cluster. It takes charge of managing the cluster state and making pivotal decisions related to scheduling, scaling, and maintaining the desired state of applications.
In production environments, you can observe multiple distributed control plane components to ensure fault tolerance, high availability, and scalability. The number of control plane components is dependent on factors such as the following:
the size of the Kubernetes cluster,
performance requirements,
architecture design,
desired level of redundancy and fault tolerance
You may also observe multiple instances of kube-apiserver to distribute load and provide redundancy, an etcd cluster to prevent a single point of failure, and multiple kube-controller-manager and kube-scheduler instances for high availability.
API Server¶
The API server serves as the central communication hub, exposing the Kubernetes API for seamless interactions with the cluster. It plays a pivotal role in facilitating communication among various components.
Scheduler¶
Tasked with determining the suitable nodes for running specific pod instances, the scheduler factors in resource requirements, constraints, and policies, optimizing the distribution of pods across the cluster.
Controller Manager¶
This component houses controllers (like deployment controller) responsible for preserving the desired state of the cluster. Continuously monitoring the cluster, controllers take corrective actions to ensure alignment with the specified configuration.
etcd¶
etcd, a distributed key-value store, serves as the persistent storage backend for the cluster. It stores configuration data, resource quotas, and access control information, ensuring the integrity of the clusterâs data.
Worker Nodes¶
The worker nodes form the operational layer, executing application workloads and providing the essential resources for running containers.
Kubelet¶
Kubelet, the primary agent on each worker node, communicates with the master node, ensuring that containers run as expected. It executes instructions received from the master node.
Container Runtime¶
Responsible for container management, the container runtime, such as Docker, containerd, or CRI-O, integrates seamlessly with Kubernetes.
kube-proxy¶
Taking charge of network routing and load balancing between containers on different nodes, kube-proxy exposes services to the external world, facilitating smooth inter-container communication.
Pod¶
The pod, the smallest operational unit, embodies a single instance of a running process within the cluster. It can house one or more containers that share networking and storage resources, enabling cohesive communication.
Addons¶
Kubernetes addons are supplementary components that enhance the capabilities of your cluster. They encompass cluster DNS, Container Network Interface (CNI) plugins, observability, logging, and metrics agents, and Kubernetes drivers that facilitate the interaction between the cluster and underlying cloud resources.
Displaying the components on a live Kubernetes cluster¶
Essential system components and infrastructure-related pods run in a dedicated namespace called kube-system. The output of the command below gives you a glimpse into these components:
kubectl get pods -n kube-system
We have two different clusters: the first runs a kubeadm v1.28 setup, and on the second we installed k3d.
kubeadm 2 node cluster components¶
The components we see in the kubeadm cluster are aligned with a typical Kubernetes cluster. Components like calico-kube-controllers, canal, coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, and kube-scheduler are fundamental parts of a standard Kubernetes control plane.
This cluster setup is suitable for production-like scenarios, with components like etcd serving as the distributed key-value store.
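For illustration, the listing on a kubeadm cluster looks roughly like this (pod name suffixes, node names, and ages will differ):
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-xxxxxxxxxx-xxxxx   1/1     Running   0          2d
canal-xxxxx                                2/2     Running   0          2d
coredns-xxxxxxxxxx-xxxxx                   1/1     Running   0          2d
etcd-controlplane                          1/1     Running   0          2d
kube-apiserver-controlplane                1/1     Running   0          2d
kube-controller-manager-controlplane       1/1     Running   0          2d
kube-proxy-xxxxx                           1/1     Running   0          2d
kube-scheduler-controlplane                1/1     Running   0          2d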
k3d cluster components¶
Let's take a look at the second cluster. This is a k3d cluster named architecture.
As you can observe, there are differences in the output of the kubectl get pods -n kube-system command.
k3d runs K3s, a lightweight Kubernetes distribution designed for ease of use and reduced resource requirements, inside Docker. It is well suited to development environments.
The components you see in the output, like local-path-provisioner, helm-install-traefik, svclb-traefik, and traefik, are specific to the K3s distribution used for lightweight and development environments.
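For illustration, a k3d cluster's listing looks roughly like this (names and ages will differ):
NAME                                      READY   STATUS      RESTARTS   AGE
local-path-provisioner-xxxxxxxxxx-xxxxx   1/1     Running     0          1h
coredns-xxxxxxxxxx-xxxxx                  1/1     Running     0          1h
helm-install-traefik-crd-xxxxx            0/1     Completed   0          1h
helm-install-traefik-xxxxx                0/1     Completed   0          1h
svclb-traefik-xxxxx                       2/2     Running     0          1h
traefik-xxxxxxxxxx-xxxxx                  1/1     Running     0          1h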
Conclusion¶
A robust understanding of Kubernetes architecture is important for seamless deployment and management of applications within a cluster. The collaborative orchestration of master nodes, worker nodes, pods, services, and ingress provides a resilient and scalable platform for containerized applications.
Explore More Insights!¶
Enjoyed this read? Dive deeper into the world of kuberada with the next topic. Uncover additional tips, tricks, and valuable insights to enhance your understanding of Kubernetes. Your learning journey continues, don't miss out!
Ephemeral Storage: A Hands-On Guide with emptyDir, ConfigMap, and Secret in Kubernetes¶
Ephemeral storage refers to temporary, short-lived storage that exists only for the duration of a specific process or container's lifecycle. In containerized environments such as Kubernetes, on-disk files inside containers are considered ephemeral. You can create and modify these files, but they cannot be preserved when the container is deleted or removed from the node for any reason.
When to use ephemeral storage¶
Let's imagine that you're running a dynamic website on a Kubernetes cluster where multiple containers collaborate to deliver a seamless user experience.
You need storage persistence across container restarts: If your website containers require data consistency, you can utilize an emptyDir volume as a shared space. It acts as a temporary repository, and when a container restarts, the new one seamlessly accesses the same files, maintaining continuity.
You need a shared space: Multiple containers in your website infrastructure work in tandem, each handling specific functions like content serving, session management, or background tasks. Containers share an emptyDir volume, and we will learn how in the hands-on section.
You need file sharing: Linux's support for arbitrary mounting allows you to flexibly mount volumes at different locations within each container's file system. Users can upload images to the shared emptyDir volume, which is accessible to various containers at their mounted paths, without the need for a separate persistent volume.
Handling large datasets: You can utilize the emptyDir volume as an ideal temporary workspace for resource-intensive tasks like sorting large datasets. Even if a container crashes during operations (since the pod isn't removed from the node during a crash), the data in the emptyDir volume remains intact. Reference
You want to manage configuration: You may need to inject specific configuration settings into each container. You can leverage ConfigMaps and Secrets to inject configuration data into pods, ensuring that all containers have the necessary settings.
What are the various ephemeral volume types?¶
Kubernetes offers various ephemeral volume types, each ideal for different use cases:
emptyDir: Temporary storage needs in batch processing tasks, enabling data sharing across containers and resilience to container restarts.
configMap, downwardAPI, secret: Applications requiring dynamic configuration or handling sensitive information. These volumes inject data directly into the Pod, ensuring seamless and secure management.
CSI ephemeral volumes: Specialized storage requirements, like high-performance or specific file systems, addressed by dedicated CSI drivers.
Generic ephemeral volumes: Applications needing temporary storage but desiring flexibility in choosing different storage solutions, provided by various compatible storage drivers.
Now that we have taken a look at the various ephemeral storage use cases and types, letâs get some hands-on skills that can help us in the Kubernetes exams.
a quick note on solutions
Before starting off with the hands-on section, remember that you can find the manifest files in the GitHub repo of the blog.
Tip
The solution for this task can be found in the blog's repository. Clone the repo and change directory into storage-ephemeral. If you're encountering this task for the first or second time, explore the repository to review the code and files associated with it. Afterward, attempt to solve it without referring to the code in the repository.
Find the labs here.
Hands-on With Ephemeral Storage in Kubernetes¶
Duration: 15 mins
If you don't specify a medium for emptyDir, it is backed by the node's disk by default, as you can see in the following example:
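A minimal sketch of such a pod (names are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-disk
spec:
  containers:
  - name: app
    image: alpine:latest
    command: ["sleep", "3600"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir: {}   # no medium specified: backed by the node's disk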
Let's try backing the emptyDir volume with RAM by adding a medium. You can find the lab resources on the kuberada-labs repository.
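Only the volume definition changes; setting medium: Memory backs the volume with tmpfs (a sketch, the size limit is illustrative):
volumes:
- name: cache
  emptyDir:
    medium: Memory   # tmpfs, RAM-backed
    sizeLimit: 64Mi  # optional cap on RAM usage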
We have now created emptyDir volumes backed by both disk and memory.
emptydir scenario¶
Problem Statement
You are tasked with implementing a shared storage solution named shared-vol in a Kubernetes cluster to facilitate communication and data exchange between two containers. The shared volume should persist as long as the pod is alive.
Requirements
Create a pod named tenants with two containers using the alpine:latest image.
Ensure that both containers (ct100 and cta150) remain running as long as the tenants pod is active.
Create a volume named shared-vol, name the first container ct100 and mount the shared volume at /var/www/html.
Name the second container cta150 and mount the same shared volume at /var/html.
Inside the ct100 container, create an index.html file in the mounted volume (/var/www/html) with some content.
Verify that the cta150 container can access and list the index.html file from the shared volume (/var/html).
Expected Outcome
Upon successful implementation, the ct100 container should serve as a producer by creating an index.html file, and the cta150 container should serve as a consumer by displaying the content of the index.html file from the shared volume. This scenario demonstrates the use of shared ephemeral storage to enable communication and data exchange between containers within a Kubernetes pod.
solution
We will first create a pod spec YAML file and add the second container accordingly. Then we navigate to the Kubernetes documentation and copy the emptyDir configuration example to declare and mount the volume. To declare a volume, we need a name and a type.
volumes:
- name: shared-vol
  emptyDir: {}
Pay attention to the paths while mounting.
volumeMounts:
- mountPath: /var/www/html   # use /var/html in the cta150 container
  name: shared-vol
Lastly, verify that we can access the index.html file from the other container, cta150 in this case:
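One way to verify, assuming the pod and containers are named as in the requirements:
kubectl exec tenants -c ct100 -- sh -c 'echo "hello from ct100" > /var/www/html/index.html'
kubectl exec tenants -c cta150 -- ls /var/html
kubectl exec tenants -c cta150 -- cat /var/html/index.html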
configmap scenario¶
Task
You've been tasked with ensuring smooth configuration management for the "dev02" team's frontend application. Let's navigate through this scenario:
Requirements:
Create a ConfigMap named "cm1602" with key-value pairs configuring essential parameters for the frontend application: "app=frontend" and "team=dev02".
Introduce a Pod named "consumer" using the "bitnami/nginx" image. This Pod will serve as the consumer, leveraging the configuration values provided by the "cm1602" ConfigMap.
Create a volume named shared-cm and mount the ConfigMap as a volume at /var/src.
Display the contents of the /var/src directory. This step validates that the configuration values from the "cm1602" ConfigMap are available inside the container.
Expected Outcome:
Upon successful implementation of this scenario, the "consumer" Pod showcases the power of ConfigMaps in Kubernetes. The "dev02" team's frontend application benefits from seamless configuration, providing a robust foundation for smooth development and operations.
solution
To create a ConfigMap and mount it as a volume in a container, all we need is to change the volume type to configMap, as shown in the following snippets and in the Kubernetes docs on Populate a Volume with data stored in a ConfigMap.
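The ConfigMap itself can be created imperatively first; a minimal sketch matching the requirements:
kubectl create configmap cm1602 --from-literal=app=frontend --from-literal=team=dev02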
volumes:
- name: shared-cm
configMap:
name: cm1602
volumeMounts:
- name: shared-cm
mountPath: /var/src
Let's list the contents of the ConfigMap volume:
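Assuming the Pod is named consumer as in the requirements:
kubectl exec consumer -- ls /var/src
kubectl exec consumer -- cat /var/src/app /var/src/team
The keys appear as files named app and team, containing frontend and dev02.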
secret scenario¶
Problem Statement:
In the secure realm of the "secretns102" namespace, you're tasked with deploying a Pod using the "nginx" image. The challenge is not just running the Pod but ensuring it securely consumes sensitive information stored in a Secret.
Requirements:
Deploy the "secret" Pod using the bitnami/nginx image within the "secretns102" namespace. This Pod will serve as a secure environment where sensitive data will be consumed.
Create a Secret named consumable in the same namespace from the following key-value pairs: psw=admin, user=admin.
Consume the Secret as a volume within the Pod. Name the volume as you wish.
Mount this Secret to the path "/var/src" and ensure it's set with read-only access to safeguard the sensitive information from unauthorized modifications.
Display the contents of the securely mounted Secret at "/var/src."
Expected Outcome:
Upon successful implementation of this scenario, the "secret" Pod in the "secretns102" namespace stands as a testament to securely consuming and utilizing sensitive information from the "consumable" Secret. This approach ensures that confidential data remains protected within the Kubernetes environment.
solution
We will combine all the above techniques to create a Secret and mount it as a volume. First, we need to change the volume type to secret and the name field to secretName, as shown in the following snippets. These snippets are adapted from the official docs, and you can read more in the related section here.
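The namespace and the Secret can be created imperatively first; a minimal sketch based on the requirements:
kubectl create namespace secretns102
kubectl create secret generic consumable -n secretns102 --from-literal=psw=admin --from-literal=user=admin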
volumes:
- name: shared-secret
secret:
secretName: consumable
volumeMounts:
- name: shared-secret
  mountPath: "/var/src"
  readOnly: true   # satisfies the read-only requirement
Can you list the contents of the directory? Yes!
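For example, assuming the Pod is named secret in the secretns102 namespace:
kubectl exec secret -n secretns102 -- ls /var/src
kubectl exec secret -n secretns102 -- cat /var/src/psw /var/src/user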
Conclusion¶
Ephemeral storage provides dynamic solutions to challenges like data persistence, collaboration, and configuration management within containerized environments.
In your Kubernetes journey, these hands-on experiences and insights serve as valuable tools. Remember, the ability to navigate ephemeral storage nuances not only prepares you for exams but also equips you to tackle real-world challenges in deploying scalable, resilient, and efficient applications in Kubernetes.
About us¶
Hi!
I'm Gulcan, a DevOps enthusiast passionate about automation, scripting, and addressing real-world challenges.
My therapy is crafting solutions and sharing insights through writing.
What Drives Me:
Kubernetes: I love challenges, especially when the solution involves Kubernetes.
Automation: Automate whenever you can.
DevOps: The glue between development and operations.
DevSecOps: Security is always a priority.
Join me on this journey of continuous learning.
Connect with me on LinkedIn¶
Do you want to contribute?¶
Check out our GitHub repo for issues labelled with contribute or blog.
Follow me on GitHub for updates!