How to Create Agile Cloud Platforms with Kubernetes, Prometheus, JAMstack, and Astro to Optimize E-Commerce
Introduction
When a client asks us to improve their company's e-commerce through a solid technological platform, the first thing we ask is if they would be willing to migrate, if not all, at least a large part of their systems to the cloud. The cloud is presented as the optimal solution for the agile deployment of information processing resources, essential in e-commerce. After all, the goal is to reach as many people as possible, right? The cloud facilitates access to a wide catalog of services for hosting and data processing, accessible with just a few clicks. Additionally, competition among various providers has significantly reduced costs compared to maintaining these systems locally. As an added advantage, the cloud allows for the creation of replicated systems, helping to avoid service outages or simply bringing the service closer to the end customer.
Cloud Platform Advantages
Among the many advantages offered by the cloud, its main feature is the ability to easily interconnect different services to collaborate with each other. In an environment with high connectivity between the various components that make up a company, it is easy to review sales processes, design efficient information flows, and understand what action triggers each event, minimizing manual intervention within the sales funnel.
A clear example is businesses operating under the print-on-demand model. Typically, when a sale is made through the web, several processes are triggered automatically. On the one hand, the order is sent directly to the supplier responsible for printing, and on the other hand, email notifications are generated for both the customer and the agent. Additionally, accounting entries are recorded, the monthly sales report is updated, and the characteristics of the sold product are archived for further analysis, making it easier to design more efficient marketing campaigns for the next quarter.
Client Situation Analysis
If the client is aligned with the adoption of new technologies and wants to opt for a similar scenario, the next step is to identify all the components of their current system and review the information flows they use daily. The goal is to catalog all available elements, such as databases, file folders, and software used, and analyze how they interact to manage the company's activity. At this stage, it is common to identify points for improvement in the information flows, which are usually resolved through integrations, automations, or custom developments that provide benefits in the form of reduced errors, time, and costs, especially in a highly interconnected environment like the cloud.
It is not always possible to migrate all software to the cloud, whether due to technical limitations or functional or business requirements. One example is accounting, which can sometimes be difficult to move to the cloud. However, this is not an obstacle to having a modern technological platform, as hybrid solutions can be implemented in these cases, such as VPN connections or offline processes accompanied by batch processing, to integrate local systems with cloud-based ones.
Building the Cloud Platform
Assuming we have already identified and cataloged the elements to be migrated to the cloud as part of our sales processes, the next step is to prepare the foundation on which we will build the entire system. The key characteristics we seek in our new platform are:
Security: The system must be protected against fraud attempts and unauthorized access to sensitive information.
Availability: The higher the availability, the more potential sales can be made.
Reach: The more people we can reach, the greater the likelihood of making sales.
Automation: As much as possible, we should aim for tools and services to collaborate with each other with minimal manual intervention.
Scalability: Once the process reaches its optimal level of automation, we can focus on growing the business through various marketing techniques. The system must be able to handle the increase in request volume.
Agility: No sales system is eternal. Competition, market evolution, and the emergence of new products will require keeping sales flows up to date. It is essential that changes to the platform can be made quickly and affordably.
Observability: It is crucial to monitor and supervise the platform's behavior at all times. It should not become a black box; instead, we must be able to visualize how information flows between components to perform diagnostics when necessary. Additionally, the system should alert us to any detected anomalies.
Each business has its own peculiarities, so in this article, I will limit myself to describing the elements we typically use as the basis for developing the rest of the information flows, adapted to the specific needs and restrictions of each client. If you want more information on any of these points, feel free to write to us at info@arteco-consulting.com.
Choosing a Cloud Provider
There are dozens of cloud providers where we can deploy our services. The top three are Amazon Web Services (AWS), Microsoft Azure, and Google Cloud. They all offer the services we need, and if we use standard technologies like Kubernetes (which I will talk about later), we will avoid vendor lock-in, allowing us to switch providers at any time, although this process will require proper planning.
In my case, I usually recommend Google Cloud. It is the provider we have used since the creation of Arteco Consulting SL in 2012, a company that was born cloud-native, without the need for physical servers. Google Cloud offers competitive prices and the possibility of obtaining significant discounts through long-term resource reservations.
Preparing the Software Component Manager
Any of the three providers mentioned is compatible with the most widely used orchestration technology globally, which has become a standard for running cloud software: Kubernetes. This open-source software was originally developed by Google and is currently managed by the Cloud Native Computing Foundation (CNCF), a non-profit organization that is part of the Linux Foundation.
Kubernetes, often abbreviated as K8s, is an orchestrator responsible for keeping our software components running. It provides a broad range of services that facilitate application interaction and ensures their availability, preventing service interruptions or maintenance downtime. In short, Kubernetes manages CPU, RAM, and storage resources to virtualize our applications, whether databases, websites, or API services for application interconnection. Additionally, it ensures that the deployed services run continuously and smoothly.
On the official Kubernetes website, you can access all the necessary documentation to start using this platform, both in the cloud and on your own servers.
Kubernetes requires a minimum set of virtualized servers, collectively known as a "cluster." Creating this cluster is relatively easy if there are no special connectivity needs. Through the web console of each cloud provider, we can create the initial cluster in just a few clicks, configuring the name, geographic location, and server type (specifying the amount of RAM, CPU, and storage). After a few minutes, the cluster will be ready, and we can connect to it in several ways, the most common being kubectl, a command-line tool that can be installed locally.
Details on connecting to the cluster with kubectl can be found in the official documentation: https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl?hl=es-419
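As an illustration, a small cluster on Google Kubernetes Engine can also be created from the terminal. This is a minimal sketch: the cluster name, zone, node count, and machine type below are placeholders, and the exact flags should be checked against the gcloud documentation for your project.

```bash
# Create a small GKE cluster (name, zone, and machine type are illustrative)
gcloud container clusters create shop-cluster \
  --zone europe-west1-b \
  --num-nodes 3 \
  --machine-type e2-standard-2

# Fetch credentials so that kubectl can talk to the new cluster
gcloud container clusters get-credentials shop-cluster --zone europe-west1-b

# Verify that the nodes are up and ready
kubectl get nodes
```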
It is important to carefully review the configuration of virtual machines, as these systems have a cost based on usage or resource reservation. The higher the amount of RAM and CPU, the higher the monthly cost. By properly adjusting resources, it is possible to start with a basic cluster for around $100 per month, capable of continuously running between 20 and 30 services, such as databases, websites like WordPress, etc.
On this infrastructure provided by Kubernetes, we will deploy other services that will allow us to automate and optimize the platform, facilitating e-commerce. Below, we describe the basic components.
Basic Components of the Agile Platform
Once we have access to the cluster, we will need to implement services that allow us to achieve the desired goals of the agile platform, such as security, observability, and agility, among others.
Kubernetes already provides some essential features, such as availability. However, the rest of the capabilities will be added using other widely used open-source services and applications. This will allow us to achieve our goals without having to pay for software licenses, limiting ourselves only to infrastructure costs.
Each application has its own deployment instructions, although a frequently used tool is Helm, which simplifies the installation process of many applications. Helm includes configuration templates that enable quick and efficient deployment of common applications within Kubernetes. Instructions for installing certain applications using Helm can easily be found online.
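As a quick illustration of how Helm is used, the commands below install a chart from a public repository. The Bitnami repository and its WordPress chart are only an example, and the release name is arbitrary.

```bash
# Register a public chart repository (Bitnami is used here only as an example)
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# Install a chart as a named release inside the cluster
helm install my-shop-blog bitnami/wordpress

# List the releases deployed through Helm
helm list
```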
Regardless of whether they are installed manually or via Helm, it is important to ensure that at least the following services are running to build an agile platform. For each goal, there are numerous alternatives that meet the necessary requirements. The following list is a proposal based on solutions we have repeatedly used in production environments, all of which are open source. Feel free to replace any of these options with others that you prefer.
Version Control System
The first step is to implement a version control system that allows tracking changes to both custom development source code and all the configuration files needed to deploy the services. As the infrastructure grows, having all files under version control and well-documented will be essential. These repositories will also be the starting point in case you need to restore the platform or migrate to another cloud provider.
We recommend Gitea as the version control system. It is a lightweight web application that manages Git repositories and offers basic functionalities such as user access control, code reviews, issue tracking, and more. Although it is not the most comprehensive solution, it is quick and easy to install. If more advanced features are required, such as continuous deployment integrated in a single tool, GitLab Community Edition is a more robust alternative.
Continuous Deployment
If you work with custom software (something that is common sooner or later), such as a JAMstack website (highly recommended to optimize SEO) or connectors between components, it is essential that changes are deployed quickly from the moment they are made until they are executed in production.
For example, a developer detects a bug in software that connects two systems. After making the change that fixes the bug, they upload the source code to Gitea. Gitea, in turn, will trigger an event that notifies the platform to redeploy the service with the applied fix. These services are known as Continuous Integration or Continuous Deployment (CI/CD).
To configure this continuous deployment flow, we will use Jenkins, a web tool that allows defining the necessary steps to deploy the updated software within the cluster. With Jenkins, we can set up scripts that automatically execute within our own cluster whenever Gitea logs a source code update. This interaction between tools is done via webhooks, which are simple HTTP messages that trigger actions, such as executing a deployment flow when a code change notification is received.
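As a rough sketch, a declarative Jenkinsfile for such a flow could look like the following. The repository layout, build commands, container registry, and deployment name are all hypothetical and would depend on each project; Jenkins provides GIT_COMMIT as a built-in environment variable when the job is tied to a Git repository.

```groovy
// Jenkinsfile (declarative pipeline), triggered by a Gitea webhook on each push
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // Build the connector and run its tests (commands are illustrative)
                sh 'npm ci && npm test && npm run build'
            }
        }
        stage('Image') {
            steps {
                // Package and push the container image (registry name is hypothetical)
                sh 'docker build -t registry.example.com/shop/connector:${GIT_COMMIT} .'
                sh 'docker push registry.example.com/shop/connector:${GIT_COMMIT}'
            }
        }
        stage('Deploy') {
            steps {
                // Point the Kubernetes deployment at the freshly built image
                sh 'kubectl set image deployment/connector connector=registry.example.com/shop/connector:${GIT_COMMIT}'
            }
        }
    }
}
```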
These triggers are also commonly used when new products are uploaded to the content management system (CMS), for example. This way, editors who lack programming knowledge can update the website, albeit indirectly.
This approach allows the platform to be agile, with rapid and automated updates, optimizing the efficiency of the development team and reducing the time between detecting a problem and resolving it in production.
Observability and Monitoring
One of the fundamental pillars of any agile platform is observability. Having complete visibility into what is happening in the system in real-time is crucial for detecting issues, optimizing performance, and ensuring smooth operation. In this regard, it is necessary to have tools that allow us to monitor the state of the deployed services and applications, as well as analyze logs and trace requests throughout the infrastructure.
Prometheus
Prometheus is one of the most popular tools for monitoring in Kubernetes environments. It integrates seamlessly with containers and provides powerful metric collection and storage capabilities. Prometheus works by collecting metrics from the various services deployed in the cluster and storing them so that we can observe the behavior of our applications over time. It is especially useful for monitoring resources such as CPU usage, memory, and application latency, allowing us to quickly detect anomalies or bottlenecks.
Grafana
Once Prometheus has collected the metrics, we need a tool that allows us to visualize them clearly and effectively. This is where Grafana comes into play, one of the most popular options for metric visualization in Kubernetes environments. With Grafana, we can create custom dashboards to monitor the health of the system and applications. This flexibility allows us to tailor visualizations to our specific needs, providing real-time insight into infrastructure performance, from resource usage to individual service activity.
Indicators don't have to be purely technical, as applications can emit their own metrics, such as the number of sales in the last hour, which can then be displayed in Grafana on a new dashboard designed for a business profile.
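As a sketch of how an application can expose such a business metric, the snippet below uses the prom-client library for Node.js together with Express. The metric name, internal endpoint, and port are illustrative; Prometheus would simply be configured to scrape the /metrics endpoint of this service.

```typescript
import express from 'express';
import { Counter, collectDefaultMetrics, register } from 'prom-client';

// Standard process metrics (CPU, memory, event loop) for Prometheus
collectDefaultMetrics();

// Hypothetical business metric: number of completed sales
const salesCounter = new Counter({
  name: 'shop_sales_total',
  help: 'Number of completed sales',
});

const app = express();

// Called by our (hypothetical) order flow after each successful purchase
app.post('/internal/sale-completed', (_req, res) => {
  salesCounter.inc();
  res.sendStatus(204);
});

// Endpoint scraped periodically by Prometheus
app.get('/metrics', async (_req, res) => {
  res.set('Content-Type', register.contentType);
  res.end(await register.metrics());
});

app.listen(3000);
```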
ELK Stack (Elasticsearch, Logstash, Kibana)
In addition to metric monitoring, it's vital to have a solution for managing and analyzing logs, which are an invaluable source of information when diagnosing problems or auditing system behavior. For this purpose, the ELK Stack is one of the most robust solutions. This set of tools includes:
- Elasticsearch, which stores and facilitates efficient searching of logs.
- Logstash, which processes and transports logs to Elasticsearch.
- Kibana, which allows us to visualize logs intuitively and create graphs based on the collected information.
For log collection, we can use agents like Filebeat or Fluentd, which handle sending logs generated by services directly to Elasticsearch. This provides a continuous flow of information that we can analyze to identify errors or trends that might affect platform performance.
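A minimal Filebeat configuration for this purpose might look like the following sketch; the Elasticsearch hostname assumes the service is reachable inside the cluster under that name, and in practice Filebeat is usually deployed as a DaemonSet so that every node ships its container logs.

```yaml
# filebeat.yml: ship container logs to Elasticsearch (hostname is an assumption)
filebeat.inputs:
  - type: container
    paths:
      - /var/log/containers/*.log

output.elasticsearch:
  hosts: ["http://elasticsearch:9200"]
```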
Log storage systems like Elasticsearch typically come with retention policies to prevent disk volumes from filling up, so older messages are automatically discarded to save space.
Jaeger
In more sophisticated architectures based on microservices, tracing requests through multiple services is essential to understand application behavior. This is where Jaeger plays an important role. This distributed tracing tool allows us to follow the path of a request through different microservices, which is crucial for identifying latency issues, bottlenecks, and other inefficiencies that might arise in a distributed environment. With Jaeger, we can gain detailed insights into our applications' performance, facilitating optimization and diagnosing complex problems affecting the end-user experience.
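A minimal instrumentation sketch for a Node.js service, assuming Jaeger is deployed in the cluster with its OTLP endpoint enabled (the collector hostname and service name are assumptions), could look like this using the OpenTelemetry SDK:

```typescript
import { NodeSDK } from '@opentelemetry/sdk-node';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';
import { getNodeAutoInstrumentations } from '@opentelemetry/auto-instrumentations-node';

// Send traces to the Jaeger collector via OTLP over HTTP
const sdk = new NodeSDK({
  serviceName: 'checkout-api', // hypothetical service name
  traceExporter: new OTLPTraceExporter({
    url: 'http://jaeger-collector:4318/v1/traces', // assumed in-cluster address
  }),
  // Automatically trace HTTP calls, Express handlers, database drivers, etc.
  instrumentations: [getNodeAutoInstrumentations()],
});

sdk.start();
```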
Load Balancer and Ingress
In an agile platform, it is essential to have mechanisms that efficiently distribute traffic to the services deployed in the cluster, ensuring that requests are handled fairly and that no single point is overloaded. Additionally, it is necessary to expose these services to the outside world in a secure and scalable manner. To meet these needs, load balancers and Ingress Controllers are used to manage incoming public internet traffic and route it to the appropriate services within the infrastructure.
NGINX Ingress Controller
One of the most commonly used tools for managing traffic ingress in Kubernetes is the NGINX Ingress Controller. NGINX is a widely recognized and proven solution in the container environment, known for its ability to handle large volumes of traffic with efficiency and flexibility.
The NGINX Ingress Controller acts as an intermediary between the external world and the internal cluster services. It allows you to expose your services through HTTP/HTTPS routes, facilitating access from outside the Kubernetes environment. It does this by managing routing rules that direct incoming traffic to the appropriate service based on factors like domain, requested URL, or protocol used.
One of the significant advantages of using NGINX as an ingress controller is its capability to manage both secure (HTTPS) and non-secure (HTTP) traffic. Additionally, it is highly configurable, meaning we can easily adjust performance parameters, enable traffic compression, manage SSL certificates for secure connections, and even apply advanced security policies such as access throttling or malicious traffic filtering.
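As an illustration, an Ingress resource routing a hypothetical store domain to an internal service, with TLS handled by a pre-existing certificate secret, could be declared as follows (names, namespace, and domain are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: shop-ingress
  namespace: shop                        # namespace and names are illustrative
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - shop.example.com
      secretName: shop-example-com-tls   # assumed pre-provisioned certificate
  rules:
    - host: shop.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: storefront         # hypothetical service exposing the site
                port:
                  number: 80
```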
Using the NGINX Ingress Controller is also fundamental for ensuring efficient load balancing. This means that incoming traffic is distributed evenly across different instances of services, preventing overload on a single instance and ensuring high availability. If an instance fails, NGINX automatically redirects traffic to the remaining available instances, maintaining service continuity without noticeable interruptions for the end user.
Implementing an Ingress Controller like NGINX gives us complete control over how traffic to our services is managed, ensuring that the platform is scalable, secure, and capable of handling high volumes of requests without compromising user experience.
Integrating All Components with an Online Store
Once we have all the fundamental components of the platform in place—monitoring, observability, load balancers, and version control—we can focus on building agile and efficient applications. A perfect example of how to integrate these components to create a modern and scalable e-commerce solution is using the JAMstack approach in combination with Astro, one of the most powerful emerging technologies for creating static and optimized websites.
JAMstack with Astro: Product Catalog and Purchases
JAMstack (JavaScript, APIs, and Markup) is a modern architecture for building fast, secure, and easily scalable websites. By separating the frontend logic (the user interface) from the backend (server functions and databases), we can create applications that are deployed as static sites but interact dynamically with external services, achieving an optimal blend of speed, security, and flexibility.
In this case, we will use Astro as the framework to build our online store. Astro specializes in generating static websites, which means our pages will be extremely fast, as most of the content is pre-rendered and served directly from the CDN. This not only enhances user experience but also reduces infrastructure costs and increases security by minimizing direct interactions with servers.
Creating the Product Catalog
The first step in building our online store with Astro is to integrate a CMS (Content Management System) that allows us to manage the product catalog easily. There are many headless CMS options that can be easily connected to a JAMstack project, such as Strapi or Directus, which can also be deployed within our infrastructure. These CMS solutions enable centralized storage and management of products, while Astro handles the static generation of product catalog pages.
For example, each time a new product is added to the CMS, Astro can automatically generate an optimized page for that product, including its description, images, and price details. Additionally, as static pages, the load times will be extremely fast. Thanks to version control and continuous deployment with tools like Gitea and Jenkins, changes to the catalog can be immediately reflected in the store without manual intervention.
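A simplified product page in Astro, assuming a headless CMS exposes the catalog through a REST endpoint (the URL and response shape below are assumptions, not any particular CMS's API), could look like this:

```astro
---
// src/pages/products/[slug].astro: one static page per product at build time
export async function getStaticPaths() {
  // Hypothetical headless CMS endpoint exposing the product catalog
  const response = await fetch('https://cms.example.com/api/products');
  const { data } = await response.json();

  return data.map((product: any) => ({
    params: { slug: product.slug },
    props: { product },
  }));
}

const { product } = Astro.props;
---
<html lang="en">
  <body>
    <h1>{product.name}</h1>
    <img src={product.image} alt={product.name} />
    <p>{product.description}</p>
    <strong>{product.price} €</strong>
  </body>
</html>
```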
Purchase and Payment Process
Once the catalog is up and running and accessible through the NGINX controller, the next step is to enable the purchase process. In this case, we leverage JAMstack's capabilities to integrate third-party services such as PayPal or Stripe, which manage payments securely and efficiently.
Using the APIs from these providers, we can implement a fully dynamic purchase flow without sacrificing the static nature of our site. The typical process would be: the customer adds products to the cart, selects a payment option (such as PayPal or Stripe), and upon completing the transaction, these external services handle the payment securely.
This stage of the sales funnel is crucial. Our platform must verify that the payment has been completed correctly by querying the payment provider, to avoid being misled by a malicious user who claims to have paid when they have not. This usually requires some custom development for verification and then recording the purchase in our order management tool, which could range from sending an email to a mailbox to calling other APIs or inserting records into a database. Our recommendation is to use Java and Spring Boot in the form of an API to orchestrate the purchase, although simpler options such as APIs developed with Python, PHP, or Node.js are also valid.
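As a sketch of this verification step, using the Node.js option mentioned above and assuming Stripe Checkout with signed webhooks (the endpoint path and the order-recording logic are placeholders):

```typescript
import express from 'express';
import Stripe from 'stripe';

const stripe = new Stripe(process.env.STRIPE_SECRET_KEY as string);
const webhookSecret = process.env.STRIPE_WEBHOOK_SECRET as string;

const app = express();

// Stripe requires the raw request body in order to validate the signature
app.post('/webhooks/stripe', express.raw({ type: 'application/json' }), (req, res) => {
  try {
    // Throws if the signature does not match, i.e. the notification is not from Stripe
    const event = stripe.webhooks.constructEvent(
      req.body,
      req.headers['stripe-signature'] as string,
      webhookSecret,
    );

    if (event.type === 'checkout.session.completed') {
      // Payment confirmed by the provider: record the order here
      // (send an email, call the order management API, insert into a database, ...)
    }

    res.sendStatus(200);
  } catch (err) {
    // Reject anything whose signature cannot be verified
    res.sendStatus(400);
  }
});

app.listen(3000);
```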
The use of Astro ensures that the store remains fast and secure, as the website remains static while dynamic interactions, such as the payment process, are managed through APIs. This not only guarantees optimal performance but also greater protection against common attacks or vulnerabilities found in traditional websites.
Advantages of an Online Store with JAMstack and Astro
By combining Astro with the JAMstack approach, we achieve an online store that offers several significant advantages:
- Speed: The static sites generated by Astro load incredibly fast, enhancing both user experience and SEO ranking.
- Scalability: With the stack deployed on Kubernetes (K8s), it's extremely easy to deploy new versions of any component of the sales funnel without service interruptions and to scale horizontally if the load requires it.
- Security: Static pages that rely on third-party services for critical processes considerably reduce security risks.
- Simple Maintenance: Thanks to deployment automation with tools like Jenkins and Gitea, maintaining the product catalog and managing the store becomes more efficient and less error-prone.
With this architecture, we’ve created a fast, scalable, and secure online store, leveraging the benefits of JAMstack and Astro to build a static, efficient website ready for modern e-commerce.
Clearly, this architecture is a simplification of complex business environments, which often involve extensive technological requirements and numerous interconnected processes spanning various departments and large management teams. For a small-scale project, a WordPress site managed through a hosting tool with a few additional plugins might be sufficient. However, similar architectures are being used by companies that generate billions in revenue, such as those close to our offices in Palma de Mallorca, where large hotel chains and tourism wholesalers manage hundreds of daily sales in international markets.
In summary, creating an agile and modern platform, whether for e-commerce or other business applications, can offer tremendous advantages in terms of scalability, security, and efficiency. The solutions we have described are just the beginning of what can be achieved with the right combination of tools and technologies. If you are interested in implementing a similar architecture, optimizing your processes, or exploring more about agile platforms, feel free to contact us. At Arteco Consulting, we are here to help you take your business to the next level. Write to us and we’ll be happy to advise you.