Computers and networking have evolved enormously over the last few decades, changing almost every aspect of day-to-day life: how we learn, how we interact, and how engineers run security operations.
Technology is evolving, and security requirements are changing with it.
In this article, we will talk about security concepts and how we implement them when building custom web products at Codica.
Security in product development: why is it important?
What is security? It is easier to start with what security is not:
- Security is not a product. There is no single object that can make your system secure. The only way to make your system more secure is to follow security baselines and guidelines;
- Security is not a one-time static operation. Concepts and threats constantly change;
- Security is not made only of tools and security elements. It's also about ideas and practices.
What is the goal of security?
Security aims to protect objects from threats. A threat can be an object or a subject. Let's look at the most common threats in detail. There are several threat modeling methodologies, such as PASTA, Trike, and VAST. Here, however, we will discuss STRIDE, a threat identification model developed by Microsoft.
STRIDE is an acronym, and it stands for the main types of threats, which we will list below.
Spoofing
When an intruder pretends to be a different person or entity.
Tampering
Actions that modify the original data and compromise its integrity.
Repudiation
The ability of an attacker to deny their actions. For example, if you don't have logging configured, attacking subjects can deny what they did.
Information disclosure
When an attacker distributes information to unauthorized entities.
Denial of service
The goal of this attack is to prevent the authorized use of resources by overloading the network or hardware.
Elevation of privilege
Here, the goal is to escalate privileges so that unauthorized actions can be performed in the application.
What is security?
Now we can finally define what security is.
Security is protection from harm that may be caused by third parties. Security consists of many layers and procedures. When any of those layers is vulnerable, the whole system is at risk.
Indeed, good security practices and concepts will help your system be more available and less risky in terms of business and user privacy. Also, they make it more efficient.
There are a lot of security tools, but we will mention only the most important:
- Intrusion Detection System (IDS),
- Encryption tools,
- Packet sniffers,
- Penetration Testing.
Those tools evolve and change all the time. As an example, take firewalls and how they’ve changed. Let’s see the image below.
Nowadays, security applies not only to networks but also to a lot of other concepts like data storage, which must be encrypted. Access management must also be well controlled and baselines followed if data is sensitive.
At Codica, we maintain a high-level security system and apply the best DevOps practices to provide our customers with reliable and secure software.
CIA and AAA models
Any analysis of security systems cannot be complete without a review of common security standards. They form the basis of any security strategy.
Both models, CIA and AAA, were created to standardize security concepts. They are far from being precise. Their implementation differs significantly in various systems. However, these concepts are very good at illustrating general security concepts and ideas.
To ensure the network's security, you should provide control over access to various elements of the network by users. This is what the AAA mechanism is for.
The CIA is the information security reference model. It is used to assess an organization's information security.
What is AAA?
The AAA term stands for Authentication, Authorization, and Accounting. This framework was created to provide some basic concepts for securing a system’s objects and subjects.
However, the interpretation above is not the whole picture. For example, (ISC)² (the International Information System Security Certification Consortium) recommends at least two more components of the AAA concept. Let's see them all:
- Identification - claiming your identity;
- Authentication - proving that your identity is credible;
- Authorization - granting or denying access;
- Auditing - recording logs of events and actions;
- Accounting - reviewing logs to see what actions each identity performed.
If at least one component is missing, the system will be insecure.
Now let's clarify how this system works with an example in AWS (Amazon Web Services). We give AWS as an example because, at Codica, we work mainly with this provider. Furthermore, it is the most mature cloud provider for now.
When we log in to our account, we perform Identification by providing a username. Then we perform Authentication by providing a password and a multi-factor authentication (MFA) code.
After that, we are authorized to perform actions depending on our permissions. To add auditing and accounting to the setup, you can enable AWS CloudTrail. This AWS service monitors and records account activity across your AWS infrastructure.
To implement accounting, you can save your logs to CloudWatch, run queries against them, and send notifications when something looks wrong. CloudWatch is a repository for metrics and logs: if our application needs to save logs, it can write them to CloudWatch, and it stores metrics as well.
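As an illustration, a minimal Terraform sketch of this accounting setup might look like the following. The resource names, the S3 bucket, and the IAM role ARN are assumptions for the example; a real setup also needs a bucket policy and a delivery role.

```hcl
# Log group that will receive CloudTrail events (hypothetical names).
resource "aws_cloudwatch_log_group" "trail" {
  name              = "cloudtrail-logs"
  retention_in_days = 90
}

# Trail recording API activity across all regions of the account.
resource "aws_cloudtrail" "main" {
  name                       = "account-trail"
  s3_bucket_name             = "example-trail-bucket" # assumed to exist
  is_multi_region_trail      = true
  cloud_watch_logs_group_arn = "${aws_cloudwatch_log_group.trail.arn}:*"
  cloud_watch_logs_role_arn  = "arn:aws:iam::123456789012:role/CloudTrailToCW" # placeholder
}
```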
What is CIA?
CIA stands for confidentiality, integrity, and availability. They are typically viewed as the primary goals of a secure infrastructure.
Let’s dive deeper:
Confidentiality is the concept of the actions and measures taken to ensure data protection. Its goal is to reduce unauthorized access to data to zero. A lot of security controls can protect confidentiality; some of them are encryption and steganography.
Confidentiality is implemented by tools and actions from the subject upon the object. It's known as access control. Violations of confidentiality can be different. For example, they can result from a human error or attack, and sometimes both.
The next component of the CIA term is integrity. It helps us keep data correct and unaltered. For integrity to be correctly maintained, objects must be modified by only authorized subjects.
In general, integrity goals are:
- Preventing unauthorized subjects from making any changes against data;
- Preventing authorized subjects from doing unauthorized modifications;
- Keeping objects consistent so we know that any actions against them are consistent and verifiable.
The last one is availability. In short, it means that access to objects is granted in a timely and uninterrupted fashion, and only to authorized subjects. To ensure availability, a system should have reliable data backups.
Furthermore, it needs good performance to handle interruptions. The system should prevent any data loss. By the way, there are a lot of threats to availability. For example, these are power loss, heat, and attacks such as DoS or DDoS.
How can CIA and AAA models help enhance web product security?
At Codica, we apply both the CIA and AAA models.
Understanding AAA is helpful for developing systems around data classification. It also assists in managing access permissions and auditing the action history in case something happens. These models help identify who performed an action and when.
The CIA model helps us understand which parts of the architecture must be secured and how to keep the balance between its elements. Generally, it helps keep the architecture well aligned and, for instance, avoid encrypting data that is not sensitive at all.
When the CIA and AAA standards are met, the company's security profile is better equipped to handle threats and audit attacks. Through these and other concepts, we form a secure architecture.
The shared-responsibility model of AWS
As mentioned above, our experts work mostly with AWS as a cloud provider. So, we will discuss their shared responsibility model.
AWS officially states “Security and Compliance is a shared responsibility between AWS and the customer”. The shared model enables customers to concentrate on the security of an operating system, services, and apps.
In short, AWS’s mission is to protect the cloud. They ensure that data centers are physically secure, so unauthorized subjects can't access storage or perform other attack actions.
A customer is responsible for client-side data, SSE (server-side encryption), networking, and its security. They are also responsible for firewalls and OS (operating system) patches and updates, applications, and access management in the cloud.
Notably, the shared responsibility model changes from service to service. For example, if we take the Relational Database Service (RDS), a customer is responsible for:
- Networking, firewalls,
- Authentication and authorization,
- Encryption at rest.
Other things such as OS patches, maintenance, platform, and database engine updates are the responsibility of AWS.
Interestingly, when using EC2 (Elastic Compute Cloud), the customer has much more responsibility. To demonstrate how responsibility scales from service to service, we provide a comparison diagram below.
Security pillars in product development: Codica’s experience
We use infrastructure as code (IaC) to create resources quickly and repeatably, without the risk of forgetting something.
IaC is an approach to infrastructure management through code instead of through manual processes. But, why is infrastructure as code so critical?
The benefits of IaC are the following:
- The option to scale out automatically,
- The cost savings, because you don't need to pay for over-provisioned hardware,
- The ability to offload large parts of the security process to the primary cloud provider.
At Codica, we use Terraform in AWS for our web projects. Terraform is one of the most popular tools for implementing infrastructure as code. It allows creating and updating the AWS infrastructure.
Terraform helps us implement the AAA model: when a person logs in and performs actions, those actions can be traced. The CIA model serves as a way of thinking and reasoning about how to build a secure architecture.
The Identification, Authentication, and Authorization are provided by AWS. By the way, in AWS, the process of granting or denying access happens continuously.
Besides, creating users and assigning them policies that grant access is also our part of the verification process. We will not cover that here, but you can read more in the AWS documentation.
We enable AWS API logs using CloudTrail (a service that keeps an ongoing record of events in an AWS account) and save them in log groups. Thus, any action made by a person cannot be denied, and every step is recorded. Besides that, we use metric filters, so if someone does something unusual, we instantly get notified about it.
AWS provides us with a huge amount of security products and features, according to their official whitepaper. We will discuss each of those major topics one by one with examples.
As a bonus, we will demonstrate how Terraform helps us to build secure software products.
Network security
In this part, we will talk about network security on AWS. So, let's take a basic VPC (Virtual Private Cloud) without subnets and routing tables and create one security group with basic rules. Below is an example of an insecure security group configuration.
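The original example was shown as an image; a comparable insecure security group, reconstructed as a sketch with illustrative names, might look like this:

```hcl
# Insecure example: open to the world, no description, no tags.
resource "aws_security_group" "lb" {
  name   = "sg"
  vpc_id = aws_vpc.main.id # assumes a VPC defined elsewhere

  ingress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"] # tfsec flags this: ingress open to the internet
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"] # unrestricted egress is flagged as well
  }
}
```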
We can see that we have a lot of issues in the single security group.
So, let's follow the recommendations from tfsec (a security scanner for Terraform code) and see what we end up with. This makes the infrastructure configuration considerably more secure.
Besides, we recommend using the standalone security group rule resource (aws_security_group_rule). It allows you to change rules without recreating the security group.
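A hardened version, sketched here under assumed names, could look like this:

```hcl
# Hardened sketch: in-VPC traffic only, descriptive names, tags,
# and standalone rule resources that can change without recreating the group.
resource "aws_security_group" "internal_lb" {
  name        = "internal-lb-sg"
  description = "Internal load balancer, VPC traffic only"
  vpc_id      = aws_vpc.main.id # assumes a VPC defined elsewhere

  tags = {
    Project   = "example"
    ManagedBy = "terraform"
  }
}

resource "aws_security_group_rule" "lb_ingress" {
  type              = "ingress"
  description       = "HTTPS from inside the VPC"
  from_port         = 443
  to_port           = 443
  protocol          = "tcp"
  cidr_blocks       = [aws_vpc.main.cidr_block] # inside-VPC traffic only
  security_group_id = aws_security_group.internal_lb.id
}

resource "aws_security_group_rule" "lb_egress" {
  type              = "egress"
  description       = "TCP to targets inside the VPC"
  from_port         = 0
  to_port           = 65535
  protocol          = "tcp"
  cidr_blocks       = [aws_vpc.main.cidr_block]
  security_group_id = aws_security_group.internal_lb.id
}
```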
If our load balancer is internal, this will be a good security group with no issues. So, what we have just done:
- Improved naming for auditing purposes;
- Made both ingress and egress rules much stricter by allowing only inside-VPC traffic;
- Added tags for management purposes.
These examples are applicable to a lot of other resources and services. For instance, in a VPC it's better to configure Flow Logs so that we can review traffic. It is also better to encrypt those logs to prevent unauthorized access and data leaks.
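A minimal sketch of such a setup, assuming a KMS key and a delivery role defined elsewhere, might be:

```hcl
# VPC Flow Logs delivered to an encrypted CloudWatch log group.
resource "aws_cloudwatch_log_group" "flow" {
  name       = "vpc-flow-logs"
  kms_key_id = aws_kms_key.logs.arn # assumed KMS key for log encryption
}

resource "aws_flow_log" "main" {
  vpc_id          = aws_vpc.main.id
  traffic_type    = "ALL" # record accepted and rejected traffic
  log_destination = aws_cloudwatch_log_group.flow.arn
  iam_role_arn    = aws_iam_role.flow_logs.arn # assumed role allowing log delivery
}
```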
Inventory and config management
Inventory and configuration management are helpful when we want to know how our architecture changed. For example, if our server has become more powerful, we can use the configuration recording to understand it.
In short, inventory and configuration management are needed to store all data about the current state of architecture. So, it helps us understand how our configuration and architecture changed.
We use tfsec, terrascan, and driftctl for security scans and config recording (with Terraform states). Also, we run CloudWatch metric filters against the CloudTrail log group, so we are alerted about configuration changes. That is why we don't use AWS Config that much; however, we still use its config recording.
Now, let's see a few examples of how we use them in our architecture solutions. We follow GitOps practices, so we use the repository as our infrastructure configuration storage and always keep it up to date.
For each new project, we create a Terraform repository in the same group. Also, we configure CI/CD (Continuous Integration, Continuous Delivery) for the IaC.
Below you can see a typical Terraform infrastructure layout. By the way, we also write most of the modules for infrastructure.
Also, we use checkers, from basic Terraform built-in (like Terraform validate) to tfsec, Infracost, and others. It helps us implement cost control and do cost optimization if needed.
Below we can see an example of Infracost in our Gitlab CI pipeline.
So, in our workflow, GitOps is a must, and so is Terraform. The more infrastructure is covered by IaC, the better. For example, if we use ECS, we export task definitions to Terraform. Furthermore, we define clusters, log groups, auto-scaling groups, and Fargate services there. If we need security checks, we also configure them with Terraform.
Thanks to Terraform, we have many useful features built in. For example, they enable us to automatically check the price of the architecture and see how it changes over time.
Besides, Terraform allows us to implement these features at almost any scale. In doing so, we still have config recorded with Terraform state and cost management.
Encryption
Encryption is a technical process in which information is converted into a secret code. It hides the data that you send, receive, or store. There are two types of encryption - symmetric and asymmetric.
Symmetric encryption uses one key to encrypt and decrypt data, so only a party that has this key can decrypt it.
To compare, asymmetric encryption uses a pair of keys. These are public and private keys. The public key can be transmitted over an insecure channel and is used to encrypt the message. The message is decrypted using a private key known only to the recipient.
In general, encryption is one of the best ways to protect your business's sensitive data from cyber threats. It helps to build a secure architecture of the network.
We do not encrypt everything because this is not efficient. Let's look at the image of data examples and define what is sensitive. As you can see, public assets and pictures of applications are obviously not sensitive. At the same time, database users and credentials are sensitive.
It’s always good to understand what is sensitive and what is not. We encrypt all confidential data with keys from AWS KMS (Key Management Service, the Amazon service for creating and controlling the keys that encrypt data stored in AWS). Also, we use Terraform for encryption. Let's see the database example.
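The original example was an image; a minimal reconstruction, with assumed names and private subnets defined elsewhere, might look like this:

```hcl
# Random credentials so no human-chosen secrets end up in the config.
resource "random_password" "db" {
  length  = 32
  special = false
}

# Subnet group that places the database in private subnets only.
resource "aws_db_subnet_group" "private" {
  name       = "db-private"
  subnet_ids = [aws_subnet.private_a.id, aws_subnet.private_b.id] # assumed subnets
}

resource "aws_db_instance" "main" {
  identifier             = "app-db"
  engine                 = "postgres"
  instance_class         = "db.t3.micro"
  allocated_storage      = 20
  username               = "app_user"
  password               = random_password.db.result
  db_subnet_group_name   = aws_db_subnet_group.private.name
  publicly_accessible    = false                # no public access
  storage_encrypted      = true                 # encryption at rest
  kms_key_id             = aws_kms_key.db.arn   # assumed customer-managed KMS key
  vpc_security_group_ids = [aws_security_group.db.id] # in-VPC access only
}
```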
Make RDS not publicly accessible, and enable encryption. Configure proper subnet groups that place the database in private subnets. Generate the username and password from random strings.
It is also good to enable logs, keep the keys policy secure from wildcards, and configure security groups to make it accessible only inside VPC. Thus, you will have a well-secured database.
Identity and access
Without proper access control in place, anyone can access the environment and perform unintended actions. With AWS Identity and Access Management (IAM), we can create users and assign them policies that control what access they have. So, we are able to manage their access to AWS resources and services through permissions.
As you may have already noticed, we use Terraform a lot. We also create users and configure their policies with Terraform. Why? We create users in Terraform to keep track of them, but we do not create access keys there, so the keys never end up in our Terraform state. We generate them manually instead.
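A sketch of this pattern, with a hypothetical user and a deliberately narrow policy, might look like this:

```hcl
# User is managed by Terraform, but no aws_iam_access_key resource
# is created here, so no credentials are stored in the state.
resource "aws_iam_user" "developer" {
  name = "jane.dev" # hypothetical user name
}

resource "aws_iam_user_policy" "developer_readonly" {
  name = "s3-read-only"
  user = aws_iam_user.developer.name

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["s3:GetObject", "s3:ListBucket"]
      Resource = "*" # in practice, scope this to specific bucket ARNs
    }]
  })
}
```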
We also enable MFA on each console user. It's a must. We always ask our clients to enable MFA on root and delete root keys. We use secret scanners to ensure that we don't have any keys in code or in Terraform configuration.
Sometimes we provide access to the AWS console for our developers. In these cases, we use strict policies.
For SSH access, we use a jump host (ProxyJump) and add SSH keys so that usually only a few developers have access to the production system. For application credentials, we use per-service policies and separate users.
Monitoring and logging
Simply put, logging is a record of everything that happens in the application and its environment. These records are written to files and log streams.
Monitoring is about setting up tools that watch the state of the software and the servers it runs on. Good monitoring ensures that we get notified about problems within seconds of their appearance. Also, it lets us spot a lack of performance and change the servers' capacity. While implementing DevOps at Codica, we had to change our monitoring a few times to find the setup that suits our needs best. As a result, we’ve chosen Prometheus with Alertmanager, a tool for alerting and monitoring.
Now, we will describe the workflow of our monitoring and logging. So, we add Prometheus to each of our production clusters. If the project is in development, then in staging too.
The whole Prometheus configuration is automated. Running as a separate service in the ECS cluster, it collects all required metrics and then sends them to our main Prometheus. Alertmanager notifies us in PagerDuty, and then we receive all alerts.
We also connect Cloudwatch alarms to our PagerDuty to monitor both instances with the application and AWS account. Let's see an example of a Cloudwatch metrics filter (created with Terraform) and alarm. It will notify us about any changes in security groups.
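The original snippet was an image; a sketch of such a metric filter and alarm, with illustrative names and an SNS topic assumed to be wired to PagerDuty, could look like this:

```hcl
# Metric filter over the CloudTrail log group: count security group changes.
resource "aws_cloudwatch_log_metric_filter" "sg_changes" {
  name           = "security-group-changes"
  log_group_name = aws_cloudwatch_log_group.trail.name # assumed CloudTrail log group
  pattern        = "{ ($.eventName = CreateSecurityGroup) || ($.eventName = AuthorizeSecurityGroupIngress) }"

  metric_transformation {
    name      = "SecurityGroupChanges"
    namespace = "Security"
    value     = "1"
  }
}

# Alarm that fires on any matching event and notifies an SNS topic.
resource "aws_cloudwatch_metric_alarm" "sg_changes" {
  alarm_name          = "security-group-changes"
  metric_name         = "SecurityGroupChanges"
  namespace           = "Security"
  statistic           = "Sum"
  period              = 300
  evaluation_periods  = 1
  threshold           = 1
  comparison_operator = "GreaterThanOrEqualToThreshold"
  alarm_actions       = [aws_sns_topic.security.arn] # assumed topic sent to PagerDuty
}
```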
CloudTrail will collect the API data and then export it to Cloudwatch logs. Then, if an event occurs, we will have an alert from SNS (Amazon Simple Notification Service) to PagerDuty.
Thus, we will be notified about the creation of a security group. But this is just an example; usually, the rules are more complex. So, in the end, we get notified about almost every security event that happens in our AWS accounts.
Also, we use RDS events. They let us know if backups were made successfully. All the alert management happens in PagerDuty with its event rules.
Security of containers
Having secure architecture of web solutions is good, but the application and container security are essential too. In this section, we will briefly explain and demonstrate how we handle this process.
Custom users for your containers are a must. Remember that even root inside a container has significantly fewer privileges than root on the host. A non-root user, in turn, lacks kernel capabilities such as CAP_NET_BIND_SERVICE, so it can't bind low ports. Using kernel capabilities, we can fine-tune what non-root users are allowed to do.
By the way, you can read about kernel caps in this blog article about Linux Capabilities.
So, why is it essential to use a different user? Because under certain conditions it is possible to escape from the container and become root on the host. You can read more in this blog post about Docker container escapes; it also lists the requirements for escaping as root and gives practical examples.
To summarize other concepts, we will list some and explain why to follow them.
Let's see an example of a relatively secure Dockerfile (a text file containing instructions to build Docker images). Further, we will explain why it is secure.
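The original Dockerfile was shown as an image; a comparable multi-stage sketch, with assumed base images, paths, and entry point, could look like this:

```dockerfile
# Build stage: dependencies and build tooling stay here
# and never reach the final image.
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Runtime stage: minimal base, non-root user, only build artifacts.
FROM node:20-alpine
WORKDIR /app
RUN addgroup -S app && adduser -S app -G app
COPY --from=build --chown=app:app /app/dist ./dist
COPY --from=build --chown=app:app /app/node_modules ./node_modules
USER app
# Port above 1024, so no extra capabilities are needed to bind it.
EXPOSE 3000
CMD ["node", "dist/server.js"]
```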
Never run as the root user, and never mount sensitive directories (/bin, for example). Don't include secrets in Dockerfiles. Why? Because even after an image is built, it is possible to recover the Dockerfile instructions from that image.
Avoid setuid/setgid permission bits, because they make escaping from the container easier. We recommend using multi-stage builds: they make the image smaller and reduce the attack surface.
So, according to all those recommendations, this Dockerfile was created. It has 0 vulnerabilities and a very small attack surface. The image size is about 120 megabytes.
In short, the general recommendation is to use a private registry and encrypt it if possible. Also, use scanners both at build time and in the registry. Don't build containers so that they download anything at startup; the app and everything it needs must be packaged into the image. Downloading at startup was a popular practice in the early days of containers, but it results in containers that differ from one another even when built from the same image.
There are a lot of scanning solutions, but we use Trivy or Snyk plus the built-in AWS ECR scanning. Now let's find out more about daemonless builds and why docker build is problematic.
The docker command converts each command into an API call and sends it to the Docker daemon via the Docker socket. In the end, any process that has access to that daemon can send such a call.
The Docker daemon itself requires a lot of permissions. Also, permissions get “mixed”: anyone who can run docker build can also run docker run, and so on.
In fact, you may find out more about secure ways for running containers in this article.
To avoid those problems, we use Docker’s BuildKit and Kaniko, a daemonless image-building engine. BuildKit also supports parallel builds for multi-stage images, has much better caching, and gives us new Dockerfile instructions.
Undoubtedly, the security of the final product is no less important than its quality. A product that cannot store its users' information safely loses both functionality and trust.
In this article, we have shown our way to implement scalable security with Terraform on AWS. Also, we’ve talked about containers and why it's essential to keep them secure. You've seen a lot of examples and theories. So, follow recommendations and make your web product secure.
At Codica, our experts always support projects after their launch. Therefore, we keep the application code up to date. When the client's business scales, we are ready to revisit the architecture's security upon request.
If you are looking for a reliable software partner, contact us. We are always eager to help you develop a robust and secure web product.