Containers

LXC, or Linux Containers, is an operating-system-level virtualization method that allows multiple isolated Linux systems (containers) to run on a single control host using the Linux kernel’s features. It provides a user-space interface for the kernel’s facilities like cgroups and namespaces to offer a containerized environment.

LXC and LXD are important tools worth learning. Take some time to study how to set up containers in which to build your applications.


How LXC Works

Namespaces:

In computing, particularly within the context of Linux systems, a namespace is a kernel-level feature that provides process isolation. It allows processes to have their own view of the system’s resources, such as process IDs (PIDs), network interfaces, user IDs (UIDs), mount points and more. This isolation mechanism is fundamental to technologies like containers (e.g., LXC, Docker).

Namespaces work by creating different “views” or “scopes” of various system resources. Within any one namespace, each identifier (a PID, an interface name, a hostname) is unique, but the same identifier can be reused freely in other namespaces.

Processes in a PID namespace see their own process hierarchy, in which the namespace’s first process is PID 1 (similar to the init process in a traditional OS). This ensures that processes inside a container don’t interfere with or see processes outside their namespace.

Network Namespace provides an isolated network stack. Each network namespace has its own routing tables, network devices, sockets, and so on. This allows containers to have their own network configuration, including IP addresses, without conflicting with others on the same host.
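A quick way to see this isolation in action, sketched here under the assumption that the iproute2 tools are available and that you are root (the namespace name demo is made up):

```shell
# Create a network namespace and show that it starts with only a loopback
# device, completely independent of the host's interfaces. Requires root.
if [ "$(id -u)" -eq 0 ] && command -v ip >/dev/null 2>&1; then
    ip netns add demo                 # new, empty network stack
    ip netns exec demo ip link show   # only "lo", and it is down
    ip netns del demo                 # clean up
else
    echo "skipping: needs root and iproute2"
fi
```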

Mount Namespace provides a separate view of the filesystem. Changes to mount points in one namespace do not affect others. It is useful for containers to have their own file system structure, including different root directories (/).

UTS Namespace isolates the hostname and NIS domain name, allowing containers to have unique hostnames even if running on the same physical machine.

IPC Namespace segregates inter-process communication resources like System V IPC objects or POSIX message queues, preventing processes in different namespaces from communicating unintentionally via IPC.

User Namespace maps user and group IDs inside the namespace to different IDs outside. This means root inside a user namespace might not have root privileges on the host, enhancing security by allowing processes to run with elevated privileges inside a container without giving them real root access on the host.
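A minimal sketch of this mapping, assuming the util-linux unshare tool is available (some kernels and sandboxes disable unprivileged user namespaces, so the demo degrades gracefully):

```shell
# Map the current user to root inside a new user namespace. "root" here has
# no extra privileges on the host; id -u prints 0 only inside the namespace.
if unshare --user --map-root-user id -u 2>/dev/null; then
    echo "inside the namespace, id -u reported 0"
else
    echo "unprivileged user namespaces are disabled here"
fi
```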

Namespaces are created using system calls like unshare(), clone(), or setns(). For instance, unshare(CLONE_NEWNS) creates a new mount namespace. Processes can join existing namespaces using setns(). Linux provides files under /proc/[pid]/ns to manage namespaces for each process. These symbolic links point to the namespace files, which can be used to manipulate or share namespaces.
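You can inspect these namespace links for the current shell without any special privileges:

```shell
# Every process exposes its namespace membership as symlinks in /proc/[pid]/ns.
# The link target names the namespace type and its inode number.
ls /proc/self/ns
readlink /proc/self/ns/pid    # e.g. pid:[4026531836]
```

Two processes are in the same namespace exactly when these links point to the same inode.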

Containers leverage namespaces to provide the illusion of running on a separate system, improving security, stability and resource management. By isolating system resources, namespaces help in reducing the attack surface; if one namespace is compromised, it doesn’t necessarily affect others. Developers can work in isolated environments that mimic production setups without interfering with the host system.

Namespaces are part of the Linux kernel; they rely on kernel support, which means they’re not universally portable across all OSs or kernel versions without additional layers (like QEMU for other OSes). While namespaces provide significant isolation, there have been vulnerabilities that allow escaping from one namespace to another, emphasizing the need for comprehensive security measures.

Namespaces are a cornerstone of modern Linux system design, enabling advanced features in system administration, security, and application deployment strategies.

Control Groups (cgroups):

Control Groups, or cgroups, are a Linux kernel feature that allows for resource allocation and limitation for groups of processes. They provide a mechanism to manage, monitor, and limit the resource usage of a collection of processes, ensuring that one process or group of processes does not consume all available system resources.

Cgroups are organized in a hierarchical tree structure. Each node in this tree can define resource limits or behaviors for the processes it contains. Subsystems or controllers are kernel components that control specific types of resources. Examples include cpu, memory, blkio (block I/O), net_cls (network class) and cpuset (CPU sets).

The cpu controller limits CPU usage by specifying time slices or shares, ensuring fair scheduling or priority among groups. The memory controller sets limits on how much memory a group can use, including both RAM and swap space, and can enforce actions when limits are hit (such as killing processes). The blkio controller manages disk I/O bandwidth, ensuring that one group doesn’t monopolize disk access.

With net_cls, you can mark network packets for different treatment by traffic control. cgroups also provide accounting: they track resource usage, producing statistics that can be used for billing, monitoring, or debugging purposes.

Processes can be moved into or out of cgroups dynamically, and when a process forks, its children inherit its cgroup membership unless explicitly moved. Administrators interact with cgroups through a virtual filesystem (cgroupfs) mounted under /sys/fs/cgroup/ (or /cgroup on older systems). Each subsystem has its own directory there, where you can create groups, set parameters, and monitor usage.

Within each cgroup directory, files represent different controls or metrics for the subsystem, and writing to these files configures the cgroup’s behavior. Hard limits are absolute caps on resource usage (e.g., max memory), while soft limits trigger actions when crossed (e.g., under memory pressure). Relative prioritization works through mechanisms like CPU shares, where one group gets more CPU time than others when contention occurs.

cgroups are essential for container engines like Docker or LXC, where they ensure that containers do not exhaust host resources. In multi-tenant environments, they ensure that each user or application gets a fair share of resources. System services can also be grouped to manage their collective resource consumption, which is especially useful in cloud or server environments.

To create a cgroup under the legacy v1 hierarchy, you might make a directory under /sys/fs/cgroup/cpu/ named mygroup, then write 100 to mygroup/cpu.shares to give the group 100 CPU shares for scheduling. You move a process into the group by writing its PID into the group’s tasks file. (On the unified cgroup v2 hierarchy, the equivalent files are cpu.weight and cgroup.procs.)
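The same steps as a runnable sketch (legacy cgroup v1 interface; this requires root, and mygroup is just an example name):

```shell
# Create a cgroup, give it 100 CPU shares, and move this shell into it.
# Requires root and a mounted cgroup v1 cpu controller; on cgroup v2 the
# equivalent knobs are cpu.weight and cgroup.procs.
if [ -w /sys/fs/cgroup/cpu ]; then
    mkdir -p /sys/fs/cgroup/cpu/mygroup
    echo 100 > /sys/fs/cgroup/cpu/mygroup/cpu.shares
    echo $$ > /sys/fs/cgroup/cpu/mygroup/tasks
    cat /sys/fs/cgroup/cpu/mygroup/cpu.shares    # verify the setting
else
    echo "skipping: cgroup v1 cpu controller not writable"
fi
```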

Managing cgroups involves some kernel overhead, though it’s generally minimal. The hierarchical structure and multiple subsystems can be complex to configure correctly without impacting performance or fairness. While cgroups provide resource isolation, they do not inherently provide security isolation; this requires additional mechanisms like namespaces.

Control groups are a powerful tool for managing system resources, ensuring that processes or services do not dominate system capabilities at the expense of others. They work in tandem with namespaces to form the backbone of modern container technologies, providing both resource control and isolation necessary for efficient, secure multi-tenant environments.

Chroot is traditionally used to change the root directory for a process, providing a simple form of isolation. More modern approaches use OverlayFS or AUFS, where a read-only base filesystem is layered with writable layers for modifications, allowing for lightweight container images.

LXC can drop system capabilities, reducing the attack surface within containers. Additional security modules (SELinux/AppArmor) can enforce mandatory access control policies.

Containers can be given their own network interfaces through virtual Ethernet devices, bridged networks, or even MACVLAN to appear as separate hosts on a network.

Inside the container, a process acts as PID 1, similar to init or systemd on a regular Linux system, managing the lifecycle of other processes.

Lifecycle Management:

Containers are created from templates or by copying existing root filesystems. Commands like lxc-create set up the initial environment. Once created, containers can be started with lxc-start, which runs the specified init system inside the container. Containers can be stopped gently with lxc-stop or forcefully terminated. Containers are removed using lxc-destroy, which cleans up all resources associated with the container.
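The lifecycle above can be sketched as a shell session; the container name demo and the download-template arguments are examples, and the commands require the LXC tools to be installed (plus network access for the image download):

```shell
# Full create/start/stop/destroy lifecycle of one container.
if command -v lxc-create >/dev/null 2>&1; then
    lxc-create -n demo -t download -- -d alpine -r 3.20 -a amd64
    lxc-start -n demo
    lxc-info -n demo          # state, PID, IP address
    lxc-stop -n demo
    lxc-destroy -n demo
else
    echo "skipping: LXC tools not installed"
fi
```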

LXC provides the foundation for more complex container solutions like Docker, which adds layers of management, distribution, and orchestration on top of LXC’s capabilities.

LXD (LXC Daemon)

LXD is an extension of LXC, providing a higher-level user experience for managing containers. It’s designed to be a container “hypervisor”, offering a REST API, a command-line client, and integration with various cloud orchestration tools.

LXD runs as a daemon which manages all the containers on a machine. Interaction with LXD is through its client (lxc command line) which communicates with the daemon over a Unix socket or network.

LXD uses images (pre-configured root file systems) for quick container deployment. These images can be from local storage or remote servers like those hosted by Canonical.

LXD supports snapshots, allowing for easy backups and rollbacks of container states. Containers can be migrated between LXD hosts with minimal downtime, useful for maintenance or load balancing.

Enhanced security features include mandatory access control (MAC) enforcement, better integration with systemd, and user namespace mapping for improved isolation. LXD extends LXC’s networking capabilities with easier setup of complex network configurations, including support for OVS (Open vSwitch) for advanced networking.

LXD is often installed via snap or native packages. Commands like lxc launch, lxc stop, lxc list, and lxc info provide an intuitive interface for managing containers. LXD offers a simpler, more powerful management interface than LXC alone, and it is better integrated with modern Linux features and security enhancements.
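A minimal LXD session might look like this; the container name c1 and the ubuntu:22.04 image alias are examples, and the commands assume LXD is installed and initialized:

```shell
# Launch, inspect, and remove a container with the LXD client.
if command -v lxc >/dev/null 2>&1 && lxc info >/dev/null 2>&1; then
    lxc launch ubuntu:22.04 c1   # fetch the image if needed and start it
    lxc list                     # running containers, IPs, snapshots
    lxc stop c1
    lxc delete c1
else
    echo "skipping: LXD not available"
fi
```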

LXD is still primarily Linux-focused, although some support for other OSes exists through virtualization. It requires familiarity with container concepts, though it is more user-friendly than pure LXC.

Both LXC and LXD are pivotal in the world of containerization, with LXC providing the foundational technology for lightweight isolation, and LXD enhancing it with a more user-friendly, feature-rich management layer. Together, they offer robust solutions for deploying applications in isolated environments, each with its own set of use cases depending on the level of management control and complexity required.

The three major ways to isolate your projects from each other and from your local operating system are virtual machines (VirtualBox, or even better, QEMU and KVM), containers (Docker Community Edition or LXC/LXD), and application sandboxes (Flatpak and Snapcraft).

Definitely try out VirtualBox. Install it and use it, so you will know how to do so when that is the best solution for one of your projects. You can try out entire operating systems in QEMU or VirtualBox, which makes them very valuable tools. And it’s relatively easy to learn and get started using.

Docker CE, which originally built on LXC, saves you from having to set up an entire operating system in your virtual environment. Even though Docker CE is a lot lighter than a virtual machine, it is still enterprise-class technology that might be overkill for home-brew websites.

LXD brings containerization down to the home-office level. You can use any of these systems in your home office; you decide. LXD containers will generally be smaller and faster than the virtual machine options.

LXD is LXC with a more advanced user interface; one important improvement is that it can be controlled remotely. Even though LXC and LXD containers carry their own userland, they share the host kernel, so they are much smaller than virtual machines run under VirtualBox or VMware.

Docker has a free and open-source core (Docker Engine) alongside proprietary offerings, and it enables you to build applications directly in a container without a full guest operating system. Containers share your computer’s kernel, which makes them smaller and lighter than virtual machines.

QEMU is a fast, open-source virtualization technology that requires somewhat more technical skill to operate properly. After glancing at its website, I think I like it better than all the other options.

QEMU is a generic and open-source machine emulator and virtualizer. As an emulator, it can run software built for one architecture on another; as a virtualizer, it can run KVM and Xen virtual machines with near-native performance.

Learn and set up LXD, Docker, or QEMU on your system, so you can have a directory in your filesystem for each website or other application. Each application will be isolated from all the others and from your computer’s operating system. They’ll all be backed up and synchronized with a GitHub repository, so that you’ll be able to clone your projects and work on them on any computer.

You can also build separate containers for your database, your web server, and your Django or Drupal application. They all need to be properly containerized in a way that enables them to communicate with each other, without being able to inappropriately influence each other or your local operating system. You’ll find your own best practices.

QEMU, LXC, Git, and SSH are components of your local development environment. You build your WordPress, Drupal, Flask, or Django projects within LXC containers, which you manage through LXD. You record and back everything up using Git and a remote repository, and then connect them to your live websites on remote commercial servers using SSH. The one thing I prefer using commercial servers for is web hosting, for security and stability.

Read lots of books and watch many videos about all these technologies. Take online courses and practice using these 21st century tools. Work on some aspect of your development workflow, as much as possible, for at least 5 days a week. Keep working. Keep seeking the truth and adding value to society, for as long as you are alive on earth.

Create a new directory for each new project in your Projects or Websites directory. The first thing to do is create a container in there. Then create a Git repository in the container, and install Django or Flask. Use SQLite in your local development environment and PostgreSQL on your production websites. You’ll also have to set up SSH keys for each project.
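A minimal sketch of the host-side bootstrap, assuming Git and OpenSSH are installed (the directory name mysite and the key name deploy_key are examples; the container-creation and framework-install steps depend on which tool you chose):

```shell
# Create a project directory, initialize Git, and generate a per-project
# SSH key. Run once per new project; a temp dir stands in for ~/Projects.
project="$(mktemp -d)/mysite"
mkdir -p "$project"
cd "$project"
git init -q .                                    # version control from day one
ssh-keygen -q -t ed25519 -N "" -f deploy_key     # per-project key pair
ls -a                                            # .git, deploy_key, deploy_key.pub
```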