
OVERVIEW OF DOCKER:

The architecture of Docker uses a client-server model and comprises the Docker Client, Docker Daemon, Docker Images, Docker Containers, Docker Registry, Docker Volumes, and more. Let's look at each of these in some detail.

Docker Client

The Docker Client enables users to interact with the Docker Daemon. The client can reside on the
same host as the daemon or connect to a daemon on a remote host, and a single client can
communicate with more than one Docker Daemon. It provides a command line interface
(CLI) that allows you to issue build, run, and stop commands to the daemon. The
main purpose of the Docker Client is to direct the daemon to pull Docker Images from
a Docker Registry and run them on the host.
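A typical client session might look like the following sketch (the image and container names are illustrative; each command is sent by the client to the daemon over the Docker API):

```shell
# Pull an image from a registry, then ask the daemon to run it.
docker pull nginx:latest          # download the image from Docker Hub
docker run -d --name web nginx    # start a container in the background
docker ps                         # list running containers
docker stop web                   # stop the container

# The same client can also target a daemon on a remote host
# ('remote-host' is a placeholder address):
docker -H tcp://remote-host:2375 ps
```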

Docker Daemon
The Docker Daemon is a persistent background process that manages Docker Images, Docker
Containers, Docker Networks, Docker Volumes, and more. It constantly listens for Docker API
requests and processes them.
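The CLI is just one client of that API. On a standard Linux install the daemon listens on a Unix socket, and you can query the public Docker Engine API endpoints directly, for example:

```shell
# Ask the daemon for its version and for running containers, bypassing the CLI.
# /version and /containers/json are documented Docker Engine API endpoints.
curl --unix-socket /var/run/docker.sock http://localhost/version
curl --unix-socket /var/run/docker.sock http://localhost/containers/json
```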

Docker Image

A Docker Image is a read-only binary template used to build Docker Containers. It contains
metadata that describes the container's capabilities and needs, and it is also the unit used to store
and ship applications. An image can be used on its own to build a container, or customized with
additional elements to extend the current configuration. An image can be shared across teams
within an enterprise using a private Docker Registry, or shared with the world using a public
registry such as Docker Hub. Images are a core part of the Docker experience, as they enable
collaboration between developers in a way that was not possible before.
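Customizing an image to extend an existing one is usually done with a Dockerfile. A minimal sketch, with an inline Dockerfile and made-up names (`myteam/hello`, `app.py`):

```shell
# Build a new image on top of an existing base image.
mkdir -p demo && cd demo
cat > Dockerfile <<'EOF'
# Base image pulled from a registry; our files are added as new layers on top.
FROM python:3.12-slim
COPY app.py /app/app.py
CMD ["python", "/app/app.py"]
EOF
echo 'print("hello from a container")' > app.py
# Tag the result so it can later be pushed to a registry.
docker build -t myteam/hello:1.0 .
```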

Docker Container

A Docker Container is an encapsulated environment in which you run applications. The
container is defined by its image plus any additional configuration options provided on starting
the container, including but not limited to network connections and storage options. A
container only has access to the resources defined in its image, unless additional access is
granted through those start-up options. You can also create a new image based on the
current state of a container. Since containers are much smaller than VMs, they can be spun up in
a matter of seconds, and result in much better server density.
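Both points can be sketched with the docker CLI (container and image names here are illustrative):

```shell
# Runtime options (a published port, a named volume) combine with the image
# to define the container.
docker run -d --name db -p 5432:5432 -v pgdata:/var/lib/postgresql/data postgres:16

# Capture the container's current state as a new image.
docker commit db myteam/db-snapshot:1.0
```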

Docker Network

Docker implements networking in an application-driven manner, offering various options
while maintaining enough abstraction for application developers. There are essentially two types
of networks available, the default Docker Networks and user-defined Docker Networks. By
default, you get three networks on installation of Docker: none, bridge, and host.
The none and host networks are part of Docker's network stack. The default bridge network
automatically creates a gateway and IP subnet, and all containers attached to this network can
talk to each other by IP address. This network is not commonly used in production, as it does not
scale well and has constraints in terms of network usability and service discovery.
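User-defined networks address some of those constraints; in particular, a user-defined bridge adds DNS-based service discovery, so containers can reach each other by name. A small sketch (network and container names are made up):

```shell
# The three default networks created at install time:
docker network ls

# Create a user-defined bridge and attach containers to it.
docker network create app-net
docker run -d --name api --network app-net nginx
# Name-based discovery: 'api' resolves to the nginx container's address.
docker run --rm --network app-net alpine ping -c1 api
```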

Docker Volume

You can store data within the writable layer of a Docker Container, but this requires a storage
driver, the data is not persistent (it perishes when the container is removed), and it is not
easy to transfer elsewhere. With a Docker Volume, you can create persistent storage with the
ability to rename and list volumes, and to list the containers associated with a volume. The volume
sits on the host file system, outside the container's copy-on-write mechanism, and is fairly
efficient.
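A short sketch of the volume lifecycle (volume and container names are illustrative):

```shell
# Named volumes live on the host, outside any container's writable layer.
docker volume create appdata
docker run -d --name app1 -v appdata:/data alpine sleep 1000
docker rm -f app1
# The data in 'appdata' survives container removal and can be reattached:
docker run --rm -v appdata:/data alpine ls /data
# List containers associated with a volume (standard docker filter syntax):
docker ps -a --filter volume=appdata
```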

Docker Registry

A Docker Registry is a service that provides locations from which you can store and download
Docker Images. In other words, a Docker Registry contains repositories that host one or more
Docker Images. Docker Hub is the public registry managed by Docker, Inc. Docker also
provides a private registry that can be hosted in your own data center.
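Pushing to either kind of registry follows the same tag-then-push pattern; `registry.example.com` and `myapp` below are placeholders:

```shell
# Tag an image with the registry address, then push it there.
docker tag myapp:1.0 registry.example.com/team/myapp:1.0
docker push registry.example.com/team/myapp:1.0

# Docker's open-source registry server can itself be run as a container:
docker run -d -p 5000:5000 --name registry registry:2
docker tag myapp:1.0 localhost:5000/myapp:1.0
docker push localhost:5000/myapp:1.0
```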

Key Differences between VM & Container

Before learning the key differences between a VM and a Container, let's spend some time
discussing how a scenario would be implemented on both. Imagine a company/team
developing 3 software projects in parallel, each comprising a different technology stack. In order
to avoid unwanted issues, they pin specific versions, from development to production, of all
the software packages, configurations, and operating systems required to run each stack.
They make sure every developer has an identical development environment to maintain integrity
and consistency during software delivery. Assume the stack configurations of their software
projects are as given in the table below.

First, let's see how to implement a development environment for the above software projects on a
Linux machine using VMs. Assume that we power on the VMs one by one to get a better
understanding.
Second, let's see how to implement a development environment for the same software projects on
a Linux machine using Containers, again powering on the Containers one by one to get a
better understanding.

Once the scenario analysis is done, the key differences between VM and Container become
obvious. Take a look at the table below for the key differences between Container and
VM.
In Linux, containers are an operating-system-level virtualization technology for providing multiple
isolated Linux environments on a single Linux host/operating system. Unlike VMs, Containers do
not run dedicated guest operating systems. Rather, they share the host operating system's kernel
and bundle only the guest distribution's user-space system libraries and binaries to provide the
required OS capabilities. Since there is no dedicated operating system to boot, Containers start
much faster than VMs.
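The shared kernel is easy to observe: the kernel version reported inside a container is the host's, even when the container's image comes from a different distribution.

```shell
# All three commands report the same kernel version; only the user-space
# libraries and binaries (Ubuntu vs Alpine) differ between the containers.
uname -r
docker run --rm ubuntu uname -r
docker run --rm alpine uname -r
```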

Containers make use of Linux kernel features such as namespaces, AppArmor and seccomp security
profiles, chroot, and cgroups to provide an isolated environment similar to VMs. Linux security
modules ensure that access to the host machine and the kernel from Containers is
properly managed to avoid any intrusion activities. In addition, a Container can run a Linux
distribution different from that of its host operating system, provided both operating systems can
run on the same CPU architecture.
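The same kernel features can be exercised outside Docker; a sketch using util-linux's `unshare` (needs root) and Docker's resource flags (the cgroup file path assumes a cgroup v2 host):

```shell
# A new PID namespace with its own /proc: ps sees only the processes
# created inside the namespace, not the host's.
sudo unshare --pid --fork --mount-proc sh -c 'ps ax'

# cgroup limits as Docker exposes them: the memory cap set with --memory
# appears as memory.max inside the container (cgroup v2 layout).
docker run --rm --memory 64m --cpus 0.5 alpine cat /sys/fs/cgroup/memory.max
```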
In general, Container runtimes provide a means of creating images based on various Linux
distributions, an API for managing the lifecycle of Containers, client tools for interacting with
that API, features for taking snapshots, migrating Container instances from one Container host to
another, etc.
