












Docker containers – how to "squeeze" more applications into an already existing infrastructure?

          25.04.2017

Today's companies run an increasing number of installed applications with differing demands on the environment they execute in (Java version, program libraries, and so on). These applications therefore need to be isolated from one another, which is traditionally achieved by installing each application into its own virtual machine, custom-built for that application. The basic idea of a container is to run multiple, mutually independent applications within the same operating system, isolated at the container level. That way the number of operating system instances is reduced and the number of useful business applications on the same infrastructure is increased; the reduction frees hardware resources that can be spent on running business applications rather than on operating system overhead.

Docker, the most popular implementation of container technology, made containers simple to use and accessible to a wide audience. Fundamentally, Docker is a Linux technology that offers a new level of abstraction and automation of virtualization at the operating system level, using the kernel's isolation features and a union file system.

Picture 1: A comparison of virtual machine and container architectures. Using containers reduces the number of operating system instances in the environment.

If we compare the architecture in which applications are isolated in separate virtual machines (Picture 1, left) with the architecture in which applications are isolated in Docker containers (Picture 1, right), we see that installing three separate applications on a single server requires at least four operating system installations: the host plus one guest per application. When containers provide the isolation instead, a single operating system is enough. Using containers we run three fewer operating systems, which frees server resources for installing additional applications.

What exactly is a container?

A container is an image of a file system that defines all the runtime components necessary for the application to function. In that sense it is safe to say that every computer is a "container" of sorts. The main drawback of virtual machines is that each one carries a whole operating system with all of its services, which consumes a lot of hardware resources. Docker containers eliminate that problem: only one operating system is needed, and isolation happens at the level of the file system and the OS kernel. File system isolation is enabled by a union file system, which assembles a complete image from separately defined layers; their union yields a functional environment.
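As a minimal sketch of those layers (the image tag here is just an example), we can pull a public base image and list the separately defined layers the union file system stacks into it:

    # Download a ready-made image from the default repository
    docker pull ubuntu:22.04

    # List the layers the image is built from
    docker history ubuntu:22.04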

Docker Engine uses a client-server architecture in which the client, the service, and the repository all function as separate components. That lets us adapt the solution's architecture to our own environment. Most often we manage a few Docker services, all connected to the same image repository, from a single client. (Source: www.docker.com)

Docker's main component, Docker Engine, consists of a service that runs on the host operating system, a REST API that defines the interface through which the Docker service is managed, and a command-line client through which we issue commands. The Docker service is in charge of each container's life cycle: it keeps containers isolated from one another, allocates operating system resources to them, and builds a file system image for each container. Images are created from a so-called Dockerfile, a list of Docker commands from which the Docker service knows how to build a new image. Images are then saved directly on the server where they were created.

Docker also has built-in support for image management through a central image repository, and several public Docker image repositories exist. The Docker service is initially set up to use the Docker Hub repository, which offers many ready-made images containing installations of popular applications in container format: databases, servlet containers, Java environments, and so on. We usually start our own image from a finished one, because every image can serve as the base for a new one.
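As a minimal sketch of such a Dockerfile (the base image and application file names are illustrative, not from the article), building our own image on top of a finished Java image from Docker Hub might look like this:

    # Base the new image on a finished Java image from Docker Hub
    FROM openjdk:8-jre

    # Add our application as a new layer on top of the base image
    COPY app.jar /opt/app/app.jar

    # Command the Docker service runs when a container starts from this image
    CMD ["java", "-jar", "/opt/app/app.jar"]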

That concept of a single file which uniquely describes each image means that installing an application meant to run in a container comes down to sending or receiving a Dockerfile, or downloading an already finished image from the central repository, to the server on which the Docker service runs.
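For illustration (the repository and tag names are hypothetical), that workflow expressed as Docker commands:

    # Build an image from the Dockerfile in the current directory
    docker build -t myrepo/myapp:1.0 .

    # Publish the image to the central repository (Docker Hub by default)
    docker push myrepo/myapp:1.0

    # On any other server running the Docker service: download and start it
    docker pull myrepo/myapp:1.0
    docker run -d --name myapp myrepo/myapp:1.0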

          What does that mean for everyday work?

Beyond the improvements in infrastructure efficiency, container technology is applied from the very start of the development cycle. It lets development engineers and testers create new environments quickly, while guaranteeing that the environment stays identical from the development workstation through the test, pre-production, and production environments; a sketch of that guarantee follows below. That property makes containers especially convenient for cloud deployment. On the other hand, development engineers now also have to become competent in managing the application environment, and system engineers have to learn to manage the new infrastructure, which blurs the established borders between their roles and creates a joint one (DevOps). Besides that, Docker containers are a good foundation for building applications based on a microservice architecture, about which you can read more in the other articles in this issue.
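A sketch of that guarantee, continuing the hypothetical image name from above: the very same image runs unchanged at every stage, and promotion is just re-tagging, not rebuilding:

    # Run the identical image on the development workstation,
    # the test server and the production server
    docker run -d -p 8080:8080 myrepo/myapp:1.0

    # Promote to production by re-tagging and pushing the same image
    docker tag myrepo/myapp:1.0 myrepo/myapp:production
    docker push myrepo/myapp:production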

CROZ recognized containers as a useful technology as well, so some teams already use Docker containers for development and application testing. The intention is to use Docker containers for all new applications, from the beginning of development all the way to production. Docker containers are a technology our users should recognize and start using, so all of CROZ's internal applications will be developed on Docker, allowing us to gain experience in developing applications and managing container infrastructure.

All in all, container technology is the next big step in infrastructure virtualization. It shifts the established paradigms of application architecture towards microservices, and Docker, with its simplicity and a wide spectrum of tools that keep being upgraded and extended with new functionality, is the best choice for implementing container technology.
