Вы находитесь на странице: 1из 14

I ngenIería I nvestIgacIón y t ecnología

volumen XIX (número 1), enero-marzo 2018 63-76


ISSN en trámite FI-UNAM artículo arbitrado
Información del artículo: recibido: 7 de octubre de 2016, reevaluado: 12 de marzo de 2017, aceptado: 3 de julio de 2017
Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license

Driver LXC development for OpenNebula


Desarrollo de un driver LXC para OpenNebula
García-Perellada Lilia Rosa Rodríguez-De Armas Yalina
José Antonio Echeverría Higher Polytechnic Institute, La Habana, Cuba José Antonio Echeverría Higher Polytechnic Institute, La Habana, Cuba
Electrical Engineering Faculty Faculty of Electrical Engineering
Telecommunication and Telematics Department Information and Communication Technologies Services Department
E-mail: lilianrosa@tele.cujae.edu.cu E-mail: yalina@electrica.cujae.edu.cu

Vega-Gutiérrez Sergio Garófalo-Hernández Alain Abel


José Antonio Echeverría Higher Polytechnic Institute, La Habana, Cuba José Antonio Echeverría Higher Polytechnic Institute, La Habana, Cuba
Electrical Engineering Faculty Electrical Engineering Faculty
Telecommunication and Telematics Department Telecommunication and Telematics Department
E-mail: sergiojvg92@gmail.com E-mail: aagarofal@gmail.com

De la Fé-Herrero José Manuel


José Antonio Echeverría Higher Polytechnic Institute, La Habana, Cuba
Electrical Engineering Faculty
Telecommunication and Telematics Department
E-mail: mdelafe92@gmail.com

Abstract

Operating system level virtualization is a technology that has recently emerged into the cloud services paradigm. It has the advantage
of providing better performance and scalability than para-virtualized or full virtualization hypervisors. This solution is getting accep-
tance into cloud infrastructures. Nowadays public cloud Infrastructure as a Service providers offer applications based in Docker
containers deployed on virtual machines. Only a few bring Infrastructure as a Service on a bare metal container infrastructure. In the
private cloud scenario, however, it hasn’t had a wide acceptance. Private cloud managers, like OpenStack, OpenNebula and Eu-
calyptus, don’t offer good support for it. OpenNebula is a flexible cloud manager, which has been gaining a lot of market over the last
years, so it seemed a good idea to strengthen the operating system virtualization support in this cloud manager. This will contribute
to achieve better interoperability, performance and scalability in OpenNebula clouds. Therefore, the objective of the present work
was to implement a driver to support Linux Containers for OpenNebula. The driver has several features such as: the ability to deploy
containers on File Systems, on Logical Volume Managers and on Ceph; it’s able to attach and detach network interface cards and
disks while the container is on; and it’s able to monitor and limit container’s resources usage.
Keywords: containers, LXC, OpenNebula, operating system virtualization.

Resumen

La virtualización de sistemas operativos es una tecnología emergente en el paradigma de la Computación en la Nube, presentando
mejores índices de desempeño y escalabilidad que los hipervisores soportados por la virtualización completa o por la para-virtuali-
zación. Actualmente abre paso en las infraestructuras de Nubes. Proveedores de Infraestructura como Servicio brindan servicios
basados en contenedores sobre máquinas virtuales, con soluciones como Docker. Pocos brindan Infraestructura como Servicio sobre
una plataforma de contenedores bare-metal. En las Nubes Privadas, sin embargo, los gestores de infraestructuras como OpenStack,
OpenNebula y Eucalyptus, le brindan muy poco soporte, o nulo, a esta tecnología. OpenNebula, gestor con aceptación en el mer-
cado, dada su flexibilidad, modularidad, interoperabilidad, usabilidad y ligereza, podría enriquecerse con la integración de una so-
lución de contenedores, lo que les añadiría a su infraestructura mayor eficiencia. Es por ello que el objetivo trazado en el presente
trabajo fue el desarrollo de un driver para OpenNebula, que le permitiese soportar LinuX Container, una de las principales soluciones
de virtualización de sistemas operativos actualmente. El driver obtenido soporta funcionalidades como el despliegue de contenedo-
res sobre Sistemas de Ficheros, Volúmenes Virtuales y Ceph, adicionar y eliminar interfaces de red, y discos a los contendores en
caliente.
Descriptores: OpenNebula, contenedores virtuales, LXC, virtualización de sistemas operativos.
Driver LXC development for OpenNebula

Introduction like SELinux, AppArmor and Seccomp are not neces-


sary, although solutions like LXC and LXC/LXD use
The majority of the widely deployed virtualization them to add an extra layer of security which may be
platforms in data centers and cloud infrastructures are handy in the event of a kernel security issue. Cgroup,
based in full and para-virtualization technologies. Such restricts the use of physical resources like the Central
is the case of Xen, Kernel-based Virtual Machine (KVM), Processing Unit (CPU), the Random Access Memory
VMware ESXi and Hyper-V hypervisors (Arceo et al., (RAM) and storage devices, through the establishment
2015). On the other hand, the Operating System Level Vir- of quotas and priorities to containers, avoiding poten-
tualization (OSLV) technology is getting acceptance into tial Denial of Services (DoS) attacks. (Petazzoni, 2017;
cloud infrastructures with solutions like: Docker, LinuX Graver 2014a: 2016a). These alternatives are supported
Container (LXC) and LXC’s new interface LXD (LXC/ by leading OSLV exponents today like LXC, LXC/LXD
LXD), which have their roots in the OSLV pioneer solu- and Docker. In addition, Cloud Service Providers (CSPs)
tion OpenVZ (Arceo et al., 2015; Agarwal, 2015; Wall- like Joyent, Kyup and ElasticHosts, are offering Infras-
ner, 2015; 2014). tructure as a Service (IaaS) based on bare metal container
OSLV can be considered as a lightweight alternative infrastructures (Graber, 2014b; 2017a, b, c; 2016b, c). So
to full and para-virtualization technologies. The main it is time to exploits the advantages of the OSVL in DC
difference is that OSLV eliminates the hypervisor layer, infrastructures, especially in Small and Medium-sized
redundant OS kernels, binaries and libraries needed to Enterprises (SMEs).
run workloads in Virtual Machines (VMs). Hypervisors Nowadays public cloud IaaS providers, like Ama-
abstract hardware, which results in overhead in terms zon (Graber, 2015b), offers applications based in Doc-
of virtualizing hardware and virtual device drivers. A ker containers deployed on VM. Joyent, however,
full OS is typically run on top of this virtualized hard- brings IaaS on a bare metal containers infrastructure,
ware in each VM instance. In contrast, containers im- which completely enjoys the OS virtualization’s advan-
plement isolation of processes at the OS level, thus tages (Graber, 2015c). Joyent’s solution for deploying
avoiding such overhead. These containers run on top of containers is named Triton (Graber, 2015b; Cantrill,
the same shared OS kernel of the underlying host ma- 2014). It is free and open source (Cantrill, 2014). In the
chine, and one or more processes can be run within private cloud scenario, however, the OS virtualization
each container. Due to the shared kernel, as well as the technology hasn’t a wide support. Private cloud mana-
OS libraries, container based solutions can achieve hig- gers, like OpenStack, OpenNebula and the Elastic Utili-
her density of virtualized instances with better perfor- ty Computing Architecture for Linking your Programs
mance compared to hypervisor based solutions, to Useful Systems (Eucalyptus), don’t offer good sup-
bringing thus better efficiency in a Data Center (DC) in- port for this technology, if they support it at all.
frastructure (Arceo et al., 2015; Agarwal, 2015; Wallner, OpenStack stands out because it gives support to
2015; Morabito et al., 2015; Graber, 2015a; Petazzoni, LXC, Docker and LXD, but this support is on early stages
2015). (Cantrill, 2017a, b, c). However, OpenStack is not the best
OSLV has been around for over 18 years, but its solution for all the entities. It is a complex Cloud Manage-
adoption has been hindered in DC infrastructures be- ment Platform (CMP) with a steep learning curve and
cause of the shared kernel approach and the mecha- with high hardware requirements for its deployment
nisms to achieve the resource isolation, which can be an compared with others like OpenNebula (Chilipirea et al.,
issue for multitenant security. Nevertheless, nowadays, 2016a). OpenStack in a basic ready production deploy-
OSLV supports a variety of technologies to mitigate ment with high availability, suggests at least seven phy-
most security concerns, removing this drawback. The sical hosts for supporting its controller services, which
main tools are: namespaces, specially user namespaces, are recommended to not be virtualized (Cantril, 2017a;
control groups (cgroup) and Linux Security Modules Chilipirea et al., 2015). Hosts minimum features propo-
(LSMs). Namespaces give the containers their own sed are 32GB of RAM, 2x Intel® Xeon® CPU E5-2620 0 @
view of the system, limiting what containers can see 2.00 GHz and two Network Interface Cards (NICs) at
and therefore use, while cgroup limits how much they 10Gbps (Chilipirea et al., 2015a). On the other hand
can use, achieving resource isolation. User namespaces OpenNebula requires only two VMs with 2GB of RAM,
define by default unprivileged containers, which are two CPUs and two NICs for a production private cloud
safe by design. The container uid 0 is mapped to an un- (Chilipirea et al., 2016b). So, a lightweight, flexible, scala-
privileged user outside of the container and only has ble and easy to use CMP could be the efficient solution
extra rights on resources that it owns itself. Thus LSMs for SMEs, especially for those with restricted budgets,

64 I ngeniería I nvestigación y tecnología, volumen XIX (número 1), enero-marzo 2018: 63-76 ISSN en trámite FI-UNAM
García-Perellada Lilia Rosa, Vega-Gutiérrez Sergio, De la Fé-Herrero José Manuel, Rodríguez-De Armas Yalina, Garófalo-Hernández Alain Abel

that do not require high compute resources for suppor- Why LXC?
ting their Information Technology (IT) services, but do
need the benefits of the private cloud paradigm. LXC was selected by the authors of the present paper
OpenNebula is an open-source solution, used in se- for being integrated to OpenNebula because it’s a sta-
veral types of environments. Leading organizations, ble solution that supports the majority of the most im-
like National Aeronautics and Space Administration portant features of Linux containers; and because it has
(NASA) and the Tokyo Institute of Technology in Su- a big community developing new functionalities that
percomputing field, use OpenNebula to build enterpri- improve its usability, security, performance and fault
se private clouds, hosting, public cloud services, high tolerance isolations, which can be proved with LXD
performance computing and science clouds. Some fea- (Chilipirea et al., 2015d; e; f; 2011; 2015g). LXC is a deri-
tures that makes OpenNebula a wise choice are: a power- vation of OpenVZ, supported by Canonical Ltd., which
ful user security management, support of multi-tenancy shares many of the same developers as OpenVZ (Wall-
with group management, on-demand Provision of Vir- ner, 2015). It is a collection of user-level tools that assists
tual Data Centers (VDCs), control and monitoring of vir- in the creation, management and termination of contai-
tual and physical infrastructures, distributed resource ners, which is included in most Linux distributions. In
optimization, management of multi-tier applications, most cases installation is as simple as selecting it in the
standard cloud interfaces and simple provisioning por- package manager, removing one of the main OpenVZ
tal for cloud consumers; broad commodity and enterpri- detractions (Aderholdt et al., 2014). LXC has other ad-
se platform support, such as a broad hypervisor and vantages like the supporting of:
storage technologies support; and easy extension and
integration, it is a modular and extensible architecture • A liblxc library and Application Programming Interfa-
able to fit into any existing DCs with customizable dri- ces (APIs) for Ruby, Python 2 and 3, Haskell and Go.
vers. It is a fully open-source technology available under • Different storage systems: Network Attach Storage
Apache License and new drivers can be easily written in (NAS), Direct Attach Storage (DAS) (ext4 and B-tree
any language (Chilipirea et al., 2015b, c). FS (btrfs)) and Storage Area Networks (SANs).
OpenNebula’s features directly contribute to achieve • Any disk format.
the functional and nonfunctional requirements of a cloud • Virtual SANs: Distributed Replicated Block Device
DC. However, its interoperability, adaptability, feasibility (DRBD) 9, Ceph and GlusterFS.
and contribution to an efficient infrastructure with good
levels of performance and scalability, can be improved However, it doesn’t support some features like OpenVZ:
with a reliable support of an OSLV solution. Two drivers live migration and storage Quality of Service (QoS) (Chili-
were developed for the supporting of OpenVZ and LXC, pirea et al., 2015g). But LXC has LXD, a new project of
one by China Mobile and the other by Valentin Bud res- Canonical Ltd., aimed at revitalizing the use of LXC
pectively. Both had deficient features and functionalities, (Scott, 2015a). It is intended (Aderholdt et al., 2014; Scott,
and poor support (Chilipirea et al., 2012; 2016c). 2015a, b; Banerjee, 2014; Ectors, 2014a, b; 2016; 2014):
Therefore, the objective of the present work was to
implement a driver for supporting one of the main OS • To make LXC-based containers easier to use through
virtualization solutions by OpenNebula. The driver de- the addition of a back-end daemon supporting a Re-
veloped was for LXC. It supports the following features: presentational State Transfer APIs (REST APIs) and a
straightforward Command Line Interface (CLI) client
• Deploy containers on File Systems (FS), Logical Volu- that works with both the local daemon and remote
me Manager (LVM) and Ceph. daemons via the REST API.
• Stop, shutdown, reboot, suspend and resume con- • To be image based. No more distribution templates,
tainers. only good, trusted images.
• Live attach and detach NICs and disks. • To support live-migration and snapshotting.
• Monitor hosts and containers. • To support vSecure by default, with AppArmor,
user namespaces and Seccomp.
This paper treats the following topics: reasons for choo-
sing LXC to integrate to OpenNebula; OpenNebula’s LXD isn’t a rewrite of LXC, in fact it’s building on top of
components and interfaces needed for the driver develop- LXC to provide a new, better user experience. Under
ment; the creation of the driver LXCoNe and; the proofs the hood, LXD uses LXC through liblxc and its Go bin-
of concept done to demonstrate its functionalities. ding to create and manage the containers. It’s basically

IngenIería InvestIgacIón y tecnología, volumen XIX (número 1), enero-marzo 2018: 63-76 ISSN en trámite FI-UNAM 65
Driver LXC development for OpenNebula

an alternative to LXC’s tools and distribution template sible to acquire real time information of the host and
system with the added features that come from being the VM deployed. This driver had to be created by the
controllable over the network (Scott, 2015b). So, why authors of the present paper in order to monitor LXC
not LXD? While LXC is stable, LXD is still undergoing containers through OpenNebula. The VM-API and IM-
a rapid development. Some features haven’t been im- API were needed for the driver to interact with the rest
plemented yet and the documentation is still a bit on of the OpenNebula’s sections.
the light side (Aderholdt et al., 2014; Scott, 2015a, b).
Docker, although it is focused on being the univer- LXC Driver Requirements
sal container for applications (Ectors, 2014c), was not
selected because it is considered by the community and LXC driver for OpenNebula should be able to perform
by the authors of the present paper a solution suited for several actions, some of them mandatory and others
providing Platform as a Service (PaaS) (Banerjee, 2014b), optional but desirable. The following actions must be
not IaaS, for the main following reasons: supported by the LXC driver:

• It restricts the container to a single process only. The • Deploy LXC containers.
default Docker base image OS template is not desig- • Limit container’s resources usage: disk quotas, In/
ned to support multiple applications, processes or Out (I/O) rate limiting, RAM limits, CPU quotas and
services like init, cron, syslog and Secure SHell (SSH). network isolation.
This introduces a certain amount of complexity for • Reboot, reset and shutdown containers.
day to day usage scenarios, since current architectu- • Monitor hosts and containers.
res, applications and services are designed to operate • Create, delete and revert snapshots.
in normal multi process OS environments (Banerjee, • Provide support for DAS FS such as ext4 and btrfs.
2014b, 2015a; Wallner, 2015). • Provide support for SAN networks implemented
• It separates container storage from the application, with Ceph, Internet Small Computers System Interface
which eliminates one of the biggest features of con- (iSCSI) or Fiber Channel (FC).
tainers for end users, easy mobility of containers
across hosts (Banerjee, 2014b, 2015a; Wallner, 2015). The following actions are considered by the authors of
this paper optional, but desirable:
Besides, Docker was initially based on the LXC project,
although it has now developed its own implementation • Provide support for NAS devices over the Network
libcontainer that uses kernel namespaces and cgroups di- File System (NFS) and ZFS. This could provide com-
rectly (Ectors, 2014c). This makes Docker not a virtualiza- patibility.
tion solution, but one that automates the deployments of • Hot attach/detach NICs and disks. This could provi-
applications inside containers, by providing an additio- de elasticity, performance and usability.
nal layer of abstraction and automation of the OSLV.

Components and APIs of OpenNebula Needed for


Integrating the LXC Driver
In order to achieve the goal of the present work, it was
necessary to identify what OpenNebula offers to cloud
integrators. The main strength is the modular and ex-
tensible architecture of OpenNebula, which has been
designed to be easily adapted to any infrastructure and
easily extended with new components (Banerjee,
2015b). Figure 1 shows the OpenNebula’s architecture
and the components and interfaces used for the driver
development. The Virtualization (VM driver) is in char-
ge of all the interaction with the hypervisors. This dri-
ver had to be created by the authors of the present
paper in order to manage LXC containers through Figure 1. Components used in the OpenNebula’s architecture
OpenNebula. The Monitoring (IM driver) makes it pos- (Banerjee, 2015b)

66 I ngeniería I nvestigación y tecnología, volumen XIX (número 1), enero-marzo 2018: 63-76 ISSN en trámite FI-UNAM
García-Perellada Lilia Rosa, Vega-Gutiérrez Sergio, De la Fé-Herrero José Manuel, Rodríguez-De Armas Yalina, Garófalo-Hernández Alain Abel

• Live snapshot. This could improve usability and yet. It has limitations such as it: only supports rootfs
availability. resizing through the Graphical User Interface (GUI) and
• Live migrate containers. Besides usability and avai- does not support hot disk attach/detach (Bud, 2015c; d;
lability, this action could also provide elasticity. e; 2016a) Proxmox 4.x was the reference for the authors
of the present paper, for the development of the CPU
Integrating LXC in Data Centers and limitations in the LXC driver for OpenNebula, and for
Cloud Managers, Previous Work its Virtual Network Computing (VNC) implementation.
(Bud, 2016b) states that libvirt-lxc is not generally
Today’s free and open source cloud managers that have recommended due to a lack of Apparmor protection for
worked in the support of LXC are OpenStack and containers. This recommendation, together with the
OpenNebula. Proxmox 4.x, a free and open source DC China Mobile experience, caused that the LXC driver
manager for small enterprises, is supporting LXC too. for OpenNebula had been developed with liblxc ins-
So, the drivers of these solutions were analyzed in or- tead of libvirt.
der to be aware of their advantages and drawbacks.
Two previous LXC drivers for OpenNebula were LXC Driver Development for OpenNebula, LXCoNe
found, one made by China Mobile and the other by Va-
lentin Bud: OpenNebula manages the hypervisor underneath by
running scripts on the host. Each operation that Open-
• The driver from China Mobile is not accessible (Ba- Nebula is able to perform over a hypervisor consists of
nerjee, 2014c). However, its developers showed the a specific script located at /var/lib/one/<driver-
bugs that had been found, such as the driver can’t name>/<script-name>. For example, the deploy action
implement reboot, shutdown and restart opera- is a script called deploy, and for hot-attach a NIC there
tions, and explained that the reason behind them is a script called attach_nic. The name of the script is
could be the use of libvirt. This was useful, because always suggestive. The most important and difficult ac-
their experience gave reasons to use liblxc instead of tion in this driver is to deploy a new container.
libvirt. Besides, their community announced that Figure 2 shows the script’s basic blocks. The first
the driver was only able to monitor hosts, deploy step is to read all the information that will be used from
and delete containers (Chilipirea, 2012). the containers template and store it in variables. Once
• On the other hand, the driver from Valentin Bud this is done, a folder that will contain all the necessary
was implemented directly over LXC (Bud, 2015a). It files for the container is created with the right permis-
however only has support for LVM data stores and sions. This folder is created inside the folder configured
with very limited features. Some of these features as the default container location in the lxc user tools, for
does not work well, like container’s monitoring. It example /var/lib/lxc in Ubuntu. In this way the contai-
has poor documentation and almost a year without ners created by OpenNebula are shown at the output of
any support (Chilipirea, 2016c). However the work lxc-ls command, which is necessary for the driver to be
from Valentin Bud gave an example about how to able to monitor containers. Then the configuration in-
write the driver and how to organize it. It also formation for the NIC is extracted, arranged and prepa-
showed how some features like monitoring and red inside a variable in a format that LXC’s configuration
support for LVM could be implemented. file understands. This process is simple and is shown in
Figure 3. Now, the driver will proceed to configure the
OpenStack, a cloud manager with a great market share, root storage as explained in Figure 4.
is actually developing its LXC driver. OpenStack allo- Because this driver supports three different storage
cates its LXC driver inside “Group C”. Drivers inside types: FS, LVM and Ceph, it needs to find out which
this group “have minimal testing and may or may not type it is and act accordingly. In case it is FS and LVM,
work at any given time. Use them at your own risk” the only thing that needs to be done is to indicate to
(Bud, 2015b). For this reason and because this driver LXC a path to the image. It is important to remember
was built over libvirt, it wasn’t used as a reference. that OpenNebula uses images, either raw or qcaw, as
Proxmox 4.x supports LXC. It’s able to deploy, re- virtual disks. LXC supports regular FS and LVM, so
boot, reset, shutdown and monitor containers. It can nothing else needs to be done, but in case the image is
use DAS, SAN and NAS storages. It has support to: live stored as a block device in Ceph, this image will need to
snapshots, limit containers resources and live migra- be mapped in the host and provide LXC the route to
tion, although the latter is in the experimental phase where it was mapped. The reason why LXC doesn’t

IngenIería InvestIgacIón y tecnología, volumen XIX (número 1), enero-marzo 2018: 63-76 ISSN en trámite FI-UNAM 67
Driver LXC development for OpenNebula

Figure 2. LXCoNe’s main work-flow diagram

Figure 3. NIC Configuration

Figure 4. Root storage set up process

68 I ngeniería I nvestigación y tecnología, volumen XIX (número 1), enero-marzo 2018: 63-76 ISSN en trámite FI-UNAM
García-Perellada Lilia Rosa, Vega-Gutiérrez Sergio, De la Fé-Herrero José Manuel, Rodríguez-De Armas Yalina, Garófalo-Hernández Alain Abel

support Ceph’s block devices is mainly because it can’t for the hot-attach disk action and also to add extra disks
perform this mapping action by itself, so it needs to be defined by the user before starting the container. The
done by something else before the container starts. A reason behind this was to have a single method to ad-
possible approach here could be to perform this opera- dress both situations. LXC allows managing devices in
tion from OpenNebula itself. This makes sense because running containers by using lxc-device. With this tool,
OpenNebula is already executing code, so the only a block or loop device from the host can be added to a
thing that needs to be done is to write a line that tells running container, so it will be able to see it and mount
the host to map the image. The problem with this ap- it. Now, it could be possible to instruct OpenNebula to
proach is that, in case of an electrical failure or any start the container and then use the previous defined
other issue that could cause the physical host to crash, method to attach any extra disk specified by the user.
the containers that were running will not be able to This solution will definitely work, but it has a major
start automatically. An administrator will need to ma- drawback, containers will only be able to be started and
nually redeploy them from OpenNebula. One of the managed from OpenNebula. One of the goals wanted is
goals wanted is precisely to avoid this, so another ap- to be able to manage containers either from OpenNebu-
proach was used. The required instructions to map the la, liblxc, a ssh session with the container or any other
root image must be executed once the host’s OS initiali- way after they were created by OpenNebula. The solu-
zes. Write it inside /etc/rc.local is a possibility. Also, tion found was to use LXC’s hooks. The instructions to
containers must be configured to start automatically add the device to the container and then mount it are
once the OS initializes. not executed by OpenNebula, but written to the start
The next step will be to generate LXC’s configura- hook of LXC. The script that represents this start hook
tion file. Inside this file will be located all the container’s is executed by LXC before it starts the container. These
parameters, like NIC information, route to root storage, last two steps are explained in Figure 5.
and resources limit. At this point the container is ready Extra disks attachment is not the only thing that is
to start, but first must be checked if the user added configured in this hook at this point, neither the start
another disk(s) to the container. If this results to be the hook is the only one used. The VNC session is configu-
case, the driver must be capable of attaching this disk(s) red in the start hook so it will be started with the contai-
to the container. LXC allows to mount locations inside ner, and then used by OpenNebula. LXC’s post-stop
the container specified in the configuration file, but this hook is also used. It will run at the node’s namespace
is only useful to mount images when the container is after the container has been shut down. With the help
going to be started, so hot-attach is not possible using of this hook a cleanup process will occur. After a contai-
this approach. A method that allowed mounting ima- ner is shut down, files like LXC container’s configura-
ges inside the container while it was on needed to be tion and hooks are left behind. Even these files are small
found. Once a solution was found, it was implemented they could cause problems once they accumulate after

Figure 5. Last steps before the container is ready

IngenIería InvestIgacIón y tecnología, volumen XIX (número 1), enero-marzo 2018: 63-76 ISSN en trámite FI-UNAM 69
Driver LXC development for OpenNebula

some time. This cleanup process is in charge of erasing This algorithm was implemented in bash at the first
this files and any other remains of the container. Once place. It was deployed in the Private Cloud infrastruc-
this two hooks are created, OpenNebula will instruct ture of the José Antonio Echeverría Higher Polytechnic
LXC to start the container. After this, OpenNebula will Institute (CUJAE)’s DC, supporting the Information
check the container’s status for a short while assuring it and Communications Technology (ICT) of the universi-
was successfully started, and no error occurred. In case ty. A stable release has been made public on github:
of any error, OpenNebula will notice it, will change the https://github.com/OpenNebula/addon-lxcone/, together
status to FAILURE and will log it. with the guidelines for supporting its deployment.
Because every configuration in the node that LXC
might need to be able to successfully start the container Proofs of concept
is set inside the hooks, and liblxc allows performing
operations such as shutdown, suspend, reboot and re- Different proofs of concept were done in the OpenNebu-
sume over containers, they will be easy to implement la Private Cloud of the CUJAE’s DC to check the effecti-
on OpenNebula. A simple command will usually be veness of LXCoNe. Figure 6 shows the logical design of
enough. The only remaining operation is hot-attach/de- the CUJAE’s network, and Figure 7 shows the infra-
tach NICs. Hot-attach can be accomplished easily by structure’s compute nodes. The Frontend was deployed
creating a virtual Ethernet interface and moving it to on an LXC container inside Node-0 using OpenNebula
the container’s namespace. Hot-detach will be achieved 4.14. The Frontend was able to manage containers inside
by deleting the interface. its own node, Node-0, and six other nodes. The first five

Figure 6. CUJAE data center’s logical design

Figure 7. Infrastructure’s compute nodes at ISPJAE/CUJAE’s data center

70 I ngeniería I nvestigación y tecnología, volumen XIX (número 1), enero-marzo 2018: 63-76 ISSN en trámite FI-UNAM
García-Perellada Lilia Rosa, Vega-Gutiérrez Sergio, De la Fé-Herrero José Manuel, Rodríguez-De Armas Yalina, Garófalo-Hernández Alain Abel

nodes were commodity hardware with identical charac- • Deploy, shut down, suspend and reset of LXC con-
teristics. The 6th. node was an Inspur professional server. tainers. The container “r_nucleo1.cujae.edu.cu”
Two different storage systems were used at the same was configured in the TEMPLATE view and deplo-
time, Ceph and LVM. The tools used in the proofs of con- yed in the VM view. Figure 8 shows that the contai-
cept were the OpenNebula cloud manager and its Suns- ner was in the RUNNING state at the end of the test.
tone administration interface. • Attach and detach disks and NICs. These procedu-
The proofs of concept demonstrated the capacity of LX- res were done at the Network/Storage tab in the VM
CoNe to: view. Figures 9 and 10 show the successful opera-
tions.

Figure 8. VM view. Container in RUNNING state

Figure 9. VM view. Extra HDD attached

IngenIería InvestIgacIón y tecnología, volumen XIX (número 1), enero-marzo 2018: 63-76 ISSN en trámite FI-UNAM 71
Driver LXC development for OpenNebula

• RAM limit per container. Figure 11 shows the Capa- • Let OpenNebula monitors nodes and LXC contai-
city tab inside the VM view, in which it can be seen ners. Figure 14 shows that the OpenNebula CLI was
that the RAM provisioned to the container was 4GB. used for checking the monitoring.
Figure 12 shows that the Stress tool was configured • Support LVM and Ceph. Figures 15 and 16 show
in the container to fill the RAM up to 5GB. That was containers with different types of storage, LVM and
the only container running in the host. Figure 13 Ceph respectively. Figure 17 confirms that contai-
shows that the amount of RAM consumed didn’t ners were in RUNNING state.
get over 4GB.

Figure 10. VM view. Extra NIC attached

Figure 11. VM view. 4 GB of RAM assigned to the container

Figure 12. Fill the container’s RAM. With stress tool

Figure 13. Host’s maximum used RAM

Figure 14. Host’s maximum used RAM

72 I ngeniería I nvestigación y tecnología, volumen XIX (número 1), enero-marzo 2018: 63-76 ISSN en trámite FI-UNAM
García-Perellada Lilia Rosa, Vega-Gutiérrez Sergio, De la Fé-Herrero José Manuel, Rodríguez-De Armas Yalina, Garófalo-Hernández Alain Abel

Figure 15. Image view. LVM

Figure 16. Image view. Ceph

Figure 17. Container’s state after the


deployment

This driver was tested on two flavors of Linux: Debian the storage; to limit RAM and CPU resources and to
8 (Jessie) and Ubuntu 14.04 (Trusty Tahr) (De la Fé, monitor containers and hosts. The next steps will be ai-
2016). med to integrate other features which guarantee securi-
ty, high availability and improvement of performance
Conclusions and scalability in the infrastructure.

The LXC integration in OpenNebula contributes to de- Acknowledgments


velop more efficient solutions with high flexibility and
interoperability levels in Cloud infrastructures. It has The authors wish to thank to the IT managers of the
made possible an easier adaptation to the client econo- CUJAE’s datacenter and network for all their support
mics restrictions, the human resources and the initial IT and patience. This work was supported also by the Tele-
technologies of the client. The present work has as a communication and Telematics Department, and by the
main result, the development of a driver for OpenNe- IT Services Department, both of the CUJAE University.
bula to allow the deployment and the monitoring of the
LXC virtualization platform. The driver has several ba- References
sic features such as: to deploy, shutdown, suspend, re-
set LXC containers; to attach and detach disks and NICs Aderholdt F., Caldwell B., Hicks S., Koch S., Naughton T., Pelfrey
to LXC containers; to support LVM and different FS for D., Pogge J., Scott S.L., Shipman G., Sorrillo L. Review of

IngenIería InvestIgacIón y tecnología, volumen XIX (número 1), enero-marzo 2018: 63-76 ISSN en trámite FI-UNAM 73
Driver LXC development for OpenNebula

enabling technologies to facilitate secure compute customiza- Bud V. Roadmap-Proxmox VE. Proxmox VE, 2016a [on line]. Avai-
tion, Oak Ridge, Tennessee, USA, OAK Ridge National Labo- lable on: http://pve.proxmox.com/wiki/Roadmap.
ratory, ORNL/TM-2015/210, 2014. Bud V. LXC. Ubuntu, 2016b [on line]. Available on: https://help.
Agarwal K. A Study of Virtualization Overheads, (thesis master of ubuntu.com/lts/serverguide/lxc.html.
Science in Computer Science), United State of America, Stony Cantrill B. Smar tData Center and Manta are now open source-
Brook University, 2015, pp. 55 [on line]. Available on: http:// Blog-Joyent. Joyent, Inc., 2014 [on line]. Available on: https://
animal.oscar.cs.stonybrook.edu/papers/files/Kavi- www.joyent.com/blog/sdc-and-manta-are-now-open-source.
taAgarwalMSThesisSubmission.pdf. Cantrill B. Operations Guide Release Version: 15.0.0. OpenStack,
Arceo Feria A., Montejo-Ricardo G., García-Perellada L.R., Irigo- 2017a [on line]. Available on: https://docs.openstack.org/ops-
yen-Saumel A., Garófalo- Hernández A.A. Propuesta de prue- guide/.
bas, parámetros y métricas para comparar plataformas de Cantrill B. Linux Containers-LXD-Getting started-OpenStack. Ca-
virtualización, on: XVI Convención de Ingeniería Eléctrica nonical Ltd., 2017b [on line]. Available on: https://linuxcontai-
(CIE 2015) (16th., 2015, Santa Clara, Cuba). XVI Convención ners.org/lxd/getting-started-openstack/.
de Ingeniería Eléctrica CIE 2015, Santa Clara, Cuba, Martha Cantrill B. Feature Support Matrix — nova 15.0.0.0rc2.dev705 docu-
Abreu University, 2015, pp. 605-612. mentation. OpenStack Foundation, 2017c [on line]. Available on:
Banerjee T. LXC vs LXD vs Docker-Making sense of the rapidly https://docs.openstack.org/developer/nova/support-matrix.html.
evolving container ecosystem. Flockport, 2014a [on line]. Chilipirea C., Laurentiu G., Popescu M., Radoveneanu S., Cernov
Available on: https://www.flockport.com/lxc-vs-lxd-vs-doc- V., Dobre C. Paper-Linux-VServer. GNU Free Documentation
ker-making-sense-of-the-rapidly-evolving-container-ecosys- License 1.2, 2011 [on line]. Available on: http://linux-vserver.
tem/. org/Paper.
Banerjee T. Understanding the key differences between LXC and Chilipirea C., Laurentiu G., Popescu M., Radoveneanu S., Cernov
Docker. Flockport, 2014b [on line]. Available on: https:// V., Dobre C. OpenNebula LXC Driver Plugin (OneLXC)–
www.flockport.com/lxc-vs-docker/. OpenNebula, OpenNebula Project, 2012 [on line]. Available
Banerjee T. China Mobile Releases OpenNebula-based Public on: http://opennebula.org/opennebula-lxc-driver-plugin-one-
Cloud, OpenNebula | Blog”, OpenNebula Systems, 2014c [on lxc/.
line]. Available on: http://opennebula.org/blog/?author=52. Chilipirea C., Laurentiu G., Popescu M., Radoveneanu S., Cernov
Banerjee T. Dockerfile reference, Docker, 2015a [on line]. Available V., Dobre C. OpenStack architecture design guide. OpenStack
on: https://docs.docker.com/reference/builder/. Foundation, 2015a [on line]. Available on: http://docs.opens-
Banerjee T. Scalable Architecture and APIs — OpenNebula 4.12.1 tack.org/arch-design/arch-design.pdf.
documentation, OpenNebula Systems 2015b [on line]. Availa- Chilipirea C., Laurentiu G., Popescu M., Radoveneanu S., Cernov
ble on: http://docs.opennebula.org/4.12/integration/getting_ V., Dobre C. An Overview of OpenNebula—OpenNebula
started/introapis.html. 4.14.0 documentation, OpenNebula Systems, 2015b [on line].
Bud V. LXC Drivers for OpenNebula. GitHub, Inc., 2015a [on line]. Available on: http://docs.opennebula.org/4.14/design_and_
Available on: https://github.com/OpenNebula/addon-lxc. installation/building_your_cloud/intro.html.
Bud V. HypervisorSupportMatrix-OpenStack. OpenStack Foun- Chilipirea C., Laurentiu G., Popescu M., Radoveneanu S., Cernov
dation, 2015b [on line]. Available: https://wiki.openstack.org/ V., Dobre C. Features—OpenNebula 4.14.0 documentation,
wiki/HypervisorSupportMatrix. OpenNebula Systems, 2015c [on line]. Available on: http://docs.
Bud V. How to add and/or resize a LXC disk. Proxmox Support opennebula.org/4.14/release_notes/release_notes/features.
Forum, XenForo Ltd., 2015c [on line]. Available on: https:// html#features.
forum.proxmox.com/threads/how-to-add-and-or-resize-a- Chilipirea C., Laurentiu G., Popescu M., Radoveneanu S., Cernov
lxc-disk.23792/. V., Dobre C. Linux Containers-LXC-News. Canonical Ltd.,
Bud V. Lxc - How to resize a linux container in proxmox-Stack 2015d [on line]. Available on: https://linuxcontainers.org/lxc/
Overflow. Stack Overflow, Stack Exchange Inc., 2015d [on news/.
line]. Available on: http://stackoverflow.com/ques- Chilipirea C., Laurentiu G., Popescu M., Radoveneanu S., Cernov
tions/32370052/how-to-resize-a-linux-container-in-proxmox. V., Dobre C. Linux Containers-LXD-News. Canonical Ltd.,
Bud V. Problem with LXC disk resize. Proxmox Support Forum, 2015e [on line]. Available on: https://linuxcontainers.org/lxd/
XenForo Ltd., 2015e [on line]. Available on: https://forum. news/.
proxmox.com/threads/problem-with-lxc-disk-resize.24658/. Chilipirea C., Laurentiu G., Popescu M., Radoveneanu S., Cernov
Bud V. Doubts with LXC File system and LXC disk size. Proxmox V., Dobre C. Roadmap, OpenVZ Virtuozzo Containers Wiki,
Support Forum, XenForo Ltd., 2015f [on line]. Available on: 2015f [on line]. Available on: https://openvz.org/Roadmap.
https://forum.proxmox.com/threads/doubts-with-lxc-file-sys- Chilipirea C., Laurentiu G., Popescu M., Radoveneanu S., Cernov
tem-and-lxc-disk-size.23124/. V., Dobre C. Comparison, OpenVZ Virtuozzo Containers

74 I ngeniería I nvestigación y tecnología, volumen XIX (número 1), enero-marzo 2018: 63-76 ISSN en trámite FI-UNAM
García-Perellada Lilia Rosa, Vega-Gutiérrez Sergio, De la Fé-Herrero José Manuel, Rodríguez-De Armas Yalina, Garófalo-Hernández Alain Abel

Wiki, 2015g [on line]. Available on: https://openvz.org/Com- Graber S. Joyent TritonTM Elastic Container Service-Public
parison. Cloud-Joyent, Joyent, Inc., 2015c [on line]. Available on:
Chilipirea C., Laurentiu G., Popescu M., Radoveneanu S., Cernov https://www.joyent.com/public-cloud.
V., Dobre C. A comparison of private cloud systems, 30th In- Graber S. LXD 2.0: Resource control. Stéphane Graber’s website,
ternational Conference on Advanced Information Networ- 2016a [on line]. Available on: https://stgraber.org/2016/03/26/
king and Applications Workshops (WAINA), 2016a, pp. lxd-2-0-resource-control-412/
139-143. Graber S. Scalable Cloud Hosting on Linux Containers. Kyup.,
Chilipirea C., Laurentiu G., Popescu M., Radoveneanu S., Cernov 2016b [on line]. Available on: https://kyup.com.
V., Dobre C. OpenNebula 5.0 Deployment guide Release 5.0.2. Graber S. Innovative Cloud Platform on Linux Container. Kyup.,
OpenNebula Systems, 2016b [on line]. Available on: http:// 2016c [on line]. Available: https://kyup.com/linux-containers.
docs.opennebula.org/pdf/5.2/opennebula_5.2_deployment_ Graber S. Joyent Public Cloud Pricing. Joyent, Inc., 2017a [on line].
guide.pdf. Available on: https://www.joyent.com/pricing/cloud/compute.
Chilipirea C., Laurentiu G., Popescu M., Radoveneanu S., Cernov Graber S. Joyent Triton Compute. Joyent, Inc., 2017b [on line].
V., Dobre C. GitHub-OpenNebula/addon-lxc: Hypervisor Available on: https://www.joyent.com/triton/compute.
Drivers for LXC, GitHub, Inc., 2016c [on line]. Available on: Graber S. Pricing ElasticHosts Linux, Windows VPS Hosting.
https://github.com/OpenNebula/addon-lxc. Elastichosts, 2017c [on line]. Available on: https://www.elasti-
De la Fé J.M., Vega S. LXCoNe, Installation & Configuration Gui- chosts.com/pricing/.
de, OpenNebula/addon-lxcone, GitHub, Inc., 2016 [on line]. Morabito R., Kjällman.J., Komu M. Hypervisors vs. lightweight
Available on: https://github.com/OpenNebula/addon-lxcone. virtualization: A performance comparison, on: 2015 IEEE In-
Ectors M. LXD and Docker, Telruptive. Telruptive 2014a [on line]. ternational Conference on Cloud Engineering, 2015, pp. 386-
Available on: http://telruptive.com/2014/11/11/lxd-and-docker/. 393.
Ectors M. LXD and Docker-DZone Cloud. DZone, 2014b [on line]. Petazzoni J. Anatomy of a container: namespaces, cgroups, and
Available on: https://dzone.com/articles/lxd-and-docker. some filesystem magic. Presented at the LinuxCon + CloudO-
Ectors M. LXD and Docker-DZone, 2014c [on line]. Available on: pen + ContainerCon NA 2015, Sheraton Seattle, Seattle, WA,
https://dzone.com/articles/lxd-and-docker. 2015 [on line]. Available on: http://events.linuxfoundation.
Ectors M. LXD: the next-generation container hypervisor for Li- jp/sites/events/files/slides/Anatomy%20of%20a%20contai-
nux | Cloud | Ubuntu, Canonical Ltd, 2016 [on line]. Availa- ner.pdf
ble on: http://www.ubuntu.com/cloud/tools/lxd. __. Linux Containers-LXC-Security. Canonical Ltd., 2017 [on line].
Graber S. LXC 1.0: Security features. Stéphane Graber’s website, Available on: https://linuxcontainers.org/lxc/security/.
2014a [on line]. Available on: https://stgraber.org/2014/01/01/ Scott. A quick introduction to LXD, Scott’s Weblog, 2015a [on
lxc-1-0-security-features/ line]. Available on: http://blog.scottlowe.org/2015/05/06/
Graber S. Elastic Containers. ElasticHosts Blog, 2014b [on line]. quick-intro-lxd/.
Available on: https://www.elastichosts.com/blog/elastic-con- __. Linux Containers-LXD-Introduction, Canonical Ltd, 2015b [on
tainers/. line]. Available on: https://linuxcontainers.org/lxd/introduc-
Graber S. Large scale container management with LXD and tion/.
OpenStack. Presented at the LinuxCon + CloudOpen + Con- Steven J. Vaughan-Nichols. Ubuntu LXD: Not a Docker replace-
tainerCon NA 2015, Sheraton Seattle, Seattle, WA, 2015a [on ment, a Docker enhancement. ZDNet, 2014 [on line]. Available
line]. Available on: http://events.linuxfoundation.jp/sites/ on: http://www.zdnet.com/article/ubuntu-lxd-not-a-docker-re-
events/files/slides/ContainerCon%202015-%20LXD%20 placement-a-docker-enhancement/.
%26%20OpenStack.pdf Wallner R. Linux Containers: Parallels, LXC, OpenVZ, Docker
Graber S. AWS, Amazon EC2 Container Service, Detalles del pro- and More. Au Courant Ttechnology [on line], 2015. Available
duct. Amazon Web Services, Inc., 2015b [on line]. Available on: http://aucouranton.com/2014/06/13/linux-containers-pa-
on: //aws.amazon.com/es/ecs/details/. rallels-lxc-openvz-docker-and-more/.
Wallner R. LXC, 2014 [on line]. Available on: https://help.ubuntu.
com/lts/serverguide/lxc.html.

IngenIería InvestIgacIón y tecnología, volumen XIX (número 1), enero-marzo 2018: 63-76 ISSN en trámite FI-UNAM 75
Driver LXC development for OpenNebula

Suggested citation:
Chicago style citation
García-Perellada, Lilia Rosa, Sergio Vega-Gutiérrez, José Manuel De
la Fé-Herrero, Yalina Rodríguez-De Armas, Alain Abel Garófalo-Her-
nández. Driver LXC development for OpenNebula. Ingeniería Investi-
gación y Tecnología, XIX, 01 (2018): 63-76.

ISO 690 citation style


García-Perellada L.R., Vega-Gutiérrez S., De la Fé-Herrero J.M., Ro-
dríguez-De Armas Y., Garófalo-Hernández A.A. Driver LXC develop-
ment for OpenNebula. Ingeniería Investigación y Tecnología, volume
XIX (issue 1), January-March 2018: 63-76.

About the authors


Lilia Rosa García-Perellada. Engineer in Telecommunications and Electronics from the
José Antonio Echeverría Higher Polytechnic Institute (CUJAE), La Habana, Cuba.
She holds a M.S. from the CUJAE University too. She is an assistant professor in
the Telecommunication and Telematics Department of the CUJAE University.
Sergio Vega-Gutiérrez. Engineer in Telecommunications and Electronics from the José
Antonio. Echeverría Higher Polytechnic Institute (CUJAE), La Habana, Cuba. He
is currently working in Telecommunication and Telematics Department of the
CUJAE University.
José Manuel de la Fé-Herrero. Engineer in Telecommunications and Electronics from the
José Antonio Echeverría Higher Polytechnic Institute (CUJAE), La Habana, Cuba.
He is currently working in Telecommunication and Telematics Department of the
CUJAE University.
Yalina Rodríguez-De Armas. Engineer in Telecommunications and Electronics from the
José Antonio Echeverría Higher Polytechnic Institute (CUJAE), La Habana, Cuba.
She is an instructor adjunct professor in the Telecommunication and Telematics
Department of the CUJAE University, and works in the IT Services Department of
the CUJAE University.
Alain Abel Garófalo-Hernández. Engineer in Telecommunications and Electronics from
the José Antonio Echeverría Higher Polytechnic Institute (CUJAE), La Habana,
Cuba. He holds a M.S. and a Ph.D. from the CUJAE University too. He is an assis-
tant adjunct professor in the Telecommunication and Telematics Department of
the same university.

76 I ngeniería I nvestigación y tecnología, volumen XIX (número 1), enero-marzo 2018: 63-76 ISSN en trámite FI-UNAM

Вам также может понравиться