Docker has created a big wave over the past year. For those not familiar with it, Docker is an application packaging and orchestration technology that uses containers to transport a set of applications, along with their exactly specified dependencies, and run them anywhere inside a lightweight Linux container environment. The reason this is so different from simply transporting virtual machine images is that in Docker, the exact package base and every change layered on top of it are specified in a small template file (the Dockerfile) and applied precisely and incrementally by container technology: mount namespaces combined with overlay filesystems. This contrasts with a huge virtual machine image, which is difficult to transport and whose changes over its lifecycle are very difficult to describe. This cascading, lightweight container build approach, coupled with a freely available library of useful templates, dramatically shortens the time it takes enterprises to build, test, and deploy applications. Effectively, Docker allows development on top of a platform (PaaS) without requiring the platform itself to travel with the application — only the small description file does.
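The incremental, template-driven build described above can be pictured with a minimal Dockerfile sketch. The base image, package, and paths here are illustrative assumptions, not taken from any particular application:

```dockerfile
# Each instruction below produces one overlay layer; only layers whose
# inputs change are rebuilt and re-shipped, never a whole machine image.

# The exact package base, pinned by tag
FROM ubuntu:14.04

# Incremental change: install one package on top of the base layer
RUN apt-get update && apt-get install -y nginx

# Incremental change: add the application's own content
COPY ./site /usr/share/nginx/html

# Declared port and start command travel with the description,
# not with any particular host
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```

Running `docker build` replays only the layers whose inputs changed, which is why shipping a Dockerized application is so much lighter than moving a VM image.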
However, as has been noted, the Docker technology is not free from problems, many of which stem from the immaturity of the underlying container technology in Linux. The specific issues are security (the ability of root to break out of the container environment), isolation (the ability of one container to steal resources needed to service another), and migration (the ability to move a running Dockerized application from one node to another without a restart). As long as Dockerized applications run safely behind the enterprise firewall, these issues can largely be ignored. However, one of the promises of Docker is the ability to run anywhere, and at some point Dockerized applications need to leave the safety of the enterprise firewall and provide real services to other businesses or customers.
Service providers can naturally pick up business from the enterprise (or, more usually, from specific departments looking to serve specific business needs) by offering to run these Dockerized applications within their infrastructure. Unfortunately, the security, isolation, and migration problems that can be ignored in the enterprise cannot be so easily dismissed in the service provider environment. The current best practice for service providers is to run Docker and the allied Linux containers on top of a virtual machine (VM). The VM provides the security, isolation, and even migration, insulating the service provider from the shortcomings of upstream Linux containers. However, this comes at a cost: if every Linux container running Docker has to be nested within a VM, not only is the solution no longer lightweight or elastic, but every Docker instance also pays the hypervisor performance penalty.
Enter Virtuozzo: We’ve been working for a long time to strengthen the containers in the upstream Linux kernel and one day it will be possible to deploy upstream Linux containers without worrying about security, isolation, and migration. In the meantime, though, service providers wishing to take advantage of Docker can now deploy it into Virtuozzo containers, which are available today as the hardened, secure precursors of what will one day be available in upstream Linux. At a stroke, they can fix security, isolation, and migration (because Virtuozzo supplies them all), while keeping the elastic, lightweight container paradigm. As an added bonus, there’s no VM performance overhead and a significant improvement in density to boot.
Initially, Virtuozzo is offering this on a flat, customer-managed VPS model (to match the existing market), but longer term, deploying Dockerized applications directly into containers (instead of nested within a VM) enables several additional, billable services. Firstly, applications can be metered on the resources they actually consume rather than billed on the current flat VPS model, and they can be scaled (vertically using container resource limits and horizontally using migration) so that they respond elastically to the external load placed on them. Secondly, the service provider, rather than the customer, would be responsible for managing Docker itself. This would broaden the appeal of Dockerized applications beyond the current DevOps use case to all developers. Thirdly, deploying Dockerized applications directly on containers allows huge increases in density, and thus much better margins for the service.
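The difference between the flat VPS model and resource-consumption metering can be sketched in a few lines. The rates, the `Usage` fields, and the flat monthly price below are illustrative assumptions for the sake of the arithmetic, not a real Virtuozzo billing API:

```python
from dataclasses import dataclass


@dataclass
class Usage:
    """Resources a container actually consumed over a billing period."""
    cpu_core_hours: float   # CPU time actually used
    ram_gb_hours: float     # RAM actually resident
    disk_gb_hours: float    # storage actually allocated


def metered_cost(u: Usage,
                 cpu_rate: float = 0.03,    # $ per core-hour (assumed)
                 ram_rate: float = 0.01,    # $ per GB-hour (assumed)
                 disk_rate: float = 0.0002  # $ per GB-hour (assumed)
                 ) -> float:
    """Bill only for what the container consumed."""
    return (u.cpu_core_hours * cpu_rate
            + u.ram_gb_hours * ram_rate
            + u.disk_gb_hours * disk_rate)


FLAT_VPS_MONTHLY = 40.00  # flat plan: paid in full even when the app is idle

# A mostly idle Dockerized app over a 720-hour month:
idle_app = Usage(cpu_core_hours=30, ram_gb_hours=360, disk_gb_hours=7200)
print(f"metered: ${metered_cost(idle_app):.2f} vs flat: ${FLAT_VPS_MONTHLY:.2f}")
```

Under these assumed rates, the idle application bills at a few dollars instead of the full flat fee, which is exactly the elasticity argument: the customer pays for load served, and the provider recovers the difference through density.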
The post For Service Providers Using Virtuozzo, Docker Isn’t Just a DevOps Phenomenon Anymore is fed from ReadySpace Cloud Services United States. Contents strictly belong to ReadySpace and its respective partners.