Have you heard of Docker? Do you know what it is? Do you know if it’s something that you should take an interest in, if your organization uses large IT systems? Find out here …
Picture a busy port in the 1950s: A steam-powered cargo freighter has just arrived with tons of cotton from India. Swarms of sweating dockers are busy carrying sacks and bales of cotton up from the ship’s hold and then down narrow ladders to the wharf.
The dockers place the goods in neat stacks on the wharf’s cobblestones, where the stacks wait to be loaded onto freight train wagons that’ll eventually deploy the shipment to mills and textile factories across the country.
A foreman has orchestrated the whole operation, and he oversees it from a vantage point, wearing a flat cap, a cigarette dangling from the corner of his mouth. If he spots a docker who isn’t working, he replaces him immediately. Get the image?
Picture the same port today: Everything arrives in containers. Unloading and distribution operations are streamlined and automated. There’s no sweat. Far fewer manual operations are involved, and there’s much less overhead, so the goods can come into use much faster.
When you ship something in containers, it can be easily deployed. Now, that’s highly interesting if you deal with software, like we do at Zylinc – and like you probably do if you’re reading this.
Container technology is a leading-edge cloud computing technology that both Windows and Linux support. It helps you roll out changes to your software much faster than before, while maintaining a great end-user experience.
With container technology, you can very quickly and easily:
- Upgrade your software, and even the underlying operating system, if that’s required
- Downgrade to an earlier version if for some reason you aren’t happy with an upgrade’s features
- Automatically detect if a part of your software fails, and then have a container orchestrator automatically replace the failed part with a new instance
- Move your software solution between an on-premises installation and a cloud service provider, or even between different cloud service providers
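To make this a little more concrete, here’s a hedged sketch of what those abilities look like in practice with Docker’s built-in orchestration. The service name `myapp` and the image tags are hypothetical, but the mechanism is real: upgrading or downgrading is just a matter of changing the image tag and redeploying, and the orchestrator replaces failed containers on its own.

```yaml
# Hypothetical Docker Compose stack file for a service called "myapp".
version: "3.8"
services:
  myapp:
    image: myapp:2.0            # change to myapp:1.9 and redeploy to downgrade
    deploy:
      replicas: 3               # the orchestrator keeps three instances running
      restart_policy:
        condition: on-failure   # a failed container is replaced automatically
```

With a file like this, an upgrade, a downgrade, or a recovery from failure all go through the same simple redeploy step instead of a manual reinstallation.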
You’ll find that much of the terminology that we just used, when we pictured the 1950s wharf and today’s streamlined container terminal, is now also used in the software industry: The biggest player in container technology is called Docker, and Docker uses concepts like containers, images, swarms, stacks, and orchestrators to facilitate easy and fast deployment.
If your organization has large software installations, you can’t – and shouldn’t – ignore container technology, like Docker. Here’s why:
- It’s fast: You only need to type a few commands to install, upgrade, or downgrade your software solution, or to start or stop its services or frameworks. This is because each container image contains everything that’s necessary, for example the base operating system image, Java, Tomcat, or the .NET Framework.
- It’s minimalistic and smart: The amount of data that you need to download, and the time it takes to set up additional containers, is minimal. This is because the base operating system images that containers are built on are released and maintained by the operating system vendors themselves. Many containers can share the same base operating system image, even across tenants, products, or services. You can easily download and cache those base operating system images, and a container image itself only contains a pointer to the base operating system image.
- You get lean machines: Containers have less overhead than traditional virtual machines. Unlike a virtual machine, a container doesn’t run its own full guest operating system. Because many container images can share the same base operating system image, many containers can share the kernel of a single operating system instance. The operating systems in turn have container features that ensure that each container is completely isolated from other containers and other software, even when they run on the same operating system.
- You can forget about internal ports: With container images, you don’t need to know about the network ports that your software solution uses for internal purposes, and those network ports are never exposed to the outside world. This is because a virtual network handles all communication between containers or container hosts.
- Backup’s a breeze: There’s simply no data that you need to back up inside the containers. Everything’s stored in databases on database servers, or in other well-defined external data stores. This is because containers are treated as stateless: their images can’t change over time (they’re what’s called immutable), and anything a container writes locally is disposable.
- You get one-stop logs: When you need to collect and use log files, for example for a support case, there’s no need to log in to specific nodes. This is because containers have a built-in way to send their log files to a common log repository.
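The last two points can be sketched in a single configuration fragment. This is a hypothetical Docker Compose example, not our actual setup: the service names, images, and the log server address are assumptions. The `api` and `db` services talk to each other over a virtual container network, only one external port is published, and both services ship their logs to a central syslog endpoint using Docker’s built-in logging drivers.

```yaml
# Hypothetical fragment: two services on one internal container network.
services:
  api:
    image: myorg/api:1.0
    ports:
      - "8080:8080"             # the only port exposed to the outside world
    logging:
      driver: syslog            # send logs to a common log repository
      options:
        syslog-address: "tcp://logs.example.com:514"   # assumed address
  db:
    image: postgres:15
    # no "ports" entry: the database is reachable only from other containers
    # on the internal network, never from outside
```

Note that the database’s internal port never appears here at all, and collecting logs for a support case means querying one log server instead of logging in to each node.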
Needless to say, we’ve begun using container technology for our Zylinc solutions, and soon you’ll be able to reap the benefits. Stay tuned!
Morten Müller is Documentation & Localization Manager at Zylinc’s HQ in Denmark.