Containerization technologies offer higher density than VMs but also some very useful abstractions. If not orchestrated, and not built in a consistent and repeatable way, they can become a maintenance nightmare, as relying on any kind of blobbish static executable can be. Both the performance impact and the complexity they can present can be managed by applying what we believe are very rational patterns.
Deployment is a pain well worth a bit of overhead. And in the use cases of yonder you might be right; Linux never really suffered from DLL Hell, yet the upside still sounds good for static binaries. If you have a single instance of your program running on a server, the overhead will be negligible in all but the most extreme use cases. And you think: updating may be just a teeny-weeny bit harder, but with all my DevOps goodness I should be fine.
I posit that containers compound this issue; and that where these days the sorrow is only beginning, it will soon run deep and wide. Bear with me. We know why containers are great, right? They give us a new abstraction.
And a damn good one. Applications are more than code. More than the executable that we ship. An application is one with its underlying infrastructure. It is nothing without its configuration, its data, its place in a topology.
Otherwise it is just a bunch of useless bits; or more precisely, a single service that can be part of an application. So basically, containers are the new static binaries. They take the idea further. And they are harder to upgrade. And, by the way, the overhead they represent might now also be compounded if they are running static binaries.
The more we think about those containers as blobs the more they resemble good old virtualization. You would argue this is untrue because of the micro-services approach.
These days an application will usually be an ensemble, a graph composed of multiple services. As such our single container, though already useful, is not yet the whole thing. Without all of these in a coherent state we are back to useless bits. With containers, deploying each part of this service graph is easier, as each service will probably simply expose a single TCP socket to the others.
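That requirement of a coherent state can be made concrete. A toy sketch of an application modeled as a graph of named services, each exposing one endpoint and depending on others by name; all the service names and endpoints here are invented for illustration:

```python
# Hypothetical model: an application as a graph of services.
services = {
    "web":   {"endpoint": ("0.0.0.0", 8080),  "depends_on": ["api"]},
    "api":   {"endpoint": ("10.0.0.2", 9000), "depends_on": ["db", "cache"]},
    "db":    {"endpoint": ("10.0.0.3", 3306), "depends_on": []},
    "cache": {"endpoint": ("10.0.0.4", 6379), "depends_on": []},
}

def missing_dependencies(graph):
    """The application is coherent only if every declared edge resolves."""
    return sorted(
        dep
        for spec in graph.values()
        for dep in spec["depends_on"]
        if dep not in graph
    )

# Until this list is empty, the deployment is just useless bits.
```

The single container is one node of this graph; the application only exists once the whole graph is satisfied.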
The opacity here is extremely useful as an abstraction, but without careful architecture you get hit by the same penalties as with static binaries. When looking at the orchestration layer, the opacity of each component, of each container, having everything bound together neatly in one place without the need to know any of the internals, clearly makes our life easier.
This is a kind of plug-and-play architecture. A lego of components. But at the price of higher overhead and probably some instability. It might be plug and play, well, like Windows 95 was, because the internal opacity can also bring us flakiness. Where in a traditional architecture we would have had maybe two or three servers for a simple application, we might now have the equivalent of dozens for the same functional scope.
Each component is separated, but the way they are glued together can be unstable. Do you really want to try switching your datacenter off and on again to resolve an issue? Plug and play is nice, but when talking about deployments, consistency is more important.
Please remember that this is not something imposed by the container orientation itself; LXC and other containerization-like techniques just isolate a binary from the rest of the OS. With enough care we can pay little to no overhead at all.
If we work on the correct abstractions we can avoid the pitfalls. More on that later. Imagine that we have a security issue in a library. Let's say in a popular, widely used one. That never happens, right?
We might very well have thousands and thousands of copies of the vulnerable routines hidden away in binaries, themselves hidden inside the blobs that are containers. There are very few tools to help us discover which those might be, and which binary used which library at which version.
A problem like this is not just something that might happen soon; it is something that is bound to happen. How will you know which container is affected? The complexity is staggering. As a rule of thumb, the smaller the granularity of the services we run the happier we are, which is also why minimizing overhead is of extreme importance to us.
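To make that audit problem concrete, here is a hypothetical sketch, not a real tool: walk the extracted root filesystems of a fleet of container images and flag any copy of a library older than a fixed version. The library name, the paths, and the version scheme are all assumptions for illustration.

```python
import os
import re

# Hypothetical audit: find copies of libssl older than a fixed
# version inside already-extracted container root filesystems.
VULNERABLE_BELOW = (1, 0, 2)
LIB_PATTERN = re.compile(r"libssl\.so\.(\d+)\.(\d+)\.(\d+)$")

def scan_rootfs(rootfs_dir):
    """Return paths of vulnerable library copies under one rootfs."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(rootfs_dir):
        for name in filenames:
            match = LIB_PATTERN.search(name)
            if match and tuple(map(int, match.groups())) < VULNERABLE_BELOW:
                hits.append(os.path.join(dirpath, name))
    return hits

def scan_fleet(images_dir):
    """Map each extracted image directory to its vulnerable copies."""
    return {
        image: scan_rootfs(os.path.join(images_dir, image))
        for image in sorted(os.listdir(images_dir))
    }
```

Note what this implies: every opaque blob has to be cracked open and walked. On a classic host with shared libraries, a single package-manager query would have answered the same question.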
Now, say what you may about Debian; compare this nightmare scenario, updating those opaque blobs inside opaque containers coming from a third party, to updating a single shared library and gracefully restarting a bunch of servers.
That day, The Debian Way wins. There are of course many ways to lessen the blow, and the first that comes to mind is imposing fully traceable, repeatable builds. This means that the good practice is to build your container images from source. It is much easier to find vulnerable containers by scanning their underlying code, and if everything is automated from build to deployment, you pay no cost for deploying the exact same source again, the built container being simply a cache of the build process.
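A minimal sketch of that "built container as a cache of the build process" idea, under the assumption that a build is keyed by a digest of its inputs; the names here are illustrative, not any particular tool's API:

```python
import hashlib

def build_digest(source_files, dependencies):
    """Content-address a build: same inputs, same digest, no rebuild."""
    h = hashlib.sha256()
    for path in sorted(source_files):
        h.update(path.encode())
        h.update(source_files[path])  # file contents, as bytes
    for dep in sorted(dependencies):
        h.update(dep.encode())
    return h.hexdigest()

class ImageCache:
    """The built image is just a cache entry keyed by its inputs."""
    def __init__(self):
        self._images = {}

    def get_or_build(self, source_files, dependencies, build):
        digest = build_digest(source_files, dependencies)
        if digest not in self._images:
            self._images[digest] = build()  # pay the build cost only once
        return digest, self._images[digest]
```

Deploy the same source twice and you hit the cache; change one byte of source or one declared dependency and you get a new digest, hence a fresh, traceable build.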
This is how we do it in Platform. Containers are still a content-addressable thing. If you deploy MySQL servers in a grid, the image is going to be identical on each host, so we are only going to pay the disk space and much of the memory penalty once.
What is shared will only cost the OS once. This is because all our containers are read-only. When we need to upgrade a piece of software, this just means pushing the commit, and it gets built and deployed. And because the relationships in a topology are semantic, because we understand who depends on what, it is now just an issue of refreshing clusters, which happens without downtime.
Containers and VMs are not only run-time concepts. Abstractions allow us to hide complexity and put up system frontiers. They allow us to separate responsibilities; they allow us to manage change. But blobs by themselves are not abstractions, or at least not useful ones. You want this build process to have other qualities, primarily that the further you go towards the outer layers of your build, the more declarative and less imperatively scripty you get.
And the build is based simply on declaring the dependencies of the application. We want declarative infrastructures because we are pursuing what can seem an oxymoron: consistent, source-based deployments where the source moves.
You get the benefits of static binaries: the whole shebang will always run together, as an integral unit. It no longer has any external dependencies. Add to it the fact that what we manage together is not only the consistent state of each micro-container, but also the graph of services, their relations, the order in which they need to start, the order in which they need to stop; how to freeze them and how to gracefully restart them; how to clone them and how to move them between hosts.
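Much of that orchestration knowledge can be derived mechanically from the declared dependencies rather than scripted by hand. A minimal, illustrative sketch, assuming an acyclic service graph and no particular orchestrator's API:

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Each service declares what it depends on; the orchestrator
# derives start and stop order instead of us scripting it.
depends_on = {
    "web": {"api"},
    "api": {"db", "cache"},
    "db": set(),
    "cache": set(),
}

# Dependencies first: db and cache before api, api before web.
start_order = list(TopologicalSorter(depends_on).static_order())

# Stopping is simply the reverse: dependents drain first.
stop_order = list(reversed(start_order))
```

The same declared graph that drives the build drives the run-time choreography, which is what keeps the two from drifting apart.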
These are the orchestration-level abstractions that we believe to be an integral part of the application, and that allow us to confidently, continuously deploy. With the extra layers of orchestration, by letting git be the repo of all things, you can now also update: the infrastructure is at a new commit. It is again immutable. If you revert, you will get the precise state you were in before. And you can always diff.
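Taking "you can always diff" literally: a toy sketch that treats a deployment state as a flat mapping of configuration keys to values (the keys and values are invented for illustration):

```python
def diff_states(old, new):
    """Diff two immutable deployment states, git-style."""
    changes = {}
    for key in sorted(set(old) | set(new)):
        if old.get(key) != new.get(key):
            changes[key] = (old.get(key), new.get(key))
    return changes

# Two commits of the same application's infrastructure.
state_a = {"mysql.version": "5.7.21", "web.replicas": 3, "cache.memory_mb": 512}
state_b = {"mysql.version": "5.7.22", "web.replicas": 3, "cache.memory_mb": 1024}

# Reverting is just deploying state_a again; the diff shows exactly
# what changed between the two deployments.
```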
You can always know precisely what has changed between two application deployment states. Building afresh, on declarative, immutable infrastructures, is what defends your application from infrastructure rot.
Because, like code, infrastructures rot, and maybe faster. A tweaked configuration parameter here, a changed memory allocation there; then a VM or a container cloned and now running on different specs… Why is this parameter there now? No one knows, and no one will dare change it. Infrastructures rot like code does: when fear of change sets in, when there are no commit messages.
Copy-pasted code has no semantics; it may work, but it rots. The same is true for any part of the whole infrastructure. But there are so many nice automation tools these days, you might say; and the code that runs the scripts is of course in a git repo… We love Puppet, and we love Fabric; we use both. And all of these, with Ansible, Salt, and even Chef, are great, useful tools.
You might of course be one of the golden ones, already doing continuous deployment, having invested heavily both in automating your deployment and in huge amounts of homemade tooling to make the whole thing smooth. It may continuously break, but you can continuously repair it.
So failure here is in no way unavoidable; avoiding it is simply very costly. Somewhat orthogonal to configuration management and deployers, we can see of late the emergence of an orchestration layer for containers: from the older Mesos project to the more recent Swarm, Fleet, Flynn and Kubernetes, and probably a dozen others; there seems to be a new one every week.
Most still lack maturity, and of these, the one that accounts for many of the concerns I am raising is probably Kubernetes. It exposes some of the primitives that would, with sufficient investment, allow the creation of an orchestration layer that allows for consistent operations.