Wastholm.com

This article is not an attempt to explain containers in one go. Instead, it's a front page for my multi-year study of the domain. It outlines that learning path and then walks you through it, pointing to more in-depth write-ups on this same blog.

Mastering containers is no simple task, so take your time, and don't skip the hands-on parts!

But it turns out that Firecracker is relatively straightforward to use (or at least as straightforward as anything else for running VMs), the documentation and examples are pretty clear, you definitely don't need to be a cloud provider to use it, and, as advertised, it starts VMs really fast!

So I wanted to write about using Firecracker from a more DIY "I just want to run some VMs" perspective.

I'll start out by talking about what I'm using it for, and then I'll explain a few things I learned about it along the way.
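To give a flavour of that DIY workflow, here is a minimal sketch (mine, not the linked post's) that boots a single microVM from a JSON config file. The kernel and rootfs paths are hypothetical placeholders, and the config keys follow Firecracker's documented JSON schema as I understand it, so double-check them against the docs for your version.

    import json
    import subprocess
    import tempfile

    # Hypothetical paths: you need an uncompressed kernel image and an
    # ext4 root filesystem image to boot anything at all.
    KERNEL = "/srv/vms/vmlinux.bin"
    ROOTFS = "/srv/vms/rootfs.ext4"

    # Minimal microVM definition: one vCPU, 128 MiB of RAM, a writable
    # root drive, and a kernel command line that talks to the serial console.
    vm_config = {
        "boot-source": {
            "kernel_image_path": KERNEL,
            "boot_args": "console=ttyS0 reboot=k panic=1 pci=off",
        },
        "drives": [
            {
                "drive_id": "rootfs",
                "path_on_host": ROOTFS,
                "is_root_device": True,
                "is_read_only": False,
            }
        ],
        "machine-config": {"vcpu_count": 1, "mem_size_mib": 128},
    }

    with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
        json.dump(vm_config, f)
        config_path = f.name

    # Start the VM from the config file; the same socket also exposes a
    # REST API for configuring the machine programmatically.
    subprocess.run(
        ["firecracker", "--api-sock", "/tmp/firecracker.sock",
         "--config-file", config_path],
        check=True,
    )

Keeping the whole definition in one JSON file is the simplest route; the REST API on the socket is the more flexible option once you want to attach extra drives or network interfaces programmatically.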

This is a guide on setting up a private apt repository that is accessible over a local network via HTTPS and is signed, so that you don't have to pass --allow-unauthenticated to install packages.

...

For my use case I have two distributions of packages: production and test. The packages in each distribution vary based on what I have approved for use in a live/production environment versus a test environment, so I can separate the packages I use for normal everyday work from the ones I am still testing and not yet ready to take live. If you are only using one distribution, modify the instructions accordingly.
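To make the two-distribution idea concrete, below is a rough sketch (not taken from the guide, which may well use different tooling such as reprepro or aptly) of a build script that indexes and signs a "production" and a "test" distribution with dpkg-scanpackages, apt-ftparchive and gpg. The repository root, architecture and signing key are assumptions for illustration only.

    import gzip
    import subprocess
    from pathlib import Path

    # Hypothetical layout and key: the repo root is whatever your HTTPS
    # server (nginx, Apache, ...) serves, and the key is a local GPG key.
    REPO_ROOT = Path("/srv/apt")
    SIGNING_KEY = "apt@example.lan"
    DISTRIBUTIONS = ["production", "test"]

    for dist in DISTRIBUTIONS:
        dist_dir = REPO_ROOT / "dists" / dist
        binary_dir = dist_dir / "main" / "binary-amd64"
        binary_dir.mkdir(parents=True, exist_ok=True)

        # Index every .deb placed under this distribution into a Packages file.
        packages = subprocess.run(
            ["dpkg-scanpackages", "--multiversion", f"dists/{dist}"],
            cwd=REPO_ROOT, check=True, capture_output=True, text=True,
        ).stdout
        (binary_dir / "Packages").write_text(packages)
        with gzip.open(binary_dir / "Packages.gz", "wt") as gz:
            gz.write(packages)

        # Build the Release file (checksums of the index files) ...
        release = subprocess.run(
            ["apt-ftparchive", "release", f"dists/{dist}"],
            cwd=REPO_ROOT, check=True, capture_output=True, text=True,
        ).stdout
        (dist_dir / "Release").write_text(release)

        # ... and sign it, which is what lets clients drop --allow-unauthenticated.
        subprocess.run(
            ["gpg", "--batch", "--yes", "--default-key", SIGNING_KEY,
             "--clearsign", "--output", str(dist_dir / "InRelease"),
             str(dist_dir / "Release")],
            check=True,
        )
        subprocess.run(
            ["gpg", "--batch", "--yes", "--default-key", SIGNING_KEY,
             "--armor", "--detach-sign", "--output", str(dist_dir / "Release.gpg"),
             str(dist_dir / "Release")],
            check=True,
        )

Clients then add a sources.list entry of the form "deb https://apt.example.lan production main" (the hostname is again a placeholder) and install the repository's public key so apt can verify the signatures.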

Most web apps need login credentials of some kind, and it is a bad idea to put them in your source code, where they get saved to a git repository that everyone can see. Usually they are handled through environment variables, but Docker has come up with what it calls Docker secrets. The idea is deceptively simple in retrospect, but while you're figuring it out it feels arcane and it's hard to parse what is going on.
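The mechanics, in short: a secret is created once with docker secret create, granted to a service, and then shows up inside that service's containers as a read-only file under /run/secrets/<name>. Below is a small, hypothetical helper (not from the article) showing how an app might read credentials that way, falling back to an environment variable when it runs outside a swarm.

    import os
    from pathlib import Path
    from typing import Optional

    def read_secret(name: str, default: Optional[str] = None) -> Optional[str]:
        """Return a credential from a Docker secret if one is mounted.

        Swarm mounts each secret granted to a service as a file under
        /run/secrets/<name>; outside a swarm (local development, CI)
        we fall back to an environment variable with the same name.
        """
        secret_file = Path("/run/secrets") / name
        if secret_file.is_file():
            return secret_file.read_text().strip()
        return os.environ.get(name.upper(), default)

    # Hypothetical secret, created beforehand with something like:
    #   printf '%s' 'hunter2' | docker secret create db_password -
    db_password = read_secret("db_password")

Unlike an environment variable, the value never ends up baked into the image or visible in docker inspect output.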

Starting with Docker 1.12, we have added features to the core Docker Engine to make multi-host and multi-container orchestration easy.

We’ve added new API objects, like Service and Node, that will let you use the Docker API to deploy and manage apps on a group of Docker Engines called a swarm. With Docker 1.12, the best way to orchestrate Docker is Docker!
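As a taste of those API objects, here is a brief sketch using the Python Docker SDK against an engine that has already run docker swarm init; the image, service name and port mapping are placeholders, not anything from the announcement.

    import docker

    # Talks to the local engine, which is assumed to be a swarm manager.
    client = docker.from_env()

    # A Service is the swarm-level unit of deployment: the managers keep the
    # requested number of replica tasks running across the cluster's Nodes.
    client.services.create(
        "nginx:alpine",
        name="web",
        mode=docker.types.ServiceMode("replicated", replicas=3),
        endpoint_spec=docker.types.EndpointSpec(ports={8080: 80}),
    )

    # Nodes are the engines that make up the swarm.
    for node in client.nodes.list():
        print(node.attrs["Description"]["Hostname"], node.attrs["Spec"]["Role"])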

This is a long — sorry not sorry! — written piece specifically about the high-level aspects of deployment: collaboration, safety, and pace. There's plenty to be said for the low-level aspects as well, but those are harder to generalize across languages and, to be honest, a lot closer to being solved than the high-level process aspects. I love talking about how teams work together, and deployment is one of the most critical parts of working with other people. I think it's worth your time to evaluate how your team is faring, from time to time.

Otto is the successor to Vagrant.

...

Otto knows how to develop and deploy any application on any cloud platform, all controlled with a single consistent workflow to maximize the productivity of you and your team.

There are some obvious issues with running third-party Dockerfiles. Like most of the Docker ecosystem, Dockerfiles were designed for personal use by an individual with root access. Once you start distributing them, however, you’re essentially giving root to a stranger. This blog post is about why you shouldn’t even be using Dockerfiles for your own projects.

Serf is a tool for cluster membership, failure detection, and orchestration that is decentralized, fault-tolerant and highly available. Serf runs on every major platform: Linux, Mac OS X, and Windows. It is extremely lightweight: it uses 5 to 10 MB of resident memory and primarily communicates using infrequent UDP messages.

Strider is an Open Source Continuous Integration and Deployment platform. It is written in JavaScript/Node.js and uses MongoDB. It is released under the BSD license. While conceptually similar to systems such as Travis and Jenkins, Strider is designed to be easy to set up, use, and customize.
