I have Umbraco running inside a Windows Docker container.
If anyone is interested please see my blog here:
My blog walks through how to do it along with plenty of useful links.
Thanks Phil, an interesting read.
I know it's still relatively early days in the Windows Docker world, but I'm not sure when or why I would currently use a Docker container over an Azure Web App, or for that matter an Azure Cloud Service, for hosting Umbraco.
Price is another reason. For example, I've created a few Umbraco sites for family and friends' tiny companies that are more retirement hobbies than businesses. Being able to run a few tiny sites in Docker on my Synology NAS or on work's servers would be a great option. :)
AWS now supports Windows containers properly, so there's no reason why not.
I know what you mean. I was mainly investigating Windows Docker support, and since I'd just done some Umbraco project work I thought: why not use Umbraco as the demo?
Docker containers usually scale quicker and cost less than cloud apps on server instances; however, Windows container images are massive, so this may not hold true in Microsoft land.
One particular use case for Docker containers is testing a production build before deploying it to a live environment. By deploying into a Docker container first and running all your UI tests etc. against it, you improve the quality of your live deployments.
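That flow can be sketched in a few Docker CLI commands (the image and container names here are made up, and this assumes a Dockerfile in the project root and Docker installed on the test machine):

```shell
# Build the release candidate image from the current checkout
docker build -t mysite:release-candidate .

# Run it detached, mapping the container's port 80 to 8080 on the host
docker run -d --name mysite-rc -p 8080:80 mysite:release-candidate

# Point the UI test suite at http://localhost:8080 here, then tear down
docker rm -f mysite-rc
```

If the tests pass, the exact same image (not a rebuild) can then be pushed to the registry and deployed to production.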
Sorry to resurrect an old thread, but I've used Docker extensively, mostly as part of a CI flow, but also standalone.
In my mind, the biggest win with Docker is the ability to quickly spin up and share dev environments. When I work with teams of developers, I tend to get rather unhappy (GREG ANGRY! GREG SMASH!) when we create bugs because our environments are slightly different. With Docker, I can almost guarantee that our environments are the same, though this can all quickly crash and burn if you use NPM on the front end.
In that regard, the big difference between a container and a VM is that a container will use the underlying operating system, whereas a VM will run its own operating system protected/isolated from the underlying operating system via a hypervisor. The benefit is that you can often eke out a little extra power (or if you're very lucky, drop your cost). The drawback is that you need to carefully think through your application and decide how important operating system parity is. If small OS differences could make a big difference, you need a process to manage dev machines.
In production, Docker is theoretically amazing and it makes CI flows incredibly easy. Armed with Jenkins, Docker and a bunch of Bash, I can build a CI flow in a matter of hours. In practice, Docker adds quite a bit of complexity once things are running. For example, while you can tell if a container is running, it's not as easy to tell if the application inside the container is running. Consequently, I often find myself building a /test endpoint inside each container so I can monitor what's going on and spin up new containers as things get messy. I get a rash when I deploy a /test endpoint in a production environment - it is incredibly amateur, but it beats the heck out of deploying/running three versions of the same container in the hope that one of the three will work most of the time.
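For what it's worth, Docker has a built-in HEALTHCHECK instruction that covers part of this: it polls the application itself, so `docker ps` reports the app's health rather than just "the container is up". A sketch of a Dockerfile fragment (the /test endpoint path is hypothetical, and a Windows container would typically use PowerShell's Invoke-WebRequest rather than curl):

```dockerfile
# Poll the app's own endpoint every 30s; after 3 consecutive failures
# Docker marks the container "unhealthy" instead of "running"
HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
  CMD curl -f http://localhost/test || exit 1
```

It doesn't remove the need for external monitoring, but it does let an orchestrator distinguish "container up, app dead" from "app healthy".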
Docker also makes certain error codes more painful than usual. 500 in particular will give you a migraine. Maybe the container itself is down, or maybe the application inside the container is down, or maybe the application decided not to respond because some weird undocumented setting put it to sleep, or maybe the underlying OS is having an issue, or maybe you just haven't upgraded the underlying operating system recently enough, or maybe you upgraded it too quickly and got an update mismatch between the machine where you built it and the prod machine. This complexity is hard for less experienced developers to grok, particularly if they're confronted with a broken production system and angry users.
In large deployments, Docker on its own is hard to manage. I strongly recommend orchestrating containers with Kubernetes as it's the most mature tool that I have used. And frankly, running an update with Kubernetes is unbelievably smooth and reliable. Kubernetes' support for windows containers only came out of alpha in January or February of 2018. I haven't tried it for anything non-trivial that actually operates at scale.
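To illustrate why updates with Kubernetes feel so smooth: a Deployment can declare a rolling update strategy, and Kubernetes replaces pods gradually, only retiring old ones as new ones pass their readiness checks. A hypothetical sketch (names and image are made up):

```yaml
# Roll out a new image while keeping at least 2 of 3 replicas serving
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysite
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one pod down during the rollout
      maxSurge: 1         # at most one extra pod created temporarily
  selector:
    matchLabels:
      app: mysite
  template:
    metadata:
      labels:
        app: mysite
    spec:
      containers:
      - name: mysite
        image: myregistry/mysite:2.0
```

Pushing the update is then just `kubectl set image deployment/mysite mysite=myregistry/mysite:2.1`, and `kubectl rollout undo` reverses it if things go wrong.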
I wonder how we persist the Examine index files within Docker, since they'll all be wiped out every time we deploy a new Docker image.
Firstly, you don't need to persist the index files: when your new container deploys and boots Umbraco, it rebuilds the whole index from scratch.
If you really wanted to persist the index files, you'd need to make Umbraco access them from a different location, e.g. cloud storage, like we do with the media folder using a FileSystemProvider plugin.
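Another option, if you'd rather keep the files local than move them to cloud storage, is a named Docker volume mounted over the index folder, so the files outlive any individual container. A sketch (the path is the usual Umbraco 7 Examine location, but check your own config):

```shell
# Mount a named volume over the Examine index folder in a Windows
# container; the volume survives when the container is replaced
docker run -d --name mysite `
  -v examine-data:C:\inetpub\wwwroot\App_Data\TEMP\ExamineIndexes `
  mysite:latest
```

That said, as noted above, rebuilding on boot is usually fine unless the site is large enough that reindexing takes a painful amount of time.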