
Kubernetes vs. Mesos vs. Swarm: An Opinionated Discussion

Posted by: Ivana Ivanovic | Docker Containers, DOXLON, Docker | 0 Comments

The topic of the October 2016 DOXLON Meetup was container orchestration. The goal: shed
light on and compare existing technologies: Kubernetes, Mesos and Docker Swarm. The Meetup
started with lightning talks about Kubernetes and Mesos in production at two large companies,
Pearson and Schibsted. This was followed by a panel discussion, moderated by our CTO Steven
Acreman.

The panel was designed as a Q&A on three container technologies, each represented by an
experienced implementor from a leading container consultancy. David O'Dwyer, founder at
LiveWyer represented Kubernetes, Contino's software developer Marcus Maxwell spoke for
Docker Swarm, and Martin Lassen, senior engineer at Container Solutions, represented Mesos.
Watch them debate core features and advantages as they answer questions we crowdsourced
from the Docker channel in Hangops Slack, as well as those from the DOXLON audience.

1. Is your platform opinionated or unopinionated?
2. What feature from one of your competitors' platforms would you like to have?
3. What's your favorite feature?
4. Are you working on an interoperability plane, like the Kube scheduler + Swarm + Mesos all working
together, all coexisting?
5. How are you positioning your system relative to the others from a high-availability point of view?
6. Which one of your platforms has the best enterprise security model?
7. All three container and container-management systems seem very generic. Which do you think is an
outstanding quality feature that you have? [Presenter clarifies] Your differentiators, your unique
selling points.
8. Where do you see your platforms going in terms of development and further release cycles?
9. Out of the three products, which one is less prone to bugs, and when bugs are found and reported,
which one tends to get them resolved the quickest?
10. From a business perspective and a money perspective, which one would you choose?
11. I'm new to containers, and I understand there are some particular workloads where containers shine
and deliver benefit. What if I wanted to run back-office infrastructure on containers? Do you think it's
already time for that?
12. To build a prod-like PoC, how many nodes do you need, and are there any glitches when you're doing
a PoC for your products?

Is your platform opinionated or unopinionated?


Marcus (Docker Swarm): Docker Swarm, as everyone knows, released 1.12, and that was
integrated into the Docker core, so it's actually quite opinionated. But it also brings a lot of nice
features, because you have the same API that builds on what you already know - the API is very
clean, and that's why Docker became Docker, and why everyone got on board: because of those
APIs and those strong opinions on how you shall be using it and how you shall be consuming it.
That being said, there are providers like Amazon ECS that just go and extend those APIs and
consume them. It's not a black box - you can still go see it and use it the way you want to.

David (Kubernetes): I'd say that Kube does have opinions, but just enough opinion around most
elements. It was around two years ago that Craig Box presented at DOXLON, and that was the
first time I saw Kube. He went through a number of the different types of resources and opinions
that Kube had by default back then. One of those opinions was the idea of a pod: a number of
containers as a single deployable unit. Kube by default came with that opinion, and that for me
was the "that's it, they get it" moment. They understood the kinds of opinions a scheduler will
need to have by default for it to be usable in an operational sense. Since then, they've
introduced a number of other resources and a number of opinions around how to deploy those
resources, but they've also done an amazing job of making Kube very extendable. There's the
third-party resource, which has come in within the last couple of releases, and that's a great
example of where you can extend Kube to start to implement your own resource types. So, I'd
say by default it has just enough opinion, but by no means does it smother you with its opinion.
You can still extend it in many varied ways.
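
To make the pod opinion concrete, here is a minimal sketch of a two-container pod deployed as one unit; the names, images, and sidecar command are illustrative, not from the talk:

```bash
# Two containers scheduled and deployed as a single unit; they share a
# network namespace and a lifecycle. All names here are placeholders.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  containers:
  - name: web
    image: nginx
  - name: log-shipper              # hypothetical sidecar container
    image: busybox
    command: ["sh", "-c", "tail -f /dev/null"]
EOF
```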

Martin (Mesos): Is Mesos opinionated? I'll say no, not at all. But, that said, I wouldn't wish
writing your own scheduler on my worst enemy, as we've all learned tonight. You either have to
put Marathon on top, or you go all the way with DC/OS, and that gives you a completely different
answer. I'll say Marathon, the foundation of DC/OS, is a lot more opinionated. It's a lot closer to
what Kubernetes is today, and it will come even closer with features coming out very soon. But
the nice thing about having this underlying infrastructure that is not at all opinionated is that you
can always fall back to it. So, when you have some weird needs, you can fulfill them by going
down a level. So, yes and no to that question, I'll say.

What feature from one of your competitors' platforms would you like to have?
Marcus (Docker Swarm): Everyone knows Docker Swarm is the most recent addition here.
There was an old version of Swarm, but they redid it. What I like from Kubernetes is the feature
set - that's why Pearson decided to go with them; it's those rich features: you have daemon
sets, you have replication, you now have pet sets, and they constantly add a lot of features.
That's what I would like to see; in the next 1.13 release there are many more features coming to
Docker Swarm. Maybe in the future we'll have some feature parity across all the providers.

David (Kubernetes): I'd say from Docker Swarm, the feature I'd like to see implemented in Kube
is the zero-to-dev experience. I think it's one of the things that Docker and Docker Swarm have
done a very good job with. On that note, I will say that Kube has done a lot of work around that
in recent months: releasing kubeadm, the Kompose project that has been put in the Kube
incubator, and the Minikube project, which is another good example of getting from zero to dev
on Kube. But I'd still say that a lot of those changes in Kube have drawn inspiration from the
Docker development model.
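
As a rough sketch of that zero-to-dev path on Kube (assuming Minikube and kubectl are already installed locally):

```bash
minikube start                     # boot a single-node local cluster
kubectl get nodes                  # the cluster is immediately usable
kubectl run hello --image=nginx    # run a first container on it
```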

On the Mesos side, if there's one feature I'd like to see, I suppose it would be the large-scale
usage case studies. It's not really a feature, but it is something Mesos has: a lot of these very
large-scale, large-node deployments. I'd like to see some of those featured in the Kube
case-study library.

Martin (Mesos): For Mesos, from Docker I will say that, yeah, the zero-to-dev workflow is
definitely something I'd like to see as well. It can be very difficult to get something up and
running on your development machine. Even though we've built minimesos and things like that,
it's not quite there yet, because with Swarm there's basically no difference between normal
Docker and Docker Swarm - obviously there are differences, but from a developer's perspective,
very few.

From Kubernetes, is Kelsey Hightower considered a feature? [David (Kubernetes): yes, since
he's joined Google]. Okay, we'd definitely like to have a guy like him. He's very good at
promoting what they are doing and I think that's also very important for expanding a product.

What's your favorite feature?


Marcus (Docker Swarm): I think the main feature of Docker Swarm is, like everyone mentions,
zero to dev. You go learn Docker, you start a single container. Then you go and you're like: "I
want a LAMP stack." Okay, then you just do swarm init, swarm join. Bam, you have a cluster,
you can add more nodes, you can use Compose. Everything you learn builds gradually on that
knowledge. It's a very nice way to go - as long as you don't have a feature request for anything
else, that's it. You can even go to production with what you do in dev. It's super easy to translate
that into prod, so that's my favorite feature.
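
A minimal sketch of that flow; the advertise address below is a placeholder, and the join token is printed by `docker swarm init`:

```bash
docker swarm init --advertise-addr 192.168.99.100     # on the first node
# on each additional node, paste the join command that init printed:
docker swarm join --token <token> 192.168.99.100:2377
# the same Docker API you already know, now cluster-wide:
docker service create --name web --replicas 3 nginx
```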

David (Kubernetes): My favorite feature in Kube probably isn't the most glamorous one - it would
be the ConfigMap. I think ever since the implementation of the ConfigMap resource, it's made
the workflow of deploying containers within Kube a whole lot easier. It's created nice, very
similar patterns for deploying cloud-native and non-cloud-native applications using Kube. It's a
really good single resource for being able to take, say, a resource file that just contains key
values. Then you've got your main deployment file, which contains all of the definition for
running your container, and you can deploy that in dev, stage, and production, and just swap in a
ConfigMap file for each of those environments with all the environment-specific variables.

This is great, but it also has a feature that allows you to mount the ConfigMap onto the file
system, which I think is brilliant. In situations where you don't have an application that's
environment-variable aware, you can just throw the entire configuration file into a ConfigMap
and essentially mount that on the file system; it will just appear inside the container on
deployment. I think the versatility of that resource has certainly sped up our deployment of
containers for a number of customers over the last year.
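
A sketch of both patterns David describes; the map names, key, file, and mount path are illustrative:

```bash
# Pattern 1: key/value pairs, one ConfigMap per environment, exposed to
# the container as environment variables:
kubectl create configmap app-config --from-literal=DB_HOST=db.staging.internal

# Pattern 2: a whole config file for an app that isn't env-var aware,
# mounted into the container's filesystem:
kubectl create configmap app-files --from-file=app.conf
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: legacy-app
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: config
      mountPath: /etc/app          # app.conf appears here at deploy time
  volumes:
  - name: config
    configMap:
      name: app-files
EOF
```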

Martin (Mesos): That's definitely a nice feature. For Mesos, I would say it's the vendor
independence. Last time I tried to deploy Kubernetes into AWS, it went horribly wrong - I
destroyed a cluster. It didn't seem that it was tested that much. I do know what went wrong, but
why it happened is a bit of a question mark to me. Independence also means you can deploy
Mesos on your local machines. It's very easy to actually get Mesos up and running: there are
packages in all major Linux distributions today, and it takes roughly five minutes to get it up.

Are you working on an interoperability plane, like the Kube scheduler + Swarm + Mesos all working together, all coexisting?
David (Kubernetes): No. [laughter] Do you think Docker will do that?

Marcus (Docker Swarm): I think Docker tries to have everything - to make sure that it's kind of
like a Mac. They want to control the Docker system for their feature set. As you can see, a lot of
people use Macs because you get this experience that's the same: the applications always have
the same style, they have a similar feature set, and you really enjoy the experience. You have to
make some choices. If everything was interchangeable, then basically you wouldn't have the
same experience, because you'd have too many moving parts. That's why Docker Swarm is so
easy: you don't have to bring up any etcd cluster, because the Raft store is inside Swarm itself.
There are so few moving parts. Also, Docker is a company, so they're trying to push Docker
Datacenter, which provides more features for you with the same experience. That's the goal.

David (Kubernetes): If I haven't already used up my turn, I'd say it's a good dream to have. Being
able to have that sort of shared resource pool across multiple frameworks would only happen
when we have standards, and I think it definitely raises the question of having standards at
every layer of the implementation stack. That's something that's been put into the public eye a
little bit more over the last couple of months. There have been a couple of conversations on
Twitter that we've probably all been quite aware of and enjoyed, starting to ask the questions
around how we manage standards in this space.

Martin (Mesos): There was a client who approached us some time ago about running Kubernetes
on top of Mesos. The answer - yeah, we did laugh as well - we were actually wondering how
many layers of orchestrators you can run on top of each other before you bog down the
machine.

David (Kubernetes): You can also run it on Google Cloud and then have all of those hypervisors
and layers as well.

Martin (Mesos): We did ask Mesosphere about it... Previously there was support for
Kubernetes, and I think they realized it wasn't good enough, so they removed it from the DC/OS
Universe, and nobody has picked it up since. I think there's a reason for that. If you want to
offer your teams both Mesos and Kubernetes, you might as well run two completely different
clusters, on smaller machines or whatever, to keep the cost down. I don't think it's worth doing.

David (Kubernetes): It's obviously one of those things - there was a bit of a fad at the beginning
of the year where everyone was saying how they could run everyone else's scheduler. Kube
could run Mesos, and Mesos could run Kube. Then Docker Swarm came along and ruined it for
everyone. [laughter]

You present quite different systems, practically very difficult to compare directly. The question is: how are you positioning your system relative to the others from a high-availability point of view? Thank you.
[mikes get passed from one speaker to the next - laughter]

Marcus (Docker Swarm): I think for Docker Swarm - and the same will be true for Kubernetes
and Mesos - high availability is actually quite easy, as long as your DNS doesn't fall over. All of
them pretty much depend on the Raft mechanism; all of them are pretty much backed by etcd or
an equivalent store. All of them try to communicate - Kubernetes has masters; Docker Swarm
has everything worked into the Docker engine. On high availability, I think all of them are okay
and there's not much to worry about there, as long as you have enough replicas and masters.

David (Kubernetes): I agree; I think my understanding is that the story is very similar for each
product. Actually, this is probably one I'd ask about Mesos: with Mesos, there's a number of
different components that need to be put into play that are not necessarily all part of one
system. Do you have to HA each of those components individually? Or is there a general
wrapper, a sort of master collection for each?

Martin (Mesos): In a high-availability cluster you have at least three masters. Among
themselves, they will elect the leader. If you're using DC/OS, there's a DNS record for finding the
leader, or at least getting to the leader, and you can just ask any one of them and it'll tell you
who the leader is.
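
For instance, on a DC/OS cluster the leading master can be resolved through that DNS record (a sketch, run from inside the cluster; the master's `/state` endpoint reports the leader):

```bash
dig +short leader.mesos                               # IP of the leading master
curl -s http://leader.mesos:5050/state | jq .leader   # the master names its leader
```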

David (Kubernetes): Yeah, it's very similar for Kube. Essentially you've got three main - well,
there are four components: you've got your etcd key-value store which, if you're doing HA, I
would recommend running as a separate cluster. When it comes to Kube itself, you've got the
scheduler, the controller manager, and the API server. The API server is stateless, so you can
run that across multiple servers and send queries to it. But the scheduler and the controller
manager are stateful. From the early 1.0 releases, Kube implemented an additional helper
container, called Podmaster, which handles the master election across multiple systems.
Essentially it puts a TTL-value entry into etcd, and when the TTL expires, it will automatically
elect another master system to take over the roles.

I know there is a lot of work going on at the moment within the HA special interest group to
essentially remove that mechanism, so that they can make failover as easy as possible.
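
One way to peek at the current election state (a sketch; in 1.x-era Kube the holder of the scheduler lock was recorded as an annotation on an Endpoints object in kube-system):

```bash
# Look for the control-plane.alpha.kubernetes.io/leader annotation,
# which names the instance currently holding the lock:
kubectl -n kube-system get endpoints kube-scheduler -o yaml
```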

Which one of your platforms has the best enterprise security model?
Martin (Mesos): Mesos has DC/OS Enterprise, where you can get enterprise support, if that's
what you're asking.

Presenter: More around security, I think. The enterprise security model - which one is the most
secure?

David (Kubernetes): I would say it's no surprise that a number of companies have spawned
around Kube, and the three leading features they sell would be security scanning of the
container system, the UI, and the authentication/authorization layer. I think that by default, that
was something that was certainly lacking within Kube. It was not at the forefront of the
development model, but because they've essentially mirrored the service-account model that
they use within Google Cloud, they know that they have the architecture to do it.

From their perspective, I think it was a matter of when to start heavily pushing that model. I know
in the next point release, the 1.5 release, we're going to start seeing more work around service
accounts, more work around new types of resources, and policies associated with service
accounts, etc.

Marcus (Docker Swarm): There are many layers to enterprise features. Your IT security people
might say there has to be TLS end-to-end inside the cluster; Docker Swarm gets you that - it's
quite easy to get all communication over TLS. For advanced features - let's say you want RBAC,
which is a very popular one, or security scanning - Docker has its own products for the
enterprise. They have Docker Datacenter and the control plane, both of them. They also have
the Docker Trusted Registry, where you can sign your images, verify them, and scan for anything
malicious. RBAC is also built into that product, and that's why enterprises go for it.

Docker Swarm itself is quite early days, so features like service accounts are, I'm hoping, also
coming.
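
As a sketch of the signing side of that, Docker Content Trust (the mechanism behind signed images in Docker Trusted Registry) can be switched on per shell; the registry and image names below are placeholders:

```bash
export DOCKER_CONTENT_TRUST=1                     # sign on push, verify on pull
docker push registry.example.com/team/app:1.0     # pushes and signs the tag
docker pull registry.example.com/team/app:1.0     # refuses an unsigned tag
```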

All three container and container-management systems seem very generic. In particular, which do you think is an outstanding quality feature that you have? [Presenter clarifies] Your differentiators, I guess. Your unique selling points.
Martin (Mesos): Mesos doesn't necessarily have to run containers. It can just be a chroot
sandbox environment, which, yeah, isn't necessarily containers.

David (Kubernetes): I'd say the variety and breadth of the resources available - so, the feature
set - but I'd also say that the community support is something which is key. I feel that it's two
sides: the community and the enterprise support. On one hand you have a very wide and very
open community, very active Slack channels, very active special interest groups. On top of that
you have a very open platform for discussions around new features and roadmapped features,
and a very good process for promoting proposals into future releases. On the other side we
have, I think, an enterprise ecosystem which is very healthy and which keeps everyone on their
toes, but in a nice competitive way.

I think having companies such as Deis, and companies such as Red Hat, which have their own
requirements, keeps everyone on their toes. It balances out nicely, so that a number of the
larger interest groups and the smaller interest groups get represented.

Marcus (Docker Swarm): For Docker Swarm, like I mentioned before, it's the dev-to-production
experience, because that's pretty much what was promised in the container world: that you
created your container and now you can easily run it anywhere you want. The way you run it is
also the same, and that's the main differentiator. Regarding feature parity, I think Mesos is now
trying to catch up with some Kubernetes features, and all the other schedulers are doing pretty
much the same, so it's more like a waiting game. If you had asked the same question a year ago,
when Docker Swarm was not released, it would be a different conversation than we have now. It
would also be a different conversation at Kubernetes 1.1, when there were a lot fewer features
than in the latest 1.4. It all depends at what point in time you want to describe the feature set.

Touching a bit on the feature roadmap for all three platforms: we've recently heard about the project, or the tendency, or the idea of forking Docker for a different development stream. We've all seen where containers are now; a couple of years ago it was infrastructure as code. Where do you see your platforms going in terms of development and further release cycles? Where do you see it going?
David (Kubernetes): You're saying just the formative element of the individual platforms or
development on the platform? Sorry.

Speaker: Both the platforms themselves, but also the community support.

David (Kubernetes): I think at the moment, as I alluded to earlier, I honestly feel that Kube is
very community-driven. There are a number of core Google opinions, I'd say, that are set -
they're there; what basically entered the 1.0 release were these primitives that came out of their
experience of maintaining and building their own internal systems. I think now what we're seeing
are real-world requirements coming in, and ConfigMap is an example of one of those; the
Ingress controller is a really good example of another. These are features that solve real-world
problems; these are not necessarily features which benefit Google's cloud. Because, ultimately,
if they wanted you to only use their system, they wouldn't have put the Ingress controller in.
They would have just said, "Just use an external service with a load balancer, don't worry about
it." I think what we're seeing now are the heavily community-led features. Federation is a big
one - the ability to essentially leverage multiple providers, which is what we were hearing earlier
from Pearson. The ability to distribute across multiple providers, not having that vendor lock-in
- you could perhaps have said earlier in the development of Kube that lock-in was what they
were aiming for; I don't believe you can say that now. The transparency of the community is
there, and the long-term roadmap is another point, actually.

One of the big criticisms of Kube around the 1.0 release was that there was a bit of fuzziness
around where it was going long term. What's going to come up in the next release? What's going
to come in the release after that? Last September, they brought on board a community manager
called Sarah Novotny, who's done a wonderful job of getting everything in place. She's done this
wonderful job of essentially working with the community to get this roadmap, to get the special
interest groups together, to get that transparency going. So, really, for the question, from Kube's
perspective, of where it's going to be in six or twelve months' time: just go to the milestones on
GitHub. Go to the weekly community chat on Thursday, which is at 6:00 PM London time at the
moment. Go to the special interest Google groups - it's there.

Martin (Mesos): For Mesos, it's mainly focused around DC/OS and hardening it - especially the
UI, which kind of needs it. Then they are working on implementing a pod-like structure like they
have in Kubernetes. Those are the main things I've heard about that are coming up.

Marcus (Docker Swarm): For Docker Swarm it's also pretty much the same: catching up with the
feature sets while keeping the experience as polished. Then I think there are a lot more
additional kinds of features that developers would like. I think the same can be said for Mesos
and for Kubernetes - once you get it out of the box, you have a very nice scheduler and you can
do a lot of work, but in terms of operations, the feature set is usually still not there. What I mean
is very nice logging infrastructure, very nice dashboards, and some monitoring out of the box, so
that you can see what you have when you launch the platform.

Out of the three products, which one is less prone to bugs, and when bugs are found and reported, which one tends to get them resolved the quickest?
Martin (Mesos): So, I think Mesos is the one that has been around the longest. It's been working
in Twitter's production environment, and I think that's hardened it a lot.

David (Kubernetes): I was going to say it's very difficult to comment on the other two products
and how Kube is relative to them. But what I will say is that one of the other things that's
happened over the last six months is the end-to-end testing. I believe that the end-to-end
testing model - just the general development model of Kubernetes on GitHub - is brilliant. I've
always held it up as a standard for how those types of projects should be run. But one of the
things that did let them down was the breadth of their end-to-end tests. This was one of the big
criticisms of Google going into this year: that the end-to-end tests were very Google Cloud
centric. There wasn't nearly enough AWS coverage, AWS being one of the primary destination
providers. I think what you alluded to earlier about the Kube scripts - the kube-up scripts
breaking - that should never happen. That should never happen, and it was happening simply
because they never had the breadth of providers to essentially test this. I think it was a slight
victim of their own hands-off approach, where they cut it loose as an open-source project:
Google weren't going to come in and provide this AWS infrastructure; it was something which
either another company or the community needed to do. I know another large tech company
came in and donated a rather large AWS cluster to do this, and recently - in the last two months
- another provider has offered up a large vSphere environment, which is why we're seeing huge
changes in the vSphere support for Kube.

I'd say the end-to-end tests now are very, very solid. The community is very active; they have
some very good developers with some very firm opinions on code practices, monitoring the
issue queues and monitoring the code commits. I think the quality of the code has increased,
and the turnaround time of the end-to-end tests is shortening.

Marcus (Docker Swarm): Regarding Docker Swarm: both Kubernetes and Docker are written in
Go. The nice thing about that is that it's easy to contribute if you want, and they're both on
GitHub, which is the main thing I like about both projects. I don't know if Mesos is? Yes. So, if
you ever hit an issue, it's easy to find the error: you go to the GitHub issues and follow the
thread. Usually if it's critical there's a bug fix - there are always dot releases in Docker. If it's
anything else, it will be put on the roadmap and get promoted, and across most releases I think
there weren't that many features requested in Docker that weren't quickly iterated upon and
decided. If not, you will be told in the tickets: we're not doing this for now, you have to wait.
That's pretty much the working model for a lot of open-source projects now.

From a business perspective and a money perspective, which one would you choose?
Audience Member: I want to ask you to look at it more from a business perspective than a
technical perspective. Let's say you have to make a choice of one of those products, based on a
budget, for applications that are already running on Docker, for example. You have to take into
consideration how long it takes to train your team, and maybe getting some more people - how
easy would it be to get new people for each of the products? If you put it all in - the money you
would spend to get into a good state within, let's say, six months or a year - which product
would you choose? Even though you might say, "Oh, some products have a lot better features,
but you can spend a bit more money, versus others where you'd have to implement some of
those features yourself and it takes a lot longer." From a business perspective and a money
perspective, which one would you choose?

Marcus (Docker Swarm): Being a consultant, I'd say it depends. You have to look at your team
size and growth, whether it's a very brownfield project or a greenfield project, and what you can
do. For now, if you don't really want to spend money on Docker Datacenter, for example, then
for small to medium deployments Docker Swarm is super excellent and super easy. Regarding
feature sets, and deciding whether you'd like to build more features into Docker Swarm or
something like that, I usually ask any client considering it: unless you want to contribute and
have a team for it, is that your business goal for now? Or do you have something else worthwhile
that you can do in the meantime?

What I've found from the evolution of all these schedulers is: if you wait two or three months, the
feature is going to be there, it's going to be maintained by those teams, and there is no reason to
reinvent the wheel yourself.

Martin (Mesos): I would say it's a journey. Obviously you want to write a few microservices, put
them into a Docker image and deploy them on a server. If you're just running a single handful of
applications, you might just run them on a single server. Then over time you'll probably realize
that things start to break - you need to keep that server up and running - and that's when you
have to start looping in orchestrators. But by that time you'll know a lot more about your
application and what type of needs it has, and I'll say at that point it's a lot easier to choose an
orchestrator. I wouldn't necessarily do it from day one.

David (Kubernetes): I'd say from a business perspective, the choice of scheduler is way down
the line. The first thing you have to look at is the applications that you're working with. Are they
cloud-native? Are they containerized? Are they in a position where you can transfer them into
containers? How is the configuration set? How is your data managed - the persistency of your
data? My point is there are a lot of questions that need to be answered, from a total-cost
perspective, before going on the journey of using one of these scheduler solutions.

I'd say that each of these products has a different way of maybe helping and simplifying the way
you onboard those applications; that's where a lot of the graft is going to be initially. Secondly,
with all of these products there'll be a management overhead of: how do we translate our old
processes and tools into this new world of containers? We have our old Nagios system and now
we've got to move to our new Outlyer system which manages containers - how do we do that,
how do we work with this tool? How do we work with our deployment pipeline? How do we work
with our build pipeline? These things are completely agnostic of the schedulers themselves, but
they need to be answered from a business perspective. I'd say when you actually get to a point
of understanding your application better, understanding the needs of your application and of
your departments, and how they can interact with the cluster, you've got a lot of really good
questions that will help you narrow down the choice of scheduler at that point.

I'm new to containers, and I understand there are some particular workloads where containers shine and deliver benefit. What if I wanted to run back-office infrastructure on containers? Do you think it's already time for that?
Martin (Mesos): It depends. If you're talking about your big Oracle database, I'll say no.

Audience Member: How about Jira, something like Jira?

David (Kubernetes): Yeah, we run Jira on our Kube cluster.

Audience Member: Are there other benefits, or is it more work than it's actually worth to run
back office?

Marcus (Docker Swarm): If you want to bring up Jira or Confluence manually, you have to install
Java, you have to have this much RAM, you have to go click the links, go everywhere - it's a
mess. Or you can just go to Docker Hub, bring it up, and that's it. So, Jenkins, Jira, Confluence -
I even often see now that it's popular to have laptop scripts where you bootstrap your laptop:
you would just have Docker containers for your small apps, and that's how you would be
building your pipelines and everything else. As long as you can containerize the app, it's fine.
Probably some very ancient software won't do that.
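
For example (a sketch using images that are on Docker Hub; the ports and volumes are the bare minimum, not a production setup):

```bash
docker run -d -p 8080:8080 -v jenkins_home:/var/jenkins_home jenkins/jenkins:lts
docker run -d -p 8081:8080 -v jira_data:/var/atlassian/application-data/jira \
  atlassian/jira-software
```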

David (Kubernetes): Just adding - essentially similar to what I said before - if it's just one
application, there's a lot of overhead. There's a lot of overhead in learning how to build good
containers, how to deploy those containers, what to do if the master server goes down. So if
you're talking about one back-room application, then probably not. But if you're talking about
having, say, fifty different applications with different types of dependencies, then suddenly, yes,
it does actually start to make sense. Because now you have a common platform on which you
can commit change, process change, and update: one method of updating servers, one method
of managing applications, one method of monitoring.

It really is an IT/business decision as to when the overhead of learning and implementing a
scheduler such as ours outweighs the day-to-day struggle of actually having to maintain the
current infrastructure.

To build a prod-like PoC, how many nodes do you need, and are there any glitches when you're doing a PoC for your products?
Martin (Mesos): What do you want to achieve in your PoCs?

Speaker: A production representation of what I'd need to do. Can I run everything on three
nodes? Or do I need twenty nodes to build a Kubernetes cluster?

David (Kubernetes): How many nines do you need?

Speaker: Let's go five nines.

David (Kubernetes): I can make it as reliable as you want, but it'll cost you. I think it's just a
question of: it's a tool like any other infrastructure tool. If you were going in and building any
other infrastructure, you'd need to start to look at the components from an availability point of
view. If you're using Kube or any of the other schedulers, then it's just other areas that you're
looking at. There are different challenges with each of those technologies, but the questions
that you ask yourself are still the same: do you need availability within a region? Do you need
provider availability? Those are the questions which will drive getting out a piece of paper and
working out exactly what you need available. Then, and only then, when it's on paper - because
paper's cheap - you start pushing that into: okay, which one of these solutions will enable me to
reach that end goal in the most cost-effective way for my applications and my business? It may
be Kube, it may be Docker Swarm, it may be Mesos. I think you need to go through that process
of understanding your application and what you're trying to achieve with it, in terms of
availability and cost constraints, before you can review each of us.

Speaker: If I want to try it out on all of those products just to see which one is better for my
application, how quick would that be? Or how easy would that be, and how much would it cost
me just to get that small PoC done for a client?

David (Kubernetes): Again, I'd say it's dependent on the product. If you've got something like
Cassandra, which can run on a single node with a single hostPath in Kube, where you're just
essentially writing out the data to a local directory - really quickly. Then the time to convert that
into a production-ready cluster is obviously quite large. Whereas if you've got something like
Redis or Memcached, which is just memory-persistent, then you load up one Memcached
server, and there's that. I'm not saying it's as simple as this, but to go from that to a highly
available situation, you're just spinning up the replicas from one to six. There are obviously
going to be configuration settings; there are going to be a lot of things. But generally you're
getting 90% of the way to making it highly available just by spinning up more replicas. It is very
application-dependent.
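
The one-to-six move David describes is, in Kube terms, a single command (a sketch; it assumes a Deployment named memcached already exists):

```bash
kubectl scale deployment memcached --replicas=6
kubectl get pods -o wide    # the new replicas get spread across the nodes
```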

Speaker: To avoid watching you squirm a little bit more, our answer is twelve: three etcd, three
minions, three ingress, two OAuth, one master.

DOXLON is sponsored by Outlyer and LinuxRecruit - a specialist consultancy within the open
source industry, introducing great engineers to great organisations across the UK since 2011.
