In this episode, we chat with Luca Ravazzolo, product manager for cloud and containers, about Kubernetes - the most popular container orchestration platform today. Kubernetes (K8s) provides a rich set of features for deploying, managing, and maintaining containers across clusters of machines. Luca also talks a bit about the InterSystems Kubernetes Operator and the future role of Kubernetes within InterSystems products.
For more information about Data Points, visit https://datapoints.intersystems.com. To try InterSystems IRIS today, head over to https://www.intersystems.com/try and launch your instance!
TRANSCRIPT:
Derek Robinson 00:01 Welcome to Data Points, a podcast by InterSystems Learning Services. Make sure to subscribe to the podcast on your favorite podcast app such as Spotify, Apple Podcasts, Google Play, or Stitcher. You can do this by searching for Data Points and hitting that Subscribe button. My name is Derek Robinson, and on today's episode, I'll chat with Luca Ravazzolo, Product Manager for cloud and containers at InterSystems, about Kubernetes.
Derek Robinson 00:39 Welcome to Episode Two of Data Points by InterSystems Learning Services. My name is Derek Robinson. As you may have heard in Episode One, we're excited about the launch of this podcast, and we've already released three episodes for you to check out. In this episode, I'll be talking Kubernetes with Luca Ravazzolo. Luca is a Product Manager here at InterSystems, focused on the area of cloud and containers. He brings a ton of experience to the table. He celebrated 30 years at InterSystems this past fall. What you're going to hear about Kubernetes really builds off of the concept of using Docker containers. I'm sure we'll have episodes covering Docker concepts in the future, but for now, definitely browse our learning catalog for starter information about containers, if you're interested in them. Kubernetes is one of these newer technologies that really allows you to take your container approach to the next level. Rather than diving into those details, I'll leave the real explanation to the expert. So here's my interview with Luca.
Derek Robinson 01:38 Alrighty. So welcome to the podcast, Luca Ravazzolo, Product Manager for cloud and containers here at InterSystems. Luca, how are you doing?
Luca Ravazzolo I'm doing very well, Derek. How are you doing?
Derek Robinson Good. So we're happy to have you on the podcast here today, and we're going to be talking about a pretty cool, fairly new cloud topic: Kubernetes. I know that you've done some stuff on this at our Global Summit here at InterSystems, and there's a lot of cool stuff to talk about. So why don't we start with, for someone who might not understand what this technology is: what is Kubernetes, in brief?
Luca Ravazzolo 02:06 Well, Kubernetes is a platform, first of all. What does that mean? It means that it is a full suite of software, if you like, that can hold up your application. What does that mean in a little more detail? Well, it allows you to define how your application is going to run, and consider that even the cloud service providers, like Amazon AWS, Google, and Microsoft Azure, all have an implementation of, or support for, Kubernetes. But why is that interesting? Well, because of two factors, I think. One is that Kubernetes was born to make sure that your application runs all the time. That's its job. It keeps monitoring that all the workloads that you put up there are actually running all the time. And if anything dies, it picks it up again. So it looks at your definition and says, that's what you want me to run: you want me to run your 32 instances of this app. And if only 31 are running now, it's going to bring up that 32nd one and make sure that it does run. And the second part of why Kubernetes is interesting is that it allows you to define all the pieces of an application. So, let me just say, if you have the front-end pages of an application, that's interesting, but that's only a part of it. You need some business logic behind it, right? And if you are a developer, the business logic is still only part of it. You need somewhere to store information, for example, on the purchases that somebody is making, right? So you need a database on the back end.
And then, if it's Black Friday, what do you do? Well, you need to make sure that you can sustain the new, bigger workload that you have. And so you need to dynamically create more web servers, et cetera. What Kubernetes can do is all that; not only that, it allows you to define all of those pieces, all those components, even load balancers and web servers and DNS engines, inside the platform itself. And so it really allows you to define the application, how the application has to run, and that's very powerful; we do not have anything else like that in the market right now.
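To make that declarative idea concrete, here is a minimal sketch of the kind of definition Kubernetes reconciles against. The deployment name, labels, and image below are placeholders for illustration, not anything from the episode:

```yaml
# Minimal Kubernetes Deployment: a declarative statement of the desired state.
# Kubernetes keeps comparing reality to this spec and restarts anything that dies.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: shop-frontend          # placeholder name
spec:
  replicas: 32                 # "you want me to run 32 instances of this app"
  selector:
    matchLabels:
      app: shop-frontend
  template:
    metadata:
      labels:
        app: shop-frontend
    spec:
      containers:
        - name: web
          image: example.com/shop-frontend:1.0   # placeholder image
          ports:
            - containerPort: 8080
```

If one of the 32 pods dies, the controller notices the mismatch with the spec and starts a replacement.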
Derek Robinson 04:22 Yeah, that's really cool. So I think — and we won't cover this in this episode — but one of the precursors to this technology is really understanding containers, Docker containers, and that kind of deployment, right? Me personally, I've worked with Docker containers a lot, but really only in my local environment. So it sounds like one of the biggest appeals of Kubernetes is this really enterprise-level deployment of containers. When you're looking to do more, like you said, if you have, you know, 30 instances of something or a bunch of load balancers, really mapping out the whole configuration of your application environment that uses containers sounds like the biggest advantage of Kubernetes.
Luca Ravazzolo 04:59 Absolutely. And you really hit the nail on the head there, right? So we've all worked with containers. They're great for developers to just, you know, get the code, configure all the dependencies and the libraries that you need, run it on your laptop, and that's great. But what happens when you actually start defining an application which, as you said, needs a lot more pieces around it? Well, there's a nice little tool that Docker built, which is called Docker Compose. So you can work with multiple containers, but you're still confined to running those containers within one single machine, right? Your laptop typically, or maybe a high-end server if you want to test some performance issues. But what happens when you go to the cloud, when you go to a data center where you have, you know, many nodes, many VMs, and you need to scatter your workload across many of them so you can take advantage of all the CPU and all the memory that are available?
Well, then you need to start installing, you know, things like network overlay layers, and then, how do you know if those containers are running properly or not, et cetera? And that's what Kubernetes does for you. So you prepare those nodes, and it creates this overlay network for you. It handles literally everything that has to be done in terms of networking and DNS naming and all those complicated parts. And that's why it's very powerful. One other strong characteristic, let me add, that just came to my mind, is that as you define your application within the Kubernetes platform, that definition is totally portable. So if you're working in AWS today, you've got this YAML definition. I know you're looking at me strange, you know? YAML, we all love to hate that. But you know, it works, right? For now. So that same definition, you can bring it on site, on prem, maybe on some bare metal because you want, you know, more performance. And that same definition will run there too, with the Kubernetes platform orchestrating everything. And that's very, very powerful. So an organization gets portability, they're not locked into any cloud, and they get a platform that manages their workload. That's very powerful.
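As a hedged illustration of that portability, here is a Service that could sit in front of the placeholder Deployment sketched above. Nothing in it is cloud-specific; the same file could be applied unchanged, for example with kubectl apply -f, to a managed cluster on AWS or Google or to an on-prem cluster:

```yaml
# Service in front of the placeholder shop-frontend Deployment.
# The same definition works on EKS, GKE, or an on-prem cluster.
apiVersion: v1
kind: Service
metadata:
  name: shop-frontend
spec:
  selector:
    app: shop-frontend         # routes traffic to pods carrying this label
  ports:
    - port: 80                 # port exposed by the Service
      targetPort: 8080         # container port from the Deployment sketch
  type: LoadBalancer           # on a cloud provider, this provisions a load balancer
```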
Derek Robinson 07:13 Right. Scaling up some of those benefits of containers, like portability and efficiency, across a whole orchestrated environment. So taking everything you just said and moving into maybe an example or two: as you've talked with customers of InterSystems, or other people that you've seen at conferences or in your networks, what are one or two cool use cases you've seen where Kubernetes has really helped take someone's application environment to the next level and really leverage all these things that you're talking about?
Luca Ravazzolo 07:46 Yeah, I've got a couple of examples that really spoke to me. One is a couple of developers who started to work with it, and they said, this is great, you know, but I'm a full-stack developer, and typically I want to test on a medium-size environment out there, you know, six to 12 nodes. So the easiest thing is just to go to the cloud. They provision the infrastructure and they say, well, everything is in containers now, so how do I do that? Well, the easiest thing was for them to just run Kubernetes in the specific cloud: GKE for Google, for example, or EKS in AWS. And then all of a sudden, they have the possibility to really run the application, all the components, even components that they did not develop themselves. They were just pulling containers, you know, let's say, on the back end, the new version of the database with the new schema that the organization had just developed. And this developer had just built, for example, some new business logic, and he was putting everything together on several high-end machines. He was really testing it properly, instead of running everything on his laptop or trying to configure everything manually; just one single YAML definition with everything configured, and it was up and running in a few minutes. So that was the single developer, who really wanted to monitor and follow the workload, the data coming through, where it was going, et cetera. And the other one was some other customers that are very close to going to production on Kubernetes, and they were just shocked that sometimes they left the Kubernetes servers up and running, and then in the morning they'd come in and, you know, the system had fallen over, but they wouldn't have known if they didn't go and have a look at the logs – it had self-healed. You know, one of the two instances had died, but the application was up and running. And they were just shocked, you know: no pager going off; whatever you want up and running is up and running all the time. And that's part of its job: this controller keeps checking that everything is consistent with your definition, which is pretty cool.
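For readers wondering how "one single YAML definition with everything configured" can work: a single file may contain several resources separated by ---, applied in one step. A short sketch, reusing the placeholder names from the earlier examples:

```yaml
# Several resources in one file, separated by "---",
# e.g. applied together with `kubectl apply -f app.yaml`.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: shop-frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: shop-frontend
  template:
    metadata:
      labels:
        app: shop-frontend
    spec:
      containers:
        - name: web
          image: example.com/shop-frontend:1.0   # placeholder image
---
apiVersion: v1
kind: Service
metadata:
  name: shop-frontend
spec:
  selector:
    app: shop-frontend
  ports:
    - port: 80
      targetPort: 8080
```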
Derek Robinson 09:57 Yeah. Cool. And I want to transition to a couple of last points about how it relates to InterSystems IRIS. But one thing before we move on: I just want to emphasize something you said at the end there, which is that self-healing nature of Kubernetes. I think you can't emphasize that enough as one of the advantages: in that use case, you come in and you didn't even realize something went wrong, because it really has this ability to fix itself with some of those failovers and bring up a new node in place of the one that failed. So I think it's a good thing to emphasize there.
Luca Ravazzolo 10:25 Yeah, absolutely. The self-healing part is very powerful. And the other one, of course, is that you can auto-scale workloads automatically. So you can set thresholds and say, "Hey, Kubernetes, if these two particular nodes go above, you know, 90% CPU, for example, you really need to do something for me. So spin up another couple of these nodes"…and it can do that for you. So you can set these rules as part of your application, so that when Black Friday comes, you're prepared, but you don't have to panic.
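One common way to express this kind of rule is the Horizontal Pod Autoscaler, which scales on pod-level metrics rather than node-level ones. A minimal sketch, again using the placeholder Deployment name from earlier; the thresholds are illustrative:

```yaml
# Autoscaling sketch: add replicas when average CPU utilization exceeds 90%.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: shop-frontend
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: shop-frontend        # placeholder Deployment to scale
  minReplicas: 2
  maxReplicas: 32              # upper bound for a Black Friday spike
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 90   # scale out above 90% average CPU
```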
Derek Robinson 10:55 Exactly. Cool. So that's really exciting. Moving to the last portion, which is shifting to our InterSystems IRIS users that are listening, right? So whether that's InterSystems IRIS, or even older InterSystems products that people might move to IRIS from: what should people know about what InterSystems IRIS is doing to work with Kubernetes?
Luca Ravazzolo 11:16 As you said earlier on, you know, it really is an orchestrator for containers. So by the mere fact that we have IRIS in a container, we can run it within a Kubernetes cluster, an orchestrated platform. But there's more to it than that, because things can be complicated to define. I've got this little YAML template, but I might want to put in some rules. For example, I might want an affinity rule saying that I want to run my IRIS instance, because it's very important to me as a back-end database, on that particular node that has 32 cores. So you can put in all these types of rules, but then you get into the specific semantics of InterSystems IRIS, like: I want, for example, a mirror pair. Well, Kubernetes doesn't know anything about our mirror pairs or our ECP communication. And so what we've done is build an InterSystems Kubernetes Operator that allows you to define all these particular semantics that we have with our product. You just define, with the InterSystems Kubernetes Operator, the particular configuration that you want to run, and it just goes and configures everything for you. And that's very powerful.
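The affinity rule Luca mentions can be expressed in plain Kubernetes like this. This is a generic scheduling sketch, not the syntax of the InterSystems Kubernetes Operator itself, and the node label and image are hypothetical:

```yaml
# Generic node-affinity sketch (not IKO syntax): require scheduling onto nodes
# that carry a hypothetical label marking the big 32-core machines.
apiVersion: v1
kind: Pod
metadata:
  name: iris-data
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: node-class         # hypothetical node label key
                operator: In
                values:
                  - high-cpu-32core     # hypothetical label value
  containers:
    - name: iris
      image: example.com/iris:latest    # placeholder image reference
```

As Luca describes, the operator then adds InterSystems-specific semantics, such as mirror pairs, on top of this kind of base Kubernetes configuration.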
Derek Robinson 12:23 That's great. So lots of stuff coming. Last question here. Taking a step back in general: as you look forward at the possibilities with Kubernetes, what excites you the most, maybe untapped potential, or how you see this going forward into the future?
Luca Ravazzolo 12:39 Well, I think we're just at the beginning of it, right? If you look at the GitHub repo, since 2015 when…or was it 2014? Well, anyway, a few years back when Google released it, the Kubernetes ecosystem has been really exciting; even on GitHub, you know, you have more than 300,000 people working on it. And they're even divided into special interest groups. So if people are interested, they should go there, participate, and give opinions on storage, security, all kinds of stuff. I mean, we're talking at a really high level here, but it really is a full platform. And so I think what we're going to see in the future is a lot more Kubernetes managers, just like some of the work that AWS and Google and Azure have done, and a lot more automation, a lot more monitoring, and a full ecosystem that allows you to run things in an even more automated way than now. So I think we're just at the beginning, and the portability that it offers is just fantastic. So none of us are locked into any specific solution.
Derek Robinson 13:50 Yeah. Very exciting stuff. So Luca Ravazzolo, thank you so much for joining us.
Luca Ravazzolo 13:54 Thank you, Derek. It has been a pleasure. Yeah, see you soon!
Derek Robinson 14:01 Thanks again to Luca for sitting down with us and giving us some really interesting stuff there on Kubernetes. One little side note that might be helpful for those of you looking up content on Kubernetes, and something that tripped me up a little bit when I was first researching it: it's often stylized or abbreviated as K8s in written form. As far as I can tell, that's pretty simply swapping in an 8 for the eight letters in the middle of the word Kubernetes, between the K and the S. Works for me, but if anyone knows more reasoning behind that, leave some comments for us in the Developer Community to enlighten us on that abbreviation. So hopefully you enjoyed episode two and our conversation with Luca. Remember, make sure to find us on your favorite podcast app and subscribe so that you never miss an episode when it's released. Thanks for listening, and we'll see you next time on Data Points.