Cloud Connect Chicago, ProfitBricks Podcast with Bob Rizika, U.S. CEO and Achim Weiss, CEO Germany

18 September 2012 Posted by Paul Burns

This podcast was recorded with ProfitBricks at the inaugural Cloud Connect event in Chicago.

In just 16 minutes, Bob Rizika, U.S. CEO of ProfitBricks, and Achim Weiss, CEO of ProfitBricks Germany, provide a look into the future of cloud-based infrastructure-as-a-service with a service that is available today.

What makes an advanced IaaS offering today?  Here are a few hints:

  • Scale-up servers with on-the-fly CPU and RAM elasticity
  • By-the-minute pricing
  • InfiniBand for 80 Gbps throughput per server
  • Software-defined networking
  • Replicated RAID 10 storage

Listen now to learn how public clouds can achieve breakthrough performance by using the right architectures and technologies!


Paul: Okay, this is Paul and I am at the inaugural Cloud Connect event in Chicago and I’m always looking out for new and interesting technologies. As many of you know cloud computing has been based on scale-out in many ways. I am talking to a company here today called ProfitBricks and they are changing things a little bit in the cloud world and adding more of a scale-up approach.

But why don’t I introduce you to some of the folks at ProfitBricks and they can give you some deeper perspectives on this. First I’ll start with Bob Rizika who is the US CEO of ProfitBricks. Do I have that right Bob?

Bob: Yes. Absolutely Paul.

Paul: So maybe you could tell us just a little bit about yourself, or your role, or dive into the company, however you’d like to go.

Bob: Sure. Well just on a high level really quick, some background on the company. I’m CEO of the US operations, but the company was started 2 ½ years ago in Germany. Achim is the parent company CEO, and Andreas Gauger, who co-founded the company with him, is the CMO. They have a rich background in the technology field. They started a company called 1&1, which most people know. They built it to over a billion and a half in sales, left a couple of years ago to do venture investing, and then Achim had an idea for a second-generation infrastructure-as-a-service company.

Our company ProfitBricks was started by Achim and Andreas 2 ½ years ago and the whole point really was “how do you address the next generation of needs in infrastructure-as-a-service?” You talked about scale out versus scale up. Horizontal versus vertical scaling. There are a bunch of features for me to talk about, but Achim’s vision was: if you think of how users use technology today, infrastructure-as-a-service, they’re sort of stuck in this environment where they can’t get large vertical-scale instances. Maybe they can go up to 10 cores and 20-30 GB of RAM, but it stops at that point. And if you look at what people are using technology for, it’s for traditional applications, or for performance on the web, and most of those services that they are offering use databases. And databases traditionally want to scale vertically. They don’t want to scale horizontally. But because first-generation infrastructure services only scaled horizontally, all of those companies had to slice up databases into small pieces, scale horizontally, and it’s actually really inefficient.

So Achim’s vision was “how do you allow customers to scale vertically?” And by vertically, with our first introduction here we are introducing 48 cores that you can scale to vertically. You can start with one core and on the fly you can scale that up to 48 cores. On the fly you can start with 1 GB of RAM and go all the way to 192 GB of RAM, and essentially unlimited storage. On top of that, at Cloud Connect we are introducing one of our key new features, and that is the ability to literally add CPU or add RAM on the fly. With 100% of our competitors, if you have an instance and you want to increase the cores or the RAM, you have to shut that instance down, add the core or RAM or storage, and then restart the instance and continue. But the crazy thing is that if you think about that, when do you want to add performance? Well, when your customers are hitting your service. But that’s the one time you don’t want to take the instance down to add cores and RAM. So we are solving that whole problem. We have a whole lot of other great features that are going to come out in the future around this vertical scale and around how we help you deliver high QoS and the performance that your customers are going to need.

Paul: That is a great introduction and overview. Are you targeting any particular type of applications? I know you mentioned applications with databases that want to scale up. Those are good ones. Are there others?

Bob: Sure, we actually bill by the minute. And we are one of the first companies to truly bill by the minute, because the whole philosophy is “pay only for what you need to use, not what you think you are going to need to use at some future time.” And we think we are going to lower everyone’s cost significantly with that philosophy. But from a market segmentation standpoint we are focused on a couple of different markets. One is e-commerce. Think about all of those websites that you go to where you have to look up “Is this product available? Is it available in that color? When can I get it delivered?” Those are all examples of database lookups, so that is a perfect example for us. Think of a company like Orbitz, which is doing high volumes of lookups in databases, but it peaks at certain times of the day and hits its lows at other times of the day. So it’s another perfect example.
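As an aside, the savings from per-minute billing are easy to quantify. Here is a minimal sketch with a hypothetical rate (the price and the 75-minute run are illustrative, not ProfitBricks’ actual numbers):

```python
# Toy comparison of per-minute vs. per-hour billing for a bursty workload.
# The rate below is hypothetical, not an actual ProfitBricks price.

HOURLY_RATE = 0.60               # dollars per instance-hour (assumed)
MINUTE_RATE = HOURLY_RATE / 60   # same rate, metered per minute

def hourly_cost(minutes_used):
    """Bill in full hours: any partial hour rounds up."""
    hours = -(-minutes_used // 60)   # ceiling division
    return hours * HOURLY_RATE

def minute_cost(minutes_used):
    """Bill only for the minutes actually used."""
    return minutes_used * MINUTE_RATE

# A 75-minute test run costs two full hours under hourly billing:
print(round(hourly_cost(75), 2))  # 1.2
print(round(minute_cost(75), 2))  # 0.75
```

The gap widens further for workloads like the test-and-development case below, where usage spikes and collapses several times a day.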

We are also working with test and development companies. I used to run a company that I sold to Juniper and we spent four million dollars on infrastructure so that our 65 software engineers could come in in the morning, write their code, put it on servers, test it, and pull it down. But I wasn’t using the infrastructure at night, and at nine in the morning maybe I used 20% of the infrastructure, and in the afternoon I hit a peak of 95% and then back down. So it’s another perfect example of where they can scale up and scale back down, pay by the minute and truly optimize what they are doing in a second generation infrastructure service. The gaming industry where again you’re scaling up, scaling down, there is a lot of queries going back and forth, that’s another great example.

And then we are also doing startups. And startups are a little bit different. We want to get the word out. We just announced yesterday that we offer 220% of Amazon’s performance on the UnixBench benchmark. From the IO perspective we’re 190% faster than Rackspace. So, for startups that are trying to get their new technology out, who need performance, who are thinking about the cost of infrastructure, we are also a great solution. And another time we can talk about our flexible networking advantage, which is a whole other side of the business. But these are just a couple of the features that we think of as infrastructure-as-a-service 2.0.

Paul: Okay that’s great. And since we’ve got Achim here maybe I can ask him some additional questions about the technology and how it works. Achim — maybe you could introduce yourself so we can get your role correct here.

Achim: Yeah, okay, I am Achim Weiss. I’m CEO of the whole operation. My background is CTO of 1&1 for the last 15 years, which had 800 developers at the end. I’m pretty knowledgeable about highly scalable infrastructures and data centers and stuff. So I put that knowledge to use and redesigned the infrastructure-as-a-service space as we would like to have it. We sat down and we didn’t have any legacy hardware, software, or anything. We could just start from scratch. So we thought “how would you do this for the next ten years? Let’s see what the key points are,” and we said “Okay, the first thing is, we need a really, really fast network,” because we don’t have the storage locally attached to the CPU cores anymore like Amazon, or most other guys, are doing. They have a pizza box, they slice it in four pieces and sell it as virtual servers; that’s the unit size you get and that’s it. And if you need another one you have to move to another machine and you get another slice of a different-sized machine and that’s it. So we said “Okay, the first thing we need is really, really fast networking.” We decided to go with InfiniBand, which is known from the HPC (high-performance computing) area. It’s not so widely used so far in hosting, but I think that is going to change because it is a great technology. Each card has 40 Gbps of bandwidth. We have two cards on each server, so we have a total of 80 Gbps available for networking and storage access. So we are really able to scale out for IO requests, and that is one of the fundamental design decisions that we took.

On top of that we designed a virtualization layer for the networking, so for the customers it looks like just a normal Ethernet network, but it’s way faster and has way lower latency. And as Bob mentioned, the scaling is not just horizontal. Of course we do horizontal scaling as well. We have an API, the same as Amazon is doing. But we also have vertical scaling, and you can slice up your machine as you like. We offer 48 cores right now and will offer 64 very soon, and up to 256 GB of RAM, so you can have any size of deployment that you really need. You can have it vertically; you can have it horizontally. And I think that is the core component of the design of our system.

Paul: And do you have some sort of software defined networking layered on that?

Achim: Yes. We have 10 kernel developers that did nothing else but the software-defined networking for the last 2 ½ years. So we have totally flexible networking. You can design whatever you like. You’re not bound to every server having one public IP address and one private, with basically one subnet and that’s it. Over the last 20 years we learned in the Internet business how to do best-practice designs. You have your front-end servers, you have your firewalls, and you have a separated internal network, your backend servers, a management network, and so on. And you can do this with us as you like. We have a graphical user interface that we didn’t mention so far. It’s basically like a Visio editor. You just design your data center from a sheet of paper, virtual paper really, in your browser, and you can design your networks as you like. You have the same flexibility as you would with your own hardware, your own switches, and your own cables. So you can make use of all the best practices that we developed in the last 20 years in the Internet business, and you don’t have to throw away all that knowledge and start with a flat network and that’s it. So basically it’s a very flexible SDN (software-defined network).
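To make the “design your data center like you would on paper” idea concrete, here is a toy reachability model of the classic segmented layout Achim describes. The node names are invented and ProfitBricks’ actual tool is graphical; this is only a sketch of the topology concept:

```python
# Toy graph model of a classic segmented data-center design:
# internet -> firewall -> frontend LAN -> backend LAN.
# Node names are invented for illustration.

from collections import deque

links = {
    "internet": ["firewall"],
    "firewall": ["frontend-lan"],
    "frontend-lan": ["web-1", "web-2", "backend-lan"],
    "backend-lan": ["db-1"],
    "web-1": [], "web-2": [], "db-1": [],
}

def reachable(src):
    """Return all nodes reachable from src by following links (BFS)."""
    seen, queue = {src}, deque([src])
    while queue:
        for nxt in links[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# The database is reachable only through the firewall and frontend LAN:
print("db-1" in reachable("internet"))  # True: one path, via the firewall
links["firewall"] = []                  # sever the firewall's inside link
print("db-1" in reachable("internet"))  # False: backend is now isolated
```

The point of the SDN layer is that a change like the last line takes seconds rather than a trip to the patch panel.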

Paul: That sounds like a great foundation for the whole offering — that is, the network — by choosing InfiniBand, getting that low latency, getting that super high throughput with the 80 Gbps per server, and then layering the SDN capabilities on top. That is another area where I commonly hear people complain about public clouds: they have trouble configuring the network the way that they would like. So SDN gives you, I guess I can’t say unlimited configuration, but pretty much anything you can do that’s not going to result in circular routing or…

Achim: Yes, I wouldn’t know any use case which you could not set up on top of our infrastructure, to be honest. You can set up your own broadcast domains, private lines, segments, whatever you like. It’s just like having your own network engineers, except it takes a few seconds to implement. You can change everything around in a matter of a few seconds. So you’re really, really flexible there.

Paul: And so that covers the network pretty well. What about on the server side as Bob was talking about. Not only can you scale up to the large numbers of cores and high quantities of RAM but you can do so dynamically. How did you pull that off?

Achim: That is pretty tricky. We use KVM as the hypervisor and we modified it heavily in many places, and we used SeaBIOS as the BIOS and that had to be heavily modified too. So in the end we basically pretend to the operating system running in the virtual machine that we are doing hot plugging, and this works really nicely right now. You can really add CPU and RAM on the fly, and in a few months we will also have the feature where you can take the RAM and CPU cores out again. Then we automate this and call it a “breathing instance,” so it really goes up and down with the demand, with the load. We can measure the CPU load, we can measure the RAM pressure the virtual machine is having, and according to that we can go up and down dynamically. So basically we could call it utility computing. You know, just use what you need, and it goes up and down as you need it, and you only pay for what you really need. It’s not fixed sizes anymore. It just changes with your load, very dynamically, up and down.
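The “breathing instance” idea can be sketched as a simple control loop. This toy version uses invented thresholds and a one-core step size; it shows the shape of the logic, not ProfitBricks’ actual implementation:

```python
# Toy "breathing instance" controller: hot-add cores under high CPU load,
# remove them when load drops. Thresholds and limits are illustrative.

MIN_CORES, MAX_CORES = 1, 48

def next_core_count(cores, cpu_load):
    """cpu_load is average utilization across current cores, 0.0-1.0."""
    if cpu_load > 0.80 and cores < MAX_CORES:
        return cores + 1          # hot-add one vCPU
    if cpu_load < 0.30 and cores > MIN_CORES:
        return cores - 1          # hot-remove one vCPU (the future feature)
    return cores                  # within the comfort band: do nothing

# Simulate a traffic spike and its decay, one measurement per interval:
cores = 4
for load in [0.95, 0.92, 0.85, 0.50, 0.20, 0.15]:
    cores = next_core_count(cores, load)
print(cores)  # 5: grew to 7 during the spike, then shrank back
```

A real controller would also dampen oscillation (for example, requiring the load to stay out of band for several intervals before acting), but the metered result is the same: capacity, and therefore cost, tracks demand.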

Paul: Last question I guess. How about on the storage side? Is there anything that you do special there?

Achim: Well we have developed a replicated RAID 10 storage device, which is not so special I guess right now, but it’s really fast compared to the other guys. We measure everybody else of course, and we are way ahead on throughput. So I think that is a great device so far. What we are working on is what we call NBD. It’s a really distributed block device, distributed over hundreds of thousands of servers, with all the capabilities you would want to have, like snapshotting. You can do zillions of snapshots, and all of the snapshots are the same speed and re-writeable. And thin provisioning, and everything you would expect from a new, very sophisticated storage device. So that is something that we are working on and will be ready to release next year. For the customer, the impact will be that it will be even faster on the backend, and it has a lot of other advantages. But I think the storage we offer right now is really, really capable and it’s proven to be much faster than the competitors’.

Paul: And you certainly have the network to make sure that there is no contention getting to and from the storage.

Achim: That’s a point. If you think about SSD storage, for example, Amazon just announced a new server with 2 terabytes of SSD storage. It costs about $3.50 an hour, which is a lot of money. And you have 2 terabytes, which is fine — but there are people around who need more than 2 terabytes and there are people around who don’t need that much. They would be happy with 100-200 GB and say “this is for my index of the database,” but they have to pay for that huge server. And that is because Amazon cannot split it, because it is locally attached and because they don’t have a fast network. For SSD you really need fast, fast access and very low latencies. Otherwise you destroy the performance of the SSD. It doesn’t make sense to have an SSD storage array with 5 million IOPS and use a 1 Gbps network to access it. So, in our opinion, InfiniBand is more or less the only technology that is currently capable of handling the speeds that modern SSDs provide. And on the InfiniBand roadmap right now, there are 56 Gbps cards out there, 112 Gbps cards are specified and will be available next year, and they are working on a 300 Gbps standard right now. 300 Gbps per single port, which is amazing. The Ethernet guys are talking about moving to 40 Gbps, but it may be months or even years until it becomes available or cheap enough to be feasible for a mass market. And we are way ahead of that.
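Achim’s point about SSDs outrunning the network is easy to sanity-check with back-of-envelope arithmetic. The 5 million IOPS figure is from the conversation; the 4 KiB block size is my assumption of a typical I/O size:

```python
# Back-of-envelope: what link bandwidth does 5 million IOPS need?
# 4 KiB per I/O is an assumed typical block size; overhead is ignored.

IOPS = 5_000_000
BLOCK_BITS = 4 * 1024 * 8              # 4 KiB per I/O, in bits

needed_gbps = IOPS * BLOCK_BITS / 1e9  # raw payload bandwidth required
print(round(needed_gbps, 1))           # 163.8 -- far beyond a 1 Gbps link
print(round(needed_gbps / 80, 1))      # 2.0  -- even 80 Gbps is ~2x short
```

So a 1 Gbps Ethernet link is more than two orders of magnitude too slow for such an array, while an 80 Gbps InfiniBand pair is within a small factor of it, which is the core of the argument.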

Paul: Well thanks so much for spending the time today guys. It’s been great to learn more about your solution.

Achim/Bob: You’re welcome. Thank you for having us.

Paul: Thanks
