
[repost] Should you build your next website using 3tera’s grid OS?

Update 2: 3tera has added Dynamic Appliances, which are “packaged data center operations like backup, migration or SLAs that users can add to their applications to provide functionality.”
Update: in an effort to help cross the chasm of how to start building a website using their grid OS, 3tera is offering their Assured Success Plan. The idea is to provide training, consulting, and support so you can get started with some confidence you’ll end up succeeding.

If you are starting or extending a website you have a problem: what technologies should you use?
Now there are more answers to that question than ever. One new and refreshingly innovative answer is 3tera‘s grid OS. In this podcast interview with Bert Armijo from 3tera, we’ll learn how 3tera wants to change how you build websites.

How? By transforming the physical into the virtual and then allowing the virtual to be manipulated as if it were real. Could I possibly be more abstract? Not really. But when I think of what they are doing that’s the mental model I see whirling around in my mind. Don’t worry, I promise we’ll drill down to how it can help you in the real world. Let’s see how.

I think of 3tera’s product as like staying at a nice hotel. At home you are in charge. If something needs doing you must do it. If something breaks you must fix it. But at a nice hotel everything just happens for you. Your room is cleaned, beds are made, outrageously expensive candy bars are replaced in the mini-bar, food arrives when you order it and plates disappear when you are done, and the courtesy mint is placed just so on your pillow. You are free to simply enjoy your stay. All the other details of living just happen.

That’s the same sort of experience 3tera is trying to provide for your website. You can concentrate on your application and 3tera, through their GUI on the front-end and their AppLogic grid operating system on the back-end, worries about all the housekeeping.

I think Bert summed up their goal wonderfully when he said their aim is to:

Get people’s hands off physical boxes and give them a way to define complex infrastructures in a reusable way that they can then instantiate, trade, sell, replicate, back up, and manage as individual units. This is what AppLogic does incredibly well.

What they are doing is taking hard physical resources like CPU and storage and decoupling them from their physical sources so you can just order and use them on demand without worrying about how it’s done under the covers. This is a trend that has been happening for a while, but their grid OS takes that process to the next level.

Your physical co-lo cage is now a private virtual data center. Physical boxes, once lovingly spec’ed, bought, and installed are now allocated on demand from a phalanx of preconfigured and separately maintained servers. Physical storage, once lovingly pieced together from disks, controllers, and networks is now allocated from a vast unending sea of virtual storage. Physical load balancers are now programs you can create.

What this means for you is you can take a website architecture you’ve drawn up on your white board and simply and quickly create it in a data center. It’s all configurable from a GUI. You can bring on 10 new web servers with a simple drag-and-drop operation. It’s basically your white board diagram come to life, only you get to skip all the nasty implementation bits. In the virtual world the nasty non-application-related implementation bits are someone else’s problem.
3tera’s value proposition is pretty easy to understand:

  • Simplify the data center. You no longer need to locate, outfit, staff, maintain,
    and support a co-lo space.
  • Simplify operations. A few people can manage a lot of machines.
  • Simplify disaster recovery. Failover is complicated and often doesn’t work as planned.
    With AppLogic your redundant data center is always the same because the virtual data center
    is copied as a unit. You can pick it up and move it anywhere you want.
  • Simplify the cost model for growth. If you grow how are you going to fund your
    hardware? Growing on a grid is more agile, incremental, and requires less upfront
    investment.
  • Simplify your architecture. The grid OS provides a powerful implementation
    model of how you should structure, grow, and maintain your system. You don’t need to
    code it from scratch or think it up yourself.
  • In short: customers don’t care about your servers. Hardware and the data center do not
    add value. Your core competency is your application and running your business, not playing with servers.

    Well, that’s it for the overview. Please listen to this podcast for all the nitty-gritty details.
    Download audio file (1 hour 16 minutes, mp3).

    Podcast Notes

    I know what you are probably saying. You are saying: “But Todd, the podcast is over an hour long, couldn’t you have please made it longer? I have nothing else to do today and I need to waste more time!” What can I say, Bert was very knowledgeable and helpful, and this is a new model for building scalable websites, so I was trying to figure out how I could physically make a website using their product. That takes a lot of questions. I am happy with the result though. I think I have a good picture of how their system works and I think it’s well worth investigating if you are in the market for creating or expanding a website.

    Here are some notes taken from the podcast.

  • They started 3 years ago. At that time nobody could understand what they were trying to build. They have just now been able to build the higher level features, like Smart Appliances, that they wanted to build originally. They’ve been concentrating on making all the plumbing work.
  • The AppLogic grid operating system takes hard infrastructure (servers, load balancers, firewalls, VPNs, all the boxes you need to make a website) and allows you to deploy it in a virtual data center.
  • A virtual data center (VDC) is like a cage you would buy from a co-location service except you operate
    and manage it through a browser. You can be anywhere in the world and you can use hosting services anywhere in the world.
  • An entry level package ranges from $500 to a few thousand a month. The starting point is 4 – 32 CPUs, some amount of storage, and some amount of bandwidth. You add resources as you need to. Overage charges are passed through to you from the data center provider. They don’t mark them up.
  • They don’t own any servers. They contract with hosting providers’ data centers, like Softlayer and Layeredtech, for a uniform set of resources.
  • They offer templates for a scalable virtualized LAMP infrastructure as a starting point for building your own applications.
  • Their GUI shows you the architecture. You don’t have to think of physical boxes.
  • There’s a controller for the VDC through which you can provision your system.
  • You can still login to any physical or logical service. You have root access. You can install anything and manage the system, but you don’t have to worry about where it physically resides.
  • To create an application:
    – You use the controller to provision a LAMP cluster.
    – Then you log into Apache server and configure it how you wish.
    – Then restart and it begins to serve.
    – Say you want 10 front-end web servers.
    – The load balancer is a virtual load balancer you program.
    – You use virtual NAS.
    – Upload code to the NAS.
    – Then have all Apache servers run off the NAS, so you don’t have to log into each one and upload code.
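The provisioning steps above can be sketched in miniature. This is a toy model for illustration only, not AppLogic’s actual API; every class and method name here is invented:

```python
# Hypothetical sketch of the create-an-application flow: provision appliances,
# upload code once to a virtual NAS, and restart the fleet. Names are invented.

class VirtualDataCenter:
    """Toy model of a VDC controller."""

    def __init__(self):
        self.appliances = []
        self.nas = {}          # virtual NAS: path -> content, shared by all servers

    def provision(self, kind, count=1):
        created = [{"kind": kind, "id": len(self.appliances) + i, "running": False}
                   for i in range(count)]
        self.appliances.extend(created)
        return created

    def upload_to_nas(self, path, code):
        # Code is uploaded once; every web server mounts the same volume.
        self.nas[path] = code

    def restart(self, appliance):
        appliance["running"] = True   # picks up code from the shared NAS mount


vdc = VirtualDataCenter()
vdc.provision("load_balancer")
web_servers = vdc.provision("apache", count=10)   # 10 front-end web servers
vdc.upload_to_nas("/code/app", "<application code>")
for server in web_servers:
    vdc.restart(server)
```

The point the sketch makes is that the deploy step touches shared storage once, rather than each of the ten servers individually.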
  • Shared storage is part of the virtual data center by definition. You can create as many volumes as you wish. All are mirrored for high availability. If a virtual server goes down, AppLogic will simply restart it on another available resource in the data center.
  • Partners build the grid backbone to which nodes and other resources are attached. AppLogic runs that grid backbone. When you sign up and provision a virtual data center, nodes on the backbone are assigned to your VDC. A controller allows you to provision your VDC. Anything you can do in a co-lo cage you can do here, but there’s nothing physical. AppLogic carries out your commands on the grid.
  • They provide standards for the hosting service. A variety of machine classifications are available. They have customers with 50TB of storage. The largest number of CPUs in a single VDC is over 450.
  • To see if the VDC meets your requirements you run a test on the VDC. Once you have resources in your VDC they are not shared with anyone else so you can be confident the performance will be as tested. It’s not a VPS. Their customers run production systems. They are all running a business of some sort.
  • Pricing is designed to be attractive for startups, but not artificially low to over-subscribe.
  • Currently there’s no data center API. It’s scriptable from the CLI. Smart Appliances can package up data center operations into a drag-and-drop package. You can drag them into any application. Their first Smart Appliance is “follow me,” which can move your application to a data center that is close to you. If you are in Asia you can move your data center to Asia. So your data center can follow you around. No coding is needed on your part. Just drag it into your VDC.
  • With AppLogic instead of managing a bunch of different things you manage your application. You do it once. AppLogic maintains the infrastructure for you.
  • In an upgrade of 10 Apache servers you don’t upgrade standing infrastructure. You take a copy of your application and upgrade the copy.
  • Let’s say you have an Apache server you want to patch. You create one prototype, which they call a class volume. Then when the application restarts, the new changes will be picked up everywhere.
  • The power of what it means to be virtual can be seen in their rollback model. You don’t upgrade in-place. You upgrade a copy. Because everything is virtual it’s easy to make copies of your entire data center. So you can copy your data center, keep the original running, and switch to the upgraded version. If the upgraded version doesn’t work you can roll back to the original version of your VDC. This would be almost impossible using traditional methods. An application is the full state of the application with all of its data, so a copy is a complete running instance of the application. To roll back you just restart the old version.
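The copy-then-switch model can be expressed as a minimal sketch. The function names and state shape here are assumptions for illustration, not 3tera’s API; the key idea is that the upgrade acts on a deep copy, so the original keeps running untouched:

```python
import copy

def upgrade_with_rollback(vdc_state, apply_upgrade, healthy):
    """Upgrade a *copy* of the data center; switch only if the copy is healthy."""
    candidate = copy.deepcopy(vdc_state)      # full copy: configuration *and* data
    apply_upgrade(candidate)
    if healthy(candidate):
        return candidate                      # switch traffic to the upgraded copy
    return vdc_state                          # rollback: original never stopped running

live = {"version": 1, "data": ["order-1", "order-2"]}

def bump(state):
    state["version"] = 2

upgraded = upgrade_with_rollback(live, bump, healthy=lambda s: s["version"] == 2)
# `live` is untouched either way, which is exactly what makes rollback trivial
```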
  • For upgrades that require transformations, like database upgrades, you can write a script to run a database transformation.
  • They don’t over-automate. They don’t want to force only their way of doing things.
  • They model an application as having two parts: the appliance and the content. For a web server this means:
    – Web Server
    – Content that it’s serving.
  • You first create a prototype of what you want your system to look like. This becomes a class from which you later can create instances. There are templates, like the Linux appliance, to build from. Through their on-line system you configure your system, install packages, etc. When it works the way you want, you can drag it into your catalog as a template for building new instances. You can create hundreds of copies if you choose.
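The prototype-to-catalog-to-instances workflow is essentially the class/instance pattern. A hedged sketch, with all names invented for illustration:

```python
import copy

# Toy catalog: a configured appliance is saved as a template ("class"),
# and instances are stamped out from it on demand.

catalog = {}

def save_as_template(name, appliance):
    catalog[name] = copy.deepcopy(appliance)   # drag configured appliance into catalog

def instantiate(name, count):
    return [copy.deepcopy(catalog[name]) for _ in range(count)]

prototype = {"base": "linux", "packages": ["apache2", "php"], "config": "tuned"}
save_as_template("web-server", prototype)
fleet = instantiate("web-server", 100)   # hundreds of copies if you choose
```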
  • Content would be served off a mount location from inside the VDC.
  • You can upgrade the catalog element and restart the appliance and it will automatically upgrade for you. It’s not transactional; appliances are upgraded on an individual basis.
  • You can pin machines and have the environment make machine-specific configurations. You can put appliances into standby so you can quickly add additional resources on demand.
  • Their load balancer is Pound. No spam detection, but it is session aware. You can use others if you want.
  • They specialize in the code that runs the grid. They aren’t specialists in load balancers and routers, etc.
  • In the VDC you can share infrastructure. You can email each other a clustered database, for example. You can save and package up an integration effort as an assembly. Save it. Sell it. Share it.
  • You can create an active-active redundancy scheme and pay for only resources you need because you can bring on resources like the front-end when you need them.
  • Many companies periodically make a local copy of their VDC and move it to their disaster center.
    – Remember, with a VDC it’s easy to pick up your whole data center and move it somewhere else. The catalog doesn’t have to be copied each time. Just the data for applications can be copied over. Not so bad with a fast backbone.
    – Disaster recovery can be triggered by a 3rd party or scripts.
    – This model is sufficient for companies that can accept some down time.
    – If no data loss can be tolerated you need to replicate in an active-active architecture.
  • Some companies maintain fungible data centers. They constantly copy their data center over to backup locations. If an app goes down they can fire up a replacement.
  • With AppLogic you can create a stub that can start an application on demand if it’s not already running. This allows you to share resources. You can shut it down at night and save those resources for other applications.
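A stub like that could look something like this in miniature. This is purely illustrative; the real mechanism is AppLogic’s, and every name below is made up:

```python
# On-demand stub: the application is started lazily on the first request and
# can be shut down (e.g. at night) to free grid resources for other apps.

class OnDemandStub:
    def __init__(self, start_app):
        self.start_app = start_app
        self.app = None

    def handle(self, request):
        if self.app is None:              # app not running: start it first
            self.app = self.start_app()
        return self.app(request)

    def shutdown(self):                   # release resources until the next request
        self.app = None


stub = OnDemandStub(start_app=lambda: (lambda req: f"handled {req}"))
response = stub.handle("r1")    # first request triggers the start
stub.shutdown()                 # resources now available to other applications
```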
  • Here’s how they would handle TechCrunch:
    – Let’s say you have an 8-node grid data center, and your normal load takes 20-30% of that.
    – First thing you’ll do is use more resources from within the grid.
    – Then reconfigure appliances with more resources and restart them.
    – Then call your provider to add more resources.
    – Softlayer, for example, has a 500-1000 server inventory, so you can add servers to your grid within an hour or two. Currently this process requires human intervention.
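Some back-of-the-envelope arithmetic for the scenario above. Only the grid size and utilization come from the notes; the spike multiplier is an assumption for illustration:

```python
# Capacity headroom for a traffic spike, in node-equivalents of load.

nodes = 8
normal_utilization = 0.25          # "20-30% of that" from the notes
spike_multiplier = 10              # assumed: e.g. a front-page TechCrunch link

needed = nodes * normal_utilization * spike_multiplier   # load during the spike
spare_in_grid = nodes * (1 - normal_utilization)         # step 1: use the grid's slack
extra_from_provider = max(0, needed - nodes)             # step 3: call the provider
```

With these numbers the spike needs 20 node-equivalents: 6 come from slack already in the grid, and 12 more nodes must come from the provider, which is why the hour-or-two human provisioning step matters.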
  • Finding good OPs people is difficult. So with the VDC you can automate most of it and you don’t need a big OPs team.
  • In your VDC your data center configuration is in the metadata, so it’s not kept as tacit knowledge. One or two people can run a thousand servers because you aren’t really running servers, you are running applications.
  • Monitoring
    – AppLogic is in control of all resources. You can build dashboards right off the bat.
    – You can plug your monitored variables into their monitoring system.
    – The data are available over the web.
    – Widgets are available for the display of live stats.
  • Different Way of Thinking about Your System
    – Typically you put the database on the fastest server. Instead, they recommend allocating high-end machines to everything so your database can run anywhere.
    – Same with SANs. You don’t need a SAN; storage is part of the VDC. The concept of a SAN is just another lock-in, a way of thinking that doesn’t apply in the VDC.

    Some Observations and Conclusions

  • I think the grid/virtualization approach, in one form or another, is the wave of the future. It simply makes it easier for companies to scale applications. And as applications themselves are structured to run natively on a grid, it will become even easier.
  • Reaching the full potential of the virtual data center depends on having a more granular billing strategy and more fine-grained control over resource management. For example, say I have a 6-CPU grid and I want to upgrade. I don’t want to pay for a 12-CPU data center just so I can upgrade a copy; I don’t need 12 normally, I need 12 transiently. During the upgrade I want my script to allocate a copy of my VDC, do the upgrade, switch to it, and then decommission the old VDC. So I hold the 6 extra servers only for the time the upgrade takes, and I am billed only for the resources I use while I am using them. This would also give a more satisfactory solution to the TechCrunch scenario.
  • You need to architect your system to take advantage of the grid. To me this means a shared-nothing architecture that can be grown horizontally by adding more machines on demand. Applications should read their configuration off shared storage so nothing needs to be configured on each machine and you can bring up new machines from a template. If you need to scale, a new machine should come up and automatically start handling load. Queuing architectures, for example, have this attribute.
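The shared-nothing pattern described here can be sketched as follows; the storage layout and field names are invented for illustration:

```python
# Workers boot from a single template and read configuration off shared storage,
# so scaling out is just booting more identical workers -- no per-machine setup.

shared_storage = {"config": {"db_host": "db.internal", "queue": "jobs"}}

def boot_worker(worker_id):
    config = shared_storage["config"]       # no per-machine configuration step
    return {"id": worker_id, "queue": config["queue"], "ready": True}

# Scaling from 3 workers to 30 is the same loop with a bigger range.
workers = [boot_worker(i) for i in range(3)]
```

Because every worker derives its state from the shared config and pulls work from the same queue, a freshly booted machine starts handling load with no manual steps.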
  • They need a data center API so you can treat the data center like an object. This would allow you to orchestrate various data centers around the world as a single cooperating unit.
  • Operations within a grid would benefit from standardization. I know this enters the application realm, but operations like upgrade and failover are common and hard. So it would be useful if common processes could be developed and easily deployed.
  • They need turnkey options for those new to the game. As it stands, the path from signing up for their service to deploying a web service is a little scary. They are very honest in saying they do only one part of the overall picture. But many people need a painting, not a brush and paint. It would be helpful to have out-of-the-box plans for solving the most common problems people face.

    I would like to thank Bert again for taking this time for the interview! May the grid be with you, always.

    Related Sites and Articles

  • http://www.3tera.com/
  • On-Demand Infinitely Scalable Database Seed the Amazon EC2 Cloud

    [repost] Heroku – Simultaneously Develop and Deploy Automatically Scalable Rails Applications in the Cloud


    Update 4: Heroku versus GAE & GAE/J

    Update 3: Heroku has gone live! Congratulations to the team. It’s difficult right now to get a feeling for the relative cost and reliability of Heroku, but it’s an impressive accomplishment and a viable option for people looking for a delivery platform.

    Update 2: Heroku Architecture. A great interactive presentation of the Heroku stack. Requests flow into Nginx, used as an HTTP reverse proxy. Nginx routes requests into a Varnish-based HTTP cache. Then requests are injected into an Erlang-based routing mesh that balances requests across a grid of dynos. Dynos are your application “VMs” that implement application-specific behaviors. Dynos themselves are a stack of: POSIX, Ruby VM, App Server, Rack, Middleware, Framework, Your App. Applications can access PostgreSQL. Memcached is used as an application caching layer.
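The request path described in this update can be modeled as a toy simulation. This is not Heroku’s actual code; the dyno names and the cache behavior are simplified assumptions:

```python
from itertools import cycle

# Toy model of the path: cache in front (Varnish's role), routing mesh behind
# it balancing requests across a grid of dynos.

cache = {}
dynos = cycle(["dyno-1", "dyno-2", "dyno-3"])   # mesh round-robins across dynos

def handle_request(path):
    if path in cache:                # cache hit: no dyno involved at all
        return cache[path]
    response = f"{next(dynos)} rendered {path}"
    cache[path] = response
    return response

first = handle_request("/home")     # miss: routed to a dyno
second = handle_request("/home")    # hit: served from the cache
```

The sketch shows why the cache tier matters: repeated requests for the same page never reach the routing mesh, so dyno capacity is spent only on uncached work.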

    Update: Aaron Worsham Interview with James Lindenbaum, CEO of Heroku. Aaron nicely sums up their goal: Heroku is looking to eliminate all the reasons companies have for not doing software projects.
    Adam Wiggins of Heroku presented at the lollapalooza that was the Cloud Computing Demo Night. The idea behind Heroku is that you upload a Rails application into Heroku and it automatically deploys into EC2 and it automatically scales using behind the scenes magic. They call this “liquid scaling.” You just dump your code and go. You don’t have to think about SVN, databases, mongrels, load balancing, or hosting. You just concentrate on building your application. Heroku’s unique feature is their web based development environment that lets you develop applications completely from their control panel. Or you can stick with your own development environment and use their API and Git to move code in and out of their system.

    For website developers this is as high up the stack as it gets. With Heroku we lose that “build your first lightsaber” moment marking the transition out of apprenticeship and into mastery. Upload your code and go isn’t exactly a hero’s journey, but it is damn effective…

    I must confess to having an inherent love of Heroku’s idea because I had a similar notion many moons ago, but the trendy language of the time was Perl instead of Rails. At the time though it just didn’t make sense. The economics of creating your own “cloud” for such a different model wasn’t there. It’s amazing the niches utility computing will seed, fertilize, and help grow. Even today when using Eclipse I really wish it was hosted in the cloud and I didn’t have to deal with all its deployment headaches. Firefox based interfaces are pretty impressive these days. Why not?

    Adam views their stack as:
    1. Developer Tools
    2. Application Management
    3. Cluster Management
    4. Elastic Compute Cloud

    At the top level developers see a control panel that lets them edit code, deploy code, interact with the database, see logs, and so on. Your website is live from the first moment you start writing code. It’s a powerful feeling to write normal code, see it run immediately, and know it will scale without further effort on your part. Now, will you be able to toss your Facebook app into the Heroku engine and immediately handle a deluge of 500 million hits a month? It will be interesting to see how far a generic scaling model can go without special tweaking by a certified scaling professional. Elastra has the same sort of issue.

    Underneath, Heroku makes sure all the software components work together in Lennon-McCartney style harmony. They take care of (or will take care of) starting and stopping VMs, deploying to those VMs, billing, load balancing, scaling, storage, upgrades, failover, etc. The dynamic nature of Ruby and the development and deployment infrastructure of Rails is what makes this type of hosting possible. You don’t have to worry about builds. There’s a great infrastructure for installing packages and plugins. And the big hard one of database upgrades is tackled with the new migrations feature.

    A major issue in the Rails world is versioning. Given the precambrian explosion of Rails tools, how does Heroku make sure all the various versions of everything work together? Heroku sees this as their big value add. They are in charge of making sure everything works together. We see a lot of companies on the web taking on the role of curator ([1], [2], [3]). A curator is a guardian or an overseer. Of curators Steve Rubel says: They acquire pieces that fit within the tone, direction and – above all – the purpose of the institution. They travel the corners of the world looking for “finds.” Then, once located, clean them up and make sure they are presentable and offer the patron a high quality experience. That’s the role Heroku will play for their deployable Rails environment.

    With great automated power comes great restrictions. And great opportunity. Curating has a cost for developers: flexibility. The database they support is Postgres. Out of luck if you want MySQL. Want a different Ruby version or Rails version? Not if they don’t support it. Want memcache? You just can’t add it yourself. One forum poster wanted, for example, to use the command line version of ImageMagick but was told it wasn’t installed and to use RMagick instead. Not the end of the world. And this sort of curating has to be done to keep a happy and healthy environment running, but it is something to be aware of.

    The upside of curation is stuff will work. And we all know how hard it can be to get stuff to work. When I see an EC2 AMI that already has most of what I need my heart goes pitter patter over the headaches I’ll save because someone already did the heavy curation for me. A lot of the value in services like rPath offers, for example, is in curation. rPath helps you build images that work, that can be deployed automatically, and can be easily upgraded. It can take a big load off your shoulders.

    There’s a lot of competition for Heroku. Mosso has a hosting system that can do much of what Heroku wants to do. It can automatically scale up at the webserver, data, and storage tiers. It supports a variety of frameworks, including Rails. And Mosso also says all you have to do is load and go.

    3Tera is another competitor. As one user said: It lets you visually (through a web UI) create “applications” based on “appliances”. There is a standard portfolio of prebuilt applications (SugarCRM, etc.) and templates for LAMP, etc. So, we build our application by taking a firewall appliance, a CentOS appliance, a gateway, a MySQL appliance, glue them together, customize them, and then create our own template. You can specify, down to the appliance level, the amount of CPU, memory, disk, and bandwidth each is assigned, which lets you scale up your capacity simply by tweaking values through the UI. We can now deploy our Rails/Java hosted offering for new customers in about 20 minutes on our grid. AppLogic has automatic failover so that if anything goes wrong, it redeploys your application to a new node in your grid and restarts it. It’s not as cheap as EC2, but much more powerful. True, 3Tera won’t help with your application directly, but most of the hard bits are handled.

    RightScale is another company that combines curation along with load balancing, scaling, failover, and system management.

    What differentiates Heroku is their web based IDE that allows you to focus solely on the application and ignore the details. Though now that they have a command line based interface as well, it’s not as clear how they will differentiate themselves from other offerings.

    The hosting model has a possible downside if you want to do something other than straight web hosting. Let’s say you want your system to insert commercials into podcasts. That sort of large scale batch logic doesn’t cleanly fit into the hosting model. A separate service accessed via something like a REST interface needs to be created. Possibly double the work. Mosso suffers from this same concern. But maybe leaving the web front end to Heroku is exactly what you want to do. That would leave you to concentrate on the back end service without worrying about the web tier. That’s a good approach too.

    Heroku is just getting started so everything isn’t in place yet. They’ve been working on how to scale their own infrastructure. Next is working on scaling user applications beyond starting and stopping mongrels based on load. They aren’t doing any vertical scaling of the database yet. They plan on memcaching reads, implementing read-only slaves via Slony, and using the automatic partitioning features built into Postgres 8.3. The idea is to start a little smaller with them now and grow as they grow. By the time you need to scale bigger they should have the infrastructure in place.

    One concern is that pricing isn’t nailed down yet, but my gut says it will be fair. It’s not clear how you will transfer an existing database over, especially from a non-Postgres database. And if you use the web IDE I wonder how you will handle normal project stuff like continuous integration, upgrades, branching, release tracking, and bug tracking? Certainly a lot of work to do and a lot of details to work out, but I am sure it’s nothing they can’t handle.

    Related Articles

  • Heroku Rails Podcast
  • Heroku Open Source Plugins etc