Why and How We Migrated babylon.js to Azure: CORS, gzip, and IndexedDB

Originally published at: http://www.sitepoint.com/migrated-babylon-js-azure-cors-gzip-indexeddb/

You are working for a startup. Suddenly that hard year of coding is paying off, but with success comes more growth and demand for your web app to scale.

In this tutorial, I want to humbly share one of our more recent success stories around our open-source WebGL gaming framework, babylon.js, and its website: babylonjs.com. We have been excited to see so many web gaming devs try it out.

But to keep up with the demand, we knew we needed a new web hosting solution. While this tutorial focuses on Microsoft Azure, many of the concepts apply to whichever solution you prefer. We will also look at the various optimizations we have put in place to limit, as much as possible, the outgoing bandwidth from our servers to your browser.

Introduction

Babylon.js is a personal project we have been working on for over a year now. As it is a personal project (i.e. our own time and money), we hosted the website, textures and 3D scenes on a relatively cheap hosting solution using a small, dedicated Windows/IIS machine. The project started in France, but quickly got on the radar of several 3D and web specialists around the globe, as well as some gaming studios. We were happy about the community’s feedback, and the traffic was still manageable!

For instance, between February 2014 and April 2014, we had an average of 7K+ users/month and 16K+ page views/month. Some of the events we spoke at generated some interesting peaks:

[Image: March–April traffic showing peaks around speaking events]

But the experience on the website was still good enough. Our scenes didn’t load at stellar speed, but users weren’t complaining that much.

However, recently, a cool guy decided to share our work on Hacker News. We were really happy about the news! But look at what happened to the site’s connections:

[Image: connection spike following the Hacker News post]

Game over for our little server! It slowly stopped working, and the experience for our users was really bad. The IIS server was spending its time serving large static assets and images, and the CPU usage was too high. As we were about to launch the Assassin’s Creed Pirates WebGL experience running on babylon.js, it was time to switch to more scalable, professional hosting using a cloud solution.

But before reviewing our hosting choices, let’s briefly talk about the specifics of our engine and website:

  1. Everything is static on our website. We currently don’t have any server-side code running.
  2. Our scene files (.babylon JSON files) and textures (.png or .jpeg) can be very big (up to 100 MB). This means that we absolutely needed to activate gzip compression on our “.babylon” scene files. Indeed, in our case, pricing is largely driven by outgoing bandwidth.
  3. Drawing into the WebGL canvas requires special security checks. For instance, you can’t load our scenes and textures from another server unless CORS is enabled on it (see the sketch just after this list).
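To make that third point concrete, here is a minimal sketch of why CORS matters for WebGL. The texture URL is hypothetical; the point is that the browser refuses to upload a cross-origin image into a texture unless it was requested with CORS and the remote server sent the right headers:

// Minimal sketch: using a cross-origin image as a WebGL texture.
// Without crossOrigin + proper CORS headers on the remote server,
// the canvas would be "tainted" and texImage2D would throw a SecurityError.
const canvas = document.createElement("canvas");
const gl = canvas.getContext("webgl")!;

const image = new Image();
image.crossOrigin = "anonymous"; // ask the browser to make a CORS request
image.onload = () => {
    const texture = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, texture);
    // Succeeds only if the response carried an Access-Control-Allow-Origin
    // header matching our domain (or "*").
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
};
// Hypothetical asset URL on another domain (e.g. blob storage):
image.src = "https://yourstorage.blob.core.windows.net/textures/ground.png";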

Credits: I’d like to give special thanks to Benjamin Talmard, one of our French Azure Technical Evangelists, who helped us move to Azure.

Step 1: Moving to Azure Web Sites & the Autoscale service

As we’d like to spend most of our time writing code and features for our engine, we don’t want to lose time on the plumbing. That’s why we immediately decided to choose a PaaS approach rather than an IaaS one.

Moreover, we liked the Visual Studio integration with Azure. I can do almost everything from my favorite IDE. And even though babylon.js is hosted on GitHub, we’re using Visual Studio 2013, TypeScript and Visual Studio Online to code our engine. As a note for your own project, you can get Visual Studio Community and an Azure trial for free.

Moving to Azure took me approximately 5 min:

  1. I created a new Web Site in the admin page: http://manage.windowsazure.com (could be done inside VS too).
  2. I took the right changeset from our source code repository matching the version that was currently online.
  3. I right-clicked the Web project in the Visual Studio Solution Explorer. [Image: the Publish command in the Solution Explorer context menu]
  4. Here comes the awesomeness of the tooling. As I was logged into VS with the Microsoft Account bound to my Azure subscription, the wizard simply let me choose the web site I’d like to deploy to. [Image: Azure publishing options in the Publish wizard]

No need to worry about complex authentication, connection strings or whatever.

“Next, Next, Next & Publish”, and a couple of minutes later, once all our assets and files had been uploaded, the web site was up and running!

On the configuration side, we wanted to benefit from the cool autoscale service. It would have helped a lot in our previous Hacker News scenario.

First, your instance has to be configured in “Standard” mode in the “Scale” tab.

[Image: web hosting plan mode set to Standard]

Then you can choose how many instances you’d like to scale up to automatically, under which CPU conditions, and also at which scheduled times. In our case, we’ve decided to use up to 3 small instances (1 core, 1.75 GB memory) and to spawn a new instance automatically if the CPU goes over 80% utilization. We remove one instance if the CPU drops under 60%. The autoscaling mechanism is always on in our case; we haven’t set any specific scheduled times.

[Image: autoscale instance options]

The idea is really to pay only for what you need during specific timeframes and loads. I love the concept. With that, we would have been able to handle the previous peaks without doing anything, thanks to this Azure service! This is what I call a service.

You also get a quick view of the autoscaling history via the purple chart. In our case, since we moved to Azure, we have never gone over 1 instance so far. And we’re going to see below how to minimize the risk of triggering an autoscale at all.

To conclude on the web site configuration, we wanted to enable automatic gzip compression on our specific 3D engine resources (.babylon and .babylonmeshdata files). This was critical to us as it could save up to 3x the bandwidth and thus… the price.

Web Sites are running on IIS. To configure IIS, you need to go into the web.config file. We’re using the following configuration in our case:

<system.webServer>
  <staticContent>
    <mimeMap fileExtension=".dds" mimeType="application/dds" />
    <mimeMap fileExtension=".fx" mimeType="application/fx" />
    <mimeMap fileExtension=".babylon" mimeType="application/babylon" />
    <mimeMap fileExtension=".babylonmeshdata" mimeType="application/babylonmeshdata" />
    <mimeMap fileExtension=".cache" mimeType="text/cache-manifest" />
    <mimeMap fileExtension=".mp4" mimeType="video/mp4" />
  </staticContent>
  <httpCompression>
    <dynamicTypes>
      <clear />
      <add enabled="true" mimeType="text/*" />
      <add enabled="true" mimeType="message/*" />
      <add enabled="true" mimeType="application/x-javascript" />
      <add enabled="true" mimeType="application/javascript" />
      <add enabled="true" mimeType="application/json" />
      <add enabled="true" mimeType="application/atom+xml" />
      <add enabled="true" mimeType="application/atom+xml;charset=utf-8" />
      <add enabled="true" mimeType="application/babylonmeshdata" />
      <add enabled="true" mimeType="application/babylon" />
      <add enabled="false" mimeType="*/*" />
    </dynamicTypes>
    <staticTypes>
      <clear />
      <add enabled="true" mimeType="text/*" />
      <add enabled="true" mimeType="message/*" />
      <add enabled="true" mimeType="application/javascript" />
      <add enabled="true" mimeType="application/atom+xml" />
      <add enabled="true" mimeType="application/xaml+xml" />
      <add enabled="true" mimeType="application/json" />
      <add enabled="true" mimeType="application/babylonmeshdata" />
      <add enabled="true" mimeType="application/babylon" />
      <add enabled="false" mimeType="*/*" />
    </staticTypes>
  </httpCompression>
</system.webServer>

This solution is working pretty well and we even noticed that the time to load our scenes has been reduced compared to our previous host. I’m guessing this is thanks to the better infrastructure and network used by Azure datacenters.
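If you want to double-check that the compression really kicks in on your own setup, looking at the response headers is enough. Here is a minimal sketch you could run from the browser console on the site itself (the scene path is hypothetical):

// Minimal sketch: verify a scene file is served gzipped (hypothetical path).
// The browser negotiates gzip automatically; IIS reports it via Content-Encoding.
fetch("/scenes/espilit/espilit.babylon").then(response => {
    console.log("content-type:", response.headers.get("content-type"));         // should be application/babylon
    console.log("content-encoding:", response.headers.get("content-encoding")); // "gzip" if httpCompression worked
});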

However, I had been thinking about moving to Azure for a while. And my first idea wasn’t to let web site instances serve my large assets. Since the beginning, I was more interested in storing my assets in blob storage, which is better designed for that. It would also offer us a possible CDN scenario.

Step 2: Moving assets into Azure Blob Storage, enabling CORS, gzip support & CDN

The primary reason for using blob storage in our case is to avoid loading the CPU of our web site instances just to serve assets. If everything except a few HTML, JS & CSS files is served via blob storage, our web site instances will rarely have any reason to autoscale.

But this raises two problems to solve:

  1. As the content will be hosted on another domain name, we will run into the cross-domain security problem. To avoid that, you need to enable CORS on the remote domain (Azure Blob Storage).
  2. Azure Blob Storage doesn’t support automatic gzip compression. And we don’t want to lower the web site’s CPU usage if, in exchange, we’re paying 3x the price because of the increased bandwidth!

Enabling CORS on blob storage

CORS on blob storage has been supported for a few months now. This article, Windows Azure Storage: Introducing CORS, explains how to use the Azure APIs to configure CORS. On my side, I didn’t want to write a small app to do that. I found one already written on the web: Cynapta Azure CORS Helper – Free Tool to Manage CORS Rules for Windows Azure Blob Storage.
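If you do prefer to script it yourself, the same kind of rule can be set through the storage APIs. Here is a minimal sketch using today’s Azure Storage SDK for JavaScript; the package, connection string variable and allowed origin are assumptions for illustration, not what the tool above does under the hood:

// Minimal sketch: set a GET-only CORS rule on the Blob service.
// Assumes the @azure/storage-blob package and a connection string in an env var.
import { BlobServiceClient } from "@azure/storage-blob";

async function enableCors(): Promise<void> {
    const client = BlobServiceClient.fromConnectionString(
        process.env.AZURE_STORAGE_CONNECTION_STRING! // hypothetical variable name
    );
    const properties = await client.getProperties();
    properties.cors = [{
        allowedOrigins: "http://www.babylonjs.com", // or "*" while testing
        allowedMethods: "GET",                      // visitors only need to read assets
        allowedHeaders: "*",
        exposedHeaders: "*",
        maxAgeInSeconds: 3600
    }];
    await client.setProperties(properties);
}

enableCors().catch(console.error);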

I then just enabled support for GET and the proper headers on my container. To check that everything works as expected, simply open your F12 developer tools and check the console logs:

As you can see, the green log lines imply that everything works well.

Here is a sample case where it will fail. If you try to load our scenes from our blob storage directly from your localhost machine (or any other domain), you’ll get these errors in the logs:

In conclusion, if you see that your calling domain is not listed in the “Access-Control-Allow-Origin” header, with an “Access is denied” message just after it, it’s because you haven’t set your CORS rules properly. It is very important to control your CORS rules; otherwise, anyone could use your assets, and thus your bandwidth, costing you money without you even knowing it!
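Once the rule is in place, loading a scene straight from the blob storage domain works like any local load. Here is a minimal sketch with babylon.js; the storage account and container names are hypothetical:

// Minimal sketch: load a scene from a CORS-enabled blob storage container.
import * as BABYLON from "babylonjs";

const canvas = document.getElementById("renderCanvas") as HTMLCanvasElement;
const engine = new BABYLON.Engine(canvas, true);

BABYLON.SceneLoader.Load(
    "https://yourstorage.blob.core.windows.net/scenes/", // remote, CORS-enabled origin
    "espilit.babylon",
    engine,
    scene => {
        scene.executeWhenReady(() => {
            engine.runRenderLoop(() => scene.render());
        });
    }
);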

Continue reading this article on SitePoint
