Edge Applications

The cloud and Platform-as-a-Service (PaaS) architectures have enabled a whole new breed of applications that fulfill a variety of both standard and custom requirements. In this post, I will write about so-called ‘edge applications’ in the context of one of our real-world deployments. We’ll define an ‘edge application’ as one that mixes and matches cloud services with co-location services, enabling the application developer to control and directly manage the risk of the underlying security, data, or any other component or service of the stack. The example we use is a digital asset management system we developed and continue to maintain for a client. The application is roughly broken up into the following components:

  • Web-based asset collection from suppliers (millions of photos ranging in size from 0.5 to 5 MB each)
  • Image curation including meta-tagging, editing, categorization, and organization by client editors
  • Image publishing and distribution to a variety of channels, both public and private (i.e., lightboxes)
  • End user consumption via web, email, and FTP
  • Basic and advanced search
  • End user invoice creation and payment management
  • Client reporting

The application is about five years old, so its fundamental architecture was developed before the cloud was a feasible option. The platform is a traditional Microsoft .NET web stack including redundant application layers, a high-performing database, and an asset repository (image files) residing on a storage area network (“SAN”).

Over the last two years, we’ve been able to introduce two edge components into the application that leverage cloud services to drastically improve key parts of the architecture, specifically distribution and storage.

Two years ago our client’s requirements changed drastically: they needed to begin heavily using a ‘distribute via FTP’ method. The volume of data to be delivered increased roughly 100-fold over the course of a month and then held steady at that level thereafter. The system was not designed to support such an increase, and performance consequently became an issue. We could have addressed this by procuring additional high-performing servers and increasing the bandwidth at our co-location facility, but we felt (and ultimately measured) that peak demand was highly variable and required significant resources (i.e., up to 50 times our mean usage) for about 5% of the day, albeit a very important part of the day. We did not want to overextend capacity (translate that to ‘budget’) just to hit performance requirements over these short intervals. Our instinct, and eventually our analysis, revealed there was a better way, and Amazon Web Services ultimately enabled us to arrive at what we believe to be the optimal solution.

We leveraged the EC2 cloud and its on-demand resource allocation to distribute hundreds of gigabytes of data a day to paying clients across the globe, in what is considered near real-time for this particular industry. The pay-as-you-go model meant that we incurred relatively high hourly charges over a very small timeframe, in direct contrast to the traditional method of amortizing the cost of dedicated servers over their lifetime. It became simple calculus: we needed to figure out the area under each cost curve to determine which solution should be implemented from a monetary standpoint. In this case, the cloud came out heavily in our favor.
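To make that comparison concrete, here is a minimal sketch of the back-of-the-envelope math. Every figure below is an illustrative placeholder, not our actual rates; the point is the shape of the calculation, not the numbers.

```csharp
using System;

class CostComparison
{
    static void Main()
    {
        // Dedicated hardware: sized for peak, cost amortized over every hour of the month.
        double dedicatedMonthlyCost = 4000.0;        // placeholder: servers + co-lo bandwidth

        // Cloud: pay only for the hours the burst capacity actually runs.
        double onDemandHourlyRate = 2.40;            // placeholder: several large EC2 instances
        double burstHoursPerDay   = 0.05 * 24;       // peak demand for roughly 5% of the day
        double cloudMonthlyCost   = onDemandHourlyRate * burstHoursPerDay * 30;

        Console.WriteLine($"Dedicated: ${dedicatedMonthlyCost:F2}/month");
        Console.WriteLine($"On-demand: ${cloudMonthlyCost:F2}/month");
        // "Area under each curve": integrated over a month, the short, tall on-demand
        // spikes sum to far less than the flat dedicated-capacity line.
    }
}
```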

Just as important to us as cost was performance. We’ve been able to bring our mean time to asset delivery down from over an hour to just under ten minutes; the previous delays stemmed from backlog queues associated with pulling assets off the SAN and from our relatively modest pipes at the co-location facility. Together, these two improvements were a win-win for us and the client: better performance at a fraction of the cost. In the end, two key aspects of cloud computing were important to us, and they can roughly be defined as:

  • Unlimited bandwidth
  • Unlimited server capacity (processing and throughput)

In our case, ‘unlimited’ simply means that the ratio of the bandwidth and server capacity readily available to us to the resources we might need at any given time is extremely high, essentially unlimited. By programmatically bringing on resources at predetermined threshold levels and then dropping those resources when they are no longer needed (see the sketch after this list), we’ve been able to closely match resource supply with actual demand, ensuring that we do not:

  • Over-invest in capacity, thereby incurring high costs, and
  • Under-invest in capacity, thereby failing to meet delivery requirements
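
The sketch below shows what that threshold logic might look like with the AWS SDK for .NET (modern v3 API shown). The thresholds, AMI id, instance type and counts, and the queue-depth input are all hypothetical placeholders; the real service would also need retries, health checks, and error handling that are omitted here.

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.EC2;
using Amazon.EC2.Model;

class DeliveryScaler
{
    const int LaunchThreshold = 500;   // pending deliveries that justify burst capacity
    const int DrainThreshold  = 50;    // point at which extra instances can be released

    readonly AmazonEC2Client _ec2 = new AmazonEC2Client();
    readonly List<string> _burstInstanceIds = new List<string>();

    public async Task AdjustCapacityAsync(int queueDepth)
    {
        if (queueDepth > LaunchThreshold)
        {
            // Bring on extra delivery workers from a prebuilt machine image.
            var response = await _ec2.RunInstancesAsync(new RunInstancesRequest
            {
                ImageId = "ami-xxxxxxxx",          // placeholder AMI id
                MinCount = 1,
                MaxCount = 2,
                InstanceType = InstanceType.M1Large
            });
            foreach (var instance in response.Reservation.Instances)
                _burstInstanceIds.Add(instance.InstanceId);
        }
        else if (queueDepth < DrainThreshold && _burstInstanceIds.Count > 0)
        {
            // Demand has fallen; drop the burst capacity to stop the hourly meter.
            await _ec2.TerminateInstancesAsync(new TerminateInstancesRequest
            {
                InstanceIds = new List<string>(_burstInstanceIds)
            });
            _burstInstanceIds.Clear();
        }
    }
}
```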

Once we had that win under our belt, we realized the storage capacity of our SAN was fast becoming scarce. After a relatively simple cost-benefit analysis, we decided to migrate the assets from the SAN to cloud storage over the next several months. As before, we turned to AWS, this time leveraging the Simple Storage Service (“S3”) to remove the need to purchase and maintain yet another SAN. A pure $/GB comparison alone was not enough to tip the cost in favor of the cloud, but the relative ease of backup storage and redundancy, the additional maintenance costs our client would otherwise have to pay, and the high durability of the service turned the odds in S3’s favor.
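The core of that migration is a straightforward walk of the SAN and a series of S3 puts. A minimal sketch with the AWS SDK for .NET follows; the bucket name and path handling are placeholders, and a production run would add resumability and integrity checks.

```csharp
using System.IO;
using System.Threading.Tasks;
using Amazon.S3;
using Amazon.S3.Model;

class AssetMigrator
{
    readonly AmazonS3Client _s3 = new AmazonS3Client();
    const string Bucket = "client-asset-repository";   // hypothetical bucket name

    public async Task MigrateAsync(string sanRoot)
    {
        foreach (var file in Directory.EnumerateFiles(sanRoot, "*", SearchOption.AllDirectories))
        {
            // Derive the S3 key from the file's path relative to the SAN root.
            var key = file.Substring(sanRoot.Length).TrimStart('\\', '/').Replace('\\', '/');

            await _s3.PutObjectAsync(new PutObjectRequest
            {
                BucketName = Bucket,
                Key = key,
                FilePath = file
            });
        }
    }
}
```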

It’s worthwhile to note that the core application server and database server, in this case a Microsoft SQL Server 2005 Enterprise database (an instance type not yet available on EC2), remain housed in our leased SAS 70 Type II co-location facility. Unlike the cloud, we can visit the facility on demand, and our client is comfortable with the access privileges to the hardware and software enforced via our firewalls. Our database sits behind multiple firewalls that protect and secure the data from any real or perceived contamination from shared or virtual environments. The database is regularly backed up, encrypted, and then pushed over an encrypted channel to the S3 service for backup. In all cases, AWS’s .NET SDK was instrumental in programming and deploying our cloud management services.
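For a sense of what that backup push involves, here is a minimal sketch: encrypt the database dump locally with AES, then upload it to S3 (the SDK talks to S3 over HTTPS by default, covering the encrypted channel). Key management is elided, and the bucket name and file naming are hypothetical.

```csharp
using System.IO;
using System.Security.Cryptography;
using System.Threading.Tasks;
using Amazon.S3;
using Amazon.S3.Model;

class BackupShipper
{
    public async Task ShipAsync(string backupFile, byte[] key, byte[] iv)
    {
        var encryptedFile = backupFile + ".enc";

        // Encrypt the backup with AES before it ever leaves the co-lo facility.
        using (var aes = Aes.Create())
        using (var encryptor = aes.CreateEncryptor(key, iv))
        using (var input = File.OpenRead(backupFile))
        using (var output = File.Create(encryptedFile))
        using (var crypto = new CryptoStream(output, encryptor, CryptoStreamMode.Write))
        {
            await input.CopyToAsync(crypto);
        }

        // Push the encrypted file to S3 over HTTPS.
        var s3 = new AmazonS3Client();
        await s3.PutObjectAsync(new PutObjectRequest
        {
            BucketName = "client-db-backups",   // hypothetical bucket
            Key = Path.GetFileName(encryptedFile),
            FilePath = encryptedFile
        });
    }
}
```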

Time will tell what and when other parts of the application are moved to the ‘edge’ but we’re sure glad we have that ‘real option’ available to us to make the move at any time.