Azure Firestarter – coming soon!

I’m excited to announce that our Azure Firestarter series is getting ready to roll! Registration details are at the bottom. Basically, I’m teaming up with my colleagues Peter Laudati and Jim O’Neil, and we’ll be travelling around. We’ve intentionally been waiting to do these events until after PDC – this means we’ll include the new stuff we’re announcing at PDC! Also, for those wondering what was going on with the @home series, we’ll be doing that here, too, with some revamped ideas…

The Agenda

Is cloud computing still a foggy concept for you? Have you heard of Windows Azure, but aren’t quite sure how it applies to you and the projects you’re working on? Join your Microsoft Developer Evangelists for this free, all-day event combining presentations and hands-on exercises to demystify the latest disruptive (and over-hyped!) technology and to provide some clarity as to where the cloud and Windows Azure can take you.

8:00 a.m. – Registration

8:30 a.m. – Morning Sessions

Getting Your Head into the Cloud
Ask ten people to define “Cloud Computing,” and you’ll get a dozen responses. To establish some common ground, we’ll kick off the event by delving into what cloud computing means, not just by presenting an array of acronyms like SaaS and IaaS, but by focusing on the scenarios that cloud computing enables and the opportunities it provides. We’ll use this session to introduce the building blocks of the Windows Azure Platform and set the stage for the two questions most pertinent to you: “how do I take my existing applications to the cloud?” and “how do I design specifically for the cloud?”

Migrating Applications to Windows Azure
How difficult is it to migrate your applications to the cloud? What about designing your applications to be flexible inside and outside of cloud environments? These are common questions, and in this session we’ll focus specifically on migration strategies and adapting your applications to be “cloud ready.” We’ll examine how Azure VMs differ from a typical server – covering everything from CPU and memory to profiling performance, load balancing considerations, and deployment strategies such as dealing with breaking changes in schemas and contracts. We’ll also cover SQL Azure migration strategies and how the forthcoming VM and Admin Roles can aid in migrating to the cloud.

Creating Applications for Windows Azure
Windows Azure enables you to leverage a great deal of your Visual Studio and .NET expertise on an “infinitely scalable” platform, but it’s important to realize the cloud is a different environment from traditional on-premises or hosted applications. Windows Azure provides new capabilities and features – like Azure storage and the AppFabric – that differentiate an application translated to Azure from one built for Azure. We’ll look at many of these platform features and examine tradeoffs in complexity, performance, and costs.

12:15 p.m. – Lunch

1:00 p.m. – Cloud Play
Enough talk! Bring your laptop or pair with a friend, as we spend the afternoon with our heads (and laptops) in the cloud. Each attendee will receive a two-week “unlimited” Azure account to use during (and after) our instructor-led hands-on lab. During the lab you’ll reinforce the very concepts we discussed in the morning as you develop and deploy a compelling distributed computing application to Windows Azure.

4:00 p.m. – The Silver Lining: Evaluations and Giveaways

Registration & Details

Use the links below to register for the Windows Azure Firestarter in the city closest to you.

City              Date          Registration
Tampa, FL         November 8    REGISTER HERE!
Alpharetta, GA    November 10   REGISTER HERE!
Charlotte, NC     November 11   REGISTER HERE!
Rochester, NY     November 16   REGISTER HERE!
Waltham, MA       November 30   REGISTER HERE!
New York, NY      December 1    REGISTER HERE!
Malvern, PA       December 7    REGISTER HERE!
Chevy Chase, MD   December 9    REGISTER HERE!

Hope to see you there!

Creating a Poor Man’s Distributed Cache in Azure

If you’ve read up on Windows Server AppFabric (which contains Velocity, the distributed caching project), you’re likely familiar with the concepts of distributed cache. Distributed caching isn’t strictly limited to web environments, but for this post (or if I ramble on and it becomes a series) we’ll act like it is. In a web environment, session state is one of the more problematic server-side features to deal with in multiple-server applications. You are likely already familiar with all of this, but for those who aren’t: the challenge in storing session state is handling situations where a user’s first request goes to one server in the farm, then the next request goes to another. If session state is being relied upon, there are only a few options: 1) store session state off-server (for example, in a common SQL Server shared by all web servers), or 2) use “sticky” sessions so that a user’s entire session is served from the same server (the load balancer typically handles this). Each method has pros and cons.

Caching is similar. In typical web applications, you cache expensive objects in the web server’s RAM. In very complex applications, you can create a caching tier – this is exactly the situation Velocity/AppFabric solves really well. But it’s often overkill for more basic applications. The general rules of thumb with caching are: 1) the cache should always be considered volatile – if an item isn’t in the cache for any reason, the application should be able to reconstruct it seamlessly; and 2) an item in the cache should expire such that no stale data is retained. (The SqlCacheDependency helps in many of these situations, but generally doesn’t apply in the cloud.)

The last part about stale data is pretty tricky in some situations. Consider this scenario: suppose your web app has 4 servers, and the home page displays a stock ticker for the company’s stock. This is fetched from a web service, but cached for a period of time (say, 60 minutes) to increase performance. These values will quickly get out of sync across the servers – it might not be that important, but it illustrates the point about keeping the cache in sync. A very simple way to deal with this situation is to expire the cache at an absolute time, such as the top of the hour. (But this, too, has some downsides.)

As soon as you move into a more complicated scenario, things get a bit trickier. Suppose you want to expire items from a web app if they go out of stock. Depending on how fast this happens, you might expire them based on the number in stock – if the number gets really low, you could expire them in seconds, or even not cache them at all. But what if you aren’t sure when an item might expire? Take Worldmaps … storing aggregated data is ideal in the cache (in fact, there are two levels of cache in Worldmaps). In general, handling ‘when’ the data expires is predictable. Based on age and volume, a map will redraw itself (and stats get updated) somewhere between 2 and 24 hours. I also have a tool that lets me click a button to force a redraw. When one server gets this request, it can flush its own cache, but the other servers know nothing about it. In situations where user interaction can cause cache expiration, then, it’s very difficult to cache effectively, and often the result is just not caching at all.

With all of this background out of the way: even though technologies like the SqlCacheDependency currently don’t exist in Azure, there are a few ways we can effectively create a distributed cache in Azure – or, perhaps more appropriately, sync the cache in a Windows Azure project. In the next post, we’ll get technical and I’ll show how to use the RoleEnvironment class and WCF to sync caches across different web roles. Stay tuned!
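As a quick postscript, here’s a minimal sketch of the absolute-time expiration idea mentioned above – computing a top-of-the-hour expiry so every server in the farm flushes the cached stock quote at the same moment. (Python is used for brevity; in an ASP.NET app you’d pass the computed time as the absoluteExpiration argument to Cache.Insert.)

```python
from datetime import datetime, timedelta

def next_top_of_hour(now):
    """Absolute expiration time: the next top of the hour.

    Every server computes the same boundary, so all caches across
    the farm expire (and refetch) in sync, regardless of when each
    server first cached the value.
    """
    return now.replace(minute=0, second=0, microsecond=0) + timedelta(hours=1)

# A server caching the quote at 10:15 and another at 10:42
# both expire the entry at 11:00.
print(next_top_of_hour(datetime(2010, 11, 8, 10, 15)))  # 2010-11-08 11:00:00
```

The downside alluded to above: every server refetches at the same instant, so the backing web service sees a burst of traffic at each hour boundary.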

@home: The Beginning

This post is part of a series diving into the implementation of the @home With Windows Azure project, which formed the basis of a webcast series by Developer Evangelists Brian Hitney and Jim O’Neil. Be sure to read the introductory post for the context of this and subsequent articles in the series.

To give even more background than in the first post … way back in late March (possibly early April), Jim had the idea to start something we originally called “Azure Across America” … not to be confused with AAA :). If you put yourself in our shoes, Azure is a very difficult technology for us to evangelize. It reminds me a little of what it was like to explain the value prop of “WPF/e” back when the first bits were made available, long before it took the name Silverlight. Azure is obviously a pay-for-use model, so what would be an interesting app to build in a webcast series? Preferably something that helps illustrate cloud computing, and not just a “hello world” application.

While we debated what this would look like (I’ll spare the details), the corp team solidified a trial account program that enabled us to get free trial accounts for attendees of the series. This changed the game completely, because now we weren’t hindered by signup costs, deployment costs, etc. In fact, the biggest challenge was doing something interesting enough to be worth your time to deploy.

That’s when we had the idea of a distributed computing project. Contributing back to a well-known distributed computing project would be interesting, useful, demonstrate cloud computing, and not be hindered by the constant fluctuation of apps going online and offline. So now that we had the idea, which project would we choose? We also had a number of limitations in the Azure platform. Don’t get me wrong: Azure offers a number of strengths as a fully managed PaaS … but we don’t have administrator rights or the ability to remote desktop into the VMs. In essence, we need whatever we deploy to not require admin access and to be xcopy deployable. Stanford’s Folding@home project was perfect for this. It’s a great cause, and the console version is easy to work with.

What we wanted to do was put together a site that would, in addition to providing the details, how-to’s, and recordings, show stats to track the current progress … In the next posts, I’ll go over the site and some of the issues we faced when developing the app.

@home with Windows Azure: Behind the Scenes

As over a thousand of you know (because you signed up for our webcast series during May and June), my colleague Jim O’Neil and I have been working on a Windows Azure project – known as @home With Windows Azure – to demonstrate a number of features of the platform to you, learn a bit ourselves, and contribute to a medical research project to boot. During the series, it quickly became clear (…like after the first session) that two hours was barely enough time to scratch the surface, and while we hope the series was a useful exercise in introducing you to Windows Azure and allowing you to deploy perhaps your first application to the cloud, we wanted (and had intended) to dive much deeper.

So enter not one but two blog series. This introductory post appears on both of our blogs, but from here on out we’re going to divide and conquer, each of us focusing on one of the two primary aspects of the project. Jim will cover the application you might deploy (and did, if you attended the series), and I will cover the website application, which also resides in Azure and serves as the ‘mothership’ for @home with Windows Azure. Source code for the project is available, so you’ll be able to crack open the solutions and follow along – and perhaps even add to or improve our design.

A word of caution: you are responsible for monitoring your own Azure account utilization. This project, in particular, can amass significant costs for CPU utilization. We recommend your self-study be limited to using the Azure development fabric on your local machine, unless you have a limited-time trial account or other consumption plan that will cover the costs of execution.

So let’s get started.
In this initial post, we’ll cover a few items:

- Project history
- Folding@home overview
- @home with Windows Azure high-level architecture
- Prerequisites to follow along

Project history

Jim and I have both been intrigued by Windows Azure and cloud computing in general, but we realized it’s a fairly disruptive technology and can often seem unapproachable for many of you who are focused on your own (typically on-premises) application development projects and just trying to catch up on the latest topical developments in WPF, Silverlight, Entity Framework, WCF, and a host of other technologies that flow from the fire hose at Redmond. Walking through the steps to deploy “Hello World Cloud” to Windows Azure was an obvious choice (and in fact we did that during our webcast), but we wanted an example that’s a bit more interesting in terms of domain, as well as something that wasn’t gratuitously leveraging (or not leveraging) the power of the cloud.

Originally, we’d considered just doing a blog series, but then our colleague John McClelland had a great idea – run a webcast series (over and over… and over again x9) so we could reach a crop of 100 new Azure-curious viewers each week. With the serendipitous availability of ‘unlimited’ use, two-week Windows Azure trial accounts for the webcast series, we knew we could do something impactful that wouldn’t break anyone’s individual pocketbook – something along the lines of a distributed computing project, such as SETI. SETI may be the most well-known of the efforts, but there are numerous others, and we settled on one (Folding@home, sponsored by Stanford University) based on its mission, longevity, and low barrier to entry (read: no login required and minimal software download). Once we decided on the project, it was just a matter of building up something in Windows Azure that would not only harness the computing power of Microsoft’s data centers but also showcase a number of the core concepts of Windows Azure and indeed cloud computing in general.
We weren’t quite sure what to expect in terms of interest in the webcast series, but via the efforts of our amazing marketing team (thank you, Jana Underwood and Susan Wisowaty), we ‘sold out’ each of the webcasts, including the last two, for which we were able to double the registrants – and then some! For those of you who attended, we thank you. For those who didn’t, each of our presentations was recorded and is available for viewing. As we mentioned at the start of this blog post, the two hours we’d allotted seemed like a lot of time during the planning stages, but in practice we rarely got the chance to look at code or explain some of the application constructs in our implementation. Many of you, too, commented that you’d like to have seen us go deeper, and that’s, of course, where we’re headed with this post and others that will be forthcoming in our blogs.

Overview of Stanford’s Folding@home (FAH) project

The Folding@home project was launched by the Pande lab at the Departments of Chemistry and Structural Biology at Stanford University on October 1, 2000, with a goal “to understand protein folding, protein aggregation, and related diseases” – diseases that include Alzheimer’s, cystic fibrosis, BSE (Mad Cow disease), and several cancers. The project is funded by both the National Institutes of Health and the National Science Foundation, and has enjoyed significant corporate sponsorship as well over the last decade. To date, over 5 million CPUs have contributed to the project (310,000 CPUs are currently active), and the effort has spawned over 70 academic research papers and a number of awards.

The project’s Executive Summary answers perhaps the three most frequently asked questions (a more extensive FAQ is also available):

What are proteins and why do they “fold”? Proteins are biology’s workhorses – its “nanomachines.” Before proteins can carry out their biochemical function, they remarkably assemble themselves, or “fold.” The process of protein folding, while critical and fundamental to virtually all of biology, remains a mystery. Moreover, perhaps not surprisingly, when proteins do not fold correctly (i.e. “misfold”), there can be serious effects, including many well known diseases, such as Alzheimer’s, Mad Cow (BSE), CJD, ALS, and Parkinson’s disease.

What does Folding@home do? Folding@home is a distributed computing project which studies protein folding, misfolding, aggregation, and related diseases. We use novel computational methods and large-scale distributed computing to simulate timescales thousands to millions of times longer than previously achieved. This has allowed us to simulate folding for the first time, and to now direct our approach to examine folding-related disease.

How can you help? You can help our project by downloading and running our client software. Our algorithms are designed such that for every computer that joins the project, we get a commensurate increase in simulation speed.

FAH client applications are available for the Macintosh, PC, and Linux, and GPU and SMP clients are also available. In fact, Sony has developed a FAH client for its PlayStation 3 consoles (it’s included with system version 1.6 and later, and downloadable otherwise) to leverage its Cell microprocessor to provide performance at a 20 GigaFLOP scale. As you’ll note in the architecture overview below, the @home with Windows Azure project specifically leverages the FAH Windows console client.

@home with Windows Azure high-level architecture

The @home with Windows Azure project comprises two distinct Azure applications: the website (on the right in the diagram below) and the application you deploy to your own account via the source code we’ve provided (shown on the left). We’ll call the latter the Azure@home application from here on out.

The website has three main purposes:

- Serve as the ‘go-to’ site for this effort, with download instructions, webcast recordings, and links to other Azure resources.
- Log and reflect the progress made by each of the individual contributors to the project (including the cool Silverlight map depicted below).
- Contribute itself to the effort by spawning off Folding@home work units.

I’ll focus mostly on this backend piece and other related bits and pieces, design choices, etc.

The other Azure service in play is the one you can download from the project site (in either VS2008 or VS2010 format) – the one we’re referring to as Azure@home. This cloud application contains a web front end and a worker role implementation that wraps the console client downloaded from Stanford’s site. When you deploy this application, you will be setting up a small web site including a default page (image to the left below) with a Bing Maps UI and a few text fields to kick off the folding activity. Worker roles deployed with the Azure service are responsible for spawning the Folding@home console client application – within a VM in Azure – and reporting the progress to both your local account’s Azure storage and the website (via a WCF service call).

Via your own service’s website you can keep tabs on the contribution your deployment is making to the effort (image to the right above), and via the project website you can view the overall team standing – as I’m writing this, the project is ranked 583rd out of over 184,000 teams; that’s roughly the top 0.3% after a little over two months – not bad! Jim will be exploring the design and implementation of the Azure@home piece via upcoming posts on his blog.
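To make the worker-role-wraps-console-client idea concrete, here’s a rough sketch of the pattern (the real implementation is C# in the downloadable solution; Python is used here for brevity, the client path and progress-line format are hypothetical stand-ins, and report() stands in for the updates to Azure storage and the WCF service call):

```python
import re
import subprocess

# Hypothetical format of a progress line; the real FAH console client
# writes its status to log/unitinfo files with different fields.
PROGRESS_RE = re.compile(r"Progress:\s*(\d+)%")

def parse_progress(line):
    """Extract percent-complete from a client status line, or None."""
    m = PROGRESS_RE.search(line)
    return int(m.group(1)) if m else None

def run_folding_client(client_path, report):
    """Spawn the console client inside the role's VM and relay progress.

    `report` is a callback standing in for the two updates the worker
    role makes: writing to the deployment's Azure storage table and
    calling the website's WCF service.
    """
    proc = subprocess.Popen([client_path], stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        pct = parse_progress(line)
        if pct is not None:
            report(pct)
    return proc.wait()
```

The key design point survives the translation: the role doesn’t fold proteins itself; it babysits an ordinary console process and forwards whatever progress it can observe.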
Prerequisites to follow along

Everything you need to know about getting started with @home with Windows Azure is available at the project site, but here’s a summary:

Operating system:
- Windows 7
- Windows Server 2008 R2
- Windows Server 2008
- Windows Vista

Visual Studio development environment:
- Visual Studio 2008 SP1 (Standard or above), or Visual Web Developer 2008 Express Edition with SP1, or
- Visual Studio 2010 Professional, Premium or Ultimate (trial download), or Visual Web Developer 2010 Express

Windows Azure Tools for Visual Studio (which includes the SDK), with the following prerequisites:
- IIS 7 with WCF HTTP Activation enabled
- SQL Server 2005 Express Edition (or higher) – you can install SQL Server Express with Visual Studio or download it separately

Azure@home source code:
- Visual Studio 2008 version
- Visual Studio 2010 version

Folding@home console client:
- For Windows XP/2003/Vista (from Stanford’s site)

In addition to the core pieces listed above, feel free to view one of the webcast recordings or my screencast to learn how to deploy the application. We won’t be focusing so much on the deployment in the upcoming blog series, but more on the underlying implementation of the constituent Azure services.

Lastly, we want to reiterate that the Azure@home application requires a minimum of two Azure roles. That’s tallied as two CPU hours for every clock hour in terms of Azure consumption, and therefore results in a default charge of $0.24/hour; add to that a much smaller amount of Azure storage charges, and you’ll find that if it’s left running 24x7, your monthly charge will be around $172! There are various Azure platform offers available, including an introductory special; however, the introductory special includes only 25 compute hours per month (equating to 12 hours of running the smallest version of Azure@home possible).
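The “around $172” figure is easy to sanity-check (assuming the then-current rate of $0.12 per small-instance compute hour and a 30-day month):

```python
instances = 2        # minimum roles for Azure@home (web role + worker role)
rate = 0.12          # USD per small-instance compute hour (2010 pricing)
hours = 24 * 30      # running around the clock for a 30-day month
monthly = instances * rate * hours
# monthly is roughly 172.8 USD, before the (much smaller) storage charges
```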
Most of the units of work assigned by the Folding@home project require at least 24 hours of computation time to complete, so it’s unlikely you can make a substantial contribution to the Stanford effort without leveraging idle CPUs within a paid account or having free access to Azure via a limited-time trial account. You can, of course, utilize the development fabric on your local machine to run and analyze the application, and theoretically run the Folding@home client application locally to contribute to the project on a smaller scale.

That’s it for now. I’ll be following up with the next post within a few days or so; until then, keep your head in the clouds and your eye on your code.

@home: Most Common Problems #1

Jim and I are nearly done with the @home with Azure series, but we wanted to document some of the biggest issues we see every week. As we go through the online workshop, many users are deploying an Azure application for the first time after installing the tools and SDK. In some cases, attendees are installing the tools and SDK at the beginning of the workshop.

When installing the tools and SDK, it’s important to make sure all the prerequisites are installed (they’re listed on the download page). The biggest roadblock is typically IIS7 – which basically rules out Windows XP and similar pre-IIS7 operating systems. IIS7 also needs to be enabled (by default, it isn’t), which can be verified by going into Control Panel / Programs and Features.

The first time you hit F5 on an Azure project, development storage and the development fabric are initialized, so this is typically the second hurdle to cross. Development storage relies on SQL Server to house the data for the local development storage simulation. If you have SQL Express installed, this should just work out of the box. If you have SQL Server Standard (or another edition), or a non-default instance of SQL Server, you’ll likely receive an error to the effect of “unable to initialize development storage.”

The Azure SDK includes a tool called DSINIT that can be used to configure development storage for these cases. Using the DSINIT tool, you can configure development storage to use a default or named instance of SQL Server.

With these steps complete, you should be up and running!
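For reference, pointing development storage at a particular SQL Server instance looks something like this (run from the Azure SDK’s bin folder; the instance name is illustrative, and the exact switches can vary between SDK versions, so check the documentation for yours):

```shell
rem Configure development storage against a named SQL Server instance
DSInit /sqlinstance:MyNamedInstance

rem Or, for a default (non-Express) instance of SQL Server, use "."
DSInit /sqlinstance:.
```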

Windows Azure Guest OS

In a Windows Azure project, you can specify the Guest OS version for your VM. This is done by setting the osVersion property inside the ServiceConfiguration file. If you don’t specify a version explicitly, the latest Guest OS is chosen for you. For production applications, it’s probably best to always provide an explicit value, and I have a real-world lesson that demonstrates this!

MSDN currently has a Guest OS Version and SDK Compatibility Matrix page that is extremely helpful if you’re trying to figure out which versions offer what features. I recently ran into a problem when examining some of my performance counters – they were all zero (and shouldn’t have been)! Curious to find out why, I did some digging (which means someone internal told me what I was doing wrong).

In short, I had specified a performance counter to monitor like so: "\ASP.NET Applications(__Total__)\Requests/Sec". This had worked fine, but when I next redeployed some weeks later, the new Guest OS (with .NET Fx 4.0) took this to mean the 4.0 runtime’s Requests/Sec, because I didn’t specify a version. So I was getting zero requests/sec, because my app was running on the earlier runtime. The fix was changing the performance counter to "\ASP.NET Apps v2.0.50727(__Total__)\Requests/Sec".

For more information, check out this article on MSDN. And thanks to the guys in the forum for getting me up and running so quickly!
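For reference, the osVersion setting is an attribute on the ServiceConfiguration root element. A minimal sketch (the service name, role name, and OS version string below are placeholders – check the MSDN compatibility matrix for the values valid at deployment time):

```xml
<?xml version="1.0"?>
<!-- osVersion pins the Guest OS; omit it and the latest is chosen for you -->
<ServiceConfiguration serviceName="MyService"
    osVersion="WA-GUEST-OS-1.8_201012-01"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="WebRole1">
    <Instances count="1" />
    <ConfigurationSettings />
  </Role>
</ServiceConfiguration>
```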

Scaling Down – Text Version

I caught some flak this weekend at the Charlotte Code Camp when Justin realized my recent Scale Down with Windows Azure post was principally a screencast (aside from the code sample). So Justin, I’m documenting the screencast just for you! :)

First, a good place to start with this concept is Neil Kidd’s blog post. Go ahead and read that now … I’ll wait. Most of this code is based off of his original sample; I’ve modified a few things and brought it forward to work with the latest SDK.

In a nutshell, a typical worker role template contains a Run() method in which we implement the logic of our worker role. In many cases, there are multiple tasks and multiple workers. Unless the majority of the work you are doing is CPU bound (which is entirely possible, as is the case with our Azure-based distributed Folding project), the resources of the VM can be better utilized by multithreading the tasks and workers. The trick is to do this correctly, as writing multithreaded code is challenging.

In general, Parallel Extensions is likely not the right approach in this situation. There are some exceptions – for example, if you are using a 4-core (large) VM and require lots of parallel processing, PFx might be the best approach. But that’s not often the case in the cloud. Instead, we need a lightweight framework that allows us to create a number of “processors” (using quotes here to avoid confusion with a CPU), each responsible for doing its work independent of any other “processors” in the current instance. Each “processor” can run on its own thread, but the worker role itself, instead of doing the work, simply monitors the health of all of the threads and restarts them as necessary.

The implementation is not terribly complex – but if you aren’t comfortable with threading or just don’t want to reinvent the wheel, check out the base project. Feel free to add to or modify the project as necessary. Let’s step through the concepts. Download the sample project (Visual Studio 2010) here.

First, it doesn’t matter if you implement this in a web role or a worker role. A web role exposes the same Run() method that a worker role does, and using it doesn’t interfere with the operation of hosting a website – aside from the fact that there are limited resources per VM, of course.

First up is the IProcessMessages interface. This interface is simple: it says our processors need to declare how long they need per work unit, and expose a Process() method to call. Our health monitor keeps tabs on the processor, so it needs to know how long to wait before assuming the processor is hung.

A simple processor is then very easy to create. We just implement the IProcessMessages interface, and code whatever logic we need our worker to do inside the Process() method. If we specify that a processor needs only 20 seconds per work unit, the health monitor will restart the worker in the event it doesn’t see progress after 20 seconds elapse. (A SyncRoot isn’t needed unless you need to do some locking.)

So far, pretty simple. Our processor doesn’t need to be aware of threading, or of handling/restarting itself. The ProcessorRecord class does this for us. It won’t do the actual monitoring, but rather implements the nuts and bolts of starting the thread for the current processor. When the ProcessorRecord class is told to start, it calls a single Run() method, passing in the processor. This method will essentially run forever, calling Process() each iteration. Since we’re not getting notified of work, each processor is essentially polling for work. Because of this, a traditional implementation is: if there is work to do, keep calling Process() as frequently as possible, but if there’s no work to do, sleep for some amount of time. The current implementation is simple – it doesn’t do exponential back-off when there’s no work to do; it just sleeps for the amount of time specified in the ProcessorRecord.

That leaves us with one more task: defining our processors in the web/worker role Run() method. The nice thing about this approach is that it’s quite easy to add multiple instances to scale up or down as needed. In the sample, we create 2 processors of the same type, giving them different names (helpful for the log files), the same thread priority, and a sleep time of 5 seconds per iteration if there’s no work to do. In the Run() method, instead of doing any work, we just monitor the health of all the processors. Remember, the Run() method shouldn’t exit under normal conditions, because exiting will cause the role to recycle.

It may look complicated, but it’s pretty simple. Each iteration, we look at each processor. The timeout is calculated based on the last known “thread test” (when the thread was last known to be alive and well), plus any process time or sleep time adjustments. If that time is exceeded, a warning is written to the log file and the processor is reset. Worldmaps has been using this approach for about 6 months now, and it’s been flawless.

Is this the most robust and complete framework for multithreading worker roles? No. It’s a prototype – a good starting place for a more robust solution. But the pattern you see here is the right starting point: the role instance itself knows what processors it wants, but doesn’t concern itself with their implementation or threading details. Each ProcessorRecord executes its processor and implements the threading logic, without regard to the other processors or the host. Each processor doesn’t care about threading, other processors, or the host; it just does its work. This separation of concerns makes it easy to expand or modify this concept as the application changes.

If you’re trying to get more performance out of your workers, try this approach and let me know if you have any comments.
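Since the original C# snippets were screenshots, here’s a compressed sketch of the same pattern as a postscript (Python for brevity; the names mirror the post’s IProcessMessages / ProcessorRecord, but the exact signatures are illustrative, not the sample project’s API):

```python
import threading
import time

class Processor:
    """Plays the role of IProcessMessages: knows its work and its time budget."""
    process_timeout = 20.0  # seconds the monitor allows per work unit

    def process(self):
        """Do one unit of work; return False if there was nothing to do."""
        raise NotImplementedError

class ProcessorRecord:
    """Runs one Processor on its own thread and tracks its health."""

    def __init__(self, processor, name, sleep_time=5.0):
        self.processor = processor
        self.name = name                # helpful for the log files
        self.sleep_time = sleep_time    # idle wait when there's no work
        self.last_alive = time.time()   # the "thread test" timestamp
        self.thread = None

    def start(self):
        self.thread = threading.Thread(target=self._run, daemon=True)
        self.thread.start()

    def _run(self):
        # Poll forever: work when there is work, sleep when there isn't.
        while True:
            self.last_alive = time.time()
            if not self.processor.process():
                time.sleep(self.sleep_time)

    def is_hung(self):
        # Timeout = last known alive time, plus the process budget and
        # sleep allowance; past that, assume the thread is stuck.
        allowed = self.processor.process_timeout + self.sleep_time
        return time.time() - self.last_alive > allowed

def monitor(records, iterations, poll=1.0):
    """The role's Run() method: never works, only watches and restarts."""
    for rec in records:
        rec.start()
    for _ in range(iterations):
        for rec in records:
            if rec.is_hung():
                print(f"warning: processor {rec.name} hung; restarting")
                rec.start()  # abandon the stuck thread, spin up a fresh one
        time.sleep(poll)
```

In a real role, monitor() would loop forever inside Run() and the warning would go to Azure diagnostics rather than stdout, but the separation of concerns is the same: processors only process, records own the threads, and the role only watches.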

Scaling Down with Windows Azure

A while back, Neil Kidd created a great blog post on scaling down in Windows Azure. The concept is to multithread your Azure worker roles (and web roles, too) – even if you have a single-core (small instance) VM, most workloads are network IO bound, not CPU bound, so creating a lightweight framework for supporting multiple workers in a single role not only saves money, but makes sense architecturally. In this screencast, I update Neil’s concept a bit and bring it forward to work with the latest Azure SDK.

Download the sample project (Visual Studio 2010) used in this screencast here.

Book Review: Azure in Action

Most of the people who know me know I’ve invested a lot of my time in Windows Azure. “I’m all in.” :) Last year, we began doing presentations on Azure after it was announced at PDC 2008. Over the months, from SDS to SQL Azure, PDC 2008 to PDC 2009, the platform evolved and, of course, is now under general availability. I went through a lot of trials and tribulations as I migrated Worldmaps and created the @home with Windows Azure applications. Learning any new platform takes a leap of faith. It requires an investment of time, and belief in the future of the technology. Over the coming years, the cloud will become increasingly relevant for both companies and developers, and, in the case of Windows Azure, understanding how to get up to speed quickly and efficiently is critically important.

A few weeks ago, I found out that a colleague of mine, Brian Prince, has coauthored a book (with Chris Hay) entitled Azure in Action. I was able to get an advance copy, and spent some time over the past couple of weeks reading through it. I have to say, I am pretty impressed … I just wish I’d been able to read this before I touched the Azure platform.

The book includes a number of code samples; however, it will not teach you ASP.NET (as you might expect). Rather, it details the platform, and how to take advantage of everything from diagnostics to remote management, and of course the Azure storage options. It’s really a great resource for getting up to speed on the cloud quickly and understanding the various offerings. It also goes into the Windows Azure AppFabric with a few code samples as well, which is nice to see. AppFabric (both the Server and Azure versions) could fill its own book, but it’s nice to get a taste of what’s available.

If you’re at the point where you’re considering Azure, would just like to learn the platform, or want a good reference for what features are in the platform, it’s definitely a good read.

Register To Attend A Windows Azure Virtual Hands-On Workshop

@HOME WITH WINDOWS AZURE

I'm really excited to announce a project my colleagues Jim, John and I have been working on.  We wanted to come up with a project that would: 1) be fun for users learning Azure, 2) help illustrate scale, 3) do something useful, and 4) be fun to develop (from our end).  I think we got it!  Here's a rundown:

Elevate your skills with Windows Azure in this hands-on workshop! In this event we'll guide you through the process of building and deploying a large-scale Azure application. Forget about "hello world"! In less than two hours we'll build and deploy a real cloud app that leverages the Azure data center and helps make a difference in the world. Yes, in addition to building an application that will leave you with a rock-solid understanding of the Azure platform, the solution you deploy will contribute back to Stanford's Folding@home distributed computing project. There's no cost to participate in this session. Visit the project home page for more.

For this briefing you will:

- Receive a temporary, self-expiring, full-access account to work with Azure for a period of two weeks at no cost. Accounts will be emailed to all registered attendees 24-48 hours in advance of each event.
- Build and deploy a real cloud app that leverages the Azure data center.

Who should attend? Open to developers with an interest in exploring Windows Azure through a short, hands-on workshop.

AGENDA

15 min: WELCOME and STUDENT PREP
The goal of today's event is to help attendees build a local instance of a Windows Azure application and deploy it to an Azure data center. So, are you ready to participate in this hands-on workshop? Did you review the prerequisites*? We hope so, but just in case you didn't, we'll take a few minutes to review them with you now so you're ready to begin building your app.
15 min: AZURE 101
To make sure everyone starts off with a common understanding of Microsoft's cloud computing platform, we'll cover the basic concepts for all attendees new to Azure. We'll then provide an overview of the project, what "folding" is, and how the application is modeled.

75 min: HANDS-ON WORKSHOP
We'll guide you through creating a Windows Azure cloud application in Visual Studio, leveraging both web roles (as a front end for your application) and worker roles (to carry out the core processing).  Your application will make use of Azure Table Storage as well as Azure local storage for reading and writing files.  Finally, we'll show you how to deploy your application to the cloud (using accounts provided by Microsoft) and illustrate how to use Windows Azure Diagnostics to monitor the health of the application.

15 min: NEXT STEPS and WRAP-UP
You've got two weeks of no-cost access to Windows Azure before your account expires. Where do you turn next? How can you learn more? In this segment we'll review a host of online training resources available to you today. And we'll explain Microsoft's Azure offerings for MSDN subscribers, partners, and customers. For instance, did you know an MSDN Premium subscriber receives 6000 hours of Azure compute time at no additional cost? We'll cover this and more to make sure you leave with the knowledge necessary to take Azure to the next level.

*PREREQUISITES
The prerequisites are straightforward, and we ask that you come prepared to participate by installing the required software in advance of the Live Meeting event:

- Visual Studio 2008 or Visual Studio 2010
- Azure Tools for Visual Studio, Feb 2010

REGISTER TODAY - 9 events to choose from!
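The web-role/worker-role split in the lab above can be sketched in miniature (the real lab uses C#, the Azure SDK, and Azure Table Storage; the in-memory "table" and all class names below are illustrative assumptions, not the workshop's code). The front end records work units as rows; workers pick up pending rows and mark them complete; the front end reads the same table to report health and progress:

```python
class TableStorage:
    """Illustrative stand-in for an Azure Table Storage table of work units."""
    def __init__(self):
        self.rows = {}

    def upsert(self, key, entity):
        self.rows[key] = entity

    def query(self, **filters):
        # Return all entities whose properties match the given filters.
        return [e for e in self.rows.values()
                if all(e.get(k) == v for k, v in filters.items())]

class WebRole:
    """Front end: accepts work units and reports progress to visitors."""
    def __init__(self, table):
        self.table = table

    def submit(self, unit_id):
        self.table.upsert(unit_id, {"id": unit_id, "status": "pending"})

    def progress(self):
        done = len(self.table.query(status="complete"))
        return done, len(self.table.rows)

class WorkerRole:
    """Back end: picks up pending units and performs the core processing."""
    def __init__(self, table):
        self.table = table

    def run_once(self):
        for entity in self.table.query(status="pending"):
            entity["status"] = "complete"   # the real role would fold proteins here
            self.table.upsert(entity["id"], entity)
```

Because both roles only share state through the table, each can be scaled out independently, which is the property the workshop uses to illustrate scale.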
Register by phone or online: click on the Event ID to register today, or call 877-673-8368 and reference the Event ID below.

Wednesday, April 28, 11:00 AM - 01:00 PM: 1032450746
Tuesday, May 04, 07:00 PM - 09:00 PM: 1032450869
Wednesday, May 12, 11:00 AM - 01:00 PM: 1032450870
Wednesday, May 19, 04:00 PM - 06:00 PM: 1032450871
Wednesday, May 26, 11:00 AM - 01:00 PM: 1032450872
Tuesday, June 01, 11:00 AM - 01:00 PM: 1032450876
Wednesday, June 09, 04:00 PM - 06:00 PM: 1032450881
Wednesday, June 16, 11:00 AM - 01:00 PM: 1032450882
Wednesday, June 23, 07:00 PM - 09:00 PM: 1032450883

Presenters: Brian Hitney, Developer Evangelist, Microsoft; Jim O'Neil, Developer Evangelist, Microsoft; John McClelland, Partner Evangelist, Microsoft
