Marquee Lives!

I’m helping to organize some East Region Azure Boot Camps (www.azurebootcamp.com) – stay tuned for more info! – and had a humorous moment while surfing the various registration pages we have in place. Our Click To Attend registration site is using … I can’t believe it … a marquee tag!  See for yourself, though it’s short-lived since it’s only there for maintenance.  To be clear, this is 100% Click To Attend and has nothing to do with the event on the page.  The event, by the way, is the Ft. Lauderdale Azure Boot Camp – an awesome full-day event (cramming 2 days into 1!).  If you want to learn more about Azure and are in the Ft. Lauderdale area, be sure to check it out! More ABCs are coming to Atlanta, Charlotte, and other areas soon!

Slides from Azure Roadshow

I’ve had a number of requests for the slides and resources from the recent Azure roadshow in NC and FL – here they are.  The slides cover sessions 2 and 3: slides.zip

Worldmaps application: http://www.myworldmaps.net
Stumbler application (shown during breaks): http://www.myworldmaps.net/stumbler
SETI @ Home: http://setiathome.ssl.berkeley.edu/
Folding @ Home: http://folding.stanford.edu/

Azure Miniseries #4: Monitoring Applications

In this screencast, we'll take a look at monitoring Azure applications by capturing event logs and performance counters. We'll also look at using PowerShell to deploy and configure applications via the management API. Finally, we'll take a sneak peek at Azure Diagnostics Manager, a tool from Cerebrata that allows you to explore event logs and view performance counters visually. Here are some links from the screencast:

Windows Azure Cmdlets
Cerebrata (Cloud Storage Studio and Azure Diagnostics Manager)

Now let’s get into some code snippets. Watch for wrapping on the PowerShell lines, and note that the backtick (`) character is the line continuation. Creating a self-signed certificate:
makecert -r -pe -a sha1 -n "CN=Azure Service Test" -ss My -len 2048 -sp "Microsoft Enhanced RSA and AES Cryptographic Provider" -sy 24 AzureServiceTest.cer
Uploading a new deployment to the staging slot, and starting it (requires Azure CmdLets):
$cert = Get-Item cert:\CurrentUser\My\{thumbprint}
$sub = "{subscription GUID}"
$servicename = "{service name}"
$package = "CloudApp.cspkg"
$config = "ServiceConfiguration.cscfg"

[DateTime]$datelabel = Get-Date
$lbl = $datelabel.ToString("MM-dd-yyyy-HH:mm")
Write-Host "Label for deployment: " $lbl

Add-PSSnapin AzureManagementToolsSnapIn

Get-HostedService $servicename -Certificate $cert -SubscriptionId $sub |
    New-Deployment -Slot Staging $package $config -Label $lbl |
    Get-OperationStatus -WaitToComplete

Get-Deployment staging -serviceName $servicename -SubscriptionId $sub -Certificate $cert |
    Set-DeploymentStatus running |
    Get-OperationStatus -WaitToComplete
Increasing the number of instances:
Add-PSSnapin AzureManagementToolsSnapIn

$cert = Get-Item cert:\CurrentUser\My\{thumbprint}
$sub = "{subscription GUID}"
$servicename = "{service name}"
$storage = "{storage name}"

# get storage account name and key
$key = (Get-StorageKeys -ServiceName $storage -Certificate $cert -SubscriptionId $sub).Primary
$deployId = (Get-HostedService $servicename -SubscriptionId $sub -Certificate $cert |
    Get-Deployment Production).DeploymentId

Get-HostedService $servicename -Certificate $cert -SubscriptionId $sub |
    Get-Deployment -Slot Staging |
    Set-DeploymentConfiguration {$_.RolesConfiguration["WebRole1"].InstanceCount += 1}
Updating the performance counters – specifically, adding total processor time, ASP.NET requests/sec, and available memory, polled every 30 seconds and transferred every minute:
Add-PSSnapin AzureManagementToolsSnapIn

$cert = Get-Item cert:\CurrentUser\My\{thumbprint}
$sub = "{subscription GUID}"
$servicename = "{service name}"
$storage = "{storage name}"

# get storage account name and key
$key = (Get-StorageKeys -ServiceName $storage -Certificate $cert -SubscriptionId $sub).Primary
$deployId = (Get-HostedService $servicename -SubscriptionId $sub -Certificate $cert |
    Get-Deployment Production).DeploymentId

# rate at which counters are polled
$rate = [TimeSpan]::FromSeconds(30)

Get-DiagnosticAwareRoles -StorageAccountName $storage -StorageAccountKey $key -DeploymentId $deployId |
foreach {
    $role = $_
    write-host $role

    Get-DiagnosticAwareRoleInstances $role -DeploymentId $deployId `
        -StorageAccountName $storage -StorageAccountKey $key |
    foreach {
        $instance = $_
        $config = Get-DiagnosticConfiguration -RoleName $role -InstanceId $_ -StorageAccountName $storage `
            -StorageAccountKey $key -BufferName PerformanceCounters -DeploymentId $deployId

        $processorCounter = New-Object Microsoft.WindowsAzure.Diagnostics.PerformanceCounterConfiguration `
            -Property @{CounterSpecifier='\Processor(_Total)\% Processor Time'; SampleRate=$rate }
        $memoryCounter = New-Object Microsoft.WindowsAzure.Diagnostics.PerformanceCounterConfiguration `
            -Property @{CounterSpecifier='\Memory\Available Mbytes'; SampleRate=$rate }
        $requestsCounter = New-Object Microsoft.WindowsAzure.Diagnostics.PerformanceCounterConfiguration `
            -Property @{CounterSpecifier='\ASP.NET Applications(__Total__)\Requests/Sec'; SampleRate=$rate }

        $config.DataSources.Clear()
        $config.DataSources.Add($processorCounter)
        $config.DataSources.Add($memoryCounter)
        $config.DataSources.Add($requestsCounter)

        Set-PerformanceCounter -PerformanceCounters $config.DataSources -RoleName $role `
            -InstanceId $instance -DeploymentId $deployId `
            -TransferPeriod 1 `
            -StorageAccountName $storage -StorageAccountKey $key
    }
}
And finally, the webrole.cs class from the screencast:
using System;
using System.Linq;
using Microsoft.WindowsAzure.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        DiagnosticMonitorConfiguration diagConfig =
            DiagnosticMonitor.GetDefaultInitialConfiguration();

        diagConfig.PerformanceCounters.DataSources.Add(
            new PerformanceCounterConfiguration()
            {
                CounterSpecifier = @"\Processor(_Total)\% Processor Time",
                SampleRate = TimeSpan.FromSeconds(5)
            });

        diagConfig.PerformanceCounters.DataSources.Add(
            new PerformanceCounterConfiguration()
            {
                CounterSpecifier = @"\Memory\Available Mbytes",
                SampleRate = TimeSpan.FromSeconds(5)
            });

        diagConfig.PerformanceCounters.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);

        diagConfig.Logs.ScheduledTransferLogLevelFilter = LogLevel.Information;
        diagConfig.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);

        DiagnosticMonitor.Start("DiagnosticsConnectionString", diagConfig);
        System.Diagnostics.Trace.TraceInformation("Done configuring diagnostics.");

        // For information on handling configuration changes
        // see the MSDN topic at http://go.microsoft.com/fwlink/?LinkId=166357.
        RoleEnvironment.Changing += RoleEnvironmentChanging;

        return base.OnStart();
    }

    public override void OnStop()
    {
        System.Diagnostics.Trace.TraceWarning("OnStop called.");
        base.OnStop();
    }

    private void RoleEnvironmentChanging(object sender, RoleEnvironmentChangingEventArgs e)
    {
        // If a configuration setting is changing
        if (e.Changes.Any(change => change is RoleEnvironmentConfigurationSettingChange))
        {
            // Set e.Cancel to true to restart this role instance
            e.Cancel = true;
        }
    }
}

Azure Miniseries #3: ServiceConfig vs web.config

One of the challenges developers face when building Windows Azure web applications is: where do I put my settings – in the ServiceConfiguration file or in the web.config? There isn’t one correct answer.  The problem with keeping everything in the web.config is that it makes changes and deployment much more difficult: because the web.config is part of the deployment package, any change to the file requires a redeployment.  If you use a build system that targets your dev/stage/QA/prod environments automatically and can provide the correct settings in the web.config for you, this might mitigate the problem.  Otherwise, the answer is to migrate these settings to the ServiceConfiguration file, which can be changed without touching the deployment package.  In this screencast I’ll show you some strategies for doing that with components that are more difficult to migrate, like the SqlMembershipProvider… Link to original post and download links.
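To make the pattern concrete, here’s a minimal sketch (not from the screencast) of a helper that prefers the ServiceConfiguration value when running under the Azure fabric and falls back to web.config appSettings otherwise – the "SmtpServer" setting name is just a placeholder:

using System.Configuration;
using Microsoft.WindowsAzure.ServiceRuntime;

public static class Config
{
    // Prefer ServiceConfiguration.cscfg when running in the dev fabric or the
    // cloud; otherwise fall back to web.config appSettings. Note that
    // GetConfigurationSettingValue throws if the setting isn't declared in the
    // service definition (.csdef).
    public static string GetSetting(string name)
    {
        if (RoleEnvironment.IsAvailable)
        {
            return RoleEnvironment.GetConfigurationSettingValue(name);
        }
        return ConfigurationManager.AppSettings[name];
    }
}

// Usage: string smtpServer = Config.GetSetting("SmtpServer");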

Azure Miniseries #2: Deployment

In my first Azure Miniseries post, I showed setting up a new cloud service project and migrating an existing ASP.NET application into Azure.   Before I dive into other topics, I figured I’d jump to the end and discuss deployment – getting your Azure application into the cloud.   Link to original post with download links.

SQL Azure Logins

SQL Azure currently has fairly limited management capabilities.  When you create a database, you receive an administrator account that is tied to your login (you can change the SQL Azure password, though).  Because there is no GUI for user management, there’s a temptation to use this account in all your applications, but I highly recommend you create users for your applications that have limited access.  If your application only calls stored procedures, the user needs execute permissions on them.  Assuming you want your connection to have execute permissions on all stored procedures, I recommend creating a new role that has execute permissions – that way, you can simply add users to the role, and as you add more stored procedures, it just works.  To create this role:

CREATE ROLE db_executor
GRANT EXECUTE TO db_executor

Now, in the master database (currently you need to do this in a separate connection – just saying ‘use master’ won’t work), create your login for the database:

CREATE LOGIN MyUserName WITH PASSWORD = 'Password';

In your application database, create a user – in this case, we’ll just create a user with the same name as the login:

CREATE USER MyUserName FOR LOGIN MyUserName;

Next, add the appropriate roles.  Depending on your needs, you may only need db_datareader; I recommend db_owner only if necessary.

-- read/write/execute permissions
EXEC sp_addrolemember N'db_datareader', N'MyUserName'
EXEC sp_addrolemember N'db_datawriter', N'MyUserName'
EXEC sp_addrolemember N'db_executor', N'MyUserName'

-- only if you need dbo access:
EXEC sp_addrolemember N'db_owner', N'MyUserName'

You can continue to customize as necessary, as long as you are familiar with the appropriate T-SQL.
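As a quick aside (not in the original post), here’s roughly what using the new limited login from an application might look like – the server name, database name, password, and stored procedure are placeholders, and note that SQL Azure expects the user in user@server form:

using System.Data.SqlClient;

class LimitedLoginDemo
{
    static void Main()
    {
        // Connect as the limited user rather than the administrative account.
        // Replace {server} and MyAppDb with your SQL Azure server and database.
        string connectionString =
            "Server=tcp:{server}.database.windows.net;Database=MyAppDb;" +
            "User ID=MyUserName@{server};Password=Password;Encrypt=True;";

        using (SqlConnection conn = new SqlConnection(connectionString))
        {
            conn.Open();

            // The db_executor role created above allows calling any stored procedure.
            using (SqlCommand cmd = new SqlCommand("dbo.MyStoredProc", conn))
            {
                cmd.CommandType = System.Data.CommandType.StoredProcedure;
                cmd.ExecuteNonQuery();
            }
        }
    }
}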

WCF in an Azure WorkerRole

The other day, a colleague got in touch with me looking for help getting a WCF service working in an Azure WorkerRole.  It would work locally, but not when deployed to the cloud.  This is a common problem I’ve run into – for example, calling Open() on a ServiceHost will work locally but not in the cloud due to permissions. I wasn’t much help in getting John’s situation resolved, but he pinged me a couple of days later with the solution.  The first fix is to make sure your service has the correct behavior so it can respond to any address:

[ServiceBehavior(AddressFilterMode = AddressFilterMode.Any)]

The second is to explicitly set the SecurityMode on the binding:

NetTcpBinding binding = new NetTcpBinding(SecurityMode.None);

Webroles are different, as they are hosted in IIS and limited to HTTP. Also, there are some good demos mentioned in this post on the MSDN forums, which points to the Azure All-In-One demos on CodePlex.
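John’s actual code isn’t shown here, but a minimal sketch of the overall pattern might look like the following – it assumes a TCP input endpoint named "TcpIn" is declared in the service definition, and the contract and service names are placeholders:

using System;
using System.Net;
using System.ServiceModel;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

[ServiceContract]
public interface IEchoService
{
    [OperationContract]
    string Echo(string text);
}

// AddressFilterMode.Any lets the service accept messages addressed to the
// load balancer's address rather than the instance's internal IP.
[ServiceBehavior(AddressFilterMode = AddressFilterMode.Any)]
public class EchoService : IEchoService
{
    public string Echo(string text) { return text; }
}

public class WorkerRole : RoleEntryPoint
{
    public override void Run()
    {
        // Listen on the IP and port the fabric assigned to the "TcpIn" endpoint.
        IPEndPoint endpoint =
            RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["TcpIn"].IPEndpoint;
        Uri address = new Uri(string.Format("net.tcp://{0}:{1}/echo",
            endpoint.Address, endpoint.Port));

        ServiceHost host = new ServiceHost(typeof(EchoService));

        // Explicitly set SecurityMode.None, per the fix described above.
        NetTcpBinding binding = new NetTcpBinding(SecurityMode.None);
        host.AddServiceEndpoint(typeof(IEchoService), binding, address);
        host.Open();

        // Keep the role alive while the service handles requests.
        Thread.Sleep(Timeout.Infinite);
    }
}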

Azure Miniseries #1: Migration

I’m starting to put together some short-form screencasts on Windows Azure related topics.  I’ll use my blog to dive into specifics or share code samples/downloads where appropriate – but first up is a quick look at getting a project set up and migrating existing applications into an Azure webrole.

Azure SLA Confusion

The Azure SLA is something that gets discussed quite a bit, but there’s one point I see causing confusion.  The SLA for Azure compute instances states: For compute, we guarantee that when you deploy two or more role instances in different fault and upgrade domains, your internet facing roles will have external connectivity at least 99.95% of the time.

Some folks (for example, this post) incorrectly conclude that you need to deploy your solution across two or more datacenters to get this SLA.  That’s not true – you just need to make sure your instances are in different fault and upgrade domains, which is typically the case by default.  You can think of a fault domain as a physical separation into a different rack: if there’s a hardware failure on a server or switch, it only affects instances within the same fault domain.  Upgrade domains are logical groupings that control how deployments are upgraded.  For large deployments, you may have multiple upgrade domains so that all roles within an upgrade domain are upgraded as a group.

To illustrate this, I spun up 3 instances of Worldmaps running on my local Dev Fabric.  I have an admin tool in the site that shows all current instances, their role, and their domain affiliation.  The admin page uses the RoleEnvironment class to check the status of the roles (more on this in another post), but it also displays their fault and upgrade domains.  (A value of “0f” is fault domain 0, “0u” is upgrade domain 0, and so on.)  So by default, my three instances are in separate fault and upgrade domains that correspond to their instance number.

All of these instances are in the same datacenter, and as long as I have at least 2 instances in different fault and upgrade domains (the default behavior), I’m covered by the SLA.  The principal advantage of keeping everything within the same datacenter is cost savings between roles, storage, and SQL Azure.  Essentially, any bandwidth within the datacenter (for example, my webrole talking to SQL Azure or Azure Storage) incurs no bandwidth cost.  If I move one of my roles to another datacenter, traffic between datacenters is charged.  Note, however, that there are still transaction costs for Azure Storage.

This last fact brings up an interesting and potentially beneficial side effect.  While I’m not trying to get into the scalability differences between Azure Table Storage and SQL Azure, strictly from a cost perspective it can be far more advantageous to go with SQL Azure in some cases.  As I mentioned in my last post, Azure Storage transaction costs might creep up and surprise you if you aren’t doing your math.  If you’re using Azure Table Storage for session and authentication information and have a medium-volume site (say, fewer than 10 webroles – that’s just my off-the-cuff number; it really depends on what your applications are doing), SQL Azure represents a fixed cost, whereas Table Storage costs vary with traffic to your site.

For example, a small SQL Azure instance at $9.99/month is about $0.33/day.  Azure Table transactions are $0.01 per 10,000.  If each hit to your site made only one transaction to storage, you could have 330,000 hits per day before reaching the same cost.  Any more, and SQL Azure becomes more attractive, albeit with less scalability.  In many cases you wouldn’t need to go to table storage for every hit, but then again, you might make several transactions per hit, depending on what you’re doing.
This is why profiling your application is important. More soon!
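As an aside, the admin page mentioned above isn’t included in this post, but a rough sketch of how it could list each instance’s fault and upgrade domains with the RoleEnvironment class might look like this (the output mirrors the “0f”/“0u” notation; visibility of other roles’ instances can depend on how internal endpoints are configured):

using System;
using System.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

public static class DomainReport
{
    // Write each instance's fault and update (upgrade) domain, e.g. "...IN_0: 0f / 0u".
    public static void Dump()
    {
        foreach (Role role in RoleEnvironment.Roles.Values)
        {
            foreach (RoleInstance instance in role.Instances)
            {
                Trace.WriteLine(string.Format("{0}: {1}f / {2}u",
                    instance.Id, instance.FaultDomain, instance.UpdateDomain));
            }
        }
    }
}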

Thoughts on Windows Azure Pricing…

There are a LOT of posts out there talking about Azure pricing.  There’s the Azure TCO Calculator, and some good practices scattered out there that demystify things.  Some of these bear repeating here, but I also wanted to take you through my math on expenses – how you design your app can have serious consequences on your pricing.  So let’s get the basic pricing out of the way first (just for the Azure platform, not AppFabric or SQL Azure):

Compute = $0.12 / hour
Storage = $0.15 / GB stored / month
Storage transactions = $0.01 / 10K
Data transfers = $0.10 in / $0.15 out / GB ($0.30 in / $0.45 out / GB in Asia)

Myth #1:  If I shut down my application, I won’t be charged.
Fact:  You will be charged for all deployed applications, even if they aren’t running.  This is because the resources are allocated on deployment, not when the app is started.  Therefore, always remove deployments that aren’t running (unless you have a good reason to keep them there).

Myth #2:  If my application is less CPU-intensive or idle, I will be charged less.
Fact:  For compute hours, you are charged the same whether your app is at 100% CPU or idle.  There’s some confusion (and I was surprised by this, too) because Azure and cloud provisioning are often referred to as “consumption based” and (in this case, incorrectly) compared to a utility like electricity.  A better analogy is a hotel room: an Azure deployment reserves a set of resources, and like the hotel room, whether you use it or not doesn’t change the rate.  On the plus side, compute hours are fairly easy to calculate: it’s the number of instances across all of your roles * $0.12 for small VM instances.  A medium instance (2 cores) is $0.24, and so on.

Myth #3:  There’s no difference between a single medium instance and two small instances.
Fact:  While there is no difference in compute price, there is a significant difference in that the two small instances offer better redundancy and scalability.  It’s the difference between scaling up and scaling out.  The ideal scenario is an application that can add additional instances on demand, but the reality is that applications need to be written to support this.

In Azure, requests are load balanced across all instances of a given webrole.  This complicates session and state management.  Many organizations use what is called sticky persistence or sticky sessions when implementing their own load balancing: when a user visits a site, they continue to hit the same server for their entire session.  The downside of this approach is that should the server go down, the user is redirected to another server and loses all state information.  It’s a viable solution in many scenarios, but not one that Azure load balancing offers.

Scaling up is done by increasing your VM size to medium (2 cores), large (4 cores), or XL (8 cores), with more RAM allocated at each level.  The single instance becomes much more powerful, but you’re limited by the hardware of a single machine.

In Azure, the machine keys are synchronized among instances, so there is no problem with cookies and authentication tokens, such as those in the ASP.NET membership providers.  If you need session state, this is where things get more complicated.  I will probably get zinged for saying this, but there is currently no good Azure-based session management solution.  The ASP providers contained in the SDK do have a Table Storage session state demo, but the performance isn’t ideal.
There are a few other solutions out there, but currently the best bet is to not rely on session state and instead use cookies whenever possible.

Now, having said all this, the original purpose of the post: I wanted to make sure folks understood transaction costs with Azure Storage.  Any time your application so much as thinks about Storage, it’s a transaction.  Let’s use my Worldmaps site as an example.  This is not how it works today, but it very easily could have been.  A user visits a blog that pulls an image from Worldmaps.  Let’s follow that through:

Step 1: User’s browser requests image. (no storage transaction)
Step 2: Worker role checks queue (empty). – transaction 1
Step 3: If it’s the first hit for a map (not in cache), stats/data are pulled from Storage. – transaction 2
Step 4: Application enqueues the hit to an Azure Queue. – transaction 3
Step 5: Application redirects the user to Blob Storage for the map file. – transaction 4
Step 6: Worker dequeues the hit. – transaction 5
Step 7: Worker deletes the message from the queue. – transaction 6

While step 3 happens only on the first hit for a given map, there are other transactions going on behind the scenes, and if you are using the Table Storage session state provider … well, that’s another transaction per hit (possibly two, if session data is changed and needs to be written back to storage).

If Worldmaps does 200,000 map hits per day (not beyond the realm of possibility, but currently a bit high), then 200,000 * 6 = 1,200,000 storage transactions.  Transactions are billed at $0.01 per 10,000, so that’s 120 “units” or $1.20 per day.  Multiply that by 30 days, and that’s about $36/month for storage transactions alone – not counting bandwidth or compute time.  I realized this early on, and as a result I significantly changed the way the application works.

Tips to save money:

If you don’t need durability, don’t use Azure queues.  Worldmaps switches between in-memory queues and Azure queues based on load, configuration, and task.  Since queue operations are REST calls, you could also make a WCF call directly to another role.
Consider scaling down worker roles by multithreading, particularly for IO-heavy roles.  Also, a webrole’s Run method (not implemented by default) simply calls Thread.Sleep(-1), so why not override it to do processing?  More on this soon…
SQL Azure may be cheaper, depending on what you’re doing – and potentially faster because of connection pooling.
If you aren’t interested in the CDN, use Azure Storage only for dynamic content.
Don’t forget about LocalStorage.  While it’s volatile, you can use it as a cache to serve items from the role instead of from storage.
Nifty backoff algorithms are great, but remember they only save transaction costs; they won’t affect the compute charge.
Take advantage of the many programs out there, such as hours included in MSDN subscriptions, etc.

Next up will be some tips on scaling down and maximizing the compute power of each instance.
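As a footnote to the math above, here’s a tiny back-of-the-envelope sketch for rerunning the transaction-cost numbers with your own traffic (the rate is the price quoted earlier in this post, and the hit/transaction counts are just the example values, not measurements):

using System;

class StorageTransactionCost
{
    static void Main()
    {
        double hitsPerDay = 200000;        // example traffic from the walkthrough
        double transactionsPerHit = 6;     // steps 2-7 above
        double pricePer10K = 0.01;         // $0.01 per 10,000 storage transactions

        double transactionsPerDay = hitsPerDay * transactionsPerHit;     // 1,200,000
        double costPerDay = (transactionsPerDay / 10000) * pricePer10K;  // $1.20
        double costPerMonth = costPerDay * 30;                           // ~$36

        Console.WriteLine("Transactions/day: {0:N0}", transactionsPerDay);
        Console.WriteLine("Cost/day: ${0:F2}", costPerDay);
        Console.WriteLine("Cost/month: ${0:F2}", costPerMonth);
    }
}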

