Rock, Paper, Azure: Why US Only?

Over the past few days, we've gotten many requests from those outside the United States who would like to play Rock, Paper, Azure. Some have even said, "I don't care about the prizes, I just want to play!" So what's the deal? If we could simply enable the contest worldwide, that would be awesome! But when prizes are involved, it's not a simple matter. There isn't an easy mechanism by which to say, "it's OK to enter, you just can't win." Just to provide a little transparency here: there are legal hurdles, and frankly, the team that put this together is only a few people working on it part time. Because we're in the U.S. subsidiary, our main focus (naturally) is on the U.S. – not that we want to exclude anyone, of course.

So what's next? We're working with our colleagues in other subsidiaries to enable this in more countries, if at all possible. It won't happen in week 1, but hopefully we'll get there over time. If you know your local DPE (Developer & Platform Evangelism) team, be sure to contact them and let them know you'd like to play – and if you don't know who that is, leave a comment here on my blog.

Rock, Paper, Azure Deep Dive: Part 1

If you're not sure what Rock, Paper, Azure (RPA) is all about, check out the website or look over some of my recent posts. In this series of posts, I want to go into some of the technical nuts and bolts of the project.

First, you can download Aaron's original projects on GitHub (here and here). The first project is the Compete framework, an extensible framework designed to host games like Rock, Paper, Scissors Pro! (the second project). The idea, of course, is that other games can be created to work within the framework. Aaron and the other contributors to the project (I remember Aaron telling me others had helped with various pieces, but I don't recall who did what) did a great job assembling the solution. When moving it to Windows Azure, we ran into a number of issues – the bottom line is that our core requirements were a bit different from what the original solution targeted. When I describe some of these changes in this and other posts, don't mistake that for criticism of Aaron's project. Obviously, having used it at code camps and as the basis for RPA shows I have a high regard for the concept, and the implementation, in many parts, was quite impressive.

So, if you download those two projects from GitHub, the first challenge is getting everything up and running. You'll see references to a local path in a few locations – by default, I believe this is "c:\compete". This is the local scratch folder for bots, games, the db4o database, and the log file. Getting this to work in Windows Azure was actually pretty straightforward. A Windows Azure project has several storage mechanisms, and when it comes to NTFS disk I/O you have two options: Local Storage or Azure Drives. Azure Drives are VHD files stored in Azure Blob Storage that can be mounted by a VM. For our purposes, a drive was overkill because we only needed the disk as a scratch medium: the players and results were being stored in SQL Azure. The first thing we needed to do was add a local storage resource to the role – in our case, a local storage area called compete, 4 GB in size, set to clean itself if the role recycles. The next step was to remove the hard-coded path references (in Compete.Site.Models, for example, you'll see several directory references). Because there's so much disk I/O going on, we created an AzureHelper project to help with the abstraction, with a simple GetLocalScratchFolder method that resolves the right place to put files (I'll include a rough sketch of that helper at the end of this post). We then inject that call wherever a directory is needed (about a half dozen or so places, if memory serves).

The next major decision: to Spark, or not to Spark? If you look at the project references (and in the views themselves, of course), you'll see the Spark view engine is used. I'm no expert on Spark, but having worked with it some, I grew to like its simplicity. The problem is that getting Spark to work in .NET 4.0 with MVC 2 was, at the time, difficult. That doesn't appear to be the case today, as Spark has been revived a bit on its web page, but we started a few weeks earlier (before that existed), and while we recompiled the engine and got it working, we ultimately decided to stick with what we knew best. The end result is the Bot Lab project.

While we're using RPA with the idea that it can help others learn about Azure while having fun, it's also a great example of why to use Windows Azure.
The Bot Lab project is around 1 MB in size, and the Bot Lab itself can be up and running in no time (open the solution, hit F5). Imagine you wanted to host an RPS-style competition at a code camp. With a deployment package, you could host it locally if you wanted, or upload it to Windows Azure – hosting an extra small instance for 6 hours at a code camp would cost about $0.30. Best of all, there's no configuration to be done (except for what the application dictates, like a username or password). This, if you ask me, is one of the greatest strengths of platform as a service.
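As promised above, here's a minimal sketch of what that scratch-folder helper can look like. The resource name ("compete") matches the local storage resource described earlier; the fallback path and the exact method shape are illustrative, not the literal code from the project.

using System.IO;
using Microsoft.WindowsAzure.ServiceRuntime;

public static class AzureHelper
{
    // Resolves the scratch folder used for bots, games, the db4o database, and logs.
    public static string GetLocalScratchFolder()
    {
        if (RoleEnvironment.IsAvailable)
        {
            // "compete" is declared as a LocalStorage resource (4 GB, clean on recycle)
            // in ServiceDefinition.csdef; RootPath points at the instance's local disk.
            return RoleEnvironment.GetLocalResource("compete").RootPath;
        }

        // Fallback for running outside the Azure fabric (e.g., plain IIS or unit tests).
        string path = Path.Combine(Path.GetTempPath(), "compete");
        Directory.CreateDirectory(path);
        return path;
    }
}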

RockPaperAzure Coding Challenge

I'm pleased to announce that we're finally launching our Rock, Paper, Azure challenge! For the past couple of months, I've been working with Jim O'Neil and Peter Laudati on a new Azure event/game called Rock, Paper, Azure. The concept is this: you (hopefully) code a "bot" that plays rock, paper, scissors against the other players in the game. Simple, right?

Here's where it gets interesting. Rock, paper, scissors by itself isn't all that interesting (after all, you can't really beat random in a computer game – assuming you can figure out a good random generator!), so there are two additional moves in the game. The first is dynamite, which beats rock, paper, and scissors. Sounds very powerful – and it is – but you only have a few per match, so you need to decide when to use them. The other move is a water balloon. The water balloon beats dynamite but loses to everything else, and you have unlimited water balloons. With these additional rules, it becomes a challenge to craft an effective strategy.

We do what we call "continuous integration" on the leaderboard – as soon as your bot enters, it's an all-out slugfest, and you see where you stand in near real time. In fact, just a few minutes ago, a few of us playing a test round were constantly tweaking our bots to defeat each other – it was a lot of fun trying to outthink one another.

Starting next week, we've got some great prizes on the line – including Xbox systems, Kinects, and gift cards – so be sure to check it out! The project homepage is here: http://www.rockpaperazure.com

See you in the game!
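To give a rough feel for what a bot encodes, here's a sketch in C#. The enum, class, and method names below are made up for illustration – the actual interfaces come with the Bot Lab project you download from the site – but the decision making (ration your dynamite, guess when your opponent will throw theirs) is the heart of the game.

using System;

// Hypothetical types for illustration only; the real contest interfaces differ.
public enum Move { Rock, Paper, Scissors, Dynamite, WaterBalloon }

public class SampleBot
{
    private readonly Random _random = new Random();
    private int _dynamiteLeft = 100;   // each player gets a limited supply per match

    // lastOpponentMove is null on the first throw of a match.
    public Move MakeMove(Move? lastOpponentMove, bool lastThrowWasTie)
    {
        // If the last throw was a tie, many bots get impatient and reach for dynamite,
        // so a water balloon here is a cheap way to try to steal the point.
        if (lastThrowWasTie)
        {
            return Move.WaterBalloon;
        }

        // Spend dynamite sparingly and unpredictably.
        if (_dynamiteLeft > 0 && _random.Next(10) == 0)
        {
            _dynamiteLeft--;
            return Move.Dynamite;
        }

        // Otherwise fall back to a random classic throw.
        return (Move)_random.Next(3);   // Rock, Paper, or Scissors
    }
}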

Creating a Quick Progress Visual like UpdateProgress

On Thursday, we're going to go live with our RockPaperAzure coding challenge – and it's time to brain dump some of the lessons learned while building out the solution – some small (like this one), some large.

When developing the RPA website, I chose to use ASP.NET Web Forms and ASP.NET Ajax. The reason is simple: I'm a big fan of ASP.NET MVC, but given the very short time frame, Web Forms was the fastest route to completion. (For those familiar with Aaron's base project on GitHub, I thought about doing it all client side via jQuery but wasn't impressed with the perf, so I decided to go server side.)

ASP.NET Ajax has a nifty UpdateProgress control that is useful during async postbacks to show activity, and while it has a display delay property, it doesn't have a minimum display time property. This is problematic because if you want a small spinning icon/wait cursor, displaying it too briefly is just an annoyance. (Of course, brief is good because it means your page is snappy.) One of the solutions to this (as I read on Stack Overflow) was to simply put a Thread.Sleep in the server postback method, causing the round trip to take longer and thus display your animation longer. While this will work, it will crush scalability. Depending on how many concurrent users you have, a Thread.Sleep on an ASP.NET thread should be avoided at all costs, and this wasn't something I was willing to do. There are a few commercial controls that will do this, and indeed, using the Ajax Control Toolkit, you could develop a control that accomplishes the same. But I wanted something I could develop in 5 minutes – basically in less time than it is taking me to write this blog post.

I already have the spinning icon, so I wrapped it in a div or span (whatever is appropriate):

<span id="ProgressTemplate" style="visibility: hidden;">
    <img src="~/images/rpadie.gif" clientidmode="Static" runat="server"
         id="imganim" height="18" width="18" />
</span>

Then we just plug into the client-side PageRequestManager to show and hide the element. It's a little hack-ish – for example, I reset the visibility to visible on EndRequest because the span, with a default of hidden, would otherwise disappear on its own. This could be fixed by making it a server control and enabling viewstate. Still, not too bad for 5 minutes:

<script language="javascript" type="text/javascript">
<!--
    var prm = Sys.WebForms.PageRequestManager.getInstance();

    prm.add_initializeRequest(InitializeRequest);
    prm.add_endRequest(EndRequest);

    var postBackElement;

    function InitializeRequest(sender, args) {
        postBackElement = args.get_postBackElement();
        $get('ProgressTemplate').style.visibility = 'visible';
    }

    function EndRequest(sender, args) {
        $get('ProgressTemplate').style.visibility = 'visible';
        setTimeout('hideProgress()', 1000);
    }

    function hideProgress() {
        $get('ProgressTemplate').style.visibility = 'hidden';
    }
// -->
</script>

When the Ajax request comes back, the span stays visible for 1 second. Ideally, a nice modification would use an elapsed-time mechanism, so that if the request actually took 1 second for some reason, the span would be hidden without further delay. That isn't too hard to implement, but it broke the 5-minute goal.

Storing Data in Azure: SQL, Tables, or Blobs?

While building the back end to host our "Rock, Paper, Scissors in the cloud" game, we faced the question of where and how to store the log files for the games that are played. In my last post, I explained a bit about the idea; in the game, log files are essential for tuning your bot to play effectively. Just to give a quick example: at the top of a log file you'd see the match header – in this case, I (bhitney) was playing a house team (HouseTeam4) – followed by the games, one per line; each match is made up of potentially thousands of games. From the game's perspective, we only care about the outcome of the entire match, not the individual games within the match – but we need to store the log for the user. There's no right or wrong answer for storing data – but like everything else, understanding the pros and cons is the key.

Azure Tables

We immediately ruled out storing each log file as a single entity in Azure Tables, simply because the logs are larger than the maximum entity size. But what if we stored each game (each line of the log) as its own entity? After all, Azure Tables shine at large amounts of loosely structured data. This would be ideal because we could ask specific questions of the data – such as, "show me all games where…". Additionally, size is really not a problem we'd face – tables can scale to terabytes. But storing individual games isn't a realistic option. The number of matches played in a 100-player round is 4,950 (every player faces every other player). Each match has around 2,000 games, so we'd be looking at 9,900,000 rows per round. At a few hundred milliseconds per insert, it would take almost a month to insert that much data. Even if we could get latency down to a blazing 10 ms, it would still take over a day. Cost-wise, it wouldn't be too bad: about $10 per round in transaction costs ($0.01 per 10,000 storage transactions at the time).

Blob Storage

Blob storage is a good choice as a file repository. Latency-wise, we'd still be looking at roughly 15 minutes per round. We almost went this route, but since we're using SQL Azure anyway for players and bots, it seemed excessive to insert metadata into SQL Azure and then the log files into Blob Storage. If we were playing with tens of thousands of people, that kind of scalability would be really important. But what about Azure Drives? We ruled drives out because we wanted the flexibility of multiple concurrent writers.

SQL Azure

Storing binary data in a database (even if that binary data is a text file) typically falls under the "guilty until proven innocent" rule – meaning: assume it's a bad idea. Still, this is the option we decided to pursue. By using GZip compression on the text, the resulting binary was quite small and didn't add significant overhead to the original query used to insert the match results in the first place. Additionally, connection pooling makes those base inserts incredibly fast – much, much faster than blob/table storage.

One other side benefit of this approach is that we can serve the GZip stream without decompressing it. This saves processing power on the web server, and it also takes a 100–200 KB log file down to typically less than 10 KB, saving a great deal of latency and bandwidth cost.

Here's a simple way to take some text (in our case, the log file) and get a byte array of the compressed data.
This can then be inserted into a varbinary(max) column (or the deprecated image type) in a SQL database:

public static byte[] Compress(string text)
{
    byte[] data = Encoding.UTF8.GetBytes(text);

    var stream = new MemoryStream();
    using (Stream ds = new GZipStream(stream, CompressionMode.Compress))
    {
        ds.Write(data, 0, data.Length);
    }

    byte[] compressed = stream.ToArray();

    return compressed;
}

And to get that string back:

public static string Decompress(byte[] compressedText)
{
    try
    {
        if (compressedText.Length == 0)
        {
            return string.Empty;
        }

        using (var input = new MemoryStream(compressedText))
        using (var zip = new GZipStream(input, CompressionMode.Decompress))
        using (var output = new MemoryStream())
        {
            // Read the decompressed stream to its end; GZip doesn't tell us
            // the uncompressed length up front.
            byte[] buffer = new byte[4096];
            int read;
            while ((read = zip.Read(buffer, 0, buffer.Length)) > 0)
            {
                output.Write(buffer, 0, read);
            }

            return Encoding.UTF8.GetString(output.ToArray());
        }
    }
    catch
    {
        return string.Empty;
    }
}

In our case, though, we don't really need to decompress the log file on the server, because we can let the client browser do that! We have an HTTP handler that serves the raw bytes, and it's quite simple:

context.Response.AddHeader("Content-Encoding", "gzip");
context.Response.ContentType = "text/plain";
context.Response.BinaryWrite(data.LogFileRaw); // the byte array
context.Response.End();

Naturally, the downside of this approach is that if a browser doesn't accept GZip encoding, we don't handle that gracefully. Fortunately it's not 1993 anymore, so that's not a major concern.
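For completeness, here's a minimal sketch of pushing that compressed byte array into SQL Azure with a parameterized command. The table and column names (MatchLogs, LogFileRaw) are placeholders for illustration – in the real project the bytes ride along with the larger match-results insert.

using System.Data;
using System.Data.SqlClient;

public static class LogStore
{
    // Saves a compressed log against a match id; assumes a table like:
    //   CREATE TABLE MatchLogs (MatchId int PRIMARY KEY, LogFileRaw varbinary(max))
    public static void SaveLog(string connectionString, int matchId, byte[] compressedLog)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "INSERT INTO MatchLogs (MatchId, LogFileRaw) VALUES (@MatchId, @LogFileRaw)", conn))
        {
            cmd.Parameters.Add("@MatchId", SqlDbType.Int).Value = matchId;

            // A size of -1 maps the parameter to varbinary(max).
            cmd.Parameters.Add("@LogFileRaw", SqlDbType.VarBinary, -1).Value = compressedLog;

            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}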

Getting Ready to Rock (Paper and Scissors)

We're gearing up for something that I think will be truly exciting – but I'm getting ahead of myself. This is likely going to be a long series of posts, so let me start from the beginning.

About a year or so ago, at the Raleigh Code Camp, I stumbled into a coding competition that James Avery and Nate Kohari ran during a few hours in the middle of the day. The concept was simple: write a program that plays "Rock, Paper, Scissors." You would take your code, compiled as a DLL, upload it to their machine via a website, and the site would run your "bot" against everyone else. Thus, a coding competition!

I was intrigued. During the first round, I didn't quite get the competition aspect, since returning a random move of rock, paper, or scissors seems to be about the best strategy. Still, you start thinking, "What if my opponent is even lazier and just throws rock all the time?" So you build in a little logic to detect that. During round 2, though, things started getting interesting. In addition to the normal RPS moves, a new move called Dynamite was introduced. Dynamite beats rock, paper, and scissors, but you only have a few to use per match. (In this case, a match is when two players square off – the first player to 1,000 points wins. You win a point by beating the other player in a single 'throw.' Each player has 100 dynamite per match.) Clearly, your logic now is fairly important. Do you throw dynamite right away to gain the upper hand, or is that too predictable? Do you throw dynamite after you tie? All of a sudden, it's no longer a game of chance.

Now enter round 3. In round 3, a new move, Water Balloon, is introduced. Water Balloon defeats dynamite but loses to everything else. So, if you can predict when your opponent is likely to throw dynamite, you can throw a water balloon and steal the point – albeit with a little risk.

This was a lot of fun, but I was intrigued by the back end that supported all of this – and, from a devious viewpoint, the security considerations, which are enormous. James pointed me to the base of the project, available on GitHub from Aaron Jensen, the author of the Compete framework (the underlying engine) and of the Rock, Paper, Scissors game that uses it. You can download the project today and play around. At a couple of code camps in the months that followed, I ran the same competition, and it went pretty well overall.

So, what does this have to do with anything, particularly Azure? Two things. First and foremost, I feel that Azure is a great platform for projects like this. If you download the code, you'll realize there's a little setup work involved. I admit it took me some time to get it working, dealing with IIS, paths, etc. If I wanted to run this for a code camp again, it would be far easier to take an Azure package at around 5 MB, click deploy, and direct people to that site. Leaving a small instance up for the day would be cheap. I like no hassle.

The other potential is using the project as a learning tool for Azure. You might remember that my colleague Jim and I did something similar last year with our Azure @home series – we used Azure to contribute back to Stanford's Folding@home project. It was a great way to do something fun, useful, and educational.
In the coming weeks, we're rolling out a coding competition in Azure that plays RPS – the idea is that, as a participant, you can host your own bots in a sandbox for testing, and the main game engine can take those bots and continually play them in the background. I'm hoping it's a lot of fun, slightly competitive, and educational at the same time. We've invested a bit more in polishing this than we did with @home, and I'm getting excited about unveiling it. Over the next few posts, I'll talk more about what we did, what we've learned, and how the project is progressing!

Connected Show: Migrating To Azure

I recently sat down with Peter Laudati, my cloud colleague up in the NY/NJ area, to discuss Worldmaps and the migration to the cloud on Peter's and Dmitry's Connected Show podcast. Thanks, guys, for the opportunity!

Connected Show – Episode #40 – Migrating World Maps to Azure

A new year, a new episode. This time, the Connected Show hits 40! In this episode, guest Brian Hitney joins Peter to discuss how he migrated the My World Maps application to Windows Azure. Fresh off his Azure Firestarter tour through the eastern US, Brian talks about migration issues, scalability challenges, and blowing up shared hosting. Also, Dmitry and Peter rap about Dancing with the Stars, the Xbox 360 Kinect, Dmitry's TWiT application for Windows Phone 7, and Dmitry's outdoor adventures at 'Camp Gowannas'.

Show Link: http://bit.ly/dVrIXM

Folding@home SMP Client

Wouldn't you know it! As soon as we get admin rights in Azure in the form of Startup Tasks and VM Role, the fine folks at Stanford have released a new SMP client that doesn't require administrative rights. This is great news, but let me provide a little background on the problem and why this is good for our @home project.

In the @home project, our worker roles run Stanford's console client, which does the actual Folding@home work. The application, however, is single threaded. During our @home webcasts where we've built these clients, we've walked through how to select the appropriate VM size – from a single-core (small) instance all the way up to an 8-core (XL) instance. For our purposes, using a small, single-core instance is best. Because the costs are linear (two single-core instances cost the same as one dual-core instance), we might as well just launch one small VM for each worker role we need. The extra processors wouldn't be utilized, so it didn't matter whether we had one quad-core VM running four instances or four small VMs each running their own.

The downside to this approach is that the work units assigned to our single-core VMs were relatively small, and consequently the points received were very small. In addition, bonus points are awarded based on how fast work is completed, which means that our single-core machines won't be earning bonus points. Indeed, if you look at the number of work units our team has done, it's a pretty impressive number compared to our peers, but our score isn't all that great. We've processed some 180,000 WUs – that would take one of our small VMs, working alone, some 450 years to complete! Points-wise, though, it's somewhat ho-hum.

Stanford has begun putting together some high-performance clients that make use of multiple cores; until now, however, they were difficult to install in Windows Azure. With VM Role and admin startup tasks just announced at PDC, we could now accomplish this inside Azure, but it turns out Stanford (a few months back, actually) put together a drop-in replacement that is multicore capable. Read their install guide here. This is referred to as the SMP (symmetric multiprocessing) client. The end result is that instead of having (for example) eight single-core clients running the folding app, we can instead run one 8-core machine. While it will crunch fewer work units, the power and point value are far superior.

To test this, I set up a new account with a username of bhitney-test. After a couple of days, here is the result (everyone else on the team is using the non-SMP client): 36 work units processed for roughly 97k points, averaging about 2,716 points per WU. That's significantly higher than the single-core client, which pulls in about 100 points per WU. The 2,716 average is also quite a bit lower than what the client is earning right now, because bonus points don't kick in for roughly the first dozen work units. Had we been able to use the SMP client from the beginning, we'd be sitting pretty at a much higher rating – but that's OK, it's not about the points. :)

Distributed Cache in Azure: Part III

It has been a while since my last post on this … but here is the completed project – just in time for distributed cache in the Azure AppFabric! :) In all seriousness, even with the AppFabric Cache, this is still a viable solution for smaller-scale applications.

To recap, the premise is that we have multiple website instances all running independently, but we want to keep their caches in sync. Each instance maintains its own copy of an object in the cache, but in the event that one instance sees a reason to update the cache (for example, a user modifies their account or some other non-predictable action occurs), the other instances can pick up on the change. It's possible to pass objects across the WCF service; however, because each instance knows how to fetch the objects itself, it's a bit cleaner to broadcast 'flush' commands. So that's what we'll do. While it's completely possible to run this outside of Azure, one of the nice benefits is that the Azure fabric controller maintains the RoleEnvironment class, so all of your instances can (if internal endpoints are enabled) be aware of each other, even as you spin up new instances or reduce instance counts.

You can download this sample project here: DistributedCache.zip

When you run the project, you'll see a simple status screen. Specifically, pay attention to the Date column. Here we have three webrole instances running, and the dates match – this is the last-updated date/time of some fictitious settings we're storing in Azure Storage. If we add a new setting or just click Save New Settings, the webrole updates the settings in Storage and then broadcasts a 'flush' command to the other instances. After clicking, the date on all three instances changes to reflect the update.

In a typical cache situation, you'd have no easy way to push ad-hoc updates out to the other instances. Even if you don't need a distributed cache solution, the code base here may be helpful for getting started with WCF services for inter-role communication. Enjoy!
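As a footnote, here's a rough sketch of what the receiving side of that 'flush' broadcast can look like – a WCF service implementation that simply clears the local copy. The CacheWrapper stub and the exact contract shape are illustrative (the contract itself appears in the Part II post below); the key detail is passing notifyRoles: false so an instance that receives a flush doesn't broadcast it again.

using System.ServiceModel;

// Matches the shape of the INotificationService contract from Part II.
[ServiceContract]
public interface INotificationService
{
    [OperationContract(IsOneWay = true)]
    void RemoveFromCache(string key);

    [OperationContract(IsOneWay = true)]
    void FlushCache();
}

// Minimal stand-in for the cache wrapper described in Part II.
public static class CacheWrapper
{
    public static void Remove(string key, bool notifyRoles) { /* remove from the local cache */ }
    public static void Flush(bool notifyRoles) { /* clear the local cache */ }
}

public class NotificationService : INotificationService
{
    public void RemoveFromCache(string key)
    {
        // notifyRoles: false – this removal came from another instance,
        // so don't rebroadcast it and cause an echo storm.
        CacheWrapper.Remove(key, notifyRoles: false);
    }

    public void FlushCache()
    {
        CacheWrapper.Flush(notifyRoles: false);
    }
}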

Distributed Cache in Azure: Part II

In my last post, I talked about creating a simple distributed cache in Azure. In reality, we aren't creating a true distributed cache – what we're going to do is allow each server to manage its own cache, but use WCF and inter-role communication to keep them in sync. The downside of this approach is that we're using n times the RAM, because each server maintains its own copy of each cached item. The upside is: this is easy to do.

So let's get the obvious thing out of the way. Using the built-in ASP.NET Cache, you can add something to the cache like so – this inserts an object that expires in 30 minutes:

Cache.Insert("some key",
    someObj,
    null,
    DateTime.Now.AddMinutes(30),
    System.Web.Caching.Cache.NoSlidingExpiration);

With this project, you certainly could use the ASP.NET Cache, but I decided to use the Patterns & Practices Caching Block. The reason for this: it works in Azure worker roles. Even though we're only looking at web roles in this example, it's flexible enough to go into worker roles, too. To get started with the caching block, you can download it here on CodePlex. The documentation is pretty straightforward; what I did was just set up the caching configuration in the web.config file. For worker roles, you'd use an app.config:

<cachingConfiguration defaultCacheManager="Default Cache Manager">
  <backingStores>
    <add name="inMemory"
         type="Microsoft.Practices.EnterpriseLibrary.Caching.BackingStoreImplementations.NullBackingStore, Microsoft.Practices.EnterpriseLibrary.Caching" />
  </backingStores>

  <cacheManagers>
    <add name="Default Cache Manager"
         type="Microsoft.Practices.EnterpriseLibrary.Caching.CacheManager, Microsoft.Practices.EnterpriseLibrary.Caching"
         expirationPollFrequencyInSeconds="60"
         maximumElementsInCacheBeforeScavenging="250"
         numberToRemoveWhenScavenging="10"
         backingStoreName="inMemory" />
  </cacheManagers>
</cachingConfiguration>

The next step was creating a cache wrapper in the application. Essentially, it's a simple static class that wraps all the inserts, deletes, etc. on the underlying cache – it doesn't really matter what the underlying cache is. The wrapper is also responsible for notifying other roles about a cache change. If you're a purist, you'll point out that this shouldn't be a wrapper but rather a full-fledged cache provider, since it isn't just wrapping the functionality. That's true, but again, I'm going for simplicity here – as in, I want this up and running _today_, not in a week.

Remember that the specific web app handling the request knows whether or not to flush the cache. For example, it could be a customer updating their profile or some other action that only this server knows about. So when adding or removing items from the cache, we pass a notify flag that tells all the other instances to drop their copies:

public static void Add(string key, object value, CacheItemPriority priority,
    DateTime expirationDate, bool notifyRoles)
{
    _Cache.Add(key,
        value,
        priority,
        null,
        new AbsoluteTime(expirationDate));

    if (notifyRoles)
    {
        NotificationService.BroadcastCacheRemove(key);
    }
}

public static void Remove(string key, bool notifyRoles)
{
    _Cache.Remove(key);

    if (notifyRoles)
    {
        Trace.TraceWarning(string.Format("Removed key '{0}'.", key));
        NotificationService.BroadcastCacheRemove(key);
    }
}

The Notification Service is surprisingly simple, and this is the cool part about the Windows Azure platform.
Within the ServiceDefinition file (or through the role's properties page), we can simply define an internal endpoint. This allows all of our instances to communicate with one another. Even better, this is all maintained by the static RoleEnvironment class, so as we add or remove instances from our app, everything magically works. A simple WCF contract to test this prototype looked like so:

[ServiceContract]
public interface INotificationService
{
    [OperationContract(IsOneWay = true)]
    void RemoveFromCache(string key);

    [OperationContract(IsOneWay = true)]
    void FlushCache();

    [OperationContract(IsOneWay = false)]
    int GetCacheItemCount();

    [OperationContract(IsOneWay = false)]
    DateTime GetSettingsDate();
}

In this case, I want to be able to tell another instance to remove an item from its cache, to flush everything in its cache, and to give me the number of items in its cache as well as the 'settings date' – the last time the settings were updated. The latter two are largely for prototyping, to make sure everything stays in sync.

We'll complete this in the next post, where I'll attach the project so you can run it yourself; the next steps are creating the service and a test app to use it. Check back soon!
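To preview where this is headed, here's a minimal sketch of how a broadcast can enumerate the other instances via RoleEnvironment and call the contract above over the internal endpoint. The endpoint name ("NotificationEndpoint") and the helper's shape are illustrative assumptions; the actual implementation in the attached project differs in the details.

using System;
using System.ServiceModel;
using Microsoft.WindowsAzure.ServiceRuntime;

public static class NotificationBroadcaster
{
    // Calls RemoveFromCache(key) on every other instance of this role.
    public static void BroadcastCacheRemove(string key)
    {
        var current = RoleEnvironment.CurrentRoleInstance;

        foreach (var instance in current.Role.Instances)
        {
            if (instance.Id == current.Id)
            {
                continue;   // skip ourselves; our own cache is already up to date
            }

            // "NotificationEndpoint" is the internal endpoint declared in ServiceDefinition.csdef.
            var endpoint = instance.InstanceEndpoints["NotificationEndpoint"].IPEndpoint;
            var address = new EndpointAddress(
                string.Format("net.tcp://{0}/NotificationService", endpoint));

            var factory = new ChannelFactory<INotificationService>(
                new NetTcpBinding(SecurityMode.None), address);

            INotificationService channel = factory.CreateChannel();
            try
            {
                channel.RemoveFromCache(key);   // one-way call, returns quickly
                ((ICommunicationObject)channel).Close();
            }
            catch (CommunicationException)
            {
                // Don't let one unreachable instance break the whole broadcast.
                ((ICommunicationObject)channel).Abort();
            }
        }
    }
}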
