AwsUsageAnalyzr

Apr 26

Shutting down Usage Report!

Read more on onrails.org

Apr 19

UsageReport Downloader for Amazon Web Services™. A simple tool to download all your usage reports with one click (ec2, s3, sns, sqs and more…)

UsageReport Downloader for Amazon Web Services™ is a simple tool to download all your usage reports with one click.

Install UsageReport Downloader

The files are downloaded to your documents folder. You can change the default folder; your selection is kept for the next time.

Click the Download XML or Download CSV button to choose which format the reports should be downloaded in, and off you go…

All your files are downloaded to the selected download folder (here /Users/daniel/Documents/usagereport/downloads/Current\ Billing\ Period).

When the application starts it checks whether you are already logged in, and you will see the following message.



You can stay logged in between launches of the application, but we recommend that you log out once you have downloaded all your files.

If you need to log in, just enter your email and Amazon password as usual for https://aws.amazon.com.

If you use authentication tokens for signing in, you will be presented with this additional screen:

Please contact me at daniel@appsden.com for any bugs, issues, questions.

Enjoy!

Daniel Wanja

Mar 29

March 29th - Setback?

Amazon just sent out the following announcement:

Announcement: Announcing Combined AWS Data Transfer Pricing 
Dear Amazon EC2 Customer,  
Starting April 1, 2010, your Data Transfer Out pricing tier for a given Region will be based on your total Data Transfer Out usage within that region for Amazon Simple Storage Service (Amazon S3), Amazon Elastic Compute Cloud (Amazon EC2), Amazon SimpleDB, Amazon Relational Database Service (Amazon RDS), Amazon Virtual Private Cloud (Amazon VPC), and Amazon Simple Queue Service (Amazon SQS). Until now, usage tiers have been calculated individually for each service, based on data transfer related to that service. Because AWS is now aggregating your total Data Transfer Out usage across multiple services, you can reach higher usage tiers and lower pricing more quickly. In addition, you’ll benefit from a complimentary tier which provides your first GB of outbound transfer in each Region each month at no charge. 
The tiered pricing for Data Transfer Out is as follows for each Region: 
First 1 GB of data transferred out per month is free
Remainder of first 10 TB per Month: $0.15 per GB
Next 40 TB per Month: $0.11 per GB
Next 100 TB per Month: $0.09 per GB
Over 150 TB per Month: $0.08 per GB
As you may know, all inbound data transfer is free of charge until June 30, 2010. All data transfer usage (both inbound and outbound) for participating Amazon Web Services now appears in aggregate in its own section of your AWS account activity page and monthly bill. As a bonus, you’ll notice that your first GB of outbound data transfer in each Region is now included free of charge. 
As always, thank you for your support. 
Sincerely, 
The Amazon EC2 Team

That’s great for users of the Amazon Web Services, but it also means that I have to change some of my algorithm. When I do the price calculation I use the cumulated Data Transfer of a given service to check for price tier changes, but only within that service. Now I will have to check across all services. The problem is that I calculate one service at a time, so I currently only have the cumulated data transfer once all services are loaded. So I guess I now need to make it a two-pass algorithm…Pass 1: load all usage entries for all services…Pass 2: calculate Data Transfer Out pricing for each region. The other problem is that it also takes into account the Virtual Private Clouds, which I don’t aggregate currently…
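To make the new scheme concrete, here is a rough sketch of what pass 2 could look like for one region, using the tiers from the announcement above (the names and structure are mine, not the shipping code):

private static const GIGA:Number = 1024 * 1024 * 1024;
private static const TERA:Number = 1024 * GIGA;

// combined Data Transfer Out tiers from the announcement; upTo is a cumulated ceiling in bytes
private static const TRANSFER_OUT_TIERS:Array = [
    {upTo: 1 * GIGA,         price: 0.00},   // first GB free
    {upTo: 10 * TERA,        price: 0.15},   // remainder of first 10 TB
    {upTo: 50 * TERA,        price: 0.11},   // next 40 TB
    {upTo: 150 * TERA,       price: 0.09},   // next 100 TB
    {upTo: Number.MAX_VALUE, price: 0.08}    // over 150 TB
];

// pass 1 only sums the DataTransfer-Out-Bytes entries of every service in the region;
// pass 2 then prices that region-wide total against the combined tiers
public static function priceDataTransferOut(regionTotalBytes:Number):Number {
    var total:Number = 0;
    var floor:Number = 0;
    for each (var tier:Object in TRANSFER_OUT_TIERS) {
        var bytesInTier:Number = Math.min(regionTotalBytes, tier.upTo) - floor;
        if (bytesInTier <= 0) break;
        total += (bytesInTier / GIGA) * tier.price;
        floor = tier.upTo;
    }
    return total;
}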

This tells me two things. First, I should release a first version that doesn’t do any price calculation, but just shows the usage types. I think showing the pricing is essential as it allows comparing two services that have different usage types, which is great when comparing services of a different nature. Secondly, I need to work on my pricing engine, adapt it to this recent change, and make sure I can really calculate the prices of each of the services.

What’s fun about creating a product is that there are many ups and downs during the development cycle, and there are many doubts along the way and many questions. I chose this product because I like using Amazon Web Services and my tool will be a great way to visualize the usage. In the doubts category…I think Amazon may at any time provide a similar tool or change something in their data format or pricing, which could make my effort void. But that’s what also makes product development fun.

So today I shall first add support for High-Memory EC2 Instances; then I will consider making one version that just shows aggregated usage data without any pricing.

11:05am Ok, added the High-Memory EC2 instances. In fact m2.2xlarge and m2.4xlarge were already there and only the low-end m2.xlarge needed to be added.

Change in strategy!

I spent a lot of energy on getting the pricing right and I am not there yet. I still think it’s doable, but I need more time and more user input on different usage scenarios. Also, I really don’t know how much interest is out there for the tool I’m building. You may ask why I didn’t figure that out before? Well, good question…next. But seriously, I was ready to spend a few weeks of development time to get a great product and then figure that question out. Now it seems that it will take me way longer to reach that point, so I may as well change my strategy…Here goes the new thinking. I should be able to provide a nice utility that lets you visualize the usage reports without providing the price calculation. I think the price calculator will be crucial and key to asking a higher price for the product, but even without the price calculation the tool will be useful. So the new strategy is to provide a low-cost version that doesn’t have any price calculation built in. This will allow me to figure out the interest people would have in buying such a product, as well as gather good feedback on what users really want.

Now let’s think about not showing pricing…see if that would still be useful. I have three main screens for a period: 1) Billing Period Summary 2) Dashboard 3) Breakdown. The summary screen wouldn’t make sense without any pricing. Maybe a few details regarding usage could replace it. For the Dashboard, the first four diagrams show pricing information (price by service, cumulated price by service, by day of week, by hour of day). We could maybe replace those with facts such as number of instances, zone information, and data transfer. For the Breakdown, the total tab shows the same summary as the billing period summary, so that needs to be replaced. The ‘Total’ tab was breaking down which service costs the most; without price calculation we cannot show that information. So it looks like I will move some of the information from the dashboard to the period tab just as summary information for that period.

Then each service may get a custom way of visualizing its usage data. For example, for EC2 it would be great to show what types of boxes (windows/linux/small/large/…) were used each day. And for SQS we could show a stacked chart grouping each type of request per day, as sketched below. In fact, that’s going to make the application way more compelling. All right!
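Here is a rough sketch of the grouping such a stacked chart needs; the entry fields (day, operation, value) are assumptions about my data model, not final code:

// group usage entries into one object per day, with one dynamic field per operation
// type; a stacked ColumnChart can then bind one ColumnSeries per operation
private function stackByDay(entries:Array):Array {
    var byDay:Object = {};
    var result:Array = [];
    for each (var entry:Object in entries) {
        var bucket:Object = byDay[entry.day];
        if (bucket == null) {
            bucket = {day: entry.day};
            byDay[entry.day] = bucket;
            result.push(bucket);
        }
        bucket[entry.operation] = (bucket[entry.operation] || 0) + entry.value;
    }
    result.sortOn("day");
    return result;
}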

But first I will start using the Swiz framework to clean up some of the code before my refactoring. To configure it I just follow these 4 simple steps, but I won’t bore you with the technical details on this blog…I’ll bore you just with my “thoughts” while I code this application.
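For the curious, the flavor is roughly this (a sketch assuming Swiz’s [Inject] and [Mediate] metadata; the controller, service, and event names are made up for this app):

public class UsageController {
    [Inject]
    public var usageService:UsageService; // declared in a Swiz BeanProvider

    [Mediate(event="UsageEvent.LOAD_REPORTS")]
    public function loadReports(event:UsageEvent):void {
        usageService.load(event.period);
    }
}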

09:15pm Not yet done with the port to Swiz, but I’m pretty excited about this new direction, apart from the fact that I should have started there!

That’s all for today.

Mar 22

March 22nd - online store on the way

11am Again a slow start; I had to fix my car stereo. So what are the major obstacles to releasing a very limited “beta”…


1) The login into your AWS account is not intuitive…so I’ll fix that first.

2) Some of the tooltips on the graphs are not readable due to the tooltip font. It’s important, as the graphs’ axis legends are not readable either…so at least fixing the tooltips will make them usable. I still have an open bug for the axis renderer.


3) Also not required, but I will deploy the limited beta with licensing enabled. So I have a license server, and when starting the app it checks the license (only once, during registration). I want to make sure that this works, as it must be a seamless experience for the user.

Maybe I should call this an alpha version as there are still a few bugs open, but I also want to get some outside feedback as soon as possible. So it would be great if I can get the AWS login cleaned up today.


12:31pm Seems that it’s not that obvious to have my own login that hooks up to Amazon, which makes sense: Amazon doesn’t want your application to just post to their sign-in service without going through the proper process. They check that the form doing the post has an authorization token…so I could get that token and then add it to the post, but this would become brittle.


1:15pm So I ended up restyling the built-in browser as follows. I assume users will get what’s going on and the process should be quite predictable.

2:15pm So 1) is as good as it gets for now, and 2) was straightforward to fix thanks to CSS. I will now set up an online application that will host the online store to buy the application and that allows for the license verification.


5:40pm I’ve added several pieces of the server-side puzzle, but I now have issues with the database migration on Engine Yard, which simply reports a failure. Well…maybe I should try on Heroku instead and see if that goes better. But first let’s move to Panera Bread for a change of scenery.

8:46pm Great, I have an online store and a license verification system. I can also now generate licenses to give away. Yea! Note the store is not wired up to a real merchant account, only to a Google sandbox, so I will need to wire it up to a real PayPal and a real Google merchant account. Also, there are some limitations on running SSL certificates on Heroku due to the dynamic nature of their infrastructure and some constraints of EC2, which is their underlying infrastructure. So if I want to support SSL for Windows XP users it costs an additional $100 when using Heroku. Ugh!…They are working on solving that issue, so maybe I should use Engine Yard after all…For now I will see how many people complain…Anyone still using XP out there that wants to buy my product?

Mar 17

Sneak peek of Usage Report for Amazon Web Services™

I’ve just posted on my main blog the following sneak peek of AwsUsageAnalyzr, which is now called Usage Report for Amazon Web Services.

Sneak peek of Usage Report for Amazon Web Services™ from daniel wanja.

[video]

Mar 15

March 15th - Dashboard

Taking a break from trying to set up a store. Note I think I have a simple solution underway.
So I started on a Dashboard that will show 9 aspects of your Amazon Web Services usage reports. I created 3 graphs at the airport the other day and added the cumulated price by service this morning. I will now add 2 graphs showing usage of DataTransfer-In-Bytes and DataTransfer-Out-Bytes across services. Due to space constraints I will skip DataTransfer-Regional-Bytes. Then I will add 3 operations graphs for EC2, RDS, and SQS. In a future version I’ll add more types of graphs based on user feedback, as well as the possibility to manually add and remove graphs.
1:34pm Here is how the dashboard looks so far. There are a few more formatting details to settle, but for now you will see a first row with two charts, the price by service and the cumulated price by service, which give a good overview of which service cost the most and at what day of the month the charges occurred. Currently I don’t have much S3 cost, so for me it’s mostly a linear cost due to box usage. Then there is a breakdown by day of week and hour of day, which can give an indication of when your services are consumed most. On the third row of the dashboard you will see the data transfer broken down by inbound and outbound transfer per service. Note this chart may need to be reviewed, as all the other data points are eclipsed by the most prevalent data point. Finally, a fourth row of pie charts shows the details of operations cost for each of the services.

Note it’s fairly easy to add new charts and time series, so expect to see the dashboard evolve over the coming weeks.

Time for some more decaf…a little meeting…and I’ll be back for 2 changes: first a drastic change in look and feel, then I want to improve the login with Amazon. If I get all that done before tonight I’ll continue with the online store configuration.
So first, the look and feel. Since November I’ve worked on this product using a ‘white’ style that reminded me of Amazon’s own service. But after a while this got boring, and I now want to switch to a ‘dark’ style to see how the app looks. The other day Panic showed off their new in-house status board.

And I also somehow like the “White on Black” look of the system settings on the iPhone/iPad…Note it only looks good in the accessibility settings themselves, not when using normal apps in reverse mode.

So I’m picking a skin from the http://scalenine.com/ website and will try to convert the application.

 
8:54pm Styling done! I had a lot of details to clean up and I also reorganized the day and hour selector. My original idea to have two horizontal grids didn’t give enough space to show the daily cost for accounts that pay more than $100 a day…So now even large AWS users can see their breakdown, and with a new look and feel:

I think that’s it for today! Next time I’ll improve the Amazon login sequence…then add support for the new types of EC2 instances…fix a few bugs…set up the store…then be ready for an alpha release where I will bug a few people I know who use AWS…to make sure I didn’t miss anything obvious.

Mar 09

EC2 new High-Memory Instances

Amazon just announced the following:

"Save up to 35% on your Amazon EC2 Costs with Extra Large High Memory Instances High Memory Extra Large instances are a great new option for customers who are currently leveraging Standard Extra Large instances. High Memory Extra Large instances offer similar processing power and more memory than Standard Extra Large instances at prices that are 25-35% lower."

read more…

High-Memory Instances

Instances of this family offer large memory sizes for high throughput applications, including database and memory caching applications.

High-Memory Extra Large Instance

17.1 GB of memory
6.5 EC2 Compute Units (2 virtual cores with 3.25 EC2 Compute Units each)
420 GB of instance storage
64-bit platform
I/O Performance: Moderate
API name: m2.xlarge

High-Memory Double Extra Large Instance

34.2 GB of memory
13 EC2 Compute Units (4 virtual cores with 3.25 EC2 Compute Units each)
850 GB of instance storage
64-bit platform
I/O Performance: High
API name: m2.2xlarge

High-Memory Quadruple Extra Large Instance

68.4 GB of memory
26 EC2 Compute Units (8 virtual cores with 3.25 EC2 Compute Units each)
1690 GB of instance storage
64-bit platform
I/O Performance: High
API name: m2.4xlarge

Mar 01

March 1st - Slow progress on the store

10:10am Unbelievable, already March. I just counted how many days I’ve worked on this project since November 23rd, and it has only been 15 full days of work, but working only on Mondays it feels like an eternity. The last few Mondays I only spent a few hours, as I was working on a side project that has now been released: http://vault.ncaa.com. And next week I’ll be off to two conferences in a row, first 360Flex followed by MountainWest RubyConf. Hopefully I will be able to put in quite some hours to move the project forward.

1:36pm I’m investigating opening and hosting my own online store. I did play around with PotionStore, but of course there are a few challenges specific to my setup. I want to create a license generator in Ruby that generates a license that can be validated in Flex/ActionScript. And for this to work I have a few technical issues to solve, notably around private/public key encryption. First I need a Base32 encoding and decoding library, and I didn’t find any, so I went to adapt one I found for .Net. Note it’s not standard compliant, which is not an issue, but I also found out that it doesn’t support non-US characters, which will be an issue. But I still have a few larger technical aspects to validate, so I move on.

4:14pm So I moved on but got bit again by my Base32 implementation, so I switched to a Base64 implementation, which results in longer license keys that cannot be entered by typing but must be cut/pasted.
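On the Flex side at least the decoding comes for free, since the SDK ships Base64 classes. A minimal sketch (the dash-stripping is just my assumption about how the store will format the keys):

import flash.utils.ByteArray;
import mx.utils.Base64Decoder;

// decode a pasted license key back into the raw bytes the validator will check
function decodeLicenseKey(key:String):ByteArray {
    var decoder:Base64Decoder = new Base64Decoder();
    decoder.decode(key.replace(/-/g, ""));
    return decoder.toByteArray();
}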

Now I need to wrap that license generator in Ruby code, so that it can be used by PotionStore.

5:55pm Got a wrapped version now; I’ll move to Panera and then test how it works.

7:14pm While testing the new license generator, it seems that it now fails to submit orders to Google Checkout. So back to the drawing board.

7:34pm By looking at the PotionStore source code I found a way to work around the license key issue. Yea! Seems to work pretty nicely. Now I will see how to integrate with PayPal.

8:45pm That’s effectively more involved than I expected. And I cannot seem to pass step 4 of the sandbox signup; I get the message “NOTE: If the form shows no fields in error, a processing error may have occurred that cannot be corrected at this time. Please return later, and then try again to sign up or upgrade.”. Arrggg! All right, I’ll head home. Then I’ll play with setting up one of the Rails servers to serve the licenses, or maybe I’ll go back to the app and add some more details.

9:41pm Home! I just realized that PotionStore has some clear instructions for setting up a test account for the PayPal sandbox…and that of course worked. Well, payments still don’t work; I’m getting a ‘This transaction cannot be processed due to an invalid merchant configuration’ error. I’m using one of the default business accounts, but they don’t seem to have the Website Payments Pro feature enabled…

10:34pm That’s it for today.

Feb 15

February 15th - ToDo’s going down!

6:08pm I need to create more pricing structures, as I didn’t code region support for S3 and SimpleDB.

SDB regions:

SDB metrics:

I only started instances in the US-West region, so to get the proper usage type codes I need to start one instance on the East Coast and one in Ireland. I assume the Box Usage codes should be USW1-BoxUsage and EU-BoxUsage. Now I cannot just assume; if only Amazon had this documented, that would save me numerous hours of digging around. Hey, I’m sure I’ll get the complete list from somewhere as soon as I’m done reverse engineering all that.

So I’ve created two new SimpleDB databases, one in each region I didn’t try yet. Let’s see when they start showing up in the usage log. Actually, it seemed instant. I just reloaded the logs in my tool and saw the European usage:

And the West coast usage:

So that’s gonna be pretty easy to configure. That said, it makes me want to program the region aggregation right away. Now I just need to know when they added the regions, to ensure proper price calculation for historical periods. Note that the TimedStorage-ByteHrs didn’t appear yet, so I assume I need to wait one hour for this usage to appear.

In the meantime, let’s create some S3 buckets across the planet. For S3 there are also 3 regions: US Standard Region, US-West (Northern California) Region, and EU (Ireland) Region. Now I assume that the US Standard Region is the US-East (Northern Virginia) one, as in SDB. I’m using Transmit, but it supports only two regions, North America (Default) and Europe. So I created a European bucket and copied a few files. From the API I must use the s3-us-west-1.amazonaws.com endpoint to reach the US-West Region.

Note from the API doc: “The US Standard Region provides eventual consistency for all requests. The EU (Ireland) and Northern California Regions provide read-after-write consistency for PUTS of new objects and eventual consistency for overwrite PUTS and DELETES.” I now created a us-west-1 bucket and added the same file as I did to the European bucket.

The European bucket already showed up in the usage files:

So there seems to be some consistency, and the usage codes start with “EU-”, so hopefully the US West ones will start with “USW1-”. Now I need to figure out the Requests-Tier1 codes. Thanks to Google I figured these out:
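Assuming the prefix pattern holds, the parsing stays trivial; a quick sketch with made-up names:

// split a usage type such as "USW1-DataTransfer-Out-Bytes" into region and base type;
// codes without a prefix are assumed to belong to the US Standard (US-East) region
private static const REGION_PREFIXES:Array = ["USW1-", "EU-"];

private function splitUsageType(usageType:String):Object {
    for each (var prefix:String in REGION_PREFIXES) {
        if (usageType.indexOf(prefix) == 0)
            return {region: prefix.substr(0, prefix.length - 1), baseType: usageType.substring(prefix.length)};
    }
    return {region: "US", baseType: usageType};
}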

All right, the US West bucket operations started to show up:

7:49pm I was expecting TimedStorage-ByteHrs to appear for SimpleDB, but it didn’t. So I’ll check later, as this needs to be coded into the price calculation.

There are only a few more details to code (and lots to test) before I can call the price structure complete. But that was one of the major aspects of this application. Yea.

On a side note, I spoke with Juan Sanchez to see if he can spice up this application a little. He sounded interested, so we need to flesh out the details of our collaboration, but that would be awesome. He also didn’t seem to like the name of the app, AwsUsageAnalyzr, with which I totally agree, and he came up with a cool new name. We will use the new name for the first test release, so stay tuned…more in a few weeks.

8:26pm Me fried! I guess getting up at 5am to work out isn’t too good for my late-night coding! That’s it for today, a great day as the todo list (for the beta) is really getting shorter. I hope the Adobe guys move faster with their payment solution, otherwise I will need to spend time on an alternate solution, which may not be a bad idea given that the other payment solutions seem to charge less. But you can’t beat adding payment processing support with one line of code. So what’s left on the todo? Mainly, the usage and pricing graphs may have to be selected based on context, as in some contexts one or the other doesn’t apply. Also, the graph is not very readable, so I may need to aggregate the values further just for the graph, or see if I should use a different graph style altogether. Other than that, a few navigation shortcuts would be welcome, and maybe improving the login into the Amazon website…If it weren’t for Adobe I would start an alpha version today just for the adventurous!

Enjoy!

Daniel.

Feb 09

Versioning Feature for Amazon S3 Now Available

The team at Amazon doesn’t stop adding cool new features to their Amazon Web Services offering. They just announced the availability of the Versioning feature for beta use across all of the Amazon S3 Regions. You can now enable Versioning for a bucket. From there on, Amazon S3 preserves existing objects any time you perform a PUT, POST, COPY, or DELETE operation on them. A GET retrieves the most recent version by default, or you can add the version to the request to get a specific version of an object, as in the sketch below. There is more to it; you can read the S3 FAQ and Developer Guide or register for an Introduction to Versioning Webinar.
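For example, fetching a specific version just means adding a versionId query parameter (a sketch with a placeholder bucket, key, and version id; a request against a private bucket would also need to be signed):

import flash.net.URLLoader;
import flash.net.URLRequest;

// without the versionId parameter S3 returns the latest version of the object
var request:URLRequest = new URLRequest(
    "https://mybucket.s3.amazonaws.com/usage/report.xml?versionId=...");
new URLLoader().load(request);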

Feb 08

February 8th - Historical Price

8:22am Today is price history. Basically, Amazon regularly offers price changes. Some are temporary, like the inbound data transfer that is free until June 30th, and more often than not it reduces the price. For example, starting February 1st Amazon lowered outbound data transfer pricing by $0.02 across all of its services, in all usage tiers, and in all Regions.

The new outbound data transfer pricing will be:

Currently I created the price structure manually (programmatically) and have a structure for each service that lists all the price points for each aspect of the service across all regions. Ultimately I want to be able to just download any price from the Amazon websites and create this structure automatically, but I’ll leave that for version 1.1. Additionally, the application should have a pricing tab that visualizes this pricing structure, but that will also wait for version 1.1. Also, as mentioned in a previous blog entry, I don’t have the price structure for historical periods, so you can download usage data for any of your periods, but currently the aggregation calculation would be wrong as it’s based on the wrong pricing data. I’m not sure where I can find all the historical prices. I’ll check further on Amazon’s website and on my bills to see if I can rebuild a consistent historical pricing structure. If you know where to find historical prices for Amazon’s Web Services, please let me know!

Ok, my MiFi battery ran out; time to move from Perkins to my in-laws’ office.

9:56am All right, I just settled into my new quarters. First things first, let’s update to Flex 3.5a, which supposedly fixes the auto-update issue I encountered a few weeks ago. I just installed the new SDK, removed the workaround, and the auto-update seems to work.

Now back to the historical price structure. I may have 30 minutes before I switch to another project for a little while, as Cameron is coming by my in-laws’ office to work on that other project.

I have the impression that Amazon always does price changes at the end of a month. I will need to verify this, but assuming that’s true I could just determine the price structure before running the aggregation. Currently the aggregator asks for the pricing structure based on the service: this.pricing = BasePricing.pricingForService(service, "usa"); The service being ‘ec2’, ‘s3’ and so on. Now I could just pass the start of the period to the pricingForService method. Note the usage aggregator doesn’t currently know what period it is aggregating, as it is just passed the usage data from that period. So this is another change. The Pricing class now needs to return a proper structure based on the time. So I may just start by adding deltas (price changes) to each Pricing class. If there are more substantial price changes I may have a dedicated pricing instance per service for each price change. If I look at my current pricing objects, this change affects the DataTransfer-Out-Bytes for ec2, rds, sdb, and sqs. Clearly I forgot to implement the proper pricing for S3. I’ll fix that next week.
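Here is a sketch of what the date-aware lookup could become (effectiveDate, priceChanges, and basePricingFor are my assumptions about where the deltas would live, not existing code):

// return the pricing structure in effect at the start of the aggregated period;
// assumes the changes are registered on the base structure, sorted by date
public static function pricingForService(service:String, region:String, periodStart:Date):BasePricing {
    var pricing:BasePricing = basePricingFor(service, region); // hypothetical lookup of the base structure
    for each (var change:BasePricing in pricing.priceChanges) {
        if (change.effectiveDate.time <= periodStart.time)
            pricing = change; // the latest change on or before the period start wins
    }
    return pricing;
}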

5:45pm After a good coding session with Cameron, I’m now back at Panera to code the historical pricing.

7:12pm I can now register a simple price change for one usage type and chain multiple price changes:

var february2010Change:BasePricing = base
    .withPriceChange('RunInstances*DataTransfer-Out-Bytes',
        [GIGA, [{unit:TB10, price:0.15}, {unit:TB40, price:0.11}, {unit:TB100, price:0.09}, 0.08]])
    .withPriceChange('RunInstances*USW1-DataTransfer-Out-Bytes',
        [GIGA, [{unit:TB10, price:0.15}, {unit:TB40, price:0.11}, {unit:TB100, price:0.09}, 0.08]])
    .withPriceChange('RunInstances*EU-DataTransfer-Out-Bytes',
        [GIGA, [{unit:TB10, price:0.15}, {unit:TB40, price:0.11}, {unit:TB100, price:0.09}, 0.08]]);

Also, if you look closely, I don’t have my price ranges correct yet: TB10, TB40, TB100 should be TB10, TB50, TB150 to reflect the progressive pricing. All right, let me fix that. Fortunately it was just a matter of replacing two constants and running the tests to ensure everything works properly. Done!

All in all the progress is good, but now I need to determine the region codes for s3 and sdb and fix the sdb pricing structure. I need to do that on a fresh day, but then I should be good to start a beta. For the beta I’ll assume I start using Shibuya, even though I consider it too rough to use at the moment. The other vendor I was considering is http://e-junkie.com, which besides an unfortunate name seems to offer a good service for selling electronic goods online (like software, for example). I checked them out after reading this blog entry from Balsamiq (http://www.balsamiq.com/blog/2009/10/30/tools/). So I may give Shibuya a chance, but they are currently not very responsive on their forums and time is running out.

8:15pm I’m doing some research on marketing…or how to find people who might be interested in getting the application. I’m checking out “The Business of Software” forum run on the Joel on Software website. Phil mentioned he found a great community over there to discuss these kinds of issues. I’ll poke around the forum and get familiar with it. Later I may ask what people think of my idea of a log analyzer.

Some interesting posts (in no specific order and not even necessarily relevant to my product):

I should post a specific message on the forum. Also, it’s gonna be interesting to see how people react to an AIR application. It didn’t hinder sales for Balsamiq, but Peldi did an awesome marketing job besides having a great product.

All right, one more bug fix before going home…The time of the usage is displayed in charts and in datagrids and used to just contain the plain hour of the day, i.e. 14, so it wasn’t obvious what the user was looking at. I changed that everywhere (2 places) and now the time is displayed including the minutes, as in 14:00. In fact the minutes are always :00, as Amazon aggregates by the hour, but this makes the whole UI easier to read.

8:50pm Moving home. I started my day at 5am with 90 minutes of yoga followed by some abs exercises at my chiro’s gym…so time to relax!

Stay tuned!

Feb 03

February 3rd - Incremental Price Calculation (cont’d)

18:03 Today: adding tests for incremental price calculation. Last Monday I identified all the places where I need to change my code, so let’s dive right back in.

So for the following price table:

First 1 GB of data transferred out per month is free; thereafter:

I wrote the following test that ensures the proper price is used when the cumulated data reaches the next tier:

public function testRdsPricing():void {
    var pricing:BasePricing = new RdsPricing();
    var context:Object = {factor:pricing.getContext().factor, pricing:pricing.getContext().pricing};

    var price:Number = pricing.calculatePrice(null, 'DataTransfer-Out-Bytes', 1*BasePricing.GIGA, 0, context);
    assertEquals(0.17, price);

    price = pricing.calculatePrice(null, 'DataTransfer-Out-Bytes', 2*BasePricing.GIGA, 0, context);
    assertEquals(0.34, price);

    price = pricing.calculatePrice(null, 'DataTransfer-Out-Bytes', 1*BasePricing.GIGA, BasePricing.TB40, context);
    assertEquals(0.13, price);

    price = pricing.calculatePrice(null, 'DataTransfer-Out-Bytes', 1*BasePricing.GIGA, BasePricing.TB100, context);
    assertEquals(0.11, price);

    price = pricing.calculatePrice(null, 'DataTransfer-Out-Bytes', 1*BasePricing.GIGA, BasePricing.TB100+1, context);
    assertEquals(0.10, price);
}

The fourth parameter (0, BasePricing.TB40, BasePricing.TB100, …) is the cumulated usage value the price aggregator will pass to the calculatePrice method; it is used to determine the price bracket.

So the test seems to produce the correct prices, so apparently I can assume that I’m done with the incremental price calculation. If only I had some larger usage logs. There are a few more scenarios that I need to test; also, while running the application, the SimpleDB pricing is calculated wrong. So I’ve added another test and just found a small bug in my price definition structure. The “TimedStorage-ByteHrs” was calculated wrong. Amazon has the following example of how to calculate this usage:

Conversion to Total GB-Months: 1,481,763,717,120 Byte-Hours x (1 GB / 1,073,741,824 bytes) x (1 month / 744 hours) = 1.85 GB-Months
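That example translates directly into a small helper, with the constants taken straight from Amazon’s arithmetic above:

private static const BYTES_PER_GB:Number = 1073741824;  // 2^30
private static const HOURS_PER_MONTH:Number = 744;      // Amazon's example assumes a 31-day month

// convert a TimedStorage-ByteHrs usage value into billable GB-Months
public static function byteHoursToGBMonths(byteHours:Number):Number {
    return byteHours / BYTES_PER_GB / HOURS_PER_MONTH;
}

// byteHoursToGBMonths(1481763717120) returns ~1.85, matching Amazon's example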

20:39 It was a little more involved after all, as the usage type units were expressed in GB and the price structure in chunks of TB, so I needed to add a new conversion factor. So I’m getting closer to the correct calculation, but I just realized that I need to perform the same exercise I did last week with EC2 for SimpleDB, as their pricing structure also defines different pricing on a per-zone basis. Ah, the joy of reverse engineering how Amazon calculates its prices…I didn’t think it would take me this long. But that’s enough for today; let’s play a little with other parts of the application.

Here is a first shot at the logo…before giving it to my designer. The idea is to somewhat recall the cubes used in the Amazon Web Services logo, but in the form of a chart…Usually my designer has awesome ideas, so let’s see what he comes up with once I bring him on board. For now, here is the logo:

21:00 Time to sign out. Next time I’ll need to add price history, especially given the recent price change.

Feb 02

AWS Lowers Outbound Data Transfer Pricing

The new outbound data transfer pricing, across all services, in all usage tiers, and in all Regions, will be:

These changes are effective February 1, 2010.

What does this mean for AwsUsageAnalyzr? Simply that I will need to add historical price data right away.

Feb 01

February 1st - Incremental Price Calculation

18:43 Today was a different Monday, as I had several meetings in downtown Denver for a cool new Flex and maybe Rails project I’m starting with ThoughtEquity. So I will have to catch up on AwsUsageAnalyzr this week.

Last week I added support for parsing usage logs with zones and architectures, and this will require that I add these to the low-level aggregation so that you can see, for example, how the price is aggregated for a specific zone. As zones currently seem to only apply to the EC2 infrastructure, I could also show the aggregation at the UI level. Let me think about that. Also, I’m not sure if the architecture (windows, linux, …) should be used for aggregation.

Ok, I’ll start by fixing a few small issues, then I’ll be adding the incremental price calculation. A few fixes went in. Then I’m still bugged by Shibuya, as I’m not sure it currently supports coupon codes, and I would love to provide the software for free to several types of users (those who blog about it, those who helped during the initial beta (to come soon…)). So I looked at e-junkie.com and kagi.com, and of course was thinking about creating my own solution. The license system I prefer is where a user can enter a license number that was generated based on their name. So something more based on the honor system rather than relying on some server validation. This is common in many OSX applications and seems to work pretty well. So the only thing needed is that when a user buys the application, an email is sent with that license code. Then if I want to give a free version to someone I can just generate the code myself. Done. A rough sketch of what such a check could look like is below.
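On the Flex side this could use the SHA1 class from as3corelib (the salt and the key format here are placeholders, not a real scheme):

import com.adobe.crypto.SHA1; // from as3corelib

// honor-system check: the license code is a truncated hash of the user's name plus
// a secret salt; the store-side Ruby generator would produce codes the same way
private static const SALT:String = "..."; // placeholder, not the real secret

public static function isValidLicense(name:String, code:String):Boolean {
    var expected:String = SHA1.hash(name.toLowerCase() + SALT).substr(0, 16).toUpperCase();
    return code.replace(/-/g, "") == expected;
}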

20:39 Onto the incremental price calculation. The best way to code this is to write a few unit tests. For example, for SimpleDB you have the machine utilization priced as follows:

First 25 Amazon SimpleDB Machine Hours consumed per month are free

$0.140 per Amazon SimpleDB Machine Hour consumed thereafter

And the data transfer out:

First 1 GB of data transferred out per month is free; thereafter:

So this is based on monthly usage, where the more you use during the month, the cheaper the price gets. Internally I have the following data structures that define these prices:

'BoxUsage'                           :     [ ONE, [{hour:25, price:0}, 0.140]],
'DataTransfer-Out-Bytes'    :     [ GIGA, [{unit:TB10, price:0.17}, {unit:TB40, price:0.13}, {unit:TB100, price:0.11} , 0.10]],

And internally I was just using the first price:

var usagePrice:Number = priceStructure is Array ? priceStructure[0].price : priceStructure as Number;    //FIXME: instead of [0] use cumulateUsage

I also have the cumulated usage (cumulateUsage) available, so I just need to verify which bracket that usage falls into; a quick sketch is below.
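Something along these lines should do; a sketch against the bracket array shown above (I’d still have to reconcile the {hour:…} vs {unit:…} key names across my structures):

// walk the brackets until the cumulated usage no longer fits under the tier ceiling
// (the ceiling value itself still belongs to its tier); the final entry is a plain
// Number and applies to everything beyond the last ceiling
private function priceForBracket(brackets:Array, cumulateUsage:Number):Number {
    for each (var tier:Object in brackets) {
        if (tier is Number) return tier as Number;
        if (cumulateUsage <= tier.unit) return tier.price;
    }
    return 0; // unreachable when the structure ends with a plain Number
}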

All right, it’s getting a little late and I’m moving home…Arrg…I had a long todo list that I couldn’t postpone. So I’ll be back another day.