About this blog

'Going Spatial' is my personal blog; the views on this site are entirely my own and should in no way be attributed to anyone else or taken as the opinion of any organisation.

My tweets on GIS, Humanitarian, Tech, Games and Randomness

Thursday 28 November 2013

Velocity 2013 - speed me up!

Go faster baby! Velocity 2013


'Velocity 2013 is the conference where people talk about how to get things done (fast) in the real world - if you want to know how the best in the world handle their Operations, Velocity is the place to learn.'

Three days of concentrated focus on the key aspects of web performance, operations, and mobile performance: Velocity is the place to learn.

There you go; that was the strap line.

This is the second time that I have attended in as many years and I find the material and the speakers very compelling and interesting. Certainly, in the work that we do in GIS, the lessons and processes are not that much different from those of many other IT companies: we wrestle with the same configuration management issues as everyone else.

The first day of this conference clashed with the last day of the Esri European Developer Summit, so I had to miss the Esri one and the tea and scones they were serving that last day!

Highlights

http://www.flickr.com/photos/oreillyconf/10860068113/
Copyright: Velocity2013


Bring the Noise: Making effective use of a quarter of a million metrics - Jon Cowie from Etsy

A definitely interesting talk about Etsy.com's capture and use of 250,000 metrics on a regular basis. Because they practise continuous deployment, they have far too many metrics to watch by hand, so they rely on anomaly detection instead: they stream the metrics to memcache, developed an 'anomaly description alphabet' and used this to detect anomalies as well as to provide insight into future events / searches. The tooling includes Graphite, Ganglia, MessagePack, StatsD, Carbon Relay, and Skyline and Oculus (together known as Kale). All on GitHub.

The highlight of the talk was Jon's 'Four Horsemen of the Apocalypse' (when it comes to monitoring...): Too Many Parameters, Normality, Spike Influence, and Seasonality.

Useful Links:

Graphite (a scalable realtime Graphing tool) - http://graphite.wikidot.com/
Ganglia (scalable distributed monitoring system) - http://ganglia.sourceforge.net/

Friday 22 November 2013

Esri European Developer Summit 2013 - my thoughts...

The presenters did not have to dress like this.....

Introduction

The last week has been a very crazy and busy one: with the Esri European Developer Summit venue confirmed as London, it meant that I could catch up with the latest in Esri software development. I did not get a chance to attend the 'main' developer summit in Palm Springs earlier this year, so I was very happy to have this opportunity. Of course, like all good things, the Developer Summit this time round clashed with the start of the Velocity 2013 European conference and the start of the Amazon Re:Invent conference.

Here are my highlights and observations. These points (like this blog) are entirely my own, and if you disagree or want clarification, please contact me. Also, follow me on @wai_ming_lee for random updates via Twitter.

I was also asked to present at the summit, something that I was not expecting, but what the hell! My subject was 'ArcGIS Server for Amazon Web Services', an area that I was comfortable with and could deliver with some enthusiasm. Two days prior to the start of the event, I decided to dive in deep and attended a two-day course on Python, also organised by Esri Inc as part of their pre-summit hands-on training. How did I find it? Brilliant, and after a near brain melt-down I was ready for the first day of the main developer summit. This is the second European Developer Summit that I have attended and it is always smaller in scale than the International Developer Summit held in Palm Springs. I have attended the international one at least twice in the last four years. However, having the Dev Summit on home turf once again was a good thing for all those based in the UK!

Don't break the glass unless you absolutely have to! Think of the kittens!
The first day started with a fire alarm going off, so all the summit delegates, presenters, hotel guests and hotel staff bundled out into the (light) rain. Not a good start. A colleague did say she could smell burnt toast as we walked towards the plenary; I wonder if that was the cause?

The plenary, once resumed, was typical of previous developer summits: punchy, to the point, with little in the way of a sales pitch (come on, it's the developer summit), all MC'ed with some style by Esri Inc's Jim McKinney and Jim Barry.

Stuff I found interesting

Highlights for me? Plenty but here's a few that I wanted to put some words to:

Who has a smart phone? I have.....
  • ArcGIS Collector (offline, and January 2014...). This got a cheer for sure. Having a mobile application remain stable when it is disconnected from the network (WiFi, 3G, 4G) is a wonderful thing. Not everyone can maintain a high quality signal at all times, and one should be able to continue to edit, pan and zoom (within a reasonable area) and use the application in its new disconnected state, seamlessly. Once connected back to the network, everything should seamlessly re-integrate. Looks like Esri Inc has made some firm steps in the right direction.
Let me do another fly by......
  • 3D Web GIS / CityEngine. This workshop had a high wow-factor for me; I love the CityEngine addition to ArcGIS, and incorporating the third dimension into GIS is a natural evolution. What CityEngine has given Esri Inc is a much easier and more seamless way to create 3D city content, complete with parametric, dynamic and procedural editing. Great job, Esri!
http://marketplace.arcgis.com/
  • ArcGIS Marketplace. The ArcGIS Marketplace went 'live' this year, and I am expecting a lot of changes as this area of the business starts to mature and evolve. I can't fault the idea of a self-service marketplace where applications, data (and services perhaps?) are advertised. Esri and non-Esri organisations alike can advertise and place their wares on the marketplace. As far as I know, Esri Inc acts as the shop window / portal only and does not make any money off the successful sale of any non-Esri product, which is a very nice thing to do!
JARVIS - find me the nearest bad guy to pummel.....
  • Amber Case's 'Calm Technology'. The keynote speaker, Amber Case (@caseorganic - http://caseorganic.com/), was a very engaging and interesting speaker. Her keynote was subtle and nuanced, and I was actually quite blown away by it. Her term 'calm technology' is an excellent one for an all-pervasive, spatially-aware, wearable GIS application/tool that can interact with you when you want it, in the way you want it. While we're not quite up to the ease of use and familiarity we saw with 'J.A.R.V.I.S.' in Iron Man (see above picture), her vision of what we can do is very compelling. Still, Paul Bettany rocks.
Just like the Esri Inc Developer Summit Speedgeeking session - without the wine of course...
  • Speedgeeking. Heck yes; while I didn't attend this, I loved the idea. We all know what speed-dating is, right? You have a 3-5 minute 'blind date' with someone (who is also there for the same reason, not some random person yanked off the street!!) and then, at the ring of the bell, you move on to the next 'date', and so on until, after an hour or so, you have had about 8-10 'dates'; those that catch your fancy, you can reconnect with at a later date. It should really be called 'speed introductions', but anyway, speedgeeking is similar in concept: you rotate round a number of tables, each with a speaker/geek who will regale you for 3-5 minutes on their pet project, application, cool tool or iTunes music collection and then, at the sound of the bell, you move on.
Ahh - when graphic cards only had 2k of RAM... 
  • Vintage! Jim McKinney (@jmckgis) going back in time with some vintage Arc/Info. This was a bonus talk (one of the lightning talks on the first day) and I included it as a highlight because 1) he talked about Arc/Info, 2) there was a screenshot of ArcPlot, 3) there was some AML there and, finally, 4) because it was Arc/Info. Old school for sure and it got a big cheer from the crowd. Super stuff.
    Because ninja-cat like things are cool....
  • Esri Inc chucking stuff onto GitHub. This is a great thing for the Esri community specifically and the GIS world at large. Okay, we're not going to see the code-base for ArcGIS Server suddenly appearing, but you know, it would be nice if Esri Inc open-sourced the old ArcView 3.2a! Right? Let's get a campaign going for it. We need a #hashtag for it.....#freeArcView3 Anyway, check out the link for some Esri GitHub goodness.
It would be nice to get some free pizza with it? No?
  • SDKs are now free. All runtime SDKs are now free! No need for EDN. To get started, there's a developer subscription with some initial free credit. Woot!

Low-lights (only minor)
  • No dodge-ball :-( - I think this is more of a Palm Springs thing. The European Dev Summit needs something as well as an organised party! 
  • Having the lightning talks AT THE SAME TIME AS FREE BEER BEING SERVED! Bad idea. Talk about conflicting options and at the end of the day as well. 
  • Not enough seating for lunch. At least have some tables for the Geeks to stand around.
  • European Developer Summit clashing with the European Velocity 2013 conference and Amazon's Re:Invent conference in Las Vegas. Sad panda. 

My summary

A super event in my view, rivalling Esri UK's own user conference earlier this year (and ours was FREE! ;-) ). Clearly the Esri developers have been listening and putting a lot of new (and improved) goodies into the mix; I especially like the adoption of GitHub and an increased official acceptance of FOSS. ArcGIS Online continues to mature and evolve, and with the new disconnected editing in Collector and the dashboard there are some truly ground-breaking technologies right here, right now. Then we have the 3D stuff as well - can't forget that, as it is looming on the near horizon. Personally, I found the ability to freely mix with other developers, users and staff the main benefit. Putting a face to a name, and in some cases a name to a face, is a lasting one. Time to review all the other presentations that I missed and hope that I can get over to the International Developer Summit in Palm Springs next year.

Links to the presentations will be edited here:

Wednesday 9 October 2013

Is immigration an offense now?

I applaud the use of appropriate new technology for crime prevention but they probably need to take care of some of their crime classifications.....

https://crimestoppers-uk.org/get-involved/our-campaigns/tweetathon-247/

Immigration is a crime?

Typo alert, I think whoever is typing this in needs a spell checker or two..


Wednesday 25 September 2013

Public Cloud Provider going bust...so the sky must be falling right?

Run Forrest, run!
Recently, a large public cloud storage provider, Nirvanix, informed customers that it was going out of business and that anyone with data in the Nirvanix public cloud should pull it down as soon as possible. Am sure this would have sent massive shock waves around the cloud computing community, both users and suppliers.
There's no word of the impending doom on the website....

While I have no worries about the biggest players in the field (Amazon Web Services, Microsoft Azure, Google to name three) - I think Nirvanix will be the first of a few mid-range suppliers who will struggle to make it long-term.

Now, this makes for an interesting predicament: while most people at my company (and my peers) have embraced the cloud (public/private/hybrid), there's definitely a small but vocal minority who are now crowing 'I told you so'. Reading between the lines of the various press releases, it looks like there are a few Nirvanix customers who kept all their data with the supplier. More worryingly, there are other customers who are using Nirvanix resources without even knowing it, as they might be contracted through a Nirvanix Managed Service Provider. Either way, the situation is not looking too rosy for them at the moment.

One of our disaster recovery scenarios involves AWS going offline (an EMP strike? a malicious insider-led hack of the DNS servers? a slow-burning virus/malware corrupting the data?) or AWS going out of business. The likelihood of this scenario coming to pass for AWS is low, but then again, am sure Nirvanix customers thought the same thing.

Of course, hindsight is always 20/20 and the view in the rear-view mirror is always clearer than through the windscreen (I took this off Warren Buffett), but I hope this doesn't take the wind out of the cloud-computing sails. For all our cloud services, we have our data in at least two places. One of them is not with AWS, and we're seriously considering a third option of mirroring our data onto either Azure or Rackspace for redundancy. In fact, we're now starting to regard each cloud provider as a 'commodity', and this revelation pulls us neatly into the orbit of multi-cloud management solution providers.

Hmmm which cloud shall I use today?
In my view, what has happened with Nirvanix is an exception rather than the rule. However, it is interesting that a fair number of businesses do not want to work with the bigger cloud providers, citing reasons such as 'too complicated', 'not flexible enough' and the customer requiring very bespoke arrangements, for example complying with EU Safe Harbour regulations.

I wonder what happened with Nirvanix? I know that the main cloud providers have been engaging in a price war that is making some providers very attractive options for those looking for alternatives; it looks like Nirvanix simply ran out of cash. I think prices will continue to drop, which is great for consumers, but this could lead to last-man-standing syndrome with only the biggest providers remaining. Am sure this would fall foul of one competition commission in one region or another! The mid-range cloud providers offer some very niche, very cool options that the larger providers may not have in their repertoire, so for the sake of variety I hope the mid-range providers survive and thrive. For customers: best to spread your risk, use more than one cloud provider and include your own on-premise resources. It might mean more spend but it is the cost of doing business; consider it an insurance policy. For me? Am going to start a new backup job right now and migrate some of the crucial business data off to another provider.....

Friday 13 September 2013

Resizing CentOS 6.4 Partitions in VirtualBox 4.2.16

As part of my day to day messing around, I run a lot of VMs on VirtualBox. I play with new stuff, install new software, reinstall old software and do all sorts of crazy things. This experimentation takes time and disk space and sometimes, I just don't have enough space.

I had CentOS 6.4 installed on VirtualBox 4.2.16 and I was running out of space. I had 8.0GB in total and thought it would be enough. Nope. I wanted to install the latest version of Portal for ArcGIS 10.2 and ArcGIS Server 10.2 and other bits, and well, 8.0GB wasn't going to cut it. I could have spent some time removing a lot of 'test' software on the CentOS 6.4 VM, but I already had a skinny version of the OS. Anyway, disk is cheap.

So, I wanted to increase the 8.0GB to 20.0GB WITHOUT having to reinstall the whole OS! It shouldn't be too difficult to resize the Linux partition now, would it?

Ordinarily, if this were a physical machine, it would be quite easy, but with it already running inside a virtual machine I reckoned it would be a little bit difficult....and it was.

For all those who have struggled; here's how I did it. Ironically, it is easier in Windows but it's all part of the learning experience. 

What I had and what you (probably) need:

  • VirtualBox 4.2.16 (you should be running this already)
  • Your CentOS 6.4 OS (already installed - you want to expand this right?)
  • Clonezilla 1.2.8
  • Gparted (or SystemRescueCD 3.2)

The steps broadly are:

You have a VM called 'CentOS' - with an 8.0GB VDI disk attached.

1. Create a new virtual disk of the desired size in VirtualBox. I created a new 20.0GB disk.
2. Using Clonezilla, attach the ISO to the VM and boot into it. Clone the current CentOS 6.4 disk to the new 20.0GB disk.
3. Boot into Gparted or SystemRescueCD and resize the existing partitions to fit into the new space.
4. Issue a few commands that extends the file system to fill the new size.
5. Mount the new virtual hard disk and boot into it.
6. New space should be there.

Now in detail.....

==Create your new disk in VirtualBox==

In VirtualBox, select your VM in the left-hand table of contents and choose Settings. In the settings window that pops up, scroll down to the 'Storage' option in the left-hand table of contents. This is where you can add extra (or different) hard drives and change your DVD-ROM etc.

Make sure the controller is selected and choose the 'add hard disk' option.


You will be prompted with a new pop-up box asking you to either 'create new disk', 'choose existing disk' or 'cancel'. The option you should choose is 'create new disk'.

Choose 'VDI (VirtualBox Disk Image)' as the disk option and select 'next'.

Now it is time for the storage options - you have a choice of two and I usually go for Dynamically allocated to save disk space and time. Now click 'next'.

The next window specifies the file location and size of the new VDI disk. This usually confuses most people. First choose a name for the disk, then the size of the disk (i.e. 20GB) and then an appropriate location for your new virtual hard drive. The best place would be the same place as your existing VDI disk for the CentOS distribution.


Once done, the drive will be created and attached automatically to the CentOS VM's virtual hard drive controller. It will now be ready to use; this new disk will be the target of the next step, in which the smaller drive will be cloned onto it.
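If you prefer the command line, the same disk can be created and attached with VBoxManage. This is only a rough sketch: the file path, VM name and storage controller name below are assumptions, so substitute your own (VBoxManage showvminfo "CentOS" will tell you what your controller is actually called). The --size value is in megabytes, so 20480 gives you 20GB.

VBoxManage createhd --filename "D:\VMs\CentOS\CentOS-20GB.vdi" --size 20480
VBoxManage storageattach "CentOS" --storagectl "SATA Controller" --port 1 --device 0 --type hdd --medium "D:\VMs\CentOS\CentOS-20GB.vdi"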



==Boot into Clonezilla and clone your current disk to the new disk==

Right, time to clone - if you don't have a copy of Clonezilla, download the ISO.

Now attach it to the CentOS VM - you know, the same one with the new disk on it. If you can't remember how to do it in VirtualBox: click on the existing DVD/CD drive, click on the disk icon to the right of the CD/DVD Drive option and select the ISO. Tick the 'Live CD/DVD' check box to ensure that when the VM boots, it will do so from the attached ISO (aka Live CD/DVD) and not the virtual hard drives.



Once this is done, click 'OK' and then start the VM. You now have a new virtual hard drive attached and a bootable Clonezilla ISO. All looking good!
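Again, if you'd rather script it, something along these lines attaches the ISO - the ISO path and controller name are placeholders (many VirtualBox Linux VMs hang the DVD drive off the IDE controller, but check showvminfo first):

VBoxManage storageattach "CentOS" --storagectl "IDE Controller" --port 1 --device 0 --type dvddrive --medium "D:\ISOs\clonezilla-live.iso"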

Start up the VM and you should boot into Clonezilla, a very cool tool. Using Clonezilla, clone the existing disk image to the new image. The time it takes depends on the size of the disk. Once completed, shut down the VM, detach the Clonezilla ISO and detach the original VM disk. Switch the new VM disk to be the primary, bootable drive.

Try starting it up - it should boot.

Once booted, you will notice that, despite the extra disk space - your VM doesn't see it. Don't worry. The next steps will take care of this and resize the partitions without trashing your OS.

And it is also time to use another tool!

==Use GParted to resize the partitions without destroying your OS==
Now, time for another ISO download! Get GParted (http://sourceforge.net/project/downloading.php?group_id=115843&filename=gparted-livecd-0.3.4-11.iso&7005223) and mount it as a bootable disk alongside your new VM disk.

Boot into GParted and, using the GUI, resize your current partition to fill the entirety of the disk. What partitions, I hear you ask? Well, depending on how you have built the OS and mounted the drives, I would extend your / partition - this should mean everything below it will expand as well. Basically, most of the new applications you install will go into /usr or /etc/bin or something similar, while personal files and stuff should go into /home. Extend them how you wish - just use the extra space you now have available to you!

Once done - there is one more step to do!

Shut down your VM and disconnect GParted. Leave your new VM disk where it is.

==Using some command line magic, extend the current file system to fill this new space==
You have cloned the original drive to a newer and bigger drive, and you then extended the existing partition(s) to use the new space. One final step is to extend the FILE SYSTEM to fill this new space. Unfortunately, you can't do this (easily) when the machine is booted, so attach either SystemRescueCD or some other LiveCD (preferably CentOS) and boot into it. You will need access to the command line and, since this is all Linux, that is easy enough.

Let's see what we have available in the Volume Group.

/usr/sbin/vgdisplay
(note the free space in the Volume Group, which can now be assigned to a Logical Volume)

Alloc PE / Size 1596 / 8.00 GB
Free PE / Size 960 / 22.00 GB

Now we have to add that free space to our logical volume:
/usr/sbin/lvextend -l +100%FREE /dev/VolGroup00/LogVol00
Note: Yes, the attribute does say +100%FREE
Note 2: VolGroup00 on my machine = vg_centos and LogVol00 = lv_root
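Given that mapping, the equivalent command on my VM looks something like the line below - treat the names as placeholders and use whatever vgdisplay and lvdisplay report on your own machine:

/usr/sbin/lvextend -l +100%FREE /dev/vg_centos/lv_root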

So once the lvextend has completed, tap in:
/usr/sbin/vgdisplay

Alloc PE / Size 1596 / 30.00 GB
Free PE / Size 960 / 00.00 GB

Extending the logical volume means there's now more space, BUT your OS still can't see it until the file system is grown too. The 'resize2fs' command increases the size of the file system to use the new space; it can trigger online resizing of the live, mounted filesystem so the new disk space can be utilised immediately:

resize2fs -p /dev/mapper/VolGroup00-LogVol00

Now the system has more disk space to play around with.

df -h

Filesystem                       Size  Used  Avail  Use%  Mounted on
/dev/mapper/VolGroup00-LogVol00   30G  8.2G  22.7G   18%  /
/dev/sda1                         99M   19M    76M   20%  /boot
tmpfs                           1014M     0  1014M    0%  /dev/shm

Now that is so much better!   

We're almost done. Shut down the VM and detach the ISO, leaving just your new VM disk. Make this the primary disk and boot. With luck, the VM will start up, you will be able to log in and it will have the extra space you allocated.

Well done. Definitely a bit involved but immensely satisfying once you've done it!

Now go and enjoy all that new space.


Wednesday 11 September 2013

World's Biggest Data Breaches & Hacks (infographic)



World's Biggest Data Breaches and Hacks

I have spent a good part of my morning and afternoon exploring this simple but powerful infographic. Right now, I am tempted to renew all my credit cards and passwords (again).


Tuesday 30 July 2013

Bringing my old Kindle Fire back to life


Not my kindle, just a stock photo...


When I was over in the USA at a conference two years ago, I went to WalMart and while picking up gifts and all that, I came across the Kindle Fire (the original one that never made it to retail in the UK). I said 'why not?' to myself as it was nicely priced and I was going to use it for a while and then root it and slap on JellyBean or something esoteric if I got bored.

Inevitably, I was gifted an iPad soon after, so it was with some eagerness that I decided to root the humble Kindle Fire. Geeks gotta keep themselves entertained, you see.

However, this post isn't about that experience, as enjoyable as it was. No, since getting hold of the Nexus 4 (Google phone with Jelly Bean 4.2 goodness) and with my continuous reliance on the 'old' Kindle Keyboard, my Kindle Fire was looking a bit underused. I had it running Jelly Bean 4.2 as well, but with my Nexus 4 in my pocket, iPad in my hand and Kindle (Keyboard) in my bag....did I need another tablet?

The Old Kindle still rocks, with a battery life of a couple of weeks!
So, why not remove all the new stuff and slap on the latest Amazon Kindle OS? Revert it back to the original OS and maybe even sell it off? Or I could dive deep into Amazon's eco-system, grab Amazon MP3, Prime and other goodies, and use the Kindle for that? Ah, options. So off I went to find out more and, interestingly, there wasn't that much in the way of useful threads. Most people got the Kindle Fire, it being a relatively cheap tablet, and then promptly slapped on a customised Android OS...like I did.

However, after much faffing and reading, I figured it out (and the internet did help too!) - so rather than linking to about a dozen articles; here's my distillation of all that geeky goodness...

What you need already installed on your Kindle Fire:
  • TWRP 2.2 (Team Win Recovery Project) -- this software is great and you must have used it to root your Kindle Fire initially...
  • Android Jelly Bean 4.2 (of course) or whatever flavour you decided to install.
TWRP.......Twerp.......
Now going back to Kindle Fire OS:
  • Download a copy of the latest Kindle OS from the Amazon Kindle website. I used Version 6.3.2 but here's the link to the file. By the time this post is out, there's probably something newer already.
With the Kindle OS file downloaded, here's my how-to for restoring your Kindle Fire.
  1. Connect USB cable to your Kindle Fire.
  2. Mount it as USB Mass Storage mode.
  3. On your PC, rename the Kindle Fire software update bin file to 'update.zip'. TWRP looks for a zip file - the *.bin file will not be seen. (There's a quick command-prompt sketch of this step after the list.)
  4. Copy update.zip to the Kindle Fire SD card (root level) - I wouldn't bother with putting it into a folder.
  5. Turn off USB Mass Storage mode.
  6. Unplug USB cable.
  7. Power off Kindle Fire.
  8. Now power it on.
  9. Press the power button again until the power light turns orange.
  10. TWRP 2.0 Recovery will then be loaded.
  11. Now select 'Wipe'.
  12. Select Cache then Wipe cache.
  13. Select Dalvik Cache then Wipe dalvik-cache.
  14. Select Factory Reset then Factory Reset.
  15. Go back to 'Home'.
  16. Select 'Install' this time.
  17. Then select update.zip. You did upload this to the device right?
  18. Select Flash after selecting the file.
  19. After the flash, select 'Reboot System'.
  20. That’s all - your Kindle Fire should be back to what it was before you started to mess around with it...
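For step 3, if you'd rather do the rename and copy from a command prompt, it's something like the two lines below - the bin filename and the drive letter the Kindle mounts as are placeholders, so use whatever your download and your PC actually show:

ren kindle-fire-update-6.3.2.bin update.zip
copy update.zip E:\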
OK, now I'd better start to get used to this heavily skinned OS. Now, how on earth can I get this US-based Kindle Fire to work in the UK?

Thursday 25 July 2013

Setting up your windows box to run the EC2 CLI and then using it for ArcGIS Server!

Installing the Amazon EC2 Command Line Tools on Windows

Not quite my command line but kinda cool....
In my own style, I will describe how to install the Amazon EC2 command line tools onto a Windows 7 box. If you're using another version of Windows or Linux, the steps will be slightly different. I intend to install and configure the same tools on my Mint Linux box in the near future. Once completed, I am going to use a few easy-to-understand scripts to perform some automated steps against my ArcGIS Server 10.1 AWS instance. Soon you will be a guru, all from the comfort of the command line.

 

Why the command line?

Yes indeed, a very good question: why use the command line - surely everything is point and click (or look and blink)? Well, there are probably dozens of reasons but for me it is: quicker, easy to script (and hence batch up and repeat), gives more control over options and is just more efficient. You can also perform a lot of your work on a number of different computers and in different locations, for example via SSH to a Linux box. Finally, being able to master (or at least tame) the command line will give you, the user, more confidence over the box(es) in front of you.

There, pep talk over. Onto the installation steps!

Process for Installing the Command Line API Tools

Step 1: Download the Command Line Tools (API Tools) zip file
The command line tools are available as a ZIP file on this site: Amazon EC2 CLI Tools. The tools are written in Java and include shell scripts for both Windows and Linux. The ZIP file is self-contained and you simply download the file and then unzip it. I popped the files to D:\TOOLS\AWS but anywhere sensible will do.

Some additional setup is required before you can use the tools; you didn't think it would be that easy, did you? These extra steps are discussed next.

Step 2: Set the JAVA_HOME Environment Variable
The Amazon EC2 command line tools read an environment variable (JAVA_HOME) on your computer to locate the Java runtime. Environment variables make things so much easier; you avoid the need to tap out the entire path for a program to execute, which saves time and your keyboard! So get to it.

Step 3: To set the JAVA_HOME environment variable
If you don't have Java 1.7 or later installed then download and install Java. To view and download JREs for a range of platforms, go to http://java.oracle.com
  1. Set JAVA_HOME to the full path of the Java home directory - that is, the directory containing the 'bin' folder in which the Java executable (java.exe) lives. For example, if your Java executable is in D:\JAVA\BIN, set JAVA_HOME to D:\JAVA. (If you prefer the command prompt, see the setx sketch after this list.)
  2. Don't include the bin directory in JAVA_HOME! This is a common mistake, but the command line tools won't work if you do this! I know, I have tried this and learnt my lesson.
  3. Open a new Command Prompt window and verify your JAVA_HOME setting using this command:
     C:\> %JAVA_HOME%\bin\java -version
    If you've set the environment variable correctly, the output looks something like this.
    Love the Matrix-like colours
     Otherwise, check the setting of JAVA_HOME, fix any errors, open a new Command Prompt window and try the command again; also try some of the following:
    • Add the bin directory that contains the Java executables to your Path.
    • In System variables, select Path, and then click Edit.
    • In Variable values, add ;%JAVA_HOME%\bin before any other versions of Java.
    • Reboot?
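As promised, here's the command-prompt way to set JAVA_HOME - a sketch only, with D:\JAVA standing in for wherever your JRE actually lives. setx writes to your user environment (add /M from an elevated prompt for a system-wide variable), and only NEW Command Prompt windows see the change, so open a fresh one before running the verification line:

C:\> setx JAVA_HOME "D:\JAVA"
C:\> %JAVA_HOME%\bin\java -version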

Step 4: Set the AWS_ACCESS_KEY and AWS_SECRET_KEY Environment Variables
When you sign up for an AWS account, AWS creates access credentials for you so that you can make secure requests to AWS without the need to type in your username and password. You must provide these credentials to the Amazon EC2 command line tools so that they know that the commands you issue come from your account.

The following steps describe how you can view your access credentials before setting them up to be used by your scripts etc. Keep these credentials safe; if someone nicks them and uses them for nefarious purposes, it is on your head. To reduce the risk, create a separate AWS user account and allocate it just the resource access rights it needs and no more. There's another post coming on how I use AWS Identity and Access Management (IAM).

Step 5: To view your AWS access credentials
  1. Go to the Amazon Web Services website at http://aws.amazon.com.
  2. Click My Account/Console, and then click Security Credentials.
  3. Under Your Account, click Security Credentials.
  4. In the spaces provided, type your user name and password, and then click Sign in using our secure server.
  5. Under Access Credentials, on the Access Keys tab, your access key ID is displayed. To view your secret key, under Secret Access Key, click Show.
You can specify these credentials with the --aws-access-key and --aws-secret-key (or -O and -W) options every time you issue a command. However, it's easier to specify your access credentials using the following environment variables:
  • AWS_ACCESS_KEY - your access key ID
  • AWS_SECRET_KEY - your secret access key
If these environment variables are set properly, their values serve as the default values for these required options, so you can omit them from the command line. This saves typing them in each time AND keeps the keys out of your scripts, as they are now referenced only as environment variables. However, definitely keep a copy of both keys somewhere secure, and not on a post-it note!

Right, let's create the environment variables that specify your access credentials (there's a one-line setx alternative sketched after the list below):

Step 6: To set up your environment variables (it's dead easy...)
  1. On the computer you are using to connect to Amazon Web Services, click Start, right-click Computer, and then click Properties.
  2. Click Advanced system settings.
  3. Click Environment Variables.
  4. Under System variables, click New.
  5. In Variable name, type 'AWS_ACCESS_KEY'.
  6. In Variable value, specify your access key ID.
  7. Under System variables, click New.
  8. In Variable name, type 'AWS_SECRET_KEY'.
  9. In Variable value, specify your secret access key.
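And the setx shortcut mentioned above - the two values are obviously placeholders for your own access key ID and secret access key, and as before only new Command Prompt windows will pick the variables up (add /M from an elevated prompt if you want them system-wide):

C:\> setx AWS_ACCESS_KEY "YOUR-ACCESS-KEY-ID"
C:\> setx AWS_SECRET_KEY "YOUR-SECRET-ACCESS-KEY"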
Step 7: Set the EC2_HOME Environment Variable
The Amazon EC2 command line tools read an environment variable (EC2_HOME) to locate their supporting libraries. You'll need to set this environment variable before you can use the tools with confidence from the command line. You can do it.

Step 8: To set the EC2_HOME environment variable
  1. Click Advanced system settings.
  2. Click Environment Variables.
  3. Under System variables, click New.
  4. In Variable name, type EC2_HOME.
  5. In Variable value, type the path to the directory where you installed the command line tools. For example, C:\AWS\EC2\ec2-api-tools-1.5.4.0 (or do it in one line with setx, as sketched after this list).
  6. Open a new Command Prompt window and verify your EC2_HOME setting using this command:
     C:\> dir %EC2_HOME%
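The one-line setx version, reusing the example install path from the list above - swap in wherever you actually unzipped the tools:

C:\> setx EC2_HOME "C:\AWS\EC2\ec2-api-tools-1.5.4.0"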

If you get a File Not Found error, check the setting of EC2_HOME, fix any errors, open a new Command Prompt window, and try the command again. Typos mate, I reckon - it's typos....

Next, add the bin directory for the tools to your system Path environment variable. The rest of this guide assumes that you've done this. You can update your Path as follows:
  1. In System variables, select Path, and then click Edit.
  2. In Variable values, add ;%EC2_HOME%\bin.
Open a new Command Prompt window and verify your update to the Path environment variable using this command:

C:\> ec2-describe-regions

If all your environment variables are set correctly, you'll see output that looks something like this.
If you get an error that this command is not recognized as an internal or external command, check the setting of Path, fix any errors, open a new Command Prompt window, and try the command again. Trust me, it's the path variable; there's a typo!

If you get an error that required option -O is missing, check the setting of AWS_ACCESS_KEY, fix any errors, open a new Command Prompt window, and try the command again.

If you get an error that required option -W is missing, check the setting of AWS_SECRET_KEY, fix any errors, open a new Command Prompt window, and try the command again.
Step 9: Set the Region
By default, the Amazon EC2 tools use the US East (Northern Virginia) Region (us-east-1) with the ec2.us-east-1.amazonaws.com service endpoint URL. If your instances are in a different region (like ours), you must specify the region where your instances reside. For example, if your instances are in Europe, you must specify the eu-west-1 Region by using the --region eu-west-1 option or by setting the EC2_URL environment variable.

This section describes how to permanently specify a different region by changing the service endpoint URL.

Step 10: To specify a different region

  1. To view the available Regions, see Regions and Endpoints in the Amazon Web Services General Reference.
  2. To change the service endpoint, set the EC2_URL environment variable.
  3. The following example sets EC2_URL to the EU (Ireland) Region:
     • On the computer you'll use to connect to Amazon Web Services, click Start, right-click Computer, and then click Properties.
     • Click Advanced system settings.
     • Click Environment Variables.
     • Under System variables, click New.
     • In Variable name, type EC2_URL.
     • In Variable value, type https://ec2.eu-west-1.amazonaws.com.
Boom, you're in the EU region!
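If you'd rather not touch the GUI at all, the same change from the command prompt looks something like this, and the --region option is handy for one-off commands against a different region (again, setx only affects new Command Prompt windows):

C:\> setx EC2_URL "https://ec2.eu-west-1.amazonaws.com"
C:\> ec2-describe-regions --region eu-west-1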

Basic Amazon EC2 scripting examples

Right, yes - you're now on the way to being a command line guru. However, let's get something useful and ops related. The command line is there to help you execute your regular ops more efficiently. So, below are a few simple scripts for some of our common workflows:

  1) stop an instance (our regular 18:00 evening script),
  2) start an instance and associate an elastic IP (our regular 'Good morning' script),
  3) stop multiple instances and then terminate them.

Stop an instance

The simplest script is one that stops an already running instance. You can use the AWS dashboard, but what if you need to stop it in the evening? Of course, you can log in each night, but you must have better things to do; I know I would prefer playing computer games in the evenings, safe in the knowledge that my scripts are executing my commands automatically. This simple example calls the EC2-STOP-INSTANCES command and passes the ID of the instance that I want to stop. Also, make sure that the AWS credentials you are using have the necessary permissions - you need to get into AWS IAM to manage this and I will put together a new post on it. For now, am assuming you're an IAM expert, or know someone who is. Back to the script. I'm using sample values for instance IDs and other parameters; simply substitute your own information for the placeholders in the examples below.

ec2-stop-instances i-abcd1234

Easy, right? In this case we just specify the stop command and an instance ID, and the instance is stopped. The instance ID is specific to your instance and is NOT the host name of the instance, nor is it the public DNS name. You need to check the instance ID. Check out the screenshot below:
'scuse the red ink but I love using paint...
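To turn this into the regular 18:00 evening job mentioned above, drop the command into a small BAT file. A minimal sketch - the instance ID, file name and log path are all placeholders, so use your own:

@echo off
rem stop-evening.bat - stop the ArcGIS Server instance at the end of the day
call ec2-stop-instances i-abcd1234 >> C:\AWS\logs\ec2-evening.log 2>&1

Scheduling it to run automatically is covered further down this post.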
Ok, now let's try something a little more interesting.

         

Start an instance and associate an elastic IP

This next script starts an instance, then associates an elastic IP address with that instance. This is a very common workflow because when you stop and start an EC2 instance, its IP address changes. Don't ask me why, but it does - neither the public IP address nor the FQDN is permanent when it comes to EC2 instances. However, you may want to permanently 'anchor' an IP address, and an Amazon Elastic IP gives you a way to maintain an unchanging address for your instances. The only requirement is that whenever you stop and start the instance, you have to re-associate the address. Here's the script to do it:

call ec2-start-instances i-a5cw1214
timeout 300
call ec2-associate-address 0.0.0.0 -i i-qwesdff4
call ec2-reboot-instances i-aee1884

The first line calls the EC2-START-INSTANCES command, which has a similar syntax to the EC2-STOP-INSTANCES command shown above. In this case, since we want to run several commands in sequence, we run each EC2 command in its own shell using the CALL command. The EC2 batch commands are actually shell scripts that run Java code, and CALL is needed to return control to the original batch file. Yes, my Padawan, the scripting starts to get more useful. If the CALL command is not used, the batch process terminates after the first EC2 command (ec2-start-instances) and the rest of the commands are never executed, and that is a bad thing. There are other options but this is the most elegant.

What else? The TIMEOUT command waits five minutes (300 seconds) to give the instance time to start. That is quite generous, but some instances that we have take even longer, so do your research. When an instance is first started, its status is 'pending' until the EC2 startup process is complete; it then hands off to the next process and the status changes to 'running'. An elastic IP can only be associated with a running instance.

Next, EC2-ASSOCIATE-ADDRESS is used to associate the elastic IP with the instance and, finally, the last command in this script reboots the instance. Rebooting is required for ArcGIS Server to recognise the elastic IP. Rebooting is not needed for an enterprise geodatabase instance.

         

Start/stop several instances

Of course these commands can be chained together to stop or start several instances. One way is to add a separate command for each instance; however, the EC2 commands enable you to reference several instances in the same command. This is very handy. Saves typing. See the example below, which would stop three instances:

ec2-stop-instances i-aopl4434 i-yygh1234 i-tykl2334

         

Scheduling scripts

Once you have some scripts to work with your EC2 instances, you can use operating system tools to schedule them to run at regular times (such as Friday nights or Monday mornings). For example, Windows Task Scheduler is available on the various desktop and server editions of Windows and gives you an easy-to-use GUI environment for scheduling a script.

You can set a BAT file to run in the Task Scheduler by creating a Basic Task (Action > Create Basic Task). Once you name your task and set the schedule, you'll be prompted to select an action to perform. Choose Start a Program and point to your BAT file. Your script is now set to run.

Keep in mind that you can always launch a script manually by just double-clicking it, or by running it from the Task Scheduler. An advantage of running the script from the Task Scheduler is that a history is recorded of when the task was run; this information is accessible in the properties of any task under the History tab.
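If you prefer to create the scheduled task from the command line as well, schtasks will do it. A sketch, with the task name and BAT file path as placeholders (this pairs nicely with the stop-evening.bat sketch earlier in the post):

C:\> schtasks /Create /TN "EC2 evening shutdown" /TR "C:\AWS\scripts\stop-evening.bat" /SC DAILY /ST 18:00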

         

Summary

In summary, Amazon EC2 includes scripting tools that can help you automate your work. Scripts make it easier to administer your servers remotely and, in many cases, allow you to cut costs. Once you have a useful script, you can use operating system tools to run it automatically.

This post is meant as an introduction to scripting, but going further you can use scripting to launch or destroy new instances, create security groups, attach volumes, and so on. Stay tuned to this blog for more scripting tips.

Wednesday 26 June 2013

Cool map / infographic from the BBC and Lizard-men?

Some very interesting findings...
When is a map not an infographic, and vice versa? I like the above graphic from the BBC; click on the link that follows: http://www.bbc.co.uk/news/business-22688596

Also, something funny from one of my favourite sites, Information is Beautiful. Check it out here or click on the graphic.


Very cool....
Love it.