About this blog

'Going Spatial' is my personal blog; the views on this site are entirely my own and should in no way be attributed to anyone else or taken as the opinion of any organisation.

Friday 26 September 2014

Shellshock or SHELLSHOCKED!


Oh no, here we go again

Well, there's plenty of fast-paced news and updates about the latest vulnerability to assail the internet (and open source / Linux in particular, it would seem). Hot on the heels of the 'Heartbleed' bug, we now have a new vulnerability dubbed 'Shellshock'. There are good articles from The Register and one from Troy Hunt, both with plenty of decent background material. Mr. Hunt was top of my search list; I am sure there are others out there.

It is a weakness in the well-known *nix shell 'Bash'. A shell is one of a number of interpreters that use the command line to interact with the system, as well as parse scripts and other things. Bash has been around for ages (I remember using it at university) and is the default shell on Linux and Mac OS X. So it is everywhere, and more or less everyone is affected.

The bug is that Bash carries on processing shell commands placed after a function definition in an environment variable. This means that someone can add extra commands to the end of an existing legitimate value, and there's a chance they will get executed. Basically, a command over-run.

What is even more scary, from what everyone has been saying, is that it has probably already been exploited and the security community has only just caught on. Christ, how long?
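To see why this is so nasty over a network, remember that web servers hand HTTP request headers to CGI scripts as environment variables, so a single crafted header can carry the payload. A rough sketch of the kind of probe being reported (the host and script path are made up for illustration):

curl -H "User-Agent: () { :;}; /bin/cat /etc/passwd" http://example.com/cgi-bin/status.sh

On a vulnerable box, Bash executes the appended command while the CGI script is being set up - no login required.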

Let me check to see if MY pants are down


So. I just carried out a 'before' and 'after' on one of our CentOS boxes:

1. Fire up a shell (make sure it is Bash)
2. Enter the following (no need for sudo):

   env x='() { :;}; echo vulnerable' bash -c "echo this is a test"

3. If you run it and get 'vulnerable' followed by 'this is a test', then patch immediately
4. If you get only 'this is a test', then the patch worked or you were never vulnerable


So how do I patch for this? 

Quite easy, actually.

Using yum or apt-get, you can easily update Bash and fix the vulnerability.

Yum

sudo yum update

then 

sudo yum update bash

Apt-get

sudo apt-get update

then 

sudo apt-get install --only-upgrade bash
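Once patched, it is worth double-checking which Bash you ended up with before re-running the test above. A quick sanity check, nothing clever:

rpm -q bash          (CentOS / Red Hat)
dpkg -s bash | grep Version          (Debian / Ubuntu)

Then repeat the env one-liner from earlier and make sure you only get 'this is a test'.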

Friday 29 August 2014

Growing EBS Volumes


I have been using AWS for a while, but some simple things never came across my desk until quite recently: the need to expand the size of the C drive on my Windows images. Ordinarily, with an on-premises server, you would shut it down, remove the C drive, clone it onto a bigger disk, replace it and restart. Or start off with a bigger-than-necessary hard drive in the first place.

Anyway, the principle of removing, cloning and replacing a volume in AWS is very similar to the physical process, without the need to whip out a screwdriver.

As with a physical machine, it is not possible to change the size of an EBS volume while the instance is running, so it is best to shut it down first. Interestingly, I have found it is not possible to resize an EBS volume to something smaller in an effort to save cost: any EBS volume created from a snapshot must be at least as big as the original snapshot.

Here are the steps I took to expand the volume.

  1. Stop the instance that has the EBS volume you need to expand. This ensures that all data has been completely written to the volume.
  2. Back up your instance to an AMI. One can't be too careful - who said we are paranoid?
  3. Record the device name that the volume is attached to on your instance, for example /dev/sda1. This information is available in the AWS Management Console, on the Volumes page, under the 'Attachment Information' column.
  4. Create a snapshot of the volume and wait for the snapshot to complete. This could take anywhere from five minutes up to a few hours.
  5. Now create a new volume from the snapshot you just made in step 4, specifying a new (and larger) size. Wait for the new volume's status to be 'available'. Again, this should only take a few minutes, but be patient. I find the AWS CLI gives a more accurate view than the dashboard (see the sketch after this list).
  6. With the new volume created, detach the old volume from the instance. Do this on the same Volumes page on the dashboard.
  7. Attach the new volume to the instance, using the same device name you saved in step 3 - you did write this down, right?
  8. With the new, larger volume attached, restart your instance.
  9. Connect... and voila, you can log in. But wait: you should notice that the operating system still thinks the volume is the original size! What is going on? You need to 'tell' the operating system to expand the existing partition to use the new space you just created.
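For the CLI-inclined, the same dance can be scripted rather than clicked through. A minimal sketch with the AWS CLI - every ID, size, zone and device below is a placeholder, so substitute your own:

# step 4: snapshot the old volume, then poll until it reports "completed"
aws ec2 create-snapshot --volume-id vol-11111111 --description "pre-grow backup"
aws ec2 describe-snapshots --snapshot-ids snap-22222222

# step 5: create a bigger volume from that snapshot, in the instance's availability zone
aws ec2 create-volume --snapshot-id snap-22222222 --size 100 --availability-zone eu-west-1a

# steps 6 and 7: swap the old volume for the new one, reusing the device name from step 3
aws ec2 detach-volume --volume-id vol-11111111
aws ec2 attach-volume --volume-id vol-33333333 --instance-id i-44444444 --device /dev/sda1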

Now to Resize the Partition


If you are using Windows, you can use Disk Management to expand the existing partition into the new unallocated space. Go to 'Computer Management' and select the 'Disk Management' option. Select the disk (the C drive) and you will see that the extra space is there on the disk, just unallocated. Right-click the partition and choose 'Extend Volume'.
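If you prefer the command line on Windows, diskpart does the same job. A quick sketch - run 'list volume' and pick your own volume number rather than trusting mine:

diskpart
list volume
select volume 1
extend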

For all you Linux-heads, there is the ever-reliable 'resize2fs' command.
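On a typical EC2 Linux box that looks something like the following - a sketch assuming an ext3/ext4 filesystem on /dev/xvda1, which may not match your device name:

# confirm the device name first
df -h

# grow the filesystem to fill the enlarged volume
sudo resize2fs /dev/xvda1

If the volume has a partition table, you may need to grow the partition itself first (the 'growpart' tool from cloud-utils does this) before resize2fs can see the new space.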

Once you have confirmed that your instance is running correctly using the new volume, you can delete the old volume to save money.

Done. You should have a new volume now and more space to fill.

Enjoy.

Thursday 19 June 2014

2014 - Esri UK User Conference

Introduction

It has been under a week since the Esri UK 2014 user conference (and the Esri UK Company Event the day after). I had meant to push this out earlier but wanted to add and tweak a few things. Also, the start of that little-known event, the '2014 World Cup', meant that I was a bit distracted.

Here are my highlights and observations. These points (like this blog) are entirely my own, and if you disagree or want clarification, please contact me. Also, follow me on @wai_ming_lee to get random updates via Twitter.

This year, the event was a smashing success, with over 1,400 registered guests and a fair few turning up on the day who had not registered. The event was free, the weather grand and the location absolutely top-notch.

Tuesday 4 March 2014

What the hell is DevOps?


For the last year or so, I have been trying to enable 'DevOps' in my team. I bit hard and swallowed the whole DevOps sandwich a couple of years ago at the Velocity Conference. I wanted to experience the benefits of DevOps, which included:

  • The ability to deploy often and without fear,
  • Faster recovery,
  • Infrastructure as code (all Ops guys secretly want to be software gurus; see the sketch after this list),
  • Reduced complexity.
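On the 'infrastructure as code' point, even something as small as a checked-in shell script that builds a server from nothing beats clicking around a console. A toy sketch using the AWS CLI - the AMI, key pair, security group and bootstrap script names are all invented:

#!/bin/bash
# build a web box from scratch; this recipe lives in version control, not in someone's head
aws ec2 run-instances \
    --image-id ami-12345678 \
    --instance-type m3.xlarge \
    --key-name deploy-key \
    --security-groups web-sg \
    --user-data file://bootstrap.sh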

Thursday 20 February 2014

VirtualBox - changing the UUID of a hard drive


I have been using VirtualBox for the last few (several) years and I love it. While my main working laptop is chock-full of software, it is never enough, so I use lots of virtual machines, all created, run and managed through the excellent VirtualBox. It means that I can install some very beta software on a variety of platforms, test it out and then get rid of it. I also have virtual machines to ensure that I have a very clean environment for training. For example, I have a Windows 7 virtual machine loaded with Python 2.7 that I use almost exclusively to train and practise Python. Pity that I can't really use VMs for gaming...

I tinker with Linux (lots of distros) and different versions of Windows, and I even have a VM running 'Haiku' (go on, get it here). I am seriously thinking of getting an emulator and running the old Spectrum, BBC Micro and a few others. I miss the old games!
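And since the title promises it: if you clone a virtual disk by simply copying the .vdi file, VirtualBox will refuse to register the copy because two disks now share one UUID. The fix is a one-liner - the path here is just an example:

VBoxManage internalcommands sethduuid "D:\VMs\win7-python-copy.vdi"

This stamps a fresh UUID onto the copy so both disks can be attached side by side.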

Monday 3 February 2014

2014 - more AWS and GIS and stuff

(AWS) Awesome sauce


Some awesome news from AWS to kick-start 2014: the prices of EC2, EBS and S3 have dropped - some by as much as 50% (!) - and there's a new general-purpose instance type called 'M3' to play around with.


We have already started to use the new m3 instances and we are impressed: the m3.xlarge is only a few cents more expensive than the older m1.xlarge, but with twice the RAM and twice the vCPU count, as well as being generally faster, it is well worth the switch. No doubt the excellent price reduction has come from keen price competition from the likes of Azure and Google, which is great news for us customers! Long may it continue. I am sure the other players will respond, and soon.