Change your ISP WiFi Password in 2017

Here’s a rather odd New Year’s resolution for you. If you have Sky Broadband, change your WiFi password. If you have another ISP, read on; this is likely to apply to you too!

Why? Because the default passwords, while they look random, are pretty weak compared to the tools attackers have available in 2017, as I found out by hacking my own Sky WiFi.

Mumble Mumble, WPA2, Secure, no flaws… right?

Well, yes, modern WiFi protection (WPA2) is very good and has no known practical flaws to speak of, which leaves attackers one option: age-old password guessing, or ‘brute-force cracking’.

So what’s the problem?

All the Sky WiFi routers I’ve seen so far (friends’ houses, mine, etc.) have passwords of the following format:

  • 8 Upper Case A-Z characters.

‘Be safe online, choose good passwords’ is drummed into us everywhere now (I even saw some posters on the London Underground!), so most of you will see the problem: 8 characters of uppercase A-Z requires a hell of a lot fewer guesses than a password that also includes numbers, lowercase characters, or special characters (* ~ @ etc.).

We could also make the password longer, or a mixture of all of the above.

How bad is it?

OK, looking at it technically, any combination of 8 A-Z characters gives you 208,827,064,576 possible combinations.

26^8 = 208,827,064,576

Sounds like a lot of guesses, but for a modern graphics card, 80,000 to 300,000 guesses a second is pretty trivial, depending on the card.

208,827,064,576 / 80,000 = 2,610,339 seconds
2,610,339 / 60 (minutes) / 60 (hours) = 725 hours

So one entry-level graphics card at 80,000 guesses a second would take 725 hours (about 30 days) to try every possible password the router could have by default.

That’s not very long, considering your neighbours likely possess the computing power needed to be on your network in less than a month.

To the Cloud!

Someone with a graphics card can do the above, but more concerningly, anyone with a bit of knowledge can guess much more quickly, for a fraction of the price!

Introducing Amazon Web Services (AWS), offering computing and number-crunching power in the cloud, hired by the second/hour/day: a solution for millions of businesses and startups that don’t want to buy and manage their own server farms. AWS likely powers apps you use every day, Netflix being one example.

But these resources can also be used to speed up our guessing process. Here we have an AWS instance (a computer in the cloud) offering 16 graphics cards in one, for the low, low price of £11.70 an hour.

Amazon AWS Graphics Card Instances.

 

16 times the power!

So now the guessing process just got 16 times quicker, without having to buy any graphics cards or have any computers running at home at all.

Here we can see the AWS instance running a brute force password guessing attack against my router, using all 16 graphics cards at once.

Knowing the password will be 8 upper case A-Z characters makes automating this attack much easier. This tool can just be left running.
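For illustration, this is roughly what such an attack looks like with a tool like hashcat (not necessarily the exact tool or version from my run); the handshake capture file name is a placeholder:

hashcat -m 2500 -a 3 capture.hccap '?u?u?u?u?u?u?u?u'

Here -m 2500 selects WPA/WPA2 handshake cracking, -a 3 selects a mask (brute-force) attack, and each ?u stands for one uppercase A-Z character, giving exactly the 8-character keyspace above.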

We can see that each of the 16 graphics cards is producing over 80,000 guesses a second, giving us a total of 1,394,000 guesses a second.

208,827,064,576 / 1,394,000 = 149,805 seconds
149,805 / 60 (minutes) / 60 (hours) = 41.6 hours

So now we know with 100% certainty that we will have found the password within 41.6 hours. It could take less: remember that 100% means trying every possible guess, and chances are the actual password won’t be the last one we try, so we could get lucky and find it after 10%, 40%, etc.

You can see I’m 4% through, with 1 hour and 20 minutes elapsed and 1 day and 15 hours to go. That’s slightly less than our calculated estimate above.

24 hours + 15 hours + 1 hour 20 minutes = 40 hours 20 minutes.

Say 41 hours in total (including setup of the Amazon AWS machine). That’s £480 and less than two days to guarantee I have access to your network.

Now, this may sound like a lot of money, but consider malicious intent, be it corporate espionage, ransomware, spying, or further hacking of the computers on the network (e-mail, Facebook, online banking, etc.); £480 is actually affordable to most.

Not Just Sky

I feel it necessary to say I’m not having a go at Sky specifically here. They just happen to be my ISP, and I noticed the default passwords were A-Z only.

There are many, many other broadband providers that ship WiFi routers with the same style of A-Z-only 8-character passwords. Check yours and, if necessary, log into the router and change your password to something more secure; see below for details.

What’s the solution?

So here’s the thing about password guessing: knowing the format of the password ahead of time (8 characters, all A-Z uppercase, for example) makes working out the number of guesses simple, as you saw with our easy calculations above.

Changing that length, or changing the ‘known format’, makes an attacker’s life much harder.

Let’s say, for example, the attacker knew the password was A-Z uppercase and between 6 and 8 characters long. Suddenly, they would have to try guesses for:

  • A-Z combinations with 6 characters (308,915,776 guesses)
  • A-Z combinations with 7 characters (8,031,810,176 guesses)
  • A-Z combinations with 8 characters (our original 208,827,064,576 guesses).

That’s an extra 8,340,725,952 guesses on top of our original number in order to guarantee we crack the password.

8,340,725,952 / 1,394,000 (guesses a second) = 5,983 seconds ≈ 1.66 hours
Costing the attacker an extra £19.42

Now, obviously, I’m not suggesting making your WiFi password shorter. I’m just saying that not knowing the exact format and composition of a WiFi password makes the guessing process harder, longer and less effective.

Let’s look at what we should do, and the implications for an attacker…

A single extra character, still A-Z uppercase:

26^9 = 5,429,503,678,976 possible combinations = 45 days on our AWS setup = £12,636

Two extra characters, still A-Z uppercase:

26^10 = 141,167,095,653,376 possible combinations = 1,172 days (3.2 years!) on our AWS setup = £329,098

8 characters, a combination of A-Z uppercase and a-z lowercase:

52^8 = 53,459,728,531,456 possible combinations = 443.9 days on our AWS setup = £124,647

8 characters, a combination of A-Z uppercase, a-z lowercase and numbers 0-9:

62^8 = 218,340,105,584,896 possible combinations = 1,812.8 days (5 years!) on our AWS setup = £509,034
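You can check any of these sums yourself with bc in a Mac or Linux terminal; for example, the original 8-character keyspace and its runtime on the 16-card AWS setup:

echo '26^8' | bc                               # 208827064576 combinations
echo 'scale=1; 26^8 / 1394000 / 3600' | bc     # 41.6 hours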

So there you have it: more characters are good, and different ‘character sets’ (numbers, lowercase, etc.) are good.

I’d recommend not going for <Dictionary Word>123 or <Dictionary Word><Dictionary Word>, as other ‘dictionary’ attacks, not covered in this post, will try combinations of words to crack the password instead.

Personally, I prefer the options above: random, with more characters and character sets. If you do want to use words to make the password really long, add a good number of random letters and numbers at the start, middle or end.

Either way, you’re going to be in a much better position than an attacker seeing a ‘SKYABCD’-style WiFi network and knowing they have a guaranteed way in.

Comments or corrections to twitter @mattdashj

 

Signing Exchange E-Mail on the iPhone 7 / 6 / 5 or iPad

Quick walkthrough for setting up signed outgoing e-mails on the iPhone / iPad

Scenario: you have a free e-mail signing certificate, such as the one from Comodo; you’ve set it up in your desktop/laptop e-mail client, but you also send a lot of mail from your iPhone/iPad.

There are two steps to getting signed mail working on the iPhone.

Step 1: Install your certificate and private key onto the iPhone using ‘Apple Configurator 2’.

Download ‘Apple Configurator 2’ from the App Store onto your Mac.
(This is a tool from Apple that lets you create profiles and roll out changes such as certificates to your iPhones/iPads/Apple TVs.)

Open it.

Go to File > New Profile.

A new profile window appears. In the General tab, give the profile a name.


Then, go into your Mac keystore (the app is called ‘Keychain Access’). Go to Certificates; you should find your imported Comodo cert listed with your e-mail address as the title.


Right-click your mail certificate and choose Export.

This will export your certificate and private key into one ‘.p12’ file. You’ll be prompted to protect the exported file with a new password. (Don’t leave it blank. You’ll only need the password once, in about a minute’s time, so you may as well make it strong!)


Now you should have a ‘.p12’ file in your Documents folder. Yes? Good.
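(As an aside: if your certificate and private key live in PEM files rather than in Keychain, openssl can produce the same kind of .p12; the file names here are placeholders. It will prompt you for an export password in the same way.)

openssl pkcs12 -export -in mail-cert.pem -inkey mail-key.pem -out mail.p12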

Back to the Apple Configurator profile screen. Click on the ‘Certificates’ section on the left and click the ‘Configure’ button. You will be prompted to add a certificate; use the Finder window that appears to find and select your new .p12 file.


You will then need to give the profile the password you just used for the P12 export. Type it in the ‘Password:’ field; you’ll know it’s right when the window updates to show the certificate’s details.

That’s it! We can now save this profile and add it to our iPhone/iPad.

Save it by clicking the title at the top of the profile window and giving it a name. Mine saved to my iCloud Drive; this is fine.


Now, plug your phone into your Mac via USB. It will appear in the ‘Apple Configurator 2’ main window.


Right-click it, choose Add > Profile, then select the new .mobileconfig file we’ve just saved.

 


Then, follow the instructions on the Mac and on your iPhone to install the certificate. The iPhone will ask for your passcode and warn you that the profile is unsigned. This is fine.

Once done, you can unplug your phone from your Mac; you’re ready for step 2…

Step 2: Turn on S/MIME e-mail signing within your iPhone settings and select the certificate you just uploaded.

This is the easy bit.

On your phone, go into Settings > Mail.

Choose ‘Accounts’, then select the account the certificate is for (mine is my Exchange account).

Then, select the ‘Account [email protected]’ line at the top of the screen to drill into that account’s settings…


From here, click ‘Advanced Settings’.

Finally, in Advanced Settings, turn ‘S/MIME’ on, then click the new ‘Sign’ option.


Turn the Sign setting on; you’ll be asked to choose a certificate. The one from the profile we uploaded should be listed for you to select.


That’s it; your e-mails should now be sent signed!

Matt

DCOS.io OpenDCOS Authentication Token

Looking to script some containers against an OpenDCOS deployment? Authentication for OpenDCOS is OAuth against either Google, GitHub or Microsoft.

DCOS login options

 

The docs (here) discuss requesting an auth token for a given user, but the API URL/path doesn’t seem to work in OpenDCOS.

It turns out the correct URL is below. Paste it into a browser, authenticate, and your token will be provided.

https://<YOUR-DCOS-MASTER-IP>/login?redirect_uri=urn:ietf:wg:oauth:2.0:oob

This is the same URL you’ll be asked to authenticate against if you install the DCOS local CLI.

You can then send this token in any requests to the DCOS services (such as Marathon) using an HTTP header, as below:
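For example, with curl (the token is the one from the login page above; the Marathon path is the usual admin-router one, but check your deployment; -k is there because the master’s certificate is often self-signed):

curl -k -H 'Authorization: token=<YOUR-AUTH-TOKEN>' \
     https://<YOUR-DCOS-MASTER-IP>/service/marathon/v2/apps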

 

Mac OSX El Capitan Secure Erase

So, it’s time to give my old corporate MacBook Pro 15″ back to who knows where.

Time to move my data to my new (much the same) MacBook Pro 15″ and secure-erase my old SSD… right?

Wrong! It seems the recovery partition on El Capitan (hold down CMD + R on boot) completely prevents any of the ‘secure erase’ options; the button for security options just isn’t there!

Anyway, Disk Utility is just a pretty GUI on top of the ‘diskutil’ command line.

So, to run a very secure (and lengthy) 35-pass wipe on your main disk…

Once you have the “OS X Utilities” window showing, go to Utilities > Terminal from the menu bar, then in the terminal type the following command:

 diskutil secureErase 3 disk0 

For a quicker, US DoD 7-pass secure erase, run:

 diskutil secureErase 2 disk0 

Or an even quicker, US DoE 3-pass secure erase, run:

 diskutil secureErase 4 disk0

If the command errors with “device in use” you’ll need to unmount your MacOSX partition first with the following command:

 diskutil unmountDisk disk0 

WARNING: Any of these options will permanently, irreversibly destroy ALL data on your disk. Please make sure you have no external storage directly attached, or you may just wipe that instead.
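If you’re not sure which disk is disk0, list them all first and check the sizes and partition names before erasing anything:

 diskutil list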

The secureErase commands will then show a progress bar and estimated time to completion. The 35-pass wipe on a mid-2012 256GB SSD estimates 8 hours.

Yes, you’re going to need a charger 😉

Matt

OpenStack infrastructure automation with Terraform – Part 2

TL;DR: the second of a two-post series looking at automation of an OpenStack project with Terraform, using the new Terraform OpenStack provider.

With the OpenStack provider for Terraform close to being accepted into the Terraform release, it’s time to unleash its power on the Cisco OpenStack-based cloud…

In this post, we will:

  • Write a Terraform ‘.tf’ file to describe our desired deployment state, including:
    • Neutron networks/subnets
    • Neutron gateways
    • Keypairs and security groups
    • Virtual machines and volumes
    • Virtual IPs
    • Load balancers (LBaaS)
  • Have Terraform deploy, modify and tear down our infrastructure.

If you don’t have the Terraform OpenStack beta provider available, you’ll want to read Part 1 of this series.

Terraform Intro

Terraform “provides a common configuration to launch infrastructure”, from IaaS instances and virtual networks to DNS entries and e-mail configuration.

The idea is that a single Terraform deployment file can leverage multiple providers to describe your entire application infrastructure in one deployment tool, even if your DNS, LB and compute resources come from three different providers.

Different infrastructure types are supported via provider modules; it’s the OpenStack provider we’re focused on testing here.

If you’re not sure why you’d want to use Terraform, you’re probably best getting off here and having a look around Terraform.io first!

Terraform Configuration Files

Terraform configuration files describe your desired infrastructure state, built up of multiple resources, using one or more providers.

Configuration files use a custom but easy-to-read format, with a .tf extension. (They can also be written in JSON for machine-generated content.)

Generally, a configuration file will hold necessary parameters for any providers needed, followed by a number of resources from those providers.

Below is a simple example with one provider (OpenStack) and one resource (an SSH public key to be uploaded to our OpenStack tenant).
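Something along these lines (a sketch; exact field names depend on your provider version, and all values are placeholders):

provider "openstack" {
  user_name   = "<YOUR_USERNAME>"
  password    = "<YOUR_PASSWORD>"
  tenant_name = "<YOUR_TENANT>"
  auth_url    = "https://<YOUR_OPENSTACK_API>:5000/v2.0"
}

resource "openstack_compute_keypair_v2" "tf_keypair" {
  name       = "tf_keypair"
  region     = ""
  public_key = "ssh-rsa <YOUR_PUBLIC_KEY>"
}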

Save the above as demo1.tf and replace the placeholders with your own OpenStack environment login details.

Now run $ terraform plan in the same directory as your demo1.tf file. Terraform will tell you what it’s going to do (add/remove/update resources), based on checking the current state of the infrastructure:

Terraform checks and finds the keypair doesn’t already exist on our OpenStack provider, so a new resource is going to be created when we apply our infrastructure… good!

Terraform Apply!
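If all is well, the output looks something like this (abridged):

$ terraform apply
openstack_compute_keypair_v2.tf_keypair: Creating...
openstack_compute_keypair_v2.tf_keypair: Creation complete

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.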

Success! At this point you can check OpenStack to confirm our new keypair exists in the IaaS:

 

Terraform State

Future deployments of this infrastructure will check the state first; running $ terraform plan again shows no changes, as our single resource already exists in OpenStack.

That’s basic Terraform deployment covered, using the OpenStack provider.

Adding More Resources

The resource we deployed above was ‘openstack_compute_keypair_v2’. Resource types are named by the author of a given plugin, not centrally by Terraform (which means .tf config files are not reusable between differing provider infrastructures).

Realistically, this just means you need to read the docs of the provider(s) you choose to use.

Here are some openstack provider resource types we’ll use for the next demo:

“openstack_compute_keypair_v2”
“openstack_compute_secgroup_v2”
“openstack_networking_network_v2” 
“openstack_networking_subnet_v2”
“openstack_networking_router_v2”
“openstack_networking_router_interface_v2”
“openstack_compute_floatingip_v2”
“openstack_compute_instance_v2”
“openstack_lb_monitor_v1”
“openstack_lb_pool_v1”
“openstack_lb_vip_v1”

If you are familiar with Openstack, then their purpose should be clear!

The following Terraform configuration will build on our existing configuration to:

  • Upload a keypair
  • Create a security group
    • SSH and HTTPS in, plus all TCP in from other VMs in the same group.
  • Create a new Neutron network and subnet
  • Create a new Neutron router with an external gateway
  • Assign the network to the router (router interface)
  • Request two floating IPs into our OpenStack project
  • Spin up three instances of CentOS 7 based on an existing image in Glance
    • With sample metadata provided in our .tf configuration file
    • Assigned to the security group Terraform created
    • Using the keypair Terraform created
    • Assigned to the network Terraform created
      • Assigned static IPs 100-103
    • The first two instances will be bound to the two floating IPs
  • Create a load balancer pool, monitor and VIP.

Before we go ahead and $ terraform plan; $ terraform apply this configuration… a couple of notes.

Terraform Instance References / Variables

This configuration introduces a lot of resources, and each resource may have a set of required and optional fields.

Some of these fields require the UUID/ID of other OpenStack resources, but as we haven’t created any of the infrastructure yet via $ terraform apply, we can’t be expected to know the UUIDs of objects that don’t yet exist.

Terraform allows you to reference other resources in the configuration file by their Terraform resource name; Terraform will then order the creation of resources and dynamically fill in the required information when needed.

For example, in the following resource section we need the ID of an OpenStack Neutron network in order to create a subnet under it. The ID of the network is not known, as it doesn’t yet exist; so instead, a reference to our named instance of the openstack_networking_network_v2 resource, tf_network, is used, and from that resource we want the ID passed to the subnet resource, hence the .id at the end.
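A sketch of that pattern (the resource names match the text above; the CIDR is an assumption for illustration):

resource "openstack_networking_network_v2" "tf_network" {
  name           = "tf_network"
  region         = ""
  admin_state_up = "true"
}

resource "openstack_networking_subnet_v2" "tf_subnet" {
  region     = ""
  # the network's ID isn't known until apply time, so reference the resource above:
  network_id = "${openstack_networking_network_v2.tf_network.id}"
  cidr       = "192.168.1.0/24"
  ip_version = 4
}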

Regions

You will notice each resource has a region="" field. This is a required field in the OpenStack Terraform provider module for every resource (try deleting it; $ terraform plan will error).

If your OpenStack target is not region-aware/enabled, then you must set the region to an empty string in this way.

Environment specific knowledge

Even with the dynamic referencing of IDs explained above, you are still not going to be able to simply copy, paste, save and $ terraform apply, as there are references in the configuration specific to my OpenStack environment. Just like the username, password and OpenStack API URL in demo1, in demo2 you will need to provide the following in your copy of the configuration:

  • Your own keypair public key
  • The ID of your environment’s ‘external gateway’ network for binding your Neutron router to.
  • The pool name(s) to request floating IPs from.
  • The name/ID of a Glance image to boot the instances from.
  • The flavour name(s) for your environment’s instances.

I have placed a sanitised version of the configuration file in a gist, with these locations clearly marked by <<USER_INPUT_NEEDED>> to make the above items easier to find/edit.

http://goo.gl/B3x1o4

Creating the Infrastructure 

With your edits to the configuration done:

Terraform Apply! (for the final time in this post!)

Enjoy your new infrastructure!

We can also confirm these items really do exist in openstack:

Destroying Infrastructure

$ terraform destroy will destroy your infrastructure. I find this often needs running twice, as certain objects (subnets, security groups, etc.) are still in use when Terraform tries to delete them.

This could simply be our Terraform API calls being quicker than the state updates within OpenStack; there is a bug open with the OpenStack Terraform provider.

First Run:

Second Run: Remaining resources are now removed.

That’s all for now, boys and girls!

Enjoy your weekend.

 

 

 

OpenStack infrastructure automation with Terraform – Part 1

Update: the OpenStack provider has been merged into Terraform, and comes with the default Terraform download as of 0.4.0.

Get it HERE: https://terraform.io/downloads.html

Then proceed directly to the second part of this series to get up and running with Terraform on Openstack quickly!

Or.. read more below for the original post.

Continue reading OpenStack infrastructure automation with Terraform – Part 1

UCS vMedia Configuration and Boot Order

Just a quick note on Cisco UCS vMedia.

If you have configured a remote CD/DVD from a remote ISO, and UCS Manager is showing the image as ‘mounted’ but your server is still stuck in a PXE/netboot loop…

It may be helpful to know that the regular boot order policy in your service profile doesn’t apply here.

In other words, even if you have ‘CD/DVD’ in your boot order, the server still won’t automatically boot into a vMedia CD/DVD.

UCS System Manager boot priority list

 

Solution
You’ll need to press F6 from the KVM console at server boot; there you will see an option for booting from the CIMC vMedia DVD.

B Series UCS F6 boot options

This will get you where you need to be!

Also, for those that don’t know: you can check the status of your vMedia mount under Equipment > Server > Inventory > CIMC.

Scroll down and you’ll see something like below.

UCS System Manager vMedia inventory

 

Matt

ZFS on Linux resilver & scrub performance tuning

Improving Scrub and Resilver performance with ZFS on Linux.

I’ve been a long-time user of ZFS, since the internal Sun betas of Solaris Nevada (OpenSolaris).
However, for over a year I’ve been running a single box at home to provide file storage (ZFS) and VMs, and as I work with Linux day to day, I chose to do this on CentOS, using the native port of ZFS for Linux.

I had a disk die last week in a two-disk mirror.
Replacement was easy; however, resilvering was way too slow!

After hunting for some performance tuning ideas, I came across this excellent post for Solaris/IllumOS ZFS systems and wanted to translate it for Linux ZFS users: http://broken.net/uncategorized/zfs-performance-tuning-for-scrubs-and-resilvers/

The post covers the tunable parameter names and why we are changing them, so I won’t repeat/shamelessly steal that here. What I will do is show that they can be set under Linux just like regular kernel module parameters:

[root@host ~]# ls /etc/modprobe.d/
anaconda.conf blacklist.conf blacklist-kvm.conf dist-alsa.conf dist.conf dist-oss.conf openfwwf.conf zfs.conf

[root@host ~]# cat /etc/modprobe.d/zfs.conf
options zfs zfs_arc_max=2147483648 zfs_top_maxinflight=64 zfs_resilver_min_time_ms=5000 zfs_resilver_delay=0

Here you can see I have set the ZFS top-level vdev I/O limit to 64 (from the default of 32), the resilver minimum time to 5 seconds (from 3) and the resilver delay to zero. Parameters can be checked after a reboot:

cat /sys/module/zfs/parameters/zfs_resilver_min_time_ms
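Most of these tunables can also be changed at runtime by writing to the same path, which is handy if a resilver is already underway (assuming your ZFS on Linux version exposes them as writable):

echo 0 > /sys/module/zfs/parameters/zfs_resilver_delay
echo 64 > /sys/module/zfs/parameters/zfs_top_maxinflight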

Result: after a reboot, my resilver speed increased from ~400KB/s to around 6.5MB/s.

I didn’t tweak any further; it was good enough for me and I had other things to get on with.

One day I’ll revisit these to see what other performance I can get out of it. (I’m aware that on my box, the RAM limitation is keeping ZFS usage less than ‘blazing fast’ anyway.)

Happy Pools!

[root@host ~]# zpool status
  pool: F43Protected
 state: ONLINE
  scan: resilvered 134G in 2h21m with 0 errors on Tue Jun 24 01:07:12 2014

  pool: F75Volatile
 state: ONLINE
  scan: scrub repaired 0 in 5h41m with 0 errors on Tue Feb 4 03:23:39 2014

 

Matt

CF Push and case insensitive clients

So here’s a weird one that may save someone some time…

Trying to perform a cf push with a .jar file gets the following strange error!

 

Lots of head scratching.

It looks like (and is) a local error unpacking the JAR; we haven’t even touched the PaaS/Cloud Foundry yet!

Also, unpacking the JAR manually seems to complain too. Hmm.

You’re running Mac OS X? … (Or maybe Windows?)

It looks like the issue arises because the JAR was built on a CASE-SENSITIVE OS (in our case, Jenkins on Linux)…

… and you’re trying to run cf push on a CASE-INSENSITIVE OS (in our case, a 2013 Retina MacBook Pro).
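A quick way to confirm you’re hitting this is to list the JAR’s entries, lower-case them and look for duplicates; any output means two paths differ only by case (the file name here is a placeholder):

unzip -l app.jar | awk '{print tolower($4)}' | sort | uniq -d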

Workaround..

Run the same cf push from a linux box and it works fine.

This link put me onto the issue (as the error is more than a bit confusing!):

https://groups.google.com/forum/#!topic/selenium-users/f8OMertwzOY

Hope this helps someone!

Openstack and PaaS; Love a good geek rift!

I’ve been in the Bay Area, CA for a couple of weeks now (excluding my cheeky jaunt to Vegas!) and even though I’m now on vacation, it’s been the perfect place to watch the OpenStack Havana drama unfold, mostly stemming from this catalyst:

http://www.mirantis.com/blog/openstack-havanas-stern-warning-open-source-or-die/

Well (for me, anyway), especially this bit:

Too many times we see our customers exploring OpenShift or Cloud Foundry for a while, and then electing instead to use a combination of Heat for orchestration, Trove for the database, LBaaS for elasticity, then glue it all together with scripts and Python code and have a native and supported solution for provisioning their apps.

Hell no! was my initial reaction, and while there has been a definite retraction from the tone of the whole post… I still think a hell no is where I stand on this.

And I’ll tell you why. But firstly:

  • I like OpenStack as an IaaS. I like its modularity for networking and the innovation taking place to provide a rock-solid IaaS layer.
  • It was a much-needed alternative to VMware for a lot of people, and its growth into stability is something I’ve enjoyed watching (competition is never a bad thing, right! 😉).

That said, here’s why I’ll take my PaaS served right now with a sprinkling of CloudFoundry:

  • People tying things together themselves with chunks of internally written scripts/Python (I’d argue even Puppet/Chef, as we strive for more portability across public/private boundaries) is exactly the kind of production environment we want to move away from as an industry:
    • Non-portable.
    • Siloed to that particular company (or, more likely, project/team).
    • Often badly maintained due to knowledge attrition.

… and into the future:

  • Defined, separated layers with nothing connecting them but an externally facing API were, in my mind, the very POINT of IaaS/PaaS/XaaS and their clear boundaries.
  • These boundaries allow for portability:
    • Between private IaaS providers and the PaaS/SaaS stack.
    • Between public/private cloud-burst style scenarios.
    • For complex HA setups requiring active/active service across multiple underlying provider technologies.
      • Think ‘defence in depth’ for IaaS.
      • This may sound far-fetched, but it has already been used to back SLAs and protect against downtime without requiring different tooling in each location.
    • I just don’t see how a 1:1 mapping of PaaS and IaaS inside OpenStack is a good thing for people trying to consume the cloud in a flexible and ‘unlocked’ manner.

It could easily be argued that if we were only talking about private (not public) IaaS consumption, I’d have fewer points to make above. Sure, but I guess it depends on whether you really believe the future will be thousands of per-enterprise, siloed, private IaaS/PaaS installations, each with their own specifics.

As an aside, another concern I have with OpenStack in general right now is the providers implementing OpenStack. Yes, there is an OpenStack API, but it’s amazing how many variations on it there are (maybe I’ll do the maths some day):

  • API versions
  • Custom additions (I’m looking at you, Rackspace!)
  • Full vs. custom implementations of all/some OpenStack components.

Translate this to the future idea of PaaS and IaaS being offered within OpenStack, and I see conflicting requirements.

From an IaaS, I’d want:

  • Ease of moving between/consuming IaaS providers.
  • Not all IaaS providers necessarily need the same API, but it would be nice if there were one per ‘type’, to make libraries/wrappers/Fog etc. easier.

From a PaaS, I’d want:

  • Ease of use for developers
  • Abstracted service integration
    • IaaS/PaaS providers may not be my best option for certain data storage.
    • I don’t want to be constrained to the development speed of a monolithic (P+I)aaS stack to test out new Key-Value-Store-X.
  • Above all, PORTABILITY

This seems directly at odds with the above for IaaS…

i.e., I don’t mind having to abstract my PaaS installation/management over multiple IaaS APIs so that I can support multiple clouds (especially if my PaaS management/installation system can handle that for me!); however, I DON’T want lots of potential differences in the presentation of my PaaS API causing issues for the ‘ease of use, just care about your code’ aspect for developers.

I’m not sure where this post stopped being a nice short piece and became more of a ramble, but I’ll take this as a good place to stop. PaaS vendors are not going anywhere, IMHO, and marketing-through-bold-statements on the web is very much still alive 😉

Matt