by Craig

Are you working on your IT infrastructure? You need a clamp meter.

It’s a truism, but everything we do in IT starts with the electricity supply. It feeds the servers, the network, the air conditioning and even the vending machine in the lobby. So when designing or re-designing any server room or data centre you really need to know how much of that invisible go-juice you are using.

Recently I have been checking the power usage of our Disaster Recovery (DR) site in order to ensure that it is appropriately provisioned with UPS (Uninterruptible Power Supply) devices. The problem with checking a live site, even if it is a DR site, is that the servers and services are online 24/7 and cannot be switched off. So you need something that measures the power being consumed without interrupting the power flow and thus requiring the servers to be shut down.

So enter the Electro Magnetic Field, or EMF for short, which is a field generated by electricity running through a cable. Quite how that field works is something I ended up studying at university when doing Electrical Engineering and, believe me, it isn’t that interesting. What it is, though, is both an annoyance and absolutely vital for our current method of delivering our electrical infrastructure no matter where you are.

The electricity running through a cable generates a field in direct proportion to the current running through it. It also generates that field with a rotation, much the same as water flowing down the bath plughole. The rotation is affected by the direction of the current, and if you reverse the current then the direction of the field changes. If you run the exact same current in opposite directions at the same time then the two fields cancel each other out - which, as we will see in a minute, is exactly what happens inside a mains cable.

So, back to measuring power use.

With Direct Current (DC) the current flows in only one direction. Except we don’t use DC, we use Alternating Current (AC), which changes direction continuously; in the UK it cycles back and forth 50 times a second (ever wondered what Hz meant?). Our use of AC means that the field will be fluctuating back and forth as the current changes direction, making measuring more challenging than you would think :)

Manufacturers have countered this problem by building instruments that switch their measurement round at the same rate/frequency as the AC does (let’s say 50 times a second to be clear). You can pick these up from places like Screwfix (http://www.screwfix.com/p/kewtech-kt200-digital-clamp-meter-400a/23318#) but in order for them to work you need to clamp around a single wire….

Eh? A single wire? Yes, because in any electricity cable there are 3 wires: 1 live, 1 neutral and 1 earth. Forget about the earth for the time being, but you can think of the live and the neutral as “it flows up one and down the other”, and if they are next to each other they are always flowing in opposite directions, thus cancelling out that field you are interested in measuring.

So what if there is a big cable and you are not going to be able to cut into it to measure a single wire? Ah, that’s when you start looking at more expensive products. These are engineered to recognise the physical separation between the wires inside the cable and to measure the current based on that difference. In other words: they are very clever products. I picked up a clamp meter from Megger to do the job I needed to do:

(Megger MMC850 Multi Core - Single Core Clampmeter)

You can see the dials on the clamp meter that tell it what type of wire configuration it is measuring, and the display showing how much current is being drawn through that cable (7.4 Amps in this case). The wire is held in the clamp at the top and never needs to be un-sheathed from its protective housing or generally interfered with.

Of course you can use it for smaller cables such as this general mains (13Amp) cable where it is reading a heady 2.8 Amps!


Really a clamp meter is about understanding what your basic power requirements are in a live environment. Your power usage will also help you work out your air conditioning requirements, because the amount of power put into a set of equipment is directly related to the amount of cooling you will need to dispose of the heat.
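
As a back-of-the-envelope sketch (assuming UK mains at around 230 Volts and a power factor close to 1): the 7.4 Amp reading above works out at roughly 7.4 x 230 = 1,700 Watts, or about 1.7kW, being drawn - and near enough all of that ends up as heat that the air conditioning has to get rid of.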

Get this sort of stuff wrong and you could end up blowing fuses or melting your servers!

by Craig

Project Octopus - Dropbox for VMware?

Very interested in the piece I read on Project Octopus the other day: http://blogs.vmware.com/euc/2011/08/vmworld-2011-tech-peview-vmware-project-octopus.html

It would appear that VMware are developing a multi-platform file-sharing system very similar to Dropbox, except with corporate controls.

One of my major problems with technologies like Dropbox is that you don’t know where your data has gone, as it’s not controlled by your corporate systems. You can’t argue with the power of sharing files in a Dropbox-like fashion, so rather than totally reject it I am pleased to see a major vendor taking it head on and offering a solution for corporates!

A video on YouTube is the only other source of information I have seen so far.

Let’s hope VMware don’t play their current favourite game and make the product bonkers expensive :/

http://www.vmwareoctopus.com/

by Craig

What's a database? And what is Oracle?

Was having a chat with an administrator in our offices who for years had dealt with some of our staff when they were off to do Oracle work. Oracle work for the company tends to be the installation of Oracle and then the deployment of our software, or the migration of Oracle from one server to another.

They said that they would like to understand how Oracle actually works so they had some appreciation of what people were talking about and what people onsite were trying to do. So I wrote the below to try and help them out!

(Rather than draw everything out with Visio I decided to draw the pictures! Then got someone else to re-draw them as it turns out my drawing is as good as my writing)  

What’s a database? And what is Oracle?

It’s not the most catchy title in the world but it’s the question that was asked of me. So to put it in terms that most people will understand let’s start with an activity that most businesses and individuals will have done at some point; storing address information.

Typically you will want to store the addresses of your customers (or friends) in a file somewhere much as you would have done before computers even existed! (Remember address books?) To do so we will assume that you create a single file called “ADDRESSES” and put your first address in that file. To be safe, and to let other people see that file, you store it on a server somewhere in your organisation.

So now you have a file which you can open and write to whenever you choose to and that you can share with others.

OK, so this works, but what if someone else wants to read the file as well as you? On Microsoft servers such as (but not exclusively) Windows 2008, one person can open a file to write to it whilst another person can open it to read. This might work if you only want to share with one other person, but it is clearly going to be a problem as soon as more people want to read and write to it at the same time.

So let’s say that there are now 3 of you and you all want to be able to read and write to the file at the same time. One way to do this is for the other two people to relay their requests to read and write information through you, while you hold the file open permanently. But that’s a total waste of your time, so why don’t we get a program to do that instead: hold open the file permanently and accept requests from multiple different people to read and write to the same file? Enter the database engine.

The database engine holds open your file permanently and you now ask it for the contents of the file and ask it to write to the file. The engine, being built for such things, will allow multiple requests to happen at the same time, such as reading from two places in the file at once.


We could add in a few more files as well which the database engine could manage like a list of our products or perhaps a list of customers and their phone numbers. You might call these files PRODUCTS and CUSTOMERS as that seems logical.

These files live on the server still and are on a number of hard disks which the database engine reads from each time you ask it for some information (remember that the database engine has our 3 files open all the time). The hard disk(s) can only read and write information at a certain rate which is dictated by the design of the hardware that the server is made up of. This design is something that can make a huge difference to the speed of the return of information and a great deal of time is spent making sure that bottlenecks are removed to ensure high performance. Ultimately though the hardware can only do so much and if many many people are trying to read a lot of information at the same time then problems are likely to happen with the speed of retrieval.

To counter some of this you could move the very busy file (let’s say CUSTOMERS) from the hard disks that ADDRESSES and PRODUCTS are sitting on onto another hard disk. That will stop people using the CUSTOMERS file from affecting the ADDRESSES and PRODUCTS file users and vice versa. The database engine itself takes care of this and, because you only ever ask the database engine for information, and don’t go direct, the move of that file on the server will be transparent to the end user.

Datafiles

So this carries on and more little files are created to store information for the office. This gets a little messy with all these files lurking around on the hard disks and the database engine having to keep open all these little files. So the database creates a really big container (or file) and puts all these smaller files into that container. It can do this by making them virtual files within that big container/file and with some clever code that organises them. This means that the database engine now only holds open a single container/file which, in Oracle terms, is called a “Datafile”.

Now, if you remember, earlier we said that the CUSTOMERS file (now virtual) is very, very busy, so we don’t want that in the same Datafile as ADDRESSES and PRODUCTS; we create a second Datafile and move that virtual file in there. We move that Datafile to another set of disks and we have helped our performance problem. The database doesn’t mind doing this as many times as you like, depending on how many disks you have to use, and to the end user it won’t make any difference as they don’t see these changes “under the hood”.

Tables

Next problem - the files are getting really chaotic as people add in new information in whatever format they like, or they add in duplicates. The files are getting bigger and it becomes increasingly difficult to find the information you are looking for. So we need to organise this information properly and give it structure.

Most of us are familiar with Excel spreadsheets and how they can be used to store data, so for this random scribble I will assume that is the case. An Excel spreadsheet is a series of columns and rows that makes up a grid. The rows are numbered 1 to lots and the columns are labelled A, B, C etc. Within a database the same thing exists, except we don’t call them Excel spreadsheets; we call them “Tables”.

Within each Table there are a number of rows and columns, but rather than call the columns A, B, C etc we can give them whatever names we choose. So for our ADDRESSES file/Table we might call a column HOUSENUMBER and another one POSTCODE, and in those columns we store the house number and postcode. So for each row within that Table we put the house number and postcode into the correct columns, giving us nice, ordered information.
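
To make that concrete, a minimal sketch of creating and filling such a table in Oracle might look like this (the column names and sizes are just illustrative):

create table ADDRESSES (housenumber varchar2(10), postcode varchar2(10));
insert into ADDRESSES (housenumber, postcode) values ('1', 'EC4A 1AB');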

Structured Query Language (SQL)

Our ADDRESSES table is now within our database and has data within it, and this is where the second clever part of the database engine comes in. Most people don’t want to scroll through rows and rows of information in order to find what they are looking for; instead they want to ask the database engine something like: “can you get me all the postcodes from the addresses table please”. The database engine will then go off and bring back all of the postcodes from the addresses table, as you asked. This is great, but we can extend this further and say: “can you get me all the postcodes from the addresses table where they are near EC4” (which is London). Again the database will happily return all of the postcodes which are near EC4 for you.

Except that databases don’t really understand English, and when I say “near” it won’t really understand what I mean, because: what do I mean? So a method was created for asking databases questions in order to retrieve data from them. This is called the “Structured Query Language” (SQL) and is used, no matter what the database, to retrieve data.

Using this language we can turn some of my earlier questions into Structured Query Language and see the answers that come back from the database engine.

“Can you get me all the postcodes from the addresses table please”

SQL: "Select postcodes from ADDRESSES"

The above is straightforward; Select - well, you could select things from a shelf couldn’t you? Select the beans from the shelf of tins.

Let’s extend this further into the second question:

“Can you get me all the postcodes from the addresses table where they are near EC4”

SQL: "Select postcodes from ADDRESSES table where postcodes like 'EC4%'"

This is very similar to the first one, except we are now appending a clause or caveat to our original request: only get those postcodes where they are like EC4%. The % symbol says that anything can replace it, so it could bring back a postcode of “EC4 Elephant”, except we really hope that the postcodes are properly typed in and it will only bring back those that start with EC4 and really are postcodes!

Now we can do some even more clever stuff by asking it to bring back other useful information like:

SQL: "Select streetname from ADDRESSES where postcode = 'EC4A 1AB'"

This will now bring us back a whole load of street names in that postcode. Incidentally, if you hadn’t guessed, streetname is another column in the ADDRESSES table which contains the street name for an address. We could call this column “bob” but then it would be difficult to make sense of and the next person who comes along and tries to get information from our table would be mightily confused!

With column naming we are drifting perilously close to naming conventions, which is something programmers spend a lot of their time on to ensure that the next person can understand what they have done, and often so that they can understand what they have done themselves! I will steer us back away from the naming thing…

So we have our tables with our data in our datafiles.

Users/Schema

But I now have two users (James and Alex) who both want their own copy of ADDRESSES because they want their own Christmas card list and don’t want to share. Now we can’t create a copy of the ADDRESSES table next to the other copy, because the database won’t know which copy you are talking about when you say “get me postcodes from ADDRESSES”, and we can’t rename them as both want theirs to be named ADDRESSES. So to solve this we create them a separate area each.

Under Oracle everyone has a login to the database, or User if you prefer. Each User can store information within their own private area, or Schema. In Oracle a User and a Schema are one and the same thing. So we create the JAMES user/schema and the ALEX user/schema. Each of these has its own private storage area, so within the JAMES schema we create an ADDRESSES table and then within the ALEX schema we create another ADDRESSES table. Now they both have their own tables.
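
A minimal sketch of that in SQL (the grants here are just enough to let the new users log in and create tables - my assumption, not a recommendation):

create user james identified by <password>;
grant create session, create table to james;
create user alex identified by <password>;
grant create session, create table to alex;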

But it turns out that ALEX is very, very busy and is using up lots and lots of disk speed, making it difficult for JAMES to work. We need a way of moving that table to another area; we need a storage area for JAMES’ tables and another for ALEX’s tables; we need space to store their tables. So for each of them we create a TABLESPACE.

Each of these tablespaces has its own, private, datafile, as we still need to physically write the information in these tables to disk. So if a tablespace gets busy (as our ALEX example is) we can move that datafile, which contains the tablespace, which contains the table, to another disk on the server. The database engine handles all of that and the end users know no different!

So remember: Table -> Tablespace -> Datafile -> Disk
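
In SQL that might look something like the sketch below (the paths, sizes and names are invented purely for illustration; the point is that each tablespace maps onto its own datafile, which can sit on whichever disk you like):

create tablespace james_data datafile 'd:\ora_data\james_data01.dbf' size 100M;
create tablespace alex_data datafile 'e:\ora_data\alex_data01.dbf' size 100M;
alter user james default tablespace james_data;
alter user alex default tablespace alex_data;
-- shift ALEX's busy ADDRESSES table into his own tablespace (and therefore onto its own disk)
alter table alex.addresses move tablespace alex_data;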

Now what if someone wants to access JAMES’ ADDRESSES table? You would need to tell the database engine that you want to access that one in particular and not the ALEX version. To do so you would say:

SQL: "select streetname from JAMES.ADDRESSES"

You can extend this out to lots of other tables in each of the Schemas, and give permissions to other users/schemas to access tables (or not) - there’s a small sketch of that after the list below. There are a lot of other clever things that I won’t go into here, but some of them are:

  • Views - a query that is stored in the database so you select from the view and not from the tables, useful if selecting from the tables is complicated or you want it hidden for security reasons.
  • Functions/Procedures - pieces of code which sit inside the database and interact with the data.
  • Triggers - perform an action once a particular action is undertaken on a table, for instance a new record might well trigger a function.
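
As a rough flavour of the permissions idea and the first of those (the names are made up for illustration):

grant select on james.addresses to alex;
create view james.london_addresses as select streetname, postcode from james.addresses where postcode like 'EC%';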

Oracle manages all of this and has its own schema called “SYSTEM”, which just contains tables that hold details about everything else; so it might have a table called “USERS” within which ALEX and JAMES appear as records. It would also contain details on the datafiles, tablespaces etc.
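
You can peek at some of this bookkeeping yourself through the data dictionary views that Oracle provides on top of those tables - for example (you will need a suitably privileged login to query these):

select username from dba_users;
select tablespace_name, file_name from dba_data_files;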

Databases (and I will use Oracle as an example here) can do clever things with caching to avoid reading values from the disk. This can be particularly useful when you have a value that is read back repeatedly (such as the current VAT rate), and as such Oracle will hold that in memory rather than retrieve it from disk each time. By clever use of caching and disk management the performance of the database can be altered dramatically.

Thus completes the section on how a database, and a little bit of Oracle, works.

TNS Listener

The final, useful, bit to know is how everyone connects into our new database.

If everyone talked straight to the database engine then it could connect people straight into the database and they could start work. But what if we had two databases on the same server, one live and one test? Would the live one handle the connections for the test database or vice versa?

A better way than having the database engines handle the connections (and remember they are very busy processing information requests) is to have a dedicated process creating the connections. This, in Oracle terms, is the TNS Listener.

The TNS Listener accepts connections on behalf of all databases running on a server and then establishes a connection between the connecting client and the target database before stepping out of the process and letting them carry on without it.  
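
On the client side this usually shows up as entries in a tnsnames.ora file, which name the database you want and the host/port the listener is on. As a sketch (the host, port and service names here are invented for illustration):

LIVEDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbserver01)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = LIVEDB))
  )

TESTDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbserver01)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = TESTDB))
  )

Both entries point at the same listener (same host and port); it’s the service name that tells the listener which of the two databases to hand you over to.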

Done

So there you go, 2500 words on what a database is and what Oracle is - hope you found it useful.

by Craig

Installing vCenter Chargeback (Part 2)

I know that last time I was all excited about installing vCenter Chargeback, and indeed I was, but now I feel a little deflated. Mostly this is because I really didn’t enjoy using the software once I had it up and running.

Now, before you think I might launch into a huge review of the good and the bad about this, I am going to disappoint. Really the product is too large and too complicated to get into here, so I will summarise my experience in a few points:

  • It really isn’t very user friendly - I did read the (large) manual and still struggled to get the data from vCenter into Chargeback. Even then it was still full of someone else’s ideas of how a chargeback model should work rather than yours
  • It’s slightly buggy - when I say “slightly” I should really say “a lot”. For a vendor with a product as rock solid as ESX, the Chargeback functionality was all over the place, throwing errors at odd times, with the old login/logout trick being the best way to fix them.
  • It’s complicated - I mean REALLY complicated to set up. Lots of filling in various fields and working out very fine details to create the exact picture of the landscape you want. This might be a good thing for large enterprises who know exactly what they want to do, but for me it’s just too much

The nail in the coffin?

  • It’s expensive, very expensive. I’ve looked at other products now and they offer chargeback and more, much more. With everything in VMware now being priced per Virtual Machine it was getting very expensive very quickly.

This is what I don’t understand about the entire VMware approach currently: every VM already costs you money in hardware, but now it also costs you in virtualisation software. That means the gap in total cost of ownership (TCO) between physical hardware and virtualisation is getting ever narrower, and virtualisation ever less attractive. Why, VMware, why?

The Chargeback product, although very clever, is a great example of where VMware is going wrong with its pricing. For an environment of 300 Virtual Machines (hardly monstrous) we are looking at between £8,000 and £10,000 to deploy Chargeback. That means we need to charge each virtual machine £25-£35 for the privilege of being charged in the first place!

For that price it just isn’t worth it, and hence I stopped working on it. Maybe in the future, maybe.

by Craig

Installing vCenter Chargeback (Part 1)

I decided to, once again, try my hand at the vCenter Chargeback installation and configuration to try and gain some cost control over our virtual estate. To that end this post, and hopefully others, will comprise my successful deployment of the software against our environment.

The environment

The environment I am working against has a physical vCenter server, a physical database server and the rest is mostly virtual machines.

Deployment

The Oracle database has already been built, so it is simply a case of connecting the new virtual Chargeback server (a Windows 2008 VM) to the database.

(Oracle Client Installer - Installation type)

To install the Oracle client I downloaded the 11.2.0 Win64 client onto the 2008 server (as they are both 64-bit) and chose Runtime as the installation type.

This gives me all the connectivity I need without the superfluous administration tools, which are already present on the database server anyway.

(Oracle Client Installer - Installation Location)

Next up is the location of the Oracle client. This is mostly up to you, but the settings I used are as per the image: c:\oracle.

Having installed the client you need to use the Oracle Net Configuration Assistant to connect you into the database and create your tnsnames.ora file. This is something anyone who has ever used Oracle should have done many times before, and there is nothing different about this one.

Oracle user, tablespace and permissions

So these are the bits that I had to work out and that most people (at least those of you used to Oracle) will be interested in.

Creation of the tablespace (having logged on as system):

create tablespace CHARGEBACK datafile 'c:\ora_data\vccore01\vccore01\chargeback.dbf' size 200M EXTENT MANAGEMENT LOCAL AUTOALLOCATE;

Creation of the user/schema (change <password> to whatever you want to use as the password):

create user chargeback identified by <password> default tablespace chargeback temporary tablespace temp;

Finally, the permissions that need to be applied to the user/schema “chargeback” in order for it to work! (These might be slightly over the top and I would hope to refine them with a few more hours of experimentation):

grant connect to chargeback; 
grant create view to chargeback; 
grant create procedure to chargeback;
grant create table to chargeback; 
grant create sequence to chargeback; 
grant create any sequence to chargeback; 
grant create any table to chargeback; 
grant create type to chargeback; 
grant unlimited tablespace to chargeback; 
grant create session to chargeback; 
grant drop any table to chargeback; 
grant CREATE ANY CLUSTER to chargeback; 
grant DROP ANY CLUSTER to chargeback; 
grant CREATE ANY INDEX to chargeback; 
grant DROP ANY INDEX to chargeback; 
grant CREATE ANY SYNONYM to chargeback; 
grant DROP ANY SYNONYM to chargeback; 
grant CREATE ANY VIEW to chargeback; 
grant DROP ANY VIEW to chargeback; 
grant CREATE DATABASE LINK to chargeback; 
grant CREATE PROCEDURE to chargeback; 
grant CREATE ANY TRIGGER to chargeback; 
grant DROP ANY TRIGGER to chargeback; 
grant CREATE MATERIALIZED VIEW to chargeback; 
grant CREATE ANY DIMENSION to chargeback; 
grant DROP ANY DIMENSION to chargeback; 
And finally, connected as SYS: grant execute on dbms_lock to chargeback;
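
If you want to sanity-check what the chargeback user has ended up with, a quick query against the data dictionary (run as system) does the job - a sketch:

select privilege from dba_sys_privs where grantee = 'CHARGEBACK';
select granted_role from dba_role_privs where grantee = 'CHARGEBACK';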

Having done all that you should now be in a position to start the installer and run through the wizard to install vCenter Chargeback.

Most of the installation of Chargeback itself is covered by the documentation and I would advise following that rather than this web page; it doesn’t have any “eccentricities” that I am aware of…

Next up will be the install (maybe) and then the configuration of Chargeback.

by Craig

VMware vCenter Chargeback

It might be that I am doing the VMware VCP course this week, or it might be a nagging feeling at the back of my brain, but something is driving me to look again at VMware’s vCenter Chargeback.

I’ve struggled for a while with enterprise management without an effective way to regulate the consumption of resources (as Virtual Machines) by the business. The logical, and obvious, way to do this is through some sort of billing mechanism where each department “pays” for their usage on a periodic basis. Generating the real costs is actually quite difficult, so relative costs are where I will start before trying to get them as accurate as is appropriate.

vCenter chargeback should accomplish this, and do so in real time, by querying the vCenter control server and recording the results.

Anyway, it should be interesting and hopefully not as soul-destroying as the last time I tried it…..

by Craig

Passwords: difficult to remember and easy to break?

It may just be a comic but there is a huge amount of truth in it (from www.xkcd.com):

(xkcd - Password Strength)

So what is the above about then?

Well, it’s the same trick that the lottery uses to get you to buy tickets on games with impossibly high odds such as Euromillions - every extra character hugely increases the difficulty of the guess (and lowers the probability of getting it right).

Take a normal password, say “pass”, and try to guess it. That’s 4 characters long and each character is one of the 26 letters of the alphabet, so the chance of guessing it is 1/26 x 1/26 x 1/26 x 1/26, which is 1/456,976 - a 1 in 456,976 probability of a correct guess.

So let’s capitalise letters (“pAss”) and say the case has to be correct as well. That gives us two options per letter, or 52 possibilities per character (1/52 per character), which equates to (1/52 x 1/52 x 1/52 x 1/52 = 1/7,311,616) a 1 in 7,311,616 probability.

If we go back to the earlier “not caring about uppercase/lowercase” example and add 1 more character to the password to make “passe”, then our guess probability becomes (1/26 x 1/26 x 1/26 x 1/26 x 1/26) 1 in 11,881,376, which is much, much better than our introduction of a capital letter (1 in 7,311,616).

The general principle, therefore, is that the longer the password the harder it is to guess no matter whether you use case sensitivity or numbers or punctuation. The only exception to this is where you use a real word, like “passe”, instead of a made up one, like “passm”, as a dictionary attack would circumvent the guessing.

The comic above assumes 1,000 guesses a second from a random computer attack. Using our passwords, the following would be the time taken to guess them:

  • “pass” - 1 in 456,976 - guessed in 7 to 8 minutes
  • “pAss” - 1 in 7,311,616 - guessed in just over 2 hours
  • “passe” - 1 in 11,881,376 - guessed in just over 3 hours

So there you go: longer passwords are much better than short ones with a few letters changed, and better still if they have number substitutions as well!

Lotteries (incidentally)

The Euromillions is ridiculously difficult to win because of the extra numbers you need to guess, under the same principle. Here follows the maths:

Normal UK Lottery - 6 numbers guessed all between 1 and 49 so the probability calculation is:

6/49 x 5/48 x 4/47 x 3/46 x 2/45 x 1/44 = 720/10,068,347,520 or 1 in 13,983,816 (which is a lot!)

For Euromillions it is 5 numbers guessed, all between 1 and 50, and then 2 more between 1 and 11. This seems better, right? It’s not, because you have to guess 7 numbers in total and that makes all the difference:

5/50 x 4/49 x 3/48 x 2/47 x 1/46 x 2/11 x 1/10 = 240/27,967,632,000 or 1 in 116,531,800 (which is loads!)

If we just strip out one of the end guesses (dropping the 2/11 so you only guess a single number between 1 and 10) then the odds drop to 1 in 21,187,600 - and all because we only had to guess one extra number rather than two….

Conclusion?

Yes, I am sure there was one! Practice what you preach: I for one use normal everyday words in my passwords and put a # symbol between them, so “coffee#please” would be a very good password!
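
(For the curious: treating “coffee#please” as a pure brute-force guess over 13 characters drawn from 26 lowercase letters plus the # symbol gives 27^13, or roughly 4,000,000,000,000,000,000 combinations - at 1,000 guesses a second that is comfortably over a hundred million years. A dictionary attack on two common words with a separator will do far better than that, but it is still a vastly harder target than “pass”.)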

by Craig

VMware 5: a consumer victory? Just.

There has been a rather ominous rumbling of discontent towards VMware ever since it released its new pricing model for VMware 5 (http://www.vmware.com/files/pdf/vsphere_pricing.pdf), with all kinds of accusations of a “VTax” flying around and people talking about jumping ship.

These were no empty threats either; products such as Microsoft’s Hyper-V and Citrix’s XenServer (to name but two) are eager for a share of the market domination that VMware currently enjoys. With the new pricing model we were, like many others, faced with having to buy more licences to run our VMware farm should we follow the upgrade path from Version 4 to Version 5.

It’s not as if we are running massive servers either. By today’s standards our dual CPU, quad core, 64GB RAM servers are under-utilised and should have much more RAM in them. However, under the original licensing model we would have been buying a third licence per server just to keep them running. Even with the changes we are now maxing out our vRAM allowance at 64GB per server, so we are going to be looking at upgrades anyway (http://blogs.vmware.com/rethinkit/2011/08/changes-to-the-vram-licensing-model-introduced-on-july-12-2011.html).

Yes it’s better, but with our servers’ CPUs capable of running many more VMs we are going to want to put more RAM in them and therefore have to pay more for our VMware licensing. Ultimately we may choose to move up to Enterprise licences as a cheaper option with more features - perhaps that was what this was about all along…..?

by Craig

Astonishing admission from Sony

You see things like security breaches all the time these days: from Play.com thinking it might have had some email addresses leaked, to Cotton Traders losing credit card details during a hacking attempt. But never before have we seen something on the scale and exposure of the Sony hack, where up to 77 million users could be affected.

The admission from Sony this week sheds some light on what is likely to be a very embarrassing and extremely expensive lack of standards.

A company that is known for quality writes:

Although we are still investigating the details of this incident, we believe that an unauthorized person has obtained the following information that you provided: name, address (city, state, zip), country, email address, birthdate, PlayStation Network/Qriocity password and login, and handle/PSN online ID. It is also possible that your profile data, including purchase history and billing address (city, state, zip), and your PlayStation Network/Qriocity password security answers may have been obtained. If you have authorized a sub-account for your dependent, the same data with respect to your dependent may have been obtained. While there is no evidence at this time that credit card data was taken, we cannot rule out the possibility. If you have provided your credit card data through PlayStation Network or Qriocity, out of an abundance of caution we are advising you that your credit card number (excluding security code) and expiration date may have been obtained..

So to pick apart some of that statement, they believe someone has gained, or may have gained:

  • your name
  • address
  • email address
  • birthdate
  • login AND password
  • payment history
  • credit card details

The only thing seemingly not obtained was the security code for your credit card. But all the rest is enough for someone to easily impersonate you in a huge shopping spree or, worse, to try to log in to other sites where you use the same usernames and passwords.

What amazes me most, from an IT security point of view, is the admission of the loss of passwords and credit card details. These things are routinely encrypted in databases to give you the sort of defence in depth that offers some protection against these kinds of attacks. Assuming that these were not encrypted, that is the sort of thing that makes people cry negligence and, when huge losses are concerned, that could well bring court cases. There are other possibilities, such as the details being stored encrypted and the hacker using Sony’s own software to decrypt and download them, but that would be a huge security flaw with similar questions over security.

Sony are a big, grown-up company with an important brand to protect, so for them to come out and say this means something has gone seriously wrong. For them to keep the network offline for this long suggests the breach is so core to their systems that they are probably re-writing some of those core systems from scratch. This will present them with a real problem when they eventually tell all of us what really went wrong and what they did to fix it. It will either reveal a huge internal problem - and we will wonder why they didn’t take security more seriously - or give so little information that it won’t really generate any confidence in their once-hacked systems.

Dark times for Sony. For anyone who used their PlayStation Network (like we did) I would advise you to immediately cancel the credit card that was tied to that account and get a new one from your provider. Furthermore, if you used that password somewhere else (like Amazon, for example) I would advise you to change that password as well.

The “abundance of caution” in that Sony statement was perhaps what Sony should have displayed when they built these security systems in the first place - it would have saved everyone a lot of hassle.