Wednesday, February 22, 2012

SAN migrations

A good rule of thumb for SAN migrations is that any database exceeding 10TB should have its LUN layout inspected prior to migration.

Sure, new hardware has lots of performance gains, but don't expect cache alone to make up for a lack of spindles.

If there is any existing wait I/O at the CPU level, resolve it prior to the migration, or instead of treating the migration as a SAN migration, treat it as a database migration.....create a new server with new LUNs, copy the data over, validate performance, and then schedule the live SAN update and cut over. This may take longer, but it's better than spending the weekend and nights on a high-severity incident.
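As a quick sanity check before committing to a cut-over, something like the sketch below can flag whether wait I/O is already a problem. It is a minimal Linux-flavoured example reading /proc/stat (on AIX you would watch the "wa" column in vmstat instead), and the 10% threshold is just an illustration, not a rule.

# Minimal pre-migration check (Linux): sample /proc/stat twice and report the
# iowait percentage over the window. The 10% threshold below is illustrative.
import time

def cpu_times():
    with open("/proc/stat") as f:
        fields = f.readline().split()[1:]   # first line: aggregate "cpu" counters
    return [int(x) for x in fields]

def iowait_percent(interval=5):
    before = cpu_times()
    time.sleep(interval)
    after = cpu_times()
    deltas = [b - a for a, b in zip(before, after)]
    total = sum(deltas)
    iowait = deltas[4]                      # 5th counter is iowait
    return 100.0 * iowait / total if total else 0.0

if __name__ == "__main__":
    wa = iowait_percent()
    print(f"iowait over sample window: {wa:.1f}%")
    if wa > 10:                             # arbitrary threshold for illustration
        print("Significant wait I/O - fix the LUN layout before migrating.")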

Regards.

Scott.

Thursday, February 9, 2012

database migration

Sometimes people get confused by big database migrations, so keep it small.
Migrate dev first....then qa....then prod/dr.


If you go into a big project it may just never get done because it's too complex, involves too many people and costs too much.

Take it one bite at a time......and if Veritas clustering is involved, give the teams 3 weeks to validate and test all the cluster configurations....don't rush it or you may regret it.....and if Veritas tries to get you to license a full Power 7 box for one LPAR, then just go to HACMP.

Pre-defined IP ranges on VMware clusters

Check that you have a full set of pre-defined IPs that will allow you to avoid joining new VLANs to your new VMware clusters.

Hate to ask....but do so, or else you may be halfway through a deployment and need to add networks.
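A throwaway headroom check along these lines can answer the question before the deployment starts. The cluster names, blocks and counts below are all made up for the example.

# Hypothetical check: confirm each reserved block still has headroom before a
# deployment, so you don't discover mid-rollout that you need to add networks.
import ipaddress

# Example reservations per cluster - the blocks and counts here are invented.
reserved = {
    "vmcluster01": ipaddress.ip_network("10.20.30.0/24"),
    "vmcluster02": ipaddress.ip_network("10.20.31.0/24"),
}
allocated = {
    "vmcluster01": 200,   # addresses already handed out
    "vmcluster02": 40,
}

for cluster, block in reserved.items():
    free = block.num_addresses - 2 - allocated.get(cluster, 0)  # minus network/broadcast
    print(f"{cluster}: {free} free addresses left in {block}")
    if free < 50:          # illustrative threshold
        print(f"  WARNING: {cluster} is close to exhausting its reserved range")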


Wednesday, February 8, 2012

vswitch instead of physical network segregation

Don't let non-technical people drive physical network segregation into your vmware farms.

Build the VMware farms big, to 10+ nodes, and use vSwitches to give you the segregation. Not only can you control the environment, but you can shrink and grow resource pools......

Else you will end up with a bunch of small clusters, and you might as well have just gone with blade chassis like the old-school crowd.

Of course...we never mix DMZ with backend.
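For what the vSwitch-based segregation looks like in practice, here is a minimal sketch using pyVmomi (assuming that library and a reachable vCenter; the vCenter name, credentials, VLAN ID, vSwitch and port group names are all placeholders). It adds a VLAN-tagged port group to an existing vSwitch on one ESX host, which is the kind of separation you would otherwise buy with extra physical clusters.

# Minimal pyVmomi sketch: add a VLAN-tagged port group to an existing vSwitch.
# All names, credentials and IDs are placeholders; lab use only.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab-only: skip cert checks
si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

# Grab the first ESX host in the inventory for the example.
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = view.view[0]

spec = vim.host.PortGroup.Specification()
spec.name = "app-tier-pg"                       # placeholder port group name
spec.vlanId = 120                               # placeholder VLAN tag
spec.vswitchName = "vSwitch1"                   # existing vSwitch on the host
spec.policy = vim.host.NetworkPolicy(security=vim.host.NetworkPolicy.SecurityPolicy())

host.configManager.networkSystem.AddPortGroup(portgrp=spec)
print(f"Added port group {spec.name} on VLAN {spec.vlanId} to {spec.vswitchName}")

Disconnect(si)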

ESX Hosts and IO

Pushing VMware onto new IBM x3850 X5 40-core servers with 512GB of RAM.
This is an uplift from the previous 36 cores.

We have been loading up on lots of IO cards, but it turns out it's overkill. A recent review of SAN IO showed that the busiest 8-node cluster's peak COMBINED IO did not exceed 1/10th of a 4Gb SAN card's capability. Even the network cards did not get stressed on 1Gb connections.

As long as your clusters are not stressed, IO is not an issue. Note to self....scan for ISO images mounted to VMs and drop them, as they will stop vMotion.
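On that note to self, a small pyVmomi sketch along these lines would find the offenders (again assuming pyVmomi and a reachable vCenter; host name and credentials are placeholders). It lists every VM that still has an ISO backing a CD-ROM device, since a connected datastore ISO can pin a VM and block vMotion.

# Minimal pyVmomi sketch: list VMs with an ISO attached to a CD-ROM device.
# Host name and credentials are placeholders; lab use only.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
for vm in view.view:
    if vm.config is None:                       # skip templates / unconfigured VMs
        continue
    for dev in vm.config.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualCdrom) and \
           isinstance(dev.backing, vim.vm.device.VirtualCdrom.IsoBackingInfo):
            print(f"{vm.name}: ISO mounted -> {dev.backing.fileName}")

Disconnect(si)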


Density on Power 7

Keep your minimum core allocations to a tenth of a core on AIX and leave LPARs uncapped. Once you get your CPU utilization up to 20% at the frame/box level, then think about raising minimums.

If you go big out of the gate you will limit the total number of LPARs, as an LPAR will not boot if the combined minimums of the LPARs exceed the cores available in the frame less the VIO allocations.
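To make the frame math concrete, here is a back-of-the-envelope sketch of how much the minimum entitlement limits LPAR count. All the figures are invented for the example.

# Illustration of the point above: the sum of the LPAR minimum entitlements has
# to fit inside the frame's cores less the VIO servers. Numbers are made up.
frame_cores = 32           # physical cores in the Power 7 frame (example)
vio_entitlement = 2 * 0.5  # two VIO servers at 0.5 cores each (example)

for min_ent in (0.1, 0.5, 1.0):
    usable = frame_cores - vio_entitlement
    max_lpars = int(usable // min_ent)
    print(f"minimum {min_ent:>3} cores/LPAR -> at most {max_lpars} LPARs can activate")
# With 0.1-core minimums you can activate hundreds of LPARs on this frame;
# at a 1.0-core minimum you are capped at 31.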




Sunday, February 5, 2012

Enough with the Tools

New tools are great but on average take 6 months to roll out with an aggressive effort and a good team.

First up, focus on what you have and ensure the OS teams and the Application Owners are addressing issues already identified. An industry average is that 10% of known issues are not being addressed, and it's these 10% that cause the majority of high-severity issues.

New tool rollouts, and improved coverage, can follow.