Saturday, January 29, 2011

FATA disk performance for VMware

Hi,
We are moving to the datacenter and planning to have tiered storage on an EVA4400 - FC RAID 10 for SQL databases and RAID5 across 24 FATA 1TB disks for VMware ESX guests. HP describes FATA disks as suitable for near-online storage, however I am not convinced that 24 spindles would be insufficient for running VMware guests on 3 ESX servers.
Does anyone have an opinion on why this could be such a bad idea?

  • Honestly, the root partition of modern OSes has very low IOPS requirements. Most libraries and executables are cached in RAM, and the needs of services like logging, etc. are very light.

    The only way to tell for sure, though, is to calculate the IOPS requirements of your guests and see if the disks can handle it. But my gut feeling is that 24 spindles seems like overkill for just 3 guests.

    pauska : 3 hosts, not 3 guests.
    joshperry : Hmm, yes it seems that I misread that, and his later comment about 40-50 hosts puts my final sentence on shaky ground.
    From joshperry
  • They are covering their bases, I would imagine. They want you to buy FC for a couple of reasons... one is price: they want to sell a costlier product, of course.

    However, they also know that if you can afford it and you buy it, FC disk has the least chance of dissatisfying a customer with performance issues. This is especially true of virtualized environments, where iops are in effect consolidated (i.e., disk i/o for dozens or hundreds of servers running against relatively few disks). The "bad idea" part comes in when a customer spends good money but doesn't get the performance they need, for whatever reason.

    As you're guessing, 24 spindles of non-FC disk may well be more than fine for your workloads. For regular/moderate workloads of heterogeneous iops, it wouldn't surprise me to find that this would be adequate for 3 ESX hosts' worth of guests (especially if we're not talking really high numbers of guests per host).

    As joshperry says, it's really a matter of working out your iops...once you know that, you can make an informed choice about whether or not the value-proposition of the FATA spindles works for you.

    Sergei : Thank you, I should have more details on IOPS. Currently our VI is connected to a CX3-10 EMC array, and the most logical way to measure IOPS would be to get stats from this array. Unfortunately, enabling stats requires additional licensing. I believe my only choice here is running perfmon for Windows and something similar for Linux, then aggregating the data. But I believe that perfmon data can be pretty different from array data due to the caches? Sergei
    damorg : You have to use what you've got available. You're right that guest VM-based stats may not tell the whole story performance-wise compared to array stats. It isn't uncommon at all to measure performance using Windows and Linux OS-based tools and roll it up as you are thinking of doing. It is not a bad method... done carefully, it should represent fairly well what your environment's workload looks like. Ideally, array stats would offer another "whole system" view to keep in mind. Any chance of temp/eval licenses to look at those array stats? ;)
    From damorg
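As a sketch of the roll-up Sergei and damorg are describing: take per-guest samples from perfmon's "Disk Reads/sec" and "Disk Writes/sec" counters (or iostat on Linux), keep each guest's peak, and sum the peaks into one environment-wide figure. The (guest, reads, writes) sample format here is an assumption for illustration:

```python
# Roll up per-guest disk counter samples into a peak IOPS figure per
# guest and an environment-wide total. Assumed input format: one
# (guest, reads_per_sec, writes_per_sec) tuple per measurement interval,
# e.g. exported from perfmon's "Disk Reads/sec" / "Disk Writes/sec".
from collections import defaultdict

def aggregate_peak_iops(samples):
    """Return (per-guest peak IOPS, sum of those peaks)."""
    peaks = defaultdict(float)
    for guest, reads, writes in samples:
        peaks[guest] = max(peaks[guest], float(reads) + float(writes))
    # Summing peaks is conservative: it assumes every guest peaks at once.
    return dict(peaks), sum(peaks.values())

samples = [
    ("sql01", 120, 40),
    ("sql01", 200, 80),   # busier interval; this one becomes the peak
    ("web01", 30, 10),
]
per_guest, total = aggregate_peak_iops(samples)
print(per_guest)  # {'sql01': 280.0, 'web01': 40.0}
print(total)      # 320.0
```

Summing peaks overstates the requirement when guests don't peak simultaneously, so it errs on the safe side; and as damorg notes, array-side stats would still give the better "whole system" view, caching included.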
  • I'm a big EVA customer and fan, and I have a lot of those exact 1TB FATA disks and think they're great, BUT there's something you need to know. The first box we had with those in saw an enormous number of drive 'failures' within its first year - some were real and required new disks, many were just eject/reseats. The root cause of the problems was that we run our data centers at a steady 19C, and we were told that this was too cold, and also that the disk, being a mid/near-line model, was specifically not capable of a >30% duty cycle - and we'd been using them 24/7. The 450/600GB 15krpm disks are perfect, but these 1TB ones were the bane of our lives until we started treating them as they were meant to be treated. Hope this helps somehow.

    From Chopper3
