So you have a pile of distributed servers cluttering a crowded data center, and another conglomeration of virtual servers running on yet another pile of distributed servers in another overflowing data center ... and you're pretty sure there's a better way ... but how do you wrap your arms around the capacity requirements of an alternative runtime?
System z has MIPS. Power has rPerfs. Sun/Oracle has M-values. And x86 has a dozen more benchmarks and metrics. How do you convert one metric to another so that an encompassing capacity analysis can be accomplished?
This talk will explore a straightforward means of collecting inventory and runtime data that can be processed using third-party data from Gartner (Ideas International) and, as required, manipulated with simple workload-factored mathematics to meld the distributed world and the mainframe world (zLinux) into a working model of capacity planning for server consolidation.
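The core of the "workload-factored mathematics" can be sketched as a normalization exercise: convert each server's native benchmark rating into a common relative-capacity unit, then scale by measured utilization and a workload adjustment. The conversion factors and function below are illustrative placeholders, not published figures; a real analysis would substitute relative-performance data such as the Gartner (Ideas International) estimates the talk describes.

```python
# Sketch: normalize heterogeneous server ratings to a common
# "relative capacity unit" (RCU). All factors are made-up placeholders
# for illustration only -- substitute real third-party performance data.

# Hypothetical factors: RCUs per unit of each platform's native metric.
FACTORS = {
    "mips": 1.0,       # System z MIPS chosen as the baseline unit
    "rperf": 55.0,     # Power rPerf -> RCU (placeholder value)
    "x86_bench": 0.8,  # an x86 benchmark score -> RCU (placeholder value)
}

def normalized_capacity(rating, metric, utilization, workload_factor=1.0):
    """Convert a server's native rating into consumed RCUs.

    utilization:     average busy fraction (0.0-1.0) from runtime data
    workload_factor: adjustment for workload character (I/O- vs CPU-bound)
    """
    return rating * FACTORS[metric] * utilization * workload_factor

# Example inventory: (native rating, metric name, average utilization)
inventory = [
    (4.2, "rperf", 0.30),       # a lightly used Power server
    (1200, "x86_bench", 0.15),  # a mostly idle x86 server
]

# Total consumed capacity, expressed in one common unit,
# which can then be compared against a consolidation target.
total_rcu = sum(normalized_capacity(r, m, u) for r, m, u in inventory)
```

Summing consumed (not installed) capacity is what makes consolidation math work: dozens of underutilized boxes often collapse into a far smaller normalized footprint on the target platform.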