Lean technology and data center virtualization.

Thursday, January 27, 2011

Next year's stuff and pimping *nix

Our storage array is getting on in age. It's more of a software-based device than a straight-up storage system (NetApp), where you pay for the extra intelligence in that layer with some modest performance sacrifices. If you stick with the 80/20 rule, this works well for 80% of shops, IMO.

Fibre Channel is a dead end - you pick the best from protocols and hardware and combine them. NFS is still a great bet for those of us who need flexibility and reliability. Fiber optics of some sort are likely to stay ahead of copper for the same reasons copper stays ahead of wireless. They'll all keep getting faster, although things seem to slow down once you get beyond what a single user can use. IMO that's why SSDs are only now coming into play - they fill the split between RAM and disk.

So a guy I work with had a good idea: buy a NetApp front end and put whatever the hell we want behind it. We can run FC on the back-end loop. Or SAS, FCoE, AoE - whatever. Commodity hardware is good enough for our workload.

Hell, there's a guy in Finance who could run a significant portion of our data center off his workstation. My PHONE is faster than my workstation was when I got here. People need to realize virtualization is not nearly done.


HW > hypervisor > OS > Tomcat (another split point) > Java > APP

Going SERVER > virtual SERVER was a nice clean place to break, one that has sucked for far too long. I suspect Linux is a far more efficient OS than Windows. I'm building a monitoring system to determine whether that's actually true. It needs to tie in with our hardware costs and how many people use a given service at any given time.
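The core of that comparison boils down to simple arithmetic: hardware dollars burned per active user on each OS. Here's a minimal sketch of that model - the hostnames, dollar figures, and utilization numbers are all made up for illustration, and the real system would pull utilization and user counts from monitoring rather than hard-coding them:

```python
# Rough cost-per-user model for comparing OS efficiency across hosts.
# All figures below are hypothetical placeholders, not real measurements.

def cost_per_user(hw_cost_monthly, cpu_utilization, active_users):
    """Hardware dollars consumed per active user per month.

    cpu_utilization is the fraction of the box actually doing work for
    this service, so an OS that serves the same users at lower
    utilization leaves more headroom and scores cheaper."""
    if active_users == 0:
        return float("inf")  # an idle box is pure overhead
    return (hw_cost_monthly * cpu_utilization) / active_users

# Hypothetical: same app, same user count, one Linux box, one Windows box.
linux_cost = cost_per_user(hw_cost_monthly=300, cpu_utilization=0.35,
                           active_users=120)
windows_cost = cost_per_user(hw_cost_monthly=300, cpu_utilization=0.60,
                             active_users=120)

print(f"Linux:   ${linux_cost:.2f}/user/month")
print(f"Windows: ${windows_cost:.2f}/user/month")
```

If the hypothesis holds, the Linux box serves the same 120 users at lower utilization and therefore a lower cost per user; if the numbers come back roughly even, the OS choice doesn't matter as much as I suspect.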

I'm looking forward to building a data center on the same technologies that large schools like MIT & NCSU rely on to serve their student populations - similar in size to my user base, but in general much more discerning, since they KNOW their phone is not as big a pain in the ass as their laptop...



