My Way Or The Highway
Published By: Chuck Hollis on February 16, 2007 - 1:59pm
Filed In: IT Management

Some of us in the technology community get pretty passionate about our views. That's a good thing, right?

But we're all better served by – at least attempting – a somewhat dispassionate view that there are reasonable alternatives on a given issue. We, as vendors, should state the problem, talk about alternatives, and then go on to why we think our particular answer is better than the other guys'. Customers should ideally be exposed to the thinking, and not just shown the end result.

I saw a pointed response from IBM regarding Hu Yoshida's post on storage virtualization. Of course, I don't agree with Hu. Going farther, I don't think many people outside of HDS would share his particular point of view.

But the real issue here – in my mind – is the framing of the discussion, rather than the stating of the answer. Nothing is cut-and-dried in IT, even though we would like it to be. That's why we have really smart people working in IT. If it were obvious, they wouldn't need us, right?

Background

Hu's been blogging a lot regarding HDS's views on storage virtualization. It seems that every post gets a bit more strident and declarative. I guess the urgency of the matter crossed some sort of threshold for Hu. Specifically:

"Storage virtualization can only be done in a storage controller. Currently Hitachi is the only vendor to provide this."

Well, that settles that, doesn't it?

He triggered a response from Tony Pearson at IBM which is kind of fun to read. I agree with Tony on his basic premise – there are alternatives on the issue. And HDS should be positioning themselves in that framework. Of course, Tony did the usual IBM thing and treated us to a history lesson stretching back to 1972 (!), which was fun to read, but not really relevant to the discussion today.

It's a Complex Discussion

I've written before about how hairy the whole storage virtualization discussion is, and it's still true today.
Lots of different opinions on why you're doing it in the first place: pool your storage, manage your storage, commoditize your storage, tier your storage, replicate your storage, migrate your storage, and so on.

Lots of different ways to do it: do it at the server, do it within a storage controller, do it in the network, use your storage controller as a network controller, and so on.

Lots of different use cases: small companies, large companies, mission-critical applications, less-critical applications, non-disruptive migrations, and so on.

And lots of different interactions with other parts of the infrastructure: overall infrastructure management, servers, applications, networks, security and so on.

Boy, I'd love to narrow all of this down to a single answer, but I just don't think that's realistic.

And No One Has Nailed It Yet

It's not realistic for HDS, IBM or EMC to take that position, e.g. "we have the answer and no one else does". Honestly put – none of us have totally nailed this one yet. Give me 30 minutes with any product evangelist from any of the three companies – or any of the other ones, for that matter – and it won't be pretty.

I think EMC has a decent set of use cases that line up with our business. But the use cases are very different.

What's The Strategy?

But going beyond individual use cases, there are two central strategic questions that underlie the surface discussion.

The first strategic question is – do you see network-based storage virtualization as the strategic platform for new kinds of storage functionality that used to be built into arrays?

As an example, the answer for EMC is "yes, we do". In addition to all the usual volume management features (pooling, migrations, etc.) we see it as a great place to do new forms of replication, e.g. RecoverPoint. And security. And certain forms of management. And a long list of things I don't really want to talk about yet.
The answer for HDS also appears to be "yes", simply because they're using their existing storage controller as a pseudo-network device. Interesting approach, but I have my concerns, shared later.

The answer for IBM is "I don't know" … simply because they don't seem to have a rich inventory of storage functionality (PR and long history lessons notwithstanding) that could be a candidate for network-based storage virtualization. If they do decide to go this way, they've got a lot of big investments to make.

The second strategic question is – what's the best architecture for network-based storage virtualization? The options here are – use a server-based appliance (IBM), use an existing storage controller (HDS), or build a new platform around intelligent SAN technology (EMC).

We've found great use cases where intelligent SAN technology is the obvious and right choice. And we think that this sort of architectural platform can do things at a price point that can't be achieved with traditional storage architectures, or – worse – appliance-based approaches, both of which are well understood.

Simply put, we believe – in the long term, for the majority of use cases – network devices will be better for network-based storage virtualization than either server appliances or repurposed storage controllers.

Put differently, if all EMC had to do was offer a storage virtualization product, it would have been far easier (and far cheaper) to simply go the server route, or the repurposed storage controller route. And – believe me – at the time, it was a very passionate debate all around. We made our decision, and we've been happy with it.

The Journey Has Just Begun

If you think that storage virtualization is a mature market, it's not. As with all the virtualization technologies (server, file, et al.), the journey has just begun.

Every virtualization technology introduces a new functionality point in the stack for doing things a different way than they had been done before. That's cool.
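To make that "new functionality point in the stack" concrete: at its core, any of these approaches – server appliance, repurposed controller, or intelligent SAN device – maintains a mapping from virtual volumes to physical back-end extents, and repoints that mapping underneath live I/O. Here's a toy sketch of the idea in Python; every name in it is my own illustration, not any vendor's actual API:

```python
# Toy sketch of the core of a storage virtualization layer:
# a remappable table from virtual extents to physical back-end extents.
# All class and backend names are illustrative only.

class VirtualVolume:
    """Presents one contiguous volume to the host; each fixed-size
    extent maps to a (backend, physical_extent) pair that the
    virtualization layer can change at any time."""

    def __init__(self, name, extent_map):
        self.name = name
        # extent_map[i] = (backend_name, physical_extent_index)
        self.extent_map = list(extent_map)

    def locate(self, virtual_extent):
        """Resolve a virtual extent to its current physical location
        (what the layer does on every host read or write)."""
        return self.extent_map[virtual_extent]

    def migrate_extent(self, virtual_extent, new_backend, new_physical):
        """Non-disruptive migration: copy the data (elided here),
        then atomically repoint the map entry. The host-visible
        address never changes."""
        self.extent_map[virtual_extent] = (new_backend, new_physical)


# A volume striped across two arrays, then migrated off "array_a"
# (say, because array_a is being retired).
vol = VirtualVolume("vol1", [("array_a", 0), ("array_b", 7)])
assert vol.locate(0) == ("array_a", 0)

vol.migrate_extent(0, "array_b", 42)
assert vol.locate(0) == ("array_b", 42)  # same virtual address, new home
```

The interesting part is that reads and writes keep resolving correctly while the mapping changes underneath – that's what makes pooling, tiering and non-disruptive migration possible, wherever the layer happens to live.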
And every virtualization technology forces re-integration and re-rationalization of the technologies that surround it. That's hard, but that's progress.

And, let's be brutally honest, the world is really a better place when customers have workable options for server virtualization, file virtualization, storage virtualization and so on. And I think that EMC has two important roles to play here – not only delivering differentiated solutions in each category, but giving customers useful ways to integrate these technologies so their world ends up better, and not more complex.

In my opinion, we're just a few years into a broader journey. It's fun, and it's a fascinating discussion.

So don't spoil the party, Hu!!