Gaining Control Of Information
Published By: Chuck Hollis on January 31, 2007 - 10:02pm
Filed In: IT Management

I just love talking to customers. I could do it all day long -- if I didn't have a day job. Sometimes, an intense customer discussion gives you new insights into what it's all about. I had just such an experience this morning, which I want to share with you. It was fascinating, at least from my biased perspective.

The key question: how does IT gain control of information in a large, complex organization without waiting for a lengthy, top-down organizational process to complete? And, at the end of the story, there are a few nuggets of learning I wanted to pass along.

The Context

This was a large, global energy company in for two days of briefings. I got to them early on the first day, and we had the big context and strategy discussion. They did a very good job of painting the picture.

Large, complicated business: yes, they were in energy, but there were parts of their business that looked like finance, parts that looked like retail, parts that looked like manufacturing, and so on. Businesses within businesses.

As they looked up, they saw a very progressive executive management team who "got it" regarding IT and its role as a strategic lever in driving the business. They wanted more.

As they looked down, they found -- well -- a somewhat chaotic situation. Legacy IT fiefdoms. Tactical implementations. Poor architectures. Byzantine funding models. Skill set misalignment. And more. Not an uncommon picture.

As they looked outside their company, not only did they have the usual competitors, but there was a whole new crew of aggressive folks out there they'd never dealt with before. The regulatory environment and public perception weren't favorable either. External optics were very, very important -- and getting more important. The context was changing.
They didn't have time to wait for a top-down mandate to produce organizational alignment, or for architecture teams that took months and months to analyze and recommend, and many more months to implement. They needed some quick wins. They needed to start to gain control of corporate information -- quickly, effectively and without a lot of fuss. Could EMC help them?

What We Talked About

We first started with the big ideas -- wasn't it all about information? And who owns corporate information? The good news is that they thought they had a preliminary mandate to do so. And there were steering committees and other organizational functions popping up to help them. But the mandate and the strategic intent had to be clarified. There was no clear, visceral message as to "why". It almost felt like a communication problem. I did what I could to share EMC's view as to the why -- the informationist manifesto. They understood what I was saying -- at least in theory. And I think they agreed in concept.

The conversation quickly turned to the more tactical -- what in the technology portfolio could help them start to gain control of information, but do it in a way that didn't involve a three-year re-architecting, or organizational disruption? How do you start to become an informationist, when the practical aspects are almost overwhelming? How do you get to quick wins? Show value, start the ship turning, but without years and years of organizational grinding?

And good ideas started to bubble up, one right after the other. The big idea: how do you use existing organizational efforts, coupled with clever use of technology, to get a choke hold on the information beast, and start to wrestle it to the ground? Here's what we found. Details will vary depending on your specifics, but the themes are powerful.

The DR Challenge

They'd come to the conclusion that they would have to start getting serious about DR -- remote data sites, failover, the whole enchilada.
They had done some work in this area, but the stage was set for a major upgrade in this department. In addition to the usual business justification, external optics were an unspoken factor that had to be considered. They'd never done an enterprise-scale DR project. Lots of predictable inertia, as you would find in any other big undertaking that the organization had never attempted before.

What I offered was simple: businesses in other verticals (e.g. financial services) had covered this ground years ago. Just about every decent-sized financial institution had gone through the justification, built the infrastructure, trained their people, and now it was an integral part of their business landscape. Suggestion: rather than framing the problem as an "energy company challenge", frame it as "we're learning from the banking industry on this one". No need to reinvent the wheel, at least conceptually. Sure, there might be a few industry-specific wrinkles, but the overall approach, technology set and methodology were directly applicable. Cut and paste from one context to another.

Then we expanded the frame. As the company drove to enterprise-class DR, there was an additional opportunity: clean up some of the related areas.

They didn't have a good, high-def view of which applications were important and why. That could be fixed as part of the process. A basic application inventory feeds DR rationalization.

They felt that some of their infrastructure needed to be re-rationalized. Servers, storage, etc. That could be fixed as part of the effort. Easier to replicate a clean environment than a messy one.

Some of their IT processes were in need of a bit of tuning up. As you went to remote DR, a clear opportunity emerged to put in a solid process and methodology framework -- essential to being successful with DR.

Simply put, maybe use the DR mandate to drive a bit of change in other, related areas.
Don't make it too complicated, but realize that you had an opportunity to put a few related issues on the table at the same time.

The File Challenge

Understandably, if you're an energy company, a lot of your information lives in files. Acres and acres of unglamorous files that were a clear opportunity to save money, avoid risk and create new value from the information. Not surprisingly, every business function had its own file store. How to gain control?

We introduced the notion of file virtualization, exemplified by Rainfinity. Use it as a control point to start to slowly wrest control away from individual functions, and start to provide some corporate-wide value add.

First project -- global namespace. Payback: clear consolidation opportunities that were quantifiable and achievable. No user impact. No turf battles.

Second project -- next-gen backup and restore using data de-duplication (Avamar). Payback: huge cost savings, an easier self-serve model for users, and laying the foundation for the third project. No disruption to users. Minor turf battles.

Third project -- using the de-duplicated presentation (file systems) to start getting serious about information management. Maybe a quick enterprise search capability. Maybe use it to feed existing knowledge management initiatives that were underway. Maybe do a "compliance run" and just see what sort of land mines were out there in the great file parking lot.

One thing leads to another. At the end of the sequence, they would have control of (almost) all of the file information at their company, be able to show substantial cost savings in a variety of regards, and create new value from the information they already have. And users wouldn't notice, nor would the business units complain too much. And IT could show a big win.

The Management Challenge

Not surprisingly, they had ended up with a difficult-to-deliver-service-level environment for the growing set of business processes that spanned multiple applications and owners.
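The de-duplication idea behind that second file project can be sketched in a few lines of Python. This is a toy illustration only, assuming fixed-size chunking and an in-memory store -- Avamar's actual approach (variable-length chunking, a distributed hash store) is far more sophisticated, and all names below are mine:

```python
import hashlib

# Toy content-addressed de-duplication: split files into fixed-size
# chunks, store each unique chunk once under its SHA-256 hash, and
# represent a file as a "recipe" -- the list of its chunk hashes.
CHUNK_SIZE = 4096
chunk_store = {}  # hash -> chunk bytes, shared across all files


def store_file(data: bytes) -> list:
    """Store a file; return its recipe of chunk hashes."""
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        chunk_store.setdefault(digest, chunk)  # a new chunk costs space only once
        recipe.append(digest)
    return recipe


def restore_file(recipe: list) -> bytes:
    """Reassemble a file from its recipe of chunk hashes."""
    return b"".join(chunk_store[h] for h in recipe)
```

Back up two nearly identical files and the second one costs almost nothing, which is why de-dup pays off so dramatically on file shares full of copied documents.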
The situation was not pleasant today, and not showing signs of dramatic improvement. Yes, there was room for process improvement around ITIL, but how to get a handle on things without significant organizational, technology and architectural change?

I went right to real-time discovery -- the capability of sitting on a mirror port on the core network, cracking every packet, and using fingerprint technology to understand every application, every application relationship, every device, every version, every port, and so on. Non-intrusive (no agents), no performance impact -- and, for the first time, they'd have a real-time view of the legacy infrastructure, and all of its relationships and connection points. And they'd be able to document how it had changed over time.

Now, at this point, the senior guy pushed back a bit -- politely. Yes, this is something we need to look at in the next few years, etc. But not now. So I -- hopefully just as politely! -- pushed back a bit as well. Were you doing any data center migrations or consolidations? Any new major IT upgrades hitting the floor? Problem resolution desk getting a bit busy these days? How much of these efforts were tied up in knowing exactly what was going on, and how it had changed recently?

I made the case that this was a technology that could be immediately deployed, with minimal cost and impact, and that would make a serious change in the quality of life of the IT people -- and, hopefully, the people they serve. For the first time, they could have a real-time, hi-def CMDB (configuration management database) that didn't require a Herculean effort to build and maintain. And that, of course, could lead to all sorts of good things. Another quick win.

The Security Challenge

Not surprisingly, the stakes were going up on IT security, and -- especially -- information security. They had a lot of initiatives going in different directions. I offered two thoughts here.
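The real-time discovery idea above -- watch traffic on a mirror port, fingerprint it, and derive the application relationship map -- can be sketched roughly like this. The port-based fingerprints and all names here are my own simplification; real discovery products crack packet payloads, not just port numbers:

```python
from collections import defaultdict

# Hypothetical port-to-application fingerprints, invented for illustration.
FINGERPRINTS = {1521: "Oracle", 1433: "SQL Server", 3200: "SAP", 80: "HTTP"}


def build_dependency_map(flows):
    """Given (src_host, dst_host, dst_port) tuples observed on a mirror
    port, return a map of application -> set of (src, dst) edges.
    Accumulated over time, this is the raw material for a CMDB."""
    deps = defaultdict(set)
    for src, dst, port in flows:
        app = FINGERPRINTS.get(port, "unknown:%d" % port)
        deps[app].add((src, dst))
    return deps
```

Diff two such maps taken a week apart and you can document exactly how the environment has changed -- the "how had it changed recently" question that migrations and problem resolution keep running into.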
One, look outside your industry to see what other people who deal with sensitive information are doing. Think financial services, government, retail, and so on. The same basic philosophies, technologies, and methodologies can work for you -- the problem is not all that different. And, of course, RSA could help them a good deal.

The second thought turned out to be the more important one -- how do you make what you have more effective? It turned out that they had lots of access points that were spitting out log information of various types. I described how our Network Intelligence product could capture and analyze the logs in real time, as well as provide post-mortem analysis. Basically making what they already had more effective, and -- more importantly -- giving them a centralized point of control for security auditing and analysis. And, once again, they didn't have to re-architect (or re-organize) to gain this benefit. Another quick win.

And there was more ...

We talked about the challenges inherent in server virtualization, and how that impacted infrastructure and management issues.

We talked about their current challenges with SAP, and what we could do there -- a very long discussion; we could have spent much more time. [Note to self: worthy of a future post.]

We talked about what was happening in enterprise collaboration, and why that was a key strategic issue in the near future, as their company became more of a knowledge-worker model and less of a transactional model. We talked about what we were doing with SharePoint and Documentum, and why that would be very relevant to them in the near future.

What we didn't talk about much ...

Storage -- except that they had to invest the effort to get the processes and methodologies right here, not only to create an efficient operational environment, but because the storage-related disciplines could serve as the foundation for what was to come. I thought this was particularly interesting, given our -- ahem! -- heritage.
Storage virtualization -- they had heard a lot about it from the usual suspects. I offered that -- given the big panorama they were facing -- it probably was a bit down the list.

Professional services -- except in the context of specific business problems they were facing. There (for example, DR, SAP, Microsoft, server virtualization) we had some good discussion. That's the right way to talk about it, I think.

I suppose there's quite a long list of things we could have talked about, but didn't. And that's a point. We didn't talk about what we wanted to talk about; we talked about what they wanted to talk about. We kept it focused on their challenges, their goals, and what we could do to help them use technology and process to start to move in the right direction.

The Big Wrap Up

At the end, they were a bit tired, and so was I. But I felt we were able to cover an enormous amount of conceptual ground in a very short timeframe, and still keep it fun and interesting. Simply put, I tried to show how information infrastructure could help them achieve their new mandate, and potentially do it in a way that could work for them.

They told me they got enormous value from this session. They were very polite people, to be sure, but I think there was a sincere appreciation of how we were able to map from their context to ours, and do it in a way that helped them get ahead in a pragmatic, practical way. And we showed them quick wins -- key, strategic kung-fu moves to start to wrestle the information beast to the ground, without the need for three years of organizational grinding.

I hope I get to do more of this in the future. Big fun.
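One closing aside on the security discussion: the "make what you have more effective" idea -- pulling heterogeneous access-point logs into one place and running analysis over the combined stream -- is simple to sketch. The log formats, patterns, and threshold below are invented for illustration; a real product like Network Intelligence ships parsers for hundreds of device formats:

```python
import re

# Toy centralized log analysis: normalize heterogeneous log lines into
# events, then run a simple rule over the combined stream.
PATTERNS = [
    ("firewall", re.compile(r"DENY src=(?P<src>\S+) dst=(?P<dst>\S+)")),
    ("vpn", re.compile(r"LOGIN_FAIL user=(?P<user>\S+)")),
]


def normalize(line):
    """Map a raw log line to a normalized event dict, or None."""
    for source, pattern in PATTERNS:
        m = pattern.search(line)
        if m:
            return dict(source=source, **m.groupdict())
    return None  # unrecognized format: a candidate for a new parser


def flag_failed_logins(lines, threshold=3):
    """Flag users with repeated login failures across all collected logs."""
    counts = {}
    for line in lines:
        event = normalize(line)
        if event and event["source"] == "vpn":
            user = event["user"]
            counts[user] = counts.get(user, 0) + 1
    return {u for u, n in counts.items() if n >= threshold}
```

The centralization is the point: a rule like the one above only works once logs from every access point land in one normalized stream.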