5 tech trends that will impact data centres in the future
At Gartner’s Infrastructure, Operations & Data Centre Summit in Sydney, research analyst David Cappuccio detailed new trends that are likely to impact IT operations and data centres of the future.
“A lot of the time, clients are so busy trying to keep the lights on, or just getting last month’s project done, that they don’t have time to look down the road and see what’s coming and how that might impact them,” said Cappuccio.
“So what things are we not paying attention to that might affect us, down the road? This presentation is about these things. Some are societal, some are technology, some are organisational, some are intertwined, and some are not,” he said.
Below are the first 5 of Cappuccio’s top 10 IT trends that are likely to have either a short- or long-term impact on IT operations.
1. Non-stop demand
“This will be on the list forever – it’s the mantra of IT. Whatever you’re doing, do more of it, do it faster and do it cheaper,” he says.
“Clients say to us, we know this is all important for the business, and we’ve got to be agile, but by the way, we still have to keep the lights on, we still have to run IT.”
Gartner’s latest research shows that almost three quarters of IT budgets are spent on general maintenance, just keeping all the existing physical systems running and up to date. Meanwhile, workloads continue to grow 10 per cent annually around the world.
Network bandwidth is growing at around 35 per cent annually on average, power costs at 20 per cent, and storage capacity at 50 per cent, which compounds to around 600 per cent growth in storage alone over the next five years.
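Those annual rates compound, which is where the five-year storage figure comes from. A quick back-of-the-envelope check, sketched in Python using the annual rates quoted above (the rates are the article’s figures, not new data):

```python
# Back-of-the-envelope check of the compound-growth figures quoted above.
def compound_growth(annual_rate: float, years: int) -> float:
    """Total growth over `years` at a fixed annual rate, as a percentage."""
    return ((1 + annual_rate) ** years - 1) * 100

print(f"Storage, 50%/yr over 5 yrs:   {compound_growth(0.50, 5):.0f}%")  # ~659%
print(f"Bandwidth, 35%/yr over 5 yrs: {compound_growth(0.35, 5):.0f}%")  # ~348%
print(f"Power, 20%/yr over 5 yrs:     {compound_growth(0.20, 5):.0f}%")  # ~149%
```

At 50 per cent a year, five years of compounding lands near 660 per cent, in the same ballpark as the 600 per cent figure above.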
“What’s not mentioned much is input/output. I talk to people now with heavily virtualised data centres and they’re talking about I/O growth rates of four to five times on a year-on-year basis,” says Cappuccio.
These are the demands that the business doesn’t see, and that IT is expected to solve without impacting on what the business thinks is more important.
“It’s forcing us to expand our networks and to make a lot of changes to how we design infrastructure to support that.”
2. Every business unit is a technology startup
Business units are now making decisions that are technically IT decisions – so how do IT leaders deal with that?
“Every business unit is a technology startup now, and their perception is that the IT team are just too slow, they can’t react fast enough for us so we have to do it ourselves,” says Cappuccio.
Decisions by business units drove the introduction of PCs into the workplace, the early adoption of cloud in a lot of companies, the introduction of wireless networking, and the use of mobile technology.
“In most cases IT was not the one pushing change, IT was saying ‘wait, let’s take a breath, let’s do things right in a defined architecture’, and business units were saying ‘no, we don’t have time for that, we got to move’.”
As a result, IT usually spends more time reacting to new technologies than anything else to make sure it supports the business, and IT teams that don’t do this will be seen as a hindrance.
“We can’t live like that, so it’s driving many IT organisations to change how they look at things and the way they organise and support the business, which is creating architectures that are chaotic at best.”
This can be a costly experimental landscape if not managed correctly, as people launch new things quickly, and if it doesn’t work they will seek out something else – but that service is still running, and someone has to foot the bill.
“If you look at all the pieces it looks cheap, but if you add all those up you realise, we’re throwing a lot of money away on something we could’ve solved ourselves if we’d just taken the time to do it right.
“But business sees this as a way to enable themselves faster; if IT is not going to do it for them, they’ll do it without them,” says Cappuccio.
BYOD is a common example: telling employees that IT will only support a certain type of device will “last two or three days”, adds Cappuccio, or until an executive decides they want to buy something else.
“You have to adapt to support it, and so business units in many cases are driving IT. I wouldn’t say that’s necessarily a bad thing but if you’re not paying attention to what they’re doing then you’ll keep getting hit upside the head with new technology you didn’t even know was there.”
Cappuccio recommends involving the business units with IT and research labs, and trying to fund a lab environment where they can test their new technologies. While this is still not under the control of IT, it at least gives IT the opportunity to monitor what’s going on.
“It’s a great way of getting your people involved in newer technologies – let the business drive it.”
3. The Internet of Things
The buzzword of the week – but how is it going to impact IT? It’s definitely going to, it’s just a case of when and how.
Cisco research shows that today’s global data traffic per month is 24 times that of 2013, and it will be 95 times that by 2018, reaching 15.9 exabytes per month.
“The amount of data is staggering, and that’s today’s traffic projected forward. Add on the IoT, and we’re looking at almost 4,000 to 5,000 times the amount of traffic we have today on a month-by-month basis. And will that impact on IT and how we do things? Yes,” says Cappuccio.
Every vendor on the planet seems to have an IoT initiative; however, it’s not just market hype. In fact, the IoT has been a very real trend for 40-odd years in the form of operational technology in manufacturing companies.
“Most companies will use some level of operational technology outside the purview of IT. This is just bringing it more in-house,” says Cappuccio.
IoT is not a single technology but a concept, one continually driven by the proliferation of sensors being applied in the world. These sensors are getting smaller and cheaper, and in some cases won’t even need batteries anymore.
Implementing the sensors is the first step; what matters most is what you do with that data: how you analyse it, understand it, and use it to grow new business from it.
“If I could have certain capabilities at the end, or certain types of applications or analytics capabilities on-premise, there are further uses for it,” says Cappuccio.
“It’s opening up a lot of incredible business opportunities, but our message is the same as earlier – IT isn’t driving this trend, and in fact you shouldn’t even try to. You just need to understand it, start working with business units again, and ask if there are things you can do with it to enable them to work better.”
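To make that “sensors are the easy part” point concrete, here is a minimal sketch of the analysis step. The readings, window size and alert threshold are all invented for illustration; none of them come from Gartner’s presentation:

```python
from statistics import mean

# Hypothetical temperature readings (in degrees C) from a rack sensor; in a
# real deployment these would stream in from the sensors described above.
readings = [21.4, 21.9, 22.3, 24.8, 26.1, 27.5]

WINDOW = 3               # rolling-window size (illustrative choice)
ALERT_THRESHOLD = 25.0   # assumed alert level, not a Gartner figure

# The sensors only produce numbers; the value is in the analysis. Here the
# "analysis" is just a rolling average used to trigger an action.
for i in range(WINDOW, len(readings) + 1):
    avg = mean(readings[i - WINDOW:i])
    if avg > ALERT_THRESHOLD:
        print(f"Rolling average {avg:.1f}C exceeds {ALERT_THRESHOLD}C: act on it")
```

The sensors themselves are commodity; the rolling average and the alert rule are a very simplified stand-in for the analytics where the business value sits.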
With businesses continuing to digitise over the next three to five years, the IoT is going to be necessary for many organisations to compete. Getting started doesn’t mean driving all the initiatives yourself; it means sitting down with business units to make a plan, figure out which opportunities to test, and work out what else you can draw from them.
“If the pilot or beta test works then you can try something else, if you have some failures, that’s fine – let it fail, learn from it, try something else.
“If you don’t do that, your competition will, and already is, so this is where IT can enable digital business, at the edge, designed to engage with all the behaviour of your customers.”
4. Software-defined infrastructures
“Software-defined infrastructure, or software-defined everything (SDx) – one analyst just wanted to call it software-defined whatever,” joked Cappuccio.
“Depending on whom you talk to – it could solve world hunger or it’s a specific niche protocol.”
The faster technology changes, the more we find ways to use it, or abuse it, and it’s not slowing down any time soon. Last year, NTT Japan successfully tested a fibre optic cable that pushes 14 trillion bits per second (14Tbps) down a single strand of fibre.
Software-defined infrastructure can reduce the cost and complexity of a network, plus drastically improve work flow and management.
“The most difficult part of IT infrastructures to manage or change is the network, with probably the most highly paid people in the data centre doing so,” says Cappuccio.
“When you need better data centre performance, they have optimised it at a device-by-device level, and that’s very expensive, very time consuming and not very adaptive.
“What if we created an environment we managed with software instead? Then we can manage it based on a common set of principles … suddenly the environment seems a lot more flexible, potentially a lot more scalable and a lot more adaptive.”
Cappuccio says in this environment you can potentially tie in networks, storage, servers, data centres – everything – into the software defined infrastructure. If you can build an environment like the virtual data centre, it doesn’t really matter where the physical components reside, so long as you can tie them all together.
This trend can also change the way we look at projects, shifting from a technology point of view to a workflow point of view. Customers log in from different parts of the world, network traffic changes, and you can manipulate the network to get optimal workflow out of it, based on demand.
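To make the “manage it with software” principle concrete, here is a minimal sketch of the declarative, desired-state approach that SDx tools aim for. The device names, settings and reconcile logic are invented for illustration and don’t reflect any specific SDN product:

```python
# Minimal sketch of the software-defined idea: declare desired state once,
# then reconcile every device against it, instead of hand-tuning each box.
# Device names and settings are hypothetical.

desired_state = {
    "bandwidth_mbps": 1000,
    "vlan": 42,
}

# Actual state as it might be reported back by individual devices.
actual_state = {
    "syd-switch-01": {"bandwidth_mbps": 1000, "vlan": 42},
    "syd-switch-02": {"bandwidth_mbps": 100, "vlan": 42},   # drifted
    "mel-switch-01": {"bandwidth_mbps": 1000, "vlan": 7},   # drifted
}

def reconcile(device: str, current: dict, target: dict) -> None:
    """Push only the settings that differ from the declared target."""
    for key, want in target.items():
        have = current.get(key)
        if have != want:
            # A real controller would call the device's API here.
            print(f"{device}: {key} {have} -> {want}")

for device, current in actual_state.items():
    reconcile(device, current, desired_state)
```

The point of the pattern is that policy lives in one place: change desired_state once and every device is brought into line, rather than optimising box by box.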
The main issue with SDx is that, like many trends, vendors have jumped on the bandwagon very quickly, with ‘software defined’ offerings popping up everywhere that are the same products under new terms.
“A lot of them are not quite what we’d call ‘software defined’, the two terms out there are software defined networks (SDN) and software defined data centres (SDDC). I’ve heard one vendor say ‘software defined power’. I’m waiting for ‘software defined software’ to pop up, everyone seems to have their own term,” says Cappuccio.
Another issue is the drastic impact it will have on your most important people in the data centre, such as those in storage and network management.
“It is organisationally disruptive, it changes how to do things, who is responsible and in most cases, it changes the skillsets,” says Cappuccio.
“The choice is to let those workers get aggravated with the changes and find a way around the problem, or to get them involved early to find the best way to do it; Gartner recommends the latter,” says Cappuccio.
Many companies are now testing software defined networks and storage, in small beta environments and production environments, though not across the board.
“They’re training their staff, they’re testing out different vendor offerings, getting comfortable with it – it’s an evolutionary thing, not a revolutionary thing.”
5. Integrated systems
Integrated systems are not overly new; in most IT shops they took off five or six years ago when Cisco released UCS, followed by Converged Infrastructure from HP. Since then there have been many variations on the theme; however, we can expect more change to come, both from providers and internal systems.
“Vendors were looking at this and wondering, what if we offered not a single component or device to do one thing but a composite of things, and we adopt a managed toolset to run these things,” says Cappuccio.
Historically, with servers or storage, individuals within those stacks would analyse the vendors and the products, looking for best of breed; it was all component-level analysis.
“With an integrated system, that doesn’t happen as much,” says Cappuccio. “You’re now analysing at the vendor offering level: who’s underneath it? Can you trust this vendor to manage the environment for you, will they grow with it, and what’s their product plan?”
Because these systems contain many elements in one, the devices became more expensive, moving the decision point away from IT staff and up the food chain to C-level decision makers.
“The analysis may take longer for each device purchase but once the decision is made, it’s a long-term decision,” says Cappuccio. “Once you install a couple of systems and they begin to work as advertised, or even close to it, when it’s time to grow that environment, I’m not going to do the analysis again, I picked my vendor.”
This higher-level analysis means companies will evolve from ‘best of breed’ to ‘best of brand’. Gartner has categorised several classes of integrated systems: integrated stack systems (ISS), integrated infrastructure systems (IIS), integrated reference architectures, and fabric-based computing (FBC).
“There’s a stack sale versus a component sale battle going on with vendors, and it’s up to you to decide – do I need to analyse things at the component level or is it easier to do things at a system level?” says Cappuccio.
“IT teams should optimise these systems for single purposes, in any case. We’re starting to see these systems come out for specific use cases. You might have one level of analytics, one type of database or one type of application you want to run at an extremely large scale.
“Integrated systems can be used for single purpose tasks, rather than building your own general purpose system. It requires a lot of power and protocols, a lot of common tools, but we’re not there yet.”