InfoWorld's top 10 emerging enterprise technologies

Which of today's newest shipping technologies will triumph over the long haul? Here are our best guesses

Everyone is a trend watcher. But at a certain point, to determine which trends will actually weave their way into the fabric of business computing, you first need to take a hard look at the technologies that gave life to the latest buzz phrases.

That's the idea behind InfoWorld's top 10 emerging enterprise technologies of 2011. We're every bit as excited as the most vociferous pundit about big changes in the direction of enterprise IT, from the consumerization of IT to infrastructure convergence. But what actual, vapor-free technologies have emerged that enable these big ideas to take shape? That's InfoWorld's stock in trade.


Among the host of enterprise technologies shipping but not yet widely adopted, we think the following 10 will have the greatest impact. Our selection criteria are subjective rather than objective, derived from many years of evaluating products in the InfoWorld Test Center, observing the ebb and flow of the industry, and taking stock of what appeals to enterprise customers. In other words, this list is based on the collective judgment and experience of InfoWorld editors and contributors, not some magic formula.

Except for the purposes of example, we have for the most part avoided specific product descriptions (visit the InfoWorld Test Center for that). We're focusing on technologies rather than their specific product implementations frozen in time, simply because technology evolves so quickly.

You may not agree with our picks -- in fact, given the contentious world of IT, we'd be surprised if you did. So please post your thoughts in the comments section.

10. HTML5
9. Client-side hypervisors
8. Continuous build tools
7. Trust on a chip
6. JavaScript replacements
5. Distributed storage tiering
4. Apache Hadoop
3. Advanced synchronization
2. Software-defined networks
1. Private cloud orchestration

10. HTML5

InfoWorld has written a huge amount about HTML5, but we spent some time debating internally whether to include it in this list. The naysayers pointed out that we've been putting tags together to form Web pages since the beginning of the World Wide Web. HTML5 has simply added new tags. Did we stop what we were doing to celebrate when someone invented the <strong> tag?

Others took the practical view that while HTML5 looks similar to old-fashioned HTML, the tasks it accomplishes are dramatically different. Local data storage, the <canvas> tag, and the <video> tag make it possible to do much more than pour marked-up words and images into a rectangle. Plus, the new WebSockets spec defines full-duplex communication channels for event-driven Web apps.
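For the curious, here is a minimal sketch of the server half of such a full-duplex channel, written in Python with the third-party websockets package; the package choice, port, and echo behavior are our own illustrative assumptions, not part of the spec:

```python
# Minimal WebSocket echo server -- a sketch using the third-party
# "websockets" package (pip install websockets). The port and echo
# behavior are illustrative assumptions.
import asyncio
import websockets

async def echo(websocket):
    # Full duplex: the server can read and write on the same open
    # connection at any time, with no request/response lockstep.
    async for message in websocket:
        await websocket.send(f"echo: {message}")

async def main():
    async with websockets.serve(echo, "localhost", 8765):
        await asyncio.Future()  # run until cancelled

if __name__ == "__main__":
    asyncio.run(main())
```

A browser page would open the matching client end with new WebSocket("ws://localhost:8765") and could then send and receive at any moment, free of the request/response lockstep of plain HTTP.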

In the end, Adobe's decision to end development of mobile Flash tipped the debate. Suddenly an entire corner of the Web that used to deliver video, casual games, and other animated content is back in play. An entire sector of the Web development industry is going to retool as we move to HTML5 from Flash. And that represents a tectonic shift for Web developers. --Peter Wayner

9. Client-side hypervisors

Conventional desktop virtualization has faltered for two key reasons: It requires a continuous connection between client and server, and the server itself needs to be beefy to run all those desktop VMs.

A client hypervisor solves both problems. It installs on an ordinary desktop or laptop, leveraging the processing power of the client. And laptop users can take a "business VM" with them containing the OS, apps, and personal configuration settings. That VM is secure and separate from whatever else may be running on that desktop -- such as malware some clueless user accidentally downloaded -- and you get all the virtualization management advantages, including VM snapshots, portability, easy recovery, and so on.

Type 2 client-side hypervisors such as VMware Player, VirtualBox, and Parallels Desktop have been in existence for years; they run on top of desktop Windows, Linux, or OS X to provide a container for a guest operating system. Type 1 client-side hypervisors -- which run on bare metal and treat every desktop OS as a guest -- provide better security and performance. They're also completely transparent to the end user, never a drawback in a technology looking for widespread adoption.

Client hypervisors point to a future where we bring our own computers to work and download or sync our business virtual machines to start the day. Actually, you could use any computer with a compatible client hypervisor, anywhere. The operative word is "future" -- Citrix, MokaFive, and Virtual Computer are the only companies so far to release a Type 1 client hypervisor, due in part to the problem Windows has dealt with for years: supplying a sufficient number of drivers to run across a broad array of hardware. However, these companies will be joined next year by Microsoft itself, which plans to include Hyper-V in Windows 8.

Make no mistake, Windows 8 Hyper-V will require 64-bit Intel or AMD hardware. Don't expect bare-metal virtualization from your ARM-based Windows 8 tablet -- or any other tablet -- anytime soon. Note too that, unlike Citrix, MokaFive, and Virtual Computer, which built their client hypervisors with the express purpose of easing Windows systems management, Microsoft has stated that Windows 8 Hyper-V will be aimed strictly at developers and IT pros.

But hey, we're talking about Microsoft. It won't stop with developers and IT pros. Yes, tablets are making their way into the workplace, but the fact of the matter is that large-scale Windows desktop deployments are not going away, and Microsoft will be under more pressure than ever to make them easier to manage. With more and more employees working outside of the office -- or using a stipend to buy their own PCs and bring them to work -- the security and manageability of the client-side hypervisor will offer a compelling desktop computing alternative. --Eric Knorr

8. Continuous build tools

There are two ways for programmers to look at new tools like Jenkins, Hudson, and other "continuous integration" servers, which put all checked-in code through a continuous battery of tests: Lone cowboy coders shriek in horror at being shackled to a machine that rides herd over them. The more collaboratively minded among us like the way continuous build tools help us work together for the betterment of the whole.

When a continuous integration server sends you a scolding email about the problems with the code you checked in 10 seconds ago, it doesn't want to ruin your feeling of accomplishment. It's just trying to keep us all moving toward the same goal.
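Stripped to its essence, a continuous integration server is a loop that watches the repository, builds, tests, and reports. The sketch below is a conceptual illustration only, not how Jenkins or Hudson is actually implemented; the Git commands and "make test" build step are stand-ins for whatever your project uses:

```python
# Conceptual sketch of a continuous integration loop -- an
# illustration of the idea, not Jenkins' actual implementation.
import subprocess
import time

def latest_revision() -> str:
    # Ask the repository for the newest commit (Git assumed here).
    out = subprocess.run(["git", "rev-parse", "HEAD"],
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()

def build_and_test() -> bool:
    # Run the project's build and test suite; command is a placeholder.
    result = subprocess.run(["make", "test"])
    return result.returncode == 0

def notify(revision: str, passed: bool) -> None:
    # Real servers send email, Jabber messages, even MP3 fanfares.
    status = "PASSED" if passed else "FAILED"
    print(f"Build {status} at revision {revision}")

last_seen = None
while True:
    subprocess.run(["git", "pull", "--quiet"])
    rev = latest_revision()
    if rev != last_seen:            # new check-in detected
        notify(rev, build_and_test())
        last_seen = rev
    time.sleep(60)                  # poll once a minute
```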

The idea behind Hudson and Jenkins isn't new; slick proprietary continuous integration tools have been available for some time. Rational Team Concert, TeamCity, and Team Foundation Server are just a few of the proprietary tools pushing team-oriented development. But the emergence of open source alternatives encourages the kind of experimentation and innovation that comes when programmers are given the chance to make their tools better.

There are at least 400 publicly circulated plug-ins for Jenkins and an uncountable number of hacks floating around companies. Many of them integrate with source code repositories like Git or arrange to build the final code using another language like Python. When the build is finished, a number of plug-ins compete to announce the results with MP3s, Jabber events, or dozens of other signals. Still others handle backups, deployment, and cloud management.

This work is quickly being turned into a service. Cloudbees, for instance, offers a soup-to-nuts service that bundles Jenkins with a code repository feeding directly into a cloud that runs the code. While some cloud companies offer little more than raw machines with stripped-down Linux distros, Cloudbees lets you check in your code and handles everything else in the stack. --Peter Wayner

7. Trust on a chip

Experts have long recognized that to ensure security at the highest application levels, all the layers -- including the physical construction of the computing device -- need to be verified.

The Trusted Platform Module (TPM) from the Trusted Computing Group (TCG) was the first popularly adopted hardware chip to assure trusted hardware and boot sequences. It has been adopted by many leading companies, including Apple and Microsoft, and it forms the backbone of Microsoft's BitLocker Drive Encryption and the forthcoming Windows 8 UEFI Secure Boot architecture.

This year, Intel combined the TPM chip and a hardware hypervisor layer to protect boot sequences, memory, and other components -- and any software vendor can take advantage of it. McAfee, now an Intel subsidiary, announced the first integration of the new silicon in its DeepSafe technology. Expect other vendors and OSes to follow.

The TCG, meanwhile, hasn't been resting on its laurels. The latest TPM specification has morphed into providing a hardware-based Next Generation Authentication Token. Essentially, you'll be able to carry your smartcard certificate on the TPM chip, along with other digital certificates. Your device will be all you ever need -- no additional cards, dongles, or key fobs.

Hardware trust solutions aren't perfectly secure, as the Princeton memory freeze and electron microscope attacks showed, but they beat software-only protection. The hardware protection schemes will only get better. Soon enough, every computing device you use will have a hardware/software protection solution running. --Roger A. Grimes

6. JavaScript replacements

Yogi Berra once said of a famous restaurant, "No one goes there anymore, it's too crowded." The same is becoming true of JavaScript. The language may be the most commonly executed code on the planet, thanks to its position as the foundation for Web pages. If that's not enough, its dominance may grow stronger if server-based tools like Node.js gain traction.

Yet for all of JavaScript's success, everyone is moving on to the next thing. Some want to build entirely new languages that fix all of the troubles with JavaScript, and others are just finding ways to translate their code into JavaScript so that they can pretend they don't use it.

Translated code is all the rage. Google's Web Toolkit cross-compiles Java into JavaScript, so the developer writes only properly typed Java code. It continues to get better, and Google has integrated it directly with its App Engine cloud so that you can deploy with one button.

Some of the translations are purely cosmetic. Programmers who write their instructions in CoffeeScript don't need to worry about much of the punctuation that makes JavaScript look a bit too old school. The cross-compiler kindly inserts the punctuation before the code runs.

Other translations are more ambitious. Google recently announced Dart, a language that will apparently fix many of the limitations that the development team thinks make JavaScript a pain. There are classes, interfaces, and other useful mechanisms for putting up walls between blocks of code, an essential feature for large software projects. Spelling out the type of data held in a variable is now possible, but optional. The Dart lovers say they eventually want to replace JavaScript, but for the time being they want to gain a foothold by providing a way to translate Dart into JavaScript. In other words, they want to replace JavaScript by making JavaScript the core of their plan. --Peter Wayner

5. Distributed storage tiering

NAND flash memory -- the stuff of which solid-state drives are made -- is up to 1,000 times faster than disk storage and many times cheaper than DRAM. Flash memory is the hottest commodity in storage, and it will be even hotter when storage management software catches up with the potential of flash in the data center.

Flash memory's special combination of high speed and low cost makes it an excellent choice for server-side cache, where it replaces pricier DRAM, and the natural choice for tier-one storage in SANs, where it replaces slower disks. With the cost of flash steadily dropping and the capacities of SSDs steadily on the rise, the days of disk drives in servers and SANs appear to be numbered.

The best part: Having flash storage in servers introduces a possibility that simply wasn't practical with disk -- namely, managing server-side storage as an extension of the SAN. In essence, server-side flash becomes the top tier in the SAN storage pool, drawing on intelligence within the SAN to store the most frequently accessed or most I/O-intensive data closest to the application. It's like caching, but smarter and more cost-effective.

The huge performance advantages of flash have made automated tiering within the SAN more compelling than ever. All of the leading SAN vendors now offer storage systems that combine solid-state drives, hard disk drives, and software that will dynamically migrate the "hottest" data to the fastest drives in the box. The next step will be to overcome the latency introduced by the distance between SAN and servers. The speed of flash and block-level autotiering software -- which operates in chunks as fine as kilobytes or megabytes -- will combine to close this last mile.
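The core idea behind autotiering is easy to sketch: count accesses per block and keep the hottest blocks on the fast tier. The toy model below is our own simplification for illustration, not any vendor's migration algorithm:

```python
# Toy model of block-level autotiering: promote the most frequently
# accessed blocks to the flash tier. A simplification for
# illustration, not any vendor's actual algorithm.
from collections import Counter

FLASH_SLOTS = 4            # how many blocks fit on the fast tier

access_counts = Counter()  # block id -> access frequency

def record_access(block: int) -> None:
    access_counts[block] += 1

def flash_tier() -> set:
    # The hottest blocks live on flash; everything else stays on disk.
    return {block for block, _ in access_counts.most_common(FLASH_SLOTS)}

# Simulate a skewed workload: a few blocks get most of the I/O.
for block in [1, 2, 1, 3, 1, 2, 9, 1, 2, 3, 7, 1]:
    record_access(block)

print(flash_tier())  # blocks 1, 2, 3 plus one of the cooler ones
```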

Unlike traditional caching, which requires duplicating storage resources and flushing writes to the back-end storage, distributed storage tiering promises both higher application performance and lower storage costs. The server owns the data and the bulk of the I/O processing, reducing SAN performance requirements and stretching your SAN dollar.

The price of these benefits is, as usual, increased complexity. We'll learn more about the promise and challenges of distributed storage tiering as EMC's Project Lightning and other vendor initiatives come to light. --Doug Dineley

4. Apache Hadoop

Two years ago we picked MapReduce as the top emerging enterprise technology, mainly because it promised something entirely new: analysis of huge quantities of unstructured (or semi-structured) data such as log files and Web clickstreams using commodity hardware and/or public cloud services. Over the past two years, Apache Hadoop, the leading open source implementation of MapReduce, has found its way into products and services offered by Amazon, EMC, IBM, Informatica, Microsoft, NetApp, Oracle, and SAP -- not to mention scores of startups.

Hadoop breaks new ground by enabling businesses to deploy clusters of commodity servers to crunch through many terabytes of unstructured data -- simply to discover interesting patterns to explore, rather than to start with formal business intelligence objectives. But remember that Hadoop is basically a software framework on top of a distributed file system. Programs must be written to process Hadoop jobs, developers need to understand Hadoop's structure, and data analysts face a learning curve in determining how to use Hadoop effectively.
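To give a sense of what writing a Hadoop job involves, here is the canonical word-count example expressed as a mapper and reducer for Hadoop Streaming, which lets any program that reads stdin and writes stdout participate in a job. The script is a standard teaching example, not taken from any particular product:

```python
# wordcount.py -- the canonical Hadoop Streaming example: run with
# "mapper" or "reducer" as the argument. Streaming pipes input on
# stdin and collects tab-separated key/value pairs from stdout.
import sys

def mapper():
    # Emit ("word", 1) for every word in the input split.
    for line in sys.stdin:
        for word in line.split():
            print(f"{word}\t1")

def reducer():
    # Streaming sorts mapper output by key, so counts for the same
    # word arrive contiguously and can be summed with one accumulator.
    current, total = None, 0
    for line in sys.stdin:
        word, count = line.rsplit("\t", 1)
        if word != current:
            if current is not None:
                print(f"{current}\t{total}")
            current, total = word, 0
        total += int(count)
    if current is not None:
        print(f"{current}\t{total}")

if __name__ == "__main__":
    mapper() if sys.argv[1] == "mapper" else reducer()
```

Submitted via the hadoop-streaming JAR with this script as both mapper and reducer, the framework takes care of splitting the input, shuffling and sorting the intermediate pairs, and rerunning failed tasks.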

Early on, tools were developed to make exploiting Hadoop easier for developers. Apache Hive provides SQL programmers with a familiar SQL-like language called HiveQL for ad hoc queries and big data analysis. And Apache Pig offers a high-level language for creating data analysis programs that are parallel in nature, often a requirement for large processing jobs.

IBM was among the first to provide tools on top of Hadoop that let analysts extract value almost right away. Its InfoSphere BigInsights suite includes BigSheets, which enables users to explore data and build processing jobs without writing code, all using a spreadsheet-like interface.

And Hadoop solutions from startups are popping up everywhere. Cloudera, Hortonworks, and MapR combine their own Hadoop distros with enterprise-oriented management tools. Karmasphere Studio is a specialized IDE that allows developers to prototype, develop, debug, and monitor Hadoop jobs, while Karmasphere Analyst is a GUI tool that enables data analysts to generate SQL queries for Hadoop data sets and view the output in charts and graphs. Another startup, Datameer, offers Datameer Analytics Solution, which also sports a spreadsheet-style user interface.

Where will this all lead? As Hadoop solutions proliferate, businesses will have access to unprecedented insight derived from unstructured data -- predicting the behavior of Web customers, optimizing workflows, and, with the aid of data visualization tools, discovering patterns in everything from medical histories to common search terms. The best thing about the new wave of Hadoop analytics is that we're only beginning to discover where it may lead. --Eric Knorr

3. Advanced synchronization

Apple and Microsoft may have wildly different strategies, but they agree on one thing: It's time to say good-bye to single-user environments, where each PC or other device is a separate island from the rest of the user's computing world. In fact, both companies are moving to a cloud-enabled fabric of user activities spread across devices and applications.

In October, Apple's iOS 5 debuted alongside iCloud, a cloud-based syncing service that keeps bookmarks, documents, photos, and "key value" data (such as state information) in sync across a user's iOS devices, Macs, and -- to a lesser extent -- Windows PCs. Microsoft's forthcoming Windows 8 takes the concept even further, keeping not just data but application state in sync across Windows 8 PCs and tablets, and probably Windows Phone smartphones; as you pick up a device, whatever you were working on elsewhere is ready for you to continue.
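Underneath, key-value syncing of this kind boils down to reconciling per-key changes across devices. Here is a minimal last-writer-wins sketch; it is our own illustration, since neither Apple nor Microsoft has published its reconciliation algorithm:

```python
# Minimal last-writer-wins key-value sync -- an illustration of the
# idea only; not Apple's or Microsoft's actual algorithm.
import time

class Device:
    def __init__(self, name: str):
        self.name = name
        self.store = {}        # key -> (timestamp, value)

    def set(self, key: str, value) -> None:
        self.store[key] = (time.time(), value)

    def sync(self, other: "Device") -> None:
        # For every key either side knows, the newest write wins.
        for key in self.store.keys() | other.store.keys():
            a = self.store.get(key, (0, None))
            b = other.store.get(key, (0, None))
            winner = max(a, b)   # tuple compare: timestamp first
            self.store[key] = other.store[key] = winner

phone, laptop = Device("phone"), Device("laptop")
phone.set("doc:report", "draft 1")
laptop.set("doc:report", "draft 2")     # later write
phone.sync(laptop)
print(phone.store["doc:report"][1])     # "draft 2" on both devices
```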

This new behavior is going to change much about how people work on computers, in ways that should give applications dramatically new utility.

Early iCloud users, for example, quickly got used to having their documents available on whatever device they happened to have in hand. That allows automatic backup, of course, but it also creates an expectation of being able to work on anything anywhere. Windows 8 goes even further, letting you pick up where you left off in a document or a task.

Imagine a travel management app that handles your expenses, tickets, and itinerary across your devices -- no more copying and pasting information from one source to another. You can easily imagine your smartphone becoming your CPU, syncing to data and other resources at hand, such as network storage, a local keyboard, a local monitor, and nearby networks, as well as passing on tasks to tablets and PCs when you move to one of them. That's the kind of seamless mobility we can begin to imagine with these fabric-oriented syncing capabilities in the OS and in apps.

When you work this way, the notion of emailing yourself documents, copying files between computers, and otherwise manually managing your context seems old-fashioned. When you couple that automatic syncing of data and metadata with context such as location, available input methods, presentation constraints, motion, Internet accessibility, and sensor-driven data, you get true user-centric computing.

The "sync fabric" model of computing has profound implications for apps, security models, and other technology approaches we've all gotten comfortable with. The fabric paradigm may finally do away with the endpoint notion that has bedeviled computer security since the work-at-home and laptop trends began, forcing a better approach to identity management and authentication in a world where the device is a variable, not a constant, as it was in the heyday of the office PC.

Then there's the issue of the user experience and the need for applications and back-end services to adjust as the user moves among the fabric of devices. Context awareness must be built in, so the app adjusts as the user changes devices. Yet that awareness also opens new possibilities for applications that developers are just beginning to imagine.

If that sounds like a science-fiction version of the cloud, it is. But just as many sci-fi fantasies have become real, so too is the notion of a computing fabric that we can tap into and move through. iCloud and Windows 8 are merely the first, early examples. --Galen Gruman

2. Software-defined networks

Like ancient coral reefs, data center networks have grown slowly and inexorably over time and calcified. While servers and storage have benefited from software abstractions that support dynamic management, networks have remained hardware-bound and static. Almost a virtue for decades, their resistance to change has now become a major roadblock on the path to cloud computing.

The technology that promises to remove that roadblock is software-defined networking (SDN). SDN drapes a software layer over switch and router hardware that serves as both a centrally managed control plane and a platform for innovation. SDN isn't network virtualization, though network virtualization will certainly be one of its by-products. Rather, SDN is a way to "program the network" -- that is, it allows cloud providers and ISVs to build new networking capabilities the rest of us can draw on.

The leading example of SDN today is OpenFlow, but OpenStack's Quantum, Juniper's QFabric, VMware's virtual network APIs, and NEC's ProgrammableFlow also take an SDN approach. In the case of OpenFlow, the network programming layer is an open protocol supported by a growing number of network hardware vendors. A key selling point is that OpenFlow requires no changes to the switching hardware, nor does it require that all traffic through the switch be managed through the OpenFlow protocol. It is designed to work within existing network infrastructures.
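Conceptually, OpenFlow reduces a switch to a table of match-action rules installed by a central controller. The toy flow table below is a drastic simplification of the real protocol, but it captures the programming model:

```python
# Toy flow table -- a drastic simplification of OpenFlow's
# match/action model, for illustration only.
flow_table = []  # (priority, match dict, action) installed by controller

def install_rule(priority: int, match: dict, action: str) -> None:
    flow_table.append((priority, match, action))
    flow_table.sort(key=lambda rule: -rule[0])  # highest priority first

def handle_packet(packet: dict) -> str:
    # Fields absent from a match dict act as wildcards.
    for _, match, action in flow_table:
        if all(packet.get(k) == v for k, v in match.items()):
            return action
    return "send_to_controller"   # table miss: ask the controller

# A controller programs the network with rules instead of box-by-box CLI.
install_rule(10, {"dst_ip": "10.0.0.5"}, "forward:port2")
install_rule(5,  {"vlan": 42},           "drop")

print(handle_packet({"dst_ip": "10.0.0.5", "vlan": 42}))  # forward:port2
print(handle_packet({"dst_ip": "10.0.0.9", "vlan": 42}))  # drop
print(handle_packet({"dst_ip": "10.0.0.9", "vlan": 7}))   # send_to_controller
```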

OpenFlow is the brainchild of university researchers who wanted a way to experiment with new network protocols on large production networks, and it first emerged from the lab to overcome networking challenges posed by running enormous big data processing clusters in the public cloud. The next order of business will be solving the problems posed by large-scale virtualization and multitenancy in public and private clouds.

OpenFlow is still emerging, the functionality is currently limited, and it will take more time before the goals are even clearly defined. The consortium behind OpenFlow, the Open Networking Foundation, is less than a year old, but it counts the likes of Facebook, Google, Microsoft, Yahoo, Cisco Systems, Juniper Networks, Hewlett-Packard, Citrix Systems, Dell, IBM, NEC, and VMware as members. All these companies are betting on software-defined networking to make provisioning and managing networks in tomorrow's data centers and clouds as flexible and dynamic as managing virtual machines in today's virtualization clusters. --Doug Dineley

1. Private cloud orchestration

The old method of dedicating infrastructure and admins to individual projects is killing us, resulting in underutilized capacity, high administrative overhead, and drawn-out project cycles. One solution is to pool compute, storage, and network resources in a private cloud -- and move IT toward more agile and efficient shared architectures.

With a private cloud, IT managers can borrow technologies and architectures pioneered by public cloud providers and apply them to their own data center. These clouds tend to have many moving parts, including virtualization management, metering and chargeback systems, automated configuration, and self-service provisioning.

Currently, these technologies tend to be spread across various products and solutions. But one in particular has gained surprising momentum over the past year. It's an open source project known as OpenStack, which offers a core set of cloud orchestration services: virtual machine management, object storage, and image services.
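To give a flavor of what orchestration looks like from the outside, here is roughly how a VM gets booted through OpenStack's Compute (Nova) v2 REST API. The endpoint, token, and IDs are placeholders, and the use of the requests library is our own choice:

```python
# Sketch of booting a VM via the OpenStack Compute (Nova) v2 REST
# API. Endpoint, auth token, and image/flavor IDs are placeholders;
# the third-party "requests" library is our own choice.
import requests

NOVA = "http://cloud.example.com:8774/v2/TENANT_ID"
TOKEN = "AUTH_TOKEN_FROM_KEYSTONE"   # issued by the identity service

payload = {
    "server": {
        "name": "web-01",
        "imageRef": "IMAGE_UUID",    # which OS image to boot
        "flavorRef": "1",            # instance size (CPU/RAM/disk)
    }
}

resp = requests.post(f"{NOVA}/servers",
                     json=payload,
                     headers={"X-Auth-Token": TOKEN})
resp.raise_for_status()
print(resp.json()["server"]["id"])   # the new VM's ID
```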

Billing itself as a "cloud operating system," OpenStack was initially developed by Rackspace and NASA, but plans to spin off the project as a separate foundation were detailed last month. It now claims over 138 participating companies, including AMD, Cisco, Citrix, Dell, F5, HP, Intel, NEC, and a gaggle of cloud startups. According to OpenStack, identity and self-service layers will be included in the next release in 2012. In addition, several vendors are vying to offer commercialized versions of OpenStack, from Citrix (with its Project Olympus) to startup vendors Internap, Nebula, and Piston Cloud Computing.

The best-known OpenStack competitor is Eucalyptus, which is basically a private cloud implementation of Amazon Web Services. The Amazon interoperability runs deep, because the Eucalyptus stack includes a layer that mimics Amazon's API. You can move workloads from Amazon EC2 to Eucalyptus, as long as you don't stumble over a few subtle differences between the two. Eucalyptus also comes in an open source version.

Packages of private cloud tools are appearing at all layers of the stack. Puppet, to take a leading example, is a configuration management framework designed to automate almost any repeatable task in the data center. Puppet can create fresh installs and monitor existing nodes; push out system images, as well as update and reconfigure them; and restart your services -- all unattended. Puppet Labs, the developer of Puppet, partners with both Eucalyptus and OpenStack.

It's easy to be cynical about any cluster of technology to which the term "cloud" is applied. But no one questions the benefits of large-scale virtualization or other schemes, such as network convergence, that pool resources for greater economies of scale. These paradigm changes demand new ways of working -- and the emerging collection of cloud orchestration software supplies the means. --Eric Knorr

This article, "InfoWorld's top 10 emerging enterprise technologies," was originally published at InfoWorld.com.

