MapReduce could be the server's newest friend

Placing MapReduce on the Web server could minimize the need for separate analysis clusters, a researcher argued at the Usenix conference

In the future, when administrators do a server build, they may add a MapReduce library to the usual stack of server, database and middleware software.

This copy of MapReduce could analyze log data directly on the server, minimizing the need for a separate analysis cluster and shortening the time until results are available, noted University of California, San Diego researcher Dionysios Logothetis at the Usenix Annual Technical Conference, being held this week in Portland, Oregon.

With this approach, "the analysis [moves] from the dedicated clusters to the log servers themselves, to avoid expensive data migration," said Logothetis, who, along with other researchers at UCSD and Salesforce.com, described this technique in a Usenix paper, "In-situ MapReduce for Log Processing."

First introduced by Google, MapReduce is increasingly being used to analyze large amounts of data that can be located across multiple servers, or nodes. It is most commonly used as part of the Hadoop data processing platform.

While most uses of MapReduce take place on dedicated clusters, the researchers argued that a version of the analytical software framework could also be an integral part of the Web server itself, where it could be used to do early analysis of log data.

Today's commercial Internet sites log detailed information on user visits, information that can be used for targeting ads, security monitoring and debugging.

A single server at a busy e-commerce site can generate 1MB to 10MB of log data every second. Over the course of a day, a fleet of such servers can produce tens of terabytes: 1,000 servers each averaging 1MB per second generate about 86TB of data per day. Facebook, for example, generates about 100TB a day, Logothetis said.
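The arithmetic behind the 86TB figure is easy to check (a back-of-the-envelope calculation, not code from the paper):

    # 1,000 servers, each logging 1MB per second, for one full day.
    SECONDS_PER_DAY = 24 * 60 * 60              # 86,400 seconds
    per_server_gb = 1 * SECONDS_PER_DAY / 1000  # ~86.4GB of log per server per day
    fleet_tb = per_server_gb * 1000 / 1000      # ~86.4TB across 1,000 servers
    print(f"{per_server_gb:.1f}GB per server, {fleet_tb:.1f}TB fleet-wide, per day")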

Typically, large organizations such as Facebook collect data from all their servers, load it into a Hadoop cluster and analyze it there using MapReduce. MapReduce is a framework and associated library for splitting a job into multiple parts so that it can be run simultaneously across many servers, or nodes.
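As a rough illustration of that split (a toy example in plain Python, not Hadoop's actual API), counting page hits means mapping each chunk of the log to (URL, 1) pairs in parallel, then reducing the pairs to totals:

    from collections import defaultdict
    from concurrent.futures import ProcessPoolExecutor

    def map_chunk(lines):
        """Map phase: emit a (url, 1) pair for each request line in one chunk."""
        return [(line.split()[0], 1) for line in lines if line.strip()]

    def reduce_pairs(mapped_chunks):
        """Reduce phase: sum the counts for each URL across all chunks."""
        totals = defaultdict(int)
        for pairs in mapped_chunks:
            for url, count in pairs:
                totals[url] += count
        return dict(totals)

    if __name__ == "__main__":
        # Each chunk stands in for the slice of the log held by one node;
        # the process pool stands in for the cluster running maps in parallel.
        chunks = [["/home 200", "/cart 200"], ["/home 404"]]
        with ProcessPoolExecutor() as pool:
            mapped = list(pool.map(map_chunk, chunks))
        print(reduce_pairs(mapped))  # {'/home': 2, '/cart': 1}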

This "store-first, query-later" approach to log analysis has a number of disadvantages, Logothetis argued. Shipping all the data from the servers consumes a great deal of bandwidth. "This puts a lot of stress on the network," he said.

Logothetis noted Facebook discards about 80 percent of its log data before it is analyzed. With this new technology, that data would not need to be transferred.

The traditional approach also introduces a lag between when the data is produced and when it can be analyzed, which can be a critical shortcoming if the data is time-sensitive in nature.

As an alternative, the researchers proposed placing MapReduce functionality on each server, where it could analyze the data locally and ship a much smaller result set back to the centralized data collection point. They dubbed this approach "in-situ MapReduce" (iMR).
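The bandwidth saving comes from running the reduce step where the data is born. A minimal sketch of that idea (illustrative helper names, not code from the iMR prototype):

    from collections import Counter

    def in_situ_summary(log_lines):
        """Map and partially reduce locally, on the Web server itself:
        a second of raw log lines collapses to a small table of counts."""
        return Counter(line.split()[0] for line in log_lines if line.strip())

    def merge_summaries(summaries):
        """At the central collector, merging small per-server summaries
        replaces shipping and storing the raw logs."""
        total = Counter()
        for summary in summaries:
            total.update(summary)
        return total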

"iMR is designed to complement, not replace traditional cluster-based architectures. It is meant for jobs that filter or transform log data either for immediate use or before loading it into a distributed storage system ... for follow-on analysis," the researchers' paper notes.

As a program, iMR would replicate the MapReduce APIs (application programming interfaces) and carry out the same core functions as MapReduce, namely filtering data and aggregating the results. It would differ in that it could produce analyses continuously, based on the latest data.

The researchers built a prototype of iMR using Mortar, a distributed stream-processing platform. With iMR, the user specifies the range of data to process in a given analysis, such as all the data gathered in the past 60 seconds, as well as how often results are produced and transmitted, such as every 15 seconds.
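That windowing scheme can be sketched as follows (hypothetical class and method names, not the actual Mortar or iMR API): keep the last 60 seconds of records, and emit an aggregate every 15 seconds:

    import time
    from collections import Counter, deque

    class SlidingWindowQuery:
        """Sketch of an iMR-style continuous query: aggregate the last
        `window` seconds of records, emitting a result every `slide`
        seconds. The names here are illustrative, not the iMR API."""

        def __init__(self, window=60.0, slide=15.0):
            self.window, self.slide = window, slide
            self.records = deque()                  # (timestamp, key) pairs
            self.next_emit = time.time() + slide

        def observe(self, key, now=None):
            now = time.time() if now is None else now
            self.records.append((now, key))
            # Evict records that have fallen out of the 60-second window.
            while self.records and self.records[0][0] < now - self.window:
                self.records.popleft()
            # Every 15 seconds, emit the aggregate over the current window.
            if now >= self.next_emit:
                self.next_emit = now + self.slide
                return Counter(k for _, k in self.records)
            return None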

Logothetis acknowledged that Web servers must devote most of their resources to their intended job, namely delivering services to users. But iMR could use the spare cycles left over to crunch the log data.

In their paper, the researchers describe a mechanism that lets an organization trade off the speed at which it gets results against the completeness of the analysis. If the organization wants results quickly, each server may drop individual log entries during times of heavy usage, leading to less complete, but still relevant, results. A thorough analysis of all the data would take longer and require more server resources to complete.
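One way to picture that trade-off (a sketch of the general load-shedding idea, not the paper's actual policy): when the server is busy, sample the log stream instead of processing every entry, and report the sampling rate so consumers know how complete the counts are:

    import random
    from collections import Counter

    def count_with_shedding(log_lines, busy_fraction):
        """Trade completeness for timeliness: `busy_fraction` (0.0-1.0)
        stands in for a real CPU-load measurement, and the keep rate
        shrinks as the server gets busier."""
        keep_rate = max(0.1, 1.0 - busy_fraction)   # never drop everything
        counts = Counter()
        for line in log_lines:
            if random.random() < keep_rate:
                counts[line.split()[0]] += 1
        # Returning the rate lets the collector judge (or rescale)
        # how representative the partial counts are.
        return counts, keep_rate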

While in-situ MapReduce may not be beneficial for organizations that run only a handful of servers, it could be valuable to larger operations, such as search engines, social networks and e-commerce sites, Logothetis said.

Joab Jackson covers enterprise software and general technology breaking news for The IDG News Service. Follow Joab on Twitter at @Joab_Jackson. Joab's e-mail address is Joab_Jackson@idg.com
