Hadoop Isn’t Just for Web 2.0 Big Data Anymore: Hadoop for HPC

In 2004, Google published a white paper describing its use of the MapReduce framework to run large numbers of similar data transformations and queries quickly and reliably at terabyte scale. An open source implementation of MapReduce, together with a distributed file system (HDFS), grew out of Nutch, Apache’s open source search project. Yahoo! invested heavily in that work to support its own search product, and Apache eventually promoted Hadoop out of Nutch into a project of its own.

Although Hadoop has not yet reached a 1.0 release, it has proven to be stable and useful for Big Data Web 2.0 applications. When you use LinkedIn, Facebook, Twitter, or Yahoo!, you are relying on services built on Hadoop.

What about Hadoop for High Performance Computing and scientific applications? It certainly has its place, and a basic understanding of Hadoop helps you see where it can be put to work in HPC.

First, what is MapReduce? MapReduce is a method for performing parallel computations on very large volumes of data by dividing the workload across a large number of similar machines, called ‘nodes’. Its careful data and file management enables linear scalability. MapReduce also differs from other approaches in that its nodes are servers with their own local disk storage: work is assigned to the nodes that already hold the data, rather than moving data to where the processing occurs. This dramatically accelerates applications that process Big Data sets.

With MapReduce, you ‘map’ your input data to the type of output you desire using a function that can be applied to every record independently: for instance, substituting a space for every comma in the input, or counting the occurrences of each word in a book. ‘Reducing’ then aggregates the mapped data into useful results, for example by summing the counts produced for each word.
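To make the word-count example concrete, here is a minimal sketch using Hadoop’s Java MapReduce API (the org.apache.hadoop.mapreduce classes), closely following the standard WordCount tutorial. The class and variable names are our own, and the input and output paths are supplied on the command line.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map step: emit (word, 1) for every word in the input split.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reduce step: sum the counts emitted for each word.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {

    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = new Job(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class); // pre-aggregate counts on each node
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

Compile it against the Hadoop libraries, package it as a JAR, and launch it with something like “hadoop jar wordcount.jar WordCount /input /output”; Hadoop then schedules the map tasks on the nodes that already hold the input blocks.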

Much like Red Hat with Linux, there are now commercial Hadoop distributions, such as Cloudera’s, that provide tools to simplify Hadoop implementation as well as reliable technical support. Hadoop itself provides built-in fault tolerance by keeping three copies of each block of data distributed across the processing nodes, enabling a robust implementation ‘out of the box’. Whereas GPFS and Lustre have scaled across hundreds of servers, known Hadoop implementations have successfully scaled across tens of thousands of nodes.
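As a rough illustration of that replication, here is a small sketch (assuming a reachable HDFS cluster; the file path is hypothetical) showing that the dfs.replication setting, which defaults to three copies, can be read from the configuration and even raised for an individual file.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Minimal sketch: inspect and adjust HDFS block replication from client code.
public class ReplicationDemo {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();

    // dfs.replication defaults to 3 -- each block is stored on three nodes.
    System.out.println("Configured replication: " + conf.getInt("dfs.replication", 3));

    FileSystem fs = FileSystem.get(conf);
    // Keep extra copies of a particularly important file (hypothetical path).
    fs.setReplication(new Path("/data/important.dat"), (short) 5);
    fs.close();
  }
}
```

In practice the cluster-wide default is set once in hdfs-site.xml and rarely needs to be touched; the point is simply that the triple-copy behavior is a configurable property of the file system, not something the application has to build itself.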

So what does all this mean for HPC, scientific, and engineering applications? Microway sees Hadoop as an excellent addition to the stack for data-intensive scientific applications, including bioinformatics, physics, and weather modeling. Hadoop can also accelerate science when workloads involve a series of queries against very large data sets. Additionally, when scaling science from the desktop up to larger workloads, Hadoop can provide an effective transition model.

A few examples of Microway Hadoop solutions include the NumberSmasher 1U, 2U, and 4U servers. With one to four multi-core Xeon CPUs, 512GB of memory, and up to 120TB of storage, the NumberSmasher servers are flexible and cost-effective. Microway will build your cluster for you – whether it’s four nodes or a hundred.

We speak HPC, and we speak Hadoop! To learn more about how Hadoop can accelerate your science and engineering workloads, feel free to reach a specialist at wespeakhpc@microway.com.
