
Word Count Map-Reduce Job in Single Node Apache Hadoop Cluster

Publication Date : 12/04/2015

Author(s) :


Volume/Issue :
Volume 2, Issue 4 (04-2015)

Abstract :

Big Data has become a catch-all term for clients and applications in which huge volumes of data must be stored and processed simultaneously. Applications such as Yahoo, Facebook, and Twitter hold huge data sets that must be stored and retrieved on demand by clients. Holding such data in a conventional database inflates physical storage requirements and complicates the analysis needed for business growth. Apache Hadoop reduces this storage burden and enables distributed processing of huge data sets: its MapReduce model combines repeated data so that the full data set is stored in a reduced form. In this project, an experimental Word Count MapReduce job is run on a single-node Apache Hadoop cluster and yields the results expected by the user. This demonstrates the functionality of Big Data in both storage and processing, and the results are found to be very encouraging.

Keywords — Hadoop, MapReduce, Hadoop Distributed File System (HDFS), HBase, Word Count
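The Word Count job described in the abstract follows the classic two-phase MapReduce pattern: the map phase emits a (word, 1) pair for every token in the input, and the reduce phase sums the counts for each distinct word. The paper runs this through the Hadoop MapReduce API on HDFS; the sketch below is a minimal, Hadoop-free simulation of the same two phases using only plain Java collections, so all class and method names here are illustrative assumptions rather than the authors' code.

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class WordCountSketch {
    // Map phase: emit a (word, 1) pair for every token in every input line.
    static List<Map.Entry<String, Integer>> map(List<String> lines) {
        List<Map.Entry<String, Integer>> pairs = new ArrayList<>();
        for (String line : lines) {
            for (String token : line.toLowerCase().split("\\s+")) {
                if (!token.isEmpty()) {
                    pairs.add(new SimpleEntry<>(token, 1));
                }
            }
        }
        return pairs;
    }

    // Reduce phase: sum the counts for each distinct word.
    // (Hadoop would shuffle and sort the pairs by key between the phases.)
    static Map<String, Integer> reduce(List<Map.Entry<String, Integer>> pairs) {
        Map<String, Integer> counts = new HashMap<>();
        for (Map.Entry<String, Integer> pair : pairs) {
            counts.merge(pair.getKey(), pair.getValue(), Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        List<String> input = List.of("hello hadoop", "hello world");
        Map<String, Integer> counts = reduce(map(input));
        System.out.println(counts); // e.g. {world=1, hello=2, hadoop=1}
    }
}
```

In real Hadoop the same logic is split into a `Mapper` and a `Reducer` class submitted as a job, and the framework handles the intermediate shuffle, sort, and grouping by key that the in-memory `reduce` above performs with a single hash map.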

