Paper Title
Benchmarking Apache Spark and Hadoop MapReduce on Big Data Classification
Paper Authors
Paper Abstract
Most of the popular Big Data analytics tools evolved to adapt their working environment to extract valuable information from a vast amount of unstructured data. The ability of data mining techniques to filter this helpful information from Big Data led to the term Big Data Mining. Shifting the scope of data from small-size, structured, and stable data to huge-volume, unstructured, and quickly changing data brings many data management challenges. Different tools cope with these challenges in their own way due to their architectural limitations. There are numerous parameters to take into consideration when choosing the right data management framework based on the task at hand. In this paper, we present a comprehensive benchmark for two widely used Big Data analytics tools, namely Apache Spark and Hadoop MapReduce, on a common data mining task, i.e., classification. We employ several evaluation metrics to compare the performance of the benchmarked frameworks, such as execution time, accuracy, and scalability. These metrics are specialized to measure the performance of the classification task. To the best of our knowledge, there is no previous study in the literature that employs all these metrics while taking task-specific concerns into consideration. We show that Spark is 5 times faster than MapReduce when training the model. Nevertheless, the performance of Spark degrades when the input workload gets larger. Scaling the environment by additional clusters significantly improves the performance of Spark; however, a similar enhancement is not observed in Hadoop. The machine learning utility of MapReduce tends to achieve better accuracy scores than that of Spark, by around 3%, even on small data sets.
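The abstract reports two headline numbers: a 5x training-time speedup for Spark and a roughly 3-percentage-point accuracy advantage for MapReduce. As a minimal, hedged sketch of how such metrics are typically derived, the snippet below times two training routines and computes the speedup ratio and accuracy delta. The training functions and accuracy scores are hypothetical placeholders, not the paper's actual Spark MLlib or Hadoop jobs or its measured results.

```python
import time

def measure_training_time(train_fn):
    """Time a single training run with a monotonic high-resolution clock."""
    start = time.perf_counter()
    train_fn()
    return time.perf_counter() - start

# Hypothetical stand-ins for the real Spark and MapReduce training jobs;
# the sleep durations are arbitrary and only illustrate the computation.
def train_spark_model():
    time.sleep(0.01)

def train_mapreduce_model():
    time.sleep(0.05)

spark_time = measure_training_time(train_spark_model)
mapreduce_time = measure_training_time(train_mapreduce_model)

# Speedup expressed the way the abstract does: how many times faster
# Spark's training is relative to MapReduce's.
speedup = mapreduce_time / spark_time

# Accuracy difference in percentage points (placeholder scores, not
# values from the paper).
spark_accuracy = 0.91
mapreduce_accuracy = 0.94
accuracy_delta = (mapreduce_accuracy - spark_accuracy) * 100

print(f"Spark speedup over MapReduce: {speedup:.1f}x")
print(f"MapReduce accuracy advantage: {accuracy_delta:.1f} points")
```

In a real benchmark, the timed calls would wrap the full distributed training jobs, and the runs would be repeated and averaged to smooth out cluster variance.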