Spark configuration

The cores property controls the number of concurrent tasks an executor can run. --executor-cores 5 means that each executor can run a maximum of five tasks at the same time.

The --num-executors command-line flag or spark.executor.instances configuration property controls the number of executors requested.
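As a quick illustration, the same settings can also be supplied programmatically through SparkConf instead of on the spark-submit command line. A minimal sketch (the app name is a placeholder, and the job is assumed to be launched via spark-submit on YARN, which supplies the master):

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class ExecutorConfigExample {
    public static void main(String[] args) {
        // Equivalent to: spark-submit --num-executors 17 --executor-cores 5 --executor-memory 19G
        SparkConf conf = new SparkConf()
                .setAppName("executor-config-example")  // placeholder name
                .set("spark.executor.instances", "17")  // --num-executors
                .set("spark.executor.cores", "5")       // --executor-cores
                .set("spark.executor.memory", "19g");   // --executor-memory

        // Master and deploy mode are expected to come from spark-submit (e.g. --master yarn).
        JavaSparkContext sc = new JavaSparkContext(conf);
        // ... job logic would go here ...
        sc.stop();
    }
}
```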

https://blog.cloudera.com/blog/2015/03/how-to-tune-your-apache-spark-jobs-part-2/

http://blog.cloudera.com/blog/2015/03/how-to-tune-your-apache-spark-jobs-part-1/

https://spoddutur.github.io/spark-notes/distribution_of_executors_cores_and_memory_for_spark_application.html

A better option (compared with a single fat executor per node, the naive layout discussed in the Cloudera posts above) would be to use --num-executors 17 --executor-cores 5 --executor-memory 19G. Why?

  • This config results in three executors on all nodes except for the one hosting the YARN Application Master (AM), which will have two executors.
  • --executor-memory was derived as 63 GB usable per node / 3 executors per node = 21 GB; reserving ~7% for off-heap overhead gives 21 * 0.07 = 1.47 GB, and 21 – 1.47 ≈ 19 GB (the sketch below works through the full arithmetic).
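These figures trace back to the example cluster in the Cloudera post above. Assuming its setup (6 nodes with 16 cores and 64 GB of RAM each, holding back one core and one gigabyte per node for the OS and Hadoop daemons; those specifics come from that post, not from these notes), the arithmetic can be sketched as:

```java
// Executor-sizing arithmetic for the assumed 6-node, 16-core, 64 GB cluster.
public class ExecutorSizing {
    public static void main(String[] args) {
        int nodes = 6;
        int coresPerNode = 16 - 1;   // leave 1 core per node for OS/Hadoop daemons
        int memPerNodeGb = 64 - 1;   // leave 1 GB per node for OS/Hadoop daemons
        int coresPerExecutor = 5;    // rule of thumb for good HDFS client throughput

        int executorsPerNode = coresPerNode / coresPerExecutor;  // 15 / 5 = 3
        int numExecutors = nodes * executorsPerNode - 1;         // 18 - 1 = 17 (one slot for the AM)

        double memPerExecutorGb = (double) memPerNodeGb / executorsPerNode;  // 63 / 3 = 21
        double overheadGb = memPerExecutorGb * 0.07;             // ~1.47 GB off-heap overhead
        int executorMemoryGb = (int) Math.floor(memPerExecutorGb - overheadGb);  // ~19

        System.out.printf("--num-executors %d --executor-cores %d --executor-memory %dG%n",
                numExecutors, coresPerExecutor, executorMemoryGb);
    }
}
```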

Big data knowledge

Hive vs HBase https://www.xplenty.com/blog/hive-vs-hbase/

 

http://data-flair.training/forums/topic/what-is-the-relation-between-mapreduce-and-hive

 

After Hive finishes compiling the query, the job is submitted to the JobTracker (which in YARN corresponds to the ApplicationMaster). The JobTracker coordinates the map/reduce tasks that run the mapper and reducer jobs and store the final result in HDFS. The map task deserializes (reads) the data from HDFS and the reduce task serializes (writes) the data that forms the result of the Hive query.

MapReduce is the framework used to process data stored in HDFS; natively, MapReduce programs are written in Java.
Hive is a batch-processing framework. It processes data using a language called Hive Query Language (HQL). Hive saves you from writing MapReduce programs in Java: instead, one can use a SQL-like language for day-to-day tasks (see the sketch below).
Hive has no way to communicate with map/reduce tasks directly. It communicates only with the JobTracker (the ApplicationMaster in YARN) for job-processing matters once the job has been scheduled.
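To make that concrete, here is a minimal sketch of running an HQL query through Hive's JDBC driver (HiveServer2) instead of hand-writing a Java MapReduce job. The host, port, credentials and table name are placeholders, and the hive-jdbc driver is assumed to be on the classpath:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveQlExample {
    public static void main(String[] args) throws Exception {
        // Placeholder HiveServer2 address and credentials.
        Connection conn = DriverManager.getConnection(
                "jdbc:hive2://localhost:10000/default", "hive", "");
        Statement stmt = conn.createStatement();

        // Hive compiles this GROUP BY into a map phase (project/partition)
        // and a reduce phase (aggregate) -- no hand-written Java needed.
        ResultSet rs = stmt.executeQuery(
                "SELECT col1, COUNT(*) AS cnt FROM my_table GROUP BY col1");
        while (rs.next()) {
            System.out.println(rs.getString(1) + "\t" + rs.getLong(2));
        }
        conn.close();
    }
}
```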

 

YARN: the cluster resource manager; with fair scheduling it distributes resources so that, over time, all running jobs receive a roughly equal share.

 

Hadoop is the leading open-source software framework developed for scalable, reliable and distributed computing. With the world producing data in the zettabyte range, there is a growing need for cheap, scalable, reliable and fast computing to process and make sense of all of this data. The underlying technology for the Hadoop framework was created by Google, as there was no software on the market that fit Google's needs. Indexing the web and analysing search patterns required deep and computationally intensive analytics that would help Google improve its user-behaviour algorithms. Hadoop is built just for that: it runs on a large number of machines that share the workload to optimise performance. Moreover, Hadoop replicates the data across machines, ensuring that the processing of data will not be disrupted if one or more machines stop working. Hadoop has been extensively developed over the years, adding new technologies and features to existing software and creating the ecosystem we have today.

HDFS, or Hadoop Distributed File System, is the primary storage system used by Hadoop. It is the key tool for managing Big Data and supporting analytic applications in a scalable, cheap and rapid way. Hadoop is usually run on low-cost commodity machines, where server failures are fairly common. To accommodate such a high-failure environment, the file system distributes data across different servers in different server racks, making the data highly available. Moreover, when HDFS ingests data it breaks it into smaller blocks that are assigned to different nodes in the cluster, which allows for parallel processing and increases the speed at which the data is managed.
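As a rough sketch of that write path: a client creates a file through the Hadoop FileSystem API and HDFS takes care of splitting it into blocks and replicating them across DataNodes. The NameNode URI and file path below are placeholders:

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWriteExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("dfs.blocksize", "134217728");  // 128 MB blocks (the default in recent Hadoop)
        conf.set("dfs.replication", "3");        // keep 3 copies spread across DataNodes

        // The NameNode URI is a placeholder for a real cluster address.
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020"), conf);
        FSDataOutputStream out = fs.create(new Path("/tmp/example.txt"));
        out.writeUTF("hello hdfs");  // the client just writes; HDFS handles block placement
        out.close();
        fs.close();
    }
}
```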

Hadoop YARN is the resource-management and job-scheduling layer of Hadoop, and the successor to the original MapReduce (MRv1) runtime. The original MapReduce is no longer viable in today's environment. It was created over a decade ago, and as the size of the data being produced increased dramatically, so did the time MapReduce needed to process it, from minutes to hours. Secondly, programming MapReduce jobs is a time-consuming and complex task that requires extensive training. And lastly, MapReduce did not fit all business scenarios, as it was created for the single purpose of indexing the web. YARN provides many benefits over its predecessor: better scalability thanks to distributed life-cycle management, and support for multiple versions of MapReduce in a single cluster. It allows for faster processing and, coupled with the in-memory capabilities of other software such as Apache Spark, comes close to real-time processing. YARN also supports many other frameworks, eliminating the dependence on MapReduce alone and making the platform flexible for different use cases.

Apache Hive is a data warehouse management and analytics system built for Hadoop. Hive was initially developed by Facebook, but soon became an open-source project and has been used by many other companies ever since. Apache Hive uses a SQL-like scripting language called HiveQL, and its queries are converted into MapReduce, Apache Tez or Spark jobs.

Apache Pig is a platform for analysing large sets of data. It includes a high-level scripting language called Pig Latin that automates much of the manual coding required when using Java for MapReduce jobs. Apache Pig is somewhat similar to Apache Hive, though some users say it is easier to transition to Hive than to Pig if you come from an RDBMS/SQL background. However, both platforms have a place in the market: Hive is more optimised for running standard queries and is easier to pick up, whereas Pig is better for tasks that require more customisation.

Apache HBase is a non-relational database that runs on top of HDFS. This schema-less database supports in-memory caching via the block cache and bloom filters, which provide near real-time access to large datasets, making it especially useful for sparse data, which is common in many Big Data use cases. However, it is not a replacement for a relational database, as it does not support SQL, cross-row transactions or joins.
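To give a feel for that non-SQL access pattern, here is a minimal sketch using HBase's native Java client; the table, column family, qualifier and row key are made up for illustration:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseGetPutExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();  // reads hbase-site.xml from the classpath
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("users"))) {  // hypothetical table

            // Write one cell: row key -> column family "d", qualifier "name".
            Put put = new Put(Bytes.toBytes("row1"));
            put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("name"), Bytes.toBytes("alice"));
            table.put(put);

            // Read it back by key -- no SQL, no joins, just key-based access.
            Get get = new Get(Bytes.toBytes("row1"));
            Result result = table.get(get);
            System.out.println(Bytes.toString(
                    result.getValue(Bytes.toBytes("d"), Bytes.toBytes("name"))));
        }
    }
}
```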

Hadoop has become the low-cost industry-standard ecosystem for securely analysing high-volume data from a variety of enterprise sources.


Distribute by/Sort by

https://stackoverflow.com/questions/13715044/hive-cluster-by-vs-order-by-vs-sort-by

https://stackoverflow.com/questions/30908203/can-you-explain-when-and-why-mapreduce-is-invoked-in-hive

http://www.inf.ed.ac.uk/publications/thesis/online/IM100859.pdf
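As a quick illustration of the first link's topic, the sketch below submits the three variants through Hive's JDBC driver. The host, table and column names are made up, and the hive-jdbc driver is assumed to be available:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class HiveSortingExample {
    public static void main(String[] args) throws Exception {
        // Placeholder HiveServer2 address and credentials.
        Connection conn = DriverManager.getConnection(
                "jdbc:hive2://localhost:10000/default", "hive", "");
        Statement stmt = conn.createStatement();

        // ORDER BY: total order, forced through a single reducer (slow on big data).
        stmt.executeQuery("SELECT * FROM sales ORDER BY amount");

        // SORT BY: sorts within each reducer only, so output is only partially sorted.
        stmt.executeQuery("SELECT * FROM sales SORT BY amount");

        // DISTRIBUTE BY + SORT BY: route all rows with the same key to one
        // reducer, then sort within it (CLUSTER BY is shorthand for this on one key).
        stmt.executeQuery("SELECT * FROM sales DISTRIBUTE BY region SORT BY amount");

        stmt.close();
        conn.close();
    }
}
```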

 


To understand the reason (why some Hive queries invoke MapReduce and others do not; the second link above), first we need to know what the map and reduce phases mean:

  1. Map: basically a filter, which selects and organizes data in sorted order. For example, it projects col1_name and col2_name from each row in the second query (presumably something like SELECT col1_name, col2_name FROM t). In the first query (presumably SELECT * FROM t) every column is read as-is, so no filtering is required and hence no map phase.
  2. Reduce: a summary operation over data across rows, for example the sum of a column. Neither query needs any summary data, hence no reducer.