Extreme Computing Hadoop Lab Session


Based on material from Stratis Viglas, Michail Basios & Sasa Petrovic

Part 1: Getting Started

This lab is mostly performed in the terminal. Open your terminal application to begin.

Non-DICE

Firstly, if you are doing this tutorial on a machine that isn't DICE, you'll need to ssh into a DICE machine from either a UNIX terminal or a Windows ssh application like PuTTY, using the command below (where sXXXXXXX is your matriculation number). If you're already using a DICE computer, you can skip this step. For more information, see computing support.

ssh sXXXXXXX@student.ssh.inf.ed.ac.uk

You can now continue with the DICE steps.

DICE

We're going to connect to one of the nodes of the cluster. This command will randomly pick one of the 12 servers and connect to it:

ssh scutter$(seq -w 1 12 | shuf -n 1)

You can now test Hadoop by running the following command:

hdfs dfs -ls /

If you get a listing of directories on HDFS, you've successfully configured everything. If not, make sure you do ALL of the described steps EXACTLY as they appear in this document. Do not continue until you have completed this section. If the hdfs command isn't available, contact a demonstrator.

HDFS

In order to let you copy-paste commands, we'll use $USER, which the shell will expand into your user name (i.e. sXXXXXXX).
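For example, to see what it expands to:

echo $USER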

Here are a number of small pointers you should work through to familiarise yourself with navigating around HDFS:

  1. Make sure that your home directory exists:

    hdfs dfs -ls /user/$USER

    To create a directory called /user/$USER/data in Hadoop:

    hdfs dfs -mkdir /user/$USER/data

    Create the following directories in a similar way (they will NOT have been created for you, so you need to create them yourself): /user/$USER/data/input, /user/$USER/data/output and /user/$USER/source.

    Confirm that you've done the right thing by typing

    hdfs dfs -ls /user/$USER

    For example, if your matriculation number is s0123456, you should see something like:

    Found 2 items
    drwxr-xr-x   - s0123456 s0123456          0 2011-10-19 09:55 /user/s0123456/data
    drwxr-xr-x   - s0123456 s0123456          0 2011-10-19 09:54 /user/s0123456/source
  2. Copy the file example1.txt to /user/$USER/data/output by typing:

    hdfs dfs -cp /data/labs/example1.txt /user/$USER/data/output

    It might warn you about DFSInputStream. Just ignore that.

  3. Obviously, example1.txt doesn't belong there. Move it from /user/$USER/data/output to /user/$USER/data/input where it belongs and delete the /user/$USER/data/output directory:

    hdfs dfs -mv /user/$USER/data/output/example1.txt /user/$USER/data/input/
    hdfs dfs -rm -r /user/$USER/data/output/
  4. Examine the contents of example1.txt using cat and then tail:

    hdfs dfs -cat /user/$USER/data/input/example1.txt
    hdfs dfs -tail /user/$USER/data/input/example1.txt
  5. Create an empty file named example2 in /user/$USER/data/input. Use test to check that it exists and that it is indeed zero length (in both cases the exit status $? should be equal to 0).

    hdfs dfs -touchz /user/$USER/data/input/example2
    hdfs dfs -test -e /user/$USER/data/input/example2; echo $?
    hdfs dfs -test -z /user/$USER/data/input/example2; echo $?
  6. Remove the file example2:

    hdfs dfs -rm /user/$USER/data/input/example2

List of HDFS Commands

What follows is a list of useful HDFS shell commands.
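For reference, here is a brief (non-exhaustive) summary of the ones used in this lab; all are run as hdfs dfs <command>:

  -ls <path>               list the contents of a directory
  -mkdir <path>            create a directory
  -cp <src> <dst>          copy files within HDFS
  -mv <src> <dst>          move (rename) files within HDFS
  -rm [-r] <path>          delete a file (or a directory, with -r)
  -cat <path>              print the contents of a file
  -tail <path>             print the last kilobyte of a file
  -touchz <path>           create a zero-length file
  -test -[ezd] <path>      test for existence (-e), zero length (-z) or directory (-d)
  -get / -copyToLocal      copy a file from HDFS to the local filesystem
  -put / -copyFromLocal    copy a file from the local filesystem to HDFS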

Managing Hadoop Jobs

Before we even learn how to run Hadoop jobs, it's wise to learn how to manage them.

Command Line

Listing Jobs

Running the following command will list all (completed and running) jobs on the cluster:

mapred job -list all

This will result in a table like this being displayed:

JobId                 State     StartTime     UserName Priority SchedulingInfo
job_201009151640_0001 2         1285081662441 s0894589 NORMAL   NA
job_201009151640_0002 2         1285081976156 s0894589 NORMAL   NA
job_201009151640_0003 2         1285082017461 s0894589 NORMAL   NA
job_201009151640_0004 2         1285082159071 s0894589 NORMAL   NA
job_201009151640_0005 2         1285082189917 s0894589 NORMAL   NA
job_201009151640_0006 2         1285082275965 s0894589 NORMAL   NA
job_201009151640_0009 2         1285083343068 s0894589 NORMAL   NA
job_201009151640_0010 3         1285106676902 s0894589 NORMAL   NA
job_201009151640_0012 3         1285106959588 s0894589 NORMAL   NA
job_201009151640_0013 3         1285107094387 s0894589 NORMAL   NA
job_201009151640_0014 2         1285107283359 s0894589 NORMAL   NA
job_201009151640_0015 2         1285109169514 s0894589 NORMAL   NA
job_201009151640_0016 2         1285109271188 s0894589 NORMAL   NA
job_201009151640_0018 1         1285148710573 s0894589 NORMAL   NA

You can pipe the output through grep to narrow down the results and see only your own jobs:

mapred job -list all | grep $USER

Job Status

To get the status of a particular job, we can use

mapred job -status $jobid

where $jobid is the ID of a job, as found in the first column of the table above. The status shows the percentage completion of the mappers and reducers, along with a tracking URL and the location of a file with all the information about the job. We will soon see that the web interface provides much more detail.
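For example, using one of the job IDs from the listing above:

mapred job -status job_201009151640_0018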

Web Interface

All of the previous actions can also be performed using the web interface. Open a browser and navigate to http://scutter02.inf.ed.ac.uk:8088 (or simply click the link if you are viewing this document in a browser). Note: you will need to be inside Informatics or use the VPN to see this page.

This shows the web interface of the jobtracker. We can see a list of running, completed, and failed jobs. Clicking on a job ID is similar to requesting its status from the command line, but it shows much more detail, including the number of bytes read/written to the filesystem, the number of failed/killed task attempts, and nice graphs of job completion.

Kill a Job

To kill a job, run the following command, where $jobid is the ID of the job you want to kill:

mapred job -kill $jobid

IMPORTANT NOTE: Make sure to kill any misbehaving jobs you have created using the above command; pressing Ctrl+C is NOT sufficient.

TASK: Run through the following exercise

To try this, first copy the file /data/labs/source/sleeper.py from HDFS to somewhere on your local filesystem (not HDFS), for example into your home directory:
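hdfs dfs -get /data/labs/source/sleeper.py ~/

Then run the following command, replacing <path_for_sleeper.py> with the local path of sleeper.py (e.g. /home/s1234567/sleeper.py). This command (which you will learn to interpret later in this lab) creates a Hadoop streaming job that doesn't actually do anything, except wait for you to kill it: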

hadoop jar /opt/hadoop/hadoop-2.7.3/share/hadoop/tools/lib/hadoop-streaming-2.7.3.jar \
 -input /data/labs/secondary.txt \
 -output /user/$USER/data/sleep-output \
 -mapper sleeper.py \
 -file <path_for_sleeper.py> 

Open another terminal, log into the Hadoop cluster, and list all your jobs. Find the ID of the job you just started. Find out the status of your job, and finally kill it. After you run the kill command, look at the terminal where you originally started the job and watch the job die.

NOTE: If the above command fails, you need to run the following (cleanup) command before re-executing it, in order to get rid of the output folder left behind by the previous attempt:

 hdfs dfs -rm -r /user/$USER/data/sleep-output 

Running Jobs

The Hadoop examples are in /opt/hadoop/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar, which is a lot to type, so you might want to set an environment variable:

export EXAMPLES=/opt/hadoop/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar

Then we can use $EXAMPLES to refer to that path.

Computing pi

This example estimates the mathematical constant \(\pi\) to some error. The error depends on the number of samples we have (more samples \(\rightarrow\) more accurate estimate). Run the example as follows:

hadoop jar $EXAMPLES pi <num_maps> <num_samples>

where <num_maps> is the number of map tasks and <num_samples> is the number of samples per map; for example, using 10 mappers and 5 samples each:

hadoop jar $EXAMPLES pi 10 5

Try the following combinations for <num_maps> and <num_samples> and see how the running time and precision change:

Number of Maps   Number of Samples   Time (s)   \(\hat{\pi}\)
2                10
5                10
10               10
2                100
10               100

Do the results match your expectations? How many samples are needed to approximate the third digit after the decimal dot correctly?

Word Counting

Hadoop has a number of demo applications and here we will look at the canonical task of word counting.

TASK: Try running through the following example

We will count the number of times each word appears in a document. For this purpose, we will use the /data/labs/example3.txt file, so first copy that file to your input directory. Second, make sure you delete your output directory before running the job, or the job will fail. For example, using the directories created earlier:
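hdfs dfs -cp /data/labs/example3.txt /user/$USER/data/input/
hdfs dfs -rm -r /user/$USER/data/output

(Skip the -rm command if the output directory does not exist.) We run the wordcount example by typing: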

hadoop jar $EXAMPLES wordcount /user/$USER/data/input /user/$USER/data/output

where /user/$USER/data/input and /user/$USER/data/output are the input and output directories, respectively. After running the example, examine (using -ls) the contents of the output directory. From the output directory, copy the file part-r-00000 to a local directory (somewhere in your home directory) and examine its contents. Was the job successful?
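For example, assuming the directories above and copying the result into your home directory:

hdfs dfs -ls /user/$USER/data/output
hdfs dfs -get /user/$USER/data/output/part-r-00000 ~/
head ~/part-r-00000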

Running Streaming Jobs

Hadoop streaming is a utility that allows you to create and run map/reduce jobs with any executable or script as the mapper and/or the reducer. The way it works is very simple: input is converted into lines which are fed to the stdin of the mapper process. The mapper processes this data and writes to stdout. You can learn more about stdin and stdout here.

Lines from the stdout of the mapper process are converted into key/value pairs by splitting them on the first tab character (of course, this is only the default behavior and can be changed). The key/value pairs are fed to the stdin of the reducer process which collects and processes them.

Finally, the reducer writes to stdout which is the final output of the program. Everything will become much clearer through examples later.

It is important to note that with Hadoop streaming mappers and reducers can be any programs that read from stdin and write to stdout, so the choice of the programming language is left to the programmer. Here, we will use Python.

Writing a Word-Counting Program in Python 2.7.5

In this subsection, we will see how to create a program in Python that can count the number of words of a specific file. Initially, we will test the code locally on small files before using it in a streaming MapReduce job.

As we will see later, this is important because it helps you avoid running Hadoop jobs that produce wrong results.

Word-Counting Python Mapper

  1. Using Streaming, a Mapper reads from stdin and writes to stdout
  2. Keys and Values are delimited (by default) using tabs
  3. Records are split using newlines

Create a file somewhere in your home directory called mapper.py -- there are a number of Python IDEs available, including PyCharm on DICE machines. Alternatively, simply use gedit:

gedit mapper.py

Run this command in the directory where you want your mapper to be. Then copy the following code into mapper.py and save it. It's worth typing this out by hand rather than copy/pasting, to understand what the code is doing. If you are unfamiliar with the .format syntax for string interpolation in Python, please refer here.

#!/usr/bin/python

import sys

for line in sys.stdin:                  # read input from standard input, line by line
    line = line.strip()                 # remove leading/trailing whitespace
    tokens = line.split()               # split the line into tokens

    for token in tokens:
        print("{0}\t{1}".format(token, 1))    # emit a <token, 1> pair to standard output

Make sure you save the file mapper.py.

IMPORTANT NOTE: for Hadoop to know how to properly run your Python scripts, you must include the following line as the first line in all your mappers and reducers:

#!/usr/bin/python

Word-Counting Python Reducer

Create a file called reducer.py in the same directory as mapper.py, and copy the following code into it:

#!/usr/bin/python

import sys

prev_word = ""
value_total = 0
word = ""

for line in sys.stdin:          # for every line in the input from stdin
    line = line.strip()         # remove leading/trailing whitespace
    word, value = line.split("\t", 1)
    value = int(value)
    # Hadoop sorts the map output by key, so the reducer receives the keys in sorted order
    if prev_word == word:
        value_total += value
    else:
        if prev_word:  # write the previous word's result to stdout
            print("{0}\t{1}".format(prev_word, value_total))

        value_total = value
        prev_word = word

if prev_word:  # don't forget the last key/value pair
    print("{0}\t{1}".format(prev_word, value_total))

Once again, save this file before continuing.

Testing the Code

We perform local testing using typical UNIX-style piping; our test will take the form:

cat <data> | ./mapper.py | sort | ./reducer.py 

This emulates the pipeline that Hadoop performs when streaming, albeit in a non-distributed manner. You also have to make sure that the files mapper.py and reducer.py have execute permissions:

chmod u+x mapper.py
chmod u+x reducer.py

Try the following command and explain the results (hint: type man sort in your terminal window to find out more about the sort command):

echo "this is a test and this should count the number of words" | ./mapper.py | sort -k1,1 | ./reducer.py

Sanity Check

The above command should produce the following output:

a       1
and     1
count   1
is      1
number  1
of      1
should  1
test    1
the     1
this    2
words   1

TASK: Count the number of words a text file of your choosing contains.

Running a Streaming MapReduce Job

The command syntax for creating a Hadoop Streaming MapReduce Job is the following:

hadoop jar <jar_path> [generic_options] [streaming_options]

Streaming Options

After running the code locally and successfully, the next step is to run it on Hadoop. Suppose you have your mapper, mapper.py, and your reducer, reducer.py, and that your input and output directories are <input> and <output> respectively.

We always have to specify /opt/hadoop/hadoop-2.7.3/share/hadoop/tools/lib/hadoop-streaming-2.7.3.jar as the jar to run, and the particular mapper and reducer we use are specified through -mapper and -reducer streaming options.

In the case that the mapper and/or reducer are not already present on the remote machine (which will often be the case), we also have to package the actual files in the job submission. Assuming that neither mapper.py nor reducer.py were present on the machines in the cluster, the previous job would be run as follows:

hadoop jar /opt/hadoop/hadoop-2.7.3/share/hadoop/tools/lib/hadoop-streaming-2.7.3.jar \
 -input <input> \
 -output <output> \
 -mapper mapper.py \
 -file mapper.py \
 -reducer reducer.py \
 -file reducer.py

Here, the -file streaming option specifies that the file is to be copied to the cluster. This can be very useful for also packaging any auxiliary files your program might use (dictionaries, configuration files, etc). Each job can have multiple -file options.

NOTE: In the latest version of Hadoop, the usage of multiple -file streaming options is deprecated. It is recommended to use a single -files generic option, specifying all files separated by commas. The order of generic and streaming options is significant: the -files option must be placed before any streaming options or the command will fail (see the next section for more details).
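For example, the previous job could be submitted with a single -files generic option instead of the multiple -file streaming options (note that, as a generic option, -files comes before the streaming options):

hadoop jar /opt/hadoop/hadoop-2.7.3/share/hadoop/tools/lib/hadoop-streaming-2.7.3.jar \
 -files mapper.py,reducer.py \
 -input <input> \
 -output <output> \
 -mapper mapper.py \
 -reducer reducer.py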

TASK: We will run a simple example to demonstrate how streaming jobs are run, follow these steps

Copy the file /data/labs/source/random-mapper.py from HDFS to a local directory (a directory on the local machine, not on HDFS). This mapper simply generates a random number for each word in the input file, so the file in your input directory can be anything. For example, to fetch the mapper into your home directory:
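hdfs dfs -get /data/labs/source/random-mapper.py ~/

Then, from the directory containing random-mapper.py (and after deleting /user/$USER/data/output if it still exists), run the job by typing: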

hadoop jar /opt/hadoop/hadoop-2.7.3/share/hadoop/tools/lib/hadoop-streaming-2.7.3.jar \
 -input /user/$USER/data/input \
 -output /user/$USER/data/output \
 -mapper random-mapper.py \
 -file random-mapper.py

TASK: What happens when instead of using mapper.py you use /bin/cat as a mapper?

TASK: What happens when you use /bin/cat as both a mapper and reducer?

Generic Options

Various generic job options can be specified on the command line; we will cover the most commonly used ones in this section. The general syntax for specifying additional configuration variables is -D <name>=<value>.

Setting Job Name

To avoid having your job named something like streamjob5025479419610622742.jar, you can specify an alternative name through the mapreduce.job.name variable:

-D mapreduce.job.name="My job"
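As a generic option, -D must be placed before the streaming options. For example, reusing the streaming options from earlier (the placeholders are as before):

hadoop jar /opt/hadoop/hadoop-2.7.3/share/hadoop/tools/lib/hadoop-streaming-2.7.3.jar \
 -D mapreduce.job.name="My job" \
 -input <input> \
 -output <output> \
 -mapper mapper.py \
 -file mapper.py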

TASK: Run the random-mapper.py example again, this time naming your job "Random job <matriculation_number>", where <matriculation_number> is your matriculation number.

After you run the job (and preferably before it finishes), open the browser and go to http://jobtracker.inf.ed.ac.uk:8088/cluster/nodes. In the list of running jobs look for the job with the name you gave it and click on it. You can see various statistics about your job -- try to find the number of reducers used. How many reducers did you use? If your job finished before you had a chance to open the browser, it will be in the list of finished jobs, not the list of running jobs, but you can still see all the same information by clicking on it.

Key-Value Separation

As was mentioned earlier, the key/value pairs are obtained by splitting the mapper output on the first tab character in the line. This can be changed using stream.map.output.field.separator and stream.num.map.output.key.fields variables. For example, if I want the key to be everything up to the second - character in the line, I would add the following:

-D stream.map.output.field.separator=- \
-D stream.num.map.output.key.fields=2

Hadoop also comes with a partitioner class org.apache.hadoop.mapred.lib.KeyFieldBasedPartitioner which is useful for cases where you want to perform a secondary sort on the keys. Imagine you have the following list of IPs:

192.168.2.1
190.191.34.38
161.53.72.111
192.168.1.1
161.53.72.23

You want to partition the data so that addresses with the same parts up to the second dot are processed by the same reducer (first 16 bits of the address). However, you also want each reducer to see the data sorted according to the third dot (first 24 bits). Using the mentioned partitioner class you can tell Hadoop how to group the data to be processed by the reducers. You do this using the following options:

-D mapreduce.map.output.key.field.separator=.
-D mapreduce.partition.keypartitioner.options=-k1,2

The first option tells Hadoop what character to use as a separator (just like in the previous example), and the second one tells how many fields from the key to use for partitioning. Knowing this, here is how we would solve the IP address example (assuming that the addresses are in /user/hadoop/input):

hadoop jar /opt/hadoop/hadoop-2.7.3/share/hadoop/tools/lib/hadoop-streaming-2.7.3.jar \
 -D mapreduce.job.output.key.comparator.class=org.apache.hadoop.mapreduce.lib.partition.KeyFieldBasedComparator \
 -D mapreduce.map.output.key.field.separator=. \
 -D stream.map.output.field.separator=. \
 -D stream.num.map.output.key.fields=3 \
 -D mapreduce.partition.keypartitioner.options=-k1,2 \
-input <input> \
-output <output> \
-mapper cat \
-reducer cat \
-partitioner org.apache.hadoop.mapred.lib.KeyFieldBasedPartitioner

The -D mapreduce.partition.keypartitioner.options=-k1,2 generic option tells Hadoop to partition IPs up to the second separator (the dot in this case), and -D stream.num.map.output.key.fields=3 tells it to sort the IPs according to everything before the third one -- which corresponds to the first 24 bits of the address. Finally the -partitioner <class_name> streaming option specifies the partitioner class that will be used.

The option mapreduce.partition.keypartitioner.options lets you specify which fields to consider for partitioning by using comma separated values starting with -k. For example -k1,2 means keys one and two, -k1,1 means only key one and so forth. You can also specify which characters of each key should be considered by using dots. For example -k1.2,1.5 means partition by key one character two until key one character five. You can find a more detailed specification in the java docs of setKeyFieldPartitionerOptions. This option is also used for secondary sorting as you will learn below.

IMPORTANT NOTE: The order of the arguments is significant. Generics arguments such as the ones starting with -D should be placed before the streaming options, otherwise the command will fail.

IMPORTANT NOTE: The option num.key.fields.for.partition is deprecated. Technically it tells Hadoop how many fields to consider for partitioning but it does not work well together with the more powerful mapreduce.partition.keypartitioner.options.

Partitioning & Secondary Sorting

The secondary.txt file (/data/labs/secondary.txt) contains lines in the following format:

last_name.first_name.address.phone_number

We want to partition the data so that all the people with the same first and last name go to the same reducer, and that they are sorted according to address. An implementation is given:

hadoop jar /opt/hadoop/hadoop-2.7.3/share/hadoop/tools/lib/hadoop-streaming-2.7.3.jar \
 -D mapreduce.job.output.key.comparator.class=org.apache.hadoop.mapreduce.lib.partition.KeyFieldBasedComparator \
 -D mapreduce.map.output.key.field.separator=. \
 -D stream.map.output.field.separator=. \
 -D stream.num.map.output.key.fields=3 \
 -D mapreduce.partition.keypartitioner.options=-k1,2 \
 -D mapreduce.partition.keycomparator.options=-k3 \
-input /data/labs/secondary.txt \
-output /user/$USER/data/output \
-mapper cat \
-reducer cat \
-partitioner org.apache.hadoop.mapred.lib.KeyFieldBasedPartitioner

hdfs dfs -cat /user/$USER/data/output/* returns the expected output as follows:

Stanley.Cup.Elm street 1        555-1002
Thayer.Tommy.Elm street 4       555-1003
Singer.Eric.Elm street 2        555-1001
Stanley.Paul.Elm street 3       555-1002
Simmons.Gene.Elm street 1       555-1000
Simmons.Gene.Elm street 5       555-666

The key points are the configurations of the KeyFieldBasedComparator class and the KeyFieldBasedPartitioner class. The documentation provides further examples of how you may use these features.

Note that when multiple sort options are given they must be enclosed in quotation marks, unlike the single -k3 in the previous example. For instance, if we wanted to sort by the first column in descending order, and then by the second and the third columns in ascending order:

 -D mapreduce.partition.keypartitioner.options=-k1,2 \
 -D mapreduce.partition.keycomparator.options="-k1,1r -k2,2 -k3,3"

This yields the result:

Thayer.Tommy.Elm street 4       555-1003
Stanley.Cup.Elm street 1        555-1002
Singer.Eric.Elm street 2        555-1001
Stanley.Paul.Elm street 3       555-1002
Simmons.Gene.Elm street 1       555-1000
Simmons.Gene.Elm street 5       555-666

If your keys are numeric, you need to use the -n modifier. For instance, imagine the first key is a name and the second is an age (integer); to sort both in descending order (note that the -n and -r become nr after the index notation):

 -D mapreduce.partition.keycomparator.options="-k1,1r -k2,2nr"

TASK: Using what you have learned in this section and secondary.txt as your input file, your task is to:

  1. Partition the data so that all people with the same last name go to the same reducer.
  2. Partition the data so that all people with the same last name go to the same reducer, and also make sure that the lines are sorted according to first name.

Part 2: Running a MapReduce Program in Java

If you are unfamiliar with Java, or would like to know more, it is recommended that you review relevant video lectures and materials from the IJP course.

Setting Up the Environment

You can either set these environment variables per-session, or append them to your .bash_profile file in your home directory:

export JAVA_HOME=/usr/lib/jvm/java-1.8.0-sun-1.8.0.144/
export PATH=${JAVA_HOME}/bin:${PATH}
export HADOOP_CLASSPATH=${JAVA_HOME}/lib/tools.jar

The Code (Adapted from Hadoop Documentation)

Create the java class file WordCount.java with the following code.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
    // Making objects is expensive. Instantiate outside the loop and re-use
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());

      // Whilst iterating over the token iterator
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());  // Store the next token in our Text object
        context.write(word, one);  // Give a <word, 1> pair
      }
    }
  }

  public static class IntSumReducer extends Reducer<Text,IntWritable,Text,IntWritable> {
    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
      int sum = 0;

      for (IntWritable val : values) {
        sum += val.get();
      }

      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");

    // Make this class the main in the JAR file
    job.setJarByClass(WordCount.class);

    // Set our Mapper class, conforming to the API
    job.setMapperClass(TokenizerMapper.class);

    // Set our Combiner & Reducer classes, conforming to the (same) API
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);

    // Set the output Key type
    job.setOutputKeyClass(Text.class);

    // Set the output Value type
    job.setOutputValueClass(IntWritable.class);

    // Set number of reducers
    job.setNumReduceTasks(10);

    // Get the input and output paths from the job arguments
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));

    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

Compiling the Java Code

We're going to compile this Java code into a JAR file.

hadoop com.sun.tools.javac.Main WordCount.java
jar cf mywordcount.jar WordCount*.class

As a note, the number of mappers can be suggested via the command line argument -D mapred.map.tasks=5, but ultimately the InputFormat decides how many are needed.

Running the Job

Now run the JAR file, giving it the input data noted below and an output directory in HDFS (which must not already exist).

hadoop jar mywordcount.jar WordCount /data/labs/example3.txt /user/$USER/data/output

Sanity Check - Output

Print the top 10 lines of the output to the terminal and compare it to the output below. If all was done correctly, they should be the same. If not, check over your code and try again. If that still doesn't work, ask for help!

To print the top 10 lines:

hdfs dfs -cat /user/$USER/data/output/* | head -n 10

Make sure that your output matches the following:

But     1
ask     1
both    1
desert  2
up      6
you     21
aching  1
been    2
down    2
game    1

TASK: Run the same job again, but use 5 reducers instead of 10. Observe the output folder and the number of files produced.

TASK: Run the wordcount example from the first part of this lab using only two reducers.

Using Side Information

It is often useful to package other, external files together with the job. For example, if your application uses a dictionary or a file that stores some configuration settings, one would want these files to be available to the program just as they would in a non-mapreduce setting. This can be achieved using the -file option that we already used to package the source files. The following program takes a dictionary and counts only those words that appear in the dictionary, ignoring everything else. First copy the /data/labs/source/mapper-dict.py and /data/labs/source/reducer.py to a local directory:

hdfs dfs -get /data/labs/source/mapper-dict.py ~/
hdfs dfs -get /data/labs/source/reducer.py ~/

Now copy the dictionary to a local directory (-get and -copyToLocal provide the same functionality):

hdfs dfs -copyToLocal /data/labs/dict.eng ~/

TASK: Run the program by typing (assuming you still have example3.txt in your input directory and that your output directory doesn’t exist):

hadoop jar /opt/hadoop/hadoop-2.7.3/share/hadoop/tools/lib/hadoop-streaming-2.7.3.jar \
 -input /user/$USER/data/input \
 -output /user/$USER/data/output \
 -mapper mapper-dict.py \
 -file mapper-dict.py \
 -reducer reducer.py \
 -file reducer.py \
 -file dict.eng

The program will use dict.eng as the dictionary and count only those words that appear in that list. Look at the source of mapper-dict.py to see how to open the dictionary file.
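For reference, a minimal sketch of how such a mapper might load the packaged dictionary is shown below. This is an illustration only, not the actual contents of mapper-dict.py: files shipped with -file/-files appear in the task's working directory under their plain names, so they can be opened directly.

#!/usr/bin/python

import sys

# Files distributed with -file/-files are placed in the task's working
# directory, so the dictionary can be opened by its plain file name.
with open("dict.eng") as f:
    dictionary = set(line.strip() for line in f)

for line in sys.stdin:
    for token in line.strip().split():
        if token in dictionary:                 # only count words found in the dictionary
            print("{0}\t{1}".format(token, 1))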

Further Resources

You now have a broad knowledge of the way in which Hadoop is used. If you'd like to know more, please use the following resources:

  1. Official Hadoop Documentation
  2. Hadoop Cheat Sheet
  3. Hadoop for Dummies Cheat Sheet
  4. YouTube Playlist -- Hadoop Tutorials

These are presented as optional reading, and good places to consult if you're stuck.