Basics of Grid and Cloud Technology (MTAT.08.011), 2012/13 spring

Practice 12 - Advanced MapReduce: Finding TF-IDF

References

The referenced documents and websites contain supporting information for this practice session.

Manuals

  • Hadoop API: http://hadoop.apache.org/docs/stable/api/
  • Hadoop MapReduce tutorial: http://hadoop.apache.org/docs/r1.0.4/mapred_tutorial.html

TF-IDF

  • Lecture 08.05 - MapReduce in Information Retrieval slides
  • http://en.wikipedia.org/wiki/Tf%E2%80%93idf
  • http://nlp.stanford.edu/IR-book/html/htmledition/tf-idf-weighting-1.html

Exercise 12.1. Term frequency–inverse document frequency (TF-IDF) with MapReduce

Read through the referenced documents to learn more about the term frequency–inverse document frequency algorithm. Your goal is to calculate TF-IDF using MapReduce for a set of documents we have already uploaded to HDFS in the Hadoop cluster. This time the dataset is a bit larger; once again it has been taken from http://www.gutenberg.org/.
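
As a quick numeric illustration (the numbers are invented and the natural logarithm is assumed, matching Java's Math.log): if a word occurs n = 3 times in a document of N = 100 words and appears in m = 10 out of D = 378 documents, then TF-IDF = 3/100 * log(378/10) ≈ 0.03 * 3.63 ≈ 0.11.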

Create the MapReduce application to calculate TF-IDF of a given document set

  • Download tfidf.zip as the basis for creating the application. It contains the application skeleton as an Eclipse project.
    • The main class is in the tfidf package: tfidf.MapReduceSkeletonThird.java
    • The program takes 3 arguments: <input folder> <output folder> <number of documents in the input folder>
  • Extract the Eclipse project to a folder of your choice.
  • Start Eclipse, create a new project, and set the project path to the previously chosen folder so that the extracted files are used.
  • You can take the input for your Eclipse application from books.zip
    • Unpack the books.zip file and use the resulting folder as the input of your MapReduce application in Eclipse.
  • Once again, most of the work has been done for you; you only have to define the content of the Map and Reduce methods.
  • However, as you should remember from the lecture slides, calculating TF-IDF requires several chained MapReduce jobs, so this time you will have to define 4 Map and 3 Reduce methods, each in a separate class (see the driver sketch right below for how the jobs can be chained).
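
To give an idea of the chaining, below is a minimal driver sketch assuming the org.apache.hadoop.mapreduce API; the actual skeleton may organize this differently. WordCountPerDocument refers to the job-1 classes sketched after the job list further down, and the configuration key "tfidf.numDocs" is an invented name for passing D (the third argument) to the last job.

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.io.IntWritable;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapreduce.Job;
  import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
  import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

  public class MapReduceSkeletonThird {
      public static void main(String[] args) throws Exception {
          Configuration conf = new Configuration();
          // Pass D (number of documents, args[2]) on to the last job.
          conf.setLong("tfidf.numDocs", Long.parseLong(args[2]));

          // Job 1: word counts per (word;filename). Jobs 2-4 are set up the
          // same way, each one reading the previous job's output folder.
          Job job1 = new Job(conf, "word count per document");
          job1.setJarByClass(MapReduceSkeletonThird.class);
          job1.setMapperClass(WordCountPerDocument.Map.class);
          job1.setReducerClass(WordCountPerDocument.Reduce.class);
          job1.setOutputKeyClass(Text.class);
          job1.setOutputValueClass(IntWritable.class);
          FileInputFormat.addInputPath(job1, new Path(args[0]));
          FileOutputFormat.setOutputPath(job1, new Path(args[1] + "/job1"));
          // waitForCompletion blocks, so the jobs run strictly one after another.
          if (!job1.waitForCompletion(true)) System.exit(1);
          // ... build job2, job3 and job4 the same way, chaining the paths ...
      }
  }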

MapReduce Jobs:

  • First MR job - Word counts for each word and document (sketched after this list)
    • Map:
      • Input (LineNr, Line in document)
      • Split the line into words and output each word.
      • Output (word;filename, 1)
    • Reduce:
      • Input (word;filename, [counts])
      • Sum all the counts as n
      • Output (word;filename, n)
  • Second MR job - Word frequency for each document
    • Map:
      • Input (word;filename, n)
      • Change the key to be only the filename, and move the word into the value
      • Output (filename, word;n)
    • Reduce:
      • Input (filename, [word;n])
      • Sum all the n's in the whole document as N and then output every word again. You may need two passes: one to sum all the n's and one to write out all the words again, one at a time (see the buffering sketch after this list)!
        • Iterators are one-traversal-only, so you will have to store the values somewhere, such as an ArrayList or a HashMap.
      • Output (word;filename, n;N)
  • Third MR job - Word frequency in the whole dataset
    • Map:
      • Input (word;filename, n;N)
      • Move the filename into the value field and append 1 to the end of the value field
      • Output (word, filename;n;N;1)
    • Reduce:
      • Input (word, [filename;n;N;1])
      • Calculate the sum of the value fields' last entries as m and move the filename back into the key
        • Again, you will have to look through the values twice: once to find the sum and once to write out all the entries again. And again you will have to store the values somewhere.
      • Output (word;filename, n;N;m)
  • Fourth MR job - Calculating the TF-IDF value
    • Map:
      • Input (word;filename, n;N;m)
      • Calculate TF-IDF based on n, N, m, and D. D is known ahead of time (it is the third command-line argument).
        • TFIDF = n/N * log(D/m)
        • NB! Be careful when dividing integers by integers! Use doubles instead, or the division will truncate and the result will be wrong (a sketch follows the note below).
      • Output (word;filename, TF-IDF)
    • There is no Reduce in this job; the Map output is written out directly.
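
Below is a minimal sketch of the first job's Map and Reduce, assuming the org.apache.hadoop.mapreduce API; the class names and the tokenization (lowercasing, splitting on non-word characters) are just one possible choice, so fill the skeleton's own classes with the corresponding logic.

  import java.io.IOException;

  import org.apache.hadoop.io.IntWritable;
  import org.apache.hadoop.io.LongWritable;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapreduce.Mapper;
  import org.apache.hadoop.mapreduce.Reducer;
  import org.apache.hadoop.mapreduce.lib.input.FileSplit;

  public class WordCountPerDocument {

      // Map: (LineNr, line in document) -> (word;filename, 1).
      // With the default TextInputFormat the key is actually the byte offset
      // of the line, but it is not used here anyway.
      public static class Map extends Mapper<LongWritable, Text, Text, IntWritable> {
          private static final IntWritable ONE = new IntWritable(1);

          @Override
          protected void map(LongWritable key, Text value, Context context)
                  throws IOException, InterruptedException {
              // The name of the current input file becomes part of the key.
              String filename = ((FileSplit) context.getInputSplit()).getPath().getName();
              for (String word : value.toString().toLowerCase().split("\\W+")) {
                  if (!word.isEmpty()) {
                      context.write(new Text(word + ";" + filename), ONE);
                  }
              }
          }
      }

      // Reduce: (word;filename, [counts]) -> (word;filename, n)
      public static class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> {
          @Override
          protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                  throws IOException, InterruptedException {
              int n = 0;
              for (IntWritable count : values) {
                  n += count.get();
              }
              context.write(key, new IntWritable(n));
          }
      }
  }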
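
And here is a sketch of the second job's two-pass Reduce with the buffering trick mentioned above; the third job's Reduce follows exactly the same buffer-then-replay pattern, only summing the trailing 1's into m instead of the n's into N.

  import java.io.IOException;
  import java.util.ArrayList;
  import java.util.List;

  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapreduce.Reducer;

  // Reduce: (filename, [word;n]) -> (word;filename, n;N)
  public class WordFrequencyReduce extends Reducer<Text, Text, Text, Text> {
      @Override
      protected void reduce(Text key, Iterable<Text> values, Context context)
              throws IOException, InterruptedException {
          // The values iterator can be traversed only once, so buffer the entries.
          List<String> buffered = new ArrayList<String>();
          long N = 0;
          for (Text value : values) {
              String wordAndN = value.toString();            // "word;n"
              buffered.add(wordAndN);
              N += Long.parseLong(wordAndN.split(";")[1]);   // sum of all the n's
          }
          // Second pass over the buffered copy: emit every word with its n and N.
          for (String wordAndN : buffered) {
              String[] parts = wordAndN.split(";");
              context.write(new Text(parts[0] + ";" + key.toString()),
                            new Text(parts[1] + ";" + N));
          }
      }
  }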

NB! When parsing arbitrary text files, you can never be sure you get the input you expect, so errors may occur in map or reduce tasks. Use Java try/catch blocks in such cases so that the program can continue when it encounters a bad record.
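
For instance, the fourth job's Map could combine the double-based TF-IDF calculation with such a try/catch guard. This is only a sketch: it assumes the previous job's output is read back with KeyValueTextInputFormat (so both the key and the value arrive as Text), and "tfidf.numDocs" is the invented configuration key from the driver sketch above.

  import java.io.IOException;

  import org.apache.hadoop.io.DoubleWritable;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapreduce.Mapper;

  // Map: (word;filename, n;N;m) -> (word;filename, TF-IDF); there is no Reduce.
  public class TfIdfMap extends Mapper<Text, Text, Text, DoubleWritable> {
      private long D;

      @Override
      protected void setup(Context context) {
          // D was stored in the job configuration by the driver (from args[2]).
          D = context.getConfiguration().getLong("tfidf.numDocs", 1);
      }

      @Override
      protected void map(Text key, Text value, Context context)
              throws IOException, InterruptedException {
          try {
              String[] parts = value.toString().split(";"); // "n;N;m"
              double n = Double.parseDouble(parts[0]);
              double N = Double.parseDouble(parts[1]);
              double m = Double.parseDouble(parts[2]);
              // Doubles everywhere, so neither division truncates.
              double tfidf = (n / N) * Math.log(D / m);
              context.write(key, new DoubleWritable(tfidf));
          } catch (Exception e) {
              // Malformed record: skip it and let the job carry on.
          }
      }
  }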

Exercise 12.2

Deploy the application in the cluster

  • Export the application as a normal (not executable) JAR file
  • Upload the JAR file to the server using the scp command
  • Run the application
    • We have uploaded 378 books to HDFS under the "books" folder, so use that folder as input
    • hadoop jar tfidf.jar tfidf.MapReduceSkeletonThird books FirstName_LastName/outbooks 378
  • Measure how long it runs
  • Save the output of the application
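
For reference, the upload and a timed run could look roughly like this (the jar name and the paths are just examples):

  scp tfidf.jar ubuntu@54.242.93.149:
  ssh ubuntu@54.242.93.149
  time hadoop jar tfidf.jar tfidf.MapReduceSkeletonThird books FirstName_LastName/outbooks 378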

Cluster: IP: 54.242.93.149, username: ubuntu, password: ask the lab assistant if you do not remember it.

Deliverables

  1. MapReduce application source code
  2. Command line output of running the application in the cluster.

Extra points

  1. If you use 3 MapReduce jobs instead of 4 by removing the last job (its TF-IDF calculation can be moved into the third job's Reduce, since n, N, m and D are all known there)
  2. If you successfully add and use Combiners for the first 3 jobs.