
= 10
15/06/15 15:23:15 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
15/06/15 15:23:15 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
15/06/15 15:23:15 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
15/06/15 15:23:15 INFO util.GSet: Computing capacity for map NameNodeRetryCache
15/06/15 15:23:15 INFO util.GSet: VM type       = 64-bit
15/06/15 15:23:15 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
15/06/15 15:23:15 INFO util.GSet: capacity = 2^15 = 32768 entries
15/06/15 15:23:15 INFO namenode.FSImage: Allocated new BlockPoolId: BP-839127011-127.0.1.1-1434352995661
15/06/15 15:23:15 INFO common.Storage: Storage directory /usr/local/hadoop_store/hdfs/namenode has been successfully formatted.
15/06/15 15:23:16 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
15/06/15 15:23:16 INFO util.ExitUtil: Exiting with status 0
15/06/15 15:23:16 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at midarto-ThinkPad-Edge-E130/127.0.1.1
************************************************************/
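For reference, the log above is what `hdfs namenode -format` prints on success. Re-running the format would wipe the NameNode metadata, so a minimal sketch of a post-format sanity check is to look for the ExitUtil status line; here it is checked against the line saved in this session:

```shell
# Sketch: confirm a clean NameNode format by checking for "Exiting with status 0".
# The log line below is the one captured in this session; on a live run you
# would capture the output of `hdfs namenode -format` instead.
log='15/06/15 15:23:16 INFO util.ExitUtil: Exiting with status 0'
case "$log" in
  *"Exiting with status 0"*) echo "format OK" ;;
  *)                         echo "format FAILED" ;;
esac
```

A nonzero status in that line would indicate the format aborted, typically due to permissions on the storage directory.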
hduser@midarto-ThinkPad-Edge-E130:~$ cd /usr/local/sbin/
hduser@midarto-ThinkPad-Edge-E130:/usr/local/sbin$ cd
hduser@midarto-ThinkPad-Edge-E130:~$ sudo chmod -R 777 /usr/l
lib/ local/
hduser@midarto-ThinkPad-Edge-E130:~$ sudo chmod -R 777 /usr/local/sbin/
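A note on the `chmod -R 777` used here: mode 777 makes the tree world-writable, which is broader than needed. If the directory is owned by (or readable to) `hduser`, mode 755 is enough to list and execute scripts. A throwaway-directory sketch (the paths are illustrative, not the real /usr/local tree):

```shell
# Demonstrate that 755 (owner rwx, group/other r-x) is enough to run a script.
dir=$(mktemp -d)
printf '#!/bin/sh\necho hello\n' > "$dir/demo.sh"
chmod -R 755 "$dir"    # executable for everyone, writable only by the owner
"$dir/demo.sh"         # runs fine without resorting to 777
rm -rf "$dir"
```

Alternatively, `sudo chown -R hduser:hadoop /usr/local/hadoop` sidesteps the permission problem without opening the tree to every user.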
hduser@midarto-ThinkPad-Edge-E130:~$ cd /usr/local/sbin/
hduser@midarto-ThinkPad-Edge-E130:/usr/local/sbin$ ls
hduser@midarto-ThinkPad-Edge-E130:/usr/local/sbin$ -ls
No command '-ls' found, did you mean:
Command 'ils' from package 'sleuthkit' (universe)
Command 'tls' from package 'python-tlslite' (universe)
Command 'hls' from package 'hfsutils' (main)
Command 'ls' from package 'coreutils' (main)
Command 'fls' from package 'sleuthkit' (universe)
Command 'jls' from package 'sleuthkit' (universe)
Command 'bls' from package 'bacula-sd' (main)
Command 'als' from package 'atool' (universe)
Command 'ols' from package 'speech-tools' (universe)
Command 'i-ls' from package 'integrit' (universe)
-ls: command not found
hduser@midarto-ThinkPad-Edge-E130:/usr/local/sbin$ sta
start                 start-pulseaudio-kde     start-pulseaudio-x11  start-stop-daemon  startpar
startpar-upstart-inject  startx                stat                  static-sh          status
hduser@midarto-ThinkPad-Edge-E130:/usr/local/sbin$ start
start                 start-pulseaudio-kde     start-pulseaudio-x11  start-stop-daemon
startpar              startpar-upstart-inject  startx
hduser@midarto-ThinkPad-Edge-E130:/usr/local/sbin$ cd
hduser@midarto-ThinkPad-Edge-E130:~$ cd /usr/local/hadoop
hadoop/
hadoop_store/
hduser@midarto-ThinkPad-Edge-E130:~$ cd /usr/local/hadoop/
bin/ etc/ include/ lib/ libexec/ sbin/ share/
hduser@midarto-ThinkPad-Edge-E130:~$ cd /usr/local/hadoop/sbin/
hduser@midarto-ThinkPad-Edge-E130:/usr/local/hadoop/sbin$ ls
distribute-exclude.sh    start-all.cmd           stop-balancer.sh
hadoop-daemon.sh         start-all.sh            stop-dfs.cmd
hadoop-daemons.sh        start-balancer.sh       stop-dfs.sh
hdfs-config.cmd          start-dfs.cmd           stop-secure-dns.sh
hdfs-config.sh           start-dfs.sh            stop-yarn.cmd
httpfs.sh                start-secure-dns.sh     stop-yarn.sh
kms.sh                   start-yarn.cmd          yarn-daemon.sh
mr-jobhistory-daemon.sh  start-yarn.sh           yarn-daemons.sh
refresh-namenodes.sh     stop-all.cmd
slaves.sh                stop-all.sh
hduser@midarto-ThinkPad-Edge-E130:/usr/local/hadoop/sbin$ start-all.sh
bash: /usr/local/hadoop/sbin/start-all.sh: Permission denied
hduser@midarto-ThinkPad-Edge-E130:/usr/local/hadoop/sbin$ cd
hduser@midarto-ThinkPad-Edge-E130:~$ sudo chmod -R 777 /usr/local/hadoop/sbin/
hduser@midarto-ThinkPad-Edge-E130:~$ cd /usr/local/hadoop/sbin/
hduser@midarto-ThinkPad-Edge-E130:/usr/local/hadoop/sbin$ str
strace strings strip
hduser@midarto-ThinkPad-Edge-E130:/usr/local/hadoop/sbin$ start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
15/06/15 15:26:59 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your
platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hduser-namenode-midarto-ThinkPad-Edge-E130.out
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hduser-datanode-midarto-ThinkPad-Edge-E130.out
Starting secondary namenodes [0.0.0.0]
The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
ECDSA key fingerprint is d0:33:ed:28:d4:55:e7:f0:32:e8:26:be:92:07:fe:fa.
Are you sure you want to continue connecting (yes/no)? yes
0.0.0.0: Warning: Permanently added '0.0.0.0' (ECDSA) to the list of known hosts.
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hduser-secondarynamenode-midarto-ThinkPad-Edge-E130.out
15/06/15 15:30:28 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your
platform... using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hduser-resourcemanager-midarto-ThinkPad-Edge-E130.out
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hduser-nodemanager-midarto-ThinkPad-Edge-E130.out
hduser@midarto-ThinkPad-Edge-E130:/usr/local/hadoop/sbin$ jps
4053 NodeManager
3376 DataNode
3724 ResourceManager
3576 SecondaryNameNode
3215 NameNode
4156 Jps
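The `jps` listing is the quickest health check for a single-node cluster: all five Hadoop daemons should be present. A sketch that scans the output captured above for each expected name (on a live node you would pipe `jps` itself instead of the saved string):

```shell
# Check that every expected Hadoop daemon appears in the jps output.
# jps_out here is the listing from this session; replace with: jps_out=$(jps)
jps_out='4053 NodeManager
3376 DataNode
3724 ResourceManager
3576 SecondaryNameNode
3215 NameNode
4156 Jps'
for d in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
  # anchor the match so "NameNode" does not also count "SecondaryNameNode"
  echo "$jps_out" | grep -q " $d$" && echo "$d up" || echo "$d MISSING"
done
```

If any daemon is missing, its startup log under /usr/local/hadoop/logs/ is the place to look.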
hduser@midarto-ThinkPad-Edge-E130:/usr/local/hadoop/sbin$ cd
hduser@midarto-ThinkPad-Edge-E130:~$ hadoop jar /usr/local/hadoop/
bin/  etc/  include/  lib/  libexec/  LICENSE.txt  logs/  NOTICE.txt  README.txt  sbin/  share/
hduser@midarto-ThinkPad-Edge-E130:~$ hadoop jar /usr/local/hadoop/share/hadoop/
common/  hdfs/  httpfs/  kms/  mapreduce/  tools/  yarn/
hduser@midarto-ThinkPad-Edge-E130:~$ hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/
hadoop-mapreduce-client-app-2.7.0.jar
hadoop-mapreduce-client-common-2.7.0.jar
hadoop-mapreduce-client-core-2.7.0.jar
hadoop-mapreduce-client-hs-2.7.0.jar
hadoop-mapreduce-client-hs-plugins-2.7.0.jar
hadoop-mapreduce-client-jobclient-2.7.0.jar
hadoop-mapreduce-client-jobclient-2.7.0-tests.jar
hadoop-mapreduce-client-shuffle-2.7.0.jar
hadoop-mapreduce-examples-2.7.0.jar
lib/
lib-examples/
sources/
hduser@midarto-ThinkPad-Edge-E130:~$ hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.0.jar pi 2 5
Number of Maps = 2
Samples per Map = 5
15/06/15 15:32:29 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your
platform... using builtin-java classes where applicable
Wrote input for Map #0
Wrote input for Map #1
Starting Job
15/06/15 15:32:32 INFO Configuration.deprecation: session.id is deprecated. Instead, use
dfs.metrics.session-id
15/06/15 15:32:32 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker,
sessionId=
15/06/15 15:32:32 INFO input.FileInputFormat: Total input paths to process : 2
15/06/15 15:32:32 INFO mapreduce.JobSubmitter: number of splits:2
15/06/15 15:32:33 INFO mapreduce.JobSubmitter: Submitting tokens for job:
job_local1644526633_0001
15/06/15 15:32:33 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
15/06/15 15:32:33 INFO mapreduce.Job: Running job: job_local1644526633_0001
15/06/15 15:32:33 INFO mapred.LocalJobRunner: OutputCommitter set in config null
15/06/15 15:32:33 INFO output.FileOutputCommitter: File Output Committer Algorithm version is
1
15/06/15 15:32:33 INFO mapred.LocalJobRunner: OutputCommitter is
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
15/06/15 15:32:33 INFO mapred.LocalJobRunner: Waiting for map tasks
15/06/15 15:32:33 INFO mapred.LocalJobRunner: Starting task:
attempt_local1644526633_0001_m_000000_0
15/06/15 15:32:33 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
15/06/15 15:32:33 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ]
15/06/15 15:32:33 INFO mapred.MapTask: Processing split:
hdfs://localhost:54310/user/hduser/QuasiMonteCarlo_1434353547302_418935020/in/part0:0+118
15/06/15 15:32:33 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
15/06/15 15:32:33 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
15/06/15 15:32:33 INFO mapred.MapTask: soft limit at 83886080
15/06/15 15:32:33 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
15/06/15 15:32:33 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
15/06/15 15:32:33 INFO mapred.MapTask: Map output collector class =
org.apache.hadoop.mapred.MapTask$MapOutputBuffer
15/06/15 15:32:34 INFO mapred.LocalJobRunner:
15/06/15 15:32:34 INFO mapred.MapTask: Starting flush of map output
15/06/15 15:32:34 INFO mapred.MapTask: Spilling map output
15/06/15 15:32:34 INFO mapred.MapTask: bufstart = 0; bufend = 18; bufvoid = 104857600
15/06/15 15:32:34 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend =
26214392(104857568); length = 5/6553600
15/06/15 15:32:34 INFO mapred.MapTask: Finished spill 0
15/06/15 15:32:34 INFO mapred.Task: Task:attempt_local1644526633_0001_m_000000_0 is done.
And is in the process of committing
15/06/15 15:32:34 INFO mapred.LocalJobRunner: map
15/06/15 15:32:34 INFO mapred.Task: Task 'attempt_local1644526633_0001_m_000000_0' done.
15/06/15 15:32:34 INFO mapred.LocalJobRunner: Finishing task:
attempt_local1644526633_0001_m_000000_0
15/06/15 15:32:34 INFO mapred.LocalJobRunner: Starting task:
attempt_local1644526633_0001_m_000001_0
15/06/15 15:32:34 INFO output.FileOutputCommitter: File Output Committer Algorithm version is
1
15/06/15 15:32:34 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ]
15/06/15 15:32:34 INFO mapred.MapTask: Processing split:
hdfs://localhost:54310/user/hduser/QuasiMonteCarlo_1434353547302_418935020/in/part1:0+118
15/06/15 15:32:34 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
15/06/15 15:32:34 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
15/06/15 15:32:34 INFO mapred.MapTask: soft limit at 83886080
15/06/15 15:32:34 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
15/06/15 15:32:34 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
15/06/15 15:32:34 INFO mapred.MapTask: Map output collector class =
org.apache.hadoop.mapred.MapTask$MapOutputBuffer
15/06/15 15:32:34 INFO mapred.LocalJobRunner:
15/06/15 15:32:34 INFO mapred.MapTask: Starting flush of map output
15/06/15 15:32:34 INFO mapred.MapTask: Spilling map output
15/06/15 15:32:34 INFO mapred.MapTask: bufstart = 0; bufend = 18; bufvoid = 104857600
15/06/15 15:32:34 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend =
26214392(104857568); length = 5/6553600
15/06/15 15:32:34 INFO mapred.MapTask: Finished spill 0
15/06/15 15:32:34 INFO mapred.Task: Task:attempt_local1644526633_0001_m_000001_0 is done.
And is in the process of committing
15/06/15 15:32:34 INFO mapred.LocalJobRunner: map
15/06/15 15:32:34 INFO mapred.Task: Task 'attempt_local1644526633_0001_m_000001_0' done.
15/06/15 15:32:34 INFO mapred.LocalJobRunner: Finishing task:
attempt_local1644526633_0001_m_000001_0
15/06/15 15:32:34 INFO mapred.LocalJobRunner: map task executor complete.
15/06/15 15:32:34 INFO mapred.LocalJobRunner: Waiting for reduce tasks
15/06/15 15:32:34 INFO mapred.LocalJobRunner: Starting task: attempt_local1644526633_0001_r_000000_0
15/06/15 15:32:34 INFO output.FileOutputCommitter: File Output Committer Algorithm version is
1
15/06/15 15:32:34 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ]
15/06/15 15:32:34 INFO mapred.ReduceTask: Using ShuffleConsumerPlugin:
org.apache.hadoop.mapreduce.task.reduce.Shuffle@43f66bae
15/06/15 15:32:34 INFO reduce.MergeManagerImpl: MergerManager: memoryLimit=333971456,
maxSingleShuffleLimit=83492864, mergeThreshold=220421168, ioSortFactor=10,
memToMemMergeOutputsThreshold=10
15/06/15 15:32:34 INFO reduce.EventFetcher: attempt_local1644526633_0001_r_000000_0
Thread started: EventFetcher for fetching Map Completion Events
15/06/15 15:32:34 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map
attempt_local1644526633_0001_m_000001_0 decomp: 24 len: 28 to MEMORY
15/06/15 15:32:34 INFO reduce.InMemoryMapOutput: Read 24 bytes from map-output for
attempt_local1644526633_0001_m_000001_0
15/06/15 15:32:34 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size:
24, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->24
15/06/15 15:32:34 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map
attempt_local1644526633_0001_m_000000_0 decomp: 24 len: 28 to MEMORY
15/06/15 15:32:34 INFO reduce.InMemoryMapOutput: Read 24 bytes from map-output for
attempt_local1644526633_0001_m_000000_0
15/06/15 15:32:34 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size:
24, inMemoryMapOutputs.size() -> 2, commitMemory -> 24, usedMemory ->48
15/06/15 15:32:34 INFO reduce.EventFetcher: EventFetcher is interrupted.. Returning
15/06/15 15:32:34 INFO mapred.LocalJobRunner: 2 / 2 copied.
15/06/15 15:32:34 INFO reduce.MergeManagerImpl: finalMerge called with 2 in-memory map-outputs and 0 on-disk map-outputs
15/06/15 15:32:34 INFO mapred.Merger: Merging 2 sorted segments
15/06/15 15:32:34 INFO mapred.Merger: Down to the last merge-pass, with 2 segments left of total
size: 42 bytes
15/06/15 15:32:34 INFO reduce.MergeManagerImpl: Merged 2 segments, 48 bytes to disk to
satisfy reduce memory limit
15/06/15 15:32:34 INFO reduce.MergeManagerImpl: Merging 1 files, 50 bytes from disk
15/06/15 15:32:34 INFO reduce.MergeManagerImpl: Merging 0 segments, 0 bytes from memory
into reduce
15/06/15 15:32:34 INFO mapred.Merger: Merging 1 sorted segments
15/06/15 15:32:34 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total
size: 43 bytes
15/06/15 15:32:34 INFO mapred.LocalJobRunner: 2 / 2 copied.
15/06/15 15:32:34 INFO Configuration.deprecation: mapred.skip.on is deprecated. Instead, use
mapreduce.job.skiprecords
15/06/15 15:32:34 INFO mapreduce.Job: Job job_local1644526633_0001 running in uber mode :
false
15/06/15 15:32:34 INFO mapreduce.Job: map 100% reduce 0%
15/06/15 15:32:34 INFO mapred.Task: Task:attempt_local1644526633_0001_r_000000_0 is done.
And is in the process of committing
15/06/15 15:32:34 INFO mapred.LocalJobRunner: 2 / 2 copied.
15/06/15 15:32:34 INFO mapred.Task: Task attempt_local1644526633_0001_r_000000_0 is
allowed to commit now
15/06/15 15:32:35 INFO output.FileOutputCommitter: Saved output of task 'attempt_local1644526633_0001_r_000000_0' to hdfs://localhost:54310/user/hduser/QuasiMonteCarlo_1434353547302_418935020/out/_temporary/0/task_local1644526633_0001_r_000000
15/06/15 15:32:35 INFO mapred.LocalJobRunner: reduce > reduce
15/06/15 15:32:35 INFO mapred.Task: Task 'attempt_local1644526633_0001_r_000000_0' done.
15/06/15 15:32:35 INFO mapred.LocalJobRunner: Finishing task:
attempt_local1644526633_0001_r_000000_0
15/06/15 15:32:35 INFO mapred.LocalJobRunner: reduce task executor complete.
15/06/15 15:32:35 INFO mapreduce.Job: map 100% reduce 100%
15/06/15 15:32:35 INFO mapreduce.Job: Job job_local1644526633_0001 completed successfully
15/06/15 15:32:35 INFO mapreduce.Job: Counters: 35
File System Counters
FILE: Number of bytes read=822302
FILE: Number of bytes written=1648559
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=590
HDFS: Number of bytes written=923
HDFS: Number of read operations=30
HDFS: Number of large read operations=0
HDFS: Number of write operations=15
Map-Reduce Framework
Map input records=2
Map output records=4
Map output bytes=36
Map output materialized bytes=56
Input split bytes=296
Combine input records=0
Combine output records=0
Reduce input groups=2
Reduce shuffle bytes=56
Reduce input records=4
Reduce output records=0
Spilled Records=8
Shuffled Maps =2
Failed Shuffles=0
Merged Map outputs=2
GC time elapsed (ms)=0
Total committed heap usage (bytes)=854065152
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=236
File Output Format Counters
Bytes Written=97
Job Finished in 3.39 seconds
Estimated value of Pi is 3.60000000000000000000
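The coarse estimate is expected: the example computes 4·(points inside the quarter circle)/(total points), and with only 2 maps × 5 samples = 10 points, a single point shifts the result by 0.4. The 9-in-10 split below is inferred from the printed value, not shown anywhere in the log:

```shell
# Pi is estimated as 4 * inside / total. For the printed 3.6 with 10 points,
# 9 points must have landed inside the quarter circle (inferred, not logged).
awk 'BEGIN { printf "%.2f\n", 4 * 9 / 10 }'
```

Increasing the arguments, e.g. `pi 16 100000`, trades runtime for a much tighter estimate.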


hduser@midarto-ThinkPad-Edge-E130:~$ mkdir coba
hduser@midarto-ThinkPad-Edge-E130:~$ cd coba/
hduser@midarto-ThinkPad-Edge-E130:~/coba$ nano coba.txt
hduser@midarto-ThinkPad-Edge-E130:~/coba$ hadoop dfs -copyFromLocal /home/hduser/coba/ coba
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
15/06/15 15:34:09 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your
platform... using builtin-java classes where applicable
hduser@midarto-ThinkPad-Edge-E130:~/coba$ hdfs dfs -ls
15/06/15 15:34:22 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your
platform... using builtin-java classes where applicable
Found 1 items
drwxr-xr-x   - hduser supergroup          0 2015-06-15 15:34 coba
hduser@midarto-ThinkPad-Edge-E130:~/coba$ hadoop jar /usr/local/hadoop/share/hadoop/
common/  hdfs/  httpfs/  kms/  mapreduce/  tools/  yarn/
hduser@midarto-ThinkPad-Edge-E130:~/coba$ hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.0.jar wordcount coba coba-out
15/06/15 15:35:25 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your
platform... using builtin-java classes where applicable
15/06/15 15:35:27 INFO Configuration.deprecation: session.id is deprecated. Instead, use
dfs.metrics.session-id
15/06/15 15:35:27 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker,
sessionId=
15/06/15 15:35:27 INFO input.FileInputFormat: Total input paths to process : 1
15/06/15 15:35:27 INFO mapreduce.JobSubmitter: number of splits:1
15/06/15 15:35:28 INFO mapreduce.JobSubmitter: Submitting tokens for job:
job_local1075455800_0001
15/06/15 15:35:28 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
15/06/15 15:35:28 INFO mapreduce.Job: Running job: job_local1075455800_0001
15/06/15 15:35:28 INFO mapred.LocalJobRunner: OutputCommitter set in config null
15/06/15 15:35:28 INFO output.FileOutputCommitter: File Output Committer Algorithm version is
1
15/06/15 15:35:28 INFO mapred.LocalJobRunner: OutputCommitter is
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
15/06/15 15:35:28 INFO mapred.LocalJobRunner: Waiting for map tasks
15/06/15 15:35:28 INFO mapred.LocalJobRunner: Starting task:
attempt_local1075455800_0001_m_000000_0
15/06/15 15:35:28 INFO output.FileOutputCommitter: File Output Committer Algorithm version is
1
15/06/15 15:35:28 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ]
15/06/15 15:35:28 INFO mapred.MapTask: Processing split:
hdfs://localhost:54310/user/hduser/coba/coba.txt:0+37
15/06/15 15:35:28 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
15/06/15 15:35:28 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
15/06/15 15:35:28 INFO mapred.MapTask: soft limit at 83886080
15/06/15 15:35:28 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
15/06/15 15:35:28 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
15/06/15 15:35:28 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
15/06/15 15:35:28 INFO mapred.LocalJobRunner:
15/06/15 15:35:28 INFO mapred.MapTask: Starting flush of map output
15/06/15 15:35:28 INFO mapred.MapTask: Spilling map output
15/06/15 15:35:28 INFO mapred.MapTask: bufstart = 0; bufend = 53; bufvoid = 104857600
15/06/15 15:35:28 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend =
26214384(104857536); length = 13/6553600
15/06/15 15:35:28 INFO mapred.MapTask: Finished spill 0
15/06/15 15:35:28 INFO mapred.Task: Task:attempt_local1075455800_0001_m_000000_0 is done.
And is in the process of committing
15/06/15 15:35:28 INFO mapred.LocalJobRunner: map
15/06/15 15:35:28 INFO mapred.Task: Task 'attempt_local1075455800_0001_m_000000_0' done.
15/06/15 15:35:28 INFO mapred.LocalJobRunner: Finishing task:
attempt_local1075455800_0001_m_000000_0
15/06/15 15:35:28 INFO mapred.LocalJobRunner: map task executor complete.
15/06/15 15:35:28 INFO mapred.LocalJobRunner: Waiting for reduce tasks
15/06/15 15:35:28 INFO mapred.LocalJobRunner: Starting task:
attempt_local1075455800_0001_r_000000_0
15/06/15 15:35:28 INFO output.FileOutputCommitter: File Output Committer Algorithm version is
1
15/06/15 15:35:28 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ]
15/06/15 15:35:28 INFO mapred.ReduceTask: Using ShuffleConsumerPlugin:
org.apache.hadoop.mapreduce.task.reduce.Shuffle@7d27a2b6
15/06/15 15:35:28 INFO reduce.MergeManagerImpl: MergerManager: memoryLimit=333971456,
maxSingleShuffleLimit=83492864, mergeThreshold=220421168, ioSortFactor=10,
memToMemMergeOutputsThreshold=10
15/06/15 15:35:28 INFO reduce.EventFetcher: attempt_local1075455800_0001_r_000000_0
Thread started: EventFetcher for fetching Map Completion Events
15/06/15 15:35:28 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map
attempt_local1075455800_0001_m_000000_0 decomp: 63 len: 67 to MEMORY
15/06/15 15:35:28 INFO reduce.InMemoryMapOutput: Read 63 bytes from map-output for
attempt_local1075455800_0001_m_000000_0
15/06/15 15:35:28 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size:
63, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->63
15/06/15 15:35:28 INFO reduce.EventFetcher: EventFetcher is interrupted.. Returning
15/06/15 15:35:28 INFO mapred.LocalJobRunner: 1 / 1 copied.
15/06/15 15:35:28 INFO reduce.MergeManagerImpl: finalMerge called with 1 in-memory map-outputs and 0 on-disk map-outputs
15/06/15 15:35:28 INFO mapred.Merger: Merging 1 sorted segments
15/06/15 15:35:28 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total
size: 53 bytes
15/06/15 15:35:28 INFO reduce.MergeManagerImpl: Merged 1 segments, 63 bytes to disk to
satisfy reduce memory limit
15/06/15 15:35:28 INFO reduce.MergeManagerImpl: Merging 1 files, 67 bytes from disk
15/06/15 15:35:28 INFO reduce.MergeManagerImpl: Merging 0 segments, 0 bytes from memory
into reduce
15/06/15 15:35:28 INFO mapred.Merger: Merging 1 sorted segments
15/06/15 15:35:28 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total
size: 53 bytes
15/06/15 15:35:28 INFO mapred.LocalJobRunner: 1 / 1 copied.
15/06/15 15:35:29 INFO Configuration.deprecation: mapred.skip.on is deprecated. Instead, use
mapreduce.job.skiprecords
15/06/15 15:35:29 INFO mapred.Task: Task:attempt_local1075455800_0001_r_000000_0 is done. And is in the process of committing
15/06/15 15:35:29 INFO mapred.LocalJobRunner: 1 / 1 copied.
15/06/15 15:35:29 INFO mapred.Task: Task attempt_local1075455800_0001_r_000000_0 is
allowed to commit now
15/06/15 15:35:29 INFO output.FileOutputCommitter: Saved output of task 'attempt_local1075455800_0001_r_000000_0' to hdfs://localhost:54310/user/hduser/coba-out/_temporary/0/task_local1075455800_0001_r_000000
15/06/15 15:35:29 INFO mapred.LocalJobRunner: reduce > reduce
15/06/15 15:35:29 INFO mapred.Task: Task 'attempt_local1075455800_0001_r_000000_0' done.
15/06/15 15:35:29 INFO mapred.LocalJobRunner: Finishing task:
attempt_local1075455800_0001_r_000000_0
15/06/15 15:35:29 INFO mapred.LocalJobRunner: reduce task executor complete.
15/06/15 15:35:29 INFO mapreduce.Job: Job job_local1075455800_0001 running in uber mode :
false
15/06/15 15:35:29 INFO mapreduce.Job: map 100% reduce 100%
15/06/15 15:35:29 INFO mapreduce.Job: Job job_local1075455800_0001 completed successfully
15/06/15 15:35:29 INFO mapreduce.Job: Counters: 35
File System Counters
FILE: Number of bytes read=547406
FILE: Number of bytes written=1097293
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=74
HDFS: Number of bytes written=45
HDFS: Number of read operations=13
HDFS: Number of large read operations=0
HDFS: Number of write operations=4
Map-Reduce Framework
Map input records=1
Map output records=4
Map output bytes=53
Map output materialized bytes=67
Input split bytes=113
Combine input records=4
Combine output records=4
Reduce input groups=4
Reduce shuffle bytes=67
Reduce input records=4
Reduce output records=4
Spilled Records=8
Shuffled Maps =1
Failed Shuffles=0
Merged Map outputs=1
GC time elapsed (ms)=21
Total committed heap usage (bytes)=495976448
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=37
File Output Format Counters
Bytes Written=45
hduser@midarto-ThinkPad-Edge-E130:~/coba$ hdfs dfs -ls
15/06/15 15:35:41 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your
platform... using builtin-java classes where applicable
Found 2 items
drwxr-xr-x   - hduser supergroup          0 2015-06-15 15:34 coba
drwxr-xr-x   - hduser supergroup          0 2015-06-15 15:35 coba-out
hduser@midarto-ThinkPad-Edge-E130:~/coba$ hdfs dfs -ls coba-out
15/06/15 15:36:05 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your
platform... using builtin-java classes where applicable
Found 2 items
-rw-r--r--   1 hduser supergroup          0 2015-06-15 15:35 coba-out/_SUCCESS
-rw-r--r--   1 hduser supergroup         45 2015-06-15 15:35 coba-out/part-r-00000
hduser@midarto-ThinkPad-Edge-E130:~/coba$ hdfs dfs -cat coba-out/part-r-00000
15/06/15 15:36:34 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your
platform... using builtin-java classes where applicable
Elektro	1
Hasanuddin	1
Teknik	1
Universitas	1
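The same counts can be reproduced with a plain shell pipeline. The input line below is reconstructed from the job's output (four words, each counted once), so treat it as an assumption about what coba.txt contained:

```shell
# Minimal local equivalent of the wordcount example: split on spaces,
# sort, and count duplicates. Input is a guess based on the job output.
echo "Universitas Hasanuddin Teknik Elektro" \
  | tr ' ' '\n' | sort | uniq -c
```

This is a handy cross-check when verifying that a MapReduce wordcount run produced sensible numbers on small inputs.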
hduser@midarto-ThinkPad-Edge-E130:~/coba$ hdfs dfs -ls^C
hduser@midarto-ThinkPad-Edge-E130:~/coba$
