$ brew install hadoop
==> Downloading http://www.apache.org/dyn/closer.cgi?path=hadoop/core/hadoop-1.1.2/hadoop-1.1.2.tar.gz
==> Best Mirror http://apache.tt.co.kr/hadoop/core/hadoop-1.1.2/hadoop-1.1.2.tar.gz
######################################################################## 100.0%
==> Caveats
In Hadoop's config file:
  /usr/local/Cellar/hadoop/1.1.2/libexec/conf/hadoop-env.sh
$JAVA_HOME has been set to be the output of:
  /usr/libexec/java_home
==> Summary
🍺  /usr/local/Cellar/hadoop/1.1.2: 271 files, 78M, built in 49.1 minutes
$ hadoop version
Hadoop 1.1.2
Subversion https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.1 -r 1440782
Compiled by hortonfo on Thu Jan 31 02:03:24 UTC 2013
From source with checksum c720ddcf4b926991de7467d253a79b8b
$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/Users/wanheecho/.ssh/id_rsa):
…
$ sudo apt-get install ssh
$ ssh localhost
The authenticity of host 'localhost (::1)' can't be established.
RSA key fingerprint is fb:cc:5d:08:ed:a2:d3:c2:7f:48:61:17:55:af:d1:86.
Are you sure you want to continue connecting (yes/no)?
$ hadoop jar /usr/local/Cellar/hadoop/1.1.2/libexec/hadoop-examples-1.1.2.jar pi 10 100
Number of Maps = 10
Samples per Map = 100
Wrote input for Map #0
…
Wrote input for Map #9
Starting Job
13/05/29 10:49:21 INFO mapred.FileInputFormat: Total input paths to process : 10
13/05/29 10:49:21 INFO mapred.JobClient: Running job: job_201305291048_0001
13/05/29 10:49:22 INFO mapred.JobClient:  map 0% reduce 0%
…
13/05/29 10:49:52 INFO mapred.JobClient:  map 100% reduce 100%
13/05/29 10:49:53 INFO mapred.JobClient: Job complete: job_201305291048_0001
...
Job Finished in 31.939 seconds
Estimated value of Pi is 3.14800000000000000000
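The pi example works by Monte Carlo sampling: throw random points into the unit square and multiply the fraction that lands inside the inscribed quarter circle by four. (Hadoop's PiEstimator actually uses a Halton quasi-random sequence, which is why repeated runs give identical estimates; the sketch below uses a plain seeded generator of my own choosing purely to illustrate the arithmetic.)

```javascript
// Monte Carlo estimate of pi, illustrating what each map task computes.
// makeRng is a hypothetical linear congruential generator used here only
// to make the sketch reproducible; it is not part of Hadoop's PiEstimator.
function makeRng(seed) {
  var state = seed >>> 0;
  return function () {
    state = (state * 1664525 + 1013904223) >>> 0; // Numerical Recipes LCG constants
    return state / 4294967296;                    // scale into [0, 1)
  };
}

function estimatePi(samples, seed) {
  var rand = makeRng(seed);
  var inside = 0;
  for (var i = 0; i < samples; ++i) {
    var x = rand();
    var y = rand();
    if (x * x + y * y <= 1) inside++;             // point inside quarter circle?
  }
  return 4 * inside / samples;                    // area ratio times 4 approximates pi
}

console.log(estimatePi(1000000, 42).toFixed(3));
```

Ten maps of 100 samples each, as in the transcript, is far too few points for an accurate estimate, which is why the job reports 3.148 rather than 3.14159….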
$ hadoop jar /usr/local/Cellar/hadoop/1.1.2/libexec/hadoop-examples-1.1.2.jar pi 10 100
Number of Maps = 10
Samples per Map = 100
org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create directory /user/androidbee/PiEstimator_TMP_3_141592654/in. Name node is in safe mode.
The reported blocks 1 has reached the threshold 0.9990 of total blocks 1. Safe mode will be turned off automatically in 5 seconds.
http://goo.gl/NIWoK
$ ssh-copy-id -i ~/.ssh/id_rsa.pub gtko@slave
/usr/local/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/local/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
gtko@slave's password:

Number of key(s) added: 1

Now try logging into the machine, with:   ssh 'gtko@slave'
and check to make sure that only the key(s) you wanted were added.
$ cat ~/.ssh/known_hosts
…
slave,192.168.219.110 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC8ABPzz81HF3DpkhpesZWwcsYc
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>
Basics
androidbee@master:$ bin/hadoop namenode -format
... INFO dfs.Storage: Storage directory /app/hadoop/tmp/dfs/name has been successfully formatted.
androidbee@master:$
... INFO org.apache.hadoop.dfs.Storage: Storage directory /app/hadoop/tmp/dfs/data is not formatted.
... INFO org.apache.hadoop.dfs.Storage: Formatting ...
... INFO org.apache.hadoop.dfs.DataNode: Opened server at 50010
... INFO org.mortbay.util.Credential: Checking Resource aliases
…
... INFO org.mortbay.http.SocketListener: Started SocketListener on 0.0.0.0:50075
... INFO org.mortbay.util.Container: Started org.mortbay.jetty.Server@56a499
... INFO org.apache.hadoop.dfs.DataNode: Starting DataNode in: FSDataset{dirpath='/app/hadoop/tmp/dfs/data/current'}
... INFO org.apache.hadoop.dfs.DataNode: using BLOCKREPORT_INTERVAL of 3538203msec
... INFO org.mortbay.util.Credential: Checking Resource aliases
... INFO org.mortbay.http.HttpServer: Version Jetty/5.1.4
... INFO org.mortbay.util.Container: Started org.mortbay.jetty.servlet.WebApplicationHandler@d19bc8
... INFO org.mortbay.util.Container: Started WebApplicationContext[/,/]
... INFO org.mortbay.util.Container: Started HttpContext[/logs,/logs]
... INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 50050: starting
... INFO org.apache.hadoop.mapred.TaskTracker: TaskTracker up at: 50050
... INFO org.apache.hadoop.mapred.TaskTracker: Starting tracker tracker_slave:50050
... INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 50050: starting
... INFO org.apache.hadoop.mapred.TaskTracker: Starting thread: Map-events fetcher for all reduce tasks on tracker_slave:50050
#!/usr/bin/node
var stdin = process.stdin;
var stdout = process.stdout;
var data = '';

// Emit "<word>\t1" for each whitespace-separated token on the line.
function processLine(line) {
  if (line && line.trim().length > 0) {
    var s = line.trim().split(' ');
    for (var i = 0; i < s.length; ++i) {
      stdout.write(s[i] + '\t1\n');
    }
  }
}
// continued…
http://rabrown.net/hadoop-word-count-streaming-example-with-nodejs
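The mapper listing above is truncated. A complete runnable version might look like the following; only processLine() comes from the listing, and the stdin wiring (buffering chunks and splitting on newlines) is my guess at the elided part, not the book's code.

```javascript
#!/usr/bin/node
// Hadoop Streaming word-count mapper: emits "<word>\t1" per token.
// The stdin handling below is an assumed completion of the truncated
// listing above, not the original code.
var stdin = process.stdin;
var stdout = process.stdout;
var data = '';

function processLine(line) {
  if (line && line.trim().length > 0) {
    var s = line.trim().split(' ');
    for (var i = 0; i < s.length; ++i) {
      stdout.write(s[i] + '\t1\n');
    }
  }
}

stdin.setEncoding('utf8');
stdin.on('data', function (chunk) {
  data += chunk;
  var lines = data.split('\n');
  data = lines.pop();          // keep any partial trailing line buffered
  lines.forEach(processLine);
});
stdin.on('end', function () {
  processLine(data);           // flush whatever remains
});
```

Such a script would be passed to Streaming with the -mapper option (the streaming jar location and file names vary by install and are illustrative here).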
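The word-count example also needs a reducer. The sketch below is a hypothetical counterpart written for this note, not code from the book or the linked post: Streaming sorts mapper output by key before the reduce phase, so equal words arrive on consecutive lines and can be summed with a single running counter.

```javascript
#!/usr/bin/node
// Hypothetical Hadoop Streaming word-count reducer. Input lines look like
// "word\t1" and arrive sorted by word, so one running count per key suffices.
function makeReducer(emit) {
  var currentWord = null;
  var count = 0;
  return {
    push: function (line) {
      var parts = line.split('\t');
      if (parts.length < 2) return;             // skip malformed lines
      var word = parts[0];
      var n = parseInt(parts[1], 10) || 0;
      if (word === currentWord) {
        count += n;                             // same key: accumulate
      } else {
        if (currentWord !== null) emit(currentWord + '\t' + count);
        currentWord = word;                     // key changed: start a new count
        count = n;
      }
    },
    flush: function () {                        // emit the final key
      if (currentWord !== null) emit(currentWord + '\t' + count);
    }
  };
}

var reducer = makeReducer(function (s) { process.stdout.write(s + '\n'); });
var buf = '';
process.stdin.setEncoding('utf8');
process.stdin.on('data', function (chunk) {
  buf += chunk;
  var lines = buf.split('\n');
  buf = lines.pop();                            // keep a partial trailing line
  lines.forEach(reducer.push);
});
process.stdin.on('end', function () {
  if (buf) reducer.push(buf);
  reducer.flush();
});
```

Because the sort guarantees grouping, no hash table of all words is needed; memory use stays constant regardless of input size.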