1. Introduction

The main reference for this article is: Debugging Hadoop Programs in IDEA.

2. Installing Hadoop on Windows

We use version 2.7.2 here. First, download Hadoop from the official Hadoop website.

Note: this tutorial assumes the JDK is already installed.

After extracting the archive on Windows, configure the following environment variables:

HADOOP_HOME:D:\soft\dev\hadoop-2.7.2

HADOOP_BIN_PATH:%HADOOP_HOME%\bin

HADOOP_PREFIX:%HADOOP_HOME%

Append %HADOOP_HOME%\bin;%HADOOP_HOME%\sbin; to Path.
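To sanity-check that the variables are actually visible to a JVM launched from your shell, here is a small plain-Java sketch (no Hadoop required; the class and method names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class EnvCheck {
    // Returns the subset of required variable names missing (or empty) in the given environment map.
    static List<String> missing(Map<String, String> env, String... required) {
        List<String> absent = new ArrayList<>();
        for (String name : required) {
            String value = env.get(name);
            if (value == null || value.isEmpty()) {
                absent.add(name);
            }
        }
        return absent;
    }

    public static void main(String[] args) {
        List<String> absent = missing(System.getenv(),
                "HADOOP_HOME", "HADOOP_BIN_PATH", "HADOOP_PREFIX");
        System.out.println(absent.isEmpty() ? "all set" : "missing: " + absent);
    }
}
```

If anything is reported missing, restart the IDE so it picks up the updated environment.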

3. Creating a Hadoop Maven project

Create a new Maven project and configure pom.xml as follows:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <parent>
        <artifactId>kafka-learn</artifactId>
        <groupId>com.best.kafka.test</groupId>
        <version>1.0-SNAPSHOT</version>
    </parent>
    <modelVersion>4.0.0</modelVersion>

    <artifactId>hadoop-test</artifactId>

    <dependencies>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
            <version>2.7.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-hdfs</artifactId>
            <version>2.7.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-mapreduce-client-core</artifactId>
            <version>2.7.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-mapreduce-client-jobclient</artifactId>
            <version>2.7.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-mapreduce-client-common</artifactId>
            <version>2.7.2</version>
        </dependency>
        <dependency>
            <groupId>jdk.tools</groupId>
            <artifactId>jdk.tools</artifactId>
            <version>1.8</version>
            <scope>system</scope>
            <systemPath>${JAVA_HOME}/lib/tools.jar</systemPath>
        </dependency>
        <!-- https://mvnrepository.com/artifact/junit/junit -->
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.12</version>
        </dependency>
        <!-- https://mvnrepository.com/artifact/log4j/log4j -->
        <dependency>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
            <version>1.2.17</version>
        </dependency>
    </dependencies>
</project>

4. Writing the WordCount code

We use the official WordCount example as-is.

The only additions to the example are the HDFS and YARN settings in main().

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  public static class TokenizerMapper
       extends Mapper<Object, Text, Text, IntWritable>{

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context
                    ) throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  public static class IntSumReducer
       extends Reducer<Text,IntWritable,Text,IntWritable> {
    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values,
                       Context context
                       ) throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    // Set the user name used to access HDFS
    System.setProperty("HADOOP_USER_NAME", "root");
    Configuration conf = new Configuration();
    // Set the HDFS and YARN addresses
    conf.set("fs.defaultFS", "hdfs://10.45.10.33:9000");
    conf.set("yarn.resourcemanager.hostname", "10.45.10.33");

    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);

  }
}
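Stripped of the Hadoop plumbing, the TokenizerMapper and IntSumReducer above compute a plain token count. Here is the same logic in ordinary Java (class name `LocalWordCount` is illustrative), which can help when verifying expected output before submitting a job:

```java
import java.util.Map;
import java.util.StringTokenizer;
import java.util.TreeMap;

public class LocalWordCount {
    // Equivalent of TokenizerMapper + IntSumReducer, applied to a single in-memory input.
    static Map<String, Integer> count(String text) {
        Map<String, Integer> counts = new TreeMap<>();
        StringTokenizer itr = new StringTokenizer(text); // same tokenization as the mapper
        while (itr.hasMoreTokens()) {
            counts.merge(itr.nextToken(), 1, Integer::sum); // same summing as the reducer
        }
        return counts;
    }

    public static void main(String[] args) {
        System.out.println(count("hello world hello hadoop"));
        // {hadoop=1, hello=2, world=1}
    }
}
```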

5. Configuring log4j

Create a log4j configuration file under the resources directory (or copy one from the Hadoop installation directory):

log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.Target=System.out
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{ABSOLUTE} %5p %c{1}:%L - %m%n
log4j.rootLogger=INFO, console

6. Installing extra Windows support

Debugging on Windows requires some extra support binaries; without them you will hit the errors below.

6.1 winutils.exe

Download link

After downloading, place winutils.exe under hadoop/bin.

If you skip this step, you will see the following error:

java.io.IOException: Could not locate executable D:\soft\dev\hadoop-2.7.2\bin\winutils.exe in the Hadoop binaries.
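As an alternative to relying on the HADOOP_HOME environment variable, Hadoop also honors the hadoop.home.dir system property when locating winutils.exe. It must be set before any Hadoop class is touched; the path below is the one from section 2, adjust it to yours:

```java
public class HadoopHome {
    public static void main(String[] args) {
        // Must run before the first Hadoop class (e.g. Configuration) is loaded,
        // because Hadoop's Shell class reads this property during static initialization.
        System.setProperty("hadoop.home.dir", "D:\\soft\\dev\\hadoop-2.7.2");
        System.out.println(System.getProperty("hadoop.home.dir"));
    }
}
```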

6.2 The NativeIO problem

Once the run parameters are set (prepare the input file yourself in advance) and the job starts, the following error appears:

Exception in thread "main" java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z

Following the article "Fixing Exception: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z and related problems", modify the org.apache.hadoop.io.nativeio.NativeIO source.

Create the same package structure in your project and download the NativeIO class into that directory.

6.3 HDFS permission problems

Even after completing the steps above, running the job may still produce the following error:

org.apache.hadoop.security.AccessControlException: Permission denied: ...

Add the following to hdfs-site.xml on the master node and restart HDFS:

<property>
    <name>dfs.permissions</name>
    <value>false</value>
</property>

This disables permission checking, but submitting a job to YARN may still fail with a permission error; in that case, change the permissions on the affected path:

hadoop fs -chmod -R 755 /tmp
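For reference, the octal mode 755 above means rwxr-xr-x (owner: read/write/execute; group and others: read/execute). A small plain-Java sketch decoding octal modes (no Hadoop involved; names are illustrative):

```java
public class Mode {
    // Renders an octal permission mode (e.g. 0755) as the familiar rwx string.
    static String rwx(int mode) {
        StringBuilder sb = new StringBuilder();
        for (int shift = 6; shift >= 0; shift -= 3) { // owner, group, others
            int bits = (mode >> shift) & 7;
            sb.append((bits & 4) != 0 ? 'r' : '-')
              .append((bits & 2) != 0 ? 'w' : '-')
              .append((bits & 1) != 0 ? 'x' : '-');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(rwx(0755)); // rwxr-xr-x
    }
}
```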

6.4 The /bin/bash: line 0: fg: no job control problem

When submitting to a Hadoop YARN cluster from Windows, you will typically hit the following error:

org.apache.hadoop.util.Shell$ExitCodeException: /bin/bash: line 0: fg: no job control 

To fix it, edit the local Hadoop configuration file mapred-site.xml on the Windows side:

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapred.remote.os</name>
        <value>Linux</value>
    </property>
    <property>
        <name>mapreduce.app-submission.cross-platform</name>
        <value>true</value>
    </property>
</configuration>
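If you prefer to keep these settings in code rather than XML, the same three key/value pairs can be applied via conf.set(...) in the WordCount main() before Job.getInstance. A plain-Java sketch of just the pairs (class name `CrossPlatformConf` is illustrative):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class CrossPlatformConf {
    // The properties a Windows client needs so the job runs on a Linux YARN cluster.
    static Map<String, String> windowsClientProps() {
        Map<String, String> props = new LinkedHashMap<>();
        props.put("mapreduce.framework.name", "yarn");
        props.put("mapred.remote.os", "Linux");
        props.put("mapreduce.app-submission.cross-platform", "true");
        return props;
    }

    public static void main(String[] args) {
        // In WordCount's main(), each pair would go through conf.set(key, value)
        // before Job.getInstance(conf, "word count").
        windowsClientProps().forEach((k, v) -> System.out.println(k + "=" + v));
    }
}
```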

7. Successful run