
Configure Ignite

The Apache Ignite Hadoop Accelerator map-reduce engine processes Hadoop jobs within an Ignite cluster. Several
prerequisites must be satisfied.

1) The IGNITE_HOME environment variable must be set and point to the root of the Ignite installation directory.

2) Each cluster node must have the Hadoop JARs in its CLASSPATH.

See the Ignite installation guide for your Hadoop distribution for details.

3) Cluster nodes accept job execution requests by listening on a particular socket. By default, each Ignite node
listens for incoming requests on 127.0.0.1:11211. You can override the host and port
using the ConnectorConfiguration class:

XML

<bean class="org.apache.ignite.configuration.IgniteConfiguration">
  ...
  <property name="connectorConfiguration">
    <bean class="org.apache.ignite.configuration.ConnectorConfiguration">
      <property name="host" value="myHost" />
      <property name="port" value="12345" />
    </bean>
  </property>
</bean>
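
The same override can also be applied from Java if you prefer to build the node configuration programmatically. The following is a minimal sketch, reusing the placeholder host and port from the XML above; the class name is only illustrative:

Java

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.ConnectorConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class StartAcceleratorNode {
    public static void main(String[] args) {
        // Override the host and port the node listens on for job execution requests.
        ConnectorConfiguration connCfg = new ConnectorConfiguration();
        connCfg.setHost("myHost"); // placeholder host, as in the XML example
        connCfg.setPort(12345);    // placeholder port, as in the XML example

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setConnectorConfiguration(connCfg);

        // Start a node with this configuration.
        Ignite ignite = Ignition.start(cfg);
        System.out.println("Started node: " + ignite.name());
    }
}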

Run Ignite
When the Ignite node is configured, start it using the following command:

Shell

$ bin/ignite.sh
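
If you embed Ignite in a Java application instead of using the startup script, a node can also be started from a Spring XML file. A minimal sketch, assuming the configuration above is saved under the placeholder path config/ignite-hadoop.xml:

Java

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

public class StartNodeFromXml {
    public static void main(String[] args) {
        // Start a node from a Spring XML configuration file; the path is a placeholder.
        Ignite ignite = Ignition.start("config/ignite-hadoop.xml");
        System.out.println("Started node: " + ignite.name());
        // The node keeps running until Ignition.stop(...) is called or the JVM exits.
    }
}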

Configure Hadoop
To run a Hadoop job using the Ignite job tracker, several prerequisites must be satisfied:

1) The IGNITE_HOME environment variable must be set and point to the root of the Ignite installation directory.

2) Hadoop must have the Ignite JARs ${IGNITE_HOME}/libs/ignite-core-[version].jar and ${IGNITE_HOME}/libs/hadoop/ignite-hadoop-[version].jar in its CLASSPATH.

This can be achieved in several ways:

- Add these JARs to the HADOOP_CLASSPATH environment variable.
- Copy or symlink these JARs to the folder where your Hadoop installation stores shared libraries.

See the Ignite installation guide for your Hadoop distribution for details.

3) Your Hadoop job must be configured to use the Ignite job tracker. Two configuration properties are
responsible for this:

- mapreduce.framework.name must be set to ignite
- mapreduce.jobtracker.address must be set to the host and port your Ignite nodes are listening on.

This can also be achieved in several ways. First, you may create a separate mapred-site.xml file with these
configuration properties and use it for job runs:

XML

<configuration>
  ...
  <property>
    <name>mapreduce.framework.name</name>
    <value>ignite</value>
  </property>
  <property>
    <name>mapreduce.jobtracker.address</name>
    <value>127.0.0.1:11211</value>
  </property>
  ...
</configuration>

Second, you may override the default mapred-site.xml of your Hadoop installation. This will force all Hadoop
jobs to pick the Ignite job tracker by default unless it is overridden at the job level.

Third, you may set these properties for a particular job programmatically:

Java

Configuration conf = new Configuration();
...
// Route the job to the Ignite map-reduce engine.
conf.set(MRConfig.FRAMEWORK_NAME, IgniteHadoopClientProtocolProvider.FRAMEWORK_NAME);
// Host and port an Ignite node is listening on.
conf.set(MRConfig.MASTER_ADDRESS, "127.0.0.1:11211");
...
Job job = new Job(conf, "word count");
...

Run Hadoop
How you run a job depends on how you have configured your Hadoop installation.

If you created a separate mapred-site.xml:


Shell

hadoop --config [path_to_config] [arguments]

If you modified the default mapred-site.xml, then the --config option is not necessary:

Shell

hadoop [arguments]

If you start the job programmatically, then submit it:

Java

...
Job job = new Job(conf, "word count");
...
job.submit();
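
Putting the pieces together, a complete driver for the classic word-count job that targets the Ignite job tracker could look like the sketch below. The class name and the input/output paths taken from the command line are hypothetical; the two conf.set(...) calls are the only Ignite-specific part, and the property values match the mapred-site.xml example above:

Java

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.MRConfig;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class IgniteWordCount {

    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(Object key, Text value, Context ctx) throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                ctx.write(word, ONE);
            }
        }
    }

    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context ctx) throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values)
                sum += val.get();
            result.set(sum);
            ctx.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Route the job to the Ignite map-reduce engine ("ignite" is the value used above).
        conf.set(MRConfig.FRAMEWORK_NAME, "ignite");
        // Host and port an Ignite node is listening on.
        conf.set(MRConfig.MASTER_ADDRESS, "127.0.0.1:11211");

        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(IgniteWordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        // Hypothetical input and output paths passed on the command line.
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        // waitForCompletion() blocks until the job finishes; job.submit() would return immediately.
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}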
