Comments (15)

PtdThant commented on June 21, 2024

Hello,
I am using Apache Hadoop 2.7.1 and trying to run the HiBench benchmark on that Hadoop cluster. I have a problem with the following error (unbound variable). I am just a beginner, so could you help me figure out how to fix it? Thank you very much in advance.

[phyo@rain29 terasort]$ ./prepare/prepare.sh
Parsing conf: /home/phyo/HADOOP_ALLINONE/HiBench-HiBench-4.0/conf/00-default-properties.conf
Parsing conf: /home/phyo/HADOOP_ALLINONE/HiBench-HiBench-4.0/conf/10-data-scale-profile.conf
Parsing conf: /home/phyo/HADOOP_ALLINONE/HiBench-HiBench-4.0/conf/99-user_defined_properties.conf
Parsing conf: /home/phyo/HADOOP_ALLINONE/HiBench-HiBench-4.0/workloads/terasort/conf/00-terasort-default.conf
Parsing conf: /home/phyo/HADOOP_ALLINONE/HiBench-HiBench-4.0/workloads/terasort/conf/10-terasort-userdefine.conf
ERROR, execute cmd: '( /home/phyo/HADOOP_ALLINONE/hadoop-2.7.1/bin/yarn node -list 2> /dev/null | grep RUNNING )' timedout.
STDOUT:

STDERR:

Please check!
Traceback (most recent call last):
File "/home/phyo/HADOOP_ALLINONE/HiBench-HiBench-4.0/bin/functions/load-config.py", line 447, in
load_config(conf_root, workload_root, workload_folder)
File "/home/phyo/HADOOP_ALLINONE/HiBench-HiBench-4.0/bin/functions/load-config.py", line 154, in load_config
generate_optional_value()
File "/home/phyo/HADOOP_ALLINONE/HiBench-HiBench-4.0/bin/functions/load-config.py", line 358, in generate_optional_value
assert 0, "Get workers from spark master's web UI page failed, reason:%s\nplease set hibench.masters.hostnames and hibench.slaves.hostnames manually" % e
AssertionError: Get workers from spark master's web UI page failed, reason:( /home/phyo/HADOOP_ALLINONE/hadoop-2.7.1/bin/yarn node -list 2> /dev/null | grep RUNNING ) executed timedout for 100 seconds
please set hibench.masters.hostnames and hibench.slaves.hostnames manually
/home/phyo/HADOOP_ALLINONE/HiBench-HiBench-4.0/bin/functions/workload-functions.sh: line 33: .: filename argument required
.: usage: . filename [arguments]
start HadoopPrepareTerasort bench
./prepare/prepare.sh: line 25: INPUT_HDFS: unbound variable

JessyLiao commented on June 21, 2024

Hi
I had the same issue. My environment is Cloudera Manager CDH 5.4.0.
My properties are:

hibench.hadoop.home /opt/cloudera/parcels/CDH/lib/hadoop
hibench.spark.home /opt/cloudera/parcels/CDH/lib/spark
hibench.hdfs.master hdfs://master1:50070/

My error message is:
[root@master1 conf]# /HiBench-master/workloads/pagerank/prepare/prepare.sh
Parsing conf: /HiBench-master/conf/00-default-properties.conf
Parsing conf: /HiBench-master/conf/10-data-scale-profile.conf
Parsing conf: /HiBench-master/conf/99-user_defined_properties.conf
Parsing conf: /HiBench-master/workloads/pagerank/conf/00-pagerank-default.conf
Parsing conf: /HiBench-master/workloads/pagerank/conf/10-pagerank-userdefine.conf
Parsing conf: /HiBench-master/workloads/pagerank/conf/properties.conf
Traceback (most recent call last):
File "/HiBench-master/bin/functions/load-config.py", line 440, in
load_config(conf_root, workload_root, workload_folder)
File "/HiBench-master/bin/functions/load-config.py", line 154, in load_config
generate_optional_value()
File "/HiBench-master/bin/functions/load-config.py", line 209, in generate_optional_value
if hadoop_version[0] != '1': # hadoop2? or CDH's MR1?
IndexError: string index out of range
/HiBench-master/bin/functions/workload-functions.sh: line 33: .: filename argument required
.: usage: . filename [arguments]
start HadoopPreparePagerank bench
/HiBench-master/workloads/pagerank/prepare/prepare.sh: line 25: INPUT_HDFS: unbound variable

GoodAsh commented on June 21, 2024

Receiving the same error with HDP 2.2.0.0-2041 here. Has anyone solved the problem by adjusting the configuration or changing the code?

GraceH commented on June 21, 2024

Yes, we will take a look at this. Thanks for reporting it.

-Grace
sent from mobile phone

lvsoft commented on June 21, 2024

It seems HiBench failed to detect the Hadoop version.
The script executes <hadoop_bin>/hadoop version | head -1 | cut -d \ -f 2 to get your Hadoop distribution info.
According to the trace log, that shell command seems to have returned nothing.
Could you paste the result of <hadoop_bin>/hadoop version for reference?

Besides, you can set hibench.hadoop.version and hibench.hadoop.release in your 99-user_defined_properties.conf to bypass this probe procedure. For example:

     hibench.hadoop.version     hadoop2
     hibench.hadoop.release     cdh5

Note: the value of hibench.hadoop.version can be hadoop1 or hadoop2, and the value of hibench.hadoop.release can be cdh4, cdh5, or apache. Please set them according to your environment.
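
If you want to see what the probe would pick up on your cluster, you can run the same pipeline by hand (a minimal sketch; replace <hadoop_bin> with the bin directory of your Hadoop installation):

     # line the probe parses, e.g. "Hadoop 2.7.1" on Apache or "Hadoop 2.6.0-cdh5.4.0" on CDH
     <hadoop_bin>/hadoop version | head -1
     # field HiBench extracts (cut -d ' ' -f 2 is equivalent to the cut -d \ -f 2 used by the script)
     <hadoop_bin>/hadoop version | head -1 | cut -d ' ' -f 2

If the second command prints nothing, the probe fails exactly as in the trace above, and setting the two properties manually is the way around it.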

JessyLiao commented on June 21, 2024

I tried setting hibench.hadoop.version and hibench.hadoop.release in my 99-user_defined_properties.conf:
hibench.hadoop.version hadoop2
hibench.hadoop.release cdh5

----My 99-user_defined_properties.conf----
hibench.hadoop.home /opt/cloudera/parcels/CDH/
hibench.spark.home /opt/cloudera/parcels/CDH/
hibench.hdfs.master hdfs://master1:50070/

hibench.spark.master yarn-client
hibench.hadoop.executable ${hibench.hadoop.home}/bin/hadoop
hibench.hadoop.version hadoop2
hibench.hadoop.release cdh5
hibench.spark.version spark1.3

====Run prepare.sh ====
[root@master1 conf]# /HiBench-master/workloads/join/prepare/prepare.sh
Parsing conf: /HiBench-master/conf/00-default-properties.conf
Parsing conf: /HiBench-master/conf/10-data-scale-profile.conf
Parsing conf: /HiBench-master/conf/99-user_defined_properties.conf
Parsing conf: /HiBench-master/workloads/join/conf/00-join-default.conf
Parsing conf: /HiBench-master/workloads/join/conf/10-join-userdefine.conf
This filename pattern "/opt/cloudera/parcels/CDH//share/hadoop/mapreduce2/hadoop-mapreduce-examples-*.jar" is required to match only one file.
However, there's no file found, please fix it.
Traceback (most recent call last):
File "/HiBench-master/bin/functions/load-config.py", line 440, in
load_config(conf_root, workload_root, workload_folder)
File "/HiBench-master/bin/functions/load-config.py", line 154, in load_config
generate_optional_value()
File "/HiBench-master/bin/functions/load-config.py", line 272, in generate_optional_value
HibenchConf["hibench.hadoop.examples.jar"] = OneAndOnlyOneFile(HibenchConf['hibench.hadoop.home'] + "/share/hadoop/mapreduce2/hadoop-mapreduce-examples-*.jar")
File "/HiBench-master/bin/functions/load-config.py", line 113, in OneAndOnlyOneFile
raise Exception("Need to match one and only one file!")
Exception: Need to match one and only one file!
/HiBench-master/bin/functions/workload-functions.sh: line 33: .: filename argument required
.: usage: . filename [arguments]
start HadoopPrepareJoin bench
/HiBench-master/workloads/join/prepare/prepare.sh: line 25: INPUT_HDFS: unbound variable

lvsoft commented on June 21, 2024

Can you go to your folder /opt/cloudera/parcels/CDH//share/hadoop/mapreduce2/ and check whether there is one, and only one, jar file that matches hadoop-mapreduce-examples-*.jar?

It seems the init script can't find the hadoop-mapreduce-examples-*.jar.
If the file is located elsewhere, you'll need to set hibench.hadoop.examples.jar to the file path in your 99-user_defined_properties.conf to bypass this auto-configuration step.
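
A quick way to check, plus an illustrative override (the jar path below is only a placeholder; point it at wherever your distribution actually ships the examples jar):

     # list candidate jars; exactly one match is expected
     ls /opt/cloudera/parcels/CDH//share/hadoop/mapreduce2/hadoop-mapreduce-examples-*.jar
     # if it lives somewhere else, tell HiBench explicitly in 99-user_defined_properties.conf
     hibench.hadoop.examples.jar     /path/to/hadoop-mapreduce-examples.jar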

TYoung1221 commented on June 21, 2024

I am running a test using Spark 1.3 and Hadoop 2.6.
It seems I had the same problem, but I think I fixed it.

My error message is:
Parsing conf: /home/yang/HiBench/conf/00-default-properties.conf
Parsing conf: /home/yang/HiBench/conf/10-data-scale-profile.conf
Parsing conf: /home/yang/HiBench/conf/99-user_defined_properties.conf
Parsing conf: /home/yang/HiBench/workloads/wordcount/conf/00-wordcount-default.conf
Parsing conf: /home/yang/HiBench/workloads/wordcount/conf/10-wordcount-userdefine.conf
Probing spark verison, may last long at first time...
spark://localhost:7077 localhost
Traceback (most recent call last):
File "/home/yang/HiBench/bin/functions/load-config.py", line 440, in
load_config(conf_root, workload_root, workload_folder)
File "/home/yang/HiBench/bin/functions/load-config.py", line 154, in load_config
generate_optional_value()
File "/home/yang/HiBench/bin/functions/load-config.py", line 338, in generate_optional_value
assert 0, "Get workers from spark master's web UI page failed, reason:%s\nPlease check your configurations, network settings, proxy settings, or set hibench.masters.hostnames and hibench.slaves.hostnames manually to bypass auto-probe" % e
AssertionError: Get workers from spark master's web UI page failed, reason:[Errno socket error] [Errno 111] Connection refused
Please check your configurations, network settings, proxy settings, or set hibench.masters.hostnames and hibench.slaves.hostnames manually to bypass auto-probe
/home/yang/HiBench/bin/functions/workload-functions.sh: line 33: .: filename argument required
.: usage: . filename [arguments]
start HadoopPrepareWordcount bench
./prepare.sh: line 25: INPUT_HDFS: unbound variable

Because there was a socket error, I changed "hibench.spark.master" in the "99-user_defined_properties.conf" file to "http://hostname:4040", and the error no longer occurred.

I also think you should make sure to export HADOOP_HOME and SPARK_HOME first.
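
For example (a minimal sketch; the paths are placeholders for your own installation directories):

     export HADOOP_HOME=/path/to/hadoop-2.6.0
     export SPARK_HOME=/path/to/spark-1.3.0
     # put both bin directories on PATH so the probe commands can be found
     export PATH=$HADOOP_HOME/bin:$SPARK_HOME/bin:$PATH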

lvsoft commented on June 21, 2024

@TYoung1221 actually, this should not be the right fix...
HiBench tries to access port 8080 on your Spark master to fetch the slave node list from its web page. In most default configurations, 8080 is the port of the Spark web UI; however, the probe may fail if you have set the web UI to another port number.

As the assertion message suggests, you can manually set hibench.masters.hostnames and hibench.slaves.hostnames (space-separated if you have multiple slave nodes) in your 99-user_defined_properties.conf to bypass this auto-probe step.
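
Something like the following in 99-user_defined_properties.conf (the hostnames are placeholders; list every worker node, space-separated):

     hibench.masters.hostnames     master1
     hibench.slaves.hostnames      slave1 slave2 slave3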

chinadragon0515 commented on June 21, 2024

Does anyone have a solution for this error?
I set these properties:

Hadoop home

hibench.hadoop.home /opt/mapr/hadoop/hadoop-2.5.1/

Spark home

hibench.spark.home /opt/mapr/spark/spark-1.2.1/

HDFS master

hibench.hdfs.master maprfs:///
hibench.hadoop.version hadoop2

./bin/run-all.sh
Prepare wordcount ...
Exec script: /mnt/mapr/HiBench-master/workloads/wordcount/prepare/prepare.sh
Parsing conf: /mnt/mapr/HiBench-master/conf/00-default-properties.conf
Parsing conf: /mnt/mapr/HiBench-master/conf/10-data-scale-profile.conf
Parsing conf: /mnt/mapr/HiBench-master/conf/99-user_defined_properties.conf
Parsing conf: /mnt/mapr/HiBench-master/workloads/wordcount/conf/00-wordcount-default.conf
Parsing conf: /mnt/mapr/HiBench-master/workloads/wordcount/conf/10-wordcount-userdefine.conf
Traceback (most recent call last):
File "/mnt/mapr/HiBench-master/bin/functions/load-config.py", line 440, in
load_config(conf_root, workload_root, workload_folder)
File "/mnt/mapr/HiBench-master/bin/functions/load-config.py", line 154, in load_config
generate_optional_value()
File "/mnt/mapr/HiBench-master/bin/functions/load-config.py", line 299, in generate_optional_value
HibenchConf["hibench.sleep.job.jar"] = HibenchConf['hibench.hadoop.examples.test.jar']
KeyError: 'hibench.hadoop.examples.test.jar'
/mnt/mapr/HiBench-master/bin/functions/workload-functions.sh: line 33: .: filename argument required
.: usage: . filename [arguments]
start HadoopPrepareWordcount bench
/mnt/mapr/HiBench-master/workloads/wordcount/prepare/prepare.sh: line 25: INPUT_HDFS: unbound variable
ERROR: wordcount prepare failed!
Run all done!
hibench.hadoop.release mapr
hibench.masters.hostnames h07
hibench.slaves.hostnames h06 h07 h08 h09

And when I run the wordcount test case in MapReduce only, I still get the error.

lvsoft commented on June 21, 2024

I've confirmed this is a bug and is related to #102

CocoaWang commented on June 21, 2024

I have the same problem as you, @PtdThant. Have you found any solution? Thanks!

CocoaWang commented on June 21, 2024

Nice to meet you too!
I am a beginner too. I just completed wordcount in single-node mode with Hadoop HDFS 2.7 + Spark 1.5 + HiBench.
I am happy to learn Hadoop together with you!
You can share your problems with me.

anks2024 commented on June 21, 2024

This issue was solved for me by exporting JAVA_HOME and adding it to the PATH in the .bashrc of the hdfs user (I was running as the hdfs user).
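
Roughly like this in the hdfs user's ~/.bashrc (the JDK path is just an example; use whatever your system actually has):

     # example JDK location, adjust to your installation
     export JAVA_HOME=/usr/lib/jvm/java-1.8.0
     export PATH=$JAVA_HOME/bin:$PATH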
