Errors when building a search engine with Nutch, waiting online for help.
crawl started in: dir
rootUrlDir = urls
threads = 4
depth = 5
indexer=lucene
topN = 10
Injector: starting
Injector: crawlDb: dir/crawldb
Injector: urlDir: urls
Injector: Converting injected urls to crawl db entries.
The error is as follows:
Exception in thread "main" java.io.IOException: Job failed!
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1252)
at org.apache.nutch.crawl.Injector.inject(Injector.java:211)
at org.apache.nutch.crawl.Crawl.main(Crawl.java:124)
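(For anyone debugging the same trace: the console message "Job failed!" is generic, and in Nutch 1.x the underlying exception is usually written to the Hadoop log instead. A minimal sketch for checking it; the path assumes the default Nutch runtime layout, so adjust it to your install:)

```shell
# Look for the first nested exception in Nutch's Hadoop log.
# "logs/hadoop.log" is the default location under the Nutch runtime
# directory (an assumption about your layout).
LOG="logs/hadoop.log"
if [ -f "$LOG" ]; then
  tail -n 50 "$LOG"
else
  echo "no $LOG found; run this from the Nutch runtime directory"
fi
```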
I searched online and found claims that the cause is the wrong JDK version and that 1.6 is required, but mine really is 1.6. Can anyone explain?
Here is my nutch-site.xml:
<property>
<name>http.agent.name</name>
<value>MySearch</value>
<description>My Search Engine</description>
</property>
<property>
<name>http.agent.description</name>
<value>MySearch</value>
<description>Further description of our bot- this text is used in
the User-Agent header. It appears in parenthesis after the agent name.
</description>
</property>
<property>
<name>http.agent.url</name>
<value>http://www.163.com</value>
<description>A URL to advertise in the User-Agent header. This will
appear in parenthesis after the agent name. Custom dictates that this
should be a URL of a page explaining the purpose and behavior of this
crawler.
</description>
</property>
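(A side note on the snippet above: these `<property>` blocks only take effect if they sit inside the top-level `<configuration>` element of conf/nutch-site.xml. A minimal skeleton for reference, using the values from the post; the comment marks where the remaining properties go:)

```xml
<?xml version="1.0"?>
<configuration>
  <property>
    <name>http.agent.name</name>
    <value>MySearch</value>
    <description>My Search Engine</description>
  </property>
  <!-- http.agent.description, http.agent.url, etc. go here -->
</configuration>
```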
Here is my crawl-urlfilter.txt:
# accept hosts in MY.DOMAIN.NAME
+^http://([a-z0-9]*\.)*163.*/

-------------------- Reply --------------------
I'm hitting the same problem as the OP. Did you ever solve it? Please share the answer.
-------------------- Reply --------------------
OP, please reply.
-------------------- Reply --------------------
Try changing
+^http://([a-z0-9]*\.)*163.*/
to
+^http://([a-z0-9]*\.)*163.com*/
-------------------- Reply --------------------
Check the Hadoop log; the actual error message is in there...
-------------------- Reply --------------------
My guess is your plugin setup is wrong: either some plugin JARs were not included, or the plugin-related configuration is incorrect.
-------------------- Reply --------------------
I need an answer too. I've been stuck on this problem for a long time.
Tags: Java, Java related
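(To see what the filter lines discussed in the replies actually accept: both `163.*/` and `163.com*/` leave the dot unescaped, so they match more than the literal text suggests. A small Python sketch; the test URLs are made-up examples, and escaping the dot as `163\.com` is my assumption of the intended filter. Nutch's urlfilter-regex plugin uses Java regexes, but these particular patterns behave the same in both engines:)

```python
import re

# The filter lines from the thread, tried as regexes.
original  = re.compile(r"^http://([a-z0-9]*\.)*163.*/")     # '.' unescaped: very loose
suggested = re.compile(r"^http://([a-z0-9]*\.)*163.com*/")  # '.' still unescaped
escaped   = re.compile(r"^http://([a-z0-9]*\.)*163\.com/")  # assumed intent: 163.com only

for pat in (original, suggested, escaped):
    print(bool(pat.match("http://www.163.com/")),       # the site you want
          bool(pat.match("http://www.163qfake.cn/")))   # an unrelated host
# → True True
#   True False
#   True False
```

Only the first pattern also accepts the unrelated host, which would let the crawler wander off the intended domain; either of the tighter forms keeps it on 163.com.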