
Hive join error — looking for help

When I run the following Hive SQL:
hive> select * from records r join records2 r2 on r.year=r2.year join records3 r3 on r3.year=r2.year;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Starting Job = job_201311101215_106238, Tracking URL = http://hadoop-master.TB.com:50030/jobdetails.jsp?jobid=job_201311101215_106238
Kill Command = /usr/lib/hadoop-0.20/bin/hadoop job  -Dmapred.job.tracker=hadoop-master.TB.com:8021 -kill job_201311101215_106238
Hadoop job information for Stage-1: number of mappers: 2; number of reducers: 1
2013-12-18 11:00:21,940 Stage-1 map = 0%,  reduce = 0%
2013-12-18 11:00:28,378 Stage-1 map = 50%,  reduce = 0%, Cumulative CPU 1.7 sec
2013-12-18 11:00:29,416 Stage-1 map = 50%,  reduce = 0%, Cumulative CPU 1.7 sec
2013-12-18 11:00:30,459 Stage-1 map = 50%,  reduce = 0%, Cumulative CPU 1.7 sec
2013-12-18 11:00:31,495 Stage-1 map = 50%,  reduce = 0%, Cumulative CPU 1.7 sec
2013-12-18 11:00:32,535 Stage-1 map = 50%,  reduce = 0%, Cumulative CPU 1.7 sec
2013-12-18 11:00:34,237 Stage-1 map = 50%,  reduce = 0%, Cumulative CPU 1.7 sec
2013-12-18 11:00:35,270 Stage-1 map = 50%,  reduce = 0%, Cumulative CPU 1.7 sec
2013-12-18 11:00:36,310 Stage-1 map = 50%,  reduce = 17%, Cumulative CPU 1.7 sec
2013-12-18 11:00:37,424 Stage-1 map = 50%,  reduce = 17%, Cumulative CPU 1.7 sec
2013-12-18 11:00:38,459 Stage-1 map = 50%,  reduce = 17%, Cumulative CPU 1.7 sec
2013-12-18 11:00:39,483 Stage-1 map = 50%,  reduce = 17%, Cumulative CPU 1.7 sec
2013-12-18 11:00:40,736 Stage-1 map = 50%,  reduce = 17%, Cumulative CPU 1.7 sec
2013-12-18 11:00:41,779 Stage-1 map = 50%,  reduce = 17%, Cumulative CPU 1.7 sec
2013-12-18 11:00:42,814 Stage-1 map = 50%,  reduce = 17%, Cumulative CPU 1.7 sec
2013-12-18 11:00:44,069 Stage-1 map = 50%,  reduce = 17%, Cumulative CPU 1.7 sec
2013-12-18 11:00:45,110 Stage-1 map = 50%,  reduce = 17%, Cumulative CPU 1.7 sec
2013-12-18 11:00:46,130 Stage-1 map = 50%,  reduce = 17%, Cumulative CPU 1.7 sec
2013-12-18 11:00:47,162 Stage-1 map = 50%,  reduce = 17%, Cumulative CPU 1.7 sec
2013-12-18 11:00:48,189 Stage-1 map = 50%,  reduce = 17%, Cumulative CPU 1.7 sec
2013-12-18 11:00:49,438 Stage-1 map = 50%,  reduce = 17%, Cumulative CPU 1.7 sec
2013-12-18 11:00:50,481 Stage-1 map = 50%,  reduce = 17%, Cumulative CPU 1.7 sec
2013-12-18 11:00:51,938 Stage-1 map = 50%,  reduce = 17%, Cumulative CPU 1.7 sec
2013-12-18 11:00:52,970 Stage-1 map = 50%,  reduce = 17%, Cumulative CPU 1.7 sec
2013-12-18 11:00:54,007 Stage-1 map = 50%,  reduce = 17%, Cumulative CPU 1.7 sec
2013-12-18 11:00:55,043 Stage-1 map = 50%,  reduce = 17%, Cumulative CPU 1.7 sec
2013-12-18 11:00:56,082 Stage-1 map = 50%,  reduce = 17%, Cumulative CPU 1.7 sec
2013-12-18 11:00:57,113 Stage-1 map = 50%,  reduce = 17%, Cumulative CPU 1.7 sec
2013-12-18 11:00:58,150 Stage-1 map = 50%,  reduce = 17%, Cumulative CPU 1.7 sec
2013-12-18 11:00:59,177 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 1.7 sec
MapReduce Total cumulative CPU time: 1 seconds 700 msec
Ended Job = job_201311101215_106238 with errors
Error during job, obtaining debugging information...
Examining task ID: task_201311101215_106238_m_000003 (and more) from job job_201311101215_106238
Exception in thread "Thread-40" java.lang.RuntimeException: Error while reading from task log url
        at org.apache.hadoop.hive.ql.exec.errors.TaskLogProcessor.getErrors(TaskLogProcessor.java:130)
        at org.apache.hadoop.hive.ql.exec.JobDebugger.showJobFailDebugInfo(JobDebugger.java:211)
        at org.apache.hadoop.hive.ql.exec.JobDebugger.run(JobDebugger.java:81)
        at java.lang.Thread.run(Thread.java:662)
Caused by: java.io.IOException: Server returned HTTP response code: 400 for URL: http://hadoop-sl002.TB.com:50060/tasklog?taskid=attempt_201311101215_106238_m_000000_1&start=-8193
        at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1436)
        at java.net.URL.openStream(URL.java:1010)
        at org.apache.hadoop.hive.ql.exec.errors.TaskLogProcessor.getErrors(TaskLogProcessor.java:120)
        ... 3 more
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask
MapReduce Jobs Launched: 
Job 0: Map: 2  Reduce: 1   Accumulative CPU: 1.7 sec   HDFS Read: 366 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 1 seconds 700 msec


The Hadoop task log for this job ID shows the following error:


2013-12-18 11:07:01,210 FATAL ExecMapper: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row {"year":"1990","standard":30}
at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:550)
at org.apache.hadoop.hive.ql.exec.ExecMapper.map(ExecMapper.java:143)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:390)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:324)
at org.apache.hadoop.mapred.Child$4.run(Child.java:266)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1278)
at org.apache.hadoop.mapred.Child.main(Child.java:260)
Caused by: java.lang.ClassCastException: org.apache.hadoop.io.Text cannot be cast to org.apache.hadoop.io.IntWritable
at org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive.LazyIntObjectInspector.get(LazyIntObjectInspector.java:38)
at org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorUtils.getInt(PrimitiveObjectInspectorUtils.java:519)
at org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorConverter$IntConverter.convert(PrimitiveObjectInspectorConverter.java:145)
at org.apache.hadoop.hive.ql.udf.generic.GenericUDFUtils$ConversionHelper.convertIfNecessary(GenericUDFUtils.java:345)
at org.apache.hadoop.hive.ql.udf.generic.GenericUDFBridge.evaluate(GenericUDFBridge.java:181)
at org.apache.hadoop.hive.ql.exec.ExprNodeGenericFuncEvaluator.evaluate(ExprNodeGenericFuncEvaluator.java:163)
at org.apache.hadoop.hive.ql.exec.ReduceSinkOperator.processOp(ReduceSinkOperator.java:225)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:762)
at org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:83)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:762)
at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:531)
... 9 more



-------------------- Reply -------------------- Table data and schemas:
hive> select * from records;
OK
1990    44      1
1990    43      1
1990    49      1
1991    45      2
1992    41      3
1993    43      2
1994    41      1
1994    44      1
1995    58      1
1993    55      1
1992    25      1
1991    39      1
Time taken: 2.823 seconds
hive> select * from records2;
OK
1990    ruishenh0
1992    ruishenh2
1991    ruishenh1
1993    ruishenh3
1994    ruishenh4
1995    ruishenh5
1996    ruishenh6
1997    ruishenh7
1998    ruishenh8
Time taken: 0.371 seconds
hive> select * from records3;
OK
1990    30
1992    34
1991    54
1993    65
1994    34
1995    45
1996    57
1997    45
1998    34
Time taken: 0.405 seconds
hive> desc records;
OK
year    int
temperature     int
quality int
Time taken: 0.102 seconds
hive> desc records2;
OK
year    int
name    string
Time taken: 0.075 seconds
hive> desc records3;
OK
year    string
standard        int
Time taken: 0.067 seconds
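The DESCRIBE output above already hints at the cause: records3.year is string, while records.year and records2.year are int, which lines up with the "Text cannot be cast to IntWritable" error in the map task. A minimal sketch of a workaround, assuming the tables are exactly as shown, is to cast the mismatched column in the join condition instead of changing the schema:

```sql
-- Sketch only: cast records3.year (string) to int in the join condition,
-- so all three join keys compare as int. Table/column names as in the thread.
SELECT *
FROM records r
JOIN records2 r2 ON r.year = r2.year
JOIN records3 r3 ON CAST(r3.year AS INT) = r2.year;
```

With the cast, all three keys go through the join as the same primitive type, so the reduce-sink key conversion no longer has to coerce a lazy string field into an IntWritable.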
hive> -------------------- Reply --------------------
Hold on, I'll sort this out for you in a moment — give me some extra points.
-------------------- Reply --------------------
I changed the schema of records3 so that every column is int (ALTER TABLE records3 CHANGE COLUMN year year int;),
but it still fails:
2013-12-18 11:32:41,715 WARN org.apache.hadoop.mapred.Child: Error running child
java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row {"year":1990,"name":"ruishenh0"}
at org.apache.hadoop.hive.ql.exec.ExecMapper.map(ExecMapper.java:161)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:390)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:324)
at org.apache.hadoop.mapred.Child$4.run(Child.java:266)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1278)
at org.apache.hadoop.mapred.Child.main(Child.java:260)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row {"year":1990,"name":"ruishenh0"}
at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:550)
at org.apache.hadoop.hive.ql.exec.ExecMapper.map(ExecMapper.java:143)
... 8 more
Caused by: java.lang.ArrayIndexOutOfBoundsException: 3
at org.apache.hadoop.hive.serde2.lazy.LazyStruct.uncheckedGetField(LazyStruct.java:210)
at org.apache.hadoop.hive.serde2.lazy.LazyStruct.getField(LazyStruct.java:192)
at org.apache.hadoop.hive.serde2.lazy.objectinspector.LazySimpleStructObjectInspector.getStructFieldData(LazySimpleStructObjectInspector.java:188)
at org.apache.hadoop.hive.ql.exec.ExprNodeColumnEvaluator.evaluate(ExprNodeColumnEvaluator.java:98)
at org.apache.hadoop.hive.ql.exec.ReduceSinkOperator.processOp(ReduceSinkOperator.java:233)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:762)
at org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:83)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:762)
at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:531)
... 9 more
-------------------- Reply --------------------
Has really nobody run into this? ... This is hopeless...
-------------------- Reply --------------------
I ran into this problem too: inserting 800 million rows into one table failed with this error. I split it into 3 batches of inserts and it worked fine.
-------------------- Reply --------------------
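One hedged guess about why the error changed after the ALTER: in Hive, ALTER TABLE ... CHANGE COLUMN rewrites only the table metadata, not the data files, so the on-disk rows can end up disagreeing with the new schema. A sketch of rebuilding the table instead (the name records3_int is hypothetical, not from the thread):

```sql
-- Hedged alternative to ALTER: materialize a new table with year stored as int,
-- then join against it. records3_int is a hypothetical table name.
CREATE TABLE records3_int AS
SELECT CAST(year AS INT) AS year, standard
FROM records3;
```

After that, joining records and records2 against records3_int keeps all three join keys as int without touching the original files.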
Quoting reply #6 from ys198519:
I ran into this problem too: inserting 800 million rows into one table failed with this error. I split it into 3 batches of inserts and it worked fine.

Hmm, so your problem sounds like it was caused by the data volume? That's not my case — I only have a dozen or so rows.