Help-Desk / HELP-4896

FIWARE.Request.Tech.Data.BigData-Analysis.HiveError2

    Details

    • Type: extRequest
    • Status: Closed
    • Priority: Critical
    • Resolution: Done
    • Fix Version/s: 2021
    • Component/s: FIWARE-TECH-HELP
    • Labels:
      None

      Description

      Hello,

I am providing technical support for the FRACTALS accelerator project. I am
forwarding an issue I received about the Cosmos GE.

      ---------------
We have the following problem with Cosmos Hive. Every SQL query we run on a
database table in the Hive system returns an exception (the only query that
passes without an exception is "SELECT * FROM TABLE").
The tables are created from files in Cosmos that are written by Cygnus.
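(For reference: a table over Cygnus-written files is normally declared in Hive as an external table pointing at the HDFS directory Cygnus writes to. The sketch below is purely illustrative; apart from recvTime, which appears in the query in this ticket, the column names, the delimiter and the HDFS path are assumptions, not taken from the ticket.

hive> CREATE EXTERNAL TABLE sensor_1_1_1 (
          recvTime  STRING,   -- reception timestamp written by Cygnus
          entityId  STRING,   -- NGSI entity id (assumed column)
          attrName  STRING,   -- attribute name (assumed column)
          attrValue STRING    -- attribute value (assumed column)
      )
      ROW FORMAT DELIMITED FIELDS TERMINATED BY ','            -- delimiter is an assumption
      LOCATION '/user/<cosmos_user>/<service_path>/sensor_1_1_1';  -- path is an assumption

Queries such as the ORDER BY below read these files through MapReduce jobs on the cluster.)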

For example, for the query:

"SELECT * FROM SENSOR_1_1_1 ORDER BY recvTime DESC"

we get the following exception:


      Task ID:
      task_201507101501_18238_m_000000

URL:

http://cosmosmaster-gi:50030/taskdetails.jsp?jobid=job_201507101501_18238&tipid=task_201507101501_18238_m_000000


      Diagnostic Messages for this Task:
java.lang.RuntimeException: Error in configuring object
    at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:93)
    at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:64)
    at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:386)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:324)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:266)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1278)
    at org.apache.hadoop.mapred.Child.main(Child.java:260)
Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.jav ...

FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
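
A note on the symptom: a bare "SELECT * FROM <table>" is answered by Hive directly from the files in HDFS without launching a MapReduce job, whereas ORDER BY and most other clauses compile into a MapReduce job. A return code 2 from MapRedTask therefore points at the MapReduce layer of the cluster rather than at the table definition. A minimal way to separate the two cases, sketched here only as a suggestion and not something run as part of this ticket, is to force one trivial MapReduce job:

hive> SELECT * FROM SENSOR_1_1_1;        -- served without MapReduce; this is the query that already works
hive> SELECT COUNT(*) FROM SENSOR_1_1_1; -- compiles into a MapReduce job

If the COUNT(*) fails with the same stack trace, the table itself can be ruled out as the cause.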

Could you please provide support for this issue?
      ----------------------

      BR,
      Aggelos

      [Created via e-mail received from: Aggelos Groumas <gkraggel@di.uoa.gr>]

        Issue Links

          Activity

          ichulani ilknur chulani added a comment -

          Dear Francisco,

          Any updates on this inquiry? It seems this ticket was assigned almost a month ago. Could you kindly help please?

          Thanks,

          ilknur

          mev Manuel Escriche added a comment -

          No activity here?

          frb Francisco Romero added a comment -

          Sorry for the late reply.

          It seems this was caused by an error in the cluster, rather than a problem with the Hive table itself. Indeed, it currently works:

          hive> SELECT * FROM SENSOR_1_1_1 ORDER BY recvTime DESC;
          Total jobs = 1
          Launching Job 1 out of 1
          Number of reduce tasks determined at compile time: 1
          In order to change the average load for a reducer (in bytes):
          set hive.exec.reducers.bytes.per.reducer=<number>
          In order to limit the maximum number of reducers:
          set hive.exec.reducers.max=<number>
          In order to set a constant number of reducers:
          set mapred.reduce.tasks=<number>
          Starting Job = job_201507101501_23387, Tracking URL = http://cosmosmaster-gi:50030/jobdetails.jsp?jobid=job_201507101501_23387
          Kill Command = /usr/lib/hadoop-0.20/bin/hadoop job -kill job_201507101501_23387
          Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 1
          2015-11-24 12:13:16,718 Stage-1 map = 0%, reduce = 0%
          2015-11-24 12:13:20,761 Stage-1 map = 0%, reduce = 100%, Cumulative CPU 0.97 sec
          2015-11-24 12:13:22,785 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 0.97 sec
          MapReduce Total cumulative CPU time: 970 msec
          Ended Job = job_201507101501_23387
          MapReduce Jobs Launched:
          Job 0: Reduce: 1 Cumulative CPU: 0.97 sec HDFS Read: 0 HDFS Write: 0 SUCCESS
          Total MapReduce CPU Time Spent: 970 msec
          OK
          Time taken: 12.182 seconds

          Closing

          ichulani ilknur chulani added a comment -

          Dear Aggelos,

It seems the issue is resolved now. Could you kindly pass the GE owner's reply below on to the SME? Thanks.

          ilknur
"Sorry for the late reply.

It seems this was caused by an error in the cluster, rather than a problem with the Hive table itself. Indeed, it currently works:

          hive> SELECT * FROM SENSOR_1_1_1 ORDER BY recvTime DESC;
          Total jobs = 1
          Launching Job 1 out of 1
          Number of reduce tasks determined at compile time: 1
          In order to change the average load for a reducer (in bytes):
          set hive.exec.reducers.bytes.per.reducer=<number>
          In order to limit the maximum number of reducers:
          set hive.exec.reducers.max=<number>
          In order to set a constant number of reducers:
          set mapred.reduce.tasks=<number>
          Starting Job = job_201507101501_23387, Tracking URL = http://cosmosmaster-gi:50030/jobdetails.jsp?jobid=job_201507101501_23387
          Kill Command = /usr/lib/hadoop-0.20/bin/hadoop job -kill job_201507101501_23387
          Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 1
          2015-11-24 12:13:16,718 Stage-1 map = 0%, reduce = 0%
          2015-11-24 12:13:20,761 Stage-1 map = 0%, reduce = 100%, Cumulative CPU 0.97 sec
          2015-11-24 12:13:22,785 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 0.97 sec
          MapReduce Total cumulative CPU time: 970 msec
          Ended Job = job_201507101501_23387
          MapReduce Jobs Launched:
          Job 0: Reduce: 1 Cumulative CPU: 0.97 sec HDFS Read: 0 HDFS Write: 0 SUCCESS
          Total MapReduce CPU Time Spent: 970 msec
          OK
          Time taken: 12.182 seconds

          Closing"

          burak Karaboga, Burak added a comment -

          Hi Aggelos,

          Can you please confirm if you have passed on the answer to the SME?

          Regards,

          Burak


            People

            • Assignee:
              frb Francisco Romero
              Reporter:
              fw.ext.user FW External User
            • Votes:
0
Watchers:
5

              Dates

              • Created:
                Updated:
                Resolved: