Help-Desk / HELP-6073

FIWARE.Request.Tech.Data.BigData-Analysis.COSMOS BigData Analysis Write Permission

    Details

    • Type: extRequest
    • Status: Closed
    • Priority: Major
    • Resolution: Done
    • Fix Version/s: 2021
    • Component/s: FIWARE-TECH-HELP
    • Labels:
      None

      Description

      Hello,

      I am trying to run a simple analysis example using the Hadoop examples in Cosmos.
      It seems that my user (jvidal) doesn't have permission to do this. The logs are shown below.

      [jvidal@cosmosmaster-gi ~]$ hadoop jar /usr/lib/hadoop-0.20/hadoop-examples.jar wordcount /user/jvidal/def_serv/def_servpath/556dcfc2a5333eff5d19c8c4_product/556dcfc2a5333eff5d19c8c4_product.txt /home/jvidal/countwords
      16/03/08 13:47:32 WARN snappy.LoadSnappy: Snappy native library is available
      16/03/08 13:47:32 INFO util.NativeCodeLoader: Loaded the native-hadoop library
      16/03/08 13:47:32 INFO snappy.LoadSnappy: Snappy native library loaded
      16/03/08 13:47:32 INFO mapred.FileInputFormat: Total input paths to process : 1
      16/03/08 13:47:33 INFO mapred.JobClient: Running job: job_201603041134_0059
      16/03/08 13:47:34 INFO mapred.JobClient: map 0% reduce 0%
      16/03/08 13:47:39 INFO mapred.JobClient: Task Id : attempt_201603041134_0059_m_000003_0, Status : FAILED
      org.apache.hadoop.security.AccessControlException: org.apache.hadoop.security.AccessControlException: Permission denied: user=jvidal, access=WRITE, inode="/":hdfs:supergroup:drwxr-xr-x
      at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
      at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
      at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
      at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
      at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:95)
      at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
      at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:1297)
      at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:323)
      at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1314)
      at org.apache.hadoop.mapred.FileOutputCommitter.setupJob(FileOutputCommitter.java:52)
      at org.apach
      16/03/08 13:47:44 INFO mapred.JobClient: Task Id : attempt_201603041134_0059_r_000010_0, Status : FAILED
      org.apache.hadoop.security.AccessControlException: org.apache.hadoop.security.AccessControlException: Permission denied: user=jvidal, access=WRITE, inode="/":hdfs:supergroup:drwxr-xr-x
      at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
      at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
      at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
      at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
      at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:95)
      at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
      at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:1297)
      at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:323)
      at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1314)
      at org.apache.hadoop.mapred.FileOutputCommitter.setupJob(FileOutputCommitter.java:52)
      at org.apach
      16/03/08 13:47:49 INFO mapred.JobClient: Task Id : attempt_201603041134_0059_m_000003_1, Status : FAILED
      org.apache.hadoop.security.AccessControlException: org.apache.hadoop.security.AccessControlException: Permission denied: user=jvidal, access=WRITE, inode="/":hdfs:supergroup:drwxr-xr-x
      at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
      at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
      at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
      at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
      at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:95)
      at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
      at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:1297)
      at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:323)
      at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1314)
      at org.apache.hadoop.mapred.FileOutputCommitter.setupJob(FileOutputCommitter.java:52)
      at org.apach
      16/03/08 13:47:54 INFO mapred.JobClient: Task Id : attempt_201603041134_0059_r_000010_1, Status : FAILED
      org.apache.hadoop.security.AccessControlException: org.apache.hadoop.security.AccessControlException: Permission denied: user=jvidal, access=WRITE, inode="/":hdfs:supergroup:drwxr-xr-x
      at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
      at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
      at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
      at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
      at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:95)
      at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
      at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:1297)
      at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:323)
      at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1314)
      at org.apache.hadoop.mapred.FileOutputCommitter.setupJob(FileOutputCommitter.java:52)
      at org.apach
      16/03/08 13:47:59 INFO mapred.JobClient: Task Id : attempt_201603041134_0059_m_000003_2, Status : FAILED
      org.apache.hadoop.security.AccessControlException: org.apache.hadoop.security.AccessControlException: Permission denied: user=jvidal, access=WRITE, inode="/":hdfs:supergroup:drwxr-xr-x
      at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
      at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
      at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
      at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
      at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:95)
      at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
      at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:1297)
      at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:323)
      at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1314)
      at org.apache.hadoop.mapred.FileOutputCommitter.setupJob(FileOutputCommitter.java:52)
      at org.apach
      16/03/08 13:48:04 INFO mapred.JobClient: Task Id : attempt_201603041134_0059_r_000010_2, Status : FAILED
      org.apache.hadoop.security.AccessControlException: org.apache.hadoop.security.AccessControlException: Permission denied: user=jvidal, access=WRITE, inode="/":hdfs:supergroup:drwxr-xr-x
      at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
      at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
      at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
      at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
      at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:95)
      at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
      at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:1297)
      at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:323)
      at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1314)
      at org.apache.hadoop.mapred.FileOutputCommitter.setupJob(FileOutputCommitter.java:52)
      at org.apach
      16/03/08 13:48:12 INFO mapred.JobClient: Job complete: job_201603041134_0059
      16/03/08 13:48:12 INFO mapred.JobClient: Counters: 4
      16/03/08 13:48:12 INFO mapred.JobClient: Job Counters
      16/03/08 13:48:12 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=22353
      16/03/08 13:48:12 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
      16/03/08 13:48:12 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
      16/03/08 13:48:12 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=14040
      16/03/08 13:48:12 INFO mapred.JobClient: Job Failed: NA
      java.io.IOException: Job failed!
      at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1300)
      at org.apache.hadoop.examples.WordCount.run(WordCount.java:149)
      at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
      at org.apache.hadoop.examples.WordCount.main(WordCount.java:155)
      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
      at java.lang.reflect.Method.invoke(Method.java:597)
      at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
      at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
      at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
      at java.lang.reflect.Method.invoke(Method.java:597)
      at org.apache.hadoop.util.RunJar.main(RunJar.java:197)

      Could you grant this access to my user?
      If I manage more users, should I report them individually, or is this a general issue that can be fixed for everyone?

      Thanks in advance

      [Created via e-mail received from: Jose Benítez <jose@secmotic.com>]

      1. Jose.png
        18 kB

        Activity

        fw.ext.user FW External User added a comment -

        Hi,

        The error is expected: you are trying to put the results of a MapReduce job
        in the HDFS folder /home/jvidal/countwords, which does not exist. It is
        correct that a user does not have permission to create such a folder,
        since users are restricted to their own HDFS user space. Please note that,
        unless otherwise specified, both the input and output paths are assumed
        to be HDFS paths.

        Thus, try putting the output data somewhere in your HDFS user space, i.e.
        under /user/jvidal:

        $ hadoop jar /usr/lib/hadoop-0.20/hadoop-examples.jar wordcount /user/jvidal/def_serv/def_servpath/556dcfc2a5333eff5d19c8c4_product/556dcfc2a5333eff5d19c8c4_product.txt /user/jvidal/countwords

        If you were trying to put the results directly in your Linux user space
        (different from the HDFS space!), which is /home/jvidal, then you must run
        the job as:

        $ hadoop jar /usr/lib/hadoop-0.20/hadoop-examples.jar wordcount /user/jvidal/def_serv/def_servpath/556dcfc2a5333eff5d19c8c4_product/556dcfc2a5333eff5d19c8c4_product.txt file:///home/jvidal/countwords
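
        For completeness, once the job finishes you can inspect and retrieve
        the output with the HDFS shell. This is a minimal sketch, assuming the
        Hadoop 0.20 CLI on the Cosmos master node and the output path used
        above:

        # List the output directory; results are written as part-* files
        $ hadoop fs -ls /user/jvidal/countwords

        # Print the word counts to the terminal
        $ hadoop fs -cat /user/jvidal/countwords/part-*

        # Merge the part files and copy them to your Linux home directory
        $ hadoop fs -getmerge /user/jvidal/countwords /home/jvidal/countwords.txt

        Also note that Hadoop refuses to start a job whose output directory
        already exists, so remove it first with "hadoop fs -rmr
        /user/jvidal/countwords" if you need to re-run.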

        Regards,
        Francisco


        fw.ext.user FW External User added a comment -

        Oops! Sorry, and thank you very much.

        Best regards,


        fw.ext.user FW External User added a comment -

        Comment by jose@secmotic.com :

        Hello,

        I have been unable to log in to the global instance of Cosmos since my registration (which was on Sun Jun 26 2016 19:14:46).

        My Cosmos username is "andres.umana". When I ssh to andres.umana@cosmos.lab.fiware.org and type my password, it is rejected as incorrect.
        I have even tried to change the password in the Cosmos GUI (https://cosmos.lab.fiware.org/profile), but I cannot access it.

        Any help would be appreciated

        Thanks in advance




          People

          • Assignee:
            frb Francisco Romero
            Reporter:
            fw.ext.user FW External User
          • Votes:
            0
            Watchers:
            2

            Dates

            • Created:
              Updated:
              Resolved: