  Help-Desk / HELP-4745

FIWARE.Question.Tech.Data.BigData-Analysis.ERROR 503: Service not available at persist HDFS

    Details

      Description

      Question created in the FIWARE Q/A platform on 04-05-2015 at 09:05.
      Please answer this question at http://stackoverflow.com/questions/30024796/error-503-service-not-available-at-persist-hdfs

      Question:
      ERROR 503: Service not available at persist HDFS

      Description:
      I have an Orion instance with Cygnus on FIWARE Lab (filab); the subscription and notifications run fine, but I cannot persist data to cosmos.lab.fi-ware.org.
      Cygnus returns this error:

      [ERROR - es.tid.fiware.fiwareconnectors.cygnus.sinks.OrionSink.process(OrionSink.java:139)] Persistence error (The talky/talkykar/room6_room directory could not be created in HDFS. HttpFS response: 503 Service unavailable)

      This is my agent_a.conf file:

      cygnusagent.sources = http-source
      cygnusagent.sinks = hdfs-sink
      cygnusagent.channels = hdfs-channel

      #=============================================

      # source configuration
      # channel name where to write the notification events
      cygnusagent.sources.http-source.channels = hdfs-channel
      # source class, must not be changed
      cygnusagent.sources.http-source.type = org.apache.flume.source.http.HTTPSource
      # listening port the Flume source will use for receiving incoming notifications
      cygnusagent.sources.http-source.port = 5050
      # Flume handler that will parse the notifications, must not be changed
      cygnusagent.sources.http-source.handler = es.tid.fiware.fiwareconnectors.cygnus.handlers.OrionRestHandler
      # URL target
      cygnusagent.sources.http-source.handler.notification_target = /notify
      # Default service (service semantic depends on the persistence sink)
      cygnusagent.sources.http-source.handler.default_service = talky
      # Default service path (service path semantic depends on the persistence sink)
      cygnusagent.sources.http-source.handler.default_service_path = talkykar
      # Number of channel re-injection retries before a Flume event is definitely discarded (-1 means infinite retries)
      cygnusagent.sources.http-source.handler.events_ttl = 10
      # Source interceptors, do not change
      cygnusagent.sources.http-source.interceptors = ts de
      # Timestamp interceptor, do not change
      cygnusagent.sources.http-source.interceptors.ts.type = timestamp
      # Destination extractor interceptor, do not change
      cygnusagent.sources.http-source.interceptors.de.type = es.tid.fiware.fiwareconnectors.cygnus.interceptors.DestinationExtractor$Builder
      # Matching table for the destination extractor interceptor, put the right absolute path to the file if necessary
      # See the doc/design/interceptors document for more details
      cygnusagent.sources.http-source.interceptors.de.matching_table = /usr/cygnus/conf/matching_table.conf

      #=============================================
      # OrionHDFSSink configuration
      # channel name from where to read notification events
      cygnusagent.sinks.hdfs-sink.channel = hdfs-channel
      # sink class, must not be changed
      cygnusagent.sinks.hdfs-sink.type = es.tid.fiware.fiwareconnectors.cygnus.sinks.OrionHDFSSink
      # Comma-separated list of FQDN/IP addresses regarding the Cosmos Namenode endpoints
      # If you are using Kerberos authentication, then the usage of FQDNs instead of IP addresses is mandatory
      cygnusagent.sinks.hdfs-sink.cosmos_host = http://cosmos.lab.fi-ware.org
      # port of the Cosmos service listening for persistence operations; 14000 for httpfs, 50070 for webhdfs and free choice for infinity
      cygnusagent.sinks.hdfs-sink.cosmos_port = 14000
      # default username allowed to write in HDFS
      cygnusagent.sinks.hdfs-sink.cosmos_default_username = myuser
      # default password for the default username
      cygnusagent.sinks.hdfs-sink.cosmos_default_password = mypass
      # HDFS backend type (webhdfs, httpfs or infinity)
      cygnusagent.sinks.hdfs-sink.hdfs_api = httpfs
      # how the attributes are stored, either per row or per column (row, column)
      cygnusagent.sinks.hdfs-sink.attr_persistence = row
      # FQDN/IP address of the Hive server
      cygnusagent.sinks.hdfs-sink.hive_host = http://cosmos.lab.fi-ware.org
      # Hive port for Hive external table provisioning
      cygnusagent.sinks.hdfs-sink.hive_port = 10000
      # Kerberos-based authentication enabling
      cygnusagent.sinks.hdfs-sink.krb5_auth = false
      # Kerberos username
      cygnusagent.sinks.hdfs-sink.krb5_auth.krb5_user = krb5_username
      # Kerberos password
      cygnusagent.sinks.hdfs-sink.krb5_auth.krb5_password = xxxxxxxxxxxxx
      # Kerberos login file
      cygnusagent.sinks.hdfs-sink.krb5_auth.krb5_login_conf_file = /usr/cygnus/conf/krb5_login.conf
      # Kerberos configuration file
      cygnusagent.sinks.hdfs-sink.krb5_auth.krb5_conf_file = /usr/cygnus/conf/krb5.conf
      #=============================================
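
      The inline comment above asks for a bare FQDN/IP in cosmos_host, yet the configured value includes the "http://" scheme, and the log below ends up requesting "http://http://cosmos.lab.fi-ware.org:14000/...". A minimal sketch of those endpoint lines with bare hostnames, assuming Cygnus prepends the scheme itself:

      # hypothetical corrected endpoints (scheme removed, hostname only)
      cygnusagent.sinks.hdfs-sink.cosmos_host = cosmos.lab.fi-ware.org
      cygnusagent.sinks.hdfs-sink.hive_host = cosmos.lab.fi-ware.org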

      And this is the Cygnus log:

      2015-05-04 09:05:10,434 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - es.tid.fiware.fiwareconnectors.cygnus.sinks.OrionHDFSSink.persist(OrionHDFSSink.java:315)] [hdfs-sink] Persisting data at OrionHDFSSink. HDFS file (talky/talkykar/room6_room/room6_room.txt), Data (

      {"recvTimeTs":"1430723069","recvTime":"2015-05-04T09:04:29.819","entityId":"Room6","entityType":"Room","attrName":"temperature","attrType":"float","attrValue":"26.5","attrMd":[]}

      )
      2015-05-04 09:05:10,435 (SinkRunner-PollingRunner-DefaultSinkProcessor) [DEBUG - es.tid.fiware.fiwareconnectors.cygnus.backends.hdfs.HDFSBackendImpl.doHDFSRequest(HDFSBackendImpl.java:255)] HDFS request: PUT http://http://cosmos.lab.fi-ware.org:14000/webhdfs/v1/user/mped.mlg/talky/talkykar/room6_room?op=mkdirs&user.name=mped.mlg HTTP/1.1
      2015-05-04 09:05:10,435 (SinkRunner-PollingRunner-DefaultSinkProcessor) [DEBUG - org.apache.http.impl.conn.PoolingClientConnectionManager.requestConnection(PoolingClientConnectionManager.java:186)] Connection request: [route: {}->http://http][total kept alive: 0; route allocated: 0 of 100; total allocated: 0 of 500]
      2015-05-04 09:05:10,435 (SinkRunner-PollingRunner-DefaultSinkProcessor) [DEBUG - org.apache.http.impl.conn.PoolingClientConnectionManager.leaseConnection(PoolingClientConnectionManager.java:220)] Connection leased: [id: 21][route: {}->http://http][total kept alive: 0; route allocated: 1 of 100; total allocated: 1 of 500]
      2015-05-04 09:05:10,435 (SinkRunner-PollingRunner-DefaultSinkProcessor) [DEBUG - org.apache.http.impl.conn.DefaultClientConnection.close(DefaultClientConnection.java:169)] Connection org.apache.http.impl.conn.DefaultClientConnection@5700187d closed
      2015-05-04 09:05:10,435 (SinkRunner-PollingRunner-DefaultSinkProcessor) [DEBUG - org.apache.http.impl.conn.DefaultClientConnection.shutdown(DefaultClientConnection.java:154)] Connection org.apache.http.impl.conn.DefaultClientConnection@5700187d shut down
      2015-05-04 09:05:10,436 (SinkRunner-PollingRunner-DefaultSinkProcessor) [DEBUG - org.apache.http.impl.conn.PoolingClientConnectionManager.releaseConnection(PoolingClientConnectionManager.java:272)] Connection [id: 21][route: {}->http://http] can be kept alive for 9223372036854775807 MILLISECONDS
      2015-05-04 09:05:10,436 (SinkRunner-PollingRunner-DefaultSinkProcessor) [DEBUG - org.apache.http.impl.conn.DefaultClientConnection.close(DefaultClientConnection.java:169)] Connection org.apache.http.impl.conn.DefaultClientConnection@5700187d closed
      2015-05-04 09:05:10,436 (SinkRunner-PollingRunner-DefaultSinkProcessor) [DEBUG - org.apache.http.impl.conn.PoolingClientConnectionManager.releaseConnection(PoolingClientConnectionManager.java:278)] Connection released: [id: 21][route: {}->http://http][total kept alive: 0; route allocated: 0 of 100; total allocated: 0 of 500]
      2015-05-04 09:05:10,436 (SinkRunner-PollingRunner-DefaultSinkProcessor) [DEBUG - es.tid.fiware.fiwareconnectors.cygnus.backends.hdfs.HDFSBackendImpl.doHDFSRequest(HDFSBackendImpl.java:191)] The used HDFS endpoint is not active, trying another one (host=http://cosmos.lab.fi-ware.org)
      2015-05-04 09:05:10,436 (SinkRunner-PollingRunner-DefaultSinkProcessor) [ERROR - es.tid.fiware.fiwareconnectors.cygnus.sinks.OrionSink.process(OrionSink.java:139)] Persistence error (The talky/talkykar/room6_room directory could not be created in HDFS. HttpFS response: 503 Service unavailable)

      Thanks.
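
      A quick way to check, independently of Cygnus, whether the HttpFS service is reachable is to replay the same WebHDFS operations by hand. A minimal sketch with curl, assuming a FIWARE Lab Cosmos account; the username "myuser" is a placeholder, and the second request replicates the mkdirs call from the log against the bare host:

      # probe the HttpFS endpoint with a simple status request
      curl -i "http://cosmos.lab.fi-ware.org:14000/webhdfs/v1/user/myuser?op=GETFILESTATUS&user.name=myuser"

      # replay the directory creation that failed in the log
      curl -i -X PUT "http://cosmos.lab.fi-ware.org:14000/webhdfs/v1/user/myuser/talky/talkykar/room6_room?op=mkdirs&user.name=myuser"

      A 503 from these requests would point at the Cosmos HttpFS service itself, while a successful response would point back at the Cygnus endpoint configuration.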

        Activity

        backlogmanager Backlog Manager added a comment - 2015-10-09 12:05|CREATED monitor | # answers= 1, accepted answer= True
        backlogmanager Backlog Manager added a comment - 2015-10-09 15:05|UPDATED status: transition Answer| # answers= 1, accepted answer= True
        mev Manuel Escriche added a comment - 2015-10-13 12:00|UPDATED status: transition Answered| # answers= 1, accepted answer= True
        mev Manuel Escriche added a comment - 2015-10-13 15:05|UPDATED status: transition Finish| # answers= 1, accepted answer= True

          People

          • Assignee: frb Francisco Romero
          • Reporter: backlogmanager Backlog Manager
          • Votes: 0
          • Watchers: 2

            Dates

            • Created:
            • Updated:
            • Resolved: