Help-Desk / HELP-5029

FIWARE.Question.Tech.Data.BigData-Analysis.Cygnus not starting as a service

    Details

      Description

      Created question in FIWARE Q/A platform on 18-08-2015 at 16:08
      Please, ANSWER this question AT http://stackoverflow.com/questions/32075670/cygnus-not-starting-as-a-service

      Question:
      Cygnus not starting as a service

      Description:
      I've been checking other people's questions about Cygnus config files, but I still couldn't make mine work.

      Starting cygnus with "service cygnus start" fails.

      When I try to start the service the log at /var/log/cygnus/cygnus.log says:

      Warning: JAVA_HOME is not set!
      + exec /usr/bin/java -Xmx20m -Dflume.log.file=cygnus.log -cp '/usr/cygnus/conf:/usr/cygnus/lib/*:/usr/cygnus/plugins.d/cygnus/lib/*:/usr/cygnus/plugins.d/cygnus/libext/*' -Djava.library.path= com.telefonica.iot.cygnus.nodes.CygnusApplication -p 8081 -f /usr/cygnus/conf/agent_1.conf -n cygnusagent
      SLF4J: Class path contains multiple SLF4J bindings.
      SLF4J: Found binding in [jar:file:/usr/cygnus/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
      SLF4J: Found binding in [jar:file:/usr/cygnus/plugins.d/cygnus/lib/cygnus-0.8.2-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
      SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
      log4j:ERROR setFile(null,true) call failed.
      java.io.FileNotFoundException: ./logs/cygnus.log (No such file or directory)
      at java.io.FileOutputStream.openAppend(Native Method)
      at java.io.FileOutputStream.<init>(FileOutputStream.java:210)
      at java.io.FileOutputStream.<init>(FileOutputStream.java:131)
      at org.apache.log4j.FileAppender.setFile(FileAppender.java:294)
      at org.apache.log4j.RollingFileAppender.setFile(RollingFileAppender.java:207)
      at org.apache.log4j.FileAppender.activateOptions(FileAppender.java:165)
      at org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307)
      at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:172)
      at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:104)
      at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:809)
      at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:735)
      at org.apache.log4j.PropertyConfigurator.configureRootCategory(PropertyConfigurator.java:615)
      at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:502)
      at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:547)
      at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:483)
      at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
      at org.slf4j.impl.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:73)
      at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:242)
      at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:254)
      at org.apache.flume.node.Application.<clinit>(Application.java:58)
      Starting an ordered shutdown of Cygnus
      Stopping sources
      All the channels are empty
      Stopping channels
      Stopping hdfs-channel (lyfecycle state=START)
      Stopping sinks
      Stopping hdfs-sink (lyfecycle state=START)
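
      Note that the FileNotFoundException above is log4j failing on the relative path ./logs/cygnus.log, which resolves against whatever working directory the process was started from, not against /var/log/cygnus. A minimal sketch of the usual workarounds (directory names are assumptions based on the trace):

```shell
# log4j is appending to the relative path ./logs/cygnus.log, so it
# resolves against the process's working directory. One workaround is
# simply making that directory exist where the agent is launched from:
mkdir -p ./logs
touch ./logs/cygnus.log
ls -l ./logs/cygnus.log

# Alternatively, point Flume's log4j at an absolute directory when
# launching by hand (flume.log.dir / flume.log.file are the standard
# Flume log4j properties):
#   java ... -Dflume.log.dir=/var/log/cygnus -Dflume.log.file=cygnus.log ...
```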

      JAVA_HOME is set and I think the issue is with the config files:
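
      Before the files themselves, it may be worth double-checking that claim: a variable exported in a login shell is not inherited by `service`, which typically runs the init script with a cleaned environment. A sketch of the check, using `env -i` to simulate a stripped environment:

```shell
# JAVA_HOME exported in an interactive shell is not inherited by
# `service`, which typically runs the init script with a cleaned
# environment. Compare what you see against a stripped environment:
echo "interactive: JAVA_HOME=${JAVA_HOME:-<unset>}"
env -i sh -c 'echo "stripped:    JAVA_HOME=${JAVA_HOME:-<unset>}"'
# If the second line prints <unset>, the init script will likely see
# the same; export JAVA_HOME somewhere the script sources (the exact
# file is distribution-specific, e.g. /etc/sysconfig/cygnus on RPM
# systems -- an assumption, check your init script).
```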

      agent_1.conf:

      cygnusagent.sources = http-source
      cygnusagent.sinks = hdfs-sink
      cygnusagent.channels = hdfs-channel

      #=============================================

      # source configuration
      # channel name where to write the notification events
      cygnusagent.sources.http-source.channels = hdfs-channel
      # source class, must not be changed
      cygnusagent.sources.http-source.type = org.apache.flume.source.http.HTTPSource
      # listening port the Flume source will use for receiving incoming notifications
      cygnusagent.sources.http-source.port = 5050
      # Flume handler that will parse the notifications, must not be changed
      cygnusagent.sources.http-source.handler = com.telefonica.iot.cygnus.handlers.OrionRestHandler
      # URL target
      cygnusagent.sources.http-source.handler.notification_target = /notify
      # Default service (service semantic depends on the persistence sink)
      cygnusagent.sources.http-source.handler.default_service = def_serv
      # Default service path (service path semantic depends on the persistence sink)
      cygnusagent.sources.http-source.handler.default_service_path = def_servpath
      # Number of channel re-injection retries before a Flume event is definitely discarded (-1 means infinite retries)
      cygnusagent.sources.http-source.handler.events_ttl = 10
      # Source interceptors, do not change
      cygnusagent.sources.http-source.interceptors = ts gi
      # TimestampInterceptor, do not change
      cygnusagent.sources.http-source.interceptors.ts.type = timestamp
      # GroupingInterceptor, do not change
      cygnusagent.sources.http-source.interceptors.gi.type = com.telefonica.iot.cygnus.interceptors.GroupingInterceptor$Builder
      # Grouping rules for the GroupingInterceptor, put the right absolute path to the file if necessary
      # See the doc/design/interceptors document for more details
      cygnusagent.sources.http-source.interceptors.gi.grouping_rules_conf_file = /usr/cygnus/conf/grouping_rules.conf
      #=============================================
      # OrionHDFSSink configuration
      # channel name from where to read notification events
      cygnusagent.sinks.hdfs-sink.channel = hdfs-channel
      # sink class, must not be changed
      cygnusagent.sinks.hdfs-sink.type = com.telefonica.iot.cygnus.sinks.OrionHDFSSink
      # Comma-separated list of FQDN/IP address regarding the HDFS Namenode endpoints
      # If you are using Kerberos authentication, then the usage of FQDNs instead of IP addresses is mandatory
      cygnusagent.sinks.hdfs-sink.hdfs_host = cosmos.lab.fiware.org
      # port of the HDFS service listening for persistence operations; 14000 for httpfs, 50070 for webhdfs
      cygnusagent.sinks.hdfs-sink.hdfs_port = 14000
      # username allowed to write in HDFS
      cygnusagent.sinks.hdfs-sink.hdfs_username = MYUSERNAME
      # OAuth2 token
      cygnusagent.sinks.hdfs-sink.oauth2_token = MYTOKEN
      # how the attributes are stored, either per row or per column (row, column)
      cygnusagent.sinks.hdfs-sink.attr_persistence = column
      # Hive FQDN/IP address of the Hive server
      cygnusagent.sinks.hdfs-sink.hive_host = cosmos.lab.fiware.org
      # Hive port for Hive external table provisioning
      cygnusagent.sinks.hdfs-sink.hive_port = 10000
      # Kerberos-based authentication enabling
      cygnusagent.sinks.hdfs-sink.krb5_auth = false
      # Kerberos username
      cygnusagent.sinks.hdfs-sink.krb5_auth.krb5_user = krb5_username
      # Kerberos password
      cygnusagent.sinks.hdfs-sink.krb5_auth.krb5_password = xxxxxxxxxxxxx
      # Kerberos login file
      cygnusagent.sinks.hdfs-sink.krb5_auth.krb5_login_conf_file = /usr/cygnus/conf/krb5_login.conf
      # Kerberos configuration file
      cygnusagent.sinks.hdfs-sink.krb5_auth.krb5_conf_file = /usr/cygnus/conf/krb5.conf

      #=============================================

      # hdfs-channel configuration
      # channel type (must not be changed)
      cygnusagent.channels.hdfs-channel.type = memory
      # capacity of the channel
      cygnusagent.channels.hdfs-channel.capacity = 1000
      # amount of bytes that can be sent per transaction
      cygnusagent.channels.hdfs-channel.transactionCapacity = 100
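
      To check whether agent_1.conf itself parses, it can help to bypass the service wrapper and run the agent in the foreground so errors hit the terminal. A sketch assuming the /usr/cygnus layout shown in the log (the cygnus-flume-ng wrapper is part of the standard Cygnus packaging; the exact flags mirror the exec line in the trace):

```shell
# Build the foreground launch command the init script would otherwise
# run in the background; paths assume the /usr/cygnus layout above.
CYGNUS_HOME=/usr/cygnus
AGENT_CONF="$CYGNUS_HOME/conf/agent_1.conf"
CMD="$CYGNUS_HOME/bin/cygnus-flume-ng agent \
  --conf $CYGNUS_HOME/conf -f $AGENT_CONF -n cygnusagent \
  -Dflume.root.logger=INFO,console"
echo "$CMD"
# Uncomment on the actual machine to launch it:
# eval "$CMD"
```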

      And cygnus_instance_1.conf:

      CYGNUS_USER=cygnus

      CONFIG_FOLDER=/usr/cygnus/conf

      CONFIG_FILE=/usr/cygnus/conf/agent_1.conf

      # Name of the agent. The name of the agent is not trivial, since it is the base for the Flume parameters
      # naming conventions, e.g. it appears in .sources.http-source.channels=...
      AGENT_NAME=cygnusagent
      # Name of the logfile located at /var/log/cygnus.
      LOGFILE_NAME=cygnus.log
      # Administration port. Must be unique per instance
      ADMIN_PORT=8081
      # Polling interval (seconds) for the configuration reloading
      POLLING_INTERVAL=30
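
      Since the service script pairs each cygnus_instance_*.conf in CONFIG_FOLDER with the agent file it names, a quick sanity check is to source every instance file and confirm its CONFIG_FILE is readable by the CYGNUS_USER. A sketch under that assumption (check_instances is a made-up helper, not part of Cygnus):

```shell
# Source each cygnus_instance_*.conf in a folder and verify that the
# CONFIG_FILE it names actually exists and is readable.
# (check_instances is a hypothetical helper, not part of Cygnus.)
check_instances() {
  dir=$1
  for f in "$dir"/cygnus_instance_*.conf; do
    [ -e "$f" ] || { echo "no instance files in $dir"; return 1; }
    echo "instance file: $f"
    . "$f"                     # defines CONFIG_FILE, AGENT_NAME, ...
    if [ -r "$CONFIG_FILE" ]; then
      echo "agent config OK: $CONFIG_FILE (agent: $AGENT_NAME)"
    else
      echo "missing agent config: $CONFIG_FILE"
    fi
  done
}

check_instances /usr/cygnus/conf || true
```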

      I hope it's a simple issue. If more info is needed please let me know.

      BTW, I got my token following the instructions on this link.
      Isn't there supposed to be a password field for accessing COSMOS global instance? Or is the token enough?

      Thank you

        Activity

        backlogmanager Backlog Manager added a comment -

        2015-10-29 00:05|CREATED monitor | # answers= 2, accepted answer= True

        backlogmanager Backlog Manager added a comment -

        2015-10-29 03:05|UPDATED status: transition Answer| # answers= 2, accepted answer= True

        backlogmanager Backlog Manager added a comment -

        2015-10-29 06:05|UPDATED status: transition Answered| # answers= 2, accepted answer= True

          People

          • Assignee: frb Francisco Romero
          • Reporter: backlogmanager Backlog Manager
          • Votes: 0
          • Watchers: 1
