  Help-Desk / HELP-8562

[Fiware-lab-help] FIWARE Lab Assistance

    Details

    • Type: extRequest
    • Status: Closed
    • Priority: Major
    • Resolution: Done
    • Fix Version/s: 2021
    • Component/s: FIWARE-LAB-HELP
    • Labels: None

      Description

      Dear

      Currently one of my instances gives an error.
      I tried to restart the instance after it had the status ‘SHUTOFF’, following another ticket.

      The account used is joao.dacostaneves@ext.consilium.europa.eu <joao.dacostaneves@ext.consilium.europa.eu>
      The region is Spain
      The instance name is gscCIxPTest (local ip 192.168.244.35)
      Currently not coupled to a floating ip-address.

      Would it be possible to check what the state of the VM is and, if possible, to restart it?

      Many thanks in advance for any action,
      Greetings,
      Jan.

      Info

      Name: gscCIxPtest
      ID: 8a78fc3b-6426-4467-b9e9-eba66b63644e
      Status: ERROR
      Specs

      RAM: 4096 MB
      VCPUs: 2
      Disk: 40 GB
      IP Addresses

      node-int-net-01: 192.168.244.35
      Security Groups

      Meta

      Key name: gscCIxPkeypair
      Image Name: domibus3.2_r5.4 <https://cloud.lab.fiware.org/#nova/images/33dd796e-3289-4405-8d13-6842f849af2c>
      region: Spain2
      nid: 1626
      Volumes

      Installed Software

      Image domibus3.2_r5.4 does not allow Software Management with SDC.
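
      The Info block above mirrors the fields of an OpenStack Compute API "show server details" response. As a hypothetical sketch (the dict layout follows the Nova v2 API; the helper name and example values below are assumptions filled in from this ticket), the relevant fields could be summarised like this:

      ```python
      # Sketch: summarising a Nova-style server record such as the one in the
      # ticket's Info section. Field names follow the Compute API v2 "show
      # server details" response; the example dict uses values from this ticket.

      def summarize_server(server: dict) -> str:
          """Return a one-line summary of a Nova server JSON body."""
          name = server.get("name", "?")
          status = server.get("status", "?")
          # "addresses" maps network name -> list of {"addr": ...} entries
          ips = [
              entry["addr"]
              for addrs in server.get("addresses", {}).values()
              for entry in addrs
          ]
          return f"{name} [{status}] ips={','.join(ips) or 'none'}"

      # Values taken from the ticket's Info section:
      example = {
          "id": "8a78fc3b-6426-4467-b9e9-eba66b63644e",
          "name": "gscCIxPtest",
          "status": "ERROR",
          "addresses": {"node-int-net-01": [{"addr": "192.168.244.35"}]},
      }
      print(summarize_server(example))  # gscCIxPtest [ERROR] ips=192.168.244.35
      ```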

      __________________________________________________________________________________________


      Fiware-lab-help mailing list
      Fiware-lab-help@lists.fiware.org
      https://lists.fiware.org/listinfo/fiware-lab-help

      [Created via e-mail received from: Jan HELLINGS <jan.hellings@triads.eu>]

        Activity

        jicg José Ignacio Carretero Guarde added a comment -

        As we announced two weeks ago, we are in maintenance this week; sorry for the inconvenience.

        However, according to our logs, a "DELETE" request for this instance was sent this morning by user "joaoneves":
        nova-api.log:2017-04-19 07:38:49.373 9845 INFO nova.osapi_compute.wsgi.server [req-39565aaf-83f0-49ee-a1c8-0c1266e551e4 joaoneves 3d0b444b6b0846649cf9b6aa93459357 - - -] 130.206.84.8,172.32.0.143 "DELETE /v2/3d0b444b6b0846649cf9b6aa93459357/servers/8a78fc3b-6426-4467-b9e9-eba66b63644e HTTP/1.1" status: 204 len: 198 time: 0.2159040

        The deletion of the VM itself went through, although network deallocation failed:
        nova-compute.log:2017-04-19 07:38:50.456 13230 INFO nova.virt.libvirt.driver [req-39565aaf-83f0-49ee-a1c8-0c1266e551e4 joaoneves 3d0b444b6b0846649cf9b6aa93459357 - - -] [instance: 8a78fc3b-6426-4467-b9e9-eba66b63644e] Deleting instance files /var/lib/nova/instances/8a78fc3b-6426-4467-b9e9-eba66b63644e_del
        nova-compute.log:2017-04-19 07:38:51.161 13230 INFO nova.virt.libvirt.driver [req-39565aaf-83f0-49ee-a1c8-0c1266e551e4 joaoneves 3d0b444b6b0846649cf9b6aa93459357 - - -] [instance: 8a78fc3b-6426-4467-b9e9-eba66b63644e] Deletion of /var/lib/nova/instances/8a78fc3b-6426-4467-b9e9-eba66b63644e_del complete
        nova-compute.log:2017-04-19 07:38:54.942 13230 ERROR nova.compute.manager [req-39565aaf-83f0-49ee-a1c8-0c1266e551e4 joaoneves 3d0b444b6b0846649cf9b6aa93459357 - - -] [instance: 8a78fc3b-6426-4467-b9e9-eba66b63644e] Failed to deallocate network for instance.
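
        As an aside, the relevant fields (requesting user, HTTP method, status code) can be pulled out of the nova-api.log line quoted above with a small script. This is a hypothetical sketch; the regular expression is tailored to the exact log line in this ticket and may need adjusting for other Nova log formats:

        ```python
        import re

        # Sketch: extract the user, HTTP method, and status code from the
        # nova-api.log line quoted above, to confirm who issued the DELETE.
        LOG_RE = re.compile(
            r"\[req-(?P<req_id>[0-9a-f-]+) (?P<user>\S+) \S+ - - -\] "
            r'\S+ "(?P<method>[A-Z]+) (?P<path>\S+) HTTP/1\.1" '
            r"status: (?P<status>\d+)"
        )

        line = (
            "nova-api.log:2017-04-19 07:38:49.373 9845 INFO "
            "nova.osapi_compute.wsgi.server "
            "[req-39565aaf-83f0-49ee-a1c8-0c1266e551e4 joaoneves "
            "3d0b444b6b0846649cf9b6aa93459357 - - -] 130.206.84.8,172.32.0.143 "
            '"DELETE /v2/3d0b444b6b0846649cf9b6aa93459357/servers/'
            '8a78fc3b-6426-4467-b9e9-eba66b63644e HTTP/1.1" status: 204 '
            "len: 198 time: 0.2159040"
        )

        m = LOG_RE.search(line)
        print(m.group("user"), m.group("method"), m.group("status"))
        # joaoneves DELETE 204
        ```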

        The error shown is due to a communication failure between two components (Nova and Neutron) caused by the maintenance tasks.

        If you perform a new TERMINATE operation on the VM from the Cloud Portal, the deletion process will most likely complete fully.
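
        The Cloud Portal's TERMINATE action corresponds to a Compute API DELETE on the server resource. As a hypothetical sketch (the base URL here is an assumption, and actually sending the request would also need a Keystone auth token), the URL matches the one visible in the nova-api.log excerpt above:

        ```python
        # Sketch: build the Compute API DELETE URL for this instance. The
        # tenant and server IDs are taken from the log line above; the base
        # endpoint is an assumed placeholder.
        TENANT_ID = "3d0b444b6b0846649cf9b6aa93459357"
        SERVER_ID = "8a78fc3b-6426-4467-b9e9-eba66b63644e"

        def delete_url(base: str, tenant_id: str, server_id: str) -> str:
            """Return the Compute API v2 URL for deleting a server."""
            return f"{base}/v2/{tenant_id}/servers/{server_id}"

        print(delete_url("https://example-nova-endpoint:8774", TENANT_ID, SERVER_ID))
        ```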

        Regards,
        José Ignacio.


          People

          • Assignee:
            jicg José Ignacio Carretero Guarde
            Reporter:
            fw.ext.user FW External User
          • Votes:
            0
            Watchers:
            2

            Dates

            • Created:
              Updated:
              Resolved: