Instance failed to spawn (BIOS virtualisation enabled?)

Asked by AsLY

I installed nova on a single node, and when I want to launch an instance I get:
Instance '11' failed to spawn. Is virtualization enabled in the BIOS?
But VT extensions are enabled in the BIOS, and I can launch a VM manually with KVM.
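
Here is how I checked that the host really can do KVM (kvm-ok comes from Ubuntu's cpu-checker package):

egrep -c '(vmx|svm)' /proc/cpuinfo   # non-zero means the CPU advertises VT extensions
ls -l /dev/kvm                       # the kvm device node should exist
kvm-ok                               # should report "KVM acceleration can be used"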

Thank you for your help.

nova-compute log:
--------------------------
2011-05-18 09:40:50,792 DEBUG nova.rpc [-] received {'_context_request_id': 'I0STKANX3ZERKYNV6ZPJ', '_context_read_deleted': False, 'args': {'instance_id': 11, 'injected_files': None, 'availability_zone': None}, '_context_is_admin': True, '_context_timestamp': '2011-05-18T07:40:50Z', '_context_user': 'test', 'method': 'run_instance', '_context_project': 'cloudsii', '_context_remote_address': '10.6.214.2'} from (pid=3041) _receive /usr/lib/pymodules/python2.6/nova/rpc.py:167
2011-05-18 09:40:50,793 DEBUG nova.rpc [-] unpacked context: {'timestamp': '2011-05-18T07:40:50Z', 'remote_address': '10.6.214.2', 'project': 'cloudsii', 'is_admin': True, 'user': 'test', 'request_id': 'I0STKANX3ZERKYNV6ZPJ', 'read_deleted': False} from (pid=3041) _unpack_context /usr/lib/pymodules/python2.6/nova/rpc.py:331
2011-05-18 09:40:50,861 AUDIT nova.compute.manager [I0STKANX3ZERKYNV6ZPJ test cloudsii] instance 11: starting...
2011-05-18 09:40:51,002 DEBUG nova.rpc [-] Making asynchronous call on network.sf14-rennes ... from (pid=3041) call /usr/lib/pymodules/python2.6/nova/rpc.py:350
2011-05-18 09:40:51,002 DEBUG nova.rpc [-] MSG_ID is be507b0cb4c64d2c8db96936ebd89056 from (pid=3041) call /usr/lib/pymodules/python2.6/nova/rpc.py:353
2011-05-18 09:40:51,465 DEBUG nova.utils [-] Running cmd (subprocess): ip link show dev vlan100 from (pid=3041) execute /usr/lib/pymodules/python2.6/nova/utils.py:150
2011-05-18 09:40:51,471 DEBUG nova.utils [-] Attempting to grab semaphore "ensure_bridge" for method "ensure_bridge"... from (pid=3041) inner /usr/lib/pymodules/python2.6/nova/utils.py:594
2011-05-18 09:40:51,471 DEBUG nova.utils [-] Attempting to grab file lock "ensure_bridge" for method "ensure_bridge"... from (pid=3041) inner /usr/lib/pymodules/python2.6/nova/utils.py:599
2011-05-18 09:40:51,472 DEBUG nova.utils [-] Running cmd (subprocess): ip link show dev br100 from (pid=3041) execute /usr/lib/pymodules/python2.6/nova/utils.py:150
2011-05-18 09:40:51,478 DEBUG nova.utils [-] Running cmd (subprocess): sudo route -n from (pid=3041) execute /usr/lib/pymodules/python2.6/nova/utils.py:150
2011-05-18 09:40:51,487 DEBUG nova.utils [-] Running cmd (subprocess): sudo ip addr show dev vlan100 scope global from (pid=3041) execute /usr/lib/pymodules/python2.6/nova/utils.py:150
2011-05-18 09:40:51,497 DEBUG nova.utils [-] Running cmd (subprocess): sudo brctl addif br100 vlan100 from (pid=3041) execute /usr/lib/pymodules/python2.6/nova/utils.py:150
2011-05-18 09:40:51,506 DEBUG nova.utils [-] Result was 1 from (pid=3041) execute /usr/lib/pymodules/python2.6/nova/utils.py:166
2011-05-18 09:40:51,583 DEBUG nova.virt.libvirt_conn [-] instance instance-0000000b: starting toXML method from (pid=3041) to_xml /usr/lib/pymodules/python2.6/nova/virt/libvirt_conn.py:996
2011-05-18 09:40:51,649 DEBUG nova.virt.libvirt_conn [-] instance instance-0000000b: finished toXML method from (pid=3041) to_xml /usr/lib/pymodules/python2.6/nova/virt/libvirt_conn.py:1041
2011-05-18 09:40:51,730 INFO nova [-] called setup_basic_filtering in nwfilter
2011-05-18 09:40:51,731 INFO nova [-] ensuring static filters
2011-05-18 09:40:51,785 DEBUG nova.utils [-] Attempting to grab semaphore "iptables" for method "apply"... from (pid=3041) inner /usr/lib/pymodules/python2.6/nova/utils.py:594
2011-05-18 09:40:51,785 DEBUG nova.utils [-] Attempting to grab file lock "iptables" for method "apply"... from (pid=3041) inner /usr/lib/pymodules/python2.6/nova/utils.py:599
2011-05-18 09:40:51,792 DEBUG nova.utils [-] Running cmd (subprocess): sudo iptables-save -t filter from (pid=3041) execute /usr/lib/pymodules/python2.6/nova/utils.py:150
2011-05-18 09:40:51,803 DEBUG nova.utils [-] Running cmd (subprocess): sudo iptables-restore from (pid=3041) execute /usr/lib/pymodules/python2.6/nova/utils.py:150
2011-05-18 09:40:51,815 DEBUG nova.utils [-] Running cmd (subprocess): sudo iptables-save -t nat from (pid=3041) execute /usr/lib/pymodules/python2.6/nova/utils.py:150
2011-05-18 09:40:51,826 DEBUG nova.utils [-] Running cmd (subprocess): sudo iptables-restore from (pid=3041) execute /usr/lib/pymodules/python2.6/nova/utils.py:150
2011-05-18 09:40:51,861 DEBUG nova.utils [-] Running cmd (subprocess): mkdir -p /var/lib/nova/instances/instance-0000000b/ from (pid=3041) execute /usr/lib/pymodules/python2.6/nova/utils.py:150
2011-05-18 09:40:51,868 INFO nova.virt.libvirt_conn [-] instance instance-0000000b: Creating image
2011-05-18 09:40:51,937 DEBUG nova.utils [-] Attempting to grab semaphore "3fb61379" for method "call_if_not_exists"... from (pid=3041) inner /usr/lib/pymodules/python2.6/nova/utils.py:594
2011-05-18 09:40:51,937 DEBUG nova.utils [-] Running cmd (subprocess): cp /var/lib/nova/instances/_base/3fb61379 /var/lib/nova/instances/instance-0000000b/kernel from (pid=3041) execute /usr/lib/pymodules/python2.6/nova/utils.py:150
2011-05-18 09:40:51,958 DEBUG nova.utils [-] Attempting to grab semaphore "5d243800" for method "call_if_not_exists"... from (pid=3041) inner /usr/lib/pymodules/python2.6/nova/utils.py:594
2011-05-18 09:40:51,958 DEBUG nova.utils [-] Running cmd (subprocess): cp /var/lib/nova/instances/_base/5d243800 /var/lib/nova/instances/instance-0000000b/ramdisk from (pid=3041) execute /usr/lib/pymodules/python2.6/nova/utils.py:150
2011-05-18 09:40:51,995 DEBUG nova.utils [-] Attempting to grab semaphore "06febd44" for method "call_if_not_exists"... from (pid=3041) inner /usr/lib/pymodules/python2.6/nova/utils.py:594
2011-05-18 09:40:51,995 DEBUG nova.utils [-] Running cmd (subprocess): qemu-img create -f qcow2 -o cluster_size=2M,backing_file=/var/lib/nova/instances/_base/06febd44 /var/lib/nova/instances/instance-0000000b/disk from (pid=3041) execute /usr/lib/pymodules/python2.6/nova/utils.py:150
2011-05-18 09:40:52,061 DEBUG nova.utils [-] Attempting to grab semaphore "local_20" for method "call_if_not_exists"... from (pid=3041) inner /usr/lib/pymodules/python2.6/nova/utils.py:594
2011-05-18 09:40:52,062 DEBUG nova.utils [-] Running cmd (subprocess): truncate /var/lib/nova/instances/_base/local_20 -s 20G from (pid=3041) execute /usr/lib/pymodules/python2.6/nova/utils.py:150
2011-05-18 09:40:52,084 DEBUG nova.utils [-] Running cmd (subprocess): qemu-img create -f qcow2 -o cluster_size=2M,backing_file=/var/lib/nova/instances/_base/local_20 /var/lib/nova/instances/instance-0000000b/disk.local from (pid=3041) execute /usr/lib/pymodules/python2.6/nova/utils.py:150
2011-05-18 09:40:52,115 INFO nova.virt.libvirt_conn [-] instance instance-0000000b: injecting key into image 117357892
2011-05-18 09:40:52,128 DEBUG nova.utils [-] Running cmd (subprocess): sudo qemu-nbd -c /dev/nbd15 /var/lib/nova/instances/instance-0000000b/disk from (pid=3041) execute /usr/lib/pymodules/python2.6/nova/utils.py:150
2011-05-18 09:41:02,152 WARNING nova.virt.libvirt_conn [-] instance instance-0000000b: ignoring error injecting data into image 117357892 (nbd device /dev/nbd15 did not show up)
2011-05-18 09:41:02,772 ERROR nova.exception [-] Uncaught exception
(nova.exception): TRACE: Traceback (most recent call last):
(nova.exception): TRACE: File "/usr/lib/pymodules/python2.6/nova/exception.py", line 120, in _wrap
(nova.exception): TRACE: return f(*args, **kw)
(nova.exception): TRACE: File "/usr/lib/pymodules/python2.6/nova/virt/libvirt_conn.py", line 617, in spawn
(nova.exception): TRACE: domain = self._create_new_domain(xml)
(nova.exception): TRACE: File "/usr/lib/pymodules/python2.6/nova/virt/libvirt_conn.py", line 1079, in _create_new_domain
(nova.exception): TRACE: domain.createWithFlags(launch_flags)
(nova.exception): TRACE: File "/usr/lib/python2.6/dist-packages/libvirt.py", line 369, in createWithFlags
(nova.exception): TRACE: if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
(nova.exception): TRACE: libvirtError: internal error Process exited while reading console log output: char device redirected to /dev/pts/5
(nova.exception): TRACE: qemu: could not load kernel '/var/lib/nova/instances/instance-0000000b/kernel': Success
(nova.exception): TRACE:
(nova.exception): TRACE:
2011-05-18 09:41:02,774 ERROR nova.compute.manager [I0STKANX3ZERKYNV6ZPJ test cloudsii] Instance '11' failed to spawn. Is virtualization enabled in the BIOS?
(nova.compute.manager): TRACE: Traceback (most recent call last):
(nova.compute.manager): TRACE: File "/usr/lib/pymodules/python2.6/nova/compute/manager.py", line 234, in run_instance
(nova.compute.manager): TRACE: self.driver.spawn(instance_ref)
(nova.compute.manager): TRACE: File "/usr/lib/pymodules/python2.6/nova/exception.py", line 126, in _wrap
(nova.compute.manager): TRACE: raise Error(str(e))
(nova.compute.manager): TRACE: Error: internal error Process exited while reading console log output: char device redirected to /dev/pts/5
(nova.compute.manager): TRACE: qemu: could not load kernel '/var/lib/nova/instances/instance-0000000b/kernel': Success
(nova.compute.manager): TRACE:
(nova.compute.manager): TRACE:

From lsmod:
kvm_intel 38146 0
kvm 214088 1 kvm_intel

Question information

Language:
English
Status:
Answered
For:
OpenStack Compute (nova)
Assignee:
No assignee
Revision history for this message
AsLY (asly) said :
#1

I solved the first error by enabling nbd, but now I get a new error:
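
For reference, "enabling nbd" here just means loading the kernel module, roughly like this (max_part=16 is the value commonly suggested for nova; the /etc/modules line is only needed to make it persistent across reboots):

sudo modprobe nbd max_part=16                      # load the network block device driver
echo "nbd max_part=16" | sudo tee -a /etc/modules  # optional: load it at every boot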

nova-compute log:
--------------------------
2011-05-18 10:58:37,830 DEBUG nova.utils [-] Running cmd (subprocess): sudo tune2fs -c 0 -i 0 /dev/nbd14 from (pid=2189) execute /usr/lib/pymodules/python2.6/nova/utils.py:150
2011-05-18 10:58:37,859 DEBUG nova.utils [-] Result was 1 from (pid=2189) execute /usr/lib/pymodules/python2.6/nova/utils.py:166
2011-05-18 10:58:37,860 DEBUG nova.utils [-] Running cmd (subprocess): sudo qemu-nbd -d /dev/nbd14 from (pid=2189) execute /usr/lib/pymodules/python2.6/nova/utils.py:150
2011-05-18 10:58:37,871 WARNING nova.virt.libvirt_conn [-] instance instance-0000000e: ignoring error injecting data into image 117357892 (Unexpected error while running command.
Command: sudo tune2fs -c 0 -i 0 /dev/nbd14
Exit code: 1
Stdout: 'tune2fs 1.41.12 (17-May-2010)\n'
Stderr: "tune2fs: Argument invalide lors de la tentative d'ouverture de /dev/nbd14\nImpossible de trouver un superbloc de syst\xc3\xa8me de fichiers valide.\n")
2011-05-18 10:58:38,434 ERROR nova.exception [-] Uncaught exception
(nova.exception): TRACE: Traceback (most recent call last):
(nova.exception): TRACE: File "/usr/lib/pymodules/python2.6/nova/exception.py", line 120, in _wrap
(nova.exception): TRACE: return f(*args, **kw)
(nova.exception): TRACE: File "/usr/lib/pymodules/python2.6/nova/virt/libvirt_conn.py", line 617, in spawn
(nova.exception): TRACE: domain = self._create_new_domain(xml)
(nova.exception): TRACE: File "/usr/lib/pymodules/python2.6/nova/virt/libvirt_conn.py", line 1079, in _create_new_domain
(nova.exception): TRACE: domain.createWithFlags(launch_flags)
(nova.exception): TRACE: File "/usr/lib/python2.6/dist-packages/libvirt.py", line 369, in createWithFlags
(nova.exception): TRACE: if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
(nova.exception): TRACE: libvirtError: internal error Process exited while reading console log output: char device redirected to /dev/pts/4
(nova.exception): TRACE: qemu: could not load kernel '/var/lib/nova/instances/instance-0000000e/kernel': Success
(nova.exception): TRACE:
(nova.exception): TRACE:

Translation of the French message: tune2fs: Invalid argument while trying to open /dev/nbd14. Unable to find a valid filesystem superblock.
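
To see what tune2fs is actually complaining about, the disk image can be attached and inspected by hand (the paths and nbd device are just examples; pick a free /dev/nbdX):

sudo qemu-nbd -c /dev/nbd15 /var/lib/nova/instances/instance-0000000e/disk
sudo file -s /dev/nbd15     # shows whether the device contains a recognizable filesystem
sudo qemu-nbd -d /dev/nbd15 # detach when done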

Revision history for this message
guanxiaohua2k6 (guanxiaohua2k6) said :
#2

I think it failed to download the kernel to /var/lib/nova/instances/instance-0000000e/kernel. Please check whether the file exists and whether its size is correct.
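
For example, compare the file on the compute node against what glance reports (the image id 27 is only an illustration; use the kernel image's real id):

ls -l /var/lib/nova/instances/instance-0000000e/kernel
glance show 27    # the Size field should match the file on disk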

Revision history for this message
AsLY (asly) said :
#3

The file exists, but it's empty.
------------------
#ls -l /var/lib/nova/instances/instance-0000000e/
-rw-r----- 1 root root 0 May 18 10:58 console.log
-rw-r--r-- 1 root root 6291456 May 18 10:58 disk
-rw-r--r-- 1 root root 8388608 May 18 10:58 disk.local
-rw-r--r-- 1 root root 0 May 18 10:58 kernel
-rw-r--r-- 1 nova nogroup 1917 May 18 10:58 libvirt.xml
-rw-r--r-- 1 root root 0 May 18 10:58 ramdisk

Revision history for this message
Jim Wichelman (jim-wichelman) said :
#4

I am seeing this same problem (zero sized kernel). This works on one compute node but not on a second node. I am using Glance as my image store. Has anybody identified a root cause for this?

Revision history for this message
Vish Ishaya (vishvananda) said :
#5

Generally it is due to missing settings for glance or wrong versions.
--glance_api_servers=host:port
needs to be set, or it will default to localhost and it will only get images on the machine that is running glance. You should also clear out the 0-sized images in _base.
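
For example, in each compute node's nova flag file (the address is a placeholder for the host running glance-api), followed by clearing the bad cache entries:

--glance_api_servers=192.168.1.10:9292

find /var/lib/nova/instances/_base -size 0 -delete    # remove the 0-sized cached images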

Vish

Revision history for this message
Jim Wichelman (jim-wichelman) said :
#6

I have been using the --glance_api_servers flag.
There were some zero sized items in _base, so I nuked them all.
Yet when I try to create an instance I still get the same problem (a zero sized item gets created in _base and no kernel file appears at all).
I think the problem is with glance. I am seeing this in the glance log:

2011-08-05 20:19:16 DEBUG [eventlet.wsgi.server] 15.184.103.107 - - [05/Aug/2011 20:19:16] "GET /v1/images/27 HTTP/1.1" 200 0 1.031617
2011-08-05 20:23:38 DEBUG [glance.api.middleware.version_negotiation] Processing request: HEAD /v1/images/28 Accept:
2011-08-05 20:23:38 DEBUG [glance.api.middleware.version_negotiation] Matched versioned URI. Version: 1.0
2011-08-05 20:23:38 DEBUG [routes.middleware] Matched HEAD /images/28
2011-08-05 20:23:38 DEBUG [routes.middleware] Route path: '/images/{id}', defaults: {'action': u'meta', 'controller': <glance.common.wsgi.Resource object at 0x34c7210>}
2011-08-05 20:23:38 DEBUG [routes.middleware] Match dict: {'action': u'meta', 'controller': <glance.common.wsgi.Resource object at 0x34c7210>, 'id': u'28'}
2011-08-05 20:23:38 DEBUG [eventlet.wsgi.server] 15.184.103.106 - - [05/Aug/2011 20:23:38] "HEAD /v1/images/28 HTTP/1.1" 200 948 0.010306
2011-08-05 20:23:38 DEBUG [glance.api.middleware.version_negotiation] Processing request: HEAD /v1/images/28 Accept:
2011-08-05 20:23:38 DEBUG [glance.api.middleware.version_negotiation] Matched versioned URI. Version: 1.0
2011-08-05 20:23:38 DEBUG [routes.middleware] Matched HEAD /images/28
2011-08-05 20:23:38 DEBUG [routes.middleware] Route path: '/images/{id}', defaults: {'action': u'meta', 'controller': <glance.common.wsgi.Resource object at 0x34c7210>}
2011-08-05 20:23:38 DEBUG [routes.middleware] Match dict: {'action': u'meta', 'controller': <glance.common.wsgi.Resource object at 0x34c7210>, 'id': u'28'}
2011-08-05 20:23:38 DEBUG [eventlet.wsgi.server] 15.184.103.106 - - [05/Aug/2011 20:23:38] "HEAD /v1/images/28 HTTP/1.1" 200 948 0.009156
2011-08-05 20:23:38 DEBUG [glance.api.middleware.version_negotiation] Processing request: HEAD /v1/images/28 Accept:
2011-08-05 20:23:38 DEBUG [glance.api.middleware.version_negotiation] Matched versioned URI. Version: 1.0
2011-08-05 20:23:38 DEBUG [routes.middleware] Matched HEAD /images/28
2011-08-05 20:23:38 DEBUG [routes.middleware] Route path: '/images/{id}', defaults: {'action': u'meta', 'controller': <glance.common.wsgi.Resource object at 0x34c7210>}
2011-08-05 20:23:38 DEBUG [routes.middleware] Match dict: {'action': u'meta', 'controller': <glance.common.wsgi.Resource object at 0x34c7210>, 'id': u'28'}
2011-08-05 20:23:38 DEBUG [eventlet.wsgi.server] 15.184.103.106 - - [05/Aug/2011 20:23:38] "HEAD /v1/images/28 HTTP/1.1" 200 948 0.009843
2011-08-05 20:23:38 DEBUG [glance.api.middleware.version_negotiation] Processing request: HEAD /v1/images/27 Accept:
2011-08-05 20:23:38 DEBUG [glance.api.middleware.version_negotiation] Matched versioned URI. Version: 1.0
2011-08-05 20:23:38 DEBUG [routes.middleware] Matched HEAD /images/27
2011-08-05 20:23:38 DEBUG [routes.middleware] Route path: '/images/{id}', defaults: {'action': u'meta', 'controller': <glance.common.wsgi.Resource object at 0x34c7210>}
2011-08-05 20:23:38 DEBUG [routes.middleware] Match dict: {'action': u'meta', 'controller': <glance.common.wsgi.Resource object at 0x34c7210>, 'id': u'27'}
2011-08-05 20:23:38 DEBUG [eventlet.wsgi.server] 15.184.103.106 - - [05/Aug/2011 20:23:38] "HEAD /v1/images/27 HTTP/1.1" 200 909 0.008861
2011-08-05 20:23:39 DEBUG [glance.api.middleware.version_negotiation] Processing request: GET /v1/images/27 Accept:
2011-08-05 20:23:39 DEBUG [glance.api.middleware.version_negotiation] Matched versioned URI. Version: 1.0
2011-08-05 20:23:39 DEBUG [routes.middleware] Matched GET /images/27
2011-08-05 20:23:39 DEBUG [routes.middleware] Route path: '/images/:(id)', defaults: {'action': u'show', 'controller': <glance.common.wsgi.Resource object at 0x34c7210>}
2011-08-05 20:23:39 DEBUG [routes.middleware] Match dict: {'action': u'show', 'controller': <glance.common.wsgi.Resource object at 0x34c7210>, 'id': u'27'}
2011-08-05 20:23:39 DEBUG [glance.store.swift] stuart authurl http://15.184.3.19:8080/auth/v1.0
2011-08-05 20:23:40 DEBUG [root] HTTP PERF: 0.00929 seconds to GET 15.184.3.19:8080 /auth/v1.0)
2011-08-05 20:23:41 DEBUG [root] HTTP PERF: 0.00868 seconds to GET 15.184.3.19:8080 /auth/v1.0)
2011-08-05 20:23:41 DEBUG [eventlet.wsgi.server] Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/eventlet/wsgi.py", line 347, in handle_one_response
    for data in result:
  File "/usr/lib/pymodules/python2.7/glance/api/v1/images.py", line 184, in image_iterator
    options=self.options)
  File "/usr/lib/pymodules/python2.7/glance/store/__init__.py", line 82, in get_from_backend
    return backend_class.get(parsed_uri, **kwargs)
  File "/usr/lib/pymodules/python2.7/glance/store/swift.py", line 74, in get
    obj_size = int(resp_headers['content-length'])
UnboundLocalError: local variable 'resp_headers' referenced before assignment
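
That UnboundLocalError suggests the GET from swift failed before resp_headers was ever assigned, so the real error (most likely the swift connection or auth from this node) is being swallowed. A quick way to test the swift side by hand, using the auth URL from the log (the credentials are placeholders):

curl -v -H 'X-Auth-User: account:user' -H 'X-Auth-Key: password' http://15.184.3.19:8080/auth/v1.0

A healthy auth endpoint should return 200 with X-Storage-Url and X-Auth-Token headers.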

Revision history for this message
Vish Ishaya (vishvananda) said :
#7

Do you have the same version of glance on both machines?
dpkg -l | grep glance

Revision history for this message
Jim Wichelman (jim-wichelman) said :
#8

There is only one glance server (and two compute nodes). But the python-glance package on the compute nodes is different:

  Node that works has: ii python-glance 2011.3~d1~20110531.139-0ubuntu0ppa1~natty1
  Node that does not work has: ii python-glance 2011.3-d2-hp1

I'll run this one to ground and post an update (probably Monday).
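
If the version skew is the culprit, installing the same build as the working node (version string copied from the dpkg output above) should confirm it:

sudo apt-get install python-glance=2011.3~d1~20110531.139-0ubuntu0ppa1~natty1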
