VMware with nova-compute issue: br100 not found

Asked by pramod

While creating an instance I get the error below. I am using ESX 4.1 as the hypervisor and have not been able to launch a single instance successfully.
Please help.

2012-07-04 09:09:07 DEBUG nova.manager [-] Running periodic task ComputeManager._publish_service_capabilities from (pid=5854) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:152
2012-07-04 09:09:07 DEBUG nova.manager [-] Running periodic task ComputeManager._poll_rescued_instances from (pid=5854) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:152
2012-07-04 09:09:07 DEBUG nova.manager [-] Skipping ComputeManager._sync_power_states, 1 ticks left until next run from (pid=5854) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:147
2012-07-04 09:09:07 DEBUG nova.manager [-] Running periodic task ComputeManager._poll_bandwidth_usage from (pid=5854) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:152
2012-07-04 09:09:07 DEBUG nova.manager [-] Running periodic task ComputeManager.update_available_resource from (pid=5854) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:152
2012-07-04 09:09:07 DEBUG nova.manager [-] Running periodic task ComputeManager._poll_rebooting_instances from (pid=5854) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:152
2012-07-04 09:09:07 DEBUG nova.manager [-] Skipping ComputeManager._cleanup_running_deleted_instances, 22 ticks left until next run from (pid=5854) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:147
2012-07-04 09:09:07 DEBUG nova.manager [-] Running periodic task ComputeManager._heal_instance_info_cache from (pid=5854) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:152
2012-07-04 09:09:07 DEBUG nova.rpc.amqp [-] Making asynchronous call on network ... from (pid=5854) multicall /usr/lib/python2.7/dist-packages/nova/rpc/amqp.py:321
2012-07-04 09:09:07 DEBUG nova.rpc.amqp [-] MSG_ID is 5d9fc12781a74be1b29db816e489ae53 from (pid=5854) multicall /usr/lib/python2.7/dist-packages/nova/rpc/amqp.py:324
2012-07-04 09:09:07 DEBUG nova.compute.manager [-] Updated the info_cache for instance df7c43c1-ba3b-4c4b-9e0f-20606d20deb8 from (pid=5854) _heal_instance_info_cache /usr/lib/python2.7/dist-packages/nova/compute/manager.py:2227
2012-07-04 09:09:07 DEBUG nova.manager [-] Skipping ComputeManager._run_image_cache_manager_pass, 15 ticks left until next run from (pid=5854) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:147
2012-07-04 09:09:07 DEBUG nova.manager [-] Running periodic task ComputeManager._reclaim_queued_deletes from (pid=5854) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:152
2012-07-04 09:09:07 DEBUG nova.compute.manager [-] FLAGS.reclaim_instance_interval <= 0, skipping... from (pid=5854) _reclaim_queued_deletes /usr/lib/python2.7/dist-packages/nova/compute/manager.py:2380
2012-07-04 09:09:07 DEBUG nova.manager [-] Running periodic task ComputeManager._report_driver_status from (pid=5854) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:152
2012-07-04 09:09:07 DEBUG nova.manager [-] Running periodic task ComputeManager._poll_unconfirmed_resizes from (pid=5854) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:152
2012-07-04 09:09:27 DEBUG nova.rpc.amqp [-] received {u'_context_roles': [u'admin', u'Member'], u'_context_request_id': u'req-2897fe85-5bfd-468d-bbb8-fdce336ff964', u'_context_read_deleted': u'no', u'args': {u'instance_uuid': u'df7c43c1-ba3b-4c4b-9e0f-20606d20deb8'}, u'_context_auth_token': '<SANITIZED>', u'_context_is_admin': True, u'_context_project_id': u'5025a843d0684553922f1c20ae64550a', u'_context_timestamp': u'2012-07-04T03:39:27.678802', u'_context_user_id': u'131cf2e26c8145ec9cdedf95e9a3fbca', u'method': u'terminate_instance', u'_context_remote_address': u'192.168.230.74'} from (pid=5854) _safe_log /usr/lib/python2.7/dist-packages/nova/rpc/common.py:160
2012-07-04 09:09:27 DEBUG nova.rpc.amqp [req-2897fe85-5bfd-468d-bbb8-fdce336ff964 131cf2e26c8145ec9cdedf95e9a3fbca 5025a843d0684553922f1c20ae64550a] unpacked context: {'user_id': u'131cf2e26c8145ec9cdedf95e9a3fbca', 'roles': [u'admin', u'Member'], 'timestamp': '2012-07-04T03:39:27.678802', 'auth_token': '<SANITIZED>', 'remote_address': u'192.168.230.74', 'is_admin': True, 'request_id': u'req-2897fe85-5bfd-468d-bbb8-fdce336ff964', 'project_id': u'5025a843d0684553922f1c20ae64550a', 'read_deleted': u'no'} from (pid=5854) _safe_log /usr/lib/python2.7/dist-packages/nova/rpc/common.py:160
2012-07-04 09:09:27 INFO nova.compute.manager [req-2897fe85-5bfd-468d-bbb8-fdce336ff964 131cf2e26c8145ec9cdedf95e9a3fbca 5025a843d0684553922f1c20ae64550a] check_instance_lock: decorating: |<function terminate_instance at 0x22cbed8>|
2012-07-04 09:09:27 INFO nova.compute.manager [req-2897fe85-5bfd-468d-bbb8-fdce336ff964 131cf2e26c8145ec9cdedf95e9a3fbca 5025a843d0684553922f1c20ae64550a] check_instance_lock: arguments: |<nova.compute.manager.ComputeManager object at 0x7f4cac659c10>| |<nova.rpc.amqp.RpcContext object at 0xbc1c5d0>| |df7c43c1-ba3b-4c4b-9e0f-20606d20deb8|
2012-07-04 09:09:27 DEBUG nova.compute.manager [req-2897fe85-5bfd-468d-bbb8-fdce336ff964 131cf2e26c8145ec9cdedf95e9a3fbca 5025a843d0684553922f1c20ae64550a] instance df7c43c1-ba3b-4c4b-9e0f-20606d20deb8: getting locked state from (pid=5854) get_lock /usr/lib/python2.7/dist-packages/nova/compute/manager.py:1597
2012-07-04 09:09:27 INFO nova.compute.manager [req-2897fe85-5bfd-468d-bbb8-fdce336ff964 131cf2e26c8145ec9cdedf95e9a3fbca 5025a843d0684553922f1c20ae64550a] check_instance_lock: locked: |False|
2012-07-04 09:09:27 INFO nova.compute.manager [req-2897fe85-5bfd-468d-bbb8-fdce336ff964 131cf2e26c8145ec9cdedf95e9a3fbca 5025a843d0684553922f1c20ae64550a] check_instance_lock: admin: |True|
2012-07-04 09:09:27 INFO nova.compute.manager [req-2897fe85-5bfd-468d-bbb8-fdce336ff964 131cf2e26c8145ec9cdedf95e9a3fbca 5025a843d0684553922f1c20ae64550a] check_instance_lock: executing: |<function terminate_instance at 0x22cbed8>|
2012-07-04 09:09:27 DEBUG nova.utils [req-2897fe85-5bfd-468d-bbb8-fdce336ff964 131cf2e26c8145ec9cdedf95e9a3fbca 5025a843d0684553922f1c20ae64550a] Attempting to grab semaphore "df7c43c1-ba3b-4c4b-9e0f-20606d20deb8" for method "do_terminate_instance"... from (pid=5854) inner /usr/lib/python2.7/dist-packages/nova/utils.py:927
2012-07-04 09:09:27 DEBUG nova.utils [req-2897fe85-5bfd-468d-bbb8-fdce336ff964 131cf2e26c8145ec9cdedf95e9a3fbca 5025a843d0684553922f1c20ae64550a] Got semaphore "df7c43c1-ba3b-4c4b-9e0f-20606d20deb8" for method "do_terminate_instance"... from (pid=5854) inner /usr/lib/python2.7/dist-packages/nova/utils.py:931
2012-07-04 09:09:27 WARNING nova.utils [req-2897fe85-5bfd-468d-bbb8-fdce336ff964 131cf2e26c8145ec9cdedf95e9a3fbca 5025a843d0684553922f1c20ae64550a] /usr/lib/python2.7/dist-packages/sqlalchemy/sql/expression.py:1869: SAWarning: The IN-predicate on "bw_usage_cache.mac" was invoked with an empty sequence. This results in a contradiction, which nonetheless can be expensive to evaluate. Consider alternative strategies for improved performance.
  return self._in_impl(operators.in_op, operators.notin_op, other)

2012-07-04 09:09:27 AUDIT nova.compute.manager [req-2897fe85-5bfd-468d-bbb8-fdce336ff964 131cf2e26c8145ec9cdedf95e9a3fbca 5025a843d0684553922f1c20ae64550a] [instance: df7c43c1-ba3b-4c4b-9e0f-20606d20deb8] Terminating instance
2012-07-04 09:09:27 DEBUG nova.rpc.amqp [req-2897fe85-5bfd-468d-bbb8-fdce336ff964 131cf2e26c8145ec9cdedf95e9a3fbca 5025a843d0684553922f1c20ae64550a] Making asynchronous call on network ... from (pid=5854) multicall /usr/lib/python2.7/dist-packages/nova/rpc/amqp.py:321
2012-07-04 09:09:27 DEBUG nova.rpc.amqp [req-2897fe85-5bfd-468d-bbb8-fdce336ff964 131cf2e26c8145ec9cdedf95e9a3fbca 5025a843d0684553922f1c20ae64550a] MSG_ID is c9b0d811d60040d29709dd5539e68f5b from (pid=5854) multicall /usr/lib/python2.7/dist-packages/nova/rpc/amqp.py:324
2012-07-04 09:09:28 DEBUG nova.compute.manager [req-2897fe85-5bfd-468d-bbb8-fdce336ff964 131cf2e26c8145ec9cdedf95e9a3fbca 5025a843d0684553922f1c20ae64550a] [instance: df7c43c1-ba3b-4c4b-9e0f-20606d20deb8] Deallocating network for instance from (pid=5854) _deallocate_network /usr/lib/python2.7/dist-packages/nova/compute/manager.py:616
2012-07-04 09:09:28 DEBUG nova.rpc.amqp [req-2897fe85-5bfd-468d-bbb8-fdce336ff964 131cf2e26c8145ec9cdedf95e9a3fbca 5025a843d0684553922f1c20ae64550a] Making asynchronous cast on network... from (pid=5854) cast /usr/lib/python2.7/dist-packages/nova/rpc/amqp.py:346
2012-07-04 09:09:28 DEBUG nova.virt.vmwareapi.vmops [req-2897fe85-5bfd-468d-bbb8-fdce336ff964 131cf2e26c8145ec9cdedf95e9a3fbca 5025a843d0684553922f1c20ae64550a] instance - instance-00000001 not present from (pid=5854) destroy /usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/vmops.py:548
2012-07-04 09:10:07 DEBUG nova.manager [-] Running periodic task ComputeManager._publish_service_capabilities from (pid=5854) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:152
2012-07-04 09:10:07 DEBUG nova.manager [-] Running periodic task ComputeManager._poll_rescued_instances from (pid=5854) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:152
2012-07-04 09:10:07 DEBUG nova.manager [-] Running periodic task ComputeManager._sync_power_states from (pid=5854) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:152
2012-07-04 09:10:07 DEBUG nova.virt.vmwareapi.vmops [-] Getting list of instances from (pid=5854) list_instances /usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/vmops.py:66
2012-07-04 09:10:07 DEBUG nova.virt.vmwareapi.vmops [-] Got total of 2 instances from (pid=5854) list_instances /usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/vmops.py:82
2012-07-04 09:10:07 WARNING nova.compute.manager [-] Found 0 in the database and 2 on the hypervisor.
2012-07-04 09:10:07 DEBUG nova.manager [-] Running periodic task ComputeManager._poll_bandwidth_usage from (pid=5854) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:152
2012-07-04 09:10:07 DEBUG nova.manager [-] Running periodic task ComputeManager.update_available_resource from (pid=5854) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:152
2012-07-04 09:10:07 DEBUG nova.manager [-] Running periodic task ComputeManager._poll_rebooting_instances from (pid=5854) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:152
2012-07-04 09:10:07 DEBUG nova.manager [-] Skipping ComputeManager._cleanup_running_deleted_instances, 21 ticks left until next run from (pid=5854) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:147
2012-07-04 09:10:07 DEBUG nova.manager [-] Running periodic task ComputeManager._heal_instance_info_cache from (pid=5854) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:152
2012-07-04 09:10:07 DEBUG nova.manager [-] Skipping ComputeManager._run_image_cache_manager_pass, 14 ticks left until next run from (pid=5854) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:147
2012-07-04 09:10:07 DEBUG nova.manager [-] Running periodic task ComputeManager._reclaim_queued_deletes from (pid=5854) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:152
2012-07-04 09:10:07 DEBUG nova.compute.manager [-] FLAGS.reclaim_instance_interval <= 0, skipping... from (pid=5854) _reclaim_queued_deletes /usr/lib/python2.7/dist-packages/nova/compute/manager.py:2380
2012-07-04 09:10:07 DEBUG nova.manager [-] Running periodic task ComputeManager._report_driver_status from (pid=5854) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:152
2012-07-04 09:10:07 INFO nova.compute.manager [-] Updating host status
2012-07-04 09:10:07 ERROR nova.manager [-] Error during ComputeManager._report_driver_status:
2012-07-04 09:10:07 TRACE nova.manager Traceback (most recent call last):
2012-07-04 09:10:07 TRACE nova.manager File "/usr/lib/python2.7/dist-packages/nova/manager.py", line 155, in periodic_tasks
2012-07-04 09:10:07 TRACE nova.manager task(self, context)
2012-07-04 09:10:07 TRACE nova.manager File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2281, in _report_driver_status
2012-07-04 09:10:07 TRACE nova.manager self.driver.get_host_stats(refresh=True))
2012-07-04 09:10:07 TRACE nova.manager File "/usr/lib/python2.7/dist-packages/nova/virt/driver.py", line 576, in get_host_stats
2012-07-04 09:10:07 TRACE nova.manager raise NotImplementedError()
2012-07-04 09:10:07 TRACE nova.manager NotImplementedError
2012-07-04 09:10:07 TRACE nova.manager
2012-07-04 09:10:07 DEBUG nova.manager [-] Running periodic task ComputeManager._poll_unconfirmed_resizes from (pid=5854) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:152
2012-07-04 09:10:16 DEBUG nova.rpc.amqp [-] received {u'_context_roles': [u'admin', u'Member'], u'_context_request_id': u'req-e7cbfa86-dedc-43ba-a061-5575a85272c1', u'_context_read_deleted': u'no', u'args': {u'instance_uuid': u'5443f83c-88a8-4169-80a9-4cecd6170f21', u'is_first_time': True, u'filter_properties': {u'scheduler_hints': {}}, u'admin_password': '<SANITIZED>', u'injected_files': [], u'requested_networks': None}, u'_context_auth_token': '<SANITIZED>', u'_context_is_admin': True, u'_context_project_id': u'5025a843d0684553922f1c20ae64550a', u'_context_timestamp': u'2012-07-04T03:40:16.032733', u'_context_user_id': u'131cf2e26c8145ec9cdedf95e9a3fbca', u'method': u'run_instance', u'_context_remote_address': u'192.168.230.74'} from (pid=5854) _safe_log /usr/lib/python2.7/dist-packages/nova/rpc/common.py:160
2012-07-04 09:10:16 DEBUG nova.rpc.amqp [req-e7cbfa86-dedc-43ba-a061-5575a85272c1 131cf2e26c8145ec9cdedf95e9a3fbca 5025a843d0684553922f1c20ae64550a] unpacked context: {'user_id': u'131cf2e26c8145ec9cdedf95e9a3fbca', 'roles': [u'admin', u'Member'], 'timestamp': '2012-07-04T03:40:16.032733', 'auth_token': '<SANITIZED>', 'remote_address': u'192.168.230.74', 'is_admin': True, 'request_id': u'req-e7cbfa86-dedc-43ba-a061-5575a85272c1', 'project_id': u'5025a843d0684553922f1c20ae64550a', 'read_deleted': u'no'} from (pid=5854) _safe_log /usr/lib/python2.7/dist-packages/nova/rpc/common.py:160
2012-07-04 09:10:16 DEBUG nova.utils [req-e7cbfa86-dedc-43ba-a061-5575a85272c1 131cf2e26c8145ec9cdedf95e9a3fbca 5025a843d0684553922f1c20ae64550a] Attempting to grab semaphore "5443f83c-88a8-4169-80a9-4cecd6170f21" for method "do_run_instance"... from (pid=5854) inner /usr/lib/python2.7/dist-packages/nova/utils.py:927
2012-07-04 09:10:16 DEBUG nova.utils [req-e7cbfa86-dedc-43ba-a061-5575a85272c1 131cf2e26c8145ec9cdedf95e9a3fbca 5025a843d0684553922f1c20ae64550a] Got semaphore "5443f83c-88a8-4169-80a9-4cecd6170f21" for method "do_run_instance"... from (pid=5854) inner /usr/lib/python2.7/dist-packages/nova/utils.py:931
2012-07-04 09:10:16 DEBUG nova.virt.vmwareapi.vmops [req-e7cbfa86-dedc-43ba-a061-5575a85272c1 131cf2e26c8145ec9cdedf95e9a3fbca 5025a843d0684553922f1c20ae64550a] Getting list of instances from (pid=5854) list_instances /usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/vmops.py:66
2012-07-04 09:10:16 DEBUG nova.virt.vmwareapi.vmops [req-e7cbfa86-dedc-43ba-a061-5575a85272c1 131cf2e26c8145ec9cdedf95e9a3fbca 5025a843d0684553922f1c20ae64550a] Got total of 2 instances from (pid=5854) list_instances /usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/vmops.py:82
2012-07-04 09:10:16 DEBUG nova.compute.manager [req-e7cbfa86-dedc-43ba-a061-5575a85272c1 131cf2e26c8145ec9cdedf95e9a3fbca 5025a843d0684553922f1c20ae64550a] image_id=6f544711-0a90-468f-ad7d-0fce91e609f4, image_size_bytes=2666070016, allowed_size_bytes=21474836480 from (pid=5854) _check_image_size /usr/lib/python2.7/dist-packages/nova/compute/manager.py:525
2012-07-04 09:10:16 AUDIT nova.compute.manager [req-e7cbfa86-dedc-43ba-a061-5575a85272c1 131cf2e26c8145ec9cdedf95e9a3fbca 5025a843d0684553922f1c20ae64550a] [instance: 5443f83c-88a8-4169-80a9-4cecd6170f21] Starting instance...
2012-07-04 09:10:16 DEBUG nova.rpc.amqp [req-e7cbfa86-dedc-43ba-a061-5575a85272c1 131cf2e26c8145ec9cdedf95e9a3fbca 5025a843d0684553922f1c20ae64550a] Making asynchronous call on network ... from (pid=5854) multicall /usr/lib/python2.7/dist-packages/nova/rpc/amqp.py:321
2012-07-04 09:10:16 DEBUG nova.rpc.amqp [req-e7cbfa86-dedc-43ba-a061-5575a85272c1 131cf2e26c8145ec9cdedf95e9a3fbca 5025a843d0684553922f1c20ae64550a] MSG_ID is 60ebe5abc5ad4d029d1781780d3ec861 from (pid=5854) multicall /usr/lib/python2.7/dist-packages/nova/rpc/amqp.py:324
2012-07-04 09:10:18 DEBUG nova.compute.manager [req-e7cbfa86-dedc-43ba-a061-5575a85272c1 131cf2e26c8145ec9cdedf95e9a3fbca 5025a843d0684553922f1c20ae64550a] [instance: 5443f83c-88a8-4169-80a9-4cecd6170f21] Instance network_info: |[VIF({'network': Network({'bridge': u'br100', 'subnets': [Subnet({'ips': [FixedIP({'meta': {}, 'version': 4, 'type': u'fixed', 'floating_ips': [], 'address': u'192.168.230.34'})], 'version': 4, 'meta': {u'dhcp_server': u'192.168.230.33'}, 'dns': [IP({'meta': {}, 'version': 4, 'type': u'dns', 'address': u'8.8.4.4'})], 'routes': [], 'cidr': u'192.168.230.32/27', 'gateway': IP({'meta': {}, 'version': 4, 'type': u'gateway', 'address': u'192.168.230.33'})}), Subnet({'ips': [], 'version': None, 'meta': {u'dhcp_server': None}, 'dns': [], 'routes': [], 'cidr': None, 'gateway': IP({'meta': {}, 'version': None, 'type': u'gateway', 'address': None})})], 'meta': {u'tenant_id': None, u'should_create_bridge': True, u'bridge_interface': u'eth1'}, 'id': u'27532003-a3f0-4bb0-b47f-b6e37a62ffde', 'label': u'private'}), 'meta': {}, 'id': u'ac717eb9-9ee9-4395-af37-0b6e0a689748', 'address': u'fa:16:3e:10:e8:a2'})]| from (pid=5854) _allocate_network /usr/lib/python2.7/dist-packages/nova/compute/manager.py:566
2012-07-04 09:10:18 DEBUG nova.virt.vmwareapi.vmware_images [req-e7cbfa86-dedc-43ba-a061-5575a85272c1 131cf2e26c8145ec9cdedf95e9a3fbca 5025a843d0684553922f1c20ae64550a] Getting image size for the image 6f544711-0a90-468f-ad7d-0fce91e609f4 from (pid=5854) get_vmdk_size_and_properties /usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/vmware_images.py:139
2012-07-04 09:10:18 DEBUG nova.virt.vmwareapi.vmware_images [req-e7cbfa86-dedc-43ba-a061-5575a85272c1 131cf2e26c8145ec9cdedf95e9a3fbca 5025a843d0684553922f1c20ae64550a] Got image size of 2666070016 for the image 6f544711-0a90-468f-ad7d-0fce91e609f4 from (pid=5854) get_vmdk_size_and_properties /usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/vmware_images.py:144
2012-07-04 09:10:18 WARNING nova.virt.vmwareapi.network_utils [req-e7cbfa86-dedc-43ba-a061-5575a85272c1 131cf2e26c8145ec9cdedf95e9a3fbca 5025a843d0684553922f1c20ae64550a] [(ManagedObjectReference){
   value = "HaNetwork-VM Network"
   _type = "Network"
 }, (ManagedObjectReference){
   value = "HaNetwork-Test Network"
   _type = "Network"
 }]
2012-07-04 09:10:19 ERROR nova.compute.manager [req-e7cbfa86-dedc-43ba-a061-5575a85272c1 131cf2e26c8145ec9cdedf95e9a3fbca 5025a843d0684553922f1c20ae64550a] [instance: 5443f83c-88a8-4169-80a9-4cecd6170f21] Instance failed to spawn
2012-07-04 09:10:19 TRACE nova.compute.manager [instance: 5443f83c-88a8-4169-80a9-4cecd6170f21] Traceback (most recent call last):
2012-07-04 09:10:19 TRACE nova.compute.manager [instance: 5443f83c-88a8-4169-80a9-4cecd6170f21] File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 592, in _spawn
2012-07-04 09:10:19 TRACE nova.compute.manager [instance: 5443f83c-88a8-4169-80a9-4cecd6170f21] self._legacy_nw_info(network_info), block_device_info)
2012-07-04 09:10:19 TRACE nova.compute.manager [instance: 5443f83c-88a8-4169-80a9-4cecd6170f21] File "/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi_conn.py", line 135, in spawn
2012-07-04 09:10:19 TRACE nova.compute.manager [instance: 5443f83c-88a8-4169-80a9-4cecd6170f21] self._vmops.spawn(context, instance, image_meta, network_info)
2012-07-04 09:10:19 TRACE nova.compute.manager [instance: 5443f83c-88a8-4169-80a9-4cecd6170f21] File "/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/vmops.py", line 187, in spawn
2012-07-04 09:10:19 TRACE nova.compute.manager [instance: 5443f83c-88a8-4169-80a9-4cecd6170f21] vif_infos = _get_vif_infos()
2012-07-04 09:10:19 TRACE nova.compute.manager [instance: 5443f83c-88a8-4169-80a9-4cecd6170f21] File "/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/vmops.py", line 180, in _get_vif_infos
2012-07-04 09:10:19 TRACE nova.compute.manager [instance: 5443f83c-88a8-4169-80a9-4cecd6170f21] network_ref = _check_if_network_bridge_exists(network_name)
2012-07-04 09:10:19 TRACE nova.compute.manager [instance: 5443f83c-88a8-4169-80a9-4cecd6170f21] File "/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/vmops.py", line 168, in _check_if_network_bridge_exists
2012-07-04 09:10:19 TRACE nova.compute.manager [instance: 5443f83c-88a8-4169-80a9-4cecd6170f21] raise exception.NetworkNotFoundForBridge(bridge=network_name)
2012-07-04 09:10:19 TRACE nova.compute.manager [instance: 5443f83c-88a8-4169-80a9-4cecd6170f21] NetworkNotFoundForBridge: Network could not be found for bridge br100
2012-07-04 09:10:19 TRACE nova.compute.manager [instance: 5443f83c-88a8-4169-80a9-4cecd6170f21]
2012-07-04 09:10:19 DEBUG nova.compute.manager [req-e7cbfa86-dedc-43ba-a061-5575a85272c1 131cf2e26c8145ec9cdedf95e9a3fbca 5025a843d0684553922f1c20ae64550a] [instance: 5443f83c-88a8-4169-80a9-4cecd6170f21] Deallocating network for instance from (pid=5854) _deallocate_network /usr/lib/python2.7/dist-packages/nova/compute/manager.py:616
2012-07-04 09:10:19 DEBUG nova.rpc.amqp [req-e7cbfa86-dedc-43ba-a061-5575a85272c1 131cf2e26c8145ec9cdedf95e9a3fbca 5025a843d0684553922f1c20ae64550a] Making asynchronous cast on network... from (pid=5854) cast /usr/lib/python2.7/dist-packages/nova/rpc/amqp.py:346
2012-07-04 09:10:19 ERROR nova.rpc.amqp [req-e7cbfa86-dedc-43ba-a061-5575a85272c1 131cf2e26c8145ec9cdedf95e9a3fbca 5025a843d0684553922f1c20ae64550a] Exception during message handling
2012-07-04 09:10:19 TRACE nova.rpc.amqp Traceback (most recent call last):
2012-07-04 09:10:19 TRACE nova.rpc.amqp File "/usr/lib/python2.7/dist-packages/nova/rpc/amqp.py", line 252, in _process_data
2012-07-04 09:10:19 TRACE nova.rpc.amqp rval = node_func(context=ctxt, **node_args)
2012-07-04 09:10:19 TRACE nova.rpc.amqp File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 114, in wrapped
2012-07-04 09:10:19 TRACE nova.rpc.amqp return f(*args, **kw)
2012-07-04 09:10:19 TRACE nova.rpc.amqp File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 177, in decorated_function
2012-07-04 09:10:19 TRACE nova.rpc.amqp sys.exc_info())
2012-07-04 09:10:19 TRACE nova.rpc.amqp File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
2012-07-04 09:10:19 TRACE nova.rpc.amqp self.gen.next()
2012-07-04 09:10:19 TRACE nova.rpc.amqp File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 171, in decorated_function
2012-07-04 09:10:19 TRACE nova.rpc.amqp return function(self, context, instance_uuid, *args, **kwargs)
2012-07-04 09:10:19 TRACE nova.rpc.amqp File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 651, in run_instance
2012-07-04 09:10:19 TRACE nova.rpc.amqp do_run_instance()
2012-07-04 09:10:19 TRACE nova.rpc.amqp File "/usr/lib/python2.7/dist-packages/nova/utils.py", line 945, in inner
2012-07-04 09:10:19 TRACE nova.rpc.amqp retval = f(*args, **kwargs)
2012-07-04 09:10:19 TRACE nova.rpc.amqp File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 650, in do_run_instance
2012-07-04 09:10:19 TRACE nova.rpc.amqp self._run_instance(context, instance_uuid, **kwargs)
2012-07-04 09:10:19 TRACE nova.rpc.amqp File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 451, in _run_instance
2012-07-04 09:10:19 TRACE nova.rpc.amqp self._set_instance_error_state(context, instance_uuid)
2012-07-04 09:10:19 TRACE nova.rpc.amqp File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
2012-07-04 09:10:19 TRACE nova.rpc.amqp self.gen.next()
2012-07-04 09:10:19 TRACE nova.rpc.amqp File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 432, in _run_instance
2012-07-04 09:10:19 TRACE nova.rpc.amqp self._deallocate_network(context, instance)
2012-07-04 09:10:19 TRACE nova.rpc.amqp File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
2012-07-04 09:10:19 TRACE nova.rpc.amqp self.gen.next()
2012-07-04 09:10:19 TRACE nova.rpc.amqp File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 429, in _run_instance
2012-07-04 09:10:19 TRACE nova.rpc.amqp injected_files, admin_password)
2012-07-04 09:10:19 TRACE nova.rpc.amqp File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 592, in _spawn
2012-07-04 09:10:19 TRACE nova.rpc.amqp self._legacy_nw_info(network_info), block_device_info)
2012-07-04 09:10:19 TRACE nova.rpc.amqp File "/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi_conn.py", line 135, in spawn
2012-07-04 09:10:19 TRACE nova.rpc.amqp self._vmops.spawn(context, instance, image_meta, network_info)
2012-07-04 09:10:19 TRACE nova.rpc.amqp File "/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/vmops.py", line 187, in spawn
2012-07-04 09:10:19 TRACE nova.rpc.amqp vif_infos = _get_vif_infos()
2012-07-04 09:10:19 TRACE nova.rpc.amqp File "/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/vmops.py", line 180, in _get_vif_infos
2012-07-04 09:10:19 TRACE nova.rpc.amqp network_ref = _check_if_network_bridge_exists(network_name)
2012-07-04 09:10:19 TRACE nova.rpc.amqp File "/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/vmops.py", line 168, in _check_if_network_bridge_exists
2012-07-04 09:10:19 TRACE nova.rpc.amqp raise exception.NetworkNotFoundForBridge(bridge=network_name)
2012-07-04 09:10:19 TRACE nova.rpc.amqp NetworkNotFoundForBridge: Network could not be found for bridge br100
2012-07-04 09:10:19 TRACE nova.rpc.amqp

Question information

Language: English
Status: Solved
For: OpenStack Compute (nova)
Assignee: No assignee
Solved by: pramod
John Garbutt (johngarbutt) said (#1):

Looks like you have not configured the network settings correctly.

Certainly the defaults only work with libvirt-style deployments.

Are you OK sharing the details of your nova.conf file? Also, which interfaces and vSwitches are you planning to run your instance traffic on (the flat_network_bridge setting, if I remember correctly)?

Note that these flags need to be correct before you run nova-manage network create.
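
For example, a quick way to see which networks nova has already stored (a sketch; run it on the controller, and note the output format varies a little between releases):

sudo nova-manage network list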

Ivar Lazzaro (mmaleckk) said (#2):

I hit a similar issue too. It seems that br100 (which nova-compute translates into an ESXi port group name) must be part of vSwitch0 on the host.

I don't really know the reason, but this solved my problem; if it solves yours too then it could be a bug :) Let us know.
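
To see which port groups the host actually exposes, you can list the vSwitch layout from the service console (assuming classic ESX 4.1, which has one; ESXi does not):

esxcfg-vswitch -l

The port group names in that listing are what nova compares your flat_network_bridge value against.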

pramod (p-rathor) said (#3):

Here are the contents of my nova.conf:
________________________________________________________________________________
--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/usr/bin/nova-dhcpbridge
--logdir=/var/log/nova
--state_path=/var/lib/nova
--lock_path=/run/lock/nova
--allow_admin_api=true
--use_deprecated_auth=false
--auth_strategy=keystone
--scheduler_driver=nova.scheduler.simple.SimpleScheduler
--s3_host=192.168.230.74
--ec2_host=192.168.230.74
--rabbit_host=192.168.230.74
--cc_host=192.168.230.74
--nova_url=http://192.168.230.74:8774/v1.1/
--routing_source_ip=192.168.230.74
--glance_api_servers=192.168.230.74:9292
--image_service=nova.image.glance.GlanceImageService
--iscsi_ip_prefix=192.168.230
--sql_connection=mysql://novadbadmin:password@192.168.230.74/nova
--ec2_url=http://192.168.230.74:8773/services/Cloud
--keystone_ec2_url=http://192.168.230.74:5000/v2.0/ec2tokens
--api_paste_config=/etc/nova/api-paste.ini
#--libvirt_type=qemu
--libvirt_use_virtio_for_bridges=true
--start_guests_on_host_boot=true
--resume_guests_state_on_host_boot=true
# vnc specific configuration
--novnc_enabled=true
--novncproxy_base_url=http://192.168.249.27:6080/vnc_auto.html
--vncserver_proxyclient_address=192.168.249.27
--vncserver_listen=192.168.249.27
# network specific settings
--network_manager=nova.network.manager.FlatDHCPManager
--public_interface=eth0
--flat_interface=eth1
--flat_network_bridge=br100
--fixed_range=192.168.230.1/27
--floating_range=192.168.230.32/27
--network_size=32
--flat_network_dhcp_start=192.168.230.33
--flat_injected=False
--force_dhcp_release
--iscsi_helper=tgtadm
#--connection_type=libvirt
--connection_type=vmwareapi
--root_helper=sudo nova-rootwrap
--vmwareapi_host_ip=192.168.249.123
--vmwareapi_host_username=root
--vmwareapi_host_password=password
--vmwareapi_wsdl_loc=http://192.168.230.74:8080/wsdl/vim25/vimService.wsdl
#--vmwareapi_vlan_interface=vmnic0
--verbose
________________________________________________________________________________

Here is the network create command...

sudo nova-manage network create private --fixed_range_v4=192.168.249.32/27 --num_networks=1 --bridge=br100 --bridge_interface=eth1 --network_size=32
________________________________________________________________________________

Here is the ifconfig

br100 Link encap:Ethernet HWaddr 00:50:56:b9:00:05
          inet addr:192.168.249.33 Bcast:192.168.249.63 Mask:255.255.255.224
          inet6 addr: fe80::e87a:3bff:fead:9865/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
          RX packets:2026197 errors:0 dropped:5 overruns:0 frame:0
          TX packets:1028482 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1664765960 (1.6 GB) TX bytes:63150528 (63.1 MB)

eth0 Link encap:Ethernet HWaddr 00:50:56:b9:00:04
          inet addr:192.168.230.74 Bcast:192.168.230.255 Mask:255.255.255.0
          inet6 addr: fe80::250:56ff:feb9:4/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
          RX packets:10 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1563 (1.5 KB) TX bytes:586 (586.0 B)

eth1 Link encap:Ethernet HWaddr 00:50:56:b9:00:05
          inet addr:192.168.249.27 Bcast:192.168.249.255 Mask:255.255.255.0
          inet6 addr: fe80::250:56ff:feb9:5/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
          RX packets:2027049 errors:0 dropped:127 overruns:0 frame:0
          TX packets:1030411 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1693279902 (1.6 GB) TX bytes:63327727 (63.3 MB)

lo Link encap:Local Loopback
          inet addr:127.0.0.1 Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING MTU:16436 Metric:1
          RX packets:432970 errors:0 dropped:0 overruns:0 frame:0
          TX packets:432970 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:3000544305 (3.0 GB) TX bytes:3000544305 (3.0 GB)

virbr0 Link encap:Ethernet HWaddr be:7d:02:d0:d3:70
          inet addr:192.168.122.1 Bcast:192.168.122.255 Mask:255.255.255.0
          UP BROADCAST MULTICAST MTU:1500 Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
__________________________________________________________________________________________

Anything else? Please help.

pramod (p-rathor) said (#4):

Here is my bridge config

$brctl show

bridge name     bridge id               STP enabled     interfaces
br100           8000.005056b90005       no              eth1
virbr0          8000.000000000000       yes

John Garbutt (johngarbutt) said (#5):

flat_network_bridge should be the name of the vSwitch on the hypervisor that you want to carry the instance traffic.
flat_interface should be the interface on the VM running OpenStack that connects to the VM traffic network I mentioned above.

Don't create a bridge inside your VM running OpenStack yourself; that should happen automatically.

Once the network has been created with the wrong settings, you need to delete it (or just reset the whole nova DB) before your new (hopefully correct) flags will take effect. Then try to launch a VM again, and things should be a bit better.
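
For example, something along these lines (a sketch; the exact arguments for network delete differ between nova releases, so check nova-manage network --help first):

sudo nova-manage network list
sudo nova-manage network delete --fixed_range=192.168.249.32/27

Then recreate the network with the corrected flags in place and try booting again.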

I hope that helps.

pramod (p-rathor) said (#6):

Hypervisor: ESX 4.1

Device   Switch     IP
vmnic1   vSwitch1   230.XXX
vmnic0   vSwitch0   249.ZZZ

Network create command:

sudo nova-manage network create private --fixed_range_v4=192.168.249.32/27 --num_networks=1 --bridge=vSwitch0 --bridge_interface=eth1 --network_size=32

nova.conf

# network specific settings
--network_manager=nova.network.manager.FlatDHCPManager
--public_interface=eth0
--flat_interface=eth1
--flat_network_bridge=vSwitch0
--fixed_range=192.168.249.32/27
--floating_range=192.168.230.32/27

ifconfig

eth0 Link encap:Ethernet HWaddr 00:50:56:b9:00:04
          inet addr:192.168.230.74 Bcast:192.168.230.255 Mask:255.255.255.0
          inet6 addr: fe80::250:56ff:feb9:4/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
          RX packets:21080 errors:0 dropped:0 overruns:0 frame:0
          TX packets:21069 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:9862191 (9.8 MB) TX bytes:11808311 (11.8 MB)

root@osserver1:~# ifconfig eth1
eth1 Link encap:Ethernet HWaddr 00:50:56:b9:00:05
          inet6 addr: fe80::250:56ff:feb9:5/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
          RX packets:16628 errors:0 dropped:426 overruns:0 frame:0
          TX packets:3288 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1509996 (1.5 MB) TX bytes:1142934 (1.1 MB)

Still the same error: NetworkNotFoundForBridge: Network could not be found for bridge vSwitch0

Please check whether the above configuration is correct.

Ivar Lazzaro (mmaleckk) said (#7):

Here is my working configuration (almost matches the one above):

--vmwareapi_host_ip=10.10.10.25
--vmwareapi_host_username=root
--vmwareapi_host_password=password
--vmwareapi_wsdl_loc=http://10.10.10.123:8080/wsdl/vim25/vimService.wsdl
--flat_interface=eth1
--flat_network_bridge=br100

-------------------------------------------------------------------------------------------

sudo nova-manage network create private --fixed_range_v4=192.168.22.32/27 --num_networks=1 --bridge=br100 --bridge_interface=eth1 --network_size=32

-------------------------------------------------------------------------------------------

Before trying this you have to check one thing: a VMware port group named br100 must already exist on your hypervisor under vSwitch0.

To check that through vSphere, go to Configuration --> Networking and see whether the port group is there. If not, create it according to your physical network configuration.
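
If you prefer a command line over the vSphere client, the same check and fix can be done from the ESX 4.1 service console (a sketch, assuming classic ESX; on ESXi you would use vicfg-vswitch from the remote CLI instead):

esxcfg-vswitch -l
esxcfg-vswitch -A br100 vSwitch0

The first command lists the vSwitches and their existing port groups; the second adds a port group named br100 to vSwitch0. If your instance traffic is VLAN-tagged, set the VLAN afterwards with esxcfg-vswitch -v <vlan-id> -p br100 vSwitch0.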

pramod (p-rathor) said (#8):

Great!

I added a port group named br100 to the hypervisor and it's working now.

Thank you for all the support and help.

Regards,
Pramod

John Garbutt (johngarbutt) said (#9):

Cool, I plan to add this to the official docs soon.