Quantum (Folsom): source bridge is not set during VM booting

Asked by Mattia Peirano

Hi,

I'm trying to run Folsom with Quantum, but I've run into a problem. I set up a multi-host infrastructure with five hosts
and configured Quantum with the Open vSwitch plugin, using GRE tunneling and non-overlapping tenant networks.

Now I can create a network, but when I boot a virtual machine the instance obtains an IP address
from the specified network, yet enters the ERROR state.
The Quantum server is running correctly and all services on the compute nodes are OK.

Looking at the log, I found:

2012-11-14 13:38:17 DEBUG nova.virt.libvirt.config [req-be2dd92e-0fb1-4562-8380-51f67189d421 c4e9a7544f764840899b194e2fa10b42 bcb940507f644777849d1b3c47c6d6c0] Generated XML <domain type="kvm">
  <uuid>084dc46d-ad25-45d3-a6e5-c0b6ecddd1fa</uuid>
  <name>instance-00000001</name>
  <memory>2097152</memory>
  <vcpu>1</vcpu>
  <os>
    <type>hvm</type>
    <boot dev="hd"/>
  </os>
  <features>
    <acpi/>
  </features>
  <clock offset="utc">
    <timer name="pit" tickpolicy="delay"/>
    <timer name="rtc" tickpolicy="catchup"/>
  </clock>
  <cpu mode="host-model" match="exact"/>
  <devices>
    <disk type="file" device="disk">
      <driver name="qemu" type="qcow2" cache="none"/>
      <source file="/var/lib/nova/instances/instance-00000001/disk"/>
      <target bus="virtio" dev="vda"/>
    </disk>
    <disk type="file" device="disk">
      <driver name="qemu" type="qcow2" cache="none"/>
      <source file="/var/lib/nova/instances/instance-00000001/disk.local"/>
      <target bus="virtio" dev="vdb"/>
    </disk>
    <interface type="bridge">
      <mac address="fa:16:3e:4d:f6:1a"/>
      <model type="virtio"/>
      <source bridge=""/>
      <filterref filter="nova-instance-instance-00000001-fa163e4df61a">
        <parameter name="IP" value="10.1.1.3"/>
        <parameter name="DHCPSERVER" value="10.1.1.2"/>
        <parameter name="PROJNET" value="10.1.1.0"/>
        <parameter name="PROJMASK" value="255.255.255.0"/>
      </filterref>
    </interface>
    <serial type="file">
      <source path="/var/lib/nova/instances/instance-00000001/console.log"/>
    </serial>
    <serial type="pty"/>
    <input type="tablet" bus="usb"/>
    <graphics type="vnc" autoport="yes" keymap="en-us" listen="127.0.0.1"/>
  </devices>
</domain>
  from (pid=22681) to_xml /usr/lib/python2.7/dist-packages/nova/virt/libvirt/config.py:66

2012-11-14 13:38:37 ERROR nova.compute.manager [req-be2dd92e-0fb1-4562-8380-51f67189d421 c4e9a7544f764840899b194e2fa10b42 bcb940507f644777849d1b3c47c6d6c0] [instance: 084dc46d-ad25-45d3-a6e5-c0b6ecddd1fa] Instance failed to spawn
2012-11-14 13:38:37 TRACE nova.compute.manager [instance: 084dc46d-ad25-45d3-a6e5-c0b6ecddd1fa] Traceback (most recent call last):
2012-11-14 13:38:37 TRACE nova.compute.manager [instance: 084dc46d-ad25-45d3-a6e5-c0b6ecddd1fa] File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 743, in _spawn
2012-11-14 13:38:37 TRACE nova.compute.manager [instance: 084dc46d-ad25-45d3-a6e5-c0b6ecddd1fa] block_device_info)
2012-11-14 13:38:37 TRACE nova.compute.manager [instance: 084dc46d-ad25-45d3-a6e5-c0b6ecddd1fa] File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 117, in wrapped
2012-11-14 13:38:37 TRACE nova.compute.manager [instance: 084dc46d-ad25-45d3-a6e5-c0b6ecddd1fa] temp_level, payload)
2012-11-14 13:38:37 TRACE nova.compute.manager [instance: 084dc46d-ad25-45d3-a6e5-c0b6ecddd1fa] File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
2012-11-14 13:38:37 TRACE nova.compute.manager [instance: 084dc46d-ad25-45d3-a6e5-c0b6ecddd1fa] self.gen.next()
2012-11-14 13:38:37 TRACE nova.compute.manager [instance: 084dc46d-ad25-45d3-a6e5-c0b6ecddd1fa] File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 92, in wrapped
2012-11-14 13:38:37 TRACE nova.compute.manager [instance: 084dc46d-ad25-45d3-a6e5-c0b6ecddd1fa] return f(*args, **kw)
2012-11-14 13:38:37 TRACE nova.compute.manager [instance: 084dc46d-ad25-45d3-a6e5-c0b6ecddd1fa] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1062, in spawn
2012-11-14 13:38:37 TRACE nova.compute.manager [instance: 084dc46d-ad25-45d3-a6e5-c0b6ecddd1fa] block_device_info)
2012-11-14 13:38:37 TRACE nova.compute.manager [instance: 084dc46d-ad25-45d3-a6e5-c0b6ecddd1fa] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1888, in _create_domain_and_network
2012-11-14 13:38:37 TRACE nova.compute.manager [instance: 084dc46d-ad25-45d3-a6e5-c0b6ecddd1fa] domain = self._create_domain(xml)
2012-11-14 13:38:37 TRACE nova.compute.manager [instance: 084dc46d-ad25-45d3-a6e5-c0b6ecddd1fa] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1867, in _create_domain
2012-11-14 13:38:37 TRACE nova.compute.manager [instance: 084dc46d-ad25-45d3-a6e5-c0b6ecddd1fa] domain.createWithFlags(launch_flags)
2012-11-14 13:38:37 TRACE nova.compute.manager [instance: 084dc46d-ad25-45d3-a6e5-c0b6ecddd1fa] File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 187, in doit
2012-11-14 13:38:37 TRACE nova.compute.manager [instance: 084dc46d-ad25-45d3-a6e5-c0b6ecddd1fa] result = proxy_call(self._autowrap, f, *args, **kwargs)
2012-11-14 13:38:37 TRACE nova.compute.manager [instance: 084dc46d-ad25-45d3-a6e5-c0b6ecddd1fa] File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 147, in proxy_call
2012-11-14 13:38:37 TRACE nova.compute.manager [instance: 084dc46d-ad25-45d3-a6e5-c0b6ecddd1fa] rv = execute(f,*args,**kwargs)
2012-11-14 13:38:37 TRACE nova.compute.manager [instance: 084dc46d-ad25-45d3-a6e5-c0b6ecddd1fa] File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 76, in tworker
2012-11-14 13:38:37 TRACE nova.compute.manager [instance: 084dc46d-ad25-45d3-a6e5-c0b6ecddd1fa] rv = meth(*args,**kwargs)
2012-11-14 13:38:37 TRACE nova.compute.manager [instance: 084dc46d-ad25-45d3-a6e5-c0b6ecddd1fa] File "/usr/lib/python2.7/dist-packages/libvirt.py", line 650, in createWithFlags
2012-11-14 13:38:37 TRACE nova.compute.manager [instance: 084dc46d-ad25-45d3-a6e5-c0b6ecddd1fa] if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
2012-11-14 13:38:37 TRACE nova.compute.manager [instance: 084dc46d-ad25-45d3-a6e5-c0b6ecddd1fa] libvirtError: Cannot get interface MTU on '': No such device
2012-11-14 13:38:37 TRACE nova.compute.manager [instance: 084dc46d-ad25-45d3-a6e5-c0b6ecddd1fa]

Notice that in the generated XML the source bridge attribute is empty (source bridge="").
On the physical machine the bridge br-int does exist.
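A quick way to confirm what ended up in the domain XML is to grep for the source bridge attribute. The snippet below stages a minimal sample file so the command is reproducible; on a real compute node the persistent definition usually lives under /etc/libvirt/qemu/ (that path is an assumption, adjust to your install):

```shell
# Staged stand-in for /etc/libvirt/qemu/instance-00000001.xml (path assumed):
mkdir -p /tmp/libvirt-demo
cat > /tmp/libvirt-demo/instance-00000001.xml <<'EOF'
<interface type="bridge">
  <source bridge=""/>
</interface>
EOF
# An empty bridge="" here is exactly what produces
# "Cannot get interface MTU on '': No such device":
grep -o 'source bridge="[^"]*"' /tmp/libvirt-demo/instance-00000001.xml
```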

Here are the configuration files:

###############################################################
1) ovs_quantum_plugin.ini
###############################################################

[DATABASE]
sql_connection = mysql://quantum:quantum@<IP>/quantum?charset=utf8
reconnect_interval = 2

[OVS]
tenant_network_type = gre
enable_tunneling = True
tunnel_id_ranges = 1:1000
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = 10.0.0.3

[AGENT]
root_helper = sudo /usr/bin/quantum-rootwrap /etc/quantum/rootwrap.conf

###############################################################
2) nova.conf
###############################################################

[DEFAULT]

#LOGSTATE
verbose=True
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova

#NETWORK
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
force_dhcp_release=True
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver

#QUANTUM
network_api_class = nova.network.quantumv2.api.API
quantum_url = http://<IP>:9696
quantum_auth_strategy = keystone
quantum_admin_tenant_name = service
quantum_admin_username = quantum
quantum_admin_password = quantum
quantum_admin_auth_url = http://<IP>:35357/v2.0

my_ip=<IP>
fixed_range=10.0.0.0/24

#VOLUMES
volumes_path=/var/lib/nova/volumes
iscsi_helper=tgtadm

#COMPUTE
libvirt_use_virtio_for_bridges=True
connection_type=libvirt
libvirt_type=kvm
compute_driver=libvirt.LibvirtDriver
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
allow_resize_to_same_host=True
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybirdOVSBridgeDriver

#API
ec2_private_dns_show_ip=True
api_paste_config=/etc/nova/api-paste.ini
multi_host=True
enabled_apis=ec2,osapi_compute,osapi_volume,metadata
metadata_host=<IP>
metadata_port=8775

#DATABASE
sql_connection=mysql://nova:nova@<IP>/nova

# GLANCE
image_service=nova.image.glance.GlanceImageService
glance_api_servers=<IP>:9292

#MESSAGES
rabbit_host = <IP>

#AUTHENTICATION
auth_strategy=keystone

[keystone_authtoken]
auth_host = <IP>
auth_port = 35357
auth_protocol = http
auth_uri = <IP>
admin_tenant_name = service
admin_user = nova
admin_password = password

vncserver_listen=0.0.0.0

#####################################################
And finally, ovs-vsctl show
#####################################################
5c698d39-e293-432b-a542-f0fe030f0fb8
    Bridge br-tun
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
        Port "tap707e8c7a-b0"
            tag: 4095
            Interface "tap707e8c7a-b0"
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "tapb797ae3a-e6"
            tag: 1
            Interface "tapb797ae3a-e6"
                type: internal
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port "eth1:1"
            Interface "eth1:1"
    ovs_version: "1.4.0+build0"

What am I doing wrong?

Thanks in advance! :)

Question information

Language: English
Status: Solved
For: neutron
Assignee: No assignee
Solved by: Mattia Peirano
Revision history for this message
dan wendlandt (danwent) said :
#1

Yes, you are correct that the root cause of the problem is that the XML template is getting populated with an empty bridge name.

Your configuration looks correct. Can you confirm that the nova.conf is used across all hosts? In particular, that all nova-compute nodes have the flag:

libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybirdOVSBridgeDriver
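One quick way to check that across hosts (the host names below are placeholders, and the loop only prints each command as a dry run, so nothing is executed remotely until you drop the echo):

```shell
# Print the per-host check; remove the echo to actually run it over SSH.
# compute1..compute3 stand in for your five hosts.
for h in compute1 compute2 compute3; do
  echo ssh "$h" "grep -n libvirt_vif_driver /etc/nova/nova.conf"
done
```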

Are there any other exceptions further up in the Nova log than the one you show above? Pasting the entire log output from spinning up a single VM would be helpful.

Also, what version of the code are you running? Is this a Folsom release, or something newer?

Thanks for the detailed report.

Mattia Peirano (mattiapei88) said :
#2

First of all, thanks for the quick response.

Yes, all nodes are using the nova.conf file that I have posted.
All nova-compute nodes have the following flag:

libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybirdOVSBridgeDriver

I've installed the Folsom release from this repository:
deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/folsom main

It's this version:
# nova-manage version
2012.2 (2012.2-LOCALBRANCH:LOCALREVISION)

I'll try to describe the entire process:

I boot the VM forcing the host (I also tried without specifying the host, but the result is the same):

nova boot --image 21e2e055-53be-4cf0-a025-5564db5bd901 --flavor 2 --nic net-id=f7d69605-cd1e-40bf-a180-648b5e64b672 --availability-zone nova:b205b2

+--------------------------------------+-------+--------+----------------+
| ID                                   | Name  | Status | Networks       |
+--------------------------------------+-------+--------+----------------+
| 5f3953d2-66ee-4fde-82fd-b762c179cb1d | fenix | BUILD  | lan03=10.1.1.3 |
+--------------------------------------+-------+--------+----------------+

+--------------------------------------+-------+--------+----------------+
| ID                                   | Name  | Status | Networks       |
+--------------------------------------+-------+--------+----------------+
| 5f3953d2-66ee-4fde-82fd-b762c179cb1d | fenix | ERROR  | lan03=10.1.1.3 |
+--------------------------------------+-------+--------+----------------+

Here are the full nova, quantum-server, and openvswitch-agent logs, plus the quantum conf file:

nova log: http://pastie.org/5378090
quantum-server log: http://pastie.org/5378095
openvswitch-agent log: http://pastie.org/5378099
quantum conf: http://pastie.org/5378103

In the nova log there is another error, but I think it is a consequence of the first one.

Thanks! :)

dan wendlandt (danwent) said :
#3

From the logs, I see no messages from the vif-driver, so something is not right about how it is being configured or invoked.

Check out http://paste.openstack.org/show/25887/; in particular, look at the lines following:

2012-11-14 10:11:33 DEBUG nova.virt.libvirt.driver [req-229f3b89-54bb-42ae-b023-8b4d12ebba71 demo demo] block_device_list [] _volume_in_mapping /opt/stack/nova/nova/virt/libvirt/driver.py:1463

but before the XML dump.

These don't appear in your log.

All signs point to the nova.virt.libvirt.vif.LibvirtHybirdOVSBridgeDriver not actually being loaded. Can you post the entire nova log from the compute node booting the VMs, so we can see the flags that are dumped at boot-up?

Mattia Peirano (mattiapei88) said :
#4

Hi,

You are right.

On boot, the node does not load the correct driver:

2012-11-15 11:50:03 DEBUG nova.service [-] libvirt_vif_driver : nova.virt.libvirt.vif.LibvirtBridgeDriver from (pid=7927) wait /usr/lib/python2.7/dist-packages/nova/service.py:188

It's loading the LibvirtBridgeDriver instead of the specified nova.virt.libvirt.vif.LibvirtHybirdOVSBridgeDriver.

Why?

Could the libvirt version be the cause?
I'm running version 0.9.13.

Here is the complete log: http://paste.openstack.org/show/25937/

Thanks!

dan wendlandt (danwent) said :
#5

Ok, that's what I was guessing.

So now the question is why the config flag in your file is not being seen. Is it possible that nova is not actually loading your nova.conf, for example because it is in a different directory? Are there other non-default config options specified in your nova.conf? If so, do you see nova pick up those non-default values when it boots?

yong sheng gong (gongysh) said :
#6

I saw:
2012-11-15 11:50:03 DEBUG nova.service [-] config_file : ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] from (pid=7927) wait /usr/lib/python2.7/dist-packages/nova/service.py:188

Please check whether those two files have different settings for libvirt_vif_driver.
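For example (the snippet stages sample files so the check is reproducible here; on the compute node, point grep at /etc/nova/nova.conf and /etc/nova/nova-compute.conf directly):

```shell
# Staged stand-ins for the two config files that nova-compute reads:
cat > /tmp/nova.conf <<'EOF'
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybirdOVSBridgeDriver
EOF
cat > /tmp/nova-compute.conf <<'EOF'
libvirt_type=kvm
EOF
# -H prefixes each match with its filename, so a conflicting or missing
# setting in either file is easy to spot:
grep -Hn 'libvirt_vif_driver' /tmp/nova.conf /tmp/nova-compute.conf
```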

Mattia Peirano (mattiapei88) said :
#7

Hi,

I verified that nova wasn't reading the flag (I moved the flag to a different position in the file, and then nova read it o_O').

Then I tried booting the node again and got this error:

ERROR nova.compute.manager [-] Unable to load the virtualization driver: Class LibvirtDriver cannot be found

I found this thread that solved my problem:

https://lists.launchpad.net/openstack/msg18632.html
https://lists.launchpad.net/openstack/msg18643.html

I used:
compute_driver=nova.virt.libvirt.LibvirtDriver
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtOpenVswitchVirtualPortDriver

instead of:

compute_driver=libvirt.LibvirtDriver
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybirdOVSBridgeDriver

which could not be found.

Thanks folks!