How to manage network IP address exhaustion

Asked by net-security

Hi folks, we are taking our first steps with Quantum and we would like to simulate what happens if we run more instances than we have available IP addresses.
In my lab I have a Nova controller/compute node and a Quantum server. I created a /29 network (8 addresses) and launched 10 instances. The result: the instances that received an IP address reached the "running" state within seconds, while the others stayed in the "pending" state.
These are my logs:
2012-01-23 09:57:28,057 ERROR nova.rpc [-] Exception during message handling
(nova.rpc): TRACE: Traceback (most recent call last):
(nova.rpc): TRACE: File "/usr/lib/python2.6/dist-packages/nova/rpc/impl_kombu.py", line 620, in _process_data
(nova.rpc): TRACE: rval = node_func(context=ctxt, **node_args)
(nova.rpc): TRACE: File "/usr/lib/python2.6/dist-packages/nova/exception.py", line 98, in wrapped
(nova.rpc): TRACE: return f(*args, **kw)
(nova.rpc): TRACE: File "/usr/lib/python2.6/dist-packages/nova/compute/manager.py", line 454, in run_instance
(nova.rpc): TRACE: self._run_instance(context, instance_id, **kwargs)
(nova.rpc): TRACE: File "/usr/lib/python2.6/dist-packages/nova/compute/manager.py", line 393, in _run_instance
(nova.rpc): TRACE: requested_networks=requested_networks)
(nova.rpc): TRACE: File "/usr/lib/python2.6/dist-packages/nova/network/api.py", line 162, in allocate_for_instance
(nova.rpc): TRACE: 'args': args})
(nova.rpc): TRACE: File "/usr/lib/python2.6/dist-packages/nova/rpc/__init__.py", line 45, in call
(nova.rpc): TRACE: return get_impl().call(context, topic, msg)
(nova.rpc): TRACE: File "/usr/lib/python2.6/dist-packages/nova/rpc/impl_kombu.py", line 739, in call
(nova.rpc): TRACE: rv = list(rv)
(nova.rpc): TRACE: File "/usr/lib/python2.6/dist-packages/nova/rpc/impl_kombu.py", line 703, in __iter__
(nova.rpc): TRACE: raise result
(nova.rpc): TRACE: RemoteError: NoMoreFixedIps Zero fixed ips available.
(nova.rpc): TRACE: [u'Traceback (most recent call last):\n', u' File "/usr/lib/python2.6/dist-packages/nova/rpc/impl_kombu.py", line 620, in _process_data\n rval = node_func(context=ctxt, **node_args)\n', u' File "/usr/lib/python2.6/dist-packages/nova/network/quantum/manager.py", line 177, in allocate_for_instance\n vif_rec)\n', u' File "/usr/lib/python2.6/dist-packages/nova/network/quantum/nova_ipam_lib.py", line 122, in allocate_fixed_ip\n vif_rec[\'instance_id\'])\n', u' File "/usr/lib/python2.6/dist-packages/nova/db/api.py", line 347, in fixed_ip_associate_pool\n instance_id, host)\n', u' File "/usr/lib/python2.6/dist-packages/nova/db/sqlalchemy/api.py", line 101, in wrapper\n return f(*args, **kwargs)\n', u' File "/usr/lib/python2.6/dist-packages/nova/db/sqlalchemy/api.py", line 729, in fixed_ip_associate_pool\n raise exception.NoMoreFixedIps()\n', u'NoMoreFixedIps: Zero fixed ips available.\n']
(nova.rpc): TRACE:
2012-01-23 09:57:28,115 ERROR nova.rpc [-] Exception during message handling
(nova.rpc): TRACE: Traceback (most recent call last):

My questions are: how can I handle the possibility of IP address exhaustion, and how can I delete the instances stuck in the pending state without starting everything from scratch?
Thanks a lot!

Question information

Language:
English
Status:
Answered
For:
Melange
Assignee:
No assignee
Ariel Moreno (meli-datacenter) said:
#1

Does anybody have a clue about this?

Ariel Moreno (meli-datacenter) said:
#2

Is there any chance of creating additional networks and using their UUIDs to link them to the same tenant, as a way of dealing with fixed IP address exhaustion?

dan wendlandt (danwent) said:
#3

Hi Ariel,

This really isn't specific to Quantum. Rather, it is a result of the IP address management (IPAM) mechanism you are using. Specifically, the IPAM that Nova uses by default only lets you specify a single IPv4 prefix for your network; when you run out of IPs, you are out of luck. The same is true without Quantum.

There is a new project to improve IPAM for OpenStack called Melange (see: http://launchpad.net/melange). It may be able to handle this scenario by associating multiple IP address ranges with a single network. I am re-assigning this question to that project to see if they have any comments.
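
As an aside on the arithmetic: a /29 holds 8 addresses, but the network and broadcast addresses are unusable, and Nova typically reserves the gateway as well, so only about 5 addresses are actually left for instances. Here is a minimal sketch with Python's ipaddress module; the gateway reservation is an assumption about a typical setup, not something read from this lab:

# Rough capacity check for a /29 fixed range. The gateway reservation below
# is an assumption about a typical Nova setup.
import ipaddress

net = ipaddress.ip_network('10.0.0.0/29')

total = net.num_addresses          # 8 addresses in a /29
usable = len(list(net.hosts()))    # 6 once network/broadcast are excluded
gateway_reserved = 1               # Nova usually takes the first host as gateway
print(total, usable, usable - gateway_reserved)   # 8 6 5

That is why only a handful of the 10 launched instances ever received an IP.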

Jason Kölker (jason-koelker) said:
#4

In the legacy Nova IPAM, IP addresses are tied only to the network, not to the user (although the project_id that created them is also recorded).

Melange will allow for multiple subnets per L2 network. It *should* work now, but we've yet to test it.

Currently you'll have to manually delete the instances from the hypervisor and the database. I believe https://review.openstack.org/#change,3309 may have a fix in it that will allow them to be deleted through the API.
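
For the database half of that manual cleanup, a rough sketch along the following lines might help. It is not an official tool: the connection string is a placeholder, and the column and state names (vm_state, deleted, deleted_at, 'building') are assumptions about the diablo/essex-era schema that you should verify against your own nova database before running anything (and back the database up first). If libvirt domains were actually created for the stuck instances, they would also need virsh destroy / virsh undefine on the compute node.

# Sketch only: mark instances that never left the build/pending state as
# deleted directly in the nova database. Connection string, column names and
# the 'building' state are assumptions to verify first; back up the DB.
import datetime
import sqlalchemy

engine = sqlalchemy.create_engine('mysql://nova:secret@localhost/nova')  # placeholder credentials

with engine.begin() as conn:
    conn.execute(
        sqlalchemy.text(
            "UPDATE instances "
            "SET deleted = 1, deleted_at = :now, vm_state = 'deleted' "
            "WHERE vm_state = 'building' AND deleted = 0"
        ),
        {"now": datetime.datetime.utcnow()},
    )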

Notifications are yet to be integrated; they will be similar to the Nova notifications, and you'll have to run a collector for them. (I'm not sure exactly how all of that works, but the code is in nova/notifier.) There is an unsupported project at https://github.com/Cerberus98/yagi that is an example of consuming the notifications and turning them into a PubSubHubbub publisher.
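
As a rough idea of what such a collector could look like, here is a minimal sketch using kombu (the AMQP library Nova's RPC layer already uses). The broker URL, the 'nova' exchange name and the 'notifications.info' topic are assumptions about a default RabbitMQ setup; verify them against your nova.conf:

# Minimal notification collector sketch, not a production consumer.
# Broker URL, exchange name and routing key below are assumptions.
from kombu import Connection, Exchange, Queue

nova_exchange = Exchange('nova', type='topic', durable=False)
info_queue = Queue('notifications.info', exchange=nova_exchange,
                   routing_key='notifications.info', durable=False)

def handle_notification(body, message):
    # Nova notifications generally carry 'event_type', 'publisher_id' and 'payload'.
    print(body)
    message.ack()

with Connection('amqp://guest:guest@localhost:5672//') as conn:
    with conn.Consumer(info_queue, callbacks=[handle_notification]):
        while True:
            conn.drain_events()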
