Quota exceeded: code=%(code)s (HTTP 413)
I'm running Xenserver and Openstack.
I'm following the XenServer development doc. Everything is installed, but I get the quota error when I try to run an instance.
nova@OSserver2b
+------
| ID | Name | Status |
+------
| 465442ad-
| 6c5f6148-
| 7b17d37e-
+------
nova@OSserver2b
+----+-----------+-----------+------+----------+-------+------------+----------+
| ID | Name      | Memory_MB | Swap | Local_GB | VCPUs | RXTX_Quota | RXTX_Cap |
+----+-----------+-----------+------+----------+-------+------------+----------+
| 1  | m1.tiny   | 512       |      | 0        | 1     |            |          |
| 2  | m1.small  | 2048      |      | 20       | 1     |            |          |
| 3  | m1.medium | 4096      |      | 40       | 2     |            |          |
| 4  | m1.large  | 8192      |      | 80       | 4     |            |          |
| 5  | m1.xlarge | 16384     |      | 160      | 8     |            |          |
+----+-----------+-----------+------+----------+-------+------------+----------+
nova@OSserver2b
Quota exceeded: code=%(code)s (HTTP 413)
nova@OSserver2b
Quota exceeded: code=%(code)s (HTTP 413)
nova@OSserver2b
Quota exceeded: code=%(code)s (HTTP 413)
nova@OSserver2b
/usr/lib/
import pkg_resources
2012-03-16 17:11:47 DEBUG nova.utils [req-f8ec300f-
metadata_items: 128
instances: 10
injected_
injected_files: 5
volumes: 10
gigabytes: 1000
cores: 20
ram: 51200
floating_ips: 10
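Each of those per-project limits maps to a quota flag in nova.conf that can be raised if needed. A sketch of the corresponding entries (flag names as in the nova of this era; the values are just the defaults already shown above, so double-check the names against your nova's flags):

```
# nova.conf quota overrides (values below are the defaults)
quota_instances=10
quota_cores=20
quota_ram=51200
quota_volumes=10
quota_gigabytes=1000
quota_floating_ips=10
quota_metadata_items=128
```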
nova@OSserver2b
As far as I can tell, I should be able to create up to 10 instances, but it won't even let me create one.
Am I missing another setting? Any help would be appreciated. Thanks.
Question information
- Language: English
- Status: Answered
- Assignee: None
#1
I would recommend you try using DevStack (http://
I have not seen that error myself, but it looks a lot like a configuration issue.
If you could check the api-server and the scheduler logs they may be able to give you some hints as to why it thinks you have reached your quota. Maybe the way you integrated keystone is not quite working?
It might be worth checking the clocks to ensure it does not simply think there are no resources available (but I thought that would give you a different error).
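A quick way to do that log check is to grep the service logs for quota messages. The snippet below writes a sample log file so it stands alone; in practice, point the grep at the real nova-api.log and nova-scheduler.log (their locations vary by setup, so the paths are yours to fill in):

```shell
# Simulated log file so the snippet is self-contained; in practice,
# grep the real nova-api.log / nova-scheduler.log instead.
log=$(mktemp)
cat > "$log" <<'EOF'
2012-03-19 12:28:12 DEBUG nova.utils [req-e7e8696d] ...
2012-03-19 12:28:14 WARNING nova.compute.api [req-e7e8696d] QuotaError: Quota exceeded
EOF

# Count the quota-related lines, case-insensitively.
hits=$(grep -ic 'quota' "$log")
echo "quota-related lines: $hits"
rm -f "$log"
```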
I hope that is of some help.
#2
Yes, I am currently using a pull from DevStack.
The nova-api.log file shows:
2012-03-19 12:28:02 INFO nova.wsgi [-] Started metadata on 0.0.0.0:8775
2012-03-19 12:28:11 INFO nova.api.
2012-03-19 12:28:12 DEBUG nova.utils [req-e7e8696d-
2012-03-19 12:28:14 WARNING nova.compute.api [req-e7e8696d-
2012-03-19 12:28:14 INFO nova.api.
2012-03-19 12:28:14 INFO nova.api.
I didn't notice any activity on the other logs.
#3
Also checked the clocks and they are all in sync.
#4
Interesting. Just ran a "nova list" and found:
nova@OSserver2b
+------
| ID | Name | Status | Networks |
+------
| 20854cdd-
| 5a0d5c03-
| 62c3f5cb-
| 7b3045bd-
| b1967369-
| b481cd67-
| b4894f33-
| c194ec35-
| c6049034-
| c813bc01-
+------
So it seems that instances were being created, and I did indeed have 10 of them already.
I'll delete them and then try to create again and see what happens.
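Deleting ten instances one at a time is tedious; one approach is to parse the ID column out of the `nova list` table and feed each ID to `nova delete`. The table below is inlined sample data (made-up IDs) so the snippet stands alone; in practice, substitute the real `nova list` output and uncomment the final line:

```shell
# Sample 'nova list'-style table (illustrative IDs); in practice this
# would be the output of `nova list` itself.
nova_list_output='+----+--------+--------+
| ID | Name | Status |
+----+--------+--------+
| 20854cdd-aaaa | vm1 | ACTIVE |
| 5a0d5c03-bbbb | vm2 | ACTIVE |
+----+--------+--------+'

# Extract the ID column from the data rows, skipping header and borders.
ids=$(printf '%s\n' "$nova_list_output" |
    awk -F'|' '/^\| [0-9a-f]/ {gsub(/ /,"",$2); print $2}')
printf '%s\n' "$ids"

# To actually delete:
# printf '%s\n' "$ids" | xargs -n1 nova delete
```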
#5
Ran "nova delete 20854cdd-
The command didn't return an error, but it didn't delete the instance.
nova-api output:
2012-03-19 16:31:49 INFO nova.api.
2012-03-19 16:31:49 DEBUG nova.api.
2012-03-19 16:31:49 DEBUG nova.compute.api [req-bae31505-
2012-03-19 16:31:49 DEBUG nova.api.
2012-03-19 16:31:49 DEBUG nova.compute.api [req-bae31505-
2012-03-19 16:31:49 DEBUG nova.compute.api [req-bae31505-
2012-03-19 16:31:49 INFO nova.api.
#6
Tried again to delete and am now getting a similar but different error:
2012-03-19 16:36:11 INFO nova.api.
2012-03-19 16:36:11 DEBUG nova.api.
2012-03-19 16:36:11 DEBUG nova.compute.api [req-30892558-
2012-03-19 16:36:11 DEBUG nova.api.
2012-03-19 16:36:11 DEBUG nova.compute.api [req-30892558-
2012-03-19 16:36:11 DEBUG nova.compute.api [req-30892558-
2012-03-19 16:36:11 INFO nova.api.
2012-03-19 16:36:16 INFO nova.api.
2012-03-19 16:36:16 DEBUG nova.api.
2012-03-19 16:36:16 DEBUG nova.api.
2012-03-19 16:36:16 INFO nova.api.
2012-03-19 16:36:16 INFO nova.api.
2012-03-19 16:36:16 DEBUG nova.api.
2012-03-19 16:36:16 DEBUG nova.api.
2012-03-19 16:36:16 INFO nova.api.
2012-03-19 16:36:16 INFO nova.api.
2012-03-19 16:36:16 DEBUG nova.api.
2012-03-19 16:36:16 DEBUG nova.compute.api [req-211c4deb-
2012-03-19 16:36:17 DEBUG nova.rpc.common [req-211c4deb-
2012-03-19 16:36:17 INFO nova.api.
2012-03-19 16:38:16 INFO nova.api.
2012-03-19 16:38:16 DEBUG nova.api.
2012-03-19 16:38:16 DEBUG nova.api.
2012-03-19 16:38:16 INFO nova.api.
2012-03-19 16:38:16 INFO nova.api.
2012-03-19 16:38:16 DEBUG nova.api.
2012-03-19 16:38:16 DEBUG nova.api.
2012-03-19 16:38:16 INFO nova.api.
2012-03-19 16:38:16 INFO nova.api.
2012-03-19 16:38:16 DEBUG nova.api.
2012-03-19 16:38:17 DEBUG nova.compute.api [req-7e9b8583-
2012-03-19 16:38:17 DEBUG nova.rpc.common [req-7e9b8583-
2012-03-19 16:38:17 INFO nova.api.
#7
The nova-compute output shows:
2012-03-19 17:23:09 DEBUG nova.rpc.common [-] received {u'_context_roles': [u'admin'], u'_context_
2012-03-19 17:23:09 DEBUG nova.rpc.common [req-d7d78fd8-
2012-03-19 17:23:09 INFO nova.compute.
2012-03-19 17:23:09 INFO nova.compute.
2012-03-19 17:23:09 DEBUG nova.compute.
2012-03-19 17:23:09 INFO nova.compute.
2012-03-19 17:23:09 INFO nova.compute.
2012-03-19 17:23:09 INFO nova.compute.
2012-03-19 17:23:09 AUDIT nova.compute.
2012-03-19 17:23:09 DEBUG nova.rpc.common [-] Making asynchronous call on network ... from (pid=3939) multicall /home/localadmi
2012-03-19 17:23:09 DEBUG nova.rpc.common [-] MSG_ID is 738bf27394b34af
2012-03-19 17:23:09 DEBUG nova.rpc.common [-] Pool creating new connection from (pid=3939) create /home/localadmi
2012-03-19 17:23:09 INFO nova.rpc.common [-] Connected to AMQP server on 10.228.24.60:5672
#8
The instance is not getting deleted.
"nova list" output shows there are still 10 instances.
#9
Ouch, that looks annoying. Afraid I have not seen that set of errors before.
Since you are using DevStack, it might be easier to start again (although it would be nice to know what has caused all your pain):
killall screen
./stack.sh
When you try to create an instance (it is easier using Horizon, where you can see what is going on), you can then trace what is happening through the different logs, and you should be able to find the root cause of your issues.
Just want to double-check, but I assume there is just a single XenServer with a single DomU running the nova services? And you are using EXT storage for your local storage SR?
#10
Yes, I'm about at the point of starting again.
Yes, one XenServer with a single DomU running the nova services. Local storage.
#11
It seems that the instances table is not in MySQL. Is that right?
I thought I'd force-delete the instances in the db, but since the table doesn't seem to be in MySQL, I'm not sure where it is. Any ideas?
#12
Found that the table was actually in the MySQL db on server1, not on server2.
This, of course, has me a bit confused. Are there supposed to be separate MySQL dbs on server1 and server2?
#13
I was able to work around the failed nova delete by going directly into MySQL on server1 and deleting the entries in the instances table.
But it is still not clear why instance creation is failing.
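For the record, hard-deleting rows behind nova's back can be risky: nova soft-deletes instances (it sets a deleted flag rather than removing the row), so an UPDATE is the gentler workaround. A minimal sketch of that soft-delete pattern, using an in-memory SQLite table as a stand-in for nova's instances table (the schema here is a simplified assumption, not nova's real schema, and the real workaround would run equivalent SQL against the nova database on server1):

```python
import sqlite3

# In-memory stand-in for nova's MySQL database.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE instances ("
    "uuid TEXT PRIMARY KEY, vm_state TEXT, deleted INTEGER DEFAULT 0)"
)
conn.executemany(
    "INSERT INTO instances (uuid, vm_state) VALUES (?, ?)",
    [("20854cdd-0001", "active"), ("5a0d5c03-0002", "active")],
)

# Soft-delete: mark rows deleted instead of removing them, which is
# what nova itself does and keeps any foreign-key references intact.
conn.execute(
    "UPDATE instances SET deleted = 1, vm_state = 'deleted' WHERE deleted = 0"
)
conn.commit()

# Only non-deleted rows count against the instance quota.
remaining = conn.execute(
    "SELECT COUNT(*) FROM instances WHERE deleted = 0"
).fetchone()[0]
total = conn.execute("SELECT COUNT(*) FROM instances").fetchone()[0]
print(remaining, total)  # 0 2
```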
#14
No, there are not. You need everything talking to the same MySQL and RabbitMQ servers.
You should set rabbit_host and sql_connection in your conf file.
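A sketch of those two settings in nova.conf (the host and credentials below are placeholders, not values from this deployment):

```
# nova.conf -- identical on server1 and server2, pointing both nodes
# at the single MySQL and RabbitMQ instance
rabbit_host=<server1-ip>
sql_connection=mysql://nova:<password>@<server1-ip>/nova
```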
Question asked by Shirley Woo.