Can't create more than 16 vCPUs per host using KVM on Ubuntu 11.10 running OpenStack Diablo

Asked by Ahmad Al-Shishtawy

Hi,

I'm running OpenStack Diablo on Ubuntu 11.10. The cluster consists of 9 servers with 24 cores each, and I'm using KVM.

I cannot create more than 16 vCPUs per physical host. For example, if I create 4-core VMs, I cannot create more than 4 VMs; the status of the fifth VM remains stuck at "building".

The maximum number of vCPUs defined in /usr/src/linux-headers-3.0.0-19/arch/x86/include/asm/kvm_host.h is:
#define KVM_MAX_VCPUS 64
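A generic way to check this against whatever kernel is running (assuming the matching headers package is installed):

    grep KVM_MAX_VCPUS /usr/src/linux-headers-$(uname -r)/arch/x86/include/asm/kvm_host.h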

So I expect to be able to run 64 vCPUs per host! Right?

What is the problem with my setup? How can I increase the limit?

Regards,
Ahmad

Question information

Language: English
Status: Solved
For: OpenStack Compute (nova)
Assignee: No assignee
Solved by: Ahmad Al-Shishtawy

Mark Lehrer (pyite) said: #1

My ugly workaround was to hack simple.py and force check_cores to be 0.

When I have a few minutes, I'll see if I can solve it "for real".
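
For anyone curious, here is a minimal sketch of the kind of check that hack disables, assuming the Diablo-era layout of nova/scheduler/simple.py; the identifiers are approximations, and get_cores_allocated_on_host is a hypothetical stand-in rather than the real function name:

    # Approximate shape of the per-host core limit check in the simple
    # scheduler; consult your tree's nova/scheduler/simple.py for the
    # real names. get_cores_allocated_on_host is a hypothetical helper.
    instance_cores = get_cores_allocated_on_host(context, host)
    if instance_cores + instance_opts['vcpus'] > FLAGS.max_cores:
        raise driver.NoValidHost(_("All hosts have too many cores"))
    # Forcing instance_cores to 0 here (the workaround above) makes the
    # check always pass, no matter how many cores are already in use.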

Ahmad Al-Shishtawy (alshishtawy) said: #2

Thank you! That solved my problem :)

I edited simple.py and changed the default value of max_cores from 16 to 128:

flags.DEFINE_integer("max_cores", 128,
                     "maximum number of instance cores to allow per host")

Now I can have more than 16 vCPUs per host :D

John Garbutt (johngarbutt) said: #3

All those flags can be set in /etc/nova/nova.conf, so no need to change the code if you don't want to.
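
For example, assuming the Diablo-era gflags-style syntax (one --flag=value per line in nova.conf):

    --max_cores=128

Then restart the affected services (at least nova-scheduler) so the new value is picked up.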

Ahmad Al-Shishtawy (alshishtawy) said: #4

Thanks! Even better :) I didn't know that.

Mark Lehrer (pyite) said: #5

John, I set this in nova.conf on the compute nodes and also on the nova-scheduler node (and restarted nova-compute and nova-scheduler), but it still didn't seem to take effect. It's quite possible that I am just doing something wrong, of course.
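
A couple of sanity checks that might help narrow it down (paths and service names assume the stock Ubuntu packages):

    grep max_cores /etc/nova/nova.conf   # is the flag line actually present?
    ps aux | grep nova-scheduler         # which --flagfile was the daemon started with?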

John Garbutt (johngarbutt) said: #6

That's odd.

Did the log give any hints about what flag value it chose when the service started?
Try changing some other flags to see if they take effect.
Maybe just print the flag's value to the log to double-check.
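
As a sketch of that last suggestion, assuming nova's Diablo-era flags and logging modules (the logger name here is an assumption):

    from nova import flags
    from nova import log as logging

    FLAGS = flags.FLAGS
    LOG = logging.getLogger('nova.scheduler.simple')  # assumed logger name

    # Log the effective value once so the service log shows which
    # configuration actually took effect.
    LOG.debug('max_cores is currently %s', FLAGS.max_cores)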