LVM filter issue

Asked by Jorge Ventura

I have a cluster running with 3 nodes and recently I had to reboot the nodes.

After the reboot, one node changed its "volume service" status to "down" for no obvious reason.

The problem was that the boot drive, initially /dev/sda, changed to /dev/sdb. Most of the time this is not a problem for other drives, because the OpenStack configuration checks the volume group name; in this case, though, the Cinder volume drive changed from /dev/sdX to /dev/sda, which was rejected by the filter in lvm.conf.

Is it not possible to use /dev/disk/by-id or something else that is stable across reboots?
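One possible direction along those lines, sketched here as an assumption rather than a tested configuration: LVM matches filter regexes against the device aliases reported by udev, so a stable /dev/disk/by-id symlink can be accepted instead of a kernel name like /dev/sda (the WWN identifier below is made up):

```
devices {
    # Accept only the Cinder physical volume via its stable by-id alias
    # and reject everything else. Unlike /dev/sdX names, the by-id
    # symlink does not change when drives are enumerated in a
    # different order at boot.
    filter = [ "a|^/dev/disk/by-id/wwn-0x5000c500a1b2c3d4$|", "r|.*|" ]
}
```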

--
Ventura

Question information

Language:
English
Status:
Answered
For:
OpenStack-Ansible
Assignee:
No assignee
Dmitriy Rabotyagov (noonedeadpunk) said :
#1

Hey,

Sorry, I don't fully understand at which step the issue happens.

As far as I understand, it's not an AIO setup (and never was an AIO setup - like expanding an AIO to more nodes), correct?

Then you rebooted one of your compute/control nodes, and since you were using LVM+SCSI as the Cinder backend, this backend went down due to the device being renamed?

So the issue with using a UUID is that it only applies to formatted partitions, and at the moment we don't do that, but rely on the lvm.conf file:
https://opendev.org/openstack/openstack-ansible-os_cinder/src/branch/master/templates/lvm.conf.j2
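To illustrate what a change to that template might involve, here is a minimal sketch of generating a by-id-based filter line for lvm.conf; the disk identifier and variable name are hypothetical stand-ins, not part of the linked template:

```shell
#!/bin/sh
# Hypothetical sketch: build an lvm.conf filter line from a stable
# /dev/disk/by-id name instead of a kernel device name like /dev/sda.
# "wwn-0x5000c500a1b2c3d4" is a made-up example identifier.
DISK_ID="wwn-0x5000c500a1b2c3d4"
printf 'filter = [ "a|^/dev/disk/by-id/%s$|", "r|.*|" ]\n' "$DISK_ID"
```

A real change would also need to discover the by-id alias of the configured physical volume, which is where most of the refactoring effort would go.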

Thus, I don't expect it to be an easy change. Also, we hardly use LVM SCSI on production systems, so securing time/priority for such a refactoring will not be easy either.

But if you want to contribute and suggest a change to make this flow more resilient - you are more than welcome!
