VDO Device Excluded by a filter

VDO Device Excluded by a filter

Post by penguinpages » 2020/09/16 02:56:18

Still trying to use the HCI setup on CentOS 8 for a cluster rebuild. I kept getting failures, so I'm focusing on just the one node that is failing... the other two can complete a single-node deployment. I'm using only the first 512GB SSD drive, so something about this node is confusing things. (I also tested on the two 1TB SSDs in the server, with the same results.)


Node with error.
[root@thor ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 931.5G 0 disk
sdb 8:16 0 931.5G 0 disk
sdc 8:32 0 477G 0 disk
sdd 8:48 1 57.3G 0 disk
├─sdd1 8:49 1 1G 0 part /boot
└─sdd2 8:50 1 56.3G 0 part
├─cl-root 253:0 0 35.2G 0 lvm /
├─cl-swap 253:1 0 4G 0 lvm
└─cl-home 253:2 0 17.2G 0 lvm /home
[root@thor ~]# wipefs -a /dev/sdc

I also tried other means to ensure the drive is wiped.
[root@thor ~]# dd if=/dev/zero of=/dev/sdc bs=512 count=10000
10000+0 records in
10000+0 records out
5120000 bytes (5.1 MB, 4.9 MiB) copied, 0.0654142 s, 78.3 MB/s
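For reference, zeroing only the first 10000 sectors can leave old GPT / LVM / VDO metadata at the end of the disk. A fuller wipe would look roughly like the sketch below (destructive, and it assumes /dev/sdc really is the disk to clear):

Code: Select all

# DESTRUCTIVE: double-check the device name before running
wipefs -a /dev/sdc                      # remove known filesystem / LVM / VDO signatures
blkdiscard /dev/sdc 2>/dev/null || {    # discard the whole SSD if it supports it...
    # ...otherwise zero the first and last 10 MiB, where GPT and LVM/VDO metadata live
    dd if=/dev/zero of=/dev/sdc bs=1M count=10
    dd if=/dev/zero of=/dev/sdc bs=1M count=10 \
        seek=$(( $(blockdev --getsz /dev/sdc) / 2048 - 10 ))
}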


I noted that multipathd had claimed the drive and thought that might have something to do with it:
# Collect the IDs of the local disks that will be used for replication
# Ex: thor
[root@thor ~]# multipath -F
create: WDC_WDS100T2B0B-00YS70_19106A802926 undef ATA,WDC WDS100T2B0B
size=932G features='1 queue_if_no_path' hwhandler='0' wp=undef
`-+- policy='service-time 0' prio=1 status=undef
`- 1:0:0:0 sda 8:0 undef ready running
create: WDC_WDS100T2B0B-00YS70_192490801828 undef ATA,WDC WDS100T2B0B
size=932G features='1 queue_if_no_path' hwhandler='0' wp=undef
`-+- policy='service-time 0' prio=1 status=undef
`- 2:0:0:0 sdb 8:16 undef ready running
create: Samsung_SSD_850_PRO_512GB_S250NXAGA15787L undef ATA,Samsung SSD 850
size=477G features='1 queue_if_no_path' hwhandler='0' wp=undef
`-+- policy='service-time 0' prio=1 status=undef
`- 4:0:0:0 sdc 8:32 undef ready running

#### May not be helpful .. trying to avoid issues of "…err": "vdo: ERROR - Device /dev/sdc excluded by a filter.\n",
# Blacklist local disk from multipath
vi /etc/multipath.conf

blacklist {
wwid WDC_WDS100T2B0B-00YS70_19106A802926
wwid WDC_WDS100T2B0B-00YS70_192490801828
wwid Samsung_SSD_850_PRO_512GB_S250NXAGA15787L
protocol "(scsi:adt|scsi:sbp)"
}
systemctl restart multipathd.service
multipath -F
multipath -v2
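A quick sanity check that the blacklist actually took effect, before rebooting:

Code: Select all

multipath -ll             # should print no maps for the local sda/sdb/sdc once blacklisted
lsblk -o NAME,TYPE,SIZE   # no "mpath" entries should remain over the local SSDs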


<reboot>

No change.


# Snip of where it fails below. Full log attached
TASK [gluster.infra/roles/backend_setup : Enable and start vdo service] ********
ok: [thorst.penguinpages.local]

TASK [gluster.infra/roles/backend_setup : Create VDO with specified size] ******
failed: [thorst.penguinpages.local] (item={'name': 'vdo_sdc', 'device': '/dev/sdc', 'slabsize': '32G', 'logicalsize': '11000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}) => {"ansible_index_var": "index", "ansible_loop_var": "item", "changed": false, "err": "vdo: ERROR - Device /dev/sdc excluded by a filter.\n", "index": 0, "item": {"blockmapcachesize": "128M", "device": "/dev/sdc", "emulate512": "off", "logicalsize": "11000G", "maxDiscardSize": "16M", "name": "vdo_sdc", "slabsize": "32G", "writepolicy": "auto"}, "msg": "Creating VDO vdo_sdc failed.", "rc": 1}
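For anyone debugging this outside Ansible: the failing item is roughly equivalent to a manual vdo create call (option names as per the RHEL 8 vdo manager, so adjust for your version), which should reproduce the same filter error and is quicker to iterate on:

Code: Select all

# Rough manual equivalent of the failing Ansible item (check `vdo create --help` for exact options)
vdo create --name=vdo_sdc --device=/dev/sdc \
    --vdoLogicalSize=11000G --vdoSlabSize=32G --writePolicy=auto
# Expect the same failure here: vdo: ERROR - Device /dev/sdc excluded by a filter.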



### HCI Single node Deploy Ansible File

Code: Select all

hc_nodes:
  hosts:
    thorst.penguinpages.local:
      gluster_infra_volume_groups:
        - vgname: gluster_vg_sdc
          pvname: /dev/mapper/vdo_sdc
      gluster_infra_mount_devices:
        - path: /gluster_bricks/engine
          lvname: gluster_lv_engine
          vgname: gluster_vg_sdc
        - path: /gluster_bricks/data
          lvname: gluster_lv_data
          vgname: gluster_vg_sdc
        - path: /gluster_bricks/vmstore
          lvname: gluster_lv_vmstore
          vgname: gluster_vg_sdc
      gluster_infra_vdo:
        - name: vdo_sdc
          device: /dev/sdc
          slabsize: 32G
          logicalsize: 11000G
          blockmapcachesize: 128M
          emulate512: 'off'
          writepolicy: auto
          maxDiscardSize: 16M
      blacklist_mpath_devices:
        - sdc
      gluster_infra_thick_lvs:
        - vgname: gluster_vg_sdc
          lvname: gluster_lv_engine
          size: 1000G
      gluster_infra_thinpools:
        - vgname: gluster_vg_sdc
          thinpoolname: gluster_thinpool_gluster_vg_sdc
          poolmetadatasize: 3G
      gluster_infra_lv_logicalvols:
        - vgname: gluster_vg_sdc
          thinpool: gluster_thinpool_gluster_vg_sdc
          lvname: gluster_lv_data
          lvsize: 5000G
        - vgname: gluster_vg_sdc
          thinpool: gluster_thinpool_gluster_vg_sdc
          lvname: gluster_lv_vmstore
          lvsize: 5000G
  vars:
    gluster_infra_disktype: JBOD
    gluster_set_selinux_labels: true
    gluster_infra_fw_ports:
      - 2049/tcp
      - 54321/tcp
      - 5900/tcp
      - 5900-6923/tcp
      - 5666/tcp
      - 16514/tcp
    gluster_infra_fw_permanent: true
    gluster_infra_fw_state: enabled
    gluster_infra_fw_zone: public
    gluster_infra_fw_services:
      - glusterfs
    gluster_features_force_varlogsizecheck: false
    cluster_nodes:
      - thorst.penguinpages.local
    gluster_features_hci_cluster: '{{ cluster_nodes }}'
    gluster_features_hci_volumes:
      - volname: engine
        brick: /gluster_bricks/engine/engine
        arbiter: 0
      - volname: data
        brick: /gluster_bricks/data/data
        arbiter: 0
      - volname: vmstore
        brick: /gluster_bricks/vmstore/vmstore
        arbiter: 0
    gluster_features_hci_volume_options:
      storage.owner-uid: '36'
      storage.owner-gid: '36'
      features.shard: 'on'
      performance.low-prio-threads: '32'
      performance.strict-o-direct: 'on'
      network.remote-dio: 'off'
      network.ping-timeout: '30'
      user.cifs: 'off'
      nfs.disable: 'on'
      performance.quick-read: 'off'
      performance.read-ahead: 'off'
      performance.io-cache: 'off'
      cluster.eager-lock: enable


Re: VDO Device Excluded by a filter

Post by penguinpages » 2020/09/16 03:02:14

Update: I removed VDO from the picture..

Unchecked dedup / compression.

Still getting the error:

TASK [gluster.infra/roles/backend_setup : Create volume groups] ****************
task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vg_create.yml:59
failed: [thorst.penguinpages.local] (item={'key': 'gluster_vg_sdc', 'value': [{'vgname': 'gluster_vg_sdc', 'pvname': '/dev/sdc'}]}) => {"ansible_loop_var": "item", "changed": false, "err": " Device /dev/sdc excluded by a filter.\n", "item": {"key": "gluster_vg_sdc", "value": [{"pvname": "/dev/sdc", "vgname": "gluster_vg_sdc"}]}, "msg": "Creating physical volume '/dev/sdc' failed", "rc": 5}


So not VDO...
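Since plain pvcreate hits the same message, the rejection is coming from LVM's device filter, not VDO. Two diagnostics (my own additions, not part of the playbook) show what filter LVM is actually applying and whether /dev/sdc would pass:

Code: Select all

lvmconfig devices/filter devices/global_filter   # the filters LVM is actually applying
pvcreate --test /dev/sdc                         # dry run; reports "excluded by a filter" if rejected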


Re: VDO Device Excluded by a filter

Post by penguinpages » 2020/09/16 11:50:33

Found the issue(s). Please note this is RHEL / CentOS 8, not 5-7, as there are differences.

Ansible, when it runs playbooks and then tries to back out, is a bit like my kid cleaning his room: it gets distracted and forgets a few things... albeit important ones.

# LVM filter ... the local block data device should not be listed. I think Gluster does this so it controls LVM on top of it... but sometimes it leaves a mess
# Check whether the LVM cleanup happened. Note the OS drive (Ex: sdd) needs to be left, but remove the data drive (Ex: sdc)
cat /etc/lvm/lvm.conf |grep filter
#filter = ["a|^/dev/mapper/vdo_sdc$|", "a|^/dev/sdd2$|", "r|.*|"]
filter = ["a|^/dev/sdd2$|", "r|.*|"]
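Side note: on oVirt 4.4 hosts, vdsm ships a helper that can analyze and (re)write this LVM filter; if vdsm is already installed on the node, it may be worth trying instead of hand-editing (behavior depends on your vdsm version):

Code: Select all

vdsm-tool config-lvm-filter   # analyzes locally mounted LVs and proposes / applies a suitable filter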


# Multipath had its hands on the disk... which should not be the cause.. three notes to add here... this MAY not be needed, but it may help if you run into issues
# Collect the IDs of the local disks that will be used for replication
# Ex: thor
[root@thor ~]# multipath -F
create: WDC_WDS100T2B0B-00YS70_19106A802926 undef ATA,WDC WDS100T2B0B
size=932G features='1 queue_if_no_path' hwhandler='0' wp=undef
`-+- policy='service-time 0' prio=1 status=undef
`- 1:0:0:0 sda 8:0 undef ready running
create: WDC_WDS100T2B0B-00YS70_192490801828 undef ATA,WDC WDS100T2B0B
size=932G features='1 queue_if_no_path' hwhandler='0' wp=undef
`-+- policy='service-time 0' prio=1 status=undef
`- 2:0:0:0 sdb 8:16 undef ready running
create: Samsung_SSD_850_PRO_512GB_S250NXAGA15787L undef ATA,Samsung SSD 850
size=477G features='1 queue_if_no_path' hwhandler='0' wp=undef
`-+- policy='service-time 0' prio=1 status=undef
`- 4:0:0:0 sdc 8:32 undef ready running

#### May not be helpful .. trying to avoid issues of "…err": "vdo: ERROR - Device /dev/sdc excluded by a filter.\n",
# Blacklist local disks from multipath. Note this does not include NVMe, and the "# VDSM PRIVATE" remark at the top of the file is needed (so VDSM does not overwrite it)
vi /etc/multipath.conf
# VDSM REVISION 1.9
# VDSM PRIVATE

blacklist {
# Block them all
devnode "^sd[a-z]"
# Or... block just specific ones you have today in system
wwid WDC_WDS100T2B0B-00YS70_19106A802926
wwid WDC_WDS100T2B0B-00YS70_192490801828
wwid Samsung_SSD_850_PRO_512GB_S250NXAGA15787L
protocol "(scsi:adt|scsi:sbp)"
}
systemctl restart multipathd.service
multipath -F
multipath -v2
# !!!!!!! Rebuild the initrd image so the multipath change survives a reboot
dracut --force --add multipath -v
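To confirm the updated multipath.conf actually made it into the rebuilt initramfs (a quick check, assuming dracut's lsinitrd is available):

Code: Select all

lsinitrd | grep multipath.conf   # should list etc/multipath.conf inside the new initramfs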




# Tools learned along the way to piece together how the disk is laid out
dmsetup info -c -o name,blkdevname,devnos_used,blkdevs_used
pvs
pvscan
lvs
lvscan
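A couple more views that help piece the device-mapper stack together:

Code: Select all

dmsetup ls --tree                          # device-mapper devices as a dependency tree
lsblk -o NAME,KNAME,TYPE,SIZE,MOUNTPOINT   # block devices with their dm types and mounts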


Re: VDO Device Excluded by a filter

Post by penguinpages » 2020/09/17 13:47:05

Note: the modification of /etc/lvm/lvm.conf was NOT the fix.

I was able to create a partition on the drive, but after a reboot and another attempt to run the HCI setup wizard, it again failed with the same error about the lock.

And sadly... I gave up beating my head on what exactly Ansible did. I had used the GUI to remove the mounts / drives / volumes, but the HCI deploy makes some other change that the GUI does not know to clean out... leaving me with a fubar system.

Reloaded the OS... reset the node back onto SSH / network... blah blah...

HCI wizard :) ran without issue.

Off to the next step: the HCI setup with three nodes, back to what I had on CentOS 7 via the manual build process :)


[SOLVED] VDO Device Excluded by a filter

Post by penguinpages » 2020/09/22 20:13:37

Just wanted to follow up on this posting. I had help from the Gluster and oVirt teams to work through this...

Summary: the oVirt HCI wizard may require an LVM filter entry for your data drives, and sometimes, on cleanup after failing for other reasons (such as SELinux being required, data on ANY drive in the system besides the OS drive, etc.), you will fail with the "excluded by a filter" error.

This is NOT a case of the drive having a filter that needs to be removed; it is that the filter lacks the drive and needs it added.

I do want to make clear here that to get around the error you must ADD (not remove) drives to the filter in /etc/lvm/lvm.conf so the oVirt Gluster setup can finish laying out the drives.
[root@thor log]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 931.5G 0 disk
sdb 8:16 0 931.5G 0 disk
sdc 8:32 0 477G 0 disk
└─vdo_sdc 253:6 0 2.1T 0 vdo
├─gluster_vg_sdc-gluster_lv_engine 253:7 0 100G 0 lvm /gluster_bricks/engine
├─gluster_vg_sdc-gluster_thinpool_gluster_vg_sdc_tmeta 253:8 0 1G 0 lvm
│ └─gluster_vg_sdc-gluster_thinpool_gluster_vg_sdc-tpool 253:10 0 2T 0 lvm
│ ├─gluster_vg_sdc-gluster_thinpool_gluster_vg_sdc 253:11 0 2T 1 lvm
│ ├─gluster_vg_sdc-gluster_lv_data 253:12 0 1000G 0 lvm /gluster_bricks/data
│ ├─gluster_vg_sdc-gluster_lv_vmstore 253:13 0 1000G 0 lvm /gluster_bricks/vmstore
│ └─gluster_vg_sdc-gluster_lv_iso 253:14 0 50G 0 lvm /gluster_bricks/iso
└─gluster_vg_sdc-gluster_thinpool_gluster_vg_sdc_tdata 253:9 0 2T 0 lvm
└─gluster_vg_sdc-gluster_thinpool_gluster_vg_sdc-tpool 253:10 0 2T 0 lvm
├─gluster_vg_sdc-gluster_thinpool_gluster_vg_sdc 253:11 0 2T 1 lvm
├─gluster_vg_sdc-gluster_lv_data 253:12 0 1000G 0 lvm /gluster_bricks/data
├─gluster_vg_sdc-gluster_lv_vmstore 253:13 0 1000G 0 lvm /gluster_bricks/vmstore
└─gluster_vg_sdc-gluster_lv_iso 253:14 0 50G 0 lvm /gluster_bricks/iso
sdd 8:48 1 58.8G 0 disk
├─sdd1 8:49 1 1G 0 part /boot
└─sdd2 8:50 1 57.8G 0 part
├─cl-root 253:0 0 36.1G 0 lvm /
├─cl-swap 253:1 0 4G 0 lvm [SWAP]
└─cl-home 253:2 0 17.6G 0 lvm /home
[root@thor log]# ls -al /dev/disk/by-id/ |grep sdc | grep wwn-
lrwxrwxrwx. 1 root root 9 Sep 22 10:02 wwn-0x50025388400e2fab -> ../../sdc
[root@thor log]# ls -al /dev/disk/by-id/ |grep sda | grep wwn-
lrwxrwxrwx. 1 root root 9 Sep 22 10:02 wwn-0x5001b448b847be41 -> ../../sda
[root@thor log]# ls -al /dev/disk/by-id/ |grep sdb | grep wwn-
lrwxrwxrwx. 1 root root 9 Sep 22 10:02 wwn-0x5001b448b8efe084 -> ../../sdb


[root@thor log]# cat /etc/lvm/lvm.conf |grep filter
# Broken for gluster in oVirt
#filter = ["a|^/dev/disk/by-id/lvm-pv-uuid-AAHPao-R62q-8aac-410x-ZdA7-UL4i-Bh2bwJ$|", "a|^/dev/disk/by-id/lvm-pv-uuid-bSnFU3-jtUj-AGds-07sw-zdYC-52fM-mujuvC$|", "r|.*|"]
# working for gluster wizard in oVirt
filter = ["a|^/dev/disk/by-id/lvm-pv-uuid-AAHPao-R62q-8aac-410x-ZdA7-UL4i-Bh2bwJ$|", "a|^/dev/disk/by-id/lvm-pv-uuid-bSnFU3-jtUj-AGds-07sw-zdYC-52fM-mujuvC$|", "a|^/dev/disk/by-id/wwn-0x5001b448b847be41$|", "r|.*|"]
# Go to oVirt GUI and create partition + VDO + LVM then lay down gluster
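In other words, the working pattern is to look up the stable /dev/disk/by-id path for each local data disk the wizard needs and append an accept ("a|...|") entry for it before the final reject ("r|.*|") rule. A rough sketch of the lookup, using this host's devices as the example:

Code: Select all

# Find the stable wwn-based by-id path for a local data disk, e.g. /dev/sdc
ls -al /dev/disk/by-id/ | grep sdc | grep wwn-
#   wwn-0x50025388400e2fab -> ../../sdc
# Then add an accept rule for it in /etc/lvm/lvm.conf, BEFORE the final "r|.*|" reject, e.g.:
#   "a|^/dev/disk/by-id/wwn-0x50025388400e2fab$|"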

# end result
[root@thor log]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 931.5G 0 disk
├─WDC_WDS100T2B0B-00YS70_19106A802926 253:3 0 931.5G 0 mpath
└─vdo0_2926 253:5 0 931.5G 0 vdo
sdb 8:16 0 931.5G 0 disk
└─WDC_WDS100T2B0B-00YS70_192490801828 253:4 0 931.5G 0 mpath
sdc 8:32 0 477G 0 disk
└─vdo_sdc 253:6 0 2.1T 0 vdo
├─gluster_vg_sdc-gluster_lv_engine 253:7 0 100G 0 lvm /gluster_bricks/engine
├─gluster_vg_sdc-gluster_thinpool_gluster_vg_sdc_tmeta 253:8 0 1G 0 lvm
│ └─gluster_vg_sdc-gluster_thinpool_gluster_vg_sdc-tpool 253:10 0 2T 0 lvm
│ ├─gluster_vg_sdc-gluster_thinpool_gluster_vg_sdc 253:11 0 2T 1 lvm
│ ├─gluster_vg_sdc-gluster_lv_data 253:12 0 1000G 0 lvm /gluster_bricks/data
│ ├─gluster_vg_sdc-gluster_lv_vmstore 253:13 0 1000G 0 lvm /gluster_bricks/vmstore
│ └─gluster_vg_sdc-gluster_lv_iso 253:14 0 50G 0 lvm /gluster_bricks/iso
└─gluster_vg_sdc-gluster_thinpool_gluster_vg_sdc_tdata 253:9 0 2T 0 lvm
└─gluster_vg_sdc-gluster_thinpool_gluster_vg_sdc-tpool 253:10 0 2T 0 lvm
├─gluster_vg_sdc-gluster_thinpool_gluster_vg_sdc 253:11 0 2T 1 lvm
├─gluster_vg_sdc-gluster_lv_data 253:12 0 1000G 0 lvm /gluster_bricks/data
├─gluster_vg_sdc-gluster_lv_vmstore 253:13 0 1000G 0 lvm /gluster_bricks/vmstore
└─gluster_vg_sdc-gluster_lv_iso 253:14 0 50G 0 lvm /gluster_bricks/iso
sdd 8:48 1 58.8G 0 disk
├─sdd1 8:49 1 1G 0 part /boot
└─sdd2 8:50 1 57.8G 0 part
├─cl-root 253:0 0 36.1G 0 lvm /
├─cl-swap 253:1 0 4G 0 lvm [SWAP]
└─cl-home 253:2 0 17.6G 0 lvm /home


Hope that saves someone many hours of head beating.
