ansible-playbook [core 2.17.14]
  config file = None
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.12/site-packages/ansible
  ansible collection location = /tmp/collections-IJC
  executable location = /usr/local/bin/ansible-playbook
  python version = 3.12.11 (main, Aug 14 2025, 00:00:00) [GCC 11.5.0 20240719 (Red Hat 11.5.0-11)] (/usr/bin/python3.12)
  jinja version = 3.1.6
  libyaml = True
No config file found; using defaults
running playbook inside collection fedora.linux_system_roles
Skipping callback 'debug', as we already have a stdout callback.
Skipping callback 'json', as we already have a stdout callback.
Skipping callback 'jsonl', as we already have a stdout callback.
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.

PLAYBOOK: tests_default.yml ****************************************************
1 plays in /tmp/collections-IJC/ansible_collections/fedora/linux_system_roles/tests/hpc/tests_default.yml

PLAY [Ensure that the role runs with default parameters] ***********************

TASK [Run the role] ************************************************************
task path: /tmp/collections-IJC/ansible_collections/fedora/linux_system_roles/tests/hpc/tests_default.yml:19
Saturday 13 September 2025 07:20:54 -0400 (0:00:00.041) 0:00:00.041 ****
included: fedora.linux_system_roles.hpc for managed-node1

TASK [fedora.linux_system_roles.hpc : Set platform/version specific variables] ***
task path: /tmp/collections-IJC/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:3
Saturday 13 September 2025 07:20:54 -0400 (0:00:00.050) 0:00:00.092 ****
included: /tmp/collections-IJC/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/set_vars.yml for managed-node1

TASK [fedora.linux_system_roles.hpc : Ensure ansible_facts used by role] *******
task path: /tmp/collections-IJC/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/set_vars.yml:2
Saturday 13 September 2025 07:20:54 -0400 (0:00:00.015) 0:00:00.107 ****
[WARNING]: Platform linux on host managed-node1 is using the discovered Python
interpreter at /usr/bin/python3.9, but future installation of another Python
interpreter could change the meaning of that path. See
https://docs.ansible.com/ansible-core/2.17/reference_appendices/interpreter_discovery.html
for more information.
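Note on the interpreter-discovery warning above: it can be avoided by pinning the interpreter explicitly for the host. A minimal sketch of an inventory entry, using the path reported in the warning (hypothetical snippet, not part of the actual test setup):

# Hypothetical inventory snippet: pin the interpreter reported in the WARNING
# so its meaning does not change if another Python is installed later.
all:
  hosts:
    managed-node1:
      ansible_python_interpreter: /usr/bin/python3.9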
ok: [managed-node1] TASK [fedora.linux_system_roles.hpc : Check if system is ostree] *************** task path: /tmp/collections-IJC/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/set_vars.yml:10 Saturday 13 September 2025 07:20:55 -0400 (0:00:00.973) 0:00:01.081 **** ok: [managed-node1] => { "changed": false, "stat": { "exists": false } } TASK [fedora.linux_system_roles.hpc : Set flag to indicate system is ostree] *** task path: /tmp/collections-IJC/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/set_vars.yml:15 Saturday 13 September 2025 07:20:55 -0400 (0:00:00.411) 0:00:01.492 **** ok: [managed-node1] => { "ansible_facts": { "__hpc_is_ostree": false }, "changed": false } TASK [fedora.linux_system_roles.hpc : Set platform/version specific variables] *** task path: /tmp/collections-IJC/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/set_vars.yml:19 Saturday 13 September 2025 07:20:55 -0400 (0:00:00.021) 0:00:01.514 **** skipping: [managed-node1] => (item=RedHat.yml) => { "ansible_loop_var": "item", "changed": false, "false_condition": "__vars_file is file", "item": "RedHat.yml", "skip_reason": "Conditional result was False" } skipping: [managed-node1] => (item=CentOS.yml) => { "ansible_loop_var": "item", "changed": false, "false_condition": "__vars_file is file", "item": "CentOS.yml", "skip_reason": "Conditional result was False" } ok: [managed-node1] => (item=CentOS_9.yml) => { "ansible_facts": { "__template_packages": [], "__template_services": [] }, "ansible_included_var_files": [ "/tmp/collections-IJC/ansible_collections/fedora/linux_system_roles/roles/hpc/vars/CentOS_9.yml" ], "ansible_loop_var": "item", "changed": false, "item": "CentOS_9.yml" } ok: [managed-node1] => (item=CentOS_9.yml) => { "ansible_facts": { "__template_packages": [], "__template_services": [] }, "ansible_included_var_files": [ "/tmp/collections-IJC/ansible_collections/fedora/linux_system_roles/roles/hpc/vars/CentOS_9.yml" ], "ansible_loop_var": "item", "changed": false, "item": "CentOS_9.yml" } TASK [fedora.linux_system_roles.hpc : Deploy the GPG key for RHEL EPEL repository] *** task path: /tmp/collections-IJC/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:6 Saturday 13 September 2025 07:20:55 -0400 (0:00:00.036) 0:00:01.550 **** ok: [managed-node1] => { "changed": false } TASK [fedora.linux_system_roles.hpc : Install EPEL release package] ************ task path: /tmp/collections-IJC/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:12 Saturday 13 September 2025 07:20:56 -0400 (0:00:00.932) 0:00:02.482 **** ok: [managed-node1] => { "changed": false, "rc": 0, "results": [ "Installed /root/.ansible/tmp/ansible-tmp-1757762456.8586993-8273-189357672586394/epel-release-latest-9.noarchp7udbflk.rpm" ] } MSG: Nothing to do lsrpackages: https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm TASK [fedora.linux_system_roles.hpc : Deploy the GPG key for NVIDIA repositories] *** task path: /tmp/collections-IJC/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:18 Saturday 13 September 2025 07:20:58 -0400 (0:00:01.633) 0:00:04.116 **** changed: [managed-node1] => { "changed": true } TASK [fedora.linux_system_roles.hpc : Configure the NVIDIA CUDA repository] **** task path: /tmp/collections-IJC/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:23 Saturday 13 September 2025 07:20:58 -0400 (0:00:00.469) 0:00:04.586 **** redirecting (type: action) ansible.builtin.yum to 
ansible.builtin.dnf changed: [managed-node1] => { "changed": true, "repo": "nvidia-cuda", "state": "present" } TASK [fedora.linux_system_roles.hpc : Install lvm2 to get lvs command] ********* task path: /tmp/collections-IJC/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:33 Saturday 13 September 2025 07:20:59 -0400 (0:00:00.412) 0:00:04.998 **** skipping: [managed-node1] => { "changed": false, "false_condition": "hpc_manage_storage", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.hpc : Get current LV size of rootlv] *********** task path: /tmp/collections-IJC/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:39 Saturday 13 September 2025 07:20:59 -0400 (0:00:00.014) 0:00:05.012 **** skipping: [managed-node1] => { "changed": false, "false_condition": "hpc_manage_storage", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.hpc : Get current LV size of usrlv] ************ task path: /tmp/collections-IJC/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:46 Saturday 13 September 2025 07:20:59 -0400 (0:00:00.011) 0:00:05.024 **** skipping: [managed-node1] => { "changed": false, "false_condition": "hpc_manage_storage", "skip_reason": "Conditional result was False" } TASK [Configure storage] ******************************************************* task path: /tmp/collections-IJC/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:53 Saturday 13 September 2025 07:20:59 -0400 (0:00:00.009) 0:00:05.034 **** skipping: [managed-node1] => { "changed": false, "false_condition": "hpc_manage_storage", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.hpc : Update RHUI packages from Microsoft repositories] *** task path: /tmp/collections-IJC/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:81 Saturday 13 September 2025 07:20:59 -0400 (0:00:00.010) 0:00:05.044 **** skipping: [managed-node1] => { "changed": false, "false_condition": "ansible_system_vendor == \"Microsoft Corporation\"", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.hpc : Explicitly install kernel-devel and kernel-headers packages matching the currently running kernel] *** task path: /tmp/collections-IJC/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:89 Saturday 13 September 2025 07:20:59 -0400 (0:00:00.010) 0:00:05.055 **** ok: [managed-node1] => { "changed": false, "rc": 0, "results": [] } MSG: Nothing to do lsrpackages: kernel-devel-5.14.0-612.el9.x86_64 kernel-headers-5.14.0-612.el9.x86_64 TASK [fedora.linux_system_roles.hpc : Ensure that dnf-command(versionlock) is installed] *** task path: /tmp/collections-IJC/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:99 Saturday 13 September 2025 07:21:01 -0400 (0:00:02.264) 0:00:07.319 **** changed: [managed-node1] => { "changed": true, "rc": 0, "results": [ "Installed: python3-dnf-plugin-versionlock-4.3.0-22.el9.noarch" ] } lsrpackages: dnf-command(versionlock) TASK [fedora.linux_system_roles.hpc : Check if kernel versionlock entries exist] *** task path: /tmp/collections-IJC/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:104 Saturday 13 September 2025 07:21:03 -0400 (0:00:01.811) 0:00:09.131 **** ok: [managed-node1] => { "changed": false, "stat": { "atime": 1662723770.0, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 0, "charset": "binary", "checksum": 
"da39a3ee5e6b4b0d3255bfef95601890afd80709", "ctime": 1757762463.0121915, "dev": 51713, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 9210254, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "inode/x-empty", "mode": "0644", "mtime": 1662723770.0, "nlink": 1, "path": "/etc/dnf/plugins/versionlock.list", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 0, "uid": 0, "version": "158434453", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false } } TASK [fedora.linux_system_roles.hpc : Get content of versionlock file] ********* task path: /tmp/collections-IJC/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:109 Saturday 13 September 2025 07:21:03 -0400 (0:00:00.393) 0:00:09.524 **** ok: [managed-node1] => { "changed": false, "cmd": [ "cat", "/etc/dnf/plugins/versionlock.list" ], "delta": "0:00:00.002972", "end": "2025-09-13 07:21:04.206058", "rc": 0, "start": "2025-09-13 07:21:04.203086" } TASK [fedora.linux_system_roles.hpc : Prevent installation of all kernel packages of a different version] *** task path: /tmp/collections-IJC/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:119 Saturday 13 September 2025 07:21:04 -0400 (0:00:00.410) 0:00:09.935 **** changed: [managed-node1] => (item=kernel) => { "ansible_loop_var": "item", "changed": true, "cmd": [ "dnf", "versionlock", "add", "kernel" ], "delta": "0:00:00.647265", "end": "2025-09-13 07:21:05.187580", "item": "kernel", "rc": 0, "start": "2025-09-13 07:21:04.540315" } STDOUT: Last metadata expiration check: 0:00:04 ago on Sat 13 Sep 2025 07:21:00 AM EDT. Adding versionlock on: kernel-0:5.14.0-612.el9.* changed: [managed-node1] => (item=kernel-core) => { "ansible_loop_var": "item", "changed": true, "cmd": [ "dnf", "versionlock", "add", "kernel-core" ], "delta": "0:00:00.647039", "end": "2025-09-13 07:21:06.152403", "item": "kernel-core", "rc": 0, "start": "2025-09-13 07:21:05.505364" } STDOUT: Last metadata expiration check: 0:00:05 ago on Sat 13 Sep 2025 07:21:00 AM EDT. Adding versionlock on: kernel-core-0:5.14.0-612.el9.* changed: [managed-node1] => (item=kernel-modules) => { "ansible_loop_var": "item", "changed": true, "cmd": [ "dnf", "versionlock", "add", "kernel-modules" ], "delta": "0:00:00.646124", "end": "2025-09-13 07:21:07.114987", "item": "kernel-modules", "rc": 0, "start": "2025-09-13 07:21:06.468863" } STDOUT: Last metadata expiration check: 0:00:06 ago on Sat 13 Sep 2025 07:21:00 AM EDT. Adding versionlock on: kernel-modules-0:5.14.0-612.el9.* changed: [managed-node1] => (item=kernel-modules-extra) => { "ansible_loop_var": "item", "changed": true, "cmd": [ "dnf", "versionlock", "add", "kernel-modules-extra" ], "delta": "0:00:00.649510", "end": "2025-09-13 07:21:08.078537", "item": "kernel-modules-extra", "rc": 0, "start": "2025-09-13 07:21:07.429027" } STDOUT: Last metadata expiration check: 0:00:07 ago on Sat 13 Sep 2025 07:21:00 AM EDT. 
Adding versionlock on: kernel-modules-extra-0:5.14.0-604.el9.* Adding versionlock on: kernel-modules-extra-0:5.14.0-612.el9.* Adding versionlock on: kernel-modules-extra-0:5.14.0-605.el9.* Adding versionlock on: kernel-modules-extra-0:5.14.0-603.el9.* Adding versionlock on: kernel-modules-extra-0:5.14.0-611.el9.* changed: [managed-node1] => (item=kernel-devel) => { "ansible_loop_var": "item", "changed": true, "cmd": [ "dnf", "versionlock", "add", "kernel-devel" ], "delta": "0:00:00.642826", "end": "2025-09-13 07:21:09.040845", "item": "kernel-devel", "rc": 0, "start": "2025-09-13 07:21:08.398019" } STDOUT: Last metadata expiration check: 0:00:08 ago on Sat 13 Sep 2025 07:21:00 AM EDT. Adding versionlock on: kernel-devel-0:5.14.0-612.el9.* changed: [managed-node1] => (item=kernel-headers) => { "ansible_loop_var": "item", "changed": true, "cmd": [ "dnf", "versionlock", "add", "kernel-headers" ], "delta": "0:00:00.655340", "end": "2025-09-13 07:21:10.013588", "item": "kernel-headers", "rc": 0, "start": "2025-09-13 07:21:09.358248" } STDOUT: Last metadata expiration check: 0:00:09 ago on Sat 13 Sep 2025 07:21:00 AM EDT. Adding versionlock on: kernel-headers-0:5.14.0-612.el9.* TASK [fedora.linux_system_roles.hpc : Enable proprietary nvidia-driver] ******** task path: /tmp/collections-IJC/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:128 Saturday 13 September 2025 07:21:10 -0400 (0:00:05.812) 0:00:15.747 **** fatal: [managed-node1]: FAILED! => { "changed": false, "failures": [], "rc": 1, "results": [] } MSG: Depsolve Error occurred: Problem 1: cannot install the best candidate for the job - nothing provides dkms >= 3.1.8 needed by kmod-nvidia-latest-dkms-3:575.57.08-1.el9.x86_64 from nvidia-cuda Problem 2: package nvidia-kmod-common-3:575.57.08-1.el9.noarch from nvidia-cuda requires nvidia-kmod = 3:575.57.08, but none of the providers can be installed - cannot install the best candidate for the job - package kmod-nvidia-575.57.08-5.14.0-570.22.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering - package kmod-nvidia-575.57.08-5.14.0-570.23.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering - package kmod-nvidia-575.57.08-5.14.0-570.24.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering - package kmod-nvidia-575.57.08-5.14.0-570.25.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering - package kmod-nvidia-575.57.08-5.14.0-570.26.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering - nothing provides dkms >= 3.1.8 needed by kmod-nvidia-latest-dkms-3:575.57.08-1.el9.x86_64 from nvidia-cuda - package kmod-nvidia-open-dkms-3:575.57.08-1.el9.noarch from nvidia-cuda is filtered out by modular filtering - nothing provides dkms >= 3.1.8 needed by kmod-nvidia-open-dkms-3:575.57.08-1.el9.noarch from nvidia-cuda Problem 3: package nvidia-driver-3:575.57.08-1.el9.x86_64 from nvidia-cuda requires nvidia-kmod-common = 3:575.57.08, but none of the providers can be installed - package nvidia-kmod-common-3:575.57.08-1.el9.noarch from nvidia-cuda requires nvidia-kmod = 3:575.57.08, but none of the providers can be installed - cannot install the best candidate for the job - package kmod-nvidia-575.57.08-5.14.0-570.22.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering - package kmod-nvidia-575.57.08-5.14.0-570.23.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering - package 
kmod-nvidia-575.57.08-5.14.0-570.24.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering - package kmod-nvidia-575.57.08-5.14.0-570.25.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering - package kmod-nvidia-575.57.08-5.14.0-570.26.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering - nothing provides dkms >= 3.1.8 needed by kmod-nvidia-latest-dkms-3:575.57.08-1.el9.x86_64 from nvidia-cuda - package kmod-nvidia-open-dkms-3:575.57.08-1.el9.noarch from nvidia-cuda is filtered out by modular filtering - nothing provides dkms >= 3.1.8 needed by kmod-nvidia-open-dkms-3:575.57.08-1.el9.noarch from nvidia-cuda Problem 4: package nvidia-driver-cuda-3:575.57.08-1.el9.x86_64 from nvidia-cuda requires nvidia-kmod-common = 3:575.57.08, but none of the providers can be installed - package nvidia-kmod-common-3:575.57.08-1.el9.noarch from nvidia-cuda requires nvidia-kmod = 3:575.57.08, but none of the providers can be installed - cannot install the best candidate for the job - package kmod-nvidia-575.57.08-5.14.0-570.22.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering - package kmod-nvidia-575.57.08-5.14.0-570.23.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering - package kmod-nvidia-575.57.08-5.14.0-570.24.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering - package kmod-nvidia-575.57.08-5.14.0-570.25.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering - package kmod-nvidia-575.57.08-5.14.0-570.26.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering - nothing provides dkms >= 3.1.8 needed by kmod-nvidia-latest-dkms-3:575.57.08-1.el9.x86_64 from nvidia-cuda - package kmod-nvidia-open-dkms-3:575.57.08-1.el9.noarch from nvidia-cuda is filtered out by modular filtering - nothing provides dkms >= 3.1.8 needed by kmod-nvidia-open-dkms-3:575.57.08-1.el9.noarch from nvidia-cuda Problem 5: package nvidia-driver-3:575.57.08-1.el9.x86_64 from nvidia-cuda requires nvidia-kmod-common = 3:575.57.08, but none of the providers can be installed - package nvidia-settings-3:575.57.08-1.el9.x86_64 from nvidia-cuda requires nvidia-driver(x86-64) = 3:575.57.08, but none of the providers can be installed - package nvidia-kmod-common-3:575.57.08-1.el9.noarch from nvidia-cuda requires nvidia-kmod = 3:575.57.08, but none of the providers can be installed - cannot install the best candidate for the job - package kmod-nvidia-575.57.08-5.14.0-570.22.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering - package kmod-nvidia-575.57.08-5.14.0-570.23.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering - package kmod-nvidia-575.57.08-5.14.0-570.24.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering - package kmod-nvidia-575.57.08-5.14.0-570.25.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering - package kmod-nvidia-575.57.08-5.14.0-570.26.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering - nothing provides dkms >= 3.1.8 needed by kmod-nvidia-latest-dkms-3:575.57.08-1.el9.x86_64 from nvidia-cuda - package kmod-nvidia-open-dkms-3:575.57.08-1.el9.noarch from nvidia-cuda is filtered out by modular filtering - nothing provides dkms >= 3.1.8 needed by kmod-nvidia-open-dkms-3:575.57.08-1.el9.noarch from nvidia-cuda Problem 6: package 
nvidia-driver-3:575.57.08-1.el9.x86_64 from nvidia-cuda requires nvidia-kmod-common = 3:575.57.08, but none of the providers can be installed - package xorg-x11-nvidia-3:575.57.08-1.el9.x86_64 from nvidia-cuda requires nvidia-driver(x86-64) = 3:575.57.08, but none of the providers can be installed - package nvidia-kmod-common-3:575.57.08-1.el9.noarch from nvidia-cuda requires nvidia-kmod = 3:575.57.08, but none of the providers can be installed - cannot install the best candidate for the job - package kmod-nvidia-575.57.08-5.14.0-570.22.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering - package kmod-nvidia-575.57.08-5.14.0-570.23.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering - package kmod-nvidia-575.57.08-5.14.0-570.24.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering - package kmod-nvidia-575.57.08-5.14.0-570.25.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering - package kmod-nvidia-575.57.08-5.14.0-570.26.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering - nothing provides dkms >= 3.1.8 needed by kmod-nvidia-latest-dkms-3:575.57.08-1.el9.x86_64 from nvidia-cuda - package kmod-nvidia-open-dkms-3:575.57.08-1.el9.noarch from nvidia-cuda is filtered out by modular filtering - nothing provides dkms >= 3.1.8 needed by kmod-nvidia-open-dkms-3:575.57.08-1.el9.noarch from nvidia-cuda Problem 7: package xorg-x11-nvidia-3:575.57.08-1.el9.x86_64 from nvidia-cuda requires nvidia-driver(x86-64) = 3:575.57.08, but none of the providers can be installed - package nvidia-driver-3:575.57.08-1.el9.x86_64 from nvidia-cuda requires nvidia-kmod-common = 3:575.57.08, but none of the providers can be installed - package nvidia-xconfig-3:575.57.08-1.el9.x86_64 from nvidia-cuda requires xorg-x11-nvidia(x86-64) >= 3:575.57.08, but none of the providers can be installed - package nvidia-kmod-common-3:575.57.08-1.el9.noarch from nvidia-cuda requires nvidia-kmod = 3:575.57.08, but none of the providers can be installed - cannot install the best candidate for the job - package kmod-nvidia-575.57.08-5.14.0-570.22.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering - package kmod-nvidia-575.57.08-5.14.0-570.23.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering - package kmod-nvidia-575.57.08-5.14.0-570.24.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering - package kmod-nvidia-575.57.08-5.14.0-570.25.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering - package kmod-nvidia-575.57.08-5.14.0-570.26.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering - nothing provides dkms >= 3.1.8 needed by kmod-nvidia-latest-dkms-3:575.57.08-1.el9.x86_64 from nvidia-cuda - package kmod-nvidia-open-dkms-3:575.57.08-1.el9.noarch from nvidia-cuda is filtered out by modular filtering - nothing provides dkms >= 3.1.8 needed by kmod-nvidia-open-dkms-3:575.57.08-1.el9.noarch from nvidia-cuda - package xorg-x11-nvidia-3:580.65.06-1.el9.x86_64 from nvidia-cuda is filtered out by modular filtering - package xorg-x11-nvidia-3:580.82.07-1.el9.x86_64 from nvidia-cuda is filtered out by modular filtering TASK [Cleanup] ***************************************************************** task path: /tmp/collections-IJC/ansible_collections/fedora/linux_system_roles/tests/hpc/tests_default.yml:23 Saturday 13 September 2025 07:21:11 
-0400 (0:00:01.363) 0:00:17.111 **** included: /tmp/collections-IJC/ansible_collections/fedora/linux_system_roles/tests/hpc/tasks/cleanup.yml for managed-node1 TASK [Check if versionlock entries exist] ************************************** task path: /tmp/collections-IJC/ansible_collections/fedora/linux_system_roles/tests/hpc/tasks/cleanup.yml:3 Saturday 13 September 2025 07:21:11 -0400 (0:00:00.020) 0:00:17.131 **** ok: [managed-node1] => { "changed": false, "stat": { "atime": 1757762471.2511985, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "us-ascii", "checksum": "f5c4b99153fc53eda2dc7ad8558ef3a95914af92", "ctime": 1757762469.9761975, "dev": 51713, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 9210254, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", "mode": "0644", "mtime": 1757762469.9761975, "nlink": 1, "path": "/etc/dnf/plugins/versionlock.list", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 609, "uid": 0, "version": "158434453", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false } } TASK [Clear dnf versionlock entries] ******************************************* task path: /tmp/collections-IJC/ansible_collections/fedora/linux_system_roles/tests/hpc/tasks/cleanup.yml:8 Saturday 13 September 2025 07:21:11 -0400 (0:00:00.487) 0:00:17.619 **** changed: [managed-node1] => { "changed": true, "cmd": [ "dnf", "versionlock", "clear" ], "delta": "0:00:00.564688", "end": "2025-09-13 07:21:12.784287", "rc": 0, "start": "2025-09-13 07:21:12.219599" } STDOUT: Last metadata expiration check: 0:00:12 ago on Sat 13 Sep 2025 07:21:00 AM EDT. 
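Note on the failure recorded above: the "Enable proprietary nvidia-driver" task installs the NVIDIA driver module stream through dnf; the journal at the end of this transcript shows the exact invocation (name=['@nvidia-driver:575-dkms']). A minimal standalone sketch for reproducing the depsolve error outside the role follows; the play wrapper is illustrative only, and the dkms dependency the error complains about is normally provided by EPEL on EL9.

# Sketch: replay the role's failing step as a standalone play.
# The module stream name is copied verbatim from the journal entry;
# the play wrapper itself is an assumption for illustration.
- name: Reproduce the nvidia-driver depsolve failure
  hosts: managed-node1
  tasks:
    - name: Enable proprietary nvidia-driver
      ansible.builtin.dnf:
        name: "@nvidia-driver:575-dkms"
        state: present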
PLAY RECAP ********************************************************************* managed-node1 : ok=18 changed=5 unreachable=0 failed=1 skipped=5 rescued=0 ignored=0 SYSTEM ROLES ERRORS BEGIN v1 [ { "ansible_version": "2.17.14", "end_time": "2025-09-13T11:21:11.428083+00:00Z", "host": "managed-node1", "message": "Depsolve Error occurred: \n Problem 1: cannot install the best candidate for the job\n - nothing provides dkms >= 3.1.8 needed by kmod-nvidia-latest-dkms-3:575.57.08-1.el9.x86_64 from nvidia-cuda\n Problem 2: package nvidia-kmod-common-3:575.57.08-1.el9.noarch from nvidia-cuda requires nvidia-kmod = 3:575.57.08, but none of the providers can be installed\n - cannot install the best candidate for the job\n - package kmod-nvidia-575.57.08-5.14.0-570.22.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering\n - package kmod-nvidia-575.57.08-5.14.0-570.23.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering\n - package kmod-nvidia-575.57.08-5.14.0-570.24.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering\n - package kmod-nvidia-575.57.08-5.14.0-570.25.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering\n - package kmod-nvidia-575.57.08-5.14.0-570.26.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering\n - nothing provides dkms >= 3.1.8 needed by kmod-nvidia-latest-dkms-3:575.57.08-1.el9.x86_64 from nvidia-cuda\n - package kmod-nvidia-open-dkms-3:575.57.08-1.el9.noarch from nvidia-cuda is filtered out by modular filtering\n - nothing provides dkms >= 3.1.8 needed by kmod-nvidia-open-dkms-3:575.57.08-1.el9.noarch from nvidia-cuda\n Problem 3: package nvidia-driver-3:575.57.08-1.el9.x86_64 from nvidia-cuda requires nvidia-kmod-common = 3:575.57.08, but none of the providers can be installed\n - package nvidia-kmod-common-3:575.57.08-1.el9.noarch from nvidia-cuda requires nvidia-kmod = 3:575.57.08, but none of the providers can be installed\n - cannot install the best candidate for the job\n - package kmod-nvidia-575.57.08-5.14.0-570.22.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering\n - package kmod-nvidia-575.57.08-5.14.0-570.23.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering\n - package kmod-nvidia-575.57.08-5.14.0-570.24.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering\n - package kmod-nvidia-575.57.08-5.14.0-570.25.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering\n - package kmod-nvidia-575.57.08-5.14.0-570.26.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering\n - nothing provides dkms >= 3.1.8 needed by kmod-nvidia-latest-dkms-3:575.57.08-1.el9.x86_64 from nvidia-cuda\n - package kmod-nvidia-open-dkms-3:575.57.08-1.el9.noarch from nvidia-cuda is filtered out by modular filtering\n - nothing provides dkms >= 3.1.8 needed by kmod-nvidia-open-dkms-3:575.57.08-1.el9.noarch from nvidia-cuda\n Problem 4: package nvidia-driver-cuda-3:575.57.08-1.el9.x86_64 from nvidia-cuda requires nvidia-kmod-common = 3:575.57.08, but none of the providers can be installed\n - package nvidia-kmod-common-3:575.57.08-1.el9.noarch from nvidia-cuda requires nvidia-kmod = 3:575.57.08, but none of the providers can be installed\n - cannot install the best candidate for the job\n - package kmod-nvidia-575.57.08-5.14.0-570.22.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular 
filtering\n - package kmod-nvidia-575.57.08-5.14.0-570.23.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering\n - package kmod-nvidia-575.57.08-5.14.0-570.24.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering\n - package kmod-nvidia-575.57.08-5.14.0-570.25.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering\n - package kmod-nvidia-575.57.08-5.14.0-570.26.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering\n - nothing provides dkms >= 3.1.8 needed by kmod-nvidia-latest-dkms-3:575.57.08-1.el9.x86_64 from nvidia-cuda\n - package kmod-nvidia-open-dkms-3:575.57.08-1.el9.noarch from nvidia-cuda is filtered out by modular filtering\n - nothing provides dkms >= 3.1.8 needed by kmod-nvidia-open-dkms-3:575.57.08-1.el9.noarch from nvidia-cuda\n Problem 5: package nvidia-driver-3:575.57.08-1.el9.x86_64 from nvidia-cuda requires nvidia-kmod-common = 3:575.57.08, but none of the providers can be installed\n - package nvidia-settings-3:575.57.08-1.el9.x86_64 from nvidia-cuda requires nvidia-driver(x86-64) = 3:575.57.08, but none of the providers can be installed\n - package nvidia-kmod-common-3:575.57.08-1.el9.noarch from nvidia-cuda requires nvidia-kmod = 3:575.57.08, but none of the providers can be installed\n - cannot install the best candidate for the job\n - package kmod-nvidia-575.57.08-5.14.0-570.22.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering\n - package kmod-nvidia-575.57.08-5.14.0-570.23.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering\n - package kmod-nvidia-575.57.08-5.14.0-570.24.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering\n - package kmod-nvidia-575.57.08-5.14.0-570.25.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering\n - package kmod-nvidia-575.57.08-5.14.0-570.26.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering\n - nothing provides dkms >= 3.1.8 needed by kmod-nvidia-latest-dkms-3:575.57.08-1.el9.x86_64 from nvidia-cuda\n - package kmod-nvidia-open-dkms-3:575.57.08-1.el9.noarch from nvidia-cuda is filtered out by modular filtering\n - nothing provides dkms >= 3.1.8 needed by kmod-nvidia-open-dkms-3:575.57.08-1.el9.noarch from nvidia-cuda\n Problem 6: package nvidia-driver-3:575.57.08-1.el9.x86_64 from nvidia-cuda requires nvidia-kmod-common = 3:575.57.08, but none of the providers can be installed\n - package xorg-x11-nvidia-3:575.57.08-1.el9.x86_64 from nvidia-cuda requires nvidia-driver(x86-64) = 3:575.57.08, but none of the providers can be installed\n - package nvidia-kmod-common-3:575.57.08-1.el9.noarch from nvidia-cuda requires nvidia-kmod = 3:575.57.08, but none of the providers can be installed\n - cannot install the best candidate for the job\n - package kmod-nvidia-575.57.08-5.14.0-570.22.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering\n - package kmod-nvidia-575.57.08-5.14.0-570.23.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering\n - package kmod-nvidia-575.57.08-5.14.0-570.24.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering\n - package kmod-nvidia-575.57.08-5.14.0-570.25.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering\n - package kmod-nvidia-575.57.08-5.14.0-570.26.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular 
filtering\n - nothing provides dkms >= 3.1.8 needed by kmod-nvidia-latest-dkms-3:575.57.08-1.el9.x86_64 from nvidia-cuda\n - package kmod-nvidia-open-dkms-3:575.57.08-1.el9.noarch from nvidia-cuda is filtered out by modular filtering\n - nothing provides dkms >= 3.1.8 needed by kmod-nvidia-open-dkms-3:575.57.08-1.el9.noarch from nvidia-cuda\n Problem 7: package xorg-x11-nvidia-3:575.57.08-1.el9.x86_64 from nvidia-cuda requires nvidia-driver(x86-64) = 3:575.57.08, but none of the providers can be installed\n - package nvidia-driver-3:575.57.08-1.el9.x86_64 from nvidia-cuda requires nvidia-kmod-common = 3:575.57.08, but none of the providers can be installed\n - package nvidia-xconfig-3:575.57.08-1.el9.x86_64 from nvidia-cuda requires xorg-x11-nvidia(x86-64) >= 3:575.57.08, but none of the providers can be installed\n - package nvidia-kmod-common-3:575.57.08-1.el9.noarch from nvidia-cuda requires nvidia-kmod = 3:575.57.08, but none of the providers can be installed\n - cannot install the best candidate for the job\n - package kmod-nvidia-575.57.08-5.14.0-570.22.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering\n - package kmod-nvidia-575.57.08-5.14.0-570.23.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering\n - package kmod-nvidia-575.57.08-5.14.0-570.24.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering\n - package kmod-nvidia-575.57.08-5.14.0-570.25.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering\n - package kmod-nvidia-575.57.08-5.14.0-570.26.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering\n - nothing provides dkms >= 3.1.8 needed by kmod-nvidia-latest-dkms-3:575.57.08-1.el9.x86_64 from nvidia-cuda\n - package kmod-nvidia-open-dkms-3:575.57.08-1.el9.noarch from nvidia-cuda is filtered out by modular filtering\n - nothing provides dkms >= 3.1.8 needed by kmod-nvidia-open-dkms-3:575.57.08-1.el9.noarch from nvidia-cuda\n - package xorg-x11-nvidia-3:580.65.06-1.el9.x86_64 from nvidia-cuda is filtered out by modular filtering\n - package xorg-x11-nvidia-3:580.82.07-1.el9.x86_64 from nvidia-cuda is filtered out by modular filtering", "rc": 1, "start_time": "2025-09-13T11:21:10.067792+00:00Z", "task_name": "Enable proprietary nvidia-driver", "task_path": "/tmp/collections-IJC/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:128" } ] SYSTEM ROLES ERRORS END v1 TASKS RECAP ******************************************************************** Saturday 13 September 2025 07:21:12 -0400 (0:00:00.892) 0:00:18.512 **** =============================================================================== fedora.linux_system_roles.hpc : Prevent installation of all kernel packages of a different version --- 5.81s /tmp/collections-IJC/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:119 fedora.linux_system_roles.hpc : Explicitly install kernel-devel and kernel-headers packages matching the currently running kernel --- 2.26s /tmp/collections-IJC/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:89 fedora.linux_system_roles.hpc : Ensure that dnf-command(versionlock) is installed --- 1.81s /tmp/collections-IJC/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:99 fedora.linux_system_roles.hpc : Install EPEL release package ------------ 1.63s /tmp/collections-IJC/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:12 fedora.linux_system_roles.hpc : 
Enable proprietary nvidia-driver -------- 1.36s /tmp/collections-IJC/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:128 fedora.linux_system_roles.hpc : Ensure ansible_facts used by role ------- 0.97s /tmp/collections-IJC/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/set_vars.yml:2 fedora.linux_system_roles.hpc : Deploy the GPG key for RHEL EPEL repository --- 0.93s /tmp/collections-IJC/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:6 Clear dnf versionlock entries ------------------------------------------- 0.89s /tmp/collections-IJC/ansible_collections/fedora/linux_system_roles/tests/hpc/tasks/cleanup.yml:8 Check if versionlock entries exist -------------------------------------- 0.49s /tmp/collections-IJC/ansible_collections/fedora/linux_system_roles/tests/hpc/tasks/cleanup.yml:3 fedora.linux_system_roles.hpc : Deploy the GPG key for NVIDIA repositories --- 0.47s /tmp/collections-IJC/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:18 fedora.linux_system_roles.hpc : Configure the NVIDIA CUDA repository ---- 0.41s /tmp/collections-IJC/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:23 fedora.linux_system_roles.hpc : Check if system is ostree --------------- 0.41s /tmp/collections-IJC/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/set_vars.yml:10 fedora.linux_system_roles.hpc : Get content of versionlock file --------- 0.41s /tmp/collections-IJC/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:109 fedora.linux_system_roles.hpc : Check if kernel versionlock entries exist --- 0.39s /tmp/collections-IJC/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:104 Run the role ------------------------------------------------------------ 0.05s /tmp/collections-IJC/ansible_collections/fedora/linux_system_roles/tests/hpc/tests_default.yml:19 fedora.linux_system_roles.hpc : Set platform/version specific variables --- 0.04s /tmp/collections-IJC/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/set_vars.yml:19 fedora.linux_system_roles.hpc : Set flag to indicate system is ostree --- 0.02s /tmp/collections-IJC/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/set_vars.yml:15 Cleanup ----------------------------------------------------------------- 0.02s /tmp/collections-IJC/ansible_collections/fedora/linux_system_roles/tests/hpc/tests_default.yml:23 fedora.linux_system_roles.hpc : Set platform/version specific variables --- 0.02s /tmp/collections-IJC/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:3 fedora.linux_system_roles.hpc : Install lvm2 to get lvs command --------- 0.01s /tmp/collections-IJC/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:33 Sep 13 07:20:53 managed-node1 python3.9[8129]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Sep 13 07:20:53 managed-node1 sshd[8182]: Accepted publickey for root from 10.31.10.60 port 56180 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Sep 13 07:20:53 managed-node1 systemd-logind[608]: New session 13 of user root. ░░ Subject: A new session 13 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 13 has been created for the user root. ░░ ░░ The leading process of the session is 8182. 
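The NVIDIA repository setup performed earlier in the play can be read back from the module parameters in the journal entries that follow (ansible-rpm_key and ansible-yum_repository). A sketch of equivalent tasks, reconstructed from those logged parameters rather than taken from the role's source:

# Sketch reconstructed from the rpm_key/yum_repository invocations logged below.
- name: Deploy the GPG key for NVIDIA repositories
  ansible.builtin.rpm_key:
    key: https://developer.download.nvidia.com/compute/cuda/repos/rhel9/x86_64/D42D0685.pub
    state: present

- name: Configure the NVIDIA CUDA repository
  ansible.builtin.yum_repository:
    name: nvidia-cuda
    description: NVIDIA CUDA repository
    baseurl: https://developer.download.nvidia.com/compute/cuda/repos/rhel9/x86_64
    gpgcheck: true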
Sep 13 07:20:53 managed-node1 systemd[1]: Started Session 13 of User root. ░░ Subject: A start job for unit session-13.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-13.scope has finished successfully. ░░ ░░ The job identifier is 1591. Sep 13 07:20:53 managed-node1 sshd[8182]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Sep 13 07:20:53 managed-node1 sshd[8185]: Received disconnect from 10.31.10.60 port 56180:11: disconnected by user Sep 13 07:20:53 managed-node1 sshd[8185]: Disconnected from user root 10.31.10.60 port 56180 Sep 13 07:20:53 managed-node1 sshd[8182]: pam_unix(sshd:session): session closed for user root Sep 13 07:20:53 managed-node1 systemd-logind[608]: Session 13 logged out. Waiting for processes to exit. Sep 13 07:20:53 managed-node1 systemd[1]: session-13.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-13.scope has successfully entered the 'dead' state. Sep 13 07:20:53 managed-node1 systemd-logind[608]: Removed session 13. ░░ Subject: Session 13 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 13 has been terminated. Sep 13 07:20:55 managed-node1 python3.9[8383]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family', 'devices'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Sep 13 07:20:55 managed-node1 python3.9[8547]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Sep 13 07:20:56 managed-node1 python3.9[8696]: ansible-rpm_key Invoked with key=https://dl.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-9 state=present validate_certs=True fingerprint=None Sep 13 07:20:57 managed-node1 python3.9[8850]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Sep 13 07:20:57 managed-node1 python3.9[8927]: ansible-ansible.legacy.dnf Invoked with name=['https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Sep 13 07:20:58 managed-node1 python3.9[9077]: ansible-rpm_key Invoked with key=https://developer.download.nvidia.com/compute/cuda/repos/rhel9/x86_64/D42D0685.pub state=present validate_certs=True fingerprint=None Sep 13 07:20:59 managed-node1 python3.9[9232]: ansible-yum_repository Invoked with name=nvidia-cuda description=NVIDIA CUDA repository baseurl=['https://developer.download.nvidia.com/compute/cuda/repos/rhel9/x86_64'] gpgcheck=True reposdir=/etc/yum.repos.d state=present unsafe_writes=False bandwidth=None cost=None deltarpm_metadata_percentage=None deltarpm_percentage=None enabled=None enablegroups=None exclude=None failovermethod=None file=None 
gpgcakey=None gpgkey=None module_hotfixes=None http_caching=None include=None includepkgs=None ip_resolve=None keepalive=None keepcache=None metadata_expire=None metadata_expire_filter=None metalink=None mirrorlist=None mirrorlist_expire=None password=NOT_LOGGING_PARAMETER priority=None protect=None proxy=None proxy_password=NOT_LOGGING_PARAMETER proxy_username=None repo_gpgcheck=None retries=None s3_enabled=None skip_if_unavailable=None sslcacert=None ssl_check_cert_permissions=None sslclientcert=None sslclientkey=None sslverify=None throttle=None timeout=None ui_repoid_vars=None username=None async=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Sep 13 07:20:59 managed-node1 python3.9[9381]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Sep 13 07:21:00 managed-node1 python3.9[9458]: ansible-ansible.legacy.dnf Invoked with name=['kernel-devel-5.14.0-612.el9.x86_64', 'kernel-headers-5.14.0-612.el9.x86_64'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Sep 13 07:21:02 managed-node1 python3.9[9612]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Sep 13 07:21:02 managed-node1 python3.9[9689]: ansible-ansible.legacy.dnf Invoked with name=['dnf-command(versionlock)'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Sep 13 07:21:03 managed-node1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. ░░ Subject: A start job for unit run-rd17941c3ba364639892c634bf8b769b5.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit run-rd17941c3ba364639892c634bf8b769b5.service has finished successfully. ░░ ░░ The job identifier is 1660. Sep 13 07:21:03 managed-node1 systemd[1]: Starting man-db-cache-update.service... ░░ Subject: A start job for unit man-db-cache-update.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit man-db-cache-update.service has begun execution. ░░ ░░ The job identifier is 1725. Sep 13 07:21:03 managed-node1 systemd[1]: man-db-cache-update.service: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit man-db-cache-update.service has successfully entered the 'dead' state. Sep 13 07:21:03 managed-node1 systemd[1]: Finished man-db-cache-update.service. 
░░ Subject: A start job for unit man-db-cache-update.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit man-db-cache-update.service has finished successfully. ░░ ░░ The job identifier is 1725. Sep 13 07:21:03 managed-node1 systemd[1]: run-rd17941c3ba364639892c634bf8b769b5.service: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit run-rd17941c3ba364639892c634bf8b769b5.service has successfully entered the 'dead' state. Sep 13 07:21:03 managed-node1 python3.9[9926]: ansible-stat Invoked with path=/etc/dnf/plugins/versionlock.list follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Sep 13 07:21:04 managed-node1 python3.9[10077]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/dnf/plugins/versionlock.list _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Sep 13 07:21:04 managed-node1 python3.9[10227]: ansible-ansible.legacy.command Invoked with _raw_params=dnf versionlock add kernel _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Sep 13 07:21:05 managed-node1 python3.9[10377]: ansible-ansible.legacy.command Invoked with _raw_params=dnf versionlock add kernel-core _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Sep 13 07:21:06 managed-node1 python3.9[10527]: ansible-ansible.legacy.command Invoked with _raw_params=dnf versionlock add kernel-modules _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Sep 13 07:21:07 managed-node1 python3.9[10677]: ansible-ansible.legacy.command Invoked with _raw_params=dnf versionlock add kernel-modules-extra _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Sep 13 07:21:08 managed-node1 python3.9[10827]: ansible-ansible.legacy.command Invoked with _raw_params=dnf versionlock add kernel-devel _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Sep 13 07:21:09 managed-node1 python3.9[10977]: ansible-ansible.legacy.command Invoked with _raw_params=dnf versionlock add kernel-headers _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Sep 13 07:21:10 managed-node1 python3.9[11127]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Sep 13 07:21:10 managed-node1 python3.9[11204]: ansible-ansible.legacy.dnf Invoked with name=['@nvidia-driver:575-dkms'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False 
update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Sep 13 07:21:11 managed-node1 python3.9[11354]: ansible-stat Invoked with path=/etc/dnf/plugins/versionlock.list follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Sep 13 07:21:12 managed-node1 python3.9[11505]: ansible-ansible.legacy.command Invoked with _raw_params=dnf versionlock clear _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Sep 13 07:21:12 managed-node1 sshd[11531]: Accepted publickey for root from 10.31.10.60 port 44782 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Sep 13 07:21:13 managed-node1 systemd-logind[608]: New session 14 of user root. ░░ Subject: A new session 14 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 14 has been created for the user root. ░░ ░░ The leading process of the session is 11531. Sep 13 07:21:13 managed-node1 systemd[1]: Started Session 14 of User root. ░░ Subject: A start job for unit session-14.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-14.scope has finished successfully. ░░ ░░ The job identifier is 1790. Sep 13 07:21:13 managed-node1 sshd[11531]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Sep 13 07:21:13 managed-node1 sshd[11534]: Received disconnect from 10.31.10.60 port 44782:11: disconnected by user Sep 13 07:21:13 managed-node1 sshd[11534]: Disconnected from user root 10.31.10.60 port 44782 Sep 13 07:21:13 managed-node1 sshd[11531]: pam_unix(sshd:session): session closed for user root Sep 13 07:21:13 managed-node1 systemd[1]: session-14.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-14.scope has successfully entered the 'dead' state. Sep 13 07:21:13 managed-node1 systemd-logind[608]: Session 14 logged out. Waiting for processes to exit. Sep 13 07:21:13 managed-node1 systemd-logind[608]: Removed session 14. ░░ Subject: Session 14 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 14 has been terminated. Sep 13 07:21:13 managed-node1 sshd[11559]: Accepted publickey for root from 10.31.10.60 port 44786 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Sep 13 07:21:13 managed-node1 systemd-logind[608]: New session 15 of user root. ░░ Subject: A new session 15 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 15 has been created for the user root. ░░ ░░ The leading process of the session is 11559. Sep 13 07:21:13 managed-node1 systemd[1]: Started Session 15 of User root. ░░ Subject: A start job for unit session-15.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-15.scope has finished successfully. ░░ ░░ The job identifier is 1859. Sep 13 07:21:13 managed-node1 sshd[11559]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)
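For reference, the kernel version locking exercised in this run (dnf versionlock add for each kernel package, cleared again by the test cleanup) can be replayed with a small task loop. A sketch with the package list copied from the commands logged above; note the real role first inspects /etc/dnf/plugins/versionlock.list before adding entries:

# Sketch: replays the versionlock commands recorded in this transcript.
# changed_when is simplified; the role itself checks versionlock.list first.
- name: Prevent installation of kernel packages of a different version
  ansible.builtin.command: dnf versionlock add {{ item }}
  loop:
    - kernel
    - kernel-core
    - kernel-modules
    - kernel-modules-extra
    - kernel-devel
    - kernel-headers
  changed_when: true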