ansible-playbook [core 2.17.14]
  config file = None
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.12/site-packages/ansible
  ansible collection location = /tmp/collections-w7i
  executable location = /usr/local/bin/ansible-playbook
  python version = 3.12.11 (main, Aug 14 2025, 00:00:00) [GCC 11.5.0 20240719 (Red Hat 11.5.0-11)] (/usr/bin/python3.12)
  jinja version = 3.1.6
  libyaml = True
No config file found; using defaults
running playbook inside collection fedora.linux_system_roles
Skipping callback 'debug', as we already have a stdout callback.
Skipping callback 'json', as we already have a stdout callback.
Skipping callback 'jsonl', as we already have a stdout callback.
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.

PLAYBOOK: tests_default.yml ****************************************************
1 plays in /tmp/collections-w7i/ansible_collections/fedora/linux_system_roles/tests/hpc/tests_default.yml

PLAY [Ensure that the role runs with default parameters] ***********************

TASK [Run the role] ************************************************************
task path: /tmp/collections-w7i/ansible_collections/fedora/linux_system_roles/tests/hpc/tests_default.yml:19
Wednesday 17 September 2025  04:13:48 -0400 (0:00:00.040)       0:00:00.040 ***
included: fedora.linux_system_roles.hpc for managed-node1

TASK [fedora.linux_system_roles.hpc : Set platform/version specific variables] ***
task path: /tmp/collections-w7i/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:3
Wednesday 17 September 2025  04:13:48 -0400 (0:00:00.055)       0:00:00.095 ***
included: /tmp/collections-w7i/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/set_vars.yml for managed-node1

TASK [fedora.linux_system_roles.hpc : Ensure ansible_facts used by role] *******
task path: /tmp/collections-w7i/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/set_vars.yml:2
Wednesday 17 September 2025  04:13:48 -0400 (0:00:00.019)       0:00:00.115 ***
[WARNING]: Platform linux on host managed-node1 is using the discovered Python
interpreter at /usr/bin/python3.9, but future installation of another Python
interpreter could change the meaning of that path. See
https://docs.ansible.com/ansible-core/2.17/reference_appendices/interpreter_discovery.html
for more information.
ok: [managed-node1]
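Note: the interpreter-discovery warning above is harmless for this run, but the discovered path can be pinned per host so a later Python install cannot change its meaning. A minimal sketch, assuming a YAML inventory (the file name and layout are illustrative, not taken from this run):

    # inventory.yml (hypothetical) -- pin the interpreter ansible discovered
    all:
      hosts:
        managed-node1:
          ansible_python_interpreter: /usr/bin/python3.9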
TASK [fedora.linux_system_roles.hpc : Check if system is ostree] ***************
task path: /tmp/collections-w7i/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/set_vars.yml:10
Wednesday 17 September 2025  04:13:49 -0400 (0:00:00.973)       0:00:01.088 ***
ok: [managed-node1] => {"changed": false, "stat": {"exists": false}}

TASK [fedora.linux_system_roles.hpc : Set flag to indicate system is ostree] ***
task path: /tmp/collections-w7i/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/set_vars.yml:15
Wednesday 17 September 2025  04:13:49 -0400 (0:00:00.021)       0:00:01.528 ***
ok: [managed-node1] => {"ansible_facts": {"__hpc_is_ostree": false}, "changed": false}

TASK [fedora.linux_system_roles.hpc : Set platform/version specific variables] ***
task path: /tmp/collections-w7i/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/set_vars.yml:19
Wednesday 17 September 2025  04:13:49 -0400 (0:00:00.021)       0:00:01.550 ***
skipping: [managed-node1] => (item=RedHat.yml) => {"ansible_loop_var": "item", "changed": false, "false_condition": "__vars_file is file", "item": "RedHat.yml", "skip_reason": "Conditional result was False"}
skipping: [managed-node1] => (item=CentOS.yml) => {"ansible_loop_var": "item", "changed": false, "false_condition": "__vars_file is file", "item": "CentOS.yml", "skip_reason": "Conditional result was False"}
ok: [managed-node1] => (item=CentOS_9.yml) => {"ansible_facts": {"__template_packages": [], "__template_services": []}, "ansible_included_var_files": ["/tmp/collections-w7i/ansible_collections/fedora/linux_system_roles/roles/hpc/vars/CentOS_9.yml"], "ansible_loop_var": "item", "changed": false, "item": "CentOS_9.yml"}
ok: [managed-node1] => (item=CentOS_9.yml) => {"ansible_facts": {"__template_packages": [], "__template_services": []}, "ansible_included_var_files": ["/tmp/collections-w7i/ansible_collections/fedora/linux_system_roles/roles/hpc/vars/CentOS_9.yml"], "ansible_loop_var": "item", "changed": false, "item": "CentOS_9.yml"}

TASK [fedora.linux_system_roles.hpc : Deploy the GPG key for RHEL EPEL repository] ***
task path: /tmp/collections-w7i/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:6
Wednesday 17 September 2025  04:13:49 -0400 (0:00:00.036)       0:00:01.586 ***
ok: [managed-node1] => {"changed": false}

TASK [fedora.linux_system_roles.hpc : Install EPEL release package] ************
task path: /tmp/collections-w7i/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:12
Wednesday 17 September 2025  04:13:50 -0400 (0:00:00.626)       0:00:02.213 ***
ok: [managed-node1] => {"changed": false, "rc": 0, "results": ["Installed /root/.ansible/tmp/ansible-tmp-1758096830.3447757-8273-212281201167060/epel-release-latest-9.noarchyj2q0uz2.rpm"]}

MSG:

Nothing to do

TASK [fedora.linux_system_roles.hpc : Deploy the GPG key for NVIDIA repositories] ***
task path: /tmp/collections-w7i/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:18
Wednesday 17 September 2025  04:13:51 -0400 (0:00:01.498)       0:00:03.712 ***
changed: [managed-node1] => {"changed": true}

TASK [fedora.linux_system_roles.hpc : Configure the NVIDIA CUDA repository] ****
task path: /tmp/collections-w7i/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:23
Wednesday 17 September 2025  04:13:52 -0400 (0:00:00.558)       0:00:04.270 ***
redirecting (type: action) ansible.builtin.yum to ansible.builtin.dnf
changed: [managed-node1] => {"changed": true, "repo": "nvidia-cuda", "state": "present"}

TASK [fedora.linux_system_roles.hpc : Install lvm2 to get lvs command] *********
task path: /tmp/collections-w7i/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:33
Wednesday 17 September 2025  04:13:52 -0400 (0:00:00.423)       0:00:04.694 ***
skipping: [managed-node1] => {"changed": false, "false_condition": "hpc_manage_storage", "skip_reason": "Conditional result was False"}

TASK [fedora.linux_system_roles.hpc : Get current LV size of rootlv] ***********
task path: /tmp/collections-w7i/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:39
Wednesday 17 September 2025  04:13:52 -0400 (0:00:00.014)       0:00:04.709 ***
skipping: [managed-node1] => {"changed": false, "false_condition": "hpc_manage_storage", "skip_reason": "Conditional result was False"}

TASK [fedora.linux_system_roles.hpc : Get current LV size of usrlv] ************
task path: /tmp/collections-w7i/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:46
Wednesday 17 September 2025  04:13:52 -0400 (0:00:00.011)       0:00:04.720 ***
skipping: [managed-node1] => {"changed": false, "false_condition": "hpc_manage_storage", "skip_reason": "Conditional result was False"}

TASK [Configure storage] *******************************************************
task path: /tmp/collections-w7i/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:53
Wednesday 17 September 2025  04:13:52 -0400 (0:00:00.010)       0:00:04.730 ***
skipping: [managed-node1] => {"changed": false, "false_condition": "hpc_manage_storage", "skip_reason": "Conditional result was False"}

TASK [fedora.linux_system_roles.hpc : Update RHUI packages from Microsoft repositories] ***
task path: /tmp/collections-w7i/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:81
Wednesday 17 September 2025  04:13:52 -0400 (0:00:00.010)       0:00:04.741 ***
skipping: [managed-node1] => {"changed": false, "false_condition": "ansible_system_vendor == \"Microsoft Corporation\"", "skip_reason": "Conditional result was False"}

TASK [fedora.linux_system_roles.hpc : Force install kernel] ********************
task path: /tmp/collections-w7i/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:89
Wednesday 17 September 2025  04:13:52 -0400 (0:00:00.009)       0:00:04.751 ***
skipping: [managed-node1] => {"changed": false, "false_condition": "__hpc_force_kernel_version is not none", "skip_reason": "Conditional result was False"}

TASK [fedora.linux_system_roles.hpc : Explicitly install kernel-devel and kernel-headers packages matching the currently running kernel] ***
task path: /tmp/collections-w7i/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:96
Wednesday 17 September 2025  04:13:52 -0400 (0:00:00.014)       0:00:04.766 ***
ok: [managed-node1] => {"changed": false, "rc": 0, "results": []}

MSG:

Nothing to do

TASK [fedora.linux_system_roles.hpc : Flush handlers to apply new kernel] ******
task path: /tmp/collections-w7i/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:106
Wednesday 17 September 2025  04:13:55 -0400 (0:00:02.328)       0:00:07.094 ***
META: triggered running handlers for managed-node1

TASK [fedora.linux_system_roles.hpc : Ensure that dnf-command(versionlock) is installed] ***
task path: /tmp/collections-w7i/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:109
Wednesday 17 September 2025  04:13:55 -0400 (0:00:00.001)       0:00:07.096 ***
changed: [managed-node1] => {"changed": true, "rc": 0, "results": ["Installed: python3-dnf-plugin-versionlock-4.3.0-22.el9.noarch"]}

TASK [fedora.linux_system_roles.hpc : Check if kernel versionlock entries exist] ***
task path: /tmp/collections-w7i/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:114
Wednesday 17 September 2025  04:13:57 -0400 (0:00:02.135)       0:00:09.231 ***
ok: [managed-node1] => {"changed": false, "stat": {"atime": 1662723770.0, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 0, "charset": "binary", "checksum": "da39a3ee5e6b4b0d3255bfef95601890afd80709", "ctime": 1758096836.8327513, "dev": 51713, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 9210254, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "inode/x-empty", "mode": "0644", "mtime": 1662723770.0, "nlink": 1, "path": "/etc/dnf/plugins/versionlock.list", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 0, "uid": 0, "version": "4103490548", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}}

TASK [fedora.linux_system_roles.hpc : Get content of versionlock file] *********
task path: /tmp/collections-w7i/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:119
Wednesday 17 September 2025  04:13:57 -0400 (0:00:00.352)       0:00:09.584 ***
ok: [managed-node1] => {"changed": false, "cmd": ["cat", "/etc/dnf/plugins/versionlock.list"], "delta": "0:00:00.003157", "end": "2025-09-17 04:13:58.039280", "rc": 0, "start": "2025-09-17 04:13:58.036123"}

TASK [fedora.linux_system_roles.hpc : Prevent installation of all kernel packages of a different version] ***
task path: /tmp/collections-w7i/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:129
Wednesday 17 September 2025  04:13:58 -0400 (0:00:00.432)       0:00:10.016 ***
changed: [managed-node1] => (item=kernel) => {"ansible_loop_var": "item", "changed": true, "cmd": ["dnf", "versionlock", "add", "kernel"], "delta": "0:00:00.680913", "end": "2025-09-17 04:13:59.073112", "item": "kernel", "rc": 0, "start": "2025-09-17 04:13:58.392199"}

STDOUT:

Last metadata expiration check: 0:00:05 ago on Wed 17 Sep 2025 04:13:53 AM EDT.
Adding versionlock on: kernel-0:5.14.0-612.el9.*

changed: [managed-node1] => (item=kernel-core) => {"ansible_loop_var": "item", "changed": true, "cmd": ["dnf", "versionlock", "add", "kernel-core"], "delta": "0:00:00.685159", "end": "2025-09-17 04:14:00.087007", "item": "kernel-core", "rc": 0, "start": "2025-09-17 04:13:59.401848"}

STDOUT:

Last metadata expiration check: 0:00:06 ago on Wed 17 Sep 2025 04:13:53 AM EDT.
Adding versionlock on: kernel-core-0:5.14.0-612.el9.*

changed: [managed-node1] => (item=kernel-modules) => {"ansible_loop_var": "item", "changed": true, "cmd": ["dnf", "versionlock", "add", "kernel-modules"], "delta": "0:00:00.666459", "end": "2025-09-17 04:14:01.084240", "item": "kernel-modules", "rc": 0, "start": "2025-09-17 04:14:00.417781"}

STDOUT:

Last metadata expiration check: 0:00:07 ago on Wed 17 Sep 2025 04:13:53 AM EDT.
Adding versionlock on: kernel-modules-0:5.14.0-612.el9.*

changed: [managed-node1] => (item=kernel-modules-extra) => {"ansible_loop_var": "item", "changed": true, "cmd": ["dnf", "versionlock", "add", "kernel-modules-extra"], "delta": "0:00:00.696658", "end": "2025-09-17 04:14:02.102607", "item": "kernel-modules-extra", "rc": 0, "start": "2025-09-17 04:14:01.405949"}

STDOUT:

Last metadata expiration check: 0:00:08 ago on Wed 17 Sep 2025 04:13:53 AM EDT.
Adding versionlock on: kernel-modules-extra-0:5.14.0-612.el9.*
Adding versionlock on: kernel-modules-extra-0:5.14.0-615.el9.*
Adding versionlock on: kernel-modules-extra-0:5.14.0-605.el9.*
Adding versionlock on: kernel-modules-extra-0:5.14.0-611.el9.*
Adding versionlock on: kernel-modules-extra-0:5.14.0-604.el9.*

changed: [managed-node1] => (item=kernel-devel) => {"ansible_loop_var": "item", "changed": true, "cmd": ["dnf", "versionlock", "add", "kernel-devel"], "delta": "0:00:00.694104", "end": "2025-09-17 04:14:03.134447", "item": "kernel-devel", "rc": 0, "start": "2025-09-17 04:14:02.440343"}

STDOUT:

Last metadata expiration check: 0:00:09 ago on Wed 17 Sep 2025 04:13:53 AM EDT.
Adding versionlock on: kernel-devel-0:5.14.0-612.el9.*

changed: [managed-node1] => (item=kernel-headers) => {"ansible_loop_var": "item", "changed": true, "cmd": ["dnf", "versionlock", "add", "kernel-headers"], "delta": "0:00:00.688477", "end": "2025-09-17 04:14:04.154822", "item": "kernel-headers", "rc": 0, "start": "2025-09-17 04:14:03.466345"}

STDOUT:

Last metadata expiration check: 0:00:10 ago on Wed 17 Sep 2025 04:13:53 AM EDT.
Adding versionlock on: kernel-headers-0:5.14.0-612.el9.*
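Note: the six commands above lock every kernel package to the running 5.14.0-612.el9 build before any NVIDIA packages are considered. A minimal sketch of an equivalent loop, reconstructed from the journaled "dnf versionlock add" calls later in this log (the task name and the inline list are illustrative, not the role's actual source):

    - name: Pin kernel packages to the running version  # hypothetical re-creation
      ansible.builtin.command:
        cmd: dnf versionlock add {{ item }}
      loop:
        - kernel
        - kernel-core
        - kernel-modules
        - kernel-modules-extra
        - kernel-devel
        - kernel-headers
      changed_when: true  # each call appends entries to versionlock.list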
TASK [fedora.linux_system_roles.hpc : Enable proprietary nvidia-driver] ********
task path: /tmp/collections-w7i/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:138
Wednesday 17 September 2025  04:14:04 -0400 (0:00:06.119)       0:00:16.136 ***
fatal: [managed-node1]: FAILED! => {"changed": false, "failures": [], "rc": 1, "results": []}

MSG:

Depsolve Error occurred:
 Problem 1: cannot install the best candidate for the job
  - nothing provides dkms >= 3.1.8 needed by kmod-nvidia-latest-dkms-3:575.57.08-1.el9.x86_64 from nvidia-cuda
 Problem 2: package nvidia-kmod-common-3:575.57.08-1.el9.noarch from nvidia-cuda requires nvidia-kmod = 3:575.57.08, but none of the providers can be installed
  - cannot install the best candidate for the job
  - package kmod-nvidia-575.57.08-5.14.0-570.22.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering
  - package kmod-nvidia-575.57.08-5.14.0-570.23.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering
  - package kmod-nvidia-575.57.08-5.14.0-570.24.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering
  - package kmod-nvidia-575.57.08-5.14.0-570.25.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering
  - package kmod-nvidia-575.57.08-5.14.0-570.26.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering
  - nothing provides dkms >= 3.1.8 needed by kmod-nvidia-latest-dkms-3:575.57.08-1.el9.x86_64 from nvidia-cuda
  - package kmod-nvidia-open-dkms-3:575.57.08-1.el9.noarch from nvidia-cuda is filtered out by modular filtering
  - nothing provides dkms >= 3.1.8 needed by kmod-nvidia-open-dkms-3:575.57.08-1.el9.noarch from nvidia-cuda
 Problem 3: package nvidia-driver-3:575.57.08-1.el9.x86_64 from nvidia-cuda requires nvidia-kmod-common = 3:575.57.08, but none of the providers can be installed
  - package nvidia-kmod-common-3:575.57.08-1.el9.noarch from nvidia-cuda requires nvidia-kmod = 3:575.57.08, but none of the providers can be installed
  - cannot install the best candidate for the job
  - package kmod-nvidia-575.57.08-5.14.0-570.22.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering
  - package kmod-nvidia-575.57.08-5.14.0-570.23.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering
  - package kmod-nvidia-575.57.08-5.14.0-570.24.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering
  - package kmod-nvidia-575.57.08-5.14.0-570.25.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering
  - package kmod-nvidia-575.57.08-5.14.0-570.26.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering
  - nothing provides dkms >= 3.1.8 needed by kmod-nvidia-latest-dkms-3:575.57.08-1.el9.x86_64 from nvidia-cuda
  - package kmod-nvidia-open-dkms-3:575.57.08-1.el9.noarch from nvidia-cuda is filtered out by modular filtering
  - nothing provides dkms >= 3.1.8 needed by kmod-nvidia-open-dkms-3:575.57.08-1.el9.noarch from nvidia-cuda
 Problem 4: package nvidia-driver-cuda-3:575.57.08-1.el9.x86_64 from nvidia-cuda requires nvidia-kmod-common = 3:575.57.08, but none of the providers can be installed
  - package nvidia-kmod-common-3:575.57.08-1.el9.noarch from nvidia-cuda requires nvidia-kmod = 3:575.57.08, but none of the providers can be installed
  - cannot install the best candidate for the job
  - package kmod-nvidia-575.57.08-5.14.0-570.22.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering
  - package kmod-nvidia-575.57.08-5.14.0-570.23.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering
  - package kmod-nvidia-575.57.08-5.14.0-570.24.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering
  - package kmod-nvidia-575.57.08-5.14.0-570.25.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering
  - package kmod-nvidia-575.57.08-5.14.0-570.26.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering
  - nothing provides dkms >= 3.1.8 needed by kmod-nvidia-latest-dkms-3:575.57.08-1.el9.x86_64 from nvidia-cuda
  - package kmod-nvidia-open-dkms-3:575.57.08-1.el9.noarch from nvidia-cuda is filtered out by modular filtering
  - nothing provides dkms >= 3.1.8 needed by kmod-nvidia-open-dkms-3:575.57.08-1.el9.noarch from nvidia-cuda
 Problem 5: package nvidia-driver-3:575.57.08-1.el9.x86_64 from nvidia-cuda requires nvidia-kmod-common = 3:575.57.08, but none of the providers can be installed
  - package nvidia-settings-3:575.57.08-1.el9.x86_64 from nvidia-cuda requires nvidia-driver(x86-64) = 3:575.57.08, but none of the providers can be installed
  - package nvidia-kmod-common-3:575.57.08-1.el9.noarch from nvidia-cuda requires nvidia-kmod = 3:575.57.08, but none of the providers can be installed
  - cannot install the best candidate for the job
  - package kmod-nvidia-575.57.08-5.14.0-570.22.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering
  - package kmod-nvidia-575.57.08-5.14.0-570.23.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering
  - package kmod-nvidia-575.57.08-5.14.0-570.24.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering
  - package kmod-nvidia-575.57.08-5.14.0-570.25.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering
  - package kmod-nvidia-575.57.08-5.14.0-570.26.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering
  - nothing provides dkms >= 3.1.8 needed by kmod-nvidia-latest-dkms-3:575.57.08-1.el9.x86_64 from nvidia-cuda
  - package kmod-nvidia-open-dkms-3:575.57.08-1.el9.noarch from nvidia-cuda is filtered out by modular filtering
  - nothing provides dkms >= 3.1.8 needed by kmod-nvidia-open-dkms-3:575.57.08-1.el9.noarch from nvidia-cuda
 Problem 6: package nvidia-driver-3:575.57.08-1.el9.x86_64 from nvidia-cuda requires nvidia-kmod-common = 3:575.57.08, but none of the providers can be installed
  - package xorg-x11-nvidia-3:575.57.08-1.el9.x86_64 from nvidia-cuda requires nvidia-driver(x86-64) = 3:575.57.08, but none of the providers can be installed
  - package nvidia-kmod-common-3:575.57.08-1.el9.noarch from nvidia-cuda requires nvidia-kmod = 3:575.57.08, but none of the providers can be installed
  - cannot install the best candidate for the job
  - package kmod-nvidia-575.57.08-5.14.0-570.22.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering
  - package kmod-nvidia-575.57.08-5.14.0-570.23.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering
  - package kmod-nvidia-575.57.08-5.14.0-570.24.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering
  - package kmod-nvidia-575.57.08-5.14.0-570.25.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering
  - package kmod-nvidia-575.57.08-5.14.0-570.26.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering
  - nothing provides dkms >= 3.1.8 needed by kmod-nvidia-latest-dkms-3:575.57.08-1.el9.x86_64 from nvidia-cuda
  - package kmod-nvidia-open-dkms-3:575.57.08-1.el9.noarch from nvidia-cuda is filtered out by modular filtering
  - nothing provides dkms >= 3.1.8 needed by kmod-nvidia-open-dkms-3:575.57.08-1.el9.noarch from nvidia-cuda
 Problem 7: package xorg-x11-nvidia-3:575.57.08-1.el9.x86_64 from nvidia-cuda requires nvidia-driver(x86-64) = 3:575.57.08, but none of the providers can be installed
  - package nvidia-driver-3:575.57.08-1.el9.x86_64 from nvidia-cuda requires nvidia-kmod-common = 3:575.57.08, but none of the providers can be installed
  - package nvidia-xconfig-3:575.57.08-1.el9.x86_64 from nvidia-cuda requires xorg-x11-nvidia(x86-64) >= 3:575.57.08, but none of the providers can be installed
  - package nvidia-kmod-common-3:575.57.08-1.el9.noarch from nvidia-cuda requires nvidia-kmod = 3:575.57.08, but none of the providers can be installed
  - cannot install the best candidate for the job
  - package kmod-nvidia-575.57.08-5.14.0-570.22.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering
  - package kmod-nvidia-575.57.08-5.14.0-570.23.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering
  - package kmod-nvidia-575.57.08-5.14.0-570.24.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering
  - package kmod-nvidia-575.57.08-5.14.0-570.25.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering
  - package kmod-nvidia-575.57.08-5.14.0-570.26.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering
  - nothing provides dkms >= 3.1.8 needed by kmod-nvidia-latest-dkms-3:575.57.08-1.el9.x86_64 from nvidia-cuda
  - package kmod-nvidia-open-dkms-3:575.57.08-1.el9.noarch from nvidia-cuda is filtered out by modular filtering
  - nothing provides dkms >= 3.1.8 needed by kmod-nvidia-open-dkms-3:575.57.08-1.el9.noarch from nvidia-cuda
  - package xorg-x11-nvidia-3:580.65.06-1.el9.x86_64 from nvidia-cuda is filtered out by modular filtering
  - package xorg-x11-nvidia-3:580.82.07-1.el9.x86_64 from nvidia-cuda is filtered out by modular filtering
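Note: all seven problems above reduce to two root causes stated in the error text: nothing in the enabled repositories provides dkms >= 3.1.8 (required by the kmod-nvidia-*-dkms packages), and the precompiled kmod-nvidia builds belong to other module streams, so modular filtering hides them. A hedged diagnostic sketch for inspecting which nvidia-driver streams the repository actually offers (dnf module list is standard dnf; the task and register names are illustrative):

    - name: List available nvidia-driver module streams  # diagnostic only
      ansible.builtin.command:
        cmd: dnf -q module list nvidia-driver
      register: __nvidia_streams  # hypothetical variable name
      changed_when: false

    - name: Show the stream listing
      ansible.builtin.debug:
        var: __nvidia_streams.stdout_lines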
TASK [Cleanup] *****************************************************************
task path: /tmp/collections-w7i/ansible_collections/fedora/linux_system_roles/tests/hpc/tests_default.yml:23
Wednesday 17 September 2025  04:14:05 -0400 (0:00:01.398)       0:00:17.534 ***
included: /tmp/collections-w7i/ansible_collections/fedora/linux_system_roles/tests/hpc/tasks/cleanup.yml for managed-node1

TASK [Check if versionlock entries exist] **************************************
task path: /tmp/collections-w7i/ansible_collections/fedora/linux_system_roles/tests/hpc/tasks/cleanup.yml:3
Wednesday 17 September 2025  04:14:05 -0400 (0:00:00.021)       0:00:17.555 ***
ok: [managed-node1] => {"changed": false, "stat": {"atime": 1758096845.4217215, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "us-ascii", "checksum": "0f8aadd126cc2c4a26dc047fe5b45c44cf2910d8", "ctime": 1758096844.1137276, "dev": 51713, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 9210254, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", "mode": "0644", "mtime": 1758096844.1137276, "nlink": 1, "path": "/etc/dnf/plugins/versionlock.list", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 609, "uid": 0, "version": "4103490548", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}}

TASK [Clear dnf versionlock entries] *******************************************
task path: /tmp/collections-w7i/ansible_collections/fedora/linux_system_roles/tests/hpc/tasks/cleanup.yml:8
Wednesday 17 September 2025  04:14:06 -0400 (0:00:00.376)       0:00:17.932 ***
changed: [managed-node1] => {"changed": true, "cmd": ["dnf", "versionlock", "clear"], "delta": "0:00:00.584866", "end": "2025-09-17 04:14:06.887030", "rc": 0, "start": "2025-09-17 04:14:06.302164"}

STDOUT:

Last metadata expiration check: 0:00:13 ago on Wed 17 Sep 2025 04:13:53 AM EDT.

PLAY RECAP *********************************************************************
managed-node1              : ok=18   changed=5    unreachable=0    failed=1    skipped=6    rescued=0    ignored=0

SYSTEM ROLES ERRORS BEGIN v1
[
  {
    "ansible_version": "2.17.14",
    "end_time": "2025-09-17T08:14:05.604470+00:00Z",
    "host": "managed-node1",
    "message": "Depsolve Error occurred: \n Problem 1: cannot install the best candidate for the job\n - nothing provides dkms >= 3.1.8 needed by kmod-nvidia-latest-dkms-3:575.57.08-1.el9.x86_64 from nvidia-cuda\n Problem 2: package nvidia-kmod-common-3:575.57.08-1.el9.noarch from nvidia-cuda requires nvidia-kmod = 3:575.57.08, but none of the providers can be installed\n - cannot install the best candidate for the job\n - package kmod-nvidia-575.57.08-5.14.0-570.22.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering\n - package kmod-nvidia-575.57.08-5.14.0-570.23.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering\n - package kmod-nvidia-575.57.08-5.14.0-570.24.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering\n - package kmod-nvidia-575.57.08-5.14.0-570.25.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering\n - package kmod-nvidia-575.57.08-5.14.0-570.26.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering\n - nothing provides dkms >= 3.1.8 needed by kmod-nvidia-latest-dkms-3:575.57.08-1.el9.x86_64 from nvidia-cuda\n - package kmod-nvidia-open-dkms-3:575.57.08-1.el9.noarch from nvidia-cuda is filtered out by modular filtering\n - nothing provides dkms >= 3.1.8 needed by kmod-nvidia-open-dkms-3:575.57.08-1.el9.noarch from nvidia-cuda\n Problem 3: package nvidia-driver-3:575.57.08-1.el9.x86_64 from nvidia-cuda requires nvidia-kmod-common = 3:575.57.08, but none of the providers can be installed\n - package nvidia-kmod-common-3:575.57.08-1.el9.noarch from nvidia-cuda requires nvidia-kmod = 3:575.57.08, but none of the providers can be installed\n - cannot install the best candidate for the job\n - package kmod-nvidia-575.57.08-5.14.0-570.22.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering\n - package kmod-nvidia-575.57.08-5.14.0-570.23.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering\n - package kmod-nvidia-575.57.08-5.14.0-570.24.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering\n - package kmod-nvidia-575.57.08-5.14.0-570.25.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering\n - package kmod-nvidia-575.57.08-5.14.0-570.26.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering\n - nothing provides dkms >= 3.1.8 needed by kmod-nvidia-latest-dkms-3:575.57.08-1.el9.x86_64 from nvidia-cuda\n - package kmod-nvidia-open-dkms-3:575.57.08-1.el9.noarch from nvidia-cuda is filtered out by modular filtering\n - nothing provides dkms >= 3.1.8 needed by kmod-nvidia-open-dkms-3:575.57.08-1.el9.noarch from nvidia-cuda\n Problem 4: package nvidia-driver-cuda-3:575.57.08-1.el9.x86_64 from nvidia-cuda requires nvidia-kmod-common = 3:575.57.08, but none of the providers can be installed\n - package nvidia-kmod-common-3:575.57.08-1.el9.noarch from nvidia-cuda requires nvidia-kmod = 3:575.57.08, but none of the providers can be installed\n - cannot install the best candidate for the job\n - package kmod-nvidia-575.57.08-5.14.0-570.22.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering\n - package kmod-nvidia-575.57.08-5.14.0-570.23.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering\n - package kmod-nvidia-575.57.08-5.14.0-570.24.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering\n - package kmod-nvidia-575.57.08-5.14.0-570.25.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering\n - package kmod-nvidia-575.57.08-5.14.0-570.26.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering\n - nothing provides dkms >= 3.1.8 needed by kmod-nvidia-latest-dkms-3:575.57.08-1.el9.x86_64 from nvidia-cuda\n - package kmod-nvidia-open-dkms-3:575.57.08-1.el9.noarch from nvidia-cuda is filtered out by modular filtering\n - nothing provides dkms >= 3.1.8 needed by kmod-nvidia-open-dkms-3:575.57.08-1.el9.noarch from nvidia-cuda\n Problem 5: package nvidia-driver-3:575.57.08-1.el9.x86_64 from nvidia-cuda requires nvidia-kmod-common = 3:575.57.08, but none of the providers can be installed\n - package nvidia-settings-3:575.57.08-1.el9.x86_64 from nvidia-cuda requires nvidia-driver(x86-64) = 3:575.57.08, but none of the providers can be installed\n - package nvidia-kmod-common-3:575.57.08-1.el9.noarch from nvidia-cuda requires nvidia-kmod = 3:575.57.08, but none of the providers can be installed\n - cannot install the best candidate for the job\n - package kmod-nvidia-575.57.08-5.14.0-570.22.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering\n - package kmod-nvidia-575.57.08-5.14.0-570.23.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering\n - package kmod-nvidia-575.57.08-5.14.0-570.24.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering\n - package kmod-nvidia-575.57.08-5.14.0-570.25.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering\n - package kmod-nvidia-575.57.08-5.14.0-570.26.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering\n - nothing provides dkms >= 3.1.8 needed by kmod-nvidia-latest-dkms-3:575.57.08-1.el9.x86_64 from nvidia-cuda\n - package kmod-nvidia-open-dkms-3:575.57.08-1.el9.noarch from nvidia-cuda is filtered out by modular filtering\n - nothing provides dkms >= 3.1.8 needed by kmod-nvidia-open-dkms-3:575.57.08-1.el9.noarch from nvidia-cuda\n Problem 6: package nvidia-driver-3:575.57.08-1.el9.x86_64 from nvidia-cuda requires nvidia-kmod-common = 3:575.57.08, but none of the providers can be installed\n - package xorg-x11-nvidia-3:575.57.08-1.el9.x86_64 from nvidia-cuda requires nvidia-driver(x86-64) = 3:575.57.08, but none of the providers can be installed\n - package nvidia-kmod-common-3:575.57.08-1.el9.noarch from nvidia-cuda requires nvidia-kmod = 3:575.57.08, but none of the providers can be installed\n - cannot install the best candidate for the job\n - package kmod-nvidia-575.57.08-5.14.0-570.22.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering\n - package kmod-nvidia-575.57.08-5.14.0-570.23.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering\n - package kmod-nvidia-575.57.08-5.14.0-570.24.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering\n - package kmod-nvidia-575.57.08-5.14.0-570.25.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering\n - package kmod-nvidia-575.57.08-5.14.0-570.26.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering\n - nothing provides dkms >= 3.1.8 needed by kmod-nvidia-latest-dkms-3:575.57.08-1.el9.x86_64 from nvidia-cuda\n - package kmod-nvidia-open-dkms-3:575.57.08-1.el9.noarch from nvidia-cuda is filtered out by modular filtering\n - nothing provides dkms >= 3.1.8 needed by kmod-nvidia-open-dkms-3:575.57.08-1.el9.noarch from nvidia-cuda\n Problem 7: package xorg-x11-nvidia-3:575.57.08-1.el9.x86_64 from nvidia-cuda requires nvidia-driver(x86-64) = 3:575.57.08, but none of the providers can be installed\n - package nvidia-driver-3:575.57.08-1.el9.x86_64 from nvidia-cuda requires nvidia-kmod-common = 3:575.57.08, but none of the providers can be installed\n - package nvidia-xconfig-3:575.57.08-1.el9.x86_64 from nvidia-cuda requires xorg-x11-nvidia(x86-64) >= 3:575.57.08, but none of the providers can be installed\n - package nvidia-kmod-common-3:575.57.08-1.el9.noarch from nvidia-cuda requires nvidia-kmod = 3:575.57.08, but none of the providers can be installed\n - cannot install the best candidate for the job\n - package kmod-nvidia-575.57.08-5.14.0-570.22.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering\n - package kmod-nvidia-575.57.08-5.14.0-570.23.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering\n - package kmod-nvidia-575.57.08-5.14.0-570.24.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering\n - package kmod-nvidia-575.57.08-5.14.0-570.25.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering\n - package kmod-nvidia-575.57.08-5.14.0-570.26.1-3:575.57.08-3.el9_6.x86_64 from nvidia-cuda is filtered out by modular filtering\n - nothing provides dkms >= 3.1.8 needed by kmod-nvidia-latest-dkms-3:575.57.08-1.el9.x86_64 from nvidia-cuda\n - package kmod-nvidia-open-dkms-3:575.57.08-1.el9.noarch from nvidia-cuda is filtered out by modular filtering\n - nothing provides dkms >= 3.1.8 needed by kmod-nvidia-open-dkms-3:575.57.08-1.el9.noarch from nvidia-cuda\n - package xorg-x11-nvidia-3:580.65.06-1.el9.x86_64 from nvidia-cuda is filtered out by modular filtering\n - package xorg-x11-nvidia-3:580.82.07-1.el9.x86_64 from nvidia-cuda is filtered out by modular filtering",
    "rc": 1,
    "start_time": "2025-09-17T08:14:04.209098+00:00Z",
    "task_name": "Enable proprietary nvidia-driver",
    "task_path": "/tmp/collections-w7i/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:138"
  }
]
SYSTEM ROLES ERRORS END v1

TASKS RECAP ********************************************************************
Wednesday 17 September 2025  04:14:06 -0400 (0:00:00.931)       0:00:18.863 ***
===============================================================================
fedora.linux_system_roles.hpc : Prevent installation of all kernel packages of a different version --- 6.12s
/tmp/collections-w7i/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:129
fedora.linux_system_roles.hpc : Explicitly install kernel-devel and kernel-headers packages matching the currently running kernel --- 2.33s
/tmp/collections-w7i/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:96
fedora.linux_system_roles.hpc : Ensure that dnf-command(versionlock) is installed --- 2.14s
/tmp/collections-w7i/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:109
fedora.linux_system_roles.hpc : Install EPEL release package ------------ 1.50s
/tmp/collections-w7i/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:12
fedora.linux_system_roles.hpc : Enable proprietary nvidia-driver -------- 1.40s
/tmp/collections-w7i/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:138
fedora.linux_system_roles.hpc : Ensure ansible_facts used by role ------- 0.97s
/tmp/collections-w7i/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/set_vars.yml:2
Clear dnf versionlock entries ------------------------------------------- 0.93s
/tmp/collections-w7i/ansible_collections/fedora/linux_system_roles/tests/hpc/tasks/cleanup.yml:8
fedora.linux_system_roles.hpc : Deploy the GPG key for RHEL EPEL repository --- 0.63s
/tmp/collections-w7i/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:6
fedora.linux_system_roles.hpc : Deploy the GPG key for NVIDIA repositories --- 0.56s
/tmp/collections-w7i/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:18
fedora.linux_system_roles.hpc : Check if system is ostree --------------- 0.44s
/tmp/collections-w7i/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/set_vars.yml:10
fedora.linux_system_roles.hpc : Get content of versionlock file --------- 0.43s
/tmp/collections-w7i/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:119
fedora.linux_system_roles.hpc : Configure the NVIDIA CUDA repository ---- 0.42s
/tmp/collections-w7i/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:23
Check if versionlock entries exist -------------------------------------- 0.38s
/tmp/collections-w7i/ansible_collections/fedora/linux_system_roles/tests/hpc/tasks/cleanup.yml:3
fedora.linux_system_roles.hpc : Check if kernel versionlock entries exist --- 0.35s
/tmp/collections-w7i/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:114
Run the role ------------------------------------------------------------ 0.06s
/tmp/collections-w7i/ansible_collections/fedora/linux_system_roles/tests/hpc/tests_default.yml:19
fedora.linux_system_roles.hpc : Set platform/version specific variables --- 0.04s
/tmp/collections-w7i/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/set_vars.yml:19
fedora.linux_system_roles.hpc : Set flag to indicate system is ostree --- 0.02s
/tmp/collections-w7i/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/set_vars.yml:15
Cleanup ----------------------------------------------------------------- 0.02s
/tmp/collections-w7i/ansible_collections/fedora/linux_system_roles/tests/hpc/tests_default.yml:23
fedora.linux_system_roles.hpc : Set platform/version specific variables --- 0.02s
/tmp/collections-w7i/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:3
fedora.linux_system_roles.hpc : Force install kernel -------------------- 0.01s
/tmp/collections-w7i/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:89
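Note: the journal below shows the failing task invoked ansible.builtin.dnf with name=['@nvidia-driver:575-dkms']. A minimal stand-alone sketch of that call, for reproducing the depsolve error outside the test (reconstructed from the journal entry, not copied from the role source):

    - name: Enable proprietary nvidia-driver  # reconstruction of the failing call
      ansible.builtin.dnf:
        name: "@nvidia-driver:575-dkms"  # '@' selects a module stream, not an RPM
        state: present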
Sep 17 04:13:47 managed-node1 sshd[8255]: Accepted publickey for root from 10.31.15.140 port 34758 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Sep 17 04:13:47 managed-node1 systemd-logind[609]: New session 13 of user root.
░░ Subject: A new session 13 has been created for user root
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A new session with the ID 13 has been created for the user root.
░░
░░ The leading process of the session is 8255.
Sep 17 04:13:47 managed-node1 systemd[1]: Started Session 13 of User root.
░░ Subject: A start job for unit session-13.scope has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit session-13.scope has finished successfully.
░░
░░ The job identifier is 1591.
Sep 17 04:13:47 managed-node1 sshd[8255]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)
Sep 17 04:13:47 managed-node1 sshd[8258]: Received disconnect from 10.31.15.140 port 34758:11: disconnected by user
Sep 17 04:13:47 managed-node1 sshd[8258]: Disconnected from user root 10.31.15.140 port 34758
Sep 17 04:13:47 managed-node1 sshd[8255]: pam_unix(sshd:session): session closed for user root
Sep 17 04:13:47 managed-node1 systemd[1]: session-13.scope: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit session-13.scope has successfully entered the 'dead' state.
Sep 17 04:13:47 managed-node1 systemd-logind[609]: Session 13 logged out. Waiting for processes to exit.
Sep 17 04:13:47 managed-node1 systemd-logind[609]: Removed session 13.
░░ Subject: Session 13 has been terminated
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A session with the ID 13 has been terminated.
Sep 17 04:13:48 managed-node1 python3.9[8456]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family', 'devices'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 17 04:13:49 managed-node1 python3.9[8620]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 17 04:13:50 managed-node1 python3.9[8769]: ansible-rpm_key Invoked with key=https://dl.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-9 state=present validate_certs=True fingerprint=None
Sep 17 04:13:50 managed-node1 python3.9[8923]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Sep 17 04:13:51 managed-node1 python3.9[9000]: ansible-ansible.legacy.dnf Invoked with name=['https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Sep 17 04:13:52 managed-node1 python3.9[9150]: ansible-rpm_key Invoked with key=https://developer.download.nvidia.com/compute/cuda/repos/rhel9/x86_64/D42D0685.pub state=present validate_certs=True fingerprint=None
Sep 17 04:13:52 managed-node1 python3.9[9305]: ansible-yum_repository Invoked with name=nvidia-cuda description=NVIDIA CUDA repository baseurl=['https://developer.download.nvidia.com/compute/cuda/repos/rhel9/x86_64'] gpgcheck=True reposdir=/etc/yum.repos.d state=present unsafe_writes=False bandwidth=None cost=None deltarpm_metadata_percentage=None deltarpm_percentage=None enabled=None enablegroups=None exclude=None failovermethod=None file=None gpgcakey=None gpgkey=None module_hotfixes=None http_caching=None include=None includepkgs=None ip_resolve=None keepalive=None keepcache=None metadata_expire=None metadata_expire_filter=None metalink=None mirrorlist=None mirrorlist_expire=None password=NOT_LOGGING_PARAMETER priority=None protect=None proxy=None proxy_password=NOT_LOGGING_PARAMETER proxy_username=None repo_gpgcheck=None retries=None s3_enabled=None skip_if_unavailable=None sslcacert=None ssl_check_cert_permissions=None sslclientcert=None sslclientkey=None sslverify=None throttle=None timeout=None ui_repoid_vars=None username=None async=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 17 04:13:53 managed-node1 python3.9[9454]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Sep 17 04:13:53 managed-node1 python3.9[9531]: ansible-ansible.legacy.dnf Invoked with name=['kernel-devel-5.14.0-612.el9.x86_64', 'kernel-headers-5.14.0-612.el9.x86_64'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Sep 17 04:13:55 managed-node1 python3.9[9685]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Sep 17 04:13:55 managed-node1 python3.9[9762]: ansible-ansible.legacy.dnf Invoked with name=['dnf-command(versionlock)'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Sep 17 04:13:56 managed-node1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
░░ Subject: A start job for unit run-r6aaf67aab0254ab5bd72ddb17f2c0690.service has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit run-r6aaf67aab0254ab5bd72ddb17f2c0690.service has finished successfully.
░░
░░ The job identifier is 1660.
Sep 17 04:13:56 managed-node1 systemd[1]: Starting man-db-cache-update.service...
░░ Subject: A start job for unit man-db-cache-update.service has begun execution
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit man-db-cache-update.service has begun execution.
░░
░░ The job identifier is 1725.
Sep 17 04:13:57 managed-node1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit man-db-cache-update.service has successfully entered the 'dead' state.
Sep 17 04:13:57 managed-node1 systemd[1]: Finished man-db-cache-update.service.
░░ Subject: A start job for unit man-db-cache-update.service has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit man-db-cache-update.service has finished successfully.
░░
░░ The job identifier is 1725.
Sep 17 04:13:57 managed-node1 systemd[1]: run-r6aaf67aab0254ab5bd72ddb17f2c0690.service: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit run-r6aaf67aab0254ab5bd72ddb17f2c0690.service has successfully entered the 'dead' state.
Sep 17 04:13:57 managed-node1 python3.9[9999]: ansible-stat Invoked with path=/etc/dnf/plugins/versionlock.list follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 17 04:13:58 managed-node1 python3.9[10150]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/dnf/plugins/versionlock.list _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 17 04:13:58 managed-node1 python3.9[10300]: ansible-ansible.legacy.command Invoked with _raw_params=dnf versionlock add kernel _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 17 04:13:59 managed-node1 python3.9[10450]: ansible-ansible.legacy.command Invoked with _raw_params=dnf versionlock add kernel-core _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 17 04:14:00 managed-node1 python3.9[10600]: ansible-ansible.legacy.command Invoked with _raw_params=dnf versionlock add kernel-modules _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 17 04:14:01 managed-node1 python3.9[10750]: ansible-ansible.legacy.command Invoked with _raw_params=dnf versionlock add kernel-modules-extra _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 17 04:14:02 managed-node1 python3.9[10900]: ansible-ansible.legacy.command Invoked with _raw_params=dnf versionlock add kernel-devel _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 17 04:14:03 managed-node1 python3.9[11050]: ansible-ansible.legacy.command Invoked with _raw_params=dnf versionlock add kernel-headers _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 17 04:14:04 managed-node1 python3.9[11200]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Sep 17 04:14:04 managed-node1 python3.9[11277]: ansible-ansible.legacy.dnf Invoked with name=['@nvidia-driver:575-dkms'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Sep 17 04:14:05 managed-node1 python3.9[11427]: ansible-stat Invoked with path=/etc/dnf/plugins/versionlock.list follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 17 04:14:06 managed-node1 python3.9[11578]: ansible-ansible.legacy.command Invoked with _raw_params=dnf versionlock clear _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 17 04:14:07 managed-node1 sshd[11604]: Accepted publickey for root from 10.31.15.140 port 48722 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Sep 17 04:14:07 managed-node1 systemd-logind[609]: New session 14 of user root.
░░ Subject: A new session 14 has been created for user root
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A new session with the ID 14 has been created for the user root.
░░
░░ The leading process of the session is 11604.
Sep 17 04:14:07 managed-node1 systemd[1]: Started Session 14 of User root.
░░ Subject: A start job for unit session-14.scope has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit session-14.scope has finished successfully.
░░
░░ The job identifier is 1790.
Sep 17 04:14:07 managed-node1 sshd[11604]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)
Sep 17 04:14:07 managed-node1 sshd[11607]: Received disconnect from 10.31.15.140 port 48722:11: disconnected by user
Sep 17 04:14:07 managed-node1 sshd[11607]: Disconnected from user root 10.31.15.140 port 48722
Sep 17 04:14:07 managed-node1 sshd[11604]: pam_unix(sshd:session): session closed for user root
Sep 17 04:14:07 managed-node1 systemd[1]: session-14.scope: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit session-14.scope has successfully entered the 'dead' state.
Sep 17 04:14:07 managed-node1 systemd-logind[609]: Session 14 logged out. Waiting for processes to exit.
Sep 17 04:14:07 managed-node1 systemd-logind[609]: Removed session 14.
░░ Subject: Session 14 has been terminated
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A session with the ID 14 has been terminated.
Sep 17 04:14:07 managed-node1 sshd[11632]: Accepted publickey for root from 10.31.15.140 port 48732 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Sep 17 04:14:07 managed-node1 systemd-logind[609]: New session 15 of user root.
░░ Subject: A new session 15 has been created for user root
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A new session with the ID 15 has been created for the user root.
░░
░░ The leading process of the session is 11632.
Sep 17 04:14:07 managed-node1 systemd[1]: Started Session 15 of User root.
░░ Subject: A start job for unit session-15.scope has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit session-15.scope has finished successfully.
░░
░░ The job identifier is 1859.
Sep 17 04:14:07 managed-node1 sshd[11632]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)