<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://www.rosemarknetworks.com/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Maeve</id>
	<title>RoseWiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://www.rosemarknetworks.com/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Maeve"/>
	<link rel="alternate" type="text/html" href="https://www.rosemarknetworks.com/wiki/index.php/Special:Contributions/Maeve"/>
	<updated>2026-05-01T08:43:34Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.42.3</generator>
	<entry>
		<id>https://www.rosemarknetworks.com/wiki/index.php?title=Category:Linux_Tutorials&amp;diff=154</id>
		<title>Category:Linux Tutorials</title>
		<link rel="alternate" type="text/html" href="https://www.rosemarknetworks.com/wiki/index.php?title=Category:Linux_Tutorials&amp;diff=154"/>
		<updated>2025-12-08T21:22:06Z</updated>

		<summary type="html">&lt;p&gt;Maeve: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here are some of our Linux tutorials.&lt;br /&gt;
&lt;br /&gt;
== Linux Tutorials (Especially helpful for Proxmox) ==&lt;br /&gt;
&lt;br /&gt;
* [[Offline Uncorrectable Sectors]]&lt;br /&gt;
* [[ZFS Failed Disk Replacement]]&lt;br /&gt;
* [[Calculating SSD Wearout]]&lt;br /&gt;
* [[Proxmox Backup Server Replication]]&lt;br /&gt;
* [[Proxmox Host SSH keys]]&lt;br /&gt;
* [[Proxmox Upgrade From 8 To 9]]&lt;br /&gt;
* [[Replacing A Proxmox Virtual Environment Server in a Ceph cluster]]&lt;br /&gt;
* [[Proxmox Backup Server Namespaces]]&lt;br /&gt;
* [[Proxmox Remove Ceph old health warnings]]&lt;/div&gt;</summary>
		<author><name>Maeve</name></author>
	</entry>
	<entry>
		<id>https://www.rosemarknetworks.com/wiki/index.php?title=Proxmox_Upgrade_From_8_To_9&amp;diff=153</id>
		<title>Proxmox Upgrade From 8 To 9</title>
		<link rel="alternate" type="text/html" href="https://www.rosemarknetworks.com/wiki/index.php?title=Proxmox_Upgrade_From_8_To_9&amp;diff=153"/>
		<updated>2025-12-08T21:19:22Z</updated>

		<summary type="html">&lt;p&gt;Maeve: Created page with &amp;quot;== Initial Requirements == In order to upgrade from Proxmox 8 to 9, we need to ensure that a few minimal requirements are met to ensure that both the upgrade goes well and that guests do not experience incompatibilities afterwards.  First, ensure that all members of the cluster are on the latest version of Proxmox 8 and Debian 12. This can be done by executing &amp;#039;&amp;#039;apt update&amp;#039;&amp;#039; and then &amp;#039;&amp;#039;apt dist-upgrade.&amp;#039;&amp;#039; If CEPH is installed, it must also be on the latest version of 19....&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Initial Requirements ==&lt;br /&gt;
In order to upgrade from Proxmox 8 to 9, a few minimal requirements must be met to ensure both that the upgrade goes well and that guests do not experience incompatibilities afterwards.&lt;br /&gt;
&lt;br /&gt;
First, ensure that all members of the cluster are on the latest version of Proxmox 8 and Debian 12. This can be done by executing &#039;&#039;apt update&#039;&#039; and then &#039;&#039;apt dist-upgrade&#039;&#039;. If Ceph is installed, it must also be on the latest version of 19.2; if Ceph is running 18, upgrade it beforehand. The upgrade will take at least 5 GB of storage on the root installation drive for each node, but it&#039;s preferable to have at least 10 GB free. &lt;br /&gt;
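&lt;br /&gt;
A quick pre-flight sketch (one way to verify the above; adjust to your environment, and skip the Ceph check if Ceph is not installed):&lt;br /&gt;
 pveversion       # should report pve-manager/8.x on every node&lt;br /&gt;
 ceph versions    # should report 19.2.x for all daemons&lt;br /&gt;
 df -h /          # confirm free space on the root drive&lt;br /&gt;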
&lt;br /&gt;
Start by executing the bundled &#039;&#039;pve8to9&#039;&#039; command and checking its output. &amp;quot;Warnings&amp;quot; flag issues that may cause incompatibilities or inconsistencies when the host reboots, or that may disrupt service on hosts during the upgrade. The most important step is to move running VMs off of the node being upgraded, either via manual migration or HA, or to power them off if they&#039;re not critical. &lt;br /&gt;
&lt;br /&gt;
=== QEMU version change ===&lt;br /&gt;
Due to Proxmox 9 sunsetting QEMU machine versions below 6, the first thing you should do is go to each VM&#039;s hardware options, check the Machine setting, and ensure that it&#039;s set to the latest QEMU machine version, presently 10.1. After setting this, reboot each affected VM and verify that the network interface and all drives are online; this is a known problem point. &lt;br /&gt;
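&lt;br /&gt;
One way to spot-check a VM from the CLI (VMID 100 is an example; empty output means the default machine type is in use):&lt;br /&gt;
 qm config 100 | grep ^machine&lt;br /&gt;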
&lt;br /&gt;
=== &#039;screen&#039; command ===&lt;br /&gt;
It&#039;s strongly advised that you run &#039;&#039;apt install screen&#039;&#039; before continuing. Screen is a terminal multiplexer that we will use to keep the upgrade running even if a network outage drops the SSH session.&lt;br /&gt;
&lt;br /&gt;
=== pve8to9 checklist ===&lt;br /&gt;
Run &#039;&#039;pve8to9&#039;&#039; from the command console. This will alert you to any potential breaking changes you need to watch for, such as quorum being insufficient for an upgrade, or a change to the bootloader that must be accommodated before rebooting after an otherwise successful upgrade. This is straightforward, and you can broadly copy and paste the commands it tells you to execute. &lt;br /&gt;
&lt;br /&gt;
== Update repositories ==&lt;br /&gt;
&lt;br /&gt;
=== Debian and Proxmox repositories ===&lt;br /&gt;
Debian 13 uses a new repository source format called deb822. We start by adding the Proxmox repository as a deb822 file.&lt;br /&gt;
 cat &amp;gt; /etc/apt/sources.list.d/proxmox.sources &amp;lt;&amp;lt; EOF&lt;br /&gt;
 Types: deb&lt;br /&gt;
 URIs: &amp;lt;nowiki&amp;gt;http://download.proxmox.com/debian/pve&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
 Suites: trixie&lt;br /&gt;
 Components: pve-no-subscription&lt;br /&gt;
 Signed-By: /usr/share/keyrings/proxmox-archive-keyring.gpg&lt;br /&gt;
 EOF&lt;br /&gt;
If using an enterprise repository, use the following:&lt;br /&gt;
 cat &amp;gt; /etc/apt/sources.list.d/proxmox-enterprise.sources &amp;lt;&amp;lt; EOF&lt;br /&gt;
 Types: deb&lt;br /&gt;
 URIs: &amp;lt;nowiki&amp;gt;https://enterprise.proxmox.com/debian/pve&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
 Suites: trixie&lt;br /&gt;
 Components: pve-enterprise&lt;br /&gt;
 Signed-By: /usr/share/keyrings/proxmox-archive-keyring.gpg&lt;br /&gt;
 EOF&lt;br /&gt;
Then let&#039;s ensure that we have the Debian repositories updated to 13, codenamed Trixie:&lt;br /&gt;
 sed -i &#039;s/bookworm/trixie/g&#039; /etc/apt/sources.list&lt;br /&gt;
Manually check the various files in /etc/apt/sources.list and /etc/apt/sources.list.d using nano or vim. Any third-party repositories such as Zabbix should either be disabled by prefacing their lines with # or be updated to the trixie equivalent if possible. If a Debian 13 package exists for the third-party repository, simply change &#039;&#039;bookworm&#039;&#039; to &#039;&#039;trixie&#039;&#039;. Also, at this stage, &#039;&#039;remove any references to Proxmox outside of our new proxmox.sources file&#039;&#039;. &lt;br /&gt;
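&lt;br /&gt;
A quick sketch for finding leftovers (one possible check; matches in the new .sources files are expected, so review by hand before changing anything):&lt;br /&gt;
 grep -ri &#039;bookworm\|proxmox&#039; /etc/apt/sources.list /etc/apt/sources.list.d/&lt;br /&gt;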
&lt;br /&gt;
=== CEPH repositories ===&lt;br /&gt;
For CEPH repositories we follow the same procedure. Remove any references to CEPH in existing files in /etc/apt/sources.list and sources.list.d. Then add the deb822:&lt;br /&gt;
 cat &amp;gt; /etc/apt/sources.list.d/ceph.sources &amp;lt;&amp;lt; EOF&lt;br /&gt;
 Types: deb&lt;br /&gt;
 URIs: &amp;lt;nowiki&amp;gt;http://download.proxmox.com/debian/ceph-squid&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
 Suites: trixie&lt;br /&gt;
 Components: no-subscription&lt;br /&gt;
 Signed-By: /usr/share/keyrings/proxmox-archive-keyring.gpg&lt;br /&gt;
 EOF&lt;br /&gt;
For enterprise, use the following:&lt;br /&gt;
 cat &amp;gt; /etc/apt/sources.list.d/ceph.sources &amp;lt;&amp;lt; EOF&lt;br /&gt;
 Types: deb&lt;br /&gt;
 URIs: &amp;lt;nowiki&amp;gt;https://enterprise.proxmox.com/debian/ceph-squid&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
 Suites: trixie&lt;br /&gt;
 Components: enterprise&lt;br /&gt;
 Signed-By: /usr/share/keyrings/proxmox-archive-keyring.gpg&lt;br /&gt;
 EOF&lt;br /&gt;
Update the repositories by running:&lt;br /&gt;
 apt update&lt;br /&gt;
 apt policy&lt;br /&gt;
After running &#039;&#039;apt policy&#039;&#039;, the enabled repositories and their sources are listed. Ensure that none say &amp;quot;bookworm&amp;quot;. &lt;br /&gt;
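One way to confirm at a glance (no output means no stale bookworm entries):&lt;br /&gt;
 apt policy | grep -i bookworm&lt;br /&gt;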
&lt;br /&gt;
== Upgrade ==&lt;br /&gt;
&#039;&#039;&#039;NEVER&#039;&#039;&#039; perform a Proxmox upgrade from the web UI console. Connect via SSH. Start a screen session:&lt;br /&gt;
 screen -S proxmoxupgrade&lt;br /&gt;
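If the SSH connection drops, reconnect and reattach to the same session (a quick reference; &#039;&#039;proxmoxupgrade&#039;&#039; is just the session name chosen above):&lt;br /&gt;
 screen -r proxmoxupgrade&lt;br /&gt;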
At this point we can start the update.&lt;br /&gt;
 # NEVER use apt upgrade. apt full-upgrade and apt dist-upgrade are synonyms, but all official Proxmox guides suggest dist-upgrade.&lt;br /&gt;
 apt dist-upgrade&lt;br /&gt;
At various points during the upgrade, you will be prompted to replace existing config files with new ones from package maintainers. Here are some guidelines on which ones to allow to be replaced and which ones to keep:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;/etc/issue&#039;&#039;&#039; - N (do not replace)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;/etc/lvm/lvm.conf&#039;&#039;&#039; - Y (replace)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;/etc/ssh/sshd_config&#039;&#039;&#039; - N&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;/etc/default/grub&#039;&#039;&#039; - You will only be asked this if there&#039;s been a meaningful change to the grub configuration. It&#039;s recommended you view the difference between the two. If the only difference is the addition of some bash scripting, accept the new file. If any lines differ in boot execution, consider carefully.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;/etc/chrony/chrony.conf&#039;&#039;&#039; - Only prompted if custom configurations have been made. /etc/chrony/chrony.conf.d has been deprecated and any config lines in files under this directory must be moved to /etc/chrony/chrony.conf. It&#039;s unlikely that this has occurred and you will likely not be prompted.&lt;br /&gt;
&lt;br /&gt;
== After Upgrade ==&lt;br /&gt;
Always reboot a node after an upgrade to ensure that it&#039;s running the latest Linux kernel and all packages have been reloaded. &lt;br /&gt;
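&lt;br /&gt;
A quick post-reboot sanity check (a suggested check, not an official requirement):&lt;br /&gt;
 pveversion   # should now report pve-manager/9.x&lt;br /&gt;
 uname -r     # confirm the new kernel is running&lt;br /&gt;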
&lt;br /&gt;
Be aware that Proxmox 9 deprecates &amp;quot;HA groups&amp;quot; in favor of &amp;quot;HA rules&amp;quot;. The conversion is automatic and is triggered once all cluster members are on 9 or above.&lt;br /&gt;
&lt;br /&gt;
You may now run &#039;&#039;apt modernize-sources&#039;&#039; which will convert any old repository configurations to the deb822 format. &lt;br /&gt;
&lt;br /&gt;
It&#039;s advised to run &#039;&#039;systemctl disable --now systemd-journald-audit.socket&#039;&#039; to prevent overly long kernel audit messages during the upgrade.&lt;br /&gt;
&lt;br /&gt;
If booting from LVM in UEFI mode, we have to run &#039;&#039;[ -d /sys/firmware/efi ] &amp;amp;&amp;amp; apt install grub-efi-amd64&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Some systems&#039; LVM thin pools may report an error; run &#039;&#039;lvconvert --repair pve/data&#039;&#039; to resolve it.&lt;/div&gt;</summary>
		<author><name>Maeve</name></author>
	</entry>
	<entry>
		<id>https://www.rosemarknetworks.com/wiki/index.php?title=ZFS_Failed_Disk_Replacement&amp;diff=152</id>
		<title>ZFS Failed Disk Replacement</title>
		<link rel="alternate" type="text/html" href="https://www.rosemarknetworks.com/wiki/index.php?title=ZFS_Failed_Disk_Replacement&amp;diff=152"/>
		<updated>2025-07-12T21:27:11Z</updated>

		<summary type="html">&lt;p&gt;Maeve: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Copy partitions from good disk sda to blank disk sdb==&lt;br /&gt;
 sgdisk /dev/sda -R /dev/sdb   # sgdisk /dev/sda &amp;lt;from this disk&amp;gt; -R /dev/sdb &amp;lt;replicate to this disk&amp;gt;&lt;br /&gt;
 sgdisk -G /dev/sdb            # randomize the GUID on the new disk since it was copied from the other drive&lt;br /&gt;
&lt;br /&gt;
==Using Parted to verify the partition table of /dev/sdb==&lt;br /&gt;
 (parted) select /dev/sdb&lt;br /&gt;
 Using /dev/sdb&lt;br /&gt;
 &lt;br /&gt;
 (parted) p&lt;br /&gt;
     Model: ATA WDC WD2000FYYZ-0 (scsi)&lt;br /&gt;
     Disk /dev/sdb: 2000398934016B&lt;br /&gt;
     Sector size (logical/physical): 512B/512B&lt;br /&gt;
     Partition Table: gpt&lt;br /&gt;
     Disk Flags:&lt;br /&gt;
     Number Start End Size File system Name Flags&lt;br /&gt;
     1 1048576B 2097151B 1048576B Grub-Boot-Partition bios_grub&lt;br /&gt;
     2 2097152B 136314879B 134217728B fat32 EFI-System-Partition boot, esp&lt;br /&gt;
     3 136314880B 2000397885439B 2000261570560B zfs PVE-ZFS-Partition&lt;br /&gt;
 &lt;br /&gt;
 (Ok partitions copied)&lt;br /&gt;
&lt;br /&gt;
==Copy data from /dev/sda1 to /dev/sdb1==&lt;br /&gt;
 root@folkvang:~# dd if=/dev/sda1 of=/dev/sdb1 bs=512   # this is the BIOS boot partition&lt;br /&gt;
 2014+0 records in   &lt;br /&gt;
 2014+0 records out  &lt;br /&gt;
 1031168 bytes (1.0 MB) copied, 0.10164 s, 10.1 MB/s  &lt;br /&gt;
&lt;br /&gt;
==Replace the failed partition in the zpool==&lt;br /&gt;
Find the ID of the failed block device&lt;br /&gt;
 root@folkvang:~# zpool status&lt;br /&gt;
 pool: rpool&lt;br /&gt;
     state: DEGRADED&lt;br /&gt;
     status: One or more devices could not be used because the label is missing or invalid. Sufficient replicas exist for the pool to continue functioning in a degraded state.&lt;br /&gt;
     action: Replace the device using &#039;zpool replace&#039;.&lt;br /&gt;
     see: http://zfsonlinux.org/msg/ZFS-8000-4J&lt;br /&gt;
     scan: scrub repaired 0 in 0h25m with 0 errors on Sun May  8 11:20:27 2016&lt;br /&gt;
     config:&lt;br /&gt;
     NAME                    STATE     READ WRITE CKSUM&lt;br /&gt;
     rpool                   DEGRADED     0     0     0&lt;br /&gt;
       mirror-0              DEGRADED     0     0     0&lt;br /&gt;
         993077023721924477  FAULTED      0     0     0  was /dev/sdk2&lt;br /&gt;
         sdk2                ONLINE       0     0     0&lt;br /&gt;
     errors: No known data errors&lt;br /&gt;
&lt;br /&gt;
==Call zpool to replace the failed device==&lt;br /&gt;
 root@folkvang:~# zpool replace -f rpool 993077023721924477 /dev/sdl2&lt;br /&gt;
 &lt;br /&gt;
Make sure to wait until resilver is done before rebooting.&lt;br /&gt;
 root@folkvang:~# zpool status&lt;br /&gt;
 pool: rpool&lt;br /&gt;
     state: DEGRADED&lt;br /&gt;
     status: One or more devices is currently being resilvered.  The pool will continue to function, possibly in a degraded state.&lt;br /&gt;
     action: Wait for the resilver to complete.&lt;br /&gt;
     scan: resilver in progress since Fri Sep  2 16:45:53 2016&lt;br /&gt;
     13.2M scanned out of 8.83G at 902K/s, 2h50m to go&lt;br /&gt;
     12.9M resilvered, 0.15% done&lt;br /&gt;
     config:&lt;br /&gt;
     NAME                      STATE     READ WRITE CKSUM&lt;br /&gt;
     rpool                     DEGRADED     0     0     0&lt;br /&gt;
       mirror-0                DEGRADED     0     0     0&lt;br /&gt;
         replacing-0           UNAVAIL      0     0     0&lt;br /&gt;
           993077023721924477  FAULTED      0     0     0  was /dev/sdk2&lt;br /&gt;
           sdl2                ONLINE       0     0     0  (resilvering)&lt;br /&gt;
         sdk2                  ONLINE       0     0     0&lt;br /&gt;
     errors: No known data errors&lt;br /&gt;
&lt;br /&gt;
== After fixing the drive, ensure that the boot partitions are configured ==&lt;br /&gt;
 proxmox-boot-tool format /dev/sdb2   # format the new EFI system partition&lt;br /&gt;
 proxmox-boot-tool init /dev/sdb2     # initialize it and register it for kernel syncing&lt;br /&gt;
 proxmox-boot-tool refresh            # copy the current kernels and bootloader onto it&lt;br /&gt;
 proxmox-boot-tool status             # confirm both ESPs are listed&lt;br /&gt;
 proxmox-boot-tool clean              # remove stale entries for the failed disk&lt;br /&gt;
&lt;br /&gt;
For legacy BIOS boot, also reinstall GRUB on both disks:&lt;br /&gt;
 grub-install /dev/sdk&lt;br /&gt;
 grub-install /dev/sdl&lt;br /&gt;
 update-grub&lt;br /&gt;
&lt;br /&gt;
== OPTIONAL: Set Zpool to Expand ==&lt;br /&gt;
We can set a zpool to expand itself to fill all available storage by first taking the relevant device offline with &#039;&#039;&#039;zpool offline &amp;lt;zpool&amp;gt; &amp;lt;device&amp;gt;&#039;&#039;&#039; and then bringing it back online with a special flag that tells it to expand to fill its storage space: &#039;&#039;&#039;zpool online -e &amp;lt;zpool&amp;gt; &amp;lt;device&amp;gt;&#039;&#039;&#039;. Sometimes this happens automatically (the &#039;&#039;autoexpand&#039;&#039; pool property), but if expansion appears not to be working, doing this rules that setting out. &lt;br /&gt;
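&lt;br /&gt;
As a concrete sketch (pool and device names are illustrative; repeat for each device that was enlarged):&lt;br /&gt;
 zpool offline rpool sdb3&lt;br /&gt;
 zpool online -e rpool sdb3&lt;br /&gt;
 zpool set autoexpand=on rpool   # optional: expand automatically in the future&lt;br /&gt;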
&lt;br /&gt;
If you cloned these drives directly from one drive to a larger drive, this won&#039;t work, as the partition table hasn&#039;t been reconfigured. You&#039;ll have to move the end partition and then expand the first one before running &#039;&#039;&#039;zpool&#039;&#039;&#039; &#039;&#039;&#039;online -e&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
[[Category:Linux Tutorials]]&lt;br /&gt;
[[Category:Proxmox Tutorials]]&lt;/div&gt;</summary>
		<author><name>Maeve</name></author>
	</entry>
	<entry>
		<id>https://www.rosemarknetworks.com/wiki/index.php?title=Force_Wireguard_DNS_Zone_Override&amp;diff=151</id>
		<title>Force Wireguard DNS Zone Override</title>
		<link rel="alternate" type="text/html" href="https://www.rosemarknetworks.com/wiki/index.php?title=Force_Wireguard_DNS_Zone_Override&amp;diff=151"/>
		<updated>2025-07-07T14:54:33Z</updated>

		<summary type="html">&lt;p&gt;Maeve: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Sometimes you may have a situation in which you require a machine using a Wireguard tunnel to always use a specific DNS server to resolve specific zones. One case where we needed this was an Active Directory DNS zone that also matched a publicly available DNS zone. The publicly available records resolved to one end of a NAT&#039;d IP range, but the AD DNS zone had records pointing to the other side. When connecting locally in the network, this was fine, but some Windows clients were resolving through the LAN&#039;s DNS regardless of the search domain set in the Wireguard config.&lt;br /&gt;
&lt;br /&gt;
To fix this we can leverage the power of the [https://learn.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-r2-and-2012/dn593632(v=ws.11) Name Resolution Policy Table].&lt;br /&gt;
&lt;br /&gt;
You could simply fix this by executing this command:&lt;br /&gt;
 Add-DnsClientNrptRule -Comment &#039;NameOfVPNTunnel&#039; -Namespace &#039;.dnszone.com&#039; -NameServers 203.0.113.0&lt;br /&gt;
This works, but the problem is that this rule will always be active. If you know for sure that the device&#039;s network situation will never change, this is fine, but if it&#039;s a mobile workstation that may be repurposed, you should avoid this if possible. We can tell Wireguard to dynamically create and remove these rules using the PostUp and PostDown hooks in the Wireguard config. The problem is that these hooks are disabled by default due to their potential for abuse. The solution is to enable them using a registry edit:&lt;br /&gt;
 reg add HKLM\Software\WireGuard /v DangerousScriptExecution /t REG_DWORD /d 1 /f&lt;br /&gt;
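You can confirm the value took effect (an optional check):&lt;br /&gt;
 reg query HKLM\Software\WireGuard /v DangerousScriptExecution&lt;br /&gt;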
&lt;br /&gt;
If the &#039;&#039;reg add&#039;&#039; command reports &amp;quot;The operation completed successfully&amp;quot;, we can now add these lines to our Wireguard config:&lt;br /&gt;
 PostUp = powershell.exe -Command &amp;quot;&amp;amp; { Add-DnsClientNrptRule -Comment &#039;NameOfVPNTunnel&#039; -Namespace &#039;.dnszone.com&#039; -NameServers 203.0.113.0 }&amp;quot;&lt;br /&gt;
 PostDown = powershell.exe -Command &amp;quot;&amp;amp; { Get-DnsClientNrptRule | where Comment -eq &#039;NameOfVPNTunnel&#039; | foreach { Remove-DnsClientNrptRule -Name $_.Name -Force } }&amp;quot;&lt;/div&gt;</summary>
		<author><name>Maeve</name></author>
	</entry>
	<entry>
		<id>https://www.rosemarknetworks.com/wiki/index.php?title=Force_Wireguard_DNS_Zone_Override&amp;diff=150</id>
		<title>Force Wireguard DNS Zone Override</title>
		<link rel="alternate" type="text/html" href="https://www.rosemarknetworks.com/wiki/index.php?title=Force_Wireguard_DNS_Zone_Override&amp;diff=150"/>
		<updated>2025-07-03T17:41:01Z</updated>

		<summary type="html">&lt;p&gt;Maeve: Created page with &amp;quot;Sometimes you may have a situation in which you require a machine using a Wireguard tunnel to always use a specific DNS server to resolve specific zones. One case where we needed this was an Active Directory DNS zone that also matched a publicly available DNS zone. The publicly available records resolved to one end of a NAT&amp;#039;d IP range, but the AD DNS zone had records pointing to the other side. When connecting locally in the network, this was fine, but some Windows clien...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Sometimes you may have a situation in which you require a machine using a Wireguard tunnel to always use a specific DNS server to resolve specific zones. One case where we needed this was an Active Directory DNS zone that also matched a publicly available DNS zone. The publicly available records resolved to one end of a NAT&#039;d IP range, but the AD DNS zone had records pointing to the other side. When connecting locally in the network, this was fine, but some Windows clients were resolving through the LAN&#039;s DNS regardless of the search domain set in the Wireguard config.&lt;br /&gt;
&lt;br /&gt;
To fix this we can leverage the power of the [https://learn.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-r2-and-2012/dn593632(v=ws.11) Name Resolution Policy Table].&lt;br /&gt;
&lt;br /&gt;
You could simply fix this by executing this command:&lt;br /&gt;
 Add-DnsClientNrptRule -Comment &#039;NameOfVPNTunnel&#039; -Namespace &#039;.dnszone.com&#039; -NameServers 203.0.113.0&lt;br /&gt;
This works, but the problem is that this rule will always be active. If you know for sure that the device&#039;s network situation will never change, this is fine, but if it&#039;s a mobile workstation that may be repurposed, you should avoid this if possible. We can tell Wireguard to dynamically create and remove these rules using the PostUp and PostDown hooks in the Wireguard config. The problem is that these hooks are disabled by default due to their potential for abuse. The solution is to enable them using a registry edit:&lt;br /&gt;
 Set-ItemProperty -Path HKLM:\SOFTWARE\Wireguard -Name DangerousScriptExecution -Type DWord -Value 1&lt;br /&gt;
If this results in &amp;quot;The operation was completed successfully&amp;quot; we can now add these lines to our wireguard config:&lt;br /&gt;
 PostUp = powershell.exe -Command &amp;quot;&amp;amp; { Add-DnsClientNrptRule -Comment &#039;NameOfVPNTunnel&#039; -Namespace &#039;.dnszone.com&#039; -NameServers 203.0.113.0 }&amp;quot;&lt;br /&gt;
 PostDown = powershell.exe -Command &amp;quot;&amp;amp; { Get-DnsClientNrptRule | where Comment -eq &#039;NameOfVPNTunnel&#039; | foreach { Remove-DnsClientNrptRule -Name $_.Name -Force } }&amp;quot;&lt;/div&gt;</summary>
		<author><name>Maeve</name></author>
	</entry>
	<entry>
		<id>https://www.rosemarknetworks.com/wiki/index.php?title=Watchguard_IKEv2_Certificates&amp;diff=149</id>
		<title>Watchguard IKEv2 Certificates</title>
		<link rel="alternate" type="text/html" href="https://www.rosemarknetworks.com/wiki/index.php?title=Watchguard_IKEv2_Certificates&amp;diff=149"/>
		<updated>2025-05-26T20:24:10Z</updated>

		<summary type="html">&lt;p&gt;Maeve: Created page with &amp;quot;Watchguard Fireboxes have several VPN protocols available for their Mobile VPN service. When using IKEv2, there&amp;#039;s a problem that can arise if using the built-in self-signed certificates which are generated automatically. The problem is that the Firebox uses an internal CA certificate to sign the certificate. When configuring the client machine, you install a certificate bundle from the Firebox. For pretty reasonable security reasons, however, you cannot move this interna...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Watchguard Fireboxes have several VPN protocols available for their Mobile VPN service. When using IKEv2, a problem can arise if you use the built-in self-signed certificates, which are generated automatically. The problem is that the Firebox uses an internal CA certificate to sign the certificate, and when configuring the client machine, you install a certificate bundle from the Firebox. For reasonable security reasons, however, you cannot move this internal CA certificate from one Firebox to another, meaning that when you replace the Firebox, you &#039;&#039;must&#039;&#039; redo all VPN configurations, as the root CA installed on each client machine no longer matches. This is hugely inconvenient. The solution is to use a certificate signed by a third-party certificate authority (CA). You can either use a major third-party CA like Sectigo, or you can host one in-house. &lt;br /&gt;
&lt;br /&gt;
You &#039;&#039;can&#039;&#039; use Windows CA Server but this requires another license. The other solution is to use an open-source CA system like [https://smallstep.com/docs/step-ca/ step-ca].&lt;br /&gt;
&lt;br /&gt;
== Step CA Configuration ==&lt;br /&gt;
&lt;br /&gt;
=== Step CA Host Certificate Length ===&lt;br /&gt;
Step CA was originally designed for automated recertification of internal web services, so the default certificate lifetime is 24 hours. If you want your signed certificates to last longer, we first need to edit the configuration of Step.&lt;br /&gt;
 # -- snippet start - /root/.step/configs/ca.json --&lt;br /&gt;
 &amp;quot;authority&amp;quot;: {&lt;br /&gt;
     &amp;quot;claims&amp;quot;: {&lt;br /&gt;
         &amp;quot;maxTLSCertDuration&amp;quot;: &amp;quot;175200h&amp;quot;  # ~20 years&lt;br /&gt;
     },&lt;br /&gt;
     &amp;quot;provisioners&amp;quot;: [&lt;br /&gt;
 # -- snippet end --&lt;br /&gt;
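&lt;br /&gt;
With the claim raised (and step-ca restarted), a long-lived certificate with the needed SANs can be requested. A hypothetical invocation, assuming the &#039;&#039;step&#039;&#039; CLI has been bootstrapped against your CA; file names, SANs, and duration are examples:&lt;br /&gt;
 step ca certificate hostname.domainname.com firebox.crt firebox.key \&lt;br /&gt;
   --san hostname.domainname.com --san 203.0.113.2 --not-after 175200h&lt;br /&gt;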
&lt;br /&gt;
== Firebox Certificates ==&lt;br /&gt;
The IKEv2 certificate authenticates based on two things: &#039;&#039;&#039;trust&#039;&#039;&#039; and &#039;&#039;&#039;SAN matching&#039;&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
=== Trust ===&lt;br /&gt;
The benefit of using a third-party certificate server is that the only thing you have to install on the client machines &#039;&#039;is the root CA&#039;&#039;, not the certificate on the Firebox. As long as the same root CA signs whatever certificate is assigned to the Firebox, trust is maintained and the client authenticates. You can therefore swap out the physical hardware an indefinite number of times, or change the hostname / IP of the machine, and as long as the rest of the certificate matches and is signed by the same root CA, no action (except for updating the endpoint address) has to be taken on the clients.&lt;br /&gt;
&lt;br /&gt;
=== SAN matching ===&lt;br /&gt;
As long as certificate trust is maintained, the next thing you have to consider is the SANs (&#039;&#039;&#039;Subject Alternative Name&#039;&#039;&#039;) on the certificate on the Firebox. The SAN is set as such in OpenSSL syntax: &lt;br /&gt;
 subjectAltName=DNS:hostname.domainname.com,IP:203.0.113.2&lt;/div&gt;</summary>
		<author><name>Maeve</name></author>
	</entry>
	<entry>
		<id>https://www.rosemarknetworks.com/wiki/index.php?title=Rustdesk&amp;diff=148</id>
		<title>Rustdesk</title>
		<link rel="alternate" type="text/html" href="https://www.rosemarknetworks.com/wiki/index.php?title=Rustdesk&amp;diff=148"/>
		<updated>2025-04-28T20:50:07Z</updated>

		<summary type="html">&lt;p&gt;Maeve: Created page with &amp;quot;&amp;#039;&amp;#039;&amp;#039;Rustdesk&amp;#039;&amp;#039;&amp;#039; is a Remote Access Software used by Rosemark Networks to administer support to our customers regardless of VPN connectivity status.   == Installation and configuration ==  # [https://github.com/rustdesk/rustdesk/releases Download the latest version of Rustdesk from their official Github repository page. Download the 64-bit version MSI installer.] # When prompted during the install, uncheck &amp;#039;&amp;#039;Install&amp;#039;&amp;#039; &amp;#039;&amp;#039;Rustdesk Printer.&amp;#039;&amp;#039;  # After finishing the installati...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Rustdesk&#039;&#039;&#039; is remote access software used by Rosemark Networks to provide support to our customers regardless of VPN connectivity status. &lt;br /&gt;
&lt;br /&gt;
== Installation and configuration ==&lt;br /&gt;
&lt;br /&gt;
# [https://github.com/rustdesk/rustdesk/releases Download the latest version of Rustdesk from the official GitHub releases page]. Download the 64-bit MSI installer.&lt;br /&gt;
# When prompted during the install, uncheck &#039;&#039;Install Rustdesk Printer&#039;&#039;. &lt;br /&gt;
# After finishing the installation, from the Rustdesk main menu, select the hamburger menu (three lines near the window titlebar controls). &lt;br /&gt;
# Select &#039;&#039;Network&#039;&#039; settings. &lt;br /&gt;
# Press &#039;&#039;Unlock&#039;&#039; and select &#039;&#039;Relay/ID Server Settings&#039;&#039;. &lt;br /&gt;
# Enter &#039;&#039;&#039;rd.rosemarknetworks.com&#039;&#039;&#039; as both the ID and Relay server.&lt;br /&gt;
# API Server remains blank.&lt;br /&gt;
# The key can be found on this secret page: [[RustdeskPublicKey|Rustdesk public key]]. You cannot access this page unless you are a Sysop user on this wiki. &lt;br /&gt;
# Press OK. &lt;br /&gt;
# Go to &#039;&#039;Security&#039;&#039; settings and select Permanent Password. This will prompt you to set your permanent password. &lt;br /&gt;
# Restart the Rustdesk app.&lt;br /&gt;
# Make note of the Rustdesk ID.&lt;/div&gt;</summary>
		<author><name>Maeve</name></author>
	</entry>
	<entry>
		<id>https://www.rosemarknetworks.com/wiki/index.php?title=Secret:RustdeskPublicKey&amp;diff=147</id>
		<title>Secret:RustdeskPublicKey</title>
		<link rel="alternate" type="text/html" href="https://www.rosemarknetworks.com/wiki/index.php?title=Secret:RustdeskPublicKey&amp;diff=147"/>
		<updated>2025-04-28T20:40:53Z</updated>

		<summary type="html">&lt;p&gt;Maeve: Maeve moved page Secret:RustdeskPublicKey to RustdeskPublicKey&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[RustdeskPublicKey]]&lt;/div&gt;</summary>
		<author><name>Maeve</name></author>
	</entry>
	<entry>
		<id>https://www.rosemarknetworks.com/wiki/index.php?title=Proxmox_Backup_Server&amp;diff=142</id>
		<title>Proxmox Backup Server</title>
		<link rel="alternate" type="text/html" href="https://www.rosemarknetworks.com/wiki/index.php?title=Proxmox_Backup_Server&amp;diff=142"/>
		<updated>2025-03-18T20:39:43Z</updated>

		<summary type="html">&lt;p&gt;Maeve: Created page with &amp;quot;== PAGE IS WIP == This page is a work in progress as there is much to discuss and document regarding Proxmox Backup Server (PBS) and integrations with Proxmox Virtual Environment (PVE).  == Overview ==  == Installation ==  == Datastore Creation ==  == Scheduled Jobs ==  === Prune Jobs ===  === Garbage Collection Jobs ===  === Verify Jobs ===  == Administration ==  === Notifications ===  ==== Datastore Notifications Caveat ==== When setting up the notification system in P...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== PAGE IS WIP ==&lt;br /&gt;
This page is a work in progress as there is much to discuss and document regarding Proxmox Backup Server (PBS) and integrations with Proxmox Virtual Environment (PVE).&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
&lt;br /&gt;
== Installation ==&lt;br /&gt;
&lt;br /&gt;
== Datastore Creation ==&lt;br /&gt;
&lt;br /&gt;
== Scheduled Jobs ==&lt;br /&gt;
&lt;br /&gt;
=== Prune Jobs ===&lt;br /&gt;
&lt;br /&gt;
=== Garbage Collection Jobs ===&lt;br /&gt;
&lt;br /&gt;
=== Verify Jobs ===&lt;br /&gt;
&lt;br /&gt;
== Administration ==&lt;br /&gt;
&lt;br /&gt;
=== Notifications ===&lt;br /&gt;
&lt;br /&gt;
==== Datastore Notifications Caveat ====&lt;br /&gt;
When setting up the notification system in Proxmox Backup Server, ensure that you select &amp;quot;Notification System&amp;quot; under the Notification Mode options on each datastore you want to receive notifications for.&lt;/div&gt;</summary>
		<author><name>Maeve</name></author>
	</entry>
	<entry>
		<id>https://www.rosemarknetworks.com/wiki/index.php?title=Netutils_Manual&amp;diff=141</id>
		<title>Netutils Manual</title>
		<link rel="alternate" type="text/html" href="https://www.rosemarknetworks.com/wiki/index.php?title=Netutils_Manual&amp;diff=141"/>
		<updated>2025-01-03T17:49:19Z</updated>

		<summary type="html">&lt;p&gt;Maeve: Created page with &amp;quot;WIP Test&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;WIP Test&lt;/div&gt;</summary>
		<author><name>Maeve</name></author>
	</entry>
	<entry>
		<id>https://www.rosemarknetworks.com/wiki/index.php?title=Category:Software_Tutorials&amp;diff=140</id>
		<title>Category:Software Tutorials</title>
		<link rel="alternate" type="text/html" href="https://www.rosemarknetworks.com/wiki/index.php?title=Category:Software_Tutorials&amp;diff=140"/>
		<updated>2025-01-03T17:49:06Z</updated>

		<summary type="html">&lt;p&gt;Maeve: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here are some of our tutorials and notes on software, both selfhosted FOSS and 3rd party products.&lt;br /&gt;
&lt;br /&gt;
== Software Tutorials ==&lt;br /&gt;
&lt;br /&gt;
* [[Firebox Content Inspection|Firebox Content Inspection (HTTPS Content Inspection)]]&lt;br /&gt;
* [[Netutils Manual]]&lt;br /&gt;
* [[Nextcloud]]&lt;br /&gt;
* [[Packetfence]]&lt;br /&gt;
* [[Proxy Server]] (high level)&lt;br /&gt;
** [[Reverse Proxy]] (use-case specific)&lt;br /&gt;
* [[Zabbix]]&lt;/div&gt;</summary>
		<author><name>Maeve</name></author>
	</entry>
	<entry>
		<id>https://www.rosemarknetworks.com/wiki/index.php?title=Proxmox_Backup_Server_Namespaces&amp;diff=139</id>
		<title>Proxmox Backup Server Namespaces</title>
		<link rel="alternate" type="text/html" href="https://www.rosemarknetworks.com/wiki/index.php?title=Proxmox_Backup_Server_Namespaces&amp;diff=139"/>
		<updated>2024-12-28T22:13:17Z</updated>

		<summary type="html">&lt;p&gt;Maeve: Created page with &amp;quot;Proxmox Backup Server has a feature called &amp;#039;&amp;#039;&amp;#039;namespaces&amp;#039;&amp;#039;&amp;#039;. Namespaces allow you to organize parts of your datastore under different directories, and then target these directories individually under prune jobs and sync jobs. One usecase may be if you have a set of backups that you only want to keep the last 30 days of, but you have another that you wish to keep a copy of per day.  Namespaces can be nested, with one inside another one, following directory slash notation...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Proxmox Backup Server has a feature called &#039;&#039;&#039;namespaces&#039;&#039;&#039;. Namespaces allow you to organize parts of your datastore under different directories, and then target these directories individually in prune jobs and sync jobs. One use case: you want to keep only the last 30 days of one set of backups, but keep one copy per day of another.&lt;br /&gt;
&lt;br /&gt;
Namespaces can be nested, one inside another, following directory slash notation (/root/nested_namespace1/nested_namespace2). Internally, Proxmox Backup Server&#039;s datastore uses an &amp;quot;ns&amp;quot; directory, alongside the &amp;quot;vm&amp;quot; and &amp;quot;ct&amp;quot; directories, to organize this content. So internally, a root namespace with three VMs and a nested namespace with two more VMs may appear as such:&lt;br /&gt;
 root/&lt;br /&gt;
 | vm/&lt;br /&gt;
   | 100/&lt;br /&gt;
   | 101/&lt;br /&gt;
   | 102/&lt;br /&gt;
 | ct/&lt;br /&gt;
 | ns/&lt;br /&gt;
   | nested/&lt;br /&gt;
     | vm/&lt;br /&gt;
       | 103/ &lt;br /&gt;
       | 104/&lt;br /&gt;
     | ct/&lt;br /&gt;
Namespaces can be nested many times over, if desired. Another important use case for namespaces is that they allow you to store VMs from several PVE clusters that have overlapping VMIDs.&lt;br /&gt;
&lt;br /&gt;
== Sync Jobs ==&lt;br /&gt;
&#039;&#039;&#039;Note: this assumes sync jobs using pulls instead of pushes, which remains the standard paradigm intended for Proxmox Backup Server.&#039;&#039;&#039; In a recent update, push-based sync jobs were added, but they are more limited. If using push jobs, adapt these notes with caution.&lt;br /&gt;
&lt;br /&gt;
When configuring a sync job, the local and remote namespace may be specified, to avoid copying the full contents of the backup server to another backup server. You may use this to configure several identical backup servers in a mesh, with local machines pushing backups to the backup server, followed by remotes syncing from other sites&#039; local servers. &lt;br /&gt;
&lt;br /&gt;
* Site A may have a backup server with namespaces A and B. &lt;br /&gt;
* Site B may have the same configuration, with namespaces A and B. &lt;br /&gt;
* Site A backs up to its local backup server&#039;s A namespace. &lt;br /&gt;
* Later in the day, Site B&#039;s backup server syncs from Site A&#039;s A Namespace into its own A namespace, only populated by contents gathered from the sync job, while Site B backs up its own contents to its local backup server&#039;s B namespace.&lt;br /&gt;
* The same process occurs with Site A syncing Site B&#039;s namespace into Site A&#039;s B namespace. &lt;br /&gt;
&lt;br /&gt;
Now, both sites have both copies of themselves, formed locally, and copies of each other. This can scale up with an arbitrary number of backup servers.&lt;br /&gt;
&lt;br /&gt;
=== Nested namespaces ===&lt;br /&gt;
Sync jobs support nested namespaces. By default, when configuring a sync job, you specify source and destination namespaces. These can be as specific as you like. Note the &amp;quot;&#039;&#039;&#039;Max Depth&#039;&#039;&#039;&amp;quot; option, which defaults to &amp;quot;Full&amp;quot;. This option controls how many namespaces deep the sync job will crawl. Without a value set, it will sync the entire directory all the way down its tree. If you set this to 0, it will sync any contents in the source namespace, but will not traverse past this specific namespace. With the above example, a Max Depth of 0 syncing on the root namespace would copy VMs 100, 101, and 102, but NOT the nested namespace or its content. &lt;br /&gt;
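&lt;br /&gt;
For reference, such a job can also be created from the CLI. A sketch only; the job ID, remote, datastore, and namespace names are illustrative, and flag names may vary by PBS version:&lt;br /&gt;
 proxmox-backup-manager sync-job create siteB-pull-A \&lt;br /&gt;
   --store backups --remote siteA --remote-store backups \&lt;br /&gt;
   --remote-ns A --ns A --max-depth 0&lt;br /&gt;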
&lt;br /&gt;
==== Nested namespaces will be created if not already present ====&lt;br /&gt;
In the above example, if we have another Proxmox Backup Server that has its own root namespace, but no nested subnamespace, and we sync the contents from root with Max Depth full to this second server&#039;s root, it will automatically generate the &amp;quot;nested&amp;quot; namespace and use it in any subsequent syncs. This is why it&#039;s important to understand your namespace structure, as a misalignment could lead to many erroneous namespaces being generated! &lt;br /&gt;
&lt;br /&gt;
== Manipulating namespaces ==&lt;br /&gt;
{|&lt;br /&gt;
![[File:Warning-sign-icon-transparent-background-free-png.webp|60x60px]]&lt;br /&gt;
!DO NOT do ANY of this if a prune job, garbage collection, verify job, sync job, or standard backup is in progress. It&#039;d be best to set this datastore to maintenance mode first!&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==== Moving VMs and CTs from one namespace to another ====&lt;br /&gt;
This is not implicitly supported by Proxmox Backup Server, but because the underlying storage is a very simple POSIX directory tree, it is not difficult to perform this action manually.&lt;br /&gt;
&lt;br /&gt;
# If a namespace has been created, but does not have any VMs in it yet, the vm directory will be absent, so we must create this manually. &lt;br /&gt;
## From the parent namespace directory: &#039;&#039;mkdir ns/&amp;lt;namespace&amp;gt;/vm&#039;&#039;&lt;br /&gt;
## Set the owner to backup:backup: &#039;&#039;chown backup:backup ns/&amp;lt;namespace&amp;gt;/vm&#039;&#039; &lt;br /&gt;
# Move the contents you wish to move:&lt;br /&gt;
## mv vm/&amp;lt;vm_id&amp;gt; ns/&amp;lt;namespace&amp;gt;/vm/&amp;lt;vm_id&amp;gt;&lt;/div&gt;</summary>
		<author><name>Maeve</name></author>
	</entry>
	<entry>
		<id>https://www.rosemarknetworks.com/wiki/index.php?title=Category:Linux_Tutorials&amp;diff=138</id>
		<title>Category:Linux Tutorials</title>
		<link rel="alternate" type="text/html" href="https://www.rosemarknetworks.com/wiki/index.php?title=Category:Linux_Tutorials&amp;diff=138"/>
		<updated>2024-12-28T21:25:02Z</updated>

		<summary type="html">&lt;p&gt;Maeve: /* Linux Tutorials (Especially helpful for Proxmox) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here are some of our Linux tutorials.&lt;br /&gt;
&lt;br /&gt;
== Linux Tutorials (Especially helpful for Proxmox) ==&lt;br /&gt;
&lt;br /&gt;
* [[Offline Uncorrectable Sectors]]&lt;br /&gt;
* [[ZFS Failed Disk Replacement]]&lt;br /&gt;
* [[Calculating SSD Wearout]]&lt;br /&gt;
* [[Proxmox Backup Server Replication]]&lt;br /&gt;
* [[Proxmox Host SSH keys]]&lt;br /&gt;
* [[Replacing A Proxmox Virtual Environment Server in a Ceph cluster]]&lt;br /&gt;
* [[Proxmox Backup Server Namespaces]]&lt;br /&gt;
* [[Proxmox Remove Ceph old health warnings]]&lt;/div&gt;</summary>
		<author><name>Maeve</name></author>
	</entry>
	<entry>
		<id>https://www.rosemarknetworks.com/wiki/index.php?title=Replacing_A_Proxmox_Virtual_Environment_Server_in_a_Ceph_cluster&amp;diff=137</id>
		<title>Replacing A Proxmox Virtual Environment Server in a Ceph cluster</title>
		<link rel="alternate" type="text/html" href="https://www.rosemarknetworks.com/wiki/index.php?title=Replacing_A_Proxmox_Virtual_Environment_Server_in_a_Ceph_cluster&amp;diff=137"/>
		<updated>2024-12-24T19:45:30Z</updated>

		<summary type="html">&lt;p&gt;Maeve: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== The Situation ==&lt;br /&gt;
Imagine a situation in which you have a cluster of Proxmox VE nodes as part of a hyperconverged Ceph installation. You want to replace one of your nodes with a brand new installation of Proxmox VE. You don&#039;t intend to migrate its drives; you want to replace the node outright with new drives and a new installation of Proxmox VE. In order for this to work, there are a handful of things you need to consider to prevent (1) unnecessary Ceph rebalances and (2) broken HA in Proxmox VE. The broken HA issue comes from the fact that Proxmox VE uses SSH internally to move data around and to run commands on remote nodes. These keys get messed up because Proxmox does not remove them when decommissioning a node.&lt;br /&gt;
&lt;br /&gt;
== The Strategy ==&lt;br /&gt;
Migrate all virtual machines from the node. Ensure that you have made copies of any local data or backups that you want to keep. In addition, make sure to remove any scheduled replication jobs to the node to be removed.&lt;br /&gt;
{|&lt;br /&gt;
|[[File:Warning-sign-icon-transparent-background-free-png.webp|60x60px]]&lt;br /&gt;
|Failure to remove replication jobs to a node before removing said node will result in the replication job becoming irremovable. Especially note that replication automatically switches direction if a replicated VM is migrated, so by migrating a replicated VM from a node to be deleted, replication jobs will be set up to that node automatically.&lt;br /&gt;
|}&lt;br /&gt;
This guide covers replacing a Proxmox Virtual Environment server in a hyperconverged Ceph configuration.&lt;br /&gt;
&lt;br /&gt;
This guide assumes a four node cluster with hostnames node1-node4. We&#039;re assuming a replacement of node2, and for convenience, we&#039;re migrating to node1, but it could be to any other node in the cluster. In this guide, we are &#039;&#039;specifically&#039;&#039; replacing node2 with a new installation of PVE on an upgraded server that shares the same IP(s) and hostname, not migrating an existing installation to a new chassis. &lt;br /&gt;
&lt;br /&gt;
=== Confirm cluster status (prerequisite, optional). ===&lt;br /&gt;
From node2&#039;s CLI, run &#039;&#039;pvecm nodes&#039;&#039;.&lt;br /&gt;
 node2# pvecm nodes&lt;br /&gt;
 Membership information&lt;br /&gt;
 ----------------------&lt;br /&gt;
     Nodeid      Votes Name&lt;br /&gt;
          1          1 node1&lt;br /&gt;
          2          1 node2 (local)&lt;br /&gt;
          3          1 node3&lt;br /&gt;
          4          1 node4&lt;br /&gt;
Confirm that the node you intend to decommission is listed with the &#039;&#039;&amp;quot;(local)&amp;quot;&#039;&#039; identifier next to its name. This confirms we&#039;re on the right machine.&lt;br /&gt;
&lt;br /&gt;
=== Migrate VM, CT, templates, storage off of node2. ===&lt;br /&gt;
Using HA, migrate running VMs from node2 to node1.&lt;br /&gt;
&lt;br /&gt;
Manually migrate all remaining non-running VMs, CTs, and templates off of node2 to node1.&lt;br /&gt;
&lt;br /&gt;
=== Decommission node from CEPH. ===&lt;br /&gt;
&lt;br /&gt;
# Set OSDs on node2 to &amp;quot;out&amp;quot; and wait for rebalance.&lt;br /&gt;
#* [[File:Ceph osd management.png]]&lt;br /&gt;
#* On Node2, go to Ceph -&amp;gt; OSD -&amp;gt; Node2 -&amp;gt; per each OSD listed, select the OSD and press &amp;quot;Out&amp;quot; in the top right of the control bar.&lt;br /&gt;
# Following rebalance, stop and destroy all OSDs on node2.&lt;br /&gt;
#* In the same screen of the PVE gui, select each out OSD and press stop, and once stopped, under the &amp;quot;more&amp;quot; submenu, select &amp;quot;destroy&amp;quot;.&lt;br /&gt;
# Remove the Ceph servers (monitor, manager, and metadata) from node2.&lt;br /&gt;
#* In Ceph -&amp;gt; Monitor, select node2&#039;s monitor and press stop, and then once stopped, press destroy.&lt;br /&gt;
#** [[File:Ceph mon.png]]&lt;br /&gt;
#* In Ceph -&amp;gt; Monitor, select node2&#039;s manager in the bottom panel, press stop, and once stopped, press destroy.&lt;br /&gt;
#** [[File:Ceph manager.png]]&lt;br /&gt;
#* In Ceph -&amp;gt; CephFS - Metadata Servers, stop and destroy the Metadata Server for node2.&lt;br /&gt;
#** [[File:Ceph mds.png]]&lt;br /&gt;
# On node2&#039;s CLI, clean up the Ceph CRUSH map and remove the host bucket using &amp;quot;ceph osd crush remove node2&amp;quot;.&lt;br /&gt;
# From node1&#039;s CLI, run &amp;quot;pvecm delnode node2&amp;quot;&lt;br /&gt;
&lt;br /&gt;
{|&lt;br /&gt;
|[[File:Warning-sign-icon-transparent-background-free-png.webp|60x60px]]&lt;br /&gt;
|&#039;&#039;Important: Power off node2 before running pvecm delnode&#039;&#039;. This is because the SSH keys and some internal references to the cluster may still exist on the node being removed, and it may attempt to perform sync operations within corosync and any distributed storage (i.e. Replicated ZFS pools).&lt;br /&gt;
|}&lt;br /&gt;
{|&lt;br /&gt;
|[[File:Warning-sign-icon-transparent-background-free-png.webp|60x60px]]&lt;br /&gt;
|At this point, it is possible that you will receive an error message stating &amp;lt;code&amp;gt;Could not kill node (error = CS_ERR_NOT_EXIST)&amp;lt;/code&amp;gt;. This does not signify an actual failure in the deletion of the node, but rather a failure in corosync trying to kill an offline node. Thus, it can be safely ignored.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Confirm node deleted. ===&lt;br /&gt;
From node1&#039;s CLI, run &#039;&#039;pvecm nodes&#039;&#039;.&lt;br /&gt;
 node1# pvecm nodes&lt;br /&gt;
 Membership information&lt;br /&gt;
 ----------------------&lt;br /&gt;
     Nodeid      Votes Name&lt;br /&gt;
          1          1 node1 (local)&lt;br /&gt;
          2          1 node3&lt;br /&gt;
          3          1 node4&lt;br /&gt;
&lt;br /&gt;
=== Clean up SSH Keys. ===&lt;br /&gt;
There is a &#039;&#039;&#039;major missing detail&#039;&#039;&#039; in the official documentation which is that if you intend to join a node into a cluster with the &#039;&#039;&#039;same hostname and IP&#039;&#039;&#039; the previous node had, everything will break if you don&#039;t take some prerequisite actions.&lt;br /&gt;
&lt;br /&gt;
Power on the new node2 preconfigured with the same hostname and IP address. &lt;br /&gt;
&lt;br /&gt;
From node1, SSH into the new node2. This will generate an error from the SSH client which will contain a line containing a command that will remove the key from the known hosts file. Use this command syntax to clear these keys:&lt;br /&gt;
 ssh-keygen -f &#039;/etc/ssh/ssh_known_hosts&#039; -R &#039;XXX.XXX.XXX.XXX&#039;&lt;br /&gt;
 ssh-keygen -f &#039;/etc/ssh/ssh_known_hosts&#039; -R &#039;node2.domain.com&#039;&lt;br /&gt;
Now attempt to SSH into node2 again. You will get a nearly identical error, only this time the path to the known_hosts file will be in the root user&#039;s .ssh directory as so:&lt;br /&gt;
 ssh-keygen -f &#039;/root/.ssh/known_hosts&#039; -R &#039;XXX.XXX.XXX.XXX&#039;&lt;br /&gt;
 ssh-keygen -f &#039;/root/.ssh/known_hosts&#039; -R &#039;node2.domain.com&#039;&lt;br /&gt;
{|&lt;br /&gt;
|[[File:Warning-sign-icon-transparent-background-free-png.webp|60x60px]]&lt;br /&gt;
|This second series of commands, pointing to /root/, are &#039;&#039;not replicated&#039;&#039; across the cluster members, and must be done on all members of the cluster manually.&lt;br /&gt;
|}&lt;br /&gt;
After executing these commands, when we join the node to the cluster, SSH will work correctly.&lt;br /&gt;
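A quick way to verify before joining (using the example hostname from above):&lt;br /&gt;
 ssh root@node2.domain.com &#039;hostname&#039;   # should connect without a host-key mismatch warning&lt;br /&gt;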
&lt;br /&gt;
=== Join node to cluster. ===&lt;br /&gt;
&lt;br /&gt;
# Power on the server and configure LOM (such as Dell DRAC) to use the correct IP address.&lt;br /&gt;
# Edit /etc/hostname and /etc/hosts to confirm hostname is correctly matched to previous install&#039;s hostname.&lt;br /&gt;
# Reboot and verify hostname and IP are correct. &lt;br /&gt;
# If the previous machine had a Proxmox license, apply it now.&lt;br /&gt;
# Validate network connectivity on the corosync network and on both the Ceph frontend (consumption and management) and backend (replication) networks to all other nodes.&lt;br /&gt;
# Join the Proxmox cluster.&lt;br /&gt;
# Install Ceph.&lt;br /&gt;
# Add Ceph monitor and Ceph manager to this node.&lt;br /&gt;
# Migrate a test VM to the new node to confirm consumption.&lt;br /&gt;
# If there are any other maintenance tasks to complete (like swapping another node with the previous node&#039;s hardware) do NOT add OSDs back to node2 until ready.&lt;/div&gt;</summary>
		<author><name>Maeve</name></author>
	</entry>
	<entry>
		<id>https://www.rosemarknetworks.com/wiki/index.php?title=Replacing_A_Proxmox_Virtual_Environment_Server_in_a_Ceph_cluster&amp;diff=136</id>
		<title>Replacing A Proxmox Virtual Environment Server in a Ceph cluster</title>
		<link rel="alternate" type="text/html" href="https://www.rosemarknetworks.com/wiki/index.php?title=Replacing_A_Proxmox_Virtual_Environment_Server_in_a_Ceph_cluster&amp;diff=136"/>
		<updated>2024-12-24T19:44:29Z</updated>

		<summary type="html">&lt;p&gt;Maeve: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== The Situation ==&lt;br /&gt;
Imagine a situation in which you have a cluster of Proxmox VE nodes as part of a hyperconverged Ceph installation. You want to replace one of your nodes with a brand new installation of Proxmox VE. You don&#039;t intend to migrate its drives; you want to replace the node outright with new drives and a new installation of Proxmox VE. In order for this to work, there are a handful of things you need to consider to prevent (1) unnecessary Ceph rebalances and (2) broken HA in Proxmox VE. The broken HA issue comes from the fact that Proxmox VE uses SSH internally to move data around and to run commands on remote nodes. These keys get messed up because Proxmox does not remove them when decommissioning a node.&lt;br /&gt;
&lt;br /&gt;
== The Strategy ==&lt;br /&gt;
Migrate all virtual machines from the node. Ensure that you have made copies of any local data or backups that you want to keep. In addition, make sure to remove any scheduled replication jobs to the node to be removed.&lt;br /&gt;
{|&lt;br /&gt;
|[[File:Warning-sign-icon-transparent-background-free-png.webp|60x60px]]&lt;br /&gt;
|Failure to remove replication jobs to a node before removing said node will result in the replication job becoming irremovable. Especially note that replication automatically switches direction if a replicated VM is migrated, so by migrating a replicated VM from a node to be deleted, replication jobs will be set up to that node automatically.&lt;br /&gt;
|}&lt;br /&gt;
This guide covers replacing a Proxmox Virtual Environment server in a hyperconverged Ceph configuration.&lt;br /&gt;
&lt;br /&gt;
This guide assumes a four node cluster with hostnames node1-node4. We&#039;re assuming a replacement of node2, and for convenience, we&#039;re migrating to node1, but it could be to any other node in the cluster. In this guide, we are &#039;&#039;specifically&#039;&#039; replacing node2 with a new installation of PVE on an upgraded server that shares the same IP(s) and hostname, not migrating an existing installation to a new chassis. &lt;br /&gt;
&lt;br /&gt;
=== Confirm cluster status (prerequisite, optional). ===&lt;br /&gt;
From node2&#039;s CLI, run &#039;&#039;pvecm nodes&#039;&#039;.&lt;br /&gt;
 node2# pvecm nodes&lt;br /&gt;
 Membership information&lt;br /&gt;
 ----------------------&lt;br /&gt;
     Nodeid      Votes Name&lt;br /&gt;
          1          1 node1&lt;br /&gt;
          2          1 node2 (local)&lt;br /&gt;
          3          1 node3&lt;br /&gt;
          4          1 node4&lt;br /&gt;
Confirm that the node you intend to decommission is listed with the &#039;&#039;&amp;quot;(local)&amp;quot;&#039;&#039; identifier next to its name. This confirms we&#039;re on the right machine.&lt;br /&gt;
&lt;br /&gt;
=== Migrate VM, CT, templates, storage off of node2. ===&lt;br /&gt;
Using HA, migrate running VMs from node2 to node1.&lt;br /&gt;
&lt;br /&gt;
Manually migrate all remaining non-running VMs, CTs, and templates off of node2 to node1.&lt;br /&gt;
&lt;br /&gt;
=== Decommission node from CEPH. ===&lt;br /&gt;
&lt;br /&gt;
# Set OSDs on node2 to &amp;quot;out&amp;quot; and wait for rebalance. [[File:Ceph osd management.png]]&lt;br /&gt;
#* On Node2, go to Ceph -&amp;gt; OSD -&amp;gt; Node2 -&amp;gt; per each OSD listed, select the OSD and press &amp;quot;Out&amp;quot; in the top right of the control bar.&lt;br /&gt;
# Following rebalance, stop and destroy all OSDs on node2.&lt;br /&gt;
#* In the same screen of the PVE gui, select each out OSD and press stop, and once stopped, under the &amp;quot;more&amp;quot; submenu, select &amp;quot;destroy&amp;quot;.&lt;br /&gt;
# Remove the Ceph servers (monitor, manager, and metadata) from node2.&lt;br /&gt;
#* In Ceph -&amp;gt; Monitor, select node2&#039;s monitor and press stop, and then once stopped, press destroy. [[File:Ceph mon.png]]&lt;br /&gt;
#* In Ceph -&amp;gt; Monitor, select node2&#039;s manager in the bottom panel, press stop, and once stopped, press destroy.  [[File:Ceph manager.png]]&lt;br /&gt;
#* In Ceph -&amp;gt; CephFS - Metadata Servers, stop and destroy the Metadata Server for node2.  [[File:Ceph mds.png]]&lt;br /&gt;
# On node2&#039;s CLI, clean up the Ceph CRUSH map and remove the host bucket using &amp;quot;ceph osd crush remove node2&amp;quot;.&lt;br /&gt;
# From node1&#039;s CLI, run &amp;quot;pvecm delnode node2&amp;quot;&lt;br /&gt;
&lt;br /&gt;
{|&lt;br /&gt;
|[[File:Warning-sign-icon-transparent-background-free-png.webp|60x60px]]&lt;br /&gt;
|&#039;&#039;Important: Power off node2 before running pvecm delnode&#039;&#039;. This is because the SSH keys and some internal references to the cluster may still exist on the node being removed, and it may attempt to perform sync operations within corosync and any distributed storage (i.e. Replicated ZFS pools).&lt;br /&gt;
|}&lt;br /&gt;
{|&lt;br /&gt;
|[[File:Warning-sign-icon-transparent-background-free-png.webp|60x60px]]&lt;br /&gt;
|At this point, it is possible that you will receive an error message stating &amp;lt;code&amp;gt;Could not kill node (error = CS_ERR_NOT_EXIST)&amp;lt;/code&amp;gt;. This does not signify an actual failure in the deletion of the node, but rather a failure in corosync trying to kill an offline node. Thus, it can be safely ignored.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Confirm node deleted. ===&lt;br /&gt;
From node1&#039;s CLI, run &#039;&#039;pvecm nodes&#039;&#039;.&lt;br /&gt;
 node1# pvecm nodes&lt;br /&gt;
 Membership information&lt;br /&gt;
 ----------------------&lt;br /&gt;
     Nodeid      Votes Name&lt;br /&gt;
          1          1 node1 (local)&lt;br /&gt;
          2          1 node3&lt;br /&gt;
          3          1 node4&lt;br /&gt;
&lt;br /&gt;
=== Clean up SSH Keys. ===&lt;br /&gt;
There is a &#039;&#039;&#039;major missing detail&#039;&#039;&#039; in the official documentation which is that if you intend to join a node into a cluster with the &#039;&#039;&#039;same hostname and IP&#039;&#039;&#039; the previous node had, everything will break if you don&#039;t take some prerequisite actions.&lt;br /&gt;
&lt;br /&gt;
Power on the new node2 preconfigured with the same hostname and IP address. &lt;br /&gt;
&lt;br /&gt;
From node1, SSH into the new node2. The SSH client will print an error that includes a ready-made command for removing the stale key from the known hosts file. Use this command syntax to clear these keys:&lt;br /&gt;
 ssh-keygen -f &#039;&#039;&amp;lt;nowiki/&amp;gt;&#039;/etc/ssh/ssh_known_hosts&#039;&#039;&#039; -R &#039;XXX.XXX.XXX.XXX&#039;&lt;br /&gt;
 ssh-keygen -f &#039;&#039;&amp;lt;nowiki/&amp;gt;&#039;/etc/ssh/ssh_known_hosts&#039;&#039;&#039; -R &#039;node2.domain.com&#039;&lt;br /&gt;
Now attempt to SSH into node2 again. You will get a nearly identical error, only this time the path to the known_hosts file will be in the root user&#039;s .ssh directory, like so:&lt;br /&gt;
 ssh-keygen -f &#039;&#039;&amp;lt;nowiki/&amp;gt;&#039;/root/.ssh/known_hosts&#039;&#039;&#039; -R &#039;XXX.XXX.XXX.XXX&#039;&lt;br /&gt;
 ssh-keygen -f &#039;&#039;&amp;lt;nowiki/&amp;gt;&#039;/root/.ssh/known_hosts&#039;&#039;&#039; -R &#039;node2.domain.com&#039;&lt;br /&gt;
{|&lt;br /&gt;
|[[File:Warning-sign-icon-transparent-background-free-png.webp|60x60px]]&lt;br /&gt;
|This second series of commands, pointing to /root/, is &#039;&#039;not replicated&#039;&#039; across the cluster members and must be run on every member of the cluster manually.&lt;br /&gt;
|}&lt;br /&gt;
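Since this /root/ cleanup has to be repeated on every member, a small loop run from one member can apply it everywhere. This is only a sketch: the hostnames and address are placeholders, and it assumes root SSH between the surviving members works:&lt;br /&gt;
 for h in node1 node3 node4; do&lt;br /&gt;
   ssh $h &amp;quot;ssh-keygen -f /root/.ssh/known_hosts -R &#039;XXX.XXX.XXX.XXX&#039;&amp;quot;&lt;br /&gt;
   ssh $h &amp;quot;ssh-keygen -f /root/.ssh/known_hosts -R &#039;node2.domain.com&#039;&amp;quot;&lt;br /&gt;
 done&lt;br /&gt;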
After executing these commands, when we join the node to the cluster, SSH will work correctly.&lt;br /&gt;
&lt;br /&gt;
=== Join node to cluster. ===&lt;br /&gt;
&lt;br /&gt;
# Power on server and configure LOM such as Dell DRAC to use the correct IP address.&lt;br /&gt;
# Edit /etc/hostname and /etc/hosts to confirm hostname is correctly matched to previous install&#039;s hostname.&lt;br /&gt;
# Reboot and verify hostname and IP are correct. &lt;br /&gt;
# If the previous machine had a Proxmox license, apply it now.&lt;br /&gt;
# Validate network connectivity on the corosync network and on both the Ceph frontend (consumption and management) and backend (replication) networks to all other nodes.&lt;br /&gt;
# Join the Proxmox cluster.&lt;br /&gt;
# Install Ceph.&lt;br /&gt;
# Add Ceph monitor and Ceph manager to this node.&lt;br /&gt;
# Migrate a test VM to the new node to confirm consumption.&lt;br /&gt;
# If there are any other maintenance tasks to complete (like swapping another node with the previous node&#039;s hardware) do NOT add OSDs back to node2 until ready.&lt;/div&gt;</summary>
		<author><name>Maeve</name></author>
	</entry>
	<entry>
		<id>https://www.rosemarknetworks.com/wiki/index.php?title=Replacing_A_Proxmox_Virtual_Environment_Server_in_a_Ceph_cluster&amp;diff=135</id>
		<title>Replacing A Proxmox Virtual Environment Server in a Ceph cluster</title>
		<link rel="alternate" type="text/html" href="https://www.rosemarknetworks.com/wiki/index.php?title=Replacing_A_Proxmox_Virtual_Environment_Server_in_a_Ceph_cluster&amp;diff=135"/>
		<updated>2024-12-24T19:43:45Z</updated>

		<summary type="html">&lt;p&gt;Maeve: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== The Situation ==&lt;br /&gt;
Imagine a situation in which you have a cluster of Proxmox VE nodes as part of a hyperconverged Ceph installation. You want to replace one of your nodes with a brand new installation of Proxmox VE. You don&#039;t intend to migrate its drives; you want to replace the node outright with new drives and a new installation of Proxmox VE. For this to work, there are a handful of things you need to consider to prevent (1) unnecessary Ceph rebalances and (2) broken HA in Proxmox VE. The broken HA issue comes from the fact that Proxmox VE uses SSH internally to move data around and to run commands on remote nodes. These keys get stale because Proxmox does not remove them when decommissioning a node.&lt;br /&gt;
&lt;br /&gt;
== The Strategy ==&lt;br /&gt;
Migrate all virtual machines from the node. Ensure that you have made copies of any local data or backups that you want to keep. In addition, make sure to remove any scheduled replication jobs to the node to be removed.&lt;br /&gt;
{|&lt;br /&gt;
|[[File:Warning-sign-icon-transparent-background-free-png.webp|60x60px]]&lt;br /&gt;
|Failure to remove replication jobs to a node before removing said node will result in the replication job becoming irremovable. Especially note that replication automatically switches direction if a replicated VM is migrated, so by migrating a replicated VM from a node to be deleted, replication jobs will be set up to that node automatically.&lt;br /&gt;
|}&lt;br /&gt;
This guide covers replacing a Proxmox Virtual Environment server in a hyperconverged Ceph configuration.&lt;br /&gt;
&lt;br /&gt;
This guide assumes a four node cluster with hostnames node1-node4. We&#039;re assuming a replacement of node2, and for convenience, we&#039;re migrating to node1, but it could be to any other node in the cluster. In this guide, we are &#039;&#039;specifically&#039;&#039; replacing node2 with a new installation of PVE on an upgraded server that shares the same IP(s) and hostname, not migrating an existing installation to a new chassis. &lt;br /&gt;
&lt;br /&gt;
=== Confirm cluster status (prerequisite, optional). ===&lt;br /&gt;
From node2&#039;s CLI, run &#039;&#039;pvecm nodes.&#039;&#039;&lt;br /&gt;
 node2# pvecm nodes&lt;br /&gt;
 Membership information&lt;br /&gt;
 ----------------------&lt;br /&gt;
     Nodeid      Votes Name&lt;br /&gt;
          1          1 node1&lt;br /&gt;
          2          1 node2 (local)&lt;br /&gt;
          3          1 node3&lt;br /&gt;
          4          1 node4&lt;br /&gt;
Confirm that the node you intend to decommission is listed with the &#039;&#039;&amp;quot;(local)&amp;quot;&#039;&#039; identifier next to its name. This confirms we&#039;re on the right machine.&lt;br /&gt;
&lt;br /&gt;
=== Migrate VM, CT, templates, storage off of node2. ===&lt;br /&gt;
Using HA, migrate running VMs from node2 to node1.&lt;br /&gt;
&lt;br /&gt;
Manually migrate all remaining non-running VMs, CTs, and templates from node2 to node1.&lt;br /&gt;
&lt;br /&gt;
=== Decommission node from CEPH. ===&lt;br /&gt;
&lt;br /&gt;
# Set OSDs on node2 to &amp;quot;out&amp;quot; and wait for rebalance. [[File:Ceph osd management.png]]&lt;br /&gt;
#* On node2, go to Ceph -&amp;gt; OSD -&amp;gt; node2; for each OSD listed, select the OSD and press &amp;quot;Out&amp;quot; in the top right of the control bar.&lt;br /&gt;
# Following rebalance, stop and destroy all OSDs on node2.&lt;br /&gt;
#* In the same screen of the PVE GUI, select each OSD marked out and press stop; once stopped, under the &amp;quot;more&amp;quot; submenu, select &amp;quot;destroy&amp;quot;.&lt;br /&gt;
# Remove the Ceph servers (monitor, manager, and metadata) from node2.&lt;br /&gt;
#* In Ceph -&amp;gt; Monitor, select node2&#039;s monitor and press stop, and then once stopped, press destroy. [[File:Ceph mon.png]]&lt;br /&gt;
#* In Ceph -&amp;gt; Monitor, select node2&#039;s manager in the bottom panel, press stop, and once stopped, press destroy. [[File:Ceph manager.png]]&lt;br /&gt;
#* In Ceph -&amp;gt; CephFS - Metadata Servers, stop and destroy the Metadata Server for node2. [[File:Ceph mds.png]]&lt;br /&gt;
# On node2&#039;s CLI, clean up the Ceph CRUSH map and remove the host bucket using &amp;quot;ceph osd crush remove node2&amp;quot;.&lt;br /&gt;
# From node1&#039;s CLI, run &amp;quot;pvecm delnode node2&amp;quot;&lt;br /&gt;
&lt;br /&gt;
{|&lt;br /&gt;
|[[File:Warning-sign-icon-transparent-background-free-png.webp|60x60px]]&lt;br /&gt;
|&#039;&#039;Important: Power off node2 before running pvecm delnode&#039;&#039;. The SSH keys and some internal references to the cluster may still exist on the node being removed, and it may attempt to perform sync operations within corosync and any distributed storage (e.g. replicated ZFS pools).&lt;br /&gt;
|}&lt;br /&gt;
{|&lt;br /&gt;
|[[File:Warning-sign-icon-transparent-background-free-png.webp|60x60px]]&lt;br /&gt;
|At this point, it is possible that you will receive an error message stating &amp;lt;code&amp;gt;Could not kill node (error = CS_ERR_NOT_EXIST)&amp;lt;/code&amp;gt;. This does not signify an actual failure in the deletion of the node, but rather a failure in corosync trying to kill an offline node. Thus, it can be safely ignored.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Confirm node deleted. ===&lt;br /&gt;
From node1&#039;s CLI, run &#039;&#039;pvecm nodes.&#039;&#039;&lt;br /&gt;
 node1# pvecm nodes&lt;br /&gt;
 Membership information&lt;br /&gt;
 ----------------------&lt;br /&gt;
     Nodeid      Votes Name&lt;br /&gt;
          1          1 node1 (local)&lt;br /&gt;
          2          1 node3&lt;br /&gt;
          3          1 node4&lt;br /&gt;
&lt;br /&gt;
=== Clean up SSH Keys. ===&lt;br /&gt;
There is a &#039;&#039;&#039;major missing detail&#039;&#039;&#039; in the official documentation: if you intend to join a node to the cluster with the &#039;&#039;&#039;same hostname and IP&#039;&#039;&#039; the previous node had, everything will break unless you take some prerequisite actions first.&lt;br /&gt;
&lt;br /&gt;
Power on the new node2 preconfigured with the same hostname and IP address. &lt;br /&gt;
&lt;br /&gt;
From node1, SSH into the new node2. The SSH client will print an error that includes a ready-made command for removing the stale key from the known hosts file. Use this command syntax to clear these keys:&lt;br /&gt;
 ssh-keygen -f &#039;&#039;&amp;lt;nowiki/&amp;gt;&#039;/etc/ssh/ssh_known_hosts&#039;&#039;&#039; -R &#039;XXX.XXX.XXX.XXX&#039;&lt;br /&gt;
 ssh-keygen -f &#039;&#039;&amp;lt;nowiki/&amp;gt;&#039;/etc/ssh/ssh_known_hosts&#039;&#039;&#039; -R &#039;node2.domain.com&#039;&lt;br /&gt;
Now attempt to SSH into node2 again. You will get a nearly identical error, only this time the path to the known_hosts file will be in the root user&#039;s .ssh directory, like so:&lt;br /&gt;
 ssh-keygen -f &#039;&#039;&amp;lt;nowiki/&amp;gt;&#039;/root/.ssh/known_hosts&#039;&#039;&#039; -R &#039;XXX.XXX.XXX.XXX&#039;&lt;br /&gt;
 ssh-keygen -f &#039;&#039;&amp;lt;nowiki/&amp;gt;&#039;/root/.ssh/known_hosts&#039;&#039;&#039; -R &#039;node2.domain.com&#039;&lt;br /&gt;
{|&lt;br /&gt;
|[[File:Warning-sign-icon-transparent-background-free-png.webp|60x60px]]&lt;br /&gt;
|This second series of commands, pointing to /root/, is &#039;&#039;not replicated&#039;&#039; across the cluster members and must be run on every member of the cluster manually.&lt;br /&gt;
|}&lt;br /&gt;
After executing these commands, when we join the node to the cluster, SSH will work correctly.&lt;br /&gt;
&lt;br /&gt;
=== Join node to cluster. ===&lt;br /&gt;
&lt;br /&gt;
# Power on server and configure LOM such as Dell DRAC to use the correct IP address.&lt;br /&gt;
# Edit /etc/hostname and /etc/hosts to confirm hostname is correctly matched to previous install&#039;s hostname.&lt;br /&gt;
# Reboot and verify hostname and IP are correct. &lt;br /&gt;
# If the previous machine had a Proxmox license, apply it now.&lt;br /&gt;
# Validate network connectivity on the corosync network and on both the Ceph frontend (consumption and management) and backend (replication) networks to all other nodes.&lt;br /&gt;
# Join the Proxmox cluster.&lt;br /&gt;
# Install Ceph.&lt;br /&gt;
# Add Ceph monitor and Ceph manager to this node.&lt;br /&gt;
# Migrate a test VM to the new node to confirm consumption.&lt;br /&gt;
# If there are any other maintenance tasks to complete (like swapping another node with the previous node&#039;s hardware) do NOT add OSDs back to node2 until ready.&lt;/div&gt;</summary>
		<author><name>Maeve</name></author>
	</entry>
	<entry>
		<id>https://www.rosemarknetworks.com/wiki/index.php?title=File:Ceph_mds.png&amp;diff=134</id>
		<title>File:Ceph mds.png</title>
		<link rel="alternate" type="text/html" href="https://www.rosemarknetworks.com/wiki/index.php?title=File:Ceph_mds.png&amp;diff=134"/>
		<updated>2024-12-24T19:15:42Z</updated>

		<summary type="html">&lt;p&gt;Maeve: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;ceph_mds&lt;/div&gt;</summary>
		<author><name>Maeve</name></author>
	</entry>
	<entry>
		<id>https://www.rosemarknetworks.com/wiki/index.php?title=File:Ceph_manager.png&amp;diff=133</id>
		<title>File:Ceph manager.png</title>
		<link rel="alternate" type="text/html" href="https://www.rosemarknetworks.com/wiki/index.php?title=File:Ceph_manager.png&amp;diff=133"/>
		<updated>2024-12-24T19:13:47Z</updated>

		<summary type="html">&lt;p&gt;Maeve: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;ceph_manager&lt;/div&gt;</summary>
		<author><name>Maeve</name></author>
	</entry>
	<entry>
		<id>https://www.rosemarknetworks.com/wiki/index.php?title=File:Ceph_mon.png&amp;diff=132</id>
		<title>File:Ceph mon.png</title>
		<link rel="alternate" type="text/html" href="https://www.rosemarknetworks.com/wiki/index.php?title=File:Ceph_mon.png&amp;diff=132"/>
		<updated>2024-12-24T19:13:10Z</updated>

		<summary type="html">&lt;p&gt;Maeve: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;ceph_mon&lt;/div&gt;</summary>
		<author><name>Maeve</name></author>
	</entry>
	<entry>
		<id>https://www.rosemarknetworks.com/wiki/index.php?title=File:Ceph_osd_management.png&amp;diff=131</id>
		<title>File:Ceph osd management.png</title>
		<link rel="alternate" type="text/html" href="https://www.rosemarknetworks.com/wiki/index.php?title=File:Ceph_osd_management.png&amp;diff=131"/>
		<updated>2024-12-24T19:02:09Z</updated>

		<summary type="html">&lt;p&gt;Maeve: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;ceph_osd_management&lt;/div&gt;</summary>
		<author><name>Maeve</name></author>
	</entry>
	<entry>
		<id>https://www.rosemarknetworks.com/wiki/index.php?title=Replacing_A_Proxmox_Virtual_Environment_Server_in_a_Ceph_cluster&amp;diff=130</id>
		<title>Replacing A Proxmox Virtual Environment Server in a Ceph cluster</title>
		<link rel="alternate" type="text/html" href="https://www.rosemarknetworks.com/wiki/index.php?title=Replacing_A_Proxmox_Virtual_Environment_Server_in_a_Ceph_cluster&amp;diff=130"/>
		<updated>2024-12-24T16:39:30Z</updated>

		<summary type="html">&lt;p&gt;Maeve: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Migrate all virtual machines from the node. Ensure that you have made copies of any local data or backups that you want to keep. In addition, make sure to remove any scheduled replication jobs to the node to be removed.&lt;br /&gt;
{|&lt;br /&gt;
|[[File:Warning-sign-icon-transparent-background-free-png.webp|60x60px]]&lt;br /&gt;
|Failure to remove replication jobs to a node before removing said node will result in the replication job becoming irremovable. Especially note that replication automatically switches direction if a replicated VM is migrated, so by migrating a replicated VM from a node to be deleted, replication jobs will be set up to that node automatically.&lt;br /&gt;
|}&lt;br /&gt;
This guide covers replacing a Proxmox Virtual Environment server in a hyperconverged Ceph configuration.&lt;br /&gt;
&lt;br /&gt;
This guide assumes a four node cluster with hostnames node1-node4. We&#039;re assuming a replacement of node2, and for convenience, we&#039;re migrating to node1, but it could be to any other node in the cluster. In this guide, we are &#039;&#039;specifically&#039;&#039; replacing node2 with a new installation of PVE on an upgraded server that shares the same IP(s) and hostname, not migrating an existing installation to a new chassis. &lt;br /&gt;
&lt;br /&gt;
=== Confirm cluster status (prerequisite, optional). ===&lt;br /&gt;
From node2&#039;s CLI, run &#039;&#039;pvecm nodes.&#039;&#039;&lt;br /&gt;
 node2# pvecm nodes&lt;br /&gt;
 Membership information&lt;br /&gt;
 ----------------------&lt;br /&gt;
     Nodeid      Votes Name&lt;br /&gt;
          1          1 node1&lt;br /&gt;
          2          1 node2 (local)&lt;br /&gt;
          3          1 node3&lt;br /&gt;
          4          1 node4&lt;br /&gt;
Confirm that the node you intend to decommission is listed with the &#039;&#039;&amp;quot;(local)&amp;quot;&#039;&#039; identifier next to its name. This confirms we&#039;re on the right machine.&lt;br /&gt;
&lt;br /&gt;
=== Migrate VM, CT, templates, storage off of node2. ===&lt;br /&gt;
Using HA, migrate running VMs from node2 to node1.&lt;br /&gt;
&lt;br /&gt;
Manually migrate all remaining non-running VMs, CTs, and templates from node2 to node1.&lt;br /&gt;
&lt;br /&gt;
=== Decommission node from CEPH. ===&lt;br /&gt;
&lt;br /&gt;
# Set OSDs on node2 to &amp;quot;out&amp;quot; and wait for rebalance.&lt;br /&gt;
#* On node2, go to Ceph -&amp;gt; OSD -&amp;gt; node2; for each OSD listed, select the OSD and press &amp;quot;Out&amp;quot; in the top right of the control bar.&lt;br /&gt;
# Following rebalance, stop and destroy all OSDs on node2.&lt;br /&gt;
#* In the same screen of the PVE GUI, select each OSD marked out and press stop; once stopped, under the &amp;quot;more&amp;quot; submenu, select &amp;quot;destroy&amp;quot;.&lt;br /&gt;
# Remove the Ceph servers (monitor, manager, and metadata) from node2.&lt;br /&gt;
#* In Ceph -&amp;gt; Monitor, select node2&#039;s monitor and press stop, and then once stopped, press destroy.&lt;br /&gt;
#* In Ceph -&amp;gt; Monitor, select node2&#039;s manager in the bottom panel, press stop, and once stopped, press destroy.&lt;br /&gt;
#* In Ceph -&amp;gt; CephFS - Metadata Servers, stop and destroy the Metadata Server for node2.&lt;br /&gt;
# On node2&#039;s CLI, clean up the Ceph CRUSH map and remove the host bucket using &amp;quot;ceph osd crush remove node2&amp;quot;.&lt;br /&gt;
# From node1&#039;s CLI, run &amp;quot;pvecm delnode node2&amp;quot;&lt;br /&gt;
&lt;br /&gt;
{|&lt;br /&gt;
|[[File:Warning-sign-icon-transparent-background-free-png.webp|60x60px]]&lt;br /&gt;
|&#039;&#039;Important: Power off node2 before running pvecm delnode&#039;&#039;. The SSH keys and some internal references to the cluster may still exist on the node being removed, and it may attempt to perform sync operations within corosync and any distributed storage (e.g. replicated ZFS pools).&lt;br /&gt;
|}&lt;br /&gt;
{|&lt;br /&gt;
|[[File:Warning-sign-icon-transparent-background-free-png.webp|60x60px]]&lt;br /&gt;
|At this point, it is possible that you will receive an error message stating &amp;lt;code&amp;gt;Could not kill node (error = CS_ERR_NOT_EXIST)&amp;lt;/code&amp;gt;. This does not signify an actual failure in the deletion of the node, but rather a failure in corosync trying to kill an offline node. Thus, it can be safely ignored.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Confirm node deleted. ===&lt;br /&gt;
From node1&#039;s CLI, run &#039;&#039;pvecm nodes.&#039;&#039;&lt;br /&gt;
 node1# pvecm nodes&lt;br /&gt;
 Membership information&lt;br /&gt;
 ----------------------&lt;br /&gt;
     Nodeid      Votes Name&lt;br /&gt;
          1          1 node1 (local)&lt;br /&gt;
          2          1 node3&lt;br /&gt;
          3          1 node4&lt;br /&gt;
&lt;br /&gt;
=== Clean up SSH Keys. ===&lt;br /&gt;
There is a major missing detail in the official documentation: if you intend to join a node to the cluster with the same hostname and IP the previous node had, everything will break unless you take some prerequisite actions first.&lt;br /&gt;
&lt;br /&gt;
Power on the new node2 preconfigured with the same hostname and IP address. &lt;br /&gt;
&lt;br /&gt;
From node1, SSH into the new node2. The SSH client will print an error that includes a ready-made command for removing the stale key from the known hosts file. Use this command syntax to clear these keys:&lt;br /&gt;
 ssh-keygen -f &#039;&#039;&amp;lt;nowiki/&amp;gt;&#039;/etc/ssh/ssh_known_hosts&#039;&#039;&#039; -R &#039;XXX.XXX.XXX.XXX&#039;&lt;br /&gt;
 ssh-keygen -f &#039;&#039;&amp;lt;nowiki/&amp;gt;&#039;/etc/ssh/ssh_known_hosts&#039;&#039;&#039; -R &#039;node2.domain.com&#039;&lt;br /&gt;
Now attempt to SSH into node2 again. You will get a nearly identical error, only this time the path to the known_hosts file will be in the root user&#039;s .ssh directory, like so:&lt;br /&gt;
 ssh-keygen -f &#039;&#039;&amp;lt;nowiki/&amp;gt;&#039;/root/.ssh/known_hosts&#039;&#039;&#039; -R &#039;XXX.XXX.XXX.XXX&#039;&lt;br /&gt;
 ssh-keygen -f &#039;&#039;&amp;lt;nowiki/&amp;gt;&#039;/root/.ssh/known_hosts&#039;&#039;&#039; -R &#039;node2.domain.com&#039;&lt;br /&gt;
{|&lt;br /&gt;
|[[File:Warning-sign-icon-transparent-background-free-png.webp|60x60px]]&lt;br /&gt;
|This second series of commands, pointing to /root/, is &#039;&#039;not replicated&#039;&#039; across the cluster members and must be run on every member of the cluster manually.&lt;br /&gt;
|}&lt;br /&gt;
After executing these commands, when we join the node to the cluster, SSH will work correctly.&lt;br /&gt;
&lt;br /&gt;
=== Join node to cluster. ===&lt;br /&gt;
&lt;br /&gt;
# Power on server and configure LOM such as Dell DRAC to use the correct IP address.&lt;br /&gt;
# Edit /etc/hostname and /etc/hosts to confirm hostname is correctly matched to previous install&#039;s hostname.&lt;br /&gt;
# Reboot and verify hostname and IP are correct. &lt;br /&gt;
# If the previous machine had a Proxmox license, apply it now.&lt;br /&gt;
# Validate network connectivity on the corosync network and on both the Ceph frontend (consumption and management) and backend (replication) networks to all other nodes.&lt;br /&gt;
# Join the Proxmox cluster.&lt;br /&gt;
# Install Ceph.&lt;br /&gt;
# Add Ceph monitor and Ceph manager to this node.&lt;br /&gt;
# Migrate a test VM to the new node to confirm consumption.&lt;br /&gt;
# If there are any other maintenance tasks to complete (like swapping another node with the previous node&#039;s hardware) do NOT add OSDs back to node2 until ready.&lt;/div&gt;</summary>
		<author><name>Maeve</name></author>
	</entry>
	<entry>
		<id>https://www.rosemarknetworks.com/wiki/index.php?title=File:Warning-sign-icon-transparent-background-free-png.webp&amp;diff=129</id>
		<title>File:Warning-sign-icon-transparent-background-free-png.webp</title>
		<link rel="alternate" type="text/html" href="https://www.rosemarknetworks.com/wiki/index.php?title=File:Warning-sign-icon-transparent-background-free-png.webp&amp;diff=129"/>
		<updated>2024-12-24T15:34:44Z</updated>

		<summary type="html">&lt;p&gt;Maeve: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Maeve</name></author>
	</entry>
	<entry>
		<id>https://www.rosemarknetworks.com/wiki/index.php?title=Replacing_A_Proxmox_Virtual_Environment_Server_in_a_Ceph_cluster&amp;diff=128</id>
		<title>Replacing A Proxmox Virtual Environment Server in a Ceph cluster</title>
		<link rel="alternate" type="text/html" href="https://www.rosemarknetworks.com/wiki/index.php?title=Replacing_A_Proxmox_Virtual_Environment_Server_in_a_Ceph_cluster&amp;diff=128"/>
		<updated>2024-12-24T02:09:11Z</updated>

		<summary type="html">&lt;p&gt;Maeve: Created page with &amp;quot;Move all virtual machines from the node. Ensure that you have made copies of any local data or backups that you want to keep. In addition, make sure to remove any scheduled replication jobs to the node to be removed. {| class=&amp;quot;wikitable&amp;quot; | |Failure to remove replication jobs to a node before removing said node will result in the replication job becoming irremovable. Especially note that replication automatically switches direction if a replicated VM is migrated, so by mi...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Move all virtual machines from the node. Ensure that you have made copies of any local data or backups that you want to keep. In addition, make sure to remove any scheduled replication jobs to the node to be removed.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|&lt;br /&gt;
|Failure to remove replication jobs to a node before removing said node will result in the replication job becoming irremovable. Especially note that replication automatically switches direction if a replicated VM is migrated, so by migrating a replicated VM from a node to be deleted, replication jobs will be set up to that node automatically.&lt;br /&gt;
|}&lt;br /&gt;
This guide covers replacing a Proxmox Virtual Environment server in a hyperconverged Ceph configuration.&lt;br /&gt;
&lt;br /&gt;
This guide assumes a four node cluster with hostnames node1-node4. We&#039;re assuming a replacement of node2, and for convenience, we&#039;re migrating to node1, but it could be to any other node in the cluster. In this guide, we are &#039;&#039;specifically&#039;&#039; replacing node2 with a new installation of PVE on an upgraded server that shares the same IP(s) and hostname, not migrating an existing installation to a new chassis. &lt;br /&gt;
&lt;br /&gt;
=== Migrate VM, CT, templates, storage off of node2. ===&lt;br /&gt;
Using HA, migrate running VMs from node2 to node1.&lt;br /&gt;
&lt;br /&gt;
Manually migrate all remaining non-running VMs, CTs, and templates from node2 to node1.&lt;br /&gt;
&lt;br /&gt;
=== Decommission node from CEPH ===&lt;br /&gt;
Set OSDs on node2 to &amp;quot;out&amp;quot; and wait for rebalance. &lt;br /&gt;
&lt;br /&gt;
On node2, go to Ceph -&amp;gt; OSD -&amp;gt; node2; for each OSD listed, select the OSD and press &amp;quot;Out&amp;quot; in the top right of the control bar.&lt;br /&gt;
&lt;br /&gt;
Following rebalance, stop and destroy all OSDs on node2.&lt;br /&gt;
&lt;br /&gt;
In the same screen of the PVE GUI, select each OSD marked out and press stop; once stopped, under the &amp;quot;more&amp;quot; submenu, select &amp;quot;destroy&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
Remove the Ceph servers (monitor, manager, and metadata) from node2.&lt;br /&gt;
&lt;br /&gt;
In Ceph -&amp;gt; Monitor, select node2&#039;s monitor and press stop, and then once stopped, press destroy.&lt;br /&gt;
&lt;br /&gt;
In Ceph -&amp;gt; Monitor, select node2&#039;s manager in the bottom panel, press stop, and once stopped, press destroy. &lt;br /&gt;
&lt;br /&gt;
In Ceph -&amp;gt; CephFS - Metadata Servers, stop and destroy the Metadata Server for node2.&lt;br /&gt;
&lt;br /&gt;
On node2&#039;s CLI, clean up the Ceph CRUSH map and remove the host bucket using &amp;quot;ceph osd crush remove node2&amp;quot;. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;!! Important: Power off node2 before running pvecm delnode&#039;&#039;. The SSH keys and some internal references to the cluster may still exist on the node being removed, and it may attempt to perform sync operations within corosync and any distributed storage (e.g. replicated ZFS pools).&lt;br /&gt;
&lt;br /&gt;
From node1&#039;s CLI, run &amp;quot;pvecm delnode node2&amp;quot;. &lt;br /&gt;
&lt;br /&gt;
=== Clean up SSH Keys ===&lt;br /&gt;
Power on the new node2 preconfigured with the same hostname and IP address. &lt;br /&gt;
&lt;br /&gt;
From node1, SSH into the new node2. The SSH client will print an error that includes a ready-made command for removing the stale key from the known hosts file. Use this command syntax to clear these keys:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;  ssh-keygen -f &#039;/etc/ssh/ssh_known_hosts&#039; -R &#039;XXX.XXX.XXX.XXX&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;  ssh-keygen -f &#039;/etc/ssh/ssh_known_hosts&#039; -R &#039;node2.domain.com&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Now attempt to SSH into node2 again. You will get a nearly identical error, only this time the path to the known_hosts file will be in the root user&#039;s .ssh directory, like so:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;  ssh-keygen -f &#039;/root/.ssh/known_hosts&#039; -R &#039;XXX.XXX.XXX.XXX&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;  ssh-keygen -f &#039;/root/.ssh/known_hosts&#039; -R &#039;node2.domain.com&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
!! This second series of commands, pointing to /root/, is &#039;&#039;not replicated&#039;&#039; across the cluster members and must be run on every member of the cluster manually.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the following example, we will remove the node hp4 from the cluster.&lt;br /&gt;
&lt;br /&gt;
Log in to a &#039;&#039;&#039;different&#039;&#039;&#039; cluster node (not hp4), and issue a &amp;lt;code&amp;gt;pvecm nodes&amp;lt;/code&amp;gt; command to identify the node ID to remove:&lt;br /&gt;
  &amp;lt;code&amp;gt;hp1# pvecm nodes&lt;br /&gt;
 &lt;br /&gt;
 Membership information&lt;br /&gt;
 ----------------------&lt;br /&gt;
     Nodeid      Votes Name&lt;br /&gt;
          1          1 hp1 (local)&lt;br /&gt;
          2          1 hp2&lt;br /&gt;
          3          1 hp3&lt;br /&gt;
          4          1 hp4&amp;lt;/code&amp;gt;&lt;br /&gt;
At this point, you must power off hp4 and ensure that it will not power on again (in the network) with its current configuration.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|&lt;br /&gt;
|As mentioned above, it is critical to power off the node &#039;&#039;&#039;before&#039;&#039;&#039; removal, and make sure that it will &#039;&#039;&#039;not&#039;&#039;&#039; power on again (in the existing cluster network) with its current configuration. If you power on the node as it is, the cluster could end up broken, and it could be difficult to restore it to a functioning state.&lt;br /&gt;
|}&lt;br /&gt;
After powering off the node hp4, we can safely remove it from the cluster.&lt;br /&gt;
  &amp;lt;code&amp;gt;hp1# pvecm delnode hp4&lt;br /&gt;
  Killing node 4&amp;lt;/code&amp;gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|&lt;br /&gt;
|At this point, it is possible that you will receive an error message stating &amp;lt;code&amp;gt;Could not kill node (error = CS_ERR_NOT_EXIST)&amp;lt;/code&amp;gt;. This does not signify an actual failure in the deletion of the node, but rather a failure in corosync trying to kill an offline node. Thus, it can be safely ignored.&lt;br /&gt;
|}&lt;br /&gt;
Use &amp;lt;code&amp;gt;pvecm nodes&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;pvecm status&amp;lt;/code&amp;gt; to check the node list again. It should look something like:&lt;br /&gt;
 &amp;lt;code&amp;gt;hp1# pvecm status&lt;br /&gt;
 &lt;br /&gt;
 ...&lt;br /&gt;
 &lt;br /&gt;
 Votequorum information&lt;br /&gt;
 ----------------------&lt;br /&gt;
 Expected votes:   3&lt;br /&gt;
 Highest expected: 3&lt;br /&gt;
 Total votes:      3&lt;br /&gt;
 Quorum:           2&lt;br /&gt;
 Flags:            Quorate&lt;br /&gt;
 &lt;br /&gt;
 Membership information&lt;br /&gt;
 ----------------------&lt;br /&gt;
     Nodeid      Votes Name&lt;br /&gt;
 0x00000001          1 192.168.15.90 (local)&lt;br /&gt;
 0x00000002          1 192.168.15.91&lt;br /&gt;
 0x00000003          1 192.168.15.92&amp;lt;/code&amp;gt;&lt;br /&gt;
If, for whatever reason, you want this server to join the same cluster again, you have to:&lt;br /&gt;
&lt;br /&gt;
* do a fresh install of Proxmox VE on it,&lt;br /&gt;
* then join it, as explained in the previous section.&lt;br /&gt;
&lt;br /&gt;
The configuration files for the removed node will still reside in &#039;&#039;/etc/pve/nodes/hp4&#039;&#039;. Recover any configuration you still need and remove the directory afterwards.&lt;/div&gt;</summary>
		<author><name>Maeve</name></author>
	</entry>
	<entry>
		<id>https://www.rosemarknetworks.com/wiki/index.php?title=Proxmox_Host_SSH_keys&amp;diff=125</id>
		<title>Proxmox Host SSH keys</title>
		<link rel="alternate" type="text/html" href="https://www.rosemarknetworks.com/wiki/index.php?title=Proxmox_Host_SSH_keys&amp;diff=125"/>
		<updated>2024-12-13T18:46:12Z</updated>

		<summary type="html">&lt;p&gt;Maeve: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Intended method: ==&lt;br /&gt;
Delete old ssh host keys: &lt;br /&gt;
  rm /etc/ssh/ssh_host_*&lt;br /&gt;
Reconfigure OpenSSH Server: &lt;br /&gt;
  dpkg-reconfigure openssh-server&lt;br /&gt;
Update the known_hosts file (~/.ssh/known_hosts) on all SSH clients.&lt;br /&gt;
&lt;br /&gt;
Then update certs and keys &#039;&#039;on each machine&#039;&#039;:&lt;br /&gt;
  pvecm updatecerts -f&lt;br /&gt;
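If you would rather not log into every node by hand, the same command can be run cluster-wide with a loop. A sketch only; the hostnames are placeholders, and it assumes root SSH between the nodes works:&lt;br /&gt;
 for n in pve1 pve2 pve3; do ssh $n &#039;pvecm updatecerts -f&#039;; done&lt;br /&gt;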
&lt;br /&gt;
== Manual method&amp;lt;ref&amp;gt;https://forum.proxmox.com/threads/pvecm-updatecert-f-not-working.135812/page-3#post-660500&amp;lt;/ref&amp;gt;: ==&lt;br /&gt;
If this fails (which it might), log into each troublesome node over SSH and copy the public key from &lt;br /&gt;
  /etc/ssh/ssh_host_rsa_key.pub &lt;br /&gt;
Copy this to &lt;br /&gt;
  /etc/pve/nodes/&amp;lt;node&amp;gt;/ssh_known_hosts &lt;br /&gt;
and prepend it with that machine&#039;s hostname. Assuming a hostname of pve1, this line should appear as&lt;br /&gt;
  pve1 ssh-rsa &amp;lt;key&amp;gt;&lt;br /&gt;
Then restart the SSH daemon:&lt;br /&gt;
  systemctl restart sshd&lt;/div&gt;</summary>
		<author><name>Maeve</name></author>
	</entry>
	<entry>
		<id>https://www.rosemarknetworks.com/wiki/index.php?title=Category:Linux_Tutorials&amp;diff=124</id>
		<title>Category:Linux Tutorials</title>
		<link rel="alternate" type="text/html" href="https://www.rosemarknetworks.com/wiki/index.php?title=Category:Linux_Tutorials&amp;diff=124"/>
		<updated>2024-12-11T00:17:08Z</updated>

		<summary type="html">&lt;p&gt;Maeve: /* Linux Tutorials (Especially helpful for Proxmox) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here are some of our Linux tutorials.&lt;br /&gt;
&lt;br /&gt;
== Linux Tutorials (Especially helpful for Proxmox) ==&lt;br /&gt;
&lt;br /&gt;
* [[Offline Uncorrectable Sectors]]&lt;br /&gt;
* [[ZFS Failed Disk Replacement]]&lt;br /&gt;
* [[Calculating SSD Wearout]]&lt;br /&gt;
* [[Proxmox Backup Server Replication]]&lt;br /&gt;
* [[Proxmox Host SSH keys]]&lt;br /&gt;
* [[Replacing Proxmox Virtual Environment Server in a Ceph cluster]]&lt;/div&gt;</summary>
		<author><name>Maeve</name></author>
	</entry>
	<entry>
		<id>https://www.rosemarknetworks.com/wiki/index.php?title=Replacing_Proxmox_Virtual_Environment_Server_in_a_Ceph_cluster&amp;diff=123</id>
		<title>Replacing Proxmox Virtual Environment Server in a Ceph cluster</title>
		<link rel="alternate" type="text/html" href="https://www.rosemarknetworks.com/wiki/index.php?title=Replacing_Proxmox_Virtual_Environment_Server_in_a_Ceph_cluster&amp;diff=123"/>
		<updated>2024-12-11T00:17:00Z</updated>

		<summary type="html">&lt;p&gt;Maeve: Created page with &amp;quot;Replacing a Proxmox Virtual Environment Server in hyperconverged ceph configuration.  This guide assumes a four node cluster with hostnames node1-node4. We&amp;#039;re assuming a replacement of node2.  # Using HA, migrate running VM&amp;#039;s from node2 to node1, or any other location that has ample resources and is currently a member of CEPH.  # Set OSDs on node2 to &amp;quot;out&amp;quot; and wait for rebalance. # Following rebalance, stop and destroy all OSDs on node2. # Remove Ceph mon and manager fro...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This guide covers replacing a Proxmox Virtual Environment server in a hyperconverged Ceph configuration.&lt;br /&gt;
&lt;br /&gt;
This guide assumes a four node cluster with hostnames node1-node4. We&#039;re assuming a replacement of node2.&lt;br /&gt;
&lt;br /&gt;
# Using HA, migrate running VMs from node2 to node1, or any other location that has ample resources and is currently a member of Ceph.&lt;br /&gt;
# Set OSDs on node2 to &amp;quot;out&amp;quot; and wait for rebalance.&lt;br /&gt;
# Following rebalance, stop and destroy all OSDs on node2.&lt;br /&gt;
# Remove Ceph mon and manager from node2.&lt;br /&gt;
# Clean up the Ceph CRUSH map and remove the host bucket using &amp;quot;ceph osd crush remove node2&amp;quot;.&lt;br /&gt;
# From a node that&#039;s still participating in Ceph, run &amp;quot;pvecm delnode node2&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
The node is now decommissioned and no longer participating in Ceph. It can be removed. Let&#039;s install the replacement.&lt;br /&gt;
&lt;br /&gt;
# Physically remove old server, install new server. Cable and power server. &lt;br /&gt;
# Configure LOM such as Dell DRAC to use the correct IP address.&lt;br /&gt;
# Set the new node2&#039;s management IP address to the IP of the previous machine. Validate connectivity.&lt;br /&gt;
# Edit /etc/hostname and /etc/hosts to confirm hostname is correctly matched to previous install&#039;s hostname. &lt;br /&gt;
# Reboot and verify hostname and IP are correct.&lt;br /&gt;
# If the previous machine had a Proxmox license, apply it now. &lt;br /&gt;
# Validate network connectivity on the corosync network and on both the Ceph frontend (consumption and management) and backend (replication) networks to all other nodes.&lt;br /&gt;
# Join the Proxmox cluster.&lt;br /&gt;
# Install Ceph.&lt;br /&gt;
# Add Ceph Mon and Ceph Manager to this node. &lt;br /&gt;
# Migrate a test VM to the new node to confirm consumption.&lt;br /&gt;
# If there are any other maintenance tasks to complete (like swapping another node with the previous node&#039;s hardware) do NOT add OSDs back to node2 until ready.&lt;br /&gt;
&lt;br /&gt;
A similar series of steps can be taken if existing drives are being moved to a new chassis, maintaining the OS and OSDs, as opposed to using new drives. We&#039;ll assume a replacement of node1 with the previous node2&#039;s hardware.&lt;br /&gt;
&lt;br /&gt;
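The steps below set and later clear Ceph&#039;s noout flag so the cluster does not begin rebalancing while the host is down. For reference, these are the standard commands:&lt;br /&gt;
 ceph osd set noout&lt;br /&gt;
 ceph osd unset noout&lt;br /&gt;
&lt;br /&gt;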
# Using HA, migrate running VMs from node1 to node2, or any other location that has ample resources and is currently a member of Ceph.&lt;br /&gt;
# Unlike before, set the noout flag - the OSDs aren&#039;t actually going anywhere, so we do not want a rebalancing. &lt;br /&gt;
# Shutdown node1.&lt;br /&gt;
# Physically move the boot and data drives from node1 to the donor that was previously node2.&lt;br /&gt;
# Un-rack the now driveless node1 and replace it with the now populated donor node2. Cable it up.&lt;br /&gt;
# Power on and configure LOM such as Dell DRAC to use the correct IP address.&lt;br /&gt;
# Validate system boot.&lt;br /&gt;
# Validate network connectivity on the corosync network and on both the Ceph frontend (consumption and management) and backend (replication) networks to all other nodes.&lt;br /&gt;
# Verify that OSDs are online and that all PGs report synced. &lt;br /&gt;
# Disable noout flag. &lt;br /&gt;
# We can now safely add OSDs back to the new node2 and allow it to rebalance. This could take a large amount of time, up to several days, depending on the quantity of storage.&lt;/div&gt;</summary>
		<author><name>Maeve</name></author>
	</entry>
	<entry>
		<id>https://www.rosemarknetworks.com/wiki/index.php?title=ZFS_Remote_Sync&amp;diff=115</id>
		<title>ZFS Remote Sync</title>
		<link rel="alternate" type="text/html" href="https://www.rosemarknetworks.com/wiki/index.php?title=ZFS_Remote_Sync&amp;diff=115"/>
		<updated>2024-11-25T20:51:35Z</updated>

		<summary type="html">&lt;p&gt;Maeve: Created page with &amp;quot;It is possible to &amp;#039;&amp;#039;&amp;#039;sync one zpool&amp;#039;s contents to another&amp;#039;&amp;#039;&amp;#039; via &amp;#039;&amp;#039;zfs send&amp;#039;&amp;#039; and &amp;#039;&amp;#039;zfs recv.&amp;#039;&amp;#039;   First, create a snapshot of the data you want to send. To copy an entire zpool, create a snapshot with the following command:  zpool snapshot -r [zpool_to_snapshot]@[snapshot_name] You can also snapshot a specific subvol.   zpool snapshot [zpool_to_snapshot/with_sub_vol]@[snapshot_name]  We&amp;#039;ll be sending the data from this snapshot over to another host using zfs send and zfs...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;It is possible to &#039;&#039;&#039;sync one zpool&#039;s contents to another&#039;&#039;&#039; via &#039;&#039;zfs send&#039;&#039; and &#039;&#039;zfs recv.&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
First, create a snapshot of the data you want to send. To copy an entire zpool, create a snapshot with the following command:&lt;br /&gt;
 zfs snapshot -r [zpool_to_snapshot]@[snapshot_name]&lt;br /&gt;
You can also snapshot a specific subvol. &lt;br /&gt;
 zfs snapshot [zpool_to_snapshot/with_sub_vol]@[snapshot_name]&lt;br /&gt;
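You can confirm the snapshot exists before sending it:&lt;br /&gt;
 zfs list -t snapshot&lt;br /&gt;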
We&#039;ll be sending the data from this snapshot over to another host using zfs send and zfs recv. We&#039;ll want to run this command in a screen session in case our terminal disconnects. We pipe the output of zfs send into an ssh connection running zfs recv.&lt;br /&gt;
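For example, start a named screen session first so you can detach and reattach safely:&lt;br /&gt;
 screen -S zfs-sync&lt;br /&gt;
 # detach with Ctrl-a d, reattach later with: screen -r zfs-sync&lt;br /&gt;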
&lt;br /&gt;
We run zfs send with -R to send everything under the specified snapshot. We run zfs recv with the flags -F (force a rollback/overwrite of the destination), -d (reuse the source dataset names under the destination pool), and -u (do not mount the received datasets). The pool must already exist on the destination. Essentially, we&#039;re not cloning the ZFS configuration; we&#039;re funneling data at a higher level between the pools, in the same sense that one might SCP or rsync data. We recommend using zpool set autoexpand=on [pool_name].&lt;br /&gt;
 zfs send -R [zpool_to_clone]@[snapshot_name] | ssh [destination_IP] zfs recv -Fdu [destination_pool]&lt;br /&gt;
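Once the initial copy exists on the destination, later catch-up runs only need to transfer the changes: take a new snapshot, then perform an incremental send. A sketch; the snapshot names are placeholders:&lt;br /&gt;
 zfs snapshot -r [zpool_to_clone]@[newer_snapshot]&lt;br /&gt;
 zfs send -R -i [zpool_to_clone]@[snapshot_name] [zpool_to_clone]@[newer_snapshot] | ssh [destination_IP] zfs recv -Fdu [destination_pool]&lt;br /&gt;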
The initial full send will take quite some time. It may take multiple days, depending on the size of your dataset.&lt;/div&gt;</summary>
		<author><name>Maeve</name></author>
	</entry>
	<entry>
		<id>https://www.rosemarknetworks.com/wiki/index.php?title=Category:Linux_Tutorials&amp;diff=114</id>
		<title>Category:Linux Tutorials</title>
		<link rel="alternate" type="text/html" href="https://www.rosemarknetworks.com/wiki/index.php?title=Category:Linux_Tutorials&amp;diff=114"/>
		<updated>2024-10-31T18:34:19Z</updated>

		<summary type="html">&lt;p&gt;Maeve: /* Linux Tutorials (Especially helpful for Proxmox) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here are some of our Linux tutorials.&lt;br /&gt;
&lt;br /&gt;
== Linux Tutorials (Especially helpful for Proxmox) ==&lt;br /&gt;
&lt;br /&gt;
* [[Offline Uncorrectable Sectors]]&lt;br /&gt;
* [[ZFS Failed Disk Replacement]]&lt;br /&gt;
* [[Calculating SSD Wearout]]&lt;br /&gt;
* [[Proxmox Backup Server Replication]]&lt;/div&gt;</summary>
		<author><name>Maeve</name></author>
	</entry>
	<entry>
		<id>https://www.rosemarknetworks.com/wiki/index.php?title=Packetfence&amp;diff=113</id>
		<title>Packetfence</title>
		<link rel="alternate" type="text/html" href="https://www.rosemarknetworks.com/wiki/index.php?title=Packetfence&amp;diff=113"/>
		<updated>2024-10-28T11:42:20Z</updated>

		<summary type="html">&lt;p&gt;Maeve: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Packetfence&#039;&#039;&#039; is an open source &#039;&#039;&#039;Network Access Control&#039;&#039;&#039; solution developed by Inverse Inc. It provides a RADIUS server that pools together authentication sources from Active Directory, the Google suite, HTPasswd, another RADIUS server, generic LDAP, etc. It also has several supported methods, through its RADIUS server, of allowing authentication to client devices, including dot1x.&lt;br /&gt;
&lt;br /&gt;
== Installing Packetfence ==&lt;br /&gt;
A Debian-based installation ISO for Packetfence is [https://us-ord-1.linodeobjects.com/packetfence-iso/v14.0.0/PacketFence-ISO-v14.0.0.iso available for download here.] The stable release is recommended. This release comes with all of its main services pre-installed and given sane defaults. The minimum requirements are 200GB of storage and 16GB of RAM. It is recommended to use multiple network interfaces, one for management and several for other purposes, but for documentation&#039;s sake we&#039;ll start with one network interface.&lt;br /&gt;
&lt;br /&gt;
# Set the hostname.&lt;br /&gt;
# Specify the domain name if not autopopulated through DHCP.&lt;br /&gt;
# Set the root password. &lt;br /&gt;
# After setting the root password, the installer will take some time. It may appear stuck at some points, but it just takes a while. Up to an hour. &#039;&#039;&#039;Be patient.&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
=== UEFI fix ===&lt;br /&gt;
Installing Packetfence on Proxmox using the standard recommended configuration settings (Q35 machine type, OVMF BIOS) will result in some issues after the machine reboots. You&#039;ll be dropped into a yellow and black UEFI shell, requiring you to run &#039;&#039;fs0:\efi\debian\grubx64.efi&#039;&#039; every time the machine boots, which is obviously not going to work if the host loses power. &lt;br /&gt;
&lt;br /&gt;
To fix this, on the first boot of this machine&amp;lt;ref&amp;gt;[https://forum.proxmox.com/threads/how-to-boot-grubx64-efi-after-import-from-hyper-v.55429/#post-255178 Paraphrased from this Proxmox Forum post] &amp;lt;/ref&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
# Press ESC immediately to get into the BIOS.&lt;br /&gt;
# Go to &#039;Boot Maintenance Manager&#039;&lt;br /&gt;
# Go to &#039;Boot Option&#039;&lt;br /&gt;
# Go to &#039;Add Boot Option&#039;&lt;br /&gt;
# Press enter on the &#039;PciRoot&#039; volume&lt;br /&gt;
# Select EFI and press enter&lt;br /&gt;
# Select the debian folder.&lt;br /&gt;
# Select grubx64.efi and press enter.&lt;br /&gt;
# Enter &amp;quot;Boot into Packetfence&amp;quot; as the description. &lt;br /&gt;
# Press F10 to save again.&lt;br /&gt;
# Go to &#039;Commit Changes and Exit&#039;&lt;br /&gt;
# Select &#039;Change Boot Order&#039;&lt;br /&gt;
# Press enter to get the listing of boot devices.&lt;br /&gt;
# Go to &#039;Boot into Packetfence&#039;&lt;br /&gt;
# Press + until this boot option is on top of the list and press enter&lt;br /&gt;
# Press F10 to save (just to be sure)&lt;br /&gt;
# Select &#039;Commit Changes and Exit&#039;&lt;br /&gt;
&lt;br /&gt;
== First time configuration ==&lt;br /&gt;
Packetfence most likely received a DHCP lease from your DHCP server. You can either check your DHCP server or run ip addr to determine the default IP address. Assuming a DHCP lease of &#039;&#039;192.168.1.128,&#039;&#039; access the management interface at &amp;lt;nowiki&amp;gt;https://192.168.1.128:1443/&amp;lt;/nowiki&amp;gt;. The first prompt will be a listing of network interfaces. Click the interface and it will take you to a submenu where you can specify the network settings for management. These settings take effect immediately, so you will need to navigate to the new address. The first few prompts are extremely straightforward: generating the admin user account and password, and selecting the management interface. Set your domain and hostname if you wish to change them from what was set during Debian installation. Also specify a timezone and log recipients. &lt;br /&gt;
&lt;br /&gt;
Under the log recipients section, we recommend using the advanced settings context in the top right corner of the settings card so you can specify your SMTP server host settings. &lt;br /&gt;
&lt;br /&gt;
You can skip the Fingerbank setting if you don&#039;t intend to use it. &lt;br /&gt;
&lt;br /&gt;
Save the default passwords generated for the internal services somewhere safe, as they can neither be retrieved again nor altered. Reboot the machine after the configuration is done, for good measure; not every source gets properly restarted after the wizard concludes. &lt;br /&gt;
&lt;br /&gt;
== Authentication and authorization flow preamble ==&lt;br /&gt;
This is where things get murky: the official documentation lacks clarity and specificity. &lt;br /&gt;
&lt;br /&gt;
The flow of behavior is as follows:&lt;br /&gt;
&lt;br /&gt;
# A device connects through a Packetfence client device, such as a Cisco switch. This uses EAP between the port on the device and the NIC of the client machine.&lt;br /&gt;
# The switch then has a configuration that tells it to communicate with the Packetfence server using RADIUS, providing Packetfence with the credentials the user gave over EAP.&lt;br /&gt;
# Packetfence acts as a grand arbiter between all of your unique authentication sources and configurations (which is what we will configure below) and determines what to tell the switch to do with that user&#039;s port. This response is over RADIUS.&lt;br /&gt;
# Once the user has authenticated, the port they&#039;re connected to is assigned a VLAN, on which we assume you have a properly configured DHCP server waiting. Once Packetfence has taken care of the user&#039;s authentication and set the VLAN, the rest of your networking infrastructure needs no changes to accommodate the user.&lt;br /&gt;
&lt;br /&gt;
The first thing we need to define is the connection point between your Packetfence server and your Active Directory Domain Controller. &lt;br /&gt;
&lt;br /&gt;
== Active Directory Domain Connection ==&lt;br /&gt;
Go to Configuration -&amp;gt; Policies and Access Control -&amp;gt; Active Directory Domains.&lt;br /&gt;
&lt;br /&gt;
Press &amp;quot;New Domain&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
* Identifier is the name used throughout the rest of the Packetfence GUI to reference this domain configuration.&lt;br /&gt;
* Workgroup can be set to the name of the domain.&lt;br /&gt;
* Set DNS name to the FQDN of your active directory domain.&lt;br /&gt;
* Sticky DC can be left as *&lt;br /&gt;
* Active Directory FQDN is the FQDN of the domain controller you&#039;re going to make queries against. &lt;br /&gt;
* Active Directory IP and DNS Server(s) can both be set to the IP of your domain controller.&lt;br /&gt;
* OU should be set to Computers or potentially something more specific. This is the OU that Packetfence will create its machine account under in AD. &lt;br /&gt;
* The last two fields are Domain Administrator Username and Password. We recommend having a dedicated admin account for this, just to separate concerns.&lt;br /&gt;
** Note that these credentials are only used when generating the machine account and storing its hashed password, and will not store these credentials permanently. &lt;br /&gt;
* Set Allow on Registration to true. &lt;br /&gt;
&lt;br /&gt;
After saving, return to the previous menu. You should see a green light under &amp;quot;domain joined&amp;quot; and a 200 HTTP status code in the top right corner. &lt;br /&gt;
&lt;br /&gt;
Packetfence can now communicate to Active Directory. However at this stage, it&#039;s not been configured to actually &#039;&#039;use&#039;&#039; this communication. Let&#039;s add an Active Directory Authentication Source.&lt;br /&gt;
&lt;br /&gt;
== Active Directory Authentication Source ==&lt;br /&gt;
From the same configuration tab, let&#039;s go to Authentication Sources.&lt;br /&gt;
&lt;br /&gt;
Press &amp;quot;New Internal Source&amp;quot; and select Active Directory from the drop down. &lt;br /&gt;
&lt;br /&gt;
* Give it a name, preferably that includes the domain itself, as well as a description. &lt;br /&gt;
* Host should be set to the domain controller you specified in the previous step. &lt;br /&gt;
* Base DN should be set to a string of the format &amp;quot;dc=domain,dc=local,dc=example,dc=com&amp;quot; - take each element of your domain&#039;s FQDN and prefix it with dc=, separated by commas. For example, the domain corp.example.com becomes &amp;quot;dc=corp,dc=example,dc=com&amp;quot;.&lt;br /&gt;
* Bind DN is going to be set with a similar string detailing which user account Packetfence binds with. If you created your Packetfence admin in Active Directory under the Users OU, this will be something of the form &amp;quot;cn=packetfence,cn=users,dc=domain,dc=local,dc=example,dc=com&amp;quot;.&lt;br /&gt;
* Provide the password below.&lt;br /&gt;
* Set &amp;quot;associated realms&amp;quot; to &#039;&#039;default&#039;&#039; and &#039;&#039;null&#039;&#039;.&lt;br /&gt;
* Create an &#039;&#039;&#039;Authentication Rule:&#039;&#039;&#039;&lt;br /&gt;
** Name: catchall&lt;br /&gt;
** Description: catchall&lt;br /&gt;
** Matches: any&lt;br /&gt;
** Conditions: none&lt;br /&gt;
** Actions:&lt;br /&gt;
*** Role: default&lt;br /&gt;
*** Access Duration: 12 hours&lt;br /&gt;
* Save the source.&lt;br /&gt;
&lt;br /&gt;
Now, when we configure our RADIUS client device, such as an AP or a switch, credentials will be acceptable from Active Directory, and authenticated users will be given 12 hours of access on a default VLAN.&lt;br /&gt;
&lt;br /&gt;
== Configuring the switch (Catalyst 2960) ==&lt;br /&gt;
The rest of the Packetfence installation guide assumes a Catalyst 2960. Most modern IOS devices use the same syntax, so we&#039;ll use their template. &lt;br /&gt;
 dot1x system-auth-control&lt;br /&gt;
 aaa new-model&lt;br /&gt;
 aaa group server radius packetfence&lt;br /&gt;
  server PF_MANAGEMENT_IP auth-port 1812 acct-port 1813&lt;br /&gt;
 aaa authentication login default local&lt;br /&gt;
 aaa authentication dot1x default group packetfence&lt;br /&gt;
 aaa authorization network default group packetfence&lt;br /&gt;
 radius-server host PF_MANAGEMENT_IP auth-port 1812 acct-port 1813 timeout 2 key useStrongerSecret&lt;br /&gt;
 radius-server vsa send authentication&lt;br /&gt;
 snmp-server community public RO&lt;br /&gt;
 snmp-server community private RW&lt;br /&gt;
Then on a port we wish to use dot1x on:&lt;br /&gt;
 authentication host-mode single-host&lt;br /&gt;
 authentication order dot1x mab&lt;br /&gt;
 authentication priority dot1x mab&lt;br /&gt;
 authentication port-control auto&lt;br /&gt;
 authentication periodic&lt;br /&gt;
 authentication timer restart 10800&lt;br /&gt;
 authentication timer reauthenticate 10800&lt;br /&gt;
 mab&lt;br /&gt;
 no snmp trap link-status&lt;br /&gt;
 dot1x pae authenticator&lt;br /&gt;
 dot1x timeout quiet-period 2&lt;br /&gt;
 dot1x timeout tx-period 3&lt;br /&gt;
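&lt;br /&gt;
To sanity-check that the Packetfence RADIUS server is reachable and that the shared secret matches, radtest from the freeradius-utils package can be used from a host Packetfence will accept RADIUS requests from. A sketch; the username, password, and secret are placeholders matching the configuration above:&lt;br /&gt;
 radtest aduser adpassword PF_MANAGEMENT_IP 0 useStrongerSecret&lt;br /&gt;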
&lt;br /&gt;
== Adding switch to Packetfence ==&lt;br /&gt;
We then navigate to Configuration -&amp;gt; Policies and Access Control -&amp;gt; Network Devices -&amp;gt; Switches.&lt;br /&gt;
&lt;br /&gt;
Press &amp;quot;New Switch&amp;quot; and select default when prompted.&lt;br /&gt;
&lt;br /&gt;
Enter the management IP of the switch and select &amp;quot;production&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
Under Type, select Cisco Catalyst 2960.&lt;br /&gt;
&lt;br /&gt;
In the Radius tab, enter the secret specified in the steps above.&lt;br /&gt;
&lt;br /&gt;
Make sure VLAN by Role ID is enabled and that &amp;quot;default&amp;quot; has been set to a VLAN on your network that you wish to grant access to.&lt;br /&gt;
&lt;br /&gt;
== Connection profile ==&lt;br /&gt;
Lastly, let&#039;s go to Configuration -&amp;gt; Policies and Access Control -&amp;gt; Connection Profiles. &lt;br /&gt;
&lt;br /&gt;
Click on &amp;quot;New Connection Profile&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
* Profile name: 8021x.&lt;br /&gt;
* Profile description: 8021x wired connections&lt;br /&gt;
* Enable the profile.&lt;br /&gt;
* Automatically register devices: checked&lt;br /&gt;
* Filters:&lt;br /&gt;
** Match any&lt;br /&gt;
** Connection Type: Ethernet EAP&lt;br /&gt;
* Add the ADDC source we created earlier&lt;br /&gt;
&lt;br /&gt;
You can now test your workstation.&lt;/div&gt;</summary>
		<author><name>Maeve</name></author>
	</entry>
	<entry>
		<id>https://www.rosemarknetworks.com/wiki/index.php?title=Packetfence&amp;diff=112</id>
		<title>Packetfence</title>
		<link rel="alternate" type="text/html" href="https://www.rosemarknetworks.com/wiki/index.php?title=Packetfence&amp;diff=112"/>
		<updated>2024-10-24T23:30:19Z</updated>

		<summary type="html">&lt;p&gt;Maeve: Wrote all the introduction and clarification on booting issue&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Packetfence&#039;&#039;&#039; is an open source &#039;&#039;&#039;Network Access Control&#039;&#039;&#039; solution developed by Inverse Inc. It provides a RADIUS server that pools together authentication sources from Active Directory, the Google suite, HTPasswd, another RADIUS server, generic LDAP, etc. It also has several supported methods, through its RADIUS server, of allowing authentication to client devices, including dot1x.&lt;br /&gt;
&lt;br /&gt;
== Installing Packetfence ==&lt;br /&gt;
A Debian-based installation ISO for Packetfence is [https://us-ord-1.linodeobjects.com/packetfence-iso/v14.0.0/PacketFence-ISO-v14.0.0.iso available for download here.] The stable release is recommended. This release comes with all of its main services pre-installed and given sane defaults. The minimum requirements are 200GB of storage and 16GB of RAM. It is recommended to use multiple network interfaces, one for management and several for other purposes, but for documentation&#039;s sake we&#039;ll start with one network interface.&lt;br /&gt;
&lt;br /&gt;
# Set the hostname.&lt;br /&gt;
# Specify the domain name if not autopopulated through DHCP.&lt;br /&gt;
# Set the root password. &lt;br /&gt;
# After setting the root password, the installer will take some time. It may appear stuck at some points, but it just takes a while. Up to an hour. &#039;&#039;&#039;Be patient.&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
=== UEFI fix ===&lt;br /&gt;
Installing Packetfence on Proxmox using the standard recommended configuration settings (Q35 machine type, OVMF BIOS) will result in some issues after the machine reboots. You&#039;ll be dropped into a yellow and black UEFI shell, requiring you to run &#039;&#039;fs0:\efi\debian\grubx64.efi&#039;&#039; every time the machine boots, which is obviously not going to work. &lt;br /&gt;
&lt;br /&gt;
To fix this, on the first boot of this machine&amp;lt;ref&amp;gt;[https://forum.proxmox.com/threads/how-to-boot-grubx64-efi-after-import-from-hyper-v.55429/#post-255178 Paraphrased from this Proxmox Forum post] &amp;lt;/ref&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
# Press ESC immediately to get into the BIOS.&lt;br /&gt;
# Go to &#039;Boot Maintenance Manager&#039;&lt;br /&gt;
# Go to &#039;Boot Option&#039;&lt;br /&gt;
# Go to &#039;Add Boot Option&#039;&lt;br /&gt;
# Press enter on the &#039;PciRoot&#039; volume&lt;br /&gt;
# Select EFI and press enter&lt;br /&gt;
# Select the debian folder.&lt;br /&gt;
# Select grubx64.efi and press enter.&lt;br /&gt;
# Enter &amp;quot;Boot into Packetfence&amp;quot; as the description. &lt;br /&gt;
# Press F10 to save.&lt;br /&gt;
# Go to &#039;Commit Changes and Exit&#039;&lt;br /&gt;
# Select &#039;Change Boot Order&#039;&lt;br /&gt;
# Press enter to get the listing of boot devices.&lt;br /&gt;
# Go to &#039;Boot into Packetfence&#039;&lt;br /&gt;
# Press + until this boot option is on top of the list and press enter&lt;br /&gt;
# Press F10 to save (just to be sure)&lt;br /&gt;
# Select &#039;Commit Changes and Exit&#039;&lt;br /&gt;
&lt;br /&gt;
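If you&#039;d rather not rebuild the boot entry by hand, a commonly used alternative (not covered in the forum post above) is to copy GRUB to the removable-media fallback path, which OVMF boots even without an NVRAM entry. A minimal sketch, assuming you&#039;ve booted into Packetfence once via the EFI shell command above and that the EFI system partition is mounted at /boot/efi:&lt;br /&gt;
 # Copy GRUB to the fallback path OVMF checks when no boot entry matches.&lt;br /&gt;
 mkdir -p /boot/efi/EFI/BOOT&lt;br /&gt;
 cp /boot/efi/EFI/debian/grubx64.efi /boot/efi/EFI/BOOT/BOOTX64.EFI&lt;br /&gt;
&lt;br /&gt;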
== First time configuration ==&lt;br /&gt;
Packetfence most likely received a DHCP lease from your DHCP server. You can either check your DHCP server or run &#039;&#039;ip addr&#039;&#039; on the console to determine the current IP address. Assuming a DHCP lease of &#039;&#039;192.168.1.128,&#039;&#039; access the management interface at &amp;lt;nowiki&amp;gt;https://192.168.1.128:1443/&amp;lt;/nowiki&amp;gt;. The first prompt will be a listing of network interfaces. Click the interface and it will take you to a submenu where you can specify the network settings for management. These take effect immediately, so you will need to navigate to the new address. The next few prompts are straightforward: generating the admin user account and password, and selecting the management interface. Set your domain and hostname if you wish to change them from what was set during Debian installation. Also specify a timezone and log recipients. &lt;br /&gt;
&lt;br /&gt;
Under the log recipients section, we recommend using the advanced settings option in the top right corner of the settings card so you can specify your SMTP server host settings. &lt;br /&gt;
&lt;br /&gt;
You can skip the Fingerbank setting if you don&#039;t intend to use it. &lt;br /&gt;
&lt;br /&gt;
Save the default passwords generated for the internal services somewhere safe, as they can neither be retrieved again nor altered. Reboot the machine after the configuration is done, for good measure; not every service gets properly restarted after the wizard concludes. &lt;br /&gt;
&lt;br /&gt;
== Authentication and authorization flow preamble ==&lt;br /&gt;
This is where things get murky; the official documentation lacks clarity and specificity. &lt;br /&gt;
&lt;br /&gt;
The flow of behavior is as follows:&lt;br /&gt;
&lt;br /&gt;
# A client device connects to a Packetfence-managed network device, like a Cisco switch. This uses EAP between the port on the switch and the NIC of the client machine.&lt;br /&gt;
# The switch then has a configuration that tells it to communicate to the Packetfence server using RADIUS, providing Packetfence with the credentials the user gave over EAP.&lt;br /&gt;
# Packetfence acts as a grand arbiter between all of your unique authentication sources and configurations (which is what we will configure below) and determines what to tell the switch to do with that user&#039;s port. This response is sent over RADIUS.&lt;br /&gt;
# Once the user has authenticated, the port they&#039;re connected to is assigned a VLAN, which we will assume you have a properly configured DHCP server waiting on. Once Packetfence has taken care of the user&#039;s authentication and set the VLAN, the rest of your networking infrastructure needs no changes to accommodate the user.&lt;br /&gt;
&lt;br /&gt;
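To watch steps 2 through 4 on the wire while testing, you can capture RADIUS traffic on the Packetfence host itself. A minimal sketch, assuming tcpdump is installed and RADIUS is listening on its standard ports:&lt;br /&gt;
 # Access-Request/Accept/Reject packets from the switch arrive on UDP 1812&lt;br /&gt;
 # (1813 for accounting).&lt;br /&gt;
 tcpdump -ni any udp port 1812 or udp port 1813&lt;br /&gt;
&lt;br /&gt;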
The first thing we need to define is the connection point between your Packetfence server and your Active Directory Domain Controller. &lt;br /&gt;
&lt;br /&gt;
== Active Directory Domain source ==&lt;/div&gt;</summary>
		<author><name>Maeve</name></author>
	</entry>
	<entry>
		<id>https://www.rosemarknetworks.com/wiki/index.php?title=Category:Software_Tutorials&amp;diff=111</id>
		<title>Category:Software Tutorials</title>
		<link rel="alternate" type="text/html" href="https://www.rosemarknetworks.com/wiki/index.php?title=Category:Software_Tutorials&amp;diff=111"/>
		<updated>2024-10-17T12:45:24Z</updated>

		<summary type="html">&lt;p&gt;Maeve: Added packetfence link&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here are some of our tutorials and notes on software, both selfhosted FOSS and 3rd party products.&lt;br /&gt;
&lt;br /&gt;
== Software Tutorials ==&lt;br /&gt;
&lt;br /&gt;
* [[Firebox Content Inspection|Firebox Content Inspection (HTTPS Content Inspection)]]&lt;br /&gt;
* [[Nextcloud]]&lt;br /&gt;
* [[Packetfence]]&lt;br /&gt;
* [[Proxy Server]] (high level)&lt;br /&gt;
** [[Reverse Proxy]] (use-case specific)&lt;br /&gt;
* [[Zabbix]]&lt;/div&gt;</summary>
		<author><name>Maeve</name></author>
	</entry>
	<entry>
		<id>https://www.rosemarknetworks.com/wiki/index.php?title=Category:Networking_Tutorials&amp;diff=110</id>
		<title>Category:Networking Tutorials</title>
		<link rel="alternate" type="text/html" href="https://www.rosemarknetworks.com/wiki/index.php?title=Category:Networking_Tutorials&amp;diff=110"/>
		<updated>2024-10-14T16:40:45Z</updated>

		<summary type="html">&lt;p&gt;Maeve: /* Switch configuration[1] [2] */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here are some of our networking related guides. &lt;br /&gt;
&lt;br /&gt;
== Web Hosting ==&lt;br /&gt;
&lt;br /&gt;
=== Proxying requests ===&lt;br /&gt;
&lt;br /&gt;
* [[Proxy Server]]&lt;br /&gt;
* [[Reverse Proxy]]&lt;br /&gt;
&lt;br /&gt;
=== Switch configuration&amp;lt;ref&amp;gt;[[:Category:Catalyst Tutorials|Catalyst Tutorials]]&amp;lt;/ref&amp;gt; &amp;lt;ref&amp;gt;[[:Category:Nexus Tutorials|Nexus Tutorials]]&amp;lt;/ref&amp;gt; ===&lt;br /&gt;
&lt;br /&gt;
* [[Cisco Catalyst Template]]&lt;br /&gt;
* [[Cisco Catalyst SNMP Tutorial]]&lt;br /&gt;
* [[Cisco Nexus Template]]&lt;/div&gt;</summary>
		<author><name>Maeve</name></author>
	</entry>
	<entry>
		<id>https://www.rosemarknetworks.com/wiki/index.php?title=Category:Catalyst_Tutorials&amp;diff=109</id>
		<title>Category:Catalyst Tutorials</title>
		<link rel="alternate" type="text/html" href="https://www.rosemarknetworks.com/wiki/index.php?title=Category:Catalyst_Tutorials&amp;diff=109"/>
		<updated>2024-10-14T16:39:55Z</updated>

		<summary type="html">&lt;p&gt;Maeve: /* General templates */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Below is a collection of configuration snippets for Catalyst switches that we&#039;ve gathered for certain purposes over the years. We&#039;ve generalized and anonymized them to be readily available as templates for your needs.&lt;br /&gt;
&lt;br /&gt;
== General templates ==&lt;br /&gt;
&lt;br /&gt;
* [[Cisco Catalyst Template]]&lt;br /&gt;
&lt;br /&gt;
== Specific features ==&lt;br /&gt;
&lt;br /&gt;
* [[Cisco Catalyst SNMP Tutorial|Cisco Catalyst SNMP]]&lt;/div&gt;</summary>
		<author><name>Maeve</name></author>
	</entry>
	<entry>
		<id>https://www.rosemarknetworks.com/wiki/index.php?title=Cisco_Catalyst_SNMP_Tutorial&amp;diff=108</id>
		<title>Cisco Catalyst SNMP Tutorial</title>
		<link rel="alternate" type="text/html" href="https://www.rosemarknetworks.com/wiki/index.php?title=Cisco_Catalyst_SNMP_Tutorial&amp;diff=108"/>
		<updated>2024-10-14T16:23:41Z</updated>

		<summary type="html">&lt;p&gt;Maeve: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;SNMP&#039;&#039;&#039; is a protocol built into most network devices (or available through installable packages on Linux and BSD) that enables authenticated remote monitoring of a device without CLI access. &lt;br /&gt;
&lt;br /&gt;
== Creating SNMP Views ==&lt;br /&gt;
The first thing we&#039;re going to do is create an SNMP View. This is essentially an access level indicator that can be supplied as an argument when creating a group. This allows us to specify which items in SNMP&#039;s MIB Database we want our users to access. It also allows us to specify read/write permissions. &lt;br /&gt;
 snmp-server view Zabbix iso included&lt;br /&gt;
Here, we use the &#039;&#039;snmp-server view&#039;&#039; command to define a view named Zabbix. After the name, we set the MIB or OID name that we want to target; iso is the global SNMP namespace, which includes everything SNMP itself records. Finally, &#039;&#039;included&#039;&#039; means that anything under iso is to be included in the view.&lt;br /&gt;
&lt;br /&gt;
Now, let&#039;s create another one, but this time, it&#039;ll prevent access instead.&lt;br /&gt;
 snmp-server view Zabbix_ReadOnly iso excluded&lt;br /&gt;
Now, we have a second view called Zabbix_ReadOnly that can&#039;t access &#039;&#039;anything&#039;&#039;. We&#039;ll use it as the write view for Zabbix, effectively making the user read-only.&lt;br /&gt;
&lt;br /&gt;
== Creating SNMP Groups ==&lt;br /&gt;
Let&#039;s create a group for our SNMPv3 user to be a part of. &lt;br /&gt;
 snmp-server group zbx v3 priv read Zabbix write Zabbix_ReadOnly &lt;br /&gt;
Here, the &#039;&#039;priv&#039;&#039; keyword specifies the authPriv security level, which requires two different passwords: one for user authentication and one for encryption. We also attach the Zabbix view for reads and the Zabbix_ReadOnly view for writes.&lt;br /&gt;
&lt;br /&gt;
Groups can have three different views specified.&lt;br /&gt;
&lt;br /&gt;
* Read views define permissions for standard read operations.&lt;br /&gt;
* Write views define permissions for management; some settings can be controlled through SNMP, but Zabbix does not require write privileges.&lt;br /&gt;
* Notify views define access for SNMP users when sending traps and informs. We&#039;re not setting traps up in this guide, so our notify view isn&#039;t set.&lt;br /&gt;
&lt;br /&gt;
== Creating an SNMP User ==&lt;br /&gt;
Now we can create an SNMPv3 user. &lt;br /&gt;
 snmp-server user zabbix zbx v3 auth sha &#039;&#039;&#039;strongerAuthPassword&#039;&#039;&#039; priv aes 256 &#039;&#039;&#039;strongerPrivPassword&#039;&#039;&#039; &lt;br /&gt;
Now, if we run &#039;&#039;show snmp user:&#039;&#039;&lt;br /&gt;
 User name: zabbix&lt;br /&gt;
 Engine ID: 800000090300848A8DEC9A00&lt;br /&gt;
 storage-type: nonvolatile	 active&lt;br /&gt;
 Authentication Protocol: SHA&lt;br /&gt;
 Privacy Protocol: AES256&lt;br /&gt;
 Group-name: zbx&lt;br /&gt;
We can see that the user has been added.&lt;br /&gt;
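&lt;br /&gt;
From a Linux host with the net-snmp tools installed, you can confirm the credentials work end to end. A minimal sketch with placeholder values; swap in your switch&#039;s management IP and real passwords, and note that AES-256 privacy requires a net-snmp build with strong-crypto support (fall back to AES if yours lacks it):&lt;br /&gt;
 # Query sysDescr.0 over SNMPv3 using the zabbix user created above.&lt;br /&gt;
 # 192.168.10.2 is a placeholder management IP.&lt;br /&gt;
 snmpget -v3 -l authPriv -u zabbix -a SHA -A strongerAuthPassword -x AES-256 -X strongerPrivPassword 192.168.10.2 1.3.6.1.2.1.1.1.0&lt;br /&gt;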
[[Category:SNMP]]&lt;br /&gt;
[[Category:Networking Tutorials]]&lt;br /&gt;
[[Category:Catalyst Tutorials]]&lt;/div&gt;</summary>
		<author><name>Maeve</name></author>
	</entry>
	<entry>
		<id>https://www.rosemarknetworks.com/wiki/index.php?title=Calculating_SSD_Wearout&amp;diff=105</id>
		<title>Calculating SSD Wearout</title>
		<link rel="alternate" type="text/html" href="https://www.rosemarknetworks.com/wiki/index.php?title=Calculating_SSD_Wearout&amp;diff=105"/>
		<updated>2024-10-11T13:25:21Z</updated>

		<summary type="html">&lt;p&gt;Maeve: Created page with &amp;quot;&amp;#039;&amp;#039;&amp;#039;SSDs have a predetermined wearout&amp;#039;&amp;#039;&amp;#039; point at which they enter a failed state. This is because SSDs use nand flash which eventually loses its ability to accurate respond to IO requests. Determining the status of your SSD is vital, as knowing the state of its wearout can help you schedule much needed replacements.   First, you need to know the drive&amp;#039;s &amp;#039;&amp;#039;rated&amp;#039;&amp;#039; &amp;#039;&amp;#039;&amp;#039;TBW&amp;#039;&amp;#039;&amp;#039; - &amp;#039;&amp;#039;&amp;#039;terabytes or total bytes written -&amp;#039;&amp;#039;&amp;#039; as this is what we&amp;#039;ll be calculating against. This repre...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;SSDs have a predetermined wearout&#039;&#039;&#039; point at which they enter a failed state. This is because SSDs use NAND flash, which eventually loses its ability to accurately respond to IO requests. Determining the status of your SSD is vital, as knowing the state of its wearout can help you schedule much-needed replacements. &lt;br /&gt;
&lt;br /&gt;
First, you need to know the drive&#039;s &#039;&#039;rated&#039;&#039; &#039;&#039;&#039;TBW&#039;&#039;&#039; (&#039;&#039;&#039;terabytes written&#039;&#039;&#039;, sometimes given as total bytes written), as this is what we&#039;ll be calculating against. This represents what the drive is rated to be able to handle. It can usually be determined through spec sheets or by using [https://www.techpowerup.com/ssd-specs/ this useful database.] It&#039;s not exhaustive, but it&#039;s usually good enough. For this example, our drive has a TBW rating of 400TB. &lt;br /&gt;
&lt;br /&gt;
Next, you need to know your drive&#039;s &#039;&#039;&#039;Total_LBAs_Written -&#039;&#039;&#039; a measurement of how many sectors have been written or modified on the drive. This can be determined by using the following:&lt;br /&gt;
 root@example-pve:~# smartctl --all /dev/sdX&lt;br /&gt;
 --- snippet ---&lt;br /&gt;
 206 Write_Error_Rate        0x000e   100   100   000    Old_age   Always       -       1&lt;br /&gt;
 &#039;&#039;&#039;246 Total_LBAs_Written      0x0032   100   100   000    Old_age   Always       -       384578442888&#039;&#039;&#039;&lt;br /&gt;
 247 Host_Program_Page_Count 0x0032   100   100   000    Old_age   Always       -       12018275091&lt;br /&gt;
 248 FTL_Program_Page_Count  0x0032   100   100   000    Old_age   Always       -       84402429335&lt;br /&gt;
 180 Unused_Reserve_NAND_Blk 0x0033   000   000   000    Pre-fail  Always       -       9380&lt;br /&gt;
 210 Success_RAIN_Recov_Cnt  0x0032   100   100   000    Old_age   Always       -       4&lt;br /&gt;
The exact values available on your device differ from model to model. Some even have an explicit field for expected lifetime remaining, but not all do, so this method may be the best option.&lt;br /&gt;
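&lt;br /&gt;
For instance, NVMe drives report wearout directly in their SMART/Health log, with no arithmetic required. A quick check, assuming smartctl is installed and an NVMe device sits at /dev/nvme0:&lt;br /&gt;
 # NVMe health logs include a Percentage Used field.&lt;br /&gt;
 smartctl -a /dev/nvme0 | grep &#039;Percentage Used&#039;&lt;br /&gt;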
&lt;br /&gt;
Then, we multiply the raw Total_LBAs_Written value (the number in the right-hand column) by the sector size of the drive, which the same smartctl command also reports:&lt;br /&gt;
 Sector Size:      512 bytes logical/physical&lt;br /&gt;
Now we can get the actual terabyte amount using this formula:&lt;br /&gt;
 (total_lbas_written * sector_size) / (1024^4)&lt;br /&gt;
 (384578442888 * 512) / (1024^4)&lt;br /&gt;
 &#039;&#039;&#039;≈ 179TB&#039;&#039;&#039;&lt;br /&gt;
Now, we know that on this drive, &#039;&#039;&#039;179TB&#039;&#039;&#039; has been written. Remember how our drive is rated for about 400TB? That means that our drive currently has &#039;&#039;&#039;(179/400) ≈ .45 ≈ 45% wearout.&#039;&#039;&#039; &lt;br /&gt;
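&lt;br /&gt;
If you do this often, the arithmetic is easy to script. A minimal sketch, assuming a drive that exposes the Total_LBAs_Written attribute and a 512-byte sector size; adjust both if your model differs:&lt;br /&gt;
 # Hypothetical helper: prints terabytes written for /dev/sdX.&lt;br /&gt;
 lbas=$(smartctl -A /dev/sdX | awk &#039;/Total_LBAs_Written/ {print $NF}&#039;)&lt;br /&gt;
 echo &amp;quot;scale=2; ($lbas * 512) / 1024^4&amp;quot; | bc&lt;br /&gt;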
&lt;br /&gt;
Proxmox uses a slightly more sophisticated internal algorithm, and may use a self-discovered max TBW rating indicator, which could explain discrepancies within a few percent. Case in point, Proxmox tells us that this drive actually has 49% wearout. But hey, that&#039;s really close! Close enough to schedule on.&lt;/div&gt;</summary>
		<author><name>Maeve</name></author>
	</entry>
	<entry>
		<id>https://www.rosemarknetworks.com/wiki/index.php?title=Category:Linux_Tutorials&amp;diff=104</id>
		<title>Category:Linux Tutorials</title>
		<link rel="alternate" type="text/html" href="https://www.rosemarknetworks.com/wiki/index.php?title=Category:Linux_Tutorials&amp;diff=104"/>
		<updated>2024-10-11T12:51:21Z</updated>

		<summary type="html">&lt;p&gt;Maeve: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here are some of our Linux tutorials.&lt;br /&gt;
&lt;br /&gt;
== Linux Tutorials (Especially helpful for Proxmox) ==&lt;br /&gt;
&lt;br /&gt;
* [[Offline Uncorrectable Sectors]]&lt;br /&gt;
* [[ZFS Failed Disk Replacement]]&lt;br /&gt;
* [[Calculating SSD Wearout]]&lt;/div&gt;</summary>
		<author><name>Maeve</name></author>
	</entry>
	<entry>
		<id>https://www.rosemarknetworks.com/wiki/index.php?title=Category:Networking_Tutorials&amp;diff=103</id>
		<title>Category:Networking Tutorials</title>
		<link rel="alternate" type="text/html" href="https://www.rosemarknetworks.com/wiki/index.php?title=Category:Networking_Tutorials&amp;diff=103"/>
		<updated>2024-10-10T23:11:38Z</updated>

		<summary type="html">&lt;p&gt;Maeve: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here are some of our networking related guides. &lt;br /&gt;
&lt;br /&gt;
== Web Hosting ==&lt;br /&gt;
&lt;br /&gt;
=== Proxying requests ===&lt;br /&gt;
&lt;br /&gt;
* [[Proxy Server]]&lt;br /&gt;
* [[Reverse Proxy]]&lt;br /&gt;
&lt;br /&gt;
=== Switch configuration&amp;lt;ref&amp;gt;[[:Category:Catalyst Tutorials|Catalyst Tutorials]]&amp;lt;/ref&amp;gt; &amp;lt;ref&amp;gt;[[:Category:Nexus Tutorials|Nexus Tutorials]]&amp;lt;/ref&amp;gt; ===&lt;br /&gt;
&lt;br /&gt;
* [[Cisco Catalyst Template]]&lt;br /&gt;
* [[Cisco Nexus Template]]&lt;/div&gt;</summary>
		<author><name>Maeve</name></author>
	</entry>
	<entry>
		<id>https://www.rosemarknetworks.com/wiki/index.php?title=Category:Nexus_Tutorials&amp;diff=102</id>
		<title>Category:Nexus Tutorials</title>
		<link rel="alternate" type="text/html" href="https://www.rosemarknetworks.com/wiki/index.php?title=Category:Nexus_Tutorials&amp;diff=102"/>
		<updated>2024-10-10T23:10:12Z</updated>

		<summary type="html">&lt;p&gt;Maeve: Created page with &amp;quot;Here are some of our tutorials and templates for the Nexus series of Layer 3 switches by Cisco.  == Nexus Tutorials ==  * Cisco Nexus Template&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here are some of our tutorials and templates for the Nexus series of Layer 3 switches by Cisco.&lt;br /&gt;
&lt;br /&gt;
== Nexus Tutorials ==&lt;br /&gt;
&lt;br /&gt;
* [[Cisco Nexus Template]]&lt;/div&gt;</summary>
		<author><name>Maeve</name></author>
	</entry>
	<entry>
		<id>https://www.rosemarknetworks.com/wiki/index.php?title=Cisco_Nexus_Template&amp;diff=101</id>
		<title>Cisco Nexus Template</title>
		<link rel="alternate" type="text/html" href="https://www.rosemarknetworks.com/wiki/index.php?title=Cisco_Nexus_Template&amp;diff=101"/>
		<updated>2024-10-10T23:08:33Z</updated>

		<summary type="html">&lt;p&gt;Maeve: Created page with &amp;quot; ! Set up hostname, management   hostname examplehostname    ! Some optional features we assume you&amp;#039;ll want  no feature telnet  feature scp-server  feature interface-vlan  feature dhcp  feature lldp    ! Management interface (Nexuses have dedicated gigabit ports for management  interface mgmt0    vrf member management    ip address 192.168.10.10/24    ! Here we define the router we want our management VRF to use as its gateway.   vrf context management    ip route 0.0...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; ! Set up hostname, management &lt;br /&gt;
 hostname examplehostname&lt;br /&gt;
 &lt;br /&gt;
 ! Some optional features we assume you&#039;ll want&lt;br /&gt;
 no feature telnet&lt;br /&gt;
 feature scp-server&lt;br /&gt;
 feature interface-vlan&lt;br /&gt;
 feature dhcp&lt;br /&gt;
 feature lldp&lt;br /&gt;
 &lt;br /&gt;
 ! Management interface (Nexuses have dedicated gigabit ports for management)&lt;br /&gt;
 interface mgmt0&lt;br /&gt;
   vrf member management&lt;br /&gt;
   ip address 192.168.10.10/24&lt;br /&gt;
 &lt;br /&gt;
 ! Here we define the router we want our management VRF to use as its gateway. &lt;br /&gt;
 vrf context management&lt;br /&gt;
   ip route 0.0.0.0/0 192.168.10.1&lt;br /&gt;
 &lt;br /&gt;
 ! Set domain information  &lt;br /&gt;
 ip domain-lookup&lt;br /&gt;
 ip domain-name your-local-domain-here&lt;br /&gt;
 ip name-server 192.168.10.1 use-vrf management&lt;br /&gt;
 &lt;br /&gt;
 ! Setup Network Time Protocol&lt;br /&gt;
 ntp server 0.north-america.pool.ntp.org&lt;br /&gt;
 ntp server 1.north-america.pool.ntp.org&lt;br /&gt;
 ntp server 2.north-america.pool.ntp.org&lt;br /&gt;
 ntp server 3.north-america.pool.ntp.org&lt;br /&gt;
 &lt;br /&gt;
 ! This is optional, but if you&#039;re used to IOS&#039; wr to save configs, this lets you use that same phrase.&lt;br /&gt;
 cli alias name wr copy run start&lt;br /&gt;
 &lt;br /&gt;
 ! Set up users. NX-OS hashes a plaintext password on entry; &amp;quot;password 5&amp;quot; is only for supplying an already-encrypted string.&lt;br /&gt;
 username AdminUser password supersecretadminpassword role network-admin&lt;br /&gt;
 &lt;br /&gt;
 ! Example vlan. Note that we&#039;re just setting a name.&lt;br /&gt;
 vlan 10&lt;br /&gt;
   name LAN_Management_192.168.10.0/24&lt;br /&gt;
 &lt;br /&gt;
 ! Example vlan interface&lt;br /&gt;
 interface vlan 10 &lt;br /&gt;
     description Management_Address&lt;br /&gt;
     ip address 192.168.10.2/24&lt;br /&gt;
     vrf member management &lt;br /&gt;
     no shutdown&lt;br /&gt;
     no ip redirects&lt;br /&gt;
     &lt;br /&gt;
 ! Enable console and vty (remote) access, then save the config&lt;br /&gt;
 line console&lt;br /&gt;
 line vty&lt;br /&gt;
 copy running-config startup-config&lt;/div&gt;</summary>
		<author><name>Maeve</name></author>
	</entry>
	<entry>
		<id>https://www.rosemarknetworks.com/wiki/index.php?title=Category:Networking_Tutorials&amp;diff=100</id>
		<title>Category:Networking Tutorials</title>
		<link rel="alternate" type="text/html" href="https://www.rosemarknetworks.com/wiki/index.php?title=Category:Networking_Tutorials&amp;diff=100"/>
		<updated>2024-10-10T17:36:57Z</updated>

		<summary type="html">&lt;p&gt;Maeve: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here are some of our networking related guides. &lt;br /&gt;
&lt;br /&gt;
== Web Hosting ==&lt;br /&gt;
&lt;br /&gt;
=== Proxying requests ===&lt;br /&gt;
&lt;br /&gt;
* [[Proxy Server]]&lt;br /&gt;
* [[Reverse Proxy]]&lt;br /&gt;
&lt;br /&gt;
=== Switch configuration&amp;lt;ref&amp;gt;[[:Category:Catalyst Tutorials|Catalyst Tutorials]]&amp;lt;/ref&amp;gt; ===&lt;br /&gt;
&lt;br /&gt;
* [[Cisco Catalyst Template]]&lt;/div&gt;</summary>
		<author><name>Maeve</name></author>
	</entry>
	<entry>
		<id>https://www.rosemarknetworks.com/wiki/index.php?title=Cisco_Catalyst_Template&amp;diff=99</id>
		<title>Cisco Catalyst Template</title>
		<link rel="alternate" type="text/html" href="https://www.rosemarknetworks.com/wiki/index.php?title=Cisco_Catalyst_Template&amp;diff=99"/>
		<updated>2024-10-10T17:35:11Z</updated>

		<summary type="html">&lt;p&gt;Maeve: Created entire page&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; ! &#039;&#039;&#039;Enter configuration mode&#039;&#039;&#039; &lt;br /&gt;
 en&lt;br /&gt;
 conf t&lt;br /&gt;
  &lt;br /&gt;
 ! &#039;&#039;&#039;Turn on rapid spanning tree&#039;&#039;&#039;&lt;br /&gt;
 spanning-tree mode rapid-pvst &lt;br /&gt;
 &lt;br /&gt;
 ! &#039;&#039;&#039;Set the hostname&#039;&#039;&#039;&lt;br /&gt;
 hostname &#039;&#039;&#039;examplename&#039;&#039;&#039;&lt;br /&gt;
 &lt;br /&gt;
 ! &#039;&#039;&#039;Enables password encryption&#039;&#039;&#039; &lt;br /&gt;
 service password-encryption&lt;br /&gt;
 &lt;br /&gt;
 ! &#039;&#039;&#039;Optional. Enable secret followed by a password will require console users to provide a password before they can &amp;quot;enable&amp;quot; the switch, allowing them to enter configuration mode, run show run, etc.&#039;&#039;&#039;&lt;br /&gt;
 enable secret &#039;&#039;&#039;supersecretpassword&#039;&#039;&#039;&lt;br /&gt;
 &lt;br /&gt;
 ! &#039;&#039;&#039;Create a superuser / admin. The name can be anything&#039;&#039;&#039;. &lt;br /&gt;
 username AdminUser priv 15 secret &#039;&#039;&#039;incrediblysecurepassword&#039;&#039;&#039;&lt;br /&gt;
 &lt;br /&gt;
 ! &#039;&#039;&#039;Set Timezone&#039;&#039;&#039;&lt;br /&gt;
 clock timezone UTC 0 0&lt;br /&gt;
 &lt;br /&gt;
 ! &#039;&#039;&#039;Set NTP server. If DNS is functional, use one of these&#039;&#039;&#039;.&lt;br /&gt;
 ntp server 0.north-america.pool.ntp.org&lt;br /&gt;
 ntp server 1.north-america.pool.ntp.org&lt;br /&gt;
 ntp server 2.north-america.pool.ntp.org&lt;br /&gt;
 ntp server 3.north-america.pool.ntp.org&lt;br /&gt;
 ! &#039;&#039;&#039;Alternatively, if you want this device to instead pull NTP from another device in your network, supply an IP&#039;&#039;&#039;.&lt;br /&gt;
 ! ntp server &#039;&#039;&#039;10.0.0.1&#039;&#039;&#039; &lt;br /&gt;
 &lt;br /&gt;
 ! &#039;&#039;&#039;aaa new-model enables a suite of features that are now standard across all other Cisco devices.&#039;&#039;&#039;&lt;br /&gt;
 aaa new-model&lt;br /&gt;
 aaa authentication login default local&lt;br /&gt;
 ! &#039;&#039;&#039;Console connection&#039;&#039;&#039;&lt;br /&gt;
 aaa authorization console&lt;br /&gt;
 ! &#039;&#039;&#039;SSH connection&#039;&#039;&#039; &lt;br /&gt;
 aaa authorization exec default local &lt;br /&gt;
 &lt;br /&gt;
 ! &#039;&#039;&#039;Disables the password requirement on the console. If a password were set here, it would be prompted for one step before the enable secret.&#039;&#039;&#039; &lt;br /&gt;
 line con 0&lt;br /&gt;
 no password&lt;br /&gt;
 exit &lt;br /&gt;
 &lt;br /&gt;
 ! &#039;&#039;&#039;This configures SSH access in the same way that we configured the above console connection. a vty is a remote connection. IOS supports 16 concurrently&#039;&#039;&#039;. &lt;br /&gt;
 line vty 0 15&lt;br /&gt;
 no password&lt;br /&gt;
 transport input ssh&lt;br /&gt;
 exit&lt;br /&gt;
 &lt;br /&gt;
 ! &#039;&#039;&#039;Disables default interface for VLAN 1. VLAN 1 should be avoided when possible as this is the default VLAN that ports will take when reset.&#039;&#039;&#039; &lt;br /&gt;
 int vlan 1&lt;br /&gt;
 no ip address&lt;br /&gt;
 shutdown&lt;br /&gt;
 &lt;br /&gt;
 ! &#039;&#039;&#039;Define a default domain name and DNS server (a domain name must be set before generating the SSH key below). Uncomment the next line only if you want to disable DNS lookups.&#039;&#039;&#039;&lt;br /&gt;
 ! no ip domain-lookup&lt;br /&gt;
 ip domain-name &#039;&#039;&#039;your-internal-domain-name&#039;&#039;&#039;&lt;br /&gt;
 ip name-server &#039;&#039;&#039;your-local-dns-server&#039;&#039;&#039;&lt;br /&gt;
 ! &#039;&#039;&#039;Generate a cryptokey to enable SSH.&#039;&#039;&#039; &lt;br /&gt;
 crypto key generate rsa modulus 4096 &lt;br /&gt;
 ip ssh version 2&lt;br /&gt;
 ! &#039;&#039;&#039;Optional tuning: ip ssh {timeout seconds | authentication-retries number}&#039;&#039;&#039;&lt;br /&gt;
 &lt;br /&gt;
 ! &#039;&#039;&#039;Here we add a port to VLAN 10. This assumes VLAN 10 is the VLAN you&#039;re going to use for your primary management. Adjust as appropriate.&#039;&#039;&#039; &lt;br /&gt;
 int te 1/0/48 &lt;br /&gt;
 switchport access vlan 10 &lt;br /&gt;
 int vlan 10&lt;br /&gt;
 ip address 192.168.10.10 255.255.255.0&lt;br /&gt;
 ip default-gateway 192.168.10.1&lt;br /&gt;
 exit&lt;br /&gt;
 &lt;br /&gt;
 ! &#039;&#039;&#039;Ping the device from itself to ensure the interface has come alive.&#039;&#039;&#039;&lt;br /&gt;
 ping 192.168.10.10&lt;br /&gt;
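 ! &#039;&#039;&#039;You can also verify SSH from your admin workstation now, e.g. ssh AdminUser@192.168.10.10, before writing the config.&#039;&#039;&#039;&lt;br /&gt;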
 &lt;br /&gt;
 ! &#039;&#039;&#039;Write to memory.&#039;&#039;&#039;&lt;br /&gt;
 wr&lt;/div&gt;</summary>
		<author><name>Maeve</name></author>
	</entry>
	<entry>
		<id>https://www.rosemarknetworks.com/wiki/index.php?title=Category:Catalyst_Tutorials&amp;diff=98</id>
		<title>Category:Catalyst Tutorials</title>
		<link rel="alternate" type="text/html" href="https://www.rosemarknetworks.com/wiki/index.php?title=Category:Catalyst_Tutorials&amp;diff=98"/>
		<updated>2024-10-10T16:25:55Z</updated>

		<summary type="html">&lt;p&gt;Maeve: Created page with &amp;quot;Below is a collection of related configuration snippets for Catalyst switches that we&amp;#039;ve collected for certain purposes over the years. We&amp;#039;ve generalized and anonymized them to be readily available as templates for your needs.  == General templates ==  * Cisco Catalyst Template&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Below is a collection of related configuration snippets for Catalyst switches that we&#039;ve collected for certain purposes over the years. We&#039;ve generalized and anonymized them to be readily available as templates for your needs.&lt;br /&gt;
&lt;br /&gt;
== General templates ==&lt;br /&gt;
&lt;br /&gt;
* [[Cisco Catalyst Template]]&lt;/div&gt;</summary>
		<author><name>Maeve</name></author>
	</entry>
	<entry>
		<id>https://www.rosemarknetworks.com/wiki/index.php?title=Main_Page&amp;diff=97</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://www.rosemarknetworks.com/wiki/index.php?title=Main_Page&amp;diff=97"/>
		<updated>2024-10-10T16:17:45Z</updated>

		<summary type="html">&lt;p&gt;Maeve: /* Categories */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Welcome to the Rosemark Networks Consultants Wiki.&lt;br /&gt;
&lt;br /&gt;
== Categories ==&lt;br /&gt;
&lt;br /&gt;
* [[:Category:Windows Tutorials|Windows Tutorials]]&lt;br /&gt;
* [[:Category:Linux Tutorials|Linux Tutorials]]&lt;br /&gt;
* [[:Category:Software Tutorials|Software Tutorials]]&lt;br /&gt;
* [[:Category:Networking Tutorials|Networking Tutorials]]&lt;/div&gt;</summary>
		<author><name>Maeve</name></author>
	</entry>
</feed>