Migrating Proxmox Hypervisor’s Boot Volume ZFS Mirror To New (Smaller) Disks

Background

I have a three-server Proxmox cluster in my lab, where I cheaped out on the OS drives. I bought 1TB Silicon Power SSDs because they were cheap, and I have generally had no issues using these off-brand SSDs as boot drives. I figured they were just OS drives anyway, so a ZFS mirror would be sufficient redundancy and would be fine for my use case.

Unfortunately, these cheap SSDs don’t seem to play nice with ZFS. They are constantly getting kicked out of the ZFS pool for no apparent reason, across all three hosts. I have seen similar reports of compatibility issues with a variety of SSD models, and it seems to be an unresolvable issue: either I use different SSDs, or I stop using ZFS. As a result, I decided to replace them with some Crucial MX500 1TB SSDs and simply copy my ZFS volume over to those. I thought it would be simple enough to replace each mirror member in the zpool and resilver.

However, upon buying those, I realized that they were not quite the same size. For some reason, my cheap Silicon Power SSDs were actually 1.02TB in size, according to their SMART hardware information. Therefore, it was not possible to clone the entire disk in a straightforward way, such as with dd or Clonezilla, because the destination disks were very slightly smaller than the source disks.

I didn’t want to do a fresh reinstall, since I already had many VMs on this cluster; not only that, but it is a hyperconverged Ceph cluster, so there would have been a lot of configuration to rebuild. I didn’t want to have to rebuild the whole Ceph storage volume or figure out how to get a fresh install to join back to my cluster. I was looking for an easy solution.

In order to get all of the steps tested and documented, I created a VM with two 100GB virtual disks and installed Proxmox (inside the VM) in order to give myself a test environment. I then proceeded to add two 50GB virtual disks and started experimenting.

Obviously, no one would ever install Proxmox this way for any reason other than testing, but for the purposes of the steps being taken here, whether the machine is bare metal or not doesn’t matter.

I’m creating this write-up partly for my own reference when I complete this on my real hypervisors, so the screenshots and references in this article are from those small virtual disks, since the article was written from my VM proof of concept. I did, however, complete the same steps successfully on my real hypervisors afterwards.

This guide may not be comprehensive, and may not be the best methods to do this process, but this is what I was able to synthesize from the disjointed posts and guides I found online, and these steps worked reliably for me. As always, make sure to have backups before you begin messing around with your production operating systems.

Assumptions and Environment

For the purposes of this guide, the partition layout is the default one created when you install Proxmox from the Proxmox 7 ISO with a ZFS mirror onto two disks. This creates three partitions: a bios_grub partition, a boot\ESP partition, and the actual ZFS data partition. The only partition actually mounted on the hypervisor during normal operation is the ZFS one, but the other two are required for boot.

While performing all of the work outlined in this guide, I booted to the Proxmox 7 install ISO, hit Install Proxmox so the installer would boot (but never ran the installer, of course), then hit CTRL+ALT+F3 to summon a TTY and did the work from the console. I figured this was a quick and dirty way to get a roughly similar environment with the ZFS components already installed and ready to go.

I did create a test virtual machine in my virtual Proxmox environment to make sure that any VMs stored on the ZFS volume would be copied over; however, this was not critical for my production run, since all of my VM disks are on my Ceph storage.

If you want to do this process yourself, make sure that your new disks have sufficient disk space to accommodate the actual disk usage on your ZFS volume.
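
A quick way to sanity-check that, assuming the default pool name of “rpool”, is to look at the pool’s actual allocation before you begin:

# zpool list -o name,size,allocated,free rpool
# zfs list -o space rpool

As long as the allocated space fits comfortably within the ZFS partition you will create on the new disks, you should be fine.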

Copy the Partition Layout and Boot Loader Data

The first part of the process here will be to copy over the partition layout.

Normally, I would copy partition tables from one disk to another using the “sfdisk” command; however, I found that this did not work due to the differently sized disks. So, it was necessary to do the work manually.

The Proxmox install CD does not have Parted installed, but it is simple enough to install from the console.

# apt update
# apt install parted -y

Once Parted is installed, we can review the current partition layout. In my environment, my original install disks were /dev/sda and /dev/sdb. The new (half-sized) disks are /dev/sdc and /dev/sdd.

Here, you can see the partition layout which was created by the installer. We will need to manually copy this layout to the new disks, along with the flags.
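
If you are following along without the screenshots, the layout can be printed non-interactively with parted (my original disks were /dev/sda and /dev/sdb, so substitute your own device names):

# parted /dev/sda unit MiB print

Take note of the start and end of each partition, along with the flags, since this is exactly what we will recreate on the new disks.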

I went ahead and manually created the same partitions on /dev/sdc using Parted.

You can see the full sequence of commands in the screenshot below; essentially, I simply opened the device with:

# parted /dev/sdc

Then I initialized a GPT partition table on the device.

(parted) mklabel gpt

Then I used the mkpart command (following the prompts on the screen) to create the partitions.

(parted) mkpart

For the first two partitions, I copied the start and end positions exactly.

Note that when making the last partition (the actual ZFS data partition), I used the capacity of the disk as shown by parted for my “end” point of the partition.

I ignored the partition alignment errors, as I did want to keep the partitions exactly the same.
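
As a rough illustration only, the equivalent non-interactive commands look something like the following. The exact start and end values must be copied from your own source disk’s layout (the numbers below are placeholders), and the end of the last partition should be the new disk’s capacity as reported by parted. On a GPT disk, “primary” here is just used as a partition name and has no special meaning.

(parted) mkpart primary 0.02MiB 1.00MiB
(parted) mkpart primary 1.00MiB 513MiB
(parted) mkpart primary 513MiB 51200MiB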

Now, we have a matching partition table, but still need to set the flags. Also, partition 1 shows an ext2 filesystem which shouldn’t be there (and isn’t really there), but this will be resolved later.

Next, we need to set the partition flags.

(parted) toggle 1 bios_grub
(parted) toggle 2 esp
(parted) toggle 2 boot

I found that after doing this, the flags on partition 2 didn’t show up until I ran “toggle 2 boot” a second time.

(parted) toggle 2 boot

Perhaps it was already toggled on by default (but not shown that way?), nonetheless, this did the trick.

Now, we have a solid partition table on /dev/sdc. We can repeat these steps on /dev/sdd, or if you want to make life simpler, we can simply copy the layout over with sfdisk.

Sfdisk is not installed on the Proxmox installer ISO either, but it’s easy enough to install; it is part of the “fdisk” package.

# apt install fdisk -y

With it now installed, we can simply copy the partition layout with Sfdisk.

# sfdisk -d /dev/sdc | sfdisk -f /dev/sdd

As you can see above, we now have the identical partitions on /dev/sdd. We couldn’t do this from the original disks because they were a different size.

Next, we still need to copy over our boot loader data to partitions 1 and 2, as they currently have no data on them. I will use DD to copy the data from partitions 1 and 2 on my old disks /dev/sda and /dev/sdb to the same partitions on /dev/sdc and /dev/sdd (my new disks).

# dd if=/dev/sda1 of=/dev/sdc1 status=progress
# dd if=/dev/sda2 of=/dev/sdc2 status=progress
# dd if=/dev/sdb1 of=/dev/sdd1 status=progress
# dd if=/dev/sdb2 of=/dev/sdd2 status=progress

As you can see, with this data copied over, the filesystem residue in Parted is now gone as well.

Create a New ZFS Mirror, And Copy The Data

We’ve now copied over our boot loader data and partition layout, but we still do not have a ZFS volume to boot off of, nor any of our data.

We are still booted to the Proxmox installer CD, accessing the console on TTY3.

As you can see, currently no ZFS volume is live. We can simply import the existing one, which by default is called “rpool”.

If you aren’t sure yours is called “rpool” and you want to verify that, you can follow this process.

# zpool import

This will show you your available pools to be imported. As you can see below, mine is named “rpool”. You will need to import the pool with the -f option, as it was previously used on another system so the mount needs to be forced.

# zpool import rpool -f

The above screenshot should clarify the process, and why the -f flag is needed.

Now that our pool is mounted, we should see our filesystems when we run:

# zfs list

In my case, we see rpool/ROOT/pve-1, which is our root filesystem for the Proxmox install. We can also see some rpool/data volumes for my test virtual machines. Yours may differ, but likely only by showing different VM volumes, etc., which is not important for our next steps.

Now we need to create a new ZFS pool on our new disks to house this data. I will name the pool “pve” so that I don’t have to deal with two pools named “rpool” (we can rename it later).

You can see in my earlier zpool status screenshot that Proxmox by default uses the disk IDs for the ZFS configuration, rather than direct device paths like /dev/sdc. I will find my available paths and their corresponding device mappings by running:

# ls -la /dev/disk/by-id/

I want to create a ZFS mirror (RAID 1) like what I had before, but named “pve” instead of “rpool”, so the command to do that will be:

# zpool create pve mirror /dev/disk/by-id/scsi-QEMU_QEMU_HARDDISK_drive-scsi2-part3 /dev/disk/by-id/scsi-QEMU_QEMU_HARDDISK_drive-scsi3-part3

You can see in the above screenshot that these are the same as /dev/sdc3 and /dev/sdd3, our ZFS data partitions we created in Parted.

Now, you can see in “zpool status” that our old and new pools both exist.

However, the new pool does not have any data on it yet. In order to copy the data, I will use ZFS snapshots.

We will first create a recursive snapshot of our existing pool “rpool”, called “migration”:

# zfs snapshot -r rpool@migration

Next, we will copy the data to our new volume “pve”.

# zfs send -R rpool@migration | zfs receive -F -v pve

As you can see in this screenshot, you need to use the -F with the zfs receive so that it will overwrite the existing (empty) volume.

You can now confirm through “zfs list” that our data shows up on both volumes.

At this point in time, to ensure data integrity, I decided to shut down the machine and remove the two old drives. This way, we know that we won’t make a mistake and delete our original copy of the data, and we won’t have to deal with two ZFS pools by the same name. Once the disks are removed, I rebooted back into the Proxmox installation ISO again and went back to TTY3 to continue working. Perhaps this is overly cautious, but I think it is likely the best practice here.

Now that we are booted again, we can confirm that the only ZFS pool that remains is the newly created “pve” one by running “zpool import”. Note that running “zpool import” with no arguments only lists the pools available for import; it does not actually import or mount anything.

Now, we want to rename the pool back to “rpool”, so that the pool name the system expects does not change, as a mismatch here could have consequences (the boot configuration references the pool by name).

To do that, we will simply import the pve volume and rename it to rpool, then export it again.

First, let’s import it and check that it is now named rpool.

# zpool import pve rpool -f
# zpool status

As you can see, it is, so we’ll simply export it now.

# zpool export rpool

No screenshot needed here since there was no output. Now we should be done with the data manipulation part.

Reinstall the Boot Loader

Since our disk IDs have changed, in order to ensure that our next boot and subsequent ones will be successful, we can reinstall the boot loader.

Note that since I removed the original disks, the new disks are now /dev/sda and /dev/sdb.

# zpool import rpool
# zfs set mountpoint=/mnt rpool/ROOT/pve-1
# mount -t proc proc /mnt/proc
# mount -t sysfs sys /mnt/sys
# mount -o bind /dev /mnt/dev
# mount -o bind /run /mnt/run
# chroot /mnt/
# update-grub
# grub-install.real /dev/sda
# grub-install.real /dev/sdb
# exit

Now we can go ahead and reboot, attempting to boot off of one of our new disks.

First Reboot

On the first reboot, we are going to run into the familiar error stating that the ZFS pool was previously used on another system. This will cause our system to drop to an (initramfs) prompt.

It’s no problem, we can simply run “zpool import rpool -f” from this prompt, then reboot our system again.

(initramfs) zpool import rpool -f

On the next boot, we might receive an error because we changed the root mountpoint earlier while using our chroot environment. Again, no worries; we can simply change it back and reboot again.

(initramfs) zfs set mountpoint=/ rpool/ROOT/pve-1

Following this, we should be all done! Our system has now booted normally, and should continue to do so.

If we explore the web UI, we can see that our VMs still exist, and our new 50GB storage is live on the local ZFS volume.

We have successfully migrated our Proxmox ZFS installation to a new smaller disk. 🙂

OPNsense Performance Tuning for Multi-Gigabit Internet

Recently, I decided to begin the process of retiring my Ubiquiti EdgeRouter Infinity, for a number of reasons, including the fact that I don’t have a spare and the availability and pricing of these routers has only gotten worse with each passing year. I wanted to replace this setup with something that could be more easily swapped in the event of a failure, and having been a former PFSense (and even former Monowall) user years ago, I decided to give OPNsense a try.

I ordered some equipment which provided a good compromise between enterprise grade, lots of PCIe slots, cost, and power efficiency. I ended up building a system with an E5-2650L v3 processor and 64GB of RAM. I decided to start by installing Proxmox, allowing me to make this into a hub for network services in the future rather than just a router. After all, I have a Proxmox cluster in my server rack, Proxmox VMs are easy to back up and restore, and even inside of virtual machines, I have always found multi-gig networking to be highly performant. This all changed when I installed OPNsense.

Earlier this year, my Internet was upgraded to 6Gbps (7Gbps aggregate between my two hand-offs). This was actually another factor in my decision to go back to using a computer as a router: there are rumors of upgrades to 10Gbps and beyond in the pipeline, and I want to be prepared in the future with a system that will allow me to swap in any network hardware I want.

I’d assumed that modern router software like this should have no problem handling multi-gigabit connectivity, especially on such a powerful system (I mean I built an E5 server…), but after installing OPNsense in my Proxmox VM and trying to use it on my super fast connection, I was instantly disappointed. Out of the box, the best I could do was 2-3Gbps (about half of my speed).

Through the course of my testing, I realized that even testing with iperf from my OPNsense VM to other computers on my local network, the speeds were just as bad. So why was OPNsense only capable of using about 25% of a 10Gbps network connection? I spent several days combing through articles and forum threads trying to determine just that, and now I am compiling my findings for future reference. Hopefully some of you reading this will now save some time.

I did eventually solve my throughput issues, and I’m back to my full connection speed.

Ruling out hardware issues…

I know from my other hypervisor builds that Proxmox is more than capable of maxing out a 10Gbps line rate with virtual machines… and my new hypervisor was equipped with Intel X520-DA2 cards, which I know have given me no issues in the past.

Just to rule out any issues with this hardware I’d assembled, I created a Debian 11 VM attached to the same virtual interfaces and did some iperf testing. I found that the Debian VM had no problems performing as expected out of the box, giving me about 9.6Gbps on my iperf testing on my LAN.

Proxmox virtual networking issues in OPNsense\FreeBSD?

Throughout the course of my research, I found out to my dismay that FreeBSD seemed to have a history of performance issues when it comes to virtual network adapters – not just Proxmox, but VMWare as well.

Some sources seemed to suggest that VirtIO had major driver issues in FreeBSD 11 or 12 and I should be using E1000. Some sources seemed to suggest that VirtIO drivers should be fixed in the release I was using (which was based on FreeBSD 13).

I tested each virtual network adapter type offered in the Proxmox interface: VirtIO, E1000, Realtek RTL8139, and VMWare vmxnet3.

Out of the box with no performance tuning, VirtIO actually performed the best for me by far. None of the other network adapter types were even able to achieve 1Gbps. VirtIO was giving me about 2.5Gbps. So, I decided to proceed under the assumption that VirtIO was the right thing to use, and maybe I just needed to do some additional tuning.

Throughout the course of my testing, I also tested using the “host” CPU type versus KVM64. To my great shock, KVM64 actually seemed to perform better, so I decided to leave this default in place. I did add the AES flag (because I am doing a lot of VPN stuff on this router, so I might as well), and I did decide to add the NUMA flag, although I don’t think this added any performance boost.

OPNsense Interface Settings, hardware offload good or bad?

It seems like the general consensus is, somewhat counterintuitively, that you should not enable Hardware TSO or Hardware LRO on a firewall appliance.

I tried each one of these interface settings individually, and occasionally I saw some performance gains (Hardware LRO gave me a noticeable performance boost), but some of the settings also tremendously damaged performance. The network was so slow with Hardware VLAN filtering turned on that I couldn’t even access the web UI reliably. I had to manually edit /conf/config.xml from the console to get back into the firewall.

I experienced some very strange issues with the hardware offloading. In some situations, the hardware offloading would help the LAN side perform significantly better, but the performance on the WAN side would take a nosedive. (I’m talking, 8Gbps iperf to the LAN, coinciding with less than 1Mbps of Internet throughput).

As a result of all these strange results, I later decided that the right move was to leave all of this hardware offloading turned off. In the end, I was able to achieve the above performance without any of it enabled.

OPNsense\FreeBSD, inefficient default sysctl tunables?

My journey into deeper sysctl tuning on FreeBSD began with this 11-page forum thread from 2020, started by someone who seemed to be having the same problem as me. Other users were weighing in, echoing my experiences, all equally confused as to how OPNsense could be performing so poorly, with mostly disinterested responses from the few staff who weighed in on the topic.

It was through the forums that I stumbled on this very popular and well respected guide for FreeBSD network performance tuning. I combed over all of the writing in this guide, ignoring all of the ZFS stuff and DDoS mitigation stuff, focusing on the aspects of the write-up that aimed to improve network performance.

After making these adjustments, I did see a notable improvement: I was now able to achieve about 4-5Gbps through the OPNsense firewall! But my full Internet speed was still slightly eluding me, and I knew there had to be more that I could do to improve the performance.

I ended up reading through several other posts and discussions, such as this thread on Github, this thread on the OPNsense forum about receive side scaling, the performance tuning guide for PFsense, a similar FreeBSD based firewall solution from which OPNsense was forked, a very outdated thread from 2011 about a similar issue on PFsense, and a 2 year old Reddit thread on /r/OPNsenseFirewall about the same issue.

Each resource I read through listed one or two other tunables which seemed to be the silver bullet for them. I kept changing things one at a time and rebooting my firewall. I didn’t keep very good track of which changes made an impact and which didn’t, because as I read what each one did, I generally agreed that “yeah, increasing this seems like a good idea,” and decided to keep even modifications that didn’t seem to make a noticeable performance improvement.

Perhaps you are in a position where you want to do more testing and narrow down which sysctl values matter for your particular setup, but I offer this as my known working configuration that resolved the speed issues for me, and which I am satisfied with. I have other projects to move on to and have spent more than enough time on this firewall; it’s time to accept my performance gains and move on.

Configuration changes I decided to keep in my “known good” configuration.

If you haven’t enjoyed my rambling journey above of how I got here, then this is the part of this guide you’re looking for. Below are all of the configuration changes I decided to keep on my production firewall, the configuration which yielded the above speed test exceeding 6Gbps.

If you’re doing what I’m doing, sitting with a default OPNsense installation inside of a Proxmox virtual machine, here’s everything to change to get to the destination I arrived at.

Proxmox Virtual Machine Hardware Settings – Machine Type

I read conflicting information online about whether q35 or i440fx performed better with OPNsense. In the end, I ended up sticking with the default i440fx. I didn’t notice any huge performance swing one way or another.

Proxmox Virtual Machine Hardware Settings – CPU

  • Leave the CPU type as “KVM64” (default). This seemed to provide the best performance in my testing.
  • I matched the total core count with my physical hypervisor CPU, since this will be primarily a router and I want the router to have the ability to use the full CPU.
  • I checked “Enable NUMA” (but I don’t think this improved performance any).
  • I enabled the AES CPU flag, with the hope that it might improve my VPN performance, but I didn’t test if it did. I know it shouldn’t hurt.

Proxmox Virtual Machine Hardware Settings – Network Adapters

  • Disable the Firewall checkbox. There is no need for Proxmox to do any firewall processing, we’re going to do all our firewall work on OPNsense anyway.
  • Use the VirtIO network device type. This provided the best performance in my testing.
  • Set the Multiqueue setting to 8. Currently, 8 is the maximum value for this setting. This provides additional parallel processing for the network adapter.
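
For reference, the CPU and network choices above end up looking roughly like this in the VM’s configuration file on the Proxmox host (/etc/pve/qemu-server/<vmid>.conf); the core count, MAC addresses, and bridge names below are placeholders for whatever your own setup uses:

cores: 24
cpu: kvm64,flags=+aes
numa: 1
net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0,firewall=0,queues=8
net1: virtio=AA:BB:CC:DD:EE:FE,bridge=vmbr1,firewall=0,queues=8

Setting the same options through the Hardware tab in the web UI accomplishes the same thing.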

OPNsense Interface Settings

The first and most obvious settings to tinker with were the ones in Interfaces > Settings in OPNsense. As I wrote above, these provided mixed results for me and were not very predictable. In the end, after extensively testing each option one by one, I decided to leave all the hardware offloading turned off.

OPNsense Tunables (sysctl)

After testing a number of tunable options (some in bulk, and some individually), I arrived at this combination of settings which worked well for me.

These can probably be adjusted in configuration files if you like, but I did it through the web UI. After changing these values, it’s a good idea to reboot the firewall entirely, as some of the values are applied only at boot time.
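
If you want to double-check what you are starting from, or verify after a reboot that everything actually applied, the current values can be read from an OPNsense shell with sysctl, for example:

# sysctl net.isr.maxthreads net.isr.bindthreads net.isr.dispatch
# sysctl net.inet.rss.enabled net.inet.rss.bits kern.ipc.maxsockbuf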

The best overall guide which got me the most information was this FreeBSD Network Performance Tuning guide I linked above. I’m not going to go into as much detail here, and not everything set below was from this guide, but it was a great jumping off point for me.

hw.ibrs_disable=1

This is a CPU related tunable to mitigate a Spectre V2 vulnerability. A lot of people suggested that disabling it was helpful for performance.

net.isr.maxthreads=-1

This uncaps the number of CPUs which can be used for netisr processing. By default, this aspect of the network stack on FreeBSD seems to be single threaded. A value of -1 resulted in 24 threads spawning for me (one for each of my 24 CPUs).

net.isr.bindthreads = 1

This binds each of the netisr threads to one CPU core, which makes sense to do since we are launching one per core. I’d guess that doing this reduces thread migration between cores and keeps caches warm.

net.isr.dispatch = deferred

Per this Github thread I linked earlier, it seems that changing this tunable to “deferred” or “hybrid” is required to make the other net.isr tunables do anything meaningful. So, I set mine to deferred.
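
If you want to confirm that the netisr changes took effect after rebooting, FreeBSD exposes the live netisr configuration (thread count, CPU bindings, and dispatch policy) through netstat:

# netstat -Q

The configuration summary at the top of that output should reflect the tunables above.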

net.inet.rss.enabled = 1

I decided to enable Receive Side Scaling. This didn’t come from the tuning guide either, it came from an OPNsense forum thread I linked earlier. In a nutshell, RSS is another feature to improve parallel processing of network traffic on multi-core systems.

net.inet.rss.bits = 6

This is a receive side scaling tunable from the same forum thread. I set it to 6 as it seems the optimal value is CPU cores divided by 4. I have 24 cores, so 24/4=6. Your value should be based on the number of CPU cores on your OPNsense virtual machine.

kern.ipc.maxsockbuf = 614400000

I grabbed this from the FreeBSD Network Performance Tuning Guide; it was their recommended value for 100Gbps network adapters. The default value that shipped with my OPNsense installation corresponded to the guide’s value for 2Gbps networking. Since I may want to expand in the future, I decided to increase this to this absurd level so I don’t have to deal with it again. You may want to set a more rational value; 16777216 should work for 10Gbps. The guide linked above goes into what this value does, and the other values it affects, in great detail.

net.inet.tcp.recvbuf_max=4194304
net.inet.tcp.recvspace=65536
net.inet.tcp.sendbuf_inc=65536
net.inet.tcp.sendbuf_max=4194304
net.inet.tcp.sendspace=65536

These TCP buffer settings were taken from the FreeBSD Network Performance Tuning Guide. I didn’t look into them too deeply, but they were all equivalent to or larger than the buffers that shipped with OPNsense, so I rolled with it. The guide explains more about how these values can help improve performance.

net.inet.tcp.soreceive_stream = 1

Also from the tuning guide, this enables an optimized kernel socket interface which can significantly reduce the CPU impact of fast TCP streams.

net.pf.source_nodes_hashsize = 1048576

I grabbed this from the tuning guide as well, it likely didn’t help with my problem today, but it may prevent problems in the future. This increases the PF firewall hash table size to allow more connections in the table before performance deteriorates.

net.inet.tcp.mssdflt=1240
net.inet.tcp.abc_l_var=52

I grabbed these values from the tuning guide; they adjust the default TCP segment size and how quickly the congestion window can grow per ACK, with the aim of moving data in fewer, fuller packets. There are slightly more aggressive values you can set here too, but it seems these are the safer values, so I went with them.

net.inet.tcp.minmss = 536

Another tuning guide value which I didn’t look into too heavily, but it configures the minimum segment size, or smallest payload of data which a single IPv4 TCP segment will agree to transmit, aimed at improving efficiency.

kern.random.fortuna.minpoolsize=128

This isn’t related to the network at all, but it was a value recommended by the tuning guide to improve the RNG entropy pool. Since I am doing VPN stuff on this system, I figure more RNG is better.

net.isr.defaultqlimit=2048

This value originated from the Reddit thread I linked earlier. It was added during the last batch of tunables that finally pushed me over the edge in terms of performance, and I decided I’d leave it even if it wasn’t doing anything meaningful. Increasing queue-related values seems to have been a theme of this tuning overall.

Good enough for now!

With all of the above changes, I achieved my desired performance with OPNsense, running in a KVM virtual machine on Proxmox.

I’d imagine that these same concepts would apply well to any FreeBSD-based router solution, such as PFsense, and some could even apply to other FreeBSD-based solutions common in homelab environments, such as FreeNAS. However, it appears from my research that OPNsense is uniquely limited in its performance (more limited than stock FreeBSD 13). So, your mileage may vary.

The above is not intended to be a comprehensive guide. I write it both for my future reference and in the hope that some of the many folks out there who seem to be having these same performance issues, stumbling around in the dark looking for answers like I was, might try the settings in my guide and achieve the same great outcome.

Reddit Deplatforms Popular Microsoft Software Swap Subreddit

As large social media platforms like Reddit grow subject to ever increasing scrutiny over the content posted by their users, it is becoming increasingly common for these platforms to remove entire communities over concerns about the content being shared and discussed there.

Last week, Reddit struck down /r/MicrosoftSoftwareSwap, a popular subreddit for buying and selling digital licenses to Microsoft products. The move coincided with the banning of a similar subreddit /r/MicrosoftServices and a number of the largest sellers on both subreddits.

/r/MicrosoftSoftwareSwap as archived by The Internet Archive in August 2022.

Officially, the reason for the ban, as declared by Reddit, is that the subreddits were being used for spam.

What you will see today if you try to visit /r/MicrosoftSoftwareSwap

At first glance, this explanation might hold up under a strict interpretation of Reddit policies, since these subreddits frequently contained duplicates of the same post in order to keep fresh, active sellers at the top of the listings.

However, the timing of the concurrent bans, after the subreddit operated undisturbed for nearly a decade, and conveniently leading up to Reddit’s anticipated stock IPO, leads some displaced community members to conclude that the move is really about distancing the platform from the activity on the subreddit, in order to avoid attracting unwanted attention from Microsoft, whose software’s EULA does not typically authorize this type of resale. The discounted prices offered by sellers on /r/MicrosoftSoftwareSwap likely also cut into Microsoft’s retail software business.

Sellers, and even operators of these subreddits, were unaware of the bans and unable to prepare for their sudden impact. One former /r/MicrosoftSoftwareSwap moderator, going by the Reddit username s5ean, had been selling product keys on the subreddit for almost a decade, accumulating a large base of both repeat and new customers. His livelihood was suddenly shut down on Reddit’s whim, leaving him scrambling to reconnect with past customers as his own Reddit accounts were being shut down as well.

S5ean, not deterred, created a Discord community, which he’s dubbed The New Microsoft Software Swap, where he hopes to gather together what’s left of his customer base as he figures out how to move forward from here. He plans to keep on selling his discounted licenses for Microsoft software as he works on finding new ways to reach customers.

If you are a displaced user from the /r/MicrosoftSoftwareSwap community who is looking to get in touch with S5ean, his Discord can be found here: https://discord.gg/vUa8ngTGpy

S5ean’s post from September 2022 reflecting his software prices – as would be live on /r/MicrosoftSoftwareSwap if it were still around.

Comcast Upgrades Gigabit Pro from 3Gbps to 6Gbps!

Seems like I just wrote one of these articles, doesn’t it? It was only back in September 2021 that my Gigabit Pro connection received an upgrade from 2Gbps to 3Gbps; now, Comcast has officially bumped all Gigabit Pro customers up to 6Gbps!

Once I began hearing more rumors on Reddit, I investigated and found that Comcast’s support articles had been quietly updated to indicate that all Gigabit Pro customers would receive an upgrade to 6Gbps.

Since late last year, I have enjoyed speeds of “3Gbps+1Gbps” on my two hand-offs. The service provides a fiber hand-off (a 10Gbps interface, rate limited to 3Gbps at the time) and a 1Gbps Ethernet hand-off. Both circuits could be used simultaneously, as I demonstrated briefly at the end of my installation article.

With the upgrade in place, I now seem to have a “6Gbps+1Gbps” symmetrical service!

It’s genuinely hard to find a speedtest.net server that can do this connection justice. This test was run to Comcast’s own speedtest server.

I wanted to see if the 10% over-provisioning typical of Comcast’s fiber in the past was still in effect. Speedtest.net seemed to top out for me at around 6200Mbps, but I was occasionally seeing some higher numbers on my router.

I set out to test the connection more thoroughly, which was not an easy task. I used multi-gig servers from 4 different datacenter locations to complete some iperf tests.

I was having some trouble getting the inbound and outbound maxed out at the same time, so for the sake of the most accurate test of “what is the rate limiting set to?”, I decided to test both directions of traffic flow separately.
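
For anyone wanting to reproduce this kind of test, the general shape of it with iperf3 is below. The hostnames and source addresses are placeholders; -P runs parallel streams, -B pins each session to a particular local uplink (assuming each hand-off has its own address and route), and -R reverses the direction so receive can be measured separately from transmit:

# iperf3 -c server1.example.net -P 8 -t 60 -B 192.0.2.10
# iperf3 -c server2.example.net -P 8 -t 60 -B 192.0.2.11
# iperf3 -c server1.example.net -P 8 -t 60 -B 192.0.2.10 -R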

Transmitting at full speed on both lines does indicate a 10% over-provisioning, and the ability to use both uplinks simultaneously for a total throughput of around 7Gbps.
Throughput during the test is a bit inconsistent, but this is likely just due to how difficult this test is to run. A lot of factors on the outside Internet make achieving this speed all the time difficult.
Receiving at full speed on both lines also works as expected, although for me, the receive test was a bit less consistent and harder to keep the bandwidth at expected levels. This is again most likely due to external factors, not how Comcast is rate limiting the connection.
You can see here that throughput during this test is a lot more inconsistent, even on the gigabit uplink, which is likely indicating some congestion coming from my test servers more than anything else.

Interestingly, in the tests you can see that the upload (TX) test was much more stable and consistent than the download (RX) test. It’s difficult to say for sure where the fluctuations are coming from, but the servers I was testing with or the networks they are connected to may have some capacity constraints. Datacenter traffic tends to be heavy outbound, so there is likely more free available inbound bandwidth at datacenters (for me to transmit to them), resulting in a smoother test.

I may re-conduct these tests in the future if I gain access to faster servers to test with. For the time being, the results are still very impressive, and I have to commend Comcast for pushing the envelope with this, even though I suspect the only reason they did it was to take a petty jab at AT&T, who just rolled out a 5Gbps residential fiber plan themselves.

In any case, I look forward to never maxing this connection out except when trying to stress test and speed test! 😉 Even the 3Gbps connection was basically impossible to fully utilize given the limitations of the servers you’re connecting to.

6Gbps is even pushing the limitations of a lot of common computer hardware (like SATA III disk drives which have a 6Gbps interface speed). It’s crazy to think that my Internet connection is capable of transferring data faster than most consumers currently are able to read\write data to a SATA SSD.

It will definitely be a long time before these speeds are needed by any residential customer, but I’m still quite pleased to have them now!

Porting a Landline Number to Google Voice, via TracFone

In 2021, an AT&T landline price increase finally gave my grandma the push she needed to get rid of her landline phone service. I have been encouraging my family to use their Google Voice lines for many years, but some of them have still held on to their expensive landline service. A hangup for my grandma was not wanting to lose her primary phone number, so I set off to see about porting it to Google Voice.

Google Voice, for some reason, will not port a landline number directly. I came across some other articles online, like this one from The Cord Cutting Report, which I found helpful throughout my process. So, I thought I’d document my experience for the future reference of myself and anyone else who may find it useful.

TracFone as a number porting intermediary.

Numerous guides I’d read online suggested using a prepaid cell phone carrier, such as Ting, to port the number away from the landline service. This process would transition the phone number into a mobile number, which Google Voice is willing and able to port over to their service.

I decided to use TracFone, since I have used them personally for many years, as has most of my family. As a result of my long time as a customer, I had an old Android phone lying around from a previous TracFone service term and was able to get it activated on the network again – so I did not even have to buy a burner phone; I already had one!

I’ve read that others have experienced problems with their number porting due to information not matching up on the two accounts. It seems that things like the billing zip code on the credit card used to order the service, and other factors can impact the process. So, to attempt to minimize any such issues, I added my old phone to my grandma’s TracFone account, and we ordered the service using her credit card. This way, all of the billing and account information will match her AT&T landline service.

I began the process on a Thursday night, knowing it may take 1-2 business days, not knowing whether it would be done before the weekend or not. But, there was no rush here. I’d been informed through my research that it was a good idea to wait at least 1 full week between porting the number to TracFone and trying to port it again, so with this intention in mind, nobody was in any hurry.

The Porting Process – Initial Issues

Having personally never ported a number before, I wasn’t totally familiar with the process and what it would entail. I began the process on TracFone’s website, which asked me for the AT&T account number and password\PIN for the account I wanted to port the number from.

The account number could be found on the top right corner of the AT&T landline bill.

Example bill layout from AT&T’s website, note #6 Account Number.

It seemed completely clear, although there was some awkward spacing in the information. So I can provide a workable example with a non-real account number, let’s say her phone number was (111) 222-3456, and the account PIN was 7654.

The information appearing as “Account Number” on the AT&T bill was formatted like this: 111 222-3456 765 4

I tried all possible combinations of these numbers on TracFone’s website, but it said everything I entered was an invalid account number. I came across this Reddit thread from someone else who had the same issue. The thread seemed to conclude that the account number would be 13 digits, so from our example, I assumed the correct account number would be 1112223456765. The thread noted it was necessary to select “Other” as the carrier, not “AT&T” because the account number was still not a “valid” AT&T account number, according to the form.

I figured I could call TracFone support, but I also figured there was no harm in trying this, and if it didn’t work I could fall back to calling support. So, I went ahead and did as suggested, entering the account number as 1112223456765 and the PIN as 7654.

The Wait Begins

After entering the information, TracFone appeared to accept the order and begin the porting process. They provided the following guidance:

The transfer process is in progress and should take a few hours to complete. In some cases, it could take as long as 2 business days. It may take longer for landline phone numbers. During this time, your current phone will still work.

After your CURRENT phone stops working:
1. Call *22890 from your NEW phone to initiate the Activation process.
2. When the activation is complete, make a call.
– If the activation or call fails, wait a few minutes and call * again.

For kicks, I tried to activate the phone several hours later, and unsurprisingly, the activation was not successful. The wait was now on to see how many days \ business days the transfer would take, and if it would even be successful at all.

Unexpected Issues

Unfortunately, there was a hangup in my plan. TracFone called us the next day to notify us that the phone we were trying to activate was too old and no longer supported by the network. I was somewhat surprised, considering I still know at least one person actively using the same model phone on TracFone’s network, but since the temporary phone I had was a 3G phone, they are probably no longer activating 3G phones, as that network is in the process of being decommissioned by major carriers like AT&T.

I really didn’t want to buy a phone for the sole purpose of using for a week, so I started asking around, and was fortunate to find a friend with an old 4G unlocked phone they were no longer using. I was able to borrow their phone and use a very cheap TracFone BYOD SIM card to attempt to activate it on TracFone’s network.

The process of activating the second phone was difficult the first time: the first agent I called was unable to find the service card we had paid for on the previous phone, and told me that the account number I had from AT&T was invalid for the number port. (This turned out not to be true – the information outlined above IS correct.)

I called back a second time the next day, after calling AT&T to verify the account number, and got a different TracFone rep who was able to overcome all of the problems the first rep couldn’t. Finally, the transfer process was underway! I was told it would take 2 days to complete the porting process, since it was a landline. It sounded like they would have to actually communicate with AT&T via email and send over documentation, it would not be a quick automated process (maybe this is why Google refuses to do it!).

Successful TracFone Port!

After two days, as promised, the number was ported and I was able to activate the phone on TracFone’s network. I made a test call to myself and confirmed it came from the old landline number.

Various articles and forum threads I’d read online suggested that I should wait 1 week before trying to port the number again. So, I decided to wait a little extra. The port completed on a Wednesday, and I decided to do the second port the following Sunday, when I could go over to Grandma’s house and finish setting up her Obi200 box.

Porting to Google

The port to Google was a fairly straightforward process overall, but a lot of the information I’d found online concerning how to port out of TracFone had been incorrect.

I began here, on this Google KB post which provides the link to the page to start the number porting process.

The first step was to enter the phone number and check portability. This was where we failed before, since Google did not support porting the AT&T landline. This time, success! Google said the number was eligible for porting. On the next screen, they provided the porting terms and details.

The next step was to fill out contact information for the phone number being ported (the billing address and details on the TracFone account). Note that the carrier shows up as Verizon, I was expecting this, since I used the Verizon compatible BYOD SIM.

This was where the slight issues began. Every piece of information I found online, including this article from Best Cellular, stated that the “Account Number” for a TracFone BYOD device was the last 15 digits of the SIM card. As for the 4-digit PIN, the information I found online varied from “TracFone doesn’t use PINs, so you should enter 0000” to “TracFone doesn’t use PINs, so you should enter any 4 digit number”. I decided to enter a 4-digit security PIN I knew existed on the account.

Unfortunately, the port request was immediately rejected due to a bad account number. I was provided the option to correct the account number, but didn’t know what the right information was. I really was not looking forward to calling TracFone and asking them how to port away the number they’d just ported in a week earlier, but I decided to give it a shot.

Fortunately, the agent I got at the TracFone number porting department was very helpful, and didn’t seem to care at all about what I was doing. They initially thought I was right to be using the last 15 digits of the SIM card too, but then found another account number in the system that I didn’t have. They provided it to me, and I entered it in the Google form, along with the 4 digit account security PIN. This time, the port was accepted!

I was disappointed that it was going to take another 24 hours; I had thought the port would be fairly immediate now that it was a mobile number. I’m not sure if the delay was normal, was because the number had been a landline, or was because I made a mistake on the form the first time. Either way, Google was very timely and completed the port request exactly when they said they would. The next day, we received an email notifying us it was complete!

Closing Thoughts

This was my first time ever porting a phone number in any capacity, and I was trying to do something somewhat unsupported… All in all, this process was as easy as I could have asked for. There were certainly some snags along the way, which were learning experiences for me, but I would definitely do this again in the future if someone else in my family asks for assistance getting rid of their landline. Hopefully someone else out there will also find this information useful if you are trying to do the same process, or wondering what will be involved and how hard it will be. Happy porting!