Category Archives: How-to’s

Tutorials and step-by-step instructions.

Migrating Proxmox Hypervisor’s Boot Volume ZFS Mirror To New (Smaller) Disks

Background

I have a three-server Proxmox cluster in my lab, where I cheaped out on the OS drives. I bought 1TB Silicon Power SSDs because they were cheap, and I have generally had no issues using these off-brand SSDs as boot drives. I figured they were just OS drives anyway, so a ZFS mirror would be sufficient redundancy and would be fine for my use case.

Unfortunately, these cheap SSDs don’t seem to play nice with ZFS. They were constantly getting kicked out of the ZFS pool for no apparent reason, across all three hosts. I have seen similar reports of compatibility issues with a variety of SSD models, and it seems to be an unresolvable problem: either use different SSDs or stop using ZFS. As a result, I decided to replace them with some Crucial MX500 1TB SSDs and simply copy my ZFS volume over to those. I thought it would be simple enough to replace each mirror member in the zpool and let it rebuild.

However, upon buying those, I realized that they were not quite the same size. For some reason, my cheap Silicon Power SSDs were actually 1.02TB in size, according to their SMART hardware information. Therefore, it was not possible to clone the entire disk in a straightforward way, such as with dd or Clonezilla, because the destination disks were very slightly smaller than the source disks.

I didn’t want to do a fresh reinstall, since I already had many VMs on this cluster. Not only that, but it is a hyperconverged Ceph cluster, so there would have been a lot of configuration to rebuild. I didn’t want to have to rebuild the whole Ceph storage volume or figure out how to get a fresh install to join back to my cluster. I was looking for an easy solution.

In order to get all of the steps tested and documented, I created a VM with two 100GB virtual disks and installed Proxmox (inside the VM) in order to give myself a test environment. I then proceeded to add two 50GB virtual disks and started experimenting.

Obviously, no one would ever install Proxmox this way for any reason other than testing, but for the purposes of the steps being taken here, whether the machine is bare metal or not doesn’t matter.

I’m creating this write-up partially for myself, for when I complete this on my real hypervisors. The screenshots and references in this article therefore refer to those small virtual disks, as the article was written from my VM proof of concept. However, I did complete the same steps successfully on my real hypervisors afterwards.

This guide may not be comprehensive, and may not describe the best method for this process, but it is what I was able to synthesize from the disjointed posts and guides I found online, and these steps worked reliably for me. As always, make sure you have backups before you begin messing around with your production operating systems.

Assumptions and Environment

For the purposes of this guide, the partition layout is the default one created when you install Proxmox from the Proxmox 7 ISO with a ZFS mirror onto two disks. This creates three partitions: a BIOS boot (bios_grub) partition, a boot/ESP partition, and the actual ZFS data partition. The only partition actually mounted on the hypervisor during normal operation is the ZFS one, but the other two are required for boot.

While performing all of the work outlined in this guide, I booted to the Proxmox 7 install ISO, hit Install Proxmox so the installer would boot (but never ran the installer, of course), then hit CTRL+ALT+F3 to summon a TTY and did the work from the console. I figured this was a quick and dirty way to get a roughly similar environment with the ZFS components already installed and ready to go.

I did create a test virtual machine in my virtual Proxmox environment to make sure that any VMs stored on the ZFS volume would be copied over; however, this was not critical for my production run, since all of my VM disks are on my Ceph storage.

If you want to do this process yourself, make sure that your new disks have sufficient disk space to accommodate the actual disk usage on your ZFS volume.
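
A quick way to check that is zpool list on the source system. This is just a sanity check, assuming your pool still has the default name “rpool”; the ALLOC column shows how much data the new disks will actually need to hold:

# zpool list rpool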

Copy the Partition Layout and Boot Loader Data

The first part of the process here will be to copy over the partition layout.

Normally, I would copy partition tables from one disk to another using the “sfdisk” command, however I found that this did not work due to the differently sized disks. So, it was necessary to do the work manually.

The Proxmox install CD does not have Parted installed, but it is simple enough to install from the console.

# apt update
# apt install parted -y

Once Parted is installed, we can review the current partition layout. In my environment, my original install disks were /dev/sda and /dev/sdb. The new (half-sized) disks are /dev/sdc and /dev/sdd.

Here, you can see the partition layout which was created by the installer. We will need to manually copy this layout to the new disks, along with the flags.
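
If you’re following along without the screenshot, you can print the layout yourself with Parted. Printing in sectors makes it easy to copy the start and end positions exactly later (here /dev/sda is one of my original disks):

# parted /dev/sda unit s print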

I went ahead and manually created the same partitions on /dev/sdc using Parted.

You can see the full sequence of commands in the below screenshot. Essentially, I simply opened the device with:

# parted /dev/sdc

Then I initialized a GPT partition table on the device.

(parted) mklabel gpt

Then I used the mkpart command (following the prompts on the screen) to create the partitions.

(parted) mkpart

For the first two partitions, I copied the start and end positions exactly.

Note that when making the last partition (the actual ZFS data partition), I used the capacity of the disk as shown by parted for my “end” point of the partition.

I ignored the partition alignment errors, as I did want to keep the partitions exactly the same.
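
For reference, the same partition creation can also be done non-interactively. This is only a sketch: the sector numbers below are illustrative, and you should substitute the exact start and end values printed from your own source disk. The -a none option suppresses the same alignment warnings I chose to ignore:

# parted -s -a none /dev/sdc mklabel gpt
# parted -s -a none /dev/sdc mkpart primary 34s 2047s
# parted -s -a none /dev/sdc mkpart primary fat32 2048s 1050623s
# parted -s -a none /dev/sdc mkpart primary 1050624s 100%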

Now, we have a matching partition table, but still need to set the flags. Also, partition 1 shows an ext2 filesystem which shouldn’t be there (and isn’t really there), but this will be resolved later.

Next, we need to set the partition flags.

(parted) toggle 1 bios_grub
(parted) toggle 2 esp
(parted) toggle 2 boot

I found that after doing this, the flags on partition 2 didn’t show up until I ran “toggle 2 boot” a second time.

(parted) toggle 2 boot

Perhaps it was already toggled on by default (but not shown that way?). Nonetheless, this did the trick.

Now, we have a solid partition table on /dev/sdc. We can repeat these steps on /dev/sdd, or if you want to make life simpler, we can simply copy the layout over with sfdisk.

Sfdisk is not installed on the Proxmox installer ISO either, but it’s easy enough to install; it is part of the “fdisk” package.

# apt install fdisk -y

With it now installed, we can simply copy the partition layout with sfdisk.

# sfdisk -d /dev/sdc | sfdisk -f /dev/sdd

As you can see above, we now have the identical partitions on /dev/sdd. We couldn’t do this from the original disks because they were a different size.

Next, we still need to copy over our boot loader data to partitions 1 and 2, as they currently have no data on them. I will use dd to copy the data from partitions 1 and 2 on my old disks /dev/sda and /dev/sdb to the same partitions on /dev/sdc and /dev/sdd (my new disks).

# dd if=/dev/sda1 of=/dev/sdc1 status=progress
# dd if=/dev/sda2 of=/dev/sdc2 status=progress
# dd if=/dev/sdb1 of=/dev/sdd1 status=progress
# dd if=/dev/sdb2 of=/dev/sdd2 status=progress

As you can see, with this data copied over, the filesystem residue in Parted is now gone as well.
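
Since we created the first two partitions with identical start and end positions, the copies should be bit-for-bit identical. If you want the reassurance, a checksum comparison makes a quick optional sanity check (each pair of sums should match):

# sha256sum /dev/sda1 /dev/sdc1
# sha256sum /dev/sda2 /dev/sdc2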

Create a New ZFS Mirror, And Copy The Data

We’ve now copied over all of our boot loader and partitions, but we still do not have a ZFS volume to boot off of, nor any of our data.

We are still booted to the Proxmox installer CD, accessing the console on TTY3.

As you can see, currently no ZFS volume is live. We can simply import the existing one, which by default is called “rpool”.

If you aren’t sure yours is called “rpool” and you want to verify that, you can follow this process.

# zpool import

This will show you your available pools to be imported. As you can see below, mine is named “rpool”. You will need to import the pool with the -f option, as it was previously used on another system so the mount needs to be forced.

# zpool import rpool -f

The above screenshot should clarify the process, and why the -f flag is needed.

Now that our pool is mounted, we should see our filesystems when we run:

# zfs list

In my case, we see rpool/ROOT/pve-1, which is our root filesystem for the Proxmox install. We can also see some rpool/data volumes for my test virtual machines. Yours may differ, but likely only by showing different VM volumes, etc., which is not important for our next steps.

Now we need to create a new ZFS pool on our new disks to house this data. I will name the pool “pve” so that I don’t have to deal with two pools named “rpool” (we can rename it later).

You can see in my earlier zpool status screenshot that Proxmox by default uses disk IDs for the ZFS configuration, rather than direct device paths like /dev/sdc. I will find my available paths and their corresponding device mappings by running:

# ls -la /dev/disk/by-id/

I want to create a ZFS mirror (RAID 1) like what I had before, but named “pve” instead of “rpool”, so the command to do that will be:

# zpool create pve mirror /dev/disk/by-id/scsi-QEMU_QEMU_HARDDISK_drive-scsi2-part3 /dev/disk/by-id/scsi-QEMU_QEMU_HARDDISK_drive-scsi3-part3

You can see in the above screenshot that these are the same as /dev/sdc3 and /dev/sdd3, our ZFS data partitions we created in Parted.

Now, you can see in “zpool status” that our old and new pools both exist.

However, the new pool does not have any data on it yet. In order to copy the data, I will use ZFS snapshots.

We will first create a snapshot of our existing volume “rpool”, called “migration”:

# zfs snapshot -r rpool@migration

Next, we will copy the data to our new volume “pve”.

# zfs send -R rpool@migration | zfs receive -v -F pve

As you can see in this screenshot, you need to use -F with zfs receive so that it will overwrite the existing (empty) volume.

You can now confirm through “zfs list” that our data shows up on both volumes.

At this point, to ensure data integrity, I decided to shut down the machine and remove the two old drives. This way, we know that we won’t make a mistake and delete our original copy of the data, and we won’t have to deal with two ZFS pools by the same name. Once the disks were removed, I booted back into the Proxmox installation ISO and went back to TTY3 to continue working. Perhaps this is overly cautious, but I think it is likely the best practice here.

Now that we are booted again, we can confirm that the only ZFS volume that remains is the newly created “pve” one by running “zpool import”. Note that this did not actually mount or import any volume, because as before, it thinks it was used by another system.

Now, we want to rename the volume back to “rpool”, so that from the system’s perspective the pool name never changed, as a changed name may have consequences at boot.

To do that, we will simply import the pve volume and rename it to rpool, then export it again.

First, let’s import it and check that it is now named rpool.

# zpool import pve rpool -f
# zpool status

As you can see, it is, so we’ll simply export it now.

# zpool export rpool

No screenshot needed here since there was no output. Now we should be done with the data manipulation part.

Reinstall the Boot Loader

Since our disk IDs have changed, we can reinstall the boot loader to ensure that our next boot, and subsequent ones, will be successful.

Note that since I removed the original disks, the new disks are now /dev/sda and /dev/sdb.

# zpool import rpool
# zfs set mountpoint=/mnt rpool/ROOT/pve-1
# mount -t proc proc /mnt/proc
# mount -t sysfs sys /mnt/sys
# mount -o bind /dev /mnt/dev
# mount -o bind /run /mnt/run
# chroot /mnt/
# update-grub
# grub-install.real /dev/sda
# grub-install.real /dev/sdb
# exit

Now we can go ahead and reboot, attempting to boot off of one of our new disks.

First Reboot

On the first reboot, we are going to run into the familiar error stating that the ZFS pool was previously used on another system. This will cause our system to drop to an (initramfs) prompt.

It’s no problem; we can simply run “zpool import rpool -f” from this prompt, then reboot our system again.

(initramfs) zpool import rpool -f

On the subsequent boot, we might receive an error because we changed the root mountpoint earlier while preparing our chroot environment. Again, no worries; we can simply change it back and reboot again.

(initramfs) zfs set mountpoint=/ rpool/ROOT/pve-1

Following this, we should be all done! Our system has now booted normally, and should continue to do so.

If we explore in the web UI, we can see that our VMs still exist, and our new 50GB storage is live on the local ZFS volume.

We have successfully migrated our Proxmox ZFS installation to a new smaller disk. 🙂

OPNsense Performance Tuning for Multi-Gigabit Internet

Recently, I decided to begin the process of retiring my Ubiquiti EdgeRouter Infinity, for a number of reasons, including the fact that I don’t have a spare, and the availability and pricing of these routers has only gotten worse with each passing year. I wanted to replace this setup with something that could be more easily swapped out in the event of a failure, and having been a pfSense (and even m0n0wall) user years ago, I decided to give OPNsense a try.

I ordered some equipment which provided a good compromise between enterprise grade, lots of PCIe slots, cost, and power efficiency. I ended up building a system with an E5-2650L v3 processor and 64GB of RAM. I decided to start by installing Proxmox, allowing me to make this into a hub for network services in the future rather than just a router. After all, I have a Proxmox cluster in my server rack, Proxmox VMs are easy to back up and restore, and even inside of virtual machines, I have always found multi-gig networking to be highly performant. This all changed when I installed OPNsense.

Earlier this year, my Internet was upgraded to 6Gbps (7Gbps aggregate between my two hand-offs). This was actually another factor in my decision to go back to using a computer as a router. There are rumors of upgrades to 10Gbps and beyond in the pipeline, and I want to be prepared for the future with a system that will allow me to swap in any network hardware I want.

I’d assumed that modern router software like this should have no problem handling multi-gigabit connectivity, especially on such a powerful system (I mean I built an E5 server…), but after installing OPNsense in my Proxmox VM and trying to use it on my super fast connection, I was instantly disappointed. Out of the box, the best I could do was 2-3Gbps (about half of my speed).

Through the course of my testing, I realized that even testing with iperf from my OPNsense VM to other computers on my local network, the speeds were just as bad. So why was OPNsense only capable of using about 25% of a 10Gbps network connection? I spent several days combing through articles and forum threads trying to determine just that, and now I am compiling my findings for future reference. Hopefully some of you reading this will now save some time.

I did eventually solve my throughput issues, and I’m back to my full connection speed.

Ruling out hardware issues…

I know from my other hypervisor builds that Proxmox is more than capable of maxing out a 10Gbps line rate with virtual machines… and my new hypervisor was equipped with Intel X520-DA2 cards, which I know have given me no issues in the past.

Just to rule out any issues with this hardware I’d assembled, I created a Debian 11 VM attached to the same virtual interfaces and did some iperf testing. I found that the Debian VM had no problems performing as expected out of the box, giving me about 9.6Gbps on my iperf testing on my LAN.
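
For reference, my throughput testing was plain iperf runs between two machines, along these lines (the IP address here is hypothetical; -P 4 adds parallel streams, and the same flags work for both iperf and iperf3). On the machine acting as the server:

iperf3 -s

And from the machine under test:

iperf3 -c 192.168.1.50 -P 4 -t 30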

Proxmox virtual networking issues in OPNsense/FreeBSD?

Throughout the course of my research, I found out to my dismay that FreeBSD seemed to have a history of performance issues when it comes to virtual network adapters – not just Proxmox, but VMWare as well.

Some sources seemed to suggest that VirtIO had major driver issues in FreeBSD 11 or 12 and I should be using E1000. Some sources seemed to suggest that VirtIO drivers should be fixed in the release I was using (which was based on FreeBSD 13).

I tested each virtual network adapter type offered in the Proxmox interface: VirtIO, E1000, Realtek RTL8139, and VMWare vmxnet3.

Out of the box with no performance tuning, VirtIO actually performed the best for me by far. None of the other network adapter types were even able to achieve 1Gbps. VirtIO was giving me about 2.5Gbps. So, I decided to proceed under the assumption that VirtIO was the right thing to use, and maybe I just needed to do some additional tuning.

Throughout the course of my testing, I also tested using the “host” CPU type versus KVM64. To my great shock, KVM64 actually seemed to perform better, so I decided to leave this default in place. I did add the AES flag (because I am doing a lot of VPN stuff on my router, so might as well), and I did decide to add the NUMA flag, although I don’t think this added any performance boost.

OPNsense Interface Settings, hardware offload good or bad?

It seems like the general consensus is, somewhat counterintuitively, that you should not enable Hardware TSO or Hardware LRO on a firewall appliance.

I tried each one of these interface settings individually, and occasionally I saw some performance gains (Hardware LRO gave me a noticeable performance boost), but some of the settings also tremendously damaged performance. The network was so slow with Hardware VLAN filtering turned on that I couldn’t even access the web UI reliably. I had to manually edit /conf/config.xml from the console to get back into the firewall.

I experienced some very strange issues with the hardware offloading. In some situations, the hardware offloading would help the LAN side perform significantly better, but the performance on the WAN side would take a nosedive. (I’m talking, 8Gbps iperf to the LAN, coinciding with less than 1Mbps of Internet throughput).

As a result of all these strange results, I later decided that the right move was to leave all of this hardware offloading turned off. In the end, I was able to achieve the above performance without any of it enabled.

OPNsense/FreeBSD, inefficient default sysctl tunables?

My journey into deeper sysctl tuning on FreeBSD began with an 11-page forum thread from 2020, started by someone who seemed to be having the same problem as me. Other users were weighing in, echoing my experiences, all equally confused as to how OPNsense could be performing so poorly, with mostly indifferent responses from the staff who weighed in on the topic.

It was through the forums that I stumbled on this very popular and well respected guide for FreeBSD network performance tuning. I combed over all of the writing in this guide, ignoring all of the ZFS stuff and DDoS mitigation stuff, focusing on the aspects of the write-up that aimed to improve network performance.

After making these adjustments, I did see a notable improvement: I was now able to achieve about 4-5Gbps through the OPNsense firewall! But my full Internet speed was still slightly eluding me, and I knew there had to be more I could do to improve the performance.

I ended up reading through several other posts and discussions, such as this thread on GitHub, this thread on the OPNsense forum about receive side scaling, the performance tuning guide for pfSense (a similar FreeBSD-based firewall solution from which OPNsense was forked), a very outdated thread from 2011 about a similar issue on pfSense, and a 2-year-old Reddit thread on /r/OPNsenseFirewall about the same issue.

Each resource I read through listed one or two other tunables which seemed to be the silver bullet for its author. I kept changing things one at a time and rebooting my firewall. I didn’t keep very good track of which things made an impact and which didn’t, because as I read what each tunable did, I generally agreed that “yeah, increasing this seems like a good idea,” and decided to keep even modifications that didn’t seem to make a noticeable performance improvement.

Perhaps you are in a position where you want to do more testing and narrow down which sysctl values matter for your particular setup, but I offer this as my known working configuration: the one that resolved the speed issues for me, and which I am satisfied with. I have other projects to move on to and have spent more than enough time on this firewall; it’s time to accept my performance gains and move on.

Configuration changes I decided to keep in my “known good” configuration.

If you haven’t enjoyed my rambling journey above of how I got here, then this is the part of this guide you’re looking for. Below are all of the configuration changes I decided to keep on my production firewall, the configuration which yielded the above speed test exceeding 6Gbps.

If you’re doing what I’m doing, you’re sitting with a default OPNsense installation inside of a Proxmox virtual machine. Here’s everything to change to get to the destination I arrived at.

Proxmox Virtual Machine Hardware Settings – Machine Type

I read conflicting information online about whether q35 or i440fx performed better with OPNsense. In the end, I ended up sticking with the default i440fx. I didn’t notice any huge performance swing one way or another.

Proxmox Virtual Machine Hardware Settings – CPU

  • Leave the CPU type as “KVM64” (default). This seemed to provide the best performance in my testing.
  • I matched the total core count with my physical hypervisor CPU, since this will be primarily a router and I want the router to have the ability to use the full CPU.
  • I checked “Enable NUMA” (but I don’t think this improved performance any).
  • I enabled the AES CPU flag, with the hope that it might improve my VPN performance, but I didn’t test if it did. I know it shouldn’t hurt.

Proxmox Virtual Machine Hardware Settings – Network Adapters

  • Disable the Firewall checkbox. There is no need for Proxmox to do any firewall processing, we’re going to do all our firewall work on OPNsense anyway.
  • Use the VirtIO network device type. This provided the best performance in my testing.
  • Set the Multiqueue setting to 8. Currently, 8 is the maximum value for this setting. This provides additional parallel processing for the network adapter.
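
For reference, here’s roughly what the resulting NIC line looks like in the VM’s configuration file (/etc/pve/qemu-server/<vmid>.conf). The MAC address and bridge name below are placeholders for illustration, not values to copy; note that leaving the Firewall checkbox unchecked simply means no firewall=1 flag appears on the line:

net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr0,queues=8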

OPNsense Interface Settings

The first and most obvious settings to tinker with were the ones in Interfaces > Settings in OPNsense. As I wrote above, these provided mixed results for me and were not very predictable. In the end, after extensively testing each option one by one, I decided to leave all the hardware offloading turned off.

OPNsense Tunables (sysctl)

After testing a number of tunable options (some in bulk, and some individually), I arrived at this combination of settings which worked well for me.

These can probably be adjusted in configuration files if you like, but I did it through the web UI. After changing these values, it’s a good idea to reboot the firewall entirely, as some of the values are applied only at boot time.

The best overall guide which got me the most information was this FreeBSD Network Performance Tuning guide I linked above. I’m not going to go into as much detail here, and not everything set below was from this guide, but it was a great jumping off point for me.
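
One sanity check worth doing after that reboot is to read the values back from the OPNsense shell to confirm they actually applied, since several of these are loader tunables that only take effect at boot. For example, for the netisr and RSS values below:

sysctl net.isr.maxthreads net.isr.bindthreads net.isr.dispatch
sysctl net.inet.rss.enabled net.inet.rss.bits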

hw.ibrs_disable=1

This is a CPU related tunable to mitigate a Spectre V2 vulnerability. A lot of people suggested that disabling it was helpful for performance.

net.isr.maxthreads=-1

This uncaps the number of CPUs which can be used for netisr processing. By default, this aspect of the network stack on FreeBSD seems to be single threaded. A value of -1 resulted in 24 threads spawning for me (one for each of my 24 CPUs).

net.isr.bindthreads = 1

This binds each of the ISR threads to 1 CPU core, which makes sense to do since we are launching one per core. I’d guess that doing this will reduce interrupts.

net.isr.dispatch = deferred

Per this GitHub thread I linked earlier, it seems that changing this tunable to “deferred” or “hybrid” is required to make the other net.isr tunables do anything meaningful. So, I set mine to deferred.

net.inet.rss.enabled = 1

I decided to enable Receive Side Scaling. This didn’t come from the tuning guide either, it came from an OPNsense forum thread I linked earlier. In a nutshell, RSS is another feature to improve parallel processing of network traffic on multi-core systems.

net.inet.rss.bits = 6

This is a receive side scaling tunable from the same forum thread. I set it to 6 as it seems the optimal value is CPU cores divided by 4. I have 24 cores, so 24/4=6. Your value should be based on the number of CPU cores on your OPNsense virtual machine.
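
If you’re not sure how many cores your VM presents to OPNsense, you can check from the shell; hw.ncpu is the FreeBSD sysctl for the logical CPU count, so divide its output by 4:

sysctl -n hw.ncpu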

kern.ipc.maxsockbuf = 614400000

I grabbed this from the FreeBSD Network Performance Tuning Guide; this was their recommended value for 100Gbps network adapters. The default value that shipped with my OPNsense installation corresponded to the guide’s value for 2Gbps networking. Since I may want to expand in the future, I decided to increase this to this absurd level so I don’t have to deal with it again. You may want to set a more rational value; 16777216 should work for 10Gbps. The guide linked above goes into what this value does, and the other values it affects, in great detail.

net.inet.tcp.recvbuf_max=4194304
net.inet.tcp.recvspace=65536
net.inet.tcp.sendbuf_inc=65536
net.inet.tcp.sendbuf_max=4194304
net.inet.tcp.sendspace=65536

These TCP buffer settings were taken from the FreeBSD Network Performance Tuning Guide. I didn’t look into them too deeply, but they were all equivalent to or larger than the buffers that shipped with OPNsense, so I rolled with it. The guide explains more about how these values can help improve performance.

net.inet.tcp.soreceive_stream = 1

Also from the tuning guide, this enables an optimized kernel socket interface which can significantly reduce the CPU impact of fast TCP streams.

net.pf.source_nodes_hashsize = 1048576

I grabbed this from the tuning guide as well, it likely didn’t help with my problem today, but it may prevent problems in the future. This increases the PF firewall hash table size to allow more connections in the table before performance deteriorates.

net.inet.tcp.mssdflt=1240
net.inet.tcp.abc_l_var=52

I grabbed these values from the tuning guide, which are intended to improve efficiency while processing IP fragments. There are slightly more aggressive values you can set here too, but these seem to be the safer values, so I went with them.

net.inet.tcp.minmss = 536

Another tuning guide value which I didn’t look into too heavily, but it configures the minimum segment size, or smallest payload of data which a single IPv4 TCP segment will agree to transmit, aimed at improving efficiency.

kern.random.fortuna.minpoolsize=128

This isn’t related to the network at all, but it was a value recommended by the tuning guide to improve the RNG entropy pool. Since I am doing VPN stuff on this system, I figure more RNG is better.

net.isr.defaultqlimit=2048

This value originated from my earlier linked Reddit thread, it was quickly added during the last batch of tunables that finally pushed me over the edge in terms of performance, and I decided I’d leave it even if it wasn’t doing anything meaningful. Increasing queuing values seems to have been a theme of the tuning overall.

Good enough for now!

With all of the above changes, I achieved my desired performance with OPNsense, running in a KVM virtual machine on Proxmox.

I’d imagine that these same concepts would apply well to any FreeBSD-based router solution, such as pfSense, and some could even apply to other FreeBSD-based solutions common in homelab environments, such as FreeNAS. However, it appears from my research that OPNsense is uniquely limited in its performance (more limited than stock FreeBSD 13). So, your mileage may vary.

The above is not intended to be a comprehensive guide. I write it both for my own future reference and in the hope that some of the many folks out there having these same performance issues, forced to stumble around in the dark looking for answers like I was, might try the settings in my guide and achieve the same great outcome.

Porting a Landline Number to Google Voice, via TracFone

In 2021, an AT&T landline price increase finally gave my grandma the push she needed to get rid of her landline phone service. I have been encouraging my family to use their Google Voice lines for many years, but some of them have still held on to their expensive landline service. A hangup for my grandma was not wanting to lose her primary phone number, so I set off to see about porting it to Google Voice.

Google Voice, for some reason, will not port a landline number directly. I came across some other articles online, like this one from The Cord Cutting Report, which I found helpful throughout my process. So, I thought I’d document my experience for the future reference of myself and anyone else who may find it useful.

TracFone as a number porting intermediary.

Numerous guides I’d read online suggested using a prepaid cell phone carrier, such as Ting, to port the number away from the landline service. This process would transition the phone number into a mobile number, which Google Voice is willing and able to port over to their service.

I decided to use TracFone, since I have used them personally for many years, as has most of my family. As a result of my long time as a customer, I had an old Android phone lying around from a previous TracFone service term and was able to get it activated on the network again – so I did not even have to buy a burner phone; I already had one!

I’ve read that others have experienced problems with their number porting due to information not matching up between the two accounts. It seems that things like the billing zip code on the credit card used to order the service and other factors can impact the process. So, to minimize any such issues, I added my old phone to my grandma’s TracFone account, and we ordered the service using her credit card. This way, all of the billing and account information would match her AT&T landline service.

I began the process on a Thursday night, knowing it may take 1-2 business days, not knowing whether it would be done before the weekend or not. But, there was no rush here. I’d been informed through my research that it was a good idea to wait at least 1 full week between porting the number to TracFone and trying to port it again, so with this intention in mind, nobody was in any hurry.

The Porting Process – Initial Issues

Having personally never ported a number before, I wasn’t totally familiar with the process and what it would entail. I began the process on TracFone’s website, which asked me for the AT&T account number and password/PIN for the account I wanted to port the number from.

The account number could be found on the top right corner of the AT&T landline bill.

Example bill layout from AT&T’s website, note #6 Account Number.

It seemed completely clear, although there was some awkward spacing in the information. For the sake of this example so I can provide a workable example with a non-real account number, let’s say her phone number was (111) 222-3456. And let’s say the account PIN was 7654.

The information appearing as “Account Number” on the AT&T bill was formatted like this: 111 222-3456 765 4

I tried all possible combinations of these numbers on TracFone’s website, but it said everything I entered was an invalid account number. I came across this Reddit thread from someone else who had the same issue. The thread seemed to conclude that the account number would be 13 digits, so from our example, I assumed the correct account number would be 1112223456765. The thread noted it was necessary to select “Other” as the carrier, not “AT&T” because the account number was still not a “valid” AT&T account number, according to the form.

I figured I could call TracFone support, but I also figured there was no harm in trying this, and if it didn’t work I could fall back to calling support. So, I went ahead and did as suggested, entering the account number as 1112223456765 and the PIN as 7654.

The Wait Begins

After entering the information, TracFone appeared to accept the order and begin the porting process. They provided the following guidance:

The transfer process is in progress and should take a few hours to complete. In some cases, it could take as long as 2 business days. It may take longer for landline phone numbers. During this time, your current phone will still work.

After your CURRENT phone stops working:
1. Call *22890 from your NEW phone to initiate the Activation process.
2. When the activation is complete, make a call.
– If the activation or call fails, wait a few minutes and call * again.

For kicks, I tried to activate the phone several hours later, and unsurprisingly, the activation was not successful. The wait was now on to see how many days (or business days) the transfer would take, and whether it would be successful at all.

Unexpected Issues

Unfortunately, there was a hangup in my plan. TracFone called us the next day to notify us that the phone we were trying to activate was too old and no longer supported by the network. I was somewhat surprised, considering I still know at least one person actively using the same model phone on TracFone’s network. But since the temporary phone I had was a 3G phone, they are probably no longer activating 3G phones, as that network is in the process of being decommissioned by major carriers like AT&T.

I really didn’t want to buy a phone for the sole purpose of using for a week, so I started asking around, and was fortunate to find a friend with an old 4G unlocked phone they were no longer using. I was able to borrow their phone and use a very cheap TracFone BYOD SIM card to attempt to activate it on TracFone’s network.

The process of activating the second phone was difficult the first time. The first agent I called was unable to find the service card we had paid for on the previous phone, and told me that the account number I had from AT&T was invalid for the number port. (This turned out not to be true – the information outlined above IS correct.)

I called back a second time the next day, after calling AT&T to verify the account number, and got a different TracFone rep who was able to overcome all of the problems the first rep couldn’t. Finally, the transfer process was underway! I was told it would take 2 days to complete the porting process, since it was a landline. It sounded like they would have to actually communicate with AT&T via email and send over documentation, it would not be a quick automated process (maybe this is why Google refuses to do it!).

Successful TracFone Port!

After two days, as promised, the number was ported and I was able to activate the phone on TracFone’s network. I made a test call to myself and confirmed it came from the old landline number.

Various articles and forum threads I’d read online suggested that I should wait 1 week before trying to port the number again. So, I decided to wait a little extra. The port completed on a Wednesday, and I decided to do the second port the following Sunday, when I could go over to Grandma’s house and finish setting up her Obi200 box.

Porting to Google

The port to Google was a fairly straightforward process overall, but a lot of the information I’d found online concerning how to port out of TracFone had been incorrect.

I began here, on this Google KB post which provides the link to the page to start the number porting process.

The first step was to enter the phone number and check portability, this was where we failed before, since Google did not support porting the AT&T landline. This time, success! Google said the number was eligible for porting. Once on the next screen, they provide the following terms and details:

The next step was to fill out contact information for the phone number being ported (the billing address and details on the TracFone account). Note that the carrier shows up as Verizon, I was expecting this, since I used the Verizon compatible BYOD SIM.

This was where the slight issues began. Every piece of information I found online, including this article from Best Cellular, stated that the “Account Number” for a TracFone BYOD device was the last 15 digits of the SIM card. As far as the 4 digit PIN, information I found online varied from “TracFone doesn’t use PINs, so you should enter 0000” to “TracFone doesn’t use PINs, so you should enter any 4 digit number”. I decided to enter a 4 digit security PIN I knew existed on the account.

Unfortunately, the port request was immediately rejected due to a bad account number. I was provided the option to correct the account number, but didn’t know what the right information was. I really was not looking forward to calling TracFone and asking them how to port away the number they’d just ported in a week earlier, but I decided to give it a shot.

Fortunately, the agent I got at the TracFone number porting department was very helpful, and didn’t seem to care at all about what I was doing. They initially thought I was right to be using the last 15 digits of the SIM card too, but then found another account number in the system that I didn’t have. They provided it to me, and I entered it in the Google form, along with the 4 digit account security PIN. This time, the port was accepted!

I was disappointed that it was going to take another 24 hours, I thought the port would be fairly immediate now that it was a mobile number. I’m not sure if the delay was normal, because it was a landline, or because I made a mistake on the form the first time. Either way, Google was very timely and completed the port request exactly when they said they would. The next day, we received an email notifying us it was complete!

Closing Thoughts

This was my first time ever porting a phone number in any capacity, and I was trying to do something somewhat unsupported… All in all, this process was as easy as I could have asked for. There were certainly some snags along the way, which were learning experiences for me, but I would definitely do this again in the future if someone else in my family asks for assistance getting rid of their landline. Hopefully someone else out there will also find this information useful if you are trying to do the same process, or wondering what will be involved and how hard it will be. Happy porting!

Building a More Complete & Full Featured CKEditor5

CKEditor 5 is a WYSIWYG text editor that can be used for a variety of purposes, from creating your own Google Docs type of site to creating your own WordPress knock-off platform.

I recently wanted to use this editor, but was disappointed with the lack of features in the default builds. It doesn’t even have underlining. Evidently the official stance of the developers is that you should build your own and they won’t provide a full build anymore like they did for CKEditor 4.

CKEditor provides some detailed build instructions in their documentation, but as someone who is not a JavaScript developer and has never used Node before, I found the process a bit intimidating at first. So I am writing this guide for my own future reference when I need to update my build, and also to hopefully help someone else in the same situation, by explaining what I feel isn’t well explained for someone who’s completely new to NPM.

The Basics – Starting Your Build

First you will need NPM installed, as well as Yarn. For me on Debian 10, the package name for Yarn was not immediately intuitive, and the command was different from just “yarn”.

On Debian 10 my dependencies were:
# apt-get install npm yarnpkg git

Once these are installed you can simply clone the Git repository. I didn’t plan on keeping this server long term so I’m just doing it the lazy way on to a temporary VM I am going to delete when I have my final build. This isn’t the best way if you are developing your own stuff, but if you are like me and you just want a build that CKEditor won’t provide, you can just use a temp environment on a throwaway VM like I did. I had no desire to junk up my live server or even my desktop with all this NPM stuff I will not likely use again anytime soon.

For the purposes of this project I am starting with a “Classic” editor as that’s closest to what I want.

# git clone -b stable https://github.com/ckeditor/ckeditor5-build-classic.git
# cd ckeditor5-build-classic
# git remote add upstream https://github.com/ckeditor/ckeditor5-build-classic.git

Finding The Plugins You Want

The plugins you want can all be located on this page of the official documentation. I simply went through each option on the sidebar to see which plugins I might want.

Some of these are already included in the build, which you can discern by reading the build file located in src/ckeditor.js on your VM.

For the purposes of my build, I am adding Alignment, Strikethrough, Underline, Subscript, Superscript, Code, Highlight, HorizontalLine, RemoveFormat, Base64UploadAdapter, and ImageResize.

Installing Plugins

To install your desired plugins, there are four steps.

  1. Install the NPM package. The directions for this are provided on the plugin page on the documentation. Here are the NPM installs I ran to install the plugins I wanted:

# npm install --save @ckeditor/ckeditor5-alignment
# npm install --save @ckeditor/ckeditor5-highlight
# npm install --save @ckeditor/ckeditor5-horizontal-line
# npm install --save @ckeditor/ckeditor5-remove-format
# npm install --save @ckeditor/ckeditor5-upload

At a glance, you might notice these packages do not match what I stated I wanted to add above. This is because some plugins contain several features, not all of which must be imported. For example, Base64UploadAdapter is one feature of ckeditor5-upload; there are other features which I haven’t imported, such as SimpleUploadAdapter.

You can discern which features are part of which plugin from the plugin’s documentation page. Each one has a link to a page containing more information about the feature.

  2. Edit src/ckeditor.js to contain an import line for each plugin feature that you wish to import. For the purposes of my build, I added these import lines below the default ones.

import Alignment from '@ckeditor/ckeditor5-alignment/src/alignment';
import Strikethrough from '@ckeditor/ckeditor5-basic-styles/src/strikethrough';
import Underline from '@ckeditor/ckeditor5-basic-styles/src/underline';
import Subscript from '@ckeditor/ckeditor5-basic-styles/src/subscript';
import Superscript from '@ckeditor/ckeditor5-basic-styles/src/superscript';
import Code from '@ckeditor/ckeditor5-basic-styles/src/code';
import Highlight from '@ckeditor/ckeditor5-highlight/src/highlight';
import HorizontalLine from '@ckeditor/ckeditor5-horizontal-line/src/horizontalline';
import RemoveFormat from '@ckeditor/ckeditor5-remove-format/src/removeformat';
import Base64UploadAdapter from '@ckeditor/ckeditor5-upload/src/adapters/base64uploadadapter';
import ImageResize from '@ckeditor/ckeditor5-image/src/imageresize';

As a newbie to NPM, I wasn’t 100% sure how to determine exactly what should go here at first. Since, for example, I ran “npm install --save @ckeditor/ckeditor5-upload”, how do I determine the remainder of the string to import the feature?

The best way I found is to click through to the GitHub page for the plugin, and navigate into the “src” folder. There, you will see .js files, and you simply need to put the path to the .js file, minus the extension.
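
You don’t even have to leave the terminal to do this. Once the NPM package is installed, that same src folder exists under node_modules, so you can simply list it (run from inside the ckeditor5-build-classic directory; using the upload plugin from my build as the example):

# ls node_modules/@ckeditor/ckeditor5-upload/src/
# ls node_modules/@ckeditor/ckeditor5-upload/src/adapters/

The adapters/base64uploadadapter.js file listed there is what becomes the '@ckeditor/ckeditor5-upload/src/adapters/base64uploadadapter' import path.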

  3. Add a line for each plugin in the ClassicEditor.builtinPlugins section of src/ckeditor.js. Unless you do this, the plugin won’t actually be built into your build, which after all is the goal.

Once again, I came across some ambiguity here. Where do the names of the plugins come from, and how can I make sure I have the capitalization correct?

I copied the exact name as displayed on the documentation page for the plugin, and it seemed to work correctly 100% of the time.

Some caution is needed here, as some plugin packages contain dashes in the NPM name while the plugin itself doesn’t. For example, for “@ckeditor/ckeditor5-horizontal-line”, the plugin name is “HorizontalLine”.

On the Horizontal Line page of documentation, it says “See the Horizontal line feature guide and the HorizontalLine plugin documentation.” This is where I was sourcing my exact spellings and it was working reliably.

Here’s what my additional lines looked like:

        Alignment,
        Strikethrough,
        Underline,
        Subscript,
        Superscript,
        Code,
        Highlight,
        HorizontalLine,
        RemoveFormat,
        Base64UploadAdapter,
        ImageResize,

  4. Add your desired plugin to the “toolbar:” section of the ClassicEditor.defaultConfig in src/ckeditor.js.

Once again some ambiguity here. I used the lowercase version of the plugin name from the section above in step 3. This seemed to work 100% of the time.

By the way, you can use the pipe ‘|’, in the toolbar section to add spacers to the toolbar.

I moved some stuff around so here is what my whole toolbar section ended up looking like:

        toolbar: {
                items: [
                        'heading',
                        'removeformat',
                        'horizontalline',
                        '|',
                        'alignment',
                        'bold',
                        'italic',
                        'underline',
                        '|',
                        'strikethrough',
                        'subscript',
                        'superscript',
                        '|',
                        'link',
                        'bulletedList',
                        'numberedList',
                        '|',
                        'highlight',
                        'indent',
                        'outdent',
                        '|',
                        'imageUpload',
                        'mediaEmbed',
                        '|',
                        'code',
                        'blockQuote',
                        'insertTable',
                        '|',
                        'undo',
                        'redo'
                ]
        },

A Note About Some Dumb (In My Opinion) Defaults

CKEditor comes bundled with a few things which I removed for the purposes of my build.

Firstly, it comes bundled with CKFinder and its associated UploadAdapter. As far as I can tell, this does not function without a server-side script that I don’t care to invest time investigating, so I’m removing these from my build.

Additionally I am removing EasyImage because I have no plans to use cloud services.

To remove these items, I simply commented out the imports and the plugin declarations in builtinPlugins in my src/ckeditor.js before finishing my build.

Out of the box, the image upload features of the editor do not work unless you put in some elbow grease. For now I am implementing Base64 image uploading, so I don’t have to mess with a server-side handler and the filesystem permissions issues that can come along with uploading files. We’ll see how this works for my use case long term; I may switch to the Simple Upload Adapter and write a server-side handler in the future.

I personally think CKEditor should just include the Base64 uploader by default, so that the features work out of the box, instead of this CKFinder plugin that doesn’t work without additional dependencies.

Finishing Your Build

Once you have added all of the customizations to your build, you can compile it with the Yarn tool you installed.

Although the official documentation suggests the command is “yarn”, on my Debian 10 system, it was “yarnpkg”.

So to finish my build I ran:

# yarnpkg build

Once it’s finished, the completed file is located in build/ckeditor.js. This file can be used as a drop-in replacement for any other downloadable build direct from CKEditor, and it should contain your new features.

I found I could run this build over and over as I refined my source file and I didn’t have any problems, it just overwrote my build file with a new one.

There ya go! I hope this guide simplifies someone’s project. 🙂

Adaptec 6805T Troubleshooting Experience

I recently started building out my third file server, and picked up some Adaptec 6805T RAID cards on eBay to interface with my multiple drive trays. Having built similar servers and worked with numerous Adaptec product lines in my day-to-day datacenter work, I consider myself a subject matter expert, but this experience seemed interesting enough to write a post about, since the issue had me scratching my head for a bit. Hopefully this will help someone who is in my shoes in the future.

An Unusual Error

One of the cards out of the batch was producing an unusual error.

One or more drives are either missing or not responding.
Please check if the drives are connected and powered on.
<<<< FATAL CONFIGURATION ERROR DETECTED >>>>
<<<< CANNOT CONTINUE BOOT PROCESS >>>>
<< Correct the problem and Reboot the system >>

On the surface, this error makes a lot of sense: it’s simply warning that the hard drives from the previous configuration are gone. Since I bought this card on eBay, and there’s no telling what the previous owner’s configuration was, this makes perfect sense.

What doesn’t make sense is this error message halting the boot process, and the lack of ability to accept this new configuration state. Normally, you would be prompted to accept or reject the new configuration when drives are removed.

What makes even LESS sense is the complete absence of any documentation or other posts online containing the text of this error. My Google searches returned nothing of use. There was no mention of this in any Adaptec resources I could find, and no one on /r/homelab was able to provide a helpful response to my post.

Troubleshooting Efforts

Troubleshooting this problem was difficult, because my computer would not boot with the PCIe card installed. This error message completely halted the boot process, and since PCIe isn’t hot-pluggable, it’s not as if I could turn the computer on and then slide the card in.

I remembered that I have a few server boards which allow me to disable the PCIe option ROMs on a per-port basis, so my hope was that I could disable the oprom on the card, boot into Linux, and then do further troubleshooting with arcconf.

So, I popped the card in and went to enter the BIOS, but of course, thanks to this error, I couldn’t even open the BIOS. The option ROM initialization happened before the BIOS setup opened, locking up my system, so I wasn’t able to proceed further.

I decided to put a working 6805T card in, disable the option ROM, then swap the card for the non-working one and hope that the ROM would stay disabled in the BIOS.

This worked, and I was able to get booted into Linux!

Trial & Error

Poking around arcconf I had a few ideas.
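
Before changing anything, I dumped the controller’s view of the world. These are standard arcconf queries for the adapter, logical device, and physical device information, respectively (controller 1, as in the commands below):

arcconf getconfig 1 AD
arcconf getconfig 1 LD
arcconf getconfig 1 PD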

I tried resetting the controller to factory defaults:
arcconf setconfig 1 default

Curiously, this alone did NOT fix the problem.

Reading the docs, I came across this command, and I was VERY hopeful:
arcconf setbiosparams 1 BIOSHALTONMISSINGDRIVECOUNT <count>

But unfortunately, apparently my controller doesn’t “support” changing this setting?

Setting the BIOS parameters in not supported on this controller.

This was definitely a letdown, as this seemed like the setting I wanted to change.

What Ended Up Working

I decided, since the firmware on the controller was outdated, I would update the firmware, try to reset as much as possible, and then test it.

So I downloaded the latest BIOS ROM image from Adaptec and proceeded:

arcconf romupdate 1 as680T01.ufi
arcconf setconfig 1 default
arcconf resetstatisticscounters 1

This worked! I was now able to boot into the Adaptec BIOS with the CTRL+A prompt, after moving the card to a PCIe slot where the option ROM was enabled.

Curiously, even after resetting everything, I STILL had to “accept” the new configuration the first time. But, after this, everything was working normally! 🙂