Binary Impulse

Category Archives: How-to’s

Tutorials and step-by-step instructions.

My UBook X Linux Tablet Adventure and how I learned more than I ever thought I would need to know about accelerometer drivers

April 7, 2024 5:42 PM / 5 Comments / Kirk Schnable

The quest for a full featured Linux experience on a tablet

I recently fulfilled a dream of mine that I let stay a dream for too long – I bought a tablet PC which can run a full desktop operating system on a normal CPU architecture.

From as far back as my first time shopping for a laptop as a grammar school student, I can remember being impressed by the (at the time, thick and clunky) “tablet” laptops which could accept pen/stylus input – the type of computer that would run Windows XP Tablet PC Edition. These were far more expensive than conventional laptops at the time, and way out of my price range.

I spent the years that followed, perhaps over a decade, chasing this idea of mobile computing: from my Palm i705, to my HP iPAQ H1910, to the Cassiopeia running Windows CE 1.0 that I found at a garage sale, to the many iPads I have bought throughout the past decade… but I’d never owned anything that quite scratched that itch for a truly mobile full desktop experience.

I steered clear of Microsoft Surface for a long time, in no small part because of the (in my opinion) disastrous initial launch where many of them contained ARM processors and only pretended to be full featured computers… tricking non-savvy consumers by running something which resembled Windows, but couldn’t run most Windows programs due to the CPU architecture.

Recently, after AT&T took away the grandfathered iPad unlimited data plan I had for nearly a decade, I felt a new freedom to explore what was out there in the tablet market. By now, it seems that a bit more is available out there with full desktop capabilities in a tablet form factor, but it still for some reason seems to be a very niche type of product.

I found the Chuwi UBook X, which had a very similar look and feel as a Microsoft Surface tablet, but seemed to be significantly cheaper for better hardware specs. I was intrigued, especially as I discovered that there is a large community of people out there running Linux on their Microsoft Surface tablets. I decided to give the UBook a try and picked one up.

Linux on the UBook

The Windows experience out of the box on the UBook was very impressive. I found the screen auto rotation and on screen keyboard to be nearly as intuitive as what I was accustomed to from my years of iPad usage. The voice dictation and keyboard autocorrect/predictive text were also fairly solid. Windows 11 on the tablet is definitely a pleasant user experience. Pleasant enough that I am keeping it intact, and my attempts to get Linux working so far have been as a dual boot.

I tested several different distributions and desktop environments, curious to see how some of the ones I was accustomed to running on my desktops handled mobile support. Unsurprisingly, the traditional desktops found in Linux Mint, like MATE and Cinnamon, were basically unusable on a tablet out of the box. I’m sure you could make it work with heavy modification, but lacking even basics like an on-screen keyboard, these desktop UI’s were clearly not going to be my picks on this device.

I tested Ubuntu 22.04, which by default comes with a GNOME desktop interface. I found this to be competent and usable on the tablet, so finally we were getting somewhere! But, despite the interface claiming to have auto screen rotation capabilities, I found that my tablet would never exit landscape mode.

In search of a solution, I found that a lot of folks are running Fedora 39 on this type of hardware. Fedora 39 comes with GNOME out of the box too, but with a better default configuration for my preferences, such as eliminating the Ubuntu sidebar. Fedora is also a much more bleeding edge distribution, offering newer kernels and drivers, which is often helpful when you are running newer or more niche hardware like this.

I installed Fedora 39 and immediately it felt like it belonged on this device, except for one problem… the screen still didn’t rotate!

Troubleshooting Linux automatic screen rotation and accelerometers

So, I am now faced with the reality that a newer kernel and drivers won’t save me. My accelerometer doesn’t seem to work, whether I’m running a 5.x or 6.x kernel, whether I’m on Ubuntu or Fedora… but why? I found reports from other people online saying their accelerometer was working on their UBook, but with some other issues, like the screen rotation being 90 degrees off.

I began to research, consuming content like this video about how to fix flipped screen rotation on Linux, in an effort to learn more about how these accelerometers worked and why mine didn’t work… It seemed like this must be a lower level issue than just GNOME not working, so I wanted to find out how the drivers from these sensors work and how to troubleshoot them.

I first learned of the monitor-sensor command, which seemed to be a lower level way to debug what was happening. Upon running this command, my screen orientation was always reporting “left-up”, regardless of the actual position of the tablet.
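For reference, the tool’s output in my broken state looked roughly like this (reproduced from memory, so treat the exact formatting as approximate):

$ monitor-sensor
    Waiting for iio-sensor-proxy to appear
+++ iio-sensor-proxy appeared
=== Has accelerometer (orientation: left-up)
=== No ambient light sensor

On working hardware, rotating the tablet produces “Accelerometer orientation changed” lines; mine never printed any.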

Still not sure where the disconnect was, I dove deeper. The monitor-sensor output mentions “iio-sensor-proxy”, so what does that do? I learned that the actual accelerometer hardware in Linux is handled by a subsystem called Industrial I/O (IIO).

I further learned that IIO device handles are named like “iio:device#” in /dev/. I soon found my accelerometer at “/dev/iio:device0” and learned that its raw readings are exposed under “/sys/bus/iio/devices/iio:device0/”.

This led to an interesting discovery: the raw values were not always exactly the same on each Linux distribution I tested, but they never changed while being observed. They were not updating when the tablet was physically rotated.
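If you want to check this on your own hardware, the raw axis values can be read directly out of sysfs, and watching them update (or not) is a quick health check. A minimal sketch, assuming your sensor is iio:device0:

# cat /sys/bus/iio/devices/iio:device0/in_accel_x_raw
# cat /sys/bus/iio/devices/iio:device0/in_accel_y_raw
# cat /sys/bus/iio/devices/iio:device0/in_accel_z_raw
# watch -n 0.5 "cat /sys/bus/iio/devices/iio:device0/in_accel_*_raw"

On a working sensor, these numbers jump around as you tilt the device; mine sat frozen.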

I dove deeper into the driver support and found that while my accelerometer was an MXC6655, support for this accelerometer had previously been added to the kernel through the MXC4005 driver. I even dug up the original messages on the mailing list where this took place. I could see in my lsmod output that the MXC4005 driver was, in fact, the active one. This was supposedly a working driver for this model of accelerometer, so why didn’t it work?

I concluded that it must be some kind of driver issue and a fairly low level problem, so armed with my knowledge so far, I decided to open a bug report. Since this seemed not to be specific to any distribution, I went straight to Linux itself and submitted a bug report on the kernel.org bug tracker.

Bug report and further troubleshooting

With my bug report now open, I was very humbled by the immediately helpful responses I received. I’ve had many past experiences in open source communities and forums over the years where my very detailed questions receive no response, or receive incomplete or rude responses which are generally unhelpful.

Working with the kernel.org bug tracker contributors was, in contrast, far more pleasant and helpful than any support experience I can think of, even when working with support departments for major enterprise products.

It was a very humbling experience, and I am definitely glad to know there are so many great people working on the Linux kernel. Everyone I worked with there provided just enough information on what they wanted me to do or test or collect data from, offered more assistance if I needed it, and made me feel like my (honestly, relatively unimportant) bug was something they were going to get to the bottom of.

I will cover some of the basic steps and process that went into the troubleshooting here, but if you want to see the actual dump data and full exchanges, you can read the bug report itself: https://bugzilla.kernel.org/show_bug.cgi?id=218578

The first step was dumping some data from the ACPI tables to a file and extracting the accelerometer related information.

# cat /proc/acpi/dsdt > dsdt.dat
# iasl -d dsdt.dat

The above process creates a second file in your working directory, called “dsdt.dsl”, which is much more human readable. I opened that in a text editor to find and extract the accelerometer related information. (Note: on recent kernels, the DSDT is also exposed at /sys/firmware/acpi/tables/DSDT; substitute that path in the first command if /proc/acpi/dsdt does not exist on your system.)

It was at this point that someone who read my detailed report asked me an interesting question I hadn’t considered. Essentially, they called me out on saying I was “rebooting” a lot, and asked what I meant by “reboot” and if I had tried a cold start from a powered off state.

Admittedly, this was not something which I’d really ever given any prior thought to. I knew that Windows had a “fast startup” feature which caused it to save some state data so it could boot faster next time. But outside of this fact, it hadn’t really occurred to me that on a modern computer there would be any functional difference between powering it off and “rebooting”. Especially when transitioning between operating systems, like rebooting Windows, and selecting Fedora from a GRUB menu.

To my shock, when I began to test things from a cold start, the accelerometer actually started working. Evidently, this had something to do with why I was finding reports online from other people with the same hardware who said it was working. They likely weren’t trying to do anything as fancy as I was, and were not dual booting these tablets, so they weren’t “rebooting” as much as I was. Anyone who was fully powering down their computer between uses would have been blissfully unaware of the issue I was facing.

But, because I was constantly testing things by installing and then rebooting straight out of a live environment, or rebooting straight out of Windows into a live USB, I was very often not cold booting the machine. And when I was cold booting the machine, it was usually into a live USB to install a new Linux distribution to test, with the keyboard attached so I could run the installer… thus, when I did cold boot, I wasn’t trying to rotate the tablet.

With this new information in hand, I continued with the information gathering. I gathered some information to find the DSDT node which is being used on the tablet, under the instruction of the helpful contributor who was now handling my bug ticket.

# ls -l /sys/bus/i2c/devices/
i2c-MXC6655:00 -> ../../../devices/pci0000:00/0000:00:15.0/i2c_designware.0/i2c-0/i2c-MXC6655:00
# cat /sys/bus/i2c/devices/i2c-MXC6655\:00/firmware_node/path 
\_SB_.PCI0.I2C0.ACC0

I also collected output from “acpidump” and “dmesg”, which I simply attached to the bug report ticket for further analysis.

The next instructions I was given were to install i2c-tools, which opened up a whole other world of troubleshooting I didn’t know was possible. I was instructed to take some I2C dumps: one of a good working accelerometer state after a cold boot and another after a warm reboot where the accelerometer didn’t work.

i2cdump -y -f -r 0-15 0 0x15 > i2cdump.txt

Even more interesting, I was given a command to reset the accelerometer manually!

i2cset -y -f 0 0x15 0x01 0x10
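To break down what that command is doing (the meaning of the register and value is my reading of the bug discussion and driver source, so treat the annotation as illustrative):

i2cset -y -f 0 0x15 0x01 0x10
#      -y    assume yes, don't prompt for confirmation
#      -f    force access even though a kernel driver has claimed the device
#      0     the I2C bus number (i2c-0, where the sensor lives)
#      0x15  the accelerometer's address on that bus
#      0x01  the register to write (the chip's reset/interrupt-clear register)
#      0x10  the value to write, which sets the software reset bit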

So, I needed to generate a third dump. After the warm reboot where the accelerometer did not work, and I had generated that dump, I was to reset the accelerometer and take a third dump to show the state afterwards.

To my amazement, the reset command actually brought my dead accelerometer back to life right in front of my eyes, and I took the third i2c dump and attached them all.

After collecting all of this data, the contributor helping me determined that the driver did not always reset the accelerometer chip, and sometimes the chip came up with a register bit set incorrectly, putting the chip in “power-down” mode. The solution seemed simple enough: the driver would be patched to add the reset command to its probe() method, so that when the driver probed the accelerometer, this bit would get reset back to the correct value if it was wrong.

This led me to bring up the possibility of the same thing happening while coming out of a sleep power state (which was also occurring in my testing), and since the same reset was correcting that condition, we would also include a reset of the chip after resume from standby in the patch.

The patch and testing

To my excitement, I soon received a bug ticket update asking me to test an experimental kernel with the new driver included. I had never done anything like this before, but I received these instructions along with a few other links:

Generic Fedora kernel test instructions
=======================================

1. Create a folder to save the kernel rpms into
2. Download the:
    kernel-core-<version>.x86_64.rpm
    kernel-modules-<version>.x86_64.rpm
    kernel-modules-core-<version>.x86_64.rpm
   files from the provided koji link into the created folder
3. From the folder run: "sudo rpm -ivh --oldpackage kernel*.rpm"
4. Remove any manual configuration you've done to work around the bug, such as
   kernel cmdline changes in grub config files, or /etc/modprobe.d/*.conf files
5. Note test kernels are not signed so you need to disable secureboot if you
   have it enabled
6. Reboot, press ESC / F1 / F2 (machine specific) to enter your BIOS setup
   to disable secure-boot if necessary
7. Select the new kernel (should be selected by default) and boot into the new
   kernel
8. Verify you've removed any special kernel commandline options by doing:
   "cat /proc/cmdline" and checking they are not there
9. Test if the bug is now fixed
10. Report the testing-results back in bugzilla

My test kernel was to be kernel-6.7.10-200.bz218578.fc39, and was available to me to download from the Fedora build system. There was also now a mailing list discussion going to work on getting this patch implemented in the kernel code base once I had tested it.

I followed the instructions and installed the experimental kernel, and the problems with my accelerometer seemed to be completely solved! 🙂 The accelerometer worked on cold boots, warm boots, and even when coming off of a sleep state!

At this point, I am not sure which kernel update will include the patched driver, and I know it will take longer for the fix to trickle down to other distributions like Ubuntu and Linux Mint. But hopefully, if someone is having the same problem in the meantime, they will find something in this write-up helpful and know that a patch is on the way… and they can always manually reset the accelerometer if they need to.

Many thanks to the contributors who helped with the kernel bug ticket! It was a pleasure working with you. 🙂

Accelerometer orientation problems

There was still one other issue, unrelated to my bug report ticket: the accelerometer orientation was wrong. When the screen rotated, it was usually upside down relative to the actual orientation of the tablet. Some people online said theirs was off by 90 degrees; mine felt off by 180. I’m not sure if that was a real difference or if people were misremembering their angles. Regardless, I had applied some manual fixes for this during the course of my troubleshooting, and it led me down some completely different rabbit holes.

I had been told to add a rule to /etc/udev/hwdb.d/ containing the following:

sensor:modalias:acpi:MXC6655*:dmi:*:svnCHUWIInnovationAndTechnology*:pnUBookX:*
 ACCEL_MOUNT_MATRIX= 0, -1, 0; -1, 0, 0; 0, 0, 1

My initial attempts to do this were weirdly unsuccessful. Given the other problems I was having with the accelerometer, this was difficult to troubleshoot for a while, since I was somewhat working on both issues concurrently. Two things turned out to be wrong:

  • I did not realize at first that files in /etc/udev/hwdb.d/ must have the “.hwdb” extension. Any other extension, like “.conf”, will not work in this location.
  • When the information was communicated to me, it was poorly formatted by the website where it was posted, and I did not realize that there should be a single SPACE before ACCEL_MOUNT_MATRIX.

So despite the fact I was copying and pasting this straight from someone who said it worked for them, initially it didn’t for me. It did finally work once I figured out the missing space and file extension issues.
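Putting both fixes together, the working override file ended up looking like this (the filename is just my choice; anything with a “.hwdb” extension in that directory works, and the leading space on the matrix line is mandatory):

/etc/udev/hwdb.d/61-sensor-local.hwdb:

sensor:modalias:acpi:MXC6655*:dmi:*:svnCHUWIInnovationAndTechnology*:pnUBookX:*
 ACCEL_MOUNT_MATRIX=0, -1, 0; -1, 0, 0; 0, 0, -1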

I also learned the proper way to reload this configuration without fully rebooting the system. A lot of the older posts out there do not do this through systemd, so I eventually came across this method which seemed to do the trick.

# systemd-hwdb update && udevadm trigger

This configuration reload process was actually instrumental to me figuring out why the configuration I had wasn’t working, as without the space before ACCEL_MOUNT_MATRIX, I received an error: “Property expected, ignoring record with no properties”. This eventually led me to realize my file was formatted improperly, a fact which was obscured to me before when I was just rebooting to try to apply my changes.
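As an extra sanity check after reloading, you can ask udev whether the property was actually applied to the sensor device (adjust the device path to match yours):

# udevadm info /sys/bus/iio/devices/iio:device0 | grep ACCEL_MOUNT_MATRIX

If the rule matched, the matrix shows up as a device property; if the file is malformed or misnamed, nothing is printed.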

Now that I had this working, and my accelerometer was fully functioning as expected, I wanted to get this fixed upstream for other people too, so that one day these UBook tablets will just work correctly on a fresh install of any distribution.

With the help of the bug tracker contributors, I was led to the actual hwdb file on the systemd GitHub: https://github.com/systemd/systemd/blob/main/hwdb.d/60-sensor.hwdb

To my shock, almost this exact line existed on the hwdb list already. The existing code looked like this:

# Chuwi Ubook X (CWI535)
sensor:modalias:acpi:MXC6655*:dmi*:svnCHUWIInnovationAndTechnology*:pnUBookX:*
 ACCEL_MOUNT_MATRIX=0, -1, 0; -1, 0, 0; 0, 0, -1

Maybe it shouldn’t have, but it actually took a side-by-side comparison in a text editor for me to figure out what was wrong with this.

It was missing a colon!

This led to my first GitHub pull request to a real open source project. I submitted a pull request to systemd to fix the missing colon so this hwdb entry would work for others in the future.

Now, the official code in the systemd codebase is correct and matches what is in my hwdb override file:

# Chuwi Ubook X (CWI535)
sensor:modalias:acpi:MXC6655*:dmi:*:svnCHUWIInnovationAndTechnology*:pnUBookX:*
 ACCEL_MOUNT_MATRIX=0, -1, 0; -1, 0, 0; 0, 0, -1

So, one day future UBook owners won’t have to worry about this anymore.

What does ACCEL_MOUNT_MATRIX actually do?

Before I realized why my configuration file was not working (there was no output; I could just see there was no change in behavior), I dove a bit into ACCEL_MOUNT_MATRIX: what it really does and what the values really mean. I want to elaborate on this here, as I didn’t find it to be immediately obvious, and in fact it never seems to be discussed alongside the various values people recommend for this setting.

I found a good authoritative source on this which I wanted to share in this write-up: the iio-sensor-proxy documentation.

The ACCEL_MOUNT_MATRIX is used to correct accelerometer orientation issues, essentially giving you a way to override the values if the accelerometer is not mounted in the orientation the software expects. That much is fairly obvious, but what wasn’t obvious to me at first was exactly how it works.

In my first callback to high school math class in quite a while, I learned that this essentially uses matrix multiplication to adjust the accelerometer values.
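As a concrete illustration using the UBook matrix from above, each property row becomes one row of a 3x3 matrix that multiplies the raw (x, y, z) reading:

            [ 0 -1  0 ]   [x]   [-y]
adjusted =  [-1  0  0 ] * [y] = [-x]
            [ 0  0 -1 ]   [z]   [-z]

In other words, this particular matrix swaps the x and y axes (negating both) and inverts z – exactly the kind of correction needed when the sensor is mounted rotated relative to the screen.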

So, hopefully this additional information will give some context if you are reading this and need to formulate your own mount matrix for an accelerometer nobody has done this work for yet.

Once you wrap your head around it, you can see that it’s just some simple math to flip and invert values as needed.

Where my Linux UBook is now

This has been a truly fascinating journey for me; I feel like I barely scratched the surface with this lengthy write-up. It has provided some valuable insights into how lower level drivers for these types of devices work, and given me some firsthand experience in how Linux kernel bugs are reported and eventually patched. It has also given me a new appreciation for the incredibly intelligent people hanging around the Linux kernel bug tracker with the knowledge to troubleshoot these types of issues at an expert level. If you feel like this write-up was over your head and highly technical, then you understand the level of amazement I have at the technical knowledge that goes into these efforts.

I am now comfortably running Fedora 39 on my UBook, and I will keep the patched kernel installed and available on my GRUB menu so I can continue to use it until the fix lands in the stock Fedora 39 kernel. In the meantime, I will continue to experiment with Linux on my tablet.

There are still a few things about the Linux experience on this tablet I am not fully satisfied with yet, and I feel like this will not be the last technical write-up on my journey to solve my problems.

A few more outstanding things I want to work on:

  • I would like to get a graphical boot loader working with touch screen support, so that I can select my Windows or Linux boot using only the touch screen. I have been experimenting with rEFInd, but their touch support does not currently seem to work with the UBook touch screen. This may be my next “bug report rabbit hole” after I find some time for further testing.
  • I am not entirely satisfied with the Intel graphics performance on Fedora. For most uses it is good enough, but I noticed that 4K and 1440P YouTube videos struggle on my Fedora install, where they play back effortlessly in Windows. It sounds like the Intel graphics drivers may ship with some “not optimized for performance” defaults, even on bleeding edge kernels. My next stop will be threads like this one which make some suggestions I haven’t tried yet.
  • It was brought to my attention that the built-in webcams don’t work on Linux yet, and I have confirmed this to be the case as well. There is an existing StackExchange post about this from another UBook user, so far with no replies. It looks like the webcams are Intel IPU3 CIO2 devices, and they may not work the same way as other webcam video sensors (they may not output normal RGB or YUV data, and it may be a raw Bayer sensor). I might look into this further too.

All in all, these are minor things to me, and I will be using Linux on a more regular basis on this tablet now that I have the accelerometer working.

I’ll likely continue to run Fedora 39. I like the GNOME interface on this tablet, even though I am not very fond of some of their UI decisions when I am running it on a real desktop. I think GNOME is the best “desktop experience” out there for a Linux tablet right now.

KDE Plasma Mobile was also brought to my attention, and I have installed it as well, so I can toggle back and forth between it and GNOME on my login screen. So far, KDE Plasma Mobile seems even more tablet optimized than GNOME. To me, though, the interface almost feels a bit “too” mobile… it feels more like using an Android tablet than a Linux one. GNOME may be better suited to my tastes, but there are a few things I like slightly better on KDE Plasma; for example, I feel I can type more accurately on its on-screen keyboard.

I will continue experimenting and hopefully will have more knowledge to share in another write-up in the future. 🙂

Posted in: How-To's, Musings

Migrating Proxmox Hypervisor’s Boot Volume ZFS Mirror To New (Smaller) Disks

November 27, 2023 7:59 PM / Leave a Comment / Kirk Schnable

Background

I have a three-server Proxmox cluster in my lab, where I cheaped out on the OS drives. I bought 1TB Silicon Power SSD’s because they were cheap, and I have generally had no issues using these off brand SSD’s as boot drives. I figured they were just OS drives anyway, so a ZFS mirror would be sufficient redundancy and would be fine for my use case.

Unfortunately, these cheap SSD’s don’t seem to play nice with ZFS. They are constantly getting kicked out of the ZFS pool for no reason, across all 3 hosts. I have seen similar reports of compatibility issues with a variety of SSD models, and it seems to be an unresolvable issue: either I use different SSD’s or I stop using ZFS. As a result, I decided to replace them with some Crucial MX500 1TB SSD’s and simply copy my ZFS volume over. I thought it should be simple enough to replace each mirror member in the zpool and rebuild.

However, upon buying those, I realized that they were not quite the same size. For some reason, my cheap Silicon Power SSD’s were actually 1.02TB in size, according to SMART hardware information. Therefore, it was not possible to clone the entire disk in a straightforward way, such as with dd or Clonezilla, because the destination disks were very slightly smaller than the source disks.

I didn’t want to do a fresh reinstall, since I already had many VM’s on this cluster, and not only that, but it is a hyperconverged CEPH cluster, so there would have been a lot of configuration to rebuild here. I didn’t want to have to resilver the whole CEPH storage volume or figure out how to get a fresh install to join back to my cluster. I was looking for an easy solution.

In order to get all of the steps tested and documented, I created a VM with two 100GB virtual disks and installed Proxmox (inside the VM) in order to give myself a test environment. I then proceeded to add two 50GB virtual disks and started experimenting.

Obviously, no one would ever install Proxmox this way for any reason other than testing, but for the purposes of the steps being taken here, whether the machine is bare metal or not doesn’t matter.

I’m creating this write-up partially for myself, for when I complete this on my real hypervisors. The screenshots and references in this article therefore refer to those small virtual disks, as the article was written from my VM proof of concept; however, I did complete the same steps successfully on my real hypervisors afterwards.

This guide may not be comprehensive, and may not be the best methods to do this process, but this is what I was able to synthesize from the disjointed posts and guides I found online, and these steps worked reliably for me. As always, make sure to have backups before you begin messing around with your production operating systems.

Assumptions and Environment

For the purposes of this guide, the partition layout is the default one created when you install Proxmox from the Proxmox 7 ISO with a ZFS mirror on two disks. This creates three partitions: a bios_grub partition, a boot/ESP partition, and the actual ZFS data partition. The only partition actually mounted on the hypervisor during normal operation is the ZFS one, but the other two are required for boot.

While performing all of the work outlined in this guide, I booted to the Proxmox 7 install ISO, hit Install Proxmox so the installer would boot (but never ran the installer, of course), then hit CTRL+ALT+F3 to summon a TTY and did the work from the console. I figured this was a quick and dirty way to get a roughly similar environment with the ZFS components already installed and ready to go.

I did create a test virtual machine in my virtual Proxmox environment to make sure that any VM’s stored on the ZFS volume would be copied over; however, this was not critical to my production run, since all of my VM disks are on my CEPH storage.

If you want to do this process yourself, make sure that your new disks have sufficient disk space to accommodate the actual disk usage on your ZFS volume.

Copy the Partition Layout and Boot Loader Data

The first part of the process here will be to copy over the partition layout.

Normally, I would copy partition tables from one disk to another using the “sfdisk” command, however I found that this did not work due to the differently sized disks. So, it was necessary to do the work manually.

The Proxmox install CD does not have Parted installed, but it is simple enough to install from the console.

# apt update
# apt install parted -y

Once Parted is installed, we can review the current partition layout. In my environment, my original install disks were /dev/sda and /dev/sdb. The new (half-sized) disks are /dev/sdc and /dev/sdd.

Here, you can see the partition layout which was created by the installer. We will need to manually copy this layout to the new disks, along with the flags.
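If you want to inspect the layout yourself, open one of the original disks in Parted and print the partition table:

# parted /dev/sda
(parted) print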

I went ahead and manually created the same partitions on /dev/sdc using Parted.

The full sequence of commands is shown below. Essentially, I simply opened the device with:

# parted /dev/sdc

Then I initialized a GPT partition table on the device.

(parted) mklabel gpt

Then I used the mkpart command (following the prompts on the screen) to create the partitions.

(parted) mkpart

For the first two partitions, I copied the start and end positions exactly.

Note that when making the last partition (the actual ZFS data partition), I used the capacity of the disk as shown by parted for my “end” point of the partition.

I ignored the partition alignment errors, as I did want to keep the partitions exactly the same.
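For illustration, the interactive exchange for a single partition looks something like this (these are Parted’s standard prompts; the start/end values here are placeholders, not the real offsets from my disks):

(parted) mkpart
Partition name?  []?
File system type?  [ext2]?
Start? 1049kB
End? 1075MB

Answer the prompts with the values from the original disk’s partition table.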

Now, we have a matching partition table, but still need to set the flags. Also, partition 1 shows an ext2 filesystem which shouldn’t be there (and isn’t really there), but this will be resolved later.

Next, we need to set the partition flags.

(parted) toggle 1 bios_grub
(parted) toggle 2 esp
(parted) toggle 2 boot

I found that after doing this, the flags on partition 2 didn’t show up until I ran “toggle 2 boot” a second time.

(parted) toggle 2 boot

Perhaps it was already toggled on by default (but not shown that way?); nonetheless, this did the trick.

Now, we have a solid partition table on /dev/sdc. We can repeat these steps on /dev/sdd, or if you want to make life simpler, we can simply copy the data with sfdisk.

Sfdisk is not installed on the Proxmox installer ISO either, but it’s easy enough to install; it is part of the “fdisk” package.

# apt install fdisk -y

With it now installed, we can simply copy the partition layout with Sfdisk.

# sfdisk -d /dev/sdc | sfdisk -f /dev/sdd

As you can see above, we now have the identical partitions on /dev/sdd. We couldn’t do this from the original disks because they were a different size.

Next, we still need to copy over our boot loader data to partitions 1 and 2, as they currently have no data on them. I will use DD to copy the data from partitions 1 and 2 on my old disks /dev/sda and /dev/sdb to the same partitions on /dev/sdc and /dev/sdd (my new disks).

# dd if=/dev/sda1 of=/dev/sdc1 status=progress
# dd if=/dev/sda2 of=/dev/sdc2 status=progress
# dd if=/dev/sdb1 of=/dev/sdd1 status=progress
# dd if=/dev/sdb2 of=/dev/sdd2 status=progress

As you can see, with this data copied over, the filesystem residue in Parted is now gone as well.

Create a New ZFS Mirror, And Copy The Data

We’ve now copied over all of our boot loader data and partitions, but we still do not have a ZFS volume to boot off of, nor any of our data.

We are still booted to the Proxmox installer CD, accessing the console on TTY3.

As you can see, currently no ZFS volume is live. We can simply import the existing one, which by default is called “rpool”.

If you aren’t sure yours is called “rpool” and you want to verify that, you can follow this process.

# zpool import

This will show you your available pools to be imported. As you can see below, mine is named “rpool”. You will need to import the pool with the -f option, as it was previously used on another system so the mount needs to be forced.

# zpool import rpool -f

The above screenshot should clarify the process, and why the -f flag is needed.

Now that our pool is mounted, we should see our filesystems when we run:

# zfs list

In my case, we see rpool/ROOT/pve-1, which is the root filesystem for the Proxmox install. We can also see some rpool/data volumes for my test virtual machines. Yours may differ, but likely only by showing different VM volumes, which is not important for our next steps.
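For reference, on a default install the listing looks roughly like this (sizes omitted; the dataset names are the Proxmox defaults):

# zfs list
NAME                      USED  AVAIL  REFER  MOUNTPOINT
rpool                      ...    ...    ...  /rpool
rpool/ROOT                 ...    ...    ...  /rpool/ROOT
rpool/ROOT/pve-1           ...    ...    ...  /
rpool/data                 ...    ...    ...  /rpool/data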

Now we need to create a new ZFS pool on our new disks to house this data. I will name the pool “pve” so that I don’t have to deal with two pools named “rpool” (we can rename it later).

You can see on my earlier zpool status screenshot that Proxmox by default uses the disk ID’s for the ZFS configuration, rather than direct device paths like /dev/sdc. I will find my available paths and their corresponding device mappings by running:

# ls -la /dev/disk/by-id/

I want to create a ZFS mirror (RAID 1) like what I had before, but named “pve” instead of “rpool”, so the command to do that will be:

# zpool create pve mirror /dev/disk/by-id/scsi-QEMU_QEMU_HARDDISK_drive-scsi2-part3 /dev/disk/by-id/scsi-QEMU_QEMU_HARDDISK_drive-scsi3-part3

You can see in the above screenshot that these are the same as /dev/sdc3 and /dev/sdd3, our ZFS data partitions we created in Parted.

Now, you can see in “zpool status” that our old and new pools both exist.

However, the new pool does not have any data on it yet. In order to copy the data, I will use ZFS snapshots.

We will first create a snapshot of our existing volume “rpool”, called “migration”:

# zfs snapshot -r rpool@migration
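Before sending it anywhere, you can confirm the recursive snapshot exists on every dataset:

# zfs list -t snapshot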

Next, we will copy the data to our new volume “pve”.

# zfs send -R rpool@migration | zfs receive -v pve -F

As you can see in this screenshot, you need to use the -F flag with zfs receive so that it will overwrite the existing (empty) volume.

You can now confirm through “zfs list” that our data shows up on both volumes.

At this point in time, to ensure data integrity, I decided to shut down the machine and remove the two old drives. This way, we know that we won’t make a mistake and delete our original copy of the data, and we won’t have to deal with two ZFS pools by the same name. Once the disks are removed, I rebooted back into the Proxmox installation ISO again and went back to TTY3 to continue working. Perhaps this is overly cautious, but I think it is likely the best practice here.

Now that we are booted again, we can confirm that the only ZFS volume that remains is the newly created “pve” one by running “zpool import”. Note that this did not actually mount or import any volume, because as before, it thinks it was used by another system.

Now, we want to rename the volume back to “rpool” so that the pool name does not change, since the boot configuration references it by name (the kernel command line points at rpool/ROOT/pve-1).

To do that, we will simply import the pve volume and rename it to rpool, then export it again.

First, let’s import it and check that it is now named rpool.

# zpool import pve rpool -f
# zpool status

As you can see, it is, so we’ll simply export it now.

# zpool export rpool

No screenshot needed here since there was no output. Now we should be done with the data manipulation part.

Reinstall the Boot Loader

Since our disk ID’s have changed, in order to ensure that our next boot and subsequent ones will be successful, we can reinstall the boot loader.

Note that since I removed the original disks, the new disks are now /dev/sda and /dev/sdb.

# zpool import rpool
# zfs set mountpoint=/mnt rpool/ROOT/pve-1
# mount -t proc proc /mnt/proc
# mount -t sysfs sys /mnt/sys
# mount -o bind /dev /mnt/dev
# mount -o bind /run /mnt/run
# chroot /mnt/
# update-grub
# grub-install.real /dev/sda
# grub-install.real /dev/sdb
# exit

Now we can go ahead and reboot, attempting to boot off of one of our new disks.

First Reboot

On the first reboot, we are going to run into the familiar error stating that the ZFS pool was previously used on another system. This will cause our system to drop to an (initramfs) prompt.

It’s no problem, we can simply run “zpool import rpool -f” from this prompt, then reboot our system again.

(initramfs) zpool import rpool -f

On the next boot, we might receive an error because we changed the root mountpoint earlier while using our chroot environment. Again, no worries; we can simply change it back and reboot again.

(initramfs) zfs set mountpoint=/ rpool/ROOT/pve-1

Following this, we should be all done! Our system has now booted normally, and should continue to do so.

If we explore in the web UI, we can see that our VM’s still exist, and our new 50GB storage is live on the local ZFS volume.

We have successfully migrated our Proxmox ZFS installation to a new smaller disk. 🙂

Posted in: How-To's, Musings / Tagged: cloning, parted, partition, Proxmox, ZFS

OPNsense Performance Tuning for Multi-Gigabit Internet

November 17, 2022 1:21 AM / 28 Comments / Kirk Schnable

Recently, I decided to begin the process of retiring my Ubiquiti EdgeRouter Infinity, for a number of reasons, including the fact that I don’t have a spare and the availability and pricing of these routers has only gotten worse with each passing year. I wanted to replace this setup with something that could be more easily swapped in the event of a failure, and having been a former PFSense (and even former Monowall) user years ago, I decided to give OPNsense a try.

I ordered some equipment which provided a good compromise between enterprise grade, lots of PCIe slots, cost, and power efficiency. I ended up building a system with an E5-2650L v3 processor and 64GB of RAM. I decided to start by installing Proxmox, allowing me to make this into a hub for network services in the future rather than just a router. After all, I have a Proxmox cluster in my server rack, Proxmox VM’s are easy to back up and restore, and even inside of virtual machines, I have always found the multi-gig networking to be highly performant. This all changed when I installed OPNsense.

Earlier this year, my Internet was upgraded to 6Gbps (7Gbps aggregate between my two hand-offs). This was actually another factor in my decision to go back to using a computer as a router: there are rumors of upgrades to 10Gbps and beyond in the pipeline, and I want to be prepared with a system that will allow me to swap in any network hardware I want.

I’d assumed that modern router software like this should have no problem handling multi-gigabit connectivity, especially on such a powerful system (I mean I built an E5 server…), but after installing OPNsense in my Proxmox VM and trying to use it on my super fast connection, I was instantly disappointed. Out of the box, the best I could do was 2-3Gbps (about half of my speed).

Through the course of my testing, I realized that even testing with iperf from my OPNsense VM to other computers on my local network, the speeds were just as bad. So why was OPNsense only capable of using about 25% of a 10Gbps network connection? I spent several days combing through articles and forum threads trying to determine just that, and now I am compiling my findings for future reference. Hopefully some of you reading this will now save some time.

I did eventually solve my throughput issues, and I’m back to my full connection speed.

Ruling out hardware issues…

I know from my other hypervisor builds that Proxmox is more than capable of maxing out a 10Gbps line rate with virtual machines… and my new hypervisor was equipped with Intel X520-DA2 cards, which I know have given me no issues in the past.

Just to rule out any issues with this hardware I’d assembled, I created a Debian 11 VM attached to the same virtual interfaces and did some iperf testing. I found that the Debian VM had no problems performing as expected out of the box, giving me about 9.6Gbps on my iperf testing on my LAN.
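For anyone reproducing this kind of test, it was along these lines (iperf3 syntax shown; the address is a placeholder for the VM under test). On the VM, start a server:

# iperf3 -s

Then, from another machine on the LAN, run a multi-stream test against it:

# iperf3 -c 192.168.1.10 -P 4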

Proxmox virtual networking issues in OPNsense/FreeBSD?

Throughout the course of my research, I found out to my dismay that FreeBSD seemed to have a history of performance issues when it comes to virtual network adapters – not just Proxmox, but VMWare as well.

Some sources seemed to suggest that VirtIO had major driver issues in FreeBSD 11 or 12 and I should be using E1000. Some sources seemed to suggest that VirtIO drivers should be fixed in the release I was using (which was based on FreeBSD 13).

I tested each virtual network adapter type offered in the Proxmox interface: VirtIO, E1000, Realtek RTL8139, and VMWare vmxnet3.

Out of the box with no performance tuning, VirtIO actually performed the best for me by far. None of the other network adapter types were even able to achieve 1Gbps. VirtIO was giving me about 2.5Gbps. So, I decided to proceed under the assumption that VirtIO was the right thing to use, and maybe I just needed to do some additional tuning.

Throughout the course of my testing, I also tested using the “host” CPU type versus KVM64. To my great shock, KVM64 actually seemed to perform better, so I decided to leave this default in place. I did add the AES flag (because I am doing a lot of VPN stuff on this router, so it might as well be there) and I decided to add the NUMA flag, although I don’t think this added any performance boost.

OPNsense Interface Settings, hardware offload good or bad?

It seems like the general consensus, somewhat counter-intuitively, is that you should not enable Hardware TSO or Hardware LRO on a firewall appliance.

I tried each one of these interface settings individually, and occasionally I saw some performance gains (Hardware LRO gave me a noticeable performance boost), but some of the settings also tremendously damaged performance. The network was so slow with Hardware VLAN filtering turned on that I couldn’t even access the web UI reliably. I had to manually edit /conf/config.xml from the console to get back into the firewall.

I experienced some very strange issues with the hardware offloading. In some situations, the hardware offloading would help the LAN side perform significantly better, but the performance on the WAN side would take a nosedive. (I’m talking, 8Gbps iperf to the LAN, coinciding with less than 1Mbps of Internet throughput).

As a result of all these strange results, I later decided that the right move was to leave all of this hardware offloading turned off. In the end, I was able to achieve the above performance without any of it enabled.

OPNsense/FreeBSD, inefficient default sysctl tunables?

My journey into deeper sysctl tuning on FreeBSD began with this 11 page forum thread from 2020 from someone who seemed to be having the same problem as me. Other users were weighing in, echoing my experiences, all equally confused as to how OPNsense could be performing so poorly, with mostly disinterested responses from any staff weighing in on the topic.

It was through the forums that I stumbled on this very popular and well respected guide for FreeBSD network performance tuning. I combed over all of the writing in this guide, ignoring all of the ZFS stuff and DDoS mitigation stuff, focusing on the aspects of the write-up that aimed to improve network performance.

After making these adjustments, I did see a notable improvement, I was now able to achieve about 4-5Gbps through the OPNsense firewall! But, my full Internet speed was still slightly eluding me, and I knew there had to be more that I could do to improve the performance.

I ended up reading through several other posts and discussions, such as this thread on Github, this thread on the OPNsense forum about receive side scaling, the performance tuning guide for PFsense, a similar FreeBSD based firewall solution from which OPNsense was forked, a very outdated thread from 2011 about a similar issue on PFsense, and a 2 year old Reddit thread on /r/OPNsenseFirewall about the same issue.

Each resource I read through listed one or two tunables which seemed to be the silver bullet for its author. I kept changing things one at a time and rebooting my firewall. I didn’t keep very good track of which changes made an impact and which didn’t, because as I read what each one did, I generally agreed that “yeah, increasing this seems like a good idea”, and decided to keep even modifications that didn’t seem to make a noticeable performance improvement.

Perhaps you are in a position where you want to do more testing and narrow down which sysctl values matter for your particular setup, but I offer this as my known working configuration that resolved the speed issues for me, and which I am satisfied with. I have other projects to move on to and have spent more than enough time on this firewall; it’s time to accept my performance gains and move on.

Configuration changes I decided to keep in my “known good” configuration.

If you haven’t enjoyed my rambling journey above of how I got here, then this is the part of this guide you’re looking for. Below are all of the configuration changes I decided to keep on my production firewall, the configuration which yielded the above speed test exceeding 6Gbps.

If you’re doing what I’m doing, you’re sitting with a default OPNsense installation inside of a Proxmox virtual machine; here’s everything to change to get to the destination I arrived at.

Proxmox Virtual Machine Hardware Settings – Machine Type

I read conflicting information online about whether q35 or i440fx performed better with OPNsense. In the end, I ended up sticking with the default i440fx. I didn’t notice any huge performance swing one way or another.

Proxmox Virtual Machine Hardware Settings – CPU

  • Leave the CPU type as “KVM64” (default). This seemed to provide the best performance in my testing.
  • I matched the total core count with my physical hypervisor CPU, since this will be primarily a router and I want the router to have the ability to use the full CPU.
  • I checked “Enable NUMA” (but I don’t think this improved performance any).
  • I enabled the AES CPU flag, with the hope that it might improve my VPN performance, but I didn’t test if it did. I know it shouldn’t hurt.

Proxmox Virtual Machine Hardware Settings – Network Adapters

  • Disable the Firewall checkbox. There is no need for Proxmox to do any firewall processing, we’re going to do all our firewall work on OPNsense anyway.
  • Use the VirtIO network device type. This provided the best performance in my testing.
  • Set the Multiqueue setting to 8. Currently, 8 is the maximum value for this setting. This provides additional parallel processing for the network adapter. (A CLI equivalent is sketched below.)
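For reference, the same network device settings can be applied from the Proxmox host shell (a sketch; the VM ID and bridge name are placeholders):

# qm set 100 --net0 virtio,bridge=vmbr0,firewall=0,queues=8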

OPNsense Interface Settings

The first and most obvious settings to tinker with were the ones in Interfaces > Settings in OPNsense. As I wrote above, these provided mixed results for me and were not very predictable. In the end, after extensively testing each option one by one, I decided to leave all the hardware offloading turned off.

OPNsense Tunables (sysctl)

After testing a number of tunable options (some in bulk, and some individually), I arrived at this combination of settings which worked well for me.

These can probably be adjusted in configuration files if you like, but I did it through the web UI. After changing these values, it’s a good idea to reboot the firewall entirely, as some of the values are applied only at boot time.
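Once applied (and after a reboot), you can confirm the live values from an OPNsense shell using standard FreeBSD sysctl queries, for example:

# sysctl net.isr.maxthreads net.isr.dispatch net.inet.rss.enabled net.inet.rss.bits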

The best overall guide which got me the most information was this FreeBSD Network Performance Tuning guide I linked above. I’m not going to go into as much detail here, and not everything set below was from this guide, but it was a great jumping off point for me.

hw.ibrs_disable=1

This is a CPU related tunable to mitigate a Spectre V2 vulnerability. A lot of people suggested that disabling it was helpful for performance.

net.isr.maxthreads=-1

This uncaps the number of CPU’s which can be used for netisr processing. By default, this aspect of the network stack on FreeBSD seems to be single threaded. A value of -1 resulted in 24 threads spawning for me (one for each of my 24 CPU’s).

net.isr.bindthreads = 1

This binds each of the ISR threads to 1 CPU core, which makes sense to do since we are launching one per core. I’d guess that doing this will reduce interrupts.

net.isr.dispatch = deferred

Per this Github thread I linked earlier, it seems that changing this tunable to “deferred” or “hybrid” is required to make the other net.isr tunables do anything meaningful. So, I set mine to deferred.

net.inet.rss.enabled = 1

I decided to enable Receive Side Scaling. This didn’t come from the tuning guide either, it came from an OPNsense forum thread I linked earlier. In a nutshell, RSS is another feature to improve parallel processing of network traffic on multi-core systems.

net.inet.rss.bits = 6

This is a receive side scaling tunable from the same forum thread. I set it to 6 as it seems the optimal value is CPU cores divided by 4. I have 24 cores, so 24/4=6. Your value should be based on the number of CPU cores on your OPNsense virtual machine.

kern.ipc.maxsockbuf = 614400000

I grabbed this from the FreeBSD Network Performance Tuning Guide; this was their recommended value for 100Gbps network adapters. The default value that shipped with my OPNsense installation corresponded with the guide’s value for 2Gbps networking. Since I may want to expand in the future, I decided to increase this to this absurd level so I don’t have to deal with it again. You may want to set a more rational value; 16777216 should work for 10Gbps. The guide linked above goes into what this value does and the other values it affects in great detail.

net.inet.tcp.recvbuf_max=4194304
net.inet.tcp.recvspace=65536
net.inet.tcp.sendbuf_inc=65536
net.inet.tcp.sendbuf_max=4194304
net.inet.tcp.sendspace=65536

These TCP buffer settings were taken from the FreeBSD Network Performance Tuning Guide, I didn’t look into them too deeply but they were all equivalent or larger buffers than what came shipped on OPNsense, so I rolled with it. The guide explains more about how these values can help improve performance.

net.inet.tcp.soreceive_stream = 1

Also from the tuning guide, this enables an optimized kernel socket interface which can significantly reduce the CPU impact of fast TCP streams.

net.pf.source_nodes_hashsize = 1048576

I grabbed this from the tuning guide as well, it likely didn’t help with my problem today, but it may prevent problems in the future. This increases the PF firewall hash table size to allow more connections in the table before performance deteriorates.

net.inet.tcp.mssdflt=1240
net.inet.tcp.abc_l_var=52

I grabbed these values from the tuning guide which are intended to improve efficiency while processing IP fragments. There are slightly more aggressive values you can set here too, but it seems these are the more safe values, so I went with them.

net.inet.tcp.minmss = 536

Another tuning guide value which I didn’t look into too heavily, but it configures the minimum segment size, or smallest payload of data which a single IPv4 TCP segment will agree to transmit, aimed at improving efficiency.

kern.random.fortuna.minpoolsize=128

This isn’t related to the network at all, but it was a value recommended by the tuning guide to improve the RNG entropy pool. Since I am doing VPN stuff on this system, I figure more RNG is better.

net.isr.defaultqlimit=2048

This value originated from my earlier linked Reddit thread, it was quickly added during the last batch of tunables that finally pushed me over the edge in terms of performance, and I decided I’d leave it even if it wasn’t doing anything meaningful. Increasing queuing values seems to have been a theme of the tuning overall.

Good enough for now!

With all of the above changes, I achieved my desired performance with OPNsense, running in a KVM virtual machine on Proxmox.

I’d imagine that these same concepts would apply well to any FreeBSD based router solution, such as PFsense, and some could even apply to other FreeBSD based solutions common in homelab environments, such as FreeNAS. However, it appears from my research that OPNsense is uniquely limited in its performance (more limited than stock FreeBSD 13). So, your mileage may vary.

The above is not intended to be a comprehensive guide. I write it both for my own future reference and in the hope that some of the many folks out there having these same performance issues, forced to stumble around in the dark looking for answers like I was, might try the settings in my guide and achieve the same great outcome.

Posted in: How-To's, Musings / Tagged: 10Gbps, FreeBSD, Gigabit Pro, Multi-gig, OPNsense

Porting a Landline Number to Google Voice, via TracFone

October 26, 2021 5:53 PM / 1 Comment / Kirk Schnable

In 2021, an AT&T landline price increase finally gave my grandma the push she needed to get rid of her landline phone service. I have been encouraging my family to use their Google Voice lines for many years, but some of them have still held on to their expensive landline service. A hangup for my grandma was not wanting to lose her primary phone number, so I set off to see about porting it to Google Voice.

Google Voice, for some reason, will not port a landline number directly. I came across some other articles online, like this one from The Cord Cutting Report, which I found helpful throughout my process. So, I thought I’d document my experience for the future reference of myself and anyone else who may find it useful.

TracFone as a number porting intermediary.

Numerous guides I’d read online suggested using a prepaid cell phone carrier, such as Ting, to port the number away from the landline service. This process would transition the phone number into a mobile number, which Google Voice is willing and able to port over to their service.

I decided to use TracFone, since I have used them personally for many years, as has most of my family. As a result of my long time as a customer, I had an old Android phone laying around from a previous TracFone service term, and was able to get it activated on the network again – so I did not even have to buy a burner phone, I already had it!

I’ve read that others have experienced problems with their number porting due to information not matching up on the two accounts. It seems that things like the billing zip code on the credit card used to order the service, and other factors can impact the process. So, to attempt to minimize any such issues, I added my old phone to my grandma’s TracFone account, and we ordered the service using her credit card. This way, all of the billing and account information will match her AT&T landline service.

I began the process on a Thursday night, knowing it may take 1-2 business days, not knowing whether it would be done before the weekend or not. But, there was no rush here. I’d been informed through my research that it was a good idea to wait at least 1 full week between porting the number to TracFone and trying to port it again, so with this intention in mind, nobody was in any hurry.

The Porting Process – Initial Issues

Having personally never ported a number before, I wasn’t totally familiar with the process and what it would entail. I began the process on TracFone’s website, which asked me for the AT&T account number and password/PIN for the account I wanted to port the number from.

The account number could be found on the top right corner of the AT&T landline bill.

Example bill layout from AT&T’s website, note #6 Account Number.

It seemed completely clear, although there was some awkward spacing in the information. For the sake of this example so I can provide a workable example with a non-real account number, let’s say her phone number was (111) 222-3456. And let’s say the account PIN was 7654.

The information appearing as “Account Number” on the AT&T bill was formatted like this: 111 222-3456 765 4

I tried all possible combinations of these numbers on TracFone’s website, but it said everything I entered was an invalid account number. I came across this Reddit thread from someone else who had the same issue. The thread seemed to conclude that the account number would be 13 digits, so from our example, I assumed the correct account number would be 1112223456765. The thread noted it was necessary to select “Other” as the carrier, not “AT&T” because the account number was still not a “valid” AT&T account number, according to the form.

I figured I could call TracFone support, but I also figured there was no harm in trying this, and if it didn’t work I could fall back to calling support. So, I went ahead and did as suggested, entering the account number as 1112223456765 and the PIN as 7654.

The Wait Begins

After entering the information, TracFone appeared to accept the order and begin the porting process. They provided the following guidance:

The transfer process is in progress and should take a few hours to complete. In some cases, it could take as long as 2 business days. It may take longer for landline phone numbers. During this time, your current phone will still work.

After your CURRENT phone stops working:
1. Call *22890 from your NEW phone to initiate the Activation process.
2. When the activation is complete, make a call.
– If the activation or call fails, wait a few minutes and call *22890 again.

For kicks, I tried to activate the phone several hours later, and unsurprisingly, the activation was not successful. The wait was now on to see how many days\business days the transfer would take, and whether it would be successful at all.

Unexpected Issues

Unfortunately, there was a hangup in my plan. TracFone called us the next day to notify us that the phone we were trying to activate was too old and no longer supported by the network. I was somewhat surprised, considering I still know at least one person actively using the same model phone on TracFone’s network. But since the temporary phone I had was a 3G phone, they are probably no longer activating 3G devices now that the 3G networks are being decommissioned by major carriers like AT&T.

I really didn’t want to buy a phone for the sole purpose of using for a week, so I started asking around, and was fortunate to find a friend with an old 4G unlocked phone they were no longer using. I was able to borrow their phone and use a very cheap TracFone BYOD SIM card to attempt to activate it on TracFone’s network.

The process of activating the second phone was difficult at first: the first agent I called was unable to find the service card we had paid for on the previous phone, and told me that the account number I had from AT&T was invalid for the number port. (This turned out not to be true – the information outlined above IS correct.)

I called back a second time the next day, after calling AT&T to verify the account number, and got a different TracFone rep who was able to overcome all of the problems the first rep couldn’t. Finally, the transfer process was underway! I was told it would take 2 days to complete the porting process since it was a landline. It sounded like they would have to actually communicate with AT&T via email and send over documentation; it would not be a quick automated process (maybe this is why Google refuses to do it!).

Successful TracFone Port!

After two days, as promised, the number was ported and I was able to activate the phone on TracFone’s network. I made a test call to myself and confirmed it came from the old landline number.

Various articles and forum threads I’d read online suggested that I should wait 1 week before trying to port the number again. So, I decided to wait a little extra. The port completed on a Wednesday, and I decided to do the second port the following Sunday, when I could go over to Grandma’s house and finish setting up her Obi200 box.

Porting to Google

The port to Google was a fairly straightforward process overall, but a lot of the information I’d found online concerning how to port out of TracFone had been incorrect.

I began with this Google KB post, which provides the link to the page where the number porting process starts.

The first step was to enter the phone number and check portability – this was where we had failed before, since Google did not support porting the AT&T landline. This time, success! Google said the number was eligible for porting, and the next screen presented their porting terms and details.

The next step was to fill out contact information for the phone number being ported (the billing address and details on the TracFone account). Note that the carrier showed up as Verizon; I was expecting this, since I used the Verizon-compatible BYOD SIM.

This was where the slight issues began. Every piece of information I found online, including this article from Best Cellular, stated that the “Account Number” for a TracFone BYOD device was the last 15 digits of the SIM card. As for the 4-digit PIN, the information I found online varied from “TracFone doesn’t use PINs, so you should enter 0000” to “TracFone doesn’t use PINs, so you should enter any 4 digit number”. I decided to enter a 4-digit security PIN I knew existed on the account.

Unfortunately, the port request was immediately rejected due to a bad account number. I was provided the option to correct the account number, but didn’t know what the right information was. I really was not looking forward to calling TracFone and asking them how to port away the number they’d just ported in a week earlier, but I decided to give it a shot.

Fortunately, the agent I got at the TracFone number porting department was very helpful, and didn’t seem to care at all about what I was doing. They initially thought I was right to be using the last 15 digits of the SIM card too, but then found another account number in the system that I didn’t have. They provided it to me, and I entered it in the Google form, along with the 4 digit account security PIN. This time, the port was accepted!

I was disappointed that it was going to take another 24 hours; I thought the port would be fairly immediate now that it was a mobile number. I’m not sure whether the delay was normal, whether it was because the number had been a landline, or whether it was because I made a mistake on the form the first time. Either way, Google was very timely and completed the port request exactly when they said they would. The next day, we received an email notifying us it was complete!

Closing Thoughts

This was my first time ever porting a phone number in any capacity, and I was trying to do something somewhat unsupported… All in all, this process was as easy as I could have asked for. There were certainly some snags along the way, which were learning experiences for me, but I would definitely do this again in the future if someone else in my family asks for assistance getting rid of their landline. Hopefully you’ll also find this information useful if you’re trying to do the same process, or just wondering what will be involved and how hard it will be. Happy porting!

Posted in: How-To's, Musings

Building a More Complete & Full Featured CKEditor5

December 8, 2019 4:13 AM / Leave a Comment / Kirk Schnable

CKEditor 5 is a WYSIWYG text editor that can be used for a variety of purposes, from creating your own Google Docs type of site to creating your own WordPress knock-off platform.

I recently wanted to use this editor, but was disappointed with the lack of features in the default builds. It doesn’t even have underlining. Evidently the official stance of the developers is that you should build your own, and they no longer provide a full build like they did for CKEditor 4.

CKEditor provides detailed build instructions in their documentation, but as someone who is not a JavaScript developer and has never used Node before, I found the process a bit intimidating at first. So I am writing this guide for my own future reference when I need to update my build, and to hopefully help someone else in the same situation by explaining what I feel isn’t well explained for someone who’s completely new to NPM.

The Basics – Starting Your Build

First you will need NPM installed, as well as Yarn. For me on Debian 10, the package name for Yarn was not immediately intuitive, and the command was different from just “yarn”.

On Debian 10 my dependencies were:
# apt-get install npm yarnpkg git

Once these are installed, you can simply clone the Git repository. I didn’t plan on keeping this server long term, so I’m just doing it the lazy way onto a temporary VM I am going to delete when I have my final build. This isn’t the best approach if you are developing your own plugins, but if you are like me and just want a build that CKEditor won’t provide, a temp environment on a throwaway VM works fine. I had no desire to junk up my live server, or even my desktop, with all this NPM stuff I’m not likely to use again anytime soon.

For the purposes of this project I am starting with a “Classic” editor as that’s closest to what I want.

# git clone -b stable https://github.com/ckeditor/ckeditor5-build-classic.git
# cd ckeditor5-build-classic
# git remote add upstream https://github.com/ckeditor/ckeditor5-build-classic.git

Finding The Plugins You Want

The plugins you want can all be located on this page of the official documentation. I simply went through each option on the sidebar to see which plugins I might want.

Some of these are already included in the build, which you can discern by reading the build file located in src/ckeditor.js on your VM.
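
For reference, the top of that file contains a block of import lines, roughly like this (abbreviated here – the exact list depends on the version of the build you cloned):

import Essentials from '@ckeditor/ckeditor5-essentials/src/essentials';
import Bold from '@ckeditor/ckeditor5-basic-styles/src/bold';
import Italic from '@ckeditor/ckeditor5-basic-styles/src/italic';
import Paragraph from '@ckeditor/ckeditor5-paragraph/src/paragraph';
// ...and so on for the rest of the defaults.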

For the purposes of my build, I am adding Alignment, Strikethrough, Underline, Subscript, Superscript, Code, Highlight, HorizontalLine, RemoveFormat, Base64UploadAdapter, and ImageResize.

Installing Plugins

To install your desired plugins, there are four steps.

  1. Install the NPM package. The directions for this are provided on the plugin page on the documentation. Here are the NPM installs I ran to install the plugins I wanted:

# npm install --save @ckeditor/ckeditor5-alignment
# npm install --save @ckeditor/ckeditor5-highlight
# npm install --save @ckeditor/ckeditor5-horizontal-line
# npm install --save @ckeditor/ckeditor5-remove-format
# npm install --save @ckeditor/ckeditor5-upload

At a glance, you might notice these packages do not match the list of features I said I wanted to add above. This is because some packages contain several features, not all of which must be imported. For example, Base64UploadAdapter is one feature of ckeditor5-upload; there are other features I haven’t imported, such as SimpleUploadAdapter.

You can discern which features are part of which plugin from the plugin’s documentation page. Each one has a link to a page containing more information about the feature.

  2. Edit src/ckeditor.js to contain an import line for each plugin feature that you wish to import. For the purposes of my build, I added these import lines below the default ones.

import Alignment from '@ckeditor/ckeditor5-alignment/src/alignment';
import Strikethrough from '@ckeditor/ckeditor5-basic-styles/src/strikethrough';
import Underline from '@ckeditor/ckeditor5-basic-styles/src/underline';
import Subscript from '@ckeditor/ckeditor5-basic-styles/src/subscript';
import Superscript from '@ckeditor/ckeditor5-basic-styles/src/superscript';
import Code from '@ckeditor/ckeditor5-basic-styles/src/code';
import Highlight from '@ckeditor/ckeditor5-highlight/src/highlight';
import HorizontalLine from '@ckeditor/ckeditor5-horizontal-line/src/horizontalline';
import RemoveFormat from '@ckeditor/ckeditor5-remove-format/src/removeformat';
import Base64UploadAdapter from '@ckeditor/ckeditor5-upload/src/adapters/base64uploadadapter';
import ImageResize from '@ckeditor/ckeditor5-image/src/imageresize';

As a newbie to NPM, I wasn’t 100% sure at first how to determine exactly what should go here. Since I ran, for example, “npm install --save @ckeditor/ckeditor5-upload”, how do I determine the remainder of the string to import the feature?

The best way I found is to click through to the GitHub page for the plugin, and navigate into the “src” folder. There, you will see .js files, and you simply need to put the path to the .js file, minus the extension.
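
So, for the Base64UploadAdapter feature I wanted: the ckeditor/ckeditor5-upload repository has the file src/adapters/base64uploadadapter.js, which (minus the extension) yields the import line:

import Base64UploadAdapter from '@ckeditor/ckeditor5-upload/src/adapters/base64uploadadapter';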

  3. Add a line for each plugin in the ClassicEditor.builtinPlugins section of src/ckeditor.js. Unless you do this, the plugin won’t actually be built into your build, which, after all, is the goal.

Once again, I came across some ambiguity here. Where do the names of the plugins come from, and how can I make sure I have the capitalization and everything else correct?

I copied the exact name as displayed on the documentation page for the plugin; this seemed to work correctly 100% of the time.

Some caution is needed here, as some NPM package names contain dashes while the plugin name itself doesn’t. For example, for “@ckeditor/ckeditor5-horizontal-line”, the plugin name is “HorizontalLine”.

On the Horizontal Line page of documentation, it says “See the Horizontal line feature guide and the HorizontalLine plugin documentation.” This is where I was sourcing my exact spellings and it was working reliably.

Here’s what my additional lines looked like:

        Alignment,
        Strikethrough,
        Underline,
        Subscript,
        Superscript,
        Code,
        Highlight,
        HorizontalLine,
        RemoveFormat,
        Base64UploadAdapter,
        ImageResize,
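
For context, here is roughly how those lines sit in src/ckeditor.js once added – a sketch only, since the default plugin list shipped with the stable classic build may differ slightly from the abbreviated one shown here:

ClassicEditor.builtinPlugins = [
        Essentials,
        Bold,
        Italic,
        // ...the rest of the defaults that came with the build...
        Alignment,
        Strikethrough,
        Underline,
        Subscript,
        Superscript,
        Code,
        Highlight,
        HorizontalLine,
        RemoveFormat,
        Base64UploadAdapter,
        ImageResize
];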
  4. Add your desired plugin to the “toolbar:” section of the ClassicEditor.defaultConfig in src/ckeditor.js.

Once again, there was some ambiguity here. I used the lowercase version of the plugin name from step 3 above. This seemed to work 100% of the time.

By the way, you can use the pipe ‘|’ in the toolbar section to add spacers to the toolbar.

I moved some stuff around, so here is what my whole toolbar section ended up looking like:

        toolbar: {
                items: [
                        'heading',
                        'removeformat',
                        'horizontalline',
                        '|',
                        'alignment',
                        'bold',
                        'italic',
                        'underline',
                        '|',
                        'strikethrough',
                        'subscript',
                        'superscript',
                        '|',
                        'link',
                        'bulletedList',
                        'numberedList',
                        '|',
                        'highlight',
                        'indent',
                        'outdent',
                        '|',
                        'imageUpload',
                        'mediaEmbed',
                        '|',
                        'code',
                        'blockQuote',
                        'insertTable',
                        '|',
                        'undo',
                        'redo'
                ]
        },

A Note About Some Dumb (In My Opinion) Defaults

CKEditor comes bundled with a few things which I removed for the purposes of my build.

Firstly, it comes bundled with CKFinder and its associated UploadAdapter. As far as I can tell, this does not function without a server-side script I don’t care to invest time investigating, so I’m removing these from my build.

Additionally I am removing EasyImage because I have no plans to use cloud services.

To remove these items, I’m simply commenting out the imports and the plugin declarations in builtinPlugins in my src/ckeditor.js before finishing my build.
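
For what it’s worth, in the build I cloned that meant commenting out lines like these (your exact import paths may vary by version):

// import CKFinderUploadAdapter from '@ckeditor/ckeditor5-adapter-ckfinder/src/uploadadapter';
// import CKFinder from '@ckeditor/ckeditor5-ckfinder/src/ckfinder';
// import EasyImage from '@ckeditor/ckeditor5-easy-image/src/easyimage';

...and commenting out the matching CKFinderUploadAdapter, CKFinder, and EasyImage entries in ClassicEditor.builtinPlugins.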

Out of the box, the image upload features of the editor do not work unless you put in some elbow grease. For now I am implementing Base64 image uploading so I don’t have to mess with a server-side handler and the filesystem permission issues that can come along with uploading files. We’ll see how this works for my use case long term; I may switch to the Simple Upload Adapter and write a server-side handler in the future.

I personally think CKEditor should just include the Base64 uploader by default so that these features work out of the box, instead of the CKFinder plugin that doesn’t work without additional dependencies.

Finishing Your Build

Once you have added all of the customizations to your build, you can compile it with the Yarn tool you installed.

Although the official documentation suggests the command is “yarn”, on my Debian 10 system, it was “yarnpkg”.

So to finish my build I ran:

# yarnpkg build

Once it’s finished, the completed file is located in build/ckeditor.js. This file can be used as a drop-in replacement for any other build downloadable directly from CKEditor, and it should contain your new features.
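
As a quick sanity check, here’s a minimal page using the finished build – standard CKEditor 5 usage, with the element ID being just an example:

<div id="editor"><p>Hello, world!</p></div>
<script src="build/ckeditor.js"></script>
<script>
        // ClassicEditor is exposed globally by the build file.
        ClassicEditor
                .create( document.querySelector( '#editor' ) )
                .then( editor => console.log( 'Editor initialized', editor ) )
                .catch( error => console.error( error ) );
</script>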

I found I could run this build over and over as I refined my source file without any problems; it just overwrote my build file with a new one.

There ya go! I hope this guide simplifies someone’s project. 🙂

Posted in: How-To's, Musings
