Looped Network

Looped Links

I'm excited that I've hit a bit of a milestone with my WriteFreely client. I basically have the package to a point where I'm content with it and feel like I can actually start using it on the regular. The code is available on GitHub; I know I referenced the repo on GitLab in my original blog post, but that changed a few weeks ago for reasons that will be another post for another time.

This has been a few months in the making, as I've spent a not insignificant amount of time working on it each weekend since July. I also feel like I'm putting a bow on it at a good time, since some other personal project ideas have cropped up that I'd like to start working on, and they offer the added benefit of helping me pick up a new language, C#, which I'm looking to learn for work.

What's also kind of special to me is that this is really the first “bigger” personal project that I've actually completed. I've written plenty of code in my free time, but nothing that really amounted to more than a basic script for something. I can also see why projects like this die so frequently; there were plenty of times where, after not looking at the code for a week, I just really didn't want to spend any time diving back into it in order to figure out where I left off, what I needed to change, or what design decisions I needed to fix.

The Client

The client is accessible in 2 different ways:

  1. A CLI client executed from the command line with a plethora of sub-commands, similar to something like kubectl.
  2. An interactive TUI client, made significantly less bland-looking via rich.

I struggled initially to think of a good reason why the TUI client would exist, especially since I was never going to create a text editor in my client better than what people would already be using. As is so often the case, though, I was really overthinking the situation, and I ultimately realized I could just use whatever was already set as the $EDITOR for the content itself and just have my code act as a wrapper to manage that content. Win-win. I honestly now use the TUI version more than anything else!
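That wrapper pattern is simple enough to sketch. Here's a hedged Python example of the idea (edit_content is an illustrative name, not the actual function from my client): open a temp file in whatever $EDITOR is set to, wait for the editor to exit, and read the result back.

```python
import os
import subprocess
import tempfile


def edit_content(initial_text: str = "") -> str:
    """Open the user's $EDITOR on a temp file and return the edited text.

    Illustrative sketch only; falls back to vi if $EDITOR is unset, and
    assumes $EDITOR is a bare command with no extra arguments.
    """
    editor = os.environ.get("EDITOR", "vi")
    with tempfile.NamedTemporaryFile(
        mode="w", suffix=".md", delete=False
    ) as handle:
        handle.write(initial_text)
        path = handle.name
    try:
        # Block until the editor process exits, then read the result back.
        subprocess.run([editor, path], check=True)
        with open(path) as handle:
            return handle.read()
    finally:
        os.remove(path)
```

The TUI then only has to manage the returned string (save it as a draft, publish it, etc.) rather than implement any editing itself.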

Design

Probably the best thing that I learned from this experience, along with updating my Python skills, was around design. I'm often very guilty of immediately diving into writing code without really thinking through the bigger picture of what I'd like to accomplish and how my decisions will impact that. A great example is the fact that I wanted to have both a CLI and TUI option for the application, but I initially worked on only the CLI side. While that's probably for the best to save me from juggling too many items at once, I made decisions that often only worked for the CLI version of the application, such as assuming that if I ran into an error I could just exit the application with a non-zero code. Obviously that then didn't work for the TUI version where I wouldn't want the user to be kicked out due to an error; I'd just want to note what went wrong and offer some options for what should be done while keeping the application active. This resulted in my having to change the behavior for a lot of my classes. While it wasn't a big deal in the end, I still could've saved myself from a lot of work with better planning.

Wrap Up

I'm really happy with where the client is now, though if I happen to think of anything which I'd like to add I'll certainly continue working on it in the future. By the same token, if anyone other than me actually ends up using this and has a feature request, it's certainly something I'll consider. I'm just excited now to have what I feel is a solid option for posting to WriteFreely from the CLI, something I'm actually doing right now since this post is being made via my client. 🙂

It just dawned on me that I've finally made good progress in decreasing the number of domains that I own. I've historically purchased domains on a whim; when I'm bored or sitting on a bar stool somewhere, I'll grab my phone and check if random domains that pop into my head are available. While the overwhelming majority of them would be taken, I'd occasionally strike upon something that hadn't been snatched up. I'd typically always buy them... and then do nothing with them the vast majority of the time. They were basically like ICANN Pokémon.

A little over a year ago, I decided that I would start to let some of my domains lapse through attrition. I turned off automatic renewal and figured that if I didn't come up with a use for a particular domain by the time I started getting alerts about the fact that it was expiring, then I didn't really need it in the first place. At the time I owned 9 domains. To date, I've let 5 of them expire. 1 was something I used for a skunk works project at my job that ended up becoming fairly critical to their workflow, so I transferred that domain to the company when I left that job. (Humorously, I never expensed this domain — even after it became “production” — because it renewed on the same date as laifu.moe, and I didn't want to submit the receipt showing both domains. 😅) That leaves me with:

  1. This domain.
  2. The aforementioned laifu.moe, the domain I've actually owned the longest, and the only domain I've ever purchased as a “joke” and actually done something with.
  3. A random domain I bought prior to deciding that I'd rather just use looped.network as my primary domain. I'm letting this one expire, though it has until next summer.

It's typically been easy for me to justify the expense of domains because I tend to use (relatively) inexpensive TLDs. I believe the most expensive domain I've ever purchased was a .io that ran around $40 USD for a year. While TLDs which are $10 to $15 a year are a bit more palatable, they still add up when I've got a large number of them... and that cost is doing absolutely nothing if I do nothing with the domain. Today I'd only consider purchasing a domain if I have an immediate use for it; I no longer buy any just because it's a fun name that I want to hold on to. I've actually had a handful of scenarios where I thought of decent domain names and discovered they were available, but so far I've been walking the straight and narrow without buying them.

While it's a little silly to keep a domain for a single web page, I don't see myself ditching laifu.moe any time soon since my inner weeb likes it too much. looped.network hosts several websites (like this one) and a few servers/services that I run, so it's also pretty locked in. At this point, I'd have to come up with something pretty outstanding for me not to just host it on another subdomain of looped.network.

As the title alludes to, this morning I tried updating my Pinebook Pro running Manjaro Linux through my normal method:

sudo pacman -Syu

Today, this resulted in an error message about the libibus package:

warning: could not fully load metadata for package libibus-1.5.26-2
error: failed to prepare transaction (invalid or corrupted package)

Fun. I first wanted to see if it was really just this package causing the problem or if there were other issues. Being a pacman noob, I just used the UI to mark updates to the libibus package as ignored. Once I did that, all of the other packages installed successfully. I was then prompted for a reboot, which I gladly did since I figured I'd see if that made any difference. Once my laptop was back up and running, though, executing pacman -Syu again still gave the same error related to libibus.

Some searches online showed that a mirror with a bad package could be the problem, so I updated my mirrors via:

sudo pacman-mirrors -f5

This didn't solve the problem, but the new mirror gave me a different error message:

error: could not open file /var/lib/pacman/local/libibus-1.5.26-2/desc

With some more searches online, I saw a few people on the Manjaro forums say that simply creating those files was enough to fix similar errors they had with other packages. Creating the file above just resulted in an error about a second file being missing, so I ultimately ended up running:

sudo touch /var/lib/pacman/local/libibus-1.5.26-2/desc
sudo touch /var/lib/pacman/local/libibus-1.5.26-2/files

Now running an update allowed things to progress a little further, but I got a slew of errors complaining about libibus header files (.h) existing on the filesystem. My next less-than-well-thought-out idea was to just remove the package and try installing it fresh. I tried running:

sudo pacman -R libibus

Fortunately, Manjaro didn't let me do this, telling me that libibus was a dependency for gnome-shell. Yeah, removing that would've been bad. It was back to searching online. The next tip I stumbled across was to try clearing the pacman cache and then installing updates with:

sudo pacman -Scc
sudo pacman -Syyu

This unfortunately gave me the same error about the header files. However, the same forum thread had another recommendation to run:

sudo pacman -Syyu --overwrite '*'

Curious about exactly what this would do prior to running it, I checked out the man page for pacman:

Bypass file conflict checks and overwrite conflicting files. If the package that is about to be installed contains files that are already installed and match glob, this option will cause all those files to be overwritten. Using --overwrite will not allow overwriting a directory with a file or installing packages with conflicting files and directories. Multiple patterns can be specified by separating them with a comma. May be specified multiple times. Patterns can be negated, such that files matching them will not be overwritten, by prefixing them with an exclamation mark. Subsequent matches will override previous ones. A leading literal exclamation mark or backslash needs to be escaped.

I took this to mean that instead of complaining about the header files that already existed on the filesystem, it would simply overwrite them since my glob was just * to match anything. I ran this, and sure enough everything was fine.

I mainly run Manjaro on my Pinebook Pro just because it's such a first-class citizen there with tons of support. It's now the default when new Pinebook devices ship; back when I got mine, it was still coming with Debian, though I quickly moved over after seeing how in love the community was with Manjaro. I do find that I run into more random issues like this on Manjaro than I do with Fedora on my other laptop or Debian on my servers, and at times it can be a little frustrating; I didn't really want to spend a chunk of my Saturday morning troubleshooting this, for example. But while there seem to be more issues with Manjaro, the documentation and community are so good that after a little time digging in, I can usually find the solution. I've yet to run into any issue where the current installation was a lost cause forcing me to reinstall the operating system.

Just a few moments ago I needed to extract the audio component out of a video file into some type of standalone audio file, like .mp3. Since I've been working with Audacity to record audio, I figured maybe it had some capability for ripping it out of video.

My initial searches gave me results like this which quickly made it clear that while this is technically possible, it requires some add-ins that I didn't really want to mess around with. However, since the add-in mentioned in that video was for FFmpeg, I realized I could just use that directly.

I didn't have ffmpeg installed, but that was easy enough to rectify on Fedora 36.

sudo dnf install ffmpeg-free

Then I needed to extract the audio. I first checked how it was encoded in the video with:

ffprobe my_video.mp4

After sifting through the output, I saw that it was encoded as aac:

Stream #0:1[0x2]: Audio: aac (mp4a / 0x6134706D), 48000 Hz, stereo, s16, 317 kb/s (default)

Rather than extracting the raw AAC, though, I wanted to simultaneously re-encode the audio as MP3. Another quick search showed me some great resources. Ultimately, I ended up doing:

ffmpeg -i my_video.mp4 -q:a 0 -map a bourbon.mp3

As mentioned in the Stack Overflow post, the -q:a 0 parameter allows for a variable bitrate, while -map a says to ignore everything except the audio.

Just a few moments later, and my MP3 was successfully encoded.
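If I end up doing this regularly, the same invocation is easy to wrap in a script. Here's a hypothetical Python sketch (build_extract_cmd and extract_audio are my own illustrative names, not anything from ffmpeg itself) that builds and runs the equivalent command:

```python
import subprocess


def build_extract_cmd(video: str, audio_out: str) -> list[str]:
    # -q:a 0 -> highest-quality variable bitrate for the MP3 encoder
    # -map a -> keep only the audio streams from the input
    return ["ffmpeg", "-i", video, "-q:a", "0", "-map", "a", audio_out]


def extract_audio(video: str, audio_out: str) -> None:
    # Requires ffmpeg on the PATH (e.g. sudo dnf install ffmpeg-free).
    subprocess.run(build_extract_cmd(video, audio_out), check=True)
```

Nothing fancy, but it saves retyping the flags for the next video.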

I recently ran across an interesting error with my development Kubernetes cluster, and while I still have no idea what I may have done to cause it, I at least figured out how to rectify it. As is commonly the case, most of the things I end up deploying to Kubernetes simply log to standard out so that I can view logs with the kubectl logs command. While running this against a particular deployment, though, I received an error:

failed to try resolving symlinks

Looking at the details of the error message, it seemed that running a command like:

kubectl logs -f -n {namespace} {podname}

is looking for a symbolic link at the following path:

/var/log/pods/{namespace}_{pod-uuid}/{container-name}

The end file itself seems to be something extremely simple, like a number followed by a .log suffix. In my case, it was 4.log. That symbolic link then points to a file at:

/var/lib/docker/containers/{uuid}/{uuid}-json.log

Where the uuid is the UUID of the container in question.

Note: The directory above isn’t even viewable without being root, so depending on your setup you may need to use sudo ls to be able to look at what’s there.

I was able to open the -json.log file and validate that it had the information I needed, so I just had to create the missing symlink. I did that with:

sudo ln -s /var/lib/docker/containers/{uuid}/{uuid}-json.log 4.log

Since my shell was already in the /var/log/pods/{namespace}_{pod-uuid}/{container-name} directory, I didn’t need to give the full path for the link location, just the relative name of 4.log.

Sure enough, after creating this I was able to successfully run kubectl logs against the previously broken pod.
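The manual fix can also be sketched as a tiny script. relink_pod_log is a hypothetical helper of my own naming; the paths just follow the layout described above, and you'd need to run it as root on the node:

```python
from pathlib import Path


def relink_pod_log(pod_log_dir: Path, container_log: Path,
                   name: str = "4.log") -> Path:
    """Recreate a missing kubelet log symlink.

    pod_log_dir:   the pod's directory under /var/log/pods
    container_log: the real file, e.g.
                   /var/lib/docker/containers/{uuid}/{uuid}-json.log
    """
    link = pod_log_dir / name
    if not link.exists():
        # Equivalent to: sudo ln -s <container_log> <name>
        link.symlink_to(container_log)
    return link
```

After the link exists again, kubectl logs has something to follow.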

Lately I've been working through getting WinRM connectivity working between a Linux container and a bunch of Windows servers. I'm using the venerable pywinrm library. It works great, but there was a decent bit of setup for the underlying host to make it work that I had been unfamiliar with; you can't just create a client object, plug in some credentials, and go. A big part of this for my setup was configuring krb5 to be able to speak to Active Directory appropriately.

My setup involves a container that runs an SSH server which another, external service actually SSHs into in order to execute various pieces of code. So my idea was to take the entrypoint script that configures the SSH server and have it also:

  1. Create a keytab file.
  2. Use it to get a TGT.
  3. Create a cron job to keep it refreshed.

Let's pretend the AD account I had been given to use was:

Username@sub.domain.com

In my manual testing, this worked fine after I was prompted for the password:

kinit Username@SUB.DOMAIN.COM

If you're completely new to this, note that it's actually critical that the domain (more appropriately called the “realm” in this case) is in all capital letters. If I run this manually by execing my way into a container, I get a TGT just like I'd expect. I can view it via:

klist -e

Unfortunately, things didn't go smoothly when I tried to use a keytab file. I created one in my entrypoint shell script via a function that runs:

{
    echo "addent -password -p Username@SUB.DOMAIN.COM -k 1 -e aes256-cts-hmac-sha1-96"
    sleep 1
    echo '<password>'
    sleep 1
    echo "wkt /file.keytab"
} | ktutil &> /dev/null

The keytab file is created successfully, but as soon as I try to leverage it with...

kinit Username@SUB.DOMAIN.COM -kt /file.keytab

...I receive a Kerberos preauthentication error. After much confusion and searching around online, I finally found an article that got me on the right track.

The article discusses the fact that an assumption is being made under the hood that the salt being used to encrypt the contents of the keytab file is the realm concatenated together with the user's samAccountName (aka “shortname”). So for my sample account, the salt value would be:

SUB.DOMAIN.COMUsername

The problem highlighted by the article is that when you authenticate via the UserPrincipalName format (e.g.: username@domain.com) rather than the shortname format (e.g.: domain\username), another assumption is made that the prefix of the UPN is the same as the shortname. This is very commonly not the case; in a previous life where I actually was the AD sysadmin, I had shortnames of first initial and last name while the UPNs were actually firstname dot lastname. So for example, my UPN was:

looped.network@domain.com

While my samAccountName was:

lnetwork

If this type of mismatch happens, you can use -s when running addent to specify the salt. After checking AD, I verified in my current case that the username was the same for both properties... but that in both places it was completely lowercase. I can't say why it was given to me with the first character capitalized, but after re-trying with username@SUB.DOMAIN.COM, everything was successful. This made sense to me because while AD doesn't care about the username's capitalization when it authenticates (hence why manually running kinit and typing the password worked), a keytab file's keys are derived in advance, so the wrong capitalization meant the wrong salt.
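The salt behavior is easy to illustrate. keytab_salt below is just a toy function of my own (not anything from an actual Kerberos library) showing why capitalization matters:

```python
def keytab_salt(realm: str, sam_account_name: str) -> str:
    """Toy illustration of AD's default Kerberos salt for a user:
    the realm concatenated with the samAccountName, case-sensitive."""
    return realm + sam_account_name
```

Because the salt feeds into the key derivation, Username and username produce entirely different keys even though AD treats them as the same account at login time.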

There’s nothing quite like being on a live call to make you realize that you’re not as savvy with Vim as you thought. I’ll probably be shifting back to Sublime with my main workflow for the foreseeable future.

I had written not very long ago about my progress on my little WriteFreely Python client that I've been working on to facilitate my ability to create posts from an SSH session to a VPS. I actually had a bit of an “Oh no!” moment just the other day when I realized that I might be able to accomplish what I'm looking to do just by going to the write.as website from a TUI browser like w3m, but a quick test let me know that JavaScript was required.

This weekend I felt like I didn't have a ton left to work out from the perspective of the CLI version of the application, at least for a first build. I wanted to round out some of the functionality with pulling back post information to then be able to get IDs for deleting posts. Based on that, I then needed to update some of the help documentation. With that implemented, though, I wanted to test it from my VPS. Out of the gate, that was a bit of a pain since I'm not feeling like things are ready to push to something like PyPI yet. So instead, I just cloned my repo, manually created the virtual environment, installed the dependencies, and then created a shell script in my $PATH named writepyly that just contained:

#!/usr/bin/env bash
/home/{username}/code/writepyly/.venv/bin/python /home/{username}/code/writepyly/src/__main__.py "$@"

In this case, {username} holds my actual username on the system. This works great and allowed me to put some of the functionality through its paces. I got to fix a few bugs with things like trying to push posts when I didn't have any configuration files, for example. I apparently like to catch errors and then not actually stop the execution flow. This post, however, is being made from my client on the VPS.

After getting the VPS side of things sorted, I went back to start building out the TUI version of the application, which I want to launch when writepyly is executed without any commands provided. In the original branch, that would simply print the help documentation. In this new version, only writepyly help will trigger that while writepyly by itself will cause the TUI to load up.

This will be an interesting learning experience for me since I have zero experience building something like this. I'm using rich as the framework for the TUI, and it honestly seems very easy to work with. I think building out everything except for creating new posts will be super easy. Creating new posts is going to involve basically having a text editor in my application, so I currently have no idea what the hell that will look like. Maybe instead of having a text editor for post creation, I'll just prompt the user from the TUI for the location of the file they want to use. I don't see a ton of value in trying to recreate something like Vim, Emacs, Micro, etc. given that they'll all be better solutions for writing content than what I would put together. 🤔

I feel dumb right now, especially after my post about what I've been doing with Neovim. While working on a personal project, I kept having complaints from Neovim about my file having mixed indentation, indents and unindents not aligning, etc. This project has now been worked on with VS Code, Sublime, and Neovim. After struggling to manually rectify things one line at a time in Neovim, I eventually did the smart thing and took to the Internet, where I learned that:

I can easily issue the command:

:set syntax=whitespace

to see which whitespace is comprised of tabs and which is comprised of spaces. If I've got Neovim set the way I want as far as tabs and spaces are concerned, I can then just issue:

:retab

to make everything match. I guess it's another “better later than never” scenario.
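For reference, “set the way I want” means something like the following in my config before running :retab (the values here are just my preference, not a recommendation):

```vim
" Use 4-space indentation; :retab will then convert
" existing tabs to match these settings.
set tabstop=4
set shiftwidth=4
set expandtab
```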

I had written a few months ago on Medium that I was trying to switch from using VS Code as my main editor to Vim. As I mentioned in that post, I've used Vim for years now, but never as my “main” editor for when I need to get serious work done, such as with my job. I also swapped from vanilla Vim to Neovim, which I found to have a few quality of life improvements that I enjoyed. I just couldn't stick with it, though, because I missed how frequently VS Code saved me from myself when I did things like making stupid mistakes that I'd need to debug manually because my editor wasn't telling me about the problems in advance. Likewise, I got irritated when I kept having to manually check things like what parameters I needed to pass to a method or where I defined a particular class because I couldn't easily peek at them like I can in VS Code.

That being said, I knew this functionality was possible in Neovim (and Vim), but I just never bothered to check exactly how. During some initial homework on the matter, it seemed like parts of it were fairly simple while other parts were complicated. Ultimately, it turned out that how difficult the process is to set everything up really depends on how difficult you want to make it and how much you want to customize things. I just reproduced the steps I originally followed on my work laptop with my personal laptop to validate my notes prior to making this post, and it probably took me less than 5 minutes.

Plugins and init.vim

When I first started with Neovim, I quite literally told it to just use what I had already set up with Vim as far as configuration and plugins were concerned. I had used Pathogen for my Vim plugins and had my configuration done in ~/.vimrc. Neovim looks for configuration files in ~/.config/nvim, and they can be written in Vimscript, Lua, or a combination of the two. I initially just had my init.vim file with:

set runtimepath^=~/.vim runtimepath+=~/.vim/after
let &packpath = &runtimepath
source ~/.vimrc

This was taken straight from the documentation. It worked fine, but I wanted to keep my configs separate in this case. I started by just copying the content of my existing .vimrc file to ~/.config/nvim/init.vim.

Note: If you're curious, my full Neovim configuration is on GitLab.

Next I wanted a plugin manager. vim-plug seems to be extremely popular and was simple enough to install with the command they provide:

sh -c 'curl -fLo "${XDG_DATA_HOME:-$HOME/.local/share}"/nvim/site/autoload/plug.vim --create-dirs \
       https://raw.githubusercontent.com/junegunn/vim-plug/master/plug.vim'

Then I just updated my init.vim with the plugins I wanted to install:

call plug#begin('~/.config/plugged')
Plug 'https://github.com/joshdick/onedark.vim.git'
Plug 'https://github.com/vim-airline/vim-airline.git'
Plug 'https://github.com/tpope/vim-fugitive.git'
Plug 'https://github.com/PProvost/vim-ps1.git'
Plug 'https://github.com/wakatime/vim-wakatime.git'
Plug 'neovim/nvim-lspconfig'
Plug 'neoclide/coc.nvim', {'branch': 'release'}
call plug#end()

call plug#begin('~/.config/plugged') and call plug#end() indicate what configuration pertains to vim-plug. The path inside of call plug#begin is where plugins get installed to; I could pick whatever arbitrary location I wanted. Plugins can be installed with any valid git link. You can see above that there's a mix of full URLs and a shorthand method. I started off by just copying the links for plugins I already used with Vim (all of the full GitHub links) and then adding the others as I looked up how to do some additional configuration. More on those later.

With init.vim updated, I just needed to close and re-open Neovim for everything to apply, followed by running:

:PlugInstall

This opens a new pane and shows the progress as the indicated plugins are all installed. What's really cool about this is that I can also use :PlugUpdate to update my plugins, rather than going to my plugin folder and using git commands to check for updates myself.

Note On Configuration

I ultimately ended up doing all of my configuration in Vimscript. I would actually prefer to use Lua, but most of the examples I found were using Vimscript. I also have a fairly lengthy function in my original Vim configuration for adding numbers to my tabs that I didn't want to have to rewrite, especially since I wholesale copied it from somewhere online. Depending on what you want to do, however, you may end up with a mix of both, especially if you find some examples in Vimscript and some in Lua. This is entirely possible. Just note there can be only one init file, either init.vim or init.lua. If you create both, which is what I initially did, you'll get a warning each time you open Neovim and only one of them will be loaded.

To use init.vim as a base and then also import some Lua configuration(s), I created a folder for Lua at:

~/.config/nvim/lua

In there, I created a file called basic.lua where I had some configuration. Then, back in init.vim, I just added the following line to tell it to check this file as well:

lua require('basic')
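As an illustration, a hypothetical basic.lua might contain something like this (these particular options are just examples, not my actual config):

```lua
-- ~/.config/nvim/lua/basic.lua
-- Loaded from init.vim via: lua require('basic')
vim.opt.number = true     -- show line numbers
vim.opt.tabstop = 4       -- display width of a tab character
vim.opt.expandtab = true  -- insert spaces instead of tabs
```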

Error Checking

Note: I ended up not using the steps below, so if you want to follow along with exactly what I ended up using, there's no need to actually do any of the steps in this section.

This is where some options come into play. Astute readers may have noticed the second-to-last plugin in my vim-plug config was:

Plug 'neovim/nvim-lspconfig'

This is for the LSP, or Language Server Protocol. This allows Neovim to talk to various language servers and implement whatever functionality they offer. However, it doesn't actually come with any language servers included, so I needed to get those and configure them as needed. For example, I could install pyright from some other source, like NPM:

npm i -g pyright

And then I needed additional configuration to tell Neovim about this LSP. The samples were in Lua, which is why I initially needed to use Lua configuration alongside Vimscript:

require'lspconfig'.pyright.setup{}

This actually worked for me with respect to error checking. Opening up a Python file would give me warnings and errors on the fly. However, I didn't get any code completion. I started looking at options for this, but frankly a lot of them seemed pretty involved to set up, and I wanted something relatively simple rather than having to take significant amounts of time configuring my editor any time I use a new machine or want to try out a different language.

Code Completion

Ultimately, I stumbled onto Conquer of Completion, or coc. I don't know why it took me so long to find as it seems to be insanely popular, but better later than never. One of coc's goals is to be as easy to use as doing the same thing in VS Code, and I honestly think they've nailed it. I first installed it via vim-plug in init.vim:

Plug 'neoclide/coc.nvim', {'branch': 'release'}

After restarting Neovim and running :PlugInstall, I could now install language servers straight from Neovim by running :CocInstall commands:

:CocInstall coc-json coc-css coc-html coc-htmldjango coc-pyright

After this, I fired up a Python file and saw that I had both error checking and code completion. There was just one final step.

Key Mapping

Given the wide array of key mapping options and customizations that people do, coc doesn't want to make any assumptions about what key mappings are available and which may already be in use. As a result, there are NO custom mappings by default. Instead, they need to be added to your Neovim configuration just like any other mapping changes. However, the project shares a terrific example configuration with some recommended mappings in their documentation. I legitimately just copied the sample into my existing init.vim file. This adds some extremely useful mappings like:

  • gd to take me to the declaration for what I'm hovering.
  • K to show the documentation for what I'm hovering (based on the docstring for Python, for example.)
  • ]g to go to the next error/warning and [g to go to the previous one.
  • Tab and Shift + Tab to move through the options in the code completion floating window.
  • Enter to select the first item in the code completion floating window.
  • A function to remap Ctrl + f and Ctrl + b, which are normally page down and page up, to scroll up and down in floating windows but only if one is present.

And tons of other great stuff. I initially spent about 30 minutes just playing around with some throwaway code to test all of the different options and key mappings. It honestly feels super natural and now gives me the same benefits of VS Code while allowing me to use a much leaner and more productive editor in Neovim.
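To give a flavor of what those mappings look like, here are a few lines in the style of coc's sample configuration (check their README for the current recommended version rather than copying from here):

```vim
" Jump to the definition of the symbol under the cursor.
nmap <silent> gd <Plug>(coc-definition)
" Cycle through diagnostics (warnings/errors).
nmap <silent> [g <Plug>(coc-diagnostic-prev)
nmap <silent> ]g <Plug>(coc-diagnostic-next)
```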