
We’ve completed yet another lap around the sun. The year 2022 has come to an end. A year in which the unimaginable happened: Russia started a war on European soil. They’ve invaded Ukraine. Right after the pandemic, we get war. Really, the 2020s are a great time to be alive. Oh well…

As I mentioned in last year’s review, I quit at Manus, and I’ve now been working at Spinnov for a year. What else is new? 2022 was also the year I started role playing. And this past year I’ve gotten to know my neighbour a little better.

So, this was 2022. Let’s hope 2023 brings peace. I say the war in Ukraine doesn’t affect my life much: the electricity grid is stable, the supermarkets are stocked; perhaps the only noticeable thing is the price tag. All in all, nothing has happened that disrupts our lives. But the war is in the news, and even if we don’t think about it all the time, it is still in the back of our minds. I once read in my mother’s diary about a scared girl, scared the Russians were coming. Is this what it felt like to live during the Cold War? What will happen next? Will we be able to continue our “normal” lives? Will the war end? And what happens then? Let’s hope 2023 brings peace, let’s hope covid doesn’t return. Let’s hope. — I am just being me, caught between hope and despair —

Peace and Love, happy 2023!

Some random notes…

Arch Linux ARM has dropped support for the ARMv5 and ARMv6 targets. The only ARMv6 target they supported was the original Raspberry Pi, which includes the Raspberry Pi Zero, as it is the same chip on a different size board. And I still have my original Raspberry Pi in service. Yes, one of the original ones with only 256 MiB of RAM, from when they were released at RS and Farnell, and the flood of orders brought down their websites. That’s what I call a DDoS attack… just open your orders at time X, and have everyone who wants to order overload the servers. Oh, it was fun in those days, right?

Oh well… as the official repos don’t build those packages any more, building them yourself is one of the options. In various places on the internet (the forums, GitHub, Reddit) people are wondering what to do. Can we build our own packages? Can someone host repositories for the dropped architectures, like ArchLinux32 did when ArchLinux dropped i686 support?

Building on the machines themselves ain’t an option: way too slow. We need cross-compilation on a fast computer if we want to be finished before the heat death of the universe. So I’ve been looking at the crosstool-NG configuration. The idea: take the official configuration files for ARMv7, both the latest version from when ARMv6 was still available and the current one, diff them, and apply the resulting patch to the ARMv6 configuration. Sounds simple, but it ain’t that simple. One thing to note is that ArchLinuxARM uses a git version of crosstool-NG rather than the last release, as they require current versions of gcc, glibc, and related components. Installing the crosstool-ng-git package from the AUR gives you something newer than the version used to generate the config files, so some version numbers may have changed. The problem with that: if a version is not recognised, it silently reverts to the default, which can be a much older version. So creating the cross compilers for ARMv5 and ARMv6 will take another look before I get it working to satisfaction.
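The diff-and-apply idea can be sketched as follows. The file names and contents below are mocked up stand-ins (the real configs live in ArchLinuxARM’s crosstool-ng config tree); the point is just that `patch` happily applies the ARMv7 delta to the ARMv6 file even when the surrounding lines differ:

```shell
# Mock "old" and "new" ARMv7 configs standing in for the real ones:
printf 'CT_GCC_VERSION=10\nCT_GLIBC_VERSION=2.33\n' > armv7.old.config
printf 'CT_GCC_VERSION=12\nCT_GLIBC_VERSION=2.36\n' > armv7.new.config
# And the last ARMv6 config from before support was dropped:
printf 'CT_ARCH_ARCH=armv6\nCT_GCC_VERSION=10\nCT_GLIBC_VERSION=2.33\n' > armv6.config

# Diff the two ARMv7 configs (diff exits 1 when files differ, hence || true)
# and apply the delta to the ARMv6 one:
diff -u armv7.old.config armv7.new.config > armv7.patch || true
patch armv6.config < armv7.patch
cat armv6.config
```

After this, armv6.config keeps its ARMv6-specific line but carries the updated version numbers. The catch described above remains: any option crosstool-NG no longer recognises silently falls back to its default.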

But just having a cross compiler ain’t the whole picture, as I don’t just want to compile, I want to create packages. On the ArchLinuxARM GitHub there is a repo called plugbuild, which appears to be what ArchLinuxARM uses internally to build their packages, but due to a lack of documentation it is useless for an outsider.

But hey… I found an AUR package called devtools-qemu, with the description “QEMU based cross-build tools for Arch Linux ARM package maintainers”. That looks promising, right? Well… it has a missing dependency, archlinuxarm-keyring. No problem, I can download that from the ArchLinuxARM website and install it on my ArchLinux machine. A comment mentions the arch4edu repository, so I’ve added that one to my pacman configuration as well. There we go… however, it ain’t working yet.

I’m too tired now to write out all the details of what was going on. In short: the scripts from devtools-qemu ain’t working properly, but using the config files from devtools-qemu with the normal devtools (by replacing the symlink, making extra-armv7h-build point to archbuild instead of archbuild-qemu) gives results.
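The symlink swap is the whole trick. The demo below recreates it in a scratch directory with a stub script, since the real paths depend on where devtools(-qemu) put its links on your system:

```shell
# Stand-in for the real archbuild script from devtools:
mkdir -p demo-bin
printf '#!/bin/sh\necho archbuild\n' > demo-bin/archbuild
chmod +x demo-bin/archbuild

# devtools-qemu made extra-armv7h-build point at archbuild-qemu;
# re-point it at the plain archbuild instead:
ln -sf archbuild demo-bin/extra-armv7h-build
readlink demo-bin/extra-armv7h-build
```

Running demo-bin/extra-armv7h-build now invokes the plain archbuild through the symlink, which is exactly what made the builds work for me.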

Also note that, for a cross-architecture chroot to work, qemu-user-static is required, registered via binfmt. This means the regular qemu-binfmt package conflicts if installed: binfmt would use the dynamically linked QEMU binaries instead, which of course won’t work inside a chroot.
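For reference, roughly what that looks like on the host (package names are as I recall them from the repos/AUR, verify before use):

```shell
# Statically linked QEMU plus the binfmt registration rules:
pacman -S qemu-user-static qemu-user-static-binfmt
# If the regular (dynamically linked) binfmt package is installed,
# it conflicts and has to go first.

# Check the arm handler; the 'F' flag means the interpreter is loaded
# at registration time, so it keeps working inside a chroot:
cat /proc/sys/fs/binfmt_misc/qemu-arm
```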

We’ve completed another lap around the sun, hooray!
Well… I’m late posting this, but hey, what is there to say about 2021?
Another year cancelled due to this pandemic thing going on.

So there is nothing to report. For a brief moment it looked like
summer was still on, but no luck, everything of interest got
cancelled. So… 2021: nothing to report, better luck in 2022.

But hey, there is something to tell. I quit my job at Manus.
After 5+ years, I decided it was long enough. You know what
they say, quit while you’re at your top, and I had the feeling
I couldn’t outdo myself any more at that place. So, it was time
to move on to a new job.

As I have been using Let’s Encrypt since the beginning, the shutdown of the ACME v1 servers, replaced by the ACME v2 servers, broke my running installation. So, how to migrate an existing installation?

certbot-auto should have updated itself to a version that supports the new revision of the protocol; however, the existing configuration files still point to the servers speaking the old protocol. Therefore you must manually edit the config files to point to the new servers to continue to obtain certificates.

In /etc/letsencrypt/cli.ini locate the line that says

server =

and change it to

server =

Then, for each website there is a website.conf file in /etc/letsencrypt/renewal; make the same change in each of those files.
After that, certbot-auto should be up and running again.
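Editing every renewal config by hand gets old fast; a sed one-liner handles them all at once. A sketch, assuming the endpoints only differ in the v01/v02 part (the demo runs on a mock file; on a real system the glob would be /etc/letsencrypt/renewal/*.conf):

```shell
# Mock renewal config pointing at the retired ACME v1 endpoint:
mkdir -p renewal
printf 'server = https://acme-v01.api.letsencrypt.org/directory\n' \
    > renewal/example.org.conf

# Swap v1 for v2 in every renewal config in one go:
sed -i 's|acme-v01|acme-v02|' renewal/*.conf
cat renewal/example.org.conf
```

The same substitution applies to the server line in cli.ini.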

I have recently bought a new computer. As my default operating system, I use ArchLinux. Simply boot from a USB stick running ArchLinux and copy the contents over; I have explained the procedure before.

But for a few tasks I might require a Microsoft Windows installation. For this purpose I downloaded the ISO for Microsoft Windows 10 21H1. Creating a bootable medium from it is some hassle, as booting from UEFI requires the large image files to be split. That’s something I might discuss another time; it is explained elsewhere out there.

Now… what I want to talk about are a few steps after installing Microsoft Windows 10 to make it usable. First, the Start Menu needs to be restored. For this purpose, there is Open Shell. This will restore the start menu to a style previous versions of Windows offered, which feels like it gives me a better overview of installed applications.

Next thing is the keyboard layout. As I tend to write in multiple languages, I need to be able to enter accented characters. I mean, if I write my name, André, I already need that. Now, Windows, by default, uses the “US International” layout. This layout has a flaw: when I type a ‘ or “, I need to press the space bar afterwards, otherwise it is interpreted as an accent instead of a character. The layout that fixes these flaws is “US International AltGr Dead Keys”. Unfortunately, Microsoft does not offer this layout, but it can be installed. I once found the source code on Google Code, which is long gone; therefore, the code now sits on my github.

But now, how to use it? To compile a keyboard layout for Windows, you need the Microsoft Keyboard Layout Creator. This program requires .NET 3.5, which should be installed first; then run the setup. Once installed, launch the program, open the “.klc”-file, and click Project → Build DLL and Setup Package. This creates output in %USERPROFILE%\usialtgr. There, run the generated installer. Yeah… keyboard layouts are binaries on the Windows NT platform. Then the layout can be selected. I have only found the setting to change the keyboard layout in the Metro Settings app; I haven’t been able to find it in the Control Panel. Maybe I am blind, but I suspect it is only available in the Metro app. Add the newly installed keyboard layout, remove the US International one, and we’re good: I can type the way I like it.

There is more stuff to be done to make Windows 10 behave a bit, like making it understand the RTC runs in UTC. One guide is here, but I have not confirmed it yet.
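For the UTC clock, the commonly cited fix boils down to one registry value. A sketch of the .reg fragment (the value name is well known, but behaviour has varied between Windows builds, so test before relying on it):

```
Windows Registry Editor Version 5.00

; Tell Windows the hardware clock runs in UTC instead of local time
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\TimeZoneInformation]
"RealTimeIsUniversal"=dword:00000001
```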

I have recently bought a new NAS. I have chosen a ZyXEL NAS542. One of the reasons for choosing this NAS is that it is possible to install OpenMediaVault on it. Being able to run my own Linux installation on the thing means not being limited to the firmware ZyXEL offers, and being able to get updates after they stop supporting it. That possibility was enough to take the risk.

Well… looking at the stock firmware, I am disappointed, so I am going for OpenMediaVault anyway. So… why am I disappointed in the stock firmware? Well… they say they support NFS. It turns out NFS support is an “app”; it ain’t core functionality. So far so good, but it also turns out that what they put on the NFS share is a separate directory, not visible to any other part of the NAS software, not even the in-browser file browser. This could of course be fixed by logging into a shell account and creating some symlinks; however, I don’t trust their scripts, and something might wipe all the data if I do so.

What I would like is to be able to share my shares over any protocol I choose. My use case is that most of my clients will access the NAS over NFS, but the occasional client may talk SMB, and both should see the same data. As this is not offered, I am going to take a look at OpenMediaVault.

When I made the decision to purchase this NAS, I noted there are OpenMediaVault images for this NAS. I didn’t take a closer look at the time. Maybe I should have, as the latest image is from 2018. Kinda old, huh? Furthermore, the SoC used in this NAS is an NXP LS1024A, which has no mainline kernel support. I am kinda stuck with a 3.2 kernel. That sounds ancient. I should have done better research prior to buying this thing. Oh well… it seems 5.x support is being worked on, but that is for later. For now, I’ll go with the image from

As the image is 2 GB in size, I took a 2 GB SD card, wrote the image to it, put the SD card into the NAS, and switched it on.

$ gunzip debian-nas-stretch-18.069-armhf.img.gz
# dd if=debian-nas-stretch-18.069-armhf.img of=/dev/mmcblk0
# gparted /dev/mmcblk0

The NAS came up with the OMV Web interface, and I tried to log in with the default credentials admin:openmediavault. However, it showed an error message saying “Failed to connect to socket: No such file or directory”.

According to this forum thread, the omv-engined daemon isn’t running. Now, there is a shell account with the same credentials, so I ssh’d into the machine, did a sudo bash to become root, and started said daemon. Once the daemon was running, I could log in to the web interface. When I logged into the shell, I noticed it’s configured with a German locale. Not really a surprise, as I downloaded the image from a German blog.

Now… this is an ancient image, so I need to install some updates. The image is based on Debian “stretch”, the codename for Debian 9, which has LTS support up to June 2022, so there should still be updates available. As this is a Debian image, it uses apt as a package manager. But when I ran apt-get update, it complained it couldn’t resolve hostnames. It seems I needed to put my nameserver in /etc/resolv.conf manually for some reason. Oh well… after that, apt-get update succeeded, but the subsequent apt-get upgrade ran out of disk space. Well… I need a larger SD card.
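The DNS fix amounted to writing a single nameserver line by hand. The demo below writes to a stand-in file (on the NAS the real target is /etc/resolv.conf, and 192.168.1.1 is just an example router address, use your own resolver):

```shell
# One nameserver entry is enough to get apt resolving hostnames again:
printf 'nameserver 192.168.1.1\n' > resolv.conf.demo
cat resolv.conf.demo
```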

So, I proceeded with an 8 GB card. I wrote the image, and expanded the file system.

I tried to repeat what I did before; however, this time the NAS didn’t come up. Why doesn’t it boot? I could open up the NAS and connect to its UART to see the kernel output, but for now… I don’t feel like opening it just yet. Let’s try another SD card, a 16 GB model. This time it works as it should. Odd… some SD card incompatibility, I guess? Oh well… let’s try the update thing again like I did last time… and then see if the daemon problem still exists…

And it does… and I also got this:

Processing triggers for openmediavault (4.1.36-1) ...
Restarting engine daemon ...
'omv-engined' trying to restart
'omv-engined' start: '/bin/systemctl start openmediavault-engined'
'omv-engined' failed to start (exit status 0) -- no output
invoke-rc.d: unknown initscript, /etc/init.d/openmediavault-engined not found.
openmediavault-engined.service couldn't restart.

It looks like it tries to do some systemd stuff on an rc.d based system? Starting the daemon manually seems to work, though… but still… this ain’t the way it should operate. So my take: that image is no good. I guess I should try to whip up my own some time… and while at it, I might look at that kernel as well.

It is that time of year again, the dark days, that time when we’ve completed another lap around the sun. The time to reflect on the past 12 months, as I usually do around this time. I guess I can say 2020 has been an unusual year for everybody. It has been a year for the history books. Interesting times for future historians.

But let’s go back to the beginning, when 2020 started. A new decade began. It began with Brexit being our biggest concern. Drinking my Old Speckled Hen at the English pub, wondering whether I would still be able to drink it there a year later due to Brexit. Oh, little did I know I wouldn’t be drinking my ale in my pub for the rest of the year.

When the COVID-19 outbreak started in Asia, the rest of the world didn’t worry much. We saw the MERS and SARS outbreaks in the past, and those outbreaks didn’t pose much of a threat outside Asia, so we assumed it would be the same this time. We couldn’t have been more wrong. That nasty SARS-CoV-2 virus spread around the world faster than the blink of an eye. There was this one day in March, when the government announced the hospitality industry would be closed for three weeks. That’s how it all began.

A year ago, when I wrote my “Welcome to 2020” post, I uttered my annoyances about anti-vaxxers. In a pre-pandemic world, those were just annoyances, but in today’s world? They are in their right to put themselves at risk, I have no problem with that. I’ll get my shot as soon as I am offered one, so they won’t pose a threat to me. The problem is for those who cannot receive a vaccination due to a medical condition. Those who do not have the choice to get vaccinated are the ones being put at risk, and that’s my problem here. And mind you, this is not only a risk to immunocompromised people, but also for the rest of us, as the British mutation likely originated in an immunocompromised individual.

Compared to the rest of the world, the Netherlands was rather late with the whole mask thing. While in many countries people were wearing masks, here in the Netherlands, we did not. It wasn’t until June that masks became mandatory in public transportation, and only in public transportation. Since December, masks are mandatory in indoor public spaces. And then of course, there are a bunch of anti-maskers, going like “Don’t listen to the government, don’t wear the mask!”, thinking they’re anarchists. Sorry dude, you’ve got no idea what anarchism is about!

Like I said last year, the great civilisations of the past have fallen; why would we be any different? And that is where I am at: caught between hope and despair. But despair is only short term; hope is long term. When our civilisation goes down, from its ashes a new civilisation will rise.

’nuff of that shit. Let’s get back to what I’ve been up to last year. I’ve picked up my old hobby of doing online radio shows again, something I used to do, I believe, between 2004 and 2009. Might be a year off, but that’s details. When the pubs closed in March, I figured I’d go provide some entertainment. And so I started doing my shows on Friday nights at 20:00, the same time slot I had back in the days. However, I switched to Tuesdays at 20:00, as that fits better in my schedule. Tuesday night was the night I used to go to my British pub, drinking the Old Speckled Hen I started this post with.

What else have I been up to? I started working on some git repositories. I’ve made a repository for my embedded software development: it collects submodules with manufacturer libraries and other useful libraries such as ucglib, and I’m adding some Makefiles to ease building.

Furthermore, I’ve been writing some libraries. First of all, I’ve cleaned up and released my WS2812B library. Then I started working on a high level USB implementation. High level, as it’s not hardware specific, but implements the protocol at the logical level. The reason for this project: while ST releases their hardware abstraction library under a BSD licence, their USB “middleware” is under a non-free licence. Therefore, I decided to create my own high level implementation, under an MIT licence. I took some inspiration from how libopencm3 approached it, but they are LGPL licensed, which is basically GPL for embedded, as on embedded you always link statically.

My latest project is RFID related. I’ve based it on an Arduino library, UNLICENSE licensed. I’ve ported it back to C (I say back, as it seems to be derived from a library I used years ago, which was in C). It works and selects cards. So I am working on adding support for other PCD ICs. Right now I am working on CFRC622 support. For this I need to separate the PCD (reader) and PICC (card) code, and once I get the CFRC622 working, it should be easier to add the THM3060. I have the feeling I’ll be working on the RFID stuff for the coming months.

Another thing I’ll be doing is displays, stuff like the ST7735 and SSD1331. They’re supported by the ucglib library, and I’ve been using that in a project, but it was rather slow. To get around that, I’ve been writing a framebuffer implementation: render the image in RAM and transfer that to the screen in one go. That gives a huge speed increase. However, a full framebuffer requires a lot of RAM, so I’ve been rendering partial screens.
Well… I might put the code to do that stuff in a neat library as well.

I’m rambling again, ain’t I? Oh well… I guess that’s it for this year. After this false start for the 20’s, let’s try this again, let’s make this a beautiful decade! Happy 2021!

So, last Sunday, Daylight Saving Time ended, time to reset the clocks. Folks in the States do it this weekend, but well… this post ain’t about time. But when the clocks are reset anyway, it might also be the time to test the Residual Current Device.

I have a bunch of Single Board Computers running: a Raspberry Pi 1, 3 and 4, an Odroid U3, and a Pine64. I need to shut them down before yanking out the power, and when shutting them down, it is also a good time to make sure all updates are installed first, right?

Well… while installing updates on the Odroid U3, I noticed something strange:

error: command terminated by signal 4: Illegal instruction

After some googling around I found something that looks like this issue. It seems the C library the updates installed requires a newer kernel than the one I was running. The new kernel might be a dependency of the C library, and it might have been installed, but a new kernel doesn’t become active until a reboot.

So I rebooted the system, and it booted fine. However, since the previous attempt to install updates included those signal 4 messages, it seemed wise to reinstall all packages.

# pacman -Qqn | pacman -S -

While reinstalling everything, it prompted me to update the bootloader. And I said yes… well… I shouldn’t have; that would have saved me some time. I guess you know the result: the system didn’t boot. Somehow the new bootloader changed the boot configuration. I would have expected any U-Boot to load the boot.scr file and just boot from that. But somehow the device IDs were all wrong: it was trying to boot from mmc 0, while my SD card was mmc 1. I ended up tinkering with the boot scripts, first fixing the mmc device to 1, and then, rather ugly, setting the root partition to /dev/mmcblk0p1 rather than using the UUIDs. For some reason U-Boot didn’t read and/or pass the UUID correctly. I might do a complete reinstall and fix the boot configuration of my Odroid U3, but it has always been a mess, and when updating a mess, it becomes even messier.
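The workaround boiled down to something like the following U-Boot commands. This is a sketch from memory: the kernel and dtb paths are the usual ArchLinuxARM locations, and the dtb name is what I’d expect for an Odroid U3, so adjust to your setup:

```
setenv devnum 1                                    # SD card is mmc 1 on my board
setenv bootargs "root=/dev/mmcblk0p1 rw rootwait"  # ugly: device node, not UUID
load mmc ${devnum}:1 ${kernel_addr_r} /boot/zImage
load mmc ${devnum}:1 ${fdt_addr_r} /boot/dtbs/exynos4412-odroidu3.dtb
bootz ${kernel_addr_r} - ${fdt_addr_r}
```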

But after that excursion to U-boot, the system booted, and was up and running again, or at least, I thought so.

Later that Sunday, I was about to prepare the playlist for my radio show, which I do on Tuesdays. Yeah, I picked up my old hobby of hosting radio shows again; Radio BlaatSchaap has risen from the digital ashes when covid hit the world. But well, where was I? So, I was going to prepare some music, but then… I discovered I couldn’t access the music. Now, the music is stored on a USB hard disk connected to the Odroid, and is accessed over the network. So… what the ****? The system was up again, wasn’t it?

Turned out… the system entered a faulty state:

[   31.474178] blk_update_request: I/O error, dev mmcblk0, sector 137032 op 0x1:(WRITE) flags 0x800 phys_seg 1 prio class 0
[   31.485018] Buffer I/O error on dev mmcblk0p2, logical block 233, lost async page write
[   34.475437] s3c-sdhci 12530000.sdhci: Card stuck in wrong state! card_busy_detect status: 0xf00
[   34.478494] mmcblk0: recovery failed!

The SD card was misbehaving. Why? I had recently updated the kernel, but this is unlikely to be a kernel issue, as the system had been running on this kernel earlier that day. Inserting the SD card into my laptop gave similar errors.

[25440.796103] blk_update_request: I/O error, dev mmcblk0, sector 137144 op 0x1:(WRITE) flags 0x800 phys_seg 1 prio class 0
[25440.796112] Buffer I/O error on dev mmcblk0p2, logical block 247, lost async page write
[25443.799351] sdhci-pci 0000:24:00.1: Card stuck in wrong state! card_busy_detect status: 0xf00
[25443.799358] mmcblk0: recovery failed!

Further testing showed that reading succeeds, but writing fails. My best guess: the SD card is worn out. Reinstalling all packages caused many writes, and a few hours later, it was one write too many.

I created an image of the SD card and fsck’ed it. The file system was in good condition. Therefore I wrote the image to a new SD card, inserted it into the Odroid, and it was up and running again.
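For reference, the recovery roughly looked like this. Device names are from my setup, and the partition layout (root on p2) matches my card; double-check which /dev/mmcblk* or /dev/sd* node is the card before running dd:

```shell
# Image the dying card (reads still worked, only writes failed):
dd if=/dev/mmcblk0 of=odroid.img bs=4M status=progress
# Expose the image's partitions via a loop device and check the rootfs:
loopdev=$(losetup --find --show -P odroid.img)
fsck.ext4 -f "${loopdev}p2"
losetup -d "${loopdev}"
# Write the image to the NEW card:
dd if=odroid.img of=/dev/mmcblk0 bs=4M status=progress
```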

One curious fact: when inserting the SD card into a USB SD reader, rather than the native SD slot in my laptop, I am able to write to the card. Possibly the cheap Chinese SD reader fails to implement error checking?

At my home, I have a Raspberry Pi 3 connected to my amplifier. This Pi is running a PulseAudio server. This way, I can send the audio output from any machine in my network; at least, machines that are running Linux (or another *NIX).

I have attached an HDMI monitor to my Pi. This monitor also has audio support; however, I want the output to go to the analogue output, as that is where the amplifier is connected. Furthermore, my home network is both IPv4 and IPv6.

First, I disable the autodetection of audio hardware and replace it with a manual specification, so it will output on the analogue output rather than over HDMI:

#### Automatically load driver modules depending on the hardware available
#load-module module-udev-detect
#### Use the static hardware detection module (for systems that lack udev support)
#load-module module-detect

# Force PulseAudio to use analogue audio
load-module module-alsa-card device_id=1

To enable the server, I load module-native-protocol-tcp. With auth-ip-acl I set the access control list to allow connections only from my network. auth-anonymous=1 allows anonymous authentication, which removes the need for sharing cookies. Finally, I add listen=, which makes sure it only listens on IPv4 addresses, as otherwise the Raspberry Pi server would show up twice in the network.
Publishing in the network is done with the module-zeroconf-publish module.

# Adding listen= to force it IPv4 only, otherwise it will use both IPv4 and IPv6 and it appears twice
load-module module-native-protocol-tcp auth-ip-acl=; auth-anonymous=1 listen=  
load-module module-zeroconf-publish
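On the client side, nothing special is needed once the server is published. A sketch (the hostname and file name are just examples):

```shell
# Point a PulseAudio client explicitly at the Pi for one command:
PULSE_SERVER=raspberrypi.local paplay test.wav

# Or see what the Pi announced via zeroconf (requires avahi-utils):
avahi-browse -rt _pulse-server._tcp
```

With zeroconf discovery enabled, the Pi’s sink also shows up directly in the desktop’s sound settings.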

The title says it all: this is about installing ArchLinuxARM on Le Potato, a Single Board Computer by Libre Computer. Well… what can I say? I’ve got ArchLinux booting, but I’m not happy with the results yet. The thing is, I wanted to build my own U-Boot and create a package of it, and that’s the part that’s still failing…

So, for what I’ve got now: I downloaded one of their images (a Debian image) and replaced the partitions with my own. That way, I’ve got their U-Boot build booting the kernel I provided. It boots, so that’s fine. However… I want to be able to create a U-Boot build of my own, and be able to install it as a package. No luck with that yet.

First, I guess, I’ll note down the steps to create a bootable TF (aka micro SD) card for the thing. I had the bootloader fail using a 2 GB TF card, but with two different 8 GB cards these instructions create a bootable card.

Requirements: wget, uboot-tools

Create a file boot.txt with the following content:

setenv fdtfile amlogic/meson-gxl-s905x-libretech-cc.dtb 
setenv distro_bootpart 2
setenv devtype mmc

test -n "${distro_bootpart}" || setenv distro_bootpart 1
part uuid ${devtype} ${devnum}:${distro_bootpart} uuid
setenv bootargs "console=ttyAML0,115200n8 root=PARTUUID=${uuid} rw rootwait earlycon"

if load ${devtype} ${devnum}:${distro_bootpart} ${kernel_addr_r} /boot/Image; then
  if load ${devtype} ${devnum}:${distro_bootpart} ${fdt_addr_r} /boot/dtbs/${fdtfile}; then
    if load ${devtype} ${devnum}:${distro_bootpart} ${ramdisk_addr_r} /boot/initramfs-linux.img; then
      booti ${kernel_addr_r} ${ramdisk_addr_r}:${filesize} ${fdt_addr_r};
    else
      booti ${kernel_addr_r} - ${fdt_addr_r};
    fi;
  fi;
fi

Compile the boot script, write the bootloader (the first 2 MiB of the Debian image) to the card, and partition it:

mkimage -A arm -O linux -T script -C none -n "U-Boot boot script" -d boot.txt boot.scr
dd if=libre-computer-aml-s905x-cc-debian-stretch-headless-4.19.55+-2019-06-24.img of=/dev/mmcblk0 bs=1M count=2
fdisk /dev/mmcblk0

Type “o” to create a new MSDOS partition table
Type “n” to create a new partition
Type “p” for primary
Type “1” for number 1
Press “Enter” for the default start
Type “+100M” for a 100 MiB partition size
— If you are prompted to delete a signature, say yes
Type “t” to change the type
Type “C” to set the type to FAT32
Type “n” to create a new partition
Press “Enter” a couple of times to accept the defaults
— If you are prompted to delete a signature, say yes
Type “w” to write the changes

mkfs.fat /dev/mmcblk0p1
mkfs.ext4 -O ^metadata_csum,^64bit /dev/mmcblk0p2
mkdir /tmp/boot
mkdir /tmp/root
mount /dev/mmcblk0p1 /tmp/boot
mount /dev/mmcblk0p2 /tmp/root
bsdtar -xpf ArchLinuxARM-aarch64-latest.tar.gz -C /tmp/root
cp boot.* /tmp/boot
umount /tmp/root
umount /tmp/boot

Connect a USB TTL UART adapter to the TTL pins (next to the UART connector) and open it using, for example, PuTTY at 115200 bps.
Apply power to the board using the micro USB connector.

The board should boot into ArchLinuxARM.

Now… although this works… I still wish to be able to build my own U-Boot.
Since this board is Amlogic based, I figured I’d start from an Amlogic board that is already supported, the Odroid C2.
I have not succeeded in this yet. I will write about my attempts another time.