Change M365 Bookings calendar sender address

TL;DR Update UPN and wait at least 24 hours.

During setup of an M365 Bookings shared calendar, a company account is created with a user principal name (UPN) in the format of [email protected]. This UPN is used as the sender address for booking notifications. The problem is that the *.onmicrosoft.com email domain is often abused for spam, and many email providers (notably iCloud) have started rejecting any incoming email from that domain. Microsoft, meanwhile, has started to throttle use of that domain for sending email.

There are two ways to update the sender address:

  1. OWA mailbox policy
  2. Change UPN

Method (1) is Microsoft’s recommended method, but I prefer method (2) because of its visibility in Entra ID; anyone with a read-only account in the tenant can view the updated UPN in Entra ID, whereas only an M365 admin can view the mailbox policy. If a subdomain (e.g. [email protected]) is preferred over the company’s email domain (example.com), the subdomain needs to be added to M365 with MX (booking-example-com.mail.protection.outlook.com), SPF ("v=spf1 include:spf.protection.outlook.com -all") and DKIM DNS records, regardless of whether method (1) or (2) is used.
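For the subdomain case, the required DNS records look roughly like the fragment below. This is an illustrative sketch: the MX target follows the dashed-domain .mail.protection.outlook.com pattern above, and the DKIM CNAME targets must be taken from what M365 actually shows for your tenant (the <tenant> placeholder is hypothetical).

```
; Zone fragment for booking.example.com (illustrative)
booking.example.com.  3600  IN  MX   0  booking-example-com.mail.protection.outlook.com.
booking.example.com.  3600  IN  TXT     "v=spf1 include:spf.protection.outlook.com -all"
; DKIM: two CNAMEs (selector1/selector2) pointing at the values M365 displays
selector1._domainkey.booking.example.com.  IN  CNAME  selector1-booking-example-com._domainkey.<tenant>.onmicrosoft.com.
selector2._domainkey.booking.example.com.  IN  CNAME  selector2-booking-example-com._domainkey.<tenant>.onmicrosoft.com.
```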

If the booking calendar is embedded in the company website, the iframe link https://outlook.office365.com/owa/calendar/[email protected]/bookings/ needs to be updated to https://outlook.office365.com/owa/calendar/<new-upn>/bookings/.

Once the UPN is updated, it may take at least a day for the change to apply. In my experience, the new UPN was applied to new appointment notifications within hours, but it was only applied to cancellation notifications after 24 hours (or longer).

Opening a web browser from Java apps in WSL

Linux CLI tools typically rely on xdg-open, or fall back to the $BROWSER environment variable, to open the default browser, usually for OAuth flows that authorise the CLI tool without a permanent credential. Java apps behave differently: they try common browser executables in $PATH if the default browser cannot be located. To launch a web browser from a Java app in WSL, simply add a symlink in “/usr/bin/” that points to the browser executable in Windows; this way you don’t have to install a web browser in WSL.

# Examples, just choose one.
sudo ln -s "/mnt/c/Program Files (x86)/Microsoft/Edge/Application/msedge.exe" "/usr/bin/msedge"
sudo ln -s "/mnt/c/Program Files/Google/Chrome/Application/chrome.exe" "/usr/bin/chrome"
sudo ln -s "/mnt/c/Program Files/Mozilla Firefox/firefox.exe" "/usr/bin/firefox"

Programs that rely on OAuth typically listen on a random port to receive the credential during the last step of authorisation. Since WSL enables IPv6 by default, the program may listen on IPv6 only. I noticed this behaviour when I checked for listening ports with netstat -aon | findstr ":port-number" on Windows and the output showed [::1]:port-number without any 127.0.0.1. If the Windows host has IPv6 disabled, the program cannot receive the credential and authorisation fails. A workaround is to also disable IPv6 in WSL to force the program to listen on IPv4. Append these lines to “$home\.wslconfig” in Windows and restart WSL with wsl --shutdown:

[wsl2]
kernelCommandLine=ipv6.disable=1
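Once WSL is restarted, the effect can be verified from inside the distro (ss ships with iproute2; substitute the port your tool uses):

```shell
# Listening TCP sockets should now be IPv4 (0.0.0.0 / 127.0.0.1)
# rather than IPv6-only ([::] / [::1])
ss -lnt
```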


Importing intermediate certificate into Chromium/Cromite

Some websites serve only the leaf/server certificate instead of the usual certificate chain (leaf + intermediate). If a browser doesn’t already have the corresponding intermediate certificate (the one that signed the leaf) cached, this causes a certificate error.

To download the missing intermediate certificate, click on “Not secure” > “Certificate details” > Details tab > “Authority Information Access”, there should be a link next to “CA Issuers”.
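The file served at the “CA Issuers” URL is often DER-encoded rather than PEM; openssl can convert it before import. A small sketch, where interCA.der and interCA.crt are hypothetical filenames:

```shell
# Convert the downloaded DER certificate to PEM
openssl x509 -inform der -in interCA.der -out interCA.crt
# Sanity check: print the subject to confirm it is the expected intermediate CA
openssl x509 -in interCA.crt -noout -subject
```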

The easiest way to import the downloaded certificate into Chromium/Cromite is to use the built-in Certificate Manager (chrome://certificate-manager/localcerts/usercerts). If you import it with p11-kit (trust anchor --store interCA.crt) instead, Cromite may not necessarily trust it; in that case, enable “Use imported local certificates…” in Certificate Manager (chrome://certificate-manager/localcerts).

Extending LVM partition after disk expansion

  1. Boot GParted Live as this is best done offline.
  2. GParted may prompt to fix the GPT header due to metadata mismatch about the disk size, select “Fix”.
  3. Using the GParted program, deactivate the LVM partition.
  4. Resize the LVM partition by dragging the right-arrow to the end.
  5. Click tick ✓ to apply. Resizing should take only a few seconds, if it’s not finished within a minute, reboot GParted Live and repeat; this may happen if Steps 2-3 are skipped.
  6. Reactivate the LVM partition.
  7. Launch Terminal,
sudo -s
vgs
lvs
  8. vgs may show a non-zero VFree value, meaning the volume group contains unallocated space. lvs lists the volume group and logical volume; these values are used in lvresize. Ubuntu defaults to ubuntu-vg/ubuntu-lv; the slash is a separator, not an OR, so both names joined by a slash are required.
lvresize -l +100%FREE --resizefs VG-name/LV-name
vgs
  9. lvresize may fail due to a corrupted filesystem; skip this step if there was no error.
e2fsck -f /dev/VG-name/LV-name
resize2fs /dev/VG-name/LV-name
  10. vgs should now show a zero VFree value.
  11. Reboot.
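For reference, the same expansion can usually be done online without GParted Live. A sketch only, assuming the PV is partition 3 on /dev/sda and the Ubuntu default VG/LV names (verify with pvs, vgs and lvs first; these commands modify real block devices):

```
sudo growpart /dev/sda 3        # grow the partition table entry (cloud-guest-utils)
sudo pvresize /dev/sda3         # make LVM aware of the larger partition
sudo lvresize -l +100%FREE --resizefs ubuntu-vg/ubuntu-lv
```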

GnuPG 2.5 for Windows is now 64-bit only

After updating GnuPG to 2.5.16 using Chocolatey, I was no longer able to sign commits in WSL; signing failed with a pinentry error. “$HOME/.gnupg/gpg-agent.conf” was previously configured with pinentry-program "/mnt/c/Program Files (x86)/gnupg/bin/pinentry-basic.exe", which is now an invalid path. I updated it to:

pinentry-program "/mnt/c/Program Files/GnuPG/bin/pinentry-basic.exe"

Then restart the agent with systemctl --user restart gpg-agent.service.
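To confirm the agent restarted cleanly, it can be probed (a sanity check; gpg-connect-agent ships with GnuPG):

```shell
# Ask the running agent for its PID; a "D <pid>" reply confirms
# the agent restarted and is reachable
gpg-connect-agent 'GETINFO pid' /bye
```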

If Git and GnuPG are used in Windows, the gpg config in “$HOME\.gitconfig” should be updated to:

[gpg]
  program = C:\\Program Files\\GnuPG\\bin\\gpg.exe

GRUB 2.14rc1 supports LUKS2 + Argon2 disk encryption

I had always used the grub-improved-luks2-git AUR package to boot my LUKS2+Argon2-encrypted disk. Now that GRUB 2.14rc1 supports it, it’s time to switch back to the default package.

$ sudo pacman -S grub

pacman detected a conflict with grub-improved-luks2-git and prompted for its removal, which is expected. Then comes the most important part: the “/etc/default/grub” config was reset to the packaged default during installation, so I had to restore my own config. Thankfully, pacman made a backup at “/etc/default/grub.pacsave”, so I just needed to move it back.

$ sudo mv /etc/default/grub.pacsave /etc/default/grub

Reinstall and regenerate the GRUB configuration.

sudo grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id="Arch Linux" --recheck
sudo grub-mkconfig -o /boot/grub/grub.cfg

The bootloader-id value can be anything. The grub-mkconfig line can be replaced with just update-grub (without any options) if that command is available.

Importing FreeTube subscriptions to NewPipe/Tubular

In FreeTube, navigate to Settings > Data > Export Subscriptions > Export YouTube (.csv).

Transfer the csv file to your mobile device.
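For reference, the exported file follows the YouTube takeout subscriptions CSV layout, roughly as below (header names from memory, values made up):

```
Channel Id,Channel Url,Channel Title
UCxxxxxxxxxxxxxxxxxxxxxx,http://www.youtube.com/channel/UCxxxxxxxxxxxxxxxxxxxxxx,Example Channel
```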

In NewPipe/Tubular, navigate to Subscriptions > upper-left triple-dot > Import from > YouTube > Import File > choose the csv file.

The import will run in the background with notification. The app’s notification will clear once the import is complete.

Using vector tiles on Nextcloud Maps

Nextcloud Maps uses raster tiles by default, but it also supports vector tiles, which look nicer. Navigate to Nextcloud administration > Additional settings (<nextcloud-domain>/settings/admin/additional) > Maplibre settings. Set the style URL to https://tiles.openfreemap.org/styles/liberty.

The style url can be set to other map providers. I use OpenFreeMap because it’s free and doesn’t need an API key. It’s available in 4 styles.

Maplibre uses WebGL to render the vector tiles, so if your browser blocks WebGL by default, your Nextcloud domain (not openfreemap.org) needs to be allowlisted for it.

Separate markdown headings into pages

Previously the Threat Hunting page contained all search queries on one page, separated by headings. That approach was untidy, especially when arriving from a web search: after being redirected to the page, I still had to scroll to the relevant heading to locate the search query.

Each heading (i.e. each search query) is now a separate page, so once the new pages are indexed by search engines, a search result will lead directly to a page containing only the relevant query, e.g. FileFix detection.

from os import chdir, path
from re import S, findall, sub

chdir(path.dirname(path.abspath(__file__)))  # abspath handles running via a relative path

template = """---
title: {title}
layout: page
date: 2025-07-27
---
{content}"""

with open("index.md") as f:
    s = f.read()
    # https://stackoverflow.com/a/66619938
    for title, content in findall(r"(?:^|\n)##\s([^\n]+)\n(.*?)(?=\n##?\s|$)", s, S):
        # https://stackoverflow.com/a/74260791
        fname = sub(r"\W+", "-", title).strip("-").lower()
        with open(fname + ".md", "w") as w:
            w.write(template.format(title=title, content=content))
        with open("index-new.md", "a") as a:
            a.write(f"- [{title}]({fname})\n")

linux-firmware meta package on Arch Linux

Arch Linux’s linux-firmware is now a meta package. Its derivative Manjaro renamed it to linux-firmware-meta. The default set covers a wide range of firmware, much of which is not applicable to a given device and can be trimmed down.

  1. Remove the meta package, pacman -Rn linux-firmware
  2. Identify device manufacturer, lspci
  3. Remove irrelevant firmware, e.g. if no Nvidia device is present, pacman -Rns linux-firmware-nvidia
  4. Mark the necessary firmware as explicitly installed, pacman -D --asexplicit $(pacman -Qs -q linux-firmware | sed -z 's|\n| |g')