Kilala.nl - Personal website of Tess Sluijter


Proxmox e1000 hardware hang

2025-09-29 21:43:00

I'm running Proxmox on an older Optiplex 7020 Micro, so my networking is limited to the onboard Intel e1000 Ethernet card. So far it's treated me well, except ...

Except for those situations where I'm synchronizing tens of gigabytes with Syncthing, which runs on an Ubuntu VM in Proxmox. :)

e1000e 0000:00:1f.6 enp0s31f6: Detected Hardware Unit Hang

That's what the e1000e kernel module starts throwing at me, at which point all network connectivity dies completely until I reboot.

It seems this has been going on since at least 2019 and it's a known issue with the older e1000 hardware when it's pushed at modern-day throughput. Here's a more recent thread on Reddit.

Supposedly a work-around for the issue lies in disabling certain on-card functionality, forcing it back to the CPU. Let's see if my connection remains stable now. 
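For the record, the work-around boils down to something like this; a sketch only, since reports differ on exactly which offloads need to go, with the interface name taken from the kernel log above:

ethtool -K enp0s31f6 tso off gso off

To make it stick across reboots on Proxmox/Debian, a post-up line in /etc/network/interfaces for that interface should do, e.g. "post-up /usr/sbin/ethtool -K enp0s31f6 tso off gso off".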



Syncthing as alternative to OneDrive, Dropbox etc.

2025-09-23 22:12:00

After getting a proof-of-concept running for my self-hosted email server, I'm ready for my next step in "stepping away from big tech". A lot of my files are on OneDrive and I'm getting antsy about it. 

Ideally I'd use Synology Drive and Synology MailPlus as the enterprise-grade replacements for MS365 and OneDrive. But getting the gear ready has some requirements I can't fulfil just now.

So I'm testing alternatives, just like Ernie Smith in 2022.

After a few hours of fiddling I have another hardened Ubuntu server, running as the centralized Syncthing server. I've also set up a local/private Syncthing Discovery server, to make it easier for the server and the systems in the ecosystem to find each other. The alternative would be to use the public Syncthing Discovery servers, through NAT and port forwards, thereby opening up my infrastructure to the Internet.

I don't want that. I want it all to work via my VPN. And it does. :) 
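For reference, a minimal sketch of the discovery side, assuming Syncthing's own stdiscosrv binary and a hypothetical internal hostname:

stdiscosrv -listen ":8443"

On startup stdiscosrv prints its device ID. Each client then gets pointed at the private server instead of the public pool (Settings > Connections > Global Discovery Servers), with something along the lines of https://syncthing.internal.example:8443/?id=<device ID printed by stdiscosrv>.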

My MacBook is currently synced with the Ubuntu server at the office. The small amount of data I chucked back and forth looks okay. Before I start putting important data on there, it's time to ensure full drive encryption on the server and to get backups up and running.

 



Some notes on the final TLS pieces to the SMTP puzzle

2025-09-23 22:01:00

While setting up all the DNS records needed for the Mailcow server, I ran into a few small snags. Troubleshooting was made so much easier, thanks to three tools:

Setting up my DANE record was eased by using the tool at Huque.com.
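For my own notes, the gist of what that tool computes; a sketch with openssl, assuming a DANE-EE "3 1 1" (SPKI, SHA-256) record for the SMTP host:

openssl x509 -in cert.pem -noout -pubkey | openssl pkey -pubin -outform DER | openssl dgst -sha256

The resulting hash then gets published under the mail host's SMTP port, roughly: _25._tcp.mail.example.com. IN TLSA 3 1 1 <sha256 hex from the command above>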

Michael Lucas' "Run Your Own Mail Server" book has been an indispensable read! It clarified quite a few things about the email ecosystem that I didn't yet understand.



My bad: Mailcow message refused when sending email to GMail

2025-09-21 22:56:00

Another hour, another case of PEBCAK.

Emails from my self-hosted Mailcow instance to GMail were getting bounced with this error report:

status=bounced (host gmail-smtp-in.l.google.com said: 
550-5.7.26 Unauthenticated email from broehaha.nl is not accepted due to
550-5.7.26 domain's DMARC policy. Please contact the administrator of
550-5.7.26 broehaha.nl domain if this was a legitimate mail. To learn about the
550-5.7.26 DMARC initiative, go to
550 5.7.26  https://support.google.com/mail/?p=DmarcRejection

It took a little fiddling, but when I tested with the MXToolbox.com email deliverability tool I found out what was wrong.

While Mailcow was signing my emails with DKIM, the recipient wasn't able to match the signature to my published key. I had erroneously followed the example of another domain of mine, publishing the key as "s1._domainkey". That was wrong. 

Mailcow's admin interface specifically tells you the selector to use in the DNS name. It should be "dkim._domainkey" (or is it "email._domainkey"??). I made both of those and now it works! Gmail validates my signatures, MXToolbox says it's fine too... and even another domain of mine no longer puts the emails in the junk folder. Nice!
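A quick way to double-check what the rest of the world sees, assuming the selector really is "dkim" and using a placeholder domain:

dig +short TXT dkim._domainkey.example.com

The answer should be the DKIM public key record, roughly "v=DKIM1; k=rsa; p=MIIBIjANBgkq...", matching what Mailcow shows in its admin interface.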



False alarm: Mailcow message refused when sending to Outlook

2025-09-21 22:45:00

I'm setting up Mailcow for self-hosted email. Lots of moving parts and one of the stranger things was this bounce message when emailing to my MS365 account.

<tess@${MyDomain}>: host ${MyDomain}.e-v1.mx.microsoft[52.101.68.18] said:
 550 5.7.1 Service unavailable, Client host [178.zzz.yyy.xxx] blocked using
   Spamhaus. 

That IP address was not something I outright recognized as my own. I know exactly which IP address is tied to my static fiber connection, so I was confused about this one! A reverse DNS lookup didn't help either; the IP didn't resolve back to a name.

It was only while doing some Rubber Ducky Debugging on the Angry Nerds Discords that I realized what was happening: that IP address belongs to my fallback 4G GSM router! 

More than two years ago I'd set up a Teltonika RUT241 as the secondary WAN interface for my Unifi router. It works like a charm, and load balancing makes it possible to have faster downloads than my single fiber connection allows.

But load balancing also meant that some emails were being sent over the GSM WAN. And that's not cool. :D 

It was easily fixed by adding a policy-based route, ensuring that all hosts in the DMZ would only communicate with the Internet via the fiber on WAN1.
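Unifi handles this in its GUI, but for reference, the same idea on a plain Linux router would look roughly like this, with a hypothetical DMZ subnet, WAN1 gateway and routing table number:

ip route add default via 203.0.113.1 dev eth1 table 100
ip rule add from 192.168.50.0/24 table 100

In other words: traffic sourced from the DMZ always consults a routing table whose only default route points at the fiber gateway on WAN1.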



Self-hosting Mailcow, behind NAT, with Lets Encrypt TLS certificates

2025-09-21 21:13:00

The past few days I've been slowly piecing together a self-hosted email and productivity platform, just to see if I could. This project has a lot of moving pieces, which all need to sync up, so I don't look like a spammer to the outside world.

One of the things I need is a properly trusted certificate. One free and easy way to get one is through Let's Encrypt. Their Certificate Authority takes a very hands-off approach, in that it's aimed at high automation through the ACME tooling.

With ACME there are a few ways you can prove that you're actually requesting a certificate for a domain you own. By far the easiest is to have an ACME client run as a scheduled job on your public web server. If that's not an option, you can also provide the client with API access to your DNS records. And if that's not an option either? Well...

As it usually goes, I'm making things more difficult than I have to. 

I could use the easiest approach, running the ACME client alongside the web server for my email platform. The trouble is that this web server also offers the webmail as well as the admin functionality for the platform. I want neither of those on the web; I want them 100% in my private network only. And there's no easy way to split them.

I could also look into API access to my TransIP DNS account and in the long term I probably will. But not for now.

So I had to go the third way: ACME manual DNS mode. This means that I need to manually run the acme.sh client, then manually publish the DNS record the validation expects, then rerun acme.sh to get the certificate. And I need to repeat that at least every fifty days. Here's some useful acme.sh documentation.

The process on my server was as follows. First I edited mailcow.conf to set "SKIP_LETS_ENCRYPT=y". This is per Mailcow's own documentation.

I then ran:

git clone https://github.com/acmesh-official/acme.sh.git /opt/acme.sh

echo 'export PATH="${PATH}:/opt/acme.sh"' >> ~/.bashrc

chmod +x /opt/acme.sh/acme.sh

acme.sh --issue -d mail.${MyDomain} --dns --yes-I-know-dns-manual-mode-enough-go-ahead-please
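That command prints the TXT record to create; roughly the following, with the actual token coming from acme.sh's output, and propagation can be checked with dig before continuing:

_acme-challenge.mail.${MyDomain}. IN TXT "<token printed by acme.sh>"

dig +short TXT _acme-challenge.mail.${MyDomain}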

Then I went ahead and created the requested DNS record. After waiting for it to propagate, I ran the following. The export/install instructions were found here.

acme.sh --set-default-ca --server https://acme-v02.api.letsencrypt.org/directory

acme.sh --issue -d mail.${MyDomain} --dns --yes-I-know-dns-manual-mode-enough-go-ahead-please --renew

acme.sh --install-cert -d mail.${MyDomain} \
--cert-file      data/assets/ssl/nochain.pem \
--key-file       data/assets/ssl/key.pem \
--fullchain-file data/assets/ssl/cert.pem

docker compose restart

In the ideal case, I'd have a way for the built-in ACME registration to work without exposing the web interface(s), but the developers have already indicated they won't be building that any time soon.



Isolation of the "undesirables"

2025-09-17 21:54:00

Isolation of the undesirables. That is what any oppressive government will hope to achieve. 

Quoth Ronny Jackson, Republican representative from Texas:

"We have to treat these people. We have to get them off the streets, and we have to get them off the internet, and we can't let them communicate with each other. I'm all about free speech, but this is a virus, this is a cancer that's spreading across this country,"

Isolation, sequestering the undesirables from each other and the rest of the country. 

Right now it is us, your transgender fellow citizens.

Then it will be the gays. Or the neurodivergent. Or the artists. Or the handicapped. Or the ill. Or those of religions deemed unworthy. 

Remember "First they came", by Martin Niemöller.



Revisiting Linux PAM: does authentication really require root access to /etc/shadow?

2025-09-05 20:53:00

Consider this part 2 of a two-part series. Here is part 1.

While researching the use of PAM for authentication I was surprised: I thought the PAM API enabled any application to use the local Linux authentication files in an easy and safe way.

Taken literally that statement is true. But it lacks one detail: it doesn't mention the access rights needed by the application!

To my brain "any application" literally meant every application running on Linux. I hadn't considered limitations! I'd thought PAM was an active mediator, but instead I'm now reminded that it's a set of programming libraries. There is no PAM service which performs authentication for you; you have to do it yourself.

PAM and Apache web server

Because I really can't work with Java, I thought I'd find an example that's closer to my understanding: the Apache web server.

There is more than one Apache module to authenticate against PAM, the most popular being mod_authnz_pam.

I started with a walkthrough like this article on Siberoloji. It's outdated and I needed to make a lot of changes to the workflow (which I will turn into a lab for my Linux+ class!). 
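For reference, a minimal sketch of the Apache side as I ended up understanding it, assuming Ubuntu's libapache2-mod-authnz-pam package and the standard "login" PAM service (the exact config may differ from the walkthrough):

     sudo apt install libapache2-mod-authnz-pam
     sudo a2enmod authnz_pam

     # /etc/apache2/conf-available/private-pam.conf
     <Directory /var/www/html/private>
         AuthType Basic
         AuthName "Private"
         AuthBasicProvider PAM
         AuthPAMService login
         Require valid-user
     </Directory>

     sudo a2enconf private-pam
     sudo systemctl reload apache2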

I soon had Ubuntu running Apache, with /var/www/html/private being blocked off by PAM authentication. A curl without any credentials immediately led to a 401 Unauthorized (as expected). Trying curl with basic authentication, my username and a wrong password also gave the 401. Good.

But when I tried my username with the real password, I also wasn't let in:

     unix_chkpwd[4159]: check pass; user unknown
     unix_chkpwd[4159]: password check failed for user (tess)
     apache2[4076]: pam_unix(apache2:auth): authentication failure; logname= uid=33 euid=33 tty= ruser= rhost=127.0.0.1 user=tess

Most assuredly my user is not unknown! It's right there in /etc/shadow.

Looking at the code and the documentation for the Apache module, I see that it uses a compiled C binary, unix_chkpwd. The documentation for this tool clearly states that it must be run as the root (uid=0) user account, otherwise you cannot authenticate users other than yourself.

On systems with SELinux, you even need to set an additional boolean to allow the webserver to work with PAM: 

     setsebool -P httpd_mod_auth_pam 1

It still wasn't clicking for me: surely you should be able to use PAM and unix_chkpwd without root access, right?!

And that's when I found a blog post that I had written in October of 2020. It was literally the second hit in Google and Ecosia! I had already done this exact research four years ago! It was also discussed on RedHat support.

As a test, I modified the Apache user account to be in the shadow group:

     sudo usermod -aG shadow www-data

And presto, now I can authenticate with my real username and password.

One final test: the LibPAM proof of concept

I thought I'd give it one final shot! I turned to the PAM developer documentation, which has a very simple demo program to perform a username/password check.

To build the application on Ubuntu, I installed the libpam0g-dev package via APT. I then compiled it by running:

     gcc pamtest.c -lpam -lpam_misc -o pamtest.o

If I run pamtest.o without any parameters, it defaults to user "nobody". And that gives results we've seen before:

     unix_chkpwd[83004]: check pass; user unknown
     unix_chkpwd[83004]: password check failed for user (nobody)
     pamtest.o[83002]: pam_unix(check_user:auth): authentication failure; logname=tess uid=1000 euid=1000 tty= ruser= rhost= user=nobody

If I run pamtest.o with my username as the argument, it works perfectly.

If I run "sudo ./pamtest.o www-data" and I give it that user's password, it also works. 

Conclusion

Yes, Tess. PAM authentication really does need privileged read access to /etc/shadow. You can't escape it. 

See you in four years, when you've forgotten this again. ;)

Lab exercise: Apache and PAM authentication

I've added a lab exercise to day 14 of my Linux+ class. It walks you through setting up a protected website which uses the local Unix user database for authentication.

The PDF is here on Github; the lab starts on slide number 48.



Linux PAM authentication for Java applications

2025-09-05 20:20:00

Consider this part 1 of a two-part series. Here's part 2.

One of my students at school, a Java developer in the Linux Essentials class, asked me a smart question: "Could you use the Linux user database for authentication in your Java application?"

It's a clever idea, since Linux already takes care of secure storage and password hashing, and it has tools for user management. It's not a bad question! So let's discuss the topic, first from the Java side, ending with a conclusion and some suggestions. Then in part 2 I'll look more into the Linux side.

Linux authentication

Yes, Linux offers a standard API for integration with its authentication and authorization implementation, PAM: Pluggable Authentication Modules. A lot of software already makes use of PAM, most notably in your daily life: SSH, su, sudo, FTP and so on.

But Java and Spring? Not so much.

PAM libraries for Java

Normally, the first thing that I do is turn to the Maven online registry to see if there are libraries out there that match my search query. Oddly, I didn't find anything at all when it came to Java + PAM.

So I dug around a little bit, to see if it really has never been done. I stumbled upon the JenkinsCI Github, which has a plugin that allows you to use local Linux authentication for your Jenkins interface using PAM. The JenkinsCI project people are no newbies; they are experienced developers! Surely they know what they're doing, right?

Diving into their source code, I see they're importing org.jvnet.libpam, which provides classes such as org.jvnet.libpam.UnixUser. Sounds interesting, especially since this plugin had its last update this year.

The weird thing is that libpam is completely unknown on MVNRepo. I find that very strange! Where is it coming from?

After tracking down a lot of historical documents and messages, I found that this library used to be published on https://java.net/projects/libpam4j/. That website was shut down in 2016, but luckily the Internet Archive still has the last snapshot available.

The historical snapshot shows the author of the library is/was Kohsuke Kawaguchi, a name that also pops up in the JenkinsCI plugin source code, as kohsuke. Searching Github, I find a repository which seemingly is the modern-day home of the project: https://github.com/kohsuke/libpam4j

The project had its last update seven years ago, in 2018. That was right after a bad security vulnerability was discovered in libpam/libpam4j in 2017.

Because my understanding of the Java programming language is almost nil, I'm having a hard time understanding how it really works. After some reading, I have concluded that the libpam4j Java library is a wrapper around the native C PAM libraries found on a number of Unix-like operating systems. Big hints can be found in this file and the other implementation files.

For now I'll accept, in a Jedi-mindtrick-handwave-style, that the library works and does indeed correctly perform PAM authentication. 

Initial conclusion

Yes, there is a Java library that lets you integrate with Linux PAM for authentication and authorization. However, it's somewhat basic and it has fallen into disrepair. It might still work, it might not. The lack of maintenance since 2017 could be problematic. 

Further analysis, advice and warnings

If you'd ask StackOverflow or an AI coding tool like Claude or Copilot about Java and Linux PAM, they would probably serve you a quick-and-dirty example of using libpam. They won't warn you about the things I described above, nor will they tell you about a few downsides.

Besides the lack of maintenance, there are other considerations to keep in mind.

  1. In order to use PAM for authentication, your application needs read access to /etc/shadow, the Linux file which holds the real user password hashes. Your application must either run as the "root" user with uid=0, or your application's user account needs to be a member of the (secondary) group "shadow". 

    This is undesirable. /etc/shadow holds the accounts of system administrators and of other highly privileged processes running on the server. If a flaw in your application allows for RCE (remote code execution) or LFI (local file inclusion), it would allow an attacker to steal all the password hashes of these users.

    More about this in part 2!

  2. If you're building a Java application, it's not likely that the users of your application also need access to the Linux server itself. For example, if you're building an API for data exchange, or a blog or webshop, the people who use the application are likely not employees of your organisation. 

    Mingling their accounts with the others in /etc/shadow means they will have a presence on the server. Unless you prevent logins through means like giving them /sbin/nologin as their login shell, these people could log in to the server. You really don't want that (see the sketch after this list). 

  3. PAM doesn't just authenticate and authorize; in many cases it also allows users to change their password. This requires write access to /etc/shadow, which definitely needs uid=0 (root) access. Really a bad idea.

  4. In this modern day and age, your Java application will not run on a long-lived Linux server. Instead, it will run in a VM which can be rebuilt frequently, or in a container which gets constantly rebuilt or moved. There will not be a solid local Linux user database in those cases!
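To illustrate point 2: a hypothetical example of keeping such an application-only account away from an interactive shell (the nologin path varies per distro; it's /usr/sbin/nologin on Debian/Ubuntu):

     sudo usermod -s /sbin/nologin webshop-customer-1234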

Final conclusion

Yes, it's possible. But you really don't want it. Instead, use another, modern standard for your authentication. Hook into LDAP, Kerberos, OAuth or, if worse comes to worst, build your own database table.

