Posts Tagged ‘linux’

Diagnosing performance degradation under adverse circumstances

[This post is a few years old and was never published. Recently, I was reminded about memcached slab imbalance, which in turn reminded me of this post.]

At work, we encountered a sudden and precipitous performance regression on one particular page of a legacy application. It's a Perl web application, running under mod_perl, using ModPerl::RegistryLoader to compile scripts at server startup, and Apache::DBI to provide persistent database connections.

Our users suddenly began complaining about one particular page being "three times slower than normal." Later examination of the Apache logs showed a 20x(!!) slowdown.

Investigating this performance problem was interesting because we didn't have good access to the data we needed, and our technology choices slowed us down or outright prevented us from collecting it. Although we solved the mystery, the experience taught us several important lessons.
(more…)

Validating SSL certificates for IRC bouncers

An IRC bouncer is sort of like a proxy. Your bouncer stays online, connected to IRC, all the time, and then you connect to the bouncer using a normal IRC client. I connect to my bouncer over an SSL-encrypted connection, but until now I hadn't been validating the certificate. Validating the SSL certificate is critical for thwarting man-in-the-middle (MITM) attacks.

In a MITM attack, the victim connects to the attacker, thinking it is the service they want to talk to (the IRC bouncer in this case). The attacker then forwards the connection to the service. Both connections might use SSL, but in the middle, the attacker can see the plaintext. They can simply eavesdrop, or modify the data flowing in both directions. SSL is supposed to prevent that, but if you don't validate the certificate, then you don't know who you're talking to. I want to know I'm really talking to my IRC bouncer, so let's figure out how to validate that certificate.
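To see certificate validation by pinning in action without a live bouncer, here's a minimal sketch using the openssl CLI: generate a throwaway self-signed certificate standing in for the bouncer's real one (bouncer.example.com is a placeholder name), then verify a presented certificate against the pinned copy as the sole trust anchor.

```shell
# Generate a throwaway self-signed certificate, as a bouncer admin might.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=bouncer.example.com" \
    -keyout /tmp/bnc.key -out /tmp/bnc.crt 2>/dev/null

# Verify a certificate against the pinned copy as the only trusted CA.
# A MITM attacker's certificate would fail this check.
openssl verify -CAfile /tmp/bnc.crt /tmp/bnc.crt
```

Against a live server, you'd fetch the presented certificate with openssl s_client -connect host:port and check it against your pinned copy the same way.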
(more…)

Adding LUKS hard disk encryption on LVM after the fact

I have an external hard disk enclosure with two disks. I used Logical Volume Manager to create a single logical volume that spanned them, and slowly filled it to about 60% capacity. Lately, I've been trying to be more conscious of using encryption, and this was one area where I hadn't done so. At the time I felt like learning how to do LVM was enough, and LUKS could wait until later. Well, it's later. Here's how I added encryption after the fact (without a spare hard disk).

The warning

These are potentially dangerous commands, so you absolutely should back up your files before beginning. If you have sufficient spare media, consider not doing this conversion in place: just create a new, encrypted filesystem on the spare media and copy the files over.

The plan

In my case, the volume of data was such that I didn't have the resources to do so, and the value of the files was fairly low, so I just went ahead without a backup. The plan is to shrink the existing filesystem and logical volume, create a new logical volume in the freed space with an encrypted filesystem inside, then move files from the old filesystem to the new one, readjusting the allocation of disk space as needed.

The script

This was reconstructed from my bash history, and typed out by hand. Errors are certainly possible, so don't blindly reuse these exact commands. With that said, let's begin.

First, unmount the filesystem (FS), and shrink it to be as small as possible. Then, shrink the logical volume (LV) to match the FS's size:

umount /dev/vg0/usr-store
resize2fs -pM /dev/vg0/usr-store
lvreduce -L ? /dev/vg0/usr-store # from resize2fs

If resize2fs reports "2000 (4k) blocks" as the new filesystem size, the size for lvreduce is (2000 × 4 × 1024) / (1024 × 1024) ≈ 7.8 MiB; round up (here, to -L 8M) so the logical volume is never smaller than the filesystem it contains.
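The arithmetic is easy to get wrong at the prompt, so here's a quick sketch of the conversion (the 2000-block figure is just the example value; substitute whatever your resize2fs actually printed):

```shell
blocks=2000         # new size reported by resize2fs
block_size=4096     # "(4k) blocks"
bytes=$((blocks * block_size))
mib=$(( (bytes + 1048576 - 1) / 1048576 ))   # round UP so the LV can hold the FS
echo "lvreduce -L ${mib}M /dev/vg0/usr-store"
```

Rounding up matters: a logical volume even slightly smaller than the filesystem inside it means data loss.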

Now, allocate the space you just freed up by shrinking usr-store to a new LV:

lvcreate -l 100%FREE -n new-store vg0

Now, we'll add the encryption layer:

cryptsetup --verify-passphrase luksFormat /dev/vg0/new-store
cryptsetup luksOpen /dev/vg0/new-store new-store

And create a new filesystem. I chose ext4, which handles large files efficiently, but your workload might call for something different.

mkfs -t ext4 -m2 -O dir_index,filetype,sparse_super -L new-store-fs /dev/mapper/new-store

At this point, you have a new filesystem that's LUKS-encrypted and ready to use. We can now begin transferring files from the old filesystem to the new one.

mkdir /mnt/new-store
mount -t ext4 /dev/mapper/new-store /mnt/new-store
mount -t ext4 /dev/vg0/usr-store /mnt/usr-store
mkdir /mnt/new-store/mike
chown mike:mike /mnt/new-store/mike
rsync -a --remove-source-files /mnt/usr-store/mike/Files /mnt/new-store/mike
# wait... wait... wait...
find /mnt/usr-store/mike/Files -type d -empty -delete

If you filled the new filesystem, then you'll need to readjust the disk space allocation by shrinking the old filesystem & LV, and growing the new filesystem & LV.

umount /dev/vg0/usr-store
resize2fs -pM /dev/vg0/usr-store
lvreduce -L ? /dev/vg0/usr-store
mount -t ext4 /dev/vg0/usr-store /mnt/usr-store
 
lvextend -l +100%FREE /dev/vg0/new-store
cryptsetup resize new-store
resize2fs -p /dev/mapper/new-store

Now you can continue moving files.

Once the old filesystem is empty, simply remove the logical volume that contains it, and rename the new encrypted volume and LV. You can optionally extend the FS & LV to fill the rest of the space now as well. I chose to use the space for a new LV I'll be using for backups instead.

umount /dev/vg0/usr-store
lvremove /dev/vg0/usr-store
 
lvrename vg0 new-store usr-store
dmsetup rename new-store usr-store
rmdir /mnt/new-store
mount -t ext4 /dev/mapper/usr-store /mnt/usr-store

Introducing mvr: like mv, but clever

I wanted to move a large number of files from one directory to another, but many of the filenames were already taken in the target directory. This is a common enough problem -- digital cameras use DSC#### names, video downloaders often append numbers to get a unique filename, and so on. In both of those examples, the sequence restarts when you empty the program's work directory, so you'll end up with DSC0001.jpg every time you empty your camera's memory card. If you're trying to move such files into a single directory, you'll get conflicts every time.

Instead of manually renaming the files before transferring them, I wrote a simple script to give each file a unique name in the destination directory.
(more…)

Book review: "Coding Freedom" by Gabriella Coleman

I've just finished reading Gabriella Coleman's new book "Coding Freedom: The ethics and aesthetics of hacking" (2013, Princeton University Press), the culmination of more than a decade of field research, in-depth interviews, observation, and participation in the hacker scene worldwide. In this case, "hacker" refers to the free/open-source software (FOSS) hacker, and in particular the Debian project.
(more…)

Automating server build-out with Module::Build

At Pythian, we have one application that is composed of several components, the deployment of which needs to conform to our slightly peculiar server setup. Until recently, this required manually deploying each component. I did this a couple weeks ago, and it took me something like 40 hours to figure out and complete. As I went, I started reading up on Module::Build, trying to figure out how to automate as much as possible. It turns out that this core module gives us a surprisingly powerful tool for customized deployment. First, it will help to understand a few aspects of how our code is deployed. (more…)

Introducing File::Symlink::Atomic

In Tips & tricks from my 4 months at Pythian, I showed how to give a symlink a new target atomically. I wasn't aware of any module to encapsulate that, so I quickly put together File::Symlink::Atomic.

This module is useful because it eliminates the need to know how to do this safely. Simply

use File::Symlink::Atomic;

and you get a drop-in replacement for CORE::symlink. It creates a temporary symlink (using File::Temp to get a unique pathname) pointing to your new target, then moves it into place with a rename call. On POSIX systems, the rename system call guarantees atomicity.
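The same trick works from the shell. Here's a sketch of the swap (directory names are made up, $$ stands in for File::Temp's unique-name generation, and mv -T is GNU coreutils; the final step is a single rename(2)):

```shell
cd "$(mktemp -d)"
mkdir old-target new-target
ln -s old-target current       # the "live" symlink

tmp="current.tmp.$$"           # unique-enough temporary name ($$ = PID)
ln -s new-target "$tmp"
mv -T "$tmp" current           # atomically rename over the old link
readlink current               # new-target
```

At no point does a reader of "current" see a missing or half-made link, which is the whole point of the rename-based approach.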

I put it on PrePAN to get some advice. I have no clue what that'll do on any non-POSIX systems that have symlinks (if the OS doesn't do symlinks, I can't help you). Is a rename call universally atomic? If not, how can I detect those platforms, and provide that atomic guarantee some other way?

I didn't get any feedback, so I chose to simply release the module. It's now on CPAN. Enjoy!

Tips & tricks from my 4 months at Pythian

After working with Yanick Champoux on a few little Perl projects here and there, we finally met face-to-face at YAPC::NA last summer. A few months later, when I was looking for a co-op position, I immediately thought of Pythian. (more…)

Wherein I realize the bliss of writing init scripts with Daemon::Control

Init scripts are annoying little things - almost entirely boilerplate. Here's how I learned to stop struggling, and love Daemon::Control to control my daemons.

The module really is as simple as the synopsis - you describe the daemon, have it write an init script (which actually just runs your Daemon::Control script) for you, then update-rc.d and you're golden. It really is that simple. (more…)

Trimming whitespace in gedit with Perl

With gedit plugins, you can turn this simple text editor into a lightweight IDE. It's fast, has good syntax highlighting, and can gain code completion, shell integration, and many similar features you might expect from an IDE. One feature it lacked was trimming whitespace from files. I searched for plugins to do this and found several, but none quite met my expectations, because none were configurable. I typically want my files to end with one and only one newline. Of course, the solution is Perl. (more…)

Understanding load averages

Load averages are at once hugely simple and hideously complex. Understanding what these numbers mean is important for correctly applying this simple indicator of system health.

First, a load average is not a CPU percentage. CPU percentage is simply a snapshot of how often a process was found executing on the CPU. The load average differs in that it includes all demand for CPU, not just what is currently running.

A useful analogy

A four-processor machine can be visualized as a four-lane freeway. Each lane provides the path on which instructions can execute. A vehicle can represent those instructions. Additionally, there are vehicles on the entrance lanes ready to travel down the freeway, and the four lanes either are ready to accommodate that demand or they're not. If all freeway lanes are jammed, the cars entering have to wait for an opening. If we now apply the CPU percentage and CPU load-average measurements to this situation, percentage examines the relative amount of time each vehicle was found occupying a freeway lane, which inherently ignores the pent-up demand for the freeway -- that is, the cars lined up on the entrances.
The load average gives us that view because it includes the cars that are queuing up to get on the freeway. It could be the case that it is a nonrush-hour time of day, and there is little demand for the freeway, but there just happens to be a lot of cars on the road. The CPU percentage shows us how much the cars are using the freeway, but the load averages show us the whole picture, including pent-up demand.
Ray Walker – Examining load averages

So, perfect utilization of a single CPU gives us a load average of 1.00, while anything above that represents unmet demand and anything below it represents unused capacity. For a two-core machine, the perfect load average is 2.00 (1.00 for each of two cores). This is unfortunate, since one needs to know how many cores are available to make sense of the values. (Technically, a load average of 0, meaning no demand at all, gives the best responsiveness, but that isn't typically possible outside embedded systems.)
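A quick sketch of doing that normalization yourself (Linux-specific: /proc/loadavg and nproc are assumed to exist):

```shell
# The first three fields of /proc/loadavg are the 1-, 5-, and 15-minute averages.
read -r one five fifteen _ < /proc/loadavg
cores=$(nproc)
awk -v load="$one" -v c="$cores" \
    'BEGIN { printf "1-min load per core: %.2f\n", load / c }'
```

A per-core value near 1.00 means the machine is fully utilized; well above 1.00 means demand is queuing.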

uptime and other tools like w provide three load averages: 1-, 5-, and 15-minute averages. They're actually exponentially-damped moving averages, since recent load affects current system performance more than old load does. Note that they're presented most-recent first, which is the wrong order for intuiting a trend at a glance. For the mathematics behind calculating the load averages, read UNIX Load Average Part 1: How It Works.

Conclusion

Load averages show how much demand for CPU there is (run queue length), not simply how much use there is. Consequently, load averages provide a more sophisticated measure of system utilization than CPU percentage. However, one must know the number of CPUs to understand the load average.

Further reading

If Linux were popular

I understand that some people have problems switching from Windows to Linux. Since all my friends keep telling me how great Windows is, I thought I'd try an experiment.
(more…)