Wake and suspend machines on local Ethernet LAN

Suppose you have a spare PC (say Machine A) that supports Wake-on-LAN. You can first enable it by following this guide. Then you can write some scripts on other local machines connected over Ethernet to remotely suspend/hibernate Machine A and wake it up as you want.


Waking up A is simple. Suppose A’s MAC address is 11:aa:22:bb:33:cc and its hostname is “machine-a”, and you want to wake it up from another machine (e.g. Machine B) on the LAN. First install a wake-on-LAN utility on Machine B.

On Debian/Ubuntu (either package provides a suitable utility):

$ sudo apt-get install etherwake

or

$ sudo apt-get install wakeonlan

On Fedora/RHEL (ether-wake is provided by net-tools):

$ sudo yum install net-tools

On OS X:

$ brew install wakeonlan

To send magic Wake-on-LAN packets to A, run whichever utility you installed:

$ etherwake 11:aa:22:bb:33:cc

or

$ ether-wake 11:aa:22:bb:33:cc

or

$ wakeonlan 11:aa:22:bb:33:cc
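The magic packet these utilities send is simple to construct by hand: 6 bytes of 0xFF followed by the target’s 6-byte MAC address repeated 16 times (102 bytes total), broadcast over UDP (commonly port 9). A minimal Python sketch, using the example MAC address from above (the function names here are mine, not from any of the utilities):

```python
import socket

def build_magic_packet(mac):
    # 6 bytes of 0xFF, then the 6-byte MAC repeated 16 times: 102 bytes total
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    assert len(mac_bytes) == 6, "MAC address must be 6 bytes"
    return b"\xff" * 6 + mac_bytes * 16

def send_magic_packet(mac, broadcast="255.255.255.255", port=9):
    # Broadcast the packet over UDP; the target NIC recognizes its own MAC in the payload
    packet = build_magic_packet(mac)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.sendto(packet, (broadcast, port))
    sock.close()

print(len(build_magic_packet("11:aa:22:bb:33:cc")))  # 102
# send_magic_packet("11:aa:22:bb:33:cc")  # uncomment to actually send
```

The suspend direction is usually easiest over SSH, e.g. `ssh machine-a "sudo pmset sleepnow"` on OS X (substitute your system’s own suspend command on Linux).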



Apache name-based virtual hosts and reverse DNS

On some operating systems such as Ubuntu, you can set up virtual hosts quite easily with the default Apache server (installed with APT). The global configuration file /etc/apache2/apache2.conf contains the following as its last line by default:

Include sites-enabled/

This means all virtual host configurations are loaded from the /etc/apache2/sites-enabled directory in alphabetical order. If you list this directory, you will find there is already a symbolic link 000-default that points to /etc/apache2/sites-available/default. This default virtual host configuration file does not contain a ServerName directive, and neither does the global configuration file (/etc/apache2/apache2.conf) by default. The default ports.conf contains the following line:

NameVirtualHost *:80

This means all subsequent name-based virtual hosts follow this NameVirtualHost directive, which accepts requests on port 80 of all interfaces on the server. [1]

Basically, you can place your own virtual host configuration file in the /etc/apache2/sites-available directory and use the a2ensite command to enable it (by creating a symbolic link in /etc/apache2/sites-enabled that Apache will load). If you own a domain example.com and a subdomain wiki.example.com that both point to the IP address of this server, you can create a virtual host with ServerName wiki.example.com (the argument to <VirtualHost> is *:80 in this case). If the configuration filename is wiki, you can run a2ensite wiki to enable it.

Given these, let’s see how Apache handles requests for virtual hosts. When a request arrives, the server will find the best (most specific) matching <VirtualHost> argument based on the IP address and port used by the request. If there is more than one virtual host containing this best-match address and port combination, Apache will further compare the ServerName and ServerAlias directives to the server name present in the request. [2]

If you try to visit wiki.example.com, you actually request the IP of this server. Apache handles the request by comparing the active virtual host configurations. It first checks 000-default but finds no ServerName directive (and, as mentioned, the global configuration has none either). As a result, it performs a reverse DNS lookup to determine the ServerName. [3] If you have not set up reverse DNS mapping this IP to wiki.example.com, the match fails and Apache continues to the next virtual host, wiki, which does have the ServerName wiki.example.com, and this is a match. Apache will respond with this virtual host.

However, if you have recently set up reverse DNS mapping your server’s IP to wiki.example.com, the outcome differs. When you access wiki.example.com, Apache first checks 000-default, performs a reverse DNS resolution, finds wiki.example.com as the resulting ServerName, and this is a match. Apache responds with 000-default instead of wiki, and you will see the web page for 000-default rather than wiki. To solve this problem, you can either set a global ServerName of example.com in /etc/apache2/apache2.conf or specify a ServerName for 000-default. That way, Apache will match the right ServerName for your request even if you set up reverse DNS for one of the virtual hosts.
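For reference, a minimal configuration for the wiki virtual host described above might look like this (the DocumentRoot path is an assumption; adjust it to your layout):

```apache
# /etc/apache2/sites-available/wiki
<VirtualHost *:80>
    ServerName wiki.example.com
    DocumentRoot /var/www/wiki
</VirtualHost>
```

Enable it with a2ensite wiki and reload Apache for the change to take effect.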

Amazon EC2 Auto Scaling example

The following code creates an Auto Scaling group with scale-up and scale-down policies driven by the CPU utilization of either a specific EC2 instance or the instances within the Auto Scaling group cluster.

If alarm_dimensions specifies InstanceId, the Auto Scaling group will scale up or down according to the CPU utilization of the specified running instance; if alarm_dimensions specifies AutoScalingGroupName, the Auto Scaling group will scale up or down according to the CPU utilization of instances within its existing cluster. For more metrics and dimensions in CloudWatch, see Amazon Elastic Compute Cloud Dimensions and Metrics.

Raw code can be downloaded here.

import boto.ec2.autoscale
import boto.ec2.cloudwatch
from boto.ec2.autoscale import (AutoScalingGroup,
                                LaunchConfiguration, ScalingPolicy)
from boto.ec2.cloudwatch import MetricAlarm

# Create connections to Auto Scaling and CloudWatch
as_conn = boto.ec2.autoscale.connect_to_region("us-east-1")
cw_conn = boto.ec2.cloudwatch.connect_to_region("us-east-1")

# Name for auto scaling group and launch configuration
as_name = "VM1"

# Create launch configuration
# (instance_type is an assumed value; adjust to your needs)
lc = LaunchConfiguration(name=as_name,
                         image_id="ami-76f0061f",  # AMI ID of your instance
                         instance_type="t1.micro")
as_conn.create_launch_configuration(lc)

# Create Auto Scaling group
# (max_size and availability_zones are assumed values; adjust to your needs)
ag = AutoScalingGroup(group_name=as_name,
                      launch_config=lc, min_size=0, max_size=2,
                      availability_zones=["us-east-1a"])
as_conn.create_auto_scaling_group(ag)

# Create scaling policies
scale_up_policy = ScalingPolicy(
    name='scale_up', adjustment_type='ChangeInCapacity',
    as_name=as_name, scaling_adjustment=1, cooldown=180)

scale_down_policy = ScalingPolicy(
    name='scale_down', adjustment_type='ChangeInCapacity',
    as_name=as_name, scaling_adjustment=-1, cooldown=180)

as_conn.create_scaling_policy(scale_up_policy)
as_conn.create_scaling_policy(scale_down_policy)

# Re-fetch the policies to obtain their ARNs for the alarm actions
scale_up_policy = as_conn.get_all_policies(
    as_group=as_name, policy_names=['scale_up'])[0]

scale_down_policy = as_conn.get_all_policies(
    as_group=as_name, policy_names=['scale_down'])[0]

# Set dimensions for CloudWatch alarms
# Monitor a specific instance
alarm_dimensions = {"InstanceId": "your_instance_id"}

# Monitor instances within the Auto Scaling group cluster
alarm_dimensions_as = {"AutoScalingGroupName": as_name}

# Create metric alarms (alarm_dimensions is used below; pass
# dimensions=alarm_dimensions_as to monitor the group instead)
scale_up_alarm = MetricAlarm(
    name='scale_up_on_cpu_' + as_name, namespace='AWS/EC2',
    metric='CPUUtilization', statistic='Average',
    comparison='>', threshold='80',
    period='60', evaluation_periods=2,
    alarm_actions=[scale_up_policy.policy_arn],
    dimensions=alarm_dimensions)

scale_down_alarm = MetricAlarm(
    name='scale_down_on_cpu_' + as_name, namespace='AWS/EC2',
    metric='CPUUtilization', statistic='Average',
    comparison='<', threshold='20',
    period='60', evaluation_periods=2,
    alarm_actions=[scale_down_policy.policy_arn],
    dimensions=alarm_dimensions)

# Create alarms in CloudWatch
cw_conn.create_alarm(scale_up_alarm)
cw_conn.create_alarm(scale_down_alarm)

Sync and backup files from a host on the LAN over SSH on Mac OS X

Suppose you have several computers on a LAN where DHCP is enabled. On Machine B, you want to routinely back up files from Machine A, whose IP address may change. This can be done using SSH and a local DNS server. The following experiment was done on two Mac OS X 10.8 machines.

  1. Set up DNS server (optional):

    The named and rndc utilities are installed by default on Mac OS X 10.8. For references on setup, see this link.

    1. Use rndc-confgen to generate configuration and secret key:

      $ sudo bash -c "rndc-confgen -b 256 > /etc/rndc.conf"
      $ sudo bash -c "head -n5 /etc/rndc.conf | tail -n4 > /etc/rndc.key"
    2. Edit /etc/named.conf and /etc/rndc.conf to ensure the port numbers are the same

    3. Start the named server, then run “rndc status” to check whether it has started:

      $ launchctl load -w /System/Library/LaunchDaemons/org.isc.named.plist
      $ launchctl start org.isc.named
      $ rndc status
    4. Create a zone file for the target machine (Machine A in this case):

      $ cd ~/Documents/
      $ mkdir named
      $ cd named
      $ vi machine-a.zone

      Copy the following text into machine-a.zone; the A record’s IP address is a placeholder and irrelevant at this time:

      $TTL 86400
      $ORIGIN machine-a.
      @       IN      SOA     @ root (
                              2013091701      ; serial number YYYYMMDDNN
                              28800           ; Refresh
                              7200            ; Retry
                              864000          ; Expire
                              86400 )         ; Min TTL
              IN      NS      @
              IN      A       192.168.1.1     ; lan-sync
    5. Create symbolic link at /var/named/machine-a.zone (/private/var/named/machine-a.zone):

      $ ln -s /Users/yourname/Documents/named/machine-a.zone /private/var/named/machine-a.zone
    6. Edit /etc/named.conf, insert the following lines after the existing zone configurations:

      zone "machine-a" IN {
              type master;
              file "machine-a.zone";
              allow-update { none; };
      };
    7. Edit /etc/resolv.conf, replacing the existing nameserver with 127.0.0.1 (the local named you just started). This file should look like this:

      # This file is automatically generated.
      nameserver 127.0.0.1
  2. Download the script and set the following variables in ssh_sync.sh:

    $ git clone https://github.com/moleculea/lan-sync-over-ssh
    $ cd lan-sync-over-ssh
    $ vi ssh_sync.sh
    # Remote hostname (LAN) and MAC address
    # User name on the remote host


Fetch and convert online flash videos

  1. Install the prerequisite utilities:

    $ sudo cpan install WWW::Mechanize
    $ git clone https://github.com/monsieurvideo/get-flash-videos.git /path/to/get-flash-videos
    $ ln -s /path/to/get-flash-videos/get_flash_videos /usr/local/bin/get_flash_videos

    For the installation of ffmpeg, see http://www.ffmpeg.org/download.html.

  2. Download flash-video-archiver scripts:

    $ git clone https://github.com/moleculea/flash-video-archiver.git flash-video-archiver
  3. Create a file containing the list of video links, one URL per line, e.g. videos.txt:

    $ cat /path/to/videos.txt
  4. Fetch videos and convert them into MP4:

    $ cd flash-video-archiver
    $ ./fetch.sh /path/to/videos.txt /path/to/output/directory
    $ ./convert.sh /path/to/output/directory

Setting up Selenium Python environment with X virtual framebuffer on Ubuntu server

  1. Install relevant APT packages:

    $ sudo apt-get install xorg
    $ sudo apt-get install xvfb
    $ sudo apt-get install firefox
    $ sudo apt-get install openjdk-7-jre
    $ sudo apt-get install python-pip
  2. Download Selenium server and Python bindings:

    $ sudo pip install selenium
    $ wget https://selenium.googlecode.com/files/selenium-server-standalone-2.34.0.jar
  3. Start the virtual framebuffer X server as server number 1 (for DISPLAY) in the background:

    $ sudo Xvfb :1 &

    or, to set an explicit screen geometry:

    $ sudo Xvfb :1 -screen 0 1280x1024x8 &
  4. Start Selenium server in the background (without sudo):

    $ java -jar selenium-server-standalone-2.34.0.jar &
  5. Write sample Python code in sample.py:

    from selenium import webdriver
    from selenium.webdriver.common.keys import Keys

    driver = webdriver.Firefox()
    driver.get("http://www.google.com")
    assert "Google" in driver.title
    elem = driver.find_element_by_name("q")
    elem.send_keys("Python")
    elem.send_keys(Keys.RETURN)
    driver.quit()
  6. Run sample.py with environment variable DISPLAY=:1:

    $ DISPLAY=:1 python sample.py

Make slideshow GIF from a batch of images using ImageMagick

  1. Download and install the ImageMagick Mac OS X binary release from the ImageMagick official site.

    $ tar xvfz ImageMagick-x86_64-apple-darwin12.4.0.tar.gz
    $ export MAGICK_HOME="/YOUR/PATH/TO/ImageMagick-6.8.6"
    $ export PATH="$MAGICK_HOME/bin:$PATH"
    $ export DYLD_LIBRARY_PATH="$MAGICK_HOME/lib/"

    See this for installation on Linux.

  2. Enter the directory where the original images are stored (assuming all files are in .jpg format) and use the following command:

    $ convert -delay 100 -loop 0 -resize 300x225 -quality 90 *.jpg image.gif

    -delay specifies the interval between slides (in units of 1/100 second by default), and -loop indicates the number of loops for the slide show (0 for infinite loops); the other options are intuitive. Note that *.jpg will expand to images ordered alphabetically by filename. Use the following command to order the images by last modified time instead (ls -t lists newest first; use ls -tr for oldest first):

    $ convert -delay 100 -loop 0 -resize 300x225 -quality 90 `ls -t *.jpg` image.gif

Configure vsftpd FTP server in active mode on CentOS

  1. Install vsftpd, configure SELinux context and start the service:

    # yum install vsftpd
    # chkconfig vsftpd on
    # chcon -R -t public_content_t /var/ftp
    # service vsftpd start
  2. There are two ways to configure iptables to allow FTP connections.

    One way is using system-config-firewall, which is simple:

    # system-config-firewall-tui

    Choose FTP in the “Trusted Service” menu and save the configuration. system-config-firewall will add a rule to the INPUT chain and load the ip_conntrack_ftp kernel module, which can be verified using:

    # lsmod | grep ftp
    nf_conntrack_ftp       10475  0
    nf_conntrack           65428  4 nf_conntrack_ftp,nf_conntrack_ipv6,nf_conntrack_ipv4,xt_state

    Another way is to do it manually:

    1. Insert the following rule somewhere before the final “reject-with icmp-host-prohibited” rule (say at position 4):

      # iptables -L --line-numbers
      # iptables -I INPUT 4 -p tcp --dport 21 -m state --state NEW -j ACCEPT
    2. Load ip_conntrack_ftp (alias of nf_conntrack_ftp):

      # modprobe ip_conntrack_ftp

      Now the FTP directory should be accessible from remote machines.

Evaluation criteria of computer science professionals

A personal point of view on how to evaluate the generic skills of a computer software professional, from a pragmatic perspective and without considering specific subdisciplines or academic background in computer science.

  • Programming Language
    • Low-level: C/C++
    • Object-Oriented: Java/C++/PHP/Python/Ruby
    • Functional/Scripting: Lisp/Python/Perl/Haskell/Erlang
    • Web/Mobile: HTML/CSS/JavaScript
  • System and Network
    • System Administration (Unix)
    • OS/Kernel concept: Process, Memory, I/O
    • TCP/IP
    • Basic Scripting: bash/ksh/tcsh/zsh/Perl/Python
  • Quantitative
    • (Discrete) Math
    • Data Structures and Algorithms
  • Software/Architecture/Engineering
    • Design Pattern
    • Paradigm: Object-Oriented, Imperative
    • Purpose:
      • Low-level: kernel/driver/system/network
      • Architecture: HPC/HA/Cloud/Cluster/Distributed/Grid
      • Internet: Web/Mobile
    • Others: Architecture, Testing, SCM (SVN/Git)
  • Database
    • SQL/NoSQL/Cache
    • DBMS: MySQL/PostgreSQL/Oracle/MongoDB/…
  • Concepts
    • Computer Architecture and Hardware

Get the inode birth time (crtime or btime) of a file on Mac OS X and BSD descendants

$ stat -f "%SB %N" filename.txt
Jul 11 22:27:33 2013 filename.txt

According to the man page, -f is used to display information using the specified format. S is an optional output format specifier that indicates string output, and B is a field specifier that represents the birth time of the inode; N represents the file name.
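
The same field can be read programmatically: Python’s os.stat() results expose an st_birthtime attribute on Mac OS X and the BSDs. A small sketch (the st_ctime fallback for platforms without st_birthtime, such as Linux, is my addition for portability):

```python
import os
import tempfile
import time

def birth_time(path):
    # st_birthtime exists on Mac OS X / BSD; fall back to st_ctime elsewhere
    st = os.stat(path)
    return getattr(st, "st_birthtime", st.st_ctime)

# Demonstrate on a freshly created temporary file
with tempfile.NamedTemporaryFile(suffix=".txt") as f:
    print(time.ctime(birth_time(f.name)))
```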