Click to stream .m3u files in Ubuntu

I recently heard about CCMixter.org on FLOSS Weekly. CCMixter.org is a resource and collaborative space for musicians and remixers, with thousands of music tracks that can be downloaded, remixed, sampled, or streamed.

I recently did a fresh install of Ubuntu on the computer I was using, and clicking on any of CCMixter’s streaming links caused a window to pop up asking whether I wanted to play the stream using Rhythmbox or “Other”. Selecting Rhythmbox opened the player, but it wouldn’t play the stream. Googling around a bit led me to discussions of Rhythmbox brokenness going back to 2008, so I took a different tack.

I fired up Synaptic Package Manager and installed the VLC Media Player.

Then I clicked the gear icon on Unity’s upper right menu bar, selected “About this Computer”, clicked Default Applications, and changed the default application for Music to “VLC Media Player.”

Now when I click on a link to an .m3u stream, Ubuntu sends the link to VLC, and the music starts to play.
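If you prefer the command line, roughly the same setup can be done with apt and xdg-mime. This is only a sketch: it assumes VLC’s desktop file is named vlc.desktop and that .m3u links are handed off with the audio/x-mpegurl MIME type on your system:

# Install VLC
sudo apt-get install vlc

# Make VLC the default handler for .m3u playlists
xdg-mime default vlc.desktop audio/x-mpegurl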

Hope you find this useful.


Get Ansible’s “pip” method to install the right version of Django

I was using Ansible to set up a bunch of Scientific Linux 6.6 servers running Django, and I wanted the same specific version of Django, 1.6.5, on all of them.

Ansible makes this easy with the “pip” module:

  - name: Install pip package from yum
    yum: name={{ item }} state=present
    with_items:
    - python-pip
    - python-setuptools

  - name: Install Django 1.6.5
    pip: name=django version=1.6.5 state=present

This works great if you’re installing on a clean, empty server, but if you’re upgrading a server that had an older version of Django on it (1.6.4 in my case), Ansible acts as if it’s installing 1.6.5, yet when it finishes the server still has version 1.6.4.

If I try using straight PIP commands I get this:

$ pip install django==1.6.5
Downloading/unpacking django==1.6.5
  Running setup.py egg_info for package django
    warning: no previously-included files matching '__pycache__' found under directory '*'
    warning: no previously-included files matching '*.py[co]' found under directory '*'
  Requested django==1.6.5, but installing version 1.6.4
Installing collected packages: django
  Found existing installation: Django 1.6.4
    Uninstalling Django:
      Successfully uninstalled Django
  Running setup.py install for django
    warning: no previously-included files matching '__pycache__' found under directory '*'
    warning: no previously-included files matching '*.py[co]' found under directory '*'
    changing mode of /usr/bin/django-admin.py to 755
Successfully installed django
Cleaning up...

Note the line “Requested django==1.6.5, but installing version 1.6.4”. Thanks PIP!

It turned out to be a bug in PIP (versions earlier than 1.4), not in Ansible. A little Googling turned up a page on Stack Overflow that pointed the finger at an old cached copy of 1.6.4 in the build directory, which I found in /tmp/pip-build-root.

I updated my Ansible YAML file to get rid of the temporary directory and now it works fine:

  - name: Install pip package from yum
    yum: name={{ item }} state=present
    with_items:
    - python-pip
    - python-setuptools

  - name: Remove PIP temp directory
    file: path=/tmp/pip-build-root state=absent

  - name: Install Django 1.6.5
    pip: name=django version=1.6.5 state=present
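Once the play has run, a quick sanity check on a target host confirms the right version actually landed (django.get_version() reports the installed version):

$ python -c 'import django; print(django.get_version())'   # should print 1.6.5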

Hope you find this useful.


2014 HPCwire Awards

The StratoStor project I’ve been working on for the past 10 months just got a “Top 5 New Products or Technologies to Watch” award from HPCwire, announced at this week’s Supercomputing 2014 (SC14) conference in New Orleans.

HPC stands for High Performance Computing, and HPCwire is a news bureau for all things HPC. SC14 is where every major vendor of HPC equipment and products shows off their wares, so getting this bit of recognition from the readers of HPCwire is really nice.

So THANK YOU HPCwire readers, for this award.

http://www.hpcwire.com/2014-hpcwire-readers-choice-awards/23/



Validating Distributed Application Workloads

This is the talk I gave at RICON this year on Validating Distributed Application Workloads. It’s about how we set up test environments at Seagate to validate storage system performance at the petabyte scale, centering on the testing we did on a 2PB rack running Riak CS.


Increase a VM’s available memory with virsh

If you try to increase the amount of memory available to a running VM using the obvious command, it fails with an error message:

# virsh setmem <vm name> 16G --live
error: invalid argument: cannot set memory higher than max memory

The physical host in this case has 128G RAM and 32 CPUs. Plenty of capacity. To increase the maximum amount of memory that can be allocated to the VM:

# virsh setmaxmem <vm name> 16G --config

There are also --live and --current options which claim to affect the running/current domain. These options do not actually work. You have to use the --config option (changes take effect after next boot) and then power off the machine by logging in and running “poweroff”.

Once the machine is off set the actual memory with:

# virsh setmem <vm name> 16G --config

Then start the vm:

# virsh start <vm name>

Once the VM starts up it will have more memory.
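To double-check the result, virsh dominfo reports the domain’s maximum and currently used memory, so you can confirm both values once the VM is back up:

# virsh dominfo <vm name>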

Hope you find this useful.


Increase a VM’s vcpu count with virsh

You have a virtual machine you created with virsh. You want to increase the number of vcpus in the virtual machine, so you use the obvious command:

virsh setvcpus --count 8 <vm name>

… and get the irritating error message:

error: invalid argument: requested vcpus is greater than max allowable vcpus for the domain: 8 > 2

This is virsh telling you that you can’t increase the number of vcpus beyond the maximum defined for the domain, which defaults to the count the VM was created with.

Although virsh doesn’t support increasing the number of vcpus while the VM is running, you can change the count if you’re willing to reboot the VM. All you need to do is edit the VM’s XML file with:

virsh edit <vm name>

Look for the line “vcpu placement” and increase the value to the number of vcpus that you want. I changed the vcpus from 2 to 8 here:

<vcpu placement='static'>8</vcpu>

Save the file.

Shutdown the VM:

virsh shutdown <vm name>

Wait until the VM’s status is “shut off”.

virsh list --all

If the VM doesn’t shut down on its own, force it off:

virsh destroy <vm name>

Start up the VM:

virsh start <vm name>

Once the VM starts you’ll have more vcpus running.
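On reasonably recent libvirt versions you can also raise the maximum from the command line instead of hand-editing the XML: set the maximum in the persistent config, set the active count, and confirm both with vcpucount. A sketch, assuming your virsh supports the --maximum flag:

virsh setvcpus <vm name> 8 --maximum --config
virsh setvcpus <vm name> 8 --config
virsh vcpucount <vm name>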

Hope you find this useful.


Getting rid of the “redirecting to systemctl” message in OpenSUSE

On OpenSUSE systems running systemd, all rcX scripts now redirect start, stop, reload, restart, and other service commands to systemctl. The messages that used to appear on STDOUT telling you whether a command succeeded are now logged, but no longer displayed.

That I can deal with, but every call to an rcX script now generates the message “redirecting to systemctl” to STDERR. I have a lot of scripts that call rcX scripts, and they interpret STDERR messages as “something just broke”.

The culprit is the new /etc/rc.status script that ships with OpenSUSE. It spews the “redirecting to systemctl” message to STDERR for every operation that you do. The following snippet modifies the script and removes this stupid message:

if ( grep -q 'redirecting to systemctl' /etc/rc.status ) ; then
    # Save a copy of the original file
    cp -p /etc/rc.status /etc/rc.status.orig;

    # OpenSUSE 12.1:
    perl -i.bak -pe 's,echo "redirecting to systemctl" >/dev/stderr,,;' /etc/rc.status;

    # OpenSUSE 12.3:
    perl -i.bak -pe 's,echo "redirecting to systemctl \${SYSTEMCTL_OPTIONS} \$1 \${_rc_base}" 1>&2,,;' /etc/rc.status;
fi
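The snippet is also safe to re-run, since the grep guard stops matching once the echo lines are gone. To confirm the message has been removed:

grep -c 'redirecting to systemctl' /etc/rc.status   # prints 0 once the fix is in place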

This works for OpenSUSE 12.1 and 12.3. I did not have a 12.2 system available to test with.

Hope you find this useful.


Bring Pidgin’s window into front focus when there’s an inbound IM

I was talking to a co-worker about Pidgin not coming into focus when there’s a new inbound IM. The Pidgin window used to come into focus, front and center, when I was running Ubuntu/Gnome and when running OpenSUSE/KDE, but when I upgraded my office desktop to Ubuntu/Unity it stopped behaving this way. My co-worker noticed the same behavior with Fedora 17/Gnome. A new IM would come in, but the Pidgin IM window would remain in the background, hidden, unseen, and unread.

I thought “There has to be a setting that controls this,” and there is…

  • Bring up Pidgin’s Buddy List
  • Click Tools > Plugins
  • Locate the Message Notification plugin and highlight it
  • At the bottom of the Plugins window is a Configure Plugin button. Click it
  • Under Notification Methods check both Raise conversation window and Present conversation window
  • Click Close

That’s it. The next time someone IMs you, your Pidgin conversation window will pop up in the center of your screen, in front of all of your other windows.

Hope you find this useful.


How to turn off AMBER alerts on an iPhone

Last night I was woken up by an AMBER alert on my iPhone. Apparently there was a horrific murder and possible child abduction, and the police wanted to make absolutely sure that every cell-phone-carrying person in the state was made aware of the fact, just in case we spotted the children somewhere.

I live near San Francisco. The possible abduction happened near San Diego. That’s an 8-hour drive away. Teleportation has not been invented yet. There is no possible way I could have witnessed anything that would help.

Until the people operating the AMBER alert system either:

  1. Limit notifications to the geographic area where they might actually do some good
  2. OR Give me the option to disable AMBER alerts while I’m asleep (“Do Not Disturb” mode is enabled)
  3. OR Give me the option to disable AMBER alerts while stationary (phone is not moving, so I’m not out and about and therefore unlikely to witness anything helpful)

… I am going to disable AMBER alerts on my iPhone. If one of these problems is addressed I’ll consider turning alerts back on. Until then, they’re staying off.

If you feel the same way, here’s what you do:

  • Go to Settings -> Notifications
  • Scroll all the way down to the bottom of the screen
  • Switch “AMBER Alerts” to the OFF position
  • Get some (undisturbed) sleep

How to Improve the AMBER Alert System so that it’s MORE Effective

I am convinced that the AMBER alert system can do good, but I also believe that it will be less and less effective if the people managing the system continue to send out alerts in such a ham-handed way. If the people managing the system send alerts to mobile phones in the middle of the night, and the only options that a mobile phone user has are ON and OFF, more and more people will start turning AMBER alerts OFF, making the AMBER alert system less and less effective.

I’ve built many operations alert systems over the past 15 years. Sending repeated alarms to the wrong people teaches those people to ignore alarms. Sending alerts all of the time desensitizes people to the actual alarms they should worry about. If I had a little more control over how and where I receive AMBER alerts, I’d leave them on. Here are my suggestions to the maintainers of the AMBER alert system:

Limit alerts to phones within a given radius of the scene of the crime. Every cell tower has a known geographic position. Every active mobile phone self-registers with the nearest cell tower. With the incident that took place in Boulevard, CA (near San Diego), alerts went out to all of California, alerting citizens in Yreka, CA (851 miles from the crime scene), but not Fortuna, AZ (123 miles from the crime scene).  By sending alerts to cell towers within a 200 or 300 mile radius, the alerts would be seen by the people most likely to have actually seen something. Sending alerts to people 850 miles from the crime scene desensitizes them to future alerts.

Include a URL for more information. If you’re sending the alerts to smartphones, include a link that someone can click for more information, so they actually know what to look for.

Delay alerts for phones that are in “Do Not Disturb” mode. No one wants to be woken up at 3am with a screeching alert tone only to find out that they need to be on the lookout for a blue Nissan pickup truck. There are no blue Nissan pickup trucks in my bedroom or anyone else’s bedroom. If a phone’s “Do Not Disturb” mode is turned on, hold the alert until the DnD time is over, then alert the person carrying the phone. That’s one less person who will turn alerts off.

Better yet, hold alerts until the phone moves. If a phone’s “Do Not Disturb” mode is turned on, hold the alert until the DnD time is over, then alert the person carrying the phone as soon as they pick it up or move the phone. I’m awake now, you have my full attention, and I’m getting ready to go somewhere where I might actually see something. That’s the time to tell someone to be on the lookout, not at 3am when they’re asleep.

With these simple changes the AMBER alert system could be made more effective, reaching people who might have seen something at the time when they’re actually out and about. Without changes such as these, the system will become less and less effective over time, and lives will be lost.

Fix the system. Make it better. Make it more effective.

If you find this useful, please click the “share” button and share it with your friends.


Creating differential backups with hard links and rsync

You can use a hard link in Linux to create two file names that both point to the same physical location on a hard disk. For instance, if I type:

> echo xxxx > a
> cp -l a b
> cat a
xxxx
> cat b
xxxx

I create a file named “a” that contains the string “xxxx”. Then I create a hard link “b” that also points to the same spot on the disk. Now if I write to the file “a” whatever I write also appears in file “b” and vice versa:

> echo yyyy > b
> cat b
yyyy
> cat a
yyyy
> echo zzzz > a
> cat a
zzzz
> cat b
zzzz
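You can confirm that “a” and “b” share a single inode with ls -li: the first column shows the same inode number for both names, and the link count (the column after the permissions) reads 2.

> ls -li a b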

Copying to a hard link updates the data on the disk that each hard link points to:

> rm -f a b c
> echo xxxx > a
> echo yyyy > c
> cp -l a b
> cat a b c
xxxx
xxxx
yyyy

“a” and “b” point to the same file on disk; “c” is a separate file. If I copy file “c” to “b”, that also updates “a”:

> cp c b 
> cat a b c
yyyy
yyyy
yyyy
> echo zzzz > c
> cat a b c
yyyy
yyyy
zzzz 

What most people don’t know is that rsync is an exception to this rule. If you use rsync to sync two files and the contents of the source and target differ, rsync writes the data to a brand-new file and renames it over the target, instead of updating the target’s data in place:

> rm a
> rm b
> echo xxxx > a
> cp -l a b
> cat a
xxxx
> cat b
xxxx
> echo yyyy > c
> cat c
yyyy
> rsync -av c b
sending incremental file list
c
sent 87 bytes  received 31 bytes  236.00 bytes/sec
total size is 5  speedup is 0.04
> cat b
yyyy
> cat c
yyyy
> cat a
xxxx

File “b” is no longer a hard link to “a”; it’s a new file. If I update “a”, it no longer updates “b”:

> echo zzzz > a
> cat a b c
zzzz
yyyy
yyyy

However, if the file I’m rsync-ing has the same contents as “b”, then rsync does NOT break the hard link; it leaves the file alone:

> rm a
> rm b
> rm c
> echo xxxx > a
> cp -al a b
> cp -p a c
> cat a b c
xxxx
xxxx
xxxx

At this point “a” and “b” both point to the same file on the disk, which contains the string “xxxx”. “c” is a separate file that also contains the string “xxxx” and has the same permissions and timestamp as “a”.

> rsync -av c b
sending incremental file list
sent 39 bytes  received 12 bytes  102.00 bytes/sec
total size is 5  speedup is 0.10
> cat a b c
xxxx
xxxx
xxxx

At this point I’ve rsynced file “c” to “b”, but since c has the same contents and timestamp as “a” and “b” rsync does nothing at all. It doesn’t break the hard link. If I change “b” it still updates “a”:

> echo yyyy > b
> cat a b c
yyyy
yyyy
xxxx

This is how many modern file system backup programs work. On day 1 you make an rsync copy of your entire file system:

backup@backup_server> DAY1=`date +%Y%m%d%H%M%S`
backup@backup_server> rsync -av -e ssh earl@192.168.1.20:/home/earl/ /var/backups/$DAY1/

On day 2 you make a hard link copy of the backup, then a fresh rsync:

backup@backup_server> DAY2=`date +%Y%m%d%H%M%S`
backup@backup_server> cp -al /var/backups/$DAY1 /var/backups/$DAY2
backup@backup_server> rsync -av -e ssh --delete earl@192.168.1.20:/home/earl/ /var/backups/$DAY2/

“cp -al” makes a hard link copy of the entire /home/earl/ directory structure from the previous day, then rsync runs against the copy of the tree. If a file remains unchanged then rsync does nothing — the file remains a hard link. However, if the file’s contents changed, then rsync will create a new copy of the file in the target directory. If a file was deleted from /home/earl then rsync deletes the hard link from that day’s copy.

In this way, the $DAY1 directory has a snapshot of the /home/earl tree as it existed on day 1, and the $DAY2 directory has a snapshot of the /home/earl tree as it existed on day 2, but only the files that changed take up additional disk space. If you need to find a file as it existed at some point in time you can look at that day’s tree. If you need to restore yesterday’s backup you can rsync the tree from yesterday, but you don’t have to store a copy of all of the data from each day, you only use additional disk space for files that changed or were added.
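As an aside, rsync can also create the hard links itself via its --link-dest option, which removes the need for the separate “cp -al” step: files that are unchanged relative to the previous day’s tree are hard-linked instead of copied. A sketch using the same paths as above:

backup@backup_server> DAY2=`date +%Y%m%d%H%M%S`
backup@backup_server> rsync -av -e ssh --delete --link-dest=/var/backups/$DAY1/ earl@192.168.1.20:/home/earl/ /var/backups/$DAY2/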

I use this technique to keep 90 daily backups of a 500GB file system on a 1TB drive.
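Putting the pieces together, a minimal daily backup script might look something like this. It’s only a sketch: the host, user, and paths come from the examples above, the cleanup assumes the timestamped directory names sort chronologically, and “head -n -90” is GNU-specific.

#!/bin/bash
# Daily differential backup using hard links + rsync (sketch)
SRC="earl@192.168.1.20:/home/earl/"
DEST_ROOT="/var/backups"
TODAY=$(date +%Y%m%d%H%M%S)

# Find the most recent previous backup, if there is one
PREV=$(ls -1 "$DEST_ROOT" 2>/dev/null | sort | tail -n 1)

# Start today's tree as a hard-link copy of the previous one
if [ -n "$PREV" ]; then
    cp -al "$DEST_ROOT/$PREV" "$DEST_ROOT/$TODAY"
fi

# Sync against today's tree; unchanged files stay hard links
rsync -av -e ssh --delete "$SRC" "$DEST_ROOT/$TODAY/"

# Keep only the 90 most recent backups
ls -1 "$DEST_ROOT" | sort | head -n -90 | while read -r OLD; do
    rm -rf "$DEST_ROOT/${OLD:?}"
done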

One caveat: this scheme uses a lot of inodes. The hard-linked files all share inodes, but every daily snapshot recreates the full directory tree (directories can’t be hard-linked), and every changed or added file gets a new inode. If you’re using a file system such as ext3 or ext4, which has a fixed number of inodes set when the file system is created, you should allocate extra inodes on the backup volume when you create it. If you’re using a file system that allocates inodes dynamically, such as XFS, ZFS or Btrfs, then you don’t need to worry about this.
