Categories
Tech

From DVD ISO to mkv

A while ago I spent some time ripping DVDs to ISO files using DVD Decrypter, which lets me use the Kodi (XBMC) menus to pick a DVD to watch from anywhere in the house. Lately, though, I’ve been interested in Plex Media Server which, although it’s a fork of XBMC, doesn’t support playing DVD ISOs. Plex is much more geared to streaming, with apps for phones, tablets, game consoles and various IP TV boxes like the Roku. It also has a neat cloud feature where you can watch your media from anywhere with a fast enough internet connection, and share it with friends and family. I think it’d be cool to get my parents a Roku box and ‘curate’ a movie collection for them, with no effort on their part and without them having to run a server/NAS, by ripping their DVDs onto my home NAS.

Another feature of Kodi that has been taken out of Plex is the ‘stub’ file: a 0-byte file kept on the NAS that says “out in the physical world, there’s a DVD with ‘The Godfather (1972)’ on it”, so that Kodi can be your librarian for offline movies too. So now, to use Plex, I definitely need some kind of online copy of each DVD (or maybe a 2-second video to take its place that says “look on the shelf behind you”… hmmm).

Anyway, I now have a lot of ISO files that need to become MKV or AVI or something else, ideally with no input from me. It took me a while to work out how to extract a title from a DVD ISO (not a physical DVD) as an MPEG-2 stream, so here’s another little post about that. A ‘title’ is what the DVD standard calls each individual video stream on the disc – for example, one for the main movie, one for the trailer, and others for special features. They are all stored in MPEG-2 video format, which is getting old now, and not so great for compression. The cool kids are using H.264 these days.

I did this on my Windows desktop, but the tools are natively Linux tools, so it should work on Linux or OS X too. You need mplayer and ffmpeg. You could probably do this with just mplayer/mencoder, but this worked for me, so I stopped looking. I specifically wanted a command-line method; if you don’t care about that, just use HandBrake, which is very nice. I may yet do that too, since it also has a job queue.

First, to extract Title 1 from the DVD as MPEG-2 video:

mplayer dvdnav://1 -dvd-device DVDVolume.iso -dumpstream -dumpfile title1.m2v

If you don’t know which title you need, mplayer can help there too:

mplayer dvdnav:// -dvd-device DVDVolume.iso -frames 0 -identify -vo null -ao null -nocache

which will spit out a LOT of information, including this:

ID_DVD_TITLE_1_LENGTH=6714.000
ID_DVD_TITLE_1_CHAPTERS=25
TITLE 1, CHAPTERS: 00:01:37,00:10:51,00:14:10,00:23:21,00:24:09,00:27:44 {etc}
ID_DVD_TITLE_2_LENGTH=130.033
ID_DVD_TITLE_2_CHAPTERS=1
TITLE 2, CHAPTERS: 00:02:10,

which tells me that title 1 is the movie, and title 2 is the trailer (only 2 minutes long).
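If you don’t fancy eyeballing that, the longest title is almost always the main feature, and you can fish it out of the -identify output with a little shell (a sketch, using standard Unix tools):

mplayer dvdnav:// -dvd-device DVDVolume.iso -frames 0 -identify -vo null -ao null -nocache 2>/dev/null | grep "^ID_DVD_TITLE_.*_LENGTH" | sort -t= -k2 -n | tail -1

which prints the ID_DVD_TITLE_n_LENGTH line for the longest title.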

Then to make an MKV file of the resulting MPEG-2 file:

ffmpeg.exe -i title1.m2v ./title1.mkv

There are loads of settings for x264 that I don’t understand, so I just ignored them. The results look pretty good so far, and I get files of about 1.5GB instead of the 4GB original. Each one takes a while (about an hour? I haven’t timed it), but there’s nothing to watch or poke, so it doesn’t really matter. I can leave it to do its thing.
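If you do want a bit more control, the one x264 knob worth knowing is CRF (constant rate factor): lower numbers mean better quality and bigger files, with 18-23 being the usual range. Something like this (a sketch – it assumes your ffmpeg build includes libx264, and it passes the audio through untouched):

ffmpeg -i title1.m2v -c:v libx264 -crf 20 -preset slow -c:a copy title1.mkv

And since the whole point is no input from me, the two steps wrap up into a shell loop easily enough (assuming title 1 is the main movie on every disc):

for iso in *.iso; do
  mplayer dvdnav://1 -dvd-device "$iso" -dumpstream -dumpfile "${iso%.iso}.m2v"
  ffmpeg -i "${iso%.iso}.m2v" "${iso%.iso}.mkv"
done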

Categories
Monitoring Network Projects & Hacking

wraprancid and RANCID 3.x

Jethro R Binks’ excellent wraprancid script lets you bring configurations (and pretty much anything else that can be rendered as text) into RANCID without getting involved in writing a new ?rancid/?login combination for your device. That avoids some pretty hairy Perl and Tcl code, so it’s definitely a Good Thing! It’s also useful for devices that don’t even have a command line, but might let you fetch their config from a web page, or over TFTP.

The trouble is, RANCID changed the way it deals with device types between RANCID 2.x and RANCID 3. It changed in a good way: the patches to rancid-fe that tools like wraprancid used to require are no longer necessary, because what was previously hard-coded in the rancid-fe source is now a proper configuration file, with a second config file for adding your own types. Here’s how to get wraprancid working with RANCID 3.x.

First, I’m assuming you have a working wrapplugin script. Here’s one I use to fetch the config from Asterisk servers.

#!/opt/perl/bin/perl -w
#
#######################################################
# Modules
#######################################################

# Load any modules needed
use strict;
use Getopt::Std;
use Net::SSH::Perl;

#######################################################
# Variables
#######################################################

# Initialize variables used in this script

my $debug = 0;

my %options = ();
getopts('df:', \%options);
my $file = $options{'f'} or die "Usage: $0 [-d] -f outputfile host\n";
my $fh;
my $host = $ARGV[0] or die "Usage: $0 [-d] -f outputfile host\n";

$debug = 1 if $options{'d'};

print STDERR "to host: $host\n" if $debug;

my $ssh = Net::SSH::Perl->new($host, protocol => '2,1', debug => $debug );

print STDERR "made ssh obj\n" if $debug;
$ssh->login("root");

print STDERR "login\n" if $debug;
# The command we send here doesn't matter: the forced command in
# authorized_keys on the server (see "Bonus Tip" below) runs instead of it.
my ($stdout, $stderr, $exit) = $ssh->cmd("true");
print STDERR "got output\n" if $debug;

# Open the output file, and start it with the RANCID content-type header.
open($fh, ">", $file) or die "Cannot open output file $file: $!\n";
print $fh "#RANCID-CONTENT-TYPE: wrapper.asterisk\n#\n";

print $fh $stdout;
print STDERR "wrote output of ". length($stdout)." bytes\n" if $debug;

#######
# End #
#######
close($fh);
print STDERR "done\n" if $debug;

That lives in ~rancid/bin/asterisk.wrapplugin, just as it did in version 2.

Then, in ~rancid/etc/rancid.types.conf, we’ll define a new device type called wrapper-asterisk:

wrapper-asterisk;script;wraprancid -s asterisk.wrapplugin
wrapper-asterisk;login;clogin

(I don’t think the login script matters, as it’s never used, but one must be specified to keep RANCID happy.)

And finally in the router.db, you can put your actual device:

asterisk-sipgateway;wrapper-asterisk;up;

That’s it. You can repeat for whichever other scripts you need to do this for.

Bonus Tip

The Asterisk end of the script above works like this: we use SSH public-key authentication to connect to the server, and in ~root/.ssh/authorized_keys there is a line like this:

command="/usr/sbin/asterisk -V; echo 'extensions.conf'; cat /etc/asterisk/extensions.conf; echo 'sip.conf'; cat /etc/asterisk/sip.conf; echo 'iax.conf';cat  /etc/asterisk/iax.conf",from="myrancidhost" ssh-dss AAAAB3NzaC174ENozlUVBe5hH32Wy/duAJt1b4nWbVPoW1GP/koSZNv3888s3fx23nEpLMJxispulA== rancid@myrancidhost

Because of the command= option, a user authenticating with that particular key doesn’t get a shell; they just get the output from that series of cat commands, and are then disconnected. The from= option means they must also be connecting from the RANCID server.

So, now we have Asterisk in the same version control system as our network gear. You can use a similar setup for things like BSD ipfw-based firewalls, or Quagga routers.
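For an ipfw-based firewall, for instance, the forced command might be as simple as this (a sketch – ipfw list prints the numbered ruleset; the key itself is truncated here):

command="/sbin/ipfw list",from="myrancidhost" ssh-dss AAAA... rancid@myrancidhost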

Categories
Monitoring Network Tech Uncategorized

RANCID, ssh, Cisco MDS and “too many authentication failures”

I just ran into this, and it took a little while to figure out, so here’s a quick note. If you have a Cisco MDS being backed up by RANCID, you can get the following odd message, even if it’s the very first time this user has tried to log in:

Received disconnect from 10.0.7.5: 2: Too many authentication failures for confbackup

What is happening is that the SSH client tries every public key it has configured first, and only then falls back to the password authentication you thought it was doing all along. With a few keys loaded, that’s enough failed attempts to annoy the MDS into closing the connection.
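You can confirm this from the command line before touching RANCID, by disabling public keys for a one-off login (username and address from the example above):

ssh -o PubkeyAuthentication=no confbackup@10.0.7.5

If that logs in cleanly, the fix below will sort RANCID out too.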

The permanent solution is to disable public-key auth for this connection in RANCID, which requires a little bit of extra work. First, create a shell script (I call mine /opt/rancid/local/ssh-no-pubkey):

#!/bin/sh

# "$@" preserves each argument's quoting, unlike $*
exec ssh -o PubkeyAuthentication=no "$@"

Then for the devices that are suffering, tell RANCID to use this new SSH command instead of just ‘ssh’. In .cloginrc:

add sshcmd mds01 {/opt/rancid/local/ssh-no-pubkey}

Now RANCID can log in and back up the config fine.

Additional tip – the ‘cisco’ device type seems to work better than the (theoretically correct) ‘cisco-nx’ device type for MDS switches.

Categories
Network Tech

Virtual serial ports in Windows VM hosts (for IOS XRv)

I’ve been trying to play around with Cisco IOS XRv this morning – the virtual machine version of the IOS XR software used on the ASR-9000 and CRS-1 series routers. Having expensive hardware like that for a simple test environment is tricky, but XRv means you can have one on your desktop.

The VM boots fine, but then leaves you with a nearly blank screen that says “Booting IOS XRv”. What happened? All the action is happening on the serial port of the device, just like on a real router. So you need a serial port in the VM, and some way to talk to it.

I went through quite a few variations of this, using NPTP and VMwareGateway, with both VirtualBox and VMware Workstation. In each case, I would get a login prompt that didn’t work: my password would be echoed back on the screen, and the router would generally act strangely – showing a password prompt and then immediately saying that login failed, for example.

Then I found out that PuTTY can talk directly to a named pipe. So all you actually need is PuTTY. Fire it up, select Serial, then paste in the named pipe name you used when configuring the VM. It works! And login works, too!

So the full process: import the OVA. Then add a serial port to the VM (you’ll need to do this in VMware Workstation; VirtualBox already has one). Set the serial port type to Named Pipe, and give it a name like “\\.\pipe\my_xrv”. It MUST start with “\\.\pipe\”. In VMware, you want to say “This is the server”. In VBox, you tick “Create Pipe”.
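If you’d rather script the VirtualBox side than click through the GUI, VBoxManage can set up the same thing (a sketch – the VM name is just an example; “server” mode is the equivalent of ticking “Create Pipe”):

VBoxManage modifyvm "IOS-XRv" --uart1 0x3F8 4 --uartmode1 server \\.\pipe\my_xrv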

Then start the VM, and use PuTTY to connect (with Serial selected) to “\\.\pipe\my_xrv” where it usually says “COM1”.

Obviously the same technique works for anything else that needs a serial port to talk to the world.

Categories
Tech

See inside long-running Unix pipelines

So you are restoring a database on your Linux system, and you have the 3GiB SQL file ready to go, and so

mysql -uroot -p mydb < backup-2013-10-17.sql

and then… wait. But for how long? It’s probably broken, right? It should never take this long! Your service desk needs to know what to tell customers.

Pipe Viewer (pv) is a handy little tool to use in place of cat(1), which gives you a progress bar and throughput figures for long-running pipelines like this.

pv -cN source < backup-2013-10-17.sql | mysql -uroot -p mydb

It also gives an ETA, which is about as good as a Windows file-copy ETA, but knowing that something is moving, and at what kind of pace, is very reassuring.
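The -c and -N flags really earn their keep when you put pv at more than one point in the same pipeline – each instance gets its own named bar (a sketch, along the lines of the example in the pv man page):

pv -cN source < backup-2013-10-17.sql | gzip | pv -cN gzipped > backup-2013-10-17.sql.gz

so you can watch the raw read rate and the compressed write rate side by side.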

Categories
Monitoring Network Tech

RANCID on Speed

I like RANCID a lot, and this is the first time I’ve found a presentation from someone else about the kind of things I like to do with RANCID.

http://www.denog.de/meetings/denog2/pdf/010-Stoegbauer-RANCID_on_Speed.pdf

RANCID is pretty handy by itself – it lets you know for certain that the config for customer X hasn’t been changed in months, or verify that the changes that do happen have corresponding change-control tickets, as well as simply giving you a backup of everything and an automatic inventory (need all the serial numbers of all the WS-X6748-GE-TX cards in your network? It’s just a grep away). Since it all goes into version control (Subversion or CVS), you can do all of this for last week, or last year, too. Useful for when your maintenance contract still lists the original serial number for that module that got RMAed six months ago, instead of the new one.
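That grep looks something like this (a sketch – the path depends on where your RANCID group directories live; the ‘show inventory’ output appears as comment lines at the top of each collected config, with the serial number on the same line as the part number):

grep 'WS-X6748-GE-TX' ~rancid/*/configs/*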

Internally, we have tools based around the Net::Netblock and Cisco::Reconfig Perl modules, and a bunch of hacks, to generate things like hourly-updated, always-accurate maps of which VLANs are in use where, which IP ranges are in use where (by VRF), which devices have an interface in a given subnet on a given VLAN, and so on – all generated from the configs collected by RANCID.

If you have a network of more than a few devices, and especially if you suddenly need to start answering “compliance” kinds of questions (where are your backups? can you prove they are regular? can you show the last change on that device? can you regularly verify that all devices have telnet disabled?), then you really should spend that afternoon setting it up. You’ll feel better for it.

Categories
Network Tech

Importing an OVA made from an Ubuntu Server installation doesn’t show eth0

OVAs are a handy way to bundle up virtual machines, with all their configuration, in a way that is portable between different virtualisation environments – you can export from VirtualBox and import into VMware, for example. You can even bundle up multiple VMs together, like an app server and its database.

I have a little virtual development server I use for Network Weathermap development, and I made an OVA of it to avoid having to set up Cacti several times on different PCs. Trouble is, when I used my OVA, the network didn’t work…

Once I’d figured this out, it was pretty logical: when Ubuntu detects a new ethernet interface, it adds a line to /etc/udev/rules.d/70-persistent-net.rules which permanently ties that interface name (eth0, eth1 or whatever) to its MAC address.

When you create an OVA from a running system, this has already happened. When you import the OVA and choose ‘Reset MACs’ (which you probably should, rather than have two VMs with the same MAC address), Ubuntu detects a ‘new’ NIC. The result is that your config for eth0 is ignored, because eth0 has gone away, and a mysterious new NIC has appeared with a new MAC address.
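The offending lines look something like this (the MAC address here is made up):

SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="08:00:27:a1:b2:c3", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"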

Delete the ethernet-related lines from /etc/udev/rules.d/70-persistent-net.rules and reboot, and you’re back in business.

Update: Also true for CentOS 6 (and presumably RHEL 6 and Scientific Linux too).

Categories
Tech

Kitty – Putty with extra purrrrrr

I’ve been looking around recently for a nicer way to manage my PuTTY saved sessions – I’ve been getting involved with a much larger environment lately, and a giant flat list of names is less appealing. There are quite a lot of tools around that “wrap” PuTTY to add new features of various kinds, but they’re a bit clunky.

KiTTY, on the other hand, is a true fork of PuTTY, with some very cool extra features:

  • Nested folders in the saved sessions
  • Acts as your local terminal for Cygwin shells – no more DOS box
  • Drag files straight onto the terminal to upload them over PSCP
  • A portable version, with sessions saved to a file alongside the .exe
  • Still talks to Pageant etc. for SSH infrastructure

It’s early days so far, but I think it might be squish time for PuTTY.

Categories
Network Projects & Hacking Tech

Raspberry Pi + ser2net = Cheap NM16A (Serial Console Server)

Introduction

I have had a Raspberry Pi Model B sitting on the corner of my desk for about six months now, gathering dust and waiting for an application. I don’t need an XBMC box – I already have one that’s more powerful (and also tiny enough) in a Lenovo Q180 – and the actual I/O stuff mostly looks harder than it would be with an Arduino, so I’ve skipped that too.

Yesterday I was talking with a customer who had installed a network device in their DC racks that wasn’t talking to the outside world anymore. Its management is either by SSH or serial port. The SSH was part of the problem, so serial needed to be the solution.

A small, easy to install box to allow network connectivity to a serial port? This did seem like a job for the RPi. I grabbed the latest Debian Wheezy SD image from the RPi website, and the USB-serial adapter from my bag, and got to work.

Booting the RPi with the serial adapter installed Just Worked, like USB stuff is supposed to. It’s an FTDI-chipset adapter and it just comes up as /dev/ttyUSB0 in Linux.

To get access to the serial port remotely, I could have just installed minicom on the RPi and SSHed in to run it, but I was interested in how this might scale to more serial ports. You can get the USB serial gizmos for £2 each on eBay if you hunt around, and a couple of 8-port powered hubs would run to perhaps £15 each. That makes an ugly but usable 16-port console server for under £100. If you are building a lab environment for Cisco CCNP SWITCH or CCIE study, that’s a pretty decent deal.

The alternatives are Cisco’s NM-16A and NM-32A modules, plus the special cables to connect them, plus the router to put them in; or the ancient Cisco 2509 (so old it doesn’t have 10BaseT ethernet); or other random eBay scrap. I currently have a Lantronix 8-port device, but it was made before Cisco completely dominated the network world and everyone else took up their console pinout – so using it in my rack means making up special cables, which is kind of a pain. NM-16A modules go for around £150-200 on eBay, and you still need a pair of £50 cables to connect all the ports, and a router to put the module in.

The RPi has the added bonus that it’s still a Linux box – so if you want an NTP server, or TFTP, DNS or RADIUS, it’ll do that for your lab network too!

ser2net is a small application that listens for incoming telnet connections and connects them to serial ports. You configure a TCP port per serial port, so that ‘telnet rpi-ip-address 2001’ goes to /dev/ttyUSB0. You can preset the speed and other settings of the serial port, and you can also change them on the fly using the control interface (on a different TCP port). It also understands RFC 2217, an extension to the telnet protocol that allows a client to control serial-port settings with special codes; software like Serial Port Redirector can use this to make the remote serial port available as a local one under Windows, complete with port control. ser2net compiles simply on RPi Debian, and a single line added to ser2net.conf has you up and running with your new serial console server.

Howto

First, grab the Debian Wheezy SD image from the Raspberry Pi Foundation’s Download Page and write it to a fresh 2GB SD card as described on their website.

Boot the RPi from the card while connected to a monitor and network – Linux is preconfigured to use DHCP to get an IP address, and you’ll need to know what address it got in order to reach the system; it tells you at the end of the boot process. Once booted for the first time, you’ll get the configuration utility, which allows you to enable SSH access. That should be the last time you need the monitor.

Download the latest ser2net distribution using wget.

wget http://downloads.sourceforge.net/project/ser2net/ser2net/ser2net-2.8.tar.gz

Untar, configure, make && make install.

tar xvfz ser2net-2.8.tar.gz
cd ser2net-2.8
./configure && make && sudo make install

Check what device name your serial adapter has:

dmesg | grep tty
[    9.735015] usb 1-1.3: pl2303 converter now attached to ttyUSB0

(ttyUSB0 in my case)
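One gotcha if you scale this up to lots of adapters: ttyUSB numbering depends on detection order, so it can shuffle between boots. You can pin a stable name to each adapter with a udev rule keyed on its serial number (a sketch – the serial number and symlink name here are made up; ‘udevadm info -a -n /dev/ttyUSB0’ shows the real attributes):

# /etc/udev/rules.d/99-usbserial.rules
SUBSYSTEM=="tty", ATTRS{serial}=="A7004IXj", SYMLINK+="ttyCON1"

Then refer to /dev/ttyCON1 in ser2net.conf, and it will always be the same physical port.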

Now create a config file in /etc/ser2net.conf

BANNER:banner1:this is ser2net TCP port \p device \d  serial parms \s\r\n

# Don't do this by default
#CONTROLPORT:23

# Port lines are <TCP port>:<state>:<idle timeout>:<device>:<options>
# Raw TCP on 2001, closing idle connections after 600 seconds:
2001:raw:600:/dev/ttyUSB0:9600 NONE 1STOPBIT 8DATABITS XONXOFF LOCAL -RTSCTS
# Telnet with RFC2217 remote control (remctl) on 3001, no idle timeout:
3001:telnet:0:/dev/ttyUSB0:9600 remctl banner1

And test by running the server:

/usr/local/sbin/ser2net -c /etc/ser2net.conf -n

With that running, you should be able to open another window, telnet to port 3001 on the RPi and get a welcome banner. If you have something connected to the serial port, you should be able to talk to it.

The final step is to make sure that the ser2net service starts when the RPi boots. Simply add the following line to the bottom of /etc/rc.local, just before the ‘exit 0’ line:

/usr/local/sbin/ser2net -c /etc/ser2net.conf

and it will be started automatically on boot.

You can add additional lines to /etc/ser2net.conf for multiple serial devices.
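For example, with two more adapters plugged in (assuming they turn up as ttyUSB1 and ttyUSB2), it’s one line per port:

3002:telnet:0:/dev/ttyUSB1:9600 remctl banner1
3003:telnet:0:/dev/ttyUSB2:9600 remctl banner1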

Extra Cheese

For an added bonus in a shared environment, you can log all output from the serial devices (while someone is connected). You get a file per session, with a timestamp for the start and finish, and the source IP. This is another couple of config lines in ser2net.conf

TRACEFILE:tr1:/var/log/ser2net/tr-\p-\Y-\M-\D-\H:\i:\s.\U
3001:telnet:0:/dev/ttyUSB0:9600 remctl banner1 tr=tr1 timestamp

i.e. add tr=tr1 and timestamp to the end of each telnet line. Then create the /var/log/ser2net directory and you are off and running.

Now it’s time to figure out how to ‘package’ this into less of a mess. A 1U box with 16 serial ports in Cisco pinout and a simple IEC power connector would be very handy! I think it’s do-able for about £200.

Categories
Monitoring Network Tech

Why do you make me have to hurt you, baby?

Everyone knows that #monitoringsucks, but does it have to suck this much?

An organisation I deal with uses CA’s Nimsoft monitoring system. It has a neat architecture, with a hierarchy of hubs, each collecting data and funnelling it back up to the central site. A central management console lets you configure any of the hubs. The hubs include an SSL VPN, so monitoring traffic can traverse NATs and live in conflicting address spaces inside customer networks. There are a bunch of application-specific plugins for enterprisey things like SQL and ERP apps. Config and new plugins are pushed out from the centre to hubs when necessary, and alerting/reporting goes back up the same pipe, so you don’t need to punch a firewall full of holes. Pretty cool, right?

Here’s why it sucks though:

  • It’s incredibly slow. Like, “go and do something else while waiting for a window to open” slow. I don’t know why.

  • Each probe plugin provides its own UI, and uses its own individual config file. So you might have a central management console, but you are centrally managing dozens of individual probes with separate islands of config. Because of that:

    • some probes have templates, some don’t
    • some probes allow arranging targets into groups, or folders, some don’t
    • any probe that uses SNMP or other credentials has its own record of the credentials – if you have a Cisco switch and want to monitor general health, interface stats and some other special OID, that’s three different probes to configure, per device. Four, if you want to receive traps back from it.
    • sometimes, the same credentials work in one probe, but fail in another, on the same system!
    • Basic UI features, like a tab order through dialogs that isn’t completely random, are missing

The “separate config files” thing is supposed to make it easy to roll out a “standard” config across a series of customers, which it might, but it makes dealing with the tool after deployment really painful.

Previously, I’ve used Cacti to do this particular piece of monitoring. Cacti has two plugins, Autom8 and Thold, that allow me to:

  • Add a new device, and apply a Host Template to it – that pulls in the relevant SNMP variables for this device
  • Wait for Autom8 to add all the graphs for me
  • Apply threshold templates that give standard alerting for all those new graphs
  • That’s it

With a CLI to bulk-add devices, I can have a DC full of switches under monitoring, with graphs and alarms, in about 20 minutes. Even without the (standard, documented) CLI, it takes a minute to add a new device. The only thing I can’t do is distribute the polling to hubs, for customer networks or for general performance and efficiency. A simple one-line cron job will tell Cacti to rescan a device for new interfaces as often as I like.
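For the curious, the bulk-add and the rescan cron job look something like this (a sketch – the paths, host template ID and community string are examples for a typical Cacti install; add_device.php --list-host-templates will show your real template IDs):

php -q /var/www/cacti/cli/add_device.php --description="sw-dc1-01" --ip="10.0.0.21" --template=5 --version=2 --community="public"

# hourly, in crontab: re-index all hosts to pick up new interfaces
0 * * * * php -q /var/www/cacti/cli/poller_reindex_hosts.php -id=All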

The only other part I currently need to do manually is periodically re-applying the Thold templates, to pick up new interfaces for error-counter checking. Autom8 doesn’t talk to Thold, unfortunately.

How do large companies manage to make simple tasks so complicated? It’s as if nobody actually tried to use this for a normal installation while imagining they had a real job to do at the same time. Configuring monitoring for things like switch error rates shouldn’t be a career choice, should it?

Nimsoft isn’t alone – last time I looked for a distributed, SME-scale monitoring tool that understood that two devices might have the same IP address in different networks, and that had a central management console, the competition was mostly worse. The UI was usually better, often much better, but there was no real central management, just a central reporting console – you had to go to each remote hub to actually do the configuration.

How can this stuff be so hard?