Personalising your Apple product is the door to a world of pain…

Apple do this nice service when you order directly from the Apple Online Store where you can get an engraving on the back of your iPod, iPad etc for free. It looks really nice, and it adds about a day onto the delivery time. What could go wrong? Well, let me tell you a story…

I recently bought an iPad Mini for my girlfriend’s birthday. Got her a nice designer case for it, a data nano-SIM and a cute little private joke engraved on the back (along with her e-mail address – I’m still fairly practical). She loved it, and started “moving in” straight away. The next day though, it was failing already – the display backlight wouldn’t light and in just the right light you could make out a very rough-looking display. So we wiped it clean (iTunes could still talk to it) and went to the Apple store.

First of all, one does not simply walk into an Apple store. Obviously you need an appointment with a “genius” to actually get customer service. Although it turns out if you stand in the middle of an iPad-buying crowd with a dead unit and a grumpy face, someone will help. Except they can’t, because it’s personalised, so it has to go back to the mothership.

The one super-impressive part of the customer service experience comes next – Apple’s phone system. If you call from the phone number they have recorded against your account, the phone system just says (literally) “Oh, is it about that iPad?”, and everyone knows what is going on. Neat!

They send out a courier box for the unit, and you send it back. Then they send out the repaired one. Unlike the Apple store itself, they use TNT, and they don’t let you know the tracking number for the repaired item.

So my repair is turned around in a day or so, and then the item is just marked “shipped” on the Apple site. I wait, and wait some more. After speaking to Apple customer service, they want to wait until TNT have had their “allow for delivery” time, which is something like 10 days. I did get the tracking number from them though, so I could play along at home – it just says “out for delivery” for several days. Eventually TNT admit that actually they’ve lost it, so Apple organise another replacement, with personalisation. They confirm the details, and get it sent. It arrives a few days later, and it’s the wrong spec. We go back two steps and repeat one more time. Finally, 6 weeks after her birthday, we have a working unit!

In their defence, I did get personal contact details from several folks at Apple who followed the case through for me, but it shouldn’t really come to that. Obviously, if I’d have just bought the unit without personalisation, I could have had it swapped out at the local physical store, and been done in a day or two.

Think twice before engraving!

Leave a Comment

Filed under Uncategorized

RANCID on Speed

I like RANCID a lot, and this is the first time I’ve found a presentation from someone else about the kind of things I like to do with RANCID.

RANCID is pretty handy by itself – allowing you to actually know that the config for customer X hasn’t been changed in months, or verify that changes happening actually have corresponding change control tickets, as well as simply having a backup of everything and an automatic inventory (need all the serial numbers of all the WS-X6748-GE-TX cards in your network? It’s just a grep away). Since it all goes into version control (Subversion or CVS), you can do all this for last week, or last year, too. Useful for when your maintenance contract still lists the original serial number for that module that got RMAed 6 months ago, instead of the new one.
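That grep really is all it takes. A sketch of the idea, run here against a made-up sample in /tmp (the path, card and serial number are invented – RANCID keeps one file per device under its base directory, with inventory output embedded as comments):

```shell
# Fake sample of the inventory comments RANCID embeds in a collected config:
mkdir -p /tmp/rancid-demo
cat > /tmp/rancid-demo/switch1.conf <<'EOF'
!NAME: "module 3", DESCR: "WS-X6748-GE-TX CEF720 48 port 10/100/1000mb Ethernet"
!PID: WS-X6748-GE-TX     , VID: V02, SN: SAL1234ABCD
EOF
# All serial numbers for that card type, across every collected device:
grep -h 'PID: WS-X6748-GE-TX' /tmp/rancid-demo/*.conf | grep -o 'SN: [A-Z0-9]*'
# → SN: SAL1234ABCD
```

Point the same grep at last year’s revision in Subversion/CVS and you get last year’s inventory, too.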

Internally, we have tools based around the Net::Netblock and Cisco::Reconfig perl modules and a bunch of hacks to generate things like hourly-updated, always accurate maps of what VLANs are in use where, what IP ranges are in use where, by VRF, which devices have an interface in that subnet on that VLAN and so on, all generated from collected configs in RANCID.

If you have a network of more than a few devices, and especially if you need to suddenly start answering “compliance” kinds of questions (where are your backups? can you prove they are regular? can you show the last change on that device? can you regularly verify that all devices have telnet disabled?) then you really should spend that afternoon setting it up. You’ll feel better for it.

Leave a Comment

Filed under Monitoring, Network, Tech

Something is wrong somewhere

I was sitting at my desk watching John Allspaw’s Alert Design presentation in the corner of my screen, while doing some ‘real’ work, and I heard this odd noise. It was familiar, it was an alert sound, but it wasn’t in the right context. It wasn’t my phone, or my PC… eventually I realised it was the noise our car makes when you are low on fuel (a soft ‘bong’ on recent BMWs). Looking out the window, way down the street was someone loading their BMW with stuff, setting off an alarm.

Anyway, context is important for alarms. The presentation is interesting.

Leave a Comment

Filed under Uncategorized

Importing an OVA made from an Ubuntu Server installation doesn’t show eth0

OVAs are a handy way to bundle up virtual machines with all their configuration in a way that is portable between different virtualisation environments – you can export from VirtualBox and import into VMware, for example. You can even bundle up multiple VMs, like an app server and its database, together.

I have a little virtual development server I use for Network Weathermap development, which I made an OVA of, to avoid having to set up Cacti several times on different PCs. Trouble is, when I used my OVA, the network didn’t work…

Once I’d figured this out, it was pretty logical: when Ubuntu detects a new ethernet interface, it adds a line to /etc/udev/rules.d/70-persistent-net.rules which defines that interface as eth0, eth1 or whatever.

When you create an OVA from a running system, it has already done this. When you import the OVA and choose ‘Reset MACs’ (which you probably should, rather than have two VMs with the same MAC address), then Ubuntu detects this ‘new’ NIC. The result is that your config for eth0 is ignored, because eth0 has gone away, and this mysterious new NIC has appeared with a new MAC address.

Delete the ethernet-related lines from /etc/udev/rules.d/70-persistent-net.rules and reboot, to be back in business.
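A sketch of the cleanup, run against a copy in /tmp so it’s safe to try (on the real system you’d edit /etc/udev/rules.d/70-persistent-net.rules as root; the MAC address below is invented):

```shell
rules=/tmp/70-persistent-net.rules   # real file: /etc/udev/rules.d/70-persistent-net.rules
# Roughly what Ubuntu writes when it first sees the NIC:
cat > "$rules" <<'EOF'
# PCI device 0x8086:0x100e (e1000)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="08:00:27:aa:bb:cc", KERNEL=="eth*", NAME="eth0"
EOF
# Drop the stale binding; udev regenerates the file on the next boot:
sed -i '/SUBSYSTEM=="net"/d' "$rules"
```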

Update: Also true for CentOS 6 (and presumably RHEL 6 and Scientific Linux too).

Leave a Comment

Filed under Network, Tech

Kitty – Putty with extra purrrrrr

I’ve been looking around recently for a nicer way to manage my Putty saved sessions – I’ve been getting involved with a much larger environment lately, and a giant list of names is less appealing. There are quite a lot of tools around that “wrap” Putty to add new features of various kinds, but they’re a bit clunky.

On the other hand, Kitty is a true fork of Putty, with some very cool extra features:

  • Nested folders in the saved sessions
  • Act as your local terminal for cygwin shells – no more DOS box
  • Drag files straight onto the terminal to upload over PSCP
  • Portable version with sessions saved to a file alongside the .exe
  • Still talks with Pageant etc for SSH infrastructure

It’s early days so far, but I think it might be squish time for Putty.

Leave a Comment

Filed under Tech

This made me smile…

“After all, while it might be OK for the laptop support group to reformat your laptop when they can no longer cope with the increasing complexity of desktop operating systems, reformatting the network usually isn’t an option.”
This is what makes networking so complex

Leave a Comment

Filed under Uncategorized

Serial Consoles – Windows Edition

Small update on console servers: Having actually tried, HW Software VSP is the thing to get for adding a virtual COM port to Windows. You need to get the singleport version, and then point it at the ‘telnet’ protocol port on your ser2net server. Now you can change the baud rate and other settings in your Windows app (e.g. Hyperterminal) and the settings are passed through to the remote serial port.

Finally you can virtualise that application with the weird serial hardware!

Leave a Comment

Filed under Uncategorized

Raspberry Pi + ser2net = Cheap NM16A (Serial Console Server)

Yesterday I was talking with a customer who had installed a network device in their DC racks that wasn’t talking to the outside world anymore. Its management is either by SSH or serial port. The SSH was part of the problem, so serial needed to be the solution.

A small, easy to install box to allow network connectivity to a serial port? This did seem like a job for the RPi. I grabbed the latest Debian Wheezy SD image from the RPi website, and the USB-serial adapter from my bag, and got to work.

Booting the RPi with the serial adapter installed Just Worked, like USB stuff is supposed to. It’s an FTDI-chipset adapter and it just comes up as /dev/ttyUSB0 in Linux.

To get access to the serial port remotely, I could have just installed minicom on the RPi and then SSH to a shell before running it, but I was interested in how this might scale to more serial ports. You can get the USB serial gizmos for £2 each on ebay if you hunt around, and a couple of 8 port powered hubs would run to perhaps £15 each. That makes an ugly but usable 16-port console server for under £100. If you are building a lab environment for Cisco CCNP SWITCH or CCIE study, then this is a pretty decent deal.

The alternatives are Cisco’s NM-16A and NM-32A modules, plus the special cables to connect them, plus the router to put them in, or the ancient Cisco 2509 (so old it doesn’t have 10BaseT ethernet), or other random ebay scrap. I currently have a Lantronix 8 port device, but it was made before Cisco completely dominated the network world, and everyone else took up their console pinout – that means making up special cables to use it in my rack, which is kind of a pain. NM-16A modules go for around £150-200 on ebay, and you still need a pair of £50 cables to connect all the ports, and a router to put the module in.

The RPi has the added bonus that it’s still a linux box – so if you want to have an NTP server, or TFTP server or DNS, or RADIUS, then it’ll do that for your lab network too!

ser2net is a small application that listens for incoming telnet connections and connects them to serial ports. You configure a TCP port per serial port, so that ‘telnet rpi-ip-address 2001’ goes to /dev/ttyUSB1. You can preset the speed and other settings of the serial port, and you can also change them on the fly using the control interface (on a different TCP port). It also understands RFC2217, an extension to the telnet protocol that allows a client to control serial port settings with special codes. You can also get software like Serial Port Redirector which makes the remote serial port available as a local one under Windows, complete with port control. ser2net compiles simply on RPi Debian, and a single line added to the ser2net.conf has you up and running with your new serial console server.

First, grab the Debian Wheezy SD image from the Raspberry Pi Foundation’s Download Page and write it to a fresh 2GB SD card as described on their website.

Boot the RPi from the card, while connected to a monitor and network – Linux is preconfigured to use DHCP, so you’ll need to find out which IP address it was given to get access to the system; it’s shown at the end of the boot process. Once booted for the first time, you’ll get the configurator utility, which allows you to enable SSH access. That should be the last time you need the monitor.

Check what device name your serial adapter has:
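On my setup that looks something like this (the device name may differ with other adapter chipsets):

```shell
dmesg | grep -i ttyUSB    # the FTDI adapter announces itself here
ls -l /dev/ttyUSB*        # should show /dev/ttyUSB0
```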

And test by running the server:
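If I remember the flags right, `-d` keeps ser2net in the foreground with debug output, which is handy for a first test (check `ser2net -h` to confirm):

```shell
sudo ser2net -d -c /etc/ser2net.conf
```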

With that running, you should be able to open another window, telnet to port 3001 on the RPi and get a welcome banner. If you have something connected to the serial port, you should be able to talk to it.

The final step is to make sure that the ser2net service starts when the RPi boots. Simply add the following line to the bottom of /etc/rc.local, just before the ‘exit 0’ line:
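Assuming a source build that installed to /usr/local/sbin (adjust the path to wherever `which ser2net` points):

```shell
/usr/local/sbin/ser2net -c /etc/ser2net.conf
```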

and it will be started automatically on boot.

You can add additional lines to /etc/ser2net.conf for multiple serial devices.
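For example, a hypothetical two-port setup in /etc/ser2net.conf, using the classic one-line-per-port format (TCP ports and speeds here are just illustrative):

```shell
# <TCP port>:<mode>:<idle timeout>:<device>:<options>
3001:telnet:600:/dev/ttyUSB0:9600 8DATABITS NONE 1STOPBIT banner
3002:telnet:600:/dev/ttyUSB1:9600 8DATABITS NONE 1STOPBIT banner
```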

For an added bonus in a shared environment, you can log all output from the serial devices (while someone is connected). You get a file per session, with a timestamp for the start and finish, and the source IP. This is another couple of config lines in ser2net.conf
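Something like this, if I remember the TRACEFILE syntax right (the log-file pattern is my own example – \p expands to the TCP port, \Y\M\D to the date):

```shell
# Name a trace destination, then reference it from each port line:
TRACEFILE:tr1:/var/log/ser2net/port-\p-\Y\M\D
3001:telnet:600:/dev/ttyUSB0:9600 8DATABITS NONE 1STOPBIT banner tr=tr1 timestamp
```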

i.e. add tr=tr1 and timestamp to the end of each telnet line. Then create the /var/log/ser2net directory and you are off and running.

Now it’s time to figure out how to ‘package’ this into less of a mess. A 1U box with 16 serial ports in Cisco pinout and a simple IEC power connector would be very handy! I think it’s do-able for about £200.


Filed under Network, Projects & Hacking, Tech

Why do you make me have to hurt you, baby?

Everyone knows that #monitoringsucks, but does it have to suck this much?

An organisation I deal with uses CA’s Nimsoft monitoring system. It has a neat architecture, with a hierarchy of hubs, each collecting data and funnelling it back up to the central site. A central management console lets you configure any of the hubs. The hubs include an SSL VPN, so monitoring traffic can traverse NATs and live in conflicting address spaces inside customer networks. There are a bunch of application-specific plugins for enterprisey things like SQL and ERP apps. Config and new plugins are pushed out from the centre to hubs when necessary, and alerting/reporting goes back up the same pipe, so you don’t need to punch a firewall full of holes. Pretty cool, right?

Here’s why it sucks though:

  • It’s incredibly slow. Like, “go and do something else while waiting for a window to open” slow. I don’t know why.

  • Each probe plugin provides its own UI, and uses its own individual config file. So you might have a central management console, but you are centrally managing dozens of individual probes with separate islands of config. Because of that:

    • some probes have templates, some don’t
    • some probes allow arranging targets into groups, or folders, some don’t
    • any probe that uses SNMP or other credentials has its own record of the credentials – if you have a Cisco switch and want to monitor general health, interface stats and some other special OID, that’s three different probes to configure, per device. Four, if you want to receive traps back from it.
    • sometimes, the same credentials work in one probe, but fail in another, on the same system!
    • Basic UI features, like not having the tab-order through dialogs be completely random, are missing

The “separate config files” thing is supposed to make it easy to roll out a “standard” config across a series of customers, which it might, but it makes dealing with the tool after deployment really painful.

Previously, I’ve used Cacti to do this particular piece of monitoring. Cacti has two plugins, Autom8 and Thold, that allow me to:

  • Add a new device, and apply a Host Template to it – that pulls in the relevant SNMP variables for this device
  • Wait for Autom8 to add all the graphs for me
  • Apply threshold templates that give standard alerting for all those new graphs
  • That’s it

With a CLI to bulk-add devices, I can have a DC full of switches under monitoring, with graphs and alarms in about 20 minutes. Even without the (standard, documented) CLI, it takes a minute to add a new device. The only thing I can’t do is distribute the polling to hubs, for customer networks or for general performance and efficiency. A simple one-line cron job will tell Cacti to rescan for new interfaces on a device as often as I like.
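A sketch of that bulk-add, using Cacti’s standard CLI scripts (the install path, template ID and community string below are made up for illustration):

```shell
# switches.txt holds one "name ip" pair per line
while read -r host ip; do
  php -q /var/www/cacti/cli/add_device.php \
      --description="$host" --ip="$ip" \
      --template=5 --version=2 --community=public
done < switches.txt
```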

The only other part I need to do manually, currently, is periodically re-apply the Thold templates to pick up new interfaces for checking error-counters. Autom8 doesn’t talk to Thold, unfortunately.

How do large companies manage to make simple tasks so complicated? It’s as if nobody ever tried to use this for a normal installation while imagining they had a real job they were supposed to be doing as well. Configuring monitoring for things like switch error rates shouldn’t be a career choice, should it?

Nimsoft isn’t alone – last time I looked for a distributed, SME-scale monitoring tool that understood that two devices might have the same IP address in different networks, and had a central management console, the competition was mostly worse. Usually it had better UI, often much better, but didn’t really have central management, just a central reporting console – you go to each remote hub to actually do the configuration.

How can this stuff be so hard?

Leave a Comment

Filed under Monitoring, Network, Tech

Strange Extremeware behaviour

I was just reading Massive DDoS attack against anti-spam provider impacts millions of internet users and ended up at the Open Resolver project. Typing in a few IP address ranges I’m involved with, I noticed that there were some odd DNS servers. A few minutes of investigation showed that an ancient Extreme BlackDiamond we have, running Extremeware (since replaced with ExtremeXOS in current kit), will answer DNS queries! It forwards them on to the DNS servers it knows about, using its own IP address on that network, effectively NATing DNS traffic from anywhere in the world.

That’s a Bad Thing. What’s more, I can’t find any mention of this behaviour in the manuals. The fix is simply to remove its dns-client configuration (which is supposed to be used for locally originated connections like telnet from the console) – it can’t forward requests if it doesn’t know any DNS servers, right?

configure dns-client delete x.x.x.x

at which point it stops accepting connections for DNS. But this is still somewhat alarming for undocumented behaviour – there’s no missing ACL or feature accidentally turned on, it’s just quietly been doing this.

Leave a Comment

Filed under Network, Tech