Audrey was looking for a replacement battery for an old watch, and that got me looking through my own wrist watch boneyard. I gave up wearing watches in 2008.
Back in the late 1990s and early 2000s, I wore one of these:
The Casio ABX-20 was an analog watch with a digital display that floated above the hands. I thought it was pretty cool at the time (although I am sure everyone else thought it was dorky). I also had a couple of Timex “Expedition” analog/digital watches — they had Indiglo backlights.
I still think the analog/digital dual format is pretty cool.
Sadly, the Casio ABX-20 is beyond repair. But while we were getting a battery for Audrey’s watch, I picked up a few batteries for some of the other boneyard watches, just to take them for a nostalgic spin.
This is either a story of poorly-managed expectations, or of me being an idiot, depending on how generous you’re feeling.
Eight months ago, when I heard that Moogfest was coming to Durham, I jumped on the chance to get tickets. I like electronic music, and I’ve always been fascinated by sound and signals and even signal processing mathematics. At the time, I was taking an online course in Digital Signal Processing for Music Applications. I recruited a wingman; my friend Jeremy is also into making noise using open source software.
The festival would take place over a four-day weekend in May, so I signed up for two vacation days and I cleared the calendar for four days of music and tech geekery. Since I am not much of a night-owl, I wanted to get my fill of the festival in the daytime and then return home at night… one benefit of being local to Durham.
Pretty soon, the emails started coming in… about one a week, usually about some band or another playing in Durham, with one or two being way off base, about some music-related parties on the west coast. So I started filing these emails in a folder called “moogfest”. Buried in the middle of that pile would be one email that was important… although I had purchased a ticket, I’d need to register for workshops that had limited attendance.
Unfortunately, I didn’t do any homework in advance of Moogfest. You know, life happens. After all, I’d have four days to deal with the festival. So Jeremy and I showed up at the American Tobacco campus on Thursday with a clean slate… dumb and dumber.
Thursday started with drizzly rain to set the mood.
I’m not super familiar with Durham, but I know my way around the American Tobacco campus, so that’s where we started. We got our wristbands, visited the Modular Marketplace (a very small and crowded vendor area where they showed off modular synthesizer blocks) and the Moog Pop-up Factory (one part factory assembly area, and one part Guitar Center store). Thankfully, both of these areas made heavy use of headphones to keep the cacophony down.
From there, we ventured north, outside of my familiarity. The provided map was too small to really make any sense of — mainly because they tried to show the main festival area and the outlying concert area on the same map. So we spent a lot of time wandering, trying to figure out what we were supposed to see. We got lost and stopped for a milkshake and a map-reading. Finally, we found the 21c hotel and museum. There were three classrooms inside the building that housed workshops and talks, but that was not very clearly indicated anywhere. At every turn, it felt like we were in the “wrong place”.
We finally found a talk on “IBM Watson: Cognitive Tech for Developers”. This was one of the workshops that required pre-registration, but there seemed to be room left over from no-shows, so they let us in. This ended up being a marketing pitch for IBM’s research projects — nothing to do with music synthesis or really even with IBM’s core business.
Being unfamiliar with Durham, and since several points on the map seemed to land in a large construction area, we wandered back to the American Tobacco campus for a talk. We arrived just after the talk started, so the doors were closed. So we looked for lunch. There were a few sit-down restaurants, but not much in terms of quick meals (on Friday, I discovered the food trucks).
Finally, we declared Thursday to be a bust, and we headed home.
We’d basically just spent $200 and a vacation day to attend three advertising sessions. I seriously considered just going back to work on Friday.
With hopes of salvaging Friday, I spent three hours that night poring over the schedule to figure out how it was supposed to be done.
- I looked up all of the venues, noting that several were much farther north than we had wandered.
- I registered (wait-listed) for workshops that might be interesting.
- I tried to visualize the entire day on a single grid, gave up on that, and found I could filter the list.
- I read the descriptions of every event and put a ranking on my schedule.
- I learned – much to my disappointment – that the schedule was clearly divided at supper time, with talks and workshops in the daytime and music at night.
- I made a specific plan for Friday, which included sleeping in later and staying later in the night to hear some music.
I flew solo on Friday, starting off with some static displays and exploring the venues along West Morgan Street (the northern area). Then I attended a talk on “Techno-Shamanism”, a topic that looked interesting because it was so far out of my experience. The speaker was impressively expressive, but it was hard to tell whether he was sharing deep philosophical secrets or just babbling eloquently… I am still undecided.
I rushed off to the Carolina Theater for a live recording of the podcast “Song Exploder”. However, the theater filled just as I arrived — I mean literally, the people in front of me were seated — and the rest of the line was sent away. Severe bummer.
I spent a lot of time at a static display called the Wifi Whisperer, something that looked pretty dull from the description in the schedule, but that was actually pretty intriguing. It showed how our phones volunteer information about previous wifi spots we have attached to. My question – why would my phone share with the Moogfest network the name of the wifi from the beach house we stayed at last summer? Sure enough, it was there on the board!
Determined to not miss any more events, I rushed back to ATC for a talk on Polyrhythmic Loops, where the speaker demonstrated how modular synth clocks can be used to construct complex rhythms by sending sequences of triggers to sampler playback modules. I kind of wish we could’ve seen some of the wire-connecting madness involved, but instead he did a pretty good job of describing what he was doing and then he played the results. It was interesting, but unnecessarily loud.
The daytime talks were winding down, and my last one was about Kickstarter-funded music projects.
To fill the gap until the music started, I went to “Tech Jobs Under the Big Top”, a job fair that is held periodically in RTP. As if to underscore the craziness of “having a ticket but still needing another registration” that plagued Moogfest, the Big Top folks required two different types of registration that kept me occupied for much longer than the time I actually spent inside their tent. Note: the Big Top event was not part of Moogfest, but they were clearly capitalizing on the location, and they were even listed in the Moogfest schedule.
Up until this point, I had still not heard any MUSIC.
My wingman returned and we popped into our first music act: Sam Aaron playing a “Live Coding” set on his Sonic Pi. This performance finally brought Moogfest back into the black, justifying the ticket price and the hassles of the earlier schedule. His set was unbelievable, dropping beats from the command line like a Linux geek.
To wrap up the night, we hiked a half mile to the MotorCo stage to see Grimes, one of the headline attractions of Moogfest. Admittedly, I am not part of the target audience for this show, since I had never actually heard of Grimes, and I am about 20 years older than many of the attendees. But I had been briefly introduced to her sound at one of the static displays, so I was stoked for a good show. However, the performance itself was really more of a military theatrical production than a concert.
Sure, there was a performer somewhere on that tiny stage in the distance, but any potential talent there was hidden behind explosions of LEDs and lasers, backed by a few kilotons of speaker blasts.
When the bombs stopped for a moment, the small amount of interstitial audience engagement reminded me of a middle school pep rally, both in tone and in body language. The words were mostly indiscernible, but the message was clear. Strap in, because this rocket is about to blast off! We left after a few songs.
Feeling that I had overstayed my leave from home, I planned a light docket for Saturday. There were only two talks that I wanted to see, both in the afternoon. I could be persuaded to see some more evening shows, but at that point, I could take them or leave them.
Some folks from Virginia Tech gave a workshop on the “Linux Laptop Orchestra” (titled “Designing Synthesizers with Pd-L2Ork”). From my brief pre-study, it looked like a mathematical tool used to design filters and create synthesizers. Instead, it turned out to be an automation tool, similar to PLC ladder logic, that could be used to trigger the playback of samples in specific patterns. This seemed like the laptop equivalent of the earlier talk on Polyrhythmic Loops done with synth modules. The talk was more focused on the wide array of toys (Raspberry Pi, Wii remotes) that could be connected to this ecosystem, and less about music. Overall, it looked like a very cool system, but not enough to justify a whole lot of tinkering to get it to run on my laptop (for some reason, my Ubuntu 15.10 and 16.04 systems both rejected the .deb packages because of package dependencies — perhaps this would be a good candidate for a Docker container).
The final session of Moogfest (for me, at least) was the workshop behind Sam Aaron’s Friday night performance. In “Synthesize Sounds with Live Code in Sonic Pi”, he explained in 90 minutes how to write Ruby code in Sonic Pi and how to sequence samples and synth sounds, occasionally diving deep into computer science topics like the benefits of pseudo-randomness and concurrency in programs. Sam is a smart fellow and a natural teacher, and he has developed a system that is both approachable by school kids and sophisticated enough for post-graduate adults.
I skipped Sunday… I’d had enough.
My wife asked me if I would attend again next year, and I’m undecided (they DID announce 2017 dates today). I am thrilled that Moogfest has decided to give Durham a try. But for me personally, the experience was an impedance mismatch. I think a few adjustments, both on my part and on the part of the organizers, would make the festival a lot more attractive. Here is a list of suggestions that could help.
- Clearly, I should’ve done my homework. I should have read through each and every one of the 58 emails I received from them, possibly as I received them, rather than stockpiling them for later. I should have tuned in more closely a few weeks in advance of the date for some advance planning as the schedule materialized.
- Moogfest could have been less prolific with their emails, and clearly labeled the ones that required some action on my part.
- The organizers could schedule music events throughout the day instead of just during the night shift… I compare this festival with the IBMA Wide Open Bluegrass festival in Raleigh, which has music throughout the day and into the nights. Is there a particular reason why electronic music has to be played at night?
- I would enjoy a wider variety of smaller, more intimate performances, rather than megawatt-sized blockbuster performances. At least one performance at the Armory was loud enough to send me out of the venue, even though I had earplugs. It was awful.
- The festival could be held in a tighter geographic area. The American Tobacco Campus ended up being an outlier, with most of the action being between West Morgan Street and West Main Street (I felt like ATC was only included so Durham could showcase it for visitors). Having the events nearer to one another would mean less walking to-and-from events (I walked 14½ miles over the three days I attended). Shuttle buses could be provided for the severely outlying venues like MotorCo.
- The printed schedule could give a short description of the sessions, because the titles alone did not mean much. Static displays (red) should not be listed on the schedule as if they are timed events.
- The web site did a pretty good job of slicing and dicing the schedule, but I would like to be able to vote items up and down, then filter by my votes (don’t show me anything I have already thumbs-downed). I would also like to be able to turn on and off entire categories – for example, do not show me the (red) static events, but show all (orange) talks and (grey) workshops.
- The register-for-workshops process was clearly broken. As a late registrant, my name was not on anyone’s printed list. But there was often room anyway, because no one ever bothers to un-register from a workshop they later decide to skip.
- The time slots did not offer any time to get to and from venues. Maybe they should be staggered (northern-most events start on the hour, southern-most start on the half-hour) to give time for walking between them.
All in all, I had a good time. But I feel like I burned two vacation days (and some family karma/capital) to attend a couple of good workshops and several commercial displays. I think I would have been equally as happy to attend just on Saturday and Sunday, if the music and talks were intermixed throughout the day, and did not require me to stick around until 2am.
On my way home today, I stopped by our neighborhood gas station to fill up the tank. As I was leaving, I noticed a mother duck and four ducklings walking along the curb of the shopping center driveway. They were making a lot of noise. The mother was cluck-cluck-clicking, and the ducklings were cheep-cheep-cheeping.
They were standing pretty close to a storm drain. Then a car came whizzing by and one of the ducklings jumped into the storm drain! I went over to the storm drain and found six ducklings at the bottom!
So I rushed home and recruited Audrey and Sydney, who were eager to help. We got some buckets and brooms and some rope and went back to the shopping center. By that time, a couple of other people were gathered around, and they said they had called the Cary Police.
We went ahead and lifted the storm drain grate and one lady climbed in, carrying a bucket. One by one, she lured them close and plucked them up and into the bucket!
The policeman finally showed up, and we went looking for the mother duck and the other three ducklings. They could’ve been in the woods or near one of the storm drains. We finally spotted them in the pond across the street.
So we carried our bucket to the pond. When we got close, the mother heard the ducklings cheeping and she ran over to us. Sydney laid the bucket down sideways in the grass and we all backed away. The mother duck ran to us, quacking like crazy, and all of the ducklings started cheeping even louder. The mother went to the bucket and then escorted them all down the grass and into the pond. And then they swam away in a tight formation, all nine babies clinging closely behind the mother.
Sydney said that it was the best day ever!
I’ve used Mozilla Thunderbird to read my email for years, and for the most part, I think it’s a pretty nice email client. But lately I’ve developed an itch that really needed scratching.
I tend to use the keyboard to navigate around through applications, and so in Thunderbird, I find myself using TAB to switch between the list of mail folders on the left and the list of messages on the right. The problem is that a few years back, when they added tabbed views, they changed the way that the TAB key works. (I’ll try to be clear about the tabbed views and the TAB key, which unfortunately share the same name). After the addition of tabbed views, the TAB key no longer toggled between just the (1) folders pane and (2) messages pane, but now it toggled between (1) folders pane (2) messages pane (3) tab selector widget. So that means I had to re-train myself to press the TAB key once to go from folders to messages, and twice to go from messages back to folders. But it got worse. If you turn on something like the Quick Filter, the TAB key toggles between (1) folder pane (2) messages pane (3) tab selector widget (4) the Quick Filter.
Basically, the TAB key works like it does in a web browser, which is pretty much useless when there are so many widgets that can accept focus.
Today I discovered that what I was really looking for was the F6 key. It strictly changes focus among the visible window panes. For me, most of the time, that’s (1) folder pane (2) messages pane, but if I turn on message previews (rarely), it expands to (1) folder pane (2) messages pane (3) preview pane.
THIS MAKES SENSE. Within the main window (tab) that I am looking at, the F6 key moves between the major window panes. Awesome.
However, wouldn’t it be cool if I could use the TAB key to do this focus-switching, instead of lifting my fingers off of their pseudo-home position to get way up to F6 (which I can’t find just by feel — I have to look down at it)?
A little bit of searching led me to extensions, such as the very old but still usable “keyconfig”. This is a pretty opaque tool that lets you insert some sort of arcane code into the prefs.js file. Basically, it did not help me do anything, but it did help me understand how keys are mapped. Deeper searches led me to the “DOM Inspector”, which lets you view the document that is being rendered (apparently, views in Thunderbird are pretty much HTML documents, which I suppose was hip at the time). That led me to some of the arcane codes that are mapped to certain keys.
So here’s what I tried. I looked at the arcane code that is mapped to F6, and I looked at the way “keyconfig” inserted some mappings of key names and their arcane codes. And I mimicked it. I just added this line to prefs.js:
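It went something like this — the exact pref name and the “][” delimiters follow keyconfig’s own conventions, so treat this as a sketch rather than gospel; SwitchPaneFocus(event) is the command that DOM Inspector showed bound to F6:

user_pref("keyconfig.main.xxx_key__SwitchPaneFocus", "![][VK_TAB][SwitchPaneFocus(event);][");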
And wouldn’t you know… it worked! Now the TAB key does what the F6 key normally does… it switches focus among the main window panes in the active tabbed view. Yay, lazy fingers cheer!
I reformatted a hard disk this weekend. In the process, I needed to copy a bunch of files from one machine to the other. Since both of these machines were smaller embedded devices, neither one of them had very capable CPUs. So I wanted to copy all of the files without compression or encryption.
Normally, I would use “rsync -avz --delete --progress user@other:/remote/path/ /local/path/”, but this does both compression (-z) and encryption (via rsync-over-ssh).
Here’s what I ended up with. It did not disappoint.
Step 1 – On the machine being restored:
box1$ netcat -l -p 2020 | tar --numeric-owner -xvf -
Step 2 – On the machine with the backup:
box2$ tar --numeric-owner -cvf - -C /remote/path . | netcat -w3 box1 2020
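Since this pipe runs silently, it can be hard to tell whether anything is happening. If you have “pv” (pipe viewer) installed on the sending machine, you can drop it into the middle of the pipeline to get a live throughput readout:

box2$ tar --numeric-owner -cvf - -C /remote/path . | pv | netcat -w3 box1 2020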
Over the last few months, my daughter Sydney and I have been working on Python programming assignments. I showed her that we can occasionally make a snapshot of our work using git, so if we mess something up, we can always get back to our previous checkpoint.
So we got into the habit of starting off new assignments with “git init .”.
Recently, though, I decided I wanted to host a copy of her assignments on my home file server, so we could check out the assignments on her computer or on mine. In the process, I decided to merge all of the separate assignments into a single git project. As a matter of principle, I wanted to preserve the change histories (diffs, authors and dates — but not necessarily the old SHA hashes, which would have been impossible to keep).
I did some searching on the topic, and I found a variety of solutions. One of them used a perl script that sent me off into the weeds of getting CPAN to work. A couple of good posts (here and here) used branches for each assignment, and then merged all of the branches together. The results were OK, but I had a problem: each assignment’s files started off at the top level of its own repo, and I later moved the files into their own assignment subdirectories. I really wanted to rewrite history so it looked like the files had been in their own subdirectories all along.
Then I noticed that my daughter and I had misspelled her name in her original “git config --global”. Oops! This ended up being a blessing in disguise.
This last little snag got me thinking along a different track, though. Instead of using branches and merges to get my projects together, maybe I could use patches. That way, I could edit her name in the commits, and I could also make sure that files were created inside the per-assignment directories!
So I whipped up a little shell script that would take a list of existing projects, iterate through the list, generate a patch file for each one, alter the patch file to use a subdirectory (and fix the misspelled name), and then import all of the patches. The options we pass to git format-patch and git am will preserve the author and timestamp for each commit.
#!/bin/bash
remoteProjects="$*"
git init .
for remoteProject in $remoteProjects ; do
    echo "remote project = $remoteProject"
    subProject=$(basename $remoteProject)
    ( cd $remoteProject ; git format-patch --root master --src-prefix=AAAA --dst-prefix=BBBB --stdout ) > $subProject.patch
    # essential file path fixes
    sed -i -e "s|AAAA|a/$subProject/|g" $subProject.patch
    sed -i -e "s|BBBB|b/$subProject/|g" $subProject.patch
    sed -i -e "s|/$subProject/dev/null|/dev/null|g" $subProject.patch
    # other fixes, while we're here
    sed -i -e 's/syndey/sydney/g' $subProject.patch
    # bring the patch into our repo
    git am --committer-date-is-author-date < $subProject.patch
    # clean up
    rm $subProject.patch
done
exit 0
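For example, assuming the old assignment repos live under ~/python (the paths and script name here are hypothetical), you would run it from a brand-new directory:

mkdir combined
cd combined
~/bin/merge-projects.sh ~/python/hw01 ~/python/hw02 ~/python/hw03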
I think this solution works nicely.
The one with the separate branches above was kind of cool because a git tree would show the work we did on each assignment. But in the end, the linear history that we produced by using patches was just as appropriate for our project, since we actually worked on a single homework assignment each week.
I suppose I could combine the two solutions by creating a branch before doing the “git am” (“apply mailbox”) step. That is left as an exercise for the reader.
This is part of a series I have been thinking about for a long time. When I have a fleeting thought about some neat idea, I should publish it to ensure that it cannot be patented later.
I saw an ad for hearing aids, and that made me wonder if instead of simply amplifying, hearing aids could do some more sophisticated sound transforms. Maybe they do already.
Since hearing loss is typically non-uniform across the hearing spectrum, it would make sense to transpose sounds from “bad” ranges to “good” ranges. Of course, in practice, that might sound weird. For example, someone with high-frequency hearing loss might have high-pitched consonant sounds transposed to a lower end of the spectrum. I’m sure the listener would have to adjust to that, since we’re used to vowels sounding low and consonants sounding high.
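As a crude illustration of the idea (just a sketch with sox, not a real hearing-aid algorithm — the 3 kHz cutoff and the file names are made up), you could split off the high band, drop it an octave, and mix it back with the untouched low band:

# shift everything above 3 kHz down an octave (1200 cents)
sox speech.wav high.wav highpass 3000 pitch -1200
# keep the low band as-is
sox speech.wav low.wav lowpass 3000
# mix the two bands back together
sox -m low.wav high.wav transposed.wav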
This is part of a series I have been thinking about for a long time. When I have a fleeting thought about some neat idea, I should publish it to ensure that it cannot be patented later.
This morning I read an article about a drunk driver who killed a motorcyclist. I know there are companies that make sobriety tests that tie into vehicle ignition systems. Some courts order offenders to have these installed.
I thought it would make sense to use the car’s existing controls (buttons on the steering wheel) and displays to run a reaction-time test that has to be passed before the car can be started.
Of course, this would be annoying. So maybe the car could be configured (via web page?) to require this test only at certain times. I log into car.com and set it to require a sobriety test to be started between 10pm and 4am. It could provide options if I fail. Say, after two failures, the car could phone a friend, or it could (via a service like OnStar) call a cab to my location.
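A toy version of the reaction-time test is easy to sketch in shell (a real one would obviously live in the car’s firmware and use the steering-wheel buttons instead of the Enter key, and the 500 ms threshold is arbitrary):

#!/bin/bash
# wait a random 2-6 seconds, then time how long the driver takes to react
sleep $(( (RANDOM % 5) + 2 ))
start=$(date +%s%N)
read -p "GO! Press Enter now: "
end=$(date +%s%N)
ms=$(( (end - start) / 1000000 ))
echo "Reaction time: ${ms} ms"
if [ "$ms" -gt 500 ]; then echo "FAIL - calling a cab"; else echo "PASS - engine enabled"; fi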
This weekend was a first for me. I performed a simple ukulele song on a stage with an audience. The song was “Princess Poopooly” and the venue was the C.F. Martin guitar company tent at the IBMA World of Bluegrass street festival.
I’ll admit, “Princess Poopooly” is not a bluegrass song… it’s a silly Hawaiian tune. But the kind folks at the Martin tent invited any and all to come up on stage and show their stuff. Play a song, get a T-shirt.
The performance itself was underwhelming. I’ve never worked with mics before, so it was a little constraining to sit behind two: one for me and one for my ukulele. Halfway through my song, the uke mic dropped out of the stand and into my lap, which led to the most-remembered line of my act: “whoops!” The kids laughed and repeated that one over and over.
This was the realization of a promise I made to myself at last year’s World of Bluegrass festival. After watching a bunch of other folks step up and play (including both of my daughters), I decided that it was time to pick up an instrument myself and learn.
Big thanks to the folks in the audience who cheered me on.
Like many families, we have accumulated several iPhones and iPods over the years. My wife and I have new iPhones, and we upgrade every so often, and our kids inherit our older phones. So we’ve encountered that age-old question: how should we manage the Apple IDs for all of these devices?
At first, we followed the simple approach — just leave the older devices associated with our Apple IDs. It makes some things easier. For example, the kids don’t have to re-buy the games that we bought over the last few years.
But when you share Apple IDs for all services, things get weird quickly. I started seeing my daughter’s iMessage conversations on my phone. If anyone in the family changed an account setting on any of the apps that use Apple IDs, we’d get a flood of notifications about the change, and the change would usually propagate to the other devices against our intentions. It felt like a very unstable equilibrium. Just as I’d get everything working right, something would upset the balance.
The thing that finally persuaded me to look at alternative setups was when I tried to set up “Find My Friends” so we could see where the others were. It did not want to let me track my daughter, because it thought she and I were the same user.
So I read a few discussions and articles about the different strategies for setting up Apple IDs for a family. They explained how Apple IDs work, and showed how to manage them. Some highlights:
- Creating an E-mail Account and Apple ID for your Child – Apple IDs are simple
- Moving from a shared iCloud to individual accounts – How to change your iCloud account on the phone
- Multiple Apple IDs and iOS devices in a Family – Simple list of Apple ID services (the basis for my bullet list below)
- How many Apple IDs should your family have? – Detailed list of services that use Apple IDs
OUR FAMILY’S STRATEGY
Everyone in our family now has a unique email address and their own Apple ID. My Apple ID is associated with a credit card, but theirs are just simple accounts. These can easily be set up at http://appleid.apple.com/.
Technically, since the kids are young, I have an Apple ID that they use. The contact info is mine. But the point is that each person has a unique identifier for their devices, and each one is tied to a unique email address.
The key to making this work is this sometimes-overlooked fact about how the Apple ecosystem works: a single device can use different Apple IDs for different purposes.
So in our family:
- iTunes Store – use Daddy’s ID
- iMessage – use your own
- FaceTime – use your own
- iCloud* – use your own
- Game Center – use your own
Note that iCloud is a biggie. It includes Mail, Contacts, Calendars, Reminders, Safari, Find My iPhone, Documents and Data, Photo Stream and Backups.
I’m not really sure how the iCloud Mail and Calendar stuff works, because we don’t use them. I host my own mail and calendar services on a Linux server, and that stuff works great with the iPhones. We have separate email addresses on several domains. And we have some shared calendars and some individual calendars.
Our family’s new setup puts some sanity back in the system. I know that my iMessages will only show up on my phone and Mac. I can call my kids using FaceTime without it getting confused, trying to call myself. I see my contacts, and my kids don’t. But we each get to use the games and other apps that we have bought as a family. And we can each use “Find my Friends” to keep track of where everyone is.
A WORD ABOUT EMAIL ADDRESSES VS APPLE IDS
I’m going to dive just a little deeper here, because I discovered something else in the transition that might help someone else.
Since I run my own mail server, I tend to use very specialized addresses for any kind of service that I sign up for. That way, I can sort all of my bills into a “bills” folder that I don’t have to see until it’s time to pay bills. Or if one vendor starts sending me too much junk, I can remove that one email address/alias and that stuff disappears forever.
Following this strategy, our Apple IDs are actually specially-made alias addresses in the form itunes-(name)@(ourdomain).com. But since we want to use our real email addresses for stuff like FaceTime and iMessage, we need to associate our real email addresses with these new Apple IDs. On that appleid.apple.com site, there’s a little form where you can associate all of your other email addresses to the Apple ID.
However, since we were migrating from a single Apple ID, I had to remove the kids’ preferred email addresses from my Apple ID before it would let me add them to their Apple IDs. This is very easily done on the appleid.apple.com site. However, if you just try to add the email address to the new Apple ID directly using the phone menus, it just sits there with a spinny star saying “verifying”, and it never actually sends the verification email.
So my advice is to manage your Apple IDs using the web site, http://appleid.apple.com/.
Recently, our local Linux Users Group was talking about DNS servers. Some folks in the group claimed that their ISP’s DNS servers were very slow.
In a group like this, there is usually a camp of strong supporters of running BIND. Somehow, I have never been able to wrap my head around BIND. Instead, I have been using dnsmasq. These two packages are very different.
BIND is a fully recursive DNS resolver. When you look up a name like “www.cnn.com”, it asks the root servers who “com” is, then asks “com” who “cnn.com” is, and then asks “cnn.com” for the address of “www.cnn.com”. BIND has a steep learning curve, and that has always discouraged me from really tinkering with it. It also misses a very important point that my home network needs — local name resolution of DHCP-assigned addresses.
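You can watch this chain of referrals yourself with dig’s trace option, which starts at the root servers and follows each delegation down:

$ dig +trace www.cnn.com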
Dnsmasq is more of a caching DNS server for a local network. It has a built-in DHCP server, so devices on my home network get their addresses from dnsmasq. When I make a DNS request, dnsmasq looks in its local DHCP table first. For example, if I want to talk to another device in the same room, like a Roku or a printer, dnsmasq knows the addresses of the local devices and it responds immediately. If the request is not a local name, it simply passes on the request to some other name server… maybe your ISP’s, or maybe a free server like OpenDNS or Google’s 8.8.8.8. Dnsmasq caches all DNS requests, so if you make repeated requests to the same site, they are answered pretty quickly.
I really like dnsmasq.
It is super flexible, and you configure it through a single configuration file which is super easy to understand. In fact, many home routers use dnsmasq under the hood.
But during the discussion in our LUG, someone mentioned unbound, another fully recursive DNS server that is super easy to set up. So I had to try it out. It did not disappoint.
So how do these two tools work together?
Actually, it’s quite elegant. Dnsmasq listens on port 53 of all addresses on my router. It is the primary DNS server for all machines on my local network. If the request is for a local device, then it fills the request immediately. But if the request is for some site on the internet, then it passes the request off to unbound, which is also running on the router, but listening on a different address/port combination.
Here is how I configured dnsmasq.
# --- DNS ----------------------------

# Be a good netizen, keep local stuff local.
domain-needed
bogus-priv
filterwin2k

# Do not listen on "all" interfaces and just filter.
bind-interfaces
# Listen on port 53 on in-home network (eth1) and localhost (lo).
# Do not listen on internet interface (eth0).
interface=lo
interface=eth1

# Upstream servers are not listed in resolv.conf, they are listed here.
no-resolv
server=127.0.0.1#10053   # unbound

# Add this domain to all simple names in the hosts file.
# (Also sets the domain (15) option for DHCP).
expand-hosts
domain=home.alanporter.com

# Special treatments for some domains and hosts.
local=/local/                        # dnsmasq handles these itself
server=/alanporter.com/188.8.131.52  # look up via ns1.linode.com
address=/doubleclick.net/127.0.0.1   # return this address immediately
address=/sentosa.us/184.108.40.206   # return this address immediately
cname=oldname.home.alanporter.com,newname.home.alanporter.com

# Logging
log-queries
log-facility=local1

# Caching
cache-size=1000

# --- DHCP ---------------------------

dhcp-range=FunkyNet,172.31.1.100,172.31.1.199,10m
dhcp-option=FunkyNet,1,255.255.255.0          # subnet mask - 1
dhcp-option=FunkyNet,3,172.31.1.1             # default router - 3
dhcp-option=FunkyNet,6,172.31.1.1             # DNS server - 6
dhcp-option=FunkyNet,15,home.alanporter.com   # domain name - 15
dhcp-option=FunkyNet,28,172.31.1.255          # broadcast address - 28

dhcp-leasefile=/var/lib/dnsmasq.leases
read-ethers

# reserved names and addresses
dhcp-host=d8:5d:4c:93:32:41,chumby
dhcp-host=00:50:43:00:02:02,sheeva,172.31.1.3,10m

# --- PXE ----------------------------

dhcp-boot=pxelinux.0,bender,172.31.1.1
So dnsmasq listens on the local network for requests, answers what it can: local DHCP addresses, cached addresses and special overrides from the config file. And anything it can’t handle itself, it sends on upstream to unbound. Here is how I configured unbound.
server:
    # perform cryptographic DNSSEC validation using the root trust anchor.
    auto-trust-anchor-file: "/var/lib/unbound/root.key"

    # listen on local network, allow local network access
    interface: 127.0.0.1
    access-control: 127.0.0.0/8 allow

    # NOT listening on IPv6
    # interface: ::1
    # access-control: ::1 allow

    port: 10053

    # logging
    chroot: ""
    logfile: "/var/log/unbound.log"
    log-time-ascii: yes
    log-queries: yes
    verbosity: 2
As you can see, unbound does not require much configuration.
Notice that I am NOT listening on the IPv6 interface. It turns out, there is no need. Dnsmasq listens on both, and it forwards A requests and AAAA requests to unbound over an IPv4 connection on the local “lo” adaptor.
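A quick way to check both hops is to query each listener directly with dig (run these on the router itself; the hostnames are just examples from my network):

$ dig @172.31.1.1 chumby.home.alanporter.com   # dnsmasq answers from its DHCP table
$ dig @127.0.0.1 -p 10053 www.cnn.com          # unbound resolves recursively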
How it stacks up
So how well does this setup work? Are there advantages or disadvantages to using dnsmasq and unbound together?
I tested this setup using “namebench”, a Google “20 percent” project that measures DNS lookup times. It told me that Google’s public DNS (8.8.8.8) was 250% faster than my in-home DNS. Furthermore, it said I would be better off using my ISP’s DNS servers. I am guessing that this is because these larger DNS servers cache a much larger pool of addresses, bypassing full recursive lookups of most common names.
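If you want to run the same comparison, namebench accepts extra nameservers as arguments alongside the ones it picks automatically (the flags vary between versions, so check its --help output):

$ namebench 172.31.1.1 8.8.8.8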
Advantages of dnsmasq + unbound
If my setup is slower than using a single upstream DNS, then why should I run mine this way? I have a few reasons.
- First and foremost, I learn a lot about DNS this way.
- But also worth considering, ISP nameservers are notoriously flaky. Just because the ISP beat my nameserver on a single test, that does not mean it will always do so. That’s like comparing the bus to driving your own car… it might be better sometimes, but really bad other times.
- One compelling reason to run a recursive DNS server like unbound is that you know you’re getting the right answer. When you use an ISP’s DNS server, they may hijack some domains and give you an incorrect answer on purpose. For example, they may censor content, and return a bogus landing page address for addresses that are on their blacklist. OpenDNS touts this as a feature… it is more “family-friendly” than raw DNS.
- If you’re the tinfoil hat type, you might not want to use a DNS service from someone like Google, who makes their money from knowing more about your browsing habits than you do. Or from your ISP, who is always trying to up-sell you with something.
Advantages of dnsmasq + any upstream DNS
- Dnsmasq (whether I use an upstream DNS or unbound) gives me control over how stuff is looked up. For example, when I was working on a new web site, I could tell dnsmasq to use the hosting company’s DNS for that one domain, so I did not have to wait for caches to expire between me and the host.
- Dnsmasq caches lookups. Actually, unbound does, too. I am still playing with both.
- Dnsmasq makes switching DNS providers really easy. Say your ISP’s nameservers are acting up… just change one line in dnsmasq.conf and start getting results from somewhere else.
A timely discovery
I have been interested in automated backups of computer data since the mid-1990s, when I had a very well-timed hard disk failure. By pure chance, I had been working on a script that would copy my “important files” from my hard disk to a 100MB “Zip Drive”. I finished my script after testing it several times, and then I went to bed. The next morning, I woke up to find that my hard disk had crashed. Fortunately, I had a very recent backup!
I have often marveled at how easy it would be to lose invaluable files in a single mishap… countless memories, photos, financial records and project work. Backups are important.
When I worked at “the oven place” (TMIO), I was tasked with evaluating backup schemes for their factory and office PCs. So I looked at several open source packages, with emphasis on being server-centric and automatic. That is, the backup server would decide when to make the backups, and the employees would never have to remember to do anything special. Any process that relies on a human to remember to kick it off is destined to be run once a year.
We ended up choosing “BackupPC”, which runs on a modest server with a large storage disk. It would wake up every so often and run through its list of clients and pick one to back up.
For several years, I ran BackupPC at home, too. At first, I ran it on a discarded PC. But later, I migrated to low-power fanless embedded boards.
In 2013, I decided that BackupPC was taking too long to make backups. I would bring my laptop home from work and turn it on, and BackupPC would notice it and start backing it up. But the backups were taking so long that they would still be running when I was ready to leave for work the next morning! I ran a few tests with rsync to see if the problem was with BackupPC or the file compression or their crazy idea of how “incremental backups” should work. So I wrote what started out as a speed test, then became a wrapper around “rsback”, and finally a very minimal python script that I named “Flashback”. “Flash” because it’s fast. My laptop backup, which was taking all night using BackupPC, usually completes in a half hour or less.
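The heart of any tool like this is a single rsync pass. As a minimal sketch of the idea (not necessarily Flashback’s exact invocation — see the repo for that), each run mirrors the source into a new snapshot directory, hard-linking unchanged files against the previous snapshot so the pass is fast and the snapshots stay small:

# sketch only: the paths and date are made up
rsync -a --delete \
      --link-dest=/backups/laptop/last \
      user@laptop:/home/user/ \
      /backups/laptop/2016-05-22/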
You can find Flashback on github.
The Pogo Plug v4
This week, I tried out a new hardware gadget called a Pogo Plug. It is a very close cousin to the SheevaPlug, an embedded Linux board which I had been running BackupPC and Flashback on. What caught my attention about the Pogo Plug v4 was:
- It has two USB3 ports.
- It has gigabit ethernet.
- It was on sale for just $20.
The only bad part is that it only has 128MB of RAM… that’s only a quarter of what the SheevaPlug has. But I am not really using the memory for anything. I am just running rsync.
I did not spend any time using the stock firmware. Instead, I immediately enabled SSH and then followed these instructions for installing Arch Linux on a USB stick, which I plugged into the top port (the bootable USB2 one). I plugged the 1-terabyte USB2 hard disk into the back of the Pogo Plug.
Then I installed Flashback and I modified the monitor script to take advantage of the three-color LED on the front (green for sleeping, yellow for backing up, red for error).
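Driving the LED from a shell script is just a matter of writing to the kernel’s LED interface under /sys/class/leds/ (the LED names below are made up — check that directory on your own board for the real ones):

# sketch: turn the status LED from red to green
echo 0 > /sys/class/leds/status:red:fault/brightness
echo 1 > /sys/class/leds/status:green:health/brightness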
It’s been running for about a week now, and I think it has proven itself worthy.
I’d like to try it with a USB3 hard disk, and see if it’s any faster.