Updates to a BIND dynamic zone that is shared between views delayed

Tags: bind, dynamic-dns, split-dns

Here's the quick and dirty: on BIND9 with a dynamic zone that's shared between views, an nsupdate that adds, updates, or deletes a record works fine if I query for that record from a client that falls into the same view I ran the nsupdate from.

Querying from a view that isn't the one I used for the nsupdate returns NXDOMAIN (for a newly added record) or stale data (for a changed record) until some arbitrary length of time passes (say 15 minutes), or until I forcibly run $ rndc freeze && rndc thaw. $ rndc sync doesn't appear to do anything at all to resolve the issue. I was hoping it was just a journal file thing, since journal flushes are documented to happen at roughly 15-minute intervals.

If that's not clear, here's some pseudo code to get us started:

BIND Views

view "cdn-redir" {
   match-clients { 10.1.1.0/24; 10.1.2.0/24; };
   include "cdn-zone.db";
   include "dynamic-zone.db";
};

view "default" {
   match-clients { any; };
   include "dynamic-zone.db";
};
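
For clarity, each include above presumably pulls in a file containing the actual zone statement, so both views end up loading the same zone from the same file on disk. Here's a minimal sketch of what dynamic-zone.db might contain (the file path is hypothetical; the key name assumes the usual contents of the rndc.key used in the session below):

zone "example.com" {
   type master;
   file "/var/lib/bind/db.example.com";  // same underlying file in both views
   allow-update { key "rndc-key"; };     // accepts the nsupdate shown below
};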

Example command line

user@ns:~$ nsupdate -k rndc.key
> server localhost
> zone example.com.
> update add foohost.example.com. 600 A 10.5.6.7
> send
> quit

user@ns:~$ dig foohost.example.com (resolv.conf points to 127.0.0.1)
  [ responds with correct answer 10.5.6.7 ]

Elsewhere, a host falling into the same view as the nsupdate

user@10.11.12.50:~$ dig foohost.example.com (resolv.conf points to above nameserver)
  [ responds with correct answer 10.5.6.7 ]

Elsewhere, a host falling into a different view from the nsupdate

user@10.1.1.50:~$ dig foohost.example.com (resolv.conf points to above nameserver)
  [ responds with NXDOMAIN even though I'm asking the same server ]

At this point, if I'm patient, the problem eventually resolves itself (maybe 15 minutes), but I frequently don't have the luxury of patience, so I'm forced to run $ rndc freeze && rndc thaw on the nameserver to forcibly fix the problem.
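
Worth noting: rndc freeze and rndc thaw take optional zone, class, and view arguments, so the workaround can at least be scoped to one zone in one view rather than hitting the whole server. Using the view names from my config above (which copy needs the cycle depends on which view took the update):

user@ns:~$ rndc freeze example.com IN default && rndc thaw example.com IN default
user@ns:~$ rndc freeze example.com IN cdn-redir && rndc thaw example.com IN cdn-redir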

On the flip side

Conversely, if I do the nsupdate against the server from an address that falls into the "cdn-redir" view, the problem reverses itself: subsequent queries from clients matching "cdn-redir" get the correct record immediately after the nsupdate without any "rndc freeze/thaw" fiddling, while queries from addresses falling outside "cdn-redir" now suffer the delay/rndc silliness.

My ultimate question

If it were as simple as 42, I'd take it with open arms…

I'd like to avoid having to "rndc freeze && rndc thaw" for fear of missing a dynamic update from the DHCP server. Does anyone know how to get the updated records synchronized between views more effectively/efficiently, or can anyone shed some light on where I may be going wrong?

Edit: BIND 9.9.5/Ubuntu 14.04, but it happened on previous versions of both Ubuntu and BIND as well.

Thanks all!

As requested by Andrew B, here's the redacted (and partial) zone:

$ORIGIN .
$TTL 3600
example.com     IN SOA ns1.example.com. HOSTMASTER.example.com. (
                       2009025039 ; serial
                       900        ; refresh (15 min)
                       600        ; retry (10 min)
                       86400      ; expire (1 day)
                       900        ; minimum TTL (15 min)
                )
                NS     ns1.example.com.
$ORIGIN example.com.
$TTL 30
AEGIS           A   10.2.1.60
                TXT "31bdb9b3dec929e051f778dda5abd0dfc7"
$TTL 86400
ts-router       A 10.1.1.1 
                A 10.1.2.1
                A 10.1.3.1
                A 10.1.4.1
                A 10.1.5.1
                A 10.1.6.1
                A 10.1.7.1
                A 10.1.8.1
                A 10.2.1.1
                A 10.2.2.1
                A 10.2.3.1
ts-server       A 10.2.1.20
ts-squid0       A 10.2.2.20
ts-squid1       A 10.2.2.21
$TTL 600
tssw4           A 10.2.3.4
tssw5           A 10.2.3.5
tssw6           A 10.2.3.6
tssw7           A 10.2.3.7
; wash rinse repeat for more hosts
$TTL 30
wintermute      A     10.2.1.61
                TXT   "003f141e5bcd3fc86954ac559e8a55700"

Best Answer

Different views act separately; the feature is essentially a convenience compared to running separate instances of named. If there are zones with the same name in different views, this is just a coincidence: they are still entirely separate zones, no more related than any other zones.

Having multiple separate zones use the same zone file does not work in situations where BIND is updating the zone contents on its own (slave zones, zones with dynamic updates, etc.). I'm not sure whether there's even a risk of corrupting the zone file itself.

You may be able to set up something like what you want by making the zone in one view a slave of the zone with the same name in the other view. This will clearly be a somewhat complicated configuration, but using TSIG keys for match-clients as well as for the notify/transfer traffic, I believe it should be doable.

Edit: ISC has published a KB article for this scenario, How do I share a dynamic zone between multiple views?, suggesting the same kind of config mentioned above.

This is their config example with somewhat improved formatting:

key "external" {
    algorithm hmac-md5;   // per ISC's example; modern BIND prefers hmac-sha256
    secret "xxxxxxxx";
};

key "mykey" {
    algorithm hmac-md5;
    secret "yyyyyyyy";
};

view "internal" {
    match-clients { !key external; 10.0.1/24; };

    server 10.0.1.1 {
        /* Deliver notify messages to external view. */
        keys { external; };
    };

    zone "example.com" {
        type master;
        file "internal/example.db";
        allow-update { key mykey; };
        also-notify { 10.0.1.1; };
    };
};

view "external" {
    match-clients { key external; any; };

    zone "example.com" {
        type slave;
        file "external/example.db";
        masters { 10.0.1.1; };
        transfer-source 10.0.1.1;   // takes a bare address, not a braced list
        // allow-update-forwarding { any; };
        // allow-notify { ... };
    };
};
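
To sketch the round trip under this configuration (a hedged example reusing the key names, placeholder secrets, and addresses from ISC's config above; foohost and 10.5.6.7 are carried over from the question): updates go to the master copy in the "internal" view, and the slave copy in the "external" view catches up via the notify/transfer.

# Send the update to the master zone in the internal view, signed with "mykey":
user@ns:~$ nsupdate -y hmac-md5:mykey:yyyyyyyy
> server 10.0.1.1
> zone example.com.
> update add foohost.example.com. 600 A 10.5.6.7
> send
> quit

# Query as an internal client (unsigned, matched by source address):
user@ns:~$ dig @10.0.1.1 foohost.example.com A +short

# Query as the external view: signing with the "external" TSIG key makes
# match-clients select the external view regardless of source address.
# Once the notify and transfer complete, both answers should agree.
user@ns:~$ dig @10.0.1.1 -y hmac-md5:external:xxxxxxxx foohost.example.com A +short

If clients in the external view also need to send updates, that is what the commented-out allow-update-forwarding option is for: the slave zone forwards their updates to the master copy.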