How I Made My UniFi Controller Highly Available

plonxyz
3 min read · May 6, 2018


A few days ago I wondered whether there was a way to make my UniFi controller highly available.
You might ask yourself: why does this guy want to make it HA?
The reason is quite simple: I use the controller for RADIUS authentication as well as for guest access, so it should be available 24/7.

By the way, I have to say that Ubiquiti does a very good job: with every update you see improvements and new features. But until UniFi integrates HA into the controller itself, I helped myself.

The plan

The controller uses MongoDB as its database, so I thought: why not replicate the data to a second MongoDB? For this I created two CentOS hosts and installed the UniFi controller on each. I then set up a dedicated MongoDB instance (normally the controller runs its own embedded instance, so when the unifi service stops, the database goes down with it), imported all the data from a dump into it, and configured a replica set with the second node.
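Roughly, the database part looked like the sketch below. The hostnames, the replica set name name-of-replica-set, and the arbiter host are placeholders, and the dedicated mongod on both nodes has to be started with the same replSetName in /etc/mongod.conf.

# Dump everything from the controller's built-in MongoDB
# (it listens on port 27117) while the unifi service still runs:
mongodump --port 27117 --out /tmp/unifi-dump

# Restore the dump into the dedicated mongod on node 1 (port 27017):
mongorestore --port 27017 /tmp/unifi-dump

# Initiate the replica set on node 1 and add node 2:
mongo --eval 'rs.initiate({_id: "name-of-replica-set", members: [{_id: 0, host: "unifi1.yourdomain.local:27017"}]})'
mongo --eval 'rs.add("unifi2.yourdomain.local:27017")'

# Note: two members alone cannot elect a new primary once one of them
# is down (no voting majority), so for automatic failover an arbiter
# on a third host is needed, e.g.:
# mongo --eval 'rs.addArb("arbiter.yourdomain.local:27017")'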

One important thing to mention is that, by default, MongoDB doesn't allow reads or writes on a secondary node, so the controller always needs an active connection to the primary of the replica set. This is no problem: since the UniFi controller is written in Java, its MongoDB Java driver handles replica-set discovery automatically when it is given a replica-set connection string. I modified the controller's system.properties file and added the following lines; with them, each controller connects to whichever node is currently primary. Make sure that unifi.db.name matches the database name in the db.mongo.uri path (unifi here).

db.mongo.local=false
db.mongo.uri=mongodb\://unifi1.yourdomain.local\:27017,unifi2.yourdomain.local\:27017/unifi?replicaSet\=name-of-replica-set
statdb.mongo.uri=mongodb\://unifi1.yourdomain.local\:27017,unifi2.yourdomain.local\:27017/unifi_stat?replicaSet\=name-of-replica-set
unifi.db.name=unifi

So far everything was OK: the databases were replicating, both controller nodes had the same data, and when I changed a setting on one node, the change showed up on the other.
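To verify the replication state, a quick check from the mongo shell works (the hostnames are the placeholders from above):

mongo --host unifi1.yourdomain.local --eval 'rs.status().members.forEach(function(m){ print(m.name + " -> " + m.stateStr); })'
# Expected: one PRIMARY and one SECONDARY, e.g.
# unifi1.yourdomain.local:27017 -> PRIMARY
# unifi2.yourdomain.local:27017 -> SECONDARY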

Floating IP

If one controller goes offline, all UniFi devices should connect to the backup controller, right? For this case I configured a floating IP with the keepalived package, which runs VRRP between the two nodes.

To get keepalived working, install the package on each node and configure it on the primary node as shown below.
Use the same configuration on the backup node, but swap unicast_src_ip and unicast_peer, and give the backup node a lower priority so the primary wins the VRRP election.

# This checks whether the mongod service is running
vrrp_script chk_mongod {
    script "pidof mongod"
    interval 2
}

vrrp_instance VI_1 {
    # The interface keepalived will manage
    interface ens18
    state BACKUP
    # How often to send out VRRP advertisements
    advert_int 2
    # The virtual router ID; must be the same on both nodes
    virtual_router_id 51
    # The priority of this node. This controls which node becomes
    # MASTER and which becomes BACKUP for a given VRRP instance
    # (a lower number gets less priority).
    priority 50
    authentication {
        auth_type PASS
        auth_pass password
    }
    unicast_src_ip 192.168.X.X
    unicast_peer {
        192.168.X.X
    }
    track_script {
        chk_mongod
    }
    # The virtual IP address that floats between the nodes
    virtual_ipaddress {
        192.168.X.X
    }
}
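Once the configuration is in place on both nodes, start keepalived and check where the floating IP currently lives (ens18 and the addresses are the placeholders from the config above):

systemctl enable --now keepalived
# On the current MASTER, the floating IP shows up as an extra
# address on the tracked interface:
ip -4 addr show dev ens18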

After checking that everything worked well, I needed to re-adopt every UniFi device to the floating IP. I connected via SSH to each device and kicked off a manual adoption (set-inform http://FLOATING-IP:8080/inform).
After a short while the devices contacted the primary controller via the floating IP and I was able to adopt them properly.
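For reference, the per-device re-adoption looked roughly like this (the device hostname and SSH user are placeholders; on older firmware you may have to enter mca-cli before set-inform is available):

ssh admin@ap1.yourdomain.local
# ...then, in the device's shell:
set-inform http://FLOATING-IP:8080/inform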

Testing the failover scenario

To test the failover scenario I shut down the primary controller and waited for the backup controller to become active. The backup MongoDB took over as primary in the replica set, and the floating IP pointed to the backup controller's web interface.
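One way to reproduce the test from a third machine (FLOATING-IP and the hostnames are the placeholders used throughout this post):

# Take the primary node down:
ssh root@unifi1.yourdomain.local 'systemctl poweroff'
# The floating IP should answer again after a few VRRP
# advertisement intervals:
ping FLOATING-IP
# And the former secondary should now report itself as PRIMARY:
mongo --host unifi2.yourdomain.local --eval 'rs.status().members.forEach(function(m){ print(m.name + " -> " + m.stateStr); })'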

It worked!

Of course the devices need a few seconds to reconnect to the backup controller, but in the end all services come back online.
