Please Sign the Philly Mesh GPG Key!

Now that we have erected an SKS keyserver, I invite everyone to sign the Philly Mesh GPG key to help verify our identity. There are many GPG/PGP applications out there, but below I will provide steps for the gpg utility available on many POSIX systems (Linux, Darwin, etc.). Ideally, with enough signatures, the Philly Mesh key has a higher probability of entering the Web of Trust strong set, the largest collection of strongly-connected gpg keys.

Receive the Philly Mesh Key

Before you can sign the Philly Mesh key, you will need to download it to your system via a keyserver. Here is an example using the SKS server pool:

$ gpg --keyserver pool.sks-keyservers.net --recv-keys 0x8f5b291d3a3ca65a
gpg: requesting key 3A3CA65A from hkp server pool.sks-keyservers.net
gpg: no ultimately trusted keys found
gpg: Total number processed: 1
gpg:               imported: 1

Now, you should be able to list the Philly Mesh key in your public keyring. Make sure that the key has not been revoked and is not expired:

$ gpg --list-keys 0x8f5b291d3a3ca65a
pub   4096R/3A3CA65A 2017-11-25 [expires: 2027-11-23]
uid                  Philly Mesh <phillymesh@protonmail.ch>
uid                  Philly Mesh <hello@phillymesh.net>
uid                  Mike Dank <mike@phillymesh.net>
sub   4096R/1744B74A 2017-11-25 [expires: 2027-11-23]

Bootstrapping Trust

Before you sign the Philly Mesh key, you want to make sure that it is actually owned by Philly Mesh. For some people, this is as easy as asking me online somewhere or in person. For others, you might want to check the verifications for Philly Mesh on Keybase, which show that this key has been verified by the phillymesh.net domain. If you want instant verification that the key is associated with phillymesh.net, you can also query a DNS record on the domain that holds the key’s fingerprint.

First, let’s see the fingerprint for the key you have just received:

$ gpg --fingerprint 0x8f5b291d3a3ca65a
pub   4096R/3A3CA65A 2017-11-25 [expires: 2027-11-23]
      Key fingerprint = C58B 0431 C815 F315 7310  0959 8F5B 291D 3A3C A65A
uid                  Philly Mesh <phillymesh@protonmail.ch>
uid                  Philly Mesh <hello@phillymesh.net>
uid                  Mike Dank <mike@phillymesh.net>
sub   4096R/1744B74A 2017-11-25 [expires: 2027-11-23]

Now, let’s query against fingerprint.phillymesh.net, which pulls a live TXT record set up on the domain housing the trusted fingerprint:

$ dig +short -t txt fingerprint.phillymesh.net
"C58B 0431 C815 F315 7310  0959 8F5B 291D 3A3C A65A"

The fingerprint from the gpg --fingerprint command should match the result from the dig command. If it doesn’t match, don’t trust the key; someone may have taken control of the phillymesh.net domain and be trying to get you to trust a false key.
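
If you would like to automate that comparison, here is a quick, optional shell check (a sketch assuming a POSIX shell with awk and tr available) that normalizes both fingerprints before comparing them:

$ gpg_fp=$(gpg --with-colons --fingerprint 0x8f5b291d3a3ca65a | awk -F: '/^fpr:/ {print $10; exit}')
$ dns_fp=$(dig +short -t txt fingerprint.phillymesh.net | tr -d '" ')
$ [ "$gpg_fp" = "$dns_fp" ] && echo "Fingerprints match" || echo "MISMATCH - do not sign"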

Sign the Key

Now you are ready to sign the Philly Mesh key. At this point, we assume that you have already created a key of your own, and that while receiving the key in the first section above you confirmed it has not expired or been revoked.

Sign the Philly Mesh key with your own key, following the prompts as they come up. At the time of this writing there are 3 uids (email addresses) associated with this key (listed below in the command output). They can each safely be signed:

$ gpg --sign-key 0x8f5b291d3a3ca65a
gpg: checking the trustdb
gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model
gpg: depth: 0  valid:   1  signed:   0  trust: 0-, 0q, 0n, 0m, 0f, 1u
gpg: next trustdb check due at 2027-11-23
pub  4096R/3A3CA65A  created: 2017-11-25  expires: 2027-11-23  usage: SC
                     trust: ultimate      validity: ultimate
sub  4096R/1744B74A  created: 2017-11-25  expires: 2027-11-23  usage: E
[ultimate] (1). Philly Mesh <phillymesh@protonmail.ch>
[ultimate] (2)  Philly Mesh <hello@phillymesh.net>
[ultimate] (3)  Mike Dank <mike@phillymesh.net>

Really sign all user IDs? (y/N) y

After signing, send the key back to the keyserver so the signature is recorded:

$ gpg --keyserver pool.sks-keyservers.net --send-key 0x8f5b291d3a3ca65a
gpg: sending key 3A3CA65A to hkp server pool.sks-keyservers.net

That’s all it takes! Your signature will now be recorded and the record will update across all keyservers in the SKS pool. You can check that your signature has been recorded here (it might take a few minutes to populate).
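
If you would rather check from the command line, you can also refresh the key from the keyserver and list the signatures attached to it; your own key's ID should appear in the output once the pool has synced:

$ gpg --keyserver pool.sks-keyservers.net --refresh-keys 0x8f5b291d3a3ca65a
$ gpg --list-sigs 0x8f5b291d3a3ca65a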

The Philly Mesh key has been signed by 0x1619ae4d7cf2a8f7.

 

Installing Yggdrasil – A Toy Implementation of an Encrypted IPv6 Network

At Philly Mesh, we like to play around with pieces of technology that aren’t directly related to our core software stack. One such piece of software is Yggdrasil, an encrypted IPv6 networking implementation developed by Arceliar. Yggdrasil borrows many ideas from cjdns, but was primarily written to test a new routing scheme of Arceliar’s design. While it is not production-ready software, Yggdrasil is an interesting foray into encrypted networking and fun to experiment with.

For this installation guide, we assume a Debian Stretch (or similar) Linux system with a non-root, sudo user.

First, we need to make sure we have a recent version of go. We can check our version using the following command:

$ go version
go version go1.9.2 linux/amd64

Yggdrasil is built with Go 1.9. At the time of writing, the Go packaged in Debian Stretch is older than this. If you need to install a more recent version of Go, you can do so manually. Below is an example installing Go 1.9.2 for the amd64 architecture using a download link from https://golang.org/dl/:

$ sudo apt-get remove golang
$ cd /usr/local
$ sudo wget https://redirector.gvt1.com/edgedl/go/go1.9.2.linux-amd64.tar.gz
$ sudo tar -xzf go1.9.2.linux-amd64.tar.gz
$ sudo ln -s /usr/local/go/bin/go /usr/local/bin/go

Now we will set up some environment variables to use Go:

$ mkdir ~/go
$ export GOROOT=/usr/local/go
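
Note that these exports only last for the current shell session. If you want them set automatically at login, one option (assuming a Bash login shell that reads ~/.profile) is to append them there; Go 1.8 and later default GOPATH to ~/go when it is unset, so that line is optional:

$ echo 'export GOROOT=/usr/local/go' >> ~/.profile
$ echo 'export GOPATH=$HOME/go' >> ~/.profile
$ . ~/.profile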

Now we are ready to install Yggdrasil:

$ cd ~
$ git clone https://github.com/Arceliar/yggdrasil-go.git
$ cd yggdrasil-go/
$ ./build

If all goes well, Yggdrasil will have built successfully with no errors. Now we are ready to generate a config file:

$ ./yggdrasil --genconf > conf.json

The config file is pretty basic and allows for some customization:

$ cat conf.json
{
  "Listen": "[::]:0",
  "Peers": [],
  "BoxPub": "46d18cbcfa0d510fcd226f323efe279525c50eb15db925d4879ee675b99b0724",
  "BoxPriv": "727213ecb3caf601ee49596fa77469674bed177f10d8607ee76ec1f35e942310",
  "SigPub": "08565493e805e905dbcc22cdaa7e60bd6cb6fc1df21d1b807b46f6285f8b86fd",
  "SigPriv": "4173f91e08ab2b6f7c5ae96cf9d61f7ac30b36be7a5eff298e00e0d08f6f5c9608565493e805e905dbcc22cdaa7e60bd6cb6fc1df21d1b807b46f6285f8b86fd",
  "Multicast": true
}

If you want Yggdrasil to listen on a static port, you can change the Listen attribute to use an IP and/or port of your choosing, like "12.34.57.78:1234". You can add entries to the Peers attribute by listing them as strings (IP:PORT) in the array (comma-separated). The Multicast attribute is currently set to true, but you could set it to false if you didn't want to auto-peer for some reason.

Here is a sample config that listens on port 1234 on all interfaces and connects to a peer at 12.34.57.78:1234:

$ cat conf.json
{
  "Listen": "[::]:1234",
  "Peers": ["12.34.57.78:1234"],
  "BoxPub": "46d18cbcfa0d510fcd226f323efe279525c50eb15db925d4879ee675b99b0724",
  "BoxPriv": "727213ecb3caf601ee49596fa77469674bed177f10d8607ee76ec1f35e942310",
  "SigPub": "08565493e805e905dbcc22cdaa7e60bd6cb6fc1df21d1b807b46f6285f8b86fd",
  "SigPriv": "4173f91e08ab2b6f7c5ae96cf9d61f7ac30b36be7a5eff298e00e0d08f6f5c9608565493e805e905dbcc22cdaa7e60bd6cb6fc1df21d1b807b46f6285f8b86fd",
  "Multicast": true
}

Now, we can start Yggdrasil in the background:

$ sudo ./yggdrasil --useconf < conf.json &
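
The trailing & only backgrounds the process for your current session. If you want its output saved to a file and the process to survive closing your shell, a simple variation (just a sketch, not an official recommendation from the project) is:

$ sudo nohup ./yggdrasil --useconf < conf.json > yggdrasil.log 2>&1 &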

You should now have a tun interface up for your Yggdrasil node:

$ ip a 
43: tun1: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1280 qdisc pfifo_fast state UNKNOWN group default qlen 500
    link/none
    inet6 fd00:4645:d147:7c16:98f2:20ea:d0ba:7174/8 scope global
       valid_lft forever preferred_lft forever

Now, you can ping other Yggdrasil nodes on the network:

$ ping6 -c4 fd1f:dd73:7cdb:773b:a924:7ec0:800b:221e
PING fd1f:dd73:7cdb:773b:a924:7ec0:800b:221e(fd1f:dd73:7cdb:773b:a924:7ec0:800b:221e) 56 data bytes
64 bytes from fd1f:dd73:7cdb:773b:a924:7ec0:800b:221e: icmp_seq=1 ttl=64 time=14.4 ms
64 bytes from fd1f:dd73:7cdb:773b:a924:7ec0:800b:221e: icmp_seq=2 ttl=64 time=12.6 ms
64 bytes from fd1f:dd73:7cdb:773b:a924:7ec0:800b:221e: icmp_seq=3 ttl=64 time=15.1 ms
64 bytes from fd1f:dd73:7cdb:773b:a924:7ec0:800b:221e: icmp_seq=4 ttl=64 time=12.9 ms

--- fd1f:dd73:7cdb:773b:a924:7ec0:800b:221e ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3004ms
rtt min/avg/max/mdev = 12.673/13.783/15.105/1.017 ms

Further documentation for Yggdrasil is available here, and a whitepaper draft is available here.

 

Set Your Hyperboria Node’s Domain Name on the fc00 Map

fc00, a project aimed at making the Hyperboria network more accessible, has been hosting a network map of Hyperboria peers for a few years. The map, updated throughout the day, is available to view here. Unlike a geographic map, the fc00 map shows the network topology: while I can’t tell whether my node in Seattle is physically near nodes I’m not connected to, I can see the edges between nodes and view the centrality of the network.

The node map at the time of writing.

The fc00 map also has a handy search feature where you are able to look up nodes based on their IP address or domain name to view cjdns version, connected peers, and centrality. However, domain names are not discovered automatically and need to be added manually via a GitHub repository.

Adding your node’s domain name to the GitHub repo is a relatively simple process, and serves as a good introduction to creating pull requests for any git project. This guide assumes you have a web browser, a GitHub account, a DNS AAAA record pointed to the IPv6 address of your Hyperboria node (subdomains work too!), and a non-root, sudo user on a Linux machine. Commands for a Linux-based workstation will be shown, but should translate easily to a workstation running OSX or BSD.

Forking the Repository

The node list lives in a GitHub repository, https://github.com/zielmicha/nodedb. Obviously, this is not our own personal repository, so we will first need to fork the repository to our own GitHub account. This will duplicate the repository’s current state so we can perform edits. Navigate to the repository’s page and press the Fork button on the right side of the page, near the top.

Press the fork button.

If prompted for a location to fork the repository to, choose your profile. You will now have a complete copy of the repo within your account. Now, on the right side of the page, press the button labeled Clone or download. In the small popup that appears, copy the URL from the field with ctrl-c to get it onto your clipboard.

Press the Clone or download button and copy the URL.

Cloning the Repo

On the Linux machine, let’s do a little housekeeping and install git so we can get to work:

$ sudo apt-get update && sudo apt-get upgrade -y && sudo apt-get install git

Now we will clone the repository to our local machine, pasting the link we copied earlier:

$ git clone https://github.com/Famicoman/nodedb.git
Cloning into 'nodedb'...
remote: Counting objects: 578, done.
remote: Compressing objects: 100% (10/10), done.
remote: Total 578 (delta 4), reused 0 (delta 0), pack-reused 568
Receiving objects: 100% (578/578), 163.20 KiB | 91.00 KiB/s, done.
Resolving deltas: 100% (256/256), done.
Checking connectivity... done.

Next, we can change into the directory and start making updates:

$ cd nodedb

We only need to run a script in the repo that automatically adds the node to the proper location in the file (sorted by address). The script, addnode.sh, takes our node’s IPv6 Hyperboria address and the domain with an AAAA record pointing to it. Note: this domain should be a fully qualified domain name (FQDN). Here is an example with domain h.peer0.famicoman.phillymesh.net, which points to fc9f:990d:2b0f:75ad:8783:5d59:7c84:520b:

$ ./addnode.sh fc9f:990d:2b0f:75ad:8783:5d59:7c84:520b h.peer0.famicoman.phillymesh.net

If we happen to have two or more domain names pointing to the node, we can add them all via one command (though currently only the first in the list is searchable on the fc00 map):

$ ./addnode.sh fc9f:990d:2b0f:75ad:8783:5d59:7c84:520b "h.peer0.famicoman.phillymesh.net h.peer0.famicoman.com"
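
Before committing, it is worth confirming that the AAAA record actually resolves to your node's Hyperboria address; a quick dig query should return the same fc address you passed to addnode.sh:

$ dig +short -t aaaa h.peer0.famicoman.phillymesh.net
fc9f:990d:2b0f:75ad:8783:5d59:7c84:520b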

The changes are now made, so we can commit them to our local, working copy of the repo with a message describing our changes:

$ git commit -am "Added famicoman's node0"
[master 125b493] Added famicoman's node0
 1 file changed, 1 insertion(+), 1 deletion(-)

Finally, we will push the changes back to our repo on GitHub:

$ git push origin master
Username for 'https://github.com': Famicoman
Password for 'https://Famicoman@github.com':
Counting objects: 3, done.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 292 bytes | 0 bytes/s, done.
Total 3 (delta 2), reused 0 (delta 0)
remote: Resolving deltas: 100% (2/2), completed with 2 local objects.
To https://github.com/Famicoman/nodedb.git
   3f94d16..125b493  master -> master

Creating the Pull Request

Back on Github, we can view the changes to our repository by clicking the link labeled Compare on the right side of our fork’s page.

Click the Compare link.

This will show the changes between our fork’s repository and the official repository at zielmicha/nodedb. From here, all we have to do is press the button labeled Create pull request. This will automatically create a request for a change at the official zielmicha/nodedb repository which the author can then approve and merge into the main branch.

Press the button for Create Pull Request.

Making Updates

Later on, we may want to go back and add more nodes to the list or otherwise edit nodes we have already added. We already know the commands to run to get our changes into our fork, but by this time it may be out of date! Luckily we can force an update to our fork from the main zielmicha/nodedb branch so we can perform our edits on the most up-to-date version of the repository.

Back on the Linux machine, simply run the following set of commands:

$ git remote add upstream https://github.com/zielmicha/nodedb
$ git fetch upstream
$ git checkout master
$ git reset --hard upstream/master
$ git push origin master --force

Now we are free to add nodes, commit, push changes to our fork, and create a new pull request!

Conclusion

Creating pull requests through GitHub is a simple task, and adding a node to the fc00 map is a perfect introduction to the process. After your pull request gets successfully merged, be sure to check out your node on the map!

My (small) node.

 

Run cjdns on Google Compute Engine

Google recently announced the Google Cloud Platform Free Tier, which includes an f1-micro instance on the Google Compute Engine. Compute Engine instances are essentially virtual private servers, with full customization available for disk size, number of processors and amount of memory. The f1-micro instance supplied free to each user boasts modest specifications: 1x shared virtual CPU (quantified as 0.2 of a CPU before bursting as available), 600MB of RAM, 5GB of snapshot storage, and 30GB of HDD persistent disk. As with all cloud providers, Compute Engine offers beefier machines priced accordingly. Luckily, Google is currently offering a free trial of the Google Cloud Platform (US only for the time being): $300 of credit to use over 12 months. At around $4-$5 per month for an f1-micro instance, a user could run several of these lower-tier machines for a year under the trial period.

Getting started with Google Compute Engine is relatively simple. This guide assumes you have a web browser, and a machine to make SSH connections from to access your Compute Engine instance. Commands for a Linux-based workstation will be shown, but should translate easily to a workstation running OSX or BSD.

Compute Engine Instance Setup

Start by signing up for the free trial here (If you have already completed a trial, the same link should log you in to your account). This will ask for some personal information (if it is not already in your Google account) and credit card information to validate the account. Afterwards, we will be automatically logged into the Google Cloud Platform console. From here, we can create a new project by clicking the Create project link in the console’s banner near the top left of the screen.

Create a new project.

Give the project a name and click on the Create button. Now we will be in the project’s dashboard. Back in the console’s banner, press the Products & services menu button on the left and select Compute Engine from the menu.

Select Compute Engine from the menu.

A pop-up will now appear regarding VM instances. We will want to press the blue Create button when prompted; this will launch the instance creation wizard. Give the instance a Name (if desired) and pick a Zone from the drop-down menu. There are many zones available, and their features are listed here. It is important to note that the free f1-micro instances are limited to US regions, so select one of those if you are interested in the always-free offer. If you are using free trial credit or otherwise paying for an instance, feel free to select any zone desired. Under Machine Type, click the Customize link to show our specification options.

Moving the slider for Cores all the way to the left will adjust the specs to an f1-micro instance.

Select the number of cores for the target virtual machine we are creating. Moving the slider all the way to the left will give us 1 shared vCPU and 600MB of RAM. Note that the monthly rate on the right side of the screen changes based on the modifications. As shown under Machine type, Compute Engine offers a wide array of options for your instance, including GPU selection and even the option to run with 64 cores. We could pick a better instance for now and scale it down before the 12-month period is over to remain under the free tier, but for the sake of testing the f1-micro instance, the rest of this tutorial will proceed with the settings as shown.

Lower on the page, we will leave the Boot Disk selection alone, as Debian Jessie on 10GB is more than enough for cjdns. Click the link for Management, disk, networking, SSH keys to expand a few more options, and click on the Networking tab.

The Networking tab for our instance.

Click on the drop-down menu under External IP and select New static IP address. When prompted, give this IP address a name (anything will do) and press the Reserve button to complete the action. This will make sure our external IP address does not change unexpectedly.

Now, we want to add an SSH key for remote access to our instance. On our Linux workstation, bring up a new console. To create the new SSH key-pair we will run ssh-keygen, specifying a file and our username. Make sure to create a passphrase when prompted!

$ ssh-keygen -t rsa -f ~/.ssh/google-compute-ssh -C famicoman
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/famicoman/.ssh/google-compute-ssh.
Your public key has been saved in /home/famicoman/.ssh/google-compute-ssh.pub.
The key fingerprint is:
d3:f1:63:ab:5a:f9:e8:54:87:92:e1:95:de:3c:58:27 famicoman
The key's randomart image is:
+---[RSA 2048]----+
|                 |
|             .   |
|          o o E .|
|         o B * o |
|        S = O =  |
|         . = + . |
|          + .    |
|         o +     |
|        .o+ .    |
+-----------------+

Now we will modify the permissions of our private key to ensure only our user can access it:

$ chmod 400 ~/.ssh/google-compute-ssh

Finally, read the contents of our public key file, which we will be using to complete the Compute instance setup.

 $ cat ~/.ssh/google-compute-ssh.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDbEKVCbgDus+hno+GKtsuYgrWSgarOH3/2R1q5IA1SWZXn9OJZQ1LAWn7QsM01tg7TTrrAo57wAlemC+eJgLLVsAnayvA/WZjlvjEdcX3TvQqzNnWGNf4VK7g/PvIOvyX4gj8uaHc/zWdz28KIgz2cUP/2bNWJox/p9M3vssxttpuTIL2knFIOKYphxaBWvfXu92a5kB60gso/0pcyZ7O+Kt2evfjtyByJRvypeM0YyMHM+xZ7b0gooWk9rSkpkGRJMBtHQOVVWqIQ2rjnEMPWNpEyqqC3KZmgolRP4a8TpXEDyP6N/2pc53zuVqCrgL7M8jYcIEmcLYASvTqWJlh5 famicoman

Back in our Compute instance setup, click on the SSH Keys tab, directly to the right of the Networking tab we are currently on. In the Enter entire key data textbox, paste the contents of the google-compute-ssh.pub file we just printed with cat and press the Add item button.

Instance configuration after adding the SSH key.

Finally, press the Create button to make our instance and boot it up for the first time.

After a few seconds, the virtual machine will appear on our screen and be ready to access.

A fresh Compute instance!

Cjdns Installation

Now back on our Linux workstation, we will SSH to the instance using the External IP displayed for our instance:

$ ssh -i .ssh/google-compute-ssh famicoman@146.148.43.98

When prompted, accept the key for the host and enter the passphrase for the SSH key.

Now on our instance, let’s change over to root, run an update and then upgrade all of the packages:

famicoman@instance-1:~$ sudo -i
root@instance-1:~$ apt-get update && apt-get upgrade -y

Now, we will install the packages we need to retrieve and install cjdns:

root@instance-1:~# apt-get install -y git build-essential

IPv6 is disabled on all Compute Engine instances by default, so we will want to override this by adding a few lines to the sysctl.conf file to enable IPv6 and then rebooting the instance:

root@instance-1:~$ { echo "net.ipv6.conf.all.disable_ipv6 = 0"; echo "net.ipv6.conf.default.disable_ipv6 = 0"; echo "net.ipv6.conf.tun0.disable_ipv6 = 0";} >> /etc/sysctl.conf && reboot

After a few moments, the machine should become unresponsive and the SSH session should terminate. Afterwards, SSH back into the machine from the workstation and change over to root again:

$ ssh -i .ssh/google-compute-ssh famicoman@146.148.43.98

famicoman@instance-1:~$ sudo -i
root@instance-1:~$ apt-get update && apt-get upgrade -y
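
Before continuing, we can quickly confirm that the sysctl changes took effect after the reboot and that IPv6 is no longer disabled:

root@instance-1:~$ sysctl net.ipv6.conf.all.disable_ipv6
net.ipv6.conf.all.disable_ipv6 = 0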

Now we will change over to the /opt directory and clone the cjdns repo, checking out the latest version:

root@instance-1:~$ cd /opt
root@instance-1:/opt$ git clone https://github.com/cjdelisle/cjdns.git
root@instance-1:/opt$ cd cjdns && git checkout cjdns-v19.1

Now that the latest version is checked out, we can perform an optimized build:

root@instance-1:/opt/cjdns$ CFLAGS="-O2 -s -static -Wall -march=native" ./do

After a few minutes the build will be done. Next, we will link cjdroute into /usr/local/bin and generate a configuration file.

root@instance-1:/opt/cjdns$ cd /usr/local/bin/ && ln -s /opt/cjdns/cjdroute cjdroute
root@instance-1:/usr/local/bin$ cjdroute --genconf > /usr/local/etc/cjdroute.conf

Finally, we can start cjdns:

root@instance-1:/usr/local/bin$ cjdroute < /usr/local/etc/cjdroute.conf
1489881689 INFO cjdroute2.c:642 Cjdns amd64 linux +seccomp
1489881689 INFO cjdroute2.c:646 Checking for running instance...
1489881689 DEBUG UDPAddrIface.c:293 Bound to address [0.0.0.0:34199]
1489881689 DEBUG AdminClient.c:333 Connecting to [127.0.0.1:11234]
1489881690 DEBUG Pipe.c:134 Buffering a message
1489881690 DEBUG cjdroute2.c:699 Sent [144] bytes to core
1489881690 INFO RandomSeed.c:42 Attempting to seed random number generator
1489881690 INFO RandomSeed.c:50 Trying random seed [/dev/urandom] Success
1489881690 INFO RandomSeed.c:56 Trying random seed [sysctl(RANDOM_UUID) (Linux)] Failed
1489881690 INFO RandomSeed.c:50 Trying random seed [/proc/sys/kernel/random/uuid (Linux)] Success
1489881690 INFO RandomSeed.c:64 Seeding random number generator succeeded with [2] sources
1489881690 INFO LibuvEntropyProvider.c:59 Taking clock samples every [1000]ms for random generator
1489881690 DEBUG Pipe.c:231 Pipe [/tmp/cjdns_pipe_client-core-ycdqw9fs9m75mv3vqv16x7pvg0mcw5] established connection
1489881690 DEBUG Pipe.c:253 Sending buffered message
1489881690 DEBUG Core.c:354 Getting pre-configuration from client
1489881690 DEBUG Pipe.c:231 Pipe [/tmp/cjdns_pipe_client-core-ycdqw9fs9m75mv3vqv16x7pvg0mcw5] established connection
1489881690 DEBUG Core.c:357 Finished getting pre-configuration from client
1489881690 DEBUG UDPAddrIface.c:254 Binding to address [127.0.0.1:11234]
1489881690 DEBUG UDPAddrIface.c:293 Bound to address [127.0.0.1:11234]
1489881690 DEBUG UDPAddrIface.c:293 Bound to address [0.0.0.0:37092]
1489881690 DEBUG AdminClient.c:333 Connecting to [127.0.0.1:11234]
1489881690 INFO Configurator.c:135 Checking authorized password 0.
1489881690 INFO Configurator.c:159 Adding authorized password #[0] for user [default-login].
1489881690 INFO Configurator.c:411 Setting up all ETHInterfaces...
1489881690 INFO Configurator.c:427 Creating new ETHInterface [eth0]
1489881690 INFO Configurator.c:388 Setting beacon mode on ETHInterface to [2].
1489881690 DEBUG Configurator.c:531 Security_chroot(/var/run/)
1489881690 DEBUG Configurator.c:576 Security_noforks()
1489881690 DEBUG Configurator.c:581 Security_setUser(uid:65534, keepNetAdmin:1)
1489881690 DEBUG Configurator.c:596 Security_seccomp()
1489881690 DEBUG Configurator.c:601 Security_setupComplete()
1489881690 DEBUG Configurator.c:685 Cjdns started in the background

Now that cjdns is running, let’s verify this by checking out the interface with ifconfig:

root@instance-1:/usr/local/bin# ifconfig tun0
tun0      Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
          inet6 addr: fcab:62fc:cd8b:36f0:7837:4361:9aa3:946d/8 Scope:Global
          UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1304  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

Conclusion

We are now free to add peers to our cjdroute.conf file or generate our own peering credentials for other installations! Be sure to keep in mind Google Compute Engine’s rates for egress bandwidth as noted here. Nobody wants to wake up to a $500 bill for bandwidth. And, as always, consider locking down your node with proper firewall rules, etc. now that it is live and Internet-facing.
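
As a starting point for that lockdown, below is a minimal iptables sketch (an illustration only, not a complete policy) that allows loopback traffic for the cjdns admin API, established connections, SSH, and the UDP port cjdroute bound in the log output above (34199 here; check the "Bound to address" line in your own output, since the port will differ), then drops everything else. Keep in mind that Compute Engine also enforces its own network-level firewall rules on top of whatever you configure on the instance.

root@instance-1:~# iptables -A INPUT -i lo -j ACCEPT
root@instance-1:~# iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
root@instance-1:~# iptables -A INPUT -p tcp --dport 22 -j ACCEPT
root@instance-1:~# iptables -A INPUT -p udp --dport 34199 -j ACCEPT
root@instance-1:~# iptables -P INPUT DROP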

Compute instances should perform relatively well, even the f1-micro instances. Below are sample cjdns bench results for the f1-micro:

root@instance-1:/usr/local/bin# cjdroute --bench | grep "per second"
1489881851 INFO Benchmark.c:62 Benchmark salsa20/poly1305 in 653ms. 1837672 kilobits per second
1489881851 INFO Benchmark.c:62 Benchmark Switching in 342ms. 598830 packets per second
root@instance-1:/usr/local/bin# cjdroute --bench | grep "per second"
1489881854 INFO Benchmark.c:62 Benchmark salsa20/poly1305 in 635ms. 1889763 kilobits per second
1489881854 INFO Benchmark.c:62 Benchmark Switching in 335ms. 611343 packets per second
root@instance-1:/usr/local/bin# cjdroute --bench | grep "per second"
1489881858 INFO Benchmark.c:62 Benchmark salsa20/poly1305 in 664ms. 1807228 kilobits per second
1489881858 INFO Benchmark.c:62 Benchmark Switching in 355ms. 576901 packets per second
 

Building DIY Community Mesh Networks (2600 Article)

Now that the article has been printed in 2600 magazine, Volume 33, Issue 3 (2016-10-10), I’m able to republish it on the web. The article below is my submission to 2600 with some slight formatting changes for hyperlinks.

Building DIY Community Mesh Networks
By Mike Dank
Famicoman@gmail.com

Today, we are faced with issues regarding our access to the Internet, as well as our freedoms on it. As governmental bodies fight to gain more control and influence over the flow of our information, some choose to look for alternatives to the traditional Internet and build their own networks as they see fit. These community networks can pop up in dense urban areas, remote locations with limited Internet access, and everywhere in between.

Whether you are politically fueled by issues of net neutrality, privacy, and censorship, fed up with an oligarchy of Internet service providers, or just like tinkering with hardware, a wireless mesh network (or “meshnet”) can be an invaluable project to work on. Numerous groups and organizations have popped up all over the world, creating robust mesh networks and refining the technologies that make them possible. While the overall task of building a wireless mesh network for your community may seem daunting, it is easy to get started and scale up as needed.

What Are Mesh Networks?

Think about your existing home network. Most people have a centralized router with several devices hooked up to it. Each device communicates directly with the central router and relies on it to relay traffic to and from other devices. This is called a hub/spoke topology, and you’ll notice that it has a single point of failure. With a mesh topology, many different routers (referred to as nodes) relay traffic to one another on the path to the target machine. Nodes in this network can be set up ad-hoc; if one node goes down, traffic can easily be rerouted to another node. If new nodes come online, they can be seamlessly integrated into the network. In the wireless space, distant users can be connected together with the help of directional antennas and share network access. As more nodes join a network, service only improves as various gaps are filled in and connections are made more redundant. Ultimately, a network is created that is both decentralized and distributed. There is no single point of failure, making it difficult to shut down.

When creating mesh networks, we are mostly concerned with how devices are routing to and linking with one another. This means that most services you are used to running like HTTP or IRC daemons should be able to operate without a hitch. Additionally, you are presented with the choice of whether or not to create a darknet (completely separated from the Internet) or host exit nodes to allow your traffic out of the mesh.

Existing Community Mesh Networking Projects

One of the most well-known grassroots community mesh networks is Freifunk, based out of Germany, encompassing over 150 local communities with over 25,000 access points. Guifi.net, based in Spain, boasts over 27,000 nodes spanning over 36,000 km. In North America we see projects like Hyperboria which connect smaller mesh networking communities together such as Seattle Meshnet, NYC Mesh, and Toronto Mesh. We also see standalone projects like PittMesh in Pittsburgh, WasabiNet in St. Louis, and People’s Open Network in Oakland, California.

While each of these mesh networks may run different software and have a different base of users, they all serve an important purpose within their communities. Additionally, many of these networks consistently give back to the greater mesh networking community and choose to share information about their hardware configurations, software stacks, and infrastructure. This only benefits those who want to start their own networks or improve existing ones.

Picking Your Hardware & OS

When I was first starting out with Philly Mesh, I was faced with the issue of acquiring hardware on a shoestring budget. Many will tell you that the best hardware is low-power computers with dedicated wireless cards. This, however, can incur a cost of several hundred dollars per node. Alternatively, many groups make use of SOHO routers purchased off-the-shelf, flashed with custom firmware. The most popular firmware used here is OpenWRT, an open source alternative that supports a large majority of consumer routers. If you have a relatively modern router in your house, there is a good chance it is already supported (if you are buying specifically for meshing, consider consulting OpenWRT’s wiki for compatibility). Based on Linux, OpenWRT really shines with its packaging system, allowing you to easily install and configure packages of networking software across several routers regardless of most hardware differences between nodes. With only a few commands, you can have mesh packages installed and ready for production.

Other groups are turning towards credit-card-sized computers like the BeagleBone Black and Raspberry Pi, using multiple USB WiFi dongles to perform over-the-air communication. Here, we have many more options for an operating system as many prefer to use a flavor of Linux or BSD, though most of these platforms also have OpenWRT support.

There are no specific wrong answers here when choosing your hardware; some platforms may just be better suited to different scenarios. For the sake of getting started, spec’ing out some inexpensive routers (aim for something with at least two radios and 8MB of flash) or repurposing some Raspberry Pis is perfectly adequate and will help you learn the fundamental concepts of mesh networking as well as develop a working prototype that can be upgraded or expanded as needed (hooray for portable configurations). Make sure you consider options like indoor vs. outdoor use, the 2.4 GHz vs. 5 GHz band, etc.

Meshing Software

You have OpenWRT or another operating system installed, but how can you mesh your router with others wirelessly? Now, you have to pick out some software that will allow you to facilitate a mesh network. The first packages that you need to look at are for what is called the data link layer of the OSI model of computer networking (or OSI layer 2). Software here establishes the protocol that controls how your packets get transferred from node A to node B. Common software in this space includes batman-adv (not to be confused with the layer 3 B.A.T.M.A.N. daemon) and open80211s, which are available for most operating systems. Each of these pieces of software has its own strengths and weaknesses; it might be best to install each package on a pair of routers and see which one works best for you. There is currently a lot of praise for batman-adv as it has been integrated into the mainline Linux tree and was developed by Freifunk to use within their own mesh network.

Revisiting the OSI model again, you will also need some software to work at the network layer (OSI layer 3). This will control your IP routing, allowing for each node to compute where to send traffic next on its forwarding path to the final destination on the network. There are many software packages here such as OLSR (Optimized Link State Routing), B.A.T.M.A.N. (Better Approach To Mobile Adhoc Networking), Babel, BMX6, and CJDNS (Caleb James Delisle’s Networking Suite). Each of these addresses the task in its own way, making use of a proactive, reactive, or hybrid approach to determine routing. B.A.T.M.A.N. and OLSR are popular here, both developed by Freifunk. Though B.A.T.M.A.N. was designed as a replacement for OLSR, each is actively used and OLSR is highly utilized in the Commotion mesh networking firmware (a router firmware based off of OpenWRT).

For my needs, I settled on CJDNS, which boasts IPv6 addressing, secure communications, and some flexibility in auto-peering with local nodes. Additionally, CJDNS is agnostic to how its host connects to peers. It will work whether you want to connect to another access point over batman-adv, or even tunnel over the existing Internet (similar to Tor or a VPN)! This is useful for mesh networks starting out that may have nodes too distant to connect wirelessly until more nodes are set up in-between. This gives you a chance to lay infrastructure sooner rather than later, and simply swap out for wireless linking when possible. You also get the interesting ability to link multiple meshnets together that may not be geographically close.

Putting It Together

At this point, you should have at least one node (though you will probably want two for testing) running the software stack that you have settled on. With wireless communications, you can generally say that the higher you place the antenna, the better. Many community mesh groups try to establish nodes on top of buildings with roof access, making use of both directional antennas (to connect to distant nodes within the line of sight) as well as omnidirectional antennas to connect to nearby nodes and/or peers. By arranging several distant nodes to connect to one another via line of sight, you can establish a networking backbone for your meshnet that other nodes in the city can easily connect to and branch off of.

Gathering Interest

Mesh networks can only grow so much when you are working by yourself. At some point, you are going to need help finding homes for more nodes and expanding the network. You can easily start with friends and family – see if they are willing to host a node (they probably wouldn’t even notice it after a while). Otherwise, you will want to meet with like-minded people who can help configure hardware and software, or plan out the infrastructure. You can start small online by setting up a website with a mission statement and making a post or two on Reddit (/r/darknetplan in particular) or Twitter. Do you have hackerspaces in your area? Linux or amateur radio groups? A 2600 meeting you frequent? All of these are great resources to meet people face-to-face and grow your network one node at a time.

Conclusion

Starting a mesh network is easier than many think, and is an incredible way to learn about networking, Linux, micro platforms, embedded systems, and wireless communication. With only a few off-the-shelf devices, one can get their own working network set up and scale it to accommodate more users. Community-run mesh networks not only aid in helping those fed up with or persecuted by traditional network providers, but also those who want to construct, experiment, and tinker. With mesh networks, we can build our own future of communication and free the network for everyone.

 

CJDNS on OpenWRT – Part 3: Installing & Configuring CJDNS

Now that we have OpenWRT installed and ensured that we have enough space to experiment and install packages, we can proceed to install and configure cjdns.

I have opted to install a GUI package to allow for easier configuration (though I also wanted to see what it had to offer over editing configuration files). The package used here is luci-app-cjdns, relying on the LuCI interface that comes default in most OpenWRT images. If you want to install cjdns without the GUI or do not use LuCI, you can install the regular cjdns package. Note: The standard cjdns package was left out of OpenWRT 15.05.1, but should be in the older 15.05 image. The luci-app-cjdns package should be available in both versions, so you won’t have any issue with the remainder of this guide.

Now we are ready to install cjdns for LuCI. SSH into the access point and run the following command to update and install luci-app-cjdns.

opkg update && opkg install luci-app-cjdns

After this finishes, leave the SSH session open and then load up the OpenWRT web console in a browser and log in. By default, this interface can be reached via http://192.168.1.1. Now that we’re in the console, select cjdns from the Services dropdown on the top menu. An Overview page for cjdns will load (and look rather empty). Now, click the Peers sub-tab link near the top of this page.

Now, we can enter in the peering information for any number of peers to connect to. You will likely want to populate the Authorized Passwords and Outgoing UDP Peers sections as I have below.

CJDNS Peers Tab
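
For reference, the values entered under Outgoing UDP Peers are the same pieces of information found in a standard cjdns peering credential. A hypothetical credential (placeholder address, port, password, and public key throughout) looks like the block below, and each field maps to a column in the LuCI form:

"192.0.2.10:10326": {
    "login": "default-login",
    "password": "examplePassword",
    "publicKey": "0123456789abcdefghijklmnopqrstuvwxyz0123456789abcdef.k"
}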

When finished, press the Save & Apply button to commit any changes and restart cjdns. These steps can be repeated to add as many peers as needed.

Now, navigate back to the Overview page by clicking on the Overview sub-tab link.

After loading, we should now have connection information about the configured peers as shown below.

CJDNS Overview Page

That’s all there is to it! Back in our SSH session, we can try pinging a machine on Hyperboria to confirm a connection:

 ping6 h.peer0.famicoman.com
PING h.peer0.famicoman.com (fc9f:990d:2b0f:75ad:8783:5d59:7c84:520b): 56 data bytes
64 bytes from fc9f:990d:2b0f:75ad:8783:5d59:7c84:520b: seq=0 ttl=42 time=4072.631 ms
64 bytes from fc9f:990d:2b0f:75ad:8783:5d59:7c84:520b: seq=1 ttl=42 time=3800.924 ms
64 bytes from fc9f:990d:2b0f:75ad:8783:5d59:7c84:520b: seq=2 ttl=42 time=4594.193 ms
64 bytes from fc9f:990d:2b0f:75ad:8783:5d59:7c84:520b: seq=3 ttl=42 time=4329.846 ms
^C
--- h.peer0.famicoman.com ping statistics ---
9 packets transmitted, 4 packets received, 55% packet loss
round-trip min/avg/max = 3800.924/4199.398/4594.193 ms

If all went as expected, you now have cjdns running on your OpenWRT router! This can be expanded in the future by copying an OpenWRT configuration onto several routers, and then linking them together wirelessly.

 

CJDNS on OpenWRT – Part 2: Configuring Extroot for More Storage

If you have a low-storage OpenWRT device (4MB of flash), you will probably fill up any free space quickly after the initial OpenWRT install and need more room to grow. Luckily, you can transfer your root file system to a flash drive and boot off of it as long as your access point has a USB port.

If you are following along with our Western Digital N600, you probably don’t need to do this. The N600 comes equipped with 12MB of built-in flash, more than enough to accommodate the software packages we will install in the future. If you have less than this or want to have a nice learning exercise, read on!

You are going to need a Linux machine and a flash drive. The flash drive size shouldn’t matter too much. A lot of people run OpenWRT off of 8MB of internal flash, so any small drive should have plenty of room.

On your Linux machine, plug in the flash drive (mine is a ~10-year-old 64MB drive), and run dmesg to get the kernel message buffer.

dmesg

You should get a lot of output, but importantly at the end, we should see our flash drive being recognized.

[26913.782811] usb 1-1.4: new high-speed USB device number 4 using dwc_otg
[26913.883754] usb 1-1.4: New USB device found, idVendor=0457, idProduct=0151
[26913.883779] usb 1-1.4: New USB device strings: Mfr=0, Product=2, SerialNumber=3
[26913.883796] usb 1-1.4: Product: USB Mass Storage Device
[26913.883812] usb 1-1.4: SerialNumber: 00000000004FDE
[26913.884913] usb-storage 1-1.4:1.0: USB Mass Storage device detected
[26913.887282] usb-storage 1-1.4:1.0: Quirks match for vid 0457 pid 0151: 80
[26913.887450] scsi host0: usb-storage 1-1.4:1.0
[26914.884185] scsi 0:0:0:0: Direct-Access     Staples                   0.00 PQ: 0 ANSI: 2
[26914.886688] sd 0:0:0:0: [sda] 124000 512-byte logical blocks: (63.4 MB/60.5 MiB)
[26914.887210] sd 0:0:0:0: [sda] Write Protect is off
[26914.887235] sd 0:0:0:0: [sda] Mode Sense: 00 00 00 00
[26914.887748] sd 0:0:0:0: [sda] Asking for cache data failed
[26914.887771] sd 0:0:0:0: [sda] Assuming drive cache: write through
[26914.916959]  sda: sda1
[26914.920057] sd 0:0:0:0: [sda] Attached SCSI removable disk
[26914.922645] sd 0:0:0:0: Attached scsi generic sg0 type 0

We see that our physical device is sda, with one partition sda1. Your drive/partition may be labeled differently depending on how many drives you have installed or plugged into your machine, and how many partitions your flash drive has. We can verify we are looking at our flash drive by listing via fdisk.

fdisk -l /dev/sda

You will get a lot of informative output about the device:

Disk /dev/sda: 63 MB, 63488000 bytes
16 heads, 32 sectors/track, 242 cylinders, total 124000 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x91f72d24

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *          32      123999       61984    b  W95 FAT32

Now we will go ahead and format the drive as ext4. First, however, we need to create a partition by running the fdisk command on the drive (without the -l option).

fdisk /dev/sda

This is an interactive utility, so when prompted, enter d to delete the current partition on the drive. Then enter n for a new partition (taking the defaults by pressing the return key). Finally, enter w to apply the changes to the disk and exit.
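
If you would rather script this than answer the prompts interactively, the same keystrokes can be piped into fdisk (a common shortcut that is fine for a throwaway flash drive; prompts differ slightly between fdisk versions, so double-check the result with fdisk -l afterwards):

printf 'd\nn\np\n1\n\n\nw\n' | fdisk /dev/sda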

Now, we can make a file system on the partition we just created, formatting it as ext4:

mkfs.ext4 /dev/sda1

Afterwards we can list with fdisk again to see our changes:

fdisk -l /dev/sda

Disk /dev/sda: 63 MB, 63488000 bytes
3 heads, 32 sectors/track, 1291 cylinders, total 124000 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x91f72d24

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1            2048      123999       60976   83  Linux

You can now remove your USB drive from your Linux machine and plug it into your OpenWRT device.

Next, we ssh into our OpenWRT device and log in as root. We need to install a few utilities with opkg before we can switch over the root filesystem.

opkg update && opkg install block-mount kmod-fs-ext4 kmod-usb-storage fdisk nano

Now we will run fdisk to see that our drive is recognized:

root@OpenWrt:~# fdisk -l

Disk /dev/mtdblock0: 256 KiB, 262144 bytes, 512 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mtdblock1: 64 KiB, 65536 bytes, 128 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mtdblock2: 64 KiB, 65536 bytes, 128 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mtdblock3: 64 KiB, 65536 bytes, 128 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mtdblock4: 15.5 MiB, 16252928 bytes, 31744 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mtdblock5: 1.3 MiB, 1310720 bytes, 2560 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mtdblock6: 14.3 MiB, 14942208 bytes, 29184 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mtdblock7: 12.1 MiB, 12648448 bytes, 24704 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mtdblock8: 64 KiB, 65536 bytes, 128 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sda: 60.6 MiB, 63488000 bytes, 124000 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x91f72d24

Device     Boot Start    End Sectors  Size Id Type
/dev/sda1        2048 123999  121952 59.6M 83 Linux

My drive registers as /dev/sda1, and this is what is used throughout the commands below. Make sure you take note of yours and make the necessary changes when running additional commands.

Next, we will copy our filesystem over and set the device to boot from the flash drive. If you are squeamish about making this change via a terminal, scroll down and view the sources referenced at the end of this tutorial. There is a way to perform this configuration using OpenWRT’s default graphical interface: LuCI. If you prefer to issue commands over ssh, read on.

Now, we will actually mount the drive, and copy the root filesystem to it:

mkdir /mnt/sda1
mount /dev/sda1 /mnt/sda1
mkdir -p /tmp/cproot
mount --bind / /tmp/cproot
tar -C /tmp/cproot -cvf - . | tar -C /mnt/sda1 -xf -
umount /tmp/cproot

After we have moved the filesystem over, we can modify the fstab with our editor of choice.

nano /etc/config/fstab

Paste the following at the top of the file and save it. This will use the flash drive as the primary filesystem and preserve this configuration even after reboot.

config mount
        option device '/dev/sda1'
        option target '/'
        option fstype 'ext4'
        option enabled '1'
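
If you would rather not write this block by hand, the block-mount package we installed earlier can generate a starting configuration from the drives it detects (note that this overwrites the existing file); you would then edit the generated entry for /dev/sda1 so its target is '/' and it is enabled, as above:

block detect > /etc/config/fstab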

Finally, we reboot the device by issuing the reboot command:

reboot

After the system comes back up, ssh into it once more and run the df command.

df -h

In the output, pay attention to the Size column and make sure it closely matches your flash drive's capacity, which verifies that you are actually running off of the flash drive.

Filesystem                Size      Used Available Use% Mounted on
rootfs                   53.7M      9.7M     39.8M  20% /
/dev/root                 2.3M      2.3M         0 100% /rom
tmpfs                    61.6M    628.0K     61.0M   1% /tmp
/dev/sda1                53.7M      9.7M     39.8M  20% /
tmpfs                   512.0K         0    512.0K   0% /dev

That’s all there is to it! Now, you are booting directly off of the flash drive and have more room to install packages. Additionally, you may want to save an image of your drive for future use. You never know when you will be trying something out and need to restore from a backup!
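
One way to capture such an image is to plug the drive back into the Linux machine (double-check that it still shows up as /dev/sda before running anything destructive) and take a raw copy with dd; the image name here is arbitrary, and swapping if and of restores it later:

dd if=/dev/sda of=openwrt-extroot-backup.img bs=4M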

Sources

 

Setting Up Dynamic DNS for Your Registered Domain through CloudFlare

Note: As of cjdns version 18, cjdns peering credentials will not be valid if they use domain names as opposed to IP addresses. If you are reading this in the year 2017 or later, the below guide is essentially obsolete from a peering perspective but may still be useful for automatically updating DNS records for other purposes.

If you host a node at a residential location, you probably do not have the comfort of a static IP address, one which does not change. Residential Internet accounts usually have the unfortunate side effect of occasional public-facing IP address changes, meaning you have a dynamic IP address. If you give out your peering information to others with your IP address and it changes, these credentials will fail until you are able to supply all of your peers with an updated IP address. Even after you discover the issue, it could take a while for all of your peers to adopt the new information, causing plenty of broken connections.

To solve this problem, we can use dynamic DNS, and get you a domain name to use in place of your IP address by acting as a pointer to it. There are many providers that do this including Duck DNS and FreeDNS. Almost all of them allow you to create subdomains (like yoursupercoolsub.duckdns.org) which behind the scenes point to your IP address. In DNS lingo, this is called an A record (or AAAA record if you have an IPv6 address). Most of these services also allow you to download a client to automatically update your IP address for the record if it changes, so you can just distribute the subdomain’s name to your peers and not worry about unexpected changes.

This works well, but I (and possibly some of you) have a paid domain set up through CloudFlare’s DNS. Instead of famicomans-node.duckdns.org, I would prefer to use my own domain and configure something like node.famicoman.com to act as a pointer to my IP address. Now, I could make a record with a dynamic DNS provider (like famicomans-node.duckdns.org) and have node.famicoman.com point to it instead of the IP address directly (this is called a CNAME record), but that would require me to sign up for another service and configure the two to work together.

Instead of this, I decided to use CloudFlare directly, working with their API, some scripting, and a cron job.

The following assumes that you have registered at CloudFlare and have already added your domain to its care. We also assume that your cjdns installation is on a Linux machine, or you have access to one on your home network.

Before doing anything else, you need to retrieve your CloudFlare API key. Log into the CloudFlare console and navigate to My Settings. Scroll down until you find the API Key item and press the button labeled View API Key. After your API key displays, record it for use later.

Now, navigate to the DNS tab for your domain name. Create a new A record for your public IP by specifying a name (I chose “node”) and a dummy IP address (I used “8.8.8.8”). It doesn’t matter what IP address you supply as it will be updated later. Afterwards, press the green Add Record button.

The record should now display below with a gray cloud icon next to it. This shows that CloudFlare’s CDN services are not active for the record (which is what we want) and that this subdomain will resolve to the IP address directly.

Now on your cjdns node, we can start to configure automatic updates.

First, we need to set up some directories. Log into a non-root account and execute the following to create and change into our new directory. I like to put each of my scripts in its own folder under a ‘scripts’ folder in my home directory, but feel free to use a different location.

mkdir -p ~/scripts/cloudflare-update-record
cd ~/scripts/cloudflare-update-record

Now, we will download a gist with a CloudFlare update script, saving it as ‘cloudflare-update-record.sh’. As always, it is unwise to blindly execute a random script from the Internet, so review the code to check for any funny business. You can see updates to and comments on this script here.

wget https://gist.githubusercontent.com/benkulbertis/fff10759c2391b6618dd/raw/0e365a91a15e15494b312cb5492e40dec2072414/cloudflare-update-record.sh

Now we will edit the script to supply credentials for connecting to and updating our CloudFlare record.

nano cloudflare-update-record.sh

Near the top of the file, there are four fields you need to set with some values. Enter your email as the auth_email, your API key from earlier as the auth_key, your root domain as the zone_name, and your subdomain’s name as the record_name. Below is an example of what these fields might look like filled out.

auth_email="famicoman@gmail.com"
auth_key="a456a7b68cb9ac65b34c2b43b3c6bc4c2b4" # found in cloudflare account settings
zone_name="famicoman.com"
record_name="node.famicoman.com"

Now we want to change the permissions so that only your user can read, write, and execute the script (we wouldn’t want any other users peeking at your API key), and then execute the script to test it out.

chmod 700 cloudflare-update-record.sh
./cloudflare-update-record.sh

If all goes to plan, you should receive output similar to the following in your console:

>>> IP changed to: 127.112.6.34

If you don’t get a message that your IP is changed, go back and check the credentials you entered into cloudflare-update-record.sh and make sure they are correct.

Additionally, we can check cloudflare.log to see a time-stamped account of our execution.

cat cloudflare.log
[Tue  9 Feb 22:32:30 UTC 2016] - Check Initiated
[Tue  9 Feb 22:32:33 UTC 2016] - IP changed to: 127.112.6.34
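
You can also verify the record from any machine by comparing what the subdomain resolves to against your current public IP (assuming curl is available; icanhazip.com is just one of many services that echo your address back):

dig +short -t a node.famicoman.com
curl -4 -s icanhazip.com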

Now, we want to have this script run automatically to reduce downtime. I’ve opted to set this script up to run every 10 minutes and schedule it with cron.

Bring up your crontab with the following command:

crontab -e

Scroll to the bottom of the file in your editor and paste the command below on a new line. This job will run every 10 minutes (of every hour, every day of the month, every month, and every day of the week) by changing to your cloudflare-update-record directory and then executing cloudflare-update-record.sh.

*/10 * * * * cd ~/scripts/cloudflare-update-record && /bin/bash ~/scripts/cloudflare-update-record/cloudflare-update-record.sh

After saving and exiting the editor, the new job will become active. We can watch the job go off by tailing the cloudflare.log file.

tail -f cloudflare.log

It will take some time depending on when you first save the crontab, but you should see records written to the log in real time, every 10 minutes starting on the hour.

[Tue  9 Feb 22:32:30 UTC 2016] - Check Initiated
[Tue  9 Feb 22:32:33 UTC 2016] - IP changed to: 127.112.6.34
[Tue  9 Feb 22:40:02 UTC 2016] - Check Initiated
[Tue  9 Feb 22:50:02 UTC 2016] - Check Initiated
[Tue  9 Feb 23:00:02 UTC 2016] - Check Initiated

That’s all there is to it; you can now distribute your connection details using a subdomain on a registered domain name instead of your IP address. Keep in mind that this can be scaled to multiple nodes by giving them their own subdomains (node1, node2, etc.) so they are easy to remember.

 

Running cjdns on Raspbian Jessie

If you’re like me, you have a few Raspberry Pis kicking around, waiting for a job to do. I adopted early and purchased (at least one) original Model B with 256MB of RAM. This was a nifty little box four years ago, but has since been overshadowed by its bigger brother, the revised Model B with 512MB of RAM, and its cousins: the B+ and RPi 2. These originals still have life left in them, and can often be found below the original $35 price tag. When it comes to running cjdns, they do a fantastic job!

This post assumes you have a Raspberry Pi, power adapter, sd card, and enough smarts to hook the Pi up to your network and feed it a Raspbian image. This should work for any Model B Raspberry Pi, from the original up to the Raspberry Pi 2.

Installation

The Hyperboria documentation offers a guide on installing for Debian Jessie, but it fails on the Raspberry Pi out of the box. Prior to this, I was running Raspbian Wheezy with few issues in relation to cjdns, but other work I wanted to do had me yearning for a more up-to-date distribution. I grabbed Raspbian Jessie from the official download site (Release date 2015-11-21), loaded it onto my SD card, and booted up.

On an original Raspberry Pi, this process takes a while. At this point, start yourself a pot of coffee and then log on to your pi using the default username and password, pi/raspberry. I run headless, so everything will be done from the console over ssh, but that shouldn’t matter too much.

We are going to install cjdns as a service, so let’s change over into root to make this a little more comfortable.

su -

If you have a fresh installation, be sure to run raspi-config and expand the filesystem to fill your SD card, overclock if you want, and restart the Pi when done. After restarting, log in again and change back to root.

raspi-config

I also cannot express how important it is to change the default user and root passwords.

passwd pi
passwd root

Now, we want to update, upgrade, and install some dependencies.

apt-get update
apt-get upgrade
apt-get install nodejs build-essential git

Somewhere during these commands, your coffee should have finished. Go pour a cup and come back to watch the console until everything is tidied up.

Afterwards, we will build cjdns by pulling down the latest code and compiling it.

cd /opt
git clone https://github.com/cjdelisle/cjdns.git
cd cjdns
NO_TEST=1 Seccomp_NO=1 ./do

Pay close attention to this last line. We need to execute ./do in this fashion because of a current issue with the kernel on Raspbian Jessie.

After a little wait, the build should finish successfully. Now we want to configure cjdns to run as a daemon, so let’s create a link to the binary, generate a configuration file, and copy over the service file.

ln -s /opt/cjdns/cjdroute /usr/bin
(umask 077 && ./cjdroute --genconf > /etc/cjdroute.conf)
cp contrib/systemd/cjdns.service /etc/systemd/system/

All that’s left is to enable the service and start it up. Afterwards, it should start on every boot-up automatically.

systemctl enable cjdns
systemctl start cjdns

If you want to edit your cjdroute.conf for adding peers or… well… anything else, simply edit the file in /etc/cjdroute.conf and restart the service.

nano /etc/cjdroute.conf
systemctl restart cjdns

At any point, you can check the status of the service:

systemctl

In the output, you should see the following:

cjdns.service    loaded    active running    cjdns: routing engine designed for sec...
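
You can also query the unit directly for a more detailed view, including whether it is enabled and its most recent log lines:

systemctl status cjdns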

Troubleshooting & Debugging

If at any point you want to check the output generated by starting cjdns, stop the service and run cjdroute manually.

systemctl stop cjdns
/opt/cjdns/cjdroute < /etc/cjdroute.conf

Occasionally, you may get a Configurator error like the one below:

1454470218 CRITICAL Configurator.c:97 Failed to make function call [Timed out waiting for a response], error: [UDPInterface_beginConnection]

If this happens, run the following before starting cjdroute again to ensure that the ipv6 kernel module is loaded:

modprobe ipv6

Conclusion

That’s all it takes; you now have a node capable of connecting to Hyperboria! Now all you need to do is find some peers, add them to your configuration file, and join the network!