Tinc Boot: Full Mesh VPN without pain

An automatic, secure, distributed, peer-to-peer, full-mesh VPN with transitive connections (that is, forwarding messages when there is no direct access between peers), no single point of failure, a time-tested codebase, low resource consumption, and the ability to “punch” through NAT – is it possible?

https://github.com/reddec/tinc-boot

This article was originally published on Dev.To.

Overview of Tinc

Unfortunately, Tinc VPN does not have as large a community as OpenVPN or similar solutions, but there are some good tutorials around.

The Tinc man page is always a good source of truth.

Tinc VPN (per the official site) is a service (the tincd daemon) that builds a private network by tunneling and encrypting traffic between nodes. The source code is open and available under the GPL v2 license. As with classic solutions such as OpenVPN, the created virtual network operates at the IP level (OSI layer 3), which generally means no changes to applications are required.

Key features:

  • encryption, authentication, and compression of all traffic;
  • automatic full-mesh routing: traffic goes directly between nodes whenever possible;
  • NAT traversal;
  • the ability to bridge Ethernet segments;
  • support for many operating systems (Linux, FreeBSD, macOS, Windows, and others).

There are two branches of Tinc development: 1.0.x (packaged in almost all repositories) and 1.1 (an eternal beta). Only version 1.0.x is used in this article.

From my point of view, one of the strongest features of Tinc is its ability to forward messages through peers when a direct connection is impossible. Routing tables are built automatically, and even nodes without a public address can become relay servers.

Case scenario

Consider a situation with three servers (China, Russia, Singapore) and three clients (Russia, China and the Philippines):

Using traffic exchange between Shanghai and Moscow as an example, consider the following (approximate) Tinc scenarios:

  1. Normal situation: Moscow <-> russia-srv <-> china-srv <-> Shanghai
  2. Due to censorship rules, connection to China has been closed: Moscow <-> russia-srv <-> Manila <-> Singapore <-> Shanghai
  3. Continuing from scenario 2: if the server in Singapore fails, traffic is transferred through the server in China, and vice versa.

Whenever possible, Tinc attempts to establish a direct connection between two nodes behind NAT by hole punching.

Brief introduction to Tinc configuration

Tinc is positioned as an easy-to-configure service; however, something went wrong. To create a new node, the minimal requirements are:

  • a configuration directory (/etc/tinc/<network>/hosts);
  • the main configuration file tinc.conf (the node name and connection recommendations);
  • a host file in hosts/ describing the node itself (its subnet and, for public nodes, its address);
  • a generated key pair (tincd -n <network> -K);
  • executable tinc-up and tinc-down scripts that bring the virtual network interface up and down.
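For illustration, a minimal set of files for a hypothetical node named moscow in a network called dnet could look like this (all names and addresses here are placeholders, not taken from any real setup):

    # /etc/tinc/dnet/tinc.conf
    Name = moscow
    Interface = dnet

    # /etc/tinc/dnet/hosts/moscow
    Subnet = 172.173.0.1/32
    # the public RSA key is appended here by `tincd -n dnet -K`

    # /etc/tinc/dnet/tinc-up (must be executable)
    #!/bin/sh
    ip addr add 172.173.0.1/16 dev $INTERFACE
    ip link set dev $INTERFACE up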

In addition to the above, when connecting to an existing network, you must obtain the host keys of the existing members and provide your own.

I.e., for the second node:

(diagram: file exchange for the second node)

And for the third node:

(diagram: file exchange for the third node)

Even by using two-way synchronization (for example, unison), the number of additional operations grows to N, where N is the number of public nodes.

To the credit of the Tinc developers: to join an existing network, you only need to exchange keys with one of its nodes (a bootnode). After starting the service and connecting to that participant, tinc obtains the network topology and becomes able to work with all members.

However, if the boot host becomes unavailable and tinc is then restarted, there is no way to connect to the virtual network.

Moreover, the enormous capabilities of tinc, together with the project's academic documentation (well written, but with few examples), leave extensive room for mistakes.

Reasons to create tinc-boot

Generalizing the problems described above and formulating them as tasks, we arrive at the following requirements:

  1. simplify the process of creating a new node;
    • potentially, it should be enough to give an ordinary user (a “Power User” in Windows terms) one small shell command to create a new node and join the network (I have in mind something like this to connect with my granddad, a former tech guy);
  2. provide automatic distribution of keys between all active nodes;
  3. provide a simplified procedure for exchanging keys between the bootnode and the new client.

bootnode - a node with a public address.

Thanks to task 2 above, we can guarantee that once the new node has exchanged keys with the bootnode and established a connection to the network, the new key will be distributed automatically.

tinc-boot was created to solve these tasks.

tinc-boot is a self-contained (except for tinc itself) open-source application that provides: simple creation of a new node, a simplified procedure for exchanging keys with a bootnode, and automatic distribution of keys between all active nodes.

Architecture

(diagram: architecture overview)

The tinc-boot executable contains four components: a bootnode server, a key distribution server, the RPC commands that manage it, and a node generation module.

Module: config generation

The node generation module (tinc-boot gen) creates all the necessary files for tinc to run successfully.

Simplified, its algorithm can be described as follows:

  1. Define the host name, network, IP parameters, port, subnet mask, etc.
  2. Normalize them (tinc imposes limits on some values) and generate any that are missing
  3. Check parameters
  4. If necessary, install tinc-boot into the system (this can be disabled)
  5. Create scripts tinc-up, tinc-down, subnet-up, subnet-down
  6. Create a configuration file tinc.conf
  7. Create the host file in hosts/
  8. Generate a key pair (4096 bits)
  9. Exchange keys with the bootnode
    1. Encrypt and sign its own host file (which contains the public key) together with the host name, using xchacha20poly1305 with a random initialization vector (nonce); the encryption key is the SHA-256 hash of the token (see the sketch after this list)
    2. Send the data to the bootnode over HTTP/HTTPS
    3. Receive the answer and the X-Node header containing the boot node's name, and decrypt them using the original nonce and the same algorithm
    4. If successful, save the received key in hosts/ and add a ConnectTo entry to the configuration file (i.e., a recommendation on where to connect when tinc starts)
    5. Otherwise, take the next bootnode address in the list and repeat from sub-step 2
  10. Print recommendations for starting the service

Conversion via SHA-256 is used only to normalize the key to 32 bytes.
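To make the sealing step concrete, here is a minimal Go sketch built only from the primitives named above (SHA-256 of the token as the key, XChaCha20-Poly1305 with a random nonce); the real tinc-boot wire format may differ:

    package main

    import (
        "crypto/rand"
        "crypto/sha256"
        "encoding/hex"
        "fmt"

        "golang.org/x/crypto/chacha20poly1305"
    )

    // seal encrypts and authenticates a host file with a key derived from the
    // shared token; the returned nonce is required for decryption.
    func seal(token string, hostFile []byte) (nonce, sealed []byte, err error) {
        key := sha256.Sum256([]byte(token)) // normalize the token to 32 bytes
        aead, err := chacha20poly1305.NewX(key[:])
        if err != nil {
            return nil, nil, err
        }
        nonce = make([]byte, chacha20poly1305.NonceSizeX) // random 24-byte nonce
        if _, err = rand.Read(nonce); err != nil {
            return nil, nil, err
        }
        sealed = aead.Seal(nil, nonce, hostFile, nil)
        return nonce, sealed, nil
    }

    func main() {
        nonce, sealed, err := seal("MY TOKEN", []byte("Subnet = 172.173.0.1/32"))
        if err != nil {
            panic(err)
        }
        fmt.Println(hex.EncodeToString(nonce), hex.EncodeToString(sealed))
    }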

For the very first node (that is, when there is nothing to specify as the boot address), step 9 is skipped by passing the --standalone flag.

Example 1 - creating the first public node

Assume that the public address is 1.2.3.4.

sudo tinc-boot gen --standalone -a 1.2.3.4

Example 2 - adding a non-public node to the network

The bootnode from the example above will be used. That host must have tinc-boot bootnode running (described later).

sudo tinc-boot gen --token "<MY TOKEN>" http://1.2.3.4:8655

Module: bootstrap

The boot module (tinc-boot bootnode) runs an HTTP server that serves as an API for key exchange with new clients.

By default it uses port 8655.

Simplified, the algorithm can be described as follows:

  1. Accept a request from a client
  2. Decrypt and verify the request using xchacha20poly1305 with the initialization vector (nonce) passed along with the request; the encryption key is the SHA-256 hash of the token (a decryption sketch follows this list)
  3. Check the node name
  4. Save the file if no file with the same name exists yet
  5. Encrypt and sign its own host file and name using the algorithm described above
  6. Return to step 1
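The decrypt-and-verify step can be sketched the same way as sealing; again, this mirrors only the named primitives, not the exact tinc-boot request format:

    package bootnode

    import (
        "crypto/sha256"

        "golang.org/x/crypto/chacha20poly1305"
    )

    // Open decrypts and verifies a sealed host file using the shared token and
    // the nonce the client sent with the request. It returns an error if the
    // payload was tampered with or the token does not match.
    func Open(token string, nonce, sealed []byte) ([]byte, error) {
        key := sha256.Sum256([]byte(token)) // same key derivation as on the client
        aead, err := chacha20poly1305.NewX(key[:])
        if err != nil {
            return nil, err
        }
        return aead.Open(nil, nonce, sealed, nil)
    }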

Together, the primary key exchange process is as follows:

(sequence diagram: primary key exchange)

Example 1 - start bootnode

It is assumed that the node has already been initialized (tinc-boot gen).

tinc-boot bootnode --token "MY TOKEN"

Example 2 - start bootnode as a system service

tinc-boot bootnode --service --token "MY TOKEN"

Module: key distribution

The key distribution module (tinc-boot monitor) starts an HTTP server with an API for exchanging keys with other nodes inside the VPN. It binds to the address issued inside the network (the default port is 1655; there is no conflict between multiple instances, since each network has (or must have) its own address).
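As an illustration of that binding, here is a minimal Go sketch of an HTTP server attached to the in-VPN address (the address and the handler body are placeholder assumptions; the real API is richer):

    package main

    import (
        "fmt"
        "log"
        "net"
        "net/http"
    )

    func main() {
        vpnAddr := "172.173.0.1" // assumption: the address tinc issued to this node
        mux := http.NewServeMux()
        mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "host file exchange would be handled here")
        })
        // Binding to the in-VPN address (not 0.0.0.0) keeps the default port
        // 1655 free for other networks, each of which has its own address.
        log.Fatal(http.ListenAndServe(net.JoinHostPort(vpnAddr, "1655"), mux))
    }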

There is no need to work with this module manually: it starts automatically when the network comes up (in the tinc-up script) and stops automatically when the network goes down (in the tinc-down script).

Supported operations:

In addition, every minute (by default) and whenever a new configuration file is received, the saved node files are indexed to detect new public nodes (those with a public IP). When nodes with the Address directive are found, an entry is added to the tinc.conf configuration file as a connection recommendation for the next restart.
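A rough Go sketch of such an indexing pass, assuming that “public” simply means the host file contains an Address line (the dnet network name is a placeholder):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // publicNodes returns the names of host files that declare an Address,
    // i.e. nodes that can be used as connection recommendations.
    func publicNodes(hostsDir string) ([]string, error) {
        entries, err := os.ReadDir(hostsDir)
        if err != nil {
            return nil, err
        }
        var nodes []string
        for _, e := range entries {
            if e.IsDir() {
                continue
            }
            data, err := os.ReadFile(filepath.Join(hostsDir, e.Name()))
            if err != nil {
                return nil, err
            }
            for _, line := range strings.Split(string(data), "\n") {
                if strings.HasPrefix(strings.TrimSpace(line), "Address") {
                    nodes = append(nodes, e.Name())
                    break
                }
            }
        }
        return nodes, nil
    }

    func main() {
        nodes, err := publicNodes("/etc/tinc/dnet/hosts")
        if err != nil {
            panic(err)
        }
        for _, name := range nodes {
            fmt.Printf("ConnectTo = %s\n", name) // recommendation for tinc.conf
        }
    }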

Module: key distribution control

The commands for requesting (tinc-boot watch) and canceling a request for (tinc-boot forget) the configuration file from other nodes are executed automatically when a new node is detected (the subnet-up script) and when it goes down (the subnet-down script), respectively.

When the service is stopped, the tinc-down script runs the tinc-boot kill command, which stops the key distribution module.

To summarize

This utility was created under the influence of cognitive dissonance between the genius of the Tinc developers and the linearly growing complexity of setting up new nodes.

The main ideas in the development process were:

During development, I actively tested on real servers and clients (the diagram in the Tinc description above is taken from real life). The system now works flawlessly.

The application is written in Go and is open source under the MPL 2.0 license. The license (loosely summarized) allows commercial use without open-sourcing the resulting product; the only requirement is that changes to the licensed files themselves must remain open.

Pull requests are welcome.

Useful links