Hardware platforms of a Freifunk network

This post is part of the series Building your own Software Defined Network with Linux and Open Source Tools and covers the hardware platforms used within the backbone network infrastructure.

In the early days of the project we didn’t have many funds, but thankfully received quite a few donations in the form of old hardware as well as money. As we were young and didn’t know what we know today, we went down quite a few different roads, gained lots of experience along the way, and eventually reached the setup we have today. This post lists most of the platforms we used within the last years, basically only leaving out early wireless platforms and sponsored server machines.

As most Freifunk communities rely heavily on products from the portfolio of Ubiquiti Networks, quite a few of their devices will be covered. In the following I will just call them Ubnt.

Routers and Switches

Every network needs routers. Within the Freifunk backbone network we need routers for usual IPv4 + IPv6 traffic as well as routers for the B.A.T.M.A.N. advanced mesh network. We basically use(d) two categories of devices for these purposes:

Ubnt EdgeRouters

We started out with some EdgeRouter Lite and EdgeRouter PoE boxes, which at the time (2014/2015) were shiny little boxes with quite a lot of bang for the buck. We even tried (and succeeded) to get B.A.T.M.A.N. adv. to run on one of them, but this turned out to be painful and not update-safe, so we dropped the approach into the bin of history and the boxes went there with it.

When the EdgeRouter X and EdgeRouter X-SFP dropped, they opened up an incredibly cheap option for simple routers and switches. We still use some of those in small setups where we need a small switch with PoE and VLAN capabilities, with or without the need for IP routing / CPE functionality. The fact that they can run OpenVPN with enough throughput to fill some not-so-broadband-ish DSL lines is the cherry on top.

PCengines APU Boards

The main platform we have been using for routers since 2015ish is the APU board by PC Engines. These small-form-factor, fanless and low-power (6-8W) boards come with a regular amd64 CPU, 2GB or 4GB of RAM, two to four 1Gb/s NICs, and can (and should) be fitted with an mSATA SSD.

They run any off-the-shelf Linux – Debian in our case – and therefore can run any daemon, tool, or infrastructure component you can imagine. We use bird for IP routing (with OSPF + BGP), VXLAN overlays to set up PTP L2 bridges, and B.A.T.M.A.N. adv. on top, as described in more detail in a previous post. As of today our network consists of 35 backbone and edge routers based on this platform, with more to come.
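To give an idea of how those pieces fit together, here is a minimal sketch of one such overlay using iproute2 and batctl – the interface name vx_pop1, the VNI 4223 and the underlay addresses 192.0.2.1/192.0.2.2 are made-up placeholders, and the real setup (including the bird configuration) is covered in the previous post of this series.

  # Hypothetical example: one VXLAN point-to-point L2 bridge with
  # B.A.T.M.A.N. adv. on top, riding on the routed (OSPF/BGP) underlay.
  ip link add vx_pop1 type vxlan id 4223 dstport 4789 \
      local 192.0.2.1 remote 192.0.2.2
  ip link set vx_pop1 up

  # Hand the VXLAN interface to batman-adv; bat0 then carries the mesh traffic.
  batctl if add vx_pop1
  ip link set bat0 up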

For 19″ installations we use the 19″ version which includes the board + PSU.

TP-Link TL-SG3126 / TL-SG3424 / TL-SG5412F

Thanks to a gift we got our hands on a bunch of TP-Link switches from the 1Gb/s era, which can be managed, are VLAN and LACP capable, fanless, and have very low power consumption.

We don’t use them anymore, as they have been replaced by PoE-capable devices in access POPs and by devices which can do 10Gb/s in the data centers, but we still keep them around for ad hoc setups in refugee camps or on other occasions. If you can get your hands on them somewhere and have a use case, they still are a good choice.

Netonix WISP Switches

For our access and backbone POPs which are connected via WiFi links we have mostly settled on the Netonix WS-12-250-AC or WS-6-MINI. They support VLANs, LAGs, SNMP, remote logging, etc. and can be managed via a nice WebUI or CLI.

At the time we were looking for a new device for this role, we had been actively using Ubnt AF-5X units, which require 24VH PoE – something only a few devices could provide back then, the Netonix switches among them. They have become incredibly expensive and hard to get in the last years, so we might need to look for an alternative, although we don’t have any other reason to change to a new platform.

Cisco 3560 / 3750 / 4900-M / ME-3600X

Thanks to some other nice gifts we are the proud owners of a bunch of Cisco Catalyst 3560 & 3750 switches (with and without PoE) as well as an ME-3600X metro Ethernet switch. They basically support everything one could hope for and a lot more. We use these as access and distribution switches in larger refugee installations and temporary setups.

Some years ago we got our hands on some Cisco Catalyst 4900-M switches, which we are currently using as data center switches. With the right modules they offer a nice mix of 1GE copper ports and 1Gb/s + 10Gb/s module ports for fiber connections, and their dual PSUs give us power redundancy which our previous DC switches did not have. Given their space and power consumption they will likely be replaced shortly and another platform will be added to this post 🙂

FiberStore S5850-24T16S / 8TF12S

As the Catalyst 4900-M started to act up within our OSPF domain and we were about to set up a new DC POP, we started looking into newer switches which would also consume less space and power.

The port mix we were looking for consisted of:

  • 1G Copper ports for IPMIs and older servers
  • 1G SFP ports for single-mode/multi-mode fiber connections
  • 10G SFP+ ports (if available we’d also take 25G SFP28), for links to servers and between DCs

The switch should also support L3, especially OSPF and BGP, so it could act as the router within the DC and provide access to the management network.

After a very interesting PoC of the Huawei CloudEngine S5732-H, we decided against it, as it drew a lot of power for PoE, which we didn’t need in the DC, and the overall experience wasn’t too great.

The FiberStore S5850-24T16S box with 24x GE and 16x SFP28 ports looked like a great alternative, and we got ourselves one. Things went mostly well: the 10Gb/s connections to the backbone and the servers were fine, however neither the 1G SFP uplink nor the downlink to the roof wanted to cooperate at all. It turns out the SFP28 ports only support 10Gb/s and 25Gb/s, but not 1Gb/s, which rendered the box incompatible with our needs, and we sent it back. There went the plan to replace the Cisco 4900-M in all DCs with the same platform.

For the smaller DC, which we were about to get online, we got a FiberStore 8TF12S device, which now does the job very well after we got familiar with its mostly industry-standard CLI.

Optical equipment

When we got our first metro dark fiber connection to connect two of our data centers, we decided to go for CWDM on top of that link to build basic redundancy against failing transceivers. This way we can survive one transceiver decaying or going on vacation without any immediate impact on our network, and have time to debug the issue and replace the broken transceiver.

As we are running this infrastructure in our spare time, having more headroom before any problem creates user-noticeable impact is an important factor in our design choices.

Wireless backbone

Most of our POPs are connected via wireless links, as they are built in locations where the owners allow us to put stuff on the roof and maybe APs on the walls, but we don’t own any fiber, nor would it be affordable to pull any. Today we only use wireless hardware from Ubnt in our production network, mostly 5GHz based, with 60GHz devices joining the mix more recently (more on that below).

Ubnt AirFiber 5X

In 2015 we heard a lot of good things about the AirFiber 5X and were able to get our hands on a bunch of them. Sadly the experience wasn’t that great, as a lot of links went into radar detection on a regular basis, rendering them unreliable. Also, the physical connectors on the LAN side seemed to be flaky, at least on the old models, so we started to migrate them out of our network rather soon.

Ubnt AirMax devices

Today most of our wireless backbone links are built using Ubnt AirMax devices operating in the 5GHz spectrum. This includes NanoBeams, PowerBeams, as well as LiteBeams. Within the backbone we strictly build PTP connections, as one would also do with wired connections.

For local connections in larger sites we sometimes use LiteBeam APs with a 120° sector antenna to fan out to multiple buildings.

Ubnt GigaBeam devices

In 2023 we started deploying GigaBeam 60GHz devices for new links and also as replacements for links which broke in severe weather. The 60GHz links in general are great when there’s no heavy rain; however, the 5GHz backup link is pretty much useless when it is actually needed.

Compared to the AirMax devices we used before, the bandwidth available on that backup link is much worse and mostly barely usable (10-30Mb/s). The screws shipped with the GBE-LR devices seem to be of lowish quality; some had to be removed with an angle grinder, as the nut didn’t want to move in either direction and we couldn’t fasten the device to its pole.

In Day 2 Operations the devices don’t make us too happy either, as they sometimes fall back to 5GHz for no known reason, and a reboot usually makes them happy again, which makes us unhappy.

Ubnt AirFiber 60GHz devices

In 2024 we started buying AirFiber 60GHz devices, as we had some links spanning more than 2km, which the GigaBeams couldn’t handle. The hardware and especially the mounting material look much nicer and more solid; it even comes with a nice way to fine-tune the alignment.

I’ll update this post when we have Day 2 Operations experience.

Wireless access (points)

For client connectivity we mainly use Ubnt access points. For indoor installations these are usually AC Lite or AC Pro models; for outdoor installations, Mesh and Mesh Pro units are used in a lot of places. Given the current lead times (of up to 10 months lately) for some of these devices, we will be looking into alternatives, for example TP-Link Omada.

For a large installation in a refugee camp, a set of Ruckus AC APs, including controllers, is used to provide WiFi access in a dense environment.

Servers

Obviously we also need some compute and storage to run basic infrastructure services. Most of these run on SuperMicro 1RU boxes sitting in some data centers. We also received gifts of Dell, HP and Isilon hardware, which we use for different purposes. Apart from the latter, these are all off-the-shelf servers, all of them running plain Debian Linux.
