▲A low power 1U Raspberry Pi cluster server for inexpensive colocation (2021)github.com
99 points by LorenDB 4 days ago | 39 comments
rbanffy 3 minutes ago [-]
There are many cluster boards with an onboard switch that let you plug in compute module boards. Such an arrangement would provide a much denser system. Making a new one, however, requires a lot of work. I'm not even sure how you do Ethernet over PCB traces.

One project I keep telling myself I'll eventually do is to make a cluster board with 32 Octavo SoMs (each with 2 ethernets, CPU, GPU, RAM, and some flash), and a network switch (or two). And 32 activity LEDs on the side so a set of 16 boards will look like a Connection Machine module.

postpawl 10 hours ago [-]
Project author here. This project is 4 years old at this point, and now it probably makes more sense to use Mac minis or mini PCs. I also wouldn't rely on cheap colocation for anything security-sensitive or critical. They gave my block of IPs to another customer at some point and there were issues with IP conflicts (eventually resolved).

It lasted about 3 years before the colocation company went bankrupt and got bought by another company, which returned the hardware. I'm surprised a technical failure didn't kill it.

wkat4242 5 hours ago [-]
Another issue I had with cheap small-time colocation was that people were using it for spamming/phishing and got the whole ISP's IP range blocked by Spamhaus. I was running a legit mailserver, so it was really annoying.

I rooted around on the block for a bit and found several phishing sites; it was a mess.

The problem is that the more serious colocation providers don't really want you if you just want 1U. And if they allow it, it's definitely not at a good price.

vidarh 3 hours ago [-]
For 1U I'd generally just opt for a rented managed server unless I had a really compelling reason for wanting to use specifically my own hardware (e.g. I needed something very esoteric).
jasonjayr 4 hours ago [-]
From their perspective, the users seeking the cheapest price will probably be the most trouble!
wkat4242 4 hours ago [-]
Probably yes, but it's also that setting up all the physical access administration etc. might not be worth it for them for a customer that pays $15 a month, which is what I used to pay for my 1U (including power, but it was a super-low-energy server).
JdeBP 1 hour ago [-]
So the people in the prior Hacker News discussion back in 2021 who said they were worried about component failures … turned out to be worrying about completely the wrong thing. (-:
p0w3n3d 8 hours ago [-]
Mac minis are quite expensive. I'm trying to rebuild my home server and it will probably be a Chinese mini PC with a Ryzen 5700U, still 25 W TDP IIRC
beng-nl 5 hours ago [-]
Depends on the Mac mini model I think; used M4 Mac minis (16GB/256GB) are priced quite low compared to the performance - I believe close to the best bang for the buck - and I believe near the best energy efficiency. I love them for heavy CPU tasks without loud fans and heat in my office (the PCs in my previous homelab setup produced both, for a bit less performance).

The higher-specced Mac minis (more memory, or the Pro) are worse bang for the buck.

p0w3n3d 4 hours ago [-]
I'd love to have the possibility to compare the M4 with the Ryzen 5700U as a server. I've read a comparison of the M1 against the 5700U and it was quite good value for the price tbh.

EDIT: According to some site, the M4 is 2 times faster and 3 times more expensive, while you can later add memory to the Ryzen but not to the M4.

youngtaff 6 hours ago [-]
They’re not too badly priced if you buy them used… nowhere near as cheap as one of the small Lenovos, Dells or HPs though, or as easy to upgrade.

What I really want is an IP KVM that connects to a Mac mini using a single Thunderbolt port for everything - power, video, keyboard and mouse.

sneak 7 hours ago [-]
They are indeed expensive, but wow are they fast, and they have 10GE options.
noosphr 8 hours ago [-]
Do you know what the most cost-effective hardware for general internet stuff is currently? I do ML and have to deal with removing tens of kilowatts from a small closet when it comes to on-prem stuff - mainly for compliance reasons - and I have no idea what good, cheap, low-power hardware for web, SQL and similar servers is.
reactordev 2 hours ago [-]
For general internet stuff, a Mac mini or any SFF PC will be just fine. For ML, you’ll need at least a dozen thousand dollars or more for GPUs if you run your own inference. If you use a third party, like OpenAI, it’s just an API call, and you can do that on your SFF mini or Pi.

Web hosts can start at $10 (or free + internet) and GPU hosts can start at $4,000 USD.

At peak, a “cluster node” could be $10,000 and a GPU node could be $80,000.

The question you have to ask yourself is: what are your requirements?

tonyhart7 9 hours ago [-]
Yeah, at 30 bucks a month for colo, I can't expect them to run for years.

Even if they can sustain that, how good can the heat and power situation be in a building that cheap?

louwrentius 7 hours ago [-]
Thanks for sharing.

I came to a similar conclusion: TinyMiniMicro 1L PCs are in many ways a better option than Raspberry Pis. Or any mini PC with an Intel N-series CPU.

mvip 9 hours ago [-]
If you like this stuff, you should check out what the guys at Mythic Beasts are doing. They’re squeezing a ton of Raspberry Pis into racks. They also host the Raspberry Pi website on Pis.

We used them for a while and there are some photos here https://www.screenly.io/blog/2023/05/25/updated-qc-rig/

rahimnathwani 4 hours ago [-]
The first hosted server I ever rented (~20 years ago) was as the first customer of Black Cat Networks (which was later acquired by Mythic Beasts).

It was a small Apple machine running Debian. ISTR it was an Apple TV (1st gen), but it might have been a Mac Mini.

Catbert59 9 hours ago [-]
In 2025 I'd go for an Intel N100/N150/N305 with 32GB RAM (not officially supported). If nothing rotates, nothing can break.

At work we have ~10 of these passively cooled TopTon N100 boxes with their 5x Intel i226-V 2.5GbE interfaces lying around for emergency router setups. They are great for a lot of things.

But be careful: starting with the N150 you will need active cooling.

rented_mule 8 hours ago [-]
100%!!!

I have an n305 with the CPU thermally bonded to its small aluminum case with a quiet 80 mm Noctua fan screwed into the fins of the case. The manufacturer on Ali Express said the fan is optional, but it can get to ~85°F in the room where the computer is, so I want to be careful. At idle, the CPU reports 5-10°F above the room's temperature.

It has 10 TB spread across 3 SSDs and 2 x 10 TB spinning drives attached. It's a Time Machine target for a handful of Macs and a Borg Backup target for several machines, including some across the internet using Tailscale. It's also running Home Assistant with AppDaemon (with dozens of little apps), Frigate (object detection for 3 Ethernet-connected cameras using a Google Coral TPU over USB), Paperless-ngx (15 GB of PDFs), LibrePhotos (1.2 TB of photos), Syncthing, Tiny Tiny RSS, a UniFi Controller, distinct PostgreSQL instances for various of those, and more. I count 21 Docker containers running right now, and not everything is containerized.

The spinning drives are powered down with a smart plug for all but 1-2 hours at night for backups. With those off, the thing sips power... 10-15 W most of the time with occasional spikes up to ~30 W when LibrePhotos is doing facial recognition or Paperless-ngx is updating its ML models. It never feels slow. I've been running one or more servers at home for 30+ years, and this single machine handles everything so much better than any combination of machines I've had.
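
For anyone curious what that scheduled drive power-down might look like: a minimal sketch, assuming a TP-Link Kasa smart plug and the python-kasa library (the comment doesn't say which plug is actually used; the IP address and schedule are placeholders), driven from cron around the backup window.

    # Minimal sketch: toggle a smart plug around the nightly backup window.
    # Assumes a TP-Link Kasa plug and the python-kasa library; the plug's
    # IP address below is a placeholder, not from the original comment.
    import asyncio
    import sys

    from kasa import SmartPlug

    PLUG_IP = "192.168.1.50"  # hypothetical address of the plug feeding the drives

    async def set_plug(on: bool) -> None:
        plug = SmartPlug(PLUG_IP)
        await plug.update()               # fetch current state before switching
        await (plug.turn_on() if on else plug.turn_off())

    if __name__ == "__main__":
        # e.g. cron: "0 2 * * * drives.py on" and "0 4 * * * drives.py off"
        asyncio.run(set_plug(sys.argv[1] == "on"))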

dwood_dev 4 hours ago [-]
32GB is official, but 48GB works just fine.

64GB SODIMMs are now available and there are multiple reports of them working fine with the N305[0]. It is highly likely that it will work fine with the N100 as well.

0: https://www.reddit.com/r/homelab/comments/1m8fgec/intel_n305...

omarqureshi 1 hour ago [-]
Similar setup (not in a 1U). I have 3 Pi 5s in my AV rack with M.2 and PoE HATs - they actually work fairly well, and as a bonus, I don't need to run separate power to them. I could get a 1U enclosure, but it's fine.

The only issue is that one of the PoE HATs' fans is catching on something (though nothing I can see), so on occasion it needs persuasion to be quiet.

bullen 3 hours ago [-]
I built the cluster thick instead: https://github.com/tinspin/rupy

And selfhost on home fiber.

Saves space and cools silently.

Mixing old Pi 2s and 4s for different use cases.

The Raspberry Pi 5 and RK3588 are too hot.

Not in the picture: a Mean Well 50W passive PSU.

procaryote 9 hours ago [-]
Pretty fun!

Pis can be powered from the header pins, so you could skip the USB adapters and route power directly from the relays to the power pins.

I'd also be tempted to add a way to access the serial consoles and the power-button-equivalent pins of the Pis for a LOM equivalent. It might be doable with a Pico or two (rough sketch below).

Nowadays with Pi 5s you could also of course hook an M.2 board up to the PCIe lane and skip the M.2 enclosures.

I'd personally prefer a recessed push-button power switch too; with the switch you use, I'd be nervous that something would drop on it and turn the system off.
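
A rough sketch of the Pico idea above, assuming MicroPython on an RP2040 Pico: UART0 wired to one Pi's console header, and a spare GPIO driving the Pi's power/run line through a transistor or relay. The pin choices are illustrative, not a tested design.

    # MicroPython sketch for a Pico acting as a tiny LOM for one Pi.
    # Assumes UART0 is wired to the Pi's console pins and GP2 drives the
    # power/run line through a transistor or relay; pins are placeholders.
    from machine import UART, Pin
    import time

    console = UART(0, baudrate=115200, tx=Pin(0), rx=Pin(1))
    power = Pin(2, Pin.OUT, value=0)   # 0 = leave the Pi's power path alone

    def power_cycle(hold_s: float = 0.5) -> None:
        # Pulse the power/run line to hard-reset the attached Pi.
        power.value(1)
        time.sleep(hold_s)
        power.value(0)

    while True:
        data = console.read()          # None when nothing is buffered
        if data:
            # Relay console output over the Pico's own USB serial port.
            print(data.decode("utf-8", "ignore"), end="")
        time.sleep(0.05)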

mmastrac 6 minutes ago [-]
I've done this in the past and it works quite well. I recommend that you buy a 5V, 20A power supply, crank it up to 5.3V-ish, and feed power directly to each Pi's header through the relay. Put a big cap across the Pi's power pins to handle transient power blips, which were the most likely cause of most of my Pi lockups over time.

The nice thing about relay power is that you don't need a power button. In my case I actually had a little Arduino running a USB stack that could toggle GPIOs to power cycle the Pis if they wedged: https://github.com/mmastrac/pi-power-vusb (I forgot that I even added default power states and power-on sequencing there)
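
As a rough illustration of the relay idea (not the linked pi-power-vusb code): a sketch assuming the relays are instead driven from a controller Pi with gpiozero, an active-low relay board, and normally-closed contacts carrying each node's 5V feed. Pin numbers are placeholders.

    # Sketch: power-cycle cluster nodes via relay channels from a controller Pi.
    # Assumes an active-low relay board whose normally-closed contacts carry the
    # 5V feed to each node; adjust active_high/wiring to match the real board.
    from time import sleep
    from gpiozero import OutputDevice

    RELAYS = {
        "node1": OutputDevice(17, active_high=False, initial_value=False),
        "node2": OutputDevice(27, active_high=False, initial_value=False),
    }

    def power_cycle(name: str, off_time: float = 5.0) -> None:
        # Energize the relay long enough for the node's caps to drain, then release.
        relay = RELAYS[name]
        relay.on()     # breaks the NC contact -> node loses power
        sleep(off_time)
        relay.off()    # NC contact closes again -> node boots

    if __name__ == "__main__":
        power_cycle("node1")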

Serial is definitely a nice-to-have: one USB-serial adapter per Pi, with one overall controller Pi that can aggregate it all (sketch below).
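
A minimal sketch of that aggregation, assuming each adapter shows up as /dev/ttyUSB* on the controller and the pyserial package is installed; device paths and log names are placeholders.

    # Sketch: log each node's serial console to its own file on the controller Pi.
    # Device paths and log locations are placeholders.
    import threading
    import serial  # pyserial

    NODES = {
        "node1": "/dev/ttyUSB0",
        "node2": "/dev/ttyUSB1",
    }

    def log_console(name: str, device: str) -> None:
        with serial.Serial(device, 115200, timeout=1) as port, \
                open(f"console-{name}.log", "ab") as log:
            while True:
                chunk = port.readline()   # b"" when the read times out
                if chunk:
                    log.write(chunk)
                    log.flush()

    for name, device in NODES.items():
        threading.Thread(target=log_console, args=(name, device), daemon=True).start()

    threading.Event().wait()  # keep the main thread (and daemon threads) alive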

I would add that TFTP boot for each Pi is also really convenient. This is pretty easy to set up. Dedicate one Pi to manage the cluster power, serial and TFTP and you have a pretty robust setup.

hawk_ 9 hours ago [-]
What is the use of colo with an arbitrary provider? (Asking as I have only heard of colo in the context of say an exchange or something else specific)
msh 9 hours ago [-]
You get your server hosted in a real data center.

It's quite common - a stepping stone between using rented hardware and having your own data center.

hawk_ 9 hours ago [-]
Oh ok so colo here means bring your own hardware.
dnemmers 4 days ago [-]
Maybe it’s just me, but I can’t imagine this thing staying together in one piece after being shipped. Too much double-sided tape and kludging.

Could work well at home, however.

xvfLJfx9 6 hours ago [-]
I'd recommend just getting a piblade and mounting them...
arnon 10 hours ago [-]
I like the idea but this is not going to last long

Good start though!

crinkly 10 hours ago [-]
I’ve found colo companies to be pretty fussy about what hardware they will accept. Ours would reject this.

Edit: and the problem with micro clusters like this is always the IPv4 costs.

petesergeant 9 hours ago [-]
I was going to ask: if colo providers will allow a hacked-together chassis like this, what’s to stop someone from sending them an intentionally or unintentionally destructive device?
15155 7 hours ago [-]
The same thing that stops most bad behavior: the threat of legal action afterwards.
sneak 7 hours ago [-]
You can do that with the mail now. It’s not an additional threat.
pluto_modadic 8 hours ago [-]
Could be fun for an IPv6 cluster.
asteroidburger 10 hours ago [-]
Only 16GB of DDR4 and 1.2TB of storage is not exactly a lot, especially when it’s spread across all of those nodes.
tgv 9 hours ago [-]
What do you expect to run on them, then? I run 3 user-facing services plus their test environments plus the database on the same 8GB server, and half of the memory is free. The database takes about 10GB (growing slowly). There's also a 10GB media directory that grows slowly. Most of it is images, I think, and videos are short. But it'll be quite a while before reaching 1TB.
dnemmers 4 days ago [-]
Previous comments:

https://news.ycombinator.com/item?id=27862967
