Hacker News
Kobol Team Is Pulling the Plug from “Helios64 Open Source NAS” (kobol.io)
118 points by nixcraft on Aug 25, 2021 | 40 comments


As sad as this is, I find a transparent and honest update like this very refreshing. I've seen so many initially promising hardware projects just fizzle out in uncertainty and radio silence.

Especially that they are open about the reasons why, about what they feel they could have done better, and are releasing the blueprints so that others can pick up where they left off.

No doubt 2020/2021 must have been the most challenging years in decades for an indie team to launch a new hardware project like this.

I was looking quite seriously at the Helios64 last year; the only thing that eventually stopped me from preordering was realizing the RAM would not be sufficient for my purposes (ZFS-backed GlusterFS nodes).


I got a Helios64 with the purpose of testing how well it would work as a storage node in my self-hosted setup.

It worked okay for simple storage tasks, and it's definitely great as an RPi alternative for simple setups, but not for any kind of serious cluster.

I still hope that there'll one day be a RISC-V or ARM board with a 10GbE port, 4-8 SATA ports, and 32-64GB of RAM.

Seems like the only option for redundancy at the hardware level for storage is refurbished server hardware or spending a lot of time on a DIY setup


I'm quite happy with ASRock Rack's X570 boards. AMD-based, 10GbE, up to 8 SATA ports straight from the board, up to 128GB RAM, and they come in both mITX and mATX variants. And they have IPMI! And an integrated, super basic GPU, so you can run the non-APU Ryzens without a dGPU.

If you don't enjoy DIYing too much I'd recommend the mATX ones - the form factor and cooling alternatives for the mITX are... unorthodox.

But yeah, comparable non-x86 accessible for prosumers will be years away.


This is exactly what I've done, by the way. Same mITX motherboard you cite below with a Ryzen without integrated graphics (since the board itself supports VGA). 10 GbE and a whole bunch of SATA lanes. I didn't bother with trying to get it into a real chassis. I just grabbed a desktop case from Fractal Design that can hold 10 disks and it fits fine in my wall-mounted entertainment center cabinet below my television. It's in there with my router, a switch, and a few Minisforum small form factor Ryzen PCs. The only "gotcha" with cooling is you need to use a CPU cooler meant for Intel even though the chip is Ryzen, but the board documentation clearly tells you this. I'm just using simple Noctua low-profile fans. I also cut some holes in the cabinet and put in USB-powered cabinet fans, so I can keep the doors closed.

So sure, it's DIY and some work, but putting together the NAS itself and putting together the cabinet, including the cooling and wiring, only took one weekend.

It consumes more power compared to ARM, but I'm cooling a 4-story house with 18 foot ceilings in Texas anyway, so it's not a noticeable difference.


Besides some of the ARM options like the Honeycomb that have been mentioned, there are also the POWER9 systems from Raptor Computing Systems. Much more expensive than the x86 systems, but I feel compelled to mention them, since they have mATX boards with most of those yummy features (fewer SATA ports but higher RAM capacity, for instance).


I have looked into exactly those boards, and they seem by far like the best available solution. I'd be really interested in hearing about your experience with them.

Which type of case/chassis are you using for your set-up and how many nodes are you running?


I have 3x X570D4I-2T with Ryzen 3600. It's my understanding the new Zen3 should work just fine as well.

* I gave up on finding a chassis that wasn't either gigantic, too tight, or crazy expensive. They run naked resting on cork-boards on a very cheap DIY shelf that I've hung some 140mm fans on. Looks a bit cyberpunk and swapping drives is super easy. You may have better luck with chassis if you're in the US.

* ECC RAM is supported, which is nice. I've understood that you can't monitor ECC error reporting, so I've seen it described as "semi-ECC". This is a Ryzen thing, not an ASRock Rack thing.

* ECC DDR4 SODIMMs are expensive and rare. If you need ECC and lots of RAM but don't care about the small form factor, a bigger board will be a lot more cost-efficient. I got my sticks from Nemix and Samsung.

* Initially I had stability issues and errors in memtest. They completely disappeared after putting the machines behind a UPS; it turned out the culprit was power fluctuations in my previous house.

* They need a PSU with an EPS connector (my old leftover 15-year-old PSU from the closet only booted the IPMI, not the board itself; the Silverstone SFX 450W is great and cheap).

* (Specific to mITX) The CPU socket has bastard Intel fittings. I got a Scythe Shoten and a Noctua on there fine in the end, but it took some anxiety-inducing creativity with M3 screws/nuts and DIY spacers.

* IPMI and monitoring are great for initial setup and in case something goes wrong. Flashing the UEFI/BIOS is a breeze.

* Apart from the on-board SATA, I plopped in 4x M.2 PCIe bifurcation cards from 10Gtek - so besides the boot drive I can have 4x M.2 NVMe drives on each board. Works great.

* I'd really, really like to figure out something more compact PSU-wise - either Meanwell or Pico-style - but electricity is not my strong suit, and despite researching quite a bit I still haven't figured out exactly what I should buy and how I would hook it up.

Overall I'm happy. It's quite impressive how much IO you can get out of such a small board. But since I moved to a bigger house and the mATX model came out, I'd have bought that instead if I were doing it today.

In case you haven't checked out the ServeTheHome forums already, there are some people writing about their boards there as well.

Oh, and since you mentioned redundant storage: GlusterFS is great, except when it isn't. I have issues with it every now and then (files stuck in heal without going into split-brain, requiring low-level manual intervention, and FUSE mounts on other nodes randomly disconnecting with "transport error" until I manually remount, despite always having at least 2/3 nodes continuously online). I kind of wish I'd gone with Ceph or LizardFS or something instead, but I've sunk so much time into Gluster already and have better things to do than start over with something new - and who knows if the grass would actually be greener.


That's a fantastic rundown - thank you so much for taking the time to write such a comprehensive explanation!

I live in a rather small apartment (56 square metres), so I'm trying to find an acceptable trade-off between noise and space. I'm in Europe, where sourcing parts can unfortunately be a bit of a challenge, so I'll probably have to get creative to solve some issues (which honestly is also fun)

I had found a couple of 2U racks that could hold 2 mITX boards, but the lack of cooling options made me nervous that it would get too hot

Now that you write that you would have gone with the mATX board if you were doing the build today, I think I'll probably go with that instead - especially since I'd like to use ECC RAM, and you mention that ECC SODIMMs are expensive and difficult to find

I've been reading a lot of reviews on servethehome, but haven't gone through their forums yet. I'll make sure to check it out - thanks for the tip!

I haven't yet decided whether I'll go with Ceph, Gluster, or something else entirely. I'm leaning towards Ceph, but I've read several warnings about running only 3 nodes in a Ceph cluster, so it's (once again) a matter of figuring out a good trade-off
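
For context, part of the standard 3-node warning is simple replication arithmetic: with Ceph's default 3x replication, raw capacity divides by three, and with exactly three hosts a single node failure leaves no spare host to re-replicate onto, so the cluster stays degraded until that node returns. A rough sketch of that math (illustrative numbers only, not a Ceph API):

```python
# Back-of-envelope math for a small replicated storage cluster.
# Node sizes are illustrative assumptions; replicas=3 matches Ceph's default.

def usable_tb(node_tb: float, nodes: int, replicas: int = 3) -> float:
    """Usable capacity when every object is stored `replicas` times."""
    return node_tb * nodes / replicas

def can_self_heal(nodes: int, replicas: int = 3, failed: int = 1) -> bool:
    """Re-replication needs at least `replicas` surviving hosts."""
    return nodes - failed >= replicas

print(usable_tb(10, 3))   # 3 nodes x 10 TB at 3x replication -> 10.0
print(can_self_heal(3))   # False: 2 hosts left, but 3 copies needed
print(can_self_heal(4))   # True: a 4th node gives healing headroom
```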

In any case - you've definitely helped me out here and I really appreciate that you took the time to write such a comprehensive explanation


The power connectors on that board are strange - it's almost ATX12VO but not quite. Does the motherboard come with the 24-to-4-pin adapter?

Assuming you need AC->DC, an SFX power supply is probably your best option. It's hard to find generic power supplies that provide power-good, 5VSB, and remote PS_ON. The reasonably priced generic supplies are almost certainly less efficient as well.


Wow, that actually sounds awesome and something that could finally enable me to retire my shitty QNAP NAS and my Proxmox "Server" running on an old i7 2nd gen desktop...

If you don't mind, how much power (in watts) does one of these draw continuously?


Isn't the reason why people are interested in projects like Gnubee or Helios that they want to get away from closed source hardware with rootkits like the Intel ME?

I thought this was the main reason such RISC-V-based projects even exist.


Does the SolidRun Honeycomb LX2 board not do what you need? It is certainly pricy ($750 without RAM, optics, a case, or drives), but powerful.

"based on NXP's outstanding 16 core LX2160A Arm Cortex A72 (2GHz) offering up to 64GB DDR4 (dual channel) and up to 40GbE."

https://www.solid-run.com/arm-servers-networking-platforms/h...


Those boards look amazing!

Any complaints I could have would honestly be nitpicking - it might be a bit overkill for my use case.

The price isn't too bad considering what an x86 alternative would cost (especially factoring in the electricity bill)
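
A rough sense of that electricity difference (all figures assumed for illustration: ~15 W average draw for an ARM board, ~50 W for a comparable x86 build, 0.30/kWh):

```python
# Back-of-envelope yearly electricity cost for a constant load.
# All wattages and the price per kWh are assumptions for illustration.

HOURS_PER_YEAR = 24 * 365

def yearly_cost(watts: float, price_per_kwh: float = 0.30) -> float:
    """Yearly cost of running a constant load of `watts` watts."""
    return watts * HOURS_PER_YEAR / 1000 * price_per_kwh

arm = yearly_cost(15)   # assumed ARM board average draw
x86 = yearly_cost(50)   # assumed x86 build average draw
print(f"ARM: {arm:.0f}/yr  x86: {x86:.0f}/yr  delta: {x86 - arm:.0f}/yr")
```

With these assumed numbers the gap works out to roughly 90 per year, which is why the higher sticker price can still come out ahead.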

Thank you for bringing them to my attention!

Now the challenge will be whether I can find a European seller


SolidRun themselves ship worldwide by EMS



Sad to see it end, but I appreciate the honesty.

One of the huge challenges with building niche, enthusiast hardware is that you have fewer resources than a bigger company, yet enthusiast customers tend to be far more demanding than normal. The Helios products look fantastic, but even in this HN thread you can see potential customers explaining why they wanted more out of the product (mostly RAM). I have a feeling their fate would have been a lot of people agreeing it was a cool product but then going off and buying something else after doing some research.

ARM boards are hard right now because we’re not quite to the point of powerful, high-RAM systems being affordable, so most things are a compromise. In retrospect, I wish they would have applied their obviously fantastic product engineering skills to building an awesome NAS case first that accepted a standard Mini-ITX board. From there, they could have partnered with a Nano-ITX board vendor to offer a smaller V2, then as they picked up momentum they could branch into the custom all-in-one solution. Trying to jump straight to a full custom product from top to bottom is extremely hard, to say the least.


> ARM boards are hard right now because we’re not quite to the point of powerful, high-RAM systems being affordable

I get the feeling that most ARM SoC vendors made their first 64-bit products by just pasting a 64-bit CPU into their previous 32-bit SoC design. It is only recently that reasonably cheap 8GB boards have become available.


This reminds me of an exchange I read between a Rockchip dev and a kernel maintainer. The Rockchip dev wrote the following as justification for special handling of their board:

There are many legacy IPs which only support 32bit bus, we have to use them as is in the new 64bit SoCs, I think the 32bit GIC can be considered the same as those case, can we add CONFIG_ZONE_DMA32 support in GIC driver?

https://lore.kernel.org/linux-rockchip/874kg0q6lc.wl-maz@ker...

Note that this is in regards to their new rk356x platform, which supports up to 8 GB of RAM.
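
The constraint being worked around here is plain addressing arithmetic: a peripheral with a 32-bit bus can only reach the first 2^32 bytes (4 GiB) of the address space, so on an 8 GB board DMA buffers for those legacy IPs must be allocated in the low 4 GiB, which is what ZONE_DMA32 carves out. A quick illustration:

```python
# Why a 32-bit-only bus needs ZONE_DMA32 on a board with more than 4 GB:
# 32-bit addresses can only reach the first 2**32 bytes of memory.

GIB = 1024 ** 3
reachable = 2 ** 32          # bytes addressable with 32-bit addresses
total_ram = 8 * GIB          # e.g. an 8 GB rk356x board
unreachable = total_ram - reachable

print(reachable // GIB, "GiB reachable by 32-bit DMA")
print(unreachable // GIB, "GiB out of reach without bounce buffers")
```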


The business model of MediaTek, Rockchip, etc. is not to sell chips on a traditional merchant basis. They don't even produce data sheets. So there isn't much value for them in working on reference boards.

Their model is to qualify a customer by volume and then provide a "solution": IC with a custom linux or android on it, including undocumented drivers etc.


The Kobol team did a much better job than any of the other ARM vendors with their devices. The wiki always detailed EVERY PART of the hardware, and their devices ran a mainline kernel the day I bought them. I have come to hate almost every other ARM device because, at the end of the day, the software support was only good enough once the hardware was already dying of old age. It's a little easier with no GPU to support, though.

That being said, Kobol never delivered quality hardware. My Helios4 died, likely from the PSU; early Helios64 units needed a botch job for their LAN ports to work correctly, and they never delivered the ECC RAM needed for a NAS. Performance on the Helios4 was also absolutely crappy, never able to deliver gigabit speeds over the LAN port. I wish them all the best though; my personal hardware is going to be x86 for the foreseeable future unless ARM makes MAJOR strides in terms of Linux compatibility


Sad news indeed. The Helios64 was promising, and although some users reported instability issues, with some more work it could have become a killer product. It's all open source though, so hopefully other people can join in and take over development/production, showing in the process one more example (do we still need them?) of why open source is the better choice.


Nooo. I really wanted a Helios64. Good luck to the team, and hopefully someone can create a similar product.


Same for me. I planned to replace my old NAS with a Helios64 :-(


That is really sad. The products looked really good and I had the feeling they were targeted at a very interesting niche that definitely needs to be covered.


Does anyone know of a similar product? I'm particularly impressed by the built-in UPS and five-bay storage.


I've built a couple of NAS machines off Atom Mini-ITX boards, and in one case I supplied the mainboard and the two disks using a PicoPSU 12V adapter. Once the system accepts 12V, the UPS can be a car/motorcycle battery left inline as a buffer, so that in case of a blackout there's no switching involved. Not as compact as the Helios64 UPS, but it works and is less risky than having two lithium cells near the drives. I didn't try this configuration myself, as my PicoPSU needed quite accurate 12V input, which car batteries would exceed when fully charged; but their M3-ATX Automotive model accepts from 6 to 24 volts, making it ideal, as the name implies, for being supplied by a car battery.
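
To size such a battery buffer, a rough runtime estimate helps. A sketch with assumed figures (a 40 Ah lead-acid battery, ~50% usable depth of discharge, ~90% DC-DC efficiency, a ~25 W NAS; check your battery's datasheet for real values):

```python
# Rough runtime estimate for a 12 V battery buffering a DC-DC supply.
# All parameters are assumptions for illustration; real batteries vary.

def runtime_hours(capacity_ah: float, load_w: float, voltage: float = 12.0,
                  usable_fraction: float = 0.5,
                  efficiency: float = 0.9) -> float:
    """Hours of runtime at a constant load from a partially usable battery."""
    usable_wh = capacity_ah * voltage * usable_fraction
    return usable_wh * efficiency / load_w

print(round(runtime_hours(40, 25), 1))  # 40 Ah battery, 25 W NAS -> 8.6 h
```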

Regarding the 5-bay storage, you can buy backplanes with pull-out bays that fit 5x 3.5" disks into 3x 5.25" bays. A search for "3 to 5 sata backplane" returns some products.


Doesn't exist AFAIK, unless something new was announced in the last couple of months that I missed - I researched quite a lot a little while back.

Somewhat similar scopes but without both of those:

https://wiretrustee.com/raspberrypi-cm4-sata-board/ (not launched yet)

https://shop.allnetchina.cn/collections/sata-hat/products/pe... (they also have an enclosure kit)

https://www.hardkernel.com/shop/odroid-hc4/ (just 2 drives; I have one as a sink for automated ZFS snapshots using zrepl; a friend is using one for syncthing+apple time machine sink)

If you want something within the coming ~2 years and none of the above fits the bill, I think you'll have to resort to x86 and an external UPS. I'd love to be proven wrong, though.


I have a somewhat similar project based around the Raspberry Pi. It's rather DIY compared to the Helios64 - 3D-printed case, hand-soldered electronics, Amazon sourcing - though the goals are similar. Specifically, my goals were to host 2x 3.5" drives and to be printable on a standard 200x200 mm 3D printer.

https://old.reddit.com/r/DataHoarder/comments/n277ip/raspber...


Yeah, that's a great idea for saving space, and it would be perfect if it automatically shut down when the battery gets low


This feels like the 80/20 rule in action. These boards mostly work but had (in my experience) non-trivial reliability and quality issues and they would have been expensive and tricky to get fully right.

The cynic in me says another company will probably pop up shortly with a new Icarus128 NAS board.


It seems like hardware could benefit a lot from more standardization. One of the big problems with a hardware project like this is that parts aren't fully interchangeable and therefore redesigns are necessary for minor part changes. If hardware parts were commodities instead of unique products with unique requirements, it would be a lot easier to handle supply chain disruptions.

And since supply chain disruptions have been so extremely painful in the past year, why doesn't the industry do this? It seems like if you could provide guarantees about interchangeability that translate into guarantees about availability, people would be willing to pay a premium.


Can't blame them. There's no end in sight to the component shortage. Within the NAS market, even Synology and QNAP are having trouble with supply; it is quite literally impossible for a team making such small volumes to get anything done during these difficult times.

Rather than hoping or praying things will change (they won't), I do appreciate their honest take on the issue.


Very sad news, I was looking forward to the ECC variant.

What's the smallest ECC NAS board on the market right now?


Thank you Kobol team - sad to see it end, but love the candour and wish you all the best.

I ended up getting the Helios64 as it fit my needs pretty well, and I got to support an amazing project - hopefully the community can live on and help each other out!


That's a real shame. I have a Helios64 that has been rock solid since launch and I was thinking of getting another one if they ever became available again.

Thank you Kobol team and thank you for the openness and transparency the whole time.


Still running my original Helios4. Hasn't missed a beat for 4+ years.


I STRONGLY recommend getting a new PSU ASAP. The PSUs they shipped are dying one after another at the moment, and mine likely took the board with it.


As another currently-happy Helios4 owner, any recommendations?



Wish the team good luck in their new endeavors. You definitely had me impressed.



