Hacker News

They mention GCS FUSE. We've had nothing but performance and stability problems with it.

We treat it as a best-effort alternative when native GCS access isn't possible.



FUSE-based filesystems in general shouldn't be treated as production ready, in my experience.

They're wonderful for low-volume, low-performance, low-reliability operations (browsing, copying, integrating with legacy systems that don't permit native access), but beyond that they consume huge resources and do odd things when the backend isn't in its most ideal state.


I started rewriting gcsfuse using https://github.com/hanwen/go-fuse instead of https://github.com/jacobsa/fuse and found it rock-solid. FUSE has come a long way in the last few years, including things like passthrough.
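For anyone curious what the go-fuse side of that looks like, here's roughly the hello-world from the go-fuse/v2 `fs` package docs, trimmed down. This is just a sketch of the API shape, not gcsfuse itself; the mount dir, file name, and contents are made up for illustration.

```go
// Minimal read-only go-fuse filesystem, adapted from the hello-world
// example in github.com/hanwen/go-fuse/v2's fs package documentation.
package main

import (
	"context"
	"log"
	"os"

	"github.com/hanwen/go-fuse/v2/fs"
	"github.com/hanwen/go-fuse/v2/fuse"
)

type helloRoot struct {
	fs.Inode
}

// OnAdd runs when the root inode is mounted; attach one in-memory file.
func (r *helloRoot) OnAdd(ctx context.Context) {
	ch := r.NewPersistentInode(ctx,
		&fs.MemRegularFile{
			Data: []byte("served from userspace\n"), // made-up contents
			Attr: fuse.Attr{Mode: 0644},
		},
		fs.StableAttr{Ino: 2})
	r.AddChild("hello.txt", ch, false)
}

// Compile-time check that helloRoot implements the OnAdd callback.
var _ = (fs.NodeOnAdder)((*helloRoot)(nil))

func main() {
	mntDir, _ := os.MkdirTemp("", "fusedemo")
	server, err := fs.Mount(mntDir, &helloRoot{}, &fs.Options{})
	if err != nil {
		log.Fatalf("mount failed: %v", err)
	}
	log.Printf("mounted at %s; unmount with fusermount -u", mntDir)
	server.Wait()
}
```

Needs /dev/fuse and fusermount to actually run, of course, but the point is how little ceremony there is compared to the older jacobsa/fuse style.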

Honestly, I'd give FUSE a second chance; you'd be surprised at how useful it can be -- after all, it's literally running in userland so you don't need to do anything funky with privileges. However, if I were starting afresh on a similar project I'd probably be looking at using 9p2000.L instead.


I think it's possible to write a solid FUSE filesystem. It won't be as performant as in-kernel, but it could easily not be the bottleneck, depending on the backend.

I commented, though, because GCP highlights it in a few places as a component for AI workloads. I'm curious if anyone is using it in an important application and happy with it.


AWS Lambda uses FUSE and that’s one of the largest prod systems in the world.


An option exists, but they prefer you use the block storage API.


No, as in Lambda itself uses FUSE as an implementation detail of their container filesystem.


It seems there were some major issues, but AWS has worked around them and optimised for its own needs (https://www.madebymikal.com/on-demand-container-loading-in-a...)

Fair, but it's far from advice I'd be willing to give other people (other CTOs).



