FF4J – Feature Flags for Java (ff4j.github.io)
113 points by saikatsg on June 7, 2023 | hide | past | favorite | 79 comments


https://github.com/ff4j/ff4j/blob/v1/ff4j-core/src/main/java...

> The feature will be flipped after release date is reached.

Don't ever use this. It's a "time bomb" and a very bad idea: you're basically scheduling a time for your app to go down when you aren't paying attention.

You have to have a human in the loop. The right way to "schedule a release" is to have a human flip the flag at the appropriate time (preferably, ramp it up to 100% of users gradually) and then stay online for a little while to do an emergency ramp down if/when things go bad (that weren't caught by some automated system in the ramp up).


Uh, this class is even worse: SimpleDateFormat is not a thread-safe date formatter, so it should never be a shared `static final` field. And `releaseDate` and `new Date()` will be ignorant of summer/winter time changes. For scheduling like this one should use Quartz https://www.quartz-scheduler.org/
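For what it's worth, both pitfalls go away with java.time. A minimal sketch (my own illustration, not FF4J's code; the class name, pattern, and zone are made up): DateTimeFormatter is immutable and thread-safe, so it is safe as a `static final`, and ZonedDateTime is aware of DST transitions.

```java
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;

// Illustration only, not FF4J code: DateTimeFormatter is immutable and
// thread-safe (safe to share as static final), and ZonedDateTime respects
// DST, unlike SimpleDateFormat and java.util.Date.
public class ReleaseDateCheck {
    private static final DateTimeFormatter FMT = DateTimeFormatter
            .ofPattern("yyyy-MM-dd HH:mm")
            .withZone(ZoneId.of("Europe/Berlin")); // zone picked for the example

    private final ZonedDateTime releaseDate;

    public ReleaseDateCheck(String releaseDate) {
        this.releaseDate = ZonedDateTime.parse(releaseDate, FMT);
    }

    public boolean isReleased(ZonedDateTime now) {
        return !now.isBefore(releaseDate);
    }
}
```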


Unless you have a legal requirement starting on January the 1st 00:00


All the more reason to have a human in control. When you are on the hook to release the feature then you really don't want to start crashing in the middle of the night.


Don't forget the timezone!


Agreed. The problem isn't the timed rollout but the lack of automated rollback. If you don't have one, don't do this. The automated rollback should watch for acceptable error rates, latency, etc.


Having recently kind of abandoned golang for Java, I've got to say I love the richness of the Spring ecosystem.


One thing that bothers me about Spring is how it manages to use so much off-heap.

I used it initially for a small project (hobby-tier) and was shocked to find it was using 200-300MB of off-heap memory (or something in that range), so 20-30% of my poor 1GB RAM DigitalOcean VPS was just random Spring off-heap crap.

I ported my handlers from Spring to plain Netty with its provided HTTP codecs (and I wrote my own HTTP router implementation, which was fun, though I wish there had been an easy one to just use; I didn't see anything I liked when I looked), and suddenly off-heap was ~0 and I could easily have crammed 100 of these into this tiny VPS. (Not that I planned to, but I did want to use this VPS for other things too.)

I know "300MB offheap" is not something Spring will care about when they're probably normally running on giant ec2 instances, but for my case, while I would've liked to use the "django/rails but for java"-type tool, it did actually end up being too heavy in the most-constrained vector of cloud hosting (RAM).


I'm not an expert, but I think the memory used is not necessarily the memory it requires.

I think Java tends to grow memory usage up to a point rather than try to release it at the cost of performance. That doesn't mean it would not be able to run on a machine with less memory; it could run more expensive GC if needed.


I'm here specifically referring to off-heap memory. I verified this by looking into the jvm with VisualVM. These are objects that aren't part of the normal java heap area and aren't part of garbage collection. The offheap can contain many things - classloaded code, memory-mapped files, direct bytebuffers, and so on. And since it's offheap, it's hard to analyze from Java tooling. So I literally don't know what it was doing.

So in my above scenario, I could indeed run the app with a -Xmx32M and yet the application was still using 200-300MB of resident memory (RSS) due to offheap usage.


You can use Native Memory Tracking to better inspect off-heap usage. I had a similar problem where the default MaxMetaspaceSize was excessive and adjusted it to a reasonable setting. Enable it with:

    -XX:NativeMemoryTracking=summary

then inspect it with:

    jcmd <pid> VM.native_memory summary


Java by default allocates a healthy percentage of available RAM. It releases memory if a large percentage is unused, and it only runs full GC when a large percentage is used. The trick is therefore to decrease the initial max memory. Otherwise you can wind up with 70% usage, a lot of objects that could be GC'd but are not because there is not enough memory pressure to run full GC, and there is not enough unused memory to be able to release RAM back to the OS


See my reply to the other comment - I specifically refer to off-heap memory, not on-heap/GC-managed memory.


I have been told this before too.

Currently I can't run my integration tests on my work laptop because they spin up a bunch of "micro" services, so I run out of ram and need to reboot.

The memory that Java "doesn't require" really adds up.


I believe that is why the original Java answer to microservices was EARs all within a single app server, so they could share a JVM and caching. Then in production, for load, you could spread them all out and they could find each other... but I've never worked in actively developed JEE (as opposed to a legacy one).


It's especially bad when you have to run both IntelliJ with multiple projects and grails.


I am sorry but RAM is the least constrained vector of cloud hosting. I have apps that need to do video encoding in real time with 100ms latency and those are always bottlenecked on CPUs.

I have a micronaut app that uses around 350MB which is almost entirely JVM overhead.


Of course, that all depends on the context. For my cases I'll certainly keep my distance from both Spring and JBoss; I prefer to build faster apps with Java.


Not sure what you did wrong. You can easily fit whole spring web app including support for jpa and what not into 300 MB heap/native together.


Possible that I was using Spring in a wrong fashion but for me (project size around 300 KLOC) Spring dependency injection model with so much magic happening (all the auto configurations, guessing which classes to load etc.) was extremely hard to maintain and understand. I think I prefer Guice/Dagger style of defining dependencies which is much more explicit.


> Possible that I was using Spring in a wrong fashion but for me (project size around 300 KLOC) Spring dependency injection model with so much magic happening (all the auto configurations, guessing which classes to load etc.) was extremely hard to maintain and understand. I think I prefer Guice/Dagger style of defining dependencies which is much more explicit.

Spring used to be (and still can be) very explicit about defining dependencies, but they've been pushing more towards encouraging the use of "magic" over the last many years. I prefer to be as explicit as possible about everything, unless I have a very good reason not to be, and tend to either specify the literal bean name in the annotations or use the older XML-style configuration (and do the same).


It's a fair point, although you can set it up to only do explicit defined dependencies which I definitely recommend.


We had a similar-sized project grow out of a 2-pizza team and had the same problems. Kinda like how Java devs come to JavaScript and are shocked that the language doesn't have proper numerical types, I came to the Java ecosystem and was shocked that everything hinged on whether you had the right magic beans; it was especially awful when different beans were vying to be the same sort of magic.

My favorite bug we ran into was a short little RedisConfig.java file that got copy-pasted into a bunch of our microservices in the test environment. It appeared on a very casual glance to connect to Redis. But our tests did not run a Redis instance, so the configuration should have failed. Instead if you looked carefully at this file you could determine that the connection string was all broken but that didn't matter because it never implemented the Redis Connection Bean logic anyway and in fact you could replace it with a purely empty class and tests still pass.

But, tests break when you delete this unused file, the file needs to exist even if it is the empty `class RedisConfig {}`, because now tests are trying to connect to a nonexistent Redis and not succeeding. Same if you try to rename it to like `class RedisTestConfig {}`, suddenly tests want to reconnect to Redis and they all break!! So, everyone has more important things to do, it gets copied to everyone, nobody understands what on earth it is doing since the code inside the file is clearly broken.

Eventually, we start experimenting with Kotlin. When you copy this pattern over to the Kotlin repo, it starts to complain VERY loudly that you have two objects with the same name in the same namespace and it won't start because it is too confused. And that was the key hint. As far as Spring was concerned, this file, because it has the same class name in the same namespace, basically string-replaces that file. (Nondeterministically? Or maybe it was deterministic that "test" won over "main" because it was lexicographically larger?) So we were string-replacing to a RedisConfig that could never be used as a Redis configuration bean, which causes Spring's Redis Cache logic to say "oh I have no configuration, I should assume that I am unneeded", which disables Redis.

But if you rename it, then Spring Redis says "oh I see two things, I can't use that one but I can use this one, let's connect to Redis with that. oh no!! Redis is DOWN!~ crash all the tests!!"


> Nondeterministically? Or maybe it was deterministic that "test" won over "main" because it was lexicographically larger?)

Java uses classpath ordering for resolution, so a duplicated fully-qualified class name resolves to the first one on the path. This was useful for patching, and in the days before dependency management, when some library authors would bundle fat jars with incompatible versions. Java 9's modules disallowed it without command-line flags. It was always considered a bad practice, a natural fallout of the dynamic nature of the JVM. In your case the test classes were resolved before the main sources and libraries.

Unfortunately there is a broad view that Java == Spring. In over 15 years of full-time Java development, I only used Spring Framework 1.x and 2.0 (migrating from pre-DI). Since then my employers and network have all used Guice, though I never selected a role where I realized that choice upfront. That framework is far less magical and more explicit, making it very easy to debug. I suppose tool popularity is regional; it's just that this type of Spring magic is abnormal and disliked throughout my work experience.


You don't need to use Spring Boot (which does all of the opinionated autoconfig). Just import the Spring components that you need.


What made you switch?


I want to create a load of rapid prototypes. With Go it seemed I was wading through treacle just to create a DB-backed REST server that basically just permitted CRUD on the DB with some extra business logic.

With Spring, you just declare a model object and it can infer a data repository with a ton of convention-named query methods, so the number you actually have to write is far fewer. Spring is used by so many people there are loads of good libraries. E.g. I'm just adding permissions, and without needing to hunt around for a compatible library or litter my code with conditionals, it's pretty easy to use Spring Security to configure sane permissions with annotations. With Lombok there's no need to write the garbage you used to have to (loads of getters, setters, etc.) so the code is quite clean.

Annotations also simplify DTOs with mappers - in Go I had this triplicate of model struct, DTO and mapper. Maybe there was a library I could have used to help, but the faff around setting up the DB (e.g writing SQL, configuring an ORM), updating DTOs and mappers was just crippling my velocity.
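The convention-named query methods mentioned above look roughly like this (a hypothetical sketch assuming Spring Data JPA; the Customer entity and repository names are made up, and the fragment needs a Spring application around it to run):

```java
import java.util.List;
import org.springframework.data.jpa.repository.JpaRepository;

// Hypothetical repository: Spring Data derives the queries from the
// method names at startup, so no SQL or implementation is written by hand.
public interface CustomerRepository extends JpaRepository<Customer, Long> {
    List<Customer> findByLastName(String lastName);
    List<Customer> findByEmailContainingIgnoreCase(String fragment);
}
```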

Plus, for hiring, there are tons of Java devs around the world, while for Go rates are much higher. So yeah, several factors.


Have you come across sqlc? https://docs.sqlc.dev/en/stable/

It gets rid of the crufty parts of DB interaction with Go.


I did, but didn't end up trying it. Being able to hire a Spring dev for $10 p/h vs several times that for a golang dev (plus time learning a particular codebase) just means it doesn't make sense to use Go if you want to outsource in future.

I plan to run multiple experiments in parallel, keep working the day job and outsource the ones that take off. So the more standardised I can make the tech stack, the faster and cheaper dev work will be.


$10 p/h ?

are you joking? if you can find good (I don't want really great) Spring devs for under $80.00 p/h let me know.

You can hire at that rate and everyone I hired had faked their experience and could get through the interview but did a terrible job.

So we ended up firing everyone and had 3 devs for above $110 p/h that were truly awesome who got the job done.


Yeah I'll see how it goes. Tbh at the moment I expect those bottom end devs are worse than just using chatgpt directly. I'll probably have to spend more time explaining to them what I want.

Once IDE plugins can scan the majority of a codebase it'll be easier to just write a bulleted set of requirements and let it get on with it.

But anyway I was talking to a friend who's hired some great Ukrainians quite cheaply and I hear there are some good devs in the Philippines.


Being able to hire a decent Spring (or any good developer) at $10 p/h is HIGHLY theoretical.


Anyone with any sense will be using chatgpt anyway. It's how I've written most of it. And it's easier to just add to a working project than set one up.

But yeah, we'll see...


Thanks for sharing :)


What is wrong with Go?


It should have been named Stop, 'cause you have to stop at every function call to check for an error.


That it has the expressivity of Java.. from 20 years ago.


In theory it's a good Python alternative if you want more speed. But having done a bit of Go, I tend to agree. Developing in Go can be frustrating.

Java these days is however looking much better.


Really? I saw "Spring", suddenly acquired a skin rash, and swore I'd never look at this again...


The copy is in pretty rough shape. You should try to get a native English speaker to edit the contents for you.


Yikes:

> the behavior of a feature can be enslaved with your custom implementation

https://github.com/ff4j/ff4j/wiki/Flipping-Strategies


They could always use chatgpt to improve some of the diction, but it's perfectly readable.


Huh, I've been using this in production for years. Given my company's choices, I just assumed it was pretty well known & standard. Overall it's pretty nice, although I've run into a critical error a couple times which rendered a flag untouchable.


You certainly built in a lot of end-user choice for backend storage of the flags, ways to access and change them (jmx, cli, rest), and so on. I read some of the code, and there seems to be an opportunity. I don't see a lot of namespacing. So, for example, you might have a group of services that use the same backend to store flags. A pattern where you have a namespace (appname/service1, appname/service2) tied to get/set would relieve the end user from having to prepend prefixes on every call.

I see something like that in your redis client...a default prefix. But then for some of the other backends, there's no similar concept.


Anybody who has looked at it in depth: does it support gradual rollout via consistent hashes?


What does this mean? Is it setting the flags at runtime without a new build artifact (i.e. same binary hash)?


I believe he means doing a rollout to 30% of users and having that be consistent, so that the same 30% of users are always in the flag, not just 30% of evaluations.

This is typically achieved by hashing the flag+userId, converting to an integer and dividing by Max.integer.

I see https://github.com/ff4j/ff4j/blob/main/ff4j-core/src/main/ja... which looks like a rollout for 30% of evaluations but I don't see what you're looking for. I may well be looking in the wrong spot.
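The bucketing described above can be sketched in plain Java like this (my own illustration, not FF4J's API; the class and method names are made up). Hash flag+userId to a stable value in [0, 1) and compare against the rollout fraction; since a user's bucket never changes, raising the fraction only ever adds users, which is the "never retract" property:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Illustration only: consistent percentage rollout. The same (flag, userId)
// pair always lands in the same bucket, and increasing the fraction is
// monotone, so an already-enabled user is never switched back off.
public class ConsistentRollout {
    public static boolean isEnabled(String flag, String userId, double fraction) {
        try {
            MessageDigest md = MessageDigest.getInstance("MD5");
            byte[] h = md.digest((flag + "-" + userId).getBytes(StandardCharsets.UTF_8));
            // First 4 digest bytes as an unsigned 32-bit value, scaled into [0, 1).
            long v = ((h[0] & 0xFFL) << 24) | ((h[1] & 0xFFL) << 16)
                   | ((h[2] & 0xFFL) << 8) | (h[3] & 0xFFL);
            double bucket = v / 4294967296.0; // 2^32
            return bucket < fraction;
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

This relies on the uniformity of the hash: at fraction 0.3, roughly 30% of users fall below the threshold, and exactly those users stay in when you move to 0.5.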


> I believe he means doing a rollout to 30% of users and having that be consistent

Exactly. Also, not only shall the evaluation be consistent for a specific percentage and userId, but also, when increasing the percentage, an enabled feature shall never be retracted for a user which had the feature enabled before.

userId could also be IPs, country codes, worker nodes etc etc.


Would you mind slightly elucidating on that algorithm? Max.Integer would be 2^32-1 for instance? Where does the 30% parameter get exercised?

I thought it was the traditional circle point consistent hashing algo that GP was discussing but this is interesting.


Sure. https://github.com/prefab-cloud/prefab-cloud-java/blob/main/... is a consistent function that turns a string into a float in the range 0-1.

So if you hash "flagA-user1234" you'll get a number 0-1 and if you compare that to < .3 you'll know whether they should be "in the flag".

Relies on the uniformity of the hashing function.


Oh of course. Very reasonable. Thank you for source code link. I think I understand now.


yeah, "an enabled feature shall never be retracted for a user which had the feature enabled before" is extra spicy and requires a different architecture entirely, since you fundamentally need to store the evaluation for each user and check before you return a value.

Definitely something to use with caution in my experience, since the performance profile is very different.


IMHO it's fine if the feature is retracted for a minimal number of users when scaling down (can be done stateless, as described in your other post). It's just that it shouldn't jiggle all over the place when gradually increasing the percentage.


We've been using Togglz (https://www.togglz.org/) and have had no issues so far. FF4J does seem to be a bit more feature rich though.


I've been interested in trying Unleash. Looks like a very polished, cross language option that appears to directly integrate with Gitlab too.

https://www.getunleash.io/


I use unleash pretty heavily and it works well


I just put together a comparison yesterday of some tools and I found unleash to be more unwieldy than I expected. I wanted to set the result to a specific value for a shared segment and couldn't do it. For some reason the overrides didn't let me use a segment. Was I doing it wrong?


does it support Spring?


We recently switched from ff4j to togglz and are very happy with the choice.

ff4j's audit feature was buggy and caused our database to fill up with entries; Togglz' simplicity is a feature, I guess :)

Added plus: Togglz is maintained by people from my corp.


If you're using numbers for the acronym, you could've gone with F3J.


The "4j" suffix literally means "for Java" and is commonly used to indicate that the project is a Java library, e.g. log4j, slf4j, &c.


baxuz' joke still stands


Growthbook has a better UI and is more fully featured. They also have a java SDK https://docs.growthbook.io/lib/java


We use it in production at lastminute.com and are pretty happy with it :-)


This is absolutely great and can hopefully help kill the need for overpriced 3rd party services.


The "overpricedness" of 3rd parties is starting to get better. I started https://prefab.cloud/features/feature-flags/ due to my frustrations with overpriced seat-based solutions.

I'd be curious if you think this ~$1/pod pricing feels fair.


Would be nice if it supported the OpenFeature spec


...do we know that it doesn't?


Feels like, after the log4j situation, the X4J naming convention would be a bad omen.


my thoughts exactly. i came here to see if this was another vuln being discovered in the wild specifically because of the name.


This is an old product, what is the point of randomly posting a GitHub repo on HackerNews?


Conflicting name with one of my favorite games! Final Fantasy 4 J / Hard Type.


This is going to be very confusing, since FF4J already means the Japanese version of Final Fantasy IV on the SNES.


I’m looking forward to the Java products that build in the wrong one by mistake.


While the Venn diagram of anime nerds and programmers has an undeniable overlap, I don't think this should influence naming choices too much.


Someone is going to build an SQL engine with zig and call it DBZ.


There’s already DB Zipper which uses *.dbz as a file extension.


Final Fantasy IV Java Edition would be a pretty wild guess by anyone's standards. I certainly wouldn't have thought of it, and I'm an ex-java developer who speaks Japanese and was extremely fond of FF4.


I know! I was so confused that I tried downloading it and playing and it wouldn't work. I've been in a spiral all day.



