> The feature will be flipped after release date is reached.
Don't ever use this. This is a "time bomb". This is a very bad idea. It's basically like scheduling a time for your app to go down when you aren't paying attention.
You have to have a human in the loop. The right way to "schedule a release" is to have a human flip the flag at the appropriate time (preferably, ramp it up to 100% of users gradually) and then stay online for a little while to do an emergency ramp down if/when things go bad (that weren't caught by some automated system in the ramp up).
Uh, this class is even worse: `SimpleDateFormat` is not a thread-safe date formatter, so it should never be a `static final` field. `releaseDate` and `new Date()` will also be ignorant of summer/winter time changes. For scheduling like this you should use Quartz: https://www.quartz-scheduler.org/
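For anyone hitting the same issue, the modern java.time API avoids both problems. A minimal sketch (class and method names are mine, not from the posted code):

```java
// Sketch: a thread-safe, zone-aware release check using java.time.
// DateTimeFormatter is immutable, so sharing it as static final is safe,
// unlike SimpleDateFormat. ZonedDateTime comparisons respect DST rules.
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;

class ReleaseCheck {
    // safe to share across threads, unlike a static SimpleDateFormat
    static final DateTimeFormatter FMT = DateTimeFormatter.ISO_ZONED_DATE_TIME;

    static boolean isReleased(String releaseIso, ZonedDateTime now) {
        ZonedDateTime release = ZonedDateTime.parse(releaseIso, FMT);
        return !now.isBefore(release);
    }
}
```

Though per the comments above, even a correct date check is still a time bomb without a human watching the rollout.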
All the more reason to have a human in control. When you are on the hook to release the feature then you really don't want to start crashing in the middle of the night.
Agreed. The problem isn't the timed rollout but the lack of automated rollback. If you don't have one, don't do this. The automated rollback should watch for acceptable error rates, latency, etc.
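The core of such a guard is tiny; here's a sketch with invented placeholder thresholds (real ones depend on your SLOs):

```java
// Sketch of an automated rollback guard: ramp the flag back down if error
// rate or tail latency crosses a threshold. The thresholds are invented
// placeholders, and in practice the inputs come from a metrics pipeline.
class RollbackGuard {
    static final double MAX_ERROR_RATE = 0.01;    // 1% of requests failing
    static final double MAX_P99_LATENCY_MS = 500; // p99 latency budget

    static boolean shouldRollBack(double errorRate, double p99LatencyMs) {
        return errorRate > MAX_ERROR_RATE || p99LatencyMs > MAX_P99_LATENCY_MS;
    }
}
```

The hard part isn't this check, it's wiring it to trustworthy metrics and making the ramp-down itself automatic.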
One thing that bothers me about Spring is how it manages to use so much off-heap.
I used it initially for a small project (hobby-tier) and was shocked to find it was using 200-300MB of off-heap memory (or something in that range), to the point where 20-30% of my poor 1GB RAM DigitalOcean VPS was just random Spring off-heap crap.
I ported my handlers from Spring to just using netty and their provided HTTP codecs (and I wrote my own HTTP router implementation, which was fun, though I wish there had been an easy one to just use; I didn't see anything I liked when I looked), and suddenly off-heap was ~0 and I could've easily crammed 100 of these into this tiny VPS. (Not that I planned to, but I did want to use this VPS for other things too.)
I know "300MB offheap" is not something Spring will care about when they're probably normally running on giant ec2 instances, but for my case, while I would've liked to use the "django/rails but for java"-type tool, it did actually end up being too heavy in the most-constrained vector of cloud hosting (RAM).
I'm not an expert, but I think the memory used is not necessarily the memory it requires.
I think Java tends to grow memory usage up to a point, rather than trying to release it at the cost of performance. That doesn't mean it would not be able to run on a machine with less memory; it could run a more expensive GC if that were needed.
I'm here specifically referring to off-heap memory. I verified this by looking into the jvm with VisualVM. These are objects that aren't part of the normal java heap area and aren't part of garbage collection. The offheap can contain many things - classloaded code, memory-mapped files, direct bytebuffers, and so on. And since it's offheap, it's hard to analyze from Java tooling. So I literally don't know what it was doing.
So in my above scenario, I could indeed run the app with a -Xmx32M and yet the application was still using 200-300MB of resident memory (RSS) due to offheap usage.
You can use Native Memory Tracking to better inspect off-heap usage. I had a similar problem where the default MaxMetaspaceSize was excessive, and adjusted it to a reasonable setting.
Java by default allocates a healthy percentage of available RAM. It releases memory if a large percentage is unused, and it only runs full GC when a large percentage is used. The trick is therefore to decrease the initial max memory. Otherwise you can wind up with 70% usage, a lot of objects that could be GC'd but are not because there is not enough memory pressure to run full GC, and there is not enough unused memory to be able to release RAM back to the OS
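You can watch the used/committed gap from inside the JVM with the standard management beans; a small sketch:

```java
// Sketch: used vs committed vs max heap via the standard MemoryMXBean.
// "committed" is what the JVM has actually claimed from the OS; "max" is
// the -Xmx cap. A large gap between used and committed is memory that
// could in principle be returned to the OS but hasn't been.
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

class HeapStats {
    static MemoryUsage heap() {
        return ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
    }

    public static void main(String[] args) {
        MemoryUsage h = heap();
        System.out.println("used      = " + h.getUsed());
        System.out.println("committed = " + h.getCommitted());
        System.out.println("max       = " + h.getMax());
    }
}
```

Note this only covers the heap; the off-heap usage discussed above needs Native Memory Tracking or OS-level tools instead.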
I believe that is why the original Java answer to microservices was EARs all within a single app server, so they could share a JVM and caching. Then in production, for load, you could spread them all out and they can find each other... but I've never worked in actively developed JEE (as opposed to a legacy one).
I am sorry but RAM is the least constrained vector of cloud hosting. I have apps that need to do video encoding in real time with 100ms latency and those are always bottlenecked on CPUs.
I have a micronaut app that uses around 350MB which is almost entirely JVM overhead.
Of course it all depends on the context; for my cases I'll certainly keep my distance from either Spring or JBoss. I prefer to build faster apps with Java.
Possible that I was using Spring in a wrong fashion but for me (project size around 300 KLOC) Spring dependency injection model with so much magic happening (all the auto configurations, guessing which classes to load etc.) was extremely hard to maintain and understand. I think I prefer Guice/Dagger style of defining dependencies which is much more explicit.
> Possible that I was using Spring in a wrong fashion but for me (project size around 300 KLOC) Spring dependency injection model with so much magic happening (all the auto configurations, guessing which classes to load etc.) was extremely hard to maintain and understand. I think I prefer Guice/Dagger style of defining dependencies which is much more explicit.
Spring used to be (and still can be) very explicit about defining dependencies, but they've been pushing more towards encouraging the use of "magic" over the last many years. I prefer to be as explicit as possible about everything, unless I have a very good reason not to be, and tend to either specify the literal bean name in the annotations or use the older XML-style configuration (and do the same).
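For comparison, the fully explicit style the Guice/Dagger crowd prefers is ultimately just constructors; a toy sketch with no framework at all (all names invented):

```java
// Sketch of the explicit-wiring style: every dependency is a constructor
// argument, and the whole object graph is assembled by hand in one place,
// so there is nothing to guess about which classes get loaded.
class Repo {
    String find(String id) { return "row:" + id; }
}

class Service {
    private final Repo repo;
    Service(Repo repo) { this.repo = repo; } // dependency is visible here
    String handle(String id) { return repo.find(id); }
}

class Wiring {
    public static void main(String[] args) {
        Service s = new Service(new Repo()); // the entire graph, by hand
        System.out.println(s.handle("42"));
    }
}
```

DI frameworks automate this assembly; the debate above is about how much of that automation should be implicit.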
We had a similar-sized project grow out of a 2-pizza team and had the same problems. Kinda like how Java devs come to JavaScript and are shocked that the language doesn't have proper numerical types, I came to the Java ecosystem and was shocked that everything pivotally hinged on whether you had the right magic beans, and especially when you had different beans vying to be the same sort of magic, it was awful.
My favorite bug we ran into was a short little RedisConfig.java file that got copy-pasted into a bunch of our microservices in the test environment. On a very casual glance it appeared to connect to Redis. But our tests did not run a Redis instance, so the configuration should have failed. Instead, if you looked carefully at this file, you could determine that the connection string was completely broken, but that didn't matter because it never implemented the Redis connection bean logic anyway; in fact you could replace it with a purely empty class and the tests would still pass.
But, tests break when you delete this unused file, the file needs to exist even if it is the empty `class RedisConfig {}`, because now tests are trying to connect to a nonexistent Redis and not succeeding. Same if you try to rename it to like `class RedisTestConfig {}`, suddenly tests want to reconnect to Redis and they all break!! So, everyone has more important things to do, it gets copied to everyone, nobody understands what on earth it is doing since the code inside the file is clearly broken.
Eventually, we start experimenting with Kotlin. When you copy this pattern over to the Kotlin repo, it starts to complain VERY loudly that you have two objects with the same name in the same namespace and it won't start because it is too confused. And that was the key hint. As far as Spring was concerned, this file, because it has the same class name in the same namespace, basically string-replaces that file. (Nondeterministically? Or maybe it was deterministic that "test" won over "main" because it was lexicographically larger?) So we were string-replacing to a RedisConfig that could never be used as a Redis configuration bean, which causes Spring's Redis Cache logic to say "oh I have no configuration, I should assume that I am unneeded", which disables Redis.
But if you rename it, then Spring Redis says "oh I see two things, I can't use that one but I can use this one, let's connect to Redis with that. oh no!! Redis is DOWN!~ crash all the tests!!"
> (Nondeterministically? Or maybe it was deterministic that "test" won over "main" because it was lexicographically larger?)
Java uses classpath ordering for class resolution, so a duplicated fully-qualified class name resolves to whichever copy appears first on the classpath. This was useful for patching, or before dependency management, when some library authors would bundle fat jars with incompatible versions. Java 9's module system disallowed it without command-line flags. It was always considered a bad practice, a natural fallout of the dynamic nature of the JVM. In your case the test classes were resolved before the main sources and libraries.
Unfortunately there is a broad view that Java == Spring. In over 15 years of full-time Java development, I only used Spring Framework 1.x and 2.0 (migrating from pre-DI). Since then my employers and network have all used Guice, though I never selected a role where I knew that choice upfront. That framework is far less magical and far more explicit, making it very easy to debug. I suppose tool popularity is regional; it's just that this type of Spring magic is abnormal and disliked throughout my work experience.
I want to create a load of rapid prototypes. With Go it seemed I was wading through treacle just to create a DB-backed REST server that basically just permitted CRUD on the DB with some extra business logic.
With Spring, you just declare a model object and it can infer a data repository with a ton of convention-named query methods, so the number you actually have to write yourself is far smaller. Spring is used by so many people that there are loads of good libraries. E.g. I'm just adding permissions, and without needing to hunt around for a compatible library or litter my code with conditionals, it's pretty easy to use Spring Security to configure sane permissions with annotations. With Lombok there's no need to write the garbage you used to have to (loads of getters, setters, etc.), so the code is quite clean.
Annotations also simplify DTOs with mappers; in Go I had this triplicate of model struct, DTO and mapper. Maybe there was a library I could have used to help, but the faff around setting up the DB (e.g. writing SQL, configuring an ORM), updating DTOs and mappers was just crippling my velocity.
Plus, for hiring, there are tons of Java devs around the world, while for Go rates are much higher. So yeah, several factors.
I did, but didn't end up trying it. Being able to hire a Spring dev for $10 p/h vs several times that for a golang dev (plus time learning a particular codebase) just means it doesn't make sense to use Go if you want to outsource in future.
I plan to run multiple experiments in parallel, keep working the day job and outsource the ones that take off. So the more standardised I can make the tech stack, the faster and cheaper dev work will be.
Yeah I'll see how it goes. Tbh at the moment I expect those bottom end devs are worse than just using chatgpt directly. I'll probably have to spend more time explaining to them what I want.
Once IDE plugins can scan the majority of a codebase it'll be easier to just write a bulleted set of requirements and let it get on with it.
But anyway I was talking to a friend who's hired some great Ukrainians quite cheaply and I hear there are some good devs in the Philippines.
Huh, I've been using this in production for years. Given my company's choices, I just assumed it was pretty well known & standard. Overall it's pretty nice, although I've run into a critical error a couple times which rendered a flag untouchable.
You certainly built in a lot of end-user choice for backend storage of the flags, ways to access and change them (jmx, cli, rest), and so on. I read some of the code, and there seems to be an opportunity. I don't see a lot of namespacing. So, for example, you might have a group of services that use the same backend to store flags. A pattern where you have a namespace (appname/service1, appname/service2) tied to get/set would relieve the end user from having to prepend prefixes on every call.
I see something like that in your redis client...a default prefix. But then for some of the other backends, there's no similar concept.
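To make the suggestion concrete, the wrapper could be as thin as this (a plain map stands in for the backend store; all names are invented, not from the project's API):

```java
// Sketch of the namespacing idea: pin a namespace once at construction so
// callers never prepend prefixes on each get/set. Any flat key/value
// backend (redis, consul, a DB table) could sit behind the map.
import java.util.Map;

class NamespacedFlags {
    private final Map<String, Boolean> store;
    private final String prefix;

    NamespacedFlags(Map<String, Boolean> store, String namespace) {
        this.store = store;
        this.prefix = namespace + "/";
    }

    void set(String flag, boolean value) { store.put(prefix + flag, value); }

    boolean get(String flag) { return store.getOrDefault(prefix + flag, false); }
}
```

Two services sharing the same backend then get isolated flag sets just by constructing with different namespaces (appname/service1, appname/service2).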
I believe he means doing a rollout to 30% of users and having that be consistent, so that the same 30% of users are always in the flag, not just 30% of evaluations.
This is typically achieved by hashing the flag+userId, converting the hash to an integer, and dividing by `Integer.MAX_VALUE` to get a stable fraction per user.
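A minimal sketch of that, bucketing by hash modulo 100 instead of dividing (same idea; all names are invented):

```java
// Sketch of deterministic percentage rollout: hash(flag + ":" + userId),
// map it into [0, 100), and enable if the bucket is below the rollout
// percentage. Because the bucket is stable, the same 30% of users are
// always in, and raising 30 -> 50 only ever adds users, never removes.
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

class PercentRollout {
    static boolean isEnabled(String flag, String userId, int percent) {
        try {
            MessageDigest md = MessageDigest.getInstance("MD5");
            byte[] h = md.digest((flag + ":" + userId).getBytes(StandardCharsets.UTF_8));
            // take 4 bytes as an unsigned value, bucket into [0, 100)
            long v = ((h[0] & 0xFFL) << 24) | ((h[1] & 0xFFL) << 16)
                   | ((h[2] & 0xFFL) << 8) | (h[3] & 0xFFL);
            return (v % 100) < percent;
        } catch (Exception e) {
            return false; // fail closed if hashing is unavailable
        }
    }
}
```

Note this is stateless: nothing is stored per user, yet evaluations are consistent across calls and across servers.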
> I believe he means doing a rollout to 30% of users and having that be consistent
Exactly. Also, not only shall the evaluation be consistent for a specific percentage and userId, but also, when increasing the percentage, an enabled feature shall never be retracted for a user who had the feature enabled before.
userId could also be IPs, country codes, worker nodes etc etc.
yeah, "an enabled feature shall never be retracted for a user which had the feature enabled before" is extra spicy and requires a different architecture entirely, since you fundamentally need to store the evaluation for each user and check before you return a value.
Definitely something to use with caution in my experience, since the performance profile is very different.
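A sketch of that stateful variant: record only the users who have ever seen the feature enabled, and consult that record first (an in-memory map stands in for the real persistent store; all names invented):

```java
// Sketch of "never retract": once a (flag, user) pair evaluates to enabled,
// remember it and always return true afterwards, regardless of what the
// stateless percentage evaluation would now say. A real system would
// persist this, which is exactly the performance cost discussed above.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

class StickyFlags {
    // records (flag, user) pairs that have ever evaluated to enabled
    private final Map<String, Boolean> enabledOnce = new ConcurrentHashMap<>();

    boolean evaluate(String flag, String userId, Supplier<Boolean> fresh) {
        String key = flag + ":" + userId;
        if (enabledOnce.containsKey(key)) return true; // never retract
        if (fresh.get()) {
            enabledOnce.put(key, Boolean.TRUE);
            return true;
        }
        return false;
    }
}
```

Disabled users are still evaluated fresh each time, so ramp-ups work; only the enabled state is pinned.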
IMHO it's fine if the feature is retracted for a minimal number of users when scaling down (can be done stateless, as described in your other post). It's just that it shouldn't jiggle all over the place when gradually increasing the percentage.
I just put together a comparison yesterday of some tools and I found unleash to be more unwieldy than I expected. I wanted to set the result to a specific value for a shared segment and couldn't do it. For some reason the overrides didn't let me use a segment. Was I doing it wrong?
The "overpricedness" of 3rd parties is starting to get better. I started https://prefab.cloud/features/feature-flags/ due to my frustrations with overpriced seat-based solutions.
I'd be curious if you think this ~$1/pod pricing feels fair.
Final Fantasy IV Java Edition would be a pretty wild guess by anyone's standards. I certainly wouldn't have thought of it, and I'm an ex-java developer who speaks Japanese and was extremely fond of FF4.