I'm working on https://rinse.one - basically a simple text box where you enter common queries/commands and get answers. I have many ideas but not enough time to build them all on my own. For example, the next thing I have planned is to make the commands composable.
I mostly work on the backend. I would like to collaborate with someone who does the frontend (but backend collaborators are welcome too). I like keeping things simple. The whole thing (code and infrastructure) is open source - check out the "about" and "commands" links from the main site.
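To illustrate what "composable commands" could look like, here is a minimal Go sketch. The `Command` type, `Compose` function, and toy commands are all hypothetical names for illustration - the actual rinse.one code may model this quite differently.

```go
package main

import (
	"fmt"
	"strings"
)

// Command is a hypothetical shape for a command: take the user's
// input text, return an answer (or an error).
type Command func(input string) (string, error)

// Compose chains commands so each one's output feeds the next -
// one simple way to make commands composable.
func Compose(cmds ...Command) Command {
	return func(input string) (string, error) {
		out, err := input, error(nil)
		for _, c := range cmds {
			if out, err = c(out); err != nil {
				return "", err
			}
		}
		return out, nil
	}
}

// Two toy commands for illustration.
var trim Command = func(s string) (string, error) { return strings.TrimSpace(s), nil }
var upper Command = func(s string) (string, error) { return strings.ToUpper(s), nil }

func main() {
	out, _ := Compose(trim, upper)("  hello  ")
	fmt.Println(out) // HELLO
}
```

A function type plus a variadic combinator keeps the design simple: new commands are just functions, and pipelines are values you can pass around.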
I too was playing around with Go generics. I wrote some naive concurrent filter and fold (reduce) functions for slices and maps here: https://github.com/unix1/gostdx if anyone is curious how those feel.
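For anyone who doesn't want to click through, here is a sketch of what a naive concurrent generic filter can look like - this is my own illustration of the idea, not the actual gostdx API:

```go
package main

import (
	"fmt"
	"sync"
)

// Filter evaluates pred on each element in its own goroutine and
// keeps the elements for which it returns true, preserving input
// order. "Naive" because it spawns one goroutine per element with
// no worker pool or bounding.
func Filter[T any](in []T, pred func(T) bool) []T {
	keep := make([]bool, len(in)) // each goroutine writes a distinct index
	var wg sync.WaitGroup
	for i, v := range in {
		wg.Add(1)
		go func(i int, v T) {
			defer wg.Done()
			keep[i] = pred(v)
		}(i, v)
	}
	wg.Wait()
	out := make([]T, 0, len(in))
	for i, v := range in {
		if keep[i] {
			out = append(out, v)
		}
	}
	return out
}

func main() {
	evens := Filter([]int{1, 2, 3, 4, 5, 6}, func(n int) bool { return n%2 == 0 })
	fmt.Println(evens) // [2 4 6]
}
```

Writing to distinct indices of a pre-sized slice avoids a mutex, and collecting results after `wg.Wait()` keeps the output deterministic despite the concurrency.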
> migrating to Linux is fine, but don't expect a better experience than the Mac.
It seems that depends on what you do (and like). If you spend most of your time in Mail.app and preview your files with the Preview app, and you like those apps, then perhaps Mac OS is a good choice for you.
I personally don't fall into those categories. I care more about the software development experience where Linux is a better option for me. In fact, I was a bit surprised that the blog post was from "Engineer, developer, entrepreneur" and there was no coverage of software engineering/development tools.
Here are things I prefer on Linux (in no particular order):
- Docker actually runs without taking over half or all of the system
- the window manager behaves the way I configure it
- Kate works better
- it's closer to what's in production - I don't need to run another Docker container to do/test simple things or fight with Mac OS workarounds
- most tools/packages I use are a simple command away - Linux package managers are better and have a lot more packages than Homebrew
- all or most installed packages get updates automatically
- GUI file managers and file open dialogs are better than the Finder app
- app menus belong to their windows
- fonts aren't extra blurry on external monitors
- most of the software stack is open source - if I find a bug, I can troubleshoot it or even try fixing it
- etc., etc.
Things I liked better on a Mac: giant click anywhere touchpad (though tap-to-click feature was inadequate).
In 1988, Hubert L. Dreyfus and Stuart E. Dreyfus released a paperback edition of their previously published book "Mind over Machine", in which they mostly spend time debunking the myth that expert systems and rule-based programs are ever going to have "intelligence" on par with the human brain.
The book is an interesting read in itself, but what I found remarkable is that in the 1988 release they added a "preface to the paperback edition" in which they spend a couple of pages giving their views on artificial neural networks, an approach which (though not new) was gaining steam at the time. The conclusions they reached are as relevant now as they were three decades ago.
There have been no new breakthroughs in this area. Most of the research being done applies what we have known for decades to specific areas, with minor insights into tweaks and combinations of algorithms that better solve specific problems. The big differences between then and now are: (1) technology is more accessible - data is easier to collect, store, and output via many input/output methods; and (2) the hardware is significantly faster - we can now go through more data, make algorithms run faster, and appear to perform better.
This inevitably brought a lot of hype, including many predictions of human-like artificial intelligence in the near future. But maybe those with experience in the field in the US and Japan in the '60s and '70s can draw a parallel between what's happening now and what has happened a few times in the past:
- companies perform neat promising demos with unrealistic implicit or explicit promises
- investors pour money in
- media hype ensues
- after a while - no new breakthroughs: still can't turn an ANN or expert system into a human brain
- outcome is improvements in limited use cases
- hype dies down, but we can repeat the cycle after improvements in hardware
> There have been no new breakthroughs in this area. Most of the research being done is in application of what we have known for decades in specific areas, with minor insights into tweaks and uses of combinations ...
There are 2 huge problems with that:
1) Nobody is trying to "embody" an intelligence with any serious research project behind it. Nobody's even trying to create an artificial individual using neural networks. There are several obvious ways to attempt this, so feasibility is not really the problem.
Therefore I claim that your implied conclusion - that it isn't possible with neural networks - is somewhere between premature and wrong.
2) What if the difference between an ANN and our brain is a difference of scale and ... nothing more? We still don't have the hardware scale to get anywhere near the human brain, and just so we're clear, the differences are still huge.
Human neocortex (which is roughly what decides on actions to take): 100 billion neurons
Human cortex (which is everything that directs a human action directly - the neocortex decides to throw the spear and picks the target; the cortex aims, directs muscle forces, moves the body, and compensates for disturbances like uneven terrain): another 20 billion neurons.
Various neurons on the muscles and in the central nervous system directly: a few million (mostly on the heart and womb - yes, also in men, who do have a womb; it's just shriveled and inactive). They're extremely critical, but don't change the count very much.
AlphaGo: 19x19x48, times 4 I think - about 70,000 neurons, and that does sound like the right order of magnitude for recent large-scale networks.
A human neuron takes inputs from ~10000 other neurons, on average. A state-of-the-art ANN neuron takes input from ~100, and since it's Google and they've got datacenters, AlphaGo was ~400.
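Multiplying the figures above out makes the scale gap concrete. A quick back-of-the-envelope calculation (using only the rough numbers already quoted in this comment):

```go
package main

import "fmt"

func main() {
	// Rough order-of-magnitude figures from the comment above.
	humanNeurons := 120e9 // neocortex (~100B) + cortex (~20B)
	humanInputs := 1e4    // average inputs per human neuron
	annNeurons := 7e4     // AlphaGo-scale network
	annInputs := 400.0    // inputs per AlphaGo neuron

	humanConns := humanNeurons * humanInputs
	annConns := annNeurons * annInputs
	fmt.Printf("human ~%.0e connections, ANN ~%.0e, ratio ~%.0e\n",
		humanConns, annConns, humanConns/annConns)
}
```

By these estimates the human brain has on the order of 10^15 connections versus roughly 10^7 for the network - a gap of seven to eight orders of magnitude.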
So the state-of-the-art networks we have are on par with the intelligence of animals like a lobster, an ant, or a honeybee. I think it is wholly unremarkable and understandable that these networks do not exhibit human-level AGI.
What is remarkable is what they can do. They can identify species from pictures better than human specialists (and orders of magnitude better than average humans). They can speak. They can answer questions about a text. They can ... etc.
Give it a few orders of magnitude and there will be nothing these networks don't beat humans on.
It would be hard to argue the counterfactual, but I do think that licensing (GPLv2) played a significant role in where Linux is today. It is precisely because the license obligates distributors to share the code that the whole became better software and a more attractive platform to contribute to. In fact, most contributors, big and small, work with upstream to streamline their contributions. Having to share the source has worked out well for both programmers (individuals and companies) and end users in the long run.
Stating that there are some number of GPL violations that haven't been enforced, or that sit in a gray area, is not a logical basis for the argument that the idea of having to share the source code should be abandoned. In fact, history shows otherwise: Linux has had more success (as measured by global uptake) than any other non-GPL open-source kernel ecosystem.
Similarly, the fact that some people can find loopholes in well-intended laws and regulations is not a good reason to abandon their intents. Instead, the question should be: can we keep those intents and fix the loopholes, or make enforcement more straightforward? I think Google could, but maybe it's not in their immediate interest, which may be closer to: how do I upgrade the kernel without recompiling that other stuff?
> you're already in the business of trusting the creator of the extension
Those were my thoughts exactly.
And tangentially related to your point, I wonder why the add-on developer, whom I have explicitly trusted by intentionally installing their software, isn't granted at least the same (or a higher) level of trust as an unknown 3rd party web developer whose arbitrary JavaScript the browser automatically downloads and runs when I visit a desired 1st party website.
Firefox (or any browser) provides no built-in protections against arbitrary 3rd party code included by an unsuspecting website - code that can fingerprint, track user actions, and access the DOM, whether for "benign" or malicious purposes. In my mind that is just as important, if not more so, for both security and privacy.
It remains to be seen whether Mozilla's extended WebExtensions API will provide enough for existing add-ons that currently rely on low-level access to restrict 3rd party web applications to some degree.
Hi, I'm the hiring manager for the Flag Delivery team at LaunchDarkly. We're building and scaling the Flag Delivery Network - a globally distributed, high-throughput system that delivers hundreds of thousands to millions of real-time feature flag updates per second to SDKs across web, mobile, and server environments.
We work on challenges like streaming delivery, edge caching, global state synchronization, and operational excellence at scale. If you enjoy designing, building and running distributed systems that need to be fast, reliable, and observable - we'd love to hear from you.
Apply here: https://job-boards.greenhouse.io/launchdarkly/jobs/661498400...
Or feel free to reach out directly: zurab (at) launchdarkly (dot) com