Hacker News | ematvey's comments

Bringing lakeFS into our software stack was a huge mistake.


I'm also wondering the same thing. It has a bunch of very nice capabilities, but taking away such a basic piece of functionality to push people towards the paid options is very concerning.


I raised concerns about deficiencies in their RBAC on their Slack before, and they immediately started pushing me toward their enterprise offering. I thought it was a red flag, but I never imagined they would stoop to this.

When we started using them they seemed like a genuinely open-source project. Is there a term for open-source traps like that? If not, we need to come up with one. Trapware?


Could be called some kind of https://en.wikipedia.org/wiki/Bait-and-switch , or BAS = B(ullshit)A(ssisted)S(ales)?


Yep, it fits


It could be argued that acquiring a whole security-focused company is a signal they’re seriously reconsidering their approach to security and deserve a benefit of the doubt.


Sure, you could argue that. It would be a terrible argument though.

Why do I owe a commercial enterprise anything? They demonstrated repeatedly they cannot be trusted. In obvious and extreme fashion.

The fact that Keybase agreed to this tells me a lot more about Keybase than it does about Zoom.


They also lied about having end-to-end encryption. The awful security practices could be chalked up to incompetence, but the fact that they lied takes it too far, in my opinion. I too have deleted my Keybase account because of this.


I might be missing something here, but how does knowledge about the pandemic and the severity of the disease qualify as insider information? I guess this guy can be accused of not expressing his real beliefs to the public (to contain the panic, as he would surely argue), but that is not the same as insider trading.


He did express his concerns about the economy to a small group of donors. He was also one of only 3 senators to vote against the STOCK Act, which made things like this illegal.


I was expecting to read about mapping a crustacean's connectome here.


It really helped in our case. We have a team of 10+ researchers who also ship code to production. They repeatedly ran into a problem where they recomputed the same data at runtime, or reinvented the wheel because they didn't know somebody had already computed that datum. I ended up writing a small single-process (for now) workflow engine running a "company-wide DAG" of reusable data-processing nodes (all derived from user-submitted input + models). Now it is much easier for individuals to contribute, and much easier to optimize pipelines separately. I might open-source it some time soon.
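The engine itself isn't public, so all names below are hypothetical; but a single-process workflow engine running a DAG of reusable, cached data nodes might be sketched like this: each node declares its dependencies by name, and every result is memoized so no node recomputes data another node already produced.

```python
# Hypothetical sketch of a "company-wide DAG" of reusable data nodes.
# Nodes declare dependencies by name; results are cached so nothing
# is computed twice within a run.

_REGISTRY = {}  # node name -> (function, dependency names)
_CACHE = {}     # node name -> computed result

def node(*deps):
    """Register a function as a DAG node depending on the named nodes."""
    def decorator(fn):
        _REGISTRY[fn.__name__] = (fn, deps)
        return fn
    return decorator

def compute(name):
    """Resolve a node's dependencies recursively, caching every result."""
    if name not in _CACHE:
        fn, deps = _REGISTRY[name]
        _CACHE[name] = fn(*(compute(d) for d in deps))
    return _CACHE[name]

# Example nodes (invented for illustration): two downstream nodes
# share one upstream result, which is computed only once.
@node()
def raw_input():
    return [1, 2, 3]

@node("raw_input")
def doubled(xs):
    return [x * 2 for x in xs]

@node("raw_input", "doubled")
def summary(xs, ys):
    return sum(xs) + sum(ys)
```

With this layout, `compute("summary")` pulls `raw_input` and `doubled` transparently, so individual contributors only need to register their own node. A production version would also want cycle detection and persistent caching, which this sketch omits.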


Would you say it is highly relevant for those interested in AI safety?


What's your threat model when you say "AI safety"? Which scenarios are you attempting to prevent?


For such an exciting title, I was a bit disappointed to find another job board in here. Should I apply if I’m not looking for a job but want to participate in a professional community?


Yes. While we make money by placing people, most of the day-to-day activity involves sharing papers, open source collaboration, etc. (plus job-related things like sharing interview tips)


Companies doing ML do not and should not base their decisions on hypothetical considerations about long-term effects on market health, or on present/future hardware cost alone. They work with what's available. It is the market's job to correct, and it surely will now, as the current situation becomes more of a problem.


To OpenAI folks: are you planning to publish a paper with implementation details?


Thank you for doing this openly. Will you be posting the recordings of project brainstorm sessions?

