baxter001's comments | Hacker News

I recall making a rather large technical case for using Raft and a replicated log in the water industry, back in 2017 it must have been.

That's going from knowing nothing about the shape of consensus algorithms to almost getting one adopted.

To me, Raft's brilliance is how easy, clear and comprehensible its authors made thinking about it.
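
As an illustration of that comprehensibility, here's a minimal sketch (state kept in a plain dict, not a full implementation) of Raft's RequestVote rule: at most one vote per term, and only for a candidate whose log is at least as up to date as ours.

    def grant_vote(state, term, candidate_id, last_log_index, last_log_term):
        # Reject candidates from stale terms.
        if term < state["current_term"]:
            return False
        # Seeing a newer term resets who we've voted for.
        if term > state["current_term"]:
            state["current_term"] = term
            state["voted_for"] = None
        # A candidate's log is "up to date" if its last entry has a higher term,
        # or the same term and at least the same index.
        up_to_date = (last_log_term, last_log_index) >= (
            state["last_log_term"], state["last_log_index"])
        if state["voted_for"] in (None, candidate_id) and up_to_date:
            state["voted_for"] = candidate_id
            return True
        return False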


> I was surprised how well the bird was getting around.

SurvivorBias.png except it's a silhouette of a goose with numerous red arrows drawn over it.


You'd be surprised what's in there; a few forms of NNs are already supported, for denoising and speech detection.

I think having this flow out to all of the projects that depend on libav is a greater good than notions of library purity.
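
For instance, libavfilter's arnndn filter runs an RNNoise-style recurrent network for denoising. A minimal sketch of invoking it from Python (the input/output file names and model path are placeholders, not real files):

    import subprocess

    # Hypothetical file names; arnndn loads an RNNoise-format model (.rnnn)
    # and applies neural-network denoising inside libavfilter.
    subprocess.run(
        ["ffmpeg", "-i", "noisy.wav",
         "-af", "arnndn=m=model.rnnn",
         "clean.wav"],
        check=True,
    )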


More precisely libavfilter, so it will also soon be in mpv and other dependent players.

This is going to be great for real-time audio translation.


The fact that you think it was suggested _by_ them is, I think, where your mental model is misleading you.

LLMs can be thought of, metaphorically, as a process of decompression: if you can give one a compressed form of your scenario 1, it'll go great. You're actually doing a lot of mental work to arrive at that 'compressed' request: checking technical feasibility, thinking about interactions, hinting at solutions.

If you feed it back its own suggestion, it's not so guaranteed to work.


I don't think that the suggestions in the prompt box are being automatically generated on the fly for everyone. At least I don't see why they would be. Why not just have some engineers come up with 100 prompts, test them to make sure they work, and then hard-code those?


I would hope the suggestions in the prompt box are not being automatically generated from everyone else's inputs. I know what matters most is not the idea but the execution, but on the off chance you do have a really great and somewhat unique idea, you probably wouldn't want it sent out to everyone who likes to take great ideas and implement them while you yourself are working on it.


Why do that when you can be lazy and get ‘AI’ to do the work?


You're misunderstanding me. Underneath the prompt box on the main page are suggestions of types of apps you can build. These are, presumably, chosen by people at the company. I'm not talking about things suggested within the chat.


What technologies we have in 2025!


It seems to be the "conversation" part, not necessarily "communication". One of the early stages in children's language acquisition is teaching the flow of back-and-forth responses and pausing for the other to speak, which seems to be what they're indicating in this article.


No, it doesn't, Turk is what we call Turkish people too.


The "chemical burns" in the title are https://en.wikipedia.org/wiki/Nitrogen_mustard burns, a blistering agent and schedule 1 substance under the chemical weapons convention.


Specifically, nitrogen mustard is under part A of schedule 1, which means it can directly be used as a chemical weapon (schedule 1 means it has very few or no uses outside of chemical warfare, or the manufacture of chemical weapons). Interestingly, nitrogen mustard is also used to treat some cancers.

https://www.google.com/url?q=https://en.m.wikipedia.org/wiki...


Completely not the focus of the article, and you've turned the result of an error rate of 0.8 percent for gender classification of light-skinned men, and a 34.7 percent error rate for the same classifier on dark-skinned women, into some kind of Google image search language game?

I can only quote Joy Buolamwini on this:

“To fail on one in three, in a commercial system, on something that’s been reduced to a binary classification task, you have to ask, would that have been permitted if those failure rates were in a different subgroup?”


The answer would probably be yes if that subgroup wasn't a large percentage of the dataset used for training and testing. Or if that subgroup wasn't a large percentage of the user base.

Come on, if you've worked at any large company using ML you know model performance is literally just the average accuracy/ROC/precision/etc. over your training dataset plus some hold-out sets. Then you track proxy metrics like engagement to see if your model actually works in production. At no point does race come into the equation. Naturally, if your choice of subgroup happens not to be a large proportion of either the dataset or the user base, then the poor performance on that subgroup doesn't show up in your metrics, so you don't care to fix it.
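
As a sketch of that failure mode (entirely synthetic data, with made-up group sizes and accuracy rates), an aggregate accuracy number can look healthy while a small subgroup is served far worse:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000
    groups = np.where(rng.random(n) < 0.05, "minority", "majority")
    y_true = rng.integers(0, 2, size=n)

    # Pretend the model is right 99% of the time on the majority group
    # and only 70% of the time on the minority group.
    correct = np.where(groups == "majority",
                       rng.random(n) < 0.99,
                       rng.random(n) < 0.70)
    y_pred = np.where(correct, y_true, 1 - y_true)

    print("overall accuracy:", (y_pred == y_true).mean())  # ~0.975, looks fine
    for g in ("majority", "minority"):
        m = groups == g
        print(g, "accuracy:", (y_pred[m] == y_true[m]).mean())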


Obviously, but the question is, why were there no Black women in the data set, and what care can be taken to prevent racialized bias when selecting the data set in the future?


I would assume these data sets are not manually selected but imported from some mechanism.

Other issues which are sure to arise are that the AI will have trouble with people who aren't smiling, that the data set probably contains people who look better than average, and that it almost certainly doesn't include people who suffer from injuries or deformities in appropriate proportions.

Perhaps an interesting project is simply the compilation of a vast dataset of “world proportional pictures of people”; it would be quite an undertaking to realize such a dataset.


World proportional is not good enough for this type of task. If we are to rely on AI for things like identifying people in pictures in a trial, we would need equal representation in the data set, so the AI doesn't have any kind of systematic bias. Otherwise, the AI's bias will compound errors in the real world. So you would need as many pictures of Australian Aborigines in the data set as of Han Chinese people if you wanted to be sure there isn't a risk that a random person would be confused for someone of the over- or under-represented groups.
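
As a sketch of what "equal representation" would mean in practice (hypothetical (image_path, group) records, not any particular project's pipeline), you would sample the same number of images per group rather than in world proportion:

    import random
    from collections import defaultdict

    def balance_by_group(records, per_group, seed=0):
        """Take (image_path, group) pairs and keep an equal number per group."""
        by_group = defaultdict(list)
        for path, group in records:
            by_group[group].append((path, group))
        rng = random.Random(seed)
        balanced = []
        for group, items in by_group.items():
            rng.shuffle(items)
            balanced.extend(items[:per_group])  # every group capped at the same count
        return balanced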


Certainly you can ask these questions but these are business process issues, not technical ones. They're unrelated to AI.

My personal take is you won't see any tangible movement on this until black women (or whatever group you choose) comprise a tangible proportion of revenue generating users. Corporations operate for money and nothing else.


Of course they are related to what we call AI, because what we call AI is primarily dependent on the quality of the business processes behind data selection and testing. If there is a strong tendency for business processes to create systematic errors in the results the technology generates (an AI trained in China that sucks at recognising white people wouldn't be a counter-example to this phenomenon; it would be the same issue), then that's an underlying weakness of the technology, and its utility needs to be viewed in the context that it's likely compromised by biases in the business processes of its developers.

Black women or other groups not viewed as the mainstream target for an AI solution aren't going to form a tangible proportion of revenue generating users if the software doesn't function properly for them. And a lot of the use cases for AI analysis don't involve the unrepresented-in-corpus minority group being the consumer anyway, they involve it being used to screen them by a third party who's been sold the tool on the false premise that it's free from human bias.

