Hacker News | webmaven's comments

Furthermore, a bunch of functionality was entirely deleted, and the effect on the quality of discourse has been... profoundly negative.


Yeah, but hey, 80% of the people lost their jobs. No wonder all the SV CEOs are so eager to copy Elon.


The smokers who DON'T get lung cancer still have more heart and lung problems (like emphysema, COPD, etc.), and those get significantly worse late in life.


Hmm. Every time I've been in a similar scenario (breaking a fall, warding off an incoming projectile) it has been my wrists that broke.


Don't forget that the dishwasher also uses less water, and the dishes get sterilized by the steam. These features may or may not be important for a particular user.


But doesn't that mean that ranking the ideas to find the ones most worth testing is a useful problem to solve?


The one model that would actually make a huge difference in pharma velocity is one that takes a target (the protein that causes a disease, or whatever) and a drug molecule (the putative treatment for the disease), and outputs the probability that the drug will be approved by the FDA, how much the approval will cost, and the revenue for the next ten years.

If you could run that on a few thousand targets and a few million molecules in a month, you'd be able to make a compelling argument to the committee that approves molecules to go into development (probability of approval * revenue >> cost of approval).
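A minimal sketch of that screening calculation: rank (target, molecule) candidates by expected net value, i.e. probability of approval times projected revenue, minus the cost of approval. All names and numbers here are made up for illustration; a real pipeline would get them from the hypothetical predictive model described above.

```python
# Toy expected-value screen for drug candidates.
# Every candidate name and figure below is invented for illustration.

candidates = [
    # (label, p_approval, projected_10yr_revenue_usd, cost_of_approval_usd)
    ("mol-A / target-1", 0.50, 4.0e9, 1.2e9),
    ("mol-B / target-1", 0.04, 9.0e9, 1.5e9),
    ("mol-C / target-2", 0.30, 0.8e9, 0.9e9),
]

def expected_value(p_approval, revenue, cost):
    """Expected net value of taking a candidate into development."""
    return p_approval * revenue - cost

# Sort best-first; only candidates with a large positive EV clear the bar.
ranked = sorted(
    candidates,
    key=lambda c: expected_value(c[1], c[2], c[3]),
    reverse=True,
)

for label, p, rev, cost in ranked:
    print(f"{label}: EV = ${expected_value(p, rev, cost) / 1e9:+.2f}B")
```

Even at a few million molecules, this ranking step is trivial; the entire difficulty lives in producing trustworthy inputs for it.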


If you had a crystal ball that could predict the properties of the molecule, perhaps.


If I understand what's been published about this, it isn't just ideation, but also critiquing and ranking the resulting ideas, to select the few most worth pursuing.

Choosing a hypothesis to test is actually a hard problem, and one that a lot of humans do poorly, with significant impact on their subsequent career. From what I have seen as an outsider to academia, many of the people who choose good hypotheses for their dissertation describe it as having been lucky.


I bet all of the researchers involved had a long list of candidates they'd like to test, and a very good idea of what the lowest-hanging fruit are, sometimes for more interesting reasons than 'it was used successfully as an inhibitor for X and hasn't been tried yet in this context' — not that that isn't a perfectly good reason. I don't think ideas are the limiting factor. The reason attention was paid to this particular candidate is that Google put money down.


For a while, anyway.


People are using these models to generate candidate molecules for specific purposes. According to some estimates I've seen, their hit rate is about 50% rather than 25-33%, and the process doesn't take two years.


Um. How are you measuring that productivity?


Any meaningful metric.


Are the training costs (CapEx) and inference costs (OpEx) being lumped together?


Not sure if it matters at this point. There will need to be many more rounds of CapEx to realize the promises that have been put forth about these models.

