Hacker News

As I am now casually interested in the subject, what's a good resource to find out why this disease is hard to detect and cure and various other interesting facts?


One neglected issue in the article and the comments is that Parkinson's is associated with exposure to various neurotoxic substances including organophosphorous pesticides:

https://academic.oup.com/ije/article/42/5/1476/623189

> "In a population-based case-control study, we assessed frequency of household pesticide use for 357 cases and 807 controls... Frequent use of any household pesticide increased the odds of PD by 47% [odds ratio (OR) = 1.47, (95% confidence interval (CI): 1.13, 1.92)]; frequent use of products containing OPs increased the odds of PD more strongly by 71% [OR = 1.71, (95% CI: 1.21, 2.41)] and frequent organothiophosphate use almost doubled the odds of PD."
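The odds ratios quoted above follow from simple arithmetic on a 2x2 exposure table, so they're easy to sanity-check. A minimal sketch (the counts below are made up for illustration, not the study's actual table):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and approximate 95% CI from a 2x2 table.

    a: exposed cases, b: unexposed cases,
    c: exposed controls, d: unexposed controls.
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of ln(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Illustrative counts only:
print(odds_ratio_ci(50, 30, 100, 90))  # OR = (50*90)/(30*100) = 1.5
```

An OR of 1.47 with a CI excluding 1.0, as in the quoted study, is what "increased the odds of PD by 47%" refers to.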

This is further supported by earlier discoveries of parkinsonism brought on by exposure to MPTP, a neurotoxic contaminant of a synthetic opioid analog, as well as by several other pesticides:

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5345642/

> "The identification of MPTP, a relatively simple compound which causes selective degeneration of the substantia nigra after systemic administration, has had a significant impact on the understanding and treatment of Parkinson’s disease (PD) over the last 30 years."

It's rather curious that this foundation neglects to discuss any of this, but it is funded by entities affiliated with pharmaceutical manufacturers, so perhaps it's not something they want to bring attention to? It does fit a general pattern of attempting to blame diseases associated with environmental exposures on genetics, however.


I’d recommend Rory Cellan-Jones on Substack [1]. He was the technology correspondent for the BBC until a few years ago, and is now retired partly because of the onset of Parkinson’s.

As a result he’s devoted much of his retirement to reporting on innovation in treating and managing Parkinson’s.

[1] https://rorycellanjones.substack.com


This might be useless to you, but OMIM is the canonical database of genetic diseases, and has tons of information:

https://www.omim.org/entry/168600

https://www.omim.org/entry/168601


A cursory browse of Wikipedia is my first port of call; that's what ChatGPT bases its answer on as well (as the other commenter pointed out).

Yeah, it's not verified etc., but it's more accessible than reading the sources it cites.


[flagged]


Can we have a line in the HN guidelines saying "telling people you asked about <topic> on ChatGPT doesn't make for interesting discussion. Please refrain from doing so."

Especially in this case, where you don't even say what you found out, only that you asked ChatGPT. Good for you.


It's the new LMGTFY


But posing as helpful instead of sarcastic, yet equally annoying.


Why feed yourself with potential misinformation that you don’t know the true source of and you might never double check?


> Why feed yourself with potential misinformation that you don’t know the true source of and you might never double check?

Because I enjoy reading HN?


When you run out of HN, do you hit up ChatGPT?


I ask ChatGPT to write me a comment thread in the style of Hacker News.


Oh, that was just too easy... :)


As opposed to? Unless you're actually going to read individual papers, vet the authors, have the baseline statistical knowledge to understand the results, etc, you're always going to be susceptible to potential misinformation. This HN topic is a good example of the difficulty of verifying information.

Who's to say that ChatGPT doesn't do a better job of filtering it out?


Yes, I would start with Google Scholar, or at least Wikipedia because it tries to provide sources for each statement that it makes.

> Who’s to say that ChatGPT doesn’t provide a better job of filtering it out?

No one because no one knows what sources ChatGPT is blending together in its sentences.


> I would start with Google Scholar

You actually should vet everything you read through Google Scholar too. There is a pervasive belief in consensus within academic science, but unfortunately individually verifying information is the only thing we know actually works, not political consensus.


[flagged]


I am not trolling. I don’t recommend engaging in a good-faith dialogue with someone by prefacing it with your belief that they are trolling.

If you're basing your trust in ChatGPT on the claim that it is trained on Wikipedia, you might as well read Wikipedia instead because then you also see the sources for each claim, or the fact that certain claims are unsourced. ChatGPT will not let you know if a certain claim is more controversial, nor give you further sources to read if you want to know the background of a claim.


I don't have trust for things, I build trust with other humans if earned, potentially dogs. Beyond that there's no reason to trust with objectivity and reason.

Digression aside, I would impress upon you that I haven't claimed, nor do I believe, what you're claiming I have. I am simply providing you with a slice of the larger original academic paper that gives great, rigorous, peer-reviewed documentation of precisely what GPT is and what it's trained on.

From this paper you'll perhaps gather an instinct that these folks working on this problem are widely aware of the extremely well-known concept of "open source knowledge" and were well considered in their application of pruning data to their needs.

I believe you'll perhaps further gather retrospective insight into the idea that GPT isn't doing anything more than giving you T9-style predictive autotext for the entirety of that dataset; meaning you can try to coax it into saying anything you want, but if the probabilities aren't right, or the predictive potential of a given token "coming up next" isn't there, that's just how it goes.
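The "predictive autotext" idea above can be sketched at toy scale. A real model predicts over learned embeddings rather than raw counts, but a bigram frequency table (made-up corpus, purely illustrative) shows the basic "most likely next token" mechanic:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for a web-scale training set.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which: a crude stand-in for learned
# next-token probabilities.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict(word):
    # Return the most frequent successor of `word` in the corpus.
    return counts[word].most_common(1)[0][0]

print(predict("the"))  # "cat" follows "the" most often in this corpus
```

If a continuation never (or rarely) appears after a given context, the model simply won't produce it, which is the "that's just how it goes" point.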

It's data. It's not quite the tower of Babel, but we'll get there soon enough. Kind of like how all the fancy 3D video game rendering software that looks incredible these days is still just manipulating tuples and vectors with matrix calculations, stuff you could do on paper but why would you do that math to describe a picture when you could just draw it.
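The rendering analogy is concrete: rotating a 2D point really is just a small matrix-vector multiply, arithmetic you could do on paper. A minimal sketch:

```python
import math

def rotate(point, angle_rad):
    """Rotate a 2D point about the origin: a 2x2 matrix-vector multiply."""
    x, y = point
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return (c * x - s * y, s * x + c * y)

print(rotate((1.0, 0.0), math.pi / 2))  # roughly (0.0, 1.0)
```

GPUs do billions of these per frame, but each one is this benign.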

ChatGPT is just drawing the pictures (in this poorly chosen analogy) and giving us the cool graphics. The math is all pretty benign and grounded in the fundamentals of neural networks; even the more fanciful computer-vision and deeply trained NLP systems don't go beyond it.

Info-scientists, I wonder what the "rainbow table" of all language and ideas relevant to a large language model would look like... this is a wonder to me because I lack the necessary knowledge and field expertise.


Because ChatGPT assembles letters into words and sentences without attribution and so is untrustworthy.

