Hacker News | Unlisted6446's comments

I think I understand what you mean.

You're saying that relative to the 'typical individual', autistic brains weigh sensory inputs more heavily than their internal model. And that in schizotypal brains, relative to the 'typical individual', the internal model is weighed more heavily than the sensory input, right?

I don't know much about this area, so I can't comment on the correctness. However, I think we should be cautious in saying 'over-weigh' and 'under-weigh' because I really do think that there may be a real normative undertone when we say 'over-weigh'. I think it needlessly elevates what the typical individual experiences into what we should consider to be the norm and, by implicit extension, the 'correct way' of doing cognition.

I don't say this to try to undermine the challenges faced by people with autism or schizotypy. However, I think it's also fair to say that if we consider what the 'typical' person really is and how the 'typical' person really acts, they frequently do a lot of illogical and --- simply put --- 'crazy' things.


>However, I think we should be cautious in saying 'over-weigh' and 'under-weigh' because I really do think that there may be a real normative undertone when we say 'over-weigh'. I think it needlessly elevates what the typical individual experiences into what we should consider to be the norm and, by implicit extension, the 'correct way' of doing cognition.

No biggie, there's a real normative undertone to the world in general too.

Norm itself means "what the majority does", or the socially (i.e. majority-)accepted yardstick ("norma" in Latin was a literal yardstick-like tool).

It's not about the typical person _always_ doing things in a better way, or the autistic person always doing things differently. It's about the distribution of typical vs atypical behavior. So, it's not very useful to characterize such atypical behavior as better or worse based on absolute moral or technical judgement. Morality changes over time, cultures, and even social groups, to a bigger or smaller degree.

If, however, we use "degree of conformity with majority behaviors/expectations" as the measurement, autistics do perform worse on that.


Norm is descriptive. Normative is prescriptive.

Knowing the difference is important to understanding and empathizing with the person you replied to.


A "norm" can be either descriptive (average) or prescriptive (standard), but "normative" specifically is an adjective referring to things that establish or relate to prescriptive norms (this subtle distinction is often not made in short dictionary definitions but is readily observable in use).

Normative is just the adjective form of "related to norm" - it can still be perfectly descriptive in use. The difference you allude to is more about the practical enforcement of a norm (or lack thereof) than about the part of speech used to refer to it.

I 100% understand and empathize; that doesn't mean I agree.


> Normative is just the adjective form of "related to norm"

You might want to recheck the definition of normative. Yours is a non-standard usage and you will be misunderstood if you continue to use it that way.

Norm is "is"; normative is "ought".

> Normative: pertaining to giving directives or rules

> Synonyms: prescriptive


No. Both definitions are correct. Don't tell people to recheck without first doing so yourself.

https://www.merriam-webster.com/dictionary/normative


Literally false.

It is fine if you disagree with Merriam-Webster, but maybe chill a bit with your attitude.

The center of the normal distribution is “normal” or “normative.” That’s where the term comes from.

It’s like saying we shouldn’t call immigrants “aliens” because that conjures images of space. Where do you think the term comes from?


Isn't "what the typical individual experiences" pretty much the definition of "normal"?

Whether "normal" is also "correct" is a completely separate question. There are plenty of fields where the behavior of the typical person is also widely perceived to be incorrect, like personal finance or exercise routines.


I'm pretty sure, yes? The Cauchy distribution and Student's t have fatter tails than a standard normal distribution.
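
A quick numeric check of that claim, using closed-form tail probabilities (nothing here is from the thread; it's just a stdlib illustration): P(|X| > 3) for a standard Cauchy is orders of magnitude larger than for a standard normal.

```python
# Tail probability P(|X| > 3) for standard normal vs standard Cauchy.
import math

# Normal: P(|X| > 3) = 2 * (1 - Phi(3)), with Phi via the error function.
p_normal = 2 * (1 - 0.5 * (1 + math.erf(3 / math.sqrt(2))))

# Cauchy: CDF is 1/2 + arctan(x)/pi, so P(|X| > 3) = 1 - 2*arctan(3)/pi.
p_cauchy = 1 - 2 * math.atan(3) / math.pi

print(p_normal, p_cauchy)  # ~0.0027 vs ~0.205
```

The Cauchy puts roughly 75x more mass beyond 3 than the normal does, which is the "fatter tails" point in concrete terms.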


I don't understand why they aren't using multiple linear regression to control for how old the SSDs vs. HDDs are, or something like survival analysis. I thought this was a largely solved problem...
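
The "control for age with regression" idea can be sketched in a few lines. This is a toy illustration on synthetic data, not anything from the study being discussed; the variable names and effect sizes are made up.

```python
# Hypothetical sketch: comparing SSD vs HDD failure rates while
# controlling for drive age, via ordinary least squares on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n = 200
age_years = rng.uniform(0, 8, n)               # drive age at observation
is_hdd = rng.integers(0, 2, n).astype(float)   # 1 = HDD, 0 = SSD

# Synthetic ground truth: failures rise with age, and HDDs fail more,
# so a naive comparison that ignores age would be confounded.
failure_rate = 0.5 * age_years + 2.0 * is_hdd + rng.normal(0, 0.1, n)

# Design matrix with intercept, age, and drive-type indicator.
X = np.column_stack([np.ones(n), age_years, is_hdd])
beta, *_ = np.linalg.lstsq(X, failure_rate, rcond=None)

print(beta)  # [intercept, age effect, HDD effect adjusted for age]
```

The coefficient on `is_hdd` is then the drive-type difference holding age fixed, which is the adjustment the comment is asking about (survival analysis would additionally handle censoring of drives that haven't failed yet).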


Well, there are a couple of things going on, right? One is whether it's a bad idea to judge an entire mass of literature because of its epistemology, and the second is whether the OP's claim that psychology has horrendous epistemology is valid.

I'd say that judging an entire mass of literature because of its epistemology makes logical sense. However, in practice, it's not possible to make a judgment as to 'what the epistemology of an entire field is'. What would that even mean? Does OP think that every psychologist has an analogous enough epistemology that anyone can claim what the field's epistemology is? I think not.


We can start by asking, "What are the epistemologies that exist within psychology?"


Well that's a thorny question, now isn't it? I mean, if it was so clear what 'epistemologies' exist in any field, then there would be little need or interest in the study of philosophy and history of science, no? If it was clear, then I think one would simply state what the epistemology of the field is.

That philosophy and history of science are so successful seems to suggest that the way of the scientist is both multifarious and difficult to pin down. I'm skeptical about using either the conscious report of the practitioner of psychology or the labels we may ascribe to their behaviors to triangulate on what their epistemology could be.


Surely the philosophy of science has results, no?

Aren't positivism, anti-positivism, post-positivism, experientialism, and critical realism, among others, rubrics we can measure psychology or psychological thinkers or results against?


Well, my understanding is that we have not yet found any clear scientific method that will be consistently 'the one' to choose at any time. There are a few criteria that generally stand out, but a general method--no. And if there's no general method, then how can there be a general epistemology?

I mean, psychology isn't actually paradigmatic yet, is it? I don't think there actually is a general method throughout the field beyond surveys and null hypothesis significance testing--but those are too broad to be particularly symbolic of psychology imo.

In that sense, I'm not sure what value the list of perspectives you provided has with regard to what scientists actually do in practice and what kind of practice is successful.


Epistemology is beyond the realm of science. All I desire is for folks to be transparent and have some consistency surrounding the epistemological basis for their agreement or disagreement with particular facts.

Instead what I see, often here, is that folks switch epistemic frameworks in order to prove or rationalize their beliefs. RCTs become the threshold for knowledge when we're discussing psychology and sociology, and "trust the experts" or "science" or vaguely "logic" becomes the basis when talking about "hard" sciences. These frameworks typically reinforce the writer's preconceived view of the topic's validity rather than acting as the foundations of belief, and I hope we can agree that that's the opposite of how it is supposed to work.


All things considered, although I'm in favor of Anthropic's suggestions, I'm surprised that they're not recommending more (nominally) advanced statistical methods. I wonder if this is because more advanced methods don't have any benefits or if they don't want to overwhelm the ML community.

For one, they could consider using equivalence testing for comparing models, instead of significance testing. I'd be surprised if their significance tests were not significant given 10000 eval questions and I don't see why they couldn't ask the competing models 10000 eval questions?

My intuition is that multilevel modelling could help with the clustered standard errors, but I'll assume that they know what they're doing.


I wouldn't be surprised if the benefits from doing something more advanced aren't worth it.


Well, I think it's usually more complicated than that. An over-simplification is that there's no free lunch.

If you use a robust sandwich estimator, you're robust against non-normality and the like, but you lower the efficiency of your estimator.
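
For concreteness, here is a minimal hand-rolled version of the HC0 sandwich covariance for OLS on synthetic heteroskedastic data (a sketch of the estimator being named, not anyone's production code):

```python
# Heteroskedasticity-robust "sandwich" (HC0) covariance for OLS.
# cov = (X'X)^-1  X' diag(e^2) X  (X'X)^-1   -- bread, meat, bread.
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
# Heteroskedastic errors: the noise scale grows with |x|.
y = 1.0 + 2.0 * x + rng.normal(size=n) * (0.5 + np.abs(x))

X = np.column_stack([np.ones(n), x])
bread = np.linalg.inv(X.T @ X)              # (X'X)^-1
beta = bread @ X.T @ y                      # OLS point estimates
resid = y - X @ beta
meat = X.T @ (X * resid[:, None] ** 2)      # X' diag(e^2) X
cov_robust = bread @ meat @ bread           # the sandwich
se_robust = np.sqrt(np.diag(cov_robust))
```

The point estimates are the same as plain OLS; only the standard errors change, which is exactly the "robustness at some cost in efficiency" trade-off mentioned above.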

If you use bayes, the con is you have a prior and the pro is you have a prior + a lot of other things.

And strictly speaking, these are benefits on paper based on theory. In practice, of course, the drawback to using a new, advanced technique is that there may be bugs lurking in the software implementation that invalidate your results.

In practice, we generally forget to account for the psychology of the analyst: their biases, what they're willing to double-check, and what they're willing to take for granted. There's also the issue of bayesian methods being somewhat confirmatory, to the point that the psychological experience of doing bayesian statistics makes one so concerned with the data-generating process and with the statistical model that one might forget to 'really check the data'.


> The idea that median researchers are not intelligent enough to understand p-hacking is just absurd: it is not a sophisticated topic. I imagine the median researcher in fact has a robust and cynical understanding of p-hacking because they can do it to their own data. Such a researcher may be cowardly and dishonest, but their intelligence is not the problem. This is the crux of my disagreement with the post: the replication crisis is a social problem, not a cognitive problem.

That doesn't seem true: see Figure 1 of https://www.sciencedirect.com/science/article/pii/S105353570... and the original results associated with the Linda problem.

Statistics is difficult and unintuitive.


Something feels off about this. I mean, it can go both ways, no? Perhaps pressure from attending a HAS might push one towards substance abuse and more. But couldn't pressure from attending an elite institution and being an elite also push one away from activities like substance abuse?

If we assume that the type of school affects lifelong outcomes, then we should also control for something like the parents' latent neuroticism, which would affect both what school their child goes to and (I presume) the lifelong probability of engaging in substance abuse as a coping mechanism.


> Something feels off about this. I mean, it can go both ways, no?

Man, it can go every way imaginable. Childhood is so short, and no one worries like a parent does. Every parent wants to get it 100% right, but that's an impossible task, yet there are very serious consequences for your child when you get it wrong. Further, parents are just regular people who get misinformed or are ignorant of the path forward and have to just do their best at every crucial step in a child's development. Parenthood is an impossible task to get perfect, and you have to give yourself grace, but you also have to learn from your mistakes and improve, because a life is at stake.


What would you like to say about factor analysis? Lay it on me


It’s poorly understood by many who use the DSM, and without understanding how arbitrary and/or subjective it can be, it may be difficult to avoid “overfitting” in the clinical setting.


I'm not sure I'm convinced that this is an issue with the method of factor analysis, and by extension psychometrics, per se. Unless one specifies a causal model and actually tries to do a risky test of their theory, any other method is liable to the same arbitrariness and subjectivity. Psychometrics itself has come a long way, and there have been many advancements to put it on firmer footing. If anything, the issue isn't with the method but with the user of the method. I don't know if I agree that it's an issue of understanding a method, rather than an over-reliance on data (analysis) over theoretical guidance and risky tests of theories.


It’s not a problem with FA, it’s a problem with people using the DSM who don’t understand how the math behind it influences what they are doing. Ditto for IQ. If you use IQ measures professionally, you should grok FA.
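
One concrete sense in which FA output is "arbitrary" (my illustration, not the commenter's): factor loadings are only identified up to rotation. Any orthogonal rotation of the loading matrix reproduces exactly the same implied covariance structure, so the choice of rotation (varimax, oblimin, etc.) is a modeling decision, not something the data settles. The numbers below are made up.

```python
# Rotation indeterminacy of factor loadings: L and L @ R imply the
# same common-factor covariance L L' for any orthogonal R.
import numpy as np

L = np.array([[0.8, 0.1],
              [0.7, 0.2],
              [0.1, 0.9],
              [0.2, 0.8]])          # 4 items, 2 factors (toy loadings)

theta = 0.7                          # an arbitrary rotation angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

L_rot = L @ R                        # visibly different loadings...
same_fit = np.allclose(L @ L.T, L_rot @ L_rot.T)  # ...identical fit
```

Both loading matrices fit the data equally well, yet they support different substantive interpretations of what the "factors" are, which is the arbitrariness a DSM or IQ user ought to grok.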


But what would you have them do instead of FA? I think we're partially agreeing here, but my thinking is that no analytical technique on its own will be a panacea, whether the users really understand it or not. Why would increasing their understanding of the technique affect what they do, when there's not really any other truly different methodological alternative?


This aspect was investigated in the literature and, broadly speaking, the biases still pop up: https://journals.sagepub.com/doi/abs/10.1111/1467-9280.00350

It was also touched on in the original paper that Tversky and Kahneman put out: https://psycnet.apa.org/record/1984-03110-001


So I'm a researcher who almost always uses PDFs... Does HTML have the reproducibility that PDF promises? My feeling is that if I store a PDF, it'll look the same in a decade. But is HTML the same way? It seems to rely on the web browser and many other things... How would one manage things like images and GIFs? Is there a way to keep everything in one HTML file that's easily shareable and feels secure?


The potential to freeze an HTML page in time with minimal changes at render time is already there. [0] Such an ability can even be baked directly into the rendered HTML page so the viewer would be able to download a copy of the page as it is seen at a given time. Other archiving facilities, such as archive.org, take static snapshots of accessible pages if allowed by the publisher of the page and requested by anyone who wants to make that snapshot.

My point is that it is possible to achieve in principle and in practice, though it might not be practiced as often as one would like to see.

-------

[0] See SingleFile by gildas at https://addons.mozilla.org/en-US/firefox/addon/single-file/: “Save an entire web page—including images and styling—as a single HTML file.”
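
The core trick behind such single-file archiving can be sketched briefly: rewrite external asset references as base64 `data:` URIs so the page carries its own images. Real tools like SingleFile handle far more (CSS, fonts, frames); the filenames and bytes below are hypothetical.

```python
# Toy single-file archiver: inline known image assets as data: URIs.
import base64
import re

def inline_images(html: str, assets: dict) -> str:
    """Replace src="name" with an embedded data: URI for known assets."""
    def repl(match):
        name = match.group(1)
        if name in assets:
            b64 = base64.b64encode(assets[name]).decode("ascii")
            return 'src="data:image/png;base64,' + b64 + '"'
        return match.group(0)  # leave unknown references untouched
    return re.sub(r'src="([^"]+)"', repl, html)

page = '<img src="figure1.png">'
archived = inline_images(page, {"figure1.png": b"\x89PNG fake bytes"})
```

After this pass the HTML file no longer depends on `figure1.png` existing on disk or on a server, which is the property the parent comment is describing.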


I like SingleFile, but it's not perfect. It usually works just fine, but will occasionally drop the ball depending on the type of JavaScript on the page.

For example, I once backed up a page using it, and while it got all the content, it did not grab the JavaScript necessary for the images to display correctly.


> Does HTML have the reproducibility that PDF promises? My feeling is that if I store a PDF, it'll look the same in a decade.

Feelings and promises are one thing; reality is another. PDF doesn't even look "the same" today. I have serious questions about how often folks who think PDF is reliably consistent from system to system step outside their bubble, and just how diverse the setups they're testing on really are.

> is HTML the same way?

Well, the status quo for copy-and-paste in HTML isn't dogshit; it's comparatively trivial to find and use tools that can thoroughly and exhaustively search your collection (or even write your own); and HTML is a dead-simple plain-text format that, if worst comes to worst, you can read with your eyes (unlike needing to run a bunch of inscrutable code from a PostScript subset through an interpreter before you can do anything with it). So, no, I wouldn't call them the same.


Machines and humans can both easily use HTML/XML. Extracting information from PDFs is so much harder that there are deep learning products dedicated to doing it. They still make mistakes, too.

I’d much rather have something akin to the CHM files where everything I need is in one file, easy to analyze, and has good readers.
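
The "easy for machines" point is easy to demonstrate: the Python standard library alone can pull readable text out of HTML, no ML extraction pipeline required. The document string here is a made-up example.

```python
# Extract visible text from HTML using only the standard library.
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        # Collect non-whitespace text nodes between tags.
        if data.strip():
            self.chunks.append(data.strip())

doc = "<html><body><h1>Results</h1><p>Accuracy was 97%.</p></body></html>"
parser = TextExtractor()
parser.feed(doc)
text = " ".join(parser.chunks)  # "Results Accuracy was 97%."
```

Doing the equivalent for a PDF means interpreting a page-description program and reassembling glyph positions into words, which is why that side of the comparison needs dedicated tooling.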


I explored tools to export/interchange PDF to HTML in the KnowledgeGarden app, but the results were not optimal, suffering from non-standard layout and poor typesetting of equations. Publishers of scholarly articles generate web pages of papers, but they're not replicas of PDF files.

Re. self-contained HTML (and slightly off-topic), look at TiddlyWiki, which contains data/code/layout all in one interactive, local or hosted HTML. Extensibility, plugins, and community of contributors are some key highlights, among others.

[1] https://www.tiddlywiki.com


I'd like to see PDFs move to Computational Notebooks. One can dream.


That'd be so nice. Imagine executing the code for an AI paper and seeing the beautiful visualizations as you read it.

