It doesn’t represent the situation accurately. There’s a whole thread where humans debate the performance optimization and come to the conclusion that it’s a wash but a good project for an amateur human to look into.
> It’s a tricky question because she and Kim Jong-un are not dictators by title; it’s the shorthand we use to describe them.
This can't be the standard for whether we call someone a dictator. Similarly Stalin held no government title through the 1930s. Would you say it's hard to say whether he was a dictator because he was officially just a party secretary?
My favorite version of this was Deng Xiaoping who by the end of his rule had only one title - Honorary Chairman of the Chinese Contract Bridge Association.
Warning: this is on my mental list of "things too good to check", so it may or may not be true.
The way you phrased it reads very much like you started an argument about definitions.
I think requiring the title of dictator is not how the term is being used these days, on our side of the late Roman republic. It's more of a duck-typing situation now.
I think it's safe to assume the original comment meant it that way.
I wonder about the accuracy of critical text methods like the ones that have been used to putatively reconstruct the Q document and to argue about authorship and dates. Have these methods ever been validated against a ground truth that the arguers didn't know about beforehand? Like, have we ever philologically reconstructed a text from other texts, and then found exactly that text buried somewhere? Or even something close to it?
In the case of Q, you could argue that the Gospel of Thomas validates that there were texts of that kind (sayings gospels) floating around, but Thomas doesn't match the content of Q.
Outside biblical scholarship, another area where people have tried to reconstruct what is going on in ancient texts is the Chinese classics, especially the really cryptic ones like the Yijing. But whenever some actual ancient manuscript gets dug out of an old grave or a bog, it seems like it just brings up more questions and complications, instead of validating anyone's theories.
Compare to the philology methods that people use to reconstruct ancient languages. These have been validated pretty well. For example in the 19th century linguists were able to deduce that the Proto-Indo-European language must have had guttural consonants not found in any extant language, and then later when the Hittite language was decoded, the guttural consonants were right there. The theory was validated on held-out data. Has this ever happened for critical methods for discerning authorship and sources and missing texts?
The issue is that your standard borders on an impossible strawman: when we actually do textual (and/or biblical) analysis and historical research, we never have a "this is THE version of Q", nor a "this is the FINAL/REAL version of the book" or "this is the final/absolute version of the hypothesis".
The idea that there is "one authoritative version, and it was the version that was copied into this one authoritative derivative, and we found the derivative, so now we have to find that exact original or else it's all bunkum" simply isn't how 2000-year-old books or texts were written, copied or used. You will never find it because that's not what happened.
But we can lay out the texts side by side, arrange the narratives and see how they differ chronologically from book to book, notice where particular linguistic quirks occur, notice where words are copied verbatim in a particular order, and where embellishments, insertions or changes are made. Then, just as with several witness accounts, we build up a probabilistic version of the events that happened.
So there isn't "one Q", just as there isn't one authoritative version of Mark, Luke, John, Matthew, etc. But there are patterns in the texts which strongly suggest some kind of shared knowledge and a common source among the authors of the later gospels. We hypothesised that this common source was a "sayings gospel", because the common ground repeated across the later books was primarily sayings, while the other material seemed to come from Mark. At the time, this hypothesis met the objection that people didn't accept such a book or source could actually exist, because we'd never seen one before.
Then, after this hypothesis was formed and that objection raised, the discovery of the Nag Hammadi library and the Gospel of Thomas gave us an actual historical sayings gospel: confirmation that this type of literature did exist and was written in early Christian communities. It was not Q, but it confirmed the hypothesised genre and the existence of this kind of early Christian literature.
If you're waiting for the discovery of two literal pieces of text, where the first is deduced letter-for-letter from multiple other historical books and then a carbon copy of it turns up, you're setting an impossible standard. Even a literal transcription probably wouldn't meet it.
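The side-by-side, verbatim-agreement comparison described above can be sketched in code. This is just an illustrative toy (not any actual critical-method tool), and the two sample "texts" here are made up: it finds n-word sequences that appear word-for-word in both texts, the kind of verbatim agreement that synoptic comparison looks for.

```python
def shared_ngrams(a, b, n=4):
    """Return the set of n-word sequences that appear verbatim in both texts."""
    def ngrams(text):
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}
    return ngrams(a) & ngrams(b)

# Invented sample texts for illustration only.
text_a = "blessed are the poor for theirs is the kingdom"
text_b = "he said blessed are the poor for yours is the kingdom"

print(shared_ngrams(text_a, text_b))
# The two texts share the run "blessed are the poor for" verbatim,
# then diverge at "theirs"/"yours".
```

Real synoptic analysis is of course far more involved (it works on Greek texts, weighs word order, inflection and context), but the underlying move is the same: verbatim runs shared between documents are evidence of copying or a common source.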
Well, I usually reconstruct what TFA is about in Hacker News from what readers are saying about it and I can report that it works almost as well for a lot less effort!
I jest, but on a serious note, a leading theological text would probably have the same ambiguity as to its meaning even if everyone had access to the original text. Knowing what everyone thinks something means isn't better than knowing what it means... but scientifically, they're indistinguishable!
"For example in the 19th century linguists were able to deduce that the Proto-Indo-European language must have had guttural consonants not found in any extant language, and then later when the Hittite language was decoded, the guttural consonants were right there. The theory was validated on held-out data."
I find reconstruction fascinating, but it will never be completely accurate, because it just can't be. Every language has quirks, and I believe PIE probably had one or two complex features that never survived into the age of writing. Most of its vocabulary is lost, although I hold out more hope for phonology.
I think that diff algorithms have more in common with traditional, “lower” textual criticism than with the sort of source criticism canjobear is pondering.
"Who art Henry?" was never grammatical English. "Art" was the second person singular present form of "to be" and it was already archaic by the 17th century. "Who is Henry?" would be fine.
In some languages you can put a second person conjugation next to a noun that might otherwise use third person verbs, and it serves as implying that you are that noun. I'm not sure if older forms of English had that construct. I think many Indo-European languages do.
The part of the Lord's Prayer that says "our father who art in heaven" is kinda like this - father is linked to a second person conjugation. You could remove some words and make it into "father art in heaven", which you claim is ungrammatical. I'm skeptical that it was.
“who art in heaven” is a grammatical relative clause because the subject of the verb is the relative pronoun “who” which is second person in that context. You can still get this kind of thing in modern English, for example “I, who am a farmer, will be happy” is grammatical because the relative pronoun “who” is first person there. That doesn’t mean it would be grammatical to say “*A farmer am happy” and it wouldn’t have worked with art either.
Conceivably it’s grammatical if Henry is vocative and the pronoun is dropped colloquially, like “Who art [thou], O Henry?” but it’s a stretch.
I think the further back you go in Indo-European grammar, the more common the thing you are describing becomes. For me it's less of a question of if English did this, and more like how far back you need to go.
Today, even ignoring the dated conjugation, "who art in heaven" or "who are in heaven" does not make sense. We would switch it into the third person.