I don't know if we're even able to continue a thread this old, but this is fun so I'll try to respond.
You're correct to point out that defending my viewpoint as merely internally consistent puts me in a position analogous to theists, and I volunteered as much elsewhere in this thread. However, the situation isn't really the same since theists tend to make wildly internally inconsistent claims, and claims that have been directly falsified. When theists reduce their ideas to a core that is internally consistent and has not been falsified they tend to end up either with something that requires surrendering any attempt at establishing the truth of anything ourselves and letting someone else merely tell us what is and is not true (I have very little time for such views), or with something that doesn't look like religion as typically practised at all (and which I have a certain amount of sympathy for).
As far as our debate is concerned, I think we've agreed that it is about being persuaded by evidence rather than considering one view to have been proven or disproven in a mathematical sense. You could consider it mere semantics, but you used the word "unsound" and that word has a particular meaning to me. It was worth establishing that you weren't using it that way.
When it comes to the evidence, as I said I interpret and weight it differently than you. Merely asserting that the evidence is overwhelmingly against me is not an effective form of debate, especially when it includes calling the other position "stupid" (as has happened twice now in this thread) and especially not when the phrase "dumb fuck" is employed. I know I come across as comically formal when writing about this stuff, but I'm trying to be precise and to honestly acknowledge which parts of my world view I feel I have the right to assert firmly and which parts are mere beliefs-on-the-basis-of-evidence-I-personally-find-persuasive. When I do that, it just tends to end up sounding formal. I don't often see the same degree of honesty among those I debate this with here, but that is likely to be a near-universal feature of HN rather than a failing of just the strong AI proponents here. At any rate "stupid dumb fucks" comes across as argument-by-ridicule to me. I don't think I've done anything to deserve it and it's certainly not likely to change my mind about anything.
You've raised one concrete point about the evidence, which I'll respond to: you've said that the ability to contribute to frontier research maths is possessed only by a tiny number of humans and that a "bar" of "human level" intelligence set there would exclude everyone else.
I don't consider research mathematicians to possess qualitatively different abilities to the rest of the population. They think in human ways, with human minds. I think the abilities that are special to human mathematicians relative to machine mathematicians are (qualitatively) the same abilities that are special to human lawyers, social workers or doctors relative to machine ones. What's special about the case of frontier maths, I claim, is that we can pin it down. We have an unambiguous way of determining whether the goal I decided to look for (decades ago) has actually been achieved. An important-new-theorem-machine would revolutionise maths overnight, and if and when one is produced (and it's a computer) I will have no choice but to change my entire world view.
For other human tasks, it's not so easy. Either the task can't be boiled down to text generation at all or we have no unambiguous way to set a criterion for what "human-like insight" putatively adds. Maths research is at a sweet spot: it can be viewed as pure text generation and the sort of insight I'm looking for is objectively verifiable there. The need for it to be research maths is not because I only consider research mathematicians to be intelligent, but because a ground-breaking new theorem (preferably a stream of them, each building on the last) is the only example I can think of where human-like insight would be absolutely required, and where the test can be done right now (and it is, and LLMs have failed it so far).
I dispute your "level" framing, BTW. I often see people with your viewpoint assuming that the road to recreating human intelligence will be incremental, and that there's some threshold at which success can be claimed. When debating with someone who sees the world as I do, assuming that model is begging the question. I see something qualitative that separates the mechanism of human minds from all computers, not a level of "something" beyond which I think things are worthy of being called intelligent. My research maths "goal" isn't an attempt to delineate a feat that would impress me in some way, while all lesser feats leave me cold. (I am already hugely impressed by LLMs.) My "goal" is rather an attempt to identify a practically-achievable piece of evidence that would be sufficient for me to change my world view. And that, if it ever happens, will be a massive personal upheaval, so strong evidence is needed - certainly stronger than "HN commenter thinks I'm a dumb fuck".