If you think my confidence is misplaced, feel free to offer a counterpoint. I feel as you do about people who would say the opposite of what I am saying, though: I'd think them naive, gullible, and credulous rather than criminal.
Stochastic AI, by definition, does not impose discrete necessary constraints on inference. It does not, under very weak assumptions, provide counterfactual simulation of alternatives. And it does not provide a mechanism of self-motivation under environmental coordination.
Why? Since [Necessarily]A|B is not reducible to P(A|B, Model) -- it is a claim quantified over every model, requiring P(not-A | B, M) = 0 \forall M. Since P(A|B) and P(B|A) are symmetric (each recoverable from the joint via Bayes' rule) in cases where A -causes-> B is not. Since Action = argmax_A P(A -> B | Goal, Environment) is a selection rule, not the distribution P(A, B, Goal, Environment) or any conditioning of it. Since Environment is not Environment(t), and there is no formulation of Goal(t, t'), Environment(t, t'), (A -> B)(t, t') I am aware of which maintains the relevant constraints dynamically without prior specification (one aspect of the Frame Problem).
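To make the second point concrete, here is a minimal Python sketch. The joint table and the variable names (rain, wet ground) are hypothetical, chosen only for illustration: both conditionals are computable from the same joint, and nothing in the table distinguishes A -causes-> B from B -causes-> A.

  # Hypothetical joint over A = "rain", B = "wet ground".
  # The causal story is A -> B, but the table alone cannot encode that.
  joint = {
      (True, True): 0.27,   # rain, wet
      (True, False): 0.03,  # rain, dry
      (False, True): 0.14,  # no rain, wet (say, a sprinkler)
      (False, False): 0.56, # no rain, dry
  }

  def marginal(joint, index, value):
      # P(X = value), where X is the variable at the given tuple index.
      return sum(p for outcome, p in joint.items() if outcome[index] == value)

  def conditional(joint, t_idx, t_val, g_idx, g_val):
      # P(target = t_val | given = g_val), computed purely from the joint table.
      num = sum(p for outcome, p in joint.items()
                if outcome[t_idx] == t_val and outcome[g_idx] == g_val)
      return num / marginal(joint, g_idx, g_val)

  p_b_given_a = conditional(joint, 1, True, 0, True)  # P(wet | rain)  = 0.9
  p_a_given_b = conditional(joint, 0, True, 1, True)  # P(rain | wet) ~= 0.66
  print(p_b_given_a, p_a_given_b)

  # The exact same table is consistent with B causing A. Conditioning is
  # symmetric bookkeeping; the asymmetry of intervention (what changes if
  # you *set* A versus *set* B) is extra structure beyond P(A|B).

That last comment is the point: distinguishing the two causal stories takes interventional machinery layered on top of the distribution, not more conditioning of it.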
Now if you have a technology in mind which is more than P(A|B), I'd be interested in hearing it. But if you just want to insist that your P(A|B) model can do all of the above, then I'd be inclined to believe you are, if not criminal, then considerably credulous.