I'm not sure what "directly manipulating data" means. And - with respect to Bret Victor - I suspect no one does.
Before you can manipulate anything you have to define a set of affordances. If you have no affordances you have... nothing.
A lot of programming is really about manually creating affordances that can be applied to specific domains. The traditional medium for this is text, with dataflow diagrams a distant second.
People often forget that this is still symbolic programming. You could replace all the keywords in a language with emojis, different photos of Seattle, or hex colour codes, but we use text because it's mnemonic in a way that more abstract representations aren't.
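As a toy illustration (everything here is hypothetical), here's the same conditional rendered with two interchangeable symbol tables; the structure and semantics never change, only the surface spelling does:

```python
# Two symbol tables for the same abstract keywords.
KEYWORDS_TEXT  = {"if": "if", "then": "then", "else": "else"}
KEYWORDS_EMOJI = {"if": "🤔", "then": "👉", "else": "🙃"}

def render(cond, a, b, kw):
    # Render one conditional expression using the given symbol table.
    return f"{kw['if']} {cond} {kw['then']} {a} {kw['else']} {b}"

print(render("x > 0", "pos", "neg", KEYWORDS_TEXT))   # if x > 0 then pos else neg
print(render("x > 0", "pos", "neg", KEYWORDS_EMOJI))  # 🤔 x > 0 👉 pos 🙃 neg
```

The text version is easier to remember and to say out loud, which is the whole argument for it.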
Dataflow diagrams are good as far as they go, but it doesn't take much for a visual representation to become too complex to understand. With text you can at least take it in small chunks, and abstraction/encapsulation make it relatively easy to move between different chunk levels.
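A hedged sketch of what I mean by chunk levels (all names made up): the top-level function reads as three named steps, and each step encapsulates its own detail until you choose to descend into it.

```python
def load(path):
    # Low-level chunk: raw file handling, hidden behind one name.
    with open(path) as f:
        return f.read().splitlines()

def clean(rows):
    # Mid-level chunk: drop blanks and comment lines.
    return [r.strip() for r in rows if r.strip() and not r.startswith("#")]

def summarise(rows):
    return f"{len(rows)} rows"

def report(path):
    # Top-level chunk: readable without any of the detail above.
    return summarise(clean(load(path)))
```

Dense diagrams make that zooming between levels much harder: past a certain node count you see everything at once or nothing.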
At the meta level you can imagine a magical affordance factory that somehow pre-digests data, intuits a comprehensible set of affordances, and then presents them to you. But how would that work without a model of what kinds of affordances humans find comprehensible and useful?
ML systems are the opposite of this. They pre-digest data but provide only very crude access through text prompts: it's like wearing power-gloves that can pick up cars but not small objects. You can't tell a model that a specific detail is wrong without retraining it. The affordances for that just aren't there.
And of course many domains require specialised expert skills. So a workable solution would require a form of AGI clever enough to understand the specific skills of an individual user, so that affordances could be tailored to their level of expertise.
I can't see generalised intuitive domain manipulation being possible until those problems are solved.
If anyone is looking for a book recommendation that dives deeper into affordances and related ideas, "The Design of Everyday Things" by Don Norman is a great book, helpful for everyone building things, hardware or software.
That's where I first came across the concept, I think.
> I'm not sure what "directly manipulating data" means. And - with respect to Bret Victor - I suspect no one does.
Well, the best approximation we have is spreadsheets, actually. That's why they're super popular.
Put the data in front, marginalize the code. Code is either "hidden" in cells, where you get a live[1] preview of the data, or in modules others have built but which you can modify[1] if you want to.
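Here's a minimal sketch of that model in Python (all names hypothetical, nothing beyond the standard library): each cell holds either a value or a formula over other cells, and every read recomputes, so the preview is always live.

```python
# Each cell is a plain value or a formula (a callable over other cells).
SHEET = {
    "A1": 10,
    "A2": 32,
    "A3": lambda get: get("A1") + get("A2"),  # the "hidden" code lives in the cell
}

def get(name):
    # Reading a cell evaluates its formula on demand, so the view is always live.
    cell = SHEET[name]
    return cell(get) if callable(cell) else cell

print(get("A3"))   # 42 -- the data is what you see
SHEET["A1"] = 100  # edit a value in place...
print(get("A3"))   # 132 -- ...and dependent cells reflect it on the next read
```

A real spreadsheet adds dependency tracking so only stale cells recompute, but the user-facing contract is the same: edit the data, see the results.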
Now, how do we take this approach to the next level, that's a problem on the scale of figuring out human genetic engineering or fixing climate change :-)