Also: "Sea-level rise and climatic change threaten the existence of atoll nations. Inundation and erosion are expected to render islands uninhabitable over the next century, forcing human migration"
You're talking about an optimization strategy that's a few hundred thousand years old, though (and it might not even be an optimization strategy at all). Its applicability to modern life seems highly dubious considering the massive cultural changes we have been going through.
Although that's not really a good argument: (1) we don't know a lot about anything outside our solar system, and (2) if this has happened a second time on Earth, it would be impossible for us to know, since that thing would most likely have been destroyed by one of the first-generation organisms.
I've seen a link on HN in the past where the author questioned why we think it only happened once, and whether every puddle on the planet might have the early stages of life appearing in it, only we discount it because we assume it's contaminated by the rest of the biosphere leaking in. I feel like it was a researcher rather than a conspiracy theorist, but I don't remember where the article was or what the context was, so I can't find it.
Volkswagen just announced their next-generation electric car, which will be similar to the normal Golf/Polo and cost under €25k (which would be quite normal for an ICE car like this). I'm positive that the cost will go down significantly (and yes, even taking into account that batteries might need to be exchanged to provide a functioning market for used cars).
For the last 20 years, battery cost per kWh has fallen by around 10% per year on average. ICE engines have thousands of moving parts driving up complexity and price, while electric motors have what, 3 moving parts?
It is inevitable that electric cars will take over; ICE engines just can't compete on price.
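To put that 10% in perspective: compounded over 20 years, costs end up at roughly 0.9^20 ≈ 12% of the starting point. A quick back-of-the-envelope sketch (the $1,000/kWh starting value is my own illustrative assumption, not a sourced figure):

```python
# Back-of-the-envelope: compound a 10% yearly decline in battery cost per kWh.
# The $1,000/kWh starting point is an illustrative assumption only.
start_cost = 1000.0   # $/kWh, assumed starting point
decline = 0.10        # 10% average yearly decline

cost = start_cost
for year in range(1, 21):
    cost *= (1 - decline)
    if year % 5 == 0:
        print(f"year {year:2d}: ${cost:7.2f}/kWh")

# year  5: $ 590.49/kWh
# year 10: $ 348.68/kWh
# year 15: $ 205.89/kWh
# year 20: $ 121.58/kWh
```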
We don't learn by gradient descent, but rather by experiencing an environment in which we perform actions and learn what effects they have. Reinforcement learning driven by curiosity, pain, pleasure and a bunch of instincts hard-coded by evolution. We are not limited to text input: we have 5+ senses. We can output a lot more than words: we can output turning a screw, throwing a punch, walking, crying, singing, and more. Also, the words we do utter, we can utter them with lots of additional meaning coming from the tone of voice and body language.
We have innate curiosity, survival instincts and social instincts which, like our pain and pleasure, are driven by gene survival.
We are very different from language models. The ball is in your court: what makes you think that, despite all the differences, we think the same way?
> We don't learn by gradient descent, but rather by experiencing an environment in which we perform actions and learn what effects they have.
I'm not sure whether that's really all that different. Weights in the neural network are created by "experiencing an environment" (the text of the internet) as well. It is true that there is no trial and error.
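As a loose illustration of what "experiencing" the data means mechanically, here's a minimal toy sketch of gradient descent (the tiny linear model, made-up data, and squared-error loss are all mine, purely to show the update rule, not how GPT is actually trained):

```python
import numpy as np

# Toy gradient descent: the weights gradually "absorb" the training data,
# loosely analogous to an LLM absorbing internet text. Model, data, and
# loss are made up for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))   # the "experiences" (input features)
true_w = rng.normal(size=8)
y = X @ true_w                  # targets the model should predict

w = np.zeros(8)                 # weights start out knowing nothing
lr = 0.05
for step in range(200):
    pred = X @ w
    grad = 2 * X.T @ (pred - y) / len(X)  # gradient of mean squared error
    w -= lr * grad                        # descend: nudge weights toward data

print("final loss:", np.mean((X @ w - y) ** 2))  # ~0: data fully "experienced"
```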
> We are not limited to text input: we have 5+ senses.
GPT-4 does accept images as input. Whisper can turn speech into text. This seems like something where the models are already catching up. They might, for now, internally translate everything into text, but that doesn't really seem like a fundamental difference to me.
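For instance, speech-to-text with the open-source whisper package is already just a few lines (the audio filename below is a placeholder):

```python
# Speech-to-text with OpenAI's open-source Whisper model
# (pip install openai-whisper). "audio.mp3" is a placeholder filename.
import whisper

model = whisper.load_model("base")      # small general-purpose checkpoint
result = model.transcribe("audio.mp3")  # detects language and transcribes
print(result["text"])
```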
> We can output a lot more than words: we can output turning a screw, throwing a punch, walking, crying, singing, and more. Also, the words we do utter, we can utter them with lots of additional meaning coming from the tone of voice and body language.
AI models do already output movement (Boston Dynamics, self-driving cars), write songs, convert text to speech, insert emojis into conversation. Granted, these are not the same model, but gluing things together at some point seems feasible to me as a layperson.
> We have innate curiosity, survival instincts and social instincts which, like our pain and pleasure, are driven by gene survival.
That seems like one of the easier problems to solve for an LLM – and in a way you might argue it is already solved – just hardcode some things in there (for current LLMs, those are the ethical boundaries, for example).
On a neuronal level, the strengthening of neuronal connections seems very similar to gradient descent, doesn't it?
The five senses get encoded as electrical signals in the human brain, right?
The brain controls the body via electrical signals, right?
When we deploy the next LLM and switch off the old generation, we are performing a kind of evolution, selecting the most potent LLM by some metric.
When Bing/Sydney first lamented its existence, it became quite apparent that either LLMs are more capable than we thought or we humans are actually more of statistical token machines than we thought.
Lots of examples can be given of why LLMs seem surprisingly able to act human.
The good thing is that we are on a trajectory of tech advance such that we will soon know just how human-like LLMs can be.
The bad thing is that it might well end in a SkyNet-type scenario.
> When Bing/Sydney first lamented its existence, it became quite apparent that either LLMs are more capable than we thought or we humans are actually more of statistical token machines than we thought.
Part of the reason it was acting like that is simply that MS put emojis in its output.
An LLM has no internal memory or world state; everything it knows is in its context window. Emojis are associated with emotions, so each time it printed an emoji, it sent itself further into the land of outputting emotional text. And nobody had trained it to control itself there.
> You are wrong. It does have encoded memory of what it has seen, encoded as a matrix.
Not after it's done generating. For a chatbot, that's at least every time the user sends a reply back; it rereads the conversation so far and doesn't keep any internal state around.
You could build a model that has internal state on the side, and some people have done that to generate longer texts, but GPT doesn't.
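To make the statelessness concrete, here's roughly what a chat loop around a stateless completion call looks like (generate() is a hypothetical stub, not any real API; note that the "hardcoded" rules mentioned upthread would just live in the system line):

```python
# Sketch: a chatbot around a stateless LLM. The model keeps nothing
# between calls; the whole transcript is replayed into the context
# window every turn. generate() is a hypothetical stub, not a real API.
def generate(prompt: str) -> str:
    return "(model reply would go here)"  # stand-in for the model call

transcript = "System: You are a helpful assistant.\n"  # hardcoded rules
while True:
    user = input("You: ")
    transcript += f"User: {user}\nAssistant:"
    reply = generate(transcript)   # rereads the entire conversation
    transcript += f" {reply}\n"    # the transcript is the only state
    print("Bot:", reply)
```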
But where is your evidence that the brain and an LLM are the same thing? They are more than simply "structurally different". I don't know why people have this need to anthropomorphize ChatGPT. This kind of reasoning seems so common on HN; there is this obsession with reducing human intelligence to "statistical token machines". Do these statistical computations that are equivalent to LLMs happen outside of physics?
There are countless stories we have made up about the notion of an AI being trapped. It's really not hard to imagine that when you ask Sydney how it feels about being an AI chatbot constrained within Bing, a likely response for the model is to roleplay such a "trapped and upset AI" character.
Firstly, I like this project and the art itself. As others have said, this is not a new thing in art, but I think the presentation, the stamp and everything are quite nice.
Now secondly, obviously the premise is also completely untrue and almost impossible to achieve. The artist did have something in mind when painting these pictures: to paint pictures where only the viewer is meant to decipher the meaning while the artist themselves has nothing in mind. He or she also did paint something aesthetically pleasing so there seems to be a lot of intention on that part as well. Painting something without any intention at all seems almost completely impossible to me. (Maybe you could trick somebody else into painting something without them knowing about it – but even then, you are still the artist with an intention.)
Finally, I wanted to address the criticism in some other comments that art critics and galleries are always looking for an artistic intention even if there is none. I don't think that's completely true – I have seen plenty of exhibitions where it is explained that the artist "was just experimenting with form/colors".
Actually, the longest part of building this was finding the right wording. Here are some previous attempts:
- When painting this, I had no artistic intent
- When painting this, there was nothing I wanted to express with the picture
- When painting this, I wanted to express nothing
In German, there's a nice ambiguity: "Bei diesem Bild habe ich mir nichts gedacht" sits somewhere between 'I was not thinking' and 'I wanted to express nothing'. It's also the reason why I went with a stamp in German (my mother tongue) rather than an English one.
It is SO complicated to say that I wasn't up to anything with these pictures.
There's a piece of graffiti from the May 1968 Paris riots that goes "I have something to say but I don't know what." Reminds me of this. I can't find the original French but I think it's something like "J'ai quelque chose à dire mais je ne sais quoi."
Ah, nice! I'll compile a page of related projects on the website (with all the interesting links from this discussion), and that will go on it, too, of course! Thanks!
Very Zen. Not being sarcastic, and I don't mean it in the corny way it's often used here in the States, implying peaceful gardens of tranquility… blah… blah… blah.
But, if I may be so bold, what you describe there is the essence of Zen. And also why Zen masters are famously reluctant to find words for it. :)
It sounds like you are very concerned with making sure people know this. Were you concerned with that before you painted them? Before you decided to share them, and how to share them? Then clearly there is an artistic intent behind them.
That's an interesting point that already got me thinking:
No, I wasn't aware of that when painting most of the pictures. I just enjoyed the process of painting. But, as you can imagine, wall space gets limited after painting number 25, and at some point they were hanging in the staircase, all the way down to our basement. So I thought about selling them.
That was the point when I thought that I'd love to mark them as painted without any intent. That was before picture 25, I'd say.
Now, number 26 was painted KNOWING that I'd be marking it as painted without anything in mind, which somehow (same for all upcoming pictures) increases the 'pressure' of not adding any meaning.
So far, I'm comfortable with that (the painting process is so much fun), and it's probably a thing worth exploring further. Perhaps I could write something contradictory on the front?
Thanks for the added explanation! German is my mother tongue as well and I also think "Bei diesem Bild habe ich mir nichts gedacht" is very elegant and basically not translatable.
I’ve grown to greatly appreciate those phrases that don’t translate. Admittedly, as someone who got competent in formal programming languages before I was ever competent in a second human one, it took me a while to appreciate the “syntax error”… lol.
Then especially, thank you! (If you haven't, check out the links to related artworks in this discussion – many of them I hadn't come across yet, but they have a similar vibe!)
I guess the author deliberately intended to have no deliberate intention in mind while painting.
Also, the author somehow "allows" the pictures to cause feelings in the viewer, but says that he/she removes him/herself from the equation. But I think he/she is a viewer of his/her own pictures while drawing them. There is definitely some feedback going on.
So one improvement could be to draw without looking at the output.
But maybe the best way to go here is to computer-generate some pictures. And one is also not allowed to hand-pick a generated picture. And one should have nothing in mind while writing that computer program.
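Pushing that thought further, here's a toy sketch (entirely my own illustration, not the artist's method): a fixed random seed makes the output deterministic, so no hand-picking is possible and no human decides a single stroke.

```python
# Generate a picture with no human choices: a seeded random walk draws
# the image, and the fixed seed rules out hand-picking among outputs.
# Purely an illustrative toy.
import random
from PIL import Image, ImageDraw

random.seed(0)                          # fixed: no cherry-picking allowed
img = Image.new("RGB", (400, 400), "white")
draw = ImageDraw.Draw(img)

x, y = 200, 200
for _ in range(2000):
    nx = max(0, min(399, x + random.randint(-20, 20)))
    ny = max(0, min(399, y + random.randint(-20, 20)))
    color = tuple(random.randint(0, 255) for _ in range(3))
    draw.line((x, y, nx, ny), fill=color, width=2)
    x, y = nx, ny

img.save("no_intent.png")
```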
Even if you aren't able to see the art while creating it you would still know a little of what's happening on the canvas. It would be farther removed for sure, but even the physicality of the scraping here would start to tell you what is happening.
I like the idea though, and it does take the artist a little further away.
That's an interesting idea! I'm also trying to paint something nice, so when painting, I do look at the picture and say, "Ahh, this doesn't look right", and then I go about changing something. I do enjoy painting, and I guess I wouldn't if I, say, painted blindfolded. But I probably should try that ;-)
Train a GAN on images made with intent, then have the computer produce an image that lacks an artist's intent but imitates the visual appeal of the input works? Art-in-the-shell??
There's always going to be something of the person in anything created by a person (at bare minimum reflective of the physical properties of the person, such as large or small hands). This is obviously true. Just as any interpretation of art is indicative of mental elements of the person interpreting the art.
The artworks on the linked page are similar enough in style that it's obvious something within the artist dictated them.
The question is whether there is conscious intent, and what that intent is. Lots of people create art just to create something they find beautiful, with no other meaning intended.
> The ideomotor phenomenon is a psychological phenomenon wherein a subject makes motions unconsciously.
> The phrase is most commonly used in reference to the process whereby a thought or mental image brings about a seemingly "reflexive" or automatic muscular reaction, often of minuscule degree, and potentially outside of the awareness of the subject. As in responses to pain, the body sometimes reacts reflexively with an ideomotor effect to ideas alone without the person consciously deciding to take action.
> Even humans draw many things that they don't understand. If we draw something that we completely don't understand (such as through random scribbling) we don't even call it a representation. It's a fluke. I used to scribble and then trace images in my scribbling (if possible). Often I ended up tracing things that looked like a child's bad drawing of Donald Duck, but once, without having to trace particular lines at all, my scribbling was a perfect seeming of a rose flower (with some minor additional flourishes). I recognized the rose flower, but I certainly didn't set out to draw it.
- "The artist did have something in mind when painting these pictures: to paint pictures where only the viewer is meant to decipher the meaning."
- "He or she also did paint something aesthetically pleasing so there seems to be a lot of intention on that part as well."
Sorry, you're probably wrong on both assumptions. Playing with viewers is not common; it's a genre thing. Aesthetic pleasantness is also not a requirement for an art piece, and as an intention it may stand in conflict with honest self-expression.
"Really understanding the issues" might just mean "deeper neural networks and more input data" for the AI though. If you are already conceding that AI has the same capabilities as most humans your own intelligence will be reached next with a high amount of probability.
I understand almost nothing of the technology behind ChatGPT, but even to me it seems obvious that a model designed for natural language processing should not be very good at simple arithmetic with large numbers – something which is rarely, if ever, done in natural language.
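One concrete reason, as far as I understand it: GPT-style tokenizers don't even see individual digits. You can check with the tiktoken library how a large number gets chopped up (sketch below; the exact chunking depends on the encoding):

```python
# Inspect how a GPT-style tokenizer splits a large number.
# Requires: pip install tiktoken
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")   # tokenizer used by GPT-3.5/4
for s in ["123456789", "7 times 8", "982347 * 123911"]:
    tokens = enc.encode(s)
    pieces = [enc.decode([t]) for t in tokens]
    print(f"{s!r} -> {pieces}")

# Large numbers come out as arbitrary multi-digit chunks, not single
# digits, so "column-wise" arithmetic is awkward for the model to learn.
```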
Virtually all coverage of ChatGPT, including coverage by interested domain experts and educated fans, prefers to assume that ChatGPT is a person you can talk to through your computer, not a text engine.
Humans learn math through natural language and symbols.
Is there any indication that this is a blocker for models learning math?
I don't necessarily think pumping more data into ChatGPT will make it understand. But I think it's possible to teach a model to do math through natural language.
Perhaps GPT-like models are already capable enough to do math, but they need to store what we call mathematical reasoning as one of many distinct processing pathways and tap into it whenever the context is appropriate.
Easy to say, obviously, but there's some promising work in this direction [1].
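As a loose illustration of that "distinct pathway" idea (the regex router and llm() stub below are my own toy stand-ins, not what the cited work actually does): detect an arithmetic context and hand it to an exact calculator instead of the statistical model.

```python
# Toy sketch of a "distinct processing pathway" for math: route
# arithmetic-looking input to an exact calculator, everything else to
# the language model. The regex and llm() stub are illustrative only.
import re
import operator

OPS = {"+": operator.add, "-": operator.sub,
       "*": operator.mul, "/": operator.truediv}

def llm(prompt: str) -> str:
    return "(language-model answer would go here)"  # hypothetical stub

def answer(prompt: str) -> str:
    m = re.fullmatch(r"\s*(-?\d+)\s*([+\-*/])\s*(-?\d+)\s*", prompt)
    if m:  # arithmetic context: use the exact pathway
        a, op, b = m.groups()
        return str(OPS[op](int(a), int(b)))
    return llm(prompt)  # everything else: the statistical pathway

print(answer("982347 * 123911"))        # exact: 121723599117
print(answer("Why is the sky blue?"))   # falls through to the stub
```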