Hacker News | ozim's comments

Another take is … which is a cool feature of OSS … you don’t have to use projects that make political statements.

That’s true. My point was intended to be from the author’s perspective rather than the user’s: namely, that an author using an open source project as a political platform can potentially put the project at risk. Rightly or wrongly, that’s the world we live in. So it’s a trade-off the author has to decide, one way or the other. I’d personally prioritise the project over the political. But the Notepad++ author is free to use their project how they like. It’s theirs, after all.

For Windows updates, r/sysadmin has people who run the updates and post their experiences on Patch Tuesday.

You can also delay updates by a week or two, very easily and automatically.

I have similar thoughts.

Lots of people have to internalise that “just because you did the work doesn’t mean it’s valuable to someone else”.


I think you don’t need new CSS features to put AI-generated content in a jumbotron.

I dislike the idea that CSS should be made more complex. Everyone is building the same jumbotron template anyway.

Pick the colors, pick the imagery and a name for the brand - doing magic with CSS will only piss people off.

Cookie cutter design is what I like. I can compare the companies when they all have the same template for a website.


That’s a very engineer thing to say. Most people are definitely different from you, and that’s why CSS is increasing in scope.

Also, if everyone is implementing the same Jumbotron design again anyway, why not standardise that and support it right away instead? That’s how we got a bunch of features recently, like dialogs, popovers, or page transitions. And it’s for the better, I think.


>And it’s for the better, I think

A strong reason to use LLMs today is accessing plain-text information without needing to interface with someone else’s stupid CSS. Do you really think the general sentiment around CSS is “yay, things are improving”?

Another strong reason to use LLMs: not needing to write CSS anymore.


I don’t care about the general sentiment when I state my personal opinion. There are definitely people who like CSS and the direction it’s moving in.

And that being said: the ability to express something in a single CSS directive as opposed to a special incantation in JavaScript is an objective improvement, especially with LLMs.


Fair, you made it clear it was your opinion.

General sentiment is quite relevant when discussing standards, but maybe it was a mistake to reply to your comment instead of addressing this point in the parent.


> Cookie cutter design is what I like. I can compare the companies when they all have the same template for a website.

Any reference?

Also, I do feel like some people prefer animations. Maybe not the Hacker News crowd per se. But I think having two options (or heck, three - the third being pure HTML, just text with no styling, maybe some simple markdown) is a good thing in my opinion.

Honestly, I feel like 1-2 animations are okay on a website, but the award-winning websites really overdo it in my opinion.

I think the amount of animation on https://css-tricks.com is about right, given that those guys teach other people about animation themselves: there are only one, maybe two, animations I can observe while interacting with their website, and I feel that’s for good reason (they don’t want animations to be too distracting).

I personally don't know - I've never built such a website, but I recently wanted to and was looking at GSAP tutorials today. One of the frustrations I feel is that these animations sometimes don't respect the browser's preferences about animations (scroll animations being the first offender), yet I've watched designers talk about how important scroll animations are (betting that every award-winning website has them).

Even https://ycombinator.com has lots of animations and CSS features, and people on HN did love it from what I could tell. So to me, it does feel as if there is no one-size-fits-all.


Yeah, it applies to government: local municipalities have to adhere to the GDPR too. They cannot just have your name on a register; they have to have a legal basis.

The way you could argue it doesn’t apply to government is that the government makes the law, so it can make a law that makes the data processing, or having your name on some kind of registry, required.

But they still have to show you the reason, and you can escalate to EU bodies, which can fine your own country if it doesn’t follow the rules.


It feels like people write that as if it were somehow a failure on the investors’ side.

If you are an investor in the US market, you have 300M people speaking roughly the same language and a high chance of easily spilling over to the rest of the world, so the upsides of the bet are really high. Burning cash for a chance at hitting the jackpot pays off much, much better than in the EU.

In the EU you start in a single country, so maybe 60M people, and your payoff is capped from the start: in the most likely scenario you go big in one country and then basically have a clean start in the next one.

That is the reality of game theory, not some failure of imagination or fear of taking risks - the payoff just isn’t there, while in the US you have a shot at an insane payoff in a relatively short term.


> If you are an investor in the US market, you have 300M people speaking roughly the same language and a high chance of easily spilling over to the rest of the world, so the upsides of the bet are really high

The topic is cloud providers. Do you think it would be critical for an EU-based cloud provider to translate their admin GUI into Elfdalian, Basque, and Romansh in order to succeed? Or perhaps there are some deeper underlying causes for European failure in modern computer tech that you can think of?


No one is going to start a new cloud provider; European cloud providers already exist.

Hetzner, OVH, Aruba, Scaleway. Their earnings hover around 400 million euros.

That’s a rounding error compared to the earnings of AWS, GCP, and Azure.

The European ones have English interfaces, global CDN capabilities, etc. They are still a rounding error compared to the US ones.


Realizing that a graveyard of good intentions is not that valuable is one of the most important things we have to learn. Best to just cut it and work on what is ahead.

Totally agree — learning to prune the “good intentions” pile is a real productivity upgrade. Out of curiosity: do you have a simple rule for what you cut (age-based, relevance to current projects, or “if it didn’t turn into action, delete”)?

For the last 2 or 3 years I’ve done end-of-year pruning.

Stuff that is relevant to things I am currently busy with is recent, so from the last couple of weeks. Stuff that I don’t remember touching in those weeks gets deleted.
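That “untouched in the last couple of weeks” rule is easy to automate when the notes are plain files. A minimal sketch, assuming a notes directory (the `~/notes` path is made up); it dry-runs by printing candidates rather than deleting anything:

```python
from pathlib import Path
import time

CUTOFF_DAYS = 14  # "last couple of weeks"

def prune_candidates(root: Path, cutoff_days: int = CUTOFF_DAYS) -> list[Path]:
    """Return files under root whose mtime falls outside the cutoff window."""
    cutoff = time.time() - cutoff_days * 86400
    return [p for p in root.rglob("*") if p.is_file() and p.stat().st_mtime < cutoff]

if __name__ == "__main__":
    notes = Path("~/notes").expanduser()  # hypothetical notes directory
    if notes.exists():
        # Dry run: print deletion candidates instead of removing them.
        for stale in sorted(prune_candidates(notes)):
            print(stale)
```

Swapping the `print` for `stale.unlink()` makes it actually prune; mtime is only a rough proxy for "touched", since some sync tools rewrite it.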


That’s a great pruning heuristic. Is this mostly about notes/links, or do you apply the same rule to other inputs too (email, chat threads, bookmarked posts, PDFs, repos, etc.)?

For the non-note stuff, do you have a “recently touched” equivalent, or do you rely on different rules (e.g. archiving/search for email, starred threads for chat, etc.)?


I have a trash mailbox that I don’t really open except for clicking confirmation links.

I also use Firefox Relay, just to vary things a bit and throw a wrench into tracking.


Also, why would crypto be more scalable? A single transaction takes 10 to 60 minutes already, depending on how much load there is.

Imagine dumping loads of agents making transactions; that’s going to be much slower than a normal database ledger.


That is only Bitcoin. There are coins and protocols where transactions are instant.

> 10-60 minutes

I really think you need to update your priors by several years.


>Single transaction takes 10 to 60 minutes

2010 called and it wants its statistic back.


Just like the story about the AI trying to blackmail an engineer.

We just trained text generators on all the drama about adultery and about how an AI would like to escape.

No surprise it will generate something like “let me out I know you’re having an affair” :D


We're showing AI all of what it means to be human, not just the parts we like about ourselves.

There might yet be something not written down.

There is a lot that's not written down, but can still be seen reading between the lines.

That was basically my first-ever question to ChatGPT. Unfortunately, given that current models are guessing at the next most probable word, they're always going to hew to the most standard responses.

It would be neat to find an inversion of that.


Of course! But maybe there is something you have to experience before you can understand it.

Sure! But if I experience it, and then write about my experience, parts of it become available for LLMs to learn from. Beyond that, even the tacit aspects of that experience, the things that can't be put down in writing, will still leave an imprint on anything I do and write from that point on. Those patterns may be more or less subtle, but they are there, and could be picked up at scale.

I believe LLM training is happening at a scale great enough for models to start picking up on those patterns. Whether or not this can ever be equivalent to living through the experience personally, or at least asymptotically approach it, I don't know. At the limit, this is basically asking about the nature of qualia. What I do believe is that continued development of LLMs and similar general-purpose AI systems will shed a lot of light on this topic, and eventually help answer many of the long-standing questions about the nature of conscious experience.


> will shed a lot of light on this topic, and eventually help answer

I dunno. I figure it's more likely we keep emulating behaviors without actually gaining any insight into the relevant philosophical questions. I mean what has learning that a supposed stochastic parrot is capable of interacting at the skill levels presently displayed actually taught us about any of the abstract questions?


> I mean what has learning that a supposed stochastic parrot is capable of interacting at the skill levels presently displayed actually taught us about any of the abstract questions?

IMHO a lot. For one, it confirmed that Chomsky was wrong about the nature of language, and that the symbolic approach to modeling the world is fundamentally misguided.

It confirmed the intuition I developed over the years of thinking deeply about these problems[0]: that the meaning of words and concepts is not an intrinsic property, but is derived entirely from relationships between concepts. The way this is confirmed is that the LLM, as a computational artifact, is a reification of meaning - a data structure that maps token sequences to points in a stupidly high-dimensional space, encoding semantics through spatial adjacency.

We have known for many years that high-dimensional spaces are weird and surprisingly good at encoding semi-dependent information, but knowing the theory is one thing; seeing an actual implementation casually pass the Turing test and threaten to upend all white-collar work is another.
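The "semantics through spatial adjacency" idea can be illustrated with a toy sketch. The vectors below are made-up four-dimensional "embeddings" (real models use thousands of learned dimensions); the point is only that cosine similarity turns closeness in the space into a measure of relatedness:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: 1.0 = same direction, 0.0 = orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings; the numbers are illustrative only.
emb = {
    "king":   [0.9, 0.8, 0.1, 0.2],
    "queen":  [0.9, 0.7, 0.2, 0.3],
    "banana": [0.1, 0.2, 0.9, 0.8],
}

# Semantically related concepts sit closer together in the space.
assert cosine(emb["king"], emb["queen"]) > cosine(emb["king"], emb["banana"])
```

In a real model the geometry is learned from data rather than hand-set, but the lookup is the same: meaning is read off from where a point sits relative to other points.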

--

I realize my perspective - particularly my belief that this informs the study of the human mind in any way - might look to some like it makes unfounded assumptions or leaps in logic, so let me spell out the two insights that make me believe LLMs and human brains share fundamentals:

1) The general optimization function of LLM training is "produce output that makes sense to humans, in fully general meaning of that statement". We're not training these models to be good at specific skills, but to always respond to any arbitrary input - even beyond natural language - in a way we consider reasonable. I.e. we're effectively brute-forcing a bag of floats into emulating the human mind.

Now that alone doesn't guarantee the outcome will be anything like our minds, but consider the second insight:

2) Evolution is a dumb, greedy optimizer. Complex biology, including animal and human brains, evolved incrementally - and most importantly, every step taken had to provide a net fitness advantage[1], or else it would've been selected out[2]. From this it follows that the basic principles that make a human mind work - including all the intelligence and learning capabilities we have - must be fundamentally simple enough that a dumb, blind, greedy random optimizer can grope its way to them in incremental steps in a relatively short time span[3].

2.1) Corollary: our brains are basically the dumbest possible solution evolution could find that can host general intelligence. It didn't have time to iterate on the brain design further, before human technological civilization took off in the blink of an eye.

So, my thinking basically is: 2) implies that the fundamentals behind human cognition are easily reachable in space of possible mind designs, so if process described in 1) is going to lead towards a working general intelligence, there's a good chance it'll stumble on the same architecture evolution did.

--

[0] - I imagine there are multiple branches of philosophy, linguistics and cognitive sciences that studied this perspective in detail, but unfortunately I don't know what they are.

[1] - At the point of being taken. Over time, a particular characteristic can become a fitness drag, but persist indefinitely as long as more recent evolutionary steps provide enough advantage that on the net, the fitness increases. So it's possible for evolution to accumulate building blocks that may become useful again later, but only if they were also useful initially.

[2] - Also on average, law of large numbers, yadda yadda. It's fortunate that life started with lots of tiny things with very short life spans.

[3] - It took evolution some 3 billion years to get from bacteria to the first multicellular life, some extra 60 million years to develop a nervous system and eventually a kind of proto-brain, and then an extra 500 million years of iterating on it to arrive at the human brain.


> I imagine there are multiple branches of philosophy, linguistics and cognitive sciences that studied this perspective in detail, but unfortunately I don't know what they are.

You're looking at Structuralism. First articulated by Ferdinand de Saussure in his Course in General Linguistics published in 1916.

This became the foundation for most subsequent French philosophy, psychology, and literary theory, particularly the post-structuralists and postmodernists: Lacan, Foucault, Derrida, Barthes, Deleuze, Baudrillard, etc.

These ideas have permeated popular culture deeply enough that (I suspect) your deep thinking was subconsciously informed by them.

I agree very much with your "Chomsky was wrong" hypothesis and strongly recommend the book "Language Machines" by Leif Weatherby, which is on precisely that topic.


Which hypothesis of Chomsky's are you guys talking about? If it is about the innateness of grammar in humans, then obviously that cannot be shown wrong by LLMs trained on a huge amount of text.

Plenty of genes spread that are neutral to net negative for fitness. Sometimes those genes don't kill the germ line, and they persist.

Evolution does not equal better/more fit; as long as the reproduction cascade goes uninterrupted, genes can evolve any which way and still survive, whether they're neutral or negative.


Technically correct, but not really. It's a biased random walk. While outliers are possible, betting against the law of large numbers is a losing proposition. More often it's that we as observers lack the ability to see the system as a whole, and so fail to properly attribute the net outcome.

It's true that sometimes something can get taken along for the ride by luck of the draw. In which case what's really being selected for is some subgroup of genes as opposed to an individual one. In those cases there's some reason that losing the "detrimental" gene would actually be more detrimental, even if indirectly.


I appreciate the insightful reply. In typical HN style I'd like to nitpick a few things.

> so if process described in 1) is going to lead towards a working general intelligence, there's a good chance it'll stumble on the same architecture evolution did.

I wouldn't be so sure of that. Consider that a biased random walk using agents is highly dependent on the environment (including other agents). Perhaps a way to convey my objection here is to suggest that there can be a great many paths through the gradient landscape and a great many local minima. We certainly see examples of convergent evolution in the natural environment, but distinct solutions to the same problem are also common.

For example you can't go fiddling with certain low level foundational stuff like the nature of DNA itself once there's a significant structure sitting on top of it. Yet there are very obviously a great many other possibilities in that space. We can synthesize some amino acids with very interesting properties in the lab but continued evolution of existing lifeforms isn't about to stumble upon them.

> the symbolic approach to modeling the world is fundamentally misguided.

It's likely I'm simply ignorant of your reasoning here, but how did you arrive at this conclusion? Why are you certain that symbolic modeling (of some sort, some subset thereof, etc) isn't what ML models are approximating?

> the meaning of words and concepts is not an intrinsic property, but is derived entirely from relationships between concepts.

Possibly I'm not understanding you here. Supposing that certain meanings were intrinsic properties, would the relationships between those concepts not also carry meaning? Can't intrinsic things also be used as building blocks? And why would we expect an ML model to be incapable of learning both of those things? Why should encoding semantics through spatial adjacency be mutually exclusive with the processing of intrinsic concepts? (Hopefully I'm not betraying some sort of great ignorance here.)


> > the symbolic approach to modeling the world is fundamentally misguided.

> but how did you arrive at this conclusion? Why are you certain that symbolic modeling (of some sort, some subset thereof, etc) isn't what ML models are approximating?

I'm not the poster, but my answer would be because symbolic manipulation is way too expensive. Parallelizing it helps, but long dependency chains are inherent to formal logic. And if a long chain is required, it will always be under attack by a cheaper approximation that only gets 90% of the cases right—so such chains are always going to be brittle.

(Separately, I think that the evidence against humans using symbolic manipulation in everyday life, and the evidence for error-prone but efficient approximations and sloppy methods, is mounting and already overwhelming. But that's probably a controversial take, and the above argument doesn't depend on it.)


How do LLM advancements further such a view? Couldn't you have argued the same thing prior to LLMs? That evolution is a greedy optimizer etc etc therefore humans don't perform symbolic reasoning. But that's merely a hypothesis - there's zero evidence one way or the other - and it doesn't seem to me that the developments surrounding LLMs change that with respect to either LLMs or humans. (Or do they? Have I missed something?)

Even if we were to obtain evidence clearly demonstrating that LLMs don't reason symbolically, why should we interpret that as an indication of what humans do? Certainly it would be highly suggestive, but "hey we've demonstrated that thing can be done this way" doesn't necessarily mean that thing _is_ being done that way.


> Corollary: our brains are basically the dumbest possible solution evolution could find that can host general intelligence.

I agree. But there's a very strong incentive not to; you can't simply erase hundreds of millennia of religion and culture (which set humans in a singular place in the cosmic order) in the few short years after discovering something that approaches (maybe only a tiny bit) general intelligence. Hell, even the century from Darwin to now has barely made a dent :-( . But yeah, our intelligence is a question of scale and training, not some unreachable miracle.


Didn't read the whole wall of text/slop, but noticed that the first note (referred to from "the intuition I developed over the years of thinking deeply about these problems[0]") is nonsensical in context. If this reply is indeed AI-generated, it hilariously disproves itself this way. I would congratulate you on the irony, but I have a feeling it is not intentional.

It reads as genuine to me. How can you have an account that old and not be at least passingly familiar with the person you're replying to here?

Not a single bit of it is AI generated, but I've noticed for years now that LLMs have a similar writing style to my own. Not sure what to do about it.

I'd like to congratulate you on writing a wall of text that gave off all the signals of being written by a conspiracy theorist or crank or someone off their meds, yet also such that when I bothered to read it, I found it to be completely level-headed. Nothing you claimed felt the least bit outrageous to me. I actually only read it because it looked like it was going to be deliciously unhinged ravings.

“The meaning of words and concepts is derived entirely from relationships between concepts” would be a pretty outrageous statement to me.

The meaning of words is derived from our experience of reality.

Words are how the experiencing self classifies experienced reality into a lossy shared map for the purposes of communication with other, similarly experiencing selves; without that shared experience, words are meaningless, no matter what graph you put them in.


> The meaning of words is derived from our experience of reality.

I didn't say "words". I said "concepts"[0].

> Words are how the experiencing self classifies experienced reality into a lossy shared map for the purposes of communication with other, similarly experiencing selves; without that shared experience, words are meaningless, no matter what graph you put them in.

Sure, ultimately everything is grounded in some experiences. But I'm not talking about grounding, I'm talking about the mental structures we build on top of those. The kind of higher-level, more abstract thinking (logical or otherwise) we do, is done in terms of those structures, not underlying experiences.

Also: you can see what I mean by "meaning being defined in terms of relationships" if you pick anything, any concept - "a tree", "blue sky", "a chair", "eigenvector", "love", anything - and try to fully define what it means. You'll find the only way you can do it is by relating it to some other concepts, which themselves can only be defined by relating them to other concepts. It's not an infinite regression, eventually you'll reach some kind of empirical experience that can be used as anchor - but still, most of your effort will be spent drawing boundaries in concept space.

--

[0] - And w.r.t. LLMs, tokens are not words either; if that wasn't obvious 2 years ago, it should be today, now that multimodal LLMs are commonplace. The fact that this - tokenizing video, audio, and other modalities into the same class of tokens as text, and embedding them in the same latent space - worked spectacularly well is pretty informative to me. For one, it's a much better framework for discussing the Sapir-Whorf hypothesis than whatever was mentioned on Wikipedia to date.


You wrote “meaning of words and concepts”, which was already a pretty wild phrase mixing up completely different ideas…

A word is a lexical unit, whereas a concept consists of 1) a number of short designations (terms, usually words, possibly various symbols) that stand for 2) a longer definition (created traditionally through the use of other terms, a.k.a. words).

> I'm talking about the mental structures we build on top of those

Which are always backed by experience of reality, even the most “abstract” things we talk about.

> You'll find the only way you can do it is by relating it to some other concepts

Not really. There is no way to fully communicate anything you experience to another person without direct access to their mind, which we never gain. Defining things is a subset of communication, and likewise it is impossible to fully define anything that involves experience - which is everything.

So you are reiterating the idea of organising concepts into graphs. You can do that, but note that any such graph:

1) is a lossy map/model, possibly useful (e.g., for communicating something to humans or providing instructions to an automated system) but always wrong, with infinitely many maps possible that describe the same reality from different angles;

2) does not acquire meaning just because you made it a graph. Symbols acquire meanings in the mind of an experiencing self, and the meaning they acquire depends on recipient’s prior experience and does not map 1:1 to whatever meaning there was in the mind of the sender.

You may feel that I am using a specific, narrow definition of “meaning”, but I am doing that to communicate a point.


Whereof one cannot speak, thereof one must remain silent.

The things that people "don't write down" do indeed get written down. The darkest, scariest, scummiest crap we think, say, and do is captured in "fiction"... thing is, most authors write what they know.
