twitchard's comments | Hacker News

Twice as fast, half as costly, too!


Why do you think that the human mind can contain semantics but a machine cannot? This argument needs some sort of dualism, or what Turing called "the objection from continuity" to account for this.

FWIW I don't think that the "triangularity" in my head is the true mathematical concept of "triangularity". When my son e.g. learned about triangles, at first the concept was just a particular triangle in his set of toy shapes. Then eventually I pointed at more things and said "triangle" and now his concept of triangle is larger and includes multiple things he has seen and sentences that people have said about triangles. I don't see any difficulty with semantics being "a matter of image", really.

Why do we believe that semantics can exist in the human mind but cannot exist in the internals of a machine?

Really "semantics"

I had come across this Catholic philosopher: https://edwardfeser.blogspot.com/2019/03/artificial-intellig... who seems to make a similar argument to this; i.e. that it's the humans who give meaning to things, "logical symbols on a piece of paper are just a bunch of meaningless ink marks"


> Why do you think that the human mind can contain semantics but a machine cannot? [...] Why do we believe that semantics can exist in the human mind but cannot exist in the internals of a machine?

Because I know human minds have semantic content (it would be incoherent to deny it, as the denial itself involves concepts), and because I know the definition of what a computer is, which is that it is a purely syntactic formalism. Anyone who knows the history of computer science will know that computation was intentionally defined as a purely syntactic process. And because it is syntactic, we can mechanize it using physical processes. And no amount of syntax ever amounts to semantics, just as no matter how many natural numbers you add, you'll never get a pineapple or even the number pi. How could it?

Whether this entails dualism or not depends on what you mean by "dualism". It does not entail Cartesian dualism, though a Cartesian dualist can accept this view as presented.

> seems to make a similar argument to this; i.e. that it's the humans who give meaning to things, "logical symbols on a piece of paper are just a bunch of meaningless ink marks"

We don't give meanings to things per se. The meaningless ink marks on a piece of paper mean just that: ink marks on a piece of paper. Those are still meanings. However, writing involves the instrumentalization of physical things to make conventional signs, and signs are things that stand in for something else. So, yes, we can make ink marks with which we associate certain meanings and agree to a convention so that we can communicate.

> FWIW I don't think that the "triangularity" in my head is the true mathematical concept of "triangularity".

What is the "true mathematical concept"?

Concepts can be vague (though triangularity per se is so crisp and simple that I reject the idea that you don't have a clear idea of "triangularity" as such), and we usually do not explicitly grasp all that's entailed by them. For example, people knew what triangles were before they learned that the sum of their angles is always 180 degrees. The latter falls out of an analysis of the concept. And this law applies to all triangles because it necessarily falls out of the concept of triangularity, not because we've empirically shown that all triangles seem to have this property, approximately.
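As a crude numerical illustration of that angle-sum claim (my own sketch, not part of the original comment): you can check any particular triangle you like and watch the interior angles come out to 180 degrees, though no finite pile of such checks substitutes for the conceptual proof that falls out of triangularity itself.

```python
import math
import random

def angles(a, b, c):
    """Interior angles of the triangle with 2D vertices a, b, c, via the law of cosines."""
    ab, bc, ca = math.dist(a, b), math.dist(b, c), math.dist(c, a)
    # Each interior angle is opposite one side; solve the law of cosines for it.
    A = math.acos((ab**2 + ca**2 - bc**2) / (2 * ab * ca))  # angle at vertex a
    B = math.acos((ab**2 + bc**2 - ca**2) / (2 * ab * bc))  # angle at vertex b
    C = math.acos((bc**2 + ca**2 - ab**2) / (2 * bc * ca))  # angle at vertex c
    return A, B, C

random.seed(0)
for _ in range(5):
    pts = [(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(3)]
    total = math.degrees(sum(angles(*pts)))
    print(round(total, 6))  # consistently ~180.0, up to floating-point error
```

Every random triangle agrees, but the universality comes from the analysis of the concept, not from the sampling.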

> I don't see any difficulty with semantics being "a matter of image", really.

Your son, as he was learning, was abstracting from these individual examples. He realized that you didn't mean this triangle, or that triangle, but something both have in common, and ultimately, that is triangularity, which is not just a property or feature of a given triangle, like "green" as in "green triangle", but the what of a triangle. But if you reduce concepts to images, you end up with problems and paradoxes. For example, why should a collection of these things, to the exclusion of those things, be triangles? Or the number three: you have never encountered the number three. Or the notion of similarity between images. There are well-known issues with an imagist notion of the mind.


> What Turing was trying to do, is to isolate this "hard problem of consciousness" and separate it from easier problems we can actually answer.

Yes, exactly. As a computer scientist, I think this is a great thing to do: science is all about taking mushy concepts like "intelligence" and extracting simplified versions of them that are more tractable in technical settings. The trouble is, Turing doesn't seem to want to stop at merely arguing that forgetting about interior consciousness is useful for technical discussions -- he seems to think that interior consciousness shouldn't be important for philosophical or popular notions of thinking and intelligence, either, and that they should update to use something like his test.

So even if you updated the Turing Test for 2025 the church would probably still be writing "Antiqua et Nova" to remind people that -- yes, interior consciousness exists and is important and robot intelligence really isn't the same as human intelligence without it.


I think you misunderstood what I said.

I don't believe a 2025 version would solve the hard problem of consciousness, or even contribute meaningfully to solving it.

The way I see it, the church is using _an even older_ version of the same line of thought experiments.


Crazy how many voice AI related updates there were this week. Grok voice mode, Alexa+, Hume OCTAVE, Elevenlabs Scribe STT... big week for Voice AI!


Crazy how many Voice AI announcements there were today:

- Octave

- Alexa+

- Elevenlabs "Scribe" speech-to-text

This is all coming on the heels of Grok 3's voice mode last week.


Let me get this straight: you can be better than 90% of people if you just read a book, but wait, not just any book, it has to be the right book -- and also it doesn't have to be a book, it can also just be "lived experience" or technical documentation, that counts too.

At this point, the thesis is more qualification than statement. Mostly what I drew from the article is that the author feels smugly superior to many of his peers, and wants an excuse ("they didn't even read a book") to morally blame them for their (perceived) shortcomings, while serving up a generous helping of false modesty on the side.


You can obviously swap in whatever you want for a book, but a great book is a huge accelerator. And yeah, it has to be good, which doesn't seem very controversial. Fluent Python is a great example of this. I could, with enough effort and time, piece together some of the philosophy underlying the language's design, but someone has already put a huge amount of pedagogical effort into pre-chewing a lot of that food for me. This probably isn't intrinsic to books (I really like Josh Comeau's CSS course, for example, which has book-tier thought put into it) but I do think that books attract authors who are thoughtful, and the form factor makes them pleasant to revisit. Even some of the suspect ones, like The Mythical Man Month, have some beautiful prose and thought that I think wouldn't typically appear in a more colloquial format like a YouTube series.

I do appreciate you taking the time to read my mind for false modesty and vaguely insult me though, thank you!


A paragraph like

> ...I am clearly worse than almost everyone that emails me along all of these dimensions. I only have a dim understanding of how my 3-4 years of experience coming from a strong background in psychology has rounded to "senior engineer", I've only ever written tests for personal projects because no employer I've ever seen actually had any working tests or interest in getting them, and I wrote the entirety of my Master's thesis code without version control because one of the best universities in the country doesn't teach it. In short, I've never solved a truly hard problem, I'm just out here clicking the "save half a million dollar" button that no one else noticed. I'm a fucking moron.

comes off to me as false modesty in the context of an essay that characterizes the majority of industry colleagues as "drowning sleepwalkers." Take it as a criticism of your writing persona, not a personal insult. You are right that I can't read minds, only the text in front of me.

I am glad you are mentioning specific books about software here in the comments. The essay had a very thoughtful and detailed discussion of books about drawing. If it had kept that energy when it turned to discussing software, instead of retreating into taking potshots at "the average developer", consultants, etc., it would come off to me like a persuasive essay rather than a self-congratulatory smugpiece.


What people actually mean by "data" in a phrase like "data-driven" is fungible observations that you can do quantitative analysis with.

Yes in some sense "a podcast I heard once" is data, but nobody means that when they say "data-driven".

And what about emotions? If I (subconsciously) choose not to hire somebody because they have an asymmetrical face or are "ugly", this is 100% a "gut feeling". I suppose you could say that it's based on "data" in the sense that my DNA which gave me my face preferences is "data". But at this point you have stretched the meaning of data too far. You could even say that the Earth is being "data-driven" when it orbits the sun (the data being the initial trajectory of the Earth at the time of its formation).


You make valid points about the common understanding of "data-driven" decisions. I want to intentionally broaden the definition of data to include a wider range of inputs influencing our choices.

Let’s take the hiring process example. While it feels like a "gut feeling," I'd argue it's still rooted in data - just not the clearly quantifiable kind. This "data" includes past experiences, cultural conditioning, and even evolutionary predispositions.

I think in the current age of LLMs and multi-modal models, you can consider a podcast as data. For example, you can use the podcast to train a model. In LLM terms, we'd consider it 'data' that the model is trained on. So why not consider it data when we humans listen to it?

The brain is like a neural network, and a network has weights and biases. Every time we add new data to the brain, through various forms like sight, hearing, smell, taste, touch, etc., we modify the weights and biases of our brain. Regardless of what the data is or how relevant it is, the weights change. So, arguably, anything we consume could be data.
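To make the analogy concrete, here's a minimal sketch (my own illustration, with made-up numbers) of what "every new input nudges the weights" means for an artificial network: a single neuron whose weight and bias shift a little with each observation it consumes.

```python
# A single artificial "neuron": prediction = w * x + b.
# Each (x, target) observation nudges w and b slightly, the way the
# comment above imagines experience nudging the brain's weights.
w, b = 0.0, 0.0
lr = 0.01  # learning rate: how strongly each new datum changes the weights

def observe(x, target):
    """One gradient-descent step on squared error for a single example."""
    global w, b
    error = (w * x + b) - target
    w -= lr * error * x
    b -= lr * error

# Feed in observations of the rule y = 2x + 1; the weights drift toward it.
for _ in range(2000):
    for x in (-1.0, 0.0, 1.0, 2.0):
        observe(x, 2 * x + 1)

print(round(w, 3), round(b, 3))  # w ends up near 2, b near 1
```

Every observation changes the parameters a little, relevant or not, which is roughly the point being made about the brain.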

I don’t think you can make the argument that Earth is data-driven. The Earth follows the laws of physics. It’s just an object without the ability to process information, so just because it is following a mathematical path, it doesn’t mean it is data-driven.

The core argument isn't that all decisions are based on spreadsheet-style data, but that what we perceive as intuition is complex processing of various inputs accumulated over time.

By expanding our concept of "data," we can gain deeper insights into human decision-making, including seemingly instinctual or emotional choices. I want to complement the current colloquial understanding of data by recognizing the full spectrum of information influencing our decisions.


Congratulations to Alex and team! Had the pleasure of working with Alex on SDK generators back in 2019 and 2020 and it's very exciting to see these ideas and the fruits of this become available to the entire world. Stainless would definitely be where I looked first if I were starting a developer-facing SaaS today.


This is "the worst part of Jenkins is that it works".

You shouldn't judge a developer tool by just what is possible to do with the tool. After all, with a little Turing-completeness it is possible to do anything with anything -- you should judge the tool by what is easy to do with the tool. A good developer tool shouldn't require knowledge of a bunch of arcana to "configure correctly". A good tool protects you from "rookie mistakes" and makes sane choices the intuitive and obvious path of least resistance. Good tools can have a learning curve, but they assist the learning curve by making their abilities easy to discover and experiment with; they don't require you to dig into source code or do random searches on GitHub to find some random pipeline somewhere that uses the configuration you need, as described in the post.


I wasn't judging the tools. I use several of them and with 0 complaints.

I'm judging the article for being incorrect about its specific points and for focusing on a single tool while the alternatives also suffer from the exact same issues.


You are judging Jenkins favorably because it is possible (with a bunch of arcane knowledge) to build good CI on top of it.

I am saying you should judge Jenkins disfavorably for being hard to use instead of going "skill issue" when somebody describes the pain points.


Yup. I think this is the sort of perspective you get with experience.

