Hacker News | eschaton's comments

They probably mean that they don’t like the way the “install name” (as it’s referred to) of a shared library is embedded in the library and then copied to whatever links the library, and is then used to find the library at runtime. I suspect they’d prefer to build the shared library with an arbitrary install name and then just have it found automatically by being in the Frameworks or Libraries directory.

Most platforms don’t have a concept of “install name” distinct from the library name; the value was originally the full path to the deployment location of the library, which was revised to support meta-paths (like `@rpath/LibraryName`) in Mac OS X 10.4 and 10.5 and is what the runtime dynamic loader (dyld) uses to match a library at load time. So an application’s executable can have a set of self-relative run path search paths, which is how it can load libraries from its Frameworks and Libraries directories.
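To make this concrete (a sketch for macOS only; `libfoo.dylib` and the `MyApp` bundle layout are illustrative names, not from the thread), the install name and run path search paths can be inspected and rewritten with the standard `otool` and `install_name_tool` commands:

```shell
# Show the install name embedded in a dylib.
otool -D libfoo.dylib

# Rewrite the install name to be @rpath-relative, so clients resolve it
# via their run path search paths instead of a hardcoded absolute path.
install_name_tool -id @rpath/libfoo.dylib libfoo.dylib

# Give an executable a self-relative run path search path, so dyld
# looks in the bundle's Frameworks directory at load time.
install_name_tool -add_rpath @executable_path/../Frameworks \
    MyApp.app/Contents/MacOS/MyApp

# Show the load commands (LC_ID_DYLIB, LC_RPATH) that record all of this.
otool -l MyApp.app/Contents/MacOS/MyApp | grep -A2 LC_RPATH
```

Note that a client copies the library's install name into its own load commands at link time, which is why it has to be set correctly before anything links against the library.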


Ah fair enough. But generally an rpath is pretty good to go out of the box.

The primary binary encodes its run path relative to the executable path, and any dylib loaded from it should (by default) be able to load relative to that.


Depends on whether you’re building with Xcode; when I worked on it, I ensured that the templates included with Xcode would have the right setup to declare appropriate run path search paths for applications, and appropriate install names for frameworks and (shared) libraries.

However, when building with just the command-line tools and not passing all the same arguments an Xcode project and target would cause to be passed, you have to do extra work to ensure the right run path search paths get into built executables and the right install names get into built libraries and frameworks.

The latter is to ensure that, if you don’t pass those extra arguments, executables and shared libraries are built for Darwin-based platforms as much as reasonably possible like they are on other UNIX-like platforms.
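For example (a sketch of a command-line build outside Xcode, macOS only; the file and bundle names are illustrative), the install name is passed when linking the library and the run path when linking the executable:

```shell
# Build the dylib with an @rpath-relative install name, which clients
# copy into their own load commands when they link against it.
clang -dynamiclib foo.c -install_name @rpath/libfoo.dylib -o libfoo.dylib

# Link the executable against it and embed a self-relative run path
# search path so dyld can resolve @rpath at load time.
clang main.c -L. -lfoo \
    -Wl,-rpath,@executable_path/../Frameworks -o MyApp
```

Without the `-install_name` and `-rpath` arguments, you get the plain-UNIX-style defaults described above.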


AI is the field. Machine learning is one of many specializations within the field. “Generative AI” is the colloquial term for using various machine learning models to generate text, images, video, code, etc.; that is, it’s a subfield of machine learning.

Other subfields of AI include things like search, speech and language understanding, knowledge representation, and so on. There’s a lot more to AI than machine learning and a lot more to machine learning than LLMs (“gen AI”).


Yes, keep AI slop “fixes” to yourself and only create PRs for your own work.

Or maybe it indicates that the person looking at the LLM and deciding there’s not much there knows more than you do about what they are and how they work, and you’re the one who’s wrong about their utility.

We don’t need more software, we need the right software implemented better. That’s not something LLMs can possibly give us because they’re fucking pachinko machines.

Here’s a hint: Nobody should ever write a CRUD app, because nobody should ever have to write a CRUD app; that’s something that can be generated fully and deterministically (i.e. by a set of locally-executable heuristics, not a goddamn ocean-boiling LLM) from a sufficiently detailed model of the data involved.
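As an illustration (a minimal sketch; the table and field names are hypothetical), deterministic CRUD generation from a data model needs nothing more than string assembly over the schema:

```python
import sqlite3

# Hypothetical data model: a table name plus column definitions.
MODEL = {
    "table": "customers",
    "fields": {"id": "INTEGER PRIMARY KEY", "name": "TEXT", "email": "TEXT"},
}

def generate_crud(model):
    """Deterministically derive the four CRUD statements from the model."""
    table, fields = model["table"], model["fields"]
    cols = ", ".join(fields)
    placeholders = ", ".join("?" for _ in fields)
    sets = ", ".join(f"{c} = ?" for c in fields if c != "id")
    return {
        "create": f"INSERT INTO {table} ({cols}) VALUES ({placeholders})",
        "read":   f"SELECT {cols} FROM {table} WHERE id = ?",
        "update": f"UPDATE {table} SET {sets} WHERE id = ?",
        "delete": f"DELETE FROM {table} WHERE id = ?",
    }

# The generated statements are ordinary SQL; exercise them in-memory.
db = sqlite3.connect(":memory:")
defs = ", ".join(f"{c} {t}" for c, t in MODEL["fields"].items())
db.execute(f"CREATE TABLE {MODEL['table']} ({defs})")
sql = generate_crud(MODEL)
db.execute(sql["create"], (1, "Ada", "ada@example.com"))
row = db.execute(sql["read"], (1,)).fetchone()
```

The same heuristic approach extends to generating forms and validation from the column types, all locally and reproducibly.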

In the 1970s you could wire up an OS-level forms library to your database schema and then serve literally thousands of users from a system less powerful than the CPU in a modern peripheral or storage controller. And in less RAM, too.

People need to take a look at what was done before in order to truly have a proper degree of shame about how things are being done now.


Most CRUD software development is not really about the CRUD part. And for most frameworks, you can find packages that generate the UI and the glue code that ties it to the database.

When you're doing CRUD, you're spending most of the time on the extra constraints designed by product. It's dealing with the CRUD events, the IAM system, the notification system, and so on.


> That’s not something LLMs can possibly give us because they’re fucking pachinko machines.

I mostly agree, but I do find them useful for fuzzing out tests and finding issues with implementations. I have moved away from larger architectural sketches using LLMs because over larger time scales I no longer find they actually save time, but I do think they're useful for finding ways to improve correctness and safety in code.

It isn't the exciting and magical thing AI platforms want people to think it is, and it isn't indispensable, but I like having it handy sometimes.

The key is that it still requires an operator who knows something is missing, or that there are still improvements to be made, and how to suss them out. This is far less likely to occur in the hands of people who don't know, in which case I agree that it's essentially a pachinko machine.


I’m with you. Anyone writing in anything higher level than assembly, with anything less than the optimization work done by the demo scene, should feel great shame.

Down with force-multiplying abstractions! Down with intermediate languages and CPU agnostic binaries! Down with libraries!


You have clearly entirely understood exactly what I was saying and don’t look like a fool at all with this reply.

I produce a lot of shit every week too, but I don’t brag about my digestive system on “Hacker” “News.”

You are so bitter. Take a moment to ponder why you are that way.

Nice deflection. Did you use ChatGPT to come up with it?

I suspect something like this happened.

- He attended a nearby church’s youth group in hopes of meeting “marriageable” women, in his mid-to-late 20s or older;
- Was bothering the girls or young women in attendance by being straightforward about his weird intentions;
- Wouldn’t leave when asked politely to do so; and,
- Had to be removed by security (maybe an off-duty cop?) or the police.

It sounds like in being removed, he made a scene, and they had to lay out right then and there (“in front of a whole bunch of eligible young adults”) that he was at least 10 years older than everyone else attending and that made his presence weird and inappropriate.


Symbolics’ big fumble was thinking their CPU was their special sauce for way too long.

They showed signs that some people there understood that their development environment was it, but it obviously never fully got through to decision-makers: They had CLOE, a 386 PC deployment story in partnership with Gold Hill, but they’d have been far better served by acquiring Gold Hill and porting Genera to the 386 PC architecture.


Xerox/Venue tried porting Interlisp (the Lisp machine environment developed at Xerox PARC) to both Unix workstations and commodity PC hardware, but it doesn't seem like that was a commercial success. Venue remained a tiny company providing support to existing Interlisp customers until its head developer died in the late 2000s and they wrapped up operations. The Unix/PC ports seem to have mostly been used as a way to run legacy Interlisp software on newer hardware rather than attracting anyone new to the Lisp machine world. I don't see why Symbolics doing the same thing as Xerox would have produced any different results. The real problem was that investment in expert systems/Lisp dried up as a whole. I don't know whether any of the Lisp vendors could have done anything to combat those market forces.


The environment lasted a long time as the basis for other Xerox products, such as their office automation system and as a front end for their printing systems. However, it wasn’t so much ported as the virtual machine was. (Just like Symbolics did with OpenGenera on Alpha.)

What I’m suggesting is that they could have done a full port to the hardware; OpenGenera is still an Ivory CPU emulator. In 1986-7 you could get an AT-compatible 80386 system running at 16-25MHz that supported 8-32MB of RAM for 10-20% the price of a Symbolics workstation, and while it might not run Lisp quite as fast as a 3600 series system, it would still be fast enough for both deployment and development—and the next generation would run Lisp at comparable performance.


I don't really understand why Lisp was so intrinsically tied to expert systems and AI. It seems to me that Scheme (and, to an extent, Common Lisp or other Lisps) are pretty good platforms for experimenting with software ideas, long before Jupyter notebooks existed.


To be fair to Symbolics: a lot of companies back then thought their CPU was the secret sauce. Some still do...


I think Apple knows that it's the whole widget (including software and hardware) that matters.


For those unaware, Symbolics eventually "pivoted" to DEC Alpha, a supposedly "open" architecture, which is how Genera became Open Genera, like OpenVMS. (And still, like OpenVMS, heavily proprietary.)


Didn’t “open” at the time mean “open system”, i.e. a system that is open for external connections (aka networking), and not so much open as in “open source”?


It was both Alpha being quasi-open itself, like OpenPOWER today and like the earlier PDP minis had been, whereas the VAX had been pretty locked down, and OpenVMS getting POSIX compatibility (admittedly probably more the latter than the former, but DEC was big on branding things "open" at the time, partly because they were losing ground):

https://www.digiater.nl/openvms/decus/vmslt05a/vu/alpha_hist...

> Although Alpha was declared an "open architecture" right from the start, there was no consortium to develop it. All R&D actions were handled by DEC itself, and sometimes in cooperation with Mitsubishi. In fact, though the architecture was free de jure, most important hardware designs of it were pretty much closed de facto, and had to be paid-licensed (if possible at all). So, it wasn't that thing helping to promote the architecture. To mention, soon after introduction of EV4, DEC's high management offered to license manufacturing rights to Intel, Motorola, NEC, and Texas Instruments. But all these companies were involved in different projects and were of very little to no interest in EV4, so they refused. Perhaps, the conditions could be also unacceptable, or something else. Mistake #5.


> Didn’t “open” at the time mean “open system”, i.e. a system that is open for external connections (aka networking), and not so much open as in “open source”?

Networking was the initial impetus, but the phrase came to include programming interfaces, which is why POSIX was considered such a big deal. The idea was to promote interoperability and portability, as opposed to manufacturer-specific islands like those from IBM and DEC.


No, it meant industry standards, instead of proprietary ones, that is why POSIX, Motif, and others are under The Open Group.


Yes, but also: OpenGenera was ported to x86 some time ago.


I believe it's even been ported to the M1 a few years ago: https://x.com/gmpalter/status/1361855786603929601


Kinda sad seeing those follow-up tweets about licensing issues years later.


I think it would have been easier to port the MIT/LMI/TI environment to standard hardware as it was still 32-bit.


There’s not a huge amount of _explicit_ dependency on the bit width of the system in either the 3600 or Ivory. Of course there’s still plenty of _implicit_ dependency in terms of hardware interaction, object layout in memory, collector implementation, etc. but that’s all stuff that had to be dealt with anyway to port from CADR to 3600 in the first place, and then again to port from 3600-series to Ivory.

I was thinking that someone could have rewritten the CADR microcode to run on a 68020+Custom MMU system with no other OS, there isn't all that much of it.

This would be tied to the bit width of the system.


The 68030 really only dropped the Call Module and Return from Module instructions, which nobody used anyway since relatively few developers wanted to write code only for 68020 and higher around the time the 68020 and then 68030 shipped.


They did this in the 1970s and 1980s too; back then they were called “forms libraries”, but they were often full application frameworks in ways that would be familiar to modern developers of native graphical apps.


Turbo Pascal springs to mind because I know someone who made a video store management system with all kinds of forms and screens (via Turbo Vision[0]) in the early 90s.

[0] https://en.wikipedia.org/wiki/Turbo_Vision

