
what was the instruction to write and promote this post?


On that thought, you've got to ask yourself why almost every thread has 200+ comments now, some even 500+. It definitely wasn't like this a few months ago.


The comments tend to be overwhelmingly negative, though.


Someone should analyze this and share results. The data should be there


This "AI" mind virus has spread.


Oh boy, I suspected it's already happening. If dang and YC don't provide good guardrails against AI slop, this community will soon die.


Exactly, I'm not going to waste my time reading this AI-generated post that's basically promoting itself.

What I really wonder is who the heck is upvoting this slop on Hacker News?


Another good example, from yesterday: https://news.ycombinator.com/item?id=46860845

Articles like these should be flagged, and typically would be, but they sometimes appear mysteriously flag-proof.


I did because I want to see a critical discussion around it. I'm still trying to figure out if there's any substance to OpenClaw, and hyperbolic claims like this are a great way to separate the wheat from the chaff. It's like Cunningham's Law.


It only has 11 points. It just got caught in the algorithm. That's all.


But I see these kinds of posts every day on HN with hundreds of upvotes. And it's a thousand times worse on Reddit.


The hundreds of billions of dollars in investment probably have something to do with it. Many wealthy/powerful people are playing for hegemonic control of a decent chunk of the US economy. The entire GDP increase for the US last year was due to AI and, by extension, data centers. So it's not only the AI execs, but every single capitalist in the US whose wealth depends on the line going up every year. Which is, like, all of them. In the wealthiest country on the planet.

So many wealthy players are invested in the outcome, and the technology for astroturfing (LLMs) can ironically be used to boost itself and further its own development.


I was thinking the exact same thing earlier today. I think you're right. They have so much at stake, infinite money and the perfect technology to do it.


Generate hot fart to rattle HN.


very good article


Thank you


Correct. No fixed-size pages, but dynamic batches based on the lastEventId.

This is much easier to implement, on both the server and the client side, and it greatly reduces the amount of data transferred. With fixed pages you would return the content of the latest page for every poll request until it is "full".
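To make the cursor idea above concrete, here is a minimal Python sketch (the endpoint shape and field names are assumptions, modeled on the lastEventId scheme described here, not an official client):

```python
def next_request_url(base_url, last_event_id=None):
    """Build the URL for the next poll: no cursor on the first request,
    then ?lastEventId=<id of the last event already processed>."""
    if last_event_id is None:
        return base_url
    return f"{base_url}?lastEventId={last_event_id}"

def advance_cursor(events, last_event_id=None):
    """The server returns everything after the cursor as one dynamic batch;
    the new cursor is simply the id of the last event in that batch."""
    if events:
        return events[-1]["id"]
    return last_event_id
```

An empty batch means the client has caught up and can sleep until the next poll; there is no notion of a page being "full".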


A few issues with message brokers, especially in system-to-system integration:

- Security: In B2B scenarios or public APIs, would you open your broker to the WWW? HTTP has a solid infrastructure, including firewalls, DDoS defence, API gateways, certificate management, ...

- Organisational dependencies: Some team needs to maintain the broker (team 1, team 2, or a third platform team). You have a dependency on this team whenever you need a new topic, user, ... Who is on call when something goes wrong?

- Technology ingestion: A message broker ingests technology into the system. You need compatible client libraries, handle version upgrades, resilience concepts, learn troubleshooting...


> Security: In B2B scenarios or public APIs, would you open your broker to the WWW? HTTP has a solid infrastructure, including firewalls, DDoS defence, API gateways, certificate management, ...

That's a valid point. I think it's a pity we don't have an equivalent standard for asynchronous messaging with the same support as HTTP. However, there are lots of options for presenting an asynchronous public API that would use your message broker behind the scenes, without fully exposing it: Websockets, SSE, web hooks, etc...

> Organisational dependencies: Some team needs to maintain the broker (team 1, team 2, or a third platform team). You have a dependency on this team whenever you need a new topic, user,

True, but don't you have that anyway? How is this different from requesting a new database, service route, service definition, etc?

> Technology ingestion: A message broker ingests technology into the system. You need compatible client libraries, handle version upgrades, resilience concepts, learn troubleshooting...

How simple or complex this is depends on the concrete broker at hand. There are some protocols, e.g. STOMP, that are simple enough that you could write your own client. And as I wrote in the parent: HTTP feeds are a technology as well. You'll have to think about troubleshooting and resiliency there too.
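For illustration, serializing a STOMP 1.2 frame really is only a few lines, which is what makes a hand-rolled client feasible (a sketch following the frame layout in the STOMP spec; the destination name below is hypothetical):

```python
def stomp_frame(command, headers, body=""):
    """Serialize a STOMP frame: command line, one header per line,
    a blank line, then the body, terminated by a NUL byte."""
    lines = [command] + [f"{key}:{value}" for key, value in headers.items()]
    return "\n".join(lines) + "\n\n" + body + "\x00"

# e.g. subscribing to a (hypothetical) topic over a raw TCP socket:
subscribe = stomp_frame("SUBSCRIBE", {"id": "0", "destination": "/topic/orders"})
```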


Hi! I am the author of http-feeds.org. Thank you for your feedback.

For this spec I aimed to keep it as simple as possible. And plain polling-based JSON endpoints are the simplest and most robust endpoints, IMHO.

If you need it, you could implement an SSE representation on the same server endpoint via proper content negotiation.

The main reason why I dropped SSE is the lack of proper back pressure, i.e. what happens when a consumer processes messages slower than the server produces them. Plus, it is quite hard to debug SSE connections, e.g. no support in Postman and other dev tools. And long-lasting HTTP connections are still a problem in today's infrastructure: there is currently no support for SSE endpoints in the DigitalOcean App Platform, for example, and I am not sure about Google Cloud Run.

Overall, plain GET endpoints felt much simpler.


I'm not entirely sure what you mean by this. SSEs are just normal GET requests with a custom header and some formal logic around retries. I've even implemented them manually in PHP, using the `retry` field so that I don't need to keep the connection open any longer than a normal page load.
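To illustrate the point, here is a hand-rolled sketch of the SSE wire format (field names are from the EventSource spec; the values are made up):

```python
def sse_event(data, event_id=None, retry_ms=None):
    """Format one Server-Sent Events message. A `retry:` field tells the
    client how long to wait before reconnecting, so the server can close
    the connection after each batch instead of holding it open."""
    lines = []
    if retry_ms is not None:
        lines.append(f"retry: {retry_ms}")
    if event_id is not None:
        lines.append(f"id: {event_id}")
    for chunk in data.splitlines():
        lines.append(f"data: {chunk}")
    return "\n".join(lines) + "\n\n"
```

The `id:` field is what the browser echoes back as the Last-Event-ID header on reconnect, which is essentially the same cursor mechanism as the spec's lastEventId.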

> The main reason why I dropped SSE is the lack of proper back pressure, i.e. what happens when a consumer processes messages slower than the server produces them.

Could you point me to where the spec handles this, please? As far as I can tell it has the same problem of the server needing to buffer events until the client next connects.


Thank you for writing this great spec up for others to use!

I think you're totally right that back-pressure and plain GETs are important use-cases to support, and I'm really happy to see a beautiful spec written up that articulates concretely how to support them.

It is also great to be able to switch amongst these methods of subscription. For instance, if your server can keep a persistent connection open, it's nice to get realtime updates over a single channel, but still be able to fall back to polling or long-polling if you can't. And if you switch between polling and a subscription, it's nice if you don't have to change the entire protocol, but can just change the subscription method.

There's a growing effort to integrate all our various subscription-over-http protocols into an IETF HTTP standard, that we can use for all these use-cases: https://lists.w3.org/Archives/Public/ietf-http-wg/2022JanMar...

Maybe you'd be interested in incorporating your experience, use-cases, and design decisions into that effort? We have been talking about starting at this November's IETF. [1]

For instance, you can do polling over the Braid protocol [2] with a sequence of GETs, where you specify the version you are coming from in the Parents: header:

    GET /foo
    Parents: "1"

    GET /foo
    Parents: "2"

    GET /foo
    Parents: "3"
Each response would include the next version.

And you can get back-pressure over Braid by disconnecting a subscription when your client gets overwhelmed, and then reconnecting again later on with a new Parents: header:

    GET /foo
    Subscribe: true
    Parents: "3"

    ..25 updates flow back to the client..

    **client disconnects**
Now the client can reconnect whenever it's ready for new data:

    GET /foo
    Subscribe: true
    Parents: "28"
Or if it wants to re-fetch an old version, it can simply ask for it via the version ID it got:

    GET /foo
    Version: "14"
And if the source for these updates is a git repository, we could use a SHA hash for the version instead of an integer:

    GET /foo
    Version: "eac8eb8cb2f21c5e79c305c738aa8a8171391b36"
    Parents: "8f8bfc8ea356d929135d5c3f8cb891031d1539bd"
There's a magical universality to the basic concepts at play here!

[1] https://www.ietf.org/how/meetings/115/

[2] https://datatracker.ietf.org/doc/html/draft-toomim-httpbis-b...


Have a look at https://www.datamesh-architecture.com

Using icons makes a difference. Created with draw.io in sloppy style.


OpenJDK builds by Oracle are updated for only 6 months, even for LTS versions. Plus, these builds are provided for limited platforms only and have no official ready-to-use Docker images.


Well, I use those, because I upgrade every 6 months. And my company forces me to use their Docker image, so I put Java on top of it.

BTW, limited platforms, but they cover most of the cases (64-bit Linux, Windows, and macOS, plus aarch64 for Linux and macOS).


What do Docker images have to do with a JDK decision? Build your own image if you want.
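For what it's worth, rolling your own image is a short Dockerfile (a sketch only: the base image, version number, and tarball URL are placeholders you would swap for the real OpenJDK download link):

```dockerfile
# Sketch: substitute the actual OpenJDK tarball URL for your release.
FROM debian:stable-slim
ARG JDK_URL="https://example.com/openjdk-XX_linux-x64_bin.tar.gz"
RUN apt-get update \
 && apt-get install -y --no-install-recommends curl ca-certificates \
 && curl -fsSL "$JDK_URL" | tar -xz -C /opt \
 && ln -s /opt/jdk-* /opt/jdk \
 && rm -rf /var/lib/apt/lists/*
ENV JAVA_HOME=/opt/jdk PATH="/opt/jdk/bin:${PATH}"
```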

