
Incidentally, a Western model has very famously been producing CSAM publicly for weeks.

It's totally reasonable to be skeptical of Palantir without knowing the exact product in question, given their record.


[flagged]


Not really.

The NHS have a track record of failed IT projects, because they have a very high bar for handling data properly.

Palantir have a track record of successful IT projects, because they do what they want and hope there's limited blowback - they've modelled their biggest customer very well, there.

As somebody born in an NHS hospital whose life has been saved by the NHS on at least 3 occasions, I'm more than happy to defend their record.

Palantir, given what has leaked about what they do and how they do it, considerably less so.


> Palantir have a track record of successful IT projects, because they do what they want and hope there's limited blowback - they've modelled their biggest customer very well, there.

What does this mean?


Perhaps you could research it; it sounds like a fun thing to do.


Research what? The claim that Palantir "just does what it wants to do and hopes there's no blowback?"

I literally can't even parse what that means. Palantir works in very close coordination with their customers' leadership, and while the company and product "have opinions" about how to do things, it doesn't at all wash out to Palantir "just doing what it wants to do."

Such a claim doesn't even make sense in the context of a business that works the way Palantir does.

Do you mean sometimes customers pay Palantir to do things that other people or the public disagree with?

And who do you think "their biggest customer" is that they're modeling their own approach after?


[flagged]


Every time I’ve heard Peter Thiel speak, I’ve believed he cares about other things. I’m more concerned about how he implements those things.


I used it back when I used Emacs, and it's really neat and simple to use, but I think lazygit is better. I think of lazygit as the spiritual successor to magit. If you're curious what that looks like, there's a descriptive video on their GitHub.


If a Chinese company were making a substantial investment in Korea, they would likely not be stupid enough to jeopardize the project over some paperwork.


The citizens would be very unlikely to view it that way, so they'd have to pray it doesn't hit the news.


Do you have another explanation?


It's not necessary to provide proof that humans are not machines which do nothing but guess the next likely word. But in any case, feeling anything at all is proof of that.


I had the opposite experience. I liked the niceties of Pydantic AI, but I ran into issues that were hard to work around. For example, some of the models wouldn't stream, while the OpenAI models did. It took months to resolve, and well before that I had switched to LiteLLM and just hand-rolled the agentic logic. LiteLLM's docs were simple and everything worked as expected. The agentic code is simple enough that I'm not sure what the value-add of some of these libraries is, besides adding complexity and the opportunity for more bugs. I'm sure they can be useful for more complex use cases, but for most of the applications I've seen, a simple translation layer like LiteLLM or maybe OpenRouter is more than enough.
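To illustrate what I mean by a translation layer, this is roughly what a streaming call looks like through LiteLLM (the model string is just an example; any supported provider works the same way):

    import litellm

    # LiteLLM exposes one OpenAI-style interface and routes by the provider prefix.
    response = litellm.completion(
        model="anthropic/claude-3-5-sonnet-latest",  # example model id
        messages=[{"role": "user", "content": "Summarise this ticket for me"}],
        stream=True,
    )
    for chunk in response:
        print(chunk.choices[0].delta.content or "", end="")

Swapping providers is just a different model string, which is most of what I needed from an abstraction layer.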


I'm not sure how long ago you tried streaming with Pydantic AI, but as of right now we (I'm a maintainer) support streaming against the OpenAI, Claude, Bedrock, Gemini, Groq, HuggingFace, and Mistral APIs, as well as all OpenAI Chat Completions-compatible APIs like DeepSeek, Grok, Perplexity, Ollama and vLLM, and cloud gateways like OpenRouter, Together AI, Fireworks AI, Azure AI Foundry, Vercel, Heroku, GitHub and Cerebras.
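In case it's useful, streaming looks roughly like this today (a minimal sketch; the model string below is just an example, and any of the providers above can be slotted in):

    import asyncio
    from pydantic_ai import Agent

    agent = Agent('openai:gpt-4o')  # example model; any supported provider string works

    async def main():
        # run_stream yields partial output as the model produces it
        async with agent.run_stream('Tell me a joke') as result:
            async for text in result.stream_text(delta=True):
                print(text, end='', flush=True)

    asyncio.run(main())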


I think Simon was being overly charitable by pointing out that there's a chance this exact behavior was unintentional.

It really strains credulity to claim that a Musk-owned AI model answering controversial questions by looking up what his Twitter profile says came completely out of the blue. Unless they can somehow show this wasn't built into the training process, I don't see anyone taking this model seriously for its intended use, besides maybe the sycophants who badly need a summary of Elon Musk's tweets.


The only reason I doubt it's intentional is that it is so transparent. If they did this intentionally, I would assume you would not see it in its public reasoning stream.


They've made a series of equally transparent, awkward changes to the bot in the past; this is part of a pattern.


Would love it if the site had some more information about how the components are implemented, e.g. does it use Tailwind so they're easily modifiable, is there a light mode and a dark mode for each, can you update the animations to fit your needs, etc. They look good though!


Shoot, I forgot to add the FAQ section. Thank you! The components are built with Framer Motion and MUI (just for the sx prop). You will have access to the raw code, the GitHub repo and the npm package. If a component needs to be modified heavily, you can use the raw code. There's still more work to be done to handle full customisation via the npm package. I deliberately didn't use Tailwind, because existing solutions all tend to use Tailwind and shadcn, leaving developers who don't use Tailwind without many options.


It might be worth putting those examples and other use cases you've found in the post. That is certainly something Google can't do.

