
Are you from the Middle Ages, or are you so out of touch with blue-collar work that you're under the impression that the average sewer worker has to manually handle waste?


It's not "dumb", you're just presenting a steelman that directly contradicts what the person you're replying to wrote.

You might indeed be shocked to find that not everyone consumes fast food.


What strikes me about this exchange is that no one is talking about the money. In the past, you could do either and no one had to care except you. Now a lot of jobs that people could find fulfilling aren't, because the economy is so distorted, so how are we supposed to look at this honestly? I guess let's walk these people off the plank and get this over with...


Did the people who worked on the farm to grow and harvest your food enjoy it?


Are you seriously and earnestly arguing that harm-minimisation is useless and we should all just open the human-suffering throttle, or did you just not think that far ahead?

I am hoping the latter. Being foolish is far more temporary a condition than being cruel.


Increasing productivity is how we minimize harm. Many people hate their job but are happy to have it because it allows them to consume things. More production = less suffering


How are you “minimizing harm” by pearl clutching about not eating fast food? The front line people you are interacting with at the fast food restaurant or the grocery store have it easiest in the chain of events that it takes food to get to you. Do you think that fast food workers have it harder than the people at the grocery store?


0.8*harm < 0.81*harm - hope this helps!

Also, the core point is about people being able to find meaning in their work. That you've decided to laser in on this specific point to go on a tangent of whataboutism is largely irrelevant.

Have a nice day.


The fact is that most of the 3-4 billion+ people on earth don’t “find meaning in their work” and they only work because they haven’t overcome their addiction to food and shelter. If the point was irrelevant to your argument, why make it?


I didn't actually make the point initially. I was challenging the reply's points that:

a) because some people are already miserable at work, we shouldn't care that other people might become miserable at work

b) someone saying they prefer their food to be made without suffering is clearly a hypocrite in all cases because... there are miserable people in fast food jobs?

I mean... really. Come on now.


People who work in fast food may not be “passionate” about their job. But they aren’t “suffering”. You aren’t relieving anyone’s “suffering” by not eating fast food, or even if there were no fast food at all. They aren’t “suffering” any more than people working at the grocery store.

Cry me a river for software developers (I’ve been delivering code professionally for 30 years, and as a hobbyist before that) because now we have something that makes us more efficient.


I don't know if you're intentionally being obtuse or you just failed third-grade reading comprehension, but can you please go argue with the people actually making these points (rather than me, a random person who has replied to them)?


So exactly what point are you trying to make? That software developers - at least the employed ones - “are suffering” because of AI? That you don’t eat fast food because you believe the employees are being exploited? What exactly is your point?


As soon as Intel killed Itanium, the clock started ticking for HP-UX.


It's almost as if different equipment can serve different purposes...


There are F500 companies shipping Ubuntu Core on devices that will only permit signed firmware, so I'm not sure your assessment is correct.

https://buildings.honeywell.com/au/en/products/by-category/b...


Depending on the product, this might be OK! If you've ever had cause to read the GPLv3 closely, you'll have noticed that the anti-tivoisation clause is, for some reason, only really aimed at "User Products" (defined as "(1) a “consumer product”, which means any tangible personal property which is normally used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling"). This one looks like a potential grey area, since it's not obvious whether it's intended for buildings that anyone would live in.


I worked on an embedded product that was leased to customers (not sold). The system included GPLv3 portions (e.g. bash 5.x), but they concluded that we did not need to offer source code to their customers.

The reasoning was that the users didn’t own the device. While I personally believe this is not consistent with recent interpretations of the license by the courts, I think they concluded that it was worth the risk of a customer suing to get the source code, as the company could then pull the hardware and leave that customer high and dry. It is unlikely any of their users would risk that outcome.


Take a look at their customer testimonials [0] and ask yourself if they have recently made anticompetitive or user-hostile moves. Now, ask yourself: do you think they like being beholden to a license that makes it harder for them to keep their monopolies?

[0]: https://ubuntu.com/pro/

Edited to add: it would be cool if, instead of the top-most wealth-concentrators F[:500], there was an index of the top-most wealth-spreaders F[-500:]. What would that look like? A list of cooperatives?


As long as nobody sues them, everything is fine.


And if drivers followed the Safe Driving Protocol (SDP), we wouldn't need airbags. Real life happens regardless of the imaginary frameworks infosec people dream up.


That's not "verification" by any definition of the word.


Good point. In a way, we can verify to a customer that we have that policy set up by showing them the certificate. But you are correct that we haven't gone as far as asking Anthropic or OpenAI for proof that they aren't retaining any of our data. What we did do is get their SOC 2 Type II reports, and those showed no significant security vulnerabilities that would impact our usage of their services. So we have been operating under the assumption that they are honoring our signed agreement, within the scope of those reports, and our customers have been okay with that. But we are definitely open to pursuing that kind of proof at some point.


All of which has nothing to do with OpenAI or Anthropic deciding to use your data??? SOC 2 Type II is completely irrelevant.

You've got two companies that basically built their entire business upon stealing people's content, and they've given you a piece of paper saying "trust me bro".


I appreciate your skepticism. At the end of the day we're focused on delivering real value while taking every security precaution we reasonably can and building new technology at the same time. Eventually, as we grow, we'll be able to offer full self-hosting for our customers and perhaps even spin up our own LLMs on our own servers. But until then, we can only do so much.


Welcome to the invalidated EU-US Safe Harbour, the invalidated EU-US Privacy Shield, and the soon-to-be invalidated EU-US Data Privacy Framework (DPF) and Transatlantic Data Privacy Framework (TADPF).

Digital sovereignty and respect for privacy and local laws are the exception in this domain, not the expectation.

As Max Schrems puts it: "Instead of stable legal limitations, the EU agreed to executive promises that can be overturned in seconds. Now that the first Trump waves hit this deal, it quickly throws many EU businesses into a legal limbo."

After recently terrifying the EU with the truth in an ill-advised blog post, Microsoft are now pitching the concept of a 'Sovereign Public Cloud', with a supposedly transparent and indelible access-log service called Data Guardian.

https://blogs.microsoft.com/on-the-issues/2025/04/30/europea...

https://www.lightreading.com/cloud/microsoft-shows-who-reall...

If nation states can't manage to keep their grubby hands off your data, private US companies obliged to co-operate with the intelligence apparatus certainly won't either.


You make valid points. At the end of the day we're focused on delivering real value while taking every security precaution we reasonably can and building new technology at the same time. Eventually, as we grow, we'll be able to offer full self-hosting for our customers and perhaps even spin up our own LLMs on our own servers. But until then, we can only do so much.


Honestly, I'm surprised your lawyers let you post that here.

+1 for honesty and transparency


Typically, the way this really works is that you, the startup, use a service provider (like OpenAI) that publishes its own external audit reports (like a SOC 2 Type II). The provider's SOC 2 auditors will see that it has a policy covering how it handles customer data for customers under Agreement XYZ, and will require evidence that the provider is actually following that policy, i.e. not using the data for undeclared purposes or whatever else.

Audit rights are all about who has the most power in a given situation. Just like very few customers are big enough to go to AWS and say "let us audit you", you're not going to get that right with a vendor like Anthropic or OpenAI unless you're certifiably huge, and even then it will come with lots of caveats. Instead, you trust the audit results they publish and are implicitly trusting the auditors they hire.

Whether that is a sufficient level of trust is really up to the customer buying the service. There's a reason many companies sell on-prem hosted solutions or even support air-gapped deployments: for some customers, no level of external trust is quite enough. But for many other companies and industries, some level of trust in a reputable auditor is acceptable.


Thanks for the breakdown, Seth! We did indeed get their SOC 2 Type II reports and made sure they showed no significant security vulnerabilities that would impact our usage of their services.


Is it a 3rd party that is verifying?


We haven't looked into this kind of approach yet, but definitely worthwhile to do at some point!


So you’re taking the largest copyright infringers at their word?


Right now we are taking the zero data retention policies we signed with our LLM vendors as our verification. We did also get their SOC 2 Type II reports, and they showed no significant security vulnerabilities that would impact our usage of their services. We're doing our best to deliver value while taking as many security precautions as possible: our own data retention policy, encrypting data at rest and in transit, row-level security, SOC 2 Type I and HIPAA compliance (in observation for Type II), and secret managers. We have other measures we plan to take, like de-identifying screenshots before sending them up. Would love to get your thoughts on any other security measures you would recommend!


How exactly would you do this? Be realistic


> Social media may be the actual lead pipes to our empire [1].

In America, the lead pipes of the empire are the literal lead pipes still in use all over the country.


Wait until you find out about 'docker-compose' vs 'docker compose'!
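
Roughly, for anyone who hasn't hit this yet: 'docker-compose' is the legacy standalone Compose V1 tool, while 'docker compose' is the Compose V2 plugin that ships with newer Docker CLIs. They read the same YAML, but only one may exist on a given machine. A quick way to check which you have (a sketch; output varies by install):

    # Compose V2: runs as a subcommand of the docker CLI
    docker compose version

    # Compose V1: legacy standalone binary (may be absent on newer installs)
    docker-compose version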


Probably before LLMs figure it out.


It seems like they've just made a board to route all the NAND modules' pins to a connector, and then another board to go back again. There don't seem to be any electronics on the boards other than the modules and a few resistors.

