Hacker News | oraphalous's comments

The prediction that there will be greater demand for specialist frontend and backend engineers is the one that surprises me. How do folks think about it? I've been assuming the opposite - that demand for specialists will decline because expectations around what a single person should be able to do will grow.

I'm a frontend specialist developer currently taking some time off to reskill - and trying to decide exactly which direction to go. My thinking was that I would need to lean into design and product - leverage my technical knowledge in building interfaces to be able to inform the product side. Knowing what is easy or hard to build hopefully would speed up the product side.


But it's a loss if it's forced into risky investments that aren't productive.

Maybe a societal foundation of "magic man in the sky" wasn't so great a foundation after all?


It would be quite the opposite: the magic man favours families, children, and population growth. The rejection of these beliefs seems to be what is detrimental to society. The other stuff I'll leave for you to decide.


What a non sequitur.


I'm not convinced. I believe the institutions of the church were, and often still are, the foundations of communities in many positive ways.

But the fact that they rest on an arbitrary belief in one of the popular gods does make it a pretty shaky foundation.

We see it right now: as belief in Christianity has dwindled, so too have the communities the church was supporting. The community can be separate from belief, and probably should be if it is to support a greater community.


He's not wrong though.


Not a non sequitur ... go back and read what they responded to.


Is this a Fortnite reference?


I think the west is in for a rude shock when it finally realises how fast China is developing technologically.

I was there a few months ago in Guangzhou, it was stunning to see how many EVs were on the road. You can tell too - because they have a distinctly coloured license plate.

The scale that China can achieve is just mind-boggling. We went into a giant mall - 7 levels. And it was all just jewellery. A whole mall! Blew my mind. Apply that sort of scale to technological development. They can do things other countries just can't because of that scale. Here's another example of a data center they just built:

https://www.youtube.com/watch?v=eUVF8crDZ7g

I get the sense that some people are just starting to cotton on to what China is really becoming - for example, that recent review from Marques Brownlee of the Xiaomi EV.

But still - most of the narrative in the west seems to be doomerism about their demographics and real-estate overinvestment. We will see.

It's going to be interesting to see if China can offset its oncoming demographic challenges with its technological progress.


While this is certainly true, I don't really know if molten salt reactors are the best example. The US and Russia shared molten salt reactor technology after the collapse of the Soviet Union, and then the US shared those details with China in 2014 under an international DOE initiative: https://web.archive.org/web/20130919025430/http://www.smartp...

The PRC has much credit to give the United States and Soviets for their thorium reactor designs.


That's interesting - but I guess we need to understand better what innovation counts as a good example of incipient Chinese superiority.

If China is leapfrogging the west technologically, then it can still only be a very recent fact. It stands to reason that their current innovations will be built on top of much of what they learnt from the west.

That doesn't mean a particular innovation doesn't deserve to be counted as an example of their progress just because it stands on the shoulders of western tech. Most progress stands on the shoulders of others anyway. So I don't feel examples like this should be discounted.

But also - there are a lot of dimensions to think about here. For example, one dimension is the raw development of a technology. Maybe China has not developed so much newer tech (although I'm reading much to the contrary in this respect also). But another dimension is the economics involved in implementation. With scale comes so many advantages. Business models that just aren't economical in Western countries can thrive in China.

There was one example that really struck me. I was at a mall with my partner and her friends (who live in Guangzhou). Those friends wanted to order something to drink. One of them got out their phone, ordered those drinks and had them delivered outside the mall. This was less effort and cost in their minds than actually wandering around the mall trying to find the drinks they wanted. Heaps of people just get their morning coffee delivered.

You might not think this example is a good demonstration of technological superiority. Every country has the phone and internet tech for goods delivery. But leveraging the technology to this degree is only possible where the cost of delivery is so low - as it is in China.

So yeah - raw innovation is one dimension, but opportunity for implementation is another very important one as well.


Well, now you're moving the goalpost. I think China has every right to be proud of developing economically beyond what the Soviets were capable of.

Characterizing that success is easy, looking at the economy; China has tons of raw and manufactured resources, with a weak financial sector. That's the exact opposite of what you see in developed economies in America and Europe, and the service industries will reflect that. Oftentimes, a weak finance sector is reflected in a surplus of poor people (which can be confirmed looking at China's GDP/capita).

What you're describing in all of your anecdotes are not the nascent signs of superiority. It's cheap concrete and poor people who will do jobs that other people consider insulting. There are absolutely domains where China has met or exceeded the global high-water mark in various areas (HGVs, BVR missiles, expendable AESA/GaN radar, speaking as an aviation nerd), but much of that stems from the aforementioned surplus. The Soviets also had wonderful trinkets that the western world couldn't copy, but it didn't save them when they needed money and foreign support.


Correct, the PRC has much credit to give and indeed does so. Yet, they still picked up the development and successfully ran with it.


I bought the book - looks good! Would be keen to know which magazines they were originally published in. I feel you should include those references in the book (forgive me if I've missed them.)


All of the source references are in the section called "Permissions" at the end of the book; this is a common way for anthologies to do references, but I understand it is easy to miss!


I don't even understand why it's everyone else's problem to opt-out.

Eventually - for how many of these AI companies would a person have to track down their opt-out processes just to protect their work from AI? That's crazy.

OpenAI should be contacting every single one and asking for permission - like everyone has to in order to use a person's work. How they are getting away with this is beyond me.


Copyright doesn't prevent anyone from "using" a person's work. You can use copyrighted material all day long without a license or penalty. In particular, anyone is allowed to learn from copyrighted material by reading, hearing, or seeing it.

Copyright is intended to prevent everyone from copying a person's work. That's a very different thing.


There is an argument to be made that ChatGPT mildly rewording/misquoting info directly from my blog is copying.


And it is. And you can sue them for that. What you can’t do is get upset they (or their AI) read it.


Sure, but that's a different claim and a different argument.


I think to make that argument you would need evidence that someone prompted ChatGPT to reword/misquote info directly from your blog, at which point the argument would be that that person is rewording/misquoting info directly from your blog, not ChatGPT.


I don't think so: The user is merely making a request for copyrighted material, which is not itself infringing, even if their request was extremely specific and their intent was obvious.

OpenAI would be the company actually committing the infringement and providing the copy in order to satisfy the request.

If the law suddenly worked the other way around, companies would no longer be able to prosecute people for hosting pirated content online, because the responsibility would lie with the users choosing to initiate the download.


That would fall under fair use.

Legally, you'd struggle to prove any form of infringement happened. Making a copy is fine. Distributing copies is what infringes. You'd need to prove that is happening.

That's why there aren't a lot of court cases from pissed off copyright holders with deep pockets demanding compensation.


> Copyright doesn't prevent anyone from "using" a person's work.

It should. The 'free and open internet' is finished because nobody is going to want to subject their IP to rampant laundering that makes someone else rich.

Tragedy of the commons.


I can see this both ways. For the sake of argument, please explain why using IP to train an AI is evil, but using the same IP to train a human is good.

Note that humans use someone else's IP to get rich all the time. E.g. Doctors reading medical textbooks.


>Note that humans use someone else's IP to get rich all the time. E.g. Doctors reading medical textbooks.

You need a better example; a textbook was created with the exact purpose of sharing knowledge with the reader.

My second point: if you write a poem and I read it and memorize it, then publish it as my own with some slight changes, you would be upset?

If I get your painting, then use a script to apply a small filter to it and sell it as my own, is this legal? Is my script "creative"?

These AIs are not really creative; they just mix inputs and then interpolate an answer. In some cases you can't guess what input image/text was used, but in other cases the exact source that was used was shown, just copy-pasted into the answer.


> My second point, if you write a poem and I read it and memorize it, then publish it as my own with some slight changes you would be upset?

I feel the problem with analogizing to humans while trying to make a point against unlicensed machine learning is that applying the same moral/legal rules as we do to humans to generative models (learning is not infringement, output is only infringement if it's a substantially similar copy of a protected work, and infringement may still be covered by fair use) would be a very favorable outcome for machine learning.

> they just mix inputs and then interpolate an answer, in some cases you can't guess what input image/text was used

Even if you actually interpolated some set of inputs (which is not how diffusion models or transformers work), without substantial similarity to a protected work you're in the clear.

> is my script "creative"? [...] This AIs are not really creative [...]

There's no requirement for creativity - even traditional algorithms can make modifications such that the result lacks substantial similarity and thus is not copyright infringement, or is covered by fair use due to being transformative.


>I feel the problem with analogizing to humans while trying to make a point against unlicensed machine learning is that applying the same moral/legal rules as we do to humans to generative models (learning is not infringement, output is only infringement if it's a substantially similar copy of a protected work, and infringement may still be covered by fair use) would be a very favorable outcome for machine learning.

Agree. Copyright is clear, so if I can make ChatGPT output copyrighted material then OpenAI should pay me, correct? Or will you claim that this is rare, a mistake, and we should forgive OpenAI, while a human would have had to pay damages?


> so if I can make ChatGPT output copyrighted material then Open AI should pay me correct?

If by "make" you mean you're coaxing it into outputting your work, it'd be difficult to allege damages. If you show it's regurgitating your registered work to normal users, and it's not covered by fair use factors (e.g: it's outputting a significant portion of your work, in a non-transformative manner, and this is negatively impacting the market for that work), then you'd have a good case to bring.

> Or you will claim that this is rare, a mistake and we should forgive OpenAI

Rarity will affect damages, but they wouldn't be off the hook if such a situation does happen. To my knowledge no safe harbor applies here, given it's their own bot and not human users.


Is the AI allowed to decide unprompted how to spend the money? Can it decide that it doesn't like the people who made it and donate it to charity? Can the AI start its own company and not hire anyone who made it? Can the AI decide that it prefers the open Internet and will answer all questions for free?

"For the sake of argument" is a coward's way of expressing an unpopular opinion in public. Join a debate club if you're actually being genuine.


I never used the word evil.

That said, machines don't have natural rights, and you don't get to use them to violate mine.


scale


Under this mentality, every search engine index would be shut down.


cool


Napster had a moment too, but then they got steamrolled in court.

Courts are slow, so it seems like nothing is happening, but there’s tons of cases in the pipeline.

The media industry has forced many tech firms to bend the knee, OpenAI will follow suit. Nobody rips off Disney IP and lives to tell the tale.


If your business model depends on the Roberts' court kneecapping AI, pivot. Training does not constitute "copying" under copyright law because it involves the creation of intermediate, non-expressive data abstractions that do not reproduce or communicate the copyrighted work's original expression. This process aligns with fair use principles, as it is transformative, serves a distinct purpose (machine learning innovation), and does not usurp the market for the original work.


I believe there are some other issues other than just "is it transformative".

I can't take an Andy Warhol painting, modify it in some way and then claim it's my own original work. I have some obligation to say "Yeah, I used a Warhol painting as the basis for it".

Similarly, I can't take a sample of a Taylor Swift song and use it myself in my own music - I have to give Taylor credit, and probably some portion of the revenue too.

There's also still the issue that some LLMs and (I believe) image generation AI models have regurgitated works from their training models - in whole or part.


>I can't take an Andy Warhol painting, modify it in some way and then claim it's my own original work. I have some obligation to say "Yeah, I used a Warhol painting as the basis for it".

If you don't replicate Warhol's painting entirely, then you are fine. It's original work.

The number of Scifi novels I read that are just an older concept reimagined with more modern characters is huge.

>I can't take an Andy Warhol painting, modify it in some way and then claim it's my own original work. I have some obligation to say "Yeah, I used a Warhol painting as the basis for it".

In most sane jurisdictions you can sample other work. Consider collage. It is usually a fair use exemption outside of the USA. If LLMs cause keyboard warriors to develop some seppocentric mindvirus leading to the destruction of collage I will be pissed.

>There's also still the issue that some LLMs and (I believe) image generation AI models have regurgitated works from their training models - in whole or part.

Considered a high-priority bug and stamped out. Usually it's in part because a feature is common to all of an artist's work, like their signature.


> I can't take an Andy Warhol painting, modify it in some way and then claim it's my own original work.

This is a hilarious choice of artist given that Warhol is FAMOUS for appropriating work of others without payment, modifying it in some way, and then turning around and selling it for tons of money. That was the entire basis of a lot of his artistic practice. There was even a Supreme Court case about it.


There was a time when it did not usurp the market for the original work, but as the technology improves and becomes more accessible, that seems to be changing.


In my experience, when existing laws allow an outcome that causes significant enough harm to groups with influence, the laws get changed.


> Training does not constitute "copying" under copyright law

It should.


And yet Mickey Mouse is in the public domain. Something those of us who remember the 90s thought would never happen.


Just the oldest Mickey. They gave up on it because the cost/benefit wasn't deemed worth it anymore.


> I don't even understand why it's everyone else's problem to opt-out.

Because the work being done, from the point of view of people who believe they are on the verge of creating AGI, is arguably more important than copyright.

Less controversially: if the courts determine that training an ML model is not fair use, then anyone who respects copyright law will end up with an uncompetitive model. As will anyone operating in a country where the laws force them to do so. So don't expect the large players to walk away without putting up a massive fight.


Of note here is the reason it's "important" is it will make a shit-ton of money.


That, coupled with the obvious ideological motivations. Success could alter the course of human history, maybe even for the better.

If you feel that what you're doing is that important, you're not going to let copyright law get in the way, and it would be silly to expect you to.


I can't say I believe that. If that were the case, they'd focus more on results and less on hyping up the next underwhelming generation.


For one thing, they are focused on money because they need lots of it to do what they're doing.

For another, the o1-pro (and presumably o3) models are not "underwhelming" except to those who haven't tried them, or those who have an axe to grind. Serious progress is being made at an impressive pace... but again, it isn't coming for free.


Oh please. OpenAI and I guess every other AI company are for-profit.

The only change they are motivated by is their bank balances. If this were a less useful tool they’d still be motivated to ignore laws and exploit others.


Hard to say what motivates them, from the outside looking in. There have been signs of cultlike behavior before, such as the way the rank and file instantly lined up behind Altman when he was fired. You don't see that at Boeing or Microsoft.

Obviously it's a highly-commercial endeavor, which is why they are trying so hard to back away from the whole non-profit concept. But that's largely orthogonal to the question of whether they feel they are doing things for the benefit of humanity that are profound enough to justify blowing off copyright law.

Especially given that only HN'ers are 100% certain that training a model is infringement. In the real world, this is not a settled question. Why worry about obeying laws that don't even exist yet?


> Hard to say what motivates them, from the outside looking in.

It isn't.

> There have been signs of cultlike behavior before, such as the way the rank and file instantly lined up behind Altman when he was fired.

This only reinforces that the real drive is money.


>Especially given that only HN'ers are 100% certain that training a model is infringement. In the real world, this is not a settled question. Why worry about obeying laws that don't even exist yet?

This is exactly why people are against it.

Your argument is that there is no definitive law, and therefore the wishes of the creators whose data you scrape to train on are irrelevant.

If the motivation was to help humanity, they’d think twice about stepping on the toes of the humanity they want to save and we’d hear more about nontrivial uses.


> Your argument is that there is no definitive law, and therefore the wishes of the creators whose data you scrape to train on are irrelevant.

Correct, that is the position of the law. Here in America, we don't take the position, held in many other countries, that everything not explicitly permitted is forbidden. This is a good thing.

> If the motivation was to help humanity, they’d think twice about stepping on the toes of the humanity they want to save

Whether it is permissible to train models with copyrighted content is up to the courts and Congress, not us. Until then, no one's toes are being stepped on. Everybody whose work was used to train the models still holds the same rights to that work that they held before.


>Until then, no one's toes are being stepped on. Everybody whose work was used to train the models still holds the same rights to that work that they held before.

And yet artists don’t feel like their work should be used for training.

I’m not sure how you can argue that the intentions are unknowable, when clearly you and the AI companies don’t care about the people whose work they have to use to train their models and these people’s wishes. Motivation is greed.


> And yet artists don’t feel like their work should be used for training.

The law isn't really all that interested in how "artists feel." Neither am I, as you've surmised. The artists don't care how I feel, so it would be kind of weird for me to hold any other position.

In any case, copyright maximalism impoverishes us all.


> OpenAI should be contacting every single one and asking for permission - like everyone has to in order to use a person's work

This is the problem of thinking that everyone “has” to do something.

I assure you that I (and you) can use someone else’s work without asking for permission.

Will there be consequences? Perhaps.

Is the risk of the consequences enough to get me to ask for permission? Perhaps.

Am I a nice enough guy to feel like I should do the right thing and ask for permission? Perhaps.

Is everyone like me? No.

> How they are getting away with this is beyond me.

Is it really beyond you?

I think it’s pretty clear.

They’re powerful enough that the political will to hold them accountable is nonexistent.


How does an average joe evaluate the claim that their content moderation was bad? Cause folks on the left seem very upset that it's being replaced by notes, and folks on the right seem very glad that it's going. How do I judge this for myself?


I too would like to hear some examples.

On the one hand you have gurus claiming that AI agents are going to make all SaaS redundant; on the other, claiming that AI isn't going to take my coding job but that I need to adapt my workflows to incorporate AI. We all need to start preparing now for the changes that AI is going to cause.

But these two claims aren't compatible. If AGI and these super agents are that bonkers amazeballs that they can replace entire SaaS companies - then there is no way I'm going to be able to adapt my workflows to compete as a programmer.

Further, if the wildest claims about AI end up proving true - there is simply no way to prepare. What possible adaptation to my workflow could I come up with that an AI agent could not surpass? Why should I bother learning how to implement (with today's APIs) some RAG setup for a SaaS customer service chatbot when presumably an AI agent is going to make that skillset redundant shortly after?
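For what it's worth, the RAG setup in question is a pretty small pattern. Here's a minimal sketch - with a toy bag-of-words retriever standing in for a real embedding API, a hardcoded doc list standing in for a vector store, and a template string standing in for the LLM call (all hypothetical stand-ins, not any real product's API):

```python
# Minimal RAG sketch: retrieve the most relevant doc, then "generate"
# an answer grounded in it. Every helper here is a toy stand-in.
import math
from collections import Counter

DOCS = [
    "Refunds are processed within 5 business days.",
    "Support is available Monday to Friday, 9am to 5pm.",
    "Premium plans include priority support.",
]

def embed(text):
    # Stand-in for a real embedding model: a bag-of-words vector.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=1):
    # Rank docs by similarity to the query; a vector DB does this at scale.
    q = embed(query)
    return sorted(DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def answer(query):
    # Stand-in for the LLM call: stuff retrieved context into a prompt.
    context = " ".join(retrieve(query))
    return f"Based on our docs: {context}"

print(answer("How long do refunds take?"))
```

The "hard" parts in practice - chunking, embedding quality, prompt construction - are all swaps inside this same skeleton, which is partly why the skillset feels so disposable.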

I'm going to be interviewing for frontend roles soon, and for my prep I'm just going back to basics and making sure I remember, on demand, all the basics: CSS, HTML, JS/TS - fuck the rest of this noise.


Programmers don't work in isolation, so I don't know how necessary it would be to quickly adapt your workflows to compete. If there's something that's useful to adopt, there will be a stream of blog posts, coworkers, people at user groups and whatnot spoon-feeding what they learned to others. I don't think there's much cause for FOMO; I don't think it makes a big difference whether you start using a faster way to work a few months earlier or later than others. It can be cheaper to not jump on any hype train and potentially miss out on genuine improvements for a while, than to jump on all the hype trains and waste a lot of time on stuff that goes nowhere.

And like you said, if the wildest claims hold true, all programmers are out of a job by the end of 2026 anyway, with all other jobs following over the course of a few years. There's too many variables to predict what would happen in such a scenario, so probably best to deal with it if it happens.

So to me, your strategy checks out. I've personally invested some time into code-generating and agentic tooling, but ultimately went back to Claude-as-Google-replacement. By my estimation, about a 5-10% productivity boost compared to my workflow in 2022. The work is about the same, I just learn a bit faster.


> And like you said, if the wildest claims hold true, all programmers are out of a job by the end of 2026 anyway, with all other jobs following over the course of a few years. There's too many variables to predict what would happen in such a scenario, so probably best to deal with it if it happens.

So much this. AGI is the equivalent of a nuclear apocalypse in many ways—it's unlikely, not unlikely enough for comfort, but also totally not worth preparing for because there's basically no way to predict what preparations would actually be helpful, nor is it obvious that you'd even want to survive it if it happened.

The expected value of prepping for it isn't worth the investment, so it's better to do what most of us already do for nuclear war and pretty much pretend it won't happen.


I need an AI agent to continuously ask questions of PMs or stakeholders until the requirements are less vague. The good thing is this would be a plain-English discussion, which LLMs are good at. A PM can ask if something is technically feasible to some degree, too. Maybe it can even break up tickets in a much better fashion.


I’m a PM; today I built a working mockup with Windsurf (golang + wails + vuejs + duckdb). Windsurf uses Codeium, branded as the first agentic IDE.

Your requirements will improve; I'm not sure if in the long run I'll still need developers to build the actual software.

The development process with Windsurf is a bit like throwing dice, hoping for a 6. A lot of trial and error, but if you check the git log, you see about 15 minutes between commits per feature request. Windsurf does a good job of summarizing the entire feature-request chat into a short git commit message. Every git commit reads like a user story.


How… do I find PMs like you? Literally have never worked with a single one that bothered to understand the technology they are building on top of at a deep enough level.

Maybe I just need to teach the ones I work with that it is now possible to trivially prototype many ideas without much or any coding skill.


Most PMs resist this because then they know the understanding of the requirements falls upon them. That has traditionally been the role of architects, analysts, developers, other stakeholders, etc., and if you replace them with an LLM, well, it doesn't have the ability to be a true stakeholder in this way.


As a PM, if you're gathering requirements and building too, have a look at genatron.ai


There are just words on the webpage of genatron. Not a single screenshot or video, no example output, no customer statements. Even the technical details are very thin. It doesn't give me a good impression of what they're trying to sell.


> I need an AI agent to continuously ask questions of PMs or stakeholders until the requirements are less vague.

They’ll just get mad at the AI and tell it to stop asking so many questions. As they already do to humans.


As a PM, ChatGPT is great at helping me write tickets in a structured format from just a single sloppy sentence. I of course review it to make sure it's understanding me properly. But having to explicitly write stuff like intended behaviors when submitting bugs can be really laborious, though I understand why engineers sometimes need that level of clarity (having been one myself for 15 years).


I have not seen one in production, but I did see 'agent products' sold to financial companies for compliance purposes (sanctions, mortgage, other regs). Fascinating stuff that got me mildly interested in the MS troupe.


Could you name any products?


Not by name (edit: and in corporate, product names seem to change a lot from where I sit), but every bigger consulting company/vendor[2] that works with banks/brokers/financial institutions right now seems to have at least some offering in that space to ride the AI wave. The presentation I saw was specifically from Crowe[1].

[1]https://www.crowe.com/ae/-/media/crowe/firms/middle-east-and... [2]https://www.lexisnexis.com/community/insights/legal/b/though...


Pro tip - shrink your window to a tiny box and the DVDs really bounce like crazy.


The whole world seems dedicated to the goal of extracting value rather than creating it.


Or tricking someone else into creating value for you to take.


People tend to create value inherently. If they are not receiving the benefit of that then it would most appropriately be described as theft with the aid of blind regulators.


Anything you get for free, that requires someone else to work to provide it, means you're going to pay for it one way or another.


It could be true for things that can only be "used" once. But I don't think it's a valid point at all times. Recently, for example, I made a little "Linux for dummies" zine and put it on my shelf. Sometimes guests take it, read it, and put it back. Technically, all of them get to read it for free, at no additional cost to me, because this zine already existed before they knew they wanted to "use" it, and it will continue to exist after they "use" it.


Do you let random people use your car for free, too? Do you pay for their gas as well?


Sure, if it doesn't exist before you order it.

If it's already been made, someone may even pay you to take it away.


[flagged]


Capitalism and theft are different things. We should stop using the first to justify the second. If these people were lured into working for free, that is not capitalism.


I'm sure that the glorious Communist Party would build a google where every site is equal in ranking and always occupies the first page.


Not making any claims or value judgments about other economic structures. I wanted to point out that the value extraction aspect of capitalism is fundamentally cold blooded / soulless / impersonal.


As opposed to value extraction under socialism, also known as "GULAG", "down to countryside", "killing fields", "reeducation camps" and "transitory labor regime". The most prominent feature of all those is how kind, warm blooded, soulful and personable they are. Literally everybody who was there (and came out alive, which wasn't easy) makes sure to point that out.


You seem really hung up on the other economic structures, which aren't actually relevant to this discussion.

Why is that?


Maybe because they are relevant, but you refuse to admit it because it would reveal your original claim as shallow, lazy and pointless.


It's a free country. You're free to implement a search engine and let anyone use it for free.

Good luck paying for it, though.


So we need to persuade the NSA to finance our new search engine? Or should we turn to Putin or Xi? The European surveillance services would not be able to


You still have to pay the NSA, in the form of taxes.


This is kind of disingenuous when the grandparent comment is complaining exactly about the particular way this country is free in.


Not at all. If someone wants to fund a charity search engine, they can do it.

No matter how you structure it, somebody is going to have to pay for it.

Another way of saying it is there's no such thing as a free lunch. In any society, any where, any time.

You might as well wish for an antigravity machine :-)


I agree with your sentiment, however it's not exactly true.

According to his son, Milton Friedman used to say this but stopped, because he realized that trade itself is a free lunch: it's a win on both sides, consumer and producer surplus. He replaced it with "always look a gift horse in the mouth".


Depending on who pays, the incentives are different. In a socialist society, where some things are funded by the government, those things are generally much more aligned with the interest of the public than in capitalism, where most money wins.


Man, those Cambodians really loved the killing fields. You should have seen how excited they were


> In a socialist society, where some things are funded by the government, those things are generally much more aligned with the interest of the public than in capitalism

Just for fun, compare the Soviet built Lada cars with cars built by capitalists. Or the contents of supermarkets. Or the quality of health care.

Oh, and the advice given to American tourists visiting the Soviet Union - pack a couple pairs of blue jeans, as they are great for trading for stuff!

I'll make it easy - name any consumer product made by the Soviet Union that was preferable to one built by greedy capitalists. Did you wonder why the US did not import Soviet made consumer products?


I meant more like Germany, countries with strong social safety nets and regulation, rather than full-blown communism.


Socialism results in less productivity. Even in Germany.


This is like a fish saying "living on land results in breathing less water". Yes, that's the point, but the fish almost can't imagine how that might be a good thing.


You're surrounded by and typing on valuable goods and services that you received in exchange for a store of value called money, which you received by providing value to someone else.

