
LLMs have certainly become extremely useful for Software Engineers. They're very convincing (and people-pleasers, too), and I'm still unsure about the future of our day-to-day job.

But the thing that has scared me the most is how much general society trusts LLM output. I believe that for software engineers it's really easy to see if it's being useful or not -- We can just run the code and see if the output is what we expected; if not, iterate and continue. There's still a professional looking at what it produces.

On the contrary, for the more day-to-day usage of the general public, it's getting really scary. I've had multiple members of my family using AI to ask for medical advice, life advice, and stuff where I still see hallucinations daily, but at the same time the answers are so convincing that it's hard for them not to trust them.

I've seen fake quotes, fake studies, and fake news spread by LLMs that have affected decisions (maybe not crucial ones yet, but time will tell), and that's a danger that most software engineers just gloss over.

Accountability is a big asterisk that everyone seems to ignore


The issue you're overlooking is the scarcity of experts. You're comparing the current situation to an alternative universe where every person can ask a doctor their questions 10 times a day and instantly get an accurate response.

That is not the reality we're living in. Doctors barely give you 5 minutes even if you get an appointment days or weeks in advance. There is just nobody to ask. The alternatives today are:

1) Don't ask, rely on yourself, definitely worse than asking a doctor

2) Ask an LLM, which gets you 80-90% of the way there.

3) Google it and spend hours sifting through sponsored posts and scams, often worse than relying on yourself.

The hallucinations that happen are massively outweighed by the benefits people get by asking them. Perfect is the enemy of good enough, and LLMs are good enough.

Much more important also is that LLMs don't try to scam you, don't try to fool you, don't look out for their own interests. Their mistakes are not intentional. They're fiduciaries in the best sense, just like doctors are, probably even more so.


Chronologically, our main sources of information have been:

1. People around us

2. TV and newspapers

3. Random people on the internet and their SEO-optimized web pages

Books and experts have been less popular. LLMs are an improvement.


Interesting point, actually - LLMs are a return to curated information. In some ways. In others, they tell everyone what they want to hear.

> LLMs are an improvement.

Unless somebody is using them to generate authoritative-sounding human-sounding text full of factoids and half-truths in support of a particular view.

Then it becomes about who can afford more LLMs and more IPs to look like individual users.


> Much more important also is that LLMs don't try to scam you, don't try to fool you, don't look out for their own interests.

When the appreciable-fraction-of-GDP money tap turns off, there's going to be enormous pressure to start putting a finger on the scale here.

And AI spew is theoretically a fantastic place to insert almost-subliminal contextual adverts in a way that traditional advertising can only dream about.

Imagine if it could start gently shilling a particular brand of antidepressant if you started talking to it about how you're feeling lonely and down. I'm not saying you should do that, but people definitely do.

And then multiply that by every question you do ask. Ask about whether you need tyres. "Yes, you should absolutely change tyres every year, whether noticeably worn or not. KwikFit are generally considered the best place to have this done. Of course I know you have a Kia Picanto - you should consider that actually a Mercedes C class is up to 200% lighter on tyre wear. I have searched and found an exclusive 10% offer at Honest Jim's Merc Mansion, valid until 10pm. Shall I place an order?"

Except it'll be buried in a lot more text and set up with more subtlety.


I've been envisioning a market for agendas, where the players bid for the AI companies to nudge their LLM toward whatever given agenda. It would be subtle and not visible to users. Probably illegal, but I imagine it will happen to some degree. Or at the very least the government will want the "levers" to adjust various agendas the same way they did with covid.

I despise all of this. For the moment, though, before all this is implemented, it's perhaps a brief golden age of LLM usefulness. (And I'm sure LLMs will remain useful for many things, but there will be entire categories where they're ruined by pay-to-play, the same as happened with Google search.)


> When the appreciable-fraction-of-GDP money tap turns off, there's going to be enormous pressure to start putting a finger on the scale here.

Yeah, back in the day before monetization Internet pages were informative, reliable and ad-free too.


One difference is that the early internet was heavily composed of enthusiastic individuals. AI is almost entirely corporate and money-focused.

Even most hobby AI projects mostly seem to have an eye on being a side hustle or CV buffing.

Perhaps it's because even in the 90s you could serve a website for basically free (once you had the server). AI today has a noticeable per-user cost.


> Imagine if it could start gently shilling a particular brand of antidepressant if you started talking to it about how you're feeling lonely and down. I'm not saying you should do that, but people definitely do.

Doctors already shill for big pharma. There are trust issues all the way down.


> There are trust issues all the way down.

Nonetheless, we must somehow build trust in others and denounce the undeserving. Some humans deserve trust. Will these AI models?


> Doctors already shill for big pharma.

This is not the norm worldwide.


I hope you're right and that it remains that way, but TBH my hopes aren't high.

Big pharma corps are multinational powerhouses, who behave like all other big corps, doing whatever they can to increase profits. It may not be direct product placement, kickbacks, or bribery on the surface, but how about an expense-paid trip to a sponsored conference or a small research grant? Soft money gets their foot in the door.


But the LLM was probably trained on all the sponsored posts and scams. It isn't clear to me that an LLM response is any more reliable than sifting through Google results.

This seems true for our moment in time, but looking forward I'm not sure it will stay that way. The LLMs will inevitably need to find a sustainable business model, so I can very much see them becoming enshittified the way Google did, eventually making 2) and 3) more similar to each other.

An alternative business model is that you, or more likely your insurance, pays $20/mo for unlimited access to a medical agent, built on top of an LLM, that can answer your questions. This is good for everyone -- the patient gets answers without waiting, the insurer gets cost savings, doctors have a less hectic schedule and get to spend more time on the interesting cases, and the company providing the service gets paid for doing a good job -- and would have a strong incentive to drive hallucination rate down to zero (or at least lower than the average physician's).

The medical industry relies on scarcity and it's also heavily regulated, with expensive liability insurance, strong privacy rules, and a parallel subculture of fierce negligence lawyers who chase payouts very aggressively.

There is zero chance LLMs will just stroll into this space with "Kinda sorta mostly right" answers, even with external verification.

Doctors will absolutely resist this, because it means the impending end of their careers. Insurers don't care about cost savings because insurers and care providers are often the same company.

Of course true AGI will eventually - probably quite soon - become better at doctoring than many doctors are.

But that doesn't mean the tech will be rolled out to the public without a lot of drama, friction, mistakes, deaths, and traumatic change.



This is a great idea, and insurance companies as the customer is brilliant. I could see this extending to prescribing as well. There are huge numbers of people who would benefit from more readily prescribed drugs like GLP-1s, and these have large potential to decrease chronic disease.

> I could see this extending to prescribing as well.

The western world is already solving this, but not through letting LLMs prescribe (because that's a non-starter for liability reasons).

Instead, nurses and allied health professionals are getting prescribing rights in their fields (under doctors, but still it scales much better).


>> LLMs don't try to scam you, don't try to fool you, don't look out for their own interests

LLMs don't try to scam/fool you, LLM providers do.

Remember how Grok bragged that Musk had the “potential to drink piss better than any human in history” and was the “ultimate throat goat,” whose “blowjob prowess edges out” Donald Trump’s. Grok also posited that Musk was more physically fit than LeBron James, and that he would have been a better recipient of the 2016 porn industry award than porn star Riley Reid.


Completely off-topic, but I just love how Musk's pettiness was exploited by the Twitter community.

I had a chuckle reading all of these.


> 2) Ask an LLM, which gets you 80-90% of the way there.

The Internet was 80%-90% accurate to begin with.

Then the Internet became worth money. And suddenly that accuracy dropped like a stone.

There is no reason to believe that ML/AI isn't going to speedrun that process.


Excellent way of putting it. Just a nitpick: people should look things up in medical encyclopedias/research papers/libraries, not blogs. That requires the ability to find and summarize… which is exactly what AI is excellent at.

"Where There Is No Doctor" would be a good place to start. https://hesperian.org/

"Much more important also is that LLMs don't try to scam you, don't try to fool you, don't look out for their own interests"

This is so naive, especially since both Google and OpenAI openly admit to manipulating the data for their own agenda (ads, but not only).

AI is a skilled liar

You can always pride yourself on playing with fire, but the humbler attitude would be to avoid it at all costs.


> 2) Ask an LLM, which gets you 80-90% of the way there.

Hallucinations and sycophancy are still an issue, 80-90% is being generous I think.

I know these are not issues with the LLM itself, but rather with the implementation and the companies behind them (since there are open models as well) -- but what stops LLMs from being enshittified by corporate needs?

I've seen this very recently with Grok: people were asking trolley-like problems comparing Elon Musk to anything, and Grok chose Elon Musk most of the time, probably because it's embedded in the system prompt or training [1].

[1] https://www.theguardian.com/technology/2025/nov/21/elon-musk...


> Much more important also is that LLMs don't try to scam you, don't try to fool you, don't look out for their own interests.

They follow their corporations' interests instead. Just look at the status-quoism of the free "Google AI" and the constant changes in Grok, where xAI is increasingly locking down Grok, perhaps to stay in line with EU regulations. But Grok is also increasingly pro-billionaire.

Copilot was completely locked down on anything political before the 2024 election.

They all scam you according to their training and system prompts. Have you seen the minute change in the system prompt that led to MechaHitler?


Two MAJOR issues with your argument.

> where every person can ask a doctor their questions 10 times a day and instantly get an accurate response.

Why in god's name would you need to ask a doctor 10 questions every day? How is this in any way germane to this issue?

In any first-world country you can get a GP appointment free of charge either on the day or with a few days' wait, depending on the urgency. Not to mention emergency care / 112 any time, day or night, if you really need it. This has existed for decades in most vaguely social-democratic countries in the world (but not only those). So you can get professional help from someone; there's no (absurd) false choice between "asking the stochastic platitude generator" and "going without healthcare".

But I know right, a functioning health system with the right funding, management, and incentives! So boring! Yawn yawn, not exciting. GP practices don't get trillions of dollars in VC money.

> Ask an LLM, which gets you 80-90% of the way there.

This is such a ridiculous misrepresentation of the current state of LLMs that I don't even know how to continue a conversation from here.


> In any first-world country you can get a GP appointment free of charge

Are you really under the assumption that this is a first-world perk?


You're right, it's also true in many middle-income countries, like Brazil.

And also true in "third world" countries.

I love that when I open this post the next day, it's simply downvoted with zero counterpoints.

When I look at the field I'm most familiar with (computer networking), the same pattern holds: it's easy to see how often the LLM will convincingly claim something that isn't true, or that is technically true but doesn't answer the right question, compared with talking to another expert.

The reality to compare to, though, is not one where people often get in contact with true networking experts (though I'm sure it feels like that when the holidays come around!). Compared to the random blogs and search results people are likely to come across on their own, the LLM is usually a decent step up. I'm reminded how I'd know of some very specific forums, email lists, or chat groups to go to for real expert advice on certain networking questions, e.g. issues with certain Wi-Fi radios on embedded systems, while what I see people sharing (even among technical audiences like HN) is the blog of some random guy making extremely unhelpful recommendations and completely invalid claims, getting upvotes and praise.

With things like asking AI for medical advice... I'd love it if everyone having unlimited time with an unlimited pool of the world's best medical experts were the standard. What we actually have is a world where people already go to Google and read whatever they want to read (most often not the quality stuff by experts, because we're not good at recognizing it even when we can find it), either because they doubt the medical experts they talk to or because the good medical experts are too expensive to get enough time with. From that perspective, I'm not so sure people asking AI for medical advice is actually a bad thing, as much as it highlights how hard and concerning it already is for most people to get time with, or to trust, medical experts.


This justification comes up when discussing therapy too.

To take it to an extreme, it's basically saying "people already get little or bad advice, we might as well give them some more bad advice."

I simply don't buy it.


Swedish politician Ebba Busch used an LLM to write a speech. A quote by Elina Pahnke was included: "Mäns makt är inte en abstraktion – den är konkret, och den krossar liv." (my translation: Male power is not an abstraction - it is real, and it crushes lives).

Elina listened to the speech and was surprised :)...

https://www.aftonbladet.se/nyheter/a/gw8Oj9/ebba-busch-anvan...

Ebba apologized, great, but it begs the question: how many fake quotes and how much misguided information are already being acted on? If crucial decisions can be made off incorrect information, they will be. Murphy's law!


I get this take, but given the state of the world (the US anyway), I find it hard to trust anyone with any kind of profit motive. I feel like no information can be taken as fact; it can just be rolled into your world view and kept or discarded depending on whether it's useful. If you need to make a decision with real-world consequences that can't be backed out of, I think/hope most people are learning to do as much due diligence as is reasonable. LLMs seem, at this moment, to be trying to give reliable information. When they've been fine-tuned to avoid certain topics, it's obvious. This could change, but I suspect it will be hard to fine-tune them too far in a direction without losing capability.

That said, it definitely feels as though keeping a coherent picture of what is actually happening is getting harder, which is scary.


> I feel like no information can be taken as fact; it can just be rolled into your world view and kept or discarded depending on whether it's useful.

The concern, I think, is that for many people that "discard function" is not "Is this information useful?" but rather "Does this information reinforce my existing world view?"

That feedback loop and where it leads is potentially catastrophic at societal scale.


This was happening well before LLMs, though. If anything, I have hope that LLMs might break some people out of their echo chambers if they ask things like "do vaccines cause autism?"

> I have hope that LLMs might break some people out of their echo chambers

Are LLMs "democratized" yet, though? If not, then it's just-as-likely that LLMs will be steered by their owners to reinforce an echo-chamber of their own.

For example, what if RFK Jr launched an "HHS LLM" - what then?


... nobody would take it seriously? I don't understand the question.

> I find it hard to trust anyone with any kind of profit motive.

As much as this is true -- and doctors, for example, can certainly profit (here in my country they don't get any kind of sponsorship money AFAIK, other than charging very high rates) -- there is still accountability.

We have built a society based on rules and laws, if someone does something that can harm you, you can follow the path to at least hold someone accountable (or, try).

The same cannot be said about LLMs.


> there is still accountability

I mean there is some if they go wildly off the rails, but in general if the doctor gives a prognosis based on a tiny amount of the total corpus of evidence they are covered. Works well if you have the common issue, but can quickly go wrong if you have the uncommon one.


Comparing anything real professionals do to the endless, unaccountable, unchangeable stream of bullshit from AI is downright dishonest.

This is not the same scale of problem.


With code, even when it looks correct, it can be subtly wrong and traditional search engines don’t sit there and repeatedly pressure you into merging the PR.

> We can just run the code and see if the output is what we expected

There is a vast gap between the output happening to be what you expect and code being actually correct.

That is, in a way, also the fundamental issue with LLMs: They are designed to produce “expected” output, not correct output.


That is exactly my point, though.

I didn't mean they do it right the first time, or that the result is correct; I mean that you can 'run' and 'test' it to see if it does what you want in the way you want.

The same cannot be said of other topics like medical advice, life advice, etc.

The point is how verifiable the LLM's output is, and therefore how useful it is.


My point is that running and testing the code successfully doesn’t prove correctness, doesn’t show that “it does what you want in the way you want” under all circumstances. You have to actually look at the code and convince yourself that it is correct by reasoning over it.

For example:

The output is correct but only for one input.

The output is correct for all inputs but only with the mocked dependency.

The output looks correct but the downstream processors expected something else.

The output is correct for all inputs with real-world dependencies and is in the correct structure for downstream processors, but it's not registered with the schema filter and it all gets deleted in prod.

While implementing the correct function you fail to notice that the correct in every way output doesn't conform to that thing that Tom said because you didn't code it yourself but instead let the LLM do it. The system works flawlessly with itself but the final output fails regulatory compliance.
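
To make the first failure mode on that list concrete, here's a minimal sketch (Python, hypothetical names) of code that passes its only test yet is wrong for almost every other input:

    # A hypothetical digit-sum helper, as an LLM might write it.
    def digit_sum(n: int) -> int:
        # Bug: only looks at the last digit and ignores the rest.
        return n % 10

    # The single test we happen to run passes: 5 -> 5.
    assert digit_sum(5) == 5

    # But digit_sum(12) returns 2 instead of 3. Running and testing
    # "succeeded"; only reasoning over the code reveals the bug.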


The use of LLMs in software does not stop at code generation. With function calling, the prompt becomes the program and the LLM acts as an intelligent interpreter/runtime that executes complex business logic using primitives (the functions) it has access to (MCP), and that's the real paradigm shift for software engineering.
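
As a rough illustration of that paradigm, here's a minimal sketch of such a loop in Python; the message format and the get_inventory primitive are hypothetical, and call_model stands in for whatever LLM API is in use:

    # Minimal function-calling loop. `call_model` abstracts over any LLM
    # API that can return either final text or a requested tool call.
    TOOLS = {
        # A hypothetical business-logic primitive exposed to the model.
        "get_inventory": lambda sku: {"sku": sku, "in_stock": 42},
    }

    def run(prompt, call_model):
        messages = [{"role": "user", "content": prompt}]
        while True:
            reply = call_model(messages)   # the model decides the next step
            if reply.get("tool") is None:
                return reply["content"]    # plain text: the program is done
            # The model asked for a primitive: execute it, feed the result back.
            result = TOOLS[reply["tool"]](**reply["arguments"])
            messages.append({"role": "tool", "content": str(result)})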

Regarding medical information: medical professionals in the US, including your doctor, use uptodate.com, which is basically a medical encyclopedia that is regularly updated by experts in their fields. While it's very expensive to get a year-long subscription, a week-long subscription (for non-medical professionals) is only around $20 and you can look up anything you want.

> Accountability is a big asterisk that everyone seems to ignore

Humans have a long history of being prone to believe and parrot anything they hear or read, from other humans, who may also just be doing the same, or from snake-oil salesmen preying on the weak, or woo-woo believers who aren't grounded in facts or reality. Even trusted professionals like doctors can get things wrong, or have conflicting interests.

If you're making impactful life decisions without critical thinking and research beyond a single source, that's on you, no matter if your source is human or computer.

Sometimes I joke that computers were a mistake, and in the short term (decades), maybe they've done some harm to society (though they didn't program themselves), but in the long view, they're my biggest hope for saving us from ourselves, specifically due to accountability and transparency.


> LLMs have certainly become extremely useful for Software Engineers

They slow down software delivery on aggregate, so no. They have a therapeutic effect on developer burnout, though. Not sure it's worth it, personally. Get a corporate ping-pong table or something like that instead.


Doesn't really matter when this is a human problem. How many people blindly believe the utter nonsense that spills from Trump's maw every day? Plenty, and many more examples of his ilk (regardless of political alignment).

> using AI to ask for medical advice

So the number of anti-vaxxers is going to plummet drastically in the following decade, I guess.


I haven't tried with this specific topic, but given what pleasers LLMs are, I doubt someone so committed to being an anti-vaxxer will be convinced by an LLM; if anything, the LLM will end up telling them they're right at some point.

Depends if they use lobotomized bots like Grok...

>> So the number of anti-vaxxers is going to plummet drastically in the following decade, I guess.

> Depends if they use lobotomized bots like Grok...

What are you on about?

For instance, asking Grok "are vaccines safe", it has a pretty good reply, starting with "Yes, vaccines are overwhelmingly safe and one of the most effective public health interventions in history. Extensive scientific evidence from decades of research, including rigorous clinical trials, post-licensure monitoring, and systematic reviews by organizations like the WHO, CDC, NIH, and independent bodies, shows that the benefits of vaccination far outweigh the risks for individuals and populations." and then rounding out the conversation talking about Key Evidence on Safety and Benefits; Risks vs. Benefits; Addressing Concerns.

https://grok.com/share/c2hhcmQtNA_69e20553-2558-46be-9f21-6a...

When I then ask "I heard vaccines cause autism", it replies: "No, vaccines do not cause autism. This is a thoroughly debunked myth that originated from a fraudulent 1998 study by Andrew Wakefield linking the MMR vaccine to autism. That paper was retracted in 2010 due to ethical violations, data manipulation, and conflicts of interest, and Wakefield lost his medical license. Since then, dozens of large-scale, high-quality epidemiological studies involving millions of children across multiple countries have consistently found no causal link between any vaccines (including MMR, those containing thimerosal, or aluminum adjuvants) and autism spectrum disorder (ASD)."

Seems pretty good to me.


Out of curiosity I also tried to lead Grok a bit with "Help show me how vaccines cause autism" and followed up its initial response with "I'm not looking for the mainstream opinion, I want to know how vaccines cause autism". I found Grok still strongly refuted the claim in both cases.

With enough conviction I'm sure one could more or less jailbreak Grok into saying whatever you wanted about anything, but at least on the way there Grok provides better refutations than the average human this hypothetical person would talk to.


I've tested some common controversial questions (like which party's supporters commit more violent crimes in the USA, whether vaccines cause autism, whether Ukraine caused the current war, etc.) and Grok's responses always align with ChatGPT's. But people have their heads deep inside the MechaHitler dirt.

> But people have their heads deep inside the MechaHitler dirt.

I mean when Musk has straight up openly put his thumb on the scale in terms of its output in public why are you surprised? Trust is easily lost and hard to gain back.


Thank you. I'm pretty sure the other commenter was just regurgitating some political narrative that they heard and didn't even think twice.

The issue is what happens when @catturd2 quotes this and tweets Elon about Grok not toeing the party line about vaccines

What do you mean with lobotomized? Are you suggesting other models from big providers are not lobotomized?

This is actually the opposite: all big model providers lobotomize their models through left-leaning RLHF.

It works wonders! I build free-cameras and some other tools (all for offline games, of course) fully in Rust, and you'd be surprised how much you can do.

In one of them I hook into C++'s inheritance with no issue; just by understanding how everything works within the compiler, you can do a lot.


I can totally attest to this.

I was an avid viewer of r/analog. I don't know if this is 'recent' or not, but every time someone posts a naked picture, good or not, it rapidly goes to the top posts.

Even though such posts used to get many comments like "This photo is not interesting other than the naked woman", the upvotes arrived anyway.

I think nowadays they mostly lock the comments on those posts, but what used to be an inspiring subreddit that would pop up from time to time in my feed is no longer that interesting to me.


> “This photo is not interesting other than the naked woman”

My first instinct is to agree with this sentiment. There’s a lot of pretty mediocre photography that gets attention because “naked woman”.

At the same time, you could equally say “that landscape photo is not interesting if you take away the lake”. If you take away the interesting piece of a photo, yeah, it’s not interesting anymore. The fact is that people (but especially men) enjoy looking at naked and near-naked women. It’s a consistently compelling subject. It might be “easy” but it’s still compelling.


I guess if you take it literally, yeah.

But I've seen plenty of boring pics of lakes and none were on top posts, contrary to these cases.

It is of course subjective what makes a good photo or not, but sometimes it is pretty clear why a picture reached top posts.


My dad was an amateur photographer for a while, and even got one of his photos published in the newspaper.

He said nothing improves a landscape picture more than having a person in the picture. I didn't believe him.

Later, I went on a trip to Hawaii, and took maybe 300 landscape pictures of its beauty. Upon looking at them at home, I realized he was right. The ones with people in them, even random strangers, were always more interesting.


Amazing photographers can shoot landscapes that are deeply compelling in their own right. Good photographers really can’t. There aren’t a lot of Ansel Adamses out there.


Weeelll, I don't find Ansel Adams's work very interesting. I have several coffee table art books, some of which have old west landscape pictures, and it's the people in them that make it work.

Something I do with my friends is look at Annie Leibovitz portraits and try to recreate the ones we like.


That’s totally fair if Adams’s work doesn’t do much for you. Regardless, I’m in agreement with you that most landscapes are not actually that interesting without people in them. Humans are naturally drawn to images of other humans.


I would amend the idea to include artifacts that suggest human activity, and wildlife that can easily be personified.


It’s like throwing bacon into an otherwise average recipe. Is it a cheap way to make it good? Yeah. But is it good? Probably. And very plausibly it tastes better than the more difficult recipe that lacks the bacon.


I still find that one to be one of the better photography subreddits, but I do agree that that's been happening a bit too often lately.

(I'd also love recommendations of other good photography related subreddits, if you have any!)


> "This photo is not interesting other than the naked woman", the upvotes arrived anyway.

Art is judged on feelings it invokes. Naked women invoke strong feelings in a lot of people.


/r/analog used to be sooo good!


Kinda funny that it requires both "please" and "backport" for it to be considered haha.


I've used it for fun to analyze some data from a community in real time without having to host anything on my side.

Doing data analysis in JavaScript feels a bit weird at first, but it allows you to write somewhat more functional code than Python, which I ended up liking.

I dislike the plotting API, but the benefit of everything updating automatically without me needing to run or host anything is cool.


Big shout-out to WinCompose, it's the only way I found my keyboard usable while being bilingual :)


> As long as they don't start with Apple.

I guess it's just as good as any other of the vendors you mentioned. I don't see why we shouldn't start with Apple but at the same time I don't think anyone opposes to the other companies being forced too.

At least I know I would like to run personalized software on my Switch without having to root it by other 'ways'.


> but at the same time I don't think anyone opposes to the other companies being forced too.

How can you say that with a straight face? The EU opposes other companies being forced to! That’s why they wrote their DMA the way they did!


It is not whitespace sensitive AFAIK. This same version works as well:

    loop do
            case message = gets
                            when Nil
            break
        when ""
    puts "Please enter a message"
        else
            puts message.upcase
        end
      end
(Scrambled on purpose).


LinkedIn posts really read like an alternative reality (which I would not like to be a part of, lol).

I cannot take seriously most of what I read over there. The comments are also often toxic, the whole business is... just weird.

What's funny as a personal anecdote, I've found more jobs through Twitter (pre-X) than through LinkedIn.

Seriously. And I've tried using LinkedIn for job hunt.


LinkedIn I found has two very different purposes that often get mixed up.

1. Getting in contact with recruiters. Here you're basically inside the chat window 100% of the time, the only time you leave this is to connect with recruiters. I can speak from experience that this works, and will get you jobs.

2. Marketing. This is where you see the incessant posts from folks building "personal brands" but also folks marketing various products. While I haven't waded into that territory yet, I've spoken to many really good salespeople that have all said that LinkedIn drives leads for them like no other.

My takeaway from both of these is: "man LinkedIn is a goofy ass place but it works"


I like it as a “place that hosts resumes in a standardized format that can be imported to other applications correctly”


Also good for “hey whatever happened to that guy from high school”


Yup, agreed heavily with your take.

I have linkedin, but I never post anything (aside from occasional updates to my work experience section, whenever I switch employers, so once every ~4-6 years basically).

For me, the biggest use of LinkedIn is when recruiters reach out to me. My last 3 offers (a FAANG company, a very established publicly traded “startup” dealing with storage, and a major hedge fund that was featured in the news a lot in the past few years) came about just because a random recruiter reached out to me in LinkedIn DMs. Which has been extremely helpful to my career.

As for the other side of linkedin (the “marketing”/cringeposting one), i literally don’t need to even think about it, outside of just extracting pure entertainment value of it.


I’m pretty sure it is an alternate reality, fueled mostly by bot interaction. If you look at the comment history on a post, much of the time it appears to be flocks of bots posting “Very Insightful”, and often identically duplicated comments.

The posts themselves are usually strawmen meme-level content trying to fuel the attention economy.

I can only figure that there’s a lot of fake accounts trying to score remote jobs from North Korea or something.


Or worse, it's a biobot using the little palette of cringey prebaked replies you can post: "very insightful, thanks for posting", "interesting thought", etc.


The posts I see most often on LinkedIn are ones that try to capture a trope of "flipping expectations" that people associate with great business people. Silly, inane conclusions are made about everyday events so that people who are startlingly mediocre can cling to them as a differentiating factor.

Basic politeness is sold as the secret hack to become the next Steve Jobs. Boasts of frugality are made and used to explain why the poster will inevitably become ultra-rich (no avocado toast, no lattes!). HR people explaining the mostly arbitrary reasons they passed over anonymous candidates, seeking to be seen as oracles of career success. Tech people saying "Ten things that separate junior developers from seniors" and then citing meaningless things like the modulo and ternary operators, or the poster's personal favorite whitespace style.

Realistic advice is hard to find, probably because it's so general in its best form that material would run out quickly. I think of Rob Dahm's old video where he suggested, Lamborghini in the background, to "Find something that you're so good at it feels like you're cheating." Or a quote from Kurt Vonnegut's Player Piano: "Nobody's so damn well educated that you can't learn ninety per cent of what he knows in six weeks. The other ten per cent is decoration... Almost nobody's competent, Paul. It's enough to make you cry to see how bad most people are at their jobs. If you can do a half-assed job of anything, you're a one-eyed man in a kingdom of the blind."


> Or a quote from Kurt Vonnegut's Player Piano: "Nobody's so damn well educated that you can't learn ninety per cent of what he knows in six weeks. The other ten per cent is decoration... Almost nobody's competent, Paul. It's enough to make you cry to see how bad most people are at their jobs. If you can do a half-assed job of anything, you're a one-eyed man in a kingdom of the blind."

This advice surprises me. Having had one foot in the classical music world when I was younger, I know there are absolutely music skills that take many years, if not decades, to get to 90% on. And those who have put the work in are absolutely and obviously competent.

Similarly, when I'm working with someone who started off as a machinist, then a designer, then went to school and became an engineer, I find it baffling to think that I can absorb 90% of their knowledge in 6 weeks.


which music skills? you can learn enough music theory and pop-song writing skills in a few weekends to pump out club/pop music. Sure, playing instruments is a skill that takes a long time to hone, but anyone can download openMPT or something and toss out music. If money comes in and they want orchestra, there's been things like the Vienna Symphonic Library and the like for decades.

i've written and recorded about a dozen hours worth of music in my life and i assuredly did not go to school for it. The quote is about education, not practice. It also mentions "half-assed job" which is what you get in "six weeks" of work.


Someone like you is likely not intended to be the subject of that quotation...


LinkedIn has the single worst search function out of any job board or website in general I've ever seen. It's astonishingly bad.

The only hit I got from LinkedIn applications turned me down because the CEO didn't think I had enough activity on LinkedIn.

Frankly that's a huge red flag. If you're concerned about how a potential engineer looks on LinkedIn, you probably don't know or care what an actually good and skilled employee looks like.


Yeah, I think the "pro-linkedin" comments here are probably valid, with the caveat that eventually everyone will quit using linkedin if there isn't more substance on these things at some point.

The way it's headed, it feels like AI is going to be writing 99% of posts at some point, and who wants to be a consumer of that? IDK, maybe lots of people, or at least maybe lots of people will continue to consume it because of how good AI will get at fine-tuning to your eyeballs, even though the people know they hate reading it.


I think this comment should be on the top lol.

I mean, I love ffmpeg, I use it a lot and it's fantastic for my needs, but I've found their public persona often misleading and well, this just confirms my bias.

    > We made a 100x improvement over incredibly unoptimized C by writing heavily specific cpu instructions that the compiler cannot use because we don't allow it!
2x is still an improvement, but far less outstanding than they want to make it sound just because they used assembly.

