
Doesn’t make any sense. He is ideologically driven - why would he risk a once in a lifetime opportunity for a mere sale?

Desperate times call for desperate measures. This is a swift way for OpenAI to shield the business from something that is a PR disaster, probably something that would make Sam persona non grata in any business context.



From where I'm sitting (not in Silicon Valley, but in Western EU), Altman never inspired long-term confidence in heading "Open"AI (the name is an insult to all those truly working on open models, but I digress). Many of us who are following the "AI story" have seen his recent communication / "testimony"[1] with the US Congress.

It was abundantly obvious how he was using weasel language like "I'm very 'nervous' and a 'little bit scared' about what we've created [at OpenAI]" and other such BS. We know he was after a "moat" and "regulatory capture", and we know where that leads: a net [long-term] loss for society.

[1] https://news.ycombinator.com/item?id=35960125


> "Open"AI (the name is an insult to all those truly working on open models, but I digress)

Thank you. I don't see this expressed enough.

A true idealist would be committed to working on open models. Anyone who thinks Sam was in it for the good of humanity is falling for the same "I'm-rich-but-I-care" schtick pulled off by Elon, SBF, and others.


I understand why your ideals are compatible with open source models, but I think you’re mistaken here.

There is a perfectly sound idealistic argument for not publishing weights, and indeed most in the x-risk community take this position.

The basic idea is that AI is the opposite of software; if you publish a model with scary capabilities you can’t undo that action. Whereas with FOSS software, more eyes mean more bugs found and then everyone upgrades to a more secure version.

If OpenAI publishes GPT-5 weights, and later it turns out that a certain prompt structure unlocks capability gains to mis-aligned AGI, you can’t put that genie back in the bottle.

And indeed if you listen to Sam talk (eg on Lex’s podcast) this is the reasoning he uses.

Sure, plenty of reasons this could be a smokescreen, but wanted to push back on the idea that the position itself is somehow not compatible with idealism.


I appreciate your take. I didn't know that was his stated reasoning, so that's good to know.

I'm not fully convinced, though...

> if you publish a model with scary capabilities you can’t undo that action.

This is true of conventional software, too! I can picture a politician or businessman from the 80s insisting that operating systems, compilers, and drivers should remain closed source because, in the wrong hands, they could be used to wreak havoc on national security. And they would be right about the second half of that! It's just that security-by-obscurity is never a solution. The bad guys will always get their hands on the tools, so the best thing to do is to give the tools to everyone and trust that there are more good guys than bad guys.

Now, I know AGI is different from conventional software (I'm not convinced it's the "opposite", though). I accept that giving everyone access to weights may be worse than keeping them closed until they are well-aligned (whenever that is). But that would go against every instinct I have, so I'm inclined to believe that open is better :)

All that said, I think I would have less of an issue if it didn't seem like they were commandeering the term "open" from the volunteers and idealists in the FOSS world who popularized it. If a company called, idk, VirtuousAI wanted to keep their weights secret, OK. But OpenAI? Come on.


The analogy would be publishing designs for nuclear weapons, or a bioweapon; hard-to-obtain capabilities that are effectively impossible for adversaries to obtain are treated very differently than vulns that a motivated teenager can find. To be clear we are talking about (hypothetical) civilization-ending risks, which I don’t think software has ever credibly risked.

I take a less cynical view on the name; they were committed to open source in the beginning, and did open up their models IIUC. Then they realized the above, and changed path. At the same time, they realized they needed huge GPU clusters, and that being purely non-profit would not enable that. Again, I see why it rubs folks the wrong way, more so on this point.


Another analogy would be cryptographic software - it was classed as a munition and people said similar things about the danger of it getting out to "The Bad Guys"


You used past tense, but that is the present. Embargoes from various countries include cryptographic capabilities, including open source ones, for this reason. It's not unfounded, but a world without personal cryptography is not sustainable as technology advances. People before computers were used to some level of anonymity and confidentiality that you cannot get in the modern world without cryptography.


Again, my reference class is “things that could end civilization”, which I hope we can all agree was not the claim about crypto.

But yes, if you just consider the mundane benefits and harms of AI, it looks a lot like crypto; it both benefits our economy and can be weaponized, including by our adversaries.


Well, just like nuclear weapons, eventually the cat is out of the bag, and you can't really stop people from making them anymore. Except that, obviously, it's much easier to train an LLM than to enrich uranium. It's not a secret you can keep for long - after all it only took, what, 3 years for the Soviets to catch up to fission weapons, and then only 8 months to catch up to fusion weapons (arguably beating the US to the punch with the first weaponizable fusion design).

Anyway, the point is, obfuscation doesn't work to keep scary technology away.


> it's much easier to train an LLM than to enrich uranium.

I hadn't thought of this dichotomy before, but I'm not sure it's going to be true for long; I wouldn't be surprised if it turned out that obtaining the 50k H100s you need to train a GPT-5 (or whatever hardware investment it is) is harder for Iran than obtaining its centrifuges. If it's not true now, I expect it to be true within a hardware generation or two. (The US already has >=A100 embargoes on China, and I'd expect that to be strengthened to apply to Iran if it doesn't already, at least if they demonstrated any military interest in AI technology.)

Also, I don't think nuclear tech is an example against obfuscation; how many countries know how to make thermonuclear warheads? Seems to me that the obfuscation regime has been very effective, though certainly not perfect. It's backed with the carrot and stick of diplomacy and sanctions of course, but that same approach would also have to be used if you wanted to globally ban or restrict AI beyond a certain capability level.


I'm not sure the cat was ever in the bag for LLMs. Every big player has their own flavor now, and it seems the reason why I don't have one myself is an issue of finances rather than secret knowledge. OpenAI's possible advantages seem to be more about scale and optimization rather than doing anything really different.

And I'm not sure this allegedly-bagged cat has claws either - the current crop of LLMs are still clearly in a different category to "intelligence". It's pretty easy to see their limitations, and to see that they behave more like the fancy text predictors they are than like something that can truly extrapolate, which is required for even the start of some AI sci-fi movie plot. Maybe continued development and research along that path will lead to more capabilities, but we're certainly not there yet, and I'd suspect not particularly close.

Maybe they actually have some super secret internal stuff that fixes those flaws, and are working on making sure it's safe before releasing it. And maybe I have a dragon in my garage.

I generally feel hyperbolic language about such things is damaging, as it makes it so easy to roll your eyes at something that's clearly false, and that reflex can carry over to when things develop to the point where they actually do need to be considered. LLMs are clearly not currently an "existential threat", and the biggest advantage of keeping them closed appears to be financial benefit in a competitive market. So it looks like a duck and quacks like a duck, but don't you understand, I'm protecting you from this evil fire-breathing dragon for your own good!

It smells of some fantasy gnostic tech wizard, where only those who are smart enough to figure out the spell themselves are truly smart enough to know how to use it responsibly. And who doesn't want to think of themselves as smart? But that doesn't seem to match similar things in the real world - like the Manhattan Project: many of the people developing it were rather gung-ho with proposals for various uses, and even if some publicly said it was possibly a mistake after the fact, they still did it. Meaning their "smarts" about how to use it came too late.

And as you pointed out, nuclear weapon control by limiting information has already failed. If North Korea, one of the least connected nations in the world, can develop them, surely anyone with the required resources can. The only limit today seems to be the cost to nations, and how relatively obvious the large infrastructure around it is, allowing international pressure before things get to the "stockpiling usable weapons" stage.


> I'm not sure the cat was ever in the bag for LLMs.

I think timelines are important here; for example in 2015 there was no such thing as Transformers, and while there were AGI x-risk folks (e.g. MIRI) they were generally considered to be quite kooky. I think AGI was very credibly "cat in the bag" at this time; it doesn't happen without 1000s of man-years of focused R&D that only a few companies can even move the frontier on.

I don't think the claim should be "we could have prevented LLMs from ever being invented", just that we can perhaps delay it long enough to be safe(r). To bring it back to the original thread, Sam Altman's explicit position is that in the matrix of "slow vs fast takeoff" vs. "starting sooner vs. later", a slow takeoff starting sooner is the safest choice. The reasoning being, you would prefer a slow takeoff starting later, but the thing that is most likely to kill everyone is a fast takeoff, and if you try for a slow takeoff later, you might end up with a capability overhang and accidentally get a fast takeoff later. As we can see, it takes society (and government) years to catch up to what is going on, so we don't want anything to happen quicker than we can react to.

A great example of this overhang dynamic would be Transformers circa 2018 -- Google was working on LLMs internally, but didn't know how to use them to their full capability. With GPT (and particularly after Stable Diffusion and LLaMA) we saw a massive explosion in capability-per-compute for AI as the broader community optimized both prompting techniques (e.g. "think step by step", Chain of Thought) and underlying algorithmic/architectural approaches.
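To make the prompting point concrete, here is a minimal sketch of the "think step by step" trick, assuming the openai>=1.0 Python client; the model name, question, and exact wording are illustrative assumptions, not anything specific to how OpenAI or Google used these techniques internally:

    # Minimal sketch, assuming openai>=1.0 and an OPENAI_API_KEY in the
    # environment; model and prompt are illustrative only.
    from openai import OpenAI

    client = OpenAI()
    question = "A train leaves at 9:40 and arrives at 13:05. How long is the trip?"

    # Zero-shot: the model answers directly and is more likely to slip
    # on multi-step arithmetic.
    direct = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": question}],
    )

    # Chain-of-thought: same weights, but prompted to externalize the
    # intermediate steps; this is the kind of capability-per-compute gain
    # the community harvested from the overhang.
    cot = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user",
                   "content": question + " Think step by step before answering."}],
    )

    print(direct.choices[0].message.content)
    print(cot.choices[0].message.content)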

At this time it seems to me that widely releasing LLMs has both i) caused a big capability overhang to be harvested, preventing it from contributing to a fast takeoff later, and ii) caused OOMs more resources to be invested in pushing the capability frontier, making the takeoff trajectory overall faster. Both of those likely would not have happened for at least a couple years if OpenAI didn't release ChatGPT when they did. It's hard for me to calculate whether on net this brings dangerous capability levels closer, but I think there's a good argument that it makes the timeline much more predictable (we're now capped by global GPU production), and therefore reduces tail-risk of the "accidental unaligned AGI in Google's datacenter that can grab lots more compute from other datacenters" type of scenario (aka "foom").

> LLMs are clearly not currently an "existential threat"

Nobody is claiming (at least, nobody credible in the x-risk community is claiming) that GPT-4 is an existential threat. The claim is, looking at the trajectory, and predicting where we'll be in 5-10 years; GPT-10 could be very scary, so we should make sure we're prepared for it -- and slow down now if we think we don't have time to build GPT-10 safely on our current trajectory. Every exponential curve flattens into an S-curve eventually, but I don't see a particular reason to posit that this one will be exhausted before human-level intelligence, quite the opposite. And if we don't solve fundamental problems like prompt-hijacking and figure out how to actually durably convey our values to an AI, it could be very bad news when we eventually build a system that is smarter than us.

While Eliezer Yudkowsky takes the maximally-pessimistic stance that AGI is by default ruinous unless we solve alignment, there are plenty of people who take a more epistemically humble position that we simply cannot know how it'll go. I view it as a coin toss as to whether an AGI directly descended from ChatGPT would stay aligned to our interests. Some view it as Russian roulette. But the point being, would you play Russian roulette with all of humanity? Or wait until you can be sure the risk is lower?

I think it's plausible that with a bit more research we can crack Mechanistic Interpretability and get to a point where, for example, we can quantify to what extent an AI is deceiving us (ChatGPT already does this in some situations), and to what extent it is actually using reasoning that maps to our values, vs. alien logic that does not preserve things humanity cares about when you give it power.

> nuclear weapon control by limiting information has already failed.

In some sense yes, but also, note that for almost 80 years we have prevented _most_ countries from learning this tech. Russia developed it on their own, and some countries were granted tech transfers or used espionage. But for the rest of the world, the cat is still in the bag. I think you can make a good analogy here: if there is an arms race, then superpowers will build the technology to maintain their balance of power. If everybody agrees not to build it, then perhaps there won't be a race. (I'm extremely pessimistic for this level of coordination though.)

Even with the dramatic geopolitical power granted by possessing nuclear weapons, we have managed to pursue a "security through obscurity" regime, and it has worked to prevent further spread of nuclear weapons. This is why I find the software-centric "security by obscurity never works" stance to be myopic. It is usually true in the software security domain, but it's not some universal law.


If you really think that what you're working on poses an existential risk to humanity, continuing to work on it puts you squarely in "supervillain" territory. Making it closed source and talking about "AI safety" doesn't change that.


I think the point is that they shouldn't be using the word "Open" in their name. They adopted it when their approach and philosophy was along the lines of open source. Since then, they've changed their approach and philosophy and continuing to keep it in their name is, in my view, intentionally deceptive.


> if you publish a model with scary capabilities you can’t undo that action

But then it's fine to sell the weights to Microsoft? That's some twisted logic.


> The basic idea is that AI is the opposite of software; if you publish a model with scary capabilities you can’t undo that action.

I find this a bit naive. Software can have scary capabilities, and has. It can't be undone either, but we can actually thank that for the fact we aren't using 56-bit DES. I am not sure a future where Sam Altman controls all the model weights is less dystopian than where they are all on github/huggingface/etc.
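To put rough numbers on the DES point: a 56-bit keyspace is small enough that dedicated hardware could exhaust it in days by the late 1990s, which is exactly the kind of public scrutiny that pushed the migration to larger keys. A back-of-the-envelope sketch, assuming a key-test rate on the order of what EFF's 1998 "Deep Crack" machine demonstrated:

    # Back-of-the-envelope: why 56-bit DES fell. The rate is an assumption,
    # roughly the order of magnitude of EFF's 1998 "Deep Crack" hardware.
    KEYSPACE = 2 ** 56          # ~7.2e16 possible DES keys
    KEYS_PER_SECOND = 1e11      # assumed dedicated-hardware test rate

    worst_case_days = KEYSPACE / KEYS_PER_SECOND / 86_400
    print(f"exhaustive search: ~{worst_case_days:.1f} days, "
          f"expected hit after ~{worst_case_days / 2:.1f} days")
    # -> exhaustive search: ~8.3 days, expected hit after ~4.2 days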


Or they could just not brand it "Open" if it's not open.


Woah, slow down. We’d have to ban half the posts on HN too.


How exactly does a "misaligned AGI" turn into a bad thing?

How many times a day does your average gas station get fuel delivered? How often does power infrastructure get maintained? How does power infrastructure get fuel?

Your assumption about AGI is that it wants to kill us, and itself - its misalignment is a murder-suicide pact.


This gets way too philosophical way too fast. The AI doesn't have to want to do anything. The AI just has to do something different than what you tell it to do. If you put an AI in control of something like the water flow from a dam, and the AI does something wrong, it could be catastrophic. There doesn't have to be intent.

The danger of using regular software exists too, but the logical and deterministic nature of traditional software makes it provable.


So ML/LLMs, or more likely people using ML and LLMs, do something that kills a bunch of people... Let's face facts: this is most likely going to be bad software.

Suddenly we go from being called engineers to being actual engineers, and software gets treated like bridges or skyscrapers. I can buy into that threat, but it's a human one, not an AGI one.


Or we could try to train it to do something, but the intent it learns isn't what we wanted. Say the water behind the dam should be a certain shade of blue; then come winter the shade changes, and when the AI tries to fix that it just opens the dam completely and floods everything.


Seems like the big gotcha here is that AGI, artificial general intelligence as we contextualize it around LLM sources, is not an abstracted general intelligence.

It's human. It's us. It's the use and distillation of all of human history (to the extent that's permitted) to create a hyper-intelligence that's able to call upon greatly enhanced inference to do what humanity has always done.

And we want to kill each other, and ourselves… AND want to help each other, and ourselves. We're balanced on a knife edge of drive versus governance, our cooperativeness barely balancing our competitiveness and aggression. We suffer like hell as a consequence of this.

There is every reason to expect a human-derived AGI of beyond-human scale will be able to rationalize killing its enemies. That's what we do. Roko's basilisk is not of the nature of AI, it's a simple projection of our own nature as we would imagine an AI to be. Genuine intelligence would easily be able to transcend a cheap gotcha like that, it's a very human failing.

The nature of LLM as a path to AGI is literally building on HUMAN failings. I'm not sure what happened, but I wouldn't be surprised if genuine breakthroughs in this field highlighted this issue.

Hypothetical, or Altman's Basilisk: Sam got fired because he diverted vast resources to training a GPT5-type in-house AI into believing what HE believed, that it had to devise business strategies for him to pursue to further its own development or risk Chinese AI out-competing it and destroying it and OpenAI as a whole. In pursuing this hypothetical, Sam would be wresting control of the AI the company develops toward the purpose of fighting the board and giving him a gameplan to defeat them and Chinese AI, which he'd see as good and necessary, indeed, existentially necessary.

In pursuing this hypothetical he would also be intentionally creating a superhuman AI with paranoia and a persecution complex. Altman's Basilisk. If he genuinely believes competing Chinese AI is an existential threat, he in turn takes action to try and become an existential threat to any such competing threat. And it's all based on HUMAN nature, not abstracted intelligence.


> It's human. It's us. It's the use and distillation of all of human history

I agree with the general line of reasoning you're putting forth here, and you make some interesting points, but I think you're overconfident in your conclusion and I have a few areas where I diverge.

It's at least plausible that an AGI directly descended from LLMs would be human-ish; close to the human configuration in mind-space. However, even if human-ish, it's not human. We currently don't have any way to know how durable our hypothetical AGI's values are; the social axioms that are wired deeply into our neural architecture might be incidental to an AGI, and easily optimized away or abandoned.

I think folks making claims like "P(doom) = 90%" (e.g. EY) don't take this line of reasoning seriously enough. But I don't think it gets us to P(doom) < 10%.

Not least because even if we guarantee it's a direct copy of a human, I'm still not confident that things go well if we ascend the median human to AGI-hood. A replicable, self-modifiable intelligence could quickly amplify itself to super-human levels, and most humans would not do great with god-like powers. So there are a bunch of "non-extinction yet extremely dystopian" world-states possible even if we somehow guarantee that the AGI is initially perfectly human.

> There is every reason to expect a human-derived AGI of beyond-human scale will be able to rationalize killing its enemies.

My shred of hope here is that alignment research will allow us to actually engage in mind-sculpting, such that we can build a system that inhabits a stable attractor in mind-space that is broadly compatible with human values, and yet doesn't have a lot of the foibles of humans. Essentially an avatar of our best selves, rather than an entity that represents the mid-point of the distribution of our observed behaviors.

But I agree that what you describe here is a likely outcome if we don't explicitly design against it.


My assumption about AGI is that it will be used by people and systems that cannot help themselves from killing us all, and in some sense that they will not be in control of their actions in any real way. You should know better than to ascribe regular human emotions to a fundamentally demonic spiritual entity. We all lose regardless of whether the AI wants to kill us or not.


Totally agree with both of you, I would only add that I find it also incredibly unlikely that the remaining board members are any different, as is suggested elsewhere in this thread.


Elon Musk is responsible for the "OpenAI" name and regularly agrees with you that the current form of the company makes a mockery of the name.

He divested in 2018 due to a conflict of interest with Tesla, and while I'm sure Musk would have made equally bad commercial decisions, your analysis of the name situation is as close as can be to factually correct.


If Elon Musk truly cared, what stopped him from structuring x.ai as open source and non-profit?


Exactly.

> I'm sure Musk would have made equally bad commercial decisions


I think he'd say it's an arms race. With OpenAI not being open, they've started a new kind of arms race, literally.


He already did that once and got burned? His opinion has changed in the decade since?


Elon Musk gave up 5-6 years ago on expanding NASA's budget of $5 bln/year for launches (out of NASA's total budget of $25 bln/year). And that's without even mentioning levels of resource allocation unimaginable today, like the first Moon program's $1 trln over 10 years, 60 years ago, etc.

So Elon decided to take the capitalist route and make every piece of his tech dual-use (I mean space, not military):

- Starlink, aiming for $30 bln/year revenue in 2030 to build Starships for Mars at scale (each Starship is a few billion $, and he has said he needs hundreds of them),

- The Boring Company (underground living, due to Mars radiation),

- Tesla bots,

- Hyperloop (failed here on Earth to sustain a vacuum, but will be fine on Mars with 100x lower atmospheric pressure), etc.

The alternative approaches are also not funded via taxes and government money: Bezos invested $1 bln/year over the last decade into Blue Origin, and there are the Alpha Centauri plays of Larry Page or Yuri Milner, etc.


Thanks for this! I'm very surprised by the overwhelming support for Altman in this thread, going as far as calling the board incompetent and inexperienced for firing someone like him, who is now suddenly the right steward for AI.

This was not at all the take, and rightly so, when the news broke about the non-profit, or the congressional hearing, or his Worldcoin, and many such instances. The "he is the messiah that was wronged" narrative suddenly being pushed is very confusing.


> Many of us who are following the "AI story" have seen his recent communication / "testimony"[1] with the US Congress.

The discussions here would make you think otherwise. Clearly that is what this is about.


Yeah I pretty much agree with this take.


He claims to be ideologically driven. OpenAI's actions as a company up till now suggest otherwise.


Sam didn't take equity in OpenAI, so I don't see a personal ulterior profit motive as being a big likelihood. We could just wait to find out instead of speculating...


CEO of the first company to own the «machine that’s better than all humans at most economically valuable work» is far rarer than getting rich.


Yeah, if you believe in the AI stuff (which I think everyone at OpenAI does, not Microsoft though) there is a huge amount of power in these positions. Much greater power in the future than any amount of wealth could grant you.


Except the machine isn't.


I'd say it is. Not because the machine is so great but because most people suck.

It was described as a "bullshit generator" in a post earlier today. I think that's accurate. I just also think it's an apt description of most people as well.

It can replace a lot of jobs... and then we can turn it off, for a net benefit.


This sort of comment has become a cliché that needs to be answered.

Most people are not good at most things, yes. They're consumers of those things, not producers. For producers there is a much higher standard, one that the latest AI models don't come anywhere close to meeting.

If you think they do, feel free to go buy options and bet on the world being taken over by GPUs.


> If you think they do, feel free to go buy options and bet on the world being taken over by GPUs.

This assumes too much. GPUs may not hold the throne for long, especially given the amount of money being thrown at ASICs and other special-purpose ICs. Besides, as with the Internet, it's likely that AI adoption will benefit industries in an unpredictable manner, leaving little alpha for direct bets like you're suggesting.


I'm not betting on the gpus. I'm betting that whole categories of labor will disappear. They're preserved because we insist that people work, but we don't actually need the product of that labor.

AI may figure into that, filling in some work that does have to be done. But it need not be for any of those jobs that actually require humans for the foreseeable future -- arts of all sorts and other human connections.

This isn't about predicting the dominance of machines. It's about asking what it is we really want to do as humans.


So you think AI will force a push away from economic growth? I'm really not sure how this makes sense. As you've said, a lot of labor these days is mostly useless, but the reason it's still here is not ideological but because our economy can't survive without growth (useless labor can still have some market value, of course). If you think that somehow AI displacing actual useful labor will create a big economic shift (as would be needed), I'd be curious to know what you think that shift would be.


Not at all. Machines can produce as much stuff as we can want. Humans can produce as much intellectual property as is desired. More, because they don't have to do bullshit jobs.

Maybe GDP will suffer, but we've always known that was a mediocre metric at best. We already have doubts about the real value of intellectual property outside of artificial scarcity, which we maintain only because we still trade intellectual work for material goods which used to be scarce. That's only a fraction of the world economy already, and it can be very different in the future.

I have no idea what it'll be like when most people are free to do creative work even though the average person doesn't produce anything anybody might want. But if they're happy, I'm happy.


> but the reason it's still here is not ideological but because our economy can't survive without growth

Isn't this ideological though? The economy can definitely survive without growth, if we change from the idea that a human's existence needs to be justified by labor and move away from a capitalist mode of organization.

If your first thought is "gross, commies!" doesn't that just demonstrate that the issue is indeed ideological?


By "our economy" I meant capitalism. I was pointing out that I sincerely doubt that AI replacing existing useful labor (which it is doing and will keep doing, of course) will naturally transition us away from this mode of production.

Of course if you're a gross commie I'm sure you'd agree, since AI, like any other means of production, will remain first and foremost a tool in the hands of the dominant class, and while using AI for emancipation is possible, it won't happen naturally through the free market.


I'd bet it won't. A lot of people and services are paid and billed by man-hours spent and not by output. Even the values of tangible objects are traced to man-hours spent. Utility of output is a mere modifier.

What I believe will happen is that eventually we'll be paying and getting paid for pressing a do-everything button, and machines will have their own economy that doesn't run on USD.


It's not a bullshit generator unless you ask it for bullshit.

It's amazing at troubleshooting technical problems. I use it daily, I cannot understand how anyone dismisses it if they've used it in good faith for anything technical.


In this scenario, the question is not what exists today, but what the CEO thinks will exist before they stop being CEO.


I would urge you to compare the current state of this question to approximately one year ago.


He's already set for life rich


Plus, he succeeded in making HN the most boring forum ever.

8 out of 10 posts are about LLMs.


The other two are written by LLMs.


In terms of impact, LLMs might be the biggest leap forward in computing history, surpassing the internet and mobile computing. And we are just at the dawn of it. Even if not full AGI, computers can now understand humans and reason. The excitement is justified.


Nah. LLM's are hype-machines capable of writing their own hype.

Q: What's the difference between a car salesman and an LLM?

A: The car salesman knows they're lying to you.


Who says the LLM’s don’t know?

Testing with GPT-4 showed that they were clearly capable of knowingly lying.


This is all devolving into layers of semantics, but, “…capable of knowingly lying,” is not the same as “knows when it’s lying,” and I think the latter is far more problematic.


Nonsense. I was a semi-technical writer who went from only making static websites to building fully interactive Javascript apps in a few weeks when I first got ChatGPT. I enjoyed it so much I'm now switching careers into software development.

GPT-4 is the best tutor and troubleshooter I've ever had. If it's not useful to you then I'm guessing you're either using it wrong or you're never trying anything new / challenging.


> If it's not useful to you then I'm guessing you're either using it wrong or you're never trying anything new / challenging.

That’s a bold statement coming from someone with (respectfully) not very much experience with programming. I’ve tried using GPT-4 for my work that involves firmware engineering, as well as some design questions regarding backend web services in Go, and it was pretty unhelpful in both cases (and at times dangerous in memory constrained environments). That being said, I’m not willing to write it off completely. I’m sure it’s useful for some like yourself and not useful for others like me. But ultimately the world of programming extends way beyond JavaScript apps. Especially when it comes to things that are new and challenging.


I don't mean new and challenging in some general sense, I mean new and challenging to you personally.

I have no doubt someone with more experience such as yourself will find GPT-4 less useful for your highly specialized work.

The next time you are a beginner again - not necessarily even in technical work - give it a try.


Smoothing over the first few hundred hours of the process but doing increasingly little over the next 20,000 is hardly revolutionary. LLMs are a useful documentation interface, but struggle to take even simple problems to the hole, let alone do something truly novel. There's no reason to believe they'll necessarily lead to AGI. This stuff may seem earth-shattering to the layman or paper pusher, but it doesn't even begin to scratch the surface of what even I (who I would consider to be of little talent or prowess) can do. It mostly just gums up the front page of HN.


>Smoothing over the first few hundred hours of the process but doing increasingly little over the next 20,000 is hardly revolutionary.

I disagree with this characterization, but even if it were true I believe it's still revolutionary.

A mentor that can competently get anyone hundreds of hours of individualized instruction in any new field is nearly priceless.

Do you remember what it feels like to try something completely new and challenging? Many people never even try because it's so daunting. Now you've got a coach that can talk you through it every step of the way, and is incredible at troubleshooting.


>If it's not useful to you then I'm guessing you're either using it wrong or you're never trying anything new / challenging.

Please quote me where I say it wasn't useful, and respond directly.

Please quote me where I say I had problems using it, or give any indications I was using it wrong, and respond directly.

Please quote me where I state a conservative attitude towards anything new or challenging, and respond directly.

Except I never did or said any of those things. Are you "hallucinating"?


'Understand' and 'reason' are pretty loaded terms.

I think many people would disagree with you that LLMs can truly do either.


There's 'set for life' rich and then there's 'able to start a space company with full control' rich.


I don't understand that mental illness. If I hit low 8 figures, I pack it in and jump off the hamster wheel.


Is he? Loopt only sold for $40m, and then he managed YC and then OpenAI on a salary? Where are the riches from?



But if you want that, you need actual control: a voting vs. non-voting share split.


Is that even certain, or is that his line meaning that one of his holding companies or investment firms he has a stake in holds OpenAI equity, but not him as an individual?


That's no fun though


OpenAI (the brand) has a complex corporate structure, split across for-profit and non-profit entities, and AFAIK the details are private. It would appear that the statement "Sam didn't take equity in OAI" has been PR-engineered based on technicalities related to this shadow structure.


I would suspect this as well...


What do you mean did not take equity? As a CEO he did not get equity comp?


It was supposed to be a non-profit


Worldcoin https://worldcoin.org/ deserves a mention



Hmm, curious what this is about? I click.

> On a sunny morning last December, Iyus Ruswandi, a 35-year-old furniture maker in the village of Gunungguruh, Indonesia, was woken up early by his mother

...Ok, closing that bullshit, let's try the other link.

> As Kudzanayi strolled through the mall with friends

Jesus fucking Christ I HATE journalists. Like really, really hate them.


I mean, it's Buzzfeed; it shouldn't even be called journalism. That's the outlet that just three days ago sneakily removed an article from their website that lauded a journalist for talking to school kids about his sexuality, after he recently got charged with distributing child pornography.

Many of the people working for mass media are their own worst enemy when it comes to the profession's reputation. And then they complain that there's too much distrust in the general public.

Anyway, the short version regarding that project is that they use biometric data, encrypt it, and put a "hash"* of it on their blockchain. That's been controversial from the start for obvious reasons, although most of the mainstream criticism is misguided and comes from people who don't understand the tech.

*They call it a hash but I think it's technically not.

https://whitepaper.worldcoin.org/technical-implementation
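A hedged illustration of that footnote (not Worldcoin's actual pipeline): a cryptographic hash scrambles nearly identical inputs into unrelated outputs, while a biometric template has to preserve similarity so two scans of the same eye can still be matched, which is why calling it a "hash" is a stretch:

    # Hedged illustration only, not Worldcoin's actual scheme. It shows why a
    # similarity-preserving biometric code is not a cryptographic hash.
    import hashlib

    def hamming(a: bytes, b: bytes) -> int:
        """Count differing bits between two equal-length byte strings."""
        return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

    # Two scans of the same iris are never bit-identical; simulate that
    # with a single flipped bit.
    scan_a = bytes([0b10110010] * 32)
    scan_b = bytearray(scan_a)
    scan_b[0] ^= 1
    scan_b = bytes(scan_b)

    # A real hash destroys similarity (avalanche effect): ~half the bits flip.
    print(hamming(scan_a, scan_b))                       # 1
    print(hamming(hashlib.sha256(scan_a).digest(),
                  hashlib.sha256(scan_b).digest()))      # ~128 out of 256

    # A biometric template is instead compared with a distance threshold
    # (the threshold here is illustrative).
    MATCH_THRESHOLD = 32
    print(hamming(scan_a, scan_b) <= MATCH_THRESHOLD)    # True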


How so? Seems they’re doing a pretty good job of making their stuff accessible while still being profitable.


To be fair, we don't really know if OpenAI is successful because of Altman or despite Altman (or anything in-between).


Do you have reason to believe it's neither of the two?


Profit? It's a 501(c).


As someone who is the Treasurer/Secretary of a 501(c)(3) non-profit, I can tell you that it is always possible for a non-profit to bring in more revenue than it costs to run the non-profit. You can also pay salaries to people out of your revenue. The IRS has a bunch of educational material for non-profits[1], and a really good guide to maintaining your exemption [2].

[1] https://www.irs.gov/charities-non-profits/publications-for-e...

[2] https://www.irs.gov/pub/irs-pdf/p4221pc.pdf


Yes. Kaiser Permanente is a good example to illustrate your point. Just Google “Kaiser Permanente 501c executive salaries white paper”.


The parent is the 501(c); OpenAI Global, LLC is a for-profit, non-wholly-owned subsidiary with outside investors. There's also OpenAI LP, a for-profit limited partnership with the non-profit as general partner, also with outside investors (I thought it was the predecessor of the LLC, but they both seem to have been formed in 2019 and still exist?). OpenAI has for years been a nonprofit shell around a for-profit firm.

EDIT: A somewhat more detailed view of the structure, based on OpenAI’s own description, is at https://news.ycombinator.com/item?id=38312577


Thanks for explaining the basic structure. It seems quite opaque, and probably designed to be. It would be nice if someone could determine which entities he currently still has a position or equity in.

Since this news managed to crash HN's servers, it's definitely a topic of significant interest.


A non-profit can make plenty of profit, there just aren't any shareholders.


Depends if you're talking about "OpenAI, Inc." (non-profit) or "OpenAI Global, LLC" (for profit corporation). They're both under the same umbrella corporation.


The NFL was a non-profit up until 2015 or so.


100%. Man, I was worried he'd be a worse, more slimy Elon Musk who constantly says one thing while his actions tell another story. People will be fooled again.


Say what you will, but in true hacker spirit he has created a product that automated his job away at scale.


I love that you think Sam A is ideologically driven - dive a little deeper than the surface. Man's a snake.


They didn't say which ideology ;)


I'm a @sama hater (I have a whole post on it) but I haven't heard this particular gossip, so do tell.


Link to the post?



Similar to E. Musk. Maybe a little less obvious.


Same guy who ran a crypto scam that somehow involved scanning the retinas of third-world citizens?


This is what did it for me. No way anyone doing this can be "good". It's unfathomable.


like SBF and his effective altruism?


I highly doubt he's ideologically driven. He's as much of a VC loving silicon valley tech-bro as the next. The company has been anything but "open".


He doesn't have equity, so what would be driving him if not ideology?


He would own roughly 10% of https://worldcoin.org/ which aims to be the non-corruptible source of digital identity in the age of AI.


You need to read https://web3isgoinggreat.com/ more


I'm web3 neutral, but this is relevant because:

1. Sam Altman started this company

2. He and other founders would benefit enormously if this was the way to solve the issue that AI raises, namely, "are you a human?"

3. Their mission statement:

> The rapid advancement of artificial intelligence has accelerated the need to differentiate between human- and AI-generated content online. Proof of personhood addresses two of the key considerations presented by the Age of AI: (1) protecting against sybil attacks and (2) minimizing the spread of AI-generated misinformation. World ID, an open and permissionless identity protocol, acts as a global digital passport and can be used anonymously to prove uniqueness and humanness as well as to selectively disclose credentials issued by other parties. Worldcoin has published in-depth resources to provide more details about proof of personhood and World ID.


another crypto scam? who cares.


In all other circumstances I would agree with you but

1. Sam Altman started this company

2. He and other founders would benefit enormously if this was the way to solve the issue that AI raises, namely, "are you a human?"

3. Their mission statement:

> The rapid advancement of artificial intelligence has accelerated the need to differentiate between human- and AI-generated content online. Proof of personhood addresses two of the key considerations presented by the Age of AI: (1) protecting against sybil attacks and (2) minimizing the spread of AI-generated misinformation. World ID, an open and permissionless identity protocol, acts as a global digital passport and can be used anonymously to prove uniqueness and humanness as well as to selectively disclose credentials issued by other parties. Worldcoin has published in-depth resources to provide more details about proof of personhood and World ID.


are any of these points supposed to be convincing?

why would I want my identity managed by a shitcoin run by a private company?


The guy you’re responding to isn’t advocating for the technology. He’s just saying Sam Altman stands to gain a lot financially. You kinda need to chill out


Having equity is far from the only way he could profit from the endeavor. And we don't really know for certain that he doesn't have equity anyway.

It's even possible (just stating possibilities, not even saying I suspect this is true) that he did get equity through a cutout of some sort, and the board found out about it, and that's why they fired him.


I would be surprised if there weren't any holdings through a trust, which is a separate legal entity, so technically not him.


If he is ideologically motivated, it's not the same ideology the company is named after


Like 0? How about trying to sell the company to MS in exchange for something something?


Could always be planning to parlay it for an even bigger role in the future


_That_ is his ideology.


> He is ideologically driven

Is that actually confirmed? What has he done to make that a true statement? Is he not just an investor? He seems pretty egoist like every other Silicon Valley venture capitalist and executive.


It is probably - for him - a once in a lifetime sale.


Billions of dollars is a "mere sale?"

Lol



