Hacker News | new | past | comments | ask | show | jobs | submit | smallnix's comments | login

> It’s kind of crazy that they have been slow to create real products and competitive large scale models from their research.

I always thought they deliberately tried to keep the genie in the bottle as long as they could.


Their unreleased LaMDA[1] famously caused one of their own engineers to have a public crashout in 2022, before ChatGPT dropped. Pre-ChatGPT they also showed it off on their research blog[2] doing very ChatGPT-like things, and they alluded to 'risks,' but those were primarily about it using naughty language or spreading misinformation.

I think they were worried that releasing a product like ChatGPT only had downside risks for them, because it might mess up their money printing operation over in advertising by doing slurs and swears. Those sweet summer children: little did they know they could run an operation with a sieg-heiling CEO who uses LLMs to manufacture and distribute CSAM worldwide, and it wouldn't make above-the-fold news.

[1] https://en.wikipedia.org/wiki/LaMDA#Sentience_claims

[2] https://research.google/blog/lamda-towards-safe-grounded-and...


The front runner is not always the winner. If they were able to keep pace with OpenAI while letting them take all the hits and missteps, it could pay off.

Time will tell whether LLM training becomes a race to the bottom or the release of the "open source" models proves to be a spoiler. From the outside looking in: while ChatGPT has brand recognition for the average person, who couldn't tell the difference between any two LLMs, Google offering Gemini on Android phones could perhaps supplant them.


I swear the Tay incident caused tech companies to be unnecessarily risk averse with chatbots for years.

"Attention Is All You Need" was written by Googlers, IIRC.

Indeed, none of the current AI boom would’ve happened without Google Brain and their failure to execute on their huge early lead. It’s basically a Xerox PARC do-over with ads instead of printers.

> Computer science has been advancing language design by building higher and higher level languages

Why? Because new languages have an IR in their compilation path?


This touches on the toupee fallacy: "I never saw a large company fail to grow large because of deferred scaling."

Friendster might fit though: https://highscalability.com/friendster-lost-lead-because-of-...


Even that would be wrong. Von der Leyen was strong-armed into her position by Merkel and the other heads of state, overruling Timmermans' nomination.

Isn't every AI datacenter chip manufacturer critically dependent on the EU (ASML)?

Sure but the US isn’t vowing to eliminate all dependencies on EU goods. (Just burning all their good will.)

If the car that did a hit-and-run was operated autonomously, the insurance of the maker of that car should pay. Otherwise a human was driving, and the situation falls into the bucket of what we already have today.

So yes, carmakers would pay in a hit-and-run.


> If the car that did a hit-and-run was operated autonomously the insurance of the maker of that car should pay

Why? That's not their fault. If a car hits my uninsured bicycle and flees, the manufacturer isn't liable. (My personal umbrella or other insurance, on the other hand, may cover it.)


They're describing a situation of liability, not mere damage. If your bicycle is hit, you didn't do anything wrong.

If you run into someone on your bike and are at fault then you generally would be liable.

They're talking about the hypothetical where you're on your bike, which was sold as an autonomous bike whose manufacturer's software fully drives it, and it runs into someone and is at fault.


E.g. with Stuxnet they got to the air-gapped machines by letting worms loose on the suppliers' networks, targeting technicians' laptops.


> no technology could recognize that.

Perhaps require monitoring of the arm muscles' electrical signals, build a profile, match the readings to the in-game actions, and check that the profile matches the advertised player.
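The last step of that pipeline could be sketched as a plain similarity check (everything here is hypothetical: the feature vectors, the threshold, and the idea that one score would suffice; real EMG biometrics would be far more involved):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def matches_profile(live_features, enrolled_profile, threshold=0.9):
    """Crude check: do the live muscle-signal features, aligned to game
    actions, resemble the enrolled player's profile? Threshold is made up."""
    return cosine_similarity(live_features, enrolled_profile) >= threshold
```

A perfect match scores 1.0 and passes; unrelated signals score near 0 and fail.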


Sounds like it could be fixed by making it configurable to hide all issues without a certain tag (or to auto-apply a hiding tag) on the issues "landing page".
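The filter itself would be trivial; a sketch, with the issue shape and tag name invented for illustration:

```python
def visible_issues(issues, required_tag="triaged"):
    """Landing-page filter: keep only issues carrying the required tag.
    (Issue dict shape and the tag name are hypothetical.)"""
    return [issue for issue in issues if required_tag in issue.get("tags", [])]
```

Everything without the tag simply never reaches the landing page, while still being findable via search.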


Interesting that the EU is becoming stricter than the US as life expectancy grows.

Life + 70 can mean the work is protected for 120 years (published at 40, author dies at 90)?


It's worse. There are also the war periods for which the copyright term is extended in some countries, like France.

For example, the lyrics to The Internationale were written in 1871 and the music composed in 1888. They fully entered the public domain... in 2014 https://en.wikipedia.org/wiki/The_Internationale#Authorship_... Over 140 years of copyright.


Some people publish meaningful works at age 20. And some people live to 100.

That could be 150 years of copyright.
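Under a life-plus-70 rule the effective term is the author's remaining lifetime after publication plus 70 years; a quick sanity check of both figures above:

```python
def effective_term(publish_age, death_age, post_mortem=70):
    """Years a work stays protected under a life-plus-N rule:
    the author's remaining lifetime after publication, plus N."""
    return (death_age - publish_age) + post_mortem

print(effective_term(40, 90))    # published at 40, dies at 90  -> 120
print(effective_term(20, 100))   # published at 20, lives to 100 -> 150
```

This ignores wartime extensions like France's, which only push the numbers higher.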


European laws are stricter on paper but more loosely enforced.

US laws are looser on paper but viciously enforced.

