Hacker News | hhdhdbdb's comments

AMD, Groq in the future, TSMC?

Edit: I will add Apple.


Groq has already been around for 8 years, and they have just over one ten-thousandth the revenue of Nvidia. They're also hemorrhaging money: their annual spend is 30x their revenue. They'll probably get acquired by someone in a few years. *Especially* in the scenario we're discussing, the AI bubble popping, Groq would implode.

The other two, sure.


Eight years in and still not profitable is forgivable for a hardware startup creating something new, especially new silicon.

Depends on why they are losing money. But losing money alone is not bad. This ain't an ice cream truck.

Agree Groq implodes if demand for intelligent automation tanks (i.e. the bubble bursts), or competitors disrupt the disruptor. Another Z is all you need paper means my old laptop is running models fast or something means groq is not something you need.

They can probably pivot.

BTW OpenAI is 9 years old and they are a SaaS/PaaS running on Azure.


> Another Z is all you need paper means my old laptop is running models fast or something means groq is not something you need.

I’m having trouble parsing that sentence. Is it missing some punctuation?


They're referencing papers like "Attention Is All You Need"; such papers have usually made AI models significantly faster/better. Better/faster models mean great models running well on old hardware, obviating the need for specialized hardware.

More punctuation would look like:

> Another "Z is all you need" paper means my old laptop is running models fast or something, [therefore] means groq is not something you need.


BATNA is kind of a dog whistle that this is a slightly hostile business deal. Hostile under the hood. Smiles and handshakes on the surface.


That is every negotiation I’ve ever seen, frankly. What else would you expect?

Friendly tends to be called ‘charity’


Friendly would be win-win. We'll pay you so you don't worry about money, and we win because you are great. Batna (looks better as a word) is there, but in the distance.


Win/win also has some teeth under it somewhere, or they’d just give you everything, yes? In your scenario, why not give you a billion dollars vs a million vs 100k?

BATNA (and I am very familiar with the term, as I’ve had to do a significant amount of negotiation) is the ‘plan B’ if there is no acceptable ‘good’ agreement.

If someone has no acceptable BATNA, their negotiation is essentially ‘please don’t beat me up too bad’, not ‘let’s see if we can agree on something that works for both of us’. No BATNA means you aren’t effectively negotiating, just begging to not be abused too badly.

Sometimes an employer doesn’t have a BATNA option, but that is rare. Usually it’s the employee who doesn’t have a plan B.

A BATNA when doing negotiations over salary, for instance, might be taking another offer with an otherwise less desirable company. Or continuing a job search for more time. Or becoming a sugar baby for an older woman somewhere, etc. or filing for bankruptcy.

Knowing what your own (and the other side's) plan B is, is important to actually having a successful negotiation, because when your offer starts to look less desirable than their plan B, it isn’t going to go well.


Since you said you have a lot of negotiation experience, how do you negotiate if you've been let go and have to find a job quickly?


1. Tell them you have one other offer (or late stage interview).

2. Ask for more money and/or more shares (if you think the shares are any good).


Yeah. Nah, that makes you sound silly (at least for software jobs) unless there is a good reason for it, like currency conversions or perverse tax incentives.

There is a tiny chance that asking for 133700 might get a smile at a smaller, tech-focused company though.


Worked for me at a large bank.


Hard to know that unless you got some insight into what they were thinking during the hire.

If it worked, I wonder if it worked as a shibboleth.


Why is fertility declining? I posit we are hitting non-food constraints. Political ones. Land-use constraints. If you build millions of homes, fertility will go up.


In wealthier, modern economies:

* More women work and invest in their own education, and fewer spend time at home as they might in poorer countries, which would otherwise facilitate giving birth and investing time in childcare.

* More men and women derive their primary income from work that children cannot easily participate in, e.g. office work or work-from-home computer work, vs farming or working with one's hands. In many poorer countries it is common practice to have more children at least partially to bolster the labor force around the house.

* Wealthier nations have better access to family planning: contraception, abortion, and pastimes that can meaningfully compete against getting laid in the first place.

Sources: Colleran, H., Snopkowski, K. Variation in wealth and educational drivers of fertility decline across 45 countries. Popul Ecol 60, 155–169 (2018). https://doi.org/10.1007/s10144-018-0626-5 https://link.springer.com/article/10.1007/s10144-018-0626-5

More Work, Fewer Babies: What Does Workism Have to Do with Falling Fertility? - Laurie DeRose and Lyman Stone https://ifstudies.org/ifs-admin/resources/reports/ifs-workis...


There are millions of empty homes in this world.

I'd assume environmental, but there are also more subtle answers than will fit in a comment box; whatever the cause, it has to be near-global.

China's building loads more houses and still has a fertility decline.


Surely, the reasons are multivariate with all kinds of interactions and feedback mechanisms between the variables.

It is really a good example of what natural dimension reducers we are, even when we know it makes no sense. It is like we can't help but reduce things to one explanatory variable.

My favorite is the news headline "The market went up today because of X".


They never say that.

They say: Tesla shares up as revelations surface that the wind is blowing east.


Yes, I forgot to mention the implied qualifiers: homes that meet code, with connected utilities, in places people want to live, that are not being land-banked.


Bitcoin is a pure example that shows the limit to energy consumption is how much money people have to throw at it. And if that money is thrown into generating more energy, it is a cycle. There are no stomach-size or human-reproduction constraints. We can waste power as quickly as we can generate more.

The only hope is to generate this power from green sources.


The existence of examples where it happens by design does not say anything either way about whether it must happen all the time.


Yeah, I am not saying all the time, but I am saying that when it happens it can be less bounded than "human population growth in the early 21st century."


How is this a narrow niche?

Chain-of-thought-type operations are in this "niche".

Also anything where the value is in the follow-up chat, not the one-shot.


A gravitational wave requires an event like a black hole merger, or basically something to move and change the field, right?

In this case, how does the fact that a big object is still influencing space/time around it get communicated when it is not moving? Is that still gravitons?


Everything creates gravitational waves. They are very difficult to detect, so we can only detect the ones created by a black hole merger or something similar.

Assuming our guesses about quantum gravity are correct, the normal gravitational force uses gravitons too; they are virtual gravitons, but the distinction between "real" and "virtual" particles is a whole other can of worms.


Google can't please all of the sites all of the time, or all the visitors.

It is too big to even worry about the 4k clicks a day for one site. It is like us optimizing away an expense of 0.01c. That makes a difference when the 0.01c is an API call you make a million times (a million calls at 0.01c each is $100), but it only surfaces if you aggregate it.

Therefore this problem can only even be seen by Google if it can be surfaced in aggregate over, say, a billion queries.

I wonder how that can be done.

Probably it can only be done using data. Which means spying on people in various ways, and making assumptions like length of time on site equals quality.

They probably use machine learning too. There may be no reason for the lost rankings other than a wind change caused by some updated parameters in an OKR-chasing model.


I realize "did you actually read the article" is against HN guidelines, but what else am I supposed to say when I see this at the top of the comment thread?

I mean, when you say "Google can't please all of the sites all of the time, or all the visitors.", I wholeheartedly agree, but this blog post was excellently sourced with data that shows exactly how Google is raising sites that any reasonable human would say are considerably shittier than this site that is getting down-ranked. It also seems pretty clear that what has changed is Google's ranking algorithm, at specific points.

> They probably use machine learning too. There may be no reason for the lost rankings other than a wind change caused by some updated parameters in an OKR chasing model.

That is literally what TFA says in the very first section: "Some people believe they have lost control of their AI ranking systems, ..."


I think I said a lot more than TFA on those points.


Any timing attacks possible on a virtualized system using dedupe?

E.g. find out what my neighbours have installed.

Or if the data before an SSH key is predictable, keep writing that out to disk guessing the next byte or something like that.


I don't think you even need timing attacks if you can read the zpool statistics; you can ask for a histogram of deduped blocks.

Guessing one byte at a time is not possible though because dedupe is block-level in ZFS.
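
To make the "whole blocks, not bytes" point concrete, here's roughly what a probe would have to look like. This is only a sketch: the 128K recordsize, the pool path, and the idea that a dedup hit shows up as a measurable timing difference are all assumptions, not anything ZFS promises.

    import os, time

    BLOCK = 128 * 1024  # assumed ZFS recordsize (128K default); only whole-block matches dedupe

    def probe_block(candidate, path="/tank/shared/probe.bin"):
        # Write one full candidate block synchronously and time it.
        # Speculative idea: if an identical block already exists in the
        # pool, the write is a dedup-table hit rather than a fresh
        # allocation, which *might* show up as a timing or space
        # accounting difference. Guessing byte-by-byte cannot work,
        # because anything short of a whole matching block never dedupes.
        assert len(candidate) == BLOCK
        start = time.perf_counter()
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o600)
        try:
            os.write(fd, candidate)
            os.fsync(fd)
        finally:
            os.close(fd)
        return time.perf_counter() - start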


Gosh, you’re likely right, but what if comparing the blocks (to decide on deduping) is done a byte at a time and somehow that can be detected (with a timing channel or a uarch side channel)? ZFS likely compares the hash, but I think KSM doesn’t use hashes but memcmp (or something in that spirit) to avoid collisions. So just maybe… just maybe GP is onto something… interesting fantasy ;-)
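
If that fantasy were real, the leak would come from the early-exit behaviour of a naive comparison: the longer the correctly guessed prefix, the longer the compare runs. A toy Python sketch of the property (KSM itself is C and uses memcmp; this is only the shape of the idea, not its implementation):

    def early_exit_compare(guess, secret):
        # Returns at the first mismatching byte, so running time grows
        # with the length of the correctly guessed prefix -- exactly
        # the property a timing side channel would feed on. A
        # constant-time compare removes that signal.
        if len(guess) != len(secret):
            return False
        for g, s in zip(guess, secret):
            if g != s:
                return False
        return True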


Thanks for putting meat on the (speculative) bone I threw out! Very interesting.


VMware ESXi used to dedupe RAM and had to disable this by default because of a security issue it caused that leaked data between VMs.

