JohnMakin's comments | Hacker News

What's disappointing is that this precise thing was happening to people trying to report on Gaza issues, who were encountering ghost "technical issues" and shadow bans or outright bans, but any such discussion about it here seemed to be getting flagged.

I have been studying and practicing Tibetan Buddhism for a little over a year now, particularly dream yoga, though I've branched into some of the other practices. I'm always a skeptic, but some of the stuff they can do is fascinating. There is scientific evidence that practitioners can raise and lower their body temperature through meditation and withstand extreme heat, cold, and deprivation. I've played around with deprivation myself, and what it has done for my mental health and body has surprised me. I'm but a novice, but I absolutely believe they are tapping into something about the body/mind that science doesn't yet understand.

There are reasons to be extremely skeptical about some of their claims, but some of it is very interesting and credible.


Do you have some literature resources on where to start reading on that stuff?

Translations of their texts are widely available; I started with Tibetan Yoga and Secret Doctrines by Evans-Wentz. There's a lot of lore and tradition you need to understand for some of it, but the gist is there.

playing "GTO" doesn't mean you will destroy people, this is a common misunderstanding of the term. It means that you are playing in a way that cannot be exploited - this does not mean you're also playing in the way that will win you the most money.

Also, there is no "better understanding" of GTO because poker is an unsolved game, and the assumptions you feed into a GTO playstyle can change quickly or be wrong. The idea that you can sit there like an automaton with a set strategy and win is false.

I've been playing off and on professionally for 20 years.


> The idea that you can sit there like an automaton with a set strategy and win is false.

This is provably false.

You're absolutely right that GTO does not guarantee you'll win the maximum against a fish, but neither does exploitative play. In fact, exploitative play can't guarantee you anything, which is probably why old-school pro players are perennially going broke throughout their careers (that and bad bankroll management).

IMO, currently, over 90% of pro poker players (especially live and in the US) fundamentally do not understand how poker should be played (which is why they get so easily destroyed by the new generation in online heads up).


> This is provably false.

Where is the proof?

> You're absolutely right that GTO does not guarantee you'll win the maximum against a fish, but neither does exploitative play. In fact, exploitative play can't guarantee you anything, which is probably why old-school pro players are perennially going broke throughout their careers (that and bad bankroll management).

I'm not arguing in favor of one or the other; I am just correcting the misunderstanding. In reality, you should adapt to the conditions at the table and your opponents' habits, because "GTO" is only possible against perfect play to begin with, so you're always going to be playing slightly imperfectly. So is everyone, because you cannot know everything. And again, it's almost never the way to win the most money, which is a distinction not a lot of GTO nerds understand. I'm not arguing against it at all - I use GTO solvers to work on stuff a lot.

And I also never claimed exploitative strategies guarantee everything, for the same reason "GTO" doesn't either. It's a game of incomplete information. The skill comes from using incomplete information to make good assumptions - and that has almost nothing to do with math. And, there are pros that have been winning for long amounts of time knowing zero about GTO theory.


There is an entire area of math about the uncertainty of decision correctness in incomplete information scenarios. One of the neat aspects of it is that all computable optimal decision makers are mechanically exploitable if you have a reasonably accurate model of their finiteness. In the case of human minds, that just means they are a lot like you. The exploits require iterated games and are cognitively difficult (you have to track a lot of state).

Anecdotally, in my poker playing days I had a lot of success attacking quasi-optimal play this way. Optimality is contextual. You can in fact engineer a context that motivates suboptimal decisions, though it isn't easy.

However, at the limit, this is really just attacking the cognitive faculties of your opponents rather than the math of the game. Someone with a similar ability to mentally manipulate large amounts of state could nullify the advantage. It is meta-games all the way down.


> Optimality is contextual. You can in fact engineer a context that motivates suboptimal decisions, though it isn't easy.

I agree with your post, but I'd just like to nitpick that this first phrase is not true: the equilibrium point is independent of what your opponent does. I'm pretty sure you know this, as you then go on to describe not a context where the equilibrium changes, but one where it becomes hard for humans to find the equilibrium.

So we agree; it's just a small nitpick about how you worded it.


That is a fair point.

> Where is the proof?

von Neumann proved it for 2 players in his classic "Theory of Games and Economic Behavior": https://archive.org/details/in.ernet.dli.2015.215284/page/n3...

Nash proved it for n-players: https://pmc.ncbi.nlm.nih.gov/articles/PMC1063129/pdf/pnas015...
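
For reference (paraphrasing), the two-player statement is the minimax theorem: for any finite zero-sum game with payoff matrix A and mixed strategies x and y,

    max_x min_y x^T A y = min_y max_x x^T A y = v,

i.e. an unexploitable mixed strategy that guarantees at least the game value v always exists.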

> "GTO" is only possible against perfect play to begin with

This is a very common misconception, probably because GTO is usually explained as the equilibrium reached by 2 perfect players. The key insight of GTO is that you do not adjust your strategy to what your opponent is doing. If you play the equilibrium strategy and they don't, you're guaranteed to make money.
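
To make that guarantee concrete, here is a minimal sketch (my own illustration, not from the thread: the 2x2 payoff matrix is made up, and it assumes numpy/scipy are available). It computes the maximin ("GTO") mixed strategy of a toy zero-sum matrix game with a linear program, then checks that its expected payoff against every counter-strategy is at least the game value, i.e. it cannot be exploited no matter what the opponent does.

    # Solve a tiny zero-sum matrix game and check the maximin guarantee.
    # Purely illustrative toy example, not a poker solver.
    import numpy as np
    from scipy.optimize import linprog

    # Row player's payoff matrix for a made-up 2x2 game.
    A = np.array([[ 1.0, -1.0],
                  [-2.0,  3.0]])
    m, n = A.shape

    # Variables: x_1..x_m (mixed strategy) and v (guaranteed value); maximize v.
    c = np.concatenate([np.zeros(m), [-1.0]])           # linprog minimizes, so minimize -v
    A_ub = np.hstack([-A.T, np.ones((n, 1))])           # enforce v <= (x^T A)_j for every column j
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.ones(m), [0.0]]).reshape(1, -1)  # probabilities sum to 1
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]           # x_i >= 0, v unrestricted

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    x, v = res.x[:m], res.x[m]
    print("maximin strategy:", np.round(x, 3), "game value:", round(v, 3))
    # Expected payoff against each pure counter-strategy is >= v:
    print("payoff per opposing column:", np.round(x @ A, 3))

Real poker is astronomically bigger, which is why solvers work on abstractions, but the guarantee itself is the same idea.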

> And I also never claimed exploitative strategies guarantee everything, for the same reason "GTO" doesn't either. It's a game of incomplete information.

I didn't say you did; I was just making my own independent argument about why intuitive play is dangerous and why people often end up deceiving themselves into thinking they're winning players.

> And, there are pros that have been winning for long amounts of time knowing zero about GTO theory.

Which is why I said that, IMO, 90% of pro players fundamentally misunderstand poker (and that's not even counting the losing players who think they're "pro").


I definitely understand the term, I promise you. If you are playing GTO, you will destroy people. I am 100% confident of this.

"Rising costs." BS. These same companies are also going around saying AI is gonna replace their workforce. Rockstar's been laying off like crazy the last few years, gee, what a shocker GTA VI keeps getting delayed.

It's almost, almost like people are valuable and worth retaining.


> It has become difficult to grade students using anything other than in-person pen and paper assessments,

This shouldn't be a big deal. This was the norm for decades. I finished my CS undergrad only ~10 years ago, and every test was proctored, pen and paper. Very, very rarely would there be a remote submission. It did not seem possible to easily cheat in that environment unless the test allowed notes you yourself did not write, or you procured a copy of the test beforehand and studied off it, but the material was sufficiently rigorous that you sort of had to know it well to pass the class, which seems to me the whole aim of a college course.


> This shouldn't be a big deal. This was the norm for decades.

We need to hire more professors, then, as the ratio of FTE profs to FTE students has dropped significantly, even over just the past decade.

Edit: But I agree. I've mentioned to my professor wife that there needs to be a movement back to oral exams. Oral exams are graded, nothing else is. It works for law school - one of the only things that works for law school. One exam at the end of the semester. Nothing else matters, because the only thing a class needs to measure is mastery of the material, not whether you are diligent at completing basic work with the help of textbooks, friends, and the Internet.


I was referring to other kinds of assessment such as exercise lists, take-home coding projects, technical writing, etc.

> The measured take, that LLMs are a significant productivity tool comparable to previous technological shifts but not a rupture in the basic economic fabric, doesn’t generate much engagement. It’s boring.

This isn't a new take. The problem is, "boring" doesn't warrant the massive bet the market has made on it, so this argument is essentially "AI is worthless" to someone significantly invested.

It's not so much that people aren't making this argument; it's that it gets tossed immediately in with the "stochastic parrot" bunch.


I will say that I'm not entirely clear on how the boring case - which I agree seems most likely - manages to pay for all this massive investment in datacenters, power plants, new frontier models, etc. OpenAI has already raised 60 gigabucks from people who are expecting a return on that money, and since it is not actually profitable yet, it will need to raise more to get to that point. I'm not clear on how they manage to make enough profit to pay off all that investment if AI is actually a "5-25% improvement in productivity for some classes of white collar workers" sort of proposition.

California is big, and the LA basin can be extremely dry. For me this is the most rain I've seen since the one bad El Niño season in the '90s, but that one didn't last nearly as long. It has seemed normal the last few years to get winter storm conditions that last for months.

2025 was the coolest summer I've ever experienced living where I do, near the coast, with an onshore breeze that is now frigid and very wet at times. I now get fog at times of the year when it rarely used to happen - almost like San Francisco's notorious summers.

Tracking local weather patterns used to be part of my last career, so I notice this stuff pretty well.


Agreed - it's interesting that these systems inevitably involve proving you're a citizen in some way, which seems unnecessary if the goal is just to figure out someone's age.

Hard no. It's far too easy to get "flagged" by opaque systems into "age verification" processes or account lockouts that require handing over far more PII than I'm comfortable giving a company like this.

> Users who are incorrectly placed in the under-18 experience will always have a fast, simple way to confirm their age and restore their full access with a selfie through Persona, a secure identity-verification service.

Yea, my LinkedIn account, which was 15 years old and on which I was a paid pro user for several years, got flagged for verification (no reason ever given; I rarely used it for anything other than interacting with recruiters), with this same company as the backend provider. They wouldn't accept a (super invasive-feeling) full facial scan plus a REAL ID; they also wanted a passport. So I opted out of the platform. There was no one to contact - it wasn't "fast" or "easy" at all. This kind of behavior feels like a data grab for more nefarious actors and data brokers further downstream of these kinds of services.


The unfortunate reality is that this isn't just corporations acting against users' interests; governments around the world are pushing for these surveillance systems as well. It's all about centralizing power and control.

Don't forget the journalists.

Facebook made journalists a lot less relevant, so anything that hurts Meta (and hence anything that hurts tech in general) is a good story that helps journalists get revenge and revenue.

"Think of the children", as much as it is hated on HN, is a great way to get the population riled up. If there's something that happens which involves a tech company and a child, even if this is an anecdote that should have no bearing on policy, the media goes into a frenzy. As we all know, the media (and their consumers) love anecdotes and hate statistics, and because of how many users most tech products have, there are plenty of anecdotes to go around, no matter how good the company's intentions.

Politicians still read the NY Times, which had reporters admit on record that they were explicitly asked to put tech in an unfavorable light[1], so if the NYT says something is a problem, legislators will try to legislate that problem away, no matter how harebrained, ineffective, and ultimately harmful the solution is.

[1] https://x.com/KelseyTuoc/status/1588231892792328192?lang=en


As someone who works on integrations with Persona, it cracks me up that Persona is the vendor OpenAI will use to do manual verifications.

I'm glad ChatGPT will get a taste of VC startup tech quality ;)


Yep. Whenever platforms push for more data, I opt out. And like clockwork, they leak all that PII to hackers within months.

Yeah, this is all far, far too invasive. The goal is obviously to gather as much data on you as possible under whatever pretense users are most likely to accept - "think of the children", as always. That data will then be used to sell advertising to you, or be sold outright to data brokers.

New boss, same as the old boss.


Yeah, I have had a LinkedIn account since forever and am over 50, and LinkedIn still occasionally gives me the "since you are a teen…" screen. What kind of teen is on LinkedIn anyway?

Prediction markets to me are a natural consequence of the post-truth world we live in.

For one, they create insanely imbalanced systems where the two sides of a 50/50 market hold completely opposite worldviews, and when the market resolves, each side may have access to sources, whether completely fake or misinfo'd, that lead them to believe the market should be settled in their favor. This puts the prediction market in the position of being the "arbiter" of reality - and in a world where media has become so corrupt and full of noise, finding out what actually happened can be really hard.

So then, a market such as this seems inevitable to me, because you can point to it and say "see, it resolved this way." It does not perfectly align with truth, but with market forces involved it would probably get closer to the truth than a world where you just cherrypick whatever biased source makes you feel good. And of course, the market itself can become corrupt.


If those prediction markets can be used as an alternative way to put resources into "reporting", they could be valuable. The downfall of journalism was preceded by the collapse of newspaper economics, with M&As causing cultural inbreeding.

Stock markets aren't too different in that they help predict which companies will be valuable in the future.


natural consequence of the post-truth world we live in

We _currently_ live in. Truth and trust are older than computers. They will outlive computers, too. Computers and the system we have built will eventually yield to truth, not the other way around.

One can get a credit card and say they are living in a post-scarcity world until they hit the limit, but that doesn't make it true.


I haven't found many cases of this.

The only one I know of is the bet on whether Venezuela would be invaded, which Polymarket didn't resolve as "yes", arguing that it technically wasn't an invasion. So it isn't really that there are two realities; it's just a very inconsequential bit of wordplay about how we describe reality, and the only ones concerned with it are those who made the bet.


> Prediction markets to me are a natural consequence of the post-truth world we live in

My best strategy for dealing with people who have been radicalized is to make wagers. Even making the wagers is hard, because they are often about individuals whose job it is to gaslight their constituents... but it's still been working pretty well. I'm up probably $1000 off these little bets, and they have helped me win later arguments.


Your criticism applies to any epistemic institution. Prediction markets are far more robust than any other epistemic institution that humans have come up with so far.

Contrary to what you insinuate, we don't have two parallel markets for each issue, where each group bets in its own echo-chamber market to feel good about its beliefs. The law of one price holds: we have a single market per event, and it almost always resolves such that well over 50% of people agree with the result retrospectively. The resolution mechanism affects the price: if the market is expected to be biased towards a certain outcome, then that outcome will trade at a higher price.
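
To make that last point concrete (numbers purely illustrative): a YES share pays out on a YES resolution, so its fair price is roughly P(resolves YES), not P(the event "really" happened). If you think the underlying event has a 50% chance, and that in the remaining cases the arbiter will still rule YES about 20% of the time, the share is worth about 0.5 + 0.5 * 0.2 = 0.60 - expected arbiter bias shows up directly as a higher price.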

You could easily prove your theory by giving an example of two markets about the same issue, with different arbiters, where the price significantly diverges, or where the arbiters disagreed after the fact. There will be examples, but how much volume traded in those markets compared to the markets where that didn't happen?


I don't know why you're reacting to the parent comment as a criticism - it is just an observation. I have spent a lot of time over the years in similar markets. A trivial example: the "Will Hillary be the Democrat Nominee?" Yes/No market on PredictIt for the 2020 election.

Even though the market basically settled in 2019 when she missed filing deadlines, a whole group of conspiracy theorists in the "Yes" camp descended on the comments, and "YES" was still at 10-15% even in the very last hours before they closed the market, which was a week or two before the actual election.

Every news event was interpreted by this group as a "sign" she was secretly running, and they provided their own sources. I was arbitraging this market the whole year on any "new" Hillary news, so I paid very close attention to the discussion circles around it. I checked in after the market closed, and many of the genuine "YES" people, who were putting up as much money as they could, concluded that PredictIt was incorrect and that Hillary was still the "phantom" nominee. It would not surprise me at all if a few of them still believe she actually ran for president. They probably should have closed the market several months earlier, but they kept it open. If they had closed it at any reasonable point beforehand, a huge chunk of the "YES" market would have lost their minds.

People are goofy and prone to conspiratorial thinking. If you want an example of something there isn't a market for, just an example of how far two groups of people's thinking can diverge: ask 100 random people what happened to an ICE officer in Minnesota who was involved in a shooting very recently.

