The website you're using right now is hosted from a single location without any kind of CDN in front, and unless you happen to live next door to the server, you seem to be managing fine. CDNs do help, but simply not bundling 40MB of JavaScript or doing 50 round trips to load a page goes a long way.
What is "high latency" nowadays? If people wouldn't bundle 30mb into every html page it wouldn't be needed.
Also, Cloudflare is needed due to DDoS and abuse from rogue actors, who are mostly located in specific regions. Residential IP ranges in democratic countries are not causing the issues.
Of course they are, but these botnets are actively combated by the ISPs.
The main bad traffic that I receive comes from server IP ranges all over the world and several rogue countries who think it makes sense to wage hybrid war against us. But residential IP ranges are not the majority of bad traffic.
I would even say that residential IP ranges account for most of a company's paying customers, and if you just blocked everything else you most likely wouldn't need Cloudflare.
Unfortunately, firewall technology is not there yet. It's quite hard to block entire countries, and even harder to block every non-residential ASN. On top of that you can still add some open-source "I am human" CAPTCHA solution before you need to reach for Cloudflare.
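To be fair, the mechanics of coarse blocking do exist; the painful part is the data. A rough sketch of the idea, assuming you already maintain a per-country or per-ASN CIDR list somewhere (the file path, table, and set names below are made up for the example, not a real feed):

    #!/usr/bin/env python3
    """Sketch: load a list of CIDRs (e.g. per-country or per-ASN prefixes
    you maintain yourself) into an nftables set and drop traffic from it."""
    import ipaddress
    import subprocess
    import sys

    def load_cidrs(path):
        with open(path) as f:
            for line in f:
                line = line.split("#")[0].strip()
                if line:
                    # validate each entry before handing anything to nft
                    yield str(ipaddress.ip_network(line, strict=False))

    def block(cidrs):
        # One named set plus one drop rule; the set's elements can be
        # refreshed later without touching the rest of the ruleset.
        script = "\n".join([
            "add table inet filter",
            "add set inet filter blocked { type ipv4_addr; flags interval; }",
            "add chain inet filter input { type filter hook input priority 0; }",
            "add rule inet filter input ip saddr @blocked drop",
            "add element inet filter blocked { %s }" % ", ".join(cidrs),
        ])
        subprocess.run(["nft", "-f", "-"], input=script.encode(), check=True)

    if __name__ == "__main__":
        block(list(load_cidrs(sys.argv[1])))  # e.g. block.py blocked-prefixes.txt

The hard part isn't the firewall syntax, it's keeping those prefix lists accurate and current, which is exactly why the tooling feels like it "is not there yet".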
That stupid Cloudflare check page often adds orders of magnitude more latency than a few thousand miles of cable would. Also, most applications and websites are not that sensitive to latency anyway, at least when done properly.
This is quite literally what we've built @ Gobii, but it's prod ready and scalable.
The idea is you spin up a team of agents: they're always on, they can talk to one another, and you and your team can interact with them via email, SMS, Slack, Discord, etc.
And they simulate an externalized team, where the enterprise paying for the team doesn't know it's just AI and simply assumes that the Chinese/Indian/African people on this external team are really bad at what they do.
Interesting approach, but I mean more in the sense of a multi-agent sandbox than workflow automation. Your project feels like wrapping a bunch of LLMs into "agents" with fixed cadences. It's a neat product idea, even if it mostly ends up orchestrating API calls and cron jobs.
The thing I'm curious about is the emergent behavior: letting multiple LLMs interact freely in a simulated organization to see how coordination, bottlenecks, and miscommunication naturally arise.
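To make the distinction concrete, here's the kind of toy setup I have in mind (the ask_model stub is a placeholder for a real LLM call, not anything from your product): agents share nothing but a message log, so any coordination or miscommunication has to emerge from what they read and write there.

    import random  # stands in for deciding which agent "speaks" next

    def ask_model(agent: str, context: list[str]) -> str:
        # Placeholder for a real LLM call that would get the agent's role
        # plus the shared log as its prompt.
        return f"{agent} responds to: {context[-1] if context else 'nothing yet'}"

    def simulate(agents: list[str], ticks: int = 5) -> list[str]:
        log: list[str] = []  # the only shared state the agents have
        for t in range(ticks):
            speaker = random.choice(agents)
            message = ask_model(speaker, log)
            log.append(f"[t={t}] {speaker}: {message}")
        return log

    if __name__ == "__main__":
        for line in simulate(["planner", "engineer", "reviewer"]):
            print(line)

With real models plugged in, the interesting data is the log itself: where tasks get dropped, duplicated, or garbled between agents.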
Agreed, the emergent behavior is the most interesting and valuable part. We don't want bad emergent behavior (agents going rogue), but we do want the good kind (solving problems in unexpected ways).
There's a massive hardware and energy infrastructure build-out going on. None of it is specialized to run only transformers at this point, so wouldn't that create a huge incentive to find newer and better architectures to get the most out of all this hardware and energy infrastructure?
Only being able to run transformers is a silly concept, because attention boils down to two matrix multiplications (plus a softmax), and matrix multiplication is the standard operation in feed-forward and convolutional layers too. Basically, hardware built for those gives you transformers for free.
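Concretely, a single attention head is, up to the softmax and the linear projections, just two matrix multiplications over the same GEMM primitive that dense and convolutional layers reduce to. A minimal NumPy sketch:

    import numpy as np

    def attention(q, k, v):
        # Scaled dot-product attention: matmul #1 (Q K^T), a softmax,
        # then matmul #2 with V.
        d = q.shape[-1]
        scores = q @ k.T / np.sqrt(d)                    # matmul #1
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)   # softmax
        return weights @ v                               # matmul #2

    q = k = v = np.random.randn(8, 64)   # 8 tokens, 64-dim head
    out = attention(q, k, v)             # shape (8, 64)

Any accelerator built to push matmuls for MLPs or convolutions runs this just fine; the softmax and normalization layers are a rounding error in the FLOP budget.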
I think there's some correlation but no, you can be amazingly strong at chess and not super-intelligent by other measures, or be incredibly good at physics or philosophy or poetry or whatever and have no talent for chess.
But, having watched some of Naroditsky's videos, it seems pretty clear that he was in fact very intelligent as well as very good at chess.
Maybe; what exactly are you asking? What even is intelligence? That's a question I've never seen formally answered (and even if someone does answer it, their definition may not match your intuitive feel for what the word means, making it useless outside the exact paper it was defined in).
Formal definitions aside, it isn't possible for "stupid" people to be good at chess. There also is no other animal or known alien that is good at chess. Thus being good at chess is a strong sign of an intelligent human.
We can't go the other way, though. There are plenty of humans generally known to be "intelligent" who are not good at chess. There is a lot more than intelligence needed to be good at chess (practice and study come to mind; there might be more).
While there are no known aliens that are good at chess, that doesn't preclude discovering some in the future (not in your lifetime, though: the speed of light is too slow for them to learn the rules of our chess and communicate back proof that they are good, no matter how intelligent they are).
The ability to plan and operate several moves ahead of one’s opponent has always suggested higher intelligence.
When applied to war we celebrate the general’s brilliance. When applied to economics we say they had excellent foresight. When applied to any human endeavor, except chess, the accomplishment is celebrated as a human achievement.
This is due to humans placing great value upon thinking and planning ahead. Only the intelligent exhibit this behavior.
I think Hikaru’s fans made him take an IQ or intelligence test a couple of years ago and it showed he wasn’t exactly a mastermind. He said at the time something along the lines that being good at chess shows you are intelligent only in that one domain of chess-type thinking, not general intelligence.
I used to watch a lot of his streams/videos, but I always thought he was just not taking himself seriously, being entertaining, "memeing and trolling". I thought it was his strategy to do unfiltered ADHD thoughts even when they don't make any sense, because that's what brings him viewers.
Intelligence has been defined in many ways: the capacity for abstraction, logic, understanding, self-awareness, learning, emotional knowledge, reasoning, planning, creativity, critical thinking, and problem-solving. It can be described as the ability to perceive or infer information and to retain it as knowledge to be applied to adaptive behaviors within an environment or context.
(from Wikipedia)
Intelligence is multifactorial. Being good at chess was one aspect of intelligence in the complexity of Daniel's life, and in anyone's life.
What makes this guy's blog post a more authoritative reference than 1000 other sites, as well as everyone's personal experience of feeling naturally "good" at some tasks and "bad" at others?
> Why? Well, the basic idea behind this theory is that people are different, and maybe you’ve noticed – they really are. People have different interests, different abilities, different moods, etc.
The author isn’t saying that the multiple-intelligence theory is itself valid. Rather, in an educational context, there is a kernel of value in the idea that different students are different. That’s entirely consistent with intelligence being a single thing.
Not that long ago I would have said no, but I increasingly think that intelligence is mostly about the ability to learn. And chess, at a high level, requires a mixture of achieving an extremely high degree of unconscious competence working right alongside a high degree of conscious competence for things like opening prep. And most, if not all, high level chess players that have tried, in earnest, to do things outside of chess have excelled across a wide variety of fields.
But I think where people get confused is in the inverse. If you take a very smart person and he dedicates two years of his life to chess, all alongside training from some of the world's best, then he's still going to be, at best, a strong amateur at the end. In fact I know at least one instance where this exact experiment was tried. This is generally unlike other fields where such an effort would generally put you well into the realm of mastery.
But that's the unconscious competence part: chess takes many years of very serious training before it even starts to become 'natural' for you, and it's at that point that your training journey begins all over again, because suddenly things like opening preparation start to become critical. So when seemingly smart people don't do particularly well at chess, while someone like Magnus, who has/had (daddyhood changes a lot...) a complete 'bro' personality, is arguably the strongest player of all time, it gives the impression that being smart must not be a prerequisite for success at chess.
The education system bored me a lot and made an effort to portray me as some kind of mentally disabled retard. It was rather interesting to me that successful career grown ups couldn't win a single game.
I wasn't interested in chess, but I could see their entire plan unfold on the board. Unless they were actually good I didn't even try to win; instead I let them unfold their plan into their own demise.
My winning streak ended when I got to play against the best kid from a different school. His was the biggest brain I have ever seen from the inside. He pretty much violated basic principles of the game in a way that still bothers me 35 years later.
The game was much too open to really look far ahead, the way one would play against a computer. His actual goal was to trade his knights and bishops for two pawns each!?!?! He pulled off 3 such trades. He carefully set up the trades and it made no fkn sense.
Then came a really long and slow pawn push that didn't allow me to trade my knights and bishops for more than a single pawn.
It took so many moves that he pretty much convinced me that 2 bishops and a knight are worth less than 5 points. I haven't seen a second game like it but I'm still 100% convinced.
One aspect is that highly intelligent people have a hard time asking for help as they are used to always having the answers. It's entirely foreign to them.
No[1]. It is a common misconception that puts more stigma on people with mental illness. My family called my schizoaffective disorder a "gift" because I was creative and intelligent. This led them to ignore my suffering, which made me more depressed.
I don't think the Hatch Act is supposed to prevent the use of the White House account for political purposes. It seems like basically every administration with an X account has done this, e.g. https://x.com/WhiteHouse46/status/1662171756830892032
Although there are probably more contentious cases with other government agencies.
A nonpartisan newspaper could also condemn or blame a political party for an action. But if all of its posts were supportive of one administration, it would no longer be nonpartisan.
You can just look through the old White House accounts. For example, this tweet: https://x.com/WhiteHouse46/status/1879171105044181097 , "While Congressional Republicans refused to pass a bipartisan border security agreement, President Biden took action and encounters today are the lowest since July 2020."
Looking through the tweets, you'll see it's not nonpartisan and isn't supposed to be.
Couple that with this admin going out of their way to only selectively enforce laws/policies in ways that benefit only them and their desired constituents, and it’s not even worth talking about the Hatch Act until someone else heads the Executive Branch.
Due to a lapse in appropriations, the U.S. Office of Special Counsel is closed.
Complaints may still be filed, but most will not be addressed until OSC reopens. [0]
They are the "independent" body that enforces Hatch act regulations.
Nobody is going to enforce any laws on Trump or his executive. Either the midterms will allow for oversight to return or he dies before the transition into authoritarianism is complete. With SCOTUS about to end the Voting Rights Act, it could be over sooner than people think.
If it were August or September of 2026 and an election were around the corner, yes. The Hatch Act is pretty narrowly tailored, though, if you look at its enforcement history.
People want predictability from LLMs, but these things are inherently stochastic, not deterministic compilers. What’s working right now isn’t "prompting better," it’s building systems that keep the LLM on track over time: logging, retrying, verifying outputs, giving it context windows that evolve with the repo, etc.
That’s why we’ve been investing so much in multi-agent supervision and reproducibility loops at gobii.ai. You can’t just "trust" the model; you need an environment where it’s continuously evaluated, self-corrects, and coordinates with other agents (and humans) around shared state. Once you do that, it stops feeling like RNG and starts looking like an actual engineering workflow, distributed between humans and LLMs.
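As a minimal illustration of that verify-and-retry loop (call_llm and verify are placeholder hooks for whatever model client and checker you use, such as tests or schema validation; this is a sketch, not our production code):

    import logging
    from typing import Callable, Tuple

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("agent-loop")

    def run_with_feedback(
        task: str,
        call_llm: Callable[[str], str],             # placeholder: your model client
        verify: Callable[[str], Tuple[bool, str]],  # placeholder: tests, schema checks, linters
        max_attempts: int = 3,
    ) -> str:
        prompt = task
        for attempt in range(1, max_attempts + 1):
            output = call_llm(prompt)
            ok, reason = verify(output)
            log.info("attempt %d: verified=%s (%s)", attempt, ok, reason)
            if ok:
                return output
            # Feed the failure back into the context instead of retrying blindly.
            prompt = f"{task}\n\nPrevious attempt failed verification: {reason}\nPlease fix it."
        raise RuntimeError(f"no verified output after {max_attempts} attempts")

    if __name__ == "__main__":
        # Toy hooks, just to show the control flow.
        fake_llm = lambda p: "print('hello world')"
        fake_verify = lambda out: ("hello" in out, "expected the word 'hello'")
        print(run_with_feedback("write a hello-world script", fake_llm, fake_verify))

The point isn't the loop itself; it's that every attempt leaves a log entry and a verification verdict you can inspect later instead of trusting a single shot.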