crmi's comments | Hacker News

Yep — on mainstream AI YouTube videos, I see a lot of comment replies accusing anyone of commenting with AI (if it's a well-written post, with punctuation). Others chiming in "ran through ai detector, says 37% human 63% AI — BOT" etc.

YT comments were never a good place, but it's interesting to see this shift now it's hit the masses.

And interesting that no one's commenting about the robotic AI voice-over... Instead they're pointing fingers at each other like the Spider-Man meme.


I’m not a YouTube commenter, so I don’t have a horse in this race, but it seems like Google is in a very good position to know if a commenter is AI or not. It is very likely that their YouTube account is tied to a Gmail account… they know if that’s being used by a person or not.

Google’s unwillingness to tackle comment spam has been a problem for years, long before AI. I recall seeing some videos on some scripts a guy wrote to flag and filter spam and bots in YouTube comments — something Google could and should have done a long time ago. If they have done anything at all recently, it’s been due to public shaming from top creators on the platform.


Could it be that AOL ending dialup marks the official end of the dotcom bubble? Which means the next one, whatever that may be... starts now?


Perhaps they're taking a leaf from Nvidia's book - influencers dunking on their bar charts gives a lot of free press coverage/mindshare.


Debian? I noticed this too; switching from the LTS version to the latest was much better. On Arch, no FF issues at all.


Really feels like it could be an Onion headline.


Ha! Came here to say the same thing...


Really that good? I've got back into using Cursor and it seems they've improved.


I'm a tmux/vim user and I heavily use Claude Code. It's good.


I've used vim to the stage that I not only know how to exit it, it's muscle memory... And I still prefer nano. It's just better imo.


That puts you vastly ahead of most people who encounter vim.


Some of the young team haven't seen a rotary phone before... Using T9 typing must seem like encryption to them.


To add to this, I ran into a lot of issues too, and similar ones when using Cursor... until I started creating a mega list of rules for it to follow that attaches to the prompts. Then outputs improved (but fell off after the context window got too large). At that stage I used a prompt to summarize, then continued with a fresh context.


I've got a working theory that models perform differently when used in different timezones... As in, during US working hours they don't work as well due to high load. When used at 'offpeak' hours, not only are they (obviously) snappier but the outputs appear to be a higher standard. I've thought this for a while but am now noticing it with Claude4 [thinking] recently. Textbook case of anecdata, of course.


Interesting thought, if nothing else. Unless I misunderstand, it would be easy to run a study to see if this is true: use the API to send the same but slightly perturbed prompt (so as to avoid the caches), one with a definite answer, then run that once per hour for a week and see if the accuracy oscillates or not.
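A minimal sketch of that study. Everything here is hypothetical — `ask_model` stands in for whatever API client you'd actually use, and the questions/answers are just illustrative fixed-answer probes:

```python
import random
import time

# Hypothetical fixed-answer probes; swap in whatever has a definite answer.
QUESTIONS = [
    ("What is 17 * 23? Reply with the number only.", "391"),
    ("What is the capital of Australia? Reply with one word.", "Canberra"),
]

def perturb(prompt: str) -> str:
    """Append a random nonce so provider-side prompt caches can't reuse a result."""
    return f"{prompt} [run-id: {random.randint(0, 10**9)}]"

def accuracy(ask_model, questions=QUESTIONS) -> float:
    """Score one round of probes; ask_model(prompt) -> reply string."""
    correct = sum(1 for q, a in questions if ask_model(perturb(q)).strip() == a)
    return correct / len(questions)

def run_study(ask_model, hours=24 * 7, sleep=time.sleep):
    """Once per hour for a week, record (hour, accuracy) pairs."""
    results = []
    for hour in range(hours):
        results.append((hour, accuracy(ask_model)))
        sleep(3600)
    return results
```

Plotting `results` by hour of day (in the provider's peak timezone) would show whether accuracy actually dips at peak load.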


Yes, good idea - although it appears we would also have to account for the possibility of providers nerfing their models. I've read others who also think models are being quantized after a while to cut costs.


Same! I did notice, a couple of months ago, that the same prompt failed in the morning and then, later that day, when starting from scratch with identical prompts, the results were much better.

