The blog says "With a relaxed submission constraint, we found that model performance improved significantly. When allowed 10,000 submissions per problem, the model achieved a score of 362.14 – above the gold medal threshold – even without any test-time selection strategy."
I am interpreting this to mean that the model tried 10K approaches to solve the problem, and finally selected the one that did the trick. Am I wrong?
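If it helps, here is a rough sketch of the two readings I have in mind (everything below is hypothetical and made up by me, not taken from the blog; generate_solution, judge_accepts, and the pass probability are placeholders):

    import random

    def generate_solution(problem: str) -> str:
        # Stand-in for sampling one candidate program from the model.
        return f"candidate-{random.randrange(10**9)} for {problem}"

    def judge_accepts(problem: str, solution: str) -> bool:
        # Stand-in for the contest judge running its hidden tests;
        # the pass probability is purely illustrative.
        return random.random() < 0.001

    def solved_without_selection(problem: str, budget: int = 10_000) -> bool:
        # "No test-time selection": every sampled candidate is submitted,
        # and the problem counts as solved if any submission is accepted.
        return any(judge_accepts(problem, generate_solution(problem))
                   for _ in range(budget))

    def solved_with_selection(problem: str, budget: int = 10_000, k: int = 50) -> bool:
        # A test-time selection strategy: sample the same number of candidates,
        # but narrow them down to k before submitting. The random shortlist here
        # stands in for a real ranking step (self-generated tests, voting, etc.).
        candidates = [generate_solution(problem) for _ in range(budget)]
        shortlisted = random.sample(candidates, k)
        return any(judge_accepts(problem, c) for c in shortlisted)

The question is whether "no test-time selection" means something like the first regime, where all 10,000 submissions go to the judge and the problem counts if any one passes, or whether the model still picked a single winner itself, the way I described.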
I play electric guitar at a decent speed but want to get very fast, say ~600 notes per minute (npm). I had two guitar teachers who gave me conflicting advice:
1) Practice doesn't make perfect, only "perfect practice makes perfect". Use a metronome, start with a speed I'm 100% comfortable with, and slowly build up my speed in 5 npm increments. When I start making even minor mistakes, stop, go back 10 npm, and hold my practice there for 5-10 minutes. Keep to this technique and I will get faster over time.
2) Don't aim for perfection. Get comfortable with the concept of fast playing in my mind, and my fingers will follow. Warm up a little, but after that, jump to playing at my target speed, even if it sounds sloppy. Repeat this enough times and I will get faster over months and years.
I’ve tried both techniques over the last 3 years and have gotten considerably faster. But I'm not sure which of these techniques has worked more than the other.
My own take is that it takes time, and staying on the edge of what I’m capable of doing is important. No rules beyond that, really.
Question for this audience, especially for musicians and guitarists: how do you structure your practice to become great?
Not a guitarist (take my comments with a pinch of salt).
Your teachers are both right: either advice (1) or (2) works as long as the practice is hard.
That said, while (2) may sound easier because the approach pursues "comfort in the mind" over perfection... it is still hard, because by definition you still need to get from uncomfortable to comfortable!
A similar example from bodybuilding is muscle confusion [1]. To build better squats, you need both compound (2) and isolation (1) exercises.
Awesome website, well done! Great trip down memory lane. Looking at the 90's era catalogs highlighted the astounding deflationary power of technology. Many devices cost the same now as they did back then. Plus, look at all the alarms, photo and video cameras, radios, Walkmans, DVD players, etc. that have been replaced by a single smartphone. It certainly feels like it was the era of "peak electronics".
-Because opinion pieces increasingly masquerade as news articles.
-Because journalists have no comprehension of basic math and statistics, so stats like "a woman earns $0.72 for each $1 earned by a man" are taken at face value or parroted endlessly. Most news articles show a lack of critical thinking.
-Context is deliberately avoided to paint nuanced topics as black and white.
-Graphs are intentionally created in a way to provoke outrage instead of understanding.
-Clickbait titles.
Sadly, all of this is true even for paid news such as the NYT, WSJ, etc.
I think there are several parts to the cause. There's the issue that 'professional' journalists often act as if their job is sacred and their credentials make them superior to 'indie' journalists and other 'commoner' scum, even though the current standards are lower than what it would take for someone without a CS degree to write code. We see this a lot in the discourse around misinformation, or in reporting on violent events on sanitized platforms like YouTube. This makes people question why they should bother paying for the opinion of someone who looks down on them.
Then there's the other issue: even scientists frequently make mistakes interpreting data, despite having a far more rigorous education, far more experience interpreting data, and the risk of significant professional consequences if caught. Journalists have none of that. They don't have to actually understand what they're reporting on, they don't have to interpret the data in good faith, and they don't really face any consequences for being wrong. A scientist might end up having to retract a paper if it's wrong; a journalist doesn't even necessarily have to add a correction.
This also leads into a further issue with journalists who specialize in certain beats: games journalists, tech journalists, aerospace journalists, medical journalists, etc. Often they have no expertise in the field they're reporting on, and it's so common for:
- tech journalists to report obviously incorrect interpretations of basic technical matters
- game journalists to be completely out of touch with gaming
- aerospace journalists to report information that makes it obvious they don't know or care about the accuracy of what they're saying (there's an example from just a few days ago of a journalist latching onto a single typo in a number, one reported correctly in several other parts of the report, to write a hit piece, then refusing to issue a proper correction despite being publicly called out by the company they targeted)
- medical journalists to report research results without understanding the caveats or confidence levels of the study (e.g. the back-and-forth on whether coffee provides X health benefit)
- tv/movie journalists to have opinions that are more often than not completely opposite to those of the public, complete with disdainfully looking down on the public for the disagreement rather than adjusting their reporting to at least cover public sentiment fairly
These are topics people tend to be passionate about and thus are more likely to spot issues, which reduces trust in journalism as a whole. After all, if the reporting on a topic they follow in depth is so bad, how bad might the reporting on topics they don't know as much about be?
To me, the solution would be to make professional journalism actually require skill, and to require some humility as well. A tech journalist, for example, should be someone with decent experience in the tech field, so that they understand the technology they're covering.
I'll add onto this a complaint I have that I don't see mentioned often. News articles always cover the first half of a story when it's hot and never follow up. It's obnoxious if you have an attention span longer than whatever is happening at the exact moment.
This again is where Wikipedia is often invaluable, so long as the story is in fact covered there. It's also why I wish news organisations would adopt a Wikipedia-like approach to complex stories.
The vast majority of stories don't make it to Wikipedia, and Wikipedia just regurgitates shit it gets from the news. If I could find an answer online I wouldn't need the news to actually do their job.
The overwhelming majority of stories I've found it useful to look up on Wikipedia ... tend to be there. Most of these tend toward natural disasters, industrial / infrastructure incidents, business/political news, possibly something in technology or the sciences. Not so much "human interest", celebrities/gossip, entertainment, etc., though I suspect some of those might also find their way to Wikipedia, modulo BLP considerations (<https://en.wikipedia.org/wiki/Wikipedia:Biographies_of_livin...>).
Wikipedia does not merely regurgitate news, but processes, synthesizes, and very often balances multiple viewpoints.
And, just to give a current example, the sinking of the Bayesian doesn't merit its own article (yet?), but there is a section on the page covering that yacht:
I don't see anything that makes me want to upgrade my four-year-old iPhone 12. (TBH, it's not just about these new Pixel phones; I don't know whether I will upgrade in the next 3-4 years.)
Minor rant: All of this powerful technology, and yet the examples they can come up with are always about e-commerce/shopping, photos, calendaring, etc. Why can't they talk about something more fundamentally useful, like a feature that would reduce your phone usage or help you budget better? I guess I can dream.
Reading the comments here, I expected to see a bad "Will Smith eating spaghetti" type video, but I found it to be decent, without major glitches. Yes, nothing extraordinary, but that's due to the script, not the video. I'm not sure why it provokes such a negative reaction, other than the fact that it was made with AI.
A chunk of folks have been bullshitted into being aggressively opposed to anything, especially creative works, if there's even a hint of generative tooling. Their reaction is so overly disproportionate and irrational that many of them are calling even traditional CG imagery used in VFX work "AI", dismissively. It's like they saw witchcraft once and their brains snapped trying to cope with it.
My take on this: AI is real, and AI products are adding significant value, but this will not show up as revenue gains for companies. Companies have no choice but to invest in AI to remain competitive in terms of costs. Consumers will realize a lot of this value as reduced prices.
I saw the Google I/O keynote today. What really bugs me is that Google has all this awesome technology, and they can't figure out a better example to showcase it than the old "make a restaurant reservation"? I'm excited about the possibility of AGI, but this is not earth-shattering.