Hacker News | michaericalribo's comments

Can confirm, it was an Encyclopedia Brown book and it was World War One vs the Great War that gave away the sword as a counterfeit!

Wow, 6.5 hours is a long time to do anything that requires focus, much less a spacewalk…!


There are CEOs who work 19 hours a day. /s


When will Gemini release a desktop app and enable MCP and coding agents??


I assume you're aware, but there are some open-source assistants that can get you this (at least to some extent).

E.g. Codename Goose https://block.github.io/goose/docs/quickstart/

I think it supports Gemini along with all the other major AI providers, plus MCP. I have heard anecdotally that it doesn't work as well as Claude Code, so maybe there are additional smarts required to make a top-notch agent.

I've also heard that Claude is just the best LLM at tool use at the moment, so YMMV with other models.


Not a desktop app, but Gemini has “Jules” which is their autonomous coding agent

You hook up your GitHub repo and it’ll clone it, set up the env, and then work on the task you give it


Cursor + Gemini is rather good.


So, no third party source.


So, you would like another independent non-government entity with full access so they can evaluate DOGE? Like a DO(DOGE)E?


That shouldn’t be necessary. Just literally any independent source corroborating the claims. I would also be immensely interested in that.


Yes. Exactly. It's checks and balances.

The 'right' are all about being open. If something is being cut, or someone fired, then publish those findings openly. Make the data public, for open review.

Funny how all of a sudden "we need to keep what we are doing secret" is a fine argument.

Otherwise you are just putting in place a new 'Deep State'. Guess that is fine and dandy now.


It’d be cool if we still had the independent IGs in place to make sure everything’s on the up-and-up. That would definitely make me feel better about this.

But one of the first things Trump did was fire a bunch of them. Blatantly illegally, because of course that’s how he’d do it.


First skiing, now this -- you'd think Apple would get their act together and come up with a better solution than "turn it off"


It also got triggered on roller coasters: https://www.macrumors.com/2022/10/09/iphone-14-crash-detecti...


I can believe that a roller coaster ride has similarities with a car crash, in terms of acceleration, but dancing?


What kind of roller coaster ride has more than ~10-20 g of force?

I could see how "negative" g-forces could lead to a false positive, but I think the absolute forces are too low.

On the other hand, I could see how during dancing the phone gets thrown around, and that may be a similar pattern to a car crash.
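The thread above is debating whether peak g-force alone explains the false positives. A toy sketch (my own illustration, not Apple's actual algorithm) of why a detector that only thresholds peak acceleration can't tell a jolted phone from a crash:

```python
# Toy illustration: a naive detector that only thresholds peak acceleration
# cannot distinguish a crash-like impact from a phone being jolted hard in a
# pocket, since both produce a brief high-g spike. The traces below are
# made-up accelerometer magnitudes (in g), not real data.

def naive_crash_detector(accel_g, threshold=20.0):
    """Flag a 'crash' if any sample exceeds the threshold (in g)."""
    return max(accel_g) > threshold

# Hypothetical traces sampled over ~1 second:
car_crash   = [1, 1, 2, 35, 60, 40, 5, 1]   # sustained high-g impact
pocket_jolt = [1, 2, 1, 30, 2, 1, 1, 1]     # one sharp spike, then nothing

naive_crash_detector(car_crash)    # flags the real crash
naive_crash_detector(pocket_jolt)  # also flags the jolt: a false positive
```

A real system would presumably also look at spike duration and fuse other signals (GPS speed, microphone, barometer), which is why peak g alone is a poor discriminator.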


> What kind of roller coaster ride has more than ~10-20 g of force?

I suppose a regular crash with a lucky ending can still do significant damage to a phone in your pocket: while the body loses speed slowly, the phone and other things in the pocket can hit the road hard and ricochet with insane acceleration. I'm a fan of downhill, but I never carry my phone on that kind of ride when I know my chances of crashing are high. And in that risky kind of event I would rather bomb the hill with a friend who can give me proper help than rely on some computer.


Dancing with a phone in your hand (like an old iPod advert), absolutely.


Or worse, a pocket: phone floats up with you as you jump, then starts to freefall, then slams hard against your rising pocket.


Maybe in a bag you're swinging around, or more likely in a mosh pit (rapid acceleration with a sudden stop).


To Apple's 100% credit, they were willing to ship engineers out.

This happened on day 1 (Thursday) of a 4-day festival. Even though the festival declined to have Apple come out, you know they still sent a couple of engineers to gather data regardless.

On top of that, being proactive once it started happening and getting messaging out to attendees helped cut the calls by ~50%, which definitely helped.

All you can do is continually refine it once you deploy it. A bit damned if you do, damned if you don't.


> All you can do is continually refine it once you deploy it. A bit damned if you do, damned if you don't

You could also get it to work with fewer false positives before deploying it. So it's more like: damned if you rush to market without proper testing, not damned if you're thoughtful and do at least minimal testing first (which would obviously include dancing).

Or, alternatively, just don't deploy (or keep deployed) a feature which fails so often and whose failures have such a significant impact on people.


>you could also get it to work with fewer false positives before deploying it

I'm sure they did.


"Fewer" obviously means fewer than it has now: the number it currently has is too many, and the number it needs to have is fewer than that.

This means they either didn't get it to work with fewer false positives than it has now, or they did and then discarded those improvements, leading to the inadequate release we have (in which, again, the number of false positives is too many).


> the amount it currently has is too many, the amount it needs to have is fewer than the amount it currently has

we can't define what "the amount it needs" is, and we don't know what Apple's definition of "the amount it needs" is. So this is ultimately a fruitless argument.


> we can't define what "the amount it needs" is

Speak for yourself: perhaps you can't define it, if you want to admit as much.

I can do so, easily: the amount it needs is an amount low enough to not prompt complaints from first responders.

In fact, that's the primary requirement. Doing fancy detection of whatever only comes when you know you aren't butt dialing paramedics and wasting their time.

Sorry, you can't push that negative externality onto society just for a neat slide at WWDC. Denied.


>I can do so, easily: the amount it needs is an amount low enough to not prompt complaints from first responders.

Great, so you deferred your meaning to another nebulous amount that is, again, not defined. Maybe if you were willing to research the current number of false positives received, and whether that threshold is acceptable as is, I'd agree.

Otherwise: maybe you consider mind reading "easy", and I congratulate you on your talents. In the meantime I'll simply not assume what thresholds exist in the minds of operations I'm not familiar with.

Alternatively, you may be a 911 dispatcher with the power to define this. I hope you can communicate it properly to Apple, because your current estimate isn't really quantifiable.


You're not damned if you exercise the judgment not to apply machine learning everywhere just because you can, inflicting on others the hassle of inevitable false positives, false negatives, and a false sense of security in a solution that will remain flawed no matter how patiently you throw new versions out.


> A realistic set of mathematical equations to describe fern leaf or cauliflower curd development is needed

I wish the author had derived these equations -- something like the book The Geometry of Pasta

https://www.amazon.com/Geometry-Pasta-Caz-Hildebrand/dp/1594...


Science is hard, and that's why science rarely gives black-and-white answers. This study is not the one final word on the matter. But that doesn't mean it's not still useful.

Science is done based on the evidence that is found, and this counts as some evidence. Does it answer every question? Of course not. Does it help improve our understanding of the occurrence of it? Yes.

Correlation is not causation, but where there's smoke, there's often a fire—as these findings continue to be validated (and there have been other studies that find similar links), it becomes more and more relevant to understand why there is such a correlation—to find a causal mechanism, if it exists, or to confirm that it's just spurious correlation.


> where there's smoke, there's often a fire

Actually, as the replication crisis shows, most smoke from papers either isn't real or doesn't point to a fire. So that may be a flawed line of reasoning. Correlation (if the research is done carefully to avoid intentional or unintentional p-hacking, and is free of fraud) can justify a follow-up study (or many different kinds of studies feeding a good meta-analysis that tries to establish causation), but the replication crisis indicates the good studies are swamped by the meaningless ones.


The author notes that in the data analysis, ChatGPT hallucinates an incorrect fact…there seems like such a huge gulf between “ChatGPT can do 90% of a data analysis but some might be wrong” and “ChatGPT can do data analysis with great accuracy.”

It’s the difference between a proof of concept and production-ready software. Maybe these shortcomings can be addressed in a larger software ecosystem, but ChatGPT still feels pretty far from properly replacing human work—it’s just not reliable enough


I tell my coworkers that ChatGPT is useful whenever you can verify the output. Don’t take its advice on topics you don’t understand yourself.

For example, I ask it to solve simple maths problems, then ask it to output the solution in Wolfram Language format so I can numerically verify it in Mathematica.

Similarly, I ask it to write simple scripts I could write myself but can’t be bothered to. I can understand the script and verify that it compiles and runs, so this is safe.

I don’t ask it for a step-by-step guide for brain surgery.
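The commenter's verify-the-output habit can be sketched in plain Python (their actual workflow uses Mathematica; this stand-in, including the quadratic and the claimed roots, is a hypothetical example of an answer an LLM might hand back):

```python
# The "only use it where you can verify the output" idea: instead of trusting
# the model's algebra, substitute its claimed solutions back into the original
# equation and check the residuals numerically.

def equation(x):
    """The problem we asked the model to solve: x^2 - 5x + 6 = 0."""
    return x**2 - 5 * x + 6

claimed_roots = [2.0, 3.0]  # the model's claimed solutions (hypothetical)

# Each claimed root should make the equation (numerically) zero.
verified = all(abs(equation(r)) < 1e-9 for r in claimed_roots)
```

The same pattern generalizes: have the model produce something checkable (a script that must compile and run, a formula you can substitute into) rather than a bare answer you'd have to take on faith.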


Is there any scientific evidence to support the value of AI therapy?


Ha. Can you imagine what that study would look like? And what an ethics committee would have to say about it?

Naw. Tech people aren’t about “research”. Throw something together, and when your uninformed gut feeling tells you that it’s good enough, you snap a nice marketing site together and throw it over the fence.


I would never attempt it. AI is way too nascent to be applied like this when it is convincing people to commit suicide to save the planet, like last week.


I did a PhD in statistics, and spent a lot of that time doing applied modeling.

My perspective is that MCMC is one of the great scientific discoveries of the 20th century. There are pros and cons to different samplers, lots of caveats, concerns over convergence—all valid. Still, MCMC opens up a whole new world of model building, letting you easily try a whole slew of models. It provides flexibility, even creativity. As a concept it’s clever. As a technical achievement it’s amazing.
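The flexibility the commenter describes rests on a surprisingly small core idea. A minimal Metropolis sampler (the simplest MCMC algorithm) targeting a standard normal fits in a few lines; this is a toy of my own, not the samplers used in serious applied work (Gibbs, NUTS, etc.):

```python
# Minimal Metropolis sampler targeting N(0, 1). We only ever need the target
# density up to a constant, which is exactly what makes MCMC so flexible for
# Bayesian posteriors (the normalizing constant is unknown).
import math
import random

random.seed(0)

def log_target(x):
    return -0.5 * x * x  # unnormalized log-density of N(0, 1)

def metropolis(n_samples, step=1.0):
    x, samples = 0.0, []
    for _ in range(n_samples):
        proposal = x + random.gauss(0.0, step)
        # Accept with probability min(1, target(proposal) / target(x)).
        if math.log(random.random()) < log_target(proposal) - log_target(x):
            x = proposal
        samples.append(x)  # on rejection, the current state is repeated
    return samples

draws = metropolis(50_000)
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
# mean ≈ 0 and var ≈ 1, matching the target distribution
```

Swapping `log_target` for any other unnormalized log-density (a hierarchical model, an intractable posterior) is all it takes to sample from a new model, which is the "whole new world of model building" the comment points to.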

