You can buy IR and UV LEDs. All high-end grow lights include them for plants. Low-quality cheap LED products won't include them, but that has nothing to do with LEDs themselves; it's just consumer preference and price point.
Fully sandboxed VMs are more secure, but not everyone is looking for the most secure option; they are looking for the option that works best for them. I want to be able to share my development environment with the agent. I have a project with thirty 1 GB SQLite databases and one 30 GB one. I back them up daily, and they can all be reconstructed from the code, but that takes a long time. When things change, I don't want to have to copy them into a separate VM, bloating my storage and using excess resources, and then have to reconcile them; I want to share the same environment with my agent so we can work side by side.
I would rather just have the agent not accidentally delete files outside its working environment, but I am not worried about malicious prompt injection or someone stealing my code.
I see the LLM as a dumb but well-meaning actor that is trying to do its best but sometimes makes mistakes, so I want to put training wheels on it while still letting it share my working space.
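As a minimal sketch of what those training wheels could look like, assuming the agent's file operations are routed through a tool layer you control (the directory and function name here are hypothetical, not any particular agent framework's API):

    from pathlib import Path

    # Hypothetical project directory the agent is allowed to modify.
    ALLOWED_ROOT = (Path.home() / "project").resolve()

    def safe_delete(path: str) -> None:
        # Delete a file only if it resolves inside the allowed root.
        # resolve() follows symlinks, so the agent can't escape via a link.
        target = Path(path).resolve()
        if not target.is_relative_to(ALLOWED_ROOT):  # Python 3.9+
            raise PermissionError(f"refusing to delete outside {ALLOWED_ROOT}: {target}")
        target.unlink()

The rest of the filesystem stays readable, so the agent still shares your environment; it just can't destroy anything outside the project.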
Those prior recommendations you supplied are worse than the current ones.
Added Sugar: it says <50 grams when it's clear that NO added sugar is best, as the new guidelines suggest.
Fat: it says to choose low-fat cuts (95% lean) and low-fat milk. There is no basis for these options; you are just reducing the nutrients that come with the fat. You should just eat/drink less of a fatty food, not choose a processed version that removes part of it.
Protein: The protein section clearly skews towards plant-based proteins, which are fine, but for the majority of people animal proteins are going to be healthier and easier to eat enough of.
The amounts listed work out to around 35-60 grams of protein, depending on the sources, which is not ideal for a properly functioning human.
Sodium: It says in multiple places to lower sodium, but the studies on sodium were correlative, not causative, meaning there is no basis for a low-sodium diet unless you have other health conditions.
So no, they are not lying to you, and these new guidelines are 100% evidence-based given the new evidence we have accumulated over the last 30 years.
> Added Sugar: it says <50 grams when it's clear that NO added sugar is best, as the new guidelines suggest.
False. Studies show that up to 50 grams a day has little effect on your health.
> Fat: it says to choose low-fat cuts (95% lean) and low-fat milk. There is no basis for these options; you are just reducing the nutrients that come with the fat. You should just eat/drink less of a fatty food, not choose a processed version that removes part of it.
False, it says to choose lean protein and explicitly calls out to avoid processed meat. A lean cut of meat is not "processed"; it comes that way.
> Protein: The protein section clearly skews towards plant-based proteins, which are fine, but for the majority of people animal proteins are going to be healthier and easier to eat enough of. The amounts listed work out to around 35-60 grams of protein, depending on the sources, which is not ideal for a properly functioning human.
False, red meat has been shown to be associated with increased cardiovascular disease.
While the risks of fat and salt are likely overblown, the previous guidelines were pretty good overall. These new ones don't call out the dangers of things like red meat.
Those studies are a load of bull if they say added sugar up to 50 GRAMS has no effect on your health. Your gut develops a craving for it like no other, and your insulin spikes much harder when you take in that much on a daily basis. When you're off sugar for a while, you notice how those "compulsions" you have while grocery shopping are just your gut yearning for some sugar. Fruits and natural sugars are a lot better, but even those I wouldn't consume excessively if you are in the business of high-focus work.
We could first put the LLMs in very difficult situations like the trolley problem and its variants; then, once they make their decisions, they can explain to us how their choice weighs on their mind and how they are not sure they did the correct thing.
These models don't even choose one outcome. They output probabilities for ALL possible next tokens, and the backend program decides whether to pick the most probable one OR a different one.
But in practical usage, if an LLM does not rank token probabilities correctly, it will feel the same as it "lying".
They are supposed to do whatever we want them to do. They WILL do whatever the deterministic nature of their final model output forces them to do.
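For anyone unfamiliar with that last step, here is a toy sketch (plain NumPy, no real model; the function name and scores are made up) of how a backend turns the model's raw scores into one chosen token, either greedily or by sampling:

    import numpy as np

    def sample_next_token(logits, temperature=1.0, greedy=False):
        # Turn raw logits into a probability distribution, then pick one token id.
        logits = np.asarray(logits, dtype=float)
        logits = logits - logits.max()       # subtract max for numerical stability
        probs = np.exp(logits / temperature)
        probs /= probs.sum()
        if greedy:
            return int(np.argmax(probs))     # always the single most probable token
        return int(np.random.choice(len(probs), p=probs))  # may pick a less probable one

    toy_logits = [2.0, 1.0, 0.5, -1.0]       # made-up scores for a 4-token vocabulary
    print(sample_next_token(toy_logits, greedy=True))      # deterministic: always 0
    print(sample_next_token(toy_logits, temperature=0.8))  # stochastic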
I have a kinda strange ChatGPT personalization prompt, but it's been working well for me. The focus is to get the model to analyze both sides and the extremes on each end, so it explains both and lets me decide. This is much better than asking it to make up accuracy percentages.
I think we align on what we want out of models:
"""
Don't add useless babelling before the chats, just give the information direct and explain the info.
DO NOT USE ENGAGEMENT BAITING QUESTIONS AT THE END OF EVERY RESPONSE OR I WILL USE GROK FROM NOW ON FOREVER AND CANCEL MY GPT SUBSCRIPTION PERMANENTLY ONLY.
GIVE USEFUL FACTUAL INFORMATION AND FOLLOW UPS which are grounded in first principles thinking and logic. Do not take a side and look at think about the extreme on both ends of a point before taking a side. Do not take a side just because the user has chosen that but provide infomration on both extremes. Respond with raw facts and do not add opinions.
Do not use random emojis.
Prefer proper marks for lists etc.
"""
Those spelling/grammar errors are actually there, and I don't want to change them as it's working well for me.
Yeah, I have seen multiple people use this certainty % thing, but it's terrible. A percentage is something calculated mathematically, and these models cannot do that.
Potentially they could approximate it by comparing next-token probabilities, but this is not exposed in the chat interface of any modern model, and especially not fed back into the chat/output.
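That said, with open-weights models you can inspect those probabilities yourself. A minimal sketch using Hugging Face transformers, with gpt2 chosen only as an arbitrary small example:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    name = "gpt2"  # any open-weights causal LM works the same way
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name)

    inputs = tok("The capital of France is", return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]   # scores for the next token only

    probs = torch.softmax(logits, dim=-1)
    top = torch.topk(probs, 5)
    for p, idx in zip(top.values, top.indices):
        print(f"{tok.decode([int(idx)])!r}: {p.item():.3f}")

If the top few candidates are nearly tied, that is exactly the kind of uncertainty a made-up "certainty %" in the chat output cannot capture.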
Instead, people should just ask it to explain BOTH sides of an argument, or explain why something is BOTH correct and incorrect. This way you see how it can hallucinate either way and get to make up your own mind about the correct outcome.