I’ve commented on this before, and surely it’s something I’m doing wrong, but I can’t believe system prompts or GPTs or any amount of instructing actually gets ChatGPT to respond in a particular fashion with any consistency.
I have spent hours upon hours trying to get ChatGPT to be less apologetic and long-winded, to stop reiterating, and to not interpret questions about its responses as challenges (e.g., when I ask “what does this line do?”, ChatGPT responds “you’re right, there’s another way to do it…”).
Nothing, and I mean NOTHING, will get ChatGPT with GPT-4 to behave consistently. And it gets worse every day. It’s like a twisted version of a genie misinterpreting a wish. I don’t know if I’ve poisoned my ChatGPT or if I’m being A/B tested to death, but every time I use ChatGPT I very seriously consider unsubscribing. The only reasons I don’t are 1) I had an insanely impressive experience with GPT-3, and 2) Google is similarly rapidly decreasing in usefulness.
Another issue I have, especially when demanding terseness, is that it tends to bail out of writing long code snippets with ellipsis comments like "// And more of the same here", which sometimes defeats the purpose. Except when the code is purely illustrative of a concept, I want it to be thorough and code the damn thing to the last semicolon.
My solution, which works sometimes, is to instruct it to "not write comments in the code." The drawback is that ChatGPT normally does a good job of adding comments, but that’s not something I can’t live without.
This "code-trimming" effect does not show up for me in API requests.
OpenAI really should fix this. I've started using Bard and brevity comes out of the box. When I used ChatGPT I always had this background feeling of irritation at the ridiculously verbose responses.
Using JSON mode with the GPT 3.5/4 API works well for us. So much so that we have to intentionally fake errors to test that our retries/fallbacks actually work in our code.
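The retry/fallback testing described above can be sketched without hitting the real API. This is a minimal, hypothetical version: `withRetries` and `makeFlakyClient` are illustrative names, and `makeFlakyClient` stands in for a chat-completion call made with JSON mode (`response_format: { type: "json_object" }`), faking errors on demand so the retry path itself can be exercised.

```javascript
// Hypothetical retry wrapper: callModel is assumed to return a JSON string,
// as a chat-completion request in JSON mode would.
async function withRetries(callModel, { retries = 2 } = {}) {
  let lastError;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      const raw = await callModel();
      return JSON.parse(raw); // JSON mode should make this parse cleanly
    } catch (err) {
      lastError = err; // network error or malformed JSON: try again
    }
  }
  throw lastError;
}

// Fake client that fails `failures` times before returning valid JSON,
// mimicking the intentional error injection mentioned above.
function makeFlakyClient(failures, payload) {
  let calls = 0;
  return async () => {
    if (calls++ < failures) throw new Error("simulated API error");
    return JSON.stringify(payload);
  };
}
```

Swapping the fake client for a real one only changes the `callModel` argument; the retry logic under test stays identical.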
I would assume a lot of that has to do with whatever obsequious nonsense they've got in the RLHF 'safety' training, and you're not getting rid of that without pushing it into a totally different context via DAN-like 'jailbreaks'.
It wasn't always like this. GPT in early 2023, hell, late 2022, was incredible. I could have it fully simulating a Unix terminal on acid for hours; it'd never break character. It's so insanely nerfed now.
It's insanely good every time they have a public release, then deteriorates significantly. There's plenty of evidence of this too: just compare the exact same prompt then and now. I'm not sure whether this is a matter of cost or just playing whack-a-mole with unintended behavioral bugs.
I have this as my custom prefix when talking to GPT:
Cut unnecessary words, choose those that remain from the bedrock vocabulary everyone knows, and keep syntax simple. Opt for being brusque but effective instead of sympathetic. Value brevity. Bullet points, headings, and formatting for emphasis are good.
I wonder if people only click the thumbs-down button, thus providing only a negative signal mechanism with no ability to differentiate a positive response from a negative one.
It's certainly better than not doing it, but I wonder how much that helps?
I mean, there's no control sample. It's a single custom-generated response read by a single person. I'd like to know how they derive useful insights from those votes.
The hack is using GPT-3, and I don't mean 3.5. It still performs to a production level, at least for creative work. It's been sped up and is significantly cheaper.
A fun bug is that ChatGPT will always use an emoji when apologizing. So if you ask it not to use emojis in a chat and it does anyway (which it often will, right while promising not to), and you point it out, you get a loop of apologies and self-critique that devolves into modeling an existential crisis.
A malicious npm package, created by the attacker, specifically crafted to open a port that listens for and executes commands, along with otherwise untrusted or unverified code, on the victim’s machine.
> This is great news now that the industry phased out physical **audio** connection on phones in favor of wireless.
I recognize that wired audio connectors still exist, but you know full well that the experience is not great, because the industry wanted to remove audio jacks.
Take this from someone who has been down this road: you are way over-thinking this.
Until you accept this, you will be holding yourself back as a JavaScript developer.
If you need objects with encapsulated state and methods, use classes.
OOP does not magically introduce baggage or complexity.
Your proposed solution is still OOP. Coming up with your own OOP framework will introduce baggage and complexity.
Classes are highly optimized in modern JS engines.
Classes can be transpiled to performant code for legacy JS engines.
There are times when classes are not the right choice, but avoiding classes when they’re the obvious right choice is a clear indicator that a developer is unwilling to actually learn JavaScript.
Edit:
Your example code also has a bug: your methods don’t actually mutate the properties of the returned object.
Please take the leap and just learn to use JavaScript classes!
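The original example code isn’t shown here, but a common version of the bug described above (a factory whose methods update a closure variable or a copy, rather than the returned object’s own property) looks roughly like this; `makeCounter` and `Counter` are hypothetical names for illustration:

```javascript
// Hypothetical factory with the bug: `count` is copied into the returned
// object once, and increment() mutates the closure variable, not the
// object's own property, so callers reading `counter.count` see a stale 0.
function makeCounter() {
  let count = 0;
  return {
    count, // snapshots the current value (0) at creation time
    increment() {
      count++; // updates the closure variable, not this.count
    },
  };
}

// Class version: methods mutate the instance's own property, so reads
// through the object always see the updated state.
class Counter {
  constructor() {
    this.count = 0;
  }
  increment() {
    this.count++;
  }
}
```

With the factory, `increment()` appears to work but `counter.count` never changes; with the class, state and methods live on the same object, which is the encapsulation being argued for above.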
Is this a tangential question? The OP suggests that there’s utility to piracy beyond ideological reasons and just getting stuff for free. In other words, the categories you posed are not mutually exclusive.
Node builds on top of V8, Chromium’s JS engine, which has a JIT, allowing for some optimizations that aren’t as easy or obvious in a simple bytecode interpreter.
That’s exactly what I’m arguing: the CPython developers chose not to spend time on a JIT (quite possibly because they didn’t have the resources to build a good one).