Hacker News

You ask it to suggest plugins for your use case.


This is a great reply because it shows just how much things are about to change. OP is likely a smart, digitally inclined individual and they missed this use case. This will sweep through the general population eventually so lean in and learn how to interface with a young AGI.


I interface with it every day and it gives me wrong answers about 80% of the time. Is that about to change too?

I keep using it because it often has interesting clues, like the name of an algorithm I’ve never heard of, buried amongst the noise. I couldn’t imagine sending it off to do work for me without any supervision, though. Not by a long shot.


It depends. Do you ask it properly, or do you just expect it to produce the right thing on its own? Using Google is a skill, and so is using AIs.

Btw, GPT-4 gives me way better answers than ChatGPT. Obviously, it still needs supervision, just like Google/StackOverflow/Reddit answers do. I don’t expect supervision to become unnecessary in the near future, and of course the answers still need to be adapted to the exact context.


> Do you ask it properly

Maybe not! But how would I gauge such a thing? I try to be very specific with my wording. And I’m very wary of including keywords that might have it land in a category that I don’t want it to land in. Clearly my strategy isn’t working though. Is there a resource for writing good ChatGPT prompts related to programming?

> GPT-4 gives me way better answers than ChatGPT

I might need to try that, then. I’ve only used ChatGPT so far.


> I interface with it every day and it gives me wrong answers about 80% of the time. Is that about to change too?

What kind of questions are you asking?

I've used GPT-4 since initial release, both via ChatGPT and the API, and I'm getting mostly correct answers for writing code in Rust, JavaScript and Python. It had trouble with Clojure, and sometimes the API it uses for certain fast-moving Rust crates is wrong, but if I send the updated function signature in the next message, it can correct the mistake.
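That correction loop is just appending the real signature as the next turn of the conversation. A plain-data sketch, assuming the usual chat-completions message shape (no actual API call; the conversation content is made up, though the `tokio::spawn` signature shown is the real one):

```python
# Hypothetical conversation: the model produced code against a stale
# crate API, so the next user turn pastes the current signature.
messages = [
    {"role": "user", "content": "Write code that spawns a task with tokio::spawn"},
    {"role": "assistant", "content": "...generated code using an outdated signature..."},
    {
        "role": "user",
        "content": (
            "The current signature is:\n"
            "pub fn spawn<F>(future: F) -> JoinHandle<F::Output>\n"
            "where F: Future + Send + 'static, F::Output: Send + 'static\n"
            "Please fix the code accordingly."
        ),
    },
]

# This list would then be sent back to the API as the full context,
# so the model can revise its earlier answer.
```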


Lately it’s been game-development questions related to physics. Mostly using JavaScript and GLSL, but sometimes Houdini’s VEX, which it probably has the worst success rate with.

I have a feeling that if my domain were more mainstream I’d be getting much better results. Or maybe I just need to write longer, more detailed prompts or something?


In a generation or two we'll have language models that understand aesthetics and accuracy (by being trained on token sequences annotated with aesthetic/accuracy scores), and we'll be able to ask the model to generate well-written, factual answers by conditioning on high aesthetic/accuracy scores.
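The annotation-plus-conditioning idea can be sketched with control tokens prepended to each training example, in the spirit of conditional language modeling. Everything here is hypothetical: the token names, the score thresholds, and the examples are made up for illustration.

```python
def annotate(text: str, accuracy: float, aesthetics: float) -> str:
    """Prefix a training example with discretized quality control tokens."""
    acc_bin = "high" if accuracy >= 0.8 else "low"
    aes_bin = "high" if aesthetics >= 0.8 else "low"
    return f"<acc:{acc_bin}> <aes:{aes_bin}> {text}"

# Training corpus: each sequence carries its annotated scores as tokens,
# so the model learns the correlation between tokens and text quality.
corpus = [
    annotate("Water boils at 100 C at sea level.", accuracy=0.95, aesthetics=0.6),
    annotate("Water boils whenever it feels like it.", accuracy=0.1, aesthetics=0.4),
]

# At inference time, starting the prompt with the desired control tokens
# conditions generation toward the high-quality slice of the distribution.
prompt = "<acc:high> <aes:high> Explain why water boils."
```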


They did not 'miss the use case'. ChatGPT is known not to be reliable in many contexts, and system configuration/product development is not an area where you want everything to be opaque or where you should assume reliable defaults.


That will work great until the LLM equivalent of SEO develops.


These things are trained on internet content, right? What will they be trained on years from now? I bet a lot of it would end up being their own or other models' output that they retrain on, or else they keep training on the internet as it was before ChatGPT and the information in their datasets grows stale.


They can learn from the outcomes of their actions, even if they can only act in a Python REPL or a game, because that would be easy to scale. But interfacing LLMs with external systems and people is an even better source of feedback. In other words, they create their own experiences and learn from them.
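The REPL part of this is cheap to sketch: execute model-generated code, capture success or failure, and feed that outcome back as a signal. A minimal sketch (the generated-code string is a stand-in; nothing here is a real training loop):

```python
import contextlib
import io

def run_in_repl(code: str) -> tuple[bool, str]:
    """Execute generated code and capture the outcome as feedback."""
    buf = io.StringIO()
    try:
        with contextlib.redirect_stdout(buf):
            exec(code, {})
        return True, buf.getvalue()
    except Exception as e:
        return False, f"{type(e).__name__}: {e}"

# The loop: generate, execute, and feed the outcome back, either as a
# reward signal or as context for the model's next attempt.
attempt = "print(sum(range(10)))"
ok, output = run_in_repl(attempt)  # ok is True, output is "45\n"
```

The appeal is exactly the scalability the comment mentions: the environment grades every attempt automatically, with no human in the loop.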


Presumably there will be manual review, and if people are doing blatant SEO their plugins get rejected.

They already seem to have thought about this: per their rules, you are not allowed to include explicit instructions in the model_description field about when your plugin should be invoked.


The phrase you are looking for is AEO (answer engine optimisation). AEO is similar to SEO, except you lack the ability to cross-check responses.


Not to mention the LLM equivalent of Trojan horses


Better yet: it will recommend them on its own when you ask something.


Jesus. Why does this give me just absolute chills?

Why do I think we're about to see levels of gamification and subtle selling that have only been imagined in the past via astroturfing?

Maybe I'm showing my age, but trusting an AI to have your best interests in mind, and not that of the company/person who created it seems naive maybe?


If you google for language learning apps you get a list of results, dominated by ads at the top but ripe for exploring if you are inclined to wade through it.

If you ask ChatGPT, it will simply say Duolingo (as they are currently a partner), and you don't even click through, as it takes you right to it inline.

https://en.wikipedia.org/wiki/Pareidolia is exploited in the design of cars and appliances, but never before by software to this extent. For most people, due to the nature of the conversational interface, it feels no different than texting a trusted friend. This is probably accidental; one might note it is yet another unexpected emergent property.

In other words if you think para-social relationships on social media were bad for humans you ain't seen nothing yet. Now your search engine is your fren.


Just wait until companies partner with ChatGPT to license their product as the default choice for a given query. If ChatGPT is actually for-profit now, they basically have to do this, or they are doing a disservice to their MSFT shareholders by leaving money on the table and not being as evilly profitable as possible.


Got it. Even better if it automatically suggested plugins to enable to help with a stated task.


Or wrote its own plugins as needed.


It will suggest plugins that don't exist



