madiator's comments | Hacker News

Now, it would be nice to get the code snippets for these!


Right at the bottom: “HIRE US” :)

I suspect these are all so hand-crafted that there's not much in the way of code.


There are several types of hallucinations, and the most important one for RAG is grounded factuality.

We built a model to detect this, and it does pretty well! Given a context and a claim, it tells how well the context supports the claim. You can check out a demo at https://playground.bespokelabs.ai
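As a rough illustration of the idea (this is not the model or API from the comment, just a stand-in using a generic off-the-shelf NLI model), you can treat the context as the premise and the claim as the hypothesis, and use the entailment probability as a support score:

    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    # Stand-in for a dedicated grounded-factuality model: a generic
    # NLI model whose entailment probability acts as a support score.
    name = "roberta-large-mnli"
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSequenceClassification.from_pretrained(name)

    context = "The Eiffel Tower was completed in 1889 and is 330 m tall."
    claim = "The Eiffel Tower was finished in 1889."

    inputs = tok(context, claim, return_tensors="pt", truncation=True)
    probs = torch.softmax(model(**inputs).logits, dim=-1)
    # roberta-large-mnli label order: contradiction, neutral, entailment
    print("support score:", probs[0, 2].item())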


That's great, but it did not really write the program the human asked for. :)


That's because it's the base model, not the instruct-tuned one.


I am sure most people never asked these questions of a human doing this research.


Most B2B-focused AI tools will do 10x better if they pretend to be a normal human-run company but just have the AI at the back end.

Their clients want to know that the research report was written by a real person and not a bot.

But that doesn’t mean it actually has to be written by a real person and not a bot.


I am typing this comment on my Nokia's keyboard.


Why do your rubber boots have a keyboard?


Yeah, it wouldn't fit. GPT-3 is 175B params, so even if you use 8 bits per weight, you need 175×10^9 ÷ 2^30 ≈ 163 GiB of memory.
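For reference, the same back-of-the-envelope arithmetic at a few precisions (plain Python, nothing model-specific):

    # Memory needed just for the weights at various precisions.
    params = 175e9  # GPT-3 parameter count
    for bits in (32, 16, 8, 4):
        gib = params * bits / 8 / 2**30
        print(f"{bits}-bit: {gib:,.0f} GiB")
    # 8-bit comes out to ~163 GiB, far beyond any single GPU.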


https://www.reddit.com/r/ChatGPT/comments/zhzjpq/comment/izo...

>It's around 500 GB and requires around 300+ GB of VRAM from my understanding, and runs on one of the largest supercomputers in the world. Stable Diffusion has around 6 billion parameters; GPT-3/ChatGPT has 175 billion.


Wouldn’t that be possible with about 4 powerful GPUs? Or does it not work like that?


Possibly, but that would be tens of thousands of dollars' worth of GPUs.


Silly question: how does OpenAI host/serve it?


I think on professional hardware you can get 80 GB of memory per GPU, and they can likely do memory pooling.
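Continuing the back-of-the-envelope math from above, a rough lower bound on the GPU count, ignoring activations, KV cache, and overhead:

    # Minimum 80 GiB GPUs needed just to hold 175B weights.
    import math
    params = 175e9
    for bits in (16, 8):
        gib = params * bits / 8 / 2**30
        print(f"{bits}-bit: {gib:,.0f} GiB -> {math.ceil(gib / 80)} GPUs")
    # 16-bit: ~326 GiB -> 5 GPUs; 8-bit: ~163 GiB -> 3 GPUs.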


I am the author of the blog, and I want to thank you all for the feedback here, because I now realize the article starts with a high premise but falls short. Will do a better job next time!

What I meant to say is this: we have engineered away all sorts of discomfort from our lives, and I think that's bad. So (1) be aware of this, (2) seek out some discomfort, and (3) if you run into discomfort, take it in a positive way (this last point I didn't convey in the article).

But yeah, I don't mean to say chop off your limbs! Not sure how people are reaching that conclusion.

Thanks HN!


Here's an example I can think of. Suppose you have a bunch of text documents, and you know that some documents are similar but not identical (e.g. plagiarized and slightly modified). You want to find out which documents are similar.

You can first run the contents through some sort of embedding model (e.g. the recent OpenAI embedding model [1]), and then apply LSH to those embeddings. Documents that end up with the same LSH value will have very similar embeddings, and thus very similar content.

[1] https://beta.openai.com/docs/guides/embeddings
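Here's a minimal sketch of the LSH step (random-hyperplane / SimHash-style hashing over toy vectors; in practice the embeddings would come from a model like [1]):

    import numpy as np
    from collections import defaultdict

    # Toy stand-ins for real document embeddings.
    rng = np.random.default_rng(0)
    dim = 64
    base = rng.normal(size=dim)
    embeddings = {
        "doc_a": base,                                # original
        "doc_b": base + 0.01 * rng.normal(size=dim),  # near-duplicate
        "doc_c": rng.normal(size=dim),                # unrelated
    }

    # Random-hyperplane LSH for cosine similarity:
    # each hyperplane contributes one bit of the hash.
    n_planes = 16
    planes = rng.normal(size=(n_planes, dim))

    def lsh_signature(vec):
        bits = (planes @ vec) > 0
        return "".join("1" if b else "0" for b in bits)

    buckets = defaultdict(list)
    for name, vec in embeddings.items():
        buckets[lsh_signature(vec)].append(name)

    # Documents sharing a bucket are candidate near-duplicates.
    for sig, names in buckets.items():
        if len(names) > 1:
            print(sig, names)

With enough hyperplanes, only the near-duplicate pair lands in the same bucket, so you avoid comparing every pair of documents directly.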


And I am completely surprised by people who think this tech today is not good, and fail to consider that it can get far better in the future.


I'm guessing that by "people" you mean me? I'm only talking about the state of ChatGPT as per the examples given. I'm not talking about the wider implications for the future or its other amazing capabilities.


Aside: if you scroll through past issues, the author has all sorts of other articles, like how to get into p0rn. I wonder if the author picked up AI recently; if so, that's impressive.

