Hacker News | kvn8888's comments

They're just flexing their engineering skills. Mass-producing such a thin phone, where most of the non-battery internals are housed in the camera bump, can translate to meaningful products

like wearables, glasses, etc. The fact that they're mass-producing this is key.


Chirp (HD), which is normally $30 per 1M characters, also gives you a free allotment on the free tier.


I'd have to analyze my usage. For me, having used it for over a year cost me a penny. If I can ensure my total cost is less than $1/month, I'll consider it if the quality is really good. The Google one is "good enough", but not great.

One other feature I'd really like: Having the AI figure out who is saying what and use different voices (e.g. one voice for overall narrator, and separate voices for each person who is quoted in the article).

Not sure if any of the solutions out there do that automatically without my guidance.
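The narrator-vs-quotes idea above can be sketched without any particular TTS service: split the article into narration and quoted spans, then assign each quoted passage its own voice. This is a minimal sketch; the voice names are hypothetical placeholders, a real pipeline would map them to actual TTS voice IDs, and proper speaker attribution (the same person speaking twice) would need diarization or an LLM pass.

```python
import re

# Hypothetical voice labels; a real TTS backend would map these to voice IDs.
NARRATOR_VOICE = "narrator"
QUOTE_VOICES = ["speaker_a", "speaker_b", "speaker_c"]

def assign_voices(text):
    """Split text into (voice, segment) pairs: narration vs. quoted speech.

    Each quoted passage is rotated through QUOTE_VOICES in order of
    appearance; this does NOT identify which real person is speaking.
    """
    segments = []
    quote_idx = 0
    # re.split with a capturing group keeps the quoted contents in the
    # result: even indices are outside quotes, odd indices are inside.
    for i, part in enumerate(re.split(r'"([^"]*)"', text)):
        if not part.strip():
            continue
        if i % 2 == 0:
            segments.append((NARRATOR_VOICE, part.strip()))
        else:
            voice = QUOTE_VOICES[quote_idx % len(QUOTE_VOICES)]
            quote_idx += 1
            segments.append((voice, part))
    return segments

article = 'The CEO said "we will ship it" before adding "next year, probably."'
for voice, seg in assign_voices(article):
    print(f"{voice}: {seg}")
```

Each (voice, segment) pair would then be synthesized separately and the audio concatenated.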

(Still probably wouldn't pay more than $2/mo for it - I just don't use it often enough to justify paying much).


You start doing that for text from ebooks and Audible is going to want to have words with you.


I do it only for long articles. Not interested in converting fiction into audio books unless the quality rivals that of real storytellers.

And, you know, this is not a service I'd provide others. Just for my own use running from my PC. Audible won't know or care, just as no one cares if you borrow a book from the library and photocopy it for your own use.


Kindle originally had text-to-speech functionality. The audiobook people sued, and Amazon went on to buy Audible.


The audio quality is amazing. It's transformer-based. I use it occasionally.


That would be a ton of problems for a small team of PhD/grad-level experts to solve (for GPQA Diamond, etc.) in a short time. Remember, on Epoch AI's FrontierMath, these problems require hours to days of reasoning by humans.

The author also suggested this is a new architecture that uses existing methods, like the Monte Carlo tree search that DeepMind is investigating (they use this method in AlphaZero).

I don't see the point of colluding in this sort of fraud, as methods like tree search and pruning already exist, and other labs could genuinely produce these results.


I had ARC-AGI in mind when I suggested human workers. I agree the other benchmark results make the use of human workers unlikely.


It shows the context length on the AI Studio site

2 million for gemini-exp-1206, 32k for the other experimental Gemini model (I think gemini-exp-1121).


GitHub actually uses AWS quite frequently

They act very independently of Microsoft.


I think it’s unlimited for Claude

But I wonder who's paying for it? I haven't heard anything about GitHub paying discounted prices for AWS Bedrock, or about whether Anthropic gets a cut.


I use the WaveNet extension with my 1M-character free quota from Google Cloud.


It’s amazing for reading articles


Microsoft is notorious for resetting user permissions at random, like after updates.

Options have historically turned themselves back on for some reason.


I'm sure Copilot will get turned on when it sees my browsing history. I certainly did :)


I can't find anything that says it's available in ChatGPT


ChatGPT (at least on Plus), with the GPT-4 model selected (instead of GPT-3.5), currently and consistently reports the April 2023 knowledge cutoff of GPT-4-Turbo (gpt-4-1106-preview/gpt-4-vision-preview), not the Sep 2021 cutoff of gpt-4-0613, the most recent pre-Turbo GPT-4 model release.

The most sensible explanation is that ChatGPT is using GPT-4-Turbo as its GPT-4 model.

