stewfortier's comments | Hacker News

Agree on each suggestion. I’m especially excited about building #2, which I think is a fairly subtle, hard problem to solve.


My thought on #2 is to continually evaluate and store off a 'best' sample of the author's writing, drawn from everything they produce, that hits on a number of key metrics (e.g. general wordiness, adjective usage, humor, statistically significant quirks), then compile that down into a prompt that might read as complete gibberish to a human but is effective and performant.
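
For what it's worth, here is a minimal sketch of what that scoring step could look like in Python, with made-up heuristics (the adjective suffix list, the 17-words-per-sentence baseline, and the weighting are placeholders, not anything Type actually does):

    import re
    from dataclasses import dataclass

    @dataclass
    class StyleProfile:
        wordiness: float       # average sentence length in words
        adjective_rate: float  # rough share of words that look like adjectives

    # Crude heuristic for adjectives; a real version would use a POS tagger.
    ADJECTIVE_SUFFIXES = ("ous", "ful", "ive", "able", "ible", "al")

    def score_sample(text: str) -> StyleProfile:
        """Compute rough style metrics for one writing sample."""
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        words = re.findall(r"[A-Za-z']+", text)
        wordiness = len(words) / max(len(sentences), 1)
        adjectives = sum(w.lower().endswith(ADJECTIVE_SUFFIXES) for w in words)
        return StyleProfile(wordiness, adjectives / max(len(words), 1))

    def best_sample(samples: list[str]) -> str:
        """Keep the sample that deviates most from 'average' prose,
        i.e. the one carrying the strongest authorial signal."""
        def signal(s: str) -> float:
            p = score_sample(s)
            return abs(p.wordiness - 17.0) + 10 * p.adjective_rate
        return max(samples, key=signal)

    def style_prompt(samples: list[str]) -> str:
        """Compile the best sample plus its metrics into a compact style prompt."""
        best = best_sample(samples)
        p = score_sample(best)
        return (f"Match this author: ~{p.wordiness:.0f} words/sentence, "
                f"adjective rate {p.adjective_rate:.2f}. Reference sample:\n{best}")

A real version would presumably track humor and quirks too, and tune the baselines against a corpus rather than hard-coding them.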


Many of us have 20, 30, or more years of written electronic communication.

The main issues preventing just* chucking this into one of the LangChains are (a) unrolling email threads that mix many people's writing, and (b) keeping partitioned tones (workplaces, topics, networks, cultures) distinct; a rough sketch of both is below.

* No such thing as "just".
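
Here is a rough sketch of the thread-unrolling in (a) and a crude recipient-based partition for (b), assuming single-part plain-text messages (the quote markers and the domain-to-context mapping are illustrative only):

    import re
    from email import message_from_string

    # Lines that usually introduce quoted history rather than the sender's own words.
    QUOTE_MARKERS = re.compile(r"^(>|On .+ wrote:|-----Original Message-----)", re.MULTILINE)

    def own_words(raw_email: str) -> str:
        """Return only what the sender typed, dropping the quoted thread below it."""
        body = message_from_string(raw_email).get_payload()
        cut = QUOTE_MARKERS.search(body)
        return (body[:cut.start()] if cut else body).strip()

    def partition(raw_email: str, contexts: dict[str, str]) -> tuple[str, str]:
        """Bucket a message into a tone partition (work, personal, ...) by recipient domain."""
        msg = message_from_string(raw_email)
        to = (msg.get("To") or "").lower()
        bucket = next((name for domain, name in contexts.items() if domain in to), "default")
        return bucket, own_words(raw_email)

    # e.g. partition(raw, {"acme.com": "work", "gmail.com": "personal"}) -> ("work", "...")

Real mailboxes are messier than this (HTML parts, forwards, top- and bottom-posting), which is the point of the footnote.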


Appreciate the feedback on pricing! Will keep this in mind, as I imagine we'll offer a wider range of options in the future.


Yes! We do support basic image blocks today, but the idea is that we’ll integrate generative media types as new image/video/audio models become more reliable & stable.


Good to know, thanks!


Unfortunately, everything but the AI works offline. Though, maybe that's a feature if you're planning a more mellow retreat :)


Have you considered a limited LLM that could run locally?

> planning a more mellow retreat

The goal here is to force myself to go somewhere internet access is impossible (no phone reception, and I don’t have Starlink), with the objective of focused, productive output and limited distractions.

The idea came to mind after reading about John Carmack doing this for a week, diving into AI using nothing but classic textbooks and papers as reference material to work from.

EDIT: here is the HN thread on Carmack’s week-long retreat:

https://news.ycombinator.com/item?id=16518726


> Have you considered a limited LLM that could run locally?

I think there are two main issues here. LLMs are large (the name even hints at it ;) ), and the smaller ones (still multiple GB) are really, really bad.

Edit: and they use a ton of memory, either RAM if run on CPU or VRAM if run on GPU.
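
As a rough back-of-the-envelope, the footprint for the weights alone is parameter count times bytes per parameter (illustrative numbers, ignoring the KV cache and activations):

    # Memory needed just to hold the weights, in GiB.
    def weight_memory_gb(params_billion: float, bytes_per_param: float) -> float:
        return params_billion * 1e9 * bytes_per_param / 1024**3

    for name, params in [("7B", 7), ("13B", 13), ("30B", 30), ("65B", 65)]:
        fp16 = weight_memory_gb(params, 2)    # 16-bit weights
        int4 = weight_memory_gb(params, 0.5)  # 4-bit quantized
        print(f"{name}: ~{fp16:.0f} GB at fp16, ~{int4:.0f} GB at 4-bit")

So even a 7B model wants roughly 13 GB at fp16, and the bigger ones quickly outgrow consumer GPUs unless they're heavily quantized.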


Are they all that bad? [1] I’d be OK using a few hundred GB on my laptop, given that storage is so cheap these days.

[1] https://github.com/nat/openplayground


Compared to GPT-4, most of them are not super great, yeah. I've tested out most of the ones released over the last few weeks and none have come close to the same quality of results, even the medium-sized (30GB and up) models that require >24GB of VRAM to run on GPU. I have yet to acquire hardware to run the absolute biggest models, but I haven't seen any reports that they are much better for general workloads either.
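
If anyone wants to repeat this kind of comparison, a minimal way to run a quantized local model is something like the following (llama-cpp-python shown as one option; the model path and sizes are placeholders, and the response mirrors the OpenAI-style completion dict):

    from llama_cpp import Llama  # pip install llama-cpp-python

    # Path to a locally downloaded, 4-bit quantized model file (placeholder).
    llm = Llama(model_path="./models/30b-q4.bin", n_ctx=2048)

    out = llm("Summarize the trade-offs of running LLMs fully offline.", max_tokens=200)
    print(out["choices"][0]["text"])

Quality aside, this at least runs entirely offline, which is the property the retreat use case actually needs.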


That's fair! At a minimum, we probably should start saying "Type.ai" more often than just "Type."


Yep, we’ve acquired hundreds of paying customers since launching about a month ago. And we don’t see the market here as “people who hire writers” — we see it as “people who must write to get paid.”


We feel similarly. There are a lot of products in this space. Very few are enjoyable to use, though.


We think about that a lot! I think this reply from my co-founder summarizes our answer well: https://news.ycombinator.com/item?id=35442714


The honest answer is that it'll be hard for us to answer that until we try out what they've built!

