Hacker News | yincong0822's comments

Bro, just use the smart_mode parameter on the command line, e.g. './MuseBot -smart_mode=true'. The LLM will automatically generate a photo or video based on your prompt. Please give it a try!


OK, I'll give it a try and send you some feedback.


Thank you very much!


Find a project on GitHub and contribute to it!


Really impressive progress for just 6 weeks — the metrics look great. I like how focused the product is on speed and simplicity. Tried Loom and Screen Studio before, so I’ll definitely give this a shot. Curious how it handles long recordings and whether offline editing is planned.


Thank you so much! Glad you like it.


Please give me some feedback and I will make this project better. Thanks!


Congrats!


That’s awesome — congrats on reaching the release candidate stage! I’m curious about the performance improvements you mentioned. Did you benchmark against other Go web servers like Caddy or fasthttp? Also really like that you’ve made automatic TLS the default — that’s one of those “quality of life” features that make a huge difference for users.

I’m working on an open-source project myself (AI-focused), and I’ve been exploring efficient ways to serve streaming responses — so I’d love to hear more about how your server handles concurrency or large responses.


Thank you!

> Did you benchmark against other Go web servers like Caddy or fasthttp?

I have already benchmarked Ferron against Caddy! :)

> so I’d love to hear more about how your server handles concurrency or large responses.

Under the hood, Ferron uses the Monoio asynchronous runtime.

From Monoio's GitHub repository (https://github.com/bytedance/monoio):

> Moreover, Monoio is designed with a thread-per-core model in mind. Users do not need to worry about tasks being Send or Sync, as thread local storage can be used safely. In other words, the data does not escape the thread on await points, unlike on work-stealing runtimes such as Tokio.

> For example, if we were to write a load balancer like NGINX, we would write it in a thread-per-core way. The thread local data does not need to be shared between threads, so the Sync and Send do not need to be implemented in the first place.

Ferron uses an event-driven concurrency model (provided by Monoio), with multiple threads spread across CPU cores.
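To make that concrete, here is a minimal sketch of what a Monoio-based server loop looks like. This is not Ferron's actual code; it is essentially the echo-server shape from Monoio's README, with a made-up bind address, and it runs a single runtime rather than the one-runtime-per-core setup a real thread-per-core server would use:

    use monoio::io::{AsyncReadRent, AsyncWriteRentExt};
    use monoio::net::{TcpListener, TcpStream};

    #[monoio::main]
    async fn main() {
        // Bind address is illustrative only.
        let listener = TcpListener::bind("127.0.0.1:50002").unwrap();
        loop {
            if let Ok((stream, _addr)) = listener.accept().await {
                // Spawned tasks stay on this thread, so they never need to be Send/Sync.
                monoio::spawn(echo(stream));
            }
        }
    }

    async fn echo(mut stream: TcpStream) -> std::io::Result<()> {
        let mut buf: Vec<u8> = Vec::with_capacity(8 * 1024);
        let mut res;
        loop {
            // Monoio's io_uring-style API takes ownership of the buffer and hands it back.
            (res, buf) = stream.read(buf).await;
            if res? == 0 {
                return Ok(()); // connection closed by the peer
            }
            (res, buf) = stream.write_all(buf).await;
            res?;
        }
    }

The practical upshot of thread-per-core is that per-connection state never has to be wrapped in Arc/Mutex, which is a big part of why this style of runtime does well for proxy- and server-type workloads.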


I understand your position: you see little benefit in learning the formal, mathematical aspects of the Relational Model (Relational Algebra and Calculus), and instead want to focus on deep, practical SQL knowledge that is valuable in the real-world job market.


hmm


Large Language Models (LLMs) and AI companies routinely use massive amounts of data for training, much of which is likely to contain copyrighted material.


AI is not a master editor. It just creates some simple videos.


iTerm, it's the best.

