Hacker News | convivialdingo's comments

Similar story for me. It was a long and tedious process for my mental model to go from BASIC, to Pascal, to C, and finally to assembly as a teen.

My recent experience is the opposite. With LLMs, I'm able to delve into the deepest parts of code and systems I never had time to learn. LLMs will get you to 80% pretty quickly - it compiles and sometimes even runs.


That's super impressive! Definitely one of the best conversational agents I've tried in terms of A/V sync and response times.

Is the text processing running Qwen (Alibaba)?


Qwen is the default, but you can pick any LLM in the web app (though not in the HN playground).


Thank you! Yes, right now we are using Qwen for the LLM. They also released a super fast TTS model, which we have not tried yet.


That’s an amazing effort - I’m impressed.

Awesome to see more small teams making impressive leaps.


Impressive! The cloning and voice affect are great. There's a slight warble in the voice on long vowels, but it's not a huge issue. I'll definitely check it out - we could use voice generation for alerting on one of our projects (no GPUs on the hardware).


Cool! Yeah, the voice quality really depends on the reference audio. Also try messing with the parameters. All feedback is welcome!


I spent a few months working on different edge compute NPUs (ARM mostly) with CNN models and it was really painful. A lot of impressive hardware, but I was always running into software fallbacks for models, custom half-baked NN formats, random caveats, and bad quantization.

In the end it was faster, cheaper, and more reliable to buy a fat server running our models and pay the bandwidth tax.


When I use the "think" mode it retains context for longer. I tested with 5k lines of C compiler code, and I could get 6 prompts in before it started forgetting or generalizing.

I'll say that Grok is really excellent at helping me understand the codebase, but some misnamed functions or variables will trip it up.


I'm not from a tech field at all, but would it do the context window any good to use "think" mode and then discard the thinking tokens once the LLM gives its final answer/reply?

Is it even possible to discard generated tokens selectively?


I tried replacing a DMA queue lock with a lock-free CAS, and it wasn't faster than a mutex or a standard rwlock.

I rewrote the entire queue with lock-free CAS to manage insertions/removals on the list, and we finally got some better numbers. But not always! We found it worked best either single-threaded or under massive contention. Under a normal load it wasn't really much better.


Wonder if this could work as a current collector for li-ion batteries? You could potentially lower charging times and handle higher-power applications and wider temperature ranges.


Dang - Google finally made a quality model that doesn’t make me want to throw my computer out a window. It’s honest, neutral, and clearly not trained by the ideologically rabid "anti-bias" (but actually super biased) regime.

Did I miss a revolt or something in googley land? A Google model saying “free speech is valuable and diverse opinions are good” is frankly bizarre to see.


Downvote me all you want - the fact remains that previous Google models were so riddled with guardrails and political correctness that they were practically impossible to use for anything besides code and clean business data. Random text and opinions would trigger a filter and shut down output.

Even this model criticizes the failures of the previous models.


Yes, something definitely changed. It's still a little biased - it's kind of like OpenAI before Trump became president.


I’ve been modifying the MIR c2mir JIT compiler to extend the C11 compiler to support simple classes and boxed strings (immutable, non-nullable), with AOT support.

Imagine if Java and C had a love child, basically.

MIR is a fantastic piece of engineering.

Honestly, the hardest part is representing types. Having played around with other compilers, it seems to be a common problem.

I’m stuck in the minutiae of representing highly structured complexity and defining behavior in C. I can understand why many languages have an intermediate compiler - it’s too much work and it will probably change over time.


>I’ve been modifying the MIR c2mir JIT compiler to extend the C11 compiler to support simple classes and boxed strings (immutable, non-nullable), with AOT support

Is the project public? Really interested in the AOT support, I've always wanted to see its generated code but didn't find an easy way to dump it.


Once things are a little more stable, I will put it up!

Right now you can just break before the (fun_call)() delegate and disassemble the fun_call in gdb.

The basic trick is to add reloc support to the x86 translation code, mark external calls and replace them with 0x0 placeholders, and copy out the machine_code and data segment output to an object file.

I can do basic main functions with simple print calls, but not much more. It’s a hack for now, but I’ll refactor it until it’s solid.


Thanks, looking forward to it!

