Hacker News | VivaTechnics's comments

fn main() {
    let mood = "awful";

    let msg = r#"
    We feel super sad.
    Rust in Peace.

    Steel dreams compile to dust,
    Silent threads unwind.
    Memory fades,
    Borrowed time returned.
    "#;

    println!("{}\n{}", mood, msg);
}


Much as Kiva Systems was Amazon's best acquisition, Waymo is simply Google's best. (We live in San Francisco, and it feels much safer around these Waymo cars than around average "drivers".)


Those Zoox cars, though. Watch out!


AI doesn’t eliminate deep thinking; mediocre companies and minds do. Don't blame everything on AI; find a true company and find a way to join it.


OPINION:

This will only compound wasted time on Claude.ai, which exploits that time to train its own models.

Why wasted time? Claude’s accuracy for shell, Bash, regex, Perl, text manipulation/scripting/processing, and system-level code is effectively negligible (~5%), since such code is scarce in public repositories. For swarms or agents to function, accuracy must exceed 96%; at 5%, it is unusable.
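One way to read a high threshold like 96% is as a compounding-error argument (my framing, not necessarily the parent's): if each step of an agent chain succeeds independently with probability p, an n-step chain succeeds with probability p**n, which collapses fast.

```python
# Success probability of an n-step agent chain where each step
# independently succeeds with probability p.
def chain_success(p: float, n: int) -> float:
    return p ** n

for p in (0.96, 0.50, 0.05):
    print(f"p={p:.2f}: 10 steps -> {chain_success(p, 10):.4f}, "
          f"20 steps -> {chain_success(p, 20):.4f}")
```

Even at 96% per step, a 20-step chain succeeds less than half the time; at 5% per step, any multi-step chain is effectively hopeless.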

We also use Claude.ai and believe it is useful, but strictly for trivial, typing-level tasks. Anything beyond that, at this point, is a liability.


Python won because simplicity scales. Like English—26 letters, minimal grammar—it became the default. Python mirrors that trajectory.

It is trivially learnable, absurdly flexible, and unmatched in ecosystem leverage. No simpler language delivers comparable reach.

Python may not be so suitable for systems, real-time, or performance-critical work—that’s Rust, C, and C++.

Nevertheless, every serious engineer must know Python, just as they must know shell/bash scripting. Non-negotiable.


LLMs operate on numbers; they are trained on massive numerical vectors. Every request is therefore simply a numerical transformation approximating learned patterns; without proper training, the output can be completely irrational.
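A toy illustration of that point (illustrative only; the vectors below are made up, whereas real models learn high-dimensional embeddings during training): picking a next token reduces to dot products and a softmax over vectors.

```python
import math

# Toy next-token step: score each candidate token's vector against a
# context vector (dot product), then softmax the scores into probabilities.
vocab = {
    "cat": [0.9, 0.1],
    "dog": [0.8, 0.2],
    "car": [0.1, 0.9],
}
context = [1.0, 0.0]  # pretend the context "points at" animal-like tokens

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

scores = {tok: dot(vec, context) for tok, vec in vocab.items()}
total = sum(math.exp(s) for s in scores.values())
probs = {tok: math.exp(s) / total for tok, s in scores.items()}

print(probs)  # "cat" and "dog" outrank "car" for this context
```

If the vectors were random rather than trained, the output distribution would be arbitrary, which is the "irrational output" failure mode in miniature.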


100% agreed. Awesome!

Here is our 501(c)(3) tech non-profit. All corporate profits are directed to children. Clear and transparent.

https://aid.aideo.us/


Clear and transparent, yes. Useful, no. By that statement, buying the CEO's kid a house is totally within bounds.


> All corporate profits

$0 profit by paying the "CEO" all the money. Twenty years later, all the profits have gone to good charitable causes ... all $0 of it.


I just clicked around your site and I have no idea what your organization does.


The spidery green font is really hard to read against the background.


Good work! Can generic types (`type<T>`) mimic dependent types for this?


Not sure; the schema of the Props argument depends on the value — not type — of another argument, so it's not just generics.


Something short, simple, fundamental, low-level and deeply techie:

Xor, XORY

------

These are all excellent names: C, C++, Rust, Ada, Julia, Shell, Bash, etc.


>These are all excellent names

Wholeheartedly agree. What about something like Sage? B? You/U? Manifest?


- Maybe design your language first, then name it.

- Single-letter names are mostly taken (e.g., B: https://en.wikipedia.org/wiki/B_(programming_language)).

- Focus on one key feature your language does better than others. Low-level languages are trending; high-level application languages are crowded. For example, if you could make assembly-style code user-friendly, that could be a strong niche.


Impressive! Could this approach be applied to designing a NoSQL database? The flow would probably look something like this:

- The client queries for "alice123".

- The Query Engine checks the FST Index for an exact or prefix match.

- The FST Index returns a pointer to the location in Data Storage.

- Data Storage retrieves and returns the full document to the Query Engine.


What you’ve described is the foundation of Lucene, and as such the foundation of Elasticsearch.

FSTs are “expensive” to re-optimize, so they are typically built “without writes”. So the database would need some workaround for that low write throughput.

To save you the time thinking about it: The only extra parts you’re missing are what Lucene calls segments and merge operations. Those decisions obviously have some tradeoffs (in Lucene’s case the tradeoff is CRUD).

There are easily another 100 ways to be creative in these tradeoffs depending on your specific need. However, I wouldn’t be surprised if the super majority of databases’ indexing implementations are roughly similar.


Lucene's WFST is an insanely good and underappreciated in-process key-value store, assuming you're okay with a one-hour lag on your data.

Keyvi is also interesting in this regard


That wouldn't be a good idea in most cases due to reasons laid out in the "Not a general-purpose data structure" section. https://burntsushi.net/transducers/#not-a-general-purpose-da...

