CodesInChaos's comments | Hacker News

That's a mathematical expression, not a C++ expression. And floor here isn't the C++ floor function, it's just describing the usual integer division semantics. The challenge here is that you need 128-bit integers to avoid overflowing.

Ah, you're right. I saw that the expression in the comment and in the code was the same and assumed that the commented bit was valid C++ code. You got me to look again, and now it's obvious that it isn't. I had even gone looking through the codebase to see if std::floor was included, and still missed the incorrect `^`.

I guess in that case, as long as the 128-bit type supports constexpr basic math operations, that should suffice to replace the hardcoded constants with their source expressions.
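
A made-up sketch of the idea (in Rust rather than C++, since Rust has native 128-bit integers in const evaluation; the divisor and names here are invented, not the codebase's actual expression):

    // Hypothetical: floor(2^64 / d) overflows 64-bit arithmetic, but a
    // 128-bit intermediate computes it fine, entirely at compile time.
    // The "floor" is just integer division.
    const D: u128 = 1_000_000_007;
    const MAGIC: u64 = ((1u128 << 64) / D) as u64;

    fn main() {
        println!("{MAGIC}"); // no 128-bit math happens at runtime
    }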


The .NET ecosystem has been moving towards a higher number of dependencies since the introduction of .NET Core, though many of them are still maintained by Microsoft.


The "SDK project model" did a lot to reduce that back down. They did break the BCL up into a lot of smaller packages to make .NET 4.x maintenance/compatibility easier, and if you are still supporting .NET 4.x (and/or .NET Standard), for whatever reason, your dependency list (esp. transitive dependencies) is huge, but if you are targeting .NET 5+ only that list shrinks back down and the BCL doesn't show up in your dependency lists again.

Even some of the Microsoft.* namespaces have properly moved into the BCL SDKs and no longer show up in dependency lists, even though the Microsoft.* namespaces originally meant non-BCL first-party code.


I think first-party Microsoft packages ought to be a separate category that is more like BCL in terms of risk. The main reason why they split them out is so that they can be versioned separately from .NET proper.

I'm a bit confused by the privilege escalation part. Doesn't modifying the settings require the same privileges the application has?


I suppose the application runs as root (to update the application files) but reads the user settings (which are writable without root privileges).


How long does mongodump take on that database? My experience was that incremental filesystem/block-device snapshots were the only realistic way of backing up (non-sharded) MongoDB. In our case EBS snapshots, but I think you can achieve the same using LVM or filesystems like XFS and ZFS.


It takes ~21 hours to dump the entire DB (~500 GB), but I'm limited by my internet speed (100 Mbps, seeing 50-100 Mbps during the dump). Interestingly, the throughput is faster than a dump from Atlas, which used to max out around 30 Mbps.


MongoDB Atlas is so overpriced that you can probably already save 90% by moving to AWS.


Most of the cost in their bill wasn't from MongoDB; it was cost passed on from AWS.


I don't remember the numbers (90% is probably a bit exaggerated), but our savings from moving from Atlas to MongoDB Community on EC2 several years ago were big.

In addition to direct costs, Atlas also had expensive limitations. For example, we often spin up clone databases from a snapshot; those have lower performance requirements and no durability requirements, so a smaller non-replicated server would suffice, but Atlas required them to be sized like the replicated high-performance production cluster.


Was it? Assuming an M40 cluster consists of 3 m6g.xlarge machines, that's $0.46/hr on-demand (3 × ~$0.154/hr per instance) compared to Atlas's $1.04/hr for the compute. Savings plans or reserved instances reduce that cost further.


There's definitely MongoDB markup, but a full 33% of their bill was AWS networking costs that have nothing to do with Atlas.


I highly doubt that. MongoDB has 5,000 well-paid employees and is not a big loss-making enterprise. If most of the cost were pass-through to AWS, they wouldn't be able to sustain that. Their quarterly revenue is $500M+, but they also spend $200M on sales and marketing and $180M on R&D. (All based on their filings.)


You can look at this particular bill and observe that more than 50% of the cost was going to AWS.


If they’re a reseller of AWS, which they will be, they decide the rates that get charged.


Yes, and my point is that if this customer switched to running their own MongoDB instances on EC2, the way Atlas does, it would reduce the bill by less than 50%: the rates MongoDB charges mean their cut is less than what AWS is getting from this customer.


We saved 50% by moving from Atlas to a three-node cluster. That's for a 6 TB DB (we moved because of size rather than cost, but it's been a nice bonus).



The colour, of course.


Java's distinction between runtime and checked exceptions makes sense, and is pretty much the same distinction Rust makes between panics and Results. But Java's execution of the concept is terrible.

1. Checked exceptions don't integrate well with the type system (especially generics) or with functional programming. They're also incompatible with convenient helper functions like the ones Rust offers on Result.

2. Converting a checked exception into a runtime exception is extremely verbose, because Java assumed that the type of an error distinguishes between these cases, while in reality errors usually start out as expected in low-level functions but become unexpected at a higher level. In Rust that's a simple `unwrap`/`expect`. Similarly, converting a low-level error type to a higher-level one is a simple `map_err` (see the sketch below).

3. Propagation of checked exceptions is implicit, unlike Rust's explicit `?`.

Though Rust's implementation does have its weaknesses as well. I'd love the ability to use `Result<T, A | B>` instead of needing to define a new enum type.
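
A minimal sketch of the Rust patterns from points 2 and 3 (all the names here are made up for illustration):

    use std::{fs, io, num::ParseIntError};

    // A hypothetical higher-level error type wrapping low-level ones.
    #[derive(Debug)]
    enum ConfigError {
        Io(io::Error),
        BadPort(ParseIntError),
    }

    fn read_port(path: &str) -> Result<u16, ConfigError> {
        // `map_err` converts the low-level error type; `?` propagates it
        // explicitly but tersely.
        let text = fs::read_to_string(path).map_err(ConfigError::Io)?;
        text.trim().parse().map_err(ConfigError::BadPort)
    }

    fn main() {
        // At the top level the error becomes "unexpected": a single
        // `expect` turns the Result into a panic, with no rethrow boilerplate.
        let port = read_port("server.conf").expect("config must be readable");
        println!("listening on {port}");
    }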


I wish I could upvote this more. I can totally understand GP's sentiment, but we need to dispel the myth that results are just checked exceptions with better PR.

I think the first issue is the most important one, and it's not just an implementation issue. Java eschewed generics in its first few versions. That's understandable, because generics were quite a new concept back then; the only mainstream languages implementing them were Ada and C++, and the C++ implementation was brand new (1991) and quite problematic. It wouldn't have worked for Java. That said, this was a mistake in hindsight, and it contributed to a lot of pain down the road. In this case, Java wanted exception safety, but the only way it could be implemented was as another language feature that cannot interact with anything else.

Without the composability provided by the type system, dealing with checked exceptions was always a pain, so most Java programmers just ended up wrapping them in runtime exceptions. Using checked exceptions "correctly" meant extremely verbose error handling with a crushing signal-to-noise ratio. Rust just does this more ergonomically, especially with crates like anyhow and thiserror (see the sketch below).
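
A rough sketch of what those two crates buy you, assuming both as dependencies (the types here are invented for illustration):

    use thiserror::Error;

    // `thiserror` derives Display/Error impls and From conversions
    // declaratively for a library's typed errors.
    #[derive(Debug, Error)]
    enum StoreError {
        #[error("record {0} not found")]
        NotFound(u64),
        #[error("storage failure")]
        Io(#[from] std::io::Error),
    }

    fn load_record(id: u64) -> Result<String, StoreError> {
        Err(StoreError::NotFound(id))
    }

    // `anyhow` covers the application layer: "just bubble it up,
    // with some context attached".
    fn main() -> anyhow::Result<()> {
        use anyhow::Context;
        let record = load_record(42).context("while loading the answer")?;
        println!("{record}");
        Ok(())
    }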


For me SQL feels like PHP. Sure it's usable, but it doesn't spark joy.

Syntactically PRQL is much simpler, cleaner and more flexible. You simply chain pipeline stages instead of having a single monolithic query statement.

Data-model-wise, EdgeQL is close to what I want (links instead of joins, optionality instead of null, nesting support), but its syntax is almost as bad as SQL's.


Oh man, PRQL looks so good.

I just wish they had mutation in there too. I don't like the idea of swapping between PRQL and SQL, especially for complex update statements where I'd rather write the query in PRQL. Yeah, you could argue updates shouldn't be that complex, though, heh.


Yeah, we deliberately left out DML to focus exclusively on DQL. I also find that appealing from a philosophical angle, since it allows PRQL to remain completely functional.

I haven't thought about DML too much but what I could envision is an approach like the Elm or React architecture where you specify a new state for a table as a PRQL query and then the compiler computes the diff and issues an efficient update.
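
As a toy illustration of that diff step (Rust here just for concreteness, with a hypothetical two-column table):

    use std::collections::HashMap;

    // Sketch: the desired state (the evaluated PRQL query) is diffed
    // against the current rows to produce minimal DML statements.
    fn diff_to_dml(
        current: &HashMap<i64, String>, // id -> name
        desired: &HashMap<i64, String>,
    ) -> Vec<String> {
        let mut stmts = Vec::new();
        for (id, name) in desired {
            match current.get(id) {
                None => stmts.push(format!(
                    "INSERT INTO table_name (id, name) VALUES ({id}, '{name}');"
                )),
                Some(old) if old != name => stmts.push(format!(
                    "UPDATE table_name SET name = '{name}' WHERE id = {id};"
                )),
                _ => {} // row unchanged
            }
        }
        for id in current.keys().filter(|id| !desired.contains_key(*id)) {
            stmts.push(format!("DELETE FROM table_name WHERE id = {id};"));
        }
        stmts
    }

    fn main() {
        let current = HashMap::from([(1, "Jane".to_string())]);
        let desired = HashMap::from([(1, "John".to_string())]);
        for stmt in diff_to_dml(&current, &desired) {
            println!("{stmt}"); // UPDATE table_name SET name = 'John' WHERE id = 1;
        }
    }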

For example, SQL:

    DELETE FROM table_name WHERE id = 1;

would be something like

    table_name = from table_name | filter id != 1

SQL:

    INSERT INTO table_name (id, name) VALUES (1, 'Jane');

PRQL:

    table_name = from table_name | append [{id=1, name='Jane'}]

Update is the trickiest one to express without clunky syntax. For example, what should the following look like?

SQL:

    UPDATE table_name SET name = 'John' WHERE id = 1;

I can think of `filter` followed by `append`, or maybe a case statement, but neither seems great.

Any ideas?


Using a single disk has durability concerns. But I don't see why VPS vs dedicated server should matter much.

