Have been writing all my numerical code in Julia for the last 1.5 years, haven’t run into any issues at all.
Actively use things like Trixi.jl (a CFD framework), JuMP/Ipopt/PRIMA for numerical optimisation, and OrdinaryDiffEq.jl for ODE solutions; a few colleagues use Gridap.jl for their work.
Not sure how large-scale these libraries can be considered, but they all seem to work fine: fast and stable. (The potential issues with LoopVectorization.jl in upcoming versions of Julia are somewhat concerning, but IIRC this has been resolved for 1.11.)
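To give a flavour of what "just works" looks like in practice, here's a minimal sketch with OrdinaryDiffEq.jl (a toy exponential-decay problem; the tolerances and the Tsit5 solver are just reasonable defaults, not a recommendation for any particular workload):

```julia
using OrdinaryDiffEq

# Toy problem: du/dt = -u with u(0) = 1, solved on t ∈ [0, 5].
decay(u, p, t) = -u
prob = ODEProblem(decay, 1.0, (0.0, 5.0))

# Tsit5 is a good general-purpose explicit Runge-Kutta method.
sol = solve(prob, Tsit5(); abstol = 1e-8, reltol = 1e-8)

println(sol(5.0))  # ≈ exp(-5), via the solution's built-in interpolation
```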
Had a "run-in" with lightning a few years ago: while walking to the grocery store with an umbrella, heard what sounded like an explosion (my ears rang for the rest of that day) and was nearly blinded by the bright light, but managed to see a big spark between my thumb and the metal part of the umbrella, and feel a slight jolt of electricity. So, in all probability, the lightning hit a nearby house, and via the lightning rod it all went into the ground and through me (partially)?
No burns, no real damage done to me or the umbrella, but I was really shaken up.
"Computational Fluid Dynamics" by Ferziger and Peric. I'm in an area related to hydrodynamics, and while I don't do CFD myself, this book was easy to read and covers a lot of the basics, I think.
And it's extremely well written.
(Ferziger also wrote a great book on kinetic theory of rarefied gases)
Random note: When I was in the 5th grade, our math teacher taught us the meaning of Nota Bene and used an NB sign to mark important notes when deriving formulas. I still use it to this day when making notes.
Some thoughts: series-expansion-based methods (both Hilbert's method, which is not used in practice, and the Chapman-Enskog method) work only for moderately rarefied gas flows, where you can neglect higher-order collisions; this can be derived explicitly from the BBGKY hierarchy. Also, since the Chapman-Enskog method is asymptotic, there is no guarantee that higher-order equations (the inviscid Euler equations being the zeroth-order equations and the Navier-Stokes equations the first-order ones) will provide an accurate description of flows. Indeed, the second-order equations (the Burnett and super-Burnett equations) seem to fail in some cases while giving more accurate results in others. But given the complexity of the equations themselves and of their boundary conditions, no one really uses them. The cool thing about the Chapman-Enskog method is that it gives a closed set of equations, so you don't need empirical models for heat conductivity, viscosity, etc.
That's the first point – that methods depending on series decomposition might never guarantee a solution that's accurate in all cases. There are also moment-based methods (Grad's method, for example, being one of the most famous), which have additional equations for parts of the stress tensor (I think; never really read much about them).
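For anyone who hasn't seen it, the expansion in question has roughly this structure (schematic textbook form; Kn is the Knudsen number and f^(0) the local Maxwellian):

```latex
% Chapman-Enskog: expand the distribution function in powers of the Knudsen number
f = f^{(0)}\left(1 + \mathrm{Kn}\,\phi^{(1)} + \mathrm{Kn}^{2}\,\phi^{(2)} + \dots\right)
% Collecting terms order by order closes a set of macroscopic equations:
%   zeroth order -> inviscid Euler equations
%   first order  -> Navier-Stokes equations
%   second order -> Burnett (and super-Burnett) equations
```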
The second point is that the equations correspond to conservation laws: mass, linear momentum, energy. The equation corresponding to the conservation of angular momentum is usually neglected: the terms related to internal angular momenta of particles are considered to cancel each other out (which seems logical, since unless there's some magnetization happening, the particles will be chaotically oriented and the average of the angular momentum will be 0), and in that case, the equation is satisfied since it just follows from the equation corresponding to the conservation of linear momentum.
However, there's been some research recently on whether this equation can actually be neglected and what implications it carries, whether it's connected to turbulence or some other effects.
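For context, the conservation laws mentioned above are usually written in something like this form (standard kinetic-theory notation: ρ is density, v the velocity, P the pressure/stress tensor, q the heat flux, F an external body force, U the internal energy per unit mass, and d/dt the material derivative):

```latex
% Conservation of mass, momentum, and energy (local form)
\frac{d\rho}{dt} + \rho\,\nabla\cdot\mathbf{v} = 0
\rho\,\frac{d\mathbf{v}}{dt} + \nabla\cdot\mathbf{P} = \rho\,\mathbf{F}
\rho\,\frac{dU}{dt} + \nabla\cdot\mathbf{q} + \mathbf{P}:\nabla\mathbf{v} = 0
% The angular-momentum equation is the one usually dropped, on the symmetry
% argument described above (chaotically oriented particles, zero mean internal
% angular momentum).
```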
The third point is that in high-altitude hypersonic flows, there are far more complex effects going on than just simple collisions between particles: there are transitions of internal energy (a quantity described by quantum mechanics) and chemical reactions (dissociation, exchange reactions), and all of this complicates the Navier-Stokes equations, since additional terms appear (bulk viscosity, relaxation terms, relaxation pressure). Correct modelling of these terms requires solving large linear systems with quite complex coefficients. To complicate things further, for many of the processes mentioned there aren't any easy or even correct models (to take dissociation into account, for example, you need to know the cross-section of the reaction for each vibrational level of each molecular species involved in the flow), since these models are either computed via quantum mechanics (which takes enormous amounts of computational power) or obtained experimentally (which limits the range of conditions for which results are available).
DSMC (direct simulation Monte Carlo) methods have been growing increasingly popular as of late, but of course they can't provide theoretical results, whereas with the Chapman-Enskog method it is possible to observe some interesting effects even purely in theory.
So the problem is not only getting more "correct" equations; it's also being able to correctly model everything that goes into the equations we currently have, and then being able to solve them (for a simple flow of an N2/N mixture, if you use a detailed description of the flow, you get a system of 51 PDEs). And in engineering applications, drastically over-simplified models are often used, and yet it's not like every high-altitude aircraft or spacecraft has burned to a crisp because of this. While new, "more correct" equations are interesting, of course, there's enough work to be done with the current ones.
Source: I do theoretical research and numerical computations of rarefied gas flows for a living (at Saint Petersburg State University).
Here's a write-up about a neural net that was used to win a Kaggle image classification challenge; they did a lot of transformations on the input data to a) prevent overfitting and b) provide invariance. Some other cool tricks are mentioned there, too.
https://benanne.github.io/2015/03/17/plankton.html
Thanks for the plug :) This is not quite the same thing though: we used a bunch of affine transformations for data augmentation, but we're not using any transforms with fancy invariance properties to compute the feature maps inside the networks, which I think is what therobot is talking about.
I have experimented with FFT convolutions (the Theano implementation for this is based on my code), but they are only really beneficial with large filters, and the current trend is towards convnets with very small filters (1x1, 2x2 or 3x3).
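For anyone unfamiliar with the trick: convolution in the spatial domain becomes pointwise multiplication in the frequency domain, so for large enough filters it can pay off to go through an FFT. A minimal 1-D sketch using FFTW.jl (illustrative only, not how Theano or any other framework actually implements it):

```julia
using FFTW

# Full 1-D convolution of a signal x with a filter k via the FFT.
# Both are zero-padded to length(x) + length(k) - 1 so the circular
# convolution computed by the FFT matches the linear convolution.
function fft_conv(x::Vector{Float64}, k::Vector{Float64})
    n = length(x) + length(k) - 1
    xp = [x; zeros(n - length(x))]
    kp = [k; zeros(n - length(k))]
    real(ifft(fft(xp) .* fft(kp)))
end

x = randn(1024)
k = randn(64)   # the FFT route only pays off for largish filters
y = fft_conv(x, k)
```

With tiny 3x3-style filters, the padding and transform overhead tends to dominate, which is why the trend mentioned above works against FFT convolutions.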
I think I read this somewhere here a few months ago (paraphrasing, obviously): "When the indices for your DB don't fit into a single machine's RAM, then you're dealing with Big Data, not before."
And following up: Your laptop does not count as a "single machine" for purposes of RAM size. If you can fit the index of your DB in memory on anything you can get through EC2, it's still not Big Data.
There's still a ~25x difference between the biggest EC2 instance and a maxed-out Dell server (244 GB for EC2 vs 6 TB for an R920). Not to mention non-PC hardware like SPARC, POWER, and SGI UV systems that fit even more.
This is true, but at the upper end the "it isn't Big Data if it fits in a single system's memory" rule starts to get fuzzy. If you're using an SGI UV 2000 with 64 TB of memory to do your processing, I'm not going to argue with you about using the words "Big Data". ;-) I figured using an EC2 instance was a decent compromise.