kxyvr's comments

I'm an applied mathematician and this is the most common layout for dense matrices due to BLAS and LAPACK. Note that many of these routines have a flag to denote when working with a transpose, which can be used to fake a different memory layout in a pinch. There are also increment (stride) parameters for memory, which can be co-opted to compute across a row as opposed to down a column. Unless there's a reason not to, I personally default to column-major ordering for all matrices and tensors and use explicit indexing functions, which tends to avoid headaches since my codes are consistent with most others.

Abstractly, there's no such thing as memory layout, so for things like proofs it normally doesn't matter.
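Concretely, the kind of explicit indexing function I mean looks like the following. This is just my own toy sketch, not from any particular library: element (i, j) of an m x n matrix stored column by column sits at offset i + j*m.

  import numpy as np

  def idx(i, j, m):
      # 0-based column-major (Fortran/BLAS-style) offset of element (i, j)
      return i + j * m

  m, n = 3, 4
  A = np.arange(m * n, dtype=float).reshape(m, n)
  flat = A.ravel(order='F')              # the column-major storage of A

  # The indexing function recovers the same entries as the 2-D view.
  print(flat[idx(2, 1, m)] == A[2, 1])   # True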


Automatic differentiation has been actively and continuously used in some communities for the last 40 years. Louis Rall published an entire book about it in 1981. One of the more popular books on AD, by Griewank, was published in 2000. I learned about it at university in the early 2000s. I do agree that the technology was not as well used as it should have been until more recently, but it was well known within the numerical math world and used continuously over the years.
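For anyone who hasn't seen it, forward-mode AD is easy to sketch with dual numbers. This is my own toy illustration, not from any of the books above: carry (value, derivative) pairs through arithmetic so the derivative comes out exactly, not via finite differences.

  import math

  class Dual:
      def __init__(self, val, dot=0.0):
          self.val, self.dot = val, dot
      def __add__(self, o):
          o = o if isinstance(o, Dual) else Dual(o)
          return Dual(self.val + o.val, self.dot + o.dot)
      __radd__ = __add__
      def __mul__(self, o):
          o = o if isinstance(o, Dual) else Dual(o)
          return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
      __rmul__ = __mul__

  def sin(x):
      # Chain rule applied alongside the function evaluation
      return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

  # d/dx [x sin(x) + 3x] at x = 2; seed with dx/dx = 1.
  x = Dual(2.0, 1.0)
  y = x * sin(x) + 3.0 * x
  print(y.dot)   # sin(2) + 2 cos(2) + 3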


That's not true. Here's an abbreviated list from:

http://historyguy.com/major_wars_19th_century.htm

I'm sure there are others. It lists:

  Greek War of Independence (1821-1832)
  French invasion of Spain (1823)
  Russo-Persian War (1826-1828)
  Russo-Turkish War (1828-1829)
  Hungarian Revolution and War of Independence (1848-1849)
  First Schleswig War (1848-1851)
  Wars of Italian Independence (1848–1866)
  Crimean War (1854–1856)
  Second Schleswig War (1864)
  Austro-Prussian War (1866)
  Franco-Prussian War (1870-1871)
  Russo–Turkish War (1877–1878)
  Serbo-Bulgarian War (1885)
  Greco–Turkish War (1897)
Together, that adds up to multiple decades of war.


I think https://ourworldindata.org/grapher/deaths-in-wars-project-ma... puts this in perspective. The period from 1815 to 1915 was a much more peaceful period measured by deaths in war than 1915 to 2015, though 1975 forward seems like a return to that level (but world population is so much larger now that it's even better than it seems).


We're talking about different things.

If you count the years when there was a war anywhere in Europe, you'll end up with a large number.

I'm counting how often each country was at war. Several countries had no wars, and even the most war-torn country didn't fight for more than 10-15 years.


That's really not true if you look at the European neighbors and European territories of Russia and the Ottoman Empire.

Also not true of Spain, which spent a lot of time in internal warfare (with occasional outside interventions.)

But, yes, excluding those, most of the countries in Europe were too busy fighting endless wars throughout their (or their allies’ or enemies’) colonial empires (whether to expand them, defend them, or put down or assist rebellions in them) to bother fighting other powers in Europe in that period.


True. That said, I'll also mention that tomography is a very rich, interesting field that's still open to new innovations. I work in the area and unfortunately needed to pass on a muon tomography contract some years ago. By the way, you may know this, but the following is for the broader audience.

---

If anyone is interested, the book Parameter Estimation and Inverse Problems by Aster, Borchers, and Thurber gives an easy introduction to simple tomography problems. Example 1.12 in their second edition has a very basic setup. More broadly, tomography intersects with an area of study called PDE-constrained optimization. Commonly, tomography problems are set up as a large optimization problem where the difference between experimental data and the output of a simulation is minimized. Generally, the simulation is parameterized by the material properties of whatever is under study, and those properties are the optimization variables. The idea is that whatever material property produces a simulation matching the experimental data is probably what's there. This material property could be something simple like density or something more complicated like a full elasticity tensor.

What makes this difficult is that most good simulations come from a system of differential equations, which are infinite dimensional and not suitable for running directly inside an optimization algorithm. As such, care must be taken in discretizing the system so that the optimization tool produces something reasonable and physical. Words you'll see are things like discretize-then-optimize or optimize-then-discretize. Generally speaking, the whole system works very, very poorly if one just takes an existing simulator and slaps an optimizer on it. Care must be taken to do it right.

As far as the optimizer goes, the scale is pretty huge. It's common to see hundreds of millions of variables, if not more. In addition, the models normally need to be bounded, so there are inequalities that must be respected. For example, if something like a density isn't bounded to be positive (which is physical), then the simulator itself may diverge (a simulator here may be something like a Runge-Kutta method).
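To make the setup concrete, here's a toy sketch of the discretize-then-optimize idea on a straight-ray travel-time problem. This is my own illustration with made-up numbers, nowhere near the scale or physics of a real problem: the "simulation" is just ray sums of a slowness field, fit to noisy travel times with a positivity bound on the unknowns.

  import numpy as np
  from scipy.optimize import lsq_linear

  n = 8                                  # n x n grid of unit cells
  s_true = np.ones((n, n))
  s_true[2:5, 3:6] = 2.0                 # a slow anomaly in the middle

  # Straight rays across each row and down each column: each travel time is
  # the sum of slownesses along that ray. Assemble the ray matrix A.
  rays = []
  for i in range(n):
      r = np.zeros((n, n)); r[i, :] = 1.0; rays.append(r.ravel())
      c = np.zeros((n, n)); c[:, i] = 1.0; rays.append(c.ravel())
  A = np.array(rays)

  rng = np.random.default_rng(0)
  t = A @ s_true.ravel() + 0.01 * rng.standard_normal(2 * n)

  # Bounded least squares keeps the slowness physical (nonnegative). A real
  # problem is wildly underdetermined like this one, so regularization and
  # far better physics are needed in practice.
  res = lsq_linear(A, t, bounds=(0.0, np.inf))
  print(res.x.reshape(n, n).round(2))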

Anyway, it's a big combination of numerical PDEs, optimization, HPC, and other tools just to get a chance to run something. Something like the detector in the article is very cool because it may be a realistic way to get data to test against for super cheap.


I believe Absil, Mahony, and Sepulchre also have a book on optimization over manifolds:

https://press.princeton.edu/absil

I was unaware of the Boumal work, so thanks for that. Do you have any idea how Boumal's approach differs from Absil's?

For others, it looks like Boumal also has a book on the topic from 2023:

https://www.nicolasboumal.net/book/


Boumal was advised by Absil IIRC :) And in fact you can see this in his more modern presentation of the material.


In the U.S., there is typically a separation between calculus and real analysis, though how different the two are depends on the university.

In calculus, there is more emphasis on learning how to mechanically manipulate derivatives and integrals and on their use in science and engineering. While this includes some instruction on proving the results necessary for formally defining derivatives and integrals, that is generally not the primary focus. Meaning, things like limits will be explained and then used to construct derivatives and integrals, but the construction of the reals is less common in this course. Commonly, calculus 1 focuses more on derivatives, 2 on integrals, and 3 on multivariable calculus. However, to be clear, there is a huge variety in what is taught in calculus and how proof-based it is. It depends on the department.

Real analysis focuses purely on proving the results used in calculus classes and would include a discussion on the construction of the reals. A typical book for this would be something like Principles of Mathematical Analysis by Rudin.

I'm not writing this because I think you don't know what these topics are, but to help explain some of the differences between the U.S. and elsewhere. I've worked at universities both in the U.S. and in Europe and it's always a bit different. As to why or what's better, no idea. But, now you know.

Side note, the U.S. also has a separate degree for math education, which I've not seen elsewhere. No idea why, but it also surprised me when I found out.


There's a POV that learning math and learning how to teach math effectively are two orthogonal things.

If one only took the method of teaching that is most common in US university lecture halls, and applied it to a small class of pre-teens or teenagers, it probably wouldn't be very effective.


Honestly, the parent is pretty accurate. No one is claiming that P = NP. However, the technology to solve mixed integer programs has improved dramatically over the last 30 years, and that improvement has outpaced the gains in raw computational speed by multiple orders of magnitude. It's the algorithms.

I just went to pull up some numbers. The following comes from a talk that Bob Bixby gave at ISMP in 2012. He is the original author of CPLEX and one of the current owners of Gurobi. Between 1991 and 2007, CPLEX achieved a 29530x speedup in its solvers. He attributes the following speedups to their 1997/98 breakthrough year: cutting planes, 33.3x; presolve, 7.7x; variable selection, 2.7x; node presolve, 1.3x; heuristics, 1.1x; dive probing, 1.1x. I don't have a paper reference for these numbers and I don't think he has published them, but I was at the talk.

The point is that integer programming solvers perform unreasonably well. There is theory as to why. Yes, there is still a lot of searching. However, search in and of itself is not sufficient to solve the problems that we regularly solve now. Further, that increase in performance is not just heuristics.
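For anyone who hasn't touched this stuff, here's what a tiny mixed integer program looks like: a toy knapsack solved through SciPy's milp interface to the open-source HiGHS solver. This is just my own illustration, not CPLEX or Gurobi and nothing to do with Bixby's numbers.

  import numpy as np
  from scipy.optimize import milp, LinearConstraint, Bounds

  values = np.array([10.0, 13.0, 18.0, 31.0, 7.0, 15.0])
  weights = np.array([2.0, 3.0, 4.0, 7.0, 1.0, 3.0])
  capacity = 10.0

  # Maximize total value subject to the weight limit; milp minimizes, so
  # negate the objective. Each variable is binary: integer and in [0, 1].
  res = milp(
      c=-values,
      constraints=LinearConstraint(weights[np.newaxis, :], -np.inf, capacity),
      integrality=np.ones(values.size),
      bounds=Bounds(0, 1),
  )
  print(res.x, -res.fun)   # chosen items and the optimal total value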


Here is the paper

https://www.emis.de/journals/DMJDMV/vol-ismp/25_bixby-robert...

FYI we’ve probably crossed paths :)


Laugh. Probably! I gave a talk at that conference titled, "Software Abstractions for Matrix-Free PDE Optimization with Cone Constraints." I still work in the field, so if you want to talk algorithms sometime, feel free to send me an email. I keep my email off of HN to limit spam, but if you search for the lead author on that presentation, it should list my website.


I'll second this. Their methods are very powerful and very fast. For those out of the loop, the Chebyshev (and ultra-spherical) machinery allows a very accurate (machine precision) approximation to most functions to be computed very quickly. Then, this representation can be manipulated more easily. This enables a variety of methods such as finding the solution to differential algebraic equations to machine precision or finding the global min/max of a 1-D function.

I believe they use a different algorithm now, but the basic methodology originally used by Chebfun can be found in the book Spectral Methods in MATLAB by Trefethen; look at chapter 6. The newer methodology with ultraspherical functions can be found in a SIAM Review paper titled "A Fast and Well-Conditioned Spectral Method," by Olver and Townsend.
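As a rough flavor of the idea, here's a sketch using NumPy's Chebyshev class rather than Chebfun itself, and with the degree picked by hand instead of adaptively the way Chebfun does: build a high-degree Chebyshev interpolant of a smooth function, then get its global minimum from the roots of the interpolant's derivative.

  import numpy as np
  from numpy.polynomial.chebyshev import Chebyshev

  f = lambda x: np.exp(-0.3 * x) * np.cos(3.0 * x)

  # Interpolate f at Chebyshev points on [0, 10]; degree 80 is overkill here.
  p = Chebyshev.interpolate(f, 80, domain=[0.0, 10.0])

  # Candidate minimizers: real roots of p' in the domain, plus the endpoints.
  r = p.deriv().roots()
  r = r.real[np.abs(r.imag) < 1e-10]
  r = r[(r >= 0.0) & (r <= 10.0)]
  cand = np.concatenate(([0.0, 10.0], r))

  xmin = cand[np.argmin(p(cand))]
  print(xmin, f(xmin), abs(p(xmin) - f(xmin)))   # error near machine precision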


A convex function is a function that is bowl shaped, such as the parabola `x^2`. If you take two points on it and connect them with a straight line, then Jensen's inequality tells you that the function lies below this straight line between those points. Basically, `f(cx+(1-c)y) <= c f(x) + (1-c) f(y)` for `0<=c<=1`. The expression `cx+(1-c)y` provides a way to move between a point `x` and a point `y`. The expression on the left of the inequality is the evaluation of the function along this line. The expression on the right is the straight line connecting the two points.

There are a bunch of generalizations to this. It works for any convex combination of points. A convex combination of points is a weighted sum of points where the weights are positive and add to 1. If one is careful, eventually this can become an infinite convex combination of points, which means that the inequality holds with integrals.
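A quick numerical check of the finite version, in case it helps. This is my own throwaway snippet, with `x^2` as the convex function:

  import numpy as np

  f = lambda x: x**2                     # a convex function
  rng = np.random.default_rng(0)

  x = rng.normal(size=10)                # arbitrary points
  c = rng.random(10)
  c /= c.sum()                           # weights: positive and summing to 1

  lhs = f(np.dot(c, x))                  # f(sum_i c_i x_i)
  rhs = np.dot(c, f(x))                  # sum_i c_i f(x_i)
  print(lhs <= rhs)                      # True: Jensen's inequality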

In my opinion, the wiki article is not well written.


If anyone is interested in safely preserving food, the USDA publishes its Complete Guide to Home Canning, which has recipes and canning guidance for using a pressure canner. Their current webpage is here:

https://www.nifa.usda.gov/about-nifa/blogs/usdas-complete-gu...

This page refers out to the National Center for Home Food Preservation, which is a web resource with similar guidance and recipes:

https://nchfp.uga.edu/

These recipes do include things like salsas. The point is that safe canning practices have been well studied, documented, and already paid for by taxpayers. They're good resources to use as opposed to just winging it.

