Legos interact with each other in a very well-defined manner. The circular interlocking portions must be manufactured to a tolerance of 10 micrometers so that the bricks lock as expected, but can be removed easily.
In a good interface (e.g. an API), the parameters passed around can be defined and characterized precisely, perhaps using input validation and strong typing. The designer of an interface should know and handle all the permutations of data in and data out. If a surface-level function can't handle negative values, the designer must still decide, and document, how the function responds when it receives one.
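For instance, here's a minimal sketch in Go (the function and its behavior are hypothetical, just for illustration): rather than silently misbehaving on a negative input, the function makes its contract explicit and tells the caller exactly what went wrong.

    package main

    import (
        "errors"
        "fmt"
        "math"
    )

    // IntSqrt returns the integer square root of n.
    // The contract is precise: negative values are rejected, not mishandled.
    func IntSqrt(n int) (int, error) {
        if n < 0 {
            return 0, errors.New("IntSqrt: negative input not supported")
        }
        return int(math.Sqrt(float64(n))), nil
    }

    func main() {
        if root, err := IntSqrt(-4); err != nil {
            fmt.Println("caller knows exactly what went wrong:", err)
        } else {
            fmt.Println(root)
        }
    }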
Back to Legos: despite their extraordinarily tight specifications, there is practically nothing that can't be built with Legos within the precisely-defined Lego universe. Six 2x4 bricks fit together in 915,103,765 ways. It's unlikely any particular design decision by Lego enabled this, but it's almost as if they built a Turing-complete construction language.
Similarly, good interfaces shouldn't unreasonably constrain the user.
Finally, and this may be stretching the analogy, Lego interfaces sort of act like a "black box". One can attach a brick to any open connection point on an existing Lego project without concern for what color the bricks are, how many bricks are connected, or even the types of the other bricks involved. The interface is all that matters.
Likewise, a good interface doesn't require the user to dig into the source code to understand how to use the surface level connections.
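A minimal sketch of that idea in Go (all names here are hypothetical): the caller is written against the interface alone, so any implementation can be swapped in without the caller ever reading its source.

    package main

    import "fmt"

    // Store is the "stud and tube" of this design: the only contract callers see.
    type Store interface {
        Get(key string) (string, bool)
        Put(key, value string)
    }

    // memStore is one possible implementation; callers never need to read it.
    type memStore struct{ data map[string]string }

    func (m *memStore) Get(key string) (string, bool) { v, ok := m.data[key]; return v, ok }
    func (m *memStore) Put(key, value string)         { m.data[key] = value }

    // describe works against any Store, regardless of what's behind it.
    func describe(s Store, key string) {
        if v, ok := s.Get(key); ok {
            fmt.Println(key, "=", v)
        } else {
            fmt.Println(key, "not found")
        }
    }

    func main() {
        s := &memStore{data: map[string]string{}}
        s.Put("brick", "2x4")
        describe(s, "brick")
    }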
Separate reply for the n-dimensional testing space:
This goes back to the scientific method and hypothesis testing. When conducting experiments in school, we learned to change only one independent variable at a time and measure its singular effect on the dependent variable. It's possible to test more variables at once; it's just harder to visualize, more time-consuming, and more difficult to plot nicely in a school report. Using an idea from statistics called Design of Experiments, one can vary two variables at once, creating a 3-dimensional space (two inputs plus one output) whose response is a surface rather than a curve.
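A toy sketch in Go (the response function is made up purely for illustration): sampling both variables at once over a small grid, where each (a, b, output) triple is one point in that 3-dimensional space.

    package main

    import "fmt"

    // response is a made-up two-factor function standing in for an experiment.
    func response(a, b int) int { return a*a + 2*b }

    func main() {
        // Vary both factors at once over a small grid; each printed triple
        // is one point on the response surface.
        for a := 0; a <= 2; a++ {
            for b := 0; b <= 2; b++ {
                fmt.Printf("a=%d b=%d -> %d\n", a, b, response(a, b))
            }
        }
    }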
Anyway, what does this have to do with programming? A program is effectively a transformation of inputs into outputs. It's easier to think about this for an individual function:
    func f(a, b int) int { /* ...stuff... */ }
There are a few ways to test this function. The first is to think really hard about common inputs and make sure the outputs are correct. The next level is to figure out all the edge cases and test those as well. At that point, the function would be considered reasonably well-tested.
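Roughly what that looks like in Go's table-driven style (the function and the cases are hypothetical; this would live in a _test.go file and run with go test): each case is one point in the input space, covering common inputs plus the edge cases we could think of.

    package main

    import "testing"

    // div is a stand-in for the two-input function above.
    func div(a, b int) int {
        if b == 0 {
            return 0 // edge case: define the behavior rather than panic
        }
        return a / b
    }

    func TestDiv(t *testing.T) {
        cases := []struct{ a, b, want int }{
            {6, 3, 2},   // common input
            {7, 2, 3},   // truncation
            {-6, 3, -2}, // negative value
            {5, 0, 0},   // edge case: division by zero
        }
        for _, c := range cases {
            if got := div(c.a, c.b); got != c.want {
                t.Errorf("div(%d, %d) = %d, want %d", c.a, c.b, got, c.want)
            }
        }
    }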
But why is that? Why doesn't 100% test coverage mean someone tested EVERY SINGLE POSSIBLE INPUT?
Because that style of testing has produced a set of data points through which a 2D surface can be traced in the 3D test space (two independent variables, a and b, plus the dependent output). As a result, one can interpolate between those points anywhere on that surface and predict what the output is.
This example is clearly very simplified, but the concept can be extrapolated to functions with more complicated inputs and entire programs.
I built something similar to LibHunt without knowing about it. Someone on http://wip.chat pointed it out, and I quickly shifted my focus to something else. There just doesn't seem to be enough room in that space for two players, and you already have the market cornered.
I was Associate Director of Development and I quit. I wanted hands on keyboard again. I was tired of the endless meetings, politics and dealing with the client. I wanted to be a developer again. So I stepped down and went back to development. Best idea ever.