Your question is rather ambiguous. Do you mean using chemistry to develop new techniques, or combining unusual ingredients, to create food with novel flavors or textures? That would fall under molecular gastronomy, which has been highly influential in fine dining over the last few decades.
Do you mean processing cheap ingredients to make a product as hyper-palatable as possible? That would generally be called "ultra-processed food"; you're not going to find a Doritos chip in nature.
Do you mean developing completely new flavors via chemical synthesis? I don't think there's much possibility there. Our senses evolved to detect compounds found in nature, so it's unlikely a synthetic compound can produce a flavor completely unlike anything found in nature.
Also, I think you're overestimating jelly. Gelatine is just a breakdown product of collagen: boil animal connective tissue, purify the gelatine, add sugar and flavoring, and set it into a gel. It's really only a few techniques removed from nature. If you want to say it's not found in nature, fair enough, but neither is a medium-rare steak.
I mean using chemistry to create food from atypical ingredients that aren't normally classified as food, or that are entirely synthetic. Take simpler or more abundant compounds and create original food from them, instead of using plants and wildlife. The flavors don't need to be new; as others mentioned, there are plenty of recently invented flavors. Doritos are ultra-processed corn; what I'm describing is Doritos with no corn involved at all. The original article is about meat-like food, and I was asking "why meat-like?" If it's food that tastes similar to meat, that's fine, but it doesn't need to be like meat; it just needs to taste good and have a palatable texture. Maybe we can have something that tastes better than meat!
Have you seen the presentation from GDC 2017 on the architecture of Overwatch [0]? If you watch the video in detail -- stepping through frame-by-frame at some points -- it provides a nearly complete schematic of the game's architecture. That's probably why the video has since been made unlisted.
I'm not aware of anything quite like that, but most submarines have something like a Rescue Buoy [0], a Submarine Emergency Position-Indicating Radio Beacon (SEPIRB), or a Submarine Emergency Communications Transmitter (SECT). I think these differ in whether they're attached by a cable and allow communication with the submarine, or just broadcast a distress signal with the position. In any case, they're designed to deploy automatically in the event of an emergency or catastrophic failure, and based on this Quora answer [1] they're attached by an independent release mechanism with a timer that has to be regularly reset to stop it from deploying. I think it might be a clockwork mechanism, with an electronic alarm that sounds beforehand to remind the crew to wind it.
It's a bit of a long read, but I think the best introduction is still this [0] and the comments were here [1]. Yes, it's presented in the context of rust and gamedev, but ECS isn't actually specific to a particular programming language or problem domain.
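To illustrate that the pattern really is language- and domain-agnostic, here is a minimal ECS sketch in Python. The component names and the dict-based storage are my own toy choices, not from the linked article; real ECS implementations use densely packed arrays, archetypes, and schedulers for cache efficiency.

```python
# Minimal Entity-Component-System sketch. An entity is just an id,
# components are plain data keyed by entity id, and a system is an
# ordinary function that iterates over a query.

class World:
    def __init__(self):
        self.next_id = 0
        self.components = {}  # component name -> {entity_id: data}

    def spawn(self, **components):
        """Create an entity with the given components; return its id."""
        eid = self.next_id
        self.next_id += 1
        for name, value in components.items():
            self.components.setdefault(name, {})[eid] = value
        return eid

    def query(self, *names):
        """Yield (entity_id, comp1, comp2, ...) for every entity that
        has ALL of the named components."""
        if not names:
            return
        ids = set(self.components.get(names[0], {}))
        for n in names[1:]:
            ids &= set(self.components.get(n, {}))
        for eid in sorted(ids):
            yield (eid, *(self.components[n][eid] for n in names))


# A "system" is just a function over a query:
def movement_system(world, dt):
    for eid, pos, vel in world.query("position", "velocity"):
        world.components["position"][eid] = (pos[0] + vel[0] * dt,
                                             pos[1] + vel[1] * dt)


world = World()
e = world.spawn(position=(0.0, 0.0), velocity=(1.0, 2.0))
world.spawn(position=(5.0, 5.0))  # no velocity, so movement skips it
movement_system(world, dt=1.0)
print(world.components["position"][e])  # → (1.0, 2.0)
```

The point of the pattern is visible even at this toy scale: behavior (the system) is decoupled from data (the components), and entities gain or lose behavior simply by gaining or losing components, with no class hierarchy involved.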
> I'm not sure about language design or system architecture but this is almost universally not true for any mathematical or algorithmic pursuit.
I don't agree. While starting with the simplest case and expanding out is a valid problem-solving technique, it is also often the case in mathematics that we approach a problem by solving a more general problem and getting our solution as a special case. It's a bit paradoxical, but a problem that would be completely intractable if attacked directly can be trivial if approached with a sufficiently powerful abstraction. And our problem-solving abilities grow with our toolbox of ever more powerful and general abstractions.
Also, it's a general principle in engineering that the initial design decisions (the assumptions underlying everything else) are themselves the least expensive part of the process but have an outsized influence on the entire rest of the project. The civil engineer who, halfway through the construction of his bridge, discovers a flaw in his design is having a very bad day (and likely year). With software things are more flexible, so we can build our solution incrementally from a simpler case and swap bits out as our understanding of the problem changes; but even there, if we discover something wrong with our fundamental architectural decisions, with how we model the problem domain, we can't fix it just by rewriting some modules. That's something that can only be fixed by a complete rewrite, possibly even in a different language.
So while I don't agree with your absolute statement in general, I think it is especially wrong given the context of language design and system architecture. Those are precisely the kind of areas where it's really important that you consider all the possible things you might want to do, and make sure you're not making some false assumption that will massively screw you over at some later date.
> ... it is also often the case in mathematics that we approach a problem by solving a more general problem and getting our solution as a special case.
This is a really good point. LLL and "Feynman's" integral trick come to mind. There are many others.
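For a concrete instance of "Feynman's" trick (differentiation under the integral sign; this is a standard textbook example, not from the linked discussion), consider embedding the integral we want into a one-parameter family:

```latex
I(t) = \int_0^\infty e^{-tx}\,dx = \frac{1}{t}, \qquad t > 0.
```

Differentiating both sides with respect to $t$ (under the integral sign):

```latex
-I'(t) = \int_0^\infty x\,e^{-tx}\,dx = \frac{1}{t^2},
```

so setting $t = 1$ gives $\int_0^\infty x\,e^{-x}\,dx = 1$ with no integration by parts. The more general family $I(t)$ is easier to manipulate than the single integral we actually wanted, which is exactly the "solve the general problem, read off the special case" pattern.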
I had it in my head that this doesn't apply to NP-complete problems and so should be discounted. When trying to "solve" NP-complete problems, the usual tactic is to restrict the problem domain to something tractable and then try to branch out into other regions of applicability.
> Those are precisely the kind of areas where it's really important that you consider all the possible things you might want to do, and make sure you're not making some false assumption that will massively screw you over at some later date.
I will say that abstraction is its own type of optimization and generalization like this shouldn't be done without some understanding of the problem domain. My guess is that we're in agreement about this point and the talk essentially makes this argument explicitly.
I actually think that it does a disservice to not go to Nazi
allegory, because if I don't use Nazi allegory when referring
to Oracle there is some critical understanding that I have
left on the table; there is an element of the story that you
can't possibly understand.
In fact, as I have said before and I emphatically believe, if
you had to explain the Nazis to somebody who had never heard
of WWII but was an Oracle customer, there's a very good chance
that you actually explain the Nazis in Oracle allegory.
So, it's like: "Really, wow, a whole country?"; "Yes, Larry
Ellison has an entire country"; "Oh my god, the humanity! The
License Audits!"; "Yeah, you should talk to Poland about it,
it was bad. Bad, it was a blitzkrieg license audit."
The discussion near the end about how leadership taking responsibility can beneficially relieve accountability reminded me of the story of the Naval Tactical Data System (NTDS) [0].
[1]:
> When NTDS was eventually acclaimed not only a success, but also one of the most successful projects in the Navy, it amazed people. Especially because it had stayed within budget and schedule. A number of studies were commissioned to analyze the NTDS project to find why it had been so successful in spite of the odds against it. Sometimes it seems there was as much money spent on studying NTDS as was spent on NTDS development.
[2]:
> ...the Office of the Chief of Naval Operations authorized development of the Naval tactical Data System in April 1956, and assigned the Bureau of Ships as lead developing agency. The Bureau, in turn, assigned Commander Irvin McNally as NTDS project “coordinator” with Cdr. Edward Svendsen as his assistant. Over a period of two years the coordinating office would evolve to one of the Navy’s first true project offices having complete technical, management, and funds control over all life cycle aspects of the Naval Tactical Data System including research and development, production procurement, shipboard installation, lifetime maintenance and system improvement.
[1]:
> The Freedom to Fail: McNally and Svendsen had an agreement with their seniors in the Bureau of Ships and in OPNAV that, if they wanted them to do in five years what normally took 14, they would have to forgo the time-consuming rounds of formal project reviews and just let them keep on working. This was reasonable because the two commanders were the ones who had defined the new system, and they knew better than any senior reviewing official whether they were on the right track or not. It was agreed that, when the project officers needed help, they would ask for it; otherwise the seniors would stand clear and settle for informal progress briefings.
The key takeaway is that NTDS was set up as a siloed project office, with Commanders McNally and Svendsen bearing responsibility for the ultimate success of the project but otherwise being completely unaccountable. There were many other things the NTDS project did well, but I believe that fundamental aspect of its organization was the critical necessary condition for its success. Lack of accountability can be bad, and in other circumstances it can be useful, but diffusion of responsibility is always the enemy.
How many trillions of dollars are wasted on projects that go over budget, get delayed, and/or ultimately fail, and to what extent could that pernicious trend be remedied if such projects were led from inception to completion by one or two people who bear responsibility for their ultimate success and shield the project from accountability?
CNC Kitchen put out a great video on the practical use of silica gel. I especially found his exploration of different methods of drying to be of interest.