Hacker News | bluepanda928752's comments

Consciousness has an easy enough explanation requiring no quantum physics


Let me guess though, the margin is too small to contain said explanation?

(This is a reference to Fermat's Last Theorem in which Fermat confidently proclaimed in a margin of his book that he had a simple elegant proof which the margin was too small to contain)


Dunning-Kruger.

Consciousness has, as yet, no empirical explanation. Only a variety of theories that, so far, have failed to make any headway in providing an explanation. That's why it's called the "hard problem of consciousness."

If you think there is an easy answer, then you do not yet understand the question.


> Consciousness has, as yet, no empirical explanation.

Because nobody has a definition, and when it turns out that computers can fit the definition, people will move the goalposts.

The hard problem of consciousness is getting a priest to agree to a definition that a scientist can test. Because everything that touches the real world can probably be modelled with distressing accuracy by matrix multiplications and a little non-linearity.
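For what it's worth, "matrix multiplications and a little non-linearity" is literally all a basic neural network is. A toy sketch (the shapes, names, and random weights are purely illustrative, not a claim about brains):

```python
import numpy as np

rng = np.random.default_rng(0)

# "Matrix multiplications and a little non-linearity": a two-layer
# perceptron mapping a 4-dimensional stimulus to a scalar response.
W1 = rng.standard_normal((4, 8))   # input -> hidden weights
W2 = rng.standard_normal((8, 1))   # hidden -> output weights

def respond(stimulus):
    hidden = np.maximum(0.0, stimulus @ W1)  # matrix multiply + ReLU non-linearity
    return hidden @ W2                       # one more matrix multiply

x = rng.standard_normal(4)
print(respond(x).shape)  # (1,)
```

Stacking enough of these layers is what gives modern models their "distressing accuracy" as universal function approximators.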


> Because nobody has a definition

That's not the only reason though.

> The hard problem of consciousness is getting a priest to agree to a definition that a scientist can test.

Scientists are free (assuming free will) to define their own, which I think they (some) actually have.


> If you think there is an easy answer, then you do not yet understand the question.

There is no easy answer that people find satisfactory, but it may well be that people's intuitive understanding of their own consciousness is incoherent, and that there are in fact no answers at all that don't violate one or more of the arbitrary axioms that populate the thoughtspace around that concept.


The hard problem of consciousness seems almost entirely made up by people who a priori reject physicalism and reductionism. They generally start by assuming that there is something different between knowing what, say, a color is vs. the experience of seeing that color, and deduce from this that qualia exist and have definite meaning.

However, if we don't start with such assumptions, it is easy to accept that our attention mechanisms have some way of interpreting the stimuli that they receive, and that similar computational architectures will interpret these stimuli the same way. It is entirely possible that all humans experience pain almost identically, and that it is even more or less identical to how dogs experience pain; while at the same time, an alien or AGI might experience these things entirely differently.

It is also quite obvious that a being without self-reflection, such as a microprocessor, will not have this experience mechanism, almost by definition.

With enough study of the computational structure of the brain, we will likely come to even understand the precise advantage that experiences (self-reflection) have.

To give just one personal reason why I don't believe in the idea/importance of qualia: I feel very clearly that there are certain mental states that I experience (such as colors and smells etc.), but there are also mental states that don't have an associated experience. For example, I don't think there is such a thing as 'the feeling of thinking about the number 277'. This to me suggests strongly that experiences are a specialized part of our computational apparatus, with specific roles that we don't understand precisely now (like so much about our minds), but with limited applicability - not some be-all, end-all of consciousness and thought.

I will also note that my position is entirely in line with philosophers like Daniel Dennett, it is not some dismissal of mainstream philosophical thought.


"knowing" and "experiencing" may or may not be different things. I believe they are aspects of the same thing. But equating them does not solve the problem.

What does it mean "to know" as opposed to contain, or have access to information? A computer program, in one sense, "knows" what the current value of a variable is. But it has no awareness of its own knowing.

There is no doubt that we are creatures of stimuli that are computationally processed in the nervous system. However, there is an ontological difference between the ability to process stimuli and the experience of said stimuli. The presumption is that with a sufficiently complex system, the ability to experience stimuli will give rise to self-reflective awareness, i.e., consciousness will emerge. But that's a whole lot of hand-waving. What is the mechanism? None is offered. It is only assumed.

As others have noted, we cannot even agree on what the term "consciousness" means, and because of this, some would like to posit that consciousness does not actually exist, i.e., that it is illusory. The irony of this conjecture is that they are trying to convince self-aware beings that they are not self-aware. The intent to convince the other is itself a function of self-awareness. Intent cannot exist without self-awareness, and persuasion of "the other" also cannot exist without self-awareness.

Denying a phenomenon does not make it go away. I could just as easily respond to the person making this argument that if, indeed, self-awareness is an illusion, then so is their argument. It is mere data, the product of a series of calculations leading to a linguistic expression that intends to communicate nothing. It is merely the output from an extremely complex set of interlocking algorithms.

Despite the difficulty of agreeing to a definition of consciousness, we each, to a person, know what it is because of our own experience. The reason all current attempts to explain consciousness are unsatisfactory is because these attempts contradict our common experience. In fact, every attempt so far denies experience itself. They are unsatisfactory because they are illogical, or because they do not provide any explanation at all, but merely presume that somehow the conclusion is achieved, despite the lack of any mechanism in the proposition that can bring us to that conclusion.

If we can even go so far as to agree: "Consciousness is a property of a being that is aware of its own existence, and is able to contemplate its own existence as an entity distinct from its environment," this does not make it any easier to explain how any computational model will arrive at such a property.

One does not have to be a theist to acknowledge this. Some of the best minds in consciousness research are atheists.


> Consciousness is a property of a being that is aware of its own existence, and is able to contemplate its own existence as an entity distinct from its environment

Unfortunately this definition doesn't help weed out misconceptions such as philosophical zombies. All you have to do is say that a system may claim it's conscious without really being "aware" of its own existence. This definition doesn't help because it doesn't say what being aware of its own existence means, making it arguably circular.


Yes, but that just collapses into solipsism, and would be true of all possible definitions. At some point we must agree that there are conscious beings other than ourselves, which necessarily means that a definition such as I proposed is rational. The possibility of philosophical zombies does not imply that every observed entity is such. If we take it as an axiom that there are other conscious entities than ourselves, then one is still left needing such a definition.


That's not what I said/meant though. I'm not disagreeing that there are conscious beings other than ourselves. I'm just saying that it's very hard to craft a definition that is not circular, or at least not ambiguous enough for people to imagine the existence of non-conscious beings that behave as if they were conscious (i.e. the p-zombies).

Quite contrary to solipsism, I'd posit that what matters is behaviour. If a cognitive process is able to communicate with us and convincingly recount its reflections about itself, for all intents and purposes that system is conscious. It doesn't matter if it differs from our consciousness in significant ways.

Let's consider a machine that is able to reflect on itself quite effectively, but is also able to turn that module on and off "at will". This is quite different from our experience of consciousness, in that we're not able to turn off our own consciousness, to the extent that we believe we're "there" only while we're conscious of it, and thus we end up equating our existence with those surface experiences. But we're much more than that. We're made of a thousand brains, and not all processing reaches what we call our consciousness.

A system that exhibits a different mixture of those mechanisms shouldn't be ruled out as not meeting the arbitrarily anthropomorphic bar of consciousness.


We don't know at this point the inner workings of the human mind, conscious or unconscious, so there is much room for speculation.

One key assumption made by people who talk about qualia and p-zombies and Chinese rooms is that it is possible, in principle, to behave as if you are conscious without really being conscious. The fact that a p-zombie is conceivable does not entail that one can exist in the world.

It is entirely possible, I believe even likely, that as we understand the evolution of consciousness we will discover it is in fact a necessary property of an agent with certain abilities. The reason I believe this is a simplistic evolutionary argument: consciousness would be unlikely to have evolved if it had not been beneficial or even necessary for the beings which possess it.

The tendency to say that people are in fact not conscious beings, and that consciousness itself is an illusion, seems to me to come mainly from people who reject free will and point out that our experience of consciously choosing our next action is illusory: our unconscious mind is certainly 'in charge' of many processes, and our consciousness is often only an observer of those processes. For a basic example, I can notice how my heart is beating, and I can sometimes feel like my heart is beating faster because I am scared, but this is just an interpretation, which may be wrong, as anyone who has suffered a panic attack can tell you. More interestingly, when observing people with altered states of consciousness, such as Alzheimer's disease, it is often possible to observe them emitting wildly wrong theories about their own actions, such as claiming that they are dressed up because a relative came to visit, when in fact they were dressed up because they were preparing for an appointment. These examples show, at the very least, that our consciousness is to some extent a mechanism that comes up with theories about our own actions without being directly the cause of those actions, which contradicts our experience of consciousness.

The fact that we can't yet explain these processes and how they come about from computation is not surprising to me, given the youth of the field of computer science and the complexity of the human brain and mind.


I don’t disagree with any of this. There is clearly a deep connection between the brain and the conscious mind. One’s consciousness is in some way limited by one’s perception, and one’s perception is directly a product of one’s neural system. The question is, where do the boundaries lie between the self, and the perception in which the self participates. Obviously, I find this entire area fascinating.


I think the OP meant that no fancy physics is required for neurons to work, and consciousness is in some not yet known way the result of our brain wiring.

Just like you do not need quantum physics to explain how muscles work.


Not saying I believe it, but I think it is conceivable that the neural networks in our brain are there for certain things like motor-function learning, basically like a fancy control system, while the "mysterious quantum stuff" is there for "consciousness". The fact is we can't really define consciousness, so we don't know what it takes to make it...


The view stated in your last sentence is a commonly-held one, but if we needed complete definitions of something before we could have knowledge of it, I doubt we would ever have come up with the idea of a gravitational field or a wave function. The solid definition of consciousness will follow from our future understanding of how minds work, not be a prerequisite for it.


> I doubt we would ever have come up with the idea of a gravitational field or a wave function.

But that is the opposite of the case here. We came up with those ideas after we had an understanding of the system. With consciousness, people casually use the term all the time and think they have a vague idea of what it is, but we have little understanding of the system. It may be that it simply does not exist, for example.


You have a point, they are not exact analogies, but, as you say, we came up with those ideas after we had an understanding of the system. If we can discover things without having a prior definition of them, then having a vague prior definition should be no obstacle. Even if it leads to some initial confusion, clearing that up through the gathering of evidence should be no harder than coming up with a definition for something we previously did not suspect through the gathering of evidence.

This is so even if it turns out that the phenomenon does not exist, such as in the case of the mythical causal effects of the four humors.


The "Hard Problem" is much more specific - and speculative - than the observation that current theories have made little or no progress (whether they have made any progress remains to be seen, and any claim that they have not is just opinion.) It is the claim that the scientific method cannot, in principle, explain the "what it is like" aspect of experience - that there is some sort of unbridgeable explanatory gap.


Would you mind sharing it?


Do tell.


Technology is pretty solid (no comparison to FSD), an order of magnitude better than previous iterations of satellite internet. The real solution is still getting our collective heads out of our asses and running fiber everywhere, though.


The only fusion project with a chance of producing excess heat at Q>2 within a decade is MIT's SPARC, which uses proven plasma physics and scales the size/cost down dramatically with high-field HTS magnets (think ITER, but sooner and >10x cheaper). Why this definite solution to climate apocalypse is being developed within the framework of an MIT startup accelerator instead of a Manhattan Project, while most of the publicity goes to unproven designs that are orders of magnitude from being anywhere close to Q>1, is beyond me.

MIT SPARC overview presentation/recent progress: https://www.youtube.com/watch?v=h8uYNhevRtk

Journal of Plasma Physics issue with several papers about it: https://www.cambridge.org/core/journals/journal-of-plasma-ph...


Definite solution? It's a definite non-solution, even if the plasma physics is more nailed down. The ARC reactor (fully scaled up SPARC with tritium breeding blanket) would have a power density 40x worse than a PWR primary reactor vessel, and supplying the world's primary energy demand with them would require 100x more beryllium than the USGS estimated resource (not reserve) of that element.


Not sure why lower power density might be a show-stopper here. Beryllium angle is interesting, haven't thought about that


The cost of the reactor will be proportional to its size, so the cost per unit of power will be inversely proportional to the power density. Lawrence Lidsky (who was also at MIT) famously pointed out back in the 1980s (as did Pfirsch and Schmitter in Germany, with a similar argument) that DT fusion reactors will inherently have terrible power density compared to fission reactors, and that this will render them noncompetitive. Despite putative rebuttals at the time, nothing we've seen since contradicts their devastating argument.

http://orcutt.net/weblog/wp-content/uploads/2015/08/The-Trou...

https://ui.adsabs.harvard.edu/abs/1987oepn.book.....P/abstra...

Note that Helion wants to go with D-3He, and use direct conversion for at least some of the energy recovery. This might be the only hope for making fusion compete. But of course you need 3He; making it with DD fusion requires even more aggressive plasma physics.

ARC at least isn't quite as absurd as a tokamak the size of ITER, which has a gross fusion power density another order of magnitude lower.
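The scaling logic above is simple enough to write down. A sketch, using the 40x/400x density deficits quoted in this thread; the absolute PWR power density here is only a placeholder, not a real design figure:

```python
# Lidsky-style scaling argument: if reactor cost is roughly proportional
# to reactor volume, then at fixed output power,
#   cost/power ~ volume/power = 1 / power_density.
# The 40x and 400x deficits are the ratios quoted in this discussion;
# PWR_DENSITY is just an illustrative reference value.

PWR_DENSITY = 20.0                  # MW/m^3, placeholder for a PWR vessel
arc_density = PWR_DENSITY / 40.0    # ARC: ~40x worse, per the thread
iter_density = PWR_DENSITY / 400.0  # ITER: ~400x worse, per the thread

def relative_reactor_cost(density, reference=PWR_DENSITY):
    """Reactor cost per MW relative to the reference, if cost ~ volume."""
    return reference / density

print(relative_reactor_cost(arc_density))   # ~40x the reference reactor cost
print(relative_reactor_cost(iter_density))  # ~400x the reference reactor cost
```

The point being: the power-density deficit translates one-for-one into a reactor-cost penalty under this (admittedly crude) cost-scales-with-volume assumption.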


From Lidsky:

>Fusion will almost certainly have a lower power density than fission and therefore will require a larger plant to produce the same output. Suppose a fusion plant had to be ten times as big, and therefore likely ten times as costly, as a present-day fission plant to produce the same amount of power.

Fission is currently not cost-competitive due to the expense of ensuring that fission reactors do not pose an unacceptable risk of radioactive contamination in their vicinity. However, fusion is not subject to this constraint, or anyway suffers from it much less. There are no long-lived radioactive byproducts, and judicious selection of the construction materials (already implemented) can ensure that neutron activation of the walls is not a problem either. Furthermore, the inherently unfavorable nature of fusion reactions mean that criticality accidents ('meltdowns'; a la Chernobyl) are not possible.

From Pfirsch and Schmitter:

>It is shown that the claims made therein for the economic prospects of pure fusion with tokamaks, when discussed on the basis of the present-day technology, do not stand up to critical examination.

The analysis in the fulltext relies on a variety of plasma parameters estimated based on technology available in 1987. I cannot immediately determine if it generalizes to designs using HTS, but the comments on pp 1473-4 about the achievable B field strengths and corresponding betas suggests that they do not. Cf. this paragraph:

>Another possibility is to use higher magnetic fields: 6 T instead of 5 T would increase fw to values between 1.3 and 2.0 MW/m2, which are still very low. The latter comes close to the value of 3 MW/m2 obtained in Sec. IV.A.l from thermal wall load constraints. Higher fields would, of course, again increase the cost.

Overall I don't think that these links provide nearly as strong an argument as you suggest they do.


Fission plants are expensive for a couple of reasons. One is that they need additional layers of heat exchangers. Another is that their parts must be very reliable, to reduce the probability of serious accidents.

Fusion reactors will also require layers of heat exchangers, to isolate the tritium. They will also require very reliable parts: not because of public safety, but because fusion reactors will have so many parts in the hot area where hands-on maintenance is impossible. And this reliability will be expensive, even though the requirement for it is more to avoid a financial meltdown rather than a physical one.

> The analysis in the fulltext relies on a variety of plasma parameters estimated based on technology available in 1987.

The point of these arguments is that beyond a certain power, the plasma parameters become irrelevant. The limit is imposed by what the first wall can withstand, not what the plasma can put out.

If you look at areal power densities of fusion reactor concepts, the older studies had HIGHER areal power densities. But those higher power densities were found to be unrealistic.

Lidsky concluded DT fusion reactors would be an order of magnitude worse (in volumetric power density) compared to fission reactors. In this, he was being too generous: ARC is 40x worse than a PWR; ITER is 400x worse (and DEMO almost as bad).

The arguments there were farseeing, and experience since then has buttressed them, not contradicted them.


>Fusion reactors will also require layers of heat exchangers, to isolate the tritium. They will also require very reliable parts: not because of public safety, but because fusion reactors will have so many parts in the hot area where hands-on maintenance is impossible. And this reliability will be expensive, even though the requirement for it is more to avoid a financial meltdown rather than a physical one.

All of this applies to fission. Radiation equipment is expensive, period. We pay four figures for a block of plastic. A very, very accurate piece of plastic. The components in a fission reactor are not easy to replace either; the cost of a fusion meltdown is the reactor, while the cost of a fission meltdown is the reactor + up to several square miles of the area around it, the latter being so large that we typically ignore the very expensive reactor cost!

But more simply, you're underestimating concrete. Fission facilities are critically dependent on the stuff, wall after wall, being the only material that can be assembled thick enough to guarantee the safety of radiation workers who sit in the plant all day. Lowering the intrinsic radiation burden reduces the use of concrete, which is one of the most expensive parts of nuclear plant construction:

https://www.forbes.com/sites/jeffmcmahon/2018/10/02/4-ways-t...

While some of this applies to fusion reactors, it doesn't seem appropriate to compare only the power-generating components of fusion vs. fission reactors while ignoring the safety components when the primary advantage of fusion is safety. Regardless, I've made a note to read more about it.

>Lidsky concluded DT fusion reactors would be an order of magnitude worse (in volumetric power density) compared to fission reactors. In this, he was being too generous: ARC is 40x worse than a PWR; ITER is 400x worse (and DEMO almost as bad).

If the real power densities are available, arguments about the theoretical power density are irrelevant. I can probably build a warehouse ten times the size of a nuclear reactor for a tenth the cost of said reactor. ARC's true power density -- or that of any other reactor -- must obviously be factored into any cost projections. The power density of a particular design is not usually something you need to read a paper about!


A fission plant can be extremely cheap in certain environments. For example, in the cloud tops of Venus, Saturn, Uranus, Neptune, Ganymede, Titan, or Triton, a fission reactor is as simple as a big fabric tube suspended from a balloon, with a naked atomic pile hanging near the bottom, and a wind turbine at the top. Radiation is absorbed by the air inside the (sufficiently broad) tube, which rises through the turbine at the top. You could dispense with the balloon if you constrict the exit aperture just right.


Oops, not Ganymede or Triton. Not enough atmosphere.

But the planets beyond Jupiter have surprisingly gentle gravity.


The assumption that "a fusion plant had to be ten times as big and therefore likely ten times as costly as a present-day fission plant to produce the same amount of power" is unreasonable.

Fusion has a lower power density in the reactor itself than fission, but in absolute terms fission reactors are extremely tiny. Nuclear power plants are big because you have a massive containment building around the reactor, plus infrastructure both for handling radioactive materials and for generating power. Fusion plants don't need the giant containment building, and the various additional facilities would be nearly identical for the same level of power production.

Further, costs are not a simple function of size - things much bigger than fission reactors can be built for much cheaper; the problem is that the combination of safety regulations, delays due to public pushback, a lack of standardization, and the loss of a skilled workforce have skyrocketed the price of fission reactors far beyond what the simple engineering considerations would predict. Fusion does not carry fission's stigma, so it should suffer less from excessive regulations and NIMBYism, and engineers can take lessons learned from the history of fission reactors to design fusion reactors that are easily replicated.


If a fusion plant is just like a fission plant, but replaces the fission reactor with a fusion reactor, then it is entirely reasonable to compare the cost of the reactors.

That a fission reactor itself is a small part of the cost of a fission plant doesn't mean the same must be true of a fusion plant. And indeed, if you look at the cost of conceptual DT fusion power plants the reactor is a significant part of the total cost of the plant.

You are right that cost is not JUST a function of size. It's also a function of how exotic the materials are and how intricate the device is. By those metrics, fusion will do even worse. A fission reactor is a rather simple thing, in comparison.

Fusion's costs will be further increased by reliability concerns. The part of a nuclear plant that's too radioactive for hands-on maintenance must be extremely reliable. In a fission power plant, this part is rather small and simple. Multiple fuel rods in a fission plant can leak without necessarily shutting down the plant; a single leak of coolant into the vacuum vessel of a fusion plant will likely prevent it from operating.

Fusion power plants will almost certainly need containment buildings. The cryogens of ITER, for example, would (if fully vaporized) present a larger pressure x volume load than the steam from a fission reactor meltdown. Containing this gas is not cheap. In any case, tritium must be kept from leaking, which will imply expensive hermetically sealed buildings and seals (tritium will permeate through polymer seals.) Tritium will be everywhere inside the fusion reactor building. The tritium that cycles through a 1 GW(e) DT fusion reactor in 1 year is enough to contaminate two months of the flow of the Mississippi River above the legal limit for drinking water. Even small levels of leakage will be extremely vexing.

For ARC specifically, the magnets are shielded by titanium hydride. This material will fully decompose to titanium and hydrogen at the temperature of the molten salt, so it must be assumed that in a serious accident it will all decompose.


It's not unreasonable to compare them, but that comparison must be made in context: we're saying that something that makes up a very tiny part of the cost will be more expensive, while something that makes up a huge part of the cost will be dramatically less expensive.

Instead of a $100 million reactor, you're looking at $1 billion in reactor spending; but instead of the $4 billion plant that reactor goes into, you're looking at a $2 billion plant.

A fusion plant would be comparable to a very expensive fission reactor in a world where people weren't afraid of fission plants, but in that world a fission plant would be dirt cheap. In the real world, fission is way more expensive than the engineering challenges would imply. It's not the materials or the containment that is expensive, it's having all of your assets sit idle for years on end while yet another environmental impact study is conducted.

Also, some of your assumptions are unreasonable. For example, the reason you need a containment building around a fission reactor is that you can't just vent to atmosphere, because the water contains large amounts of tritium. It's perfectly fine to just vent helium to atmosphere in an emergency, as it's not radioactive. While a fusion reactor would use a lot of tritium over time, at any given moment the amount present is rather minuscule; it is being actively generated on site, and if anything the major technical issue is not having enough. A reactor the size of ITER would have approximately 0.6 g of tritium in the reactor at any given time; losing all of that to atmosphere would be equivalent to approximately 2% of the annual tritium release from the La Hague nuclear reprocessing plant. Decomposition of titanium hydride at the temperature of the molten salt is slow; while obviously undesirable, it poses no real danger. The real issue is quenching, which is one of the few genuine safety concerns of a fusion reactor.


Wait, the idea that the cost of a power plant structure is proportional to its size seems very remarkable to me even within a single technology like light water reactors or gas turbines. My understanding is that generally there's some most efficient size to a reactor or turbine or such because of non-linearities and if you want to increase the power generation of a plant you replicate these most efficiently sized structures rather than scaling them up.

The idea that you could apply the same linear scale to both fusion and fission reactors seems frankly incredible on the face of it. Do you have any details on why this should be so? I don't seem to have access to the second source you listed and the first just made this assertion without explanation. All this isn't to say that I'm sure fusion reactors would have to be less expensive per cubic meter than fission reactors, the opposite seems like it could be a possibility. It's just the idea that we should expect the price to be the same in both cases that I'm finding hard to swallow.


I think high-field HTS tokamaks' projected costs are in a reasonable range despite the raw power density being lower. There could be additional savings related to radioactive-waste processing, since fusion should generate less, and to containment/security for similar reasons.

Helion and other new fusion projects are, surely, interesting; however, tokamaks are much more ready and, with high-field magnets, likely economical.


I don't believe cost projections for fusion. If you look at them, they're filled with assumptions that aren't supported by much of anything(*), but magically make the technology just competitive. As the competition has improved, the assumptions have gotten more desperate. They're less "this is what the technology will cost" and more "this is the least ridiculous set of assumptions we could find that would let our technology not be dead."

If you apply the same level of assumptions to, say, light water fission reactors, I'm sure you'd get cost estimates vastly lower than what they actually cost in practice.

(*) For example, one paper assumes the efficiency of converting thermal energy to power in the fusion reactor is 60%, a level that combined-cycle power plants achieve only by expanding combustion gas that starts at a temperature that would soften or even melt the turbine blades.
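For calibration, a back-of-envelope sketch. The 900 K coolant temperature and the "real cycles reach roughly two-thirds of Carnot" rule of thumb are my own illustrative assumptions, not figures from any specific design or paper:

```python
# Why 60% thermal-to-electric is a heroic assumption: compute the Carnot
# limit for a plausible blanket-coolant temperature, then discount it by
# a rough rule of thumb for real cycles. All numbers are illustrative.

def carnot_efficiency(t_hot_k, t_cold_k):
    """Ideal heat-engine efficiency between two temperatures (kelvin)."""
    return 1.0 - t_cold_k / t_hot_k

ideal = carnot_efficiency(t_hot_k=900.0, t_cold_k=300.0)  # ~0.67 Carnot limit
realistic = (2.0 / 3.0) * ideal                           # ~0.44 plausible cycle
print(round(ideal, 2), round(realistic, 2))
```

Under these (hedged) assumptions, a realistic cycle lands in the mid-40s percent, well short of the 60% the paper assumes, and that gap flows straight into the cost estimate.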


I've seen some claims that even a magically costless heat-producing device connected to a steam turbine won't be competitive with solar PV, so I'd guess fusion would also fail that test if it's basically being used to generate heat. (Note: they're claiming they have some new tech that avoids this problem, but we don't know whether it actually does or not.)


The real question is whether it'd be competitive with solar PV on cloudy winter days with enough battery to get through windless nights. If so, there's probably a place on the grid for it.

(In any case, Helion is planning direct energy conversion, without a heat cycle.)


At that point you're competing against energy storage, not renewables. No way you can afford to run any such power system just a few days a year.

It could all pencil out if you had some massive incentive for production on those specific days and some legally mandated quota for other times, but at that point you'd probably be beaten by a few factories saying they'll shut down for a few days if you gave them the same cash and/or renewables producers using the money to add storage.

Practically, it needs to run full out constantly and compete with an average of other power producer prices to be viable.


Energy storage is not all that cheap.

There's a reason I mentioned "cloudy winter days." The cheapest way to address those is probably to overbuild PV. That overcapacity probably won't be used on bright summer days, raising the capital cost of PV across the board.

I don't think you're going to find many factories that can be economically shut down for entire seasons. None of this is an issue right now because we use natural gas for backup, but we need to stop doing that.


The cheapest way to generate power on cloudy winter days is wind power, not overbuilt solar. Though, yes, overbuilding wind and solar is generally more cost effective than most other alternatives and the overbuilt wind and solar will both contribute power even when not working at their seasonal peak and provide abundant cheap power for storage when overproducing near their peaks.

I had assumed we were both taking that as the baseline alternative, since every country in the world is basically building that out right now, hence my suggestion that demand response would be a better choice than any plausible nuclear option to cover any gaps in that provision due to unseasonal weather which is both less windy and more cloudy than predicted.

But we appear to be starting from radically different assumptions about what power grids will look like (and already look like today)


Yes, we have a lot of wind power, but that is also backed by fossil fuels.


Hydrogen burned in combustion turbines would likely be cheaper. For this use case minimizing capital cost is all important; the cost of the hydrogen itself much less so.


Could they replace the beryllium with lead? It multiplies neutrons the same way; last I saw, that's what General Fusion was planning to use.


Molten metal flowing past metal structures in their high magnetic field would be a non-starter, I think, due to induced currents and JxB forces.


Does the metal have to flow? Let it sit there and run cooling pipes through it. Every now and then turn off the fusion when you need to fire up the pumps and swap in new lead/lithium.

(Also, I'm dumb but beryllium is also a metal, how does it differ from lead in this respect?)


The ARC design immerses the vacuum vessel in a bath of molten salt (which is where the Be is, in lithium beryllium fluoride (FLiBe) salt). That salt is where the neutrons deposit their heat. Replacing the Be with lead means the heat is getting deposited in that lead (or, more likely, molten lead-lithium alloy).

Even though ARC uses salt, it would also have to worry about voltages induced by flow across magnetic field lines -- not because of currents, but because if the voltage becomes high enough it can induce electrochemical reactions, like production of elemental fluorine (or corrosion of metal where the fluorine would have been evolved.) I think they keep the velocity x coolant channel diameter low enough to avoid that, but it's still a consideration they have to address.
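For intuition on why velocity times channel size is the quantity to control, the induced voltage scales as the motional EMF, V ≈ v·B·d. The numbers below are illustrative assumptions, not values from the ARC design:

```python
# Back-of-the-envelope motional EMF for a conducting coolant moving
# across magnetic field lines: V ~ v * B * d, where v is flow speed,
# B is the field strength, and d is the channel width. All numbers
# below are illustrative assumptions, not ARC design values.

def motional_emf(v_m_per_s: float, b_tesla: float, d_m: float) -> float:
    """Order-of-magnitude voltage induced across a coolant channel."""
    return v_m_per_s * b_tesla * d_m

# Slow flow in a narrow channel: 1 m/s, 10 T field, 5 cm width.
v_slow = motional_emf(1.0, 10.0, 0.05)   # ~0.5 V

# Faster flow in a wider channel: 5 m/s, 10 T, 20 cm width.
v_fast = motional_emf(5.0, 10.0, 0.20)   # ~10 V

print(f"slow channel: {v_slow:.2f} V, fast channel: {v_fast:.1f} V")
# Electrochemical decomposition thresholds for molten salts are on the
# order of a few volts, so keeping v * d small keeps the induced
# voltage safely below them.
```

This is why the same concern is much worse for molten lead: a good electrical conductor also carries large induced currents, and hence large J×B forces, on top of any electrochemistry.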


SPARC is great, but I'm under the impression that a gen 1 DT MCF reactor will almost certainly need to be a stellarator to work around the engineering challenges and pulsed nature of tokamaks. Optimization, HTS magnets, and clever coil winding make stellarators feasible. Tokamaks are easier, though, so SPARC should certainly be built to make its splash.

The real problem is that DoE's Office of Science is reducing funding for Fusion Energy Science in relative terms. It's barely enough to meet the US' ITER contribution. The very few existing projects are running on fumes. No one in the US is building a new machine, and hasn't been for over a decade.

https://www.energy.gov/sites/default/files/2021-05/doe-fy202...


People are making machines, it's just with private funding. This includes SPARC, which MIT spun off into Commonwealth Energy. As of a year ago they'd raised over $200M.

https://techcrunch.com/2020/05/26/with-84-million-in-new-cas...


The context is US publicly funded projects.

The reason I used this context is because fusion is not profitable, won't be for at least 30 years in the optimistic estimates, and may very well never be profitable. It is exactly the kind of thing that should be public works.


Maybe it should, but since the government is not doing it, we're lucky that investors disagree with your assessment. CE, Tokamak Energy, Tri Alpha, General Fusion, and Helion have all gotten substantial private funding. Tri Alpha was over $700M last I checked. One of Helion's investors is YCombinator.


It's a matter of perspective and wager. I wager that the public image cost of failed startups leads to a reduced likelihood that fusion will be properly funded in the next 100 years. Fusion already has a public image deficit to overcome.

One could be optimistic and say the few potential successful startups such as SPARC or potentially successful moonshots such as Helion will lead to more private investors and/or public funding, but it's a community betting its public image when it's already down. I don't have a safer alternative to suggest.


Not just a public-image deficit. The $billions already poured down that rathole would take decades for the first fusion plant to pay back, if it had to, before ever achieving the break-even that actually counts. Especially so, when running it only at night after cloudy days when the much cheaper wind, solar, and storage flag.


If there was a definite solution it would be funded unless all the scientists involved lack communication skills to raise capital. VCs spend millions on apps that say ‘yo’…


The problem with surveillance is actually not the surveillance itself, but inequality in its application. This has been true for a long time; before the Internet age, people with power just built higher fences.

And so, there are two distinct solutions: (a) make privacy cheaper and more accessible, or (b) apply radical openness to everybody, both haves and have-nots. The second solution has been somewhat overlooked, and it might have useful side effects, like reducing the amount of resources spent on pointless competition and zero-sum games.


They'll decarbonize the poor


Just like we saw with the lockdown policies.

Heavily tax mass transit but allow private jets, etc.


Mass transit isn't taxed, it's heavily subsidized by taxes.


Also, the parent is probably talking about the EU's intent to apply a carbon tax to mass-market airlines but exempt private jets.


The tax should be simply applied to the carbon content of the fuel.

Use the revenue to subsidize green projects.


Often but not always; Hong Kong is one of the exceptions I think (at least the MTR)


How much of this can be attributed to earlier diagnosis (death rate per year is obviously lower for earlier-stage cancers) and how much to better treatments being available? Couldn't find an answer in the article.


Wait until spraying the stratosphere doesn't work as intended...


Probably. When shit starts really hitting the fan (i.e. bottom lines are being directly affected) it will be knee-jerk authorized. It's the only rapidly effective technologically feasible solution we have. The "theory" is that it isn't too environmentally damaging either, so there isn't much leverage to push back against it.

It's basically like benzo addiction for the entire Earth. You take it initially to band aid your ills. You get addicted to the fix it provides. The fix is so good that you accept the bad effects. You become dependent on it to the point where quitting becomes deadly.


What doesn't seem to be addressed is why COVID-19 germinal centre response might produce lasting immunity while, say, flu vaccination (which also produces germinal centre responses) doesn't


I believe influenza viruses mutate at a much faster pace, hence defeating the immunity achieved from a previous infection, and from vaccines. Coronaviruses seem to mutate at a slower pace, in comparison.


https://www.youtube.com/watch?v=2hf9yN-oBV4 A guy made a similar kind of spider silk using genetically modified yeast basically in his kitchen


With what kind of reliability can LIDAR data be predicted with vision nowadays?

