jfries's comments | Hacker News

If you have a guarantee for the worst case of generating a pixel (which you indicate by saying that you never drop frames), couldn't you get rid of the framebuffer? Schedule pixel generation so that each pixel completes just in time for when it's needed for output.

This would free up RAM for other things (and be a fun exercise to get right).


That's a good observation! This technique is also known as "racing the beam". The problem is a mismatch of refresh rates: the VGA display operates at 60 Hz, but the ray tracer is not capable of producing that many pixels; it can only manage 30 fps. So we need a frame buffer to store the result.
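
For intuition, here's a toy model of that mismatch (my own sketch in Python, nothing from the actual demo): the display keeps scanning out of the front buffer at 60 Hz, so each ray-traced frame is simply shown twice while the next one is computed into the back buffer.

    # Toy model: scanout at 60 Hz, rendering finishes only every other refresh,
    # so two buffers are needed and each frame is displayed twice.
    def render(frame_no):
        # stand-in for the ray tracer (takes about two refresh periods)
        return [frame_no] * 320 * 200

    front, back = render(0), None
    for tick in range(6):                    # six 60 Hz refreshes
        if tick % 2 == 0 and tick > 0:       # a new frame is ready every 2nd tick
            back = render(tick // 2)
            front, back = back, front        # swap: the new frame becomes visible
        print(f"refresh {tick}: showing frame {front[0]}")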


Yes, that would give some peace of mind, right? Unfortunately for us, that's not the case. The only platform-specific code is the 8 instructions at the top of "Code of framework" on http://www.sizecoding.org/wiki/Memories

First to set the video mode, and then to set up a timer used to advance time.


Not sure if this app supports it or not, but it would be very useful to have a 5-second warning before being thrown into a video call. I could see all kinds of embarrassing moments happening otherwise.


It would be better to not have video running by default. People can drop in to talk over audio and request to switch to video if necessary.

I honestly prefer the push-to-talk approach for audio too. It is always better to have explicit user consent for every audio and/or video session. Accidents happen.


Exactly, I'll try to make it more correct.


Since propagation of information is limited to one cell per step, it's possible to speed this up by splitting the image into smaller subimages and solving those first, and then solving the borders between such subimages.

This technique can be used to solve multiple steps as well; the only change is that the border between subimages becomes thicker.
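
For concreteness, here is a minimal sketch of the kind of single SAT instance being discussed (finding any predecessor of a target pattern), which the splitting idea would then apply per subimage. It assumes the python-sat package and its Glucose3 solver; the grid size, variable numbering and the blinker target are my own example, not anything from the article.

    # Encode "this W x H grid steps to the target" by forbidding, for every cell,
    # each 3x3 neighbourhood assignment that would produce the wrong result.
    from itertools import product
    from pysat.solvers import Glucose3

    W, H = 5, 5                                  # predecessor bounding box (dead outside)
    target = {(2, 1), (2, 2), (2, 3)}            # target: a vertical blinker

    def var(x, y):
        return y * W + x + 1                     # 1-based SAT variable for cell (x, y)

    def life(center, neighbours):
        n = sum(neighbours)
        return n == 3 or (center == 1 and n == 2)

    offsets = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
    clauses = []
    for x, y in product(range(W), range(H)):
        want = (x, y) in target
        cells = [(x, y)] + [(x + dx, y + dy) for dx, dy in offsets]
        for bits in product((0, 1), repeat=9):   # all 512 neighbourhood assignments
            if life(bits[0], bits[1:]) == want:
                continue                         # consistent with target, allowed
            clause, possible = [], True
            for (cx, cy), b in zip(cells, bits):
                if 0 <= cx < W and 0 <= cy < H:
                    clause.append(-var(cx, cy) if b else var(cx, cy))
                elif b:                          # out-of-grid cells are forced dead,
                    possible = False             # so this assignment can't occur anyway
                    break
            if possible:
                clauses.append(clause)           # block this bad assignment

    with Glucose3(bootstrap_with=clauses) as solver:
        if solver.solve():
            live = {l for l in solver.get_model() if l > 0}
            for y in range(H):
                print("".join("o" if var(x, y) in live else "." for x in range(W)))
        else:
            print("no predecessor fits in this bounding box")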


I wouldn't expect this to produce a speedup, because the SAT solver will already be doing it!


Yeah, but for a sufficiently big image I think splitting it up will help if we do some sort of memoization. But I guess SAT solvers may be doing that too internally.


That's a good point. Life has translation symmetry that the SAT solver can't see. So if copies of the same region appear in your pattern more than once then you could plausibly save the SAT solver time by telling it to use the same predecessor for all of them.


You can see this in the flower image. The target image has a lot of symmetries (reflections, rotations) that also exist in the GoL ruleset. So there must be a parent that also has these symmetries. Yet the found solution has none of these symmetries.


Consider the 3×3 block of live cells:

    .....
    .ooo.
    .ooo.
    .ooo.
    .....
It has no parent with the same symmetry: under the full symmetry of the square, the centre cell's eight neighbours come in two orbits of four, so it would have 0, 4 or 8 live neighbours, but a cell can't survive or be born with 0, 4 or 8 neighbours.

But it does have a parent with less symmetry:

    .....
    .....
    ooooo
    .....
    .....
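
(For anyone who wants to check that step, here's a throwaway snippet; plain Python, nothing to do with the article's solver.)

    # One Game of Life step over a set of live cells, confirming the
    # 1x5 row above really does evolve into the 3x3 block.
    from collections import Counter

    def step(live):
        counts = Counter((x + dx, y + dy)
                         for x, y in live
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

    row = {(x, 0) for x in range(5)}     # ooooo
    print(sorted(step(row)))             # nine cells: x in 1..3, y in -1..1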


True, and this is a counterexample to my claim that the parent would retain the symmetry. I think it can be refined though:

The parent still has horizontal/vertical mirror symmetries, and the solution rotated is also a solution. So there is a set of (in this case two) parents that is invariant under the same symmetries.

In general, a parent should still be a parent when any of the target's symmetries are applied. (But it wouldn't necessarily be the same parent.)

This can be captured in a SAT encoding by insisting that any symmetry of a solution is also a solution, which adds a lot of new constraints and no new variables, so it should help the search.


Could progressive mode be used to serve thumbnails by just truncating the image at a suitable point, or does the spec (and so, decoders) expect the whole image to eventually arrive?


You don't need the whole image. If you stop after the DC passes (~12-15% of the file) you get exactly 1/8th the resolution of the image, although scaled without gamma correction.

With HTTP/2 you can micromanage delivery of JPEG scans to deliver placeholders quickly, and delay delivery of unnecessary levels of detail:

https://blog.cloudflare.com/parallel-streaming-of-progressiv...


Since progressive JPEGs are displayed while downloading and the connection could just be closed at any moment anyway ... I don't think that'd be a problem. Whether that's more efficient than an extra thumbnail is probably the more interesting question.


If you have time and space to pre-generate thumbnails, it's probably not a significant win, but I think it could work well for displaying local thumbnails of JPEGs, like from a camera.

If you're browsing a directory of hundreds of large (e.g. 10+ MB) JPEG photographs, generating the thumbnails by fully decompressing all of them would take a while. "Progressive thumbnails" that only decompress the first ~100 KB would be much faster.
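
A rough sketch of what I mean, assuming Pillow; the 100 KB cut-off is arbitrary, and how gracefully a truncated progressive stream decodes depends on the decoder, so treat this as an experiment rather than a recipe:

    # Build a thumbnail from only the first ~100 KB of a progressive JPEG.
    import io
    from PIL import Image, ImageFile

    ImageFile.LOAD_TRUNCATED_IMAGES = True      # allow decoding a cut-off stream

    def quick_thumbnail(path, prefix_bytes=100_000, size=(256, 256)):
        with open(path, "rb") as f:
            head = f.read(prefix_bytes)          # only the early, coarse scans
        img = Image.open(io.BytesIO(head))
        img.thumbnail(size)                      # downscale whatever detail arrived
        return img

    # quick_thumbnail("IMG_0001.jpg").save("IMG_0001_thumb.png")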


You can do that even with non-progressive JPEGs, as you can use just the low frequency terms from the discrete cosine transform (the same data that comes first with progressive ordering).

Epeg and libjpeg-turbo can do this.


You would still have to read the entire JPEG in though, wouldn't you?

I'm not an expert on JPEG, but I think that if you want the macro blocks at the bottom of the image, you still need to un-Huffman all the blocks before it to find where the macro blocks start (since AFAIK there isn't a table indicating where each block starts). That means you have to read the entire JPEG from storage, only to throw away the vast majority of it.

Even if there was a way to magically predict where the low frequency values of the image are stored, you'd still have to do tens of thousands of random reads to just get to them. Reading the whole file would be faster.

So if you have 500 photos and you want to go through them and need some thumbnails, for non-progressive image thumbnail generation you have to read 10 MB x 500 images = 5 GB of data, but with a progressive thumbnail you only need the first 100 KB x 500 images = 50 MB of data.


As an aside, if you're just wanting thumbnails, most digital cameras encode small (120x160, ish) thumbnails in the EXIF header that can be quickly extracted by exiftool.
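
For example, something along these lines should pull it out without decoding the image at all (exiftool's -b -ThumbnailImage options are standard; the Python wrapping is just my own sketch):

    # Extract the camera-written EXIF thumbnail, if one is present.
    import subprocess

    def exif_thumbnail(path):
        out = subprocess.run(["exiftool", "-b", "-ThumbnailImage", path],
                             capture_output=True, check=True)
        return out.stdout or None                # JPEG bytes, or None if absent

    # thumb = exif_thumbnail("IMG_0001.jpg")
    # if thumb:
    #     open("IMG_0001_thumb.jpg", "wb").write(thumb)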


I don't know. These libraries claim to make thumbnails that way, and claim it's way faster.


Sounds like it should be possible if you can terminate the transfer of the file early:

> libjpeg has some interesting features as well. Rather than decoding an entire full-resolution JPEG and then scaling it down, for instance (a common use case when generating thumbnails), you may set it up when decoding so that it will simply do the reduction for you while decoding. This takes less time and uses less memory compared with getting the full decompressed version and resampling afterward.
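
Pillow exposes that libjpeg feature through Image.draft(); a minimal sketch (the file name and the 1/8 target are my own example, and 1/8 is the largest DCT scaling factor libjpeg offers):

    # Ask libjpeg (via Pillow) to decode at reduced size instead of decoding
    # full resolution and scaling down afterwards.
    from PIL import Image

    def eighth_size(path):
        img = Image.open(path)
        # draft() picks the closest available DCT scaling factor (1/2, 1/4, 1/8)
        img.draft("RGB", (img.width // 8, img.height // 8))
        img.load()                               # decoding happens here, at ~1/8 size
        return img

    # eighth_size("IMG_0001.jpg").save("IMG_0001_small.png")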


The paragraph you quoted simply means that the decompression library is able to decompress to a smaller raw image, and says nothing about the JPEG file format.


What are the implications of this?


The meds modulating the ACE family of proteins are also commonly available as blood pressure meds, actually. But interestingly, the side effect is... coughing...

But the research behind what is needed to modulate ACE is already there.


Sorry I don't know much about the topic.

0. How can I find out if I have too high (?) ACE2, and what can I do about it? Are there symptoms at least?

1. What does modulating ACE mean?

2. Do you mean everyone should get those blood pressure meds? Or what is the relation now with these meds in terms of action? Does it mean people who need them must get them for sure?

3. Research on what is needed to modulate ACE: can you please give a pointer?



Thanks a lot


Anecdotal, but as someone who takes Losartan daily, it does not make me cough at all...

It did make my prostate feel swollen for a couple days though.


Does modulation mean increasing or decreasing ACE2? I assume it decreases it, and that's good, right?


Modulation in this case means dynamically increasing or decreasing it in order to stay on a curve.


Thanks


This is the mechanism a coronavirus assembly exploits to enter a cell.

The virion uses a peplomer [one of its spikes] to dock with the cell at the ACE2 receptor, then wrenches it apart like cutting a door right out of its frame; the virus then inserts itself into the cell, leaving its envelope and protein accessories fused with the cell membrane.

- if this mechanism can be blocked, it will inhibit viral entry

- if it can be mimicked, a vaccine is possible

Here is a good place to start looking at background information:

https://en.wikipedia.org/wiki/Renin%E2%80%93angiotensin_syst...

This is what is so damaging to the infected cells to begin with: it disrupts an essential signalling system.


Another interesting implication could be for temporizing measures in the inpatient setting. My understanding from a brief skim of older research papers is that the ACE inhibitors and angiotensin receptor blockers that we use today, like lisinopril or losartan, could upregulate the expression of ACE2 receptors. If a patient is admitted with coronavirus, discontinuing blood pressure medications such as lisinopril and losartan could be worthwhile to decrease length of stay, morbidity, and mortality.


This is something of a bush to beat around.

- when the virus docks with the ACE2 receptor there is a loss of ACE2 receptor function, as it is a destructive process

- when the virus inhabits the cell there is probably a cytoplasmic downregulation of ACE2 expression, as the cell is now "claimed" as a place for replication

So the question would be when to administer or withdraw such an ACE2 blocker.

The loss of ACE2 functionality is devastating to the cell, but the presence of a functional ACE2 receptor makes a cell vulnerable to entry.

- and should we be using a blocker per se, or should we use some sort of hypothetical receptor inhibitor that is not displaced by the coronavirus spike protein, as in competitive inhibition?


I wrote a long comment and it got deleted, argh!

Best guess? ACE2 loss is probably bad, having more ACE2 probably doesn’t drive disease severity:

https://twitter.com/__philipn__/status/1229568317167243264?s...

Relative ACE2 reduction correlates with disease severity; viral load growth was the same between groups - check out the study!

Chinese paper discusses this as well:

https://pubmed.ncbi.nlm.nih.gov/32061198-inhibitors-of-ras-m...

They reason that 1) ARBs don't increase ACE2 past the normal human baseline anyway, just back to normal status, and 2) younger people have higher ACE2, and older women more than older men; both groups are more protected, so disease severity is unlikely to come from higher ACE2.

ACE2-Fc is being developed - it will both stop AT1R activation and neutralize the virus:

https://twitter.com/robertlkruse/status/1233986542097567744?...

He is looking for funding if any funders are reading this. Comment here and I can connect.


There would be an upper concentration of ACE2 receptor per unit of cell membrane that elicits increased probability of receptor and viral peplomer interaction. Once that concentration is reached, further increase of the docking probability would not occur.

There is a dynamic here: the virus, in sufficient titre, encounters the receptor, binds, enters the cell and typically disrupts normal translational events in the cytoplasm.

The effect is to exclude further infection of the cell with an increased copy number of virions; thus replication occurs instead of immediate cell apoptosis.


I guess what I think could be possible is:

ARBs upregulate ACE2 but within normal physiological limit (see other comment). My understanding is it is the Ang II activation of AT2R that drives the increase in ACE2 expression. In this case, we would expect children and women (estrogen upregulates AT2R, downregulates AT1R) to have higher ACE2 in the presence of increased Ang II. I think children may have more AT2R. https://twitter.com/__philipn__/status/1233569567512772609?s... but this may be beside the point.

Other coronaviruses that use more common receptors are not as lethal. Even if the virus has a higher probability of cell entry, does that make it more lethal?

Check out this paper:

https://twitter.com/__philipn__/status/1229568317167243264?s...

Proportion of reduction of ACE2 correlates with disease severity.

Check out figure 1

“Differences in clinical outcomes did not correlate with differences in viral replication efficiencies.”

Compare C and D. Viral replication basically the same between young and old mice. But old had severe disease.

So I’m not sure it’s as simple as increased replication (a proxy for cell entry?) that drives severity.

Edit: just re-read your final sentence. Great point. But in the case of ARB usage, while ACE2 may be upregulated and the cell may have another ACE2 cleaved by the virus, if AT1R blockade was occurring then maybe the lung damage would not happen.

Is the cell itself worse off with more than one copy of the virus inside of it? Or would the damage be the increased loss of ACE2?

If damage from increased loss, then I can see the ARB approach helping.

If more than one copy of virus in cell is really bad, then sounds like this could be more complicated. But I do wonder about other viruses which have lots of receptors available, and why this one is so serious in some, but not most, people.


The physiological cascade that occurs due to disruption of a homeostatic feedback system is the most probable cause of direct lethality. The setpoint and the relative concentrations of the elements in these systems are genetically dictated functions.

There is probably some correlation between predisposition to morbidity/mortality and how far one's setpoints deviate from population setpoints.

There are two prongs to this attack. One is the disruption of angiotensin-based signalling and its downstream effects.

The other is viral entry to the cell. The virus must, in a sense, do a dance on the head of a pin: replicate, but not hijack so much of the cell's basic metabolism that apoptosis occurs. A virus that allows its host cell to remain viable wins the natural selection lottery.

The problem is that the virus occupies the expression machinery that creates ACE2 and, in an indiscriminate fashion, many other proteins as well.

So this means the virus has caused damage by entry.

The entry damage causes errant feedback, which causes further physiological perturbation.

A secondary round of disruption occurs when ACE2 expression is inhibited, thus interfering with replenishment of ACE2 that has been damaged, as well as disrupting the regular recycling of ACE2 and, for that matter, the expression of the AT1 and AT2 receptors.

---I am not a founder or funder but simply a concerned professional


Can you please give me a pointer to this research?

Also, it's not clear to me: do blood pressure medications in general upregulate the expression of ACE2, or is it possible that they downregulate it? (Not lisinopril or losartan.)


You can do both, depending on the med, and there are different mechanisms of action.

Also:

any sort of EXPERIMENTATION should be done in a clinical setting with qualified professionals.

This discussion is about scientific research and is not even close to being acceptable for clinical trials.

Blood pressure medication is normally prescribed with a physician monitoring the patient for desirable or undesirable effects.


I suppose that explains why a lot (most?) of the people who died also suffered from high blood pressure.


No, I think that's just the fact that most of the people who died were older with lots of comorbidities, like high blood pressure, diabetes, COPD, etc.


It doesn't explain it, but it is something worth chasing to its end for a possible therapeutic benefit.

It does suggest that comorbidities such as high blood pressure, existing lung disease or other wellness impairment ---

--- could contribute to mortality


Lisinopril and losartan are usually stopped in patients admitted to the hospital for an infection.


Sorry I don't know much about the topic.

How can I find out if I have too high (?) ACE2, and what can I do about it? Are there symptoms at least?



Surprised by the negativity in the comments. This is an extremely impressive presentation of the ability to put together current technologies and get the thing working from start to finish.

A+, would hire.


The reason for the negativity is that this demonstrates the "Design by StackOverflow" mentality where the solution is like swatting a fly with a sledgehammer, with no real domain knowledge. Plus the author didn't even train the neural nets: it's just a LEGO project. I'd hire this person to be a lab intern, but nothing above that. The fact the author couldn't solve it locally and had to invoke the CLOUD is... laughable. This problem has been solved for over two decades on lesser hardware.


TL;DR: reinventing wheels is a good way to learn a lot.

For trying out something in a few hours, of course you don't want to spend hundreds of hours setting it up, by definition. Yes, the result is "just about works, but doesn't scale" - but that's the point of experimenting. Sure, this is a LEGO-style experiment in reinventing the wheel, but exactly for that, an excellent way to start learning about this problem domain: power consumption? Latency? ML basics? Sure. That's hacking at its core - even though the project is rudimentary.


> Sure, this is a LEGO-style experiment in reinventing the wheel

If I'm not mistaken Larry Page used to be praised some time ago for building a printer out of LEGO pieces (that may just be an urban legend, I admit I never verified this information).


That's a tempting theory, but the increased ratio of white vs blue collar jobs isn't enough to explain it. See graph on https://www.businessinsider.com/great-news-weve-become-a-whi...


But even "blue collar" jobs require less labor than before


Would that be legal, or counted as insider trading? Since you're trading on relevant information not available to the public.


I assume if the company publicly denied the accusations it would not be considered insider trading


You can’t steal from your own company, so it’s fine. You have to tell investors that you didn’t do it though.


Interestingly (or perhaps annoyingly) enough, the placement of the 0 was different in different countries. At least in Sweden the first digit was 0, and not 1 as in this implementation. This was reflected in the emergency number, 90000 at the time, which is easy to dial without mistakes even in a stressful situation.

Is there a need for localization?


Does that mean that different countries used incompatible dialing patterns? Since our 1 emitted one tone and 0 emitted 10, but presumably Sweden's 0 emitted one tone and 9 emitted 10.


Rotary dials didn't emit tones, but basically briefly disconnected the line. In most countries it was 1 pulse for the digit 1, 2 pulses for 2, etc., up to 10 pulses for zero.

However Wikipedia notes:

> Exceptions to this are: Sweden (example dial), with one pulse for 0, two pulses for 1, and so on; and New Zealand with ten pulses for 0, nine pulses for 1, etc. Oslo, the capital city of Norway, used the New Zealand system, but the rest of the country did not.

If ever you were in a house which tried to prevent outgoing calls by using a lock on the dial, you could still make a call by tapping out the requisite digits on the hang-up buttons.


This is the type of phone I grew up with: https://collection.motat.org.nz/objects/111551 It seems more logical to me. Those from other countries seem backwards. Pulse count = 10 - digit.
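
The three schemes side by side, as a throwaway snippet (the labels are mine, just summarising the comments above):

    # Loop-disconnect pulses sent per digit under the three dialling schemes.
    def pulses(digit, scheme="most"):
        if scheme == "most":       # 1 -> 1 pulse, ..., 9 -> 9, 0 -> 10
            return digit if digit else 10
        if scheme == "sweden":     # 0 -> 1 pulse, 1 -> 2, ..., 9 -> 10
            return digit + 1
        if scheme == "nz":         # pulse count = 10 - digit (0 -> 10, 9 -> 1)
            return 10 - digit
        raise ValueError(scheme)

    print([pulses(d, "nz") for d in range(10)])   # [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]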


In North America you dial 1 plus number. Elsewhere it's 011. It might be related to that.

