paulsmith's comments | Hacker News

I'm glad you shared this because, in my head, this Dr. Nick quote is so canonical it must be from the original golden 8 seasons, so it's nice to be reminded there are occasionally good things after! ;^)

lol I came here to post this too - perfect


As an OCaml-curious person, is this the community recommendation, to choose the Jane Street stdlib if you're just getting started?


It's probably the path of least resistance if following the Real World OCaml book (https://dev.realworldocaml.org/), which is quite excellent.


Many learning materials will push you that way, but the vast majority of FOSS packages don't use it.

There's nothing inherently wrong with using Jane Street's stdlibs if you miss the goodies they provide, but be aware the API undergoes breaking changes from time to time and they support fewer targets than regular OCaml. I personally stopped using them, and use a few libraries from dbunzli and c-cube instead to fill the gaps.


You can wrap the DOM update for a navigation in document.startViewTransition() and get something basic out of the box:

https://codepen.io/pauladamsmith/pen/VYeJMMb
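
A minimal sketch of the pattern (the navigate/swapContent names are just for illustration; the fallback branch covers browsers without the API):

    // Wrap a DOM update in a view transition, falling back gracefully
    // where the View Transitions API isn't available.
    function navigate(swapContent: () => void): void {
      if (!document.startViewTransition) {
        swapContent();
        return;
      }
      // The browser snapshots the old state, runs the callback to update
      // the DOM, then animates between the old and new snapshots.
      document.startViewTransition(() => swapContent());
    }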


> Unfortunately, they removed Plan mode

If I hit shift-Tab twice I can still get to plan mode


I think they meant the 'Plan with Opus' mode. shift+tab still works for me, and the VS Code extension still lets you plan too, but the UI is _so_ slow with updates.


This is a fact of ZIP Codes that a lot of people stumble on. I've worked on GIS/mapping projects in the past where stakeholders wanted or assumed ZIP Codes to be polygons.

Another complexity that surprises folks is that you can't guarantee a clean one-to-many state-to-ZIP Code relationship: there are several ZIP Codes (I forget offhand how many; I used to have them memorized) that span state boundaries.


Yep, this fact eluded me earlier in the year. I was supposed to map out all ZIP codes in the US and color their boundaries based on certain stats we had. We were surprised to find many areas in the US were empty because they didn't have ZIP codes. I did a quick search and found out ZIP codes are driven by mail routes, and that instantly made sense to me, but the product stakeholders were very surprised to learn it.


One thing I just recalled is that if you maintain a small exceptions lookup table (i.e. the ones that span state boundaries), you can use ZIP Codes as a way to uniquely look up a county name.
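
A minimal sketch of that idea (the names and table contents are hypothetical; you'd populate them from a real ZIP/county dataset):

    // The handful of ZIPs known to cross state lines.
    const multiStateZips = new Set<string>([/* load from a vetted source */]);

    // zipToCounty: a prebuilt ZIP -> county mapping for the ordinary cases.
    function countyForZip(
      zip: string,
      zipToCounty: Map<string, string>,
    ): string | undefined {
      // Exceptions can't be resolved automatically; fall back to asking the user.
      if (multiStateZips.has(zip)) return undefined;
      return zipToCounty.get(zip);
    }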


ZIP codes also span county boundaries that aren't state boundaries -- I know of several in my county alone


Why would you do this?


For example, health care plans in the US are county-specific with regard to premiums, co-pays, etc. (based on demographics). Allowing someone to type in their ZIP Code to get started can be a better user experience than having them pick their county.

https://www.healthcare.gov/see-plans/


Except when it isn’t. I’d be curious to know the population in areas where a zip code spans multiple counties.


In that case, each county that corresponds to that ZIP Code is shown and the user can disambiguate manually.
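
Roughly like this (hypothetical names and data, just to show the flow):

    // Maps each ZIP to every county it touches; built from a real dataset.
    const zipToCounties = new Map<string, string[]>([
      // ["#####", ["County A", "County B"]], ...
    ]);

    function countiesForZip(zip: string): string[] {
      return zipToCounties.get(zip) ?? [];
    }

    const candidates = countiesForZip("00000"); // placeholder ZIP
    if (candidates.length > 1) {
      // More than one county: render a picker so the user can choose.
    }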


Yeah, it makes it easier. And I appreciate the idea of making address entry easier for users, but if they have to disambiguate, did you actually make it easier?

I think Zillow does it best. You just type your address in a box and it looks up the normalized full address.

That makes everyone happy.


Similar timing, I had a high school internship at the National Cancer Institute at Ft. Detrick, MD in 1994-95, and the lab down the hall had some SGI iron and a glove (I don't remember what the glove hardware was, if it was SGI or 3rd-party or custom) for manipulating 3D renders of folded proteins. Incredible stuff, same "in the future" feeling.


The SGI Reality Center had a glove option. I can't find a pic of it though.


Aside from the Liquid Glass stuff, has anyone detailed the changes to the Unix bits of the OS? What's new, deprecated, moved, locked-down, etc. ... ?


The most notable change is probably the new native container runtime: https://github.com/apple/containerization

Metal 4 is interesting: https://developer.apple.com/metal/whats-new/

New enterprise-y stuff: https://support.apple.com/en-us/124963

FireWire support is gone, and this is the last macOS release for Intel.


How dare you not complain about the UI?


Another good rule of thumb to remember is that a 50mm lens on a 35mm sensor ("full-frame") has roughly the same FOV as the human eye, i.e., what you see naturally.


I never understood that argument. By pure FOV, the human eye is much wider. Of course it is not that simple: spatial resolution drops off to the sides (while temporal resolution increases). This makes statements like "50 mm on 35 mm is the FOV of the human eye" not very meaningful.
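
For concreteness, the standard rectilinear angle-of-view formula (frame width d, focal length f) gives:

    \alpha = 2 \arctan\left(\frac{d}{2f}\right), \qquad
    \alpha_{50\,\mathrm{mm}} = 2 \arctan\left(\frac{36}{2 \times 50}\right) \approx 39.6^\circ

So a 50mm on full frame covers roughly 40 degrees horizontally: far narrower than the roughly 200-degree total human visual field, but far wider than the few degrees of sharp foveal vision, which is why the comparison is fuzzy at best.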


if you take a 35mm SLR with a 50mm lens, rotate it vertically (portrait), hold the viewfinder up to one of your eyes, and leave the other eye open, your binocular vision will merge the two images with no problem/distortion, as if you were not holding a set of lenses up to one eye.

since what you see through the viewfinder is what the taken picture will look like, it is neutral with respect to your eyes, at the zero middle between wide angle and telephoto. (it's worth considering "who says eyes are neutral?"; it's the system we are used to and that our brain develops to understand)

it's non-obvious to a casual observer that the mm units chosen for the image size (the image gets focused onto a 35mm-wide rectangle; you need to know the aspect ratio) and the mm for the focal length are measuring different things, but that's why you just need to "know" that 35mm and 50mm "equal neutral". there are more things measured in mm as well, like the actual width of the primary lens, which indicates how much light is gathered to be focused onto that same rectangle.

i'm not a photographer. i don't quite know the mm lingo for what happens when the image sensor/film is wider than 35mm, the large/full formats. the focal lengths "work" the same, but a larger image would need to be focused, and that seems like it would require some larger distances within the lens system.


The large-format ones get a wider FOV in degrees. IIRC, if you keep the absolute aperture the same and change the focal length to keep the FOV the same, the DOF won't perceptually change.
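
To make that concrete: the f-number is focal length over entrance-pupil diameter, so holding the absolute aperture D fixed while scaling f to keep the FOV across formats scales the f-number too (the 2x-format example is my own illustration):

    N = \frac{f}{D}, \qquad
    50\,\mathrm{mm}\ f/2\ (D = 25\,\mathrm{mm})
    \;\approx\;
    100\,\mathrm{mm}\ f/4\ (D = 25\,\mathrm{mm},\ \text{on a format with twice the diagonal})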

Now, when you realize that there are geometric limitations to how wide an aperture can be relative to the focal length without straying from vaguely traditional _shapes_ of the objectives ("camera lenses"), you can see that, at the expense of fancier aberration corrections and of course larger/heavier glass making up the larger objective, one could use a proportionally wider aperture with large-format cameras.

For example, the famous Barry Lyndon objectives were actually "just" 0.7x teleconverted spinoffs of an originally 70mm f/1 design. https://web.archive.org/web/20090309005033/http://ogiroux.bl...


The phenomenon you describe is a function of viewfinder magnification. It so happens that many SLRs had their magnification such that it worked well at 50mm to shoot with both eyes open. There are SLRs that have different magnification so this trick doesn’t always work.

You can get a rangefinder style camera with a viewfinder that lets you shoot with both eyes open but has a 35mm POV.

People have a variety of theories as to why 50mm is considered the standard lens and why people say it mimics human vision. I have heard so many explanations that I am inclined to say that there’s not really much but opinion behind it. It might just be that it was the most common first lens and because it is cheap and relatively simple to make a good, fast 50mm lens.


if that trick doesn't work, then either 1. your viewfinder is not showing what you will shoot (which is what everybody expects, since otherwise how could you frame your shot?), 2. you are not using a 50mm lens, or 3. you are not using a 35mm SLR

the point of a "single lens reflex" system is that you can see what the picture will look like by looking through the same (single) optics


No. As I stated, if the trick doesn’t work at 50mm it is because you are using a viewfinder with a different magnification.

A Pentax MX for example shows .97x magnification at 50mm. It will work great for your trick. Meanwhile a Canon AE-1 has .83x magnification at 50mm meaning one eye will be seeing an image where everything is 17% different in size. It will be like one eye is looking at a 55 inch TV and the other eye is looking at a 45 inch TV. Or more accurately, one eye is looking at the same TV but from 17% farther away.

If you throw a 58mm lens on that Canon, the trick will work again because you are zooming in to compensate for the zooming out that is happening in the viewfinder.
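
Back-of-envelope, assuming the apparent size in the finder scales as the finder magnification (quoted at 50mm) times the focal-length ratio:

    0.83 \times \frac{58\,\mathrm{mm}}{50\,\mathrm{mm}} \approx 0.96 \approx 1.0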

Of course, none of this has anything to do with 50mm lenses being “standard”.

Don’t believe me? Go slap a 50mm lens on an SLR with very low magnification. Or read one of the dozens of articles and threads out there explaining your misconception. Here’s a great one: https://www.lomography.com/magazine/319909-cameras-in-depth-...


technically speaking, if your viewfinder has a different magnification, that is (to coin a term) a Multiple Lens Reflex; you have added a lens. SLRs were invented to show you "what the camera sees" so you can tweak it perfectly along different dimensions.

you are describing a different system that does not show you what the camera sees. I'm not saying what you are talking about doesn't exist; I'm saying that your over-inclusivity takes away the value of what I described, telling people "there's really nothing you can say, a million different things could be going on"


Every SLR with an eye-level viewfinder (instead of focusing on a waist-level ground glass) has optics in it. It is an MLR by your wording. Your eye would have to focus on the image on a ground glass within an inch or so, so it is instead viewed through a lens that makes it possible to see what the camera sees. You wouldn't be able to focus with your eyes if there weren't another lens.

In other words, your “standard” lens is an artifact of the optics chosen to allow your eye to see the image.

In terms of what your eye sees: The FOV of what you are focusing on with your eyes is narrower than a 50mm lens. The FOV where your eyes can recognize symbols (can read letters) is wider than a 50mm lens. The FOV that your eyes can see from periphery to periphery is drastically wider than 50mm.

Quite simply put, the fact that on some cameras you can shoot with both eyes open at 50mm is an artifact of design, not some natural law. This is proven by the fact that there are cameras where you can do this with a 35mm lens or a 60mm lens. That camera manufacturers settled on calling 50mm at 1.0x magnification the standard view is arbitrary.

There is precisely nothing behind the common belief that 50mm is the same view as your eyes. It isn’t.

You can keep insisting otherwise, but it is in contradiction with physics and nominal human anatomy.


well, why don't you bring your physics and human anatomy arguing-from-first-principles over to wikipedia and let's see how long your changes last on that camera page :) good luck!

https://en.wikipedia.org/wiki/Single-lens_reflex_camera

opening paragraph

"In photography, a single-lens reflex camera (SLR) is a type of camera that uses a mirror and prism system to allow photographers to view through the lens and see exactly what will be captured... SLR technology played a crucial role in the evolution of modern photography...the rise of mirrorless cameras in the 2010s has led to a decline in SLR use and production. With twin lens reflex and rangefinder cameras, the viewed image could be significantly different from the final image."

what you see through the viewfinder is what the camera will take a picture of; you can change lenses, so it is not always neutral. but if the zoom of the lens is neutral, what you see through the viewfinder is neutral, and that occurs at 50mm for a 35mm camera


You still haven’t understood what I’m saying. You are either very mistaken or you are explaining what you are trying to say very incorrectly.

Nothing that I’m saying is contradicted by that Wikipedia article.

I have, on my desk, no less than three 35mm film SLRs that will not allow you to see with both eyes open using a 50mm lens. I have already given you a link to an article explaining it, as well as explained it myself.

The image you see through a normal film SLR is the image that the lens projects onto a surface, which is then further transformed in the prism; you can see this surface by removing the lens and looking at the top of the inside of the camera, above the mirror, directly behind the lens. That image on the ground glass surface is then transformed using another set of lenses and mirrors in the prism so that you can put your eye to the eyepiece and see it right side up, and focus your eye as if the image were not less than an inch away on a piece of ground glass.

There is no SLR on earth that does not have additional optics between you and the image projected on the ground glass. In modern cameras the ground glass and additional optics are a single piece with the flat side facing down and either a fresnel lens or a normal glass lens on the top.

Those optics inside the prism, that every single eye level finder SLR has, are what decide whether or not a 50mm lens shows an image to the photographer that is comparable in size to what they see with their other eye. If it is 1x magnification at 50mm it is the same size. Otherwise it is not. You can look up the magnification for any SLR. There is also the completely different coverage spec that SLRs have that tells you what percentage of the full image to be projected on the film that will be shown in the finder. You can have cameras that show the full image at lower magnification, in the same way that you can see the full image after printing on a 4x6 photo, or on a 8x12 photo.

What is crucial to understand, and what you have continually missed, is that there is not a "neutral" spot that occurs naturally at 50mm. It is an artifact of design on many, but by no means all, cameras.

A Nikon D850 has viewfinder coverage of 100% and a magnification of .75x at 50mm. That means that the viewfinder, with a 50mm lens attached, will show the entire image to be recorded, and the image will be 75% of the size that my other eye sees it. It will give you a headache to try to shoot both eyes open. My Nikon F90x has similar viewfinder specs, to preempt any notion that this is because of digital. Magnification is referenced to 50mm because it has to be referenced to something, and the most popular focal length is the one that manufacturers settled on.

There is also the completely different coverage spec that SLRs have, which tells you what percentage of the full image projected onto the film will be shown in the finder. You can have cameras that show the full image at lower magnification, in the same way that you can see the full image after printing on a 4x6 photo or an 8x12 photo. Some SLR cameras show a smaller, but still complete, version of the image that the lens is projecting.

ALL SLRs need an additional lens in the prism to make it possible to see anything at such short distances. The nature of that internal lens and the prism is what determines the magnification, not the lens that you attach.

If you go look at the Wikipedia link that you found there is a cutaway diagram showing the additional lens that you are viewing the image through. That combined with the article I linked earlier explain it very thoroughly.

Good luck with your journey to understanding of this concept.


The way I understand it, it is not FOV but zoom level. If you look through a camera with a 50mm lens, the subject and background should appear the same size as when viewed with the naked eye. Doesn't matter if it is full frame or crop sensor.


The relative size of subject and background is caused by the distance to them, not focal length.


Also called ‘perspective’, and the only way to change it is to move the position of the camera.

It does not matter if you crop an image taken with a 50mm lens to get the same area of the subject as one taken with a 300mm lens from the same ‘standpoint’: there will be no difference in the foreground/background relationship (except for grain and noise, but that’s another story… ;-)

You have to move the camera to change that.

This is often seen in movies (those shot on real film) as opposed to on video: zoom lenses are often used without moving the camera, while film-based productions often use a dolly to move the camera. The effect of combining zoom and camera movement (the ‘dolly zoom’) to keep the same crop of the foreground while the background quickly gets larger/closer (or vice versa) is really effective, also in illustrating this concept.

In my early life (before my education as a photographer) I really liked wide angles, as they brought ‘life’ into a lot of pictures. Wide as in 24 mm for my 35mm camera (a Nikon F2, from 1973, should you wonder) was a favorite, replacing my 28 mm.

Too bad full-frame digital is still so expensive. Using a 14-24 f/4 on the DX format (Nikon D7100) just isn’t the same.

So now the iPhone is the most used camera (you know - the camera you have with you…!)


If you have a 50mm lens, try it. It will look exactly the same as when looking through a tube with the naked eye.


Both, actually.


It’s entirely independent of focal length.

It has to do with the ratio of the subject-camera distance to the background-camera distance.

As others have pointed out, you can prove this to yourself in one of two ways:

1. Frame with a telephoto, then shoot with a wide-angle lens and digitally zoom in on the photo.

2. Frame with a wide angle, then shoot a panorama with the telephoto and stitch.

2 is significantly harder if you are close to the subject.
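
The underlying math: for an object of height h at distance s, the image height is roughly hf/s (thin lens, distant subject), so focal length cancels out of the subject/background size ratio:

    \mathrm{image\ height} \approx \frac{h f}{s}, \qquad
    \frac{\mathrm{size}_{\mathrm{bg}}}{\mathrm{size}_{\mathrm{subj}}}
    = \frac{h_{\mathrm{bg}}\, s_{\mathrm{subj}}}{h_{\mathrm{subj}}\, s_{\mathrm{bg}}}

Only moving the camera changes the s values, which is exactly the distance-ratio point.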


Well, no. This article has some clarifications: https://petapixel.com/is-lens-compression-fact-or-fiction/


This is a million times easier to demonstrate with images than text. Wikipedia has a good animation: https://en.m.wikipedia.org/wiki/Perspective_distortion

This page doesn’t have any images but covers the concept quite well: https://en.m.wikipedia.org/wiki/Normal_lens

The concept of matching a picture to normal human vision goes back to the age of paintings, before any photography even existed.


It's a great point, and I did consider it; the trouble is, how do you get the pattern resource data out of ResEdit running in the emulator and onto the modern machine? And ResEdit doesn't seem to run in any kind of compatibility mode on modern Macs anymore either.

It's too bad because ResEdit is an amazing program, and even has a surprisingly full-featured graphical editor, including for those patterns, with a live preview mode:

https://raw.githubusercontent.com/paulsmith/classic-mac-patt...


Oh, some of the emulators allow you to create a "shared folder" between the emulated OS and your host OS. At least, that's what I do with SheepShaver. Very easy to share files between the two!

Right, I thought I remembered such an editor from back in the day, for editing those patterns. When I was a kid I went all-out with ResEdit, inspecting every single resource in the System and Finder files (and pretty much every application/game I had)... it was pretty fascinating how much stuff was so easily editable! I renamed my Trash to "Incinerator" :)

