Hacker News | gaxun's comments

Nice to see that several of us are on here.

> I imagine most of these checks (definitely all $7 of mine)

The 0x$2.00 check I got, dated 10 Feb 2020, was for one typographical item in Volume 4A and one pedagogical improvement in Volume 1, Fascicle 1. So it is certainly possible to still get checks for material that has already been published.

Since it is now 2020, anyone with bug reports from the typography stuff should send them in soon; wouldn't want to miss the deadline.


That's interesting; can you go into more detail on those two -- what was the pedagogical improvement, and what was the typographical item? Just curious...


Sure.

- On page 715 of Volume 4A, he had something like \`a when he meant to have just à.

- In Volume 1, Fascicle 1, there is a convention that the "main" entry point of an MMIX program begins at LOC #100. The convention is established early on and repeated throughout the text. However, at no point is it explained why LOC #100 was chosen (instead of LOC #0, LOC #80, or whatever). It could be gleaned through careful study -- LOC #0-#80 are reserved for trip/trap handling and one more location before #100 is reserved for a special second entry point -- but you basically had to read the entire fascicle to find /all/ of these. A naive user would be likely to try writing a program beginning at LOC #0 and wonder why it didn't seem to behave correctly. My suggestion was to just add a note explaining why LOC #100 was used. He agreed and you can find the added note in the latest errata for Volume 1, Fascicle 1.


Oh thank you. I looked up page 715 of my copy of Volume 4A and can see the \`a. :-) I also see both corrections you mentioned in the online errata, that's cool. :-)


> when's the last time Knuth wrote a check for TAOCP?

Pretty recently. I just got one in the mail today.


The author should have written to Donald Knuth with an interview request for this piece. It would have added something special beyond just repeating what's already visible on his website.

I wrote to Professor Knuth about a project I did a year or two ago and was pleasantly surprised to receive a two-page handwritten note in response. So it seems like the no-email filter is probably still working well for him.


Indeed.

Web browsers are generally free to use, and there are several serious contenders along with many less popular ones.

So the main thing they should be competing on is user experience.

But it seems to me that browsers frequently fail to deliver a user-first experience.

The browser should only take actions specifically requested by a user, as his agent. Everything about the experience needs to be reframed from that perspective.

Some browsers lately seem to be doing a little better at this, but just tacking "advanced flag" features onto an existing product isn't going to help mainstream users at all.


> hurdle of setting up team repositories with safe credential management...like for any kind of collaboration

Identity continues to be the key selling point of keybase. I'm excited by this.

I can keep clones of my private repositories here -- things like dotfiles and configurations. That sounds like a good start. And I can also easily share code with people who need to see it.


I spent some time attempting to work with the W3C Web Annotation Data Model. That data model is serialized as JSON-LD.

After spending about 50 hours reading the documents and attempting to implement some of it, I have a general idea what JSON-LD is.

I wasn't really trying to achieve anything, so I basically quit once something seemed opaque enough that I couldn't figure it out in a short period of time. When I visited the JSON-LD Test Suite page to see what implementations are expected to do [0], I found:

> Tests are defined into compact, expand, flatten, frame, normalize, and rdf sections

I had a hard time figuring out what each of these verbs meant, and they were about all that the various implementations I found did. For example, the term normalize doesn't even appear in the JSON-LD 1.0 specification [1]. shrug I'm sure I could have figured out more if I had spent the time to actually read the whole thing and all the related documents.

[0]: https://json-ld.org/test-suite/

[1]: https://www.w3.org/TR/json-ld


JSON-LD is RDF. Or rather, it is what RDF would look like if it were serialized as JSON instead of XML. From the Semantic Web angle, JSON-LD is just another serialization format, like Turtle, except it uses JSON because JSON is popular nowadays.
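
To make that concrete, here is a tiny hand-rolled example of my own (not from the article or the spec): the same single RDF statement, written as a Turtle triple in a comment and as a JSON-LD object literal in TypeScript. Only the syntax differs.

  // Turtle:
  //   <http://example.com/alice> <http://xmlns.com/foaf/0.1/name> "Alice" .
  // The same triple as JSON-LD, here just a plain object:
  const alice = {
    "@context": { "name": "http://xmlns.com/foaf/0.1/name" },
    "@id": "http://example.com/alice",
    "name": "Alice"
  };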

Sometimes I wonder why this is not said directly; probably because the Semantic Web and RDF are passé now.

Actually the post's author addresses this point:

> I made it a point to not mention RDF at all in the JSON-LD 1.0 specification because you didn’t need to go off and read about it to understand what was going on in JSON-LD.

...

> Tests are defined into compact, expand, flatten, frame, normalize, and rdf sections

These are just different sub-formats of JSON-LD: the information represented is the same, but the JSON looks a little bit different. Some sub-formats are easier for tools to process; some are better for humans.
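
Roughly -- and I'm typing this from memory, so treat the exact shapes as approximate -- the compacted and expanded forms below carry the same single statement, one with an @context and short keys, the other with every key spelled out as a full IRI.

  // Compacted form: short keys, plus an @context that maps them to IRIs.
  const compacted = {
    "@context": { "name": "http://xmlns.com/foaf/0.1/name" },
    "@id": "http://example.com/alice",
    "name": "Alice"
  };

  // Expanded form: no @context; keys are full IRIs and values are wrapped.
  const expanded = [
    {
      "@id": "http://example.com/alice",
      "http://xmlns.com/foaf/0.1/name": [{ "@value": "Alice" }]
    }
  ];

Flatten and frame reshape the same graph in other ways; as far as I can tell, normalize comes from a separate RDF dataset normalization spec rather than the core JSON-LD 1.0 document, which would explain why the term is hard to find there.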


> Sometimes I wonder why this is not said directly, probably because Semantic Web and RDF are passe now.

It seems to me that on one hand JSON-LD wanted to bootstrap the network effects by bringing along people who were doing RDF both technically (the JSON-LD spec says "JSON-LD is a concrete RDF syntax as described in [RDF11-CONCEPTS].") and socially (published by the RDF WG), but on the other hand the negative brand equity of RDF is recognized as an obstacle for bringing along even more people, hence the OP professing "Hate the Semantic Web" and "Kick RDF in the Nuts".

It's kinda weird how suggesting that past experiences with RDF have any bearing on JSON-LD is treated as a social no-no. Despite the above-quoted bit from the spec saying "JSON-LD is a concrete RDF syntax as described in [RDF11-CONCEPTS].", we are supposed to play along with the idea that JSON-LD has nothing to do with RDF.


> Sometimes I wonder why this is not said directly

There are multiple flavours of RDF, and I think JSON-LD only supports a subset of one of them. It's been a while since I read the spec, but I believe there are various fudges around lists, reification, and datatype coercion.

Any time I get close to an RDF stack I find it's a broken mess. Its complexity seems to almost guarantee incompatibility instead of interoperability.


I've been trying out this extension for a few days.

What I would really like to see from this extension is a 1-click way to sign any message I'm writing, anywhere on the internet. Along with that would be the ability to verify that a keybase signature found in the wild belongs to a particular keybase user. Then I can initiate out-of-band discussions with the author of a comment on someone's blog, not just with a Reddit or Hacker News poster.

Having the keybase chat button appear next to posts on sites like Reddit, HN, etc. seems like a great step toward a "metaweb" platform as well. For example, I could let someone know about the typo in their post via keybase chat, rather than polluting the public comment stream.

Very excited.


I second this. Rough sketch: detect a text input I'm typing in, add a little button that says "sign and send", which when clicked adds a signature to the message and submits it. If the extension sees this on a forum, verify the sig in the background and inject a little check mark next to the username if it checks out.
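
Something in the shape of this content-script sketch, maybe. Everything here is a placeholder: signMessage/verifySignature stand in for whatever the extension's real backend would be (not any actual Keybase API), and the .comment/.username selectors would have to be adapted per site.

  // Placeholder signing backend -- just stubs so the sketch is self-contained.
  async function signMessage(text: string): Promise<string> {
    return text + "\n\n-----BEGIN SIGNATURE-----\n(placeholder)\n-----END SIGNATURE-----";
  }
  async function verifySignature(text: string): Promise<{ ok: boolean; user?: string }> {
    return { ok: text.includes("BEGIN SIGNATURE"), user: "someone" };
  }

  // 1. Put a "sign and send" button next to every textarea on the page.
  document.querySelectorAll("textarea").forEach((box) => {
    const btn = document.createElement("button");
    btn.textContent = "sign and send";
    btn.addEventListener("click", async () => {
      box.value = await signMessage(box.value);  // wrap the draft in a signature block
      box.form?.submit();
    });
    box.insertAdjacentElement("afterend", btn);
  });

  // 2. Verify any signed comments on the page and mark the author with a check.
  document.querySelectorAll<HTMLElement>(".comment").forEach(async (c) => {
    const result = await verifySignature(c.innerText);
    if (result.ok) {
      const check = document.createElement("span");
      check.textContent = " ✓";
      c.querySelector(".username")?.append(check);
    }
  });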


I like the idea of (semi?)automagically decrypting/verifying content on the web. It's on the list of ideas to explore, but not sure how soon. :)

If you have more specific use cases in mind, I'd love to hear them.


I would like the same functionality and I would love if it supported encryption as well as signing.


> Really, what a robust commentary system needs is to map many comments to many units of text

This is actually built into the specification. From the Web Annotation Data Model [0]:

  - Annotations have 0 or more Bodies.
  - Annotations have 1 or more Targets.

So one "Annotation" object can have multiple bodies (descriptions) attached to multiple targets.
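
For instance (hand-written by me following the model in [0], so double-check the property names against the spec), a single annotation with two comment bodies pointed at two targets might look roughly like:

  const annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "id": "http://example.com/a/42",   // illustrative IDs and URLs
    "type": "Annotation",
    "body": [
      { "type": "TextualBody", "value": "Nice turn of phrase." },
      { "type": "TextualBody", "value": "Echoes the earlier stanza." }
    ],
    "target": [
      "http://example.com/poems/romeo#line-12",
      "http://example.com/poems/romeo#line-40"
    ]
  };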

> 3. Relationships between comments

This sounds more like an implementation detail of a client than part of the protocol or data model put forth by the W3C group.

However, I believe this can kind of be done server-side with the Annotation Containers idea from the Web Annotation Protocol [1]. Your server can map a single annotation to multiple containers. So perhaps you have an endpoint like `http://example.com/a/` and you want to arrange a hierarchy of comments. You could provide a filtered set of the annotations at `http://example.com/a/romeo/consonance/`, and similar endpoints.
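
As a very rough server-side sketch (made-up endpoints, and a plain JSON list for brevity -- a real implementation would return the full annotation container representation described in [1]):

  import express from "express";

  // Pretend this is the annotation store.
  const annotations = [
    { id: "http://example.com/a/1", tags: ["romeo", "consonance"] },
    { id: "http://example.com/a/2", tags: ["romeo"] },
  ];

  const app = express();

  // The "everything" container.
  app.get("/a/", (_req, res) => { res.json(annotations); });

  // The same annotations, filtered and exposed as another container.
  app.get("/a/romeo/consonance/", (_req, res) => {
    res.json(annotations.filter((a) =>
      a.tags.includes("romeo") && a.tags.includes("consonance")));
  });

  app.listen(3000);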

So basically what I'm saying is that the protocol here isn't going to get in your way; it just gives you an incentive to use this particular model for storing and transferring your data.

[0]: https://www.w3.org/TR/annotation-model/#web-annotation-princ...

[1]: https://www.w3.org/TR/annotation-protocol/#container-retriev...


If the protocol supports these features, then that's great and I'd love to see it adopted.


I have posted a couple things related to this but never finished a complete summary of my thoughts. These two attempts come close:

In [0], I shared some really basic thoughts about how this could be done by users of a particular VCS, git. Two-line summary:

  You run `git announce [name] [tracker]` to tell a git tracker
  that you've got your own fork. If you host your repository
  in a public place, the tracker just checks your repo for
  updates so other users can stay informed.

At [1], I describe a similar system for a specific type of frequently-forked software (GPL'd tools). This seems less tied to using git. Such software is available from diverse sources, in multiple VCS systems, and the copies may be wildly divergent. Comparing the sources seems valuable, but difficult.

Alternatively, Fossil does "distributed version control and issues", but it's really just meant to be one central location for a particular project.

Thought I'd share. I'm on board with everything you're saying. There are clearly some issues as well, though. The biggest one I see is related to the "paradox of choice" -- when there are four forks all just one commit different from each other, how do you know which one(s) to use? This is why we usually just follow the leader (maintainer) and use the one true source. End users don't have the ability to quickly know which patches to accept and reject. Even a reputation system wouldn't necessarily provide clean signals, due to bandwagon-type effects.

Unfortunately, I already have enough trouble picking between `iron` and `nickel` as a Rust web framework.

[0]: https://www.gaxun.net/ideas/git-announce/

[1]: https://news.ycombinator.com/item?id=13028079


There is no built-in detection of content changes in the standard, but it is designed to be potentially robust to changes.

It depends on what type of selector you use. The data model provides a number of selector types. A "text position selector" would anchor the annotation at a certain point in the text, like the 142nd character. An "xpath selector" would use a path into the document's structure to place the annotation.

If your annotation is a "highlight" then you would need to use these selectors within a "range selector" with a specified start and end point.

If you want your annotations to be robust to content changes, you will probably need to attach multiple selector types to the same target. This is allowed by the specification, but it felt very clumsy to implement.
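
For example, a single target could carry two selectors for the same highlight, so a client can fall back when character offsets drift. This is hand-written from my reading of the data model, so verify the property names before relying on them:

  const target = {
    "source": "http://example.com/page1",
    "selector": [
      // Position-based: breaks if earlier text is edited.
      { "type": "TextPositionSelector", "start": 142, "end": 160 },
      // Quote-based: can re-locate the passage after small edits.
      {
        "type": "TextQuoteSelector",
        "exact": "the annotated passage",
        "prefix": "here is ",
        "suffix": " we care about"
      }
    ]
  };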

If the target is available at multiple URLs, there is a mechanism to handle that, but the hosts of the content need to add links to the canonical URL so your annotation software can use it.

