Hacker News | ttybird's comments

Does that mean that Anne Sacoolas will be extradited to the UK?


Small chance. Being the servile lap dog of the United States is official Tory policy.


I thought that the First Amendment was about people rather than companies. In addition to that, there are many products where companies have to display certain information, such as age ratings for games, health warnings on cigarettes, food contents, allergy warnings, nutrition labels, medicine side effects, etc.

It is hard to classify censorship as speech for a site that is all about user content.


From a legal perspective, corporations are people too. Age ratings for games are voluntary, not legally required. The Supreme Court has held that governments can impose some limits and requirements on purely commercial speech, but those precedents don't apply to censorship decisions made by private companies. The fact that a site contains mostly user generated content is legally irrelevant.

https://www.mtsu.edu/first-amendment/article/900/commercial-...

I understand that some people don't like this situation but that is the reality of US federal law today. It won't change without a Constitutional amendment, or a major realignment of the Supreme Court.


It's the corporate person, distinct from a 'natural person.' Corporate persons have many but not all of the effective rights as a natural person does.


Companies are just people as far as the constitution is concerned (or really they're groups of people, but assembly is also protected by the first amendment!)

The food safety labels are an interesting point, but I'm not even talking about government regulations here. Just, say Twitter deletes your post. What do you sue them for?

The 1A allows Twitter's employees to express themselves as they wish, even through the company, so their removal of your post is simply their own protected expression.


> I thought that the first amendment was about people rather than companies.

Any amendment written before slavery was abolished probably had a rather flexible view on the whole "people" issue. If an African American can be property then a corporation can be a person.


This discussion has nothing to do with race. You're obsessed.


> You're obsessed.

I made one comment on this and that makes me obsessed?

> This discussion has nothing to do with race.

The claim was that it applied to people, I merely mentioned that what the law considers people is a rather flexible thing.


I hate this. If I know the password I should be able to log in to my account no matter what (unless I have 2FA enabled). Sadly, companies like Google and MS do not like this idea; they also often use the excuse that they can't verify you in order to mine your phone number.

"Edit: Account recovered. I used chromium to login (which I never do) and then back to Firefox"

How can google keep getting away with this? MS got into trouble with IE with much less.


Google denying me access to YouTube videos asking me to verify my age by giving them either my passport or my credit card should be illegal as well.

Then they expanded it to the app store - some apps, even ones that don't seem to need to be age restricted - now require I verify my age in the same way. I just give them the middle finger and manually install the APKs for those apps.

Unfortunately, no way to get around the YouTube restriction though.

Google having a complete monopoly on this stuff has got to end at some point. They have way too much power over what are now pretty mainstream internet services, especially since they cannot be competed with by most companies, even those with adequate funding and reach.


> Google denying me access to YouTube videos asking me to verify my age by giving them either my passport or my credit card should be illegal as well.

Wasn't this added because of some EU law? It seems that YouTube's interpretation was that asking this is the opposite of illegal, actually legally required.


The irony is that I have to verify my age on YouTube, but I heard that people don't need this on any porn site for example.


UK porn laws? Proposed and then dropped?


You can get around the YouTube restrictions using yt-dlp (a fork of youtube-dl). There are video players such as mpv that integrate with youtube-dl/yt-dlp, so you can watch videos without saving them.
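A minimal sketch of both routes, assuming yt-dlp and mpv are installed (the URL is a placeholder):

```shell
# Placeholder video URL; substitute a real one.
URL="https://www.youtube.com/watch?v=VIDEO_ID"
# To download the file (yt-dlp handles many age-gated videos, with varying success):
#   yt-dlp "$URL"
# Or to stream without saving; mpv invokes yt-dlp internally when given a URL:
#   mpv "$URL"
echo "would fetch: $URL"
```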


Including the age restriction? I don't see how that's possible. I know some typical bypass methods work for some of the other checks but they've never worked for the age requirements.

I'll give it a try though, thanks. There are some High Boi videos I still haven't seen :|


Keyhole, Jigsaw. Deep ties to defense.


The request for a phone number is precisely a 2FA mechanism, and that's how it's being used. I don't understand what the bad faith you're assuming on Google's part is. What do they want that phone number for otherwise?


If they're asking for a phone number after they've already decided you're not trusted enough with just your password, that is not 2FA.

If you're not trusted, then any phone number you give shouldn't be trusted, either.

If you're trusted enough to be let in when you give a phone number, then you should be trusted enough to be let in without it, and then asked for a phone number, if one is really needed for 2FA.


Indeed this is not a second factor.

I assume this restriction is against automation. As a complete guess on the heuristics: If the phone number you gave them is a "virtual" one, you are out. If it was used too frequently for recovery, you are out. If they are already familiar with it as belonging to someone else who is unlikely to also own the current account, you are out.

With those heuristics you need to provide a "fresh" phone number that likely belongs to a real person and costs real money to purchase and you are unlikely to want to "burn" for just 5 out of 5,000,000 of your automated attempts.


Easier to track you.


You are probably looking for Qt.


Doesn't fit the part about licenses that restrict what you can do with your code:

"Qt for Application Development is dual-licensed under commercial and open source licenses. The commercial Qt license gives you the full rights to create and distribute software on your own terms without any open source license obligations." https://www.qt.io/licensing/


One of the open source licenses it is under is the LGPL, which lets you do whatever you want with your own code. (Although this does not mean that you should; please avoid publishing closed-source software.)


It's not a bad suggestion, but it still doesn't satisfy the requirement. You can't put it all in a single executable, if that is what you want.


You can, as long as you give the user the ability to replace the library. Such as by providing .o files.


"but our writing of numbers comes from Arabic"

Not India?


Yes, India. Where they were writing left to right.

So the Persians picked them up and put them in their right-to-left text, so little-endian for them, and then Europeans picked them up from the Persians, all as-is. Little-endian would make typesetting columns of figures simpler, but that ship sailed long ago.


"e.g., ePub"

So, in the end you are going to serve HTML to the browser.

You can do that with web1


The assertion was that there is too great a diversity of device sizes for a LaTeX-based document standard to work, with the strong implication being that a fixed-size PDF is the only output option for LaTeX.

Both sides of that fail.

You're now shifting the goalposts.

Yes, ePub is based on HTML. It is also contained, standardised, and structured (not dissimilarly to how LaTeX itself is based on TeX but with standards and structure). It's also not the only fluid document standard, though the others I'm significantly aware of (mobi in particular) are effectively proprietary.

ePub exists and is good enough.

It has some issues of its own, including an overly-strong foundation in HTML which could well lead it to many of the same issues plaguing the Web. In practice, to date, it's largely avoided those pitfalls.

But again, that's really not what the question I was answering was about. Rather it was in targeting multiple sizes of devices. And I think I somewhat addressed that.


"must"? A web4 browser would probably use a "runtime compiled" implementation.

"Whilst that may still pass poorly-structured documents and result in poorly-formed output" - this by the way happens much more often with LaTeX compared to html in my experience.


A LaTeX document is virtually never directly consumed by a reader. It's first converted ("compiled") into some consumable format. Typically PDF/Postscript, though there can be numerous others.

You seem to be unfamiliar with this aspect of the system?


Don't make assumptions about me. A LaTeX document is also virtually never directly consumed by web browsers. In addition to that we are considering LaTeX for the web, not DVI, not PDF, not something else that you compile LaTeX into.

And well, given the number of people who use Overleaf, I would say that a lot of people (although writers rather than readers) consume LaTeX.

(Btw, you can compile HTML too; try printing it as a PS/PDF document.)
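A sketch of that "HTML compilation", assuming a Chromium-family browser is installed; `--headless --print-to-pdf` are Chromium's standard headless printing flags, and the page content here is a made-up example:

```shell
# A trivial HTML document to "compile".
cat > page.html <<'EOF'
<!doctype html><title>demo</title><p>Hello, print me.</p>
EOF
# Render it to PDF if a headless-capable browser is available.
if command -v chromium >/dev/null 2>&1; then
    chromium --headless --print-to-pdf=page.pdf "file://$PWD/page.html"
fi
```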


That the Web is principally oriented around HTML is something of an accident of history. Note that any data can be transferred over HTTP(S), including, on occasion, either compiled or uncompiled LaTeX.

I am not making assumptions about what you do or do not know. I'm telling you how you're being perceived. You have the power to alter that perception. You've failed to use it.


"Note that any data can be transferred over HTTP(S), including, on occasion, either compiled or uncompiled LaTeX."

Sure, but I don't see what this has to do with anything.

As for your perception, I don't care :) Keep it to yourself next time please. You too are being perceived in a certain way as well but telling you how would likely be against this site's rules.


We don’t see html raw either dude


Username checks out. It's not the same: HTML is interpreted and often modified on the fly (with JavaScript), with the source being a click away, but it's still just HTML. LaTeX documents, on the other hand, are compiled into other formats like PDF, and it is those formats that get distributed, not the LaTeX source.


Correct, further:

- The compilation means that there's at least a check for syntactic validity before any old crap is published.

- JS-based HTML can be fully dynamic to the extent that there's no sense of an underlying document at all. There are times when this is useful. That is an exceedingly small minority of the cases in which it is used. The fact that it's often preferable to rerender an HTML document as PDF, simply for readability, let alone archival, should speak volumes.


I talked about the compilation aspect at https://news.ycombinator.com/item?id=29372937 In addition to that there are various checkers for html.

There is lua-based LaTeX and js-based pdf. Just use html without JS.

"The fact that it's often preferable to rerender an HTML document as PDF, simply for readability, let alone archival, should speak volumes."

The fact that it's always preferable to export a LaTeX document as pdf...


Username checks out? Are people who compile markdown, org (or even LaTeX) via pandoc into html somehow "garbage coders"?


Nor a "first-class" TeX element either. \footnote is just a macro.
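To illustrate the point that it's "just a macro": since `\footnote` is defined at the macro layer rather than being a typesetting primitive, a document can redefine it. A minimal (crude, illustrative) example:

```latex
\documentclass{article}
\begin{document}
Some claim.\footnote{A standard footnote, set at the page bottom.}

% Because \footnote is an ordinary macro, it can be (crudely) redefined,
% e.g. to render notes inline instead of at the page bottom:
\renewcommand{\footnote}[1]{ [note: #1]}
Another claim.\footnote{now rendered inline}
\end{document}
```

Nothing comparable is possible with a true engine primitive.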


We're discussing LaTeX.

TeX itself is a set of typesetting primitives. LaTeX is a set of structurally-semantic, document-oriented macros built on TeX.


LaTeX is not semantic; the fact that it's built on macros (which are essentially function calls) says it all. There is some degree of separation between content and presentation, but that's it.


LaTeX as designed is generally semantic (formatting is left to the applied style), and whilst HTML5 is in theory reasonably semantic, in practice, LaTeX tends to be used far more semantically than HTML is.

Some of those reasons are technical, though as noted elsewhere in this subthread, economic and other factors seem more dominant, and I severely doubt that LaTeX on its own will address the broader scope of issues raised in TFA.


Is this why after all these years there is STILL no sane way to make accessible pdfs from LaTeX?

LaTeX is almost never used semantically. People are encouraged to think only about the document's presentation (mostly due to LaTeX's own failures).

As for footnotes being "first class". It's just a macro, nothing first-class about that when compared to HTML's solution.


"After all these years" being the four years and change since the tagged-PDF standard (ISO 32000-2, §14.8) was released, in January 2017?

https://www.pdfa.org/resource/iso-32000-pdf

The LaTeX project announced an a11y (accessibility) project in January of this year. A tool is now available, though success varies.

https://www.latex-project.org/publications/indexbytopic/pdf/

Of footnotes: LaTeX has the macro, it's common across multiple document types. HTML does not.


To respond to your question: no. Not sure why you linked me the PDF 2 specification. PDF/UA is from 2012, and the ability to tag PDF files for accessibility is a thing since 2001 (with PDF 1.4).

Various tools and packages that attempted to generate accessible PDFs from LaTeX have existed for aeons, and all of them had a common characteristic: they sucked. I am convinced that by this point everyone who cares about accessibility has moved to other formats (like HTML).

"HTML does not"

<aside> is "first class".


I honestly don't see any advantage to LaTeX over HTML+SVG+CSS+MathML+??? for any use case. Everything that LaTeX does, the technologies I mentioned do better.


