You gain flexibility by letting your browser handle such things. Have a look at Stylus https://github.com/openstyles/stylus, which is a privacy-conscious fork of Stylish for Chrome, also compatible with Firefox as a WebExtension.
I echo this. When my TKL KBParadise with Matias Quiet Click switches started misbehaving I was quite disappointed to find out that chatter is a common defect of Matias switches. Don't like the stabilizers either. Should have stuck with Cherry MX Browns, by far my favorite switch.
It appears you've only managed to hire one person after at least 246 days (https://news.ycombinator.com/item?id=11612805) and it is safe to assume you've advertised this position on other boards as well. I see you were a team of nine developers back then, and only recently became a team of ten, according to the job posts.
Maybe you should inform the community here why it hasn't been working out for you or update the copy to indicate the level of experience you are looking for.
If you are on the fence about professional proofreading due to cost, I think a thorough review by a native English speaker (maybe a good friend) who is at least somewhat technical would be enough to correct a lot of the issues.
I find the way you write enjoyable, free-flowing and absolutely understandable, but there are quite a few instances of expressing things in a subtly unnatural way, which detracts from the otherwise great experience. Perhaps this isn't the best example (and some things are subjective too), but to illustrate what I'm saying:
> You’ll even know how to show your CPU usage and memory via the status line.
I would rewrite that to:
> You’ll even find out how to show your CPU usage and memory via the status line.
Or even better:
> I'll even show you how to display CPU and memory usage right at your status line.
No amount of service toggling and hosts-file stuffing will suffice. It just screams ignorance. As a software developer you should understand that plugging holes in a black box is a futile effort. All these tools do is give a false sense of privacy, one that the next update will undo by flipping a switch or installing a new service.
If you think the OS is violating your privacy, stop using it or remove it from the Internet. Or both. It's the only way.
Edited to add: If you actually like Windows (I do), just switch to the Enterprise Edition and dial Telemetry down to "Security". Here is an explanation of what little is then shared, and how to even further minimize your footprint: https://technet.microsoft.com/en-us/itpro/windows/manage/con...
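For the record, that Group Policy knob maps to a registry value; something like this from an elevated prompt should be equivalent (note that a value of 0 only means "Security" on Enterprise/Education editions; on Pro it falls back to "Basic"):

    reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\DataCollection" /v AllowTelemetry /t REG_DWORD /d 0 /f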
Edit to address the availability of the Enterprise Edition: If you are not able to get it via your $JOB, a valid key from MSDN surplus shouldn't be more than $50 if you look around. Of course you'd then be bending the EULA in your favor, but hey, since Microsoft is spying on everyone against their will I think it is fair game, right?
Pragmatically there are reasons for some people to run Windows 10 vs. other operating systems, even if you don't/won't recognize them. Tools like this allow people to run Windows 10 in a "good enough" state and spread awareness of the problem. It will always be a cat-and-mouse game if the black-box vendor so chooses, but it's still better than doing nothing.
Edit: To address your edit, aren't you trusting their black box and using a different tool to accomplish the same? Also, not everyone has access to Windows 10 Enterprise.
I actually like Windows from a technical point of view. However, as a private consumer how do I buy the Enterprise edition that gives me full control of my system? Even with the "Professional" edition Microsoft is still in the driver's seat (e.g. http://superuser.com/questions/1110265/how-to-prevent-window...).
It's not "off". My workstation running Windows 10 Enterprise still makes a lot of network calls (even explorer.exe) not related to Windows update. I never came across an in-depth analysis of what exactly is being sent, please share if you know of one.
The lowest telemetry setting level supported through management policies is Security.
Level | Data | Value
Security | Security data only | 0
Information that’s required to help keep Windows, Windows Server, and System Center secure, including data about the Connected User Experience and Telemetry component settings, the Malicious Software Removal Tool, and Windows Defender.
The Security level gathers only the telemetry info that is required to keep Windows devices, Windows Server, and guests protected with the latest security updates. This level is only available on Windows Server 2016, Windows 10 Enterprise, Windows 10 Education, Windows 10 Mobile Enterprise, and Windows IoT Core editions.
Ah… I did some more looking around and found out that the “Off” option actually did still exist¹ in the release version of Windows 10 Enterprise. However, it seems like some update to Windows 10 changed the label from “Off” to “Security”² instead. I can only think of two possible explanations for the change:
• Microsoft removed the already existing capability to completely turn off Telemetry for some reason, or
• the “Off” label wasn’t accurate in the first place, so Microsoft changed it to something less misleading
In any case, it seems like Microsoft has no plans to include a way to fully turn off telemetry on Windows 10 Enterprise anytime soon³.
As for your inquiry, unfortunately, I haven’t seen a more in-depth analysis of what is being sent than the one at the link you’ve posted (although it actually does go into a bit more detail than just the part you’ve quoted here). There is this⁴, although it’s just a list of hostnames and IP addresses; there was no packet inspection done, so it doesn’t make clear what’s actually being sent.
The reason why people will continue using an OS that they dislike and distrust is the same reason why people don't switch to a free and open source OS: too much software is exclusively on Windows, and that forces the user onto that sticky platform regardless of user preference.
It's the same reason why people who dislike and try to block advertising don't simply stop consuming content that contains advertising. They don't want to turn into hermits that live on a mountain away from the web, TV, mail, email, radio, billboards, milk cartons, the sky, and practically everywhere else a company can stick an advertisement on something. It is an imperfect solution to an imperfect world.
> The reason why people will continue using an OS that they dislike and distrust... Too much software is exclusively on Windows...
Is there any room in your opinion for people who love Windows and think it's better than any other OS that is currently available? Because that's why I stick with it, despite having some very minor issues...
Also, the reason that I don't switch to a Free and open source OS for my desktop is because they all suck. They're slower and clunkier than Windows and they don't have the features that I want.
All of my Windows issues were solved by simply toggling features via Settings and Group Policy, though. I think there is one setting that you need the Enterprise version to toggle, and that is Telemetry. However, you can disable that service manually too - http://www.thewindowsclub.com/windows-10-telemetry/ - Of course, disabling Telemetry causes you to lose Cortana, the Windows Store and any use of your Microsoft Account, but I don't use any of that crap anyway, and anyone who does want to use that stuff wouldn't care about the basic Telemetry data that gets collected, which is detailed here - https://privacy.microsoft.com/en-US/windows-10-feedback-diag...
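In case it helps, I believe the service in question is DiagTrack ("Connected User Experiences and Telemetry"), so from an elevated prompt something like this should do it (a sketch; service names have changed between builds):

    sc stop DiagTrack
    sc config DiagTrack start= disabled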
I really don't understand how something that can run on less than a Pi can feel slower on consumer hardware than something that requires beefier specs.
For example, I never have to wait for my file manager to open. Not half a second.
Secondly, though Microsoft details the telemetry, it's encrypted before the user can see it. You have to trust a company that has a habit of bending over backwards for the US's clandestine organisations. It can't be verified.
I'm judging by the speed of the apps that run on top of the OS not the OS itself. Desktop apps specifically.
For example, all of the browsers run slower and are clunkier on Linux.
I'm with you on the telemetry. I just disabled it via the registry though. That option works on all editions of Windows unless I'm mistaken... which I very well may be, since I did not go to great lengths to verify that my machine is not sending anything back. However, I am not worried about US clandestine operations because there's nothing I can do about them anyway. They are into everything around you, not just Windows.
In my opinion the greatest threat is not spying on you. The thing you should be worried about the most is psychological warfare. They are not supposed to be running psychological operations on US soil, but it's so obvious that nobody follows that rule. TV, movies, news...all of them are used to program people. Honestly, there's nothing you can do about that either unless you are seriously rich and very well-informed.
Spying leads to manipulation, true. But my fear is based on not living in the US. And disabling regkeys doesn't stop 5 GB of telemetry going to MS a day. Which I find just a tad excessive.
Another big reason is maturity of the software. I've been using Linux and Windows in parallel for years now and even though it got better recently, I still stumble over minor bugs, inconsistencies, usability slips and such on Linux while those are practically non-existent on Windows.
It's quite understandable given the different objectives and budgets of the two, but I think for most average users this is a deal breaker for switching.
So - no effort is better than some effort? That's pretty dark. If we're given control of at least 20% of the holes, I would say: use that control.
> If you are not able to get it via your $JOB, a valid key from MSDN surplus shouldn't be more than $50 if you look around.
Then again, the main reason for running Windows instead of Linux is because you want stuff to just work. Once 'buy Windows' becomes 'go looking on the gray market in the hope of finding something unsupported that might or might not actually work when you try to install it' the value proposition relative to Linux has been significantly eroded.
People want to have their cake and eat it, too. The Windows ecosystem is all they know and they are not ready to step outside their comfort zone to use open source alternatives. They'd rather pirate whatever comes along and try to patch things up as well as possible using antivirus, firewall, VPN and a whole lot of snake-oil software. I don't think we can change a lot on that front in the coming years. I don't know if we should even try.
Windows actually has a pretty nice UI, and although some of the underlying system is different than more unixy variants, that doesn't make it bad. I use Linux (htpc, and work dist) as well as Windows and macOS daily... I prefer the Windows UI on the desktop, and Unity (Ubuntu) is close enough for me... macOS is the odd one out... I mostly stick to bash and node stuff lately, so I can get by anywhere. I use VS Code for editing, so again can get by anywhere.
I agree. For example, they block IPs in the hosts file, and when you check, those IPs have nothing to do with Microsoft...
They also block things that can be used in a good way, like pen/ink recognition. If you don't let Microsoft collect that data, how are they going to improve it? Then people will complain that the pen doesn't recognize anything, and the "solution" will be to use CCleaner and make even more of a mess, because we all know that script that BOOSTS Windows performance...
Maybe it will make more sense once it fully sinks in, but I think in general it is a mistake to make developers think about when and where certain things can be omitted. It's more straightforward to simply do one thing, consistently, following the "explicit is better than implicit" mantra.
What happened to optimizing for mental overhead instead of file size? This should simply be a build step, part of your minification and concatenation dance, instead of having to weigh all of this every time I decide whether to close my <p> tag or not:
A p element's end tag may be omitted if the p element is immediately followed by an address, article, aside, blockquote, details, div, dl, fieldset, figcaption, figure, footer, form, h1, h2, h3, h4, h5, h6, header, hgroup, hr, main, menu, nav, ol, p, pre, section, table, or ul element, or if there is no more content in the parent element and the parent element is an HTML element that is not an a, audio, del, ins, map, noscript, or video element, or an autonomous custom element.
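For what it's worth, in the common cases that rule collapses to something like this hypothetical snippet, which is valid HTML5:

    <p>One paragraph
    <p>A new p start tag implicitly closes the previous one
    <ul>
      <li>li end tags are optional too
      <li>inside a list
    </ul>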
This reasoning is why I write all the web pages for my personal projects using XHTML. I can't be bothered to remember which tags are self-closing, which tags need explicit closing tags which can't be combined into the opening tag, etc. Everything's consistent in XHTML.
Agreed. Years ago I started doing all my projects in XHTML because I found that debugging silent HTML errors was not fun.
Silent errors include things like malformed tags and attributes, incorrect nesting structure (thus also messing up where CSS rules are applied), and unescaped left-angles and ampersands.
About a decade ago it was a pretty commonplace thing to happen.
HTML 4.0/4.01 was kind of messy, and could have rendering issues. So going with an (X)HTML validator was a common thing, as well as a marketable value proposition to clients.
HTML 5 had much "saner" implementations, so validators fell by the wayside as they weren't as necessary for compatibility.
The Firefox source viewer (not the developer tools DOM viewer) does validation. It will highlight bad tags in red and if you hover over them it shows the error.
I'm pretty sure Moz has an HTML validator built into its SEO tool, so it may be more common than you think solely because of that. We validate HTML at my company; if we don't, we'll hear about it next time our boss runs an SEO check.
It's why I stopped using hand coded markup at all, aside from markdown for article data. Everything else is data pushed into templates that generate "whatever the code is that the client needs to receive", and let the build tools figure it out. That's what they're for.
As long as you're sure it will never be interpreted as HTML, you can do that. Which is harder than it should be, because doctype declarations are ignored. One lost header or unforeseen embedding and everything after that <script /> tag gets eaten.
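To spell out the failure mode, a sketch (app.js is a made-up name):

    <script src="app.js" />
    <p>This paragraph never renders: in a text/html parse the "/" above
    is ignored, the script element stays open, and all of this is
    swallowed as script content until the next real closing script tag.</p>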
I write and edit all my Genshi [1] templates as xhtml, so I can validate and process them as crisp clean hi-fidelity xml, and then pump them out to browsers with the html serializer [2].
If I were inclined to follow Google's guidelines on omitting optional tags, it would be easy to write a stream filter that removed them [3].
But I prefer source templates to have all the explicit properly indented structure, so they're easier to validate and process with XML tools (and by eye), and unintentional mistakes don't sneak through as easily.
For the same reason, I also prefer not to write minified JavaScript source code: that should be done by post-processors, not humans. ;)
You can use text/html. Technically it wouldn't be an XML resource, but it's correct for HTML5. You can also use an xhtml doctype[1]. And don't forget that the HTML5 namespace is http://www.w3.org/1999/xhtml! [2] So basically you use your xhtml tools and just publish as HTML5.
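A minimal polyglot document along these lines (just a sketch) is well-formed XML and also valid HTML5 when served as text/html:

    <!DOCTYPE html>
    <html xmlns="http://www.w3.org/1999/xhtml" lang="en" xml:lang="en">
      <head><title>Polyglot page</title></head>
      <body>
        <p>Process me with XML tools, serve me as HTML5.</p>
      </body>
    </html>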
"I can't be bothered to remember which tags are self-closing, which tags need explicit closing tags which can't be combined into the opening tag, etc"
You're right, let's not bother ourselves with these small things, because === and == do the exact same comparison in JavaScript and all browsers are exact replicas when implementing HTML, CSS and JavaScript.
Beyond all the sarcasm, in reality, web programming is a hassle. But other programming languages and markups have their quirks as well. I'm glad you found a solution, but it doesn't mean we shouldn't look at the fine details of a specification.
I don't understand your argument. Yes, web programming has lots of warts and subtle behaviors and inconsistencies. So shouldn't we jump on a chance to remove a small part of that from our day-to-day development? OP isn't advocating ignorance of the spec, just a way not to need to reason with it as often.
You already have to consider all of those cases about the <p> tag: because they auto close when they hit one of those elements, that means that <p> tags can't contain any of them. If you don't know about this while using <p> tags, you can be in for a world of fun mysterious issues.
But all those tags are things that no sane developer would put inside a p tag anyways, so you don't really have to think about them.
The real mental overhead is incurred when reasoning about the tag following the p, which could be anything. "Hmmm, I have a nav tag coming after this p tag. Does that implicitly close it?"
Although if you had a good autoindenter, you could catch any mistakes by how it was indented. "Oh, that nav tag is on the same indentation level as the p tag, I guess it does implicitly close it."
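Concretely, for the nav case (hypothetical markup):

    <p>Intro text, no end tag needed here
    <nav><!-- this start tag implicitly closed the p above -->
      <a href="/">Home</a>
    </nav>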
I have done web dev on and off for over 15 years and I've never even thought about what happens when you put an h1 in a p. In my opinion the browser should crash and the operating system should BSOD. I have always been severely annoyed by the amount of shit browsers put up with. I don't understand why XHTML Strict didn't get the traction it deserved and why they didn't continue along that line with HTML5.
Because the world is made up of messy people. And the value of allowing messy content was perceived as outweighing the value of consistency and reliability. I happen to agree.
I ran into this when working on some software that put user comments in <p> tags. I added some allowed markup that came out as <div> tags for a collapsible section. It didn't strike me as a particularly insane feature, but I about lost my mind trying to figure out why the <div> tags appeared to negate the <p> tag styling for all of the text after it.
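A sketch of that trap, with made-up markup:

    <!-- what was written: -->
    <p>comment text <div>collapsible section</div> trailing text</p>

    <!-- roughly what the parser builds: the div start tag force-closes
         the p, so "trailing text" falls outside the paragraph and
         loses the p styling -->
    <p>comment text </p>
    <div>collapsible section</div>
    trailing text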
How is a build step that turns your HTML into a smaller amount of HTML with the exact same behavior (by removing optional tags) different from a "minification" step that turns your HTML or JS into a smaller amount of HTML/JS with the exact same behavior?
Smaller file size -> faster loading (in theory... if gzipped, it's probably redundant).
Possibly faster parsing, because the parser has less HTML to go through (also probably negligible, because I'd be pretty sure that reading a string from memory is not the bottleneck in parsing, compared to logic, memory allocations, etc.).
It could make a difference for Google's server infrastructure though.
If they have to download a tiny bit less, and save a tiny bit on CPU cycles and memory for each page, it might still lead to considerable savings.
> I'm wondering if there's some performance gain by the browser not having to parse the implicit optional tags.
The motivation behind this style is not browser parsing perf - it's network perf. The smaller your HTTP response, the fewer packets (and round trips) required to transmit it.
If your output is compressed (which it should be if you're worried about response size) then omitting end tags has much less impact, I believe. All of the tags should get compressed well because they're repeated so often, and they should be much smaller overall than your non-repetitive content.
But note that at the scale at which Google moves data around, or even "the web as a whole", shaving a few bytes off of every single gzip stream can still add up to significant network relief.
I suspect their advice is for their benefit, not other website devs. They can save a lot of space in their archive if everyone's pages were smaller. Nothing compared to better image compression though.
No - a few bytes on a web page are insignificant compared to the data volume of images and movies. This is all about getting pages to load faster on mobile.
If you gzip your output like you should, how much does that even buy?
There's usually something better to use your time for instead of trying to shave 500 bytes out of your page.
It's not a competition, though. If there's something better to do, also do that. However, that still leaves the question of how many bytes are actually saved in transport, especially with gzipping. The beneficiary here is absolutely not individual developers or even individual sites, but the data transferred by entire data centers over the course of a day, week, month, etc. If this recommendation can bring down the total byte transmission for "the web" by 0.001%, for instance, that's still a boatload of bytes that no longer bog down the network.
When you're looking at fractions of a percent, remember to consider other options. Set up brotli, for example. Or redesign your site to have a leaner layout. You might not ever reach the efficiency level where optimizing optional tags is the best use of dev time.
And the overhead of tracking which tags are optional in which circumstances is not particularly small. Consider that the extra complexity could impede more optimizations in the future, especially now that your markup requires a more complex parser than it could have needed.
Have you looked at the size of Youtube and Netflix videos?
According to this study [1], 70% of web traffic is video streaming. Only 8% is web browsing (which might include images, because they are not mentioned anywhere else; I didn't find any info on that).
Just because the vast majority of roads are for cars doesn't mean we should therefore not try to optimize the bike and pedestrian lanes.
Sure, a lot of the traffic is streamed data rather than HTML, but 30% of close to a zettabyte of data in a single day (for the internet as a whole) is still hundreds of petabytes that can be made drastically smaller. When the numbers are that large, even optimizing something as "insignificant" as 0.01% of the traffic means tens or even hundreds of terabytes not pumped through the network every day.
As always: optimizations are not a matter of "one or the other", they're about "do all of them". Make the build tool apply all optimizations and minifications, and make the client-server connection negotiate as much compression as possible. Don't stop at just gzip when further improvements are trivial (like this one).
The double-negative phrasing Google and the spec use makes it sound weirder than it is. You could phrase it as "only use tags that are needed for the document to be parsed correctly", which makes explicitly including an <html> tag with no attributes, or an information-free stack of closing tags, seem like a strange thing to do if it weren't tradition.
File size? It's not much, but it would still strip some stuff.
I'm still bitter that HTML/XML works based off of explicit closing tags (where you can mistakenly close the wrong tag) instead of something like braces.
Use a build tool (which you should be doing anyway if you hand-write any markup, because you need to validate it) and make it rewrite </> to the relevant closing tag, if necessary... problem solved? (and yes, you'd be free to even leave </> off in many, many places: https://www.w3.org/TR/html5/syntax.html#optional-tags).
Alternatively, don't use HTML at all. Use pug (formerly "jade") or something and now you're free from all those inconvenient angle brackets.
After 20 years of composing HTML, the world can do better than writing full HTML: use technology like Jade and stop worrying about what goes to the browser...
Why not worry about what goes to the browser? In my eyes, what actually runs in the browser is the only thing that matters in the end. You could still write in something like Jade, then transpile, minify, and strip unneeded tags, all with automation.
I didn't mean not to care what goes to the browser; I meant that if tools like Jade do it right for us, the rest of us no longer have to care about those little details.
Frankly, I'm amazed that HTML's verbose style of writing still stands after all these years in a fast-paced industry.
Hadn't heard of Jade myself so your post inspired me to go looking. On http://learnjade.com/ the front page example shows that Jade doesn't take advantage of this ability to omit the closing </p> tag. So while I agree with you that html is not the best form for authors to write in, Jade itself still has room for improvement.
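For reference, Pug/Jade source never contains end tags at all, yet as far as I can tell it still emits them. A source like this hypothetical snippet:

    p No closing tag is ever written in the source
    p But the compiler still emits explicit end tags

compiles to:

    <p>No closing tag is ever written in the source</p>
    <p>But the compiler still emits explicit end tags</p>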
That is an option of course, but it then requires further processing by another tool. And Jade is often used in real time, which means that the need for further processing will be a burden. Is there any reason for the tool to output less-than-optimized code in the first place? What value is there in producing HTML that needs to be optimized by another tool?
By the way, Jade is in the process of being renamed to Pug because of a naming conflict with someone who holds the rights to the Jade name in another context.
The value is that optimizing would make jade/pug's code more complicated, whereas a generic tool that minimizes html according to these rules would work on whatever preprocessing tool you use (PHP, ejs, erb, handlebars, pug) that spits out HTML.
Do one thing and do it well, pretty much.
EDIT: editing since I can't reply to the child post.
The thing is, if jade does it well on its own, then all the other tools won't profit from it. I believe jade should focus on outputting easy to understand, well-indented HTML.
When debugging HTML problems (missing attributes or whatnot) in development, I always disable any kind of minifiers. If jade implements a minified output, it would need to be optional, further increasing complexity.
If minification is a build step, I can just disable that step. Easy peasy.
Yes, that all makes sense. But there is also a cost to such a design. To my mind, doing it well means outputting optimized code.
Edit:
Worrying about the fact that Jade/Pug optimizations won't benefit others completely misses the point. Any improvements to its parser won't help anyone else either. The question is how to make the best tool for the job.
Perhaps the inefficiencies of outputting sub-optimal HTML don't matter much in reality. But if optimal HTML output were easy to do we would expect it to be done, right? So the only question at all worth considering is how difficult it is to achieve optimal HTML output.
My gut tells me that better output can't be that much harder, but I have never looked at the code at all so I may be dead wrong.
But you don't need or care about optimized html while developing. Much like you only minimize JavaScript when deploying, minimizing the html can be one step in the production build process that you pretty much set up once then forget about.
You still pay the cost in processing time of the optimization tool with every real time request. Pug is adding in unnecessary HTML elements which the next step in the pipeline removes. It's clearly inefficient.
Whether that is really important is another question; probably in the big scheme of things it's not worth worrying about. But I wouldn't dismiss it without knowing for sure that the costs in complexity to Pug are actually significant. That would require someone who knows the codebase to comment.
> A p element's end tag may be omitted if the p element is immediately followed by a block-ish element, or if there is no more content in the parent element.
> This doesn't apply if you are doing weird stuff in a non-block-ish element, or a media element, or a custom element.
It's really just better to keep the closing p tag, so you don't have to worry about the consequences when you edit that part later...
Does not typing </p> save anything? No.
<p>
It naturally acts as a clean way to segment
paragraphs of text
<p>
And most of the tag-closing rules are roughly
matched with the rules of using p tags altogether.
<p>
e.g. you can't have a div within a paragraph, so
closing or not closing, divs can only come after
paragraphs!
It actually saves at least 4 bytes per closing tag. On a larger webpage, that could easily add up to saving hundreds or thousands of bytes per request. That's a significant savings, especially for mobile.
I just took a sample page out of here which has a bunch of p tags opened and closed, gzipped the original and the one with </p> stripped; the difference was 39 bytes.
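For anyone who wants to repeat the experiment, roughly (assuming the page is saved as sample.html):

    gzip -9 -c sample.html | wc -c
    sed 's|</p>||g' sample.html | gzip -9 -c | wc -c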
Ironically, if end tags were truly non-optional, HTML might actually compress better, because it would have less entropy (fewer choices). In practice, it would allow a compression filter to represent the tree structure in a less redundant form with fewer corner cases to deal with (much like compressors do for binaries, for example).
The page is very beautiful in its versatility. No fancy JS hacks lock in the page's style: you can use Firefox's reader view, for instance. I keep my browser windows around 1000x1000 pixels because the vast majority of sites would otherwise serve me overlong lines of text.
And that's fine, as long as they scale properly and allow narrow browser views. The only criminals are the ones that force wide paragraphs... sometimes to the point that I have to horizontally scroll a paragraph.
I completely agree. The advantage of sites like these is that just 3 lines of code are enough to tweak them to my preferences. I doubt it would be so simple for sites with mountains of existing CSS.
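For example, a user style as small as this (values are just my preference) already fixes line length on a page like that:

    body {
      max-width: 40em;
      margin: 0 auto;
    }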
It's also worth noting that Firefox's "reader" mode works perfectly with this site.
Any ideas on how you are going to promote it? Are you good at marketing? At the very least you need to add a demo dashboard to highlight the available features, so people don't have to spend time making an account first.
Judging from similar projects I've pursued, writing code seems to be the easiest task these days.
Yea, I've done my share of marketing. I don't know if I would claim I'm good at it or not. Right now, everything is a bit rough, so it needs some work. I just wanted to get something out and start getting people's thoughts. If I decide to move forward with it, I'll clean everything up and create a better landing page for it, explaining features, etc.
This isn't anything new. There are other monitoring apps out there, but some of them are a bit expensive. I'm thinking about charging half of what the others charge.
If you do decide to move forward with it, check out https://www.indiehackers.com/businesses for inspiration, there are a few people doing similar stuff. Some direct links:
As a marketer, thanks for saying that. Many on the technical end are dismissive of the value of marketing, with two predominant views: 1) if it's a good product, the customers will come, or 2) I can do that myself.
To be fair, this seems to be a shrinking pool over the last few years for some reason. And there are some god-awful marketers out there.