
I see an interesting library for risk quantification (which relates to what I work with).

I click the link.

A sniff/account wall stops me. I hit Ctrl+W before I even think about doing it (yay for muscle memory!)

For people who went to the trouble of viewing the content: is the library worth it?

These days, anything behind a paywall/sniffwall/must-be-authenticated-to-read-this wall warrants a Ctrl+W from me.


Unless I'm reading it wrong, the text says "Please implement TLS". Not everything is evil, toxic and in need of your downvote hammer. There's no injustice to be fixed here.


Just because somebody tries to phrase it in a polite way doesn't make it any less of a demand.


How exciting! I can't wait for the next brilliant idea. Maybe... "Notify your friends when you're thirsty at the click of a button! Subscribe now, only $4.99 a month!"


The author is right. It just sucks to read what's true. He's not entirely right but he's on the right track.

You'll find a lot of tiny companies with dev blogs where they explain, in depth, the scaling strategy of their unknown, sporadically used product.

Yes, React and Vue are a few hundred KB gzipped. But are we seriously going to pretend that projects built with React stay in the few-hundred-KB range? You can't even predict how big a project will be. It's not React's or Vue's fault, it's simply how it is: add assets, add CSS, add additional libraries for <insert reason here>, and it's not unrealistic to end up needing to download 2-3 MB.

A lot of people are designing their products to show off to their peers or for google (SEO). Users get left out, and we (users) are starting to feel it. Heed the warning or turn a blind eye.


In the good old days, there were plenty of terrible user experiences built with traditional technology. All of those apps failed to make a mark in people’s memories and have been forgotten.

I do remember stuff built in FrontPage breaking terribly if you resized your browser. I remember Joomla outputting so much HTML that the browser would choke on the number of nodes. I remember building chat apps with iframes and long polling. It was all there.

Better technology has only led to better outcomes on both the low and the high end, and it's a lot easier to debug and fix the low-quality outcomes.


> But are we seriously going to pretend that projects built with React stay in the few-hundred-KB range?

The author said frameworks == bloat. But the above suggests frameworks != bloat. So I don't know what point you're still trying to make.

> A lot of people are designing their products to show off to their peers

Again, I'm not sure how using new frameworks is "showing off for your peers".

> or for google (SEO)

Client-side rendering only makes SEO harder. Again, I'm not sure what argument you're trying to make.

Present clear reasoning and evidence instead of vague, preconceived assumptions.


I remember the sound my 28800 modem made before connecting.

I knew I had exactly one second short of 60 minutes online before I got disconnected from my student dial-up.

I had ICQ. We used AltaVista (and Astalavista to crack the software we illegally got). Relevant information and socializing went through newsgroups, as did content sharing. Many don't know why WinZip/WinRAR has the option to split an archive into multiple files (so they could fit on 1.44 MB floppy disks and avoid upload limits). Later on I used Miranda so I could use ICQ/IRC/AIM/MSN from the same client. Browser wars were nonexistent. The first "awesome" browser shell I used was Maxthon, and content was shared at LAN parties or leeched off IRC through obscure XDCC search engines. The FPS games I played (Quake 1, 2 and 3) lagged; I had 200 ping, which is how I learned what latency is and why it matters. It dropped to 50 once I got ISDN.

Now, why did I type all this? Not to present myself as "the old internet user". I don't consider myself a user of the internet from "before"; I'm well aware of an even older generation whose internet looked different still. I'm just saddened that BuzzFeed, one of the worst ad-ridden sites in existence, is producing this kind of article, spreading false knowledge (which is what it does anyway). It's not even an article. It's made by someone who used Facebook when it looked slightly shittier than it does now.

That's not "old" internet. The consumer-generation that's lifeblood of leeches such as buzzfeed/youtube has no experience nor right to write about "old" internet. They simply haven't experienced how internet used to be when total population online was well below 50 million people and when broadband was a luxury.


> The consumer generation that's the lifeblood of leeches such as BuzzFeed/YouTube has neither the experience nor the right to write about the "old" internet

While I share your sentiment towards BuzzFeed and clueless millennials, I don't think it's entirely fair to say they have no right to write about the "old" internet.

I began to experience the web around 1997, and it was different in a lot of ways compared to now. More personal pages, less centralization, little content policing, Netscape, Real Player, etc.

I'm sure some old fart from the 80's would tell me that I didn't know the "old" internet because I'd never used Telnet to log in to a BBS. While there's truth to that, it doesn't mean I don't have my own perspective of what's "old".


When I used "they have no right" - what I meant was this: they haven't experienced anything fundamentally different 10 years ago, therefore they can't be writing about something that "died" simply due to the fact there was nothing different to experience back then that doesn't exist today. The article seems forced, as any article from fake-factory would be. I'm always skeptical towards such content, that attributes to my negativity when typing all this. I merely find it ironic that someone who has no clue about the matter is writing about it :)


> Many don't know why WinZip/WinRAR has the option to split an archive into multiple files (so they could fit on 1.44 MB floppy disks

Good archiving programs came with the option of using erasure/error correction codes so that the archive would still be recoverable, even if some of the floppies were returning read errors or became unreadable altogether.
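To make the idea concrete, here's a toy sketch using simple XOR parity (real tools used stronger Reed-Solomon style codes, e.g. RAR recovery records or PAR files); all the names and data here are made up for illustration:

    def make_parity(parts):
        # XOR all equal-sized volumes together and keep the result as an extra "recovery" volume.
        parity = bytearray(len(parts[0]))
        for part in parts:
            for i, byte in enumerate(part):
                parity[i] ^= byte
        return bytes(parity)

    def recover_missing(surviving_parts, parity):
        # XOR-ing the parity with every surviving volume reconstructs the single lost one.
        return make_parity(surviving_parts + [parity])

    volumes = [b"AAAA", b"BBBB", b"CCCC"]   # pretend these are equal-sized archive volumes
    parity = make_parity(volumes)
    assert recover_missing([volumes[0], volumes[2]], parity) == volumes[1]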


But it's not educating anyone. Only the handful of people who deal with this particular area on a daily basis are capable of understanding what the blog post is about.

The gist of the post is that if you have all the info needed to reconstruct the secret used to derive the 6 digits, you can copy it around and have exact copies of the device that produces the one-time passwords.
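To make that concrete, here's a rough sketch of standard TOTP (RFC 6238 layered on RFC 4226's HOTP), not tied to any particular product; anyone holding the base32 shared secret produces exactly the same 6 digits as the "real" device:

    import base64, hashlib, hmac, struct, time

    def totp(shared_secret_b32, period=30, digits=6):
        key = base64.b32decode(shared_secret_b32, casefold=True)
        counter = int(time.time()) // period              # time-based counter (RFC 6238)
        msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter (RFC 4226)
        digest = hmac.new(key, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                        # dynamic truncation
        code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
        return str(code).zfill(digits)

    # A copied secret is a perfect clone of the token:
    print(totp("JBSWY3DPEHPK3PXP"))    # example secret, not a real one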

Obtaining both the shared secret and the user's credentials is difficult to achieve. Even if it were to happen, you, as a service provider, have undeniable evidence that the user was negligent because leaking out the secret for MFA isn't exactly easy to do.

Data leaks caused by malicious employees are often the attack vector in these cases. I'd argue that safeguarding the data so employees can't easily access it is the actual big deal in data breaches, not the mechanisms themselves (the RFC 4226 and RFC 6238 algorithms and their derivatives) that rely on the data being kept safe.

Attacking a service whose shared secrets have been leaked is still extremely hard: you have to match credentials to the corresponding shared secret out of hundreds of thousands of leaked ones. The only way to attack the service is to brute-force it, and that doesn't go unnoticed.

Plausible attacks are extremely rare and difficult to pull off, and edge cases that only become possible when extremely sensitive data leaks aren't an argument against MFA.

The post we're commenting on mentions U2F; that approach completely obviates all the problems mentioned in this blog post, on top of being vastly easier for the end user (stick the token in the USB port, press the flashing button, job done).


> Even if it were to happen, you, as a service provider, have undeniable evidence that the user was negligent because leaking out the secret for MFA isn't exactly easy to do.

No. It's a shared secret, which means the accuser might also be the perpetrator: they too could leak the secret, through malice or incompetence. Maybe that's good enough to kick people out of a fan forum or something, but it ought not to be enough in, say, a court of law.

The beauty of FIDO (U2F/WebAuthn) is that it does public-key crypto, so there is no shared secret to get stolen. This makes it easy to reason about as a physical object. Does anybody have my private key? No, it's in my pocket; I can feel it there next to my wallet.

The same can't be said for my password (even if you've used argon2i and locked the databases down tight, every single time I log in, your systems, and any employees with access to them, know the password for a fraction of a second), let alone PINs, TOTP secrets, messages sent over SMS, or just hoping nobody but me remembers my mother's maiden name...
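To illustrate the difference, here's a toy challenge-response sketch of the public-key idea, using Python's cryptography package (real WebAuthn adds origin binding, signature counters, attestation and CBOR encoding on top of this):

    import os
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    # "Authenticator": the private key never leaves the token.
    private_key = ec.generate_private_key(ec.SECP256R1())
    public_key = private_key.public_key()      # the server only ever stores this

    # "Server": send a fresh random challenge, get a signature back.
    challenge = os.urandom(32)
    signature = private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

    try:
        public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
        print("login ok - and the server never held anything secret")
    except InvalidSignature:
        print("reject")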


Can you elaborate? I've yet to see Apache + mod_php come even close to <anything> + PHP-FPM, so I'm really interested in what you guys are doing.


Mod_php was always faster at executing scripts. There is less overhead, as you don't have the inter-process communication that you have with FPM.

For light scripts this is far superior to FPM. On the other hand, always loading PHP has its downsides too, as memory consumption can get quite high depending on the number of threads.

This was also the reason for the FPM hype a long time ago: don't waste memory on PHP when PHP isn't needed. It had nothing to do with running PHP faster.

What you should choose depends on your needs.


> Mod_php was always faster at executing scripts

I've never, ever witnessed mod_php come close to being fast, let alone faster than PHP-FPM. There's more work to do to prepare everything Apache needs to hand the request to the PHP engine it embeds in its own process. Once opcache is up and running, PHP-FPM blows mod_php away (and there are tools to warm up the cache before letting the PHP-FPM node go live).

> This was also the reason for the FPM hype a long time ago: don't waste memory on PHP when PHP isn't needed

I was present when the "hype", as you call it, hit. It had less to do with memory than with scaling. An added benefit was the ability to have PHP-FPM act as a multiplexer towards certain services (the database, to name one).

Today, there's no reason to use Apache with mod_php. It's slower and worse by definition; it can't be faster. If your results show it's faster, you're either testing it wrong or your PHP-FPM is running on a Raspberry Pi.


> Mod_php was always faster at executing scripts. There is less overhead, as you don't have the inter-process communication that you have with FPM.

The "overhead" of communicating via CGI to a PHP process has nothing to do with the speed of execution of the script itself.

> For light scripts this is far superior to FPM. On the other hand, always loading PHP has its downsides too, as memory consumption can get quite high depending on the number of threads.

It's not far superior, as the "overhead" of CGI is negligible in the real world. Plus, you can pool processes for better scaling. Also, if you are using prefork with mod_php (the most probable scenario), you are forking an entirely new Apache process, not just "loading PHP", with each request.

> This was also the reason for the FPM hype a long time ago: don't waste memory on PHP when PHP isn't needed. It had nothing to do with running PHP faster.

It's not hype: for a long time, mod_php required prefork because PHP was not thread-safe (even now it's still a pain to manage recompiling PHP to be thread-safe for mod_php + Apache), which means you could not take advantage of mpm_event or mpm_worker.


Yes, CGI just allows you to scale in a way mod_php doesn't. Apache & mod_fcgid are a great combo, IMO.


Have you tried mod_proxy_fcgi? Having similar handling for FPM and other proxied app servers (e.g. Ruby or Python stuff) is quite handy.


In most cases, the SP doesn't know the user has been removed from the IdP. If there's a need for such a feature, you resort to shared sessions: the IdP controls the storage service where the SPs save their sessions (say, Redis). Once the user is removed from the IdP, the IdP deletes the session record in Redis and the SP loses all the user info.
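A minimal sketch of that shared-store arrangement, assuming Redis and a session id both sides agree on (the key names and functions here are made up for illustration):

    import json
    import redis

    store = redis.Redis(host="localhost", port=6379)

    def idp_create_session(session_id, user, ttl=3600):
        # The IdP writes the session record that every SP will read.
        store.set(f"sso:session:{session_id}", json.dumps(user), ex=ttl)

    def sp_load_user(session_id):
        # An SP resolves the user on each request; a missing key means "no session".
        raw = store.get(f"sso:session:{session_id}")
        return json.loads(raw) if raw else None

    def idp_remove_user(session_id):
        # Removing the user at the IdP deletes the record, so every SP loses the user info.
        store.delete(f"sso:session:{session_id}")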

This is rarely implemented in practice.

The single logout process is often flawed, since it depends on the SPs accurately processing the request and returning the user to the IdP once a session has been successfully destroyed. This often fails due to network connectivity, problems destroying the session at the SP, SPs not implementing SingleLogout properly, etc. What I've seen in many cases is that the IdP simply kills its own session for the user and stops right there, and the rest of the SPs handle it through the back channel.

I work in this area, so I'm relaying my experience from the past 10 years of implementing SSO for various enterprises.

In reality, the SAML protocol is quite straightforward, but something odd happens when people hear the term SSO. It's not magic, it's quite trivial, but it takes a bit of discipline to grasp fully and implement properly.


> it's quite trivial

I've been tasked with adding SAML 2.0 authentication to an existing application. I've got the whole SAML 2.0 specification printed and sitting on my desk right now. It sure doesn't look trivial.


When people say "Don't roll your own encryption" or "Don't implement $ALGORITHM, just use libsodium", I think that also applies to web authentication (be it OAuth or SAML).

If you are the 'client' that wants to authenticate an incoming user, then I'd point you to an existing SP like Shibboleth or SimpleSAMLPHP or even CILogon (which bridges SAML and OAuth). If you are handing out authentications, then you are an IdP and I'd point you to something like Shibboleth or ADFS or Okta.

It is good to have the relevant RFCs to hand, but I would not make an implementation from scratch, unless there was a very good reason for it.


I'm trying to build an "SP" or "relying party", as I've learned in the jargon of the domain. Building an implementation from scratch is the last thing I want to do; I feel like I'd rather pull out my own teeth. But the difficulty of finding any information about it has led me to start reading all the official specs as a kind of last resort.

In my initial research, I did discover Shibboleth, but was under the impression that it was an IdP only. I will check out the SP component, because I would love to not implement it from scratch.

At first glance, I see this: https://www.shibboleth.net/products/service-provider/ - it seems pretty opaque. The downloads at https://shibboleth.net/downloads/service-provider/latest/ don't really give much of a clue about how to build them or incorporate them into my app, which is written in a different language than any of these source files. I'll keep looking.


I was just tasked last week with implementing SAML auth for our applications and have settled on Shibboleth.

For the SP, I just installed it from yum after adding the CentOS repos.

There is a repo config generator at https://shibboleth.net/downloads/service-provider/RPMS/ that I used.

It installs as an Apache module, and I'm building a flow where my load balancers hit the Apache/SP, which proxies traffic to the application after successful authentication so we can create our login session.

I have a PoC going now, but the documentation isn't terribly clear to me, and I'm still confused about some options I need to tune.

For my initial testing, I built the SP and used https://samltest.id/ to test against an IdP.

Shibboleth SP docs: https://wiki.shibboleth.net/confluence/display/SP3/Home


If your language/environment supports using Apache HTTPD as a proxy, then you can use mod_auth_mellon to secure your web application. That's just one option; there are lots of others.


I have heard about these options, but the problem for me is identifying any of them and figuring out how to use them. My web server probably has to be IIS or Kestrel. I've been trying to figure this out for a few weeks now. And I've spent about a day trying to figure out how to do anything with the Shibboleth SP. I think I got it installed, but I can't really tell if it's doing anything or how to use it. And even then, there doesn't seem to be any information on how to actually use it for authentication in my application.

From my perspective as an application developer, writing an application that authenticates via SAML 2.0 is a nightmare, despite the ubiquitous claims of how simple it is.


If your app is .NET, look at the Sustainsys (https://saml2.sustainsys.com/en/2.0/) or ITFoxtec (https://www.itfoxtec.com/IdentitySaml2) libs. Unfortunately, there isn't clear architect/dev-level guidance at the protocol level on the key decisions that need to be made when implementing an SP that tightly integrates with your app. Give one of these a shot and post your questions on Stack Overflow.


Thanks. This looks more approachable. I'll give these a shot.


If you're willing to use PHP, there's also "the award-winning" SimpleSAMLphp (https://simplesamlphp.org/), which can operate as IdP or SP. I can tell you that many big names in the TV Everywhere space are using it.


You're reading the entire spec, but it's the flow that's trivial. Two systems exchange info, they use cryptography to assert that a message is coming from a valid, registered party, and the message carrier format is XML. You ask for info, you get XML back, you verify the sender, and if it checks out you trust that the info is valid. That's the gist of the protocol; the tedious part is parsing the XML you receive.
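As a rough sketch of that "get XML back, verify the sender, trust the contents" flow, assuming Python and the signxml library (a real SP should lean on something maintained like python3-saml, Shibboleth or SimpleSAMLphp instead):

    from signxml import XMLVerifier

    SAML_NS = {"saml": "urn:oasis:names:tc:SAML:2.0:assertion"}

    def read_assertion(response_xml, idp_cert_pem):
        # 1. Verify the XML signature against the IdP certificate from its metadata.
        verified = XMLVerifier().verify(response_xml, x509_cert=idp_cert_pem).signed_xml
        # 2. Only then trust what's inside: the subject and the attributes.
        name_id = verified.findtext(".//saml:Subject/saml:NameID", namespaces=SAML_NS)
        attributes = {
            a.get("Name"): [v.text for v in a.findall("saml:AttributeValue", SAML_NS)]
            for a in verified.findall(".//saml:Attribute", SAML_NS)
        }
        return {"name_id": name_id, "attributes": attributes}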


The flow/concept is simple indeed. But with SAML the devil really is in the details, and they created way too many details by offering way too many options in the spec.


What's the best practice for when the user session is destroyed at the SP's end? After receiving the LogoutResponse from the IdP, or before sending the LogoutRequest to the IdP?

