> It's important to note the nature of the failure.
Definitely! UCSF had a security firm send out a fishy-looking phishing email. My email client pointed out the URL did not match the link text, whois told me it was a security company, and I opened the URL in a VM.
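The mismatch check the email client did can be sketched as a simple heuristic: flag any anchor whose visible text looks like a URL but points at a different host than its href. A minimal sketch (the domains below are made up for illustration):

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkTextChecker(HTMLParser):
    """Flags <a> tags whose visible text looks like a URL
    but whose href points at a different host."""
    def __init__(self):
        super().__init__()
        self.href = None
        self.text = ""
        self.mismatches = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.href = dict(attrs).get("href")
            self.text = ""

    def handle_data(self, data):
        if self.href is not None:
            self.text += data

    def handle_endtag(self, tag):
        if tag == "a" and self.href:
            shown = self.text.strip()
            # Heuristic: anchor text "looks like" a URL if it has a dot and no spaces
            if "." in shown and " " not in shown:
                shown_host = urlparse(shown if "://" in shown else "//" + shown).hostname
                real_host = urlparse(self.href).hostname
                if shown_host and shown_host != real_host:
                    self.mismatches.append((shown, self.href))
            self.href = None

checker = LinkTextChecker()
checker.feed('<a href="https://evil.example.net/login">https://mail.ucsf.edu</a>')
print(checker.mismatches)
# [('https://mail.ucsf.edu', 'https://evil.example.net/login')]
```

This catches the classic trick of showing one URL as the link text while the href goes somewhere else; it obviously can't catch lookalike domains or text like "Click here".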
“You just got phished!” eye roll
I wouldn’t be surprised if most of those employees at GitLab were not so much phished as curious.
The article says 17 employees opened the link, and 10 of those typed in their credentials. The 20% the headline is talking about are those 10, not the 7 that didn't do anything.
They did a test like this at a company I worked at. I ended up entering fake credentials because the thing seemed so shady, I was curious what its deal was.
I opened the email and I forwarded the email to abuse at corporate domain just like the corporate website says and my manager still got an email saying I failed the test.
Maybe because the tracking pixel's remote image loaded? I remember reading an article where people sent an email to Apple and it got passed around internally, and IIRC either Steve Jobs or someone who reported directly to him opened it, not knowing they were sending out a makeshift read receipt every time the email was opened.
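For anyone unfamiliar with the mechanism: a tracking pixel is just a tiny remote image whose URL carries a unique token, so the sender's server log records who loaded it and when. A minimal sketch, with all URLs and names hypothetical:

```python
import uuid

def add_tracking_pixel(body_html: str, base_url: str):
    """Append an invisible 1x1 image whose URL carries a unique token.
    If the recipient's mail client loads remote images, the request
    hits base_url and the server's access log records the token --
    a makeshift read receipt."""
    token = uuid.uuid4().hex
    pixel = f'<img src="{base_url}/pixel.gif?id={token}" width="1" height="1" alt="">'
    return body_html + pixel, token

html, token = add_tracking_pixel("<p>Hello</p>", "https://tracker.example.com")
print(token in html)  # True
```

This is why "don't load remote images" is the default in many mail clients, and why a phishing test can mark you as having "opened" the email even if you never clicked anything.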
I'm not even going to get to the point of wondering whether every component is faked or not, since my thought process will stop at "I'm not going to ever enter credentials into a site I got to from a random link in an email". Which seems to me to be a far better policy than trying to figure out whether a particular site I got to from a random link in an email is faked or not.
Nobody is demanding you do. But if you go around claiming people "got phished", then you should be sure.
I've also entered fake credentials into a clearly faked login form to see what'd happen. Would it redirect me to the right site? Just claim the information was wrong? Send me to a mock-up of the intranet I was trying to access? You can call it bad policy if you want (although you don't know about my precautions), but it doesn't mean I was phished.
Isn't this fairly common? I've now worked at several organizations where sensitive information was stored on air-gapped networks. Software updates or data were moved in and out using pre-approved external drives.
I tend to think this is good software dev practice anyway. You ought to be able to test everything on your testing servers, and if this doesn't adequately reproduce the production environment, it's a problem with your test system.
It is common in the sense that it's done frequently enough that we don't need to reinvent it. Most orgs don't want that level of security & inconvenience. FWIW I personally have never encountered it.
This is kinda ridiculous. You first need the email client to have a bug which enables some kind of cross-site scripting just from rendering an email, then a sandbox bug for a webpage to leak into the underlying system, and THEN a bug for the VM to escape to the parent OS.
At that point, I think it's as likely that your airgapped email laptop can hack into your work machine through local network exploits.
If you think a hacker is going to manage all that, you might as well assume that the hacker can trick Gmail into opening the email for you. There's a point at which we have to realistically assume that some layer of security works, and go about our lives.
Like other words whose meaning has expanded (e.g., serverless, drone), airgap can simply mean a segregated network, not just a machine that's completely unplugged.
1. Nothing about that post says it's just network-layer segmentation. C2S is its own region, with multiple AZs (data centers). Why would you believe those are colocated with commercial AWS and not, as they write, air-gapped?
2. Please don't contribute to giving marketing license to remove what little meaning words still have.
The wrong one, I suspect. "Air-gapped machine" is a term reserved for a PC never connected to the internet, hence the gap. Usually for extreme security concerns like managing a paper crypto wallet or grid infrastructure.
It is a paranoid stance. But if you are a developer in a large company, think about how likely it is that your computer has (direct or not) access to data/funds worth more than $100k to someone, and what kind of exploits that money can buy.