
The government official who insisted that commercial AWS/GCP/Azure couldn't possibly be trusted with keeping the information will be keeping their head low for a few days then...

"The Interior Ministry explained that while most systems at the Daejeon data center are backed up daily to separate equipment within the same center and to a physically remote backup facility, the G-Drive’s structure did not allow for external backups."

This is absolutely wild.



The issue here is not refusing to use a foreign third party. That makes sense.

The issue is mandating the use of remote storage and not backing it up. That’s insane. It’s like the most basic amount of preparation you do. It’s recommended to even the smallest of companies specifically because a fire is a risk.

That’s gross mismanagement.


This. Speaking specifically from the IT side of things, an employer or customer refusing to do backups is the biggest red flag I can get, an immediate warning to run the fuck away before you get blamed for their failure, stego-tech kind of situation.

That being said, I can likely guess where this ends up going:

* Current IT staff and management are almost certainly scapegoated for “allowing this to happen”, despite the program in question (G-DRIVE) existing since 2017 in some capacity.

* Nobody in government will sufficiently question what technical reason is/was given to justify the lack of backups, why that was never addressed, why the system went live with such a glaring oversight, etc., because that would mean holding the actual culprits accountable for mismanagement

* Everyone involved is unlikely to find work again anytime soon once names are bandied about in investigations

* The major cloud providers will likely win several contracts for “temporary services” that in actuality strip the sovereignty the government had in managing its own system, even if they did so poorly

* Other countries will use this to justify outsourcing their own sovereign infrastructure to private enterprise

This whole situation sucks ass because nothing good is likely to come of this, other than maybe a handful of smart teams led by equally competent managers using this to get better backup resources for themselves.


> * Everyone involved is unlikely to find work again anytime soon once names are bandied about in investigations

They might (MIGHT) get fired from their government jobs, but I'll bet they land in consulting shops because of their knowledge of how the government's IT teams operate.

I'll also bet the internal audit team slides out of this completely unscathed.


> I'll also bet the internal audit team slides out of this completely unscathed.

They really, really shouldn't. However, if they were shouted down by management (an unfortunately common experience) then it's on management.

The trouble is that you can either be effective at internal audit or popular, and lots of CAEs choose the wrong option (but then, people like having jobs, so I dunno).


Likely it wasn't even (direct) management, but the budgeting handled by politicians and/or political appointees.


Which begs the question: does N Korea have governmental whistle-blower laws and/or services?

Also, internal audit isn't supposed to be the only audit; it's effectively pre-audit prep for the external audit. And the first thing an external auditor should do is ask probing questions about their systems and processes.


Wrong Korea, this is South Korea


I have never been to DPRK but based on what I've read, I wouldn't even press "report phishing" button in my work email or any task at work I was not absolutely required to do, much less go out of my way to be a whistleblower.


That's true, but by their nature, external audits are rarer so one would have expected the IA people to have caught this first.


I abhor the general trend of governments outsourcing everything to private companies, but in this case, a technologically advanced country’s central government couldn’t even muster up the most basic of IT practices, and as you said, accountability will likely not rest with the people actually responsible for this debacle. Even a nefarious cloud services CEO couldn’t dream up a better sales case for the wholesale outsourcing of such infrastructure in the future.


I'm with you. It's really sad that this provides such a textbook case of why not to own your own infrastructure.

Practically speaking, I think a lot of what is offered by Microsoft, Google, and the other big companies selling into this space is vastly overpriced and way too full of lock-in, but taking this stuff in-house without sufficient knowhow and maturity is even more foolish.

It's like deciding not to hire professional truck drivers, but then, instead of at least hiring people who can basically drive a truck, hiring someone who doesn't even know how to drive a car.


If this is true, every government should subsidize competitors in their own country to drive down costs.


Aside from data sovereignty concerns, I think the best rebuttal to that would be to point out that every major provider contractually disclaims liability for maintaining backups.

Now, sure, there is AWS Backup and Microsoft 365 Backup. Nevertheless, those are backups in the same logical environment.

If you’re a central government, you still need to be maintaining an independent and basically functional backup that you control.

I own a small business of three people and we still run Veeam for 365 and keep backups in multiple clouds, multiple regions, and on disparate hardware.


One side effect of the outsourcing strategy is to underfund internal tech teams, which then makes them less effective at both competing against and managing outsourced capabilities.


There's a pretty big possibility it comes down to acquisition and cost saving from politicians in charge of the purse strings. I can all but guarantee that the systems administrators and even technical managers suggested, recommended and all but begged for the resources for a redundant/backup system in a separate physical location, and were denied because it would double the expense.

This isn't to rule out major ignorance on the part of those in the technology departments themselves. Having worked in/around govt projects a number of times, you will see some "interesting" opinions and positions. Especially around (mis)understanding security.


By definition if one department is given a hard veto, then there will always be a possibility that all the combined work of all other departments can amount to nothing, or even have a net negative impact.

The real question then is more fundamental.


I mean, it should be part of the due diligence of any competent department trying to use this G-Drive. If it says there are no backups, it means it could only be used as temporary storage, maybe as a backup destination.

It's negligence all the way, not just with the G-Drive designers, but with the customers as well.


Backups should be far away, too. Apparently some companies lost everything on 9/11 because their backups were in the other tower.


Some foolishly believed that the twin towers were invincible after the 1993 WTC bombing.

Before 9/11, most DR (disaster recovery) sites were in Jersey City, NJ just across the river from their main offices in WFC or WTC, or roughly 3-5 miles away. After 9/11, the financial industry adopted a 50+ miles rule.


Jersey City still was fine and 50 miles can be problematic for certain types of backup (failover) protocols. Regular tape backups would be fine but secondary databases can't be that far away (at least not at the time). I remember my boss at WFC saying that the most traffic over the data lines was in the middle of the night due to backups - not when everybody was in the office.


Companies big enough will lay the fibre. 50-100 miles of fibre isn't much if you are a billion dollar business. Even companies like BlackRock who had their own datacenters have since taken up Azure. 50 miles latency is negligible, even for databases.


The WTC attacks were in the 90s and early 00s and back then, 50 miles of latency was anything but negligible and Azure didn’t exist.

I know this because I was working on online systems back then.

I also vividly remember 9/11 and the days that followed. We had a satellite dish with multiple receivers (which wasn't common back then) so had to run a 3rd party Linux box to descramble the signal. We watched 24/7 global news on a crappy 5:4 CRT running Windows ME during the attack. Even in the UK, it was a somber and sobering experience.


For backups, latency is far less an issue than bandwidth.

Latency is defined by physics (speed of light, through specific conductors or fibres).

Bandwidth is determined by technology, which has advanced markedly in the past 25 years.

Even a quarter century ago, the bandwidth of a station wagon full of tapes was pretty good, even if the latency was high. Physical media transfer to multiple distant points remains a viable back-up strategy should you happen to be bandwidth-constrained in realtime links. The media themselves can be rotated / reused multiple times.
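As a rough back-of-the-envelope (tape capacity, load size and transit time here are illustrative assumptions, not measurements), the station-wagon math still holds up today:

    # Effective bandwidth of driving a load of tapes vs. a network link.
    # Assumed numbers (LTO-9 native capacity, a 24-hour transit) are illustrative.
    TAPE_CAPACITY_TB = 18      # LTO-9 native, uncompressed
    NUM_TAPES = 500            # one station wagon / van load
    TRANSIT_HOURS = 24         # drive time to the off-site location

    payload_bits = NUM_TAPES * TAPE_CAPACITY_TB * 1e12 * 8
    effective_gbps = payload_bits / (TRANSIT_HOURS * 3600) / 1e9

    print(f"{NUM_TAPES} tapes x {TAPE_CAPACITY_TB} TB over {TRANSIT_HOURS} h "
          f"~= {effective_gbps:,.0f} Gbit/s effective bandwidth (latency: one day)")

That works out to roughly 800 Gbit/s of sustained transfer, just with a one-day latency.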

Various cloud service providers have offered such services, effectively a datacentre-in-a-truck, which loads up current data and delivers it, physically, to an off-site or cloud location. A similar current offering from AWS is data transfer terminals: <https://techcrunch.com/2024/12/01/aws-opens-physical-locatio...>. HN discussion: <https://news.ycombinator.com/item?id=42293969>.

Edit to add: from the above HN discussion Amazon retired their "snowmobile" truck-based data transfer service in 2024: <https://www.datacenterdynamics.com/en/news/aws-retires-snowm...>.


I’ve covered those points already in other responses. It’s probably worth reading them before assuming I don’t know the differences between the most basic of networking terms.

I was also specifically responding to the GP's point about latency for DB replication. For backups, one wouldn't have used live replication back then (nor even now, outside of a few enterprise edge cases).

Snowmobile and its ilk was a hugely expensive service by the way. I’ve spent a fair amount of time migrating broadcasters and movie studios to AWS and it was always cheaper and less risky to upload petabytes from the data centre than it was to ship HDDs to AWS. So after conversations with our AWS account manager and running the numbers, we always ended up just uploading the stuff ourselves.

I’m sure there was a customer who benefited from such a service, but we had petabytes and it wasn’t us. And anyone I worked with who had larger storage requirements didn’t use vanilla S3, so I can’t see how Snowmobile would have worked for them either.


The laws of physics haven't changed since the early 00s though; we could build very low latency point-to-point links back then too.


Switching gear was slower and laying new fibre wasn't an option for your average company. Particularly not point-to-point between your DB server and your replica.

So if real-time synchronization isn't practical, you are then left to do out-of-hours backups and there you start running into bandwidth issues of the time.


Never underestimate the potential packet loss of a Concorde filled with DVDs.


Plus, long distance was mostly fibre already. And even regular electrical wires aren't really much slower than fibre in terms of latency. Parent probably meant bandwidth.


Copper doesn't work over these kinds of distances without powered switches, which adds latency. And laying fibre over several miles would be massively expensive. Well outside the realm of all but the largest of corporations. There's a reason buildings with high bandwidth constraints huddle near internet backbones.

What used to happen (and still does as far as I know, but I've been out of the networking game for a while now) is you'd get fibre laid between yourself and your ISP. So you're then subject to the latency of their networking stack. And that becomes a huge problem if you want to do any real-time work like DB replicas.

The only way to do automated off-site backups was via overnight snapshots. And you're then running into the bandwidth constraints of the era.

What most businesses ended up doing was tape backups and then physically driving it to another site -- ideally then storing it in a fireproof safe. Only the largest companies could afford to push it over fibre.


To be fair, tape backups are very much ok as a disaster recovery solution. It's cheap once you have the tape drive. Bandwidth is mostly fine if you want to read them sequentially. It's easy to store and handle and fairly resistant.

It's "only" poor if you need to restore some files in the middle or want your backup to act as a failover solution to minimise unavailability. But as a last resort solution in case of total destruction, it's pretty much unbeatable cost-wise.

G-Drive was apparently storing less than 1PB of data. That's less than 100 tapes. I guess some files were fairly stable, so completely manageable with a dozen tape drives, delta storage and proper rotation. We are talking about a budget of, what, $50k to $100k. That's peanuts for a project of this size. Plus the tech has existed for ages and I guess you can find plenty of former data center employees with experience handling this kind of setup. They really have no excuse.
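A quick sanity check of that figure (treating "less than 1 PB" as a full petabyte, with rough assumed tape capacities and prices, not quotes):

    # Rough sizing of a tape-based DR setup for ~1 PB; all prices are assumptions.
    DATA_TB = 1000              # treat "less than 1 PB" as a full petabyte
    TAPE_CAPACITY_TB = 12       # LTO-8 native; LTO-9 would need fewer tapes
    TAPE_COST_USD = 60          # approximate street price per cartridge
    DRIVE_COST_USD = 4000       # approximate cost per tape drive

    tapes_per_full = -(-DATA_TB // TAPE_CAPACITY_TB)   # ceiling division -> 84
    generations = 4                                    # simple rotation scheme
    drives = 12

    total = tapes_per_full * generations * TAPE_COST_USD + drives * DRIVE_COST_USD
    print(f"{tapes_per_full} tapes per full backup, "
          f"hardware + media on the order of ${total:,}")

That lands under $70k for media plus a dozen drives, squarely in the range quoted above.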


The suits are stingy when it's not an active emergency. A former employer declined my request for $2K for a second NAS to replicate our company's main data store. This was just days after a harrowing recovery of critical data from a failing WD Green that was never backed up. Once the data was on a RAID mirror and accessible to employees again, there was no active emergency, and the budget dried up.


I don't know. I guess that for all intents and purposes I'm what you would call a suit nowadays. I'm far from a big shot at my admittedly big company but 50k$ is pretty much pocket change on this kind of project. My cloud bill has more yearly fluctuation than that. Next to the cost of employees, it's nothing.


> There's a reason buildings with high bandwidth constraints huddle near internet backbones.

Yeah because interaction latency matters and legacy/already buried fiber is expensive to rent so you might as well put the facility in range of (not-yet-expensive) 20km optics.

> Copper doesn't work over these kinds of distances without powered switches, which adds latency.

You need a retimer, which adds on the order of 5~20 bits of latency.

> And that becomes a huge problem if you want to do any real-time work like DB replicas.

Almost no application would actually require "zero lost data", so you could get away with streaming a WAL or other form of reliably-replayable transaction log and capping it to an acceptable number of milliseconds of data-loss window before applying blocking back pressure. Usually it'd be easy to tolerate a window of around 3 RTTs, which is what you'd want anyway to cover all usual packet loss without triggering back pressure.
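A minimal sketch of that back-pressure idea (class and parameter names are mine, and the cap is expressed in bytes rather than milliseconds for simplicity; a real system would ship an actual WAL over the network):

    import threading

    class BoundedLagShipper:
        """Ship log records asynchronously, but block writers once the
        unacknowledged backlog exceeds a fixed cap (the data-loss window)."""

        def __init__(self, max_unacked_bytes: int):
            self.max_unacked_bytes = max_unacked_bytes
            self.unacked_bytes = 0
            self.cond = threading.Condition()

        def append(self, record: bytes) -> None:
            # Write path: apply blocking back pressure when the replica lags too far.
            with self.cond:
                while self.unacked_bytes + len(record) > self.max_unacked_bytes:
                    self.cond.wait()          # block the writer until acks catch up
                self.unacked_bytes += len(record)
            self._send_async(record)          # would go over the wire in reality

        def on_ack(self, nbytes: int) -> None:
            # Remote replica confirmed durable receipt of nbytes.
            with self.cond:
                self.unacked_bytes -= nbytes
                self.cond.notify_all()

        def _send_async(self, record: bytes) -> None:
            pass  # placeholder: hand off to a sender thread / socket in a real system

    # e.g. cap the loss window at ~10 ms of a 1 Gbit/s write stream ~= 1.25 MB
    shipper = BoundedLagShipper(max_unacked_bytes=1_250_000)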

Sure, such a setup isn't cheap, but it's (for a long while now) cheaper than manually fixing the data from the day your primary burned down.


Yes but good luck trying to get funding approval. There is a funny saying that wealthy people don't become wealthy by giving their wealth away. I think it applies to companies even more.


In the US, dark fiber will run you around 100k / mile. That's expensive for anyone, even if they can afford it. I worked in HFT for 15 years and we had tons of it.


DWDM per-wavelength costs are way, way lower than that, and, with the optional addition of encryption, perfectly secure and fast enough for disk replication for most storage farms. I've been there and done it.


Assuming that dark fiber is actually dark (without amplifiers/repeaters), I'd wonder how they'd justify the 4 orders of magnitude (99.99%!) profit margin on said fiber. That already includes one order of magnitude between the 12th-of-a-ribbon clad-fiber and opportunistically (when someone already digs the ground up) buried speed pipe with 144-core cable.


Google the term “high frequency trading”


So that's 5 million bucks for 50 miles? If there are other costs not being accounted for, like paying for the right-of-way that's one thing, but I would think big companies or in this case, a national government, could afford that bill.


Yeah, most large electronic finance companies do this. Lookup “the sniper in mahwah” for some dated but really interesting reading on this game.


> Before 9/11, most DR (disaster recovery) sites were in Jersey City, NJ just across the river from their main offices in WFC or WTC, or roughly 3-5 miles away. After 9/11, the financial industry adopted a 50+ miles rule.

IIRC, multiple IBM mainframes can be set up so they run and are administered as a single system for DR, but there are distance limits.


A Geographically-Dispersed Parallel Sysplex for z/OS mainframes, which IBM has been selling since the '90s, can have redundancy out to about 120 miles.

At a former employer, we used a datacenter in East Brunswick NJ that had mainframes in sysplex with partners in lower Manhattan.


If you have to mirror synchronously, the _maximum_ distances for other systems (e.g. storage mirroring with NetApp SnapMirror Synchronous, IBM PPRC, EMC SRDF/S) are all in this range.

But an important factor is that performance will degrade with every microsecond of latency added, as the active node for the transaction has to wait for the acknowledgement of the mirror node (~2*RTT). You can mirror synchronously over that distance, but the question is whether you can accept the impact.
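Rough numbers for that penalty (the distance and the ~2*RTT figure are taken from the thread above; speed of light in fibre is roughly 200,000 km/s):

    # Latency penalty of synchronous mirroring over ~50 miles of fibre.
    DISTANCE_KM = 80                 # ~50 miles
    LIGHT_IN_FIBRE_KM_S = 200_000    # roughly 2/3 of c

    one_way_ms = DISTANCE_KM / LIGHT_IN_FIBRE_KM_S * 1000
    rtt_ms = 2 * one_way_ms
    wait_per_write_ms = 2 * rtt_ms   # the ~2*RTT wait described above

    print(f"one-way {one_way_ms:.2f} ms, RTT {rtt_ms:.2f} ms, "
          f"~{wait_per_write_ms:.2f} ms added per synchronous write")
    print(f"a single-threaded writer tops out around "
          f"{1000 / wait_per_write_ms:,.0f} commits/s")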

That's not to say that one shouldn't create a replica in this case. If necessary, replicate synchronously to a nearby DC and asynchronously to a remote one.

For sure we only know the sad consequences.


The actual distance involved in the case of the Brunswick DC is closer to 25 miles to Wall St.; but yes, latency for this is always paramount.


>Some foolishly believed that the twin towers were invincible after the 1993 WTC bombing.

I was told right after the bombing, by someone with a large engineering firm (Schlumberger or Bechtel), that the bombers could have brought the building down had they done it right.


Funnily enough, Germany has laws for where you are allowed to store backups exactly due to these kinda issues. Fire, flood, earthquake, tornadoes, whatever you name, backups need to be stored with appropriate security in mind.


Germany, of course. Like my company needs government permission to store backups.


More like: your company (or government agency) is critical infrastructure or of a certain size, so there are obligations on how you maintain your records. It’s not like the US or other countries don’t have similar requirements.


[flagged]


> This is incredible. Government telling me how to backup my data. Incredible.

No more incredible than the government telling you that you need liability insurance in order to drive a car. Do you think that is justifiable?


The difference is that you cannot choose who you're sharing a road with while you can usually choose your IT service providers. You could, for instance, choose a cheaper provider and make your own backups or simply accept that you could lose your data.

Where people have little or no choice (e.g government agencies, telecoms, internet access providers, credit agencies, etc) or where the blast radius is exceptionally wide, I do find it justifiable to mandate safety and security standards.


> you cannot choose who you're sharing a road with while you can usually choose your IT service providers

You can choose where to eat, but the gov still carries out food health and safety inspections. The reason is that it isn't easy for customers to observe these things otherwise. I think the same applies to corporate data handling & storage.


It's a matter of balance. Food safety is potentially about life and death. Backups not so much (except in very specific cases where data regulation is absolutely justifiable).

If any legislation is passed regarding data, I would prefer a broader rule that covers backup as well as interoperability/portability.


Losing data is mostly(*) fine if you are a small business. If a major bank loses its data, it is a major problem, as it may impact a huge number of customers in an existential way, when all the money is "gone".

(*) From the state's perspective there is still a problem: tax audits; bad if everybody avoids them via "accidental" data loss.


As I said, a wide blast radius is a justification and banks are already regulated accordingly. A general obligation to keep financial records exists as well.


> liability insurance in order to drive a car. Do you think that is justifiable?

New Zealand doesn't require car insurance, and I presume there are other countries with governments that don't either.

I suspect most people in NZ would only have a sketchy idea of what liability is, based on learning from US TV shows.


It seems New Zealand is one of very few countries where that is the case, and that's because you guys have a government scheme that provides equivalent coverage for personal injury without being a form of insurance (ACC). As far as I understand, part of the registration fees you pay go to ACC. I would argue this is basically a mandatory insurance system with another name.


Australia is the same. It's part of the annual car registration cost.


Nope: The other way around. If you are of a certain size, you are required to meet certain criteria. NIS-2 is the EU directive and it more or less maps to ISO27001, which includes risk management against physical catastrophes. https://www.openkritis.de/eu/eu-nis-2-germany.html

Of course you can do backups if you are smaller, or comply with such a standard if you so wish.


[flagged]


Is it? It would be incredible if the government didn’t have specific requirements for critical infrastructure.

Say you’re an energy company and an incident could mean that a big part of the country is without power, or you’re a large bank and you can’t process payroll for millions of workers. They’re ability to recover quickly and completely matters. Just recently in Australia an incident at Optus, a large phone company, prevented thousands of people from making emergency calls for several hours. Several people died including a child.

The people should require these providers behave responsibly. And the way the people do that is with a government.

Companies behave poorly all the time. Red tape isn’t always bad.


I'm usually first in line when talking shit about the German government, but here I am absolutely for this. I was really positively surprised when I had my apprenticeship at a publishing company and we had a routine to bring physical backups to the cellar of a post office every morning. The company wasn't that up-to-date with most things, but here they were forced into a proper procedure, which totally makes sense. They even had proper disaster recovery strategies that included being back online within less than 2 hours even after a 100% loss of all hardware. They had internal jokes that you could have nuked their building and as long as one IT guy survived because he was working from home, he could at least bring up the software within a day.


It's incredible knowing the bureaucracy of Germany.


Considering that companies will do everything to avoid doing sensible things that cost money - yes, of course the government has to step in and mandate things like this.

It's no different from safety standards for car manufacturers. Do you think it's ridiculous that the government tells them how to build cars?

And similarly here: If the company is big enough / important enough, then the cost to society if their IT is all fucked up is big enough that the government is justified in ensuring minimum standards. Including for backups.


It’s government telling you the minimum you have to do. There is nothing incredible there.

It makes sense that as economic operators become bigger, as the impact of their potential failure grows on the rest of the economy, they have to become more resilient.

That’s just the state forcing companies to take externalities into account which is the state playing its role.


Well, given that way too many companies in the critical infrastructure sector don't give a fuck about how to keep their systems up, and we have been facing a hybrid war from Russia for the last few years that is expected to escalate into a full-on NATO hot war in a few years, yes, it absolutely does make sense for the government to force such companies to be resilient against the Russians.

Just because wherever country you are at doesn't have to prepare for a hot war with Russia doesn't mean we don't have to. When the Russians come in and attack, hell even if they "just" attack Poland with tanks and the rest of us with cyber warfare, the last thing we need is power plants, telco infra, traffic infrastructure or hospitals going out of service because their core systems got hit by ransomware.


> it absolutely does make sense for the government to force such companies

Problem is, a) governments are infiltrated by russian assets and b) governments are known to enforce detrimental IT regulations. Germany especially so.

> power plants, telco infra, traffic infrastructure or hospitals

Their systems _will_ get hit by ransomware or APTs. It is not possible to mandate common sense or proper IT practices, no matter how strict the law. See the recent incident in South Korea with a burned-down data center and no backups.


> Problem is, a) governments are infiltrated by russian assets and b) governments are known to enforce detrimental IT regulations. Germany especially so.

The regulations are a framework called "BSI Grundschutz" and all parts are freely available for everyone [1]. Even if our government were fully corrupted by Russia like Orban's Hungary - just look at the regulations at face value and tell me what exactly you would see as "detrimental" or against best practice?

> It is not possible to mandate common sense or proper IT practices, no matter how strict the law. See the recent incident in South Korea with burned down data center with no backups.

I think it actually is. The BSI Grundschutz criteria tend to feel "checkboxy", but if you tick all the checkboxes you'll end up with a pretty resilient system. And yes, I've been on the implementing side.

The thing is, even if you're not fully compliant with BSI Grundschutz... if you just follow parts of it in your architecture, your security and resilience posture is already much stronger than much of the competition.

[1] https://www.bsi.bund.de/DE/Themen/Unternehmen-und-Organisati...


Government isn’t perfect but I’d be interested to know what alternative you propose?


a) Incarceration time for IT execs and responsible engineers.

b) Let companies go out of business once they fail to protect their own crucial data.

None of that is possible.


Responsible for what? If the government does not mandate any behavior, what basis does it have to incarcerate anyone?


Those are only punishments, which have been shown not to work. Solutions are needed.


so you are not proposing anything real then? I can pull "magic indestructible backup solution" out of my arse, too :(


No propositions at this point. I have no idea how to fix the problem.


It feels like you are being obtuse/arguing in bad faith. Of course there are standards on backups. Most countries have them.

Let's think what regulations does the 'free market' bastion US have on computer systems and data storage...

HIPAA, PCI DSS, CIS, SOC, FIPS, FINRA...


> HIPAA, PCI DSS, CIS, SOC, FIPS, FINRA

Those are related to _someone else's_ data handling.


They set standards for a variety of stuff, including how you architect your own systems to protect against data loss due to a variety of different causes.


(Without knowing the precise nature of these laws) I would expect that they don't forbid you to store backups elsewhere. It's just that they mandate that certain types of data be backed up in sufficiently secure and independent locations. If you want to have an additional backup (or backups of data not covered by the law) in a more convenient location, you still can.


> sufficiently secure and independent locations

This kind of provision requires enforcement and verification. Thus, a tech spec for the backup procedure. Knowing Germany well enough, I'd say that this tech spec would be detrimental to the actual safety of the backup.


wild speculation and conjecture


Not wild.

When you live in Germany and are asked to send a FAX (and not an email, please). Or a digital birth certificate is not accepted until you come with lawyers, or banks are not willing to work with Apple Pay, just to name a few..

Speculation, yes, but not at all wild


I'm German and in my 45 years of being so have never been required to send a fax. Snail mail yes, but never a fax.


Agree. It is based on my experience with German bureaucracy.


Certain data records need to be legally retained for certain amounts of time; Other sensitive data (e.g. PII) have security requirements.

Why wouldn't government mandate storage requirements given the above?


No it doesn’t. It does however need to follow the appropiate standards commensurate with your size and criticality. Feel free to exceed them.


They deserved to lose everything... except the human lives, of course.

That's like storing lifeboats in the bilge section of the ship, so they won't get damaged by storms.


Nothing increases the risk of servers catching fire like government investigators showing up to investigate allegations that North Korea hacked the servers.


Or investigations into a major financial scandal in a large French bank!

(While the Credit Lyonnais was investigated in the 90s, both the HQ and the site where they stored their archives were destroyed by fire within a few months)


>This file contains the complete set of papers, except for a number of secret documents, a few others which are part of still active files, some correspondence lost in the floods of 1967...

>Was 1967 a particularly bad winter?

>No, a marvellous winter. We lost no end of embarrassing files.


Yes, Minister! A great show that no one in the US has heard of, which is a shame.


"We must do something --> this is something --> We must do this!"


It _almost_ sounds like you're suggesting the fire was deliberate!


It is very convenient timing


> The issue here is not refusing to use a foreign third party. That makes sense.

For anyone else who's confused, G-Drive means Government Drive, not Google Drive.


> The issue here is not refusing to use a foreign third party. That makes sense.

Encrypt before sending to a third party?


Of course you'd encrypt the data before uploading it to a third party, but there's no reason why that third party should be under the control of a foreign government. South Korea has more than one data center they can store data inside of; there's no need to trust other governments with every byte of data you've gathered, even if there are no known backdoors or flaws in your encryption mechanism (which I'm sure some governments have been looking into for decades).


There is a reason that NIST recommends new encryption algorithms from time to time. If you get a copy of ALL government data, in 20 years you might be able to break encryption and get access to ALL government data from 20yr ago, no matter how classified they were, if they were stored in that cloud. Such data might still be valuable, because not all data is published after some period.


That doesn't sound like a good excuse to me.

AES-128 has been the formal standard for 23 years. The only "foreseeable" event that could challenge it is quantum computing. The likely post-quantum replacement is ... AES-256, which is already a NIST standard. NIST won't replace AES-256 in the foreseeable future.

All that aside, there is no shortage of ciphers. If you are worried about one being broken, chain a few of them together.

And finally, no secret has to last forever. Western governments tend to declassify just about everything after 50 years. After 100 everyone involved is well and truly dead.
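If you did want to chain ciphers, here is a minimal sketch of the idea (assuming Python's `cryptography` package; key management, nonce storage and everything else that actually matters is omitted):

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM, ChaCha20Poly1305

    def layered_encrypt(plaintext: bytes):
        """Encrypt with two independent ciphers and keys; recovering the
        plaintext requires breaking both AES-256-GCM and ChaCha20-Poly1305."""
        k1, k2 = AESGCM.generate_key(bit_length=256), ChaCha20Poly1305.generate_key()
        n1, n2 = os.urandom(12), os.urandom(12)
        inner = AESGCM(k1).encrypt(n1, plaintext, None)
        outer = ChaCha20Poly1305(k2).encrypt(n2, inner, None)
        return outer, k1, n1, k2, n2

    def layered_decrypt(outer, k1, n1, k2, n2) -> bytes:
        inner = ChaCha20Poly1305(k2).decrypt(n2, outer, None)
        return AESGCM(k1).decrypt(n1, inner, None)

    ct, *keys = layered_encrypt(b"hypothetical archive record")
    assert layered_decrypt(ct, *keys) == b"hypothetical archive record"

The plaintext stays confidential as long as at least one layer and its key remain sound.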


That's going away. We are seeing fewer deprecations of crypto algorithms over time, AFAICT. The mathematical foundations are becoming better understood and the implementations' assurance levels are improving too. I think we are going up the bathtub curve here.

The value of said data diminishes with time too. You can totally do an off-site cloud backup with mitigation fallbacks should another country become unfriendly. Hell, shard them such that you need n-of-m backups to reconstruct and host each node in a different jurisdiction.
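A toy illustration of the n-of-m idea using plain XOR parity (2-of-3; a real deployment would encrypt first and use Reed-Solomon or Shamir secret sharing rather than this sketch):

    def split_2_of_3(data: bytes):
        """Split into three shards; any two of them reconstruct the original.
        Pads to even length so the two halves line up."""
        if len(data) % 2:
            data += b"\x00"
        half = len(data) // 2
        a, b = data[:half], data[half:]
        parity = bytes(x ^ y for x, y in zip(a, b))
        return a, b, parity

    def restore(a=None, b=None, parity=None) -> bytes:
        if a is not None and b is not None:
            return a + b
        if a is not None and parity is not None:
            return a + bytes(x ^ y for x, y in zip(a, parity))
        if b is not None and parity is not None:
            return bytes(x ^ y for x, y in zip(b, parity)) + b
        raise ValueError("need at least two shards")

    a, b, p = split_2_of_3(b"encrypted backup blob")
    assert restore(a=a, parity=p) == restore(b=b, parity=p) == a + b

Each shard goes to a different jurisdiction; being locked out of any single one costs you nothing.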

Not that South Korea couldn't have Samsung's Joyent acquisition handle it.


I don't consider myself special; anything I can find, proof assistants using ML will eventually find...


The reason is that better ones have been developed, not because the old ones are "broken". Breaking algos is now a matter of computer flops spent, not clever hacks being discovered.

When the flops required to break an algo exceed the energy available on the planet, items are secure beyond any reasonable doubt.


If you are really paranoid to that point, you probably wouldn't follow NIST recommendations for encryption algorithms as it is part of the Department of Commerce of the United States, even more in today's context.


Because even when you encrypt, the foreign third party can still lock you out of your data by simply switching off the servers.


Would you think that the U.S would encrypt gov data and store on Alibaba's Cloud? :)


Why not?


Because it lowers the threshold for a total informational compromise attack from "exfiltrate 34PB of data from secure govt infrastructure" down to "exfiltrate 100KB of key material". You can get that out over a few days just by pulsing any LED visible from outside an air-gapped facility.


Wait what?


There are all sorts of crazy ways of getting data out of even air-gapped machines, providing you are willing to accept extremely low data rates to overcome attenuation. Even with million-to-one signal-to-noise ratio, you can get significant amounts of key data out in a few weeks.

Jiggling disk heads, modulating fan rates, increasing and decreasing power draw... all are potential information leaks.


> There are all sorts of crazy ways of getting data out of even air-gapped machines.

Chelsea Manning apparently did it by walking in and out of the facility with a CD marked 'Lady Gaga'. Repeatedly

https://www.theguardian.com/world/2010/nov/28/how-us-embassy...


On which TV show?


As of today, there's no way to prove the security of any available cryptosystem. Let me say that differently: for all we know, ALL currently available cryptosystems can be easily cracked by some unpublished techniques. The only sort-of exception to that requires quantum communication, which is nowhere near practicability on the scale required. The only evidence we have that the cryptography that we commonly use is actually safe is that it's based on "hard" math problems that have been studied for decades or longer by mathematicians without anyone being able to crack them.

On the other hand, some popular cryptosystems that were more common in the past have been significantly weakened over the years by mathematical advances. Those were also based on math problems that were believed to be "hard." (They're still very hard actually, but less so than we thought.)

What I'm getting at is that if you have some extremely sensitive data that could still be valuable to an adversary after decades, you know, the type of stuff the government of a developed nation might be holding, you probably shouldn't let it get into the hands of an adversarial nation-state even encrypted.


> The only evidence we have that the cryptography that we commonly use is actually safe is that it's based on "hard" math problems that have been studied for decades or longer by mathematicians without anyone being able to crack them.

Adding to this...

Most crypto I'm aware of implicitly or explicitly assumes P != NP. That's the right practical assumption, but it's still a major open math problem.

If P = NP then essentially all crypto can be broken with classical (i.e. non-quantum) computers.

I'm not saying that's a practical threat. But it is a "known unknown" that you should assign a probability to in your risk calculus if you're a state thinking about handing over the entirety of your encrypted backups to a potential adversary.

Most of us just want to establish a TLS session or SSH into some machines.


While I understand what you're saying, you can extend this logic to such things as faster-than-light travel, over-unity devices, time travel etc. They're just "hard" math problems.

The current state of encryption is based on math problems many levels harder than the ones that existed a few decades ago. Most vulnerabilities have been due to implementation bugs, and not actual math bugs. Probably the highest profile "actual math" bug is the DUAL_EC_DRBG weakness which was (almost certainly) deliberately inserted by the NSA, and triggered a wave of distrust in not just NIST, but any committee designed encryption standards. This is why people prefer to trust DJB than NIST.

There are enough qualified eyes on most modern open encryption standards that I'd trust them to be as strong as any other assumptions we base huge infrastructure on. Tensile strengths of materials, force of gravity, resistance and heat output of conductive materials, etc, etc.

The material risk to South Korea was almost certainly orders of magnitude greater by not having encrypted backups, than by having encrypted backups, no matter where they were stored (as long as they weren't in the same physical location, obviously).


>While I understand what you're saying, you can extend this logic to such things as faster-than-light travel, over-unity devices, time travel etc. They're just "hard" math problems.

No you can't. Those aren't hard math problems. They're Universe breaking assertions.

This is not the problem of flight. They're not engineering problems. They're not, "perhaps in the future, we'll figure out..".

Unless our understanding of physics is completely wrong, then None of those things are ever going to happen.


According to our understanding of physics, which is based on our understanding of maths, the time taken to brute force a modern encryption standard, even with quantum computers, is longer than the expected life of the universe. The likelihood of "finding a shortcut" to do this is in the same ballpark as "finding a shortcut" to tap into ZPE or "vacuum energy" or create wormholes. The maths is understood, and no future theoretical advances can change that. It would take completely new maths to break these. We passed the "if only computers were a few orders of magnitude faster, it's feasible" point a decade or more ago.


Sorry, I don't think this is true. There is basically no useful proven lower bound on the complexity of breaking popular cryptosystems. The math is absolutely not understood. In fact, it is one of the most poorly understood areas of mathematics. Consider that breaking any classical cryptosystem is in the complexity class NP, since if an oracle gives you the decryption key, you can break it quickly. Well we can't even prove that NP != P, i.e., that there even exists a problem where having such an oracle gives you a real advantage. Actually, we can't even prove that PSPACE != P, which should be way easier than proving NP != P if it's true.


The one-time pad is provably secure. But it is not useful for backups, of course.


OTP can be useful especially for backups. Use a fast random number generator (real, not pseudo), write output to fill tape A. XOR the contents of tape A to your backup datastream and write result to Tape B. Store tape A and B in different locations.
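A minimal sketch of that XOR scheme (file paths are placeholders, and `os.urandom` stands in for the real hardware RNG the comment calls for):

    import os

    CHUNK = 1 << 20  # process 1 MiB at a time

    def write_pad_and_ciphertext(backup_path, pad_path, ct_path):
        """Tape A gets the one-time pad, tape B gets backup XOR pad.
        Either tape on its own looks like pure noise."""
        with open(backup_path, "rb") as src, \
             open(pad_path, "wb") as pad_out, \
             open(ct_path, "wb") as ct_out:
            while chunk := src.read(CHUNK):
                pad = os.urandom(len(chunk))   # stand-in for a true hardware RNG
                pad_out.write(pad)
                ct_out.write(bytes(a ^ b for a, b in zip(chunk, pad)))

    def recover(pad_path, ct_path, out_path):
        """XOR the two tapes back together to recover the backup."""
        with open(pad_path, "rb") as pad_in, \
             open(ct_path, "rb") as ct_in, \
             open(out_path, "wb") as out:
            while chunk := ct_in.read(CHUNK):
                out.write(bytes(a ^ b for a, b in zip(chunk, pad_in.read(len(chunk)))))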


But you have one copy of the key stream. It is not safe. You need at least two places to store at least two copies of the key stream. You cannot store it in a non-friendly cloud (and this thread started from backing up sensitive government data into a cloud owned by another country, possibly an adversarial one).

If you have two physically separate places you could trust with the key stream, you could use them to back up the non-encrypted (or "traditionally" encrypted) data itself, without any OTP.


You may want some redundancy, because needing both tapes increases the risk to your backup. You could just back up more often. You could use 4 locations, so you have redundant keystreams and redundant backup streams. But in general, storing the key stream follows the same necessities as storing the backup or some traditional encryption keys for a backup. And your backup already is a redundancy, and you will usually do multiple backups at intervals, so it really isn't that bad.

Btw, you really really need a fresh keystream for each and every backup. You will have as many keystream tapes as you have backup tapes. Re-using the OTP keystream enables a lot of attacks on OTP, e.g. by a simple chosen plaintext an attacker can get the keystream from the backup stream and then decrypt other backup streams with it. XORing similar backup streams also gives the attacker an idea which bits might have changed.

And there is a difference to storing things unencrypted in two locations: If an attacker, like some evil maid, steals a tape in one location, you just immediately destroy its corresponding tape in the other location. That way, the stolen tape will forever be useless to the attacker. Only an attacker that can steal a pair of corresponding tapes in both locations before the theft is noticed could get at the plaintext.


Even OTP is not secure if others have access to it.


Every castle wall can be broken with money.


How much money is required to decrypt a file encrypted with a 256-bit AES key?


How much would it take for the person(s) who know the key to move to another country and never have to work again?

Or how much would it cost to kidnap the significant other of one of the key bearers?

I think these are very reasonable sums for the government of almost any country.


I think you assume that encryption keys are held by people like a house key in their pocket. That's not the case for organizations who are security obsessed. They put their keys in HSMs. They practice defense in depth. They build least-privilege access controls.


Thank you for writing this post. This should be the top comment. This is a state actors game, the rules are different.


> could still be valuable to an adversary after decades

What kind of information might be valuable after so long?


Why make yourself dependent on a foreign country for your own sensitive data?

You have to integrate the special software requirements into any cloud storage anyway, and hosting a large number of files isn't an insurmountable technical problem.

If you can provide the minimal requirements like backups, of course.


Presumably because you aren't capable of building what that foreign country can offer you yourself.

Which they weren't. And here we are.


> Encrypt before sending to a third party?

That sounds great, as long as nobody makes any mistake. It could be a bug on the RNG which generates the encryption keys. It could be a software or hardware defect which leaks information about the keys (IIRC, some cryptographic system are really sensitive about this, a single bit flip during encryption could make it possible to obtain the private key). It could be someone carelessly leaving the keys in an object storage bucket or source code repository. Or it could be deliberate espionage to obtain the keys.


It only makes sense if you are competent enough to manage data, and I mean: any part of it, forever. It's not impossible, of course, but it is really not as trivial as the self-host crowd makes it out to be, if you absolutely need a certain number of 9s of reliability. There is a reason why AWS etc can exist. I am sure the cloud market is not entirely reasonable, but it's certainly far more reasonable than relying on some mid consultant to do this for you at this scale.


Yeah, the whole supposed benefit of an organization using storage in the cloud is to avoid stuff like this from happening. Instead, they managed to make the damage far worse, increasing the amount of data lost by centralizing it.


The issue is that, without a profit incentive, of course it isn't X (backed up, redundant, highly available, whatever other aspect is optimized away by accountants).

Having worked a great deal inside of AWS on these things: AWS provides literally every conceivable level of customer-managed security, down to customer-owned and keyed datacenters operated by AWS, with master-key HSMs owned and purchased by the customer, with customer-managed key hierarchies at all levels and detailed audit logs of everything done by everything, including AWS itself. The security assurance of AWS is far and away beyond what even the most sophisticated state actor infrastructure does, and is more modern to boot - because its profit incentive drives that.

Most likely this was less about national security than about nationalism. They're easily confused, but that's fallacious. And they earned the dividends of fallacious thinking.


Call me a conspiracy theorist, but this kind of mismanagement is intentional by design so powerful people can hide their dirty laundry.


Never attribute to malice what can be attributed to stupidity.

There was that time when some high profile company's entire Google Cloud account was destroyed. Backups were on Google Cloud too. No off-site backups.


One of the data integrity people sadly committed suicide as a result of this fire, so I am also thinking this was an incompetence situation (https://www.yna.co.kr/view/AKR20251003030351530).

For the budget spent, you’d think they would clone the setup in Busan and sync it daily or something like this in lieu of whatever crazy backup they needed to engineer but couldn’t.


You were probably thinking of UniSuper [0], an Australian investment company with more than $80B AUM.

Their 3rd party backups with another provider were crucial to helping them undo the damage from the accidental deletion by GCloud.

GCloud eventually shared a post-mortem [1] about what went down.

0: https://news.ycombinator.com/item?id=40304666

1: https://cloud.google.com/blog/products/infrastructure/detail...


> Never attribute to malice what can be attributed to stupidity.

Any sufficiently advanced malice is indistinguishable from stupidity.

I don't think there's anything that can't be attributed to stupidity, so the statement is pointless. Besides, it doesn't really matter naming an action stupidity, when the consequences are indistinguishable from that of malice.


I know of one datacenter that burned down because someone took a dump before leaving for the day, the toilet overflowed, then flooded the basement, and eventually started an electrical fire.

I'm not sure you could realistically explain that as anything. Sometimes ... shit happens.


I mean, I don't disagree that "gross negligence" is a thing. But that's still very different from outright malice. Intent matters. The legal system also makes such a distinction. Punishments differ. If you're a prosecutor, you can't just make the argument that "this negligence is indistinguishable from malice, therefore punish like malice was involved".


Hanlon's Razor is such an overused meme/trope that it's become meaningless.

It's a fallacy to assume that malice is never a form of stupidity/folly. An evil person fails to understand what is truly good because of some kind of folly, e.g. refusing to internally acknowledge the evil consequences of evil actions. There is no clean evil-vs-stupid dichotomy. E.g. is a drunk driver who kills someone stupid or evil? The dangers of drunk driving are well-known, so what about both?

Additionally, we are talking about a system/organization, not a person with a unified will/agenda. There could indeed be an evil person in an organization that wants the organization to do stupid things (not backup properly) in order to be able to hide his misdeeds.


Hanlon's Razor appears to be a maxim of assuming good faith: "They didn't mean to cause this, they are just inept."

To me, it has no justification. People see malice easily, granted, but others feign ignorance all the time too.

I think a better principle is: proven and documented testing for competence, making it clear what a person's duties and (liable) responsibilities are, and thereafter treating incompetence and malice the same. Also: any action needs to be audited by a second entity who shares blame (to a measured and pre-decided degree) when they fail to do so.


It's also true that "it is difficult to get a man to understand something, when his salary depends on his not understanding it."


You have to balance that with how far you can expect human beings to lower their standards when faced with bureaucratic opposition. No backups on a key system increases the likelihood of malice versus stupidity, since the importance of backups has been known to IT staff, regardless of role and seniority, for 40 years or so.


I very seriously doubt that the US cares about South Korea's deepest, darkest secrets that much, if at all.

Not using a cloud provider is asinine. You can use layered encryption so the expected lifetime of the cryptography is beyond the value of the data...and the US government themselves store data on all 3 of them, to my knowledge.

I say US because the only other major cloud providers I know of are in China, and they do have a vested interest in South Korean data, presumably for NK.


It's quite wild to think the US wouldn't want access to their data on a plate, through AWS/GCP/Azure. You must not be aware of the last decade of news when it comes to the US and security.


The US and South Korea are allies, and SK doesn't have much particular strategic value that I'm aware of? At least not anything they wouldn't already be sharing with the US?

Can you articulate what particular advantages the US would be pursuing by stealing SK secret data (assuming it was not protected sufficiently on AWS/GCP to prevent this, and assuming that platform security features have to be defeated to extract this data—this is a lot of risk from the US's side, to go after this data, if they are found out in this hypothetical, I might add, so "they would steal whatever just to have it" is doubtful to me).


The NSA phone-tapped Angela Merkel's phone while she was chancellor as well as her staff and the staff of her predecessor[1], despite the two countries also being close allies. "We are allies, why would they need to spy on us?" is therefore proveably not enough of a reason for the US to not spy on you (let's not forget that the NSA spies on the entire planet's internet communications).

The US also has a secret spy facility in Pine Gap that is believed to (among other things) spy on Australian communications, again despite both countries being very close allies. No Australians know what happens at Pine Gap, so maybe they just sit around knitting all day, but it seems somewhat unlikely.

[1]: https://www.theguardian.com/us-news/2015/jul/08/nsa-tapped-g...


Airbus was spied on by NSA For the benefit of Boeing: https://apnews.com/general-news-e88c3d44c2f347b2baa5f2fe508f...

Why do you think USA wouldn't lie, cheat and spy on someone if it had a benefit in it?


[flagged]


As a sysadmin at a company that provides fairly sensitive services, I find online cloud backups to be way too slow for the purpose of protecting against something like the server room being destroyed by a fire. Even something like spinning disks at a remote location feels like a risk, as files would need to be copied onto faster disks before services could be restored, and that copying would take precious time during an emergency. When downtime means massive losses of revenue for customers, being down for hours or even days while waiting for the download to finish is not going to be accepted.

Restoring from cloud backups is one of those war stories that I occasionally hear, including the occasional fedex solution of sending the backup disk by carrier.


Many organizations are willing to accept the drawbacks of cloud backup storage because it's the tertiary backup in the event of physical catastrophe. In my experience those tertiary backups are there to prevent the total loss of company IP should an entire site be lost. If you only have one office and it burns down, work will be severely impacted anyway.

Obviously the calculus changes with maximally critical systems where lives are lost if the systems are down or you are losing millions per hour of downtime.


For truly colossal amounts of data, fedex has more bandwidth than fiber. I don’t know if any cloud providers will send you your stuff on physical storage, but most will allow you to send your stuff to them on physical storage- eg AWS snowball.

There are two main reasons why people struggle with cloud restore:

1. Not enough incoming bandwidth. The cloud’s pipe is almost certainly big enough to send your data to you. Yours may not be big enough to receive it.

2. Cheaping out on storage in the cloud. If you want fast restores, you can’t use the discount reduced redundancy low performance glacier tier. You will save $$$ right until the emergency where you need it. Pay for the flagship storage tier- normal AWS S3, for example- or splurge and buy whatever cross-region redundancy offering they have. Then you only need to worry about problem #1.
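The incoming-pipe point is easy to put numbers on (the restore size and link speeds here are just examples):

    # How long a full restore takes at various incoming link speeds.
    DATA_TB = 100    # example size of the backup set to restore

    for gbps in (1, 10, 100):
        seconds = DATA_TB * 1e12 * 8 / (gbps * 1e9)
        print(f"{DATA_TB} TB over {gbps:>3} Gbit/s ~= {seconds / 3600:6.1f} hours")

At 1 Gbit/s that's over nine days of downtime; the cloud isn't the bottleneck, your pipe is.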


Amazon used to offer a truck based data transport: https://www.datacenterdynamics.com/en/news/aws-retires-snowm...


If you allow it to cost a bit, which is likely a good choice given the problem, then there are several solutions available. It is important to think through the scenario, and if possible, do a dry run of the solution. A remote physical server can work quite well and be cost effective compared to a flagship storage tier, and if data security is important, you can access the files on your own server directly rather than downloading an encrypted blob from a cloud located outside the country.


In one scenario, with offsite backups ("in the clown" or otherwise): "We had a fire at our datacenter, and there will be some downtime while we get things rolling again."

In the other scenario, without offsite backups ("in the clown" or otherwise): "We had a fire at our datacenter, and that shit's just gone."

Neither of these are things that are particularly good to announce, and both things can come with very severe cost, but one of them is clearly worse than the other.


SK would be totally fine with that though because that means there would eventually be recovery!


You're not designing to protect from data loss, you're designing to protect from downtime.


That’s why

Microsoft can't guarantee data sovereignty

https://news.ycombinator.com/item?id=45061153


He obviously meant encrypting before uploading. At that point it doesn't matter who's holding your data or what they try to do with it.


It still matters who holds your data. Yes they can't read it, but they can hold it ransom. What if the US decides it wants to leverage the backups in tariff negotiations or similar? Not saying this would happen, but as a state level actor, you have to prepare for these eventualities.


That's why you backup to numerous places and in numerous geopolitical blocs. Single points of failure are always a bad idea. You have to create increasingly absurd scenarios for there to be a problem.


or… hear me out…

you obviate the need for complex solutions like that by simply having a second site.


How’s that? Using encryption, which is known to have backdoors and is vulnerable to nation state cracking?


It is much more likely, and cheaper, that US marines would land and capture your backup facility than that someone would break AES-128.


Sending troops would be an act of war, and definitely not cheap.

Stealing some encryption keys, just another Wednesday.


You mean like blowing up an oil pipeline? Accidents happen all the time. It is quite a lot cheaper to have an 'accident' happen to a data center than to break AES-256.


There are less public options for getting at the data without breaking encryption, especially when the target uses MS software.


Now I’m down the rabbit hole of https://en.wikipedia.org/wiki/NSAKEY


There might be unknown unknowns....


Can you provide an example of a commonly used cryptography system that is known to be vulnerable to nation state cracking?

As for backdoors, they may exist if you rely on a third party but it's pretty hard to backdoor the relatively simple algorithms used in cryptography


It's not so much that there is a way to directly crack an encrypted file as much as there being backdoors in the entire HW and SW chain of you decrypting and accessing the encrypted file.

Short of you copying an encrypted file from the server onto a local trusted Linux distro (with no Intel ME on the machine), airgapping yourself, entering the decryption passphrase from a piece of paper (written by hand, never printed), with no cameras in the room, accessing what you need, and then securely wiping the machine without un-airgapping, you will most likely be tripping through several CIA-backdoored things.

Basically, the extreme level of digital OPSEC maintained by OBL is probably the bare minimum if your adversary is the state machinery of the United States or China.


This is a nation state in a state of perpetual tension, formal war and persistent attempts at sabotage by a notoriously paranoid and unscrupulous next-door enemy totalitarian/crime family state.

SK should have no shortage of motive or much trouble (it's an extremely wealthy country with a very well-funded, sophisticated government apparatus) implementing its own version of hardcore data security for backups.


Yeah, but also consider that maybe not every agency of South Korea needs this level of protection?


> Can you provide an example of a commonly used cryptography system that is known to be vulnerable to nation state cracking?

DES. Almost all pre-2014 standards-based cryptosystems due to NIST SP 800-90A. Probably all other popular ones too (like, if the NSA doesn't have backdoors to all the popular hardware random number generators then I don't know what they're even doing all day), but we only ever find out about that kind of thing 30 years down the line.


Dual_EC_DRBG


Please provide any proof or references to what you are claiming.


>Using encryption, which is known to have backdoors and is vulnerable to nation state cracking?

WTF are you talking about? There are absolutely zero backdoors of any kind known to be in any standard open source encryption systems, and symmetric cryptography 256-bits or more is not subject to cracking by anyone or anything, not even if general purpose quantum computers are doable and prove scalable. Shor's algorithm applies to public-key not symmetric, where the best that can be done is Grover's quantum search for a square-root speed up. You seem to be crossing a number of streams here in your information.


As someone who’s fairly tech-literate but has a big blind spot in cryptography, I’d love to hear any suggestions you have for articles, blog posts, or smaller books on the topic!

My (rudimentary, layman) understanding is that encryption is almost like a last line of defense and should never be assumed to be unbreakable. You sound both very knowledgeable on the topic, and very confident in the safety of modern encryption. I’m thinking maybe my understanding is obsolete!


Encryption is the mechanism of segmentation for most everything in 2025.

AES is secure for the foreseeable future. Failures in key storage and exchange, and operational failures are the actual threat and routinely present a practical, exploitable problem.

You see it in real life as well. What’s the most common way of stealing property from a car? A: Open the unlocked door.


> My (rudimentary, layman) understanding is that encryption is almost like a last line of defense and should never be assumed to be unbreakable

Lol this is woefully misinformed.


https://en.wikipedia.org/wiki/Post-quantum_cryptography

It is my understanding that current encrypted content can someday be decrypted.


That's incorrect. Current asymmetric (ie: public-key) algorithms built using prime factoring or elliptic curve techniques are vulnerable to quantum attack using Shor's algorithm.

However, symmetric algorithms are not nearly as vulnerable. There is one known quantum attack using Grover's algorithm, but with quadratic speedup all it does is reduce the effective length of the key by half, so a 128-bit key will be equivalent to a 64-bit key and a 256-bit key will be equivalent to a 128-bit key. 256-bit keys are thus safe forever, since going down to a 128-bit key you are still talking age-of-the-universe break times. Even 128-bit keys will be safe for a very long time. While being reduced to a 64-bit key does make attacks theoretically possible, it is still tremendously difficult to do on a quantum computer, much harder than the asymmetric case (on the order of centuries even with very fast cycle times).

Finally, it's also worth noting that asymmetric cryptosystems are rapidly being updated to hybrid cryptosystems which add post-quantum algorithms (ie: algorithms which quantum computers are believed to provide little or no speedup advantage). So, going forward, asymmetric crypto should also no longer be vulnerable to store-now-decrypt-later attacks, provided there's no fundamental flaw in the new post-quantum algorithms (they seem solid, but they are new, so give the cryptographers a few years to try to poke holes in them).
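
To put the halving into rough numbers, a quick sketch; the assumed rate of 10^9 serial Grover iterations per second is invented and very generous, since Grover parallelizes poorly and quantum gates are slow:

    # Grover halves the effective key length: 2^n work becomes ~2^(n/2) iterations.
    SECONDS_PER_YEAR = 3.156e7

    def grover_years(key_bits: int, iters_per_second: float = 1e9) -> float:
        effective_bits = key_bits // 2
        return 2**effective_bits / iters_per_second / SECONDS_PER_YEAR

    print(f"AES-128 under Grover: ~{grover_years(128):.0f} years")   # ~585 years
    print(f"AES-256 under Grover: ~{grover_years(256):.2e} years")   # ~1e22 years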


This is also assuming a theoretical quantum computing system is developed capable of breaking the encryption. Which isn't at all a given.


>However, symmetric algorithms are not nearly as vulnerable. There is one known quantum attack using Grover's algorithm, but with quadratic speedup all it does is reduce the effective length of the key by half, so a 128-bit key will be equivalent to a 64-bit key and a 256-bit key will be equivalent to a 128-bit key. 256-bit keys are thus safe forever, since going down to a 128-bit key you are still talking age-of-the-universe break times. Even 128-bit keys will be safe for a very long time. While being reduced to a 64-bit key does make attacks theoretically possible, it is still tremendously difficult to do on a quantum computer, much harder than the asymmetric case (on the order of centuries even with very fast cycle times).

Specifically it's worth noting here the context of this thread: single entity data storage is the textbook ideal case for symmetric. While Shor's "only" applies [0] to one type of cryptography, that type is the key to the economic majority of what encryption is used for (the entire world wide web etc). So it still matters a lot. But when you're encrypting your own data purely to use it for yourself at a future time, which is the case for your own personal data storage, pure symmetric cryptography is all you need (and faster). You don't have the difficult problem of key distribution and authentication with the rest of humanity at all and can just set that aside entirely. So to the point of "why not back up data to multiple providers" that "should" be no problem if it's encrypted before departing your own trusted systems.

Granted, the "should" does encompass some complications, but not in the math or software; rather in the messier aspects of key control and metadata. Like, I think an argument could be made that it's easier to steal a key than to exfiltrate huge amounts of data without someone noticing, but there are powerful enough tools for physically secure key management (and splitting: Shamir's Secret Sharing means you can divide each backup encryption key into n shares and require any k of them to agree before the usable original key can be reconstituted) that I'd expect an advanced government to be able to handle it, more so than data at rest even. Another argument is that even if a 3rd party cannot ever see anything about the content of an encrypted archive, they can get some metadata from its raw size and the flows in and out of it. But in the reduced single use case of pure backups, where use is regular widely spaced dumps, and for something as massive as an entire government data cloud with tens of thousands of uncorrelated users, the leakage of anything meaningful seems low. And of course both have to be weighed against a disaster like this one.
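
Since Shamir's Secret Sharing came up: a toy sketch of that k-of-n splitting, purely illustrative (a real deployment would use a vetted implementation and hardware-backed key storage; the field size and the 3-of-5 parameters are just for the demo):

    # Toy Shamir's Secret Sharing over a prime field; illustrative only.
    import secrets

    PRIME = 2**127 - 1                      # a Mersenne prime used as the demo field

    def split(secret: int, n: int, k: int):
        """Split `secret` into n shares, any k of which reconstruct it."""
        coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
        def poly(x):
            return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        return [(x, poly(x)) for x in range(1, n + 1)]

    def combine(shares):
        """Lagrange interpolation at x=0 recovers the secret."""
        total = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = (num * -xj) % PRIME
                    den = (den * (xi - xj)) % PRIME
            total = (total + yi * num * pow(den, -1, PRIME)) % PRIME
        return total

    secret = secrets.randbelow(PRIME)       # e.g. (a chunk of) a backup encryption key
    shares = split(secret, n=5, k=3)
    assert combine(shares[:3]) == secret    # any 3 of the 5 shares suffice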

Anyway, all well above my pay grade. But if I were a citizen there I'd certainly be asking questions because this feels more like NIH or the political factors influencing things.

----

0: Last I checked there were still some serious people debating whether it will actually work out in the real world, but from the perspective of considering security risk it makes sense to just take it as given that it will work completely IRL, including that general purpose quantum computers that can run it will prove sufficiently scalable to provide all the needed qubits.


> someday be decrypted

Yup and that someday is the same day nuclear fusion is commercially viable.


Someday, theoretically, maybe. This means that, as far as everyone knows, if I properly secure a message to you using RSA, no one else is reading the message. Maybe in 50 years they can, but, well, that's in 50 years. Alarmists would have you believe it'll happen in three. I'm just an Internet rando, but my money's on it being closer to 50. Regardless though, it's not today.



Perhaps that is why I was asking for better information.


Whew, that's actually a hard one! It's been long enough since I was getting into it that I'm not really sure what the best path into it is these days. In terms of books, JP Aumasson's "Serious Cryptography" got a 2nd edition not too long ago and the first edition was good. Katz & Lindell's "Introduction to Modern Cryptography" and Hoffstein's "Introduction to Mathematical Cryptography" are both standard texts that I think a lot of courses still get started with. Finally I've heard good things about Esslinger's "Learning and Experiencing Cryptography with CrypTool and SageMath" from last year and Smart's "Cryptography Made Simple", which has a bunch of helpful visuals.

For online stuff, man is there a ton, and plenty comes up on HN with some regularity. I guess I've been a fan of a lot of the work Quanta Magazine does on explaining interesting science and math topics, so you could look through their cryptography-tagged articles [0]. As I think about it more, honestly, though it might seem cliche, reading the Wikipedia entries on cryptography and following the links from there isn't bad either.

Just keep in mind there are plenty of pieces that go into it. There's the mathematics of the algorithms themselves. Then a lot of details around the implementation of them into working software, with efforts like the HACL* project [1] at formal mathematical verification for libraries, which has gone on to benefit projects like Firefox [2] in both security and performance. Then how that interacts with the messy real world of the underlying hardware, and how details there can create side channels that leak data from a seemingly good implementation of perfect math. But also such attacks don't always matter; it depends on the threat scenarios. OTP, symmetric and asymmetric/public-key (all data-preserving), and cryptographic hash functions (which are data-destroying) are all very different things despite falling under the overall banner of "cryptography", with different uses and tradeoffs.

Finally, there is lots and lots of history here going back to well before modern computers at all. Humans have always desired to store and share information with the humans they choose while preventing other humans from gaining it. There certainly have been endless efforts to subvert things, as well as simple mistakes. But we've learned a lot and there's a big quantitative difference between what we can do now and in the past.

>My (rudimentary, layman) understanding is that encryption is almost like a last line of defense and should never be assumed to be unbreakable.

Nope. "We", the collective of all humanity using the internet and a lot of other stuff, do depend on encryption to be "unbreakable" as a first and only line of defense, either truly and perfectly unbreakable or at least unbreakable within given specified constraints. It's the foundation of the entire global e-commerce system and all the trillions and trillions flowing through it, of secure communications for business and war, etc.

Honestly, I'm kind of fascinated that apparently there are people on HN who have somehow internalized the notion of cryptography you describe here. I don't mean that as a dig, just it honestly never occurred to me and I can't remember really seeing it before. It makes me wonder if that feeds into disconnects on things like ChatControl and other government backed efforts to try to use physical coercion to achieve what they cannot via peaceful means. If you don't mind (and see this at some point, or even read this far since this has turned into a long-ass post) could you share what you think about the EU's proposal there, or the UK's or the like? Did you think they could do it anyway so trying to pass a law to force backdoors to be made is a cover for existing capabilities, or what? I'm adamantly opposed to all such efforts, but it's not typically easy to get even the tech literate public on-side. Now I'm curious if thinking encryption is breakable anyway might somehow play a role.

----

0: https://www.quantamagazine.org/tag/cryptography/

1: https://github.com/hacl-star/hacl-star

2: https://blog.mozilla.org/security/2020/07/06/performance-imp...


Wow, thank you for this detailed reply! I’ll be checking out some of those resources at lunch today :)

I didn’t take your comment as a dig at all. I’m honestly a little surprised myself that I’ve made it this far with such a flawed understanding.

> Did you think they could do it anyway so trying to pass a law to force backdoors to be made is a cover for existing capabilities, or what?

I had to do some quick reading on the ChatControl proposal in the EU.

I see it along the lines of: if they really needed to target someone in particular (let's not get into who "deserves" to be targeted), then encryption would only be an obstacle for them to overcome. But for the great majority of traffic, like our posts being submitted to HN, the effort of trying to break the encryption (e.g. dedicating a few months of brute force effort across multiple entire datacenters) simply isn't worth it. In many other scenarios, bypassing the encryption is a lot more practical, like that one operation where I believe the FBI waited for their target to unlock his laptop (decrypting the drive) in a public space, and then literally grabbed the laptop and ran away with it.

The ChatControl proposal sounds like it aims to bypass everyone’s encryption, making it possible to read and process all user data that goes across the wire. I would never be in support of something like that, because it sounds like it sets up a type of backdoor that is always present, and always watching. Like having a bug planted in your apartment where everything you say is monitored by some automated flagging system, just like in 1984.

If a nation state wants to spend the money to dedicate multiple entire datacentres to brute forcing my encrypted communications, achieving billions of years of compute time in the span of a few months or whatever, I’m not a fan but at least it would cost them an arm and a leg to get those results. The impracticality of such an approach makes it so that they don’t frivolously pursue such efforts.

The ability to view everyone’s communications in plaintext is unsettling and seems like it’s just waiting to be abused, much in the same way that the United States’ PRISM was (and probably is still being) abused.


From someone else who was curious about an intelligent answer for the question in the comment above, thanks for taking the time to really deliver something interesting, politely too. Nice to see that not everyone here replies with arrogant disdain to someone who openly admits not knowing much about a complex field like cryptography, and nicely asking about it.


You don’t need a backdoor in the encryption if you can backdoor the devices as such.

Crypto AG anyone?


In fairness, they backdoored the company, the crypto algorithms and the devices at Crypto AG.

Anyway, there are many more recent examples: https://en.wikipedia.org/wiki/Crypto_Wars

Don’t get me started on the unnecessary complexity added to TLS.


Agree completely that it's absolutely wild to run such a system without backups. But at this point no government should keep critical data on foreign cloud storage.


Good thing Korea has cloud providers, apparently Kakao has even gone...beyond the cloud!

https://kakaocloud.com/ https://www.nhncloud.com/ https://cloud.kt.com/

To name a few.


They are overwhelmingly whitelabeled providers. For example, Samsung SDI Cloud (the largest "Korean" cloud) is an AWS white label.

Korea is great at a lot of engineering disciplines. Sadly, software is not one of them, though it's slowly changing. There was a similar issue a couple of years ago where the government's internal intranet was down for a couple of days because someone deployed a switch in front of outbound connections without anyone noticing.

It's not a talent problem but a management problem - similar to Japan's issues, which is unsurprising as Korean institutions and organizations are heavily based on Japanese ones from back in the JETRO era.


I spent a week of my life at a major insurance company in Seoul once, and the military-style security, the obsession with corporate espionage, when all they were working on was an internal corporate portal for an insurance company… The developers had to use machines with no Internet access, and I wasn't allowed to bring my laptop with me lest I use it to steal their precious code. A South Korean colleague told me it was this way because South Korean corporate management is stuffed full of ex-military officers who take the attitudes they get from defending against the North with them into the corporate world. No wonder the project was having so many technical problems; but I couldn't really solve them, because ultimately the problems weren't really technical.


    > South Korean corporate management is stuffed full of ex-military officers
For those unaware, all "able-bodied" South Korean men are required to do about two years of military service. This sentence doesn't do much for me. Also, please remember that Germany also had required military service until quite recently. That means anyone "old" (over 40) and doing corp mgmt was probably also a military officer.


The way it was explained to me was different... yes, all able-bodied males do national service. But there's a different phenomenon in which someone serves some years active duty (so this is not their mandatory national service, this is voluntary active duty service), in some relatively prestigious position, and then jumps ship to the corporate world, and they get hired as an executive by their ex-comrades/ex-superiors... so there ends up being a pipeline from more senior volunteer active duty military ranks into corporate executive ranks (especially at large and prestigious firms), and of course that produces a certain culture, which then tends to flow downhill


Did you happen to notice interesting phenomenon like “the role becomes a rank”?


Also Israel, and their tech ecosystem is tier 1.

As somebody that has also done work in Korea (with one of their banks), my observation was that almost all decision making was top-down, and people were forced to do a ton of monotonous work based on the whims of upper management, and people below could not talk back. I literally stood and watched a director walk in after we had racked a bunch of equipment and comment that the disk arrays should be higher up. When I asked why (they were at the bottom for weight and centre of gravity reasons), he looked shocked that I even asked and tersely said that the blinking lights of the disks at eye level show the value of the purchase better.

I can't imagine writing software in that kind of environment. It'd be almost impossible to do clean work, and even if you did it'd get interfered with. On top of that nobody could go home before the boss.

I did enjoy the fact that the younger Koreans we were working with asked me and my colleague how old we were, because my colleague was 10 years older than me and they were flabbergasted that I was not deferring to him in every conversation, even though we were both equals professionally.

This was circa 2010, so maybe things are better, but oh my god I'm glad it was business trips and I was happy to be flying home each time (though my mouth still waters at the marinated beef at the bbq restaurants I went to...).


Military culture in SK (especially amongst the older generation who served before democratization in the late 1990s) is extremely hierarchical.


> That means anyone "old" (over 40) and doing corp mgmt was probably also a military officer.

Absolutely not. It was very common in Germany to refuse military service and instead do a year of civil service as a replacement. Also, there were several exemptions from the """mandatory""" military service. I have two brothers who had served, so all I did was tick a checkbox and I was done with the topic of military service.


Depends on if these were commissioned officers or NCOs. Basically everyone reaches NCO by the end of service (used to be automatic, now there are tests that are primarily based around fitness), but when people specifically call out officers they tend to be talking about ones with a commission. You are not becoming a commissioned officer through compulsory service.


The difference is that South Korea is currently technically still at war with North Korea.


This - you and half of the smart people here in the comments clearly have no idea what it's like to live across the border from a country that wants you eradicated.


All able bodied men don't become officers.


I've done some work for a large SK company and the security was manageable. Certainly higher than anything I've seen before or after and with security theater aspects, but ultimately it didn't seriously get in the way of getting work done.


I think it makes sense that although this is a widespread problem in South Korea, some places have it worse than others; you obviously worked at a place where the problem was more moderate. And I went there over a decade ago, and maybe even the place I was at has lightened up a bit since.


That doesn't seem accurate at all. The big 3 Korean clouds used inside Korea are NHN Cloud, Naver Cloud and now KT. Which one of these is whitelabeled? And what's the source on Samsung SDI Cloud being the "largest Korean cloud"? What metric?

NHN Cloud is in fact being used more and more in the government [1], as well as playing a big part in the recovery effort of this fire. [2]

No, unlike what you're suggesting, Korea has plenty of independent domestic cloud and the government has been adopting it more and more. It's not on the level of China, Russia or obviously the US, but it's very much there and accelerating quickly. Incomparable to places like the EU which still have almost nothing.

[1] https://www.ajunews.com/view/20221017140755363 - 2022, will have grown a lot now
[2] https://www.mt.co.kr/policy/2025/10/01/2025100110371768374


I am very happy with the software that powers my Hyundai Tucson hybrid. (It's a massive system that runs the gas and electric engines, recharging, shifting gears, braking, object detection, and a host of information and entertainment systems.) After 2 years, 0 crashes and no observable errors. Of course, nothing is perfect: maps suck. The navigation is fine; it's the display that is at least 2 decades behind the times.


I've been working for a Korean Hyundai supplier for two years training them in modern software development processes. The programming part is not a problem, they have a lot of talented people.

The big problem from my point of view is management. Everyone pushes responsibility and work all the way down to the developers, so that they do basically everything themselves, from negotiating with the customer and writing the requirements (or not) to designing the architecture, writing the code and testing the system.

If they're late, they just stay and work longer and on the weekends, and sleep at the desk.


> If they're late,they just stay and work longer and on the weekends and sleep at the desk.

This is the only part that sounds bad? Negotiating with customers may require some help as well but it's better than having many layers in between.


If the dev does everything, their manager may as well be put in a basket and pushed down the river. You can be certain there are a lot of managers. The entire storyline sounds like enterprise illness to me to be honest.


I’ve driven a Tucson several times recently (rental). It did not crash but it was below acceptable. A 15 year old VW Golf has better handling than the Tucson.


    > Korea is great at a lot of engineering disciplines. Sadly, software is not one of them
I disagree. People say the same about Japan and Taiwan (and Germany). IMHO, they are overlooking the incredible talents in embedded programming. Think of all of the electronics (including automobiles) produced in those countries.


Good point! I wasn't treating that as "software" in my answer, but it's true that their embedded programming scene is fairly strong.


Embedded electronics, including from those countries, does not have an enviable reputation. :(


What about automobiles from Japan, Korea, and Germany? They are world class. All modern cars must have millions of lines of code to run all kinds of embedded electronics. Do I misunderstand?


Yet people complain about their many software and UI issues all the time.


Samsung owns Joyent


Nevertheless isn't Joyent registered in the US?


The last time I heard of Joyent was in the mid-2000s on John Gruber’s blog when it was something like a husband-and-wife operation and something to do with WordPress or MovableType - 20 years later now it’s a division of Samsung?

My head hurts


In the meantime, they sponsored development of Node.js in its early days, created a cloud infrastructure based on OpenSolaris and eventually got acquired by Samsung.


Encrypted backups would have saved a lot of pain here


Any backup would do at this point. I think the best is: encrypted, off-site, and tested monthly.
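
"Tested" is the part that usually gets skipped. One minimal way to sanity-check a test restore is to compare it against a manifest of expected hashes; the manifest format and paths below are made up for illustration:

    # Verify a restored backup against a manifest of expected SHA-256 hashes.
    # Assumed manifest format: "<hex digest>  <relative path>" per line.
    import hashlib
    from pathlib import Path

    def sha256_of(path: Path) -> str:
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def verify_restore(restore_root: str, manifest_file: str) -> list[str]:
        """Return the paths that are missing or whose contents changed."""
        failures = []
        for line in Path(manifest_file).read_text().splitlines():
            digest, rel_path = line.split(maxsplit=1)
            target = Path(restore_root) / rel_path
            if not target.is_file() or sha256_of(target) != digest:
                failures.append(rel_path)
        return failures

    # e.g. run after a monthly test restore:
    # failures = verify_restore("/mnt/test-restore", "manifest.sha256")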


You don’t need cloud when you have the data centre, just backups in physical locations somewhere else


Others have pointed out: you need uptime too. So a single data center on the same electric grid or geographic fault zone wouldn’t really cut it. This is one of those times where it sucks to be a small country (geographically).


> so a single data center on the same electrical grid or geographic...

Yes, but your backup DCs can have diesel generators and a few weeks of on-site fuel. South Korea has some quakes, but quake-resistant DCs exist, and SK is big enough to site 3 DCs at the corners of an equilateral triangle with 250km edges. Similar for typhoons. Invading NK armies and nuclear missiles are tougher problems, but having more geography would be of pretty limited use against those.


> no government should keep critical data on foreign cloud storage

Primary? No. Back-up?

These guys couldn’t provision a back-up for their on-site data. Why do you think it was competently encrypted?


They fucked up, that much is clear, but they should not have kept that data on foreign cloud storage regardless. It's not like there are only two choices here.


> they should not have kept that data on foreign cloud storage regardless. It's not like there are only two choices here

Doesn't have to be an American provider (Though anyone else probably increases Seoul's security cross section. America is already its security guarantor, with tens of thousands of troops stationed in Korea.)

And doesn't have to be permanent. Ship encrypted copies to S3 while you get your hardened-bunker domestic option constructed. Still beats the mess that's about to come for South Korea's population.


I'm aware of a big cloud services provider (I won't name any names but it was IBM) that lost a fairly large amount of data. Permanently. So that too isn't a guarantee. They simply should have made local and off-line backups, that's the gold standard, and to ensure that those backups are complete and can be used to restore from scratch to a complete working service.


>I'm aware of a big cloud services provider (I won't name any names but it was IBM) that lost a fairly large amount of data. Permanently. So that too isn't a guarantee.

Permanently losing data at a given store point isn't relevant to losing data overall. Data store failures are assumed or else there'd be no point in backups. What matters is whether failures in multiple points happen at the same time, which means a major issue is whether "independent" repositories are actually truly independent or whether (and to what extent) they have some degree of correlation. Using one or more completely unique systems done by someone else entirely is a pretty darn good way to bury accidental correlations with your own stuff, including human factors like the same tech people making the same sorts of mistakes or reusing the same components (software, hardware or both). For government that also includes political factors (like any push towards using purely domestic components).

>They simply should have made local and off-line backups

FWIW there's no "simply" about that though at large scale. I'm not saying it's undoable at all but it's not trivial. As is literally the subject here.
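
The value of genuinely independent copies is easy to see with toy numbers; the 1% annual loss probability below is invented purely for illustration, and correlation is exactly what breaks this arithmetic:

    # Toy arithmetic: chance of losing every copy in the same year,
    # valid only if the copies really do fail independently.
    p_single = 0.01                        # assumed 1% annual loss per copy

    for copies in (1, 2, 3):
        p_total_loss = p_single ** copies
        print(f"{copies} independent copies -> {p_total_loss:.0e} chance of total loss/year")

    # Same datacenter, same admins, same software bug: the copies are correlated
    # and the effective count collapses back toward 1.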


> Permanently losing data at a given store point isn't relevant to losing data overall.

I can't reveal any details but it was a lot more than just a given storage point. The interesting thing is that there were multiple points along the way where the damage would have been recoverable but their absolute incompetence made matters much worse to the point where there were no options left.

> FWIW there's no "simply" about that though at large scale. I'm not saying it's undoable at all but it's not trivial. As is literally the subject here.

If you can't do the job you should get out of the kitchen.


>I can't reveal any details but it was a lot more than just a given storage point

Sorry, brain not really clicking tonight and I used lazy, imprecise terminology here; been a long one. But what I meant by "store point" was any single data repository that can be interacted with as a unit, regardless of implementation details, that's part of a holistic data storage strategy. So in this case the entirety of IBM would be a "storage point", and then your own self-hosted system would be another, and if you also had data replicated to AWS etc those would be others. IBM (or any other cloud storage provider operating in this role) effectively might as well simply be another hard drive. A very big, complex and pricey magic hard drive that can scale its own storage and performance on demand, granted, but still a "hard drive".

And hard drives fail, and that's ok. Regardless of the internal details of how the IBM-HDD ended up failing, the only way it'd affect the overall data is if that failure happened simultaneously with enough other failures at local-HDD and AWS-HDD and rsync.net-HDD and GC-HDD etc etc that it exceeded available parity to rebuild. If these are all mirrors, then only simultaneous failure of every single last one of them would do it. It's fine for every single last one of them to fail... just separately, with enough of a time delta between each one that the data can be rebuilt on another.

>If you can't do the job you should get out of the kitchen.

Isn't that precisely what bringing in external entities as part of your infrastructure strategy is? You're not cooking in their kitchen.


Ah ok, clear. Thank you for the clarification. Some more interesting details: the initial fault was triggered by a test of a fire suppression system; that would have been recoverable. But someone thought they were exceedingly clever and that they were going to fix it without any downtime, and that's when a small problem became a much larger one, more so when they found out that their backups were incomplete. I still wonder if they ever did an RCA/PM on this and what their lessons learned were. It should be a book-sized document given how much went wrong. I got the call from one of their customers after their own efforts failed, and after hearing them out I figured this was not worth my time because it just wasn't going to work.


Thanks in turn for the details, always fascinating (and useful for lessons... even if not always for the party in question dohoho) to hear a touch of inside baseball on that kind of incident.

>But someone thought they were exceedingly clever and they were going to fix this without any downtime and that's when a small problem became a much larger one

The sentence "and that's when a small problem became a big problem" comes up depressingly frequently in these sorts of post mortems :(. Sometimes sort of feels like, along all the checklists and training and practice and so on, there should also simply be the old Hitchhiker's Guide "Don't Panic!" sprinkled liberally around along with a dabbing of red/orange "...and Don't Be Clever" right after it. We're operating in alternate/direct law here folks, regular assumptions may not hold. Hit the emergency stop button and take a breath.

But of course management and incentive structures play a role in that too.


In this context the entirety of IBM cloud is basically a single storage point.

(If IBM was also running the local storage then we're talking about a very different risk profile from "run your own storage, back up to a cloud" and the anecdote is worth noting but not directly relevant.)


If that’s the case, then they should make it clear they don’t provide data backup.

A quick search reveals IBM does still sell backup solutions, including ones that backup from multiple cloud locations and can restore to multiple distinct cloud locations while maintaining high availability.

So, if the claims are true, then IBM screwed up badly.


DigitalOcean lost some of my files in their object storage too: https://status.digitalocean.com/incidents/tmnyhddpkyvf

Using a commercial provider is not a guarantee.


DO Spaces, for at least a year after launch, had no durability guarantees whatsoever. Perhaps they do now, but I wouldn’t compare DO in any meaningful way to S3, which has crazy high durability guarantees as well as competent engineering effort expended on designing and validating that durability.


They should have kept encrypted data somewhere else. If they know how to use encryption, it doesn't matter where. Some people even use steganographic backups on YouTube.


It's 2025. Encryption is a thing now. You can store anything you want on foreign cloud storage. I'd give my backups to the FSB.


> I'd give my backups to the FSB.

Until you need them - like with the article here ;) - then the FSB says "only if you do these specific favours for us first...".


There are certifications too, which you don't get unless you conform to, for example, EU data protection laws. On paper anyway. But these have opened up Amazon and Azure to e.g. Dutch government agencies; the tax office will be migrating to Office 365, for example.


Encryption does not ensure any type of availability.


Why not? If the region is in country, encrypted, and with proven security attestations validated by third parties, a backup to a cloud storage would be incredibly wise. Otherwise we might end up reading an article about a fire burning down a single data center


Microsoft has already testified that the American government maintains access to their data centres, in all regions. It likely applies to all American cloud companies.

America is not a stable ally, and has a history of spying on friends.

So unless the whole of your backup is encrypted offline, and you trust the NSA to never break the encryption you chose, it's a national security risk.


> France spies on the US just as the US spies on France, the former head of France’s counter-espionage and counter-terrorism agency said Friday, commenting on reports that the US National Security Agency (NSA) recorded millions of French telephone calls.

> Bernard Squarcini, head of the Direction Centrale du Renseignement Intérieur (DCRI) intelligence service until last year, told French daily Le Figaro he was “astonished” when Prime Minister Jean-Marc Ayrault said he was "deeply shocked" by the claims.

> “I am amazed by such disconcerting naiveté,” he said in the interview. “You’d almost think our politicians don’t bother to read the reports they get from the intelligence services.”

> “The French intelligence services know full well that all countries, whether or not they are allies in the fight against terrorism, spy on each other all the time,” he said.

> “The Americans spy on French commercial and industrial interests, and we do the same to them because it’s in the national interest to protect our companies.”

> “There was nothing of any real surprise in this report,” he added. “No one is fooled.”


France has had a reputation for being especially active in industrial espionage since at least the 1990s. Here's an article from 2011 https://www.france24.com/en/20110104-france-industrial-espio...

I always thought it was a little unusual that the state of France owns over 25% of the defense and cyber security company Thales.


> I always thought it was a little unusual that the state of France owns over 25% of the defense and cyber security company Thales.

Unusual from an American perspective, maybe. The French state has stakes in many companies, particularly in critical markets that affect national sovereignty and security, such as defence or energy. There is a government agency to manage this: https://en.wikipedia.org/wiki/Agence_des_participations_de_l... .


> America is not a stable ally, and has a history of spying on friends

America is a shitty ally for many reasons. But spying on allies isn’t one of them. Allies spy on allies to verify they’re still allies. This has been done throughout history and is basic competency in statecraft.


That doesn’t capture the full truth. Since Snowden, we have hard evidence the NSA has been snooping on foreign governments and citizens alike with the purpose of harvesting data and gathering intelligence, not just to verify their loyalty.

No nation should trust the USA, especially not with their state secrets, if they can help it. Not that other countries are inherently more trustworthy, but the US is a known bad actor.


> Since Snowden, we have hard evidence the NSA has been snooping on foreign governments and citizens alike

We also know this is also true for Russia, China and India. Being spied on is part of the cost of relying on external security guarantees.

> Not that other countries are inherently more trustworthy, but the US is a known bad actor

All regional and global powers are known bad actors. That said, Seoul is already in bed with Washington. Sending encrypted back-ups to an American company probably doesn't increase its threat cross section materially.


> All regional and global powers are known bad actors.

That they are. Americans tend to view themselves as "the good guys" however, which is a wrong observation and thus needs pointing out in particular.

> That said, Seoul is already in bed with Washington. Sending encrypted back-ups to an American company probably doesn't increase its threat cross section materially.

If they have any secrets they attempt to keep even from Washington, they are contained in these backups. If that is the case, storing them (even encrypted) with an American company absolutely compromises security, even if there is no known threat vector at this time. The moment you give up control of your data, it will forever be subject to new threats discovered afterward. And that may just be something like observing the data volume after an event occurs that might give something away.


Being "in bed with Washington" doesn't really seem any kind of protection right now.

Case in point: https://en.wikipedia.org/wiki/2025_Georgia_Hyundai_plant_imm...

> The raid led to a diplomatic dispute between the United States and South Korea, with over 300 Koreans detained, and increased concerns about foreign companies investing in the United States.


There is no such thing as good or trustworthy actors when it comes to state affairs. Each and every one attempts to spy on the others. Perhaps the US has more resources to do so than some others.

You really have no evidence to back up your assertion, because you’d have to be an insider.


> There is no such thing as good or trustworthy actors when it comes to state affairs. Each and every one attempts to spy on the others. Perhaps the US has more resources to do so than some others.

Perhaps is doing a lot of work here. They do, and they are. That is what the Snowden leaks proved.

> You really have no evidence to back up your assertion, because you’d have to be an insider.

I don't, because the possibility alone warrants the additional caution.


Didn't mean to imply one followed from the other. Rather that both combined creates a risk.


Not only does the NSA break encryption but they actually sabotage algorithms to make them easier to break when used.


DES is an example of where people were sure that NSA persuaded IBM to weaken it but, to quote Bruce Schneier, "It took the academic community two decades to figure out that the NSA 'tweaks' actually improved the security of DES". <https://www.cnet.com/news/privacy/saluting-the-data-encrypti...>


Can the NSA break the Ed25519 stuff? Like the crypto_box from libsodium?


Ed25519 (and Curve25519/X25519) are generally understood not to be backdoored by the NSA, or weak in any known sense.

The lack of a backdoor can be proven by choosing parameters according to straightforward reasons that do not allow the possibility for the chooser to insert a backdoor. The curve25519 parameters have good reasons why they are chosen. By contrast, Dual_EC_DRBG contains two random-looking numbers, which the NSA pinky-swears were completely random, but actually they generated them using a private key that only the NSA knows. Since the NSA got to choose any numbers to fit there, they could do that. When something is, like, "the greatest prime number less than 2^255" you can't just insert the public key of your private key into that slot because the chance the NSA can generate a private key whose public key just happens to match the greatest prime number less than 2^255 is zero. These are called "nothing up my sleeve numbers".

This doesn't prove the algorithm isn't just plain old weak, but nobody's been able to break it, either. Or find any reason why it would be breakable. Elliptic curves being unbreakable rests on the discrete logarithm of a random-looking permutation being impossible to efficiently solve, in a similar way to how RSA being unbreakable relies on nobody being able to efficiently factorize very big numbers. The best known algorithms for solving discrete logarithm require O(sqrt(n)) time, so you get half the bits of security as the length of the numbers involved; a 256-bit curve offers 128 bits of security, which is generally considered sufficient.

(Unlike RSA, you can't just arbitrarily increase the bit length but have to choose a completely new curve for each bit length, unfortunately. ed25519 will always be 255 bits, and if a different length is needed, it'll be similar but called something else. On the other hand, that makes it very easy to standardize.)
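
The "nothing up my sleeve" property is something anyone can poke at. For example, Curve25519's field prime is the transparent value 2^255 - 19, chosen for simple, checkable reasons rather than pulled out of thin air. A tiny sketch, assuming sympy is installed:

    # Curve25519's field prime: easy for anyone to re-derive and check,
    # which leaves no obvious room to hide a trapdoor in its choice.
    from sympy import isprime

    p = 2**255 - 19
    print(isprime(p))        # True
    print(p.bit_length())    # 255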


> but nobody's been able to break it, either.

Absence of evidence is not evidence of absence. It could well be that someone has been able to break it but that they or that organization did not publish.


How could you not!? Think of the bragging rights. Or perhaps the havoc. That persons could sit on this secret for long periods of time seems... difficult to maintain. If you know it's broken and you've discovered it, surely someone else could too. And they've also kept the secret?

I agree on the evidence/absence of conjecture. However, the impact of the secret feels impossible to keep.

Time will, of course, tell; it wouldn't be the first occasion where that has embarrassed me.


Some people are able to shut the hell up. If you're not one of them, you're not getting told. Some people can keep a secret. Some people can't. Others get shot. Warframe is a hilarious example where people can't shut the hell up about things they know they should keep quiet about.


There are a large number of mathematicians gainfully employed in breaking such things without talking about it.


It is, actually. A correct statement would be “absence of proof is not proof of absence”, but “evidence” and “proof” are not synonyms.


Large amounts of data, like backups, are encrypted using a symmetric algorithm. Which makes the strength of Ed25519 somewhat unimportant in this context.


There are no stable allies. No country spies on its friends because countries don't have friends, they have allies. And everybody spies on their allies.


Spies play one of the most important roles in global security.

People who don’t know history think spying on allies is bad.


Exactly.

Like, don't store it in the cloud of an enemy country of course.

But if it's encrypted and you're keeping a live backup in a second country with a second company, ideally with a different geopolitical alignment, I don't see the problem.


The problem is money,

you are seeing the local storage decision through the lens of security, but that is not the real reason for this type of decision.

While it may have been sold that way, the reality is more likely that the local DC companies just lobbied for it to be kept local and cut as many corners as they needed. Both the fire and the architecture show they did cut deeply.

Now why would a local company voluntarily cut down its share of the pie by suggesting backups be stored in a foreign country? They are going to suggest keeping it in-country, or worse, as was done here, literally in the same facility, and save/make even more!

The civil service would also prefer everything local, either for nationalistic/economic reasons or, if corrupt, for the kickbacks each step of the way: first for the contract, next for the building permits, utilities and so on.


Enemy country in the current geopolitical climate is an interesting take. Doesn't sound like a great idea to me tbh.


There are a lot of gray relations out there, but there's almost no way you could morph the current US/SK relationship into one of hostility, beyond a negligible minority of citizens in either country being super vocal about some perceived slights.


One could have said the exact same thing about US-EU relations just a couple of years ago. And yet, here we are.


You think when ICE arrested over 300 South Korean citizens who were setting up a Georgia Hyundai plant and subjected them to alleged human rights abuses, it was only a perceived slight?

https://www.huffpost.com/entry/south-korea-human-rights-inve...

How Trump’s ICE Raid Triggered Nationwide Outrage in South Korea

https://www.newsweek.com/trump-ice-raid-hyundai-outrage-sout...

'The raid "will do lasting damage to America's credibility," John Delury, a senior fellow at the Asia Society think tank, told Bloomberg. "How can a government that treats Koreans this way be relied upon as an 'ironclad' ally in a crisis?"'


Yes.


A year ago, I would have easily claimed the same thing about Denmark.


I don't follow. Can you share more context?


The US is threatening to invade Greenland, which would mean active war with Denmark.


Great point! I forgot that Greenland is not (yet) an independent nation. It is still a part of Denmark.


The current US admin's threats to annex Greenland, an autonomous territory of Denmark.


Trump will find a way, just as he did with Canada for example (I mean, Canada of all places). Things are way more in flux than they used to be. There's no stability anymore.


From the perspective of securing your data, what's the practical difference between a second country and an enemy country? None. Even if it's encrypted data, all encryption can be broken, and so we must assume it will be broken. Sensitive data shouldn't touch outside systems, period, no matter what encryption.


A statement like "all encryption can be broken" is about as useful as "all systems can be hacked" in which case, not putting data in the cloud isn't really a useful argument.


Any even remotely proper symmetric encryption scheme "can be broken" but only if you have a theoretical adversary with nearly infinite power and time, which is in practice absolutely utterly impossible.

I'm sure cryptographers would love to know what makes it possible for you to assume that, say, AES-256 can be broken in practice for you to include it in your risk assessment.


The risk that the key leaks through an implementation bug or a human intelligence source.

Exfiltrating terabytes of data is difficult, exfiltrating 32 bytes is much less so.


That's very far from the encryption itself being broken though. If that were the claim, I would have had no complaints.


You’re assuming we don’t get better at building faster computers and decryption techniques. If an adversary gets hold of your encrypted data now, they can just shelf it until cracking becomes eventually possible in a few decades. And as we’re talking about literal state secrets here, they may very well still be valuable by then.


Barring any theoretical breakthroughs, AES can't be broken any time soon even if you turned every atom in the universe into a computer and had them all cracking all the time. There was a paper that does the math.


You make an incorrect assumption about my assumptions. Faster computers or decryption techniques will never fundamentally "break" symmetric encryption. There's no discrete logarithm or factorization problem to speed up. Someone might find ways to make for example AES key recovery somewhat faster, but the margin of safety in those cases is still incredibly vast. In the end there's such an unfathomably vast key space to search through.


You're also assuming nobody finds a fundamental flaw in AES that allows data to be decrypted without knowing the key and much faster than brute force. It's pretty likely there isn't one, but a tiny probability multiplied by a massive impact can still land on the side of "don't do it".


I'm not. It's just that the math behind AES is very fundamental and incredibly solid compared to a lot of other (asymmetric) cryptographic schemes in use today. Calling the chances of it tiny instead of nearly nonexistent sabotages almost all risk assessments, especially if it then overshadows other parts of that assessment (like data loss). Even if someone found "new math" and it takes, very optimistically, 60 years, of what value is that data then? It's not a useful risk assessment if you assess it over infinite time.

But you could also go with something like OTP and then it's actually fundamentally unbreakable. If the data truly is that important, surely double the storage cost would also be worth it.


> From the perspective of securing your data, what's the practical difference between a second country and an enemy country? None.

Huh? An enemy country will shut off your access. Friendly countries don't.

> Even if it's encrypted data, all encryption can be broken, and so we must assume it will be broken.

This is a very, very hot take.


A country can become an adversary faster than a government can migrate away from it.


Hence a backup country. I already covered that.

But while countries go from unfriendly to attacking you overnight, they don't generally go from friendly to attacking you overnight.


Overnight, Canada went from being an ally of the US to being threatened by annexation (and target #1 of an economic war).

If the US wants its state-puppet corporations to be used for integral infrastructure by foreign governments, it's going to need to provide some better legal assurances than 'trust me bro'.

(Some laws on the books, and a congress and a SCOTUS that has demonstrated a willingness to enforce those laws against a rogue executive would be a good start.)


And which organization has every file, from each of their applications using the cloud, encrypted *before* it is sent to the cloud?


They're talking about backups. You can absolutely send an updated copy every night.


True, the user I was replying to only mentioned backups.

For those, there's surely no problem.


Especially on US cloud storage.

The data is never safe thanks to the US Cloud Act.


If you can't encrypt your backups such that you could store them tattooed on Putin's ass, you need to learn more about backups.


Governments need to worry about

1. future cryptography attacks that do not exist today

2. Availability of data

3. The legal environment of the data

Encryption is not a panacea that solves every problem


Why not?

Has there been any interruption in service?


And yet here is an example where keeping critical data off public cloud storage has been significantly worse for them in the short term.

Not that they should just go all in on it, but an encrypted copy on S3 or GCS would seem really useful right about now.


You can do a bad job with public or private cloud. What if they would have had the backup and lost the encryption key?

Cost wise probably having even a Korean different data center backup would not have been huge effort, but not doing it exposed them to a huge risk.


Then they didn't have a correct backup to begin with; for high profile organizations like that, they need to practice outages and data recovery as routine.

...in an ideal world anyway, in practice I've never seen a disaster recovery training. I've had fire drills plenty of times though.


We’ve had Byzantine crypto key solutions since at least 2007 when I was evaluating one for code signing for commercial airplanes. You could put an access key on k:n smart cards, so that you could extract it from one piece of hardware to put on another, or you could put the actual key on the cards so burning down the data center only lost you the key if you locked half the card holders in before setting it on fire.



Rendering security concepts in hardware always adds a new set of concerns. Which Shamir spent a considerable part of his later career testing and documenting. If you look at side channel attacks you will find his name in the author lists quite frequently.


> The government official who insisted that commercial AWS/GCP/Azure couldn't possibly be trusted with keeping the information will be keeping their head low for a few days then...

They absolutely cannot be trusted, especially sensitive govt. data. Can you imagine the US state department getting their hands on compromising data on Korean politicians?

It's like handing over the govt. to US interests wholesale.

That they chose not to keep a backup, and then another, at different physical locations is a valuable lesson, and it must lead to even better design next time.

But the solution is not to keep it in US hands.


Using the cloud would have been the easiest way to achieve the necessary redundancy, but by far not the only one. This is just a flawed concept from the start, with no real redundancy.


But not security. And for governmental data, security is a far more important consideration.

Not losing data while keeping untrusted parties out of it is a hard problem, one that "cloud", a.k.a. "stored somewhere that is accessible by agents of a foreign nation", does not solve.


It's the government of South Korea, which has a nearly 2 trillion dollar GDP. Surely they could have built a few more data centers connected with their own fiber if they were that paranoid about it.


As OP says, cloud is not the only solution, just the easiest. They should probably have had a second backup in a different building. It would probably require a bit more involvement, but def doable.


There is some data privacy requirement in SK where application servers and data have to remain in the country. I worked for a big global bank and we had 4 main instances of our application: Americas, EMEA, Asia and South Korea.


When I worked on Apple Maps infra South Korea required all servers be in South Korea.


It was the same at google. If I'm remembering right we couldn't export any vector type data (raster only) and the tiles themselves had to be served out of South Korea.


If only there were a second data center in South Korea where they could backup their data…


I know there is legit hate for VMware/Broadcom, but there is a case to be made for VCF with an equivalent DR setup: replication enabled by Superna and Dell PowerProtect Data Domain protecting both local and remote, with a Thales Luna K160 KMIP for the data-at-rest encryption on the vSAN.

To add, use F710s, H710s and then add ObjectScale storage for your Kubernetes workloads.

This setup repatriates your data and gives you a Cloud like experience. Pair it with like EKS-A and you have a really good on premises Private Cloud that is resilient.


This reads very similar to the Turbo Encabulator video.


> G-Drive’s structure did not allow for external backups

Ha! "Did not allow" my ass. Let me translate:

> We didn't feel like backing anything up or insisting on that functionality.


Pretty sensible to not host it on these commercial services. What is not so sensible is to not make backups.


I was once advised to measure your backup security in zip codes and time zones.

You have a backup copy of your file, in the same folder? That helps for some "oops" moments, but nothing else.

You have a whole backup DRIVE on your desktop? That's better. Physical failure of the primary device is no longer a danger. But your house could burn down.

You have an alternate backup stored at a trusted friend's house across the street? Better! But what if a major natural disaster happens?

True life, 30+ years ago when I worked for TeleCheck, data was their lifeblood. Every week a systems operator went to Denver, the alternate site, with a briefcase full of backup tapes. TeleCheck was based in Houston, so a major hurricane could've been a major problem.


Not sure a “sane backup strategy” requires “parking your whole government in a private company under American jurisdiction.” I feel like I can think of a bunch of things that a nation would be sad to lose, but would be even sadder to have adversaries rifling through at will, or, for that matter, using to extort favors under threat of cutting off your access.

At least in this case you can track down said officials in their foxholes and give them a good talking-to. Good luck holding AWS/GCP/Azure accountable…


He may or may not have been right, but it's beside the point.

The 3-2-1 backup rule is basic.
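For anyone unfamiliar, here's a rough sketch of what even a naive version could look like: three copies, two different media, one offsite. The paths, bucket name, and use of boto3 are placeholders I'm assuming, not anything from the article.

    # Minimal 3-2-1 sketch: 3 copies, 2 media, 1 offsite. All names are hypothetical.
    import datetime
    import shutil
    import tarfile

    import boto3

    SOURCE = "/srv/g-drive-data"            # copy 1: the primary data
    SECOND_MEDIUM = "/mnt/backup-disk"      # copy 2: a separate physical disk
    OFFSITE_BUCKET = "example-offsite-bak"  # copy 3: different region/provider

    stamp = datetime.date.today().isoformat()
    archive = f"/tmp/backup-{stamp}.tar.gz"

    # Copy 2: archive onto a different local medium.
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(SOURCE, arcname="g-drive-data")
    shutil.copy2(archive, SECOND_MEDIUM)

    # Copy 3: ship the same archive offsite (assumes AWS credentials in the environment).
    boto3.client("s3").upload_file(archive, OFFSITE_BUCKET, f"backup-{stamp}.tar.gz")

Encrypt the archive before the last step if the offsite leg is somewhere you don't fully trust.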


Well, it is just malpractice. Even as a first-semester art student I knew about the concept of off-site backups.


If you (as the SK government) were going to do a deal with " AWS/GCP/Azure" to run systems for the government, wouldn't you do something like the Jones Act? The datacenters must be within the country and staffed by citizens, etc.


A Microsoft exec testified that the US government can get access to the data Azure stores in other countries. I thought this was a wild allegation, but apparently it is true [0].

[0]https://www.theregister.com/2025/07/25/microsoft_admits_it_c...


Because these companies never lose data, like during some lightning strikes, oh wait: https://www.bbc.com/news/technology-33989384

As a government you should not be putting your stuff in an environment under control of some other nation, period. That is a completely different issue and does not really relate to making backups.


“The BBC understands that customers, through various backup technologies, external, were able to recover all lost data.”

You backup stuff. To other regions.


But the Korean government didn't back up; that's the problem in the first place here…


Sure. Using a cloud can make that more convenient. But obviously not if you then keep all your data in the same region, or even the same “availability zone” (which seems to have been the case for all the data “lost to lightning strikes” here).


>As a government you should not be putting your stuff in an environment under control of some other nation, period.

Why? If you encrypt it yourself before transfer, the only possible control some_other_nation will have over you or your data is availability.


You're forgetting that you're talking about nation states here. Breaking encryption is, in fact, the job of the very people you'd be giving access to.

Sovereign delivery makes sense for _nations_.


You can use and abuse encrypted one-time pads and multiple countries to guarantee it’s not retrievable.


Using a OTP in your backup strategy adds way more complexity, failure modes, and costs with literally no improvement in your situation.


You're assuming a level of competency that's hard to warrant at this point.


If your threat model is so demanding that it includes encryption being broken, then maybe you do need that level of competency in the process as well.

They have a $2 trillion economy. Competency shouldn't be the limiting factor at that scale; the money doesn't automatically make them more competent, but it certainly made building that competency possible.

Maybe this incident at least teaches us something. I'm genuinely curious how the parent comment imagines sharing a one-time pad in practice: most others here suggest using a cloud like AWS, and at the scale of petabytes and more I don't see how distributing a pad is workable. I'd love for the GP to describe a practical way of doing it that actually offers more safety than conventional encryption.


I think it doesn't need to be the encryption breaking per se.

It could be a gov laptop with the encryption keys left at a bar. Or the wrong keys saved on the system, so the backups can't actually be decrypted. Or keys reused at large scale and leaked or guessed from a lower-security area. Etc.

Relying on encryption requires operational knowledge and discipline. At some point a base level of competency is required anyway; I'm just not sure encryption would have saved them as much as we'd wish it would.

To your point, I'd assume high-profile incidents like this one will put more pressure on making radical changes, and in particular on treating digital data as a critical asset that you can't hand off willy-nilly to the most corrupt entity just for the kickback.

South Korea doesn't lack competent people, but hiring them and putting them at the helm sounds like a tough task.


First of all, you cannot do much if you keep all the data encrypted on the cloud (basically just backing things up, and hoping you don't have to fetch it, given the egress cost). Also, availability is exactly the kind of issue that a fire causes…


Yeah backups would’ve been totally useless in this case. All South Korea could’ve done is restore their data from the backups and avoid data loss.


What part of the incident did you miss? The problem here was that they didn't back up in the first place.

You don't need the cloud for backups, and there's no reason to believe they would have backed up their data any more diligently on the cloud than they did with their self-hosting…


For this reason, Microsoft has Azure US Government, Azure China etc


Yeah, I heard that consumer clouds are only locally redundant and that there aren't even backups, so damage to a big DC could result in data loss.


By default, Amazon S3 stores data across at least three separate datacenters that are in the same region but physically separate from each other:

Amazon S3 provides a highly durable storage infrastructure designed for mission-critical and primary data storage. S3 Standard, S3 Intelligent-Tiering, S3 Standard-IA, S3 Glacier Instant Retrieval, S3 Glacier Flexible Retrieval, and S3 Glacier Deep Archive redundantly store objects on multiple devices across a minimum of three Availability Zones in an AWS Region. An Availability Zone is one or more discrete data centers with redundant power, networking, and connectivity in an AWS Region. Availability Zones are physically separated by a meaningful distance, many kilometers, from any other Availability Zone, although all are within 100 km (60 miles) of each other.

You can save a little money by giving up that redundancy and having your data in a single AZ:

The S3 One Zone-IA storage class stores data redundantly across multiple devices within a single Availability Zone

For further redundancy you can set up replication to another region, but if I needed that level of redundancy, I'd probably store another copy of the data with a different cloud provider, so an AWS global failure (or, more likely, a billing issue) doesn't leave my data trapped with one vendor.

I believe Google and Azure offer similar levels of redundancy in their cloud storage.
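For reference, a rough sketch of the cross-region replication setup with boto3 follows; the bucket names and IAM role ARN are placeholders I made up, and both buckets need versioning enabled before the rule takes effect.

    # Hedged sketch: enable S3 cross-region replication on an existing bucket.
    import boto3

    s3 = boto3.client("s3")

    # Versioning must be on for the source (and, separately, the destination) bucket.
    s3.put_bucket_versioning(
        Bucket="example-src-seoul",
        VersioningConfiguration={"Status": "Enabled"},
    )

    s3.put_bucket_replication(
        Bucket="example-src-seoul",
        ReplicationConfiguration={
            "Role": "arn:aws:iam::123456789012:role/example-replication-role",
            "Rules": [{
                "ID": "replicate-everything",
                "Priority": 1,
                "Filter": {"Prefix": ""},   # empty prefix = the whole bucket
                "Status": "Enabled",
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::example-dst-osaka"},
            }],
        },
    )

A real setup would also create the destination bucket in a different region and enable versioning there with a region-specific client.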


What do you mean by "consumer clouds"?


I refer to stuff like onedrive/gdrive/dropbox.


It's certainly not the case for Google Drive, which is geo-replicated, and I would be very surprised if it's true for any other major cloud.


I mean… at the risk of misinterpreting sarcasm—

Except for the backup strategy said consumers apply to their data themselves, right?

If I use a service called “it is stored in a datacenter in Virginia” then I will not be surprised when the meteor that hits Virginia destroys my data. For that reason I might also store copies of important things using the “it is stored in a datacenter in Oregon” service or something.


You might expect backups in case of fire, though. Even if data is not fully up to date.


...on a single-zone persistent disk: https://status.cloud.google.com/incident/compute/15056#57195...

> GCE instances and Persistent Disks within a zone exist in a single Google datacenter and are therefore unavoidably vulnerable to datacenter-scale disasters.

Of course, it's perfectly possible to have proper distributed storage without using a cloud provider. It happens to be hard to implement correctly, so apparently, the SK government team in question just decided... not to?


The simple solution here would have been something like a bunch of netapps with snapmirrors to a secondary backup site.

Or ZFS or DRBD or whatever homegrown or equivalent non-proprietary alternative is available these days and you prefer.
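A rough sketch of the ZFS variant, driven from Python; the pool/dataset names, remote host, and snapshot dates are placeholders, and it assumes the previous snapshot already exists on both sides plus SSH key auth to the second site.

    # Hedged sketch: snapshot locally, then ship the incremental stream to a second site.
    import datetime
    import subprocess

    DATASET = "tank/gdrive"
    REMOTE = "backup-site"          # second data center
    REMOTE_DATASET = "backup/gdrive"

    today = datetime.date.today().isoformat()
    prev = "2025-09-30"             # previous snapshot, already present on both sides

    subprocess.run(["zfs", "snapshot", f"{DATASET}@{today}"], check=True)

    # Equivalent of: zfs send -i <old> <new> | ssh <remote> zfs receive -F <dataset>
    send = subprocess.Popen(
        ["zfs", "send", "-i", f"{DATASET}@{prev}", f"{DATASET}@{today}"],
        stdout=subprocess.PIPE,
    )
    subprocess.run(
        ["ssh", REMOTE, "zfs", "receive", "-F", REMOTE_DATASET],
        stdin=send.stdout,
        check=True,
    )
    send.stdout.close()
    send.wait()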


Usually these mandates are made by someone who evaluates “risks.” Third-party risks are evaluated under the assumption that everything will be done sensibly in the first-party scenario; to boot, the first-party option looks cheaper, as disk drives etc. are only a fraction of the total cost.

Reality hits later, when budget cuts and constrained salaries prevent the maintenance of a competent team, or the proposed backup system is deemed excessively risk-averse and the money can't be spared.


They put everything in only one datacenter. A datacenter located elsewhere should have been set up to mirror it.

This has nothing to do with commercial clouds. Commercial clouds are just datacenters. They could just as well have picked one commercial cloud region and done nothing more to mirror or back up to different regions. I understand some of the services have inherent backups.


Mirroring is not backup.


>The government official who insisted that commercial AWS/GCP/Azure couldn't possibly be trusted with keeping the information will be keeping their head low for a few days then...

They can't. The Trump admin sanctioning the International Criminal Court, and Microsoft blocking them from all services as a result, are proof of why.


What a lame excuse. “The G-Drive’s structure did not allow for backups” is a blatant lie. It’s code for, “I don’t value other employees’ time and efforts enough to figure out a reliable backup system; I have better things to do.”

Whoever made this excuse should be demoted to a journeyman ops engineer. Firing would be too good for them.


It could be accurate. Let’s say, for whatever reason, it is.

Ok.

Then it wasn’t a workable design.

The idea of “backup sites” has existed forever. The fact you use the word “cloud” to describe your personal collection of servers doesn’t suddenly mean you don’t need backups in a separate physical site.

If the government mandates its use, it should have a hot site at a minimum. Even without that a physical backup in a separate physical location in case of fire/attack/tsunami/large band of hungry squirrels is a total must-have.

However it came to be decided that not having that was OK, that decision was negligence.


Silly to think this is the fault of ops engineers. More likely, the project manager or the C-suite had neither time nor budget to allocate to disaster recovery.

The project shipped, it's done, they've already moved us onto the next task; no one wants to pay for maintenance anyway.

This has been my experience at 99% of the companies I have worked for in my career, while the engineers who built the bloody thing groan, well aware of all the failure modes of the system they've built. No one cares until it breaks, and hopefully they get the chance to say "I **** told you this was inadequate".


You could be right, but it could also be a bad summary or bad translation.

We shouldn't rush to judgement.


Your first criticism was that they should have handed their data sovereignty over to another country?

There are many failure points here; not paying Amazon/Google/Microsoft is hardly the main one.


Days? That's optimistic. It depends on what the govt cloud contained. For example, imagine all the car registrations, or all the payments for the pension fund.


Dude, the issues go wayyy beyond opting for selfhosting rather than US clouds.

We use selfhosting, but we also test our fire suppression system every year, we have two different DCs, and we use S3 backups out of town.

Whoever runs that IT department needs to be run out of the country.


The cloud will also not back up your stuff if you configure it wrong, so I'm not sure how that's related.


They rightfully did not trust these companies. Sure, what happened is a disaster for them, but you can't simply trust Amazon & Microsoft.


Why not? You can easily encrypt your data before sending it for storage on S3, for example.
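Something along these lines; the file path, bucket name, and key handling below are deliberately naive placeholders (a real deployment would keep the key in its own KMS/HSM), and it assumes the cryptography library plus boto3 with credentials in the environment.

    # Hedged sketch: client-side encryption, so the provider only ever sees ciphertext.
    import boto3
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()          # in reality: held in your own KMS/HSM, never on the provider
    fernet = Fernet(key)

    with open("/srv/export/archive.tar", "rb") as f:
        ciphertext = fernet.encrypt(f.read())   # fine for a sketch; stream for large files

    boto3.client("s3").put_object(
        Bucket="example-encrypted-backups",
        Key="archive.tar.enc",
        Body=ciphertext,
    )

    # Restore path: download the object and call fernet.decrypt() on your own hardware.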


You and I can encrypt our data before saving it into the cloud, because we have nothing of value or interest to someone with the resources of a state.

Sometimes sensitive data at the government level has a pretty long shelf life; you may want it to remain secret in 30, 50, 70 years.


I don't see how this is any different than countries putting significant portions of their gold & currency reserves in the NY Federal Reserve Bank. If for some reason the U.S. just decided to declare "Your monies are all mine now" the effects would be equally if not more devastating than a data breach.


The difference is that there are sometimes options to recover the money, and at least other countries will see and know that this happened, and may take some action.

A data breach, however, is completely secret - both from you and from others. Another country (not even necessarily the one that is physically hosting your data) may have access to your data, and neither you nor anyone else would necessarily know.


Exactly that happened to Russia, Iran, and Venezuela.


Not North Korea though; they just have hundreds of thousands of dollars of unpaid parking tickets invested in the USA, which is a negative.

https://www.nbcnewyork.com/news/local/north-korea-parking-ti... [2017]


Is encryption, in almost any form, really reliable protection for a country's entire government data? I mean, this is _the_ ultimate playground for "state-level actors": if someday there's a hole and it turns out it takes only 20 years to decrypt the data with a country-sized supercomputer, you can bet _this_ is what multiple foreign countries will try to decrypt first.


You're assuming that this needs to protect...

> ... a countries' government entire data?

But the bulk of the data is "boring": important to individuals, but not state security ("sorry Jiyeong, the computer doesn't know if you are a government employee. Apologies if you have rent to make this month!")

There likely exists data where the risk calculation ends up differently, so that you wouldn't store it in this system. For example, for nuke launch codes, they might rather lose them than loose them: better to risk having to reset and re-arm than to have them hijacked.

> Is encryption, [in?] any form, really reliable protection

There's always residual risk. E.g.: can you guarantee that every set of guards that you have watching national datacenters is immune from being bribed?

Copying data around on your own territory thus also carries risks, but you cannot get around it if you want backups for (parts of) the data

People in this thread are discussing specific cryptographic primitives that they think are trustworthy, which I think goes a bit deeper than makes sense here. Readily evident is that there are ciphers trusted by different governments around the world for their communication and storage, and that you can layer them such that all need to be broken before arriving at the plain, original data. There is also evidence in the Snowden archives that (iirc) e.g. PGP could not be broken by the NSA at the time. Several ciphers held up for the last 25+ years and are not expected to be broken by quantum computers either. All of these sources can be drawn upon to arrive at a solid choice for an encryption scheme
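To make the layering point concrete, here's a rough illustration using two independent AEAD-style constructions from the Python cryptography library; the keys are throwaway and the plaintext is a stand-in, nothing from any real deployment.

    # Hedged sketch: layer two independent ciphers so both must be broken to recover the data.
    import os
    from cryptography.fernet import Fernet                       # AES-based
    from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

    inner_key = Fernet.generate_key()
    outer_key = ChaCha20Poly1305.generate_key()

    plaintext = b"cabinet meeting minutes"

    # Layer 1: Fernet (AES-CBC + HMAC).
    layer1 = Fernet(inner_key).encrypt(plaintext)

    # Layer 2: ChaCha20-Poly1305 over the Fernet ciphertext.
    nonce = os.urandom(12)
    layer2 = ChaCha20Poly1305(outer_key).encrypt(nonce, layer1, None)

    # Decryption reverses the order: outer layer first, then inner.
    recovered = Fernet(inner_key).decrypt(
        ChaCha20Poly1305(outer_key).decrypt(nonce, layer2, None)
    )
    assert recovered == plaintext

In a real scheme the two keys would sit with separate custodians, which is the whole point of layering.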


A foreign gov getting all your security researchers' and staff's personal info, along with their family, tax, and medical records, doesn't sound great.

That's just from the top of my head. Exploiting such a trove of data doesn't sound complicated.


Yeah, that ignores about two thirds of my point, including that it would be no more likely to ever reach the "exploiting such a trove of data" stage than if the data were stored within one's own territory.


I'm in agreement with your second point: moving data within the country isn't trivial either and requires a pretty strong system. I just don't have much to say on that side, so I didn't comment on it.


You can encrypt it at rest, but data that lies encrypted and is never touched is useless data; you need to decrypt it as well. Also, there are plenty of incompetent devops people around, and writing a decryption toolchain can be difficult.


Am I missing something? If you ever need to use this data, obviously you transfer it back to your premises and then decrypt it. Whether it's stored at Amazon or North Korean Government Cloud makes no difference whatsoever if you encrypt before and decrypt after transfer.


They can take the data hostage, the foreign nation would have no recourse.


Have it in multiple countries with multiple providers if money isn't a concern.

And are we forgetting that they could literally have a multi-cloud backup setup in their own country as well, or incentivize companies to build their datacenters there, in partnership with the government, with a multi-cloud setup as I said earlier?


Encryption only protects data for an unknown period of time, not indefinitely.


If your threat model includes the TLA types, then backup to a physical server you control in a location geographically isolated from your main location. Or to a local set of drives that you physically rotate to remote locations.


Decryption is not usually an issue if you encrypt locally.

Tools like Kopia, Borg and Restic handle this and also include deduplication and other advanced features.

Really no excuse for large orgs, or even for small businesses and the somewhat tech-literate public.
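For a flavor of how little is involved, here's a rough sketch of driving restic against an S3-compatible bucket from Python; the repository URL, bucket, and paths are placeholders, and credentials (AWS keys, repo password) are assumed to be in the environment.

    # Hedged sketch: client-side encrypted, deduplicated backups with restic.
    import os
    import subprocess

    env = os.environ.copy()
    env["RESTIC_REPOSITORY"] = "s3:s3.amazonaws.com/example-restic-repo"
    env.setdefault("RESTIC_PASSWORD", "change-me")   # placeholder; use a real secret store

    def restic(*args):
        subprocess.run(["restic", *args], env=env, check=True)

    restic("init")                         # one-time: create the encrypted repo (skip on later runs)
    restic("backup", "/srv/g-drive-data")  # routine: incremental and deduplicated
    restic("snapshots")                    # verify the snapshot actually landed
    # Recovery drill, per the comments above about practicing restores:
    # restic("restore", "latest", "--target", "/srv/restore-test")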


Why write one when there are tools like “restic”?


For sure the only error here is zero redundancy.


S3 features have saved our bacon a number of times. Perhaps your experience and usage are different. They are worth trusting with business-critical data as long as you're following their guidance. GCP, though, has not proven it; their data-loss news is still fresh in my mind.


Were you talking about this incident? https://arstechnica.com/gadgets/2024/05/google-cloud-acciden...

I am currently evaluating GCP and AWS.


I read the article, and it seems that happened because their account got deleted. Here is something from the article you linked:

> Google Cloud is supposed to have safeguards that don't allow account deletion, but none of them worked apparently, and the only option was a restore from a separate cloud provider (shoutout to the hero at UniSuper who chose a multi-cloud solution).

If you are working with really important data, follow 3-2-1 even with cloud providers if you genuinely want an absolute guarantee; it comes down to how important the data is relative to the price.

I have thought about using something cheap like Backblaze or Wasabi for the offsite leg of 3-2-1. This incident was definitely interesting to read into, and I will read more about it; I remember it from Kevin Fang's video, but this article is seriously good. Bookmarked.


On the Microsoft side, CVE-2025-55241 is still pretty recent.

https://news.ycombinator.com/item?id=45282497


I understand data sovereignty in the case where a foreign entity might cut off access to your data, but this paranoia that storing info under your bed is the safest bet is straight up false. We have post-quantum encryption widely available already. If your fear is that a foreign entity will access your data, you're technologically illiterate.

Obviously no person in a lawmaking position will ever have the patience or foresight to learn about this, but the fact they won't even try is all the more infuriating.


Encryption only makes sense if "the cloud" is just a data storage bucket to you. If you run applications in the cloud, you can't have all the data encrypted, especially not all the time. There are some technologies that make this possible, but none are mature enough to run even a small business, let alone a country on.

It sounds technologically illiterate to you because when people say "we can't safely use a foreign cloud" you think they're saying "to store data" and everyone else is thinking at the very least "to store and process data".

Sure, they could have used a cloud provider for encrypted backups, but if they knew how to do proper backups, they wouldn't be in this mess to begin with.


> The government official who insisted that commercial AWS/GCP/Azure couldn't possibly be trusted with keeping the information

They were still right, though: it's absolutely clear, without an ounce of doubt, that whatever you put on a US cloud is accessible by the US government, which can also decide to sanction you and deprive you of your ability to access the data yourself.

Not having backups is indefensible, but that's also completely orthogonal.


The U.S. Government can’t decrypt data for which it does not possess the key (assuming the encryption used is good).


Well, first of all, neither you nor I know the decryption capabilities of the NSA; all we know is that they have hired more cryptologists than the rest of the world combined.

Also, it's much easier for an intelligence service to get its hands on a 1 kB encryption key than on a PB of data: the former is much easier to exfiltrate without being noticed.

And then, I don't know why you bring encryption up here: pretty much none of the use cases for a cloud allow for fully encrypted data. (The only one that does is storing encrypted backups on the cloud, but the issue here is that the operator didn't do backups in the first place…)


1. More evidence suggests that NSA does not know how to decrypt state-of-the-art ciphers than suggests they do. If they did know, it's far less likely we'd have nation states trying to force Apple and others to provide backdoors for decryption of suspects' personal devices. (Also, as a general rule, I don't put too much stock in the notion that governments are far more competent than the private sector. They're made of people, they pay less than the private sector, and in general, if a government can barely sneeze without falling over, it's unlikely they can beat our best cryptologists at their own game.)

2. The operative assumption in my statement is that the government does not possess the key. If they do possess it, all bets are off.

3. This thread is about a hypothetical situation in which the Korean government did store backups with a U.S.-based cloud provider, and whether encryption of such backups would provide adequate protection against unwanted intrusion into the data held within.


> 2. The operative assumption in my statement is that the government does not possess the key. If they do possess it, all bets are off.

All bets are off from the start. At some point the CIA managed to get their hands on the French nuclear keys.

> 3. This thread is about a hypothetical situation in which the Korean government did store backups with a U.S.-based cloud provider

This thread is about using US cloud providers, that's it, you are just moving the goalpost.


In theory. I'm very much happier to have my encrypted data also not be available to adversaries.


"Not my fault.. I asked them to save everything in G-Drive (Google Drive)"


I mean, he's still right about AWS etc. with the current US administration, and probably all that will follow, but that doesn't excuse not keeping backups.


Yeah let’s fax all government data to the Trump administration.



