Yeah we just did it with the --link option on a 6TB database and it took like 30 seconds. Something has to be off with their OS settings or disk speeds.
The main challenge with that is running ANALYZE on all the tables afterwards, though; that took like 30 minutes, during which the DB was unusable.
We did use the --analyze-in-stages option; I think our data model is just not optimal. We have a lot of high-frequency queries hitting very large tables of 0.5 to 1 billion rows. Proper indexing makes them fast, but until all the stats are there, the frontend is unusable.
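For anyone following along, the flow being discussed looks roughly like this (paths and version numbers are hypothetical placeholders; adjust for your install):

```shell
# --link hard-links the old cluster's data files into the new cluster
# instead of copying them, which is why a multi-TB upgrade can finish
# in seconds. Caveat: once the new cluster starts, the old one is dead.
pg_upgrade \
  --old-bindir /usr/lib/postgresql/15/bin \
  --new-bindir /usr/lib/postgresql/16/bin \
  --old-datadir /var/lib/postgresql/15/data \
  --new-datadir /var/lib/postgresql/16/data \
  --link

# pg_upgrade does not carry over planner statistics, so everything is
# slow until ANALYZE runs. --analyze-in-stages builds cheap low-target
# stats first so queries become usable sooner, then refines them.
vacuumdb --all --analyze-in-stages
```

On very large tables the final full-statistics stage still takes a while, which matches the ~30 minute window described above.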