DVC is slow because it stores and writes data twice (once to its cache, once to the workspace), and its default of dozens of concurrent downloads causes resource starvation. Uploads finally improved in 3.0, but downloads and storage are still much worse than a plain "aws s3 cp". You can improve pull performance somewhat by passing a reasonable value to --jobs, and reclaim storage by nuking .dvc/cache. There's no way to skip writing all data twice, though.
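For example, a sketch of both workarounds (the right --jobs value depends on your machine and remote; dvc gc is the supported way to trim the cache rather than deleting it outright):

```shell
# Cap concurrent downloads instead of relying on the default
dvc pull --jobs 8

# Reclaim space: drop cache objects not referenced by the current workspace
dvc gc --workspace
```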
Look for something with good algorithms. Xethub worked very well for me, and oxen looks like a good alternative. git-xet has a very nice feature that allows you to mount a repo over the network [0]
Clarification on file duplication: DVC tries to use reflinks if the filesystem supports them, and falls back to copying the files. For filesystems without reflink support, like ext4, it can be configured to use hardlinks instead [0]. This improves performance significantly.
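Switching the link type looks roughly like this (a sketch: cache.type takes an ordered preference list, and existing checkouts need a relink to pick up the change):

```shell
# Prefer reflinks, fall back to hardlinks, then plain copies
dvc config cache.type "reflink,hardlink,copy"

# Re-link files already checked out under the old setting
dvc checkout --relink
```

Note that with hardlinks the workspace files and the cache share the same inodes, so DVC protects them from accidental in-place edits.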
[0] https://about.xethub.com/blog/mount-part-1