Sales talk and buzzwords. Either the author has no idea that Hadoop is an ecosystem and that Spark depends on it, or they deliberately conflate Hadoop and Kubernetes, which aren't closely related.
Even if you don't run HDFS and YARN, you aren't escaping Hadoop. And if some configuration goes wrong, you'll probably need to dig into the Hadoop conf files anyway.
The original comment was about the mass of libraries that Hadoop pulls in. Spark isn't a way out of that mess: if you try to dockerize Spark, you'll still end up with 300 MB images full of JARs that came from who knows where.
And good luck running Spark without Hadoop ;)
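To make the point concrete, here's a minimal sketch of what dockerizing Spark looks like. The Spark version, base image, and mirror URL are my own assumptions, not anything from the comment above; the point is only that the official `-bin-hadoop3` distribution ships a `jars/` directory packed with Hadoop and its transitive dependencies, which is where most of the image size comes from.

```dockerfile
# Sketch only, not a production image. Version, base image, and URL are assumptions.
FROM eclipse-temurin:11-jre
ARG SPARK_VERSION=3.5.1

# The official binary distribution is built against Hadoop ("-bin-hadoop3"):
# even a "pure Spark" image carries the Hadoop client libraries.
RUN apt-get update && apt-get install -y --no-install-recommends curl \
 && curl -fsSL "https://archive.apache.org/dist/spark/spark-${SPARK_VERSION}/spark-${SPARK_VERSION}-bin-hadoop3.tgz" \
    | tar -xz -C /opt \
 && mv "/opt/spark-${SPARK_VERSION}-bin-hadoop3" /opt/spark

ENV SPARK_HOME=/opt/spark
ENV PATH="${SPARK_HOME}/bin:${PATH}"

# jars/ holds hundreds of JARs; counting the hadoop-* ones makes the
# dependency visible at build time.
RUN ls /opt/spark/jars | grep -c hadoop
```

There is a `spark-X.Y.Z-bin-without-hadoop.tgz` variant, but it still expects you to point `SPARK_DIST_CLASSPATH` at a Hadoop installation, which rather proves the comment's point.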