I'm working on that as well. I just fixed cat so it logs and exits on error, and I'll be fixing the other commands soon.
By the way, I'm wondering how I should go through the file line by line with a reader.
I think the most efficient way may be to scan byte by byte from the start (for head) or from the end (for tail), counting newlines until I reach n of them (or run out of file), then print the bytes between the start/end and the nth newline.
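Roughly this is what I have in mind for tail, as a quick Go sketch (the file path is just a placeholder, and it assumes the file ends with a newline; a file without a trailing newline would yield one line too many):

```go
package main

import (
	"fmt"
	"log"
	"os"
)

// lastLines returns the final n lines of the file by walking backwards
// one byte at a time and counting newlines (the naive version of the idea).
func lastLines(path string, n int) ([]byte, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	info, err := f.Stat()
	if err != nil {
		return nil, err
	}

	newlines := 0
	offset := info.Size()
	buf := make([]byte, 1)

	for offset > 0 {
		offset--
		if _, err := f.ReadAt(buf, offset); err != nil {
			return nil, err
		}
		if buf[0] == '\n' {
			newlines++
			// The (n+1)th newline from the end (counting the trailing one)
			// sits just before the last n lines; move past it and stop.
			if newlines == n+1 {
				offset++
				break
			}
		}
	}

	out := make([]byte, info.Size()-offset)
	_, err = f.ReadAt(out, offset)
	return out, err
}

func main() {
	// "testdata/sample.txt" is just a placeholder path.
	b, err := lastLines("testdata/sample.txt", 10)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(b))
}
```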
Good question. I'm not sure. You might want to seek to the end and move back. You probably shouldn't do it byte by byte directly from the file, since that's very inefficient. As you can tell, this is already starting to get complicated! Maybe you could try mmapping the file so you could treat it as a []byte.
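Something like this is what I mean by mmapping (a rough, Unix-only sketch using syscall.Mmap; the path is a placeholder and the edge cases aren't fully handled):

```go
package main

import (
	"bytes"
	"fmt"
	"log"
	"os"
	"syscall"
)

// tailMmap maps the file into memory and scans the []byte backwards
// for newlines, so there are no per-byte read syscalls.
func tailMmap(path string, n int) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	info, err := f.Stat()
	if err != nil {
		return "", err
	}
	size := int(info.Size())
	if size == 0 {
		return "", nil
	}

	data, err := syscall.Mmap(int(f.Fd()), 0, size, syscall.PROT_READ, syscall.MAP_PRIVATE)
	if err != nil {
		return "", err
	}
	defer syscall.Munmap(data)

	// Walk backwards through the mapped bytes, one newline per wanted line.
	pos := size
	if data[size-1] == '\n' { // ignore the file's final newline
		pos--
	}
	for i := 0; i < n && pos > 0; i++ {
		idx := bytes.LastIndexByte(data[:pos], '\n')
		if idx < 0 {
			pos = 0
			break
		}
		pos = idx
	}
	if pos > 0 {
		pos++ // skip the newline that precedes the first wanted line
	}
	return string(data[pos:]), nil
}

func main() {
	out, err := tailMmap("testdata/sample.txt", 10)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(out)
}
```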
Last year I was trying to write a Go routine that read a file backwards. I was amazed at how unexpectedly difficult that proved to be.
In the end I settled for reading it from the start, which worked 99.999% of the time and let me finish the project within the tight deadline I had. But I've always meant to go back and "fix" that code at some point.
The strategy used by the original GNU coreutils written in C, and the one I used to implement tail in Rust, is to jump to the end of the file, rewind by AVERAGE_CHARS_PER_LINE * NUMBER_OF_LINES_TO_BE_READ bytes, check whether enough lines have been read, and repeat until enough lines have been found.
I found the optimal value of AVERAGE_CHARS_PER_LINE to be around 40 characters, but of course it hugely depends on the file being read.
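In Go, that loop might look roughly like this (a loose sketch of the same idea, not the actual coreutils or Rust code; the names and the re-reading of the whole tail region on each retry are simplifications):

```go
package main

import (
	"bytes"
	"fmt"
	"log"
	"os"
)

const averageCharsPerLine = 40 // the heuristic discussed above

// tailGuess guesses how far back the last n lines start, reads that much
// from the end, and rewinds further until enough newlines have been seen.
func tailGuess(path string, n int) ([]byte, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	info, err := f.Stat()
	if err != nil {
		return nil, err
	}
	size := info.Size()

	guess := int64(averageCharsPerLine * n)
	for {
		start := size - guess
		if start < 0 {
			start = 0
		}
		buf := make([]byte, size-start)
		if _, err := f.ReadAt(buf, start); err != nil {
			return nil, err
		}

		// Require more than n newlines, since the chunk usually starts mid-line.
		if start == 0 || bytes.Count(buf, []byte{'\n'}) > n {
			lines := bytes.SplitAfter(buf, []byte{'\n'})
			if len(lines) > 0 && len(lines[len(lines)-1]) == 0 {
				lines = lines[:len(lines)-1] // drop the empty piece after a final '\n'
			}
			if len(lines) > n {
				lines = lines[len(lines)-n:]
			}
			return bytes.Join(lines, nil), nil
		}
		// Not enough lines yet: rewind by another block and try again.
		guess += int64(averageCharsPerLine * n)
	}
}

func main() {
	out, err := tailGuess("testdata/sample.txt", 10)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out))
}
```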