Isn't the problem with that approach that you have a model of the implementation (and not of the 'interface')? I mean, you don't know if you are using the function as intended, and if not used as intended it might (actually will) break in the future.
I mean, as a first note, one of the ideas here is reading code already written in zig to understand how to write code in zig - so in that case you're effectively inferring the interface from what the authors of the code are treating said interface to be.
But also, interface documentation is pretty much never truly complete, because there are almost always some implicit assumptions involved. (And if you try to make all of those explicit, you rapidly end up with documentation so verbose that people's eyes glaze over when they try to read it, so their model often ends up incomplete anyway - how explicit to be is itself a trade-off.)
Then zig embeds its test cases in the source file, so you can look at what the authors have explicitly declared -must- work to help you know what the interface is intended to be.
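For anybody who hasn't seen it, a minimal sketch of what that looks like - the function here (a hypothetical clamp, not from any real zig codebase) and its test live in the same source file, and `zig test` runs the `test` blocks directly:

```zig
const std = @import("std");

// Hypothetical example function, purely for illustration.
fn clamp(x: i32, lo: i32, hi: i32) i32 {
    return @max(lo, @min(x, hi));
}

// The authors' declaration of what -must- work, right next to the code.
test "clamp stays within bounds" {
    try std.testing.expectEqual(@as(i32, 5), clamp(10, 0, 5));
    try std.testing.expectEqual(@as(i32, -1), clamp(-1, -3, 5));
    try std.testing.expectEqual(@as(i32, 2), clamp(2, 0, 5));
}
```

So when you're reading unfamiliar code, the `test` blocks double as executable documentation of the intended interface.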
Plus, when I'm source diving, I've done enough of it over the years that I can at least attempt to build up a model not just of the implementation, but of the author's mental model as they were implementing it. If you can figure out their intent, it's much easier to guess what their code is -meant- to do, and thereby what it will hopefully continue to do into the future.
If in doubt, though, leaving a comment in your own code as to what assumption you're making -and- writing a unit test in your own code that verifies the assumption continues to hold will mean that, at least if it stops holding, you'll see a test failure in your own suite that tells you it changed.
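As a sketch of what I mean, suppose your code relies on `std.mem.trim` stripping the cut set from both ends of a slice (which it does, as of current zig) - you pin that assumption in your own suite like so:

```zig
const std = @import("std");

// Assumption we're relying on elsewhere: std.mem.trim strips the
// given bytes from -both- ends of the slice, not just the front.
// If a future release changes that, this test fails in our suite
// rather than the breakage surfacing somewhere mysterious.
test "trim strips both ends, as we rely on" {
    const trimmed = std.mem.trim(u8, "  hello  ", " ");
    try std.testing.expectEqualStrings("hello", trimmed);
}
```

The comment documents the assumption for future readers; the test makes it a tripwire.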
(or: "code to the interface, not the implementation" is absolutely the right thing to aim for, but in practice the line between the two is fuzzy and cases where you have to make a judgement call will always show up eventually)
For auditing you are right, of course.