Hacker News

I don't know if this exists or not, but it's something I've always wanted.

I love and hate unit testing. It's a fantastic idea that I've never seen successfully implemented in practice, mostly because the tests invariably fall by the wayside. At the very least, it's a burden to support them.

I'd love a tool that would observe my code during runtime - capturing all inputs and outputs from every function, as well as the value of each variable at each point in the execution flow. Because this would probably incur a performance hit, I'd want to have the option to toggle it on/off.

After execution, I can scroll through a list of called functions and decide which ones I want to save. Once I choose to save, the tool generates a unit test for the selected function. Since the tool is capturing I/O of all functions, it could retroactively mock each call within the function.

This would only be useful for deterministic functions, of course, and it would still be up to devs to document why the expected results are what they are. But my goodness it would be a time saver - one click unit tests. Yes please.
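A minimal sketch of the capture-then-generate idea (in Python; all names here are hypothetical, not from any real tool): a decorator records each call's inputs and output, and a helper turns any recorded call into a pytest-style test.

```python
import functools

# Every observed call is stored as (function name, args, kwargs, result).
RECORDED_CALLS = []

def record(fn):
    """Wrap fn so each call's inputs and output are captured at runtime."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        RECORDED_CALLS.append((fn.__name__, args, kwargs, result))
        return result
    return wrapper

def generate_test(name, args, kwargs, result):
    """Emit pytest-style source asserting the recorded behaviour."""
    arg_src = ", ".join(
        [repr(a) for a in args] + [f"{k}={v!r}" for k, v in kwargs.items()]
    )
    return (
        f"def test_{name}():\n"
        f"    assert {name}({arg_src}) == {result!r}\n"
    )

@record
def slugify(title, sep="-"):
    return sep.join(title.lower().split())

slugify("Hello World")                    # exercised at "runtime"
print(generate_test(*RECORDED_CALLS[0]))  # one-click unit test, roughly
```

A real tool would additionally need to serialize non-trivial arguments and record the calls made *by* the function, so those could be turned into mocks as described above.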



> I'd love a tool that would observe my code during runtime - capturing all inputs and outputs from every function, as well as the value of each variable at each point in the execution flow. Because this would probably incur a performance hit, I'd want to have the option to toggle it on/off.

There's a class of integration tests dubbed consistency tests, which track the output that an interface produces for a given input in production; the test consists of replaying a sample of that production traffic to verify that the service remains consistent before promoting/dialing it up in production.

This class of integration tests is also used in UI tests, and Selenium provides explicit support for them.

https://elementalselenium.com/tips/62-web-consistency-testin...
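The replay scheme described above can be sketched roughly like this (a toy in Python; the pricing functions and "sampled" traffic are purely illustrative):

```python
def legacy_price(qty):
    """Stand-in for the version currently serving production."""
    return qty * 100 - (5 if qty >= 10 else 0)

# Pretend these (input, output) pairs were sampled from production logs.
production_samples = [(q, legacy_price(q)) for q in (1, 5, 10, 25)]

def candidate_price(qty):
    """Stand-in for the new version awaiting promotion."""
    discount = 5 if qty >= 10 else 0
    return 100 * qty - discount

def consistency_rate(candidate, samples):
    """Fraction of replayed traffic on which the candidate agrees."""
    hits = sum(1 for inp, expected in samples if candidate(inp) == expected)
    return hits / len(samples)

print(consistency_rate(candidate_price, production_samples))  # 1.0 = fully consistent
```

Promotion would then be gated on the rate staying at (or near) 1.0 over a large enough sample.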


Ah very nice, that sounds close to what I'm imagining. Will check it out, thanks.


Also perhaps noteworthy: "instrumentation", which I guess is a part of it at least. Very useful tool.


> I love and hate unit testing. It's a fantastic idea that I've never seen successfully implemented in practice, mostly because the tests invariably fall by the wayside. At the very least, it's a burden to support them.

Are you using object-oriented programming or a more functional style? Without classes, unit tests become a lot more readable and manageable.


Always functional; I'd never try to implement unit testing with classes, that sounds like a nightmare. I'm not talking about the difficulties of writing a unit test, which is where the OOP/functional divide is important. I'm talking about the increased time to market, the difficulties of maintaining tests over time, the need to have everyone on the team be equally committed to TDD, etc. These are the things that kill good testing practices.


So much this, both in the tested code and in the test code itself. I only started unit testing in Python once I discovered pytest; the built-in unit test framework was such a cognitive mismatch for my usual coding style.


A tool similar to what you want. It generates input/output pairs from the code, aiming to maximize coverage, and allows you to convert them to unit tests with one click.

https://docs.microsoft.com/en-us/visualstudio/test/generate-...

Also known by its legacy / research project name, "Pex". The underlying technique is called "dynamic symbolic execution".
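As a rough intuition for what Pex/IntelliTest automates: find one input per branch outcome and pair it with the observed output as a test case. Real dynamic symbolic execution derives those inputs by solving path constraints with an SMT solver; the brute-force search below is only a toy stand-in for that idea.

```python
def classify(n):
    if n < 0:
        return "negative"
    if n % 2 == 0:
        return "even"
    return "odd"

def find_covering_inputs(fn, candidates):
    """Map each distinct outcome to the first input that produced it.
    A DSE engine would compute such inputs from path constraints
    instead of enumerating candidates."""
    seen = {}
    for x in candidates:
        seen.setdefault(fn(x), x)
    return seen

cases = find_covering_inputs(classify, range(-3, 4))
print(cases)  # one representative input per branch outcome
```

Each (input, output) pair in `cases` is exactly the raw material for a generated unit test.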

There was also a similar tool for Java called "Agitar Agitator".


I am building Videobug - we record runtime code execution and play it back in the IDE. We aim for quick bug fixes, better code understanding, and automatic test case generation.

Our Show HN: https://news.ycombinator.com/item?id=31286126
Our demo: https://www.youtube.com/watch?v=aD4CV7UL0RM

Would love to get your feedback.


Possibly related:
* https://umaar.com/dev-tips/241-puppeteer-recorder/
* https://umaar.com/dev-tips/248-recorder-playback/
* https://developer.chrome.com/docs/devtools/recorder/

I use Deno and its standard library for testing, with Puppeteer. I've found unit tests overrated and scenarios underrated (i.e. the testing pyramid).


I'd love a tool that would observe my code during runtime - capturing all inputs and outputs from every function, as well as the value of each variable at each point in the execution flow, and the call graph. Because this would probably incur a performance hit, I'd want to have the option to toggle it on/off.

After execution, I can scroll through a tree of called functions and convert it into permanent "implementation documentation" (saved in a document I can annotate, e.g. with data flow, and edit when the code itself changes), helping me understand both the structure and dynamic control flow of code I'm learning or relearning after months of absence. I currently do this manually in Google Doc bullets, in order to make sense of legacy code whose control flow is fragmented into many function calls I must piece together mentally (as described in http://number-none.com/blow/john_carmack_on_inlined_code.htm...), and as a starting point for restructuring types and function graphs.
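A bare-bones version of that call-tree capture is possible with Python's built-in sys.settrace hook (a sketch only; a real tool would also record arguments, return values, and variables, and the exact frames seen can vary between Python versions, e.g. comprehensions get their own frame before 3.12):

```python
import sys

call_tree = []  # indented function names, in call order
depth = 0

def tracer(frame, event, arg):
    """Append one indented line per function call; track depth via returns."""
    global depth
    if event == "call":
        call_tree.append("  " * depth + frame.f_code.co_name)
        depth += 1
    elif event == "return":
        depth -= 1
    return tracer

# Toy code under observation.
def parse(data):
    return [validate(x) for x in data]

def validate(x):
    return x.strip()

sys.settrace(tracer)   # toggle capture on
parse(["  a ", "b  "])
sys.settrace(None)     # toggle capture off

print("\n".join(call_tree))
```

The resulting indented tree is the raw skeleton you could then annotate into the "implementation documentation" described above.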


I experimented with this idea in PHP by capturing profiles and traces.

It's an interesting idea for sure!

https://github.com/Azeirah/autotest


It doesn't check all your boxes, but have you tried Playwright?

https://playwright.dev


I've looked at Playwright and it seems to be very close to what I want, but it's on the E2E side. I'm looking more for a unit testing variant of that idea.


There's a gem for Ruby that does a limited version of this: https://github.com/testdouble/suture

Suture is geared towards refactoring, so it doesn't do it for every function at once; instead, you have to specify methods manually.


This sounds like fuzzing. Do you want fuzzing?


I believe a key component of fuzz testing is randomized input. I’m talking about recording live usage of an application and capturing the observed I/O. Like Playwright, mentioned above, but applied to unit testing rather than E2E.


It's not a key component, in fact a lot of fuzzers do intelligent input culling and whatnot based on static analysis.


Not sure this would be great for new code, because the tests are supposed to dictate the function’s behaviour, and not the other way around.

Yet, this would be amazing for legacy projects with no unit tests: you could record the code in production, and generate unit tests from there, and add them to the non-tested project.

It would bring you a safety net allowing you to better work on that legacy project IMO.

This actually reminds me of an attempt at an internal tool I saw pass by, whose goal was to save all IO for a given request (e.g. db calls, log calls, network requests, file reads) for a given backend, and allow you to replay those locally for testing purposes, independently of the language used by the backend. I think this didn't lead anywhere, because it was harder than expected: some things couldn't be recorded easily (like RNG use), and it also meant finding a way to have all IO libs use the precomputed values.
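The record/replay idea in that last paragraph can be sketched like this (a Python toy; every name is illustrative): one wrapper tapes the IO during a "production" run, and a second serves the tape back offline, so the surrounding code can be exercised with no real backend.

```python
class Recorder:
    """Pass calls through to the real IO, taping each response."""
    def __init__(self, real_fetch):
        self.real_fetch = real_fetch
        self.tape = {}

    def __call__(self, url):
        self.tape[url] = self.real_fetch(url)
        return self.tape[url]

class Replayer:
    """Serve responses from the tape; no network involved."""
    def __init__(self, tape):
        self.tape = tape

    def __call__(self, url):
        return self.tape[url]  # KeyError = unrecorded IO, the hard part noted above

def user_name(fetch, user_id):
    """Code under test; takes its IO as a parameter (dependency injection)."""
    return fetch(f"/users/{user_id}")["name"]

live_fetch = lambda url: {"name": "ada"}   # stand-in for a real HTTP call
recorder = Recorder(live_fetch)
assert user_name(recorder, 7) == "ada"     # "production" run, gets recorded

replay = Replayer(recorder.tape)
assert user_name(replay, 7) == "ada"       # offline replay of the same request
```

The failure modes mentioned above show up naturally here: anything not injected through `fetch` (RNG, clocks, direct library calls) silently escapes the tape.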


Very true, yes. Much more useful for existing code. Although I don't think unit tests and TDD necessarily need to be aligned, the red-green-refactor flow could potentially be re-imagined with a tool like this.


One thing I found useful with Trompeloeil was the tracer. You instantiate a tracer, and then every call made into a mocked object is logged for you. Really makes it easy to see how a mysterious controller operates on the thing you are mocking.

Plus, the very act of engineering in dependency injection for the mock, if not already provided, enforces a testable interface.
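Python's unittest.mock has the same kind of call logging built in: every call made through a mock is recorded in its `mock_calls` attribute, so you can inspect exactly how an opaque controller drives its dependency.

```python
from unittest.mock import MagicMock, call

def controller(store):
    # Hypothetical controller whose interactions we want to observe.
    if store.get("mode") != "off":
        store.set("count", 1)

store = MagicMock()                 # stands in for the real dependency
store.get.return_value = "on"       # stub the one value the controller reads

controller(store)

# The mock logged every call, in order, like Trompeloeil's tracer.
print(store.mock_calls)
assert store.mock_calls == [call.get("mode"), call.set("count", 1)]
```

And as above, this only works because `store` is injected, which is itself pressure toward a testable interface.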


Yeah I want this too. Jest Snapshots are great for quickly saving the expected output and asserting that it doesn’t deviate, but it would be nice to have the same sort of “snapshot” of inputs as well.


100% this.



