Hacker News | na--'s comments

Talk by Mike Olson from Monktoberfest 2023. Article version of the talk: https://www.olsons.net/entrepreneurship/what-is-a-board-for


FWIW, it works on QubesOS, which is Xen-based, though only in my Fedora and Debian VMs, not in my Arch Linux-based one.

I think that's because I have WINE installed in the Arch Linux VMs, since that's what gets started when I run ./redbean.com. But even after I remove WINE, it still doesn't work; I probably have to remove WINE from the TemplateVM (not just the AppVM) and restart for it to work.

In any case, incredible work, I am in awe!


The canonical way to record web traffic is with HAR files: https://en.wikipedia.org/wiki/HAR_(file_format)

You can export HAR files from all web browsers and a lot of proxies and other tools. Then you can convert that recording to a k6 script with our HAR converter: https://k6.io/docs/test-authoring/recording-a-session/har-co...

We also have browser plugins to directly record browser traffic and import it as a k6 script in our cloud service: https://k6.io/docs/test-authoring/recording-a-session/browse...

However, you don't need a cloud subscription to run the tests; you can copy-paste the generated script code and `k6 run` it locally, without paying us anything.
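For reference, here's roughly what a minimal k6 script looks like (hand-written rather than converter output, and the URL is just a placeholder):

    import http from 'k6/http';
    import { check, sleep } from 'k6';

    export const options = {
      vus: 10,          // 10 concurrent virtual users
      duration: '30s',  // for 30 seconds
    };

    export default function () {
      // a recorded/converted script would contain your real requests here
      const res = http.get('https://test.k6.io/'); // placeholder URL
      check(res, { 'status is 200': (r) => r.status === 200 });
      sleep(1);
    }

Save that as script.js and `k6 run script.js` does the rest.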


Thanks. Will check it out.


I am one of the k6 developers. I've commented in a bunch of threads here, but if you have any questions, feel free to AMA :) I'll be around for the next ~1h and then probably later in the day as well.


Is there some way you could do whatever AWS CDK does under the hood to provide bindings for other languages for writing tests? That'd be killer. I want to be able to write the tests in my backend language.


Unfortunately not... At some point in the far future, we might release support for Go "scripting" [1] or we might extend xk6 [2] so it allows different script types, but those things are far from certain.

[1] https://github.com/loadimpact/k6/issues/751

[2] https://github.com/k6io/xk6


This very much depends on your available hardware and the complexity of the load test. Following the advice in https://k6.io/docs/testing-guides/running-large-tests, and having a beefy load generation machine and network, you should be able to run many thousands of VUs on a single instance pretty comfortably. How many thousands precisely depends on a lot of factors... :)
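For example, one of the things that guide recommends is not keeping response bodies you don't need; a rough sketch (the stage numbers are just illustrative):

    import http from 'k6/http';

    export const options = {
      // don't keep response bodies in memory unless a request opts back in -
      // this cuts memory usage significantly in large tests
      discardResponseBodies: true,
      stages: [
        { duration: '5m', target: 10000 },  // ramp up to 10k VUs
        { duration: '30m', target: 10000 }, // hold
        { duration: '5m', target: 0 },      // ramp down
      ],
    };

    export default function () {
      http.get('https://test.k6.io/'); // placeholder URL
    }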


That's one of the main uses, yes, although k6 gives you a lot of flexibility to decide what your pass/fail criteria are, with its custom metrics (https://k6.io/docs/using-k6/metrics#custom-metrics), checks (https://k6.io/docs/using-k6/checks) and thresholds (https://k6.io/docs/using-k6/thresholds).

See the example in https://github.com/loadimpact/k6/#checks-and-thresholds. If, at the end of the test run, some of these rules (encoded as `thresholds`) are unsatisfied, k6 will exit with a non-zero exit code (and thus fail your CI check):

- The 95th percentile of all HTTP request durations should be less than 500ms.

- The 99th percentile of all HTTP request durations that were tagged with `staticAsset:yes` should be less than 250ms.

- The failure rate of the checks should be less than 1%, although if it's more than 5%, the test will abort immediately.
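As a rough sketch (not the exact README snippet), those rules translate to k6 options along these lines:

    import http from 'k6/http';
    import { check } from 'k6';

    export const options = {
      thresholds: {
        // 95th percentile of all HTTP request durations below 500ms
        http_req_duration: ['p(95)<500'],
        // 99th percentile of requests tagged staticAsset:yes below 250ms
        'http_req_duration{staticAsset:yes}': ['p(99)<250'],
        // check success rate above 99%; abort early if it drops below 95%
        checks: ['rate>0.99', { threshold: 'rate>0.95', abortOnFail: true }],
      },
    };

    export default function () {
      const res = http.get('https://test.k6.io/'); // placeholder URLs
      check(res, { 'status is 200': (r) => r.status === 200 });
      http.get('https://test.k6.io/favicon.ico', { tags: { staticAsset: 'yes' } });
    }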


Does K6 deal with the coordinated omission problem?

Gil Tene (Azul Systems) has argued convincingly [1] (slides [2]) that monitoring tools get latency measurement wrong: sudden spikes aren't represented correctly in the timings due to averaging and the misuse of percentiles.

He argues that percentiles simply aren't useful, because, statistically, most requests will experience >= 99.99-percentile response times. All percentiles lie; the only truly useful and realistic number is, in fact, the 100th percentile.

He also argues that the most revealing way to show latency is with a complete histogram.

I haven't used K6 yet, but I've been looking for a good load-testing tool, and intend to try it out.

[1] https://www.youtube.com/watch?v=lJ8ydIuPFeU

[2] https://www.azul.com/files/HowNotToMeasureLatency_LLSummit_N...


This used to be an issue with older k6 versions, but since v0.27.0 we have arrival-rate executors [1] that should address the biggest issue of coordinated omission. And we've always measured the max value of all metrics, even when they are not exported to some external output and we just show the end-of-test summary.

It has been a while since I watched these Gil Tene talks, so I might be missing something, but I think the only remaining task we have is to adopt something like his HDR histogram library [2]. And that's mostly for performance reasons, though we'll probably play around with the correction logic as well.

[1] https://k6.io/docs/using-k6/scenarios/arrival-rate

[2] https://github.com/loadimpact/k6/issues/763
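In practice, [1] looks something like this (the scenario name and numbers are just illustrative):

    import http from 'k6/http';

    export const options = {
      scenarios: {
        open_model: {
          // iterations are started at a constant rate, regardless of how long
          // previous ones take - the open-model behavior that avoids the
          // classic coordinated-omission trap of simple looping-VU tests
          executor: 'constant-arrival-rate',
          rate: 100,            // start 100 iterations...
          timeUnit: '1s',       // ...per second
          duration: '5m',
          preAllocatedVUs: 50,  // initial VU pool
          maxVUs: 200,          // k6 can grow the pool up to this if needed
        },
      },
    };

    export default function () {
      http.get('https://test.k6.io/'); // placeholder URL
    }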


That's perfect, thanks!


That's an interesting talk.

k6 allows the user to choose[1] which metrics are relevant for the particular test. By default, it displays max or p(100), p(95), p(90), min, med, and avg. Users can specify other values, such as p(99.995).

It's also possible to create completely custom metrics[2] to track whatever is relevant to the user.

k6 allows the user to change almost all aspects of execution, tracking, and reporting.

[1] https://k6.io/docs/using-k6/options#summary-trend-stats

[2] https://k6.io/docs/using-k6/metrics
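For example (a sketch; the custom metric name and the chosen stats are made up for illustration):

    import http from 'k6/http';
    import { Trend } from 'k6/metrics';

    export const options = {
      // which stats to show for Trend metrics in the end-of-test summary
      summaryTrendStats: ['avg', 'min', 'med', 'max', 'p(90)', 'p(99)', 'p(99.995)'],
    };

    // a custom Trend metric, e.g. to track just one endpoint (second arg: values are times)
    const checkoutDuration = new Trend('checkout_duration', true);

    export default function () {
      const res = http.get('https://test.k6.io/'); // placeholder URL
      checkoutDuration.add(res.timings.duration);
    }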


Yup. I put a couple of custom metrics for different failure types in the load script and ship them out to InfluxDB/Grafana for plotting in the very first graph. That way, if I see a lot of "failure" color, flags go up. Otherwise, the usual amount of background noise is OK.
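(Roughly this pattern, for anyone curious; the metric names and the expected-text check are made up for the example:)

    import http from 'k6/http';
    import { Rate } from 'k6/metrics';

    const httpFailures = new Rate('http_failures');
    const validationFailures = new Rate('validation_failures');

    export default function () {
      const res = http.get('https://test.k6.io/'); // placeholder URL
      httpFailures.add(res.status !== 200);
      validationFailures.add(!res.body || !res.body.includes('expected text'));
    }

Run with something like `k6 run --out influxdb=http://localhost:8086/myk6db script.js` and plot the two rates in Grafana.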


That's more or less it. Source: I am one of the k6 developers, and here's also an answer from our CEO on the topic: https://community.k6.io/t/k6-licensing-agplv3/395/2


The CEO's response doesn't inspire any confidence:

> If you statically or dynamically link any part of the k6 codebase with some other code (a derivative work), the license's copyleft virality is triggered and that other code also needs to be made available under AGPLv3. When it comes to using the k6 binary and interacting with it from another process or over a network it is not certain exactly how the copyleft virality would apply from what I've been told by lawyers with OSS license knowledge.


Happy to answer any questions you have regarding the license and why we chose it. Is your concern the copyleftness of the license?


Thanks. I must say, I prefer weak copyleft licenses for my projects (MPLv2 with the incompatibility clause), so my concern isn't copyleft per se, but the virality of it.

That said, I do have a couple questions:

- Would a non-AGPLv3 open-source or source-available project using K6 be incompatible with AGPLv3 given they might "distribute" CI/CD along with their open-source or source-available code?

- If a closed-source project publishes results from K6, are they now in violation of AGPLv3?


I'm not a lawyer, so take that into account when reading my reply :)

> Thanks. I must say, I prefer weak copyleft licenses for my projects (MPLv2 with the incompatibility clause), so my concern isn't copyleft per se, but the virality of it.

I need to read up some more on MPLv2, but from what I've read so far, it looks like a license that could fit k6 as well.

> - Would a non-AGPLv3 open-source or source-available project using K6 be incompatible with AGPLv3 given they might "distribute" CI/CD along with their open-source or source-available code?

No, using k6 as a tool in CI/CD would not be a violation of AGPLv3 or trigger the virality of the license with regard to the non-AGPLv3 open-source or source-available code base. Any k6 test scripts you write would also not need to be licensed under AGPLv3. GitLab built an integration with k6, which could be seen as a data point in agreement with this view: https://docs.gitlab.com/ee/user/project/merge_requests/load_...

> - If a closed-source project publishes results from K6, are they now in violation of AGPLv3?

If we're talking about publishing results from testing of the closed-source project, then no, that would not be a violation. There is, however, an undefined gray area around clause 13, "Remote Network Interaction", in AGPLv3 in terms of what is allowed when, for example, offering a SaaS product based on an AGPLv3-licensed component such as k6. Clause 13 has not been tried in court, AFAIK. It could seem that clause 13 would provide some protection for a business like ours from other companies building commercial solutions on top of k6, but from what I've been told by lawyers, we shouldn't count on that, so we're not (anymore).

We have had many internal discussions on this topic, as well as with other companies and individuals in the open-source space: whether to go the route of a source-available license, effectively restricting the commercialization possibilities of k6 by other companies, or to go the open-core route and restrict what we release as open source. Every time we've had this discussion internally, we've come back to open source being the right choice for us, given the type of product we build and where we think we can capture value.

We'll look into MPLv2.


Thanks a lot for being so thorough with the answers. (IANAL either but) Given the above, I think you'd be fine with AGPLv3.

MPLv2 and other related licenses, such as the Eclipse Public License v2 and the Erlang Public License, have the advantage of being well understood and, in some cases, auto-approved for use at various enterprises, and are thus a good midway point between MIT/Apache and GPLv3.

That said, you'd be right to lean more copyleft (going with the Server Side Public License, for example) if K6 is a key product (and not a complementary product), though Bryan Cantrill thinks you might be better off closing up the source in that case: http://dtrace.org/blogs/bmc/2018/12/14/open-source-confronts...


I might be missing something, but k6 should be able to completely cover all of your use cases? I am one of the k6 developers; can you share exactly what the missing piece was?

> tests that would run much much more often (like every second or couple of seconds), in a continuous manner.

You can do that; just use an arrival-rate executor that runs an iteration every second, with a test duration of 365 days or something like that :) See https://k6.io/docs/using-k6/scenarios/arrival-rate

> The goal was to have something at the same time like a healthcheck (is it still working), like a performance test (does it answer in a timely manner) and like a validation test (does it answer the right result - the endpoints we wanted to test do "complex calculations"). Our best answer so far was to wrap K6 in an infinite loop, but I wonder if there could be something smarter.

You can certainly wrap k6 in an infinite loop. Nothing wrong with that, though you can probably use the `scenarios` feature (with long `duration` values) to achieve it without wrapping k6: https://k6.io/docs/using-k6/scenarios
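A rough sketch of that approach (the endpoint and the expected result are placeholders):

    import http from 'k6/http';
    import { check } from 'k6';

    export const options = {
      scenarios: {
        continuous_probe: {
          executor: 'constant-arrival-rate',
          rate: 1,           // one iteration...
          timeUnit: '5s',    // ...every 5 seconds
          duration: '365d',  // effectively "forever"
          preAllocatedVUs: 2,
        },
      },
      thresholds: {
        // flag it if the endpoint gets slow or starts returning wrong results
        http_req_duration: ['p(95)<1000'],
        checks: ['rate>0.99'],
      },
    };

    export default function () {
      const res = http.get('https://example.com/api/calculate'); // placeholder endpoint
      check(res, {
        'is up (healthcheck)': (r) => r.status === 200,
        'right result (validation)': (r) => r.json('result') === 42, // placeholder expectation
      });
    }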


There is auto-completion; see https://k6.io/docs/misc/intellisense

Regarding debugging, that's unfortunately unlikely to come any time soon... We'd need support for that in the JS runtime we're using (https://godoc.org/github.com/dop251/goja), and then we'd need to figure out how to expose it in k6 while dealing with potentially hundreds of concurrent JS runtimes (VUs). Not impossible, just unlikely to land anytime soon...


Yes, that's it. If the C++ application exposes some service, then k6 might be the tool to test how it behaves under load now (or in the future). The only protocols we currently support are HTTP(S), WebSockets and gRPC, though we plan to add a lot of others (raw TCP/UDP, various DBs, queues, messaging, DNS, etc.) in the future, and we've recently added a way to extend k6's functionality by writing external Go modules:

- https://github.com/loadimpact/k6/blob/master/release%20notes...

- https://github.com/k6io/xk6
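For example, a gRPC test looks roughly like this (the .proto file, service and method names are placeholders):

    import grpc from 'k6/net/grpc';
    import { check } from 'k6';

    const client = new grpc.Client();
    // load the service definition from a local .proto file (placeholder path)
    client.load(['protos'], 'hello.proto');

    export default function () {
      client.connect('localhost:50051', { plaintext: true });

      // placeholder service/method and request message
      const response = client.invoke('hello.HelloService/SayHello', { greeting: 'k6' });
      check(response, { 'status is OK': (r) => r && r.status === grpc.StatusOK });

      client.close();
    }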

