Which drives and parameters for the READ BUF SCSI command yielded the expected 2366 bytes per sector? I imagine it was combined with a seek to each sector before reading from the buffer (as it would be harder to isolate multiple sectors' data in the cache?).
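For reference, something along these lines is what I picture (a rough Python sketch driving sg_raw from sg3_utils; the device path, buffer ID, CDB fields and sector range are my assumptions, not the article's actual parameters):

    import subprocess

    DEV = "/dev/sr0"          # assumed drive path
    RAW_LEN = 2366            # expected raw bytes per sector

    def read_sector_raw(lba, out_path):
        # Force the sector into the drive cache with a plain READ(12) of one
        # block, then pull the cached data back out with READ BUFFER (0x3C),
        # mode 0x02 (data), buffer ID 0, offset 0.
        lba_b = lba.to_bytes(4, "big")
        read12 = ["a8", "00", *(f"{b:02x}" for b in lba_b),
                  "00", "00", "00", "01", "00", "00"]
        subprocess.run(["sg_raw", "-r", "2048", DEV, *read12], check=True)

        alloc = RAW_LEN.to_bytes(3, "big")
        read_buffer = ["3c", "02", "00", "00", "00", "00",
                       *(f"{b:02x}" for b in alloc), "00"]
        subprocess.run(["sg_raw", "-r", str(RAW_LEN), "-o", out_path,
                        DEV, *read_buffer], check=True)

    for lba in range(16, 32):          # arbitrary test range
        read_sector_raw(lba, f"sector_{lba:07d}.bin")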
Nice work Dmitry, looking forward to reading your next article.
The later model Pixter Multimedia had the full memory space accessible via JTAG, which is how some carts and even the boot ROM got dumped a while ago [1]. Is it the same deal with the Pixter Color?
That OpenOCD script was a bit flaky, though, and sometimes the boot ROM would already be unloaded before reading; maybe you have some insights into how to make it more robust.
BTW, have you looked into the original Pixter? The cart connector seems to have a very narrow bus, so it doesn't look like those carts contain code, and they can probably only be dumped with a decap.
That only dumps the data. That’s the easy part. None of that dumps the melodies.
The pinouts that page links to are also not quite accurate. I need to finish editing my other article on this.
I have indeed looked into the original Pixter. Deeply: I have decoded the bus, documented the device, dumped games, and produced a working emulator.
The cartridges do contain memory. Most of them are about 1 MB in size, split between code (the maximum for which is 32 kB) and audio effects + images which occupy the rest of the space. If you are very, very curious and don’t want to wait for me to finish my editing, email me and I can explain how it works.
For other architectures, it feels like a missed opportunity not to have an independent WASM build of MAME's debugger, as the whole project can already be built for WASM (although I think the latest versions were broken, as that target isn't actively maintained): https://docs.mamedev.org/initialsetup/compilingmame.html#ems...
I found their ELF format specification to have decent coverage, even if not completely exhaustive (e.g. some debug info isn't broken down past a certain point, but that might just be incompleteness rather than a limitation).
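For anyone curious what such a spec has to cover, the fixed part of the ELF header is easy to check against it (a minimal, generic sketch; nothing here is specific to their toolchain, and it only handles 64-bit files):

    import struct

    def parse_elf_header(path):
        # Reads the ELF64 header fields any ELF spec documents: type, machine,
        # entry point, and the offsets/counts of the program and section
        # header tables.
        with open(path, "rb") as f:
            ident = f.read(16)
            assert ident[:4] == b"\x7fELF", "not an ELF file"
            assert ident[4] == 2, "sketch only handles ELFCLASS64"
            endian = "<" if ident[5] == 1 else ">"
            fields = struct.unpack(endian + "HHIQQQIHHHHHH", f.read(48))
        names = ("e_type", "e_machine", "e_version", "e_entry",
                 "e_phoff", "e_shoff", "e_flags", "e_ehsize",
                 "e_phentsize", "e_phnum", "e_shentsize", "e_shnum",
                 "e_shstrndx")
        return dict(zip(names, fields))

    print(parse_elf_header("firmware.elf"))  # hypothetical input file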
Besides the implementation effort of building from scratch, it also opens you up to unknown unknowns, while the limitations of mature software are better defined. Just because Jepsen didn't test the alternatives doesn't mean they're free from issues.
Regarding the Kafka issues pointed out:
* Issue 2 only affects you if you are using transactions (and I would be interested in alternatives that handle exactly-once semantics better; there's a sketch of the Kafka transactional flow after this list);
* Consumer.close() is likely off the critical path of your applications (personally, I've never encountered these hangs);
* Aborted reads and torn transactions were mentioned as being fixed by KIP-890, and the ongoing work mentioned there has since been completed: https://issues.apache.org/jira/browse/KAFKA-14402
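On the exactly-once point in the first bullet, the transactional consume-transform-produce loop itself isn't much code (a rough sketch with confluent-kafka; the broker address, topics and transactional.id are placeholders):

    from confluent_kafka import Consumer, Producer, KafkaException

    producer = Producer({
        "bootstrap.servers": "localhost:9092",   # placeholder
        "transactional.id": "my-app-tx-1",       # placeholder
        "enable.idempotence": True,
    })
    consumer = Consumer({
        "bootstrap.servers": "localhost:9092",
        "group.id": "my-app",
        "isolation.level": "read_committed",
        "enable.auto.commit": False,
    })
    consumer.subscribe(["input-topic"])
    producer.init_transactions()

    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        producer.begin_transaction()
        try:
            producer.produce("output-topic", msg.value())  # transform omitted
            # Commit the consumer offsets inside the same transaction so the
            # read and the write succeed or fail together.
            producer.send_offsets_to_transaction(
                consumer.position(consumer.assignment()),
                consumer.consumer_group_metadata())
            producer.commit_transaction()
        except KafkaException:
            # On abort you'd also rewind the consumer; omitted for brevity.
            producer.abort_transaction()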
Without discrediting the author, as it's always cool to share these findings: was anyone actively looking for this over those 27 years?
A lot of the time it just happens that someone was the first person who even bothered to dig into the code, especially after decompilation became much more accessible for less popular architectures with Ghidra. Give it a try, there's plenty of low-hanging fruit! I submitted another case some time ago.
Also, luckily, and considering other OS easter eggs, it doesn't seem like any obfuscation was involved, like "chained xor stored in the bitmap resource of a badly supported executable format": https://x.com/mswin_bat/status/1504788425525719043
AWS autoscaling does not take your application logic into account, which means that aggressive downscaling will, at worst, lead your applications to fail.
I'll give a specific example with Apache Spark: AWS provides a managed cluster via EMR. You can configure your task nodes (i.e. the instances that run the bulk of the jobs you submit to Spark) to be autoscaled. If these jobs fetch data from managed databases, you might also have RDS configured with autoscaling read replicas to support higher query volumes.
What I've frequently seen happening: tasks fail because the task node instances were downscaled near the end of the job, since they were no longer consuming enough resources to stay up, even though the tasks themselves hadn't finished. Or tasks fail because database connections were suddenly cut off, since RDS read replicas were no longer transmitting enough data to stay up.
The workaround is to keep a fixed number of instances up, and pay the costs you were trying to avoid in the first place.
Or you could have an autoscaling mechanism that is aware of your application state, which is what k8s enables.
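To make that concrete, even without a custom HPA metric you can gate scale-down on application state yourself (a rough sketch with the official kubernetes Python client; the deployment name, namespace and the in-flight-jobs check are made up):

    from kubernetes import client, config

    def scale_workers(desired_replicas, jobs_in_flight):
        # Only ever scale down when the application reports no in-flight
        # work; scaling up is always allowed.
        config.load_kube_config()  # or load_incluster_config() inside the cluster
        apps = client.AppsV1Api()
        current = apps.read_namespaced_deployment_scale(
            "spark-workers", "data").spec.replicas   # names are placeholders
        if desired_replicas < current and jobs_in_flight > 0:
            return current  # defer the scale-down until work drains
        apps.patch_namespaced_deployment_scale(
            "spark-workers", "data",
            {"spec": {"replicas": desired_replicas}})
        return desired_replicas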
> since RDS read replicas were no longer transmitting enough data to stay up.
As an infra guy, I've seen similar things happen multiple times. This could be a non-problem if developers handled the connection-loss case, with reconnection, retries and so on.
But most developers just don’t bother.
So we’re often building elastic infrastructure that is consumed by people who write code as if we were still in the late '90s, with single-instance DBs expected to always be available.
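And the fix on the application side is usually just a retry wrapper around whatever touches the connection (a rough sketch with psycopg2 and exponential backoff; the DSN and query are placeholders):

    import time
    import psycopg2

    DSN = "host=replica.example.rds.amazonaws.com dbname=app user=app"  # placeholder

    def query_with_retry(sql, params=None, attempts=5):
        # Reconnect and retry when the replica drops the connection,
        # e.g. because it was scaled away mid-query.
        delay = 1.0
        for attempt in range(attempts):
            conn = None
            try:
                conn = psycopg2.connect(DSN)
                with conn.cursor() as cur:
                    cur.execute(sql, params)
                    return cur.fetchall()
            except (psycopg2.OperationalError, psycopg2.InterfaceError):
                if attempt == attempts - 1:
                    raise
                time.sleep(delay)
                delay *= 2  # back off before reconnecting
            finally:
                if conn is not None:
                    conn.close()

    rows = query_with_retry("SELECT id FROM orders WHERE status = %s", ("open",))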
Can you elaborate on that "little more work", given that resizing on demand isn't sufficient for this use-case, and predictive scaling is also out of the question?
Sleep obfuscation seems to be viable because scanners only execute periodically. I'm not very familiar with Windows internals, but why don't these scanners hook VirtualProtect calls and only then scan the associated memory region? My understanding is that ROP is used to make the calls seem to originate from trusted modules, but couldn't a kernel driver / hypervisor detect all these calls regardless? Is it just too taxing on overall system performance, or is there some other limitation?
VirtualProtect might be unhooked in userspace by the payload, and the payload might only be decrypted for a short moment (to run a task, do a beacon cycle), so you’d have to be quick to capture its unobfuscated form.
I'm not sure you can actually hook/intercept VirtualProtect on the kernel side (probably not, due to the performance and safety implications), but there are ETW feeds that emit telemetry for the call now (https://undev.ninja/introduction-to-threat-intelligence-etw/)
It seems like it was a follow-up to previous brute-force efforts, which include a spreadsheet with various results, but it would help to have some conclusions on which worked best: http://forum.redump.org/topic/51851/dumping-dvds-raw-an-ongo...
Also, I couldn't find any source or download for DiscImageMender.