
Schools?


I totally agree. The authors have found a security vulnerability in a mechanism that wasn't even trying to protect the software from attacks. I don't see any relevance in this paper beyond being a good exercise.


You are speaking of metadata as if all metadata were equal. Signal does collect phone numbers (though, since usernames were introduced [1], this can now be made opt-in), but not contacts, the social graph, or much other relevant metadata [2]. What they can gather from this is only when the specified phone number registered with Signal's services and when it last connected to the server [3].

So, if an app that simply keeps a list of numbers registered to the service, with no metadata attached to them except their last access, counts as a "metadata exchanging app", the same label could be applied to a much larger number of services.

It may not be anonymous, but it can hardly be disregarded as private.

[1] https://signal.org/blog/phone-number-privacy-usernames/

[2] https://signal.org/blog/sealed-sender/

[3] https://signal.org/bigbrother/central-california-grand-jury/


>but not contacts, the social graph, or much other relevant metadata [2].

Assuming you trust them (notice that all your links point to signal.org's own publications). Most privacy people are cautious/paranoid and assume that everything that can be collected is collected. Even assuming a lack of malicious intent, what's stopping NSA from hacking into Signal's infrastructure and logging who's talking to who along with timestamps? That's not to say I don't trust Signal (it's the best mainstream solution right now), but it could do better to hide metadata from the protocol.


> Even assuming a lack of malicious intent, what's stopping NSA from hacking into Signal's infrastructure and logging who's talking to who along with timestamps?

Sealed Sender, the second link in the comment you've replied to. The indicator is off by default, but you can enable it under Settings → Privacy → Advanced. If I remember correctly, it doesn't work for the very first message you exchange with someone, but then it turns on and remains on.

In layman's terms, it turns "from A; to B; content: <encrypted>" into "to B; content: <encrypted>". Their infrastructure doesn't need to know the "from" part to serve its purpose, so they strip it away.
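
Roughly, you can picture the envelope a sealed-sender message travels in like this; a conceptual sketch only, not Signal's actual wire format, and the field names are invented for illustration:

    /* Conceptual model of a sealed-sender envelope. The server only sees
       the routing fields; the real sender identity travels inside the
       encrypted payload and is readable only by the recipient. */
    #include <stddef.h>

    struct sealed_envelope {
        char           recipient[64];   /* "to B": needed to route the message */
        /* no plaintext "from A" field at this level */
        unsigned char *ciphertext;      /* sender certificate + message body */
        size_t         ciphertext_len;
    };

The recipient decrypts the payload and finds the sender certificate inside, so only the two endpoints ever learn the "from" part.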

If it were the other way around, they'd have to give that info to the (US) courts. Same as for any other US-based business, it's not optional: they can't ignore such requests and they can't lie, otherwise they'd be placing themselves in legal trouble for a random nobody who happens to be using their product. So, when I see this page, I fully believe them: https://signal.org/bigbrother/. If I didn't, my first step would be to look up those court cases from alternate sources.


The point is that you don't have to trust them, because the client (where the relevant cryptography is performed) is open source. The fact that my links point to signal.org is completely irrelevant: those blog posts are just ways to advertise facts that are freely verifiable. You can read the source code to check the implementation of sealed sender or how the social graph is handled.

The NSA can hack into Signal's infrastructure, and what they would be able to gather is the same information provided by Signal in reply to subpoenas (the whole list is here: https://signal.org/bigbrother/), because everything else is end-to-end encrypted.


One side of the "debate" seems to be represented solely by Joel Moskowitz, who is cited in every article that promotes the message that "phone radiation is harmful".


looks real to me


Reality has much higher polygon count, and much better texture mapping.


Could you expand on why people shouldn't compile their kernel? I think it's fairly useful to compile your own to get a better understanding of what the kernel does and to better suit your own needs. For example, if I have little free space on my boot partition and my disk is encrypted, I want my kernel to be as small as possible, so I will deselect every driver I don't need. Or maybe the driver for my new device is not included in the kernel builds of my distribution.

Not only would I not say that most people shouldn't compile their kernel, I would say that most Linux users* should do it at least once, so they can understand the power they have compared to closed-source operating systems.

*by Linux users I mean users who use Linux as their main operating system, not people who ssh in once in a while or rarely boot their Linux partition


Lots of reasons:

- everyone has better things to do than compile software they didn't write

- a good distro has probably tested it on a bunch of hardware, and hopefully signed it (or at least the packaging), so you know it was securely acquired and built

- you won't learn much at all about the Linux kernel by compiling it... you may learn a tiny introductory amount about it by configuring it, but that's still not very much at all, really (it may seem like a lot when you don't know how to measure what you're learning)

- what you should learn from configuring and compiling a Linux kernel is that you don't ever want to be in a situation where you have to do it again (without a really spectacular reason, or being paid)

- if you're compiling a kernel because your boot partition is small... make it bigger, or don't have one at all. come on.


using the same binaries shares the verification for said binary.

a modular kernel with a custom initrd (generated by the distro) is small enough for most.

so if you are into adventures or in the business of kernel development, yes, roll your own. anybody else is better served standing on the shoulders of a maintained binary distribution.


What kind of protection does Facebook Container have other than deleting cookies outside of the container?

For my case Total Cookie Protection is enough, but if you want the same protection as Facebook Container for every website (i.e. session cookies that are deleted each time you restart the browser), you can install Cookie AutoDelete or use the built-in option to delete cookies on restart (whitelisting websites where you need permanent cookies).


It also blocks network requests made by third-party sites to FB. So unless you're already running ETP with the content blocker on (strict mode, private browsing mode) or another ad blocker kind of addon that also blocks FB strictly, then that's an additional measure.


Although I don't agree with the article, I can understand how someone could start hating work and the urge to always be productive. Our economy promotes competitiveness, which of course creates progress and brought us where we are now, but I think covid worked as a catalyst for all the pressure workers were already feeling because of this. And when you are so distressed by the system, it's much easier to believe that you don't have to work as hard to be happy than to start pushing even harder.


>why in the heck is he going for POSIX compatibility

Because existing desktop applications can be ported to ToaruOS

>why not a safer microkernel, keeping everything in userspace?

This is a design choice, microkernels aren't necessarily better than hybrid, they're slower, harder to debug and process management can be complicated


Just curious how hard it would be to forego POSIX entirely if you were building an OS. I know TempleOS is entirely from scratch. I'd like to implement a small LISP like SectorLISP [1] (see yesterday's posts too on HN). I don't know much about building my own OS, so I'd like to start with something like MenuetOS (my first PL was asm), SerenityOS, TempleOS, or this one. I'd like it to be completely an 'island', i.e. POSIX not a requirement. I want to use it to hack on in isolation without any easy copy/paste shortcuts. I know Mezzano exists, and it has booted on bare metal, but I would like to start with the OS's above, implement my own LISP, and go from there.

Any other OS recommendations based on my ignorant, but wishful, reqs above? I realize there are some others in Rust too. Thanks!

[1] https://github.com/jart/sectorlisp


Someone making a new OS should define a completely new system call interface, as it is likely possible to conceive a better interface now than 50 years ago; and if it were not different, there would be no reason to make a new OS instead of modifying an existing one.

Nevertheless, the first thing after defining a new OS interface must be writing a POSIX API translation layer, to be able to use the huge number of already existing programs without modification.

Writing a new OS is enough work, nobody would have time to also write file systems, compilers, a shell, a text editor, an Internet browser and so on.

After having a usable environment, one can write whatever new program is desired, which would use the new native OS interface, but it would not be possible to replace everything at the same time.

Besides having a POSIX translation layer, which can be written by taking one of the standard C libraries as a starting point and replacing its system calls with calls into the translation layer, some method must be found for reusing device drivers made for other operating systems, e.g. for Linux or for one of the *BSD systems.
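
To make the idea concrete, here is a minimal sketch of what such a shim could look like; the native_open/native_read calls, the capability flags and the handle type are all invented for illustration, since a real new OS would define its own interface:

    /* Hypothetical POSIX translation layer: the C library's open()/read()
       stubs resolve to these functions, which map POSIX semantics onto an
       invented native interface. Purely illustrative. */
    #include <stddef.h>

    typedef long native_handle_t;

    /* Invented native system calls of the new OS. */
    extern native_handle_t native_open(const char *path, unsigned caps);
    extern long            native_read(native_handle_t h, void *buf, size_t len);

    #define CAP_READ  0x1u
    #define CAP_WRITE 0x2u

    int posix_open(const char *path, int flags)
    {
        /* Map POSIX open flags onto native capabilities (greatly simplified). */
        unsigned caps = (flags & 1 /* O_WRONLY */) ? CAP_WRITE : CAP_READ;
        native_handle_t h = native_open(path, caps);
        return (h < 0) ? -1 : (int)h;   /* reuse the native handle as a POSIX fd */
    }

    long posix_read(int fd, void *buf, size_t count)
    {
        return native_read((native_handle_t)fd, buf, count);
    }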

Nobody would have time to also write all the needed device drivers. So there must exist some translation layer also for device drivers, maybe by running them in a virtual machine.

The same as for user applications, if there is special interest in a certain device driver, it should be rewritten for the new OS, but rewriting all the device drivers that could be needed would take years, so it is important to implement a way to reuse the existing device drivers.


> Writing a new OS is enough work, nobody would have time to also write file systems, compilers, a shell, a text editor, an Internet browser and so on.

> So there must exist some translation layer also for device drivers, maybe by running them in a virtual machine.

> ... but rewriting all the device drivers that could be needed would take years, so it is important to implement a way to reuse the existing device drivers.

I'd think most people making a hobby OS specifically want to do these things.

I also think most don't care about wide hardware compatibility.


Even if you do not want the new OS to run on anything else but your own laptop, that still needs a huge amount of drivers, for PCIe, USB, Ethernet, WiFi, Bluetooth, TCP/IP, NVME, keyboard / mouse / trackpad, sound, GPU, sensors, power management, ACPI and so on.

The volume of work for rewriting all these is many times larger than writing from scratch all the core of a new OS.

Rewriting them requires studying a huge amount of documentation and making experiments for the cases that are not clear. Most of this work is unlikely to present much interest for someone who wants to create an original OS, so avoiding most of it is the more likely way leading to a usable OS.


I think you're still missing the point here.

Not every hobby OS needs or even wants networking, gpu support, even storage I/O, etc. See TempleOS.

The goal typically isn't to make a fully featured OS.


If you do not want those features, that means that the OS is not intended to be used on a personal computer, but only on an embedded computer.

For dedicated embedded computers, the purpose for an OS becomes completely different and compatibility with anything does not matter any more.

Not only can personal computers not be used without a huge number of device drivers; even for a very simple server, e.g. an Internet gateway/router/firewall or a NAS server, writing the device drivers, the file systems and the networking part would be much more work than writing the core of a new OS.

Only for embedded computers can the work needed for device drivers be smaller than that for the base operating system.


You hit it on the head: The point is purely for fun and learning. I want to learn as much as I can by rebuilding apps from scratch, etc. I had my first computer in 1977/78, a Commodore PET 2001, followed by a Vic-20, so if I can duplicate the bare system I had then and a PL to create apps, I am back where I started - having fun with computers!


> Nevertheless, the first thing after defining a new OS interface must be writing a POSIX API translation layer, to be able to use the huge number of already existing programs without modification.

I disagree. POSIX sucks. Build a hypervisor so people can run their applications in a VM and insist that native programs use the non-garbage API. It's the only way you'll ever unshackle yourself.


The point of writing a new OS is to use it, otherwise you do not get any of its supposed benefits.

If you do all your normal work in a virtual machine, what will you use your new OS for?

Writing any useful application program in a complete void, without standard libraries and utilities, would take a very long time, and unless it is something extremely self-contained it would not be as useful as something that can exchange data with other programs.

It is much easier to first write a new foundation, i.e. the new OS, with whatever novel ideas you might have for managing memory, threads, security and time, and then start to use the foundation with the existing programs, hopefully already having some benefit from whatever you thought you could improve in an OS (e.g. your new OS might be impossible to crash, which is not the case with any of the popular OSes), and then replace one by one the programs that happen to be important for you and that can benefit the most from whatever is different in the new OS.

For the vast majority of programs that you might need from time to time it is likely that it would never be worthwhile to rewrite them to use the native interfaces of the new OS, but nonetheless you will be able to use them directly, without having to use complicated ways to share the file systems, the clipboard, the displays and whatever else is needed with the programs run in a virtual machine.

Implementing some good methods for seamless sharing of data between 2 virtual machines, to be able to use together some programs for the new OS with some programs run e.g. in a Linux VM, is significantly more difficult than implementing a POSIX translation layer enabling the C standard library and other similar libraries to work on the new OS in the same way as on a POSIX system.


> If you do all your normal work in a virtual machine, what will you use your new OS for?

You write replacements for or properly port your every-day workflow to the new OS. You already wrote a whole new OS for some reason even though there are hundreds to choose from, presumably there is value in replacing your tools to take advantage of whatever you put all that effort into or else why bother? The VM is for things you haven't ported yet or less important workflows.

Besides, people run windows and do all their work in WSL all the time.

> Writing any useful application program in a complete void, without standard libraries and utilities, would take a very long time [...]

So does writing an OS and you've already decided that was worth the effort, yet you balk at rewriting some commandline utilities[0] and a standard library? Please.

> It is much easier to first write a new foundation [...] then start to use the foundation with the existing programs [...] and then replace one by one the programs that happen to be important for you and that can benefit the most from whatever is different in the new OS.

Any reason not to just do that with a VM? Forcing POSIX compatibility into your OS is going to constrain your choices (not to mention your thinking) to the point that you'd probably be better off just modifying an existing OS anyway.

> For the vast majority of programs that you might need from time to time it is likely that it would never be worthwhile to rewrite them to use the native interfaces of the new OS, but nonetheless you will be able to use them directly, without having to use complicated ways to share the file systems, the clipboard, the displays and whatever else is needed with the programs run in a virtual machine.

A: it isn't that complicated. B: if you can use them so directly without having to deal with the separation provided by a VM, it's likely you didn't improve their security situation anyway. Again, why not just modify an existing POSIX OS in this case?

> Implementing some good methods for seamless sharing of data between 2 virtual machines, to be able to use together some programs for the new OS with some programs run e.g. in a Linux VM, is significantly more difficult than implementing a POSIX translation layer enabling the C standard library and other similar libraries to work on the new OS in the same way as on a POSIX system.

I doubt it is as hard as, say, writing a brand new OS that's actually in some way useful. Why go through the effort of the latter only to throw away a bunch of potential by shackling yourself with a set of barely-followed standards from the 1970s?

[0] POSIX does nothing to help you with anything GUI.


> Someone making a new OS should define a completely new system call interface, as it is likely possible to conceive a better interface now than 50 years ago; and if it were not different, there would be no reason to make a new OS instead of modifying an existing one.

For an example of how things like this can be done incrementally, you can look at io_uring on Linux.
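
As an illustration of how different it feels from issuing one read() syscall per operation, here is roughly what a single read looks like through the liburing wrapper; a minimal sketch with most error handling omitted, and /etc/hostname is just an arbitrary example file:

    /* Minimal liburing example: one read submitted through the ring
       instead of a classic blocking read() syscall. Build with -luring. */
    #include <liburing.h>
    #include <fcntl.h>
    #include <stdio.h>

    int main(void)
    {
        struct io_uring ring;
        char buf[4096];

        if (io_uring_queue_init(8, &ring, 0) < 0)
            return 1;

        int fd = open("/etc/hostname", O_RDONLY);
        if (fd < 0)
            return 1;

        /* Queue the read as a submission-queue entry, then submit it. */
        struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
        io_uring_prep_read(sqe, fd, buf, sizeof(buf), 0);
        io_uring_submit(&ring);

        /* Reap the completion; cqe->res holds the byte count or -errno. */
        struct io_uring_cqe *cqe;
        io_uring_wait_cqe(&ring, &cqe);
        printf("read %d bytes\n", cqe->res);
        io_uring_cqe_seen(&ring, cqe);

        io_uring_queue_exit(&ring);
        return 0;
    }

The same ring can batch many submissions and completions, which is what makes it an interesting counterexample to the one-call-per-operation POSIX model.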


At that point, why not just contribute to Linux?


redox is one i've been following from afar. rust, not posix, microkernel, s/everything is a file/everything is a url/

it looks pretty cool, although the url thing seems yet to prove its utility. they seem to be playing around a bit with using the protocol component (net, disk, etc), but it's unclear what this adds over just using paths. although maybe if they used the protocol to describe the encoding of the data, it would add something?


> microkernels aren't necessarily better than hybrid, they're slower, harder to debug and process management can be complicated

I was basically on board, but how are they harder to debug? I'd think being able to run components in userspace would make debugging way easier.


You are now debugging a distributed system.


Oh, good point; I was thinking at the component level


fun fact: you already are in linux. being a monolith doesn't change the nature of the problem.


Are you? I'm pretty sure I can run a single Linux and point a single gdb at it[0] and debug it in a single memory space; I don't think you can do that with a microkernel.

[0] possibly resorting to UML, but still


I'm very confused by this comment. There are a ton of other things you need to implement if you want to have desktop applications. POSIX does not specify any APIs for graphical applications. You might be thinking of something else.

If you want to support the lion's share of desktop applications, it would actually be better to implement the Win32 API...


Sorry, I meant software in general but wrote "desktop applications" instead. Anyway, the point still stands: even if you'll have to implement other things, such as the graphical interface, the POSIX-compliant code won't need modification.


If you're taking an app built for Linux or GNU or BSD, then it probably will need modification, as those systems have various extensions on top of POSIX.


Isn't QNX a microkernel? I remember it being known for being quite fast?


No, it's more of a nanokernel. It's very fast.

Full disclosure: I maintain the QNX toolchain.


As a real-time OS it is known for deterministic response times. If it were exceptionally fast (and licenses cheap enough), you'd see hosts in the TOP500 using it.


I agree 100%. QNX is lurking in products you may use. It was the OS for the show control system used throughout entertainment, where real time is necessary for the safety of the devices it controls (which also have hardware safety at a lower level). I would drop into the QNX terminal for certain tasks. Unfortunately, you used to be able to download the show control software and play with it, but it has since been bought by a company that sells it with the equipment they rent, so you need to buy training and it is behind their wall now. Not QNX, but the show control software that runs on QNX.


The TOP500 is chock full of microkernels though, even if the "I/O nodes" run Linux.


I'd like to see more information about that. I remember that Penguin Computing offered something like that some 15 years ago, but I don't know where it was deployed or whether it still is. Cray and IBM also had such a concept for their superclusters in the past, but are they still using it? The one HPC environment I worked on (a major car manufacturer in Europe) used plain RH Linux on all nodes as recently as three years ago.

The current #1 (Fugaku) uses IHK/McKernel as kernel for the actual payload. The previous #1 (IBM Summit) seems to use RH Linux though. Perhaps, since the most performance critical part is run by and within the GPGPU(s), the actual OS doesn't matter all that much (for performance -- it matters of course for programmer's comfort/efficiency).


There used to be a lot of "special microkernel on compute RPCing to Linux on I/O" on Crays and the like. Hard to say how prevalent it is now, and most annoyingly I can't recall the names. (Charon?)


So this service basically consists of paying to compromise E2EE, adding an attack vector, storing my otherwise encrypted conversations on centralised servers, and using an arguably worse client with fewer features, just for the convenience of not having to open 3 separate apps? Who is this for?


It's for all of us! Created by those nice folks over at the NSA! I make joke!

