I'm surprised that Canada doesn't seem to be talking about doing this.
We've already got a strong payment-processing brand in Interac: it's used daily for millions of debit transactions and supports all the features you'd expect (in Canada) from a payment card (tap, chip-and-PIN). There's also the MasterCard Debit and Visa Debit branding, which seem to bridge debit transactions to the MasterCard and Visa networks. And Interac-capable terminals are already basically everywhere that Visa and MC are accepted.
My thought is that Interac should launch a credit card brand called "Interac Credit". The actual credit would be via the banks, just like it is with Visa and MC. Interac already has the relationships with merchants and banks to make this happen, and it has the mindshare with consumers to make it successful.
The Canadian government has been trying for four or five years now to get Canadian banks to embrace Open Banking, which would allow these sorts of products to be built quickly.
The banks have consistently refused to do anything, because they don't want any change that threatens their oligopoly. Canadian bank services are more or less the same as they were a decade ago.
The current government will have to show that it can resist rich people lobbying in this regard, before any real change can happen.
Indeed, no one seems to ever talk about the banking monopoly in Canada so it's not a big political issue. It's not nearly as controversial as the telecom or airline monopolies.
There are some new banking startups popping up in Canada like Neo Financial (from the guys who made SkipTheDishes) but they are online-only and have limited integration with stuff like Plaid.
Canadian banks still live in 2006. They can't even make e-Transfers more convenient, so people could request a payment or pay via QR code.
Sometimes I feel like they don't actually refuse to do things; maybe they're just not capable of improving. Something in their chain of command is broken and doesn't let them change.
This would still rely on Visa/MasterCard allowing dual-branded credit cards for overseas transactions, which isn't a very common arrangement. The only market that has this is in Asia where there are UnionPay + Visa/MasterCard dual branded cards and I suspect that the reason they allow this is because the market is huge, especially compared to Canada.
Also Interac does not do online transactions outside of some very specific merchants that take Apple/Google Pay transactions. This is how Interac reduces fraud risk, which is why interchange rates for Interac are so low.
You can do this in multiple steps. Start with a credit card usable only with Canadian merchants, which will cover the great majority of transactions for the great majority of Canadians. I'll have an MC for travel and for ordering from non-Canadian merchants, and this Canadian credit card for the other 95% of my expenses. If a significant percentage of Canadians have such a card, major non-Canadian services will add it as a payment option (e.g. ChatGPT or Claude). Then you branch out by either joining or co-branding with an EU credit card company, if such a company succeeds.
A world with a patchwork of payments processing options will look different for travel and business, in some ways worse, but such is life in a "multipolar world" which the Americans elected their leadership to conjure up.
We have dual-branded debit cards in Canada, too. But these are all debit cards, not credit cards. Visa/MC makes way more money off credit cards than debit cards, which is why they're much more hesitant to allow dual-branding.
> This would still rely on Visa/MasterCard allowing dual-branded credit cards for overseas transactions, which isn't a very common arrangement
We can and should make this practice illegal, along with several other anticompetitive policies in this space. Oh no they might hit us with tariffs in response?
Interac strong? Are you serious? That thing can take up to 30 minutes to receive your money! (source: interac.ca)
<< Interac e-Transfer is a fast, secure and convenient way to send money to anyone in Canada using online banking. The participating bank or credit union transfers the funds using established and secure banking procedures. Transfers are almost instant, but can take up to 30 minutes depending on your bank or credit union. >>
Running two Lenovo ThinkVision displays off of my work MacBook Pro.
On the MBP built-in display, the upper-left and upper-right corners are rounded. I believe this is due to the shape of the display. The bottom corners are not rounded.
On the external displays, the corners are all square.
Same here, so I had to overcome this fear. But to be fair, my incident was >10 years back, and I was trying to be fancy (resizing a live FS, IIRC). Btrfs has come a long way since then, and is even the standard filesystem for Synology NAS boxes.
Same here, repeatedly. I've never had a BTRFS fs that didn't spontaneously combust before it would have been decommissioned. Recovery or even check is always impossible. Only things that have caused more loss were (old) XFS on Linux that ate files every power outage and ReiserFS which also blew up in an unfixable way. I'll always recommend staying away from them. A filesystem should always have reliability and no loss, particularly no invisible loss, as its top priority.
I have one btrfs going at the moment. Nothing important, naturally. Just a Silverblue test installation. Will see how that fares.
Honestly, the big thing that the Pi has is the support and mindshare. Compared to many equivalents (e.g., the Banana Pi) you'll find much better documentation and have a much more polished experience with a Raspberry Pi.
The only question, for me anyway, is how much value I put on that experience and whether the increased cost is worth it.
Yes, this is it. When I'm experimenting, I'm happy to pay an extra $40 for the name-brand device with thousands of blog posts supporting it, since that community is probably going to help me be more successful and learn faster.
I'd reconsider that choice if I had a commercial product with some scale of production, where a few bucks off the BOM is important.
I had a number of PalmPilots over the years, I loved those little devices! I also remember using a keyboard dock with one of them to take notes during meetings.
I'm pretty sure I had a PalmPilot Professional, a Palm V, and a Tungsten T (which slid open). The Palm V was easily my favourite, it was a very good looking device that worked very well. In comparison, the Tungsten T was somewhat clunky.
I got the (old) Treo 650 (or similar) from my father when he bought a new one, back when I was a teenager. I was the only one with a handheld at school (I was maybe 13 or 14 years old), and I had a funny discussion with teachers about task management on a handheld (they were obviously overwhelmed by a teenager insisting that tasks belong on a handheld, not in a physical notebook).
I think it still was one of the best handhelds ever produced.
Work issued me a Windows Mobile-based Palm Treo (probably a 700wx, but I'm honestly not sure). It was decidedly not great, but mostly did the job. It was eventually replaced by a Blackberry of some kind, which worked a lot better.
What DNS features are you missing? Is this a weird UXG limitation?
I have a UCG-Ultra and was able to set up DNS just the way I wanted. My needs aren't extreme, but I was able to set up a wildcard entry (*.apps.domain -> 192.168.x.y) and fixed addresses and DNS names for various hosts.
The configuration is in a non-obvious place now and has moved around a bit over time. Currently it hides in Settings > Policy Engine > DNS. It shows entries that come from the per-host fixed IP/Local DNS configuration (you can't edit these here) and you can create new entries here (like my wildcard or some other random entry).
The thing that bothers me most about DoH is that it moves the responsibility for name resolution from the operating system to each application. So now you don't have the ability to set up your own DNS server system-wide, you need to do it per-application and per-device. Assuming, of course, that the applications and devices in question allow you to do this and/or respect your choice when you do it.
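To make the contrast concrete: an application that defers to the OS just asks the resolver library for an address and automatically inherits whatever DNS configuration the system has, while a DoH-enabled application skips that call entirely and talks to its own resolver over HTTPS. A minimal sketch of the system-resolver path in Python:

```python
import socket

def resolve_via_os(hostname: str) -> list[str]:
    """Ask the operating system's resolver, which honours /etc/hosts,
    /etc/resolv.conf, or whatever DNS server the system is configured with."""
    infos = socket.getaddrinfo(hostname, None)
    # Each entry is (family, type, proto, canonname, sockaddr); the address
    # is the first element of sockaddr.
    return sorted({info[4][0] for info in infos})

print(resolve_via_os("localhost"))
```

Every program that goes through this one call is redirected the moment you change the system's DNS settings; a program with its own DoH client is not.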
Also shoving every protocol under the sun into HTTPS just feels wrong. I get why it's happening (too many middleware boxes and ISPs think internet == web). But shouldn't we fix the ISPs and middleware instead of endlessly working around it?
For Windows
> In the Registry Editor window open: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Dnscache\Parameters
> Right-click within the “Parameters” folder and create a new DWORD (32-bit) Value. Name this new value “EnableAutoDOH” and set its value to “2.”
* https://superuser.com/posts/1764668/revisions
Not just that. The ISP knows the IP addresses anyway, so they can make an educated guess which domain you are accessing (or use SNI). So why would I want to leak this data to another entity?
Of course, Cloudflare (if page uses them) and Google (if you are not blocking their remote fonts & js) also already have this information, so there's that.
> Not just that. The ISP knows the IP addresses anyway, so they can make an educated guess which domain you are accessing (or use SNI). So why would I want to leak this data to another entity?
Because a lot of sites are behind a CDN that makes such guessing infeasible, and can use ECH to block the SNI leak. And since your ISP knows your real identity and other personal info like physical address, it's better privacy-wise for them not to be the ones who know exactly which sites your IP is visiting.
Yes, but it won’t be easy. Heavy investment has gone into HTTP and we have great tooling and support for it as a result. That has a lot of benefits and I’m glad for it. But there is a cost.
HTTP is a blunt hammer and computing sometimes needs a scalpel. Lighter, more efficient protocols are important, as QUIC and WireGuard have proven.
To play devil's advocate, shoving everything into HTTP/HTTPS also allowed a ton of innovation.
Would video streaming sites (Youtube, Vimeo, etc) ever have gotten off the ground if they had to go to IANA to get a port number assigned, then wait for browsers to support the new protocol that runs over the new port, etc? Probably not to be honest. Or maybe browsers would just let JavaScript connect to any port, which would be terrifying from a security standpoint.
I'm firmly convinced that shoving everything into HTTP/HTTPS was a mistake. But I'm also willing to acknowledge that it's probably the least-worst solution to a bunch of problems.
> Or maybe browsers would just let JavaScript connect to any port, which would be terrifying from a security standpoint.
Isn't this just WebRTC?
Also, why does everything have to be done in a browser? We're talking about name resolution. That's supposed to be done by the OS regardless so you don't have a thousand separate configuration options to change if you want to change your DNS server.
Absolutely. The investment in HTTP means I can set up a website or API in a few clicks and pay nothing (or nearly so) for it. That has made it possible for me to try many, many things over the years. It’s fabulous.
I would very much like to see that same freedom to innovate when using other protocols.
Please elaborate. Why is it "good" to have a separate network stack element in each application, and what does this mean for legacy applications that will never support QUIC?
Because different applications have different needs and there is nothing intrinsically safer or better about having the stack be kernel-resident and, in fact, a lot to recommend moving things out of the kernel and into userland, which has a better application cotenancy model (full cr3 context switches between different users).
Circumventing the role of the operating system in the name of improved efficiency and duplicating a once-centralized function in (some) applications doesn't seem like a well thought out course of action. Why have an operating system at all? Why not just boot your computer to each application you want to run, as was done on the early PCs of the 1980's?
Perhaps we shouldn't use computers to run our applications anymore. Everything could run on a gaming console. That would certainly be more efficient for the applications.
"The role of the operating system" is whatever we decide it is. The operating system exists to serve applications, not the other way around. This isn't an aesthetic thing; it's an engineering choice. Moving parts (or all) of the networking stack into userland is often better engineering, for the reasons I just supplied, any of which I'm happy to go further into.
It's "better" for the application, but not necessarily better for the whole system, or the users. I'm sure it makes perfect sense from the (selfish) application developer point of view, but it goes against the philosophy that defines the roles of the OS vs. the applications.
Considering that a lot of people use their OS just to navigate the web, you could use the logic you're applying here to argue that the browser should be an OS-provided component. Plus, networking is one of the easiest things to move to userland. An OS is still extremely useful for everything else, so I'm not sure I see your point.
I don't believe it's my responsibility to cover these basic concepts here, but the role of the OS is to abstract the hardware from the applications.
All browsers are applications, regardless of what Microsoft argued in their 1998 anti-trust suit defense.
The OS knows how to talk to your networking hardware, and it provides services for the applications to do so, and even implements some standard protocols so the applications don't need to do that.
This is all very basic stuff, but I've been out of school for many decades, so maybe they don't teach it anymore.
What? Don't worry, I understand "the basic concepts". It's just that you yourself said that it was about a philosophy so it's weird to claim that disagreeing with said philosophy is just ignorance.
Also, I'm not sure you understand what QUIC is, then. The kernel still handles most of the hardware abstraction, and in most cases still interacts exclusively with the hardware at the driver level (except for very-high-performance networking).
Yes, it can also help with protocols, but even going by your own definition ("but the role of the OS is to abstract the hardware from the applications"), protocols don't have to be at the kernel level.
You missed my original point: providing a browser was obviously an exaggerated example of doing more than just abstracting and handling the hardware, which is also what implementing protocols at the kernel level is. The kernel still handles QUIC's UDP layer (and thus the hardware).
Yes exactly, I completely agree! I was about to say that, but even with that rather restrictive definition it still doesn't make sense to want things like QUIC anywhere else but in userland.
But whenever I hear about "philosophy" in a discussion about kernels, I just default to assuming that they mean "whatever Linux does is the right way™".
But hey, I should've just skipped my kernel courses since they didn't teach me a lot about "kernel philosophy" lol.
> Also shoving every protocol under the sun into HTTPS just feels wrong. I get why it's happening (too many middleware boxes and ISPs think internet == web). But shouldn't we fix the ISPs and middleware instead of endlessly working around it?
It'd be great for the horrible ISPs and middleboxes to change, but that's not realistic, and working around it by wrapping everything in HTTPS is realistic.
Today, it's a good thing that applications don't respect the system defaults, since on basically every OS, the system defaults are either "totally insecure DNS all the time", or "auto fallback to insecure DNS". I'd only want programs to start respecting the system defaults if that ever changes.
That's like saying every application should come up with its own bespoke encryption framework because the OS doesn't utilize full-disk encryption by default. The solution is not to implement encryption in all your programs; the solution is to configure full-disk encryption in the OS.
> That's like saying every application should come up with its own bespoke encryption framework because the OS doesn't utilize full-disk encryption by default. The solution is not to implement encryption in all your programs; the solution is to configure full-disk encryption in the OS.
Should password managers just store all of your passwords in cleartext instead of encrypting them, since you should be using FDE?
> Should password managers just store all of your passwords in cleartext instead of encrypting them, since you should be using FDE?
A better analogy: should every random app bundle its own custom crypto, encrypt all its files, and ask the user for a password, just in case some user doesn't set a login password?
An app should do what it does. If secure storage is not its task, then it should probably leave it to the OS, and if its task is not DNS resolving, then it shouldn't resolve DNS. It's very annoying.
Who is to say that insecure defaults are less good for most people? The reason things like FDE and other enhanced security mechanisms aren't enabled by default is that they increase the risk of things breaking for non-tech-savvy people. I have had to recover installs from people's hard drives after they messed things up, and if they had used FDE they would have been screwed. The reality is that grandma is much more likely to forget her password than to have somebody steal her desktop and scrape her data.
It would be nice for OS vendors to create profiles for OS installs, so that people who know what they are doing can opt for the “secure” profile, but I don’t think FDE can ever be the default on mass consumer devices.
> Should password managers just store all of your passwords in cleartext instead of encrypting them, since you should be using FDE?
Those are different threat vectors. FDE stops intruders from accessing your system when it is locked/off. Password manager encryption is to prevent rogue processes on an UNLOCKED system. The system can solve the second problem either by having a more granular permissions system (like iOS) so that process A cannot read data of process B, or the OS can have a secure enclave which can store your secrets behind biometrics.
Notably, both of these solutions are implemented in Apple world and I would argue that applications should consider using those system mechanisms instead of rolling their own encryption.
If you are actually serious about security, you aren't using passwords; you are using WebAuthn and U2F where possible.
When applications don't respect system defaults, they are by definition "going rogue."
I run Pi-hole because I like having some control over the IoT garbage on my (separate IoT) home subnet. Much of that garbage already pins its DNS server, which limits my control, or makes control more difficult to achieve.
If you're worried about IoT garbage spying on you, blocking DoH wouldn't even help. Presumably, there's something important on the Internet that they need to access (since otherwise you'd just air gap them outright), so they could exfiltrate your data through the same connection that they're using for their legitimate purpose.
But that's the game that most IoT stuff plays. They offer some utility that makes them worthwhile, but they exfiltrate your data to marketeers and even government entities (such as Ring's partnership with law enforcement).
Maybe I'm old-school, but I like to have some control over what's going in and out of my network. DoH seems to exist mainly to circumvent that control.
> DoH seems to exist mainly to circumvent that control.
Hate to break it to you, but if I control the client, then I'm not in any way obligated to use DNS or any other IETF-endorsed protocol to turn names into numbers when I'm running on your network.
The idea of "controlling what's going in and out of the network" died in the 90s.
> But that's the game that most IoT stuff plays. They offer some utility that makes them worthwhile, but they exfiltrate your data to marketeers and even government entities (such as Ring's partnership with law enforcement).
Sure. My point is that blocking DoH wouldn't stop that though.
> Maybe I'm old-school, but I like to have some control over what's going in and out of my network.
What if you were a public Wi-Fi operator? You definitely shouldn't have control or insight into the traffic to and from other people's computers and phones.
> DoH seems to exist mainly to circumvent that control.
No, DoH is purely a good thing, since the evil use cases like above can happen even without it.
Sure, it's a "good thing" for the IoT garbage and the information hoarders, but it's not a "good thing" from my perspective, or from the perspective of corporate IT security.
I can see a future where Chrome will use the system resolver for everything except Google's advertising domains, and those name resolutions will be impossible to block because they're going to a Google IP that may also serve services you want. Maybe Chrome would get called out for this change and they'd back it off.
But I doubt that a smart TV that does this would get called out, and even if they were the response would likely be "Oh, that model is three months old and we don't do firmware updates, sorry."
Google already makes blocking individual services nearly impossible. Want to give kids access to Google Classroom? Auth is done through google.com so now search is unblocked. What about Google Docs? You’ve just opened all of YouTube as well.
That's not a good argument to block DoH, since once apps or devices would start doing that, they could just as easily start hardcoding the IPs instead.
>Also shoving every protocol under the sun into HTTPS just feels wrong. I get why it's happening (too many middleware boxes and ISPs think internet == web).
But the HTTP part of HTTPS is invisible to middleboxes. They see an opaque TLS stream.
Some middleboxes inspect the TLS session setup (e.g., SNI sniffing), and in some corporate environments they even decrypt the traffic (this relies on the endpoints having a corporate root certificate installed that enables the interception).
That’s incorrect. I use DNSecure (an iOS app) to relay all DNS traffic on my iPhone to my dnscrypt-proxy server, which I host on the internet (make sure you know what you’re doing before exposing DNS servers to the internet).
It’s awesome because I have system wide tracker/adblocking which works whether or not I’m on my LAN and even with Apple Private Relay on.
How does this prevent a random application from making an HTTPS request to a random hard-coded IP address? Similarly, how does this prevent an application from making an HTTPS request to a generic host (e.g., api.example.com)?
This is what DoH looks like from outside the application. You can't really tell that it's DoH since it's just an HTTPS connection, which is kind of the whole point of it.
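And there is nothing DNS-shaped about it on the wire. Per RFC 8484, a client just encodes an ordinary DNS query message in base64url and sends it as a plain HTTPS GET. A sketch of building such a request (the resolver URL is just an example; any DoH endpoint works the same way):

```python
import base64
import struct

def build_doh_url(resolver: str, hostname: str) -> str:
    """Encode a DNS A-record query as an RFC 8484 GET URL."""
    # DNS header: ID=0 (RFC 8484 recommends 0 for cacheability), flags=0x0100
    # (recursion desired), one question, no answer/authority/additional records.
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # Question section: length-prefixed labels, then QTYPE=A (1), QCLASS=IN (1).
    qname = b"".join(bytes([len(label)]) + label.encode("ascii")
                     for label in hostname.split(".")) + b"\x00"
    question = qname + struct.pack(">HH", 1, 1)
    query = base64.urlsafe_b64encode(header + question).rstrip(b"=").decode()
    return f"{resolver}?dns={query}"

print(build_doh_url("https://1.1.1.1/dns-query", "example.com"))
```

Fetched with any HTTPS client, this is indistinguishable from a regular web request to a firewall that can't see inside TLS.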
Yep. With applications hardcoding addresses and using certificate pinning, there's nothing the device owner/homeowner/network admin/sysadmin can do to inspect or modify DNS-over-HTTPS traffic, other than uninstall the application or block the connection entirely. Increasingly, blocking connections breaks the app, so you might as well just uninstall it or block it from being installed on managed endpoints.
> How does this prevent a random application from making an HTTPS request to a random hard-coded IP address? Similarly, how does this prevent an application from making an HTTPS request to a generic host (e.g., api.example.com)?
I think that, frankly, even without DoH, that ship has sailed. WhatsApp and Telegram (to name a few) are known to embed IP addresses in their applications. It is silly to assume that not standardizing DoH would prevent the same situation, and I imagine there is custom DNS bootstrapping happening already, for good or evil.
Theoretically, you could domain-block known DoH servers that certain applications use.
But yes, I believe that if an application tries hard enough, there are ways to bypass any set of rules you set on a device. Luckily, most applications just use the system's libresolv for any domain-resolution needs.
This is only because applications think they should do that. There is nothing preventing a DoH client in the OS; I think Windows and macOS already support it.
HTTP3/QUIC is on the path for this because once you have "HTTPS" over UDP, the next thing that happens is you mark all of the actual HTTP bits as optional to implement since the middlebox can't see them and just run a datagram TLS VPN over port 443 to tunnel whatever you want.
I see the carbon tax as a 'stick' (to penalize undesired behaviour, in this case emitting carbon), but it needs to be coupled with a 'carrot' to encourage the desired behaviours.
I'd like to see a carbon tax coupled with massive investments to make public transit legitimately good. There are too many places where there is no viable alternative to driving, a carbon tax will unnecessarily punish those people without giving them a reasonable alternative.
The carrot is doing the things you want to do like getting from A to B or building a home.
Government ‘carrots’ are almost universally a terrible idea because they codify specific solutions. Instead you can get the same effect more efficiently with a carbon tax large enough for people to notice.
Do people not look at the operating costs before buying a vehicle? Do they really just negotiate a monthly payment and get surprised at the amount they have to pay for fuel/maintenance/insurance?
When I bought my most recent car I had a spreadsheet which projected fuel (whether that's gas, electricity, or gas+electricity) and maintenance costs (there was some ball-parking here) for a dozen different models based on our driving habits. Once the list was narrowed down a bit I did some online quotes at my insurance company to add that in.
There were no financial surprises when I bought the car.
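The spreadsheet logic is nothing fancy; roughly this (all figures below are made-up placeholders, not quotes for any real model):

```python
# Hypothetical total-cost-of-ownership comparison. Every number here is an
# illustrative placeholder, not real pricing for any actual vehicle.
def annual_cost(km_per_year: float, litres_per_100km: float,
                fuel_price: float, maintenance: float,
                insurance: float) -> float:
    """Projected yearly operating cost: fuel + maintenance + insurance."""
    fuel = km_per_year / 100 * litres_per_100km * fuel_price
    return fuel + maintenance + insurance

candidates = {
    "gas hatchback": annual_cost(15_000, 7.0, 1.60, 800, 1_800),
    "hybrid sedan":  annual_cost(15_000, 4.5, 1.60, 700, 1_900),
}
for model, cost in sorted(candidates.items(), key=lambda kv: kv[1]):
    print(f"{model}: ${cost:,.0f}/year")
```

Run it per shortlisted model with your own driving habits and insurance quotes plugged in, and the ranking falls out with no surprises at the dealership.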
This is unnecessarily self-congratulating. The problem is that vulnerabilities are found in cars after they are on the market for a while and already purchased, so existing owners get their rates hiked, but the manufacturer never fixes the issue. No amount of research is going to guarantee your operating cost next year.