The headline is false (“WhatsApp Backdoor allows Hackers to Intercept and Read Your Encrypted Messages”), in the sense that hackers can’t actually intercept and read WhatsApp messages. Normally the report of a security vulnerability includes a proof of concept (POC) for an exploit. There isn’t one here, because hackers haven’t been able to exploit it. If an activist saw this story, got scared of WhatsApp, and decided to use SMS or Telegram instead (especially without Telegram’s opt-in secret chats feature, which most people don’t use), their security would get weaker.
That doesn’t really refute the claim that this can be used as a backdoor, however. Since the backdoor is only usable by WhatsApp (or whoever controls them and their servers), a random researcher can’t really release a POC.
Disclaimer: I personally know nothing about this beyond the posts in this thread.
MDMA induces tolerance: redosing after the acute effects wear off requires dose escalation. It’s like benzos, heroin, or speed in that respect. At least it stops working eventually, so few people would compulsively redose every day, but I’ve seen people who did it every weekend and eventually had a very bad time. Antidepressants, on the other hand, are designed for daily use (and generally require sustained use to be effective). The dynamic where you want more, so you keep increasing your dose to fight tolerance until you end up worse off than you started, is less of a risk.
I don’t mean to attack MDMA. I think it’s great, and a lot of people could benefit from it. But I think this is a real issue if it’s easily available outside of a therapeutic context.
Anecdotally, the supplement NAC (N-acetylcysteine) reduces the tolerance that builds. Time also reduces it; I was initially told not to do MDMA more often than once every four months.
"On the joyous and satisfying occasion of this, our first successful journey to Mars, we think back not only on all those who died along the way, but also on the Slack clone that allowed us avert disaster on this mission..."
I do wonder why we want humans on Mars, given the huge amount of fossil fuel that must be burned on this planet to get us there.
We may find Mars is dry and arid when we get there, and so is Earth.
Somebody once said that Apple’s business model is charging a very high markup for flash memory (currently $50 per 64 GB to upgrade storage on an iPhone or MacBook). The margins on other parts are also high. The user-upgradeable Mac Pro provides an escape hatch from these expensive upgrades, so its entry price has to be much higher to make up for it. An affordable tower would cannibalize Mac Pro sales by giving businesses a cost-saving opportunity that’s too good to refuse.
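To put that markup in rough perspective, here’s a back-of-envelope sketch; the $50-per-64-GB figure comes from the comment above, but the commodity flash price is an assumed ballpark, not a quoted market figure:

```python
# Back-of-envelope: Apple's upgrade pricing vs. commodity flash.
# $50 per 64 GB is from the comment above; the commodity $/GB is
# an assumed ballpark for retail NAND, not a quoted figure.

apple_price_per_gb = 50.00 / 64       # ~$0.78/GB
commodity_price_per_gb = 0.10         # assumed ballpark, USD/GB

markup = apple_price_per_gb / commodity_price_per_gb
print(f"Apple: ${apple_price_per_gb:.2f}/GB vs. "
      f"commodity: ${commodity_price_per_gb:.2f}/GB, ~{markup:.0f}x markup")

# A 512 GB upgrade at Apple's rate:
print(f"512 GB upgrade: ${apple_price_per_gb * 512:.0f}")
```

Under those assumptions the markup works out to roughly 8x, and a 512 GB upgrade to about $400, which is the gap a user-upgradeable tower lets businesses route around.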
This is good analysis, but there's some other effect I'm unable to put my finger on.
The iPad is at least an order of magnitude better than comparable netbooks (though Chromebooks, depending on the manufacturer, can be competitive, thanks mainly to ChromeOS’s reduced footprint). So even though the margins are high, the perceived quality, regardless of raw benchmarks, is still worth something. It’s not just marketing to me to say the marriage of software and hardware is unique (Gruber’s observation that NSObject allocations are a lot faster on Apple Silicon, for instance).
Now that I think about it, Google is honestly the only other company playing by these rules, i.e. the Pixelbook, the Pixel phone, etc. But they’re much earlier in the evolution and have less upstream control over software (especially since Fuchsia seems to be a somewhat lower priority than before, though this is second-hand knowledge).
That somebody is probably only partially right, and only about part of one business unit within Apple. People tend to buy Apple hardware for the package deal, not for individual specs. At volume, those are much more valuable customers than the buy-a-single-computer crowd, who often talk about the modifications they want to make.
If an affiliate is actually doing the work of obtaining the customer to earn the fee, but the customer has an economic incentive to strip the affiliate attribution and claim the fee for themselves, then the affiliate isn’t getting paid. Maybe they’ll recommend another product instead, one that doesn’t incentivize users to break the affiliate link.
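To illustrate how trivial breaking attribution is: a minimal sketch, assuming attribution rides in a URL query parameter. The parameter names (`ref`, `tag`) are hypothetical examples, not any specific affiliate program’s scheme.

```python
# Sketch of a customer redirecting affiliate attribution to themselves.
# Assumes attribution is carried in a query parameter; the names
# "ref" and "tag" are hypothetical, not a real program's API.
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

AFFILIATE_PARAMS = {"ref", "tag"}  # assumed attribution parameters

def claim_attribution(url: str, my_id: str) -> str:
    """Rewrite any assumed affiliate parameters to point at my_id."""
    parts = urlsplit(url)
    query = [(k, my_id if k in AFFILIATE_PARAMS else v)
             for k, v in parse_qsl(parts.query)]
    return urlunsplit(parts._replace(query=urlencode(query)))

print(claim_attribution("https://example.com/product?id=42&ref=AFF123", "ME99"))
# https://example.com/product?id=42&ref=ME99
```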
> If you had a model trained on a large corpus of data from the pre-Civil War southern American states, it would have been deeply racist, and would even view black people as possible property. If you had one that was trained on data from the 1950s, it would be less racist but still problematic when viewed by people from today. Is there really something special about today that removes these kinds of concerns for a model trained on current data?
I think this argument applies not just to machine learning, but to learning in general. Any kind of knowledge-acquisition process is going to be biased by the environment in which it occurs. That goes not just for digital neural networks, but also for those in our human brains, operating on the same racist data the ML models are. If that means we shouldn’t do machine learning, it also means we shouldn’t do human learning either.
Of course, the preceding is absurd. A more reasonable take is that we should adjust the objective function of our learning processes to try to account for the effects of biases. We try to do that subjectively, as any decent person operating in a biased society should, but our ML models can do it more accurately. In fact, I’d argue that such techniques are necessary to more carefully analyze and build evidence describing the effects of those biases. They can provide insights that will even improve our ability to correct for biases in the real world.
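To make “adjust the objective function” concrete, here’s a minimal sketch: a standard task loss plus a demographic-parity penalty, weighted by a hyperparameter `lam`. The penalty form and the weighting are illustrative choices on my part, not a specific published method.

```python
import numpy as np

def fairness_adjusted_loss(y_pred, y_true, group, lam=1.0):
    """Binary cross-entropy plus a demographic-parity penalty.

    y_pred: predicted probabilities in (0, 1)
    y_true: binary labels
    group:  binary protected-group indicator
    lam:    fairness weight (illustrative hyperparameter)
    """
    eps = 1e-9
    # Standard binary cross-entropy: the task objective.
    ce = -np.mean(y_true * np.log(y_pred + eps)
                  + (1 - y_true) * np.log(1 - y_pred + eps))
    # Demographic-parity gap: difference in mean predicted score per group.
    gap = abs(y_pred[group == 1].mean() - y_pred[group == 0].mean())
    return ce + lam * gap

# Toy usage with random data:
rng = np.random.default_rng(0)
y_pred = rng.uniform(0.01, 0.99, 100)
y_true = rng.integers(0, 2, 100)
group = rng.integers(0, 2, 100)
print(fairness_adjusted_loss(y_pred, y_true, group, lam=0.5))
```

Tuning `lam` is exactly the subjective judgment call mentioned above, but here it’s explicit and measurable rather than implicit in a human’s head.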
I think you’re making a weaker version of this argument than you need to, because you’re framing it in terms that oppose the politics of half of the public, when you really only need to argue against a much more specific set of ideas.
For example:
> OK, basically she is pissed that the AI is picking up the normal language people use, and not the vocabulary that is in vogue in certain political circles. Basically censorship and forced speech.
I think a stronger version of this argument, one that can appeal more broadly, would be: an automated system that is constantly being trained on data from social media can pick up new trends in human speech more quickly than a manually programmed expert system. Furthermore, even if humans are constantly updating an expert system, the biases inherent in the composition of the company-hired set of experts will limit the representation of minority trends in the data. On the other hand, a model examining the entire data set can pick these up automatically. For example, I can easily get AI Dungeon (powered by GPT-3) to communicate using language and concepts that are only present in a minority community I’m part of, one I’ve rarely seen represented in literature.
When arguing against Timnit Gebru and their ilk, you don’t need to gratuitously hand them the support of liberals. We actually make up the majority of techies, so it’s a bad position to put yourself in if you want your cause to succeed.
> Furthermore, even if humans are constantly updating an expert system, the biases inherent in the composition of the company-hired set of experts will limit the representation of minority trends in the data.
Exactly this. Framing it in terms of the alternative options makes AI the more reasonable choice.
Google AI appears to have drawn unnecessary attention to itself over a relatively benign paper. Most of the arguments in the paper have already been discussed elsewhere and can be refuted in simple terms.
That really depends on the context you place this paper in. If you see it as critical race theory activism (which I do), then it’s not benign at all. On the contrary, it’s deeply damaging that Google has well-paid people who think they should push their own political agendas through an organization that underlies almost everyone in the West’s daily life.
If you see AI as a technological amplifier, it could backfire the other way around, i.e. the status quo is kept longer than necessary because it is so baked into the technology itself, not only into the people who wield it.
I think it is precisely for these reasons that this debate is so important; it is truly not a clear-cut improvement, in my opinion.
Who’s going to invest the capital and do the work of running a company that develops technology (e.g. hiring the tens of thousands of people required to fabricate 5 nm chips and build cell phones) if there is no way to become a billionaire off of it? I figure the people capable of that kind of thing would put all their efforts toward overthrowing the system instead, if those were the rules.
I buy it for basic research that a few people in a lab can do, but it seems hard to coordinate the efforts of tens of thousands of people that way. You need ruthless prioritization of competing initiatives, performance incentives for management, and other stuff that public servants and academics aren’t that big on.
I still fail to understand what the issue with having billions of dollars is. The best part is that these people ignore that money is just a stand-in for power, which they absolutely fucking hate, and that if you ban billionaires, people will still have more power than a billionaire via other mechanisms :)