The "14 hours" is a huge exaggeration, but that's to be expected; every advertised battery life I've ever seen is a huge exaggeration. But I have to strongly vouch for this one; it's the first laptop I've seen that actually can actually last more than seven whole hours with Linux. Every other laptop I've tried lasts under four hours, even if it advertises twenty.
Well, laptops usually don't come from the OEM with Linux, so if you slap Ubuntu on there and the power management stinks, that's on you.
Laptops with a practical 10+ hours of battery life usually come with Linux from the factory. E.g. the Google Pixelbook Go, which gets about 10-12 hours on a battery half the size of the one we're discussing. System76 has mostly just thrown a gigantic battery at the problem.
Echoing what David (kanetw) already said, but: super honored to have you as one of our preorder backers. If you have any questions/feedback about your headset or our project, please don't hesitate to reach out.
Isn't the reason this typically isn't done that a good VR experience depends on very low latency, and it's very hard to get ultra-low latency over a wireless connection?
Yeah, pretty much. I think you can mitigate it a bit with clever tricks, but it's a lot of extra effort we'd rather not deal with this early on.
Also, bandwidth is the big blocker. Compression adds latency and reduces quality, and the amount of data we're pushing is really high (27 Gbps uncompressed).
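For a rough sense of where a number like that comes from, here's the back-of-the-envelope math; the per-eye resolution, refresh rate, and color depth below are illustrative assumptions, not a spec sheet:

    # Uncompressed video bandwidth estimate (all panel numbers assumed for illustration).
    width = height = 2448   # pixels per eye (assumed)
    eyes = 2
    bits_per_pixel = 24     # 8-bit RGB (assumed)
    refresh_hz = 90         # assumed
    gbps = width * height * eyes * bits_per_pixel * refresh_hz / 1e9
    print(f"{gbps:.1f} Gbps")  # ~25.9 Gbps raw; link overhead pushes it toward 27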
This might be because it's easier to write a quick putdown than a thoughtful comment, so the first comments tend to be the quick putdowns. Thankfully the more thoughtful comments rise to the top, but that takes time.
That's exactly the contrarian dynamic; your description is spot on. It's an artifact of the commenting process, not a representative sample of the community, and it happens in many threads. Maybe most threads.
What fascinates me most is that the errors are very "human-like". If you gave me multi-digit multiplication and addition problems like that, I would frequently produce similar results: most digits right, but a mistake in one or a few of them.
When I do mental arithmetic, my brain frequently tokenizes numbers into digit pairs or triples when I can recognize pairs and triples that have specific properties.
"224" is actually a really nice object to recognize because it's 7 * 32, and if you can recognize other multiples of 32 it frequently gives you shortcuts. It's less useful for addition because you would need to get lucky and get a multiple of 32 (or 7) on both sides, but for multiplication and division it helps a lot.
Sure - I think we all learn tricks like that. But you learned that pattern of tokenization; it wasn't arbitrarily foisted on you.
What GPTs have to deal with is more like, you are fed an arithmetic problem via colored slips of paper, and you just have to remember that this particular shade of chartreuse means "224", which you happen to have memorized equals 7 * 32, etc., but then the next slip of paper is off-white which means "1", and now you have to mentally shift everything ...
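If you're curious what a real vocabulary actually does, it's easy to peek with tiktoken; the exact splits depend on the model's vocabulary, so treat the output as illustrative:

    # pip install tiktoken
    # Show how a BPE vocabulary chunks numbers; splits vary by model/vocabulary.
    import tiktoken

    enc = tiktoken.get_encoding("gpt2")
    for s in ["224", "7 * 32", "12345"]:
        ids = enc.encode(s)
        print(s, "->", [enc.decode([i]) for i in ids])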
The tokens in most GPT models are small like this, but they still 'learn tokenization' very similar to what you just described. It's part of the multi-headed attention.
The model learns what level of detail in the tokenization is needed for a given task. For example, if you're not parsing the problem to actually do the computation, you don't pay attention to the finer tokenization; if you do need that level of detail, you use the finer groupings. Some of the difficulty a few years ago was extending these models to handle longer contexts (or variable contexts that can grow very long), but that also seems close to solved now.
So you're not exactly giving much insight with this observation.
I think part of why the tokenization is a problem for math here is that it doesn't seem to be carrying overflow into the left token. Anyway, I haven't worked with GPT in enough detail to do a deeper analysis than that hunch, so take my comment with a couple of grains of salt.
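For what it's worth, the failure mode that hunch predicts is easy to simulate; this is toy code, not an actual probe of the model:

    # What addition looks like if each digit pair is summed mod 10 and the carry is dropped.
    a, b = 52734, 41389
    no_carry = int("".join(str((int(x) + int(y)) % 10)
                           for x, y in zip(str(a), str(b))))
    print(a + b)     # 94123 (correct)
    print(no_carry)  # 93013 (every wrong digit sits exactly where a carry was dropped)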
In this case it's the bridge that's fairly centralized. There weren't any hard forks or other manipulation of the underlying blockchains (except for sending transactions on them).
Exactly. Vitalik Buterin even shared his concerns about the fundamental security limits of cross-chain bridges earlier this year:
> For example, suppose that you have 100 ETH on Ethereum, and Ethereum gets 51% attacked, so some transactions get censored and/or reverted. No matter what happens, you still have your 100 ETH. Even a 51% attacker cannot propose a block that takes away your ETH, because such a block would violate the protocol rules and so it would get rejected by the network
> Now, imagine what happens if you move 100 ETH onto a bridge on Solana to get 100 Solana-WETH, and then Ethereum gets 51% attacked. The attacker deposited a bunch of their own ETH into Solana-WETH and then reverted that transaction on the Ethereum side as soon as the Solana side confirmed it. The Solana-WETH contract is now no longer fully backed, and perhaps your 100 Solana-WETH is now only worth 60 ETH. Even if there's a perfect ZK-SNARK-based bridge that fully validates consensus, it's still vulnerable to theft through 51% attacks like this.
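The mechanics are easy to see with toy numbers (mine, not Vitalik's, chosen to reproduce his 60% figure):

    # Toy model of a bridge after a reverted deposit; all amounts illustrative.
    honest_locked = 150   # ETH locked by honest users, still on Ethereum
    attacker_mint = 100   # WETH the attacker minted, then reverted the backing deposit
    total_weth = honest_locked + attacker_mint  # 250 WETH circulating on Solana
    backing = honest_locked / total_weth        # 0.6 ETH per WETH
    print(f"your 100 WETH is now backed by ~{100 * backing:.0f} ETH")  # ~60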
I think I was referring there more to all the laws _other_ than GDPR (e.g. the more data-nationalist stuff) that nevertheless end up having a similar effect.
Glen Weyl (the economist who has done a lot of work on quadratic voting, mentioned in the article) has also written and talked extensively about Harberger taxes and quadratic funding; the former is a fairly deep change to how property rights work, and the latter is a way of democratically allocating funds to public projects. I would definitely encourage reading up on both of those!
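If you want the one-formula version of quadratic funding: a project's total funding is the square of the sum of the square roots of its individual contributions, so many small donors attract far more matching than a single large one. A minimal sketch:

    # Quadratic funding (Buterin/Hitzig/Weyl): total = (sum of sqrt(c_i))^2
    from math import sqrt

    def qf_total(contributions):
        return sum(sqrt(c) for c in contributions) ** 2

    print(qf_total([1] * 100))  # 100 donors at $1 each -> $10,000 total
    print(qf_total([100]))      # one donor at $100     -> $100 total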
In democracies with reasonably proportional representation (PR), improving how to choose from N options, or how to allocate M resources, is secondary to the problem of how to select and parametrize N and M in the first place.
In PR democracies, mechanism design should focus more on improving participation and deliberation before the final vote.
Deliberation is the process of selecting and articulating options and figuring out problems. Mechanism design for deliberation can emphasize cooperation, dialogue, creativity, and reason, as opposed to the exercise of power that is voting among given choices.
https://vitalik.ca/general/2019/05/12/fft.html
Ctrl+F for "binary fields".