TheEzEzz's comments | Hacker News

I wrote my own dynamic keyboard layout to optimize typing speed while procrastinating on my dissertation.

15 years later I'm still using it. My dissertation not so much.

Procrastination is (sometimes) awesome.


>I find that when someone's taking time to do something right in the present, they're a perfectionist with no ability to prioritize, whereas when someone took time to do something right in the past, they're a master artisan of great foresight.

-xkcd 974


Structured procrastination is highly underrated.


It is so underrated that I have been led to put off doing more structured procrastination until I have more time. If more people had told me how great it could be, I would be doing it now!


Dynamic keyboard layout?

What is that? The keys change place?


Probably using a QMK firmware-based keyboard where you can access different layers and shortcuts.

I'm using one right now (though mine runs off ZMK, which is similar but wireless): a split board with just 42 keys. The rest--numbers, symbols, function keys, etc.--are all under layers. The layout is dynamic because holding down different keys makes the layout 'change' as you do so. Holding down the left spacebar and pressing 'Z' sends 'F1' to the computer, while holding down another key on the right half turns my WER/SDF/XCV keys into a numpad, etc.
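For the curious, here's roughly what that looks like in firmware. This is an illustrative QMK-style sketch, not my actual keymap (the layer names, key positions, and LAYOUT macro are made up for a hypothetical small split board; ZMK uses a devicetree keymap instead of C, but the idea is the same):

    /* Illustrative QMK keymap fragment (hypothetical split board; the real
     * LAYOUT macro and key count depend on the specific keyboard). */
    #include QMK_KEYBOARD_H

    enum layers { BASE, FUN, NUM };   /* base letters, function keys, numpad */

    const uint16_t PROGMEM keymaps[][MATRIX_ROWS][MATRIX_COLS] = {
        [BASE] = LAYOUT(
            /* ... letter keys elided ... */
            LT(FUN, KC_SPC),  /* left space: space on tap, FUN layer while held  */
            LT(NUM, KC_SPC)   /* right space: space on tap, NUM layer while held */
        ),
        [FUN] = LAYOUT(
            /* bottom-left row becomes F1-F5 while the left space is held */
            KC_F1, KC_F2, KC_F3, KC_F4, KC_F5,
            /* ... everything else stays transparent ... */
            KC_TRNS, KC_TRNS
        ),
        [NUM] = LAYOUT(
            /* WER / SDF / XCV become a numpad while the layer key is held */
            KC_P7, KC_P8, KC_P9,
            KC_P4, KC_P5, KC_P6,
            KC_P1, KC_P2, KC_P3,
            /* ... */
            KC_TRNS, KC_TRNS
        ),
    };

In ZMK the same idea is written as &lt (layer-tap) bindings in the .keymap devicetree file.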


Side effect, no one knows your passwords, even if they watch you type!


I have never heard of "left spacebar" before. Sounds very interesting. So there can be two different spacebars on your keyboard?


Some ergonomic keyboards are split between the two hands (usually attached, but not always) and have a spacebar key for each hand.

But I always thought both keys sent the same signal to the computer.


Yes, both keys send the same key code to the computer; however, pabloescobyte said they're using ZMK, so the left/right spacebar distinction is happening at the level of the keyboard controller.


Can you share this keyboard layout with us? Sounds amazing


Sounds like custom hot-keys in a video game


I could easily see this going the other way. Lifelong single people develop strong social networks that they keep investing in into old age. Married couples (and especially those with children!) invest less time in their social network; in old age they then have many fewer friends once their children leave and their spouse passes.

(I'm not sure whether this is true, but it seems plausible. I agree with the author that we should get better data to resolve these questions.)


Marriage is sort of like adoption because you're given a new family. It's different from adoption because you get to keep your existing family.

Successful married couples are investing in an internal social network: offspring and extended family. If this network can meet their needs then they have less reason to go outside of it.

Childbearing families are going to encounter other parents and their social network will change accordingly. Play dates, day care, school parents and extracurricular groups will be in their orbit now.

Conversely, single people may rely on their own families for social support, and they may not need to go outside of that, either. Woe betide the single person whose family is unsupportive, because friends and acquaintances are a faint substitute. Such a person could develop ties with their employer, professional orgs, non-profits and community to stay sane and healthy. Or they don't--there are plenty of dysfunctional singles who don't need a marriage in order to suffer and fail at life.


You're basically taking the model "off policy" when you bias the decoder, which can definitely make weird things happen.


LeCun is very simply wrong in his argument here. His proof requires that all decoded tokens are conditionally independent, or at least that the chance of a wrong next token is independent of what came before. This is not the case.

Intuitively, some tokens are harder than others. There may be "crux" tokens in an output, after which the remaining tokens are substantially easier. It's also possible to recover from an incorrect token auto-regressively, by outputting tokens like "actually no..."
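For reference, the divergence argument being disputed is usually summarized roughly like this (my paraphrase, with e standing for a fixed per-token error rate and n for output length):

    % Claimed divergence, assuming each token is wrong independently
    % with a fixed probability e:
    \[
      P(\text{sequence correct})
        = \prod_{t=1}^{n} P(\text{token}_t\ \text{correct})
        \approx (1 - e)^{n} \longrightarrow 0 \quad (n \to \infty).
    \]
    % In an autoregressive model the factors are conditional,
    \[
      P(\text{token}_t\ \text{correct} \mid \text{tokens}_{<t}),
    \]
    % and they are neither constant nor independent: "crux" tokens can be
    % hard while the rest are easy, and later tokens can repair earlier
    % mistakes, so the product need not shrink geometrically.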


Super cool. When I think about accelerating teams while maintaining quality/culture, I think about the adage "if you want someone to do something, make it easy."

Maintaining great READMEs, documentation, onboarding docs, etc, is a lot of work. If Auto Wiki can make this substantially easier, then I think it could flip the calculus and make it much more common for teams to invest in these artifacts. Especially for the millions of internal, unloved repos that actually hold an org together.


Thank you! We like the analogy of dehydrating knowledge that can be used (hydrated) later. Beyond unloved repos, we'd even argue that broader organizational knowledge that seems to have been lost to history--like Roman concrete or how to precisely build the Saturn V--could potentially be "stored" using AI.


A good analogy for AI risk. We'd never visited the Moon before, or any other celestial object. The risk analysis was not "we've never seen life from a foreign celestial object cause problems on Earth, therefore we aren't worried." The risk analysis was also not "let's never go to the Moon to be _extra_ safe, it's just not worth it."

The analysis was instead "with various methods we can be reasonably confident the Moon is sterile, but the risk of getting this wrong is very high, so we're going to be extra careful just in case." Pressing forward while investing in multiple layers of addressing risk.


And, what OP downplays: NASA not taking it seriously, the effort having many serious fatal flaws, and then covering all those flaws up while assuring the public everything was going great: https://www.nytimes.com/2023/06/09/science/nasa-moon-quarant... https://www.journals.uchicago.edu/doi/abs/10.1086/724888

Something to think about: even if there are AI 'warning shots', why do you think anyone will be allowed to hear them?


Good question. It perhaps depends on the type of warning shot. Plenty of media has an anti-tech bent and will publicize warning shots if they see them -- and they do this already with near-term risks, such as facial recognition.

If the warning shot is from an internal red team, then there's a higher likelihood that it isn't reported. To address that, I think we need to continue to improve the culture around safety, so that we increase the odds that a person on or close to that red team blows the whistle if we're stepping toward undisclosed disaster.

I think the bigger risk isn't that we don't hear the warning shots though. It's that we don't get the warning shots, or we get them far too late. Or, perhaps more likely, we get them but are already set on some inexorable path due to competitive pressure. And a million other "or's".


You mention media publicizing warning shots. Does that really work at all?

Most of the reporting I see is half-dismissive: [facial recognition is a risk but what are you gonna do? it can't be bad to fight crime.] This goes for everything. And it rarely results in effective control.

Internal practice in biology or chemistry labs kinda does - but takes a long time, and then accidents still happen.

NTSB accident investigations: is there another field where each accident is taken as seriously as it is there? And step-wise improvement does not sound like a good solution for self-reproducing agents.


For example with facial recognition, see this outcome with Rite Aid being banned from using it after a "warning shot" https://techcrunch.com/2023/12/20/rite-aid-facial-recognitio...


Great article on the Apollo mission return "quarantine". One lesson is that it got little priority (the vehicle itself was vented to the air and ocean water) and little effort: lots of things in the lab were not tested or designed sufficiently (broken gloves/gloveboxes, fire procedures that involved breaching containment...). Another lesson is that this was apparently not tested or wargamed anywhere near enough -- no test run? A third is that of course it didn't go perfectly, given the first two points and the fact that it was the first run. In hindsight, of course it would fail.

That argues for at least taking the idea of containment (for AI or Mars samples) more seriously. But it also argues that it will (of course) not be taken seriously enough -- plus amateurs won't take things seriously either. So we should take it even more seriously because of this prior experience.

Science is used to "fair warnings" (screwdriver criticality experiments, Marie Curie, now lunar samples, but yes also smallpox ... plenty of stories) - but all of these were minor: a few persons died, the rest learned. And the risk for a sufficient AI is not in the same scale. For that one, we don't have much experience. Comparable might be high containment pathogen labs maybe? - with plenty of problems themselves; and the difficulty of cleaning computers after an intrusion (proper procedure being a clean re-install - not possible for an AI leak.)


NASA had an easy win putting the astronauts in quarantine; there is no such easy win for current AI research. You can whistle-blow as much as you want, but AGI will be worked on until it is real, regardless of legislation -- unless that legislation covers all countries, which is impossible.



Not a great analogy. Today we have all kinds of profit-driven companies "going to the moon" without thinking too hard about the risks. There is not, and practically can't be, a central safety effort that has more effect than releasing reports. No one is enforcing quarantine.

If there was life on the moon in an analogous scenario, it would be a matter of a few trips before it was loose on earth.


Yes, but that's today. When the moon landing initially happened, nobody had ever been to another celestial body before, whereas now we have lots more experience visiting them and sampling their atmospheres and surfaces.

Nobody's ever created AI before, so we're in a similar situation in that nobody has firsthand experience of what to expect.


The specific part of the analogy that breaks down is that nobody actually knows if, when, or how we will ever create AGI. So all safety efforts are necessarily speculative (because the field itself is speculative).

Like, if the scientists working on the moon landing didn't even know yet what the moon was made of, nor whether they would be getting to the moon by slingshot, by elevator, by rocket, by wormhole, or by some other yet unknown means, it would be very hard to make any meaningful proposal for how we would stay safe once we did get there.


Oh definitely, that part of the analogy works fine.


Sounds like a great analogy?


Does it? Do you think "pressing on" is the optimal course in both scenarios for the same reason?


In both cases a central safety effort seems nearly impossible. E.g. trying to enforce international AI risk cooperation via air strikes against data centers [1] can easily be avoided by defecting countries building supercomputers underground.

With the moon bugs this wasn't a big problem, as they were so unlikely. But for AI the risk seems quite large to me.

[1] https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-no...


Right, but the original comment is trying to draw comfort from the actual Apollo scenario where central enforcement very much happened, not from my modified scenario. I think we're on roughly the same page.


> so we're going to be extra careful just in case

If we're going with that analogy, the Moon is roughly simultaneously visited by many private companies, each bringing samples back -- some paying lip service ("we'll totally be careful"), some not.

Continuing with that analogy, there are other planets, moons, and solar systems with perhaps a bigger chance of finding life. The laissez-faire approach to bringing samples back continues, now strengthened by "see, we visited the Moon, brought samples back, and we still live!"


I agree with you, but to be fair:

- The worst case worry about AI is a much bigger problem than the worst case worry about moon life. (IMHO)

- With the Moon we had a good idea of how to mitigate the risks just to be extra safe. With AI I believe we don't have any clue how to do containment / alignment, or whether it's even possible. What is currently being done on the alignment front (e.g. GPT refusing to write porn stories or scam emails) has absolutely nothing to do with what worries some people about superintelligence.


I agree -- the risks are bigger, the rewards larger, the variance much higher, and the theories much less mature.

But what's striking to me as the biggest difference is the seeming lack of ideological battles in this Moon story. There were differences of opinion on how much precaution to take, how much money to spend, how to make trade offs that may affect the safety of the astronauts, etc. But there's no mention of a vocal ideological group that stands outright opposed to those worried about risks -- or a group that stands opposed to the lunar missions entirely. They didn't politicize the issue and demonize their opponents.

Maybe what we're seeing with the AI risk discussion is just the outcome of social media. The most extreme voices are also the loudest. But we desperately need to recapture a culture of earnest discussion, collaboration, and sanity. We need every builder and every regulator thinking holistically about the risks and the rewards. And we need to think from first principles. This new journey and its outcomes will almost surely be different in unexpected ways.


You are completely right, and your description of the situation screams social media as the root cause of the difference between then and now. Maybe 'social media' in a generic sense, where any discussion board counts.


No, the worst case worry about moon life is the total extinction of all life on earth. It's no better than AI.


Again, devil's advocate, but the people worried about AI (like Yudkowsky) are absolutely worried about it killing all humans. You can read more about the specifics on LessWrong.

With moon life I presume the worst case is some infectious and fatal disease that's difficult to contain?

The first one sounds like a bigger problem to me, but maybe it's not a discussion worth having. So, fair enough.


Skynet will only nuke us after the AI safety crowd has thoroughly convinced the military of how supremely dangerous and capable AI is. AI on its own seems pretty benign: keep security vulnerabilities patched and be skeptical of what you read on the internet.

I honestly believe this pop-sci-fi view we have of AI is probably the most dangerous part: it gives certain people (like those in weapons procurement) dangerous levels of confidence in something that doesn't provide consistent and predictable results. When the first AI cruise missile blows up some kids because it hallucinated them as a threat, it won't be because AI is so dangerous; it will be because of the overconfidence of the designers. Its threat to humanity is directly correlated with the responsibility we delegate to it.


Isn't total extinction of all life on Earth also the worst case worry about AI? Anyway, both seem highly unlikely, which is why we shouldn't compare worst or best scenarios, but rather real, more probable risks, i.e. AI being used to develop advanced weapons. In that regard I'd say AI is worse, but it's mostly a matter of opinion, really.


Right, but it's pretty obvious that the risk of something that's already here destroying humanity / civilization / the planet is far greater than the risk of some hypothetical thing that may have been waiting to be brought back from the Moon doing so. Both sides of the equation matter here.


Maybe we can just EMP ourselves / turn off the grid for a day.


I wonder if a similar approach was taken for the internet/www. Google? Did anyone worry about PageRank's threat to life? Maybe PageRank will turn out to have been the human nemesis after all... only on a time frame of hundreds of years.


Well, extra careful, but we've still got to beat the Russians. We're not going to not beat the Russians over this, so figure it out.


That applies to the AI example as well, just add a mention of China.


Not really. The model for AGI is a human by definition. We have a lot of those.


That's not even close to true. Humans don't have the ability to exponentially amplify their own intelligence. It's not too farfetched to imagine that AGI just might have such a capability.


I have to disagree completely here. In the case of going to the moon, the most reasonable prior for "astronauts pick something up that then returns with them intact and is able to survive in highly oxidizing atmosphere" should be near zero. The prior for "we bring something to the Moon that somehow contaminates the place" should be significantly higher than that, yet still very small. This is, of course, taking into account that we didn't know how hardy tardigrades and some of the various types of extremophiles could be then. But, IMO, I still don't think that should raise the risk estimate of bringing anything back very much, nor should it raise the risk estimate for contaminating the place to anything approaching that of any of the very real possibilities for which NASA literally had 8+ contingency plans. And I say all that even while factoring in that the potential impact of bringing something back that would be able to survive could be the destruction of humanity, destruction of Earth's biosphere (and all of humanity with it), or any of a number of other existential risk scenarios.

With AI, those probabilities are all flipped on their heads. The reason we know of some of the risks, e.g. the risk of deep fakes being used as tools for fraud, is because they have already happened. That one single scenario alone having already come to fruition takes whatever anyone should have for a prior probability of said risk and shoots it straight up to 100%. And, that alone is a key difference between AI today and lunar or terrestrial contamination by alien life in the 1960s.

Let us not also forget that there are many, many other risk scenarios than deepfakes being used for fraudulent purposes. Much as I hate to reference Rumsfeld on this, there truly are "unknown unknowns" here, and we have to take that seriously. And then there are the middle ground scenarios, such as the possibility of severe and lasting economic disruption, to the point where capitalism might not be able to function as it has for the past several centuries. I truly don't believe that any foreseeable "internal" risk[0] could cause capitalism to completely stop working forever, unless we just run out of stuff to dig up out of the dirt, but AI certainly could cause multiple decades of disruption, which would be nearly as bad for most people alive today.

I'm gonna cut myself off there, because I think this is getting a little ponderous, but also because I think my point is made now: biotic contamination, whether forward or reverse, involved a sum of a lot of hypotheticals with very low probabilities, whereas AI risk involves a sum of some certainties, a few potential (though perhaps low probability) existential risks, and also an indeterminate number of unknown risks of unknown probability. It seems pretty clear once you crunch it all out that AI is certainly the greater threat.

---

[0] Meaning one that originates within capitalism itself, like AI, rather than one that originates outside of capitalism but puts pressure on the system, like climate change.


> the most reasonable prior "astronauts pick something up that then returns with them intact and is able to survive in highly oxidizing atmosphere" should be near zero

The Soviets had active bioweapons and espionage programs. In a MAD world and with geopolitical dominance at stake, it’s not unreasonable to take precautions against something planted in the lunar module.

Given the demonstrated capabilities and incentives of the actors involved vs. hypothetical AI manifestations, I think it’s way more reasonable to consider moon bugs the greater threat.


Eh. AI in its current buzzword form has nothing to do with superintelligence. Superintelligence is also overrated. What about a dumb fuck bot that just tries to be bad with social engineering to steal money? Frankly I think it's inconceivable that we somehow get "superintelligent" bots acting with agency and destroying the world before we get "dumbass bots" acting at scale to make friends and then swindle them out of their dollars.

Social interaction with agency is near and is a problem, but not a doomsday problem.

Hell, a lot of ChatGPT's apparent impotence comes from the fact that it just responds to prompts. A VERY light-touch effort to make it actively conversational, speaking at its own cadence, would feel very different.


A great analogy indeed - AI, like moon life, turned out to be a false alarm.


You can easily harm people with AI. I can hypothetically harm people with AI today (fake news, etc.). I can't harm people with fake moon life. AI already poses a greater threat to humanity than moon life ever did.


You can harm people with a feather. AI is a non-issue; the only issue is the people using it, and thus far it seems like there are too many sociopaths using it, willing to steal people's property just to generate images of sexualised animals and dubious-quality code.


> You can harm people with a feather.

Different things are different.


Indeed - AI is different from intelligence, and as such it can't harm things by itself. Like a feather, or a rock.


You mean Skynet/Terminator/Wintermute risk, it seems, which doesn't exist and which we have no pathway to. The analogy doesn't hold for matrix multiplication. It might be fun to pontificate about what could happen if we had something that is, for now, effectively magic, but it's just a philosophy thought experiment with no bearing on reality. The real danger would be policy makers who don't understand the difference between current technology and philosophy class, imposing silly rules based on their confusion.


Supernovae are powerful enough that even a star in a different solar system going supernova can kill you, if it's a "nearby" system. But I believe there aren't any stars close enough to us that will go supernova any time soon.


A good premise for a sci-fi series: In the distant future, a large fleet of Earth's best and brightest travels the stars while desperately trying to invent Earth's last-ditch effort to save its 12 billion inhabitants from a soon-to-be-cataclysmic near-Earth supernova: FTL travel. Just as decades of progress finally near fruition, the fleet permanently loses contact with Earth, humanity's home solar system's remnants lost to the beautiful nightmare... The fleet of 80,000 now works to save itself, the last of humanity.


Sounds like Battlestar Galactica without the Cylons.

Perhaps the story could have Cylons in it, but here they're allies of the humans; after the supernova, some religious leader convinces most of them that their god wants them to destroy the humans.


Sounds like a crossover of The Age of Supernova and the Tri-body Problem. In the former, supernova radiation kills everyone on Earth except kids under 13.


Every star that is not our sun is in a different solar system. Could you have meant to say a star in a different galaxy could toast us?

I think, ballpark, a SN 500 light years out is considered safe (e.g. Betelgeuse). Our Milky Way galaxy is thought to be roughly 100,000 light years in diameter, so we should be safe from all but those within about a hundredth of the radius of our galaxy from us.
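Quick sanity check on those numbers (taking 500 ly as the safe distance and 100,000 ly as the diameter):

    \[
      \frac{500\ \text{ly}}{100{,}000\ \text{ly}/2}
        = \frac{500}{50{,}000}
        = \frac{1}{100}\ \text{of the galactic radius.}
    \]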

space is big ...


I don't think they were using the solar system, as in our solar system, as their frame of reference.


It's always the one you don't expect that gets you.


Super cool, curious to see where you take this! I did some work on GPGPU for agent simulations for an RTS years ago ( https://www.youtube.com/watch?v=P4fKJIrv0J8 ). Doing things like pathfinding on the GPU gets tricky, especially taking into account how agents affect paths of other agents. Happy to jam anytime if you're brainstorming applications.


This is awesome. Happy to know that you're doing the same.


I watched this yesterday and got the feeling something big was happening. At one point he says "This is actually a very inconvenient time for me [to be here]." At the end of the session when they're wrapping up, he begins to stand up to leave the instant that the moderator starts wrapping up.

Anyway, I suppose we're reading tea leaves and engaging in palace intrigue. Back to building.


After the APEC event he went to a Burning Man event in Oakland where he spoke too...


What time stamp? I searched and could not find it.


I'm sympathetic to your take on how overly grandiose the language is, but I also think you're being too harsh here.

The idea that the universe is discrete/computational is a fine idea, but underspecified and useless on its own. There's an infinite array of computable rules to choose from. But the fact that with a few assumptions on the rules you can then limit to both GR and QM is very non-trivial and, in my opinion, pretty surprising.

To your point, does it prove that this is _the_ correct theory? Definitely not, and metering language around the claims is important. Still, the result feels novel, surprising, and worthy of further investigation, alongside the other popular models being explored. I think it's a shame that Wolfram's demeanor turns people off from the work.


> But the fact that with a few assumptions on the rules you can then limit to both GR and QM is very non-trivial and, in my opinion, pretty surprising.

Perhaps you're not familiar with the literature here, but the GP isn't exaggerating: using e.g. Noether's theorem you can derive the expected conservation laws from very simple symmetry principles. This means that any model with these symmetries will produce these behaviours.

If you make up a new model of Newtonian mechanics that doesn't depend explicitly on time, so that your laws are the same tomorrow as today, then it's proven that such a model will conserve "energy". You could point at this as an indication of the correctness of your theory, but it's really unavoidable. You can play a similar trick for the fundamental forces if you have the patience to work through the derivation.
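For concreteness, the time-translation case really is just a few lines (standard textbook sketch; one degree of freedom, my notation):

    % Assume L = L(q, qdot) with no explicit time dependence. Along a
    % trajectory satisfying the Euler-Lagrange equation
    % d/dt (dL/dqdot) = dL/dq, the chain rule gives
    \[
      \frac{dL}{dt}
        = \frac{\partial L}{\partial q}\,\dot{q}
          + \frac{\partial L}{\partial \dot{q}}\,\ddot{q}
        = \frac{d}{dt}\!\left(\frac{\partial L}{\partial \dot{q}}\right)\dot{q}
          + \frac{\partial L}{\partial \dot{q}}\,\ddot{q}
        = \frac{d}{dt}\!\left(\frac{\partial L}{\partial \dot{q}}\,\dot{q}\right),
    \]
    % so the quantity
    \[
      E = \frac{\partial L}{\partial \dot{q}}\,\dot{q} - L
      \qquad\text{satisfies}\qquad
      \frac{dE}{dt} = 0.
    \]
    % Energy conservation falls out of the symmetry alone, independent of
    % the details of the dynamics, which is why it can't count as evidence
    % for any particular model.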

A better test of these models is whether they're predictive, and I haven't seen such a result about this CA-physics outside of Wolfram's blog.

