Hacker News | new | past | comments | ask | show | jobs | submit | __tmk__'s comments

Interesting, I've also heard the exact opposite opinion [0] where Zulip’s non-standard approach is seen as its main strength.

[0]: https://chaos.social/@yorgos/115931944888149528


I would think there wouldn't be much of a difference because the smallest unit you can really work with on modern computers is the byte. And whether you use 8 bits to encode a byte (with 256 possible values) or 5 trits (with 243 possible values), shouldn't really matter?


3 fewer lanes for the same computation. FWIW, 8 bits is the addressable unit; computers work with 64 bits today and actually mask off computation to work with 8 bits. A ternary computer equivalent would have 31 trits (the difference is exponential: many more bits only add a few trits). That means 31 conductors for the signal and 31 adders in the ALU rather than 64. The whole CPU could be smaller, with everything packed closer together, enabling lower power and faster clock rates in general. Of course, ternary computers have more states, and the voltage difference between the highest and lowest levels has to be larger to allow differentiation, which in turn causes more leakage, which is terrible. But bits vs. trits itself really does matter.


> A ternary computer equivalent would have 31trits

I think you mean 41, not 31 (3^31 is about a factor of 30000 away from 2^64).

The difference in the number of trits/bits is not exponential, it's linear with a factor of log(2)/log(3) (so about 0.63 trits for every bit, or conversely 1.58 bits for every trit).
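For what it's worth, the arithmetic is easy to check in plain Python:

```python
import math

# Smallest n with 3^n >= 2^64, i.e. n = ceil(64 * log(2) / log(3)).
trits = math.ceil(64 * math.log(2) / math.log(3))
print(trits)  # 41, not 31

# The per-digit conversion factor is linear, not exponential:
print(math.log(2) / math.log(3))  # ~0.631 trits per bit
print(math.log(3) / math.log(2))  # ~1.585 bits per trit
```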

> ternary computers have more states and the voltage differences between highest and lowest has to be higher to allow differentiation and then this causes more leakage which is terrible

Yes -- everything else being equal, with 3 states you'd need double the voltage between the lowest and highest states when compared to 2 states.

Also, we spent the last 50 years or so optimizing building MOSFETs (and their derivatives) for 2 states. Adding a new constraint of having a separate stable state *between* two voltage levels is another ball game entirely, especially at GHz frequencies.


> because the smallest unit you can really work with on modern computers is the byte

Absolutely not, look e.g. at all the SIMD programming where bit manipulation is paramount.


There's a project for using Swift to write GNOME applications, which is fascinating to me: https://github.com/AparokshaUI/adwaita-swift

I wish them success, but realistically I don't see it happening.


sqrt(2*(1-cos_sim(x,y))) is a valid distance metric for L2 normalized vectors, though -- it's just the Euclidean distance between them. (Raw 1-cos_sim can violate the triangle inequality even on the unit sphere.)
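A quick numerical check (plain Python, with a hand-rolled cos_sim so the snippet is self-contained): for unit vectors, Euclidean distance and cosine similarity are tied by ||x - y||^2 = 2 - 2*cos_sim(x, y), while raw 1 - cos_sim can break the triangle inequality:

```python
import math

def cos_sim(x, y):
    dot = sum(a * b for a, b in zip(x, y))
    return dot / (math.hypot(*x) * math.hypot(*y))

def unit(theta):
    # a unit vector on the circle, for easy examples
    return (math.cos(theta), math.sin(theta))

# For unit vectors: ||x - y|| == sqrt(2 - 2*cos_sim(x, y))
x, y = unit(0.0), unit(1.0)
assert abs(math.dist(x, y) - math.sqrt(2 - 2 * cos_sim(x, y))) < 1e-12

# But raw 1 - cos_sim violates the triangle inequality:
a, b, c = unit(0.0), unit(math.pi / 6), unit(math.pi / 3)
d = lambda u, v: 1 - cos_sim(u, v)
assert d(a, c) > d(a, b) + d(b, c)  # ~0.5 > ~0.268
```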


That's not very actionable advice. I've found this article [0] from Terence Tao very insightful:

  [A]ctual solutions to a major problem tend to be arrived at by a process more like the following (often involving several mathematicians over a period of years or decades, with many of the intermediate steps described here being significant publishable papers in their own right):

  1. Isolate a toy model case x of major problem X.
  2. Solve model case x using method A.
  3. Try using method A to solve the full problem X.
  4. This does not succeed, but method A can be extended to handle a few more model cases of X, such as x’ and x”.
  5. Eventually, it is realised that method A relies crucially on a property P being true; this property is known for x, x’, and x”, thus explaining the current progress so far.
  6. Conjecture that P is true for all instances of problem X.
  7. Discover a family of counterexamples y, y’, y”, … to this conjecture. This shows that either method A has to be adapted to avoid reliance on P, or that a new method is needed.
  8. Take the simplest counterexample y in this family, and try to prove X for this special case. Meanwhile, try to see whether method A can work in the absence of P.
  
  (... 15 more steps)
[0]: https://terrytao.wordpress.com/career-advice/be-sceptical-of...


Polya's "How to Solve It" might also be worth a look.

Summary: https://www.math.utah.edu/~alfeld/math/polya.html


I loved this book and respect the author, but to be honest, for anyone who doesn't already know: it is geared towards high-school-level mathematics. Still a great book, though.


I read this book while I was studying mathematics as an undergraduate. I found the information in it to be very helpful. The math is lower level, but the problem solving techniques that the author presents generalize very nicely to upper level mathematics. I’m not sure that I’d say that it’s geared toward high school students. I think that it’s geared toward beginner math students.


It uses high school math for its examples, which is great because most readers will already be familiar with the problems.

You can apply the techniques anywhere, though; I think the book is much more philosophy than math.


Overheard at the watercooler: The only way a CS guy knows how to prove an algorithm works/does not work is to find counterexamples.


My real analysis instructor drily pointed out “You really like proof by contradiction huh” and my defense was that my main life skill is spotting why plausible-seeming things don’t work correctly


Seems hard to prove that an algorithm works by finding counterexamples.


I think most bugs are the result of someone's attempt to do just that. As is test-driven development I suppose.

Then again proving code works isn't everything either. There's a reason Knuth once stated "Beware of bugs in the above code; I have only proved it correct, not tried it."

Type checking is a nice intermediate, though not all languages allow all properties you care about to be encoded in types.
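As a small Python sketch of that (the names UserId and parse_user_id are illustrative, not from any real codebase): a NewType lets a checker such as mypy track "already validated" as a property of a value, though richer invariants can't be expressed this way:

```python
from typing import NewType

# A distinct type for validated input: parse_user_id is the only place
# that constructs a UserId, so a checker flags any raw int sneaking in.
UserId = NewType("UserId", int)

def parse_user_id(raw: str) -> UserId:
    n = int(raw)
    if n <= 0:
        raise ValueError("user ids are positive")
    return UserId(n)

def load_user(uid: UserId) -> str:
    return f"user-{uid}"

print(load_user(parse_user_id("42")))  # user-42
# load_user(7)  # mypy error; at runtime NewType is a no-op and it would run
```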


Just express it as an n-state Turing machine and see if it halts within BB(n) steps. /s


And, if n is large enough, then I can retire today.


Watercooler guy has never heard of Mathematical Engineering or Edsger Dijkstra. If he had then he'd know that proof by construction is the method par excellence for showing an algorithm is correct.


I would've picked proof by induction for CS peeps!


From the number of math channels I follow, the common advice I see for handling a complex problem is to start with a simpler version of it and solve it. Try simplifying it in a few different ways, find a few different ways to solve each simplification, and then see which ways of solving the problem might work to solve the more complex problem. With experience one gets better at finding the right simplification with less effort/time invested.


And then erase all intermediate steps and just publish the end result - which no one will understand (including yourself a couple of years later), but that's OK /s


I've lamented this effect of the paper writing process for some time. I think a paper should present the steps taken by the authors to arrive at the new results, not just the new results in isolation. Those steps often provide so much useful insight.


I think so too, but then (many, if not most) papers would need to be 2x or 3x longer - which is fine by me, but conferences and journals have strict length limits.


Maybe the scratch notes could be published outside of the paper itself. It’s not an ideal solution, of course, because links can go stale. But maybe it’s a starting point.


Or better still, stop pretending there's any real limit on how long a publication can be.


Increasingly this evolution is spread across multiple papers, and is not enclosed within a single paper. So you can partially follow the chain of thought.


Tao’s advice is characteristically excellent but I think Eugenia Cheng’s advice (not just the article title but as a whole) is actionable, just not so much for someone who is ready for specific advice like Tao’s. Her message seems to be targeted towards those who are just getting started and who may not otherwise have been interested in studying math.


This is a great way to go about implementing features that deal with complex situations in NP-complete problems: where you know a subset is trivially solvable, you build patterns/approaches for that subset, and for the remaining cases you just use approximate solutions.


That's about how to solve problems, but to do research you need to find good problems, and that turns out to be more important, imo. How do you pick "major problem X" anyway? This strategy from Tao just leads to incremental results...


1. If Tao (a Fields medalist) is telling you a process to follow, a dismissal of "just leads to incremental results" requires more evidence than just a bald claim. IMO, he's giving away his working process for free.

2. If "all" you want is tenure at a Research 1 institution in the US, there's a lot to be said for this process. Another ingredient is "pick an area where other people will be interested in the results."

PS I don't know what to make of your username, but perhaps you have more to say about picking a problem... in which case I'm sure many of us would love to hear it.


I've also seen neovim plugins written in fennel [0], so if you want something lispy, that's possible now.

[0]: a Lisp that compiles to Lua, https://github.com/bakpakin/Fennel


Coal is 30% currently: https://www.destatis.de/DE/Presse/Pressemitteilungen/2023/06... and I don't see this going away soon.

The argument is that we could have kept nuclear running and shut down all coal. In 1990, nuclear power plants were producing 153 TWh in Germany. Today, coal and nuclear together produce about as much.


> Coal is 30% currently: https://www.destatis.de/DE/Presse/Pressemitteilungen/2023/06... and I don't see this going away soon.

5 years ago it was significantly higher, and in 5 years it will be significantly lower again. Germany is in a process of change.

> The argument is that we could have kept nuclear running and shut down all coal.

Which is wrong. Nuclear can't replace gas, which is what coal is replacing. And the nuclear plants have no fuel left and lack up-to-date certification. There was zero chance of them continuing to run past 2019, even if we ignore all the other problems.


I doubt the USG is competent enough to have a masterplan like that. This just seems to me like the actions of a person (Gary Gensler) who personally dislikes cryptocurrencies and who is trying to establish himself as a political player through his "bold actions".


I think my banking app disallows taking screenshots of it. (Presumably this also means it would be hidden from screen recordings? Not quite sure.)


The apps are rarely the problem. The goal is to get the user to install TeamViewer or AnyDesk (software that has legitimate uses) and then get them to visit their bank's site on the computer.


Why is the article using PNG images for the code samples? I guess because the author thinks it looks prettier this way, but I don't really see it.

