
As a French person who is close to many people who:

- don't have English or any European language as their first language

- have learned English successfully

- are now in a long, difficult process of learning French

I don't believe the advantages you mention for French have much value in day-to-day life.


... is it? I had 14 meetings with externals this week alone, lol

The point of GitHub is not technical - the website is terrible. It's the social network.

That’s interesting. I would have said the opposite. I’ve never used any of the social features, but the technical aspects (including integrations) are where the value is.

It does break and go down; and GHA are a real pain in the ass. But the basic hosting and PR workflow are fine.


The PR workflow is fine if you don’t care about stacked PRs, you don’t write reviews, you don’t read nontrivial reviews, and you don’t need the diff viewer.

You should use your IDE to do all of those things. Much better that way.

The site UI has been going downhill these past few years. It's become heavy and slow, and the buttons are placed more and more randomly. For example, after you search for something in a repo, getting back to the repo front page requires clicking the most unexpected button.

It still gets things done, for sure, but it's no longer pleasant to work with.


I think GitHub has a nice UI... when the content finishes loading.

That's the real problem with GitHub these days. Too much critical information sits behind throbbers that take their sweet time. I find Codeberg much more responsive, despite it being an ocean away and having the occasional anti-AI-scraper screen.


Some competitors like GitLab have reduced friction by offering "Login with GitHub", so if you've already got a GitHub account, the bar for signing up for an alternative forge is low.

I help with one of the most popular projects on Codeberg, Fuzzel. I can say that being on an alternative forge has not led to any shortage of issues and feature requests. Indeed, we have plenty!


What is the value of the social network? I discover code by looking for a package in my language via a search engine. Whether it's GitHub/GitLab/Gitea/etc. doesn't matter as long as it's indexed by the search engine.

Just a couple of weeks ago a bogus update was pushed to Ubuntu 24 that completely broke Nvidia support: they pushed a different version of the 580 drivers and user-space libraries.

> When the patch works it will finally land even in Debian Stable.

Which is rather pointless if it arrives three years too late for, e.g., a game release.


It's not necessarily broken, but for instance packages in CachyOS are compiled against x86-64-v3 IIRC, so they won't work on older machines that don't support AVX2.

Can't you just add an x86-64-v3 arch to Debian if that really makes much of a difference? (I'd be surprised if it's really that significant, because you can't recompile the game itself, and even when you can recompile things, using -march=native doesn't make that much difference in my experience.)
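
For anyone wondering what the failure actually looks like: a binary built with -march=x86-64-v3 assumes AVX2 (among other extensions) and will typically die with an illegal-instruction error on a CPU that predates it. A minimal sketch of a runtime check using the GCC/Clang builtin __builtin_cpu_supports (the file name and messages here are made up for illustration):

    /* build with: gcc -O2 check_v3.c -o check_v3 */
    #include <stdio.h>

    int main(void) {
        /* x86-64-v3 implies AVX2, BMI2, FMA, etc.; AVX2 is the usual
           missing piece on older CPUs. */
        if (__builtin_cpu_supports("avx2"))
            printf("AVX2 present: x86-64-v3 binaries should run here\n");
        else
            printf("no AVX2: packages built for x86-64-v3 would crash here\n");
        return 0;
    }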

France and most of Europe have fair use (https://fr.wikipedia.org/wiki/Copie_priv%C3%A9e), but also a mandatory tax on every storage-capable medium sold, to recover the "lost fees" due to fair use.

Is that not just an exemption for copying for private use? My French is not up to much, but this:

> L'exception de copie privée autorise une personne à reproduire une œuvre de l'esprit pour son usage privé, ce qui implique l'utilisation personnelle, mais également dans le cercle privé incluant le cadre familial.

(Roughly: "The private-copy exception allows a person to reproduce a creative work for their private use, which implies personal use, but also use within the private circle, including the family setting.")

seems to be only for personal use?

Fair dealing in the UK and other countries is broader, and US fair use broader still.


... are you saying that hardware projects fail less often than software ones? Bridge construction alone fails with regular occurrence all over the world. Every chip comes with a list of errata longer than my arm.

But even the most powerful Apple silicon GPU is terrible compared to an average Nvidia chip.


While I agree with the general point, this statement is factually incorrect: Apple's most powerful laptop GPU punches right about the same as the laptop SKU of the RTX 4070, and the desktop Ultra variant punches up with a 5070 Ti. I'd say on both fronts that is well above average.


There is no world where Apple silicon is competing with a 5070 Ti on modern workloads. Not the hardware, and certainly not the software, where Nvidia's DLSS is in a league of its own, with AMD only just having gotten AI upscaling out and started approximating ray reconstruction.


Certainly, nobody would buy an Apple hoping to run triple-A PC games.

But among people running LLMs outside of the data centre, Apple's unified memory together with a good-enough GPU has attracted quite a bit of attention. If you've got the cash, you can get a Mac Studio with 512GB of unified memory. So there's one workload where Apple silicon gives Nvidia a run for its money.


Only in the size of model it can run, not in the speed of token generation.
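
A rough way to see why capacity and generation speed are separate questions: for a dense model, every generated token streams roughly the whole weight set through memory, so decode speed is capped at roughly memory bandwidth divided by model size. The numbers below are illustrative round figures, not benchmarks:

    /* Back-of-the-envelope decode ceiling: bandwidth / model size.
       The bandwidth and model-size figures are assumptions for illustration. */
    #include <stdio.h>

    int main(void) {
        double model_gb = 40.0;  /* e.g. a ~70B-parameter model at 4-bit */
        double bw_gbps[] = { 273.0, 800.0, 1000.0 };  /* laptop unified memory,
                                                         Ultra-class, high-end discrete */
        for (int i = 0; i < 3; i++)
            printf("%6.0f GB/s -> at most ~%.0f tokens/s (if the model fits at all)\n",
                   bw_gbps[i], bw_gbps[i] / model_gb);
        return 0;
    }

So a box with lots of unified memory can hold a model that a 24GB card simply can't load, while still generating tokens more slowly than a card with higher bandwidth.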


Apple's MetalFX upscaler is pretty similar to DLSS (and I think well ahead of AMD's efforts on this front).

Ray tracing outside of Nvidia is a disaster all round, so yeah, nobody is competing on that front.


Ray tracing support on the newest AMD chips is getting good enough. They are still behind Nvidia, but definitely not a disaster anymore.


That simply isn't true. I have an RTX 4070 gaming PC and an M4 MacBook Pro w/ 36GB shared memory. When models fit in VRAM, the RTX 4070 still runs much faster. Maybe the next-generation M5 chips are faster, but they can't be 2-4x faster.


GP said laptop 4070. The laptop variants are typically much slower than the desktop variants of the same name.

It's not just the power budget: the desktop part has more of everything, and in this case 4070 mobile vs. desktop turns out to be a 30-40% difference[1] in games.

Now, I don't have a Mac, so if you meant "2-5x" when you said "much faster", well then yeah, that 40% difference isn't enough to overcome that.

[1]: https://nanoreview.net/en/gpu-compare/geforce-rtx-4070-mobil...


Are there real-world game benchmarks for this, or are these synthetic tests?


Only a few, because it's not easy to find contemporary AAA games with native macOS ports. Notebookcheck has some comparisons for Assassin's Creed Shadows and Cyberpunk 2077[1].

[1]: https://www.notebookcheck.net/Cyberpunk-AC-Shadows-on-Apple-...


And it will consume almost as much power as the Nvidia GPU to do so.


A $4.5k M4 Max barely competes, in Cyberpunk FPS at the same settings, with an entry-level ~$1k laptop with a 4060. For AI it's even worse: on Nvidia hardware you get double-digit FPS for real-time inference of e.g. Stable Diffusion, whereas on the M2 Max I have you get at best 0.5 FPS.

For instance, as an X11 user I don't want a compositor at all.


(Same... I know people use them to get some pretty effects, but they add a frame of latency I do not want, require lots of memory, and assume acceleration I don't need.)


There is no way to avoid a frame of latency without "racing the beam", which AFAIK is quite complicated and not compatible with most GUI frameworks. That is, if you don't want tearing.

But I may be wrong here


One frame of latency and adding a frame of latency are different things. The first is required (if you don't want tearing); the second should be avoided at all costs (although high display refresh rates reduce the problem of "long" swapchains quite a bit).
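
To put rough numbers on that last point: one extra frame of latency costs one refresh interval, and that interval shrinks as the refresh rate climbs. A trivial illustration:

    /* One added frame of latency equals one refresh interval. */
    #include <stdio.h>

    int main(void) {
        double rates_hz[] = { 60.0, 120.0, 144.0, 240.0 };
        for (int i = 0; i < 4; i++)
            printf("%3.0f Hz: one extra frame = %.1f ms\n",
                   rates_hz[i], 1000.0 / rates_hz[i]);
        return 0;
    }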

