Hacker News | joaomacp's comments

And just like that, you no longer have a good benchmark. Scrapers / AI developers will read this comment and add 5-legged dogs to LLMs' training data.

That's okay. Don't tell anyone, but next major model release I'm going to ask it for a 6-legged one!

So much this. People don't realize that when 1 trillion (10 trillion, 100 trillion, whatever comes next) is at stake, there are no limits to what these people will do to get it.

I will be very surprised if there are not at least several groups or companies scraping these "smart" and snarky comments to find weird edge cases that they can train on, turn into demos, and then sell as improvements. Hell, they would've done it if 10 billion was at stake; I can't really imagine (and I have a vivid imagination, to my horror) what Californian psychopaths can do for 10 trillion.


That's fine, good even. Afaik, for at least some of these tasks, dev teams are doing a lot of manual tuning of the model (it's rumored that "r in strawberry" had been "fixed" this way, as a general case of course). The more random standalone hacks there are in the model, the more likely it is to start failing unpredictably somewhere else.

I'm not worried about it because they won't waste their time on it (individually RL'ing on a dog with 5 legs). There are fractal ways of testing this inability, so the only way to fix it is to wholesale solve the problem.

Similar to the pelican bike SVG: the models that do well at that test do well at all SVG generation, so even if they are targeting that benchmark, they're still making the whole model better to score better.


Reminder that the greenest smartphone is one you already have.

If my phone died today, I still have a company-given one that I never use. I'd just ask my org to give or sell it to me for personal use.


Sure:

  plain_msg = decrypt(encrypted_msg)
  send_to_nsa(plain_msg)


The actual reason they tell you to register for warranty via a phone number is so a salesperson can pick up and upsell you on "enhanced warranty" or "insurance". It's proven that people feel awkward saying no to a salesperson and agree to pay extra much more often than they would online (they'd just tick "no" on the enhanced-warranty paid product).

That's also why the author was put in a queue: the call center isn't run by the washing-machine company; it's an insurance-selling center that works with multiple companies.


[After just using `count()`]

> Uh oh, someone is tapping you on the shoulder and saying this is too slow because it has to do a complete index scan of the tasks for the project, every time you load the page

Just ask them if that's actually the bottleneck and go for a walk outside, before sweating over anything else discussed in this post.
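Agreed. And most databases will just tell you what the count actually costs, so you can answer that question before optimizing anything. A minimal sketch using Python's built-in sqlite3 module (the `tasks` schema and index name here are made up to mirror the post's example, not taken from it):

```python
import sqlite3

# Hypothetical tasks-per-project schema, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tasks (id INTEGER PRIMARY KEY, project_id INTEGER)")
conn.execute("CREATE INDEX idx_tasks_project ON tasks (project_id)")
conn.executemany(
    "INSERT INTO tasks (project_id) VALUES (?)",
    [(n % 10,) for n in range(1000)],  # 10 projects, 100 tasks each
)

# Ask the planner what the count will actually do before assuming it's slow.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT COUNT(*) FROM tasks WHERE project_id = ?", (3,)
).fetchall()
print(plan)  # the plan detail names the index it will use, not a full table scan

count = conn.execute(
    "SELECT COUNT(*) FROM tasks WHERE project_id = ?", (3,)
).fetchone()[0]
print(count)  # 100 tasks for project 3
```

In Postgres the equivalent check is `EXPLAIN ANALYZE`, which also reports the real execution time; either way, the plan tells you whether the count is the bottleneck or not.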


Universities should also have adjusted: Java has been commercial for a long time now, and there are many other popular, less commercially aggressive programming languages that would fit the curriculum nicely: Python, Golang, etc.


The top programming languages by popularity or job openings seem to be a toss-up between Python, JS (or things that compile to JS), and SQL. Other, less used languages might lead to steeper career advancement - there's still a market for COBOL programmers, for example, with more demand than supply - but that's not what I would like a college to start out with as a first language.

As a teaching language for top-end CS degrees, going python-first would be an experiment. Theory says it'll turn out badly; in practice I'm not sure especially if there is an emphasis on good coding practice. Python for a degree with a bit of programming but not full CS is in my opinion the correct choice, but that doesn't mean we should ban it for CS degrees.

If I had to choose a language for teaching a module on object-oriented programming, I'd go with golang, not Java or C#. Not because of licences, but because we've learnt things about subclassing, exceptions, and design patterns used to get around language restrictions, and golang mostly fixes those, so it's easier to pick up the spirit of modern OOP.


Why not C++ though? Learning C++ before Python helped me retain a bias for good programming practices.


...C++ teaches good programming practices? I thought it mainly teaches you how to be paranoid, and how to keep track of the bizarre interactions of 20 bespoke historical features.


One can say the same about Java, although less of the paranoid part.


The moment you have to deal with checked exceptions in a pipeline of lambdas, you become paranoid about your own sanity.


Certainly better than non-typed python.


golang should not be in the curriculum. Neither should any language owned by a corporation.

The only languages that should be on the list are open-standard languages that have multiple implementations and are "free" (as in libre).

Not to mention it should be a language that demonstrates an idea - such as Python for procedural, Haskell for functional, and Lisp-likes for, well, lisps.


They shouldn't be teaching golang because it's a terrible blub language, not because of any concern over the license which is fine.


The idea that go demonstrates is “this is the language to get shit done in”.

I actually think Go is a great language to teach programmers. It has a good standard library. It has a really easy way of turning your code into a library (this is great for getting students to learn how libraries work, and the risks associated with them, by depending on each other's code).

A ton of new software that’s actually making money or is used as core infrastructure is written in Go. The world’s largest CA is in Go. Kubernetes is in Go.


The Google version of Go uses a 3-clause BSD like license. There are other implementations besides the one maintained by Google, and there is a complete specification for the language freely available. The language is not owned by Google.


> there is a complete specification for the language freely available.

and who has the ability to make changes to this specification?

Not to mention that there's really no second implementation of Go for the specification, which means that Google is realistically free to make whatever updates to it they feel necessary, and there would be no second party that could object.


I understand your concerns with regard to what languages should be used for teaching purposes and I agree with them. But I don't agree that it applies to Go.

> Not to mention that there's really no second implementation of go for the specification,

Is that true? The GCC frontend is a little out of date with regard to the current spec, but it was written to the spec as it existed at the time. Also, TinyGo matches the language spec, I think. It has limited compatibility with the standard library, but the specification doesn't talk about the standard library at all.

> and who has the ability to make changes to this specification?

There's nothing stopping anyone taking the current Go specification, extending it and using the Google implementation of Go as the base for something new. Although you wouldn't call it Go in that case.

I personally like that there is a central arbiter with the final say on the specification. Admittedly, I would prefer it if it wasn't Google, but I'm not convinced a Rust like foundation model would be any better in the case of Go.


Yes, but it is still controlled by Google and has not been moved to a foundation, like for example the Rust Foundation.


True. However the control really only extends to the name and the amount of human effort allowed by employees on the Google implementation, as far as I can tell.

If Google decided that they were no longer developing their implementation of the language and moved the employees to other things, there would be nothing stopping a foundation from being established. It would be ideal if this was organised by Google as part of their exit, but failing that, I would expect other stakeholders in the language to organise something.

What advantages would moving to a foundation like structure bring to Go?


Strategic direction, roadmap and general decisions are set by the independent foundation and not by Google.


Okay. So it's really just a protection against Google deciding to stop its own development and leaving everyone hanging. That's good. I could get on board with that.

For me however, I think it would be a challenge for the foundation to maintain the slow-and-steady development that we've seen under Google.

To bring it back to the main topic: I think a language that rarely introduces major new features, is good for teaching purposes. From that perspective, I think Go is a good language for teaching core concepts.


Controlled in the sense that most of the maintainers are Google employees. But how does that make a difference? The tool chain is available as FOSS, there’s no possibility of a rug pull.


All of the same things were always true, and are true today, of Java. You can use Java entirely without touching anything Oracle-tainted. And yet, here we are.

Not to mention, nothing stops Google from deciding tomorrow to stop distributing any Go code freely. They pull the public repos, pull the public docs, pull the package server and cache, everything. They release everything back only under proprietary licenses: it's their code, they can change the license at any time.

Sure, you could still use anything they released yesterday, if you still have a copy. But would anyone feel that is a sound business or even personal decision?

I'm not saying in any way that I expect Google would do this. I'm just pointing out that this is 100% within their legal rights to do, and under their ability. I think Google is quite trustworthy on this, but if you feel they are not, you should be aware that the license doesn't act as any kind of real shield.


Realistically there's no chance of a rug pull, of course.

But my philosophy is about libre, in the context of an educational institution. Teaching java was a mistake, and it would be the same to do so with golang for the same reasons. Students should be learning the concept embedded in the language, rather than the commercial ecosystem associated with the language.


I like Go and you're right, but I agree with the poster above: there's something to be said about de jure and de facto distinctions in Google "owning" Go.

Why not split the difference and teach Luau in the Roblox environment instead? /s


Do you know who controls Python, Haskell, Lisp or C++?

The Linux kernel is corporate and has one implementation. Is it safe to use?


Corporate? Which corporation are you thinking of?

I'm not aware of any corporation having an undue influence on Linux development.


Do you know what percentage of contributions to Linux are from corporations?


No, though I know that it's probably quite high. I don't really see an issue with corporate sponsored contributions even if they're specifically designed to benefit those corporations and their customers.

The issue as I see it is if corporations start having undue influence on the direction that Linux takes i.e. not merely affecting their own contributions, but preventing others from contributing. (c.f. Oracle sticking with ZFS CDDL licensing)

I would actually prefer more corporate sponsored contributions as I've just been battling with trying to get the Dell perccli working on Ubuntu 24.04 so that I can administer newly added disks without having to reboot and access the PERC interface during boot (somewhat difficult to do remotely). I was able to install the tool after battling through Dell's website, but it doesn't work properly and declares that two new disks are already in use, despite their status showing as UGood.


Agree with this.


I tried whatever the multi-modal paid ChatGPT model is on the Codenames Pictures version, and it didn't fare that well. Since they will probably scrape this comment and add it to next model's training data, I look forward to it getting good!


haha!


I use Gimp from time to time, and often get frustrated with its... unique UI. It's nice to see they're hearing feedback and working on it :D

A tip for others who feel the same: if you've used Photoshop before and are used to its UI, try the free Photopea website. It's a Photoshop "clone" that works really well in the browser (I believe it's a solo dev building it, too). It's replaced Gimp for me lately.


I would recommend Krita instead of a website.

Websites are not automatically free or open source; they also require internet access and can sneakily copy the files you are working with.

If photopea is free today, it may cost money tomorrow.

Krita exists for Windows and macOS too nowadays.

https://krita.org/en/


> Websites[...] can sneakily copy the files you are working with

You have made one of the most baffling logical errors that commonly crop up when people criticize browser-based apps.

Browser-based apps execute in a sandbox. They are more constrained in what they can do in comparison to a traditional program running on your machine. Any nefarious thing a browser-based app can do, a local program can do, too, and not just that, but they can do it in a way that's much harder to detect and/or counteract.

There are good reasons available if you want to criticize browser-based apps. This is not one of them.


i can remove network access capabilities from a desktop app after it is installed. i can't easily do that with an app running in a browser.

likewise monitoring and detecting network access per application is easy. tracking down which browser tab is making which network connection is a lot harder.


Go to the network tab of your browser's dev tools. It's literally easier than with a desktop app.


i am using that already. at least in firefox the network tab only shows which destinations generate traffic. it does not show which tab the traffic comes from. since any page can connect to multiple destinations, not just the one where the page is loaded from, this is not enough to identify the culprit.


You are either confused about something, or you're simply refusing to engage with reality.

> Any nefarious thing a browser-based app can do, a local program can do, too, [or worse!]


you are not wrong on the comparison but you miss the tools available to contain a desktop application that are not available for a browser application. by default a browser application is more limited than a desktop application, but those limitations also reduce the possible functionality of a browser application, and they are locked in place as far as i am aware of.

for a desktop application, at least on linux there are tools available to further constrain its access to the system by monitoring its activity, removing capabilities or placing the app in a container or even a VM. (VM are available on windows and mac too, but i don't know about other features)

to contain a browser app in this way i would have to run a contained copy of the browser as a whole, and i still can't easily limit network access.

further, almost all desktop applications on linux come from a trusted source or a trusted intermediary and have undergone at least some kind of review, whereas browser applications can be reviewed but it is non-trivial to ascertain that i am running the exact same version that was reviewed.

it is possible, and it is my hope for all this to change. i actually believe browser applications are a good idea, but the ability to audit, and constrain browser applications needs to improve a lot for that to happen.


I am not sure about your level of computer literacy, so sorry in advance if I give an overly detailed response.

Certainly a website is allowed to process files you upload to it, and the JavaScript is allowed to make XMLHttpRequests within that sandbox.

This is outside the control of the user. While had it been an app running locally, I could restrict network access or other resources.

Of course the web developer can choose to process the file client-side only, but generally when you upload a file to a website, it gets uploaded and processed by their servers.

Surely you can verify this yourself while using the website, but I am confident that most users of a website wouldn't do that and be none the wiser how their data is being processed.

TLDR: I don't believe the average web user is capable of distinguishing a webapp that works in offline-only mode from an ordinary website.


> I am not sure about your level of computer literacy, so sorry in advance if I give a a overly detailed response

In technical discussions, this is what I call "The Move". It comes from a desire to position the person making The Move as more knowledgeable and experienced and therefore correct and the other person as relatively new, inexperienced, lacking in wisdom, and naive. It's extremely sophomoric and perversely favored by those who lack the attributes they're trying to project. Don't do it.

I know how browsers and web apps work. I'm a former Mozillian, and among other things, I wrote, edited, and maintained the JS docs on developer.mozilla.org.

Even aside from The Move, nothing else that you wrote out here is especially relevant. The central observation I made is that users have more reason to be circumspect of non-browser based programs that they download and run than they do of browser-based programs because any nefarious thing a browser-based app can do, a native executable can do, too—or worse.

Anyone who has a gut feeling to the contrary is doing exactly that: operating on vibes and intuition and trying to reason with their gut instead of using actual reason to do what is ultimately a straightforward calculation.

(And the thing is, you and everyone else in your camp already knows the truths I've written out here. If you disagree, then we'll set aside one day a year that we'll call Native App Day. For Native App Day, browsers will refuse to execute browser-based apps. Instead everyone who publishes a web app will agree to publish programs packaged in the native executable format for Mac, Windows, and Linux, and everyone who typically uses the web app will run these executables with the same alacrity they apply when they undertake to use the web app. This will be strictly enforced, and there will be no cheating by folks who just refuse to use the computer on Native App Day.)


>> I am not sure about your level of computer literacy, so sorry in advance if I give a a overly detailed response

> In technical discussions, this is what I call "The Move". It comes from a desire to position the person making The Move as more knowledgeable and experienced and therefore correct and the other person as relatively new, inexperienced, lacking in wisdom, and naive. It's extremely sophomoric and perversely favored by those who lack the attributes they're trying to project. Don't do it.

Nonsense. Judging from your previous post it is apparent you are speaking outside of your expertise. Smearing labels all over rather than factually responding only makes it more so.

You claimed sandboxed browser apps was "more secure" than a traditional app.

Nobody suggested otherwise. In fact, we weren't discussing the browser sandbox security model up to that point, but the differences between an online-only closed-source web app and a traditional FOSS app.

> I know how browsers and web apps work.

So do the lot of us here, yet you don't seem to share a common understanding of the domain.

You do have a skewed understanding of the web app and seem to fail to understand why people would want a traditional app they could inspect and lock down as they please.

This suggests to me you are junior and/or suffering from a bit of Dunning-Kruger, because you might be skilled in other areas (in this case skilled in web dev and unskilled in traditional app dev), hence my previous comment about your skill level.

You responded to a lengthy post I made, and yet you fail to address any of the points raised.

> The central observation I made

.. was questioned by me and others and you just ignore what was said.

> And the thing is, you and everyone else in your camp already knows the truths I've written out here.

Get off your high horse.

You haven't shared any truths, you haven't addressed the issues we raised, and you have a rather rude tone, saying things like:

> You have made one of the most baffling logical errors that commonly crop up when people criticize browser-based apps.

And then continuing to fabricate intent.

(My point still stands).


K.


> require internet access

It's a PWA and works offline perfectly.


Krita works well for me on Linux, but I get a lot of random crashes and weird graphics issues on Mac; it's not worth it there for me. No idea about Windows.


Krita is more geared toward digital drawing than image processing; I recommend Affinity Photo.


> frustrated with its... unique UI

It's a matter of habits. For me, Gimp is the primary image editing tool and all others feel alien.


There's habits sure, but GIMP also just has a lot of bad UI. For instance if you insert text, you have to click exactly on the black region of the character to select the text. This is really awkward because it means when you click on a letter to try and move some text, sometimes your click will go through the hole in the middle of the letter and select the thing behind the text. Also worth noting that this update is the one allowing people to edit rotated text and it took 20+ years. This is really bad UI/UX.


That's interesting. I have used and enjoyed a ton of software in different domains (from Nothing Real's Shake to GNU ed), and so far Gimp still wins the gold medal for triggering me. A rare feat.


Many years ago, I lost my work because of this "unique UI" and pledged never to use Gimp again unless its behavior changed.

When you open a non-Gimp file, for instance a PNG, and you want to update the source file, you need to "export" to PNG. And if you close the tab, Gimp warns you that your work isn't saved, because it hasn't been saved in its native xcf format. There is no way to know if the work has been saved to the original file. At least, that was the behavior at the time.

So I had opened a dozen (versioned) PNG files, modified them, then overwritten the PNG files. On closing, Gimp warned me that none of the images was saved. I ignored the warning since I didn't want to track the changes in xcf files. It turned out one of the files had not been "exported" to PNG.


This is standard behavior in pretty much any kind of art/content creation app. You have a project file which can be saved and reopened in the app, saving the state of the layers/effects/etc to be edited later, and can “export” a final render to a specific format for your medium. Image/video editing, digital audio workstations, 3D-modeling programs, they all behave like this, for good reason since it usually takes a long time to export to a specific format, and when you do, you lose the ability to change anything.

Think of it like source code, and each exportable file type is like a compilation target.


Think of it more like GIMP chasing non-existent users while ignoring the workflow of its actual users.


This is one of the weirder design changes that Gimp made, and it wasn't always that way. IIRC, the "save" option worked as you described in 2.0 but changed to the newer behaviour in either 2.2 or 2.4. Baffling because it really does change the workflow and coupled with the GTK+ load/save dialog boxes, it really has become much less intuitive than it used to be.


There is literally an "overwrite file" command in the file menu.

You didn't lose data because of bad UI but because you are illiterate. You just said it, it warns you. If you can't understand what "none of the images was saved" means, there is no UI that can save you except autosave. But autosave is something you clearly don't want in a photo/image editor, even smartphone apps do not autosave photo edits.


That's a rude and somewhat inaccurate response.

Photoshop has autosave that works well, even for files with hundreds of layers, so it can be done. That being said, I can see that it's less useful when someone chooses not to save.

As for export, a single-layer file should be considered saved when one exports to lossless. A multi-layer file needs a different prompt, and I note Gimp has that now. It flags the file as "Exported to xxxxxx.png" in the Quit dialog.


Autosave is useful for a working-file format, like psd, if non-destructive changes are supported. But it would be stupid for exported end-result formats like jpeg, png, webp, or pdf, where changes cannot be recorded.


Yes, even though I never use Photoshop and have used Gimp for over 15 years, it's a frustrating UI. I dislike it. Non-destructive editing is a big upgrade though.

I also use Photopea from time to time. Can recommend.


If only a mad man would make a Photopea/Photoshop clone open source, then everyone (who has the skills) would be able to not only use a decent open source image editor, but one that can be fully customized to your needs.


I've loaded photopea, krita and gimp side by side and really there are absolutely no major differences in UI.

This is the kind of dismissive posts thrown by people who haven't used gimp since 1999 and keep repeating the same lies every gimp release.


Have you tried Krita yet?


I like using pinta for the "easy" cases.


I don't use any photo editing tool, but I know there is PhotoGIMP, which makes Gimp look like Photoshop.


If you want to know about this in podcast form: https://features.apmreports.org/sold-a-story/

Takes more time than reading the article, but the podcast has IMO a nice pace of leaving you curious and giving you info. It includes opinions from teachers, parents, etc.


Obligatory related paper:

"in the future, the fastest humans on the planet might be a quadrupedal runner at the 2048 Olympics, which may be achieved by shifting up to the rotary gallop and taking longer strides with wide sagittal trunk motion."

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4928019/


Today I learned there is such a thing as human quadrupedal running. https://youtu.be/RZlvWpeC208?si=xsii66h2cPtjH6ba


15.7 seconds to run 100 meters on all fours is pretty impressive, https://www.youtube.com/watch?v=F3h0AkNNP70

Seems like it would be hard on the wrists.

