
Interesting comment. Why is it common decency to call out how much AI was used to generate an artifact?

Is there a threshold? I assume spell checkers, linters, and formatters are fair game. The other extreme is full-on AI slop. Where should we as a society start to feel the need to police this (better)?


The threshold should be exactly the same as when using another human's original text (or code) in your article. AI cannot have copyright, but for full disclosure one should act as if it did. Anything that's merely something that a human editor (or code reviewer) would do is fair game IMO.


Maybe OP just used an AI editor to add their silly comments, so that would be fair game I guess? Or some humans just add silly comments. The article didn't stand out to me as embarrassingly AI-written. Not an em dash in sight :)

Edit: just found this disclaimer in the article:

> I’ll show the generating R code, with a liberal sprinkling of comments so it’s hopefully not too inscrutable.

It doesn't come right out of the gate and say who wrote the comments, but ostensibly OP is a new grad / junior, and the commenting style is on-brand.


OP here - no AI-generated code. I'm wondering what gives the impression that there is?

I use Rmarkdown, so the code that's presented is also the same code that 'generates' the data/tables/graphs (source: https://github.com/gregfoletta/articles.foletta.org/blob/pro...).


If you say there's no AI-generated code then I retract the original comment, nice work.


That is not a disclaimer for generated code, it's referring to the code that generated the simulations/plots.

I had read that line before I commented; it was partly what sparked me to comment, as it was a clear place for a disclaimer.


Agree here - in a nutshell it strikes me as intellectually dishonest to intentionally pass off some other entity's work as one's own.


i personally have no problem with people including AI gen’d code without attribution so long as they stand by it and own the consequences of what they submit. after all, we all know by now how much cajoling and insisting it takes to get any AI gen’d code to do what it’s actually requested and intended to do.

the only exception being contexts that explicitly prohibit it.


This is the actual essence of CATB; it has very little to do with your analogy:

-----

> The software essay contrasts two different free software development models:

> The cathedral model, in which source code is available with each software release, but code developed between releases is restricted to an exclusive group of software developers. GNU Emacs and GCC were presented as examples.

> The bazaar model, in which the code is developed over the Internet in view of the public. Raymond credits Linus Torvalds, leader of the Linux kernel project, as the inventor of this process. Raymond also provides anecdotal accounts of his own implementation of this model for the Fetchmail project

-----

Source: Wikipedia


While the GP is completely off-base with their analogy, the Wikipedia summary is simplified to the point of missing all the arguments made in the original essay.

If you're a software developer and especially if you're doing open source, CATB is still worth a read today. It's free on the author's website: http://www.catb.org/~esr/writings/cathedral-bazaar/cathedral...

From the introduction:

> No quiet, reverent cathedral-building here—rather, the Linux community seemed to resemble a great babbling bazaar of differing agendas and approaches (aptly symbolized by the Linux archive sites, who'd take submissions from anyone) out of which a coherent and stable system could seemingly emerge only by a succession of miracles.

> The fact that this bazaar style seemed to work, and work well, came as a distinct shock. As I learned my way around, I worked hard not just at individual projects, but also at trying to understand why the Linux world not only didn't fly apart in confusion but seemed to go from strength to strength at a speed barely imaginable to cathedral-builders.

It then goes on to analyze why this worked at all, and if the successful bazaar-style model can be replicated (it can).


> I just want to putz around with something in VSCode for a few hours!

I just googled "using claude from vscode" and the first page had a link that brought me to Anthropic's step-by-step guide on how to set this up exactly.

Why care about pricing and product names and UI until it's a problem?

> Someone on HN told me Copilot sucks, use Claude.

I concur, but I'm also just a dude saying some stuff on HN :)


Maybe I just haven't worked at a large enough company yet, but it's fascinating to me how in some organizations a very common sense exercise of "let's have some stakeholders sit down and reason about what everyone wishes to do" could be revolutionary.


> iCloud photo albums have no API. However, if you share an iCloud photo album to a public link [...]

How do folks feel about the security vs. convenience aspect of this? I almost talked myself into doing this for our shared family albums, but I know I really shouldn't do it.

Some of our older family members run Windows and iCloud sharing is just horrible there. Basically, the photos keep disappearing from their computer. It looks like we're not the only ones with the issue: https://www.reddit.com/r/iCloud/comments/150nq4i/icloud_wind...


I even built a small product to show Apple shared albums online: https://public.photos/

I also wanted to add an API on top so that people can show photos however they want. But I haven't had the time to finish it yet.


That’s cool but why not fetch the photos via URL? Seems easier than having to maintain an email inbox


I have more information that way on each photo, I think.

Plus it feels more robust.

Getting them from the website might break if the layout breaks.


I feel really okay with this, but what I'm not okay with is that there isn't language to talk transparently about it. Facebook does the same thing (or did, because I haven't used Facebook in a long time): if you copy a link to an image, you can just forward that, even if the post is private.

This is pretty inherent in image sharing, though. You can just download the image, or if the website limits that you can take a screenshot (let's not get into the debate about DRM and assume that it doesn't work).

Where should you draw the line? Time limited link sharing? Login based doesn't work because you can't share with Grandma - she doesn't know how to login.

We need words and descriptions of these basic patterns, and better ways to intuit which is in use.


I think the phrase used here quite often is "security through obscurity" when it comes to links. The question is whether people feel comfortable with family photos falling under that principle. They're obviously not meant for public consumption, but the feeling of privacy invasion if a random person stumbled on them is going to vary from person to person. If one was totally comfortable with them being public and has zero reservations about random folks peeking on them, then I'd be surprised if there weren't an even sturdier way to do this publicly (through Flickr or just an open FTP link – but that loses some of the convenience of just an iCloud album for some people).


Securely-generated unique links aren't security by obscurity at all - they are literally the same as any password or private key being unique and high entropy. The security problem is inherent with any data that is shared widely.
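
To put rough numbers on that (the token length and alphabet here are assumptions for illustration, not Apple's actual link format):

    import math

    # Suppose a share-link token is 15 characters drawn from [A-Za-z0-9].
    token_length = 15
    alphabet_size = 62

    bits = token_length * math.log2(alphabet_size)
    print(f"~{bits:.0f} bits of entropy")  # ~89 bits - far beyond any guessable password

Guessing a link like that isn't realistic; the real exposure is the link being re-shared or leaked, which is the "shared widely" problem rather than obscurity.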


It's actually damn near impossible to retrieve all of your photos/videos from iCloud for a backup if you are using a Windows machine to do so. It will constantly fail to sync fully, duplicate files, take eons to download even on a fast connection, and there are bizarre file format conflicts with certain types of images. Very infuriating, and it's been an issue for at least 5 years. 'Buy a Mac if you want to actually adhere to proper backup standards' - I guess this is the Apple stance on the issue.


I've been using icloudpd to get photos off of iCloud. It took a while the first time, but after that I set it up to only download the latest 500 photos (the total number downloaded is usually way under) and run it every few months. A rough sketch of my setup is below.

https://github.com/icloud-photos-downloader/icloud_photos_do...
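
For illustration, here's roughly what that amounts to - a small Python wrapper you could run from a scheduler (the path, Apple ID, and exact flags are placeholders; check icloudpd --help, since options vary between versions):

    import subprocess

    # Download only the most recent 500 photos into a local folder.
    # --recent limits how far back icloudpd looks, so re-runs stay fast.
    subprocess.run(
        [
            "icloudpd",
            "--directory", "/photos/icloud-backup",  # local destination (placeholder)
            "--username", "me@example.com",          # Apple ID (placeholder)
            "--recent", "500",                       # only the latest 500 items
        ],
        check=True,  # raise if the download fails so the scheduler can flag it
    )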


A workaround is to use OneDrive with the Camera Upload feature turned on, and then sync this back down to your PC. You can choose how to sort (e.g. folders by year and month).


TBH, the Photos app on Macs is just as bad at this. Especially if you have a LARGE album.

I'll give you exactly the use case, and exactly why that is:

I decided I didn't want to pay for the family 1tb icloud plan anymore because 90% of it was being used up by my brother taking silly pictures and videos all the time. So I had him get an external SSD and set him up with the Photos app to download to it.

~900gb of images and videos. It took OVER. A. MONTH. to download. The whole time the photos app was being very cagey about when it would bother to download. To an M1 iMac. With a 500mbit fiber connection and connected via ethernet.

I think they do that on purpose to discourage people from quitting icloud. They want to keep you dependent on their cloud storage and they REALLY don't want you taking your files back.

I fucking hate icloud. I hate the way apple uses dark patterns and is so naggy about having an icloud membership when using an iPhone. I hate their crappy cloud syncing software too.

I ended up back on icloud later because, well, reasons... but I moved my own photo/video syncing over to OneDrive. Now let's not get ahead of ourselves - OneDrive is a piece of crap too. But at least it's consistent on all platforms. And it's cheap as fuck.


> I think they do that on purpose to discourage people from quitting icloud.

I think they just haven't updated the CPU/bandwidth-saving provisions that have been in the products for years. The same issues happen if you are staying on iCloud and syncing to new devices. I know that I have a gigabit ethernet connection to a machine that is not doing anything else, but the app doesn't have a way to tell it that.


Yup. iCloud is just my most recent photos. Once my 200gb starts getting full, I go back and delete photos and videos year by year until I'm down to the last year or two. My entire library is still backed up to a NAS (which has offsite backup) as well as to Google Photos.


Agreed on the dark patterns. I’m pretty adamant about staying on the free 5GB plan. So once a year or so my backup fails and I have to spend half an hour fighting with their intentionally terrible UI to reduce its size.


Apple should just pull that windows app/feature if they’re being this shitty with their users. That thread was infuriating.


The iCloud file syncing on Windows destroyed my files twice and I never touched it again.


By most standards I'm fairly security conscious. I.e.: no listening devices/Alexa/Google Home in the house. For the most part no cameras or mics inside (obviously can't get past the phone thing). Generally I prefer to self-host stuff (i.e. my security cams are local only and have no internet access).

That said, I DID set up a public shared album to share to a Dakboard... It's also one I share with family.

The URL is so long that from a privacy standpoint it's less of a concern. The facial details are already on the internet due to family sharing stuff on Facebook etc. None of the photos are particularly revealing, and there's nothing I would care about being on a billboard... So meh. I doubt I could be targeted to find it, and anyone stumbling across it won't see anything they wouldn't see if they drove by my house and saw my kids playing in the front yard. We also don't religiously post to it, so it's not like anyone is going to glean that we are out of town because of something we put there.

TL;DR - meh, not much of a concern on my end, despite other concerns that may border on tinfoil.


Do you use a specialized LLM for diagrams?


I'm not OP, but I just ask GPT to turn code or process or whatever else into a mermaid diagram. Most of the time I don't even need to few-shot prompt it with examples. Then you dump the resulting text into something like https://mermaid.live/ and voilà.
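
To make that concrete, here's a minimal sketch of the kind of request I mean, using the OpenAI Python client (the model name, file name, and prompt are placeholders, not a recommendation):

    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    code = open("pipeline.py").read()  # whatever code/process you want diagrammed

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": "Turn this into a mermaid flowchart. "
                       "Output only the mermaid code, no prose:\n\n" + code,
        }],
    )

    print(resp.choices[0].message.content)  # paste this into https://mermaid.live/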


I just ask ChatGPT to generate text diagrams that I can embed in my markdown.


This (i.e. using GitHub issues on the repo) is great because it co-locates the mental-note breadcrumbs, the todo list, and the project itself.

I use org-mode for personal project management, and by design it intermingles notes and todos. But it never fully clicked until I started putting the notes closer and closer to my repo (my workflow pre-dates github, but this approach makes me consider some modernization :))


Interestingly, I actually did the opposite. I started with GitHub Issues as my note taking place, used that for several years. And then eventually moved away from it to Org Notes as a centralized system for all projects. Some of my reasons were:

- it's centralized; sometimes projects are related and being able to search for "all project notes involving LEDs" is useful across repos

- sometimes project work is not solely related to code, and in that case your notes are split between repo-notes and non-repo-notes

- Git issues build up and become huge threads over time, lack hierarchical organization or advanced state/dependency tracking, rely on arbitrary tags, etc.

- Git issues are non-personal, often public-facing - and even on personal repos there is sometimes the thought that you might publish the git repo one day, and now it's littered with your private thoughts.

- During dev, the process often is a bit experimental in nature - this amounts to tons of failed ideas or dead notes that clutter the issue db

- Not a lot of support for the note-type distinction between "moonshot"-style rough ideas and actual concrete projects. If you use GitHub Issues for all your notes, you'll have all your wacky ideas mixed in with your actual issues. You'll subconsciously avoid posting the wild ones, which means you'll subconsciously dampen your creativity.

All of these lead me to suggest sticking with a well-designed centralized project note-taking system, using GitHub Issues to track /issues/ and then /linking/ to the issue in your note-taking system (ideally with a little automation glue), or simply avoiding GH Issues entirely for your personal issue tracking and working directly off of your notes system.


I've addressed many of those problems with a bunch of different tricks:

- I have a simonw/notes repo which will NEVER be public, which is for me to put issue threads in for things that aren't part of one of my other repos, or that I want to keep private. I sometimes change my mind and transfer issues out into other repos using this trick: https://til.simonwillison.net/github/transfer-issue-private-...

- I frequently run searches across every issue in my "simonw" GitHub profile, which covers both my public and private issue repos

- I also built https://github-to-sqlite.dogsheep.net/github/issue_comments - a separate Datasette search interface across 10,000 issue comments from my various repos. I use that a lot less now that GitHub search has improved though.

- For tying issues together, I use GitHub Projects - which can contain issues from multiple repos in a single place. I use that for my personal TODO list, and for collaborations with other people that span multiple repos.

- I link issues together a LOT - if you add a "- #123" Markdown bullet point list in a GitHub issue comment it will turn each bullet point into a fully displayed link to the referenced issue, including its open or closed state.

- You can use this for checklists too: "- [ ] #123" will turn into a checkbox which automatically checks itself when the referenced issue is closed.

- I use GitHub labels for things like "Research" to mark issues which are longer running research things as opposed to active bugs or features.


GitHub is nice, but for about a year now I have been asking myself "what if they lock me out of my account" for online services, and there is no good answer.

Obsidian plus a data query plugin can do most of the stuff you've described but my data isn't at the mercy of a 3rd party, so that's what I go with.


Yeah, I worry about that too. I have SO much of my stuff dependent on my GitHub account now.

I'm slightly reassured by how useful their APIs are. I have automated exports of a lot of my GitHub issues, though I really should shore those up and make sure I'm capturing everything.

That's one of the reasons I built https://github.com/dogsheep/github-to-sqlite


This seems to work well for me when I only have a single (highest priority) project for the next couple months. But as soon as I have competing priorities, the context switches (or scheduling overhead) kill all the momentum that I could build up with small increments.


If they are competing with each other such that neither is getting done, are they really priorities?


Yes. They are in the important but not urgent quadrant of the Eisenhower matrix.


And yet, if you have two tasks that each take 1 hour, and only 90 minutes until the deadline, only one task will be completed by you, regardless of quadrant, and you will then be reminded that priority is singular.


That scenario moves the goalposts towards a losing proposition. You have two important and urgent things and already know you'll fail at least one of them.

TFA is about non-urgent, important tasks that you want to show incremental progress on. If there are multiple of those, then the actual work in those short bursts can be eaten up by context switching, reprioritization, scheduling, etc.


What I do in those cases is to arbitrarily pick an order in which to do those things (likeliness to become urgent in the future, difficulty, unpleasantness, etc.). If I can't, I just pick a random order.


Where do you draw the line though?


Because if you start making more widgets, then your support, marketing, sales acquisition, et al. costs also have to scale. But that needs foresight... And until the AI workforce comes for those too, it's easier to claim a short-term victory.


Great answer that addresses the business incentives. Scaling is hard (perhaps impossible in some markets) and possibly expensive, while cutting labor costs has an immediate benefit to the bottom line.

