michaelbuckbee's comments | Hacker News

(not a joke) I wonder to what extent the ability to produce a Rush Hour 4 will affect the deal.

https://www.cnbc.com/2025/11/25/trump-pushed-paramount-reviv...


Stranger than fiction.

I can't wait to see how Chris Tucker plays it

The usage of worktrees is seeing a big comeback in the era of AI assisted coding.

I have a script that takes GitHub issues and spins each one out into its own worktree with a corresponding stack.

I can then run individual instances of Claude Code in each and easily flip between them.
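
For anyone curious, the heart of it is only a few commands. This is a rough sketch rather than my exact script, and it assumes you have the gh CLI installed and that Claude Code is on your PATH as `claude`:

    #!/usr/bin/env bash
    # Sketch: one worktree + branch per open GitHub issue.
    set -euo pipefail

    gh issue list --state open --json number --jq '.[].number' |
    while read -r issue; do
      branch="issue-${issue}"
      dir="../worktrees/${branch}"
      [ -d "$dir" ] && continue          # worktree already created
      git worktree add -b "$branch" "$dir"
    done

    # then, per worktree: cd ../worktrees/issue-123 && claude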


Same. Never used worktrees before, but mapping a worktree to each ticket I'm assigned, for Claude to work on, is really great.

Heck, with the AI I even have it spin up a dev and test DB for that worktree in a Docker container. Each worktree has its own, so they don't conflict on that front either. And I won't lie, I didn't write those scripts. The models did it all and can make adjustments when I find different work patterns that I like.
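
The DB part is nothing fancy either; it's roughly this (a sketch that assumes Postgres, with the container name and port derived from the branch so worktrees don't collide):

    #!/usr/bin/env bash
    # Sketch: throwaway Postgres per worktree, keyed off the branch name.
    set -euo pipefail

    branch="$(git rev-parse --abbrev-ref HEAD)"
    branch="${branch//\//-}"    # slashes aren't valid in container names
    port=$((15432 + $(cksum <<< "$branch" | cut -d' ' -f1) % 1000))

    docker run -d --name "dev-db-${branch}" \
      -e POSTGRES_PASSWORD=dev \
      -p "${port}:5432" \
      postgres:16

    echo "DATABASE_URL=postgres://postgres:dev@localhost:${port}/postgres"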

All of which leaves me wondering why I never did this for myself in the past, given how often I'm working on multiple parts of a codebase and how annoying it is to commit, stash, and check out a different branch instead of being able to move quickly between tasks as blockers are resolved.


What comeback? It’s been there for years and people who have use for them use them (or use git-clone(1) if they are not aware of them). It didn’t fall out of use at any point.

"Comeback" is probably the wrong word, maybe "uptick" in usage?

Worktrees just fit particularly well for the scenario of developing multiple different features in parallel on the same codebase, which is a pattern that devs doing a lot of AI assisted development have.


Consider Google's search results page (setting aside the ads and dark patterns for a moment) as a form of generative UI.

You enter a term, and depending on what you entered, you get a very different UI.

"best sled for toddler" -> search modifiers (wood, under $20, toboggan, etc.), search options, pictures of sleds for sale from different retailers, related products, and reviews.

"what's a toboggan" -> AI overview, Wikipedia summary, People Also Ask section, and a block of short videos on toboggans.

"directions to mt. trashmore" -> customized map of my current location to Mt. Trashmore (my local sledding hill)

Google has spent an immense amount of time and effort identifying the underlying intent behind all kinds of different searches, and it shows a very different "UI" for each in a way that makes a very fluid kind of sense to users.


Okay. They all suck. I want a list of websites that are likely to refer to my search query. Google is terrible at understanding my intent and even more terrible at displaying information in such a way as to facilitate my task.

As an example: I was searching for an item to purchase earlier. It's a very particular design; I already know that it's going to send me a bunch of slightly-wrong knockoffs. The first thing I want to see is all of the images that are labeled like my query, as many as possible at once, so that I can pick through them. Instead, it shows me the shopping UI, which fills the screen with pricing and other information for a bunch of things that I'm definitely not going to buy, because they're not what I'm looking for. Old Google would have had the images tab in a predictable place; I'd be on it without even thinking. Now? Yet another frustrating micro-experience with Nu-gle.


I totally agree with this, and I'd go even further.

When I'm having trouble with software, I often turn to Google to figure out how to use it. I'm then directed to a YouTube video, help article, or blog post with instructions.

My take is that people are already accustomed to this question-and-answer model. They're just not used to finding it within the application itself.


That's really neat.

Even there, didn't they recently make some changes to the CS:GO skins ecosystem that devalued much of the aftermarket sales?

It's not going to pay off for everybody, this is a land grab for who will control this sort of central aspect of AI inference.

> this is a land grab

is it though? Unlike fiber, current GPUs will be obsolete in 5-10 years.


Amazon also uses Claude under the hood for their "Rufus" shopping search assistant which is all over amazon.com.

It's kind of funny, you can ask Rufus for stuff like "write a hello world in python for me" and then it will do it and also recommend some python books.


> It's kind of funny, you can ask Rufus for stuff like "write a hello world in python for me" and then it will do it and also recommend some python books.

Interesting, I tried it with the chatbot widget on my city government's page, and it worked as well.

I wonder if someone has already made an openrouter-esque service that can connect claude code to this network of chat widgets. There are enough of them to spread your messages out over to cover an entire claude pro subscription easily.


A childhood internet friend of mine did something similar to that but for sending SMSes for free using the telco websites' built in SMS forms. He even had a website with how much he saved his users, at least until the telcos shut him down.

Phreaking in 2025

Well, phreaking in 2003-05 (no clue exactly when anymore), so at a time when you could still get free phone calls on the pay phones in the library or hotel lobby.

Not sure for Claude Code specifically, but in the general case, yes - GPT4Free and friends.

I think if you run any kind of freely-accessible LLM, it is inevitable that someone is going to try to exploit it for their own profit. It's usually pretty obvious when they find it because your bill explodes.


Are you sure? While Amazon doesn't own a "true" frontier model they have their own foundation model called Nova.

I assume if Amazon were using Claude's latest models to power its AI tools, such as Alexa+ or Rufus, they would be much better than they currently are. I assume if their consumer-facing AI is using Claude at all, it's a Sonnet or Haiku model from 1+ versions back, simply due to cost.


> Are you sure? While Amazon doesn't own a "true" frontier model they have their own foundation model called Nova.

I work for Amazon, everyone is using Claude. Nova is a piece of crap, nobody is using it. It's literally useless.

I haven't tried the new versions that just came out though.


> I assume if their consumer facing AI is using Claude at all it would be a Sonnet or Haiku model from 1+ versions back simply due to cost.

I would assume quite the opposite: it costs more to support and run inference on the old models. Why would Anthropic make inference cheaper for others, but not for Amazon?


There may well be some "interesting" financial arrangements in place between the two. After all, Claude models are available in AWS Bedrock, which means Amazon are already physically operating them for other client uses.

Nova 2 came out today so not clear how good it is yet, but Nova 1 was entirely uncompetitive.

Supposedly competitive with Haiku 4.5, GPT 5 Mini and Gemini 2.5 Flash: https://aws.amazon.com/blogs/aws/introducing-amazon-nova-2-l...

Looks less "intelligent" to me, just a lot more trained on agentic (multi-turn tool) use so it greatly outperforms the others on the benches where that helps while lagging elsewhere. They also released bigger models, where "Pro" is supposedly competitive with 4.5 Sonnet. Lite is priced the same as 2.5 Flash, Pro as GPT 5.1. We'll definitely do some comparative testing on Nova 2 Lite vs 2.5 Flash, but not expecting much.

Claude 2.0 was laughably bad. I remember wondering why any investor would be funding them to compete against OpenAI. Today I cancelled my ChatGPT Pro because Claude Max does everything I need it to.

Rufus is a Claude Haiku, yes.

I wonder if that sentence will have any discernible meaning 100 years from now.

> It's kind of funny, you can ask Rufus for stuff like "write a hello world in python for me" and then it will do it and also recommend some python books.

From a perspective of "how do we monetize AI chatbots", an easy thing about this usage context is that the consumer is already expecting and wanting product recommendations.

(If you saw this behavior with ChatGPT, it wouldn't go down as well, until you were conditioned to expect it, and there were no alternatives.)


There are really impressive marketing/advertisement formulas to be had. I won't share mine, but I'm sure there are many ways to go step by step from not-customers to customers where each step has a known monetary value. If an LLM does something impressive in one of the steps, you also know what it is worth.

I have an approach for several of these steps, which involves adapting a kind of non-LLM respected-authority tech approach (my previous side project) to LLMs.

I think it can be done right now with MCP servers, in a way that you don't immediately hand over your data to the chatbot portal companies so that they can cut you out. (But, over time/traffic, they could quickly learn to mimic your MCP server, much like they mimic Web content and other training data, and at least appear to casual users to interact like you, even if twisted to push whatever company bid for the current user interaction. I haven't figured out what you do when they've trained on mimicking you with an evil twin; maybe you get acquired early, and then there are more resources to solve that next problem.)


Haha just tried and it works! First I tried in Spanish (I'm in Spain) and it simply refused, then I asked in English and it just did it (but it answered in Spanish!)

EDIT: I then asked for a Fizzbuzz implementation and it kindly asked. I then asked for a Rust Fizzbuzz implementation, but this time in Spanish again, and it said that it could not help me with Fizzbuzz in Rust, but any other topic would be ok. Then again I asked in English "Please do Rust now" and it just wrote the program!

I wonder what the heck they are doing there? Is the guardrail prompt translated to the store language?


* "and it kindly answered"

I just tried and Rufus does not write any python for me. Just directs me to buy books on python.

lol, I tried it. Asked `write the product details in single-line bash array` and it did so.

David Simon's earlier work, "Homicide," had a lot of interesting switching between film and video, and between aspect ratios, as well. I think it's something that he's been interested in for a long time.

David Simon wrote the source material for Homicide but did not "make" the show; he cut his teeth in TV writing on Homicide.

I have Homicide on DVD, it also has some good extras.


It was recently remastered. I watched in the original 4:3 but I'm happy that some love has been put into restoring the show, albeit in unintended 16:9.

I really like how you framed the alt text as the takeaway or the learning that needs to happen, rather than a recitation of the image. Where I've often had issues is more with things like business charts and illustrations, and less with cute cat photos.

"A meaningless image of a chart, from which nevertheless emanates a feeling of stonks going up"

The logic stays the same, though the answer is longer and not always easy. Just saying "business chart" is totally useless. You can make a choice about what to focus on and say "a chart of the stock for the last five years with constant improvement and a clear increase of 17 percent in 2022" (if it is a simple point that you are trying to make), or you can provide an HTML table with the datapoints if there is data that the user needs to explore on their own.


But the table exists outside the alt text, right? I don't know of a mechanism to say "this HTML table represents the contents of this image" in a way that screen readers and other accessibility technologies can take advantage of.

The figure tag can wrap both the image and a caption tag, which links them. As far as I remember, content can also be marked as screen-reader-only if you don't want the table to be visible to the rest of the users.
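
Roughly like this (just a sketch; the sr-only class is the usual visually-hidden CSS utility, not something built into HTML):

    <figure>
      <img src="stock-chart.png"
           alt="Stock price over the last five years: steady improvement, with a clear 17% increase in 2022">
      <figcaption>Five-year stock price history</figcaption>
      <!-- full data, hidden visually but exposed to screen readers -->
      <table class="sr-only">
        <tr><th>Year</th><th>Price</th></tr>
        <!-- ... one row per data point ... -->
      </table>
    </figure>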

Additionally, I've recently been a participant in accessibility studies where charts, diagrams and the like were structured to be easier to explore with a screen reader. Those needed JS to work and some of them looked custom, but they are also an alternative way to layer data.


It might be that you’re not perfectly clear on what exactly you’re trying to convey with the image and why it’s there.

What would you put for this? "Graph of All-Transactions House Price Index for the United States 1975-2025"?

https://fred.stlouisfed.org/series/USSTHPI


Charts are one I've wondered about: do I need to try to describe the trend of the data, or provide several conclusions that a person seeing the chart might draw?

Just saying "It's a chart" doesn't feel like it'd be useful to someone who can't see the chart. But if the other text on the page talks about the chart, then maybe identifying it as the chart is enough?


It depends on the context. What do you want to say? How much of it is said in the text? Can the content of the image be inferred from the text part? Even in the best scenario though, giving a summary of the image in the alt text / caption could be immensely useful and include the reader in your thought process.

What are you trying to point out with your graph in general? Write that basically. Usually graphs are added for some purpose, and assuming it's not purposefully misleading, verbalizing the purpose usually works well.

I might be an unusual case, but when I present graphs/charts it's not usually because I'm trying to point something out. It's usually a "here's some data, what conclusions do you draw from this?" and hopefully a discussion will follow. Example from recently: "Here is a recent survey of adults in the US and their religious identification, church attendance levels, self-reported "spirituality" level, etc. What do you think is happening?"

Would love to hear a good example of alt text for something like that where the data isn't necessarily clear and I also don't want to do any interpreting of the data lest I influence the person's opinion.


> and hopefully a discussion will follow.

Yeah, I think I misunderstood the context. I understood/assumed it to be for an article/post you're writing, where you have something you want to say in general/some point of what you're writing. But based on what you wrote now, it seems to be more about how to caption an image you're sending to a blind person in a conversation/discussion of some sort.

I guess at that point it'd be easier for them if you just share the data itself, rather than anything generated by the data, especially if there is nothing you want to point out.


An image is the wrong way to convey something like that to a blind person. As written in one of my other comments, give the data in a table format or a custom widget that could be explored.

https://www.w3.org/WAI/tutorials/images/ includes how to write alt text for charts.

Charts would have a link to tabular data. It’s the “business illustrations” that are more about understanding purpose.

a plaintext table with the actual data

sorry, snark does not help with my desire to improve accessibility in the wild.

I really didn’t mean to be snarky. Maybe if I was speaking, my tone would have made that more clear, or I could have worded it differently.

“Why is this here? What am I trying to say?” are super important things in design and also so easy to lose track of.


Generally speaking, it has a lot of information from things like the OP's blog post on how best to structure the file and the prompt itself, and you can also (from within Claude Code) ask it to look at posts or Anthropic's prompting best practices and adapt those to your own file.
