joshheyse's comments | Hacker News

In the age of Claude Code and other MCP-driven agents, the last thing I want is for my commit history to be mutable. I've added instructions so that Claude makes a commit before each modification, with a summary of the conversation that caused the change. Once I'm happy with the work, I squash the commits and push. (Which I believe is more or less equivalent to the jj workflow.)
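The instructions boil down to something like this (an illustrative sketch; the branch name and commit messages are made up, not my actual prompt):

    # Claude checkpoints before each modification, summarizing the conversation
    git checkout -b claude/retry-logic
    git add -A && git commit -m "claude: add retry logic (per conversation summary)"
    git add -A && git commit -m "claude: fix flaky test (follow-up prompt)"

    # Once I'm happy, collapse the checkpoints into one commit and push
    git reset --soft main
    git commit -m "Add retry logic to the HTTP client"
    git push origin HEAD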


This flow works with jj as well, and would look almost exactly the same: you'd do `jj new`, have Claude set the description and make its changes, then run `jj new` again when you're ready for the next round.

I think jj's mutable revisions are best thought of as automation around `git commit --amend && rebase_everything_after_this_commit`. If you're not using that kind of flow with Claude, you wouldn't use that kind of flow with jj either.
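A rough sketch of that loop in jj (from memory, so exact flags may vary by version; the messages are illustrative):

    jj new -m "claude: add retry logic"    # Claude's edits land in this revision
    jj new -m "claude: fix flaky test"     # next round, next revision
    jj squash                              # fold the working copy into its parent
    jj describe @- -m "Add retry logic to the HTTP client"
    jj git push -c @-                      # push, creating a bookmark for the change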


I'm guessing the speeds were slow because QoS was throttling what was supposed to be a chat-only connection.

Why not just spoof the MAC address of a "machine" that has paid?

I had written a utility that monitors MAC addresses on the network and tries each one until it finds one that is allowed.
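Something like this Linux sketch captures the idea (not the original code; it needs root, and the interface name is a placeholder):

    IFACE=wlan0
    # harvest MACs seen on the LAN and try each one until the portal lets us through
    for mac in $(arp -an | grep -oE '([0-9a-f]{2}:){5}[0-9a-f]{2}'); do
        ip link set dev "$IFACE" down
        ip link set dev "$IFACE" address "$mac"
        ip link set dev "$IFACE" up
        sleep 5
        # if the outside world is reachable, this MAC is on the paid list
        curl -s --max-time 5 https://example.com >/dev/null && { echo "working MAC: $mac"; break; }
    done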

Looks like someone released an app to do just that.

https://github.com/t-mullen/wififox


They may have been throttled/deprioritized, but often the plane just has a really high-latency connection. Some in-flight internet is provided via geostationary satellites, which impose a minimum of about 250ms of latency. Even when a low-earth-orbit satellite is used, I expect traffic is back-hauled to the WiFi vendor's datacenter before going out to the internet.
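That ~250ms floor falls straight out of the geometry (back-of-envelope, assuming the satellite is directly overhead):

    # GEO altitude ~35,786 km; speed of light ~299,792 km/s
    echo "scale=0; 2 * 35786 * 1000 / 299792" | bc    # ~238 ms, ground -> satellite -> ground

A full request/response crosses that path twice, so pings land near 500ms before any terrestrial back-haul is counted.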

Once your latency reaches hundreds of milliseconds, no amount of bandwidth will make your connection feel snappy. And bufferbloat means even a workload that could theoretically saturate a high-latency link may actually get trashed.


> It's a hard problem. That same internal customer will simultaneously rail against the recharge in their budget for IT. It's a cost: a drag on their P&L. IT says they're under-resourced, and they could do it quicker with more people - but that would increase the P&L drag. Vicious circle.

I started my career as a Solutions Consultant. Our primary customers were small business units in large organizations that were frustrated with “IT” and looked externally to solve their problem. Low code is a variation on this strategy.

Our delivery time estimates always beat IT estimates and our costs were often less.

Maybe because we were seen as a competitor to IT, or maybe because some manager was being sneaky, our first interaction with IT usually came after the engagement contract was signed. (None of these projects had RFPs/RFQs.)

The discussions with IT were when the hard parts of the engagement really happened. That was when we learned about the compliance requirements: security, data integrity, availability, platform standards, CI/CD, PMI, etc. These unknowns often dragged out our delivery times and sent our billable hours skyrocketing, putting us equal to or behind internal IT.

In my experience at large organizations, compliance is more closely aligned with legal than with IT, but it is often a function of IT: the rules set forth by the legal team are enforced through technical and process controls by IT. This makes IT look like the "problem" when in fact they are just following mandates set forth by legal.

It's often easy for a business unit to complain that IT is blocking revenue growth and get an exception. If a business unit complained to upper management that legal wouldn't let them do something, I doubt exceptions would be granted as easily.

I'd suggest that compliance be its own department and review all external tools and vendors, rather than IT doing it. This would put external consultants and low-code solutions on par with internal IT. It would also shorten the feedback loop between those creating the rules and those the rules cause grief.


With only 3 people, anything that helps tick things off the checklist of tasks is welcome.

For reference, as a CTO at a startup of 5 my tasks included:

- Most development work (1 other dev)
- Product Manager
- Project Manager
- Managing the tech budget
- Negotiating vendor contracts
- Office IT help desk
- Network Administrator
- System Administrator
- Colo tech (racking servers and network equipment)
- Office furniture builder
- Customer Support
- Working with patent attorneys
- Managing the other C-levels' daily tasks
- Recruiting more people to reduce the list above

Sometimes it’s much more than you signed up for, but if it’s a business you believe in, you just keep checking things off the task list.


Most financial market data comes with strict licensing. To get data from NYSE, NASDAQ, CME, or most other exchanges, you must sign their agreements. Selling this data to vendors, news networks, and end consumers is a large portion of an exchange's revenue.

The more granular the data, or the faster you want it, the more it costs. In addition to paying for the initial feed, you are not allowed to redistribute the data, in real time or historically, without paying per-user royalties to the original providers.

This creates a complicated accounting system for data delivered to end users.

Aggregate market data vendors (FactSet, Bloomberg, Activ) pay for the initial data and then pay per user, per query, etc.

Some data becomes available for unlimited distribution under its license; usually this data is time-delayed and not allowed to be queried for historical analysis. You'll often see notes on Yahoo or Google Finance that say "market data delayed by 10 min".

One of the most common sources of this delayed data is Yahoo, and there are open APIs for querying it, usually in Python or a statistical language like R.
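For example, with the third-party yfinance Python package (one of several Yahoo wrappers; I'm assuming it here, but the others look about the same):

    pip install yfinance
    # daily bars for the last five sessions, delayed per Yahoo's terms
    python3 -c 'import yfinance as yf; print(yf.Ticker("AAPL").history(period="5d"))'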

Interactive Brokers has an API you can add to your brokerage account for programmatic access across multiple exchanges, providing both market data and order entry.

If I misunderstood you, and you are interested in an open API standard for financial data, there are several: FIX, ITCH, and OUCH are some. But they are almost always forked per exchange, and sometimes even per product.

tl;dr: It's not the API or infrastructure that costs; it's the data itself.


I never thought exchanges were making money by selling data.


That's practically their business model. In the modern world, exchanges have three things on offer: standardization, order entry, and market data.

The last two are how they make their money.


They are shoes. It's pretty easy to see how they'd split a pair into two packages of equal weight and size for the test.


Sure, you could assume that; they didn't say so, though.

As an experimental design that would be pretty good, but as a product experience I would not be happy to unbox one shoe, then wait days for the other to arrive or find out it was lost in the mail.


