
It is exactly what happened in France when they introduced a similar tax. All traders moved into CFDs (contracts for difference) instead of trading the underlying stock.


HFT serves the same purpose that human Market Makers and Specialists do, only better. Kill it, and you will end up paying more.

Did you complain when human travel agents were replaced by Expedia and the like? Would you complain if the car salesman disappeared as a profession? Do you see your profit when Amazon competes with all the brick-and-mortar shops? What makes HFT so special in that list?

So strange to see that sentiment from a science fiction author. Afraid to lose to reality evolving faster than you can imagine?


There have been studies proving that the majority of HFT traders are net liquidity 'takers'. The notion that HFT provides liquidity is false, but the industry still propagates the myth to Main Street.

Have a look at a recent paper: http://www.bankofcanada.ca/wp-content/uploads/2012/11/Brogaa...


This paper is a joke. It only looks at S&P futures. If I added liquidity in SPY or any of the other S&P ETFs and hedged my exposure by removing liquidity in the futures, they would consider me an 'aggressive' trader.

They're missing 90% of the picture.


It is an ecosystem, of course traders with fast computers would try to fill all the niches they can. Take a look at a broader picture: http://www-rcf.usc.edu/~lharris/ACROBAT/Zerosum.pdf

I suspect those in "Panel C: Losers who expect to profit from trading but will not" are complaining the most.


You're citing a very old paper that makes no mention of HFT? Not sure I follow the reasoning for the citation.

No one is saying that speculative traders don't serve a function in the marketplace. However, you said "HFT serves the same purpose that human Market Makers and Specialists" do, when it has been proven they don't. HFTs want you to believe that they serve some altruistic purpose in the marketplace in order to legitimize their existence. The truth is that HFT acts as a quasi-tax on each and every share traded: to execute a trade in today's marketplace, non-HFT volume almost invariably passes through the hands of HFTs, who shave pennies of profit off each trade.


Do you have some citation for that statement? Is there something that makes trading every millisecond much more efficient than trading, say, every tenth of a second?


For example, traders in equities can participate in "on close" auctions if they prefer to do so. No millisecond guessing, no bid/ask at all, a pure double-sided auction. Yet only ~10% of the volume goes there.

You can open up your own ECN and offer fixed auctions every minute if you think this will attract people who feel cheated by HFT.


While algorithmic trading may provide liquidity, it tends to exit markets during crashes and abnormal events. Moreover, I suspect that high-frequency trading's real advantage is not in providing lower spreads, but in much faster reaction time. I would eagerly agree to slightly higher spreads and slower times for trades if it meant less volatility and fewer crashes.


Those traders who didn't exit during the flash crash profited the most, so what makes you think they will pull out next time? On the flip side: the Market Makers who were forced to trade Facebook on the day of the IPO, when the system was totally broken, lost the most. What makes you think they will handle the next problem better?

Anybody who's making a profit is almost by definition making the markets more efficient and less volatile. They buy when the price is low (pushing it up) and sell when it is high (pushing down).


No, they don't serve the same purpose that market makers and specialists do. MMs and Specialists exist to stabilize the markets and are obligated to create and maintain orderly markets.

HFT have no such obligation, so they can create liquidity and remove it whenever they want. They caused the Flash Crash in 2010 by removing a large amount of liquidity when the markets needed it the most.

If HFT were forced to maintain liquidity like real market makers and specialists, then I would have no qualms with them. But they want to have their cake and eat it too, they say they provide liquidity but only when it's convenient for them, and that is the part that is total BS.


It wouldn't break current theory; it would just mean that photons travel slower than the "speed of light" and have non-zero rest mass. The constant c in relativity would then mean the fastest speed possible, rather than the speed of photons.


I have a comment about this. c, as it is used in general relativity, is involved in a lot more than just the speed at which light travels. It is also relevant to a lot of other equations, like time dilation in a gravitational field. Now, if this experiment resulted in changing our concept of c from the "speed of light" to the "speed of neutrinos", then you're probably right. But I have to imagine that c has been verified experimentally in non-light-related experiments.

For example, there is a certain speed where, if you exceed it, you are able to violate causal time relationships. I can't think of any experiments that would validate this. However, there is also the fact that, theoretically, if you attempt to accelerate matter to the speed of light, its mass will increase infinitely. So if you accelerate it a little bit, its mass should increase a little bit, and you should be able to confirm the speed of light through an experiment where you measure infinitesimal increases in mass during large accelerations.

So my comment is that if he just broke the speed at which light travels, then everything is fine. But if he broke the speed at which you are able to violate causality, or the speed at which the mass of an object is infinite, then our entire understanding of physics is likely to be invalid.
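The mass-increase argument can be made concrete with the Lorentz factor γ = 1/√(1 − v²/c²), which governs how relativistic mass/energy grows with speed. A small sketch (illustrative speeds only, not tied to any particular experiment):

```python
import math

def gamma(v_frac):
    """Lorentz factor for a speed given as a fraction of c."""
    return 1.0 / math.sqrt(1.0 - v_frac ** 2)

# Relativistic mass/energy grows with gamma and diverges as v -> c,
# which is why "a little acceleration, a little mass increase" holds
# and why no massive particle can reach c itself.
for v in (0.1, 0.5, 0.9, 0.99, 0.999):
    print(f"v = {v:5.3f} c  ->  gamma = {gamma(v):8.3f}")
```

At 0.1c the factor is barely above 1, but it blows up rapidly near c, which is the divergence the parent comment is describing.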

Related reading - tachyon pistols

http://sheol.org/throopw/tachyon-pistols.html


Not really. The reason for fixing c as the speed limit is that the number arises naturally as the speed of EM waves from Maxwell's equations. Briefly put, since these equations are valid in all frames, the Galilean speed-addition rule has to be wrong, and c should be the same in all inertial frames.

So you can't just use c for "the highest speed any particle can have in vacuum".


No.

Non-zero rest mass photons will break a lot of theories.

Plus, the "speed of light" is not merely an experimental result coming out of an interferometer. It's also a theoretical result, e.g. from Maxwell's equations - that's the one referred to by special relativity.


An alternative is that neutrinos and photons travel a different distance because there are extra dimensions that affect the two types of particles differently. These are the so-called space-time foam models.


Don't forget... c is the speed of light 'in a vacuum'. Light travels slower through gas, water or glass. Light has even been slowed down to walking speed in a laboratory. The speed of light is not constant. The speed of light in a vacuum is. We assume.


> Light travels slower through gas, water or glass.

There is a good description of what is going on in this Stack Exchange post:

http://physics.stackexchange.com/questions/13738/propagation...

It explains why saying "c is the speed of light" makes sense, because when we say light is traveling more "slowly" through a material, we are including the time spent interacting with the material, being absorbed and re-emitted.

I'm bristling a little at your statement that "the speed of light is not constant". Imagine two men walking at the same speed from A to B, but one of them is walking in a straight line while the other is zigzagging. It would be fair to say that the one walking in a straight line is travelling from A to B faster, even though they are both moving at the same speed. The speed of light is a constant; it is just that light travelling through a medium doesn't necessarily spend all of its time travelling in one direction.
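For a rough sense of the numbers involved: the effective (phase) speed of light in a medium is v = c/n, where n is the refractive index. A quick sketch with common textbook values for n:

```python
# Phase speed of light in a medium: v = c / n (n = refractive index).
# The photons still "move at c" between interactions; v is the net
# propagation speed through the material.
c = 299_792_458  # m/s, speed of light in vacuum

refractive_index = {"vacuum": 1.0, "air": 1.0003, "water": 1.33, "glass": 1.5}

for medium, n in refractive_index.items():
    print(f"{medium:>6s}: v = {c / n:,.0f} m/s")
```

In water the propagation speed drops to roughly three-quarters of c, even though each photon's "instantaneous" speed between interactions is unchanged.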


>>The speed of light is not constant

I think this is wrong. Regardless of the medium, the speed of light is always constant. It seems to slow down because the photons are being absorbed and re-transmitted by atoms, but the speed of light itself is always the same.


After much research... the concept is correct, although 'absorbed' and 'retransmitted' are not the right words to use. http://en.wikipedia.org/wiki/Slow_light

"Light traveling within a medium is no longer a disturbance solely of the electromagnetic field, but rather a disturbance of the field and the positions and velocities of the charged particles (electrons) within the material. The motion of the electrons is determined by the field (due to the Lorentz force) but the field is determined by the positions and velocities of the electrons (due to Gauss' law and Ampere's law). The behavior of a disturbance of this combined electromagnetic-charge density field (i.e. light) is still determined by Maxwell's equations, but the solutions are complicated due to the intimate link between the medium and the field. Understanding the behavior of light in a material is simplified by limiting the types of disturbances studied to sinusoidal functions of time. For these types of disturbances Maxwell's equations transform into algebraic equations and are easily solved. These special disturbances propagate through a material at a speed slower than c called the phase velocity."

As another commenter pointed out, you're splitting hairs. The "speed" of light and how fast it is "moving" depend on how you define those terms. The second paragraph of the Wikipedia speed-of-light article has it right: the "speed at which light propagates through transparent materials" does change.


That's splitting hairs in a way that makes you deviate from standard usage of the term. Physicists say things like "the speed of light in water is lower than the speed of light in a vacuum".


But to a lay person that statement does not mean the same thing as it does for a physicist. To a lay person it sounds as if the photons literally slow down. And I bet that a lot of people repeat this statement thinking that light actually slows down.


No, it's correct. Light actually gets slowed down in a material (anything other than a vacuum). That has nothing to do with observation; it's an actual physical effect. The speed of light in a vacuum is constant, and so is the speed of light in any particular pure material (like a pure gas). It's just that those constant speeds are different.


I say you are incorrect. Read this: http://physlink.com/Education/AskExperts/ae509.cfm

Here is a relevant piece:

When light enters a material, photons are absorbed by the atoms in that material, increasing the energy of the atom. The atom will then lose energy after some tiny fraction of time, emitting a photon in the process. This photon, which is identical to the first, travels at the speed of light until it is absorbed by another atom and the process repeats. The delay between the time that the atom absorbs the photon and the excited atom releases a photon causes it to appear that light is slowing down.


> The delay between the time that the atom absorbs the photon and the excited atom releases a photon causes it to appear that light is slowing down.

If the photon is covering less distance over the same amount of time, I am OK with saying the velocity is lower, and that it has slowed down.


"Well, actually, no, officer, I wasn't speeding. You see, while you clocked me at 90mph [c] between toll booths [atoms], once you factor in time at the booth, you'll see that I am actually driving much more slowly."


But equating decrease in velocity with "slowing down" would be confusing for most laypeople, at least.

Nobody would say that they slowed down if they increased their speed as they went through a turn.


"Material" is made of smaller things, which I think the GP is getting at. The actual photons that travel from electron to electron and such don't get slowed down; they effectively travel through a vacuum that is the tiny spaces inside molecules and atoms.


Isn't the speed at which photons travel, by definition, the speed of light?

And as for the "c" in e=mc^2, doesn't this suddenly make "c" an unknown constant? Doesn't the fact that "c" changes suddenly change the values of the other variables in that equation as well? That seems pretty fundamental to me...


As I understand it, Einstein's work rests on there being a fundamental maximum 'speed' and it seemed to him as though the speed of photons was that limit, so 'speed of light' became synonymous with this maximum. But it doesn't necessarily have to be so.

So if there's something faster, it changes our understanding of photons but not the existence of this fundamental maximum speed.

As you note, our efforts to measure c may have been off due to measuring the wrong thing, but I don't know the ramifications of a small % change in c.

(I'm not a physicist)


Yes, it is -the- basic assumption of special relativity. And no, it doesn't just change our understanding of photons, it changes everything, since it's all connected. All the theories I've looked at so far incorporate relativity (at least in their more advanced versions).

(Another basic assumption, this time for general relativity, is the equality of inertial and gravitational mass, which is not a self-evident thing. However, so far no difference has been found; see http://en.wikipedia.org/wiki/E%C3%B6tv%C3%B6s_experiment.)


I think what he's getting at is that there's this value "c" that's really important to physics appearing in equations like e=mc^2 and determining the absolute upper bound on speed, and by the way, we used to assume that photons traveled at c, rather than their actual rate of 99.9975% of c.

I don't know whether changing c by this amount would break many experimental results. Adding a rest mass to photons sounds potentially revolutionary.


Einstein based his theory on the maximum speed at which information can propagate. That's always been assumed to be the speed of light (photons). It may be possible that there is something else that can propagate information faster (e.g. neutrinos). There would still be an upper speed limit, but it wouldn't be the one we thought it was :)


Isn't the speed at which photons travel, by definition, the speed of light?

Photons speed up and slow down routinely, depending on what medium they're traveling through. c, as it is used in the equations of relativity, is currently believed to be equal to the speed of light in a vacuum. But, with my limited knowledge of GR, my understanding is that gaika is correct and that the rest of the theory can still stand if this equality is broken.


This isn't technically correct. Photons always travel at the same speed, but in certain materials they are absorbed and emitted by atoms, causing their apparent speed to slow down.

A photon's instantaneous speed is always the speed of light.


>>Photons speed up and slow down routinely, depending on what medium they're traveling through

Do they really? As far as I know their speed is always constant in any medium. They just seem to slow down because they get absorbed and re-transmitted; that is where the loss of velocity comes from. When traveling between one atom and another, which is a vacuum, they are always traveling at the speed of light.


It wouldn't break current theory for photons to have nonzero rest mass?


Don't ask who is losing on the other side of your trades; ask instead who your customer is, what service you provide, and how it helps them.

If you're making a profit it means you bought inventory when the price was below fair value (your customers didn't need it and wanted to sell as fast as possible) and you sold it when the price was above fair value (your customers really needed the shares right now). The net benefit to everybody is lower volatility, as you moved the price down when it was too high and moved it up when it was too low.

Market efficiency is higher too: a lot less capital is required to establish fair prices, as the market reacts immediately to any imbalance.

It also makes the spread lower, and makes buying and selling stock cheaper for your customers. Only a few years ago market makers and specialists would chicken out at the first sign of trouble and widen the spread between bid and ask prices. Crossing the spread is a huge part of your overall expense of trading. Unfortunately very few investors understand the full impact it has on their returns, so they don't appreciate your contribution.
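The spread-capture mechanism described above can be sketched as a toy model (hypothetical numbers, symmetric order flow assumed): a market maker quotes around a fixed fair value and earns the half-spread on every customer order it fills.

```python
import random

random.seed(0)

# Toy market maker: quotes 99.95 bid / 100.05 ask around a fixed fair value.
fair_value = 100.00
half_spread = 0.05

inventory, cash = 0, 0.0
num_trades = 1000
for _ in range(num_trades):
    if random.random() < 0.5:
        # Customer buys: market maker sells one share at the ask.
        cash += fair_value + half_spread
        inventory -= 1
    else:
        # Customer sells: market maker buys one share at the bid.
        cash -= fair_value - half_spread
        inventory += 1

# Marking leftover inventory to fair value, the P&L is exactly the
# half-spread per fill, regardless of the direction of each order.
pnl = cash + inventory * fair_value
print(f"{num_trades} fills, P&L = {pnl:.2f}")
```

The model ignores adverse selection and price moves, which is exactly why real market makers widen quotes when conditions get risky; but it shows where the "quasi-tax" framing in the thread comes from: the half-spread is paid by whoever crosses the spread.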

Execution time is better now. Even during the flash crash it was possible to buy and sell with retail brokers, whereas I still remember times in 2001 when retail-broker market orders sometimes took minutes to fill.


"If you're making a profit it means you bought inventory when the price was below fair value (your customers didn't need it and wanted to sell as fast as possible), and you sold it when the price was high (your customers really needed the shares right now)."

It's a bit more complicated than that, because it's entirely possible that the seller in a trade is selling above his fair value and the buyer in that trade is buying below her fair value. Supply/demand certainly plays into it (a trader with a very large position is definitely willing to take some price hit if they are able to quickly unwind their position), which will invariably lead to a discussion of utility functions and slowly bore everyone :P


> invariably lead to a discussion of utility functions and slowly bore everyone

But that's the crux of the matter - both sides of the trade are getting some utility gain (otherwise the trade would not happen) and thus it is not a zero sum game.

It is still zero sum in short term dollars, which just obscures the subject for the people who equate utility with dollars.


No, they are not getting utility gain.

Recently I was selling a netbook, aiming to get about £100. A friend told me she wanted one and was looking to spend about £150. We split the difference and went with £125.

It would not have been a utility gain if someone had quickly jumped between us, given me £100, sold the laptop to her for £150 and kept the £50 for themselves. We would both have ended up out of pocket.

The whole point of exchanges is that they connect buyers and sellers. Parasites making a living by being faster than everyone else are not helping anyone.

If there was a level playing field, and everyone went at the same speed, and they could still make money, then I could believe they were adding some utility.


It's more subtle than that. If you wanted to make the sale now and your friend wanted to buy next week when she got paid, then an intermediary would be adding value, purely by being willing to act more quickly. The difference would cover them sitting on the laptop for a week.


It is yet more sophisticated for some HFT trading.

Continuing the analogy: some trading is as if a person standing between the two people trying to make a deal can hear the offers before either party can react, and then makes his own offers to each party to capture some of the difference (i.e. the HFT profit).

Explanation: In some forms of HFT the key advantage is that they can buy the market data faster than other people so they can see upcoming trades before other participants (undoubtedly there are better citations but here's a decent NYT article): http://dealbook.nytimes.com/2010/06/11/opening-up-the-market...


High-speed traders are living in the millisecond range. Do you think the people doing trading on the order of seconds are pleased they are paying to save milliseconds or not? I suspect not, but would be happy to be proved wrong.


What would really happen is that a second trader would enter the market, offer to buy the laptop for £110, hold onto it, and then sell it on for £140. Actual prices vary, and with enough competition they will settle at something that still gives the trader a nice profit plus enough cash to insure against the risk of such an operation (the trader doesn't know up front when he can actually offload the goods or how far the price might move in the meantime).

Of course, if you're not desperate on selling right now and or the risk for the trader is too high (i.e. the minimum price you'll sell right now is higher than what the trader is willing to offer you right now), nothing happens.

Concluding: traders do add utility to a market. And as others have already said, with high-frequency trading this mostly results in massively reduced spreads (and the temporal utility outlined in the laptop example essentially disappears).


These high-frequency traders are not parasites. They are providing liquidity. Essentially, they are providing you a service. If your friend had not come along, perhaps you would have been holding onto your netbook for a lot longer, or it may not have sold at all.

If you want to see what happens when liquidity dries up, you only have to look to the Flash Crash. Yes, the whole thing started due to an erroneous trade, but the fall was greatly magnified when all of the computers were pulled from the market when they couldn't make sense of things. If this type of trading were to be banished tomorrow, the market would tank so quickly that people would be begging for the computers to be flipped back on. The genie is already out of the bottle, and it can't go back in without considerable pain to everyone around it.


Don't forget that a lot of trades didn't clear.

I read somewhere that something like 30% of trades used to fall through because of paperwork mixups and other problems before electronic exchanges.

How would you like to think you sold at a profit only to find out that the trade didn't clear and now you're in the hole because the stock bottomed out?


The worst part was May 6, 2010 (the flash crash). The exchanges and the SEC arbitrarily threw out a number, 60%, and people who managed to buy at a 59% discount got to keep their (very) profitable trades, while people who managed to buy at a 61% discount lost only one leg of the trade. The process was not transparent at all :/

I know some people who made a boatload (all trades cleared), and others who lost a boatload (one part of their trade was cleared, but the other part was broken, and they had to go back and buy back at a very disadvantageous price)


Yes, you can break below diffraction limit. See http://en.wikipedia.org/wiki/Super-resolution


No, you are still constrained by the limit; super-resolution only works with the information that is captured.

The diffraction limit isn't a hard line; rather, the information disappears into noise as you go past it. So what SR does is use multiple images to remove the noise for details close to the limit, using some neat algorithms. But noise quickly overwhelms the information as you push against the limit.
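The "multiple images to remove the noise" idea can be sketched with the simplest possible algorithm, plain frame averaging (real SR pipelines are far more sophisticated; the signal and noise values here are made up). Averaging N independent noisy captures reduces the residual noise roughly as 1/√N, which is what lets detail near the noise floor be recovered:

```python
import random

random.seed(42)

signal = [0.0, 1.0] * 16   # a fine alternating pattern: the "detail"
sigma = 1.0                # noise on the same order as the signal itself

def average_frames(num_frames):
    """Average several independently noisy captures of the same scene."""
    frames = [[s + random.gauss(0, sigma) for s in signal]
              for _ in range(num_frames)]
    return [sum(col) / num_frames for col in zip(*frames)]

def rms_error(estimate):
    """RMS deviation of the estimate from the true signal."""
    return (sum((a - s) ** 2 for a, s in zip(estimate, signal))
            / len(signal)) ** 0.5

# Residual noise falls roughly as 1/sqrt(num_frames).
results = {n: rms_error(average_frames(n)) for n in (1, 16, 256)}
for n, rms in results.items():
    print(f"{n:4d} frames: residual RMS noise = {rms:.3f}")
```

With one frame the pattern is buried in noise; with 256 frames it is clearly recoverable, which illustrates both the power of the approach and its cost: the improvement is only square-root in the number of captures, so noise still wins eventually as you push the limit.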


When you turn the camera the perspective shifts, as if you're inside a cube with pictures on its walls. Is it that hard to fix, to get a more real feel like in a 3D game?

Edit: default zoom is too wide-angle. Zoom in and everything is so much better.


Here's a more scientific way of predicting crashes:

http://videolectures.net/risc08_sornette_fcrm/

"Most attempts to explain market failures seek to pinpoint triggering mechanisms that occur hours, days, or weeks before the collapse. Sornette proposes a radically different view: the underlying cause can be sought months and even years before the abrupt, catastrophic event in the build-up of cooperative speculation, into an accelerating rise of the market price, otherwise known as a "bubble." "


From the original report [1]:

"It was not our purpose to examine, nor did we seek evidence on, the science produced by CRU. It will be for the Scientific Appraisal Panel to look in detail into all the evidence to determine whether or not the consensus view remains valid."

[1] http://www.desmogblog.com/sites/beta.desmogblog.com/files/ph...


Add Nouriel Roubini [1] and at least 2 other CEOs that are not listed. It feels like we're living in Graham Greene's "Our Man In Havana" [2].

[1] http://ftalphaville.ft.com/blog/2010/06/30/274571/from-roubi...

[2] http://en.wikipedia.org/wiki/Our_Man_in_Havana


Just a warning to anybody using LinkedIn or Facebook: journalists are already calling all the "friends" they can find online for details. The FBI will be calling soon too. This information is just too easy to find for anybody who has time to type the name into a search box.


Just for the hell of it, I searched for both Heathfield and Foley on LinkedIn, and I’m a third-degree connection to both of them. (I wonder if this is a sign of how incestuous the Boston high-tech community is.)


> how incestuous the Boston high-tech community is.

Or how well the alleged spies have been planted...


Or how well the sleeper cells have had the programming subsumed into their consciousness...

http://en.wikipedia.org/wiki/Spies_Reminiscent_of_Us#Plot

