No, the argument was that because the binary representation of a number n needs ⌈log n⌉ bits, incrementing it would require O(log n) time. And my argument was that accessing those bits in memory takes time that grows with the cube root of your memory size, because of physical constraints. But of course I'm not arguing that we should incorporate that into our models; doing so would not solve any problems, quite the opposite. Assuming that basic operations on machine words can be performed in constant time, on the other hand, solves actual problems such as "why is algorithm analysis in this model so bothersome", "why are there logarithms everywhere", and "why do the theory people say this should get slower for larger numbers? It always takes 1 cycle on my machine".
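
To make the contrast concrete, here's a minimal C sketch (the function names `increment_bits` and `increment_word` are just made up for illustration): the first plays by the bit-level model, where an increment has to walk the carry chain and can touch all ⌈log n⌉ bits in the worst case, while the second is what the word-RAM model assumes, a single constant-time machine operation.

```c
#include <stdint.h>
#include <stdio.h>

#define WORD_BITS 64

/* Bit-level model: increment an n-bit counter stored as individual bits
 * (least significant bit first). The carry can ripple through every bit
 * in the worst case, which is where the O(log n) cost per increment
 * comes from. */
static void increment_bits(uint8_t bits[], int nbits) {
    for (int i = 0; i < nbits; i++) {
        if (bits[i] == 0) {   /* 0 + carry = 1, carry stops here */
            bits[i] = 1;
            return;
        }
        bits[i] = 0;          /* 1 + carry = 0, carry ripples onward */
    }
    /* all bits were 1: the counter overflows and wraps to 0 */
}

/* Word-RAM model: the same increment on a machine word is assumed to be
 * a single constant-time operation, regardless of the stored value. */
static void increment_word(uint64_t *w) {
    (*w)++;   /* one ADD instruction: "it always takes 1 cycle on my machine" */
}

int main(void) {
    uint8_t bits[WORD_BITS] = {1, 1, 1};  /* the value 7, LSB first */
    uint64_t w = 7;

    increment_bits(bits, WORD_BITS);      /* carry ripples through three bits */
    increment_word(&w);                   /* one word operation */

    printf("word counter: %llu\n", (unsigned long long)w);  /* prints 8 */
    return 0;
}
```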