There's plenty of joking and 'it'll never work' here, but doesn't this worry anyone at all?
A system that 'knows you better than you know yourself' means a system that is able to predict your responses to different stimuli more accurately than you can.
By definition such a system can manipulate you.
Let's set aside the idea of an 'AI' - which conjures up all sorts of mythical images of talking computers.
Instead consider the implications of a single corporation with a machine learning system that has access to the private communications, personal documents, and physical location and web viewing history for a billion people and can use that information to predict the behavior of each one of them better than they can predict their own behavior.
That's what we'll get if Kurzweil and Google are only partially successful in this endeavor. It sounds to me like the greatest concentration of power in human history, and an extremely dangerous thing that we should be considering the implications of today.
Now many people seem to think that Moore's law means that machine learning systems of this power are inevitable, and that it's better for Google to control them than a worse entity like a government.
I think that's a dangerous dismissal for two reasons - firstly it presumes that this tool wouldn't be taken over by governments once it came into existence, or that it wouldn't be used to manipulate governments to prevent its takeover.
Secondly, even if super-powerful AI is an inevitability, it's not inevitable at all that we should be routing all of our personal information to its creator so that it can be used to model and predict our behavior en masse.
> Instead consider the implications of a single corporation with a machine learning system that has access to the private communications, personal documents, and physical location and web viewing history for a billion people and can use that information to predict the behavior of each one of them better than they can predict their own behavior.
Funnily enough, the corporation that makes that most visible to me is Netflix. After rating so many movies, it can predict pretty well whether I would like a movie or not. It does it better than I can by reading the description and the title.
It's kind of nice, but a bit worrying. I wonder what kind of psychological profile it could build and sell to other companies if it wanted to.
>I don't see anyone saying it's not Google's right to do these things.
I've seen plenty of people claiming that they think it's an antitrust violation and arguing that regulators should go after them for it. Though I'm not sure when not supporting a small minority platform like Windows Phone became an antitrust violation; if it has then Microsoft had better get to work shipping that edition of Office for FreeBSD.
Also, Google and Microsoft are competitors. And Microsoft has been attacking Google at every opportunity. I suppose using Microsoft as the standard for corporate citizenship is kind of a low bar, but you can only turn the other cheek so many times before some kind of response becomes an inevitability.
Especially when you have such a large organization. People keep thinking that corporations are like individuals, as though the CEO and the full legal department approve every decision an engineer makes before pushing an incremental update to one of a thousand different products.
Stories like this could plausibly have happened by accident or through a tiny minority of rogue individuals -- and then, multiplied across tens of thousands of engineers, you see more than one such story because Microsoft PR turns every single one into front-page news, and confirmation bias makes you believe it's the rule rather than the exception. Opposition research working as intended.
I don't think you've ever actually worked at a large organisation.
Engineers don't unilaterally make decisions about what goes into a product or not. That is the purview of the Product Manager and I assure you that senior executives (possibly even the CEO) do sign off on major product decisions like which platforms to support.
I assume that the large organization you've worked at isn't Google.
Product Managers don't unilaterally make decisions about what goes into a product or not. Most such decisions are made by consensus within the team (Eng + PM + UX), with the PM acting as the tiebreaker if the team can't reach consensus. Many product decisions are in fact made by engineers - a PM friend of mine once told me "The difference between Microsoft and Google is that at Google, engineers are expected to have product opinions."
As for which platforms to support - it varies based on the scale of the project. There've been times I've made the call myself (as a TL), for smaller projects like doodles and easter eggs. For a larger product like Maps as a whole, I'd imagine a VP would sign off on it.
Yes, in theory the CEO is responsible for everything that goes on at the company. As a matter of culture, though, Google tries to delegate as much decision-making power down to lower levels as possible, and then just deal with the inevitable screwups as they happen. It wouldn't be nearly as nice a place to work otherwise, nor would their "hire the best people and give them the tools and information to do their jobs" strategy work out. Yeah, PR flareups happen, but the team fixed the problem, and I bet we'll forget all about it by next week.
It could be argued that the "hire the best people and give them the tools and information to do their jobs" strategy hasn't worked out that well for product management at Google. Google Docs and App Engine are two prime examples of dysfunctional product management that seems to value the opinions of "the best people" (supposedly Google employees) more than those of the customers.
>Malcolm Gladwell's article "The Talent Myth" seems fitting
I don't think the article supports your argument. You can't argue against "talent" -- talented people are tautologically more capable than untalented people. If you don't want smart people making decisions, who do you want making them? Stupid people? Some unspecified different group of smart people under unspecified different conditions?
The article makes the strong point that certain reward systems are harmful. If you give anyone who shows strong short-term results a huge bonus and a promotion and anyone who doesn't a pink slip, you create an overwhelming incentive for people to fudge the numbers and take bad risks in order to stay on the A list. But that's not the same thing as hiring the best and brightest and then (perhaps after some probationary period) giving them the equivalent of tenure and autonomy as long as they don't start any major fires.
If the article can be summed up in a single paragraph, it's this:
"Groups don't write great novels, and a committee didn't come up with the theory of relativity. But companies work by different rules. They don't just create; they execute and compete and coordinate the efforts of many different people, and the organizations that are most successful at that task are the ones where the system is the star."
Which is bollocks, because it's overly broad. "Companies" are not a uniform thing. If you're General Electric or NASA and you make jet engines and nuclear reactors and spacecraft, you need to run everything by the lawyers and the actuaries, because if you make a mistake then planes literally fall out of the sky.
But you don't design web apps the same way as you design medical devices. The risk profiles are totally, comprehensively different. If a pacemaker fails, someone dies. If Google Maps is not available on your device, you just use Mapquest. Or a paper map. Or ask someone for directions. If some engineer collects wifi data that they probably ought not to have, again no one dies, and the ultimate outcomes in the case of "collected and then deleted before being used" vs. "prevented by bureaucratic process from being collected" are, as far as I am aware, completely identical for all parties other than from a public relations perspective.
When the cost of a mistake is low, it makes good sense to take more risks. If the damage that someone can do is smaller, you don't need to put so much effort into preventing it. Which is good -- it's efficient -- because having to check and double check is slow and expensive.
There are legitimately cases where it costs more to prevent a mistake than to clean it up. In those cases it is completely rational to allow mistakes to happen. Science needs negative results. Sometimes we need to be allowed to make mistakes in order to learn from them. And companies in such industries can do far worse than to act as a risk pooling apparatus for a loosely federated collection of small teams of individual inventors.
This is probably not a strategy that works well if you're making war machines or dangerous chemicals or massive scale financial transactions. But it seems to be a strategy that works well for making web apps.
I mean, they are making billions of dollars. Enron didn't make billions of dollars. Enron lost billions of dollars and then lied about it. That's a pretty big difference.
I have no doubt that in many cases you will be able to find a piece of paper with a high level executive's signature on it or an email with that person CC'd for all manner of minutiae, and I'm sure if you're a lawyer then you like to make a big deal out of that kind of stuff.
But the truth is, decisions get delegated. The CEO signs off on the VP's recommendation, which was really a mid level manager's recommendation, which really came from someone at the bottom of the totem pole. The degree to which each of the approvers actually decided anything as opposed to merely placing their trust in the subordinate's judgement at any given company is to a large degree a matter of personal preference, individual trust relationships and corporate culture.
Moreover, there are rules and there are facts. Just because the CEO thinks he decided something doesn't mean that by the time the game of telephone is fully played, something distinctly different isn't going into the market. And in altogether too many cases, the first indication the top level executives get that something is awry is an article in the New York Times.
It's not clear whether you are actually advocating this kind of interpretation of Ayn Rand's position or not.
It seems to me that the principle of government being solely to protect individuals from the initiation of force by others is a great one, and should be the primary one. For bringing this idea into my consciousness, I am grateful to Ayn Rand and those who propagate her work.
However, "sophacles" doesn't seem to be arguing against the principle. He or she seems to be arguing against a simplistic interpretation of its implications.
For example you say: "correctly applying this principle means that if something is always a poison to everyone, it's not a legitimate product. There is no legitimate use. Producing it and selling it is simply initiating force."
Who decides what is a poison to 'everyone'? Well since by the correct application of the principle, it is the government's responsibility to protect individuals from this initiation of force, it must therefore be the responsibility of the government to recognize these situations.
Does that mean that the government needs to set up a science infrastructure to decide what is harmful to 'everyone'?
If so, how is it to staff this infrastructure with scientists, and how is it to decide who is qualified to do the work? Perhaps it needs to establish an education system for this purpose. Given that working as a government scientist by necessity means giving up a lot of time that could otherwise be spent securing shelter and provision for the future, perhaps the government should set up a way to house and provide for its employees in their old age... and so it goes on.
I'm all for the principle, seriously. I just don't see how it helps to offer unrealistically simplified interpretations of how to apply it to people who are interested enough to engage.
Unless you really do think that Ayn Rand had a perfect and complete prescription for the best of all possible societies...
Those are interesting pieces, and I feel as though my time was well spent reading them.
I can't see what they have to do with this argument though. The first one is basically a speculative essay with little to support its ideas, interesting though they are.
The second one is more rigorous, but it seems to me that it works against your argument.
It contends that law and order is provided by Xeer, Somali customary law, which is a tribal artifact that has developed over centuries and depends on people being recognized as having loyalty to a tribe, because that makes the tribe responsible for harms done by its members to other tribes. The piece also states that although private courts exist (funded by successful businessmen), Shari'a courts perform an instrumental function in creating legal order.
Both pieces also state that the Somali central state, when it existed, was weak, rampantly corrupt and never successfully displaced these tribal and religious institutions.
All this really seems to be saying is that, just like everywhere else before the emergence of the nation state, Somalia was governed by tribal law and religion. In the case of Somalia, a functioning nation state never really emerged, and so it fell back to tribal law and religion.
This turns out not to be as bad as the failing central state, or as the horror stories portrayed by the mainstream media, but falling back to tribalism and religion hardly seems like a model for how to improve on what we have.
I agree with you that Google should be free to have their own editorial voice, and it seems like a really bad idea for that to be interfered with directly by government.
However, as you have done, I often see this idea coupled with a quip like "If you don't like it, go find a new one."
Why do people keep saying this? It's clearly not a valid option. Doing fast, high quality search has gigantic barriers to entry. Some of these may literally be insurmountable because the data Google has archived from the past of the web will never be available to new entrants.
We simply don't have a range of search 'voices' to choose from. Telling people to choose from a range of options that doesn't exist solves nothing.
Personally, I think there's an argument for forcing Google to at least publicize that their search actually has an editorial voice to counteract the misleading idea that it was somehow algorithmically objective and free of bias, which was their position for many years.
At least that way consumers might question what they are seeing and recognize that it's not 'the web' but Google's opinion of what you should see.
I said that because I'm the CTO of a competitor (blekko) that some users think is a viable alternative.
It's true that we don't have the bazillion clicks that Google has logged. But Google doesn't have our human curation, and brand new content on the Internet doesn't have a result click history yet.
I commend you for trying to offer an alternative, seriously!
Google may not have hand curation, but they do use manual raters as one of their signals.
It's also true that new content doesn't have a click history, but the click history as well as topological history of the internet can tell Google a lot more than just the history of individual items.
In any case, my main point here is that the position of competitors like yourselves would be strengthened if more people realized that Google wasn't somehow based on 'objective' algorithms.
You guys claim subjectivity and editorial quality as a strength and I agree that it is, but I think there's an argument that Google has been falsely advertising itself as objective and free of bias and could be forced to rectify that through advertising.
Indeed the reportage around this FTC settlement implies that Google has been 'cleared of allegations of bias', which is essentially the opposite of the truth.
Obviously people can argue for ever over the merits of one design decision over another but as someone who has programmed in Java for more than a decade, and has used Scala and Clojure commercially, I'd say that Objective-C does have some attractive qualities:
1. The object model is similar in complexity to Java (i.e. no multiple inheritance, templates, etc. that fill C++ with corner cases).
2. It is C, but with an object-oriented message-passing layer on top. You can think of an Objective-C program as a bunch of C programs that have a more loosely coupled way of talking to each other. This gives it a balance between the abstraction of a high-level language and the performance and control of C that is different from Java and most scripting languages, where you need to use a native API to talk to C. Whether this is objectively good or not is debatable, but it certainly supports Apple's strategy. Personally I find it liberating.
3. The dynamism and looseness of the language make code transformation tools much harder to write - which is why Xcode is only now starting to approach Eclipse on refactorings and completion. The upside is that the dynamism enables a bunch of things whose equivalents require bytecode manipulation in Java - e.g. property observers, Core Data synthesized accessors, Undo proxies etc. (there's a small sketch of this after the list).
Java is evolving slowly to make these kinds of things easier, and Objective-C is evolving slowly into a tighter language that's more amenable to automated transformation.
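To make points 2 and 3 concrete, here's a rough sketch of the kind of dynamism I mean - a property observer via plain Foundation KVO, and a message sent after a runtime check. The class names are made up and it assumes ARC; it's an illustration, not production code:

    #import <Foundation/Foundation.h>

    @interface Thermostat : NSObject
    @property (nonatomic) double temperature;   // ordinary synthesized property, KVO-compliant by default
    @end
    @implementation Thermostat
    @end

    @interface Logger : NSObject
    @end
    @implementation Logger
    // Called whenever an observed key path changes - wired up at runtime, not at compile time.
    - (void)observeValueForKeyPath:(NSString *)keyPath
                          ofObject:(id)object
                            change:(NSDictionary *)change
                           context:(void *)context {
        NSLog(@"%@ changed to %@", keyPath, change[NSKeyValueChangeNewKey]);
    }
    @end

    int main(void) {
        @autoreleasepool {
            Thermostat *t = [Thermostat new];
            Logger *log = [Logger new];

            // Property observer: registered at runtime against a key path.
            [t addObserver:log
                forKeyPath:@"temperature"
                   options:NSKeyValueObservingOptionNew
                   context:NULL];
            t.temperature = 21.5;   // Logger gets notified

            // Loose coupling: check at runtime whether the object answers a message, then send it.
            if ([t respondsToSelector:@selector(description)]) {
                NSLog(@"%@", [t performSelector:@selector(description)]);
            }

            [t removeObserver:log forKeyPath:@"temperature"];
        }
        return 0;
    }

Nothing here is generated at compile time - it's all runtime machinery, which is exactly why the tooling has a harder job and the frameworks have an easier one.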
These are just a few reasons. I think it's also fair to say that Objective-C was pretty crude and was 'behind' even as recently as 2007, but the rate of improvement since then has been high and has brought it up to date. (Declared properties, blocks, ARC, and GCD are pretty major steps into the modern era.)
There are definitely a lot of rough edges that still remain, and there's a lot more to learn than, say, a scripting language, plus a very different philosophy to get to grips with compared to the Java family, but once you do know it, it's a powerful language with some great strengths.
I'm very interested to see what happens to it over the next few years.
C pretty much still Rules the System Layer. It's never going to go away for as long as it's easy, and efficient, to write very good C code which performs well. In spite of the hate, there is a lot of really good C code out there still running, still working, still burning up the market. I'm pretty sure there's nary a system image which doesn't, eventually, get itself operating per the rules of C, for the most part, somewhere ..
That said, for all your very valid points about Objective-C, the same (essentially) can be said of Lua, and the Lua VM, for example.
As a mobile developer, I'm no longer interested in Objective-C - it's only an Apple language. But I can take the Lua VM and put it on all the other machines, host-wise/parasitically, and create my own internally ordered Framework which runs on all Platforms, and still gain a lot of the benefits of a re-evaluation of 'language simplicity' versus programmer effectiveness.
After 4 years of Mobile development on iOS and Android, where multiple projects have blossomed organically into unwieldy godawful trees of complexity which prove, every day, even more difficult to turn over to other programmers, themselves creating massive WordSpaceCollections: ofCode.to_BeMaintained((Some*)Way) || Other { NSLog("grr..", &etc} ;
On a Drama Scale, it goes like this:
"Oh, Android NDK/SDK, how you have blossomed to being something I regret I am not putting into the trash in the early days. XCode, you %(#@&% Asshole piece of software, Why I Gotta Download CMDLineTools just to get work done"
..
"SublimeText2, factory settings .. Open Folder->".lua files", build and distribute for MacOSX, Linux, Windows, iOS, Android, and still only need to maintain one codebase.
This seems like a long way of saying that you don't like Objective-C because it's not cross platform, and that you've decided instead to write all your mobile code in Lua and to maintain your own abstraction layer onto MacOSX, Linux, Windows, iOS, and Android.
The question I was responding to was simply whether there are compelling reasons for Apple to use Objective-C.
Supporting cross platform native development is clearly not a strategic goal for them.
I think I'd add another reason - which is that they are in control of the evolution of Objective-C.
I'm really more trying to point out that there is a great way to escape the trap being laid for you by Apple and their plans for Objective-C, which is indeed to keep the language in their own privileged domain.
And it really is important enough that anyone considering learning Objective-C today, or even using it, know that there is a way out of it: roll your own walled garden and plant what you like within it, on any platform you can.
I would be willing to wager a small bet that says that the scripted-VM-glommed-in-a-web-of-libs approach to the Platform wars will become more and more a key survival strategy in software development over the next 2 years.
The OS, and indeed Distributions are dead; long live the new King, VM-managed library bundling..
I'm not sure what 'keep the language in their own privileged domain' actually means, but I do agree that it is Apple's strategy to invest in Objective-C above other languages on its own platform, and that they have no investment in making a cross-platform framework other than HTML5.
I don't see why you describe Apple's approach as a 'trap'. They are providing a lot of software components that save effort for those who use them. The results are platform specific, but everyone who uses them knows that and chooses to make that tradeoff intentionally.
What you describe as "VM-managed library bundling" sounds a lot like "building your own platform out of open source parts and maintaining a compatibility layer to your target platforms".
That strategy works for a few of the largest, best-resourced projects - e.g. the browsers Firefox, Chrome, Safari, Opera etc., plus the Adobe Suite - and even these draw criticism for the results not being as good as they could be if they focused on one platform.
Something like this works on the web too - where people assemble a 'platform' out of javascript libraries - because the base platform simply doesn't provide enough.
I don't see it being a viable strategy for a small team or an individual developer trying to build native applications though because of the amount of time you'll spend keeping the compatibility layer up-to-date with the rapidly changing underlying platforms.
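To be concrete, the kind of compatibility layer we're talking about is something like this - hypothetical names, plain C so it also builds as Objective-C - a thin shim the rest of the code targets, with one implementation per platform:

    /* platform_shim.h - every function in here is yours to maintain */
    #ifndef PLATFORM_SHIM_H
    #define PLATFORM_SHIM_H

    const char *platform_name(void);
    void platform_log(const char *msg);
    void platform_open_url(const char *url);

    #endif

    /* platform_shim.c (or .m) - selected per target at build time */
    #include <stdio.h>
    #include "platform_shim.h"

    #if defined(__APPLE__)
    const char *platform_name(void) { return "apple"; }
    #elif defined(__ANDROID__)
    const char *platform_name(void) { return "android"; }
    #elif defined(_WIN32)
    const char *platform_name(void) { return "windows"; }
    #else
    const char *platform_name(void) { return "other"; }
    #endif

    void platform_log(const char *msg) {
        /* stand-in: each platform would route this to its own facility
           (NSLog, __android_log_print, OutputDebugString, ...) */
        fprintf(stderr, "[%s] %s\n", platform_name(), msg);
    }

    void platform_open_url(const char *url) {
        /* each platform calls its own native API here; omitted in this sketch */
        (void)url;
    }

Every one of those branches is code that you, not the vendor, have to revisit whenever one of the underlying SDKs changes - which is exactly the maintenance cost I'm talking about.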
Run with the pack and eventually become de-marginalized on the slippery platform being controlled by Apple, or work a bit harder and gain traction on every platform you can. This has always been a strategic decision for developers, large and small groups, and both paths have their merits and pitfalls. Neither is a guarantee of success.
Also, it's not as hard to "keep the compatibility layer" up to date as you state .. you need to do it anyway, if you use the vendor-provided native tools. If you can do that, you can take it one step further, and maintain a very productive outer-shell over the trap-tools. That can be a very good strategy, or a poor one; all the examples you provide may be the larger failure cases, but there are smaller success cases overlooked in your argument .. MOAI, Love2D, GamePlay .. these are all coming along to eat the Native Development Devotee's lunch ..
The valid argument is that it's generally harder to convince people to pay for web content than it is for content on iOS where this product is already successful, presumably because iOS users are a self-selected group who are more willing to pay for digital content.
There doesn't seem to be any argument for making the site work 'properly' in 'every' browser and discontinuing the iOS version other than a dislike for iOS or iOS users.
They offer an optional subscription through in-app purchase on iOS.
I imagine they get a lot more support that way than they would if they put it on the web with a 'donate' button, so basically they are putting their effort into the audience who is willing to pay them.
Why would they just give it away rather than going to the market that values their product?
They did put the effort in to overcome those issues, only to discover that for every 80 iOS users they had 1 Android user, so they concluded that continuing to do the work wasn't justified because there was a lack of demand. If they'd had comparable demand from Android users, they'd have continued to invest.
But if they had a crappy product for Android, is it any wonder that their uptake ratio was so out of kilter? If they'd had platform feature parity at launch your argument would be valid, but they didn't, so you cannot draw the conclusion you have drawn.
It's not really my conclusion - it's the conclusion from the TWN piece. That said, even though it's not a controlled experiment, the fact that they didn't have feature parity doesn't completely invalidate their experience. 80-1 is a high ratio particularly for a free product that people presumably would have expected to improve over time if they had been interested.
Android is Open Source, so it belongs to everyone. Content producers need to do their part alongside developers to make it great, even if that means putting in more investment than supporting a proprietary platform like iOS.
Because that investment will pay off over time as more and more people move to Android, and a platform that isn't controlled by a single company will make innovation easier for everyone.
Most businesses aren't run as charities. If I can make 80 times the money from iOS as from Android I don't care if it's open or closed. Hot dog vendors are not looking to build your utopia. If there's a free place they can sell hotdogs but only bring in $10 a day and there's a highly regulated place where they can bring in $800 a day they are going to take the one where they make more money.
Android already has the bulk of the market share, if they are losing 80 to 1 there's something fundamentally broken in the ecosystem. Stop blaming the hot dog vendors.
There's some legwork to do to make the claims you're making. Maybe you should step back and establish niceties like why people who are in their target demos will flock to Android "over time" and quantify how the NPV of investing now will pay off later.
It's well known that Android has overtaken iOS in terms of the base OS. A hardware advantage that Apple had is almost gone now, so the only thing missing is content. I don't think iOS buyers are fanatics or taken in by marketing because there is more marketing of Android. I think they have been choosing a better product.
So, now that hardware and the OS no longer favor iOS, it's simply down to content providers to make the investment. Technologists bought in to Android because it's open, long before it became the best platform. Why shouldn't content producers do the same?
"It's well known that Android has overtaken iOS in terms of the base OS. "
This is not well known, although it's an opinion often voiced here.
"Why shouldn't content producers do the same?"
Because they are not utopian technologists? This article is about a content producer that invested in Android yet failed. Instead of talking about how the content producers need to do more, we should be talking about why this effort failed.
Here's a start: when you buy an iPhone today, Newsstand is one of the handful of apps you start with. It's featured, people click on it, and then many of them start buying and consuming content. What is the competing experience on Android? I assume there's no equivalent to a standalone Newsstand app installed by default on the phone. I assume they hit Play, which incidentally is terribly named and which many customers never click on at all because they think it only leads to Pokemon. Then they have to know that magazines on their device are a thing now and find them in Play, which may be easy or hard, but is still an order of magnitude harder than it is on iOS.
> It's well known that Android has overtaken iOS in terms of the base OS.
I wouldn't say it's "well known" at all and I certainly wouldn't say it in such definite terms--and I am an Android user.
> Why shouldn't content producers do the same?
Because they have not yet decided that their projected returns from devoting significant resources to Android outweigh the opportunity costs? They have no ties to Android unless they will realize benefits from targeting it. The purchase patterns of current Android customers don't make me, as a user and a fan of the platform, think they're going to make back their money on investment, so I certainly don't think that they're unreasonable to want to actually see some assurances of a decent return before investing in the platform.
iOS users are not price sensitive and Apple doesn't make any pretense that content is free. People who buy into the Apple world are choosing to buy into an ecosystem where they are going to have to buy content from a collection of proprietary 'Stores'. Apple is well known for the iTunes store and the App store so it's a conscious choice for their users.
Most Android devices (obviously not the highest end ones) are sold on price, and Google as a brand is known for providing free, advertising supported content. It's no great surprise that people who choose that ecosystem expect to get free stuff and don't want to be buying a lot of digital content.
If they'd wanted that they'd have bought an iOS device.
If that's what's happening here, then the warning seems good for everyone.. except that the wording is defamatory.
Rather than accusing Twitpic of being "a known distributor of malware", it might be better if the message said something like "The site appears to be infected with malware. This warning will remain in place until the malware has been removed."