Unless you are extending the compiler, why would you care? GCC has consistently produced faster code than LLVM.
If you are extending the compiler, why would you want to allow companies like Apple and Oracle to bottle up your hard work and give nothing back? Either by not contributing the code back, or by patenting parts of the functionality they add and then suing you when you try to use them.
And? The above post asks why companies need to make proprietary code changes to the compiler, or why the community should allow distributors of GCC to patent parts of the compiler and then sue the users the distributor gave copies to.
Even if you don't make changes to the compiler, if you link to this project you are GPLv3 infected. You simply can't use this without exposing your entire project to GPLv3 infection; the usual LGPL and/or runtime library exceptions don't apply to GCC's core code, because it was never intended to be used as a library.
Let's say you have an existing FOSS app that is BSD or MIT licensed and has its own built-in scripting language. You'd like to build a JIT for that scripting language, you see this library, and you decide to use it in your project. Well... you can't do that without moving your entire project to GPLv3 terms, because the combined work created between this and your own code all has to be available under the GPLv3. This is usually solved by making the relevant parts of the project LGPL or by granting a runtime library exception, but neither of those applies to the core GCC code, so the GPLv3 infection is unavoidable.
Whether or not that issue is important is open to debate and depends upon your software politics, but in practical terms it means very few people will use this in their project unless they are already GPLv3 committed for some reason.
That's part of the plan. Amazon makes a loss in Q1-Q3 and turns a profit in Q4; this results in a slight profit over the year, a boatload of investment in infrastructure, and competition finding it almost impossible to keep up.
1) They didn't, and
2) Even if they had, unclear, inaccurate, or misleading information about Google's degree of cooperation, or about the mechanics of acquiring the data, actually makes quite a lot of sense in a document prepared for consumers of the data to whom the details of collection were not essential, especially in a highly classified program (whether it's classified for appropriate, security-related reasons or for political ones).
Consumers of the data need to know where it comes from and its scope; they don't necessarily need to know whether it's acquired through cooperation, coercion, or infiltration of the providers.
Also, that's what they were doing for more traditional wiretaps, and you can be sure that they have the access to siphon off live traffic for analysis if they want.
Is that even possible if Google's SSL certs have Extended Validation? They'd have to have cooperation all the way down to the browser vendors, and I can't see Mozilla caving that easily.
There are several governments (Spain, France, Netherlands, Japan) that publicly have root CAs in the trusted browser list [1]. It seems pretty likely (cf., say, PRISM) that the NSA has a CA cert with which they can generate whatever certificates they want in order to MITM browser SSL communications...
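To make the MITM mechanics concrete: EV or not, a client trusts any CA in its root store for any hostname, so the only client-visible tell is an unexpected issuer chain. A minimal Python sketch (the hostname is just an example) that prints the issuer of the certificate a server actually presents:

    import socket
    import ssl

    def peer_issuer(host, port=443):
        """Return the issuer of the certificate the server actually
        presents, after normal chain validation against the root store."""
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()  # dict form of the leaf certificate
                # 'issuer' is a tuple of RDNs, e.g. ((('commonName', ...),), ...)
                return dict(rdn[0] for rdn in cert["issuer"])

    issuer = peer_issuer("www.google.com")
    print(issuer.get("organizationName"), issuer.get("commonName"))

A MITM cert minted by a trusted-but-rogue CA would still pass validation, so the defense is comparing the issuer chain against a known-good value. That comparison is essentially what certificate pinning automates; Chrome's built-in pins for Google domains are how the DigiNotar mis-issuance was caught.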
I have over a terabyte of photos in Aperture, family has half a terabyte in iPhoto, whole family uses photostreams, nobody has your issues.
Apple doesn't have a great guide for best practices, and should fix that. But for most of the troubles you outline, I feel you're "doing it wrong" or are actually flat wrong about how it works. This may be a training issue Apple should address.
As just one example, I very specifically want my device camera roll and combined photo stream kept separate. Combining them as you suggest has severely negative consequences your article doesn't consider. Edits on iOS carry over to the desktop; you can edit on the loo all day if that suits your style. You never, ever have to delete photos from your photo stream, and no Apple dialog tells you to delete from your photo stream. If you set prefs right, you'll get photos imported exactly once.
Your letter raises awareness that people like photos. That's good.
But while waiting, look into some of the prefs dialogs in Aperture or iPhoto. I'm comfortable that every need you mentioned is handled.
But if you still can't find satisfaction for your particular workflow (e.g. a photo pro doing commercial work in the studio and on a laptop in the field), check out Image Capture plus the lesser-known Auto Importer scriptable tool. You may have to mindlessly plug in a cable after a shoot, or at least once in 30 days, but you won't have to press a key.
What are the complaints in the original article about not being able to treat your data as your own?
The complaints as I read them are about pain points in syncing and accessing the different photos you have spread across different Apple devices, and how whatever sync features exist today ("Photo streams") exacerbate rather than improve the situation.
All of them, I think. If any vendor were allowed to drop photo-management software (as a first-class citizen) on Apple machines, software that would even be allowed to manipulate the current cloud library through an open API, would anybody be asking Apple for anything?
But this is a pretty darned standard use case, isn't it? It's not like he's asking for the sun and the moon; he's just asking for a way to easily organize photos taken with multiple Apple devices, without using up all the storage in those devices.
That said, my family has the exact same photo problem, but only half of the devices involved are from Apple. A solution which was not limited to a closed ecosystem would be much better than an Apple-specific solution, in my opinion.
It's a standard use case for an average consumer who probably doesn't take that many photos and doesn't have much of a clue about backups. It's not a standard use case for anybody who is either a serious photographer or a geek....
Exactly this kind of service is where the closed Apple ecosystem should excel. Unfortunately, Apple has missed the transition from the Mac as a "media hub" to the cloud. iCloud came a bit late, and they fail to keep up with the rapid pace of innovation of other online services.
But how is the scenario described in the article not the standard use case for everyone who has more than one (internet connected) device that takes pictures?
Android/PC can do what the article wants, albeit with an initial investment of research and setup, but that's the choice... do you want buttoned up, where everything "just works" but only a certain way? Or do you want customizable, with tinkering required?
Isn't that part of the point? This is a totally normal thing to want, but no, it is forbidden. Could a third-party developer fix it? They could on Android.
They could, but it's strictly speaking not needed. It's already solved/fixed.
Just enable/install Google+/Picasa sync and you have all your pictures on all your Android devices, and on the web, just in case you need them outside this semi-closed Google system.
It's not as non-standard as you think; I've got this exact pain point, as does my sister, who is not technical at all. I've gotten a couple of calls from her asking "Where are my pictures going?" and all I can tell her is "Hell if I know."
Hell, for that matter, just try sharing your photos with your family on a home network. Both iTunes and Windows Media Player seem designed to frustrate that very common scenario.
> Adding a conditional ("do I answer or do I proxy?") on every DNS query -- and there are many -- is going to introduce enough latency to be noticed unless you throw a lot of gear at it. And you're still going to introduce latency by inserting another hop.

That's my point, though I do agree with you.
Welcome to the world of recursive name servers: there is a lot of software out there that does exactly what you just mentioned, and I fail to see what would be hard about making this change.
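For example, here's a minimal sketch of such an "answer or proxy" resolver using the Python dnslib package (not something the thread mentions); the local zone contents and the upstream address are made-up placeholders:

    from dnslib import RR, QTYPE, A, DNSRecord
    from dnslib.server import DNSServer, BaseResolver

    LOCAL_ZONE = {"intranet.example.com.": "10.0.0.5"}  # hypothetical local records
    UPSTREAM = "8.8.8.8"                                # hypothetical upstream resolver

    class ConditionalResolver(BaseResolver):
        def resolve(self, request, handler):
            qname = str(request.q.qname)
            if request.q.qtype == QTYPE.A and qname in LOCAL_ZONE:
                # "Do I answer?" -- serve the record from our own data.
                reply = request.reply()
                reply.add_answer(RR(request.q.qname, QTYPE.A,
                                    rdata=A(LOCAL_ZONE[qname]), ttl=60))
                return reply
            # "Do I proxy?" -- forward the raw query upstream, relay the answer.
            raw = request.send(UPSTREAM, 53, timeout=2)
            return DNSRecord.parse(raw)

    DNSServer(ConditionalResolver(), port=5353).start()  # unprivileged port for testing

The conditional itself is one dictionary lookup per query; as the parent says, the real cost is the extra network hop when you do proxy, not the branch.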
If you live in a closed source ecosystem, you die in a closed source ecosystem.
E.g., if you want to distribute closed-source tools and software without sorting out a license first, you're going to have a hard time, because this is often exactly how free (in terms of price) software companies make their money.
If you set up your stack wrong then some of your points are valid. But many Amazon services offer you redundant infrastructure at a reasonable cost.
But the key advantage over a traditional hosted solution is that you can run reproducible test and dev stacks for 8-12 hours a day without having to pay for hardware that sits there doing nothing while you sleep. Sure, cloud computing has its faults, but the ability to spin up an HA test stack in 3 or 4 datacenters in minutes, and pay for just the time you are testing it, enables people to move forward and develop more and more impressive technology.
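As a rough sketch of what that lifecycle looks like with boto3 (the AMI ID, instance type, and availability zones below are placeholders, not anything from this thread):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    ZONES = ["us-east-1a", "us-east-1b", "us-east-1c"]  # one node per zone for HA
    AMI = "ami-0123456789abcdef0"                       # placeholder image ID

    # Spin up the test stack across availability zones.
    instance_ids = []
    for zone in ZONES:
        resp = ec2.run_instances(
            ImageId=AMI, InstanceType="t3.micro",
            MinCount=1, MaxCount=1,
            Placement={"AvailabilityZone": zone},
        )
        instance_ids.append(resp["Instances"][0]["InstanceId"])

    # ... run the test suite against the stack ...

    # Tear it all down, so billing stops the moment testing ends.
    ec2.terminate_instances(InstanceIds=instance_ids)

The whole thing is minutes of wall-clock time, and you pay only for the window between run_instances and terminate_instances.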
OTOH, I'm not sure why you would run a cloud on your own hardware when you're already paying for the HW; I suppose it simplifies management significantly for PayPal, or they wouldn't be doing it.