How do you grant a company access to your data but prevent them from storing it? And how does it apply to data a company generates about me? For example, if I listen to songs on Spotify, are they supposed to somehow not store it, but still give me recommendations?
If you don't want them to know what it is, encrypt it. Even if they store it, it's not much use.
If you don't want them to keep it, find a way to invalidate it. (This would be for cases where the read key is time-sensitive... not sure how to make that work.)
I think the privacy angle is misguided. Most people don't really care about it. Even more so for stuff like what songs I listened to on Spotify.
The better angle is that we're becoming digital serfs. Google decided that they didn't want Google Music to exist anymore and poof went my listening history and playlists. Any service that I use today can do the same thing. If that data were stored somewhere I had access to, I could have imported it into Spotify.
This is an area I think Amazon or CloudFlare could step into. Sell consumers a NAS type box that keeps their data local. Sell companies on Lambda/Workers @ Home and have their applications run on that NAS.
> If that data were stored somewhere I had access to, I could have imported it into Spotify.
So far we've been pushing services in the wrong direction, letting each one invent its own schema. However, we may win back control with standards on this one.
But yes, the idea is that you are able to remove the control they have over the data you've produced. It's such a terrible argument to claim they own the data. (Also, why do they need to control that, other than to try to prevent you from leaving?)
People do care about privacy, it's just that they have Snapchat-style privacy concerns, not the hypothetical ones that technologists tend to talk too much about. You're right that people don't care about YouTube having access to their stuff; they care about people having it—people like Regina, or their manager (or Regina, their manager). The whole "digital serfdom" concept is as abstract a concern (and in the minds of many, as irrelevant) as the classic surveillance capitalism arguments that you're putting down, even if the digital serfdom concept is accurate. People just don't care about anything that isn't an immediate concern.
If you are asking how it would be technically feasible, there are essentially two ways off the top of my head.
1. End-to-end encryption. They store your data, but without your password it's encrypted in the DB and useless (rough sketch below).
2. You pass all your data in every request, like a sqlite file or something.
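To make (1) concrete, here is a rough sketch of the client-side half in Java. Everything here (the key handling, the toy payload) is illustrative only; in practice the key would be derived from the user's password rather than generated on the spot.

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Base64;

public class ClientSideEncryption {
    public static void main(String[] args) throws Exception {
        // The key lives only on the client. In a real app it would be derived
        // from the user's password (e.g. via PBKDF2), not generated like this.
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(256);
        SecretKey key = keyGen.generateKey();

        // Fresh random nonce per message, sent alongside the ciphertext.
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);

        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] blob = cipher.doFinal("played: song-a, song-b".getBytes(StandardCharsets.UTF_8));

        // This opaque blob is all the service ever stores; without the key it's useless.
        System.out.println(Base64.getEncoder().encodeToString(blob));
    }
}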
To me, this is the major flaw with Solid: why use a third party's service at all? Why shouldn't your apps and your data both be on your own server? Sandstorm and Cloudron already do this, and make it user-friendly to install, remove, and share web apps with people from a private space. Furthermore, Sandstorm also assumes apps are malicious, so it is relatively safe to install proprietary apps on-device and still prevent data exfiltration.
There are very few types of apps which truly need a third party server to work.
>if I listen to songs on Spotify, are they supposed to somehow not store it, but still give me recommendations?
This is perfectly possible.
In your example, Spotify could store the data they needed for their recommendation algorithm in aggregate form so that any link to a person was destroyed and not reversible.
And then make recommendations by running that algorithm on your locally/privately stored data, with no loss of functionality.
As such, a recommendation algorithm does not technically benefit from storing your personal data, at all.
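A rough sketch of what I mean, with made-up data: the service ships only anonymized, aggregate co-listen counts, and the scoring runs against a play history that never leaves the device.

import java.util.*;

public class LocalRecommender {
    // Aggregate "people who played X also played Y" counts, with no link back
    // to individuals. (Illustrative numbers; how the service builds this
    // safely is its own problem.)
    static final Map<String, Map<String, Integer>> CO_LISTENS = Map.of(
        "song-a", Map.of("song-b", 120, "song-c", 45),
        "song-b", Map.of("song-c", 80, "song-d", 60));

    public static void main(String[] args) {
        // Personal listening history, stored locally only.
        List<String> history = List.of("song-a", "song-b");

        Map<String, Integer> scores = new HashMap<>();
        for (String played : history) {
            CO_LISTENS.getOrDefault(played, Map.of())
                      .forEach((candidate, weight) -> scores.merge(candidate, weight, Integer::sum));
        }
        history.forEach(scores::remove); // don't recommend what was already played

        scores.entrySet().stream()
              .sorted(Map.Entry.<String, Integer>comparingByValue().reversed())
              .limit(3)
              .forEach(e -> System.out.println(e.getKey() + " (score " + e.getValue() + ")"));
    }
}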
If you plan to be a professional software engineer, then I recommend just paying for the best tools. The price of one hour of an engineer's salary (give or take) can pay for the entire IntelliJ suite for a year.
The definition of "best" is often subjective and may boil down to familiarity.
On the other hand, betting on open-source tooling may be a good way to increase the chances that your "best" tools will still be available in the future.
And you don't need to continue your subscription unless you want continued updates. My PyCharm 'subscription' lapsed around two years ago, and I'm only planning on purchasing a new fallback license for the upcoming 2020.3 release because their excellent vim-mode plugin finally supports jump lists, but requires a newer version than the one I originally paid for - and I'm honestly happy to pay them again.
And they allow one to use the personal license at work, so long as no one at home is simultaneously using that license (I'm paraphrasing, and also cognizant that "at work" and "at home" now mean something radically different than this time last year) : https://sales.jetbrains.com/hc/en-gb/articles/207240855-Can-... (going up one level has all kind of interesting other licensing FAQs)
I actually still do that even though work bought me an IJ license, because my personal _suite_ license covers more tools.
I started learning Java in 1996 and it was a real revelation back then. Coming from very platform-specific C, everything felt comparatively easy. And Javadocs were amazing.
Just a few months ago I dusted off an old project from 1997, loaded it up in IntelliJ IDEA, built it, and ran it. It worked! And that's Java's best feature, it's long-term language and library stability. I worry that it is at risk now with Oracle's new 6-month release cycle.
My point is: people (often, males) like to speak in the name of all women when it comes to Lena.
It’s offensive, it keeps women out of CS, it’s a symbol of the patriarchy, etc...
As a woman, I am saying this is not the case, at least not for THIS woman: I’ve never felt excluded because of Lena, my CS studies were not impeded by Lena in any way, and I have actually used Lena’s image for a small project.
Of course I would accept someone saying that the image offends someone (better if this point is substantiated by evidence), but I absolutely reject people speaking in the name of all women and saying that Lena offends all women. This is not true!
Finding a person who is not offended does not mean an image does not contribute to a particular environment, or is not an example of it. The image need not be offensive to most or even many for that to be true.
Also, be careful not to attack a strawman: the issue is of course not just the image.
As a man I experienced plenty of unpleasant conversation about women, conversation which might be referred to as men-talk or some such. Also, I'm not competitive at all, while many men would identify that as manly behavior. One of the few women in my university program confided that this (usually unfounded) self-assuredness was very annoying and a huge turn-off, a feeling we both shared, but would have been regarded as an essential ingredient to participating in that program.
Of course, such issues are also present in the reverse. My wife had similar conversations but about men amongst some female colleagues. Fortunately, we've met enough people to not have to settle for such juvenile views/conversations. I can absolutely see how a woman would find it very difficult to be comfortable in such an environment however, when I didn't even feel part of that mildly macho culture.
Just because something is 'anti-women' does not mean it can only bother women. Our cultures associate many things with gender, which in my view is a leftover of centuries past, and we would do well to remove that sort of association. Maybe then a picture of Lena wouldn't represent a (sub)culture so accurately and therefore be an indicator of the problem.
I have my evidence that Lena is not harmful to women, at least to this woman. It’s one data point, but better than zero.
You are making a much stronger claim without evidence. How is Lena harmful? How many women did Lena drive out of CS?
Do you have some proof, some data, besides neo-Marxist crap such as intersectionalized misogynistic oppression?
It reminds me of that video that showed that college students are outraged by stereotypical Mexican Halloween costumes, but actual Mexicans are fine with it.
I don't have any hard stats, but I'm going to try to talk about my experiences. I took a programming class in high school - it was almost all boys at the time. This resembles my programming classes in college, and the professional environments I've worked in. Crucially, none of us back in high school had ever even heard of Lena when we signed up for the class, and I doubt very many of the girls who decided not to take it had heard of Lena either.
It feels to me like there's some deeper reason for the gender disparity in tech than Lena. I don't have any issue with changing the picture whatsoever - I'm sure we can find some other picture to use as a baseline (big buck bunny?). But I would be willing to bet that it will do precisely nothing to change the fact that there aren't very many women in CS.
I can think of many other factors that seem more plausible to me. My friends and I were into Minecraft - at the time, you installed Minecraft mods by overwriting files inside the minecraft.jar. If you wanted to set up a Minecraft server, you were given some command-line program to run and you had to set up port-forwarding in your router. Just doing this stuff makes you more comfortable with computers, and makes the jump to "I want to start programming" seem much smaller than it does for someone who has never stepped outside Chrome and MS Office. And PC gaming is much more popular among young men than young women, so this avenue of becoming comfortable with the computer is going to be much more accessible to men. Not to mention, if you like games, eventually you'll want to make one - I think every PC gamer has at least thought about installing Unreal and trying to make their dreams into reality. I think if we actually want to increase how many women are in CS (and nothing would make me happier), this is the kind of stuff we should be thinking about, not whether image processing programmers use a picture of an attractive woman with bare shoulders too often.
I don't think the argument is that stopping using Lena is a silver bullet. The argument is that the image reflects a particular culture, one that not only women, but women in particular, on average find uninviting. Personally I have similar feelings about the sexy calendars in some car workshops. It's just a bit too much irrelevant display of one's preferences.
But is it appropriate in context? Is it welcoming? If you have a choice of jobs, is it the kind of environment you prefer?
Or is it contributing to an image of "we're a bunch of immature boys who think it's funny to use a Playboy centerfold, har har, wait till you hear the sexist jokes we tell at lunch".
It's not the end of the world. But a welcoming and inclusive work environment is the sum of hundreds of little things. This is one of those things.
And come on -- the old line "what's the matter, can't you handle a joke?" is the oldest line in the book for "defending" sexist behavior. Expecting women to just "handle something" is not the right approach. We can be better than that.
Depends on whether you would consider an image very similar to the many, many others that get compressed to JPEG to be "appropriate" (i.e. relevant, representative), or whether you are asking as a prude.
> Is it welcoming?
I can't see anything unwelcoming about that picture, at least not in any other way than many people's flattering profile pictures on social media. It's just a head of an attractive woman, shot by a professional photographer.
I guess the flip-side is that CS is so welcoming and inclusive that tech conferences will invite former sex workers and celebrate them. No slut-shaming from the techies!
I was referring to Lena being invited to conferences like IS&T's[0] or the ICIP[1]. In photos of both events she is rather older and more modestly dressed (unlike booth babes).
Unfortunately, your type of attitude is precisely the problem. You deny a problem exists, attack anyone who suggests fixing it, and resort to questioning character and intelligence.
I'm sorry that this topic is making you angry. But sometimes it's important to hear other voices. In your case, instead of making a wager, why don't you simply talk to some of your female friends. You don't need to "poll" them, just have a friendly conversation. Don't ask them whether they can "handle" it, but what they would prefer.
Your eyes might be opened.
I'm talking from actual diverse workplace experiences with issues like this. You don't appear to be, or else I don't think you'd be saying the things you are.
I don't suggest that you take my opinion as the final word, either. It's a complex issue, and nuanced discussion is necessary and should not be vetoed by a single voice.
And to be sure, I largely agree with the woman in question: there are much bigger problems at hand. But as stated, the effort level required to discontinue using lena.jpg is zero.
My first programming job was for a media processing company, and we used the Lena image, and its origin was well known, and we were mostly young men, so we all found copies of the entire shoot, and they were occasionally involved in office pranks. None of us learned it from HN/Reddit because those weren’t things in 2000.
I cannot imagine it was a welcoming environment for women.
As a woman in tech, it's actually the discussions surrounding this proposal that I find most telling. The use of Lena is a waterline. Most women (my personal standpoint bias) probably find it inoffensive at face value, and vaguely grating when they learn that it's a centerfold. It's a reminder of a time when porn in the workplace was rather common; one component of a baseline of inescapable sexual harassment.
We've made progress, and sexual harassment is less socially acceptable now. Some of us would like to erase that waterline, because it's a reminder of an ugly past. But what of the people, mostly men, who cling furiously to using that image? They make me uncomfortable, because it's an indicator of where they stand on pervasive sexual harassment.
Unlike changing terms like master/slave etc, this is a zero-cost proposal (where those are very nearly zero themselves, but more pervasive). Replace the image with something that's got high contrast and color variation. I'd say you can even keep the old name so it won't impact your tooling.
I hope I can weigh in here as a woman (and I do not think my opinion carries any special weight). I don't mind the image, but I definitely do not cling to it - in fact, I wouldn't mind if it's gone at all, and I would treat as suspect someone who continually argues for its place in tech. Engineering is engineering, and porn is porn.
That doesn't stop the bizarre campaign linked in your post from being rather hyperbolic. The entire premise is that by removing this one image from common use, "millions of women" (their own phrasing in the trailer) will be empowered to pursue and feel welcomed in tech.
The presence of the image (some arguments can be put aside for a moment[0]) is a symptom, not (as far as I can tell) a cause. A campaign like this (and what methods and to whom it is addressed is not clear) would serve better to give a false sense of victory over sexism in tech. Getting the image unused isn't a "small win", I'd say it's detached completely from the battle. A total inversion of the problem, almost comically.
[0] Often people argue for things out of sheer principle, not caring much for the specifics of the matter. This is especially common, in my experience, in tech circles. However, there are interesting questions raised vis-a-vis the intersection of meaning, intention, and purpose. It is suspect to cling to 'original meanings' and intentions, and on the basis of that argument, some could well make the argument that the image is empowering as an inversion of traditional morality against sexual expression which still holds sway in conservative groups today. Just as a slave from the 19th c. would understand 'slave' in Git to refer to them, the Victorian puritan would consider the cropped Lena an abhorrent and obscene reference. They would be happy to see the terminology and image gone, but totally miss out on their situational context.
It's telling that you think this is something that would even annoy most women, especially in a professional non-dating context. The women I know would feel creeped out if someone did that at work. Honestly, even in dating if you're "racing" to open the door it looks desperate. When done naturally some women like it and some think it's archaic.
I'm sorry you have to endure such an antagonistic social environment. Where I live (midwest US), it's considered polite for all people hold the door open for all others, regardless of gender.
I'm a male and if I worked in a place where women sneered at me for holding the door for them, I guess I would start only holding the door for men.
Holding the door open for all genders is widely seen as polite and fine, for whoever arrives at the door first, man or woman. It's an optional courtesy.
Having a man "race over" to hold a door, and doing so because they're a woman where he wouldn't for another man, in a professional context, is creepy and weird.
Do you see the difference? What you're describing in the midwest is fine. But it's not what the parent was responding to. The "antagonistic social environment" with "women sneering" you're describing is a total straw-man of your own imagination.
It's not the picture itself that creates an obstacle. It's that emotionally-aware people want to go into a career where they like being around their co-workers.
So if they get the impression that many of the people in the field are completely tone-deaf about basic stuff, they start asking themselves questions like, "Since I have limited time on this planet, why would I want to spend it where a good percentage of the people around me are insufferable asshats?" And then they pick some other field. Not because they can't handle it. Because they know they can do better for themselves.
It is astonishing that we can't even agree that the photo is erotic.
Or that there's a meaningful difference between media you choose to consume for entertainment (which may be honest-to-goodness porn and that's _fine_) and irrelevant eroticism casually sprinkled in tech demos that don't explicitly involve sex.
I wish we could at least be real about the facts in evidence.
> Quis autem vel eum iure reprehenderit, qui in ea voluptate velit esse, quam nihil molestiae consequatur, vel illum, qui dolorem eum fugiat, quo voluptas nulla pariatur?
Translation: Who has any right to find fault with a man who chooses to enjoy a pleasure that has no annoying consequences, or one who avoids a pain that produces no resultant pleasure?
This text could very well be the first page of a softcore erotica (or even hardcore!). The rest of the text is not usually visible. But then again, neither is the rest of lena!
According to Wikipedia[0], "[t]he placeholder text is taken from parts of the first book's discourse on hedonism." The first book being the first book of Cicero's De finibus bonorum et malorum.
To be honest I'm sure it isn't erotica, just as the Holy Bible is not, despite verses like Ezekiel 23:20. I must also confess I did find the revelation a little amusing. It certainly says something about men (or maybe just the men of the '70s, from when both originate).
I would humbly note that, by volume, almost all of what generic lossy image compression algorithms are applied on, is in fact some form of erotica. A better compressor specifically in the domain of pornography—if widely applied—would probably do more to save total global Internet bandwidth usage than, say, better compression for YouTube videos would.
How about you respect women in tech more and stop assuming they're so fragile that they need men to change an industry so they don't get their feelings hurt.
> A frozen language means that either those modifications are harder
I don't agree that it's harder. What is definitely harder is not being able to ship bug-fixes or modifications without ripping everything up because the language has moved on since your last release. And that is very common when developing for, as an example, iOS, since Swift is a fast-moving language that doesn't maintain backwards compatibility. The benefits of having some new language feature in Swift are far outweighed by the downside of existing codebases being invalidated. The various languages in the JavaScript family suffer from this as well. The Python 2 -> Python 3 debacle was another example of this.
I have dusted off 20-year-old Java code which compiled and ran just fine. That is extraordinarily valuable to me, and requires a lot of discipline by the language maintainers. In fact, the new faster pace of Java iteration could be its downfall; time will tell.
A last note: how many language features from the past 20 years really matter? How many really speed up development, improve maintainability, etc.? I would say that there are very few. In fact, perhaps the only one that passes that bar might be async/await-style threading advancements.
Yeah, there's a big difference when you're targeting platforms (iOS apparently and to a lesser extent the web) that move. But if you're writing Python, Python 2 works better today than it did five years ago. The most recent Python 2-compatible version of every library that existed five years ago works at least as well as it did then, if not better.
People move to Python 3 not because Python 2 is unusable - it was perfectly usable five years ago and the bits haven't disappeared from our world - but because there's lots of small things that make development easier, faster, more pleasant, and more robust. I don't think there's any single feature you can point to, but there have certainly been countless little things where, when I work on a Python 2 codebase these days, I say "I wish this were Python 3."
Anyway, Rust in particular committed to indefinite backwards compatibility when they released 1.0, and the "epochs" system has been a good (post-1.0) implementation of this. They realized they wanted some keywords that they didn't reserve, some syntax that they didn't define, etc., so they said there are two epochs, 2015 and 2018. The compiler handles both, but the newer syntax only works in the 2018 epoch. If you have code that was written pre-2018, it'll keep working indefinitely, even with new compilers.
While I submitted this, I would like to voice my opinion that I am against "var" in Java. People may ask, "Why should I have to enter in the type if the compiler can infer it for me?" My answer is twofold: 1) You or some other maintainer will need to know what that type is later when reading the code. Of course, "var" is meaningless, requiring you to dig back one or more steps to determine the actual type. 2) You don't actually need to enter in the type, any competent IDE can do it for you.
So I'm not sure what we are saving here. When has the time spent typing in code ever been a bottleneck in software development anyways?
This is my feeling from having worked extensively in Java as well as languages that support "var": C# and Swift. I feel like my productivity goes down when I have to support code that uses inferred types. There also seems to be a performance hit when compiling code with inferred typing, although that may be circumventable with better compiler tech, who knows.
* I write code a lot more fluidly with var. When I go back to writing non-var code (enforced by some departments) I find that it breaks my focus on solving the problem at hand. I end up writing my code with var and then going back and replacing my vars with the type names.
* I find code a lot easier to read. I can understand the flow of the logic easier, the variable names are enough. Unless you have the type definition memorized, just knowing the type isn't going to help you much. You're going to need an IDE either way.
* Refactoring is easier. For example, changing the return type of a function from array to list means a lot less code needs to be changed as a result if the callers were using var. The compiler will tell you if there's any instances where a caller was using an incompatible property.
* Reviewing is easier. Your change set is a lot smaller when changing a type name.
Seriously you won't miss it when it's gone. People also used to prefer Hungarian notation.
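For what it's worth, here is a minimal before/after of the kind of declaration I mean. The names are made up; it's just a sketch of the same static type with and without var.

import java.util.ArrayList;
import java.util.HashMap;

public class VarBasics {
    public static void main(String[] args) {
        // Explicit form: the type is stated twice on the same line.
        HashMap<String, ArrayList<Integer>> scoresExplicit = new HashMap<String, ArrayList<Integer>>();

        // var form: identical static type, inferred from the right-hand side,
        // with the same compile-time checking.
        var scores = new HashMap<String, ArrayList<Integer>>();

        scores.computeIfAbsent("alice", k -> new ArrayList<>()).add(42);
        System.out.println(scores + " " + scoresExplicit);
    }
}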
Counter-argument to changing the return type of a function with var:
You can change the return type of the function to another type that might break some assumptions later in the code.
For example, if the code assumes it has a type Foo with a length field, and you change the return type of that function to a Bar without that length field, the compiler will complain that Bar doesn't have a length field, rather than complaining about trying to assign a Bar to a variable of type Foo.
Even worse is if the Bar type changes the semantics of that length field (perhaps going from the number of elements to the max index of elements, causing an off-by-one error); then the code could break silently.
That's mostly a strawman argument, however it is worth noting.
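A throwaway sketch of that second case (Foo and Bar are the hypothetical types from above, with length() changing meaning):

public class ReturnTypeChange {
    static class Foo { int length() { return 3; } } // length = number of elements
    static class Bar { int length() { return 2; } } // length = max index (changed semantics)

    // Originally declared as "static Foo load()", later changed to return Bar.
    static Bar load() { return new Bar(); }

    public static void main(String[] args) {
        // With an explicit "Foo items = load();" this line now fails to compile,
        // forcing the caller to look at the change.
        var items = load();

        // With var it still compiles, and because both types expose a length()
        // with different meanings, the off-by-one slips through silently.
        for (int i = 0; i < items.length(); i++) {
            System.out.println("element " + i);
        }
    }
}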
I mentioned the first case in my post - using var is better because there’s minimal code change when refactoring and the compiler will tell you any incompatibilities to look into.
For the second case, var or not, it doesn’t matter. If you’re changing the meaning of a property it’s common sense to ‘find all references’ and review where that property is used.
In your example, after you manually change the return type at the caller to Bar from Foo, the usage of Foo.length and Bar.length remains and nothing will help you if they changed semantics with neither inferred nor explicit types. The access of the length field is still valid in either case.
This is an old argument. A counterargument is that if the IDE can auto-insert the type for you, it could also show you the type on demand.
But this assumes that when reading code, you're always using a tool that can show you the type. Auto-inserting is only needed when writing.
One thing to be wary of is that even when the compiler shows you the type, (in an error message, for example), if it's complex, it will be difficult to understand. If you want to write simple code, perhaps it's better to avoid or encapsulate complicated types?
Maintenance programmers already have to deal with concealed mystery types, every time you do
f(g())
it's just syntax sugar for
var temp = g()
f(temp)
Now at least programmers aren't tempted to do gratuitous function nesting rather than introducing a variable just to avoid cluttering code with an unimportant type name.
I felt the same exact way about Rust. A few years later, it's totally liberating. I do get lost from time to time, but the savings relative to spelling that stuff out is enormous. IDE support where you mouse over and it tells you what the inferred type is even better, IMO, as you get the "right" answer with full precision not the "coerced" wherein you may have erased some type info.
People said the same thing about auto in C++. A few years later, everyone I know who codes C++ loves it and uses it all the time.
It's a great addition and, like lambdas, will certainly improve how we write Java.
Your comparison is surprisingly apt, since both languages commonly have long type names (std::vector<int>::iterator, etc.), and I find that code using type inference often reduces clutter because I don’t have to know the exact type that something is, but by looking at the code I can generally tell what the “high level” type is.
My experience with C++ is that auto sucks when you're reading someone else's code, and while it's a useful tool it's one that should be used thoughtfully.
That seems to go against the OP's example benefits of very complex types or actually impossible types. The former will not be obvious from the assignment by nature of their complexity, no? The latter... well, being impossible without `var` implicit types is probably often also not obvious.
Unless you are writing C# in notepad, just hover the mouse over the variable and you get the type. Even when there is an ambiguity, it's really a minor inconvenience to whoever is reading the code.
> "var" is meaningless, requiring you to dig back one or more steps to determine the actual type.
> You don't actually need to enter in the type, any competent IDE can do it for you.
These two points contradict each other. Any competent IDE can visualize the inferred type even if you don't spell it, for example as a tooltip.
This is not (only) about typing, it's about the visual noise and redundancy caused by explicit types. Plus, as the article illustrates, the ability to give names to expressions that have very complex types, reducing the need for type erasure.
Yes. Not having them there makes the code easier to read and understand. Seriously.
It is often the case that very simple stream operations end with ludicrously complex type signatures. By being able to var them it makes them far simpler to read.
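A toy example of the kind of signature I mean; the grouping itself is made up, but the shape is typical:

import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class StreamTypes {
    public static void main(String[] args) {
        List<String> words = List.of("alpha", "beta", "ab", "be", "gamma");

        // Spelled out, the intermediate type is Map<Integer, Map<Character, List<String>>>:
        Map<Integer, Map<Character, List<String>>> groupedExplicit = words.stream()
            .collect(Collectors.groupingBy(String::length,
                     Collectors.groupingBy(w -> w.charAt(0))));

        // With var, the line reads as "group words by length, then by first letter":
        var grouped = words.stream()
            .collect(Collectors.groupingBy(String::length,
                     Collectors.groupingBy(w -> w.charAt(0))));

        System.out.println(grouped.equals(groupedExplicit));
    }
}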
C# developer here: my own take is that there are times when var makes code harder to read, but your team should value identifying and avoiding that. In any event, it’s still perfectly possible to elide type information using techniques such as never declaring a variable in the first place. From this perspective it’s just one more tool in the armoury for writing readable code.
In its favour: It’s surprising how much code doesn’t really need the types written out to be readable (a discovery that will shock Python developers not at all). Furthermore, refactoring said code has less busywork in it.
As a side note: There’s one weird benefit of var not mentioned in this. Namely, you’re guaranteed there’s not going to be a cast. This is a serious problem in C++ where assigning the result of a function to a variable can have arbitrary side effects. (It’s not that bad in Java since pretty much all you can do is cast to interface.)
I'm glad that I'm not the only one in the Java community who is extremely against `var`-like constructs. Large type inference is an anti-pattern.
People usually fight this with "why would I need to type it if the compiler can figure it out!?" but those people don't understand the cardinal rule of software engineering: code is not for the compiler or the computer to understand, it is for the programmers to understand. If this wasn't the case then more people would be using APL or similarly esoteric languages.
Adding the extra effort of recursing down the rabbit hole to find the first type being used does not sound like it will make Java more friendly.
Have I saved you any extra effort here by specifying the types? You have no idea where these types came from or how they are defined. So why is this useful? 'Go to definition' works just as well on var.
Yuck! I need to read that entire thing to have any clue as to what it produces or how I should go about using it. It is compact but it is not understandable. The "magic" of Java is clean abstractions with no voodoo. PHP/Ruby programmers like frameworks that hide code from the developer, that layer on complexity, and that "just work" (until they stop). In a proper Java project anyone can see visually, in any part of the program, what is being attempted and what is being linked up to what. A big portion of this is types.
Is all you need to read from your example. If we were to expand your ideal to a function it would look like this:
void someFunction(var customer) {
    var account = customer.GetAccount();
    var transactions = account.GetTransactions();
}
I can honestly tell you that it is impossible for me to know what will happen in this program unless I go through and look at every single code path that reaches this function. Once you have entire functions, or units of code, that are using `var` it becomes entirely unmanageable. Is a string getting into `someFunction`? What about a `Trader` or a `Banker` making its way in there rather than a `Customer`? Would you know this was happening, given that both Trader and Banker implement .GetAccount()?
It's no surprise though that in the world of null-propagation haven that the language maintainers would decide to expand this class of problem to all object types! Why should the null primitive be the only thing that gets to ruin our day?
That code is much more scannable. The ability to scan code, to quickly glance at a method or a class and grok it in under 30 seconds is paramount. In large projects I've worked on any code that fails the 30-second test is immediately rejected at code review. Having to hover over every variable to see the types will make scanning such code in an ide a very tedious process. It will make most code review tools (that don't support such functionality) much, much less useful. I fear for the 30-second test and scanning code.
I find it much easier to scan code that uses var. Type definitions (sometimes incredibly verbose) do not help readability. And knowing the name of the type is not the same as 'knowing' the type. You still need to go to the type's definition to know what it is. Unless you have all the types in your application memorized.
This is a poor argument for var. Competent Java programmers use the extract-to-variable keybinding (Cmd-Opt-V in my environment); it even guesses the variable name correctly.
Having the type name doesn't tell you anything about the type itself. Unless you have the type definition memorized, having the type declaration there is not very useful.
If I replaced the types above with var, you would still be in the dark as to what an Account and/or Transaction is without going to the type definition.
Eh? There's a lot of information here, even without familiarity with the code (and really, most people have at least a partial understanding of the codebases they work on).
I can see that the thing returned from getAccount() is an Account object! It's not an accountId, or the name of the account, or an AccountDTO, or an Optional<Account>, or any of the dozen other things I've seen people return from methods named getAccount().
Depending on how methods are named, the implicit documentation can be very helpful:
Account from = transfer.getFrom();
I don't think that var is particularly evil, but I think it has a poor case for existence. Meh.
...tells you nothing. Is transactions iterable? Is it an enum? Is it ordered? Are duplicates allowed? Maybe you're returning a basic type? Maybe it's another pojo?
The type definition alone wouldn't tell you if it's an enum, ordered, or no duplicates allowed...
The writer would know what type they're working with through IntelliSense. The reader would know it's iterable just by reading the code that follows it.
So again, what does the type info give you? Most of what you mentioned you wouldn't be able to figure out from just a type name...
'var transactions' is enough information for most people to understand it's a list of some sort. If it really matters to you then use the IDE. The full type info is just not that useful. That's why most languages are moving towards inferred typing - TypeScript, C#, Java, C++, Rust, Go, Scala, etc...
Why as a maintainer should var make the type harder to know? It's right there on the screen when you are initializing the variable and hopefully you're naming your variable something to make it obvious (not Hungarian notation).
If your method is too large to fit on the screen, the method is probably too long.
But what competent IDE doesn't allow you to just hover over the variable to the know the type?
I quite like type inference in typed languages (e.g. Go, Kotlin etc), and when I read about it coming to Java I wasn't that fussed, just saw it as a nice to have.
It didn't occur to me before I read this post how complex the chained generic types you sometimes get with "builder"-like patterns (e.g. SQL generators) can become, so this would tidy that up quite nicely.
Well, I don't like streams in Java; they make code much slower. They look fancier, but in reality streams are often abused and make code run longer in almost every case I measured (I took some methods from GitHub, SO and Reddit comments). The equivalent for loop / for-iterator loop is much faster. Solution: you don't have to use them, and you can educate others how to use them properly.
It is possible for a compiler to optimize long method chains into the equivalent imperative code. The Rust compiler is an example, and in the case of the Iterator trait, the result can actually be optimized better than the equivalent imperative code, by removing bounds checks when indexing into the array.
The default Java compiler doesn't do much optimizing. This is intentional, so there would be incentive to make the JVM really smart (a great decision, I think). But it's possible that the JVM doesn't optimize streams that well at this time.
Do you have any benchmarks for particular slow use-cases? I used to think the same, but in my quick benchmarks streams were as fast as handwritten code (may be a bit slower, but not much slower).
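If anyone wants to measure their own hot paths, here is a minimal JMH-style sketch (it assumes JMH is on the classpath; the workload is a trivial sum, so treat it as a template rather than evidence either way):

import java.util.concurrent.ThreadLocalRandom;
import java.util.stream.IntStream;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

@State(Scope.Thread)
public class StreamVsLoop {
    int[] data;

    @Setup
    public void setup() {
        data = ThreadLocalRandom.current().ints(1_000_000).toArray();
    }

    @Benchmark
    public long loopSum() {
        long sum = 0;
        for (int x : data) sum += x; // plain for-each over the array
        return sum;
    }

    @Benchmark
    public long streamSum() {
        return IntStream.of(data).asLongStream().sum(); // same work via a stream
    }
}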
You can change the concrete types returned by methods without refactoring variable declarations. For example, it's nice when dealing with collections without referencing them as an interface. You might think "what's so wrong with using the interface as the declared type?" In C# you can run into issues when using interfaces, since things like enumerators are objects through the interface, whereas concrete types can use optimized structs.
Can't say I'm happy about this. Type inference really doesn't belong in Java in my opinion.
Not only is it ambiguous to developers who might be maintaining the code later, I find it much worse for readability. People tend to start writing OO code like it is JavaScript, which is never good.
The IDE is Java's biggest strength, and it matters.
I've worked with plenty of engineers who are absolute masters of vim and emacs; their fingers fly on the keyboard. It looks impressive but even the best of these people look like rank amateurs compared to the people who have spent equivalent time mastering IDEA or Eclipse. With a good IDE the code practically writes itself. This is a real productivity gain, and needs to be considered as part of the value proposition of the language/environment.
I think the vim/emacs vs. IDE debate is similar to playing shooters on console vs. desktop: there's no way a mouse (a control with absolute positioning) could be replaced by a joystick (a control with relative positioning) with no loss of functionality.
I don't like the fact that `var` breaks the class hierarchy. I can write `List l = getList()` and then variable `l` will have only methods from `List`. If I later decide to change the `getList()` return type, it'll be easier to migrate the code. With `var`, variable `l` will probably end up with something like the `ArrayList` type, and I can accidentally use methods from `ArrayList` even if I don't really need them, tying this code to the concrete class.
It's obvious that everyone will use `var` everywhere, so using `var` in one place and explicit type declaration in another probably would be even worse.
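A small sketch of that coupling (getList() here is hypothetical):

import java.util.ArrayList;
import java.util.List;

public class VarCoupling {
    // Today this happens to return an ArrayList...
    static ArrayList<String> getList() {
        return new ArrayList<>(List.of("a", "b"));
    }

    public static void main(String[] args) {
        List<String> viaInterface = getList(); // only List methods visible here
        var inferred = getList();              // inferred as ArrayList<String>

        // Compiles today, but ties this call site to ArrayList specifically;
        // if getList() is later changed to return a LinkedList, this line breaks.
        inferred.trimToSize();

        System.out.println(viaInterface.size() + " " + inferred.size());
    }
}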
The problem here is not the `var` keyword but the `getList()` method that is leaking implementation details by having a concrete class return type instead of an interface.
But it's a good style: return the concrete class, so that if I really need those implementation details, I can have them without casts and potential runtime errors. Return as concrete a type as possible and declare the variable holding the result as abstractly as possible.
If I commit to returning an ArrayList then it's not backwards compatible if I want to change to a different concrete List. My experience has been that if you return a concrete class then people will rely on those details even when they don't need to. And if you're writing a library that others depend on then you don't even know how they're using your returned value.
Of course if people are doing unsafe downcasts because they need the guarantees of a particular implementation then you should just return the ArrayList directly. And fortunately this change is backwards compatible, so there's no need to worry!
On your point about the IDE helping you, there are two use cases.
object i = new object();
I agree you only type it once; the IDE will suggest object after new. But in that case the type is needlessly redundant, and var could be used without making the code any less readable.
object i = myfunction();
Here the object is semantically useful, but given that you have to type it before you type the name of the function, I don't see how the IDE can possibly help you. Not only that, but unless you know the return type of that function by heart, it forces you to go check it out before you even start the line. And if you are using generics (or ValueTuples) that could be a long type name.
For case 2, a modern IDE like IntelliJ has completion suffixes. So you can type:
myfunction().var<TAB>
and it would expand to
object i = myfunction();
where i would be highlighted so you can immediately type the name of the variable and when you press enter the cursor is placed after the completed statement.
I agree. It seems very clear that code readability is reduced by var-like type hiding. When I'm doing code review and I see "var address = contract.getAddress();" what is the type of that variable? I have no idea.
Not sure how this helps. I pretty much knew 'getAddress' would return some sort of Address structure. It's not like knowing the type name tells you what properties are on it. So what's the point?
When did programming suddenly shift away from "the programmer should be aware of what they are doing?"
Hell, I can't count the number of times I've had to troubleshoot bad imports/classpath management brought about by some genius letting his IDE do his thinking for him.
Making it harder to program is not synonymous with making the programmer 'more aware'. I'd argue the opposite. It's much harder to figure out the source of something, or find all references to it, without an IDE. An IDE with IntelliSense makes programmers much more aware of what their options are and lets them navigate large code bases quickly.
String searching text files for definitions or references is just crude.
This, on the other hand, seems perfectly reasonable to me:
var c = new Customer();
Type inference works great imho for avoiding specifying the type twice when using a normal constructor (not a factory function), and in a codebase where actual classes, not interfaces, are specified as method parameters, though I suspect this is an antipattern ;)
Why would an address be a string? That's poor use of types. I would expect that getAddress() returns an Address. If it did return something non-obvious, then I would expect that to be expressed through the interface, and thus the call would be getAddressAsString().
Any sufficiently sharp tool can cut deeply in the hands of the untrained. The answer isn't to dull our blades, it's to find better apprentices and journeymen to work with.
The answer also is to have safety features. A chainsaw has to be sharp, but also has to have a chain catcher, kickback protection, etc.
I don’t think requiring programmers to always write out the type of a variable on every declaration is such a feature, but can see arguments for requiring them in some places where compiler could infer them. Types of function arguments in function declarations are an example.
I’m OK with implicit typing when the real type is “nearby”. For instance, if 2 lines away I see “Array of ObnoxiouslyLongTypeName”, I gain nothing by having to restate ObnoxiouslyLongTypeName for an iteration loop on that array. Similarly, a method or variable name may strongly hint at what it is.
Also, verbosity in programming is definitely a hindrance to maintenance. It’s generally easier to understand what’s there when you can remove some noise.
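A quick sketch of the "type is nearby" case (ObnoxiouslyLongTypeName being the made-up name from above):

import java.util.ArrayList;
import java.util.List;

public class NearbyTypes {
    // Hypothetical long-named domain type.
    record ObnoxiouslyLongTypeName(String id) {}

    public static void main(String[] args) {
        List<ObnoxiouslyLongTypeName> items = new ArrayList<>();
        items.add(new ObnoxiouslyLongTypeName("x"));

        // Two lines below the declaration, restating the element type adds nothing:
        for (var item : items) {
            System.out.println(item.id());
        }
    }
}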
There is no appreciable performance hit for locally inferred types given that the compiler isn’t doing much. Globally inferred types can take a hit, though most of those are based on Hindley Milner which can be pretty efficient.
If the suggested `var` is anything like that of c#, there is no performance hit. When you type `Type t = some.expression()`, the compiler has to perform type inference on `some.expression()` anyway, otherwise it wouldn't be able to tell you when you've declared the type of `t` incorrectly. Indeed, when you write the wrong type there, the error message will actually tell you what type was inferred.
It reminds me of one of Bertrand Meyer's many criticisms of C++ that I read in his book on Eiffel. (Almost every page of his book made fun of C++ in some way -- it was a delightful read!)
He pointed out that there was no reason for C++ to have both "." and "->", because the compiler always knew which one was required, and it only gave the programmer the opportunity to make a mistake, and a lot of extra effort changing every line of code if you change your mind about whether to use a pointer or not.
The definition of whether a member is a pointer or an inlined struct/object should only be one place in the code: in the class declaration, not scattered around every line of code in the program that uses it.
C++'s excuse was that it was trying to be compatible with C. Of course Java side-stepped the problem by not supporting embedded structs, but C# got it right.
I vaguely remember that there was some annoying guy on comp.lang.c++ years ago who relentlessly campaigned for "operator .", because he was sure that it would somehow make C++ amazingly powerful and easy to use. But many people seemed to disagree with him and had problems with it, because it never happened.
Anybody remember what the controversial benefits and problems of "operator ." were? Here's something I just found about that:
But that's coming from the same madman who wrote the proposal for "Generalized Overloading for C++2000" that actually let you overload whitespace with "operator ' '".
The big part of local type inference, which even Java couldn’t ignore, is the inference of method and function type parameter bindings for each call site. There should be some cost there, but it’s probably not much unless you are implementing something like Scala.
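For the avoidance of doubt, this is the kind of per-call-site binding meant (a hypothetical generic method; T is re-inferred at each call):

import java.util.List;

public class CallSiteInference {
    // T is bound separately at every call site from the arguments and target type.
    static <T> T firstOrDefault(List<T> xs, T fallback) {
        return xs.isEmpty() ? fallback : xs.get(0);
    }

    public static void main(String[] args) {
        // Inferred as T = String here...
        String s = firstOrDefault(List.of("a", "b"), "none");
        // ...and as T = Integer here, without writing the
        // CallSiteInference.<Integer>firstOrDefault(...) form anywhere.
        int n = firstOrDefault(List.<Integer>of(), 0);
        System.out.println(s + " " + n);
    }
}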
Not the poster, but my vote would be that it gets far more credit for readability than it deserves.
The dynamic nature helps a fair bit with some actual readability. But by and large, I think it is more good marketing and loud opinion than anything else that makes people think it is readable.
I’m only an amateur Python programmer so grain of salt please.
Python is excellent at being intuitive and readable for the original programmer or when you’re skimming the code looking for broad logic.
Not so much for the maintainer. The programmer needs to keep a lot of state in their head to make up for the lack of a static type system. If you don’t have good tests, it is even harder.
It's so funny, watching Java fans talking about silly things like `var` whilst their language gets more obsolete every day. I'm so sorry, but basic type inference won. Can you please go away so we can get a good ecosystem and a good language (with first-class support VM-wise) to go with it in a single package?
On a side note, it's getting very tiring to rewrite the whole ecosystem every time a language is being annoying. Can the CS types please work on this real-world problem a little instead of going knee-deep into homotopy type theory? Please solve this somehow: allow engineers to leave a language without leaving its library ecosystem - let's make libraries super-portable, easily.
I'm not sure I understand your second paragraph, many (most?) JVM languages (Kotlin, Scala, Clojure) have Java interop, you can leave Java "the language" while keeping Java "the library ecosystem".
Re: Clojure, the stack traces I think explain the issue (they're pretty terrible; you can see the Java guts).
With Scala it's probably better, but the impedance mismatch still shows here and there (the Java ecosystem doesn't use case classes).
Not sure what the situation is with Kotlin, but I'm guessing it's the best, because it's closest to Java semantically and made by JetBrains, who pay extra attention to usability for developers.
I should've probably avoided the bitter sarcasm, but I've seen this `var` argument over and over for over 10 years now and I can't believe it's still around, even when it's very clear it's bringing the language down (you can't name anonymous object types). Local type inference does not hurt readability, since in > 90% of the cases the expression on the right side contains more than enough information for a human reader to infer the type too. This is often so pronounced that Java code looks downright silly: `Item item = x.getItem();`, `Array<X> xes = new Array<X>();` and so on.
Yet developers have opposed this change over and over, and now that it's finally getting in, it feels like such a waste that it didn't happen sooner. Perhaps the ecosystem would've kept more people if it had...
Which reminded me of another waste: every time a new VM and/or language is created (like say Golang), we pour in millions of developer hours all over again re-inventing its library ecosystem. Because clearly in our new VM / language code from other ecosystems is useless junk somehow. And everyone accepts this as normal. Doesn't that ring any alarm bells in the back of our heads by this point?
I would argue that the opposite of a bloom filter doesn't really exist, at least not in a satisfying way. A bloom filter's size is dependent only on the desired false positive rate, whereas its opposite must be dependent on the size of the data. (And don't be fooled by data that can be represented by a primary key, that's not as general as a bloom filter.) I tried, with limited success, to explain my point of view in this answer on StackExchange: https://cstheory.stackexchange.com/questions/6596/a-probabil...
This probably runs afoul of your "at least not in a satisfying way" constraint, but:
It is pretty easy (an exercise) to implement the "opposite of a Bloom filter" if you start from a summary of the complete set of events and support deletion, rather than starting from the empty set and supporting addition.
What makes everything seem hard is the (often unstated) requirement that you start from an empty set and support addition, which is roughly as hard as implementing a Bloom filter that starts from the complete set and supports deletion. Neither of the links make this requirement explicit (though, it is implicit in their "motivation" sections).
Bloom filter size scales logarithmically with the inverse of the false positive rate and linearly with the number of items stored.
The article doesn't mention the false negative rate. Unlike a Bloom filter, it'll depend on order when the input includes repeated elements. But in general, required memory will increase quadratically in the number of items stored at a constant false negative rate (because of the "birthday paradox").
So it isn't the opposite of a Bloom filter. But what is?
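For reference, the usual Bloom filter sizing is roughly 1.44 * log2(1/p) bits per item, which is where the "logarithmic in the false positive rate, linear in items" scaling comes from. And for contrast, here is a minimal sketch of the kind of "opposite" structure being discussed: a fixed, direct-mapped table of fingerprints, where collisions evict older entries, so you get false negatives but (up to fingerprint collisions) no false positives.

import java.util.Arrays;

public class SeenBeforeCache {
    private final long[] fingerprints;

    public SeenBeforeCache(int slots) {
        fingerprints = new long[slots];
        Arrays.fill(fingerprints, 0L); // 0 means "empty slot"
    }

    // True only if this exact item was recorded earlier and not yet evicted;
    // a false "true" requires two items sharing the same 64-bit fingerprint.
    public boolean checkAndAdd(String item) {
        long fp = fingerprint(item);
        int slot = Math.floorMod(Long.hashCode(fp), fingerprints.length);
        boolean seen = fingerprints[slot] == fp;
        fingerprints[slot] = fp; // evicts whatever was in that slot before
        return seen;
    }

    private static long fingerprint(String item) {
        // 64-bit FNV-1a; any decent hash would do for a sketch.
        long h = 0xcbf29ce484222325L;
        for (int i = 0; i < item.length(); i++) {
            h ^= item.charAt(i);
            h *= 0x100000001b3L;
        }
        return h == 0 ? 1 : h; // keep 0 reserved for "empty"
    }

    public static void main(String[] args) {
        SeenBeforeCache cache = new SeenBeforeCache(1 << 16);
        System.out.println(cache.checkAndAdd("item-42")); // false: never seen
        System.out.println(cache.checkAndAdd("item-42")); // true: definitely seen
    }
}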
This may be an unpopular opinion, but I don't want Java to move forward faster. I don't really want it to move forward much at all unless there is a huge, tangible benefit from the new feature(s). I am of the belief that programming languages should be a solid, fixed foundation on which lasting software can be reliably built. Every time a feature is added to a programming language, it becomes larger, more complex and harder to learn. Rapid changes to languages can also result in regrets, and it is essentially impossible to take something back in language development.
In general, I think too much stock is put into language features, perhaps because many developers are bored with the actual software they are writing/maintaining, and so new language features are relatively fun. As a mental experiment for those who know both Java and Kotlin, or both Java and Scala: Suppose you were asked to estimate the time required to implement a system in Java, and you arrived at an answer of 2 months. Now what would be your estimate for the same system, but written in Kotlin? How about Scala? Admit that it would be the same. (Well, probably a little longer for Scala, but just because it takes forever to compile, ha.)
I think that good languages offer features that make your code better, not let you write it faster.
Sure, it may take me two months to write that thing in Scala still, but I'll have more confidence it will work well, and it'll be nicer to read, maintain and work with moving forward.
Scala is an interesting example for this, because the language is a grab-bag of features that definitely can be abused by people who don't know better, it's definitely easy to write worse code in it.
The reality is there is an easy blueprint for Java, because C# is Java, but done better. It's moving at a good clip, but the features coming in are very useful and well thought out.
Not to mention the implicit dismissal of developer happiness. That's one of the best things about the ruby and rails communities: why shouldn't our tools be nice to use? Kotlin has (imo) that same attitude as compared to java.
Language greatness is pretty subjective and task specific. There are some languages I will never declare great (e.g. PHP, JavaScript, Ruby), but others could be great for different tasks.
> Language greatness is pretty subjective and task specific. There are some languages I will never declare great
If greatness is subjective AND task specific, then the languages you will never declare great could be considered great by others for the tasks they perform. And by your own admission if they were great subjectively AND for a particular task, that would make those languages great. But, you still claim that you would never declare them great?
Common Lisp has moved forward exactly zero "officially" for decades. Yet, modern CL implementations are largely compatible with one another, largely interoperable with modern multicore hardware and multithreaded operating systems, largely capable of using Unicode and other representation formats, etc.
Of course, CL was designed with "extending CL" as a feature, and not locking the CL user into CL as it is "today." Shame so few other languages bother with this (Java and Clojure on the JVM come to mind immediately as hard to extend). Other languages have very active language evolution (Haskell/GHC comes to mind).
I like my languages to evolve. Preferably not the way Java did (I still hate the 1.5 type erasure hack).
I agree with what you say about language evolution, but this proposal isn't just about the language—it’s also about APIs, tools, and implementation-specific features (e.g., JITs and GCs), which can evolve more flexibly than the language itself.
Sorry, can't do that. If my estimate was 2 months in Scala, it would be 6 months in Java: 2 to write the software, and 4 to find and chase down all the NPEs, casting exceptions, and concurrency bugs, while building a test kit for things the scala compiler already verified the moment my scala code compiled.
The last time I had a casting problem was when someone passed a Scala Map to Freemarker.
If it takes 4 extra months to debug NPEs, then someone still needs to learn how to code, because things the compiler can catch, an experienced programmer should be able to avoid in the first place.
That's what I hear. I personally think it's a shame that we have to live through years of terrible bugs in terrible languages just to become an experienced programmer that can finally know how to always avoid those mistakes in the first place.
If you're already there, great. I applaud your superior experience, and hope you really enjoy your C++ footguns. But don't tell me that the scala compiler doesn't have value to me just because you're a code god and don't need it. I actually benefit from languages that help me avoid stupid mistakes.