Having worked professionally almost exclusively in the backend for about 11 years now, and being a hobby developer for over 22 years, my advice is this-
1. It's important to start and keep at it. As many others say here, pick whichever and just learn. The longer you stick with it, the better you'll get. It naturally follows that the more you enjoy something, the longer you'll stick to it. Most of the knowledge you gain from one language can be transferred to another, except perhaps some syntax. Try to pick a language that doesn't feel too overwhelming if you don't have someone else to teach you and push you- Python is an excellent choice.
2. There are a few different programming "paradigms" out there, but the most popular (i.e. job market) ones are Object Oriented Programming (OO/OOP/OOPS) and Functional Programming (FP). There are distinct advantages to both, so it's almost like an early choice you make in a game that affects the rest of your playthrough. To switch, you have to go back to the beginning and make the other choice, having to learn a lot of new things again. It can sometimes be tricky to wrap your head around one if you're used to the other. You can learn both in Python, but some languages force you to pick one or the other a bit more. Java, until version 8, was almost entirely Object Oriented. With Java 8+ you have a lot more functional programming (half) baked in. However most of the larger companies using Java are used to the OO way of things, even resisting changes that introduce FP into the mix for various reasons (tooling support, debuggability, etc.). Companies that want to do Functional Programming rarely pick Java, going for more traditional FP-first languages like Clojure, Haskell, Scala, etc. There isn't a shortage of jobs in the market for either paradigm at the moment, however being focused on one may make it harder to be employed in the other, because that's where your experience will lie. I once had offers to join a company that did Java (OO) vs a company that did Scala (FP). I picked the one that paid more, although a part of me terribly misses Scala now.
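To make the contrast concrete, here's a minimal Python sketch of the same task written in each paradigm. This is my own illustrative example (the `Cart` class and the prices are made up), not taken from any particular codebase:

```python
# Object-oriented style: state lives in an object, behavior in its methods.
class Cart:
    def __init__(self):
        self.prices = []

    def add(self, price):
        self.prices.append(price)  # mutate internal state

    def total(self):
        return sum(self.prices)

cart = Cart()
cart.add(3.50)
cart.add(6.25)
print(cart.total())  # 9.75

# Functional style: no mutable state, just data flowing through functions.
from functools import reduce

prices = [3.50, 6.25]
total = reduce(lambda acc, p: acc + p, prices, 0)
print(total)  # 9.75
```

Same answer either way; the difference is whether you organize the program around objects that hold state or around functions that transform immutable data.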
3. Learn a language that is current, has a large community and great IDE support. This can help you learn quickly. JetBrains has IDEs for several languages- Java and all JVM languages (Scala, Clojure, Kotlin, etc.), JavaScript (and TypeScript), Python, C, etc. These languages are all great on all three major OSes- Windows, Mac and Linux. You could pick C#, which has amazing IDE support (Visual Studio), but that's a bit Microsoft-heavy. Again, Python is a good choice and most OSes already have it pre-installed. If you do pick Python, pick version 3. A lot of documentation out there is still for version 2 because somehow, even decades later, people refuse to move on. The most important thing to know is that if it's current, you will not run into use cases that are impossible to accomplish in the near future. If it has a large community, you can find an answer to your question on Stack Overflow because someone else already asked it. If it has great IDE support, it will guide you every step of the way - through syntax errors, compilation errors, stack traces, etc. Once you've learnt one language a fair amount, you can always pick another that isn't as forgiving to newcomers. You'll see that a large amount of your knowledge easily transfers.
4. Reduce your cognitive load. If you're 10 years old, you probably have more time on hand than if you're 35, which means you don't have the time to learn everything about something anymore. If you're learning programming while juggling a day job and kids, you don't want to spend a week learning what things like pointers, ASTs, pass-by-reference, pattern matching etc. are unless you absolutely need them and there isn't a library that will do it for you. How deep you go and in what language could determine where you will be hired and what kind of roles are right for you. Take a look at what the companies you want to work at use. You can get a well paying backend job without knowing what pointers are, because let's face it- companies want to solve problems at the end of the day, not compete with each other on how much esoteric knowledge they possess in an elaborate trivia contest. However, there's almost zero chance you will write any Linux kernel code without knowing what pointers are. If you already know something, use it. If you're a JavaScript developer, maybe learn some TypeScript. It extends beautifully to both the front end and the back end and uses a lot of the same tools in both places, so you only have to learn once.
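As an aside, the pass-by-reference idea mentioned above is one of those things a quick sketch explains faster than a week of reading. Here's an illustrative Python example of my own (strictly speaking, Python passes object references by value, which is why the two cases below behave differently):

```python
# Mutating an argument inside a function is visible to the caller,
# because both names refer to the same list object.
def add_item(items, item):
    items.append(item)

shopping = ["milk"]
add_item(shopping, "eggs")
print(shopping)  # ['milk', 'eggs']

# Rebinding the parameter only changes the local name; the caller's
# list is untouched.
def replace_list(items):
    items = ["something else"]

replace_list(shopping)
print(shopping)  # still ['milk', 'eggs']
```

The point of the original advice stands: you can write Python productively for years without thinking about this, and learn it only when a bug forces you to.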
5. Build projects. Pick a problem that you want to solve, and try to solve it with the language of your choice. Pretty soon you'll see whether that language is a good choice or not, and maybe learn how to overcome a problem with the language you've picked. Most problems can be solved by most languages; only how easy or painful the experience is changes. Once you solve your original problem somewhat, you can go back to the start and see if what you now know about the language, architecture, components etc. can help you solve the problem better. You'll also probably find that your problem is better defined now, and a lot of things you thought were important at the start don't matter anymore, or that you changed your requirements because you found a better way as you worked through it the first time. Build a few of these solutions to problems and soon you'll be an expert at your language of choice, and you'll have something to show for it as well. It's like chiseling rock to make a statue - your first one probably looks like a smaller, different-looking rock, and your current one a more well defined thing of beauty. Your code will improve the same way, except it is a lot more forgiving. You can start over with whatever small or large portion of the code and make it better without losing anything except time, learning all the while. (I am not a sculptor, and I don't know enough about rocks either.)
Sorry, this went a bit off the rails. I hope it was useful for a slightly wider audience.
The tl;dr version would be- Do you know where you want to work? Learn the language they use. If you don't have a preference, pick TypeScript, as it does a lot of things right and extends to the frontend as well, or Python, as it also does a lot of things right and extends to machine learning and microcontrollers.
That's the current requirement for the compression they use. I'm sure that there's a lot of heavy optimizations under the hood to support 4K at 35Mbps, which is most likely lossy as well.
Blu-ray video requires about 144Mbps at the highest quality right now. A chunk of this can also be audio formats with spatial information, like Dolby Atmos or DTS:X.
VR might need even twice that amount for both eyes. Combine that with more dense sensors, probably improvements in processors and memory allowing better lossless decoders, and you can cross 1Gbps easily. I can also see multiple users at home needing this at the same time.
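A rough back-of-the-envelope using the figures above. The 144Mbps and the doubling for two eyes come from the comment; the multipliers for concurrent users and future headroom (denser sensors, better lossless decoders) are illustrative assumptions of mine, not measurements:

```python
# Back-of-the-envelope bandwidth estimate for lossless VR streaming.
bluray_mbps = 144   # top-end Blu-ray bitrate, per the comment above
stereo = 2          # VR needs a stream per eye
users = 2           # assumed: two people at home streaming at once
headroom = 2        # assumed: growth from denser sensors, better codecs

total_mbps = bluray_mbps * stereo * users * headroom
print(total_mbps)   # 1152 Mbps -> already past 1 Gbps
```

Under these (debatable) assumptions, crossing 1Gbps is easy, which is the comment's point.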
100 Mbps H.264 is fine. In fact, for existing content we got mostly indistinguishable results at 50 Mbps. H.265 gives you another 30%.
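Applying the quoted ~30% figure to the bitrates above (a quick sanity-check sketch of my own, not a codec benchmark):

```python
# If H.264 is "fine" at these bitrates, H.265's ~30% efficiency gain
# means comparable quality at roughly 70% of the bandwidth.
h265_saving = 0.30

for h264_mbps in (100, 50):
    h265_equiv = h264_mbps * (1 - h265_saving)
    print(f"{h264_mbps} Mbps H.264 ~= {h265_equiv:.0f} Mbps H.265")
```

So the "fine" 100 Mbps figure drops to roughly 70 Mbps, and the 50 Mbps "indistinguishable" figure to roughly 35 Mbps, with H.265.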
Now there might be people who can notice the difference, but in our limited human studies testing on in-house VR employees, they couldn't (try it out yourself if you have an Oculus Link - the Oculus PC tool lets you adjust the bandwidth live, if I recall correctly). You do want to start initially with a high bandwidth so that the initial I frame gets the most detail, and then you can decrease the bandwidth, which applies to the subsequent P frames.
Obviously VR resolution keeps increasing every year or two so this will grow but you’ve got other techniques that combat this like ML supersampling.
I think an overlooked piece of 10GbE is lower latency and more consistent performance, as packet scheduling is the most challenging aspect of streaming VR. There could also be other use cases where higher bandwidth is more critical, but video streaming for VR likely isn't it.
Absolutely stupid that when you unlink an account the underlying data is silently lost. However due to GDPR and similar laws, irrecoverable delete is absolutely a thing. They cannot recover your data if it went through the same process of forgetting it as they would for GDPR, and there's a good chance the system wrongly treats the unlinking of Twitter as a request to forget. If they can somehow recover the data for you, they have a flawed architecture for forgetting your data, and they will not give that up as easily.
If they have a way to recover the data... they may not want to admit they do... or put in the effort for individual users. Surely database backups exist and the data is not completely lost for 6 months, or however long their backup retention policy keeps backups... GDPR or not.
All comments here are either:
A. "I'm so glad I'm on Linux distro A and not Windows but the UX and UI is terrible", followed by "have you tried distro B, it solves the problems of A"
B. "I'm on distro B and not Windows, but package management, upgrade and/or compatibility is terrible", followed by "have you tried distro C, it solves the problems of B"
C. "I'm on distro C and not Windows, but it doesn't support my audio or video equipment and I need to install and/or spend a few hours searching and compiling various solutions online until my machine is a Frankensteinian monster and while it works for me, it's not for everyone" followed by "have you tried distro A, it solves problems of C"
I love the flexibility of Linux in some respects, but I've had stability issues on Ubuntu and Mint, UX issues on some Fedora-based ones and on Puppy Linux, and compatibility issues in elementary OS (fixes that were available in Ubuntu never made it to elementary, and I got tired of waiting). I've gone through way too many distros trying to find the one that works for me, and none have been as pleasant as people describe.
For work I have to use a Mac, and the inconsistencies in keyboard shortcuts annoy me each and every day. Not to mention non-standard UI components stick out like a sore thumb - especially window maximizing, rescaling, browser and IDE shortcuts, etc. I wouldn't be using it if I didn't have to.
Honestly, the OS I have had the least trouble with and the most enjoyment from was Windows XP, closely followed by 7. 8 was a mess of UI and UX oddities, and 10 is only marginally better. If there were a version of Windows as streamlined as XP for the modern world, I'd fork out $50-100 for it, considering the time it would save me and that my time is worth more than the hassle - and my contribution might help subsidize the cheaper community or pirated editions of the OS.
The interview and feedback process is extremely cold and mechanical, making you feel unimportant very quickly. If you're used to scenarios like this, go ahead.
These guys have been posting this job for several months now, and I've applied twice so far. I did need relocation assistance as well. My resume matches their requirements closely, but they told me "Having carefully reviewed many applications for this position, we feel there are candidates whose profiles are more closely aligned with our requirements."
I guess those "many applications for this position" that "are more closely aligned" didn't work out for them. Good luck I guess.
Lovely bit of fuckery. Customer support and ease of installation usually make or break products. That said, Tesla is probably experimenting with several things by letting David be the guinea pig.
This is not as feature-rich as Streams, but Streams require Java 8, and this is for systems that want to do simple aggregations on JSON data coming in from JSON-based data stores.
I wrote it to bring something akin to MongoDB's aggregations to the JVM. With Java 8, Streams are usually a much better solution.