
Yep. Pretty soon. You write code. You create a Dockerfile. You find a place to run it with your code [cheapest!]. Run it through your tests. Monitor it. The end. No VPCs, salts, puppets, sshs, chefs, horses, ansibles, cats, ec2s, devops, noops, sysadmins, kubernetes or chaos monkeys required.
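That whole workflow really is a handful of lines; a minimal sketch, assuming a Python app with its tests in tests/ and pytest in its requirements (all names here are hypothetical):

```dockerfile
# build, test, and run a hypothetical Python app in one image
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# "run it through your tests" at build time: a failing test fails the build
RUN python -m pytest tests/
CMD ["python", "main.py"]
```

Then `docker build -t myapp .` and ship the image wherever is cheapest.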


Until you discover that the thing you are building requires more than a single application running in a single container and you end up building an entire "Operating System" around your containers and the circle starts all over again.

Complexity is hardly ever in the solution, but mostly in the problem. Single solutions to complex problems often ignore/forget important parts of the problem and they come back to bite you, hard.


I don't know... My experience is the other extreme: tech teams that make everything super complicated to support everything that could possibly happen. As a consequence, the IT environment requires six months of experience to even understand. It's really not very fun to work in those environments. Lots of unnecessary complexity.


Yup. I’ve seen this:

- customers who insist every package has to be installed in some special place because /opt is ‘reserved’

- have to have non-standard ports for everything because it might slow down attackers

- have to have an Apache proxy in front of everything, always - even internal components. ‘Cos.

- won’t invest in trusted SSL certificates for internal services.

- every SQL query has to be wrapped in a stored procedure - no exceptions.

... list goes on


On the other side of the coin, all these things would make sysadmins absolutely loathe you.

- You have 20-30 specialized applications which aren't package-based, stored on an application share mounted at /app with the convention /app/name/version or something, but then this tool hardcodes itself to /opt/name.

- Using nonstandard ports doesn't make you more secure against someone who's specifically targeting you, but it absolutely stops automated attacks and logspam. What do you mean you can't change the port?!?

- Because there's no such thing as a secure internal network. All sites go through the hardened proxy.

- Invest? Nobody is going to pay for certs for internal services, and who wants to set up a reverse proxy for everything? If you can't use an internal CA, that's just poor form.

- That's a new one for me. I assume because they have a single shared database for all of their apps.
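On the internal-CA point: standing one up is only a few openssl commands. A minimal sketch (file names and subjects are made up, and a real setup would protect the CA key and add SAN extensions):

```shell
# create the internal CA (keep ca.key offline/protected in practice)
openssl genrsa -out ca.key 4096
openssl req -x509 -new -key ca.key -days 3650 -subj "/CN=Example Internal CA" -out ca.crt

# issue a certificate for one internal service
openssl genrsa -out svc.key 2048
openssl req -new -key svc.key -subj "/CN=svc.internal" -out svc.csr
openssl x509 -req -in svc.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 825 -out svc.crt
```

Distribute ca.crt into internal clients' trust stores and the certificate warnings go away.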


There is actually a reasonable argument for the stored procedure one. It means you can set different permissions on tables than on stored procedures, so your application doesn't have permission to query your passwords table directly. The benefit is that if an attacker gains access to your application and/or the DB authentication credentials, they cannot then export users' passwords or other sensitive information.
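The pattern looks roughly like this; a sketch in MySQL-flavoured SQL, with all table, procedure, and role names hypothetical (exact syntax varies by database):

```sql
-- app_role can execute the procedure but has no direct table access
REVOKE ALL ON users FROM app_role;
GRANT EXECUTE ON PROCEDURE authenticate_user TO app_role;

-- the procedure reads the sensitive table on the app's behalf and
-- exposes only a yes/no answer, never the stored hash itself
CREATE PROCEDURE authenticate_user(IN p_name VARCHAR(64), IN p_hash CHAR(60), OUT p_ok BOOLEAN)
BEGIN
  SELECT COUNT(*) > 0 INTO p_ok
  FROM users
  WHERE username = p_name AND password_hash = p_hash;
END;
```

Even with the app's credentials, an attacker can only probe one guess at a time, not dump the table.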


Seems more efficient to me to use column-based permissions in this case, and create stored procedures for the queries you need to interact with the "sensitive" columns.

That of course does increase (dba) administration overhead, but it seems on its face simpler and far more "programmer efficient" than storing every single query as a stored procedure.
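Sketching that hybrid: with column-level grants (Postgres and MySQL both support these; names here are hypothetical), the app reads ordinary columns directly and only the sensitive ones go through a procedure:

```sql
-- direct access to non-sensitive columns only
GRANT SELECT (id, username, email) ON users TO app_role;

-- sensitive columns (e.g. password_hash) reachable only via a procedure
GRANT EXECUTE ON PROCEDURE check_password TO app_role;
```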


It's a difficult thing to generalise, but if you have a pretty large or complex application and a DBA (or several), then it does sometimes pay to have your DBAs write the SQL, because it's a different enough paradigm from normal backend development that not all backend developers are skilled at writing performant SQL. However, few applications are that complex, not all businesses can afford a dedicated DBA team, and there are clearly a lot of very talented developers who can turn their hand to multiple different stacks. So it's a difficult thing to generalise.


“won’t invest in trusted SSL certificates for internal services.” <== this, I shake my head every time. I’d do something about it if I didn’t have a billion other things to worry about, like getting working services from teams.

You know, unit test your stuff?


I have the same experience and I'm on both sides in my day job. On the one hand, preventing developers from having to make the same (learning) mistakes that have already been made by ops so they can focus on developing and delivering: stuff like HA, security, backups, networking, statefulness, etc. Just deploying your apps in containers doesn't solve these problems.

But on the other hand I'm constantly fighting the complexity of the existing environment and preventing too much new complexity from being added. It seems that every time a new tool/product is introduced to make things simpler, it just ends up being a Hydra and total complexity only increases.

I have yet to find the holy grail, if there is one; some middle ground? Or maybe it's just human nature to always create complexity where there is none, since "It can't be that simple, can it?"


today's new devs, picking up a 10-30yr old system: this is garbage, we have to recompile under the new 4.0 kernel just to be able to run it on new servers and nobody knows how to read a makefile! let's rebuild with containers!

tomorrow's new devs, picking up a 5yr old project: this is downloading a centOS build from 7yrs ago! which only runs on docker from 5yrs ago! and nobody knows how to build a new base image with the libraries the old one has! let's rebuild it in <whatever snake oil they have 5yrs from now>

the problem is not serverless or not, it's clueless people and corporations with code/process rot. and the only lesson from the cycle is to never trust people who proclaim one solution is the holy grail for all problems.


That's called Kubernetes


That's the joke.


this won't be popular, but I always felt like docker was a step backwards in order to regroup and take a giant leap forward.

I went from right clicking and deploying from visual studio to SSH and configuring dockerfiles, docker compose, even nginx.conf to loadbalance.

You do get more bang for the buck with such a setup, but it's too much work on infrastructure and less time for development.

edit: add kubernetes to it; although AKS and GKE being free lessens the burden, it's still too much for a software dev like me.
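For reference, the nginx.conf load-balancing piece mentioned above is fairly short; a minimal sketch assuming two app containers from docker-compose (all names hypothetical):

```nginx
upstream app {
    server app1:8080;   # e.g. two replicas defined in docker-compose
    server app2:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://app;
        proxy_set_header Host $host;
    }
}
```

Short, but it's one more file to write, version, and debug that right-click-deploy never asked of you.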


> although AKS and GKE being free

I’m a developer too and I feel the burden too. Can you explain what you mean here?


Setting up and maintaining a Kubernetes cluster on my own would probably kill my desire to touch a computer again. Azure and Google provide free Kubernetes-cluster-management-as-a-service. But you are right, it's still a hassle.


Been going through that process the past few days on Scaleway, as they don't have hosted K8s. It's... tough... but I'm learning a lot and now have a much deeper understanding of K8s clusters from an operational perspective, and of the magic happening behind the scenes that keeps everything talking to each other. Even if I wind up on a hosted solution in the end, it's been invaluable.


That's what I told myself too when I invested hours after hours learning Knockout, then AngularJS. Only for reactjs to dominate a year later. I skipped learning react and stuck with angular. Now I hear rumours there's a new sheriff in town named Vue.js.

Careful with filling your brain with domain-specific knowledge.

:)


There is an early access program for their hosted Kubernetes service https://www.scaleway.com/kubernetes/


Yeah I signed up months ago but haven't gotten an invite yet =(


Deploying to a Kubernetes cluster is easy. You just have to learn a few key concepts to write your own .yaml definitions. Also, some open source frameworks come with the .yaml definitions already.
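A minimal Deployment definition really is short; a sketch with hypothetical names and image (you'd pair it with a Service to expose it):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2                     # two pods, rescheduled automatically on failure
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app               # must match the selector above
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.0
          ports:
            - containerPort: 8080
```

`kubectl apply -f deployment.yaml` and the cluster converges on that state.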


Well, this is probably to find common ground between FaaS people and DevOps, haha.

Being a FaaS proponent, I still think this could be a good idea, because of the kinda standardized workings of Docker. I mean, AWS also uses Docker for local testing.


It feels so amazingly depressing for so much sunk knowledge to just go away and become worthless. Feels like I could have learned so many more things that I could have gotten joy out of today, and could still extract useful things from, or build on top of, long into the future, if I'd just focused on the time-invariants of knowledge space.


Half the stuff you described there solves different problems from docker. Not to mention that docker doesn't solve all problems in infrastructure.

I like to think of docker like git. It solves some problems with distributing data but it doesn't solve the problem of developers writing code, writing tests, pipelines, etc. Nor even problems with 3rd party APIs etc. Obviously this isn't a perfect analogy but my point is docker isn't a magical silver bullet.

I've worked in places where their solution to everything was "docker" and it honestly caused more complexity problems than it solved. That's not to say I don't like docker; in fact I've been a big advocate for containerisation long before docker was a thing. However like with any tech, the key is using the right tool for the right job instead of attacking every screw with the same hammer.


If you're working on anything with above-average security requirements you will still be managing your entire stack...


I don’t think that’s likely or desirable. VPCs and the ability to make a service that’s not on the public internet are hugely valuable from a security standpoint. And Ansible, Terraform, etc. are great for automation and for managing configuration and architecture. It’s unwise to discard everything that came before you when hopping on the shiny new bandwagon. Even if it is the future, last generation's tools can still teach you a lot.


You still need all that, you're just paying someone else to do it for you. As the app gets more complicated you will have to do more. It is impossible to remove complexity by adding abstraction. You've only hidden it.


True, but like everything else there are economies of scale in having a few specialists manage the abstraction that everyone else can depend on.


Those things are still required. You're just paying for the abstraction and hoping your host is competent enough to keep things running smoothly.


Whenever a solution offers you simplicity, you're giving up flexibility. That's why I'd prefer to use a more advanced but standardized open source platform like Kubernetes.


dokku is pretty close to this.


Actually, even better:

You use dat or MaidSAFE to write client side apps. The back end is end to end encrypted, secure, automatically rebalanced, uncensorable, permissionless, and people install your app and use a cryptocurrency to pay for resources.

And of course you use public key cryptography to maintain your app and upgrades propagate without needing to select a domain or host for them.



