by vladaionescu on 5/12/16, 1:16 PM with 61 comments
by vidarh on 5/12/16, 2:14 PM
This is wonderfully naive. A great goal, but hopelessly unrealistic. E.g. it currently doesn't handle data persistence at all, so for starters it's targeting "the easy bit", yet there's hardly any mention of things like logging, or what to do when their Consul cluster starts complaining because membership has gotten messed up (ever had a Consul node rejecting most messages as "suspect"? Ever had the same node get registered twice? All kinds of fun stuff), or how to analyse performance issues, networking problems, etc.
It looks like an interesting platform, but anyone thinking this gets them out of understanding ops concerns will have tough times ahead.
by zbjornson on 5/12/16, 3:40 PM
"We rely heavily on Docker and Docker Swarm for containers, orchestration and networking. Other pieces of tech we use are Consul for service discovery and gRPC with Protobuf for the RPC."
Awesome that it's using gRPC under the hood. But I'm still unclear on whether there's an orchestration service running somewhere that takes care of the load balancing, or whether it's distributed.
"When deploying Lever, you allocate a number of servers to it. Lever can then manage resources and auto-scale services running on it - but this is only limited to the servers you have allocated to it."
It would be awesome (and critical to our use case) if it could provision instances, even if it's via a shell command manually entered into a config file. From the stack description quoted above, it sounds like it would do okay if it were coupled with an autoscaling group that performs this function for now...
Finally, I love the sticky resource routing, with the use case being data sharding.
Excited to try this out.
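(The thread doesn't describe how Lever's sticky resource routing actually works; one common way to get this behavior, purely as an illustration and not Lever's real mechanism, is a consistent-hash ring that always maps the same resource key to the same backend instance, which is what makes it usable for data sharding.)

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
)

// Ring is a minimal consistent-hash ring: each node gets several
// virtual points on the ring, and a key routes to the first node
// clockwise from the key's hash.
type Ring struct {
	points []uint32
	owner  map[uint32]string
}

func NewRing(nodes []string, replicas int) *Ring {
	r := &Ring{owner: make(map[uint32]string)}
	for _, n := range nodes {
		for i := 0; i < replicas; i++ {
			h := hash(fmt.Sprintf("%s#%d", n, i))
			r.points = append(r.points, h)
			r.owner[h] = n
		}
	}
	sort.Slice(r.points, func(i, j int) bool { return r.points[i] < r.points[j] })
	return r
}

// Route returns the node responsible for a resource key. The same
// key always maps to the same node ("sticky"), and adding or
// removing a node only remaps a small fraction of keys.
func (r *Ring) Route(key string) string {
	h := hash(key)
	i := sort.Search(len(r.points), func(i int) bool { return r.points[i] >= h })
	if i == len(r.points) {
		i = 0 // wrap around the ring
	}
	return r.owner[r.points[i]]
}

func hash(s string) uint32 {
	f := fnv.New32a()
	f.Write([]byte(s))
	return f.Sum32()
}

func main() {
	ring := NewRing([]string{"instance-a", "instance-b", "instance-c"}, 100)
	// Repeated lookups with the same key hit the same instance.
	fmt.Println(ring.Route("user:42") == ring.Route("user:42")) // prints "true"
}
```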
by fortytw2 on 5/12/16, 1:50 PM
by eistrati on 5/12/16, 5:54 PM
In my humble opinion, this is not even close to "serverless", but rather "cross-platform". The best description of the current "serverless" trend that I've heard is that "the infrastructure comes with no ops and is pre-scaled to massive scale, which only providers like Amazon's AWS, Google's GCP, or Microsoft's Azure and the like can provide" ;)
by stephenr on 5/12/16, 2:50 PM
So they're serverless. Except for the server you're running it on. And the lambda-style code they wrote and uploaded to it.
But serverless, because no ops staff required. Except the ones who installed and maintain this.
This is like a snake eating its own tail and wondering what hurts.
Edit: despite my sarcasm, it's always good to see open-source solutions to reduce reliance (or risk of reliance) on closed vendor solutions. Just stop calling it serverless, and I'll stop telling you how fucking ridiculous that name is.
by toddnni on 5/12/16, 1:55 PM
by siscia on 5/12/16, 4:26 PM
You write your service in Go in such a way that it exposes 3 functions, and it gets compiled down to a single binary; then you can dockerize your application (images are usually smaller than 6 MB) and deploy it wherever and whenever you want.
Of course everything is open source.
I explained it a little better here: http://redbeardlab.tech/2016/03/05/effe.html
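(A rough sketch of the workflow described above: a small Go service exposing a few functions, compiled to one static binary. The function names and HTTP/JSON wiring are my own illustration, not effe's actual API; the in-process test server stands in for a real listener so the example is self-contained.)

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"net/http/httptest"
	"strings"
)

// The three plain Go functions that make up the service.
func Greet(name string) string { return "Hello, " + name + "!" }
func Shout(s string) string    { return strings.ToUpper(s) }
func Trim(s string) string     { return strings.TrimSpace(s) }

// handle wraps a string->string function as a JSON HTTP endpoint.
func handle(fn func(string) string) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		var in struct{ Arg string }
		if err := json.NewDecoder(r.Body).Decode(&in); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		json.NewEncoder(w).Encode(map[string]string{"result": fn(in.Arg)})
	}
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/greet", handle(Greet))
	mux.HandleFunc("/shout", handle(Shout))
	mux.HandleFunc("/trim", handle(Trim))

	// In a real deployment this would be http.ListenAndServe(":8080", mux);
	// an in-process test server keeps the example runnable as-is.
	srv := httptest.NewServer(mux)
	defer srv.Close()

	resp, err := http.Post(srv.URL+"/greet", "application/json",
		bytes.NewBufferString(`{"Arg":"World"}`))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var out map[string]string
	json.NewDecoder(resp.Body).Decode(&out)
	fmt.Println(out["result"]) // prints "Hello, World!"
}
```

Since `go build` produces a statically linked binary, the resulting image can be built `FROM scratch`, which is consistent with the sub-6 MB image sizes mentioned above.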
by jackcosgrove on 5/12/16, 2:40 PM
by marknadal on 5/12/16, 5:47 PM
It is sad to see so much elitism in the tech world that is dismissive of anybody who knows less than they do. This same attitude seems to come out a lot to dismiss really cool tools like this ( http://leveros.com ) that help solve frustrations and problems. Could we try to be more encouraging of newcomers and of tools that automate those gaps (laziness is a programmer's virtue)?
by Thaxll on 5/12/16, 2:00 PM
by csears on 5/12/16, 7:54 PM
by danielrhodes on 5/12/16, 7:50 PM
Could anybody shed some light on how this might be used in practice? It seems that at the point where you have multiple endpoints, you may as well just deploy a full application.
by kolanos on 5/12/16, 4:22 PM
by skrowl on 5/12/16, 7:02 PM
by biznickman on 5/12/16, 7:33 PM