by ptrik on 3/28/22, 4:05 AM with 157 comments
by pech0rin on 3/28/22, 9:28 AM
Also, since this is a beta product, I doubt you would hit too many problems with any deployment solution at this point. Why are you using Aurora + serverless + EventBridge at all? It's magical because you are all using the same dependencies as prod, but why are you using these dependencies at all? All you need is a single Postgres database (easily spun up on any developer machine) + a CRUD app (easily able to use full-stack TypeScript with Next.js or the like).
Word of caution about having this many moving parts: they will bite you in the ass. Now it all seems rainbows and magic, but there is a reason people on this website preach simplicity -- when shit hits the fan, the simple solutions are easy to debug and fix without spending 1000s of dev hours. The complicated magic solutions are the ones that cause 24+ hr downtimes and need to be re-architected. You are a customer service tool, not a devops company; the choices here don't make much sense.
by sdevonoes on 3/28/22, 8:56 AM
So much vendor lock-in, so much dependence on internet connectivity, so many buzzwords around... so much magic (as the post's title describes). I really dislike having to deal with a lot of third-party magic when programming.
by raydiatian on 3/28/22, 11:38 AM
Throwing my two cents into the conversation: perhaps one day we'll be able to have our cake and eat it too. If http://localstack.cloud (which replicates AWS services locally) ever makes it to a 1.0 release, I may strongly consider a vendor-lock-in-friendly approach like this. Unfortunately, LocalStack recently (v0.14.1) forced me to give up each ambition in my ideal develop-and-test-locally wishlist: first I gave up on CDK provisioning resources locally, then I abandoned using ECR images for LocalStack Lambda, and then I abandoned LocalStack altogether. Their S3 mock still works for local dev, though!
The other thing is... if you want to develop locally but still on top of Lambda, you either skip API Gateway altogether, or you write an undeployed shim (for, say, Express or Node's http module) to fake that RESTful pattern locally. As great as not using the APIGW + Lambda integration is, Lambda doesn't believe in return codes other than 200 (ignoring throws and server failures), and you should probably use the @aws-sdk/client-lambda package. This all scatters weird Lambda anti-patterns around. Yuck, that's gonna be a big "chore:" commit if you ever get sufficiently sick of it. And I'm actually not in favor of using the Lambda response interface, but APIGW Lambda integrations are a seeming sht show. VTL and selection patterns seem so bad I'd rather just have more Kubernetes experience. Knative is good these days, I hear.
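The kind of shim described above might look like the sketch below: wrapping an API-Gateway-proxy-style handler in Node's built-in `http` server for local development. The event interface here is a trimmed-down assumption of the APIGW proxy shape, not the full contract, and `serve`/`toEvent` are hypothetical helper names.

```typescript
// Local-only shim: adapt Node's http server to a Lambda handler written
// against an API Gateway proxy-style event, so the same handler code can
// run locally without deploying APIGW.
import * as http from "http";

interface ProxyEvent {
  httpMethod: string;
  path: string;
  queryStringParameters: Record<string, string>;
  body: string | null;
}

interface ProxyResult {
  statusCode: number;
  headers?: Record<string, string>;
  body: string;
}

type Handler = (event: ProxyEvent) => Promise<ProxyResult>;

// Translate an incoming method/URL/body into the event shape the handler expects.
export function toEvent(method: string, url: string, body: string | null): ProxyEvent {
  const parsed = new URL(url, "http://localhost");
  return {
    httpMethod: method,
    path: parsed.pathname,
    queryStringParameters: Object.fromEntries(parsed.searchParams),
    body,
  };
}

// Wrap the handler in a plain Node server for local development only.
export function serve(handler: Handler, port: number): http.Server {
  return http
    .createServer((req, res) => {
      let chunks = "";
      req.on("data", (c: Buffer) => (chunks += c.toString()));
      req.on("end", async () => {
        const result = await handler(
          toEvent(req.method ?? "GET", req.url ?? "/", chunks || null)
        );
        res.writeHead(result.statusCode, result.headers);
        res.end(result.body);
      });
    })
    .listen(port);
}
```

The point of keeping `toEvent` separate is that the request translation, the part most likely to drift from what APIGW actually sends, stays in one small testable function.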
by emilsedgh on 3/28/22, 7:38 AM
I write a conventional 12-factor app, with no vendor-specific code, which can be executed on my local machine for development, on Heroku, or anywhere else, and hand it over to Google Cloud Run. It's really an amazing service.
I have a few wishes for their service (e.g. integration with Papertrail, an easy way to run background workers, etc.), but overall the whole thing is the best of all worlds:
* No vendor lock-in due to platform-specific code
* Easy local development
* Serverless scalability and pricing
Are there particular reasons why people jump through so many hoops to use Lambda when such a superior experience exists?
by EnKopVand on 3/28/22, 6:53 AM
We also do a lot of "serverless", but the way we do it seems far less vendor-dependent than this. Basically, we layer out the "node" part of our "serverless" application and let our cloud provider act in what is essentially the role that Express or similar might have played for you 5-10 years ago. We're also handling a lot of the federated security through a combination of ORMs, OPA, AD, and direct DB access control for the very rare BI application that needs it.
This way we can leave our cloud vendor at any time. Not without figuring out a suitable replacement, but far more easily than in this article, while maintaining almost all of its advantages.
Interesting read, and if you're certain AWS is a good home for your most recently deployed service for the next five years, then I don't see too much of an issue with vendor lock-in.
by qaq on 3/28/22, 8:41 AM
by chris_armstrong on 3/28/22, 9:24 AM
Our application has evolved from a simple API Gateway + frontend architecture to numerous asynchronous processes, dozens of integrated Lambdas, numerous S3 buckets and DynamoDB tables, third-party system integrations, cross-account integrations, etc.
It's been great for our velocity, costs next to nothing to run in development, and has been generally straightforward to refactor any "monolithic" parts into smaller, reusable, event-driven units. We operate in a very traditional industry and interface with much older style enterprise systems.
I wouldn't say any part of it is truly "magical", but the ability to give each developer their own fresh copy of the environment for each new piece of work at little to no cost is incredible. It's wonderful not having to concern ourselves with scaling each of our resources or worrying that they'll go down, as well as not needing the regular maintenance that comes with servers or containers.
by claudiusd on 3/28/22, 1:49 PM
I've been experimenting with serverless for some time now and came to many of the same conclusions written about here. The biggest takeaway for me is that there are pitfalls to an overreliance on lambdas. You really need to offload as much as you can to the other serverless solutions AWS provides.
I've been using AppSync for my GraphQL API instead of API Gateway + Lambda, and I have had a good experience with it. A lot of logic can be offloaded into mapping templates, making Lambdas unnecessary in many cases. Debugging is still a bit of a pain for me, but the end result is fast, cheap, and reliable.
by treve on 3/28/22, 6:39 AM
I'd be curious what kind of use-cases people have that makes this complexity worth it.
by vishnugupta on 3/28/22, 8:34 AM
This is super interesting. I'm almost certain they mean AWS keys (with appropriate IAM permissions, etc.). Otherwise billing, developer onboarding/offboarding, and such quickly go out of control.
Back in 2015, at my previous company, I had set up two AWS accounts, one for the test stack and one for production. That in itself was quite a bit of pain.
by o_____________o on 3/28/22, 2:12 PM
Given their most obvious competitor is already ambiguously named and extremely popular:
https://github.com/serverless-stack/serverless-stack
by ChicagoDave on 3/28/22, 9:04 AM
If your architecture follows domain driven design principles, you can build very large, complex, scalable, and maintainable systems.
by dmitriid on 3/28/22, 6:04 AM
by grantjpowell on 3/28/22, 7:14 AM
I hate to be an armchair expert, but I'll do my best to give the _counter_ opinion to "this is a model of a good startup stack".
If you're looking to build a web app for a business on a small team, here are some guiding values I've found to be successful (that feel counter to the kind of values that led to the stack in the article):
1.) Write as much "Plain Ol'" $LANGUAGE as possible[2]. Where you do have to integrate with the web framework, understand your seams well enough that it's hard for your app _not_ to work when it receives a well-formed ${HTTP/GQL/Carrier Pigeon/Whatever} request.
2.) Learn the existing "boring" tools for $LANGUAGE, and idioms that made _small_ shops in similar tech stacks successful.
3.) Learn $LANGUAGE's unit test/integration test framework like the back of your hand. Learn how to write code that can be tested; focus on making the _easy_ way to write code in your codebase be to write tests _then_ implement the functionality.
4.) Have a strong aversion to adding new technologies to the stack. Read this [1], then read it again. Always be asking "how can I solve this with my existing tools?". Try to have so few dependencies that it would be hard to "mess up" the difference between local and prod (you can go a LONG way with just Node and PostgreSQL).
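The first value above, plain framework-free code with a well-understood seam, could be sketched like this in TypeScript. The names (`applyDiscount`, `handleCheckout`, the coupon code) are purely illustrative, not from any real codebase.

```typescript
// "Plain Ol'" TypeScript: business logic as pure functions with no
// framework imports, plus one thin seam that a web framework (or a CLI,
// or a test) calls with an already-parsed request body.

interface Order {
  total: number;
  couponCode?: string;
}

// Pure business logic: trivially unit-testable, knows nothing about HTTP.
export function applyDiscount(order: Order): number {
  if (order.couponCode === "WELCOME10") {
    // 10% off, rounded to cents.
    return Math.round(order.total * 0.9 * 100) / 100;
  }
  return order.total;
}

// The seam: the only place that validates the wire shape. If this seam is
// well understood, it's hard for the app not to work on a well-formed request.
export function handleCheckout(body: unknown): { status: number; total?: number } {
  if (typeof body !== "object" || body === null || typeof (body as any).total !== "number") {
    return { status: 400 };
  }
  return { status: 200, total: applyDiscount(body as Order) };
}
```

Because the seam takes `unknown` and returns plain data, the same functions back an Express route, a Lambda handler, or a command-line tool without modification.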
Some heuristics to tell if you're doing a good job at the above points,
1.) You don't have to prescribe Dev tool choices (shells, editors, graphical apps, git flows, etc)
2.) You can recreate much of your app on any random dev machine, and feel confident it acts similar to production.
3.) Changing lines of code in your code base at random will generate compiler errors/unit test failures/etc
In almost every real-world software project I've worked on in the SaaS world, "complexity" ended up as the limiting factor in meeting the business's goals. When CPU/network/disk etc. was the culprit, usually the hard part of fixing it was trying to _understand_ what was going on.
Plain may be very successful with their flow, but I'd say most everything in this article runs counter to the ideas that I've seen be successful in the past.
[1] https://boringtechnology.club/
[2] At our shop we'd say "You're a Ruby programmer, not a Rails programmer; your business logic is likely well factored/designed if it could be straightforwardly reworked into a Rails-free command-line app"
by code_runner on 3/28/22, 8:44 AM
As hit-and-miss as my Azure experience has been… Azure Functions Core Tools, VS Code, and Azurite were excellent for purely local development.
by lysecret on 3/28/22, 8:56 AM
I am doing a lot of data engineering, and serverless is an absolute godsend. I have set up all our data-eng infra on Lambdas and AWS Batch, and I am super happy. The only issue is that we waste a lot of resources on our always-running Postgres instance.
I would love to hear more about your experience with Aurora Serverless PostgreSQL. The main turn-off for me is that it only works with postgres version 9.something.
by dsanchez97 on 3/28/22, 1:21 PM
With all that said, I am working on building a framework that I would describe to the crowd here as Django for Serverless. It is in early stages but you can check it out at https://staging.cdevframework.io/. One of the main focuses is to make it easier to write code that is not dependent on running on a Functions as a Service (FaaS) model, so that your code can eventually be bundled up and deployed on a traditional server when a FaaS platform becomes uneconomical.
by dpacmittal on 3/28/22, 6:43 AM
by tyingq on 3/28/22, 11:53 AM
They did accidentally release documentation about "Lambda Function URLs" at the end of last year, and then pulled it back. So perhaps some of that is coming.
by chikipowpow on 3/28/22, 12:46 PM
by imron on 3/28/22, 9:32 AM
Magical
by alephnan on 3/28/22, 10:48 AM
For example, before giving a satisfying answer on why a serverless implementation is better, the article delves straight into tests, and how things are complexified because the serverless architecture means more hinges, glue code, and moving parts which now need to be tested. Since this particular serverless interpretation involves relying on the cloud as a service, a hermetic local development environment doesn't exist, so they build a solution around interactive serverless development.
This article is a solution seeking a problem. Many of the solutions here seem like caveats of pursuing this serverless stack rather than features. It is enlightening and useful to know what's needed to get a fully integrated serverless DevEx, but it doesn't motivate why people should elect serverless over a monolith.
> We need to make sure that the few engineers that we do hire can make the most impact by delivering product features quickly while maintaining a high quality.
> This meant being able to run our services at a scale at a low cost without needing to have a whole department just looking after a homegrown infrastructure.
> as well as how we’ll be able to build a successful business in the next 5 to 10 years without needing to do foundational replatforms
All these points can be true with a monolith. If anything, these points make a monolith seem viable.
There have been articles posted around about using Postgres as a hammer for everything. If anything, vendor lock-in to serverless platforms will itself likely require replatforms in 5-10 years.
> serverless applications. One of the main differences is that you end up using a lot of cloud services and aim to offload as much responsibility to serverless solutions as possible
This contradicts the desire to avoid replatforming.
> Having personal AWS accounts allows each developer to experiment and develop without impacting any other engineer or a shared environment like development or staging. Combined with our strong infrastructure as code use, this allows everyone to have their clone of the production environment.
Again, I’m not sure how this is a feature. This adds another incidental complexity to debug. Engineers may have different permissions and roles. Alternatively, if the engineers are going to have the exact same “environment” as the deployed environment, then the individual accounts don’t provide much value.
> full-stack TypeScript ... The ability to move between the frontend, backend, and infrastructure code without having to
This is an orthogonal concern to serverless.
> developer features... this in isolation without impacting other engineers due to everyone having their own AWS account.
This is orthogonal to serverless. In fact it barely seems like a concern of infra. If local environments exist, developers can just have their own local development branch in Git. They can also test and deploy in separate remote branches. Here, the problem was created because local development is against live services.
Typically these developer blog posts are intended to evangelize the company. Personally I find many red flags in the company culture. Seems like a fun and cool place if you want to try out the latest and greatest tech trends, and have a chance to scratch your own engineering itches. But if you care about job security, you’d want to understand what is the business impact of all this research and development?
by paxys on 3/28/22, 7:46 AM
by oxff on 3/28/22, 12:26 PM
by chrisweekly on 3/28/22, 12:15 PM
> "Good magic decomposes into sane primitives."
and
> "If it _isn't_ composed of sane primitives, steer clear."
This feels more like the latter.
by frankcaron on 3/28/22, 3:14 PM
by tomatowurst on 3/28/22, 6:49 AM
I'm actually quite skeptical of this claim. Learning a new language isn't really a big deal unless you are using relatively "esoteric" stuff like Clojure or Datalog, which really requires an experienced consultant to train your team.
With AWS Chalice, we've been able to ship production-scale code (for GovCloud) in Python without any one of us breaking the environment, simply by using separate stages. We were able to get PHP/JavaScript developers to use it with barely any downtime. In fact, it was more or less appreciated for the clean and simple nature of Python right from the get-go.
This feels like way too much engineering from the get-go. Here's my workflow with AWS Chalice, and it's super basic (I'm open to improvements here):
- checkout code from github
- run localhost and test endpoints written in python (exactly like Flask)
- push to development stage API gateway
- verify it is working as intended; this is when we catch missing IAM roles, and we document them, or find something wrong with our AWS setup (we don't use CDK; we just use the AWS console to set everything up once, like VPC and RDS)
- push to production stage API gateway
All this shimming, TypeScript (rule of thumb: >40% more code for <20% improvement through less documentation and fewer type errors, only really valid in large teams), and separate AWS developer accounts seems like overkill.
The one benefit I see from all this extra compartmentalization is if you are working in large teams at a large company with many "clients" (internal and external). You are going to discover missing IAM roles and permissions anyway; being an implicit "human AWS compiler trying different Stack Overflow answers" is part of locking yourself into a single vendor.
One positive I see is CDK, but if you are deploying your infrastructure once, I really don't see the need for it, unless you have many infrastructures that can benefit from boilerplate generation.
Happy to hear from all ends of the spectrum. serverless-stack could be something I explore this weekend, but there's just so much going on, and I'm getting a lot of marketing-department vibes from reading the website (like "idea to IPO" and "TypeScript to rule them all"). To top it off, going to https://docs.serverless-stack.com/ triggers an antivirus warning about some Netlify URL (nostalgic-brahmagupta-0592d1.netlify.app). What is going on here???
by eljimmy on 3/28/22, 2:00 PM
It's not a complete mirror image of what you get but it's close enough in my experience.
by gchamonlive on 3/28/22, 11:16 AM
You could very well use those integration tests to write Architecture Fitness Functions (which is implicitly what part of the tests do), but having developers write those functions instead of architects risks the architecture decisions not carrying over to those tests.
For small use cases like this (S3, API Gateway + Lambdas, SQS, SNS, ElastiCache), the many moving parts come together more or less cohesively (they all depend on each other for a single feature). If your use case grows, with ETL jobs, more async workloads, etc., you can start having issues with architecture tests not really covering all bases.
Regarding leaking so much cloud abstraction to the developer, I am not really sure what to make of this. I think the argument about cognitive overload might well be valid; however, a basic understanding of the target infrastructure is necessary for the developer to build applications effectively. In this case I think it works because they go all in on leaking abstractions, going as far as provisioning a separate account for each developer. What won't really work is having abstractions built upon abstractions to make developers think they are using the cloud (by emulating the stack locally) when they aren't actually using it. So being consistent is what matters here: either don't leak any cloud abstraction, or go all in.
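A tiny "architecture fitness function" of the kind mentioned above can be an ordinary unit test that fails when core business modules start depending on cloud SDKs. The sketch below works on a map of module names to source text; the module names, forbidden package list, and `findForbiddenImports` helper are all illustrative assumptions.

```typescript
// Architecture fitness function sketch: flag core modules that mention a
// forbidden dependency. A real test suite would read the files under
// something like src/core/ and pass their contents in.

export function findForbiddenImports(
  sources: Record<string, string>, // module name -> source text
  forbidden: string[]
): string[] {
  const offenders: string[] = [];
  for (const [name, src] of Object.entries(sources)) {
    for (const dep of forbidden) {
      // Catches both `import ... from "dep"` and `require("dep")`,
      // since both quote the package name.
      if (src.includes(`"${dep}"`) || src.includes(`'${dep}'`)) {
        offenders.push(`${name} depends on ${dep}`);
      }
    }
  }
  return offenders;
}
```

Run as part of CI, a check like this lets the architecture decision ("the core never imports AWS SDKs") carry over into the test suite, whoever happens to write the tests.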
by alberth on 3/28/22, 1:55 PM
It's the epitome of a "serverless" architecture.
by BFLpL0QNek on 3/28/22, 12:26 PM
Development/deployments are so much simpler, and for a business with money, the price difference is negligible. You can dev/test locally, you're not tied to a provider, and it's essentially just another boring web app.
However, for personal projects I've been playing with serverless out of interest, to see if it's ready yet, and instead of paying $10-20 a month for a VPS I pay fractions of a cent.
I develop my Lambda as a monolith application, not a Lambda per endpoint. I'm told this is an anti-pattern… my take is I'm just using Lambda as another compute deployment target, and it's fine. I use hexagonal architecture, so my app knows nothing about Lambda, which makes unit testing easy.
Next I wire up a very thin adapter layer that takes the Lambda request json and converts it to the required value my app needs for routing. This is at the very edge of the app. I like to use this design regardless of lambda, I can swap out any web framework easily, even build a cli frontend for testing with minimal effort. In the context of lambda, using hexagonal architecture means I can bin Lambda, replace it with a standard web framework and deploy as a standalone app with minimal effort if I need to.
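A thin adapter layer in this spirit might look like the sketch below: one function turns a Lambda event into the neutral request value the app's router understands, so the core never imports anything Lambda-specific. The field names assume an API Gateway v2-style payload, and `AppRequest`/`route` are hypothetical names for this illustration.

```typescript
// Edge adapter: convert a Lambda event into the app's own request type.
// Only this file knows about the Lambda event shape.

interface AppRequest {
  method: string;
  path: string;
  body: string | null;
}

export function fromLambdaEvent(event: any): AppRequest {
  return {
    method: event?.requestContext?.http?.method ?? "GET",
    path: event?.rawPath ?? "/",
    body: event?.body ?? null,
  };
}

// The core router only ever sees AppRequest. Swapping Lambda for a web
// framework means writing one more small adapter, not touching the core.
export function route(req: AppRequest): { status: number; body: string } {
  if (req.method === "GET" && req.path === "/health") {
    return { status: 200, body: "ok" };
  }
  return { status: 404, body: "not found" };
}
```

A CLI frontend for testing is then just a third caller of `route`, which is the "bin Lambda with minimal effort" property the comment describes.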
With the lambda in place I have a Cloudflare worker as the entry point to the lambda. It takes a request and forwards it to my lambda. I use a Cloudflare worker as it’s cheap/free (generous free tier) and I get a cache at the edge. I’ll use Cloudflare pages or s3 with Cloudflare in front for static assets.
I use Lambda for the app instead of Cloudflare workers simply because I want to interact with DynamoDB/S3 and I can manage permissions better inside AWS with IAM. I also want to use Rust which has very fast Lambda execution times and I had a few issues with Cloudflare workers wasm which I lost interest in figuring out as only experimenting. As I’m fronting with Cloudflare I’m also extremely dogmatic on cache headers from the lambda and propagating them to reduce calls to the origin/lambda.
The end result is reasonably performant. It's fast but not the fastest, as expected with the hops/latency, and it's extremely cheap. A small pet project may cost single-digit cents, if even that. It'll also handle large volumes of traffic easily, without worrying about provisioning issues.
However, I have to jump through too many hoops to get what I have, more than I'd like to on a professional project. The orchestration is complex, and it feels like what I save in $$$ I pay for in slower dev time, jumping through hoops to get the absolute lowest cost. I enjoy this stuff and it's a personal project done for education; still, I'd be hesitant to go this way for a real paid job, as interesting as I find it.
Also, pay as you go is great when it's costing fractions of a cent, but it's also terrible in that it opens you up to a new attack vector: someone DoS'ing your service, which calls unbounded pay-as-you-go services, and then you wake up to very large bills. Always build in rate limiting for services you use with on-demand pricing.
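The rate-limiting advice above could start as simply as a token bucket per caller. This is a hedged sketch, not a production design: each Lambda instance has its own memory, so a real serverless setup would need to back the counter with something shared (a DynamoDB counter, API Gateway usage plans, or similar).

```typescript
// In-memory token bucket: each caller gets `capacity` tokens, refilled at
// `refillPerSec` tokens per second; a request spends one token or is rejected.

export class TokenBucket {
  private tokens: number;
  private last: number;

  constructor(
    private capacity: number,
    private refillPerSec: number,
    now = Date.now()
  ) {
    this.tokens = capacity;
    this.last = now;
  }

  // Returns true if the request is allowed, false if it should be rejected.
  allow(now = Date.now()): boolean {
    const elapsedSec = (now - this.last) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSec * this.refillPerSec);
    this.last = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

The clock is injectable (`now` parameters) so the refill behavior can be unit tested without sleeping, which matters when the whole point is bounding a bill rather than shaping latency.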
by JoBrad on 3/28/22, 11:42 AM
by throuwout1234 on 3/28/22, 7:23 AM
>For example, Plain’s January bill for our 7 developer accounts was a total of $150—pennies compared to the developer velocity we gain by everyone having their clone of production. Typically, the largest cost for each developer is our relational database: Amazon Aurora Serverless v1 PostgreSQL. It automatically scales up when it receives requests during development and down to zero after 30 minutes of inactivity.
I don't understand this at all. If "7 full production instances" cost $150, then your application is tiny and you don't need 15 AWS services. The storage costs alone should far exceed that for a large application. If a $150 production instance is "scale", then we're lost as an industry. If your application is this tiny, Just. Use. A. Server.
by jpgvm on 3/28/22, 6:53 AM
by old-gregg on 3/28/22, 6:49 AM
I had the same feeling when we reached the pinnacle of complexity of Windows programming with COM/DCOM/ActiveX/.NET/WinForms/Silverlight/Visual Studio... All of that felt like necessary progress. Yet, a simple script piping text output into a browser via CGI felt like a breath of fresh air. We need this for web development now.