by yurisagalov on 11/10/22, 5:23 PM with 91 comments
by diarrhea on 11/10/22, 7:46 PM
Round-trip times for GitHub Actions are too high: sometimes you're waiting ten minutes just to run into a dumb typo, a variable that evaluates to an empty string, or some other mishap. There's zero IDE support for almost anything beyond getting the YAML syntax itself right.
We have containerization for write-once-run-anywhere, and languages like Python for highly productive, imperative descriptions of what to do (without the footguns bash has). The main downside I see is it getting messy and cowboy-ish. That's where frameworks can step in: if the Dagger SDK were widely adopted, it'd be as exchangeable and as widely understood/supported as, say, GitHub Actions themselves.
We currently have quite inefficient GHA pipelines (repeated actions, etc.) simply because what the YAML offers isn't descriptive enough. (Are Turing-complete languages a bad choice for pipelines?)
What's unclear to me from the article and video is how this can replace e.g. GitHub Actions. Their integration with e.g. PR status checks and the like is a must, of course. Would Dagger just run on top of a `ubuntu-latest` GHA runner?
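For reference, a minimal sketch of what such a pipeline might look like with the Python SDK, following the shape of the getting-started docs (the file name `ci.py`, the image tag, and the test commands are invented, and API details may differ between SDK versions). A `ubuntu-latest` GHA job would then shrink to checking out the repo and running `python ci.py`, with PR status checks driven by the job's exit code:
```python
# ci.py - sketch of a Dagger pipeline, assuming the Python SDK's
# getting-started API (dagger.Connection, container().from_(), with_exec).
import sys

import anyio
import dagger


async def main():
    # Connect to the Dagger engine; logs go to stderr so CI output stays readable.
    async with dagger.Connection(dagger.Config(log_output=sys.stderr)) as client:
        src = client.host().directory(".")  # the checked-out repository

        code = await (
            client.container()
            .from_("python:3.11-slim")           # example base image
            .with_mounted_directory("/src", src)
            .with_workdir("/src")
            .with_exec(["pip", "install", "-e", ".[test]"])  # hypothetical extras
            .with_exec(["pytest", "tests"])      # hypothetical test suite
            .exit_code()
        )
        print(f"tests finished with exit code {code}")


anyio.run(main)
```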
by helderco on 11/10/22, 5:40 PM
I have a few examples in https://github.com/helderco/dagger-examples and I plan to add more.
There's also reference documentation at https://dagger-io.readthedocs.io/ so you can get a bird's-eye view of what's possible.
by shykes on 11/10/22, 5:25 PM
We also released an update to the Go SDK a few days ago: https://dagger.io/blog/go-sdk-0.4
by bilalq on 11/10/22, 6:12 PM
For this reason, I love using the AWS CDK. Being able to model things in TypeScript was so much nicer than the janky flow of always feeling lost in some Ruby/JSON/YAML IaC template monstrosity.
Curious how Dagger differentiates itself from AWS CDK or cdktf.
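For context, modeling infrastructure in a general-purpose language looks roughly like this CDK v2 sketch, here in Python rather than TypeScript (the stack and bucket names are invented):
```python
# Minimal AWS CDK v2 sketch: resources are plain objects, so loops,
# conditionals, and refactoring work like in any other code -- no
# YAML/JSON templating required. Names below are illustrative.
import aws_cdk as cdk
from aws_cdk import aws_s3 as s3


class StorageStack(cdk.Stack):
    def __init__(self, scope, construct_id, **kwargs):
        super().__init__(scope, construct_id, **kwargs)
        # One bucket per environment, as a loop instead of copy-pasted blocks.
        for env in ("dev", "staging", "prod"):
            s3.Bucket(self, f"data-{env}", versioned=(env == "prod"))


app = cdk.App()
StorageStack(app, "storage")
app.synth()  # emits the CloudFormation template
```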
by brianzelip on 11/10/22, 11:24 PM
by dvasdekis on 11/10/22, 11:36 PM
One of the reasons we use proprietary pipelines is the automatic 'service principal' login that exists on e.g. Azure DevOps, where the pipeline doesn't need to authenticate via secrets or tokens; instead, the running machine has the privileges to interact with Azure directly. (See https://learn.microsoft.com/en-us/azure/devops/pipelines/tas... — particularly the "addSpnToEnvironment" parameter.) I'm sure other clouds have something similar.
Running the same pipeline locally, there are ways to synthetically inject this, but there's no ready support for it in your framework yet (ideally you'd have an 'authentication' parameter you could set the details on). Is something like this planned?
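Not specific to Dagger, but the usual way to inject this in plain Python is a credential chain such as azure-identity's DefaultAzureCredential: on the agent it picks up a service principal from environment variables (assuming the pipeline maps the injected principal into AZURE_CLIENT_ID / AZURE_TENANT_ID / AZURE_CLIENT_SECRET), and locally it falls back to your `az login` session. A sketch, with the subscription listing only as an example consumer:
```python
# Credential chain: tries env-var service principal first (as injected
# by a CI agent), then falls back to the local `az login` session.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import SubscriptionClient  # example consumer

credential = DefaultAzureCredential()
for sub in SubscriptionClient(credential).subscriptions.list():
    print(sub.subscription_id)
```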
by rekwah on 11/10/22, 6:49 PM
by chologrande on 11/10/22, 6:48 PM
Complete gamechanger to be able to move away from Jenkins Groovy CPS hell.
by pm on 11/10/22, 10:54 PM
by travisgriggs on 11/10/22, 10:53 PM
I sat down and extracted the compiler/link flags, then wrote a Python script to do the build. The code was smaller, and it built faster.
Every “build” engine evolves from a simple recipe processor into the software equivalent of a 5-axis CNC mill. Some things should not succumb to one-size-fits-all.
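As a sketch of that kind of script (the compiler, flags, and source names are all invented), the whole build can be a handful of subprocess calls:
```python
# Toy build script of the sort described above: compile each source
# file, then link. The point is that it's ~15 lines of plain Python.
import subprocess

CC = "gcc"
CFLAGS = ["-O2", "-Wall", "-Ivendor/include"]  # flags extracted from the old build
LDFLAGS = ["-Lvendor/lib", "-lvendor"]
SOURCES = ["main.c", "util.c"]

objects = []
for src in SOURCES:
    obj = src.replace(".c", ".o")
    subprocess.run([CC, *CFLAGS, "-c", src, "-o", obj], check=True)
    objects.append(obj)

subprocess.run([CC, *objects, *LDFLAGS, "-o", "app"], check=True)
```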
by mkoubaa on 11/10/22, 6:19 PM
by epgui on 11/10/22, 6:10 PM
Doesn't this introduce a whole dimension of extra stateful complexity compared to configuration YAML?
by madjam002 on 11/10/22, 6:53 PM
by aclatuts on 11/10/22, 6:36 PM
by xedx on 11/10/22, 10:29 PM
by mehanik on 11/11/22, 6:46 PM
However, it's not clear to me what the benefit is of using it instead of calling commands like docker, pytest, and kubectl from Python with Plumbum or a similar library. Add Fire and it's trivial to create a complete DevOps CLI for your app that can run locally or be called from GitHub Actions.
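For context, the alternative described here looks roughly like this sketch (the task names and commands are invented; `plumbum.local` and `fire.Fire` are the libraries' actual entry points):
```python
# "Just call the tools from Python": plumbum wraps CLI commands,
# fire turns the class into a command-line interface.
import fire
from plumbum import local

docker = local["docker"]
pytest = local["pytest"]
kubectl = local["kubectl"]


class Pipeline:
    """Invented example tasks: run as `python ci.py test`, etc."""

    def test(self):
        pytest("tests", "-q")

    def build(self, tag="myapp:latest"):
        docker("build", "-t", tag, ".")

    def deploy(self, manifest="deploy.yaml"):
        kubectl("apply", "-f", manifest)


if __name__ == "__main__":
    fire.Fire(Pipeline)
```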
by tatoalo on 11/10/22, 6:42 PM
I recently used act[0] to locally test my GitHub Actions pipelines and it worked okay. The fact that I could interact with the Dagger API via a Python SDK could be even more convenient; will definitely try!
by mkesper on 11/10/22, 6:51 PM
by __warlord__ on 11/11/22, 7:20 AM
I'm working on a similar project: portable pipelines built with Nim :)
The ideas behind the project:
1. Dev and prod pipelines run the same code; the only difference is where they are executed. This allows easy troubleshooting and faster development.
2. Test output by default on each script.
3. Decouple the code from the image/container, which allows it to be embedded into any CI stack.
4. Complex retry rules for specific scripts rather than the whole pipeline or even the step (stage).
A typical pipeline definition looks like this:
```yaml
kind: pipeline
version: 1
policy: strict
steps:
  - hello

hello:
  - script: echo "hello world"
    expected_output: "hello world"
    expected_return_code: 0
    retries: 5
```
```bash
./takito --tasks tasks.yml --step hello
```
by kpen11 on 11/10/22, 5:56 PM
It goes step by step through the getting-started guide from the Dagger Python SDK docs.
by quelltext on 11/11/22, 3:11 AM
by pianoben on 11/10/22, 10:42 PM
by meling on 11/10/22, 10:08 PM
by throwawaaarrgh on 11/11/22, 6:14 AM
Proprietary CI/CD systems are a waste of time. If it's more complicated than a shell script, you need to strip it down.
by yakkityyak on 11/10/22, 11:41 PM
by goodpoint on 11/11/22, 10:54 AM
by dith3r on 11/10/22, 9:21 PM