by RoboTeddy on 8/15/24, 8:15 PM with 21 comments
It's easy to say just "use containers" or "use VMs" — but are there pragmatic workflows for doing these things that don't suffer from too many performance problems or general pain/inconvenience?
Are containers the way to go, or VMs? Which virtualization software? Is it best to use one isolated environment per project no matter how small, or, for convenience's sake, one grab-bag VM that holds many low-value projects?
Theorycrafting is welcome, but I'm particularly interested in hearing from anyone who has made this work well in practice.
by comprev on 8/17/24, 12:51 PM
Some examples of how we do it:
- Devs can only use Docker images hardened by us and hosted inside our infrastructure. Policies enforce this during CI and at runtime on the clusters.
- All Maven/PIP/NodeJS/etc. dependencies are pulled through a proxy and scanned before first use. All subsequent CI jobs pull from this internal cache (client config sketched below the list).
- Only a handful of CI runners have outbound connectivity to the public internet (enforced via firewalls). These runners have specific tags for jobs needing connectivity. All other runners pull dependencies / push artefacts from within our network.
- The CI runners with internet connectivity have domains whitelisted at the firewall level, and so far very few requests have been made to add new domains.
- External assets, e.g. an OpenJDK artefact, have their checksums validated during the build stage of our base images. The checksum is also recorded in the Docker image metadata so we can re-download the asset later and compare it against the public one.
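The checksum step is nothing exotic; roughly something like this (untested sketch, the artefact name and pinned value are placeholders):

    import hashlib, sys

    # pinned value recorded when the asset was first vetted (placeholder)
    EXPECTED_SHA256 = "<sha256 from image metadata>"

    def sha256_of(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 16), b""):
                h.update(chunk)
        return h.hexdigest()

    # fail the base-image build on any mismatch
    if sha256_of("openjdk.tar.gz") != EXPECTED_SHA256:
        sys.exit("checksum mismatch: refusing to build")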
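And pointing the package managers at the internal cache is plain client config, e.g. (the internal URLs are hypothetical):

    pip config set global.index-url https://proxy.internal/pypi/simple
    npm config set registry https://proxy.internal/npm/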
by legobeet on 8/19/24, 9:06 AM
With this in mind:
- https://qubes-os.org - Use separate VMs for separate domains. Use disposable VMs for temporary sessions.
- https://github.com/legobeat/l7-devenv - My project. Separate containers for the IDE and the (ephemeral) code-under-test. Transparent access to just the directories needed and nothing else, without compromising on performance or productivity. Authentication tokens are kept separate while remaining transparent to your scripts and dev tools. Editor add-ons are pinned via submodules and baked into the image at build time (and easy to update on a rebuild). Feedback very welcome!
- In general, immutable distros like Fedora Silverblue and MicroOS (whatever happened to SUSE ALP?) are also worth considering, to limit persistence. They couple well with a setup like the one I linked above.
- Since you seem to be in a Node.js context, I should also mention @lavamoat/allow-scripts (also affiliated via $work) as something you can consider to rein in your devDeps: https://github.com/LavaMoat/LavaMoat/tree/main/packages/allo...
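The gist of allow-scripts, from memory (check the README for the exact shape): dependency lifecycle scripts are blocked by default, and you opt packages in explicitly in package.json, something like

    {
      "lavamoat": {
        "allowScripts": {
          "keccak": true,
          "some-native-dep": false
        }
      }
    }

IIRC `allow-scripts auto` will populate the list for you.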
by mikewarot on 8/17/24, 2:35 PM
You have to trust everything, and any breach of trust breaks it all. This approach is insane, and yet it's widely accepted as the way things have always been done and always will be.
If you ever get the chance to use capability-based security, which gives you the principle of least privilege by construction, or multilevel security, do so.
Know that permission flags in Android, the UAC crap in Windows, and AppArmor are NOT capability-based security.
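To make the difference concrete, a toy Python sketch ("notes.txt" is a stand-in path):

    # Ambient authority: this function can read ANY path the process can reach.
    def word_count_ambient(path):
        with open(path) as f:
            return len(f.read().split())

    # Capability style: the caller grants exactly one opened handle; the
    # function has no way to name or reach anything else on the system.
    def word_count_cap(f):
        return len(f.read().split())

    with open("notes.txt") as handle:  # the single capability being granted
        print(word_count_cap(handle))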
by swiftcoder on 8/19/24, 9:34 AM
- Don't take any third-party dependencies; build everything in-house instead. Likely only possible in niche areas of government/defence where sky-high budgets intersect with intense scrutiny.
- Manually validate each new version of every dependency in your tree. Also very expensive, and complex vulnerabilities will likely still slip through (e.g. something like Spectre isn't going to be caught in code review).
- Use firewalls/network security groups/VPC-equivalents to prevent any network traffic that isn't specifically related to the correct operation of your software. Increasingly hard to enforce as our tech stacks rely on more and more SaaS offerings. Needs properly staffed network admins to enforce it and to reduce the pain points for developers.
- Network-isolated VMs/containers that can only talk to a dedicated container that handles all network traffic. Imposes odd constraints on software architecture and doesn't play well with SaaS dependencies.
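With that last pattern, application code reaches the outside world only via the dedicated egress container, e.g. (the host name "egress" and port are hypothetical; assumes the requests library):

    import requests

    # every outbound request is forced through the one container with
    # network access; direct egress is blocked at the network layer
    PROXIES = {"http": "http://egress:3128", "https": "http://egress:3128"}

    resp = requests.get("https://example.com", proxies=PROXIES, timeout=10)
    print(resp.status_code)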
In practice you run with whatever combination of the above you can afford, and hope for the best.
by keepamovin on 8/19/24, 9:13 AM
In that sense, isolation for development as a way to solve supply-chain security seems like a symptom-treater, not a cause-treater.
A more extreme approach is to:
minimize dependencies, build a lot in-house, and don't update pre-vetted dependencies before another audit
In general, I think a big dependency chain is useful for getting to a PoC quickly (and in some cases it's indeed unavoidable, e.g. numpy etc.), but in building many simplish web apps and client-server applications it's feasible to have a very narrow dependency chain, especially on the back end. You can even do this on the front end if you eschew framework stuff.
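In Python land, for example, pinning plus hash-checking gives you exactly that "no updates without another audit" property (the hash shown is a placeholder):

    # requirements.txt: every dependency pinned to the audited version
    numpy==1.26.4 --hash=sha256:<hash recorded at audit time>

    # refuses anything unpinned or whose hash has changed
    pip install --require-hashes -r requirements.txt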
by h4ck_th3_pl4n3t on 8/18/24, 6:42 AM
SBOMs can be produced in the CycloneDX format. [1] The Dependency-Track project aggregates all dependency vulnerabilities into a dashboard. [2]
Container SBOMs can be generated with syft and scanned for vulnerabilities with grype [3][4] (example invocations below the links).
[1] https://github.com/CycloneDX
[2] https://github.com/DependencyTrack
[3] https://github.com/anchore/syft
[4] https://github.com/anchore/grype
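Example invocations (the image name is just an example):

    # generate a CycloneDX SBOM with syft, then scan it with grype
    syft alpine:3.20 -o cyclonedx-json > sbom.json
    grype sbom:./sbom.json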