
Software Supply Chain Security: SBOMs, Sigstore & Dependency Scanning

Anatomy of supply chain attacks (xz-utils, SolarWinds, event-stream), SBOM generation with Syft and Trivy, Sigstore keyless signing, dependency scanning tools compared, and the SLSA framework.

Abhishek Patel · 12 min read



A Backdoor Nobody Saw Coming

In March 2024, a Microsoft engineer noticed SSH logins were taking about 500 ms longer than usual. That curiosity led to the discovery of a backdoor in xz-utils -- a compression library baked into virtually every Linux distribution. The attacker had spent two years building trust as a maintainer, submitting legitimate patches before slipping in obfuscated malicious code. Had it reached stable releases, it would have handed the attacker remote code execution on the millions of servers whose OpenSSH builds link (via distribution systemd patches) against liblzma.

This was not an isolated incident. It was the latest in a pattern: SolarWinds (build system compromise), event-stream (dependency hijacking), Codecov (CI tool manipulation). Each attack targeted a different link in the software supply chain, and each succeeded because defenders were not looking at those links. Supply chain security is the discipline of making every link verifiable, auditable, and tamper-resistant.

What Is Software Supply Chain Security?

Definition: Software supply chain security encompasses the practices, tools, and standards used to verify the integrity, provenance, and safety of every component that goes into building and deploying software -- from open-source dependencies and build systems to container images and CI/CD pipelines. It aims to ensure that what you ship is exactly what you intended, with no tampering at any stage.

Anatomy of Supply Chain Attacks

Supply chain attacks succeed because they exploit implicit trust. You trust your dependencies, your build system, your CI tools, and your package registries. Attackers target whichever link has the weakest verification. Four real-world attacks illustrate the main vectors:

xz-utils: Social Engineering a Maintainer

The attacker ("Jia Tan") contributed to xz-utils for two years, earning commit access through legitimate work, while sock-puppet accounts pressured the overworked original maintainer into handing over control. The backdoor was hidden in binary test fixture files and activated through a multi-stage obfuscation chain in the build system. Detection was accidental -- a performance regression spotted during unrelated benchmarking.

Lesson: Code review alone is not sufficient when an attacker has commit access and can modify build scripts, test data, and CI configuration over time.

SolarWinds: Build System Compromise

Attackers compromised SolarWinds' build infrastructure and injected malicious code into the Orion software update. The trojanized update was signed with SolarWinds' legitimate certificate and distributed to 18,000+ organizations, including US government agencies. The malware (SUNBURST) communicated via DNS to blend with normal traffic.

Lesson: If your build system is compromised, code signing is meaningless -- you are signing the attacker's code with your own key.

event-stream: Dependency Hijacking

A new maintainer took over the popular event-stream npm package from a burned-out original author. They added a dependency (flatmap-stream) containing encrypted malicious code that targeted the Copay Bitcoin wallet. The attack was narrowly scoped to avoid detection -- it only activated when imported by a specific application.

Lesson: Maintainer transitions in open-source projects are a critical vulnerability. Automated scanning may miss targeted payloads that only activate in specific contexts.

Codecov: CI Tool Manipulation

Attackers modified the Codecov Bash Uploader script to exfiltrate environment variables from CI pipelines. Since CI environments often contain secrets (API keys, tokens, credentials), this gave attackers access to the internal systems of Codecov's customers, including HashiCorp, Twitch, and others.

Lesson: Third-party scripts executed in CI have access to your secrets. Pin CI tool versions and verify checksums before execution.
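That checksum-verification step can be sketched in shell. This is a minimal illustration of the mechanics, not Codecov's actual uploader flow: the script name is hypothetical, the download step is commented out, and the expected hash is computed from a stand-in file here -- in a real pipeline you would hard-code the hash published by the vendor.

```shell
# Sketch: verify a downloaded CI script against a pinned checksum before running it.
set -eu

SCRIPT=uploader.sh
printf 'echo ok\n' > "$SCRIPT"              # stand-in for the downloaded script
# curl -fsSL "$UPLOADER_URL" -o "$SCRIPT"   # the real download step (URL omitted)

# In real use, hard-code the vendor-published hash instead of computing it.
EXPECTED_SHA256="$(sha256sum "$SCRIPT" | cut -d' ' -f1)"

# sha256sum -c exits non-zero on mismatch; with set -e the pipeline aborts
# before the script is ever executed.
echo "$EXPECTED_SHA256  $SCRIPT" | sha256sum -c -

sh "$SCRIPT"
```

The key property: the hash check runs before execution, so a tampered script fails closed instead of silently exfiltrating CI secrets.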

SBOMs: Know What You Ship

A Software Bill of Materials (SBOM) is a machine-readable inventory of every component in your software -- direct dependencies, transitive dependencies, their versions, licenses, and suppliers. Think of it as a nutrition label for software. Without one, you cannot answer the basic question: "Are we affected by this new CVE?"
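For concreteness, here is a hand-written sketch of what a single component entry looks like in CycloneDX JSON. The values are illustrative, not output from a real scan:

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "components": [
    {
      "type": "library",
      "name": "openssl",
      "version": "3.0.13",
      "purl": "pkg:apk/alpine/openssl@3.0.13",
      "licenses": [{ "license": { "id": "Apache-2.0" } }]
    }
  ]
}
```

The `purl` (package URL) field is what makes SBOMs useful for CVE matching: it identifies the exact ecosystem, package, and version in a canonical form that vulnerability databases can key on.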

SPDX vs CycloneDX

| Feature | SPDX | CycloneDX |
| --- | --- | --- |
| Maintained by | Linux Foundation | OWASP |
| Standardization | ISO/IEC 5962:2021 | Ecma ECMA-424 |
| Primary focus | License compliance + security | Security + risk analysis |
| Formats | JSON, RDF, YAML, tag-value | JSON, XML, Protobuf |
| VEX support | Via external documents | Inline VEX |
| Adoption | US government (EO 14028) | Broad industry |

Both formats are viable. CycloneDX is more security-focused and easier to generate. SPDX has broader government mandate support. Many organizations generate both.

Generating SBOMs with Syft and Trivy

# Generate SBOM with Syft (CycloneDX JSON)
syft packages alpine:3.19 -o cyclonedx-json > sbom.cdx.json

# Generate SBOM with Syft (SPDX JSON)
syft packages alpine:3.19 -o spdx-json > sbom.spdx.json

# Generate SBOM with Trivy (CycloneDX)
trivy image --format cyclonedx --output sbom.cdx.json alpine:3.19

# Scan an existing SBOM for vulnerabilities
grype sbom:sbom.cdx.json

# GitHub Actions: Generate SBOM on every build
- name: Generate SBOM
  uses: anchore/sbom-action@v0
  with:
    image: my-app:${{ github.sha }}
    format: cyclonedx-json
    output-file: sbom.cdx.json
    artifact-name: sbom

- name: Scan SBOM for vulnerabilities
  run: grype sbom:sbom.cdx.json --fail-on critical

Pro tip: Generate your SBOM from the final container image, not from source code alone. Source-level SBOMs miss OS packages, runtime dependencies, and anything added during the Docker build. Image-level SBOMs capture everything that actually ships.
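As a quick illustration of the "are we affected?" question, a CycloneDX SBOM can be checked with nothing but grep. In practice you would use grype or jq; the stand-in SBOM and the package name below are hypothetical:

```shell
# Write a tiny stand-in SBOM for the sketch; a real one comes from syft or trivy.
cat > sbom.cdx.json <<'EOF'
{
  "bomFormat": "CycloneDX",
  "components": [
    { "type": "library", "name": "libssl3", "version": "3.0.13-r0" }
  ]
}
EOF

# Crude presence check: is the package named in a new advisory in our inventory?
if grep -q '"name": "libssl3"' sbom.cdx.json; then
  echo "libssl3 present -- compare its version against the advisory"
fi
```

Even this crude check beats guessing; a proper tool like grype additionally matches versions against CVE ranges.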

Sigstore: Signing Without the Key Management Nightmare

Traditional code signing requires you to generate, store, rotate, and protect long-lived signing keys. Lose the key and you need to revoke everything. Have it stolen and attackers sign malware with your identity. Sigstore eliminates this problem with keyless signing backed by transparency logs.

The Sigstore Stack

  • Cosign -- Signs and verifies container images and other OCI artifacts. Supports both key-based and keyless (identity-based) signing.
  • Fulcio -- A free certificate authority that issues short-lived (10-minute) signing certificates tied to an OIDC identity (GitHub, Google, Microsoft). No long-lived keys to manage.
  • Rekor -- An immutable transparency log that records every signing event. Anyone can verify that a signature was created at a specific time by a specific identity. Think Certificate Transparency, but for software artifacts.

Signing Container Images with Cosign

# Keyless signing (uses OIDC identity -- opens browser for auth)
cosign sign --yes ghcr.io/myorg/my-app:v1.2.3

# Verify a signed image
cosign verify ghcr.io/myorg/my-app:v1.2.3 \
  --certificate-identity=deploy@myorg.iam.gserviceaccount.com \
  --certificate-oidc-issuer=https://accounts.google.com

# Key-based signing (traditional approach)
cosign generate-key-pair
cosign sign --key cosign.key ghcr.io/myorg/my-app:v1.2.3
cosign verify --key cosign.pub ghcr.io/myorg/my-app:v1.2.3

# GitHub Actions: Sign image in CI with keyless Cosign
# (the job needs `id-token: write`; keyless signing is the default in
# Cosign 2.x, so COSIGN_EXPERIMENTAL is no longer required)
- name: Install Cosign
  uses: sigstore/cosign-installer@v3
- name: Sign container image
  run: cosign sign --yes ghcr.io/myorg/my-app:${{ github.sha }}

Enforcing Signatures in Kubernetes

Signing images is half the story. You also need to verify signatures at admission time so unsigned or tampered images never run in your cluster.

# Kyverno policy: only allow signed images
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: verify-image-signatures
spec:
  validationFailureAction: Enforce
  rules:
    - name: check-cosign-signature
      match:
        any:
          - resources:
              kinds:
                - Pod
      verifyImages:
        - imageReferences:
            - "ghcr.io/myorg/*"
          attestors:
            - entries:
                - keyless:
                    issuer: "https://token.actions.githubusercontent.com"
                    subject: "https://github.com/myorg/*"

Dependency Scanning: Dependabot vs Renovate vs Snyk

Dependency scanning tools monitor your dependency tree for known vulnerabilities and outdated packages. They differ significantly in configuration flexibility, supported ecosystems, and how they handle updates.

| Feature | Dependabot | Renovate | Snyk |
| --- | --- | --- | --- |
| Cost | Free (GitHub-native) | Free (OSS) / Mend.io hosted | Free tier / $25+/dev/mo |
| PR strategy | One PR per dependency | Grouped PRs, customizable | One PR per vuln fix |
| Auto-merge | Via GitHub config | Built-in, highly configurable | Limited |
| Monorepo support | Basic | Excellent | Good |
| Update scheduling | Daily/weekly/monthly | Cron-based, per-package | Continuous monitoring |
| Lockfile handling | Good | Excellent | Good |
| Vulnerability DB | GitHub Advisory DB | Relies on registry advisories | Snyk Vulnerability DB (proprietary) |
| License scanning | No | No | Yes |
| Container scanning | No | Yes (Docker digests) | Yes |

// renovate.json -- practical config for a monorepo
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"],
  "packageRules": [
    {
      "matchUpdateTypes": ["minor", "patch"],
      "matchCurrentVersion": "!/^0/",
      "automerge": true
    },
    {
      "matchPackagePatterns": ["eslint", "prettier", "typescript"],
      "groupName": "dev tooling",
      "schedule": ["before 6am on Monday"]
    }
  ],
  "lockFileMaintenance": {
    "enabled": true,
    "schedule": ["before 4am on Monday"]
  }
}

Pro tip: Use Renovate if you want control. It supports grouping related updates into a single PR, auto-merging patches for stable dependencies, and per-package schedules. Dependabot's one-PR-per-dependency approach creates noise in active projects. Snyk is worth the cost if you need license compliance scanning or a proprietary vulnerability database with faster CVE coverage.

SLSA Framework and Reproducible Builds

SLSA (Supply-chain Levels for Software Artifacts, pronounced "salsa") is a framework that originated at Google and is now maintained under the OpenSSF. Its original (v0.1) formulation defines four levels of supply chain integrity, each adding verifiable guarantees about how your software was built; SLSA v1.0 reorganizes these into a Build track with Levels 0-3, but the four-level model remains a useful way to think about the progression.

| SLSA Level | Requirement | What It Proves |
| --- | --- | --- |
| Level 1 | Build process documented | Provenance exists |
| Level 2 | Signed provenance, hosted build service | Tamper-resistant provenance |
| Level 3 | Hardened build platform, isolated builds | Provenance is accurate |
| Level 4 | Two-party review, hermetic/reproducible builds | Dependencies are complete and verified |

# GitHub Actions: SLSA Level 3 provenance generation.
# The SLSA generator is a reusable workflow, so it is invoked at the job
# level (not as a step inside another job):
provenance:
  needs: build
  permissions:
    actions: read
    id-token: write
    packages: write
  uses: slsa-framework/slsa-github-generator/.github/workflows/generator_container_slsa3.yml@v2.1.0
  with:
    image: ghcr.io/myorg/my-app
    digest: ${{ needs.build.outputs.digest }}

Reproducible builds take this further: given the same source code and build environment, you get bit-for-bit identical output. This lets anyone verify that the binary you distribute was built from the source you claim. Languages like Go and Rust have good reproducibility support. Node.js and Python require more effort (pinned dependencies, fixed timestamps, deterministic bundling).
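The determinism requirements (fixed timestamps, stable file ordering, no ownership metadata) can be demonstrated with GNU tar. The flags below are a common reproducible-archive recipe; the directory and file are placeholders, and compression is skipped because gzip would otherwise embed a timestamp:

```shell
# Create a small input tree for the demonstration.
mkdir -p demo && printf 'hello\n' > demo/file.txt

# GNU tar flags for a deterministic archive: stable name ordering, mtimes
# clamped to the epoch, and numeric zeroed owner/group metadata.
make_archive() {
  tar --sort=name --mtime='@0' --owner=0 --group=0 --numeric-owner \
      -cf "$1" demo
}

# Two independent runs should produce bit-for-bit identical output.
make_archive a.tar
make_archive b.tar

sha256sum a.tar b.tar   # identical digests => reproducible
```

The same idea scales up: if every step of a build is deterministic, the final artifact's hash becomes a verifiable claim about its source.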

Locking Down Your CI/CD Pipeline

Your CI/CD pipeline has access to source code, secrets, signing keys, and deployment credentials. It is the highest-value target in your supply chain. Hardening it is non-negotiable.

  • Pin action versions by SHA -- Use actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29 instead of @v4. Tags can be moved; commit SHAs cannot.
  • Restrict GITHUB_TOKEN permissions -- Set permissions: read-all at the workflow level and grant write access only where needed.
  • Use OpenID Connect (OIDC) -- Authenticate to cloud providers with short-lived tokens instead of storing long-lived credentials as secrets.
  • Enable branch protection -- Require PR reviews and status checks before merging to main.
  • Audit third-party actions -- Treat GitHub Actions like dependencies. Review their source and pin to specific commits.
  • Isolate sensitive workflows -- Separate build, sign, and deploy steps into different jobs with minimal permissions each.

# Hardened GitHub Actions workflow
name: Build and Deploy
on:
  push:
    branches: [main]

permissions: read-all  # Default restrictive permissions

jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
      id-token: write  # For OIDC and Sigstore
    steps:
      - uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29  # v4.2.2
      - uses: docker/build-push-action@14487ce63c7a62a3fd1178128fff2c91331a2a2e  # v6.12.0
        with:
          push: true
          tags: ghcr.io/myorg/my-app:${{ github.sha }}
      - uses: sigstore/cosign-installer@d7d6bc7722e3daa8354c50bcb52f4837da5e9b6a  # v3.8.0
      - run: cosign sign --yes ghcr.io/myorg/my-app:${{ github.sha }}

Frequently Asked Questions

What is an SBOM and why do I need one?

An SBOM (Software Bill of Materials) is a machine-readable inventory of every component in your software, including transitive dependencies, versions, and licenses. You need one to answer "are we affected?" when a new CVE drops. Without an SBOM, you are guessing. US Executive Order 14028 requires SBOMs for software sold to the federal government, and many enterprises now require them from vendors.

How does keyless signing work with Sigstore?

When you sign with Cosign in keyless mode, Fulcio verifies your identity through an OIDC provider (GitHub, Google, etc.) and issues a short-lived (10-minute) X.509 certificate. The signing event is recorded in Rekor, an immutable transparency log. Verifiers check the Rekor log to confirm that a valid certificate existed at signing time. No long-lived keys are generated or stored.

Should I use Dependabot or Renovate?

Use Dependabot if you want zero-config dependency updates on GitHub and your project is small. Use Renovate if you need grouped PRs, auto-merge for patches, monorepo support, or fine-grained scheduling. Renovate has a steeper learning curve but produces far less PR noise in active projects. For most teams with more than a handful of dependencies, Renovate is the better choice.

What SLSA level should I target?

Start with SLSA Level 2 -- signed provenance from a hosted build service like GitHub Actions. This is achievable in a day using the SLSA GitHub Generator. Level 3 (hardened builds) is appropriate for production services and container images. Level 4 (reproducible builds) is aspirational for most teams but critical for security-sensitive software like cryptographic libraries or OS packages.

How do I prevent another xz-utils-style attack on my dependencies?

No single tool prevents social engineering attacks. Layer your defenses: pin dependencies to exact versions and review upgrade diffs, generate and monitor SBOMs, scan for known vulnerabilities in CI, verify signatures on container images, use lockfiles and verify their checksums, and limit the number of direct dependencies. Tools like Socket.dev analyze package behavior changes (new network calls, filesystem access) that traditional CVE scanners miss.

Are reproducible builds practical for Node.js or Python projects?

They are harder than with Go or Rust but achievable. For Node.js: use npm ci (not npm install), pin all dependencies with a lockfile, set SOURCE_DATE_EPOCH for deterministic timestamps, and use a fixed Node.js version via a Docker image digest. For Python: use pip install --require-hashes with a pinned requirements file and build inside a pinned base image. The effort scales with project complexity, but even partial reproducibility improves your security posture.

What is the difference between vulnerability scanning and SBOM generation?

SBOM generation produces an inventory -- it tells you what components are in your software. Vulnerability scanning checks that inventory against databases of known CVEs. They are complementary: generate an SBOM once per build, then scan it against updated vulnerability databases continuously. Tools like Grype can scan a pre-generated SBOM without re-analyzing the artifact, making ongoing monitoring efficient.

Start With the Highest-Impact Steps

You do not need to implement everything at once. Start with three actions that give disproportionate returns: generate SBOMs for your container images using Syft or Trivy, enable Dependabot or Renovate for automated dependency updates, and pin your CI action versions by commit SHA. These three steps take less than a day and close the most common supply chain gaps. From there, add Cosign image signing and work toward SLSA Level 2 provenance. The goal is not perfection -- it is making every link in your supply chain verifiable and auditable.


Written by

Abhishek Patel

Infrastructure engineer with 10+ years building production systems on AWS, GCP, and bare metal. Writes practical guides on cloud architecture, containers, networking, and Linux for developers who want to understand how things actually work under the hood.
