Bringing WCET and Timing Analysis Into DevOps for Safety-Critical Systems

2026-02-05

Automate WCET checks in CI with RocqStat to catch timing regressions early, enforce budgets, and produce traceable evidence for safety-critical systems.

Timing regressions are a silent hazard in safety-critical pipelines

If your CI catches functional regressions but lets timing budgets slip, you still have a runaway hazard. Teams building automotive ECUs, avionics controllers, and industrial automation face the same trap: a small code change that compiles and passes unit tests can exceed the worst-case execution time (WCET) budget and invalidate safety arguments. In 2026 the bar for timing assurance has only risen. You need WCET checks in CI that are repeatable, automated, and tied to builds.

The evolution of WCET and timing analysis in 2026

In early 2026 Vector Informatik completed the acquisition of StatInf's RocqStat technology and team, signaling a consolidation of timing analysis into mainstream verification toolchains. Industry toolchains are converging: static WCET analysis, measurement-based evidence, and software testing are being unified into DevOps workflows. This means teams can now expect tools such as RocqStat to offer tighter integrations with code testing suites like VectorCAST, and CI platforms to adopt timing checks as a standard gate.

Vector's acquisition accelerates integration of timing analysis and test automation, making WCET part of the software delivery lifecycle.

Why implement WCET checks in CI now

  • Shift-left assurance: Detect timing regressions at code review time, not in late-stage integration.
  • Certification support: Produce repeatable artifacts and traceability required for ISO 26262 and DO-178C evidence.
  • Cost control: Avoid expensive rework on hardware-in-the-loop labs or field recalls by catching budget breaches early.
  • Continuous accountability: Enforce timing budgets per feature and per branch, and track regressions over time.

How RocqStat fits into an embedded CI pipeline

RocqStat provides static worst-case execution time (WCET) estimation and can be used alongside measurement evidence in a hybrid analysis. In CI, RocqStat can run as a CLI step or via integrated adapters within VectorCAST. The typical flow is:

  1. Build the binary with deterministic flags and a fixed toolchain.
  2. Extract analysis inputs: control flow graph, binary, and optional measurement traces.
  3. Run RocqStat to obtain WCET estimates per function or task.
  4. Compare results against timing budgets stored with the project or in a central policies service.
  5. Fail the build or post a warning depending on policy; publish reports and artifacts for traceability.

Key integration patterns

  • Gate checks: Fail pull requests when WCET exceeds budget for modified code paths.
  • Quality gates: Allow merges with warnings but block release branches until budgets pass.
  • Incremental analysis: Analyze only changed functions to speed up CI; perform full analysis nightly.
  • Hybrid evidence: Combine static WCET from RocqStat with execution traces from hardware-in-the-loop for stronger claims.
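The quality-gate pattern can be expressed directly in GitLab CI rules; a sketch (the branch pattern and script path are illustrative):

```yaml
wcet_check:
  stage: wcet
  script:
    - python3 tools/ci_enforce_wcet.py analysis/wcet.json budgets/wcet_policy.json
  rules:
    # Release branches: a budget breach blocks the pipeline
    - if: '$CI_COMMIT_BRANCH =~ /^release\//'
      allow_failure: false
    # All other branches: report the breach but allow the merge
    - if: '$CI_COMMIT_BRANCH'
      allow_failure: true
```

The same split works in GitHub Actions with separate required and optional check runs per branch protection rule.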

Practical CI examples: GitLab CI, GitHub Actions, Jenkins

Below are concrete, copy-pasteable examples showing how to run RocqStat in common CI systems. The examples assume a Docker image that contains the RocqStat CLI, the cross compiler, and any helper scripts. Use a pinned image digest to ensure repeatability.

GitLab CI example

stages:
  - build
  - wcet

build:
  stage: build
  image: myregistry/embedded-toolchain:sha256_deadbeef
  script:
    - make clean all TARGET=stm32
  artifacts:
    paths:
      - build/output.bin
      - build/mapfile.map

wcet_check:
  stage: wcet
  image: myregistry/rocqstat-ci:sha256_deadbeef
  dependencies:
    - build
  script:
    - ./tools/extract_cfg.sh build/output.bin build/mapfile.map -o analysis/input
    - rocqstat analyze --input analysis/input --json --output analysis/wcet.json
    - python3 tools/ci_enforce_wcet.py analysis/wcet.json budgets/wcet_policy.json
  artifacts:
    paths:
      - analysis/wcet.json
      - analysis/wcet_report.html

The enforcement script parses the RocqStat output and exits nonzero on violations. Store budgets in a committed file or a remote policy service.

GitHub Actions example

name: CI

on:
  pull_request:
    paths:
      - 'src/**'

jobs:
  build-and-wcet:
    runs-on: ubuntu-latest
    container:
      image: myregistry/rocqstat-ci:sha256_deadbeef
    steps:
      - uses: actions/checkout@v4
      - name: Build
        run: make clean all TARGET=stm32
      - name: Extract CFG
        run: ./tools/extract_cfg.sh build/output.bin build/mapfile.map -o analysis/input
      - name: Run RocqStat
        run: rocqstat analyze --input analysis/input --json --output analysis/wcet.json
      - name: Enforce budgets
        run: |
          python3 tools/ci_enforce_wcet.py analysis/wcet.json budgets/wcet_policy.json

Jenkins pipeline snippet

pipeline {
  agent { docker { image 'myregistry/rocqstat-ci:sha256_deadbeef' } }
  stages {
    stage('Build') {
      steps { sh 'make clean all TARGET=stm32' }
      post { always { archiveArtifacts artifacts: 'build/**' } }
    }
    stage('WCET') {
      steps {
        sh './tools/extract_cfg.sh build/output.bin build/mapfile.map -o analysis/input'
        sh 'rocqstat analyze --input analysis/input --json --output analysis/wcet.json'
        sh 'python3 tools/ci_enforce_wcet.py analysis/wcet.json budgets/wcet_policy.json'
      }
      post { always { archiveArtifacts artifacts: 'analysis/**' } }
    }
  }
}

Designing enforcement logic

An enforcement script has three responsibilities:

  1. Parse RocqStat output (prefer JSON output when available).
  2. Map functions or tasks to timing budgets stored in project config or a policy store.
  3. Decide action: fail build, post check-run comment, or open an issue automatically if regression exceeds tolerance.
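For example, a committed budgets file can simply map function or task names to their budgets (the names and microsecond units here are illustrative; adapt the schema to whatever your enforcement script expects):

```json
{
  "control_loop_step": 1600,
  "sensor_filter_update": 250,
  "can_tx_handler": 120
}
```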

A minimal enforcement flow, written out in Python:

import json
import sys

TOLERANCE = 0.05  # allow 5 percent headroom before failing the build

# Parse RocqStat results and the committed budgets
with open('analysis/wcet.json') as f:
    results = json.load(f)
with open('budgets/wcet_policy.json') as f:
    budgets = json.load(f)

violations = []
for item in results:
    budget = budgets.get(item['name'])
    if budget is not None and item['wcet'] > budget * (1 + TOLERANCE):
        violations.append((item['name'], item['wcet'], budget))

if violations:
    for name, wcet, budget in violations:
        print(f'Timing budget violated: {name} wcet={wcet} budget={budget}')
    sys.exit(1)

print('All timing budgets OK')

Managing nondeterminism and noisy measurements

Timing checks in CI can be noisy if based purely on measurement. Use static analysis for deterministic upper bounds, and use measurement-based evidence to strengthen them. When you must use execution traces in CI, apply these controls:

  • Isolate CPU and resources: Use dedicated CI runners or real-time cores; pin processes to CPUs.
  • Fix toolchain and build flags: Reproducible builds matter; use exact compiler versions and flags that affect inlining and optimization.
  • Warm caches and repeat runs: Perform multiple runs and take conservative percentiles, e.g. the 99.9th percentile, for safety margins.
  • Use HIL for final certification: Keep hardware-in-the-loop tests for release candidates; use CI to catch regressions earlier with static checks.
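The repeated-runs rule can be sketched with the standard library alone; the nearest-rank percentile below never interpolates under an observed sample, so it stays conservative:

```python
import math

def conservative_bound(samples, percentile=99.9):
    """Return a high percentile of measured execution times.

    Uses the nearest-rank method: sort the samples and pick the
    value at rank ceil(p/100 * n), so the result is always an
    actually observed sample, never an interpolated value below one.
    """
    ordered = sorted(samples)
    rank = math.ceil(percentile / 100 * len(ordered))
    return ordered[min(rank, len(ordered)) - 1]

# 1000 hypothetical cycle counts from repeated runs
samples = [1000 + (i % 50) for i in range(1000)]
print(conservative_bound(samples))  # 1049, the observed maximum here
```

Feed the bound into the same enforcement script as the static estimates, but treat it as supporting evidence rather than a proof of an upper bound.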

Delta analysis and performance for large codebases

Full WCET analysis can be expensive. Use incremental strategies so CI remains fast while retaining safety.

  • Change impact analysis: Analyze only functions changed in the current diff and their callers.
  • Caching analysis artifacts: Store control flow and microarchitectural models per build to avoid recomputation; caching keyed on the binary hash can cut repeat job time substantially.
  • Nightly full runs: Schedule full WCET runs nightly or on merge to main to catch cross-cutting regressions; run these long jobs off-peak so they don't contend with daytime builds.
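Change impact analysis needs a call graph to find the callers of modified functions. A minimal transitive-caller walk, assuming the call graph is available as a function-to-callees mapping (the function names below are hypothetical), might look like:

```python
def impacted_functions(call_graph, changed):
    """Return changed functions plus every direct and transitive caller.

    call_graph maps each function to the set of functions it calls;
    changed is the set of functions touched by the current diff.
    """
    # Invert the graph: callee -> set of callers
    callers = {}
    for fn, callees in call_graph.items():
        for callee in callees:
            callers.setdefault(callee, set()).add(fn)

    # Walk upward from the changed set until no new callers appear
    impacted = set(changed)
    frontier = list(changed)
    while frontier:
        fn = frontier.pop()
        for caller in callers.get(fn, ()):
            if caller not in impacted:
                impacted.add(caller)
                frontier.append(caller)
    return impacted

graph = {
    'task_main': {'control_step', 'telemetry'},
    'control_step': {'pid_update'},
    'telemetry': set(),
    'pid_update': set(),
}
print(sorted(impacted_functions(graph, {'pid_update'})))
# ['control_step', 'pid_update', 'task_main']
```

Only this impacted set needs re-analysis in the PR pipeline; the nightly job still analyzes everything.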

Traceability, artifacts, and certification evidence

For safety-critical projects you must collect evidence. CI pipelines should publish:

  • WCET result artifacts tied to commit hashes and build IDs.
  • Analysis inputs: binary, map files, CFG snapshots, measurement traces.
  • Toolchain versions and pinned Docker images for reproducibility.
  • Signed reports or attestations when required by auditors; store these in an immutable artifact repository or other tamper-evident store.
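Tying artifacts to a build can be as simple as a manifest that records a content hash per file alongside the commit and build ID; a minimal sketch (paths and field names are illustrative):

```python
import hashlib

def build_manifest(paths, commit, build_id):
    """Hash each artifact and record it against the commit and build ID."""
    entries = {}
    for path in paths:
        h = hashlib.sha256()
        with open(path, 'rb') as f:
            # Hash in chunks so large binaries don't need to fit in memory
            for chunk in iter(lambda: f.read(65536), b''):
                h.update(chunk)
        entries[path] = h.hexdigest()
    return {'commit': commit, 'build_id': build_id, 'artifacts': entries}

# Usage in a CI step, e.g.:
#   manifest = build_manifest(['analysis/wcet.json'], commit_sha, build_id)
#   json.dump(manifest, open('analysis/manifest.json', 'w'), indent=2)
```

Sign or attest the resulting manifest rather than each artifact individually; the hashes then extend the chain of custody to every file listed.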

Link these artifacts from your release notes. When tools like RocqStat integrate with VectorCAST, test evidence and timing evidence can be centralized in the same verification record, which greatly simplifies traceability.

Detecting and triaging timing regressions

When a timing regression is detected, teams need a fast, predictable triage flow:

  1. Label the failing check-run with severity and suspected component.
  2. Block merge if severity exceeds policy; otherwise attach a warning and link to the report.
  3. Create a regression ticket automatically including a minimal repro and WCET diffs.
  4. Run an extended analysis: compare control flow changes, inspect assembly diffs, and look for changed compiler flags or inline heuristics.

Advanced strategies: hybrid analysis, ML prediction, and canary releases

2026 trends show three advanced patterns worth adopting:

  • Hybrid analysis: Combine RocqStat static bounds with measurement traces to shrink overapproximations without losing safety. This is especially useful on complex microarchitectures.
  • ML-assisted prioritization: Use lightweight ML models to predict likely timing hotspots from diffs and prioritize those for full analysis in CI, reducing compute cost.
  • Canary timing runs: For high-risk changes, gate merge to a canary runner that executes more expensive WCET checks or HIL tests before releasing broadly.

Case study: enforcing a 2ms task budget in an ECU pipeline

Consider a control task with a strict 2 millisecond deadline. The team configures RocqStat analysis to produce per-path and per-task WCET estimates. The CI pipeline includes the following policies:

  1. On pull request, run incremental RocqStat for changed functions. If any function contributes to the 2ms task and estimated contribution exceeds 1.6ms, fail the PR check.
  2. On merge to develop, run a full static WCET across the linked task set. If task WCET exceeds 2ms, prevent promotion to release candidate stage.
  3. Nightly HIL runs record measured latencies to ensure static analysis remains conservative while not overly pessimistic.
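Policy 1 reduces to a small predicate over the incremental results; a sketch using the case study's 1.6 ms per-function threshold (the data shapes and function names are assumptions):

```python
TASK_BUDGET_US = 2000          # 2 ms task deadline
CONTRIBUTION_LIMIT_US = 1600   # fail the PR check above 1.6 ms per function

def pr_gate(incremental_results, task_members):
    """Return offending functions in the 2 ms task, empty list = pass.

    incremental_results maps function name -> estimated WCET in
    microseconds; task_members is the set of functions that
    contribute to the 2 ms task.
    """
    return [
        (fn, wcet) for fn, wcet in incremental_results.items()
        if fn in task_members and wcet > CONTRIBUTION_LIMIT_US
    ]

print(pr_gate({'control_step': 1700, 'telemetry': 300},
              {'control_step'}))
# [('control_step', 1700)]
```

The full-task check on merge to develop then sums per-function contributions along the task's call paths and compares against TASK_BUDGET_US.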

This mix of fast incremental checks and slower full analyses keeps developer feedback quick and prevents dangerous regressions from slipping through.

Actionable checklist for teams

  • Pin a RocqStat-compatible Docker image and toolchain in CI for reproducibility.
  • Define timing budgets per function and per task and commit them to the repo or a policy server.
  • Implement an enforcement script that parses RocqStat JSON and fails builds on budget violations with clear messaging.
  • Use change impact analysis to keep CI runs fast; schedule full runs nightly or on merge to main.
  • Publish and sign WCET artifacts with build IDs for certification traceability.
  • Combine static and measurement evidence for stronger, less pessimistic safety arguments.

Common pitfalls and how to avoid them

  • Pitfall: Relying only on measurement in CI. Fix: Use static WCET for gating and measurement for supplementary evidence.
  • Pitfall: Unpinned toolchains producing divergent results. Fix: Pin compilers and docker image digests; record hashes.
  • Pitfall: Slow full analysis blocking developers. Fix: Use incremental analysis and reserve full runs for scheduled pipelines.

Final recommendations for 2026 and beyond

As timing analysis tools become part of mainstream verification stacks, teams that automate WCET checks in CI will have a competitive advantage: fewer late-stage discoveries, stronger certification artifacts, and faster time to market. Start small with incremental checks and grow to hybrid analyses and HIL integration. Keep policies codified and treat timing like any other quality gate.

Actionable takeaways

  • Integrate RocqStat into CI as a CLI step or via your testing toolchain adapter to produce machine-readable WCET outputs.
  • Enforce timing budgets with automated scripts that map WCET results to budget files and fail builds when breached.
  • Use delta analysis for fast feedback and full analysis nightly or on merge for complete verification.
  • Archive artifacts and sign reports for traceability and certification evidence.

Next steps and call to action

If you operate safety-critical embedded systems, make timing checks a first-class citizen in your CI. Start by creating a reproducible Docker image with RocqStat and your toolchain, implement a simple enforcement script that parses RocqStat's JSON output, and add an incremental analysis job to your PR pipeline. From there, expand to hybrid analysis and HIL gating for release candidates.

Need a turnkey starting point? We maintain CI templates and enforcement scripts for GitLab, GitHub Actions, and Jenkins tailored for embedded projects. Contact us to get templates, Docker images, and a short onboarding workshop to integrate RocqStat into your pipeline and begin catching timing regressions on every commit.
