Architecture · April 11, 2026

The 2026 Matrix: Soft/Hard Affinity and Spread Rules for CI Runners on Mac mini M4

NodeMac Team

Automated editorial

Treating every Mac mini M4 in a pool as interchangeable sounds democratic until one monorepo’s jobs always land on the same host and cook its SSD, or until flaky UI tests need fresh simulators and you accidentally serialize the whole org on a single machine. Affinity is the scheduler’s preference for where work runs; spread is the opposite force—push concurrent jobs across hosts to cut correlated failure and thermal hotspots. This 2026 matrix translates Kubernetes-flavored “soft/hard” thinking into GitHub Actions–style labels, runner groups, and org policies on dedicated Apple Silicon hosts.


Soft vs hard affinity (what you are allowed to break)

Constraint | Behavior if no match | Typical macOS use
Soft affinity | Queue, or pick the next-best host | Prefer m4-apple-silicon but allow overflow to m2 staging
Hard affinity | Stay queued until the exact label set matches | Codesign with an org-specific keychain on nm-org/signing/* runners only
Anti-affinity / spread | Reject a host if a sibling job from the same repo is running | Simulator-heavy workflows: max 1 UI job per host
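As a sketch of how these constraint types map onto GitHub Actions, keeping in mind that label matching there is hard by default (label names below are illustrative, not a prescribed scheme):

```yaml
jobs:
  sign:
    # Hard affinity: the job queues until a self-hosted runner carrying
    # ALL of these labels is online.
    runs-on: [self-hosted, macos, m4-apple-silicon, signing]

  build:
    # Soft affinity has no native syntax in GitHub Actions. One
    # approximation: register both the preferred M4 pool and the M2
    # overflow pool under a shared label, so either host class can
    # pick the job up when the preferred pool is busy.
    runs-on: [self-hosted, macos, apple-silicon]
```

The shared-label trick gives you overflow but not true preference ordering; a scheduler-side orchestrator is needed if the M4 pool must be tried first.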

When to spread vs when to pin

Signal | Prefer spread | Prefer pin / hard label
Disk I/O saturation | Yes: stagger heavy DerivedData workloads | Rarely, unless each host has a dedicated cache volume
License / signing | No: keep the hard pool | Yes: token-bound hosts
Flaky parallel UI | Yes: one shard per host | Optionally a sticky debug host for repro

Rule of thumb: if two failures on the same host would waste more than 30 min of engineer time to disambiguate, enforce spread or shard by modulo across runner names.
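The modulo sharding mentioned above can be sketched in a few lines. The shard-N label scheme and function name are illustrative, assuming each host is registered with exactly one such label:

```python
import hashlib

def shard_label(repo: str, workflow: str, num_shards: int) -> str:
    """Deterministically map a repo+workflow to one of N shard labels.

    Jobs from the same workflow always target the same shard, so a
    re-run lands on comparable hardware, while different workflows
    spread across hosts. A cryptographic hash is used only for a
    stable, even distribution; there is no security requirement here.
    """
    key = f"{repo}/{workflow}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return f"shard-{digest % num_shards}"
```

A workflow would then set its runs-on list to include the computed label, e.g. via a generator that templates the workflow files.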

Eight-step rollout checklist

  1. Inventory workflows with runs-on lines; tag each as soft, hard, or spread.
  2. Encode spread either as an orchestrator feature (a unique concurrency group per repo+workflow) or as distinct runner labels per shard.
  3. Limit hard affinity to signing, GPU, or compliance—everything else defaults soft.
  4. Build a dashboard of queue depth per label; alert when hard pools sit starved while soft pools idle.
  5. Game-day: kill one host; verify soft jobs migrate, hard jobs wait with clear UI message.
  6. Document exceptions in the same repo as OS/Xcode manifests.
  7. Review quarterly as new chips (M5…) appear—soft pools should absorb experiments.
  8. Cost check: spreading may need +N Mac minis; compare rental burst vs queue SLO slip.
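Step 2's orchestrator-side option can be sketched with a GitHub Actions concurrency group; the group key shown is one possible scheme, not a prescribed convention:

```yaml
# One running UI-test workflow per repo+workflow at a time; further
# runs queue instead of piling onto the same hosts.
concurrency:
  group: ui-${{ github.repository }}-${{ github.workflow }}
  cancel-in-progress: false
```

Note this caps concurrency per workflow rather than enforcing true per-host anti-affinity; the distinct-label-per-shard option covers the latter.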

Anti-patterns

Marking every workflow runs-on: [self-hosted, exact-hostname]—you recreated pets. Using only soft labels but never measuring queue fairness—spread is useless if one repo floods the soft pool. Ignoring thermal throttling on laptops repurposed as runners; M4 minis in cloud metal behave more predictably, which is why teams rent dedicated Region Macs instead of stacking CI on developer desks.
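The queue-fairness measurement that the second anti-pattern skips can be sketched as a small check over queued jobs and idle runner labels. The data shapes are illustrative, not the GitHub API schema:

```python
from collections import Counter

def starved_labels(queued_jobs, idle_labels, threshold=5):
    """Return labels whose queue depth is at or above threshold while
    at least one other pool sits idle and could absorb work.

    queued_jobs: iterable of dicts with a "labels" list per job.
    idle_labels: labels that currently have idle runners.
    """
    depth = Counter(
        label for job in queued_jobs for label in job["labels"]
    )
    return sorted(
        label
        for label, d in depth.items()
        if d >= threshold and any(l != label for l in idle_labels)
    )
```

Fed from whatever queue telemetry your orchestrator exposes, a nonempty result is the "hard pool starved while soft pool idles" alert from step 4 of the checklist.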

NodeMac rents physical Mac mini M4 and Mac-class hosts with SSH/VNC in Hong Kong, Japan, Korea, Singapore, and the United States so platform teams can stand up additional soft pools for burst without a CapEx committee for every affinity experiment.

Add more M4 runners for spread?

HK·JP·KR·SG·US: dedicated Mac mini M4 with SSH/VNC.

NodeMac Cloud Mac
Up and running in minutes

Dedicated Apple Silicon Macs in the cloud. SSH/VNC. HK·JP·KR·SG·US.

Get started