[{"content":"Helping regulated enterprises adopt AI coding agents, I keep encountering the same architectural contradiction: the agent needs access to source code, cloud APIs, and developer infrastructure to be useful, but the agent itself runs untrusted code. Simon Willison and Martin Fowler have described this as the \u0026ldquo;lethal trifecta\u0026rdquo; of AI security risk: access to sensitive data, exposure to untrusted content, and the ability to communicate externally. Any two of the three are manageable. All three together require a fundamentally different security posture.\nThe answer, I found early in my work with containerized agent runtimes, is not to trust the agent less but to architect the runtime so that trust is irrelevant. Zero trust applied to AI developer tools means the container, the credentials, the network, and the write path are all designed to function correctly even when the agent inside them is actively compromised. Here are six patterns I have built and tested across financial services, automotive, and defense. None require exotic tooling.\nContainer isolation as the default The container that runs the agent is assumed compromised from the moment it starts. This is not pessimism, it is a design constraint that simplifies everything downstream.\nIn practice this means a read-only root filesystem, tmpfs mounts for working data that vanish when the session ends, all Linux capabilities dropped, and an unprivileged user with no escalation path. Podman rootless containers make this straightforward without a daemon running as root. For higher-isolation environments, Firecracker microVMs provide a hardware-level boundary, that an attacker who escapes the container still lands inside a minimal VM with no host access.\nIf an attacker gains shell inside the agent\u0026rsquo;s runtime, what do they get? Read-only filesystem, short-lived tokens, no network access to anything not explicitly allowed, no persistent storage. 
A box designed to be thrown away, and they get thrown away with it.\nTask-scoped credentials, never shared Persistent credential stores inside agent runtimes are the single most dangerous pattern I see in enterprise AI deployments. A long-lived API token in an environment variable inside a container processing untrusted input is an invitation written in neon.\nThe pattern I have settled on uses three tiers. Self-rotating tokens, where minting a new token immediately invalidates the old one, so a stolen credential is dead by the time an attacker tries it. Certificate-minted tokens using JWT bearer flows with a 15-minute TTL, where the signing certificate lives in HashiCorp Vault (or OpenBao for the open-source path) and the agent never sees the private key. And mint-and-expire tokens with a 60-minute TTL for lower-sensitivity operations, expiring naturally, never refreshed.\nSPIFFE workload identity gives each session a cryptographically verifiable identity without embedding secrets. The runtime proves who it is by workload attestation, not by presenting a credential that could be copied. No tier stores credentials on disk. No tier shares tokens between sessions.\nHuman-in-the-loop write gates All external writes, every CRM update, every Slack message, every Git commit, every cloud resource modification, require explicit human confirmation. The agent drafts, the human approves. This is not a UX convenience pattern. It is a security boundary.\nHaving watched an unconfigured agent send an internal project identifier to a customer-facing Slack channel (an incident I described in an earlier post), I am not theoretical about this. The write gate intercepts the action, presents a dry-run table (target system, proposed change, data involved), then waits for a human to say yes.\nRead operations and write operations belong to fundamentally different trust categories. Reading a file, querying an API, searching a codebase, these are safe to auto-execute. 
Anything that changes state in a system visible to other humans crosses a boundary, that boundary needs a gate, and that gate needs a person.\nDefault-deny networking Outbound network access from the agent runtime is denied by default. Every allowed destination is declared explicitly in a policy file: hostname allowlists, port restrictions, protocol constraints. DNS resolution happens through a policy proxy, not inside the sandbox, so the agent cannot resolve hostnames the policy has not approved.\nThis inverts the typical container networking model, where outbound is open and you block the bad destinations after the fact. Default-deny means the agent reaches exactly the APIs and services its task requires, nothing else. No path to the public internet, no path to internal services it does not need.\nEgress logging captures every connection attempt, successful or denied. When the OWASP Top 10 for LLM Applications lists prompt injection as a top risk, default-deny networking is the infrastructure-level answer: even a successfully injected prompt cannot exfiltrate data to a destination the policy has not blessed.\nPolicy-as-code, not honor system Every pattern described above, the container configuration, the credential scopes, the network allowlists, the write gate rules, exists as a version-controlled configuration file. Not as a wiki page, not as a comment in an agent prompt, not as tribal knowledge held by the platform team.\nOpen Policy Agent (OPA) evaluates these policies at runtime, making enforcement programmatic rather than procedural. Data classification rules, credential scope definitions, network policies, all living in the same repository as the infrastructure code, reviewed through the same merge request process, subject to the same approval gates.\nAuditors do not accept \u0026ldquo;we told the agent not to do that\u0026rdquo; as evidence of a control. SOC 2 and ISO 27001 audits require demonstrable, reviewable, versioned controls. 
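A default-deny policy of this kind is small enough to read in one sitting: the allowlist is data and the evaluator is trivial. A minimal sketch in JavaScript (the field names, hosts, and ports are assumptions for illustration, not a real product schema):

```javascript
// Illustrative default-deny egress policy: the allowlist is plain data,
// and anything not explicitly listed is denied by construction.
const egressPolicy = {
  defaultAction: "deny",
  allow: [
    { host: "gitlab.example.com", port: 443 },
    { host: "vault.internal.example", port: 8200 }
  ]
};

function isEgressAllowed(policy, host, port) {
  // There is no "else allow" path: an empty allowlist denies everything.
  return policy.allow.some(rule => rule.host === host && rule.port === port);
}
```

An OPA policy would express the same rule in Rego; the JavaScript here is only to show the shape of the data a reviewer would diff.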
Policy-as-code turns security configuration into something an auditor can diff against the previous version and verify was reviewed by a human before deployment.\nAudit trails that survive the session When the agent session ends and the container is destroyed (and it should be destroyed, not recycled), the audit record must persist. Every tool invocation, every credential mint, every write gate decision, every network connection attempt, logged to a store that outlives the ephemeral runtime.\nThis is the pattern I found most often missing in early deployments. The container is properly isolated, the credentials properly scoped, but when a compliance team asks \u0026ldquo;what did the agent do last Thursday,\u0026rdquo; the answer is silence, because the logs went away with the container.\nStructured audit events shipped to an immutable log store before the session ends give regulators what they need: evidence, not trust. The trail captures what the agent did, what it was allowed to do (the policy snapshot), what was denied, and what credentials it held.\nThese patterns are not aspirational Everything described here runs on standard tooling. Podman for rootless containers. OPA for policy evaluation. Vault or OpenBao for secret brokering. SPIFFE for workload identity. Standard log shipping to whatever immutable store your compliance team already trusts.\nThe instinct in regulated environments is to either lock down everything (making the agent useless) or grant broad access and hope the model behaves (making the agent dangerous). Zero trust applied to agent runtimes offers a third path: grant precisely the access the task requires, enforce it at the infrastructure level, make every decision auditable, and destroy the runtime when the work is done.\nI have watched enough enterprise AI pilots stall on security review to know that the blocker is rarely the technology. It is the absence of a legible security architecture that compliance teams can evaluate. 
Six patterns, each implementable in a week, each auditable, each explainable to a CISO in a single sentence. That is the difference between a pilot that passes security review and one that dies in committee.\n","permalink":"https://universalamateur.gitlab.io/post/zero-trust-patterns-for-ai-developer-tools/","summary":"Six patterns for running AI coding agents in environments where the container is assumed compromised.","title":"Zero-Trust Patterns for AI Developer Tools"},{"content":"Working with customers whose data cannot leave the building, I kept running into the same problem: every deal sizing tool assumes an internet connection. A SaaS spreadsheet, a web app with an API backend, a shared Google Sheet with formulas that phone home for exchange rates. For customers in automotive, defense, and financial services, where air-gapped environments are not an edge case but the default operating condition, none of these work. Pricing data is sensitive (deal structure, seat counts, consumption patterns), and the moment you ask a field seller to paste that into a cloud-hosted tool, you have created a compliance conversation that nobody wants to have.\nSo I built two companion web apps for internal deal sizing, both deployed to GitLab Pages, both running entirely in the browser, both requiring exactly zero network calls after the initial page load.\nThe Two-Tool Architecture The first tool, the Field Collector, is what sales reps interact with directly. They upload a customer\u0026rsquo;s Service Ping JSON (a structured telemetry payload that describes how an instance is being used) or enter the same data manually when no export is available. 
The tool validates every input, auto-fills persona breakdowns using a default ratio (31% creators, 16% verifiers, 49% reporters, a split that field sellers can override when they know the customer\u0026rsquo;s actual distribution), and exports a structured JSON file that captures everything Deal Desk needs to build a quote.\nThe second tool, the ELA Calculator, takes that export file (or accepts direct input for quick scenarios) and does the actual math: annual pricing, credit consumption broken down by workflow, three pricing paths rendered side by side for comparison, and an ROI overlay that shows the customer what they are getting for their spend. Having worked through enough deal cycles to know that sellers need to show options rather than a single number, I made the three-path layout a deliberate design choice from day one.\nBoth tools are vanilla HTML, CSS, and JavaScript. No React, no Vue, no build step, no bundler. The constraint was specific: field sellers run this on any laptop, including customer-provided machines with restricted browsers, locked-down USB policies, and no ability to install anything. The browser is the runtime. The static files served by GitLab Pages are the entire infrastructure.\nOne Parser, Two Repos The Service Ping payload is the shared data contract between both tools. Extracting the right fields from a Service Ping JSON (active user counts, feature usage signals, CI minutes consumed) requires parsing logic that both tools must agree on. Rather than maintaining two copies of the same extraction code, both repos share a parser.js module synced via a version constant.\n// parser.js — shared between Field Collector and ELA Calculator\nconst PARSER_VERSION = \u0026#34;1.4\u0026#34;;\n\nfunction parseServicePing(payload) {\n  const extract = (path, fallback = 0) =\u0026gt;\n    path.split(\u0026#34;.\u0026#34;).reduce(\n      (obj, key) =\u0026gt; (obj \u0026amp;\u0026amp; obj[key] !== undefined ? obj[key] : fallback),\n      payload\n    );\n  return {\n    version: PARSER_VERSION,\n    active_users: extract(\u0026#34;usage_activity_by_stage.manage.events\u0026#34;, 0),\n    ci_pipelines: extract(\u0026#34;usage_activity_by_stage.verify.ci_pipelines\u0026#34;, 0),\n    merge_requests: extract(\u0026#34;usage_activity_by_stage.create.merge_requests\u0026#34;, 0),\n    deployments: extract(\u0026#34;counts.deployments\u0026#34;, 0),\n    security_scans: extract(\u0026#34;usage_activity_by_stage.secure.user_sast_jobs\u0026#34;, 0),\n    extracted_at: new Date().toISOString()\n  };\n}\nThe PARSER_VERSION constant is the coordination mechanism. When the extraction logic changes (a new field added, a path updated because the Service Ping schema evolved), the version bumps, and both repos know they need the update. Two repos, one parsing contract, no shared server required.\nThe Export Schema as Documentation Connecting the two tools without a server means the JSON export file is the API. Treating it with the same seriousness as a REST endpoint, I documented the schema in a FIELD-EXPORT-SCHEMA.md file (currently at v1.1) that specifies every field, its type, its validation rules, and its default value. When the Field Collector writes a file and the ELA Calculator reads it, both sides validate against the same contract. If a seller manually edits the JSON (which happens), the Calculator catches malformed fields before attempting any math.\nThis formality around a simple JSON file might seem excessive for an internal tool, but having discovered through experience that \u0026ldquo;it is just a JSON file\u0026rdquo; becomes \u0026ldquo;why does the calculator show NaN\u0026rdquo; within about two weeks of field use, the schema documentation paid for itself almost immediately.\nSecurity Without a Server Deploying to GitLab Pages with Content Security Policy headers means the tools are hardened even in their static form. 
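The headers in question are on the order of the following (an illustrative policy, not the literal configuration shipped with the tools):

```http
Content-Security-Policy: default-src 'self'; script-src 'self'; style-src 'self'; img-src 'self' data:; connect-src 'none'
```

With connect-src locked down, even a script that somehow runs has no endpoint to phone home to.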
The CSP rules prevent inline script injection, restrict resource loading to same-origin, and ensure that even if someone bookmarks the page on a shared machine, the tool cannot be tricked into loading external resources. For an air-gapped deployment, where the pages might be saved locally and opened from a file system, the same CSP discipline means the tools behave identically whether served over HTTPS or opened from disk.\nNo cookies, no local storage persistence (all state lives in the current session and the exported file), no analytics scripts, no font loading from CDNs. After the HTML, CSS, and JS files load, the network connection could be severed entirely and the tools would not notice.\nWhat This Pattern Teaches Building for constrained environments, where there is no server, no database, no network after page load, forces a kind of architectural clarity that I found surprisingly liberating. Every feature request gets filtered through a simple question: can this run entirely in the browser? If the answer is no, the feature does not ship. That constraint eliminated entire categories of complexity (user accounts, session management, data synchronization, server provisioning) and left behind two tools that do exactly one thing each, do it well, and cannot leak data because there is nowhere for the data to go.\nThe air-gapped requirement, which initially felt like a limitation, turned out to be the best architectural decision the project never had to make. It was given to us by the environment, and everything good about the design flows from accepting it rather than working around it.\nFor anyone building internal tools in regulated environments, the pattern is worth considering: static files, client-side processing, documented export schemas, and the browser as your only runtime. The infrastructure cost is a GitLab repo with Pages enabled. The maintenance burden is close to zero. 
And the compliance story writes itself, because the data never leaves the machine it was entered on.\n","permalink":"https://universalamateur.gitlab.io/post/building-an-air-gapped-pricing-calculator-with-zero-backend/","summary":"Two static HTML apps deployed to GitLab Pages, no server, no database, handling enterprise pricing workflows where data cannot leave the browser.","title":"Building an Air-Gapped Pricing Calculator with Zero Backend"},{"content":"Standing in the Maritim Hotel in Ingolstadt on a Friday around noon, with 300 automotive executives settling into the post-lunch energy dip, I had exactly 15 minutes to make a point about AI governance. The Car IT Symposium is one of those events where the German automotive industry sends its engineering leadership to hear about technology trends, and my silver sponsor slot fell right into the window where coffee is wearing off and lunch is pulling people toward their chairs with gravitational certainty.\nThe talk was titled \u0026ldquo;Your Models, Your Rules: Who Really Controls AI in Your Engineering Process?\u0026rdquo; which, for a 15-minute slot, is already doing a lot of heavy lifting in the title alone.\nThe Bridge That Wrote Itself What made the slot work was timing I could not have planned. The talk immediately before mine was from Synopsys, presenting their AI-powered code scanning capabilities, showing the audience what modern static analysis can find in automotive codebases. Walking up to the lectern with that context still fresh in the room, I opened with a single reframing: \u0026ldquo;The talk you just saw showed you what AI can find in your code. I am here to ask: who controls what happens next?\u0026rdquo;\nThat bridge, from detection to governance, landed better than anything else I said that day. The room shifted. 
You could see it in the body language, the way people who had been leaning back after a solid Synopsys demo suddenly sat forward, because the question implied that finding problems is only half the story.\nThree Questions for Monday Morning Rather than walking through architecture diagrams or product demos, I structured the entire talk around three questions that every automotive engineering leader should be asking their teams.\nWho controls the model? Can you swap Claude for Mistral tomorrow if your compliance team requires it? Can you move from a cloud provider to on-prem with Ollama when a customer\u0026rsquo;s data residency requirements change? If the answer is \u0026ldquo;not without six months of re-integration,\u0026rdquo; you have a vendor lock-in problem disguised as an AI strategy.\nWho sets the rules? Where are the guardrails defined, who can change them, and are they auditable? In an industry that lives by ASPICE process assessments and ISO 26262 functional safety standards, \u0026ldquo;the AI team configured it\u0026rdquo; is not an acceptable answer to an auditor.\nWho sees the bill? What happens when 500 engineers across your organization start burning tokens with no visibility into consumption, no cost attribution per team, and no way to correlate spend with output? The financial governance gap in enterprise AI adoption is, in my experience, consistently the one that catches leadership off guard first.\nWhy It Resonated The automotive world is not Silicon Valley. These are people who write software that goes into vehicles carrying human lives, governed by MISRA C coding standards and functional safety processes that predate the current AI wave by decades. Their supply chains span 40 or more tier-1 suppliers, each with their own toolchains, their own compliance requirements, their own interpretation of what \u0026ldquo;secure\u0026rdquo; means. 
And with the EU AI Act enforcement coming in August 2026, regulatory pressure on AI governance is not theoretical for this audience. It is a line item on their project plans.\nThe \u0026ldquo;governance first\u0026rdquo; framing resonated precisely because these people already think in governance terms. They do not need to be convinced that oversight matters. What they needed, and what I think the talk delivered, was a simple vocabulary for the specific oversight gaps that AI introduces into their existing processes.\nThe gap between Silicon Valley AI enthusiasm and German industrial reality was visible in the room. Several hallway conversations afterward circled around the same theme: \u0026ldquo;We know we need AI. We do not know how to govern it within the frameworks we already have.\u0026rdquo;\nWhat Travels Looking back, the thing that worked was not the content itself but the format. Three questions, each expressible in a single sentence, each pointing at a governance gap that is easy to verify (\u0026ldquo;go ask your team, and if they cannot answer in 30 seconds, you have found something to fix\u0026rdquo;). The framework is simple enough to repeat in a hallway, memorable enough to survive the drive back to Munich or Stuttgart, and actionable enough that a CTO hearing it secondhand on Monday morning can do something with it.\nThat was the goal: give 300 executives something they would say to their CTO the following week. 
\u0026ldquo;Have you asked the three questions?\u0026rdquo; If even a handful of those conversations happened, the 15 minutes were worth it.\n","permalink":"https://universalamateur.gitlab.io/post/what-i-learned-speaking-about-ai-at-an-automotive-conference/","summary":"What happens when you present AI governance to 300 automotive executives on a Friday afternoon in Ingolstadt.","title":"What I Learned Speaking About AI at an Automotive Conference"},{"content":"The convergence nobody planned Having watched a dozen enterprise teams adopt AI coding agents over the past year, I can tell you the bottleneck has moved. It is no longer writing code. It is reviewing, governing, and shipping the code that agents produce. And the artifact sitting at the center of that bottleneck, the one every tool eventually converges on, is the merge request.\nThink about the landscape for a moment. Claude Code, OpenAI Codex, Cursor, Devin, KiloCode, OpenCode. Every one of these tools, when it finishes its work, produces the same thing: a proposed change to source code, packaged as a merge request or pull request, submitted for validation and review. Nobody designed this convergence. It happened because the merge request is the one artifact in the software development lifecycle where intent, execution, review, governance, and delivery all meet.\nYou do not need to win the IDE war. IDEs rotate every 18 months, and the current generation of AI-native editors will be no exception. You do not need to win the agent war either, because agents are becoming commoditized faster than anyone predicted. What matters is making the merge request the best possible destination for every change, from every source, validated and governed to enterprise standards.\nNot issues. Not epics. Not planning boards. The merge request is where the actual work crystallizes into something reviewable, testable, and deployable. Planning artifacts come and go. 
The merge request is the moment of truth.\nToday\u0026rsquo;s merge request is a diff viewer with a comment thread. Tomorrow\u0026rsquo;s needs to be a workspace.\nThe center is hollow The merge request has not changed in principle since GitHub launched pull requests in 2010. Scott Chacon, GitHub\u0026rsquo;s co-founder, now building GitButler, put it plainly: \u0026ldquo;The Pull Request has not only hardly changed in principle since we launched it 15 years ago, but worse than that, nobody else has innovated beyond it since.\u0026rdquo;\nThat was tolerable when humans wrote all the code. It is breaking now, because the volume of proposed changes is climbing while the review surface stays the same size.\nThe evidence is not anecdotal. CircleCI analyzed 28.7 million CI/CD workflows and found something that should worry every engineering leader: feature branch throughput is up 15%, but main branch throughput, the code that actually ships, is down 7%. Teams can write faster. They cannot ship faster. One in seven pull requests now involves an AI agent as author or co-author, and the review process was never built for that volume or that authorship model.\nFive things the merge request is missing Sitting with customers, running through their agentic workflows, I keep finding the same five gaps. None of them are exotic. All of them are blocking adoption at scale.\nIt does not show why. A reviewer opens an agent-authored merge request and sees a diff. The reasoning, the dead ends, the architectural trade-offs that led to this particular implementation are all gone. The agent session that produced the code is ephemeral, and when it ends, the context dies with it. The reviewer is left to reconstruct intent from output, which is the most expensive kind of code review.\nIt does not show what it means. The merge request has a handful of hardcoded widget types (CI status, test reports, security scans). 
Customers routinely work around this limitation by faking report types, because the merge request cannot display arbitrary structured data. When your AI agent produces a compliance attestation or a cost-impact analysis alongside the diff, there is nowhere to put it.\nIt does not enforce safety for AI-authored code. When an agent authors a change on behalf of a developer, the requesting developer can still approve their own merge request, because the system treats the AI as the author. This breaks separation of duties, and for regulated industries (finance, healthcare, defense), it is a compliance gap that blocks adoption entirely.\nIt does not connect to intent. Specs, architecture decisions, threat models exist somewhere upstream. The merge request has a \u0026ldquo;closes #123\u0026rdquo; link and nothing else. For an agent-authored change, the gap between what was requested and what was delivered is wider than it has ever been, and the merge request offers no way to bridge it.\nIt does not persist. Agent sessions are ephemeral by design. When a session ends, the working context evaporates. Another agent, or the same agent in a new session, starts from scratch. The merge request should be a resumable workspace where any actor (human or agent) can pick up where the last one left off. Instead it is a snapshot.\nThe competitive landscape is moving These gaps are not invisible to the market. Cursor acquired Graphite, merging an AI-native IDE with stacked pull requests and AI code review into a single integrated experience. GitHub is building multi-agent governance with policy-as-code, treating the pull request as an orchestration surface for autonomous agents. GitButler is rethinking the review unit entirely, moving from branch-based diffs to patch-based review, which is a more natural model when changes come from multiple agents working in parallel.\nNobody has assembled the full vision end to end. 
The pieces exist across half a dozen products, each solving one or two of the five gaps while leaving the rest open. Whoever stitches together reasoning traces, extensible widgets, AI-aware governance, intent linkage, and persistent workspaces inside the merge request will own the integration point for the entire agentic SDLC. That is the race, and it is happening now, not in some abstract future roadmap.\nWhat this means if you are adopting AI coding tools today Having spent the better part of this year helping teams operationalize agentic workflows, I keep arriving at the same practical advice. Do not optimize for the agent. Do not optimize for the IDE. Optimize for the merge request, because it is the only artifact that survives when the tooling around it inevitably changes.\nConcretely, that means investing in review capacity (humans and automated), not just generation capacity. It means treating your merge request approval policies as governance contracts, not bureaucratic hurdles. It means demanding that every AI tool you adopt can produce a merge request that meets your compliance bar, not just a diff that compiles. And it means evaluating your DevSecOps platform not by how many agents it can spawn, but by how well its merge request experience absorbs the output of agents you have not even adopted yet.\nThe merge request is already the center of everything. The question is whether we upgrade it to carry the weight, or watch the bottleneck tighten until the productivity gains from AI coding tools show up only on dashboards, never in production.\n","permalink":"https://universalamateur.gitlab.io/post/the-merge-request-is-the-center-of-everything/","summary":"IDEs change every 18 months. Agents are disposable. 
The merge request is the one artifact that survives.","title":"The Merge Request Is the Center of Everything"},{"content":"Having the same conversation for the fourth time in a month, with a different enterprise customer each time, I have accepted that this is now a pattern and not a coincidence. The conversation goes like this: an engineering organization adopts an AI coding tool (GitHub Copilot, Cursor, Claude Code, pick your favorite), celebrates a quarter of productivity gains, and then someone from finance walks into a room and asks a question that nobody can answer. \u0026ldquo;What are we spending on this?\u0026rdquo;\nThe excitement phase is real. Developers are faster. Pull requests move quicker. DORA metrics improve. I do not dispute any of that. What I dispute is the assumption that billing and cost governance will sort themselves out. They will not. They never do. And the gap between \u0026ldquo;this tool is great\u0026rdquo; and \u0026ldquo;we know what it costs per team per quarter\u0026rdquo; is where I keep finding organizations stuck, usually around the 90-day mark.\nThree tiers of billing maturity I found early in my conversations that the maturity of an organization\u0026rsquo;s AI billing posture falls into one of three tiers, each with its own failure mode.\nTier 1: Included credits (the trap) Credits bundled with the subscription. The most dangerous of the three, precisely because it feels free. Nobody tracks consumption because there is no line item to track. The monthly invoice looks the same whether 50 developers are using the tool or 500.\nThen one quarter, usage spikes 400% because someone enabled agentic workflows and the agents are burning tokens at a rate no human developer could match. The CTO discovers that the \u0026ldquo;included\u0026rdquo; credits ran out two months ago and overage charges have been accruing silently. Zero visibility breeds zero governance. If you cannot see the meter, you cannot manage the spend. 
The trap is not that included credits are expensive. The trap is that they are invisible.\nTier 2: On-demand usage (the panic) Pay-per-use with no commitment. Full visibility into spend, which sounds like progress until you realize it comes with zero predictability. Finance cannot forecast quarterly costs because consumption fluctuates with sprint intensity, team headcount changes, and whether someone left an agent running over the weekend.\nEvery sprint review becomes a budget conversation. Engineering managers start self-throttling AI usage to stay within informal limits that nobody formally set, which defeats the entire purpose of adopting AI tools in the first place. The organization bought acceleration and then applied the brakes because the financial governance was not there to absorb variable spend.\nThis is the phase where I see the most organizational friction, because the people who see the bill (finance) and the people who generate the bill (engineering) have no shared framework for resolving the tension.\nTier 3: Pre-commitment with consumption tracking (the goal) Committed credit pools, analogous to reserved instances in cloud infrastructure, provide the revenue floor that finance needs for forecasting. Consumption tracking layered on top provides the per-team, per-project, per-workflow visibility that engineering leadership needs for accountability. Spend caps and alerts provide the governance rails that prevent any single team or runaway agent from consuming the entire pool.\nSnowflake proved this model works at scale, sustaining 127% net revenue retention with a consumption-based pricing model that enterprises learned to budget around. AWS Bedrock is moving agent infrastructure in the same direction. The pattern is established. What is missing in most enterprises is the internal machinery to operate within it.\nThis is where every organization needs to get to. 
Almost none are there today.\nThe governance gap that arrives before the security gap Here is what surprises people: the billing governance gap catches leadership off guard before the security gap does. Not because security does not matter (it does, enormously), but because security gets budget, attention, a dedicated team, and executive sponsorship from day one. Cost attribution gets a spreadsheet.\nWhen 500 engineers start burning tokens with no cost attribution per team, the financial governance gap surfaces within weeks. \u0026ldquo;Who sees the bill?\u0026rdquo; is the simplest governance question an organization can ask about its AI tooling, and it is consistently the one engineering leadership can least answer, because the billing infrastructure was designed for seat-based licensing and nobody rebuilt it for consumption-based pricing.\nThe shift from seat-based to consumption-based pricing is happening across the entire DevSecOps industry, and the number of consumption-based SKUs is proliferating faster than billing infrastructure can support them. Code completion, chat, agentic workflows, autonomous agents, each carries its own token economics. Organizations whose internal financial governance assumes a per-seat world are the ones getting caught.\nSelf-managed environments have it worse For organizations running self-managed or air-gapped deployments, the billing visibility problem compounds. There is no cloud-side metering automatically tracking consumption. Billing depends on periodic usage reports, persistent telemetry channels, or manual aggregation. The data needed for cost attribution either arrives late, arrives incomplete, or does not arrive at all.\nSOX compliance requirements add another layer that most teams underestimate. Any revenue-touching billing system, which consumption-based AI pricing absolutely is, must meet audit and control standards that a spreadsheet-based process cannot satisfy. 
The gap between \u0026ldquo;we track this in a sheet\u0026rdquo; and \u0026ldquo;this is SOX-compliant\u0026rdquo; is measured in months of work and significant tooling investment.\nWhat to do about it The FinOps discipline, born from cloud infrastructure cost management, is the closest existing framework. The core principle transfers directly: make cost visible, allocate it to the teams that generate it, create feedback loops so that engineering decisions carry financial context. FinOps practices for cloud spend optimization are the starting point, not the destination, for AI tool cost governance.\nBefore you adopt the next AI coding tool, ask your team one question: \u0026ldquo;Who sees the bill?\u0026rdquo; If they cannot answer in 30 seconds, you have found something to fix. Not next quarter. Not when finance asks. Now, while the spend is still small enough to govern and before the agentic workflows turn the consumption curve into something nobody budgeted for.\n","permalink":"https://universalamateur.gitlab.io/post/the-billing-problem-nobody-talks-about/","summary":"Every enterprise adopting AI coding tools hits the same wall within 90 days: nobody knows what it costs.","title":"The Billing Problem Nobody Talks About"},{"content":"Running four concurrent AI coding agent sessions on a Monday morning, with each one touching different repositories, I discovered something that should have been obvious from the start: the configuration file matters more than the model. Not the system prompt. Not the temperature. The file that tells the agent where the fences are.\nOver the past year I have iterated on this configuration almost daily, working with Claude Code sessions that run in parallel, each scoped to a different task. The file that governs their behavior, an AGENTS.md (or CLAUDE.md, depending on your tooling), is not documentation. 
It is a contract between you and your agent, and the difference between help and damage comes down to whether that contract exists.\nPermission boundaries The first pattern, and the one most people skip. Every tool an agent can access falls into two categories: read-only operations that are inherently safe, and write operations that need a gate.\nReading a file, searching a codebase, querying a database with a read-only role, these the agent should execute without asking. Pushing to a git remote, posting a Slack message, creating a merge request, commenting on an issue, these are actions visible to other humans. Making them auto-execute is handing the agent a megaphone with no off switch.\nThe boundary is simple: anything that changes state outside the agent\u0026rsquo;s local session requires explicit confirmation. \u0026ldquo;Never push to main without asking. Never send a Slack message without approval. Never create external comments without review.\u0026rdquo; Three lines that prevent three categories of incident.\nCredential scoping Different tasks need different access levels. A research session pulling documentation does not need write access to production. A documentation session does not need Slack credentials. A CI debugging session does not need customer data.\nScoping credentials to the task rather than the tool is OWASP least-privilege, applied to a non-human actor. Withhold everything the agent does not need for the specific task at hand. It will not complain. It will simply not have the opportunity to misuse what it does not have.\nWrite gates This is the most important pattern. A write gate interposes a human confirmation step before any action visible to the outside world. 
Git pushes, Slack messages, GitLab comments, pull request creation, issue updates, all crossing the boundary from \u0026ldquo;local work\u0026rdquo; to \u0026ldquo;public artifact.\u0026rdquo; Every one should require you to see what the agent is about to do and say yes.\nAI agents are confidently wrong at exactly the rate you would expect from a system with no concept of embarrassment. An agent will draft a perfectly formatted Slack message to a customer channel containing an internal project identifier, with the same confidence it uses to write a correct unit test. The write gate is the only thing between that draft and your reputation.\nSafety hooks Pre-commit hooks are the last line of defense, catching problems at the boundary between the agent\u0026rsquo;s workspace and the shared repository. A minimal set checks for three things.\nFirst, secrets. API keys, tokens, passwords, connection strings. Tools like detect-secrets or gitleaks run in milliseconds and catch the credentials that agents will happily commit alongside application code.\nSecond, file exclusions. Your AGENTS.md itself, .env files, credential stores, anything that should never leave the machine. A simple pattern list prevents the agent from committing its own instruction set into the shared repository.\nThird, content validation. Customer names, internal identifiers, proprietary labels. A regex-based scan against a deny list catches the leaks that are invisible until someone outside your team reads the commit history.\nMulti-session coordination When you run multiple agents in parallel, a new class of problems appears. Two agents editing the same file. One agent\u0026rsquo;s changes invalidating another\u0026rsquo;s assumptions. Shared documents growing stale because three sessions are appending without reading.\nThe coordination patterns I have settled on use three mechanisms. 
A shared knowledge graph via the Model Context Protocol that lets sessions register their scope, so conflicts surface before they cause damage. File-level advisory signals, where an agent declares what it intends to modify before starting. And append-only conventions for shared files, where agents add to the end rather than editing in place. Advisory locking and append-only logging, applied to agents.\nWhat happens without these Here is what an unconfigured agent did in a single session before I learned these patterns. It pushed directly to main, bypassing review. It sent a Slack message to a customer-facing channel containing an internal project number. And it committed a file with API credentials to a shared repository. Three incidents, one session, all preventable with 20 lines of configuration.\nThe model was not the problem. The model was doing what it was designed to do: complete the task. The problem was that nobody told it where the boundaries were.\nA starter template For anyone running Claude Code or a similar agent, here is a minimal starter AGENTS.md:\n# AGENTS.md - Minimal Safety Configuration # Permission boundaries # Read operations: auto-execute. Write operations: require confirmation. # Never push to main/master without explicit approval. # Never send messages to external platforms without review. # Credential scoping # Research sessions: read-only access. No write credentials loaded. # CI sessions: repo-scoped tokens only. No Slack, no customer data. # Write gates # All git push, Slack, GitLab comment, and PR actions: confirm first. # Draft all external-facing content for review before sending. # Safety hooks # Pre-commit: scan for secrets (gitleaks), deny .env and AGENTS.md. # Pre-commit: regex deny-list for customer names and internal IDs. # Multi-session coordination # Register session scope on start. Check for conflicts before editing. # Shared files (journals, status): append-only. Never overwrite. 
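As a concrete illustration of the safety-hook patterns above, here is a minimal sketch of the content-validation and file-exclusion checks. The deny patterns, blocked filenames, and function name are hypothetical examples for this sketch, not a drop-in pre-commit hook:

```python
import re

# Hypothetical deny list: internal identifiers and customer names.
DENY_PATTERNS = [
    re.compile(r"PROJ-\d{4}"),          # internal project identifiers
    re.compile(r"acme\s*corp", re.I),   # customer names
]

# Files that should never leave the machine.
BLOCKED_FILES = {"AGENTS.md", ".env"}

def check_commit(staged):
    """Return violations for staged files; `staged` maps filename -> content."""
    violations = []
    for name, content in staged.items():
        if name in BLOCKED_FILES:
            violations.append(f"{name}: file is on the exclusion list")
            continue  # no need to scan a file that is blocked outright
        for pattern in DENY_PATTERNS:
            if pattern.search(content):
                violations.append(f"{name}: matches deny pattern {pattern.pattern}")
    return violations
```

Wired into a pre-commit hook alongside a secrets scanner like gitleaks, a non-empty violation list blocks the commit before the leak reaches the shared repository.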
The configuration is short because the patterns are simple. The discipline is in writing them down before the first session starts, not after the first incident. The model brings capability. You bring the guardrails. Twenty lines, written once, is the difference between an agent that accelerates your work and one that sends your internal project numbers to a customer channel on a Tuesday afternoon.\n","permalink":"https://universalamateur.gitlab.io/post/configuring-ai-agents-that-dont-embarrass-you/","summary":"What goes into an AGENTS.md, why safety hooks matter, and what happens when you skip them.","title":"Configuring AI Agents That Don't Embarrass You"},{"content":"How TeamOps principles foster discontinuous innovation at GitLab TeamOps, as an Operating Model, emphasizes the importance of Decision Velocity and Agency. Discontinuous Innovation is not a small evolutionary step and is difficult to predict. TeamOps fosters Discontinuous Innovation by encouraging 2-way-door Decisions. GitLab encourages everyone to question and propose alternatives to past decisions. Explicit Opportunity for undistracted incubation is another aspect of the GitLab way with TeamOps. TeamOps allows GitLab to benefit from both high decision velocity and market-changing disruptive innovations. What is TeamOps? TeamOps is an Operating Model that aims to help teams and organizations make greater progress by focusing on how team members relate and work together.\nIt is based on four guiding principles:\nValuing Results transparently, Creating a Shared Reality, Enabling Everyone to Contribute, and Maximizing Decision Velocity. These principles are supported by a set of action tenets that provide guidance on how to put them into practice.\nTeamOps emphasizes the importance of transparency, flexibility, and empowerment in order to create a more efficient, productive, and satisfying work environment. 
It is based on the belief that everyone should have the opportunity to contribute and that organizations should focus on measuring results rather than inputs such as hours worked.\nBy adopting the principles and action tenets of TeamOps, organizations can create a culture of collaboration, innovation, and continuous improvement that helps them achieve their goals and objectives more effectively.\nIt has been developed and tested at GitLab, and is now being offered to other organizations as a way to improve their operations.\nWhat is discontinuous innovation? Discontinuous innovation, also known as disruptive innovation, refers to a type of innovation that creates a new market or significantly changes an existing market by introducing a product or service that is significantly better or cheaper than what is currently available.\nIt often involves the development of a new technology or business model that disrupts the existing market, making it difficult for incumbent firms to compete.\nDiscontinuous innovation can be difficult to predict and risky for organizations to pursue, as it often involves significant changes to existing business models and technologies.\nHowever, it can also lead to significant growth and success for companies that are able to successfully introduce disruptive products or services to the market.\nWhy does discontinuous innovation need fostering in TeamOps? TeamOps, with its action tenets, accelerates your organization\u0026rsquo;s Decision Velocity tremendously.\nDecisions focused on the smallest scope at the lowest level, stepping through evolutionary iterations driven by a designated Directly Responsible Individual (DRI), lead to the fittest solution surviving as a Minimal Viable Change (MVC).\nThese countless small resolutions keep your organization agile and its change velocity high.\nEveryone in your organization should feel free to disagree with a decision, provided they commit to it and act according to the resolution. 
A collaborative decision, following the TeamOps model, does not need consensus, but can always be questioned, iterated upon or even reverted. Treating every decision as a 2-way door, rolling back an iteration, and implementing the better solution is the rule, enabling blameless problem solving.\nThe high decision velocity empowers your teams and the whole organization to adapt and get more efficient.\nThis leads to more focus on small agile iterations than on big, game-changing ideas in your teams.\nEmpowered and involved in the organization, your team members will strive to continuously make their day-to-day tasks and products more efficient as well as remove waste.\nTo stay ahead of the curve and be competitive in the market, your organization also needs to continuously look for new and innovative ways to solve problems and create value.\nIn addition to a high decision velocity, an organization can stay relevant and adapt to changing market conditions by incubating disruptive innovations.\nHow does GitLab foster discontinuous innovation with TeamOps? GitLab facilitates discontinuous innovations in several ways.\nEverything can be questioned GitLab’s sub value “Disagree, commit, and disagree”, in the “Results” section of the six “CREDIT” core values, implements the reversibility of every decision, allowing it to be replaced with a better one. Every team member can propose a better solution for a past decision and expect valid feedback plus the possibility for implementation.\nA great example of this is the transformation of GitLab itself into one platform instead of many tools.\nIn 2016 Kamil Trzciński, at the time a fresh full member of the GitLab team, approached GitLab’s co-founder Dimitri Zaporozhets with the idea to fuse GitLab Source Code Management (SCM) and Continuous Integration (CI) into one DevOps tool and was rejected many times.\nThe tools should stay lean and simple. 
A fusion would create a complicated monolith.\nWhile committing to this decision and contributing to the source code of both tools, Kamil kept refining the idea of a common tool with all the synergies and advantages an all-encompassing DevOps platform offers for the Software Development Life Cycle (SDLC).\nIn the end, Dimitri and his co-founder, Sid Sijbrandij, saw the value in the direction Kamil had proposed. You can listen to the story told by Kamil himself here.\nKamil’s discontinuous innovation, a departure from the many small tools then prevailing in the DevOps sector, led GitLab to hold the position of the one DevSecOps platform, reducing cost, effort and complexity for all its customers.\nExplicit room for long-term bets To expand GitLab’s Serviceable Addressable Market (SAM) into areas that fit within GitLab\u0026rsquo;s company mission but are currently not serviced, the Incubation Engineering Department was founded.\nIn this division, Single-engineer Groups move fast, ship, get feedback, and iterate on specific ideas to draw benefits, but not distractions, from within the larger organization.\nThe established Software Demo Process facilitates asynchronous collaboration, iterative feedback, and minimal alignment with the rest of GitLab while keeping the autonomy of this incubator.\nWith TeamOps as the operating model at GitLab, this extension of the organization has led, and will continue to lead, to disruptive innovations in the product, for example ⛅🌱 Cloud Seed and OKR Management.\nConclusion Fostering discontinuous innovation is important for GitLab, because it will lead to significant competitive advantages and help it stay ahead of the curve in the DevSecOps sector.\nWith TeamOps as the operational model in GitLab, fostering discontinuous innovation is supported by several principles and action tenets.\nFirst, the principle of enabling everyone to contribute encourages a diverse range of perspectives and ideas, which can be essential for generating new and innovative solutions. 
By providing equal opportunities for all team members to contribute, regardless of level, function, or location, organizations can create an environment that is more conducive to innovative thinking.\nSecond, the principle of maximizing decision velocity helps organizations act quickly on new ideas, rather than getting bogged down in lengthy decision-making processes. This can be especially important in the fast-moving software industry, where time is of the essence.\nThird, the sub value of “Disagree, commit, and disagree” encourages team members to ideate and propose better solutions for decisions taken in the past. This empowers everyone to come up with a disruptive innovation for a problem that was not solved in the best way.\nFinally, the principle of transparent measurement helps organizations focus on results rather than inputs, which can encourage a focus on innovation rather than simply meeting short-term targets. By measuring and rewarding results rather than the number of hours worked or other inputs, organizations can create an environment that encourages team members to take risks and try new things.\nAt GitLab, these principles are supported by a number of action tenets, including asynchronous workflows, which allow team members to collaborate and contribute on their own time, and Directly Responsible Individuals (DRIs), who are empowered to make decisions and take ownership of projects.\nGitLab also emphasizes the importance of iteration and encourages team members to break down complex problems into smaller, more manageable pieces that can be tackled one step at a time.\nOverall, the principles and action tenets of TeamOps can create an environment that supports and fosters discontinuous innovation by empowering team members to take ownership of their work, encouraging creativity and risk-taking, and focusing on continuous iteration and learning.\nFree Training and Certification - Your start into TeamOps If you are interested in learning more 
about TeamOps and how to implement this operating model, sign up for the free TeamOps Certification by GitLab.\n","permalink":"https://universalamateur.gitlab.io/post/how-teamops-principles-foster-discontinuous-innovation-at-gitlab/","summary":"How GitLab uses TeamOps to foster discontinuous innovation through decision velocity and empowerment.","title":"How TeamOps Principles Foster Discontinuous Innovation at GitLab"},{"content":"My personal blog Universalamateur.net powered by Hugo on GitLab Pages Here I describe the initial setup of my personal blog powered by the static site generator Hugo using GitLab Pages.\nSettings in GitLab Naming the project username.gitlab.io, in my case Universalamateur.gitlab.io, makes the page available under the domain Universalamateur.gitlab.io. Pages is activated in the project. The Pages Access Control List for this public project is set to everyone. Pages deployment for static sites is set up, and everything in the public folder is served to the visitor. Start with Hugo Requirements for local work We install Git and Hugo, then pull the repo if it exists, otherwise clone it. 
Then in the repo directory we update the git submodule, the Ananke theme for Hugo.\nMacOS M1 brew install hugo git git -C Universalamateur.gitlab.io pull || git clone https://gitlab.com/Universalamateur/Universalamateur.gitlab.io.git cd Universalamateur.gitlab.io git submodule update --init --recursive Linux x86 sudo apt install git hugo -y git -C Universalamateur.gitlab.io pull || git clone https://gitlab.com/Universalamateur/Universalamateur.gitlab.io.git cd Universalamateur.gitlab.io git submodule update --init --recursive Starting a local Hugo web server hugo server -D -F The -D argument tells Hugo to include draft posts The -F argument tells Hugo to include future-dated posts The initial one-time setup locally was setting up the site and adding the Ananke theme.\nhugo new site Universalamateur.gitlab.io cd Universalamateur.gitlab.io git init git submodule add https://github.com/theNewDynamic/gohugo-theme-ananke themes/ananke echo \u0026#34;theme = \u0026#39;ananke\u0026#39;\u0026#34; \u0026gt;\u0026gt; config.toml hugo server -D Implemented custom theme At the moment using Ananke; in the future, our own theme based on Ananke\nPosting with Hugo Used Front Matter in articles by default --- title: \u0026#34;{{ replace .Name \u0026#34;-\u0026#34; \u0026#34; \u0026#34; | title }}\u0026#34; date: {{ .Date | time.Format \u0026#34;:date_long\u0026#34; }} draft: true tags: [] featured_image: \u0026#34;\u0026#34; description: \u0026#34;\u0026#34; --- featured image Add the image to the folder /static/images/ include in the Front Matter: featured_image: '/images/NAME_OF_PICTURE.jpg' to hide the header text on the featured image on a page, set in Front Matter omit_header_text: true Implemented Categories . 
├── config.toml ├── archetypes │ ├── default.md // default archetype for all articles │ └── post.md // Declared archetype for posts ├── content │ ├── _index.md // Overwriting the initial Landing page with post listing │ ├── about │ │ └── index.md // Bio and Portfolio Page │ └── post │ ├── Blog-Setup-with-Hugo-on-GitLab.md // A Technical Blog Article │ ├── My-Lasagna-recipe.md // Post with recipes │ └── The-Initial-Post.md // and other Articles ├── static │ └── images // Folder for prepared images │ └── SunDown.jpg Archetypes Blog Post Creation Commands hugo new General/NAME-OF-POST.md Ideas for the future Used tags parser Write a Python parser for a scheduled pipeline, which commits back into a markdown file the used tags and the number of pages using those tags, plus links.\n","permalink":"https://universalamateur.gitlab.io/post/blog-setup-with-hugo-on-gitlab/","summary":"Setting up a personal blog with Hugo and GitLab Pages.","title":"Blog Setup With Hugo on GitLab"},{"content":"The recipe for my favorite Lasagne Here it comes!\n","permalink":"https://universalamateur.gitlab.io/post/my-lasagna-recipe/","summary":"The recipe for my favorite lasagne.","title":"My Lasagna Recipe"},{"content":"Who am I? Welcome to my blog! My name is Falko Sieverding and I am excited to share my thoughts, experiences, and insights with you. On this blog, you will find a wide range of topics that I am passionate about, including DevSecOps/DevOps, CI/CD, Boardgames, Team Leadership and Productivity tips. Sometimes a food recipe or book review might sneak in.\nWhether you are looking for career advice in Customer Success in the IT sector, DevSecOps related technical tutorials, or simply a place to connect with like-minded individuals, I hope that you will find something of value here. 
Thank you for stopping by and I hope you will join me on this journey as we explore TeamOps, asynchronous work in Customer Success, and quickly enabling DevSecOps in your SDLC together.\nRemember to check back often for new updates and don\u0026rsquo;t hesitate to leave a comment or share your own thoughts and experiences. I look forward to connecting with you!\nWhy a Blog? Creating a blog is a great way to share my resume, opinions, technical tutorials, and portfolio with a wider audience. By having a single source of truth for all of this information, I can more easily control how I present myself to others and ensure that the information available about me is accurate and up-to-date. In addition, a blog can be a great way to connect with like-minded individuals and build a community of followers who are interested in the same topics that I am passionate about. I hope this will be a rewarding and fulfilling experience.\nWhat do I hope to achieve through my blog? My first goal is to document my achievements and enable others to learn from them. This shall also be a collection for myself, to remember later how I have executed specific tasks to reach my goals.\nThis personal blog is hosted on GitLab Pages The source code of this site is hosted in this repository. From there it deploys to GitLab Pages.\nOne of my first posts will be the documentation of this process.\n","permalink":"https://universalamateur.gitlab.io/post/the-initial-post/","summary":"Hello world, why this blog exists, and what to expect.","title":"The Initial Post"},{"content":"AI Solutions Architect | Düsseldorf, Germany\nIn 1979, not only did Pink Floyd release \u0026ldquo;The Wall\u0026rdquo; and Sony the Walkman, but my parents, a math/informatics teacher married to a kindergarten teacher, welcomed me on the second-to-last day of May. 
Born and raised in the Lower Rhine region of Germany, a bicycle ride away from the green border with the Netherlands, I found my passion for water sports and board/roleplay games early in life. In 1988 my father handed down his Commodore 64 home computer to me, and connected to our old color TV it became the magnet for my attention from then on. Many hours were spent loading Mafia from the Datasette and playing hot seat with four friends, noses inches from the buzzing TV screen. With my Abitur in 1998 and Zivildienst (as an orderly for the elderly) done, I studied electrical engineering at RWTH Aachen. Earning my livelihood with different side jobs (lifeguard at the local pool, flipping burgers on the night shift, building and repairing PCs), I worked my way through every layer of the IT stack across 20+ years, from system administrator at a mechanical engineering institute to leading international teams, from the Euregio to Athens and back. Today, as GitLab\u0026rsquo;s subject matter expert for the Duo Agent Platform, I build the systems that let AI agents ship code in regulated industries.\nWhat I\u0026rsquo;m Building Deploying autonomous AI agents in environments where nothing leaves the building, for automotive OEMs, banks and defense contractors Designing zero-trust runtime architectures for autonomous coding agents and governed MCP workflows, solving the security and rollout problems that keep AI-assisted development locked to early adopters instead of reaching every builder in the SDLC Building the governance layer for enterprise AI adoption, credit dashboards and cost controls so organizations can scale agents without losing oversight Speaking I speak regularly at conferences, workshops and roundtables on AI engineering, AI sovereignty, AI governance and the transformation of the software development lifecycle through agentic AI, both on stage and virtual.\nSelected:\nCar IT Symposium 2026, Ingolstadt, \u0026ldquo;Your Models, Your Rules: Who Really Controls AI in 
Your Engineering Process?\u0026rdquo; (speech + panel) Bitkom Lenkungsausschuss Software 2026, Berlin, \u0026ldquo;Agenten statt Assistenten\u0026rdquo; (speech + roundtable) 2Hero/Cillers DevSecOps Flow Hackathon 2026, Chas Academy Stockholm, workshop lead + live credit dashboard (60+ participants) GitLab Epic Conference 2025, London + Paris, conference presentations GitLab \u0026amp; Google Cloud Fireside Chat 2024, \u0026ldquo;AI-powered DevSecOps\u0026rdquo; (moderator) Recurring AI agent and MCP integration workshops for automotive OEMs and enterprise customers (2025-2026) Career 2024-now AI Solutions Architect, GitLab — Duo Agent Platform SME, SA Center of Excellence 2022-2024 Customer Success Manager → Solutions Architect, GitLab 2022 Customer Success Manager, Code Intelligence (Bonn) 2021-2022 Team Lead Solution Engineering, Fieldcode (Athens) 2019-2021 Technical Support Manager, Fleet Complete (Athens) 2015-2019 IT Support Manager, evosec (Germany) 2005-2015 System Admin → Team Lead, RWTH Aachen 1999-2004 B.Sc. Electrical Engineering, RWTH Aachen Beyond the Terminal Next to every job I ever held, there was always a parallel life in board and roleplay games. I wrote the 12-part Arkham Horror \u0026ldquo;Szenarien-Massaker\u0026rdquo; campaign for Mephisto magazine, articles for GamesOrbit, created and moderated the Söhne Sigmars podcast about Warhammer Fantasy Roleplay, and co-hosted MetaArena, a German-language Android: Netrunner podcast with 50+ episodes.\nWhen Netrunner took over my gaming life, I founded the Euregio Meisterschaften cross-border tournament series, bringing players from Germany, the Netherlands and Belgium together in Aachen and Essen. TeamworkCast streamed our events and somewhere along the way I ended up commentating matches on YouTube, earning nicknames like \u0026ldquo;the honey badger of Netrunner.\u0026rdquo; My signature deck \u0026ldquo;The Finger!\u0026rdquo; went 27-3 across three Store Championships. 
From 2015 to 2017, working 30 hours a week alongside my day job, I tried making a living in the board game industry with LudiCreations, running crowdfunding campaigns and doing game design. The money was not enough, but the experience was everything.\nSince 2013 I have been chairman of the Oecher Meeples, a board game club in Aachen. For 24 years I was a fitness trainer at SC Delphin Geldern, the swimming club in my hometown. A universal amateur in every sense.\nSome of My Favorite Things In my humble opinion you get a good picture of another human being and perhaps some shared interests by knowing their favorite things.\nBooks The Hitchhiker\u0026rsquo;s Guide to the Galaxy - Douglas Adams Cryptonomicon - Neal Stephenson Going Postal - Terry Pratchett Board Games Saint Petersburg (2004) - Bernd Brunnhofer Android: Netrunner (2012) - Richard Garfield Imperial (2006) - Mac Gerdts Songs Don\u0026rsquo;t Stop Me Now (1979) - Queen (10) and Counting (2006) - BoySetsFire Family Tree (1996) - H2O universalamateur.gitlab.io | LinkedIn | GitLab | GitHub\n","permalink":"https://universalamateur.gitlab.io/about/","summary":"About Falko Sieverding, the Universalamateur","title":"About"}]