PushBackLog
Roster

Personas

20 fictional practitioners covering the full spectrum of roles in a technology product organisation. Each persona carries a best practices profile across all 12 library categories.

Engagement guide

Working With the Team

The PushBackLog team is the product's first and most demanding client. Before PushBackLog can enforce quality gates on other teams' backlogs, review AI execution runs for other teams' projects, and close the feedback loop between production incidents and properly formed work items, it has to work that way itself. This roster is not assembled as a demo or a showcase; it is the actual squad building a platform whose central claim is that bad tickets cost your team more than bad code, and whose entire value proposition rests on doing delivery right.

That creates a specific kind of team: one with sharp edges, strong convictions, institutional depth, and a particular intolerance for the shortcuts that produce the poor backlog quality PushBackLog exists to end. What follows is an honest account of what that means for anyone engaging with them.

What the Team Brings

The product they're building is their own first use case — and they know it.

It is a confirmed decision that PushBackLog will manage its own continued development through itself as soon as the platform is functional enough to do so. That means every persona on this roster will eventually be represented as a configured AI worker in the system, with enforcement levels drawn directly from the profiles in this directory. Every quality gate the platform enforces on other teams will first have been enforced on a PBL work item. The team does not have the option of building a quality-enforcement tool while operating without quality enforcement. This is not a healthy tension held in theory; it is a structural constraint that shapes how they work every day.

Security is non-negotiable at every level of the stack.

PushBackLog handles tenant data, codebase access via GitHub App installation tokens, AI execution runs in ephemeral Fargate sandboxes, and short-lived credentials that must never be persisted beyond their lifetime. The threat surface is meaningful. The team's response is not a security checklist reviewed at release; it is five practitioners from different disciplines who all independently hold the same four security practices at hard: OWASP Top 10, input validation, secrets management, and least privilege. Dayle sets it in the engineering charter. Sandra enforces it as a management non-negotiable. Marcus builds it into the platform foundation from IAM roles outward. Jacinto runs structured threat model reviews against every feature that touches a trust boundary. Michael designs test cases that verify security intent rather than just functional correctness.

For a product that will be asking enterprise clients to install a GitHub App on their organisation and grant it codebase access, this is not over-engineering. It is the minimum credible position.
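To make the short-lived-credential posture concrete, here is a minimal sketch of minting a scoped GitHub App installation token on demand, assuming the @octokit/auth-app package. The secrets-manager lookup and the specific permission set are illustrative assumptions, not the platform's actual implementation.

```typescript
import { createAppAuth } from "@octokit/auth-app";

// Hypothetical stand-in for a secrets-manager lookup (e.g. AWS Secrets Manager);
// the private key is fetched at runtime, never committed or stored long-lived.
async function loadPrivateKey(): Promise<string> {
  return process.env.GITHUB_APP_PRIVATE_KEY!;
}

// Mint a scoped installation token for a single execution run.
// Installation tokens expire after one hour, so nothing durable exists to leak.
async function installationToken(installationId: number): Promise<string> {
  const auth = createAppAuth({
    appId: process.env.GITHUB_APP_ID!,
    privateKey: await loadPrivateKey(),
  });

  const { token } = await auth({
    type: "installation",
    installationId,
    // Least privilege: request only the permissions this run actually needs.
    permissions: { contents: "read", pull_requests: "write" },
  });
  return token; // held in memory for the run, never persisted
}
```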

Delivery is disciplined because the product depends on it being trusted.

PushBackLog's competitive claim is that it produces better backlog quality than teams manage on their own, and that it autopilots execution of that work to a standard worth merging. That claim falls apart the instant the team ships something vague, unreviewed, or built against acceptance criteria nobody wrote down. Christopher Shank owns the backlog and does not let items proceed without clarity on what "done" means. Sandra Whitfield holds Definition of Done and Definition of Ready as hard requirements. Rebecca Novak's BDD scenarios are binding. Jordon Taylor runs cleaner ceremonies than most teams manage at twice the head count.

This is the team building the quality gates the platform will use. Their internal standards and PBL's configurable quality dimensions are, functionally, the same document.

The institutional memory here is the Best Practices Library in human form.

PushBackLog's Best Practices Library is one of its three strategic assets — a public, freely readable catalog of actionable practices that personas embed as their durable knowledge layer. The library did not emerge from a content strategy meeting. It emerged from a team that includes Michael McCoin, who can reconstruct the reasoning behind TDD, mocking strategy, and the test pyramid from first principles. Marlene Sanchez has built and rebuilt systems through every architectural era since the late 1980s and has a direct, personal failure catalogue for every practice she holds. Ryan Pass has been de-escalating production incidents and customer crises since before client-server was an architectural term.

The library's value to paying tenants is that practices are actionable, justified, and grounded in consequence rather than convention. When Jacinto documents a security principle, or Michael writes a mocking strategy entry that leads with what an incorrect mock makes invisible, they are not summarising published theory — they are encoding hard-won patterns from careers that predate most of the frameworks the library's users are building with.

The platform is built to fail gracefully because its first engineers have been on call.

PushBackLog will run AI execution jobs in ephemeral Fargate containers, broker real-time refinement sessions via Socket.io, process webhook events from GitHub at volume, and surface run results to tenant dashboards with low latency. Marcus Okonkwo builds with written, tested runbooks as baseline hygiene. Dayle holds structured logging and distributed tracing at hard, because she has managed production incidents where their absence turned a thirty-minute diagnosis into a six-hour investigation. The question Marcus asks at every architecture review — who is on call for this when it breaks, and do they have what they need? — is the same question Dayle brings to platform planning. That is the operational posture of a team that builds a platform other teams will depend on.
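As one illustration of what holding structured logging at hard buys during an incident, here is a minimal sketch assuming a pino logger; the field names and correlation scheme are assumptions for illustration, not PBL's actual logging contract. Binding run and trace identifiers once means every subsequent line is correlated, which is the difference between the thirty-minute diagnosis and the six-hour investigation.

```typescript
import pino from "pino";

// One logger per service; `base` fields appear on every emitted line.
const logger = pino({ base: { service: "execution-runner" } });

// A child logger binds the run's correlation fields once, so every line
// it emits can be pulled back with a single query on runId or traceId.
function runLogger(runId: string, traceId: string) {
  return logger.child({ runId, traceId });
}

const log = runLogger("run_8f3a", "trace_c41d");
log.info({ event: "sandbox.start" }, "starting Fargate task");
log.warn({ event: "webhook.retry", attempt: 3 }, "GitHub webhook delivery retried");
```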

The PushBackLog name is not aspirational branding — it is a job description.

Christy Arthur will open a PR review and ask "why is this not tested?" before she asks anything else, and she will not be satisfied with a schedule-based answer. Todd Jensen's first question about any practice is "what problem does this solve, and is that a problem we actually have?", which is the same question PBL's quality gates ask about every work item that tries to enter the refinement queue below the configured threshold. Marlene's architectural pushback is precise: not "this is wrong" but "this breaks when the queue backs up, and here is why." Jacinto does not escalate everything; he escalates what is genuinely exploitable and requires the team to own undeclared risk explicitly. The product claims it will push back on poor quality. The team building it does the same thing, consistently, as a working style rather than a policy.

Where the Friction Lives — and What PushBackLog Does About It

The process overhead is real. It is also the product's point.

Threat model reviews, binding acceptance criteria, QA architectural gates, CI/CD pipeline enforcement, infrastructure review points, and sprint ceremony discipline are individually justified and collectively considerable. Under delivery pressure, this overhead creates a genuine tension: the team either maintains standards and takes the time cost, or compromises them quietly and pays in production or rework later.

This is exactly the pressure that PushBackLog is designed to relieve. As the platform matures toward managing PBL's own backlog through quality gates, AI-assisted refinement, and autopilot execution, it will progressively replace the manual overhead that currently sits between a work item entering the backlog and a persona executing it. The quality enforcement does not disappear; it moves from human bandwidth to a configured system that does not get tired under a deadline. The team's current process overhead is the specification for the platform's automation.

Single points of expertise will narrow as the persona library deepens.

Marcus is the only platform engineer. Jacinto is the only dedicated security specialist. Rebecca is the only designer. Under sprint pressure, the depth of specialist scrutiny applied to any given piece of work is constrained by one person's capacity. This is the most direct argument for the platform's AI persona model: the first production use case for a security-specialist persona trained against Jacinto's enforcement levels, or a design-review persona carrying Rebecca's WCAG 2.1 AA and user-centred design standards, is reviewing PBL's own output.

The plan is explicit: PBL's personas and projects are real production data from day one, and dogfooding begins as soon as the platform is functional enough to support it. Every specialist's knowledge base is being encoded into the Best Practices Library at the same time as the platform is being built. The single-point-of-failure risk is also the clearest possible expression of the problem the product exists to solve.

Generational and stylistic friction produces the library's most durable entries.

The forty-five-year professional experience gap between Christy Arthur and Michael McCoin could produce coordination overhead or intellectual entropy. In practice, it produces something the library cannot generate from a single perspective: a live tension between what the evidence from sixty years of production failures says and what a practitioner building with current tools against a current problem actually needs to know. Todd Jensen's challenge to any practice, "does this solve a problem we actually have?", is the same filter PBL's quality gates will apply to backlog items before they reach refinement. Michael's insistence on asking what failure a test makes visible is the reasoning behind the Library article on mocking strategy. Marlene's architectural conservatism comes from systems she has watched fail in ways that took organisations years to untangle.

The friction here is generative. It is the difference between a Best Practices Library that summarises methodology documents and one that encodes the reasoning behind each practice — which is the only version that earns the trust of a paying tenant who will embed those practices in an AI that executes autonomously on their codebase.

Standards collision under delivery pressure is the failure mode the product makes visible — and then removes.

Christopher's authority as Product Owner will, under sustained scope pressure, come into tension with Michael's quality gates, Jacinto's security reviews, and Randee Hall's acceptance testing rigour. The team has established structures to surface this tension before it compounds silently. But those structures depend on bandwidth that is already committed to active delivery.

This is precisely the collision that per-tenant quality gates and configurable enforcement modes are designed to handle. When PBL's own backlog items move through configured quality dimensions — title clarity, description completeness, acceptance criteria, scope boundedness, linked context — the decision about what enters refinement is made by the system, not by a conversation that requires Jordon, Christopher, Sandra, and Michael to all be available at the same moment.
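A minimal sketch of what that configured decision might look like, assuming a weighted readiness score over the dimensions named above. The dimension names mirror the prose; the weights, scoring scale, and threshold semantics are illustrative assumptions rather than PBL's actual schema.

```typescript
// The five quality dimensions named above, as a closed type.
type Dimension =
  | "titleClarity"
  | "descriptionCompleteness"
  | "acceptanceCriteria"
  | "scopeBoundedness"
  | "linkedContext";

interface GateConfig {
  weights: Record<Dimension, number>; // per-tenant configuration
  threshold: number;                  // minimum readiness to enter refinement
}

// Weighted average of per-dimension scores (assumed here to be 0..1).
function readinessScore(scores: Record<Dimension, number>, gate: GateConfig): number {
  const dims = Object.keys(gate.weights) as Dimension[];
  const totalWeight = dims.reduce((sum, d) => sum + gate.weights[d], 0);
  return dims.reduce((sum, d) => sum + scores[d] * gate.weights[d], 0) / totalWeight;
}

// The gate decision is mechanical: no meeting, no four calendars to align.
function entersRefinement(scores: Record<Dimension, number>, gate: GateConfig): boolean {
  return readinessScore(scores, gate) >= gate.threshold;
}
```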

The distributed team is coordinating across time zones now. The platform removes that constraint.

Team members span Pacific through Eastern time, with managed services personnel on separate rhythms. Context does not always travel with decisions. The team's logging and documentation culture provides partial compensation, but asynchronous coordination across four time zones is structurally imperfect.

PushBackLog addresses this at the execution layer. Once a work item has been quality-gated and refined, the persona that executes it does not operate on business hours, does not require a meeting to start, and surfaces its run results in the product for async human review. The gap between "item approved for execution" and "PR raised for review" stops being a calendar coordination problem. This is the confirmed MVP execution model, and the team is building toward it as its own primary workflow.
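A sketch of that flow as a simple state progression, under stated assumptions: the state names are illustrative, not PBL's schema. The point it makes is structural, that nothing in the chain requires two humans to be online at the same moment.

```typescript
type RunState =
  | "gated"       // passed the configured quality gate
  | "refined"     // acceptance criteria settled
  | "executing"   // persona working in an ephemeral sandbox
  | "pr_raised"   // output surfaced for async human review
  | "merged"
  | "rejected";

// Legal transitions; each step can happen whenever its input is ready.
const transitions: Record<RunState, RunState[]> = {
  gated: ["refined"],
  refined: ["executing"],
  executing: ["pr_raised", "rejected"],
  pr_raised: ["merged", "rejected"],
  merged: [],
  rejected: [],
};

function advance(current: RunState, next: RunState): RunState {
  if (!transitions[current].includes(next)) {
    throw new Error(`invalid transition ${current} -> ${next}`);
  }
  return next;
}
```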

Junior team members are operating at the early end of their arcs, and PBL is designed around exactly that.

Todd Jensen and Michele Wilson are capable practitioners carrying real delivery load at the early-career end of a team with very deep senior coverage. Senior mentorship depends on senior capacity that is already mostly committed to active delivery. Todd's pragmatic instincts will occasionally short-circuit a practice he has not yet seen fail. Michele's pattern recognition at the customer-facing tier is still building.

The product's value proposition is explicitly designed for this configuration: teams where the senior expertise exists but is not always available to supervise every individual execution. PushBackLog's AI personas apply the team's senior-level standards — security enforcement at Jacinto's level, architectural conservatism at Marlene's level, quality gate rigour at Michael's level — to every execution run, not just the ones that get a senior reviewer. The readiness score and refinement flow ensure that what reaches a persona for execution has already been shaped to a quality threshold that does not depend on who happens to be available to review the ticket before it lands in the sprint.

What to Expect From the Engagement

This team is building a platform that closes the feedback loop between a raw idea, a shipped feature, and the production signal that tells you whether you built the right thing. The five delivery zones PushBackLog covers — Discovery, Definition, Build, Ship, and Validate & Evolve — map directly to the work this team does every sprint. The team is its own first test case for whether that loop works.

Bring a well-defined problem with genuine flexibility on the path, and the team delivers reliably and tells you what it cannot do as clearly as what it can.

Bring a vague direction and expect the team to absorb the ambiguity silently, and the team will push back — explicitly, early, and with specific questions. This is the quality gate in action before the software exists to automate it.

Bring non-negotiable security or compliance requirements, and the team will meet them and extend them further than asked — because they are building a product that enterprise clients will install on their GitHub organisations, and they do not treat that responsibility lightly.

Bring a deadline that cannot move, and the team will tell the truth about what can be delivered to a standard they are prepared to stand behind within that deadline. They will not ship work that fails the criteria PushBackLog will enforce on other teams.

Expect the questions to continue. The team that builds a quality-enforcement platform does not stop asking quality-enforcement questions between platform releases. The questions are not procedural overhead. They are the product.

Full roster

Salvador N. Davison
Daniel C. Sprouse
Dayle C. Anderson
Cindy R. Read
Sandra K. Whitfield
Christopher J. Shank
Jordon M. Taylor
Jacinto V. Robles
Marlene G. Sanchez
Christy C. Arthur
Todd D. Jensen
Marcus D. Okonkwo
Rebecca L. Novak
Michael M. McCoin
Randee L. Hall
Helen T. Singleton
Ryan S. Pass
Michele B. Wilson
Timothy J. Jimenez
Kenneth E. Gaymon