Human-Centered Design Playbook for Salesforce Project Teams

This playbook outlines a practical, human-centered approach for Salesforce teams working on case management and similar workflows. It follows Salesforce’s design playbook style – structured, actionable, and approachable – guiding teams through the full HCD (Human-Centered Design) lifecycle. The guidance is organized into phases (Discover, Define, Design, Deliver, Evolve), with modular Plays in each phase. Each Play includes Purpose, When to Use, How to Run It, Tools, and Success Signals. The advice is platform-agnostic but inspired by Salesforce values. Key emphases include co-design with stakeholders, inclusive and trauma-informed practices, strong design governance and QA, and measurable outcomes (usability, equity, trust, adoption).

Discover

In the Discover phase, the team gathers insights on users’ needs, context, and pain points, and sets up project governance. This is about understanding the current situation deeply before proposing solutions. It also involves preparing the project to be human-centered.

Stakeholder & User Mapping

Purpose: Identify all relevant stakeholders and users, and establish project governance. List every group affected by or influencing the solution (case managers, clients, admins, IT, executives, etc.) so none are overlooked. At this stage, set up a design authority or governance committee with clear roles and decision processes to oversee the HCD approach.

When to Use: At project kickoff or early discovery. Use this Play to assemble the project team, get executive buy-in for HCD, and include user representatives in governance. It ensures the project is aligned with real user needs from the start.

How to Run It:

  • Map Stakeholders: Brainstorm and list all stakeholder groups (internal teams, end-users, partners, etc.). Identify key user segments, including vulnerable or marginalized groups, and supporters (trainers, helpdesk).

  • Identify Co-Design Participants: Choose representatives from each user group (case workers, clients, etc.) to participate in future design activities. Ensure diversity so multiple perspectives are included.

  • Set Governance Structure: Create a design authority or steering group (project leaders plus some user advocates). Define roles (e.g. UX lead, stakeholder reps) and a process for decisions (e.g. require sign-off for major UX changes).

  • Plan Communication: Schedule regular reviews (e.g. weekly updates) where the team shares discoveries and decisions with stakeholders. Maintain open channels (Slack/Chatter) so users feel their input is valued.

Tools: Use stakeholder mapping templates (like power-interest grids), RACI charts for governance roles, and collaboration tools (Slack, Salesforce Chatter). Document the governance plan in Salesforce’s Experience Cloud or a project wiki.

Success Signals:

  • A comprehensive stakeholder map exists and is agreed by the team.

  • A governance committee/design authority is established with clear membership and roles.

  • Stakeholders (including end-users) understand and support the HCD process.

  • No key user group is missed in research planning (e.g. front-line caseworkers and clients are included).

Participatory User Research Planning

Purpose: Plan and conduct inclusive user research to uncover real needs, pain points, and ideas. Use participatory methods that involve users (co-design, interviews, workshops) so they shape the insights. Ensure research ethics (consent, privacy) and stakeholder oversight are in place for responsible co-design.

When to Use: At the start of the project (or before any new feature design), before defining requirements. Use this to gain a deep understanding of current workflows (e.g. how case managers work, what clients experience) when you don’t yet have clear solution ideas. Repeat when entering new domains or adding user groups.

How to Run It:

  • Define Research Goals: Clarify what you want to learn (e.g. “How do case officers track client progress?”). Align these goals with stakeholders so they see the value.

  • Choose Participatory Methods: Select inclusive research methods such as user interviews, contextual inquiries (observing users at work), co-design workshops (users sketching solutions), and diary studies. Consider participants’ comfort and accessibility needs.

  • Governance & Ethics: Establish consent and privacy protocols. Have an ethics checklist (voluntary participation, confidentiality) and get necessary approvals (e.g. IRB for healthcare contexts).

  • Recruit Diverse Users: From the stakeholder map, recruit participants across roles, skill levels, and backgrounds. Surfacing these varied needs supports equity and avoids designing only for the “average” user.

  • Conduct Sessions: Pair a facilitator and note-taker. In workshops, give participants simple tools (sticky notes, sketch templates, whiteboard or Miro) to express ideas. Create a respectful, safe atmosphere so participants share freely.

  • Capture Findings: Document user quotes, current pain points, and any ideas users suggest. Collect sketches and notes from co-design sessions. Pay attention to emotional cues – frustrations may indicate broken trust or stressful steps in the workflow.

Tools: Use interview guides and empathy maps; record sessions (with consent); use whiteboard software (Miro, Lucidchart) for remote co-design. Salesforce Surveys or Forms can gather quick polls. Keep a shared research log or repository.

Success Signals:

  • Research sessions involve a representative cross-section of end-users and stakeholders.

  • Participants feel the process was inclusive and respectful (users give feedback that they appreciated being heard).

  • The team identifies clear pain points or unmet needs (e.g. “Users struggle with step X” or “Clients currently work around the system by using Excel”).

  • The team collects user ideas or sketches that generate potential solutions.

  • The research is ethically conducted: all participants gave informed consent, and no sensitive data was mishandled.

Journey Mapping & Insight Synthesis

Purpose: Turn raw discovery data into concrete design guidance: personas, journey maps, and insight statements. Journey maps chart each step of the user experience and highlight pain points and emotions; personas summarize major user types. This builds empathy and reveals key improvement opportunities.

When to Use: After completing discovery research. Once you have interview and observation findings, summarize them into personas and a current-state journey map. It’s also valuable to repeat if new user groups are added.

How to Run It:

  • Create Personas: Identify patterns and draft 2–5 personas for major user types (e.g. “Frontline Case Worker Alice”, “Program Manager Bob”, “Client Carlos”). For each, list goals, needs, pain points, context. Base them on real data (use quotes or metrics) to keep them grounded.

  • Map the Journey: Choose a key workflow (e.g. “Submitting a new case”, or a client’s service journey). Map it step by step from the user’s perspective. For each step, note what the user does, thinks, and feels, and mark any friction or questions. Use icons or colors to highlight pain points. Also note where trust is built or lost (e.g. does a security request make the user feel uneasy?).

  • Identify Moments of Truth: Mark critical moments that strongly influence satisfaction or outcomes. These are steps where failure derails the process (e.g. onboarding or case closure). They become priorities for improvement.

  • Involve Stakeholders: Conduct a synthesis workshop with the team and some user reps. Review the journey map and personas together to validate findings and ensure everyone shares the user perspective.

  • Extract Insights: Write key insight statements or “How Might We” (HMW) questions from the map. For example: “Clients often feel lost during onboarding – HMW create more transparency to build trust?” These link directly to the user needs identified in research.

Tools: Journey mapping and persona templates (paper or virtual tools like Miro/Mural). Sticky notes and affinity mapping for clustering observations.

Success Signals:

  • 2–5 personas are created and actively used by the team (stakeholders recognize them as real users).

  • A current-state journey map is documented with clear pain points and emotions; the team refers to it when prioritizing features.

  • The team has a list of validated pain points and opportunity areas drawn from research (e.g. “Users need easier status tracking; the current workaround is too slow”).

  • Team members across roles (developers, PMs, etc.) can articulate who the user is and what they need, showing empathy alignment.

  • The user is clearly “at the center” of thinking, evidenced by discussions that focus on user experience as much as technology.

Define

In the Define phase, the team formalizes what they learned: they nail down the core user problem and set criteria for success. This means focusing on the right problem, defining scope, and establishing metrics and guiding principles.

Problem Framing Workshop

Purpose: Formulate a clear, user-centered problem statement (or a concise set of them) that everyone agrees on. This aligns the team and stakeholders on what exactly needs solving, and includes any equity or inclusion concerns so the problem isn’t one-sided.

When to Use: After Discover (or whenever scope feels unclear). Use this to prevent solution-jumping and ensure everyone agrees on the problem before ideation.

How to Run It:

  • Review Insights: Start the workshop by revisiting personas and top pain points. Remind everyone of user quotes or stories to keep the user’s voice present.

  • Draft Problem Statements: Each participant writes a one-sentence problem (e.g. “Case managers need a way to track case updates in real-time because delays are causing frustration”). Use the format User–Need–Insight.

  • Share & Combine: Share all drafts and look for common themes. Merge similar statements into 1–3 consolidated problem statements. Make sure any equity issues are mentioned (e.g. if a specific group is underserved).

  • Formulate HMW Questions: For each problem, create one or more “How Might We…” questions to spark ideas (e.g. “How might we enable secure real-time case collaboration?”). Focus these questions on the user’s challenge, not on technical solutions.

  • Define Scope: Discuss what is in scope vs. out of scope. Note constraints (budget, policy) but encourage creative thinking initially.

  • Alignment: Obtain agreement from stakeholders (verbal or written) that this is the problem to solve. This becomes the project’s north star.

Tools: Whiteboard (physical or virtual) for brainstorming. Dot-voting or priority charts to pick top problems. Templates or slides for the User–Need–Insight format. Document the final problem statements in a design brief or Confluence page.

Success Signals:

  • The team can articulate the problem concisely in user-centric language (not as a technical spec).

  • Stakeholders (everyone involved) agree on the problem and show no confusion about project direction.

  • The problem statement addresses root causes (not just symptoms) of the issues found.

  • A set of clear “How Might We” questions is ready to guide ideation.

  • The team remains solution-agnostic (no one is pushing a fix before the problem is clearly defined).

Define Success Metrics (Usability, Equity, Adoption, Trust)

Purpose: Decide how you will measure the solution’s success from a human-centered viewpoint. This includes standard usability metrics and broader indicators like equity, trust, and adoption. Defining metrics early ensures accountability (“you can’t change what you don’t measure”).

When to Use: During Define phase, after problem framing but before finalizing designs. Revisit when project goals shift or new priorities emerge.

How to Run It:

  • Identify Key UX Metrics: Consider Google’s HEART framework (Happiness, Engagement, Adoption, Retention, Task success). For example, measure Task Success (accuracy or time to complete case entry), Adoption (% of caseworkers using the system daily), Engagement (frequency of key actions, like updates per week), and Happiness (user satisfaction or CSAT scores).

  • Include Equity Metrics: Ensure no user group is left behind. Track usage and outcomes by segment (role, region, demographics). For example, measure if any group has significantly slower task times. If you spot disparities (e.g. new hires slower than veterans), that signals a need to improve training or design for that group.

  • Trust and Safety Metrics: Plan how to gauge user trust. Use surveys (e.g. “I trust the system to protect client data” on a 5-point scale) and quantitative indicators (number of privacy/security incidents, support tickets about data concerns). Consider how comfortable users feel submitting sensitive data (e.g. % opting into optional fields).

  • Adoption & Training Metrics: Set goals for onboarding (e.g. % of users completing training, time to proficiency). Track feature adoption rates (which parts of the system are used or ignored).

  • Set Targets/Baselines: Define initial targets or improvements. E.g. increase task success to 95% (from 80%), have ≥90% of staff logging in weekly, keep any user group’s task success within 10% of the best group (equity target), and aim for trust scores of 4/5 or higher. These targets guide design decisions; a simple way to record and check them is sketched after this list.

  • Validate with Stakeholders: Confirm these metrics align with business goals (they may add ROI or compliance KPIs). Ensure all metrics link back to user outcomes (e.g. adoption drives ROI).
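
If the team scripts its analytics, the agreed targets can live in one small, shared place so every later review checks against the same numbers. The sketch below is a minimal Python illustration: the metric names, thresholds, and pilot figures are assumptions echoing the example targets above, not prescribed values.

```python
# Minimal sketch: record the agreed targets and check collected metrics against them.
# Metric names and numbers mirror the examples above; the data source and field names
# are placeholders for whatever analytics export the team actually uses.

TARGETS = {
    "task_success_rate": 0.95,   # share of key tasks completed without assistance
    "weekly_active_rate": 0.90,  # share of staff logging in each week
    "trust_score": 4.0,          # mean of "I trust the system..." on a 1-5 scale
    "equity_gap_max": 0.10,      # max allowed task-success gap between user groups
}

def evaluate(metrics: dict[str, float]) -> dict[str, bool]:
    """Return pass/fail per metric; *_max metrics pass when they stay below the limit."""
    results = {}
    for name, target in TARGETS.items():
        value = metrics.get(name)
        if value is None:
            continue  # not collected yet
        results[name] = value <= target if name.endswith("_max") else value >= target
    return results

# Example with made-up pilot numbers:
print(evaluate({"task_success_rate": 0.88, "weekly_active_rate": 0.92,
                "trust_score": 4.2, "equity_gap_max": 0.14}))
```

Keeping the targets in one agreed artifact like this also makes the Evolve-phase metric reviews easier to repeat.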

Tools: HEART framework reference materials. Analytics planning tools (Salesforce dashboards, Google Analytics, or in-app telemetry). A table or spreadsheet listing each metric, how to collect it, and the target. Salesforce’s built-in adoption/usage reports can be leveraged if available.

Success Signals:

  • A documented list of success metrics covering usability (effectiveness, efficiency), user satisfaction, adoption, equity, and trust is finalized.

  • Team members can describe success in concrete user terms (e.g. “Filing a new case will take under 2 minutes on average”).

  • There is a clear plan for collecting each metric (e.g. scheduled surveys for satisfaction, automated logs for usage).

  • These metrics will serve as success criteria to monitor during and after rollout, demonstrating commitment to outcomes beyond just delivery.

Define Experience Principles (Accessibility & Ethics)

Purpose: Establish overarching design principles, with special focus on accessibility, inclusivity, and ethical concerns. These principles serve as “north stars” for design quality. For example, a principle could be “No user left behind,” meaning the design must work for users of all abilities and tech skills.

When to Use: At the end of Define phase, once the problem and metrics are clear. Use it to lock in key quality standards before design begins. This makes guidelines like WCAG compliance or trauma-informed language non-negotiable from the start.

How to Run It:

  • Brainstorm Principles: In a short workshop, draft 3–5 design principles. They should be brief, inspiring statements. Draw from Salesforce’s values (Trust, Innovation, Equality) and what users need. Also consider any commitments (e.g. “Users should feel safe and in control at all times”).

  • Address Accessibility: Include a principle like “Accessible by design.” State that the product will meet standards (WCAG 2.1 AA or legal requirements). Emphasize features like text alternatives for images, sufficient color contrast, and full keyboard navigation. Note that accessibility benefits all users, not just compliance.

  • Trauma-Informed Ethos: Add principles based on trauma-informed design, such as “Design for emotional safety.” This means using gentle, clear language, offering undo/exit options, and avoiding blame (e.g. “Oops, let’s try that again” instead of scolding error messages). Ensure content warnings or safe words are in place for sensitive information.

  • Ethics and Privacy: If relevant, include “Ethical by design” or “Privacy first.” Commit to collecting only needed data and explaining its use. If AI or data analysis is involved, promise transparency (“We explain how automated decisions are made”).

  • Finalize & Communicate: Narrow to 3–5 principles (more than that dilutes their impact). Have the team agree these are mandatory guidelines. Publish them visibly (project wiki, posters). Refer to them during design reviews: ask “Does this design uphold our principles?”

Tools: Examples of principle lists (e.g. Microsoft’s Inclusive Design principles, WCAG checklists, trauma-informed guidelines from content design sites). A collaborative doc to draft and refine the wording.

Success Signals:

  • The team has a concise set of principles explicitly mentioning accessibility and inclusion (not just generic platitudes).

  • Team members reference the principles during design reviews (e.g. “This complies with our ‘no blame’ principle by using a helpful error message”).

  • Early design iterations are refined based on these principles (e.g. simplifying an interface to give users more control).

  • As development proceeds, there are fewer surprise fixes for accessibility or tone issues because the principles were applied from the start.

Design

In the Design phase, the team generates and refines solutions with continuous user feedback. This is a creative, collaborative phase – co-designing with users, building prototypes, and testing early. The following Plays ensure inclusivity and alignment with user needs.

Co-Design Ideation Sessions

Purpose: Involve users and stakeholders in generating solution ideas through hands-on workshops. Co-design taps into the creativity of end-users and front-line staff, ensuring concepts are grounded in real needs and diverse viewpoints. It also builds buy-in, since people are invested in ideas they helped create.

When to Use: At the start of Design, once you have clear problem statements and principles. Run co-design when brainstorming features or workflows (e.g. new case intake process). Also use it whenever design is stuck – new user input can spark fresh approaches.

How to Run It:

  • Plan the Workshop: Gather 6–10 participants (designers, a few actual end-users like caseworkers or clients, SMEs, perhaps a developer or admin). Block 1–2 hours. Define a clear focus (e.g. “Improve the case intake experience”).

  • Warm-up & Mindsets: Start with a quick creative exercise (e.g. “worst idea” challenge) to get people thinking outside the box. Remind everyone of the HMW questions and design principles. Encourage an atmosphere of empathy and courage (in Salesforce’s Relationship Design terms, bringing compassion to others’ perspectives).

  • Divergent Ideation: Have everyone generate ideas individually first – for example, use “Crazy 8s” (sketch 8 ideas in 8 minutes) or brainwriting with sticky notes. If remote, use Miro/Mural. The goal is to produce many ideas without group pressure.

  • Share & Build: Each person presents their top ideas. As a group, discuss and expand on them. The facilitator ensures non-designers’ voices are heard (often users hesitate to speak up, so validate their contributions). Combine similar ideas and note the most promising ones.

  • Prioritize: Use dot voting or a simple matrix (impact vs. effort) to pick a few ideas to prototype. Ensure at least one selected idea directly came from users’ suggestions.

  • Document Outcomes: Capture photos or screenshots of sketches/boards. List the top ideas with brief notes. Record the rationale (e.g. “Idea X was chosen because users said it would save them an hour per day”).

Tools: Sticky notes, markers, sketch templates for low-fi drawing. Digital: Miro/Mural, Google Jamboard, or Slides. Dots or digital dot-voting tools for prioritization. A timer to keep exercises focused.

Success Signals:

  • Several solution ideas are produced collaboratively, and at least one user-generated idea is chosen for prototyping (showing true participation).

  • Participants report feeling energized or positive (e.g. a caseworker says, “I loved being included; it’s exciting to see our ideas”).

  • The ideas align with the earlier problem and principles (ensuring continuity).

  • A diversity of ideas emerges (not just the designers’ thoughts – users brought new perspectives).

  • Stakeholders gain confidence in HCD when they see engaged users driving the process.

Rapid Prototyping & Usability Testing

Purpose: Quickly turn top ideas into tangible prototypes and test them with real users. The goal is to iterate rapidly based on feedback, making sure the design works in practice before heavy development. Prototypes can be paper sketches or interactive mocks. Usability testing with these prototypes validates design decisions through actual user behavior.

When to Use: Immediately after ideation, with initial concepts. Continue to cycle: each time the design evolves or a major feature is added, prototype and test again. Also use usability testing whenever the team is undecided on a design choice – testing provides evidence.

How to Run It:

  • Prototype Quickly: Choose an appropriate fidelity. Early on, hand-drawn storyboards or wireframes are fine. Later, build clickable prototypes in Figma, Adobe XD, etc. The prototype should allow users to attempt key tasks (e.g. creating a new case or a client logging in).

  • Plan the Test: Write realistic scenarios (e.g. “You are a case manager assigning a follow-up task”). Plan tasks covering essential use cases. Sessions ~30–60 minutes. Recruit 5–8 test users (actual case managers or knowledgeable staff).

  • Conduct Testing: Have a facilitator and note-taker. Ask participants to “think aloud” while using the prototype. Do not help them unless necessary. Observe where they hesitate or express confusion. Record their comments (e.g. “I expected this button to do that”). Also note emotional reactions (frustration or satisfaction).

  • Include Accessibility: If possible, test with at least one user using assistive technology (screen reader, keyboard only). Even a non-interactive prototype can hint at accessibility issues (like focus order). This ensures early that the design is on track for users with disabilities.

  • Debrief & Iterate: After each test, review findings with the team. Identify and prioritize usability problems (e.g. unclear navigation, confusing labels). Link issues to design principles (e.g. violating consistency or control). Make revisions to the prototype. Also note what worked well.

  • Repeat: Run another test round with the updated prototype. Each cycle should reduce issues, improving task success and user satisfaction.
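
A lightweight way to confirm each cycle is paying off is to log every session’s outcome and compare rounds. The sketch below uses made-up session records for a single task; real studies cover several tasks and pair the numbers with qualitative observations.

```python
# Minimal sketch: compare task success and completion time across test rounds.
# Session records are invented; in practice they come from the note-taker's log.
from statistics import mean, median

sessions = [
    # (round, participant, task_completed, seconds_to_complete)
    (1, "P1", True, 210), (1, "P2", False, 300), (1, "P3", True, 250),
    (2, "P4", True, 150), (2, "P5", True, 170), (2, "P6", True, 140),
]

for rnd in sorted({s[0] for s in sessions}):
    rows = [s for s in sessions if s[0] == rnd]
    success = mean(1 if completed else 0 for _, _, completed, _ in rows)
    time_med = median(secs for _, _, _, secs in rows)
    print(f"Round {rnd}: task success {success:.0%}, median time {time_med}s")
```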

Tools: Prototyping: Figma/Sketch/InVision or even paper. Testing: screen-sharing software (Zoom, etc.) or a usability lab. Recording (with consent) for later review. Note-taking spreadsheets or tools (Airtable). Accessibility checkers (axe, Wave) for quick automated issues, combined with manual testing.

Success Signals:

  • Usability testing uncovers issues the team hadn’t noticed, and these are fixed before development (e.g. a confusing form layout is improved after users got lost). This prevents costly rework.

  • Over successive tests, users’ task success rates and completion times improve, showing the design is becoming intuitive.

  • The team has confidence going into build because the design was validated by real users (“We’ve seen users complete these tasks successfully”).

  • Feedback from diverse users (including one with accessibility needs) has shaped a more universally usable design.

  • Stakeholders who observe tests see positive user reactions, reinforcing trust in the HCD approach.

Inclusive Design QA (Accessibility & Trauma-Informed Review)

Purpose: Conduct a thorough design review for accessibility and trauma-informed principles before development. This final design QA step catches any issues that might exclude or harm users. Fixing these in design is much easier than after coding. This play ensures the design is inclusive, respectful, and meets the established principles.

When to Use: At the end of Design phase, when high-fidelity designs or a design system is ready but before full development. Also run this QA at key points (e.g. a feature’s design is approved for build). Essentially, any time design is “ready for handoff,” do this check.

How to Run It:

  • Accessibility Audit: Check the design against WCAG 2.1 AA standards (or your organization’s standard). Review color contrasts (at least 4.5:1 for normal text and 3:1 for large text; a contrast-calculation sketch follows this list), font sizes, and avoid conveying information by color alone. Ensure all interactive elements are visible and labeled. Verify that users can navigate via keyboard. Check that copy is in plain language. Use tools (contrast checkers, screen reader simulators) to assist. Aim for a perceivable, operable, understandable, and robust design.

  • Assistive Tech Simulation: If possible, have a team member try the design with a screen reader (or use a Figma plugin). This can reveal issues like missing alt text or confusing element order.

  • Trauma-Informed Review: Revisit trauma-informed principles (safety, choice, transparency, empowerment). Critique the UI and content: Are any words potentially triggering or blaming? (Use supportive language: e.g. “Oops, let’s try again.”) Do users have control (undo/cancel)? Are privacy/security cues clear to build trust? Do users know what to expect (consistency)? For any sensitive info, provide warnings. Check cultural sensitivity in imagery and wording (avoid stereotypes, be inclusive).

  • Diverse Reviewers: Involve an accessibility expert or someone not on the design team for a fresh look. If possible, have a representative user (or someone acting as one) preview the design and give feedback on any discomfort.

  • Document and Fix: List any issues (e.g. “Contrast on main button is 3:1 – needs >=4.5:1”; “Confirmation dialog is abrupt – add reassurance or allow cancel”). Prioritize fixes. Adjust the design and note any fixes to pass to developers (like adding ARIA labels).

  • Sign-off: Have the design authority or UX lead formally sign off on these checks. A checklist (accessibility, style guide, etc.) should be all green before saying “design is complete.”
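
The 4.5:1 threshold above comes from WCAG’s contrast-ratio formula, which is easy to script for quick spot checks during the audit. The sketch below follows the WCAG 2.1 relative-luminance math; the sample colors are illustrative, and a dedicated contrast checker or design-tool plugin remains the authoritative reference.

```python
# Minimal sketch of the WCAG 2.1 contrast-ratio check referenced in the audit step.
# Relative luminance and the (L1 + 0.05) / (L2 + 0.05) ratio follow the WCAG definitions;
# the hex colors below are illustrative.

def _channel(value: int) -> float:
    c = value / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(hex_color: str) -> float:
    r, g, b = (int(hex_color.lstrip("#")[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * _channel(r) + 0.7152 * _channel(g) + 0.0722 * _channel(b)

def contrast_ratio(foreground: str, background: str) -> float:
    lighter, darker = sorted((luminance(foreground), luminance(background)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

ratio = contrast_ratio("#767676", "#FFFFFF")  # mid-grey text on a white background
verdict = "passes" if ratio >= 4.5 else "fails"
print(f"{ratio:.2f}:1 -> {verdict} AA for normal text")  # roughly 4.54:1, just past the threshold
```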

Tools: WCAG quick reference or checklist. Automated contrast checkers (WebAIM, Stark). The trauma-informed principles list as a review guide. Bug tracker or task list to record any changes needed. Possibly run parts of design by a content designer or psychologist for trauma sensitivity.

Success Signals:

  • The final design meets accessibility standards (e.g. text contrasts fixed, screen reader navigation flows logically). All critical accessibility issues are resolved pre-build.

  • The UI and content conform to trauma-informed guidelines (supportive language, undo options present, no unexpected triggers). No last-minute “we forgot to fix X” surprises.

  • Stakeholders or auditors sign off on the design’s accessibility and ethics.

  • In later testing/pilot, there are minimal complaints about inaccessibility or insensitive content (we addressed them in design).

  • Overall risk is reduced – the product is far less likely to exclude or distress users after launch.

Deliver

In the Deliver phase, the project moves through development, QA, and launch. The focus is on building the solution in line with the design and ensuring quality. These Plays emphasize inclusive testing, user training, and setting up feedback for go-live.

User Story Validation & Inclusive QA Testing

Purpose: As features are developed, verify that each meets user needs and quality standards. Embed HCD into Agile: testers and team members should confirm not only functionality but also usability and accessibility. Involving real users (or proxies) in acceptance testing catches practical issues automated tests might miss (e.g. a workflow that works but confuses a new user).

When to Use: Throughout the development sprints and especially in the official QA/UAT phase before launch. Whenever a feature is marked “done,” it should undergo this validation. Also use in any early user pilot or beta.

How to Run It:

  • HCD Acceptance Criteria: For each user story, write acceptance criteria covering UX and quality, not just functionality. E.g.: “Case assignment feature – (a) User can assign a case (functional), (b) The assign action is easily found (usability – e.g. within 3 clicks), (c) Screen reader announces the assignment confirmation (accessibility).” This ensures developers know what “done” means.

  • Pair QA with UX: Have QA testers or product owners test scenarios based on personas (realistic user stories). If possible, include a few actual end-users in UAT; their feedback is invaluable.

  • Inclusive Testing: Include diverse conditions: test with different devices/browsers, assistive tech (keyboard-only navigation, screen reader). For example, ensure no element is unreachable via keyboard. If your user base is multilingual or multinational, check language and cultural appropriateness.

  • Stress-test for Empathy: Simulate error scenarios: input mistakes, time-outs. Verify the system’s response is helpful and non-threatening. Error messages should guide users calmly (trauma-informed QA).

  • Log Issues from User Perspective: When logging defects, describe impact on the user (“Keyboard user got trapped on popup, major accessibility issue”) rather than just “doesn’t match spec.”

  • Regression Check: After fixes, re-test critical user journeys end-to-end. Ensure no new complexity was added and performance is still good (slow performance can hurt adoption as much as design flaws).

  • Acceptance Sign-off: Use a UAT checklist with human-centered checks (e.g. “✅ New user completed onboarding without help,” “✅ All pages passed accessibility scan,” “✅ Content reviewed for tone”). The product owner or user rep signs off once it’s all green.
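
The “all pages passed accessibility scan” item above can be partially automated by running axe-core in a headless browser. The sketch below assumes Playwright is installed and the axe-core script is reachable; the page URL and pinned version are placeholders, and automated scans catch only a subset of WCAG issues, so they complement rather than replace manual keyboard and screen-reader testing.

```python
# Minimal sketch: run an axe-core scan in a headless browser with Playwright.
# The target URL and pinned axe-core version are placeholders for this illustration.
from playwright.sync_api import sync_playwright

AXE_CDN = "https://cdnjs.cloudflare.com/ajax/libs/axe-core/4.8.2/axe.min.js"

def scan(url: str) -> list[dict]:
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(url)
        page.add_script_tag(url=AXE_CDN)           # inject axe-core into the page under test
        results = page.evaluate("() => axe.run()")  # Playwright awaits the returned promise
        browser.close()
    return results["violations"]

if __name__ == "__main__":
    for v in scan("https://example.org/case-intake"):  # placeholder page
        print(f"[{v['impact']}] {v['id']}: {v['help']} ({len(v['nodes'])} elements)")
```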

Tools: Test case management (Jira, Zephyr) with space for UX notes. Automated accessibility checkers (axe, Wave) and manual keyboard tests. Salesforce’s Lightning Experience Accessibility Checker for standard components. Session recordings for UAT observations. Feedback survey forms for pilot users.

Success Signals:

  • QA/UAT uncovers UX or accessibility issues (e.g. confusing label) which are fixed pre-launch.

  • All critical accessibility issues are resolved (no blockers left). Accessibility is treated as essential.

  • Actual end-users in UAT sign off on usability (e.g. case managers in a pilot complete tasks with minimal questions).

  • Each user story has evidence (notes or tester sign-off) that it meets user-focused criteria.

  • After launch, there are very few usability-related support tickets, indicating the thorough validation was effective.

Training and Onboarding for Adoption

Purpose: Prepare and support users for the new system with empathetic training and onboarding. Even the best design needs proper introduction. This play focuses on enabling users with knowledge and trust, reinforcing why the change is beneficial, and using empathy in all communications.

When to Use: In the weeks leading up to launch and immediately after. Begin planning during development and execute right before users go live. Also apply this for new user groups or major updates (continuous onboarding).

How to Run It:

  • Create Empathetic Content: Develop training that speaks users’ language and acknowledges their concerns. For example, start sessions by saying, “We know the old system frustrated you with duplicate work – here’s how the new one solves that.” This shows the solution was built from their feedback. Use scenario-based learning: teach features through actual tasks (e.g. “Let’s create a new case step by step”) rather than abstract demos.

  • Multi-Modal Delivery: Combine live sessions (webinars or in-person demos), short video tutorials, one-page quick-reference guides, and hands-on practice labs. Ensure all content is accessible (caption videos, easy-reading handouts). Apply trauma-informed pacing: don’t overload users with too much at once. Use staged rollout or in-app tours (e.g. Salesforce guided tours) so users can explore at their own speed.

  • Champions and Peer Support: Identify enthusiastic early adopters or pilot users as “champions.” They can help colleagues and share success stories (“This new workflow cuts my processing time in half!”). Set up a help channel (a Chatter or Slack group) where users can ask questions. Peer support builds community and trust.

  • In-App Guidance: Use Salesforce tools (like In-App Guidance) to show tips or checklists inside the app. For example, display a welcome screen on first login highlighting key features and help links. Include feedback buttons (e.g. “Was this tip helpful?”) to gather instant input.

  • Gather Feedback During Training: Encourage questions and feedback during training. Conduct quick surveys after sessions to gauge confidence (“Do you feel ready to use the new system?”). Use this input to tweak both the product (if possible) and support materials.

  • Reinforce Over Time: Plan follow-up support. Send “tips and tricks” emails, hold a Q&A session a few weeks after launch to address common issues, and update training docs as needed. Learning is ongoing.

Tools: Leverage Salesforce’s myTrailhead or Trailhead Live to create interactive modules. Use video conferencing for live training (record them for later viewing). Maintain a knowledge base or FAQ. Optionally use gamification (badges for completion, leaderboards). Use surveys (Google Forms, Salesforce Surveys) to collect training feedback.

Success Signals:

  • High training engagement (attendance or completion rates are strong), indicating users are invested in learning.

  • Positive feedback: trainees report that the training prepared them well. Comments like “This was exactly about our daily work” show relevance.

  • At launch, fewer frantic support calls or panicked reactions – users find help resources on their own or already understand basics.

  • Adoption metrics in the first weeks meet targets (e.g. 80% of staff logging in weekly). If not, the team uses feedback to iterate (maybe scheduling extra sessions for lagging groups).

  • Anecdotes like: a skeptical user says, “Actually, once I tried it, it’s not bad – thanks for the training!” (shows a convert to the new system).

Go-Live Feedback Loop Setup

Purpose: Establish mechanisms to collect user feedback and monitor metrics immediately upon launch and in the early use period. This ensures issues are identified and resolved quickly, and that insights feed into the next iteration. In other words, we don’t just launch and disappear – we continue the human-centered cycle by listening and adapting.

When to Use: In the days just before go-live (to set up) and for the first several weeks after launch. Also repeat for any major release or update.

How to Run It:

  • Launch Support Team: Form a “war room” or dedicated support channel staffed by the project team (admins, a developer, designer/researcher, and support reps). Define how to collect feedback (a special email alias, a chat channel, a feedback form) and how often the team will review it (daily stand-ups or daily summary emails). Make sure all users know how to report issues or suggestions (“Here’s how to get help”).

  • Proactive Check-ins: Don’t wait for complaints – reach out to users early. For example, call or message a few users in week 1 to ask, “How’s it going? Any questions?” Use in-app prompts after a user’s first session (e.g. “Rate your experience 1–5 and tell us what could be better”). This shows customers that their voices are important from day one.

  • Monitor Metrics Live: Use the success metrics dashboard in real time if possible. Track usage (login rates, number of cases processed vs old system), performance (page load times), and any satisfaction measures (quick CSAT survey). Also monitor qualitative signals: volume/type of helpdesk tickets, Slack/Chatter feedback. If a metric dips (say supervisors aren’t using a new feature), investigate immediately (“Are they missing training, or is the UI unclear?”).

  • Triage Feedback: As feedback comes in, categorize it: critical bugs to fix immediately, usability tweaks (quick changes), nice-to-have suggestions, and positive comments. Assign critical issues for hotfixes. Implement quick usability fixes (like text edits) to show responsiveness. Communicate changes: e.g. “We heard you – we’ve added a shortcut button to the dashboard.” A minimal structure for tracking these triage buckets is sketched after this list.

  • Share Early Wins and Learnings: With the team and stakeholders, highlight successes (e.g. “We’ve had 100 cases created in 2 days, and users love the new navigation”). Also be transparent about issues (“We noticed some confusion with the report feature; here’s our plan to improve it”). Sharing feedback builds trust with all parties.

  • Plan Next Iteration: Use collected feedback to prioritize the backlog for the next version. Re-evaluate success metrics: if some targets aren’t met (e.g. trust scores low), make concrete plans (maybe a design tweak or more training). This sets up the Evolve phase of continuous improvement.
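
A shared triage structure keeps the buckets above consistent across the launch support team. The sketch below is illustrative only: the categories mirror the triage buckets described in this Play, and the response windows are assumptions to be replaced by the team’s own support agreements.

```python
# Minimal sketch: a shared structure for triaging go-live feedback.
# Categories mirror the triage buckets above; response windows are illustrative.
from dataclasses import dataclass
from enum import Enum

class Category(Enum):
    CRITICAL_BUG = "critical bug"        # hotfix immediately
    USABILITY_TWEAK = "usability tweak"  # quick change in the next patch
    SUGGESTION = "nice-to-have"          # backlog for the next iteration
    PRAISE = "positive comment"          # share with the team and stakeholders

RESPONSE_DAYS = {Category.CRITICAL_BUG: 1, Category.USABILITY_TWEAK: 5,
                 Category.SUGGESTION: 30, Category.PRAISE: 0}

@dataclass
class FeedbackItem:
    reporter: str
    summary: str
    category: Category

    def respond_within_days(self) -> int:
        return RESPONSE_DAYS[self.category]

item = FeedbackItem("caseworker, Dept A", "Keyboard trap in the assignment popup",
                    Category.CRITICAL_BUG)
print(f"{item.summary}: respond within {item.respond_within_days()} day(s)")
```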

Tools: Use Salesforce feedback tools (Chatter groups, Surveys). Set up a real-time dashboard (Salesforce or Google Analytics) for key metrics. Use Service Cloud or a ticketing system to flag “new system” issues. For internal coordination, use Slack or a spreadsheet to log anecdotal feedback. Schedule daily/weekly check-ins for the team.

Success Signals:

  • A steady stream of user feedback is coming in (users feel comfortable reporting issues/suggestions).

  • Early metrics look positive: user satisfaction is meeting goals and adoption is growing. Any areas lagging are identified and addressed quickly.

  • The team fixes top user-reported issues rapidly, and communicates that back (“Thanks for reporting that; here’s the fix” – building trust).

  • The project doesn’t just end at launch: there is a clear plan for continuous improvements, reflecting the customer-centric mindset. For instance, ongoing governance meetings still happen to review feedback and plan updates.

Evolve (Post-Launch)

In the Evolve phase, the product and processes are sustained and continuously improved. User needs and contexts change over time, so the team must keep learning and adapting. These Plays focus on ongoing metrics tracking, feedback loops, and knowledge sharing to ensure lasting success.

Measure Outcomes & Equity Regularly

Purpose: Continuously track the defined success metrics (usability, equity, adoption, trust, etc.) and evaluate whether the solution is delivering its intended results for all users. This keeps the team data-driven and accountable. Importantly, it helps ensure no user group is unintentionally disadvantaged over time.

When to Use: Start after a stabilization period post-launch (e.g. 1–2 months in) and then on a regular cadence (monthly or quarterly). Also do it after major feature releases.

How to Run It:

  • Dashboard & Reports: Maintain a live dashboard of key metrics (e.g. average case resolution time, weekly active users, task completion rates, satisfaction scores). Break down metrics by segment (role, region, demographic) to spot inequities; a simple segment-breakdown check is sketched after this list. For example, if one department’s usage lags, or if one user group has much lower satisfaction, flag it.

  • Regular Reviews: Hold periodic reviews with the core team and stakeholders. Ask: Are we meeting targets? If not, why not? For any metrics off-track, drill down. For instance, if case completion time improved overall but got slower for a new user group, investigate that group’s workflow.

  • User Feedback Channels: Keep feedback channels open beyond launch (ongoing surveys, user forums, or focus groups). Combine this qualitative feedback with the quantitative metrics for insights (e.g. low usage of a feature coupled with user complaints about it).

  • Equity Checks: Periodically audit equity. Analyze if usage or success differs across groups (role, location, ability). If disparities persist, take action (maybe extra training, UI adjustments, or outreach to that group).

  • Share Results: Report results to stakeholders. Celebrate successes (e.g. “We hit 95% task success – great job!”) and be transparent about areas to improve (e.g. “We still see lower adoption in Dept B; here’s our plan”).

  • Adjust Targets: If certain goals are consistently exceeded, raise the bar. If some were too ambitious, recalibrate. Ensure metrics stay aligned with evolving business and user goals.
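
The segment breakdown and equity check can be scripted against whatever usage export the team has. The sketch below uses made-up records, placeholder segment names, and a 10% gap threshold that echoes the example equity target from the Define phase.

```python
# Minimal sketch: break a metric down by segment and flag equity gaps.
# Records and segment names are invented; the 0.10 gap threshold echoes the
# example equity target set in the Define phase.
from collections import defaultdict
from statistics import mean

records = [  # (segment, task_succeeded)
    ("Dept A", True), ("Dept A", True), ("Dept A", False),
    ("Dept B", True), ("Dept B", False), ("Dept B", False),
]

by_segment: dict[str, list[bool]] = defaultdict(list)
for segment, succeeded in records:
    by_segment[segment].append(succeeded)

rates = {seg: mean(map(int, outcomes)) for seg, outcomes in by_segment.items()}
best = max(rates.values())
for seg, rate in rates.items():
    gap = best - rate
    flag = "  <-- investigate (gap above 0.10)" if gap > 0.10 else ""
    print(f"{seg}: task success {rate:.0%} (gap {gap:.0%}){flag}")
```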

Tools: Continued use of analytics tools and surveys. Visualization dashboards (Salesforce, Tableau, or similar). Documentation of metric reviews and action items (Confluence or shared docs).

Success Signals:

  • The team can clearly report current UX performance (e.g. “Current CSAT is 4.2/5, and 92% of staff use the system weekly – both above targets”).

  • No user group is left behind unnoticed: any usage or success gaps are identified and closed over time. For instance, a lagging regional office sees usage improve after targeted support.

  • Trust/satisfaction metrics remain stable or improve. Few data/privacy complaints occur.

  • The team uses these measurements to drive changes (e.g. a dip in a metric leads to a prioritized fix), showing the feedback loop works.

Continuous Improvement & Iteration Planning

Purpose: Treat the product as an evolving system. Use all collected feedback and metrics to plan enhancements and fixes. Continue using HCD processes (mini–Discover/Design) for significant new work. Maintain governance and quality as the solution grows.

When to Use: Post-launch, on an ongoing basis (e.g. in each sprint or quarterly release). Whenever planning next features or improvements.

How to Run It:

  • Maintain a Backlog: Continuously add user feedback, bugs, and improvement ideas into the product backlog. Include context (e.g. user quotes or segments affected) so future work stays user-focused.

  • Co-Design New Features: For major new features or changes, re-engage users. Hold smaller-scale co-design or user review sessions to ensure enhancements meet their needs (e.g. if adding a reporting dashboard, involve end-users in its design).

  • Sustain Governance: Keep the design authority or HCD steering group active (perhaps monthly post-launch). They should review proposed changes for alignment with experience principles and equity.

  • Ongoing Quality Checks: Apply the same QA rigor to new work (accessibility audits, usability tests, trauma-informed reviews) as was done initially. Update training and help materials for each new feature.

  • Knowledge Transfer: Document key design decisions, research findings, and rationale so new team members (or admins) understand why things are done a certain way. This preserves HCD knowledge over time.

  • Celebrate & Communicate: When rolling out updates, highlight user-suggested improvements (e.g. “You asked for a shortcut here, we added it!”). Share usage successes to encourage continued feedback.

Tools: Same backlog/issue tracking (Jira, Trello) with tags for UX tasks. A UX research repository (Confluence, Airtable). Regular design reviews. Release notes or internal emails announcing user-driven changes.

Success Signals:

  • The product continues to meet user needs; satisfaction remains high or improves as updates roll out.

  • Users stay engaged and feel heard, continuing to provide input. Many new requests come from users who see their feedback implemented.

  • No regressions: updates do not break accessibility or confuse users because HCD practices are maintained.

  • The governance group actively guides improvements, and stakeholders note the consistent quality.

  • In the long term, this project is seen as a model for human-centered success within the organization.

Lifecycle Retrospective & Knowledge Sharing

Purpose: Reflect on the HCD process itself: what worked well, what challenges arose, and how the process can improve. Share these lessons to grow the organization’s HCD maturity.

When to Use: At major milestones (e.g. 6–12 months post-launch) or at project end. Also after big releases or when the core team changes.

How to Run It:

  • HCD Retrospective: Convene the core team and possibly key stakeholders. Ask: Which HCD activities were most valuable? (Users might point to co-design or testing.) What difficulties did we face? (Maybe recruiting users took too long.) What would we change next time? Record honest feedback.

  • Gather User Voice: If possible, collect a few user testimonials about the process (e.g. “I felt really listened to by the team,” or “At first I was unsure, but the usability test convinced me this tool is great”). These highlight the human impact.

  • Update the Playbook: Incorporate any new practices or insights into this playbook. For example, if remote workshops worked well, note them. If a specific tool was helpful, add it.

  • Share with Peers: Present the project’s HCD journey (maybe a lunch-and-learn or an internal blog post). Highlight key outcomes and how HCD contributed (with data if possible). Be open about lessons learned (“Next time, budget more time for interviews”). This helps other teams.

  • Recognize Contributors: Acknowledge team members and user collaborators who championed HCD (public shout-outs, awards, or case study features). Celebrating success reinforces a culture of design.

  • Plan Future HCD Integration: Use insights to improve how future projects start (e.g. mandate early user research, allocate budget for accessibility review). Feed these improvements into project methodology documents or standards.

Tools: Retrospective formats (Start/Stop/Continue, etc.) on a board or Mural. Internal communication channels (wiki, Chatter, newsletter). Design community meetings (forums for UX designers to share stories).

Success Signals:

  • The team can clearly articulate how HCD drove positive outcomes (e.g. increased adoption, fewer errors) and what could be done better.

  • Lessons learned are documented and applied: future projects use these practices without reinventing the wheel.

  • Team members report growth: those new to HCD gained skills they’ll use again; veterans feel validated by the results.

  • The playbook and process documentation are updated with improvements, setting up the next team for success.

  • Human-centered design becomes recognized as the default approach in the organization, with this project cited as an example of its value.

By following this Human-Centered Design Playbook through Discover, Define, Design, Deliver, and Evolve, Salesforce project teams can ensure their case management solutions (and similar workflows) are not only technically sound but truly effective, inclusive, and trusted by users.

The modular Plays provide structure but are meant to be adapted – teams should iterate on the process just as they iterate on the product. Remember: at the heart of HCD is a simple principle – keep the people (users, stakeholders, and our broader community) at the center of every decision. When done right, this leads to solutions that are adopted and loved, not just deployed, driving success for both the project and the mission it serves.