Comprehensive HCI Research Report: Multidimensional Testing in Human-Computer Interaction
1. Updated Research Title Options
- “Beyond Usability: A Multidimensional Approach to Testing in Human-Computer Interaction”
- “Designing for All: Evaluating Usability, Accessibility, and Cognitive Load in HCI”
- “Human-Centered Testing in HCI: Measuring Interaction, Emotion, and Experience”
- “The Full Spectrum: UX, Accessibility, and Cognitive Testing in Human-Computer Interaction”
- “From Function to Feeling: A Comprehensive Evaluation of Human-Computer Interaction Systems”
2. Expanded HCI Testing Plan Template
Research Objectives:
- Evaluate the system’s usability, accessibility, cognitive load, task efficiency, and emotional impact on users.
Participant Profile:
- Recruit a diverse group across age, ability, technology experience, and cognitive style to reflect real-world users.
Test Methodology:
- Task-based usability testing
- Accessibility audit using screen readers and keyboard-only navigation
- Cognitive load assessment using the NASA Task Load Index (NASA-TLX)
- Emotional response capture using Affect Grid or Self-Assessment Manikin (SAM)
- Think-aloud protocol
- Post-task interviews and questionnaires
Test Areas and Evaluation Metrics:
| Area | Metrics |
| --- | --- |
| Usability | Task completion, error rates, SUS scores |
| Accessibility | WCAG 2.1 compliance, screen reader compatibility |
| Cognitive Load | NASA-TLX results, task duration variance |
| Task Efficiency | Time-on-task, number of clicks, path length |
| Emotional Response | Valence-arousal scale ratings |
| Information Design | Findability, navigation success, card sorting |
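To make the task-efficiency metrics concrete, here is a minimal logging sketch in Python. The `TaskMetricsLogger` class and its field names are hypothetical illustrations, not part of any standard HCI toolkit; in practice these values are often captured through screen recording or analytics instrumentation.

```python
import time

class TaskMetricsLogger:
    """Minimal per-task logger for time-on-task, click count, and path length.

    Hypothetical sketch: the class and field names are illustrative only.
    """

    def __init__(self, participant_id: str, task_id: str):
        self.participant_id = participant_id
        self.task_id = task_id
        self.start = time.monotonic()
        self.clicks = 0
        self.path = []  # ordered list of screens/pages visited

    def record_click(self) -> None:
        self.clicks += 1

    def record_screen(self, screen_name: str) -> None:
        self.path.append(screen_name)

    def finish(self) -> dict:
        """Return the three efficiency metrics listed in the table above."""
        return {
            "participant": self.participant_id,
            "task": self.task_id,
            "time_on_task_s": round(time.monotonic() - self.start, 1),
            "clicks": self.clicks,
            "path_length": len(self.path),
        }
```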
Location and Setup:
- Mixed setup: remote sessions for screen reader testing; in-lab sessions for eye tracking and direct observation.
Timeline:
- 3-week evaluation with parallel user groups testing different interaction conditions.
3. Comprehensive Task Scenario Template
Scenario Title: Evaluate Task Management Flow
Goal: Assess how users interact with different interface layers and how design impacts efficiency, comprehension, and satisfaction.
Scenario Instructions:
“You are planning your workweek. Using the application, create three new tasks, organize them into a project folder, assign deadlines, and set a priority level for each.”
Evaluation Focus by Category:
- Usability: Can users complete the task without errors?
- Cognitive Load: Are they overwhelmed with options or labels?
- Accessibility: Can they complete the task using keyboard-only navigation or screen readers?
- Emotional Response: How do they feel during and after the task?
Follow-Up Probes:
- “Was anything mentally demanding during this task?”
- “Did you feel in control or frustrated?”
- “What made the task easier or harder?”
4. Post-Test Interview Template (Expanded Scope)
1. General Experience
- “Describe your overall interaction with the system.”
2. Usability & Navigation
- “Was it easy to complete tasks without help?”
- “Did any feature feel out of place or redundant?”
3. Accessibility
- “Were you able to navigate using your preferred input method?”
- “Was the visual design readable, with sufficient contrast and legible text?”
4. Cognitive Load & Mental Effort
- “Did you feel mentally taxed during the process?”
- “Were the instructions and layout intuitive?”
5. Emotional Response
- “What emotions did you feel while using the system?”
- “Were there points where you felt frustrated, delighted, or confused?”
6. Information Architecture
- “Was the content organized in a logical way?”
- “Could you easily find what you were looking for?”
7. Recommendations
- “What would you improve to reduce confusion or frustration?”
5. HCI Data Collection Template (Multi-Metric)
| Participant | Task Time (s) | Errors | NASA-TLX Score | Accessibility Score | Emotion (Valence, Arousal) | SUS Score | Notes |
| --- | --- | --- | --- | --- | --- | --- | --- |
| P1 | 85 | 1 | 45 (High Load) | 3/5 | (4, 6) | 70 | “Task clear but overloaded by pop-ups” |
| P2 | 62 | 0 | 25 (Low Load) | 5/5 | (7, 3) | 85 | “Smooth and enjoyable” |
| P3 (Low Vision) | 120 | 3 | 52 | 2/5 (navigation issues) | (3, 7) | 50 | “Screen reader failed on submenus” |
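One way to keep these rows consistent across analyses is a small structured record per session. A minimal Python sketch, assuming the column layout above (the `SessionRecord` field names are illustrative, not a fixed schema):

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class SessionRecord:
    participant: str
    task_time_s: float
    errors: int
    nasa_tlx: float           # 0-100, higher = more load
    accessibility_score: int  # out of 5
    valence: int              # 1-9
    arousal: int              # 1-9
    sus: float                # 0-100
    notes: str = ""

# The three example rows from the table above
records = [
    SessionRecord("P1", 85, 1, 45, 3, 4, 6, 70, "Task clear but overloaded by pop-ups"),
    SessionRecord("P2", 62, 0, 25, 5, 7, 3, 85, "Smooth and enjoyable"),
    SessionRecord("P3", 120, 3, 52, 2, 3, 7, 50, "Screen reader failed on submenus"),
]

print("Mean SUS:", round(mean(r.sus for r in records), 1))       # 68.3
print("Mean TLX:", round(mean(r.nasa_tlx for r in records), 1))  # 40.7
```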
6. System Usability Scale (SUS) Template
Keep the standard 10-item SUS questionnaire to maintain comparability with published benchmarks, and pair the resulting score with the other measures collected here (e.g., NASA-TLX ratings, WCAG audit results).
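For scoring, SUS uses Brooke's standard formula: odd-numbered items (positively worded) contribute (rating - 1), even-numbered items (negatively worded) contribute (5 - rating), and the summed contributions are multiplied by 2.5 to give a 0-100 score. A minimal Python sketch:

```python
def sus_score(ratings: list[int]) -> float:
    """Compute the SUS score from ten 1-5 ratings (item 1 first).

    Odd items (positively worded): rating - 1.
    Even items (negatively worded): 5 - rating.
    The summed contributions are scaled by 2.5 to a 0-100 range.
    """
    if len(ratings) != 10 or not all(1 <= r <= 5 for r in ratings):
        raise ValueError("SUS requires ten ratings between 1 and 5")
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # index 0 is item 1 (odd)
        for i, r in enumerate(ratings)
    ]
    return 2.5 * sum(contributions)

# Example: a fairly positive response pattern
print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # 75.0
```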
7. NASA-TLX (Cognitive Load) Template
For each task, participants rate the following six subscales on a scale of 0–100 (see the scoring sketch after this list):
- Mental Demand
- Physical Demand
- Temporal Demand
- Performance
- Effort
- Frustration
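A common simplification for scoring is the Raw TLX (RTLX), which takes the unweighted mean of the six subscale ratings instead of NASA's original pairwise-weighting procedure. Note that the Performance subscale is anchored from good (0) to poor (100), so a higher mean consistently indicates higher workload. A minimal sketch; the example subscale values are hypothetical, chosen to average to P1's score of 45 from the data collection table:

```python
SUBSCALES = ("mental", "physical", "temporal", "performance", "effort", "frustration")

def raw_tlx(ratings: dict[str, float]) -> float:
    """Raw TLX (RTLX): unweighted mean of the six 0-100 subscale ratings.

    On the Performance subscale, 0 = perfect and 100 = failure,
    so a higher mean always indicates higher workload.
    """
    missing = [s for s in SUBSCALES if s not in ratings]
    if missing:
        raise ValueError(f"Missing subscales: {missing}")
    return sum(ratings[s] for s in SUBSCALES) / len(SUBSCALES)

# Hypothetical subscale ratings averaging to P1's score of 45
print(raw_tlx({
    "mental": 70, "physical": 20, "temporal": 50,
    "performance": 40, "effort": 55, "frustration": 35,
}))  # 45.0
```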
8. Accessibility Testing Checklist (WCAG 2.1)
| WCAG Principle | Result | Notes |
| --- | --- | --- |
| Perceivable (e.g., alt text, color contrast) | ✔️ | Passed |
| Operable (keyboard navigation, focus order) | ❌ | Issue in dropdown menu |
| Understandable (clear labels, feedback) | ✔️ | Passed |
| Robust (screen reader compatibility) | ❌ | Some headings not announced |
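For tracking results across audits, the checklist rows can be stored as structured records. A minimal sketch, assuming a simple record type (`WcagCheck` and its fields are illustrative, not a WCAG-defined format):

```python
from dataclasses import dataclass

@dataclass
class WcagCheck:
    principle: str   # Perceivable, Operable, Understandable, Robust
    criterion: str   # what was checked
    passed: bool
    issue: str = ""  # description filled in when the check fails

# The four example rows from the checklist above
checks = [
    WcagCheck("Perceivable", "alt text, color contrast", True),
    WcagCheck("Operable", "keyboard navigation, focus order", False, "Issue in dropdown menu"),
    WcagCheck("Understandable", "clear labels, feedback", True),
    WcagCheck("Robust", "screen reader compatibility", False, "Some headings not announced"),
]

# Summarize failures for the report
for c in checks:
    if not c.passed:
        print(f"{c.principle}: {c.issue}")
```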
9. Emotional Response Grid (Affect Grid or SAM)
After each task, have participants mark their valence (pleasure) and arousal (activation) on a two-dimensional grid, rating each axis from 1 to 9.
| User | Valence (1–9) | Arousal (1–9) | Interpretation |
| --- | --- | --- | --- |
| P1 | 6 | 8 | High energy, positive |
| P2 | 4 | 2 | Low energy, neutral |
| P3 | 2 | 7 | Frustrated or anxious |
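The interpretation column can be generated mechanically from the two ratings. A minimal sketch; the cutoffs are illustrative assumptions chosen to reproduce the labels in the table above, not part of the Affect Grid or SAM instruments:

```python
def affect_label(valence: int, arousal: int) -> str:
    """Coarse label for a (valence, arousal) rating on a 1-9 grid.

    Illustrative cutoffs only: >= 6 / <= 3 split each axis,
    with the middle band treated as neutral/moderate.
    """
    if not (1 <= valence <= 9 and 1 <= arousal <= 9):
        raise ValueError("Ratings must be between 1 and 9")
    v = "positive" if valence >= 6 else "negative" if valence <= 3 else "neutral"
    a = "High energy" if arousal >= 7 else "Low energy" if arousal <= 3 else "Moderate energy"
    return f"{a}, {v}"

# The three rows from the table above
for user, val, aro in [("P1", 6, 8), ("P2", 4, 2), ("P3", 2, 7)]:
    print(user, "->", affect_label(val, aro))
# P1 -> High energy, positive
# P2 -> Low energy, neutral
# P3 -> High energy, negative
```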
10. Conclusion and Multidimensional Recommendations
Summary of Findings:
- The system performed well in usability and emotional engagement but had significant accessibility gaps and elevated cognitive load in complex task flows.
Recommendations:
- Simplify multi-step workflows to reduce cognitive strain.
- Improve screen reader compatibility and keyboard navigation.
- Adjust the UI for better visual hierarchy and information chunking.
- Introduce onboarding/tutorials for new users.