Responsible AI Disclosure
& Reporting System
Empowering early-stage AI startups to assess and manage risks without compromising agility, while accelerating business growth and investor appeal
RADARs
Responsible AI Disclosure & Reporting System.
  
Project Brief
A web application that streamlines AI transparency reporting for early-stage AI startups by combining smart templates with real-time guidance. It enables startups to focus on their priorities while staying informed about potential risks.
  
  
Sponsored By
My Role
  
UX Researcher & Designer
Front & Back-end Developer
Project Type
  
B2B Team Project
Time Frame
  
Sep 2024 - Mar 2025
Tools Used
  
Cursor, Webflow, Zapier, Figma, Adobe Photoshop, Lottie
Problem Statement
In an increasingly competitive startup landscape, survival is the top priority for AI startups, leaving little time and resources for ensuring transparency in AI operations. However, the lack of transparency increases business risks, making it harder to attract investors and build long-term trust.
40%
of startups reduced scale after AI transparency incidents
18+
months are usually required to rebuild trust with clients
Moreover, existing frameworks and toolkits for responsible AI practices are difficult to use and quickly become outdated, leaving startups without practical, up-to-date solutions. As a result, many AI startups prioritize survival over compliance, overlooking risks until they escalate into costly business threats.
Solution & Impact
Our tool empowers early-stage AI startups to not just survive but thrive. With insights into potential risks, our tool helps them make informed decisions that enhance investor appeal and long-term success. Once perceived as costly and time-consuming, transparency is now accessible and actionable, making it an asset rather than a burden.
Our web application streamlines AI transparency reporting for early-stage startups by combining smart templates with real-time guidance. This enables startups to focus on their priorities while staying informed about potential risks.
  
  

Step 1: Basic Input

  
Users enter information about their industry, AI base model, use case(s), and audience.
  
  

Step 2: Tailored Questions

  
Users answer a set of customized questions about their AI practices, such as data handling and testing.
  
  

Step 3: Generate Report

  
The system generates a comprehensive transparency report that identifies risks and provides suggestions.
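The three-step flow above can be sketched as a simple pipeline. This is an illustrative sketch only: the function names, question bank, and report shape are assumptions for clarity, not the production Webflow/Zapier implementation.

```python
# Hypothetical sketch of the three-step flow: profile -> tailored
# questions -> report. All names and data here are illustrative.

QUESTION_BANK = {
    "healthcare": ["How is patient data anonymized?"],
    "generic": ["How is training data sourced?", "How is the model tested?"],
}

def tailor_questions(profile: dict) -> list[str]:
    """Step 2: select questions relevant to the startup's profile."""
    return QUESTION_BANK.get(profile["industry"], []) + QUESTION_BANK["generic"]

def generate_report(profile: dict, answers: dict) -> dict:
    """Step 3: flag undocumented practices as risks, attach suggestions."""
    risks = [q for q in tailor_questions(profile) if not answers.get(q)]
    return {
        "profile": profile,
        "identified_risks": risks,
        "suggestions": [f"Document: {q}" for q in risks],
    }

# Step 1: basic input about industry, base model, use case, and audience.
profile = {"industry": "healthcare", "base_model": "GPT-4", "audience": "clinicians"}
answers = {"How is patient data anonymized?": "k-anonymity before ingestion"}
report = generate_report(profile, answers)
```

The key design idea is the filtering in Step 2: startups only see questions relevant to their industry, which keeps the assessment short.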

56x

Faster responsible AI learning & reporting process for AI startups

20k+

Expected legal consultation cost savings for AI startups

2

Companies are helping test our solution
Process & Approach
September 2024
Research
● Secondary research
● Field study and research
● 13 subject-matter expert interviews
Insights:
● "Responsible AI is important!"
● There are already countless Responsible AI frameworks on the market
November 2024
Pivot & Define
● Research questions
● Product direction
● Problem scope
Insights:
● AI startups do not care about Responsible AI; they make development and survival their top priority.
● Transparency in AI practices (part of Responsible AI) is already a trend from a legal perspective.
Helping startups improve transparency on AI practice could be our breaking point.
The “Catch-22” for AI Startups:
Why existing tools don't work:
● No mandatory laws yet for AI companies in the U.S.
● Frameworks from large corporations act more as legal shields, often spanning 50-200 unreadable pages.
December 2024
Prototype
● System architecture
● User flow
● Low to high fidelity UX/UI
● Frontend & backend implementation
February 2025
Iterate
● 4 prototype usability tests
● 3 design iteration sprints
● 5 functional usability tests
● 6 stress tests
● Debug sprint sessions
March 2025
Deliver
● Documentation
● Cloud environment setup
● Asset handover (design assets, codebase, technical documentation)
● Open Pitch & Poster Session
● Performance monitoring & feedback collecting
Challenges & Pivots
Deliverables - UI/UX
Custom Profile
We filter out the noise and enable startups to focus only on risks specific to their industry.
Tailored Assessment
We provide clear guidance for each question, with AI experts available 24/7.
Analysis & Recommendation Report
Automatically generates customized evaluation reports with actionable suggestions.
Link to the live demo website: https://brightpath-report.webflow.io/rai-input
* This demo is still under iteration. If the report generation is stuck on the processing page, it may be due to an ongoing system update on our end. Interface variations may occur depending on the development stage.
Design Iterations
Through iterative usability testing, we identified and resolved issues, refining the interface across all features. For this presentation, I’ve selected the 'Section Recommendation' feature interface as a representative example.
V1 - Mid-fidelity Usability Testing Results
V2 - Hi-fidelity Usability Testing Results
V3 - Hi-fidelity Usability Testing Results
V4 - Hi-fidelity Usability Testing Results
Success Metrics for The Whole Prototype
We conducted six structured tests to evaluate report generation time, accuracy, and relevance under various conditions. Based on the feedback received, we refined the code, resolved bugs, and performed another round of six structured tests.
1. Accuracy of Output
Required: 95% · Testing result: 83.3% · Result after refinement: 100%
Description:
Evaluates the precision of generated reports by tracking errors, inconsistencies, or missing information in the responses.
Process:
We evaluated the correctness of the system-generated reports by comparing them against expected results.

2. Relevance of Suggestions
Required: 90% · Testing result: 92% · Result after refinement: N/A
Description:
Assesses how well the provided recommendations align with industry compliance standards and user-specific contexts.
Process:
Suggestions were assessed for accuracy and industry relevance.

3. Report Generation Time
Required: 1'00'' · Testing result: 1'13'' · Result after refinement: 1'15''
Description:
Time taken from user input to the system generating complete recommendations.
Process:
Time 1: 1'05''  Time 2: 1'09''  Time 3: 1'44'' (failed)
Time 4: 1'14''  Time 5: 1'06''  Time 6: 1'30''

The time required to call the API, complete the validation loop, and generate the report slightly exceeded our planned duration.

The third test failed because the input contained too many risks, surpassing the prompt token limit. We implemented error prevention functions and conducted an additional round of tests.

Given the low cost-effectiveness of speeding up the demo under the current system architecture, we decided to implement a 'Tips' feature on the generation processing page to alleviate customer impatience.
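The token-limit failure in Test 3 can be guarded with a pre-flight check before calling the API. The sketch below illustrates the error-prevention idea under stated assumptions: the token limit and the 4-characters-per-token heuristic are placeholders, not our production values.

```python
# Illustrative guard against prompt-token overflow: estimate the prompt
# size and trim the risk list before calling the report-generation API.
# MAX_PROMPT_TOKENS and the estimator heuristic are assumptions.

MAX_PROMPT_TOKENS = 8000

def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    return len(text) // 4 + 1

def fit_risks_to_budget(base_prompt: str, risks: list[str]) -> list[str]:
    """Keep risks only while the estimated prompt stays within the limit."""
    budget = MAX_PROMPT_TOKENS - estimate_tokens(base_prompt)
    kept, used = [], 0
    for risk in risks:
        cost = estimate_tokens(risk)
        if used + cost > budget:
            break  # remaining risks deferred to a follow-up report
        kept.append(risk)
        used += cost
    return kept
```

With a guard like this, an oversized input degrades gracefully (a shorter report) instead of failing the whole generation run.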
Summary & Learning
One sentence to summarize the iteration process:
But we did even more. We integrated visual aesthetics, brand identity, and functionality, leveraging a minimalist design approach to reduce user effort and maximize efficiency: