Cedric AI
Project Name: Cedric
Client: Amazon
Role: Product Experience Designer (UI & UX)
Tools Used: Figma, FigJam, Claude, Cloudscape Design System
Project Duration: 6 months
Industry: Technology / Enterprise AI Solutions

Overview
Cedric is an internal AI-powered application built for Amazon employees to assist with document summarization, content queries, and idea generation. Initially developed as a hackathon project, it evolved into a widely used tool across multiple Amazon verticals, including Devices, E-Commerce, AWS, and Advertising. The chatbot empowers employees to quickly access and summarize key information while ensuring data security. I was one of the earliest product designers to join the team, collaborating with cross-functional partners to solve design challenges that arose from Amazon's massive scale.
Project Brief

Design an intuitive, secure, and scalable Gen AI application that enables Amazon employees to seamlessly summarize large documents, retrieve important information, and receive actionable insights. The goal was to create a tool that improved productivity while adhering to Amazon’s strict security requirements.

Key Insights & Design Challenges
Given Amazon's global scale and the massive volume of documents employees handle, the primary insight was the need for a seamless way to summarize those documents and retrieve critical information without compromising data security. The challenge was not just handling large amounts of data, but also ensuring that employees could use the tool intuitively, without overwhelming backend systems or risking data leakage.

As a designer, my role was to solve the following design challenges:
Intuitive Document Interaction in a Secure Environment
Employees needed to interact with large volumes of documents quickly, but without the risk of data leakage. The challenge was to create a secure environment where documents could be accessed and summarized without compromising sensitive information.
Scalability and Backend Load
With potentially thousands of employees using the system simultaneously, the backend could quickly become overloaded. The design needed to manage token limits and backend processing efficiently, ensuring smooth performance for all users regardless of load.
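One common way to keep simultaneous interactions from overloading a backend is per-user rate limiting. The sketch below is a hypothetical illustration of a token-bucket limiter, not Cedric's actual implementation; all names are my own:

```python
import time


class TokenBucket:
    """Per-user rate limiter: each request consumes one token;
    tokens refill at a fixed rate up to a burst capacity."""

    def __init__(self, rate_per_sec: float, capacity: int):
        self.rate = rate_per_sec
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill tokens based on elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

A limiter like this lets a burst of requests through immediately, then throttles sustained load to the refill rate, which is one way to smooth demand across thousands of concurrent users.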
Design Process
Problem Identification & Design Brief
The first step in the process was collaborating with stakeholders to understand the pain points. After discussing with product leads, we identified that document summarization was the core need, followed by secure handling of information. We also understood the scale at which the tool had to operate and that Amazon’s security requirements were paramount.

Research & Cross-functional Collaboration
I worked closely with teams from various Amazon departments (Devices, E-Commerce, AWS, and Advertising) to understand their specific needs and workflows. Through user research, we identified key user personas and potential bottlenecks in document management. This feedback directly influenced the design approach, especially in ensuring an intuitive user interface.
I created wireframes and prototypes that incorporated:
Document Upload and Summarization: The tool was designed to allow easy document upload (e.g., PDFs, internal wiki pages) for immediate summarization.
Add to Context and Memory: One of the key innovations was the “Add to Context” feature, which let the system remember documents that had been uploaded or summarized previously, effectively "storing" them in memory. This reduced excessive token usage, since the AI model could recall prior context without rescanning the document.
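To make the "Add to Context" idea concrete, here is a minimal sketch of how such a cache might work, assuming summaries are keyed by a hash of the document text; the class and method names are my own illustration, not Cedric's actual implementation:

```python
import hashlib


class ContextStore:
    """Hypothetical 'Add to Context' cache: summaries are keyed by a
    hash of the document text, so a document added once can be recalled
    without re-sending its full text to the model."""

    def __init__(self):
        self._summaries = {}

    @staticmethod
    def _key(text: str) -> str:
        return hashlib.sha256(text.encode("utf-8")).hexdigest()

    def add(self, text: str, summary: str) -> None:
        # Store the summary under the document's content hash.
        self._summaries[self._key(text)] = summary

    def recall(self, text: str):
        # Return the cached summary, or None if the document is unknown.
        return self._summaries.get(self._key(text))

    def build_prompt_context(self) -> str:
        # Prior summaries stand in for full documents, saving tokens.
        return "\n\n".join(self._summaries.values())
```

The token saving comes from `build_prompt_context`: each later prompt carries only the short summaries rather than the full documents.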

Token Management: The interface included visual feedback showing the current token usage, helping users stay within system limits. The system was designed to manage backend load by limiting token usage and ensuring that documents exceeding a certain length (e.g., 10 pages) could be processed more efficiently by prompting users to download summaries or upload documents in smaller chunks.
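The chunking behavior described above can be sketched as follows. This is a simplified illustration under two assumptions of mine: a rough characters-per-token heuristic stands in for a real tokenizer, and documents split on paragraph boundaries:

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    # A production system would use the model's actual tokenizer.
    return max(1, len(text) // 4)


def chunk_document(text: str, max_tokens: int = 2000) -> list:
    """Split a document into chunks that each fit the token budget,
    breaking on paragraph boundaries."""
    chunks, current, current_tokens = [], [], 0
    for para in text.split("\n\n"):
        t = estimate_tokens(para)
        # Flush the current chunk if adding this paragraph would overflow.
        if current and current_tokens + t > max_tokens:
            chunks.append("\n\n".join(current))
            current, current_tokens = [], 0
        current.append(para)
        current_tokens += t
    if current:
        chunks.append("\n\n".join(current))
    return chunks
```

Paired with a visible token-usage indicator in the UI, logic like this lets oversized documents be processed piecewise instead of being rejected outright.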
Proactive Suggestions & Feedback Prompts: Based on the user's interactions, Cedric could suggest relevant documents or offer additional summaries. This made the AI experience more dynamic and helpful, providing context-specific recommendations that improved efficiency.

Security & Data Protection: I worked closely with Amazon’s security team to ensure the design adhered to strict internal security protocols. Features like Incognito Mode and Text Protection were added to prevent sensitive data from being shared or copied.
Results & Impact
Increased Adoption: Since its launch, Cedric has been widely adopted by over 100,000 employees across various Amazon teams. It became an integral part of their workflow, especially for document-heavy tasks, significantly improving efficiency.
Backend Efficiency & Scalability: By implementing memory and context management, along with token limits, the system effectively scaled without overwhelming backend systems, even as the user base grew.
Positive Feedback on Usability: Employees appreciated the ability to quickly summarize documents, the proactive suggestions for additional actions, and the overall ease of use. The Add to Context feature was particularly praised for reducing repetitive tasks and streamlining workflows.