AI Learning Model
Too Long; Didn't Read
The Problem
With an ever-expanding repository of historical artifacts comprising billions of images, FamilySearch urgently needs a more streamlined and efficient indexing approach.
The Solution
By establishing collaborative ties and incorporating PRImA's Aletheia tool, FamilySearch will be able to generate accurate data sets from historical artifacts. This ground-truth data will empower the organization to build and refine machine-learning algorithms, making the indexing process more efficient. For this project, I designed the management and operational processes for the machine-learning work.
Introduction
FamilySearch currently processes over five million historical images spanning various record types, a volume that grows rapidly each year. The volunteers and users assisting with indexing these records cannot keep up with the rate of image ingestion.
FamilySearch decided to collaborate with PRImA (Pattern Recognition & Image Analysis), which was developing an accurate and cost-effective tool for recognizing and annotating scanned documents. The plan was to create ground-truth data by annotating images of various historical records and then use that data to train a machine-learning model to read and index them.
The Proper Workflow
The existing team lacked a designated UX designer, so my manager suggested I join. Shortly after joining, we discovered that the prototype's workflow needed to change. At the time, operators completed machine-learning tasks one at a time and returned to the management dashboard after each one. Instead, they wanted to stay in the Aletheia tool until they had finished all the tasks for a given artifact.
Before attempting to solve this issue, I first took some time to understand how the current prototype worked. I spoke with one of the project's lead developers, who walked me through it. Next, I talked to the manager of technical machine learning development at FamilySearch to identify the needs of this problem space. During this process, I learned that once an operator finishes a task, they mark it as complete, and an overlay dialog box pops up to confirm the action.
I experimented with different overlay dialog boxes that, after confirming a task as complete, asked the user whether they would like to continue to the next task. However, when I presented this solution to the users, they wanted to see which tasks were completed, which task was next, and which tasks remained.
My second iteration included a way for operators to choose which task to complete next. However, after presenting that solution to the technical machine learning development manager, I learned that the tasks had to be completed in a fixed order.
My third iteration asked the user whether they wanted to continue to the next task, which was listed in the dialog along with all remaining tasks for the current artifact. This version tested very well with operators and received the manager's stamp of approval.
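To make that ordering concrete, here is a minimal sketch of how a dialog might compute the next task and the remaining list. The Task shape and the task names are my own illustrative assumptions, not the actual Aletheia data model.

```typescript
// Hypothetical sketch: tasks for an artifact must be completed in a fixed
// order, so the dialog shows the next task followed by every task after it.
interface Task {
  name: string;
  completed: boolean;
}

// Assumes the task array is already stored in its required completion order.
function remainingTasks(tasks: Task[]): { next?: Task; rest: Task[] } {
  const pending = tasks.filter((t) => !t.completed);
  return { next: pending[0], rest: pending.slice(1) };
}

// Example: only the first task is done, so the dialog offers the second one
// as "next" and lists the third as remaining.
const dialog = remainingTasks([
  { name: "Annotate regions", completed: true },
  { name: "Annotate text", completed: false },
  { name: "Review", completed: false },
]);
```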
The overlay dialog box solved only part of the workflow issue. The dashboard also needed to change to reflect the new workflow, shifting its focus to artifacts and the tasks associated with them.
The existing dashboard was organized by task type; the new dashboard, organized by artifact, would give operators the optimal view for a more efficient flow. I did have one design constraint: the new dashboard had to be a table chart. The existing dashboard also had too much text and very few visuals, and all that text could make operators less efficient in their work.
To address the dashboard issue, I reorganized the table to list artifact IDs first, with an image icon next to each artifact and its tasks listed to the right of the artifact ID. This change helps operators see each artifact and its associated tasks at a glance and supports their optimal workflow. To reduce the amount of text, I created color-coded status chips that tell the operator whether a task is ready to annotate, in review, on hold, in rework, or completed. These chips help operators work more efficiently. The updated dashboard tested very well with the manager and operators, so the operators finally had their optimal workflow.
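As a rough illustration of those chips, the sketch below maps each task state to a label and a color. The specific state names, labels, and colors are assumptions for the example; the real values would come from FamilySearch's design system.

```typescript
// Hypothetical sketch: the five task states described above, each mapped to a
// chip label and color. Names and colors are illustrative assumptions.
type TaskStatus = "ready" | "review" | "on-hold" | "rework" | "completed";

interface Chip {
  label: string;
  color: string;
}

const statusChips: Record<TaskStatus, Chip> = {
  ready: { label: "Ready to annotate", color: "blue" },
  review: { label: "Ready to review", color: "purple" },
  "on-hold": { label: "On hold", color: "orange" },
  rework: { label: "Rework", color: "red" },
  completed: { label: "Completed", color: "green" },
};

// A dashboard row can then render a compact chip instead of a line of text.
function chipFor(status: TaskStatus): Chip {
  return statusChips[status];
}
```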
Aletheia Updates
Inside the Aletheia tool, the technical machine learning development manager asked for the ability to assign a task. As I worked on updating the bottom bar to meet that need, I decided to improve the existing elements to enhance clarity.
For the first iteration, I updated the task-state indicator to use the colored chips I had created for the dashboard. I also made the save button a primary button and adjusted the placement of the other information on the bottom bar: the project name, the task name, who is doing the work, who is reviewing the work, and the artifact identification number. I also changed one of the bottom bar's icons. Before my arrival, the team had designed an icon for putting a task on hold, but it was confusing because it was a compose icon. My last modification was to change that compose icon to an edit icon and add a menu overlay that allows a user to mark a task as complete, put a task on hold, or assign the task to an operator (if the user is a manager).
For the second iteration, I changed the edit icon to an overflow menu icon and placed it on the far right side of the bottom bar. I then arranged the project name, task type, and artifact number in the center of the bar and placed the task's state, who will complete the task, and who will review it on the far left.
The manager was very pleased with these changes to the bottom bar, and soon had the functionality she requested.
Completing the Actions
Once the bottom bar was complete and approved, I started designing the experience for capturing why a user was putting a task on hold and for assigning any given task. There are several reasons for putting a task on hold, and each reason is displayed on the dashboard.
When a user clicks the "put on hold" button, an overlay appears with a text box whose placeholder text reads "reason." The operator types the reason in the text box and clicks the "put task on hold" button.
An overlay also appears when assigning a task. The manager clicks the dropdowns, selects the operators who will annotate and review the work, and then clicks the "assign" button.
Errors
Around this time, the team discovered that two users could edit the same task at once, and operators would run into errors if they then tried to mark the task as complete or put it on hold.
To address this issue, I created two error messages. Both explain to the user why the action failed and how to fix the problem; in this case, the fix is to refresh the page.
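For context, a conflict like this is often handled with an optimistic-concurrency check. The sketch below shows one way such a check could surface the error described above; the endpoint, version field, and status code are assumptions for illustration, not the actual FamilySearch API.

```typescript
// Hypothetical sketch of an optimistic-concurrency check that could produce
// the error described above. The endpoint and version field are assumptions.
interface TaskRef {
  id: string;
  version: number; // incremented server-side on every saved edit
}

async function completeTask(task: TaskRef): Promise<void> {
  const response = await fetch(`/api/tasks/${task.id}/complete`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    // The server rejects the request if another user edited the task first.
    body: JSON.stringify({ expectedVersion: task.version }),
  });
  if (response.status === 409) {
    // Surface the error message: explain why the action failed and how to
    // fix it (refresh the page to load the latest version of the task).
    throw new Error(
      "This task was changed by another user. Refresh the page and try again."
    );
  }
}
```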
Conclusion
My work on this team has transformed how operators teach AI to read historical artifacts, and the operators' workflow is now more efficient. This project is ongoing, and improvements are being made faster than I can document them here.
Key Takeaways
I dropped the ball on some of the wording in these designs. Take the time to get the wording just right.
Behind a well-designed website are users providing invaluable feedback. Present to and test with your users often.