Short-form video platforms are popular among young adults, but their design, combined with AI tools, may amplify the spread of misinformation.
In partnership with the Google News team, we built on Google's research and blended in our own insights from young adult users to craft a speculative platform that harnesses the benefits of generative AI to curb misinformation.
During this speculative project, our team spent 4 months researching young adults' engagement with short-form videos and understanding potential future impacts of generative AI. We then devoted 2 months to the design phase, translating our insights into actionable solutions.
2 UX Designers (including me)
1 UX Researcher
Advised by Google
Figma - product design and ideation
ProtoPie - microinteractions, advanced prototyping
Dovetail - user research, coding, affinity diagramming
• Led ideation sessions and designed the 'connections' feature
• Directed our research plan and facilitated UX research sessions
• Fostered collaboration with Google and University of Washington faculty
I wrote a large portion of our research plan, facilitated sessions, and analyzed transcripts. The research aimed to understand how users currently encounter misinformation on these platforms and to gauge their sentiment toward generative AI's emerging influence on short-form video.
I conducted semi-structured interviews with remote screen sharing with 11 participants, who used TikTok, Instagram Reels, and YouTube Shorts.
With my team's help, I used thematic analysis to code emerging themes, labeling them with relevant tags in Dovetail. I then started an affinity diagram to surface notable quotes with the team and encourage brainstorming.
Through in-depth research, we discovered the problem we needed to address: young adults' trust in information revolves around the source's relatability rather than the accuracy of the information itself.
When we met with interaction designers, PMs, and researchers on the Google News team, they noted similar findings in their own research and felt that traditional fact-checking efforts are unsuccessful. This brought us to our HMW: "How might we make thinking critically about short-form videos not feel like work?"
We dove into sketching, storyboarding, and whiteboarding, punctuated by intense discussions. Our goal was to balance the authenticity young adults seek against the rise of AI-generated content and misinformation, preserving the platform's value for them.
We each pinned up 20 sketches brainstorming how to preserve the authenticity young adults crave while helping them think more critically about the information they're consuming.
These are 8 of my 20 sketches, which helped drive our final product concept. The concepts focused on addressing misinformation created by generative AI by increasing creator transparency.
After reviewing our sketches, we progressed to storyboarding. I advocated for my teammate Kieran's storyboard, shown below (design is a team sport!), which highlighted a video creator's connections in place of fact-checking.
We initially toyed with an immersive VR "party" metaphor to visualize a creator's network, but recognized its infeasibility in real-world settings like bus stops and trains, where our target audience frequently consumes content. We moved to iOS, drawn to the idea of a map of a creator's connections; after consulting with Google, however, we decided a map would not scale well. We pivoted to designing a flipping gesture for iOS.
This offers users a sneak peek into "who's truly behind the video" and their affiliations. An engaging alternative to traditional fact-checking, it promotes transparency without asserting correctness.
The Google News team pointed us to the Material 3 design documentation and Material Design Figma plugins. Upon flipping, users instantly see an AI-summarized claim from the preceding video. My primary focus was the 'connections' feature, which spotlights a creator's digital affiliations to offer insight into content credibility.
• With a simple gesture, the user "flips" the video, immediately seeing an AI-summarized claim and revealing a section of testimonials and community feedback directly related to the video's claim (a rough interaction sketch follows this list).
• They can view other users’ video responses to the claims and film their own response. Instead of taking content at face value, they're encouraged to engage critically and make informed decisions.
• Users can further scroll to see "connections", offering an additional layer to evaluate the digital company the creator keeps, reinforcing the ethos of informed consumption.
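To make the flip concrete, here is a minimal SwiftUI sketch of the interaction. It is only a thinking aid under loud assumptions: the names (FlipSideCard, the placeholder front/back content) are hypothetical, and the real gesture, layout, and claim data would differ.

```swift
import SwiftUI

// Hypothetical sketch of the "flip" interaction; not the production implementation.
struct FlipSideCard: View {
    @State private var flipped = false

    var body: some View {
        ZStack {
            // Front: the short-form video (placeholder).
            Color.black
                .overlay(Text("Video").foregroundColor(.white))
                .opacity(flipped ? 0 : 1)

            // Back: the AI-summarized claim and community responses (placeholder).
            Color.gray.opacity(0.2)
                .overlay(Text("AI-summarized claim"))
                // Counter-rotate so the back face reads left-to-right after the flip.
                .rotation3DEffect(.degrees(180), axis: (x: 0, y: 1, z: 0))
                .opacity(flipped ? 1 : 0)
        }
        // The whole card rotates around its vertical axis when tapped.
        .rotation3DEffect(.degrees(flipped ? 180 : 0), axis: (x: 0, y: 1, z: 0))
        .onTapGesture {
            withAnimation(.spring()) { flipped.toggle() }
        }
        .frame(width: 280, height: 480)
    }
}
```

In a fuller build, the tap would likely be a swipe or drag gesture, and the back face would host the testimonial video rail and the scrollable "connections" section described above.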
The "connections" feature dives into the affiliations of a creator. Users can explore collaborators, shared opinions, and even conflicting views. This provides not just a view into who the creator aligns with, but also illuminates broader conversations and debates around specific topics or claims.
By understanding these connections, users can make informed decisions about their future content consumption. With tailored flagging, they can decide if they wish to see similar creators, topics, or claims moving forward.
Users can inspect a single connection; generative AI summarizes that connection's views on relevant topics and how those views differ from the creator in question.
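As another thinking aid, here is one way the connections data might be modeled. The types (Stance, Connection, CreatorProfile) and their fields are assumptions for illustration, not a documented FlipSide schema.

```swift
import Foundation

// Hypothetical data model for the "connections" feature.
enum Stance {
    case collaborator      // has made content with the creator
    case sharedOpinion     // publicly agrees on the claim or topic
    case conflictingView   // publicly disagrees
}

struct Connection {
    let handle: String
    let stance: Stance
    let aiSummary: String  // generative-AI summary of how their views relate
}

struct CreatorProfile {
    let handle: String
    let connections: [Connection]

    /// Connections who publicly disagree — useful for surfacing debate around a claim.
    var conflictingViews: [Connection] {
        connections.filter { $0.stance == .conflictingView }
    }
}
```

Modeling disagreement explicitly, rather than only affiliation, is what lets the feature illuminate broader debates instead of just showing who the creator aligns with.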
We showcased the culmination of our efforts in a tangible, relatable format. Our presentation at the iconic Seattle Library brought to life the very features we'd spent long nights designing, particularly emphasizing the "claims" and "connections" features.
Retention: Track how many users continue to use FlipSide after their first session. Do they prefer FlipSide or the standard SFV experience?
Flipping & Session Duration: Are users who flip more engaged with the platform? Are users spending more time on the FlipSide or main video scroll? What aspects of our platform are the most engaging?
Flagging, Liking, and Saving: Monitor how flagging is used. Does flagging guide users to a more desirable space over time? Are we presenting users with desirable content? Do our AI-generated summaries seem accurate and trustworthy? Could flagging signal the presence of misinformation?
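To ground the first of these metrics, here is a minimal sketch of how retention might be computed from raw session logs. The Session type, its field names, and the seven-day window are assumptions for illustration, not an actual FlipSide analytics pipeline.

```swift
import Foundation

// Hypothetical session record; a real analytics pipeline would have richer events.
struct Session {
    let userID: String
    let start: Date
}

/// Fraction of users who come back within `window` seconds of their first session.
func retentionRate(sessions: [Session], window: TimeInterval = 7 * 86_400) -> Double {
    let byUser = Dictionary(grouping: sessions, by: \.userID)
    guard !byUser.isEmpty else { return 0 }

    let returned = byUser.values.filter { userSessions in
        let starts = userSessions.map(\.start).sorted()
        guard let first = starts.first else { return false }
        // A user "returns" if any later session begins within the window.
        return starts.dropFirst().contains { $0.timeIntervalSince(first) <= window }
    }
    return Double(returned.count) / Double(byUser.count)
}
```

Segmenting the same computation by whether a user ever flipped would speak directly to the second question: are users who flip more engaged with the platform?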
Meeting Users Where They Are: One of the most profound takeaways was the importance of truly understanding and empathizing with our users. Rather than introducing a new, complex solution like VR maps, which users might be hesitant to engage with, it's vital to craft solutions that seamlessly integrate into their current habits.
The Power of Iteration: The transition from an immersive VR concept to a more user-centric flipping gesture reinforced the significance of iteration in design. Not every initial idea will be the right fit, and being adaptable and open to change is crucial.
Complexity Behind Simplicity: The act of a simple flip was inspired by the Google ethos of addressing the most complex challenges with the simplest solution. The challenge of making critical thinking on short-form videos engaging, yet straightforward, was a lesson in the intricate art of simplification.
Stakeholder Management: Working closely with stakeholders from Google and professors from the University of Washington offered a unique perspective on balancing user needs with business and academic priorities. It taught me the value of communication, feedback, and iterative design, and showed me what academics value versus what industry professionals value.
Cross-disciplinary Insights: Interacting with experts across disciplines broadened my understanding of UX design's interdisciplinary nature. It emphasized that good design doesn't exist in isolation but is a product of varied perspectives, from technological to sociological.
Holistic User Experience: Beyond the interface, understanding the broader implications of our design decisions, like the potential societal impact of addressing misinformation, underscored the expansive role of a UX designer.