Bib tagging is still the backbone of endurance photo discovery for one simple reason: it scales. It’s familiar, it’s fast, and it maps cleanly onto how runners think about their race day identity. But anyone who has photographed enough events also knows the uncomfortable truth: the photos people care about most are often the ones where the bib is hardest to read. Arm swing, hydration vests, jackets, glare, crowded chutes, angled turns, and the general chaos of motion all create the same failure mode—great photos with obscured numbers. Flashframe’s tagging strategy is built around that reality, not around an idealized “bib always visible” world.
Bib tagging Flashframe was the first company in the automated bib tagging space; it pioneered the approach and holds a patent on bib search technology. Flashframe's bib tagging uses machine learning to read bibs, so tagging photos takes just seconds. This is the foundation: when the bib is visible, the system is fast and reliable, and it gives athletes the simplest possible path from search to checkout. Bib tagging isn't outdated. It's still the primary discovery pattern for marathons, triathlons, cycling events, and anything with a numbered identifier that's reasonably camera-visible.
Obscured bib recovery The real work starts when the bib isn’t readable. Flashframe has a heuristic comparison and matching system that looks at sequentially taken photos from a single position (the same camera) to infer identity when a bib is temporarily blocked. This is the kind of practical solution that fits how races are photographed: runners move through the same frame in a short burst, posture changes from shot to shot, and the bib often becomes visible in adjacent frames even if it’s blocked in the best-looking one. The heuristic approach is explicitly not facial recognition. It’s a way to recover “blocked” photos by using the sequence itself as evidence. In practice, it’s been extremely effective at surfacing the exact images that would otherwise get lost—often the dynamic, mid-stride moments that people actually want to buy. All these photos are then matched to a bib number and searchable by the participant.
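To make the sequence-as-evidence idea concrete, here is a minimal sketch of that kind of heuristic, not Flashframe's actual (patented) implementation: frames from a single camera carry timestamps and an optional bib read, and an obscured frame inherits the bib from the nearest confidently read neighbor within a short time window. The `Frame` type, field names, and the two-second window are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Frame:
    timestamp: float           # seconds since the camera's first shot
    bib: Optional[str] = None  # bib read by OCR, None if obscured

def recover_blocked_bibs(frames: List[Frame], window: float = 2.0) -> List[Optional[str]]:
    """For each frame with no bib read, borrow the bib from the nearest
    frame (same camera) that has one, if it was shot within `window`
    seconds. Frames with a confident read keep their own bib."""
    readable = [i for i, f in enumerate(frames) if f.bib is not None]
    result: List[Optional[str]] = []
    for f in frames:
        if f.bib is not None:
            result.append(f.bib)
            continue
        match: Optional[str] = None
        if readable:
            # nearest-in-time frame that has a confident bib read
            j = min(readable, key=lambda j: abs(frames[j].timestamp - f.timestamp))
            if abs(frames[j].timestamp - f.timestamp) <= window:
                match = frames[j].bib
        result.append(match)
    return result
```

A real system would also weigh visual similarity between frames (posture, clothing, position in frame) before propagating an identity, but the time-window skeleton is the core of why burst sequences recover blocked shots.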
Face search Flashframe now offers facial recognition that you can opt into as a photographer, making discovery even more effective. The key word there is opt-in. Bib tagging remains the backbone, and the heuristic system covers a large chunk of the "arm blocked the bib" problem without biometrics. Face search exists as an additional path for the remaining edge cases—the moments where the bib is truly not usable and the sequence isn't enough. It's a complement, not a replacement, and it's especially valuable because it reduces the number of "I can't find myself" experiences that quietly kill conversion. Athletes just upload a selfie or photo of themselves, and Flashframe will analyze the gallery for matching photos. Flashframe's terms describe face search as opt-in and explain how biometric data is used. If you want the exact language, you can find it here: https://www.flashframe.io/terms/. This matters not just for compliance, but for trust—because the athletes using the platform should know the goal is just photo discovery. It's also critical to note that photographers take responsibility for turning this functionality on and for ensuring it is legal in their respective jurisdictions.
Timing mats and near-zero untagged photos For events that can provide timing data and mats, Flashframe can also tag photos off timing signals, leaving effectively zero photos untagged. This is the completeness layer that's easy to underestimate. Timing data turns the long tail of ambiguous photos into solvable attribution problems because it narrows the candidate set based on when a runner crossed a known point. When used well, this is how you close the last gaps that purely visual tagging can't reliably solve, especially at scale. The practical outcome is fewer untagged images, fewer missed purchases, and fewer support requests from participants who assume you simply didn't photograph them.
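The candidate-narrowing idea can be sketched in a few lines. This is an assumption about how such matching could work, not Flashframe's internal logic: given a sorted list of mat crossings (race-clock time, bib) from a mat near the camera, and a photo timestamp synchronized to the race clock, the bibs that crossed within a small tolerance become the candidate set for that photo. The three-second tolerance is illustrative.

```python
import bisect
from typing import List, Tuple

def candidates_for_photo(photo_time: float,
                         crossings: List[Tuple[float, str]],
                         tolerance: float = 3.0) -> List[str]:
    """crossings: (race_clock_seconds, bib) pairs from a timing mat near
    the camera, sorted by time. Returns the bibs that crossed within
    `tolerance` seconds of the photo's clock-synchronized timestamp."""
    times = [t for t, _ in crossings]
    lo = bisect.bisect_left(times, photo_time - tolerance)
    hi = bisect.bisect_right(times, photo_time + tolerance)
    return [bib for _, bib in crossings[lo:hi]]
```

Even when several runners cross inside the window, shrinking an untagged photo from "anyone in the race" to a handful of bibs is what makes the last-mile attribution tractable.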
Manual Tagging Sometimes a person can do it better. While we don't necessarily think that's always the case, we want you as the customer to feel confident in the way things are done. If you want us to engage our manual tagging teams, they are always available to help analyze and tag your photography. If you'd like this done, we just ask that you give us 24 hours' notice at support[at]flashframe.io so that we can coordinate our teams to assist with your event photography.
Bib Number and Name Matching If you're working directly with the event, you can often get a CSV or Excel file of bib numbers and participants' first and last names. If so, you can upload that information into our system to allow athletes to search by name instead of bib number. This is very effective for folks who forget their bib number after the event, and it still lets them find their photos.
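Under the hood, this kind of name search is just an index from names to bibs. Here is a minimal sketch, not Flashframe's actual upload format: the CSV column names (`bib`, `first_name`, `last_name`) are assumptions, and names are treated as non-unique, since two participants can share one.

```python
import csv
import io
from typing import Dict, List

def build_name_index(csv_text: str) -> Dict[str, List[str]]:
    """Map a lowercased 'first last' name to every bib number registered
    under it. Column names here are hypothetical."""
    index: Dict[str, List[str]] = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        key = f"{row['first_name'].strip()} {row['last_name'].strip()}".lower()
        index.setdefault(key, []).append(row['bib'].strip())
    return index
```

A production version would also handle nicknames, accents, and partial matches, but the roster-to-index step is the whole trick: once it exists, "search by name" is an exact-match lookup followed by the same bib search athletes already use.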
The real takeaway Flashframe’s tagging approach is layered on purpose. Machine learning bib reads handle the fast, scalable baseline. The heuristic recovery handles the common “bib blocked” reality without requiring facial recognition. Opt-in face search provides a second discovery path for the hardest edge cases. Timing mats and race data can push completeness to effectively zero untagged photos when the event can provide those inputs. The result isn’t just “better tagging.” It’s better conversion, less support burden, and a more reliable promise to participants: if you were on the course, you’ll be able to find yourself.