In 2016 at SXSW, I sat in on an SXSports session, “1984 Meets Moneyball: Who Owns Player Data?” [1]. It remains the best privacy talk I’ve heard—not for its novelty, but because no one else was talking about this space. The panelists described a future where athlete data could—and in places already did—quietly predict, price, and decide: contract terms, scholarship bets, even inferences about health and career longevity. They pointed to systems like Catapult’s GPS/IMU tracking and NBA arena camera arrays that measure movement and efficiency, with hidden secondary uses (e.g., contract pricing) by teams and owners.
I’ve kept tabs since, waiting for research interest and public discourse to catch up. Interest is finally picking up, but perhaps not fast enough to influence change: collection has only grown, AI is now the default, NIL is here, and the popularity of sports continues to expand.
With the next high‑school season starting, this post looks at several major vendors in high‑school sports—how they use data beyond the core service and where incentives may misalign—to spark a better conversation about athlete privacy.
- Ingest & storage — platform accounts for teams/schools; cloud hosting of games and practice film.
- Analysis & AI — auto‑tagging, event detection, highlights, pose/ID, search and comparison tools.
- Telemetry — GPS/IMU/HR wearables (Catapult, WIMU/Titan, STATSports) feeding dashboards and readiness metrics.
- Team & comms — roster, scheduling, messaging, and sharing links (e.g., TeamSnap; built‑ins).
- Distribution — private links, recruiting shares, public streaming (school sites, NFHS Network, social).
- Monetization — subscriptions, sponsorship/ads, and data‑powered value‑adds (rankings, leaderboards, marketplaces).
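One way to hold this capability map in your head is as a pipeline of stages, each with its own secondary‑use question. The sketch below is purely illustrative—the stage names, data types, and risk labels are my own shorthand, not any vendor’s actual architecture:

```python
from dataclasses import dataclass

@dataclass
class Stage:
    """One layer of the (hypothetical) vendor stack."""
    name: str
    data_in: list           # what athlete data enters this layer
    secondary_use_risk: str # where incentives may misalign

PIPELINE = [
    Stage("ingest_storage", ["game film", "practice film"], "indefinite retention"),
    Stage("analysis_ai", ["auto-tags", "pose/ID", "embeddings"], "model training on uploads"),
    Stage("telemetry", ["GPS/IMU/HR streams"], "readiness/risk scoring"),
    Stage("team_comms", ["rosters", "schedules", "messages"], "contact/behavioral data"),
    Stage("distribution", ["private links", "public streams"], "audience analytics"),
    Stage("monetization", ["subscriptions", "rankings"], "ad/adtech sell-share"),
]

def audit_questions(pipeline):
    """The question this post asks at every layer of the stack."""
    return [f"{s.name}: who else can use {'/'.join(s.data_in)} ({s.secondary_use_risk})?"
            for s in pipeline]
```

The point of framing it this way: every stage has an audit question, and the answers live in different documents (privacy policies, FAQs, DPAs), which is part of the problem.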
- Capture + host: Focus cameras and team uploads feed a central video library.
- AI‑assisted analysis: auto‑cut, tags, highlights, and search to break down games.
- Optional telemetry: integrations like WIMU and Titan GPS overlays connect sensor data to film. [5] [6] [7]
- Sharing: team and recruiting shares; school/national distribution via partners.
- Exposure & recruiting (athletes + parents): a consistent film library and easy highlight tools make it simpler to get seen by college programs.
- Development & feedback (athletes + coaches): tags, clips, and comparisons help track progress, set goals, and plan training.
- Shareability (parents + athletes): one place to package film, stats, and context; simple links to send to coaches and recruiters.
- Team logistics (parents): schedules, messages, and availability in one app reduce game‑day chaos.
- Community & memory (families): livestreams for relatives who can’t travel; highlight reels for senior nights.
- NIL & brand‑building (where allowed): athletes can grow a public profile; audience metrics (views, watch‑time, followers) help with brands, collectives, and marketplaces.
There are two big buckets—video footage and telemetry (wearable/sensor data). Each has a raw form and a derived form created by AI/analytics. This snapshot does not cover medical records or protected health information (PHI), though telemetry can be health‑adjacent (e.g., HR/HRV) and may intersect with injury reporting and return‑to‑play decisions—features increasingly highlighted in vendor marketing as reasons to adopt these platforms.
- Raw footage: the video files captured by team/venue cameras (e.g., Hudl Focus, Veo, Pixellot).
- Footage‑derived data: information extracted from video—player IDs, jersey detection, pose/skeleton tracks, event tags (e.g., made 3PT, tackle), highlights, similarity/“embeddings,” and ranking or risk signals.
- Raw telemetry: data directly from sensors like GPS/IMU/heart‑rate monitors (speed, location, acceleration, heart beats).
- Telemetry‑derived data: analytics built from raw streams—total distance, top speed, accel/decel counts, “PlayerLoad,” readiness/fatigue indices, injury‑risk flags.
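To make the raw‑vs‑derived split concrete, here is a minimal sketch of how derived metrics fall out of a raw position stream. The field names and thresholds are hypothetical—this is not any vendor’s actual schema or algorithm:

```python
from dataclasses import dataclass
import math

@dataclass
class Sample:
    """One raw telemetry reading (hypothetical schema)."""
    t: float  # seconds since session start
    x: float  # meters east of field origin
    y: float  # meters north of field origin

def derived_metrics(samples, accel_threshold=2.0):
    """Turn raw position samples into the kind of derived metrics vendors report.

    Returns total distance (m), top speed (m/s), and counts of acceleration/
    deceleration events whose magnitude exceeds `accel_threshold` (m/s^2).
    """
    total_dist, top_speed = 0.0, 0.0
    accels, decels = 0, 0
    prev_speed = None
    for a, b in zip(samples, samples[1:]):
        dt = b.t - a.t
        if dt <= 0:
            continue  # skip out-of-order or duplicate timestamps
        dist = math.hypot(b.x - a.x, b.y - a.y)
        speed = dist / dt
        total_dist += dist
        top_speed = max(top_speed, speed)
        if prev_speed is not None:
            accel = (speed - prev_speed) / dt
            if accel >= accel_threshold:
                accels += 1
            elif accel <= -accel_threshold:
                decels += 1
        prev_speed = speed
    return {"distance_m": total_dist, "top_speed_ms": top_speed,
            "accels": accels, "decels": decels}
```

The privacy point this illustrates: once derived numbers like these exist, they can circulate independently of the raw stream—and, as the policies below show, often under looser “de‑identified derivative” terms.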
There’s growing research interest in athlete data and AI—especially around privacy, fairness, transparency, and accountability—and solid 2024–2025 review papers are starting to appear [2].
- Training by default; ad controls instead of AI controls. Visible toggles center on targeted ads and sale/share. Athlete‑level switches for model training or profiling are uncommon and usually exist only where law requires them.
- Derived data ≠ raw data. Pose tracks, embeddings, labels, and risk scores are often treated as de‑identified “derivatives,” with broader reuse and fuzzier retention than raw files.
- Third‑party flows are opaque. Ad/analytics partners appear in policies, but it’s hard to trace who sees what, and when, across schools, recruiters, streams, and vendors.
- Controls are scattered. Cookie banners, state‑privacy pages, and email forms—not a single athlete‑facing control center; collection defaults for minors are rarely off.
- Contracts can dictate terms of use. Districts/leagues can negotiate training exclusions, retention, and exports; families mostly inherit those choices.
Right now, value accrues to platforms and programs; risk accrues to athletes. This snapshot surfaces what can be known from public pages and aims to spark a better conversation—about training vs. inference, derived vs. raw, and who gets to say no.
Ownership is complicated across schools, vendors, recruiters, and families. The lines between service delivery and secondary use blur, and incentives don’t always align. That’s why this deserves more open conversation—paired with clearer disclosures, explainability, and athlete controls.
Details:
- Footage → AI / product training: whether uploads may train/improve models beyond your team’s use.
- Telemetry → AI / product dev: whether wearable data (even “de‑identified”) can be reused to build/validate features or models.
- “Sell/Share” for marketing: whether viewing or account data flows to ad/analytics partners.
- Opt‑out visible: user‑facing toggles/forms in product or privacy pages—not just legal promises.
- AI transparency page: a central explainer of model uses, retention, and who sees what.
- Control AI processing: a specific ability to exclude your uploads/derived data from training or profiling (law‑based or product toggles).
Legend: 🚨 evidence of secondary use · ✅ explicitly limited/disallowed · ❓ unclear · — not applicable
| Vendor (typical HS use) | Footage → AI / product training | Telemetry/derived perf → AI / product dev | “Sell/Share” for marketing/adtech? | Opt‑out visible? | AI transparency page? | Can you control AI processing (training/profiling)? |
|---|---|---|---|---|---|---|
| Hudl (platform + Focus cameras) | 🚨 Student page references using uploaded videos/usage to improve AI/services. | ❓ WIMU/Titan collect rich telemetry; policy isn’t specific about telemetry training. | 🚨 State page describes sell/share. | ✅ “Do Not Sell or Share” & rights forms. | No | Partial (law‑based) — some states expose rights to opt out of targeted ads/sale/share and, in some cases, profiling/automated decisions. No dedicated model‑training toggle. |
| Veo (auto‑capture cameras) | 🚨 Policy lists “develop, train, validate AI models.” | — | ❓ Marketing/cookies noted; adtech specifics lighter. | Partial (marketing consent; cookie banner). | No | Partial (GDPR) — can object to/restrict certain processing and withdraw marketing consent; no specific training toggle. |
| Pixellot (auto‑production + NFHS partner) | 🚨 FAQ: self‑learning AI improved by large content volume. | — | 🚨 CCPA sharing for cross‑context ads/analytics. | ✅ Cookie/opt‑out mechanisms. | No | No — marketing opt‑outs only; no training control documented. |
| Catapult (OpenField/GPSports; Catapult One) | — | 🚨 Privacy/terms allow product development and commercial applications, incl. “Derivative Materials” (de‑identified) across products. | ❓ | ❓ (often contract‑driven; varies by org). | No | Contractual only — negotiate exclusions (e.g., training/research) in the DPA/order form; no public toggle. |
| NFHS Network (HS streaming) | ❓ | — | 🚨 Policy lists video‑viewing data among collected categories and says users may allow disclosure to advertising/analytics partners, which may result in “valuable consideration”; wording is broad/ambiguous. | ✅ Standard opt‑out rights. | No | No — ads/analytics choices only; nothing on AI training. |
| TeamSnap (team management) | — | — | 🚨 Opt‑outs for targeted ads and sale/share. | ✅ “Your Privacy Choices.” | No | Partial (law‑based) — marketing/ads opt‑outs and state‑privacy rights; no model‑training toggle. |
| STATSports (wearable GPS vest) | ❓ No explicit statement; articles indicate AI/ML methods are used [15]. | ❓ No public “train models” statement; age/consent gates present. | ❓ | ❓ | No | Unclear — age/consent gating documented; no AI‑processing control documented. |
Context: Wearables like WIMU/Titan document many variables (≈150–250+) and detailed derived metrics (top speed, accel/decel counts, workload). Even where training language is vague, the richness of this data makes governance and controls matter.
- Hudl — Public pages say uploaded videos + some usage data may train AI; state page exposes sell/share and targeted‑ads opt‑outs. Telemetry (via WIMU/Titan) is deep, but training specifics for telemetry aren’t spelled out on student pages. [3] [4] [5] [6] [7]
- Veo — Policy lists “develop, train, validate AI models” as a purpose; controls presented in GDPR/marketing/cookie framing rather than a dedicated training opt‑out. [10]
- Pixellot — FAQ emphasizes self‑learning AI improved by high capture volume [11], and privacy pages include sale/share choices for ads/analytics [12]. No AI transparency page; no training control documented on public pages.
- Catapult — Privacy/terms allow reuse for product development, commercial applications, and creation of de‑identified “Derivative Materials” for research/commercialization; controls are typically contract‑driven. [8] [9]
- NFHS Network / TeamSnap — Focus on ad/analytics choices and rights requests; not positioned as model‑training vendors. [13] [14]
[2] Kim, J.H. (2025). Ethical implications of artificial intelligence in sport (Review). ScienceDirect. link
[3] Hudl — Student Athlete Account Privacy Statement: hudl.com/privacy/student-athletes
[4] Hudl — U.S. State‑Specific Privacy: hudl.com/privacy/us-states
[5] Hudl — WIMU FAQ (250+ variables): hudl.com/products/wimu/faq
[6] Hudl — Titan FAQ (GPS metrics & overlays): hudl.com/products/titan/faq
[7] Hudl — WIMU blog (datapoints & variables): hudl.com/blog/epts-fifa-quality-certification
[8] Catapult — Privacy Policy (product development; commercial applications; derivatives): catapult.com/privacy-policy
[9] Catapult — Sub‑Processors (example list): catapult.com (sub‑processors)
[10] Veo — Privacy Policy (develop/train/validate AI models): veo.co/en-us/privacy
[11] Pixellot — FAQ (self‑learning AI; volume): pixellot.tv/faq/
[12] Pixellot — Privacy / CCPA (sale/share; GPC): pixellot.tv/privacy-policy/
[13] TeamSnap — Privacy Policy (Your Privacy Choices; targeted ads/sale‑share): teamsnap.com/privacy-policy
[14] NFHS Network — Privacy Policy (video‑viewing data; “valuable consideration”): nfhsnetwork.com/privacy-policy
[15] STATSports — “STATSports launch new AI powered technology”: statsports.com/article/statsports-launch-new-ai-powered-technology
[16] STATSports — Privacy Policy: statsports.com/legal/privacy-policy