On February 5, 2026, five Illinois residents filed a class action against Microsoft alleging that Teams’ live transcription feature collects voiceprints, biometric identifiers expressly protected under the Illinois Biometric Information Privacy Act (BIPA), without the notice, consent, or retention policies the statute requires.
The case is Basich et al. v. Microsoft Corporation, Case No. 2:26-cv-00422, filed in the U.S. District Court for the Western District of Washington. Microsoft has not yet responded.
This is not a case about Microsoft Teams specifically. It is a case about whether a technical process called diarization, the method by which AI transcription tools determine who said what, constitutes collection of a biometric identifier. If the court says yes, every SaaS product that attributes speech to individual speakers has a BIPA problem.
What the Complaint Alleges
Microsoft introduced live transcription in Teams in 2021. The feature creates a real-time, archivable written record of meeting dialogue with speaker attributions and timestamps. When a participant speaks, the transcript shows their name next to what they said.
To make that attribution work, Microsoft uses a process called diarization. The complaint walks through a five-step pipeline, all of which allegedly runs on Microsoft Azure servers.
First, Microsoft records the meeting audio and pre-processes it to reduce noise. Second, Voice Activity Detection identifies when someone is speaking. Third, Speech Segmentation divides the detected speech into smaller segments and flags potential changes in who is speaking. Fourth, Microsoft extracts individual speaker profiles in the form of voiceprints from each speaker segment. According to the complaint, these voiceprints capture distinct vocal characteristics, including pitch, tone, and timbre, and are stored as numerical vectors unique to the individual. Fifth, Microsoft matches speech segments to speakers using those voiceprints, then links them to pre-existing identity information (names, profile pictures, email addresses, and organizational affiliations) to produce the attributed transcript.
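For readers who want to see the mechanics, here is a deliberately simplified sketch of how a diarization stage can work in general. The helper names, the embedding size, and the matching logic are assumptions for illustration only; nothing below is drawn from Microsoft's actual pipeline.

```python
# Simplified diarization sketch (illustrative only, not Microsoft's code).
# extract_embedding() stands in for a real speaker-embedding model; here it
# derives a deterministic fake vector so the example runs end to end.
import numpy as np

def extract_embedding(segment_audio: np.ndarray) -> np.ndarray:
    """Reduce a speech segment to a fixed-length numerical vector --
    the kind of per-speaker representation the complaint calls a voiceprint."""
    seed = int(abs(segment_audio.sum()) * 1_000) % (2**32)
    return np.random.default_rng(seed).standard_normal(192)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def attribute(segments, participant_profiles):
    """Match each segment's embedding to the closest participant profile,
    then attach that participant's identity to the transcript line."""
    transcript = []
    for start, end, audio, text in segments:
        emb = extract_embedding(audio)
        name = max(participant_profiles,
                   key=lambda n: cosine_similarity(emb, participant_profiles[n]))
        transcript.append({"start": start, "end": end, "speaker": name, "text": text})
    return transcript

# Hypothetical usage: reference embeddings per named participant, plus speech
# segments produced by the earlier VAD and segmentation steps.
profiles = {"Alex": extract_embedding(np.full(16_000, 0.2)),
            "Jordan": extract_embedding(np.full(16_000, 0.7))}
segments = [(0.0, 3.1, np.full(16_000, 0.2), "Let's get started.")]
print(attribute(segments, profiles))
```

The legally significant step is the last one: the numerical vector by itself is just math, but linking it to a named participant is what turns speaker labels into identified individuals.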
The complaint alleges Microsoft does none of the things BIPA requires before collecting biometric identifiers. It does not inform participants in writing that voiceprints are being collected. It does not disclose the specific purpose or duration of collection. It does not obtain a written release from participants. And it does not maintain a publicly available written policy establishing a retention schedule and destruction guidelines for voiceprint data.
Teams does display a banner when transcription starts: “Transcription has started. Started by you. Let everyone know they’re being included.” The banner includes a link labeled “Privacy policy” that points to the general Microsoft Privacy Statement. But the complaint alleges that the Privacy Statement does not mention voiceprints at all. The only reference to voice-related data processing appears in a section titled “Speech Recognition technologies,” which describes an opt-in program for reviewing audio snippets to improve AI, using de-identified data. That is not the same thing as diarization, and it is not a BIPA-compliant disclosure.
The complaint further alleges that Microsoft has a U.S. State Data Privacy Laws Notice covering California but has no Illinois-specific or BIPA-specific policy addressing its collection of voiceprints from Teams users. The plaintiffs call this omission “reckless, if not intentional,” given that BIPA has been the subject of significant litigation since at least 2019 and Microsoft is plainly aware of its requirements.
The Intelliframe Distinction
Microsoft actually does have a separate, opt-in voice and face enrollment feature called Intelliframe. Users who enroll provide explicit consent, go through a voice capture wizard, and can unenroll and delete their data at any time. Microsoft’s own documentation states that Intelliframe “recognition features cannot be used in the state of Illinois.”
The complaint’s class definition explicitly excludes persons who voluntarily enrolled an Intelliframe voice profile. The lawsuit targets the default diarization that happens during live transcription, the process that runs automatically when any meeting organizer enables transcription, without any enrollment or consent from other participants. Microsoft built a BIPA-compliant consent flow for its opt-in feature and blocked it in Illinois. The plaintiffs’ argument is that the default transcription feature collects the same category of biometric data through the same type of voice analysis, without any of the same safeguards.
The Central Legal Question: Is Diarization a Voiceprint?
BIPA defines “biometric identifiers” as “a retina or iris scan, fingerprint, voiceprint, or scan of hand or face geometry.” Voiceprint sits in the statute on equal footing with fingerprints and scans of face geometry. The statute does not define the term beyond its plain meaning: a distinctive pattern of voice characteristics that identifies a person.
The complaint alleges that the numerical vectors Microsoft extracts during diarization, capturing each speaker’s pitch, tone, and timbre, are voiceprints. It analogizes them to faceprints (mathematical representations of facial geometry used in facial recognition) and fingerprints (used in automated fingerprint identification systems). All three, the complaint argues, reduce a unique biological characteristic to a mathematical representation stored as numerical data.
Microsoft will almost certainly argue that diarization is not voiceprint collection. The most likely defense is that diarization is temporary signal processing: the system analyzes audio characteristics to distinguish Speaker A from Speaker B within a single meeting, but does not create a persistent biometric template that could identify an individual across contexts. Under this framing, diarization is closer to a noise filter than a fingerprint scanner. It distinguishes speakers for the duration of a transcript without creating a reusable identifier.
The plaintiffs’ strongest counter is in paragraph 39 of the complaint: Microsoft links the voice-derived speaker profiles to pre-existing identity information, including names, profile pictures, and email addresses. The diarization output is not anonymous speaker labels. It is named individuals identified by their vocal characteristics. If the system can determine that a specific voice belongs to Alex Basich by analyzing that voice’s unique characteristics and matching them to Basich’s identity, that is the functional definition of a voiceprint.
There is also the Intelliframe comparison. Microsoft treats its opt-in voice enrollment feature as collecting biometric data subject to BIPA (it blocks the feature in Illinois entirely). The plaintiffs will argue that if opt-in voice enrollment creates a voiceprint, default diarization that analyzes the same vocal characteristics to achieve the same result (identifying who is speaking) must also create a voiceprint. The distinction between the two features is the consent flow, not the underlying biometric analysis.
Expected Defenses
Beyond the “diarization is not a voiceprint” argument, Microsoft has several other likely defenses.
Extraterritoriality. Microsoft is a Washington corporation. Azure servers are globally distributed. The diarization processing almost certainly does not occur on servers physically located in Illinois. Microsoft will argue that BIPA claims require the alleged violation to occur “primarily and substantially” in Illinois, and that server-side processing in Washington or elsewhere cannot constitute collection in Illinois. The plaintiffs will counter that their voiceprints were collected while they were physically present in Illinois, which is where the biometric data originated, regardless of where it was processed.
Consent through Terms of Service. Microsoft may argue that users accepted the Microsoft Services Agreement and Privacy Statement, which together constitute consent. The complaint anticipates this argument by alleging that neither document mentions voiceprints, discloses the specific purpose or duration of voiceprint collection, or constitutes the “written release” BIPA requires. BIPA’s consent requirements are specific: written notice of collection, written disclosure of purpose and duration, and a written release. A general terms-of-service acceptance is unlikely to satisfy those requirements, particularly when the terms do not mention the specific biometric data at issue.
The 2024 BIPA amendment (SB 2979). The amendment caps recovery so that collection of the same biometric identifier from the same person using the same method counts as a single violation. This significantly limits per-person damages (one recovery per class member, not one per meeting). But it does not eliminate the claim. Each class member is still entitled to $1,000 (negligent) or $5,000 (intentional/reckless) per violation, plus injunctive relief. Given the proposed class period (March 2021 to present) and Teams’ 320 million monthly active users globally, the class size in Illinois alone could still be substantial. Whether the amendment applies retroactively to pre-August 2024 violations is itself contested: federal courts in Illinois reached opposite conclusions within days of each other in late 2024, and the question remains unresolved.
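For a sense of the stakes, here is a back-of-the-envelope illustration. The class size and per-member meeting count are invented numbers; only the $1,000 and $5,000 statutory amounts come from BIPA.

```python
# Hypothetical damages arithmetic -- the class size and meeting counts are
# invented for illustration; only the per-violation amounts are statutory.
class_members = 50_000          # hypothetical Illinois class size
meetings_each = 40              # hypothetical transcribed meetings per member
NEGLIGENT, RECKLESS = 1_000, 5_000

# Post-SB 2979: one violation per person for the same identifier and method.
single_accrual = class_members * NEGLIGENT             # $50,000,000
single_accrual_reckless = class_members * RECKLESS      # $250,000,000

# Pre-amendment per-collection theory: one violation per transcribed meeting.
per_meeting_theory = class_members * meetings_each * NEGLIGENT  # $2,000,000,000

print(f"${single_accrual:,} / ${single_accrual_reckless:,} vs ${per_meeting_theory:,}")
```

Even under the capped, single-accrual regime, a statewide class produces exposure in the tens to hundreds of millions of dollars on these hypothetical numbers.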
Why This Case Matters Beyond Microsoft
If the court rules that diarization creates a voiceprint under BIPA, the implications extend to every SaaS product that does speaker attribution. The technical process is functionally identical across platforms. AI transcription tools, meeting assistants, call center analytics platforms, clinical documentation systems, and customer service voice AI all use some form of speaker diarization to determine who said what. The vendor names differ. The underlying voice analysis does not.
The directly affected category includes AI meeting transcription tools (Otter.ai, Fireflies.AI, Fathom, Grain, Avoma), conversation intelligence platforms used for sales and customer service (Gong, Chorus, CallRail, Observe.AI), ambient clinical documentation tools used in healthcare (Abridge, Nuance DAX Copilot, Ambience Healthcare, Nabla), and the platform companies themselves (Zoom, Google Meet, Webex) to the extent they offer attributed transcription.
The indirectly affected category is any SaaS company that routes audio through a third-party speech-to-text service that performs diarization. If your product records a meeting or a call, sends the audio to an API, and receives back a transcript with speaker labels, your API provider may be collecting voiceprints as your subprocessor. Your BIPA exposure flows through your vendor’s technical architecture whether or not you perform the voice analysis yourself.
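The pattern looks roughly like this in practice. The endpoint, query parameter, and response shape below are hypothetical placeholders, not any specific vendor's API; the point is that a single flag in a request like this can put diarization, and therefore potential voiceprint collection, inside your product's data flow.

```python
# Hypothetical third-party speech-to-text call -- endpoint, parameters, and
# response fields are placeholders, not a real vendor's API.
import json
import urllib.request

def transcribe(audio_bytes: bytes, api_key: str) -> list[dict]:
    req = urllib.request.Request(
        "https://stt.example-vendor.com/v1/transcribe?diarize=true",  # hypothetical
        data=audio_bytes,
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "audio/wav"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)
    # Speaker labels come back from the vendor; the moment your product maps
    # "speaker_1" to a named participant, the identity linkage happens on your side.
    return [{"speaker": seg["speaker"], "start": seg["start"], "text": seg["text"]}
            for seg in result.get("segments", [])]
```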
A companion case, Cruz v. Fireflies.AI Corp. (C.D. Ill., filed December 18, 2025), alleges the same theory against a smaller AI meeting assistant vendor. That case also involves non-users whose meetings were recorded without their consent. Between Basich and Cruz, courts in two different jurisdictions will be considering whether AI-powered speaker attribution constitutes voiceprint collection under BIPA.
The Employer Exposure Problem
The complaint names Microsoft, not the employers whose organizations deployed Teams with transcription enabled. But BIPA liability can attach to multiple entities involved in the same biometric collection. Illinois courts have held that companies that enable, authorize, or benefit from biometric data collection can be implicated even if a third-party vendor performs the technical processing.
For SaaS founders, this creates a dual exposure. If you are the vendor (you provide the tool that does speaker attribution), you face direct BIPA claims from the individuals whose voiceprints your tool allegedly collects. If you are the deployer (you enable a third-party transcription tool in your workplace or on your platform), you face claims for authorizing or benefiting from the collection.
The complaint’s class definition reinforces this: it covers all persons whose biometric identifiers were captured “during the transcription of Microsoft Teams meetings in which they participated while residing (and/or present) in Illinois.” The class is defined by where the participant was located, not by who enabled the feature. A meeting organizer in California who enables transcription on a call with an Illinois participant triggers the same exposure as if the organizer were in Illinois.
What This Means for Your Legal Stack
The voiceprint question raised by this case connects directly to the biometric data provisions that should already be in your legal documents if your product touches audio, voice, or speaker identification in any way.
Subprocessor audit for voice processing. If your product uses any third-party API or service that processes audio and returns speaker-attributed output, you need to understand whether that service performs diarization. If it does, that service may be collecting biometric identifiers on your behalf. Your subprocessor list, your DPA, and your data flow documentation all need to account for this. The question your enterprise customers will ask: does any component of your product analyze voice characteristics to identify individual speakers? If the answer is yes, your biometric data obligations are triggered regardless of whether your product performs the analysis directly or outsources it.
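One lightweight way to run that audit is to keep the answers in a structured register you can hand to counsel and to enterprise customers. The field names below are illustrative assumptions, not a standard schema.

```python
# Illustrative subprocessor register for voice processing -- field names are
# assumptions, not a standard schema; adapt to your own DPA and vendor list.
from dataclasses import dataclass

@dataclass
class VoiceSubprocessor:
    name: str
    receives_raw_audio: bool
    performs_diarization: bool        # returns speaker-attributed output?
    links_labels_to_identity: bool    # maps speaker labels to names/emails?
    dpa_addresses_biometrics: bool    # does your DPA with them cover it?

register = [
    VoiceSubprocessor("ExampleSTT Inc.", True, True, False, False),  # hypothetical entry
]

for s in register:
    if s.performs_diarization and not s.dpa_addresses_biometrics:
        print(f"Flag for review: {s.name} performs diarization; DPA is silent on biometric data.")
```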
BIPA-specific provisions. If your product is used by anyone in Illinois, and it collects, processes, or facilitates the collection of biometric identifiers (including voiceprints), your terms need to address BIPA compliance. This means written notice that biometric data is being collected, written disclosure of the purpose and duration of collection, a mechanism for obtaining written consent before collection occurs, and a publicly available retention and destruction policy. These are not optional best practices. They are statutory requirements with $1,000 to $5,000 per-violation damages and a private right of action.
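A retention and destruction policy for voice-derived data can be documented in a similarly explicit form. The structure below only illustrates the elements such a policy needs to cover; it is an assumption for illustration, not a compliance template.

```python
# Illustrative retention-policy entry for voice-derived data; the structure is
# an assumption, not a template. BIPA requires destruction when the initial
# purpose is satisfied or within 3 years of the individual's last interaction,
# whichever comes first.
retention_policy_entry = {
    "data_category": "speaker embeddings derived from meeting audio",
    "purpose": "speaker attribution in meeting transcripts",
    "retention_trigger": "initial purpose satisfied (transcription session ends)",
    "destruction_deadline": "no later than 3 years after last interaction",
    "destruction_method": "permanent deletion of stored vectors and backups",
    "publicly_available_at": "https://example.com/biometric-retention-policy",  # placeholder
}
```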
Product design decisions. The Intelliframe example is instructive. Microsoft built a consent-compliant enrollment flow for its opt-in voice feature and blocked it in Illinois. But it apparently did not apply the same analysis to its default transcription feature, which the complaint alleges performs the same category of voice analysis without any of the same safeguards. If your product has a feature that analyzes voice characteristics, the time to assess whether it constitutes biometric data collection is before deployment, not after a complaint is filed. Consider whether speaker attribution can be offered as an opt-in feature with a BIPA-compliant consent flow, rather than as a default that applies to all participants.
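A sketch of what an opt-in gate could look like: the decision to run voice analysis at all is made per participant, before any embedding is extracted, and participants without a written release on file get an anonymous label. The function and field names are hypothetical.

```python
# Hypothetical opt-in gate: run voice analysis only for participants with a
# written release on file; everyone else gets an anonymous label. Field and
# function names are illustrative, not a compliance recipe.
def may_run_voice_analysis(participant: dict) -> bool:
    consent = participant.get("biometric_consent") or {}
    return bool(consent.get("written_release_signed_at"))

def label_for(participant: dict, anonymous_index: int) -> str:
    if may_run_voice_analysis(participant):
        return participant["display_name"]       # attribution via diarization allowed
    return f"Speaker {anonymous_index}"           # no voice analysis for this participant

participants = [
    {"display_name": "Jane Doe", "biometric_consent": {"written_release_signed_at": "2026-01-15"}},
    {"display_name": "John Roe", "biometric_consent": None},
]
for i, p in enumerate(participants, start=1):
    print(label_for(p, i))   # "Jane Doe", then "Speaker 2"
```

The important design point is where the gate sits: checking consent before the embedding is extracted is a different posture than extracting it for everyone and only suppressing the displayed name.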
The non-user problem. Both the Basich and Cruz complaints involve participants who never agreed to any terms of service. In a meeting, the organizer may have accepted the platform’s terms. The other participants may have joined via a link without creating an account or agreeing to anything. BIPA requires consent from each individual whose biometric data is collected, not just from the account holder who enabled the feature. Your product’s consent architecture needs to reach every participant, not just the user who clicked “I agree.”
Get specialized counsel before you ship. BIPA compliance is not a problem you can solve with a template privacy policy or a generic legal document generator. The statute’s requirements are specific (written notice, written disclosure of purpose and duration, written release, publicly available retention and destruction policy), the case law is actively developing, the retroactivity question is unresolved, and the interaction between BIPA and your product’s technical architecture requires analysis that is unique to your data flows. If you are building a product that processes voice data and performs any form of speaker identification, attribution, or diarization, consult a lawyer who specializes in BIPA compliance before you deploy. This is particularly true if your product will be used by or accessible to anyone in Illinois, which, for any SaaS product with a US customer base, is effectively a certainty. The cost of a BIPA compliance review before launch is a fraction of the cost of defending a class action after one.
This is the ninth post in the AI Privacy Litigation series. Previous: AI Disclosure Laws: What’s in Force, What’s Coming, and What Your Legal Stack Needs Now. For the full series, see the AI Privacy Litigation series page.
For the foundational B2B SaaS legal stack (Terms of Service, Privacy Policy, DPA, SLA), see our core series on B2B SaaS legal frameworks. For the AI-specific contractual framework, see the AI-Enabled SaaS series. For the broader biometric data analysis, see Clearview AI and BIPA: Why Biometric Data Is the Highest-Risk Category for SaaS Vendors.
No Boiler provides self-service legal document generation and educational content. Neither this material nor our service is a substitute for legal advice. Please have a qualified attorney review any documents before relying on them. No Boiler is not a law firm, and communications with us do not create an attorney-client relationship or carry any expectation of confidentiality. Use of our platform and content is governed by our Terms of Service and Privacy Policy.