# API & Scripting (FaceSync Player)

#### For a CPU-friendly setup

To trigger dialogue dynamically in-game (e.g., from your Quest System or AI logic), use the `FaceSyncPlayer` component.

```csharp
using UnityEngine;
using Frostember.FaceSync; // Add the namespace

public class NPCDialogueTrigger : MonoBehaviour
{
    public FaceSyncPlayer facePlayer;
    public FaceSyncAnimationAsset greetingData;

    public void StartDialogue()
    {
        // Assign the pre-baked animation asset
        facePlayer.currentAnimation = greetingData;
        
        // Play the facial animation and the associated audio simultaneously
        facePlayer.Play();
    }

    public void StopDialogue()
    {
        facePlayer.Stop();
    }
}
```
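As a usage sketch, `StartDialogue` and `StopDialogue` could be driven by a trigger volume around the NPC. The example below uses only standard Unity APIs and assumes the `NPCDialogueTrigger` component above sits on the same GameObject; the `"Player"` tag and collider setup are illustrative assumptions:

```csharp
using UnityEngine;

// Hypothetical example: start/stop dialogue when the player enters/leaves a trigger collider.
[RequireComponent(typeof(NPCDialogueTrigger))]
public class DialogueProximityTrigger : MonoBehaviour
{
    private NPCDialogueTrigger dialogue;

    private void Awake()
    {
        dialogue = GetComponent<NPCDialogueTrigger>();
    }

    private void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Player"))
            dialogue.StartDialogue();
    }

    private void OnTriggerExit(Collider other)
    {
        if (other.CompareTag("Player"))
            dialogue.StopDialogue();
    }
}
```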

{% hint style="info" %}
The `FaceSyncController` automatically detects that the player is active, pauses its live procedural generation, reads the baked vectors for lips, eyes, and head, and syncs them precisely with the audio.
{% endhint %}

#### For a fast setup

This approach is the fastest and most straightforward way to get your character talking from a programmer's perspective. It requires no data baking in the Editor: once you assign a new audio clip to the `AudioSource` component and play it, the system instantly analyzes the audio, and the character automatically begins moving their lips, eyes, and head.

This method is ideal for rapid prototyping or for games where only one character speaks at a time.

```csharp
using UnityEngine;

public class NPCSpeaker : MonoBehaviour
{
    [Tooltip("Reference to our character's AudioSource component")]
    public AudioSource npcAudioSource;

    [Tooltip("The audio file we want to play")]
    public AudioClip newDialogueClip;

    public void Speak()
    {
        // 1. Assign the new audio clip to the AudioSource
        npcAudioSource.clip = newDialogueClip;
        
        // 2. Start playback
        // FaceSyncController will automatically intercept the audio and generate animation in real-time
        npcAudioSource.Play();
    }
}
```
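If you need to know when a line has finished (for example, to chain dialogue or re-enable player input), you can poll `AudioSource.isPlaying` from a coroutine. This sketch uses only standard Unity APIs; the `onFinished` callback is our own addition, not part of FaceSync:

```csharp
using System;
using System.Collections;
using UnityEngine;

public class NPCSpeakerWithCallback : MonoBehaviour
{
    public AudioSource npcAudioSource;

    public void Speak(AudioClip clip, Action onFinished)
    {
        npcAudioSource.clip = clip;
        npcAudioSource.Play(); // FaceSyncController picks up playback as before
        StartCoroutine(WaitForEnd(onFinished));
    }

    private IEnumerator WaitForEnd(Action onFinished)
    {
        // Wait until the AudioSource stops playing, then fire the callback.
        yield return new WaitWhile(() => npcAudioSource.isPlaying);
        onFinished?.Invoke();
    }
}
```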

> ### ⚠️ Important Performance Warning (CPU Load)
>
> This Plug & Play approach processes audio in real time: the LipSync engine must analyze the playing audio waveform 60 times per second and perform computationally heavy FFT (Fast Fourier Transform) phoneme analysis.
>
> * ✅ When it is ideal: If the main character or a single NPC you are currently conversing with is speaking this way (e.g., Visual Novels, adventure games). In this scenario, the CPU load is manageable, making it the most convenient solution.
> * ❌ When NOT to use it: If you plan to have multiple speaking characters at once in the scene (e.g., a bustling town square where 10 NPCs are talking simultaneously). Real-time analysis of that many audio sources will overload the main CPU thread and cause a drastic drop in FPS.
>
> #### 💡 Solution for Large Scenes and Mobile/VR (Optimization)
>
> If you need multiple characters talking simultaneously, or if you are targeting performance-constrained platforms (Meta Quest, mobile phones), we strongly recommend using our Offline Baker.
>
> By pre-baking the audio into a `FaceSyncAnimationAsset` in the Editor, you reduce the CPU playback load to absolute zero, while the visual result and synchronization remain perfectly identical.
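If you do mix real-time speakers in a busy scene, one generic mitigation is to cap how many AudioSources may play at once, so the real-time analysis load stays bounded. The sketch below is a standard Unity pattern, not a FaceSync API; the budget value is an assumption you should tune per platform:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical guard: allow only a fixed number of simultaneously playing voices.
public class SpeakerBudget : MonoBehaviour
{
    [Tooltip("Maximum simultaneous real-time analyzed voices (tune per platform)")]
    public int maxConcurrentSpeakers = 2; // assumption, not a FaceSync constant

    private readonly List<AudioSource> activeSpeakers = new List<AudioSource>();

    public bool TrySpeak(AudioSource source, AudioClip clip)
    {
        // Drop references to sources that have finished playing.
        activeSpeakers.RemoveAll(s => s == null || !s.isPlaying);

        if (activeSpeakers.Count >= maxConcurrentSpeakers)
            return false; // over budget: skip or queue this line instead

        source.clip = clip;
        source.Play();
        activeSpeakers.Add(source);
        return true;
    }
}
```

For anything beyond a couple of concurrent speakers, prefer the pre-baked `FaceSyncAnimationAsset` workflow described above.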

