The primary functions of LVS's Search APIs are to:
- Import timecode-based metadata from multiple external sources to create searchable Video Clip objects.
- Search Video Clip objects by various criteria to find specific moments within your discovered video assets.
- Manage Video Clip indexing settings for your LVS tenant.
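To illustrate how several search criteria might be combined into a single query, here is a minimal sketch of assembling a search payload. All field names (`transcript_text`, `asset_tags`, `date_range`) are illustrative placeholders, not LVS's actual request schema.

```python
import json

def build_clip_search_query(text=None, tags=None, start_date=None, end_date=None):
    """Assemble a search payload combining multiple criteria.

    Every field name here is an illustrative placeholder, not the
    actual LVS Search API schema.
    """
    criteria = {}
    if text:
        criteria["transcript_text"] = text
    if tags:
        criteria["asset_tags"] = tags
    if start_date or end_date:
        criteria["date_range"] = {"start": start_date, "end": end_date}
    return json.dumps(criteria)

# Combine a transcript phrase with an asset tag in one query:
payload = build_clip_search_query(text="touchdown", tags=["sports"])
```

Only the criteria that are actually supplied end up in the payload, so the same helper covers text-only, tag-only, and combined searches.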
Understanding Video Clips
The fundamental object in LVS Search is the Video Clip: a unique set of metadata (e.g., transcripts, faces, or date ranges) associated with a timecode range (a start and end time) on a specific asset. Video Clip metadata for a single asset may come from multiple sources, and Video Clips may overlap one another on an asset's timeline.
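The definition above can be sketched as a small data model. This is a minimal illustration, not the LVS schema; the field names (`asset_id`, `source`, `metadata`) are assumptions chosen for readability.

```python
from dataclasses import dataclass, field

@dataclass
class VideoClip:
    """Illustrative model of a Video Clip: metadata tied to a
    timecode range on one asset. Field names are assumptions,
    not the actual LVS object schema."""
    asset_id: str
    start: float          # start timecode, in seconds
    end: float            # end timecode, in seconds
    source: str           # where the metadata came from, e.g. "transcript"
    metadata: dict = field(default_factory=dict)

    def overlaps(self, other: "VideoClip") -> bool:
        # Clips overlap only when they sit on the same asset's
        # timeline and their timecode ranges intersect.
        return (self.asset_id == other.asset_id
                and self.start < other.end and other.start < self.end)

# Two clips from different metadata sources overlapping on one asset:
a = VideoClip("asset-1", 10.0, 30.0, "transcript", {"text": "hello"})
b = VideoClip("asset-1", 25.0, 40.0, "face", {"name": "Ada"})
```

Note that `a` and `b` carry metadata from different sources yet share part of the asset's timeline, which is exactly the situation the paragraph describes.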
Certain Video Clip objects are automatically created from Asset-level fields, so that this data can be included as criteria within your search queries. These fields are:
- Asset Start Date
- Asset End Date
- Asset Tags
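One plausible reading of how these Asset-level fields become searchable is that each field yields an automatically created clip spanning the asset's full duration. The sketch below works under that assumption; the actual mapping inside LVS may differ, and the dictionary keys are hypothetical.

```python
def clips_from_asset_fields(asset):
    """Derive searchable clip records from Asset-level fields.

    A sketch under the assumption that each field produces one clip
    covering the asset's full duration; the real LVS mapping may
    differ. The keys on `asset` are hypothetical.
    """
    span = (0.0, asset["duration"])  # full asset timeline
    clips = []
    if "start_date" in asset:
        clips.append({"range": span, "source": "asset_start_date",
                      "value": asset["start_date"]})
    if "end_date" in asset:
        clips.append({"range": span, "source": "asset_end_date",
                      "value": asset["end_date"]})
    for tag in asset.get("tags", []):
        clips.append({"range": span, "source": "asset_tag", "value": tag})
    return clips

asset = {"duration": 120.0, "start_date": "2024-01-01", "tags": ["news", "live"]}
```

Because each derived record carries a timecode range like any other clip, asset-level data can then participate in the same search queries.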
Video Clip Types
- Simple Video Clips are created through the LVS Enrichment process, JSON data ingestion, and/or Asset-level metadata. They are the objects searched by LVS's SIMPLE SEARCH endpoints and can be modified using the SEARCH DATA MANAGEMENT endpoints.
- Compound Video Clips are automatically created for every unique overlap of Simple Video Clips, and are the objects searched by LVS's ADVANCED SEARCH endpoints. You cannot edit Compound Clips directly, but they are automatically updated whenever you change their corresponding Simple Video Clips. You can also control how Compound Clips are indexed using the TENANT SEARCH SETTINGS endpoints.
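The idea of deriving compound clips from overlaps can be sketched with simple interval intersection. This is only an illustration of pairwise overlaps between clips on the same asset; LVS computes Compound Clips automatically, and its actual algorithm (for example, how it handles overlaps of three or more clips) may differ.

```python
from itertools import combinations

def compound_overlaps(simple_clips):
    """Compute every unique pairwise overlap among simple clips.

    An illustrative sketch, not LVS's actual algorithm. Each clip
    is a tuple of (asset_id, start, end); only clips on the same
    asset can overlap.
    """
    seen = set()
    result = []
    for (a_id, a_s, a_e), (b_id, b_s, b_e) in combinations(simple_clips, 2):
        if a_id != b_id:
            continue  # overlaps only exist within one asset's timeline
        start, end = max(a_s, b_s), min(a_e, b_e)
        if start < end and (a_id, start, end) not in seen:
            seen.add((a_id, start, end))  # keep each unique overlap once
            result.append((a_id, start, end))
    return result

clips = [("asset-1", 0, 20), ("asset-1", 10, 30), ("asset-2", 5, 15)]
# compound_overlaps(clips) -> [("asset-1", 10, 20)]
```

Editing a simple clip and re-running the computation yields the updated overlaps, which mirrors how Compound Clips stay in sync with their Simple Video Clips.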