[Remote] AI Speech Research Intern
Note: This is a remote position open to candidates in the USA.
Centific AI Research seeks a PhD Research Intern to design and evaluate speech‑first models, with a focus on Spoken Language Models (SLMs) that reason over audio and interact conversationally. You'll move ideas from prototype to practical demos, working with scientists and engineers to deliver measurable impact.
Responsibilities
- End‑to‑end speech dialogue systems (speech‑in/speech‑out) and speech‑aware LLMs
- Alignment between speech encoders and text backbones via lightweight adapters
- Efficient speech tokenization and temporal compression suitable for long‑form audio
- Reliable evaluation across recognition, understanding, and generation tasks—including robustness and safety
- Latency‑aware inference for streaming and real‑time user experiences
- Prototype a conversational SLM using an SSL speech encoder and a compact adapter on an existing LLM; compare against strong baselines (see the adapter sketch after this list)
- Create a data recipe that blends conversational speech with instruction‑following corpora; run targeted ablations and report findings
- Build an evaluation harness that covers ASR/ST/SLU and speech QA, including streaming metrics such as latency, stability, and endpointing (see the latency sketch after this list)
- Ship a minimal demo with streaming inference and logging; document setup, metrics, and reliability checks
- Author a crisp internal write‑up: goals, design choices, results, and next steps for productionization
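To make the encoder‑plus‑adapter prototype concrete, here is a minimal PyTorch sketch of the general pattern, assuming a frozen SSL speech encoder that emits 768‑dimensional frame features and a frozen decoder‑only LLM with a 4096‑dimensional embedding space. The module names, dimensions, and downsampling factor below are illustrative assumptions, not a prescribed implementation.

```python
# Minimal sketch: a compact adapter that maps frozen SSL speech-encoder frames
# into an LLM's embedding space so they can be consumed as soft tokens.
# ENC_DIM, LLM_DIM, and STRIDE are hypothetical values chosen for illustration.
import torch
import torch.nn as nn

ENC_DIM, LLM_DIM, STRIDE = 768, 4096, 4

class SpeechAdapter(nn.Module):
    """Temporally downsample encoder frames, then project into the LLM embedding space."""
    def __init__(self, enc_dim=ENC_DIM, llm_dim=LLM_DIM, stride=STRIDE):
        super().__init__()
        # Strided 1-D convolution compresses the frame rate by `stride`.
        self.downsample = nn.Conv1d(enc_dim, enc_dim, kernel_size=stride, stride=stride)
        # Two-layer projection into the LLM embedding dimension.
        self.proj = nn.Sequential(
            nn.Linear(enc_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, speech_feats):  # (batch, frames, enc_dim)
        x = self.downsample(speech_feats.transpose(1, 2)).transpose(1, 2)
        return self.proj(x)           # (batch, frames // stride, llm_dim)

# Usage idea: keep the SSL encoder and LLM frozen, prepend the adapter outputs
# to the text-token embeddings, and backpropagate only through the adapter.
adapter = SpeechAdapter()
speech_feats = torch.randn(2, 200, ENC_DIM)  # e.g. ~4 s of 50 Hz encoder frames
soft_tokens = adapter(speech_feats)
print(soft_tokens.shape)                     # torch.Size([2, 50, 4096])
```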
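For the streaming metrics mentioned in the evaluation‑harness item, the sketch below shows one way first‑token latency and real‑time factor might be measured. It assumes a hypothetical `stream_responses` generator that yields partial outputs as audio chunks are consumed; only the timing logic is the point.

```python
# Minimal sketch of streaming-latency measurement for a speech dialogue system.
# `stream_responses` is a hypothetical callable: it takes an iterable of audio
# chunks and yields partial model outputs as they become available.
import time

def measure_streaming_latency(stream_responses, audio_chunks, audio_duration_s):
    start = time.perf_counter()
    first_token_latency = None
    for _partial in stream_responses(audio_chunks):
        if first_token_latency is None:
            # Time from feeding the first chunk to receiving the first output.
            first_token_latency = time.perf_counter() - start
    total = time.perf_counter() - start
    return {
        "first_token_latency_s": first_token_latency,
        # Real-time factor < 1.0 means the system keeps up with the audio stream.
        "real_time_factor": total / audio_duration_s,
    }
```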
Skills
- PhD candidate in CS/EE (or related) with research in speech, audio ML, or multimodal LMs
- Fluency in Python and PyTorch, with hands‑on GPU training; familiarity with torchaudio or librosa
- Working knowledge of modern sequence models (Transformers or SSMs) and training best practices
- Depth in at least one area: (a) discrete speech tokens/temporal compression, (b) modality alignment to LLMs via adapters, or (c) post‑training/instruction tuning for speech tasks
- Strong experimentation habits: clean code, ablations, reproducibility, and clear reporting
- Experience with speech generation (neural codecs/vocoders) or hybrid text+speech decoding
- Background in multilingual or code‑switching speech and domain adaptation
- Hands‑on work evaluating safety, bias, hallucination, or spoofing risks in speech systems
- Distributed training/serving (FSDP/DeepSpeed) and experience with ESPnet, SpeechBrain, or NVIDIA NeMo
Benefits
- Comprehensive healthcare, dental, and vision coverage
- 401k plan
- Paid time off (PTO)
- And more!