DevOps Engineer - Infrastructure Automation & Distributed Systems

We are looking for a DevOps Engineer to design, automate, and operate the infrastructure powering our real-time conversational AI platform. This role focuses on building scalable, repeatable deployment systems using Ansible to provision and manage complex, distributed environments across customer and cloud infrastructure.

You will develop and maintain Ansible playbooks, roles, and automation frameworks that evolve from single-node deployments to large-scale systems supporting millions of users and hundreds of thousands of concurrent conversational streams across hundreds of machines. This includes designing for reliability, observability, and seamless upgrades in high-availability environments. A key aspect of this role is working directly with customers to tailor deployments to their infrastructure, security, and integration requirements. You will play a critical role in ensuring consistent, production-grade deployments while supporting systems through their full lifecycle. You will also build and maintain the observability stack, leveraging Grafana, Prometheus, and Loki to create dashboards, alerting systems, and diagnostic tooling that provide real-time insight into system performance, reliability, and scaling behavior.

The ideal candidate has strong experience with Ansible and infrastructure as code, deep knowledge of Linux systems, containerized environments (Docker), and distributed system operations. Experience with networking protocols, monitoring/metrics systems, and scaling real-time platforms is highly desirable.

Senior Python Developer - Real-Time Conversational AI Infrastructure

We are looking for a Senior Python Developer to help architect and scale a high-performance platform for processing and analyzing conversational data across voice, chat, and meeting systems. This role involves building real-time, event-driven pipelines that ingest, transform, and route large volumes of streaming data, enabling AI-powered capabilities such as transcription, summarization, sentiment analysis, and intelligent workflow automation.

You will work on systems evolving from single-node deployments to distributed, horizontally scalable architectures supporting millions of users and hundreds of thousands of concurrent conversational streams.

The ideal candidate has deep experience with async Python (asyncio, FastAPI, Pydantic), strong knowledge of distributed systems and API design, and a track record of building low-latency, high-throughput services. Experience with Redis as a primary data store is required. Familiarity with document databases (MongoDB, DynamoDB) and vector databases (pgvector, Pinecone, Weaviate, Chroma) is a plus. Experience integrating LLMs, speech-to-text systems, or real-time AI/analytics pipelines is highly desirable.

Frontend Engineer - React & AI-Powered Workflow Design

We are looking for a Frontend Engineer to build the next generation of AI-driven workflow design tools for our vConductor platform, turning natural language into executable systems. This role centers on creating a visual, interactive environment where users can design, execute, and analyze complex conversational data pipelines powered directly by AI.

You will develop a React/TypeScript-based interface that allows users to describe workflows in plain language, with AI translating intent into fully configured processing pipelines. This includes building dynamic, graph-based editors, integrating real-time system feedback, and enabling intelligent workflow generation based on available processing capabilities. In addition, you will design AI-assisted diagnostics that help users understand and resolve pipeline failures by analyzing system behavior, error patterns, and historical execution data. The goal is not just to surface data, but to make the system explainable and self-guiding.

The ideal candidate has strong experience with React and modern frontend architectures, and is comfortable building highly interactive, stateful applications. Experience with graph-based UIs (e.g., React Flow), API-driven systems, and integrating LLM-powered features into user workflows is highly desirable. Familiarity with Python, FastAPI, or schema-driven interfaces is a plus.