Job Description
Area of Work
You will join our global Machine Learning and Data Science unit - a core team of machine learning scientists (LLM, ASR), engineers, and an expanding group of country-specific linguists - embedded in the ever-growing Noa product organization, used daily by doctors in Brazil, Mexico, Poland, Italy, Spain, and Germany. Noa.ai’s mission is to broaden access to top-quality healthcare, first by lifting administrative burdens from clinicians and, in the near future, by assuming selected medical tasks. Our best-known product is Noa Notes, which records consultations, transcribes the audio, and produces a structured draft of the medical record.
By design, we operate cross-functionally, moving ideas from prototyping and rapid validation to full-scale production with state-of-the-art ML frameworks and modern MLOps practices. We begin with rapid validation sprints, in which engineers, product managers, and product developers work side by side on early experiments; we then centralize and consolidate proven capabilities, standardizing validation methods and engineering best practices through shared tools, frameworks, and lifecycles.
Role
As a Machine Learning Engineer in the Noa ML team, you will support a product area within our Noa organization to deliver end-to-end ML capabilities. You will work alongside other machine learning professionals at various seniority levels and report directly to the Head of Machine Learning & Data Science.
Backed by a strong engineering culture, we pair industry-leading ML rigor with pragmatic delivery, making smart trade-offs to ship value quickly and iterate fast. In this role, you’ll work alongside exceptional engineers and scientists on one of Docplanner’s most strategic initiatives, leveraging a cutting-edge tech stack to push the boundaries of AI in healthcare.
Our tech stack includes Python, PyTorch, Whisper, and FastAPI - among others - running on Kubernetes in AWS.
What you will be doing
Work closely with cross-functional teams, including scientists, engineers, and product stakeholders, to deliver AI-driven machine learning initiatives that directly contribute to business objectives.
Design, deploy, and iterate on ML services for diverse data types (e.g., audio, text), while proactively identifying and eliminating performance bottlenecks to drive continuous improvement.
Assess platform engineering and MLOps bottlenecks; research and design scalable GPU resource-optimization strategies and recommend solutions that balance performance, cost, and reliability.
Research, architect, and deploy LLM-powered information retrieval solutions (e.g., RAG) to deliver accurate and scalable results in complex, multilingual product environments.
Partner with the AI Platform team to refine MLOps best practices, evolve frameworks, and establish efficient, scalable workflows.
Architect, deploy, and maintain high-throughput, reliable data pipelines to support training-set curation and data-annotation tooling.