
Find AI Work That Works for You

Latest roles in AI and machine learning, reviewed by real humans for quality and clarity.


New AI Opportunities


Deployed Engineer (Toronto)

LangChain
Canada
Full-time
Remote
About Us
At LangChain, our mission is to make intelligent agents ubiquitous. We build the foundation for agent engineering in the real world, helping developers move from prototypes to production-ready AI agents that teams can rely on. We began as widely adopted open-source tools and have grown to also offer a platform for building, evaluating, deploying, and operating agents at scale.

Today, LangChain, LangGraph, LangSmith, and Agent Builder are used by teams shipping real AI products across startups and large enterprises. Millions of developers trust LangChain to power AI teams at companies like Replit, Clay, Coinbase, Workday, Lyft, Cloudflare, Harvey, Rippling, Vanta, and 35% of the Fortune 500.

With $125M raised at Series B from IVP, Sequoia, Benchmark, CapitalG, and Sapphire Ventures, we're at a stage where we're continuing to develop new products, growth is accelerating, and all team members have meaningful impact on what we build and how we work together. LangChain is a place where your contributions can shape how this technology shows up in the real world.

About the Team
The Deployed Engineering team works directly with companies building and running AI agents in production, helping turn ideas and prototypes into systems teams can rely on.

This is a hands-on, highly technical team that partners closely with customer engineers across the full lifecycle, from pre-sales evaluations to post-deployment advisory work. The focus is on achieving the technical win, co-designing agent architectures, and helping customers operate agents reliably at scale using the LangChain suite.

Deployed Engineers sit at the intersection of engineering, product, and go-to-market, shaping how LangChain is adopted in the field and feeding real-world insights back into the platform.

About the Role
The Deployed Engineer…
You'll work on some of the hardest problems in applied AI: not demos, not research, but systems that real teams depend on in production. The feedback loop is fast, the impact is visible, and the work you do directly shapes how AI agents are built in the real world.

Location(s)
West: San Francisco, Pacific Northwest, Southern California
Central: Austin, Chicago, Denver
East: New York, Atlanta
EMEA: London, Amsterdam
Toronto, Canada

What You'll Do
Co-architect and co-build production AI agents with customer engineering teams
Own the technical win in pre-sales by designing POCs, answering deep technical questions, and guiding evaluations
Help customers deploy and operate agent-based applications such as conversational agents, research agents, and multi-step workflows
Advise customers post-sale on architecture, best practices, and roadmap-level decisions
Run technical demos, trainings, and workshops for developer audiences
Surface field feedback and contribute reusable patterns, cookbooks, and example code that scale across customers
Occasionally contribute code upstream when it meaningfully improves customer outcomes

What You'll Bring
3+ years in a relevant technical role (software engineering, customer engineering, solutions engineering, founding/product engineering), ideally in a startup or scale-up
Strong Python, JavaScript, and systems fundamentals
Experience designing agent-based or LLM-powered applications beyond simple API calls, including multi-step workflows, orchestration, and failure handling
Comfort working directly with customers during POCs, architecture reviews, and technical evaluations
The ability to explain technical tradeoffs clearly and build trust with developer audiences
Responsibility for outcomes, not just recommendations
A bias toward action and enjoyment of figuring things out as you go
Excitement about operating AI agents in production, not just building demos

Nice to Have
You've deployed AI agents in production, especially using LangChain, LangGraph, or similar frameworks
You've worked with LLM evaluation, observability, or guardrails
You have experience with cloud environments (AWS, GCP, Azure), containers, and basic Kubernetes concepts
You've shipped and operated production software and are comfortable owning systems under real-world constraints

Compensation & Benefits
We offer competitive compensation that includes base salary, variable compensation for relevant roles, meaningful equity, benefits, and perks. Benefits include medical, dental, and vision coverage, flexible vacation, a 401(k) plan, and life insurance. Actual compensation and offerings will vary based on role, level, and location. Team members in the EU, UK, and APAC receive locally competitive benefits aligned with regional norms and regulations.

Senior Forward-Deployed Engineer, Federal

Deepgram
$160,000 – $200,000
United States
Full-time
Remote
Company Overview
Deepgram is the leading platform underpinning the emerging trillion-dollar Voice AI economy, providing real-time APIs for speech-to-text (STT), text-to-speech (TTS), and building production-grade voice agents at scale. More than 200,000 developers and 1,300+ organizations build voice offerings that are "Powered by Deepgram", including Twilio, Cloudflare, Sierra, Decagon, Vapi, Daily, Cresta, Granola, and Jack in the Box. Deepgram's voice-native foundation models are accessed through cloud APIs or as self-hosted and on-premises software, with unmatched accuracy, low latency, and cost efficiency. Backed by a recent Series C led by leading global investors and strategic partners, Deepgram has processed over 50,000 years of audio and transcribed more than 1 trillion words. No organization in the world understands voice better than Deepgram.

Company Operating Rhythm
At Deepgram, we expect an AI-first mindset: AI use and comfort aren't optional; they're core to how we operate, innovate, and measure performance. Every team member at Deepgram is expected to actively use and experiment with advanced AI tools, and even to build their own into everyday work. We measure how effectively AI is applied to deliver results, and consistent, creative use of the latest AI capabilities is key to success here. Candidates should be comfortable adopting new models and modes quickly, integrating AI into their workflows, and continuously pushing the boundaries of what these technologies can do.

Additionally, we move at the pace of AI. Change is rapid, and you can expect your day-to-day work to evolve just as quickly. This may not be the right role if you're not excited to experiment, adapt, think on your feet, and learn constantly, or if you're seeking something highly prescriptive with a traditional 9-to-5.

Opportunity
Deepgram is seeking a Senior Applied Engineer to join our Applied Engineering team, operating in the Forward-Deployed Engineer (FDE) role with a focus on federal customers. In this role, you will embed directly with our most strategic federal accounts to lead complex deployments of Deepgram's Voice AI platform in production environments where mission performance matters, delivery is urgent, and ambiguity is the default.

You will own the full technical lifecycle of federal engagements, from initial discovery and proof-of-concept through production deployment and ongoing optimization. You'll map customer problems, structure delivery, and ship solutions fast. This includes scoping, sequencing, and building full-stack integrations that create measurable mission impact while driving clarity across internal and external teams. Along the way, you'll identify reusable patterns, codify best practices, and share field signals that influence Deepgram's product roadmap.

As a member of the Applied Engineering team, you'll serve as the trusted technical thought partner for federal stakeholders, guiding adoption, maximizing operational value, and ensuring successful deployments at scale. While your engagements will primarily involve federal customers, you'll remain a core part of the broader Applied Engineering team and may contribute to commercial engagements where your expertise is valuable.

Note: this role is based remotely out of the Washington, D.C. metropolitan area.

About Applied Engineering at Deepgram
The Applied Engineering (AppEng) team at Deepgram combines the functions that other companies might separate into Sales Engineering, Solutions Architecture, Implementation, Forward-Deployed Engineering, and Technical Support. We serve as the technical interface between Deepgram and our customers throughout their entire journey, from initial discovery and proof-of-concept, through implementation and onboarding, to ongoing technical support. We work closely with our Customer Success and Developer Relations (DevRel) teams to ensure a positive, growth-focused experience for our customers, as well as with Product and Engineering teams to deliver solutions that meet customer needs. This unified approach allows us to provide comprehensive technical guidance and build deeper relationships with our customers.

As a Senior Applied Engineer operating in the FDE role for federal engagements, you'll work with a high degree of autonomy in customer environments that often have unique infrastructure constraints, compliance requirements, and security considerations. Your work will directly shape how federal agencies leverage Voice AI to achieve their missions.

What You'll Do
Own technical delivery across federal deployments, from first prototype to stable production
Embed deeply with federal customers to design and build mission-critical applications powered by Deepgram's Voice AI models
Lead technical discovery and solution design for federal prospects and customers, partnering with Account Executives to navigate complex government sales and procurement cycles
Prototype and build full-stack integrations using Python, JavaScript, Rust, or comparable stacks that deliver real mission impact
Enable successful deployments across customer environments by delivering observable systems spanning infrastructure through applications
Proactively guide federal stakeholders on maximizing operational value from Deepgram's platform, including performance optimization and deployment strategies
Scope work, sequence delivery, and remove blockers early, making deliberate trade-offs between scope, speed, and quality
Build and manage relationships with customer leadership and technical stakeholders to ensure successful deployment and scale
Contribute directly to code when clarity or momentum depends on it
Codify working patterns into tools, playbooks, reference architectures, and building blocks the broader team can use
Share field feedback with Product and Engineering to influence model and product development
Serve as an escalation point for complex technical issues in federal deployments
Analyze deployment patterns to identify product improvement opportunities and inform Deepgram's federal go-to-market strategy

Time Allocation
40% – Embedded customer delivery: deployments, integrations, and technical problem-solving at federal customer sites and facilities
25% – Pre-sales technical engagement: discovery, demos, proof-of-concepts, and proposal support
20% – Building reusable solutions, automation, documentation, and reference architectures
15% – Internal collaboration: product feedback, knowledge sharing, and contributing to Applied Engineering strategy

Your First 90 Days
First 30 Days: Complete comprehensive onboarding to understand Deepgram's technology, products, and competitive landscape. Gain proficiency with our core APIs and documentation. Shadow active customer engagements across both commercial and federal accounts to build familiarity with how the Applied Engineering team operates, the deployment patterns we use, and the specific requirements of government environments.
First 60 Days: Begin taking ownership of federal customer engagements, leading technical discovery and developing proof-of-concepts for government prospects. Start contributing to federal-specific technical content, reference architectures, and deployment guides. Participate in sales strategy sessions and provide technical insights on active federal deals.
First 90 Days: Own the full technical relationship with assigned federal accounts. Develop and deliver complex solutions addressing specific mission challenges. Establish patterns for repeatable federal deployments and begin contributing to the team's federal go-to-market methodology.

You'll Love This Role If You
Thrive in high-ambiguity, mission-driven environments where you need to move fast and make sound decisions under pressure
Are passionate about translating complex technical capabilities into real operational value for government missions
Enjoy embedding deeply with customers, building trust, and becoming their go-to technical partner
Find satisfaction in shipping production systems that make a tangible difference
Are energized by the challenge of deploying cutting-edge Voice AI in complex, constrained environments
See yourself as an engineer first who happens to be great with customers, not the other way around
Can ruthlessly prioritize across multiple projects and operate with high autonomy

It's Important to Us That You Have
An active Top Secret/SCI (TS/SCI) security clearance, or equivalent
5+ years of engineering or technical deployment experience, ideally in customer-facing or government environments
A strong software engineering background with professional development experience in at least one modern programming language, such as Python, JavaScript, or similar
Proven ability to scope and deliver complex systems in fast-moving or ambiguous contexts
Experience building and deploying production systems in government or similarly constrained environments
Excellent verbal and written communication skills, with the ability to translate technical concepts for both technical and non-technical stakeholders, including senior government leadership
A track record of navigating complex sales or procurement cycles in the federal space
Willingness to travel up to 50%, including on-site work at government facilities

It Would Be Great if You Had
Experience with speech recognition, NLP, AI/voice agents, or related AI technologies
Familiarity with cloud deployment models (AWS GovCloud, Azure Government, etc.), Kubernetes, Terraform, and related infrastructure
Experience building or deploying systems powered by LLMs or generative models, with an understanding of how model behavior affects product experience
Knowledge of FedRAMP, FISMA, or other government compliance frameworks
Experience with API-first products and developer tools
A background in sales methodologies such as MEDDIC or Solution Selling, particularly in government contexts
Experience developing automation solutions, self-service tools, and technical documentation for customer enablement

Benefits & Perks*
Holistic health: medical, dental, and vision benefits; annual wellness stipend; mental health support; life, STD, and LTD income insurance plans
Work/life blend: unlimited PTO; generous paid parental leave; flexible schedule; 12 paid US company holidays; quarterly personal productivity stipend; one-time stipend for home office upgrades; 401(k) plan with company match; tax savings programs
Continuous learning: learning/education stipend; participation in talks and conferences; Employee Resource Groups; AI enablement workshops and sessions

*For candidates outside of the US, we use an Employer of Record model in many countries, which means benefits are administered locally and governed by country-specific regulations. Because of this, benefits will differ by region; in some cases international employees receive benefits US employees do not, and vice versa. As we scale, we will continue to evaluate where we can create more alignment, but a 1:1 global benefits structure is not always legally or operationally possible.

Backed by prominent investors including Y Combinator, Madrona, Tiger Global, Wing VC, and NVIDIA, Deepgram has raised over $215M in total funding. If you're looking to work on cutting-edge technology and make a significant impact in the AI industry, we'd love to hear from you!

Deepgram is an equal opportunity employer. We want all voices and perspectives represented in our workforce. We are a curious bunch focused on collaboration and doing the right thing. We put our customers first, grow together, and move quickly. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, gender identity or expression, age, marital status, veteran status, disability status, pregnancy, parental status, genetic information, political affiliation, or any other status protected by the laws or regulations in the locations where we operate. We are happy to provide accommodations for applicants who need them.

Senior Software Engineering Director, Developer Experience

Crusoe
$301,750 – $355,000
United States
Full-time
Remote
Crusoe is on a mission to accelerate the abundance of energy and intelligence. As the only vertically integrated AI infrastructure company built from the ground up, we own and operate each layer of the stack, from electrons to tokens, to power the world's most ambitious AI workloads. When you join Crusoe, you join a team that is building the future, faster.

We're in the midst of the greatest industrial revolution of our time. The demand for AI compute is boundless, and power is a bottleneck. We're solving that, with an energy-first approach that makes AI infrastructure better for the world and faster for the people innovating with AI.

We're looking for problem-solving, opportunity-finding teammates with a sense of urgency, who believe in the scale of our ambition and thrive on a path not fully paved: people who want to grow their careers alongside a team of experts across energy, manufacturing, data center construction, and cloud services.

If you want to do the most meaningful work of your career, help our customers and partners advance their AI strategies, and be part of a high-performing team that believes in each other, come build with us at Crusoe.

About the Role
As the Senior Director of Engineering for Developer Experience, you will own and drive the strategy, execution, and culture of the team responsible for how Crusoe's engineers and non-engineers build, ship, and operate software. Crusoe is looking for a visionary to develop and further build a talented team focused on making every developer at Crusoe faster, more effective, and more empowered through world-class internal tooling, platforms, and AI-powered automation. You'll partner closely with senior engineering and product leadership to define and deliver the foundation that accelerates everything we build.

What You'll Be Working On
Developer Platform & Internal Tooling: Define and execute the long-term vision for Crusoe's internal developer platform (shared services, internal APIs, repositories, and self-service infrastructure) that enables engineering teams to move with speed and confidence.
AI Platform: Rapidly develop and productionize software for the entire company. You'll create and evangelize the golden path to productionizing AI-developed tools, supporting every team at Crusoe. You'll tip the scales on build-vs-buy for every proposed SaaS purchase.
CI/CD & DevOps Infrastructure: Oversee the design, reliability, and continuous improvement of Crusoe's CI/CD pipelines, build systems, and deployment infrastructure, ensuring engineering teams can ship safely and rapidly at scale.
Engineering Productivity & Process: Define and drive org-wide engineering productivity initiatives, establishing metrics, identifying bottlenecks, and implementing tooling and process improvements that measurably improve developer experience across the company.
People Leadership: Manage and grow a team of talented engineers and establish a high-performance culture rooted in accountability, innovation, and continuous learning.
Cross-Functional Partnership: Collaborate with senior leaders across Engineering, Infrastructure, Security, and Product to align Developer Experience investments with company-wide engineering goals and priorities.

What You'll Bring to the Team
Proven Leadership: 10+ years of software engineering experience, with at least 4 years in senior engineering leadership roles.
Developer Experience Expertise: Deep, hands-on understanding of what it takes to build great developer experience, such as internal platforms, tooling, CI/CD systems, and developer productivity programs at scale.
AI Evangelism: Extreme fluency in agentic software development.
Technical Depth: Strong engineering fundamentals with the ability to lead engineers on architecture decisions, technology choices, and complex technical trade-offs across platform and infrastructure domains.
Strategic Vision with Execution Discipline: Demonstrated ability to define a roadmap for a platform or infrastructure organization and drive it to delivery through structured planning, prioritization, and cross-functional alignment.
Data-Driven Approach: Experience establishing and using developer productivity metrics (e.g., DORA metrics, deployment frequency, cycle time) to identify opportunities and demonstrate impact.
Communication & Influence: Exceptional communicator who can operate effectively at both the executive level and in the weeds with engineering teams, translating strategy into execution and results into business impact.

Benefits
Industry-competitive pay
Restricted Stock Units in a fast-growing, well-funded technology company
Health insurance package options that include HDHP and PPO, vision, and dental for you and your dependents
Employer contributions to HSA accounts
Paid parental leave
Paid life insurance, short-term and long-term disability
Teladoc
401(k) with a 100% match up to 4% of salary
Generous paid time off and holiday schedule
Cell phone reimbursement
Tuition reimbursement
Subscription to the Calm app
MetLife Legal
Company-paid commuter benefit: $300/month

Compensation Range
Compensation will be paid in the range of $301,750 – $355,000 + bonus. Restricted Stock Units are included in all offers. Compensation will be determined by the applicant's knowledge, education, and abilities, as well as internal equity and alignment with market data.

Crusoe is an Equal Opportunity Employer. Employment decisions are made without regard to race, color, religion, disability, genetic information, pregnancy, citizenship, marital status, sex/gender, sexual preference/orientation, gender identity, age, veteran status, national origin, or any other status protected by law or regulation.

Data Quality Specialist

Mistral AI
France
Full-time
Remote
About Mistral
At Mistral, we are on a mission to democratize AI, producing frontier intelligence for everyone, developed in the open, and built by engineers all over the world. We are a dynamic, collaborative team passionate about AI and its potential to transform society. Our diverse workforce thrives in competitive environments and is committed to driving innovation, with teams distributed across Europe, the USA, and Asia. We are creative, low-ego, and team-spirited.

At Mistral, we develop models for the enterprise and for consumers, focusing on delivering systems that can really change the way businesses operate and that can integrate into our daily lives, all while releasing frontier models open-source, for everyone to try and benefit from. Mistral is hiring experts in the training of large language models and distributed systems. Join us to be part of a pioneering company shaping the future of AI.

Role Summary
We're seeking highly motivated Data Quality Specialists with strong analytical skills and a keen eye for detail to join our Human Data Annotation team within the Science organisation. In this role, you will contribute to and audit human data annotations, upholding the highest standards of quality and efficiency.

Key Responsibilities
Generate and validate high-quality data annotations, based on guidelines and continuous feedback, for the development and evaluation of AI models
In collaboration with the technical team, review and audit annotations, clarify requirements, share insights, and improve annotation processes, tools, and guidelines

About You
Relevant academic background (science, technology, engineering, mathematics)
Professional proficiency in English, with strong writing and comprehension skills
Outstanding research and analytical skills
Great judgement with complex instructions, limited data, and/or multiple information sources
Excellent communication, interpersonal, and organizational abilities
Easy adaptation to dynamic environments and changing requirements
An appetite for operational work and a high tolerance for repetitive tasks
Passion for, and commitment to, learning new tools and technologies

Nice to Have
A proven track record working with data
Fluency in multiple languages
Experience with code

Benefits (France)
💰 Competitive cash salary and equity
🥕 Food: daily lunch vouchers
🥎 Sport: monthly contribution to a Gympass subscription
🚴 Transportation: monthly contribution to a mobility pass
🧑‍⚕️ Health: full health insurance for you and your family
🍼 Parental: generous parental leave policy
🌎 Visa sponsorship

Senior Engineering Manager, Handshake AI

Handshake
$230,000 – $300,000
United States
Full-time
About Handshake
Handshake is the career network for the AI economy. 20 million knowledge workers, 1,600 educational institutions, 1 million employers (including 100% of the Fortune 50), and every foundational AI lab trust Handshake to power career discovery, hiring, and upskilling, from freelance AI training gigs to first internships to full-time careers and beyond. This unique value is driving unparalleled growth; in 2025, we tripled our ARR at scale.

Why join Handshake now:
Shape how every career evolves in the AI economy, at global scale, with impact your friends, family, and peers can see and feel
Work hand-in-hand with world-class AI labs, Fortune 500 partners, and the world's top educational institutions
Join a team with leadership from Scale AI, Meta, xAI, Notion, Coinbase, and Palantir, among others
Build a massive, fast-growing business with billions in revenue

About the Role
The Senior Engineering Manager at Handshake AI leads a core product and platform engineering team responsible for building the systems that integrate human expertise into AI development workflows. This team owns critical infrastructure that connects talent networks, data operations, and research needs into scalable, reliable, high-quality platforms. The role sits at the intersection of engineering, product, and operations, ensuring our systems can support rapid growth, complex workflows, and frontier AI partners. You'll lead a team of ~9 engineers today and are expected to add leadership capacity (including managing an EM) as we scale.

Location: San Francisco, CA. This is an in-office role, 5 days/week (no remote/hybrid).

What You'll Do
Lead, hire, and develop a high-performing engineering team building core product and platform infrastructure
Own roadmap and execution in close partnership with Product, Research, and Operations
Drive architecture and technical strategy for scalable, reliable, and extensible systems
Build modular platforms that enable new domains, workflows, and partners to launch quickly
Raise the bar on engineering quality across reliability, observability, performance, and data integrity
Foster a culture of ownership, velocity, and strong engineering fundamentals in a fast-moving, ambiguity-heavy environment

What We're Looking For
Engineering leader + builder: 3+ years managing teams, plus 5+ years of hands-on engineering experience
Strong people leadership: experience leading senior engineers; managing an EM (or equivalent scope) is a plus
Execution in ambiguity: proven ability to align cross-functionally and deliver in fast-moving, unclear problem spaces
Systems + product mindset: strong platform/distributed-systems background, and the ability to turn research/ops needs into a clear roadmap, ship iteratively, and measure outcomes

Nice to Have
Experience with RL training infrastructure, simulation systems, or evaluation platforms
Human-in-the-loop systems (annotation, rubric tooling, QA pipelines, workflow platforms)
Experience in operations-heavy, tech-enabled environments
Experience building systems used by applied ML or AI research teams

Perks
Handshake delivers benefits that help you feel supported and thrive at work and in life. The benefits below are for full-time US employees.
🎯 Ownership: equity in a fast-growing company
💰 Financial Wellness: 401(k) match, competitive compensation, financial coaching
🍼 Family Support: paid parental leave, fertility benefits, parental coaching
💝 Wellbeing: medical, dental, and vision coverage, mental health support, $500 wellness stipend
📚 Growth: $2,000 learning stipend, ongoing development
💻 Remote & Office: internet, commuting, and free lunch/gym in our SF office
🏝 Time Off: flexible PTO, 15 holidays + 2 flex days
🤝 Connection: team outings & referral bonuses

Explore our mission, values, and comprehensive US benefits at joinhandshake.com/careers.

Software Development in Test Intern

Together AI
$200,000 – $280,000
Full-time
Remote
About the Role The Turbo team sits at the intersection of efficient inference (algorithms, architectures, engines) and post‑training / RL systems. We build and operate the systems behind Together’s API, including high‑performance inference and RL/post‑training engines that can run at production scale. Our mandate is to push the frontier of efficient inference and RL‑driven training: making models dramatically faster and cheaper to run, while improving their capabilities through RL‑based post‑training (e.g., GRPO‑style objectives). This work lives at the interface of algorithms and systems: asynchronous RL, rollout collection, scheduling, and batching all interact with engine design, creating many knobs to tune across the RL algorithm, training loop, and inference stack. Much of the job is modifying production inference systems—for example, SGLang‑ or vLLM‑style serving stacks and speculative decoding systems such as ATLAS—grounded in a strong understanding of post‑training and inference theory, rather than purely theoretical algorithm design. You’ll work across the stack—from RL algorithms and training engines to kernels and serving systems—to build and improve frontier models via RL pipelines. People on this team are often spiky: some are more RL‑first, some are more systems‑first. Depth in one of these areas plus appetite to collaborate across (and grow toward more full‑stack ownership over time) is ideal. Requirements We don’t expect anyone to check every box below. People on this team typically have deep expertise in one or more areas and enough breadth (or interest) to work effectively across the stack. The closer you are to full‑stack (inference + post‑training/RL + systems), the stronger the fit—but being spiky in one area and eager to grow is absolutely okay. 
You might be a good fit if you: Have strong expertise in at least one of the following, and are excited to collaborate across (and grow into) the others: Systems‑first profile: Large‑scale inference systems (e.g., SGLang, vLLM, FasterTransformer, TensorRT, custom engines, or similar), GPU performance, distributed serving. RL‑first profile: RL / post‑training for LLMs or large models (e.g., GRPO, RLHF/RLAIF, DPO‑like methods, reward modeling), and using these to train or fine‑tune real models. Model architecture design for Transformers or other large neural nets. Distributed systems / high‑performance computing for ML. Are comfortable working from algorithms to engines: Strong coding ability in Python Experience profiling and optimizing performance across GPU, networking, and memory layers. Able to take a new sampling method, scheduler, or RL update and turn it into a production‑grade implementation in the engine and/or training stack. Have a solid research foundation in your area(s) of depth: Track record of impactful work in ML systems, RL, or large‑scale model training (papers, open‑source projects, or production systems). Can read new RL / post‑training papers, understand their implications on the stack, and design minimal, correct changes in the right layer (training engine vs. inference engine vs. data / API). Operate well as a full‑stack problem solver: You naturally ask: “Where in the stack is this really bottlenecked?” You enjoy collaborating with infra, research, and product teams, and you care about both scientific quality and user‑visible wins. Minimum qualifications 3+ years of experience working on ML systems, large‑scale model training, inference, or adjacent areas (or equivalent experience via research / open source). Advanced degree in Computer Science, EE, or a related field, or equivalent practical experience. Demonstrated experience owning complex technical projects end‑to‑end. 
If you're excited about the role and strong in some of these areas, we encourage you to apply even if you don't meet every single requirement.

Responsibilities

Advance inference efficiency end-to-end:
- Design and prototype algorithms, architectures, and scheduling strategies for low-latency, high-throughput inference.
- Implement and maintain changes in high-performance inference engines (e.g., SGLang- or vLLM-style systems and Together's inference stack), including kernel backends, speculative decoding (e.g., ATLAS), quantization, etc.
- Profile and optimize performance across GPU, networking, and memory layers to improve latency, throughput, and cost.

Unify inference with RL / post-training:
- Design and operate RL and post-training pipelines (e.g., RLHF, RLAIF, GRPO, DPO-style methods, reward modeling) where 90%+ of the cost is inference, jointly optimizing algorithms and systems.
- Make RL and post-training workloads more efficient with inference-aware training loops, for example async RL rollouts, speculative decoding, and other techniques that make large-scale rollout collection and evaluation cheaper.
- Use these pipelines to train, evaluate, and iterate on frontier models on top of our inference stack.
- Co-design algorithms and infrastructure so that objectives, rollout collection, and evaluation are tightly coupled to efficient inference, and quickly identify bottlenecks across the training engine, inference engine, data pipeline, and user-facing layers.
- Run ablations and scale-up experiments to understand trade-offs between model quality, latency, throughput, and cost, and feed these insights back into model, RL, and system design.

Own critical systems at production scale:
- Profile, debug, and optimize inference and post-training services under real production workloads.
- Drive roadmap items that require real engine modification: changing kernels, memory layouts, scheduling logic, and APIs as needed.
- Establish metrics, benchmarks, and experimentation frameworks to validate improvements rigorously.

Provide technical leadership (Staff level):
- Set technical direction for cross-team efforts at the intersection of inference, RL, and post-training.
- Mentor other engineers and researchers on full-stack ML systems work and performance engineering.

About Together AI
Together AI is a research-driven artificial intelligence company. We believe open and transparent AI systems will drive innovation and create the best outcomes for society, and together we are on a mission to significantly lower the cost of modern AI systems by co-designing software, hardware, algorithms, and models. We have contributed leading open-source research, models, and datasets to advance the frontier of AI, and our team has been behind technological advancements such as FlashAttention, Hyena, FlexGen, and RedPajama. We invite you to join a passionate group of researchers on our journey to build the next generation of AI infrastructure.

Compensation
We offer competitive compensation, startup equity, health insurance, and other competitive benefits. The US base salary range for this full-time position is $200,000 - $280,000 + equity + benefits. Our salary ranges are determined by location, level, and role. Individual compensation will be determined by experience, skills, and job-related knowledge.

Equal Opportunity
Together AI is an Equal Opportunity Employer and is proud to offer equal employment opportunity to everyone regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, veteran status, and more. Please see our privacy policy at https://www.together.ai/privacy
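The GRPO-style objectives this posting references center on group-relative advantage estimation. As a minimal, hedged sketch (the function name and reward inputs are illustrative assumptions, not Together's actual implementation), advantages for a group of rollouts sampled from the same prompt can be computed by normalizing each reward against its group statistics, with no learned value model:

```python
import statistics

def grpo_advantages(group_rewards, eps=1e-8):
    """GRPO-style sketch: normalize each rollout's reward by the
    mean and population std of its own sampling group, so relative
    quality within the group drives the policy update."""
    mean = statistics.fmean(group_rewards)
    std = statistics.pstdev(group_rewards)
    return [(r - mean) / (std + eps) for r in group_rewards]

# Four rollouts for one prompt, scored by a reward model.
# Advantages sum to zero by construction; the best rollout
# receives the largest positive advantage.
advs = grpo_advantages([1.0, 0.0, 0.5, 0.5])
```

Because every advantage comes from freshly sampled rollouts, inference dominates the cost of such pipelines, which is why the posting emphasizes inference-aware training loops and cheap large-scale rollout collection.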

AI Tooling Frontend Engineer - Helix Team

Figure AI
$150,000 – $250,000
United States
Full-time
Remote: No
Figure is an AI Robotics company developing a general-purpose humanoid. Our humanoid robot is designed for commercial tasks and the home. We are based in San Jose, CA and require 5 days/week in-office collaboration. It's time to build.

Figure's vision is to deploy autonomous humanoids at a global scale. Our Helix team is seeking an experienced Frontend Engineer to enhance our internal, web-based data and AI training tools. This role focuses on developing intuitive web interfaces that support key AI research functions, including robot data annotation, training dataset visualization, and experiment tracking. The ideal candidate has experience building rich, interactive web interfaces using React and TypeScript.

Responsibilities
- Design and build intuitive web interfaces for robot data annotation, dataset visualization, and experiment tracking
- Use data-driven techniques to optimize interfaces for efficiency and fast iteration cycles
- Integrate AI models to automate manual tasks
- Work with AI researchers, robot operators, and annotators to support new user experiences

Requirements
- Strong software engineering fundamentals
- Bachelor's or Master's degree in Computer Science, Robotics, Engineering, or a related field
- Minimum of 4 years of professional, full-time experience building rich, interactive web interfaces
- Proficiency in React and TypeScript

Bonus Qualifications
- Experience using data stores (Postgres, MySQL, ElasticSearch, Redis, etc.)
- Experience managing cloud infrastructure (AWS, Azure, GCP)
- Experience with Tailwind CSS
- Experience building data annotation and dataset management tools

The US base salary range for this full-time position is between $150,000 - $250,000 annually. The pay offered for this position may vary based on several individual factors, including job-related knowledge, skills, and experience. The total compensation package may also include additional components/benefits depending on the specific role.
This information will be shared if an employment offer is extended.

Senior Product Manager – Data & Quality

Snorkel AI
$172,000 – $300,000
United States
Full-time
Remote: No
About Snorkel
At Snorkel, we believe meaningful AI doesn't start with the model; it starts with the data. We're on a mission to help enterprises transform expert knowledge into specialized AI at scale. The AI landscape has gone through incredible changes between 2015, when Snorkel started as a research project in the Stanford AI Lab, and the generative AI breakthroughs of today. But one thing has remained constant: the data you use to build AI is the key to achieving differentiation, high performance, and production-ready systems. We work with some of the world's largest organizations to empower scientists, engineers, financial experts, product creators, journalists, and more to build custom AI with their data faster than ever before. Excited to help us redefine how AI is built? Apply to be the newest Snorkeler!

About the Role
Snorkel AI is hiring Frontier AI Solutions Engineers who will partner with leading AI labs on their most challenging data problems. This is a high-impact, customer-facing role that combines technical depth with strong presales instincts. You'll partner with customer research teams to design complex data and environments that improve frontier model performance, demonstrating Snorkel's capabilities through research-driven engagements. You'll work at the critical intersection of research, technical strategy, and customer partnership. This includes scoping training data needs, designing RL environments and tasks, developing evaluation frameworks, probing model behavior and failure modes, and translating customer research objectives into actionable technical plans. You'll develop technical specifications, analyze frontier model failure modes, and serve as a thought partner to customer research teams throughout the sales cycle and into early delivery phases.
Main Responsibilities
- Partner with frontier AI research labs to design datasets and environments that improve model performance
- Lead technical conversations with customer researchers to understand model capabilities, failure modes, data requirements, and success criteria
- Probe model behavior through systematic evaluation to uncover weaknesses and identify high-impact data interventions
- Design evaluation frameworks, calibration processes, and quality rubrics that establish measurable project success metrics
- Develop technical specifications for data projects that balance research rigor with operational feasibility
- Serve as a thought partner to customer research teams throughout the sales cycle, building trust and credibility
- Stay current on frontier AI research, RL environment design, post-training techniques, and evaluation methodologies

Preferred Qualifications
- Strong expertise in frontier AI concepts, including LLMs, training data pipelines, evaluation methodologies, post-training techniques (RLHF, DPO, RLAIF), and domain areas such as coding agents, reasoning, multimodal models, or RL environments
- Experience in applied ML research, data science, or research-intensive technical roles with customer-facing or collaborative research experience
- Proficiency in Python and familiarity with ML frameworks and LLM APIs
- Excellent communication skills: ability to deliver technical presentations and explain complex concepts to diverse audiences
- Familiarity with data curation workflows, synthetic data generation, LLM-as-a-Judge, or evaluation framework design
- Ability to work in a fast-moving environment, comfortable with ambiguity and rapid iteration
- B.S. in Computer Science, Machine Learning, or a related field with 4+ years of experience in AI/ML solutions engineering or technical customer-facing roles

The compensation range for Tier 1 locations (San Francisco Bay Area and New York City) is $172K - $300K OTE. All offers also include equity in the form of employee stock options.
Our compensation ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training.

Why Join Snorkel AI?
At Snorkel AI, we're building the future of data-centric AI. Our Expert Data-as-a-Service organization partners with world-class customers to solve some of the hardest data challenges, creating training and evaluation data that power the next generation of LLMs and AI systems. You'll work directly on projects that impact real production systems, while shaping how internal teams deliver faster, better, and more intelligently. This is a rare opportunity to own technical data workflows and be a founding member of the technical DaaS team.

Salary Range: $172,000 - $300,000 USD

Be Your Best at Snorkel
Joining Snorkel AI means becoming part of a company that has market-proven solutions, robust funding, and is scaling rapidly, offering a unique combination of stability and the excitement of high growth. As a member of our team, you'll have meaningful opportunities to shape priorities and initiatives, influence key strategic decisions, and directly impact our ongoing success. Whether you're looking to deepen your technical expertise, explore leadership opportunities, or learn new skills across multiple functions, you're fully supported in building your career in an environment designed for growth, learning, and shared success.

Snorkel AI is proud to be an Equal Employment Opportunity employer and is committed to building a team that represents a variety of backgrounds, perspectives, and skills. Snorkel AI embraces diversity and provides equal employment opportunities to all employees and applicants for employment.
Snorkel AI prohibits discrimination and harassment of any type on the basis of race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local law. All employment is decided on the basis of qualifications, performance, merit, and business need. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.

Software Engineer, Internal Tools

xAI
$45 – $100 / hour
United States
Full-time
Remote: No
About xAI
xAI's mission is to create AI systems that can accurately understand the universe and aid humanity in its pursuit of knowledge. Our team is small, highly motivated, and focused on engineering excellence. This organization is for individuals who appreciate challenging themselves and thrive on curiosity. We operate with a flat organizational structure. All employees are expected to be hands-on and to contribute directly to the company's mission. Leadership is given to those who show initiative and consistently deliver excellence. Work ethic and strong prioritization skills are important. All employees are expected to have strong communication skills and should be able to concisely and accurately share knowledge with their teammates.

About the Role
As an Accounting Expert, you will be instrumental in enhancing the capabilities of our cutting-edge technologies by providing high-quality input and labels using specialized software. Your role involves collaborating closely with our technical team to support the training of new AI tasks, ensuring the implementation of innovative initiatives. You'll contribute to refining annotation tools and selecting complex problems from corporate accounting domains, with a focus on financial reporting, consolidation, internal controls, and GAAP compliance, where your expertise can drive significant improvements in model performance. This position demands a dynamic approach to learning and adapting in a fast-paced environment, where your ability to interpret and execute tasks based on evolving instructions is crucial.

The AI Tutor's Role in Advancing xAI's Mission
As an AI Tutor, you will play an essential role in advancing xAI's mission by supporting the training and refinement of xAI's AI models. AI Tutors teach our AI models how people interact and react, as well as how people approach issues and discussions in corporate accounting.
To accomplish this, AI Tutors will actively participate in gathering or providing data, such as text, voice, and video data, sometimes providing annotations, recording audio, or participating in video sessions. We seek individuals who are comfortable and eager to engage in these activities as a fundamental part of the role, ensuring strong alignment with xAI's goals and objectives.

Scope
An AI Tutor will provide services that include labeling and annotating data in text, voice, and video formats to support AI model training. At times, this may involve recording audio or video sessions, and tutors are expected to be comfortable with these tasks, as they are fundamental to the role. Such data is a job requirement to advance xAI's mission, and AI Tutors acknowledge that all work is done for hire and owned by xAI.

Responsibilities
- Use proprietary software applications to provide input/labels on defined projects.
- Support and ensure the delivery of high-quality curated data.
- Play a pivotal role in supporting and contributing to the training of new tasks, working closely with the technical staff to ensure the successful development and implementation of cutting-edge initiatives and technologies.
- Interact with the technical staff to help improve the design of efficient annotation tools.
- Choose problems from corporate accounting fields that align with your expertise, providing rigorous solutions and model critiques where you can confidently provide detailed solutions and evaluate model responses.
- Regularly interpret, analyze, and execute tasks based on given instructions.

Key Qualifications
- 3+ years of Big 4 public accounting experience (audit/assurance) on corporate or SEC clients, or an equivalent senior corporate accounting role (e.g., Controller, Assistant Controller, or Technical Accounting Manager at a public company or large private enterprise with complex GAAP reporting).
- A Master's or PhD in Accounting (corporate focus), or equivalent credentials as a licensed CPA.
- Proficiency in reading and writing, both in informal and professional English.
- Strong ability to navigate various corporate accounting information resources, databases, and online resources (e.g., FASB codification, SEC EDGAR, 10-K/10-Q filings, ERP systems).
- Outstanding communication, interpersonal, analytical, and organizational capabilities.
- Solid reading comprehension skills combined with the capacity to exercise autonomous judgment even when presented with limited data/material.
- Strong passion for and commitment to technological advancements and innovation in corporate accounting.

Preferred Qualifications
- 5+ years at a Big 4 firm or in a senior corporate controllership role, with direct involvement in SEC reporting, SOX 404, or complex consolidations.
- Experience drafting or reviewing 10-K/10-Q footnotes, MD&A, or technical accounting memos.
- At least one publication in a reputable accounting journal or outlet.
- Teaching experience as a professor.

Location & Other Expectations
- This position is based in Palo Alto, CA, or fully remote.
- The Palo Alto option is an in-office role requiring 5 days per week; remote positions require strong self-motivation.
- If you are based in the US, please note we are unable to hire in the states of Wyoming and Illinois at this time.
- We are unable to provide visa sponsorship.
- Team members are expected to work from 9:00am - 5:30pm PST for the first two weeks of training and 9:00am - 5:30pm in their own timezone thereafter.
- For those who will be working from a personal device, please note your computer must be a Chromebook, a Mac with macOS 11.0 or later, or Windows 10 or later.

Compensation
$45/hour - $100/hour. The posted pay range is intended for U.S.-based candidates and depends on factors including relevant experience, skills, education, geographic location, and qualifications.
For international candidates, our recruiting team can provide an estimated pay range for your location.

Benefits
Hourly pay is just one part of our total rewards package at xAI. Specific benefits vary by country; depending on your country of residence, you may have access to medical benefits. We do not offer benefits for part-time roles.

xAI is an equal opportunity employer. For details on data processing, view our Recruitment Privacy Notice.

Data Engineer - Foundational

Harmattan AI
France
Full-time
Remote: No
About Us
Harmattan AI is a next-generation defense prime building autonomous and scalable defense systems. Following the close of a $200M Series B, valuing the company at $1.4 billion, we are expanding our teams and capabilities to deliver mission-critical systems to allied forces. Our work is guided by clear values: building technologies with real-world impact, pursuing excellence in everything we do, setting ambitious goals, and taking on the hardest technical challenges. We operate in a demanding environment where rigor, ownership, and execution are expected.

About the Role
As a Data Engineer on the Foundational team, you will serve as the "plumber" for deep learning, building the massive, high-performance data infrastructure required to power our foundational models. Based in Paris, you will manage terabytes, and eventually petabytes, of raw, unstructured, and noisy video data (EO and IR). Your mission is to ensure our ML engineers spend their time designing architectures, not waiting for data loaders or wrangling corrupted files.

Responsibilities
- Multi-Modal Ingestion Pipeline: Build ETL/ELT pipelines to extract, decode, and store raw Electro-Optical (EO) and Infrared (IR) video from field logs into highly optimised formats like WebDataset, TFRecords, or Parquet.
- Sensor Synchronisation & Alignment: Develop algorithms to programmatically synchronise EO and IR frames temporally and spatially to provide paired inputs for model training.
- High-Throughput Data Loading: Architect storage-to-GPU pipelines to ensure multi-node training clusters maintain >90% GPU utilisation without I/O bottlenecks.
- Distributed Processing: Write and optimise distributed data processing jobs using tools like Apache Spark, Ray, or Apache Beam to process thousands of hours of tactical video logs.
- Data Quality & Versioning: Implement automated quality checks to filter corrupted or blank frames and maintain 100% reproducible training runs through robust versioning and lineage tracking.
- Infrastructure Evaluation: Assess and implement advanced storage solutions (e.g., MinIO, S3 tiering) to manage growing datasets while optimising for cost and latency.

Candidate Requirements
- Educational Background: A BS or MS in Computer Science, Software Engineering, or Distributed Systems is highly preferred. Deep knowledge of operating systems, networking, and parallel computing is essential.
- Technical Experience: 5-6+ years of experience building and maintaining terabyte-scale pipelines for unstructured data (video, images, or point clouds).
- Performance Optimisation: Proven track record of maximising multi-node GPU utilisation and optimising data loaders for frameworks like PyTorch or JAX.
- Tooling Expertise: Strong command of distributed computing tools (Spark, Ray, Beam) and ML data versioning tools (DVC, Apache Iceberg, or Pachyderm).
- Adaptability & Ownership: A systems-thinker who thrives in a fast-paced startup environment and views messy data as an engineering problem to be solved via automation.
- Commitment: 100% dedication to Harmattan AI's mission of providing a defensive edge to allied nations through ethical, high-impact technology.

We look forward to hearing how you can help shape the future of autonomous defense systems at Harmattan AI.
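The temporal half of the EO/IR synchronisation task described in this posting can be sketched in a few lines. This is an illustrative assumption-laden example (the function name, tolerance value, and nearest-timestamp strategy are not Harmattan's actual pipeline, and real systems must also handle spatial registration):

```python
import bisect

def pair_by_timestamp(eo_ts, ir_ts, tol_s=0.02):
    """Pair each EO frame with the temporally nearest IR frame.

    eo_ts, ir_ts: sorted capture timestamps in seconds.
    Returns (eo_index, ir_index) pairs whose gap is within tol_s;
    EO frames with no IR frame close enough are dropped.
    """
    pairs = []
    for i, t in enumerate(eo_ts):
        j = bisect.bisect_left(ir_ts, t)
        best = None
        for k in (j - 1, j):  # the two IR neighbours around t
            if 0 <= k < len(ir_ts):
                if best is None or abs(ir_ts[k] - t) < abs(ir_ts[best] - t):
                    best = k
        if best is not None and abs(ir_ts[best] - t) <= tol_s:
            pairs.append((i, best))
    return pairs

# A 30 fps EO stream against a slightly offset IR stream:
print(pair_by_timestamp([0.000, 0.033, 0.066], [0.001, 0.034, 0.100],
                        tol_s=0.01))  # → [(0, 0), (1, 1)]
```

Dropping unmatched frames (rather than interpolating) is the conservative choice for training data, since a stale pairing silently corrupts the paired inputs the posting describes.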

Computer Vision Engineer

Harmattan AI
Switzerland
Full-time
Remote: No
About Us
Harmattan AI is a next-generation defense prime building autonomous and scalable defense systems. Following the close of a $200M Series B, valuing the company at $1.4 billion, we are expanding our teams and capabilities to deliver mission-critical systems to allied forces. Our work is guided by clear values: building technologies with real-world impact, pursuing excellence in everything we do, setting ambitious goals, and taking on the hardest technical challenges. We operate in a demanding environment where rigor, ownership, and execution are expected.

About the Role
We are looking for a Computer Vision Engineer to join our Machine Learning and Computer Vision team. This role is crucial for developing core technical components across various robotics/aerospace projects.

Responsibilities
- Research & Data Preparation: Conduct research on state-of-the-art Computer Vision methodologies. Participate in the creation and curation of training and validation datasets. Perform statistical analyses and develop visualization tools to ensure data quality.
- Algorithm Development & Optimization: Build and refine training pipelines and metrics to enhance model performance. Develop and optimize Computer Vision algorithms for multiple robotics/aerospace projects.
- Deployment & Integration: Implement ML/CV models into production-ready environments. Ensure seamless integration with Harmattan AI's systems and conduct rigorous code reviews.
- Validation & Monitoring: Test algorithms in real-world environments and develop monitoring tools. Track model performance and continuously improve deployed solutions.
- Cross-Team Collaboration: Work closely with software and simulation teams to align development with system requirements. Communicate findings effectively to stakeholders.

Candidate Requirements
- Educational Background: A degree from a top-tier engineering school or university (Master's degree in Computer Science or a related field; a PhD is a plus).
- Technical Expertise: Strong mathematical foundations, solid coding skills (Python; C++ is a plus), and hands-on ML/CV project experience. Experience at top AI companies is a huge plus.
- Passion for ML: Enthusiasm for Machine Learning and Computer Vision.
- Communication & Teamwork: Ability to collaborate effectively with diverse teams.
- Commitment: 100% dedication to Harmattan AI's mission, vision, and ambitious growth plans, ready to go the extra mile to ensure operational excellence.

We look forward to hearing how you can help shape the future of autonomous defense systems at Harmattan AI.

Machine Learning Operations Engineer

Hayden AI
$135,699 – $190,000
United States
Full-time
Remote: No
About Us
At Hayden AI, we are on a mission to harness the power of computer vision to transform the way transit systems and other government agencies address real-world challenges. From bus lane and bus stop enforcement to transportation optimization technologies and beyond, our innovative mobile perception system empowers our clients to accelerate transit, enhance street safety, and drive toward a sustainable future.

Job Title: Machine Learning Operations Engineer
Company: Hayden AI Technologies, Inc.
Location: 460 Bryant Street, Suite 100, San Francisco, CA 94107

Position Duties:
- Optimize orchestration processes to ensure efficient deployment and management of AI models.
- Implement cost-saving strategies to minimize infrastructure expenses while maximizing performance.
- Improve throughput to enhance the scalability and responsiveness of AI systems.
- Collaborate with cross-functional teams to identify bottlenecks and implement solutions to improve workflow efficiency.
- Ship new features and updates rapidly while maintaining high levels of quality and reliability.
- Deploy and monitor machine learning models produced by deep learning engineers.
- Design, deploy, and maintain performant and scalable processes for data acquisition and manipulation to enhance dataset accessibility.
- Participate actively in the team's software development process, including design reviews, code reviews, and brainstorming sessions.
- Maintain accurate and updated software development documentation.

Degree Requirements: Bachelor of Science or Engineering degree, or a foreign equivalent, in Robotics, Machine Learning, Computer Science, Electrical Engineering, Electronics and Telecommunication Engineering, or a related field.

Experience Requirement: One (1) year of work experience in the job offered or as a Deep Learning Engineer, Perception Engineer II, MLOps Engineer, Research Engineer II, Machine Learning or Computer Vision Intern, or another related deep learning engineering role.

Other Special Requirements: One (1) year of work experience with all of the following:
- Deploying real-world applied computer vision (including deep learning models) on edge devices
- Python programming and software design
- C++
- Software development tools and libraries, including PyTorch, OpenCV, TensorFlow, and MLflow
- Automated data annotation
- Distributed training in the cloud
- Deploying and managing GPU clusters for ML pipelines and workflows

Rate of Pay: $135,699.00 to $190,000.00 per year

Location of Position and Interview: Hayden AI Technologies, Inc., 460 Bryant Street, Suite 100, San Francisco, CA 94107

Applicants should submit resumes to: Janet Le-Mcintosh, janet.le@hayden.ai, Hayden AI Technologies, Inc., 460 Bryant Street, Suite 100, San Francisco, CA 94107

Software Engineering Manager, Autonomous

Magical
Canada
Full-time
Remote: No
About Magical
Magical is an agentic automation platform bringing state-of-the-art AI to healthcare, delivering AI agents that actually work in production. We're building "AI employees" that automate the repetitive, time-consuming workflows that slow teams down. Our focus is healthcare, a $4 trillion industry buried in administrative complexity, where we automate claims processing, prior authorizations, and eligibility checks, enabling providers to focus on patient care.

Our Traction
The shift to agentic automation in healthcare is inevitable, and we're leading it:
- Dramatic acceleration in revenue growth, with customers expanding into new workflows before renewal
- 7-day proof-of-concepts that demonstrate real value fast, in an industry where months is the norm
- Self-healing automations with production-grade reliability at scale, where most competitors fail to launch

Unlike many AI companies making bold promises, we ship reliable solutions that deliver measurable results. We're backed by Greylock, Coatue, and Lightspeed with $41M raised. Our founder, Harpaul Sambhi, is a second-time founder who successfully sold his first company to LinkedIn.

About the Role
As our Engineering Manager on the Autonomous team, you will lead and scale a high-calibre team of engineers dedicated to defining the future of AI agent development, pushing the boundaries of AI and backend systems. You are deeply passionate about the craft of management and find genuine fulfillment in helping engineers grow their careers. You bring the technical credibility required to navigate complex architectural discussions and translate deep technical challenges into clear business strategies. In this role, you will serve as the essential bridge between product vision and technical execution.

This is a hybrid role with 2 days per week in our Toronto office.

In this role, you will
- Oversee the technical roadmap for the Autonomous team, translating architectural complexity into clear product strategies
- Mentor a diverse group of engineers, ranging from product-focused builders to specialized Staff Engineers, and actively support their professional growth
- Partner closely with Product and Design to ensure our agent-building tools remain intuitive while supporting deep technical capabilities
- Champion a "show > tell" culture by ensuring the team ships rapidly and maintains a high bar for both technical stability and user experience
- Clear technical and operational roadblocks to ensure the team operates with high agency and clarity

Your background looks something like this
- A proven track record of leading and scaling engineering teams in fast-paced, high-growth environments
- The technical background necessary to critically evaluate complex trade-offs and provide strategic direction on complex system designs
- Experience navigating the balance between long-term technical health and the immediate needs of a rapidly evolving product
- A servant-leadership philosophy, with a primary focus on the success of the team and individual growth
- A high degree of agency: you thrive in ambiguity and proactively improve processes or solve bottlenecks without much outside input
- Strong business acumen and a genuine interest in how technical decisions impact the customer and the company's success

Even better
- Prior experience building AI-powered products or developer tools
- A sharp eye for design and product quality
- Experience with real-time interfaces, data visualization, or collaborative editing
- Understanding of agent systems, LLMs, or evaluation frameworks
- Track record of building products that balance power and simplicity

We're building the best self-serve agentic automation platform for the healthcare industry and we're just getting started. Come join us.

Software Engineering Manager, Autonomous

Magical
United States
Full-time
Remote: No
About MagicalMagical is an agentic automation platform bringing state-of-the-art AI to healthcare, delivering AI agents that actually work in production.We're building "AI employees" that automate the repetitive, time-consuming workflows that slow teams down. Our focus is healthcare – a $4 trillion industry buried in administrative complexity – where we automate claims processing, prior authorizations, and eligibility checks, enabling providers to focus on patient care.Our TractionThe shift to agentic automation in healthcare is inevitable, and we're leading it:Dramatic acceleration in revenue growth with customers expanding into new workflows before renewal7-day proof-of-concepts that demonstrate real value fast, in an industry where months is the normSelf-healing automations with production-grade reliability at scale, where most competitors fail to launchUnlike many AI companies making bold promises, we ship reliable solutions that deliver measurable results. We're backed by Greylock, Coatue, and Lightspeed with $41M raised. Our founder, Harpaul Sambhi, is a second-time founder who successfully sold his first company to LinkedIn.About the RoleAs our Engineering Manager on our Autonomous team, you will lead and scale a high-calibre team of engineers dedicated to defining the future of AI agent development, pushing the boundaries of AI and backend systems.You are deeply passionate about the craft of management and find genuine fulfillment in helping engineers grow their careers. You bring the technical credibility required to navigate complex architectural discussions and translate deep technical challenges into clear business strategies. 
In this role, you will serve as the essential bridge between product vision and technical execution. This is a hybrid role with 2 days per week in our San Francisco office.

In this role, you will:
- Oversee the technical roadmap for the Autonomous team, translating architectural complexity into clear product strategies
- Mentor a diverse group of engineers, ranging from product-focused builders to specialized Staff Engineers, and actively support their professional growth
- Partner closely with Product and Design to ensure our agent-building tools remain intuitive while supporting deep technical capabilities
- Champion a "show > tell" culture by ensuring the team ships rapidly and maintains a high bar for both technical stability and user experience
- Clear technical and operational roadblocks to ensure the team operates with high agency and clarity

Your background looks something like this:
- A proven track record of leading and scaling engineering teams in fast-paced, high-growth environments
- The technical background necessary to critically evaluate trade-offs and provide strategic direction on complex system designs
- Experience navigating the balance between long-term technical health and the immediate needs of a rapidly evolving product
- A servant-leadership philosophy, with a primary focus on the success of the team and individual growth
- A high degree of agency: you thrive in ambiguity and proactively improve processes or solve bottlenecks without much outside input
- Strong business acumen and a genuine interest in how technical decisions impact the customer and the company's success

Even better:
- Prior experience building AI-powered products or developer tools
- A sharp eye for design and product quality
- Experience with real-time interfaces, data visualization, or collaborative editing
- Understanding of agent systems, LLMs, or evaluation frameworks
- A track record of building products that balance power and simplicity

We're building the best self-serve agentic automation platform for the healthcare industry and we're just getting started. Come join us.

Senior Product Designer, Mobile

Grammarly
$103,000 – $128,000
United States
Canada
Mexico
Full-time
Remote
false
SUPERHUMAN MAIL 👉
We exist so that professionals end each day feeling happier, more productive, and closer to achieving their potential. Our customers get through their inboxes twice as fast; many see inbox zero for the first time in years. Today we are the fastest email experience in the world. Loved and adored: see what our customers say 📣

We've joined forces with Grammarly to build the AI-native productivity suite of the future, with Superhuman as the central communication layer. This partnership accelerates our mission to help professionals achieve their potential, now at even greater scale. Come shape the future of email, communication, and productivity!

BUILD LOVE 💜
At Superhuman, we deeply understand how to build products that people love. We incorporate fun and play; we infuse magic and joy; we make experiences that amaze and delight. It all starts with the right team: a team that deeply cares about values, customers, and each other.

CREATE MASSIVE IMPACT 🚀
We're not solving a small problem, and we're not addressing a small market. We're going after email: the one activity that consumes more of our workday than any other. Our ambition doesn't stop there. Next: calendars, notes, contacts, and team communication. We are building the productivity platform of the future.

DO THE BEST WORK OF YOUR LIFE 🌟
We have created the frameworks for how to build product-market fit and redefined the narrative of how to onboard customers successfully. We have shown the world it's possible to build a premium productivity brand. Our investors include Andreessen Horowitz, First Round Capital, IVP, Tiger Global Management, Sam Altman, and the founders of Gmail, Dropbox, Reddit, Discord, Stripe, GitHub, AngelList, and Intercom. This time, we're swinging beyond the fences and fundamentally rethinking how individuals and teams should collaborate. We are building a household brand and a worldwide organization. We are here to do the best work of our lives, and we hope you are too. 
ROLE 👩🏽‍💻👨‍💻
- Own the observability and lifecycle management of AI features across the organization
- Build tools and infrastructure to enable teams to develop, monitor, and optimize LLM-powered features
- Design and implement closed-loop evaluation pipelines that automatically validate prompt changes
- Develop comprehensive metrics and dashboards to track LLM usage: cost per feature, token patterns, and latency
- Create systems that tie user feedback to specific prompts and LLM calls
- Establish best practices and processes for the full lifecycle of prompts: development, testing, deployment, and monitoring
- Collaborate with engineering teams across the org to ensure they have the tools and visibility needed to build high-quality AI features

Technologies we use: Go, Postgres, Kubernetes, Google Cloud, and various LLM providers (OpenAI, Anthropic, Google Vertex).

SOUND LIKE YOU? 🙌
- Experience: You have 4+ years of software development experience with a focus on backend engineering, DevOps, ML Ops, or SRE work. You're proficient in at least one back-end programming language (ideally Go). You have hands-on experience with observability, metrics, and monitoring systems.
- AI Enthusiast: You believe AI will revolutionize how we work as well as the experiences that we create for our customers. Driven by passion and curiosity, you leverage AI to dramatically increase your own productivity and the impact of your team.
- Metrics-Driven: You understand percentiles (P90, P95), know how to build meaningful dashboards, and can turn raw data into actionable insights. You've worked with monitoring and observability tools.
- Systems Thinker: You think in terms of pipelines, lifecycles, and closed loops. You know how to build scalable infrastructure that enables other teams to move faster.
- Remarkable Quality: You produce work that is striking, worthy of attention, and a contribution to the state of the art.
- Asynchronous Communicator: You're effective across various mediums (especially Slack, Notion, and email) and can produce and consume detailed written materials as needed without sacrificing speed. You respond quickly and thoughtfully to unblock others and speed things up.
- Start-to-Finish Ownership: You act with 100% responsibility for your own outcomes as well as the outcomes of the company.
- Bias to Action: Speed matters. You take rapid and decisive steps forward, even in the face of uncertainty, recognizing that action is the catalyst for progress and growth.
- Growth Mindset: You embrace challenges, welcome feedback, and believe you and others can always grow.

SALARY INFO 💸
Superhuman takes a market-based approach to compensation, which means base pay may vary depending on your location. Base pay may also vary considerably depending on job-related knowledge, skills, and experience. The expected salary ranges for this position are outlined below by location and may be modified in the future.
- Canada: $128,000 – $174,000 CAD
- Mexico: $1,928,000 – $2,405,000 MXN
- Brazil: R$562,000 – R$702,000 BRL
- Argentina: $103,000 – $128,000 USD
The salary ranges do not reflect total compensation, which includes base salary, benefits, and company equity. This range is intentionally broad because we are open to considering candidates at multiple levels of seniority within engineering. The exact salary offered will depend on the candidate's skills, experience, and the level at which they join our team. 
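The Metrics-Driven point above leans on percentile fluency (P90, P95). As a rough, illustrative sketch only (nearest-rank method, invented latency samples, not Superhuman code), computing those from raw per-request latencies looks like:

```python
# Illustrative only: invented latency samples (ms) for one LLM-backed feature.
def percentile(samples, p):
    """Nearest-rank percentile: the smallest value >= p% of the samples."""
    ordered = sorted(samples)
    rank = -(-len(ordered) * p // 100)  # ceil(n * p / 100), as a 1-based rank
    return ordered[max(rank, 1) - 1]

latencies_ms = [120, 95, 210, 180, 150, 900, 130, 160, 140, 175]
p90 = percentile(latencies_ms, 90)  # here: 210 ms
p95 = percentile(latencies_ms, 95)  # here the single 900 ms outlier dominates
```

A P95 far above P90, as in this toy data, is exactly the kind of tail-latency signal a dashboard for LLM features should surface.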
BENEFITS 🎁
Superhuman offers all team members competitive pay along with a benefits package encompassing the following and more:
- Excellent health care (including a wide range of medical, dental, vision, mental health, and fertility benefits)
- Disability and life insurance options
- 401(k) and RRSP matching (US & Canada only)
- Paid parental leave
- 20 days of paid time off per year, 12 days of paid holidays per year (17 days for LatAm), two floating holidays per year, and flexible sick time
- Generous stipends (including those for caregiving, pet care, wellness, your home office, and more)
- Annual professional development budget and opportunities

COME JOIN US 🎟️
We value our differences, and we encourage all to apply, especially those whose identities are traditionally underrepresented in tech organizations. We do not discriminate on the basis of race, religion, color, gender expression or identity, sexual orientation, ancestry, national origin, citizenship, age, marital status, veteran status, disability status, political belief, or any other characteristic protected by law. Superhuman is an equal opportunity employer and a participant in the US federal E-Verify program (US). We also abide by the Employment Equity Act (Canada). #LI-Remote

Lead AI/ML Engineer

ASAPP
$170,000 – $190,000
United States
Full-time
Remote
false
At ASAPP, our mission is simple: deliver the best AI-powered customer experience, faster than anyone else. To achieve that, we're guided by principles that shape how we think, build, and execute. We value customer obsession, purposeful speed, ownership, and a relentless focus on outcomes.

ASAPP's AI Engineering team is seeking a highly experienced Lead AI/ML Engineer to join our Core GenerativeAgent team. You will play a pivotal role in designing, building, and deploying cutting-edge AI systems that power mission-critical enterprise applications. This role is ideal for someone who thrives in ambiguity, is deeply technical, and pairs strong product sense with deep expertise in foundational models and enterprise AI systems.

You will lead the design and delivery of end-to-end voice AI solutions, combining large language models with speech technologies such as speech-to-text, text-to-speech, and real-time streaming audio pipelines. This role requires a hands-on technical leader who can architect low-latency, highly reliable conversational voice systems and guide a team through ambiguity toward production excellence. We are looking for someone who understands the unique constraints of voice experiences (latency, turn-taking, interruption handling, streaming inference, and audio quality) and can translate them into scalable, enterprise-grade systems.

This is a hybrid role with weekly in-person responsibilities. We have offices in New York City and Mountain View, CA.

What you'll do
- Lead the design and implementation of scalable ML/AI systems, with a focus on large language models, vector databases, and retrieval-based architectures
- Integrate and apply foundation models from major providers (OpenAI, AWS Bedrock, Anthropic, etc.) for prototyping and production use cases
- Adapt, evaluate, and optimize LLMs for domain-specific enterprise applications
- Build and maintain infrastructure for experimentation, deployment, and monitoring of AI models in production
- Improve model performance and inference workflows with attention to latency, cost, and reliability
- Provide technical leadership within the team, mentoring engineers and promoting best practices in ML engineering
- Partner with product and cross-functional stakeholders to translate requirements into scalable ML solutions
- Contribute to the evolution of internal standards for experimentation, evaluation, and deployment

What you'll need
- 6+ years of experience in machine learning or AI systems, with hands-on experience in LLMs, speech, or conversational AI systems
- Strong proficiency in Python and ML frameworks such as PyTorch or TensorFlow
- Proven experience leading complex, cross-functional AI initiatives
- Experience building or integrating speech-to-text and text-to-speech systems
- Deep understanding of latency-sensitive system design and distributed architectures
- Strong experience integrating foundational models into production applications
- Understanding of RAG pipelines, prompt engineering, and vector search
- Experience deploying and scaling AI systems using AWS (required), Docker, Kubernetes, and CI/CD practices
- Strong communication skills with the ability to align engineering, product, and executive stakeholders
- Comfort operating in fast-paced environments and driving clarity in ambiguous problem spaces

What we'd like to see
- Experience with speech model fine-tuning or acoustic/language model optimization
- Hands-on experience with real-time or streaming audio systems (WebRTC, gRPC streaming, or similar architectures)
- Experience optimizing TTS prosody, pronunciation control, and voice customization
- Background in MLOps, experimentation platforms, or evaluation frameworks for speech and conversational systems
- Contributions to open-source AI or speech tooling
- Graduate degree (MS or PhD) in Computer Science, Machine Learning, Speech Processing, or a related field

$170,000 – $190,000 a year. The compensation package also includes a performance bonus on top of the listed salary range. Separately, we offer a compelling equity grant comprised of stock options.

Benefits include:
- Competitive compensation with stock options
- Comprehensive medical, vision, and dental insurance
- 401k matching
- Fitness and wellness stipend
- Mental well-being benefits
- Professional learning and development stipend
- Parental leave, including adoptive and foster parents
- 3 weeks paid time off (increases with tenure) along with sick leave, bereavement, and jury duty

ASAPP is committed to creating a diverse environment and is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, disability, age, or veteran status. If you have a disability and need assistance with our employment application process, please email us at careers@asapp.com to obtain assistance. #LI-AG1 #LI-Hybrid

Software Engineer, Inference Platform

Fluidstack
$165,000 – $500,000
United States
Full-time
Remote
false
About Fluidstack
At Fluidstack, we're building the infrastructure for abundant intelligence. We partner with top AI labs, governments, and enterprises, including Mistral, Poolside, Black Forest Labs, Meta, and more, to unlock compute at the speed of light. We're working with urgency to make AGI a reality. As such, our team is highly motivated and committed to delivering world-class infrastructure. We treat our customers' outcomes as our own, taking pride in the systems we build and the trust we earn. If you're motivated by purpose, obsessed with excellence, and ready to work very hard to accelerate the future of intelligence, join us in building what's next.

About the Role
Inference is now the defining cost and latency bottleneck for frontier AI. Fluidstack's Inference Platform team owns the serving layer that sits between our global accelerator supply and the production workloads our customers run on it: LLM serving frameworks, KV cache infrastructure, disaggregated prefill/decode pipelines, and Kubernetes-based orchestration across multi-datacenter footprints. This is a hands-on IC role at the intersection of distributed systems, model optimization, and serving infrastructure. You'll own end-to-end inference deployments for frontier AI labs and our inference product, drive measurable improvements in throughput, cost-per-token, and time-to-first-token, and contribute to the platform architecture choices that determine how Fluidstack deploys across tens of thousands of accelerators. 
You will:
- Own inference deployments end-to-end: from initial configuration and performance tuning to production SLA maintenance and incident response
- Drive measurable improvements in throughput, TTFT, and cost-per-token across diverse model families (dense transformers, mixture-of-experts, multi-modal) and customer workload patterns
- Build and operate KV cache and scheduling infrastructure to maximize utilization across concurrent requests
- Implement and validate disaggregated prefill/decode pipelines and the Kubernetes orchestration that supports them at scale
- Profile and resolve bottlenecks at the compute, memory, and communication layers; instrument deployments for end-to-end observability
- Partner with customers to translate their model architectures, access patterns, and latency requirements into deployment configurations and upstream platform improvements
- Contribute to inference platform architecture and roadmap, with a focus on reducing deployment complexity, improving hardware utilization, and expanding support for new model classes and accelerators
- Participate in an on-call rotation (up to one week per month) to maintain the reliability and SLA commitments of production deployments
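Two of the metrics named above, TTFT and cost-per-token, reduce to simple arithmetic once a deployment is instrumented. A minimal sketch, with all field names, timestamps, and the GPU $/hour rate invented for illustration:

```python
# Illustrative sketch only: how TTFT and cost-per-token fall out of
# per-request serving telemetry. All numbers below are made up.
def ttft_seconds(request_start: float, first_token_at: float) -> float:
    """Time-to-first-token: delay between request arrival and the first streamed token."""
    return first_token_at - request_start

def cost_per_token(gpu_seconds: float, gpu_dollars_per_hour: float,
                   tokens_generated: int) -> float:
    """Amortized GPU cost of the request, divided by output tokens."""
    return (gpu_seconds / 3600.0) * gpu_dollars_per_hour / tokens_generated

# One hypothetical request: first token streamed 0.35 s after arrival,
# 400 output tokens produced using 2 GPU-seconds at a $3.60/hr rate.
ttft = ttft_seconds(request_start=100.00, first_token_at=100.35)
cost = cost_per_token(gpu_seconds=2.0, gpu_dollars_per_hour=3.60,
                      tokens_generated=400)
```

Disaggregated prefill/decode mostly attacks the first number (prefill no longer queues behind long decodes), while KV cache reuse and batching attack the second.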
Basic Qualifications
- 5+ years of professional software engineering experience with a track record of shipping production-quality systems
- Strong programming skills in Python and/or Go
- Hands-on production experience with at least one LLM serving framework (vLLM, SGLang, TensorRT-LLM, TGI, or equivalent)
- Working knowledge of PyTorch or JAX and an understanding of how model architecture choices affect inference characteristics
- Experience deploying and operating GPU workloads on Kubernetes at production scale, including autoscaling and resource scheduling
- Solid understanding of GPU memory hierarchies, compute parallelism, and the tradeoffs across tensor, pipeline, and expert parallelism strategies
- Ability to create structure from ambiguity and communicate technical tradeoffs clearly to both engineering peers and customers
- Great written and verbal communication skills in English

Preferred Qualifications
- Production experience with disaggregated prefill/decode architectures (NVIDIA Dynamo, LLM-d, or equivalent), including scheduling policies and network fabric configuration
- Deep familiarity with KV cache strategies: RadixAttention, slab-based memory allocators, cross-request prefix sharing, and cache-aware scheduling
- Experience with multi-node GPU inference across InfiniBand or RoCE fabrics, including NCCL collective communication tuning
- Custom kernel or operator development experience (e.g., CUDA, Triton, torch.compile, Pallas, or equivalent)
- Contributions to open-source inference engines (vLLM, SGLang, TGI, TensorRT-LLM, or similar)
- Hands-on experience with quantization tooling: GPTQ, AWQ, FP8 via llm-compressor, or AutoGPTQ
- Knowledge of speculative decoding implementations (Medusa, EAGLE-3, draft-model approaches) and their performance/quality tradeoffs
- Experience optimizing and adapting model implementations for non-NVIDIA accelerators and their ecosystems: AMD, TPU, Trainium/Inferentia, Cerebras, Groq, and other custom ASICs
Salary & Benefits
- Competitive total compensation package (salary + equity)
- Retirement or pension plan, in line with local norms
- Health, dental, and vision insurance
- Generous PTO policy, in line with local norms

The base salary range for this position is $165,000 – $500,000 per year, depending on experience, skills, qualifications, and location. This range represents our good-faith estimate of the compensation for this role at the time of posting. Total compensation may also include equity in the form of stock options. We are committed to pay equity and transparency.

Fluidstack is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Fluidstack will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.

You will receive a confirmation email once your application has successfully been accepted. If there is an error with your submission and you did not receive a confirmation email, please email careers@fluidstack.io with your resume/CV, the role you've applied for, and the date you submitted your application; someone from our recruiting team will be in touch.

Product Manager, Models

Heidi Health
Australia
Full-time
Remote
false
Who We Are
Howdy, we're Heidi 👋 "The AI startup growing faster than Canva." That's what the Financial Review called us. In 18 months, we have supported over 73 million patient visits and become one of the fastest-growing companies in the world. We pivoted from broad healthcare AI to building Earth's finest AI Care Partner. Today, we support over 2 million patient sessions weekly across 116 countries and over 110 languages. Hundreds of thousands of clinicians use Heidi to complete documentation. Our mission is simple: strengthen the human connection at the heart of healthcare.

We've found product-market fit with individual clinicians through our freemium medical scribe, transforming unstructured clinical visits into structured text artefacts. Clinicians and organizations quite like it. Now, we embark upon consuming more than just documentation. Every new job a clinician delegates to Heidi makes patients feel more attended to, cleans up health system logjams, and lets clinicians be clinicians again. That's where you come in.

The Role
We're looking for a Product Manager to own the AI models that power everything Heidi does. Someone who thinks platform teams exist to make product teams faster. You will own Heidi's models platform: evaluation pipelines, fine-tuning infrastructure, model routing, and safety systems. Hundreds of thousands of clinicians across 116 countries use these models in clinical settings every week. You'll work with engineers and researchers, partner with product PMs and clinical safety, and stay close enough to product teams that you know what they need before they file a ticket. You will report into Product leadership. This is a platform role: every user-facing product at Heidi depends on what your team builds. This role will be based in either our Sydney or Melbourne office.

We don't care about logos, the traditional insignia of competence. We'll evaluate senior, well-credentialed candidates and young, hungry hopefuls alike. If you're an engineer who's been living inside these models and wants to move up a layer of abstraction into product, we want to hear from you.

What you'll do:
- Own product strategy and roadmap for Heidi's models platform (evaluation, safety, model routing, fine-tuning infrastructure), setting clear goals and being held accountable to achieving them
- Prioritise your team's work across enablement requests, model safety and quality, and bets on new capabilities
- Figure out where product teams get stuck on your models and fix the platform so they don't
- Build eval tooling and fine-tuning workflows that your engineers and product teams can actually use in clinical settings
- Decide what to improve next by reading clinician feedback, model quality signals, and what product teams are asking for
- Allocate engineering capacity across product teams who all want more than you can give, and tell them clearly what you're deferring
- Work with your engineers on eval design, fine-tuning trade-offs, and model architecture decisions at a technical level
- Set model quality and safety targets grounded in clinical outcomes (did the note capture the right diagnosis? did the referral letter contain the right history?)
- Spot infrastructure that two product teams are building separately and consolidate it
- Watch foundation model developments and decide when to rip up your roadmap

If we'd worked together the last 6 weeks, you'd have:
- Defined an evaluation framework for model quality that your engineers actually use
- Made a clear ship/hold decision on a model update under pressure from a product team, and communicated the rationale to leadership
- Identified overlapping model capability requests across two product teams and proposed shared infrastructure
- Built a 90-day roadmap that balances enablement requests with your own priorities for model quality
- Had a productive disagreement with a senior engineer about prioritisation and reached a resolution you both committed to

What we're looking for:
- 4+ years working on AI platform, infrastructure, or model-adjacent products, though we care more about what you've built than time served
- Technical depth on model evaluation, fine-tuning, and production AI systems: you've designed eval suites, debugged model regressions, and understand what makes models fail in production
- Genuine curiosity about what models get wrong in clinical settings and why
- Technical enough to hold your own with your engineers, credible enough to present safety trade-offs to leadership
- You use AI tools to do your own work, not just manage people who do
- Strong opinions, weakly held: you'll shift the room when you're right, and you're willing to update your views when the technology shifts, which it does roughly quarterly
- Data fluency with diagnostic teeth: can you read evaluation results and distinguish a real regression from noise? Can you design an eval that catches the thing your current suite misses?

If you answer 'no' to these questions, this may not be the job for you:
- Are you an execution powerhouse?
- Have you worked on AI products where model quality directly affected end users?
- Can you allocate engineering resources across competing priorities and defend the split?
- Are you comfortable making decisions with incomplete information, then revising them when the picture changes?
- Are you able to execute without a legion of data analysts, product marketers, and research coordinators at your beck and call?
- Does the prospect of re-energising our health systems make you feel fuzzy inside?

The Way We Work
1. Build to Last: We design for safety and reliability so clinicians, patients, and our teams can trust what we build every day.
2. Own Your Practice: Ideas rise on merit, not title, and everyone shares responsibility for the standards we set together.
3. Move Fast, Stay Steady: We move quickly but never at the cost of trust. Progress only matters if people can depend on what we make.
4. Make Others Better: Honest feedback, steady support, and shared growth keep our teams improving together.

Why you will flourish with us
- Flexible hybrid working environment, with 3 days in the office
- A generous personal development budget of $500 per annum
- Learn from some of the best engineers and creatives, joining a diverse team
- Become an owner, with shares (equity) in the company; if Heidi wins, we all win
- The rare chance to create a global impact as you immerse yourself in one of Australia's leading healthtech startups
- If you have an impact quickly, the opportunity to fast-track your startup career!

Heidi is dedicated to creating an equitable, inclusive, and supportive work environment that brings people together from diverse backgrounds, experiences, and perspectives. Our strength is in our differences. 
We're proud to be an equal opportunity employer and welcome all applicants as we're committed to promoting a culture of opportunity for all.

Forward Deployed Engineer (FDE) - Seattle

OpenAI
$162,000 – $280,000
United States
Full-time
Remote
false
About the team
OpenAI's Forward Deployed Engineering team partners with customers to turn research breakthroughs into production systems. We operate at the intersection of customer delivery and core platform development.

About the role
Forward Deployed Engineers (FDEs) lead complex end-to-end deployments of frontier models in production alongside our most strategic customers. You will own discovery, technical scoping, system design, build, and production rollout, partnering directly with customer engineering and domain teams. You will measure success through production adoption, measurable workflow impact, and eval-driven feedback that changes product and model roadmaps. You'll work closely with our Product, Research, Partnerships, GRC, Security, and GTM teams.

This role is based in Seattle. We use a hybrid work model of 3 days in the office per week. We offer relocation assistance. Travel up to 50% is required.

In this role you will:
- Own technical delivery across multiple deployments from first prototype to stable production
- Build full-stack systems that deliver customer value and sharpen how we learn
- Embed closely with customer teams, understand their needs, and guide adoption of what you build
- Scope work, sequence delivery, and remove blockers early
- Make trade-offs between scope, speed, and quality; adjust plans to protect delivery
- Contribute directly in the code when progress or clarity depends on it
- Codify working patterns into tools, playbooks, or building blocks that others can use
- Share field feedback that helps Research and Product understand where the models succeed and where they can improve
- Keep teams moving through clarity and follow-through

You might thrive in this role if you:
- Bring 5+ years of engineering or technical deployment experience that includes customer-facing work
- Have scoped and delivered complex systems in fast-moving or ambiguous environments
- Write and review production-grade code across frontend and backend using Python, JavaScript, or comparable stacks
- Have built or deployed systems powered by LLMs or generative models and understand how model behaviour affects product experience
- Simplify complexity and make fast, sound decisions under pressure
- Communicate clearly with engineers, product teams, and customer stakeholders
- Spot risks early and adjust without slowing down
- Model calm and judgment when the stakes are high

About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic. For additional information, please see OpenAI's Affirmative Action and Equal Employment Opportunity Policy Statement.

Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. 
For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance.We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.OpenAI Global Applicant Privacy PolicyAt OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.

Senior Software Engineer

Lorikeet
Australia
Full-time
Remote: false
About Lorikeet
Lorikeet is the most powerful customer support AI for complex businesses like fintechs, healthtechs, marketplaces and delivery services.

We’re doing this by building from the ground up on the premise that most support responses should be automated with transparent, customizable AI, and that support teams should spend their time managing automation and engaging with complex cases, not grinding through high volumes of simple tickets. Once teams are freed from reactive support, we want to help them tackle what’s next: providing personalized concierge services to their customers.

To deliver this combination of powerful AI systems and well-designed tooling, we’re leveraging Jamie’s experience as an early member of Google’s generative AI team and Steve’s experience building for operational teams at Stripe, as well as the experience of our team, who’ve joined us from places like Stripe, Canva, Atlassian, Dropbox and Dovetail.

We are growing fast, with paying customers, real revenue, an exciting roadmap and a strong sales pipeline. We’ve raised over USD 50m from leading VCs and angel investors, including QED, Blackbird, Square Peg, Claire Hughes Johnson (ex-Stripe COO), Cristina Cordova (Linear COO), Bob Van Winden (Stripe Head of Support), and Cos Nicolaescu (Brex CTO).

Our global customers include:
- The largest telehealth company in Australia
- The largest bank for teens in the US
- One of the largest NFT marketplaces by trading volume
- One of the largest Web3 gaming companies
- … and a handful of other enterprise customers with over 1 million support tickets a year

What’s unique about this opportunity?
Technical founders and an engineering-led culture. Most people at Lorikeet write code. Everyone at Lorikeet owns working with our users and building a great product for them. Engineers take ownership of challenging problems and define and implement solutions.

Warm, mature, in-person, flexible culture. Low-ego, high-trust team. No tolerance for ‘talented jerks’.
We value working together in person as the default in our (quite nice!) Surry Hills office. Folks on the team have young families, so we embrace a) working efficiently, and b) working flexible hours to fit in life priorities outside of work. We’re committed to building a diverse team and strongly encourage folks from underrepresented backgrounds to reach out - we value user obsession and eagerness to learn over traditional credentials.

High pay, high expectations, high performance. We’re building a small, great team. Engineers in Sydney are generally underpaid and under-compensated with equity. We aim to match unicorn/scale-up pay at base salary and offer a potentially life-changing equity stake in the business. Our team gets the same monthly updates we send to our investors, because they’re investors and owners too.

On the technical cutting edge. With our users we’re defining what an AI-first SaaS product looks like. No one has figured out what the UI/UX, capabilities and data models of an AI-first company are - it’s white space for us to invent. The AI agent problems we’re solving are beyond the cutting edge at the biggest research labs. We’re building on a modern tech stack, with TypeScript, React/Remix, Prisma ORM, NestJS and some Python sprinkled in. Knowledge of that stack is nice, but we know good engineers will pick up new languages.

No-nonsense recruitment process. The process is: 1) informal chats with Steve and Jamie to hear our pitch and understand your interests and goals, 2) a ~two-day paid work trial where you come in and ship with us. There’s no better way for each of us to figure out if we like working together than to work together!

About the role and you
You’ll be building a powerful product that is truly innovating the world of customer support. Together we’ll be defining what an AI-first SaaS product looks like. No one has figured out what the UI/UX, capabilities and data models of an AI-first company are - it’s white space for us to invent.
The AI agent problems we’re solving are beyond the cutting edge at the biggest research labs.

We’d love to speak with you if:
- You are excited to work with a top-caliber team tackling the above
- You have 5+ years of experience in a top-tier engineering organisation, and ideally some exposure to startups/scale-ups
- You are comfortable across the stack and are excited to lead ambitious, ambiguous projects that involve strong technical decision-making, effective implementation, and good product and design instincts
- You’re keen to mentor and lead less experienced engineers

If you don’t quite match this and are from an under-represented background, we strongly encourage you to reach out. We know firsthand that diverse teams are higher performing and are proud that our team reflects a broad spectrum of identities and lived experiences.