We're looking for a talented AI Engineer to join our team in Madrid, someone focused on implementing and scaling large language models (LLMs) and generative AI systems. In this role, you will bridge the gap between cutting-edge research and practical applications, turning innovative AI concepts into robust, efficient, production-ready systems. You will work closely with our research team and data engineers to build and optimize AI solutions that drive our company's products and services. You will report to the Finance Product Development Lead.
Job Responsibilities:
Implement and optimize large language models and generative AI systems for production environments
Collaborate with researchers and clients to translate research prototypes into scalable, efficient implementations tailored to client needs
Design and develop AI infrastructure components for model training, fine-tuning, and inference
Optimize AI models for performance, latency, and resource utilization
Implement systems for model evaluation, monitoring, and continuous improvement
Develop APIs and integration points for AI services within our product ecosystem
Troubleshoot complex issues in AI systems and implement solutions
Contribute to the development of internal tools and frameworks for AI development
Stay current with emerging techniques in AI engineering and LLM deployment
Collaborate with data engineers to ensure proper data flow for AI systems
Implement safety measures, content filtering, and responsible AI practices
Requirements:
Bachelor's or Master's degree in Computer Science, Engineering, or related technical field
3+ years of hands-on experience implementing and optimizing machine learning models
Strong programming skills in Python and related ML frameworks (PyTorch, TensorFlow)
Experience deploying and scaling AI models in production environments
Familiarity with large language models, transformer architectures, and generative AI
Knowledge of cloud platforms (AWS, GCP, Azure) and containerization technologies
Understanding of software engineering best practices (version control, CI/CD, testing)
Experience with ML engineering tools and platforms (MLflow, Kubeflow, etc.)
Strong communication skills and experience interfacing with clients or external partners
Strong problem-solving skills and attention to detail
Ability to collaborate effectively in cross-functional teams
Nice to have:
Experience with fine-tuning and prompt engineering for large language models
Knowledge of distributed computing and large-scale model training
Familiarity with model optimization techniques (quantization, pruning, distillation)
Experience with real-time inference systems and low-latency AI services
Understanding of AI ethics, bias mitigation, and responsible AI development
Experience with model serving platforms (TorchServe, TensorFlow Serving, Triton)
Knowledge of vector databases and similarity search for LLM applications
Experience with reinforcement learning and RLHF techniques
Familiarity with front-end technologies for AI application interfaces
What we offer:
Competitive compensation structure, including salary, performance-based bonuses, and additional components based on experience
Comprehensive benefits as part of the total compensation package