We love technology, and we enjoy what we do. We are always looking for innovation. We have social awareness and try to improve it daily. We make things happen. You can trust us. Our Enrouters are always up for a challenge. We ask questions, and we love to learn. We pride ourselves on having great benefits and compensation, a fantastic work environment, flexible schedules, and policies that support a healthy work-life balance. We care about who you are in the office and as an individual. We get involved, we like to know our people, and we want every Enrouter to become part of a great community of highly driven, responsible, respectful, and above all, happy people. We want you to enjoy working with us.
Job Responsibilities:
Data Engineering & Pipelines: Build, optimize, and maintain scalable data pipelines
Work with Databricks and SQL for big data queries and transformations
Integrate structured/unstructured datasets into AI workflows
AI & ML Engineering: Apply libraries like scikit-learn, pandas, NumPy for preprocessing and modeling
Design, prompt, and evaluate LLM-based workflows
Leverage frameworks like LangChain and vector databases for retrieval and orchestration
Automation & Orchestration: Develop solutions using low-code and no-code tools such as LangFlow, LangSmith, Flowise, n8n, etc.
Integrate APIs, connectors, and services into end-to-end pipelines
Create reusable components for rapid prototyping and deployment
Collaboration & Delivery: Work cross-functionally with data engineers, analysts, and product teams
Document workflows, models, and processes for reproducibility
Ensure solutions are scalable, secure, and aligned with business goals
Requirements:
1–2+ years of professional experience in AI/ML, Data Engineering, or related roles
Proficiency in Python (pandas, scikit-learn, NumPy)
Experience with LangChain and LLM-based prompting
Knowledge of Databricks and distributed data processing
Hands-on experience with low-code/no-code tools (LangFlow, LangSmith, Flowise, n8n)
Familiarity with building and debugging data pipelines and APIs
Strong problem-solving skills, curiosity, and ability to learn quickly
Nice to have:
Exposure to cloud platforms (AWS, GCP, or Azure)
Knowledge of vector databases (Pinecone, Weaviate, FAISS)
Experience with CI/CD pipelines and containerization (Docker, GitHub Actions)
Background in data visualization or BI tools
What we offer:
Monetary compensation
Year-end Bonus
IMSS, AFORE, INFONAVIT
Major Medical Expenses Insurance
Minor Medical Expenses Insurance
Life Insurance
Funeral Expenses Insurance
Preferential rates for car insurance
TDU Membership
Holidays and Vacations
Sick days
Bereavement days
Civil Marriage days
Maternity & Paternity leave
English and Spanish classes
Performance Management Framework
Certifications
TALISIS Agreement: Discounts at ADVENIO, Harmon Hall, U-ERRE, UNID