Related skills:
Python SQL Spark Airflow
Gross salary $2500 - 3700 Full time
Data Engineer
  • Checkr
  • Santiago (Hybrid)
Python SQL Kubernetes CI/CD

Checkr is expanding its innovation hub in Santiago to drive the accuracy and intelligence of its background check engine at global scale. This team works closely with the US offices to optimize the screening engine, detect fraud, and evolve the platform with GenAI models. The selected candidate will join a strategic effort to balance speed, cost, and accuracy, impacting millions of candidates and improving the experience of customers and partners. The role involves leading optimization initiatives, designing analytics strategies, and developing predictive models within a high-performance technology stack.

Role Responsibilities

  • Build, maintain, and optimize critical data pipelines that underpin Checkr's data platform and data products.
  • Build tooling that helps streamline the management and operation of our data ecosystem.
  • Design scalable, secure systems to handle the enormous flow of data as Checkr continues to grow.
  • Design systems that enable repeatable, scalable machine learning workflows.
  • Identify innovative applications of data that can lead to new products or insights and enable other Checkr teams to maximize their own impact.

Role Requirements

  • 2+ years of industry experience in a data engineering or backend-related role, plus a bachelor's degree or equivalent experience.
  • Programming experience in Python or SQL: proficiency in one and at least working experience in the other is required.
  • Experience developing and maintaining production data services.
  • Experience in data modeling, security, and governance.
  • Familiarity with modern CI/CD practices and tools (e.g., GitLab and Kubernetes).
  • Experience with, and a passion for, mentoring other data engineers.

Note

Please attach an updated résumé in English when applying.

Benefits

  • A collaborative, fast-moving environment
  • Being part of an international company headquartered in the United States
  • A learning and development reimbursement allowance
  • Competitive compensation and opportunities for professional and personal growth
  • 100% medical, dental, and vision coverage for employees and dependents
  • 5 additional vacation days and flexibility to take time off

At Checkr, we believe a hybrid work environment strengthens collaboration, drives innovation, and fosters connection. Our main hubs are Denver, CO, San Francisco, CA, and Santiago, Chile.
Equal Employment Opportunity at Checkr

Checkr is committed to hiring talented, qualified people from diverse backgrounds for all of its tech, non-tech, and leadership roles. Checkr believes that bringing together and celebrating unique backgrounds, qualities, and cultures enriches the workplace.

Pet-friendly Pets are welcome at the premises.
Flexible hours Flexible schedule and freedom for attending family needs or personal errands.
Partially remote You can work from your home some days a week.
Health coverage Checkr pays or copays health insurance for employees.
Computer provided Checkr provides a computer for your work.
Informal dress code No dress code is enforced.
Vacation over legal Checkr gives you paid vacations over the legal minimum.
Beverages and snacks Checkr offers beverages and snacks for free consumption.
Gross salary $2800 - 3100 Full time
AWS Technical Lead – Data
  • BC Tecnología
  • Santiago (Hybrid)
SQL CI/CD AWS Lambda Data Architecture
BC Tecnología is an IT consultancy focused on services, outsourcing, and professional recruitment, serving clients in sectors such as finance, insurance, retail, and government. In this role you will lead data projects in cloud environments for high-profile clients, ensuring scalable solutions aligned with architecture standards. You will work in an agile environment to design and implement data solutions (ETL/ELT), data governance, and cloud migrations, collaborating closely with Infrastructure, Development, and Business teams. You will take part in initiatives around continuous improvement, delivery quality, and data governance, promoting CI/CD best practices and regulatory compliance. Under a hybrid arrangement, you will combine remote work with time in our offices to collaborate effectively with stakeholders and multidisciplinary teams.

Main responsibilities

  • Provide technical leadership for data teams and AWS projects focused on data ingestion, processing, and storage solutions.
  • Manage stakeholders and keep expectations, scope, and timelines aligned.
  • Design and review scalable data architectures (ETL/ELT, data lakes, data warehouses) using AWS services (Glue, S3, Redshift, Lambda, Step Functions).
  • Ensure data governance, quality, security, and adherence to CI/CD and version control best practices.
  • Promote agile practices (Scrum/Kanban), technical leadership, mentoring, and team capability building.
  • Identify and manage technical risks, define performance indicators, and execute mitigation plans.
  • Collaborate with Infrastructure, Development, and business areas to deliver solutions aligned with strategic goals.
  • Take part in architecture reviews, solution design, and technical documentation.

Requirements and profile

We are looking for an AWS Technical Lead with at least 5 years of experience leading data projects and teams in cloud environments. Solid experience in data projects (ETL/ELT, scalable architectures) and deep knowledge of AWS services such as AWS Glue, S3, Redshift, Lambda, and Step Functions will be valued. Experience in agile environments (Scrum/Kanban) is required, as well as data governance, CI/CD, and quality best practices. The ideal candidate combines strong technical skills with leadership, effective communication, and a results-oriented mindset.
Technical skills: data architecture design, pipeline orchestration, proficiency in SQL, data modeling, security and compliance, stakeholder management, cloud migrations.
Soft skills: collaborative leadership, clear communication, strategic thinking, solution orientation, ability to influence, and cross-functional teamwork.

Desirable

AWS certifications (for example, AWS Certified Solutions Architect – Professional or AWS Certified Data Analytics) are a plus, as is experience with additional orchestration tools, data science, or data observability and monitoring tools. Knowledge of data governance, quality, and metadata, and experience on projects in regulated sectors, are also desirable.

Benefits

At BC Tecnología we foster a collaborative work environment that values commitment and constant learning. Our culture is oriented toward professional growth through integration and knowledge sharing across teams.
The hybrid arrangement we offer, based in Las Condes, combines the flexibility of remote work with in-person collaboration, supporting a better balance and a dynamic workplace.
You will take part in innovative projects with high-profile clients across diverse sectors, in an environment that fosters inclusion, respect, and technical and professional growth.

Gross salary $3200 - 4100 Full time
Senior Data Scientist
  • Artefact LatAm
Python Git Data Analysis SQL

At Artefact LatAm, we are a leading consultancy focused on accelerating the adoption of data and artificial intelligence to generate positive impact. The Senior Data Scientist is a highly experienced data analysis professional with deep knowledge of statistical, programming, and machine learning techniques. Their main role is to use these skills to extract meaningful insights and drive data-informed strategic decisions within the organization.

Beyond developing advanced analytical models, the Senior Data Scientist plays an important role within the team assigned to the client, contributing technical knowledge to support concrete decisions that move the project forward. Their experience supports the project from conceptualization through implementation, ensuring the delivery of practical, thorough solutions that meet the client's needs.

Your responsibilities will be:

  • Data Analysis: apply advanced exploratory analysis techniques to understand the structure and characteristics of large volumes of data from diverse sources.
  • Advanced Predictive Model Development: use advanced machine learning and statistical techniques to build robust predictive models that forecast trends, identify patterns, and produce accurate predictions.
  • Algorithm and Model Optimization: lead the optimization of existing algorithms and models to improve accuracy, efficiency, and scalability.
  • Data Visualization and Communication: create clear, meaningful visualizations to communicate findings and results effectively to the client and other key stakeholders.
  • Analytical Tool Development: design and build custom analytical tools and decision-support systems, using programming languages such as Python, R, or SQL.
  • Project Management: lead workstreams involving complex data analysis, from conceptualization through implementation, strategically planning the milestones and deliverables agreed with clients.
  • Continuous Research and Development: stay current on the latest trends and advances in data analysis, artificial intelligence, and related methodologies, sharing knowledge and experience with the team to foster continuous learning.
  • Contribution to Proposals and Business Development: collaborate on internal proposals for prospective clients, applying experience and knowledge to identify opportunities and design innovative solutions.
  • Support the team as a mentor, passing on knowledge and best practices and providing training tailored to each member's individual needs.

The role requirements are:

  • A degree in Industrial/Mathematical/Computer Engineering, Physics, Statistics, or a related field, or equivalent experience in advanced data analysis.
  • At least 4 years of work experience in data analysis roles, preferably in relevant industries.
  • Expert in Python, SQL, and Git, with demonstrated skills developing analytical models and applications.
  • Broad knowledge of relational and non-relational databases, plus experience with data processing (ETL).
  • Deep knowledge of machine learning, feature engineering, dimensionality reduction, advanced statistics, and optimization.
  • Advanced English.

Conditions

  • Fast professional growth: a mentoring plan for training and career advancement, with raise and promotion review cycles every 6 months.
  • Vacation days beyond the legal minimum and a half day off on your birthday, to rest and maintain a healthy work–life balance.
  • Participation in the company performance bonus, plus referral bonuses for new hires and clients.
  • Biweekly paid team lunches (Chile) or a meal card (Mexico).
  • Additional health coverage (Mexico).
  • A high-spec computer so you can work comfortably.
  • Flexible hours and objective-based work.
  • The chance to join global projects, with exchanges with other countries where the group operates.
  • Remote work, with the option to go hybrid (office in Santiago de Chile, paid cowork in Mexico City).
  • Extended paternity leave for men, and coverage of the health-system pay gap for women (Chile)

...and more!

Flexible hours Flexible schedule and freedom for attending family needs or personal errands.
Internal talks Artefact LatAm offers space for internal talks or presentations during working hours.
Paid sick days Sick leave is compensated (limits might apply).
Health coverage Artefact LatAm pays or copays health insurance for employees.
Company retreats Team-building activities outside the premises.
Computer repairs Artefact LatAm covers some computer repair expenses.
Computer provided Artefact LatAm provides a computer for your work.
Education stipend Artefact LatAm covers some educational expenses related to the position.
Performance bonus Extra compensation is offered upon meeting performance goals.
Personal coaching Artefact LatAm offers counseling or personal coaching to employees.
Informal dress code No dress code is enforced.
Vacation over legal Artefact LatAm gives you paid vacations over the legal minimum.
Beverages and snacks Artefact LatAm offers beverages and snacks for free consumption.
Vacation on birthday Your birthday counts as an extra day of vacation.
Parental leave over legal Artefact LatAm offers paid parental leave over the legal minimum.
Gross salary $1100 - 1700 Full time
Data Engineer
  • Decision Point Latam
  • Ciudad de México / Santiago (Hybrid)
Python Excel SQL ETL
  • Development as a subject matter expert in the FMCG sales & marketing analytics domain, working directly with top FMCG brands across Latin America.
  • Extensive client-facing work, with direct interaction with clients and with our team in India. These interactions will speed up your learning and help you master the traits of strategy consulting: from understanding the business objective to analyzing data methodically, culminating in a final deliverable for the client.
  • Our senior partners bring broad professional experience and expertise, having held executive roles at leading companies in Chile and at global industrial and FMCG companies across continents. Advanced analytics and big data are not only about data science but also decision science; you will get the best of both.

Role responsibilities

  • Data Infrastructure Development: Design, build, and maintain scalable data infrastructure on Cloud Platforms for data processing to support various data initiatives and analytics needs within the organization
  • Data Pipeline Implementation: Design, develop, and maintain scalable data pipelines to ingest, transform, and load data from various sources into cloud-based storage and analytics platforms using Python and SQL (see the sketch after this list)
  • Collaboration and Support: Collaborate with cross-functional teams to understand data requirements and provide technical support for data-related initiatives and projects, helping translate business needs into database solutions.
  • Performance Optimization: Optimize data processing workflows and cloud resources for efficiency and cost-effectiveness. Implement data quality checks and monitoring to ensure the reliability and integrity of data pipelines.
  • Build and optimize data warehouse solutions for efficient storage and retrieval of large volumes of structured and unstructured data.
  • Data Governance and Security: Implement data governance policies and security controls to ensure compliance and protect sensitive information across cloud platforms environment.
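
As a loose illustration of the pipeline-plus-quality-check work described above, here is a minimal Python sketch; the batch, table, and column names are hypothetical, and SQLite stands in for the actual cloud warehouse:

```python
# Sketch: validate a small batch, then load it into a staging table.
# orders data and staging_orders are hypothetical; SQLite is a stand-in.
import sqlite3

import pandas as pd

# Hypothetical input batch; in practice this would come from an extract step.
df = pd.DataFrame({
    "order_id": [1, 2, 3],
    "order_date": ["2024-01-01", "2024-01-02", "2024-01-02"],
    "amount": [10.0, 25.5, 7.25],
})

# Basic data quality checks before loading: no null keys, no negative amounts.
assert df["order_id"].notna().all(), "null order_id found"
assert (df["amount"] >= 0).all(), "negative amount found"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staging_orders (order_id INT, order_date TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO staging_orders VALUES (?, ?, ?)",
    df.itertuples(index=False, name=None),  # plain tuples for the DB driver
)
conn.commit()
print(conn.execute("SELECT COUNT(*) FROM staging_orders").fetchone()[0])  # 3
```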

Role requirements

  • Bachelor’s degree in computer science, Engineering, Statistics, Mathematics, or related field. Master's degree preferred.
  • Advanced English is mandatory
  • 1+ years of experience as Data Engineer
  • Experience with cloud data storage is mandatory
  • Strong understanding of data modeling, ETL processes, and data warehousing concepts
  • Experience with SQL, relational data modeling, and sound knowledge of database administration is mandatory
  • Proficiency in Python related to Data Engineering for developing data pipelines, ETL (Extract, Transform, Load) processes, and automation scripts.
  • Proficiency in Microsoft Excel
  • Experience integrating data management with business and data analytics is mandatory
  • Experience working with cloud platform for deploying and managing scalable data infrastructure
  • Experience working with technologies such as dbt, Airflow, Snowflake, and Databricks, among others, is a plus
  • Excellent Stakeholder Communication
  • Familiarity with working with numerous large data sets
  • Comfort in a fast-paced environment
  • Strong analytical skills, with the ability to collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy
  • Excellent problem-solving skills
  • Strong interpersonal and communication skills for cross-functional teams
  • Proactive approach to continuous learning and skill development
  • Experience leading or collaborating with a team of data scientists and engineers to develop and deliver machine learning models that work in a production setting.

Conditions

  • Hybrid: 4x1 in Chile and 3x2 in Mexico
  • 2 DP days off per quarter
  • Complementary health insurance in Chile
  • Lunch at the office
  • 5 extra vacation days

Flexible hours Flexible schedule and freedom for attending family needs or personal errands.
Partially remote You can work from your home some days a week.
Health coverage Decision Point Latam pays or copays health insurance for employees.
Computer provided Decision Point Latam provides a computer for your work.
Informal dress code No dress code is enforced.
Vacation over legal Decision Point Latam gives you paid vacations over the legal minimum.
Beverages and snacks Decision Point Latam offers beverages and snacks for free consumption.
$$$ Full time
Data Engineer – Project (Hybrid)
  • BC Tecnología
  • Santiago (Hybrid)
Python PostgreSQL SQL ETL
At BC Tecnología we design and deliver IT solutions for clients in sectors such as financial services, insurance, retail, and government. Our Data & Analytics team focuses on keeping corporate data flows running through robust pipelines, scalable integrations, and proactive monitoring. You will join a project focused on high-volume data, working with modern technologies in an agile environment geared toward continuous delivery and product improvement.

Responsibilities

  • Design and maintain ETL/ELT pipelines for the organization's critical data.
  • Orchestrate and monitor data flows with Apache Airflow in production environments (see the sketch after this list).
  • Optimize SQL queries in PostgreSQL and/or Amazon Redshift for performance and cost.
  • Manage repositories and CI/CD pipelines in Azure DevOps.
  • Resolve incidents and ensure data quality, availability, and traceability.
  • Collaborate with data science, engineering, and business teams to understand requirements and deliver scalable solutions.
  • Take part in defining data governance standards and data engineering best practices.
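
As a rough illustration of the orchestration-and-monitoring work above, here is a minimal sketch assuming Airflow 2.4+; the DAG, task, and alert callback are hypothetical, not taken from the actual project:

```python
# Minimal Airflow 2.4+ DAG sketch: schedule one ETL step with retries and a
# failure alert hook. All names here are hypothetical.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def notify_on_failure(context):
    # Placeholder monitoring hook; a real deployment might page or post to chat.
    print(f"Task {context['task_instance'].task_id} failed")


def extract_load():
    # Placeholder for the actual work, e.g. a PostgreSQL -> Redshift load.
    print("running extract/load step")


with DAG(
    dag_id="corporate_data_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # one run per day
    catchup=False,
    default_args={
        "retries": 2,  # retry transient failures before alerting
        "retry_delay": timedelta(minutes=5),
        "on_failure_callback": notify_on_failure,
    },
) as dag:
    PythonOperator(task_id="extract_load", python_callable=extract_load)
```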

Description

  • We are looking for a Data Engineer with experience in pipeline development and production environments to keep corporate data flowing reliably.
  • Technical requirements: advanced Python and SQL; experience with PostgreSQL and/or Amazon Redshift; Apache Airflow; Azure DevOps; handling large data volumes.
  • Competencies: analytical thinking, proactivity, results orientation, teamwork, and effective communication with stakeholders.
  • Previous projects in financial environments and experience with data monitoring and observability tools are valued.

Benefits

At BC Tecnología we foster a collaborative work environment that values commitment and constant learning. Our culture is oriented toward professional growth through integration and knowledge sharing across teams.

The hybrid arrangement we offer, based in Santiago Centro, combines the flexibility of remote work with in-person collaboration, supporting a better balance and a dynamic workplace.

You will take part in innovative projects with high-profile clients across diverse sectors, in an environment that fosters inclusion, respect, and technical and professional growth.

Gross salary $3500 - 3700 Full time
Big Data & Reporting Lead
  • Coderslab.io
Python SQL MicroStrategy ETL

Coderslab.io is looking to hire a Big Data & Reporting Lead to lead the organization’s data architecture and analytics strategy.

This role will be responsible for designing, governing, and optimizing the enterprise data architecture, ensuring proper structuring, integration, automation, and consumption of data for reporting, advanced analytics, and decision-making.

The position has a strong focus on data architecture, analytical modeling for MicroStrategy, process automation using n8n, and optimization of ETL/ELT data pipelines.

About the client and the project: the company delivers innovative technology solutions and provides opportunities for continuous learning under the guidance of experienced professionals and cutting-edge technologies. The goal is to deliver value in key business processes and improve operational efficiency through SAP.

Role responsibilities

Data Architecture
  • Design and govern the data architecture for Big Data and BI platforms.
  • Define analytical data models for reporting and analytics.
  • Design data lakes, data warehouses, and data marts aligned with business needs.
  • Establish data governance, quality, and lineage standards.
  • Ensure platform scalability, availability, and reliability.

Modeling and Reporting in MicroStrategy
  • Design and optimize the semantic layer and metadata in MicroStrategy.
  • Define analytical models and star schema structures.
  • Lead the development of dossiers, operational reports, and analytical cubes.
  • Optimize queries, performance, and execution times.
  • Define caching, aggregation, and pre-calculation strategies.

Automation of Analytical Processes (n8n)
  • Design data and reporting automation workflows using n8n.
  • Integrate sources such as APIs, databases, cloud services, and BI tools.
  • Automate data extraction, report generation, dashboard distribution, and alerts.
  • Design orchestration pipelines for analytical processes.

Data Processing Optimization
  • Design and optimize scalable ETL/ELT processes.
  • Optimize queries, data pipelines, and incremental loads (see the sketch after this list).
  • Reduce latency and resource consumption in reporting.
  • Implement efficient data ingestion strategies.
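
For example, incremental loads are commonly implemented with a high-watermark pattern; here is a minimal sketch, with SQLite standing in for the real warehouse and hypothetical table names:

```python
# High-watermark incremental load sketch: copy only rows newer than the last
# load, instead of reprocessing the full table. Tables are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE stg_events (event_id INT, loaded_at TEXT);
    CREATE TABLE events     (event_id INT, loaded_at TEXT);
    INSERT INTO stg_events VALUES (1, '2024-01-01'), (2, '2024-01-02'), (3, '2024-01-03');
    INSERT INTO events     VALUES (1, '2024-01-01');
""")

# 1. Find the newest timestamp already present in the target table.
watermark = conn.execute(
    "SELECT COALESCE(MAX(loaded_at), '1970-01-01') FROM events"
).fetchone()[0]

# 2. Insert only the delta that arrived after the watermark.
conn.execute(
    "INSERT INTO events SELECT event_id, loaded_at FROM stg_events WHERE loaded_at > ?",
    (watermark,),
)
conn.commit()
print(conn.execute("SELECT COUNT(*) FROM events").fetchone()[0])  # 3
```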

Technical Leadership and Management
  • Lead Data Engineering, BI, and Analytics teams.
  • Track data architecture and reporting projects.
  • Define the data platform evolution roadmap.
  • Establish KPIs for reporting performance, data quality, and analytics adoption.
  • Align business needs with the data architecture.

Role requirements

  • Experience leading data architecture or analytics platforms.
  • Experience in analytical data modeling (star schemas).
  • Experience working with Big Data or data warehousing platforms.
  • Experience with MicroStrategy for modeling and reporting.
  • Experience designing ETL/ELT processes and data pipelines.
  • Advanced SQL knowledge.
  • Experience with Python for data processing or automation.
  • Experience designing scalable data architectures.

Technologies
  • Big Data & data platforms: Spark, Hadoop, Databricks, Snowflake / BigQuery / Redshift, Kafka
  • Business intelligence: MicroStrategy, Power BI (nice to have), Tableau (nice to have)
  • Automation & orchestration: n8n, Airflow, REST APIs, webhooks
  • Databases: SQL Server, PostgreSQL, Oracle, NoSQL
  • Data engineering: Python, advanced SQL, ETL/ELT pipelines

Desirable

  • Experience with workflow automation using n8n.
  • Experience with orchestration tools such as Airflow.
  • Experience with Power BI or Tableau.
  • Knowledge of event-driven or streaming architectures (Kafka).
  • Experience in data governance, data quality, and data cataloging.

Conditions

Engagement as an independent contractor (services agreement).

Gross salary $4500 - 4800 Full time
Data Engineer
  • Coderslab.io
Python Agile SQL ETL

Coderslab.io is a company dedicated to transforming and growing businesses through innovative technology solutions. You will join an expanding organization of more than 3,000 people worldwide, with offices across Latin America and the United States, working on diverse teams that bring together some of the best tech talent on challenging, high-impact projects. You will work alongside experienced professionals and have the opportunity to learn and grow with cutting-edge technologies.
Role Purpose

We are looking for a Data Engineer to design, develop, and support robust, secure, and scalable data storage and processing solutions. This role focuses on data quality, performance, and integration, working closely with technical and business teams to enable data-driven decision making.

Key Responsibilities

  • Design, develop, test, and implement databases and data storage solutions aligned with business needs.
  • Collaborate with users and internal teams to gather requirements and translate them into effective technical solutions.
  • Act as a bridge between IT and business units.
  • Evaluate and integrate new data sources, ensuring compliance with data quality standards and ease of integration.
  • Extract, transform, and combine data from multiple sources to enhance the data warehouse.
  • Develop and maintain ETL/ELT processes using specialized tools and programming languages.
  • Write, optimize, and maintain SQL queries, stored procedures, and functions.
  • Design data models, defining structure, attributes, and data element naming standards.
  • Monitor and optimize database performance, scalability, and security.
  • Assess existing database designs to identify performance improvements, required upgrades, and integration needs.
  • Implement data management standards and best practices to ensure data consistency and governance.
  • Provide technical support during design, testing, and production deployment.
  • Maintain clear and accurate technical documentation.
  • Work independently on projects of moderate technical complexity with general supervision.
  • Participate in Agile teams, contributing to sprint planning and delivery.
  • Provide on-call support outside business hours and on weekends on a rotating basis.

Required Qualifications

  • Bachelor’s degree in Computer Science, Information Systems, Database Systems, Engineering, or a related field, or equivalent experience.
  • 4–5 years of professional experience in a similar role.
  • Strong experience with:
    • SQL
    • Snowflake
    • ETL / ELT processes
    • Cloud-based data warehousing platforms
  • Experience with ETL tools (e.g., Informatica) and programming languages such as Python.
  • Solid understanding of data warehouse design and administration.
  • Experience working with Agile methodologies (Scrum).
  • Strong analytical, conceptual thinking, and problem-solving skills.
  • Ability to plan, prioritize, and execute tasks effectively.
  • Strong communication skills, able to explain technical concepts to non-technical stakeholders.
  • Excellent written and verbal communication skills.
  • Strong interpersonal, listening, and teamwork skills.
  • Self-motivated, proactive, and results-driven.
  • Strong service orientation and professional conduct.

Preferred Qualifications

  • Certifications in Snowflake, SQL Server, or T-SQL.

Conditions

Remote | Contractor | High English proficiency

$$$ Full time
Data Engineer (PySpark, AWS)
  • Improving South America
Python SQL ETL Kafka
At Improving South America, we provide IT services to transform the perception of the IT professional, focusing on IT consulting, software development, and agile training. You will work on business intelligence, data visualization, and dashboard projects that support decision-making, collaborating with cross-functional teams to deliver scalable, high-value solutions for international clients in a fully remote environment.
The company fosters an exceptional work culture built on teamwork, excellence, and fun, with a focus on personal growth and shared rewards. You will join a community that prioritizes open communication and solid long-term working relationships, backed by a structure for professional development and continuous learning.

Role responsibilities

Improving South America is looking for a Senior Data Engineer to design and operate highly available data solutions at global scale, working with batch and streaming pipelines that process large volumes of information. The role requires experience building robust pipelines and working with Kafka, PySpark, and data warehouses on AWS, plus a strong command of SQL and data modeling.

Responsibilities:

  • Design and operate batch and streaming data pipelines (see the sketch after this list).
  • Process large data volumes (billions of events per day and multi-terabyte datasets).
  • Build integrations between MySQL and Redshift.
  • Design data models and optimize SQL queries.
  • Implement CDC strategies, incremental loads, and full loads.
  • Integrate data through internal and third-party APIs.
  • Diagnose pipeline failures, latency issues, and data quality problems.
  • Contribute to data architecture decisions.
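
As an illustration of the streaming side of this work, here is a minimal PySpark Structured Streaming sketch that reads a Kafka topic and lands events for downstream warehouse loads; the broker, topic, schema, and paths are hypothetical, and the spark-sql-kafka connector package is assumed to be available:

```python
# Sketch: consume a Kafka topic with Structured Streaming, parse JSON events,
# and land them as Parquet for downstream warehouse loads. Broker, topic,
# schema, and S3 paths are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("events-stream").getOrCreate()

schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_time", TimestampType()),
    StructField("payload", StringType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

query = (
    events.writeStream.format("parquet")
    .option("path", "s3a://bucket/events/")                    # landing zone
    .option("checkpointLocation", "s3a://bucket/events_chk/")  # recovery state
    .start()
)
query.awaitTermination()
```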

Role requirements

  • 7+ years of experience in data engineering.
  • Intermediate/advanced English (B2/C1) for technical communication.
  • Solid experience with Python.
  • Experience with PySpark.
  • Experience working with Kafka.
  • Experience with Redshift or another modern data warehouse.
  • Experience integrating MySQL → Redshift.
  • Advanced command of SQL (modeling, optimization, and complex queries).
  • Experience with AWS and cloud data services.
  • Experience designing batch and streaming ETL/ELT pipelines.
  • Experience with Glue, Step Functions, or serverless architectures on AWS.
  • Experience working with AI-assisted development tools (e.g., Cursor).
  • Experience in high-volume data environments.

Benefits we offer

  • Long-term contract.
  • 100% remote.
  • Vacation and PTO.
  • The chance to receive 2 bonuses per year.
  • 2 salary reviews per year.
  • English classes.
  • Apple equipment.
  • Online course platform.
  • Budget for buying books.
  • Budget for buying work materials.
  • And much more...

Computer provided Improving South America provides a computer for your work.
Informal dress code No dress code is enforced.
$$$ Full time
Data Operational Engineer
  • TIMINING
  • Santiago (Hybrid)
Python Git SQL ETL

At TIMining, we turn operational information from mine sites into actionable value through our control and monitoring platforms. This role joins the data team, contributing to the design, development, and operation of ETL pipelines that integrate diverse sources into TIMining's databases and products. You will be part of a project focused on operational continuity, algorithm calibration, and automation of internal processes to streamline the workflow of both the client and the team.

Responsibilities

  • Develop, maintain, and document Python and SQL scripts (connectors) for ETL into the databases of TIMining's products.
  • Design, implement, and maintain CI/CD flows so pipeline changes reach production safely and automatically.
  • Monitor the health and performance of data processes (logging and alerting), guaranteeing uptime and response to operational incidents.
  • Manage and orchestrate pipelines with scheduling tools (Airflow, Dagster) and containers (Docker).
  • Validate pipeline results (qualitatively and quantitatively) against operational reports from mine sites (see the sketch after this list).
  • Identify, assess, and mitigate risks in pipeline development, covering data quality and contingency plans.
  • Build internal projects to automate routine tasks and simplify the team's work.
  • Attend and present at technical meetings with clients to arrange access to data sources and resolve questions.
  • Analyze and document the client's data sources by system (FMS, MGS, or others) and calibrate the algorithms of the company's software.
  • Work 24/7 shifts to ensure operational continuity.
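
As a rough sketch of the quantitative validation described above: compare daily aggregates produced by a pipeline against the operational report and flag deviations beyond a tolerance. The data, column names, and 1% tolerance are hypothetical:

```python
# Sketch: reconcile pipeline output against an operational report by comparing
# daily totals within a tolerance, logging an alert for each failing day.
import logging

import pandas as pd

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline-validation")


def reconcile(pipeline_df, report_df, tol=0.01):
    """Compare total tons per day; flag days deviating by more than tol (1%)."""
    p = pipeline_df.groupby("date")["tons"].sum()
    r = report_df.groupby("date")["tons"].sum()
    deviation = ((p - r).abs() / r).fillna(1.0)  # missing days count as failures
    bad_days = deviation[deviation > tol]
    for day, dev in bad_days.items():
        log.error("reconciliation failed for %s: %.1f%% deviation", day, dev * 100)
    return bad_days.empty


# Hypothetical daily totals: the pipeline undercounts the second day.
pipeline = pd.DataFrame({"date": ["d1", "d1", "d2"], "tons": [50.0, 50.0, 80.0]})
report = pd.DataFrame({"date": ["d1", "d2"], "tons": [100.0, 100.0]})
print(reconcile(pipeline, report))  # False: d2 deviates by 20%
```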

Requirements and experience

A degree in Data Science Engineering, Civil Engineering, or a related computing field. At least 2 years of experience in similar roles and verifiable experience implementing ETL pipelines are required. We value advanced knowledge of Python and SQL, hands-on experience deploying applications and working with containers, and experience orchestrating data with tools such as Apache Airflow or Prefect. Also expected: command of version control (Git) and collaborative workflows, API consumption, and advanced database querying; knowledge of Google Suite and Office; analytical skills, proactivity, and the ability to work autonomously and in a team. Languages: native Spanish; English desirable (upper intermediate).

We are looking for candidates with experience in technology projects and knowledge of the open-pit mining industry, plus experience with cloud architectures (AWS, Azure, or GCP) and Infrastructure as Code (Terraform, CloudFormation).

Desirable requirements

Experience in:
  • Implementing technology projects.
  • Knowledge of the mining industry and its operations.
  • Familiarity with agile methodologies, plus experience with Infrastructure as Code tools.
  • Knowledge of monitoring solutions and large-scale production data environments is desirable.

Benefits

We offer an environment focused on innovation in the mining industry, with opportunities for professional growth and multidisciplinary teamwork. If you fit the profile, we invite you to join TIMining and contribute to the digital transformation of mining operations.

Gross salary $3000 - 4000 Full time
Senior Data Engineer
  • Artefact LatAm
Python Big Data Data lake Data Architecture

At Artefact LatAm, we are a leading consultancy focused on accelerating the adoption of data and artificial intelligence to generate positive impact. The Senior Data Engineer will lead the development of Big Data projects with clients, designing and executing data architectures that bridge business strategy and technology under the data governance principles each client establishes. They will also be responsible for designing, maintaining, and implementing both transactional and analytical data storage structures. The role involves working with large volumes of data from diverse sources, processing them in Big Data environments, and translating the results into solid technical designs and consistent data. They are also expected to review consolidated data integration and describe how interoperability enables multiple systems to communicate with one another.

Your responsibilities will be:

  • Design data architectures that meet client requirements and align with their business strategy, ensuring adherence to data governance principles.
  • Design, implement, maintain, and update transactional and analytical data storage structures, guaranteeing data integrity and availability.
  • Extract data from diverse sources and transfer it efficiently into data storage environments.
  • Design and implement processes that handle large data volumes in Big Data environments, using the tools and technologies appropriate to each project.
  • Communicate findings, results, and diagnostics effectively, telling a story that makes them easy for the client to understand and act on.
  • Collaborate with multidisciplinary teams on strategic project management, ensuring timely, successful delivery of data solutions.
  • Develop and maintain cloud and on-premise solutions.
  • Use agile methodologies to develop and deliver data solutions, adapting quickly to project changes and requirements.
  • Support the team by passing on knowledge and best practices, helping with training and continuous learning according to each member's individual needs.
  • Manage the team through strategic project planning, ensuring efficient task distribution and clear communication of objectives.

The role requirements are:

  • At least 3 years of experience using data management tools.
  • Previous experience in the strategic management of multidisciplinary teams.
  • Advanced knowledge of Python or PySpark and experience applying it in data projects.
  • Experience designing and implementing data warehouses, data lakes, and data lakehouses.
  • Development of data-serving (data availability) solutions.
  • Hands-on experience with at least one of the major cloud file stores.
  • Good command of English.

Nice-to-haves (not required):

  • Experience in consulting and/or strategy or digital transformation projects
  • Experience with data processing and storage services from AWS, GCP, or Azure
  • Certifications

Some of our benefits:

  • Fast professional growth: a mentoring plan for training and career advancement, with raise and promotion review cycles every 6 months.
  • Up to 11 vacation days beyond the legal minimum, to rest and maintain a healthy work–life balance.
  • Participation in the company profit bonus, plus referral bonuses for new hires and clients.
  • A half day off on your birthday, plus a small gift.
  • Biweekly paid team lunches at our hubs (Santiago, Bogotá, Lima, and Mexico City).
  • A budget of USD 500 per year for training: courses, memberships, events, and more (Chile).
  • Flexible hours and objective-based work.
  • Remote work, with the option to go hybrid (office in Santiago de Chile, paid cowork in Bogotá, Lima, and Mexico City).
  • Extended paternity leave for men, and coverage of the health-system pay gap for women (Chile)

...and more!

Flexible hours Flexible schedule and freedom for attending family needs or personal errands.
Internal talks Artefact LatAm offers space for internal talks or presentations during working hours.
Bicycle parking You can park your bicycle for free inside the premises.
Company retreats Team-building activities outside the premises.
Computer repairs Artefact LatAm covers some computer repair expenses.
Computer provided Artefact LatAm provides a computer for your work.
Education stipend Artefact LatAm covers some educational expenses related to the position.
Performance bonus Extra compensation is offered upon meeting performance goals.
Personal coaching Artefact LatAm offers counseling or personal coaching to employees.
Informal dress code No dress code is enforced.
Vacation over legal Artefact LatAm gives you paid vacations over the legal minimum.
Beverages and snacks Artefact LatAm offers beverages and snacks for free consumption.
Vacation on birthday Your birthday counts as an extra day of vacation.
Parental leave over legal Artefact LatAm offers paid parental leave over the legal minimum.
$$$ Full time
Data Engineer
  • <Devaid>
Java Python Node.js ETL
At <Devaid> we are passionate about technology challenges, and our clients know it: they bring us problems that keep us constantly testing and implementing new technologies.
We work heavily in the cloud, as we are a Google Cloud Premier Partner in Chile, so you will have the opportunity to grow as a cloud professional.
Depending on the client's needs, <Devaid> delivers web and mobile solutions, systems integration, and more, giving users access to their tools from any device, anywhere. We enable collaborative work among multiple users while keeping a centralized base of information.

Role responsibilities

We expect you to work on the following activities:
  • Building data loading and transformation pipelines.
  • Data modeling and building data warehouses and data lakes.
  • Systems integration.
  • Building machine learning models with low-code AutoML tools.
You will join consulting teams that serve major companies in Chile as a data engineer. These teams include several profiles, such as software developers, data architects, and data scientists. Services are delivered remotely and by project (this is not staff outsourcing), so you can work from home without any problem. You will have daily meetings with your team to coordinate activities and work through complex issues as they arise.

Role requirements

The requirements to perform well in the role are:
  • 1 year of experience as a Data Engineer.
  • Programming in Python, Node.js, or Java (at least one of the three).
  • Knowledge of data warehouse and ETL solutions.
  • Knowledge of data processing platforms such as Apache Spark, Dataflow, or similar.
  • Previous experience with a public cloud (AWS, Azure, or GCP).
If you don't meet every point, don't be discouraged; we would still like to meet you.
The job is 100% remote, but you must have a RUT and/or your immigration papers in order in Chile.

Desirable

Any of the following skills add points to your application; none of them are required:
  • Knowledge of Google Cloud tools, including Google BigQuery, Dataflow, Data Fusion, and Pub/Sub.
  • Experience with infrastructure deployment platforms such as Terraform.
  • Experience using the gcloud command-line tool.

Benefits

We promise a very pleasant work environment, full of challenges, where you will quickly see the projects you are involved in being actively used by our clients, which is always very gratifying.
Other activities:
  • Monthly activities (food delivery coupons, online games, group activities).
  • Annual retreat: the whole company gathers for 2 days somewhere touristy for group activities and team bonding.
  • A flexible day off on your birthday.
  • Training in whatever interests you most.
  • Google Cloud certifications: a certification program across different GCP professional tracks, thanks to our status as a Google Cloud Premier Partner in Chile.

Flexible hours Flexible schedule and freedom for attending family needs or personal errands.
Health coverage Devaid pays or copays health insurance for employees.
Company retreats Team-building activities outside the premises.
Education stipend Devaid covers some educational expenses related to the position.
Performance bonus Extra compensation is offered upon meeting performance goals.
Vacation on birthday Your birthday counts as an extra day of vacation.
Gross salary $1200 - 1800 Full time
Data Analyst (Azure + BI)
  • Asesoría y Gestión de Procesos S.A
SQL ETL Power BI Data governance
At Asesoría y Gestión de Procesos S.A. we are recruiting talent for a Data & Analytics team focused on boosting the operational and strategic visibility of our clients, mainly in the automotive and real estate sectors. The project covers the full data lifecycle: from ingestion and modeling to visualization and proactive monitoring. Our goal is to turn data into actionable insights that drive business decisions, align KPIs with strategic objectives, and deliver reliable dashboards and alerts for executive and operations teams.
We work with Azure Data Factory, Data Lake, and BI tools such as Power BI and Grafana for real-time monitoring. The role joins a company with 12 years of experience, a portfolio of more than 120 clients, and a clear mission to accelerate and improve processes through technology and innovation.

Duties and responsibilities

  • Understand the business and define key KPIs with stakeholders, documenting calculation rules and making sure indicators are actionable.
  • Design and develop ETL/ELT pipelines in Azure Data Factory, integrating diverse sources (databases, APIs, files) and guaranteeing data quality.
  • Model data into appropriate schemas and maintain the data warehouse/data marts for efficient analytical consumption.
  • Build interactive dashboards in Power BI and Grafana, translating analytical complexity into visualizations that are clear and useful for different audiences.
  • Monitor data and define automatic alerts and notifications for deviations, spotting anomalies and generating proactive insights (see the sketch after this list).
  • Collaborate with business and IT teams to guarantee the availability, scalability, and security of data solutions.
  • Take part in defining data architecture and data governance best practices.
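
As an illustration of the deviation alerts mentioned above, here is a minimal sketch of one common approach: flag a KPI whose latest value strays too far from a rolling baseline. The data and thresholds are hypothetical, and a real setup would run a check like this inside the monitoring tool or a scheduled job:

```python
# Sketch: flag a KPI deviation against a rolling baseline; the kind of check
# that would feed an automated alert or notification.
import pandas as pd


def kpi_alert(series: pd.Series, window: int = 28, n_sigmas: float = 3.0) -> bool:
    """Return True if the most recent point deviates from the rolling mean
    of the preceding window by more than n_sigmas standard deviations."""
    baseline = series.iloc[:-1].tail(window)
    mean, std = baseline.mean(), baseline.std()
    latest = series.iloc[-1]
    return std > 0 and abs(latest - mean) > n_sigmas * std


# Hypothetical daily leads KPI; the sudden drop on the last day triggers an alert.
daily_leads = pd.Series([100, 98, 103, 101, 99, 102, 100, 55])
print(kpi_alert(daily_leads, window=7))  # True
```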

Requirements and profile

  • Solid experience in data integration and analysis with Azure Data Factory and Azure services (Data Lake), plus advanced SQL.
  • Experience building dashboards in Power BI and Grafana; knowledge of data modeling (data warehouse, OLAP) and ETL/ELT processes.
  • Ability to design end-to-end solutions: from KPI definition to the delivery of visualizations and operational alerts.
  • Knowledge of scripting and best practices in data governance, quality, and security.
  • Ability to communicate insights to non-technical audiences, analytical thinking, and a focus on business impact.
  • Previous experience in BI/analytics roles and the ability to work both autonomously and collaboratively.

Desirable skills and competencies

BI/analytics certifications and experience with projects in the automotive sector. Experience in agile environments, stakeholder management, and the ability to prioritize in changing environments are appreciated.

Benefits

At Asesoría y Gestión de Procesos S.A, we offer a flexible work environment and attractive benefits, such as:
  • Three free afternoons per year.
  • Casual dress.
  • Two extra days off per year.
  • A day off on your birthday.
  • Complementary insurance.
  • And many other benefits.
We hope to have you on our team!

Fully remote You can work from anywhere in the world.
Flexible hours Flexible schedule and freedom for attending family needs or personal errands.
Health coverage Asesoría y Gestión de Procesos S.A pays or copays health insurance for employees.
Informal dress code No dress code is enforced.
Vacation over legal Asesoría y Gestión de Procesos S.A gives you paid vacations over the legal minimum.
$$$ Full time
Sr Data Engineer – CRM Customer Service
  • BC Tecnología
  • Santiago (Hybrid)
Python SQL ETL Spark
BC Tecnología is an IT consultancy with experience in IT services, outsourcing, and professional recruitment. We specialize in building agile teams for Infrastructure, Software Development, and Business Units, with clients in financial services, insurance, retail, and government. We are looking to add a Senior Data Engineer with a strong focus on CRM and data migration for CRM Customer Service projects, among other high-profile clients. The role is part of data modernization, cloud migration, and data governance initiatives for a program centered on customer experience solutions.

Main responsibilities

  • Design and develop ETL/ELT pipelines for data integration and migration.
  • Execute data migrations from legacy systems to cloud platforms and Dynamics 365.
  • Ensure data integrity, quality, and availability through validations and reconciliations.
  • Collaborate with the Technical Lead on the program's data architecture.
  • Document data models, pipelines, and migration processes.
  • Take part in agile ceremonies and report progress on the data workstream.
  • Work with QA on end-to-end data validation.
  • Transfer data knowledge to the team.

Description

We require a professional with at least 4 years of experience in data engineering, ideally in CRM and retail environments. The candidate will be responsible for designing and implementing pipelines for data extraction, transformation, and loading, as well as managing complex migrations from legacy systems to cloud environments and Microsoft Dynamics 365 Dataverse. They will join a collaborative technical team, taking part in defining the data architecture, assuring quality, and enabling continuous delivery through CI/CD practices applied to data. Experience with AWS (S3, Glue, Athena, Redshift, Lambda, Step Functions), Airflow or Step Functions for orchestration, Python and Spark/PySpark, advanced SQL, dimensional and relational modeling, and knowledge of Dynamics 365 will be valued.
We are looking for proactivity, results orientation, and communication skills to work in an agile, cross-functional environment focused on delivering business value within a culture of continuous improvement.

Desirable requirements

Experience migrating data between ERP/CRM systems and cloud platforms; familiarity with data governance, reconciliations, and end-to-end data validation; experience working with business teams and stakeholders; AWS or Data & Cloud certifications; clear, well-organized documentation skills; knowledge of Microsoft Dynamics 365 Dataverse. Experience in retail and CRM services, and the ability to work in regulated environments, will be valued.

Benefits

At BC Tecnología we foster a collaborative work environment that values commitment and constant learning. Our culture is oriented toward professional growth through integration and knowledge sharing across teams.
The hybrid arrangement we offer, based in Las Condes, combines the flexibility of remote work with in-person collaboration, supporting a better balance and a dynamic workplace.
You will take part in innovative projects with high-profile clients across diverse sectors, in an environment that fosters inclusion, respect, and technical and professional growth.

$$$ Full time
AWS Data Engineer
  • BC Tecnología
  • Santiago (Hybrid)
Python SQL ETL Spark
At BC Tecnología we build agile teams for IT services, focusing on Infrastructure, Software Development, and Business Units for clients in finance, insurance, retail, and government. Our goal is to deliver high-impact solutions through consulting, project development, outsourcing, and recruitment.
As part of our CRM Customer Services program, we migrate and consolidate data onto cloud platforms (AWS) and Dynamics 365 Dataverse, guaranteeing the integrity, quality, and availability of information for operations and analytics. You will take part in innovative initiatives with high-profile clients, with a focus on continuous learning and technical growth in a collaborative, client-oriented environment. The hybrid arrangement combines remote work with time in our offices to foster collaboration and a dynamic workplace.

Responsibilities

  • Design, develop, and execute the data engineering and migration processes required by the CRM Customer Services program, ensuring data integrity, quality, and availability on cloud platforms.
  • Technical knowledge: data engineering on AWS (S3, Glue, Athena, Redshift, Lambda, Step Functions); ETL/ELT design and development of data pipelines; data migration between legacy systems and cloud platforms; advanced SQL and data modeling (dimensional, relational).
  • Develop pipelines in Python and Spark/PySpark, apply data quality practices (validation, cleansing, reconciliation, profiling), and use orchestration tools (Airflow, Step Functions); version control and CI/CD applied to data.
  • Know Microsoft Dynamics 365 Dataverse and its data model; design and execute migrations from legacy systems to cloud platforms and Dynamics 365; document models, pipelines, and migration processes.
  • Take part in agile ceremonies, report progress, and work with QA on end-to-end validation; transfer data knowledge to the team and collaborate closely with the Technical Lead on the program's data architecture.

Requirements and profile

We are looking for a professional with solid experience in data engineering in cloud environments, especially AWS, and in data migration projects toward modern cloud and CRM solutions. You should master ETL/ELT pipelines and relational and dimensional data modeling, and be able to work in dynamic, collaborative environments. Experience with Oracle/Siebel, Great Expectations or Deequ for data quality, and knowledge of the retail sector will be valued. You should be proactive and results-oriented, with the communication skills to work with multidisciplinary teams and stakeholders.
Minimum requirements: experience in AWS data analytics/data engineering; designing and migrating data between systems; advanced SQL; Python or Spark; experience with orchestration tools; familiarity with Dynamics 365 Dataverse; experience in agile environments and the ability to document processes and data models. AWS Data Analytics certification, experience migrating from Oracle/Siebel, and knowledge of data quality tools are desirable, as is experience in retail and CRM environments.

Desirable knowledge

AWS Data Analytics or Data Engineering certification. Experience migrating data from Oracle/Siebel. Knowledge of data quality tools such as Great Expectations or Deequ. Experience in the retail sector. Additional knowledge of data DevOps and agile methodologies. The ability to work on multicultural teams and explain technical concepts to non-technical audiences. Experience in data architecture for CRM and in managing complex migration projects is valued.

Benefits

At BC Tecnología we foster a collaborative work environment that values commitment and constant learning. Our culture is oriented toward professional growth through integration and knowledge sharing across teams.
The hybrid arrangement we offer, based in Las Condes, combines the flexibility of remote work with in-person collaboration, supporting a better balance and a dynamic workplace.
You will take part in innovative projects with high-profile clients across diverse sectors, in an environment that fosters inclusion, respect, and technical and professional growth.

$$$ Full time
Data Engineer – SQL Migration
  • WiTi
  • Santiago (Hybrid)
SQL ETL Automation AWS

At WiTi we are leading a strategic project to migrate a legacy analytics ecosystem to a modern cloud architecture on AWS. The goal is to standardize, optimize performance, and scale the operation, porting non-standard SQL logic to standard SQL for Amazon Redshift. The effort involves automation to accelerate the migration and reduce errors, plus close interaction with data, BI, and IT teams to ensure traceability, reproducibility, and enterprise-grade data governance.

Serás parte de un equipo multidisciplinario que diseña y ejecuta la migración de punta a punta, estableciendo reglas de conversión, pipelines, controles de calidad y guías de codificación reutilizables. El proyecto ofrece visibilidad transversal sobre ETL/ELT y buenas prácticas de gobierno de datos en un entorno cloud escalable.


Responsabilidades Clave

  • Analizar programas y scripts existentes con lógica SQL no estándar, incluyendo estructuras de procesamiento propias de entornos legacy (jobs, macros, librerías).
  • Convertir y reescribir lógica SQL legada a SQL estándar compatible con Amazon Redshift, cuidando equivalencia funcional y performance.
  • Definir un enfoque repetible para migrar grandes volúmenes de programas: reglas, patrones de conversión y estándares de codificación.
  • Automatizar el proceso de transformación mediante scripts, reglas de conversión, validaciones automáticas, templates o pipelines (como se bosqueja al final de esta lista).
  • Trabajar con procesos ETL/ELT en AWS, integrándose con el stack del cliente (fuentes, cargas, transformaciones, orquestación, monitoreo).
  • Validar equivalencia funcional entre el sistema origen y Redshift mediante reconciliaciones de datos, controles de calidad y monitoreo.
  • Documentar reglas de conversión, decisiones técnicas y casos borde para un proceso mantenible y auditable.
  • Colaborar con data y TI para asegurar trazabilidad, reproducibilidad y rendimiento del data warehouse en la nube.
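
Como referencia del tipo de automatización descrito, este boceto en Python aplica reglas de conversión simples mediante expresiones regulares; las reglas son ejemplos hipotéticos y un caso real exigiría un tratamiento más robusto, idealmente con un parser de SQL:

    # Boceto de un motor simple de reglas de conversión de SQL legado a Redshift.
    # Las reglas son ejemplos hipotéticos, no las reglas del proyecto.
    import re

    REGLAS = [
        # NVL (no estándar) tiene la misma semántica que COALESCE de dos argumentos.
        (re.compile(r"\bNVL\s*\(", re.IGNORECASE), "COALESCE("),
        # SYSDATE se reemplaza por la expresión estándar equivalente.
        (re.compile(r"\bSYSDATE\b", re.IGNORECASE), "CURRENT_TIMESTAMP"),
    ]

    def convertir(sql_legado: str) -> str:
        sql = sql_legado
        for patron, reemplazo in REGLAS:
            sql = patron.sub(reemplazo, sql)
        return sql

    print(convertir("SELECT NVL(monto, 0), SYSDATE FROM ventas"))
    # SELECT COALESCE(monto, 0), CURRENT_TIMESTAMP FROM ventas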

Requisitos Excluyentes

  • SQL avanzado: queries complejas, optimización de performance, joins pesados, window functions/CTEs, lectura e interpretación de planes de ejecución.
  • Experiencia práctica con Amazon Redshift: diseño y escritura de SQL, buenas prácticas de rendimiento y modelado en Redshift.
  • Conocimiento de ETL/ELT en AWS (p. ej., Glue, Lambda, Step Functions) y otras herramientas de orquestación.
  • Experiencia en contextos enterprise centrada en calidad, trazabilidad, documentación y resultados reproducibles.
  • Experiencia en migraciones desde tecnologías legacy hacia cloud data warehouses (Redshift, Snowflake, BigQuery) y automatización de migraciones.

Requisitos Deseables

  • Conocimientos de Python u otros lenguajes de scripting para apoyar automatización y tooling interno.
  • Experiencia en gobernanza de datos: naming conventions, documentación, data quality checks y monitoreo.

Beneficios

En WiTi fomentamos una cultura de aprendizaje y colaboración, con foco en proyectos digitales y de datos de alto impacto. Entre los beneficios se incluyen:

  • Plan de carrera personalizado orientado a desarrollo en data, cloud y analítica.
  • Certificaciones para continuar creciendo en tu carrera (AWS, data, analítica).
  • Cursos de idiomas para desarrollo personal y profesional.

Digital library Access to digital books or subscriptions.
Computer provided WiTi provides a computer for your work.
Personal coaching WiTi offers counseling or personal coaching to employees.
Informal dress code No dress code is enforced.
$$$ Full time
Databricks Administrator
  • Improving South America
Python SQL Automation Terraform
En Improving South America, brindamos servicios de TI para transformar la percepción del profesional de TI. Nos enfocamos en consultoría de TI, desarrollo de software y formación ágil.

Contribuirás a la construcción y mantenimiento de soluciones de datos que soportan analítica, reporting y la toma de decisiones operativas en toda la organización.

Trabajando de cerca con data engineers y otros perfiles tecnológicos, apoyarás las plataformas que permiten a los equipos transformar datos en insights relevantes.

En este rol, te enfocarás en la gestión de plataformas de datos y en su rendimiento general. Colaborarás con equipos multifuncionales para entender requerimientos de datos, mejorar sistemas existentes y entregar soluciones que respondan a necesidades del negocio.

Esta es una excelente oportunidad para seguir desarrollando tus habilidades en data engineering mientras contribuyes a impulsar decisiones basadas en datos a escala.


Job functions

  • Monitorear y mantener la salud, disponibilidad y rendimiento de instancias de Snowflake y Databricks, utilizando herramientas nativas y estándares internos
  • Revisar periódicamente métricas de uso, logs del sistema y consumo de recursos para detectar y abordar anomalías (ver el boceto al final de esta lista)
  • Asegurar la ejecución de actualizaciones, parches y respaldos conforme a políticas y estándares definidos
  • Investigar incidentes y degradaciones del servicio, gestionando su resolución o escalamiento para minimizar el impacto en el negocio
  • Administrar el ciclo completo de accesos: provisión, desprovisión y asignación de roles en Snowflake y Databricks, garantizando cumplimiento de estándares de seguridad
  • Implementar y auditar controles de acceso a datos, trabajando junto a equipos de seguridad (InfoSec) y líderes de plataforma
  • Mantener actualizados grupos, permisos y accesos según cambios organizacionales o necesidades de proyectos
  • Actuar como punto principal de contacto para soporte técnico e incidentes relacionados con las plataformas
  • Asesorar a los equipos en buenas prácticas de uso eficiente y seguro de las plataformas (optimización de costos, data sharing, orden de workspaces)
  • Mantener documentación clara y actualizada de la plataforma (onboarding, FAQs, guías de troubleshooting)
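
A modo de ejemplo del monitoreo descrito, un boceto mínimo que consulta el estado de los clusters a través de la API REST de Databricks; el host y el token se asumen definidos como variables de entorno:

    # Boceto: chequeo básico de salud de clusters vía la API REST de Databricks.
    # DATABRICKS_HOST y DATABRICKS_TOKEN se asumen definidos en el entorno.
    import os
    import requests

    host = os.environ["DATABRICKS_HOST"]    # p. ej. https://<workspace>.cloud.databricks.com
    token = os.environ["DATABRICKS_TOKEN"]

    resp = requests.get(
        f"{host}/api/2.0/clusters/list",
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()

    # Reporta clusters en estados que ameritan revisión.
    for cluster in resp.json().get("clusters", []):
        if cluster.get("state") in ("ERROR", "TERMINATED"):
            print(f"Revisar {cluster.get('cluster_name')}: estado {cluster.get('state')}")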

Qualifications and requirements

  • Título universitario en Ciencias de la Computación, Sistemas de Información o carrera afín, o experiencia equivalente
  • +2 años de experiencia administrando plataformas Snowflake y Databricks (o al menos una con conocimiento sólido de la otra)
  • Dominio de SQL, scripting (Python o Shell) y ecosistemas de datos en la nube (AWS, Azure o GCP)
  • Conocimiento en herramientas de automatización (Terraform, AWS CloudFormation, Databricks CLI/API, entre otros)
  • Experiencia gestionando usuarios, roles y controles de seguridad en entornos regulados
  • Capacidad para diagnosticar y resolver problemas de plataforma
  • Experiencia con herramientas de monitoreo, logging y alertas
  • Inglés intermedio-avanzado o avanzado (indispensable, ya que se realizan reuniones con equipos internacionales)

Conditions

  • Contrato a largo plazo.
  • 100% Remoto.
  • Vacaciones y PTOs
  • Posibilidad de recibir 2 bonos al año.
  • 2 revisiones salariales al año.
  • Clases de inglés.
  • Equipamiento Apple.
  • Plataforma de cursos en línea.
  • Budget para compra de libros.
  • Budget para compra de materiales de trabajo.
  • Y mucho más.

Internal talks Improving South America offers space for internal talks or presentations during working hours.
Computer provided Improving South America provides a computer for your work.
$$$ Full time
Cloud Data Engineer
  • WiTi
  • Santiago (Hybrid)
Python SQL ETL CI/CD
WiTi conecta talento tecnológico con proyectos de alto impacto en Latinoamérica. Nuestro equipo se enfoca en la integración de sistemas, software a medida y desarrollos innovadores para dispositivos móviles, con énfasis en resolver problemas complejos a través de soluciones innovadoras.
Este rol forma parte de un equipo responsable de modernizar un ecosistema analítico legado hacia una arquitectura cloud en AWS, con foco en estandarización, performance y escalabilidad. El proyecto implica migrar y optimizar la lógica de bases de datos preexistentes hacia Amazon Redshift, contribuyendo a la automatización del proceso y garantizando la calidad, consistencia y rendimiento de los datos.


Responsabilidades Clave

  • Analizar y comprender procesos analíticos existentes (en SQL u otros entornos heredados) para reestructurarlos sobre Amazon Redshift.
  • Convertir y optimizar lógica SQL hacia estándares compatibles con Redshift, aplicando buenas prácticas de modelado y rendimiento.
  • Diseñar y documentar enfoques repetibles para la migración de consultas y estructuras de datos (catálogo de reglas, patrones de transformación).
  • Colaborar en tareas de automatización de migraciones (scripts en Python, templates SQL, validaciones automáticas, pipelines CI/CD).
  • Mantener y mejorar procesos ETL/ELT en AWS, apoyándose en servicios como Glue, Lambda, Step Functions y S3.
  • Validar resultados de conversión mediante controles de reconciliación y pruebas de calidad de datos, como se ilustra en el boceto al final de esta lista.
  • Documentar decisiones técnicas, reglas de conversión y excepciones para asegurar trazabilidad y mantenibilidad del proceso.
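
El siguiente boceto en Python ilustra un control de reconciliación básico entre el sistema origen y Redshift, comparando conteos y sumas de control; las conexiones se asumen ya abiertas con un driver DB-API, y la tabla y columna son supuestos:

    # Boceto de reconciliación origen vs. Redshift: compara el conteo de filas
    # y una suma de control por tabla. `conn_origen` y `conn_redshift` se
    # asumen como conexiones DB-API ya abiertas.

    def metrica(conn, sql):
        cur = conn.cursor()
        cur.execute(sql)
        return cur.fetchone()[0]

    def reconciliar(conn_origen, conn_redshift, tabla, columna_suma):
        chequeos = {
            "conteo": f"SELECT COUNT(*) FROM {tabla}",
            "suma_control": f"SELECT SUM({columna_suma}) FROM {tabla}",
        }
        for nombre, sql in chequeos.items():
            origen = metrica(conn_origen, sql)
            destino = metrica(conn_redshift, sql)
            estado = "OK" if origen == destino else "DIFERENCIA"
            print(f"{tabla}.{nombre}: origen={origen} redshift={destino} [{estado}]")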

Requisitos Excluyentes

  • 3+ años de experiencia como Ingeniero de Datos o rol equivalente.
  • Dominio avanzado de SQL estándar (uniones complejas, funciones de ventana, CTE, tuning, lectura de planes de ejecución).
  • Experiencia práctica con Amazon Redshift (particionamiento, distribución, optimización de consultas y almacenamiento).
  • Conocimientos sólidos de procesos ETL/ELT en entornos cloud, idealmente AWS.
  • Experiencia en proyectos orientados a migración o modernización de plataformas de datos.
  • Conocimientos en Python para scripting y automatización de validaciones.
  • Nivel intermedio o superior de inglés técnico.

Deseables

  • Experiencia con DataOps, manejo de pipelines (Airflow, Step Functions o similares).
  • Familiaridad con herramientas de Infraestructura como Código (Terraform, CloudFormation).
  • Experiencia en gobierno de datos, nomenclaturas y validaciones automáticas de calidad.
  • Capacidad de documentar y estandarizar procesos en contextos corporativos.

Beneficios

En WiTi fomentamos una cultura de aprendizaje continuo, colaboración y crecimiento profesional. Entre los beneficios se incluyen:
  • Plan de carrera y oportunidades de desarrollo profesional.
  • Acceso a certificaciones y formación continua.
  • Cursos de idiomas y acceso a biblioteca digital para tu desarrollo personal y profesional.

Digital library Access to digital books or subscriptions.
Computer provided WiTi provides a computer for your work.
Personal coaching WiTi offers counseling or personal coaching to employees.
Informal dress code No dress code is enforced.
$$$ Full time
Ingeniero/a de Datos
  • Assetplan
  • Santiago (Hybrid)
Python Excel SQL ETL

Assetplan es una compañía líder en renta residencial con presencia en Chile y Perú, gestionando más de 40,000 propiedades y operando más de 90 edificios multifamily. El equipo de datos tiene un rol clave para optimizar y dirigir procesos internos mediante soluciones de análisis y visualización de datos, apoyando la toma de decisiones estratégicas en la empresa. Este rol se enfoca en diseñar, desarrollar y optimizar procesos ETL, creando valor mediante datos fiables y gobernados.

En este contexto, el/la profesional se integrará a un equipo multidisciplinario para transformar necesidades de negocio en soluciones de datos escalables que impulsen la eficiencia operativa y la calidad de la información. El objetivo es promover la gobernanza de datos, lograr dashboards útiles y facilitar decisiones informadas en toda la organización.


  • Diseñar, desarrollar y optimizar procesos ETL (Extract, Transform, Load) utilizando Python (Pandas, Numpy) y SQL para ingestar y transformar datos de diversas fuentes (ver el boceto al final de esta lista).
  • Desarrollar y mantener dashboards y paneles en Power BI, integrando visualizaciones estratégicas que acompañen los procesos ETL y proporcionen insights relevantes para áreas de negocio.
  • Trabajar de forma colaborativa con distintas áreas para interpretar necesidades y traducirlas en soluciones de datos que faciliten la toma de decisiones estratégicas.
  • Promover la calidad, escalabilidad y gobernanza de datos durante el diseño, desarrollo y mantenimiento de pipelines, asegurando soluciones robustas y accesibles.
  • Comunicar de manera efectiva con equipos de negocio y tecnología, alineando las soluciones con objetivos corporativos y generando impacto medible en la organización.
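
Como referencia del primer punto de la lista, un boceto mínimo de un paso ETL con Pandas; el archivo de origen, las columnas y la cadena de conexión son supuestos ilustrativos:

    # Boceto mínimo de un paso ETL con Pandas: extraer, limpiar y cargar.
    # El archivo, las columnas y la conexión de destino son supuestos.
    import pandas as pd
    from sqlalchemy import create_engine

    df = pd.read_csv("propiedades.csv")                       # Extract
    df = df.dropna(subset=["id_propiedad"])                   # Transform: sin llave, fuera
    df["renta_uf"] = pd.to_numeric(df["renta_uf"], errors="coerce")
    df = df.drop_duplicates(subset=["id_propiedad"])

    engine = create_engine("postgresql://usuario:clave@host:5432/dw")
    df.to_sql("stg_propiedades", engine, if_exists="replace", index=False)  # Load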

Requisitos y perfil

Buscamos profesionales con 1 a 3 años de experiencia en áreas de datos, en roles de ingeniería o análisis que impliquen manipulación y transformación de datos. Se valorará manejo de SQL a nivel medio, Python (intermedio/avanzado) con experiencia en Pandas y Numpy, y Power BI para desarrollo y mantenimiento de dashboards. Se requiere nivel avanzado de Excel para análisis y procesamiento de información. Experiencia en entornos ágiles y metodologías de desarrollo colaborativo facilita la integración entre equipos técnicos y de negocio. Se valoran conocimientos en otras herramientas de visualización y procesamiento de datos, así como experiencia en gobernanza y calidad de datos para fortalecer el ecosistema de información de Assetplan.
Competencias: capacidad de análisis, atención al detalle, buena comunicación, proactividad y orientación a resultados. Capacidad para trabajar en un entorno dinámico y colaborar con diferentes áreas de la organización para traducir requerimientos en soluciones concretas.

Conocimientos y habilidades deseables

Se valoran conocimientos en metodologías ágiles para gestión de proyectos, habilidades de comunicación efectiva con equipos multidisciplinarios y experiencia en herramientas adicionales de visualización y procesamiento de datos. Experiencia en buenas prácticas de gobernanza y calidad de datos será un plus para robustecer el ecosistema de información de Assetplan.

Beneficios

En Assetplan valoramos y reconocemos el esfuerzo y dedicación de nuestros colaboradores, ofreciendo un ambiente laboral positivo basado en el respeto mutuo y la colaboración. Entre nuestros beneficios contamos con:
  • Días extras de vacaciones por años de antigüedad
  • Modalidad de trabajo híbrido y flexibilidad para trámites personales
  • Monto mensual en app de snacks en la oficina
  • Medio día libre en tu cumpleaños
  • Copago en seguro complementario de salud
  • Reajuste anual de renta basado en IPC
  • Bono anual por resultados de empresa
  • Eventos empresa y happy hours
  • Acceso a plataforma de cursos de formación
  • Convenios con gimnasios, descuentos y más

Flexible hours Flexible schedule and freedom for attending family needs or personal errands.
Partially remote You can work from your home some days a week.
Health coverage Assetplan pays or copays health insurance for employees.
Computer provided Assetplan provides a computer for your work.
Informal dress code No dress code is enforced.
Vacation over legal Assetplan gives you paid vacations over the legal minimum.
Beverages and snacks Assetplan offers beverages and snacks for free consumption.
$$$ Full time
Data Engineer
  • NeuralWorks
  • Santiago (Hybrid)
Python SQL Cloud Computing Data Engineering

NeuralWorks es una compañía de alto crecimiento fundada hace 4 años. Estamos trabajando a toda máquina en cosas que darán que hablar.
Somos un equipo donde se unen la creatividad, la curiosidad y la pasión por hacer las cosas bien. Nos arriesgamos a explorar fronteras donde otros no llegan: un modelo predictivo basado en Monte Carlo, una red convolucional para detección de caras, un sensor de posición Bluetooth, la recreación de un espacio acústico usando finite impulse response.
Estos son solo algunos de los desafíos, donde aprendemos, exploramos y nos complementamos como equipo para lograr cosas impensadas.
Trabajamos en proyectos propios y apoyamos a corporaciones en partnerships donde codo a codo combinamos conocimiento con creatividad, donde imaginamos, diseñamos y creamos productos digitales capaces de cautivar y crear impacto.



Descripción del trabajo

El equipo de Data y Analytics trabaja en diferentes proyectos que combinan volúmenes de datos enormes e IA, como detectar y predecir fallas antes que ocurran, optimizar pricing, personalizar la experiencia del cliente, optimizar uso de combustible, detectar caras y objetos usando visión por computador.
Dentro de un equipo multidisciplinario junto a Data Scientists, Translators, DevOps y Data Architects, tu rol será clave para construir y proveer los sistemas e infraestructura que permiten el desarrollo de estos servicios: los cimientos sobre los cuales se construyen los modelos que generan impacto. Son servicios que deben escalar, con altísima disponibilidad y tolerancia a fallas; en otras palabras, que funcionen. Además, mantendrás tu mirada en los indicadores de capacidad y performance de los sistemas.

En cualquier proyecto que trabajes, esperamos que tengas un gran espíritu de colaboración, pasión por la innovación y el código y una mentalidad de automatización antes que procesos manuales.

Como Data Engineer, tu trabajo consistirá en:

  • Participar activamente durante todo el ciclo de vida del software: desde inception y diseño hasta deploy, operación y mejora.
  • Apoyar a los equipos de desarrollo en actividades de diseño y consultoría, desarrollando software, frameworks y capacity planning.
  • Desarrollar y mantener arquitecturas de datos, pipelines, templates y estándares.
  • Conectarse a través de APIs a otros sistemas usando Python (ver el boceto al final de esta lista).
  • Manejar y monitorear el desempeño de infraestructura y aplicaciones.
  • Asegurar la escalabilidad y resiliencia.
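
Para el punto de integración vía API, un boceto en Python con reintentos y paginación simple; la URL y los parámetros son hipotéticos:

    # Boceto: consumo de una API externa con reintentos y paginación simple.
    # La URL y los parámetros son hipotéticos.
    import requests
    from requests.adapters import HTTPAdapter
    from urllib3.util.retry import Retry

    sesion = requests.Session()
    sesion.mount("https://", HTTPAdapter(max_retries=Retry(total=3, backoff_factor=1)))

    pagina, registros = 1, []
    while True:
        resp = sesion.get("https://api.ejemplo.com/v1/eventos",
                          params={"page": pagina}, timeout=30)
        resp.raise_for_status()
        lote = resp.json()
        if not lote:          # página vacía: no hay más datos
            break
        registros.extend(lote)
        pagina += 1

    print(f"Se descargaron {len(registros)} registros")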

Calificaciones clave

  • Estudios de Ingeniería Civil en Computación o similar.
  • Experiencia práctica de al menos 3 años en entornos de trabajo como Data Engineer, Software Engineer entre otros.
  • Experiencia con Python.
  • Entendimiento de estructuras de datos y habilidades analíticas para trabajar con conjuntos de datos no estructurados; conocimiento avanzado de SQL, incluida la optimización de consultas.
  • Pasión por las problemáticas de procesamiento de datos.
  • Experiencia con servidores cloud (GCP, AWS o Azure), especialmente el conjunto de servicios de procesamiento de datos.
  • Buen manejo de inglés, sobre todo en lectura donde debes ser capaz de leer un paper, artículos o documentación de forma constante.
  • Habilidades de comunicación y trabajo colaborativo.

¡En NeuralWorks nos importa la diversidad! Creemos firmemente en la creación de un ambiente laboral inclusivo, diverso y equitativo. Reconocemos y celebramos la diversidad en todas sus formas y estamos comprometidos a ofrecer igualdad de oportunidades para todos los candidatos.

“Los hombres postulan a un cargo cuando cumplen el 60% de las calificaciones, pero las mujeres sólo si cumplen el 100%.” D. Gaucher, J. Friesen y A. C. Kay, Journal of Personality and Social Psychology, 2011.

Te invitamos a postular aunque no cumplas con todos los requisitos.

Nice to have

  • Agilidad para visualizar posibles mejoras, problemas y soluciones en Arquitecturas.
  • Experiencia en Infrastructure as code, observabilidad y monitoreo.
  • Experiencia en la construcción y optimización de data pipelines, colas de mensajes y arquitecturas big data altamente escalables.
  • Experiencia en procesamiento distribuido utilizando servicios cloud.

Beneficios

  • MacBook Air M2 o similar (con opción de compra hiper conveniente)
  • Bono por desempeño
  • Bono de almuerzo mensual y almuerzo de equipo los viernes
  • Seguro Complementario de salud y dental
  • Horario flexible
  • Flexibilidad entre oficina y home office
  • Medio día libre el día de tu cumpleaños
  • Financiamiento de certificaciones
  • Inscripción en Coursera con plan de entrenamiento a medida
  • Estacionamiento de bicicletas
  • Programa de referidos
  • Salida de “teambuilding” mensual

Library Access to a library of physical books.
Accessible An infrastructure adequate for people with special mobility needs.
Flexible hours Flexible schedule and freedom for attending family needs or personal errands.
Internal talks NeuralWorks offers space for internal talks or presentations during working hours.
Life insurance NeuralWorks pays or copays life insurance for employees.
Meals provided NeuralWorks provides free lunch and/or other kinds of meals.
Partially remote You can work from your home some days a week.
Bicycle parking You can park your bicycle for free inside the premises.
Digital library Access to digital books or subscriptions.
Computer repairs NeuralWorks covers some computer repair expenses.
Dental insurance NeuralWorks pays or copays dental insurance for employees.
Computer provided NeuralWorks provides a computer for your work.
Education stipend NeuralWorks covers some educational expenses related to the position.
Performance bonus Extra compensation is offered upon meeting performance goals.
Informal dress code No dress code is enforced.
Recreational areas Space for games or sports.
Shopping discounts NeuralWorks provides some discounts or deals in certain stores.
Vacation over legal NeuralWorks gives you paid vacations over the legal minimum.
Beverages and snacks NeuralWorks offers beverages and snacks for free consumption.
Vacation on birthday Your birthday counts as an extra day of vacation.
Time for side projects NeuralWorks allows employees to work in side-projects during work hours.
Gross salary $2100 - 2500 Full time
Data Engineer (ETL y Datos)
  • Equifax Chile
  • Santiago (Hybrid)
Java Python Scala ETL
En Equifax Chile transformamos datos en oportunidades. Como parte de una compañía global de data, analítica y tecnología, trabajamos para ayudar a instituciones financieras, empleadores y agencias gubernamentales a tomar decisiones críticas con mayor confianza. En este rol de Data Engineer, nos enfocamos en el ciclo de vida del dato: desde que las fuentes ingresan a la compañía, pasando por el diseño e implementación de la solución, hasta el uso final por distintos clientes internos y externos. Integrar, modelar y poner en producción procesos de ETL y atributos analíticos es clave para habilitar consumo confiable, escalable y listo para analítica.


¿Qué harás?

Como Data Engineer, seremos responsables del análisis constante de las fuentes y del diseño e implementación del ciclo de vida del dato: desde que llega a la compañía hasta su uso final por los distintos clientes, tanto internos como externos.
  • Análisis de Requerimientos de negocio.
  • Diseño de Datos y Solución.
  • Implementación y Mejora de procesos de ETL.
Además, trabajaremos con tecnologías de modelamiento y manejo de datos, con conocimientos del área estadística y conocimientos básicos de modelamiento para la puesta en producción de modelos y atributos analíticos.
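
A modo de ilustración del ciclo extraer-transformar-cargar descrito (independiente de la herramienta de ETL que se utilice), un boceto mínimo con la librería estándar de Python; el archivo y las columnas son supuestos:

    # Boceto de un ETL simple con la librería estándar: extrae desde CSV,
    # transforma y carga a SQLite. Archivo y columnas son supuestos.
    import csv
    import sqlite3

    def extraer(ruta):
        with open(ruta, newline="", encoding="utf-8") as f:
            yield from csv.DictReader(f)

    def transformar(filas):
        for fila in filas:
            fila["rut"] = fila["rut"].strip().upper()
            if fila["rut"]:               # descarta registros sin identificador
                yield fila

    conn = sqlite3.connect("demo.db")
    conn.execute("CREATE TABLE IF NOT EXISTS clientes (rut TEXT, nombre TEXT)")
    conn.executemany(
        "INSERT INTO clientes VALUES (:rut, :nombre)",
        transformar(extraer("clientes.csv")),
    )
    conn.commit()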

¿Qué experiencia necesitas?

Buscamos que cuentes con al menos 2 años de experiencia con alguna herramienta de ETL, por ejemplo: SSIS, Pentaho, Data Factory u otras. También requerimos al menos 2 años de experiencia con desarrollo de ETL en alguno de los siguientes lenguajes: Java, Scala o Python.
Adicionalmente, necesitamos que tengas al menos 2 años de experiencia con motores de bases de datos.
Valoraremos conocimientos relacionados a tecnologías de modelamiento y manejo de datos, conocimientos del área estadística y conocimientos básicos de modelamiento para la puesta en producción de modelos y atributos analíticos.
En el día a día, esperamos que seas una persona analítica, orientada a la mejora continua y con foco en entregar soluciones confiables para clientes internos y externos. Te moverás entre requerimientos de negocio y la implementación técnica, manteniendo claridad en el diseño de la solución y en la evolución de los procesos de ETL.
Requisito adicional: inglés intermedio.

¿Qué podría diferenciarte?

  • Al menos un año de experiencia en la nube (deseable, no excluyente).
  • Al menos un año de experiencia con herramientas de CI/CD a nivel usuario (no desarrollo), por ejemplo: GoCD, Jenkins, Azure DevOps u otra.
  • Experiencia y criterio para apoyar la puesta en producción de modelos y atributos analíticos, considerando buenas prácticas de datos y continuidad operativa.

¿Qué ofrecemos?

Ofrecemos modalidad de trabajo híbrido con horarios flexibles para un balance saludable entre vida personal y laboral, además de días libres adicionales para fomentar el bienestar. Nuestro paquete de compensación integral incluye seguro médico complementario y convenio con gimnasio para promover un estilo de vida saludable. También contamos con beneficios específicos para madres y padres en la organización. Podrás acceder a una plataforma de aprendizaje en línea para desarrollo profesional continuo, junto con programas de reconocimiento que valoran el aporte de cada integrante del equipo, en un entorno diverso, multicultural y orientado al crecimiento de carrera.

Wellness program Equifax Chile offers or subsidies mental and/or physical health activities.
Equity offered This position includes equity compensation (in the form of stock options or another mechanism).
Flexible hours Flexible schedule and freedom for attending family needs or personal errands.
Internal talks Equifax Chile offers space for internal talks or presentations during working hours.
Life insurance Equifax Chile pays or copays life insurance for employees.
Paid sick days Sick leave is compensated (limits might apply).
Bicycle parking You can park your bicycle for free inside the premises.
Digital library Access to digital books or subscriptions.
Health coverage Equifax Chile pays or copays health insurance for employees.
Mobile phone provided Equifax Chile provides a mobile phone for work use.
Company retreats Team-building activities outside the premises.
Computer repairs Equifax Chile covers some computer repair expenses.
Dental insurance Equifax Chile pays or copays dental insurance for employees.
Computer provided Equifax Chile provides a computer for your work.
Education stipend Equifax Chile covers some educational expenses related to the position.
Fitness subsidies Equifax Chile offers stipends for sports or fitness programs.
Performance bonus Extra compensation is offered upon meeting performance goals.
Conference stipend Equifax Chile covers tickets and/or some expenses for conferences related to the position.
Informal dress code No dress code is enforced.
Vacation over legal Equifax Chile gives you paid vacations over the legal minimum.
Vacation on birthday Your birthday counts as an extra day of vacation.
Parental leave over legal Equifax Chile offers paid parental leave over the legal minimum.
Gross salary $3500 - 3700 Full time
Data Scientist
  • Coderslab.io
Python Machine Learning Data Engineering ML Ops
Coderslab.io es una empresa dedicada a transformar y hacer crecer negocios mediante soluciones tecnológicas innovadoras. Formarás parte de una organización en expansión con más de 3,000 colaboradores a nivel global, con oficinas en Latinoamérica y Estados Unidos. Te unirás a equipos diversos que reúnen a parte de los mejores talentos tecnológicos para participar en proyectos desafiantes y de alto impacto. Trabajarás junto a profesionales experimentados y tendrás la oportunidad de aprender y desarrollarte con tecnologías de vanguardia.


Funciones del cargo

  • Diseñar, desarrollar y validar modelos de machine learning, analítica avanzada e inteligencia artificial orientados a casos de uso de negocio.
  • Construir y ejecutar experimentos de ciencia de datos, evaluando métricas de desempeño, sesgo, estabilidad y capacidad de generalización (ver el boceto al final de esta lista).
  • Utilizar Amazon SageMaker para entrenamiento, tuning, versionamiento, despliegue y monitoreo de modelos.
  • Implementar soluciones de IA generativa y agentes utilizando Amazon Bedrock y sus capacidades asociadas.
  • Preparar, explorar y transformar datos de distintas fuentes, asegurando calidad, consistencia y disponibilidad.
  • Desarrollar notebooks, pipelines y procesos reproducibles para entrenamiento y evaluación de modelos.
  • Colaborar con equipos de datos, arquitectura, negocio y desarrollo para traducir requerimientos en soluciones analíticas productivas.
  • Participar en la industrialización de modelos, incluyendo pruebas, monitoreo, observabilidad y mejora continua.
  • Asegurar buenas prácticas de MLOps, gobierno de modelos, seguridad y uso eficiente de recursos cloud.
  • Documentar supuestos, metodología, resultados y limitaciones técnicas de los modelos desarrollados.
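
Como referencia del tipo de experimentos mencionados, un boceto reproducible con scikit-learn que entrena un clasificador sobre datos sintéticos y evalúa métricas de desempeño; en el contexto del cargo, un flujo equivalente se ejecutaría sobre Amazon SageMaker:

    # Boceto reproducible: experimento de clasificación con scikit-learn sobre
    # datos sintéticos, evaluando métricas en un conjunto de prueba.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import f1_score, roc_auc_score
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=5000, n_features=20, random_state=42)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

    modelo = RandomForestClassifier(n_estimators=200, random_state=42)
    modelo.fit(X_tr, y_tr)

    proba = modelo.predict_proba(X_te)[:, 1]
    print("F1:", f1_score(y_te, proba > 0.5))
    print("AUC:", roc_auc_score(y_te, proba))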

Requerimientos del cargo

  • Experiencia sólida y comprobable con Amazon SageMaker.
  • Experiencia en Amazon Bedrock para soluciones de IA generativa.
  • Conocimiento práctico del ecosistema AWS: S3, Lambda, API Gateway, RDS, Glue, Athena, CloudWatch e IAM.
  • Mínimo 3 años en roles de Data Scientist, Machine Learning Engineer o posiciones afines.
  • Experiencia en desarrollo y despliegue de modelos en ambientes productivos sobre AWS.
  • Dominio de Python y librerías orientadas a ciencia de datos y machine learning.
  • Conocimiento de feature engineering, experimentación, evaluación de modelos y monitoreo post-despliegue.
  • Manejo de datos estructurados y, deseable, no estructurados.
  • Conocimiento de principios de MLOps, CI/CD y buenas prácticas de versionamiento y reproducibilidad.
  • Título profesional en Ingeniería Civil en Computación, Ingeniería Informática, Ingeniería Matemática, Estadística, Ciencia de Datos o carrera afín.

Opcionales

AWS Certified Machine Learning – Specialty
AWS Certified Data Engineer – Associate
AWS Certified Solutions Architect – Associate
AWS Certified Developer – Associate
AWS Certified Cloud Practitioner

Condiciones

Remoto
Fulltime

Gross salary $2000 - 2200 Full time
Data Architect
  • Interfell
Python SQL Spark CI/CD
Interfell conecta empresas con el talento IT de LATAM, gestionando procesos de Staffing y Recruiting para impulsar el trabajo remoto y la transformación digital. Nuestro objetivo es potenciar la inclusión y el equilibrio vida-trabajo, brindando una experiencia de contratación integral y de alta calidad. Esta posición forma parte de un equipo enfocado en construir las capacidades de datos que sostienen el crecimiento de nuestras operaciones en la región.
Como Data Architect, serás responsable de diseñar y definir la arquitectura del Data Lake multitenant en AWS, garantizando escalabilidad, seguridad, gobernanza y capacidad de crecimiento para integrar múltiples fuentes de datos.
Este rol es clave para establecer estándares técnicos que permitan la integración consistente de nuevas fuentes de datos, asegurando calidad, trazabilidad y eficiencia en el procesamiento.
Contrato por 2 meses


Job functions


Diseño de arquitectura multitenant

  • Diseñar la arquitectura del Data Lake en AWS considerando múltiples clientes o dominios de datos
  • Definir esquemas de particionamiento, namespaces y control de acceso por tenant
  • Establecer las capas del Data Lake (RAW, PROCESSED, CURATED)
  • Diseñar estrategias de organización y particionamiento de datos (ver el boceto a continuación)
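
El boceto siguiente ilustra una convención hipotética de rutas S3 por capa y tenant, con particionamiento por fecha; los nombres de bucket y dominios son supuestos:

    # Boceto: convención hipotética de rutas S3 para un Data Lake multitenant,
    # con capas RAW/PROCESSED/CURATED y particionamiento por fecha.
    from datetime import date

    def ruta_s3(tenant: str, capa: str, dominio: str, fecha: date) -> str:
        assert capa in {"raw", "processed", "curated"}
        return (f"s3://datalake-ejemplo/{capa}/tenant={tenant}/"
                f"dominio={dominio}/anio={fecha.year}/"
                f"mes={fecha.month:02d}/dia={fecha.day:02d}/")

    print(ruta_s3("cliente_a", "raw", "ventas", date(2024, 5, 1)))
    # s3://datalake-ejemplo/raw/tenant=cliente_a/dominio=ventas/anio=2024/mes=05/dia=01/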

Definición de estándares CI/CD

  • Diseñar el framework de CI/CD para pipelines de datos
  • Definir procesos de despliegue automatizado
  • Establecer la estructura de repositorios y versionamiento

Estrategia de ingestión de datos

  • Definir estrategias de ingestión para APIs, bases de datos y streaming
  • Diseñar patrones de integración usando AWS Glue, DMS y Kafka

Gobernanza y calidad de datos

  • Establecer estándares de calidad (evitar nulos, duplicados, asegurar llaves primarias)
  • Definir políticas de catalogación, metadata y control de acceso

Optimización y escalabilidad

  • Diseñar la arquitectura considerando crecimiento en volumen y fuentes de datos
  • Definir estrategias de optimización de costos en AWS

Acompañamiento técnico

  • Guiar técnicamente a Data Engineers y DevOps durante la implementación
  • Validar pipelines y decisiones de arquitectura

Qualifications and requirements


Formación y experiencia

  • Ingeniería de Sistemas, Informática o carreras afines
  • +4 años diseñando arquitecturas de datos
  • Experiencia en arquitecturas Data Lake, Medallion y Multitenant
  • Experiencia definiendo reglas de transformación entre capas
  • Experiencia estableciendo estándares de ingestión y transformación

Habilidades técnicas

  • AWS (S3, Glue, DMS, Kafka, IAM)
  • Modelado de datos
  • Spark, SQL y Python
  • Terraform y Databricks
  • Arquitecturas multitenant
  • CI/CD pipelines
  • Gobernanza de datos
  • Optimización de costos en AWS

Habilidades blandas

  • Capacidad de diseño estratégico
  • Comunicación con stakeholders técnicos y de negocio
  • Pensamiento analítico

Conditions

Oportunidad de crecimiento con un equipo multinivel
Vacaciones y feriados
Flexibilidad y autonomía
Pago en USD
Trabajo remoto desde Latam

Fully remote You can work from anywhere in the world.
$$$ Full time
Ingeniero/a de Datos
  • WiTi
  • Santiago (Hybrid)
Python SQL ETL AWS

WiTi conecta talento tecnológico con proyectos de alto impacto en Latinoamérica. Nuestro equipo se enfoca en la integración de sistemas, software a medida y desarrollos innovadores para dispositivos móviles, con énfasis en resolver problemas complejos a través de soluciones innovadoras.

Buscamos un/a Ingeniero/a de Datos para integrarse a un proyecto estratégico en uno de los grupos de distribución automotriz más importantes del país, con operaciones a nivel nacional y una infraestructura de datos en plena etapa de transformación y modernización.

Serás responsable de diseñar, implementar y documentar procesos de carga, transformación y migración de grandes volúmenes de datos en un entorno AWS. Trabajarás en un contexto enterprise donde la calidad, la trazabilidad y la reproducibilidad de los resultados son fundamentales, colaborando con equipos técnicos y de negocio para asegurar que los datos sean confiables, escalables y mantenibles.


Responsabilidades Clave

  • Diseñar un enfoque repetible para la carga de grandes volúmenes de datos, estandarizando reglas y patrones de conversión.
  • Participar en automatizaciones de procesos mediante scripts, reglas de validación, templates y pipelines.
  • Implementar y mantener procesos ETL/ELT en AWS, integrándose con el stack del cliente en fuentes, cargas, transformaciones y monitoreo.
  • Documentar reglas de negocio, decisiones técnicas y casos borde para asegurar que los procesos sean mantenibles y escalables.

Requisitos Excluyentes

  • SQL avanzado: PL/SQL, queries complejas, optimización, joins pesados, window functions, CTEs y lectura de planes de ejecución.
  • Experiencia con Amazon Redshift: escritura de SQL, performance y buenas prácticas.
  • Conocimiento del mundo ETL/ELT en AWS (las herramientas específicas pueden variar según el stack).
  • Experiencia trabajando en contextos enterprise con foco en calidad, trazabilidad y resultados reproducibles.
  • Disponibilidad para asistir presencialmente 3 o 4 veces por semana a oficinas ubicadas en la Panamericana, a la altura de Lampa.

Requisitos Deseables

  • Experiencia en automatización de migraciones: reglas de conversión, validaciones automáticas y pipelines de QA.
  • Conocimientos de Python u otro lenguaje de scripting para apoyar automatización y controles.
  • Conocimientos de AWS QuickSight.
  • Experiencia con gobierno de datos y buenas prácticas: naming conventions, documentación y data quality checks.

Beneficios

En WiTi promovemos un ambiente colaborativo donde la cultura del aprendizaje es parte fundamental. Entre nuestros beneficios están:

  • Plan de carrera personalizado para el desarrollo profesional.
  • Certificaciones para continuar creciendo en tu carrera.
  • Cursos de idiomas, apoyando el desarrollo personal y profesional.

Digital library Access to digital books or subscriptions.
Computer provided WiTi provides a computer for your work.
Personal coaching WiTi offers counseling or personal coaching to employees.
Informal dress code No dress code is enforced.
$$$ Full time
Data Engineer
  • WiTi
  • Santiago (Hybrid)
Python SQL ETL Automation

WiTi conecta talento tecnológico con proyectos de alto impacto en Latinoamérica. Nuestro equipo se enfoca en la integración de sistemas, software a medida y desarrollos innovadores para dispositivos móviles, con énfasis en resolver problemas complejos a través de soluciones innovadoras.

Buscamos un/a Ingeniero/a de Datos para integrarse a un proyecto estratégico en uno de los grupos automotrices líderes del país, con presencia nacional en la comercialización de vehículos livianos y comerciales, y una infraestructura de datos en plena etapa de modernización y escalamiento.

Serás responsable de diseñar, implementar y documentar procesos de carga, transformación y migración de grandes volúmenes de datos en un entorno AWS.

Trabajarás en un contexto enterprise donde la calidad, la trazabilidad y la reproducibilidad de los resultados son fundamentales, colaborando con equipos técnicos y de negocio para asegurar que los datos sean confiables, escalables y mantenibles.


Responsabilidades Clave

  • Diseñar un enfoque repetible para la carga de grandes volúmenes de datos, estandarizando reglas y patrones de conversión.
  • Participar en automatizaciones de procesos mediante scripts, reglas de validación, templates y pipelines.
  • Implementar y mantener procesos ETL/ELT en AWS, integrándose con el stack del cliente en fuentes, cargas, transformaciones y monitoreo.
  • Documentar reglas de negocio, decisiones técnicas y casos borde para asegurar que los procesos sean mantenibles y escalables.

Requisitos Excluyentes

  • SQL avanzado: PL/SQL, queries complejas, optimización, joins pesados, window functions, CTEs y lectura de planes de ejecución.
  • Experiencia con Amazon Redshift: escritura de SQL, performance y buenas prácticas.
  • Conocimiento del mundo ETL/ELT en AWS (las herramientas específicas pueden variar según el stack).
  • Experiencia trabajando en contextos enterprise con foco en calidad, trazabilidad y resultados reproducibles.
  • Disponibilidad para asistir presencialmente 3 o 4 veces por semana a oficinas ubicadas en la Panamericana, a la altura de Lampa.

Requisitos Deseables

  • Experiencia en automatización de migraciones: reglas de conversión, validaciones automáticas y pipelines de QA.
  • Conocimientos de Python u otro lenguaje de scripting para apoyar automatización y controles.
  • Conocimientos de AWS QuickSight.
  • Experiencia con gobierno de datos y buenas prácticas: naming conventions, documentación y data quality checks.

Beneficios

En WiTi promovemos un ambiente colaborativo donde la cultura del aprendizaje es parte fundamental. Entre nuestros beneficios están:

  • Plan de carrera personalizado para el desarrollo profesional.
  • Certificaciones para continuar creciendo en tu carrera.
  • Cursos de idiomas, apoyando el desarrollo personal y profesional.

Digital library Access to digital books or subscriptions.
Computer provided WiTi provides a computer for your work.
Personal coaching WiTi offers counseling or personal coaching to employees.
Informal dress code No dress code is enforced.
$$$ Full time
Ingeniero de Datos
  • Factor IT
  • Santiago (Hybrid)
Python SQL BigQuery Docker
En Factor IT trabajamos para impulsar la transformación digital en grandes empresas de la región, con foco en Data & Analytics, automatización e inteligencia artificial. Dentro de nuestros proyectos, participamos en iniciativas que construyen y evolucionan plataformas de datos sobre Google Cloud (GCP), integrando servicios, pipelines y automatización para habilitar analítica avanzada y toma de decisiones basada en datos. Te unirás a un equipo que diseña soluciones robustas y escalables, con tecnologías modernas, alto expertise técnico y una cultura de colaboración y aprendizaje continuo.


Ingeniero de Datos

Como Ingeniero de Datos, nuestro objetivo es diseñar, construir y mantener pipelines de datos confiables y escalables en entornos de GCP, asegurando que los datos fluyan correctamente desde las fuentes hasta los modelos y capacidades analíticas.
Entre tus responsabilidades:
  • Desarrollar y optimizar consultas SQL avanzadas (PostgreSQL, MySQL).
  • Implementar procesos ETL/ELT usando Airflow, dbt y servicios de orquestación/ingesta como Dataflow y Pub/Sub (ver el boceto más abajo).
  • Programar en Python para automatizar transformaciones e integraciones.
  • Trabajar con servicios y prácticas de GCP para construir soluciones mantenibles.
  • Desplegar y gestionar componentes mediante Docker y Kubernetes, garantizando robustez y escalabilidad.
Nos enfocamos en colaborar estrechamente con el equipo para entender requerimientos del negocio, proponer mejoras y asegurar calidad, eficiencia y confiabilidad en todo el ciclo de vida de la plataforma de datos.
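
A modo de referencia de la orquestación mencionada, un DAG mínimo de Airflow con dos tareas encadenadas; los nombres de DAG, tareas y funciones son hipotéticos:

    # Boceto de un DAG mínimo de Airflow con dos tareas encadenadas.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extraer():
        print("extrayendo desde la fuente...")

    def transformar():
        print("transformando y cargando al destino...")

    with DAG(
        dag_id="etl_ejemplo",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        t_extraer = PythonOperator(task_id="extraer", python_callable=extraer)
        t_transformar = PythonOperator(task_id="transformar", python_callable=transformar)
        t_extraer >> t_transformar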

Requisitos excluyentes

Buscamos un Ingeniero de Datos con experiencia práctica para integrarse a proyectos regionales y con impacto real en la transformación tecnológica, especialmente en el sector financiero.
Requisitos excluyentes:
  • SQL avanzado (PostgreSQL, MySQL).
  • BigQuery.
  • ETL/ELT: Airflow, dbt, Dataflow, Pub/Sub.
  • Python.
  • Experiencia en GCP.
  • Docker y Kubernetes.
Además, valoramos:
  • Capacidad para analizar problemas, depurar y mejorar pipelines existentes.
  • Orientación a la calidad y a la confiabilidad de los datos.
  • Buena comunicación y trabajo colaborativo para alinear soluciones con necesidades del negocio.
  • Mentalidad de aprendizaje continuo y adaptación a tecnologías emergentes.
Nos importa que seas proactivo, que puedas proponer mejoras y que mantengas un enfoque responsable en la operación y evolución de la plataforma de datos.

Deseable

Sumará puntos si cuentas con:
  • Streaming (Kafka, Flink).
  • Java o Scala.
  • Experiencia con herramientas BI (Looker, Power BI, Tableau).
Estas habilidades nos ayudan a ampliar la capacidad de análisis, habilitar casos en tiempo real y facilitar la integración con productos y consumo de datos.

Beneficios

Ofrecemos una modalidad de trabajo híbrida desde Santiago, Chile, con flexibilidad horaria para un balance saludable entre vida profesional y personal.
Vas a formar parte de un ambiente colaborativo, dinámico y con tecnologías de última generación que impulsan el crecimiento profesional y la innovación tecnológica.
Contarás con un paquete salarial competitivo, acorde a la experiencia y perfil, e integrado a una cultura inclusiva que valora la diversidad, creatividad y el trabajo en equipo.
Participarás en proyectos desafiantes con impacto real en la transformación tecnológica de la región y en el sector financiero, dentro de una organización que promueve la innovación y el desarrollo profesional continuo.

Gross salary $3500 - 3700 Full time
  • Coderslab.io
CI/CD Infrastructure as Code AWS Lambda API Development

Coderslab.io es una empresa global líder en soluciones tecnológicas con más de 3,000 colaboradores en todo el mundo, incluyendo oficinas en América Latina y Estados Unidos. Formarás parte de equipos diversos compuestos por talento de alto desempeño para proyectos desafiantes de automatización y transformación digital. Colaborarás con profesionales experimentados y trabajarás con tecnologías de vanguardia para impulsar la toma de decisiones y la eficiencia operativa a nivel corporativo.


Funciones del cargo

  • Diseñar, desarrollar y mantener soluciones de ingeniería de datos sobre AWS.
  • Implementar componentes y procesos utilizando AWS Lambda, Amazon S3, Amazon API Gateway y Amazon RDS (ver el boceto al final de esta lista).
  • Diseñar y mantener infraestructura como código mediante AWS CloudFormation.
  • Gestionar despliegues automatizados y pipelines CI/CD utilizando GitHub Actions integrados con AWS.
  • Asegurar buenas prácticas de versionamiento, testing, observabilidad y despliegue continuo.
  • Monitorear, optimizar y resolver incidentes en componentes de datos desplegados en ambientes productivos.
  • Colaborar con equipos de arquitectura, desarrollo y negocio para traducir requerimientos funcionales en soluciones técnicas.
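
Como ilustración de los componentes descritos, un boceto mínimo de una función Lambda en Python expuesta tras API Gateway que lee un objeto de S3; el bucket y la llave por defecto son supuestos:

    # Boceto de un handler de AWS Lambda en Python detrás de API Gateway,
    # que lee un objeto JSON desde S3. Bucket y llave por defecto son supuestos.
    import json
    import boto3

    s3 = boto3.client("s3")

    def lambda_handler(event, context):
        params = event.get("queryStringParameters") or {}
        llave = params.get("key", "datos/ejemplo.json")
        objeto = s3.get_object(Bucket="bucket-ejemplo", Key=llave)
        contenido = json.loads(objeto["Body"].read())
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"registros": len(contenido)}),
        }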

Requerimientos del cargo

  • Experiencia sólida con AWS Lambda, Amazon S3, AWS CloudFormation, Amazon API Gateway y Amazon RDS.
  • Conocimiento en integración y automatización de despliegues con GitHub Actions hacia AWS.
  • Experiencia aplicando prácticas de CI/CD e infraestructura como código (IaC).
  • Conocimiento de seguridad, permisos y buenas prácticas operativas en AWS.
  • Capacidad para desarrollar e integrar APIs y componentes de datos en la nube.
  • Mínimo 3 años de experiencia en ingeniería de datos, desarrollo cloud o roles equivalentes.
  • Experiencia comprobable trabajando en ambientes AWS productivos.
  • Título profesional en Ingeniería Informática, Ingeniería Civil en Computación o carrera afín.

Opcionales

Certificaciones deseables

  • AWS Certified Cloud Practitioner
  • AWS Certified Developer – Associate
  • AWS Certified Solutions Architect – Associate
  • AWS Certified Data Engineer – Associate

Condiciones

Remoto Fulltime

$$$ Full time
Arquitecto de Datos
  • Factor IT
  • Santiago (Hybrid)
SQL BigQuery CI/CD Cloud Architecture
En Factor IT impulsamos la transformación digital con foco en Data & Analytics, IA, automatización y consultoría estratégica. Buscamos un/una Arquitecto(a) de Datos para integrarse a proyectos regionales con impacto real en grandes empresas, incluyendo el sector financiero. En este rol, contribuiremos al diseño y evolución de plataformas de datos sobre Google Cloud, asegurando escalabilidad, confiabilidad y gobernanza. El objetivo es habilitar analítica avanzada y consumo eficiente de datos para distintos equipos de negocio, integrando prácticas modernas de modelado, orquestación y gobierno a lo largo del ciclo de vida de la información.


Funciones

Como Arquitecto(a) de Datos en Factor IT, nuestro objetivo es diseñar y estandarizar soluciones de datos en Google Cloud que permitan transformar datos en decisiones confiables y oportunas. Sus principales responsabilidades serán:
  • Diseñar la arquitectura de datos end-to-end considerando ingestión, almacenamiento, procesamiento, modelado y consumo.
  • Desarrollar y optimizar pipelines usando BigQuery y orquestadores como Airflow, además de automatizaciones con Dataflow cuando aplique.
  • Implementar y mantener modelado de datos (por ejemplo, capas analíticas y/o modelos dimensionales) asegurando performance y consistencia semántica.
  • Crear y mantener automatizaciones con dbt, definiendo transformaciones, pruebas y documentación de datos.
  • Gestionar la gobernanza de datos: estándares, accesos, calidad, linaje y buenas prácticas para el uso responsable de la información (ver el boceto al final de esta lista).
  • Promover patrones de ingeniería (CI/CD, versionado, pruebas y monitoreo) para asegurar estabilidad operativa en ambientes productivos.
  • Coordinar con equipos de negocio y técnicos para traducir requerimientos a soluciones escalables y medibles.
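
A modo de ejemplo de un control de calidad sobre BigQuery como el mencionado en la lista, un boceto con la librería google-cloud-bigquery; el proyecto, dataset, tabla y columna son supuestos:

    # Boceto: control simple de calidad sobre una tabla de BigQuery.
    # Proyecto, dataset, tabla y columna son supuestos ilustrativos.
    from google.cloud import bigquery

    client = bigquery.Client(project="proyecto-ejemplo")

    sql = """
        SELECT COUNT(*) AS filas,
               COUNTIF(cliente_id IS NULL) AS llaves_nulas
        FROM `proyecto-ejemplo.analitica.ventas`
    """
    fila = list(client.query(sql).result())[0]
    if fila.llaves_nulas > 0:
        raise ValueError(f"{fila.llaves_nulas} llaves nulas de {fila.filas} filas")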

Requisitos y experiencia

Buscamos un/una Arquitecto(a) de Datos con experiencia sólida para liderar el diseño y la evolución de soluciones modernas de datos en entornos cloud. Necesitamos que tengas un nivel avanzado de SQL y que puedas aplicar ese conocimiento para optimizar rendimiento, asegurar calidad y resolver problemas complejos.
Requisitos excluyentes
  • SQL avanzado.
  • BigQuery.
  • Airflow, dbt y Dataflow.
  • Modelado de datos.
  • Gobierno de datos.
Experiencia esperada
  • Participación en la construcción y/o mejora de plataformas de datos orientadas a analítica y toma de decisiones.
  • Capacidad para definir estándares y guías de ingeniería para equipos que consumen y desarrollan sobre la plataforma.
  • Experiencia trabajando con prácticas de calidad de datos, estandarización y controles de acceso.
Competencias y habilidades clave
  • Enfoque analítico y mentalidad de mejora continua.
  • Comunicación clara para alinear stakeholders técnicos y de negocio.
  • Proactividad para anticipar riesgos (performance, costos, calidad, disponibilidad) y proponer mitigaciones.
  • Orientación al trabajo colaborativo y a la transferencia de conocimiento dentro del equipo.
En Factor IT valoramos una cultura basada en la conversación y el entendimiento profundo de los requerimientos del negocio. Por eso, buscamos a alguien que pueda traducir necesidades reales en soluciones técnicas robustas, escalables y gobernables.

Deseable

  • Certificación GCP Data Engineer.
  • Experiencia adicional con diseño de arquitecturas de datos escalables y optimización de costos en BigQuery.
  • Conocimiento en patrones de gobierno de datos (catálogo/metadata, linaje, políticas de acceso) y prácticas de calidad medibles.
  • Experiencia liderando iniciativas end-to-end (desde la definición de arquitectura hasta la puesta en producción y el soporte evolutivo).

Beneficios

Ofrecemos modalidad de trabajo híbrida desde Santiago, Chile, con flexibilidad horaria para un balance saludable entre la vida profesional y personal.
Además, en Factor IT contamos con un ambiente colaborativo, dinámico y con tecnologías de última generación que impulsan el crecimiento profesional, la innovación y el aprendizaje continuo.
Tu paquete salarial será competitivo y acorde a la experiencia y perfil, sumado a una cultura inclusiva que valora la diversidad, creatividad y el trabajo en equipo. Trabajarás en proyectos desafiantes con impacto real en la transformación tecnológica de la región y en el sector financiero.
Si te interesa construir soluciones de datos con alto impacto, únete a Factor IT y sé parte de un equipo que transforma el futuro de la tecnología.

Gross salary $1900 - 2200 Full time
Data Engineer
  • Houm
  • Santiago (Hybrid)
Python Git SQL Kubernetes

La persona será dueña de un ecosistema de datos maduro y bien documentado, con más de 300 DAGs en producción, y tendrá por delante una migración estratégica de Airflow 2.x a Airflow 3.x sobre Kubernetes, además de la reconstrucción de flujos legacy que presentan problemas de escalabilidad y mantenimiento.


Funciones del cargo

  • Ownership y mantenimiento del stack de datos completo: Apache Airflow, dbt, AWS Redshift y S3.
  • Monitoreo proactivo y resolución de incidentes en pipelines de datos.
  • Disponibilización de datos para inteligencia de negocios y otros consumidores internos.
  • Liderar la migración de Airflow 2.x en VM única hacia un cluster Airflow 3.x en Kubernetes.
  • Reconstrucción y modernización de flujos legacy.
  • Colaboración activa con operaciones, producto, finanzas y otras áreas como proveedor interno de datos.
  • Coordinación con DevOps para infraestructura y configuración de repositorios.
  • Apoyo en proyectos transversales que requieran soluciones de datos.

Requerimientos del cargo

Requisitos

  • Título de Ingeniería Civil Industrial o Informática.
  • +2 años de experiencia, idealmente en startups.
  • Experiencia sólida con Apache Airflow (desarrollo de DAGs, debugging, operación en producción).
  • Experiencia sólida con dbt (modelos, tests, documentación, resolución de incidentes).
  • Manejo fluido de Python y SQL.
  • Experiencia con Git y prácticas de trabajo en bases de código grandes.
  • Experiencia con al menos una nube pública: AWS, GCP o Azure.
  • Capacidad de trabajar de forma autónoma tomando decisiones de arquitectura.
  • Excelente comunicación con stakeholders no técnicos.
  • Perfil proactivo: capaz de gestionar y priorizar su propia carga de trabajo, levantar la mano y buscar a las personas correctas.
  • Entusiasmo genuino para colaborar con equipos diversos.

Opcionales

Nice to have

  • Experiencia previa con AWS Redshift y servicios del ecosistema AWS (S3, IAM, etc.).
  • Experiencia con Kubernetes o migración de servicios hacia orquestación en contenedores.
  • Experiencia con MLOps o despliegue de modelos de machine learning en producción.
  • Conocimiento del dominio real estate o proptech.
  • Experiencia con herramientas de observabilidad y monitoreo (Datadog, CloudWatch, etc.).

Condiciones

Lo que ofrecemos

  • More holidays to chill! (días extra de vacaciones).
  • Seguro complementario de salud.
  • Beneficios Caja Los Andes.
  • Tarde libre por cumpleaños (tuyo e hijo/a).
  • 5 días extra de licencia por paternidad.
  • Modalidad híbrida (2 días presencial + 3 remoto).

Health coverage Houm pays or copays health insurance for employees.
Computer provided Houm provides a computer for your work.
Vacation over legal Houm gives you paid vacations over the legal minimum.
Beverages and snacks Houm offers beverages and snacks for free consumption.
$$$ Full time
Data Engineer Junior/Semi Senior
  • Lisit
  • Santiago (Hybrid)
Python Git SQL Docker
En Lisit creamos, desarrollamos e implementamos servicios de software enfocados en automatización y optimización, manteniendo innovación y pasión por los desafíos. Trabajamos con un acompañamiento consultivo para lograr transformaciones exitosas mediante una estrategia integral de implementación. Para el área ASAP, buscamos un/una Data Engineer Junior o Semi Senior que apoye la construcción y evolución de pipelines de datos (ETL/ELT) y modelos asociados a necesidades de negocio. El objetivo es habilitar la generación confiable y escalable de datos para impulsar decisiones y automatizar flujos críticos para el negocio, con calidad, trazabilidad y buenas prácticas.


Responsabilidades

En el área ASAP, nos vas a ayudar a:
  • Diseñar, desarrollar y mantener pipelines ETL/ELT que permitan cargar, transformar y preparar datos para consumo analítico y/o operacional.
  • Implementar modelos de datos alineados a requerimientos del negocio, cuidando claridad, mantenibilidad y consistencia.
  • Escribir y optimizar consultas SQL y código Python para automatizar procesos de transformación y extracción.
  • Trabajar con repositorios y versionamiento usando Git, asegurando trazabilidad de cambios y buenas prácticas de desarrollo.
  • Apoyar la integración y despliegue de componentes en plataformas Cloud, idealmente Google Cloud, siguiendo criterios de eficiencia y rendimiento.
  • Colaborar con el equipo para documentar el flujo end-to-end, gestionar dependencias y asegurar calidad de datos en cada etapa.
Buscamos que los datos estén listos a tiempo, con menos retrabajo y con una base sólida para evolucionar modelos y automatizaciones.

Requisitos

Buscamos un/una Data Engineer Junior o Semi Senior para reforzar el área ASAP, debido a una urgencia crítica del negocio.
Requerimos:
  • Conocimiento en una plataforma Cloud (idealmente Google Cloud).
  • Python y SQL en nivel intermedio a avanzado (excluyente).
  • Experiencia en generación de ETL/ELT y modelos de negocio.
  • Conocimientos de Git.
Te va a ir muy bien si:
  • Te gusta trabajar con objetivos claros y priorización (urgencia crítica implica foco y ejecución).
  • Eres ordenado/a con la calidad del dato, la documentación y la reproducibilidad de procesos.
  • Colaboras activamente: levantando dudas temprano, compartiendo avances y proponiendo mejoras.
  • Enfrentas problemas con mentalidad analítica, cuidando performance, validaciones y estabilidad.

Deseable

  • Conocimientos en Docker.
  • Composer (Airflow).
  • Cloud Run y Cloud Run Functions.
  • Terraform.
  • Dataform.
Estos conocimientos suman porque facilitan automatización, despliegues consistentes e infraestructura como código.

Beneficios

Modalidad mayormente remota, según la organización del equipo. Cuando el proyecto lo requiera, la opción preferente es trabajar desde Santiago en modalidad 3x2 (tres días presenciales y dos remotos); solo en casos excepcionales se mantiene una modalidad totalmente remota. Buscamos un esquema de trabajo que permita foco y continuidad para llegar con calidad a los objetivos del área ASAP.

Si te interesa aportar con pipelines de datos, Python, SQL y buenas prácticas de ingeniería, escríbenos para conversar.

$$$ Full time
Ingeniero de Datos
  • BICE VIDA
  • Santiago (Hybrid)
Python SQL ETL Data lake
En BICE VIDA somos líderes en el rubro de las aseguradoras y trabajamos para satisfacer las necesidades de seguridad, prosperidad y protección de nuestros clientes. Estamos impulsando una fuerte transformación digital para mantenernos a la vanguardia, entregar soluciones world-class y responder a los constantes cambios del mercado.


🎯¿Qué buscamos?

En BICE Vida nos encontramos en búsqueda de un Ingeniero de Datos Junior para desempeñarse en el COE de Datos, perteneciente a la Gerencia de Planificación y Gobierno de Datos.
🧭 El objetivo del cargo es apoyar la construcción, mantenimiento y mejora de los procesos que permiten que los datos lleguen limpios, ordenados y disponibles para que la organización pueda analizarlos y tomar buenas decisiones.
💡Tendrás la oportunidad de contribuir aprendiendo y aplicando buenas prácticas, colaborando con ingenieros senior y equipos de negocio, y asegurando que los datos fluyan de forma segura, confiable y eficiente dentro de la plataforma de datos📊.

📋 En este rol deberás:
  • Participar en el proceso de levantamiento de requerimientos con las áreas de negocio, apoyando a las áreas usuarias en el entendimiento de sus necesidades desde un punto de vista funcional.
  • Apoyar la incorporación de nuevas fuentes de datos al repositorio centralizado de información (Data Lake) de la compañía.
  • Comprender conceptos fundamentales de ETL/ELT.
  • Validación básica de datos.
  • Identificar errores en ejecuciones o datos.

🧠 What do we need?

  • Academic background: a degree in Computer Science Engineering, Industrial Engineering, or a related field.
  • At least 1 year of experience in data management or software solution development.
  • Experience working on a cloud platform (AWS, GCP, Azure).
  • Knowledge of data query tools such as SQL and Python (intermediate level).
  • Participation in data-related projects, regardless of the technology used.

✨ You will earn extra points if you have:

  • AWS
  • Terraform
  • Spark/Scala
  • Tableau
  • GitHub
  • RStudio
  • Agile methodologies (Scrum, Kanban)

What is it like to work at BICE Vida? 🤝💼

  • We offer the best coverage in the industry: complementary health, dental, and life insurance, plus catastrophic coverage (for you and your legal dependents). 🏅
  • Bonuses in UF depending on the season and your tenure at the company. 🎁
  • Early finish on Fridays, leaving by 14:00, helping you balance your personal and work life. 🙌
  • Semi-formal dress code, because we value comfort. 👟
  • Free lunch at the corporate cafeteria, with a non-fit bar on Fridays. 🍟
  • Constant training to empower diverse teams focused on better results. Our headquarters is in the heart of Providencia, steps from the Pedro de Valdivia metro station. 🚇
  • You can come by bicycle and we will look after it: we have bicycle racks at headquarters. 🚲

Wellness program BICE VIDA offers or subsidies mental and/or physical health activities.
Accessible An infrastructure adequate for people with special mobility needs.
Life insurance BICE VIDA pays or copays life insurance for employees.
Meals provided BICE VIDA provides free lunch and/or other kinds of meals.
Bicycle parking You can park your bicycle for free inside the premises.
Digital library Access to digital books or subscriptions.
Health coverage BICE VIDA pays or copays health insurance for employees.
Dental insurance BICE VIDA pays or copays dental insurance for employees.
Computer provided BICE VIDA provides a computer for your work.
$$$ Full time
Data Engineer GCP
  • TCIT
  • Santiago (Hybrid)
Python BigQuery ETL Google Cloud Platform

At TCIT, we are leaders in cloud software development with more than 9 years of experience. We work on projects that digitally transform organizations, from agricultural management and online auction systems to solutions for courts and certification monitoring for mining. We take part in international initiatives, collaborating with technology partners in Canada and other markets. Our team drives quality, sustainable solutions with a focus on social impact. We are looking to grow our team with talent eager to develop and leave a mark on high-impact cloud projects.

Main duties

You will be responsible for delivering efficient, robust, and scalable solutions on GCP. Your role will involve:
  • Designing, building, and maintaining scalable, high-performance data processing systems on GCP.
  • Developing and maintaining data pipelines for the extraction, transformation, and loading (ETL) of data from diverse sources on GCP.
  • Implementing solutions for the efficient storage and processing of large data volumes using GCP tools and services.
  • Collaborating with multidisciplinary teams to understand requirements and design suitable solutions in the GCP context.
  • Optimizing the performance of data processing systems and safeguarding data integrity on GCP.

Requirements and profile

We are looking for a Data Engineer proficient in Python with demonstrable experience working with cloud solutions. The ideal candidate combines technical skills with communication and teamwork to deliver high-performance data solutions.

Technical requirements:

  • 1 to 4 years of experience in Data Engineering and GCP (mandatory).
  • Experience developing data pipelines with Python (pandas, pyarrow, etc.).
  • Experience with Google Cloud Platform (GCP) and data-related services (ETL/ELT, Dataflow, Glue, BigQuery, Redshift, Data Lakes, etc.).
  • Experience with process orchestration (Airflow, Prefect, or similar).
  • Good data security and governance practices, and the ability to document solutions.

Soft skills:

  • Clear communication and the ability to work in cross-functional teams.
  • Proactivity, results orientation, and the ability to prioritize in dynamic environments.
  • Resourcefulness in problem solving and continuous learning of new technologies.

Desirable

Experience with cloud data management tools (BigQuery, Snowflake, Redshift, Dataflow, Dataproc).

Knowledge of security and compliance in data environments; experience in projects with social impact or sector regulations.

Ability to write technical documentation in Spanish and English, and demonstrated capacity to mentor colleagues.

Conditions

Hybrid work arrangement.
The offices are located in the Las Condes district, near the Manquehue metro station.

Computer provided TCIT provides a computer for your work.
Beverages and snacks TCIT offers beverages and snacks for free consumption.
$$$ Full time
Data Engineer Databricks
  • 42Labs
  • Santiago (Hybrid)
Python SQL Scala Databricks

At 42Labs we don't just build technology: we craft solutions where the technical and the human go hand in hand. We work on initiatives that transform businesses across several verticals (finance, logistics, and education), building data platforms that enable better decisions, automate processes, and power reliable analytics. As a Data Engineer focused on Databricks, you will be part of a team that designs and maintains robust, scalable, quality-oriented pipelines, ensuring data arrives on time, with integrity and traceability. Our goal is a data platform that supports real use cases, from ingestion and processing through modeling and consumption, promoting good practices, collaboration, and continuous improvement within a transparent culture without rigid hierarchies.

Duties

In the Data Engineer with Databricks role, our focus will be building and operating end-to-end data pipelines, ensuring performance, quality, and maintainability.
  • Design, develop, and maintain data ingestion, processing, and transformation pipelines in Databricks.
  • Implement data models and organization strategies (e.g., layers and conventions) to support analytics and reporting.
  • Optimize performance (jobs, partitions, storage formats, and configuration) for cost efficiency and adequate response times.
  • Ensure data quality through validations, consistency checks, and error handling/recovery.
  • Provide end-to-end traceability: documentation, lineage, and good versioning and deployment practices.
  • Collaborate with Software Engineering and stakeholders to understand requirements, prioritize, and turn them into measurable solutions.
  • Monitor processes and respond to incidents: review logs, metrics, and alerts, and propose preventive improvements.
We work autonomously in a hybrid scheme, relying on constant feedback and a collaborative culture where quality and impact on people matter.

Requirements

We are looking for a Data Engineer with hands-on experience in the data ecosystem, focused on building reliable, scalable, easy-to-maintain solutions. We value the combination of technical judgment, clear communication, and continuous improvement.
What we need from you
  • Experience with Databricks and data pipelines (ingestion, transformation, and orchestration).
  • Solid knowledge of distributed processing and data formats for performance optimization.
  • Good data engineering practices: version control, documentation, tests/validations, and error handling.
  • Experience implementing layers/models for analytics (e.g., through approaches such as medallion or similar) and ensuring consistency.
  • Ability to debug and improve job performance (reads/writes, partitions, configuration, and tuning).
  • Knowledge of SQL and at least one development language (commonly Python/Scala, depending on the stack).
  • A quality mindset: validate data, detect anomalies, and propose fixes with a preventive approach.
  • Effective communication: explain technical decisions, raise risks early, and align expectations with non-technical teams.
How we like to work
  • Genuine collaboration and transparency: we care about how you build with the team, not just the outcome.
  • Responsible autonomy: you propose improvements, follow up, and deliver with a focus on impact.
  • Constant learning: you join the 42Labs Academy and enjoy sharing knowledge.

Desirable

  • Experience with workflow orchestration and scheduling (e.g., scheduled jobs, scheduling, and retry patterns).
  • Knowledge of data security and governance (permissions, role-based access, basic auditing).
  • Experience with monitoring/alerting tools for pipeline operations.
  • Participation in data architecture design (modeling standards, conventions, and scalability).
  • Experience working with multidisciplinary teams (Data, Backend, BI) and gathering requirements clearly.

Benefits

  • Comprehensive health and protection: complementary health, dental, life, and catastrophic insurance, 100% funded by us (with the option to extend it to your family). We are also part of the Caja Los Andes and ACHS benefits networks.
  • Time and flexibility: we have Flexi Days and Party Time (free afternoons). We celebrate your birthday with a free afternoon and give extra time for milestones such as marriage, the birth of children, or degree exams.
  • Well-being and balance: we promote real balance with a hybrid work environment that trusts your autonomy.
  • Growth: the 42Labs Academy, personalized development plans, and access to Udemy Business.
  • Connectivity and support: monthly allowances for internet and favorite entertainment platforms, plus holiday bonuses for Fiestas Patrias and Christmas.
If you are excited to be part of a community that learns, collaborates, and celebrates, we want to meet you. Apply!

Health coverage 42Labs pays or copays health insurance for employees.
Computer provided 42Labs provides a computer for your work.
Informal dress code No dress code is enforced.
Vacation over legal 42Labs gives you paid vacations over the legal minimum.
$$$ Full time
Data Engineer Senior
  • Grupo Mariposa
Continuous Integration ETL Data Architecture Databricks
We are a multinational beverage and food corporation with regional operations, a broad brand portfolio, and an accelerated digital transformation strategy. Within Apex Digital / M5, the Data & Analytics area enables analytical products, governed data, and advanced capabilities for the business units, including CBC, Beliv, BIA, and the cross-cutting digital transformation initiatives.
As part of this evolution, the organization is moving toward an enterprise AI Agents architecture based on Databricks, ADLS Gen2, Unity Catalog, Azure AI / Microsoft Foundry, Copilot Studio, and Power Automate, aiming to enable secure, traceable, scalable enterprise assistants and agents connected to core business data.

Role duties

1. Design and implement scalable, efficient, and maintainable data engineering solutions using technologies such as:
  • Azure Data Factory (ADF)
  • Databricks
  • Unity Catalog
2. Apply layered architectures (Bronze/Silver/Gold).
3. Automate ETL/ELT with data quality validation and integrity strategies such as idempotent pipelines and SCD handling (a minimal sketch follows below).
4. Guarantee reliable data, optimized for cost and performance, aligned with business needs, and backed by robust documentation and coding standards (PEP8, Git) to ease its evolution and governance.
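To make point 3 concrete: one common way to get an idempotent pipeline on Databricks is a keyed MERGE into a Delta table, so re-running the same batch does not duplicate rows. This is a minimal sketch under assumed names (the silver.customers table and customer_id key are illustrative, not this team's actual schema):

    from delta.tables import DeltaTable

    def upsert_customers(spark, updates_df):
        # Keyed MERGE: re-running with the same batch leaves the table unchanged.
        target = DeltaTable.forName(spark, "silver.customers")  # assumed table
        (target.alias("t")
            .merge(updates_df.alias("s"), "t.customer_id = s.customer_id")
            .whenMatchedUpdateAll()    # overwrite current attributes (SCD Type 1)
            .whenNotMatchedInsertAll()
            .execute())

A full SCD Type 2 variant would instead close out matched rows (effective-date columns) rather than update them in place.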

Role requirements

  • Coordinate the operation of the various environments where data processing runs.
  • Extract, transform, and load data so it is aligned with business needs.
  • Build efficient integrations to ingest the data required by the business logic.
  • Build continuous integration flows that validate the developed pipelines effectively.
  • Mentor junior engineers in good practices and scalable solutions.
  • Propose and implement technological improvements that optimize data flows.

Main challenges

  • Judgment to design, implement, and maintain an efficient, scalable, and intuitive data structure.
  • Judgment and experience to follow coding best practices when building market-competitive functionality.
  • Processing exponentially growing data volumes without letting cloud costs spiral.
  • Implementing data quality mechanisms that do not impact processing speed.

Conditions

Flexible hours Flexible schedule and freedom for attending family needs or personal errands.
Health coverage Grupo Mariposa pays or copays health insurance for employees.
Informal dress code No dress code is enforced.
Vacation on birthday Your birthday counts as an extra day of vacation.
Gross salary $2000 - 2400 Full time
Data Engineer
  • Coderslab.io
  • Santiago (Hybrid)
Big Data ETL Automation Google Cloud Platform

Coderslab.io is a company dedicated to transforming and growing businesses through innovative technology solutions. You will be part of an expanding organization with more than 3,000 collaborators worldwide, with offices in Latin America and the United States. You will join diverse teams that bring together some of the best tech talent to take part in challenging, high-impact projects. You will work alongside experienced professionals and have the opportunity to learn and develop with cutting-edge technologies.

Role duties

Role objective:

Analysis, design, development, and maintenance of data processing systems in Big Data projects. The professional will build pipelines on Cloud and Data Lake platforms to deliver data models into production, also supporting architecture, platform design, ETL/ELT process development, serverless data engineering, and analytical modeling.

Role requirements

  1. Experience in the analysis, design, development, and testing of data ingestion processes (ETL/ELT) in Big Data environments on GCP (Data Lake).
  2. Ability to perform corrective and evolutionary maintenance of ETL/ELT data pipelines, ensuring their stability and continuous improvement.
  3. Experience developing data engineering solutions under serverless architectures by building scalable pipelines.
  4. Knowledge of data pipeline automation and orchestration.
  5. Ability to integrate, consolidate, cleanse, and structure data from diverse sources for consumption in analytical solutions.
  6. Ability to collaborate and support role-related tasks according to project needs.

Conditions

Contract type: fixed term

Gross salary $1500 - 2200 Full time
Data Engineer
  • GUX Technologies
  • Santiago (Hybrid)
DevOps ETL Power BI Qlik Sense

At Proyectum Chile, we drive excellence in Project Management through consulting, training, and specialized outsourcing services. We are an international organization present in 12 Latin American countries, sharing knowledge, methodologies, and high-value assets. We are also the leading Authorized Training Partner (ATP) of the PMI in the region, leading the transformation in project management and agility.

We are looking for a Data Engineer to join a service in the data platform domain, taking part in the development of modern solutions in cloud environments, focused on generating value from data. The role is responsible for producing technology assets and data products, translating business requirements into relevant information.

Role description

Main duties:

  • Develop ETL/ELT processes in Snowflake and AWS
  • Build visualizations in Qlik Sense and Power BI
  • Translate business requirements into technology assets and data products
  • Produce user stories and analysis documentation
  • Participate in defining blueprints and technology solutions
  • Collaborate on developing data solutions in cloud environments

Role requirements

Education:

  • Professional degree in Computer Engineering or a related field

Mandatory requirements:

  • Industry experience: Finance, Payments, Fintech, or Retail
  • Experience with Snowflake
  • Command of the AWS suite
  • Knowledge of DevOps practices
  • Experience in data visualization (Qlik Sense / Power BI)

Desirable requirements:

  • Experience with cloud infrastructure
  • Experience with data warehouses

Key skills

  • Results orientation and value generation
  • Analytical, structured thinking
  • Proactivity and autonomy
  • Collaborative work
  • Effective communication between technical and business areas

Conditions

Computer provided GUX Technologies provides a computer for your work.
$$$ Full time
Data Engineer
  • CyD Tecnología
  • Antofagasta (In-office)
Git SQL ETL Power BI

At CyD Tecnología we are an innovative technology company focused on developing custom web platforms that turn complex processes into simple, efficient solutions. Our team designs and delivers web and mobile applications that automate, integrate, and digitize critical operations, helping companies cut costs, improve control, and make decisions based on real-time data.

Main responsibilities

The Data Engineer will be responsible for designing, developing, and maintaining data solutions geared toward building Power BI dashboards, ensuring the availability, quality, and consistency of information for decision-making.

They will work on integrating different data sources, transforming information, and building the modeling needed to support management reports. They will also participate in process optimization and in the continuous improvement of the data models used by the business.

Their main duties include:

  • Developing and maintaining Power BI dashboards (mainly MOP L3 and L4).
  • Building and managing Dataflows for data preparation and transformation.
  • Integrating data sources such as Snowflake and local databases.
  • Designing and optimizing data models for reporting.
  • Writing and optimizing SQL queries for data extraction and processing.
  • Ensuring data quality and consistency in reports.
  • Supporting data standardization and BI development best practices.
  • Documenting processes and maintaining traceability of data flows.

Required technical competencies

A background in Computer Engineering or a related field is required, along with experience developing BI solutions and handling data.

Mandatory requirements:

  • Experience developing Power BI dashboards.
  • Command of Dataflows and Power Query for data transformation.
  • Command of SQL for complex queries.
  • Experience integrating data from Snowflake or similar sources.
  • Knowledge of data modeling for reporting.
  • Experience with DAX for metrics and calculations.

The job involves a 4x3 shift schedule at a site in the II Region of Antofagasta. No remote work is available.

The following will also be valued:

  • Experience working with large data volumes.
  • Knowledge of Dataverse or Power Platform environments.
  • Experience optimizing dashboard performance.

Optional knowledge

The following knowledge or experience will be considered a plus:

  • Experience with Power Platform (Power Apps, Power Automate).
  • Knowledge of cloud data architecture (Azure).
  • Experience automating data processes (ETL/ELT).
  • Knowledge of data governance and quality.
  • Use of version control tools (Git).
  • Experience with agile methodologies.

Conditions

Health coverage CyD Tecnología pays or copays health insurance for employees.
Computer provided CyD Tecnología provides a computer for your work.
Gross salary $1900 - 2100 Full time
Data Engineer
  • VTI-UChile
  • Santiago (Hybrid)
Python Git SQL Linux

The position is part of the FONDEF project titled “¿Cómo progreso en mi aprendizaje?: Sistema inteligente para fortalecer la autorregulación del aprendizaje en línea en estudiantes de educación superior” (How do I progress in my learning?: an intelligent system to strengthen self-regulated online learning among higher education students).

This initiative continues a previous project focused on developing predictive and explanatory models of self-regulated learning through learning analytics. In this new stage, the focus is on designing and implementing solutions that actively strengthen students' self-regulation in digital environments, understood as the capacity to plan, monitor, and evaluate their own learning process.

The work involves intensive use of educational data and the development of evidence-based tools to improve the learning experience and outcomes in higher education.

Role duties

Responsible for designing, completing, and optimizing the data model that supports the learning metrics of the LMS platforms managed by the EOL Office, implemented on Open edX.

The role's objective is to integrate multiple data sources (event logs, relational and non-relational databases) to structure a consistent system of events, learning actions, and per-user and per-course metrics, and to enable access to this information through APIs for consumption by interfaces and external systems.

The candidate is expected to quickly understand existing data architectures, work on systems under development, and complete both the modeling and the data exposure layer.

Key responsibilities

Analytical data modeling:

  • Design, complete, and maintain the analytical data model (events, metrics, dimensions).
  • Define analytics-oriented data structures (fact tables, dimensions, relationships).
  • Ensure data consistency, traceability, and quality.
  • Document models and metric definitions.

Data source integration:

  • Integrate data from platform event logs, relational databases (SQL), and non-relational databases (MongoDB).
  • Design data transformation processes (ETL/ELT).
  • Resolve integration, duplication, and data quality issues.

Event analysis and structuring:

  • Interpret and structure user interaction events.
  • Model learning actions and behavior within the platform.
  • Define key metrics from events (engagement, progress, usage, etc.).

Data API development (a minimal sketch follows below):

  • Design and develop APIs to expose metrics and analytical data.
  • Implement efficient endpoints for consumption by interfaces and external systems.
  • Ensure good design practices (performance, versioning, consistency).
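As a rough sketch of the kind of data API described above, the following FastAPI endpoint exposes a per-course metric; the route, metric names, and the in-memory lookup are illustrative stand-ins for the real analytical model, not this project's actual code:

    from fastapi import FastAPI

    app = FastAPI()

    # Hypothetical stand-in for metrics computed from event logs and databases.
    METRICS = {("demo-course", "engagement"): 0.72}

    @app.get("/courses/{course_id}/metrics/{metric}")
    def course_metrics(course_id: str, metric: str):
        value = METRICS.get((course_id, metric))  # None if not computed yet
        return {"course_id": course_id, "metric": metric, "value": value}

Served locally with, e.g., uvicorn, this returns JSON consumable by dashboards or external systems.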

Optimization and analytical support:

  • Optimize queries and structures for efficient analysis.
  • Support the generation of reports and visualizations.
  • Collaborate with technical and functional teams.

Technical requirements:

  • Relational databases (PostgreSQL/MySQL): Advanced
  • Non-relational databases (MongoDB): Advanced
  • Advanced SQL (complex queries, optimization): Advanced
  • Analytical data modeling (events, facts, dimensions): Advanced
  • Python (data processing and backend development): Advanced
  • Use of ORMs in Python: Intermediate
  • Migration management: Intermediate
  • REST API development: Advanced
  • Backend frameworks (FastAPI and/or Django): Advanced
  • ETL/ELT processes: Intermediate
  • Event-based systems / user tracking: Intermediate
  • Git: Intermediate
  • Linux / Docker: Intermediate

Conditions

  • Short Fridays.
  • Administrative leave days.

Partially remote You can work from your home some days a week.
Informal dress code No dress code is enforced.
Gross salary $900 - 1200 Full time
Data Process Analyst
  • Datasur
  • Santiago (Hybrid)
Python PostgreSQL ETL Automation

At Datasur, we are leaders in business intelligence based on foreign trade data. Our platform processes millions of import and export records from more than 70 countries, and we are ready to scale higher.

We are looking for a Process Engineer with at least one year of experience for a project automating the data production workflow. The role focuses on mapping, analyzing, documenting, and improving processes, driving the transition from manual operations to standardized, traceable, scalable models.

It requires a process-oriented IT mindset, able to map end-to-end flows, detect gaps, define controls, and translate business needs into clear functional requirements. The work spans the entire data lifecycle (ingestion, standardization, quality, monitoring, orchestration, and analytical loading), identifying risks and automation opportunities.

Role duties

1. Map, analyze, and document current and future processes of the data production workflow.
2. Standardize processes, definitions, operating rules, and control points across areas.
3. Translate operational and functional requirements into clear documents for IT teams.
4. Support the definition of target flows, use cases, business rules, validations, and control metrics.
5. Coordinate with Data Production areas and technical teams to ensure consistency in process design.
6. Participate in producing process diagrams, procedures, manuals, and operations documentation.
7. Accompany the implementation of improvements, tracking progress, dependencies, and operating agreements.
8. Support the definition of quality indicators, traceability, alerts, and process monitoring.

Role requirements

  1. A background in Process Engineering, Industrial Civil Engineering, Computer Engineering, Execution Engineering, Systems, or a related field.
  2. At least 1 year of experience in process mapping, analysis, documentation, or improvement. Interest or experience in processes tied to IT, data, automation, or digital transformation.
  3. Knowledge of process modeling, requirements gathering, and functional documentation.
  4. Ability to interact with technical and non-technical profiles.

Valued

  • Experience in data projects, ETL, data quality, automation, or systems integration.
  • General knowledge of concepts such as pipelines, validations, logs, monitoring, traceability, and data governance.
  • Familiarity with environments involving technologies such as Python, PostgreSQL, Airflow, Spark, or data processing solutions, although the role's main focus is not development but organizing and improving the process.

Conditions

  • A challenging project with real impact on the world of foreign trade.
  • A committed, agile team with a vision of global growth.
  • Freedom to propose, create, and lead change.
  • A flexible arrangement and a results-driven culture.

Gross salary $3000 - 5000 Full time
Data Engineer
  • Revel Street LLC
SQL DevOps ETL CI/CD

Revel Street LLC helps corporate event planners discover and reach private dining venues through an extensive, dependable database. We use LLMs extensively to gather and enrich venue data, streamline the event planning workflow, and reduce the time and effort required to source options for events such as private dining, cocktail receptions, and conferences. We are looking for an experienced Data Engineer to help us improve data quality, fix existing data issues, and ingest more data from APIs and LLM-based sources to complement our current datasets. Our current stack includes React, TanStack, Cloudflare, Django, and Dagster, and we expect you to design solutions that are scalable, testable, and grounded in core engineering fundamentals.

Responsibilities

You’ll proactively turn ambiguous requirements into well-structured engineering plans. You’ll communicate trade-offs and risks early, and you’ll verify outcomes through hands-on testing. You’ll bring a “build, measure, improve” mindset to performance, reliability, and user experience.

  • Design, build, and maintain dbt pipelines for our analytics and operational workloads
  • Build and maintain ETL/ELT processes to ingest data from multiple APIs and other external sources
  • Set up and manage workflows in orchestration platforms such as Dagster
  • Develop and refine our data models to support analytics, reporting, and downstream products
  • Diagnose and fix data quality issues (duplicates, missing fields, inconsistent formats, incorrect mappings, etc.)
  • Implement robust data cleaning and validation checks (a minimal sketch follows this list)
  • Integrate LLM-based data enrichment (e.g., using OpenAI or similar APIs) to improve and complement event data
  • Collaborate with our product and ops team to understand data needs and translate them into technical solutions
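As a loose sketch of the ingestion-plus-cleaning work described above, here are two Dagster assets: one pulls from an API and one applies basic quality rules. The URL, field names, and rules are assumptions for illustration, not Revel Street's actual pipeline:

    import requests
    from dagster import asset

    @asset
    def raw_venues() -> list[dict]:
        # Hypothetical external source; the real sources include APIs and LLM enrichment.
        resp = requests.get("https://api.example.com/venues", timeout=30)
        resp.raise_for_status()
        return resp.json()

    @asset
    def clean_venues(raw_venues: list[dict]) -> list[dict]:
        # Basic quality pass: drop rows missing required fields, dedupe on a key.
        seen, out = set(), []
        for row in raw_venues:
            key = row.get("venue_id")
            if key and row.get("name") and key not in seen:
                seen.add(key)
                out.append(row)
        return out

Declaring the cleaned table as a downstream asset keeps lineage explicit and lets the orchestrator rematerialize it whenever the raw ingestion changes.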

Requirements

  • Very high English proficiency (clear communication, strong writing, and the ability to collaborate effectively)
  • At least 3 years of data engineering experience, including work with dbt and the modern data stack
  • Some experience with DevOps, CI/CD, and database management
  • At least 6 months of experience working exclusively in an agentic coding environment (e.g., Claude Code, Codex)
  • Ability to understand data engineering fundamentals, not just generate code—debugging, reasoning about behavior, and ensuring correctness

Bonus (preferred)

  • Bachelor’s degree in Computer Science, Engineering, or a related field.

Conditions

Fully remote You can work from anywhere in the world.
Gross salary $2000 - 2200 Full time
Python BigQuery Apache Spark CI/CD

Equifax is much more than a credit reporting company; it is a leading global data, analytics, and technology company with a presence in 24 countries. In Chile, it has operated since 1979, delivering critical cybersecurity, identity, and risk solutions to more than 14,000 companies.

The Technology Hub (SDC): What makes this opportunity unique is that Chile hosts the Santiago Development Center (SDC). This center leads Equifax's digital transformation worldwide, concentrating close to 60% of its global technology development.

Culture and vision: Equifax fosters an environment of collaboration and technical excellence, where local talent takes on the challenge of building solutions with global impact. Its vision is clear: use data and technology to empower financial decision-making around the world.

Role duties

What will you do day to day?

  • Strong focus on cloud data development and processing.
  • Process automation and data manipulation.
  • Querying and handling large data volumes.
  • Distributed data processing, both real-time and batch.

Skills

Technical

  • 2+ years of Python experience
  • 2+ years of BigQuery experience
  • 2+ years of Apache Beam / Apache Spark experience
  • English A2 (conversational)

Personal

  • Capacity for self-management
  • Good communication skills
  • Strong teamwork
  • Adaptability to change (you will work across different Latin American geographies)
  • Academic degree in Computer Engineering, Systems, or related fields.

Open-ended contract from day one with 23people. Project duration: 6 months with possible extension.

  • Arrangement: Home office, residing in Chile.
  • Experience: 2 years or more.
  • Schedule: Mon-Thu 08:30 to 18:30 / Fri 08:30 to 17:30.

Desirable

  • Analytical profile
  • Unit testing
  • Airflow
  • PySpark
  • CI/CD
  • Postman
  • JMeter

Benefits

Some of our benefits

  • Complementary insurance: health, life, and dental insurance.
  • English course: our English-language training program offers two modalities to adapt to your needs and goals.
  • Reimbursement of international certifications: we support professional growth, so we reimburse the cost of one international certification exam you want to take.
  • Vacation bonus: for each week of vacation you take, we grant you compensation.
  • Holiday bonuses for Fiestas Patrias and Christmas: we want you to enjoy these special dates with your family, so we give you a bonus in September and December.
  • Birthday day off: you can choose to take your day off on the day before, the day of, or the day after your birthday.

Flexible hours Flexible schedule and freedom for attending family needs or personal errands.
Life insurance Equifax pays or copays life insurance for employees.
Health coverage Equifax pays or copays health insurance for employees.
Dental insurance Equifax pays or copays dental insurance for employees.
Computer provided Equifax provides a computer for your work.
Vacation on birthday Your birthday counts as an extra day of vacation.
$$$ Full time
Infrastructure and Deployment Analyst
  • Coderslab.io
  • Bogotá (Hybrid)
Git SQL Oracle Linux
Coderslab.io is a company that helps organizations transform and grow through innovative technology solutions. You will be part of a group of more than 3,000 collaborators worldwide, with offices in Latin America and the United States. You will work within diverse teams with top-tier talent and take part in innovative, challenging projects that will boost your professional development. You will have the opportunity to learn from experienced professionals and work with cutting-edge technologies in a collaborative, results-oriented environment.

Role duties

The role's objective is to administer and configure the Bank's test environments, as well as the process that takes applications and developments through to the production environment.
  • Receive documentation such as manuals, delivery documents, and everything related to software developments, whether internal or external.
  • Perform (manual/continuous) deployments of received developments into test environments, with the corresponding object configurations.
  • Configure and homologate test environments as required.
  • Verify and resolve errors arising in test environments, whether from deployments of new developments or from installations or configurations of new applications.
  • Control versioning of application sources and development objects.
  • Prepare and generate the documentation, objects, and applications to be promoted to the production environment.
  • Execute the correct production rollout of local applications (Web, Windows, ...).
  • Maintain and develop pipelines in GitLab.

Role requirements

  • Knowledge of the software configuration management process
  • Administration of Windows Server operating systems (various versions)
  • Installations on IIS, web services, and Windows services
  • Basic knowledge of versioning tools such as Git, TFS, and SVN
  • Knowledge of SharePoint and Confluence
  • Basic knowledge of operating systems: Linux, Windows Server
  • Intermediate knowledge of SQL, Oracle, and DB2 databases
  • Basic knowledge of Visual Studio
  • Installation of SQL ETLs
  • Experience deploying Web, Windows, client-server, and Node.js applications, among others
  • Use of the SoapUI tool
  • Knowledge of the PowerCenter tool
  • Knowledge of the GoAnywhere tool

$$$ Full time
QA Engineer II (L4)
  • OpenLoop
  • Lima (Hybrid)
Python ETL TypeScript Testing Frameworks

About OpenLoop

OpenLoop was co-founded by CEO, Dr. Jon Lensing, and COO, Christian Williams, with the vision to bring healing anywhere. Our telehealth support solutions are thoughtfully designed to streamline and simplify go-to-market care delivery for companies offering meaningful virtual support to patients across an expansive array of specialties, in all 50 states.

Our Company Culture

We have a relatively flat organizational structure here at OpenLoop. Everyone is encouraged to bring ideas to the table and make things happen. This fits in well with our core values of Autonomy, Competence and Belonging, as we want everyone to feel empowered and supported to do their best work.

Apply from getonbrd.com.

Responsibilities

We're seeking a QA Automation Engineer to join our Data Engineering team and take ownership of quality assurance across our data pipelines and infrastructure. This role will be instrumental in building and maintaining automated test suites that ensure the reliability and accuracy of our healthcare data systems. You'll work closely with a small, focused team of data engineers to establish testing strategies, prioritize coverage for critical data paths, and maintain quality standards as we scale.

• Quality Ownership: Own and maintain the automated test suite that runs in our CI pipeline, including integration tests, data quality checks, and smoke tests for our data infrastructure.

• Strategic Collaboration: Partner closely with data engineers to understand pipeline architecture, identify critical data paths, and develop comprehensive testing strategies that prioritize business-critical datapoints.

• Test Development: Write and maintain automated tests for data pipelines using Python and TypeScript, ensuring coverage across batch and event-driven workflows.

• Data Validation: Implement data quality checks including row counts, schema validation, key-column validation, idempotency testing, and duplicate handling across ETL processes (a minimal sketch follows this list).

• CI/CD Integration: Build and maintain testing frameworks that integrate seamlessly with our CI/CD pipelines using GitHub Actions, AWS CodePipeline, and CodeArtifact.

• Documentation & Standards: Document test cases, testing strategies, and coverage metrics to establish repeatable quality standards across the data team.

• Continuous Improvement: Identify testing gaps and systematically expand coverage toward end-to-end testing of critical data pipelines.
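For a flavor of what those automated checks can look like in a CI suite, here is a minimal pytest-style sketch; the batch, schema, and key column are illustrative, not OpenLoop's actual data:

    # Hypothetical loaded batch standing in for a pipeline output.
    rows = [
        {"patient_id": "p1", "visit_ts": "2024-01-01T10:00:00Z"},
        {"patient_id": "p2", "visit_ts": "2024-01-02T11:30:00Z"},
    ]
    EXPECTED_SCHEMA = {"patient_id", "visit_ts"}

    def test_schema():
        # Every row carries exactly the expected columns.
        assert all(set(r) == EXPECTED_SCHEMA for r in rows)

    def test_key_column_not_null_and_unique():
        keys = [r["patient_id"] for r in rows]
        assert all(keys) and len(keys) == len(set(keys))

    def test_row_count_nonzero():
        assert len(rows) > 0

In CI these run like any other pytest module, so a failed data contract blocks the pipeline just as a failed unit test would.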

Requirements

• 3 years of experience in QA automation or software testing, with a focus on data pipelines or backend systems.

• 3 years of hands-on experience with Python and TypeScript for test automation.

• Strong experience with CI/CD pipelines (GitHub Actions, AWS CodePipeline, CodeArtifact).

• Hands-on experience working with data lakes and ETL processes on AWS (familiarity with services like S3, Glue, Athena, Lambda, Step Functions, SQS, EventBridge).

• Experience with testing frameworks for Python (pytest, unittest) and TypeScript/JavaScript (Jest, Mocha).

• Understanding of data structures, data modeling concepts, and data lineage.

• Experience testing in a multi-tenant SaaS environment.

• English (C1/C2) fluency.

Desirable skills

ISTQB Certification

Our Benefits

  • Contract through a Peruvian entity, on local payroll ("planilla"). You will receive all the legal benefits in Peruvian soles (CTS, "gratificaciones", etc.).
  • Monday - Friday workdays, full time (9 am - 6 pm).
  • Unlimited Vacation Days - Yes! We want you to be able to relax and come back as happy and productive as ever.
  • EPS healthcare covered 100% with RIMAC --Because you, too, deserve access to great healthcare.
  • Oncology insurance covered 100% with RIMAC
  • AFP retirement plan—to help you save for the future.
  • We’ll assign a computer in the office so you can have the best tools to do your job.
  • You will have all the benefits of the Coworking space located in Lima - Miraflores (Free beverage, internal talks, bicycle parking, best view of the city)

Life insurance OpenLoop pays or copays life insurance for employees.
Paid sick days Sick leave is compensated (limits might apply).
Partially remote You can work from your home some days a week.
Health coverage OpenLoop pays or copays health insurance for employees.
Retirement plan OpenLoop pays or matches payment for plans such as 401(k) and others.
Computer provided OpenLoop provides a computer for your work.
Informal dress code No dress code is enforced.
Vacation over legal OpenLoop gives you paid vacations over the legal minimum.
Gross salary $1800 - 2400 Full time
Mobile QA Automation Engineer
  • 3IT
  • Santiago (Hybrid)
Java Docker Selenium CI/CD
We are 3IT. Innovation and talent that make the difference!
For us, innovation is a collaborative process and growth a shared goal. We are guided by values such as teamwork, reliability, empathy, commitment, honesty, and quality, because we know good results start with good relationships.
We also value diversity and promote inclusive workplaces. That is why we actively support compliance with Law 21.015, ensuring accessible processes with equal opportunities.
If you are looking for a place to keep learning, contribute what you know, and grow in a close-knit, collaborative environment, this could be your next opportunity.

📝 What would your job be?

Ensure software quality by implementing automated tests, overseeing every stage of development to prevent defects and guarantee optimal product performance.

🎯 What do we need for you to join our team?

  • Use of Git.
  • Command of Docker.
  • Experience in the banking sector.
  • Hands-on software testing practice.
  • Application of BDD with Gherkin and Cucumber.
  • Ability to run cloud tests on AWS and OCI.
  • Monitoring with Dynatrace, Elastic, and Grafana.
  • Command of agile methodology, Scrum, and Kanban.
  • A track record in test automation with Java.
  • Familiarity with deployments via DA and CloudBees.
  • Administration of mobile and web device farms.
  • Competence in continuous integration with Jenkins and Bamboo.
  • A minimum of 3 years with the required technologies.
  • Knowledge of stress testing with JMeter and LoadRunner.
  • Skills in technical testing of logs, services, and databases.
  • Management of quality tools such as Jira, Confluence, Xray, and GitHub.
  • Experience with Selenium, Appium, and BDD frameworks under a Gradle architecture.
  • Implementation of REST and SOAP service validations with Postman or SoapUI.

⭐ A plus for this role

  • ISTQB certification.
  • Command of BrowserStack.
  • Knowledge of artificial intelligence applied to QA.
📍 Where and how will you work?
  • Office location: Santiago commune
  • Arrangement: Hybrid
✋ A few considerations before applying:
  • You must be available to work in a hybrid arrangement and attend the client's offices in person.
  • If you have a disability, let us know if you need any special accommodation for your interview.

Benefits you will have if you join our team:

💰 Annual bonus
🦷 Dental insurance
📚 Training
📅 Administrative leave days
🍽️ Sodexo card + $80,000
👕 Informal dress code
🚀 Upskilling and reskilling programs
🏥 MetLife complementary health insurance
💊 Discounts at pharmacies and health centers
🐾 Discounts on pet insurance and pet stores
🎄 Holiday bonuses for Fiestas Patrias and Christmas
👶 Additional days of paternity leave
🎂 Half a day off on your birthday
🏦 Caja de Compensación Los Andes
🌍 Mundo ACHS discounts
🎁 Gift for the birth of a child
🛍️ Buk discounts

Wellness program Banco de Chile offers or subsidies mental and/or physical health activities.
Accessible An infrastructure adequate for people with special mobility needs.
Life insurance Banco de Chile pays or copays life insurance for employees.
Digital library Access to digital books or subscriptions.
Health coverage Banco de Chile pays or copays health insurance for employees.
Dental insurance Banco de Chile pays or copays dental insurance for employees.
Computer provided Banco de Chile provides a computer for your work.
Performance bonus Extra compensation is offered upon meeting performance goals.
Informal dress code No dress code is enforced.
Beverages and snacks Banco de Chile offers beverages and snacks for free consumption.
Parental leave over legal Banco de Chile offers paid parental leave over the legal minimum.
$$$ Full time
DataOps Engineer
  • BC Tecnología
Azure CI/CD Terraform Databricks
BC Tecnología is an IT consultancy that implements infrastructure, development, and outsourcing solutions for clients in financial services, insurance, retail, and government. For this LATAM project, we are looking for a DataOps Engineer with at least 3 years of experience for an Azure + Databricks environment, working remotely for LATAM. The team is focused on building reliable, scalable data pipelines, with an emphasis on quality, monitoring, and security. You will participate in data flow automation, IaC implementation, and continuous improvements to data integration and delivery processes.

Duties

  • Design, implement, and maintain data pipelines in Azure and Databricks environments, managing clusters, jobs, and notebooks.
  • Develop pipelines in PySpark and orchestration tools (Azure Data Factory, Databricks Workflows).
  • Automate data validation and quality, establishing metrics and alerts for proactive monitoring (see the sketch after this list).
  • Manage IaC with Terraform for data infrastructure and development, test, and production environments.
  • Integrate CI/CD in Azure DevOps / GitHub / GitLab for pipeline and code deployments.
  • Apply good security, compliance, and cost optimization practices in Azure.
  • Work with cross-functional teams to understand requirements, design solutions, and deliver high-impact results.
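As an illustration of the validation-with-alerts item above, a minimal PySpark quality gate might compute a few metrics and fail loudly; the table path and key column here are assumptions, not this project's real schema:

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()
    df = spark.read.format("delta").load("/mnt/bronze/orders")  # illustrative path

    # Simple quality metrics that could also feed dashboards or alert rules.
    total = df.count()
    null_keys = df.filter(F.col("order_id").isNull()).count()
    dupes = total - df.dropDuplicates(["order_id"]).count()

    if null_keys or dupes:
        raise ValueError(f"Quality gate failed: {null_keys} null keys, {dupes} duplicates")

In a Databricks Workflow, a raised exception fails the job run, which standard alerting can then pick up.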

Requirements and profile

We are looking for a professional with at least 3 years of experience in DataOps/Data Engineering and strong Azure and Databricks skills. They must master PySpark, Azure Data Factory, and Databricks Workflows, as well as CI/CD tooling and data security practices. Experience in data quality automation, monitoring, and performance optimization is valued, along with the ability to work remotely, proactivity, process orientation, and the capacity to collaborate in agile teams. Experience in regulated environments and knowledge of data governance principles are desirable.

Desirable

Azure certifications (AZ-xxx), experience in data orchestration, knowledge of observability tools, and a background in the financial or insurance sectors. The ability to communicate technical opportunities to stakeholders and document solutions clearly.

Benefits

At BC Tecnología we foster a collaborative work environment that values commitment and constant learning. Our culture is oriented toward professional growth through integration and knowledge sharing across teams.
The hybrid arrangement we offer, based in Las Condes, combines the flexibility of remote work with in-person collaboration, enabling better balance and workplace dynamism.
You will take part in innovative projects with high-profile clients across diverse sectors, in an environment that fosters inclusion, respect, and technical and professional development.

Fully remote You can work from anywhere in the world.
Health coverage BC Tecnología pays or copays health insurance for employees.
Computer provided BC Tecnología provides a computer for your work.
Gross salary $1000 - 1300 Full time
Web Developer
  • Coderslab.io
  • Lima (Hybrid)
HTML5 Python BigQuery ETL

CodersLab is a company dedicated to developing solutions in the IT industry. We are currently focused on expanding our teams globally to position our products in more Latin American countries, which is why we are looking for a Web Developer to join our team.

You will be part of a challenging, ambitious team eager to innovate in the market, where your ideas and contributions will be highly valuable to the business.

Apply now for this amazing challenge!

Role duties

  • Development of channel management functionality with Python and HTML5, both backend and frontend.
  • Migration of functionality to the web.
  • Functional documentation of the developments.
  • A degree in systems or a related field.
  • Experience in any sector; experience in the financial sector is a plus.

Role requirements

2 to 3 years of experience

  • Experience with HTML5
  • Experience with SQL Server
  • Experience with Python
  • Experience with BigQuery
  • Experience with GitLab
  • Experience with ETLs
  • Experience in any sector; experience in the financial sector is a plus.

Conditions

Contract type: fee-based contract ("recibo por honorarios")
Arrangement: Hybrid (3 days per week on-site)

$$$ Full time
Python/HTML5 Web Developer
  • BC Tecnología
  • Lima (Hybrid)
HTML5 Python BigQuery Microservices
BC Tecnología is an IT consultancy experienced in designing solutions for clients in financial services, insurance, retail, and government. Our focus is delivering development and functionality migration projects with agile teams, emphasizing operational continuity and the evolution of digital channels. In this position, you will take part in initiatives to migrate functionality from mobile applications (APK) to web platforms, ensuring efficient, scalable solutions aligned with the bank's corporate standards.
You will work on projects that require data integration, functionality migrations, and the development of solutions that optimize the end-user experience, maintaining the quality and documentation traceability needed in regulated environments.

Duties and responsibilities

  • Develop channel management functionality using Python and HTML5, covering both backend and frontend.
  • Migrate functionality from APK to web platforms, ensuring transitions without loss of performance or data integrity.
  • Participate in producing functional documentation of the developments, maintaining traceability and clarity for operations and business teams.
  • Collaborate on technical definitions and code reviews to guarantee adherence to the bank's standards and good practices.
  • Work collaboratively with UI/UX, QA, and DevOps teams to deliver scalable, maintainable solutions.
  • Identify, record, and propose continuous improvements in processes, performance, and application security.

Requirements and desired profile

Mandatory technical requirements:
  • HTML5
  • Python
  • SQL Server
  • BigQuery
  • GitLab
  • ETLs
2 to 3 years of experience as a developer, with a background in web development and functionality migration. Experience in the financial sector or related industries will be valued, as will the ability to work in collaborative, results-oriented environments, with good communication skills for documenting and coordinating changes with stakeholders.

Desirable

Prior experience migrating from mobile apps to web platforms, additional knowledge of microservice architectures, and familiarity with data governance and security processes in regulated environments. Languages: fluent Spanish; technical English desirable.

Benefits and work environment

At BC Tecnología we foster a collaborative work environment that values commitment and constant learning. Our culture is oriented toward professional growth through integration and knowledge sharing across teams.
The hybrid arrangement we offer, based in Las Condes, combines the flexibility of remote work with in-person collaboration, enabling better balance and workplace dynamism.
You will take part in innovative projects with high-profile clients across diverse sectors, in an environment that fosters inclusion, respect, and technical and professional development.

$$$ Full time
Python PostgreSQL SQL Docker
Niuro connects ambitious projects with elite tech teams to deliver high-impact solutions for leading U.S. companies. The selected candidate will join a fintech-focused environment where data integrity, reliability, and scalability are paramount. You will contribute to building autonomous, high-performance backend systems that ingest, normalize, validate, and store market data at scale. This role emphasizes robust data pipelines, production-grade services, and seamless API-based integrations, enabling real-time and historical market data workflows for analytics and trading applications. The project culture values technical excellence, continuous improvement, and a collaborative global team committed to delivering measurable value while maintaining a strong administrative support backbone to allow engineers to focus on impactful work.

Core Responsibilities

Design, implement, and maintain asynchronous Python services for market-data ingestion in a fintech setting. Build clean, well-typed, maintainable Python code using modern best practices. Design and operate microservice-based architectures using Docker. Optimize concurrency, throughput, and resource usage in asynchronous systems. Own services end-to-end: development, debugging, monitoring, and long-term improvements.
  • Data Pipelines & Reliability: Build and maintain robust API-based ingestion pipelines. Handle real-world failure modes including partial data, retries, idempotency, and upstream instability. Monitor ingestion success, latency, and data quality metrics. Conduct root-cause analyses on data incidents and implement durable fixes. Ensure deterministic behavior under load.
  • Database & Data Integrity: Work directly with PostgreSQL and TimescaleDB using raw SQL where appropriate. Design and maintain normalized schemas for time-series and reference data. Ensure data correctness, consistency, and traceability across ingestion layers. Maintain and debug production databases. Design scalable data structures to support growing data volume and query load. (A minimal ingestion sketch follows below.)
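As a very rough sketch of the ingestion pattern described above (retries with backoff against an unstable upstream, plus an idempotent write), under assumed endpoint, schema, and connection string:

    import asyncio
    import aiohttp
    import asyncpg

    API_URL = "https://api.example.com/ticks"  # hypothetical market-data endpoint
    UPSERT = """
        INSERT INTO ticks (symbol, ts, price)
        VALUES ($1, $2, $3)
        ON CONFLICT (symbol, ts) DO NOTHING  -- idempotent re-ingestion
    """

    async def fetch_with_retry(session: aiohttp.ClientSession, retries: int = 3):
        for attempt in range(retries):
            try:
                async with session.get(API_URL) as resp:
                    resp.raise_for_status()
                    return await resp.json()
            except aiohttp.ClientError:
                await asyncio.sleep(2 ** attempt)  # simple exponential backoff
        raise RuntimeError("upstream unavailable after retries")

    async def main():
        conn = await asyncpg.connect("postgresql://localhost/marketdata")  # assumed DSN
        async with aiohttp.ClientSession() as session:
            for tick in await fetch_with_retry(session):
                # Field names and types are illustrative; real tick payloads vary.
                await conn.execute(UPSERT, tick["symbol"], tick["ts"], tick["price"])
        await conn.close()

    asyncio.run(main())

The unique (symbol, ts) key is what makes replays safe: re-ingesting an already-stored batch is a no-op rather than a source of duplicates.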

Required Experience & Skills

• 5+ years of professional experience building backend systems in Python.
• Strong experience with async Python (asyncio, async I/O patterns).
• Excellent knowledge of PostgreSQL, raw SQL, and database performance tuning.
• Experience designing and operating production distributed systems.
• Strong understanding of failure modes, backpressure, retries, and idempotency.
• Proven ability to own systems end-to-end in production.

Bonus – Fintech & Data Awareness

• Experience with financial or market data.
• Familiarity with time-series modeling and high-volume data ingestion.
• Ability to reason about how data quality impacts downstream trading or analytics systems.
• Experience supporting analytics or front-end consumers of market data.

Benefits

We provide opportunities to participate in impactful and technically rigorous industrial data projects that drive innovation and professional growth. Our work environment emphasizes technical excellence, collaboration, and continuous innovation.
Niuro supports a 100% remote work model, allowing global flexibility. We invest in career development through ongoing training programs and leadership opportunities, ensuring continuous growth and success.
Upon successful completion of the initial contract, there is potential for long-term collaboration and stable, full-time employment, reflecting our long-term commitment to our team members.
Joining Niuro means becoming part of a global community dedicated to technological excellence and benefiting from strong administrative support that enables you to focus on impactful work without distractions.

Informal dress code No dress code is enforced.
$$$ Full time
.NET / SQL / Angular Developer
  • BC Tecnología
  • Santiago (Hybrid)
Python Scrum MVC Microservices
BC Tecnología is an IT consultancy that manages portfolios, develops projects, and offers outsourcing and recruitment of professionals for Technology Infrastructure, Software Development, and Business Unit areas. The project focuses on data migrations between platforms and on the development and maintenance of solutions based on SQL Server, .NET, and an Angular front end, aimed at clients in sectors such as financial services, insurance, retail, and government. The role involves working in agile teams to deliver high-quality software, with a focus on performance, scalability, and compliance with Product Owner requirements and digital architecture standards. You will take part in continuous improvement initiatives, data migrations, and microservices development in an advanced technology environment, with an emphasis on good testing practices and incremental delivery.

© Get on Board.

Main duties

  • Develop and maintain applications and processes using SQL Server and SQL Server Integration Services (SSIS), ASP.NET, and .NET Framework 4.x.
  • Develop software solutions that use resources (memory, disk, CPU) efficiently and meet the requirements and functionality defined by the Product Owner.
  • Write functional, maintainable, high-quality code for product increments, covering both backend and frontend (MVC with Angular, and Python where applicable).
  • Design and implement microservices, managing their lifecycle and deployment in cloud environments such as AWS.
  • Perform unit and integration testing, fix defects detected in QA, and ensure product increments are production-ready at the end of each sprint.
  • Share collective ownership of the sprint increment's code and pursue continuous improvement of deliverables and processes.
  • Analyze and interpret data to support decision-making, linking business requirements to robust technical solutions.
  • Collaborate in agile Scrum teams, maintaining effective communication and documenting technical and functional artifacts.
  • Handle data migration requirements between platforms, with advanced knowledge of bulk (batch) processes and integration tools.

Description

We are looking for a Senior Developer with solid experience in data migrations and full-stack development, able to work in a banking and services environment. The ideal candidate will have a proven track record delivering complex solutions, integrating presentation, business, and data layers, and will demonstrate advanced analytical skills for modeling and transforming information. Experience with SQL Server, SSIS, .NET, MVC with Angular, and microservices development is required. You will work with Scrum methodologies and collaborate with cross-functional teams to deliver high-quality, scalable, and secure solutions. Certifications in .NET, SQL Server, and/or Scrum are valued, as is experience with cloud platforms. The role involves a hybrid schedule with on-site presence in downtown Santiago and coordination with teams in Las Condes according to the company's arrangements.

Desirable requirements

University degree in Systems Engineering, Computer Science, or related fields. At least 5 years of software development experience in similar projects. Advanced command of HTML5, CSS, and JavaScript, knowledge of Angular, and mobile/web development. Verifiable experience with SQL Server, SSIS, ETL, ASP.NET, MVC with Angular, and Python. Experience with data migrations, batch/bulk processes (CMD), and microservices development is desirable. Advanced analytical skills, good communication, and teamwork. Knowledge of Genesys Cloud and Salesforce Marketing Cloud is a plus. Experience in banking environments and in settings requiring high security and regulatory compliance is valued.

Benefits

At BC Tecnología we promote a collaborative work environment that values commitment and constant learning. Our culture is oriented toward professional growth through integration and knowledge sharing across teams.
The hybrid arrangement we offer, based in Las Condes, combines the flexibility of remote work with in-person collaboration, enabling a better balance and a more dynamic workday.
You will take part in innovative projects with high-profile clients across diverse sectors, in an environment that fosters inclusion, respect, and technical and professional development.

Gross salary $1800 - 2300 Full time
Junior Software & Robotics Engineer
  • Maquintel robotic services
  • Santiago (In-office)
Python Git Data Analysis Linux

Role summary

We are looking for a recently graduated engineer with a strong technical foundation and a desire to learn "in the real world" to join a team that builds inspection solutions for critical assets using robotics, perception (vision/3D), data platforms, and digital twins. Your focus will be connecting the physical world with the digital one: capturing data from robots and sensors, processing it (images, point clouds, telemetry), exposing it on a platform (APIs/dashboards), and turning it into a digital twin that is useful for operations and maintenance.

Ideal profile

  • Civil Engineer in Electrical, Electronics, or Computer Science/Informatics.
  • Recent graduate or 0-2 years of experience (internships count).
  • High potential, curiosity, and an accelerated-learning mindset.
  • Leadership from day one: ownership of tasks, initiative, and the ability to ask for help in time.
  • Order and rigor: reproducibility, a technical logbook, documentation, and a focus on data quality.

Exclusive to Get on Board.

Key responsibilities

1) Capture and processing software (Robotics + Data)

  • Develop and maintain tools for collecting, cleaning, and processing data generated by robots (images, video, LiDAR/point clouds, IMU, and other sensors).
  • Design reproducible pipelines for logging, synchronization, validation, and backup of data.
  • Automate recurring tasks (imports, format conversion, quality control, generation of base reports).

2) Perception and analytics (Vision + 3D)

  • Implement and optimize image-processing algorithms (OpenCV) and analysis of spatial data / point clouds (filtering, registration, segmentation, metrics); see the sketch after this list.
  • Support dataset curation, annotation/labeling where applicable, and validation of results.
  • Measure performance: precision/recall where appropriate, error, coverage, repeatability, and reprocessing rate.
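
By way of illustration, here is a minimal sketch of the kind of image-processing step listed above, assuming a grayscale inspection image; the defect-segmentation heuristic and file path are hypothetical, not Maquintel's actual algorithms.

    import cv2
    import numpy as np

    def segment_defects(path, blur_ksize=5):
        # Toy example: flag dark anomalies on a brighter surface.
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        if img is None:
            raise FileNotFoundError(path)
        # Denoise first, then separate foreground with Otsu's automatic threshold.
        smoothed = cv2.GaussianBlur(img, (blur_ksize, blur_ksize), 0)
        _, mask = cv2.threshold(smoothed, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
        coverage = float(np.count_nonzero(mask)) / mask.size  # fraction of pixels flagged
        return mask, coverage

Reporting a simple metric like coverage alongside the mask is what makes results comparable across inspection campaigns, which is the repeatability emphasis in the bullets above.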

3) Data platform (Backend + Integrations + Dashboards)

  • Build and maintain services for data exploitation: APIs, connectors, and backend components.
  • Model and maintain databases (e.g., PostgreSQL) and support lightweight ETL flows and exports for clients.
  • Create visualizations and dashboards for non-expert users, focused on decision-making and traceability.

4) Digital twins (Assets + Evidence + Traceability)

  • Structure assets and campaigns: hierarchies, metadata, criticality, evidence, and "before/after" comparisons.
  • Support the construction of 3D views/models and technical reports oriented to maintenance and operations.
  • Ensure consistency: naming, dataset versioning, and internal standards.

5) Integration and quality (Hardware/Software + Operations)

  • Collaborate with hardware/robotics engineering for smooth integration (interfaces, formats, compute limits).
  • Design and run tests to ensure performance, robustness, and data quality.
  • Document code and processes clearly; maintain version control and good development practices.

Required technical skills

  • Python (required). C++ (desirable).
  • Proficiency with Linux/Unix and terminal tools.
  • Data analysis: NumPy, Pandas, SciPy (or equivalents).
  • Image processing: OpenCV (strongly desirable).
  • Version control: Git (required).
  • Ability to create visualizations (e.g., Matplotlib) and leave tools usable by others.

Desirable (a big plus)

  • ROS/ROS2 (nodes, topics, services, actions).
  • Point clouds and 3D: Open3D, PCL, or others.
  • Machine learning applied to vision: PyTorch/TensorFlow/Keras.
  • Cloud: AWS, Azure, or Google Cloud (storage, processing, or deployment).
  • Docker and notions of REST APIs.

If you are passionate about robotics, software, machine learning, innovation, and development, with a focus on creating new products and services, this is the ideal place to learn and grow professionally. You will research, develop, and implement solutions combining software and hardware to solve challenging problems with high impact on industry and the environment.

We are a multidisciplinary team with a pleasant, relaxed work environment and unique, well-developed, field-proven services. We are enthusiastic about continuing to develop and deploy innovative services and solutions.

We were finalists for the Avonni national innovation award in 2019. We also received the Optimus Pipe 2018 award for the best contribution to the tailings transport industry.

$$$ Full time
Lead Software Architect
  • Improving South America
.Net Cybersecurity CI/CD Cloud Architecture

Improving South America is a leading IT services company that seeks to positively transform the perception of IT professionals through technology consulting, software development, and agile training. We are an organization with a culture that fosters teamwork, excellence, and fun, inspiring our team to build lasting relationships while delivering cutting-edge technology solutions. Our mission is aligned with the Conscious Capitalism movement, promoting an exceptional work environment that drives personal and professional growth within an open, optimistic, and collaborative atmosphere.

Official source: getonbrd.com.

Job functions

  • Lead the design and evolution of scalable, secure, high-performance software architectures, both on-premises and in the cloud, using Microsoft Azure.
  • Define the end-to-end technical architecture of SaaS solutions, ensuring standards of quality, resilience, maintainability, and scalability.
  • Design and lead the implementation of enterprise data warehouses, including data modeling, ETL pipelines, and performance optimization.
  • Work closely with Development, DevOps, and Data teams to ensure smooth integration between applications and data platforms.
  • Create and maintain architectural documentation, diagrams, and technical specifications for applications and data platforms.
  • Act as a technical reference in the use of Azure services such as App Services, Azure Functions, Azure SQL, Cosmos DB, Azure Data Factory, Synapse Analytics, and Azure Storage.
  • Define and enforce architecture standards, best practices, and governance models across projects.
  • Work with business stakeholders to align technical decisions with the company's strategic goals.
  • Evaluate new technologies and tools, proposing improvements in performance, scalability, and cost efficiency.
  • Provide technical leadership and mentoring to developers and engineers, promoting best practices and team growth.

Job requirements

  • Intermediate/advanced English level - B2/C1 (required).
  • 10+ years of software development experience, with progression into technical leadership roles.
  • Experience leading the design and implementation of cloud-native architectures on Microsoft Azure (Azure Functions, Azure SQL, Cosmos DB, Azure Data Factory, Synapse Analytics, and Azure Storage).
  • Strong command of .NET (C#) and ASP.NET Core, with the ability to make architecture decisions and apply good development practices.
  • Experience leading teams in the use of Azure DevOps, including CI/CD, release management, and code quality.
  • Ability to design solutions focused on scalability, fault tolerance, security, and compliance.
  • Knowledge of data warehousing, with the ability to guide decisions on data handling and strategy.
  • Strong focus on cybersecurity, ensuring compliance with standards and best practices in development.
  • Experience leading teams under agile methodologies, promoting continuous improvement and collaboration.
  • Strong communication, leadership, and technical influence skills.

Benefits

  • 100% remote.
  • Vacations and PTO.
  • Possibility of receiving 2 bonuses per year.
  • 2 salary reviews per year.
  • English classes.
  • Apple equipment.
  • Online course platform.
  • Budget for buying books.
  • Budget for buying work materials.

Internal talks Improving South America offers space for internal talks or presentations during working hours.
Computer provided Improving South America provides a computer for your work.
Vacation over legal Improving South America gives you paid vacations over the legal minimum.
Vacation on birthday Your birthday counts as an extra day of vacation.
$$$ Full time
Senior Solutions Architect
  • BC Tecnología
  • Santiago (Hybrid)
REST API Microservices Cloud Computing CI/CD
BC Tecnología is an IT consultancy experienced in designing solutions for clients in financial services, insurance, retail, and government. We focus on consulting and solution design, team building, staff outsourcing, project development, and IT support and administration services. Our culture favors professional growth, integration, and knowledge sharing across teams. In this role, you will lead the definition and validation of technical architectures for solutions in the retail sector, coordinating with implementation teams and ensuring interoperability between core systems, middleware, SaaS platforms, and hybrid environments.
The position sits within innovative projects for high-profile clients across multiple sectors, promoting agile practices, stakeholder management, and a focus on data quality and security.

Apply to this job from Get on Board.

Functions and responsibilities

  • Gather requirements for, design, and validate technical architectures for retail-sector solutions, ensuring interoperability between core systems, middleware, SaaS platforms, and hybrid environments (cloud and on-premise).
  • Translate business requirements into robust, scalable technical solutions aligned with the company's and the client's strategy.
  • Lead technical implementation and collaborate with development, DevOps, security, and operations teams.
  • Define integration patterns (REST APIs, events, files, SFTP), process orchestration, and event bus design; manage microservices and middleware.
  • Model and document components and interfaces (C4, BPMN, sequence diagrams) to guarantee clarity and traceability.
  • Data management: ensure data quality, consistency, and performance; work with analytics and storage solutions (e.g., analytical databases, optimized schemas).
  • Security and compliance: design secure architectures, access control, encryption, and regulatory compliance.
  • Agile and DevOps methodologies: promote CI/CD practices, deployment automation, monitoring, and continuous improvement.
  • Manage business and technology stakeholders, with effective communication and technical leadership for multidisciplinary teams.
  • Evaluate and select technologies aligned with business goals; be proactive in solving problems and managing multiple priorities.
  • Desirable knowledge of CRM, especially Customer Services.

Profile and requirements

Mandatory requirements: more than 5 years of experience in solution architecture roles, preferably in retail, consumer goods, or industries with heavy system integration; proven experience in core system integration projects and cloud/SaaS solutions; experience with platform migrations, modernization, or implementation; experience leading technical teams and managing business and technology stakeholders; effective communication and leadership skills for interacting with multidisciplinary teams; command of cloud computing (AWS) and integration patterns (REST, events, files, SFTP); experience with analytical databases (e.g., Redshift) and ETL/ELT; knowledge of security, compliance, and DevOps practices; detailed architecture design covering data flows, interoperability, and resilience; modeling and documentation of components and interfaces; ability to evaluate technologies and ensure interoperability between legacy, cloud, and SaaS systems; good development practices. Desirable: knowledge of CRM, specifically Customer Services.

Desirable requirements

Strategic vision and results orientation; ability to manage multiple priorities; proactivity and problem solving; good development practices; communication and negotiation skills with stakeholders; a customer-oriented approach and the ability to translate business needs into effective technical solutions.

Benefits

At BC Tecnología we promote a collaborative work environment that values commitment and constant learning. Our culture is oriented toward professional growth through integration and knowledge sharing across teams.
The hybrid arrangement we offer, based in Las Condes, combines the flexibility of remote work with in-person collaboration, enabling a better balance and a more dynamic workday.
You will take part in innovative projects with high-profile clients across diverse sectors, in an environment that fosters inclusion, respect, and technical and professional development.

Gross salary $2000 - 2200 Full time
Developer Specialist
  • BC Tecnología
  • Santiago (Hybrid)
Node.js Scrum Microservices Angular
BC Tecnología is an IT services company that manages portfolios, develops projects, and provides outsourcing and recruitment of professionals for clients in the financial, insurance, retail, and government sectors. We are looking for a Developer Specialist to join agile teams working on software development and data migration projects, focused on delivering increments of value for highly demanding clients. The candidate will take part in application development and maintenance initiatives, leading and collaborating on integration and data processing solutions within a Scrum framework, with a focus on quality, scalability, and performance. The role is hybrid, combining office and remote work to ensure efficient execution and continuous delivery of value.

© Get on Board.

Functions and responsibilities

  • Develop and maintain applications and processes, ensuring quality, performance, and scalability.
  • Participate in agile teams and the delivery of product increments focused on business value.
  • Analyze and model data for IT solutions, including migrations between platforms, batch/bulk processes, and ETL.
  • Work with SQL Server, SSIS, ASP.NET, .NET Framework, and Angular, applying design patterns and good development practices.
  • Contribute to the technical and architectural definition of solutions, including microservices (Node.js) and integrations with AWS.
  • Collaborate with multidisciplinary teams and support continuous improvement of processes and Scrum methodologies.

Requirements and desired profile

Minimum requirements:
  • University degree in Systems Engineering, Computer Science, or a related field.
  • At least 3 years of software development experience.
  • Experience with data migration between platforms, batch/bulk processes, and ETL.
  • Knowledge of SQL Server, SSIS, ASP.NET, .NET Framework, and MVC with Angular.
  • Experience with microservices (Node.js) and AWS integration.
  • Command of Scrum methodologies and work in agile environments.
Desirable competencies: analytical ability, results orientation, proactivity, teamwork, good communication skills, and the ability to adapt to dynamic environments.

Desirable requirements

The following will be valued:
  • Additional knowledge of cloud services, containers, and orchestration tools.
  • Experience designing and migrating data solutions in regulated environments.
  • Experience in client/finance settings and organizational change projects.

Benefits and culture

At BC Tecnología we promote a collaborative work environment that values commitment and constant learning. Our culture is oriented toward professional growth through integration and knowledge sharing across teams.
The hybrid arrangement we offer, based in Las Condes, combines the flexibility of remote work with in-person collaboration, enabling a better balance and a more dynamic workday.
You will take part in innovative projects with high-profile clients across diverse sectors, in an environment that fosters inclusion, respect, and technical and professional development.

Health coverage BC Tecnología pays or copays health insurance for employees.
Computer provided BC Tecnología provides a computer for your work.
$$$ Full time
Python Node.js SQL Django

WiTi is a company that supports large retail holdings in implementing omnichannel solutions based on data and audiences. This role joins a retail media unit focused on segmentation and audience products for multiple countries in the region. You will work with business, data, and engineering teams to drive products that connect brands with millions of customers through data-driven advertising platforms, with a focus on scalability and performance.

This company only accepts applications on Get on Board.

Main functions and responsibilities

  • Design, develop, and maintain backend services in Python, exposing scalable, robust APIs (FastAPI; Django experience desirable); a minimal sketch follows this list.
  • Develop frontend interfaces in ReactJS, ideally with experience in modern architectures such as microfrontends.
  • Model and operate SQL and NoSQL databases, guaranteeing data integrity, performance, and quality.
  • Collaborate on ETL flows and Big Data solutions to build audiences and advanced segmentations.
  • Participate in architecture decisions using monorepos with NX (PolyRepo experience desirable).
  • Work with Docker containers and Kubernetes orchestration to deploy and operate in production.
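
For illustration, here is a minimal FastAPI sketch of the kind of audience-segment API described above; the Segment model and in-memory store are hypothetical stand-ins for the real SQL/NoSQL layer.

    from fastapi import FastAPI, HTTPException
    from pydantic import BaseModel

    app = FastAPI()

    class Segment(BaseModel):
        id: int
        name: str
        size: int

    # Hypothetical in-memory store standing in for the real database.
    SEGMENTS = {1: Segment(id=1, name="frequent_buyers", size=125_000)}

    @app.get("/segments/{segment_id}", response_model=Segment)
    def get_segment(segment_id: int) -> Segment:
        segment = SEGMENTS.get(segment_id)
        if segment is None:
            raise HTTPException(status_code=404, detail="segment not found")
        return segment

Run locally with, e.g., uvicorn app_module:app --reload (assuming the file is app_module.py); the response_model declaration gives typed validation and an OpenAPI schema for free.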

Required profile and experience

We are looking for a Senior Full Stack Developer with proven experience in end-to-end application development. Command of Python on the backend is required, ideally in production environments, along with solid experience building interfaces in ReactJS. The candidate must know the retail world and have experience in ecommerce or retail media projects. Experience with FastAPI (and/or Django), knowledge of Node.js (NestJS) as part of the backend stack, and experience with microfrontends or modular frontend architectures are valued. Experience with ETL processes and Big Data platforms, work with monorepos (NX) and CI/CD pipelines, and familiarity with Docker and Kubernetes in production environments are expected. Basic English for reading technical documentation.

Desirable knowledge and skills

• Experience with FastAPI and good REST API design practices.
• Knowledge of Node.js (NestJS) as part of the backend.
• Experience with modular frontend architectures (microfrontends).
• Experience with Big Data and ETL pipelines.
• Work with NX monorepos and CI/CD environments.
• Docker and Kubernetes in production.
• Basic English for reading technical documentation.

Benefits and conditions

At WiTi we offer a 100% remote work environment, with flexibility and autonomy. We foster a collaborative atmosphere and a culture of constant learning. Highlighted benefits:

  • A personalized career plan for professional development.
  • Certifications to keep growing in your career.
  • Language courses to support personal and professional development.

If you are passionate about technology and want to join our team, we want to meet you.

Digital library Access to digital books or subscriptions.
Computer provided WiTi provides a computer for your work.
Personal coaching WiTi offers counseling or personal coaching to employees.
Gross salary $3500 - 4300 Full time
Redis REST API Node.js MongoDB

Breezy HR is a remote-first hiring platform tailored for small and mid-sized businesses. We are expanding our SaaS product with LLM-enabled workflows and a backend-first focus to deliver fast, reliable experiences for both candidates and hiring managers. You’ll contribute to core features, improve data pipelines, and integrate managed AI capabilities (AWS Bedrock) to power smarter recruiting processes. This role sits at the intersection of product engineering and AI-enabled automation, driving end-to-end delivery from design to production.

Official source: getonbrd.com.

What you’ll own

  • Lead delivery for major features: decompose complex problems, drive execution, and bring initiatives to production.
  • Build and evolve backend services: design, implement, and improve REST APIs, microservices, data ingestion/processing, and third-party integrations.
  • Ship LLM-enabled features (AWS Bedrock): integrate managed LLM services into product workflows with reliability, monitoring, guardrails, and cost/latency awareness (see the sketch after this list).
  • Own quality in production: debug across services, optimize performance, and uphold correctness.
  • Collaborate cross-functionally with Product and mentor teammates through reviews and collaboration.
  • Maintain hands-on ownership with a pragmatic, ship-and-iterate mindset.
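
As a hedged illustration of such an integration (not Breezy's actual implementation), here is a minimal sketch using boto3's Bedrock runtime Converse API, available in recent boto3 versions, with client-side timeouts and retries; the model ID, prompt, and helper name are placeholders.

    import boto3
    from botocore.config import Config

    # Retries and read timeout at the client level: cost/latency awareness starts here.
    bedrock = boto3.client(
        "bedrock-runtime",
        region_name="us-east-1",
        config=Config(read_timeout=30, retries={"max_attempts": 3, "mode": "standard"}),
    )

    def summarize_resume(text: str) -> str:
        # Hypothetical helper; model ID and prompt wiring are placeholders.
        response = bedrock.converse(
            modelId="anthropic.claude-3-haiku-20240307-v1:0",
            messages=[{"role": "user", "content": [{"text": f"Summarize this resume:\n{text}"}]}],
            inferenceConfig={"maxTokens": 512, "temperature": 0.2},
        )
        return response["output"]["message"]["content"][0]["text"]

Capping maxTokens and temperature, and bounding retries at the client, are the simple knobs that keep a production LLM call predictable in both cost and latency.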

What you’ll bring

We’re seeking a senior backend engineer with 7+ years of web application experience and a strong track record shipping scalable, API-driven systems. You’ve built and operated production services in Node.js, including microservices, REST APIs, and asynchronous workflows. You’re comfortable working with data stores like MongoDB and Redis (schema design, indexing, caching, performance). This role requires that you’ve shipped at least one production LLM workflow end-to-end using AWS Bedrock (not a prototype), with reliability and cost/latency in mind. You communicate clearly in English (B2+ required, C1 preferred), document decisions, work autonomously with a bias toward action, and bring strong product ownership, turning ambiguous goals into shipped outcomes. You must be located in Colombia for payroll/compliance.

Nice-to-have

Deeper AWS infrastructure experience (e.g., Terraform/CDK/CloudFormation, networking, CI/CD, and production observability patterns). Frontend experience with modern frameworks like React, Angular, Vue, or Svelte to help ship end-to-end product changes.

What we offer

Remote-first environment with flexible collaboration across time zones, a startup-paced team culture, and the opportunity to shape AI-enabled features in a growing SaaS product. Competitive salary in COP, exposure to cutting-edge LLM-driven workflows, and a collaborative, low-ego team. You’ll work with a distributed engineering and product squad focused on fast, reliable delivery.

Flexible hours Flexible schedule and freedom for attending family needs or personal errands.
Computer provided Breezy HR provides a computer for your work.
Informal dress code No dress code is enforced.
$$$ Full time
Automation Jira Confluence Project Management
Niuro connects projects with elite tech teams, collaborating with leading U.S. companies. We empower projects by providing autonomous, high-performance engineering squads and handle end-to-end administrative tasks so clients can accelerate delivery. The Health and Life Sciences sector is a strategic focus for us, including healthcare providers, pharmaceutical companies, and medical technology firms. This role contributes to impactful, technically rigorous initiatives that drive innovation, while offering ongoing career development, leadership opportunities, and a pathway to long-term collaboration.
As part of Niuro’s global ecosystem, you will join a multidisciplinary team dedicated to delivering scalable, high-quality Salesforce solutions for complex healthcare workflows. You will engage with a diverse client base, operate remotely across LATAM, and benefit from a robust support infrastructure designed to accelerate success and enable you to focus on delivering exceptional results.

Apply to this job from Get on Board.

Key Responsibilities

  • Serve as the primary client contact for assigned Salesforce engagements, leading discovery sessions, clarifying requirements, communicating trade-offs, and maintaining trusted advisor relationships.
  • Translate business needs into technical specifications and execute hands-on Salesforce configuration, including custom objects, automation (Flows, Process Builder, Automation Rules), and UX design considerations.
  • Own end-to-end project execution using JIRA: backlog creation and prioritization, sprint planning, risk identification, and on-time delivery of milestones.
  • Produce comprehensive documentation in Confluence: meeting notes, detailed requirements specs, solution architecture decisions, and client-facing project plans.
  • Lead end-to-end QA testing: design test cases, validate configurations against acceptance criteria, reproduce issues, and sign off on release readiness.
  • Manage data migration workstreams: profile source data, design mapping strategies, execute loads (native or third-party tools), and verify post-migration data integrity.
  • Facilitate internal alignment meetings to ensure engineering handoffs are crisp and blockers are resolved within 24 hours.

What You’ll Bring

We are seeking a Senior Salesforce Consultant with 5+ years of hands-on experience in Salesforce implementation, consulting, or platform management. You will balance strategic client partnership with disciplined project execution in a fast-moving environment.
Required skills include deep expertise with Salesforce configuration: custom fields, Lightning pages, Flows, validation rules, and security models. You should have a proven track record of direct client-facing work in consulting or professional services, demonstrated ownership of timelines and deliverables, and strong experience using JIRA for task tracking and sprint coordination. Excellent documentation skills (Confluence or equivalent) and experience conducting formal QA testing cycles are essential. Competence with data migration processes (ETL, mapping, loading, and integrity verification) is highly desirable.
Healthcare industry knowledge accelerates impact but is not required to start. You must be comfortable with ambiguity, able to switch contexts rapidly between stakeholder management, technical configuration, and quality assurance, and be adept at producing clear, actionable artifacts for clients and internal teams.

Desirable Skills & Experience

Experience delivering large-scale Salesforce implementations within Health and Life Sciences is highly advantageous. Certifications such as Salesforce Certified Administrator, Sales Cloud Consultant, Service Cloud Consultant, or Platform Developer I/II are a plus. Familiarity with data privacy regulations common to healthcare (e.g., HIPAA) and secure handling of patient data is beneficial. Strong stakeholder management, negotiation, and presentation skills, coupled with a collaborative mindset and a demonstrated ability to drive results in multi-year client engagements, are desirable traits.

What Niuro Offers

We provide the opportunity to participate in impactful and technically rigorous industrial data projects that drive innovation and professional growth. Our work environment emphasizes technical excellence, collaboration, and continuous innovation.
Niuro supports a 100% remote work model, allowing flexibility in work location globally. We invest in career development through ongoing training programs and leadership opportunities, ensuring continuous growth and success.
Upon successful completion of the initial contract, there is potential for long-term collaboration and stable, full-time employment, reflecting our long-term commitment to our team members.
Joining Niuro means becoming part of a global community dedicated to technological excellence and benefiting from a strong administrative support infrastructure that enables you to focus on impactful work without distraction.

Gross salary $4800 - 5700 Full time
Tech Manager
  • Artefact LatAm
  • Ciudad de México (Hybrid)
Business Intelligence Data Architecture Problem Solving Data Modeling

At Artefact LatAm, we are a leading consultancy focused on accelerating the adoption of data and artificial intelligence to generate positive impact.

As Tech Manager, you will lead the technical vision and strategic execution of advanced solutions in Data Engineering, BI, and AI, guaranteeing scalable, high-impact architectures. You will be the catalyst for complex digital transformations, managing multidisciplinary teams and acting as the critical bridge between clients' business goals and technological innovation. Your approach will integrate delivery excellence, data governance, and talent development, consolidating global standards that position the company as a technical benchmark in the market.

© Get on Board.

Job functions

Data and Technology Capabilities: Design, implement, and scale robust solutions (predictive models, AI segmentation, and real-time BI), guaranteeing technical excellence, scalability, and reliability.

Transformation Leadership: Act as technical lead on data and AI initiatives, guiding teams through complex transformations under engineering best practices and solid architecture.

Strategy and Architecture: Define the technical vision for data platforms and BI ecosystems, aligning infrastructure, cloud, governance, and security decisions with business goals.

Project Excellence: Own end-to-end execution, quality, and performance. Anticipate technical risks and manage dependencies to ensure on-time, in-scope deliveries.

Team and Client Management: Direct and advise multidisciplinary teams, fostering an engineering culture. Act as the main technical contact for clients, translating business needs into scalable solutions.

Continuous Innovation: Evaluate new data and AI technologies and tools, driving experimentation and proof-of-concept validation.

Job requirements

  • 8 years of experience leading data-related projects.
  • Proven technical leadership: extensive experience leading data, BI, or AI projects in complex environments.
  • Engineering mindset: solid knowledge of data architectures, cloud platforms, data pipelines, and AI/ML lifecycles.
  • Analytical and problem-solving ability: a passion for solving complex problems using data and technology.

You earn extra points if...

  • Innovative: curious and forward-looking, always exploring new tools and approaches to improve solutions and efficiency.
  • Autonomous and accountable: able to drive technical initiatives independently and take full responsibility for outcomes.
  • A strong communicator: able to bridge technical teams and non-technical stakeholders.

Conditions

  • Fast professional growth: a mentoring plan for training and career advancement, with raise and promotion review cycles every 6 months.
  • Up to 11 vacation days beyond the legal minimum, to rest and build a healthy work-life balance.
  • Participation in the company profit bonus, plus bonuses for employee referrals and for bringing in clients.
  • A free half-day on your birthday, plus a small gift.
  • Biweekly paid team lunches at our hubs (Santiago, Bogotá, Lima, and Ciudad de México).
  • Flexible hours and goal-oriented work.
  • Remote work, with the option of going hybrid (office in Santiago de Chile, paid cowork in Bogotá, Lima, and Ciudad de México).
  • Extended postnatal leave for men, and coverage of the health-system pay difference for women (Chile).

Flexible hours Flexible schedule and freedom for attending family needs or personal errands.
Internal talks Artefact LatAm offers space for internal talks or presentations during working hours.
Meals provided Artefact LatAm provides free lunch and/or other kinds of meals.
Partially remote You can work from your home some days a week.
Digital library Access to digital books or subscriptions.
Company retreats Team-building activities outside the premises.
Computer repairs Artefact LatAm covers some computer repair expenses.
Computer provided Artefact LatAm provides a computer for your work.
Performance bonus Extra compensation is offered upon meeting performance goals.
Personal coaching Artefact LatAm offers counseling or personal coaching to employees.
Conference stipend Artefact LatAm covers tickets and/or some expenses for conferences related to the position.
Informal dress code No dress code is enforced.
Vacation over legal Artefact LatAm gives you paid vacations over the legal minimum.
Vacation on birthday Your birthday counts as an extra day of vacation.
Parental leave over legal Artefact LatAm offers paid parental leave over the legal minimum.
$$$ Full time
JavaScript PostgreSQL Node.js DevOps

Company and Project Context

BNamericas is the leading Latin American business intelligence platform with 28 years of experience delivering news, project updates, and data on people and companies across strategic sectors such as Electric Power, Infrastructure, Mining & Metals, Oil & Gas, and ICT. We empower clients to access high-value information to make informed business decisions. The Engineering Lead will play a pivotal role in shaping a growing information platform used across industries and geographies, driving architecture, data workflows, and product evolution.

As part of a dynamic, multicultural team, you will drive high-performance software, data, and cloud initiatives, ensuring scalability, reliability, and security while fostering a culture of engineering excellence. This role combines hands-on development with strategic leadership to deliver a modular, scalable platform and to integrate cutting-edge AI-enabled capabilities where appropriate.

Originally published on getonbrd.com.

Core Responsibilities

  • Lead by example as a senior developer: design, implement, and review high-performance, maintainable code following clean code principles, testing, CI, and agile practices.
  • Shape and evolve system architecture with emphasis on scalability, modularity, security, and reliability; drive architectural decisions and technical direction.
  • Drive integration initiatives, including seamless Appian integration with the platform and interconnectivity between internal systems and tools.
  • Lead and mentor engineers, fostering accountability, continuous improvement, and high performance; remove blockers and optimize development workflows.
  • Oversee infrastructure planning and operations to ensure high availability, cost-efficiency, and robust security.
  • Guide data solutions, including data warehousing, transformations, and overall data architecture; oversee data acquisition, including web scraping strategies and automation.
  • Manage relationships with external partners (e.g., scraping providers) to ensure quality and alignment with technical standards.
  • Explore and help implement modern AI-driven solutions (e.g., agent-based AI) to enhance data workflows, automation, and product capabilities.
  • Partner with senior stakeholders across product, content, and business teams to align engineering efforts with company priorities.
  • Contribute to long-term technical direction and platform evolution to ensure scalability and sustainability.
  • Evaluate emerging technologies and introduce tooling or architectural improvements where relevant; steer platform evolution into a scalable, modular, high-quality technical solution.
  • Support the continued evolution of the platform to meet expanding geographic and sector coverage, ensuring robust data pipelines and a secure, resilient system.

Ideal Profile

What you’ll bring

Proven experience in a senior or lead engineering role, ideally within SaaS or data/information platforms. Strong hands-on development skills in JavaScript, Node.js, and PostgreSQL with a track record of scalable system design. Solid understanding of DevOps, cloud infrastructure (AWS), and security best practices. Experience with data architecture, including data warehousing and transformation pipelines. Experience integrating third-party platforms (e.g., Appian) and working with internal data pipelines. Familiarity with web scraping technologies, automation, and management of external vendors. Exposure to or interest in AI-driven solutions (e.g., agent-based AI) is a strong plus. Fluent English is required; Spanish and/or Portuguese are a strong plus. Strong communication skills and the ability to collaborate with both technical and non-technical stakeholders. A strategic mindset with the ability to balance hands-on delivery and broader technical direction. An entrepreneurial attitude focused on quality, ownership, and impact.

Why you’ll love this role

You will shape and advance a growing information platform used across industries and geographies. This is a high-impact position with significant ownership, offering the chance to influence technical direction, data strategy, and product evolution while helping to build a culture of engineering excellence. You’ll work with a collaborative, diverse team in a dynamic market, and you'll have the opportunity to leave a lasting imprint on our platform and product roadmap.

Benefits

At BNamericas, we foster an inclusive, diverse, creative, and highly collaborative work environment. Our team is dynamic, committed, and always willing to support one another, creating a positive and motivating workplace.

We offer a range of benefits, including referral bonuses for bringing in new talent, early finishes on special occasions such as national holidays and Christmas, opportunities for continuous learning and professional development, and a casual dress code that encourages authenticity and comfort at work.

We invite you to be part of a company that values diversity and work-life balance, and that promotes an empowered, goal-oriented, and passionate way of working. Join us!

Fully remote You can work from anywhere in the world.
Gross salary $3500 - 5000 Full time
Django React TypeScript Web Architecture
Revel Street LLC helps corporate event planners discover and reach private dining venues through an extensive, dependable database. We use LLMs extensively to gather and enrich venue data, streamline the event planning workflow, and reduce the time and effort required to source options for events such as private dining, cocktail receptions, and conferences. As a Senior Full Stack Engineer, we’ll ask you to build and maintain the end-to-end web experience that powers these workflows—turning data pipelines and agentic tooling into reliable, user-friendly product features. Our current stack includes React, TanStack, Cloudflare, Django, and Dagster, and we expect you to design solutions that are scalable, testable, and grounded in core engineering fundamentals.

Apply directly on Get on Board.

Role Description

We’re hiring a Senior Full Stack Engineer for a contract, remote role focused on agentic coding. You’ll write 90%+ of your code in an exclusively agentic coding environment such as Claude Code (or a similar setup). This is not a “vibe coder” position—we expect strong fundamentals, thoughtful engineering, and disciplined delivery.
Your goals
  • Design, develop, and maintain front-end and back-end components of our web applications.
  • Build agentic systems, pipelines, and workflows that reliably support our data and product needs.
  • Ensure quality through manual testing, debugging, and performance-focused iteration.
  • Deploy scalable solutions and keep them operating smoothly.
Day-to-day responsibilities
  • Create and evolve user-facing features in the React/TypeScript ecosystem.
  • Implement and maintain server-side functionality in Django and related services.
  • Work with Cloudflare for performance and delivery considerations.
  • Develop and maintain data/ops workflows using Dagster (and related pipeline patterns); see the sketch after this list.
  • Design “agentic” workflows and pipelines that translate LLM-driven capabilities into dependable software behavior.
  • Perform manual testing, debugging, and validation to ensure correctness and usability.
  • Collaborate with cross-functional teams to align engineering work with product goals.
  • Stay current with technology trends and apply them pragmatically where they improve outcomes.
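
For illustration, here is a minimal Dagster sketch of the kind of pipeline pattern mentioned above; the asset names and filtering logic are hypothetical, not Revel Street's actual workflows.

    import pandas as pd
    from dagster import Definitions, asset

    @asset
    def raw_venues() -> pd.DataFrame:
        # Placeholder extraction step; in reality this would pull from the
        # LLM-driven enrichment pipeline rather than a literal DataFrame.
        return pd.DataFrame([{"name": "Example Bistro", "capacity": 40}])

    @asset
    def private_dining_venues(raw_venues: pd.DataFrame) -> pd.DataFrame:
        # Downstream asset: Dagster infers the dependency from the parameter name.
        return raw_venues[raw_venues["capacity"] >= 20]

    defs = Definitions(assets=[raw_venues, private_dining_venues])

Modeling each step as an asset gives lineage, re-materialization, and observability out of the box, which is what makes agentic/LLM outputs auditable before they reach the product.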

Qualifications

Required
  • Very high English proficiency (clear communication, strong writing, and the ability to collaborate effectively).
  • At least 4 years of full stack experience, with solid experience in the React/TypeScript ecosystem.
  • At least 6 months of experience working exclusively in an agentic coding environment (e.g., Claude Code, Codex).
  • We require that the work is done in an agentic coding environment; VSCode Copilot and “copy/paste from ChatGPT” do not count as agentic coding experience.
  • Strong problem-solving skills with strong attention to detail.
  • Strong product design sense (we care about UX and practical product judgment).
  • Ability to understand fundamentals, not just generate code—debugging, reasoning about behavior, and ensuring correctness.
Bonus (preferred)
  • Bachelor’s degree in Computer Science, Engineering, or a related field.
How we work
  • You’ll proactively turn ambiguous requirements into well-structured engineering plans.
  • You’ll communicate trade-offs and risks early, and you’ll verify outcomes through hands-on testing.
  • You’ll bring a “build, measure, improve” mindset to performance, reliability, and user experience.

Desirable

Desirable skills and experience
  • You have used orchestrators that can run multiple agents simultaneously, such as Superset, Cmux, or Conductor.
  • Comfort designing workflows that combine agentic coding outputs with human review, validation, and testing.
  • Practical experience with scalable web application architecture and reliability practices.

Benefits

  • We provide a Claude Code Max plan ($100 per month plan, $200 if you need it)
  • High ownership of the codebase and the product

Fully remote You can work from anywhere in the world.
$$$ Full time
Senior Front-end Developer
  • Sanctuary Computer
E-commerce TypeScript Testing Frameworks Next.js

In this role, you’ll work on a variety of client projects to find cost-effective, high-quality, pragmatic solutions to complex problems. Responsibilities will include:

  • Collaborating with Technical Lead to meet clients' development needs
  • Building and maintaining high-performance web applications with modern frontend frameworks and tools
  • Implementing responsive, accessible, and pixel-perfect user interfaces based on design specifications
  • Integrating frontend applications with headless CMS platforms, APIs, and third-party services
  • Optimizing application performance, including bundle size, load times, and runtime efficiency
  • Architecting scalable component libraries and design systems for consistency across projects
  • Writing clear documentation for code maintenance and usage
  • Participating in project team meetings, including Sprint Planning, daily standups, and retrospectives
  • Participating in code reviews, providing constructive feedback to teammates and ensuring adherence to best practices

Apply to this posting directly on Get on Board.

Job functions


We're looking for a Senior Frontend Developer who excels at building pixel-perfect websites using modern frontend frameworks. You'll collaborate with our team to build elegant, performant, and visually stunning web experiences. Your work will span a diverse range of client projects, from immersive brand websites to complex web applications, all requiring a keen eye for detail and technical excellence.

The person we’re looking for is happy, relaxed and easy to get along with. They’re flexible on anything except conceits that will lower their usually outstanding work quality. They work “smart”, by carefully managing their workflow and staggering features that have dependencies intelligently — they prefer deep work but are OK coming up to the surface now and then for top level / strategic conversations.

We believe people with backgrounds or interests in design, art, music, food or fashion tend to have a well rounded sense of design & quality — so a variety of hobbies or side projects is a big nice to have!

Quick tip: Kindly submit a complete and thoughtful application, including relevant links that help verify your work experience and identity. Applications with missing or insufficient information will not move forward in the review process.
Our team carefully reviews every complete submission, and we truly appreciate the time and effort you put into applying.

Qualifications and requirements

Must Have Competencies:
  • 8+ years writing highly performant frontend code, an obsession for 95+ Lighthouse scores
  • Expert-level experience with TypeScript, and one of Next.js, Nuxt, Svelte, Vue
  • Extensive experience with headless CMS like Sanity, Contentful, Prismic or more
  • Fluency in industry standard PaaS like Vercel, Netlify, Firebase, etc
  • Fluency in eCommerce technologies like Shopify (headless & liquid), Stripe, Swell and others
  • Experience building accessible, responsive interfaces with attention to performance optimization and SEO best practices
  • Strong understanding of modern CSS methodologies (Tailwind, CSS Modules, etc) and animation libraries
  • Experience with state management solutions (Redux, Zustand, Pinia) and API integration patterns
  • Proficiency with testing frameworks (Jest, Playwright, Cypress) and commitment to writing maintainable, well-documented code
  • Experience with design systems and component libraries, working closely with designers to ensure pixel-perfect implementations
  • Real-time & performance optimization: experience with WebSockets for live data updates, caching strategies (Redis, CDN-level caching), CDN configuration and optimization (Cloudflare, Fastly), and image optimization techniques including proxies and delivery networks
Nice to Have Competencies:
We’re always pitching for new and exciting technology niches. Some of the areas below are relevant to us!
  • WebGL & Canvas expertise: experience building interactive graphics, animations, and visualizations using WebGL, Three.js, or native Canvas API
  • Data visualization: creating compelling, interactive data visualizations with libraries like Mapbox, D3.js, Chart.js, or similar tools
  • Full-stack development experience: comfortable working across the entire stack, from frontend to backend and database layers
  • PostgreSQL expertise: strong experience with database design, query optimization, and managing complex relational data structures
  • GraphQL & API design: building and maintaining GraphQL or REST APIs with a focus on performance and developer experience
  • Real-time technologies: experience with WebSockets, Server-Sent Events, or similar technologies for building live, interactive features
  • Authentication & security: implementing secure authentication flows (OAuth, JWT) and following security best practices
  • Client-facing experience: working directly with customers to gather requirements and provide technical solutions
  • Product management experience: defining product roadmaps and collaborating closely with stakeholders
  • Engineering management experience: leading teams, setting technical direction, and mentoring developers

Conditions

Flexible hours Flexible schedule and freedom for attending family needs or personal errands.
Gross salary $4000 - 5500 Full time
Python DevOps Artificial Intelligence CI/CD

Krunchbox is a retail analytics platform used by global consumer brands to transform point-of-sale data into actionable insights. Our platform ingests and processes large volumes of retail data from retailers across North America and Australia, helping brands understand sales performance, optimize inventory, and make smarter supply chain decisions. We are modernizing our platform architecture and rebuilding core components with a focus on scalability, performance, and AI-driven insights. The Senior Backend Engineer will help design and build the next generation of our data platform, collaborating with engineering, product, and data teams to deliver scalable backend services, data ingestion pipelines, and robust cloud infrastructure. This role is ideal for those who enjoy data-intensive systems, large-scale processing, and AI-enabled workflows in a fast-growing SaaS environment.

Apply to this posting directly on Get on Board.

Functions

Design and build scalable backend services using Python and FastAPI. Develop and maintain data ingestion and processing pipelines that power analytics across global brands. Build and maintain API services that drive the Krunchbox platform. Improve system performance, reliability, and scalability. Implement and maintain cloud infrastructure and DevOps pipelines. Collaborate with product, engineering, and data teams to deliver new capabilities. Participate in architecture and platform design decisions. Write clean, well-tested, maintainable code. Contribute to engineering best practices and documentation.

Description

We are looking for a senior backend engineer with 5+ years of Python experience and a strong background in building API services and data-intensive systems. You should be proficient with FastAPI or similar async frameworks, design RESTful APIs following best practices, and have hands-on experience with cloud platforms (AWS, Azure, GCP) and CI/CD pipelines. You will work on scalable data pipelines, large datasets, and AI-enabled enhancements, contributing to a modern, AI-native engineering culture. Collaboration across engineering, product, and data teams is essential, as is a proactive approach to performance, reliability, and documentation. Familiarity with analytics databases (ClickHouse, Snowflake, BigQuery, Redshift) and data orchestration tools (Airflow, Dagster, Prefect) is highly desirable. We value ownership, fast iteration, and a passion for solving complex engineering problems in a high-growth SaaS environment.

Desirable

Nice-to-have skills include full-stack experience (React, TypeScript), experience with analytics databases (ClickHouse, Snowflake, BigQuery, Redshift), data pipeline tooling (ETL/ELT), and AI/ML infrastructure familiarity. Prior experience in SaaS startups or high-growth tech companies, and a track record of owning systems from design through deployment, are also beneficial. Comfort with AI-assisted development tools (e.g., Claude Code) to accelerate coding, debugging, and architecture exploration is a plus.

Benefits

  • Competitive compensation package.
  • Comprehensive health and benefits coverage.
  • A predominantly in-person, collaborative work environment located in Santiago to encourage fast iteration and real-time problem solving.
  • Opportunity to scale and lead a global SaaS platform that solves real-world customer challenges.
  • A direct, impactful role in shaping the future of AI-powered supplier-retailer collaboration.

$$$ Full time
Technical Lead
  • ARKHO
  • Bogotá or Cali (Hybrid)
Python SQL Spark Data lake

ARKHO is an IT consultancy offering expert services in application modernization, data analytics, advanced analytics, and cloud migration. Our work facilitates and accelerates cloud adoption across multiple industries.

We stand out as an Amazon Web Services Advanced Partner with a strategic focus on building cloud-based solutions. We are obsessed with achieving our goals and place special emphasis on the people who make up ARKHO (our Archers), recognizing our team as a vital component in delivering results.

Sound motivating? We look forward to meeting you!

This job offer is available on Get on Board.

🎯 Role Objective

Lead the definition, design, and implementation of Data & AI solutions in cloud environments, ensuring scalable, efficient architectures aligned with business needs, while guiding the technical team toward high standards of quality, innovation, and continuous value delivery.

🧭 Archer Profile

We are looking for a professional who combines technical depth in Data & AI with strong leadership skills and business vision.
Someone who:
  • Can make strategic technical decisions.
  • Moves comfortably between technical detail and conversations with stakeholders.
  • Leads by example, raising the team's level and fostering best practices.
  • Is hands-on when needed, but also knows how to delegate and guide.
  • Adapts to dynamic environments, with an agile mindset and a focus on impact.
🧩 Requirements
  • 7+ years of experience in software development, Data Engineering, or related roles.
  • Experience leading technical teams and complex end-to-end projects.
  • Solid command of Python and SQL, plus experience with data architectures, distributed processing (Spark), orchestration (Airflow), and the AWS ecosystem, including implementation of Data Lakes, Data Warehouses, and Lakehouses (see the sketch after this list).
  • Applied AI experience: generative models (LLMs) and RAG techniques.
  • Proficiency with BI tools (Power BI, Tableau, or others).
  • Experience working with agile methodologies.
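As a rough illustration of the Spark requirement above, here is a minimal PySpark batch aggregation. The S3 paths and column names are hypothetical, not an actual ARKHO pipeline.

# Minimal PySpark sketch: aggregate raw orders into a daily silver table.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_agg").getOrCreate()

# Hypothetical bronze-layer path.
orders = spark.read.parquet("s3://example-lake/bronze/orders/")

daily = (
    orders
    .withColumn("order_date", F.to_date("created_at"))
    .groupBy("order_date", "country")
    .agg(F.sum("amount").alias("revenue"), F.count("*").alias("orders"))
)

# Hypothetical silver-layer path, partitioned for downstream consumers.
daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-lake/silver/orders_daily/"
)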

Desirable

Desirable: backend development (Django, FastAPI), DevOps knowledge (CI/CD, Docker, IaC), and AWS certifications (Solutions Architect, Data Engineer, ML).

🌟 Archer Benefits

  • 📆 Administrative day each semester during the first 12 months
  • 🏖️ Week off: 5 extra vacation days
  • 🎉 Celebrate your birthday!
  • 📚 Training path
  • ☁️ AWS certifications
  • 🏡 Flexibility (hybrid work)
  • 💍 Wedding gift + 5 business days off
  • 👶 Gift for the birth of a child
  • ✏️ School kit
  • 🤱 Paternity benefit
  • ❤️ Bonda (discounts and wellness platform)
  • 💰 Holiday bonuses
  • 🧘♀️ ARKHO Open Doors

Flexible hours Flexible schedule and freedom for attending family needs or personal errands.
Vacation over legal ARKHO gives you paid vacations over the legal minimum.
$$$ Full time
Data Engineer AWS
  • ARKHO
  • Cali (Hybrid)
Python SQL ETL Data Engineering
ARKHO is an IT consultancy offering expert services in application modernization, data analytics, advanced analytics, and cloud migration. Our work facilitates and accelerates cloud adoption across multiple industries.
We stand out as an Amazon Web Services Advanced Partner with a strategic focus on building cloud-based solutions. We are obsessed with achieving our goals and place special emphasis on the people who make up ARKHO (our Archers), recognizing our team as a vital component in delivering results.
Sound motivating? We look forward to meeting you!

Opportunity published on Get on Board.

Functions

  • Design, develop, and maintain data pipelines on AWS.
  • Participate in the migration and refactoring of legacy ETL processes to AWS Glue.
  • Implement data ingestion, transformation, and loading processes in a Lakehouse architecture.
  • Develop efficient solutions focused on performance and stability.
  • Monitor, support, and continuously improve production pipelines.
  • Apply Data Quality and data validation practices.
  • Collaborate on metadata, catalog, and data lineage initiatives.
  • Participate in workflow orchestration with tools such as Step Functions.
  • Document technical processes and flows.
  • Work alongside business, BI, and architecture teams.

Requirements

  • 3 to 5 years of experience in Data Engineering.
  • Experience developing ETL/ELT pipelines in production environments.
  • Practical knowledge of data-oriented AWS services: Glue, S3, Athena, Redshift, or similar (see the sketch after this list).
  • Python skills for data processing.
  • Knowledge of PySpark or Spark.
  • Advanced SQL experience.
  • Knowledge of data modeling (Data Warehouse / Lakehouse).
  • Integration with multiple data sources: Oracle, SQL Server, DB2, or others.
  • Experience monitoring and supporting batch processes or production pipelines.
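A minimal sketch of the kind of Glue job implied by the requirements above; the catalog database, table, and bucket names are placeholders.

# Minimal AWS Glue (PySpark) job sketch: read a cataloged source table,
# drop a junk field, and land the result in a curated S3 layer.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue = GlueContext(SparkContext.getOrCreate())
job = Job(glue)
job.init(args["JOB_NAME"], args)

# Hypothetical Glue Data Catalog entries.
src = glue.create_dynamic_frame.from_catalog(database="raw_db", table_name="orders")

curated = src.drop_fields(["_corrupt_record"])
glue.write_dynamic_frame.from_options(
    frame=curated,
    connection_type="s3",
    connection_options={"path": "s3://example-curated/orders/"},  # placeholder bucket
    format="parquet",
)
job.commit()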

Desirable

  • Experience migrating legacy processes.
  • Knowledge of Data Quality, metadata, or data catalogs.
  • Experience with Step Functions, IAM, or Lake Formation.
  • Experience in the financial sector or regulated industries.
  • Experience with Infrastructure as Code (IaC).

Benefits

📆 Administrative day each semester during the first 12 months
🏖️ Week off: 5 extra vacation days
🎉 Celebrate your birthday!
📚 Training path
☁️ AWS certifications
🏡 Flexibility (hybrid work with a remote option)
💍 Wedding gift + 5 business days off
👶 Gift for the birth of a child
✏️ School kit
🤱 Paternity benefit
❤️ Bonda (discounts and wellness platform)

$$$ Full time
Software Architect
  • Improving South America
.Net C# Microservices ETL
At Improving South America, we provide IT services to transform how the IT professional is perceived. We focus on IT consulting, software development, and agile training.

The company promotes an exceptional work culture based on teamwork, excellence, and fun, with a focus on personal growth and shared rewards. Upon joining, you will become part of a community that prioritizes open communication and solid long-term working relationships, backed by a structure for professional development and continuous learning.

We are looking for a Software Architect with experience in Microsoft Azure and data platforms to lead the design of scalable, high-impact solutions.

This role is key to defining the technology architecture, setting standards, and supporting teams in building robust, secure, and maintainable systems.

This job offer is on Get on Board.

Job functions

  • Design scalable, secure, and resilient cloud and on-premise architectures
  • Lead the design of data warehouses, ETL pipelines, and data modeling
  • Define architecture standards, best practices, and technical guidelines
  • Work closely with backend, data, and DevOps teams
  • Evaluate technologies and propose improvements in performance, scalability, and cost
  • Provide technical guidance to teams (mentoring and decision-making)

Qualifications and requirements

  • 7+ years of development experience with .NET (C#, ASP.NET Core)
  • Solid experience in cloud-native architecture and microservices
  • Experience working with Microsoft Azure
  • Knowledge of Data Warehousing, ETL, and data modeling
  • Experience with CI/CD, Azure DevOps, and Infrastructure as Code (ARM or Bicep)
  • Experience designing scalable, secure, fault-tolerant systems
  • Strong communication and technical leadership skills

Desirable skills

  • Experience with Synapse, Data Factory, or BI tools (Power BI, SSIS, SSAS)
  • Knowledge of cybersecurity and compliance
  • Experience in Agile / Scrum environments

Conditions

  • Long-term contract.
  • 100% remote.
  • Vacation and PTO.
  • Possibility of two bonuses per year.
  • Two salary reviews per year.
  • English classes.
  • Apple equipment.
  • Online course platform.
  • Budget for buying books.
  • Budget for buying work materials.
  • And much more...

Internal talks Improving South America offers space for internal talks or presentations during working hours.
Computer provided Improving South America provides a computer for your work.
Informal dress code No dress code is enforced.
$$$ Full time
Automation and Reporting Analyst
  • CloudWalk
  • São Paulo
analyst web fintech cloud

About CloudWalk:

We are not just another fintech unicorn. We are a pack of dreamers, makers, and tech enthusiasts building the future of payments. With millions of happy customers and a hunger for innovation, we're now expanding our neural network - literally and metaphorically.


About the Role:

You will join our reporting team, focused on building automation and reporting solutions that scale across all of CloudWalk’s products. This is not just about data pipelines — you’ll also contribute to the creation of a reporting app, including its infrastructure and a web-based interface. AI will be at the center of everything we do, and you’ll be applying it in every step of development.

We’re looking for someone with strong critical thinking for data, grit to overcome challenges, and an endless curiosity for technology. You will be at the intersection of compliance, product, and engineering, helping us reimagine how reporting and automation can become smarter, faster, and globally scalable.




What You’ll Be Doing:
  • Develop and maintain automation processes for reporting and data workflows.
  • Build and optimize SQL queries to ensure accuracy and scalability.
  • Apply AI in daily development, from automation to anomaly detection and intelligent reporting (see the sketch after this list).
  • Collaborate with teams across the company to ensure reporting solutions serve multiple products and stakeholders.
  • Contribute to the development of our reporting application (infrastructure and webapp).
  • Document processes and continuously improve automation and reporting practices.
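A toy illustration of the anomaly-detection idea mentioned above: flag days whose report volume deviates sharply from the recent mean. The thresholds and data are made up.

# Flag report-volume outliers with a simple z-score check.
from statistics import mean, stdev

daily_report_counts = [1021, 998, 1005, 1012, 987, 1430]  # toy data

def zscore_flags(values: list[int], threshold: float = 3.0) -> list[int]:
    """Return indexes of values more than `threshold` std devs from the mean."""
    mu, sigma = mean(values), stdev(values)
    return [i for i, v in enumerate(values) if sigma and abs(v - mu) / sigma > threshold]

print(zscore_flags(daily_report_counts, threshold=2.0))  # -> [5], the 1430 spike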


What You Need to Succeed:
  • Solid knowledge of SQL and hands-on experience with automation.
  • Strong critical thinking for data validation and problem-solving.
  • Passion for technology, with curiosity and openness to apply AI in practical ways.
  • Grit and perseverance to handle challenges and deliver results.
  • Effective communication and ability to collaborate with multidisciplinary teams.


Nice to Haves:
  • Experience with Kubernetes or applied AI in production environments.
  • Exposure to cloud platforms and containerized infrastructure.
  • Familiarity with web applications or chatbots.
  • Experience in fintech or complex reporting environments.



Join us at CloudWalk, where we’re not just engineering solutions; we’re building a smarter, AI-driven future for payments—together.


By applying for this position, your data will be processed as per CloudWalk's Privacy Policy that you can read here in Portuguese and here in English.



$$$ Full time
analyst technical software code

About Sayari: 

Sayari is a risk intelligence provider that gives the public and private sectors immediate visibility into complex commercial relationships by delivering the largest commercially available collection of corporate and trade data from over 250 jurisdictions worldwide. Sayari's solutions enable risk resilience, mission-critical investigations, and better economic decisions.
 
Headquartered in Washington, D.C., Sayari’s solutions are trusted by Fortune 500 companies, financial institutions, and government agencies, and are used globally in over 35 countries. Funded by world-class investors, with a strategic $228 million investment by TPG Inc. (NASDAQ: TPG) in 2024, Sayari has been recognized by the Inc. 5000 and the Deloitte Technology Fast 500 as one of the fastest growing private companies in the United States and was featured as one of Inc.’s “Best Workplaces” for 2025.

POSITION DESCRIPTION

You will be the technical and mission expert for Sayari's most strategic government partners. You will embed directly with government analysts, operators, and data scientists to solve their hardest mission-enabling, intelligence and/or law enforcement problems. Your primary objective is to ensure that Sayari is deeply integrated into our clients' workflows, becoming an indispensable tool for missions ranging from sanctions evasion and counter-threat finance to securing critical supply chains. This is software engineering on the front lines, placing you at the critical juncture between our technology, our government clients, and their high-stakes missions.

This role is a blend of a software engineer, a data analyst, and a mission consultant. You will be architecting data pipelines or writing production code one day and briefing government analysts the next.


$$$ Full time
Finance Analyst
  • H1
  • New York
analyst saas system technical

At H1, we believe access to the best healthcare information is a basic human right. Our mission is to provide a platform that can optimally inform every doctor interaction globally. This promotes health equity and builds needed trust in healthcare systems. To accomplish this our teams harness the power of data and AI-technology to unlock groundbreaking medical insights and convert those insights into action that result in optimal patient outcomes and accelerates an equitable and inclusive drug development lifecycle.  Visit h1.co to learn more about us.


The Finance team plays a crucial role in creating that future. It is our role to serve as a liaison between H1's Commercial & Technical teams to oversee issues related to financial reporting, analysis, forecasting, and planning, as well as resource prioritization and business management. With a deep understanding of the business levers underlying the operations of our Infrastructure team, this team is responsible for helping the business drive toward clear and effective decisions which are critical to the success of the Company.


WHAT YOU'LL DO AT H1

As a Finance Analyst, you’ll be part of a highly visible team that partners with leaders and departments across the company. You’ll support the finance team with quarterly and annual forecasting, expense budgeting, key metrics reporting and analysis, close processes, and variance analysis, while also driving various automation and simplification projects.


- Assist with the preparation of annual budgets and financial forecasts to ensure alignment with the company’s strategic goals and key initiatives

- Support the finance team in reporting and analyzing key metrics such as annual recurring revenue (ARR) and churn

- Provide actionable insights on revenue and collection trends, customer retention and profitability, and other key performance drivers

- Assist with the implementation of variable compensation plans for teams across the organization

- Track and calculate monthly, quarterly, and annual sales commissions in accordance with approved compensation plans

- Support monthly financial presentations for both the executive team and board of director meetings

- Implement scalable processes through automation and process improvement to help strengthen the finance foundation

- Perform ad-hoc analysis on critical business needs


ABOUT YOU

You’re a strong financial data driven analytical professional, with experience in FP&A or strategic finance  for high growth, enterprise B2B SaaS tech, healthcare or marketplace companies. You know how to thrive in a fast-paced and frequently changing environment.


REQUIREMENTS

- 3+ years of experience in a Finance department

- Bachelor’s degree in Finance, Accounting, or a related field (MBA is a plus)

- Experience in B2B SaaS financial modeling is a plus

- Advanced skills in Microsoft Excel and PowerPoint (Google Sheets and Slides experience is a plus)

- Excellent communication skills with the ability to interact directly with people at all levels of the organization

- Ability to meet deadlines while working in a fast-paced environment

- Advanced system skills and the ability to learn new systems quickly.

- Strong attention to detail and ability to effectively prioritize tasks



COMPENSATION

This role pays $75,000 to $88,000 per year, based on experience, in addition to stock options.


Anticipated role close date: 01/10/2026



H1 OFFERS

- Full suite of health insurance options, in addition to generous paid time off

- Pre-planned company-wide wellness holidays

- Retirement options

- Health & charitable donation stipends

- Impactful Business Resource Groups

- Flexible work hours & the opportunity to work from anywhere

- The opportunity to work with leading biotech and life sciences companies in an innovative industry with a mission to improve healthcare around the globe



H1 is proud to be an equal opportunity employer that celebrates diversity and is committed to creating an inclusive workplace with equal opportunity for all applicants and teammates. Our goal is to recruit the most talented people from a diverse candidate pool regardless of race, color, ancestry, national origin, religion, disability, sex (including pregnancy), age, gender, gender identity, sexual orientation, marital status, veteran status, or any other characteristic protected by law.

 

H1 is committed to working with and providing access and reasonable accommodation to applicants with mental and/or physical disabilities. If you require an accommodation, please reach out to your recruiter once you've begun the interview process. All requests for accommodations are treated discreetly and confidentially, as practical and permitted by law.



$$$ Full time
GTM Analytics Engineer
  • Stedi
  • Remote
saas founder architect recruiter

We're building a new healthcare clearinghouse

In the healthcare sector, the Health Insurance Portability and Accountability Act of 1996 (HIPAA) requires that all insurance payers exchange transactions such as claims, eligibility checks, prior authorizations, and remittances using a standardized EDI format called X12 HIPAA. A small group of legacy clearinghouses process the majority of these transactions, offering consolidated connectivity to carriers and providers.

Stedi is the world's only programmable healthcare clearinghouse. By offering modern API interfaces alongside traditional real-time and batch EDI processes, we enable both healthcare technology businesses and established players to exchange mission-critical transactions. Our clearinghouse product and customer-first approach have set us apart. Stedi was ranked as Ramp’s #3 fastest-growing SaaS vendor.

Stedi has lightning in a bottle: engineers and designers shipping products week in and week out; a lean business team supporting the company’s infrastructure; passion for automation and eliminating toil; $92 million in funding from top investors like Stripe, Addition, USV, Bloomberg Beta, First Round Capital, and more. To learn more about how we work, watch our founder Zack’s interview with First Round Capital.

What we’re looking for

We’re hiring a full-stack data and analytics engineer to build and own the data foundation that will power our daily GTM operations: revenue analytics, product usage telemetry, CRM data quality, attribution, funnel performance, and forecasting.

This is not a typical business analyst position. You will architect the pipelines, models, and automations that ensure our GTM teams have reliable, real-time insights into how customers discover, adopt, and expand with Stedi and our products. You will work closely with Sales, GTM Ops, Product, and Finance, executing data and analytics engineering workstreams, and conducting hands-on analysis to build the source-of-truth data for our GTM operations.

What you'll do

  • Build and maintain GTM data pipelines: Own ingestion, transformation, and syncing of CRM data (HubSpot), product-usage telemetry, billing data, and third-party enrichment data in Redshift to support GTM analytics workstreams (see the sketch after this list).

  • Develop core GTM & revenue data models: Improve operational efficiency through standardization of datasets for Sales, GTM Ops, Finance, and the executive team, while establishing common metric definitions across revenue, customer segments, and more.

  • Ship dashboards, alerts, and decision-making tools: Improve telemetry into business performance by building dashboards to track things like sales funnel performance and pipeline quality. Better inform GTM leadership through automation of weekly/monthly reporting and establishing a revenue forecast.

  • Investigate trends and build models to support sales. Accelerate sales effectiveness through implementation of alerting for critical events (e.g. pipeline drops, usage contractions, stuck deals, missed lifecycle transitions), conducting key analyses (e.g. pipeline velocity, win rates, segmentation performance), and development of GTM models (e.g. ICP scoring, account prioritization, churn risk).

  • Own the GTM analytics roadmap: Work with GTM leadership to maintain a backlog of GTM analytics engineering work. Proactively identify the next set of capabilities the GTM org needs (forecasting, routing logic, new usage signals, etc).
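A minimal sketch of the HubSpot-to-warehouse ingestion described above. The HubSpot CRM v3 endpoint is real, but the token, bucket, and object key are placeholders, and a production pipeline would add paging, retries, and schema management.

# Pull deals from HubSpot and stage them in S3 as JSON lines;
# Redshift can then ingest the file with a COPY command (not shown).
import json
import os

import boto3
import requests

HUBSPOT_TOKEN = os.environ["HUBSPOT_TOKEN"]  # assumed to be set

resp = requests.get(
    "https://api.hubapi.com/crm/v3/objects/deals",
    headers={"Authorization": f"Bearer {HUBSPOT_TOKEN}"},
    params={"limit": 100},
    timeout=30,
)
resp.raise_for_status()
deals = resp.json()["results"]

body = "\n".join(json.dumps(d) for d in deals)
boto3.client("s3").put_object(
    Bucket="example-gtm-staging",      # placeholder bucket
    Key="hubspot/deals.jsonl",         # placeholder key
    Body=body.encode(),
)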

Who you are

  • You have exceptional analytical skills: You’ve made a career in working with data to improve products and overall business operations. You know the tools, best practices, and playbooks necessary to stand up a high-performing and organized analytics function at the company.

  • You know the tech stack: You write efficient SQL queries to analyze large datasets and can work with complex schemas. You're an expert with data visualization tools like Tableau, QuickSight, or Power BI. Familiarity with cloud environments (AWS, Azure, GCP).

  • You create and execute your own work: You notice patterns others miss and dig deep to understand root causes. You've identified data issues or operational inefficiencies that led to meaningful improvements.

  • You do what it takes to get the job done: You are resourceful, self-motivating, self-disciplined, and don’t wait to be told what to do. You put in the hours.

  • You move quickly: We move quickly as an organization. This requires an ability to match our pace and not get lost by responding with urgency (both externally to payers and internally to stakeholders), communicating what you are working on, and proactively asking for help or feedback when you need it.

  • You are a “bottom feeder”: You thrive on the details. No task is too small in order to find success, generate revenue, and improve our costs.

The annual compensation range for this role is $180,000-$230,000. For roles with a variable component, the range provided is the role’s On Target Earnings ("OTE") range, which means that the range is inclusive of the sales commissions or bonus target and annual base salary. This range may be inclusive of multiple experience levels at Stedi and will be narrowed during the interview process based on a number of factors, including the candidate’s experience, location, and qualifications. Please reach out to your recruiter with any questions.

We’ve been made aware of individuals impersonating the Stedi recruiting team. Please note:

  • All official communication about roles at Stedi will only come from an @stedi.com email address.

  • If you’re unsure whether a message is legitimate or have any concerns, feel free to contact us directly at careers@stedi.com.

We appreciate your attention to this and your interest in joining Stedi.

At Stedi, we're looking for people who are deeply curious and aligned to our ways of working. You're encouraged to apply even if your experience doesn't perfectly match the job description.



$$$ Full time
Senior Data Engineer
  • ChowNow
  • Remote
support mobile senior sales
ABOUT US: ChowNow is one of the leading players in off-premise restaurant technology. As takeout becomes a vital revenue stream for independent restaurants, our platform helps owners focus on what they do best—serving great food—by offering solutions across the entire digital dining experience. From building branded websites and mobile apps, to powering online orders, managing menus, consolidating delivery, and running targeted marketing, we give restaurants the tools to grow on their own terms. We support over 20,000 restaurants across North America, helping process $1B+ in gross food sales while saving our partners over $700M in third-party commission fees. Through our white-label ordering solutions, a growing demand network (including Google, Yelp, Apple, and Snap), and a diner-friendly marketplace, we empower independent restaurants to own their customer relationships and avoid inflated pricing and fees charged by 3rd party delivery apps like Uber and Doordash. Founded in 2012.

$$$ Full time
Senior Machine Learning Engineer
  • Fetch
  • United States
software mobile senior engineer

What we're building and why we're building it. 

Every month, millions of people use Fetch to earn rewards for buying the brands they love, and a whole lot more. Whether shopping in the grocery aisle, grabbing a bite at the drive-through, or playing a favorite mobile game, Fetch empowers consumers to live rewarded throughout their day. To date, we've delivered more than $1 billion in rewards and earned more than 5 million five-star reviews from happy users.

It's not just our users who believe in Fetch: with investments from SoftBank, Univision, and Hamilton Lane, and partnerships ranging from challenger brands to Fortune 500 companies, Fetch is reshaping how brands and consumers connect in the marketplace. When you work at Fetch, you play a vital role in a platform that drives brand loyalty and creates lifelong consumers with the power of Fetch points. User and partner success are at the heart of everything we do, and we extend that same commitment to our employees.

At Fetch, we value curiosity, adaptability, and the confidence to explore new tools, especially AI, to drive smarter, faster work. You don't need to be an expert, but you should be ready to learn quickly and think critically. We welcome learners who move fast, challenge the status quo, and shape what's next, with us.  Ranked as one of America's Best Startup Employers by Forbes for two years in a row, Fetch fosters a people-first culture rooted in trust, accountability, and innovation. We encourage our employees to challenge ideas, think bigger, and always bring the fun to Fetch.

Fetch is an equal employment opportunity employer.

About the Role:

We are seeking a Machine Learning Software Engineer to join Fetch's Scan, Match & Catalog team. This role sits at the intersection of applied machine learning, data engineering, and production systems, with a focus on improving receipt understanding, product matching, and catalog enrichment at scale. You w


$$$ Full time
Senior Data Engineer
  • Exadel
  • Brazil, Bulgaria, Colombia, Georgia, Lithuania, Poland, Romania
jira salesforce code web

Why Join Exadel 

We’re an AI-first global tech company with 25+ years of engineering leadership, 2,000+ team members, and 500+ active projects powering Fortune 500 clients, including HBO, Microsoft, Google, and Starbucks.

From AI platforms to digital transformation, we partner with enterprise leaders to build what’s next.
What powers it all? Our people are ambitious, collaborative, and constantly evolving.

About the Client  

A U.S.-based education services provider offering online and campus-based post-secondary education, primarily serving military personnel, veterans, and public service communities. The organization delivers degree and certificate programs across disciplines such as nursing, health sciences, business, IT, and liberal arts. In addition to its headquarters in West Virginia, the customer operates facilities and partner institutions across the United States. The primary product areas to work with are learning management systems, student enrollment, and academic operations on web and mobile platforms.

What You’ll Do  

  • Design, implement, and maintain scalable data pipelines using Snowflake, Coalesce.io, Airbyte, and SQL Server/SSIS, with some use of Azure Data Factory
  • Build and maintain dimensional data models to ensure high-quality, structured data for analytics and reporting
  • Implement Medallion architecture in Snowflake, managing bronze, silver, and gold layers (see the sketch after this list)
  • Collaborate with teams using Jira for task tracking and GitHub for code repository management
  • Ensure reliable ETL processes, data transformations, and data integration workflows
  • Help improve data modeling practices and address weaknesses in dimensional modeling
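A hedged sketch of a bronze-to-silver promotion in the Medallion layout mentioned above, using the Snowflake Python connector. Connection parameters, database, table, and column names are illustrative; in practice this work would more likely run through Coalesce.io or orchestrated SQL.

# Promote deduplicated, typed rows from the bronze layer into silver;
# gold models would aggregate from silver afterwards.
import os

import snowflake.connector

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="TRANSFORM_WH",   # placeholder warehouse
    database="ANALYTICS",       # placeholder database
)

conn.cursor().execute("""
    CREATE OR REPLACE TABLE SILVER.ENROLLMENTS AS
    SELECT DISTINCT
        CAST(payload:student_id AS VARCHAR) AS student_id,
        CAST(payload:course_id  AS VARCHAR) AS course_id,
        TO_TIMESTAMP(payload:enrolled_at)   AS enrolled_at
    FROM BRONZE.ENROLLMENT_EVENTS_RAW      -- hypothetical raw VARIANT table
""")
conn.close()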

What You Bring  

  • Hands-on experience with Snowflake, Coalesce.io, Airbyte, SQL Server/SSIS, and Azure Data Factory
  • Strong understanding of Medallion architecture and dimensional data modeling
  • Practical experience in building ETL pipelines and transforming data for analytics
  • Familiarity with Jira and GitHub for collaborative work
  • Strong analytical and problem-solving skills, with ability to collaborate across teams
  • Minimum 4-hour overlap with US Eastern Time

Nice to Have

  • Exposure to Power BI (optional)
  • Experience with Salesforce data integration
  • Background in higher education / ed-tech domains

English level 

Intermediate/Upper-Intermediate

Legal & Hiring Information 

  • Exadel is proud to be an Equal Opportunity Employer.

$$$ Full time
Senior Analytics Engineer
  • Alpaca
  • Remote - North America
crypto technical support financial

Who We Are:

Alpaca is a US-headquartered self-clearing broker-dealer and brokerage infrastructure for stocks, ETFs, options, crypto, fixed income, 24/5 trading, and more. Our recent Series C funding round brought our total investment to over $170 million, fueling our ambitious vision.

Amongst our subsidiaries, Alpaca is a licensed financial services company, serving hundreds of financial institutions across 40 countries with our institutional-grade APIs. This includes broker-dealers, investment advisors, wealth managers, hedge funds, and crypto exchanges, totalling over 6 million brokerage accounts.

Our global team is a diverse group of experienced engineers, traders, and brokerage professionals who are working to achieve our mission of opening financial services to everyone on the planet. We're deeply committed to open-source contributions and fostering a vibrant community, continuously enhancing our award-winning, developer-friendly API and the robust infrastructure behind it.

Alpaca is proudly backed by top-tier global investors, including Portage Ventures, Spark Capital, Tribe Capital, Social Leverage, Horizons Ventures, Unbound, SBI Group, Derayah Financial, Elefund, and Y Combinator.

 

Our Team Members:

We're a dynamic team of 230+ globally distributed members who thrive working from our favorite places around the world, with teammates spanning the USA, Canada, Japan, Hungary, Nigeria, Brazil, the UK, and beyond!

We're searching for passionate individuals eager to contribute to Alpaca's rapid growth. If you align with our core values—Stay Curious, Have Empathy, and Be Accountable—and are ready to make a significant impact, we encourage you to apply.

About the Role:

We are seeking an Analytics Engineer to own and execute the vision for our data transformation layer. You will be at the heart of our data platform, which processes hundreds of millions of events daily from a wide array of sources, including transactional databases, API logs, CRMs, payment systems, and marketing platforms.

You will join our 100% remote team and work closely with Data Engineers (who manage data ingestion) and Data Scientists and Business Users (who consume your data models). Your primary responsibility will be to use dbt and Trino on our GCP-based, open-source data infrastructure to build robust, scalable data models. These models are critical for stakeholders across the company—from finance and operations to the executive team—and are delivered via BI tools, reports, and reverse ETL systems.

What You'll Do:

  • Own the Transformation Layer: Design, build, and maintain scalable data models using dbt and SQL to support diverse business needs, from monthly financial reporting to near-real-time operational metrics (see the sketch after this list).
  • Set Technical Standards: Establish and enforce best practices for data modelling, development, testing, and monitoring to ensure data quality and reliability.
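A small sketch of a scheduled transformation run of the kind this role would own, using dbt-core's programmatic runner (available in dbt 1.5+). The selector and project layout are assumptions, not Alpaca's actual setup.

# Build and test a tagged set of dbt models from Python; equivalent to
# running `dbt build --select tag:finance` on the command line.
from dbt.cli.main import dbtRunner, dbtRunnerResult

dbt = dbtRunner()
result: dbtRunnerResult = dbt.invoke(["build", "--select", "tag:finance"])

if not result.success:
    raise SystemExit("dbt build failed; see logs above")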


$$$ Full time
Senior Software Engineer Trading Infrastructure
  • Gauntlet
  • New York City / San Francisco / Los Angeles / Remote
software design web3 defi

Gauntlet leads the field in quantitative research and optimization of DeFi economics. We manage market risk, optimize growth, and ensure economic safety for protocols facilitating most spot trading, borrowing, and lending activity across all of DeFi, protecting and optimizing the largest protocols and networks in the industry. We build institutional-grade vaults for decentralized finance, delivering risk-adjusted onchain yields for capital at scale. Designed by the most vigilant, quantitative minds in crypto and informed by years of research.


As of November 2025, Gauntlet manages over $2B in vault TVL, and optimizes risk and incentives covering over $42 billion in customer TVL. We continually publish cutting-edge research that informs our risk models, alerts, and analysis, and is among the most cited institutions — including academic institutions — in terms of peer-reviewed papers addressing DeFi as a subject. We’re a Series B company with around 75 employees, operating remote-first with a home base in New York City.


As a company, we build institutional-grade vaults that deliver risk-adjusted DeFi yields at scale, powered by automated risk models and off-chain intelligence. Gauntlet curates strategies across Morpho, Drift, Symbiotic, Aera and more, with >$2B in vault TVL and a growing suite of Prime, Core and Frontier vaults.


Our mission is to drive adoption and understanding of the financial systems of the future. We operate with a trader’s discipline and a risk manager’s skepticism: size carefully, stress routinely, unwind decisively. The label equals the package equals the contents. No surprises, just predictable, reliable vaults.


Join our derivatives trading team and work on the key infrastructure that powers our product offering as well as trading systems. Work with a team with decades of experience in tech and finance to build the backbone of our high-performance derivatives trading strategies. You'll work close to trading, own critical infrastructure end-to-end, and ship systems that manage real capital in live crypto markets.



Responsibilities
  • Design, implement, and operate scalable distributed systems in production.
  • Build low-latency and streaming systems for real-time and near real-time workloads.
  • Develop data pipelines and ETL workflows for ingesting, transforming, and serving data.
  • Build and maintain application services and APIs used by internal and external systems.
  • Implement Web3 protocol integrations, including smart contract interactions and on-chain data ingestion via RPCs, logs, and indexers (see the sketch after this list).
  • Apply SRE principles to improve reliability, observability, and operational correctness.
  • Participate in incident response, debugging production issues and driving root-cause fixes.
  • Contribute to system design and code reviews, maintaining high engineering standards.
  • Leverage AI-assisted development tools to improve productivity, code quality, and system understanding, while exercising strong engineering judgment.
  • Write and maintain technical documentation for systems and workflows.
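A minimal web3.py sketch of the on-chain ingestion responsibility above: pull recent ERC-20 Transfer logs over an RPC endpoint. The RPC URL and token address are placeholders.

# Read the last few blocks' Transfer events for a given token contract.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://example-rpc.invalid"))  # placeholder RPC

# keccak hash of the ERC-20 Transfer event signature, used as the log topic.
TRANSFER_TOPIC = w3.keccak(text="Transfer(address,address,uint256)").hex()

latest = w3.eth.block_number
logs = w3.eth.get_logs({
    "fromBlock": latest - 10,
    "toBlock": latest,
    "address": "0x0000000000000000000000000000000000000000",  # placeholder token
    "topics": [TRANSFER_TOPIC],
})
for log in logs:
    print(log["transactionHash"].hex(), log["topics"][1:])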


Qualifications
  • 6+ years of professional software engineering experience.
  • Strong proficiency in Python, Rust, and/or JavaScript/TypeScript.
  • Experience building low-latency or high-throughput systems.
  • Experience designing and operating scalable distributed systems.
  • Hands-on experience with Web3 systems, including interacting with smart contracts and consuming on-chain data.
  • Experience with streaming or messaging systems (e.g. Kafka, Pub/Sub).
  • Experience with data storage systems (e.g. Postgres, ClickHouse).
  • Experience deploying and operating software in cloud environments (e.g. GCP).
  • Familiarity with containerized systems (Docker, Kubernetes).
  • Understanding of SRE practices, including monitoring, alerting, and incident response.
  • Strong understanding of security fundamentals (authentication, authorization, secrets management).


Bonus Points
  • Previous experience at financial or trading firms.
  • Smart contract development experience (e.g. Solidity).
  • Experience with workflow orchestration (e.g. Dagster).
  • Experience operating systems with strict reliability or performance requirements.
  • Exposure to infrastructure as code or CI/CD systems.


Benefits and Perks
  • Remote first - work from anywhere in the US & CAN!
  • Competitive packages with the added opportunity for incentive-based compensation
  • Regular in-person company retreats and cross-country "office visit" perk
  • 100% paid medical, dental and vision premiums for employees
  • Laptop provided
  • $1,000 WFH stipend upon joining
  • $100 per month reimbursement for fitness-related expenses
  • Monthly reimbursement for home internet, phone, and cellular data
  • Unlimited vacation policy
  • 100% paid parental leave of 12 weeks
  • Fertility benefits


$185,000 - $225,000 a year

Please note at this time our hiring is reserved for potential employees who are able to work within the contiguous United States and Canada. Should you need alternative accommodations, please note that in your application.


The national pay range for this role is $165,000 - $205,000 plus additional On Target Earnings potential by level and equity in the company. Our salary ranges are based on paying competitively for a company of our size and industry, and are one part of many compensation, benefits and other reward opportunities we provide. Individual pay rate decisions are based on a number of factors, including qualifications for the role, experience level, skill set, and balancing internal equity relative to peers at the company.  


#LI-Remote



$$$ Full time
Principal Data Operations & Migration Lead
  • StarCompliance
  • York, United Kingdom
technical support software financial

About StarCompliance

StarCompliance is on a mission to make compliance simple and easy. Trusted globally by enterprise financial institutions, the user-friendly STAR platform empowers organizations to achieve regulatory compliance while safeguarding their integrity and business reputations. Through a customizable, 360-degree view of employee activity, the STAR software enables firms to automate the detection and resolution of potential areas of conflict while streamlining daily workflows and increasing efficiency. 


Role  

StarCompliance is looking for a senior, hands-on Data Operations & Migration Specialist to oversee our data feed operations and client data migration capabilities. This role combines technical leadership with day-to-day delivery, acting as a player coach who sets direction, unblocks issues, and still gets hands-on when it matters.


You will own the operational health of broker and client data feeds, lead complex data migration initiatives during client onboarding, and provide mentorship and technical guidance to engineers and analysts across both functions. Deep domain knowledge in financial services data, particularly regulated trading, transaction, or reference data, is critical. 


This role sits within the Enterprise Data function and works closely with R&D, Client Support Services, Professional Services, and Relationship Management to ensure client data is secure, accurate, compliant, and delivered on time. 



Responsibilities
  • Leadership Responsibilities 
  • Provide technical and operational leadership across Data Operations and Data Migration functions. 
  • Act as a player coach, balancing hands-on delivery with coaching, mentoring, and upskilling team members. 
  • Set standards for operational excellence, data quality, documentation, and incident management. 
  • Own prioritisation and workload planning across feeds and migrations, ensuring delivery commitments are met. 
  • Serve as the escalation point for complex data issues, client escalations, and high-risk migrations. 
  • Partner with Product, Engineering, and Professional Services to influence roadmap decisions and onboarding strategies.  
  • Act as a trusted technical partner for internal teams and external stakeholders during onboarding and operational change. 
  • Translate complex technical and data concepts into clear, actionable guidance for non-technical audiences. 
  • Contribute to client-facing discussions where deep data or feed expertise is required. 

  • Data Feed Operations Ownership 
  • Oversee the delivery, maintenance, and evolution of StarCompliance’s broker and client data feed infrastructure. 
  • Ensure secure setup and ongoing management of SFTP connectivity, access permissions, and encryption standards. 
  • Own operational monitoring of daily and intraday feeds, proactively identifying trends, risks, and failure patterns. 
  • Drive continuous improvement across feed automation, resilience, monitoring, and alerting. 
  • Work closely with the wider Enterprise Data engineering team on feed-related enhancements and defect resolution. 
  • Ensure platforms such as MoveIt and associated automation tooling are stable, well configured, and fit for scale. 

  • Data Migration Leadership 
  • Oversee the planning and execution of complex data migrations from third-party vendors into StarCompliance products. 
  • Define and review migration strategies, data mappings, validation approaches, and cutover plans. 
  • Ensure data integrity, accuracy, and regulatory compliance throughout the migration lifecycle. 
  • Provide hands-on support for data analysis, transformation, and validation where required. 
  • Oversee post-migration support, ensuring issues are resolved quickly and root causes addressed. 


Skills & Experience
  • Strong experience in financial services, fintech, regtech, or similarly regulated data environments.
  • Deep domain knowledge of financial broker feeds, file-based integrations, and operational data pipelines.
  • Hands-on experience with SQL Server, including T-SQL for investigation and data validation.
  • Strong understanding of ETL processes and tooling.
  • Experience with secure file transfer technologies and encryption standards, including SFTP, PGP/GPG, and SSH (see the sketch after this list).
  • Proficiency in scripting and automation using tools such as PowerShell, Python, and SQL.
  • Proven experience leading data operations or data migration initiatives in production environments.
  • Ability to balance strategic thinking with hands-on delivery.
  • Excellent problem-solving skills and calm decision-making under pressure. 
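A hedged sketch of the SFTP feed automation referenced above, using paramiko. Host, credentials, and paths are placeholders; PGP decryption and validation would follow in a real feed.

# Fetch an encrypted broker feed file over SFTP.
import paramiko

transport = paramiko.Transport(("sftp.example-broker.invalid", 22))  # placeholder host
transport.connect(username="starfeed", password="***")               # placeholder creds
sftp = paramiko.SFTPClient.from_transport(transport)

# Download one feed file; decryption and schema validation would follow.
sftp.get("/outbound/trades_20240105.csv.pgp", "/data/inbound/trades_20240105.csv.pgp")

sftp.close()
transport.close()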


Minimum Qualifications
  • Bachelor’s degree in Computer Science, Information Systems, Engineering, or equivalent professional experience.  
  • Proven leader with 5+ years in data operations, data engineering, data migration, or related technical roles, ideally within financial services or compliance technology. 


How We Think About AI
  • At StarCompliance, AI is not a side experiment or a specialist niche. We treat it as a practical capability that strengthens how we operate, scale, and deliver secure, high quality data services. 

  • In Enterprise Data, we expect senior leaders to: 
  • Use AI assisted tools to improve operational efficiency. 
  • Stay informed about how AI can enhance data operations, migration strategy, and automation in regulated environments. 
  • Apply AI thoughtfully, with strong awareness of data security, client confidentiality, regulatory risk, and cost. 
  • Help the team adopt AI responsibly in day-to-day operations, without compromising control, traceability, or compliance standards. 



StarCompliance Background Checks


All positions require pre-employment screening due to employees potentially having access to highly sensitive and confidential information involving finance and compliance; candidates must be trustworthy and have a heightened sensitivity to protecting confidential financial and professional information. To be eligible for employment with StarCompliance, candidates must undergo a rigorous background investigation with checks including, but not limited to, criminal record history, consumer credit, employment history, qualifications, and education checks.



Equal Opportunity Employer Statement


We prohibit discrimination and harassment of any kind based on race, sex, religion, sexual orientation, national origin, disability, genetic information, pregnancy, gender identity or expression, marital/civil union/domestic partnership status, veteran status or any other protected characteristic as outlined by country, state, or local laws.


This policy applies to all employment practices within our organisation, including hiring, recruiting, promotion, termination, layoff, recall, leave of absence, compensation, benefits, training, and apprenticeship. StarCompliance makes hiring decisions based solely on qualifications, merit, and business needs at the time. For more information, please request a copy of our Equal Opportunities Policy.




$$$ Full time
CFO
  • Marathon Talent
  • Remote
cfo support software accounting

Offshore CFO (Multifamily Real Estate) — Job Description

Overview

We are hiring a CFO to lead the finance and accounting function for a U.S.-based multifamily owner/operator. This role owns financial statements, monthly close, cash management, budgeting/forecasting, reporting, and controls across multiple properties and entities. The right candidate is tech-forward and excited to modernize finance through automation, AI, and API-driven integrations.

Key Responsibilities

• Monthly close & financial statements: Own timely, accurate close and delivery of P&L, balance sheet, and cash flow with supporting schedules.

• Reconciliations & controls: Ensure complete bank/GL reconciliations, AR/AP tie-outs, accruals, prepaids, CIP/fixed assets, intercompany, and documented processes.

• Management reporting: Produce property/portfolio reporting including budget vs. actual, variance explanations, and key operating KPIs.

• Cash management: Maintain daily cash visibility and a rolling 13-week cash forecast (see the sketch after this list); manage payment cadence, approvals, reserves, and liquidity planning.

• Budgeting & forecasting: Lead annual budgets and reforecasts (revenue, payroll, utilities, repairs, insurance, taxes, CapEx).

• CapEx / renovation tracking: Track project budgets, spend, and ROI support (CIP and unit-level economics as applicable).

• Lender / compliance support: Manage covenant reporting, lender deliverables, and coordination with CPAs/tax/audit teams.

• Section 8 / Housing Authority & municipal compliance: Support affordable housing reporting and compliance (as applicable), including coordination with Housing Authorities/cities, audits, and required documentation.

• Team leadership: Lead and develop offshore accounting staff (AP/AR/accountants); set SOPs, close calendar, and review standards.

• Tech/automation leadership: Implement and optimize workflows using AI tools, automation, and API connections across property management, accounting, reporting, and data pipelines.
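A toy skeleton of the rolling 13-week cash forecast mentioned above, in pandas. The figures are illustrative only, not a template claim.

# Build a simple 13-week cash forecast: weekly net flows and ending cash.
import pandas as pd

weeks = pd.date_range("2024-01-05", periods=13, freq="W-FRI")
forecast = pd.DataFrame({
    "week_ending": weeks,
    "inflows": [210_000] * 13,   # rents, subsidies, other receipts (toy numbers)
    "outflows": [185_000] * 13,  # payroll, utilities, debt service, CapEx (toy numbers)
})
forecast["net"] = forecast["inflows"] - forecast["outflows"]
forecast["ending_cash"] = 500_000 + forecast["net"].cumsum()  # assumed starting cash

print(forecast[["week_ending", "net", "ending_cash"]].to_string(index=False))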

Requirements (Must-Have)

• Minimum 8+ years of experience as a CFO (or senior finance leader) in real estate; multifamily strongly preferred.

• Expert in financial statements, close management, reconciliations, cash forecasting, and internal controls.

• Strong ability to deliver decision-ready reporting (budget vs. actual, variance analysis, KPIs).

• Bilingual proficiency: fluent professional English and Spanish (written and spoken).

• Property management software experience; ResMan preferred.

• Expense management software experience with Brex or Ramp; Brex preferred.

• Experience working with Section 8 programs, Housing Authorities, and municipal/city requirements (as applicable), including compliance reporting and audit support.

• Strong understanding of real estate legal entities and structures (LLCs/LPs/SPVs), intercompany accounting, and entity-level reporting.

• Tech-forward mindset: comfortable implementing automation/AI and working with APIs/integrations (no coding required, but must be fluent with modern tools).

• Advanced Excel/Google Sheets skills; comfortable building standardized reporting templates and dashboards.

• Ability to work offshore with consistent overlap with U.S. business hours and days (ET/CT preferred).

Preferred

• Multi-entity consolidation, lender compliance/covenants, and renovation-heavy portfolios.

• Experience with BI/reporting tools (Power BI/Tableau) and modern AP/bill pay tools.

Working Model

• Remote / Offshore (LATAM preferred for timezone overlap)

• Reports to Ownership/CEO/Managing Partner; partners closely with Operations and Asset Management



$$$ Full time
Director Data Engineering
  • Revinate
  • Atlanta, GA
director design hr security

Revinate is one of the largest and most innovative providers of direct revenue-generating solutions in the hospitality industry. Revinate's mission is to deliver hoteliers scalable direct revenue and profits from data-driven solutions that cultivate deeper relationships with guests. Revinate’s Direct Booking Platform helps capture, convert and retain guests with strategies and services that maximize direct booking revenue. This combination maximizes the lifetime value of each guest through personalized and targeted campaigns across the guest journey. Revinate Marketing has won 1st place for Hotel CRM & Email Marketing in the HotelTechAwards five years in a row!


About Us


Revinate is an innovative hospitality tech company that is revolutionizing how customers manage their operations and enhance the guest experience. Our solutions leverage advanced technology, data analytics, and automation to improve efficiency and drive customer happiness in the hospitality industry.  


The Opportunity


We are seeking an experienced and visionary Director, Data Engineering to lead our Data Platform initiatives. In this critical role, you will be responsible for defining the strategy, architecture, and execution of our end-to-end data ecosystem, encompassing data ingestion pipeline, operational data stores, our evolving data lakehouse, and robust data APIs. You will build and lead a high-performing team of data engineers, fostering a culture of innovation, collaboration, and operational excellence. This role requires not only deep technical expertise but also a strong understanding of how data can drive business value, including leveraging data science and machine learning to optimize our operations.


Key Responsibilities


Strategic Leadership: Define and execute the long-term vision and roadmap for our data platform, aligning with overall business objectives and technology strategy.


Team Leadership & Development: Recruit, mentor, and lead a talented team of data engineers, fostering their growth and ensuring best practices in data engineering.


Data Pipeline: Oversee the design, development, and maintenance of scalable and reliable real time data ingestion pipeline, ensuring data quality, accuracy, and timely delivery.


Operational Data Stores: Lead the architecture and management of our operational data stores, optimizing for performance, reliability, and accessibility to support critical business applications.


Data Lakehouse Development: Drive the strategic evolution and implementation of our data lakehouse, enabling unified data access, advanced analytics, and machine learning initiatives.


Data API Development: Champion the design and development of secure, performant, and well-documented data APIs to facilitate data consumption across various applications and user groups.


Data Governance & Quality: Enforce data governance policies, standards, and procedures to ensure data integrity, security, privacy, and compliance.


Operational Efficiency through Data Science/ML: Collaborate closely with data science and analytics teams to identify opportunities where data science and machine learning can be applied to optimize internal operations, automate processes, and improve efficiency within the data platform itself (e.g., predictive maintenance for pipelines, intelligent resource allocation).


Performance & Scalability: Ensure the data platform is highly performant, scalable, and resilient, capable of handling growing data volumes and complex analytical workloads.


Technology Evaluation: Evaluate and recommend new data technologies, tools, and platforms to enhance our data capabilities and stay ahead of industry trends.


Cross-Functional Collaboration: Partner effectively with engineering, product, analytics, data science, and business teams to understand data requirements and deliver impactful solutions.


Monitoring & Support: Establish robust monitoring, alerting, and on-call support processes for all data systems, ensuring high availability and rapid issue resolution.



What You’ll Bring
  • 10+ years of experience in data engineering roles, with at least 5 years in a leadership or management position overseeing data engineering teams.
  • Proven track record of building and scaling complex data platforms from the ground up, or significantly evolving existing ones.

Deep expertise in designing, building, and operating:
  • Data Ingestion Pipelines: (e.g., Kafka, Flink, Spark Streaming, Airflow, equivalent cloud services like Kinesis, Pub/Sub, Dataflow); see the sketch after this list
  • Operational Data Stores: (e.g., Cassandra, ScyllaDB, DynamoDB, PostgreSQL, MySQL)
  • Data Warehousing/Lakehouse Technologies: (e.g., AWS, GCP, S3, Iceberg, Redshift, BigQuery)
  • Data APIs & Services: (e.g., RESTful APIs, GraphQL)

  • Strong proficiency in Java / Scala
  • Extensive experience with cloud data platforms (AWS, GCP) and their respective data services.
  • Solid understanding of data modeling techniques (relational, dimensional, NoSQL).
  • Literacy in Data Science and Machine Learning concepts:
    • Familiarity with common ML algorithms and their applications.
    • Understanding of the MLOps lifecycle and data requirements for ML models.
    • Ability to identify and articulate how data science/ML can be used to improve data platform operations (e.g., anomaly detection in pipelines, resource optimization).
  • Experience with implementing data governance, data quality, and metadata management tools and practices.
  • Excellent communication, interpersonal, and presentation skills, with the ability to articulate complex technical concepts to both technical and non-technical audiences.
  • Strong analytical and problem-solving abilities, with a focus on delivering practical and scalable solutions.
  • Bachelor's or Master's degree in Computer Science, Data Engineering, or a related quantitative field.
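For a concrete sense of the ingestion-pipeline expertise named in the first bullet above, here is a minimal PySpark Structured Streaming sketch of a Kafka-to-object-storage pipeline. The broker address, topic, and storage paths are hypothetical; this is an illustration of the pattern, not Revinate's actual pipeline.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Requires the spark-sql-kafka connector package on the Spark classpath.
spark = SparkSession.builder.appName("ingest-sketch").getOrCreate()

# Read a stream of raw events from Kafka (broker and topic are placeholders).
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "guest-events")
    .load()
)

# Kafka delivers keys and values as bytes; cast them to strings and
# stamp each record with its ingestion time.
parsed = events.select(
    F.col("key").cast("string").alias("key"),
    F.col("value").cast("string").alias("payload"),
    F.current_timestamp().alias("ingested_at"),
)

# Land the raw stream as Parquet, with checkpointing so the job can
# recover from failures without reprocessing committed batches.
(
    parsed.writeStream.format("parquet")
    .option("path", "s3a://example-bucket/raw/guest_events/")
    .option("checkpointLocation", "s3a://example-bucket/checkpoints/guest_events/")
    .start()
    .awaitTermination()
)
```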


Benefits
  • Health insurance-employee premium paid 100% by Revinate
  • Dental insurance-employee and dependents’ premium paid 100% by Revinate
  • Vision insurance-employee and dependents’ premium paid 100% by Revinate
  • 401(k) with employer match
  • Short & Long Term Disability insurance
  • Life insurance
  • Paid Flex time off
  • Monthly work from home stipend
  • Telehealth access
  • Employee Assistance Program (EAP)


$190,000 - $200,000 a year
The compensation package for the Director, Data Engineering includes a base salary and a performance-based bonus.

This salary range may be inclusive of several career levels at Revinate and will be narrowed during the interview process based on a number of factors, including (but not limited to) the candidate’s experience, qualifications and location. 

Interview Process 

We're excited you're considering a career with Revinate! Our goal is to ensure this is the right opportunity for you, while also determining if you're the right fit for our team. The interview process for this role is designed to be a two-way street, where you'll get to know us just as we get to know you.


 - Recruiter Screen - 30 min

 - Technical Interview - 60 min

 - Cross Functional Interview - 30 min

 - Final Interview - 30 min 




Revinate values the flexibility of a remote workforce and the benefits of localized hiring. We focus on specific cities to foster local communities and enhance team cohesion, allowing employees to collaborate, attend local events, and build a strong sense of community and company culture.

Candidates must be located in the city listed in the job application. Thank you!


Revinate is not open to third-party solicitation or resumes for our posted FTE positions. Resumes received from third-party agencies that are unsolicited will be considered complimentary.



Important Security Alert

We have been made aware of fraudulent activities involving individuals impersonating our HR team and offering fake job opportunities. Please be vigilant and ensure your safety by verifying all job offers.


For Authentic Opportunities: Only refer to our official careers page on our company website. Your security is our priority. If you encounter any suspicious activity, please report it immediately. Stay safe and secure! You can confirm or inquire with any questions by reaching out to recruiting@revinate.com





AI and Hiring 

Please note that interviews at Revinate will be recorded using brighthire.ai as we continue to build more structure into our interview processes -- the best way to eliminate unconscious bias! We encourage our interviewers to focus on our candidates and the conversation rather than on taking notes; instead, we rely on brighthire.ai to take notes for us. If you're uncomfortable with recording your interview, please let us know and we'll opt you out.


Excited?!  Want to learn more? Apply Now!

Our Core Values:

One Revinate - United & Strong, on a single mission together

Built on Trust - It’s the foundation of everything we do

Expect Amazing - We think, dream & deliver big

Customer Love -- When the customer wins, we win

Make it Simpler -- Apply it to everything we do

Hungerness -- Feel it, follow it, be relentless about our success

Grounded in Gratitude - We’re glad to be here & make the most of every day


Revinate Inc. provides Equal Employment Opportunity to all employees and applicants for employment without regard to race, color, religion, gender identity or expression, sex, sexual orientation, national origin, age, disability, genetic information, marital status, amnesty, or status as a covered veteran in accordance with applicable federal, state and local laws. Revinate complies with applicable state and local laws governing non-discrimination in employment in every location in which the company has facilities. 


Revinate is not open to third-party solicitation or resumes for our posted FTE positions. Resumes received from third-party agencies that are unsolicited will be considered complimentary.


If you are in need of accommodation or special assistance to navigate our website or to complete your application, please send an e-mail with your request to recruiting@revinate.com.


By submitting your application you acknowledge that you have read Revinate's Privacy Policy (https://www.revinate.com/privacy/)




Please mention the word **HONORABLE** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$$$ Full time
Intern Software Development
  • Netomi
  • Remote - India
software design technical code

About the Company:

Netomi is the leading agentic AI platform for enterprise customer experience. We work with the largest global brands like Delta Airlines, MetLife, MGM, United, and others to enable agentic automation at scale across the entire customer journey. Our no-code platform delivers the fastest time to market, lowest total cost of ownership, and simple, scalable management of AI agents for any CX use case. Backed by WndrCo, Y Combinator, and Index Ventures, we help enterprises drive efficiency, lower costs, and deliver higher quality customer experiences.


Want to be part of the AI revolution and transform how the world’s largest global brands do business? Join us!


Job description


We are looking for a Software Development Intern to help us with coding, fixing, executing, and versioning existing code for applications. If you're passionate about solving real-world fundamental problems and eager to explore, learn, and work on technologies outside your usual scope, Netomi is the perfect place for you.



Job Responsibilities
  • Assist in planning, design, and execution of SOA backend platforms, mostly around REST-based web frameworks using Java (Spark, Spring, ORM)
  • High-level and low-level design of highly scalable components
  • Work collaboratively in a multi-disciplinary team environment
  • Assist key technical advisors in defining the project roadmap


Requirements
  • Experience with a scripting language for automated builds/deployments, preferably Java
  • Pursuing a B.E./B.Tech in Computer Science from a tier I or II institute (2025 and 2026 graduates only)



Netomi is an equal opportunity employer committed to diversity in the workplace. We evaluate qualified applicants without regard to race, color, religion, sex, sexual orientation, disability, veteran status, and other protected characteristics.



Please mention the word **MERRY** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$$$ Full time
Senior Data Engineer
  • Lalamove
  • Kuala Lumpur
technical support java senior

At Lalamove, we believe in the power of community. Millions of drivers and customers use our technology every day to connect with one another and move things that matter. Delivery is what we do best, and we ensure it is always fast and simple. Since 2013, we have tackled the logistics industry head-on to find the most innovative solutions for the world's delivery needs. We are full steam ahead to make Lalamove synonymous with delivery and on a mission to impact as many local communities as we can. We have massively scaled our efforts across Asia and now have our sights on taking our best-in-class technology to the rest of the world. And we are looking for talented professionals to join us on this journey!


As a Senior Data Engineer at Lalamove, you will join the global Data team as a key member of our expanding technology team in our new market. Because of the importance of user privacy and our commitment to compliance, we need an additional engineer to support our operations in the expanding market while collaborating closely with our global engineering team.




What you'll do:
  • Provide production support and incident response for our data platform in the expanding market.
  • Support and troubleshoot technical issues, including the data pipelines running on top of the data platform.
  • Collaborate with a geographically-dispersed team of engineers to support compliance for the expanding market.
  • Support ad hoc requests related to expanding market data and operations.


What you'll need:
  • Legally permitted to work in Malaysia
  • 5+ years of relevant experience in data engineering
  • Experience in supporting Big Data operations
  • Proficiency in SQL
  • Hands-on experience with Linux systems and command-line operations
  • Experience in Java and Spring Boot framework
  • Good command of English, fluency in Mandarin is a plus



To all candidates- Lalamove respects your privacy and is committed to protecting your personal data.

This Notice will inform you how we will use your personal data, explain your privacy rights and the protection you have by the law when you apply to join us. Please take time to read and understand this Notice. Candidate Privacy Notice: https://www.lalamove.com/en-hk/candidate-privacy-notice



Please mention the word **DASHING** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$181000 - $213000 Full time
Senior Software Engineer Data
  • Freshpaint
  • Remote
software architect technical growth

About Freshpaint:

Customer data is the fuel that drives all modern businesses. From product analytics, to marketing, to support, to advertising, advanced data analysis in the warehouse, and even sales – customer data is the raw material for each function at a modern business.

For highly regulated businesses in healthcare, it’s always been a challenge to harness that customer data and get it to the marketing and analytics tools that require it while following patient privacy laws… until now.

Something as simple as running ads to get more users is straightforward for an e-commerce or software company. But common web analytics and advertising tools collect sensitive user identifiers and healthcare information automatically, and those same tools are not HIPAA compliant.

We provide a layer of data governance to make current web analytics tools HIPAA-compliant. For analytics, our customers can continue getting the insights they need to improve the patient experience. For marketing, Freshpaint safeguards health information while helping our customers promote access to care through popular advertising platforms like Facebook, Google, and others.

In short, we help healthcare marketers promote access to care and safeguard patient privacy at the same time. This is an important, complex problem in a massive market (healthcare is 20% of the US GDP).

Our customers manage their customer data with:

  1. Privacy Platform. We help healthcare providers automate their website’s + app’s HIPAA compliance, and safeguard patient data. This is our core product today

  2. Future additional product lines! Our core product provides a platform that we're building marketing applications on top of.


We’re fully remote. If you strongly value in-person work, Freshpaint is likely not the best fit for you. Even though we don’t care where you’re located, we only hire within the US. Much of our team is concentrated in metro areas like SF and NYC. To balance out our remote-ness, we gather the team twice per year for offsites. We’re backed by leading investors including Y Combinator, Intel Capital, and angel investors like the Head of Data from Slack, the Head of Data at LinkedIn, and more.

Who we are:

Freshpaint was founded by web analytics veterans who realized how hard it was for highly regulated companies to collect and use customer data in a compliant way. We started as part of Y Combinator’s S19 cohort and have been focused on enabling healthcare companies to collect, safeguard, and activate patient data ever since.

In 2022, the government issued updated guidance around HIPAA, effectively making our software a requirement for healthcare companies. As a result, we're one of the fastest-growing software companies on earth right now.

Our team has deep analytics and growth experience, with all of us coming from high-growth companies like Heap, Pendo, Iterable, Quantum Metric, and Retool. If you value lots of freedom and ownership in your work, interfacing with customers, and working on a product with high customer impact, then Freshpaint is your home.

About the Role

At Freshpaint, we believe that strong Engineering teams are built of individuals who

  • Solve problems, not tickets – Jump into unfamiliar territory and learn what's needed to move the team forward

  • Think like owners – Focus on delivering measurable business impact rather than completing tasks

  • Elevate others – Actively mentor, unblock, and celebrate teammates, knowing the team's wins are your wins

We are looking for a Senior Software Engineer - Data to join one of our Product-oriented teams. As Freshpaint has grown, our Products have become more sophisticated and increasingly leveraged multiple sources of data. We’re seeking a Software Engineer who has competencies in Data and Data Engineering to help us shape the next generation of Freshpaint Products. We believe there’s a big opportunity ahead, and this person will contribute to the team’s success by building new products and by influencing how we incorporate data into our Product offerings.

What You’ll Do

  • Use your expertise to build Software Products that rely on data

    • Deliver business outcomes by either directly owning, or guiding others to build reliable and scalable products

    • Mentor engineers and analysts on best practices for data quality, reliability, testing, monitoring, and documentation

    • Partner closely with analytics, product, and engineering teams to identify data requirements and translate them into robust, scalable solutions

  • Join customer calls (both internal teams and external users) to hear firsthand what problems they're solving and what features actually move the needle

  • Design and refine data models that underpin product functionality while implementing monitoring systems to ensure reliability and performance

  • Collaborate with our Data Guild to define the organization’s data strategy influencing decisions on tooling, architecture, and engineering standards

  • Solve problems side-by-side with team members through a combination of pairing and solo work

If this sounds like you, we would love to chat!

What We’re Looking For

  • 5+ years of experience building products, either in software engineering, data engineering, or a closely related role

  • Strong customer orientation, with a focus on details that drive product impact and customer value

  • Proven experience building and maintaining production-grade data pipelines

  • Proficiency in application development

  • Proficiency in SQL and at least one data engineering language (e.g., Python, Scala, or Java)

  • Hands-on experience with large-scale data warehouses, regardless of specific tooling

  • Experience with data visualization and the ability to tell clear, compelling stories with data

  • Hands-on experience with modern data warehouses and data modeling best practices

  • Experience working with cloud-based data platforms (AWS, GCP, or Azure)

  • Familiarity with orchestration tools, version control, and CI/CD best practices

  • Ability to work independently, make sound architectural decisions, and thrive in ambiguous environments

  • Strong communication skills and comfort collaborating with both technical and non-technical partners

Nice to Have

  • Experience being an early data engineer at a company

  • Experience with Golang, Typescript, Data Build Tool

  • Experience with tools like Snowflake, Looker, or Fivetran

  • Experience with analytics engineering or BI tooling

  • Prior experience helping scale a data platform as the company grows

Why This Role Is Exciting

  • Build the foundation for what's next. You'll architect the data systems and strategy that power Freshpaint's future, shaping how the company scales for years to come

  • See your impact everywhere. Your work will touch every team and product at Freshpaint, giving you visibility into how engineering decisions drive real business outcomes

  • Code one day, strategize the next. You'll split your time between writing code and making architectural decisions that set technical direction, perfect if you want to keep your hands on the keyboard while influencing the big picture

Interview Process

At the start of the call, we will briefly go through a few standard verification steps to ensure we’re speaking to the right person. This helps protect both candidates and our team against AI misuse. If at any point we get the sense we aren’t speaking with the right candidate, we reserve the right to end the call early.

  • Recruiter Screen

  • Hiring Manager Call

  • Virtual Onsite with Technical Pairings

  • CEO Interview

  • Offer!

Perks & Benefits

We take care of our team—here’s a peek at what you get when you join:

  • Competitive pay + generous equity (10-year exercise window)

  • Fully remote (U.S. only) with a $150/month coworking stipend

  • Half-day Fridays, every Friday

  • Unlimited PTO—with a required 2-week minimum

  • Top-tier health, dental & vision (100% covered for you, 80% for dependents)

  • 2 “Treat Yourself” days a year—$100 and a day off, just because

  • Generous parental leave

  • Epic offsites twice a year (past trips: Greece, Jackson Hole, Cabo, wine country + more)

And more—check out our careers page for the full list.



Please mention the word **SUCCESSFULLY** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$$$ Full time
Senior Software Engineer Data Platform
  • Zus Health
  • United States
software embedded system ceo

Who we are


Zus is a shared health data platform designed to accelerate healthcare data interoperability by providing easy-to-use patient data via API, embedded components, and direct EHR integrations. Founded in 2021 by Jonathan Bush, co-founder and former CEO of athenahealth, Zus partners with HIEs and other data networks to aggregate patient clinical history and then translates that history into user-friendly information at the point of care. Zus's mission is to catalyze healthcare's greatest inventors by maximizing the value of patient insights - so that they can build up, not around.


What we're looking for


We’re looking for an experienced Software Engineer to join the “Costco” team at Zus, which builds services for managing our rapidly growing bulk data offerings while adhering to complex healthcare access control requirements.


The ideal candidate will be excited to take on the challenge of processing, storing and delivering the entire health records of millions of patients, adopting tools to handle growing scale, and ensuring high data quality and freshness. You are creative, innovative and love to run experiments to explore the paths to evolve and develop our platform as we scale.


As part of the core Zus platform, the Costco team has needed to rapidly innovate to stay ahead of data volumes that grow 10x per year and a growing base of data-savvy customers using data to improve patient care. The team is also contending with an evolving regulatory landscape in data privacy and security.


On the Costco team, you will work with microservices in Go, streaming data pipelines in AWS, and state-of-the-art data technologies including Apache Iceberg, Apache Spark, Snowflake, and dbt. Expect to learn a lot and be put on mission-critical projects with direct customer impact.
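To ground the Iceberg/Spark mention above, here is a minimal, hypothetical PySpark read of an Iceberg table. The catalog, schema, table, and column names are invented for illustration; this is a sketch of the pattern, not Zus's actual code.

```python
from pyspark.sql import SparkSession

# Assumes a Spark session already configured with an Iceberg catalog
# named "lake"; all identifiers below are placeholders.
spark = SparkSession.builder.appName("iceberg-read-sketch").getOrCreate()

# Iceberg tables are queried like any other catalog table, with snapshot
# isolation and time travel provided by the table format itself.
records = spark.table("lake.clinical.patient_records")
records.select("patient_id", "updated_at").show(5)
```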



As part of our team, you will
  • Build and operate data services driving our applications and APIs
  • Collaborate with team members and across Engineering to iteratively prototype and develop new functionality
  • Partner with product managers and other Zusers


You're a good fit because you
  • Learn fast and enjoy open-ended technical challenges
  • Have experience with operationally stable, scalable, and cost efficient data services
  • Enjoy owning your work and seeing it deploy safely in production
  • Are experienced using cloud data warehouses such as Snowflake, BigQuery, Redshift, or Databricks
  • Have experience with at least one of the following: deployment technologies (GitHub Actions, CircleCI, etc.), cloud providers (AWS, Azure, GCP), and Infrastructure as Code (Terraform, CloudFormation, etc.)
  • Are excited to ~ finally! ~ enable a true digital revolution in healthcare
  • Thrive amid the changing landscape of a growing and evolving startup
  • Enjoy collaboration and solving unique problems


It would be awesome if you were
  • Experienced at working with petabyte-scale data
  • Experienced with Apache Iceberg, Apache Spark, and other large-scale data technologies
  • Experienced with AuthN/AuthZ and fine-grained access control
  • Familiar with multiple languages including either Go or Python
  • Experienced in working with healthcare data and APIs
  • Familiar with the FHIR and/or TEFCA standards


$140,000 - $180,000 a year
We are a remote first company that believes that in-person interactions are beneficial. You should be comfortable traveling about once a quarter to collaborate with teammates face to face.

We will offer you…


• Competitive compensation that reflects the value you bring to the team: a combination of cash and equity

• Robust benefits that include health insurance, wellness benefits, 401k with a match, unlimited PTO

• Opportunity to work alongside a passionate team that is determined to help change the world (and have fun doing it)


Please Note: Research shows that candidates from underrepresented backgrounds often don’t apply unless they meet 100% of the job criteria. While we have worked to consolidate the minimum qualifications for each role, we aren’t looking for someone who checks each box on a page; we’re looking for active learners and people who care about disrupting the current healthcare system with their unique experiences.


We do not conduct interviews by text nor will we send you a job offer unless you've interviewed with multiple people, including the Director of People & Talent, over video interviews. Job scams do exist so please be careful with your personal information.




Please mention the word **UNFETTERED** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$$$ Full time
Staff Software Engineer
  • Office Hours
  • Remote
software system consulting technical

About Us

Office Hours is an on-demand expert network that connects leading organizations with trusted experts across various knowledge domains. Experts earn income by sharing their knowledge through advisory work, projects, and AI model training. Our platform handles the complexities behind the scenes (screening, compliance, scheduling, and payments) so knowledge sharing stays focused on meaningful insights and real impact.

We’re a hyper-growth, profitable company, quickly expanding our expert network and launching new offices and new products. We are headquartered in San Francisco, with offices in Brooklyn and Bangalore. Our customers include the fastest-growing digital health companies, technology companies, institutional investment firms, consulting firms, and AI labs. We are backed by top marketplace investors and operators of companies like DoorDash, Airbnb, and Affirm.

What we believe

Human knowledge is the world’s most valuable asset. And yet, despite being more interconnected than ever, most knowledge still remains stuck in our heads, inaccessible and underutilized. Our vision is to make human knowledge easily accessible and infinitely scalable by building tools for the new age knowledge economy.

About the role

At first glance, Office Hours looks simple: search, match, connect, and pay. Under the hood, the system is anything but.

We’re building and evolving a deeply interconnected platform spanning search, discovery, recommendations, data pipelines, logistics, payments, compliance, and performance. The entire stack has been built in-house, from expert profiles and discovery experiences to workflow automation and an underlying knowledge graph that ties everything together.

We’re looking for a Staff Full Stack Software Engineer who enjoys working across the stack, takes ownership of complex problems, and cares deeply about building thoughtful, high-quality product experiences. This is a hands-on role with real influence over product direction, technical architecture, and how we ship software.

What you’ll do

  • Own the design, implementation, and rollout of meaningful user-facing features, from problem definition through production

  • Partner closely with design, product, and client-facing teams to translate real user needs into shipped solutions

  • Architect, build, and evolve scalable, reliable systems across the front end, back end, and infrastructure

  • Set a high bar for code quality through clear implementations, thoughtful tradeoffs, and active participation in reviews and technical discussions

  • Explore and integrate modern tools, including AI-powered workflows, and share learnings that improve how the team builds and ships

What you bring

  • 8+ years of professional software engineering experience, with meaningful time spent working across the stack

  • A track record of shipping high-quality, user-facing products in production environments

  • Strong product intuition and the ability to translate ambiguous user or business problems into technical solutions

  • Comfort operating in fast-moving environments where priorities evolve and ownership matters

  • A bias toward action, paired with sound judgment and attention to detail

Our tech stack

  • Back end: Node.js, Typescript, MongoDB & Postgres, OpenSearch, Temporal

  • Front end: React, Next.js, Tailwind, shadcn

  • Infrastructure: AWS, Kubernetes, Docker, Datadog, Sentry

  • Workflow: GitHub, Slack, Notion, Figma, Linear, PostHog, Metabase

Benefits + Perks

  • Competitive salary and equity

  • Medical, dental, and vision coverage

  • 401(k)

  • Monthly wellness and fitness stipend

  • Paid time off policy, along with company holidays

  • Annual company off-sites (Tahoe, Mendocino, Mexico City, San Diego, Park City)

  • Parent-friendly policies, remote flexibility, and paid family leave

Pay Transparency Notice

Full-time offers include base salary, equity, and benefits.

Pay range: $225,000- $250,000 based on seniority and relevant experience

*This role can be 100% remote, but we do have offices in San Francisco and NYC

Don’t meet every single requirement? Studies have shown that some candidates, especially underrepresented groups such as women and people of color, are less likely to apply to jobs unless they meet every single qualification. At Office Hours we believe in building a diverse and inclusive workplace, so if you’re excited about this role but don’t meet every qualification in the job description, we still encourage you to apply. You could still be the right candidate for this or other roles at Office Hours!



Please mention the word **LIGHTER** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$170000 - $190000 Full time
Senior Software Engineer
  • Flock Safety
software assistant design system

Who is Flock?

Flock Safety is the leading safety technology platform, helping communities thrive by taking a proactive approach to crime prevention and security. Our hardware and software suite connects cities, law enforcement, businesses, schools, and neighborhoods in a nationwide public-private safety network. Trusted by over 5,000 communities, 4,500 law enforcement agencies, and 1,000 businesses, Flock delivers real-time intelligence while prioritizing privacy and responsible innovation.

We’re a high-performance, low-ego team driven by urgency, collaboration, and bold thinking. Working at Flock means tackling big challenges, moving fast, and continuously improving. It’s intense but deeply rewarding for those who want to make an impact.

With nearly $700M in venture funding and a $7.5B valuation, we’re scaling intentionally and seeking top talent to help build the impossible. If you value teamwork, ownership, and solving tough problems, Flock could be the place for you.

The Opportunity

We're hiring a Senior Software Engineer to build Night Shift, a conversational AI assistant that helps investigators surface critical evidence and close cases faster. You'll design and implement the conversational interface, build the orchestration backend that manages LLM interactions and tool calling, and develop integration pipelines connecting our AI to Flock's existing data platform and APIs. This is a ground-floor opportunity where product thinking matters as much as technical execution: you'll shape chat experiences with complex context management, partner with platform teams to design new APIs or leverage existing ones, and solve the reliability challenges of deploying AI in high-stakes investigative workflows. You'll collaborate closely with ML engineers on prompt engineering and agentic workflows while maintaining a strong point of view on what makes a great user experience. If you've built LLM-powered products and thrive at the intersection of customer impact and technical depth, this role is for you.
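The production stack for this role centers on TypeScript/Node, but the core streaming pattern described above is easy to sketch. Below is a minimal, hypothetical Python example of streaming tokens from one of the LLM APIs named in the skillset; the model name and prompt are placeholders, and this is not Flock's implementation.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Stream the assistant's reply token by token so a chat UI can render
# partial output immediately (model and prompt are illustrative only).
stream = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize the key evidence in this case."}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```

In a real assistant, the same loop would forward each delta over SSE or WebSockets to the browser rather than printing it.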

The Skillset

  • Love for coding and continuous learning, especially in the rapidly evolving LLM space

  • Resourceful problem-solver mindset: excel in ambiguous situations and take initiative to define product direction

  • Strong TypeScript / Node / Express skills for web services and API design (REST, SSE, WebSockets for streaming)

  • Modern web framework expertise (React / TypeScript preferred), particularly for conversational UI and chat interfaces

  • Hands-on LLM experience: OpenAI/Anthropic/Gemini APIs, prompt engineering, streaming responses, and conversation context management

  • Familiarity with agentic patterns: function calling, tool use (MCP), and orchestrating multi-step workflows

  • API integration skills: consume existing APIs or design new ones to ground AI in investigative data

  • Database confidence: PostgreSQL and sophisticated SQL for data retrieval

  • Cloud infrastructure basics: Docker, Kubernetes (Helm), AWS services (S3, SQS, API Gateway)

  • Product-minded: translate user feedback into technical requirements and make pragmatic tradeoffs

  • Bonus points for: LLM evaluation tools (LangSmith, Langfuse), vector search/RAG, microservices architecture, or Terraform

90 Days at Flock

The First 30 Days

  • Onboard and Integrate:

    • Familiarize yourself with Flock's mission, investigative workflows, and how customers use our platform today

    • Pair with engineers across Cloud Software and ML teams to understand existing APIs, data models, and system architecture

    • Build relationships with key stakeholders to understand their capabilities and constraints. Meet with members of:

      • Machine Learning (agentic systems, model serving)

      • Data Engineering (investigative datasets, pipelines)

      • Platform teams (APIs, infrastructure)

      • Product and Design (customer needs, UX direction)

  • Ship Early and Learn:

    • Complete a first-day push to production

    • Pick up initial sprint tickets: bug fixes, small UX improvements, or API integrations

    • Participate in customer feedback sessions to understand investigator workflows and pain points

The First 60 Days

  • Build the Foundation:

    • Deliver core conversational UI components and establish patterns for chat interfaces

    • Implement backend orchestration for LLM interactions and tool calling

    • Stand up observability for the AI system (logging, tracing, basic metrics)

    • Work with ML team to integrate agentic workflows and refine prompt strategies

  • Demonstrate Velocity:

    • Own end-to-end features that connect UI, backend orchestration, and data integrations

    • Collaborate with Product to rapidly iterate based on early user testing

    • Propose technical improvements to chat quality, performance, or reliability

90 Days & Beyond

  • Drive Product Impact:

    • Lead development of a core Night Shift capability that demonstrably improves investigator efficiency

    • Represent the team in cross-functional initiatives, balancing zero-to-one experimentation with engineering best practices

    • Establish patterns for testing and quality in an evolving AI product

  • Shape the Direction:

    • Influence product roadmap through technical insights and customer feedback

    • Mentor team members on LLM integration patterns or full-stack best practices

    • Own a domain area (e.g., conversation management, data grounding, streaming architecture)

The Interview Process

We want our interview process to be a true reflection of our culture: transparent and collaborative. Throughout the interview process, your recruiter will guide you through the next steps and ensure you feel prepared every step of the way. To check out our interview stages and how you should prepare, visit experiences on our careers page.

Salary & Equity

In this role, you’ll receive a starting salary of $170,000-$185,000 as well as stock options. Base salary is determined by job-related experience, education/training, as well as market indicators. Your recruiter will discuss this in-depth with you during our first chat.

The Perks

🌴Flexible PTO: We seriously mean it, plus 11 company holidays.

⚕️Fully-paid health benefits plan for employees: including Medical, Dental, and Vision and an HSA match.

👪Family Leave: All employees receive 12 weeks of 100% paid parental leave. Birthing parents are eligible for an additional 6-8 weeks of physical recovery time.

🍼Fertility & Family Benefits: We have partnered with Maven, a complete digital health benefit for starting and raising a family. Flock will provide a $50,000 lifetime maximum benefit related to eligible adoption, surrogacy, or fertility expenses.

🧠Spring Health: Spring Health offers a variety of mental health benefits, including therapy, coaching, medication management, and digital tools, all tailored to each individual's needs.

💖Caregiver Support: We have partnered with Cariloop to provide our employees with caregiver support

💸Carta Tax Advisor: Employees receive 1:1 sessions with Equity Tax Advisors who can address individual grants, model tax scenarios, and answer general questions.

💚ERGs: We want all employees to thrive and feel like they belong at Flock. We offer three ERGs today - Women of Flock, Flock Proud, and Melanin Motion. If you are interested in talking to a representative from one of these, please let your recruiter know.

💻WFH Stipend: $150 per month to cover the costs of working from home.

📚Productivity Stipend: $300 per year to use on Audible, Calm, Masterclass, Duolingo, Grammarly and so much more.

🏠Home Office Stipend: A one-time $750 to help you create your dream office.

If an offer is extended and accepted, this position requires the ability to obtain and maintain Criminal Justice Information Services (CJIS) certification as a condition of employment. Applicants must meet all FBI CJIS Security Policy requirements, including a fingerprint-based background check.

Flock is an equal opportunity employer. We celebrate diverse backgrounds and thoughts and welcome everyone to apply for employment with us. We are committed to fostering an environment that is inclusive, transparent, and collaborative. Mutual respect is central to how Flock operates, and we believe the best solutions come from diverse perspectives, experiences, and skills. We embrace our differences and know that we are stronger working together.

If you need assistance or an accommodation due to a disability, please email us at recruiting@flocksafety.com. This information will be treated as confidential and used only to determine an appropriate accommodation for the interview process.

At Flock Safety, we compensate our employees fairly for their work. Base salary is determined by job-related experience, education/training, as well as market indicators. The range above is representative of base salary only and does not include equity, sales bonus plans (when applicable) and benefits. This range may be modified in the future. This job posting may span more than one career level.



Please mention the word **EMPOWERMENT** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$$$ Full time
Senior Data Engineer
  • ARB Interactive
  • Miami
security python game technical

At ARB Interactive, creativity, tech, and play collide. Founded in 2022, we've grown to nearly 200 team members and were named one of LinkedIn's 2025 Top 50 Startups in the United States! We move fast, think big, and love bold ideas that push boundaries (and buttons). From new rewards to fresh game mechanics, every challenge is a chance to innovate and have fun doing it. Our culture is collaborative, curious, and full of laughter because great ideas grow best between coffee, code, and a few epic high-fives.

Summary

We’re looking for a Senior Data Engineer to help shape and expand the foundation of our modern data stack. This is a hands-on role for someone who’s excited to build and improve robust, scalable pipelines and collaborate cross-functionally to turn raw data into business-critical insights.

As a senior member of the team, you'll play a key role in technical decision-making, partnering closely with analytics, engineering, product, and other talented, collaborative teammates to help ensure our systems scale with the business. If you're passionate about solving complex, real-world data challenges that move the needle in a high-growth environment, this role offers the perfect blend of technical challenge and meaningful impact.

This is a great opportunity for someone who thrives on hands-on execution but also enjoys mentoring others, guiding architectural decisions, and helping shape the future of the data function.

Responsibilities

  • Design, build, and maintain scalable, efficient ETL/ELT pipelines

  • Model clean, trusted datasets to support analytics, experimentation, and reporting

  • Optimize our data infrastructure for performance, cost, governance, and maintainability

  • Partner with data analysts and product teams to improve data accessibility and accuracy

  • Enable self-service analytics by designing intuitive data models and comprehensive documentation

  • Implement robust data quality frameworks, monitoring, alerting and observability to ensure data reliability

  • Collaborate with product and engineering on instrumentation of new product features and events

  • Mentor junior team members, contribute to code reviews, and share best practices

  • Influence the long-term direction of our data architecture and tooling

  • Take on team leadership or people management responsibilities as the team scales

Requirements

  • 5+ years of experience in data engineering or related roles

  • Strong SQL and Python skills, with a focus on readable and efficient code

  • Deep understanding of data warehousing concepts and data modeling best practices

  • Hands-on experience with tools in the modern data stack (e.g., dbt, Airflow, Snowflake, BigQuery, Redshift); see the sketch after this list

  • Strong communication and collaboration skills; able to work cross-functionally with analysts, PMs, and engineers

  • A bias toward action and ownership; you thrive in fast-paced, high-autonomy environments
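As a rough illustration of the Airflow-style orchestration named in the requirements above, a daily ELT DAG might look like the following. The DAG id, task names, and stubbed callables are hypothetical, and Airflow 2.4+ is assumed for the `schedule` parameter.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Pull the day's raw events from the source system (stubbed here).
    ...


def load():
    # Load the extracted batch into the warehouse (stubbed here).
    ...


# A two-step daily pipeline: extract, then load.
with DAG(
    dag_id="daily_events_elt",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task
```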

Nice to Have

  • Experience in gaming, entertainment, or high-volume consumer applications

  • Familiarity with event tracking platforms (e.g., Segment, Amplitude)

  • Experience hiring or onboarding engineers in a high-growth environment

Diversity Commitment: We are focused on building a diverse and inclusive team. We welcome people of all backgrounds, experiences, abilities, and perspectives and are an equal opportunity employer. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.

Important Security Notice: Our recruitment team will only contact candidates through official channels using @arbinteractive.com email addresses and via our recruiting platform, Ashby. If you find a position on a third party careers page (LinkedIn, Indeed, etc.), the job posting will redirect you to our careers page (https://jobs.ashbyhq.com/arb-interactive) to begin your application. We will never request payment, banking information, or personal identification details during the application process.

If you're ever uncertain about the legitimacy of communication claiming to be from our company, please forward it to recruiting@arbinteractive.com for verification before responding or clicking any links.



Please mention the word **SMOOTHES** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$$$ Full time
Data Engineer
  • TextNow
  • Open- Canada
python support travel cloud

We believe communication belongs to everyone. We exist to democratize phone service.  TextNow is evolving the way the world connects, and that's because we're made up of people with curious minds who bring an optimistic yet critical lens into the work we do.   We're the largest provider of free phone service in the nation. And we're just getting started. 

 

Join us in our mission to break down barriers to communication and free the flow of conversation for people everywhere. 

 

TextNow is looking for an experienced Data Engineer with hands-on experience designing and developing data platforms. You will own the design, development, and maintenance of TextNow's data platform, enabling us to make effective data-informed decisions. You will be part of cross-functional efforts to build scalable and reliable frameworks that support all of TextNow's business and data products. In this role, you will interact with different functional areas within the business and influence decision-making in a fast-growing mobile communications start-up.



What You'll Do
  • Own TextNow's data warehouse, data pipelines, and integration points between various business systems. 
  • Design, develop, and support new and existing batch and real-time data pipelines, and recommend improvements or modifications. 
  • Manage data models to enable AI/ML data products. 
  • Champion TextNow's data ecosystem by working with engineering and infrastructure teams to enable quicker access to data for insights and decision-making. 
  • Communicate data modeling and architecture processes to cross-functional teams. 
  • Identify, design, and implement process improvements across the data platform. 


Who You Are
  • Have 3–5 years of experience working with data warehouse/data lake and ETL architectures (e.g., Databricks, Iceberg), cloud data warehouses (e.g., Snowflake), and hands-on experience in Python and SQL — preferably in companies with fast-growing and evolving data needs. 
  • Have at least 2 years of experience with Airflow and Spark. 
  • Have developed scalable, real-time data pipelines using Python/Scala, SQL, and distributed processing frameworks such as Spark or Flink (see the sketch after this list). 
  • Have exposure to the AWS platform and services such as EKS, MSK, and MWAA (preferred). 
  • Have experience building data features using Snowflake, dbt, and Python to power real-time AI/ML inference. 
  • Are respectfully candid, with the ability to initiate and drive tasks to completion. 
  • Are highly organized, dependable, and follow a structured work approach. 
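As a loose illustration of the Snowflake-plus-Python feature work mentioned in the list above, here is a minimal sketch using the Snowflake Python connector. The account, credentials, table, and column names are all hypothetical.

```python
import snowflake.connector

# Connection parameters below are placeholders, not real credentials.
conn = snowflake.connector.connect(
    account="example_account",
    user="example_user",
    password="example_password",
    warehouse="ANALYTICS_WH",
    database="ANALYTICS",
    schema="FEATURES",
)
try:
    cur = conn.cursor()
    # Fetch a precomputed feature row to serve to a real-time model.
    cur.execute(
        "SELECT user_id, messages_sent_7d FROM user_features WHERE user_id = %s",
        ("12345",),
    )
    print(cur.fetchone())
finally:
    conn.close()
```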


$88,900 - $127,000 a year
Final compensation will be determined based on a number of factors, including skills, experience, location and on-the-job performance. We’re committed to paying competitively to hire and retain high-caliber talent. We recognize that exceptional talent may fall outside of these ranges; we encourage all qualified candidates to apply even if their compensation expectations are outside of the listed range.

More about TextNow...


Our Values:

·  Customer Obsessed (We strive to have a deep understanding of our customers)

·  Do Right By Our People (We treat each other with fairness, respect, and integrity)

·  Accept the Challenge (We adopt a "Yes, We Can" mindset to achieve ambitious goals)

·  Act Like an Owner (We treat this company like it's our own... because it is!)

·  Give a Damn! (We are deeply committed and passionate about our work and achieving results)


Benefits, Culture, & More:

·   Strong work life blend 

·   Flexible work arrangements (wfh, remote, or access to one of our office spaces)

·   Employee Stock Options 

·   Unlimited vacation 

·   Competitive pay and benefits

·   Parental leave

·   Benefits for both physical and mental well being (wellness credit and L&D credit)

·   We travel a few times a year for various team events, company wide off-sites, and more


Diversity and Inclusion:

At TextNow, our mission is built around inclusion and offering a service for EVERYONE, in an industry that traditionally only caters to the few who have the means to afford it. We believe that diversity of thought and inclusion of others promotes a greater feeling of belonging and higher levels of engagement. We know that if we work together, we can do amazing things, and that our differences are what make our product and company great. 


TextNow Candidate Policy

By submitting an application to TextNow, you agree to the collection, use, and disclosure of your personal information in accordance with the TextNow Candidate Policy



Please mention the word **COOPERATIVELY** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$$$ Full time
Senior Data Analyst
  • TextNow
  • Open- Canada
analyst python support growth

We believe communication belongs to everyone. We exist to democratize phone service.  TextNow is evolving the way the world connects and that's because we're made up of people with curious minds who bring an optimistic, yet critical lens into the work we do.   We're the largest provider of free phone service in the nation. And we're just getting started.


Join us in our mission to break down barriers to communication and free the flow of conversation for people everywhere.


TextNow is looking for a motivated Senior Data Analyst to join our Analytics & Insights team. You’ll drive data-informed decision-making across the organization by translating business problems into analytical solutions, designing insightful dashboards, and uncovering trends that shape strategic actions.

This role is perfect for someone with strong analytical skills, deep business acumen, and a passion for using data to tell stories that inspire action.


What You’ll Do


  • Analyze complex datasets to identify actionable insights, trends, and opportunities

  • Develop and maintain dashboards, reports, and data visualizations using tools like Looker, Tableau, Power BI, or Redash

  • Conduct ad hoc analyses to support product, marketing, and operations initiatives

  • Partner with data engineering teams to ensure data quality, integrity, and availability

  • Develop and maintain KPI frameworks and performance measurement systems

  • Assist in building scalable data models and automation pipelines

  • Collaborate cross-functionally with Product, Finance, Marketing, and Operations teams to define analytical needs

  • Translate business questions into data requirements and present insights and recommendations to senior leadership

  • Mentor junior analysts and foster a culture of data-driven decision-making

  • Define and standardize analytical best practices across the organization


You’ll Be a Great Fit If You Have


  • Bachelor’s degree in Data Science, Statistics, Mathematics, Economics, Computer Science, or a related field (Master’s preferred)

  • 5+ years of experience in data analytics or business intelligence

  • Proficiency in SQL and at least one programming language (e.g., Python or R)

  • Experience with modern BI tools (Looker, Tableau, Power BI, Mode, or Redash)

  • Strong understanding of A/B testing, statistical analysis, and data modeling (see the sketch after this list)

  • Experience working with large-scale datasets and cloud-based environments (e.g., Snowflake, Eppo)

  • Excellent communication and storytelling skills with data

  • Attention to detail, analytical rigor, and curiosity for continuous improvement
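For a concrete sense of the A/B-testing skill mentioned above, a minimal two-proportion significance check in Python might look like this; the counts are made up purely for illustration.

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical experiment: conversions out of users, control vs. variant.
conversions = [420, 480]
users = [10_000, 10_000]

stat, p_value = proportions_ztest(conversions, users)
print(f"z = {stat:.2f}, p = {p_value:.4f}")
# A small p-value (e.g., < 0.05) suggests the lift is unlikely to be
# chance alone; in practice you'd also check power and effect size.
```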


Preferred Skills


  • Experience in telecommunications, SaaS, or consumer app environments

  • Familiarity with machine learning concepts and predictive analytics

  • Understanding of ETL processes and data warehousing fundamentals

  • Experience collaborating with product teams on experimentation and growth analytics


Estimated Base Salary Range by Location:


Canada (CAD): $103,700 – $140,300

US – National (USD): $114,800 – $155,300

Final compensation will be determined based on a number of factors, including skills, experience, location, and on-the-job performance. We’re committed to paying competitively to hire and retain high-caliber talent. We recognize that exceptional talent may fall outside of these ranges; we encourage all qualified candidates to apply even if their compensation expectations are outside of the listed range.


More about TextNow...


Our Values:

·  Customer Obsessed (We strive to have a deep understanding of our customers)

·  Do Right By Our People (We treat each other with fairness, respect, and integrity)

·  Accept the Challenge (We adopt a "Yes, We Can" mindset to achieve ambitious goals)

·  Act Like an Owner (We treat this company like it's our own... because it is!)

·  Give a Damn! (We are deeply committed and passionate about our work and achieving results)


Benefits, Culture, & More:

·   Strong work life blend 

·   Flexible work arrangements (wfh, remote, or access to one of our office spaces)

·   Employee Stock Options 

·   Unlimited vacation 

·   Competitive pay and benefits

·   Parental leave

·   Benefits for both physical and mental well being (wellness credit and L&D credit)

·   We travel a few times a year for various team events, company wide off-sites, and more


Diversity and Inclusion:

At TextNow, our mission is built around inclusion and offering a service for EVERYONE, in an industry that traditionally only caters to the few who have the means to afford it. We believe that diversity of thought and inclusion of others promotes a greater feeling of belonging and higher levels of engagement. We know that if we work together, we can do amazing things, and that our differences are what make our product and company great. 


TextNow Candidate Policy

By submitting an application to TextNow, you agree to the collection, use, and disclosure of your personal information in accordance with the TextNow Candidate Policy



$$$ Full time
Data Engineer
  • Loop
  • Remote
python growth code cloud

The Data team at Loop is on a mission to empower merchants with transformative data products that drive success beyond returns. By building tools that merchants love and fostering a robust data culture, the team enables smarter decision-making across the board. Whether creating insights to guide merchants’ strategies or strengthening internal data-driven processes, the Data team is integral to shaping Loop’s future and unlocking new opportunities for our merchants and teams alike.


As a Data Engineer at Loop, you’ll have the chance to significantly impact our ability to solve merchant problems and fulfill merchant needs. You’ll be an integral member of the team, owning all aspects of data availability, quality, and ease of use of our data platforms. Your success in this role will depend on a healthy blend of creativity and structure with a continuous focus on delivering value to the business.


At Loop, we’re intentional about the way we work so that we can do our best work. We call this our Blended Working Environment. We work from our HQ in Columbus, OH, or one of our Hub or Secluded locations, and are distributed throughout the United States, select Canadian provinces, and the United Kingdom. For this position, we’re looking for someone to join us in a location where we already have an established Hub or HQ.


Our data stack: Snowflake, Fivetran, dbt, GoodData, Secoda



What you’ll do:
  • Maintain and optimize existing data pipelines and warehouse solutions for performance, reliability, and cost efficiency. 
  • Support internal analytics and ML teams with data modeling, schema updates, and ad hoc data needs. 
  • Contribute to dbt projects and assist in ensuring data quality, observability, and accessibility. 
  • Write clean, tested, and documented code, and participate in code reviews. 
  • Collaborate with senior data engineers to understand and contribute to new ingestion sources, ML pipelines, and other forward-looking initiatives. 
  • Ensure internal stakeholders can access and use data effectively, enabling faster business insights and decision-making.


Your experience:
  • 4 years of hands-on experience building and maintaining data pipelines and data sets in a cloud environment (Snowflake, GBQ, Redshift, etc.). *We're expecting top candidates to have hands-on experience with Snowflake, specifically!
  • 2+ years of Python experience, creating reliable workflows and data processing scripts. 
  • Strong SQL skills and experience with data modeling. 
  • Experience with dbt or similar transformation tools.
  • Familiarity with distributed systems and ETL/ELT processes.
  • Nice to have: Experience with data observability, lineage, or governance tools. 
  • Nice to have: Exposure to BI tools and supporting analytics teams. 
  • Nice to have: Experience working on cross-functional data projects. 
  • Nice to have: Familiarity with Fivetran, Kafka, or modern data integration platforms. 


Our Data Team values
  • Progress over perfection and focus on delivering value. 
  • Strong, open, and continuous collaboration with peers and stakeholders. 
  • Autonomy and accountability. 
  • Drive to solve problems. 
  • Engagement and participation in our Agile practices.


$118,400 - $177,600 a year
We know that making decisions about your career and compensation is a huge deal. Because of that, we’re incredibly thoughtful about our compensation strategy. We want you to feel safe and excited, but also comfortable with the compensation package of a startup. We’ve outlined some important information for you here, but please know there’s a lot more to compensation than we can cover in this job posting. 

The posted salary range is the base salary for this opportunity. The salary range is subject to change, and may be adjusted in the future.

The actual annual salary paid for this position will be based on several factors, including, but not limited to: your prior experience and skills related to the position, geographic location, company needs, current market demands, and your total compensation goals. 

Great humans deserve great benefits. At Loop, you’ll be eligible for benefits such as: medical, dental, and vision insurance, flexible PTO, company holidays, sick & safe leave, parental leave, 401k, monthly wellness benefit, home workstation benefit, phone/internet benefit, and equity.

#LI-ST1


Loop Story


Commerce should feel effortless. Every product adored, every order perfect, every customer loyal for life. But reality is messier: operations get tangled, margins grow thin, and trust is fragile. That’s where Loop steps in. We create confidence where commerce fails.


We started by fixing returns and exchanges. Today, we’re building a connected commerce operations suite — powering everything from order tracking to fraud prevention, with hundreds of innovations in between. Grounded in data and insight, our platform helps merchants make smarter decisions with every transaction. Over 5,000 of the world’s most loved brands trust Loop to turn cost centers into growth engines. Our mission is simple: protect margins, delight customers, and help merchants build businesses that last.


Life at Loop is rooted in our core values. We balance high empathy with high standards, knowing that work is better when we can show up authentically and resilience is built by facing challenges head-on. We expect you’ll grow quickly, learning skills that last far beyond your time here. Loop is a formative chapter in your career — a chance to shape the future of commerce and to leave better than when you arrived.


Learn more about us here: https://loopreturns.com/careers.


You can review our privacy notice here.



$$$ Full time
design system python music

At Spotify, we're building the revenue platform that drives how revenue and taxes are processed across the company — enabling reliable, scalable financial operations across every market, product line, and partner. Our systems are essential to Spotify’s ability to earn, track, and report revenue and taxes, supporting everything from subscriptions and advertising to creator payouts.


As engineers on this team, we design and maintain the backend and data platform capabilities that power millions of transactions each day with precision. We build services that handle tax calculations, produce compliant financial records, and support regulatory requirements across global markets — all while staying agile to keep up with Spotify’s evolving business models. We equip Finance teams with flexible, configurable tools that govern how revenue and taxes are applied across products, enabling rapid adjustments without needing deep technical expertise. Our modular, process-oriented components simplify the development, maintenance, and scaling of the critical Order to Cash enterprise process that underpins Spotify’s financial operations.
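
To make the configuration-driven idea concrete, here is a purely illustrative sketch (not Spotify's actual design; all markets, products, and rates are hypothetical) of a Finance-editable rule table driving how tax is applied per market and product:

```python
# Illustrative only: a config-driven tax lookup. The rules below are
# hypothetical placeholders, not real tax rates.
from dataclasses import dataclass
from decimal import Decimal

@dataclass(frozen=True)
class TaxRule:
    market: str    # e.g., an ISO country code
    product: str   # e.g., "subscription", "advertising"
    rate: Decimal  # tax rate as a fraction of the net amount

# Configuration a Finance team could own; the code only interprets it.
RULES = {
    ("SE", "subscription"): TaxRule("SE", "subscription", Decimal("0.25")),
    ("US", "advertising"):  TaxRule("US", "advertising", Decimal("0.00")),
}

def tax_for(market: str, product: str, net: Decimal) -> Decimal:
    """Compute tax on a net amount from the configured rule."""
    rule = RULES[(market, product)]
    return (net * rule.rate).quantize(Decimal("0.01"))

print(tax_for("SE", "subscription", Decimal("9.99")))  # -> 2.50
```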



What You'll Do
  • Gain deep expertise in Spotify’s revenue platform, understanding how it enables financial operations, compliance, and strategic decision-making.
  • Design and implement scalable backend and data systems that process millions of transactions daily — supporting accurate tax calculation, billing, revenue recognition, financial configuration, and tax reporting.
  • Build intuitive, self-serve tools that empower Finance teams to define and manage product-specific revenue and tax configuration independently, without requiring engineering involvement.
  • Develop and enhance modular platform capabilities that encode critical enterprise workflows, promoting consistency, reusability, and ease of maintenance across financial systems.
  • Lead the creation of new platform capabilities within the Tax Solutions space, focusing on Tax Reporting and global regulatory compliance.
  • Partner closely with Engineers, Product and Finance stakeholders to design systems that are scalable, auditable, and highly reliable.
  • Champion engineering best practices, strong architectural design, and operational excellence across backend and data platforms.
  • Foster a collaborative team culture rooted in shared ownership, constructive feedback, and continuous improvement.


Who You Are
  • You have experience in data engineering, including building and maintaining data pipelines.
  • You are proficient in Python and ideally Scala or Java.
  • You possess a foundational understanding of system design, data structures, and algorithms, coupled with a strong desire to learn quickly, embrace feedback, and continuously improve your technical skills.
  • You’re familiar with cloud-native development and deployment, ideally within the Google Cloud Platform.
  • You think critically about system design and strive to build solutions that are reliable, maintainable, and auditable at scale.
  • You have good communication skills and can articulate your ideas and ask clarifying questions.
  • You love collaborating with others.
  • You thrive in ambiguous and fast-changing environments, and know how to make progress even when requirements are evolving.
  • You approach platform engineering with empathy for your users, prioritising usability, configurability, and long-term sustainability.
  • You care deeply about code quality, testing, and documentation, and you aim to build systems that are easy to understand and operate.
  • You enjoy collaborating across functions and bring clarity and alignment when working with engineering, finance, and product partners.
  • You’re naturally curious, self-motivated, and always looking for ways to grow your technical skills and improve how things are done.


Where You'll Be
  • This role is based in London, United Kingdom.
  • We offer you the flexibility to work where you work best! There will be some in-person meetings, but the role still allows for flexibility to work from home.



Spotify is an equal opportunity employer. You are welcome at Spotify for who you are, no matter where you come from, what you look like, or what’s playing in your headphones. Our platform is for everyone, and so is our workplace. The more voices we have represented and amplified in our business, the more we will all thrive, contribute, and be forward-thinking! So bring us your personal experience, your perspectives, and your background. It’s in our differences that we will find the power to keep revolutionizing the way the world listens.


At Spotify, we are passionate about inclusivity and making sure our entire recruitment process is accessible to everyone. We have ways to request reasonable accommodations during the interview process and help assist in what you need. If you need accommodations at any stage of the application or interview process, please let us know - we’re here to support you in any way we can.


Spotify transformed music listening forever when we launched in 2008. Our mission is to unlock the potential of human creativity by giving a million creative artists the opportunity to live off their art and billions of fans the chance to enjoy and be passionate about these creators. Everything we do is driven by our love for music and podcasting. Today, we are the world’s most popular audio streaming subscription service.



$$$ Full time
Senior Data Engineer
  • Ethena Labs
  • Globally Remote
crypto back-end python cto

Who We Are and What We are Doing:

Ethena Labs is actively building and deploying a suite of groundbreaking digital dollar products aiming to upgrade money into the internet era.


Our flagship product, USDe, is a synthetic dollar backed by digital assets that takes the novel approach of using a delta-neutral hedged basis strategy to maintain its peg. This product scaled from zero to $15b in 18 months.


Expanding on this, iUSDe is designed specifically for traditional financial institutions, incorporating necessary compliance features to enable them to access the crypto-native rewards our protocol generates, in an institutional-friendly manner.


Ethena has also developed USDtb: a fiat-backed, GENIUS-compliant stablecoin built in partnership with BlackRock, which has scaled to ~$2b.


These products are also available as a white-label offering, where any application, chain, wallet, or exchange can launch its own stablecoin on Ethena's back-end infrastructure.


Through these offerings, Ethena Labs is not just creating new financial products; we are building the foundational infrastructure for a more open, efficient, and interconnected global financial system.


Open job offerings will be focused on two new major product lines coming to market in the next few months.


Join us!!


The Senior Data Engineer is a critical role reporting directly to the CTO. The primary mission is to rapidly deliver a reliable, production-ready market data platform that serves as the single source of truth for trading, risk, and business intelligence.


You’ll immediately own the entire data platform from inception and deliver working historical and real-time Tardis pipelines in the first 60 days. Beyond the initial MVP, the role requires iteratively evolving the platform into a best-in-class, cloud-native, observable, and self-service system. You will work hand in hand with the CTO & trading team to scope & deliver to business needs. The Senior Data Engineer will also serve as the go-to data expert for the firm and will be responsible for mentoring future junior data engineers or analysts.




What You’ll Do
  • Rapidly spin up the cloud environment. Deliver working historical backfill pipelines from Tardis.dev into a queryable database.
  • Deliver a real-time Tardis WebSocket pipeline, ensuring data is normalized, cached for live consumption, accurate, replayable, and queryable by Day 60 (see the sketch after this list).
  • Ensure all pipelines are idempotent, retryable, and use exactly-once semantics. Implement full CI/CD, Terraform, automated testing, and secrets management.
  • Implement proper observability (structured logs, metrics, dashboards, alerting) from day one. Provide immediate self-service access to the MVP database for Trading and BI teams via tools like Tableau/Metabase, and through simple internal REST APIs.
  • Develop specialized timeseries data, including a USDe backing-asset timeseries and a full opportunity-surface timeseries for delta-neutral/lending/borrow opportunities.
  • Ingest data from additional sources (Kaiko, CoinAPI, on-chain via TheGraph/Dune). Plan for 10x+ data growth via schema evolution, partitioning, and performance tuning. Establish enterprise-grade governance, including a data quality framework, RBAC, audit logs, and a semantic layer.
  • Create full architecture documentation, runbooks, and a data dictionary. Onboard and mentor future junior staff.
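
As a minimal sketch of the real-time ingestion shape referenced above (using the third-party websockets library; the endpoint, message format, and sink are placeholders rather than Tardis specifics):

```python
# Sketch of a resilient real-time market-data consumer. Placeholder
# endpoint and message fields; auth, normalization, and exactly-once
# bookkeeping are deliberately omitted.
import asyncio
import json

import websockets  # pip install websockets

FEED_URL = "wss://example.com/market-data"  # placeholder endpoint

async def handle(msg: dict) -> None:
    # Stub sink: a real pipeline would normalize and write to a durable,
    # replayable store so downstream writes can be idempotent.
    print(msg.get("symbol"), msg.get("price"))

async def consume(url: str) -> None:
    # websockets' connect() is async-iterable and reconnects on failure,
    # so transient drops do not kill the pipeline.
    async for ws in websockets.connect(url):
        try:
            async for raw in ws:
                await handle(json.loads(raw))
        except websockets.ConnectionClosed:
            continue  # back off and resubscribe in a real pipeline

if __name__ == "__main__":
    asyncio.run(consume(FEED_URL))
```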


What We’re Looking For
  • Proven track record of delivering working, production-ready data in weeks, not months, with the ability to ruthlessly cut scope to hit a 60-day MVP while managing technical debt.
  • Have built Tardis historical and real-time pipelines before (or equivalent high-quality crypto market data feeds), understanding specific quirks, rate limits, and WebSocket structures.
  • Expert in large-scale, reliable ETL/ELT for financial or market data.
  • Fluent in provisioning full environments with Terraform in days and expert in AWS/GCP serverless technologies.
  • Expert Python and SQL skills and proficiency with time-series databases like TimescaleDB or ClickHouse, ensuring fast queries from day one.
  • Advanced knowledge of WebSocket clients, message queues, and low-latency streaming, plus GitOps, automated testing/deployment, and observability practices.
  • Significant understanding of stablecoins, lending protocols, and opportunity surface concepts, or a proven ability to ramp up extremely quickly.



Why Ethena Labs?


You'd be joining a group that has established itself as one of the most successful crypto-native companies of all time, with a mission to revolutionise decentralised finance and its position in global finance.


Work alongside a passionate and innovative team that values collaboration and creativity.

Enjoy a flexible, remote-friendly work environment with established opportunities for personal growth and learning.


If you subscribe to the mission of separating the dollar from the state, then we want to hear from you!


We look forward to receiving your application and will be in touch after having a chance to review. 


In the meantime, here are some links to more information about Ethena Labs to help you check us out:

Website

Twitter/X

LinkedIn



$60000 - $80000 Full time
Data Engineer
  • Sayari
  • Remote - US
python software code financial

About Sayari: 

Sayari is a risk intelligence provider that equips the public and private sectors with immediate visibility into complex commercial relationships by delivering the largest commercially available collection of corporate and trade data from over 250 jurisdictions worldwide. Sayari's solutions enable risk resilience, mission-critical investigations, and better economic decisions. 

Headquartered in Washington, D.C., its solutions are trusted by Fortune 500 companies, financial institutions, and government agencies, and are used globally by thousands of users in over 35 countries. Funded by world-class investors, with a strategic $228 million investment by TPG Inc. (NASDAQ: TPG) in 2024, Sayari has been recognized by the Inc. 5000 and the Deloitte Technology Fast 500 as one of the fastest growing private companies in the United States and was featured as one of Inc.’s “Best Workplaces” for 2025.

POSITION DESCRIPTION

Sayari is looking for an Entry-Level Data Engineer to join our Data team located in Washington, DC. The Data team is an integral part of our Engineering division and works closely with our Software & Product teams, as well as other key stakeholders across the business.

JOB RESPONSIBILITIES:

  • Write and deploy crawling scripts to collect source data from the web
  • Write and run data transformers in Scala Spark to standardize bulk data sets
  • Write and run modules in Python to parse entity references and relationships from source data (see the sketch after this list)
  • Diagnose and fix bugs reported by internal and external users
  • Analyze and report on internal datasets to answer questions and inform feature work
  • Work collaboratively on and across a team of engineers using basic agile principles
  • Give and receive feedback through code reviews
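
As a hedged sketch of the Python parsing responsibility above (the record layout and field names are invented for illustration, not Sayari's schema):

```python
# Toy example: extract (company, officer, role) relationship edges
# from one raw corporate record. Field names are hypothetical.
from dataclasses import dataclass

@dataclass
class Relationship:
    source: str  # company name
    target: str  # related person or entity
    role: str    # e.g., "director", "shareholder"

def parse_relationships(record: dict) -> list[Relationship]:
    """Turn one source record into a list of relationship edges."""
    company = record.get("company_name", "").strip()
    edges = []
    for officer in record.get("officers", []):
        name = officer.get("name", "").strip()
        if company and name:
            edges.append(Relationship(company, name, officer.get("role", "unknown")))
    return edges

sample = {
    "company_name": "Acme Trading Ltd",
    "officers": [{"name": "J. Doe", "role": "director"}],
}
print(parse_relationships(sample))
```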

SKILLS & EXPERIENCE

Req


$$$ Full time
Data Scientist
  • Arbol
  • New York City, New York
back-end python support fintech

Arbol is a global climate risk coverage platform and FinTech company offering full-service solutions for any business looking to analyze and mitigate exposure to climate risk. Arbol’s products offer parametric coverage which pays out based on objective data triggers rather than subjective assessment of loss. Arbol’s key differentiator versus traditional InsurTech or climate analytics platforms is the complete ecosystem it has built to address climate risk. This ecosystem includes a massive climate data infrastructure, scalable product development, automated, instant pricing using an artificial intelligence underwriter, blockchain-powered operational efficiencies, and non-traditional risk capacity bringing capital from non-insurance sources. By combining all these factors, Arbol brings scale, transparency, and efficiency to parametric coverage.

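
To make the parametric mechanics concrete, here is a deliberately simplified sketch (not Arbol's pricing code; the strike, tick, and limit values are hypothetical) of a payout computed as a pure function of an objective rainfall index:

```python
# Toy parametric payout: pay a fixed amount per millimetre of rainfall
# deficit below a strike, capped at a limit. All parameters invented.
def parametric_payout(rainfall_mm: float,
                      strike_mm: float = 300.0,
                      tick_usd_per_mm: float = 1_000.0,
                      limit_usd: float = 100_000.0) -> float:
    deficit = max(strike_mm - rainfall_mm, 0.0)
    return min(deficit * tick_usd_per_mm, limit_usd)

print(parametric_payout(240.0))  # 60 mm deficit -> 60,000.0
```

Because the trigger is an objective index rather than a claimed loss, a payout like this can be computed and settled automatically once the season's data is in.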

In this role, you will research, develop, and apply machine learning tools to model and price climate and weather risk. You will work with diverse weather and geospatial datasets covering a suite of phenomena, from traditional weather-station readings of temperature and precipitation, to radar measurements of hail stone sizes, to satellite indices of vegetation content. You will learn how to use our existing catalog of pricing and modeling tools, engage in their improvement and maintenance, and develop new methodologies. We are open to a range of experience levels for this position.



About the Team

The analytics team is responsible for making sense of the terabytes of data Arbol has at its disposal. It forms the connective tissue between more client-facing teams, such as sales, and back-end roles like data engineering. You’ll be joining a small team of data scientists and researchers and will have a unique opportunity to impact many levels of the firm. This is an ideal position for someone interested in building machine learning systems while taking a deep dive into the insurance industry.



What You'll Be Doing
  • Collaborate within the analytics team and across teams to gain expertise in Arbol’s data/pricing infrastructure and products
  • Develop and improve models for climate and weather perils such as heat waves, severe convective storms, and tropical cyclones
  • Implement, assess, and execute pricing algorithms for a wide array of weather risks
  • Work with sales and executive teams to perform business-critical analytics


What You'll Need
  • BA in statistics, computer science, mathematics, or related quantitative field
  • Experience programming in Python and familiarity with common data science packages (pandas, NumPy, scikit-learn)
  • Experience analyzing large datasets
  • Strong problem solving and analytical skills
  • Comfort with statistics (e.g., linear regression, hypothesis testing)
  • Willingness to work and learn in a fast-paced environment


$95,000 - $125,000 a year

Essential Job Functions & Physical Requirements

Ability to sit for extended periods of time while working at a computer, with or without reasonable accommodation

Ability to use a computer, keyboard, mouse, and standard office equipment (e.g., phone, printer, scanner)

Ability to view a computer screen for prolonged periods, with or without reasonable accommodation

Ability to communicate effectively in person, by phone, and via email

Ability to occasionally stand, walk, bend, and reach within an office environment

Ability to lift and/or move up to 10–15 pounds occasionally (e.g., office supplies, files), with or without reasonable accommodation

Ability to perform repetitive motions, such as typing or data entry

Ability to maintain focus and attention while performing detailed tasks



Interested, but you don’t meet every qualification? Please apply!

Arbol values the perspectives and experience of candidates with non-traditional backgrounds and we encourage you to apply even if you do not meet every requirement.


Accessibility

Arbol is committed to accessibility and inclusivity in the hiring process. As part of this commitment, we strive to provide reasonable accommodations for persons with disabilities to enable them to access the hiring process. If you require an accommodation to apply or interview, please contact hr@arbol.io


Benefits

Arbol is proud to offer its full-time employees competitive compensation and equity in a high-growth startup.  Our health benefits include comprehensive health, dental, and vision coverage, and an optional flexible spending account (FSA) to support your health.  We offer a 401(k) match to support your future, and flexible PTO for you to relax and recharge. 


Equal Opportunity Employer

Arbol is an Equal Opportunity Employer and does not discriminate on the basis of race, color, religion, sex (including pregnancy, sexual orientation, or gender identity), national origin, age, disability, veteran status, or any other legally protected status.



Arbol participates in the E-Verify program to confirm employment eligibility.




$$$ Full time
Data Engineering Intern
  • RefinedScience
  • Remote
python students support software

Data Engineering Intern

At RefinedScience, our mission is to advance care by bringing together the best science, data and minds – disease by disease, patient by patient, cell by cell to discover pathways to life beyond disease.   

WHAT WE ARE LOOKING FOR

We are seeking a motivated Data Engineering Intern to join our team. This internship is open to undergraduate and graduate students who are interested in building data infrastructure that supports advanced analytics, data science, and AI-driven insights in healthcare and life sciences.

You will work closely with data scientists, bioinformaticians, and engineers to help design, build, and improve data pipelines and platforms that power RefinedScience's research and analytics initiatives.

KEY ACTIVITIES

  • Assist in building and maintaining data pipelines for ingesting, transforming, and validating clinical, biological, and real-world data
  • Support integration of data from multiple sources (e.g., clinical data, analytics outputs, external datasets)
  • Help develop and optimize ETL/ELT workflows to ensure data quality and reliability (see the sketch after this list)
  • Collaborate with data science and bioinformatics teams to support analytics and machine learning workflows
  • Contribute to data modeling, documentation, and best practices for data infrastructure
  • Participate in code reviews, testing, and performance improvements
  • Participate in Quality Reviews and Troubleshooting
  • Communicate progress and findings to cross-functional teams
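
As a small illustration of the data-quality activity above (the column names are hypothetical, not RefinedScience's schema):

```python
# Toy data-quality gate an ETL/ELT step might run before loading.
import pandas as pd

def validate(df: pd.DataFrame) -> list[str]:
    """Return human-readable data-quality problems, if any."""
    problems = []
    if df["patient_id"].isna().any():        # hypothetical key column
        problems.append("null patient_id values")
    if df["patient_id"].duplicated().any():
        problems.append("duplicate patient_id values")
    if (df["age"] < 0).any():                # hypothetical range check
        problems.append("negative ages")
    return problems

df = pd.DataFrame({"patient_id": [1, 2, 2], "age": [34, -1, 50]})
print(validate(df))  # ['duplicate patient_id values', 'negative ages']
```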

MUST HAVES

  • Currently enrolled in a Bachelor's, Master's, or Ph.D. program in Data Engineering, Computer Science, Data Science, Software Engineering, or a related field
  • Experience with Python and/or SQL through coursework, projects, or internships
  • Basic understanding of data pipelines, databases, and data transformation concepts
  • Familiarity with version control (e.g., Git)
  • Strong analytical thinking and problem-solving skills
  • Ability to learn quickly and work collaboratively in a team environment

$$$ Full time
Junior Data Engineer
  • Satelligence
  • Utrecht
design python django technical

At Satelligence we're looking for a Jr. Data Engineer to join our team.

We are looking for a Junior Data Engineer:

Employment type: 32–40h/week

Location: Utrecht, NL (hybrid)

Experience: Junior–Medior level

Salary: €48 000 – €60 000 gross/year (including 8% holiday allowance, based on 40h/week)

About the job

As a Data Engineer, your main responsibility is building out the capabilities of our (geo)data query engine. You’ll be part of the data engineering team, which develops and maintains our satellite data processing engine, our geospatial storage and query engine, and a set of internal tools used mainly by our OPS team. Our tech stack is Python, Django, and PostGIS, deployed on Google Cloud services like GKE and Cloud Functions. This role reports to the Engineering Lead.


What will you do?

You'll be instrumental in empowering our product teams to develop and deploy features that help our clients reach their sustainability targets. You'll ensure the reliability, scalability, and performance of our cloud-based data platform, enabling us to deliver critical environmental intelligence through our API. Your work will directly contribute to:

  • Building and maintaining scalable infrastructure on GCP using infrastructure-as-code tools like Terraform

  • Optimizing data pipelines for processing and storing massive datasets (ETL, OLAP)

  • Developing and managing APIs for efficient data dissemination.

  • Implementing data engineering best practices for data quality, security, and performance.

  • Collaborating closely with product teams to understand their needs and provide technical guidance.

  • Contributing to the design and implementation of data storage solutions using databases like PostgreSQL (a small sketch follows this list)

  • Monitoring and troubleshooting platform performance and ensuring high availability.
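
As a tiny sketch of the Python/Django/PostGIS stack described above (it assumes an already-configured GeoDjango project; the model and field names are invented for illustration):

```python
# Hypothetical GeoDjango model plus a spatial query; PostGIS does the
# geometric work behind the ORM lookup.
from django.contrib.gis.db import models
from django.contrib.gis.geos import Point
from django.contrib.gis.measure import D

class Plantation(models.Model):
    name = models.CharField(max_length=200)
    location = models.PointField(srid=4326)  # stored as PostGIS geometry

# All plantations within 5 km of a point of interest (lon, lat).
nearby = Plantation.objects.filter(
    location__distance_lte=(Point(5.12, 52.09, srid=4326), D(km=5))
)
```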


About you

  • You are an experienced Python developer
  • You are experienced with RDBMS, especially PostgreSQL
  • You are familiar with Django
  • You prefer a well-organized codebase over getting your pull requests merged fast

Nice to have

  • You are experienced with Infrastructure as Code tools such as Terraform
  • You have experience with Google Cloud (Cloud SQL, Cloud Composer, Kubernetes)
  • You worked with PostGIS before or bring other experience with geospatial data

What we offer you:

📍Office centrally located in Utrecht city (with direct access via bus 8 or a 20-minute walk from Utrecht Central Station)
😎27 holidays (based on full-time employment)
👐Solid pension scheme with employer contribution
🚆NS Business Card for employees commuting from outside Utrecht
🖥️Laptop and necessary IT equipment provided
🩺Additional income protection in case of long-term illness or disability, complementing the statutory coverage
🥪Daily lunch, fruits, and Aroma Club coffee at the office
🍹Not the main reason to join, but definitely a fun one: Annual Team Week, after-summer drinks with friends and family, and a festive Christmas celebration.

Meet Satelligence!
Satelligence is the market leader in remote sensing technology for sustainable sourcing, with the mission to halt deforestation. We provide traders, manufacturers, and agribusinesses such as Mondelez, Bunge, Cargill, Unilever, and Rabobank with critical sustainability insights, empowering them to minimize their global environmental footprint and track their progress against climate objectives, ensuring a sustainable supply chain. We were founded in 2016 and currently employ 40+ people, working in Utrecht and several locations in Asia, Africa, and South America.

Apply for the job

Do you want to join our team as our new Junior Data Engineer? Then we'd love to hear about you!



$$$ Full time
Software Engineer
  • Ren
  • Remote
software design python training

 

Job Title: Sr Software Engineer

Department: Product Engineering

 

Position Description:

The Sr Software Engineer will be working with other engineers, architects, and product managers to develop software on our philanthropic solutions software platform. This person must be self-motivated and results-oriented with strong programming skills across modern enterprise software architectures. The Sr Software Engineer is expected to work well in an agile development environment to mentor and develop those around them and build superior products.

 

Duties & Responsibilities:

  • Write and maintain scripts written in Python for data engineering and machine learning pipelines.
  • Modify database objects using SQL (stored procedures, views, tables, etc.)
  • Write automated unit, integration, and UI-level tests to increase code quality and lower the defect rate.
  • Provide technical guidance and mentorship, offering technical and design feedback through code and peer reviews across the full application stack.
  • Collaborate and pair with other software and data engineers and product professionals to design, implement and test new features and product refinements.
  • Refactor existing code to improve maintainability and quality.
  • Author and present training materials and documentation to other team members and users of the software
  • Work closely with Product Management and other areas of the business to ensure market needs are met.
  • Work with Architecture team to design and implement new service-based, automated application environment.


$$$ Full time
Product Data Analyst
  • Big Health
  • Remote - US
analyst python supervisor support

Our Mission

At Big Health, our mission is to help millions back to good mental health by providing fully digital, non-drug options for the most common mental health conditions. Our FDA-cleared digital therapeutics—SleepioRx for insomnia and DaylightRx for anxiety—guide patients through first-line recommended, evidence-based cognitive and behavioral therapy anytime, anywhere. Our digital program, Spark Direct, helps to reduce the impact of persistent depressive symptoms. 


In pursuit of our mission, we’ve pioneered the first at-scale digital therapeutic business model in partnership with some of the most prominent global healthcare organizations, including leading Fortune 500 healthcare companies and Scotland’s NHS. Through product innovation, robust clinical evaluation, and a commitment to equity at scale, we are designing the next generation of medicine and the future of mental health care. 


Our Vision

Over the next 5-10 years, we believe digital therapeutics will transform the delivery of healthcare worldwide by providing access to safe and effective evidence-based treatments. Big Health is positioned to take the lead in this transformation.


Big Health is a remote-first company, and this role can be based anywhere in the US.


Join Us

We're seeking a Product Data Analyst contractor to drive data-informed product decisions by improving our data democratization, analyzing data, generating insights, and producing reports. You'll partner closely with product, growth, enrollment marketing, and client implementation teams to understand user behavior, measure product performance, and identify opportunities for growth and improvement. 



Key Responsibilities
  • Use SQL to query data in Snowflake (a minimal example follows this list).
  • Update Snowflake data models, consistent with current data architecture. 
  • Use LookML to add new dimensions, measures, table calculations, and explores to Looker.
  • Create dashboards in Looker and PostHog to support growth, enrollment marketing, client implementation, product initiatives, and/or company OKRs. 
  • Conduct deep-dive analyses using data from Snowflake and Looker to understand user behavior patterns, identify friction points in the user journey, and uncover opportunities for product enhancement. Analyses may include, but are not limited to, descriptive analytics, correlation, regression, and between-group analyses. 
  • Present the results of these analyses to a cross-functional audience, translating complex data findings into actionable recommendations.
  • Build externally-facing reports that provide stakeholders with clear visibility into user engagement, feature adoption, clinical outcomes, and recommendations for optimal product use. 
  • Provide data to help justify and inform decision-making around A/B tests and experiments to validate product hypotheses and measure the impact of new features or changes. 
  • Use dbt to build data models and add new data sources to Snowflake. 
  • Assist with updating the data dictionary and ERD. 
  • Communicate proactively. During onboarding, you will meet 3-5x/week with your supervisor to provide updates on ticket status and to ask questions. Asking questions outside of these meetings is expected and welcomed. 
  • Work with your supervisor and relevant stakeholders to proactively discuss requirements when questions arise. 
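
As a minimal sketch of the first responsibility above (it assumes the snowflake-connector-python package; the account details, table, and column names are placeholders, not Big Health's warehouse):

```python
# Placeholder credentials and schema: illustrative only.
import snowflake.connector  # pip install snowflake-connector-python

conn = snowflake.connector.connect(
    account="my_account",
    user="analyst",
    password="...",
    warehouse="ANALYTICS_WH",
    database="PRODUCT",
)

QUERY = """
    SELECT program,
           DATE_TRUNC('week', event_ts) AS week,
           COUNT(DISTINCT user_id)      AS weekly_active_users
    FROM events        -- hypothetical table and columns
    GROUP BY 1, 2
    ORDER BY 1, 2
"""

cur = conn.cursor()
try:
    for program, week, wau in cur.execute(QUERY):
        print(program, week, wau)
finally:
    cur.close()
    conn.close()
```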


Required Qualifications
  • 3+ years of experience in product analytics, data analysis, or a related analytical role, preferably in a product-driven technology company
  • Strong SQL skills and experience working with large datasets in modern data warehouses like Snowflake, BigQuery, or Redshift
  • Experience with dbt or similar data transformation tools for building modular, tested, and documented data models
  • Proficiency in version control systems like Git for managing code and collaborating with data and engineering teams 
  • Proficiency in analytics tools such as Python or R for statistical analysis and data manipulation
  • Familiarity with BI visualization tools like Looker, Tableau, or Mode
  • Basic understanding of data pipeline orchestration and workflow management tools such as Airflow or similar. Familiarity with ELT/ETL processes and data integration tools like Fivetran, Stitch, or custom-built pipelines 
  • Solid understanding of statistical concepts including hypothesis testing, regression analysis, and experimental design. Experience designing and analyzing A/B tests with proper statistical rigor 
  • Familiarity with healthcare concepts and terminology is highly desirable 
  • Strong communication skills


Background and Life at Big Health
  • Backed by leading venture capital firms.
  • Big Health’s products are used by large multinational employers and major health plans to help improve sleep and mental health. Our digital therapeutics are available to more than 62 million Medicare beneficiaries.
  • Surround yourself with the smartest, most enthusiastic, and most dedicated people you'll ever meet—people who listen well, learn from their mistakes, and when things go wrong, generously pull together to help each other out. Having a bigger heart and a small ego are central to our values.


$50 - $80 an hour
The hourly rate range for this contractor position is $50.00 - $80.00 per hour. This range reflects the target hourly rate for the engagement and may vary based on experience, scope of work, location, and engagement structure. The hourly rate is the sole and full compensation provided for this contractor position.

Rates are determined by role requirements, level, and market factors. The range displayed reflects the minimum and maximum target hourly rates for this engagement. Final rates are determined based on relevant skills, experience, availability, and the specific terms of the engagement. Compensation for contractors does not include benefits, paid time off, or other employee benefits and is subject to change based on business needs.

We at Big Health are on a mission to bring millions back to good mental health; in order to do so, we need to reflect the diversity of those we intend to serve. We’re an equal opportunity employer dedicated to building a culturally and experientially diverse team that leads with empathy and respect. Additionally, we will consider for employment qualified applicants with criminal histories in a manner consistent with the requirements of the San Francisco Fair Chance Ordinance.


Big Health participates in E-Verify for all new hires in the United States.



$$$ Full time
Data Analyst
  • Restaurant365
  • Remote
analyst saas python technical

Restaurant365 is a SaaS company disrupting the restaurant industry! Our cloud-based platform provides a unique, centralized solution for accounting and back-office operations for restaurants. Restaurant365’s culture is focused on empowering team members to produce top-notch results while elevating their skills. We’re constantly evolving and improving to make sure we are and always will be “Best in Class” ... and we want that for you too!


Restaurant365 is seeking a Data Analyst to join our Enterprise Data Analytics team. This role supports business teams across the organization by helping turn data into insights that inform day-to-day decisions and longer-term planning.


As a Data Analyst, you will partner with stakeholders to understand business questions, support reporting needs, and help maintain dashboards and KPIs. You’ll work within established data models and governance practices while continuing to build your technical and business analysis skills. This role is ideal for someone who enjoys working with data, learning the business, and growing into a strong analytics partner over time.



How you'll add value:
  • Analytics & Reporting
      · Analyze operational, customer, financial, and usage data to support business reporting and ad hoc analysis.
      · Help maintain and monitor KPIs that track business performance and operational health.
      · Build, update, and maintain dashboards and reports in Domo for business stakeholders.
      · Assist with trend analysis, performance monitoring, and identifying areas for improvement (see the short example after this list).
      · Support forecasting, planning, and recurring reporting processes under guidance from senior analysts or managers.
  • Business Partnership
      · Work with business stakeholders to understand reporting needs and translate questions into clear analytics requests.
      · Help define basic success metrics and KPIs for initiatives and projects.
      · Provide clear, well-documented analyses that support business decision-making.
      · Participate in requirement-gathering sessions and stakeholder check-ins.
  • Collaboration & Enablement
      · Partner with other analysts, analytics engineers, and data engineers to ensure accurate and consistent reporting.
      · Follow established data governance and quality standards for dashboards and reports.
      · Support documentation of metric definitions, dashboards, and reporting logic.
      · Learn to present insights in a clear, concise way to both technical and non-technical audiences.
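
A short, hedged example of the trend-analysis work above (the metric and numbers are invented, not Restaurant365 data):

```python
# Toy month-over-month KPI trend in pandas.
import pandas as pd

kpis = pd.DataFrame({
    "month": pd.period_range("2024-01", periods=4, freq="M"),
    "active_locations": [4800, 4920, 4890, 5050],  # invented values
})
kpis["mom_change_pct"] = kpis["active_locations"].pct_change().mul(100).round(2)
print(kpis)
# A sustained negative mom_change_pct would be flagged to stakeholders
# as an area for deeper investigation.
```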


What you'll need to be successful in this role:
  • 2–4 years of experience in data analytics, business analytics, or a related role.
  • Experience working in a SaaS, technology, or data-driven environment is a plus.
  • Working knowledge of SQL for querying and analyzing data.
  • Experience using BI tools (Domo preferred, but others acceptable).
  • Familiarity with Excel or Google Sheets for analysis and validation.
  • Exposure to Python or R is a plus but not required.
  • Ability to analyze datasets, identify trends, and summarize findings clearly.
  • Basic understanding of common business metrics (revenue, retention, adoption, operational efficiency).
  • Comfort working with defined KPIs and reporting frameworks.
  • Clear written and verbal communication skills.
  • Ability to explain analysis results in a straightforward, business-friendly way.
  • Willingness to learn, ask questions, and incorporate feedback.
  • Ability to work effectively with cross-functional partners.
NICE TO HAVE
  • Exposure to Snowflake, dbt, or modern cloud data platforms.
  • Experience supporting recurring business reporting or executive dashboards.
  • Familiarity with basic project tracking or Agile concepts.
  • Interest in growing toward advanced analytics, analytics engineering, or business analytics leadership.


R365 Team Member Benefits & Compensation
  • This position has a salary range of $87,083.33-$121,916.67 per year. The above range represents the expected salary range for this position. The actual salary may vary based upon several factors, including, but not limited to, relevant skills/experience, time in the role, business line, and geographic location. Restaurant365 focuses on equitable pay for our team and aims for transparency with our pay practices.
  • Comprehensive medical benefits, 100% paid for employee
  • 401k + matching
  • Equity Option Grant
  • Unlimited PTO + Company holidays
  • Wellness initiatives

#BI-Remote


$87,083.33 - $121,916.67 a year

DYN365, Inc d/b/a Restaurant365 is an equal opportunity employer.



$$$ Full time
python senior engineering

We are one of the largest private banks in Brazil, according to the Central Bank's ranking. And we are very proud to say that, for the second consecutive year, we were recognized as the best financial institution to work for in Brazil, according to the GPTW 2025 ranking! We also received the Diversity seal in the Women category, reinforcing our commitment to equity.


Our culture happens for real: being simple, upright, true partners, and courageous. We value relationships, innovation, and a light environment that is increasingly collaborative and intentional about advancing diversity and inclusion.


We are constantly evolving and build successful #partnerships to deliver on our purpose of making the financial lives of people and companies easier.


Sound like you? Then come work with us!



Take a look at the challenges awaiting you:
  • We are looking for a Senior Machine Learning Engineer to work on the evolution of our Machine Learning platform and to ensure that the models used across the bank's many areas operate with high quality, governance, and scalability;
  • Critically analyze internal tools, with room to propose improvements, acting in a consultative role;
  • Look after the observability of ML models, suggesting metrics for more efficient monitoring;
  • Review the quality of deployment code;
  • Be the point of reference for the platforms used internally.


So, did that resonate? Now we'd like to know whether you have the profile and skills below:
  • Solid experience in ML engineering, MLOps, or Data Engineering applied to models in production;
  • Strong command of Python and ML/data science libraries;
  • Experience with distributed platforms, preferably Databricks/Spark.



Diversity and inclusion 


BV works intentionally to accelerate equity and representation in the financial market, respecting and supporting diversity in all its plurality and intersectionality, ensuring positive social transformation. 

 

That is why we invite Black people, women, professionals with disabilities, the LGBTQIA+ community, and people of any age to get to know us a little better and apply for this position. 



$$$ Full time
Data Analyst 3
  • SkySlope
  • Remote
analyst salesforce python technical

OUR ORIGIN STORY 🎂


In 2011, SkySlope started as an idea born at the kitchen table of our CEO, with just him and two others. Headquartered in Sacramento, California, we have since grown out of our previous 3 offices, and many of our close to 150 employees are spread all across the United States. Those 150 employees support close to 300,000 users across 5,000 offices nationwide, and now in Canada as well. Included in that are 8 of the 15 largest Real Estate Brokerages in the nation.


But despite being happy with what we’ve achieved, we know that as industry leaders in our space there’s a lot of work left to be done. All of the growth and success that has happened is a result of us obsessing over building cutting-edge software that makes the Real Estate world a better place. We know this only happens by hiring people who don’t just come up with out-of-the-box ideas but who actually see those ideas through and bring them to life. As we’ve grown, we’ve been fortunate enough to hire plenty of people who possess that quality, and we realize it’s equally important to hire people who can pair that skill with empathy, collaboration, and a keen sense of urgency. If you’re looking to join a company where you can have real impact and surround yourself with an incredible team of people, then look no further.



SKYSLOPE’S CORE VALUES 💪🏻


These are the principles that helped us get to where we are and they are the principles that will guide us to where we want to go in the future. You can apply them to your professional life, your personal life, to any business and any situation. In no specific hierarchy, our core values are:


Awareness | Execution | Obsession | Ownership | Humility | Radical Candor | Urgency | Greatness | Inches | Fun


Learn more about our core values from our CEO, Tyler Smith here!



About the role: We are looking for a Data Analyst III to join our team and to help elevate the way we leverage data across the organization. While this role includes traditional data retrieval and reporting, we're looking for someone who goes beyond fulfilling requests — someone who proactively identifies trends, surfaces insights, and brings forward recommendations that help teams make better decisions before they even know to ask. Experience or curiosity around AI-assisted analytics is a plus, but this is first and foremost a strong data analyst role.



What Sets You Apart
  • You don't wait to be asked. You dig into the data, find what matters, and bring it to the people who need it. You're curious about new tools and techniques — including AI — but you're grounded in strong analytical fundamentals. You care about getting the answer right and communicating it in a way that actually moves the needle.


Essential Functions
  • Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions.
  • Query, extract, and transform data from multiple sources across MS SQL Server, MySQL, and MongoDB environments to support business needs
  • Build and maintain automated reports, dashboards, and data pipelines that reduce manual effort and improve data accessibility
  • Partner with cross-functional teams to understand their goals and proactively deliver analytical insights that drive action
  • Identify patterns, trends, anomalies, and opportunities in data sets and communicate findings clearly to both technical and non-technical audiences
  • Develop and maintain Python scripts for data automation, transformation, reporting, and analysis (see the sketch after this list)
  • Contribute to improving our data infrastructure, documentation, and analytical best practices
  • Explore opportunities to incorporate AI-powered tools and techniques into existing workflows where they add clear value
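
As a hedged sketch of the Python automation bullet above (the URI, collection, and field names are placeholders, not SkySlope's schema):

```python
# Pull a MongoDB aggregation into pandas for reporting. Illustrative
# pipeline and schema only.
import pandas as pd
from pymongo import MongoClient  # pip install pymongo

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
pipeline = [
    {"$match": {"status": "closed"}},
    {"$group": {"_id": "$office_id", "transactions": {"$sum": 1}}},
]
rows = client["skyslope"]["transactions"].aggregate(pipeline)

df = pd.DataFrame(rows).rename(columns={"_id": "office_id"})
print(df.sort_values("transactions", ascending=False).head())
```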


Other Duties
  • Please note this job description is not designed to cover or contain a comprehensive listing of activities, duties or responsibilities that are required of the employee for this job. Duties, responsibilities and activities may change at any time with or without notice.


Requirements
  • 5+ years of experience in a data analyst or similar role with progressive responsibility
  • Advanced SQL proficiency across both MS SQL Server and MySQL, including complex joins, stored procedures, query optimization, and cross-database work
  • Python proficiency for scripting, data manipulation, and automation (pandas, NumPy, or similar libraries)
  • Experience with BI/visualization tools such as Tableau, Power BI, Looker, or similar platforms
  • Solid understanding of data warehousing concepts, data modeling, and ETL/ELT processes
  • Strong communication skills with the ability to translate analytical findings into clear, actionable recommendations for stakeholders
  • Self-directed mindset with a demonstrated history of going beyond ad-hoc requests to proactively surface insights and improve processes


Preferred Qualifications
  • Familiarity with cloud platforms (Azure, AWS, or GCP)
  • Exposure to machine learning concepts or AI-assisted analytics tools (e.g., using APIs for text analysis, summarization, or data enrichment)
  • Experience with A/B testing, statistical modeling, or causal inference
  • Knowledge of version control (Git) and collaborative development workflows
  • Statistics, data science, or related degree or certification (equivalent experience welcomed)
  • MongoDB experience, including aggregation pipelines and working with unstructured or semi-structured data
  • Experience with data orchestration or transformation tools such as dbt, Apache Airflow, or similar
  • Familiarity with product and web analytics platforms such as Heap and/or Google Analytics
  • Exposure to tools such as Chameleon, HubSpot, or Salesforce is a bonus but not required
  • Real estate industry knowledge and/or experience
  • Experience mentoring junior analysts or leading small-scale analytical projects


$100,000 - $120,000 a year

Medical Insurance – Company pays flat dollar amount towards premium 

There are 3 plan options 

Our Medical Insurance plans are provided through United Healthcare 

The United Healthcare HMO is only offered to California residents

Eligibility begins 1st of the month following date of hire

Per Paycheck (24 pay periods a year)

Employee costs per tier are as follows:


UHC HDHP/HSA

Employee Only  $58.92

Employee + Child $147.30

Employee + Spouse $175.78

Employee + Family $259.24


UHC PPO

Employee Only $104.10

Employee + Child $244.63

Employee + Spouse $289.91

Employee + Family $422.63


UHC HMO (CA residents only)

Employee Only $84.56

Employee + Child $198.71

Employee + Spouse $235.49

Employee + Family $343.29


Dental Insurance – Company pays 75% of monthly premium only on Base Plan

This PPO plan is administered through Principal

Eligibility begins 1st of the month following date of hire


Principal Dental Base Plan

Employee Only $4.19

Employee + Child $11.73

Employee + Spouse $8.50

Employee + Family $17.20


Principal Dental Buy-Up Plan

Employee Only $6.65

Employee + Child $19.53

Employee + Spouse $13.51

Employee + Family $28.35


Vision Insurance – Company pays 100% of monthly premium

This plan is administered through Principal (VSP choice network)

Eligibility begins 1st of the month following date of hire


Basic Life and AD&D Insurance (with additional Voluntary Plans available) – Company paid plan with a guarantee issue amount of $25,000. 

Plan is administered through Principal

Eligibility begins 1st of the month following date of hire

Pricing varies for additional coverage, based upon age, coverage and dependent classification


Voluntary Short & Long Term Disability Insurance Plans – Optional plans to help protect your financial well-being.

Plan is administered through Principal

Eligibility begins 1st of the month following date of hire

Pricing varies, based upon age


Voluntary Accident insurance- Optional plans available to purchase that pays you a cash benefit to help with your expenses if you or a covered family member is injured due to an accident. 

Employee Only $4.39

Employee + Spouse $6.73

Employee + Child(ren) $7.49

Employee + Family $11.50


Voluntary Hospital Indemnity – Optional plans available to purchase that pay you a cash benefit to help with your expenses if you or a covered family member is admitted to the hospital.

Employee Only $6.85

Employee + Spouse $17.43

Employee + Child(ren) $11.41

Employee + Family $22.84


Voluntary Critical Illness – Optional plans available to purchase to help with your expenses if you or a covered family member is diagnosed with a covered critical illness.

Pricing varies, based upon age


Flexible Spending Account – A tax-savings account you fund to pay for certain out-of-pocket health care and dependent care costs.

Plan is administered through Discovery Benefits

Eligibility begins 1st of the month following date of hire, if you sign up by the 25th of the month


Health Savings Account (HSA) – A tax-savings account for employees enrolled in a High Deductible Health Plan. You can put money into this account to pay for certain out-of-pocket health care costs.

Plan is administered through Discovery Benefits

Eligibility begins 1st of the month following date of hire, if you sign up by the 25th of the month

Must be enrolled in the UHC HDHP/HSA medical plan with SkySlope to be eligible

SkySlope contributes $300 to an individual HSA and $600 to a family HSA


401(k) Plan – Company will match $0.50 on each $1.00 contributed up to the first 6% of eligible earnings

Plan is administered through Principal

Eligibility begins first pay date after 90 days of employment

Auto-enrollment after eligibility at 3% of gross annual earnings

Defer between 1% and 40% of eligible earnings
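
As a rough illustration of the match formula above (a minimal sketch; the salary figure is a hypothetical example, not plan advice):

```python
# Sketch of the 401(k) match described above: the company matches $0.50
# per $1.00 contributed, on deferrals up to the first 6% of eligible
# earnings. The salary below is a hypothetical example.

def employer_match(eligible_earnings: float, deferral_rate: float) -> float:
    """Annual employer match in dollars."""
    matched_portion = min(deferral_rate, 0.06)  # only the first 6% is matched
    return eligible_earnings * matched_portion * 0.50

print(employer_match(110_000, 0.06))  # 3300.0
print(employer_match(110_000, 0.10))  # 3300.0 -- deferring past 6% adds no match
```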


Employee Stock Purchase Plan – Company match equal to 33.3333% of dollars contributed to the plan, based upon the average purchase price for the quarter.

Plan administered through Fidelity 

Eligibility begins first pay date after 90 days of employment

May contribute after-tax dollars from 3% to 15% of base earnings
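
As a rough illustration of the ESPP match above (a minimal sketch under the stated terms; the base salary is hypothetical, and actual share purchases depend on the quarter's average price):

```python
# Sketch of the ESPP match described above: the company adds 33.3333%
# of the dollars an employee contributes, and contributions may run
# from 3% to 15% of base earnings. The salary below is hypothetical.

def espp_match(base_earnings: float, contribution_rate: float) -> float:
    """Annual company ESPP match in dollars."""
    rate = min(max(contribution_rate, 0.03), 0.15)  # plan bounds: 3%-15%
    contributed = base_earnings * rate
    return contributed * 0.333333

print(round(espp_match(110_000, 0.10), 2))  # 3666.66 on $11,000 contributed
```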


Paid Time Off (PTO) – Company provides 120 hours (equivalent to 15 days) of PTO for new hires

PTO accrual begins after 90 days of employment


16 Paid Holidays

11 observed, 5 floating (used for personal holidays)

List of observed holidays published annually

Eligibility begins on your first day of employment


Bereavement Leave – Company will provide the following paid time off to grieve the loss of a loved one:

5 paid days of leave for an immediate family member (spouse, child, parent, or grandparent).

1 paid day of leave for a close non-family member.


Discounts through Fidelity – Purchasing discounts for wireless, car rentals, hotels, and more.


Pet Insurance through Nationwide – 50% and 70% reimbursement plans available through Nationwide, with wellness options. SkySlope contributes $20 a month per pet, for up to 2 pets, toward the cost of the plan.


Paid Parental Leave - All full-time regular employees are eligible for SkySlope’s Paid Parental Leave program, which provides employees with up to six (6) weeks of pay following the birth or placement of a new child. Paid Parental Leave must be taken within the first 6 months of the birth or placement of a new child. Employees will be paid at their regular rate of pay based upon their normal work schedule, up to a maximum of forty (40) hours per week.


Dayforce Wallet- All full-time regular employees will have access to sign up for Dayforce Wallet. Dayforce Wallet is a program provided by our payroll provider that allows employees to access their pay on-demand as soon as it is earned, without waiting for their standard payday.


Waldorf University discounts and perks – 10% off tuition for employees and their families, free textbooks, and scholarship opportunities available


Child Literacy Assistance Program discount – Discounted annual membership to Luminous Minds, an online resource center created to help with child literacy struggles. $85 for a 1-year membership as a SkySlope employee.


$1,000 Employee Referral bonuses – SkySlope will give every referrer $1,000 (post-tax) after a referee passes their 90-day mark.


In addition to the above, you also receive other perks, like our Annual Employee Appreciation Day and additional internal company events.




SkySlope is an Equal Opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, age, disability, protected veteran status, national origin, sexual orientation, gender identity or expression (including transgender status), genetic information, or any other characteristic protected by applicable law.


We sincerely thank you for taking the time to review our open positions, and we hope you'll submit a concise and thoughtful application.


Still thinking about applying? Waiting to hear back from us? Check out our social media in the meantime!

SkySlope | Facebook | Instagram | YouTube | LinkedIn | Twitter


Your privacy is important to us. Learn more about what data is collected and how we use it here.





$$$ Full time
Big Data Engineer
  • Oowlish Technology
  • Remote
python support software growth

Join Our Team


Oowlish, one of Latin America's rapidly expanding software development companies, is seeking experienced technology professionals to enhance our diverse and vibrant team.


As a valued member of Oowlish, you will collaborate with premier clients from the United States and Europe, contributing to pioneering digital solutions. Our commitment to a nurturing work environment is recognized by our Great Place to Work certification, and you will have opportunities for professional development and growth, along with the chance to make a significant international impact.


We offer the convenience of remote work, allowing you to craft a work-life balance that suits your personal and professional needs. We're looking for candidates who are passionate about technology, proficient in English, and excited to collaborate remotely with teams around the world.


About the Role:


We are seeking a hands-on Big Data Engineer to support and enhance an AWS-based data platform, focusing on pipeline reliability, scalable processing, and performance optimization. This role requires strong Python expertise, deep familiarity with AWS data services, and the ability to maintain production-grade data workflows.


You will work on event-driven pipelines, contribute to CI/CD improvements, and collaborate on platform reliability initiatives. This role is ideal for someone who enjoys building and maintaining data infrastructure, optimizing large-scale data processing systems, and working in cloud-native environments.


This is a 6-month engagement, aligned to the ET time zone.



Key Responsibilities:
  • Develop and maintain data processing logic using Python
  • Build, optimize, and support data pipelines using AWS Glue and Lambda
  • Write and optimize complex SQL queries for analytics and operational workloads
  • Support platform reliability and pipeline monitoring
  • Contribute to CI/CD processes using GitHub and GitHub Actions
  • Collaborate on infrastructure improvements using Infrastructure-as-Code principles
  • Troubleshoot and resolve pipeline failures and performance issues
  • Support data consumption layers used by BI tools


Must Have:
  • 4+ years of experience as a Data Engineer / Big Data Engineer
  • Strong hands-on Python experience (data processing and application logic)
  • Advanced SQL skills (query optimization, performance tuning)
  • Production experience with AWS Lambda and AWS Glue
  • Experience working with CI/CD tools (GitHub, GitHub Actions)
  • Familiarity with Snowflake and/or Aurora
  • Understanding of Infrastructure-as-Code (IaC) concepts
  • Comfortable working in the ET time zone


Nice to Have:
  • Experience with BI tools (Sigma preferred)
  • Experience with event-driven architectures
  • Exposure to enterprise-scale data platforms




Benefits & Perks:


Home office;

Competitive compensation based on experience;

Career plans to allow for extensive growth in the company;

International Projects;

Oowlish English Program (Technical and Conversational);

Oowlish Fitness with Total Pass;

Games and Competitions;



You can also apply here:


Website: https://www.oowlish.com/work-with-us/

LinkedIn: https://www.linkedin.com/company/oowlish/jobs/

Instagram: https://www.instagram.com/oowlishtechnology/





$$$ Full time
Senior Data Engineer
  • Oowlish Technology
  • Remote
python support software growth

Join Our Team


Oowlish, one of Latin America's rapidly expanding software development companies, is seeking experienced technology professionals to enhance our diverse and vibrant team.


As a valued member of Oowlish, you will collaborate with premier clients from the United States and Europe, contributing to pioneering digital solutions. Our commitment to a nurturing work environment is recognized by our Great Place to Work certification, and you will have opportunities for professional development and growth, along with the chance to make a significant international impact.


We offer the convenience of remote work, allowing you to craft a work-life balance that suits your personal and professional needs. We're looking for candidates who are passionate about technology, proficient in English, and excited to collaborate remotely with teams around the world.


About the Role:


We are seeking a Senior Data Engineer with strong expertise in enterprise data modeling and AWS-based data platforms to support a mature and evolving data ecosystem. This role requires hands-on experience working with large-scale data environments, optimizing data models, and maintaining event-driven pipelines in a cloud-native architecture.


You will work across data modeling, pipeline development, API data support, and infrastructure collaboration. This position is ideal for someone comfortable operating in enterprise environments, maintaining production-grade systems, and improving performance and scalability across a modern AWS data stack.


This is a 6-month engagement with ET time zone alignment required.



Must-Have:
  • 6+ years of experience in Data Engineering
  • Strong experience with Snowflake and Aurora Postgres
  • Advanced SQL and data modeling expertise (logical & physical design)
  • Hands-on experience with AWS data services (Glue, Lambda, DMS, EventBridge)
  • Strong Python experience for data pipelines
  • Experience supporting enterprise-scale data platforms
  • Experience with CI/CD (GitHub Actions)
  • Comfortable working in the ET time zone


Nice to Have:
  • Experience working with Terraform
  • Exposure to artifact management and infrastructure-as-code best practices
  • Experience in performance tuning at scale
  • Experience implementing automated data quality frameworks
  • Prior experience in enterprise or large distributed systems




Benefits & Perks:


Home office;

Competitive compensation based on experience;

Career plans to allow for extensive growth in the company;

International Projects;

Oowlish English Program (Technical and Conversational);

Oowlish Fitness with Total Pass;

Games and Competitions;



You can also apply here:


Website: https://www.oowlish.com/work-with-us/

LinkedIn: https://www.linkedin.com/company/oowlish/jobs/

Instagram: https://www.instagram.com/oowlishtechnology/





$$$ Full time
Software Engineer
  • itD Tech
  • Arizona
software design python training
itD is seeking a Software Engineer to design and scale the data pipelines that power next-generation foundation models for machine-generated data, including time series, logs, and large-scale event streams. This role contributes directly to the success of model training and production systems by enabling reliable, high-performance data infrastructure at scale. The ideal candidate will bring deep experience in distributed systems and data engineering, along with a proven track record of delivering scalable, production-ready data pipelines that support machine learning workflows.

Location: Remote (U.S.-based; time zone alignment with Pacific or Central preferred)

We provide comprehensive medical benefits, a 401(k) plan, paid holidays, and more. Please note that we are only considering direct W2 candidates at this time, as we are unable to offer sponsorship.

Responsibilities:
  • Build and scale distributed data pipelines for large-scale time series, log data, and high-volume event streams.
  • Design and maintain reliable, high-performance Spark and Python workflows to support model training datasets.
  • Analyze and resolve performance bottlenecks related to latency, memory utilization, data skew, and throughput.
  • Improve data quality, validation processes, and reproducibility for machine learning workloads.
  • Partner with machine learning engineers and researchers to


About Data Engineering jobs

Remote Data Engineering jobs: data pipelines, ETL, data architecture, and big data. At RemoteJobs.lat we connect professionals from Latin America with companies that offer 100% remote work. All of our openings let you work from any city, with payment in dollars or another international currency.

Salary range

$4,000 - $11,000 USD/month

Open positions

327

Location

100% Remote LATAM

Tip: You can also search for openings under related skills such as Python and SQL.

Data Engineering salary ranges by seniority

Estimated ranges in USD/month for remote contracts with international companies. Ranges vary by company, complementary stack, and client location.

Level        Years of experience    Range (USD/month)
Junior       0-2                    $4,000 - $5,750
Mid-level    2-4                    $5,400 - $7,850
Senior       4-7                    $7,500 - $9,950
Lead/Staff   7+                     $9,250 - $11,000

Companies hiring remote Data Engineering talent from LATAM

Some companies that have historically hired Data Engineering profiles to work 100% remotely from Latin America:

Mercado Libre, Globant, Auth0, Nubank, Cloudwalk, Stripe, GitLab, Crossover, Toptal

Frequently asked questions

How much does a remote Data Engineer earn?

The typical range for a remote Data Engineer working for international companies is $4,000 - $11,000 USD/month. The exact amount depends on seniority, the company's country, and whether the contract is full-time or project-based.

Which skills are most in demand?

The most in-demand Data Engineering profiles usually combine Python, SQL, and Spark. Adding one of these skills opens up more job offers and often increases the salary range by 15% to 30%.

Is English required?

For US/EU companies, yes: B2 is the minimum for technical interviews. There are alternatives at LATAM companies (Mercado Libre, Globant, Rappi) or at agencies like Toptal, where intermediate English is enough to start.

How can I improve my chances of getting hired?

The 3 highest-impact things: (1) a public GitHub with 2-3 solid projects relevant to Data Engineering, (2) an English LinkedIn profile optimized for recruiters, and (3) applying to 20+ offers per week instead of 2-3.