Related skills: Python, SQL, Spark, Airflow
$170,000 - $190,000 · Full time
software · assistant · design system

Who is Flock?

Flock Safety is the leading safety technology platform, helping communities thrive by taking a proactive approach to crime prevention and security. Our hardware and software suite connects cities, law enforcement, businesses, schools, and neighborhoods in a nationwide public-private safety network. Trusted by over 5,000 communities, 4,500 law enforcement agencies, and 1,000 businesses, Flock delivers real-time intelligence while prioritizing privacy and responsible innovation.

We’re a high-performance, low-ego team driven by urgency, collaboration, and bold thinking. Working at Flock means tackling big challenges, moving fast, and continuously improving. It’s intense but deeply rewarding for those who want to make an impact.

With nearly $700M in venture funding and a $7.5B valuation, we’re scaling intentionally and seeking top talent to help build the impossible. If you value teamwork, ownership, and solving tough problems, Flock could be the place for you.

The Opportunity

We're hiring a Senior Software Engineer to build Night Shift, a conversational AI assistant that helps investigators surface critical evidence and close cases faster. You'll design and implement the conversational interface, build the orchestration backend that manages LLM interactions and tool calling, and develop integration pipelines connecting our AI to Flock's existing data platform and APIs. This is a ground-floor opportunity where product thinking matters as much as technical execution: you'll shape chat experiences with complex context management, partner with platform teams to design new APIs or leverage existing ones, and solve the reliability challenges of deploying AI in high-stakes investigative workflows. You'll collaborate closely with ML engineers on prompt engineering and agentic workflows while maintaining a strong point of view on what makes a great user experience. If you've built LLM-powered products and thrive at the intersection of customer impact and technical depth, this role is for you.

The Skillset

  • Love for coding and continuous learning, especially in the rapidly evolving LLM space

  • Resourceful problem-solver mindset: excel in ambiguous situations and take initiative to define product direction

  • Strong TypeScript / Node / Express skills for web services and API design (REST, SSE, WebSockets for streaming)

  • Modern web framework expertise (React / TypeScript preferred), particularly for conversational UI and chat interfaces

  • Hands-on LLM experience: OpenAI/Anthropic/Gemini APIs, prompt engineering, streaming responses, and conversation context management

  • Familiarity with agentic patterns: function calling, tool use (MCP), and orchestrating multi-step workflows

  • API integration skills: consume existing APIs or design new ones to ground AI in investigative data

  • Database confidence: PostgreSQL and sophisticated SQL for data retrieval

  • Cloud infrastructure basics: Docker, Kubernetes (Helm), AWS services (S3, SQS, API Gateway)

  • Product-minded: translate user feedback into technical requirements and make pragmatic tradeoffs

  • Bonus points for: LLM evaluation tools (LangSmith, Langfuse), vector search/RAG, microservices architecture, or Terraform
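
To make the conversation-context-management requirement above concrete, here is a minimal sketch of trimming chat history to a token budget. It is illustrative only: the role's stack is TypeScript/Node (Python is used here for brevity), and the message shapes and the 4-characters-per-token heuristic are assumptions, not Flock's implementation.

```python
# Illustrative sketch of conversation context management: keep the system
# prompt plus the most recent turns that fit a token budget. Field names and
# the chars-per-token heuristic are assumptions, not a real implementation.

def estimate_tokens(text: str) -> int:
    # Rough heuristic: roughly 4 characters per token for English text.
    return max(1, len(text) // 4)

def trim_history(messages: list[dict], budget: int) -> list[dict]:
    """Return the system message plus the newest turns within `budget` tokens."""
    system = [m for m in messages if m["role"] == "system"]
    turns = [m for m in messages if m["role"] != "system"]
    used = sum(estimate_tokens(m["content"]) for m in system)
    kept: list[dict] = []
    for m in reversed(turns):  # walk newest-first
        cost = estimate_tokens(m["content"])
        if used + cost > budget:
            break
        kept.append(m)
        used += cost
    return system + list(reversed(kept))  # restore chronological order

history = [
    {"role": "system", "content": "You are an investigative assistant."},
    {"role": "user", "content": "Find vehicles near 5th and Main last night."},
    {"role": "assistant", "content": "Three matches were logged between 9pm and 11pm."},
    {"role": "user", "content": "Show the earliest one."},
]
trimmed = trim_history(history, budget=30)
```

The design point is that the system prompt always survives trimming while the oldest turns are dropped first, which is the usual tradeoff when a model's context window is smaller than the full conversation.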

90 Days at Flock

The First 30 Days

  • Onboard and Integrate:

    • Familiarize yourself with Flock's mission, investigative workflows, and how customers use our platform today

    • Pair with engineers across Cloud Software and ML teams to understand existing APIs, data models, and system architecture

    • Build relationships with key stakeholders to understand their capabilities and constraints. Meet with members of:

      • Machine Learning (agentic systems, model serving)

      • Data Engineering (investigative datasets, pipelines)

      • Platform teams (APIs, infrastructure)

      • Product and Design (customer needs, UX direction)

  • Ship Early and Learn:

    • Complete a first-day push to production

    • Pick up initial sprint tickets: bug fixes, small UX improvements, or API integrations

    • Participate in customer feedback sessions to understand investigator workflows and pain points

The First 60 Days

  • Build the Foundation:

    • Deliver core conversational UI components and establish patterns for chat interfaces

    • Implement backend orchestration for LLM interactions and tool calling

    • Stand up observability for the AI system (logging, tracing, basic metrics)

    • Work with ML team to integrate agentic workflows and refine prompt strategies

  • Demonstrate Velocity:

    • Own end-to-end features that connect UI, backend orchestration, and data integrations

    • Collaborate with Product to rapidly iterate based on early user testing

    • Propose technical improvements to chat quality, performance, or reliability

90 Days & Beyond

  • Drive Product Impact:

    • Lead development of a core Night Shift capability that demonstrably improves investigator efficiency

    • Represent the team in cross-functional initiatives, balancing zero-to-one experimentation with engineering best practices

    • Establish patterns for testing and quality in an evolving AI product

  • Shape the Direction:

    • Influence product roadmap through technical insights and customer feedback

    • Mentor team members on LLM integration patterns or full-stack best practices

    • Own a domain area (e.g., conversation management, data grounding, streaming architecture)

The Interview Process

We want our interview process to be a true reflection of our culture: transparent and collaborative. Your recruiter will guide you through next steps and make sure you feel prepared at every stage. To review our interview stages and how to prepare, visit the Experiences section of our careers page.

Salary & Equity

In this role, you’ll receive a starting salary of $170,000-$185,000 as well as stock options. Base salary is determined by job-related experience, education/training, as well as market indicators. Your recruiter will discuss this in-depth with you during our first chat.

The Perks

🌴Flexible PTO: We seriously mean it, plus 11 company holidays.

⚕️Fully-paid health benefits plan for employees: including Medical, Dental, and Vision and an HSA match.

👪Family Leave: All employees receive 12 weeks of 100% paid parental leave. Birthing parents are eligible for an additional 6-8 weeks of physical recovery time.

🍼Fertility & Family Benefits: We have partnered with Maven, a complete digital health benefit for starting and raising a family. Flock will provide a $50,000 lifetime maximum benefit for eligible adoption, surrogacy, or fertility expenses.

🧠Spring Health: Spring Health offers a variety of mental health benefits, including therapy, coaching, medication management, and digital tools, all tailored to each individual's needs.

💖Caregiver Support: We have partnered with Cariloop to provide our employees with caregiver support.

💸Carta Tax Advisor: Employees receive 1:1 sessions with Equity Tax Advisors who can address individual grants, model tax scenarios, and answer general questions.

💚ERGs: We want all employees to thrive and feel like they belong at Flock. We offer three ERGs today - Women of Flock, Flock Proud, and Melanin Motion. If you are interested in talking to a representative from one of these, please let your recruiter know.

💻WFH Stipend: $150 per month to cover the costs of working from home.

📚Productivity Stipend: $300 per year to use on Audible, Calm, Masterclass, Duolingo, Grammarly and so much more.

🏠Home Office Stipend: A one-time $750 to help you create your dream office.

If an offer is extended and accepted, this position requires the ability to obtain and maintain Criminal Justice Information Services (CJIS) certification as a condition of employment. Applicants must meet all FBI CJIS Security Policy requirements, including a fingerprint-based background check.

Flock is an equal opportunity employer. We celebrate diverse backgrounds and thoughts and welcome everyone to apply for employment with us. We are committed to fostering an environment that is inclusive, transparent, and collaborative. Mutual respect is central to how Flock operates, and we believe the best solutions come from diverse perspectives, experiences, and skills. We embrace our differences and know that we are stronger working together.

If you need assistance or an accommodation due to a disability, please email us at recruiting@flocksafety.com. This information will be treated as confidential and used only to determine an appropriate accommodation for the interview process.

At Flock Safety, we compensate our employees fairly for their work. Base salary is determined by job-related experience, education/training, as well as market indicators. The range above is representative of base salary only and does not include equity, sales bonus plans (when applicable) and benefits. This range may be modified in the future. This job posting may span more than one career level.



Gross salary $3500 - 3700 Full time
CI/CD Infrastructure as Code AWS Lambda API Development

Coderslab.io is a leading global technology solutions company with more than 3,000 employees worldwide, including offices in Latin America and the United States. You will join diverse, high-performing teams on challenging automation and digital transformation projects, collaborating with experienced professionals and working with cutting-edge technologies to drive decision-making and operational efficiency at the corporate level.

Apply directly on Get on Board.

Role Responsibilities

Design, develop, and maintain data engineering solutions on AWS.

Implement components and processes using AWS Lambda, Amazon S3, Amazon API Gateway, and Amazon RDS.

Design and maintain infrastructure as code with AWS CloudFormation.

Manage automated deployments and CI/CD pipelines using GitHub Actions integrated with AWS.

Enforce good practices for versioning, testing, observability, and continuous deployment.

Monitor, optimize, and resolve incidents in data components deployed to production environments.

Collaborate with architecture, development, and business teams to translate functional requirements into technical solutions.

Role Requirements

Solid experience with AWS Lambda, Amazon S3, AWS CloudFormation, Amazon API Gateway, and Amazon RDS.

Knowledge of deployment integration and automation with GitHub Actions targeting AWS.

Experience applying CI/CD practices and infrastructure as code (IaC).

Knowledge of security, permissions, and operational best practices on AWS.

Ability to develop and integrate APIs and data components in the cloud.

At least 3 years of experience in data engineering, cloud development, or equivalent roles.

Verifiable experience working in production AWS environments.

Professional degree in Computer Engineering, Computer Science, or a related field.

Optional

Desirable certifications

  • AWS Certified Cloud Practitioner
  • AWS Certified Developer – Associate
  • AWS Certified Solutions Architect – Associate
  • AWS Certified Data Engineer – Associate

Conditions

Remote, full-time

$$$ Full time
Python PostgreSQL SQL Docker
Niuro connects ambitious projects with elite tech teams to deliver high-impact solutions for leading U.S. companies. The selected candidate will join a fintech-focused environment where data integrity, reliability, and scalability are paramount. You will contribute to building autonomous, high-performance backend systems that ingest, normalize, validate, and store market data at scale. This role emphasizes robust data pipelines, production-grade services, and seamless API-based integrations, enabling real-time and historical market data workflows for analytics and trading applications. The project culture values technical excellence, continuous improvement, and a collaborative global team committed to delivering measurable value while maintaining a strong administrative support backbone to allow engineers to focus on impactful work.


Core Responsibilities

  • Core Engineering: Design, implement, and maintain asynchronous Python services for market-data ingestion in a fintech setting. Build clean, well-typed, maintainable Python code using modern best practices. Design and operate microservice-based architectures using Docker. Optimize concurrency, throughput, and resource usage in asynchronous systems. Own services end-to-end: development, debugging, monitoring, and long-term improvements.
  • Data Pipelines & Reliability: Build and maintain robust API-based ingestion pipelines. Handle real-world failure modes including partial data, retries, idempotency, and upstream instability. Monitor ingestion success, latency, and data quality metrics. Conduct root-cause analyses on data incidents and implement durable fixes. Ensure deterministic behavior under load.
  • Database & Data Integrity: Work directly with PostgreSQL and TimescaleDB using raw SQL where appropriate. Design and maintain normalized schemas for time-series and reference data. Ensure data correctness, consistency, and traceability across ingestion layers. Maintain and debug production databases. Design scalable data structures to support growing data volume and query load.

Required Experience & Skills

• 5+ years of professional experience building backend systems in Python.
• Strong experience with async Python (asyncio, async I/O patterns).
• Excellent knowledge of PostgreSQL, raw SQL, and database performance tuning.
• Experience designing and operating production distributed systems.
• Strong understanding of failure modes, backpressure, retries, and idempotency.
• Proven ability to own systems end-to-end in production.
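
The failure-mode handling named above (retries, backoff, idempotency) can be sketched in async Python. The function names, the tick format, and the key scheme below are hypothetical illustrations, not the project's actual API.

```python
# Sketch of retry-with-backoff plus an idempotency key, so a replayed ingest
# does not double-write. All names and data shapes here are hypothetical.
import asyncio
import hashlib

async def fetch_with_retries(fetch, attempts: int = 4, base_delay: float = 0.01):
    """Call `fetch()` until it succeeds, backing off exponentially between tries."""
    for attempt in range(attempts):
        try:
            return await fetch()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # exhausted: surface the upstream failure
            await asyncio.sleep(base_delay * (2 ** attempt))

def idempotency_key(symbol: str, timestamp: str) -> str:
    """Stable key for a tick, so a retried write is detected as a duplicate."""
    return hashlib.sha256(f"{symbol}:{timestamp}".encode()).hexdigest()

async def demo():
    calls = {"n": 0}

    async def flaky_fetch():
        calls["n"] += 1
        if calls["n"] < 3:  # fail twice, then succeed
            raise ConnectionError("upstream unavailable")
        return {"symbol": "AAPL", "ts": "2024-01-02T09:30:00Z", "price": 185.0}

    tick = await fetch_with_retries(flaky_fetch)
    seen: set[str] = set()
    for _ in range(2):  # the same tick arriving twice...
        key = idempotency_key(tick["symbol"], tick["ts"])
        seen.add(key)   # ...collapses to a single stored key
    return calls["n"], len(seen)

attempts_used, stored = asyncio.run(demo())
```

In a real pipeline the dedup set would live in the database (for example, a unique constraint on the key column) rather than in process memory.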

Bonus – Fintech & Data Awareness

• Experience with financial or market data.
• Familiarity with time-series modeling and high-volume data ingestion.
• Ability to reason about how data quality impacts downstream trading or analytics systems.
• Experience supporting analytics or front-end consumers of market data.

Benefits

We provide opportunities to participate in impactful and technically rigorous industrial data projects that drive innovation and professional growth. Our work environment emphasizes technical excellence, collaboration, and continuous innovation.
Niuro supports a 100% remote work model, allowing global flexibility. We invest in career development through ongoing training programs and leadership opportunities, ensuring continuous growth and success.
Upon successful completion of the initial contract, there is potential for long-term collaboration and stable, full-time employment, reflecting our long-term commitment to our team members.
Joining Niuro means becoming part of a global community dedicated to technological excellence and benefiting from strong administrative support that enables you to focus on impactful work without distractions.

Informal dress code: no dress code is enforced.
Gross salary $1500 - 2000 Full time
Senior Data Engineer
  • Lisit
  • Santiago (Hybrid)
Python Git SQL BigQuery

At Lisit we create, develop, and implement software services focused on automation and optimization, with a constant focus on innovation and a passion for challenges. We support our clients with a consultative approach that integrates tools and practices to advance transformation goals through a comprehensive strategy of guidance and implementation. We are looking for a Senior Data Engineer to join a business-critical project, working on the design and delivery of scalable data solutions that streamline processes and improve decision-making.


Responsibilities

At Lisit we are looking for a Senior Data Engineer to join a business-critical project.

You will focus on:

  • Designing and implementing complex data pipelines using ETL/ELT approaches.
  • Developing and optimizing solutions with Python and advanced SQL.
  • Building and evolving cloud data architectures, ideally on Google Cloud Platform.
  • Integrating data services and components (e.g., BigQuery, Cloud Storage, and Composer/Airflow) as the project requires.
  • Applying development best practices, working with Git and maintaining quality standards.
  • Optimizing data modeling and pipeline performance.
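
As an illustration of the ETL/ELT work described above, here is a minimal normalize-and-validate step in plain Python. The field names are hypothetical, and the real pipelines would run on GCP (BigQuery, Composer/Airflow) per this posting; this only shows the shape of a transform stage.

```python
# Minimal ETL-style transform sketch with hypothetical field names: normalize
# types and timezones, and split valid rows from rejects for quality review.
from datetime import datetime, timezone

def transform(raw_rows: list[dict]) -> tuple[list[dict], list[dict]]:
    """Normalize raw rows; return (valid, rejected) for downstream loading."""
    valid, rejected = [], []
    for row in raw_rows:
        try:
            clean = {
                "customer_id": int(row["customer_id"]),
                "amount": round(float(row["amount"]), 2),
                # Normalize event timestamps to UTC for consistent storage.
                "event_ts": datetime.fromisoformat(row["event_ts"])
                .astimezone(timezone.utc)
                .isoformat(),
            }
            valid.append(clean)
        except (KeyError, ValueError):
            rejected.append(row)  # quarantine malformed rows instead of failing
    return valid, rejected

rows = [
    {"customer_id": "42", "amount": "19.999", "event_ts": "2024-05-01T10:00:00+02:00"},
    {"customer_id": "abc", "amount": "5.0", "event_ts": "2024-05-01T10:05:00+00:00"},
]
valid, rejected = transform(rows)
```

Quarantining rejects rather than aborting the batch is a common reliability choice: the pipeline keeps loading good data while bad rows surface in a review table.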

We are looking for autonomy, analytical thinking, and the ability to design scalable data solutions 🚀

Description

Required profile:

  • Solid experience as a Data Engineer (ideally 5+ years).
  • Mastery of Python and advanced SQL (required).
  • Experience designing and implementing complex ETL/ELT processes.
  • Experience working with cloud data architectures (ideally Google Cloud Platform).
  • Proficiency with Git and development best practices.

Desirable

  • Experience with data architectures on Google Cloud Platform (BigQuery, Cloud Storage, Composer/Airflow).
  • Use of Docker to standardize and simplify deployments.
  • Experience with Cloud Run / Cloud Functions.
  • Knowledge and use of Terraform / Dataform.
  • Good practices in data modeling and pipeline performance optimization.

Benefits

Work mode: hybrid (3x2) in Santiago, in the downtown area. A remote arrangement may be considered for highly senior profiles.

If a business-critical project interests you and you bring autonomy, analytical thinking, and a focus on designing scalable data solutions 🚀, we look forward to your application.

$$$ Full time
Data Engineer AWS
  • ARKHO
  • Cali (Hybrid)
Python SQL ETL Spark
ARKHO is an IT consultancy offering expert services in application modernization, data analytics, advanced analytics, and cloud migration. Our work eases and accelerates cloud adoption across multiple industries.
We stand out as an AWS Advanced Partner with a strategic focus on building solutions with cloud technology. We are obsessed with achieving the goals we set, and we place special emphasis on the people who make up ARKHO (our Archers), recognizing our team as a vital component of our results.
Sound motivating? We look forward to meeting you!

Role Responsibilities

  • Design, develop, and maintain data pipelines on AWS.
  • Participate in the migration and refactoring of legacy ETL processes to AWS Glue.
  • Implement data ingestion, transformation, and loading processes in a Lakehouse architecture.
  • Develop efficient solutions with a focus on performance and stability.
  • Monitor, support, and continuously improve production pipelines.
  • Apply Data Quality and data validation practices.
  • Collaborate on metadata, catalog, and data lineage initiatives.
  • Participate in workflow orchestration with tools such as Step Functions.
  • Document technical processes and flows.
  • Work alongside business, BI, and architecture teams.
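
The Data Quality practices mentioned above can be sketched as rule-based checks. The column names and rules below are hypothetical; in the Glue/PySpark pipelines this posting describes, the same checks would typically run as DataFrame expressions rather than plain Python.

```python
# Sketch of rule-based Data Quality validation over a batch of rows.
# Column names and rules are hypothetical illustrations.
def run_quality_checks(rows: list[dict]) -> dict:
    """Evaluate simple quality rules and return a pass/fail report."""
    non_null_ids = [r["id"] for r in rows if r.get("id") is not None]
    checks = {
        "no_null_ids": len(non_null_ids) == len(rows),
        "unique_ids": len(set(non_null_ids)) == len(non_null_ids),
        "non_negative_amounts": all(r.get("amount", 0) >= 0 for r in rows),
    }
    checks["passed"] = all(checks.values())
    return checks

report = run_quality_checks([
    {"id": 1, "amount": 100.0},
    {"id": 2, "amount": 250.5},
    {"id": 2, "amount": -10.0},  # duplicate id and negative amount
])
```

A report like this would typically gate the load step: a failed batch is quarantined and flagged to the monitoring mentioned above instead of being written to the Lakehouse.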

Role Requirements

  • 3 to 5 years of experience in data engineering.
  • Experience developing ETL/ELT pipelines in production environments.
  • Working knowledge of data-oriented AWS services: Glue, S3, Athena, Redshift, or similar.
  • Proficiency in Python for data processing.
  • Knowledge of PySpark or Spark.
  • Experience with advanced SQL.
  • Knowledge of data modeling (Data Warehouse / Lakehouse).
  • Integration with multiple data sources: Oracle, SQL Server, DB2, or others.
  • Experience monitoring and supporting batch processes or production pipelines.

Desirable

  • Experience migrating legacy processes.
  • Knowledge of Data Quality, metadata, or data catalogs.
  • Experience with Step Functions, IAM, or Lake Formation.
  • Experience in the financial sector or regulated industries.
  • Experience with Infrastructure as Code (IaC).

Benefits

📆 One administrative day per semester during your first 12 months
🏖️ Week off: 5 extra vacation days
🎉 Celebrate your birthday!
📚 Training path
☁️ AWS certifications
🏡 Flexibility (hybrid work with a remote option)
💍 Wedding gift + 5 business days off
👶 Gift for the birth of a child
✏️ School supplies kit
🤱 Paternity benefit
❤️ Bonda (discounts and wellness platform)

$$$ Full time
Python/HTML5 Web Developer
  • BC Tecnología
  • Lima (Hybrid)
HTML5 Python BigQuery Microservices
BC Tecnología is an IT consultancy with experience designing solutions for clients in financial services, insurance, retail, and government. Our focus is delivering feature development and migration projects with agile teams, an emphasis on operational continuity, and the evolution of digital channels. In this position you will join initiatives to migrate functionality from mobile applications (APK) to web platforms, ensuring efficient, scalable solutions aligned with the bank's corporate standards.
You will work on projects that require data integration, feature migrations, and the development of solutions that optimize the end-user experience, while maintaining the quality and documentation traceability required in regulated environments.

Duties and Responsibilities

  • Develop channel-management functionality using Python and HTML5, covering both backend and frontend.
  • Migrate functionality from APK to web platforms, ensuring transitions without loss of performance or data integrity.
  • Help produce functional documentation for each development, maintaining traceability and clarity for operations and business teams.
  • Collaborate on technical definitions and code reviews to ensure adherence to the bank's standards and best practices.
  • Work collaboratively with UI/UX, QA, and DevOps teams to deliver scalable, maintainable solutions.
  • Identify, log, and propose continuous improvements in processes, performance, and application security.

Requirements and Profile

Required technical skills:
  • HTML5
  • Python
  • SQL Server
  • BigQuery
  • GitLab
  • ETLs
2 to 3 years of experience as a developer, with a background in web development and feature migration. Experience in the financial sector or related industries is valued, along with the ability to work in collaborative, results-oriented environments and strong communication skills for documenting and coordinating changes with stakeholders.

Desirable

Previous experience migrating from mobile apps to web platforms, additional knowledge of microservices architectures, and familiarity with data governance and security processes in regulated environments. Languages: fluent Spanish; technical English desirable.

Benefits and Work Environment

At BC Tecnología we foster a collaborative work environment that values commitment and continuous learning. Our culture supports professional growth through integration and knowledge sharing across teams.
Our hybrid arrangement, based in Las Condes, combines the flexibility of remote work with in-person collaboration, enabling better balance and energy at work.
You will take part in innovative projects with high-profile clients across diverse sectors, in an environment that promotes inclusion, respect, and technical and professional development.

Gross salary $4500 - 7500 Full time
Python Artificial Intelligence Machine Learning Kubernetes
Niuro connects projects with elite tech teams, collaborating with leading U.S. companies. Our mission is to simplify global talent acquisition through innovative solutions that maximize efficiency and quality. The Head of AI will join Niuro’s remote-first environment to define and drive the AI strategy across the organization, partnering with the CEO to align technology with business goals. You will lead the design and deployment of scalable, secure AI platforms, modernizing legacy systems while delivering transformative AI capabilities for our clients. This role sits at the intersection of strategic leadership and hands-on technical execution, guiding cross-functional teams and ensuring that AI initiatives translate into measurable business outcomes. You will also help nurture a global, high-performance workforce through mentorship, training, and strong governance around AI programs.


Key Responsibilities

  • Vision & Strategy: Partner with the CEO to define and execute Niuro's AI roadmap, ensuring alignment with business objectives and market opportunities. Translate strategy into actionable programs with clear milestones and metrics.
  • Architecture Leadership: Serve as the chief AI architect, designing scalable, secure AI-driven systems. Lead the transition from legacy platforms to modern infrastructure while ensuring reliability and compliance.
  • Innovation & Delivery: Drive rapid development of new AI-powered features and services, balancing speed with maintainability and long-term support.
  • Technology Oversight: Guide the use of cloud-based technologies (AWS, Terraform, Kubernetes, Python, Windows Server/IIS, FastAPI). Implement monitoring (CloudWatch, Grafana) and data pipelines (AWS Glue/Lambda) to ensure scalability and observability.
  • People & Stakeholders: Communicate complex technical concepts clearly to executives, clients, and internal teams. Mentor senior engineers and foster a culture of scientific rigor and responsible AI.

Required Skills & Experience

8+ years in software engineering, data science, or AI with at least 3+ years in leadership. Proven track record deploying AI/ML solutions at scale in production environments. Strong systems design background and experience with cloud platforms (AWS preferred). Advanced Python programming skills; experience with modern AI frameworks and LLMs. Demonstrated success modernizing legacy platforms and delivering scalable, maintainable AI solutions. Exceptional executive-level communication abilities and a talent for translating technical concepts into business value. Fluent in English; Spanish or Portuguese is a plus.

Desirable Skills & Experience

Experience in regulated industries (fintech, govtech) and products with active users and customer support operations. Familiarity with AWS AI services, container orchestration (Kubernetes/ECS), and MLOps. Exposure to LLM-based automation and data engineering workflows. A proactive, entrepreneurial mindset with a bias for action and strong collaboration skills.

Benefits & Perks

We offer the chance to participate in impactful, technically rigorous industrial data projects that drive innovation and professional growth. Niuro supports a 100% remote work model, enabling global flexibility. We invest in career development through ongoing training and leadership opportunities, ensuring continuous growth. Upon successful completion of the initial contract, there is potential for long-term collaboration and stable, full-time employment. Joining Niuro means being part of a global community with strong administrative support that enables you to focus on impactful work.

Fully remote: you can work from anywhere in the world.
$$$ Full time
Data Architect
  • Factor IT
  • Santiago (Hybrid)
SQL BigQuery CI/CD Cloud Architecture
At Factor IT we drive digital transformation with a focus on Data & Analytics, AI, automation, and strategic consulting. We are looking for a Data Architect to join regional projects with real impact on large companies, including the financial sector. In this role you will contribute to the design and evolution of data platforms on Google Cloud, ensuring scalability, reliability, and governance. The goal is to enable advanced analytics and efficient data consumption for business teams, integrating modern modeling, orchestration, and governance practices across the information lifecycle.


Responsibilities

As a Data Architect at Factor IT, your goal will be to design and standardize data solutions on Google Cloud that turn data into reliable, timely decisions. Main responsibilities:
  • Design the end-to-end data architecture, covering ingestion, storage, processing, modeling, and consumption.
  • Develop and optimize pipelines using BigQuery and orchestrators such as Airflow, plus Dataflow automations where appropriate.
  • Implement and maintain data modeling (e.g., analytical layers and/or dimensional models), ensuring performance and semantic consistency.
  • Create and maintain automations with dbt, defining transformations, tests, and data documentation.
  • Manage data governance: standards, access, quality, lineage, and best practices for responsible use of information.
  • Promote engineering patterns (CI/CD, versioning, testing, and monitoring) to ensure operational stability in production environments.
  • Coordinate with business and technical teams to translate requirements into scalable, measurable solutions.

Requirements and Experience

We are looking for a Data Architect with solid experience leading the design and evolution of modern data solutions in cloud environments. You should have advanced SQL skills and be able to apply them to optimize performance, ensure quality, and solve complex problems.
Required:
  • Advanced SQL.
  • BigQuery.
  • Airflow, dbt, and Dataflow.
  • Data modeling.
  • Data governance.
Expected experience:
  • Involvement in building and/or improving data platforms oriented toward analytics and decision-making.
  • Ability to define engineering standards and guidelines for teams that build on and consume the platform.
  • Experience with data-quality practices, standardization, and access controls.
Key competencies:
  • An analytical mindset and a focus on continuous improvement.
  • Clear communication to align technical and business stakeholders.
  • Proactivity in anticipating risks (performance, cost, quality, availability) and proposing mitigations.
  • A collaborative approach and a commitment to knowledge sharing within the team.
At Factor IT we value a culture built on conversation and a deep understanding of business requirements, so we are looking for someone who can translate real needs into robust, scalable, governable technical solutions.

Desirable

  • GCP Data Engineer certification.
  • Additional experience designing scalable data architectures and optimizing costs in BigQuery.
  • Knowledge of data governance patterns (catalog/metadata, lineage, access policies) and measurable quality practices.
  • Experience leading end-to-end initiatives (from architecture definition through production rollout and ongoing support).

Benefits

We offer a hybrid work arrangement based in Santiago, Chile, with flexible hours for a healthy balance between professional and personal life.
In addition, Factor IT offers a collaborative, dynamic environment with state-of-the-art technologies that drive professional growth, innovation, and continuous learning.
Your compensation package will be competitive and commensurate with your experience and profile, within an inclusive culture that values diversity, creativity, and teamwork. You will work on challenging projects with real impact on the region's technological transformation and on the financial sector.
If you are interested in building high-impact data solutions, join Factor IT and be part of a team that is shaping the future of technology.

$$$ Full time
JavaScript PostgreSQL Node.js DevOps

Company and Project Context

BNamericas is the leading Latin American business intelligence platform with 28 years of experience delivering news, project updates, and data on people and companies across strategic sectors such as Electric Power, Infrastructure, Mining & Metals, Oil & Gas, and ICT. We empower clients to access high-value information to make informed business decisions. The Engineering Lead will play a pivotal role in shaping a growing information platform used across industries and geographies, driving architecture, data workflows, and product evolution.

As part of a dynamic, multicultural team, you will drive high-performance software, data, and cloud initiatives, ensuring scalability, reliability, and security while fostering a culture of engineering excellence. This role combines hands-on development with strategic leadership to deliver a modular, scalable platform and to integrate cutting-edge AI-enabled capabilities where appropriate.


Core Responsibilities

  • Lead by example as a senior developer: design, implement, and review high-performance, maintainable code following clean code principles, testing, CI, and agile practices.
  • Shape and evolve system architecture with emphasis on scalability, modularity, security, and reliability; drive architectural decisions and technical direction.
  • Drive integration initiatives, including seamless Appian integration with the platform and interconnectivity between internal systems and tools.
  • Lead and mentor engineers, fostering accountability, continuous improvement, and high performance; remove blockers and optimize development workflows.
  • Oversee infrastructure planning and operations to ensure high availability, cost-efficiency, and robust security.
  • Guide data solutions, including data warehousing, transformations, and overall data architecture; oversee data acquisition, including web scraping strategies and automation.
  • Manage relationships with external partners (e.g., scraping providers) to ensure quality and alignment with technical standards.
  • Explore and help implement modern AI-driven solutions (e.g., agent-based AI) to enhance data workflows, automation, and product capabilities.
  • Partner with senior stakeholders across product, content, and business teams to align engineering efforts with company priorities.
  • Contribute to long-term technical direction and platform evolution to ensure scalability and sustainability.
  • Evaluate emerging technologies and introduce tooling or architectural improvements where relevant; steer platform evolution into a scalable, modular, high-quality technical solution.
  • Support the continued evolution of the platform to meet expanding geographic and sector coverage, ensuring robust data pipelines and a secure, resilient system.

Ideal Profile

What you’ll bring

  • Proven experience in a senior or lead engineering role, ideally within SaaS or data/information platforms.
  • Strong hands-on development skills in JavaScript, Node.js, and PostgreSQL, with a track record of scalable system design.
  • Solid understanding of DevOps, cloud infrastructure (AWS), and security best practices.
  • Experience with data architecture, including data warehousing and transformation pipelines.
  • Experience integrating third-party platforms (e.g., Appian) and working with internal data pipelines.
  • Familiarity with web scraping technologies, automation, and management of external vendors.
  • Exposure to or interest in AI-driven solutions (e.g., agent-based AI) is a strong plus.
  • Fluent English is required; Spanish and/or Portuguese are a strong plus.
  • Strong communication skills and the ability to collaborate with both technical and non-technical stakeholders.
  • A strategic mindset with the ability to balance hands-on delivery and broader technical direction.
  • An entrepreneurial attitude focused on quality, ownership, and impact.

Why you’ll love this role

You will shape and advance a growing information platform used across industries and geographies. This is a high-impact position with significant ownership, offering the chance to influence technical direction, data strategy, and product evolution while helping to build a culture of engineering excellence. You’ll work with a collaborative, diverse team in a dynamic market, and you'll have the opportunity to leave a lasting imprint on our platform and product roadmap.

Benefits

At BNamericas, we foster an inclusive, diverse, creative, and highly collaborative work environment. Our team is dynamic, committed, and always willing to support one another, creating a positive and motivating workplace.

We offer a range of benefits, including referral bonuses for bringing in new talent, early finishes on special occasions such as national holidays and Christmas, opportunities for continuous learning and professional development, and a casual dress code that encourages authenticity and comfort at work.

We invite you to be part of a company that values diversity and work-life balance, and that promotes an empowered, goal-oriented, and passionate way of working. Join us!

Fully remote You can work from anywhere in the world.
$$$ Full time
software growth code payroll

Who We Are

Wingspan is the first payroll platform designed specifically for independent contractors and their businesses. We simplify onboarding, payments, and compliance for flexible workforces of all sizes, from solo operators to large enterprises. 

We're a Series B startup based in NYC with distributed teams in the USA, Poland, and the UK, and backed by Andreessen Horowitz (a16z), Touring Capital, and a strong network of operators, including the CEOs and founders of Warby Parker, Harry's, Allbirds, Invision, and Flatiron Health.

About the Role

As a Software Engineer on the Payment Operations team, you will be responsible for the execution layer that ensures every dollar on Wingspan's platform is accounted for, reconciled, and moved accurately and on time. You will have direct access to production systems, a mandate to identify what's broken or inefficient, and the authority to engineer the fix.

This role reports to the Head of Payments & Compliance Operations and is based in Warsaw, Poland, with a remote work model.

What You'll Do

  • Design, develop, and ship internal systems and automation that eliminate entire categories of operational toil, owning every problem end-to-end from initial diagnosis to permanent fix
  • Build and maintain reconciliation infrastructure that keeps Wingspan's ledger, bank records, and platform transaction data in continuous alignment, automatically and at scale
  • Develop monitoring and alerting systems that surface funding health issues and payment anomalies in real time, ensuring problems are caught and resolved before they ever reach a customer
  • Collaborate with Engineering, Product, and Finance to identify recurring operational patterns and translate them into platform-level improvements that raise the reliability ceiling for the entire system
  • Contribute to the growth of our engineering culture by sharing knowledge, participating in code reviews, and proactively identifying opportunities to improve how the team builds, observes, and automates
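The reconciliation work described above can be pictured with a minimal sketch: match internal ledger entries to bank records by transaction id and flag amount mismatches or missing entries. This is plain Python with invented field names (`txn_id`, `amount_cents`), not Wingspan's actual data model; amounts are kept in integer cents to avoid floating-point drift:

```python
# Hedged sketch of a reconciliation pass between a ledger and bank records.
# Field names and data are illustrative assumptions.

def reconcile(ledger, bank):
    """Return sorted (txn_id, issue) pairs for every discrepancy found."""
    ledger_by_id = {e["txn_id"]: e["amount_cents"] for e in ledger}
    bank_by_id = {e["txn_id"]: e["amount_cents"] for e in bank}
    issues = []
    for txn_id, amount in ledger_by_id.items():
        if txn_id not in bank_by_id:
            issues.append((txn_id, "missing_in_bank"))
        elif bank_by_id[txn_id] != amount:
            issues.append((txn_id, "amount_mismatch"))
    for txn_id in bank_by_id:
        if txn_id not in ledger_by_id:
            issues.append((txn_id, "missing_in_ledger"))
    return sorted(issues)

ledger = [{"txn_id": "t1", "amount_cents": 5000},
          {"txn_id": "t2", "amount_cents": 1200}]
bank = [{"txn_id": "t1", "amount_cents": 5000},
        {"txn_id": "t3", "amount_cents": 700}]
print(reconcile(ledger, bank))
# [('t2', 'missing_in_bank'), ('t3', 'missing_in_ledger')]
```

A production version would run continuously against real bank feeds and feed its discrepancy list into the alerting systems mentioned above; the shape of the check is the same.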

Qualifications & Requirements

  • 3+ years of experience in a software engineering or engineering-adjacent role with exposure to payment systems, backend services, or data pipelines
  • Strong SQL skills; comfortable writing standalone scripts and using AI tools such as Claude Code, OpenAI, etc.
  • Familiarity with RESTful APIs and backend services, with Node.js an advantage

$$$ Full time
Infrastructure and Deployment Analyst
  • Coderslab.io
  • Bogotá (Hybrid)
Git SQL Oracle Linux
Coderslab.io is a company that helps organizations transform and grow through innovative technology solutions. You will join a global group of more than 3,000 collaborators, with offices across Latin America and the United States. You will work within diverse teams of top-tier talent and take part in innovative, challenging projects that will accelerate your professional development. You will have the opportunity to learn from experienced professionals and to work with cutting-edge technologies in a collaborative, results-oriented environment.


Role Responsibilities

The purpose of the role is to administer and configure the bank's test environments, as well as to manage applications and developments through to their release to the production environment.
  • Receive documentation such as manuals, delivery documents, and everything related to software developments, whether internal or external.
  • Perform deployments (manual or continuous) of received developments to test environments, with the corresponding object configurations.
  • Configure and standardize test environments as required.
  • Verify and resolve errors arising in test environments, whether from deployments of new developments or from installations and configurations of new applications.
  • Manage version control of application sources and development objects.
  • Prepare and generate the documentation, objects, and applications to be released to the production environment.
  • Execute the steps needed to correctly release local applications (web, Windows, etc.) to production.
  • Maintain and develop pipelines in GitLab.

Role Requirements

  • Knowledge of the software configuration management process.
  • Administration of Windows Server operating systems (various versions).
  • Installations on IIS, including web services and Windows services.
  • Basic knowledge of version control tools such as Git, TFS, and SVN.
  • Knowledge of SharePoint and Confluence.
  • Basic knowledge of operating systems: Linux, Windows Server.
  • Intermediate knowledge of SQL Server, Oracle, and DB2 databases.
  • Basic knowledge of Visual Studio.
  • Installation of SQL ETLs.
  • Experience deploying web, Windows, client-server, and Node.js applications, among others.
  • Proficiency with SoapUI.
  • Knowledge of PowerCenter.
  • Knowledge of GoAnywhere.

Gross salary $4000 - 6000 Full time
Python C# Docker CI/CD

Vequity is building the world’s most robust, contextualized buyer intelligence network for investment banks, private equity firms, and strategic acquirers — a platform with over 2.1 million buyer profiles, each containing ~100 structured and inferred data fields. Our proprietary AI agents continuously enrich, infer, and structure buyer intelligence at scale.

We need a fullstack engineer who ships product features end-to-end, brings real fluency with AI development tooling, and will take ownership of deployment pipelines that currently lack a dedicated owner.

This is a two-sided role: half building features that users see, half making the engineering team faster and more reliable. If you’ve actually built with Claude Code, Cursor, GitHub Copilot, or similar tools — not just experimented — and you can prove it with real output, we want to talk.


What you’ll own

  • Fullstack product development. Build and ship features across the Angular frontend and C# / Python backend. Translate product requirements into production-ready code. Write clean, tested, maintainable code with solid PR practices.
  • AI-augmented development. Actively use AI coding tools (Claude Code, Cursor, GitHub Copilot, Windsurf, Aider) to accelerate your own development velocity. Improve team patterns and best practices for AI-assisted workflows. Evaluate and integrate new AI development tools as they emerge.
  • Deployment and operations. Own and improve CI/CD pipelines, deployment automation, and infrastructure-as-code. Build monitoring, alerting, and incident response capabilities. Manage cloud infrastructure (GCP) including cost optimization and scaling. Create and maintain runbooks for operational procedures.
  • Developer experience and tooling. Reduce friction in the development-to-deployment cycle. Improve local development environments, testing infrastructure, and developer workflows. Standardize build, lint, and test tooling across the codebase.
  • Cross-functional collaboration. Work across product, engineering, and sales operations teams. Bridge feature development and infrastructure reliability. Participate in code reviews and mentor team members.

What success looks like in year one

  • Shipping 2–3 features per sprint while maintaining code quality.
  • Continuous deployment implemented within your first month.
  • At least one improvement to the team’s AI-assisted workflow per week.
  • Deployment pipeline has a clear owner with documented runbooks and <15 minute rollback capability within 3 months.
  • Zero unplanned downtime from deployment issues within 6 months.
  • Your teammates are measurably faster because of the tooling and patterns you’ve introduced.

What we’re looking for

Core requirements

  • 4+ years fullstack development experience with Angular + Python or C# backends.
  • Demonstrated production use of AI coding tools (Claude Code, Cursor, GitHub Copilot) — must be able to show concrete examples of how these tools changed your workflow and output.
  • Experience with CI/CD pipelines, containerization (Docker), and cloud deployment (GCP preferred, AWS acceptable).
  • Solid understanding of DevOps practices: infrastructure-as-code, monitoring, logging, alerting.
  • Strong written English — this is a remote, async-heavy role with a US-based team.
  • Comfort working in a fast-paced startup where priorities shift and ownership is expected.

Nice to have

  • GitHub Actions experience is a big plus.
  • Experience owning deployment pipelines end-to-end in a startup environment.
  • Terraform or Pulumi for infrastructure-as-code.
  • Kubernetes or Cloud Run experience on GCP.
  • Background in B2B SaaS or data-intensive platforms.
  • Experience with PostgreSQL and data-heavy applications.
  • Familiarity with the Python data tooling ecosystem (even if not a data engineer).
  • Contributions to open source or public examples of AI-augmented development work.

Compensation and benefits

We pay competitively for the LATAM market and we’re transparent about it.

  • Time off: Manage your own schedule. We trust you.
  • Health: $150/month health and wellness stipend.
  • Engagement: B2B contract. 30-day mutual notice.

How we work

  • Fully remote. We are based in Denver, Colorado (MT, UTC-7). You can work from Mexico, Colombia, Argentina, Brazil, Chile, or anywhere in the Americas with strong overlap.
  • Same time zone. We expect significant daily overlap with Central Standard Time (CT). LATAM time zones are ideal — this is a key reason we’re hiring in the region.
  • Async-first. We write things down. Docs, Loom videos, and thoughtful PR descriptions are the norm. Meetings happen when they’re the fastest path to clarity, not by default.
  • Small team, direct access. You will work directly with the Head of Engineering and the founder. No middle management. Your work ships fast.

Flexible hours Flexible schedule and freedom for attending family needs or personal errands.
Health coverage Vequity pays or copays health insurance for employees.
Computer provided Vequity provides a computer for your work.
$$$ Full time
AWS Data Engineer
  • BC Tecnología
  • Santiago (Hybrid)
Python SQL ETL Spark
At BC Tecnología we build agile teams for IT services, focused on Infrastructure, Software Development, and Business Units for clients in Finance, Insurance, Retail, and Government. Our goal is to deliver high-impact solutions through consulting, project development, outsourcing, and staffing.
As part of our CRM Customer Services program, we migrate and consolidate data onto cloud platforms (AWS) and Dynamics 365 Dataverse, guaranteeing the integrity, quality, and availability of information for operations and analytics. You will take part in innovative initiatives with high-profile clients, with a focus on continuous learning and technical development within a collaborative, customer-oriented environment. Our hybrid model combines remote work with time in our offices to encourage collaboration and a dynamic pace.


Responsibilities

  • Design, develop, and run the data engineering and migration processes required by the CRM Customer Services program, ensuring data integrity, quality, and availability on cloud platforms.
  • Technical scope: data engineering on AWS (S3, Glue, Athena, Redshift, Lambda, Step Functions); ETL/ELT design and development of data pipelines; data migration between legacy systems and cloud platforms; advanced SQL and data modeling (dimensional, relational).
  • Develop pipelines in Python and Spark/PySpark; apply data quality practices (validation, cleansing, reconciliation, profiling); use orchestration tools (Airflow, Step Functions); apply version control and CI/CD to data workflows.
  • Understand Microsoft Dynamics 365 Dataverse and its data model; design and execute migrations from legacy systems to cloud platforms and Dynamics 365; document models, pipelines, and migration processes.
  • Take part in agile ceremonies, report progress, and collaborate with QA on end-to-end validation; share data knowledge with the team and work closely with the Technical Lead on the program's data architecture.
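The migration-validation duties above can be illustrated with a hedged sketch of one check: comparing a legacy extract against the migrated target by row count and record fingerprints. All names and data here are invented for the example; a real run would pull from the actual source and target systems:

```python
# Illustrative migration validation: row counts plus order-independent
# per-record fingerprints. Names and data are assumptions, not a real schema.
import hashlib

def fingerprint(record, keys):
    """Stable hash of the selected fields of one record."""
    payload = "|".join(str(record[k]) for k in keys)
    return hashlib.sha256(payload.encode()).hexdigest()

def validate_migration(source, target, keys):
    """Return (ok, reason) comparing two datasets field-by-field."""
    if len(source) != len(target):
        return False, "row count mismatch"
    src = sorted(fingerprint(r, keys) for r in source)
    tgt = sorted(fingerprint(r, keys) for r in target)
    return (True, "ok") if src == tgt else (False, "content mismatch")

legacy = [{"id": 1, "name": "Ana"}, {"id": 2, "name": "Luis"}]
migrated = [{"id": 2, "name": "Luis"}, {"id": 1, "name": "Ana"}]
print(validate_migration(legacy, migrated, keys=["id", "name"]))
# (True, 'ok')
```

Sorting the fingerprints makes the comparison order-independent, which matters because migrated data rarely arrives in source order.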

Requirements and Profile

We are looking for a professional with solid experience in data engineering in cloud environments, especially AWS, and in projects migrating data to modern cloud and CRM solutions. You must be proficient in ETL/ELT pipelines and in relational and dimensional data modeling, and able to work in dynamic, collaborative environments. Experience with Oracle/Siebel, with Great Expectations or Deequ for data quality, and knowledge of the retail sector are valued. You should be proactive and results-oriented, with the communication skills to work with multidisciplinary teams and stakeholders.
Minimum requirements: experience in AWS Data Analytics/Data Engineering; design and migration of data between systems; advanced SQL; Python or Spark; experience with orchestration tools; familiarity with Dynamics 365 Dataverse; experience in agile environments and the ability to document processes and data models. An AWS Data Analytics certification, experience migrating from Oracle/Siebel, and knowledge of data quality tools are desirable, as is experience in retail and CRM environments.

Nice-to-Have Skills

AWS Data Analytics or Data Engineering certification. Experience migrating data from Oracle/Siebel. Knowledge of data quality tools such as Great Expectations or Deequ. Experience in the retail sector. Additional knowledge of data DevOps and agile methodologies. The ability to work in multicultural teams and to explain technical concepts to non-technical audiences. Experience in data architecture for CRM and in managing complex migration projects is also valued.

Benefits

At BC Tecnología we foster a collaborative work environment that values commitment and constant learning. Our culture is oriented toward professional growth through integration and knowledge sharing across teams.
Our hybrid model, based in Las Condes, combines the flexibility of remote work with in-person collaboration, supporting a better balance and a dynamic work style.
You will take part in innovative projects with high-profile clients across diverse sectors, in an environment that fosters inclusion, respect, and technical and professional development.

$$$ Full time
c# front-end software backend
NEORIS, now part of EPAM, is a digital accelerator that helps companies step into the future, with 20 years of experience as a digital partner to some of the world's largest companies. We are more than 4,000 professionals across 11 countries, with a multicultural startup culture in which we cultivate innovation and continuous learning to create high-value solutions for our clients. We are looking for a .NET/SQL Developer: a professional with a degree in Systems Engineering, Computer Science, or a related field, with at least 3 years of software development experience using C#, .NET Framework, and .NET Core, able to participate in the full development cycle, propose technical improvements, and work as part of a team to implement solutions, fix bugs, and bring innovation to the area's technology tools.
Main responsibilities:
  • Design and develop the product's business logic and backend systems.
  • Work closely with front-end developers to design and build functional, efficient, and complete APIs.
  • Decipher the software systems of existing legacy applications and integrate the application with the applicable data sources.
Requirements:
  • .NET / C#: development of backend applications; building APIs, services, and business logic.
  • ETLs: design, construction, and maintenance of data extraction, transformation, and loading processes.
  • SQL Server: advanced query handling, stored procedures, database modeling, and optimization.

$$$ Full time
EU GO Senior Software Engineer
  • Connectly
  • Remote/Greece
software system frontend python

At Connectly we are building the future of conversational commerce in Latin America, with a focus on WhatsApp. Instead of asking shoppers to install yet another app, we offer retailers a 360° engagement platform inside an app everyone already has on their phone: WhatsApp.


We are a VC-backed Series B startup with a world-class team hailing from Meta, Google, Uber, and other top Silicon Valley companies. We operate as a hybrid company, with offices in Bogotá and San Francisco, and a remote-first culture everywhere else.



Job summary
  • We’re looking for an exceptional Senior Backend Engineer with strong Go (Golang) expertise and experience designing large-scale distributed systems.
  • You’ll work across backend and frontend domains, collaborating closely with product, sales, and AI platform teams to design, prototype, and launch powerful conversational experiences for some of Latin America’s largest retailers. This is a role for an independent problem solver who enjoys both deep technical challenges and high-impact product thinking.


Responsibilities include:
  • Design, build, and maintain distributed backend systems using Go, AWS, Kafka, Postgres, and DynamoDB.
  • Collaborate cross-functionally with product managers, designers, and enterprise partners to define user journeys, performance goals, and success metrics.
  • Own critical parts of Connectly’s platform infrastructure — from messaging orchestration to data pipelines and API integrations.
  • Collaborate closely with product, AI, and frontend teams to deliver scalable, customer-facing features.
  • Ensure reliability, observability, and operational excellence across all services.
  • Establish, track, and iterate on performance metrics, leveraging data to optimize outcomes and drive measurable business results.
  • Work asynchronously with global teams, maintaining strong communication and documentation.
  • Plan and manage your workstream, making thoughtful tradeoffs between deadlines, quality, and innovation.
  • Mentor teammates, contribute to code reviews, and uphold engineering best practices in a fast-moving, distributed environment.


What will make you excel at this job:
  • Exceptional communication skills with both technical and non-technical stakeholders.
  • Deep attention to detail paired with strong system-level thinking; you can zoom out to strategy and dive deep into code.
  • A bias for action and results, with comfort navigating ambiguity and evolving product needs.
  • Genuine curiosity and a drive to stay ahead of the rapidly changing AI landscape.
  • Balance of product sense and technical rigor; you care as much about user experience as you do about system performance.
  • Experience with cloud infrastructure (AWS) and event-driven architectures.
  • Solid understanding of system design, concurrency, and data consistency.
  • Pragmatic approach to engineering; you balance simplicity, reliability, and speed.


Requirements
  • BS or MS in Computer Science or related technical field.
  • 5+ years of experience in hands-on software engineering roles.
  • Proven track record building and scaling enterprise systems using Go, AWS, Kafka, Postgres, and/or DynamoDB.
  • Experience with Python is a plus.
  • Experience with frontend engineering (React, TypeScript, etc.) is a plus.
  • Prior experience developing or deploying WhatsApp conversational applications is a strong plus.
  • Experience working in fast-paced, customer-centric environments, ideally in a startup or high-growth tech company.
  • Based in Europe; remote-first with occasional team offsites.


Benefits
  • Work alongside an exceptional, mission-driven team in a culture that values curiosity, impact, and continuous learning.
  • Competitive compensation with equity participation.
  • Unlimited time off and flexible working hours.
  • Remote-first culture across the EU.



We are a strong believer in passion, curiosity and willingness to learn on the job. If you are in doubt, we encourage you to apply! 


Connectly is an equal opportunity employer. We’re committed to building a diverse, inclusive, and supportive workplace that is distributed around the world.



$$$ Full time
Data Engineer GCP
  • TCIT
  • Santiago (Hybrid)
Python BigQuery ETL Google Cloud Platform

At TCIT, we are leaders in cloud software development with more than 9 years of experience. We work on projects that digitally transform organizations, from agricultural management and online auction systems to solutions for courts and certification monitoring for mining. We take part in international initiatives, collaborating with technology partners in Canada and other markets. Our team drives quality, sustainable solutions with a focus on social impact. We are looking to grow our team with talented people who want to develop and leave their mark on high-impact cloud projects.


Main Responsibilities

You will be responsible for delivering efficient, robust, and scalable solutions on GCP. Your role will involve:
  • Designing, building, and maintaining scalable, high-performance data processing systems on GCP.
  • Developing and maintaining data pipelines for the extraction, transformation, and loading (ETL) of data from diverse sources on GCP.
  • Implementing solutions for the efficient storage and processing of large data volumes using GCP tools and services.
  • Collaborating with multidisciplinary teams to understand requirements and design appropriate solutions in the GCP context.
  • Optimizing the performance of data processing systems and guaranteeing data integrity on GCP.
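The ETL responsibilities above can be sketched, in miniature, as a chain of extract/transform/load stages. This is illustrative plain Python with invented record fields; a real GCP pipeline would read from sources such as Cloud Storage and load into BigQuery rather than in-memory lists:

```python
# Toy extract-transform-load chain built from generators, so records stream
# through the stages one at a time. All names and data are illustrative.

def extract(rows):
    # Stand-in for reading raw records from a source system.
    yield from rows

def transform(records):
    # Normalize and filter: drop records with no amount, uppercase the
    # country code, and coerce the amount to a float.
    for r in records:
        if r.get("amount") is None:
            continue  # incomplete record; a real pipeline might quarantine it
        yield {"country": r["country"].upper(), "amount": float(r["amount"])}

def load(records):
    # Stand-in for a warehouse write: collect the records into a list.
    return list(records)

raw = [{"country": "cl", "amount": "10.5"},
       {"country": "pe", "amount": None}]
loaded = load(transform(extract(raw)))
print(loaded)  # [{'country': 'CL', 'amount': 10.5}]
```

Chaining generators keeps memory flat regardless of input size, which is the same property a Dataflow or Spark job provides at cluster scale.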

Requirements and Profile

We are looking for a Data Engineer with strong Python skills and demonstrable experience working with cloud solutions. The ideal candidate will combine technical skills with communication and teamwork abilities to deliver high-performance data solutions.

Technical requirements:

  • 1–4 years of experience in Data Engineering and GCP (required).
  • Experience developing data pipelines with Python (pandas, pyarrow, etc.).
  • Experience with Google Cloud Platform (GCP) and data-related services (ETL/ELT, Dataflow, Glue, BigQuery, Redshift, data lakes, etc.).
  • Experience with process orchestration (Airflow, Prefect, or similar).
  • Good security and data governance practices, and the ability to document solutions.

Soft skills:

  • Clear communication and the ability to work in cross-functional teams.
  • Proactivity, results orientation, and the ability to prioritize in dynamic environments.
  • Resourcefulness in problem solving and continuous learning of new technologies.

Nice to Have

Experience with cloud data management tools (BigQuery, Snowflake, Redshift, Dataflow, Dataproc).

Knowledge of security and compliance in data environments; experience in projects with social impact or subject to sector regulations.

The ability to write technical documentation in Spanish and English, and a demonstrated capacity to mentor colleagues.

Conditions

Hybrid work arrangement.
The offices are located in the Las Condes district, near the Manquehue metro station.

Computer provided TCIT provides a computer for your work.
Beverages and snacks TCIT offers beverages and snacks for free consumption.
$$$ Full time
Senior Software Engineer Trading Infrastructure
  • Gauntlet
  • New York City / San Francisco / Los Angeles / Remote
software design web3 defi

Gauntlet leads the field in quantitative research and optimization of DeFi economics. We manage market risk, optimize growth, and ensure economic safety for protocols facilitating most spot trading, borrowing, and lending activity across all of DeFi, protecting and optimizing the largest protocols and networks in the industry. We build institutional-grade vaults for decentralized finance, delivering risk-adjusted onchain yields for capital at scale. Designed by the most vigilant, quantitative minds in crypto and informed by years of research.


As of November 2025, Gauntlet manages over $2B in vault TVL, and optimizes risk and incentives covering over $42 billion in customer TVL. We continually publish cutting-edge research that informs our risk models, alerts, and analysis, and is among the most cited institutions — including academic institutions — in terms of peer-reviewed papers addressing DeFi as a subject. We’re a Series B company with around 75 employees, operating remote-first with a home base in New York City.


As a company, we build institutional-grade vaults that deliver risk-adjusted DeFi yields at scale, powered by automated risk models and off-chain intelligence. Gauntlet curates strategies across Morpho, Drift, Symbiotic, Aera and more, with >$2B in vault TVL and a growing suite of Prime, Core and Frontier vaults.


Our mission is to drive adoption and understanding of the financial systems of the future. We operate with a trader’s discipline and a risk manager’s skepticism: size carefully, stress routinely, unwind decisively. The label equals the package equals the contents. No surprises, just predictable, reliable vaults.


Join our derivatives trading team and work on the key infrastructure that powers our product offering as well as trading systems. Work with a team with decades of experience in tech and finance to build the backbone of our high-performance derivatives trading strategies. You'll work close to trading, own critical infrastructure end-to-end, and ship systems that manage real capital in live crypto markets.



Responsibilities
  • Design, implement, and operate scalable distributed systems in production.
  • Build low-latency and streaming systems for real-time and near real-time workloads.
  • Develop data pipelines and ETL workflows for ingesting, transforming, and serving data.
  • Build and maintain application services and APIs used by internal and external systems.
  • Implement Web3 protocol integrations, including smart contract interactions and on-chain data ingestion via RPCs, logs, and indexers.
  • Apply SRE principles to improve reliability, observability, and operational correctness.
  • Participate in incident response, debugging production issues and driving root-cause fixes.
  • Contribute to system design and code reviews, maintaining high engineering standards.
  • Leverage AI-assisted development tools to improve productivity, code quality, and system understanding, while exercising strong engineering judgment.
  • Write and maintain technical documentation for systems and workflows.


Qualifications
  • 6+ years of professional software engineering experience.
  • Strong proficiency in Python, Rust, and/or JavaScript/TypeScript.
  • Experience building low-latency or high-throughput systems.
  • Experience designing and operating scalable distributed systems.
  • Hands-on experience with Web3 systems, including interacting with smart contracts and consuming on-chain data.
  • Experience with streaming or messaging systems (e.g. Kafka, Pub/Sub).
  • Experience with data storage systems (e.g. Postgres, ClickHouse).
  • Experience deploying and operating software in cloud environments (e.g. GCP).
  • Familiarity with containerized systems (Docker, Kubernetes).
  • Understanding of SRE practices, including monitoring, alerting, and incident response.
  • Strong understanding of security fundamentals (authentication, authorization, secrets management).


Bonus Points
  • Previous experience at financial or trading firms.
  • Smart contract development experience (e.g. Solidity).
  • Experience with workflow orchestration (e.g. Dagster).
  • Experience operating systems with strict reliability or performance requirements.
  • Exposure to infrastructure as code or CI/CD systems.


Benefits and Perks
  • Remote first - work from anywhere in the US & CAN!
  • Competitive packages with the added opportunity for incentive-based compensation
  • Regular in-person company retreats and cross-country "office visit" perk
  • 100% paid medical, dental and vision premiums for employees
  • Laptop provided
  • $1,000 WFH stipend upon joining
  • $100 per month reimbursement for fitness-related expenses
  • Monthly reimbursement for home internet, phone, and cellular data
  • Unlimited vacation policy
  • 100% paid parental leave of 12 weeks
  • Fertility benefits


$185,000 - $225,000 a year

Please note at this time our hiring is reserved for potential employees who are able to work within the contiguous United States and Canada. Should you need alternative accommodations, please note that in your application.


The national pay range for this role is $165,000 - $205,000 plus additional On Target Earnings potential by level and equity in the company. Our salary ranges are based on paying competitively for a company of our size and industry, and are one part of many compensation, benefits and other reward opportunities we provide. Individual pay rate decisions are based on a number of factors, including qualifications for the role, experience level, skill set, and balancing internal equity relative to peers at the company.  


#LI-Remote



Please mention the word **CONSUMMATE** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$$$ Full time
Data Operational Engineer
  • TIMINING
  • Santiago (Hybrid)
Python Git SQL ETL

At TIMining, we work to turn operational information from mining sites into actionable value through our control and monitoring platforms. This role joins the data team, contributing to the design, development, and operation of ETL pipelines that integrate diverse sources into TIMining's databases and products. You will be part of a project focused on operational continuity, algorithm calibration, and the automation of internal processes to optimize the workflow of both the client and the team.

Apply exclusively at getonbrd.com.

Duties

  • Develop, maintain, and document Python and SQL scripts (connectors) for ETL into the databases of TIMining's products.
  • Design, implement, and maintain CI/CD flows so that pipeline changes reach production safely and automatically.
  • Monitor the health and performance of data processes (logging and alerting), guaranteeing uptime and response to operational incidents.
  • Administer and orchestrate pipelines with scheduling tools (Airflow, Dagster) and containers (Docker).
  • Validate pipeline results (qualitatively and quantitatively) against operational reports from mining sites.
  • Identify, assess, and mitigate risks in pipeline development, accounting for data quality and contingency plans.
  • Develop internal projects to automate routine tasks and simplify the team's work.
  • Attend and present at technical meetings with clients to manage access to data sources and resolve questions.
  • Analyze and document client data sources by system (FMS, MGS, or others) and calibrate the algorithms of the company's software.
  • Work 24/7 shifts to ensure operational continuity.

Requirements and Experience

A degree in Data Science Engineering, Civil Engineering, or a related computing field. A minimum of 2 years of experience in similar roles and verifiable experience implementing ETL pipelines are required. We value advanced command of Python and SQL, hands-on experience deploying applications and working with containers, and experience in data orchestration with tools such as Apache Airflow or Prefect. Proficiency with version control (Git) and collaborative workflows, querying APIs, and advanced database work. Knowledge of Google Suite and Office. Analytical skills, proactivity, and the ability to work both autonomously and as part of a team. Languages: native Spanish; English desirable (upper-intermediate).

We are looking for candidates with experience in technology projects and knowledge of the open-pit mining industry, as well as experience with cloud architectures (AWS, Azure, or GCP) and Infrastructure as Code (Terraform, CloudFormation).

Desirable Requirements

Experience in:
- Implementing technology projects.
- Knowledge of the mining industry and its operations.
- Familiarity with agile methodologies, and experience with Infrastructure as Code tools.
- Knowledge of monitoring solutions and large-scale data production environments is a plus.

Benefits

We offer an environment focused on innovation in the mining industry, with opportunities for professional development and work on a multidisciplinary team. If you fit the profile, we invite you to join TIMining and contribute to the digital transformation of mining operations.

$$$ Full time
serverless node.js api senior

About Coderio

Coderio designs and delivers scalable digital solutions for global companies. With a solid technical foundation and a product-oriented mindset, our teams lead complex projects from architecture through execution. We value autonomy, clear communication, and technical excellence, collaborating closely with international teams and partners to build technology that makes an impact.

🌍 More information: http://coderio.com

We are looking for a backend engineer with independent technical judgment, capable of designing event-driven microservices that handle millions of requests without blinking. You will own the services layer and data pipelines, making critical telemetry available for analytics. You must be able to engage with sound technical judgment alongside Data Engineering teams and design scalable solutions under pressure.

What to expect from this role (Responsibilities)

This is a role with total technical ownership: you design, decide, build, operate, and take responsibility for critical domains of the platform.

 

Requirements

5+ years in backend development (seniority based on autonomy and proactivity).

3+ years of solid experience with Node.js and TypeScript.

3+ years operating in AWS Serverless environments (Lambda, API Gateway, SQS, SNS).

2+ years of experience in basic Data Engineering and relational database modeling (PostgreSQL).

Desirable

1+ year of experience with TimescaleDB or time-series databases.

Previous experience in IoT or industrial telemetry projects.

Knowledge of Infrastructure as Code (Terraform/CDK).

 

Soft Skills

Extreme Ownership: the ability to take a domain and drive it to resolution end to end.

Communication with Judgment: the ability to challenge and collaborate with technical stakeholders (Data Teams).

Proactivity: doesn't wait for instructions; identifies bottlenecks and proposes solutions.

 

Benefits

Remote work

Participation in a high-impact regional strategic project.

Collaboration with an international team and strong technical leadership.

Opportunity for professional growth within digital transformation projects.

 

Why join Coderio?

We are remote-first and passionate about technology, collaborative work, and fair compensation. We offer an inclusive, challenging environment with real opportunities for growth. If you're motivated to build impactful solutions on global finance and HR projects, we're waiting for you. Apply now.




Please mention the word **MESMERIZINGLY** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$$$ Full time
Sr. Software Engineer, Back-End
  • Rebuy, Inc.
  • Remote
software design jira saas

The Company You’ll Join

At Rebuy, we’re on a mission to revolutionize shopping with intelligent, personalized experiences that wow customers around the globe. As a fully remote team, we power some of the fastest-growing DTC brands like Aviator Nation, Liquid Death, Magic Spoon, Blenders, Laird Superfoods, Primal Kitchen, and many more.

We believe in ownership, drive, and empathy, and strongly uphold that every team member plays a vital role in shaping the future of intelligent commerce. Our culture thrives on collaboration, creativity, and genuine passion. We don’t just build great tech - we build lasting partnerships, a strong community, and a place where people love to work.

The Problems You’ll Solve

Rebuy and its team members continually strive to create a high-spirited, intentional work environment that stresses performance, productivity, collaboration, and merit.

As a Sr. Software Engineer, Back-End, you’ll own some of the most consequential systems at Rebuy. Your primary anchor is our billing and payments infrastructure — the engine that determines how merchants are charged, how partners get paid, and how financial balances flow across our entire product suite. This is genuinely complex financial engineering. It requires deep PHP and Go expertise, careful architecture, and judgment that no automated tool can replicate. Merchant billing runs daily, touches real revenue, and demands someone who understands both the technical and business dimensions of every decision.

Alongside billing, you’ll grow into a broader platform portfolio — the partner portal, data ETL pipelines, customer-facing APIs, and reporting infrastructure that power the business. And in the near term, you’ll play a critical role in a significant technical migration: moving our legacy CodeIgniter 2 codebase to CodeIgniter 4, including work tied to increasing our enterprise market share. This migration requires hands-on PHP expertise and cannot be deferred.

You won’t be handed a sprawling list of things you must do on day one. You’ll be trusted to grow into this role — and rewarded when you do.

  • Billing & Payments Architecture: Design and build Rebuy’s centralized billing system that handles merchant billing, partner payments, and customer-facing charges. Architect the integration layer that allows payment balances to be applied across Rebuy’s full suite of services. Tackle genuinely complex financial engineering challenges with PHP and Go at scale.

  • Build Robust APIs: Design and implement secure, well-structured APIs in PHP and Go to power billing events, payment processing, and financial data flows across our platform and Shopify integrations.

  • Legacy Modernization: Lead and contribute to the migration of our CodeIgniter 2 codebase to CodeIgniter 4. This is high-priority, near-term work with real business dependencies — including enterprise partnership commitments — and requires a PHP engineer with the experience and judgment to do it right.

  • Agentify the Platform: Partner with product and engineering to identify where AI agents can automate workflows, surface insights, and guide merchants through our product. Build the backend systems — APIs, data pipelines, and event hooks — that enable intelligent automation. This is genuinely new territory and one of the most exciting growth vectors for Rebuy’s product.

  • Platform Breadth: Our team owns more than billing and payments — we also support a partner portal, data ETL pipelines, customer-facing reporting APIs, and the infrastructure that makes data flow reliably across the business. You won’t be responsible for all of it on day one, but you’ll have genuine opportunities to grow into the areas that most interest you. Engineers here don’t get siloed; they get context.

  • Engineering Best Practices: Contribute significantly to the engineering culture at Rebuy by establishing, documenting, and promoting best practices. Lead initiatives to introduce and standardize frameworks and tools that increase development efficiency and maintainability.

  • Security & Compliance: Stay current with the latest security trends, vulnerabilities, and best practices as they apply to billing and payment systems. Champion security-first engineering across authentication, authorization, data encryption, and compliance considerations in everything you build.

  • PHP Technical Leadership: Serve as a key technical anchor for PHP across the engineering organization. Rebuy’s codebase has significant PHP depth and relatively few engineers with that expertise. You’ll lead code reviews, share knowledge actively, and help raise the PHP competency of the broader team.

  • Quality Assurance: Conduct quality checks on deliverables to ensure code, setup, and configurations meet expected results. Ensure that all features meet high standards of quality and performance before deployment.

  • Team Collaboration: Engage actively in building a strong team culture. Work closely with the Product Owner, Engineering Manager, and peers across billing, payments, partner tools, and data infrastructure to define requirements, estimate effort, and drive solutions forward. This is a team where your voice matters — you won’t just be handed tickets. Assist the Support team in triaging and resolving high-priority production issues.

Technologies We Use:

  • AI: Anthropic Enterprise Claude Code / Co-work, Cursor, ad-hoc AI tools budget.

  • Frontend Technologies: React, TypeScript, GraphQL, VueJS, Angular

  • Backend technologies: PHP, Go, MySQL, Bigtable, Elasticsearch

  • Other Tools: Jira, Bitbucket, Confluence, Google Suite, Slack, One Password, Notion


Who You Are

We’re stoked to meet you and get to learn more about you, your experience and your interest in joining our team.

The Hard Skills:

  • Experience building or maintaining billing, payments, or financial systems — including working with payment processors, subscription engines, invoicing pipelines, or similar financial infrastructure in a production SaaS environment.

  • Educational background in CS // Engineering or a similar area.

  • 5+ years of hands-on experience building backend applications with PHP and Go, with a proven track record of delivering complex, high-traffic systems.

  • Experience designing and implementing secure, scalable, and maintainable RESTful APIs in PHP and Go, with a deep understanding of API design patterns, versioning, and performance optimization.

  • Experience with cloud-based technologies, preferably GCP.

  • Strong understanding of a performant SaaS environment.

  • Experience in a Scrum/Agile environment.

  • Experience with the Atlassian suite, including Jira and Bitbucket.

  • Solid understanding of security fundamentals as they apply to backend and financial systems — including secure coding practices, authentication/authorization patterns, data encryption, and awareness of current vulnerability trends (e.g. OWASP Top 10)

The Soft Skills:

  • A collaborative mindset and work approach with the ability to lead projects and mentor others.

  • The ability to thrive in a fast-paced environment with a high level of autonomy and responsibilities.

  • Excellent communication skills, especially being able to explain technical concepts to both technical and non-technical audiences.

  • Genuinely curious about the intersection of engineering and business. You care about the downstream impact of what you build — not just that the code works, but that it moves the company forward.

Who You’ll Meet With

Now let’s get into who you’ll meet during our interview process! After you submit your application and it’s been reviewed by our team, we will reach out to you inviting you to meet with us. From there, you can expect an interview process similar to this:

  • An introductory call with someone from the Talent Acquisition team for about 30 min.

  • Interview with the Hiring Manager to learn more about you and answer your questions about Rebuy and this role

  • A coding challenge and whiteboarding exercise to show us your skill set during a live panel interview with a few team members.

  • Short final interview with our CEO and COO where you’ll get to learn more about Rebuy.

The Perks You’ll Enjoy

Rebuy is a fully remote company across the U.S. and Canada that aims to provide all of our team with the resources, support and flexibility they need to thrive in their roles.

  • Team: We’ve got the best, brightest, most brilliant team members who are excited to meet you! We also like to think we have a good sense of humor.

  • Remote Work: With a strong internet connection, you’re able to work from anywhere within the U.S. and Canada.

  • PTO: We offer a flexible vacation policy, a generous holiday schedule, parental leave, and a sick policy. There are other policies too, like a birthday holiday!

  • Amazing Benefits: 100% free health, dental, and vision insurance for you and your family. Don’t worry, there’s even more!

  • Retirement Plans: For our U.S. employees we offer 401(k) retirement plans, and for our Canadian employees we offer TFSA and RRSP retirement plans. You’ll also enjoy a 3% contribution of your gross salary, no matter where you’re located!

Our compensation reflects the cost of labor across several U.S. geographic markets, and we pay differently based on those defined markets. The U.S. pay range for this position is $130,000 - $180,000 USD annually. Pay within this range varies by work location and may also depend on job-related knowledge, skills, and experience. Your recruiter and hiring manager can share more about the specific salary range for the job location during the hiring process.

Disclosures:

Equal Opportunity Statement

Rebuy, Inc. is proud to be an Equal Employment Opportunity employer. We do not discriminate based on gender, race or color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law.

Rebuy, Inc. aims to make rebuyengine.com accessible to any and all users. If you have a disability or special need that requires accommodation to navigate our website or complete the application process, email hr@rebuyengine.com.



Please mention the word **SUPPORTER** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
Gross salary $1500 - 2000 Full time
Analytics Engineer
  • Artefact LatAm
SQL Business Intelligence ETL Power BI

We are Artefact, a leading global consultancy in creating value through the use of data and AI technologies. We aim to turn data into business impact across organizations' entire value chains, working with clients of varied sizes, industries, and countries. We are proud to say we are enjoying significant growth in the region, which is why we want you to join our team of highly skilled professionals to tackle complex problems for our clients.

Our culture is characterized by a high degree of collaboration and an environment of constant learning, where we believe that innovation and solutions come from every member of the team. This drives us to action and to produce high-quality, scalable deliverables.

Apply through Get on Board.

Your responsibilities will be:

  • Collect and analyze data from diverse sources to identify patterns and trends. Extract meaningful insights to understand present and future business performance.
  • Create data models tailored to different projects and industries.
  • Create and optimize reports, dashboards, and scorecards for effective presentation of information, using BI tools such as Tableau, Power BI, or QlikView.
  • Identify process improvement opportunities through data analysis.
  • Maintain and update databases to guarantee their integrity and quality.
  • Provide training and support to the team in the use of Business Intelligence tools. Collaborate with diverse teams to create end-to-end solutions. Understand client needs and proactively propose improvements and solutions.
  • Monitor data science and machine learning models. Maintain data quality across information flows. Manage the security and scalability of cloud BI environments.

The requirements for the role are:

  • A degree in Industrial Civil Engineering, Mathematics, Computer Science, or a related field
  • 1 to 2 years of work experience in:
    • BI projects
    • Visualization tools such as Power BI, Tableau, QlikView, or others
    • BI solutions in cloud environments (for example, Azure and Power BI Service)
    • Data sources (SQL Server, MySQL, APIs, Data Lakes, etc.)
    • Writing SQL queries
    • Developing data models for analytical use and programming ETLs
  • Professional working proficiency in English

Some nice-to-haves (not required):

  • Knowledge of Python or R
  • Big Data skills with a view to establishing reporting

Some of our benefits:

  • A budget of 500 USD per year for training: courses, memberships, events, or similar.
  • Rapid professional growth: a mentoring plan for training and career advancement, with raise and promotion review cycles every 6 months.
  • Up to 11 vacation days beyond the legal minimum, so you can rest and maintain a healthy work-life balance.
  • Participation in the company profit-sharing bonus, plus referral bonuses for new employees and clients.
  • A half day off on your birthday, plus a small gift.
  • Biweekly paid team lunches at our hubs (Santiago, Bogotá, Lima, and Mexico City).
  • Flexible hours and objective-based work.
  • Remote work, with the option of going hybrid (office in Santiago de Chile; paid coworking space in Bogotá, Lima, and Mexico City).
  • Extended paternity leave for men, and health-system pay-difference coverage for women (Chile)

...and more!

Fully remote You can work from anywhere in the world.
Gross salary $2800 - 3600 Full time
Data Engineer
  • Checkr
  • Santiago (Hybrid)
Python SQL Kubernetes CI/CD
Checkr is expanding its innovation hub in Santiago to advance the accuracy and intelligence of its background check engine at global scale. This team works closely with the U.S. offices to optimize the screening engine, detect fraud, and evolve the platform with GenAI models. The selected candidate will join a strategic effort to balance speed, cost, and accuracy, impacting millions of candidates and improving the experience of customers and partners. The role involves leading optimization initiatives, designing analytics strategies, and developing predictive models within a high-performance technology stack.

Apply directly on Get on Board.

Responsibilities

  • Create, maintain, and optimize critical data pipelines that serve as the foundation for Checkr's platform and data products.
  • Build tools that help optimize the management and operation of our data ecosystem.
  • Design scalable, secure systems to handle the enormous flow of data as Checkr continues to grow.
  • Design systems that enable repeatable, scalable machine learning workflows.
  • Identify innovative applications of data that can lead to new products or insights, enabling other Checkr teams to maximize their own impact.

Qualifications and Requirements

  • More than two years of industry experience in a data engineering or backend engineering role, plus a bachelor's degree or equivalent experience.
  • Programming experience in Python or SQL; proficiency in one is required, along with at least some experience in the other.
  • Experience developing and maintaining production data services.
  • Experience with data modeling, security, and governance.
  • Familiarity with modern CI/CD practices and tools (for example, GitLab and Kubernetes).
  • Experience with, and a passion for, mentoring other data engineers.

Conditions

  • A collaborative, fast-moving environment
  • Being part of an international company headquartered in the United States
  • A learning and development reimbursement allowance
  • Competitive compensation and opportunities for professional and personal growth
  • 100% medical, dental, and vision coverage for employees and dependents
  • 5 additional vacation days and flexibility to take time off
  • Equipment reimbursement for working from home
At Checkr, we believe a hybrid work environment strengthens collaboration, drives innovation, and fosters connection. Our main offices are in Denver, CO, San Francisco, CA, and Santiago, Chile.
Equal Employment Opportunity at Checkr

Checkr is committed to hiring qualified, talented people from diverse backgrounds for all of its technical, non-technical, and leadership roles. Checkr believes that bringing together and celebrating unique backgrounds, qualities, and cultures enriches the workplace.

Flexible hours Flexible schedule and freedom for attending family needs or personal errands.
Partially remote You can work from your home some days a week.
Health coverage Checkr pays or copays health insurance for employees.
Computer provided Checkr provides a computer for your work.
Informal dress code No dress code is enforced.
Vacation over legal Checkr gives you paid vacations over the legal minimum.
Beverages and snacks Checkr offers beverages and snacks for free consumption.
$$$ Full time
Senior Solutions Architect
  • BC Tecnología
  • Santiago (Hybrid)
Microservices Cloud Computing CI/CD Security
BC Tecnología is an IT consultancy with experience designing solutions for clients in financial services, insurance, retail, and government. We focus on consulting and solution design, team building, staff outsourcing, project development, and IT support and administration services. Our culture favors professional growth, integration, and knowledge sharing across teams. In this role, you will lead the definition and validation of technical architectures for solutions in the retail sector, coordinating with implementation teams and ensuring interoperability across core systems, middleware, SaaS platforms, and hybrid environments.
The position sits within a portfolio of innovative projects with high-profile clients across multiple sectors, promoting agile practices, stakeholder management, and a focus on data quality and security.

Apply from getonbrd.com.

Duties and Responsibilities

  • Gather requirements for, design, and validate technical architectures for solutions in the retail sector, ensuring interoperability across core systems, middleware, SaaS platforms, and hybrid environments (cloud and on-premise).
  • Translate business requirements into robust, scalable technical solutions aligned with the strategy of the company and the client.
  • Lead technical implementation and collaborate with development, DevOps, security, and operations teams.
  • Define integration patterns (REST APIs, events, files, SFTP), process orchestration, and event bus design; manage microservices and middleware.
  • Model and document components and interfaces (C4, BPMN, sequence diagrams) to guarantee clarity and traceability.
  • Data management: ensure data quality, consistency, and performance; work with analytics and storage solutions (e.g., analytical databases, optimized schemas).
  • Security and compliance: design secure architectures, access control, encryption, and regulatory compliance.
  • Agile and DevOps methodologies: promote CI/CD practices, deployment automation, monitoring, and continuous improvement.
  • Manage business and technology stakeholders; communicate effectively and provide technical leadership to multidisciplinary teams.
  • Evaluate and select technologies aligned with business objectives; be proactive in solving problems and managing multiple priorities.
  • Desirable knowledge of CRM, especially Customer Services.

Profile and requirements

Must-have requirements:

  • 5+ years of experience in solution architecture roles, preferably in retail, consumer goods, or industries with heavy systems integration.
  • Proven experience in core-system integration projects and cloud/SaaS solutions.
  • Experience with migrations, modernization, or platform implementations.
  • Experience leading technical teams and managing business and technology stakeholders.
  • Effective communication and leadership skills for interacting with multidisciplinary teams.
  • Command of cloud computing (AWS) and integration patterns (REST, events, files, SFTP).
  • Experience with analytical databases (e.g., Redshift) and ETL/ELT.
  • Knowledge of security, compliance, and DevOps practices.
  • Detailed architecture design covering data flows, interoperability, and resilience; modeling and documentation of components and interfaces.
  • Ability to evaluate technologies and ensure interoperability across legacy, cloud, and SaaS systems; good development practices.

Desirable: knowledge of CRM, specifically Customer Services.

Desirable requirements

Strategic vision and results orientation; ability to manage multiple priorities; proactivity and problem solving; good development practices; communication and negotiation skills with stakeholders; a customer-oriented mindset and the ability to translate business needs into effective technical solutions.

Benefits

At BC Tecnología we foster a collaborative work environment that values commitment and constant learning. Our culture is oriented toward professional growth through integration and knowledge sharing across teams.
Our hybrid arrangement, based in Las Condes, combines the flexibility of remote work with in-person collaboration, enabling a better balance and a more dynamic workday.
You will take part in innovative projects with high-profile clients across diverse sectors, in an environment that fosters inclusion, respect, and technical and professional development.

$$$ Full time
Software Engineer
  • itD Tech
  • Arizona
software design python training
itD is seeking a Software Engineer to design and scale the data pipelines that power next-generation foundation models for machine-generated data, including time series, logs, and large-scale event streams. This role contributes directly to the success of model training and production systems by enabling reliable, high-performance data infrastructure at scale. The ideal candidate will bring deep experience in distributed systems and data engineering, along with a proven track record of delivering scalable, production-ready data pipelines that support machine learning workflows.

Location: Remote (U.S.-based; time zone alignment with Pacific or Central preferred)

We provide comprehensive medical benefits, a 401(k) plan, paid holidays, and more. Please note that we are only considering direct W2 candidates at this time, as we are unable to offer sponsorship.

Responsibilities:

  • Build and scale distributed data pipelines for large-scale time series, log data, and high-volume event streams.
  • Design and maintain reliable, high-performance Spark and Python workflows to support model training datasets.
  • Analyze and resolve performance bottlenecks related to latency, memory utilization, data skew, and throughput.
  • Improve data quality, validation processes, and reproducibility for machine learning workloads.
  • Partner with machine learning engineers and researchers to

Gross salary $4000 - 6350 Full time
Python SQL ETL Streaming

Ruzora is hiring a Senior Data Engineer to join our partner companies building modern data infrastructure for AI-native U.S. startups. You will design and build the data pipelines, warehouses, and analytics layer that power business intelligence and machine learning workflows.

This role is 100% remote. Candidates work from anywhere in LATAM. There is no office, no relocation, no travel expected. Ruzora is a fully distributed company with no physical office — applicants do not need to be located in or near any specific city.


Job functions

  • Design and build scalable ETL/ELT pipelines using modern tools (Airflow, dbt, Dagster).
  • Architect data warehouses on Snowflake, BigQuery, or Redshift.
  • Build data models for analytics, ML features, and product dashboards.
  • Implement data quality, observability, and lineage tooling.
  • Optimize query performance and warehouse costs.
  • Collaborate with analytics and ML teams on data contracts and schema design.
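The ETL/ELT pipelines described above follow an extract-transform-load shape regardless of the orchestrator. As a minimal sketch (standard library only; in the role itself a tool like Airflow or dbt would schedule and model these steps, and the `orders` data and `fact_orders` table are hypothetical):

```python
# Minimal ETL sketch: extract rows from a CSV source, transform them,
# and load into a SQLite "warehouse" table. Names and data are illustrative.
import csv
import io
import sqlite3

RAW_CSV = """order_id,amount,currency
1,19.99,USD
2,5.00,usd
3,12.50,USD
"""

def extract(text):
    # Extract: read raw CSV text into dictionaries.
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    # Transform: normalize currency codes and store amounts as integer cents.
    return [
        (int(r["order_id"]), round(float(r["amount"]) * 100), r["currency"].upper())
        for r in rows
    ]

def load(conn, records):
    # Load: idempotent upsert into the warehouse table.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS fact_orders "
        "(order_id INTEGER PRIMARY KEY, amount_cents INTEGER, currency TEXT)"
    )
    conn.executemany("INSERT OR REPLACE INTO fact_orders VALUES (?, ?, ?)", records)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(conn, transform(extract(RAW_CSV)))
total = conn.execute("SELECT SUM(amount_cents) FROM fact_orders").fetchone()[0]
print(total)  # 3749
```

Keeping each stage a pure function makes it easy to wrap the steps as separate orchestrator tasks and to test transforms in isolation.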

Qualifications and requirements

We are looking for a Senior Data Engineer with strong end-to-end ownership of data infrastructure and production-grade pipelines.

  • 5+ years of professional data engineering experience.
  • Strong Python and advanced SQL skills (query optimization, window functions, CTEs).
  • 3+ years with modern data warehouses (Snowflake, BigQuery, or Redshift).
  • Hands-on experience with dbt and orchestration tools (Airflow, Dagster, or Prefect).
  • Familiarity with both streaming (Kafka, Kinesis) and batch processing patterns.
  • Excellent written and verbal English (B2+).
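The "window functions, CTEs" requirement above can be illustrated end to end with Python's built-in `sqlite3` driver (SQLite supports window functions from version 3.25; the `sales` table is a made-up example):

```python
# CTE + window function: per-region running totals over a tiny sales table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (region TEXT, day INTEGER, amount INTEGER);
INSERT INTO sales VALUES
  ('north', 1, 100), ('north', 2, 150),
  ('south', 1, 80),  ('south', 2, 120);
""")

rows = conn.execute("""
WITH ordered AS (               -- CTE: a named, reusable subquery
  SELECT region, day, amount FROM sales
)
SELECT region, day,
       SUM(amount) OVER (PARTITION BY region ORDER BY day) AS running_total
FROM ordered
ORDER BY region, day
""").fetchall()
print(rows)
# [('north', 1, 100), ('north', 2, 250), ('south', 1, 80), ('south', 2, 200)]
```

The same `SUM(...) OVER (PARTITION BY ... ORDER BY ...)` pattern carries over directly to Snowflake, BigQuery, and Redshift.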

Desirable skills

  • Experience with Spark, Databricks, or similar big-data frameworks.
  • Knowledge of feature stores (Feast, Tecton) for ML pipelines.
  • Background with data observability tools (Monte Carlo, Great Expectations).
  • Experience designing event tracking schemas and product analytics pipelines.

Conditions

  • Competitive USD salary ($48,000 - $72,000/year), paid monthly via Deel
  • 100% remote work from anywhere in LATAM
  • Flexible working hours
  • Professional development budget
  • Health insurance stipend
  • Equipment allowance
  • Paid time off

Fully remote You can work from anywhere in the world.
Flexible hours Flexible schedule and freedom for attending family needs or personal errands.
Health coverage Ruzora pays or copays health insurance for employees.
Computer provided Ruzora provides a computer for your work.
$$$ Full time
Software Engineer
  • Ren
  • Remote
software design python training

 

Job Title: Sr Software Engineer

Department: Product Engineering

Position Description:

The Sr Software Engineer will be working with other engineers, architects, and product managers to develop software on our philanthropic solutions software platform. This person must be self-motivated and results-oriented with strong programming skills across modern enterprise software architectures. The Sr Software Engineer is expected to work well in an agile development environment to mentor and develop those around them and build superior products.

 

Duties & Responsibilities:

  • Write and maintain Python scripts for data engineering and machine learning pipelines.
  • Modify database objects using SQL (stored procedures, views, tables, etc.).
  • Write Automated Unit, Integration, and UI-level Tests to increase code quality and lower defect rate.
  • Provide technical guidance and mentorship, offering technical and design feedback through code and peer reviews across the full application stack.
  • Collaborate and pair with other software and data engineers and product professionals to design, implement and test new features and product refinements.
  • Refactor existing code to improve maintainability and quality.
  • Author and present training materials and documentation to other team members and users of the software.
  • Work closely with Product Management and other areas of the business to ensure market needs are met.
  • Work with Architecture team to design and implement new service-based, automated application environment.
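The automated-testing duty above is the standard `unittest` workflow in Python. A small sketch, using a hypothetical helper of the kind a philanthropic-donations platform might contain (the function and its rounding rule are invented for illustration):

```python
# Unit-test sketch: exercise a small, hypothetical allocation helper.
import unittest

def allocate_donation(total_cents, splits):
    """Split a donation across funds by percentage, giving any
    rounding remainder to the first fund so totals always balance."""
    amounts = [total_cents * pct // 100 for pct in splits]
    amounts[0] += total_cents - sum(amounts)
    return amounts

class AllocateDonationTest(unittest.TestCase):
    def test_splits_sum_to_total(self):
        # The invariant worth testing: no cents are lost to rounding.
        self.assertEqual(sum(allocate_donation(1001, [50, 25, 25])), 1001)

    def test_even_split(self):
        self.assertEqual(allocate_donation(100, [50, 50]), [50, 50])

if __name__ == "__main__":
    unittest.main(exit=False)
```

Testing the invariant (amounts always sum to the total) rather than specific outputs is what keeps such tests useful as the implementation is refactored.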


$$$ Full time
Frontend Tech Lead
  • AirDNA
  • Remote
frontend design react training

About AirDNA

We built AirDNA to solve a problem: how do you make smart short-term rental decisions when there’s too much guesswork and not enough good data?


What started in a garage in California in 2015 is now a global team helping thousands of people — from aspiring hosts to major real estate firms — make confident choices about where to invest, what to charge, and how to grow.


Our mission is simple: give people the tools they need to build freedom through short-term rentals. Whether that means buying their first Airbnb or scaling a portfolio, we’re here to help unlock financial independence and growth.


We track 10M+ listings in 120,000 markets, and our platform is trusted by users in over 100 countries. It’s big data, made useful.


In 2023, AirDNA acquired Uplisting, a powerful property management software that helps hosts and operators manage listings across Airbnb, Vrbo, and other platforms. With features like channel management, automated messaging, dynamic pricing, task coordination, and financial reporting, Uplisting expands our mission to support every stage of the short-term rental journey — from investment to operations.


The AirDNA team

We’re a curious, driven, and kind group of humans who genuinely love what we do. Our values — Happy, Hungry, Honest — guide how we show up for our customers and for each other.


Want to see what that looks like in action? You’ll get a feel once you meet us.

We welcome applicants from all backgrounds and encourage you to apply even if you don’t check every box. Passion, potential, and perspective matter here.


The Role

AirDNA is looking for a Frontend Tech Lead to help shape the future of our product experience and technical direction. While this role is full-stack, you will be the technical driver for our frontend guild, pushing forward our React/TypeScript architecture, design systems, and developer experience. You’ll partner with Product, Design, and Engineering leaders to deliver beautiful, performant, and scalable customer-facing applications. As a Tech Lead, you’ll guide technical decisions across squads, mentor engineers, and help set the long-term direction of our frontend practice.



Here's what you'll get to do:
  • Lead frontend technical strategy: Define best practices, champion modern frontend architecture, and drive adoption of component libraries, state management patterns, and performance optimizations.
  • Build customer-facing features: Work as a hands-on engineer in your squad, implementing features with React, TypeScript, Next.js, and associated libraries.
  • Shape the frontend guild: Facilitate guild discussions, align engineers across squads, and promote knowledge-sharing and consistency in our frontend stack.
  • Mentor and grow engineers: Coach junior and mid-level developers, review code, and help engineers build strong frontend skills.
  • Collaborate cross-functionally: Partner with Product Managers, Designers, Data Scientists, and Backend Engineers to deliver features that delight customers.
  • Contribute full-stack when needed: While you’re frontend-leaning, you’ll occasionally dive into backend services (Python, AWS, APIs, Kubernetes) to deliver end-to-end solutions.
  • Drive engineering excellence: Influence tooling, CI/CD, testing, and monitoring strategies that improve developer velocity and reliability.
  • Represent engineering: Serve as a technical leader in planning sessions, roadmap discussions, and cross-team initiatives.


Here's what you'll need to be successful:
  • Experienced: 8+ years of professional software engineering, with at least 5 years of recent experience in React and TypeScript.
  • Frontend expert: You’ve scaled and optimized large-scale SPAs, understand rendering/performance tradeoffs, and care deeply about accessibility and design fidelity.
  • Full-stack capable: You’re comfortable contributing to backend systems (Python/Django/FastAPI, AWS, data pipelines) when the team needs it.
  • Technical leader: You’ve led technical discussions, influenced architecture decisions, and aligned teams toward common engineering standards.
  • Mentor: You enjoy leveling up others, giving thoughtful feedback, and guiding careers.
  • Collaborator: You thrive in cross-functional environments and can translate business goals into technical strategy.
  • Forward-thinking: You stay current on frontend trends, evaluate emerging tools, and bring pragmatic innovation to the team.


Here's what would be nice to have:
  • Experience with design systems and component libraries (e.g., Storybook, Radix, Styled Components).
  • Experience with React Query, Recoil, Redux, or other state/data management approaches.
  • Experience with Google Maps API or other data visualization libraries (D3, Leaflet, Mapbox).
  • Strong background in CI/CD pipelines (GitLab preferred) and containerization (Docker/Kubernetes).
  • Familiarity with headless CMS platforms (Prismic, Contentful).
  • Experience with data-intensive apps, large-scale visualizations, or personalization at scale.


Here's what you can expect from us:
  • Competitive cash compensation and benefits; the salary for this position is $130,000 - $175,000 per year.
  • Colorado Salary Statement: The salary range displayed is specifically for candidates who will work or reside in the state of Colorado if selected for this role. Any offered salary is determined based on internal equity, internal salary ranges, market data/ranges, the applicant's skills and prior relevant experience, and certain degrees and certifications.
Benefits include: 
  • Medical, dental, and vision packages to meet your needs
  • Unlimited vacation policy; take time when you need it 
  • Quarterly team outings 
  • 401K with employer match up to 4%
  • Continuing education stipend
  • Lunch is provided Tuesday to Thursday for those in the Denver office
  • Commuter/RTD benefit for Denver based employees
  • 16 weeks of paid parental leave
  • New MacBooks for employees
  • Pet-friendly!



AirDNA seeks to attract the best-qualified candidates who support the mission, vision and values of the company and those who respect and promote excellence through diversity. We are committed to providing equal employment opportunities (EEO) to all employees and applicants without regard to race, color, creed, religion, sex, age, national origin, citizenship, sexual orientation, gender identity and expression, physical or mental disability, marital, familial or parental status, genetic information, military status, veteran status or any other legally protected classification. The company complies with all applicable state and local laws governing nondiscrimination in employment and prohibits unlawful harassment based on any of the aforementioned protected classes at every location in which the company operates. This applies to all terms, conditions and privileges of employment including but not limited to: hiring, assessments, probation, placement, benefits, promotion, demotion, termination, layoff, recall, transfer, leave of absence, compensation, training and development, social and recreational programs, education assistance and retirement. 


We are committed to making our application process and workplace accessible for individuals with disabilities. Upon request, AirDNA will reasonably accommodate applicants so they can participate in the application process unless doing so would create an undue hardship to AirDNA or a threat to these individuals, others in the workplace or the company as a whole. To request accommodation, please email compliance@airdna.co. Please allow for 24 hours to process your request. 


By applying for the above position, you will confirm that you have reviewed and agreed to our Data Privacy Notice for Applicants.



$$$ Full time
Senior Site Reliability Engineer AI Infrastructure
  • Andromeda Cluster
  • San Francisco
design training technical software

Senior Site Reliability Engineer - AI Infrastructure

Location: Global Remote / San Francisco · Full-Time

About Andromeda

Andromeda Cluster was founded by Nat Friedman and Daniel Gross to give early-stage startups access to the kind of scaled AI infrastructure once reserved only for hyperscalers.

We began with a single managed cluster — but it filled almost instantly. Since then, we’ve been quietly building the systems, network, and orchestration layer that makes the world’s AI infrastructure more accessible.

Today, Andromeda works with leading AI labs, data centers, and cloud providers to deliver compute when and where it’s needed most. Our platform routes training and inference jobs across global supply, unlocking flexibility and efficiency in one of the fastest-growing markets on earth.

Our long-term vision is to build the liquidity layer for global AI compute — a marketplace that moves the infrastructure and workloads powering AGI not dissimilar to the flows of capital in the world’s financial markets.

We are expanding to new frontiers to find the brightest people working in AI infrastructure, research, and engineering.

The Role

This is not a generalist SRE role.

You will design, operate, and debug large-scale GPU infrastructure used for distributed training and inference, working directly with customers pushing the limits of modern AI systems.

We’re looking for engineers who have personally run GPU clusters in production, understand the failure modes of distributed training, and can reason about performance from network fabric → kernel → framework.

What You’ll Own

  • GPU Cluster Architecture: Design and evolve multi-provider, multi-region GPU compute clusters optimized for large-scale training. Make topology-aware scheduling, networking, and storage decisions that directly impact training throughput and cost efficiency.

  • Customer Technical Partnership: Serve as the primary technical point of contact for customers running large-scale training workloads. Onboard, troubleshoot, and optimize, often in real time.

  • Reliability & Performance Engineering: Define SLOs and error budgets that account for the unique failure modes of GPU infrastructure (ECC errors, NVLink degradation, NCCL timeouts). Own capacity planning across heterogeneous GPU fleets optimized for training throughput.

  • Networking & Fabric Health: Ensure the health and performance of high-speed interconnects (InfiniBand, RoCE, NVLink) that underpin distributed training. Diagnose and resolve fabric-level issues that degrade collective operations.

  • Observability: Build deep visibility into GPU utilization, memory pressure, interconnect throughput, training job performance, and hardware health. Go well beyond standard infrastructure metrics.

  • Automation & Tooling: Build production-grade automation for cluster provisioning, GPU health checks, job scheduling, self-healing, and firmware/driver lifecycle management.

  • Incident Leadership: Lead incident response for complex, multi-layer failures spanning hardware, networking, orchestration, and ML frameworks. Drive blameless postmortems and systemic fixes.

What We’re Looking For

  • GPU Systems Expertise: Deep, hands-on experience operating large-scale GPU clusters (NVIDIA A100/H100/B200 or equivalent). You understand GPU memory hierarchies, ECC behavior, thermal throttling, and hardware failure modes from direct experience, not documentation.

  • High-Performance Networking: Production experience with InfiniBand, RoCE, or NVLink fabrics in the context of distributed training. You can diagnose why an all-reduce is slow, identify a degraded link in a fat-tree topology, and reason about congestion control at scale.

  • Distributed Training & ML Frameworks: Working knowledge of how large training jobs actually run — NCCL, CUDA, PyTorch distributed, DeepSpeed, Megatron, FSDP, or similar. You don't need to write the models, but you need to understand what's happening at the systems level when a 1,000-GPU training run stalls.

  • Linux & Systems Internals: Expert-level Linux knowledge: kernel tuning, driver management (NVIDIA drivers, CUDA toolkit), cgroup/namespace internals, performance profiling at the syscall and hardware level.

  • Kubernetes & Orchestration: Strong experience running Kubernetes in production with GPU workloads, including device plugins, topology-aware scheduling, multi-cluster federation, and custom operators. Experience with Slurm or other HPC schedulers is equally valued.

  • Automation & Software Engineering: Strong engineering skills in Python, Go, or Bash. You build production-grade tools and services, not just scripts. Infrastructure-as-Code proficiency (Terraform, Helm, Ansible, or equivalent).

  • Observability & Monitoring: Hands-on experience building monitoring and alerting for GPU infrastructure, not just Prometheus/Grafana basics, but GPU-specific telemetry (DCGM, nvidia-smi, fabric manager metrics) integrated into actionable dashboards.

  • Incident Management: Proven track record leading incident response for complex distributed systems where the failure could be in hardware, firmware, networking, drivers, orchestration, or application code, and you need to narrow it down fast.
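The automation and GPU-telemetry expectations above often start with something as simple as parsing fleet health data. A minimal sketch, assuming CSV output of the shape produced by an `nvidia-smi --query-gpu=... --format=csv,noheader` style query (the sample text, fields, and thresholds below are illustrative assumptions, not real fleet data):

```python
# Health-check sketch: flag GPUs whose temperature or corrected-ECC count
# exceeds an assumed budget. SAMPLE stands in for captured telemetry with
# columns: gpu index, temperature (C), corrected ECC errors.
SAMPLE = """\
0, 54, 0
1, 91, 0
2, 48, 12
"""

TEMP_LIMIT_C = 85   # thermal-throttling guard (assumed threshold)
ECC_LIMIT = 5       # corrected-ECC-error budget (assumed threshold)

def unhealthy_gpus(csv_text):
    bad = []
    for line in csv_text.strip().splitlines():
        index, temp, ecc = (int(field.strip()) for field in line.split(","))
        if temp > TEMP_LIMIT_C or ecc > ECC_LIMIT:
            bad.append(index)
    return bad

print(unhealthy_gpus(SAMPLE))  # [1, 2]
```

In production the parser would feed a draining/cordoning workflow and export the same counters to dashboards rather than printing them.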

Strong Candidates May Have

  • Distributed Storage: Experience with high-performance parallel file systems (VAST, Weka, Lustre, GPFS) and the checkpoint I/O and data-loading bottlenecks that come with large training runs.

  • Training Optimization: Experience profiling and optimizing distributed training performance: identifying stragglers, tuning collective communication strategies, improving MFU (Model FLOPs Utilization), and reducing idle GPU time across large runs.

  • Cluster Buildout & Hardware: Hands-on involvement in physical cluster design: rack layout, power/cooling constraints, network topology design, and hardware validation/burn-in at scale.

  • Team Leadership: Experience leading or mentoring a team of infrastructure engineers. We're growing and need people who raise the bar for everyone around them.

Why You’ll Love It Here

This is a high-impact, senior builder’s role. You’ll have significant ownership and autonomy to shape how our systems run at a foundational level, working directly with customers and providers while architecting the infrastructure backbone for reliable, scalable AI compute. You’ll influence technical direction and help define what world-class AI infrastructure operations look like.

Andromeda Cluster is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.



$$$ Full time
Data Engineer
  • Assetplan
  • Santiago (Hybrid)
Python Excel SQL ETL

Assetplan is a leading residential rental company with a presence in Chile and Peru, managing more than 40,000 properties and operating more than 90 multifamily buildings. The data team plays a key role in optimizing and steering internal processes through data analysis and visualization solutions, supporting strategic decision-making across the company. This role focuses on designing, developing, and optimizing ETL processes, creating value through reliable, well-governed data.

In this context, the professional will join a multidisciplinary team to turn business needs into scalable data solutions that drive operational efficiency and information quality. The goal is to promote data governance, deliver useful dashboards, and enable informed decisions across the organization.


  • Design, develop, and optimize ETL (Extract, Transform, Load) processes using Python (Pandas, NumPy) and SQL to ingest and transform data from diverse sources.
  • Develop and maintain dashboards and panels in Power BI, integrating strategic visualizations that complement the ETL processes and provide relevant insights to business areas.
  • Work collaboratively with different areas to interpret needs and translate them into data solutions that support strategic decision-making.
  • Promote data quality, scalability, and governance throughout the design, development, and maintenance of pipelines, ensuring robust, accessible solutions.
  • Communicate effectively with business and technology teams, aligning solutions with corporate objectives and generating measurable impact in the organization.
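A typical transform step in pipelines like these is aggregating operational records for a dashboard. A minimal sketch using only the standard library so it runs anywhere (in the role itself this would usually be Pandas/NumPy; the `building`/`rent` fields and values are hypothetical):

```python
# Aggregate average rent per building from raw string records,
# the kind of transform that would feed a Power BI dataset.
from collections import defaultdict

raw = [
    {"building": "A", "unit": "101", "rent": "350000"},
    {"building": "A", "unit": "102", "rent": "420000"},
    {"building": "B", "unit": "201", "rent": "380000"},
]

def avg_rent_by_building(rows):
    totals = defaultdict(lambda: [0, 0])  # building -> [sum, count]
    for r in rows:
        t = totals[r["building"]]
        t[0] += int(r["rent"])   # cast: raw extracts often arrive as strings
        t[1] += 1
    return {b: s // n for b, (s, n) in totals.items()}

print(avg_rent_by_building(raw))  # {'A': 385000, 'B': 380000}
```

The Pandas equivalent would be a one-line `groupby("building")["rent"].mean()` after casting the column to a numeric dtype.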

Requirements and profile

We are looking for professionals with 1 to 3 years of experience in data roles, whether engineering or analysis, involving data manipulation and transformation. We value intermediate SQL, intermediate/advanced Python with Pandas and NumPy experience, and Power BI for developing and maintaining dashboards. Advanced Excel is required for analysis and information processing. Experience in agile environments and collaborative development methodologies eases integration between technical and business teams. Knowledge of other data visualization and processing tools is valued, as is experience in data governance and quality to strengthen Assetplan's information ecosystem.
Competencies: analytical ability, attention to detail, good communication, proactivity, and results orientation. Ability to work in a dynamic environment and collaborate with different areas of the organization to translate requirements into concrete solutions.

Desirable knowledge and skills

We value knowledge of agile project management methodologies, effective communication skills with multidisciplinary teams, and experience with additional data visualization and processing tools. Experience with data governance and quality best practices is a plus for strengthening Assetplan's information ecosystem.

Benefits

At Assetplan we value and recognize our employees' effort and dedication, offering a positive work environment based on mutual respect and collaboration. Our benefits include:
  • Extra vacation days for seniority
  • Hybrid work arrangement and flexibility for personal errands
  • Monthly allowance in an office snacks app
  • Half a day off on your birthday
  • Copay on supplementary health insurance
  • Annual salary adjustment based on CPI
  • Annual bonus based on company results
  • Company events and happy hours
  • Access to a training course platform
  • Gym partnerships, discounts, and more

Flexible hours Flexible schedule and freedom for attending family needs or personal errands.
Partially remote You can work from your home some days a week.
Health coverage Assetplan pays or copays health insurance for employees.
Computer provided Assetplan provides a computer for your work.
Informal dress code No dress code is enforced.
Vacation over legal Assetplan gives you paid vacations over the legal minimum.
Beverages and snacks Assetplan offers beverages and snacks for free consumption.
Gross salary $1500 - 2000 Full time
Data Scientist
  • Artefact LatAm
Python Git Data Analysis SQL

We are Artefact, a leading global consultancy in creating value through data and AI technologies. We turn data into business impact across organizations' entire value chains, working with clients of all sizes, industries, and countries. We are proud to be enjoying significant growth in the region, which is why we want you to join our team of highly skilled professionals to tackle complex problems for our clients.

Our culture is highly collaborative, with an environment of constant learning, where we believe innovation and solutions come from every member of the team. This drives us to action and to produce high-quality, scalable deliverables.


Your responsibilities will be:

  • Collect, clean, and organize large volumes of data from diverse sources such as databases, flat files, and APIs, applying exploratory analysis techniques to identify patterns, summarize the data's main characteristics, and understand the client's problem.
  • Develop predictive models using advanced machine learning and statistical techniques to predict trends, identify patterns, and make accurate forecasts.
  • Optimize existing algorithms and models to improve accuracy, efficiency, and scalability, tuning parameters and exploring new techniques.
  • Create clear, meaningful visualizations to communicate findings and results to the client effectively.
  • Communicate results effectively, telling a story that helps the client understand the findings and make decisions.
  • Design and develop custom analytical tools and data-driven decision-support systems, using programming languages such as Python, R, or SQL.
  • Collaborative work: partner with multidisciplinary teams to tackle complex problems and provide comprehensive solutions to the client, and take part in projects of varying complexity, ensuring deliverable quality and meeting established deadlines.
  • Research and stay up to date on data analysis, artificial intelligence, and methodologies to improve analytical capabilities, quickly acquiring knowledge about specific industries and tools.

The role's requirements are:

  • Demonstrable knowledge of advanced analytics, whether through studies or work experience.
  • Command of Python, SQL, and Git.
  • Knowledge of relational databases.
  • Knowledge of:
    • Data processing (ETL)
    • Machine Learning
    • Feature engineering, dimensionality reduction
    • Statistics and advanced analytics
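A small sketch tying the feature engineering and statistics requirements above together: z-score standardization, a common preprocessing step before dimensionality reduction or model training. Standard library only for portability; in practice this would typically be scikit-learn's StandardScaler or NumPy:

```python
# Standardize a feature to zero mean and unit variance (z-scores).
from statistics import mean, pstdev

def standardize(values):
    mu, sigma = mean(values), pstdev(values)
    return [(v - mu) / sigma for v in values]

feature = [10.0, 12.0, 14.0, 16.0, 18.0]
z = standardize(feature)
# After standardization the feature has mean ~0 and (population) std ~1.
print(round(mean(z), 10), round(pstdev(z), 10))
```

Standardizing puts features on a comparable scale, which matters for distance-based models and for techniques like PCA, where unscaled features with large variance would otherwise dominate the components.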

Nice to have (not required):

Experience with:

  • BI tools (Power BI or Tableau)
  • Cloud services (Azure, AWS, GCP)
  • Knowledge of non-relational databases (e.g., MongoDB)
  • Knowledge of optimization

Some of our benefits:

  • A budget of USD 500 per year for training, whether courses, memberships, events, or other.
  • Fast professional growth: a mentoring plan for training and career advancement, with raise and promotion review cycles every 6 months.
  • Up to 11 vacation days beyond the legal minimum, to rest and maintain a healthy work-life balance.
  • Participation in the company profit-sharing bonus, plus bonuses for employee referrals and for clients.
  • Half a day off on your birthday, plus a small gift.
  • Biweekly paid lunches with the team at our hubs (Santiago, Bogotá, Lima, and Mexico City).
  • Flexible hours and goal-oriented work.
  • Remote work, with the option to go hybrid (office in Santiago de Chile; paid coworking in Bogotá, Lima, and Mexico City).
  • Extended paternity leave for men, and coverage of the pay difference from the health system for women (Chile).

...y más!

Fully remote You can work from anywhere in the world.
Flexible hours Flexible schedule and freedom for attending family needs or personal errands.
Meals provided Artefact LatAm provides free lunch and/or other kinds of meals.
Paid sick days Sick leave is compensated (limits might apply).
Bicycle parking You can park your bicycle for free inside the premises.
Digital library Access to digital books or subscriptions.
Company retreats Team-building activities outside the premises.
Computer repairs Artefact LatAm covers some computer repair expenses.
Computer provided Artefact LatAm provides a computer for your work.
Performance bonus Extra compensation is offered upon meeting performance goals.
Informal dress code No dress code is enforced.
Vacation over legal Artefact LatAm gives you paid vacations over the legal minimum.
Beverages and snacks Artefact LatAm offers beverages and snacks for free consumption.
Parental leave over legal Artefact LatAm offers paid parental leave over the legal minimum.
Gross salary $2500 - 3000 Full time
Data Specialist
  • Coderslab.io
JavaScript Python SQL Power BI

Coderslab.io is looking to hire a Data Specialist

About the client and the project: the company delivers innovative technology solutions and offers continuous learning opportunities, guided by experienced professionals and cutting-edge technologies. The goal is to deliver value in key business processes and improve operational efficiency through SAP.

Responsibilities

  • Develop and maintain analytical queries and materialized views in ClickHouse Cloud
  • Build interactive Power BI dashboards connected to ClickHouse, SAP Business One, Retail Pro, and other enterprise sources
  • Support our AWS data lake (S3, Apache Iceberg, medallion pattern) with Bronze→Silver→Gold transformation workflows using AWS Glue
  • Write Python and JavaScript scripts for data extraction, transformation, and automation
  • Collaborate with Data and Integration Leads to maintain data quality and governance
  • Help build executive-requested reports (inventory coverage, store profitability, and more)
  • Troubleshoot pipeline issues and perform root-cause analysis on data discrepancies
  • Document data models, transformation logic, and reporting specs
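
The Bronze→Silver→Gold medallion flow mentioned above can be sketched minimally. This is a hypothetical illustration, not the client's actual schema or Glue job: in production the same logic would run as AWS Glue jobs over Iceberg tables on S3, while here plain Python stands in for the transformation steps.

```python
# Medallion-pattern sketch: Bronze (raw) -> Silver (cleaned) -> Gold (aggregated).
# All column names and rules are illustrative assumptions.

bronze = [  # raw records as ingested, kept untouched
    {"store": "S1", "sku": "A", "qty": "3", "price": "10.0"},
    {"store": "S1", "sku": "A", "qty": "2", "price": "10.0"},
    {"store": "S2", "sku": "B", "qty": "bad", "price": "5.0"},  # malformed row
]

def to_silver(rows):
    """Clean and type-cast; drop rows that fail validation."""
    out = []
    for r in rows:
        try:
            out.append({"store": r["store"], "sku": r["sku"],
                        "qty": int(r["qty"]), "price": float(r["price"])})
        except (ValueError, KeyError):
            continue  # a real pipeline would quarantine and log the row
    return out

def to_gold(rows):
    """Aggregate revenue per store for reporting (e.g. store profitability)."""
    gold = {}
    for r in rows:
        gold[r["store"]] = gold.get(r["store"], 0.0) + r["qty"] * r["price"]
    return gold

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # {'S1': 50.0} -- the malformed S2 row never reaches Gold
```

The key property of the pattern is that each layer is derived from the previous one, so a bad record can be dropped at Silver without losing the raw evidence in Bronze.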

Requirements

  • Strong SQL skills — experience with ClickHouse or other columnar/OLAP databases is a plus
  • Power BI experience: dashboards, DAX measures, data modeling, scheduled refresh
  • Working knowledge of Python (pandas, numpy) for data manipulation and scripting
  • Familiarity with JavaScript / Node.js for lightweight utilities or integration connectors
  • Basic understanding of AWS services (S3, Glue, Lambda) is a plus
  • 1–3 years in data analytics, data engineering, or BI roles
  • Degree in Computer Science, Data Engineering, Information Systems, or related field
  • Effective communication in English and Spanish

Nice to have

  • Experience with Apache Iceberg or data lake architectures
  • Exposure to SAP Business One or Retail Pro data structures
  • Knowledge of Git and version control practices
  • Background in retail, supply chain, or multinational environments

Tech stack

ClickHouse Cloud, Power BI, Python, JavaScript / Node.js, AWS S3, AWS Glue, Apache Iceberg, SQL, SAP Business One, Retail Pro

Conditions

Contractor engagement
Remote
Salary in USD

Gross salary $3800 - 4000 Full time
Full-stack Automation Prompt Engineering API Integration

Nine-67 is building a fast-moving AI capability for enterprise clients. This role sits at the intersection of product, data, and execution, directly partnering with the CEO to design, build, and deploy AI-driven applications in real client environments. You will contribute to shaping a scalable, high-quality AI platform by delivering end-to-end solutions that combine frontend, backend, and data workflows in rapid iterations.

As a key player in a fast-build environment, you’ll help transform ambiguous business problems into working systems, create internal tools and automation, and integrate with client systems and data sources to drive real business value.

What You’ll Do

• Build and deploy AI-driven applications end-to-end (frontend, backend, data workflows) with speed and quality.
• Translate business problems into functioning AI systems with minimal direction.
• Collaborate directly with leadership and clients to iterate on real use cases.
• Develop internal tools, agents, and automation to boost efficiency.
• Integrate with APIs, data sources, CRM systems, data warehouses, and client environments.
• Continuously improve speed, reliability, and reusability of what we build.

What We’re Looking For

• Strong builder mindset—ship fast and learn by doing.
• Experience with AI tools and frameworks (LLMs, APIs, prompt systems, agents).
• Comfort across the stack; you don’t need to be perfect, but you can figure it out.
• Ability to work in ambiguity without waiting for detailed specs.
• Strong problem-solving and product intuition.
• High ownership and accountability.

Nice to Have

• Experience with Cursor, Vercel, Supabase, or similar modern stacks.
• Experience building internal tools or client-facing applications.
• Exposure to data pipelines, analytics, or CRM systems.
• Prior startup or consulting experience.

Why This Role

• Direct collaboration with leadership on high-impact projects.
• Build real systems used by enterprise clients.
• Opportunity to shape and scale AI capability from the ground up.

Fully remote You can work from anywhere in the world.
$100000 - $150000 Full time
Principal Software Engineer
  • Recorded Future
  • Boston, MA
software design architect technical
With 1,000+ intelligence professionals serving over 1,900 clients worldwide, Recorded Future is the world’s most advanced, and largest, intelligence company! We’re looking for a Principal Software Engineer to help design, build, and scale the systems that power our Attack Surface Intelligence module. You’ll be taking ownership of critical data pipelines responsible for the ingestion and distribution of critical intelligence signals, both internally and directly to customers via the product.

The Attack Surface Intelligence Data Engineering team is responsible for two key datasets: our holistic global internet inventory and the technical artifacts of our customers’ attack surface. This role reports directly to the Engineering Owner for Attack Surface Intelligence Data and is ideal for someone who enjoys writing clean, maintainable code and thrives in distributed systems environments. You'll work closely with product management and other engineering teams to drive technical strategy and ensure our systems are reliable, performant, and insightful.

What You’ll Do:

  • Lead the design and implementation of backend services and APIs in Python.
  • Architect and evolve microservice-based systems for scalability and resilience.

Please mention the word **FORTUNATE** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$$$ Full time
Director Data Engineering
  • Revinate
  • Atlanta, GA
director design hr security

Revinate is one of the largest and most innovative providers of direct revenue-generating solutions in the hospitality industry. Revinate's mission is to deliver hoteliers scalable direct revenue and profits from data-driven solutions that cultivate deeper relationships with guests. Revinate’s Direct Booking Platform helps capture, convert and retain guests with strategies and services that maximize direct booking revenue. This combination maximizes the lifetime value of each guest through personalized and targeted campaigns across the guest journey. Revinate Marketing has won 1st place for Hotel CRM & Email Marketing in the HotelTechAwards five years in a row!


About Us


Revinate is an innovative hospitality tech company that is revolutionizing how customers manage their operations and enhance the guest experience. Our solutions leverage advanced technology, data analytics, and automation to improve efficiency and drive customer happiness in the hospitality industry.  


The Opportunity


We are seeking an experienced and visionary Director, Data Engineering to lead our Data Platform initiatives. In this critical role, you will be responsible for defining the strategy, architecture, and execution of our end-to-end data ecosystem, encompassing data ingestion pipeline, operational data stores, our evolving data lakehouse, and robust data APIs. You will build and lead a high-performing team of data engineers, fostering a culture of innovation, collaboration, and operational excellence. This role requires not only deep technical expertise but also a strong understanding of how data can drive business value, including leveraging data science and machine learning to optimize our operations.


Key Responsibilities


Strategic Leadership: Define and execute the long-term vision and roadmap for our data platform, aligning with overall business objectives and technology strategy.


Team Leadership & Development: Recruit, mentor, and lead a talented team of data engineers, fostering their growth and ensuring best practices in data engineering.


Data Pipeline: Oversee the design, development, and maintenance of scalable and reliable real time data ingestion pipeline, ensuring data quality, accuracy, and timely delivery.


Operational Data Stores: Lead the architecture and management of our operational data stores, optimizing for performance, reliability, and accessibility to support critical business applications.


Data Lakehouse Development: Drive the strategic evolution and implementation of our data lakehouse, enabling unified data access, advanced analytics, and machine learning initiatives.


Data API Development: Champion the design and development of secure, performant, and well-documented data APIs to facilitate data consumption across various applications and user groups.


Data Governance & Quality: Enforce data governance policies, standards, and procedures to ensure data integrity, security, privacy, and compliance.


Operational Efficiency through Data Science/ML: Collaborate closely with data science and analytics teams to identify opportunities where data science and machine learning can be applied to optimize internal operations, automate processes, and improve efficiency within the data platform itself (e.g., predictive maintenance for pipelines, intelligent resource allocation).


Performance & Scalability: Ensure the data platform is highly performant, scalable, and resilient, capable of handling growing data volumes and complex analytical workloads.


Technology Evaluation: Evaluate and recommend new data technologies, tools, and platforms to enhance our data capabilities and stay ahead of industry trends.


Cross-Functional Collaboration: Partner effectively with engineering, product, analytics, data science, and business teams to understand data requirements and deliver impactful solutions.


Monitoring & Support: Establish robust monitoring, alerting, and on-call support processes for all data systems, ensuring high availability and rapid issue resolution.

What You’ll Bring
  • 10+ years of experience in data engineering roles, with at least 5 years in a leadership or management position overseeing data engineering teams.
  • Proven track record of building and scaling complex data platforms from the ground up, or significantly evolving existing ones.

Deep expertise in designing, building, and operating:
  • Data Ingestion Pipelines: (e.g., Kafka, Flink, Spark Streaming, Airflow, equivalent cloud services like Kinesis, Pub/Sub, Dataflow)
  • Operational Data Stores: (e.g., Cassandra, ScyllaDB, DynamoDB, PostgreSQL, MySQL)
  • Data Warehousing/Lakehouse Technologies: (e.g., AWS, GCP, S3, Iceberg, Redshift, BigQuery)
  • Data APIs & Services: (e.g., RESTful APIs, GraphQL)

  • Strong proficiency in Java / Scala.
  • Extensive experience with cloud data platforms (AWS, GCP) and their respective data services.
  • Solid understanding of data modeling techniques (relational, dimensional, NoSQL).
  • Literacy in data science and machine learning concepts: familiarity with common ML algorithms and their applications.
  • Understanding of the MLOps lifecycle and data requirements for ML models.
  • Ability to identify and articulate how data science/ML can be used to improve data platform operations (e.g., anomaly detection in pipelines, resource optimization).
  • Experience with implementing data governance, data quality, and metadata management tools and practices.
  • Excellent communication, interpersonal, and presentation skills, with the ability to articulate complex technical concepts to both technical and non-technical audiences.
  • Strong analytical and problem-solving abilities, with a focus on delivering practical and scalable solutions.
  • Bachelor's or Master's degree in Computer Science, Data Engineering, or a related quantitative field.


Benefits
  • Health insurance-employee premium paid 100% by Revinate
  • Dental insurance-employee and dependents’ premium paid 100% by Revinate
  • Vision insurance-employee and dependents’ premium paid 100% by Revinate
  • 401(k) with employer match
  • Short & Long Term Disability insurance
  • Life insurance
  • Paid Flex time off
  • Monthly work from home stipend
  • Telehealth access
  • Employee Assistance Program (EAP)


$190,000 - $200,000 a year
The compensation package for the Director, Data Engineering includes a base salary and a performance-based bonus.

This salary range may be inclusive of several career levels at Revinate and will be narrowed during the interview process based on a number of factors, including (but not limited to) the candidate’s experience, qualifications and location. 

Interview Process 

We're excited you're considering a career with Revinate! Our goal is to ensure this is the right opportunity for you, while also determining if you're the right fit for our team. The interview process for this role is designed to be a two-way street, where you'll get to know us just as we get to know you.


 - Recruiter Screen - 30 min

 - Technical Interview - 60 min

 - Cross Functional Interview - 30 min

 - Final Interview - 30 min 




Revinate values the flexibility of a remote workforce and the benefits of localized hiring. We focus on specific cities to foster local communities and enhance team cohesion, allowing employees to collaborate, attend local events, and build a strong sense of community and company culture.

Candidates must be located in the city listed in the job application. Thank you!


Revinate is not open to third party solicitation or resumes for our posted FTE positions. Resumes received from third party agencies that are unsolicited will be considered complimentary.



Important Security Alert

We have been made aware of fraudulent activities involving individuals impersonating our HR team and offering fake job opportunities. Please be vigilant and ensure your safety by verifying all job offers.


For Authentic Opportunities: Only refer to our official careers page on our company website. Your security is our priority. If you encounter any suspicious activity, please report it immediately. Stay safe and secure! You can confirm or inquire with any questions by reaching out to recruiting@revinate.com





AI and Hiring 

Please note that interviews at Revinate will be recorded using brighthire.ai as we continue to build more structure into our interview processes -- the best way to eliminate unconscious bias! We encourage our interviewers to focus on our candidates and the conversation rather than on taking notes; brighthire.ai does the note-taking for us. If you’re uncomfortable with recording your interview, please let us know and we’ll opt you out.


Excited?!  Want to learn more? Apply Now!

Our Core Values:

One Revinate - United & Strong, on a single mission together

Built on Trust - It’s the foundation of everything we do

Expect Amazing - We think, dream & deliver big

Customer Love -- When the customer wins, we win

Make it Simpler -- Apply it to everything we do

Hungerness -- Feel it, follow it, be relentless about our success

Grounded in Gratitude - We’re glad to be here & make the most of every day


Revinate Inc. provides Equal Employment Opportunity to all employees and applicants for employment without regard to race, color, religion, gender identity or expression, sex, sexual orientation, national origin, age, disability, genetic information, marital status, amnesty, or status as a covered veteran in accordance with applicable federal, state and local laws. Revinate complies with applicable state and local laws governing non-discrimination in employment in every location in which the company has facilities. 




If you are in need of accommodation or special assistance to navigate our website or to complete your application, please send an e-mail with your request to recruiting@revinate.com.


By submitting your application you acknowledge that you have read Revinate's Privacy Policy (https://www.revinate.com/privacy/)




Please mention the word **HONORABLE** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$$$ Full time
CFO
  • Marathon Talent
  • Remote
cfo support software accounting

Offshore CFO (Multifamily Real Estate) — Job Description

Overview

We are hiring a CFO to lead the finance and accounting function for a U.S.-based multifamily owner/operator. This role owns financial statements, monthly close, cash management, budgeting/forecasting, reporting, and controls across multiple properties and entities. The right candidate is tech-forward and excited to modernize finance through automation, AI, and API-driven integrations.

Key Responsibilities

• Monthly close & financial statements: Own timely, accurate close and delivery of P&L, balance sheet, and cash flow with supporting schedules.

• Reconciliations & controls: Ensure complete bank/GL reconciliations, AR/AP tie-outs, accruals, prepaids, CIP/fixed assets, intercompany, and documented processes.

• Management reporting: Produce property/portfolio reporting including budget vs. actual, variance explanations, and key operating KPIs.

• Cash management: Maintain daily cash visibility and a rolling 13-week cash forecast; manage payment cadence, approvals, reserves, and liquidity planning.

• Budgeting & forecasting: Lead annual budgets and reforecasts (revenue, payroll, utilities, repairs, insurance, taxes, CapEx).

• CapEx / renovation tracking: Track project budgets, spend, and ROI support (CIP and unit-level economics as applicable).

• Lender / compliance support: Manage covenant reporting, lender deliverables, and coordination with CPAs/tax/audit teams.

• Section 8 / Housing Authority & municipal compliance: Support affordable housing reporting and compliance (as applicable), including coordination with Housing Authorities/cities, audits, and required documentation.

• Team leadership: Lead and develop offshore accounting staff (AP/AR/accountants); set SOPs, close calendar, and review standards.

• Tech/automation leadership: Implement and optimize workflows using AI tools, automation, and API connections across property management, accounting, reporting, and data pipelines.

Requirements (Must-Have)

• Minimum 8+ years of experience as a CFO (or senior finance leader) in real estate; multifamily strongly preferred.

• Expert in financial statements, close management, reconciliations, cash forecasting, and internal controls.

• Strong ability to deliver decision-ready reporting (budget vs. actual, variance analysis, KPIs).

• Bilingual proficiency: fluent professional English and Spanish (written and spoken).

• Property management software experience; ResMan preferred.

• Expense management software experience with Brex or Ramp; Brex preferred.

• Experience working with Section 8 programs, Housing Authorities, and municipal/city requirements (as applicable), including compliance reporting and audit support.

• Strong understanding of real estate legal entities and structures (LLCs/LPs/SPVs), intercompany accounting, and entity-level reporting.

• Tech-forward mindset: comfortable implementing automation/AI and working with APIs/integrations (no coding required, but must be fluent with modern tools).

• Advanced Excel/Google Sheets skills; comfortable building standardized reporting templates and dashboards.

• Ability to work offshore with consistent overlap with U.S. business hours and days (ET/CT preferred).

Preferred

• Multi-entity consolidation, lender compliance/covenants, and renovation-heavy portfolios.

• Experience with BI/reporting tools (Power BI/Tableau) and modern AP/bill pay tools.

Working Model

• Remote / Offshore (LATAM preferred for timezone overlap)

• Reports to Ownership/CEO/Managing Partner; partners closely with Operations and Asset Management



Please mention the word **COMPLIANT** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$$$ Full time
Sr Data Engineer – CRM Customer Service
  • BC Tecnología
  • Santiago (Hybrid)
Python SQL ETL Spark
BC Tecnología is an IT consultancy with experience in IT services, outsourcing, and professional recruitment. We specialize in building agile teams for Infrastructure, Software Development, and Business Units, with clients in financial services, insurance, retail, and government. We are looking to add a Senior Data Engineer with a strong focus on CRM and data migration for CRM Customer Service projects, among other high-profile clients. The role is part of data modernization, cloud migration, and data governance initiatives for a program focused on customer experience solutions.

Main responsibilities

  • Design and develop ETL/ELT pipelines for data integration and migration.
  • Execute data migrations from legacy systems to cloud platforms and Dynamics 365.
  • Ensure data integrity, quality, and availability through validations and reconciliations.
  • Collaborate with the Technical Lead on the program's data architecture.
  • Document data models, pipelines, and migration processes.
  • Participate in agile ceremonies and report progress on the data workstream.
  • Collaborate with QA on end-to-end data validation.
  • Transfer data knowledge to the rest of the team.
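
The validation-and-reconciliation step above can be sketched as a minimal source-vs-target comparison. This is a hypothetical illustration (the key and field names are assumptions, not the actual legacy or Dynamics 365 schema): after a migration batch, every source row should exist in the target unchanged.

```python
# Reconciliation sketch for a data migration: compare a legacy extract
# against the target load by key, reporting missing rows and field-level
# mismatches. Names and records here are illustrative.

def reconcile(source_rows, target_rows, key="id"):
    src = {r[key]: r for r in source_rows}
    tgt = {r[key]: r for r in target_rows}
    missing = sorted(set(src) - set(tgt))       # never loaded into the target
    mismatched = sorted(
        k for k in set(src) & set(tgt) if src[k] != tgt[k]
    )                                           # loaded, but fields were altered
    return {"missing_in_target": missing, "mismatched": mismatched}

legacy = [{"id": 1, "email": "a@x.com"}, {"id": 2, "email": "b@x.com"}]
dynamics = [{"id": 1, "email": "a@x.com"}]

report = reconcile(legacy, dynamics)
print(report)  # {'missing_in_target': [2], 'mismatched': []}
```

In practice the same comparison would be driven by row counts and checksums per table before falling back to row-by-row diffs, but the contract is the same: an empty report means the migration batch is reconciled.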

Description

We require a professional with at least 4 years of experience in data engineering, preferably in CRM and retail environments. The candidate will be responsible for designing and implementing pipelines for data extraction, transformation, and loading, as well as managing complex migrations from legacy systems to cloud environments and Microsoft Dynamics 365 Dataverse. They will join a collaborative technical team, participating in the definition of the data architecture, quality assurance, and continuous delivery through CI/CD practices applied to data. Experience with AWS (S3, Glue, Athena, Redshift, Lambda, Step Functions), Airflow or Step Functions for orchestration, Python and Spark/PySpark, advanced SQL, dimensional and relational modeling, and knowledge of Dynamics 365 will be valued.
We are looking for proactivity, results orientation, and communication skills to work in an agile, cross-functional environment focused on delivering business value, with a culture of continuous improvement.

Desirable requirements

Experience migrating data between ERP/CRM systems and cloud platforms; familiarity with data governance, reconciliations, and end-to-end data validation; experience working with business teams and stakeholders; AWS or Data & Cloud certifications; the ability to produce clear, well-organized documentation; knowledge of Microsoft Dynamics 365 Dataverse. Experience in retail and CRM services, and the ability to work in regulated environments, will be valued.

Benefits

At BC Tecnología we foster a collaborative work environment that values commitment and constant learning. Our culture is oriented toward professional growth through integration and knowledge sharing across teams.
Our hybrid model, based in Las Condes, combines the flexibility of remote work with in-person collaboration, enabling better balance and a more dynamic workday.
You will take part in innovative projects with high-profile clients across diverse sectors, in an environment that promotes inclusion, respect, and technical and professional development.

Gross salary $2100 - 2500 Full time
Data Engineer (ETL and Data)
  • Equifax Chile
  • Santiago (Hybrid)
Java Python Scala ETL
At Equifax Chile we turn data into opportunities. As part of a global data, analytics, and technology company, we help financial institutions, employers, and government agencies make critical decisions with greater confidence. In this Data Engineer role, we focus on the data lifecycle: from the moment sources enter the company, through solution design and implementation, to final use by internal and external clients. Integrating, modeling, and productionizing ETL processes and analytical attributes is key to enabling reliable, scalable, analytics-ready consumption.

What will you do?

As a Data Engineer, you will be responsible for ongoing analysis of data sources and for designing and implementing the data lifecycle: from the moment data reaches the company to its final use by internal and external clients.
  • Analysis of business requirements.
  • Data and solution design.
  • Implementation and improvement of ETL processes.
You will also work with data modeling and data management technologies, drawing on statistical knowledge and basic modeling knowledge to put analytical models and attributes into production.

What experience do you need?

We are looking for at least 2 years of experience with an ETL tool, for example SSIS, Pentaho, Data Factory, or others. We also require at least 2 years of experience developing ETL in one of the following languages: Java, Scala, or Python.
Additionally, you need at least 2 years of experience with database engines.
We will value knowledge of data modeling and data management technologies, statistics, and basic modeling for putting analytical models and attributes into production.
Day to day, we expect you to be analytical, oriented toward continuous improvement, and focused on delivering reliable solutions for internal and external clients. You will move between business requirements and technical implementation, maintaining clarity in the solution design and in the evolution of the ETL processes.
Additional requirement: intermediate English.

What could set you apart?

  • At least one year of cloud experience (desirable, not required).
  • At least one year of experience with CI/CD tools as a user (not a developer), for example GoCD, Jenkins, Azure DevOps, or others.
  • Experience and judgment to support putting analytical models and attributes into production, considering data best practices and operational continuity.

What do we offer?

We offer a hybrid work model with flexible hours for a healthy work-life balance, plus additional days off to promote well-being. Our comprehensive compensation package includes supplementary health insurance and a gym agreement to encourage a healthy lifestyle. We also have specific benefits for mothers and fathers in the organization. You will have access to an online learning platform for continuous professional development, along with recognition programs that value each team member's contribution, in a diverse, multicultural environment oriented toward career growth.

Wellness program Equifax Chile offers or subsidies mental and/or physical health activities.
Equity offered This position includes equity compensation (in the form of stock options or another mechanism).
Flexible hours Flexible schedule and freedom for attending family needs or personal errands.
Internal talks Equifax Chile offers space for internal talks or presentations during working hours.
Life insurance Equifax Chile pays or copays life insurance for employees.
Paid sick days Sick leave is compensated (limits might apply).
Bicycle parking You can park your bicycle for free inside the premises.
Digital library Access to digital books or subscriptions.
Health coverage Equifax Chile pays or copays health insurance for employees.
Mobile phone provided Equifax Chile provides a mobile phone for work use.
Company retreats Team-building activities outside the premises.
Computer repairs Equifax Chile covers some computer repair expenses.
Dental insurance Equifax Chile pays or copays dental insurance for employees.
Computer provided Equifax Chile provides a computer for your work.
Education stipend Equifax Chile covers some educational expenses related to the position.
Fitness subsidies Equifax Chile offers stipends for sports or fitness programs.
Performance bonus Extra compensation is offered upon meeting performance goals.
Conference stipend Equifax Chile covers tickets and/or some expenses for conferences related to the position.
Informal dress code No dress code is enforced.
Vacation over legal Equifax Chile gives you paid vacations over the legal minimum.
Vacation on birthday Your birthday counts as an extra day of vacation.
Parental leave over legal Equifax Chile offers paid parental leave over the legal minimum.
$50000 - $210000 Full time
javascript react node python

Are you a talented Senior Developer looking for a remote job that lets you show your skills and get decent compensation? Look no further than Lemon.io — the marketplace that connects you with hand-picked startups in the US and Europe.

What we offer:

  • The rate depends on your seniority level, skills and experience. We've already paid out over $11M to our engineers.
  • No more hunting for clients or negotiating rates — let us handle the business side of things so you can focus on what you do best.
  • We'll manually find the best project for you according to your skills and preferences.
  • Choose a schedule that works best for you. You can communicate asynchronously or with only minimal overlap during team working hours.
  • We respect your seniority so you can expect no micromanagement or screen trackers.
  • Communicate directly with the clients. Most of them have technical backgrounds. Sounds good, yeah?
  • We will support you from the time you submit the application throughout all cooperation stages.
  • Most of our projects involve working in a fast-paced startup environment. We hope you like it as much as we do.
  • Through our community, we will connect you with the best developers from more than 71 countries.

We have several open positions for Full-Stack React.js Developers - please see the details below. We also have some backend positions; the full list is included below as well.

Requirements for the Senior React & Python Position:

  • 4+ years of software development experience

Commercial experience:

  • React.js 3+ years and Python 3+ years OR React.js 2+ years and Python 5+ years OR React.js 5+ years and Python 2+ years
  • Experience with AWS, GCP, or Azure is required

Requirements for the Senior Python Position:

  • 5+ years of software development experience
  • 5+ years of commercial experience with Python
  • 3+ years of commercial experience with Flask

Requirements for the Senior Golang & React Position:

  • 4+ years of software development experience
  • React.js 3+ years and Golang 3+ years OR React.js 2+ years and Golang 5+ years OR React.js 5+ years and Golang 2+ years

Requirements for the Senior Golang Position:

  • 5+ years of software development experience
  • 5+ years of commercial experience with Golang

Requirements for the Senior Node & React Position:

  • 5+ years of software development experience

Commercial experience:

  • React.js 3+ years, Node.js 5+ years, and Next.js 2+ years OR React.js 5+ years, Node.js 3+ years, and Next.js 2+ years
  • Expertise in TypeScript, Supabase, and AWS is a must.

Other requirements:

  • Strong technical skills: as a Senior Developer, you are expected to be able to create projects from scratch and have a deep understanding of application architecture.
  • Clear and effective communication in English — advanced ability to discuss business tasks, justify decisions, and communicate issues. Good self-presentation is also essential for upcoming client calls.
  • Strong self-organizational skills — ability to work full-time remotely with no supervision.
  • Reliability — we want to trust you and expect that you won’t let us and the client down.
  • Adaptability and Flexibility — the ability to onboard the project promptly after accepting it and start delivering results quickly.

Sound good to you? Apply now and join the Lemon.io community!

NOT YOUR TECH STACK?

We have multiple projects available for Senior Developers. If you have 4+ years of commercial software development experience and are proficient in any of the following areas: React & Ruby, PHP & Angular, PHP & Vue, Vue & Node.js, React & .NET, Android & iOS, Angular & .NET, Angular & Node.js, Vue & .NET, Python & Vue, MLOps, React & Java, Data Science, Blockchain (Web3/Solidity/Solana), Symfony & React, Symfony & Vue, Symfony & Angular, Symfony & JavaScript & Next.js & TypeScript, Data Analysis, React & PHP, Data Engineering, AI Engineering, Data Annotation, DevOps, Svelte & Python, Svelte & Node, Svelte & TypeScript, Rust, Shopify & JavaScript, Vue & Nuxt, Python & Node, Angular & TypeScript, Ruby & Ruby on Rails, React Native & Ruby, React Native & Python, PHP & Laravel, .NET & C#, Java & Spring, Unreal Engine & C++, Python & LLM, Unity, Machine Learning Engineering — we’d be happy to connect and match you with a suitable project.

If your experience matches our requirements, be ready for the next steps:

  • VideoAsk — watch a short video about our startup, up to 10 minutes
  • Complete your profile on our website
  • 30-minute screening call
  • Technical interview
  • Feedback
  • Magic Box (we are looking for the best project for you).

We do not provide visa assistance, and our cooperation model does not include the benefits typically offered with direct hire.

P.S. We work with developers from 71+ countries in different regions: Europe, LATAM, the U.S. (if you hold a W-9 form), Canada, Asia (Japan, Singapore, South Korea, Philippines, Indonesia), Oceania (Australia, New Zealand, Papua New Guinea), and the UK. However, we have some exceptions.

At the moment, we don’t have a legal basis to accept applicants from the following countries:

  • Europe: Hungary, Iceland, Liechtenstein, Kosovo, Belarus, Russia, and Serbia.
  • Latin America: Cuba and Nicaragua
  • Most Asian countries and Africa.

We expand and shorten the list of exceptions regularly.



$$$ Full time
system frontend full-stack architect

Join Hostinger, and we’ll grow fast! 🚀


We’re shaping the future of online success - powered by AI and driven by people. With 900+ talented professionals and over 4 million clients in 150 countries, we help creators and entrepreneurs bring their ideas to life faster and easier than ever before.


Our mission: To provide tools that help individuals and small businesses succeed online faster and easier.

Our culture: Guided by 10 company principles.

Our formula for success: Customer obsession, innovative products, and talented teams.


Your role at Hostinger


Join Hostinger’s Delivery Automation team as a Senior Full Stack Automation Engineer, where you’ll focus on building scalable internal platforms and tools that supercharge developer productivity, streamline software delivery, and automate complex manual flows across the company.


In this role, you’ll take ownership of designing and automating workflows that reduce friction for engineers and teams across Hostinger. From CI/CD pipelines and deployment automation to system integrations and cross-team process improvements - your work will enable faster delivery, greater efficiency, and a stronger automation-first culture.

Your impact will span Product, Engineering, and beyond: empowering developers with reliable self-service solutions, helping teams eliminate repetitive tasks, and ensuring Hostinger operates at scale with speed and confidence.


You’ll collaborate closely with stakeholders across engineering and other departments to understand their challenges, architect resilient solutions, and ship intuitive tools backed by robust backend systems. You’ll also explore and adopt emerging technologies - including AI - to continuously elevate developer experience and automation capabilities.


Curious to learn more? Connect with your team:

Mantas Gurskis - Automation Team Lead, Asta Dagienė - Head of Delivery



Your day-to-day
  • Analyze stakeholders' workflows, identify automation opportunities, and design, build, and maintain full-stack automation tools that connect and enhance internal marketing, sales, and business systems.
  • Develop user-friendly internal UIs and dashboards for campaign setup, monitoring, and reporting.
  • Work closely with cross-functional teams to understand workflows and identify automation opportunities.
  • Leverage AI where applicable to optimize decision-making and workflow efficiency.
  • Ensure reliability, scalability, and maintainability of automation systems and infrastructure.
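As an illustration of the internal tooling the bullets above describe, here is a minimal sketch of a composable automation pipeline. The role's stack is Node.js/TypeScript; this example uses Python purely for illustration, and the step names and payload shape are hypothetical:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Step:
    """One unit of an internal automation workflow."""
    name: str
    run: Callable[[Dict], Dict]

def run_pipeline(steps: List[Step], payload: Dict) -> Dict:
    # Thread the payload through each step in order, the way a
    # workflow engine (or a Zapier/n8n-style tool) would.
    for step in steps:
        payload = step.run(payload)
    return payload

# Hypothetical steps for a campaign-setup workflow.
steps = [
    Step("validate", lambda p: {**p, "valid": bool(p.get("campaign"))}),
    Step("enrich", lambda p: {**p, "owner": p.get("owner", "marketing")}),
]
result = run_pipeline(steps, {"campaign": "spring-sale"})
```

Keeping each step small and stateless is what makes such tools easy for non-engineering teams to extend.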


Your skills and experience
  • 3+ years of experience as a Full Stack Developer (Node.js, TypeScript preferred) with backend-heavy contributions.
  • Strong understanding of API design, data pipelines, databases, and frontend development (Vue or similar).
  • Experience with business automation platforms (e.g., Zapier, n8n) is a plus.
  • Comfortable working closely with non-engineering teams to build usable, effective tools.
  • Bonus: experience integrating AI/ML tools into automation workflows.
  • You’re proactive, thrive in ambiguity, and enjoy solving problems that unlock leverage for others.


Benefits for you
  • 🚀 360 Growth: We provide limitless learning opportunities: access to platforms like Reforge and Scribd, global conferences, physical and digital libraries, feedback culture, and mentoring through TesoXchange. Advance your career with internal mobility and grow with a team eager to share knowledge and support your success.
  • 🎯 Freedom & responsibility: Work on your terms: from modern offices in Kaunas and Vilnius, the comfort of home, or anywhere in the world. Enjoy flexibility in managing your schedule and bring your ideas to life in a fast-paced, dynamic environment.
  • 💪Wellness simplified: Your health comes first with insurance from Day 1, gym memberships, recharge leave, and regular health checks. Join sports, arts, and hobby clubs or simply enjoy the balance of a lifestyle that prioritizes wellness.
  • 🎉 Work hard - play hard: Recognize hard work with company events like Summerfest & Winterfest, Town Hall, Meet the Client initiatives, team-buildings, and workations. Enjoy access to the Žalgiris Arena VIP Lounge and celebrate life’s big moments with milestone gifts for weddings, new parenthood, and graduations.


Compensation
  • Gross salary 5600 - 7600 EUR.



Get ready to take your personal and professional growth to new heights! Join Hostinger today and be part of our journey 🚀

Three. Two. Onboard



$$$ Full time
Engineer Software
  • Calabrio
  • Remote
software design c# saas

At Verint, we believe customer engagement is the core of every global brand. Our mission is to help organizations elevate Customer Experience (CX) and increase workforce productivity by delivering CX Automation. We hire innovators with the passion, creativity, and drive to answer constantly shifting market challenges and deliver impactful results for our customers. Our commitment to attracting and retaining a talented, diverse, and engaged team creates a collaborative environment that openly celebrates all cultures and affords personal and professional growth opportunities. Learn more at www.verint.com.

Overview of Job Function:

As a Software Engineer, you will be a core contributor to Verint's QM and PM engineering team. You will design and build full-stack features end-to-end, write high-quality automated tests, support production systems, and collaborate daily with Product Managers, Designers, QA Engineers, and globally distributed engineering peers. This is a role for engineers who take pride in their craft, are eager to grow through challenging problems, and want their work to have a visible impact on enterprise customers worldwide. You will be surrounded by experienced engineers who are invested in your growth, working in a modern Agile environment on software that matters.

Principal Duties and Essential Responsibilities:

Full-Stack Development

  • Design, develop, and maintain production-grade full-stack features spanning Java/C# backend services, REST/GraphQL APIs, and React/Ext JS frontend applications.
  • Translate product requirements and UX designs into well-structured, testable, and performant code.
  • Implement scalable microservices and modular frontend components that support high concurrency and enterprise-scale data volumes.
  • Participate in design and architecture reviews; contribute to discussions on API contracts, data models, and service boundaries.
  • Proactively identify and address performance bottlenecks, security gaps, and technical debt.
  • Write clean, idiomatic code following team standards; actively contribute to improving those standards over time.

Quality Assurance and Testing

  • Write comprehensive unit, integration, and end-to-end automated tests using JUnit, Jest, Playwright, and Cucumber (BDD).
  • Enforce code quality through peer reviews, static analysis, and adherence to the team's Definition of Done.
  • Investigate and reproduce reported defects; perform root-cause analysis and deliver timely, well-tested fixes.
  • Champion a shift-left testing mindset — integrating quality checks early and continuously in the development lifecycle.
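To make the testing bullets above concrete, here is a small sketch of the shift-left unit-test discipline they describe. The product stack uses JUnit/Jest; this example uses Python's `unittest` purely for illustration, and `score_interaction` is a hypothetical helper, not a Verint API:

```python
import unittest

def score_interaction(duration_s: float, resolved: bool) -> float:
    # Hypothetical QM-style scoring rule: resolved interactions start
    # higher, long interactions lose points, and the score floors at zero.
    base = 100.0 if resolved else 40.0
    return max(0.0, base - duration_s / 60.0)

class ScoreInteractionTest(unittest.TestCase):
    def test_resolved_scores_higher(self):
        self.assertGreater(score_interaction(300, True),
                           score_interaction(300, False))

    def test_score_never_negative(self):
        self.assertEqual(score_interaction(10**6, False), 0.0)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(ScoreInteractionTest)
passed = unittest.TextTestRunner(verbosity=0).run(suite).wasSuccessful()
```

Tests like these run on every commit in CI, which is what catching defects "early and continuously" means in practice.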

Production Support and Maintenance

  • Triage, prioritize, and resolve bugs, regression issues, and customer-reported problems within agreed SLA windows.
  • Provide Tier-2/3 technical support for production incidents; participate in post-incident reviews and implement corrective actions.
  • Monitor application health using observability tooling (logs, metrics, traces); proactively surface anomalies before they impact customers.
  • Maintain and improve runbooks and operational documentation for supported features.

AI/ML Integration and Continuous Improvement

  • Integrate AI/ML capabilities — including LLM-powered features, automated scoring, and speech-to-text — into product features in collaboration with Verint's AI research teams.
  • Evaluate and pilot emerging technologies; propose adoption where they improve quality, performance, or developer productivity.
  • Identify and contribute to refactoring initiatives that reduce complexity and improve long-term maintainability.
  • Stay current with industry engineering trends through reading, experimentation, and participation in technical communities.

Collaboration and Communication

  • Work in cross-functional squads with Product Managers, UX Designers, QA Engineers, DevOps, and Data Engineers.
  • Actively participate in all Agile Scrum ceremonies: sprint planning, daily stand-ups, backlog refinement, sprint reviews, and retrospectives.
  • Provide accurate effort estimates and proactively surface risks, blockers, and dependencies.
  • Collaborate effectively with distributed engineering teams in Atlanta, Israel, and India using async-first communication practices.
  • Support the growth of junior engineers through constructive code reviews and knowledge sharing.

CI/CD and DevOps Practices

  • Build, maintain, and improve CI/CD pipelines using Jenkins, GitHub Actions, or Azure DevOps — ensuring reliable, automated build-test-deploy workflows.
  • Containerize services with Docker and deploy to Kubernetes clusters (EKS/AKS) following GitOps and IaC principles.
  • Implement secure deployment practices: secrets management, environment-specific configuration, and staged rollout strategies.
  • Optimize pipeline performance to minimize build times and deliver faster feedback loops to the team.

Minimum Requirements:

  • Bachelor’s degree in computer science or software engineering (or similar), or equivalent experience
  • 3 years of experience with Java Spring Boot and practical software development experience, or proven equivalent seniority in software development on product teams
  • Proven track record of delivering full-stack features in an Agile/Scrum team with regular sprint cadences.
  • Hands-on experience with both backend API development and frontend UI implementation in a production codebase.
  • Back-End: Solid proficiency in Java (Spring Boot, Spring MVC, JPA/Hibernate) and/or C# (.NET / .NET Core). Good understanding of RESTful API design, OAuth 2.0/JWT, and basic microservices patterns.
  • Front-End: Working proficiency in JavaScript/TypeScript with hands-on React experience (hooks, context, state management). HTML5, CSS3, and foundational accessible UI development. Ext JS / Sencha familiarity is a plus.
  • Databases: Working knowledge of relational databases (PostgreSQL, MS SQL, Oracle) including schema design, SQL query writing, and basic indexing. Exposure to NoSQL stores (Redis, Elasticsearch, MongoDB) is a plus.
  • Cloud and Infrastructure: Exposure to AWS or Azure core services. Familiarity with Docker and basic Kubernetes concepts.
  • Testing: JUnit/TestNG/Jest unit tests and integration tests. Exposure to E2E testing tools (Playwright, Cypress, or Selenium). BDD with Cucumber is a plus.
  • CI/CD and DevOps: Working knowledge of Jenkins, GitHub Actions, GitLab CI, or Azure DevOps. Git branching strategies and pull request workflows.
  • AI and Emerging Tech: Exposure to LLM APIs or AI-powered tooling is a plus; curiosity and eagerness to develop these skills in the role.
  • Strong analytical thinking and a structured approach to debugging and problem-solving.
  • Clear written and verbal communication in English; able to document technical work clearly and participate actively in team discussions.
  • Self-motivated and eager to learn: takes initiative to understand problems deeply and asks good questions when stuck.
  • Collaborative and team-oriented: contributes positively to squad culture and values diverse perspectives.
  • Growth mindset: receptive to feedback, committed to continuous improvement, and excited to be challenged.
  • Solid experience with Agile Scrum or Kanban; comfortable with all sprint ceremonies.
  • Familiarity with Jira, Confluence, or Azure DevOps Boards for backlog tracking and documentation.
  • Exposure to test-driven development (TDD) and behavior-driven development (BDD) practices.

Preferred Skills:

  • Experience or academic exposure to workforce management, customer experience, or enterprise analytics domains.
  • Familiarity with Verint WFO, QM, or PM products or comparable SaaS platforms.
  • AWS Certified Developer or Azure Developer Associate certification, or active pursuit of one.
  • Experience with observability tools such as Datadog, Grafana/Prometheus, or ELK.
  • Contributions to open-source projects or a portfolio of personal/side projects.
  • Exposure to OWASP Top 10 security practices and secure coding principles.


Gross salary $1500 - 2200 Full time
Data Engineer
  • GUX Technologies
  • Santiago (Hybrid)
DevOps ETL Power BI Qlik Sense

At Proyectum Chile, we drive excellence in Project Management through consulting, training, and specialized outsourcing services. We are an international organization present in 12 Latin American countries, sharing knowledge, methodologies, and high-value assets. We are also the leading PMI Authorized Training Partner (ATP) in the region, leading the transformation in project management and agility.

We are looking for a Data Engineer to join a service in the data platform domain, participating in the development of modern solutions in cloud environments, with a focus on generating value from data. The role is responsible for producing technology assets and data products, translating business requirements into relevant information.

Send CV through Get on Board.

Role description

Main responsibilities:

  • Develop ETL / ELT processes in Snowflake and AWS
  • Build visualizations in Qlik Sense and Power BI
  • Translate business requirements into technology assets and data products
  • Write user stories and analysis documentation
  • Participate in defining blueprints and technology solutions
  • Collaborate on the development of data solutions in cloud environments
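The ETL / ELT responsibility above follows an extract-transform-load sequence. This is a toy in-memory sketch in Python; a real pipeline would load into Snowflake rather than a list, and the column names here are hypothetical:

```python
import csv
import io

# Hypothetical raw export; row 2 is missing an amount on purpose.
RAW = "order_id,amount,currency\n1,100,CLP\n2,,CLP\n3,250,USD\n"

def extract(text):
    # Extract: parse raw CSV rows into dicts.
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    # Transform: drop rows missing an amount, and cast types.
    out = []
    for r in rows:
        if r["amount"]:
            out.append({"order_id": int(r["order_id"]),
                        "amount": float(r["amount"]),
                        "currency": r["currency"]})
    return out

def load(rows, table):
    # Load: a real pipeline would issue a Snowflake COPY/INSERT here.
    table.extend(rows)
    return len(rows)

warehouse = []
loaded = load(transform(extract(RAW)), warehouse)
```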

Role requirements

Education:

  • Professional degree in Computer Engineering or a related field

Mandatory requirements:

  • Industry experience in financial services, payments, fintech, or retail
  • Experience with Snowflake
  • Proficiency with the AWS suite
  • Knowledge of DevOps practices
  • Data visualization experience (Qlik Sense / Power BI)

Desirable requirements:

  • Cloud infrastructure experience
  • Data warehouse experience

Key skills

  • Results orientation and value generation
  • Analytical and structured thinking
  • Proactivity and autonomy
  • Collaborative teamwork
  • Effective communication between technical and business areas

Conditions

Computer provided GUX Technologies provides a computer for your work.
$$$ Full time
Senior Data Engineer
  • Lalamove
  • Kuala Lumpur
technical support java senior

At Lalamove, we believe in the power of community. Millions of drivers and customers use our technology every day to connect with one another and move things that matter. Delivery is what we do best and we ensure it is always fast and simple. Since 2013, we have tackled the logistics industry head on to find the most innovative solutions for the world’s delivery needs. We are full steam ahead to make Lalamove synonymous with delivery and on a mission to impact as many local communities as we can. We have massively scaled our efforts across Asia and now have our sights on taking our best in class technology to the rest of the world. And we are looking for talented professionals to join us in this journey!


As a Senior Data Engineer at Lalamove, you will join the global Data team as a key member of our expanding technology team in our new market. Because of the importance of user privacy and our commitment to complying with local data regulations, we need an additional engineer to support our operations in the expanding market while collaborating closely with our global engineering team.




What you'll do:
  • Provide production support and incident response for our data platform in the expanding market.
  • Support and troubleshoot technical issues, including the data pipelines running on top of the data platform.
  • Collaborate with a geographically-dispersed team of engineers to support compliance for the expanding market.
  • Support ad hoc requests related to expanding market data and operations.


What you'll need:
  • Legally permitted to work in Malaysia
  • 5+ years of relevant experience in data engineering
  • Experience in supporting Big Data operations
  • Proficiency in SQL
  • Hands-on experience with Linux systems and command-line operations
  • Experience in Java and Spring Boot framework
  • Good command of English, fluency in Mandarin is a plus



To all candidates- Lalamove respects your privacy and is committed to protecting your personal data.

This Notice will inform you how we will use your personal data, explain your privacy rights and the protection you have by the law when you apply to join us. Please take time to read and understand this Notice. Candidate Privacy Notice: https://www.lalamove.com/en-hk/candidate-privacy-notice



$$$ Full time
Senior Data Engineer
  • Oowlish Technology
  • Remote
python support software growth

Join Our Team


Oowlish, one of Latin America's rapidly expanding software development companies, is seeking experienced technology professionals to enhance our diverse and vibrant team.


As a valued member of Oowlish, you will collaborate with premier clients from the United States and Europe, contributing to pioneering digital solutions. Our commitment to creating a nurturing work environment is recognized by our certification as a Great Place to Work, where you will have opportunities for professional development, growth, and a chance to make a significant international impact.


We offer the convenience of remote work, allowing you to craft a work-life balance that suits your personal and professional needs. We're looking for candidates who are passionate about technology, proficient in English, and excited to engage in remote collaboration for a worldwide presence.


About the Role:


We are seeking a Senior Data Engineer with strong expertise in enterprise data modeling and AWS-based data platforms to support a mature and evolving data ecosystem. This role requires hands-on experience working with large-scale data environments, optimizing data models, and maintaining event-driven pipelines in a cloud-native architecture.


You will work across data modeling, pipeline development, API data support, and infrastructure collaboration. This position is ideal for someone comfortable operating in enterprise environments, maintaining production-grade systems, and improving performance and scalability across a modern AWS data stack.


This is a 6-month engagement with ET time zone alignment required.



Must-Have:
  • 6+ years of experience in Data Engineering
  • Strong experience with Snowflake and Aurora Postgres
  • Advanced SQL and data modeling expertise (logical & physical design)
  • Hands-on experience with AWS data services (Glue, Lambda, DMS, EventBridge)
  • Strong Python experience for data pipelines
  • Experience supporting enterprise-scale data platforms
  • Experience with CI/CD (GitHub Actions)
  • Comfortable working in the ET time zone
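The event-driven AWS services in the must-have list (Lambda, EventBridge) typically revolve around handlers with the shape below. A minimal sketch in Python, with a hypothetical event payload and no real AWS calls:

```python
import json

def handler(event, context=None):
    """Sketch of an event-driven pipeline step (Lambda-style signature).

    Assumes an EventBridge-like envelope with a 'detail' payload; in a
    real deployment the result would be written onward (e.g. to S3 or
    Snowflake) rather than just returned.
    """
    detail = event.get("detail", {})
    rows = detail.get("rows", [])
    # Minimal data-quality transform: keep only rows with a non-null key.
    clean = [r for r in rows if r.get("id") is not None]
    return {"statusCode": 200, "body": json.dumps({"processed": len(clean)})}

# Hypothetical test event.
event = {"detail": {"rows": [{"id": 1}, {"id": None}, {"id": 3}]}}
resp = handler(event)
```

Keeping the handler a pure function of its event makes this kind of pipeline step easy to test in CI (e.g. GitHub Actions) without any AWS infrastructure.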


Nice to Have:
  • Experience working with Terraform
  • Exposure to artifact management and infrastructure-as-code best practices
  • Experience in performance tuning at scale
  • Experience implementing automated data quality frameworks
  • Prior experience in enterprise or large distributed systems




Benefits & Perks:


Home office;

Competitive compensation based on experience;

Career plans to allow for extensive growth in the company;

International Projects;

Oowlish English Program (Technical and Conversational);

Oowlish Fitness with Total Pass;

Games and Competitions;



You can also apply here:


Website: https://www.oowlish.com/work-with-us/

LinkedIn: https://www.linkedin.com/company/oowlish/jobs/

Instagram: https://www.instagram.com/oowlishtechnology/





$$$ Full time
Customer Success Project Management API Integration Account Management

OMNIX develops a PaaS platform for automating and orchestrating disruptions in complex operations, integrating with core systems such as ERP, WMS, CRM, and IoT. We work with enterprise companies in industries such as telecommunications, retail, logistics, and manufacturing, where operational continuity is critical.
The Customer Success Manager joins the Delivery & Customer Success team, working closely with Forward Deployed Engineers (FDE), Sales, and Product. The role ensures that implementations generate real, sustained impact on the client's business, and is responsible for turning projects into deep adoption, expanded usage, and tangible operational value, contributing directly to the retention and growth of strategic accounts.

Apply directly through getonbrd.com.

Role responsibilities

The Customer Success Manager is responsible for the end-to-end management of enterprise accounts post-implementation, ensuring that OMNIX becomes a mission-critical system within the client's operation. They lead the strategic relationship with stakeholders, define priority use cases together with the client, and build an expansion roadmap based on operational impact.
They work in coordination with the FDE, who executes solutions technically, while the CSM ensures their adoption, continuity, and value in production. The CSM has the autonomy to prioritize initiatives, detect expansion opportunities, and escalate decisions. They lead executive forums such as QBRs and are responsible for sustaining a clear value narrative. Success in the role is measured by the depth of platform usage, account expansion, and the ability to turn solutions into concrete results within the client's operation.

Role requirements

At least 5 years of experience in Customer Success, consulting, or account management roles in enterprise B2B contexts.

Demonstrable experience working with complex clients in industries such as logistics, telecommunications, retail, or manufacturing.

Ability to engage technical and executive (C-level) stakeholders, holding both business and technology conversations.

Experience managing implementations or projects with multiple integrations (ERP, APIs, core systems).

Strong results orientation, with the ability to structure problems, prioritize initiatives, and execute autonomously.

Advanced English (spoken and written) for interaction with international teams and clients.

High operational discipline, follow-through, and accountability in demanding environments.

Optional

Previous experience at SaaS/PaaS companies or with data and operational-automation platforms.

Knowledge of integration tools, data workflows, or automation (e.g., n8n, Zapier, APIs, ETL).

Experience in strategic consulting or digital-transformation implementation at large companies.

Familiarity with management methodologies such as EOS or disciplined-execution frameworks.

Knowledge of data analytics, anomaly detection, or artificial-intelligence models applied to operations.

Experience in high-growth environments or enterprise-focused technology companies.

Conditions

Benefits of working at OMNIX
  • Be part of an agile, high-impact team where everyone contributes and makes a difference.
  • Mostly remote work, with flexibility and objective-based management.
  • Performance and company results bonuses.
  • Fast professional growth, with the possibility to expand roles and responsibilities.
  • We operate using the EOS (Entrepreneurial Operating System), which provides:
    • clarity of goals,
    • strong prioritization,
    • clear metrics,
    • a culture of accountability.
  • Opportunity to work with teams in Chile, Peru, Colombia, and the United States.
  • Participation in cutting-edge AI and automation projects with real impact on enterprises and governments.

Flexible hours Flexible schedule and freedom for attending family needs or personal errands.
Health coverage OMNIX AI Corp pays or copays health insurance for employees.
Informal dress code No dress code is enforced.
Vacation over legal OMNIX AI Corp gives you paid vacations over the legal minimum.
$$$ Full time
Data Engineer
  • CyD Tecnología
  • Antofagasta (In-office)
Git SQL ETL Power BI

At CyD Tecnología we are an innovative technology company focused on developing custom web platforms that turn complex processes into simple, efficient solutions. Our team designs and delivers web and mobile applications that automate, integrate, and digitize critical operations, helping companies reduce costs, improve control, and make decisions based on real-time data.

Apply to this posting directly on Get on Board.

Main Responsibilities

The Data Engineer will be responsible for designing, developing, and maintaining data solutions aimed at building Power BI dashboards, ensuring the availability, quality, and consistency of information for decision-making.

They will work on integrating different data sources, transforming information, and building the modeling needed to support management reports. They will also participate in process optimization and in the continuous improvement of the data models used by the business.

Main duties include:

  • Develop and maintain Power BI dashboards (mainly MOP L3 and L4).
  • Build and manage Dataflows for data preparation and transformation.
  • Integrate data sources such as Snowflake and local databases.
  • Design and optimize data models for reporting.
  • Write and optimize SQL queries for data extraction and processing.
  • Ensure data quality and consistency in reports.
  • Support data standardization and BI development best practices.
  • Document processes and maintain traceability of data flows.
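The modeling and SQL-extraction duties above typically reduce to fact/dimension queries feeding dashboard tiles. A minimal, hedged sketch using SQLite in Python; the table and column names are invented and only stand in for whatever the real reporting model uses:

```python
import sqlite3

# Illustrative star-schema fragment: a fact table joined to a dimension,
# aggregated the way a dashboard tile would consume it.
conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE dim_area (area_id INTEGER PRIMARY KEY, area_name TEXT);
    CREATE TABLE fact_cost (area_id INTEGER, month TEXT, amount REAL);
    INSERT INTO dim_area VALUES (1, 'Mining'), (2, 'Logistics');
    INSERT INTO fact_cost VALUES (1, '2024-01', 100.0), (1, '2024-01', 50.0),
                                 (2, '2024-01', 75.0);
    """
)
rows = conn.execute(
    """
    SELECT d.area_name, f.month, SUM(f.amount) AS total
    FROM fact_cost f JOIN dim_area d USING (area_id)
    GROUP BY d.area_name, f.month
    ORDER BY d.area_name
    """
).fetchall()
```

In Power BI the aggregation itself would usually live in a DAX measure; the shape of the underlying model (facts keyed to conformed dimensions) is the same.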

Required Technical Skills

A degree in Computer Engineering or a related field is required, along with experience developing BI solutions and handling data.

Must-have requirements:

  • Experience developing Power BI dashboards.
  • Proficiency with Dataflows and Power Query for data transformation.
  • Strong command of SQL for complex queries.
  • Experience integrating data from Snowflake or similar sources.
  • Knowledge of data modeling for reporting.
  • Experience with DAX for metrics and calculations.

The position follows a 4x3 shift schedule on site in the II Región de Antofagasta. Remote work is not available.

Also valued:

  • Experience working with large data volumes.
  • Knowledge of Dataverse or Power Platform environments.
  • Experience optimizing dashboard performance.

Nice-to-Have Skills

The following knowledge or experience will be considered a plus:

  • Experience with Power Platform (Power Apps, Power Automate).
  • Knowledge of cloud data architecture (Azure).
  • Experience automating data processes (ETL/ELT).
  • Knowledge of data governance and quality.
  • Proficiency with version control tools (Git).
  • Experience with agile methodologies.

Conditions

Health coverage CyD Tecnología pays or copays health insurance for employees.
Computer provided CyD Tecnología provides a computer for your work.
$$$ Full time
Senior Data Engineer
  • Ethena Labs
  • Globally Remote
crypto back-end python cto

Who We Are and What We are Doing:

Ethena Labs is actively building and deploying a suite of groundbreaking digital dollar products aiming to upgrade money into the internet era.


Our flagship product, USDe, is a synthetic dollar backed by digital assets, and takes the novel approach of using a delta-neutral hedged basis strategy to maintain its peg. This product scaled from zero to $15b in 18 months.


Expanding on this, iUSDe is designed specifically for traditional financial institutions, incorporating necessary compliance features to enable them to access the crypto-native rewards our protocol generates, in an institutional-friendly manner.


Ethena has also developed USDtb: a fiat-backed, GENIUS-compliant stablecoin built in partnership with BlackRock, which has scaled to ~$2b.


These products are also available as a white-label offering where any application, chain, wallet, or exchange can launch its own stablecoin on Ethena's back-end infrastructure.


Through these offerings, Ethena Labs is not just creating new financial products; we are building the foundational infrastructure for a more open, efficient, and interconnected global financial system.


Open roles will focus on two major new product lines coming to market in the next few months.


Join us!!


The Senior Data Engineer is a critical role reporting directly to the CTO. The primary mission is to rapidly deliver a reliable, production-ready market data platform that serves as the single source of truth for trading, risk, and business intelligence.


You’ll immediately own the entire data platform from inception and deliver working historical and real-time Tardis pipelines in the first 60 days. Beyond the initial MVP, the role requires iteratively evolving the platform into a best-in-class, cloud-native, observable, and self-service system. You will work hand in hand with the CTO & trading team to scope & deliver to business needs. The Senior Data Engineer will also serve as the go-to data expert for the firm and will be responsible for mentoring future junior data engineers or analysts.




What You’ll Do
  • Rapidly spin up the cloud environment. Deliver working historical backfill pipelines from Tardis.dev into a queryable database.
  • Deliver a real-time Tardis WebSocket pipeline, ensuring data is normalized, cached for live consumption, accurate, replayable, and queryable by Day 60.
  • Ensure all pipelines are idempotent, retryable, and use exactly-once semantics. Implement full CI/CD, Terraform, automated testing, and secrets management.
  • Implement proper observability (structured logs, metrics, dashboards, alerting) from day one. Provide immediate self-service access to the MVP database for Trading and BI teams via tools like Tableau/Metabase, and through simple internal REST APIs.
  • Develop specialized time-series datasets, including USDe backing-asset data and a full opportunity-surface time series for delta-neutral/lending/borrow opportunities.
  • Ingest data from additional sources (Kaiko, CoinAPI, on-chain via TheGraph/Dune). Plan for 10x+ data growth via schema evolution, partitioning, and performance tuning. Establish enterprise-grade governance, including a data quality framework, RBAC, audit logs, and a semantic layer.
  • Create full architecture documentation, runbooks, and a data dictionary. Onboard and mentor future junior staff.
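The idempotency and retry requirements above usually come down to upserting on an event's natural key, so that replaying a batch after a failure changes nothing. A minimal sketch, assuming an illustrative trades table (not Ethena's actual schema), with SQLite standing in for the real time-series store:

```python
import sqlite3

# Idempotent market-data ingest: re-running the same batch (e.g. after a
# retry) must not duplicate rows. The natural key is (exchange, symbol, ts).
def ingest(conn, ticks):
    conn.executemany(
        """
        INSERT INTO trades (exchange, symbol, ts, price)
        VALUES (:exchange, :symbol, :ts, :price)
        ON CONFLICT(exchange, symbol, ts) DO UPDATE SET price = excluded.price
        """,
        ticks,
    )
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE trades (exchange TEXT, symbol TEXT, ts INTEGER, price REAL, "
    "PRIMARY KEY (exchange, symbol, ts))"
)
batch = [
    {"exchange": "binance", "symbol": "BTC-USDT", "ts": 1, "price": 64000.0},
    {"exchange": "binance", "symbol": "BTC-USDT", "ts": 2, "price": 64010.0},
]
ingest(conn, batch)
ingest(conn, batch)  # simulated retry: replaying the batch is a no-op
rows = conn.execute("SELECT COUNT(*) FROM trades").fetchone()[0]
```

Strictly, this gives effectively-once rather than exactly-once delivery, which is the practical reading of that requirement for a queryable store.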


What We’re Looking For
  • Proven track record of delivering working, production data in weeks, not months, with the ability to ruthlessly cut scope to hit a 60-day MVP while managing technical debt.
  • Have built Tardis historical and real-time pipelines before (or equivalent high-quality crypto market data feeds), understanding specific quirks, rate limits, and WebSocket structures.
  • Expert in large-scale, reliable ETL/ELT for financial or market data.
  • Fluent in provisioning full environments with Terraform in days and expert in AWS/GCP serverless technologies.
  • Expert Python and SQL skills and proficiency with time-series databases like TimescaleDB or ClickHouse, ensuring fast queries from day one.
  • Advanced knowledge of WebSocket clients, message queues, and low-latency streaming, GitOps, automated testing/deploy and observability practices.
  • Significant understanding of stablecoins, lending protocols, and opportunity surface concepts, or a proven ability to ramp up extremely quickly.



Why Ethena Labs?


You'd be joining a group that has established itself as one of the most successful crypto-native companies of all time, with a mission to revolutionise decentralised finance and its position in global finance.


Work alongside a passionate and innovative team that values collaboration and creativity.

Enjoy a flexible, remote-friendly work environment with established opportunities for personal growth and learning.


If you subscribe to the mission of separating the dollar from the state, then we want to hear from you!


We look forward to receiving your application and will be in touch after having a chance to review. 


In the meantime, here are some links to more information about Ethena Labs to help you check us out:

Website

Twitter/X

LinkedIn



Gross salary $2200 - 2800 Full time
Project Management Office Senior
  • 3IT
  • Santiago (In-office)
Agile Project Management Banking PMI
We are 3IT: innovation and talent that make the difference!
For us, innovation is a collaborative process and growth a shared goal. We are guided by values such as teamwork, reliability, empathy, commitment, honesty, and quality, because we know good results start with good relationships.
We also value diversity and promote inclusive workplaces, actively complying with Chile's Ley 21.015 to ensure accessible processes and equal opportunities.
If you are looking for a place to keep learning, contribute what you know, and grow in a close-knit, collaborative environment, this could be your next opportunity.


📝 What would your job be?

Define, standardize, establish, and carry out the strategic planning and operational processes of Development projects. You will also monitor activities and assign tasks, resources, and budget to projects.

🎯 What do we need to bring you onto our team?

  • Experience in banking
  • 4+ years working as a PMO
  • Ability to report progress to senior management
  • Senior experience managing development projects
  • Cross-functional coordination skills with multiple stakeholders
  • Implementation and maintenance of management frameworks such as CMMI
  • Competence in strategic planning, resource allocation, and budgeting
  • Use of PMI and Agile methodologies

📍 Where and how will you work?

  • Office location: Santiago
  • Mode: On-site

✋ A few things to consider before applying:

  • You must be available to work on-site at our office
  • If you have a disability, let us know if you need any special accommodation for your interview

Benefits you'll get if you join our team:

💰 Annual bonus
🦷 Dental insurance
📚 Training programs
📅 Personal/administrative days
🍽️ Pluxee card + $80.000
👕 Informal dress code
🚀 Upskilling and reskilling programs
🏥 MetLife supplementary health insurance
💊 Discounts at pharmacies and health centers
🐾 Discounts on pet insurance and pet stores
🎄 Holiday bonuses for Fiestas Patrias and Christmas
👶 Extra days on top of statutory paternity leave
🎂 Half day off on your birthday
🏦 Caja de Compensación Los Andes
🌍 Mundo ACHS discount
🎁 Gift for the birth of a child
🛍️ Buk discounts

Wellness program Banco de Chile offers or subsidies mental and/or physical health activities.
Life insurance Banco de Chile pays or copays life insurance for employees.
Digital library Access to digital books or subscriptions.
Health coverage Banco de Chile pays or copays health insurance for employees.
Dental insurance Banco de Chile pays or copays dental insurance for employees.
Computer provided Banco de Chile provides a computer for your work.
Performance bonus Extra compensation is offered upon meeting performance goals.
Informal dress code No dress code is enforced.
Beverages and snacks Banco de Chile offers beverages and snacks for free consumption.
Parental leave over legal Banco de Chile offers paid parental leave over the legal minimum.
Gross salary $3500 - 5200 Full time
Senior Data Engineer
  • Vequity
Python SQL Machine Learning AI Integration

Vequity is building the world’s most robust, contextualized buyer intelligence network for investment banks, private equity firms, and strategic acquirers. Our platform currently houses over 1.5 million buyer profiles with approximately 100 structured and inferred data fields per profile. We leverage proprietary AI agents to continuously enrich, infer, and structure buyer intelligence at scale. As a Senior Data Engineer, you will own the architecture, quality, and scalability of our data ecosystem—from ingestion and cleaning to inference and output generation. You will partner with AI, product, and engineering teams to deliver data APIs and feeds that power our platform's decision-support capabilities. Your work will directly impact data reliability, operational efficiency, and the precision of buyer attributes used across our customers.


Key Responsibilities

Multi-Source Data Architecture

  • Work with systems handling multiple write paths: external providers, LLM hygiene agents, and customer-claimed edits
  • Define standards for data versioning, lineage, and observability across pipelines


Entity Lifecycle & Master Data Management

  • Handle entity lifecycle complexity: mergers, acquisitions, spin-offs, rebranding, and temporal relationship changes
  • Design entity resolution systems using deterministic blocking (fuzzy matching, location) combined with LLM-based evaluation for match decisions
  • Build confidence scoring models and surface low-confidence cases for human review
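One way to read the entity-resolution bullets above: deterministic blocking cuts the candidate space, a similarity score ranks pairs, and low-confidence pairs go to review. A rough Python sketch under those assumptions, where `difflib` stands in for the real fuzzy matcher and an LLM judge would replace the fixed thresholds:

```python
from difflib import SequenceMatcher

# Block on (first token of normalized name, country) to avoid comparing
# every record against every other. Thresholds and records are illustrative.
def block_key(record):
    name = record["name"].lower().replace(",", "").replace(".", "")
    return (name.split()[0], record["country"])

def match_candidates(records, threshold=0.85):
    blocks = {}
    for rec in records:
        blocks.setdefault(block_key(rec), []).append(rec)
    matches, review = [], []
    for group in blocks.values():
        for i in range(len(group)):
            for j in range(i + 1, len(group)):
                score = SequenceMatcher(
                    None, group[i]["name"].lower(), group[j]["name"].lower()
                ).ratio()
                if score >= threshold:
                    matches.append((group[i]["id"], group[j]["id"], round(score, 2)))
                elif score >= 0.6:  # low confidence: surface for human/LLM review
                    review.append((group[i]["id"], group[j]["id"], round(score, 2)))
    return matches, review

records = [
    {"id": 1, "name": "Acme Holdings Inc.", "country": "US"},
    {"id": 2, "name": "Acme Holdings, Inc", "country": "US"},
    {"id": 3, "name": "Acme Capital Partners", "country": "US"},
]
matches, review = match_candidates(records)
```

The design point is that blocking is deliberately cheap and high-recall; the expensive judgment (LLM evaluation, human review) is only spent on pairs that survive it.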

Machine Learning & Matching Systems

  • Work with embeddings infrastructure: vector generation, retrieval optimization, and quality measurement
  • Optimize semantic search pipelines including embedding strategies, namespace design, and reranking
  • Establish evaluation frameworks to measure model performance against human judgment

Collaboration & Team Development

  • Educate and mentor the engineering team on data best practices, patterns, and common pitfalls
  • Lead continuous improvement of the data infrastructure roadmap

Relationship & Graph Modeling

  • Design data models for complex relationships: parent/subsidiary hierarchies, PE firm → portfolio company chains
  • Evaluate and implement graph query capabilities (Apache AGE, Neo4j, or optimized Postgres patterns) for relationship traversal that semantic search cannot address
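For the "optimized Postgres patterns" option above, parent/subsidiary traversal is a recursive CTE. A runnable sketch (SQLite syntax, which matches Postgres for this query; the firms and ownership edges are invented):

```python
import sqlite3

# Walk an ownership graph: PE firm -> holding companies -> operating
# companies, tracking depth so the full portfolio chain is recoverable.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ownership (parent TEXT, child TEXT)")
conn.executemany(
    "INSERT INTO ownership VALUES (?, ?)",
    [
        ("Apex PE", "HoldCo A"),   # PE firm -> holding company
        ("HoldCo A", "OpCo A1"),   # holding company -> operating company
        ("HoldCo A", "OpCo A2"),
        ("Apex PE", "HoldCo B"),
    ],
)
rows = conn.execute(
    """
    WITH RECURSIVE portfolio(company, depth) AS (
        SELECT child, 1 FROM ownership WHERE parent = 'Apex PE'
        UNION ALL
        SELECT o.child, p.depth + 1
        FROM ownership o JOIN portfolio p ON o.parent = p.company
    )
    SELECT company, depth FROM portfolio ORDER BY depth, company
    """
).fetchall()
```

This is the kind of multi-hop traversal semantic search cannot express; a dedicated graph store (Neo4j, Apache AGE) becomes worthwhile when the traversals get deeper or need cycle handling.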

Data Quality, Testing & Operations

  • Build quality-control layers including confidence scoring, human-in-the-loop validation, and automated anomaly detection
  • Implement testing strategies including data contracts, pipeline unit tests, and integration testing
  • Build proactive monitoring, alerting, and runbooks for data health issues
  • Ensure compliance with data governance, privacy, and security standards

Description

  • 5+ years in data engineering with strong Python (Pydantic a bonus), SQL, and cloud data stacks (including GCP)
  • Experience with orchestration frameworks (Airflow, Dagster, Prefect) and/or data platforms (Databricks)
  • Experience designing or integrating AI/LLM agents for data enrichment with structured AI → JSON → database pipelines including error recovery and monitoring
  • Understanding of embedding-based retrieval
  • Excellent communication and cross-team collaboration skills
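The "structured AI → JSON → database pipelines including error recovery" requirement above usually means schema validation plus bounded retries around the model call. A hedged sketch; the field set is invented and the canned responses stand in for real LLM calls:

```python
import json

# Validate an LLM's JSON output against a small required-fields schema and
# retry a bounded number of times before giving up.
REQUIRED_FIELDS = {"company": str, "sector": str, "employees": int}

def parse_enrichment(raw):
    record = json.loads(raw)  # raises ValueError (JSONDecodeError) if malformed
    for field, typ in REQUIRED_FIELDS.items():
        if not isinstance(record.get(field), typ):
            raise ValueError(f"bad or missing field: {field}")
    return record

def enrich_with_retry(llm_call, max_attempts=3):
    last_error = None
    for _ in range(max_attempts):
        try:
            return parse_enrichment(llm_call())
        except ValueError as exc:
            last_error = exc  # in production: log, adjust the prompt, retry
    raise RuntimeError(f"enrichment failed after {max_attempts} attempts: {last_error}")

# First response is truncated JSON; the retry succeeds.
responses = iter([
    '{"company": "Acme',
    '{"company": "Acme", "sector": "retail", "employees": 120}',
])
record = enrich_with_retry(lambda: next(responses))
```

In a real pipeline the validation layer would typically be Pydantic models (called out in the requirements) and failures would be written to a dead-letter table for monitoring rather than raised.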

Desirable

  • Prior experience with Machine Learning algorithms / semantic search
  • Prior experience with entity resolution or master data management — you understand why matching company records is fundamentally hard
  • Familiarity with graph databases or graph query patterns (Neo4j, Apache AGE, recursive CTEs) for complex entity relationships
  • Experience with event sourcing or append-only architectures for audit trails and data replay
  • Background in investment data, market intelligence, or deal sourcing platforms
  • Familiarity with agent orchestration tools (LangChain, LlamaIndex) and data quality frameworks (dbt, Great Expectations)
  • Experience as an early/first data hire at a startup
  • Understanding of prompt engineering, MCP Servers, function calling, and embedding-based retrieval

Benefits

Competitive compensation and Paid Time Off (PTO).

Fully remote You can work from anywhere in the world.
$$$ Full time
Intern Software Development
  • Netomi
  • Remote - India
software design technical code

About the Company:

Netomi is the leading agentic AI platform for enterprise customer experience. We work with the largest global brands like Delta Airlines, MetLife, MGM, United, and others to enable agentic automation at scale across the entire customer journey. Our no-code platform delivers the fastest time to market, lowest total cost of ownership, and simple, scalable management of AI agents for any CX use case. Backed by WndrCo, Y Combinator, and Index Ventures, we help enterprises drive efficiency, lower costs, and deliver higher quality customer experiences.


Want to be part of the AI revolution and transform how the world’s largest global brands do business? Join us!


Job description


We are looking for a Software Development Intern to help us with coding, fixing, executing, and versioning existing application code. If you're passionate about solving real, fundamental problems and eager to explore, learn, and work with technologies beyond your current scope, Netomi is the perfect place for you.



Job Responsibilities
  • Assist in the planning, design, and execution of SOA backend platforms, mostly REST-based web frameworks using Java (Spark, Spring, ORM)
  • High-level and low-level design of highly scalable components
  • Work collaboratively in a multi-disciplinary team environment
  • Assist key technical advisors in defining the project roadmap


Requirements
  • Experience with a scripting language for automated builds/deployments, preferably Java
  • Pursuing a B.E./B.Tech in Computer Science from a tier I or II institute (2025 and 2026 graduates only)



Netomi is an equal opportunity employer committed to diversity in the workplace. We evaluate qualified applicants without regard to race, color, religion, sex, sexual orientation, disability, veteran status, and other protected characteristics.



Gross salary $900 - 1200 Full time
Data Process Analyst
  • Datasur
  • Santiago (Hybrid)
Python PostgreSQL ETL Automation

At Datasur, we are leaders in commercial intelligence based on foreign trade data. Our platform processes millions of import and export records from more than 70 countries, and we are ready to scale even higher.

We are looking for a Process Engineer with at least one year of experience for a project to automate the data production flow. The role focuses on mapping, analyzing, documenting, and improving processes, driving the transition from manual operations to standardized, traceable, and scalable models.

The role requires a process-oriented IT mindset: the ability to map end-to-end flows, detect gaps, define controls, and translate business needs into clear functional requirements. The work spans the entire data lifecycle (ingestion, standardization, quality, monitoring, orchestration, and analytical loading), identifying risks and automation opportunities.


Role duties

1. Map, analyze, and document current and future processes in the data production flow.
2. Standardize processes, definitions, operating rules, and control points across areas.
3. Translate operational and functional requirements into clear documents for IT teams.
4. Support the definition of target flows, use cases, business rules, validations, and control metrics.
5. Coordinate with Data Production areas and technical teams to ensure consistency in process design.
6. Participate in creating process diagrams, procedures, manuals, and operational documentation.
7. Accompany the implementation of improvements, tracking progress, dependencies, and operational agreements.
8. Support the definition of quality, traceability, alerting, and process-tracking indicators.

Role requirements

  1. Degree in Process Engineering, Industrial Civil Engineering, Computer Engineering, Systems Engineering, or a related field.
  2. At least 1 year of experience mapping, analyzing, documenting, or improving processes.
  3. Interest or experience in processes related to IT, data, automation, or digital transformation.
  4. Knowledge of process modeling, requirements gathering, and functional documentation.
  5. Ability to interact with both technical and non-technical profiles.

Nice to have

  • Experience in data projects, ETL, data quality, automation, or systems integration.
  • General knowledge of concepts such as pipelines, validations, logs, monitoring, traceability, and data governance.
  • Familiarity with environments involving technologies such as Python, PostgreSQL, Airflow, Spark, or other data processing solutions, although the main focus of the role is not development but organizing and improving the process.

Conditions

  • A challenging project with real impact on the world of foreign trade.
  • A committed, agile team with a vision of global growth.
  • Freedom to propose, create, and lead change.
  • Flexible work arrangements and a results-oriented culture.

Gross salary $1500 - 2500 Full time
Analytics Engineer
  • Galilei
Python SQL Data Warehouse Data Modeling
Data that moves the Chilean electricity market.
About us
Galilei is a platform for querying and analyzing Chilean electricity market data via chat, WhatsApp, and automated daily reports. We are a spin-off of SPEC, a consultancy with a long track record in the sector. Our users are generators, retailers, regulators, and consultants who need fast answers about spot prices, generation, demand, SEA permitting, and legislative debate.
We are a small team (4 people) at an early stage. Whatever you build will be in use the same week.


What you'll do

We are looking for a semi-senior Analytics Engineer to take ownership of the data layer that feeds both the platform and internal product decisions.
We are not looking for someone who just moves pipelines: we want someone who understands which question each piece of data has to answer and who can talk with product and domain experts to design useful data models.
  • Design and maintain the data models that feed the platform (chat, WhatsApp agent, daily reports).
  • Integrate new regulatory and market sources (CEN, Coordinador, CNE, SEA, etc.) with an eye for quality and traceability.
  • Build and maintain the conversion-funnel dashboard and product usage metrics.
  • Work side by side with the product team to translate business hypotheses into actionable analytical models.
  • Help with technical documentation and with reducing bus factor (READMEs, ADRs).
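The funnel-dashboard bullet above is, at its core, a stage-by-stage user count. A minimal sketch with invented stage names; in practice this would live in SQL/dbt over warehouse events, but the logic is the same:

```python
# Count how many users reach each stage of a conversion funnel.
# Stage and event names are illustrative, not Galilei's actual product events.
FUNNEL = ["signup", "first_query", "daily_report_enabled"]

def funnel_counts(events):
    # events: list of (user_id, event_name)
    by_user = {}
    for user, name in events:
        by_user.setdefault(user, set()).add(name)
    counts = []
    reached = set(by_user)
    for stage in FUNNEL:
        # A user counts at a stage only if they also reached every prior stage.
        reached = {u for u in reached if stage in by_user[u]}
        counts.append((stage, len(reached)))
    return counts

events = [
    ("u1", "signup"), ("u1", "first_query"), ("u1", "daily_report_enabled"),
    ("u2", "signup"), ("u2", "first_query"),
    ("u3", "signup"),
]
counts = funnel_counts(events)
```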

Requirements

  • 3+ years of experience in data engineering, analytics engineering, or similar roles.
  • Very solid SQL.
  • Experience with orchestration (Airflow, Dagster, Prefect, or similar) and with at least one modern warehouse (BigQuery, Snowflake, analytical Postgres).
  • Python for data transformation and integration.
  • The habit of asking questions before writing code: we want someone who challenges the requirement, not someone who executes it blindly.

Bonus points

  • Knowledge of the Chilean electricity sector (not required, but a huge accelerator).
  • Experience with dbt or analytical modeling frameworks.
  • Experience in startups or small teams (you know there is no DevOps team sitting next to you here).
  • Familiarity with LLMs applied to querying structured data.

Benefits

100% remote
Market-rate salary
Equity (subject to conditions)

Fully remote You can work from anywhere in the world.
$$$ Full time
Senior Software Engineer - Backend
  • Ocrolus
software architect technical testing
Come build at the intersection of AI and fintech. At Ocrolus, we're on a mission to help lenders automate workflows with confidence—streamlining how financial institutions evaluate borrowers and enabling faster, more accurate lending decisions. Our AI workflow and analytics platform for lenders is trusted at scale, processing nearly one million credit applications every month across small business, mortgage, and consumer lending. By integrating state-of-the-art open- and closed-source AI models with our human-in-the-loop verification engine, Ocrolus captures data from financial documents with over 99% accuracy. Thanks to our advanced fraud detection and comprehensive cash flow and income analytics, our customers achieve greater efficiency in risk management and provide expanded access to credit—ultimately creating a more inclusive financial system.

Trusted by more than 400 customers—including industry leaders like Better Mortgage, Brex, Enova, Nova Credit, PayPal, Plaid, SoFi, and Square—Ocrolus stands at the forefront of AI innovation in fintech. Join us, and help redefine how the world's most innovative lenders do business.

We are looking for an exceptionally skilled Senior Software Engineer - Backend with a solid technical background and leadership skills, able to work in a fast-paced environment and help architect and build the next generation of our backend applications.

What you'll do:

  • Design, implement, and maintain microservices using Python.
  • Design and develop cloud-based software products conforming to industry best practices.
  • Build systems, services, and tools to handle new Ocrolus products and business requirements that securely scale over millions of transactions.
  • Build and scale our fast-growing online services and data pipelines.
  • Collaborate with other teams on security, reliability, and automation.
  • Support the testing process, troubleshooting issues and resolving them.

Gross salary $1800 - 2200 Full time
Scrum Jenkins Selenium Jmeter
We are 3IT: innovation and talent that make the difference!
For us, innovation is a collaborative process and growth a shared goal. We are guided by values such as teamwork, reliability, empathy, commitment, honesty, and quality, because we know good results start with good relationships.
We also value diversity and promote inclusive workplaces, actively complying with Chile's Ley 21.015 to ensure accessible processes and equal opportunities.
If you are looking for a place to keep learning, contribute what you know, and grow in a close-knit, collaborative environment, this could be your next opportunity.


📝 What would your job be?

Ensure software quality by implementing automated tests, overseeing every stage of development to prevent defects and guarantee the product performs optimally.

🎯 What do we need to bring you onto our team?

  • Experience with Selenium
  • Hands-on Scrum practice
  • Proficiency with Jenkins and Bamboo
  • Strong command of JMeter and LoadRunner
  • Competence in BDD with Gherkin and Cucumber
  • Track record in banking, fintech, or financial services
  • Degree in Computer Engineering, Programmer Analyst, or a related field
  • At least 5 years of experience in software test automation

Benefits you'll get if you join our team:

💰 Annual bonus
🦷 Dental insurance
📚 Training programs
📅 Personal/administrative days
🍽️ Pluxee card + $80.000
👕 Informal dress code
🚀 Upskilling and reskilling programs
🏥 MetLife supplementary health insurance
💊 Discounts at pharmacies and health centers
🐾 Discounts on pet insurance and pet stores
🎄 Holiday bonuses for Fiestas Patrias and Christmas
👶 Extra days on top of statutory paternity leave
🎂 Half day off on your birthday
🏦 Caja de Compensación Los Andes
🌍 Mundo ACHS discount
🎁 Gift for the birth of a child
🛍️ Buk discounts

Wellness program Banco de Chile offers or subsidies mental and/or physical health activities.
Life insurance Banco de Chile pays or copays life insurance for employees.
Digital library Access to digital books or subscriptions.
Health coverage Banco de Chile pays or copays health insurance for employees.
Dental insurance Banco de Chile pays or copays dental insurance for employees.
Computer provided Banco de Chile provides a computer for your work.
Performance bonus Extra compensation is offered upon meeting performance goals.
Informal dress code No dress code is enforced.
Beverages and snacks Banco de Chile offers beverages and snacks for free consumption.
Parental leave over legal Banco de Chile offers paid parental leave over the legal minimum.
Gross salary $2000 - 2200 Full time
Python BigQuery Apache Spark CI/CD

Equifax is much more than a credit-reporting company; it is a leading global data, analytics, and technology company with a presence in 24 countries. In Chile, it has operated since 1979, delivering critical cybersecurity, identity, and risk solutions to more than 14,000 companies.

The Technology Hub (SDC): what makes this opportunity unique is that Chile hosts the Santiago Development Center (SDC). This center leads Equifax's digital transformation worldwide, concentrating nearly 60% of its global technology development.

Culture and Vision: Equifax fosters an environment of collaboration and technical excellence, where local talent takes on the challenge of creating solutions with worldwide impact. Its vision is clear: use data and technology to power financial decision-making around the world.


Role duties

What will you do day to day?

  • Strong focus on cloud-based data development and processing.
  • Process automation and data manipulation.
  • Querying and handling of large data volumes.
  • Distributed real-time and batch data processing.
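The last duty above, distributed real-time and batch processing, is what Apache Beam and Spark provide at scale. The core idea of a streaming aggregate, a tumbling-window count, can be sketched in plain Python; window size and events are made up:

```python
# Tumbling-window event counts: the pure-Python equivalent of what a Beam
# or Spark Structured Streaming job computes across many workers.
def tumbling_window_counts(events, window_seconds=60):
    # events: iterable of (epoch_seconds, key); returns {(window_start, key): count}
    counts = {}
    for ts, key in events:
        window_start = ts - (ts % window_seconds)  # align to the window boundary
        counts[(window_start, key)] = counts.get((window_start, key), 0) + 1
    return counts

events = [(0, "login"), (30, "login"), (61, "login"), (75, "purchase")]
counts = tumbling_window_counts(events)
```

The distributed engines add what this sketch omits: partitioning the keyspace across workers, watermarks for late data, and fault-tolerant state.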

Skills

Technical

  • 2+ years of experience with Python
  • 2+ years of experience with BigQuery
  • 2+ years of experience with Apache Beam / Apache Spark
  • English at A2 level (conversational)

Personal

  • Self-management skills
  • Good communication skills
  • Strong teamwork
  • Adaptability to change (you will work across different Latin American geographies)
  • Academic degree in Computer Engineering, Systems, or related fields.

Indefinite-term contract from day one with 23people. Project duration: 6 months with possible extension.

  • Mode: Home office, with residence in Chile.
  • Experience: 2+ years
  • Schedule: Mon-Thu 08:30 to 18:30 / Fri 08:30 to 17:30

Nice to have

  • Analytical profile
  • Unit testing
  • Airflow
  • PySpark
  • CI/CD
  • Postman
  • JMeter

Benefits

Some of our benefits

  • Supplementary insurance: health, life, and dental coverage
  • English course: our English training program offers two modalities to fit your needs and goals.
  • International certification reimbursement: we support professional growth, so we reimburse the cost of one international certification exam you want to take.
  • Vacation bonus: for each week of vacation you take, we grant you a compensation bonus.
  • Holiday bonuses for Fiestas Patrias and Christmas: we want you to enjoy these special dates with your family, so we give you a bonus in September and December.
  • Birthday day off: you can choose to take the day before your birthday, your birthday itself, or the day after.

Flexible hours Flexible schedule and freedom for attending family needs or personal errands.
Life insurance Equifax pays or copays life insurance for employees.
Health coverage Equifax pays or copays health insurance for employees.
Dental insurance Equifax pays or copays dental insurance for employees.
Computer provided Equifax provides a computer for your work.
Vacation on birthday Your birthday counts as an extra day of vacation.
Gross salary $3000 - 5000 Full time
Data Engineer
  • Revel Street LLC
SQL DevOps ETL CI/CD

Revel Street LLC helps corporate event planners discover and reach private dining venues through an extensive, dependable database. We use LLMs extensively to gather and enrich venue data, streamline the event planning workflow, and reduce the time and effort required to source options for events such as private dining, cocktail receptions, and conferences. We are looking for an experienced Data Engineer to help us improve data quality, fix existing data issues, and ingest more data from APIs and LLM-based sources to complement our current datasets. Our current stack includes React, TanStack, Cloudflare, Django, and Dagster, and we expect you to design solutions that are scalable, testable, and grounded in core engineering fundamentals.

Applications at getonbrd.com.

Responsibilities

You’ll proactively turn ambiguous requirements into well-structured engineering plans. You’ll communicate trade-offs and risks early, and you’ll verify outcomes through hands-on testing. You’ll bring a “build, measure, improve” mindset to performance, reliability, and user experience.

  • Design, build, and maintain dbt pipelines for our analytics and operational workloads
  • Build and maintain ETL/ELT processes to ingest data from multiple APIs and other external sources
  • Set up and manage workflows in orchestration platforms such as Dagster
  • Develop and refine our data models to support analytics, reporting, and downstream products
  • Diagnose and fix data quality issues (duplicates, missing fields, inconsistent formats, incorrect mappings, etc.)
  • Implement robust data cleaning and validation checks
  • Integrate LLM-based data enrichment (e.g., using OpenAI or similar APIs) to improve and complement event data
  • Collaborate with our product and ops team to understand data needs and translate them into technical solutions

Requirements

  • Very high English proficiency (clear communication, strong writing, and the ability to collaborate effectively)
  • At least 3 years of data engineering experience including experience with dbt and the modern data stack
  • Some experience with DevOps, CI/CD, and database management.
  • At least 6 months of experience working exclusively in an agentic coding environment (e.g., Claude Code, Codex)
  • A solid grasp of data engineering fundamentals, not just code generation: debugging, reasoning about behavior, and ensuring correctness

Bonus (preferred)

  • Bachelor’s degree in Computer Science, Engineering, or a related field.

Conditions

Fully remote You can work from anywhere in the world.
$$$ Full time
Product Data Analyst
  • Big Health
  • Remote - US
analyst python supervisor support

Our Mission

At Big Health, our mission is to help millions back to good mental health by providing fully digital, non-drug options for the most common mental health conditions. Our FDA-cleared digital therapeutics—SleepioRx for insomnia and DaylightRx for anxiety—guide patients through first-line recommended, evidence-based cognitive and behavioral therapy anytime, anywhere. Our digital program, Spark Direct, helps reduce the impact of persistent depressive symptoms.


In pursuit of our mission, we’ve pioneered the first at-scale digital therapeutic business model in partnership with some of the most prominent global healthcare organizations, including leading Fortune 500 healthcare companies and Scotland’s NHS. Through product innovation, robust clinical evaluation, and a commitment to equity at scale, we are designing the next generation of medicine and the future of mental health care. 


Our Vision

Over the next 5-10 years, we believe digital therapeutics will transform the delivery of healthcare worldwide by providing access to safe and effective evidence-based treatments. Big Health is positioned to take the lead in this transformation.


Big Health is a remote-first company, and this role can be based anywhere in the US.


Join Us

We're seeking a Product Data Analyst contractor to drive data-informed product decisions by improving our data democratization, analyzing data, and generating insights and reports. You'll partner closely with product, growth, enrollment marketing, and client implementation teams to understand user behavior, measure product performance, and identify opportunities for growth and improvement.



Key Responsibilities
  • Use SQL to query data in Snowflake.
  • Update Snowflake data models, consistent with current data architecture. 
  • Use LookML to add new dimensions, measures, table calculations, and explores in Looker.
  • Create dashboards in Looker and Post Hog to support growth, enrollment marketing, client implementation, product initiatives, and/or company OKRs. 
  • Conduct deep-dive analyses using data from Snowflake and Looker to understand user behavior patterns, identify friction points in the user journey, and uncover opportunities for product enhancement. Analyses may include, but are not limited to, descriptive analytics, correlation, regression, and between-group analyses. 
  • Present the results of these analyses to a cross-functional audience, translating complex data findings into actionable recommendations.
  • Build externally-facing reports that give stakeholders clear visibility into user engagement, feature adoption, clinical outcomes, and recommendations for optimal product use.
  • Provide data to help justify and inform decision-making around A/B tests and experiments to validate product hypotheses and measure the impact of new features or changes. 
  • Use DBT to build data models and add new data sources to Snowflake. 
  • Assist with updating data dictionary and ERD. 
  • Communicate proactively. During onboarding, you will meet 3-5x/week with your supervisor to provide updates on ticket status and to ask questions. Asking questions outside of these meetings is expected and welcomed. 
  • Work with your supervisor and relevant stakeholders to proactively discuss requirements when questions arise. 
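Deep-dive analyses like those listed above often begin with simple descriptive statistics such as a correlation between usage and outcomes. A minimal stdlib-only sketch; the engagement and outcome numbers are invented for illustration, not real Snowflake data:

```python
import statistics

# Hypothetical per-user metrics (made up): sessions per week and an outcome score.
sessions = [2, 5, 1, 7, 4, 6, 3]
outcomes = [10, 14, 8, 18, 13, 16, 11]

def pearson_r(x: list[float], y: list[float]) -> float:
    """Pearson correlation: covariance normalized by both standard deviations."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson_r(sessions, outcomes)
```

In the role described, the same computation would typically run in SQL or Python against Snowflake data and feed a Looker dashboard; the sketch only shows the statistic itself.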


Required Qualifications
  • 3+ years of experience in product analytics, data analysis, or a related analytical role, preferably in a product-driven technology company
  • Strong SQL skills and experience working with large datasets in modern data warehouses like Snowflake, BigQuery, or Redshift
  • Experience with dbt or similar data transformation tools for building modular, tested, and documented data models
  • Proficiency in version control systems like Git for managing code and collaborating with data and engineering teams 
  • Proficiency in analytics tools such as Python or R for statistical analysis and data manipulation
  • Familiarity with BI visualization tools like Looker, Tableau, or Mode
  • Basic understanding of data pipeline orchestration and workflow management tools such as Airflow or similar. Familiarity with ELT/ETL processes and data integration tools like Fivetran, Stitch, or custom-built pipelines 
  • Solid understanding of statistical concepts including hypothesis testing, regression analysis, and experimental design. Experience designing and analyzing A/B tests with proper statistical rigor 
  • Familiarity with healthcare concepts and terminology is highly desirable
  • Strong communication skills
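The A/B-testing rigor asked for above comes down to standard hypothesis tests such as a two-proportion z-test on conversion rates. A minimal stdlib-only sketch with invented conversion counts, not real experiment data:

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """z statistic for H0: the two conversion rates are equal (pooled variance)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Invented example: control converts 200/1000, variant converts 260/1000.
z = two_proportion_z(200, 1000, 260, 1000)
# |z| > 1.96 would reject H0 at the 5% level (two-sided).
```

Real experiment analysis would add confidence intervals, power calculations, and multiple-testing corrections; this only illustrates the core test.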


Background and Life at Big Health
  • Backed by leading venture capital firms.
  • Big Health’s products are used by large multinational employers and major health plans to help improve sleep and mental health. Our digital therapeutics are available to more than 62 million Medicare beneficiaries.
  • Surround yourself with the smartest, most enthusiastic, and most dedicated people you'll ever meet—people who listen well, learn from their mistakes, and when things go wrong, generously pull together to help each other out. Having a bigger heart and a small ego are central to our values.


$50 - $80 an hour
The hourly rate range for this contractor position is $50.00 - $80.00 per hour. This range reflects the target hourly rate for the engagement and may vary based on experience, scope of work, location, and engagement structure. The hourly rate is the sole and full compensation provided for this contractor position.

Rates are determined by role requirements, level, and market factors. The range displayed reflects the minimum and maximum target hourly rates for this engagement. Final rates are determined based on relevant skills, experience, availability, and the specific terms of the engagement. Compensation for contractors does not include benefits, paid time off, or other employee benefits and is subject to change based on business needs.

We at Big Health are on a mission to bring millions back to good mental health, in order to do so, we need to reflect the diversity of those we intend to serve. We’re an equal opportunity employer dedicated to building a culturally and experientially diverse team that leads with empathy and respect. Additionally, we will consider for employment qualified applicants with criminal histories in a manner consistent with the requirements of the San Francisco Fair Chance Ordinance.


Big Health participates in E-Verify for all new hires in the United States.



Please mention the word **NIMBLE** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$$$ Full time
Data Engineer – SQL Migration
  • WiTi
  • Santiago (Hybrid)
SQL ETL Automation AWS

At WiTi we are leading a strategic project to migrate a legacy analytics ecosystem to a modern cloud architecture on AWS. The goal is to standardize, optimize performance, and scale operations by porting non-standard SQL logic to standard SQL for Amazon Redshift. The effort involves automation to accelerate the migration and reduce errors, along with close collaboration with data, BI, and IT teams to ensure enterprise-grade traceability, reproducibility, and data governance.

You will join a multidisciplinary team that designs and executes the migration end to end, establishing conversion rules, pipelines, quality controls, and reusable coding guidelines. The project offers cross-cutting visibility into ETL/ELT and data governance best practices in a scalable cloud environment.


Key Responsibilities

  • Analyze existing programs and scripts containing non-standard SQL logic, including processing structures typical of legacy environments (jobs, macros, libraries).
  • Convert and rewrite legacy SQL logic into standard SQL compatible with Amazon Redshift, preserving functional equivalence and performance.
  • Define a repeatable approach for migrating large volumes of programs: rules, conversion patterns, and coding standards.
  • Automate the transformation process with scripts, conversion rules, automated validations, templates, or pipelines.
  • Work with ETL/ELT processes on AWS, integrating with the client's stack (sources, loads, transformations, orchestration, monitoring).
  • Validate functional equivalence between the source system and Redshift through data reconciliation, quality controls, and monitoring.
  • Document conversion rules, technical decisions, and edge cases so the process remains maintainable and auditable.
  • Collaborate with data and IT teams to ensure traceability, reproducibility, and performance of the cloud data warehouse.
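A repeatable conversion approach like the one described above is often implemented as a small rule engine that applies rewrite patterns to legacy SQL. A minimal sketch; the two rules shown are common legacy-to-Redshift rewrites used here purely as examples, not the client's actual rule set:

```python
import re

# Illustrative legacy-to-Redshift rewrite rules (examples only, far from exhaustive).
RULES: list[tuple[str, str]] = [
    (r"\bNVL\s*\(", "COALESCE("),    # NVL(a, b)   -> COALESCE(a, b)
    (r"\bSYSDATE\b", "GETDATE()"),   # SYSDATE     -> GETDATE()
    (r"\bSUBSTR\s*\(", "SUBSTRING("),  # SUBSTR(...) -> SUBSTRING(...)
]

def convert(legacy_sql: str) -> str:
    """Apply each rewrite rule in order; keeping rules as data makes them easy to test."""
    out = legacy_sql
    for pattern, replacement in RULES:
        out = re.sub(pattern, replacement, out, flags=re.IGNORECASE)
    return out

sql = "SELECT NVL(total, 0), SUBSTR(name, 1, 3) FROM orders WHERE ts < SYSDATE"
converted = convert(sql)
```

A production migration would pair a rule engine like this with per-rule unit tests and the data reconciliation checks mentioned above, since regex rewrites alone cannot guarantee functional equivalence.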

Mandatory Requirements

  • Advanced SQL: complex queries, performance optimization, heavy joins, window functions/CTEs, and reading and interpreting execution plans.
  • Hands-on experience with Amazon Redshift: SQL design and authoring, performance best practices, and data modeling in Redshift.
  • Knowledge of ETL/ELT on AWS (e.g., Glue, Lambda, Step Functions) and other orchestration tools.
  • Experience in enterprise contexts focused on quality, traceability, documentation, and reproducible results.
  • Experience migrating from legacy technologies to cloud data warehouses (Redshift, Snowflake, BigQuery) and automating migrations.

Preferred Requirements

  • Prior experience migrating from legacy technologies to cloud data warehouses and involvement in migration automation will be valued.
  • Python or other scripting languages to support automation and internal tooling, as well as experience in data governance (naming conventions, documentation, data quality checks, and monitoring).

Benefits

At WiTi we foster a culture of learning and collaboration, with a focus on high-impact digital and data projects. Benefits include:

  • A personalized career plan oriented toward growth in data, cloud, and analytics.
  • Certifications (AWS, data, analytics) to keep growing in your career.
  • Language courses for personal and professional development.

Digital library Access to digital books or subscriptions.
Computer provided WiTi provides a computer for your work.
Personal coaching WiTi offers counseling or personal coaching to employees.
Informal dress code No dress code is enforced.
$$$ Full time
Senior Backend Engineer Integrations
  • Arbiter AI
  • New York City
design system python technical

Arbiter is the AI-powered care orchestration system that unites healthcare. We are launching our best-in-class, patient-facing agentic platform to optimize patient outcomes through a unique multimodal approach. We optimize complex healthcare workflows that interface with patients using the latest agentic AI approaches, and we combine them with a sophisticated platform that serves this agentic layer at scale. We are looking for expert engineers and leads to join our team and help us push the frontier of what's possible with agentic workflows in healthcare.

Backed by one of the largest seed rounds in health tech history and operators who bring the expertise and distribution to scale nationally, we're building the connected infrastructure healthcare should have had all along.

Our Engineering Culture & Values

We are a high-performing group of engineers dedicated to delivering innovative, high-quality solutions to our clients and business partners. We believe in:

  • Engineering Excellence: Taking immense pride in our technical craft and the products we build, treating both with utmost respect and care.

  • Impact-Driven Development: Firmly committed to engineering high-quality, fault-tolerant, and highly scalable systems that evolve seamlessly with business needs, minimizing disruption.

  • Collaboration Over Ego: Valuing exceptional work and groundbreaking ideas above all else. We seek talented individuals who are accustomed to working in a fast-paced environment and are driven to ship often to achieve significant impact.

  • Continuous Growth: Fostering an environment of continuous learning, mentorship, and professional development, where you can deepen your expertise and grow your career.

Responsibilities

As a Senior Backend Engineer, you will design, build, and operate the platform systems that power Arbiter's connections to the outside world and ensure reliable, performant data exchange across a complex ecosystem. You will own critical parts of our backend infrastructure, from API design and service orchestration to data pipelines and third-party system connectivity, working closely with product, engineering, and customer teams to ship production-grade systems with real customer dependency.

  • Platform Architecture & Backend Systems: Design, develop, and operate backend services that power Arbiter's core platform, with an emphasis on reliability, modularity, and clean system boundaries.

  • External System Connectivity: Build and maintain robust connections to third-party systems (e.g. cloud APIs, AI services, data exchange services, EHRs, telephony platforms). Own the abstractions that make these integrations reusable and adaptable across customers with minimal rework.

  • API Design & Data Exchange: Design and operate high-scale APIs (REST, gRPC, webhooks) and manage complex data flows including real-time streaming, batch processing, file-based exchange (e.g. SFTP, HL7, EDI), and event-driven pipelines.

  • Performance & Reliability: Ensure high throughput, low latency, and fault tolerance across backend services through strong system design, monitoring, alerting, and operational best practices. Handle vendor failures, retries, idempotency, and graceful degradation.

  • Data Engineering & Pipeline Ownership: Build and maintain ETL/ELT pipelines, manage schema evolution, and ensure data quality and integrity across systems with varying formats, standards, and reliability.

  • Infrastructure & Deployment Excellence: Implement and uphold best practices for CI/CD, testing, observability, and deployment of backend systems in production cloud environments.

  • Cross-Functional Execution: Partner closely with AI engineers, product managers, implementation teams, and customer stakeholders to translate ambiguous, high-impact problems into scalable technical solutions.

  • Technical Leadership & Mentorship: Mentor engineers, contribute to internal documentation and standards, influence technical direction, and raise the overall engineering bar.

  • Ownership & On-Call: Take end-to-end ownership of critical systems, including participating in on-call rotations and leading incident resolution when production issues arise.
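Reliability patterns called out above, such as retries with backoff for vendor failures, are standard in integration backends. A minimal sketch with a simulated flaky vendor; the function names are illustrative and not part of Arbiter's actual stack:

```python
import time

class TransientError(Exception):
    """Stand-in for a retryable vendor failure (timeouts, 5xx responses, rate limits)."""

def call_with_retry(fn, attempts: int = 3, base_delay: float = 0.01):
    """Retry a flaky call with exponential backoff; re-raise after the final attempt."""
    for attempt in range(attempts):
        try:
            return fn()
        except TransientError:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))  # 10 ms, 20 ms, ...

# Simulated vendor that fails twice, then succeeds.
calls = {"n": 0}
def flaky_vendor():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TransientError
    return "ok"

result = call_with_retry(flaky_vendor)
```

In a real system this would be combined with idempotency keys on the vendor side (so a retried request is not applied twice), jittered delays, and circuit breaking for sustained outages.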

Minimum Qualifications

  • 5+ years of hands-on experience building and operating production backend systems in high-availability environments.

  • Computer Science or Engineering degree, or equivalent practical experience.

  • Experience building and maintaining large-scale Python codebases with strong opinions on structure, quality, and tradeoffs.

  • Deep understanding of API design patterns, versioning, backward compatibility, and managing breaking changes across consumers.

  • Experience building reusable abstraction layers or connector frameworks that allow a single integration pattern to serve multiple customers or vendors.

  • Proven experience designing systems that connect to third-party services, including handling authentication, rate limiting, retry logic, and failure modes gracefully.

  • Strong understanding of concurrency, scalability, reliability, and distributed systems patterns.

  • Hands-on experience with data pipeline architectures: batch and streaming, schema management, and data quality enforcement.

  • Experience with cloud infrastructure (AWS, GCP, or Azure) and production deployments.

  • Strong communication skills and ability to work effectively across functions.

  • Proficiency with AI-assisted development tools (e.g., Cursor, Claude Code, GitHub Copilot).

  • Track record of delivering complex systems end-to-end with minimal oversight.

Preferred Qualifications

  • Experience with healthcare data exchange standards (HL7, FHIR, EDI) or similarly complex domain-specific protocols in other industries (fintech, telecom, logistics) is a plus.

  • Familiarity with database performance tuning, query optimization, and managing large-scale relational databases (PostgreSQL, CloudSQL).

  • Startup or early-stage experience operating in fast-moving, high-ambiguity environments.

This role can be remote or on-site, based in our New York City or Boca Raton offices, in a fast-paced, collaborative environment where great ideas move quickly from whiteboard to production.

Job Benefits

We offer a comprehensive and competitive benefits package designed to support your well-being and professional growth:

  • Highly Competitive Salary & Equity Package: Designed to rival top FAANG compensation, including meaningful equity.

  • Generous Paid Time Off (PTO): To ensure a healthy work-life balance.

  • Comprehensive Health, Vision, and Dental Insurance: Robust coverage for you and your family.

  • Life and Disability Insurance: Providing financial security.

  • Simple IRA Matching: To support your long-term financial goals.

  • Professional Development Budget: Support for conferences, courses, and certifications to fuel your continuous learning.

  • Wellness Programs: Initiatives to support your physical and mental health.

Pay Transparency

The annual base salary range for this position is $148,500-$190,000. Actual compensation offered to the successful candidate may vary from the posted hiring range based on work experience, skill level, and other factors.



$$$ Full time
Data Engineering Intern
  • RefinedScience
  • Remote
python students support software

Data Engineering Intern

At RefinedScience, our mission is to advance care by bringing together the best science, data and minds – disease by disease, patient by patient, cell by cell to discover pathways to life beyond disease.   

WHAT WE ARE LOOKING FOR

We are seeking a motivated Data Engineering Intern to join our team. This internship is open to undergraduate and graduate students who are interested in building data infrastructure that supports advanced analytics, data science, and AI-driven insights in healthcare and life sciences.

You will work closely with data scientists, bioinformaticians, and engineers to help design, build, and improve data pipelines and platforms that power RefinedScience's research and analytics initiatives.

KEY ACTIVITIES

  • Assist in building and maintaining data pipelines for ingesting, transforming, and validating clinical, biological, and real-world data
  • Support integration of data from multiple sources (e.g., clinical data, analytics outputs, external datasets)
  • Help develop and optimize ETL/ELT workflows to ensure data quality and reliability
  • Collaborate with data science and bioinformatics teams to support analytics and machine learning workflows
  • Contribute to data modeling, documentation, and best practices for data infrastructure
  • Participate in code reviews, testing, and performance improvements
  • Participate in Quality Reviews and Troubleshooting
  • Communicate progress and findings to cross-functional teams

MUST HAVES

  • Currently enrolled in a Bachelor's, Master's, or Ph.D. program in Data Engineering, Computer Science, Data Science, Software Engineering, or a related field
  • Experience with Python and/or SQL through coursework, projects, or internships
  • Basic understanding of data pipelines, databases, and data transformation concepts
  • Familiarity with version control (e.g., Git)
  • Strong analytical thinking and problem-solving skills
  • Ability to learn quickly and work collaboratively in a team environment

Gross salary $1600 - 2000 Full time
Jenkins Selenium JMeter Test Automation

We are 3IT: innovation and talent that make the difference!

For us, innovation is a collaborative process and growth is a shared goal. We are guided by values such as teamwork, reliability, empathy, commitment, honesty, and quality, because we know that good results start with good relationships.

We also value diversity and promote inclusive workplaces, which is why we actively support compliance with Ley 21.015, ensuring accessible processes and equal opportunities.

If you are looking for a place to keep learning, contribute what you know, and grow in a close-knit, collaborative environment, this could be your next opportunity.


📝 What would your job be?

Ensure software quality through functional testing, evaluating conformance with requirements and the expected functionality at each stage of development.

🎯 What do we need for you to join our team?

  • Use of Gherkin and Cucumber
  • Mastery of functional testing
  • Experience in software testing
  • Hands-on experience with JMeter
  • Proficiency with Selenium for test automation
  • Continuous integration management with Jenkins and Bamboo
  • Ability to control and track functional testing
  • At least 3 years of experience using the technologies listed above
  • Knowledge of software quality assurance methodologies
  • A track record in banking or financial environments with a focus on functional validation

A few things to consider before applying:

  • You must be available to work in a hybrid arrangement and attend the client's offices in person
  • If you have a disability, let us know if you need any accommodation for your interview

📍 Where and how will you work?

  • Office location: Providencia
  • Modality: Hybrid

Benefits you'll get if you join our team:

💰 Annual bonus
🦷 Dental insurance
📚 Training programs
📅 Administrative leave days
🍽️ Pluxee card + CLP 80,000
👕 Informal dress code
🚀 Upskilling and reskilling programs
🏥 MetLife supplementary health insurance
💊 Discounts at pharmacies and health centers
🐾 Discounts on pet insurance and pet stores
🎄 Holiday bonuses for Fiestas Patrias and Christmas
👶 Extra days added to paternity leave
🎂 Half day off on your birthday
🏦 Caja de Compensación Los Andes
🌍 Mundo ACHS discounts
🎁 Gift for the birth of a child
🛍️ Buk discounts

Wellness program Banco de Chile offers or subsidies mental and/or physical health activities.
Life insurance Banco de Chile pays or copays life insurance for employees.
Digital library Access to digital books or subscriptions.
Health coverage Banco de Chile pays or copays health insurance for employees.
Dental insurance Banco de Chile pays or copays dental insurance for employees.
Computer provided Banco de Chile provides a computer for your work.
Performance bonus Extra compensation is offered upon meeting performance goals.
Beverages and snacks Banco de Chile offers beverages and snacks for free consumption.
Parental leave over legal Banco de Chile offers paid parental leave over the legal minimum.
Gross salary $4000 - 5500 Full time
Python SQL DevOps Apache Spark
At Zaelot, we build scalable solutions for enterprise-grade clients, operating across 9 countries with 90+ employees. We bring together analysts, designers, and software engineers in a collaborative, trust-first environment where continuous improvement is part of our day-to-day. In this role, we are looking for a Data Engineer to join our growing team and help deliver data products that empower international clients. You will contribute to building and evolving reliable, scalable data pipelines and support the performance, testing, and optimization needed to operate complex, distributed data systems with confidence.


Key Responsibilities

We collaborate with cross-functional teams to understand data and analytics needs and translate them into data products that meet business goals.
  • Build and maintain reliable, scalable data pipelines handling large datasets
  • Write high-quality, readable, and well-tested production code
  • Perform performance tuning and optimization for complex, distributed data pipelines
  • Participate in code reviews and contribute to testing, automation, and deployment tooling
  • Help improve the reliability and productivity of the data platform over time

Required Skills & Experience

We are seeking a Data Engineer with strong practical engineering foundations and hands-on experience in building data pipelines.
  • Experience building and maintaining data pipelines using Python and SQL
  • Experience working with large datasets in distributed computing environments
  • Strong background in data modeling, algorithms, and software quality processes
  • Solid software engineering foundations, including CI/CD, testing, code reviews, and DevOps practices
  • BS/MS in Computer Science, Software Engineering, or equivalent hands-on experience

Nice-to-Have Skills

  • Experience with Databricks or similar big data platforms
  • Familiarity with Spark and distributed computing concepts
  • Scala or R programming experience
  • Exposure to Data Science or machine learning concepts

What We Offer

20 days of paid vacation after one year.
Referrals program and finder's fee rewards.
Training and certification programs.
Work from home aid.
English classes.
Access to a fitness program.
Profit sharing.
Coaching sessions for personal development.
We’re looking forward to working together! 🚀

Fully remote You can work from anywhere in the world.
$$$ Full time
GTM Analytics Engineer
  • Stedi
  • Remote
saas founder architect recruiter

We're building a new healthcare clearinghouse

In the healthcare sector, the Health Insurance Portability and Accountability Act of 1996 (HIPAA) requires that all insurance payers exchange transactions such as claims, eligibility checks, prior authorizations, and remittances using a standardized EDI format called X12 HIPAA. A small group of legacy clearinghouses process the majority of these transactions, offering consolidated connectivity to carriers and providers.

Stedi is the world's only programmable healthcare clearinghouse. By offering modern API interfaces alongside traditional real-time and batch EDI processes, we enable both healthcare technology businesses and established players to exchange mission-critical transactions. Our clearinghouse product and customer-first approach have set us apart. Stedi was ranked as Ramp’s #3 fastest-growing SaaS vendor.

Stedi has lightning in a bottle: engineers and designers shipping products week in and week out; a lean business team supporting the company’s infrastructure; passion for automation and eliminating toil; $92 million in funding from top investors like Stripe, Addition, USV, Bloomberg Beta, First Round Capital, and more. To learn more about how we work, watch our founder Zack’s interview with First Round Capital.

What we’re looking for

We’re hiring a full-stack data and analytics engineer to build and own the data foundation that will power our daily GTM operations: revenue analytics, product usage telemetry, CRM data quality, attribution, funnel performance, and forecasting.

This is not a typical business analyst position. You will architect the pipelines, models, and automations that ensure our GTM teams have reliable, real-time insights into how customers discover, adopt, and expand with Stedi and our products. You will work closely with Sales, GTM Ops, Product, and Finance, executing data and analytics engineering workstreams, and conducting hands-on analysis to build the source-of-truth data for our GTM operations.

What you'll do

  • Build and maintain GTM data pipelines: Own ingestion, transformation, and syncing of CRM data (HubSpot), product-usage telemetry, billing data, and third-party enrichment data in Redshift to support GTM analytics workstreams.

  • Develop core GTM & revenue data models: Improve operational efficiency through standardization of datasets for Sales, GTM Ops, Finance, and the executive team, while establishing common metric definitions across revenue, customer segments, and more.

  • Ship dashboards, alerts, and decision-making tools: Improve telemetry into business performance by building dashboards to track things like sales funnel performance and pipeline quality. Better inform GTM leadership through automation of weekly/monthly reporting and establishing a revenue forecast.

  • Investigate trends and build models to support sales. Accelerate sales effectiveness through implementation of alerting for critical events (e.g. pipeline drops, usage contractions, stuck deals, missed lifecycle transitions), conducting key analyses (e.g. pipeline velocity, win rates, segmentation performance), and development of GTM models (e.g. ICP scoring, account prioritization, churn risk).

  • Own the GTM analytics roadmap: Work with GTM leadership to maintain a backlog of GTM analytics engineering work. Proactively identify the next set of capabilities the GTM org needs (forecasting, routing logic, new usage signals, etc).
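Alerting on events like pipeline drops, mentioned above, can start as a simple threshold check over weekly aggregates before graduating to anything statistical. A minimal sketch with made-up weekly pipeline totals, not Stedi data:

```python
def pipeline_drop_alerts(weekly_pipeline: list[float], threshold: float = 0.2) -> list[int]:
    """Return indexes of weeks whose pipeline fell more than `threshold` vs the prior week."""
    alerts = []
    for i in range(1, len(weekly_pipeline)):
        prev, cur = weekly_pipeline[i - 1], weekly_pipeline[i]
        if prev > 0 and (prev - cur) / prev > threshold:
            alerts.append(i)
    return alerts

# Invented weekly pipeline values ($k); week index 3 drops more than 20%.
weeks = [100.0, 105.0, 110.0, 80.0, 85.0]
flagged = pipeline_drop_alerts(weeks)
```

In the role described, a check like this would run as a scheduled query against Redshift models and post to a Slack or email alert channel; the sketch only shows the detection logic.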

Who you are

  • You have exceptional analytical skills: You’ve made a career in working with data to improve products and overall business operations. You know the tools, best practices, and playbooks necessary to stand up a high-performing and organized analytics function at the company.

  • You know the tech stack: You write efficient SQL queries to analyze large datasets and can work with complex schemas. You're an expert with data visualization tools like Tableau, QuickSight, or Power BI, and you're familiar with cloud environments (AWS, Azure, GCP).

  • You create and execute your own work: You notice patterns others miss and dig deep to understand root causes. You've identified data issues or operational inefficiencies that led to meaningful improvements.

  • You do what it takes to get the job done: You are resourceful, self-motivating, self-disciplined, and don’t wait to be told what to do. You put in the hours.

  • You move quickly: We move quickly as an organization, and you'll need to match our pace without getting lost: responding with urgency (both externally to payers and internally to stakeholders), communicating what you're working on, and proactively asking for help or feedback when you need it.

  • You are a “bottom feeder”: You thrive on the details. No task is too small when it comes to finding success, generating revenue, and improving our costs.

The annual compensation range for this role is $180,000-$230,000. For roles with a variable component, the range provided is the role’s On Target Earnings ("OTE") range, which means that the range is inclusive of the sales commissions or bonus target and annual base salary. This range may be inclusive of multiple experience levels at Stedi and will be narrowed during the interview process based on a number of factors, including the candidate’s experience, location, and qualifications. Please reach out to your recruiter with any questions.

We’ve been made aware of individuals impersonating the Stedi recruiting team. Please note:

  • All official communication about roles at Stedi will only come from an @stedi.com email address.

  • If you’re unsure whether a message is legitimate or have any concerns, feel free to contact us directly at careers@stedi.com.

We appreciate your attention to this and your interest in joining Stedi.

At Stedi, we're looking for people who are deeply curious and aligned to our ways of working. You're encouraged to apply even if your experience doesn't perfectly match the job description.



Gross salary $1200 - 1800 Full time
Data Analyst (Azure + BI)
  • Asesoría y Gestión de Procesos S.A
SQL ETL Power BI Data governance
At Asesoría y Gestión de Procesos S.A. we are recruiting talent for a Data & Analytics team focused on improving the operational and strategic visibility of our clients, primarily in the automotive and real estate sectors. The project covers the full data lifecycle: from ingestion and modeling through visualization and proactive monitoring. Our goal is to turn data into actionable insights that drive business decisions, align KPIs with strategic objectives, and deliver reliable dashboards and alerts for executive and operational teams.
We work with Azure Data Factory, Data Lake, and BI tools such as Power BI and Grafana for real-time monitoring. The role joins a company with 12 years of experience, a portfolio of more than 120 clients, and a clear mission of accelerating and improving processes through technology and innovation.

Originally published on getonbrd.com.

Duties and responsibilities

  • Understand the business and define key KPIs together with stakeholders, documenting calculation rules and ensuring the indicators are actionable.
  • Design and develop ETL/ELT pipelines in Azure Data Factory, integrating diverse sources (databases, APIs, files) and guaranteeing data quality.
  • Model data into appropriate schemas and maintain the Data Warehouse/Data Marts for efficient analytical consumption.
  • Develop interactive dashboards in Power BI and Grafana, translating analytical complexity into clear, useful visualizations for different audiences.
  • Monitor data, define automatic alerts and notifications on deviations, identifying anomalies and generating proactive insights.
  • Collaborate with business and IT teams to guarantee the availability, scalability, and security of the data solutions.
  • Participate in defining the data architecture and data governance best practices.
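
To illustrate the kind of data-quality guarantee mentioned above, a minimal validation step might look like the following sketch (the field names are hypothetical, not the client's actual pipeline):

```python
def validate_records(records, required_fields, key_field):
    """Minimal data-quality check: report rows with missing required
    fields or duplicated business keys.

    records: list of dicts, e.g. rows landed from an API or file.
    Returns a list of (row_index, problem) tuples.
    """
    problems = []
    seen = set()
    for i, row in enumerate(records):
        # Required fields must be present and non-empty.
        for field in required_fields:
            if row.get(field) in (None, ""):
                problems.append((i, f"missing {field}"))
        # The business key must be unique across the batch.
        key = row.get(key_field)
        if key in seen:
            problems.append((i, f"duplicate {key_field}={key}"))
        seen.add(key)
    return problems
```

In an Azure Data Factory pipeline, a check like this would typically run as a validation activity between ingestion and the load into the warehouse.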

Requirements and profile

  • Solid experience in data integration and analysis with Azure Data Factory and Azure services (Data Lake), plus advanced SQL.
  • Experience building dashboards in Power BI and Grafana; knowledge of data modeling (Data Warehouse, OLAP) and ETL/ELT processes.
  • Ability to design end-to-end solutions: from KPI definition through the delivery of visualizations and operational alerts.
  • Knowledge of scripting and best practices in data governance, quality, and security.
  • Ability to communicate insights to non-technical audiences, analytical thinking, and a focus on business impact.
  • Prior experience in BI/analytics roles and the ability to work both autonomously and collaboratively.

Desirable skills and competencies

Certifications in BI/Analytics and experience with projects in the automotive sector. Experience in agile environments, stakeholder management, and the ability to prioritize in changing environments are appreciated.

Benefits

At Asesoría y Gestión de Procesos S.A we offer a flexible work environment and attractive benefits, such as:
  • Three free afternoons per year.
  • Informal dress code.
  • Two extra days off per year.
  • Your birthday off.
  • Supplementary insurance.
  • And many other benefits.
We hope to have you on our team!

Fully remote You can work from anywhere in the world.
Flexible hours Flexible schedule and freedom for attending family needs or personal errands.
Health coverage Asesoría y Gestión de Procesos S.A pays or copays health insurance for employees.
Informal dress code No dress code is enforced.
Vacation over legal Asesoría y Gestión de Procesos S.A gives you paid vacations over the legal minimum.
$$$ Full time
Junior Data Engineer
  • Satelligence
  • Utrecht
design python django technical

At Satelligence we're looking for a Junior Data Engineer to join our team:

Employment type: 32–40h/week

Location: Utrecht, NL (hybrid)

Experience: Junior–Medior level

Salary: €48 000 – €60 000 gross/year (including 8% holiday allowance, based on 40h/week)

About the job

As a Data Engineer, your main responsibility is building out the capabilities of our (geo)data query engine. You'll be part of the data engineering team, which develops and maintains our satellite data processing engine, our geospatial storage and query engine, and a set of internal tools used mainly by our OPS team. Our tech stack is Python, Django, and PostGIS, deployed on Google Cloud services like GKE and Cloud Functions. This role reports to the Engineering Lead.


What will you do?

You'll be instrumental in empowering our product teams to develop and deploy features that help our clients reach their sustainability targets. You'll ensure the reliability, scalability, and performance of our cloud-based data platform, enabling us to deliver critical environmental intelligence through our API. Your work will directly contribute to:

  • Building and maintaining scalable infrastructure on GCP using infrastructure-as-code tools like Terraform

  • Optimizing data pipelines for processing and storing massive datasets (ETL, OLAP)

  • Developing and managing APIs for efficient data dissemination

  • Implementing data engineering best practices for data quality, security, and performance

  • Collaborating closely with product teams to understand their needs and provide technical guidance

  • Contributing to the design and implementation of data storage solutions using databases like PostgreSQL

  • Monitoring and troubleshooting platform performance and ensuring high availability
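
For a taste of the geospatial querying behind the bullets above, here is a toy pure-Python bounding-box filter. In production this would be a PostGIS query (e.g. ST_Within) issued through Django rather than in-memory filtering:

```python
def points_in_bbox(points, min_lon, min_lat, max_lon, max_lat):
    """Return the (lon, lat) points that fall inside a bounding box.
    A toy stand-in for a PostGIS bounding-box / ST_Within query."""
    return [
        (lon, lat)
        for lon, lat in points
        if min_lon <= lon <= max_lon and min_lat <= lat <= max_lat
    ]
```

At Satelligence scale, a spatial index (GiST in PostGIS) does this lookup efficiently over millions of geometries; the toy version just shows the predicate.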


About you

  • You are an experienced Python developer

  • You are experienced with RDBMS, especially PostgreSQL

  • You are familiar with Django

  • You prefer a well-organized codebase over getting your pull requests merged fast

Nice to have

  • You are experienced with Infrastructure as Code tools such as Terraform

  • You have experience with Google Cloud (Cloud SQL, Cloud Composer, Kubernetes)

  • You have worked with PostGIS before or bring other experience with geospatial data


What we offer you:

📍 Office centrally located in Utrecht city (with direct access via bus 8 or a 20-minute walk from Utrecht Central Station)
😎 27 holidays (based on full-time employment)
👐 Solid pension scheme with employer contribution
🚆 NS Business Card for employees commuting from outside Utrecht
🖥️ Laptop and necessary IT equipment provided
🩺 Additional income protection in case of long-term illness or disability, complementing the statutory coverage
🥪 Daily lunch, fruit, and Aroma Club coffee at the office
🍹 Not the main reason to join, but definitely a fun one: an annual Team Week, after-summer drinks with friends and family, and a festive Christmas celebration

Meet Satelligence!
Satelligence is the market leader in remote sensing technology for sustainable sourcing, with the mission to halt deforestation. We provide traders, manufacturers, and agribusinesses such as Mondelez, Bunge, Cargill, Unilever, and Rabobank with critical sustainability insights, empowering them to minimize their global environmental footprint and track their progress against climate objectives, ensuring a sustainable supply chain. We were founded in 2016 and currently employ 40+ people, working in Utrecht and several locations in Asia, Africa, and South America.

Apply for the job

Do you want to join our team as our new Junior Data Engineer? Then we'd love to hear from you!



Gross salary $2100 - 2300 Full time
Business Development Sales Forecasting Contract Management Negotiation

We are 3IT: innovation and talent that make a difference!

For us, innovation is a collaborative process and growth a shared goal. We are guided by values such as teamwork, reliability, empathy, commitment, honesty, and quality, because we know that good results start with good relationships.

We also value diversity and promote inclusive workplaces. That's why we actively comply with Ley 21.015, ensuring accessible processes with equal opportunities.

If you're looking for a place to keep learning, contribute what you know, and grow in a close-knit, collaborative environment, this could be your next opportunity.


📝 What would your job be?

Drive the company's strategic growth by generating new business opportunities and opening new markets, meeting a shared sales target across the IT Outsourcing and IT Solutions lines, and securing sustainable, profitable revenue for the organization.

🎯 What do we need to bring you onto our team?

  • Pipeline management, forecasting, and use of HubSpot CRM
  • At least 6 years of experience in strategic sales roles
  • Experience in business development (hunting and market opening)
  • Negotiation and contract-closing skills with a focus on profitability
  • Ability to coordinate with internal areas (IT, PMO, Sales)
  • Ability to build commercial proposals (Scope of Work)
  • Expertise in consultative selling of technology services, outsourcing, and IT solutions

📍 Where and how will you work?

  • Office location: Providencia
  • Modality: Hybrid

✋ A few things to consider before applying:

  • You must be available to work in a hybrid modality and attend our office in person
  • If you have a disability, let us know if you need any accommodations for your interview

Benefits you'll get if you join our team:

💰 Annual bonus
🦷 Dental insurance
📚 Training
📅 Administrative days off
🍽️ Pluxee card + $80.000
👕 Informal dress code
🚀 Upskilling and reskilling programs
🏥 MetLife supplementary health insurance
💊 Discounts at pharmacies and health centers
🐾 Discounts on pet insurance and pet stores
🎄 Holiday bonuses for Fiestas Patrias and Christmas
👶 Additional days of paternity leave
🎂 Half day off on your birthday
🏦 Caja de Compensación Los Andes
🌍 Mundo ACHS discount
🎁 Gift for the birth of a child
🛍️ Buk discounts

Wellness program 3IT offers or subsidies mental and/or physical health activities.
Life insurance 3IT pays or copays life insurance for employees.
Digital library Access to digital books or subscriptions.
Health coverage 3IT pays or copays health insurance for employees.
Dental insurance 3IT pays or copays dental insurance for employees.
Performance bonus Extra compensation is offered upon meeting performance goals.
Informal dress code No dress code is enforced.
Beverages and snacks 3IT offers beverages and snacks for free consumption.
Parental leave over legal 3IT offers paid parental leave over the legal minimum.

About Data Engineering jobs

Remote Data Engineering jobs: data pipelines, ETL, data architecture, and big data. At RemoteJobs.lat we connect Latin American professionals with companies offering 100% remote work. All our listings let you work from any city, with payment in dollars or an international currency.

Salary range

$4,000 - $11,000 USD/mes

Open positions

165

Location

100% Remote, LATAM

Tip: You can also search listings for related skills such as Python, SQL,

Data Engineering salary ranges by seniority

Estimated ranges in USD/month for remote contracts with international companies. They vary by company, complementary stack, and the client's location.

Level Years of experience Range USD/month
Junior 0-2 $4,000 - $5,750
Semi-Senior 2-4 $5,400 - $7,850
Senior 4-7 $7,500 - $9,950
Lead/Staff 7+ $9,250 - $11,000

Companies hiring remote Data Engineering from LATAM

Some companies that have historically hired Data Engineering profiles to work 100% remotely from Latin America:

Mercado Libre Globant Auth0 Nubank Cloudwalk Stripe GitLab Crossover Toptal

Frequently asked questions

The typical range for a remote Data Engineer working for international companies is $4,000 - $11,000 USD/month. The exact amount depends on seniority, the company's country, and whether the contract is full-time or per project.

The most in-demand Data Engineering profiles tend to combine Python, SQL, and Spark. Adding one of these opens up more listings and usually raises the salary range by 15% to 30%.

For US/EU companies, yes: a minimum of B2-level English for technical interviews. There are alternatives at LATAM companies (Mercado Libre, Globant, Rappi) or agencies like Toptal where intermediate English is enough to start.

The 3 things that move the needle most: (1) a public GitHub with 2-3 solid projects relevant to Data Engineering, (2) a LinkedIn profile in English optimized for recruiters, and (3) applying to 20+ listings per week instead of 2-3.