Related skills:
Python SQL Spark Airflow
Gross salary $2800 - 3100 Full time
AWS Technical Lead – Data
  • BC Tecnología
  • Santiago (Hybrid)
SQL CI/CD AWS Lambda Data Architecture
At BC Tecnología, we help companies in finance, insurance, retail, and government with IT services, outsourcing, and staffing. Here, you will lead cloud data projects for major clients, making sure solutions are scalable and meet architecture standards. You will work in an agile environment, designing and implementing data solutions (ETL/ELT), governing data, and supporting cloud migrations. To do this, you will collaborate with Infrastructure, Development, and Business teams. You will also help continuously improve data quality, delivery, and governance, promoting good CI/CD practices and regulatory compliance. The schedule is hybrid: you will work from home and come into the office to collaborate more closely with the teams.


What you'll do

  • Lead data teams and AWS projects for data ingestion, processing, and storage.
  • Communicate with stakeholders and keep everyone aligned on expectations, scope, and timelines.
  • Design and review scalable data architectures (ETL/ELT, data lakes, data warehouses) using AWS services such as Glue, S3, Redshift, Lambda, and Step Functions (see the sketch after this list).
  • Ensure data is well governed, high quality, and secure, and that it follows good CI/CD and version-control practices.
  • Promote agile practices (Scrum/Kanban), technical leadership, and the growth of your team (including mentoring).
  • Identify and manage technical risks, define performance indicators, and create mitigation plans.
  • Collaborate with Infrastructure, Development, and Business teams to deliver solutions that meet strategic goals.
  • Participate in architecture reviews, solution design, and technical documentation.
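To make the orchestration work above a bit more concrete, here is a minimal, hypothetical sketch of registering a Step Functions state machine that chains a Glue ETL job and a Lambda post-processing step with boto3. The job name, function ARN, and role ARN are placeholders, not details taken from this posting.

```python
import json
import boto3

# Hypothetical Amazon States Language definition: run a Glue job, then a Lambda step.
definition = {
    "StartAt": "RunGlueJob",
    "States": {
        "RunGlueJob": {
            "Type": "Task",
            "Resource": "arn:aws:states:::glue:startJobRun.sync",
            "Parameters": {"JobName": "example-etl-job"},      # placeholder Glue job
            "Next": "PostProcess",
        },
        "PostProcess": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:example-post-process",
            "End": True,
        },
    },
}

sfn = boto3.client("stepfunctions")
response = sfn.create_state_machine(
    name="example-data-pipeline",                              # placeholder name
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/example-sfn-role", # placeholder role
)
print(response["stateMachineArn"])
```

In a real engagement the state machine would typically live in infrastructure-as-code and be deployed through the CI/CD pipeline the posting mentions.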

Your ideal profile

We need an AWS Technical Lead with at least 5 years of experience. You must have led cloud data projects and teams. Solid experience in data projects (ETL/ELT, scalable architectures) is key, along with in-depth knowledge of AWS services such as Glue, S3, Redshift, Lambda, and Step Functions. You also need experience with agile environments (Scrum/Kanban), data governance, CI/CD, and quality best practices. Ideally, you combine strong technical skills with leadership, effective communication, and a results-oriented mindset.
Your technical skills should include: data architecture design, pipeline orchestration, strong SQL, data modeling, security and compliance, stakeholder management, and cloud migrations.
And your soft skills: collaborative leadership, clear communication, strategic thinking, a solution-oriented attitude, the ability to influence, and teamwork in multidisciplinary settings.

Bonus points

AWS certifications (such as AWS Certified Solutions Architect – Professional or AWS Certified Data Analytics) are a plus. Experience with other orchestration tools, data science, or data observability and monitoring tools also counts in your favor. Knowledge of data governance, quality, and metadata, as well as prior work on projects in regulated industries, would also be very welcome.

Benefits

At BC Tecnología we foster a collaborative work environment. We value your commitment and your drive to keep learning. Here you will grow professionally, integrating with other teams and sharing knowledge.
You will have a hybrid work model based in Las Condes, letting you combine the flexibility of working from home with in-office collaboration for a better balance and a more dynamic day.
You will take part in innovative projects with major clients from different sectors. Our environment fosters inclusion, respect, and your technical and professional development.

$160000 - $180000 Full time
Data Engineer
  • Pivotal Health
  • Los Angeles
design salesforce python technical

About Pivotal Health

Pivotal Health is the leading technology platform that helps healthcare providers get paid fairly in an increasingly complex reimbursement landscape.

Today, many providers face persistent underpayment from health insurance companies, despite delivering high-quality care. While processes like IDR (Independent Dispute Resolution) were designed to promote fairness, they’re often administrative-heavy, time-consuming, and difficult to navigate without the right tools.

Pivotal Health combines software, data, and service into a seamlessly integrated, AI-driven platform that simplifies these complex reimbursement workflows. We help providers efficiently dispute underpaid claims, reduce administrative burden, and recover the reimbursement they’re entitled to, without adding more work to already stretched teams.

Our full-service IDR solution is just the starting point. We’re building solutions that enable providers to operate with clarity, control, and confidence across the reimbursement journey.

About the Role

We're hiring a Data Engineer to sit at the intersection of our analytics and engineering teams. You'll be responsible for making Pivotal's product data accessible, reliable, and ready for analysis, connecting data sources to our warehouse, building clean transformation pipelines, and ensuring our analysts have what they need to drive business decisions.

This is not a traditional software engineering role and it's not a pure analyst role either. You'll bring a strong technical foundation and apply it in service of business outcomes: faster reporting, better data access, and more reliable pipelines that the team can actually trust.

If you enjoy building the infrastructure that makes great analysis possible and care about the business impact of your work, this role is for you.

What You’ll Do

  • Own the pipeline from product database to analytics warehouse: Take full ownership of extracting data from our PostgreSQL product database and loading it into BigQuery. Design and maintain the ETL processes that make this happen reliably, with the right structure for downstream analytics use (see the sketch after this list).

  • Bring in new data sources: Expand our analytics footprint by integrating new data sources, including third-party tools like Salesforce, into our warehouse. You'll partner with our DevOps team to establish the right service accounts, permissions, and connection patterns to do this securely and correctly.

  • Build and maintain analytics-ready tables: Use dbt to design, build, and manage the transformation layer that turns raw data into clean, well-structured tables. You'll have real ownership over what the data looks like: what gets modeled, how it's shaped, and what makes it most useful for reporting.

  • Support reporting and business insights: Work alongside our analysts to support the reporting layer, ensuring data is fresh, accurate, and structured in a way that makes building dashboards and reports in Tableau, Power BI, or Metabase reliable and efficient.

  • Be the bridge between analytics and engineering: Attend engineering team meetings to stay ahead of product changes that could affect analytics. Serve as the connective tissue between both teams, translating data needs into technical solutions and keeping everyone aligned.
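As a rough illustration of the first responsibility above, here is a minimal sketch of one way to pull rows from a PostgreSQL table and load them into BigQuery with Python. The connection string, table names, project, and dataset are hypothetical placeholders; the posting does not describe Pivotal's actual pipeline.

```python
import pandas as pd
import sqlalchemy
from google.cloud import bigquery

# Placeholder connection string and source table, for illustration only.
engine = sqlalchemy.create_engine("postgresql+psycopg2://user:pass@host:5432/product_db")
df = pd.read_sql("SELECT id, status, amount_cents, updated_at FROM claims", engine)

client = bigquery.Client(project="example-project")
load_job = client.load_table_from_dataframe(
    df,
    "example-project.analytics_raw.claims",  # hypothetical destination table
    job_config=bigquery.LoadJobConfig(write_disposition="WRITE_TRUNCATE"),
)
load_job.result()  # wait for the load to finish before downstream models run
```

A dbt transformation layer, as described in the third bullet, would then turn raw tables like this one into the analytics-ready models used for reporting.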

Who You Are

  • Strong SQL skills with hands-on experience in modern cloud data warehouses: BigQuery, Snowflake, or Redshift

  • Proficient with dbt for managing SQL transformations. You understand how to write clean, maintainable, well-documented models

  • Comfortable with Python at a working level, enough to build and automate data workflows without needing to be a full software engineer

  • Experience with at least one BI or reporting tool (Tableau, Power BI, Metabase, or similar)

  • You think in business outcomes: your resume reflects the impact your work had, not just the tools you used

  • Self-directed and comfortable with ambiguity: you can identify what needs to be done and execute without heavy guidance

  • Collaborative by nature: you know how to work across teams with different levels of technical depth

  • Startup or high-growth company experience: you're used to environments where ownership is real and speed matters

Extra Credit If You Have

  • Hands-on experience with BigQuery specifically

  • Experience connecting BI tools to a cloud warehouse (e.g., Power BI to BigQuery)

  • Experience with Salesforce data or CRM integrations

  • Background in FinTech, HealthTech, or other data-rich industries

Why You’ll Love Working Here

We’re a collaborative, low-ego team on a mission to make healthcare reimbursement fairer for providers. While we primarily hire around our core hubs–Los Angeles and New York–we remain open to exceptional talent outside those regions. Remote and hybrid flexibility varies by role and team, and is outlined in each job description.

If you’re excited by solving complex problems and making a real-world impact, we’d love to hear from you.

Benefits Include:

  • Competitive compensation, including equity

  • Full health, dental, and vision coverage

  • Retirement savings plan through 401(k)

  • Flexible time off

  • Opportunities for company-wide connection and events

Ready to Make an Impact?
We’re building something meaningful, and we want you on the team.

Bring your ideas, curiosity, and drive, and let’s transform healthcare reimbursement together.

Employment Information

Work Authorization

Candidates must be authorized to work in the United States without current or future employer sponsorship.

Equal Employment Opportunity

Pivotal Health is an Equal Opportunity Employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. We do not discriminate on the basis of race, color, religion, sex, gender identity or expression, sexual orientation, national origin, age, disability, veteran status, or any other legally protected status.

Reasonable Accommodations

Pivotal Health provides reasonable accommodations for qualified individuals with disabilities in accordance with applicable laws. If you need assistance during the application or interview process, please let us know.

Background Checks

Employment is contingent upon successful completion of applicable background checks, where permitted by law.

At-Will Employment

Employment with Pivotal Health is at-will and may be terminated by either party at any time, with or without cause or notice, in accordance with applicable law.



$$$ Full time
Senior App & Frontend Developer AS233
  • Smart Working Solutions
  • Remote
frontend developer embedded architect

About Smart Working
At Smart Working, we believe your job should not only look right on paper but also feel right every day. This isn’t just another remote opportunity - it’s about finding where you truly belong, no matter where you are. From day one, you’re welcomed into a genuine community that values your growth and well-being.

Our mission is simple: to break down geographic barriers and connect skilled professionals with outstanding global teams and products for full-time, long-term roles. We help you discover meaningful work with teams that invest in your success, where you’re empowered to grow personally and professionally.

Join one of the highest-rated workplaces on Glassdoor and experience what it means to thrive in a truly remote-first world.

About the Role
This is a long-term, strategic role, not a short sprint. You'll be embedded in a collaborative engineering and analytics team, working across the full data lifecycle: ingestion, transformation, modelling, and surfacing insights through Looker. You'll work closely with stakeholders across commercial, product, and marketing to ensure data is reliable, scalable, and meaningful.

You'll be given real ownership. This is a role for someone who wants to shape standards, improve the architecture, and grow with a brand that takes its data seriously.



Responsibilities
  • Design, build, and maintain robust ETL/ELT pipelines that move data from source systems into Google BigQuery, ensuring reliability, scalability, and observability at every stage.
  • Develop and enforce data models and schema standards using best-practice SQL and dimensional modelling principles, with a focus on clarity, reuse, and performance.
  • Own the Google BigQuery environment, optimising queries, managing costs, enforcing data governance, and ensuring the platform scales alongside the business (see the sketch after this list).
  • Build and maintain Looker explores, LookML models, and dashboards that translate complex datasets into clear, actionable business intelligence for non-technical stakeholders.
  • Work across the full Google Cloud Platform stack, including Cloud Storage, Dataflow, Pub/Sub, Cloud Functions, and Composer, to architect end-to-end data solutions.
  • Partner with analytics, engineering, and commercial teams to understand data requirements and translate business problems into scalable technical solutions.
  • Champion data quality and testing frameworks, implementing monitoring and alerting so that issues are caught early and resolved quickly.
  • Contribute to documentation, coding standards, and architectural decision records so the team can move fast with confidence.
  • Mentor junior data team members and set the bar for engineering rigour across the data function.
  • Stay current with developments in the modern data stack and proactively recommend tooling or process improvements where appropriate.
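To give a flavour of the query-cost management mentioned in the BigQuery ownership point above, here is a minimal sketch that uses a dry run to estimate how much data a query would scan before running it for real. The project, dataset, and budget threshold are assumptions made for the example, not part of the team's actual setup.

```python
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

query = """
    SELECT order_id, SUM(revenue) AS revenue
    FROM `example-project.analytics.orders`   -- hypothetical table
    WHERE order_date >= '2024-01-01'
    GROUP BY order_id
"""

# A dry run validates the SQL and reports bytes processed without billing a scan.
dry_job = client.query(
    query, job_config=bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)
)
print(f"Estimated scan: {dry_job.total_bytes_processed / 1e9:.2f} GB")

# Only execute for real if the estimate stays under an (arbitrary) 10 GB budget.
if dry_job.total_bytes_processed < 10 * 1e9:
    rows = list(client.query(query).result())
```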


Requirements
  • 5+ years of experience in SQL and data modelling, with strong command of dimensional modelling, star schemas, and performance optimisation.
  • 3+ years working with Google BigQuery in a production environment.
  • 3+ years hands-on experience with Google Cloud Platform (Cloud Storage, Dataflow, Pub/Sub, Cloud Functions, Composer).
  • 3+ years building and maintaining ETL/ELT pipelines at scale.
  • 1+ year working with Looker and LookML to deliver business-facing dashboards and data products.
  • Demonstrable experience leading at least one data project end-to-end, from scoping through to delivery.
  • Able to communicate clearly with non-technical stakeholders about data limitations, timelines, and trade-offs.
  • Comfortable making pragmatic architecture decisions in a cloud-native, modern data stack environment.


Nice to Have
  • Experience with dbt (Data Build Tool) for transformation layer management and testing.
  • Familiarity with orchestration tools such as Apache Airflow or Cloud Composer.
  • Python skills for pipeline scripting, data validation, or automation.
  • Background in retail, ecommerce, or fashion, understanding how data flows across commercial and digital channels.
  • Exposure to real-time or streaming data pipelines using Pub/Sub or Dataflow.
  • Experience with Terraform or Infrastructure-as-Code practices in a GCP context.
  • Familiarity with data governance frameworks, cataloguing, and lineage tracking.


Benefits
  • Fixed Shifts: 12:00 PM - 9:30 PM IST (Summer) | 1:00 PM - 10:30 PM IST (Winter)
  • No Weekend Work: Real work-life balance, not just words
  • Day 1 Benefits: Laptop and full medical insurance provided
  • Support That Matters: Mentorship, community, and forums where ideas are shared
  • True Belonging: A long-term career where your contributions are valued



At Smart Working, you’ll never be just another remote hire.

Be a Smart Worker - valued, empowered, and part of a culture that celebrates integrity, excellence, and ambition.

If that sounds like your kind of place, we’d love to hear your story. 



$$$ Full time
Automation Jira Confluence Project Management
Niuro connects projects with elite tech teams, collaborating with leading U.S. companies. We empower projects by providing autonomous, high-performance engineering squads and handle end-to-end administrative tasks so clients can accelerate delivery. The Health and Life Sciences sector is a strategic focus for us, including healthcare providers, pharmaceutical companies, and medical technology firms. This role contributes to impactful, technically rigorous initiatives that drive innovation, while offering ongoing career development, leadership opportunities, and a pathway to long-term collaboration.
As part of Niuro’s global ecosystem, you will join a multidisciplinary team dedicated to delivering scalable, high-quality Salesforce solutions for complex healthcare workflows. You will engage with a diverse client base, operate remotely across LATAM, and benefit from a robust support infrastructure designed to accelerate success and enable you to focus on delivering exceptional results.


Key Responsibilities

  • Serve as the primary client contact for assigned Salesforce engagements, leading discovery sessions, clarifying requirements, communicating trade-offs, and maintaining trusted advisor relationships.
  • Translate business needs into technical specifications and execute hands-on Salesforce configuration, including custom objects, automation (Flows, Process Builder, Automation Rules), and UX design considerations.
  • Own end-to-end project execution using JIRA: backlog creation and prioritization, sprint planning, risk identification, and on-time delivery of milestones.
  • Produce comprehensive documentation in Confluence: meeting notes, detailed requirements specs, solution architecture decisions, and client-facing project plans.
  • Lead end-to-end QA testing: design test cases, validate configurations against acceptance criteria, reproduce issues, and sign off on release readiness.
  • Manage data migration workstreams: profile source data, design mapping strategies, execute loads (native or third-party tools), and verify post-migration data integrity (see the sketch after this list).
  • Facilitate internal alignment meetings to ensure engineering handoffs are crisp and blockers are resolved within 24 hours.
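As a loose illustration of the post-migration integrity checks mentioned in the data migration bullet above, here is a minimal pandas sketch comparing record counts, key coverage, and one mapped field between a source extract and a post-load target extract. The file names, the key column, and the compared field are hypothetical; production Salesforce loads would normally go through a data loader or the Bulk API rather than CSV files.

```python
import pandas as pd

# Hypothetical extracts taken before and after the migration.
source = pd.read_csv("source_accounts.csv")
target = pd.read_csv("salesforce_accounts_export.csv")

key = "external_id"  # assumed unique business key shared by both systems

# 1. Row counts should match, or any difference should be explainable.
print(f"source rows: {len(source)}, target rows: {len(target)}")

# 2. Every source key should exist in the target after the load.
missing = set(source[key]) - set(target[key])
print(f"keys missing from target: {len(missing)}")

# 3. Spot-check that a mapped field survived the transformation unchanged.
merged = source.merge(target, on=key, suffixes=("_src", "_tgt"))
mismatched = merged[merged["account_name_src"] != merged["account_name_tgt"]]
print(f"rows with account_name mismatches: {len(mismatched)}")
```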

What You’ll Bring

We are seeking a Senior Salesforce Consultant with 5+ years of hands-on experience in Salesforce implementation, consulting, or platform management. You will balance strategic client partnership with disciplined project execution in a fast-moving environment.
Required skills include deep expertise with Salesforce configuration: custom fields, Lightning pages, Flows, validation rules, and security models. You should have a proven track record of direct client-facing work in consulting or professional services, demonstrated ownership of timelines and deliverables, and strong experience using JIRA for task tracking and sprint coordination. Excellent documentation skills (Confluence or equivalent) and experience conducting formal QA testing cycles are essential. Competence with data migration processes (ETL, mapping, loading, and integrity verification) is highly desirable.
Healthcare industry knowledge accelerates impact but is not required to start. You must be comfortable with ambiguity, able to switch contexts rapidly between stakeholder management, technical configuration, and quality assurance, and be adept at producing clear, actionable artifacts for clients and internal teams.

Desirable Skills & Experience

Experience delivering large-scale Salesforce implementations within Health and Life Sciences is highly advantageous. Certifications such as Salesforce Certified Administrator, Sales Cloud Consultant, Service Cloud Consultant, or Platform Developer I/II are a plus. Familiarity with data privacy regulations common to healthcare (e.g., HIPAA) and secure handling of patient data is beneficial. Strong stakeholder management, negotiation, and presentation skills, coupled with a collaborative mindset and a demonstrated ability to drive results in multi-year client engagements, are desirable traits.

What Niuro Offers

We provide the opportunity to participate in impactful and technically rigorous industrial data projects that drive innovation and professional growth. Our work environment emphasizes technical excellence, collaboration, and continuous innovation.
Niuro supports a 100% remote work model, allowing flexibility in work location globally. We invest in career development through ongoing training programs and leadership opportunities, ensuring continuous growth and success.
Upon successful completion of the initial contract, there is potential for long-term collaboration and stable, full-time employment, reflecting our long-term commitment to our team members.
Joining Niuro means becoming part of a global community dedicated to technological excellence and benefiting from a strong administrative support infrastructure that enables you to focus on impactful work without distraction.

$$$ Full time
.NET / SQL / Angular Developer
  • BC Tecnología
  • Santiago (Hybrid)
Python Scrum MVC Microservices
BC Tecnología is an IT consultancy that manages portfolios, develops projects, and offers outsourcing and professional recruitment for Technology Infrastructure, Software Development, and Business Unit areas. The project focuses on data migrations between platforms and on developing and maintaining solutions based on SQL Server, .NET, and an Angular front end, aimed at clients in sectors such as financial services, insurance, retail, and government. The role involves working in agile teams to deliver high-quality software, with a focus on performance, scalability, and compliance with the Product Owner's requirements and digital architecture standards. You will take part in continuous improvement initiatives, data migrations, and microservices development in an advanced technology environment, with an emphasis on good testing practices and incremental delivery.


Main responsibilities

  • Develop and maintain applications and processes using SQL Server and SQL Server Integration Services (SSIS), ASP.NET, and .NET Framework 4.x.
  • Build software solutions that use resources (memory, disk, CPU) efficiently and meet the requirements and functionality defined by the Product Owner.
  • Write functional, maintainable, high-quality code for the product increment, covering back end and front end (MVC with Angular, and Python where applicable).
  • Design and implement microservices, managing their lifecycle and deployment in cloud environments such as AWS.
  • Run unit and integration tests, fix defects found in QA, and ensure product increments are production-ready at the end of each sprint.
  • Share collective ownership of the sprint increment's code and look for continuous improvements in deliverables and processes.
  • Analyze and interpret data to support decision-making, linking business requirements to robust technical solutions.
  • Collaborate in agile Scrum teams, communicating effectively and documenting technical and functional artifacts.
  • Handle data migrations between platforms, with advanced knowledge of bulk (batch) processes and integration tools.

Description

We are looking for a Senior Developer with solid experience in data migrations and full-stack development, able to work in a banking and services environment. The ideal candidate has a proven track record delivering complex solutions that integrate the presentation, business, and data layers, and shows advanced analytical skills for modeling and transforming information. Experience with SQL Server, SSIS, .NET, MVC with Angular, and microservices development is required. You will work with Scrum methodologies and collaborate with cross-functional teams to deliver high-quality, scalable, and secure solutions. Certifications in .NET, SQL Server, and/or Scrum are valued, as is experience with cloud platforms. The role is hybrid, with on-site presence in downtown Santiago and coordination with teams in Las Condes according to the company's arrangements.

Desirable requirements

University degree in Systems Engineering, Computer Science, or a related field. At least 5 years of software development experience on similar projects. Advanced command of HTML5, CSS, and JavaScript, knowledge of Angular, and mobile/web development. Proven experience with SQL Server, SSIS, ETL, ASP.NET, MVC with Angular, and Python. Experience with data migrations, batch/bulk processes (CMD), and microservices development is desirable. Advanced analytical skills, good communication, and teamwork. Knowledge of Genesys Cloud and Salesforce Marketing Cloud is a plus. Experience in banking environments and in settings requiring high security and regulatory compliance is valued.

Benefits

At BC Tecnología we promote a collaborative work environment that values commitment and continuous learning. Our culture is oriented toward professional growth through integration and knowledge sharing across teams.
The hybrid arrangement we offer, based in Las Condes, combines the flexibility of remote work with in-person collaboration, supporting a better balance and a more dynamic workday.
You will take part in innovative projects with high-profile clients across diverse sectors, in an environment that fosters inclusion, respect, and technical and professional development.

Gross salary $1000 - 1300 Full time
Web Developer
  • Coderslab.io
  • Lima (Hybrid)
HTML5 Python BigQuery ETL

CodersLab is a company dedicated to building solutions in the IT industry. We are currently expanding our teams globally to position our products in more Latin American countries, which is why we are looking for a Web Developer to join our team.

You will join a challenging and ambitious team eager to innovate in the market, where your ideas and contributions will be highly valuable to the business.

Apply now for this amazing challenge!


Job functions

  • Develop channel management functionality with Python and HTML5, both back end and front end.
  • Migrate functionality to the web.
  • Write functional documentation for the developments.
  • Degree in systems or a related field.
  • Experience in any sector; experience in the financial sector is a plus.

Qualifications and requirements

2 to 3 years of experience

  • Experience with HTML5
  • Experience with SQL Server
  • Experience with Python
  • Experience with BigQuery
  • Experience with GitLab
  • Experience with ETLs
  • Experience in any sector; experience in the financial sector is a plus.

Conditions

Contract type: invoiced fee-for-service (recibo por honorarios)
Work mode: hybrid (3 days per week in the office)

$$$ Full time
Big Data Engineer
  • Oowlish Technology
  • Remote
python support software growth

Join Our Team


Oowlish, one of Latin America's rapidly expanding software development companies, is seeking experienced technology professionals to enhance our diverse and vibrant team.


As a valued member of Oowlish, you will collaborate with premier clients from the United States and Europe, contributing to pioneering digital solutions. Our commitment to creating a nurturing work environment is recognized by our certification as a Great Place to Work, where you will have opportunities for professional development, growth, and a chance to make a significant international impact.


We offer the convenience of remote work, allowing you to craft a work-life balance that suits your personal and professional needs. We're looking for candidates who are passionate about technology, proficient in English, and excited to engage in remote collaboration for a worldwide presence.


About the Role:


We are seeking a hands-on Big Data Engineer to support and enhance an AWS-based data platform, focusing on pipeline reliability, scalable processing, and performance optimization. This role requires strong Python expertise, deep familiarity with AWS data services, and the ability to maintain production-grade data workflows.


You will work on event-driven pipelines, contribute to CI/CD improvements, and collaborate on platform reliability initiatives. This role is ideal for someone who enjoys building and maintaining data infrastructure, optimizing large-scale data processing systems, and working in cloud-native environments.


This is a 6-month engagement, aligned to ET time zone.



Key Responsibilities:
  • Develop and maintain data processing logic using Python
  • Build, optimize, and support data pipelines using AWS Glue and Lambda (see the sketch after this list)
  • Write and optimize complex SQL queries for analytics and operational workloads
  • Support platform reliability and pipeline monitoring
  • Contribute to CI/CD processes using GitHub and GitHub Actions
  • Collaborate on infrastructure improvements using Infrastructure-as-Code principles
  • Troubleshoot and resolve pipeline failures and performance issues
  • Support data consumption layers used by BI tools
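As a loose sketch of the event-driven Glue/Lambda pattern mentioned above, here is a hypothetical Lambda handler that starts a Glue job run whenever a new object lands in S3. The job name and argument key are placeholders, not details of the actual platform this role supports.

```python
import boto3

glue = boto3.client("glue")

def handler(event, context):
    """Triggered by an S3 put event; kicks off a Glue ETL run for each new object."""
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        response = glue.start_job_run(
            JobName="example-etl-job",                    # placeholder Glue job name
            Arguments={"--source_path": f"s3://{bucket}/{key}"},
        )
        print(f"started {response['JobRunId']} for s3://{bucket}/{key}")
    return {"status": "ok"}
```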


Must Have:
  • 4+ years of experience as a Data Engineer / Big Data Engineer
  • Strong hands-on Python experience (data processing and application logic)
  • Advanced SQL skills (query optimization, performance tuning)
  • Production experience with AWS Lambda and AWS Glue
  • Experience working with CI/CD tools (GitHub, GitHub Actions)
  • Familiarity with Snowflake and/or Aurora
  • Understanding of Infrastructure-as-Code (IaC) concepts
  • Comfortable working in the ET time zone


Nice to Have:
  • Experience with BI tools (Sigma preferred)
  • Experience with event-driven architectures
  • Exposure to enterprise-scale data platforms




Benefits & Perks:


Home office;

Competitive compensation based on experience;

Career plans to allow for extensive growth in the company;

International Projects;

Oowlish English Program (Technical and Conversational);

Oowlish Fitness with Total Pass;

Games and Competitions;



You can also apply here:


Website: https://www.oowlish.com/work-with-us/

LinkedIn: https://www.linkedin.com/company/oowlish/jobs/

Instagram: https://www.instagram.com/oowlishtechnology/





Gross salary $4800 - 5700 Full time
Tech Manager
  • Artefact LatAm
  • Ciudad de México (Hybrid)
Business Intelligence Data Architecture Problem Solving Data Modeling

At Artefact LatAm, we are a leading consultancy focused on accelerating the adoption of data and artificial intelligence to generate positive impact.

As Tech Manager, you will lead the technical vision and strategic execution of advanced Data Engineering, BI, and AI solutions, guaranteeing scalable, high-impact architectures. You will be the catalyst for complex digital transformations, managing multidisciplinary teams and acting as the critical bridge between clients' business objectives and technological innovation. Your approach will integrate delivery excellence, data governance, and talent development, consolidating global standards that position the company as a technical benchmark in the market.


Job functions

Data and technology capabilities: Design, implement, and scale robust solutions (predictive models, AI-driven segmentation, and real-time BI), guaranteeing technical excellence, scalability, and reliability.

Transformation leadership: Act as technical lead on data and AI initiatives, guiding teams through complex transformations following engineering best practices and sound architecture.

Strategy and architecture: Define the technical vision for data platforms and BI ecosystems, aligning infrastructure, cloud, governance, and security decisions with business objectives.

Project excellence: Own end-to-end execution, quality, and performance. Anticipate technical risks and manage dependencies to ensure on-time, in-scope delivery.

Team and client management: Lead and advise multidisciplinary teams, fostering an engineering culture. Act as the main technical contact for clients, translating business needs into scalable solutions.

Continuous innovation: Evaluate new data and AI technologies and tools, driving experimentation and proof-of-concept validation.

Qualifications and requirements

  • 8 years of experience leading data-related projects
  • Proven technical leadership: extensive experience directing data, BI, or AI projects in complex environments.
  • Engineering mindset: solid knowledge of data architectures, cloud platforms, data pipelines, and AI/ML lifecycles.
  • Analytical and problem-solving skills: a passion for solving complex problems with data and technology.

You earn extra points if...

  • Innovative: curious and forward-looking, always exploring new tools and approaches to improve solutions and efficiency.
  • Autonomous and accountable: able to drive technical initiatives independently and take full responsibility for outcomes.
  • Strong communicator: able to bridge technical teams and non-technical stakeholders.

Conditions

  • Fast professional growth: a mentoring plan for training and career advancement, with raise and promotion review cycles every 6 months.
  • Up to 11 vacation days on top of the legal minimum, so you can rest and keep a healthy work-life balance.
  • Participation in the company profit-sharing bonus, plus referral bonuses for new hires and clients.
  • Half a day off on your birthday, plus a small gift.
  • Biweekly team lunches paid for at our hubs (Santiago, Bogotá, Lima, and Mexico City).
  • Flexible hours and goal-oriented work.
  • Remote work, with the option to go hybrid (office in Santiago de Chile, paid coworking space in Bogotá, Lima, and Mexico City).
  • Extended postnatal leave for men, and coverage of the difference over what the health system pays for women (Chile).

Flexible hours Flexible schedule and freedom for attending family needs or personal errands.
Internal talks Artefact LatAm offers space for internal talks or presentations during working hours.
Meals provided Artefact LatAm provides free lunch and/or other kinds of meals.
Partially remote You can work from your home some days a week.
Digital library Access to digital books or subscriptions.
Company retreats Team-building activities outside the premises.
Computer repairs Artefact LatAm covers some computer repair expenses.
Computer provided Artefact LatAm provides a computer for your work.
Performance bonus Extra compensation is offered upon meeting performance goals.
Personal coaching Artefact LatAm offers counseling or personal coaching to employees.
Conference stipend Artefact LatAm covers tickets and/or some expenses for conferences related to the position.
Informal dress code No dress code is enforced.
Vacation over legal Artefact LatAm gives you paid vacations over the legal minimum.
Vacation on birthday Your birthday counts as an extra day of vacation.
Parental leave over legal Artefact LatAm offers paid parental leave over the legal minimum.
$$$ Full time
Senior Machine Learning Engineer
  • Fetch
  • United States
software mobile senior engineer

What we're building and why we're building it. 

Every month, millions of people use Fetch earning rewards for buying brands they love, and a whole lot more. Whether shopping in the grocery aisle, grabbing a bite at the drive-through or playing a favorite mobile game, Fetch empowers consumers to live rewarded throughout their day. To date, we've delivered more than $1 billion in rewards and earned more than 5 million five-star reviews from happy users. 

It's not just our users who believe in Fetch: with investments from SoftBank, Univision, and Hamilton Lane, and partnerships ranging from challenger brands to Fortune 500 companies, Fetch is reshaping how brands and consumers connect in the marketplace. When you work at Fetch, you play a vital role in a platform that drives brand loyalty and creates lifelong consumers with the power of Fetch points. User and partner success are at the heart of everything we do, and we extend that same commitment to our employees.

At Fetch, we value curiosity, adaptability, and the confidence to explore new tools, especially AI, to drive smarter, faster work. You don't need to be an expert, but you should be ready to learn quickly and think critically. We welcome learners who move fast, challenge the status quo, and shape what's next, with us.  Ranked as one of America's Best Startup Employers by Forbes for two years in a row, Fetch fosters a people-first culture rooted in trust, accountability, and innovation. We encourage our employees to challenge ideas, think bigger, and always bring the fun to Fetch.

Fetch is an equal employment opportunity employer.

About the Role:

We are seeking a Machine Learning Software Engineer to join Fetch's Scan, Match & Catalog team. This role sits at the intersection of applied machine learning, data engineering, and production systems, with a focus on improving receipt understanding, product matching, and catalog enrichment at scale. You w


$$$ Full time
Data Engineer AWS
  • BC Tecnología
SQL Big Data AWS Lambda Data Architecture
At BC Tecnología we are looking for an AWS Data Engineer to collaborate on high-impact projects for clients in sectors such as financial services, insurance, retail, and government. Our team, part of an IT consultancy focused on innovative solutions, works in Big Data and cloud environments, designing and operating scalable infrastructure for data processing and advanced analytics. You will take part in migration projects, data pipeline design, implementation of solutions on AWS, and data operations, with a focus on quality, security, and compliance. You will join an agile team that drives business-oriented solutions and operational efficiency.


Main responsibilities

  • Design, build, and maintain data pipelines in AWS environments (Glue, Lambda, Step Functions, Redshift, Athena, Lake Formation).
  • Manage data architecture and clusters, ensuring performance, scalability, and information security.
  • Implement IAM policies and access controls, ensuring compliance and good security practices (see the sketch after this list).
  • Collaborate with data scientists and business teams to turn requirements into efficient technical solutions.
  • Take part in continuous improvement of processes, automation, and monitoring of data flows.
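As a rough sketch of the kind of IAM policy work mentioned above, here is a hypothetical example of attaching an inline policy that restricts a role to reading a single data-lake prefix in S3, using boto3. The role, bucket, and prefix names are placeholders, not details from this posting.

```python
import json
import boto3

# Hypothetical least-privilege policy: read-only access to one curated prefix.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::example-datalake",
            "Condition": {"StringLike": {"s3:prefix": "curated/sales/*"}},
        },
        {
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example-datalake/curated/sales/*",
        },
    ],
}

iam = boto3.client("iam")
iam.put_role_policy(
    RoleName="example-analytics-role",   # placeholder role name
    PolicyName="read-curated-sales",
    PolicyDocument=json.dumps(policy),
)
```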

Profile and skills

  • At least 2 years of experience as a Data Engineer, preferably in Big Data and AWS cloud environments.
  • Knowledge of AWS: Glue, Lambda, Step Functions, Redshift, Lake Formation, SQL, Athena, and IAM policy management.
  • Experience with databases and cluster architectures; able to optimize performance and costs.
  • Strong problem-solving skills, analytical thinking, and a results-oriented mindset.
  • Good communicator, able to work in agile teams and adapt solutions to business requirements.
  • Languages: Spanish; technical English skills are valued.

Desirable requirements

  • AWS certifications (for example, AWS Data Analytics, AWS Solutions Architect).
  • Experience with data orchestration and additional orchestration tools (for example, Step Functions, Airflow).
  • Knowledge of data security, regulatory compliance, and DevOps/DataOps best practices.
  • Experience with data migration projects, handling of sensitive data, and pipeline observability.

Benefits and environment

At BC Tecnología we promote a collaborative work environment that values commitment and continuous learning. Our culture is oriented toward professional growth through integration and knowledge sharing across teams.
The hybrid arrangement we offer, based in Las Condes, combines the flexibility of remote work with in-person collaboration, supporting a better balance and a more dynamic workday.
You will take part in innovative projects with high-profile clients across diverse sectors, in an environment that fosters inclusion, respect, and technical and professional development.

Fully remote You can work from anywhere in the world.
Health coverage BC Tecnología pays or copays health insurance for employees.
Computer provided BC Tecnología provides a computer for your work.
$$$ Full time
react architect design saas

Distinguished Tech Innovator:

3Pillar warmly extends an invitation for you to join an elite team of visionaries. Beyond software development, we are dedicated to engineering solutions that challenge conventional norms. Envision yourself steering projects that redefine urban living, establishing new media channels for enterprise companies, or driving innovation in healthcare.

Your invaluable expertise will serve as the cornerstone in shaping the future direction of our endeavors.


This role is the primary expert within a technology stack. The Architect owns the decision making around high-level design choices and dictates technical standards, including software coding standards, tools, and platforms.  The ideal candidate will thrive in a collaborative environment and be engaged in the development process. 



Key Responsibilities:
  • Act as the emissary of the architecture.  Diagram milestones and call out red flags before they become problematic.
  • Technical owner from design to resolution of tailored solutions to sophisticated problems on cloud platforms based on client requirements and other constraints.
  • Partners with appropriate stakeholders to determine functional and nonfunctional requirements, as well as business goals, for a set of scenarios.
  • Assess and plan for new technology insertion.
  • Manage risk identification and risk mitigation strategies associated with the architecture.
  • Influence and communicate long-term product vision, technical vision, development strategy and roadmap.
  • Contribute to code reviews, documentation and architectural artifacts.
  • Active leader in the Architecture Practice community, mentoring Engineers and others through Communities of Practice (CoPs) or on project teams, supporting the growth of technical capabilities.


Minimum Qualifications:
  • A Bachelor’s degree or higher in Computer Science or a related field.
  • A minimum of 5+ years of experience/expertise working as a Software Architect, with proficiency in the specified technologies:
  • Azure Cloud Services in a React/Node application environment
  • Microsoft Azure AZ-305 certification (must have)
  • Node.js backend framework
  • Must have TypeScript experience
  • NestJS/ExpressJS exposure (good to have)
  • Zod schema validation (nice to have)
  • GitHub, GitHub Actions
  • Orchestration: Kubernetes, Azure Service Bus
  • Database: Postgres, Sequelize ORM (MongoDB nice to have)
  • Python for ETL process (nice to have)
  • WorkOS authentication via SSO (nice to have)

  • High level of English proficiency required to interact with a globally-based development team.
  • Communicate in a clear and understandable manner with clients, and be able to articulate the details of the designed architecture using the appropriate level of technical language.
  • Natural leader with critical reasoning and good decision making skills.
  • Ability to raise red flags on the client or team side due to technical blockers
  • Excellent diagramming and planning skills
  • Have extremely good knowledge on SDLC processes and familiarity with actionable metrics and KPIs.
  • Operational excellence in design methodologies and architectural patterns across multiple platforms.
  • Ability to work on multiple parallel projects and utilize time management skills and multitasking capabilities.
  • Experience leading Agile software development methodologies.
  • Experience designing production pipelines: DevOps and CI/CD practices and tools.
  • Demonstrate mentorship and thought leadership to engineers and decision-makers throughout the organization.


Additional Experience Desired:
  • Foundational knowledge in Data Analysis/Modelling/Architecture, ETL Dataflows and  good understanding of highly scalable distributed and cloud-native data stores. Specifically Serverless architecture.
  • Understand and able to write infrastructure as code
  • Policy-based access control systems (e.g., Cerbos, OPA)
  • Multi-tenant SaaS application design
  • Experience in designing applications involving more than one technology platform (web, desktop, mobile). 
  • Experience in designing SaaS or highly scalable distributed applications on the cloud.
  • Financial management experience and ROI calculation.
  • Solutions Architect certification on major cloud platforms (Azure)
  • TOGAF Certified.


What is it like working for 3Pillar Global?
  • At 3Pillar, we offer a world of opportunity:
  • Imagine a flexible work environment - whether it's the office, your home, or a blend of both. From interviews to onboarding, we embody a remote-first approach.
  • You will be part of a global team, learning from top talent around the world and across cultures, speaking English every day. Our global workforce enables our team to leverage global resources to accomplish our work in efficient and effective teams.
  • We're big on your well-being - as a company, we spend a whole trimester in our annual cycle focused on wellbeing. Whether it is taking advantage of fitness offerings, mental health plans (country-dependent), or simply leveraging generous time off, we want all of our team members operating at their best.
  • Our professional services model enables us to accelerate career growth and development opportunities - across projects, offerings, and industries.
  • We are an equal opportunity employer. It goes without saying that we live by values like Intrinsic Dignity and Open Collaboration to create cutting-edge technology AND reinforce our commitment to diversity - globally and locally.

Join us and be a part of a global tech community!
Check out our Linkedin site and Careers page to learn more about what it's like to be part of our #oneteam!
#LI-Remote



Gross salary $2000 - 2200 Full time
Python SQL Spark CI/CD
Interfell connects companies with LATAM IT talent, managing staffing and recruiting processes to promote remote work and digital transformation. Our goal is to strengthen inclusion and work-life balance, providing a comprehensive, high-quality hiring experience. This position is part of a team focused on generating sales opportunities and relationships with potential clients, contributing to the growth of our operations in the region.
As a Data Architect, you will be responsible for designing and defining the architecture of a multitenant Data Lake on AWS, guaranteeing scalability, security, governance, and room to grow as multiple data sources are integrated.
This role is key to establishing technical standards that allow new data sources to be integrated consistently, ensuring quality, traceability, and processing efficiency.
This is a 2-month contract.


Job functions


Multitenant architecture design

  • Design the Data Lake architecture on AWS for multiple clients or data domains
  • Define partitioning schemes, namespaces, and per-tenant access control
  • Establish the Data Lake layers (RAW, PROCESSED, CURATED)
  • Design data organization and partitioning strategies (see the sketch after this list)
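As a small illustration of the layering and per-tenant partitioning described above, here is a hypothetical Python helper that builds S3 prefixes for a RAW/PROCESSED/CURATED layout partitioned by tenant and ingestion date. The bucket name and key scheme are assumptions made for the example, not the actual design this role would produce.

```python
from datetime import date

BUCKET = "example-multitenant-datalake"      # placeholder bucket
LAYERS = {"raw", "processed", "curated"}     # RAW / PROCESSED / CURATED layers

def s3_prefix(layer: str, tenant: str, source: str, day: date) -> str:
    """Build an S3 prefix that isolates each tenant and partitions by ingestion date."""
    if layer not in LAYERS:
        raise ValueError(f"unknown layer: {layer}")
    return (
        f"s3://{BUCKET}/{layer}/tenant={tenant}/source={source}/"
        f"year={day.year}/month={day.month:02d}/day={day.day:02d}/"
    )

# Example: where today's CRM extract for tenant 'acme' would land in the RAW layer.
print(s3_prefix("raw", "acme", "crm", date(2024, 5, 1)))
# s3://example-multitenant-datalake/raw/tenant=acme/source=crm/year=2024/month=05/day=01/
```

Per-tenant IAM or Lake Formation permissions could then be scoped to the `tenant=` prefix, which is one common way to enforce isolation in a layout like this.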

CI/CD standards definition

  • Design the CI/CD framework for data pipelines
  • Define automated deployment processes
  • Establish the repository structure and versioning strategy

Data ingestion strategy

  • Define ingestion strategies for APIs, databases, and streaming
  • Design integration patterns using AWS Glue, DMS, and Kafka

Data governance and quality

  • Establish quality standards (avoid nulls and duplicates, enforce primary keys)
  • Define cataloging, metadata, and access control policies

Optimization and scalability

  • Design the architecture for growth in data volume and sources
  • Define AWS cost-optimization strategies

Technical guidance

  • Provide technical guidance to Data Engineers and DevOps during implementation
  • Validate pipelines and architecture decisions

Qualifications and requirements


Education and experience

  • Degree in Systems Engineering, Computer Science, or a related field
  • 4+ years designing data architectures
  • Experience with Data Lake, Medallion, and multitenant architectures
  • Experience defining transformation rules between layers
  • Experience establishing ingestion and transformation standards

Technical skills

  • AWS (S3, Glue, DMS, Kafka, IAM)
  • Data modeling
  • Spark, SQL, and Python
  • Terraform and Databricks
  • Multitenant architectures
  • CI/CD pipelines
  • Data governance
  • AWS cost optimization

Soft skills

  • Strategic design skills
  • Communication with technical and business stakeholders
  • Analytical thinking

Conditions

Growth opportunity within a multi-level team
Vacations and holidays
Flexibility and autonomy
Payment in USD
Remote work - LATAM

Fully remote You can work from anywhere in the world.
Gross salary $4000 - 6350 Full time
JavaScript Python PostgreSQL REST API
Ruzora is a LATAM-focused staffing partner helping U.S. startups hire top engineering talent. We’re hiring a Senior Fullstack Python Engineer (React + Python) to join our partner companies building data-intensive applications for innovative U.S. startups. In this role, we’ll have you work across the stack—building reliable Python backend services (Django/FastAPI) and responsive React frontends—so users get seamless experiences backed by robust, well-tested APIs. You’ll collaborate closely with data teams to support data pipeline integrations, and help drive architecture and code quality through thoughtful reviews and clean implementation.


Job functions:

As a Senior Fullstack Python Engineer, we will have you:
  • Develop and maintain Python backend services (Django, FastAPI, or Flask); see the sketch after this list
  • Build responsive React frontends with modern tooling
  • Design and implement RESTful APIs and database schemas
  • Write comprehensive tests and maintain code quality
  • Collaborate with data teams on data pipeline integrations
  • Participate in architecture discussions and code reviews

Qualifications and requirements:

We’re looking for a senior engineer who enjoys working across the stack and is passionate about clean, efficient code.
  • 5+ years of professional software development experience
  • 3+ years with Python web frameworks (Django, FastAPI, Flask)
  • 2+ years with React and modern JavaScript/TypeScript
  • Experience with PostgreSQL and database optimization
  • Familiarity with async programming in Python
  • Excellent English communication skills (B2+)
We also value strong collaboration and ownership: we expect you to communicate clearly, contribute to architecture conversations, and maintain code quality through testing and review. Because this is a 100% remote role from anywhere in LATAM, we’ll also expect you to be dependable with async workflows and to keep progress transparent.

Desirable skills:

  • Experience with data processing (Pandas, NumPy)
  • Knowledge of message queues (Redis, RabbitMQ, Celery)
  • Experience with GraphQL
  • Background in data engineering or analytics

Conditions:

  • Competitive USD salary ($48,000 - $72,000/year), paid monthly via Deel
  • 100% remote work from anywhere in LATAM
  • Flexible working hours
  • Professional development budget
  • Health insurance stipend
  • Equipment allowance
  • Paid time off
We’ll also provide exposure to cutting-edge stacks while working with partner companies building data-intensive applications for U.S. startups.

Fully remote You can work from anywhere in the world.
Flexible hours Flexible schedule and freedom for attending family needs or personal errands.
Health coverage Ruzora pays or copays health insurance for employees.
Computer provided Ruzora provides a computer for your work.
$$$ Full time
Senior Data Engineer
  • Capnexus
  • Remote
amazon system software cloud
Capnexus is a comprehensive services provider. Our team consists of outstanding professionals, highly experienced in designing, building, and supporting retail software. We see ourselves as a build-as-a-service provider who follows a repeatable business pattern that can be applied to a variety of platforms and verticals. Having a culture built on outcomes and delivery at the core of the business, Capnexus is providing its customers with a complete suite of services for software development, system analysis, integration, implementation, and support, as well as the option to engage a single team to perform all the services they require.

Who You Are and What You'll Do:

Capnexus is looking for a highly skilled Senior AWS Data Engineer to lead data architecture, pipeline development, and ERP integration for a 12-week AI-powered modernization engagement in the construction industry. This role is focused on designing and implementing the data engineering backbone of an intelligent subcontractor pre-qualification platform, including CMIC ERP API integration, Amazon Textract data extraction pipelines, ETL development using AWS Glue, and data quality validation. This is an exciting opportunity to apply advanced cloud data engineering skills on a platform that leverages generative AI to automate and modernize enterprise workflows.

Responsibilities:

$$$ Full time
Data Analyst II
  • ComputerCare
  • Remote
analyst system python technical

ComputerCare has spent more than 20 years building something rare in the IT world: a company where technical excellence and genuine human connection are valued equally. We're the trusted partner that IT leaders turn to when technology can't afford to fail. As a woman-owned business serving innovative companies worldwide, we combine certified technical expertise with a human approach. Whether it's managing complex device lifecycles for global teams or performing authorized repairs for Apple, Lenovo, HP and Dell devices, our work directly impacts how thousands of people stay productive every day. We never outsource our work because we believe in accountability, quality, and building lasting relationships—with our clients and as a team.


If you're passionate about technology, take pride in solving real problems, and want to be part of a company that values both technical excellence and the people behind it, ComputerCare is where you belong.


Come join us in our mission of being the Human Side of Hardware! 


We’re looking for a Data Analyst II to serve as a key point of contact and subject matter expert for data-related requests and system updates. You’ll analyze, extract, and interpret data from multiple systems, including SQL databases and reporting tools, and implement data solutions that support business workflows and decision-making.


If you enjoy solving complex problems with data and making an impact, we want you on our team!



What You'll Do:
  • Assist in designing and structuring database architecture to support scalable data storage, efficient querying, and optimized performance.
  • Demonstrate understanding of relational databases, including tables, schemas, indexing, normalization, and relationships.
  • Help build and maintain data pipelines to move and transform data between systems while ensuring accuracy and reliability.
  • Create dashboards, reports, and visualizations using SQL, Excel, Tableau, Power BI, or Looker Studio to communicate findings clearly to stakeholders.
  • Analyze large datasets to identify trends, patterns, correlations, and actionable insights that support business decisions.
  • Collect, organize, and maintain data from multiple sources while ensuring data integrity and accuracy.
  • Write, maintain, and optimize SQL queries for reporting, analysis, and data extraction.
  • Clean, preprocess, and transform raw data using SQL and Python to prepare it for analysis and reporting (see the sketch after this list).
  • Work with cross-functional teams to understand business requirements, define KPIs, and translate them into analytical solutions.
  • Identify inefficiencies in data processes and implement automation using SQL, Python, or ETL tools to improve workflow and data quality.
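To make the day-to-day concrete, here is a minimal, hedged sketch of the kind of cleaning and transformation step described in the list above; the column names and values are invented for illustration and are not ComputerCare's actual schema.

```python
import pandas as pd

# Hypothetical raw export with typical problems: mixed-case text,
# missing values, duplicate rows, and string-typed numbers.
raw = pd.DataFrame({
    "device_id": ["A-1", "a-1", "B-2", None, "C-3"],
    "repair_cost": ["120.50", "120.50", None, "80", "95.25"],
    "status": ["Closed", "closed", "Open", "Open", "OPEN"],
})

clean = (
    raw
    .dropna(subset=["device_id"])                        # drop rows missing the key
    .assign(
        device_id=lambda d: d["device_id"].str.upper(),  # normalize identifiers
        status=lambda d: d["status"].str.title(),        # normalize categories
        repair_cost=lambda d: pd.to_numeric(d["repair_cost"], errors="coerce"),
    )
    .drop_duplicates(subset=["device_id"])               # de-duplicate on the key
)

# A simple aggregation that could feed a dashboard or report.
summary = clean.groupby("status", as_index=False)["repair_cost"].mean()
print(summary)
```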


What You'll Bring:
  • Bachelor’s degree in Computer Science, Information Systems, Statistics, Mathematics, or a related field.
  • 2–5 years of experience in data analysis, reporting, or database management.
  • Experience working with SQL databases and writing complex queries.
  • Experience with Python (pandas, NumPy) and other scripting languages for data manipulation.
  • Experience with data visualization tools (HEX, Tableau, Power BI, Excel dashboards).


Perks and Benefits:
  • Comprehensive Medical, Dental, and Vision plans to keep you feeling your best
  • 401(k) with employer match—because your future matters
  • Company-paid Life Insurance, plus HSA & FSA options
  • Employee Assistance Program (EAP) for real support when you need it
  • Adoption Assistance to help grow your family
  • Commuter Benefits for an easier ride
  • Free Coursera Professional Certifications to level up your skills
  • Generous vacation & sick time, plus paid time off to give back to your community


$80,000 - $115,000 a year

If you get to this point, we hope you're feeling excited about the job you just read. Even if you don't feel that you meet every single requirement, we still encourage you to apply. We're eager to meet people that believe in ComputerCare’s mission, core values and can contribute to our team in a variety of ways – not just candidates who check all the boxes. 


At ComputerCare, we welcome passionate individuals who have the unrestricted right to work in the United States, including natural citizens and Green Card holders.


ComputerCare is proud to be an Equal Opportunity and Affirmative Action employer. We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, transgender status, sexual stereotypes, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law.



Gross salary $2800 - 3300 Full time
Technical Lead
  • ARKHO
  • Santiago (Hybrid)
Python SQL Business Intelligence Artificial Intelligence
ARKHO is an information technology consultancy offering expert IT services in application modernization, data analytics, advanced analytics, and cloud migration. Our work facilitates and accelerates cloud adoption across multiple industries.
We stand out as an Amazon Web Services Advanced Partner with a strategic focus on building cloud-based solutions. We are obsessed with achieving the goals we set, and we place special emphasis on the people who make up ARKHO (our Archers), recognizing our team as a vital component in achieving results.
Sound like you? We look forward to meeting you!


🎯 Role Objective

Lead medium- and high-complexity technology initiatives focused on Data & AI on AWS, combining technical leadership, strategic vision, and execution capability to design and implement high-impact solutions aligned with the business. You will be responsible for guiding teams, making key decisions, defining architectures and standards, and driving innovation and team development, ensuring end-to-end delivery focused on generating value.

🏹 Archer Profile

We are looking for a Tech Lead focused on Data & AI who combines solid technical depth with strong leadership skills and business vision: someone passionate about technology, able to connect strategic thinking with execution and lead teams toward delivering high-impact solutions. We expect a profile that stands out for people skills, decision-making in complex contexts, and the ability to guide end-to-end initiatives.
This role requires someone who stays at the forefront of innovation, especially in the data and artificial intelligence ecosystem, and who actively promotes continuous improvement, collaboration, and the creation of real business value.

🔧 Main Responsibilities

  • Define architectures, technical standards, and best practices for Data & AI solutions on AWS.
  • Lead technical decisions and guide the team in delivering high-impact, end-to-end projects.
  • Actively participate in development, code reviews, and the resolution of complex technical challenges.
  • Support and grow the team through mentoring, promoting a culture of learning and continuous improvement.
  • Coordinate with stakeholders to align technical solutions with business objectives.
  • Drive innovation by evaluating new technologies and identifying optimization opportunities in processes and solutions.

🧩 Requirements

  • Degree in Computer Engineering, Systems Engineering, or a related field.
  • 7+ years of experience in software development, data engineering, or related roles.
  • Proven experience leading technical teams and managing end-to-end projects, ensuring quality, on-time delivery, and alignment with business objectives.
  • Strong command of Python and SQL applied to data solutions in production environments.
  • Experience designing and implementing modern data architectures.
  • Experience working with AWS analytics and data services (for example: S3, Glue, Redshift, Lambda, EMR, among others).
  • Experience with applied artificial intelligence, including generative models (LLMs) and Retrieval-Augmented Generation (RAG) techniques (see the sketch after this list).
  • Experience with Business Intelligence tools (Power BI, Tableau, or others) for data consumption and visualization.
  • Experience working under agile methodologies (Scrum, Kanban, or others).
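As a loose illustration of the retrieval step behind a RAG workflow, here is a dependency-free Python sketch; the documents are invented, and a bag-of-words similarity stands in for the embeddings a real LLM-based system would use.

```python
from collections import Counter
import math

# Toy document store; a real RAG system would hold chunked documents
# with embeddings from an actual model, not bag-of-words vectors.
DOCS = {
    "doc1": "Glue jobs load curated parquet files into the S3 data lake.",
    "doc2": "Redshift serves aggregated sales metrics to BI dashboards.",
    "doc3": "Lambda functions trigger ingestion when new files arrive.",
}

def vectorize(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    common = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in common)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question: str, k: int = 2) -> list[str]:
    # Rank stored passages by similarity to the question and keep the top k.
    q = vectorize(question)
    ranked = sorted(DOCS.values(), key=lambda d: cosine(q, vectorize(d)), reverse=True)
    return ranked[:k]

def build_prompt(question: str) -> str:
    # Retrieved passages are injected as context so the LLM answers from them.
    context = "\n".join(f"- {passage}" for passage in retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("How does data get into the data lake?"))
```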

🌟 Archer Benefits

At ARKHO we foster a culture of continuous learning, drive technological innovation, and promote a flexible, inclusive, and respectful work environment that values both professional development and personal well-being.

📆 One administrative day per semester (after 6 months)
🏖️ Week off: 5 extra vacation days
🎉 Afternoon off on your birthday
📚 Training path
☁️ AWS certifications
🏡 Flexibility (hybrid work)
🩺 Supplementary health insurance
💍 Wedding gift + 5 business days off
👶 Gift for the birth of a child
✏️ School kit
🤱 Paternity benefit
❤️ Bonda (discounts and wellness platform)
💰 Holiday bonuses
🧘‍♀️ ARKHO Open Doors

Flexible hours Flexible schedule and freedom for attending family needs or personal errands.
Partially remote You can work from your home some days a week.
Computer provided ARKHO provides a computer for your work.
Education stipend ARKHO covers some educational expenses related to the position.
Vacation over legal ARKHO gives you paid vacations over the legal minimum.
$$$ Full time
Data Engineer
  • BC Tecnología
  • Santiago (Hybrid)
Python Microstrategy ETL SQL Server
BC Tecnología, an IT consultancy specializing in IT services and business solutions, is looking for a Data Engineer for a hybrid project based in Las Condes, Santiago. You will join a BI/Analytics team to develop, optimize, and maintain data solutions in analytical environments, working with high-profile clients in sectors such as finance, insurance, retail, and government. The project involves collaborating with BI, Analytics, and IT teams, contributing to the implementation of data pipelines, data modeling, and the creation of strategically valuable reports and dashboards.


Role Responsibilities

  • Develop and optimize queries and models in SQL Server to support analytical reporting.
  • Design, implement, and maintain data pipelines, integrating sources across cloud platforms (AWS, Azure, or GCP); see the sketch after this list.
  • Develop and maintain reports and dashboards in MicroStrategy for business users.
  • Collaborate with BI, Analytics, and IT teams to understand requirements and deliver efficient solutions.
  • Identify improvements in performance, scalability, and data quality; apply data governance best practices.
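As a rough sketch of the extract-and-land step behind such a pipeline, assuming pyodbc and pandas and entirely hypothetical connection details (not BC Tecnología's actual environment):

```python
import pandas as pd
import pyodbc  # requires an installed ODBC driver for SQL Server

# Hypothetical connection string; replace with the real server and credentials.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=analytics-db.example.com;DATABASE=Sales;UID=etl_user;PWD=secret"
)

# Pull an aggregated slice from SQL Server instead of moving raw rows.
query = """
    SELECT CAST(order_date AS DATE) AS order_day,
           SUM(amount)              AS total_amount
    FROM dbo.Orders
    GROUP BY CAST(order_date AS DATE)
"""
daily_sales = pd.read_sql(query, conn)

# Land the result as a file that a cloud warehouse (Redshift, Synapse,
# BigQuery, ...) can ingest from object storage. Requires pyarrow.
daily_sales.to_parquet("daily_sales.parquet", index=False)
```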

Requirements and Profile

  • At least 3 years of experience as a Data Engineer or BI Engineer.
  • Proven experience with SQL Server and MicroStrategy.
  • Experience working with at least one cloud platform (AWS, Azure, or GCP).
  • Ability to work in collaborative environments, results-oriented, with good stakeholder communication.
  • Knowledge of data modeling concepts, extract-transform-load (ETL/ELT) processes, and data quality best practices.

Skills and assets

  • Certifications in SQL Server, Data Platform, or related cloud technologies.
  • Experience with visualization and dashboard tools beyond MicroStrategy.
  • Knowledge of Python or scripting languages for data transformations.
  • Proactive attitude, analytical thinking, and the ability to work autonomously in dynamic environments.

Benefits

At BC Tecnología we promote a collaborative work environment that values commitment and continuous learning. Our culture is oriented toward professional growth through integration and knowledge sharing between teams.
The hybrid arrangement we offer, based in Las Condes, combines the flexibility of remote work with on-site collaboration, supporting a better balance and more dynamic workdays.
You will take part in innovative projects with high-profile clients across diverse sectors, in an environment that fosters inclusion, respect, and technical and professional development.

Health coverage BC Tecnología pays or copays health insurance for employees.
Computer provided BC Tecnología provides a computer for your work.
Gross salary $3500 - 5000 Full time
Senior Full Stack Engineer
  • Revel Street LLC
  • Remote
Django React TypeScript Web Architecture
Revel Street LLC helps corporate event planners discover and reach private dining venues through an extensive, dependable database. We use LLMs extensively to gather and enrich venue data, streamline the event planning workflow, and reduce the time and effort required to source options for events such as private dining, cocktail receptions, and conferences. As a Senior Full Stack Engineer, we’ll ask you to build and maintain the end-to-end web experience that powers these workflows—turning data pipelines and agentic tooling into reliable, user-friendly product features. Our current stack includes React, TanStack, Cloudflare, Django, and Dagster, and we expect you to design solutions that are scalable, testable, and grounded in core engineering fundamentals.


Role Description

We’re hiring a Senior Full Stack Engineer for a contract, remote role focused on agentic coding. You’ll write 90%+ of your code in an exclusively agentic coding environment such as Claude Code (or a similar setup). This is not a “vibe coder” position—we expect strong fundamentals, thoughtful engineering, and disciplined delivery.
Your goals
  • Design, develop, and maintain front-end and back-end components of our web applications.
  • Build agentic systems, pipelines, and workflows that reliably support our data and product needs.
  • Ensure quality through manual testing, debugging, and performance-focused iteration.
  • Deploy scalable solutions and keep them operating smoothly.
Day-to-day responsibilities
  • Create and evolve user-facing features in the React/TypeScript ecosystem.
  • Implement and maintain server-side functionality in Django and related services.
  • Work with Cloudflare for performance and delivery considerations.
  • Develop and maintain data/ops workflows using Dagster (and related pipeline patterns).
  • Design “agentic” workflows and pipelines that translate LLM-driven capabilities into dependable software behavior.
  • Perform manual testing, debugging, and validation to ensure correctness and usability.
  • Collaborate with cross-functional teams to align engineering work with product goals.
  • Stay current with technology trends and apply them pragmatically where they improve outcomes.

Qualifications

Required
  • Very high English proficiency (clear communication, strong writing, and the ability to collaborate effectively).
  • At least 4 years of full stack experience, with solid experience in the React/TypeScript ecosystem.
  • At least 6 months of experience working exclusively in an agentic coding environment (e.g., Claude Code, Codex).
  • We require that the work is done in an agentic coding environment; VSCode Copilot and “copy/paste from ChatGPT” do not count as agentic coding experience.
  • Strong problem-solving skills with strong attention to detail.
  • Strong product design sense (we care about UX and practical product judgment).
  • Ability to understand fundamentals, not just generate code—debugging, reasoning about behavior, and ensuring correctness.
Bonus (preferred)
  • Bachelor’s degree in Computer Science, Engineering, or a related field.
How we work
  • You’ll proactively turn ambiguous requirements into well-structured engineering plans.
  • You’ll communicate trade-offs and risks early, and you’ll verify outcomes through hands-on testing.
  • You’ll bring a “build, measure, improve” mindset to performance, reliability, and user experience.

Desirable

Desirable skills and experience
  • You have used orchestrators that can run multiple agents simultaneously, such as Superset, Cmux, or Conductor.
  • Comfort designing workflows that combine agentic coding outputs with human review, validation, and testing.
  • Practical experience with scalable web application architecture and reliability practices.

Benefits

  • We provide a Claude code max plan ($100 per month plan, $200 if you need it)
  • High ownership of the codebase and the product

Fully remote You can work from anywhere in the world.
$$$ Full time
Senior Data Analyst
  • TextNow
  • Open - Canada
analyst python support growth

We believe communication belongs to everyone. We exist to democratize phone service.  TextNow is evolving the way the world connects and that's because we're made up of people with curious minds who bring an optimistic, yet critical lens into the work we do.   We're the largest provider of free phone service in the nation. And we're just getting started.


Join us in our mission to break down barriers to communication and free the flow of conversation for people everywhere.


TextNow is looking for a motivated Senior Data Analyst to join our Analytics & Insights team. You’ll drive data-informed decision-making across the organization by translating business problems into analytical solutions, designing insightful dashboards, and uncovering trends that shape strategic actions.

This role is perfect for someone with strong analytical skills, deep business acumen, and a passion for using data to tell stories that inspire action.


What You’ll Do


Analyze complex datasets to identify actionable insights, trends, and opportunities

Develop and maintain dashboards, reports, and data visualizations using tools like Looker, Tableau, Power BI, or Redash

Conduct ad hoc analyses to support product, marketing, and operations initiatives

Partner with data engineering teams to ensure data quality, integrity, and availability

Develop and maintain KPI frameworks and performance measurement systems

Assist in building scalable data models and automation pipelines

Collaborate cross-functionally with Product, Finance, Marketing, and Operations teams to define analytical needs

Translate business questions into data requirements and present insights and recommendations to senior leadership

Mentor junior analysts and foster a culture of data-driven decision-making

Define and standardize analytical best practices across the organization


You’ll Be a Great Fit If You Have


Bachelor’s degree in Data Science, Statistics, Mathematics, Economics, Computer Science, or a related field (Master’s preferred)

5+ years of experience in data analytics or business intelligence

Proficiency in SQL and at least one programming language (e.g., Python or R)

Experience with modern BI tools (Looker, Tableau, Power BI, Mode, or Redash)

Strong understanding of A/B testing, statistical analysis, and data modeling (a small example follows this list)

Experience working with large-scale datasets and cloud-based environments (e.g., Snowflake, Eppo)

Excellent communication and storytelling skills with data

Attention to detail, analytical rigor, and curiosity for continuous improvement
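
As a small illustration of the A/B-testing analysis mentioned above, a hedged sketch using statsmodels with invented numbers:

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical experiment results: conversions and sample sizes per variant.
conversions = [1_250, 1_370]    # control, treatment
samples     = [24_000, 24_100]

# Two-sided z-test for a difference in conversion rates.
stat, p_value = proportions_ztest(count=conversions, nobs=samples)

control_rate, treatment_rate = (c / n for c, n in zip(conversions, samples))
print(f"control={control_rate:.3%} treatment={treatment_rate:.3%} p={p_value:.4f}")
```

In practice the test choice, guardrail metrics, and minimum detectable effect would all be agreed with stakeholders before the experiment runs.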


Preferred Skills


Experience in telecommunications, SaaS, or consumer app environments

Familiarity with machine learning concepts and predictive analytics

Understanding of ETL processes and data warehousing fundamentals

Experience collaborating with product teams on experimentation and growth analytics


Estimated Base Salary Range by Location:


Canada (CAD): $103,700 – $140,300

US – National (USD): $114,800 – $155,300

Final compensation will be determined based on a number of factors, including skills, experience, location, and on-the-job performance. We’re committed to paying competitively to hire and retain high-caliber talent. We recognize that exceptional talent may fall outside of these ranges; we encourage all qualified candidates to apply even if their compensation expectations are outside of the listed range.


More about TextNow...


Our Values:

·  Customer Obsessed (We strive to have a deep understanding of our customers)

·  Do Right By Our People (We treat each other with fairness, respect, and integrity)

·  Accept the Challenge (We adopt a "Yes, We Can" mindset to achieve ambitious goals)

·  Act Like an Owner (We treat this company like it's our own... because it is!)

·  Give a Damn! (We are deeply committed and passionate about our work and achieving results)


Benefits, Culture, & More:

·   Strong work life blend 

·   Flexible work arrangements (wfh, remote, or access to one of our office spaces)

·   Employee Stock Options 

·   Unlimited vacation 

·   Competitive pay and benefits

·   Parental leave

·   Benefits for both physical and mental well being (wellness credit and L&D credit)

·   We travel a few times a year for various team events, company wide off-sites, and more


Diversity and Inclusion:

At TextNow, our mission is built around inclusion and offering a service for EVERYONE, in an industry that traditionally only caters to the few who have the means to afford it. We believe that diversity of thought and inclusion of others promotes a greater feeling of belonging and higher levels of engagement. We know that if we work together, we can do amazing things, and that our differences are what make our product and company great. 


TextNow Candidate Policy

By submitting an application to TextNow, you agree to the collection, use, and disclosure of your personal information in accordance with the TextNow Candidate Policy.



Gross salary $3500 - 3700 Full time
Data Scientist
  • Coderslab.io
Python Machine Learning Data Engineering ML Ops
Coderslab.io is a company dedicated to transforming and growing businesses through innovative technology solutions. You will be part of an expanding organization with more than 3,000 collaborators worldwide, with offices across Latin America and the United States. You will join diverse teams that bring together some of the best tech talent to take part in challenging, high-impact projects. You will work alongside experienced professionals and have the opportunity to learn and grow with cutting-edge technologies.


Role Responsibilities

Design, develop, and validate machine learning, advanced analytics, and artificial intelligence models oriented to business use cases.
Build and run data science experiments, evaluating performance, bias, stability, and generalization metrics (a minimal sketch follows this list).
Use Amazon SageMaker for model training, tuning, versioning, deployment, and monitoring.
Implement generative AI and agent solutions using Amazon Bedrock and its associated capabilities.
Prepare, explore, and transform data from different sources, ensuring quality, consistency, and availability.
Develop reproducible notebooks, pipelines, and processes for model training and evaluation.
Collaborate with data, architecture, business, and development teams to translate requirements into production-ready analytical solutions.
Participate in model industrialization, including testing, monitoring, observability, and continuous improvement.
Ensure MLOps best practices, model governance, security, and efficient use of cloud resources.
Document assumptions, methodology, results, and technical limitations of the models developed.
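A minimal sketch of the experiment loop described above, using scikit-learn and synthetic data rather than any real project assets; cross-validated scores give a rough read on generalization, and their spread is a simple proxy for stability.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Synthetic data stands in for real business features.
X, y = make_classification(n_samples=2_000, n_features=20, random_state=42)

model = GradientBoostingClassifier(random_state=42)

# Cross-validated AUC: the mean approximates generalization performance,
# the standard deviation is a rough proxy for stability across folds.
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"AUC mean={scores.mean():.3f} std={scores.std():.3f}")
```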

Role Requirements

  • Solid, verifiable experience with Amazon SageMaker.
  • Experience with Amazon Bedrock for generative AI solutions.
  • Practical knowledge of the AWS ecosystem: S3, Lambda, API Gateway, RDS, Glue, Athena, CloudWatch, and IAM.
  • At least 3 years in Data Scientist, Machine Learning Engineer, or similar roles.
  • Experience developing and deploying models to production environments on AWS.
  • Proficiency in Python and data science and machine learning libraries.
  • Knowledge of feature engineering, experimentation, model evaluation, and post-deployment monitoring.
  • Experience handling structured data; experience with unstructured data is desirable.
  • Knowledge of MLOps principles, CI/CD, and versioning and reproducibility best practices.
  • Professional degree in Computer Science Engineering, Computer Engineering, Mathematical Engineering, Statistics, Data Science, or a related field.

Nice to Have

AWS Certified Machine Learning – Specialty
AWS Certified Data Engineer – Associate
AWS Certified Solutions Architect – Associate
AWS Certified Developer – Associate
AWS Certified Cloud Practitioner

Conditions

Remote
Full time

$$$ Full time
Senior Data Engineer
  • Thoughtworks
  • Chicago
design security technical support
Senior data engineers at Thoughtworks are engineers who build, maintain and test the software architecture and infrastructure for managing data applications. They are involved in developing core capabilities which include technical and functional data platforms. They are the anchor for functional streams of work and are accountable for timely delivery. They work on the latest big data tools, frameworks and offerings (data mesh, etc.), while also being involved in enabling credible and collaborative problem solving to execute on a strategy.

Job responsibilities

  • You will develop and operate modern data architecture approaches to meet key business objectives and provide end-to-end data solutions.
  • You will develop intricate data processing pipelines, addressing clients' most challenging problems.
  • You will collaborate with data scientists to design scalable implementations of their models.
  • You will write clean, iterative code using TDD and leverage various continuous delivery practices to deploy, support and operate data pipelines.
  • You will use different distributed storage and computing technologies from the plethora of options available.
  • You will develop data models by selecting from a variety of modeling techniques and implementing the chosen data model using the appropriate technology stack.
  • You will collaborate with the team on the areas of data governance, data security and data privacy.
  • You will incorporate data quality into your day-to-day work.

Job qualifications

Technical Skills
  • Working with data excites you: you can build and operate data pipelines, and maintain data storage, all within distributed systems.
  • You have hands-on experience of data modeling and modern data engineering tools and platforms.
  • You have experience in writing clean, high-quality code using the preferred programming language.
  • You have built and deployed large-scale data pipelines and data-centric applications using any of the distributed storage platforms and distributed processing platforms in a production setting.
  • You have experience wit

$70000 - $80000 Full time
Data Analyst
  • Criptoro
  • Remote
other analyst crypto defi

We are a Web3-driven company building decentralized products and working with blockchain data to create transparent and data-informed solutions. We are looking for a Junior Data Analyst who is curious about blockchain, crypto, and decentralized ecosystems.


Responsibilities

  • Collect, clean, and analyze on-chain and off-chain data
  • Work with blockchain datasets (transactions, wallets, smart contracts)
  • Build dashboards to track key metrics (users, transactions, TVL, etc.)
  • Identify trends in user behavior and protocol performance
  • Support product, marketing, and token strategy teams with insights
  • Write SQL queries and work with data pipelines


Requirements

  • Education: Bachelor’s degree in Mathematics, Statistics, Economics, Computer Science, or a related field

Technical Skills:

  • Basic knowledge of SQL
  • Proficiency in Excel / Google Sheets
  • Basic Python (pandas, numpy)
  • Understanding of data analysis and statistics
  • Familiarity with BI tools (Tableau, Power BI, or similar)

Web3 / Crypto (Preferred):

  • Basic understanding of blockchain concepts (wallets, transactions, smart contracts)
  • Interest in DeFi, NFTs, or crypto markets
  • Experience with blockchain analytics tools (e.g., Dune, Nansen, Glassnode) is a plus





$$$ Full time
Data Engineer
  • Loop
  • Remote
python growth code cloud

The Data team at Loop is on a mission to empower merchants with transformative data products that drive success beyond returns. By building tools that merchants love and fostering a robust data culture, the team enables smarter decision-making across the board. Whether creating insights to guide merchants’ strategies or strengthening internal data-driven processes, the Data team is integral to shaping Loop’s future and unlocking new opportunities for our merchants and teams alike.


As a Data Engineer at Loop, you’ll have the chance to significantly impact our ability to solve merchant problems and fulfill merchant needs. You’ll be an integral member of the team, owning all aspects of data availability, quality, and ease of use of our data platforms. Your success in this role will depend on a healthy blend of creativity and structure with a continuous focus on delivering value to the business.


At Loop, we’re intentional about the way we work so that we can do our best work. We call this our Blended Working Environment. We work from our HQ in Columbus, OH, or one of our Hub or Secluded locations, and are distributed throughout the United States, select Canadian provinces, and the United Kingdom. For this position, we’re looking for someone to join us in a location where we already have an established Hub or HQ.


Our data stack: Snowflake, Fivetran, dbt, GoodData, Secoda



What you’ll do:
  • Maintain and optimize existing data pipelines and warehouse solutions for performance, reliability, and cost efficiency. 
  • Support internal analytics and ML teams with data modeling, schema updates, and ad hoc data needs. 
  • Contribute to dbt projects and assist in ensuring data quality, observability, and accessibility (see the sketch after this list). 
  • Write clean, tested, and documented code, and participate in code reviews. 
  • Collaborate with senior data engineers to understand and contribute to new ingestion sources, ML pipelines, and other forward-looking initiatives. 
  • Ensure internal stakeholders can access and use data effectively, enabling faster business insights and decision-making.
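
As a loose illustration of the kind of lightweight data quality check mentioned above, a small pandas sketch with hypothetical column names (not Loop's actual schema):

```python
import pandas as pd

def basic_quality_checks(df: pd.DataFrame, key: str, not_null: list[str]) -> list[str]:
    """Return a list of human-readable data quality failures."""
    failures = []
    if df[key].duplicated().any():
        failures.append(f"duplicate values found in key column '{key}'")
    for col in not_null:
        nulls = int(df[col].isna().sum())
        if nulls:
            failures.append(f"{nulls} null values in required column '{col}'")
    return failures

# Hypothetical extract of an orders table.
orders = pd.DataFrame({
    "order_id": [1, 2, 2, 4],
    "merchant_id": ["m1", None, "m3", "m4"],
    "amount": [10.0, 25.5, 7.0, 3.2],
})

for problem in basic_quality_checks(orders, key="order_id", not_null=["merchant_id", "amount"]):
    print("FAILED:", problem)
```

In a warehouse-centric stack, checks like these usually live as dbt tests or an observability tool's monitors rather than ad hoc scripts.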


Your experience:
  • 4 years of hands-on experience building and maintaining data pipelines and data sets in a cloud environment (Snowflake, GBQ, Redshift, etc.). *We're expecting top candidates to have hands-on experience with Snowflake, specifically!
  • 2+ years of Python experience, creating reliable workflows and data processing scripts. 
  • Strong SQL skills and experience with data modeling. 
  • Experience with dbt or similar transformation tools. Familiarity with distributed systems and ETL/ELT processes.
  • Nice to have: Experience with data observability, lineage, or governance tools. 
  • Nice to have: Exposure to BI tools and supporting analytics teams. 
  • Nice to have: Experience working on cross-functional data projects. 
  • Nice to have: Familiarity with Fivetran, Kafka, or modern data integration platforms. 


Our Data Team values
  • Progress over perfection and focus on delivering value. 
  • Strong, open, and continuous collaboration with peers and stakeholders. 
  • Autonomy and accountability. 
  • Drive to solve problems. 
  • Engagement and participation in our Agile practices.


$118,400 - $177,600 a year
We know that making decisions about your career and compensation is a huge deal. Because of that, we’re incredibly thoughtful about our compensation strategy. We want you to feel safe and excited, but also comfortable with the compensation package of a startup. We’ve outlined some important information for you here, but please know there’s a lot more to compensation than we can cover in this job posting. 

The posted salary range is the base salary for this opportunity. The salary range is subject to change, and may be adjusted in the future.

The actual annual salary paid for this position will be based on several factors, including, but not limited to: your prior experience and skills related to the position, geographic location, company needs, current market demands, and your total compensation goals. 

Great humans deserve great benefits. At Loop, you’ll be eligible for benefits such as: medical, dental, and vision insurance, flexible PTO, company holidays, sick & safe leave, parental leave, 401k, monthly wellness benefit, home workstation benefit, phone/internet benefit, and equity.



Loop Story


Commerce should feel effortless. Every product adored, every order perfect, every customer loyal for life. But reality is messier: operations get tangled, margins grow thin, and trust is fragile. That’s where Loop steps in. We create confidence where commerce fails.


We started by fixing returns and exchanges. Today, we’re building a connected commerce operations suite — powering everything from order tracking to fraud prevention, with hundreds of innovations in between. Grounded in data and insight, our platform helps merchants make smarter decisions with every transaction. Over 5,000 of the world’s most loved brands trust Loop to turn cost centers into growth engines. Our mission is simple: protect margins, delight customers, and help merchants build businesses that last.


Life at Loop is rooted in our core values. We balance high empathy with high standards, knowing that work is better when we can show up authentically and resilience is built by facing challenges head-on. We expect you’ll grow quickly, learning skills that last far beyond your time here. Loop is a formative chapter in your career — a chance to shape the future of commerce and to leave better than when you arrived.


Learn more about us here: https://loopreturns.com/careers.


You can review our privacy notice here.



$$$ Full time
Salesforce Integration Architect
  • ZS
  • Buenos Aires (Hybrid)
REST API Data Transformation CI/CD Mulesoft
The Salesforce Integration Architect in the Architecture & Engineering EC will serve as the technical authority for Salesforce-centric integration architectures, specializing in middleware and integration platforms such as MuleSoft. You will design, govern, and oversee the delivery of scalable, secure, and reusable integration solutions that connect Salesforce with enterprise and external systems. This role requires deep expertise in integration patterns, APIs, and platform interoperability, along with strong client-facing and delivery leadership skills.


What You’ll Do

Own and define end-to-end integration architecture for Salesforce implementations across multiple clouds and enterprise systems
Architect and govern integrations using MuleSoft, iPaaS platforms, REST/SOAP APIs, event-driven messaging, and ETL tools
Define and apply enterprise integration patterns including API-led connectivity, synchronous/asynchronous messaging, and event-based architectures
Design scalable and secure APIs and integration services that support Salesforce business processes
Define data movement, transformation, and orchestration strategies across systems
Collaborate with Salesforce Technical Architects and Developers to align integration design with Salesforce data models, security, and automation
Provide architectural guidance on Salesforce APIs, Apex callouts, outbound messaging, Platform Events, and external services (see the sketch after this section)
Ensure integrations follow Salesforce and MuleSoft best practices for performance, scalability, and security
Lead technical discovery sessions focused on integration requirements, system landscapes, and non-functional needs
Confidently lead client discussions on integration strategy, middleware selection, and architectural trade-offs
Act as a trusted advisor to client stakeholders by translating business requirements into integration-led solution designs
Support pre-sales activities by contributing to solution scoping, estimates, and risk assessments
Identify, manage, and proactively mitigate integration-related risks and dependencies
Create and own integration architecture documentation including API specifications, sequence diagrams, data flows, and deployment models
Define environment strategy, CI/CD pipelines, and release management approaches for integration platforms
Review designs and implementations to ensure adherence to architectural standards and best practices
Mentor and guide integration developers and architects, contributing to practice growth and knowledge sharing
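
To ground the Salesforce API guidance mentioned above, here is a hedged Python sketch of an upsert through the standard Salesforce REST API; the instance URL, token, and external-ID field are hypothetical, and a real integration would obtain credentials via OAuth 2.0 rather than hard-coding them.

```python
import requests

# Hypothetical values; a real integration would obtain the token via OAuth 2.0
# (e.g., the JWT bearer flow) rather than hard-coding it.
INSTANCE_URL = "https://example.my.salesforce.com"
ACCESS_TOKEN = "00D...token"

def upsert_account(external_id: str, payload: dict) -> dict:
    """Upsert an Account by a (hypothetical) external ID field via the Salesforce REST API."""
    url = f"{INSTANCE_URL}/services/data/v58.0/sobjects/Account/External_Id__c/{external_id}"
    response = requests.patch(
        url,
        json=payload,
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        timeout=30,
    )
    response.raise_for_status()
    # Creation returns a JSON body; an update returns no content.
    return response.json() if response.content else {}

# Example call pushing a record from an upstream system into Salesforce.
print(upsert_account("ERP-0001", {"Name": "Acme Construction", "Industry": "Construction"}))
```

In a MuleSoft-based architecture the same call would typically sit behind a system API so that upstream systems never talk to Salesforce directly.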

What You’ll Bring

Bachelor’s degree in Computer Science, Engineering, or a related field
6+ years of experience in Salesforce and/or enterprise integration architecture
Proven experience acting as an Integration Architect or Technical Lead on complex programs
Strong expertise in Salesforce platform integrations, including Salesforce APIs and event-driven capabilities
Hands-on experience architecting integrations using MuleSoft Anypoint Platform or similar middleware/iPaaS solutions
Deep understanding of API-led architecture, integration patterns, and enterprise system interoperability
Experience with REST/SOAP, OAuth 2.0, JWT, message queues, and data transformation technologies
Familiarity with Salesforce data models, security model, and automation features
Experience designing enterprise-scale integration architectures in complex environments
Strong communication and stakeholder management skills
Experience working in Agile delivery environments
Ability to work effectively in a global, client-facing consulting model
Fluency in English
Client-first mentality
Intense work ethic
Collaborative spirit and problem-solving approach

Additional Skills:

MuleSoft certifications strongly preferred (e.g., MuleSoft Developer, Integration Architect)
Salesforce certifications preferred (Administrator, Platform Developer, Integration Architecture Designer)

Perks & Benefits:

ZS offers a comprehensive total rewards package including health and well-being, financial planning, annual leave, personal growth and professional development. Our robust skills development programs, multiple career progression options, internal mobility paths, and collaborative culture empower you to thrive as an individual and global team member.
Hybrid working model:
We are committed to giving our employees a flexible and connected way of working. A flexible and connected ZS allows us to combine work from home and on-site presence at clients/ZS offices for the majority of our week. The magic of ZS culture and innovation thrives in both planned and spontaneous face-to-face connections.
Travel:
Travel is a requirement at ZS for client-facing ZSers; the business needs of your project and client are the priority. While some projects may be local, all client-facing ZSers should be prepared to travel as needed.

Gross salary $1000 - 1400 Full time
Data Analyst
  • Coderslab.io
  • Lima (Hybrid)
HTML5 Python Data Analysis BigQuery

CodersLab is a company dedicated to developing solutions in the IT industry. We are currently focused on expanding our teams globally to position our products in more Latin American countries, which is why we are looking for a Data Analyst.

We are looking for a Data Analyst to join our team and take part in the development of scalable, modern, high-impact mobile applications. You will work in a collaborative environment, with challenging projects and real growth opportunities.


Role Responsibilities

  • Develop channel management functionality with Python and HTML5.
  • Produce functional documentation for the developments.
  • Degree in systems or a related field.
  • Experience in any sector; experience in the financial sector is a plus.
  • Collect and clean data: obtain data from various sources (databases, social media, spreadsheets, etc.) and then clean it by handling missing values, correcting errors, and removing inconsistencies to ensure quality.
  • Analyze data: use statistical techniques and other tools to identify correlations, trends, and patterns within datasets.
  • Interpret results: analyze the results to understand what they mean and how they can help the company make better decisions.
  • Communicate findings: present analysis results clearly and understandably through reports, dashboards, and visualizations (charts, tables) for stakeholders and other teams.
  • Identify risks and opportunities: detect trends, potential problems, and growth opportunities for the company.
  • Support decision-making: facilitate strategic decision-making across different areas of the organization, such as sales, inventory, and service management.
  • Create reports and dashboards: generate periodic reports and interactive dashboards that update automatically to keep everyone informed.

Role Requirements

2 to 3 years of experience

  • Experience with SQL Server
  • Experience with Python
  • Experience with BigQuery
  • Experience with GitLab
  • Experience with ETLs
  • Experience with HTML5
  • Experience in any sector; experience in the financial sector is a plus.

Conditions

Contract type: independent contractor (recibo por honorarios)
Project duration: 6 months
Work mode: hybrid (3 days on site)

$$$ Full time
Lead Software Architect
  • Improving South America
.Net Azure Cybersecurity CI/CD

Improving South America is a leading IT services company that seeks to positively transform the perception of the IT professional through technology consulting, software development, and agile training. We are an organization with a culture that fosters teamwork, excellence, and fun, inspiring our team to build lasting relationships while delivering cutting-edge technology solutions. Our mission is aligned with the Conscious Capitalism movement, promoting an exceptional work environment that drives personal and professional growth within an open, optimistic, and collaborative atmosphere.


Role Responsibilities

  • Lead the design and evolution of scalable, secure, high-performance software architectures, both on-premises and in the cloud, using Microsoft Azure.
  • Define the end-to-end technical architecture of SaaS solutions, ensuring standards of quality, resilience, maintainability, and scalability.
  • Design and lead the implementation of enterprise data warehouses, including data modeling, ETL pipelines, and performance optimization.
  • Collaborate closely with Development, DevOps, and Data teams to ensure smooth integration between applications and data platforms.
  • Create and maintain architectural documentation, diagrams, and technical specifications for applications and data platforms.
  • Act as a technical reference in the use of Azure services such as App Services, Azure Functions, Azure SQL, Cosmos DB, Azure Data Factory, Synapse Analytics, and Azure Storage.
  • Define and enforce architecture standards, best practices, and governance models across projects.
  • Work with business stakeholders to align technical decisions with the company's strategic objectives.
  • Evaluate new technologies and tools, proposing improvements in performance, scalability, and cost efficiency.
  • Provide technical leadership and mentoring to developers and engineers, promoting best practices and team growth.

Role Requirements

  • Intermediate/advanced English level - B2/C1 (required).
  • 10+ years of experience in software development, with progression into technical leadership roles.
  • Experience leading the design and implementation of cloud-native architectures on Microsoft Azure (Azure Functions, Azure SQL, Cosmos DB, Azure Data Factory, Synapse Analytics, and Azure Storage).
  • Strong command of .NET (C#) and ASP.NET Core, with the ability to make architecture decisions and apply development best practices.
  • Experience leading teams using Azure DevOps, including CI/CD, release management, and code quality.
  • Ability to design solutions focused on scalability, fault tolerance, security, and compliance.
  • Knowledge of data warehousing, with the ability to guide decisions related to data management and strategy.
  • Strong focus on cybersecurity, ensuring compliance with standards and best practices across developments.
  • Experience leading teams under agile methodologies, promoting continuous improvement and collaboration.
  • Strong communication, leadership, and technical influence skills.

Benefits

  • 100% remote.
  • Vacation and PTO.
  • Possibility of receiving 2 bonuses per year.
  • 2 salary reviews per year.
  • English classes.
  • Apple equipment.
  • Online course platform.
  • Budget for books.
  • Budget for work materials.

Internal talks Improving South America offers space for internal talks or presentations during working hours.
Computer provided Improving South America provides a computer for your work.
Vacation over legal Improving South America gives you paid vacations over the legal minimum.
Vacation on birthday Your birthday counts as an extra day of vacation.
$$$ Full time
Product Manager – Data Platform
  • Spotify
manager growth
Mission Statement

The Platform team creates the technology that enables Spotify to learn quickly and scale easily, enabling rapid growth in our users and our business around the globe. Spanning many disciplines, we work to make the business work; creating the infrastructure, tooling, frameworks, and capabilities needed to welcome a billion customers.

About the Team

We are looking for a passionate Product Manager to join Spotify's Data Platform Studio. Data Platform's mission is to enable the application of data in an intuitive and efficient way—helping Spotify extract value from data at scale. Data Platform is responsible for how data is collected, processed, stored, governed, and made available to the thousands of engineers, data scientists, and analysts who build Spotify's products. With AI agents increasingly writing data pipelines and powering personalization, this is one of the most consequential infrastructure domains at Spotify.

$$$ Full time
Senior Backend Engineer
  • onX
content senior engineer backend

ABOUT onX

As a pioneer in digital outdoor navigation with a suite of apps, onX was founded in Montana, which in turn has inspired our mission to awaken the adventurer inside everyone. With more than 400 employees located around the country working in largely remote / hybrid roles, we have created regional “Basecamps” to help remote employees find connection and inspiration with other onXers. We bring our outdoor passion to work every day, coupling it with industry-leading technology to craft dynamic outdoor experiences.

Through multiple years of growth, we haven't lost our entrepreneurial ethos at onX. We offer a fast-paced, growing, tech-forward environment where ownership, accountability, and passion for winning as a team are essential. We value diversity and believe it leads to different perspectives and inspires both new adventures and new growth. As a team, we're hungry to improve, value innovation, and believe great ideas come from any direction.

Important Alert: Please note, onXmaps will never ask for credit card or SSN details during the initial application process. For your digital safety, apply only through our legitimate website at onXmaps.com or directly via our LinkedIn page.

WHAT YOU WILL DO

onX is seeking a talented Senior Backend Engineer to join our Content Delivery team. In this role, you will build the backend infrastructure that powers offline map experiences for millions of outdoor enthusiasts. You will work on high-performance data pipelines, map tile generation and delivery systems, and large-scale geospatial


$$$ Full time
design system python music

At Spotify, we're building the revenue platform that drives how revenue and taxes are processed across the company — enabling reliable, scalable financial operations across every market, product line, and partner. Our systems are essential to Spotify’s ability to earn, track, and report revenue and taxes, supporting everything from subscriptions and advertising to creator payouts.


As engineers on this team, we design and maintain the backend and data platform capabilities that power millions of transactions each day with precision. We build services that handle tax calculations, produce compliant financial records, and support regulatory requirements across global markets — all while staying agile to keep up with Spotify’s evolving business models. We equip Finance teams with flexible, configurable tools that govern how revenue and taxes are applied across products, enabling rapid adjustments without needing deep technical expertise. Our modular, process-oriented components simplify the development, maintenance, and scaling of the critical Order to Cash enterprise processes that underpin Spotify’s financial operations.



What You'll Do
  • Gain deep expertise in Spotify’s revenue platform, understanding how it enables financial operations, compliance, and strategic decision-making.
  • Design and implement scalable backend and data systems that process millions of transactions daily — supporting accurate tax calculation, billing, revenue recognition, financial configuration, and tax reporting.
  • Build intuitive, self-serve tools that empower Finance teams to define and manage product-specific revenue and tax configuration independently, without requiring engineering involvement.
  • Develop and enhance modular platform capabilities that encode critical enterprise workflows, promoting consistency, reusability, and ease of maintenance across financial systems.
  • Lead the creation of new platform capabilities within the Tax Solutions space, focusing on Tax Reporting and global regulatory compliance.
  • Partner closely with Engineers, Product and Finance stakeholders to design systems that are scalable, auditable, and highly reliable.
  • Champion engineering best practices, strong architectural design, and operational excellence across backend and data platforms.
  • Foster a collaborative team culture rooted in shared ownership, constructive feedback, and continuous improvement.


Who You Are
  • You have experience in data engineering, including building and maintaining data pipelines.
  • You are proficient in Python and ideally Scala or Java.
  • You possess a foundational understanding of system design, data structures, and algorithms, coupled with a strong desire to learn quickly, embrace feedback, and continuously improve your technical skills.
  • You’re familiar with cloud-native development and deployment, ideally within the Google Cloud Platform.
  • You think critically about system design and strive to build solutions that are reliable, maintainable, and auditable at scale.
  • You have good communication skills and can articulate your ideas and ask clarifying questions.
  • You love collaborating with others.
  • You thrive in ambiguous and fast-changing environments, and know how to make progress even when requirements are evolving.
  • You approach platform engineering with empathy for your users - prioritising usability, configurability, and long-term sustainability.
  • You care deeply about code quality, testing, and documentation, and you aim to build systems that are easy to understand and operate.
  • You enjoy collaborating across functions and bring clarity and alignment when working with engineering, finance, and product partners.
  • You’re naturally curious, self-motivated, and always looking for ways to grow your technical skills and improve how things are done.


Where You'll Be
  • This role is based in London, United Kingdom.
  • We offer you the flexibility to work where you work best! There will be some in-person meetings, but the role still allows for flexibility to work from home.



Spotify is an equal opportunity employer. You are welcome at Spotify for who you are, no matter where you come from, what you look like, or what’s playing in your headphones. Our platform is for everyone, and so is our workplace. The more voices we have represented and amplified in our business, the more we will all thrive, contribute, and be forward-thinking! So bring us your personal experience, your perspectives, and your background. It’s in our differences that we will find the power to keep revolutionizing the way the world listens.


At Spotify, we are passionate about inclusivity and making sure our entire recruitment process is accessible to everyone. We have ways to request reasonable accommodations during the interview process and help assist in what you need. If you need accommodations at any stage of the application or interview process, please let us know - we’re here to support you in any way we can.


Spotify transformed music listening forever when we launched in 2008. Our mission is to unlock the potential of human creativity by giving a million creative artists the opportunity to live off their art and billions of fans the chance to enjoy and be passionate about these creators. Everything we do is driven by our love for music and podcasting. Today, we are the world’s most popular audio streaming subscription service.



$175000 - $250000 Full time
Security Engineer
  • PermitFlow
  • New York City
security frontend architect software

PermitFlow is redefining how America builds. We’re an applied AI company serving the nation’s builders, tackling one of the largest information challenges in the economy: understanding what can be built, where, and how. Our AI agent workforce helps the fastest-growing construction companies navigate everything from permitting and licensing to inspections and project closeouts – accelerating housing, clean-energy, and infrastructure development across the country.

Despite being a $1.6T industry, construction still suffers from massive delays, wasted capital, and lost opportunity. PermitFlow has already delivered unprecedented speed, accuracy, and visibility to over $20B in development, helping contractors reduce compliance time, de-risk projects, and scale with confidence.

America is entering a CAPEX super-cycle, from data centers and factories to housing and renewables. By joining PermitFlow, you'll help build the AI at the heart of every construction project, powering the next wave of re-industrialization.

We’ve raised over $90M, most recently completing our Series B, from top-tier investors including Accel, Kleiner Perkins, Initialized, Y Combinator, Felicis, and Altos Ventures, with backing from leaders at OpenAI, Google, Procore, ServiceTitan, Zillow, PlanGrid, and Uber.

Role Overview

As a Security Engineer, you’ll join our growing platform team in building, scaling, and fine-tuning the systems that keep our platform secure and compliant. You’ll help architect the security backbone of our platform, focusing on compliance, risk reduction, security automation, and continuous improvement. While your primary responsibility will be security and governance, coding and problem-solving across the stack are core parts of the role. As a fast-growing startup, we all roll up our sleeves where needed, so flexibility and a collaborative, security-first mindset are key.

What You'll Do

  • Architect, design, and implement secure, compliant, scalable, and cost-efficient infrastructure solutions to protect a rapidly growing product.

  • Lead the execution and maintenance of our SOC2 compliance program and other security-related certifications.

  • Design, implement, and audit Role-Based Access Controls (RBAC), Identity and Access Management (IAM), and secrets management systems (a minimal sketch follows this list).

  • Design and implement security best practices for backend, frontend services, APIs, and data pipelines.

  • Own security features end-to-end, from architecture and implementation to testing and production deployment.

  • Develop and maintain security automation, Infrastructure as Code, and secure CI/CD pipelines.

  • Implement and manage security monitoring, threat detection, and vulnerability management across our cloud infrastructure.

  • Establish and enforce security best practices for authentication, authorization, logging, and alerting.

  • Lead and participate in incident response, troubleshooting complex security issues and driving postmortem learning and improvements.

  • Collaborate across engineering teams to embed security into the software development lifecycle and balance compliance, velocity, and cost.
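
To make the secrets-management bullet above concrete, here is a minimal, hypothetical sketch of reading a credential from a managed secret store at runtime instead of hard-coding it. It assumes GCP (which the posting lists as preferred) and the google-cloud-secret-manager client; the project and secret names are placeholders, not an actual setup.

```python
# Hypothetical sketch: reading a database credential from a managed secret store
# at application startup instead of hard-coding it. Project and secret names are
# placeholders; requires the google-cloud-secret-manager package.
from google.cloud import secretmanager


def get_secret(project_id: str, secret_id: str, version: str = "latest") -> str:
    """Return the plaintext payload of a secret version."""
    client = secretmanager.SecretManagerServiceClient()
    name = f"projects/{project_id}/secrets/{secret_id}/versions/{version}"
    response = client.access_secret_version(request={"name": name})
    return response.payload.data.decode("UTF-8")


if __name__ == "__main__":
    # Nothing sensitive ends up in source control or plain config files.
    db_password = get_secret("example-project", "db-password")
```

The same pattern applies to any other managed secret store; the point is that application code asks the store at startup rather than shipping credentials in config files.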

What We're Looking For

  • 5+ years of experience in Security Engineering, AppSec, GRC, or similar roles.

  • Proven experience designing and implementing security controls for SOC2, ISO 27001, or similar compliance frameworks.

  • Deep expertise in Role-Based Access Controls (RBAC), Identity and Access Management (IAM), and secrets management.

  • Strong experience with container security and orchestration (Docker, ECS, Kubernetes a plus).

  • Expertise with secure CI/CD pipelines and modern security automation tools.

  • Coding and scripting proficiency (TypeScript, Python, Go, Bash, etc.).

  • Hands-on experience with cloud security (GCP preferred) and securing distributed systems.

  • Familiarity with monitoring, observability, and incident management best practices.

  • Comfortable working in a fast-paced, compliance-focused startup environment, where adaptability and security ownership are essential.

What We Offer

  • Competitive salary and meaningful equity in a high-growth company

  • Comprehensive medical, dental, and vision coverage

  • Flexible PTO and paid family leave

  • Home office & equipment stipend

  • Hybrid NYC office culture (3 days in-office/week) with direct access to leadership

  • In-Office Lunch & Dinner Provided

PermitFlow provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, sex, national origin, age, disability, genetics, sexual orientation, gender identity, gender expression, or family status, as protected by applicable law.


We are committed to a diverse and inclusive workforce and welcome people from all backgrounds, experiences, perspectives, and abilities. All employment decisions are based on merit, qualifications, and business needs.



$$$ Full time
Data Engineer (Pyspark, AWS)
  • Improving South America
Python SQL ETL Spark
At Improving South America, we provide IT services that transform the perception of the IT professional. We focus on IT consulting, software development, and agile training. The BI Developer will work on projects oriented toward business intelligence, data visualization, and the creation of impactful dashboards that support decision-making, collaborating with cross-functional teams to deliver scalable, high-value solutions for international clients in a 100% remote environment.
The company promotes an exceptional work culture based on teamwork, excellence, and fun, with a focus on personal growth and shared rewards. Upon joining, you will become part of a community that prioritizes open communication and solid, long-term working relationships, supported by a structure for professional development and continuous learning.

Originally published on getonbrd.com.

Role responsibilities

At Improving South America we are looking for a Senior Data Engineer to design and operate highly available data solutions at global scale, working with batch and streaming pipelines that process large volumes of information. The role requires experience building robust pipelines, working with Kafka, PySpark, and data warehouses on AWS, along with a strong command of SQL and data modeling.

Role responsibilities:

  • Design and operate batch and streaming data pipelines (a minimal sketch follows this list).
  • Process large volumes of data (billions of daily events and multi-terabyte datasets).
  • Build integrations between MySQL and Redshift.
  • Design data models and optimize SQL queries.
  • Implement CDC strategies, incremental loads, and full loads.
  • Integrate data through internal and third-party APIs.
  • Diagnose pipeline failures, latency issues, and data quality problems.
  • Collaborate on data architecture decisions.
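
As referenced in the first bullet above, here is a minimal, hypothetical sketch of what a streaming ingestion job in this stack could look like: a PySpark Structured Streaming query that reads events from a Kafka topic and appends them to a data-lake path. The broker address, topic, and bucket paths are placeholders, and the Kafka source additionally requires the spark-sql-kafka connector package on the cluster.

```python
# Minimal sketch (assumed names): a PySpark Structured Streaming job that reads
# events from a Kafka topic and appends them to a data-lake path as Parquet.
# Requires the spark-sql-kafka connector package; broker, topic, and bucket
# paths below are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("events-stream-ingest").getOrCreate()

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .option("startingOffsets", "latest")
    .load()
    # The Kafka source exposes key/value as binary; keep the payload and timestamp.
    .select(col("value").cast("string").alias("payload"), col("timestamp"))
)

query = (
    events.writeStream
    .format("parquet")
    .option("path", "s3a://example-data-lake/raw/events/")
    .option("checkpointLocation", "s3a://example-data-lake/checkpoints/events/")
    .outputMode("append")
    .start()
)

query.awaitTermination()
```

A batch variant of the same pipeline would swap readStream/writeStream for spark.read and DataFrame.write against the same lake paths.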

Role requirements

  • 7+ years of experience in Data Engineering.
  • Intermediate/advanced English (B2/C1) for technical communication.
  • Solid experience with Python.
  • Experience with PySpark.
  • Experience working with Kafka.
  • Experience with Redshift or another modern data warehouse.
  • Experience integrating MySQL → Redshift.
  • Advanced command of SQL (modeling, optimization, and complex queries).
  • Experience with AWS and cloud data services.
  • Experience designing batch and streaming ETL/ELT pipelines.
  • Experience with Glue, Step Functions, or serverless architectures on AWS.
  • Experience working with AI-assisted development tools (e.g. Cursor).
  • Experience in high-data-volume environments.

Benefits we offer

  • Long-term contract.
  • 100% remote.
  • Vacations and PTO.
  • Possibility of receiving 2 bonuses per year.
  • 2 salary reviews per year.
  • English classes.
  • Apple equipment.
  • Online course platform.
  • Budget for buying books.
  • Budget for buying work materials.
  • And much more.

Computer provided Improving South America provides a computer for your work.
Informal dress code No dress code is enforced.
$$$ Full time
Full Stack Engineer
  • Darkroom
  • New York
react technical software code

What we’re building

We’re empowering small teams with technology that makes it easier to market and grow businesses. Our current focus it to help consumer brands shift from "workflow automation" to "agent management” within their marketing operations. Matter is the AI coordination layer — providing shared AI memory, centralized agent control, and model differentiation. We founded the company based on a decade of experience providing marketing services to 300+ consumer brands, leveraging that expertise to develop interfaces that streamline user experience in the era of AI.


Why join Matter?

  • Founding Engineer Equity: You'll get a meaningful equity stake; early-stage and undiluted.

  • Product Ownership: You'll ship production code daily and help steer key product and technical decisions.

  • Shape the Engineering Culture: You'll influence how we work—tools, processes, standards, and hiring.

  • Work with Challenger Consumer Brands: Talk directly to customers (CEOs, CMOs, VPs) of fast-growing consumer brands—some doing $80M–$500M in revenue.

Don't join Matter if...

  • Work-life balance is a high priority for you

  • You're uncomfortable changing your priorities every 24-48 hours

  • You're not confident in your abilities to manage end-to-end solutions

  • You require many DevOps resources to be successful

About the Role

You'll sit squarely at the intersection of back‑end and front‑end, ensuring seamless integration between APIs, databases, UIs, and ML services. You'll design, build, and scale features end‑to‑end, especially our AI/ML‑powered experiences, while mentoring peers and driving architecture decisions.


Core Tech & Tools

  • Languages & Frameworks: Python, Node.js, React (TypeScript)

  • Datastore: PostgreSQL

  • Cloud & Infra: Google Cloud Platform, Airflow, Terraform, Docker, Kubernetes

  • ML/AI: LLMs, RAG, prompt engineering

  • Other: MCP

Key Responsibilities

  • Architect and implement full‑stack features, from database schema to React components, optimized for scale and reliability.

  • Build and maintain RESTful/GraphQL APIs, data pipelines, and distributed services in GCP.

  • Integrate, prompt, and debug LLMs and generative AI tools; own RAG or fine‑tuning pipelines.

  • Ensure front‑end and back‑end systems interoperate flawlessly, minimize friction, optimize data flow, and enforce contracts.

  • Collaborate with product, research, design, and infra teams to define requirements, iterate rapidly, and ship production‑grade code.

  • Monitor performance, reliability, and security.

  • Mentor junior engineers through code reviews, architecture reviews, and shared best practices.

Requirements

  • 5+ years of professional software engineering experience with end‑to‑end ownership in a full‑stack role.

  • Deep expertise in Python, Node.js, React/TypeScript, and PostgreSQL.

  • Able to be hands‑on with GCP, containerization (Docker/K8s), and building/supporting high‑traffic systems.

  • Proven experience integrating AI/ML models (LLMs, NLP, RAG) into production apps.

  • Familiarity or strong interest in working with MCP servers.

  • Exceptional problem‑solving skills and a product mindset: you think deeply about UX, performance, and business impact.

  • You sweat both technical details and end-user experience.

Nice to Haves

  • Experience with multi‑step or agentic AI workflows.

  • Background in AI infrastructure or tooling companies.

  • Contributions to open‑source AI/ML projects.

What we offer

  • Competitive salary and equity package (roles, responsibilities, and comp grow as we do)

  • Top-tier health, vision, dental insurance (US)

  • Regular team off-sites

  • Regular hack weeks



Gross salary $2000 - 2400 Full time
Data Engineer
  • Coderslab.io
  • Santiago (Hybrid)
Big Data ETL Automation Google Cloud Platform

Coderslab.io is a company dedicated to transforming and growing businesses through innovative technology solutions. You will join an expanding organization with more than 3,000 collaborators worldwide and offices across Latin America and the United States. You will work on diverse teams that bring together some of the best technology talent to take part in challenging, high-impact projects, working alongside experienced professionals with the opportunity to learn and grow with cutting-edge technologies.

This job offer is available on Get on Board.

Role duties

Role objective:

Analysis, design, development, and maintenance of data processing systems for Big Data projects. The professional will build pipelines on Cloud and Data Lake platforms to deliver data models into production, also supporting architecture, platform design, development of ETL/ELT processes, serverless data engineering, and analytical modeling.

Role requirements

  1. Experience in the analysis, design, development, and testing of data ingestion processes (ETL/ELT) in Big Data environments on GCP (Data Lake).
  2. Ability to perform corrective and evolutionary maintenance of ETL/ELT data pipelines, ensuring their stability and continuous improvement.
  3. Experience developing data engineering solutions on serverless architectures by building scalable pipelines.
  4. Knowledge of data pipeline automation and orchestration.
  5. Ability to integrate, consolidate, clean, and structure data from diverse sources for consumption by analytical solutions.
  6. Ability to collaborate on and support tasks related to the role, according to project needs.

Conditions

Contract type: fixed term

Gross salary $3100 - 4500 Full time
Data Engineer
  • Haystack News
  • Lima (Hybrid)
Python SQL Big Data Data Warehouse

Haystack News is the leading local & world news service on Connected TVs reaching millions of users! This is a unique opportunity to work at Haystack News, one of the fastest-growing TV startups in the world. We are already preloaded on 37% of all TVs shipped in the US!

Be part of a Silicon Valley startup and work directly with the founding team. Jumpstart your career by working with Stanford & Carnegie Mellon alumni and faculty who have already been part of other successful startups in Silicon Valley.

You should join us if you're hungry to learn how Silicon Valley startups thrive, you like to ship quickly and often, love to solve challenging problems, and like working in small teams.

See Haystack's feature at this year's Google I/O.

This job is original from Get on Board.

Job functions

  • Analyze large data sets to get insights using statistical analysis tools and techniques
  • Collaborate with the Marketing, Editorial and Engineering teams on dataset building, querying and dashboard implementations
  • Support the data tooling improvement efforts and help increase the company data literacy
  • Work with the ML team on feature engineering and A/B testing for model building and improvement
  • Design, test and build highly scalable data management and monitoring systems
  • Build high-performance algorithms, prototypes and predictive models

Qualifications and requirements

  • Strong written and spoken English is a must!
  • Bachelor's degree in Computer Science, Statistics, Math, Economics or related field
  • 2+ years experience doing analytics in a professional setting
  • Advanced SQL skills, including performance troubleshooting
  • Experience with data warehouses (e.g. Snowflake, BigQuery, Redshift)
  • Proficient in Python including familiarity with Jupyter notebooks
  • Strong Math/Stats background with statistical analysis experience on big data sets
  • Strong communication skills, be able to communicate complex concepts effectively.

Conditions

  • Unlimited vacations :)
  • Travel to team's offsite events
  • 100% paid Uber rides to go to the office
  • Learn about multiple technologies

Accessible An infrastructure adequate for people with special mobility needs.
Relocation offered If you are moving in from another country, Haystack News helps you with your relocation.
Pet-friendly Pets are welcome at the premises.
Flexible hours Flexible schedule and freedom for attending family needs or personal errands.
Meals provided Haystack News provides free lunch and/or other kinds of meals.
Paid sick days Sick leave is compensated (limits might apply).
Partially remote You can work from your home some days a week.
Bicycle parking You can park your bicycle for free inside the premises.
Company retreats Team-building activities outside the premises.
Computer repairs Haystack News covers some computer repair expenses.
Commuting stipend Haystack News offers a stipend to cover some commuting costs.
Computer provided Haystack News provides a computer for your work.
Performance bonus Extra compensation is offered upon meeting performance goals.
Informal dress code No dress code is enforced.
Recreational areas Space for games or sports.
Gross salary $1900 - 2000 Full time
CSS Web design Design Thinking Figma
About the company

Grupo Mariposa is a multinational beverage and food corporation, founded in 1885, with operations in more than 14 countries and more than 15,000 employees. We have the largest beverage portfolio in the region and strategic alliances with PepsiCo and AB InBev. We are organized into 4 business units: apex (transformation), cbc (distribution), beliv (beverage innovation), and bia (food). We are looking for talented people to join our expansion and growth strategy, sharing our aspirations and contributing their vision to achieve great results.
We are looking for a "Data Engineering Lead" to lead the corporate data platform in a multi-cloud environment, with a focus on Azure and support on Google Cloud Platform (GCP). This role defines, leads, and scales the data engineering strategy for the region, ensuring a robust, governed, and scalable platform for advanced analytics, BI, and digital products across multiple countries.

Apply at getonbrd.com without intermediaries.

Role duties

- User Research (UX Research): Run usability tests, interviews, and data analysis to understand the needs of end users.
- Information Architecture: Create wireframes, user flows, and detailed site maps to define the product structure.
- Interface Design (UI): Develop high-fidelity interfaces aligned with the brand identity, ensuring a modern and functional aesthetic.
- Prototyping: Build medium- and high-fidelity interactive prototypes to validate solutions before moving to the development phase.
- Marketing Web Design: Create high-conversion landing pages for multiple products and globally recognized brands.
- Design System Maintenance: Contribute to the creation and scalability of the team's component library.
- Technical Collaboration: Participate in hand-off sessions with developers, ensuring that the final design is technically feasible and implemented correctly.

Role requirements

- Experience: A minimum of 3 to 5 verifiable years in digital product design (Web and Mobile).
- Portfolio: An up-to-date portfolio demonstrating the design process (from the problem to the final solution).
- Tools: Expert command of Figma (auto-layout, components, variables) and generative web design tools (Lovable, Stitch, Claude, or Figma Make).
- UX knowledge: Solid command of methodologies such as Design Thinking or Lean UX.
- UI knowledge: Command of visual principles (typography, color theory, hierarchy, spacing).
- Accessibility focus: Knowledge of the WCAG guidelines for creating inclusive products.
- Self-management: Ability to prioritize tasks and meet deadlines without constant supervision.
- Assertive communication: Ability to justify design decisions based on data or UX principles, not just personal taste.
- Adaptability: Flexibility to iterate on designs based on feedback from users and stakeholders.

Nice to have

- Basic front-end: Notions of HTML/CSS to better understand technical constraints and possibilities.

Conditions

  • Collaborative and dynamic work environment.
  • Continuous professional development and growth opportunities.
  • Flexible hours and work-life balance.

Fully remote You can work from anywhere in the world.
$$$ Full time
Staff DevOps Engineer
  • Life360
  • Remote
security devops mobile engineer

About Life360

Life360's mission is to keep people close to the ones they love. Our category-leading mobile app, Tile tracking devices, and Pet GPS tracker empower members to protect the people, pets, and things they care about most with a range of services, including location sharing, safe driver reports, and crash detection with emergency dispatch. Life360 serves approximately 91.6 million monthly active users (MAU), as of September 30, 2025, across more than 180 countries.

Life360 delivers peace of mind and enhances everyday family life with seamless coordination for all the moments that matter, big and small. By continuing to innovate and deliver for our customers, we have become a household name and the must-have mobile-based membership for families (and those friends who are basically family).

Life360 has more than 500 (and growing!) remote-first employees. For more information, please visit life360.com.

Life360 is a Remote-First company, which means a remote work environment will be the primary experience for all employees. All positions, unless otherwise specified, can be performed remotely (within the US) regardless of any specified location above. 

About The Team

The Horizons DevOps and Infrastructure team supports large-scale, data-intensive platforms that power real-time adtech and data science workloads across the organization. The team owns and operates critical infrastructure and data platforms, including Databricks, Snowflake, Apache Airflow, and Kubernetes-based services, processing fifty billion requests and tens of terabytes of data daily. Working closely with data engineering, data science, and security teams, the group focuses on building reliable, scalable, and automated systems that enable high-throughput data processing, analytics, and ML workflows. Team members take end-to-end ownership of production systems, influence architectural direction, and play a key role in evolving the platform as the organization integrates new technologies and scales further.

About the Job

We are seeking a Staff DevOps Engineer.


$$$ Full time
Machine Learning Engineer
  • Radformation
  • Remote
design support software code

About Radformation

Radformation is transforming the way cancer clinics deliver care. Our innovative software automates and standardizes radiation oncology workflows, enabling clinicians to plan and deliver treatments faster, safer, and more consistently, so patients everywhere can receive the same high-quality care.

Our software focuses on three key areas:

  • Time savings through automation.
  • Error reduction through automated systems.
  • Increased quality care through advanced algorithms and workflows.

We are a fully remote, mission-driven team united by a shared goal: to reduce cancer’s global impact and help save more of the 10 million lives it claims each year. Every line of code, every product release, and every conversation with our customers brings us closer to ensuring no patient’s treatment quality depends on where they live.

Why This Role Matters

In this role you will help advance Radformation’s AI-driven radiotherapy products by building and improving machine learning models that directly impact clinical workflows and patient outcomes.

You will work closely with AI, cloud, research, and product teams to develop scalable data pipelines, improve model performance, and support regulatory submissions for medical device software.

Responsibilities Include:

  • Design, build, and maintain robust ETL pipelines to support AI model development and deployment.
  • Develop, train, and optimize machine learning models used in radiotherapy software.
  • Collaborate with product and research teams to bring new AI-driven features and algorithms into production.
  • Support FDA submissions by contributing to documentation, validation, and regulatory processes.
  • Participate in design reviews, risk analyses, and cross-functional discussions to ensure safe and effective products.
  • Mentor junior engineers and data scientists and contribute to a collaborative team environment.

Required Experience:

  • MS in Computer Science, Mathematics, Statistics, or a related field with 3+ years of experience.
  • Expert-level proficiency in Python.
  • Hands-on experience building, training, and tuning machine learning models.
  • Strong experience with PyTorch and/or TensorFlow.
  • Experience developing convolutional neural networks, including U-Net architectures.
  • Experience using Git and modern code repositories (GitHub, Bitbucket, Azure DevOps, etc.).

Preferred Experience:

  • Experience with medical imaging and image processing techniques (segmentation, resampling, smoothing).
  • Familiarity with clinical data standards such as DICOM or HL7.
  • Experience working in regulated environments (HIPAA, FDA, or medical device software).
  • Experience with modern AI-assisted development tools (e.g., Cursor, Claude Code, Codex).

AI & Hiring Integrity

At Radformation we believe AI can be an incredible tool for innovation, but our hiring process is all about getting to know you, your skills, experience, and unique approach to problem solving. We ask that all interviews and assessments be completed without tools that generate answers in real time. This helps ensure a fair process for everyone and allows us to see your authentic work. Using such tools during the process may affect your candidacy.

Benefits & Perks — What Makes Us RAD

We care about our people as much as we care about our mission. We offer competitive compensation, benefits, and the opportunity to make an impact in the fight against cancer. The salary range for this role is $160,000 - $200,000 USD base, plus bonus eligibility.

For US teammates (via TriNet):

Health & Wellness

  • Multiple high-quality medical plan options with substantial employer contributions toward premiums, often covering the full cost depending on the plan selected.
  • Health coverage starting on day one
  • Short-term and long-term disability and supplementary life insurance

Financial & Professional Growth

  • 401(k) with employer match vested immediately
  • Annual reimbursement for professional memberships
  • Conference attendance and continued learning opportunities

Work-Life Balance & Perks

  • Self-managed PTO and 10 paid holidays
  • Monthly internet stipend
  • Company-issued laptop and one-time home office setup stipend
  • Fully remote work environment with virtual events and yearly retreats, because we like to have fun while doing work that matters

For global teammates (via Deel):
At Radformation, we want every team member to feel supported, no matter where they live. For teammates outside the US, we provide benefits that align with local laws and standards, working with our Employer of Record (EOR) partners to ensure fairness and equity. This means your benefits package will be locally compliant, competitive, and designed to support your health, financial security, and work-life balance.

Our Commitment to Diversity

Cancer affects people from every walk of life, and we believe our team should reflect that diversity. Radformation is proud to be an equal opportunity workplace and an affirmative action employer. We welcome candidates from all backgrounds and are committed to fostering an inclusive environment for all employees.

Agency & Candidate Safety Notice

Radformation does not accept unsolicited resumes from agencies without a signed agreement in place. We do not partner with third-party recruiters unless explicitly stated. All legitimate communication from Radformation will come from an @radformation.com email address. If you receive outreach from another domain or via unofficial channels, please contact careers@radformation.com.


$$$ Full time
Staff Software Engineer
  • Office Hours
  • Remote
software system consulting technical

About Us

Office Hours is an on-demand expert network that connects leading organizations with trusted experts across various knowledge domains. Experts earn income by sharing their knowledge through advisory work, projects, and AI model training. Our platform handles the complexities behind the scenes—screening, compliance, scheduling, and payments—so knowledge sharing stays focused on meaningful insights and real impact.

We’re a hyper-growth and profitable company, quickly expanding our expert network, launching new offices, and new products. We are headquartered in San Francisco, with offices in Brooklyn and Bangalore. Our customers include the fastest-growing digital health companies, technology companies, institutional investment firms, consulting firms and AI Labs. We are backed by top marketplace investors and operators of companies like DoorDash, Airbnb, Affirm.

What we believe

Human knowledge is the world’s most valuable asset. And yet, despite being more interconnected than ever, most knowledge still remains stuck in our heads, inaccessible and underutilized. Our vision is to make human knowledge easily accessible and infinitely scalable by building tools for the new age knowledge economy.

About the role

At first glance, Office Hours looks simple: search, match, connect, and pay. Under the hood, the system is anything but.

We’re building and evolving a deeply interconnected platform spanning search, discovery, recommendations, data pipelines, logistics, payments, compliance, and performance. The entire stack has been built in-house, from expert profiles and discovery experiences to workflow automation and an underlying knowledge graph that ties everything together.

We’re looking for a Staff Full Stack Software Engineer who enjoys working across the stack, takes ownership of complex problems, and cares deeply about building thoughtful, high-quality product experiences. This is a hands-on role with real influence over product direction, technical architecture, and how we ship software.

What you’ll do

  • Own the design, implementation, and rollout of meaningful user-facing features, from problem definition through production

  • Partner closely with design, product, and client-facing teams to translate real user needs into shipped solutions

  • Architect, build, and evolve scalable, reliable systems across the front end, back end, and infrastructure

  • Set a high bar for code quality through clear implementations, thoughtful tradeoffs, and active participation in reviews and technical discussions

  • Explore and integrate modern tools, including AI-powered workflows, and share learnings that improve how the team builds and ships

What you bring

  • 8+ years of professional software engineering experience, with meaningful time spent working across the stack

  • A track record of shipping high-quality, user-facing products in production environments

  • Strong product intuition and the ability to translate ambiguous user or business problems into technical solutions

  • Comfort operating in fast-moving environments where priorities evolve and ownership matters

  • A bias toward action, paired with sound judgment and attention to detail

Our tech stack

  • Back end: Node.js, Typescript, MongoDB & Postgres, OpenSearch, Temporal

  • Front end: React, Next.js, Tailwind, shadcn

  • Infrastructure: AWS, Kubernetes, Docker, Datadog, Sentry

  • Workflow: GitHub, Slack, Notion, Figma, Linear, PostHog, Metabase

Benefits + Perks

  • Competitive salary and equity

  • Medical, dental, and vision coverage

  • 401(k)

  • Monthly wellness and fitness stipend

  • Paid time off policy, along with company holidays

  • Annual company off-sites (Tahoe, Mendocino, Mexico City, San Diego, Park City)

  • Parent-friendly policies, remote flexibility, and paid family leave

Pay Transparency Notice

Full-time offers include base salary, equity, and benefits.

Pay range: $225,000 - $250,000, based on seniority and relevant experience

*This role can be 100% remote, but we do have offices in San Francisco and NYC

Don’t meet every single requirement? Studies have shown that some candidates, especially underrepresented groups such as women and people of color, are less likely to apply to jobs unless they meet every single qualification. At Office Hours we believe in building a diverse and inclusive workplace, so if you’re excited about this role but don’t meet every qualification in the job description, we still encourage you to apply. You could still be the right candidate for this or other roles at Office Hours!



$$$ Full time
Salesforce CI/CD Apex Data Migration

Serve as the senior-most technical authority for Salesforce-based solutions, responsible for shaping platform strategy, defining enterprise-scale architectures, and leading the delivery of complex, multi-cloud and multi-system implementations.

You will partner closely with client executives, ZS leadership, and delivery teams to translate business strategy into secure, scalable, and high-performing Salesforce solutions, while also contributing to practice growth and thought leadership.

As a Senior Salesforce Technical Architect you will be expected to work across multiple Salesforce clouds, including but not limited to Salesforce Lifesciences Cloud, Agentforce, Salesforce DataCloud, and Salesforce Sales & Service Cloud.

This posting is original from the Get on Board platform.

Own and define end-to-end Salesforce architecture across multiple clouds, integrations, and enterprise systems

Lead technical discovery, solution design, and architectural decision-making for complex Salesforce programs

Architect scalable, reusable solutions using Apex, Lightning Web Components (LWC), Aura, Visualforce, APIs, and Salesforce configuration

Design and govern cross-cloud and cross-system integrations using REST/SOAP APIs, Apex callouts, outbound messaging, middleware, ETL, and iPaaS tools

Define data architecture, data migration strategies, and integration patterns for legacy-to-Salesforce transformations

Establish environment strategy, CI/CD pipelines, release management, and deployment models appropriate to enterprise-scale programs

Confidently lead client discussions on technical strategy, integrations, and platform transformation initiatives

Act as a trusted advisor to client executives by demonstrating deep business process and technical expertise

Support sales and pursuit activities by shaping solution scope, technical estimates, and risk assessments

Manage and coordinate technical delivery across multiple workstreams and development teams

Identify, manage, and proactively mitigate technical risks, dependencies, and delivery challenges

Review and govern solution designs and code to ensure quality, performance, security, and adherence to Salesforce best practices

Own overall technical documentation, including architecture diagrams, integration flows, data models, and design standards

Mentor and coach architects and developers, modeling high standards of technical excellence and delivery leadership

Contribute to internal enablement, architecture forums, and knowledge-sharing initiatives

Actively contribute thought leadership and help evolve ZS Salesforce architecture standards and best practices

What you’ll bring:

Bachelor’s degree in Computer Science, Engineering, or a related field (preferred)

8+ years of Salesforce experience or equivalent enterprise CRM experience

Advanced Salesforce certifications strongly preferred

  • Fluency in English
  • Client-first mentality
  • Intense work ethic
  • Collaborative spirit and problem-solving approach

Perks & Benefits:

ZS offers a comprehensive total rewards package including health and well-being, financial planning, annual leave, personal growth and professional development.

Our robust skills development programs, multiple career progression options and internal mobility paths and collaborative culture empowers you to thrive as an individual and global team member.

Hybrid working model:

We are committed to giving our employees a flexible and connected way of working. A flexible and connected ZS allows us to combine work from home and on-site presence at clients/ZS offices for the majority of our week. The magic of ZS culture and innovation thrives in both planned and spontaneous face-to-face connections.

Travel:

Travel is a requirement at ZS for client facing ZSers; business needs of your project and client are the priority. While some projects may be local, all client-facing ZSers should be prepared to travel as needed.

$91455 - $137273 Full time
redis sysadmin technical support

Who we are

We're Redis. We built the product that runs the fast apps our world runs on. (If you checked the weather, used your credit card, or looked at your flight status online today, you’re welcome.) At Redis, you’ll work with the fastest, simplest technology in the business—whether you’re building it, telling its story, or selling it to our 10,000+ worldwide customers. We’re creating a faster world with simpler experiences. You in?

Why would you love this job?

As a Technical Support Engineer, you will be responsible for helping customers by diagnosing and resolving complex technical issues in a high-contribution role with exciting technical challenges, ongoing learning, and the excitement of helping name-brand customers as part of our fun, tight-knit team.

In this role, you will use and extend your existing technical depth and increase your technical breadth by addressing complex problems for the top companies in the world. You will level up to become an expert problem solver on Redis Enterprise Software, which is used as a high-performance database by thousands of customers worldwide. You will dive deep into exciting, cutting-edge technologies by supporting Redis Enterprise running on the top cloud platforms and in the top container orchestration platforms.

Join the best of the best and continuously learn new things. We are looking for brilliant experts who are curious, persistent, and happy digging through the full stack, from code to Sysadmin to networking to performance. If this sounds like you, please check out the technical foundation we’d like you to bring.

What you’ll do:

  • Work with customers to troubleshoot and resolve complex software issues:

    • Reproduce issues, replicating customer environments as needed.

    • Document issues and contribute to our internal team documentation.

    • Provide Root Cause Analysis

  • Collaborate with Engineering as needed to provide solutions.

  • Analyze performance questions that may arise along the data path (including networks) for deployments that may be in the Cloud or On-premises.

  • Provide technical expertise during testing, deployment, and upgrading of Redis software.

  • Manage critical customer issues, facilitating communication between customers, CloudOps, Engineering, Product, TAMs, and Sales.

  • Serve as the customer advocate for timely resolution of issues and handling escalations while helping customers realize and maximize the value of their Redis subscription.

  • Participate in new product development, customer training, and other support-related activities.

This role requires a 5-day work week that includes Saturday and Sunday.

What will you need to have?

  • At least five years of technical experience as a Support Engineer, Systems Engineer, Software Engineer, or Site Reliability Engineer in an enterprise software company

  • At least four years of experience troubleshooting real-time production systems

  • At least two years of hands-on experience with cloud infrastructure.

  • Strong background in scripting or programming languages (Python, Java, C#, JavaScript, Bash, Powershell, etc.)

  • Expert working knowledge in Linux/Unix and networking (TCP/IP)

  • Professional experience working with networking tools like wireshark, tcpdump, etc.

  • Experience in analyzing and debugging production issues at scale.

  • Experience with alerting and monitoring systems (Prometheus, Grafana, ELK, Splunk, etc.).

  • Working knowledge of Cloud-based and On-premises environments

  • Proficiency in communication and presentation, both written and verbal (in English)

  • Strong technical background with excellent problem-solving and multi-tasking skills

  • High availability and commitment to customers at any time

Extra great if you have:

  • Bachelor of Science in Computer Science or Information Systems

  • Experience with NoSQL databases (especially Redis)

  • Experience working with container orchestration environments, such as Kubernetes

The estimated gross base annual salary range for this role is $91,455 – $137,273 per year in New York, California, Washington, Colorado, and Rhode Island. Actual compensation may vary and is dependent on various factors, including a candidate’s work location, qualifications, experience, and competencies. Base annual salary is one component of Redis’ total compensation and competitive benefits package, which may include 401(k), unlimited time off, learning and development opportunities, and comprehensive health and wellness benefits. This role may include discretionary bonuses, stock options, commuter benefits based on location, or a commission plan. Salary history is not used in compensation package decisions. Redis utilizes market pay data to determine compensation, so posted compensation ranges are subject to change as new market data becomes available.

As a global company, we value a culture of curiosity, diversity of thought, and innovation from our employees, customers, and partners. Redis is committed to a diverse and inclusive work environment where all employees’ differences are celebrated and supported, and everyone feels safe to bring their authentic selves to work. Redis is dedicated to equal employment opportunities regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, marital status, disability, gender identity, gender expression, Veteran status, or any other classification protected by federal, state, or local law. We strive to create a workplace where every voice is heard, and every idea is respected.

Redis is committed to working with and providing access and reasonable accommodation to applicants with mental and/or physical disabilities. If you think you may require accommodations for any part of the recruitment process, please send a request to recruiting@redis.com. All requests for accommodations are treated discreetly and confidentially, as practical and permitted by law.

Any offer of employment at Redis is contingent upon the successful completion of a background check, consistent with applicable laws.

Redis reserves the right to retain data longer than stated in the privacy policy in order to evaluate candidates.



$$$ Full time
Engineering Manager Data Platform
  • TrueML
  • Remote in USA
manager design system python

Why TrueML?

 

TrueML is a mission-driven financial software company that aims to create better customer experiences for distressed borrowers. Consumers today want personal, digital-first experiences that align with their lifestyles, especially when it comes to managing finances. TrueML’s approach uses machine learning to engage each customer digitally and adjust strategies in real time in response to their interactions.

 

The TrueML team includes inspired data scientists, financial services industry experts and customer experience fanatics building technology to serve people in a way that recognizes their unique needs and preferences as human beings and endeavoring toward ensuring nobody gets locked out of the financial system.


About This Role:

As the Engineering Manager for our Data Platform, you will be the primary architect of the ecosystem that powers TrueML’s intelligence. We are currently in a phase of purposeful scaling, and we need your leadership to build a rock-solid, high-performing data foundation that bridges the gap between raw infrastructure and actionable insights. Your goal is to champion data integrity and technical excellence while leading a world-class team during this period of deliberate expansion.



What You'll Do:
  • Empower a Talented Team: Lead, manage, and mentor a group of data engineers, fostering their career development and championing a culture of technical excellence.
  • Architect Resilient Infrastructure: Own the design and development of data pipelines and systems to ensure they are prepared for company-wide expansion.
  • Champion Data Trust: Act as a relentless advocate for data quality by implementing the system controls and SLAs necessary for flawless production processes.
  • Collaborate Strategically: Partner cross-functionally with Data Science and Product managers to translate complex business needs into efficient, well-documented data models.
  • Maintain Technical Excellence: Perform high-impact code reviews and provide critical guidance to optimize ETL pipelines and schema performance.
  • Balance Leadership with Craft: Contribute directly to development work and troubleshooting alongside your team when the mission requires it.
  • Drive Data Accessibility: Ensure data is a true business enabler by making it reliable and easily accessible for stakeholders across the company.


Who You Are:

- An Experienced Leader: You have 2+ years of hands-on management experience and 5+ years of relevant data engineering expertise, with a track record of growing teams through coaching.

- A Big Data Expert: You have deep familiarity with modern technologies like Snowflake, Airflow, BigQuery, or Redshift, and mastery of both RDBMS and NoSQL databases.

- A Master of the Stack: You possess advanced proficiency in Python or Java and expert-level SQL skills, specifically in scaling schemas and tuning ETL performance.

- A Systems Thinker: You have extensive experience designing data warehouses and workflow systems, including owning SLAs for critical production processes.

- An Elite Communicator: You are a natural bridge-builder who can translate deep technical hurdles into clear, actionable updates for business partners.

- Purpose-Driven: You thrive in environments that value intentional progress and are excited to mature a data ecosystem from the ground up.

- Bonus Skills: You bring experience with Spark, Scala, or Protocol Buffers, or you have navigated the unique regulatory challenges of the FinTech industry.


$111,700 - $148,900 a year
Compensation Disclosure: This information reflects the anticipated base salary range for this position based on current national/regional data. Minimums and maximums may vary based on location. Individual pay is based on skills, experience, and other relevant factors.

We are a dynamic group of people who are subject matter experts with a passion for change. Our teams are crafting solutions to big problems every day. If you’re looking for an opportunity to do impactful work, join TrueML and make a difference.

 

Our Dedication to Diversity & Inclusion

 

TrueML and TrueAccord are equal opportunity employers. We promote, value, and thrive with a diverse & inclusive team. Different perspectives contribute to better solutions and this makes us stronger every day. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.


For California Applicants: we collect personal information for employment purposes. We do not sell personal information. Most of the information we have is provided to us by you and/or collected as part of the employment process. For more details on how we use, share, and delete personal information see our Privacy Policy.



Gross salary $2200 - 2400 Full time
JavaScript Android iOS Git
We are 3IT. Innovation and talent that make the difference!
For us, innovation is a collaborative process and growth is a shared goal. We are guided by values such as teamwork, reliability, empathy, commitment, honesty, and quality, because we know that good results start with good relationships.
We also value diversity and promote inclusive workplaces. That is why we actively support compliance with Law 21.015, ensuring accessible processes and equal opportunities.
If you are looking for a place where you can keep learning, contribute what you know, and grow in a close-knit, collaborative environment, this could be your next opportunity.

This job is published by getonbrd.com.

📝 What would your job be?

Develop efficient user interfaces, ensuring the functionality and visual quality of the software in compliance with project requirements.

🌟 Tools

  • React Native
  • JavaScript
  • TypeScript
  • REST services
  • Agile framework: Scrum
  • Code versioning (Git)
  • Native iOS or Android development
  • Atlassian suite tools: Jira, TM4J, Bamboo
  • Banking experience
  • At least 4 years of experience working with the technologies listed above
📍 Where and how will you work?
  • Office location: Santiago
  • Mode: Hybrid

✋ A few things to consider before applying

  • You must be available to work in a hybrid mode and attend the client's offices in person.
  • If you have a disability, let us know if you need any special accommodation for your interview.

✌️ 3IT benefits

💰 Annual bonus
🦷 Dental insurance
📚 Training
📅 Administrative leave days
🍽️ Pluxxe card + $80,000
👕 Informal dress code
🚀 Upskilling and reskilling programs
🏥 MetLife supplementary health insurance
💊 Discounts at pharmacies and health centers
🐾 Discounts on pet insurance and pet stores
🎄 Bonus for Fiestas Patrias and Christmas
👶 Additional days added to paternity leave
🎂 Half a day off on your birthday
🏦 Caja de Compensación Los Andes
🌍 Mundo ACHS discount
🎁 Gift for the birth of a child
🛍️ Buk discounts

Health coverage Banchile pays or copays health insurance for employees.
Computer provided Banchile provides a computer for your work.
$$$ Full time
DataOps Engineer
  • BC Tecnología
Azure Spark CI/CD Terraform
BC Tecnología is an IT consultancy that implements infrastructure, development, and outsourcing solutions for clients in financial services, insurance, retail, and government. For this LATAM project, we are looking for a DataOps Engineer with at least 3 years of experience for an Azure + Databricks environment, working remotely for LATAM. The team focuses on building reliable, scalable data pipelines, with an emphasis on quality, monitoring, and security. You will take part in automating data flows, implementing IaC, and continuously improving data integration and delivery processes.

Job source: getonbrd.com.

Duties

  • Design, implement, and maintain data pipelines in Azure and Databricks environments, managing clusters, jobs, and notebooks.
  • Develop pipelines in PySpark and orchestration tools (Azure Data Factory, Databricks Workflows).
  • Automate data validation and quality, establishing metrics and alerts for proactive monitoring.
  • Manage IaC with Terraform for data infrastructure and for development, test, and production environments.
  • Integrate CI/CD in Azure DevOps / GitHub / GitLab for pipeline and code deployments.
  • Apply security, compliance, and cost-optimization best practices in Azure.
  • Work with cross-functional teams to understand requirements, design solutions, and deliver high-impact results.

Requirements and profile

We are looking for a professional with at least 3 years of experience in DataOps/Data Engineering and strong skills in Azure and Databricks. You must have a solid command of PySpark, Azure Data Factory, and Databricks Workflows, as well as CI/CD tools and data security practices. Experience in data quality automation, monitoring, and performance optimization is valued, along with the ability to work remotely, proactivity, a process orientation, and the ability to collaborate in agile teams. Experience in regulated environments and knowledge of data governance principles is desirable.

Nice to have

Azure certifications (AZ-xxx), experience in data orchestration, knowledge of observability tools, and a background in the financial or insurance sectors. Ability to communicate technical opportunities to stakeholders and document solutions clearly.

Benefits

At BC Tecnología we foster a collaborative work environment that values commitment and constant learning. Our culture is oriented toward professional growth through integration and knowledge sharing across teams.
The hybrid model we offer, based in Las Condes, combines the flexibility of remote work with in-person collaboration, enabling better balance and a more dynamic way of working.
You will take part in innovative projects with high-profile clients across diverse sectors, in an environment that promotes inclusion, respect, and technical and professional development.

Fully remote You can work from anywhere in the world.
Health coverage BC Tecnología pays or copays health insurance for employees.
Computer provided BC Tecnología provides a computer for your work.
$$$ Full time
Data Engineer
  • Factor IT
  • Santiago (Hybrid)
Python SQL BigQuery Docker
At Factor IT we work to drive digital transformation in large companies across the region, with a focus on Data & Analytics, automation, and artificial intelligence. Within our projects, we take part in initiatives that build and evolve data platforms on Google Cloud (GCP), integrating services, pipelines, and automation to enable advanced analytics and data-driven decision-making. You will join a team that designs robust, scalable solutions with modern technologies, deep technical expertise, and a culture of collaboration and continuous learning.

Apply directly from Get on Board.

Data Engineer

As a Data Engineer, your goal will be to design, build, and maintain reliable, scalable data pipelines in GCP environments, ensuring that data flows correctly from its sources all the way to models and analytical capabilities.
Your responsibilities include:
  • Developing and optimizing advanced SQL queries (PostgreSQL, MySQL).
  • Implementing ETL/ELT processes using Airflow, dbt, and orchestration/ingestion services such as Dataflow and Pub/Sub (a minimal sketch follows below).
  • Programming in Python to automate transformations and integrations.
  • Working with GCP services and practices to build maintainable solutions.
  • Deploying and managing components with Docker and Kubernetes, ensuring robustness and scalability.
We focus on collaborating closely with the team to understand business requirements, propose improvements, and ensure quality, efficiency, and reliability across the entire lifecycle of the data platform.
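
As referenced in the ETL/ELT bullet above, here is a minimal, hypothetical Airflow DAG showing the orchestration shape such a pipeline can take. The DAG id, schedule, and the stubbed callable are placeholders rather than an actual Factor IT pipeline; a real implementation would call provider hooks or operators (Postgres, BigQuery, Dataflow) or trigger dbt runs.

```python
# Hypothetical sketch: a small Airflow DAG giving the orchestration shape of a
# daily ETL/ELT run. The DAG id, schedule, and the stubbed callable are
# placeholders; a real pipeline would call provider hooks/operators (Postgres,
# BigQuery, Dataflow) or trigger dbt instead of printing.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_transform_load(**context):
    # Stub: extract from the source, transform in Python, load into the warehouse.
    print("run the ETL/ELT step here")


with DAG(
    dag_id="daily_sales_elt",          # placeholder name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                 # the 'schedule' argument requires Airflow 2.4+
    catchup=False,
) as dag:
    run_elt = PythonOperator(
        task_id="extract_transform_load",
        python_callable=extract_transform_load,
    )
```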

Mandatory requirements

We are looking for a Data Engineer with hands-on experience to join regional projects with real impact on technology transformation, especially in the financial sector.
Mandatory requirements:
  • Advanced SQL (PostgreSQL, MySQL).
  • BigQuery.
  • ETL/ELT: Airflow, dbt, Dataflow, Pub/Sub.
  • Python.
  • Experience with GCP.
  • Docker and Kubernetes.
We also value:
  • Ability to analyze problems, debug, and improve existing pipelines.
  • An orientation toward data quality and reliability.
  • Good communication and collaborative work to align solutions with business needs.
  • A continuous-learning mindset and adaptability to emerging technologies.
We care that you are proactive, able to propose improvements, and that you keep a responsible approach to operating and evolving the data platform.

Deseable

Sumará puntos si cuentas con:
  • Streaming (Kafka, Flink).
  • Java o Scala.
  • Experiencia con herramientas BI (Looker, Power BI, Tableau).
Estas habilidades nos ayudan a ampliar la capacidad de análisis, habilitar casos en tiempo real y facilitar la integración con productos y consumo de datos.
Ofrecemos una modalidad de trabajo híbrida desde Santiago, Chile, con flexibilidad horaria para un balance saludable entre vida profesional y personal.
Vas a formar parte de un ambiente colaborativo, dinámico y con tecnologías de última generación que impulsan el crecimiento profesional y la innovación tecnológica.
Contarás con un paquete salarial competitivo, acorde a la experiencia y perfil, e integrado a una cultura inclusiva que valora la diversidad, creatividad y el trabajo en equipo.
Participarás en proyectos desafiantes con impacto real en la transformación tecnológica de la región y en el sector financiero, dentro de una organización que promueve la innovación y el desarrollo profesional continuo.

$$$ Full time
Senior Data Engineer
  • Exadel
  • Brazil, Bulgaria, Colombia, Georgia, Lithuania, Poland, Romania
jira salesforce code web

Why Join Exadel 

We’re an AI-first global tech company with 25+ years of engineering leadership, 2,000+ team members, and 500+ active projects powering Fortune 500 clients, including HBO, Microsoft, Google, and Starbucks.

From AI platforms to digital transformation, we partner with enterprise leaders to build what’s next.
What powers it all? Our people are ambitious, collaborative, and constantly evolving.

About the Client  

A U.S.-based education services provider offering online and campus-based post-secondary education, primarily serving military personnel, veterans, and public service communities. The organization delivers degree and certificate programs across disciplines such as nursing, health sciences, business, IT, and liberal arts. In addition to its headquarters in West Virginia, the customer operates facilities and partner institutions across the United States. The primary product areas to work with are learning management systems, student enrollment, and academic operations on web and mobile platforms.

What You’ll Do  

  • Design, implement, and maintain scalable data pipelines using Snowflake, Coalesce.io, Airbyte, and SQL Server/SSIS, with some use of Azure Data Factory
  • Build and maintain dimensional data models to ensure high-quality, structured data for analytics and reporting
  • Implement Medallion architecture in Snowflake, managing bronze, silver, and gold layers
  • Collaborate with teams using Jira for task tracking and GitHub for code repository management
  • Ensure reliable ETL processes, data transformations, and data integration workflows
  • Help improve data modeling practices and address weaknesses in dimensional modeling
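
As a reference for the Medallion architecture item above, here is a minimal, hypothetical sketch of promoting records from a bronze landing table into a curated silver table using the Snowflake Python connector; the account, warehouse, and table names are illustrative assumptions, not details of the client's environment.

```python
# A minimal, hypothetical sketch of a bronze-to-silver promotion in Snowflake.
# Account, warehouse, database, and table names are illustrative assumptions.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",   # placeholder
    user="etl_user",        # placeholder
    password="***",
    warehouse="ETL_WH",
    database="ANALYTICS",
)

with conn.cursor() as cur:
    # Silver: de-duplicated, typed records derived from the raw bronze landing table
    cur.execute("""
        CREATE OR REPLACE TABLE SILVER.ENROLLMENTS AS
        SELECT DISTINCT student_id, course_id, TO_DATE(enrolled_at) AS enrolled_on
        FROM BRONZE.ENROLLMENTS_RAW
        WHERE student_id IS NOT NULL
    """)
conn.close()
```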

What You Bring  

  • Hands-on experience with Snowflake, Coalesce.io, Airbyte, SQL Server/SSIS, and Azure Data Factory
  • Strong understanding of Medallion architecture and dimensional data modeling
  • Practical experience in building ETL pipelines and transforming data for analytics
  • Familiarity with Jira and GitHub for collaborative work
  • Strong analytical and problem-solving skills, with ability to collaborate across teams
  • Minimum 4-hour overlap with US Eastern Time

Nice to Have

  • Exposure to Power BI (optional)
  • Experience with Salesforce data integration
  • Background in higher education / ed-tech domains

English level 

Intermediate/Upper-Intermediate

Legal & Hiring Information 

  • Exadel is proud to be an Equal Opportunity Employer.

$$$ Full time
Tech Recruiter (Colombia)
  • Crest IT Resources LLC
English Remote Work Spanish Applicant Tracking System

Crest IT Resources is a US-based IT staffing and talent placement firm. We help US companies hire skilled software engineers, data professionals, and IT specialists across the Americas. We're expanding our Latam sourcing operation and looking for an experienced recruiter on the ground in Colombia to lead it.

This job offer is on Get on Board.

IT Recruiter — Latam Sourcing | Remote | USD Salary

You'll own end-to-end sourcing and recruiting for IT positions across Latin America — primarily software engineers, DevOps, data engineers, QA, and cloud roles. Your candidates will be placed with US client companies on remote contracts, paid in USD.

Day to day, you'll:

  • Source IT talent across Colombia, Mexico, Argentina, Brazil, and other Latam markets using LinkedIn Recruiter, Get on Board, GitHub, and local platforms
  • Run Boolean searches in English, Spanish, and (ideally) Portuguese
  • Conduct first-round screens covering experience, English level, and timezone fit
  • Manage candidates through the pipeline: screen → technical assessment → client interview → offer
  • Build relationships with engineers in Latam tech communities (Discord, Slack, meetups)
  • Partner with our US team on requisitions, intake calls, and offer negotiation
  • Track funnel metrics and report weekly on pipeline health

Who you are

  • 3+ years of tech recruiting experience, ideally sourcing software engineers
  • Fluent English (B2 minimum, C1 preferred) — you'll be on daily calls with US hiring managers
  • Native or fully fluent Spanish; Portuguese a strong plus
  • Hands-on experience with LinkedIn Recruiter, Boolean search, and at least one ATS (Greenhouse, Lever, Workable, Bullhorn, etc.)
  • Comfortable working independently in a remote, async environment
  • Based in Colombia (Medellín, Bogotá, or anywhere with reliable internet)

Nice to have

  • Experience recruiting for US companies or nearshore staffing firms
  • Knowledge of the Latam tech salary landscape across multiple countries
  • Existing network in Latam engineering communities

Hiring model

Hiring model: independent contractor
 Working hours: core overlap with US Eastern Time (roughly 9am–2pm EST)

$$$ Full time
Principal Data Operations & Migration Lead
  • StarCompliance
  • York, United Kingdom
technical support software financial

About StarCompliance

StarCompliance is on a mission to make compliance simple and easy. Trusted globally by enterprise financial institutions, the user-friendly STAR platform empowers organizations to achieve regulatory compliance while safeguarding their integrity and business reputations. Through a customizable, 360-degree view of employee activity, the STAR software enables firms to automate the detection and resolution of potential areas of conflict while streamlining daily workflows and increasing efficiency. 


Role  

StarCompliance is looking for a senior, hands-on Data Operations & Migration Specialist to oversee our data feed operations and client data migration capabilities. This role combines technical leadership with day-to-day delivery, acting as a player coach who sets direction, unblocks issues, and still gets hands-on when it matters.


You will own the operational health of broker and client data feeds, lead complex data migration initiatives during client onboarding, and provide mentorship and technical guidance to engineers and analysts across both functions. Deep domain knowledge in financial services data, particularly regulated trading, transaction, or reference data, is critical. 


This role sits within the Enterprise Data function and works closely with R&D, Client Support Services, Professional Services, and Relationship Management to ensure client data is secure, accurate, compliant, and delivered on time. 



Responsibilities
  • Leadership Responsibilities 
  • Provide technical and operational leadership across Data Operations and Data Migration functions. 
  • Act as a player coach, balancing hands-on delivery with coaching, mentoring, and upskilling team members. 
  • Set standards for operational excellence, data quality, documentation, and incident management. 
  • Own prioritisation and workload planning across feeds and migrations, ensuring delivery commitments are met. 
  • Serve as the escalation point for complex data issues, client escalations, and high-risk migrations. 
  • Partner with Product, Engineering, and Professional Services to influence roadmap decisions and onboarding strategies.  
  • Act as a trusted technical partner for internal teams and external stakeholders during onboarding and operational change. 
  • Translate complex technical and data concepts into clear, actionable guidance for non-technical audiences. 
  • Contribute to client-facing discussions where deep data or feed expertise is required. 

  • Data Feed Operations Ownership 
  • Oversee the delivery, maintenance, and evolution of StarCompliance’s broker and client data feed infrastructure. 
  • Ensure secure setup and ongoing management of SFTP connectivity, access permissions, and encryption standards. 
  • Own operational monitoring of daily and intraday feeds, proactively identifying trends, risks, and failure patterns. 
  • Drive continuous improvement across feed automation, resilience, monitoring, and alerting. 
  • Work closely with the wider Enterprise Data engineering team on feed-related enhancements and defect resolution. 
  • Ensure platforms such as MoveIt and associated automation tooling are stable, well configured, and fit for scale. 

  • Data Migration Leadership 
  • Oversee the planning and execution of complex data migrations from third-party vendors into StarCompliance products. 
  • Define and review migration strategies, data mappings, validation approaches, and cutover plans. 
  • Ensure data integrity, accuracy, and regulatory compliance throughout the migration lifecycle. 
  • Provide hands-on support for data analysis, transformation, and validation where required. 
  • Oversee post-migration support, ensuring issues are resolved quickly and root causes addressed. 


Skills & Experience
  • Strong experience in financial services, fintech, regtech, or similarly regulated data environments.
  • Deep domain knowledge of financial broker feeds, file-based integrations, and operational data pipelines.
  • Hands-on experience with SQL Server, including T-SQL for investigation and data validation.
  • Strong understanding of ETL processes and tooling.
  • Experience with secure file transfer technologies and encryption standards, including SFTP, PGP/GPG, and SSH.
  • Proficiency in scripting and automation using tools such as PowerShell, Python, and SQL.
  • Proven experience leading data operations or data migration initiatives in production environments.
  • Ability to balance strategic thinking with hands-on delivery.
  • Excellent problem-solving skills and calm decision-making under pressure. 
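
As a reference for the scripting and secure file transfer items above, here is a minimal, hypothetical sketch of a daily feed arrival check over SFTP in Python; the host, credentials, path, and file naming convention are illustrative assumptions.

```python
# A minimal, hypothetical sketch of a daily feed health check over SFTP.
# Host, credentials, and file naming convention are illustrative assumptions.
from datetime import date
import paramiko

transport = paramiko.Transport(("sftp.example.com", 22))
transport.connect(username="feeds_svc", password="***")
sftp = paramiko.SFTPClient.from_transport(transport)

expected = f"broker_trades_{date.today():%Y%m%d}.csv.pgp"
files = sftp.listdir("/inbound/broker")

if expected not in files:
    # In practice this would raise an alert or ticket rather than just printing
    print(f"ALERT: expected feed file {expected} has not arrived")

sftp.close()
transport.close()
```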


Minimum Qualifications
  • Bachelor’s degree in Computer Science, Information Systems, Engineering, or equivalent professional experience.  
  • Proven leader with 5+ years in data operations, data engineering, data migration, or related technical roles, ideally within financial services or compliance technology. 


How We Think About AI
  • At StarCompliance, AI is not a side experiment or a specialist niche. We treat it as a practical capability that strengthens how we operate, scale, and deliver secure, high quality data services. 

  • In Enterprise Data, we expect senior leaders to: 
  • Use AI assisted tools to improve operational efficiency. 
  • Stay informed about how AI can enhance data operations, migration strategy, and automation in regulated environments. 
  • Apply AI thoughtfully, with strong awareness of data security, client confidentiality, regulatory risk, and cost. 
  • Help the team adopt AI responsibly in day-to-day operations, without compromising control, traceability, or compliance standards. 



StarCompliance Background Checks


All positions require pre-employment screening due to employees potentially having access to highly sensitive and confidential information involving finance and compliance; candidates must be trustworthy and have a heightened sensitivity to protecting confidential financial, professional information.  To be eligible for employment with StarCompliance, candidates must undergo a rigorous background investigation with checks including, but not limited to, criminal record history, consumer credit, employment history, qualifications, and education checks.  



Equal Opportunity Employer Statement


We prohibit discrimination and harassment of any kind based on race, sex, religion, sexual orientation, national origin, disability, genetic information, pregnancy, gender identity or expression, marital/civil union/domestic partnership status, veteran status or any other protected characteristic as outlined by country, state, or local laws.


This policy applies to all employment practices within our organisation, including hiring, recruiting, promotion, termination, layoff, recall, leave of absence, compensation, benefits, training, and apprenticeship. StarCompliance makes hiring decisions based solely on qualifications, merit, and business needs at the time. For more information, please request a copy of our Equal Opportunities Policy.




$$$ Full time
Databricks Administrator
  • Improving South America
Python SQL Automation Terraform
En Improving South America, brindamos servicios de TI para transformar la percepción del profesional de TI. Nos enfocamos en consultoría de TI, desarrollo de software y formación ágil.

Contribuirás a la construcción y mantenimiento de soluciones de datos que soportan analítica, reporting y la toma de decisiones operativas en toda la organización.

Trabajando de cerca con data engineers y otros perfiles tecnológicos, apoyarás las plataformas que permiten a los equipos transformar datos en insights relevantes.

En este rol, te enfocarás en la gestión de plataformas de datos y en su rendimiento general. Colaborarás con equipos multifuncionales para entender requerimientos de datos, mejorar sistemas existentes y entregar soluciones que respondan a necesidades del negocio.

Esta es una excelente oportunidad para seguir desarrollando tus habilidades en data engineering mientras contribuyes a impulsar decisiones basadas en datos a escala.

This job offer is on Get on Board.

Job functions

  • Monitorear y mantener la salud, disponibilidad y rendimiento de instancias de Snowflake y Databricks, utilizando herramientas nativas y estándares internos
  • Revisar periódicamente métricas de uso, logs del sistema y consumo de recursos para detectar y abordar anomalías
  • Asegurar la ejecución de actualizaciones, parches y respaldos conforme a políticas y estándares definidos
  • Investigar incidentes y degradaciones del servicio, gestionando su resolución o escalamiento para minimizar el impacto en el negocio
  • Administrar el ciclo completo de accesos: provisión, desprovisión y asignación de roles en Snowflake y Databricks, garantizando cumplimiento de estándares de seguridad
  • Implementar y auditar controles de acceso a datos, trabajando junto a equipos de seguridad (InfoSec) y líderes de plataforma
  • Mantener actualizados grupos, permisos y accesos según cambios organizacionales o necesidades de proyectos
  • Actuar como punto principal de contacto para soporte técnico e incidentes relacionados con las plataformas
  • Asesorar a los equipos en buenas prácticas de uso eficiente y seguro de las plataformas (optimización de costos, data sharing, orden de workspaces)
  • Mantener documentación clara y actualizada de la plataforma (onboarding, FAQs, guías de troubleshooting)
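
Como referencia para las funciones de monitoreo de salud de la plataforma, este es un esquema mínimo e hipotético de un chequeo de clústeres mediante la API REST de Databricks; el host, el token y la lógica de alerta son supuestos ilustrativos.

```python
# Esquema mínimo e hipotético de un chequeo de salud de clústeres vía la API REST de Databricks.
# El host y el token son supuestos; en la práctica se obtendrían de un secreto gestionado.
import os
import requests

host = os.environ["DATABRICKS_HOST"]    # p. ej. la URL del workspace
token = os.environ["DATABRICKS_TOKEN"]

resp = requests.get(
    f"{host}/api/2.0/clusters/list",
    headers={"Authorization": f"Bearer {token}"},
    timeout=30,
)
resp.raise_for_status()

for cluster in resp.json().get("clusters", []):
    estado = cluster.get("state")
    if estado not in ("RUNNING", "TERMINATED"):
        # Estados como ERROR o UNKNOWN se escalarían según el runbook interno
        print(f"Revisar clúster {cluster.get('cluster_name')}: estado {estado}")
```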

Qualifications and requirements

  • Título universitario en Ciencias de la Computación, Sistemas de Información o carrera afín, o experiencia equivalente
  • +2 años de experiencia administrando plataformas Snowflake y Databricks (o al menos una con conocimiento sólido de la otra)
  • Dominio de SQL, scripting (Python o Shell) y ecosistemas de datos en la nube (AWS, Azure o GCP)
  • Conocimiento en herramientas de automatización (Terraform, AWS CloudFormation, Databricks CLI/API, entre otros)
  • Experiencia gestionando usuarios, roles y controles de seguridad en entornos regulados
  • Capacidad para diagnosticar y resolver problemas de plataforma
  • Experiencia con herramientas de monitoreo, logging y alertas
  • Inglés intermedio-avanzado o avanzado (indispensable, ya que se realizan reuniones con equipos internacionales)

Conditions

  • Contrato a largo plazo.
  • 100% Remoto.
  • Vacaciones y PTOs
  • Posibilidad de recibir 2 bonos al año.
  • 2 revisiones salariales al año.
  • Clases de inglés.
  • Equipamiento Apple.
  • Plataforma de cursos en línea.
  • Budget para compra de libros.
  • Budget para compra de materiales de trabajo.
  • Y mucho más.

Internal talks Improving South America offers space for internal talks or presentations during working hours.
Computer provided Improving South America provides a computer for your work.
Gross salary $3500 - 4300 Full time
Redis REST API Node.js MongoDB

Breezy HR is a remote-first hiring platform tailored for small and mid-sized businesses. We are expanding our SaaS product with LLM-enabled workflows and a backend-first focus to deliver fast, reliable experiences for both candidates and hiring managers. You’ll contribute to core features, improve data pipelines, and integrate managed AI capabilities (AWS Bedrock) to power smarter recruiting processes. This role sits at the intersection of product engineering and AI-enabled automation, driving end-to-end delivery from design to production.

Apply to this job directly at getonbrd.com.

What you’ll own

  • Lead delivery for major features: decompose complex problems, drive execution, and bring initiatives to production.
  • Build and evolve backend services: design, implement, and improve REST APIs, microservices, data ingestion/processing, and third-party integrations.
  • Ship LLM-enabled features (AWS Bedrock): integrate managed LLM services into product workflows with reliability, monitoring, guardrails, and cost/latency awareness.
  • Own quality in production: debug across services, optimize performance, and uphold correctness.
  • Collaborate cross-functionally with Product and mentor teammates through reviews and collaboration.
  • Maintain hands-on ownership with a pragmatic, ship-and-iterate mindset.
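
As a reference for the LLM-enabled features item above, here is a minimal, hypothetical sketch of invoking a managed model through AWS Bedrock with boto3; the model id, prompt, and response parsing follow Anthropic's Messages format and are illustrative assumptions, not Breezy's actual workflow.

```python
# A minimal, hypothetical sketch of calling a managed LLM through AWS Bedrock.
# Model id, prompt, and parsing assume Anthropic's Messages format; adjust per model.
import json
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 300,
    "messages": [
        {"role": "user", "content": "Summarize this candidate profile in two sentences: ..."}
    ],
}

resp = client.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    body=json.dumps(body),
)
payload = json.loads(resp["body"].read())
print(payload["content"][0]["text"])
```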

What you’ll bring

We’re seeking a senior backend engineer with 7+ years of web application experience and a strong track record shipping scalable, API-driven systems. You’ve built and operated production services in Node.js, including microservices, REST APIs, and asynchronous workflows. You’re comfortable working with data stores like MongoDB and Redis (schema design, indexing, caching, performance). This role requires that you’ve shipped at least one production LLM workflow end-to-end using AWS Bedrock (not a prototype), with reliability and cost/latency in mind. You communicate clearly in English (B2+ required, C1 preferred), document decisions, work autonomously with a bias toward action, and bring strong product ownership, turning ambiguous goals into shipped outcomes. You must be located in Colombia for payroll/compliance.

Nice-to-have

Deeper AWS infrastructure experience (e.g., Terraform/CDK/CloudFormation, networking, CI/CD, and production observability patterns). Frontend experience with modern frameworks like React, Angular, Vue, or Svelte to help ship end-to-end product changes.

What we offer

Remote-first environment with flexible collaboration across time zones, a startup-paced team culture, and the opportunity to shape AI-enabled features in a growing SaaS product. Competitive salary in COP, exposure to cutting-edge LLM-driven workflows, and a collaborative, low-ego team. You’ll work with a distributed engineering and product squad focused on fast, reliable delivery.

Flexible hours Flexible schedule and freedom for attending family needs or personal errands.
Computer provided Breezy HR provides a computer for your work.
Informal dress code No dress code is enforced.
$$$ Full time
Ingeniero de Datos
  • BICE VIDA
  • Santiago (Hybrid)
Python SQL ETL Data lake
En BICE VIDA somos líderes en el rubro de las aseguradoras y trabajamos para satisfacer las necesidades de seguridad, prosperidad y protección de nuestros clientes. Estamos impulsando una fuerte transformación digital para mantenernos a la vanguardia, entregar soluciones world-class y responder a los constantes cambios del mercado.

Apply to this posting directly on Get on Board.

🎯¿Qué buscamos?

En BICE Vida nos encontramos en búsqueda de un Ingeniero de Datos Junior para desempeñarse en el COE de Datos, perteneciente a la Gerencia de Planificación y Gobierno de Datos.
🧭 El objetivo del cargo es apoyar la construcción, mantenimiento y mejora de los procesos que permiten que los datos lleguen limpios, ordenados y disponibles para que la organización pueda analizarlos y tomar buenas decisiones.
💡 Tendrás la oportunidad de contribuir aprendiendo y aplicando buenas prácticas, colaborando con ingenieros senior y equipos de negocio, y asegurando que los datos fluyan de forma segura, confiable y eficiente dentro de la plataforma de datos📊.

📋 En este rol deberás:
  • Participar en el proceso de levantamiento de requerimientos con las áreas de negocio, apoyando a las áreas usuarias en el entendimiento de sus necesidades desde un punto de vista funcional.
  • Apoyar la incorporación de nuevas fuentes de datos al repositorio centralizado de información (Data Lake) de la compañía.
  • Comprender conceptos fundamentales de ETL/ELT.
  • Validación básica de datos.
  • Identificar errores en ejecuciones o datos.
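
Como referencia para el punto de validación básica de datos, este es un esquema mínimo e hipotético con pandas; el archivo y las columnas son supuestos ilustrativos.

```python
# Esquema mínimo e hipotético de una validación básica de datos con pandas.
# El archivo y las columnas son supuestos ilustrativos.
import pandas as pd

df = pd.read_csv("polizas.csv")  # archivo hipotético

reporte = {
    "filas": len(df),
    "nulos_por_columna": df.isnull().sum().to_dict(),
    "duplicados": int(df.duplicated(subset=["numero_poliza"]).sum()),
}

# Una regla simple: detener la carga si hay pólizas duplicadas
if reporte["duplicados"] > 0:
    raise ValueError(f"Se encontraron {reporte['duplicados']} pólizas duplicadas")
print(reporte)
```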

🧠 ¿Qué necesitamos?

  • Formación académica: Ingeniero Civil Informático o Ingeniero Civil Industrial o carrera afín.
  • Mínimo 1 año de experiencia en gestión de datos o en desarrollo de soluciones informáticas.
  • Experiencia trabajando en alguna nube (AWS, GCP, Azure).
  • Conocimiento en herramientas de consulta de datos, tales como SQL y Python (nivel intermedio).
  • Participación en proyectos relacionados a datos, independiente de la tecnología utilizada.

✨Sumarás puntos si cuentas con:

  • AWS
  • Terraform
  • Spark/Scala
  • Tableau
  • Github
  • R Studio
  • Metodologías Ágiles (Scrum, Kanban)

¿Cómo es trabajar en BICE Vida? 🤝💼

  • Contamos con la mejor cobertura de la industria y te brindamos un seguro complementario de salud, dental y vida, y además un seguro catastrófico (Para ti y tus cargas legales). 🏅
  • Bonos en UF dependiendo de la estación del año y del tiempo que lleves en nuestra Compañía. 🎁
  • Salida anticipada los días viernes, hasta las 14:00 hrs, lo que te permitirá balancear tu vida personal y laboral. 🙌
  • Dress code semi-formal, porque privilegiamos la comodidad. 👟
  • Almuerzo gratis en el casino corporativo, con barra no-fit el día viernes. 🍟
  • Contamos con capacitaciones constantes, para impulsar y empoderar equipos diversos con foco en buscar mejores resultados.
  • Nuestra Casa Matriz se encuentra en el corazón de Providencia, a pasos del metro Pedro de Valdivia. 🚇
  • Puedes venir en bicicleta y la cuidamos por ti. Tenemos bicicleteros en casa matriz. 🚲

Wellness program BICE VIDA offers or subsidies mental and/or physical health activities.
Accessible An infrastructure adequate for people with special mobility needs.
Life insurance BICE VIDA pays or copays life insurance for employees.
Meals provided BICE VIDA provides free lunch and/or other kinds of meals.
Bicycle parking You can park your bicycle for free inside the premises.
Digital library Access to digital books or subscriptions.
Health coverage BICE VIDA pays or copays health insurance for employees.
Dental insurance BICE VIDA pays or copays dental insurance for employees.
Computer provided BICE VIDA provides a computer for your work.
$$$ Full time
Infrastructure Manager
  • Andromeda Cluster
  • San Francisco
manager training technical cloud

Infrastructure Manager

Location: North America Remote / San Francisco · Full-Time

About Andromeda

Andromeda Cluster was founded by Nat Friedman and Daniel Gross to give early-stage startups access to the kind of scaled AI infrastructure once reserved only for hyperscalers.

We began with a single managed cluster — but it filled almost instantly. Since then, we’ve been quietly building the systems, network, and orchestration layer that makes the world’s AI infrastructure more accessible.

Today, Andromeda works with leading AI labs, data centers, and cloud providers to deliver compute when and where it’s needed most. Our platform routes training and inference jobs across global supply, unlocking flexibility and efficiency in one of the fastest-growing markets on earth.

Our long-term vision is to build the liquidity layer for global AI compute. We are expanding to new frontiers to find the brightest that work in AI infrastructure, research and engineering.

The Opportunity
We're hiring an Infrastructure Manager to accelerate supply and demand matching on our platform. This is an Individual Contributor role reporting to the Head of Infrastructure.
The Infrastructure team sits at the core of the business. We're responsible for acquiring and facilitating compute resources across the company, working closely with compute providers, sales, and technical teams to match compute supply with demand.


Today we have already established the fundamental layer of capacity with providers. As we scale, we are building the next layer—widening our network and liquidity, deepening the scope of our services, and accelerating our growth.


What You'll Do
  • Match incoming leads from our sales team with internal capacity and external capacity in the market
  • Maximize utilization of our compute resources
  • Source and onboard new compute suppliers across the globe
  • Source capacity based on customer needs and market trends
  • Solve customer and supplier problems in a fast-moving, dynamic market
  • Understand technical and commercial differences between suppliers to optimize our capacity funnel
  • Develop a proactive compute strategy informed by market intelligence
  • Negotiate cost with suppliers and other vendors
  • Create and implement processes around capacity planning


What We're Looking For
  • 2+ years in cloud sales, GPUs, data centers, or a related field
  • Existing network of contacts in the compute market (providers, brokers, or buyers)
  • Deep understanding of the GPU compute market—what drives supply and demand
  • Strong written and verbal communication across technical and commercial stakeholders
  • Sound judgment in decisions that directly impact revenue and cost
  • Comfortable operating in ambiguity
  • Self-directed and energetic, able to operate autonomously while collaborating cross-functionally
  • Bias toward action in a fast-paced environment


Why You'll Love It Here

  • Impact: Be in a critical team unlocking revenue for the wider company

  • Real business: Meaningful revenue, complex transactions, and tangible impact

  • High-growth environment: Get in early at a company in a massive market

  • Ownership: Direct line to leadership and influence over how we scale

  • Competitive compensation + meaningful equity

  • Comprehensive benefits for you and your dependents, including healthcare, dental, and vision coverage, 401(k), and unlimited PTO


Andromeda Cluster is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.



$$$ Full time
Data Scientist
  • Arbol
  • New York City, New York
back-end python support fintech

Arbol is a global climate risk coverage platform and FinTech company offering full-service solutions for any business looking to analyze and mitigate exposure to climate risk. Arbol’s products offer parametric coverage which pays out based on objective data  triggers rather than subjective assessment of loss. Arbol’s key differentiator versus traditional InsurTech or climate analytics platforms is the complete ecosystem it has built to address climate risk. This ecosystem includes a massive climate data infrastructure, scalable product development, automated, instant pricing using an artificial intelligence underwriter, blockchain-powered operational efficiencies, and non-traditional risk capacity bringing capital from non-insurance sources. By combining all these factors, Arbol brings scale, transparency, and efficiency to parametric coverage.


In this role, you will research, develop, and apply machine learning tools to model and price climate and weather risk. You will work with diverse weather and geospatial datasets covering a suite of phenomena, from traditional weather-station readings of temperature and precipitation, to radar measurements of hail stone sizes, to satellite indices of vegetation content. You will learn how to use our existing catalog of pricing and modeling tools, engage in their improvement and maintenance, and develop new methodologies. We are open to a range of experience levels for this position.
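
As an illustration of how a parametric trigger can translate an objective index into a payout, here is a minimal, hypothetical sketch; the strike, exit, and limit values are illustrative assumptions, not Arbol's actual pricing logic.

```python
# A minimal, hypothetical sketch of a parametric payout: the contract pays when an
# objective index (here, cumulative rainfall) falls below a strike. All numbers are
# illustrative assumptions.
def parametric_payout(rainfall_mm: float, strike_mm: float = 100.0,
                      exit_mm: float = 40.0, limit_usd: float = 50_000.0) -> float:
    """Linear payout between strike and exit, capped at the limit."""
    if rainfall_mm >= strike_mm:
        return 0.0
    if rainfall_mm <= exit_mm:
        return limit_usd
    return limit_usd * (strike_mm - rainfall_mm) / (strike_mm - exit_mm)

# Example: a dry season with 70 mm of observed rainfall
print(parametric_payout(70.0))  # 25000.0
```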



About the Team

The analytics team is responsible for making sense of the terabytes of data Arbol has at its disposal. It forms the connective tissue between more client-facing teams, such as sales, and back-end roles like data engineering. You’ll be joining a small team of data scientists and researchers and will have a unique opportunity to impact many levels of the firm. This is an ideal position for someone interested in building machine learning systems while taking a deep dive into the insurance industry.



What You'll Be Doing
  • Collaborate within the analytics team and across teams to gain expertise in Arbol's data/pricing infrastructure and products
  • Develop and improve models for climate and weather perils such as heat waves, severe convective storms, and tropical cyclones
  • Implement, assess, and execute pricing algorithms for a wide array of weather risks
  • Work with sales and executive teams to perform business-critical analytics


What You'll Need
  • BA in statistics, computer science, mathematics, or related quantitative field
  • Experience programming in Python and familiarity with common data science packages (Pandas, Numpy, scikit-learn)
  • Experience analyzing large datasets
  • Strong problem solving and analytical skills
  • Comfort with statistics (e.g., linear regression, hypothesis testing)
  • Willingness to work and learn in a fast-paced environment


$95,000 - $125,000 a year

Essential Job Functions & Physical Requirements

Ability to sit for extended periods of time while working at a computer, with or without reasonable accommodation

Ability to use a computer, keyboard, mouse, and standard office equipment (e.g., phone, printer, scanner)

Ability to view a computer screen for prolonged periods, with or without reasonable accommodation

Ability to communicate effectively in person, by phone, and via email

Ability to occasionally stand, walk, bend, and reach within an office environment

Ability to lift and/or move up to 10–15 pounds occasionally (e.g., office supplies, files), with or without reasonable accommodation

Ability to perform repetitive motions, such as typing or data entry

Ability to maintain focus and attention while performing detailed tasks



Interested, but you don’t meet every qualification? Please apply!

Arbol values the perspectives and experience of candidates with non-traditional backgrounds and we encourage you to apply even if you do not meet every requirement.


Accessibility

Arbol is committed to accessibility and inclusivity in the hiring process. As part of this commitment, we strive to provide reasonable accommodations for persons with disabilities to enable them to access the hiring process. If you require an accommodation to apply or interview, please contact hr@arbol.io


Benefits

Arbol is proud to offer its full-time employees competitive compensation and equity in a high-growth startup.  Our health benefits include comprehensive health, dental, and vision coverage, and an optional flexible spending account (FSA) to support your health.  We offer a 401(k) match to support your future, and flexible PTO for you to relax and recharge. 


Equal Opportunity Employer

Arbol is an Equal Opportunity Employer and does not discriminate on the basis of race, color, religion, sex (including pregnancy, sexual orientation, or gender identity), national origin, age, disability, veteran status, or any other legally protected status.



Arbol participates in the E-Verify program to confirm employment eligibility.




Gross salary $2000 - 2200 Full time
Especialista Desarrollador
  • BC Tecnología
  • Santiago (Hybrid)
.Net Node.js Scrum Microservices
BC Tecnología es una empresa de servicios TI que gestiona portafolio, desarrolla proyectos, realiza outsourcing y selección de profesionales para clientes en sectores financieros, seguros, retail y gobierno. Buscamos incorporar un Especialista Desarrollador para formar parte de equipos ágiles que trabajarán en proyectos de desarrollo de software y migraciones de datos, con enfoque en la entrega de incrementos de valor para clientes de alta exigencia. El candidato participará en iniciativas de desarrollo y mantenimiento de aplicaciones, liderando y colaborando en soluciones de integración y procesamiento de datos dentro de un marco de Scrum, con foco en calidad, escalabilidad y performance. El rol se desempeñará en un entorno híbrido, combinando trabajo en oficina y remoto, para asegurar una ejecución eficiente y una entrega continua de valor.

This job offer is on Get on Board.

Funciones y responsabilidades

  • Desarrollar y mantener aplicaciones y procesos, asegurando calidad, rendimiento y escalabilidad.
  • Participar en equipos ágiles y entrega de incrementos de producto con foco en valor para el negocio.
  • Analizar y modelar datos para soluciones TI, incluyendo migraciones entre plataformas, procesos batch/masivos y ETL.
  • Trabajar con SQL Server, SSIS, ASP.NET, .NET Framework y Angular, aplicando patrones de diseño y buenas prácticas de desarrollo.
  • Contribuir en la definición técnica y arquitectónica de soluciones, incluyendo microservicios (Node.js) e integraciones con AWS.
  • Colaborar con equipos multidisciplinarios y respaldar la mejora continua de procesos y metodologías Scrum.
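
Como referencia para el punto sobre migraciones y procesos batch/masivos, a continuación un esquema mínimo e hipotético de una carga por lotes entre dos bases SQL Server usando pyodbc; los DSN, tablas y tamaño de lote son supuestos ilustrativos.

```python
# Esquema mínimo e hipotético de una migración batch leyendo SQL Server por lotes.
# DSN, tablas y tamaño de lote son supuestos ilustrativos.
import pyodbc

origen = pyodbc.connect("DSN=LegacySQLServer")   # DSN hipotético
destino = pyodbc.connect("DSN=NuevaPlataforma")  # DSN hipotético

cur_in = origen.cursor()
cur_out = destino.cursor()

cur_in.execute("SELECT id, nombre, saldo FROM dbo.clientes")
while True:
    lote = cur_in.fetchmany(5000)
    if not lote:
        break
    cur_out.executemany(
        "INSERT INTO dbo.clientes_migrados (id, nombre, saldo) VALUES (?, ?, ?)",
        [tuple(fila) for fila in lote],
    )
    destino.commit()  # confirma por lote para acotar transacciones

origen.close()
destino.close()
```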

Requisitos y perfil deseado

Requisitos mínimos:
  • Formación universitaria en Ingeniería en Sistemas, Informática o carrera afín.
  • Mínimo 3 años de experiencia en desarrollo de software.
  • Experiencia en migración de datos entre plataformas, procesos batch/masivos y ETL.
  • Conocimientos en SQL Server, SSIS, ASP.NET, .NET Framework y MVC con Angular.
  • Experiencia en microservicios (Node.js) e integración con AWS.
  • Manejo de metodologías Scrum y trabajo en entornos ágiles.
Competencias deseables: capacidad de análisis, orientación a resultados, proactividad, trabajo en equipo, buenas habilidades de comunicación y capacidad de adaptación a entornos dinámicos.

Requisitos deseables

Se valorará:
  • Conocimientos adicionales en servicios de nube, contenedores y herramientas de orquestación.
  • Experiencia en diseño y migración de soluciones de datos en entornos regulados.
  • Experiencia en cliente/finanzas y proyectos de cambio organizacional.

Beneficios y cultura

En BC Tecnología promovemos un ambiente de trabajo colaborativo que valora el compromiso y el aprendizaje constante. Nuestra cultura se orienta al crecimiento profesional a través de la integración y el intercambio de conocimientos entre equipos.
La modalidad híbrida que ofrecemos, ubicada en Las Condes, permite combinar la flexibilidad del trabajo remoto con la colaboración presencial, facilitando un mejor equilibrio y dinamismo laboral.
Participarás en proyectos innovadores con clientes de alto nivel y sectores diversos, en un entorno que fomenta la inclusión, el respeto y el desarrollo técnico y profesional.

Health coverage BC Tecnología pays or copays health insurance for employees.
Computer provided BC Tecnología provides a computer for your work.
Gross salary $1800 - 2400 Full time
QA Automatizador Mobile
  • 3IT
  • Santiago (Hybrid)
Java Docker Selenium CI/CD
Somos 3IT ¡Innovación y talento que marcan la diferencia!
Para nosotros, la innovación es un proceso colaborativo y el crecimiento una meta compartida. Nos guiamos por valores como el trabajo en equipo, la confiabilidad, la empatía, el compromiso, la honestidad y la calidad, porque sabemos que los buenos resultados parten de buenas relaciones.
Además, valoramos la diversidad y promovemos espacios de trabajo inclusivos. Por eso nos sumamos activamente al cumplimiento de la Ley 21.015, asegurando procesos accesibles y con igualdad de oportunidades.
Si estás buscando un lugar donde seguir aprendiendo, aportar con lo que sabes y crecer en un ambiente cercano y colaborativo, esta puede ser tu próxima oportunidad.

Exclusive offer from getonbrd.com.

📝 ¿Cuál sería tu trabajo?

Asegurar la calidad del software mediante la implementación de pruebas automatizadas, supervisando todas las etapas del desarrollo para prevenir defectos y garantizar el funcionamiento óptimo del producto.

🎯 ¿Qué necesitamos para sumarte a nuestro equipo?

  • Uso de Git.
  • Manejo de Docker.
  • Experiencia en sector bancario.
  • Práctica en testing de software.
  • Aplicación de BDD con Gherkin y Cucumber.
  • Capacidad para pruebas cloud en AWS y OCI.
  • Monitoreo con Dynatrace, Elastic y Grafana.
  • Dominio de metodología ágil, Scrum y Kanban.
  • Trayectoria en automatización de pruebas con Java.
  • Familiaridad con despliegues mediante DA y CloudBees.
  • Administración de granjas de dispositivos móviles y web.
  • Competencia en integración continua con Jenkins y Bamboo.
  • Recorrido mínimo de 3 años con las tecnologías requeridas.
  • Conocimientos en pruebas de estrés con JMeter y LoadRunner.
  • Habilidades en pruebas técnicas sobre logs, servicios y bases de datos.
  • Gestión de herramientas de calidad como Jira, Confluence, Xray y GitHub.
  • Experiencia con Selenium, Appium y frameworks BDD bajo arquitectura Gradle.
  • Implementación de validaciones de servicios REST y SOAP con Postman o SoapUI.
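
Como referencia para la validación de servicios REST, este es un esquema mínimo e hipotético de una prueba automatizada con pytest y requests (alternativa en Python a Postman/SoapUI); la URL y el contrato de respuesta son supuestos ilustrativos.

```python
# Esquema mínimo e hipotético de una validación automatizada de un servicio REST con pytest.
# La URL y el contrato de respuesta son supuestos ilustrativos.
import requests

BASE_URL = "https://api.ejemplo.cl"  # endpoint hipotético

def test_consulta_saldo_responde_ok():
    resp = requests.get(f"{BASE_URL}/cuentas/123/saldo", timeout=10)
    assert resp.status_code == 200

    cuerpo = resp.json()
    # Validaciones de contrato: campos obligatorios y tipos esperados
    assert "saldo" in cuerpo
    assert isinstance(cuerpo["saldo"], (int, float))
```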

⭐ Plus para este rol

  • Certificación ISTQB.
  • Manejo de BrowserStack.
  • Conocimientos en inteligencia artificial aplicada a QA.
📍 ¿Dónde y cómo trabajarás?
  • Ubicación oficina: Comuna de Santiago
  • Modalidad: Híbrido
✋ Algunas consideraciones antes de postular:
  • Debes tener disponibilidad para trabajar en modalidad híbrida y asistir de forma presencial a las oficinas de cliente.
  • Si estás en situación de discapacidad, cuéntanos si necesitas algún requerimiento especial para tu entrevista.

Beneficios que tendrás si te unes a nuestro team:

💰 Bono anual
🦷 Seguro dental
📚 Capacitaciones
📅 Días administrativos
🍽️ Tarjeta Sodexo + $80.000
👕 Código de vestimenta informal
🚀 Programas de upskilling y reskilling
🏥 Seguro complementario de salud MetLife
💊 Descuentos en farmacias y centros de salud
🐾 Descuento en seguros y tiendas de mascotas
🎄 Aguinaldo en Fiestas Patrias y Navidad
👶 Días adicionales al postnatal masculino
🎂 Medio día libre por tu cumpleaños
🏦 Caja de Compensación Los Andes
🌍 Descuento Mundo ACHS
🎁 Regalo por nacimiento
🛍️ Descuentos Buk

Wellness program Banco de Chile offers or subsidies mental and/or physical health activities.
Accessible An infrastructure adequate for people with special mobility needs.
Life insurance Banco de Chile pays or copays life insurance for employees.
Digital library Access to digital books or subscriptions.
Health coverage Banco de Chile pays or copays health insurance for employees.
Dental insurance Banco de Chile pays or copays dental insurance for employees.
Computer provided Banco de Chile provides a computer for your work.
Performance bonus Extra compensation is offered upon meeting performance goals.
Informal dress code No dress code is enforced.
Beverages and snacks Banco de Chile offers beverages and snacks for free consumption.
Parental leave over legal Banco de Chile offers paid parental leave over the legal minimum.
$$$ Full time
Salesforce Technical Architect
  • ZS
  • Buenos Aires (Hybrid)
REST API Salesforce CI/CD Apex
Serve as the technical authority for Salesforce-based solutions, responsible for designing, governing, and overseeing the delivery of scalable, secure, and high-performing Salesforce architectures. You will lead technical discovery, define end-to-end solution architecture across Salesforce clouds and integrations, and guide development teams to successful delivery. This role requires deep Salesforce platform expertise, strong client-facing skills, and the ability to translate business strategy into robust technical designs.
As a Salesforce Technical Architect you will be expected to work on multiple Salesforce clouds, including but not limited to Salesforce Lifesciences Cloud, Agentforce, Salesforce Data Cloud, and Salesforce Sales & Service Cloud.

Apply through Get on Board.

What you'll do

As a Salesforce Technical Architect in the Architecture & Engineering EC, you will:
  • Lead technical architecture and solution design for Salesforce implementations across multiple clouds and integrated systems
  • Own end-to-end Salesforce architecture, including application design, data models, security, integrations, environments, and release strategy
  • Lead technical discovery sessions with clients to understand business requirements and translate them into scalable Salesforce solutions
  • Design and govern implementations using Apex, Lightning Web Components (LWC), Aura, Visualforce, APIs, and Salesforce configuration
  • Architect and oversee integrations using REST/SOAP APIs, Apex callouts, outbound messaging, middleware, ETL, and iPaaS tools
  • Design scalable and reusable Lightning component frameworks aligned with Salesforce best practices
  • Define data architecture, data migration strategies, and integration patterns when transitioning from legacy systems to Salesforce
  • Create and review technical architecture documentation including architecture diagrams, data flows, integration designs, and deployment models
  • Define environment strategy, CI/CD approach, and release management plans appropriate to project complexity
  • Review code and technical designs to ensure quality, performance, security, and alignment with architectural standards
  • Collaborate closely with Business Analysts, Solution Architects, Developers, and client stakeholders throughout the delivery lifecycle
  • Support pre-sales activities by contributing to solution scoping, technical estimates, and risk identification
  • Identify, manage, and proactively mitigate technical risks and dependencies
  • Mentor and guide Salesforce developers and contribute to the growth of the Salesforce practice
  • Stay current on Salesforce product releases, architectural patterns, and emerging platform capabilities
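
As a reference for the integration items above, here is a minimal, hypothetical sketch of querying Salesforce through its REST API from Python; the instance URL, API version, access token, and SOQL query are illustrative assumptions, not a prescribed ZS pattern.

```python
# A minimal, hypothetical sketch of querying Salesforce through its REST API.
# Instance URL, API version, access token, and SOQL query are illustrative assumptions.
import requests

instance_url = "https://example.my.salesforce.com"
access_token = "***"  # obtained via OAuth in a real integration

resp = requests.get(
    f"{instance_url}/services/data/v59.0/query",
    headers={"Authorization": f"Bearer {access_token}"},
    params={"q": "SELECT Id, Name FROM Account LIMIT 5"},
    timeout=30,
)
resp.raise_for_status()

for record in resp.json()["records"]:
    print(record["Id"], record["Name"])
```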

What You’ll Bring

  • Bachelor’s degree in Computer Science, Engineering, or a related field (preferred)
  • 6+ years of Salesforce experience or equivalent enterprise CRM development experience
  • Advanced Salesforce certifications strongly preferred, including: Salesforce Administrator, Platform Developer I & II (or JavaScript Developer I), Certified App Builder, Sales Cloud / Service Cloud / Health Cloud (or other specialist certifications)
  • Proven experience acting as a technical lead or architect on Salesforce implementations
  • Fluency in English
  • Client-first mentality
  • Intense work ethic
  • Collaborative spirit and problem-solving approach

Perks & Benefits:

ZS offers a comprehensive total rewards package including health and well-being, financial planning, annual leave, personal growth and professional development. Our robust skills development programs, multiple career progression options and internal mobility paths and collaborative culture empowers you to thrive as an individual and global team member.
Hybrid working model:
We are committed to giving our employees a flexible and connected way of working. A flexible and connected ZS allows us to combine work from home and on-site presence at clients/ZS offices for the majority of our week.
Travel:
Travel is a requirement at ZS for client facing ZSers; business needs of your project and client are the priority. While some projects may be local, all client-facing ZSers should be prepared to travel as needed. Travel provides opportunities to strengthen client relationships, gain diverse experiences, and enhance professional growth by working in different environments and cultures.

Gross salary $3200 - 4100 Full time
Python Git Data Analysis SQL

En Artefact LatAm, somos una consultora líder enfocada en acelerar la adopción de datos e inteligencia artificial para generar impacto positivo. El Senior Data Scientist es un profesional altamente experimentado en el análisis de datos, con profundos conocimientos en técnicas estadísticas, de programación y de aprendizaje automático. Su rol principal es utilizar estas habilidades para extraer conocimientos significativos y tomar decisiones estratégicas basadas en datos dentro de la organización.

Además de desarrollar modelos analíticos avanzados, el Senior Data Scientist ejerce un rol importante dentro del equipo asignado al cliente, aportando su conocimiento técnico para tomar decisiones concretas que ayuden al desarrollo del proyecto. Su experiencia acompaña el proceso desde la conceptualización hasta la implementación, y asegura la entrega de soluciones prácticas y detalladas que cumplan con las necesidades del cliente.

Send CV through Get on Board.

Tus responsabilidades serán:

  • Análisis de Datos: aplicar técnicas avanzadas de análisis exploratorio para comprender la estructura y características de grandes volúmenes de datos, provenientes de diversas fuentes.
  • Desarrollo de Modelos Predictivos Avanzados: utilizar técnicas avanzadas de aprendizaje automático y estadística para desarrollar modelos predictivos robustos que permitan predecir tendencias, identificar patrones y realizar pronósticos precisos.
  • Optimización de Algoritmos y Modelos: dirigir la optimización de algoritmos y modelos existentes para mejorar la precisión, eficiencia y escalabilidad.
  • Visualización y Comunicación de Datos: crear visualizaciones claras y significativas para comunicar los hallazgos y resultados de manera efectiva al cliente y a otros stakeholders clave.
  • Desarrollo de Herramientas Analíticas: diseñar y desarrollar herramientas analíticas personalizadas y sistemas de soporte para la toma de decisiones basadas en datos, utilizando lenguajes de programación como Python, R o SQL.
  • Gestión de Proyectos: liderar frentes de un proyecto relacionados al análisis de datos complejos, desde la conceptualización hasta la implementación, planificando estratégicamente los hitos y entregables acordados con los clientes.
  • Investigación y Desarrollo Continuo: mantenerse actualizado en las últimas tendencias y avances en análisis de datos, inteligencia artificial y metodologías relacionadas. Compartir conocimientos y experiencias con el equipo para fomentar un ambiente de aprendizaje continuo.
  • Contribución a Propuestas y Desarrollo de Negocios: colaborar en el desarrollo de propuestas internas para potenciales clientes, utilizando su experiencia y conocimientos para identificar oportunidades y diseñar soluciones innovadoras.
  • Apoyar al equipo desde un rol de mentor, traspasando conocimientos y buenas prácticas, proporcionándole capacitación personalizada según las necesidades individuales de los miembros.
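
Como referencia para el desarrollo de modelos predictivos, este es un esquema mínimo e hipotético de un pipeline con scikit-learn y validación cruzada; el dataset, las columnas y la métrica son supuestos ilustrativos.

```python
# Esquema mínimo e hipotético de un modelo predictivo con scikit-learn y validación cruzada.
# El dataset y las columnas son supuestos ilustrativos.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("clientes.csv")          # dataset hipotético
X = df[["edad", "ingresos", "frecuencia_compra"]]
y = df["fuga"]                            # variable objetivo binaria

modelo = Pipeline([
    ("escala", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])

# Validación cruzada para estimar desempeño antes de pasar a producción
scores = cross_val_score(modelo, X, y, cv=5, scoring="roc_auc")
print(f"AUC promedio: {scores.mean():.3f}")
```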

Los requisitos del cargo son:

  • Formación en Ingeniería Civil Industrial/Matemática/Computación, Física, Estadística, carreras afines, o experiencia equivalente en análisis avanzado de datos.
  • Experiencia laboral de al menos 4 años en roles de análisis de datos, preferiblemente en industrias relevantes.
  • Experto en Python, SQL y Git, con habilidades demostradas en el desarrollo de modelos analíticos y aplicaciones.
  • Amplio conocimiento de bases de datos relacionales y no relacionales, así como experiencia en procesamiento de datos (ETL).
  • Profundo conocimiento en machine learning, feature engineering, reducción de dimensiones, estadística avanzada y optimización.
  • Inglés avanzado.

Condiciones

  • Rápido crecimiento profesional: Un plan de mentoring para formación y avance de carrera, ciclos de evaluación de aumentos y promociones cada 6 meses.
  • Días de vacaciones adicionales a lo legal y medio día libre de cumpleaños. Esto para descansar y poder generar un sano equilibrio entre vida laboral y personal.
  • Participación en el bono por desempeño de la empresa, además de bonos por trabajador referido y por cliente.
  • Almuerzos quincenales pagados con el equipo (Chile) o Tarjeta de Alimentación (México).
  • Cobertura de salud adicional (Mexico).
  • Computadora de altos specs para trabajar cómodamente.
  • Flexibilidad horaria y trabajo por objetivos.
  • Posibilidad de participar en proyectos a nivel global, con intercambios con otros países con presencia del grupo.
  • Trabajo remoto, con posibilidad de hacerse híbrido (Oficina en Santiago de Chile, Cowork pagado en Ciudad de México).
  • Post Natal extendido para hombres, y cobertura de diferencia pagado por sistema de salud para mujeres (Chile)

...y más!

Flexible hours Flexible schedule and freedom for attending family needs or personal errands.
Internal talks Artefact LatAm offers space for internal talks or presentations during working hours.
Paid sick days Sick leave is compensated (limits might apply).
Health coverage Artefact LatAm pays or copays health insurance for employees.
Company retreats Team-building activities outside the premises.
Computer repairs Artefact LatAm covers some computer repair expenses.
Computer provided Artefact LatAm provides a computer for your work.
Education stipend Artefact LatAm covers some educational expenses related to the position.
Performance bonus Extra compensation is offered upon meeting performance goals.
Personal coaching Artefact LatAm offers counseling or personal coaching to employees.
Informal dress code No dress code is enforced.
Vacation over legal Artefact LatAm gives you paid vacations over the legal minimum.
Beverages and snacks Artefact LatAm offers beverages and snacks for free consumption.
Vacation on birthday Your birthday counts as an extra day of vacation.
Parental leave over legal Artefact LatAm offers paid parental leave over the legal minimum.
Gross salary $1800 - 2300 Full time
Ingeniero(a) Junior de Software y Robótica
  • Maquintel robotic services
  • Santiago (In-office)
Python Git Data Analysis Linux

Resumen del cargo

Buscamos un/a ingeniero recién egresado/a con fuerte base técnica y ganas de aprender "en la vida real" para sumarse a un equipo que construye soluciones de inspección de activos críticos usando robótica, percepción (visión/3D), plataformas de datos y gemelos digitales. Tu foco será conectar el mundo físico con el digital: capturar datos desde robots y sensores, procesarlos (imágenes, nubes de puntos, telemetría), exponerlos en una plataforma (APIs/dashboards) y transformarlos en un gemelo digital útil para operación y mantenimiento.

Perfil ideal

  • Ingeniero Civil Eléctrico, Electrónico, Computación/Informática
  • Recién egresado/a o 0-2 años de experiencia (las prácticas cuentan).
  • Alto potencial, curiosidad y mentalidad de aprendizaje acelerado.
  • Liderazgo desde el primer día: ownership de tareas, iniciativa y capacidad de pedir ayuda a tiempo.
  • Orden y rigurosidad: reproducibilidad, bitácora técnica, documentación y foco en calidad de datos.

Applications are only received at getonbrd.com.

Responsabilidades clave

1) Software de captura y procesamiento (Robótica + Datos)

  • Desarrollar y mantener herramientas para recolección, limpieza y procesamiento de datos generados por robots (imágenes, video, LiDAR/nubes de puntos, IMU y otros sensores).
  • Diseñar pipelines reproducibles para logging, sincronización, validación y respaldo de datos.
  • Automatizar tareas recurrentes (importación, conversión de formatos, control de calidad, generación de reportes base).

2) Percepción y analítica (Visión + 3D)

  • Implementar y optimizar algoritmos de procesamiento de imágenes (OpenCV) y análisis de datos espaciales / nubes de puntos (filtros, registro, segmentación, métricas).
  • Apoyar la curación de datasets, anotación/etiquetado cuando aplique, y validación de resultados.
  • Medir desempeño: precisión/recall cuando corresponda, error, cobertura, repetibilidad y tasa de reproceso.
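
Como referencia para el procesamiento de imágenes con OpenCV, este es un esquema mínimo e hipotético de un paso de detección de bordes y contornos sobre una captura de inspección; la ruta y los umbrales son supuestos ilustrativos.

```python
# Esquema mínimo e hipotético de procesamiento de imágenes de inspección con OpenCV.
# La ruta de la imagen y los umbrales son supuestos ilustrativos.
import cv2

imagen = cv2.imread("inspeccion_001.jpg")          # captura hipotética del robot
gris = cv2.cvtColor(imagen, cv2.COLOR_BGR2GRAY)
suavizada = cv2.GaussianBlur(gris, (5, 5), 0)

# Detección de bordes como paso previo a segmentar posibles defectos
bordes = cv2.Canny(suavizada, 50, 150)
contornos, _ = cv2.findContours(bordes, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

print(f"Se detectaron {len(contornos)} contornos candidatos a revisión")
cv2.imwrite("inspeccion_001_bordes.png", bordes)
```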

3) Plataforma de datos (Backend + Integraciones + Dashboards)

  • Construir y mantener servicios para explotación de datos: APIs, conectores y componentes de backend.
  • Modelar y mantener bases de datos (por ejemplo PostgreSQL) y apoyar flujos de ETL liviano y exports para clientes.
  • Crear visualizaciones y dashboards para usuarios no expertos, enfocadas en decisión y trazabilidad.

4) Gemelos digitales (Activos + Evidencia + Trazabilidad)

  • Estructurar activos y campañas: jerarquías, metadatos, criticidad, evidencia y comparaciones "antes/después".
  • Apoyar la construcción de vistas 3D/modelos y reportes técnicos orientados a mantención y operación.
  • Asegurar consistencia: naming, versionado de datasets y estándares internos.

5) Integración y calidad (Hardware/Software + Operación)

  • Colaborar con ingeniería de hardware/robótica para una integración fluida (interfaces, formatos, límites de cómputo).
  • Diseñar y ejecutar pruebas para asegurar rendimiento, robustez y calidad del dato.
  • Documentar código y procesos de forma clara; mantener control de versiones y buenas prácticas de desarrollo.

Habilidades técnicas requeridas

  • Python (obligatorio). C++ (deseable).
  • Manejo de Linux/Unix y herramientas de terminal.
  • Análisis de datos: NumPy, Pandas, SciPy (o equivalentes).
  • Procesamiento de imágenes: OpenCV (deseable fuerte).
  • Control de versiones: Git (obligatorio).
  • Capacidad de crear visualizaciones (por ejemplo Matplotlib) y dejar herramientas usables por terceros.

Deseables (suman mucho)

  • ROS/ROS2 (nodos, tópicos, servicios, acciones).
  • Nubes de puntos y 3D: Open3D, PCL u otras.
  • Machine Learning aplicado a visión: PyTorch/TensorFlow/Keras.
  • Nube: AWS, Azure o Google Cloud (almacenamiento, procesamiento o despliegue).
  • Docker y nociones de APIs REST.

Si te apasionan la robótica, el software, el machine learning, la innovación y el desarrollo, con foco en la generación de nuevos productos y servicios, este es el lugar ideal para aprender y crecer profesionalmente. Estarás investigando, desarrollando e implementando soluciones que combinan software y hardware para resolver problemas desafiantes, con alto impacto en la industria y el medio ambiente.

Somos un equipo multidisciplinario, con un ambiente laboral grato y relajado que cuenta con servicios únicos bien desarrollados y probados. Tenemos mucho entusiasmo por seguir desarrollando e implementando servicios y soluciones innovadoras.

Fuimos finalistas del Premio Nacional de Innovación Avonni 2019 y recibimos el premio Optimus Pipe 2018 a la mejor contribución a la industria de transporte de relaves.

$$$ Full time
Engenheiro(a) de Machine Learning Senior
  • BV
python senior engineering

Somos um dos maiores bancos privados do Brasil, conforme o ranking do Banco Central. E temos muito orgulho em dizer que, pelo segundo ano consecutivo, fomos reconhecidos como a melhor instituição financeira para trabalhar no Brasil, segundo o ranking da GPTW 2025! Também recebemos o selo de Diversidade na categoria Mulher, reforçando nosso compromisso com a equidade.  


Nossa cultura acontece de verdade: sendo simples, corretos, parceiros e corajosos. Valorizamos as relações, a inovação e um ambiente leve, cada vez mais colaborativo e com intencionalidade no avanço da diversidade e inclusão.


Estamos em constante evolução e construímos #parcerias de sucesso para entregarmos nosso propósito de tornar mais tranquila a vida financeira de pessoas e empresas


Se identificou? Então venha trabalhar com a gente! 



Dá uma olhada nos desafios que te esperam:
  • Estamos buscando uma pessoa Engenheira de Machine Learning Senior para atuar na evolução da nossa plataforma de Machine Learning e garantir que os modelos utilizados em diversas áreas do banco operem com alta qualidade, governança e escalabilidade;
  • Análise das ferramentas internas com olhar crítico e espaço para trazer melhorias, atuando com papel consultivo;
  • Cuidará da observabilidade dos modelos de ML, sugerindo métricas para monitoramento mais eficiente;
  • Análise da qualidade de código de implantação;
  • Ser ponto de referência das plataformas utilizadas internamente.
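
Como referência ilustrativa (um esboço mínimo, não a implementação interna do banco), uma métrica simples de observabilidade de modelos, o Population Stability Index (PSI), poderia ser calculada assim em Python, assumindo scores no intervalo [0, 1] e dados de exemplo hipotéticos:

    import numpy as np

    def psi(scores_referencia, scores_producao, bins: int = 10) -> float:
        """Population Stability Index entre a distribuição de referência e a de produção."""
        limites = np.linspace(0.0, 1.0, bins + 1)
        ref = np.histogram(scores_referencia, bins=limites)[0] / len(scores_referencia)
        prod = np.histogram(scores_producao, bins=limites)[0] / len(scores_producao)
        ref = np.clip(ref, 1e-6, None)    # evita log(0) e divisão por zero
        prod = np.clip(prod, 1e-6, None)
        return float(np.sum((prod - ref) * np.log(prod / ref)))

    # Uso ilustrativo: scores do treino vs. scores recentes em produção.
    # Valores acima de ~0.2 costumam ser tratados como sinal de drift relevante.
    valor = psi(np.random.beta(2, 5, 10_000), np.random.beta(2, 4, 10_000))
    print(f"PSI = {valor:.3f}")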


E aí, se identificou? Agora gostaríamos de saber se você tem o perfil e os conhecimentos abaixo:
  • Experiência sólida em engenharia de ML, MLOps ou Data Engineering aplicada a modelos em produção;
  • Forte domínio de Python e bibliotecas de ML/ciência de dados;
  • Experiência com plataformas distribuídas, preferencialmente Databricks/Spark.



Diversidade e inclusão 


O BV atua intencionalmente em prol da aceleração da equidade e representatividade no mercado financeiro, respeitando e apoiando a diversidade em toda sua pluralidade e interseccionalidade, garantindo uma transformação social positiva. 

 

Por isso, convidamos pessoas negras, mulheres, profissionais com deficiência, comunidade LGBTQIA+ e pessoas de qualquer idade a conhecerem a gente um pouco mais e a se inscreverem nesta vaga. 



Gross salary $2400 - 3000 Full time
Data Engineer Senior
  • Coderslab.io
  • Remote
ETL Automation Google Cloud Platform Data lake

En Coderslab.io trabajamos en un entorno de alta demanda tecnológica, con equipos globales que combinan talento de primer nivel. Nuestro cliente FIFTECH lidera iniciativas de datos avanzadas y está desarrollando el proyecto Datalake 2.0 en Colombia. Este rol se integra en el área de Data Factory dentro de la gerencia de Plataforma, Arquitectura y Data. El objetivo es fortalecer el procesamiento de datos en un entorno Big Data en Google Cloud Platform (GCP), contribuyendo a la evolución continua de nuestro Data Lake y a la entrega de información analítica confiable para decisiones estratégicas.

Applications at getonbrd.com.

Funciones y responsabilidades

  • Analizar, diseñar, desarrollar y probar procesos de ingesta de datos (ELT) en entornos GCP Big Data.
  • Mantener y evolucionar procesos ETL/ELT, asegurando rendimiento, escalabilidad y fiabilidad.
  • Desarrollar pipelines de datos serverless y automatizar flujos de datos para operaciones analíticas.
  • Integrar, consolidar y limpiar datos para su uso en analítica y reporting.
  • Apoyar en arquitectura y diseño de plataformas de datos dentro de la unidad de Data Factory, colaborando con equipos multidisciplinarios.
  • Participar en la definición de estándares de modelado de datos y buenas prácticas de ingeniería de datos.
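
Como referencia ilustrativa (un esquema mínimo, no una implementación definitiva), un paso de ingesta ELT hacia BigQuery con el cliente oficial de GCP podría verse así en Python; el bucket, el dataset y la tabla son nombres hipotéticos:

    from google.cloud import bigquery

    def cargar_csv_a_bigquery(uri_gcs: str, tabla_destino: str) -> None:
        """Carga un archivo CSV desde Cloud Storage a una tabla de BigQuery (paso de ingesta ELT)."""
        client = bigquery.Client()
        config = bigquery.LoadJobConfig(
            source_format=bigquery.SourceFormat.CSV,
            skip_leading_rows=1,
            autodetect=True,
            write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
        )
        job = client.load_table_from_uri(uri_gcs, tabla_destino, job_config=config)
        job.result()  # espera a que la carga termine

    # Uso (valores hipotéticos):
    # cargar_csv_a_bigquery("gs://mi-bucket/ventas/2024-01.csv", "mi-proyecto.raw.ventas")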

Perfil requerido y experiencia

Buscamos un Data Engineer Senior con sólido background en procesos ELT/ETL en entornos Big Data sobre GCP y Data Lake. Debe demostrar capacidad para diseñar e implementar pipelines data-driven, experiencia en desarrollo de pipelines serverless y automatización de procesos. Se valorará la habilidad para modelar y estructurar datos orientados a análisis, así como la capacidad de integrarse de forma proactiva en proyectos complejos, con enfoque técnico y colaborativo. Se espera autonomía, buena comunicación y capacidad de trabajar en un entorno de ritmo alto.

Requisitos deseables

Experiencia previa en Data Lake en GCP, con enfoque en ingesta y transformación de grandes volúmenes de datos. Conocimiento de herramientas de orquestación y automatización, como Airflow o Workflows de GCP. Habilidades para trabajar con equipos de Arquitectura y Producto, capacidades de análisis y resolución de problemas, y orientación a resultados. Se valorará experiencia en entornos multinacionales y trabajo remoto colaborativo.

Beneficios y condiciones

Contrato de plazo fijo con duración estimada de 6 meses. Salario entre 2.500.000 y 2.700.000 CLP, según experiencia. Equipo propio no provisto; se requiere PC/notebook personal. Ventajas de trabajar con un cliente líder en soluciones de datos y un equipo global de alto rendimiento, con oportunidades de aprendizaje y crecimiento en tecnologías de vanguardia. Modalidad remota, con posible coordinación en Colombia y la región. Si te apasiona la ingeniería de datos y quieres contribuir a un Data Lake avanzado, te invitamos a aplicar y formar parte de nuestro equipo.

Fully remote You can work from anywhere in the world.
$$$ Full time
Product Manager, Data Management & Platform
  • HHAeXchange
  • Remote - US
manager training technical supervisor

HHAeXchange is the leading technology platform for home and community-based care. Founded in 2008, HHAeXchange was born out of an idea to create a fully comprehensive end-to-end homecare solution to help people who are aging or have disabilities thrive in their homes and communities. Our employees are passionate about transforming the healthcare space by building the only homecare ecosystem that fully connects patients, personal care providers, managed care organizations, and states.  

HHAeXchange is seeking a Product Manager, Data Management & Platform to help define, govern, and scale how data is used across our healthcare platform. This role sits at the intersection of Product, Engineering, and Clinical/Financial operations, ensuring that the data powering RCM, EHR, Payroll, Payments, and the Universal Patient Record is accurate, connected, and trusted — and that it serves as a reliable foundation for AI-driven innovation.

This is an individual contributor role for a healthcare product professional who understands real-world clinical and financial workflows, is energized by the potential of AI to transform healthcare data, and can translate complex requirements into clear, actionable product decisions. The ideal candidate brings 5–7 years of product management experience in healthcare IT, a solid grasp of data platform concepts, and a genuine enthusiasm for applying AI and machine learning to solve meaningful problems in the home care space.

To perform this job successfully, an individual must be able to perform each essential job duty satisfactorily with or without reasonable accommodation.  Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions.

This is a fully remote opportunity for candidates located in the EST or CST time zones within the US only.



Essential Job Duties

Product-Led Data Strategy

  • Contribute to and help execute the product vision and roadmap for HHAeXchange's enterprise data platform.
  • Define how core clinical, operational, and financial data is modeled, linked, and surfaced across the product ecosystem.
  • Partner with domain PMs (RCM, EHR, Payroll, Payments) to align data structures to real-world workflows and end-user needs.
  • Identify opportunities to reduce data fragmentation and improve consistency across product domains.

AI Enablement & Innovation

  • Serve as a product champion for AI and machine learning use cases built on the HHAeXchange data platform.
  • Define and prioritize data requirements that enable AI-driven features including predictive analytics, anomaly detection, automation, and intelligent recommendations.
  • Work with data science and engineering teams to ensure training data quality, feature pipelines, and model outputs are properly governed and trustworthy.
  • Evaluate and recommend AI tools, platforms, and frameworks that can accelerate product delivery and enhance the platform's intelligence capabilities.
  • Stay current on emerging AI/ML trends in healthcare — including generative AI, LLM applications, and agentic workflows — and translate relevant developments into product opportunities.
  • Champion responsible AI practices, including fairness, explainability, and compliance considerations relevant to healthcare data.

Healthcare Data Enablement

  • Ensure data models support claims, visits, authorizations, care plans, payroll, and payer rules.
  • Translate regulatory, audit, and reimbursement requirements into data standards and traceability.
  • Improve data lineage and reconciliation across payer-provider workflows.
  • Support the development of a Universal Patient Record that is complete, current, and usable across the platform.

Cross-Team Execution

  • Collaborate closely with Engineering, Architecture, and Platform teams to shape data services, APIs, and pipelines.
  • Write clear product requirements, user stories, and acceptance criteria for data platform features.
  • Prioritize data initiatives based on customer impact, revenue risk, compliance needs, and scalability.
  • Drive alignment across product teams on shared data definitions, metrics, and reporting standards.

Governance & Data Quality

  • Support the definition of data ownership, stewardship, and quality standards across product domains.
  • Help establish validation, monitoring, and escalation processes for data defects.
  • Create visibility into data health for product leaders, operations teams, and stakeholders.
  • Contribute to documentation of data standards and governance policies.


Other Job Duties
  • Other duties as assigned by supervisor or HHAeXchange leader.


Travel Requirements
  • Travel 10-25%, including overnight travel


Required Education, Experience, Certifications and Skills

Required 

  • 5–7 years of experience in product management within healthcare IT, preferably in RCM, EHR, or payer-provider platforms.
  • Solid understanding of claims workflows, clinical documentation, authorizations, eligibility, and reimbursement processes.
  • Demonstrated interest in and experience with AI, machine learning, or advanced analytics applied to healthcare data.
  • Familiarity with data platforms, data warehouses or lakehouses, and analytics and reporting tools.
  • Ability to partner effectively with Engineering and Architecture on platform-level systems and data infrastructure.
  • Working knowledge of healthcare data regulations and compliance requirements (e.g., HIPAA, Medicaid program integrity, EVV).
  • Strong written and verbal communication skills, including the ability to translate technical data concepts for non-technical stakeholders.
  • Experience writing product requirements, managing a backlog, and driving delivery in an agile environment.
  • Curiosity, adaptability, and a proactive mindset in a fast-evolving product environment.

Preferred

  • Experience with AI/ML product development, including defining data pipelines, feature requirements, or model evaluation criteria.
  • Familiarity with generative AI tools and their application in healthcare workflows (e.g., clinical documentation, billing, analytics).
  • Experience with Medicaid home care, personal care services (PCS), or HCBS programs.
  • Knowledge of data governance frameworks, master data management (MDM), or data quality tooling.
  • Exposure to modern data stack technologies (e.g., dbt, Snowflake, Databricks, or similar).
  • Experience working with EVV data or similar real-time visit verification systems.
  • Familiarity with interoperability standards such as HL7, FHIR, or X12 EDI.

 

Success Measures (First 12–18 Months)

  • Clear, well-adopted data models across key clinical and financial workflows.
  • Measurable reduction in data-related defects impacting claims, payroll, and reporting.
  • At least one AI-driven product capability successfully launched on a trusted data foundation.
  • Improved reconciliation across payer, provider, and caregiver data.
  • Faster time-to-market for data-dependent product features.
  • Strong cross-team adoption of shared data standards and definitions.



The base salary range for this US-based, full-time, and exempt position is $105,000-$115,000/yr, not including variable compensation. An employee’s exact starting salary will be based on various factors including but not limited to experience, education, training, merit, location, and the ability to exemplify the HHAeXchange core values.

 

This is a benefits-eligible position. HHAeXchange offers competitive health plans, paid time-off, company paid holidays, and a 401K retirement program with a Company-elected match, as well as other company-sponsored programs.

 

HHAeXchange is an equal-opportunity employer. The Company offers employment opportunities to all applicants and employees without regard to race, color, religion, national origin, sex, sexual orientation, gender identity or expression, age, disability, medical condition, marital status, veteran status, citizenship, genetic information, hairstyles, or any other status protected by local or federal law.



$$$ Full time
Data Analyst
  • Restaurant365
  • Remote
analyst saas python technical

Restaurant365 is a SaaS company disrupting the restaurant industry! Our cloud-based platform provides a unique, centralized solution for accounting and back-office operations for restaurants. Restaurant365’s culture is focused on empowering team members to produce top-notch results while elevating their skills. We’re constantly evolving and improving to make sure we are and always will be “Best in Class” ... and we want that for you too!


Restaurant365 is seeking a Data Analyst to join our Enterprise Data Analytics team. This role supports business teams across the organization by helping turn data into insights that inform day-to-day decisions and longer-term planning.


As a Data Analyst, you will partner with stakeholders to understand business questions, support reporting needs, and help maintain dashboards and KPIs. You’ll work within established data models and governance practices while continuing to build your technical and business analysis skills. This role is ideal for someone who enjoys working with data, learning the business, and growing into a strong analytics partner over time.



How you'll add value:
  • Analytics & Reporting
    • Analyze operational, customer, financial, and usage data to support business reporting and ad hoc analysis.
    • Help maintain and monitor KPIs that track business performance and operational health.
    • Build, update, and maintain dashboards and reports in Domo for business stakeholders.
    • Assist with trend analysis, performance monitoring, and identifying areas for improvement.
    • Support forecasting, planning, and recurring reporting processes under guidance from senior analysts or managers.
  • Business Partnership
    • Work with business stakeholders to understand reporting needs and translate questions into clear analytics requests.
    • Help define basic success metrics and KPIs for initiatives and projects.
    • Provide clear, well-documented analyses that support business decision-making.
    • Participate in requirement gathering sessions and stakeholder check-ins.
  • Collaboration & Enablement
    • Partner with other analysts, analytical engineers, and data engineers to ensure accurate and consistent reporting.
    • Follow established data governance and quality standards for dashboards and reports.
    • Support documentation of metrics definitions, dashboards, and reporting logic.
    • Learn to present insights in a clear, concise way to both technical and non-technical audiences.


What you'll need to be successful in this role:
  • 2–4 years of experience in data analytics, business analytics, or a related role.
  • Experience working in a SaaS, technology, or data-driven environment is a plus.
  • Working knowledge of SQL for querying and analyzing data.
  • Experience using BI tools (Domo preferred, but others acceptable).
  • Familiarity with Excel or Google Sheets for analysis and validation.
  • Exposure to Python or R is a plus but not required.
  • Ability to analyze datasets, identify trends, and summarize findings clearly.
  • Basic understanding of common business metrics (revenue, retention, adoption, operational efficiency).
  • Comfort working with defined KPIs and reporting frameworks.
  • Clear written and verbal communication skills.
  • Ability to explain analysis results in a straightforward, business-friendly way.
  • Willingness to learn, ask questions, and incorporate feedback.
  • Ability to work effectively with cross-functional partners.
NICE TO HAVE
  • Exposure to Snowflake, dbt, or modern cloud data platforms.
  • Experience supporting recurring business reporting or executive dashboards.
  • Familiarity with basic project tracking or Agile concepts.
  • Interest in growing toward advanced analytics, analytics engineering, or business analytics leadership.


R365 Team Member Benefits & Compensation
  • This position has a salary range of $87,083.33-$121,916.67 per year. The above range represents the expected salary range for this position. The actual salary may vary based upon several factors, including, but not limited to, relevant skills/experience, time in the role, business line, and geographic location. Restaurant365 focuses on equitable pay for our team and aims for transparency with our pay practices.
  • Comprehensive medical benefits, 100% paid for employee
  • 401k + matching
  • Equity Option Grant
  • Unlimited PTO + Company holidays
  • Wellness initiatives

#BI-Remote


$87,083.33 - $121,916.67 a year

DYN365, Inc d/b/a Restaurant365 is an equal opportunity employer.



$$$ Full time
Data Engineer
  • TextNow
  • Open - Canada
python support travel cloud

We believe communication belongs to everyone. We exist to democratize phone service.  TextNow is evolving the way the world connects, and that's because we're made up of people with curious minds who bring an optimistic yet critical lens into the work we do.   We're the largest provider of free phone service in the nation. And we're just getting started. 

 

Join us in our mission to break down barriers to communication and free the flow of conversation for people everywhere. 

 

TextNow is looking for an experienced Data Engineer with hands-on experience designing and developing data platforms. You will own the design, development, and maintenance of TextNow's data platform, enabling us to make effective data-informed decisions. You will be part of cross-functional efforts to build scalable and reliable frameworks that support all of TextNow's business and data products. In this role, you can interact with different functional areas within the business and influence decision-making in a fast-growing mobile communications start-up.



What You'll Do
  • Own TextNow's data warehouse, data pipelines, and integration points between various business systems. 
  • Design, develop, and support new and existing batch and real-time data pipelines, and recommend improvements or modifications. 
  • Manage data models to enable AI/ML data products. 
  • Champion TextNow's data ecosystem by working with engineering and infrastructure teams to enable quicker access to data for insights and decision-making. 
  • Communicate data modeling and architecture processes to cross-functional teams. 
  • Identify, design, and implement process improvements across the data platform. 


Who You Are
  • Have 3–5 years of experience working with data warehouse/data lake and ETL architectures (e.g., Databricks, Iceberg), cloud data warehouses (e.g., Snowflake), and hands-on experience in Python and SQL — preferably in companies with fast-growing and evolving data needs. 
  • Have at least 2 years of experience with Airflow and Spark. 
  • Have developed scalable, real-time data pipelines using Python/Scala, SQL, and distributed processing frameworks such as Spark or Flink. 
  • Have exposure to the AWS platform and services such as EKS, MSK, and MWAA (preferred). 
  • Have experience building data features using Snowflake, dbt, and Python to power real-time AI/ML inference. 
  • Are respectfully candid, with the ability to initiate and drive tasks to completion. 
  • Are highly organized, dependable, and follow a structured work approach. 
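
As an illustrative sketch only (Airflow 2.x style; the DAG, task names, and callables are hypothetical, not TextNow's actual pipelines), a minimal daily batch pipeline orchestrated with Airflow might look like this:

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract_events():
        # Placeholder: pull the latest batch of raw events from the source system.
        pass

    def load_to_warehouse():
        # Placeholder: write the transformed batch into the warehouse (e.g., Snowflake).
        pass

    with DAG(
        dag_id="daily_events_pipeline",
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        extract = PythonOperator(task_id="extract_events", python_callable=extract_events)
        load = PythonOperator(task_id="load_to_warehouse", python_callable=load_to_warehouse)
        extract >> load  # run extraction before loading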


$88,900 - $127,000 a year
Final compensation will be determined based on a number of factors, including skills, experience, location and on-the-job performance. We’re committed to paying competitively to hire and retain high-caliber talent. We recognize that exceptional talent may fall outside of these ranges; we encourage all qualified candidates to apply even if their compensation expectations are outside of the listed range.

More about TextNow...


Our Values:

·  Customer Obsessed (We strive to have a deep understanding of our customers)

·  Do Right By Our People (We treat each other with fairness, respect, and integrity)

·  Accept the Challenge (We adopt a "Yes, We Can" mindset to achieve ambitious goals)

·  Act Like an Owner (We treat this company like it's our own... because it is!)

·  Give a Damn! (We are deeply committed and passionate about our work and achieving results)


Benefits, Culture, & More:

·   Strong work life blend 

·   Flexible work arrangements (wfh, remote, or access to one of our office spaces)

·   Employee Stock Options 

·   Unlimited vacation 

·   Competitive pay and benefits

·   Parental leave

·   Benefits for both physical and mental well being (wellness credit and L&D credit)

·   We travel a few times a year for various team events, company wide off-sites, and more


Diversity and Inclusion:

At TextNow, our mission is built around inclusion and offering a service for EVERYONE, in an industry that traditionally only caters to the few who have the means to afford it. We believe that diversity of thought and inclusion of others promotes a greater feeling of belonging and higher levels of engagement. We know that if we work together, we can do amazing things, and that our differences are what make our product and company great. 


TextNow Candidate Policy

By submitting an application to TextNow, you agree to the collection, use, and disclosure of your personal information in accordance with the TextNow Candidate Policy



$$$ Full time
Data Analyst
  • BC Tecnología
HTML5 Python Data Analysis BigQuery
En BC Tecnología diseñamos soluciones de TI para clientes de servicios financieros, seguros, retail y gobierno. Buscamos un Data Analyst para formar parte de un proyecto estratégico de Migración Digital enfocado en evolucionar el canal digital del cliente BFPE. Participarás en el desarrollo, documentación y mejora continua de funcionalidades para una nueva plataforma web, como parte de la migración desde Telegestor APK. El rol combina desarrollo, transformación de datos y optimización de procesos para entregar un canal digital robusto y escalable.
El proyecto implica trabajo conjunto con equipos técnicos y funcionales para asegurar requisitos, calidad y despliegues eficientes. Serás parte de un equipo ágil, con enfoque en innovación y mejora continua, contribuyendo a una migración exitosa que impacta directamente la operación digital.

Apply to this job without intermediaries on Get on Board.

Funciones

  • Desarrollar nuevas funcionalidades para el canal digital utilizando Python y HTML5.
  • Participar en la transformación de datos y construcción de pipelines ETL.
  • Analizar, diseñar y documentar especificaciones técnicas y funcionales.
  • Implementar consultas y procesos de explotación de datos en SQL Server y BigQuery.
  • Gestionar control de versiones y despliegues con GitLab.
  • Colaborar con equipos técnicos y funcionales para asegurar el cumplimiento de los requisitos.
  • Participar en pruebas, validaciones y despliegue de mejoras.
  • Proponer mejoras que fortalezcan la plataforma digital.
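
A modo ilustrativo (un esquema mínimo, no parte de la especificación del proyecto), la explotación de datos desde BigQuery hacia un DataFrame en Python podría verse así; el proyecto, el dataset y la consulta son hipotéticos:

    from google.cloud import bigquery

    def extraer_sesiones_por_canal(proyecto: str):
        """Consulta de ejemplo en BigQuery y carga del resultado en un DataFrame de pandas."""
        client = bigquery.Client(project=proyecto)
        sql = """
            SELECT fecha, canal, COUNT(*) AS sesiones
            FROM `mi-proyecto.analitica.eventos_web`
            WHERE fecha >= DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY)
            GROUP BY fecha, canal
            ORDER BY fecha
        """
        return client.query(sql).to_dataframe()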

Descripción

Buscamos un Data Analyst con 2 a 3 años de experiencia en roles similares, con capacidad para trabajar en un entorno regulado y centrado en datos. Requisitos técnicos demostrables: SQL Server, Python, BigQuery, ETLs, GitLab y HTML5. Se valorará experiencia en banca, finanzas o industrias altamente reguladas. El/la candidato/a ideal será analítico/a, orientado/a a la calidad de datos, con habilidades para documentar y comunicar hallazgos de manera clara, y capaz de colaborar eficazmente con equipos multifuncionales. Se ofrece una modalidad híbrida en Lima y la posibilidad de participar en un proyecto estratégico con impacto directo en el canal digital del cliente, dentro de un entorno de aprendizaje y desarrollo continuo.

Deseable

Experiencia previa en proyectos de migración de plataformas digitales y en entornos de alta seguridad de información. Conocimiento de herramientas de visualización (Power BI, Tableau) y metodologías ágiles. Experiencia en integración de datos entre sistemas legados y plataformas modernas.

Beneficios

En BC Tecnología promovemos un ambiente de trabajo colaborativo que valora el compromiso y el aprendizaje constante. Nuestra cultura se orienta al crecimiento profesional a través de la integración y el intercambio de conocimientos entre equipos.
La modalidad híbrida que ofrecemos, ubicada en Las Condes, permite combinar la flexibilidad del trabajo remoto con la colaboración presencial, facilitando un mejor equilibrio y dinamismo laboral.
Participarás en proyectos innovadores con clientes de alto nivel y sectores diversos, en un entorno que fomenta la inclusión, el respeto y el desarrollo técnico y profesional.

Gross salary $3000 - 4000 Full time
Senior Data Engineer
  • Artefact LatAm
Python Big Data Spark Data lake

En Artefact LatAm, somos una consultora líder enfocada en acelerar la adopción de datos e inteligencia artificial para generar impacto positivo. El Senior Data Engineer tendrá la responsabilidad de liderar el desarrollo de proyectos de Big Data con clientes, diseñando y ejecutando arquitecturas de datos que sirvan como puente entre la estrategia empresarial y la tecnología, bajo los principios de gobernanza de datos establecidos por los clientes. Además, será responsable de diseñar, mantener e implementar estructuras de almacenamiento de datos tanto transaccionales como analíticas. Este rol implica trabajar con grandes volúmenes de datos provenientes de diversas fuentes, procesarlos en entornos de Big Data y traducir los resultados en diseños técnicos sólidos y datos consistentes. También se espera que revise la integración consolidada de datos y describa cómo la interoperabilidad permite que múltiples sistemas se comuniquen entre sí.

This job offer is on Get on Board.

Tus responsabilidades serán:

  • Diseñar arquitecturas de datos que cumplan con los requisitos de los clientes y se alineen con su estrategia empresarial, asegurando la adherencia a los principios de gobernanza de datos.
  • Diseñar, implementar, mantener y actualizar estructuras de almacenamiento de datos transaccionales y analíticas, garantizando la integridad y disponibilidad de los datos.
  • Extraer datos de diversas fuentes y transferirlos eficientemente a entornos de almacenamiento de datos.
  • Diseñar e implementar procesos que soporten grandes volúmenes de datos en entornos de Big Data, utilizando herramientas y tecnologías pertinentes en cada proyecto.
  • Comunicar hallazgos, resultados y diagnósticos efectivamente, contando una historia para facilitar la comprensión de los hallazgos y la toma de decisiones por parte del cliente
  • Colaborar con equipos multidisciplinarios en la gestión estratégica de proyectos, asegurando la entrega oportuna y exitosa de soluciones de datos.
  • Desarrollar y mantener soluciones en la nube y on premise.
  • Utilizar metodologías ágiles para el desarrollo y entrega de soluciones de datos, adaptándose rápidamente a los cambios y requisitos del proyecto.
  • Apoyar al equipo traspasando conocimientos y buenas prácticas, apoyando en la capacitación y aprendizaje continuo según las necesidades individuales de los miembros
  • Gestionar al equipo mediante una planificación estratégica del proyecto, asegurando una distribución eficiente de tareas y una comunicación clara de los objetivos.
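
Como referencia ilustrativa (un boceto mínimo, no la arquitectura de ningún cliente), un paso típico de procesamiento batch con PySpark sobre un data lake podría verse así; las rutas, columnas y nombres son hipotéticos:

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("ingesta_ventas").getOrCreate()

    # Lectura desde la zona "raw" del data lake (ruta hipotética)
    ventas = spark.read.parquet("s3://datalake-ejemplo/raw/ventas/")

    # Limpieza y agregación mínima para la capa analítica
    ventas_diarias = (
        ventas
        .dropDuplicates(["id_transaccion"])
        .withColumn("fecha", F.to_date("fecha_transaccion"))
        .groupBy("fecha", "pais")
        .agg(F.sum("monto").alias("monto_total"), F.count("*").alias("transacciones"))
    )

    # Escritura particionada en la zona curada
    ventas_diarias.write.mode("overwrite").partitionBy("fecha").parquet("s3://datalake-ejemplo/curated/ventas_diarias/")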

Los requisitos del cargo son:

  • Experiencia mínima de 3 años en el uso de herramientas de gestión de datos.
  • Experiencia previa en la gestión estratégica de equipos multidisciplinarios.
  • Conocimientos avanzados de Python o Pyspark y experiencia en su aplicación en proyectos de datos.
  • Experiencia en el diseño e implementación de data warehouses, data lakes y data lakehouses.
  • Desarrollo de soluciones de disponibilización de datos.
  • Experiencia práctica con al menos uno de los principales almacenes de archivos en la nube.
  • Buen manejo del inglés.

Algunos deseables no excluyentes:

  • Experiencia en consultoría y/o proyectos de estrategia o transformación digital
  • Experiencia con servicios de procesamiento y almacenamiento de datos de AWS, GCP o Azure
  • Certificaciones

Algunos de nuestros beneficios:

  • Rápido crecimiento profesional: Un plan de mentoring para formación y avance de carrera, ciclos de evaluación de aumentos y promociones cada 6 meses.
  • Hasta 11 días de vacaciones adicionales a lo legal. Esto para descansar y poder generar un sano equilibrio entre vida laboral y personal.
  • Participación en el bono por utilidades de la empresa, además de bonos por trabajador referido y por cliente.
  • Medio día libre de cumpleaños, además de un regalito.
  • Almuerzos quincenales pagados con el equipo en nuestros hubs (Santiago, Bogotá, Lima y Ciudad de México).
  • Presupuesto de 500 USD al año para capacitaciones, sean cursos, membresías, eventos u otros (Chile).
  • Flexibilidad horaria y trabajo por objetivos.
  • Trabajo remoto, con posibilidad de hacerse híbrido (Oficina en Santiago de Chile, Cowork pagado en Bogotá, Lima y Ciudad de México).
  • Postnatal extendido para hombres, y cobertura de la diferencia con lo pagado por el sistema de salud para mujeres (Chile).

...y más!

Flexible hours Flexible schedule and freedom for attending family needs or personal errands.
Internal talks Artefact LatAm offers space for internal talks or presentations during working hours.
Bicycle parking You can park your bicycle for free inside the premises.
Company retreats Team-building activities outside the premises.
Computer repairs Artefact LatAm covers some computer repair expenses.
Computer provided Artefact LatAm provides a computer for your work.
Education stipend Artefact LatAm covers some educational expenses related to the position.
Performance bonus Extra compensation is offered upon meeting performance goals.
Personal coaching Artefact LatAm offers counseling or personal coaching to employees.
Informal dress code No dress code is enforced.
Vacation over legal Artefact LatAm gives you paid vacations over the legal minimum.
Beverages and snacks Artefact LatAm offers beverages and snacks for free consumption.
Vacation on birthday Your birthday counts as an extra day of vacation.
Parental leave over legal Artefact LatAm offers paid parental leave over the legal minimum.
Gross salary $4500 - 4800 Full time
Data Engineer
  • Coderslab.io
Python Agile SQL ETL

Coderslab.io es una empresa dedicada a transformar y hacer crecer negocios mediante soluciones tecnológicas innovadoras. Formarás parte de una organización en expansión con más de 3,000 colaboradores a nivel global, con oficinas en Latinoamérica y Estados Unidos. Te unirás a equipos diversos que reúnen a parte de los mejores talentos tecnológicos para participar en proyectos desafiantes y de alto impacto. Trabajarás junto a profesionales experimentados y tendrás la oportunidad de aprender y desarrollarte con tecnologías de vanguardia.
Role Purpose

We are looking for a Data Engineer to design, develop, and support robust, secure, and scalable data storage and processing solutions. This role focuses on data quality, performance, and integration, working closely with technical and business teams to enable data-driven decision making.

This offer is exclusive to getonbrd.com.

Funciones del cargo

Key Responsibilities

  • Design, develop, test, and implement databases and data storage solutions aligned with business needs.
  • Collaborate with users and internal teams to gather requirements and translate them into effective technical solutions.
  • Act as a bridge between IT and business units.
  • Evaluate and integrate new data sources, ensuring compliance with data quality standards and ease of integration.
  • Extract, transform, and combine data from multiple sources to enhance the data warehouse.
  • Develop and maintain ETL/ELT processes using specialized tools and programming languages.
  • Write, optimize, and maintain SQL queries, stored procedures, and functions.
  • Design data models, defining structure, attributes, and data element naming standards.
  • Monitor and optimize database performance, scalability, and security.
  • Assess existing database designs to identify performance improvements, required upgrades, and integration needs.
  • Implement data management standards and best practices to ensure data consistency and governance.
  • Provide technical support during design, testing, and production deployment.
  • Maintain clear and accurate technical documentation.
  • Work independently on projects of moderate technical complexity with general supervision.
  • Participate in Agile teams, contributing to sprint planning and delivery.
  • Provide on-call support outside business hours and on weekends on a rotating basis.
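
As an illustrative sketch only (account, credentials, and table names below are hypothetical), a typical ELT step in Python using the Snowflake connector, merging staged rows into a warehouse table, might look like this:

    import snowflake.connector

    conn = snowflake.connector.connect(
        account="my_account",
        user="etl_user",
        password="...",          # in practice, pull credentials from a secrets manager
        warehouse="ETL_WH",
        database="ANALYTICS",
        schema="STAGING",
    )
    try:
        cur = conn.cursor()
        cur.execute("""
            MERGE INTO analytics.core.customers AS tgt
            USING analytics.staging.customers_raw AS src
              ON tgt.customer_id = src.customer_id
            WHEN MATCHED THEN UPDATE SET tgt.email = src.email, tgt.updated_at = src.updated_at
            WHEN NOT MATCHED THEN INSERT (customer_id, email, updated_at)
                 VALUES (src.customer_id, src.email, src.updated_at)
        """)
    finally:
        conn.close()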

Requerimientos del cargo

Required Qualifications

  • Bachelor’s degree in Computer Science, Information Systems, Database Systems, Engineering, or a related field, or equivalent experience.
  • 4–5 years of professional experience in a similar role.
  • Strong experience with:
    • SQL
    • Snowflake
    • ETL / ELT processes
    • Cloud-based data warehousing platforms
  • Experience with ETL tools (e.g., Informatica) and programming languages such as Python.
  • Solid understanding of data warehouse design and administration.
  • Experience working with Agile methodologies (Scrum).
  • Strong analytical, conceptual thinking, and problem-solving skills.
  • Ability to plan, prioritize, and execute tasks effectively.
  • Strong communication skills, able to explain technical concepts to non-technical stakeholders.
  • Excellent written and verbal communication skills.
  • Strong interpersonal, listening, and teamwork skills.
  • Self-motivated, proactive, and results-driven.
  • Strong service orientation and professional conduct.

Opcionales

Preferred Qualifications

  • Certifications in Snowflake, SQL Server, or T-SQL.

Condiciones

Remote | Contractor | High English proficiency

$$$ Full time
Principal Data Engineer
  • Waymark
  • Remote
technical health healthcare engineer

About Waymark

Waymark is a mission-driven team of healthcare providers, technologists, and builders working to transform care for people with Medicaid benefits. Our community-based care teams—powered by proprietary data science and ML technologies—support care for tens of thousands of Medicaid members across multiple states, driving measurable reductions in avoidable emergency department visits and hospitalizations. We're designing tools and systems that bring care directly to those who need it most—removing barriers and reimagining what's possible in Medicaid healthcare delivery, and we are seeking a highly experienced Data Engineer to join this mission.

This is a principal-level individual contributor role that combines deep backend engineering fundamentals with specialized expertise in Electronic Health Record (EHR) data integration. You will report to data engineering leadership and set the technical direction for our clinical data platform by leading the design, development, and optimization of data pipelines that ingest, normalize, and transform clinical data from diverse EHR and payer systems. If this resonates with you, we invite you to bring your creativity, energy, and curiosity to Waymark.

Key Responsibilities

EHR & Partner Integrations

Architect production-grade data pipelines that integrate clinical data through multiple channels—direct EHR connections (e.g., Epic, Cerner, Athenahealth), health information exchanges (HIEs), health alliance networks, and third-party integration vendors—via

$$$ Full time
Senior Front-end Developer
  • Sanctuary Computer
E-commerce TypeScript Testing Frameworks Next.js

In this role, you’ll work on a variety of client projects to find cost-effective, high-quality, pragmatic solutions to complex problems. Responsibilities will include:

  • Collaborating with Technical Lead to meet clients' development needs
  • Building and maintaining high-performance web applications with modern frontend frameworks and tools
  • Implementing responsive, accessible, and pixel-perfect user interfaces based on design specifications
  • Integrating frontend applications with headless CMS platforms, APIs, and third-party services
  • Optimizing application performance, including bundle size, load times, and runtime efficiency
  • Architecting scalable component libraries and design systems for consistency across projects
  • Writing clear documentation for code maintenance and usage
  • Participating in project team meetings, including Sprint Planning, daily standups, and retrospectives
  • Participating in code reviews, providing constructive feedback to teammates and ensuring adherence to best practices

Apply directly on the original site at Get on Board.

Job functions

Original job posting link here for more details

We're looking for a Senior Frontend Developer who excels at building pixel-perfect websites using modern frontend frameworks. You'll collaborate with our team to build elegant, performant, and visually stunning web experiences. Your work will span a diverse range of client projects, from immersive brand websites to complex web applications, all requiring a keen eye for detail and technical excellence.

The person we’re looking for is happy, relaxed and easy to get along with. They’re flexible on anything except conceits that will lower their usually outstanding work quality. They work “smart”, by carefully managing their workflow and staggering features that have dependencies intelligently — they prefer deep work but are OK coming up to the surface now and then for top level / strategic conversations.

We believe people with backgrounds or interests in design, art, music, food or fashion tend to have a well rounded sense of design & quality — so a variety of hobbies or side projects is a big nice to have!

Quick tip: Kindly submit a complete and thoughtful application, including relevant links that help verify your work experience and identity. Applications with missing or insufficient information will not move forward in the review process.
Our team carefully reviews every complete submission, and we truly appreciate the time and effort you put into applying.

Qualifications and requirements

Must Have Competencies:
We’re always pitching for new and exciting technology niches. Some of the areas below are relevant to us!
  • 8+ years writing highly performant frontend code, an obsession for 95+ Lighthouse scores
  • Expert level experience with Typescript, and one of Next.js, Nuxt, Svelte, Vue
  • Extensive experience with headless CMS like Sanity, Contentful, Prismic or more
  • Fluency in industry standard PaaS like Vercel, Netlify, Firebase, etc
  • Fluency in eCommerce technologies like Shopify (headless & liquid), Stripe, Swell and others
  • Experience building accessible, responsive interfaces with attention to performance optimization and SEO best practices
  • Strong understanding of modern CSS methodologies (Tailwind, CSS Modules, etc) and animation libraries
  • Experience with state management solutions (Redux, Zustand, Pinia) and API integration patterns
  • Proficiency with testing frameworks (Jest, Playwright, Cypress) and commitment to writing maintainable, well-documented code
  • Experience with design systems and component libraries, working closely with designers to ensure pixel-perfect implementations
  • Real-time & performance optimization: experience with WebSockets for live data updates, caching strategies (Redis, CDN-level caching), CDN configuration and optimization (Cloudflare, Fastly), and image optimization techniques including proxies and delivery networks
Nice to Have Competencies:
We’re always pitching for new and exciting technology niches. Some of the areas below are relevant to us!
  • WebGL & Canvas expertise: experience building interactive graphics, animations, and visualizations using WebGL, Three.js, or native Canvas API
  • Data visualization: creating compelling, interactive data visualizations with libraries like Mapbox, D3.js, Chart.js, or similar tools
  • Full-stack development experience: comfortable working across the entire stack, from frontend to backend and database layers
  • PostgreSQL expertise: strong experience with database design, query optimization, and managing complex relational data structures
  • GraphQL & API design: building and maintaining GraphQL or REST APIs with a focus on performance and developer experience
  • Real-time technologies: experience with WebSockets, Server-Sent Events, or similar technologies for building live, interactive features
  • Authentication & security: implementing secure authentication flows (OAuth, JWT) and following security best practices
  • Client-facing experience: working directly with customers to gather requirements and provide technical solutions
  • Product management experience: defining product roadmaps and collaborating closely with stakeholders
  • Engineering management experience: leading teams, setting technical direction, and mentoring developers

Conditions

Flexible hours Flexible schedule and freedom for attending family needs or personal errands.
$60000 - $80000 Full time
Data Engineer
  • Sayari
  • Remote - US
python software code financial

About Sayari: 

Sayari is a risk intelligence provider that equips the public and private sectors with immediate visibility into complex commercial relationships by delivering the largest commercially available collection of corporate and trade data from over 250 jurisdictions worldwide. Sayari's solutions enable risk resilience, mission-critical investigations, and better economic decisions. 

Headquartered in Washington, D.C., its solutions are trusted by Fortune 500 companies, financial institutions, and government agencies, and are used globally by thousands of users in over 35 countries. Funded by world-class investors, with a strategic $228 million investment by TPG Inc. (NASDAQ: TPG) in 2024, Sayari has been recognized by the Inc. 5000 and the Deloitte Technology Fast 500 as one of the fastest growing private companies in the United States and was featured as one of Inc.’s “Best Workplaces” for 2025.

POSITION DESCRIPTION

Sayari is looking for an Entry-Level Data Engineer to join our Data team located in Washington, DC. The Data team is an integral part of our Engineering division and works closely with our Software & Product teams, as well as other key stakeholders across the business.

JOB RESPONSIBILITIES:

  • Write and deploy crawling scripts to collect source data from the web
  • Write and run data transformers in Scala Spark to standardize bulk data sets
  • Write and run modules in Python to parse entity references and relationships from source data
  • Diagnose and fix bugs reported by internal and external users
  • Analyze and report on internal datasets to answer questions and inform feature work
  • Work collaboratively on and across a team of engineers using basic agile principles
  • Give and receive feedback through code reviews
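
As an illustrative sketch only (the URL, CSS selector, and field names are hypothetical, not Sayari's actual sources), a minimal crawl-and-parse step in Python might look like this:

    import requests
    from bs4 import BeautifulSoup

    def crawl_registry_page(url: str) -> list[dict]:
        """Fetch one page and extract rows from a (hypothetical) table of registered entities."""
        resp = requests.get(url, timeout=30, headers={"User-Agent": "example-crawler"})
        resp.raise_for_status()
        soup = BeautifulSoup(resp.text, "html.parser")
        records = []
        for row in soup.select("table.registry tr")[1:]:  # skip the header row
            cells = [cell.get_text(strip=True) for cell in row.find_all("td")]
            if len(cells) >= 2:
                records.append({"entity_name": cells[0], "registration_id": cells[1]})
        return records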

SKILLS & EXPERIENCE

Req


$$$ Full time
Data Engineer Databricks
  • 42Labs
  • Santiago (Hybrid)
Python SQL Scala Databricks

En 42Labs no solo desarrollamos tecnología: construimos soluciones donde lo técnico y lo humano van de la mano. Trabajamos en iniciativas que transforman negocios en distintos verticales (financiero, logística y educación), creando plataformas de datos que permiten tomar mejores decisiones, automatizar procesos y habilitar analítica confiable. Como Data Engineer enfocado en Databricks, seremos parte de un equipo que diseña y mantiene pipelines robustos, escalables y orientados a calidad, asegurando que los datos lleguen a tiempo, con integridad y trazabilidad. Nuestro objetivo es que la plataforma de datos soporte casos de uso reales, desde ingesta y procesamiento hasta modelado y consumo, promoviendo buenas prácticas, colaboración y mejora continua dentro de una cultura transparente y sin jerarquías rígidas.

Apply to this job at getonbrd.com.

Funciones

En el rol de Data Engineer con Databricks, nuestro foco será construir y operar pipelines de datos de punta a punta, asegurando rendimiento, calidad y mantenibilidad.
  • Diseñar, desarrollar y mantener pipelines de ingesta, procesamiento y transformación de datos en Databricks.
  • Implementar modelos de datos y estrategias de organización (por ejemplo, capas y convenciones) para soportar analítica y reporting.
  • Optimizar rendimiento (jobs, particiones, formatos de almacenamiento y configuración) para costos eficientes y tiempos de respuesta adecuados.
  • Asegurar calidad de datos mediante validaciones, controles de consistencia y manejo de errores/recuperación.
  • Producir trazabilidad end-to-end: documentación, linaje y buenas prácticas de versionado y despliegue.
  • Colaborar con Ingeniería de Software y stakeholders para entender requerimientos, priorizar y convertirlos en soluciones medibles.
  • Monitorear procesos y responder incidentes: revisar logs, métricas y alertas, y proponer mejoras preventivas.
Trabajaremos con autonomía en un esquema híbrido, apoyándonos en feedback constante y en una cultura de colaboración donde la calidad y el impacto en las personas importan.
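
A modo de referencia (un esquema mínimo, asumiendo la sesión spark ya disponible en un notebook de Databricks; rutas, columnas y nombres son hipotéticos), un paso de bronze a silver sobre tablas Delta podría verse así:

    from pyspark.sql import functions as F

    # Lectura de la capa bronze (datos crudos ya ingeridos)
    eventos_bronze = spark.read.format("delta").load("/mnt/datalake/bronze/eventos")

    # Validaciones y limpieza mínimas antes de promover a silver
    eventos_silver = (
        eventos_bronze
        .dropDuplicates(["event_id"])
        .filter(F.col("event_ts").isNotNull())
        .withColumn("fecha_proceso", F.current_date())
    )

    # Escritura particionada en la capa silver
    (
        eventos_silver.write
        .format("delta")
        .mode("overwrite")
        .partitionBy("fecha_proceso")
        .save("/mnt/datalake/silver/eventos")
    )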

Requisitos

Buscamos un/a Data Engineer con experiencia práctica en el ecosistema de datos y con foco en construir soluciones confiables, escalables y fáciles de mantener. Valoramos la combinación entre criterio técnico, comunicación clara y orientación a la mejora continua.
Lo que necesitamos de ti
  • Experiencia con Databricks y trabajo con pipelines de datos (ingesta, transformación y orquestación).
  • Conocimientos sólidos en procesamiento distribuido y formatos de datos para optimización de rendimiento.
  • Buenas prácticas de ingeniería de datos: control de versiones, documentación, pruebas/validaciones y manejo de errores.
  • Experiencia implementando capas/modelos para analítica (por ejemplo, a través de enfoques como medallion o similares) y asegurando consistencia.
  • Capacidad para depurar y mejorar rendimiento de jobs (lecturas/escrituras, particiones, configuración y tuning).
  • Conocimientos en SQL y al menos un lenguaje para desarrollo (comúnmente Python/Scala, según el stack).
  • Mentalidad de calidad: validar datos, detectar anomalías y proponer correcciones con enfoque preventivo.
  • Comunicación efectiva: explicar decisiones técnicas, levantar riesgos temprano y alinear expectativas con equipos no técnicos.
Cómo nos gusta trabajar
  • Colaboración genuina y transparencia: nos importa cómo construyes con el equipo, no solo el resultado.
  • Autonomía responsable: propones mejoras, haces seguimiento y entregas con foco en impacto.
  • Aprendizaje constante: te sumas a la Academia 42Labs y disfrutas compartir conocimiento.

Deseable

  • Experiencia con orquestación y programación de workflows (por ejemplo, jobs programados, scheduling y patrones de reintento).
  • Conocimiento de seguridad y gobernanza de datos (permisos, acceso por roles, auditoría básica).
  • Experiencia con herramientas de monitoreo/alertas para operación de pipelines.
  • Participación en diseño de arquitectura de datos (estándares de modelado, convenciones y escalabilidad).
  • Experiencia trabajando con equipos multidisciplinarios (Data, Backend, BI) y levantando requerimientos con claridad.

Beneficios

  • Salud y protección integral: seguros complementarios de salud, dental, de vida y catastrófico 100% financiados por nosotros (con opción de extender a tu familia). También estamos integrados a la red de beneficios de Caja Los Andes y la ACHS.
  • Tiempo y flexibilidad: contamos con Flexi Days y Party Time (tardes libres). Celebramos tu cumpleaños con una tarde libre y damos tiempo extra para hitos como matrimonio, nacimiento de hijos o exámenes de título.
  • Bienestar y equilibrio: promovemos un balance real con un entorno de trabajo híbrido que confía en tu autonomía.
  • Crecimiento: Academia 42Labs, planes de desarrollo personalizados y acceso a Udemy Business.
  • Conectividad y apoyos: bonos mensuales para conexión a internet y plataformas de ocio favoritas, además de aguinaldos en Fiestas Patrias y Navidad.
Si te entusiasma ser parte de una comunidad que aprende, colabora y celebra, queremos conocerte. ¡Postula con nosotros!

Health coverage 42Labs pays or copays health insurance for employees.
Computer provided 42Labs provides a computer for your work.
Informal dress code No dress code is enforced.
Vacation over legal 42Labs gives you paid vacations over the legal minimum.
Gross salary $2000 - 2300 Full time
Data Engineer
  • Coderslab.io
  • Santiago (Hybrid)
Java Python Docker ETL

Coderslab.io es una empresa dedicada a transformar y hacer crecer negocios mediante soluciones tecnológicas innovadoras. Formarás parte de una organización en expansión con más de 3,000 colaboradores a nivel global, con oficinas en Latinoamérica y Estados Unidos. Te unirás a equipos diversos que reúnen a parte de los mejores talentos tecnológicos para participar en proyectos desafiantes y de alto impacto. Trabajarás junto a profesionales experimentados y tendrás la oportunidad de aprender y desarrollarte con tecnologías de vanguardia.

Job opportunity published on getonbrd.com.

Funciones del cargo

Análisis, diseño, desarrollo y mantenimiento de sistemas de procesamiento de datos en proyectos de Big Data. El profesional deberá crear pipelines en plataformas Cloud y Data Lake para la entrega de modelos de datos en producción, apoyando también en la arquitectura, el diseño de plataformas, el desarrollo de procesos ETL/ELT, ingeniería de datos serverless y modelamiento analítico.

Requerimientos del cargo

Conocimientos en:

  • Cloud & DevOps
    • Google Cloud Platform (GCP)
    • Docker
    • Kubernetes
    • Terraform
  • Data Engineering & Streaming
    • Google Cloud Pub/Sub
    • Apache Airflow
  • Programming Languages
    • Java
    • Python
  • Data Formats
    • JSON
    • Apache Avro
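
Como referencia ilustrativa de este stack (el proyecto, el tópico y el esquema del evento son hipotéticos), la publicación de un evento en Google Cloud Pub/Sub desde Python podría verse así:

    import json

    from google.cloud import pubsub_v1

    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path("mi-proyecto", "eventos-transacciones")

    evento = {"id": "tx-123", "monto": 45990, "moneda": "CLP"}
    futuro = publisher.publish(topic_path, data=json.dumps(evento).encode("utf-8"))
    print("Mensaje publicado:", futuro.result())  # espera la confirmación del mensaje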

Condiciones

Modalidad de contratación: Plazo fijo

$$$ Full time
Data Engineer
  • BC Tecnología
  • Santiago (Hybrid)
Python SQL NoSQL ETL
BC Tecnología es una consultora de TI que gestiona portafolio, desarrolla proyectos y ofrece servicios de outsourcing y selección de profesionales. Nuestro enfoque es crear equipos ágiles para Infraestructura, Desarrollo de Software y Unidades de Negocio, trabajando con clientes de servicios financieros, seguros, retail y gobierno. Participarás en proyectos innovadores para clientes de alto nivel, con un equipo multidisciplinario y una cultura de aprendizaje y crecimiento profesional. Formarás parte de una organización que prioriza la calidad, la seguridad y la gobernanza de datos mientras impulsa soluciones de alto impacto para la toma de decisiones estratégicas.

Apply to this job at getonbrd.com.

Funciones

  • Diseñar, construir y mantener pipelines de datos (ETL / Data Pipelines) robustos, escalables y eficientes para procesar grandes volúmenes de información de múltiples fuentes.
  • Gestionar la ingesta, procesamiento, transformación y almacenamiento de datos estructurados y no estructurados.
  • Implementar soluciones de ingeniería de datos en entornos cloud, con preferencia por AWS (Glue, Redshift, S3, etc.).
  • Traducir necesidades de negocio en requerimientos técnicos viables y sostenibles.
  • Colaborar con equipos multidisciplinarios (negocio, analítica, TI) para entregar soluciones de valor.
  • Aplicar buenas prácticas de desarrollo, seguridad, calidad y gobernanza (governance) de datos; versionado de código, pruebas y documentación.
  • Participar en comunidades de datos, promover mejoras continuas y mantener la documentación actualizada.
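
A modo ilustrativo (un esquema simplificado con boto3 y pandas, no un job de Glue; el bucket, las rutas y las columnas son hipotéticos), un paso básico de ingesta y limpieza sobre S3 podría verse así:

    import io

    import boto3
    import pandas as pd

    s3 = boto3.client("s3")

    # Lectura del archivo crudo desde la zona "raw"
    obj = s3.get_object(Bucket="datalake-retail-ejemplo", Key="raw/ventas/2024-05.csv")
    ventas = pd.read_csv(io.BytesIO(obj["Body"].read()))

    # Limpieza mínima: descartar filas sin SKU y duplicados por id de venta
    ventas_limpias = ventas.dropna(subset=["sku"]).drop_duplicates(subset=["id_venta"])

    # Escritura del resultado en la zona "curated"
    buffer = io.StringIO()
    ventas_limpias.to_csv(buffer, index=False)
    s3.put_object(Bucket="datalake-retail-ejemplo", Key="curated/ventas/2024-05.csv", Body=buffer.getvalue().encode("utf-8"))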

Requirements and skills

We are looking for a Data Engineer with at least 3 years of experience in data engineering roles and verifiable experience with ETLs and cloud data architectures. The ideal candidate has worked in agile environments and, ideally, has experience on projects in retail or related sectors.
Required technical knowledge:
  • Cloud computing: AWS (Glue, Redshift, S3, among others).
  • Pipeline orchestration: Apache Airflow.
  • Programming languages: Python (preferred) or Java.
  • Data storage: SQL, NoSQL, data warehouses.
  • Good practices: version control, testing, and documentation.
Soft skills: customer orientation, ability to work in multidisciplinary teams, proactivity, analytical thinking, and the communication skills needed to translate between technical and business requirements.

Nice to have

Experience in retail and in projects involving data governance and compliance, experience with visualization and analytics tools, knowledge of data security and regulatory compliance, and experience with migrations or modernization of data stacks.

Benefits

At BC Tecnología we promote a collaborative work environment that values commitment and constant learning. Our culture is oriented toward professional growth through integration and knowledge sharing across teams.
The hybrid arrangement we offer, based in Las Condes, combines the flexibility of remote work with in-person collaboration, supporting a better balance and a more dynamic workday.
You will take part in innovative projects with high-profile clients across diverse sectors, in an environment that fosters inclusion, respect, and technical and professional development.

$$$ Full time
Finance Analyst
  • H1
  • New York
analyst saas system technical

At H1, we believe access to the best healthcare information is a basic human right. Our mission is to provide a platform that can optimally inform every doctor interaction globally. This promotes health equity and builds needed trust in healthcare systems. To accomplish this, our teams harness the power of data and AI technology to unlock groundbreaking medical insights and convert those insights into actions that result in optimal patient outcomes and accelerate an equitable and inclusive drug development lifecycle. Visit h1.co to learn more about us.


The Finance team plays a crucial role in creating that future. It is our role to serve as a liaison between H1’s Commercial & Technical teams to oversee issues related to financial reporting, analysis, forecasting, and planning, as well as resource prioritization and business management. With a deep understanding of the business levers underlying the operations of our Infrastructure team, this team is responsible for helping the business drive toward clear and effective decisions that are critical to the success of the Company.


WHAT YOU'LL DO AT H1

As a Finance Analyst, you’ll be part of a highly visible team that partners with leaders and departments across the company. You’ll support the finance team with quarterly and annual forecasting, expense budgeting, key metrics reporting and analysis, close processes, and variance analysis, while also driving various automation and simplification projects.


- Assist with the preparation of annual budgets and financial forecasts to ensure alignment with the company’s strategic goals and key initiatives

- Support the finance team in reporting and analyzing key metrics such as annual recurring revenue (ARR) and churn

- Provide actionable insights on revenue and collection trends, customer retention and profitability, and other key performance drivers

- Assist with the implementation of variable compensation plans for teams across the organization

- Track and calculate monthly, quarterly, and annual sales commissions in accordance with approved compensation plans

- Support monthly financial presentations for both the executive team and board of director meetings

- Implement scalable processes through automation and process improvement to help strengthen the finance foundation

- Perform ad-hoc analysis on critical business needs


ABOUT YOU

You’re a strong, data-driven analytical finance professional with experience in FP&A or strategic finance for high-growth enterprise B2B SaaS, healthcare, or marketplace companies. You know how to thrive in a fast-paced and frequently changing environment.


REQUIREMENTS

- 3+ years of experience in a Finance department

- Bachelor’s degree in Finance, Accounting, or a related field (MBA is a plus)

- Experience in B2B SaaS financial modeling is a plus

- Advanced skills in Microsoft Excel and PowerPoint (Google Sheets and Slides experience is a plus)

- Excellent communication skills with the ability to interact directly with people at all levels of the organization

- Ability to meet deadlines while working in a fast-paced environment

- Advanced system skills and the ability to learn new systems quickly.

- Strong attention to detail and ability to effectively prioritize tasks



COMPENSATION

This role pays $75,000 to $88,000 per year, based on experience, in addition to stock options.


Anticipated role close date: 01/10/2026



H1 OFFERS

- Full suite of health insurance options, in addition to generous paid time off

- Pre-planned company-wide wellness holidays

- Retirement options

- Health & charitable donation stipends

- Impactful Business Resource Groups

- Flexible work hours & the opportunity to work from anywhere

- The opportunity to work with leading biotech and life sciences companies in an innovative industry with a mission to improve healthcare around the globe



H1 is proud to be an equal opportunity employer that celebrates diversity and is committed to creating an inclusive workplace with equal opportunity for all applicants and teammates. Our goal is to recruit the most talented people from a diverse candidate pool regardless of race, color, ancestry, national origin, religion, disability, sex (including pregnancy), age, gender, gender identity, sexual orientation, marital status, veteran status, or any other characteristic protected by law.

 

H1 is committed to working with and providing access and reasonable accommodation to applicants with mental and/or physical disabilities. If you require an accommodation, please reach out to your recruiter once you've begun the interview process. All requests for accommodations are treated discreetly and confidentially, as practical and permitted by law.



Gross salary $1100 - 1700 Full time
Data Engineer
  • Decision Point Latam
  • Ciudad de México & Santiago (Hybrid)
Python Excel SQL ETL
  • Development as a Subject Matter Expert in FMCG sales & marketing analytics domain, working directly with top FMCG brands across Latin America.
  • Work extensively with clients. A direct interaction with clients and with our team in India. These direct interactions will fasten your learning process and will enable you to master traits of strategy consulting, i.e. from understanding business objective to analyzing data in a methodical way culminating with a final output to be delivered to the client.
  • Our senior partners have a wide professional experience and expertise, having played executive roles in leading companies in Chile and various Industrial and FMCG Global companies across continents. Advanced Analytics and Big Data is not only about Data Science, but also Decision Science. You will get the best from both.

Apply directly at getonbrd.com.

Role responsibilities

  • Data Infrastructure Development: Design, build, and maintain scalable data infrastructure on Cloud Platforms for data processing to support various data initiatives and analytics needs within the organization
  • Data Pipeline Implementation: Design, develop and maintain scalable data pipelines to ingest, transform, and load data from various sources into cloud-based storage and analytics platforms using Python, and SQL
  • Collaboration and Support: Collaborate with cross-functional teams to understand data requirements and provide technical support for data-related initiatives and projects, helping translate business needs into database solutions.
  • Performance Optimization: Optimize data processing workflows and cloud resources for efficiency and cost-effectiveness. Implement data quality checks and monitoring to ensure the reliability and integrity of data pipelines.
  • Build and optimize data warehouse solutions for efficient storage and retrieval of large volumes of structured and unstructured data.
  • Data Governance and Security: Implement data governance policies and security controls to ensure compliance and protect sensitive information across cloud platforms environment.

Role requirements

  • Bachelor’s degree in computer science, Engineering, Statistics, Mathematics, or related field. Master's degree preferred.
  • Advanced English is mandatory
  • 1+ years of experience as Data Engineer
  • Cloud data storage is mandatory
  • Strong understanding of data modeling, ETL processes, and data warehousing concepts
  • Experience in SQL language, relational data modelling and sound knowledge of Database administration is mandatory
  • Proficiency in Python related to Data Engineering for developing data pipelines, ETL (Extract, Transform, Load) processes, and automation scripts.
  • Proficiency in Microsoft Excel
  • Experience within integrating data management into business and data analytics is mandatory
  • Experience working with cloud platform for deploying and managing scalable data infrastructure
  • Experience working with technologies such as DBT, airflow, snowflake, Databricks among others is a plus
  • Excellent Stakeholder Communication
  • Familiarity with working with numerous large data sets
  • Comfort in a fast-paced environment
  • Strong analytical skills with the ability to collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy
  • Excellent problem-solving skills
  • Strong interpersonal and communication skills for cross-functional teams
  • Proactive approach to continuous learning and skill development
  • Experience leading or collaborating with a team of data scientists and engineers to develop and deliver machine learning models that work in a production setting.

Conditions

  • Hybrid: 4x1 in Chile and 3x2 in Mexico
  • 2 DP Days off per quarter
  • Complementary health insurance in Chile
  • Lunch at the office
  • 5 extra vacation days

Flexible hours: Flexible schedule and freedom for attending family needs or personal errands.
Partially remote: You can work from your home some days a week.
Health coverage: Decision Point Latam pays or copays health insurance for employees.
Computer provided: Decision Point Latam provides a computer for your work.
Informal dress code: No dress code is enforced.
Vacation over legal: Decision Point Latam gives you paid vacations over the legal minimum.
Beverages and snacks: Decision Point Latam offers beverages and snacks for free consumption.
$180000 - $220000 Full time
Data Scientist
  • Junction
  • Remote
python technical cloud management

Healthcare is in crisis and the people behind the results deserve better. With more and more data coming from wearables, lab tests, and patient–doctor interactions, we’re entering an era where data is abundant.

Junction is building the infrastructure layer for diagnostic healthcare, making patient data accessible, actionable, and automated across labs and devices. Our mission is simple but ambitious: use health data to unlock unprecedented insight into human health and disease.

If you're passionate about how technology can supercharge healthcare, you’ll fit right in.

Backed by Creandum, Point Nine, 20VC, YC, and leading angels, we’re working to solve one of the biggest challenges of our time: making healthcare personalized, proactive, and affordable. We’re already connecting millions and scaling fast.

Short on time? TL;DR

  • You: Can define what should be measured, how it should be modeled, and how those insights should shape product and company decisions.

  • Ownership: You’ll own Junction’s highest-leverage statistical, modeling, and evaluation work across diagnostics, clinical workflows, and AI-enabled product development.

  • Scope: This is not a pure IC modeling role and not a reporting role. You’ll set the methodology, research roadmap, and decision framework for how Junction uses data to drive product, clinical, and business outcomes.

  • Salary: $180,000 – $220,000 + equity

  • Location: Fully remote (EST timezone only)

Why we need you

Junction sits in the flow of high-value diagnostics and clinical data. As the company grows, our advantage moves beyond just having data to having the ability to turn it into reliable intelligence that improves product decisions, customer outcomes, and the performance of the business.

Some of that work exists today, but it is not yet owned as a coherent function. Models get built. Analyses get done. Experiments answer local questions. But we need someone who can define the broader scientific and analytical system: what we should measure, what methods we trust, where modeling creates real leverage, and how that work translates into products and decisions that hold up outside a demo.

We’re hiring our first Data Scientist to take ownership of that work and establish that standard.

This role will lead Junction’s most important modeling, experimentation, and evaluation work. You’ll partner closely with the data, product, engineering, and leadership teams to drive the analytical roadmap through which Junction extracts differentiated value from its data.

What you’ll be doing day to day

  • Own the research and modeling work underlying Junction’s highest-priority data science opportunities across diagnostics, clinical workflows, and AI-enabled product features

  • Define rigorous frameworks for measurement, experimentation, and causal evaluation so we can distinguish signal from noise and make decisions we can defend

  • Lead development of predictive models, segmentation approaches, risk or routing logic, and other statistical systems that directly inform product and business strategy

  • Build the analytical foundation behind customer-facing features — from model development through to validation and performance tracking

  • Partner with engineering and data engineering to ensure models and analytical systems can be put in production, are reliable, and useful in real workflows

  • Establish how Junction evaluates data-driven and AI-enabled features, including methodology, quality thresholds, monitoring, and performance review

  • Communicate complex technical findings clearly to technical and non-technical stakeholders, including tradeoffs, limitations, and implications for action

Requirements

  • Strong track record of leading high-stakes analytical work that influenced product, operational, or business decisions

  • Deep foundation in statistical inference, experimental design, observational analysis, and model evaluation

  • Strong Python and/or R skills, with experience working on large, messy real-world datasets

  • Experience building predictive or decision-support models in production or near-production environments

  • Experience partnering closely with engineering to move work from analysis or prototype into deployed systems

  • Ability to operate at both strategic and hands-on levels: defining the roadmap while also getting into the details when needed

  • Strong communication and stakeholder management skills; able to explain methods, findings, and tradeoffs to executives as well as technical peers

  • Comfort operating in a startup environment with ambiguity, limited structure, and high ownership

Nice to have

  • Experience designing, executing, and publishing research studies

  • Experience with HIPAA, PHI, or other regulatory clinical frameworks

  • Deep familiarity with modern data tooling and production workflows across warehouses, orchestration, and transformation layers

  • Experience developing, deploying, and designing evaluation frameworks for LLM or AI-powered features in customer-facing products

  • Expertise directly working with healthcare, diagnostics, lab data, wearable data, and other clinical data

  • Experience applying causal inference methods in practice, such as difference-in-differences, propensity scoring, or instrumental variables (a minimal sketch follows this list)
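
For illustration only, here is a minimal difference-in-differences sketch on synthetic data using statsmodels; the data-generating process and effect size are invented, and this is not Junction's methodology.

    # Difference-in-differences on synthetic data; the coefficient on the
    # treated:post interaction is the DiD estimate (true effect here is 2.0).
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 2000
    df = pd.DataFrame({
        "treated": rng.integers(0, 2, n),
        "post": rng.integers(0, 2, n),
    })
    df["y"] = (1.0 + 0.5 * df["treated"] + 0.3 * df["post"]
               + 2.0 * df["treated"] * df["post"] + rng.normal(0, 1, n))

    model = smf.ols("y ~ treated * post", data=df).fit()
    print(model.params["treated:post"])  # should land close to 2.0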

What this role isn’t

  • Not an analytics role focused on dashboards, reporting, or one-off analysis

  • Not an ML platform role — you won’t own infrastructure or tooling

  • Not a good fit if you mainly want to experiment with models or AI ideas without being accountable for how they perform in production

  • Not a good fit if you struggle with ambiguity. Knowing what to work on is part of the job

How you'll be compensated

  • Salary: $180,000 – $220,000 + equity

  • Your salary is dependent on your location and experience level

  • Generous early stage options (extended exercise post 2 years employment)

  • Regular in-person offsites, last were in Tenerife and Miami

  • Monthly learning budget of $300 for personal development and productivity

  • Flexible, remote-first working - including $1K for home office equipment

  • Monthly budget of $150 to use towards a coworking space

  • 25 days off a year + national holidays

  • Healthcare coverage depending on location

Oh and before we forget:

  • Backend Stack: Python (FastAPI), Go, PostgreSQL, Google Cloud Platform (Cloud Run, GKE, Cloud BigTable, etc), Temporal Cloud

  • Frontend Stack: TypeScript, Next.js

  • API docs are here: https://docs.junction.com/

  • Company handbook is here with engineering values + principles

Important details before applying:

  • We only hire folks physically based in GMT and EST timezones - more information here

  • We do not sponsor visas right now given our stage



$$$ Full time
Data Engineer
  • WiTi
  • Santiago (Hybrid)
Python SQL ETL Automation
WiTi connects technology talent with high-impact projects across Latin America. Our team focuses on systems integration, custom software, and innovative development for mobile devices, with an emphasis on solving complex problems through innovative solutions.
We are looking for a Data Engineer to join a strategic project at one of the country's leading automotive groups, with a nationwide presence in the sale of light and commercial vehicles and a data infrastructure in the midst of modernization and scaling.
You will be responsible for designing, implementing, and documenting processes for loading, transforming, and migrating large volumes of data in an AWS environment.
You will work in an enterprise context where quality, traceability, and reproducibility of results are fundamental, collaborating with technical and business teams to ensure the data is reliable, scalable, and maintainable.

Apply to this job from Get on Board.

Key Responsibilities

  • Design a repeatable approach for loading large volumes of data, standardizing conversion rules and patterns.
  • Participate in process automation through scripts, validation rules, templates, and pipelines.
  • Implement and maintain ETL/ELT processes on AWS, integrating with the client's stack for sources, loads, transformations, and monitoring.
  • Document business rules, technical decisions, and edge cases to ensure processes remain maintainable and scalable.

Mandatory Requirements

  • Advanced SQL: PL/SQL, complex queries, optimization, heavy joins, window functions, CTEs, and reading execution plans (see the sketch after this list).
  • Experience with Amazon Redshift: writing SQL, performance, and good practices.
  • Knowledge of the ETL/ELT landscape on AWS (specific tools may vary depending on the stack).
  • Experience working in enterprise contexts with a focus on quality, traceability, and reproducible results.
  • Availability to work on-site 3 or 4 times per week at offices located on the Panamericana near Lampa.
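
As a rough illustration of the SQL patterns listed above (a CTE feeding a window function), here is a small, self-contained Python sketch that runs locally on DuckDB rather than Redshift; the table and column names are invented.

    # A CTE feeding a window function, run locally on DuckDB; the sales table
    # and its columns are invented for the example.
    import duckdb

    con = duckdb.connect()
    con.execute("""
        CREATE TABLE sales AS
        SELECT * FROM (VALUES
            (1, 'north', DATE '2024-01-01', 100.0),
            (2, 'north', DATE '2024-01-02', 150.0),
            (3, 'south', DATE '2024-01-01',  80.0),
            (4, 'south', DATE '2024-01-03', 120.0)
        ) AS t(sale_id, region, sale_date, amount)
    """)

    rows = con.execute("""
        WITH daily AS (
            SELECT region, sale_date, SUM(amount) AS total
            FROM sales
            GROUP BY region, sale_date
        )
        SELECT region, sale_date, total,
               SUM(total) OVER (PARTITION BY region ORDER BY sale_date) AS running_total
        FROM daily
        ORDER BY region, sale_date
    """).fetchall()

    for row in rows:
        print(row)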

Nice to Have

  • Experience automating migrations: conversion rules, automatic validations, and QA pipelines.
  • Knowledge of Python or another scripting language to support automation and controls.
  • Knowledge of AWS QuickSight.
  • Experience with data governance and good practices: naming conventions, documentation, and data quality checks.

Benefits

At WiTi we promote a collaborative environment where a culture of learning is a fundamental part of how we work. Our benefits include:
  • A personalized career plan for professional development.
  • Certifications so you can keep growing in your career.
  • Language courses, supporting personal and professional development.

Digital library: Access to digital books or subscriptions.
Computer provided: WiTi provides a computer for your work.
Personal coaching: WiTi offers counseling or personal coaching to employees.
Informal dress code: No dress code is enforced.
$$$ Full time
Machine Learning Engineer
  • NeuralWorks
  • Santiago (Hybrid)
Python SQL Docker Machine Learning

NeuralWorks is a high-growth company founded 4 years ago. We are working at full speed on things that will get people talking.
We are a team where creativity, curiosity, and a passion for doing things well come together. We dare to explore frontiers others don't reach: a Monte Carlo-based predictive model, a convolutional network for face detection, a Bluetooth position sensor, the recreation of an acoustic space using finite impulse response.
These are just some of our challenges, where we learn, explore, and complement each other as a team to achieve things thought impossible.
We work on our own projects and support corporations through partnerships where, side by side, we combine knowledge with creativity to imagine, design, and create digital products capable of captivating people and creating impact.

👉 Learn more about us

Apply exclusively at getonbrd.com.

Job description

The Data & Analytics team works on projects that combine enormous data volumes with AI, such as detecting and predicting failures before they happen, optimizing pricing, personalizing the customer experience, optimizing fuel usage, and detecting faces and objects with computer vision.

You will work on moving processes to MLOps and creating tailor-made data products based on analytical models, mostly Machine Learning, though a broader range of techniques may be used.

Within a multidisciplinary team of Data Scientists, Translators, DevOps engineers, and Data Architects, your role will be key to the development and execution of our products, because you connect the enablement and operation of environments with the real world. You will be responsible for increasing delivery speed, improving code quality and security, understanding the structure of the data, and optimizing processes for the development team.

On any project you work on, we expect a strong spirit of collaboration, a passion for innovation and code, and an automation-first mindset over manual processes.

As an MLE, your work will involve:

  • Working directly with the Data Scientists to put Machine Learning models into production, using and creating ML pipelines.
  • Collecting large and varied data sets.
  • Capturing interactions with the real world for later retraining.
  • Building the pieces needed to serve our models and have them interact with the rest of the company in a real, highly scalable environment (see the sketch after this list).
  • Working closely with the Data Scientists to find efficient ways to monitor, operate, and explain the models.
  • Promoting a technical culture, driving data products with DevSecOps, SRE, and MLOps practices.
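
As one hedged example of what "serving a model" can look like, here is a minimal FastAPI sketch that exposes a prediction endpoint. The stub model, feature names, and framework choice are assumptions for illustration, not NeuralWorks' actual stack.

    # Minimal model-serving sketch with FastAPI: one /predict endpoint backed by
    # a model object loaded once at import time. The stub model stands in for a
    # real artifact (e.g. joblib.load("model.joblib")); feature names are invented.
    from fastapi import FastAPI
    from pydantic import BaseModel

    class StubModel:
        def predict(self, X):
            # Placeholder logic so the sketch runs without a trained artifact.
            return [sum(row) for row in X]

    model = StubModel()
    app = FastAPI()

    class Features(BaseModel):
        feature_a: float
        feature_b: float

    @app.post("/predict")
    def predict(features: Features) -> dict:
        prediction = model.predict([[features.feature_a, features.feature_b]])[0]
        return {"prediction": float(prediction)}

    # Run with: uvicorn serve:app --port 8000  (assuming this file is serve.py)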

Key qualifications

  • A degree in Computer Science Engineering ("Ingeniería Civil en Computación") or similar.
  • At least 3 years of hands-on experience in roles such as Software Engineer or ML Engineer.
  • Experience with Python.
  • Understanding of data structures, with analytical skills for working with unstructured data sets and advanced SQL knowledge, including query optimization.
  • Experience using CI/CD pipelines and Docker.
  • A passion for data processing problems.
  • Experience with cloud providers (GCP, AWS, or Azure, preferably GCP), especially their data processing services.
  • Good command of English, above all for reading, since you must be able to read papers, articles, and documentation on a regular basis.
  • Communication and collaboration skills.

At NeuralWorks we care about diversity! We firmly believe in creating an inclusive, diverse, and equitable work environment. We recognize and celebrate diversity in all its forms and are committed to offering equal opportunities to all candidates.

"Men apply for a job when they meet 60% of the qualifications, but women only if they meet 100% of them." Gaucher, D., Friesen, J., & Kay, A. C. (2011).

We encourage you to apply even if you don't meet every requirement.

Nice to have

  • The agility to spot possible improvements, problems, and solutions in architectures.
  • Experience with infrastructure as code, observability, and monitoring.
  • Experience building and optimizing data pipelines, message queues, and highly scalable big data architectures.
  • Experience with distributed processing using cloud services.
  • A stack oriented toward econometric models (statsmodels, pyfixest) and serialization.
  • Experience with a distributed data engine such as pyspark, dask, or modin.
  • Interest in "bleeding edge" topics in causal inference: observational techniques, design-based inference, probability and statistics (with a strong emphasis on OLS and its various extensions).

Benefits

  • MacBook Air M2 or similar (with a very convenient purchase option)
  • Performance bonus
  • Monthly lunch allowance and team lunch on Fridays
  • Complementary health and dental insurance
  • Flexible hours
  • Flexibility between office and home office
  • Half a day off on your birthday
  • Funding for certifications
  • Coursera subscription with a tailored training plan
  • Bicycle parking
  • Referral program
  • Monthly team-building outing

Library: Access to a library of physical books.
Accessible: An infrastructure adequate for people with special mobility needs.
Flexible hours: Flexible schedule and freedom for attending family needs or personal errands.
Internal talks: NeuralWorks offers space for internal talks or presentations during working hours.
Life insurance: NeuralWorks pays or copays life insurance for employees.
Meals provided: NeuralWorks provides free lunch and/or other kinds of meals.
Partially remote: You can work from your home some days a week.
Bicycle parking: You can park your bicycle for free inside the premises.
Digital library: Access to digital books or subscriptions.
Computer repairs: NeuralWorks covers some computer repair expenses.
Dental insurance: NeuralWorks pays or copays dental insurance for employees.
Computer provided: NeuralWorks provides a computer for your work.
Education stipend: NeuralWorks covers some educational expenses related to the position.
Performance bonus: Extra compensation is offered upon meeting performance goals.
Informal dress code: No dress code is enforced.
Recreational areas: Space for games or sports.
Shopping discounts: NeuralWorks provides some discounts or deals in certain stores.
Vacation over legal: NeuralWorks gives you paid vacations over the legal minimum.
Beverages and snacks: NeuralWorks offers beverages and snacks for free consumption.
Vacation on birthday: Your birthday counts as an extra day of vacation.
Time for side projects: NeuralWorks allows employees to work in side-projects during work hours.
$$$ Full time
Data Engineer
  • BC Tecnología
  • Santiago (Hybrid)
Python SQL Apache Spark CI/CD
BC Tecnología is an IT consulting firm specializing in portfolio management, project development, and staff outsourcing for Infrastructure, Software Development, and Business units. Focused on clients in financial services, insurance, retail, and government, the company delivers solutions through agile methodologies and an organizational change framework centered on product development. The Data Engineer will join challenging projects aimed at optimizing data flows, governance, and scalability, supporting clients with high quality standards and pursuing continuous improvement of processes and pipelines.

This job is available on Get on Board.

Main responsibilities

  • Design and build efficient pipelines to move and transform data, ensuring performance and scalability.
  • Guarantee consistency and reliability through unit tests and data quality validations.
  • Implement CI/CD flows for development and production environments, promoting good DevOps practices.
  • Design advanced pipelines applying resilience, idempotency, and event-driven patterns (see the sketch after this list).
  • Contribute to data governance through metadata, catalogs, and lineage.
  • Collaborate with technical leads and architects to define standards, guidelines, and process improvements.
  • Align technical solutions with business requirements and delivery goals.
  • Lean on technical leads for the team's guidelines and best practices.
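
To illustrate the idempotency pattern mentioned above, here is a small sketch of a delete-then-insert load keyed by batch id; sqlite3 is used purely as a stand-in for the real warehouse, and the table and column names are invented.

    # Idempotent load sketch: re-running the same batch replaces its prior rows
    # instead of duplicating them (delete-then-insert keyed by batch_id).
    # sqlite3 stands in for the real warehouse; names are invented.
    import sqlite3

    def load_batch(con, batch_id, rows):
        with con:  # one transaction: either the whole batch lands or none of it
            con.execute("DELETE FROM fact_orders WHERE batch_id = ?", (batch_id,))
            con.executemany(
                "INSERT INTO fact_orders (batch_id, order_id, amount) VALUES (?, ?, ?)",
                [(batch_id, order_id, amount) for order_id, amount in rows],
            )

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE fact_orders (batch_id TEXT, order_id TEXT, amount REAL)")

    batch = [("o-1", 10.0), ("o-2", 25.5)]
    load_batch(con, "2024-01-01", batch)
    load_batch(con, "2024-01-01", batch)  # safe retry: still only two rows
    print(con.execute("SELECT COUNT(*) FROM fact_orders").fetchone()[0])  # -> 2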

Requirements and experience

We are looking for a Data Engineer with at least 2 years of experience designing and building data pipelines. You must have advanced command of Python, Spark, and SQL, and experience working in the AWS ecosystem (Glue, S3, Redshift, Lambda, MWAA, among others). Experience with lakehouses (Delta Lake, Iceberg, Hudi) and knowledge of CI/CD (Git) and version control are desirable. Prior experience in retail environments and in data quality and governance projects is valued, as is experience developing integrations to and from APIs and using IaC (Terraform).
Effective communication, teamwork, and proactivity are required. We also value the ability to learn, collaborate across teams, and focus on results in a dynamic environment with high-profile clients.

Desirable requirements

Prior experience in retail or regulated sectors. Knowledge of data quality and governance. Experience developing integrations to and from APIs. Knowledge of pipeline orchestration and monitoring tools. Familiarity with data security best practices and regulatory compliance. Ability to communicate technical concepts to non-technical audiences and to foster a culture of continuous improvement.

Benefits

At BC Tecnología we promote a collaborative work environment that values commitment and constant learning. Our culture is oriented toward professional growth through integration and knowledge sharing across teams.
The hybrid arrangement we offer, based in Las Condes, combines the flexibility of remote work with in-person collaboration, supporting a better balance and a more dynamic workday.
You will take part in innovative projects with high-profile clients across diverse sectors, in an environment that fosters inclusion, respect, and technical and professional development.

$$$ Full time
Senior Software Engineer Data Platform
  • Zus Health
  • United States
software embedded system ceo

Who we are


Zus is a shared health data platform designed to accelerate healthcare data interoperability by providing easy-to-use patient data via API, embedded components, and direct EHR integrations. Founded in 2021 by Jonathan Bush, co-founder and former CEO of athenahealth, Zus partners with HIEs and other data networks to aggregate patient clinical history and then translates that history into user-friendly information at the point of care. Zus's mission is to catalyze healthcare's greatest inventors by maximizing the value of patient insights - so that they can build up, not around.


What we're looking for


We’re looking for an experienced Software Engineer to join the “Costco” team at Zus, which builds services for managing our rapidly growing bulk data offerings while adhering to complex healthcare access control requirements.


The ideal candidate will be excited to take on the challenge of processing, storing and delivering the entire health records of millions of patients, adopting tools to handle growing scale, and ensuring high data quality and freshness. You are creative, innovative and love to run experiments to explore the paths to evolve and develop our platform as we scale.


As part of the core Zus platform, the Costco team has needed to rapidly innovate to stay ahead of data volumes that grow at 10x per year and a growing base of data-savvy customers using data to improve patient care. They are also contending with an evolving regulatory landscape in data privacy and security.


On the Costco team, you will work with microservices in Go, streaming data pipelines in AWS, and state-of-the-art data technologies including Apache Iceberg, Apache Spark, Snowflake, and dbt. Expect to learn a lot and be put on mission-critical projects with direct customer impact.



As part of our team, you will
  • Build and operate data services driving our applications and APIs
  • Collaborate with team members and across Engineering to iteratively prototype and develop new functionality
  • Partner with product managers and other Zusers


You're a good fit because you
  • Learn fast and enjoy open-ended technical challenges
  • Have experience with operationally stable, scalable, and cost efficient data services
  • Enjoy owning your work and seeing it deploy safely in production
  • Are experienced using cloud data warehouses such as Snowflake, BigQuery, Redshift, or Databricks
  • Have experience with at least one of the following: deployment technologies (GitHub Actions, CircleCI, etc.), cloud providers (AWS, Azure, GCP), and Infrastructure as Code (Terraform, CloudFormation, etc.)
  • Are excited to ~ finally! ~ enable a true digital revolution in healthcare
  • Thrive amid the changing landscape of a growing and evolving startup
  • Enjoy collaboration and solving unique problems


It would be awesome if you were
  • Experienced at working with petabyte-scale data
  • Experienced with Apache Iceberg, Apache Spark, and other large-scale data technologies
  • Experienced with AuthN/AuthZ and fine-grained access control
  • Familiar with multiple languages including either Go or Python
  • Experienced in working with healthcare data and APIs
  • Familiar with the FHIR and/or TEFCA standards


$140,000 - $180,000 a year
We are a remote first company that believes that in-person interactions are beneficial. You should be comfortable traveling about once a quarter to collaborate with teammates face to face.

We will offer you…


• Competitive compensation that reflects the value you bring to the team: a combination of cash and equity

• Robust benefits that include health insurance, wellness benefits, 401k with a match, unlimited PTO

• Opportunity to work alongside a passionate team that is determined to help change the world (and have fun doing it)


Please Note: Research shows that candidates from underrepresented backgrounds often don’t apply unless they meet 100% of the job criteria. While we have worked to consolidate the minimum qualifications for each role, we aren’t looking for someone who checks each box on a page; we’re looking for active learners and people who care about disrupting the current healthcare system with their unique experiences.


We do not conduct interviews by text nor will we send you a job offer unless you've interviewed with multiple people, including the Director of People & Talent, over video interviews. Job scams do exist so please be careful with your personal information.




Gross salary $3500 - 3700 Full time
Python SQL Microstrategy ETL

Coderslab.io is looking to hire a Big Data & Reporting Lead to lead the organization’s data architecture and analytics strategy.

This role will be responsible for designing, governing, and optimizing the enterprise data architecture, ensuring proper structuring, integration, automation, and consumption of data for reporting, advanced analytics, and decision-making.

The position has a strong focus on data architecture, analytical modeling for MicroStrategy, process automation using n8n, and optimization of ETL/ELT data pipelines.

About the client and the project: the company delivers innovative technology solutions and provides opportunities for continuous learning under the guidance of experienced professionals and cutting-edge technologies. The goal is to deliver value in key business processes and improve operational efficiency through SAP.

This job is original from Get on Board.

Role responsibilities

Data Architecture
  • Design and govern the data architecture for Big Data and BI platforms.
  • Define analytical data models for reporting and analytics.
  • Design data lakes, data warehouses, and data marts aligned with business needs.
  • Establish data governance, quality, and lineage standards.
  • Ensure platform scalability, availability, and reliability.

Modeling and Reporting in MicroStrategy
  • Design and optimize the semantic layer and metadata in MicroStrategy.
  • Define analytical models and Star Schema structures.
  • Lead the development of dossiers, operational reports, and analytical cubes.
  • Optimize queries, performance, and execution times.
  • Define caching, aggregation, and pre-calculation strategies.

Automation of Analytical Processes (n8n)
  • Design data and reporting automation workflows using n8n.
  • Integrate sources such as APIs, databases, cloud services, and BI tools.
  • Automate data extraction, report generation, dashboard distribution, and alerts.
  • Design orchestration pipelines for analytical processes.

Data Processing Optimization
  • Design and optimize scalable ETL/ELT processes.
  • Optimize queries, data pipelines, and incremental loads (see the sketch after this section).
  • Reduce latency and resource consumption in reporting.
  • Implement efficient data ingestion strategies.

Technical Leadership and Management
  • Lead Data Engineering, BI, and Analytics teams.
  • Track data architecture and reporting projects.
  • Define the data platform evolution roadmap.
  • Establish KPIs for reporting performance, data quality, and analytics adoption.
  • Align business needs with the data architecture.
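
As an illustrative sketch of the incremental-load idea mentioned above, the snippet below tracks a high-water mark and only asks the source for rows newer than it; the state file, table, and column names are assumptions, not the client's actual design.

    # Incremental-load sketch: keep a high-water mark and only request rows newer
    # than it, then advance the mark. The state file and query are illustrative.
    import json
    from datetime import datetime, timezone
    from pathlib import Path

    STATE_FILE = Path("watermark.json")

    def read_watermark() -> str:
        if STATE_FILE.exists():
            return json.loads(STATE_FILE.read_text())["last_loaded_at"]
        return "1970-01-01T00:00:00+00:00"  # first run: load everything

    def write_watermark(value: str) -> None:
        STATE_FILE.write_text(json.dumps({"last_loaded_at": value}))

    def incremental_query(watermark: str) -> str:
        # In a real pipeline this runs against the source database.
        return f"SELECT * FROM orders WHERE updated_at > '{watermark}' ORDER BY updated_at"

    if __name__ == "__main__":
        wm = read_watermark()
        print("Would run:", incremental_query(wm))
        write_watermark(datetime.now(timezone.utc).isoformat())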

Role requirements

  • Experience leading data architecture or analytics platforms.
  • Experience in analytical data modeling (star schema, data modeling).
  • Experience working with Big Data or data warehousing platforms.
  • Experience with MicroStrategy for modeling and reporting.
  • Experience designing ETL/ELT processes and data pipelines.
  • Advanced SQL knowledge.
  • Experience with Python for data processing or automation.
  • Experience designing scalable data architectures.

Technologies:

  • Big Data & Data Platforms
    • Spark
    • Hadoop
    • Databricks
    • Snowflake / BigQuery / Redshift
    • Kafka
  • Business Intelligence
    • MicroStrategy
    • Power BI (nice to have)
    • Tableau (nice to have)
  • Automation & Orchestration
    • n8n
    • Airflow
    • REST APIs
    • Webhooks
  • Databases
    • SQL Server
    • PostgreSQL
    • Oracle
    • NoSQL
  • Data Engineering
    • Python
    • Advanced SQL
    • ETL / ELT pipelines

Nice to have

  • Experience with workflow automation using n8n.
  • Experience with orchestration tools such as Airflow.
  • Experience with Power BI or Tableau.
  • Knowledge of event-driven or streaming architectures (Kafka).
  • Experience in data governance, data quality, and data cataloging.

Conditions

Engagement type: services contract (contractor)

$$$ Full time
Data Engineer – Project (Hybrid)
  • BC Tecnología
  • Santiago (Hybrid)
Python PostgreSQL SQL ETL
At BC Tecnología we design and deliver IT solutions for clients in sectors such as financial services, insurance, retail, and government. Our Data & Analytics team focuses on keeping corporate data flows running through robust pipelines, scalable integrations, and proactive monitoring. You will join a project focused on high-volume data, working with modern technologies in an agile environment geared toward continuous delivery and product improvement.

Opportunity published on Get on Board.

Responsibilities

  • Design and maintain ETL/ELT pipelines for the organization's critical data.
  • Orchestrate and monitor data flows with Apache Airflow in production environments (see the sketch after this list).
  • Optimize SQL queries on PostgreSQL and/or Amazon Redshift for performance and cost.
  • Manage repositories and CI/CD pipelines in Azure DevOps.
  • Resolve incidents and ensure data quality, availability, and traceability.
  • Collaborate with data science, engineering, and business teams to understand requirements and deliver scalable solutions.
  • Participate in defining data governance standards and data engineering best practices.
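
A minimal Airflow DAG sketch of the orchestration described above, assuming Airflow 2.4 or later for the schedule parameter; the DAG id and task bodies are placeholders rather than the project's real pipeline.

    # Minimal Airflow DAG: three Python tasks chained as extract >> transform >> load,
    # scheduled daily. Assumes Airflow 2.4+ (the `schedule` parameter); dag_id and
    # task bodies are placeholders.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract() -> None:
        print("extract")

    def transform() -> None:
        print("transform")

    def load() -> None:
        print("load")

    with DAG(
        dag_id="example_daily_pipeline",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        t_extract = PythonOperator(task_id="extract", python_callable=extract)
        t_transform = PythonOperator(task_id="transform", python_callable=transform)
        t_load = PythonOperator(task_id="load", python_callable=load)

        t_extract >> t_transform >> t_load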

Description

  • We are looking for a Data Engineer with experience in pipeline development and production environments to ensure the smooth, reliable flow of corporate data.
  • Technical requirements: advanced Python and SQL; experience with PostgreSQL and/or Amazon Redshift; Apache Airflow; Azure DevOps; handling large data volumes.
  • Competencies: analytical thinking, proactivity, results orientation, teamwork, and effective communication with stakeholders.
  • Previous projects in financial environments and experience with data monitoring and observability tools are valued.

Benefits

At BC Tecnología we promote a collaborative work environment that values commitment and constant learning. Our culture is oriented toward professional growth through integration and knowledge sharing across teams.

The hybrid arrangement we offer, based in Santiago Centro, combines the flexibility of remote work with in-person collaboration, supporting a better balance and a more dynamic workday.

You will take part in innovative projects with high-profile clients across diverse sectors, in an environment that fosters inclusion, respect, and technical and professional development.

Gross salary $8000 - 10000 Full time
Figma Jira Notion A/B Testing

About EVEN

EVEN is the leading direct-to-fan platform for artists and labels. We help artists sell music, merchandise, and exclusive content directly to their superfans, with every sale counting toward official chart reporting through Luminate.

Our platform powers pre-orders, digital storefronts, and direct-to-consumer commerce for artists including J. Cole, French Montana, Brent Faiyaz, LaRussell, and Mick Jenkins. We are partnered with Universal Music Group, UnitedMasters, Too Lost, Stem, Symphonic, Secretly Distribution, Virgin Music Group, and others across 3,000+ labels and distributors in over 110 countries.

We are a remote-first team of 35 people across the US and Latin America. Our engineering team of 16 is primarily based in LATAM and operates in three squads (Artist, Fan, Core), shipping across web, mobile, and API. You will be working alongside engineers you can communicate with natively.

This job is published by getonbrd.com.

Why This Role Exists

Product direction at EVEN is currently shared between our CEO (vision, strategy, partner commitments) and our CTO (day-to-day product and engineering decisions). Our Lead Product Designer shapes UX and design. There is no dedicated product manager.

We are now 35 people with three engineering squads, partnerships with the leading music companies, and a product surface that spans artist dashboards, fan storefronts, mobile apps, e-commerce, streaming, chart reporting, and API integrations.

We need someone whose full-time job is to own the product roadmap, run shaping sessions, write clear briefs, coordinate cross-team priorities, and connect what our partners and artists need with what our engineering team builds.

What you will do:

  • Own the product roadmap end to end. Translate company strategy into quarterly priorities, and quarterly priorities into engineering-ready specs.
  • Run shaping sessions with the CTO and engineering leads. Turn raw ideas into scoped briefs with clear acceptance criteria before they hit a sprint.
  • Manage the product process: Ideas Pool to PRD Library to Roadmap (we use Notion, Linear, and Figma).
  • Work directly with our 3 product designers to define user flows, review designs, and ship features that match the brief.
  • Coordinate across squads (Artist, Fan, Core) to manage dependencies, unblock engineers, and keep the roadmap on track.
  • Partner with BD and Artist Relations to understand what artists, labels, and distributors need and translate that into product requirements.
  • Define product metrics, track them in PostHog, and use data to prioritize what ships next.
  • Report to the CEO. Work side by side with the CTO.

Success at 30/60/90 days:

  • 30 days: You have audited the current roadmap, met every team lead, and identified the top 3 product gaps.
  • 60 days: You own the shaping process. Every feature entering a sprint has a brief you wrote or approved.
  • 90 days: The CEO is no longer involved in day-to-day product decisions. The roadmap is yours.

Qualifications and requirements

  • 5+ years in product management at a B2C or marketplace company, with at least 2 years as a lead or senior PM.
  • You have shipped and scaled digital commerce, content, or creator-economy products. Experience with platforms that have both a supply side (artists, creators) and a demand side (fans, consumers) is strongly preferred.
  • You write clear PRDs and briefs. You can take a vague idea and turn it into a scoped spec with acceptance criteria that engineers can build from.
  • You have run or closely participated in product shaping sessions with engineering and design teams.
  • You have managed or closely collaborated with product designers. You can give useful design feedback and know the difference between UX and UI polish.
  • You are fluent in English and Spanish, written and verbal. Our engineering and design teams work primarily in Spanish. Our commercial team works in English. You need both.
  • You are comfortable working across US and LATAM time zones with a fully distributed team.
  • You have used tools like Linear, Notion, Figma, and PostHog (or equivalents like Jira, Confluence, Amplitude, Mixpanel).
  • You understand analytics-driven product development. You can define metrics, set up tracking, and use data to make prioritization calls.
  • You have worked at a startup (Series A to Series C) where process was still being built and you had to build it yourself.

Desirable skills

  • Experience with direct-to-consumer e-commerce platforms or digital storefronts.
  • Background in the music industry, artist services, or label partnerships.
  • Familiarity with Luminate/SoundScan chart reporting or music distribution workflows.
  • Experience working with React, Next.js, or modern web/mobile stacks. You will not code, but technical fluency helps you scope better and earn engineering trust faster.
  • Prior experience at a Series A or Series B startup where you built the product function from scratch (first PM hire).
  • Experience managing a product team of 3+ people (designers and/or PMs).
  • Experience with mobile product development (iOS/Android).

Conditions

  • Fully remote. Work from anywhere in the Americas.
  • Equity Package
  • Core overlap hours: 10am to 3pm EST (New York time). The rest of your day is flexible.
  • Paid in USD via Deel.
  • Health stipend included in monthly compensation.
  • Flexible vacation and PTO policy.
  • Paid sick days.
  • Equipment provided.
  • Direct access to the CEO and CTO. No layers between you and the people making decisions.
  • You will be the first dedicated product hire. You are building the function, not joining one.

Relocation offered: If you are moving in from another country, EVEN helps you with your relocation.
Fully remote: You can work from anywhere in the world.
Pet-friendly: Pets are welcome at the premises.
Flexible hours: Flexible schedule and freedom for attending family needs or personal errands.
Health coverage: EVEN pays or copays health insurance for employees.
Computer provided: EVEN provides a computer for your work.
Informal dress code: No dress code is enforced.
Vacation over legal: EVEN gives you paid vacations over the legal minimum.
$$$ Full time
Cloud Data Engineer
  • WiTi
  • Santiago (Hybrid)
Python SQL ETL CI/CD
WiTi connects technology talent with high-impact projects across Latin America. Our team focuses on systems integration, custom software, and innovative development for mobile devices, with an emphasis on solving complex problems through innovative solutions.
This role is part of a team responsible for modernizing a legacy analytics ecosystem into a cloud architecture on AWS, with a focus on standardization, performance, and scalability. The project involves migrating and optimizing logic from pre-existing databases to Amazon Redshift, contributing to the automation of the process and guaranteeing data quality, consistency, and performance.

Apply through Get on Board.

Key Responsibilities

  • Analyze and understand existing analytical processes (in SQL or other legacy environments) in order to restructure them on Amazon Redshift.
  • Convert and optimize SQL logic to Redshift-compatible standards, applying good modeling and performance practices.
  • Design and document repeatable approaches for migrating queries and data structures (rule catalogs, transformation patterns).
  • Collaborate on migration automation tasks (Python scripts, SQL templates, automatic validations, CI/CD pipelines).
  • Maintain and improve ETL/ELT processes on AWS, relying on services such as Glue, Lambda, Step Functions, and S3.
  • Validate conversion results through reconciliation controls and data quality tests (see the sketch after this list).
  • Document technical decisions, conversion rules, and exceptions to ensure the process is traceable and maintainable.
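
To illustrate the reconciliation controls mentioned above, here is a small, self-contained Python sketch comparing row counts and a numeric checksum between a legacy extract and its migrated counterpart; in the real project both sides would be queried from the source system and Redshift.

    # Reconciliation sketch: compare row counts and a numeric checksum between a
    # legacy extract and its migrated counterpart. Both sides are plain DataFrames
    # here so the check stays self-contained.
    import pandas as pd

    def reconcile(source: pd.DataFrame, target: pd.DataFrame, amount_col: str) -> dict:
        return {
            "row_count_match": len(source) == len(target),
            "sum_match": abs(source[amount_col].sum() - target[amount_col].sum()) < 1e-6,
            "source_rows": len(source),
            "target_rows": len(target),
        }

    legacy = pd.DataFrame({"id": [1, 2, 3], "amount": [10.0, 20.0, 30.5]})
    migrated = pd.DataFrame({"id": [1, 2, 3], "amount": [10.0, 20.0, 30.5]})
    print(reconcile(legacy, migrated, "amount"))  # both checks should pass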

Mandatory Requirements

  • 3+ years of experience as a Data Engineer or in an equivalent role.
  • Advanced command of standard SQL (complex joins, window functions, CTEs, tuning, reading execution plans).
  • Hands-on experience with Amazon Redshift (partitioning, distribution, query and storage optimization).
  • Solid knowledge of ETL/ELT processes in cloud environments, ideally AWS.
  • Experience on projects focused on data platform migration or modernization.
  • Knowledge of Python for scripting and automating validations.
  • Intermediate or higher level of technical English.

Nice to Have

  • Experience with DataOps and pipeline management (Airflow, Step Functions, or similar).
  • Familiarity with Infrastructure as Code tools (Terraform, CloudFormation).
  • Experience with data governance, naming conventions, and automated quality validations.
  • Ability to document and standardize processes in corporate contexts.

Benefits

At WiTi we foster a culture of continuous learning, collaboration, and professional growth. Benefits may include:
  • A career plan and professional development opportunities.
  • Access to certifications and ongoing training.
  • Language courses and access to a digital library for your personal and professional development.

Digital library Access to digital books or subscriptions.
Computer provided WiTi provides a computer for your work.
Personal coaching WiTi offers counseling or personal coaching to employees.
Informal dress code No dress code is enforced.
$75000 - $125000 Full time
Data Analyst
  • World Golf Tour (WGT)
  • San Francisco
analyst security python game

Role

World Golf Tour is seeking a Data Analyst to join our Product team. In this critical role, you will be the custodian of our data, organizing insights, and analyzing telemetry to support strategic business decisions. You will focus on developing and maintaining dashboards and analysis reports, collaborating across the studio and closely with the Product team to provide actionable insights that help drive the business. This role emphasizes strong data stewardship, visualization and statistical analysis.

Responsibilities

· Clean, validate, and prepare datasets for analysis, including resolving issues regarding missing, inconsistent, or novel data

· Perform exploratory data analysis to identify trends, patterns, and anomalies that inform business decisions

· Develop and maintain dashboards, reports, and visualizations using tools such as Amplitude, Power BI, or Excel

· Translate analytical findings into clear, actionable insights for both technical and non-technical stakeholders

· Partner with business teams (e.g., marketing, product, finance) to understand data needs and deliver relevant analyses

· Support ad hoc analysis and deep dives to answer specific business questions or identify opportunities

· Ensure compliance with data governance, privacy, and security standards

Experience and Skills

· Bachelor’s degree in Data Analytics, Statistics, Mathematics, Computer Science, Economics, or a related quantitative field

· 2–4 years of experience in a data analyst or similar role, preferably in game or software development

· Strong proficiency in SQL for data querying and manipulation

· Experience with data analysis tools/languages such as Python or R

· Advanced proficiency in Excel (e.g., pivot tables, formulas, data modeling)

· Experience with data visualization tools (e.g., Tableau, Power BI)

· Strong proficiency in statistical methodologies and data analysis

· Strong problem-solving and critical thinking skills

· Excellent communication skills, with the ability to present complex data in a clear and concise manner

Preferred Qualifications

· Experience with data warehousing concepts and tools (e.g., Snowflake, Redshift, BigQuery)

· Familiarity with ETL processes and data pipeline development

· Knowledge of basic machine learning or predictive analytics techniques

· Experience working in game development

· Understanding of data governance and privacy regulations

· Experience in a fast-paced, cross-functional environment

About Us

World Golf Tour is a leader in online golf, delivering the most realistic and immersive virtual golf experience to players around the globe. We are best known for our core product WGT Golf, a free-to-play golf game that has set the standard for virtual golf since its launch in 2008. Renowned for its photorealistic recreations of iconic courses such as Pebble Beach, The Old Course at St Andrews, and Quail Hollow Club, the game combines authentic course imagery with precise swing mechanics and multiplayer competition to offer an experience trusted by millions.



Gross salary $400 - 600 Full time
Data Pipeline Engineer
  • Tritone Analytics, Inc
Python SQL Data Transformation ETL

About Tritone Analytics: Tritone Analytics is a music-technology startup building a forensic royalty auditing platform for the music industry. We help artists, managers, and rights-holders identify unpaid or misreported royalties by combining deterministic data systems with modern AI workflows.

We work with messy, real-world data — distributor reports, royalty statements, contracts — and turn it into structured, queryable systems that power financial analysis and AI-assisted auditing.

Project scope: You will contribute to the core data infrastructure that underpins our platform, focusing on data ingestion, transformation, validation, and the preparation of data for AI workflows. This role sits at the intersection of data engineering, analytical systems, and AI pipelines, ensuring reliable, scalable data processing from messy sources to structured datasets.

Apply without intermediaries from Get on Board.

What You’ll Work On

  • Build and maintain pipelines that transform messy CSVs, metadata exports, and contracts into structured datasets.
  • Design and enforce canonical schemas across inconsistent data sources to enable reliable analytics (see the sketch after this list).
  • Write SQL to validate outputs, reconcile datasets, and support financial analysis.
  • Debug and improve data quality across ingestion and transformation stages.
  • Support document ingestion workflows (chunking, preprocessing, metadata tagging).
  • Help prepare structured inputs for LLM-based workflows (RAG, extraction, classification).
  • Improve reliability of pipelines (error handling, logging, testing).
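
The schema-enforcement and SQL-validation items above lend themselves to a small illustration. The following is a minimal sketch, assuming hypothetical column names, a made-up canonical schema, and toy sample data (none of it Tritone's actual formats), of how a messy distributor CSV could be normalized and sanity-checked before it feeds financial analysis:

```python
# Illustrative only: column names, the canonical schema, and the sample data are
# hypothetical stand-ins, not Tritone's actual formats.
import io
import pandas as pd

# A "messy" distributor export: inconsistent headers, stringly-typed amounts.
raw_csv = io.StringIO(
    "Track Title ,ISRC,units sold,Net Revenue (USD)\n"
    "Song A,US-ABC-24-00001,1200,$143.88\n"
    "Song A,US-ABC-24-00001,1200,$143.88\n"   # duplicate row
    "Song B,US-ABC-24-00002,  350,41.30\n"
)

# Hypothetical canonical schema: target column name -> dtype.
CANONICAL = {"track_title": "string", "isrc": "string",
             "units": "int64", "net_revenue_usd": "float64"}
RENAMES = {"Track Title": "track_title", "ISRC": "isrc",
           "units sold": "units", "Net Revenue (USD)": "net_revenue_usd"}

df = pd.read_csv(raw_csv, skipinitialspace=True)
df.columns = df.columns.str.strip()            # normalize header whitespace
df = df.rename(columns=RENAMES)
df["net_revenue_usd"] = (df["net_revenue_usd"].astype(str)
                         .str.replace(r"[$,\s]", "", regex=True))
df = df.astype(CANONICAL)

# Basic validation checks before the data reaches analytics or an LLM workflow.
assert set(df.columns) == set(CANONICAL), "schema drift detected"
dupes = df[df.duplicated(subset=["isrc", "units", "net_revenue_usd"], keep=False)]
print(f"{len(dupes)} duplicate rows flagged")
print("total reported revenue:", round(df.drop_duplicates()["net_revenue_usd"].sum(), 2))
```

In practice the same checks would typically be expressed as SQL against the loaded tables; the point is that schema and duplicate validation happens as an explicit, testable step in the pipeline.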

What You’ll Need

Core Requirements (Must Have): Strong Python for data processing and scripting with real datasets; strong SQL skills (joins, aggregations, validation queries, debugging data issues); proven experience working with messy or inconsistent data; understanding of ETL pipelines and data transformation workflows; ability to debug data issues and explain root causes.

We value curiosity, collaboration, and a bias toward shipping reliable data products. Candidates who enjoy digging into messy datasets, communicating data issues clearly, and partnering with data scientists and engineers to operationalize AI workflows will excel. Prior experience in music rights or financial data domains is a plus.

Desirable but Not Required

Nice to Have: Experience with DuckDB, Polars, Pandas, or PyArrow; familiarity with Parquet or columnar data formats; exposure to vector databases or RAG systems; experience handling large CSV datasets or financial data; basic understanding of LLM workflows.

Benefits

Benefits to be discussed at time of conversion to a full-time role.

We offer a collaborative, founder-led culture with an emphasis on curiosity, continuous learning, and shipping impactful data products. Competitive compensation, flexible work hours, and opportunities for professional growth in a rapidly evolving music-tech space. Our team is distributed; we value autonomy and ownership over your projects. We support conference attendance, training, and peer knowledge sharing. We look forward to discussing how Tritone can support your career trajectory.

$$$ Full time
Customer Program Manager
  • Nexxa.AI
  • Sunnyvale
manager jira training consulting

Customer Program Manager

Cross-Site Project Coordination | Schedule & Risk Management | High-Visibility Communication | SF Bay Area, CA

ABOUT NEXXA

Nexxa.ai is building artificial super intelligence for heavy industries — enabling machines, systems and operations to think, decide and act autonomously across manufacturing, large-scale infrastructure, logistics and legacy environments. Our mission is to translate deep technical breakthroughs into operational reality, solving some of the hardest systems-level problems in industry.

THE ROLE

Reporting to the CPO

We're hiring a Customer Program Manager to be the operational backbone of our customer delivery engine. You'll manage project schedules, status visibility, and cross-site coordination across Applied AI and core engineering teams operating across global sites — ensuring every engagement ships on time with full visibility. You'll work alongside a Delivery Manager, who owns the customer relationship and outcome quality, and a remote project manager on the core engineering side. Your job is to make sure the delivery machine runs — schedules are tracked, risks are flagged early, handoffs are clean, and every stakeholder knows exactly where things stand at any given moment.

WHAT YOU'LL DO

  • Manage end-to-end project schedules for customer engagements across Applied AI (FDE team) and core engineering teams spanning multiple geographies and time zones

  • Maintain real-time project status visibility — Confluence boards, Jira tracking, weekly status reports — so leadership, engineering, and the Delivery Manager always have a single source of truth

  • Run internal project review cadences: bi-weekly planning reviews, customer submissions reviews, and dev question sessions across all active engagements

  • Proactively identify risks, dependencies, and blockers before they become surprises — escalate to the Delivery Manager with proposed mitigations, not after deadlines slip

  • Own cross-site coordination across multiple sites — bridging time zones, aligning handoffs, and ensuring nothing falls between teams

  • Drive daily and weekly status updates across all active projects — post EOD updates in team channels with key changes, blockers, and next actions tagged to DRIs

  • Prepare and deliver weekly internal status reports to the CPO every Friday — consolidating project health, risk register, and upcoming milestones across all accounts

  • Track and maintain delivery governance artifacts: project plans, feedback/release trackers, QA checklists, go-live readiness assessments

  • Coordinate resource allocation and capacity planning across FDEs and engineering — flag overload risks and propose reallocation before quality suffers

  • Ensure Jira hygiene: correct assignees, updated due dates, closed tickets, and clean backlogs — so automated reporting and AI tools produce accurate outputs

  • Support the Delivery Manager in preparing customer-facing materials: milestone review decks, progress summaries, and QBR data

HOW THIS ROLE WORKS WITH THE DELIVERY MANAGER

The CPM and Delivery Manager share the delivery mission but own different dimensions:

  • You own: project schedules, daily/weekly status tracking, Jira hygiene, cross-site coordination, Confluence boards, internal reporting, resource capacity flagging, and governance artifact maintenance

  • Delivery Manager owns: customer relationship, outcome definition, delivery quality sign-off, CSAT/NPS, escalation resolution, post-delivery retrospectives, and account expansion insights

  • Together: the DM ensures we deliver the right thing at the right quality; you ensure we deliver it on schedule with full visibility and zero surprises

WHAT WE'RE LOOKING FOR

  • 5+ years in technical program management, project management, or delivery management — with at least 2 years managing cross-functional, cross-site engineering teams

  • Proven experience managing 3–5 concurrent, external-facing projects without dropping balls — you have a system, not just hustle

  • Strong command of project management tooling: Jira, Confluence, Rocketlane (or similar), and spreadsheet-based reporting. You're the person who keeps these tools clean and current.

  • Experience coordinating across time zones and distributed teams — you've worked with India/APAC engineering teams and know how to structure async handoffs

  • Excellent written communication — your status updates are crisp, your escalations are clear, and your meeting notes are actionable. You don't write paragraphs; you write bullet points with owners and dates.

  • Technical fluency — you can read architecture docs, understand data pipeline concepts, and have productive conversations with engineers about scope, effort, and trade-offs. You don't need to code, but you need to understand the work.

  • Anticipatory mindset — you see risks coming before they materialize. You flag a Milestone 1 delivery risk on Monday, not on Thursday when it's due.

  • Experience in enterprise SaaS, consulting delivery, or systems integration. Heavy industry experience (manufacturing, supply chain, energy) is a strong plus.

KEY SUCCESS INDICATORS

  • 100% of active projects have up-to-date Confluence boards with milestones, DRIs, and dates — refreshed daily, not weekly

  • Zero surprise delays — risks are flagged at least 1 week before they impact a deadline, with proposed mitigations

  • Weekly status reports delivered to Shashank (CPO) every Friday for Monday leadership calls — no exceptions, no late submissions

  • Customer communication cadence running on schedule: weekly updates sent, bi-weekly check-ins held, milestone reviews documented

  • Cross-site engineering alignment verified at every handoff — India team has clear specs, context, and deadlines before they start work

  • Jira data quality at 100% — accurate assignees, no stale tickets, closed items marked done. Automated reports pull clean data.

  • Resource conflicts identified and escalated before they impact delivery — capacity planning is proactive, not reactive

NICE TO HAVE

  • Experience with Rocketlane, Asana, or Monday.com for customer-facing delivery management

  • Prior experience at a fast-growing startup (seed to Series B) where you built the PM process from scratch

  • Experience working with AI/ML engineering teams — understanding model training timelines, data pipeline dependencies, and iterative delivery cycles

  • Familiarity with enterprise procurement and vendor management processes (purchasing control towers, SOW reviews, NDA workflows)

WHY NEXXA

  • Architect the intelligence layer for the world's largest industrial companies — your designs will run at top Fortune 100 companies

  • Work directly with the CPO and CTO on every engagement — ZERO layers of bureaucracy

  • Backed by top Silicon Valley VCs, with access to their portfolio network and enterprise resources

  • Early-stage equity with significant upside



$$$ Full time
Engineering Manager
  • Hinge Health
  • Bengaluru
manager architect technical support

The Opportunity

Hinge Health is hiring an Engineering Manager for our Growth Data Platform (GDP) pod in Bangalore. This is a pivot-point role for a leader who is ready to move beyond traditional software management and lead a team into the era of AI-Native Engineering and ML-Driven Growth.

The GDP pod is the engine room of Hinge Health's growth strategy. You own the data pipelines, event streams, and the emerging "Intelligence Layer" that powers every member interaction—from the first ad they see to the "Daily Streak" notification that keeps them pain-free.

In 2026, your mission is to transform GDP from a data mover to a decision engine. You will partner with Data Science to operationalize high-value ML models (like our Direct Mail Propensity Model and Contextual Bandits) that autonomously decide the channel, content, and timing of our marketing. Simultaneously, you will pioneer our "Harness Engineering" initiative, transforming your pod's workflow from manual coding to managing autonomous AI agents that build, test, and verify our data infrastructure.

You will lead a high-performing team in Bangalore, serving as the strategic bridge between SF Product Strategy and technical execution.


What You’ll Accomplish

  • Build the "Intelligence Layer": Move beyond simple data piping. Architect the real-time decisioning layer that ingests ML signals (e.g., Churn Risk, Propensity to Convert) and routes them instantly to execution platforms like Iterable.

  • Operationalize Growth ML Models: Partner with Data Science to take predictive models out of the lab and into production. You will own "Phase 3" of the model lifecycle: hardening, serving, and monitoring models that control millions of dollars in marketing spend.

  • Lead the Transition to Harness Engineering: Drive the adoption of AI-native workflows (using tools like Cursor and Claude Code). Shift the team’s focus from "typing code" to building the test harnesses, specs, and safety rails that allow agents to autonomously maintain our pipelines.

  • Guarantee Data Trust ("Glass Box" Observability): Champion a culture of radical observability. Implement automated "data sentinels" and contract tests that catch schema violations and freshness issues before they impact our marketing campaigns.
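
A minimal sketch of the kind of contract and freshness check the "data sentinel" item above describes. The field names, event shape, and 15-minute freshness budget are assumptions for illustration, not Hinge Health's actual pipeline code:

```python
# Hypothetical "data sentinel" style contract check: field names, thresholds,
# and the event shape are illustrative, not Hinge Health's actual schema.
from datetime import datetime, timedelta, timezone

EXPECTED_SCHEMA = {"member_id": str, "event_type": str,
                   "churn_risk": float, "emitted_at": str}
MAX_STALENESS = timedelta(minutes=15)   # freshness budget before we alert

def check_event(event: dict) -> list[str]:
    """Return a list of contract violations for a single ML-signal event."""
    problems = []
    for field, expected_type in EXPECTED_SCHEMA.items():
        if field not in event:
            problems.append(f"missing field: {field}")
        elif not isinstance(event[field], expected_type):
            problems.append(f"{field}: expected {expected_type.__name__}, "
                            f"got {type(event[field]).__name__}")
    if isinstance(event.get("emitted_at"), str):
        age = datetime.now(timezone.utc) - datetime.fromisoformat(event["emitted_at"])
        if age > MAX_STALENESS:
            problems.append(f"stale event: {age} old")
    return problems

# Example: a signal would only be routed to a tool like Iterable if it passes.
sample = {"member_id": "m-123", "event_type": "churn_risk_scored",
          "churn_risk": 0.82, "emitted_at": "2024-01-01T00:00:00+00:00"}
violations = check_event(sample)
print("OK" if not violations else violations)
```

The same idea scales up to schema-registry contract tests and automated freshness monitors that gate downstream marketing sends.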

Basic Qualifications

  • 2+ years of experience managing engineering teams. You are a "player-coach" who can build a "One Team" culture, bridging the gap between SF and Bangalore with high-agency leadership.

  • 3+ years of experience with data engineering technologies including experience with distributed data processing frameworks (e.g., PySpark, Databricks) and SQL.

  • Experience with production data pipelines and understanding of data lifecycle management, including pipeline orchestration, monitoring, and operational excellence practices.

Preferred Qualifications

  • ML Ops & Model Serving Experience: You understand the lifecycle of data and models. You have experience with Kafka and event-driven architectures, and you know what it takes to serve an ML model in production (latency, feature stores, drift monitoring).

  • AI-Forward Leadership: You are excited, not intimidated, by the shift to AI-assisted engineering. You are eager to experiment with new workflows where engineers act as architects and auditors of AI-generated code.

  • Architectural Rigor: You can simplify complex systems. You have a track record of converging "sprawling" pipeline patterns into robust standards (e.g., moving ad-hoc scripts into a unified Event-Driven Architecture).

  • Operational Excellence: You value SLOs, runbooks, and incident management. You believe that "production reliability" is a feature, especially when dealing with data that drives real-time member health decisions.

  • Experience with Marketing Tech (Iterable, Braze) or Customer Data Platforms (Segment, Hightouch).

  • Experience implementing Contextual Bandits or similar experimentation frameworks.

  • Background in Healthcare/HIPAA compliant environments.

About Hinge Health

At Hinge Health, we’re using technology to scale and automate the delivery of healthcare – starting with musculoskeletal (MSK) conditions, which affect over 1.7 billion people worldwide. With an AI-powered human-centered care model, Hinge Health leverages cutting-edge technology to improve outcomes, experiences and costs to help people move beyond their pain. The platform addresses a broad spectrum of MSK care – from acute injury, to chronic pain, to post-surgical rehabilitation – through personalized, evidence-based care.

As the preferred partner to 50+ health plans, PBMs and other ecosystem partners, Hinge Health is available to over 20 million people across more than 2,550 employers. The company is headquartered in San Francisco with additional offices in Montreal and Bangalore. Learn more at http://www.hingehealth.com.

Hinge Health Hybrid Model

We believe that remote work and in-person work have their own advantages and disadvantages, and we want to be able to leverage the best of both worlds. Employees in hybrid roles are required to be in the office 3 days/week.

This is a Bengaluru-based role that involves regular interaction and collaboration with Hinge Health colleagues in San Francisco, CA. Time zones: San Francisco is in the Pacific Time Zone, which is 12 hours and 30 minutes behind India Standard Time during daylight saving time (13 hours and 30 minutes otherwise) – for example, 8am in San Francisco is 8:30pm in Bengaluru. Standard working hours in San Francisco are between 8am and 6pm. For this role, applicants should be open to meetings in the late evening, India Standard Time.

What You'll Love About Us

  • Inclusive healthcare and benefits: In addition to comprehensive medical, dental, and vision coverage, we provide employees and their family members with Group Medical Coverage (GMC), Group Term Life Insurance (GTL), and Group Personal Accident Insurance (GPA).

  • We also offer a lifestyle stipend to support your overall well-being, along with learning and development opportunities to help you grow both personally and professionally.

  • Grow with us through discounted company stock via our ESPP, with easy payroll deductions.

Culture & Engagement

Hinge Health is an equal opportunity employer and prohibits discrimination and harassment of any kind. We make employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, veteran status, disability status, pregnancy, or any other basis protected by federal, state or local law.

By submitting your application you are acknowledging we are using your personal data as outlined in the personnel and candidate privacy policy.



Beware of Phishing Attempts: We've noticed an increase in phishing where fraudsters impersonate employees and send fake job offers to steal sensitive information. We'll never ask for financial details during the hiring process and only use "@hingehealth.com" emails. If you receive a suspicious offer, stop communication and report it to the US FBI Internet Crime Complaint Center. To verify an email from our recruiting team, forward it to security@hingehealth.com.



$$$ Full time
Data Engineer
  • NeuralWorks
  • Santiago (Hybrid)
Python SQL Cloud Computing Data Engineering

NeuralWorks es una compañía de alto crecimiento fundada hace 4 años. Estamos trabajando a toda máquina en cosas que darán que hablar.
Somos un equipo donde se unen la creatividad, curiosidad y la pasión por hacer las cosas bien. Nos arriesgamos a explorar fronteras donde otros no llegan: un modelo predictor basado en monte carlo, una red convolucional para detección de caras, un sensor de posición bluetooth, la recreación de un espacio acústico usando finite impulse response.
Estos son solo algunos de los desafíos, donde aprendemos, exploramos y nos complementamos como equipo para lograr cosas impensadas.
Trabajamos en proyectos propios y apoyamos a corporaciones en partnerships donde codo a codo combinamos conocimiento con creatividad, donde imaginamos, diseñamos y creamos productos digitales capaces de cautivar y crear impacto.

👉 Conoce más sobre nosotros

This job offer is on Get on Board.

Descripción del trabajo

El equipo de Data y Analytics trabaja en diferentes proyectos que combinan volúmenes de datos enormes e IA, como detectar y predecir fallas antes que ocurran, optimizar pricing, personalizar la experiencia del cliente, optimizar uso de combustible, detectar caras y objetos usando visión por computador.
Dentro del equipo multidisciplinario con Data Scientist, Translators, DevOps, Data Architect, tu rol será clave en construir y proveer los sistemas e infraestructura que permiten el desarrollo de estos servicios, formando los cimientos sobre los cuales se construyen los modelos que permiten generar impacto, con servicios que deben escalar, con altísima disponibilidad y tolerantes a fallas, en otras palabras, que funcionen. Además, mantendrás tu mirada en los indicadores de capacidad y performance de los sistemas.

En cualquier proyecto que trabajes, esperamos que tengas un gran espíritu de colaboración, pasión por la innovación y el código y una mentalidad de automatización antes que procesos manuales.

Como Data Engineer, tu trabajo consistirá en:

  • Participar activamente durante el ciclo de vida del software, desde inception, diseño, deploy, operación y mejora.
  • Apoyar a los equipos de desarrollo en actividades de diseño y consultoría, desarrollando software, frameworks y capacity planning.
  • Desarrollar y mantener arquitecturas de datos, pipelines, templates y estándares.
  • Conectarse a través de API a otros sistemas (Python)
  • Manejar y monitorear el desempeño de infraestructura y aplicaciones.
  • Asegurar la escalabilidad y resiliencia.

Calificaciones clave

  • Estudios de Ingeniería Civil en Computación o similar.
  • Experiencia práctica de al menos 3 años en entornos de trabajo como Data Engineer, Software Engineer entre otros.
  • Experiencia con Python.
  • Entendimiento de estructuras de datos con habilidades analíticas relacionadas con el trabajo con conjuntos de datos no estructurados, conocimiento avanzado de SQL, incluida optimización de consultas.
  • Pasión en problemáticas de procesamiento de datos.
  • Experiencia con servidores cloud (GCP, AWS o Azure), especialmente el conjunto de servicios de procesamiento de datos.
  • Buen manejo de inglés, sobre todo en lectura donde debes ser capaz de leer un paper, artículos o documentación de forma constante.
  • Habilidades de comunicación y trabajo colaborativo.

¡En NeuralWorks nos importa la diversidad! Creemos firmemente en la creación de un ambiente laboral inclusivo, diverso y equitativo. Reconocemos y celebramos la diversidad en todas sus formas y estamos comprometidos a ofrecer igualdad de oportunidades para todos los candidatos.

“Los hombres postulan a un cargo cuando cumplen el 60% de las calificaciones, pero las mujeres sólo si cumplen el 100%.” D. Gaucher, J. Friesen and A. C. Kay, Journal of Personality and Social Psychology, 2011.

Te invitamos a postular aunque no cumplas con todos los requisitos.

Nice to have

  • Agilidad para visualizar posibles mejoras, problemas y soluciones en Arquitecturas.
  • Experiencia en Infrastructure as code, observabilidad y monitoreo.
  • Experiencia en la construcción y optimización de data pipelines, colas de mensajes y arquitecturas big data altamente escalables.
  • Experiencia en procesamiento distribuido utilizando servicios cloud.

Beneficios

  • MacBook Air M2 o similar (con opción de compra hiper conveniente)
  • Bono por desempeño
  • Bono de almuerzo mensual y almuerzo de equipo los viernes
  • Seguro Complementario de salud y dental
  • Horario flexible
  • Flexibilidad entre oficina y home office
  • Medio día libre el día de tu cumpleaños
  • Financiamiento de certificaciones
  • Inscripción en Coursera con plan de entrenamiento a medida
  • Estacionamiento de bicicletas
  • Programa de referidos
  • Salida de “teambuilding” mensual

Library Access to a library of physical books.
Accessible An infrastructure adequate for people with special mobility needs.
Flexible hours Flexible schedule and freedom for attending family needs or personal errands.
Internal talks NeuralWorks offers space for internal talks or presentations during working hours.
Life insurance NeuralWorks pays or copays life insurance for employees.
Meals provided NeuralWorks provides free lunch and/or other kinds of meals.
Partially remote You can work from your home some days a week.
Bicycle parking You can park your bicycle for free inside the premises.
Digital library Access to digital books or subscriptions.
Computer repairs NeuralWorks covers some computer repair expenses.
Dental insurance NeuralWorks pays or copays dental insurance for employees.
Computer provided NeuralWorks provides a computer for your work.
Education stipend NeuralWorks covers some educational expenses related to the position.
Performance bonus Extra compensation is offered upon meeting performance goals.
Informal dress code No dress code is enforced.
Recreational areas Space for games or sports.
Shopping discounts NeuralWorks provides some discounts or deals in certain stores.
Vacation over legal NeuralWorks gives you paid vacations over the legal minimum.
Beverages and snacks NeuralWorks offers beverages and snacks for free consumption.
Vacation on birthday Your birthday counts as an extra day of vacation.
Time for side projects NeuralWorks allows employees to work in side-projects during work hours.
Gross salary $3500 - 5500 Full time
Senior Security Engineer
  • Neat
  • Santiago (Hybrid)
SQL Ethical Hacking DevSecOps Application Security

Neat es una fintech con la misión de robustecer las finanzas del hogar. Centralizamos los pagos recurrentes (servicios, arriendo, gasto común, colegios, créditos) para que las personas tengan visibilidad real de su plata. Sumamos NeatClub, programa de lealtad que premia el buen comportamiento financiero.

180.000 personas usan Neat hoy. En 2023 levantamos capital con founders de QVO e Iván Montoya (super angel SV). En 2025 volvimos con founders de Cornershop, Decelera Ventures, ADN Ventures, y Sean Cook (ex VP Intuit Mailchimp) como Advisor.

Cómo trabajamos: no somos una ticketera. Operamos con Shape Up: propones, defiendes y construyes.

Pilares culturales:

  • Growth Mindset: habilidades que se desarrollan con esfuerzo y aprendizaje constante.
  • Extreme Ownership: ves un problema, tomas iniciativa.
  • All In: hambre de impacto, no solo hacer la pega.
  • Radical Collaboration: responsabilidad compartida y hablar las cosas aunque incomoden.

Originally published on getonbrd.com.

Qué harás en Neat?

Tus responsabilidades se organizan en cuatro pilares, en orden de prioridad:

1. Operational Security
Controles, avisos y resguardos robustos sobre los flujos de pago y procesos core de Neat. Manejo de credenciales, permisos, Service Accounts, Secrets y API Keys con políticas claras de rotación y mínimo privilegio. Es el pilar más crítico: toca directamente el dinero de nuestros usuarios.

2. Seguridad para el usuario
Liderar 2FA, reCAPTCHA, manejo de sesiones y detección de fraude. Regla de oro: agregar valor en seguridad sin aumentar el roce para el usuario. Trabajarás de cerca con producto y diseño.

3. AI como vector defensivo y ofensivo
La IA es un arma de doble filo en seguridad y necesitamos a alguien que opere ambos lados con seriedad. Pilar transversal que definirá gran parte de nuestra ventaja en los próximos años.

4. Certificaciones y Ethical Hacking
Pavimentar el camino hacia ISO 27001 y PCI DSS, y gestionar el programa de ethical hacking / pentesting externo.

Día a día

  • Definir y ejecutar el roadmap de seguridad: priorizas tú, con checkpoints regulares con el CTO. A veces escribes el código tú, a veces haces pair con dev, a veces defines el spec y revisas el PR.
  • Incorporar prácticas de seguridad en todo Neat: onboarding seguro, manejo de accesos, cultura de secretos, capacitaciones al equipo.
  • Respuesta a incidentes: horario base 9-6. Frente a un incidente real (fraude, brecha, ataque activo, caída de control crítico de pago), necesitamos que respondas aunque no sea horario laboral. No es 24/7; es disponibilidad ante lo crítico. Lo compensamos con flexibilidad y días libres post-incidente.
  • Gestión del presupuesto semestral de seguridad (ethical hacking, herramientas, consultorías, certificaciones) con supervisión del CTO.
  • Participación en Shaping: propones pitches de seguridad bajo Shape Up, los defiendes frente al equipo y los priorizas en los ciclos.

Qué esperamos de ti?

  • 4–6 años en seguridad informática con experiencia comprobada liderando iniciativas end-to-end en al menos uno de: AppSec, Cloud Security, DevSecOps o Security Engineering en producto. Buscamos criterio técnico maduro y haber visto suficientes incidentes/decisiones para tener intuición propia.
  • Proactividad, ownership y energía de primera persona dedicada: serás quien construya desde la base. Si esperas que alguien te diga qué hacer cada semana, este rol no es para ti.
  • Entiendes que la seguridad habilita el negocio, no lo frena: te apasiona encontrar el balance entre control y velocidad.
  • Capacidad de trabajar solo/a sin aislarte: la seguridad solo funciona si el resto del equipo la abraza. Necesitas saber influir, capacitar y construir relaciones, no solo escribir reglas.
  • Disposición a responder incidentes fuera de horario cuando sea necesario, con la contraparte de flexibilidad real y compensación.
  • Instinto investigativo: cada fraude trae algo nuevo, y en seguridad la diferencia entre un control que funciona y uno que no está en dar el doble-click a la cosa rara.
  • Manejo sólido de data analytics y SQL: vas a construir y mantener pipelines de datos tipo ETL en BigQuery como los que hoy usamos para detectar y etiquetar cuentas vulneradas en ataques de credential stuffing (ver el boceto al final de esta lista).
  • Buena comunicación escrita y verbal: vas a explicar riesgos y trade-offs a equipos no técnicos. Necesitas traducir "vulnerabilidad CVSS 8.2" en "esto puede costarnos X y se mitiga así".
  • Organizado/a pero flexible: somos una startup y es común que las cosas cambien en el camino.
  • Uso avanzado de IA generativa con doble lente: no solo para acelerar tu trabajo, sino para automatizar detección, análisis y respuesta — y al mismo tiempo entender cómo los atacantes la están usando contra nosotros. Te motiva ser un referente del equipo en ambos lados de la moneda.
  • Growth mindset y colaboración: valoramos la capacidad de aprender de los errores y trabajar bien en equipo.
  • Que contestes el chat de servicio al cliente: todos en Neat hacemos turnos de soporte vía Intercom. Es una de las formas que tenemos de estar cerca de nuestros usuarios.
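
Como referencia del punto anterior sobre data analytics y SQL, este es un boceto mínimo e hipotético (nombres de tablas, columnas y umbrales inventados; no es el código real de Neat) de cómo podría marcarse actividad de credential stuffing en BigQuery antes de etiquetar las cuentas afectadas:

```python
# Boceto ilustrativo: el nombre de la tabla, las columnas y los umbrales son
# supuestos, no el esquema real de Neat. La idea es marcar cuentas con muchos
# intentos fallidos desde muchas IPs distintas en una ventana corta.
from google.cloud import bigquery

QUERY = """
SELECT
  account_id,
  COUNT(*)                  AS intentos_fallidos,
  COUNT(DISTINCT source_ip) AS ips_distintas
FROM `mi-proyecto.seguridad.login_events`   -- tabla hipotética
WHERE event_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 HOUR)
  AND status = 'FAILED'
GROUP BY account_id
HAVING intentos_fallidos >= 20 AND ips_distintas >= 5   -- umbrales de ejemplo
ORDER BY intentos_fallidos DESC
"""

def cuentas_sospechosas() -> list[str]:
    """Devuelve los account_id que superan los umbrales, para etiquetarlos."""
    client = bigquery.Client()
    return [row.account_id for row in client.query(QUERY).result()]

if __name__ == "__main__":
    print(cuentas_sospechosas())
```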

Sumas puntos si:

  • GCP: trabajamos 100% sobre Google Cloud y Firebase. Conocer Cloud IAM, Secret Manager, Cloud Armor, Security Command Center, VPC Service Controls y reCAPTCHA Enterprise te permitirá aportar desde el primer mes.
  • Firebase Security Rules: Firestore, Realtime Database y Storage son parte central de nuestro stack y fuente común de errores de seguridad en la industria.
  • Fintech o industrias reguladas: PCI DSS, manejo de datos sensibles y lógica regulatoria CMF es un plus fuerte.
  • Background de desarrollo (TypeScript/Node ideal): si pivoteaste de dev a seguridad, te sentirás en casa y podrás escribir el fix tú mismo/a.
  • Experiencia en startups: entiendes el ritmo y la dinámica.
  • Experiencia gestionando programas de bug bounty o pentesting externo.
  • AI Security: red teaming de LLMs, defensa contra prompt injection, evaluación de modelos, o uso de agentes/IA en SOC u operaciones de seguridad.

Qué te ofrecemos?

💸 Sueldo entre $3.500.000-$5.500.000 dependiendo de la experiencia que tengas.
💰 Presupuesto propio de seguridad (ethical hacking, herramientas u otros).
🌴 20 días de vacaciones al año + 1 extra por cada año en Neat.
💻 MacBook (de Neat) para trabajar.
🎂 Día libre en la semana de tu cumpleaños.
🤖 Github Copilot, agentes de cursor o Claude Code.
🏡 20 días de home office (aparte de los 2 de cada semana) para trabajar de la casa y distribuirlos como tu quieras.
👵🏻 APV: Te depositamos 60K mensuales a tu APV de Fintual.
Work-Life integration:
  • Horario laboral flexible, en general trabajamos de 9-6, pero siempre podemos ajustar según las necesidades del momento.
  • Los viernes salimos a las 4pm.
  • Modalidad híbrida: Trabajamos 3 días en la oficina 2 en la casa.
  • Tenemos 20 días de Home Office al año que puedes repartir como quieras (extras a los de la modalidad híbrida).
  • Actividades mensuales de equipo.
  • Si tienes que ir a hacer un trámite, anda, ¡sin drama!, siempre y cuando lo coordines bien.

Flexible hours Flexible schedule and freedom for attending family needs or personal errands.
Partially remote You can work from your home some days a week.
Bicycle parking You can park your bicycle for free inside the premises.
Retirement plan Neat pays or matches payment for plans such as 401(k) and others.
Computer provided Neat provides a computer for your work.
Informal dress code No dress code is enforced.
Vacation over legal Neat gives you paid vacations over the legal minimum.
Vacation on birthday Your birthday counts as an extra day of vacation.
$$$ Full time
architect design amazon security
Caylent is a cloud native services company that helps organizations bring the best out of their people and technology using Amazon Web Services (AWS). We provide a full range of AWS services including workload migrations and modernization, cloud native application development, DevOps, data engineering, security and compliance, and everything in between. At Caylent, our people always come first. We are a global company and operate fully remote with employees in Canada, the United States, and Latin America. We celebrate the culture of each of our team members and foster a community of technological curiosity. Come talk to us to learn more about what it means to be a Caylien!

The Mission

We are seeking a Principal Customer Solutions Architect to partner with our sales team. The right candidate is someone who has broad and deep AWS expertise and a proven ability to establish themselves as a trusted advisor to existing and potential customers. You're passionate about AWS and love working backwards with our customers to drive their business forward. Your mission will be to help determine and communicate solutions to our customers' goals, and to collaborate with and enable AWS pursuit teams.

Your Assignment

  • Lead deep dive discovery, architecture, and design sessions with strategic and enterprise customers and propose Well-Architected solutions.
  • Act as a trusted strategic advisor for executive customer stakeholders and align technical solutions to business goals.
  • Author proposals and statements of work that capture customer requirements & constraints and ensure successful project outcomes.
  • Educate customers & evangelize AWS through blogs, white papers, webinars, presentations, and direct customer engagement.
  • Win significantly complex pursuits and interact with strategic stakeholders.
  • Provide mentorship to CSA peers, and provide guidance on more complex/strategic pursuits and career growth.
  • Proactively contribute to the advancement of team best practices and processes.

Your Qualifications

  • 10+ years of experience architecting, building, and operating solutions on AWS

$$$ Full time
Data Analyst 3
  • SkySlope
  • Remote
analyst salesforce python technical

OUR ORIGIN STORY 🎂


In 2011 SkySlope started as an idea born at the kitchen table of our CEO, with just him and two others. Headquartered in Sacramento, California, we have since grown out of our previous 3 offices, and many of our close to 150 employees are spread all across the United States. Those 150 employees support close to 300,000 users across 5,000 offices nationwide and now in Canada as well. Included in that are 8 of the 15 largest Real Estate Brokerages in the nation.


But, despite being happy with what we’ve achieved, we know that as industry leaders in our space there’s a lot of work left to be done. All of the growth and success that has happened is a result of us obsessing over building cutting-edge software that makes the Real Estate world a better place. We know this only happens by hiring people who don’t just come up with out-of-the-box ideas, but who actually see those ideas through and bring them to life. As we’ve grown, we’ve been fortunate enough to hire plenty of people who possess that quality, and we realize it’s equally important to hire people who can pair that skill with empathy, collaboration, and a keen sense of urgency. If you’re looking to join a company where you can have real impact and surround yourself with an incredible team of people, then look no further.

                                                                                                                                                                                                                


SKYSLOPE’S CORE VALUES 💪🏻


These are the principles that helped us get to where we are and they are the principles that will guide us to where we want to go in the future. You can apply them to your professional life, your personal life, to any business and any situation. In no specific hierarchy, our core values are:


Awareness | Execution | Obsession | Ownership | Humility | Radical Candor | Urgency | Greatness | Inches | Fun


Learn more about our core values from our CEO, Tyler Smith here!

                                                                                                                                                                                                                


About the role: We are looking for a Data Analyst III to join our team and to help elevate the way we leverage data across the organization. While this role includes traditional data retrieval and reporting, we're looking for someone who goes beyond fulfilling requests — someone who proactively identifies trends, surfaces insights, and brings forward recommendations that help teams make better decisions before they even know to ask. Experience or curiosity around AI-assisted analytics is a plus, but this is first and foremost a strong data analyst role.



What Sets You Apart
  • You don't wait to be asked. You dig into the data, find what matters, and bring it to the people who need it. You're curious about new tools and techniques — including AI — but you're grounded in strong analytical fundamentals. You care about getting the answer right and communicating it in a way that actually moves the needle.


Essential Functions
  • Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions.
  • Query, extract, and transform data from multiple sources across MS SQL Server, MySQL, and MongoDB environments to support business needs
  • Build and maintain automated reports, dashboards, and data pipelines that reduce manual effort and improve data accessibility
  • Partner with cross-functional teams to understand their goals and proactively deliver analytical insights that drive action
  • Identify patterns, trends, anomalies, and opportunities in data sets and communicate findings clearly to both technical and non-technical audiences (see the sketch after this list)
  • Develop and maintain Python scripts for data automation, transformation, reporting and analysis
  • Contribute to improving our data infrastructure, documentation, and analytical best practices
  • Explore opportunities to incorporate AI-powered tools and techniques into existing workflows where they add clear value
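
As one concrete, hypothetical illustration of the anomaly-surfacing work above, the sketch below flags days where a synthetic daily metric drifts more than three rolling standard deviations from its two-week baseline. The metric name, window, and threshold are assumptions, not SkySlope's actual definitions:

```python
# Hypothetical sketch: flag days where a metric deviates sharply from its recent
# baseline. The metric, window size, and threshold are illustrative assumptions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
daily = pd.DataFrame({
    "date": pd.date_range("2024-01-01", periods=60, freq="D"),
    "transactions_created": rng.normal(1000, 50, 60).round(),
})
daily.loc[45, "transactions_created"] = 650   # simulated drop worth investigating

window = 14
baseline = daily["transactions_created"].rolling(window).mean()
spread = daily["transactions_created"].rolling(window).std()
daily["zscore"] = (daily["transactions_created"] - baseline) / spread

anomalies = daily[daily["zscore"].abs() > 3]
print(anomalies[["date", "transactions_created", "zscore"]])
```

A check like this, run on a schedule against warehouse data, is the kind of proactive insight the role is expected to bring to stakeholders before they ask.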


Other Duties
  • Please note this job description is not designed to cover or contain a comprehensive listing of activities, duties or responsibilities that are required of the employee for this job. Duties, responsibilities and activities may change at any time with or without notice.


Requirements
  • 5+ years of experience in a data analyst or similar role with progressive responsibility
  • Advanced SQL proficiency across both MS SQL Server and MySQL, including complex joins, stored procedures, query optimization, and cross-database work
  • Python proficiency for scripting, data manipulation, and automation (pandas, NumPy, or similar libraries)
  • Experience with BI/visualization tools such as Tableau, Power BI, Looker, or similar platforms
  • Solid understanding of data warehousing concepts, data modeling, and ETL/ELT processes
  • Strong communication skills with the ability to translate analytical findings into clear, actionable recommendations for stakeholders
  • Self-directed mindset with a demonstrated history of going beyond ad-hoc requests to proactively surface insights and improve processes


Preferred Qualifications
  • Familiarity with cloud platforms (Azure, AWS, or GCP)
  • Exposure to machine learning concepts or AI-assisted analytics tools (e.g., using APIs for text analysis, summarization, or data enrichment)
  • Experience with A/B testing, statistical modeling, or causal inference
  • Knowledge of version control (Git) and collaborative development workflows
  • Statistics, data science, or related degree or certification (equivalent experience welcomed)
  • MongoDB experience, including aggregation pipelines and working with unstructured or semi-structured data
  • Experience with data orchestration or transformation tools such as dbt, Apache Airflow, or similar
  • Familiarity with product and web analytics platforms such as Heap and/or Google Analytics
  • Exposure to tools such as Chameleon, HubSpot, or Salesforce is a bonus but not required
  • Real estate industry knowledge and/or experience
  • Experience mentoring junior analysts or leading small-scale analytical projects


$100,000 - $120,000 a year

Medical Insurance – Company pays flat dollar amount towards premium 

There are 3 plan options 

Our Medical Insurance plans are provided through United Healthcare 

The United Healthcare HMO is only offered to California residents

Eligibility begins 1st of the month following date of hire

Per Paycheck (24 pay periods a year)

Employee costs per tier are as follows:


UHC HDHP/HSA

Employee Only  $58.92

Employee + Child $147.30

Employee + Spouse $175.78

Employee + Family $259.24


UHC PPO

Employee Only $104.10

Employee + Child $244.63

Employee + Spouse $289.91

Employee + Family $422.63


UHC HMO (CA residents only)

Employee Only $84.56

Employee + Child $198.71

Employee + Spouse $235.49

Employee + Family $343.29


Dental Insurance – Company pays 75% of monthly premium only on Base Plan

This PPO plan is administered through Principal

Eligibility begins 1st of the month following date of hire


Principal Dental Base Plan

Employee Only $4.19

Employee + Child $11.73

Employee + Spouse $8.50

Employee + Family $17.20


Principal Dental Buy-Up Plan

Employee Only $6.65

Employee + Child $19.53

Employee + Spouse $13.51

Employee + Family $28.35


Vision Insurance – Company pays 100% of monthly premium

This plan is administered through Principal (VSP choice network)

Eligibility begins 1st of the month following date of hire


Basic Life and AD&D Insurance (with additional Voluntary Plans available) – Company paid plan with a guarantee issue amount of $25,000. 

Plan is administered through Principal

Eligibility begins 1st of the month following date of hire

Pricing varies for additional coverage, based upon age, coverage and dependent classification


Voluntary Short & Long Term Disability Insurance Plans – Optional plans to help protect your financial well-being.

Plan is administered through Principal

Eligibility begins 1st of the month following date of hire

Pricing varies, based upon age


Voluntary Accident insurance- Optional plans available to purchase that pays you a cash benefit to help with your expenses if you or a covered family member is injured due to an accident. 

Employee Only $4.39

Employee + Spouse $6.73

Employee + Child(ren) $7.49

Employee + Family $11.50


Voluntary Hospital Indemnity- Optional plans available to purchase that pays you a cash benefit to help with your expenses if you or a covered family member is admitted to the hospital

Employee Only $6.85

Employee + Spouse $17.43

Employee + Child(ren) $11.41

Employee + Family $22.84


Voluntary Critical Illness- Optional plans available to purchase to help with your expenses if you or a covered family member is diagnosed with a covered critical illness. 

Pricing varies, based upon age


Flexible Spending Account – A tax savings account you put money into that you use to pay for certain out-of-pocket health care and dependent care costs.

Plan is administered through Discovery Benefits

Eligibility begins 1st of the month following date of hire, if you sign up by the 25th of the month


Health Savings Account (HSA)– A tax savings account for employees enrolled in a High Deductible Health Plan. You can put money into this account to pay for certain out-of-pocket health care costs

Plan is administered through Discovery Benefits

Eligibility begins 1st of the month following date of hire, if you sign up by the 25th of the month

Must be enrolled in the UHC HDHP/HSA medical plan with SkySlope to be eligible

SkySlope contributes $300 to an individual HSA and $600 to a family HSA


401(k) Plan – Company will match $0.50 on each $1.00 contributed up to the first 6% of eligible earnings

Plan is administered through Principal

Eligibility begins first pay date after 90 days of employment

Auto-enrollment after eligibility at 3% of gross annual earnings

Defer between 1% and 40% of eligible contribution


Employee Stock Purchase Plan - Company match equal to 33.3333% of dollars contributed to the plan, based upon the average purchase price for the quarter.

Plan administered through Fidelity 

Eligibility begins first pay date after 90 days of employment

May contribute after-tax dollars from 3% to 15% of base earnings


Paid Time Off (PTO) – Company provides 120 hours (equivalent of 15 days) of PTO for new hires

PTO accrual begins after 90 days of employment


16 Paid Holidays

11 observed, 5 floating (used for personal holidays)

List of observed holidays published annually

Eligibility begins on your first day of employment


Bereavement Leave – Company will provide you with the following off to grieve the loss of a loved one. 

5 paid days of leave for an immediate family member. This is a spouse, child, parent, grandparent. 

1 paid day of leave for a close non-family member.


Discounts through Fidelity - Purchasing discounts for wireless, car rentals, hotels and more…


Pet Insurance through Nationwide- 50%, 70% reimbursement plans available through Nationwide with options for wellness. SkySlope contributes $20 a month, per pet, up to 2 pets towards the cost of the plan


Paid Parental Leave - All full-time regular employees are eligible for SkySlope’s Paid Parental Leave program, which provides employees with up to six (6) weeks of pay following the birth or placement of a new child. Paid Parental Leave must be taken within the first 6 months of the birth or placement of a new child. Employees will be paid at their regular rate of pay based upon their normal work schedule, up to a maximum of forty (40) hours per week.


Dayforce Wallet- All full-time regular employees will have access to sign up for Dayforce Wallet. Dayforce Wallet is a program provided by our payroll provider that allows employees to access their pay on-demand as soon as it is earned, without waiting for their standard payday.


Waldorf University discounts and perks- 10% off tuition for employees and their families, free text books, and scholarship opportunities available


Child Literacy Assistance Program discount- Discounted annual membership to Luminous Minds, an online resource center created to help with child literacy struggles. $85 for 1 year membership as a SkySlope Employee.


$1,000 Employee Referral bonuses- SkySlope will give every referrer $1,000 (post-tax) after a referee passes their 90 day mark. 


In addition to the above you also receive other perks like our Annual Employee Appreciation Day and additional internal company events.


                                                                                                                                                                                                                


SkySlope is an Equal Opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, age, disability, protected veteran status, national origin, sexual orientation, gender identity or expression (including transgender status), genetic information or any other characteristic protected by applicable law.


We sincerely thank you for taking the time to review our open positions and hope you'll take the time to submit a concise and thoughtful application.


Still thinking about applying? Waiting to hear back from us? Check out our social media in the meantime!

SkySlope | Facebook | Instagram | YouTube | LinkedIn | Twitter


Your privacy is important to us. Learn more about what data is collected and how we use it here.





$$$ Full time
.Net C# Azure Microservices
En Improving South America, brindamos servicios de TI para transformar la percepción del profesional de TI. Nos enfocamos en consultoría de TI, desarrollo de software y formación ágil.

La empresa promueve una cultura de trabajo excepcional basada en el trabajo en equipo, la excelencia y la diversión, con enfoque en crecimiento personal y recompensas compartidas. Al integrarse, el/la candidato/a formará parte de una comunidad que prioriza la comunicación abierta y relaciones laborales sólidas a largo plazo, respaldada por una estructura de desarrollo profesional y aprendizaje continuo.

Estamos buscando un/a Software Architect con experiencia en Microsoft Azure y plataformas de datos, para liderar el diseño de soluciones escalables y de alto impacto.

Este rol es clave para definir la arquitectura tecnológica, establecer estándares y acompañar a los equipos en la construcción de sistemas robustos, seguros y mantenibles.

© getonbrd.com.

Job functions

  • Diseñar arquitecturas cloud y on-premise escalables, seguras y resilientes
  • Liderar el diseño de data warehouses, pipelines ETL y modelado de datos
  • Definir estándares de arquitectura, buenas prácticas y lineamientos técnicos
  • Trabajar en conjunto con equipos de backend, data y DevOps
  • Evaluar tecnologías y proponer mejoras en performance, escalabilidad y costos
  • Acompañar técnicamente a los equipos (mentoría y toma de decisiones)

Qualifications and requirements

  • +7 años de experiencia en desarrollo con .NET (C#, ASP.NET Core)
  • Experiencia sólida en arquitectura cloud-native y microservicios
  • Experiencia trabajando con Microsoft Azure
  • Conocimiento en Data Warehousing, ETL y modelado de datos
  • Experiencia con CI/CD, Azure DevOps e Infrastructure as Code (ARM o Bicep)
  • Experiencia diseñando sistemas escalables, seguros y tolerantes a fallos
  • Buenas habilidades de comunicación y liderazgo técnico

Desirable skills

  • Experiencia con Synapse, Data Factory o herramientas de BI (Power BI, SSIS, SSAS)
  • Conocimientos en ciberseguridad y compliance
  • Experiencia en entornos Agile / Scrum

Conditions

  • Contrato a largo plazo.
  • 100% Remoto.
  • Vacaciones y PTOs
  • Posibilidad de recibir 2 bonos al año.
  • 2 revisiones salariales al año.
  • Clases de inglés.
  • Equipamiento Apple.
  • Plataforma de cursos en linea
  • Budget para compra de libros.
  • Budget para compra de materiales de trabajo
  • Y mucho más.

Internal talks Improving South America offers space for internal talks or presentations during working hours.
Computer provided Improving South America provides a computer for your work.
Informal dress code No dress code is enforced.
$$$ Full time
QA Engineer II (L4)
  • OpenLoop
  • Lima (Hybrid)
Python ETL TypeScript Testing Frameworks

About OpenLoop

OpenLoop was co-founded by CEO, Dr. Jon Lensing, and COO, Christian Williams, with the vision to bring healing anywhere. Our telehealth support solutions are thoughtfully designed to streamline and simplify go-to-market care delivery for companies offering meaningful virtual support to patients across an expansive array of specialties, in all 50 states.

Our Company Culture

We have a relatively flat organizational structure here at OpenLoop. Everyone is encouraged to bring ideas to the table and make things happen. This fits in well with our core values of Autonomy, Competence and Belonging, as we want everyone to feel empowered and supported to do their best work.

Apply to this job from Get on Board.

Responsibilities

We're seeking a QA Automation Engineer to join our Data Engineering team and take ownership of quality assurance across our data pipelines and infrastructure. This role will be instrumental in building and maintaining automated test suites that ensure the reliability and accuracy of our healthcare data systems. You'll work closely with a small, focused team of data engineers to establish testing strategies, prioritize coverage for critical data paths, and maintain quality standards as we scale.

• Quality Ownership: Own and maintain the automated test suite that runs in our CI pipeline, including integration tests, data quality checks, and smoke tests for our data infrastructure.

• Strategic Collaboration: Partner closely with data engineers to understand pipeline architecture, identify critical data paths, and develop comprehensive testing strategies that prioritize business-critical datapoints.

• Test Development: Write and maintain automated tests for data pipelines using Python and TypeScript, ensuring coverage across batch and event-driven workflows.

• Data Validation: Implement data quality checks including row counts, schema validation, key-column validation, idempotency testing, and duplicate handling across ETL processes (see the sketch after this list).

• CI/CD Integration: Build and maintain testing frameworks that integrate seamlessly with our CI/CD pipelines using GitHub Actions, AWS CodePipeline, and CodeArtifact.

• Documentation & Standards: Document test cases, testing strategies, and coverage metrics to establish repeatable quality standards across the data team.

• Continuous Improvement: Identify testing gaps and systematically expand coverage toward end-to-end testing of critical data pipelines.
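
To make the data-validation item above concrete, here is a minimal pytest-style sketch of the kinds of checks it describes. The table shape, column names, and expected counts are illustrative assumptions, not OpenLoop's real pipelines:

```python
# Illustrative pytest sketch of data-quality checks (row counts, schema
# validation, duplicate handling). Table shapes and expected values are
# hypothetical examples, not OpenLoop's actual datasets.
import pandas as pd
import pytest

EXPECTED_COLUMNS = {"patient_id", "encounter_id", "state", "loaded_at"}

@pytest.fixture
def transformed_encounters() -> pd.DataFrame:
    # Stand-in for the output of an ETL step (e.g., a Glue job writing to S3/Athena).
    return pd.DataFrame({
        "patient_id": ["p1", "p2", "p3"],
        "encounter_id": ["e1", "e2", "e3"],
        "state": ["CA", "NY", "TX"],
        "loaded_at": pd.to_datetime(["2024-01-01"] * 3, utc=True),
    })

def test_schema(transformed_encounters):
    assert set(transformed_encounters.columns) == EXPECTED_COLUMNS

def test_row_count_matches_source(transformed_encounters):
    source_row_count = 3          # in practice, queried from the raw/source layer
    assert len(transformed_encounters) == source_row_count

def test_key_columns_unique_and_non_null(transformed_encounters):
    keys = transformed_encounters["encounter_id"]
    assert keys.notna().all()
    assert not keys.duplicated().any()
```

Checks like these would run in the CI pipeline (GitHub Actions or CodePipeline) against test fixtures or staging data before a pipeline change ships.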

Requirements

• 3 years of experience in QA automation or software testing, with a focus on data pipelines or backend systems.

• 3 years of hands-on experience with Python and TypeScript for test automation.

• Strong experience with CI/CD pipelines (GitHub Actions, AWS CodePipeline, CodeArtifact).

• Hands-on experience working with data lakes and ETL processes on AWS (familiarity with services like S3, Glue, Athena, Lambda, Step Functions, SQS, EventBridge).

• Experience with testing frameworks for Python (pytest, unittest) and TypeScript/JavaScript (Jest, Mocha).

• Understanding of data structures, data modeling concepts, and data lineage.

• Experience testing in a multi-tenant SaaS environment.

• English (C1/C2) fluency.

Desirable skills

ISTQB Certification

Our Benefits

  • Contract under a Peruvian company, on payroll ("Planilla"). You will receive all the legal benefits in Peruvian soles (CTS, "Gratificaciones", etc.).
  • Monday - Friday workdays, full time (9 am - 6 pm).
  • Unlimited Vacation Days - Yes! We want you to be able to relax and come back as happy and productive as ever.
  • EPS healthcare covered 100% with RIMAC --Because you, too, deserve access to great healthcare.
  • Oncology insurance covered 100% with RIMAC
  • AFP retirement plan—to help you save for the future.
  • We’ll assign a computer in the office so you can have the best tools to do your job.
  • You will have all the benefits of the Coworking space located in Lima - Miraflores (Free beverage, internal talks, bicycle parking, best view of the city)

Life insurance OpenLoop pays or copays life insurance for employees.
Paid sick days Sick leave is compensated (limits might apply).
Partially remote You can work from your home some days a week.
Health coverage OpenLoop pays or copays health insurance for employees.
Retirement plan OpenLoop pays or matches payment for plans such as 401(k) and others.
Computer provided OpenLoop provides a computer for your work.
Informal dress code No dress code is enforced.
Vacation over legal OpenLoop gives you paid vacations over the legal minimum.
$$$ Full time
technical senior analytics engineer
We are seeking an innovative Senior Data Engineer to join our Startup AI and Data Analytics Business Unit. This role is a critical bridge, connecting technical engineering with business strategy. Beyond just managing tickets or building pipelines, you will take full ownership of the data ecosystem. You will play a pivotal role in supporting our AI/ML initiatives, managing the modern data stack while simultaneously answering critical business questions to ensure data accessibility, reliability, and scalability.

Gross salary $4000 - 6500 Full time
JavaScript Python PostgreSQL SQL
If you enjoy debugging complex data systems, working directly with customers, and owning technical issues end-to-end, this is a high-impact role inside a fast-scaling marketing intelligence platform used by leading eCommerce brands.

Our client is building an advanced attribution and marketing analytics platform that gives brands a unified, accurate view of ad performance, customer journeys, and revenue impact. The product has strong product-market fit and is scaling quickly.

They are hiring a Senior Support Engineer to operate at the intersection of engineering, data, and customer success. This is not a call-center support role. It is a technical, backend-heavy position focused on diagnosing data issues, debugging pipelines, and ensuring customers can trust the platform.

Find this job and more on Get on Board.

The Role

You will serve as the first technical point of contact for customers facing implementation or data challenges. Approximately 70 to 80 percent of the role is backend-focused. You will investigate logs, analyze data pipelines, review scripts, and debug integrations involving APIs, pixels, and order tracking. You will work closely with engineering and data teams, escalate complex issues when needed, and ensure customers receive clear, actionable solutions.
What You’ll Do
  • Act as the primary technical contact for support tickets and Slack escalations
  • Debug customer implementations involving tracking scripts, APIs, and data pipelines
  • Write and optimize SQL queries to investigate data discrepancies (see the sketch after this list)
  • Analyze logs and investigate backend issues
  • Own data quality issues from identification to resolution
  • Escalate reproducible product bugs with detailed technical context
  • Improve documentation to reduce repeat issues and enable self-serve workflows
  • Translate technical findings into clear explanations for non-technical stakeholders
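As a rough illustration of the data-discrepancy work described in the list above, the sketch below runs a reconciliation query against an in-memory SQLite database. The table names, columns, and sample rows are assumptions made for the example; a real investigation would point a similar query at the production warehouse.

```python
# Hypothetical sketch of a data-discrepancy check.
# Table names and columns are invented for illustration; a real investigation
# would run a similar query against the production warehouse instead of SQLite.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tracked_orders  (order_id TEXT, order_date TEXT, revenue REAL);
    CREATE TABLE platform_orders (order_id TEXT, order_date TEXT, revenue REAL);
    INSERT INTO tracked_orders  VALUES ('A1', '2024-06-01', 50.0), ('A2', '2024-06-01', 20.0);
    INSERT INTO platform_orders VALUES ('A1', '2024-06-01', 50.0), ('A2', '2024-06-01', 20.0),
                                       ('A3', '2024-06-01', 30.0);  -- missed by tracking
""")

# Compare per-day order counts and revenue between what tracking captured
# and what the platform reports, surfacing days that disagree.
query = """
SELECT p.order_date,
       COUNT(DISTINCT p.order_id) AS platform_orders,
       COUNT(DISTINCT t.order_id) AS tracked_orders,
       SUM(p.revenue) - COALESCE(SUM(t.revenue), 0) AS missing_revenue
FROM platform_orders p
LEFT JOIN tracked_orders t USING (order_id, order_date)
GROUP BY p.order_date
HAVING COUNT(DISTINCT p.order_id) != COUNT(DISTINCT t.order_id);
"""
for row in conn.execute(query):
    print(row)  # e.g. ('2024-06-01', 3, 2, 30.0)
```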

What They’re Looking For

  • 2+ years of software engineering or support engineering experience
  • Strong SQL skills with the ability to analyze production data
  • Working knowledge of JavaScript and web APIs
  • Experience working directly with customers in a technical capacity
  • Strong analytical and problem-solving skills
  • Ability to clearly explain technical concepts in writing
  • Comfort working in ambiguous technical situations

Nice to have:

  • Experience with tools such as Retool, Django Admin, or similar admin tooling
  • Familiarity with Datadog, Airflow, Postgres, or BigQuery
  • Python experience
  • Experience working with eCommerce or marketing analytics systems

$127000 - $159000 Full time
software react system security

About Equip 

Equip is the leading virtual, evidence-based eating disorder treatment program on a mission to ensure that everyone with an eating disorder can access treatment that works. Created by clinical experts in the field and people with lived experience, Equip builds upon evidence-based treatments to empower individuals to reach lasting recovery. All Equip patients receive a dedicated care team, including a therapist, dietitian, physician, and peer and family mentor. The company operates in all 50 states and is partnered with most major health insurance plans. Learn more about our strong outcomes and treatment approach at www.equip.health.

Founded in 2019, Equip has been a fully virtual company since its inception and is proud of the highly engaged, passionate, and diverse Equipsters who have created Equip’s culture. Recognized by Time as one of the most influential companies of 2023, along with awards from LinkedIn and Lattice, we are grateful to Equipsters for building a sustainable treatment program that has served thousands of patients and families.

About the role:

Equip's engineering culture emphasizes agility, collaboration, and ownership, fostering a team of problem-solvers who build a robust, scalable healthcare platform. As a Senior DevOps Engineer, you'll be crucial in developing and maintaining infrastructure, platforms, and developer tools, including CI/CD pipelines, cloud infrastructure, and observability tools, to enable efficient development and scaling. You'll also support web (Java, React, PostgreSQL) and mobile (React Native) applications, standardizing AWS deployments and CI/CD practices. The role will involve building security, metrics, logging, and deployment tooling to ensure system reliability and scalability. Our goal is to create intuitive, reliable systems that allow engineers to iterate quickly and deliver value to patients, with direct user feedback driving our highest-impact work.
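As a small, hypothetical illustration of the metrics and alerting tooling this role builds (not Equip's actual setup), the sketch below uses boto3 to define a CloudWatch alarm on 5xx errors behind an Application Load Balancer. The resource names, thresholds, and SNS topic are placeholders.

```python
# Hypothetical sketch: defining a CloudWatch alarm with boto3 for a web service.
# The load balancer name, SNS topic, and thresholds are placeholders, not real
# infrastructure described in this posting.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="web-api-5xx-spike",
    Namespace="AWS/ApplicationELB",
    MetricName="HTTPCode_Target_5XX_Count",
    Dimensions=[{"Name": "LoadBalancer", "Value": "app/web-api/0123456789abcdef"}],
    Statistic="Sum",
    Period=300,                      # evaluate in 5-minute windows
    EvaluationPeriods=2,             # two consecutive breaching windows
    Threshold=25,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:oncall-alerts"],
)
```

In practice a team like this would typically express the same alarm in Terraform or CloudFormation so it is versioned alongside the rest of the infrastructure.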

Responsibilities:

  • Design and build a robust, scalable cloud platform to empower web and data engineering teams to deliver high-quality applications.

  • Partner with engineering and data teams to improve developer velocity, ensure system reliability, and embed operational excellence.

  • Lead best practices in cloud infrastructure architecture, CI/CD automation, monitoring, and backend systems reliability.

  • Develop tools and automation using a variety of frameworks and languages to enhance the performance, availability, and scalability of services.

  • Contribute to a culture of continuous improvement through proactive monitoring, root cause analysis, and knowledge sharing.

  • Perform other duties as assigned.

Qualifications:

  • Bachelor's degree or equivalent training and work experience in Computer Science, Software Engineering, or a related field

  • 5–10 years of experience in DevOps, SRE, Platform Engineering, or Software Engineering roles.

  • Deep expertise in AWS and its ecosystem of services.

  • Proven track record building cloud infrastructure using Infrastructure as Code (Terraform, CloudFormation)

  • Strong experience with container orchestration and serverless architectures, including ECS/Fargate and Docker

  • Solid understanding of AWS networking concepts, including VPCs, subnets, security groups, route tables, and load balancers.

  • Hands-on experience creating and maintaining CI/CD pipelines (e.g., CircleCI, GitLab CI, etc.).

  • Strong experience with scalable backend systems, including microservices, APIs, caching layers, and various databases.

  • Experience deploying and managing React and other JavaScript applications using AWS services like CloudFront and S3.

  • Experience setting up comprehensive monitoring and alerting for infrastructure, services, and data pipelines.

  • Skilled at identifying, diagnosing, and preventing production issues through effective observability and troubleshooting (NewRelic, DataDog)

  • Commitment to building secure systems with best practices in access control, encryption, and secure deployment pipelines.

  • Experience communicating and collaborating with engineering and product team stakeholders.

  • Proven ability to manage multiple projects with competing priorities.

  • Ability to work Eastern or Central time zone hours, either 9-5 Eastern or 8-4 Central.

Benefits

Time Off:

  • Flex PTO policy (3-5 wks/year recommended) + 11 paid company holidays.

Medical Benefits:

  • Competitive Medical, Dental, Vision, Life, and AD&D insurance.

  • Equip pays for a significant percentage of benefits premiums for individuals and families.

  • Maven, a company paid reproductive and family care benefit for all employees.

  • Employee Assistance Program (EAP), a company paid resource for mental health, legal services, financial support, and more!

Other Benefits

Work From Home Additional Perks:

  • $50/month stipend added directly to an employee’s paycheck to cover home internet expenses.

  • One-time work from home stipend of up to $500.

Physical Demands

Work is performed 100% from home, with a requirement to travel once or twice a year for in-person meetings. This is a stationary position that requires the ability to operate standard office equipment and keyboards, as well as to talk or hear by telephone. Sit or stand as needed.

#LI-Remote

At Equip, Diversity, Equity, Inclusion and Belonging (DEIB) are woven into everything we do. At the heart of Equip’s mission is a relentless dedication to making sure that everyone with an eating disorder has access to care that works regardless of race, gender, sexuality, ability, weight, socio-economic status, and any marginalized identity. We also strive toward our providers and corporate team reflecting that same dedication, both in bringing in and retaining talented employees from all backgrounds and identities. We have an Equip DEIB council, Equip For All, also referred to as EFA. EFA at Equip aims to be a space driven by mutual respect and thoughtful, effective communication strategy, enabling full participation of members who identify as marginalized or under-represented and allies, amplifying diverse voices, creating opportunities for advocacy, and contributing to the advancement of diversity, equity, inclusion, and belonging at Equip.

As an equal opportunity employer, we provide equal opportunity in all aspects of employment, including recruiting, hiring, compensation, training and promotion, termination, and any other terms and conditions of employment without regard to race, ethnicity, color, religion, sex, sexual orientation, gender identity, gender expression, familial status, age, disability, weight, and/or any other legally protected classification protected by federal, state, or local law. 

Our dedication to equitable access, which is core to our mission, extends to how we build our "village." In line with our commitment to Diversity, Equity, Inclusion, and Belonging (DEIB), we are dedicated to an accessible hiring process where all candidates feel a true sense of belonging. If you require a reasonable accommodation to complete your application, interview, or perform the essential functions of a role, we invite you to reach out to our People team at accommodations@equip.health.

$$$ Full time
Senior Account Executive
  • Caylent
  • Texas
amazon security training technical
Caylent is a cloud native services company that helps organizations bring the best out of their people and technology using Amazon Web Services (AWS). We provide a full range of AWS services including workload migrations and modernization, cloud native application development, DevOps, data engineering, security and compliance, and everything in between. At Caylent, our people always come first. We are a global company and operate fully remote with employees in Canada, the United States, and Latin America. We celebrate the culture of each of our team members and foster a community of technological curiosity. Come talk to us to learn more about what it means to be a Caylien!

Your Assignment
  • Communicate via cold calls/emails/social media/in-person meetings with SME prospects.
  • Manage and nurture relationships with AWS and clients.
  • Drive net new customer acquisition and scale the existing client base.
  • Design, build, and test new outreach and nurture campaigns.
  • Coordinate closely with content, marketing, and lead generation providers.
  • Drive revenue by winning new services business and/or expanding existing engagements.
  • Attend cloud workshops and training to boost specific skills and possible certifications around cloud, Kubernetes, and DevOps.
  • Engage with AWS and other partners at the tactical and strategic level.

Your Qualifications
  • 5+ years of B2B sales experience selling managed cloud services and/or DevOps consulting.
  • Experience selling AWS and related services is highly desired.
  • Great verbal communication and presentation skills.
  • Assist with creating proposals & SOWs.
  • Negotiate contracts, deliverables, and price.
  • Enthusiasm to work in a startup environment and ability to be cross-functional.
  • Natural curiosity and excitement to learn new technology, sell, and succeed as an individual and as a team.
  • Proven track record of sourcing and closing $250K+ ARR deals successfully.
  • Ability to travel 10-25% of the time.
  • Technical background in DevOps or Cloud is preferred.

Gross salary $2500 - 3700 Full time
Data Engineer
  • Checkr
  • Santiago (Hybrid)
Python SQL Kubernetes CI/CD

Checkr is expanding its innovation hub in Santiago to drive the accuracy and intelligence of its background-check engine at global scale. This team works closely with the US offices to optimize the screening engine, detect fraud, and evolve the platform with GenAI models. The selected candidate will join a strategic effort to balance speed, cost, and accuracy, impacting millions of candidates and improving the experience of customers and partners. The role involves leading optimization initiatives, designing analytics strategies, and developing predictive models within a high-performance technology stack.

Apply directly on Get on Board.

Role Responsibilities

    • Build, maintain, and optimize critical data pipelines that underpin Checkr's platform and data products (see the sketch after this list).
    • Build tooling that helps streamline the management and operation of our data ecosystem.
    • Design scalable, secure systems to handle the enormous data flow as Checkr keeps growing.
    • Design systems that enable repeatable, scalable machine learning workflows.
    • Identify innovative applications of data that can lead to new products or insights and enable other Checkr teams to maximize their own impact.
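To give a flavor of what such a pipeline stage with built-in checks might look like, here is a minimal, hypothetical sketch; the function names, fields, and thresholds are invented for illustration and do not describe Checkr's actual pipelines.

```python
# Hypothetical sketch of a small pipeline stage with built-in data checks.
# Function names, fields, and thresholds are illustrative only.
from dataclasses import dataclass

@dataclass
class CheckResult:
    name: str
    passed: bool
    detail: str

def extract() -> list[dict]:
    # Stand-in for reading from a queue, API, or object store.
    return [{"candidate_id": "c-1", "county": "Cook"}, {"candidate_id": "c-2", "county": None}]

def validate(rows: list[dict]) -> list[CheckResult]:
    missing_county = sum(1 for r in rows if not r["county"])
    return [
        CheckResult("non_empty_batch", len(rows) > 0, f"{len(rows)} rows"),
        CheckResult("county_completeness", missing_county / max(len(rows), 1) < 0.05,
                    f"{missing_county} rows missing county"),
    ]

def load(rows: list[dict]) -> None:
    # Stand-in for a warehouse write (e.g., COPY into the target table).
    print(f"loaded {len(rows)} rows")

if __name__ == "__main__":
    batch = extract()
    checks = validate(batch)
    if all(c.passed for c in checks):
        load(batch)
    else:
        failed = [(c.name, c.detail) for c in checks if not c.passed]
        raise SystemExit(f"pipeline halted, failed checks: {failed}")
```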

Role Requirements

  • 2+ years of industry experience in a data engineering or backend engineering role, plus a bachelor's degree or equivalent experience.
  • Programming experience in Python or SQL; proficiency in one and at least working experience with the other is required.
  • Experience building and maintaining production data services.
  • Experience with data modeling, security, and governance.
  • Familiarity with modern CI/CD practices and tools (e.g., GitLab and Kubernetes).
  • Experience with, and a passion for, mentoring other data engineers.

Note

Please attach an updated CV in English when applying.

Benefits

    • A collaborative, fast-moving environment
    • Being part of an international company headquartered in the United States
    • A learning and development reimbursement allowance
    • Competitive compensation and opportunities for professional and personal growth
    • 100% medical, dental, and vision coverage for employees and dependents
    • 5 additional vacation days and flexibility to take time off

At Checkr, we believe a hybrid work environment strengthens collaboration, drives innovation, and fosters connection. Our main hubs are Denver, CO, San Francisco, CA, and Santiago, Chile.
Equal Employment Opportunity at Checkr

Checkr is committed to hiring talented and qualified individuals with diverse backgrounds for all of its tech, non-tech, and leadership roles. Checkr believes that bringing together and celebrating unique backgrounds, qualities, and cultures enriches the workplace.

Pet-friendly Pets are welcome at the premises.
Flexible hours Flexible schedule and freedom for attending family needs or personal errands.
Partially remote You can work from your home some days a week.
Health coverage Checkr pays or copays health insurance for employees.
Computer provided Checkr provides a computer for your work.
Informal dress code No dress code is enforced.
Vacation over legal Checkr gives you paid vacations over the legal minimum.
Beverages and snacks Checkr offers beverages and snacks for free consumption.
$$$ Full time
Intern Software Development
  • Netomi
  • Remote - India
software design technical code

About the Company:

Netomi is the leading agentic AI platform for enterprise customer experience. We work with the largest global brands like Delta Airlines, MetLife, MGM, United, and others to enable agentic automation at scale across the entire customer journey. Our no-code platform delivers the fastest time to market, lowest total cost of ownership, and simple, scalable management of AI agents for any CX use case. Backed by WndrCo, Y Combinator, and Index Ventures, we help enterprises drive efficiency, lower costs, and deliver higher quality customer experiences.


Want to be part of the AI revolution and transform how the world’s largest global brands do business? Join us!


Job description


We are looking for a Software Development Intern to help us with coding, fixing, executing, and versioning existing code for applications. If you're passionate about solving fundamental real-time problems and eager to explore, learn, and work on technologies beyond your current scope, Netomi is the perfect place for you.



Job Responsibilities
  • Assist in the planning, design, and execution of SOA backend platforms, mostly around REST-based web frameworks using Java (Spark, Spring, ORM)
  • High-level and low-level design of highly scalable components
  • Work collaboratively in a multi-disciplinary team environment
  • Assist key technical advisors in defining the project roadmap


Requirements
  • Experience with a scripting language for automated builds/deployments, preferably Java
  • Pursuing a B.E./B.Tech in Computer Science from a tier I or II institute (2025 and 2026 graduates only)



Netomi is an equal opportunity employer committed to diversity in the workplace. We evaluate qualified applicants without regard to race, color, religion, sex, sexual orientation, disability, veteran status, and other protected characteristics.



$$$ Full time
Data Governance Engineer
  • Chime Financial, Inc
  • San Francisco
python technical developer code

About the Role

The Data Governance function is pivotal in ensuring the integrity, trustworthiness, and effective management of Chime's data assets. Our mission is to establish and operationalize data governance frameworks that not only meet compliance requirements, but actively enable high-confidence decision making across the company.  As a Data Governance Engineer, you will develop and implement policies and tools for data quality, developer enablement, certified datasets, and governance automation - with a sharp focus on building trust signals and scorecards that help data consumers quickly understand and act on the reliability of Chime's data.

This is a hands-on engineering role. You will write production-quality code to build automation for governance and data quality processes using Terraform, Python (or similar languages), and contribute to internal libraries, frameworks, and orchestration workflows that enable scalable governance.
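As a loose illustration of the trust signals and scorecards mentioned above (not Chime's actual model), the sketch below scores a dataset from a few simple governance signals; the weights and metadata fields are invented for the example.

```python
# Hypothetical sketch of a dataset "trust scorecard".
# The signals, weights, and metadata fields are illustrative only.
from datetime import datetime, timezone

def trust_scorecard(meta: dict) -> dict:
    """Score a dataset 0-100 from a few simple trust signals."""
    now = datetime.now(timezone.utc)
    age_hours = (now - meta["last_loaded_at"]).total_seconds() / 3600

    signals = {
        "fresh": age_hours <= meta["freshness_sla_hours"],
        "has_owner": bool(meta.get("owner")),
        "certified": meta.get("certified", False),
        "tests_passing": meta.get("failed_checks", 0) == 0,
    }
    weights = {"fresh": 40, "has_owner": 20, "certified": 20, "tests_passing": 20}
    score = sum(weights[name] for name, ok in signals.items() if ok)
    return {"dataset": meta["name"], "score": score, "signals": signals}

print(trust_scorecard({
    "name": "analytics.daily_transactions",
    "last_loaded_at": datetime(2024, 6, 1, 6, 0, tzinfo=timezone.utc),
    "freshness_sla_hours": 24,
    "owner": "data-platform",
    "certified": True,
    "failed_checks": 0,
}))
```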

You will partner closely with data engineering, analytics, product engineering, and compliance teams to:

  • Support Chime in data compliance and risk reduction initiatives
  • Establish data quality as a product discipline
  • Build trust signals and scorecards that make data reliability transparent and data context valuable for a variety of use cases
  • Implement governance workflows upstream, where data are created
  • Ensure data context and metadata are sufficient for downstream use cases, including AI applications

You will be involved in reviews of technical designs and implementation plans to provide guidance on appropriate development from a data governance and compliance perspective, acting as a trusted advisor to teams innovating at Chime. You will also be a strong advocate and leader for Artificial Intelligence tooling and adoption at Chime.



$$$ Full time
Software Engineer
  • Clover Health
  • USA
software design financial cloud
At Counterpart Health, we are transforming healthcare and improving patient care with our innovative primary care tool, Counterpart Assistant. By supporting Primary Care Physicians (PCPs), we are able to deliver improved outcomes to our patients at a lower cost through early diagnosis and longitudinal care management of chronic conditions. We are looking for Software Engineers who are eager to tackle a variety of challenges. In this role, you will collaborate with developers, data scientists, and healthcare professionals to build tools that improve real-world health outcomes.

As a Software Engineer, you will:
  • Simplify the complexities of healthcare by building scalable systems that enhance human efforts.
  • Stay up-to-date with new tools and technologies to solve challenges and advance our goals.
  • Help define and maintain development best practices to enable rapid iteration while ensuring quality, including writing tests and documenting key implementations.
  • Work with Product Managers and operational teams to design and develop new features.

You should get in touch if:
  • You have 3+ years of experience as a Software Engineer with proficiency in Python, JavaScript, or Go.
  • You have experience writing SQL queries in databases such as Postgres, MySQL, BigQuery, Snowflake, or similar systems.
  • You are comfortable working with data pipelines, including cleaning, normalizing, and improving data quality.
  • You can create and call RESTful APIs (experience with gRPC is a plus).
  • You have experience working with cloud services such as GCP or AWS.

Benefits Overview:
  • Financial Well-Being: Our commitment to attracting and r


About Data Engineering jobs

Remote Data Engineering jobs: data pipelines, ETL, data architecture, and big data. At RemoteJobs.lat we connect professionals from Latin America with companies offering 100% remote work. All of our openings let you work from any city, with payment in dollars or another international currency.

Salary range

$4,000 - $11,000 USD/month

Open positions

165

Location

100% Remote LATAM

Tip: You can also search for openings in related skills such as Python and SQL.

Data Engineering salary ranges by seniority

Estimated ranges in USD/month for remote contracts with international companies. They vary by company, complementary stack, and client location.

Level         Years of experience   Range (USD/month)
Junior        0-2                   $4,000 - $5,750
Semi-Senior   2-4                   $5,400 - $7,850
Senior        4-7                   $7,500 - $9,950
Lead/Staff    7+                    $9,250 - $11,000

Companies hiring Data Engineering remotely from LATAM

Some companies that have historically hired Data Engineering profiles to work 100% remotely from Latin America:

Mercado Libre Globant Auth0 Nubank Cloudwalk Stripe GitLab Crossover Toptal

Frequently asked questions

The typical range for a remote Data Engineering role working for international companies is $4,000 - $11,000 USD/month. The exact amount depends on seniority, the company's country, and whether the contract is full-time or per project.

The most in-demand Data Engineering profiles usually combine Python, SQL, and Spark. Adding one of these opens up more offers and usually raises the salary range by 15% to 30%.

For US/EU companies, yes: a minimum B2 level for technical interviews. There are alternatives at LATAM companies (Mercado Libre, Globant, Rappi) or agencies such as Toptal, where intermediate English is enough to get started.

The 3 things that move the needle most: (1) a public GitHub with 2-3 solid projects relevant to Data Engineering, (2) a LinkedIn profile in English optimized for recruiters, and (3) applying to 20+ openings per week instead of 2-3.