Remote Data Engineering jobs. Data pipelines, ETL, data architecture, and big data.
Flock Safety is the leading safety technology platform, helping communities thrive by taking a proactive approach to crime prevention and security. Our hardware and software suite connects cities, law enforcement, businesses, schools, and neighborhoods in a nationwide public-private safety network. Trusted by over 5,000 communities, 4,500 law enforcement agencies, and 1,000 businesses, Flock delivers real-time intelligence while prioritizing privacy and responsible innovation.
We're a high-performance, low-ego team driven by urgency, collaboration, and bold thinking. Working at Flock means tackling big challenges, moving fast, and continuously improving. It's intense but deeply rewarding for those who want to make an impact.
With nearly $700M in venture funding and a $7.5B valuation, we're scaling intentionally and seeking top talent to help build the impossible. If you value teamwork, ownership, and solving tough problems, Flock could be the place for you.
We're hiring a Senior Software Engineer to build Night Shift, a conversational AI assistant that helps investigators surface critical evidence and close cases faster. You'll design and implement the conversational interface, build the orchestration backend that manages LLM interactions and tool calling, and develop integration pipelines connecting our AI to Flock's existing data platform and APIs. This is a ground-floor opportunity where product thinking matters as much as technical execution: you'll shape chat experiences with complex context management, partner with platform teams to design new APIs or leverage existing ones, and solve the reliability challenges of deploying AI in high-stakes investigative workflows. You'll collaborate closely with ML engineers on prompt engineering and agentic workflows while maintaining a strong point of view on what makes a great user experience. If you've built LLM-powered products and thrive at the intersection of customer impact and technical depth, this role is for you.
Love for coding and continuous learning, especially in the rapidly evolving LLM space
Resourceful problem-solver mindset: excel in ambiguous situations and take initiative to define product direction
Strong TypeScript / Node / Express skills for web services and API design (REST, SSE, WebSockets for streaming)
Modern web framework expertise (React / TypeScript preferred), particularly for conversational UI and chat interfaces
Hands-on LLM experience: OpenAI/Anthropic/Gemini APIs, prompt engineering, streaming responses, and conversation context management
Familiarity with agentic patterns: function calling, tool use (MCP), and orchestrating multi-step workflows
API integration skills: consume existing APIs or design new ones to ground AI in investigative data
Database confidence: PostgreSQL and sophisticated SQL for data retrieval
Cloud infrastructure basics: Docker, Kubernetes (Helm), AWS services (S3, SQS, API Gateway)
Product-minded: translate user feedback into technical requirements and make pragmatic tradeoffs
Bonus points for: LLM evaluation tools (LangSmith, Langfuse), vector search/RAG, microservices architecture, or Terraform
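The agentic patterns listed above (function calling, tool use, multi-step orchestration) reduce to a bounded loop: ask the model, execute any tool it requests, feed the result back, and stop on a final answer. A minimal framework-free sketch, with a stubbed `call_llm` and a hypothetical `lookup_case` tool standing in for a real LLM API and data integration:

```python
# Minimal sketch of an agentic tool-calling loop (all names hypothetical).
# A real implementation would call an LLM provider API; here `call_llm` is a
# stub that returns either a tool request or a final answer.

def lookup_case(case_id: str) -> str:
    """Hypothetical tool: fetch evidence for a case."""
    return f"evidence for {case_id}"

TOOLS = {"lookup_case": lookup_case}

def call_llm(messages):
    """Stub standing in for a chat-completion call with tool definitions."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "lookup_case", "args": {"case_id": "C-42"}}
    return {"answer": "Summary based on evidence."}

def run_agent(user_msg: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": user_msg}]
    for _ in range(max_steps):  # bound the loop so a confused model cannot spin forever
        reply = call_llm(messages)
        if "answer" in reply:
            return reply["answer"]
        result = TOOLS[reply["tool"]](**reply["args"])  # execute the requested tool
        messages.append({"role": "tool", "content": result})
    raise RuntimeError("agent did not converge")
```

Bounding the number of steps and keeping tool results in the message history are the two pieces of context management that every variant of this loop shares.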
The First 30 Days
Onboard and Integrate:
Familiarize yourself with Flock's mission, investigative workflows, and how customers use our platform today
Pair with engineers across Cloud Software and ML teams to understand existing APIs, data models, and system architecture
Build relationships with key stakeholders to understand their capabilities and constraints. Meet with members of:
Machine Learning (agentic systems, model serving)
Data Engineering (investigative datasets, pipelines)
Platform teams (APIs, infrastructure)
Product and Design (customer needs, UX direction)
Ship Early and Learn:
Complete a first-day push to production
Pick up initial sprint tickets: bug fixes, small UX improvements, or API integrations
Participate in customer feedback sessions to understand investigator workflows and pain points
The First 60 Days
Build the Foundation:
Deliver core conversational UI components and establish patterns for chat interfaces
Implement backend orchestration for LLM interactions and tool calling
Stand up observability for the AI system (logging, tracing, basic metrics)
Work with ML team to integrate agentic workflows and refine prompt strategies
Demonstrate Velocity:
Own end-to-end features that connect UI, backend orchestration, and data integrations
Collaborate with Product to rapidly iterate based on early user testing
Propose technical improvements to chat quality, performance, or reliability
90 Days & Beyond
Drive Product Impact:
Lead development of a core Night Shift capability that demonstrably improves investigator efficiency
Represent the team in cross-functional initiatives, balancing zero-to-one experimentation with engineering best practices
Establish patterns for testing and quality in an evolving AI product
Shape the Direction:
Influence product roadmap through technical insights and customer feedback
Mentor team members on LLM integration patterns or full-stack best practices
Own a domain area (e.g., conversation management, data grounding, streaming architecture)
We want our interview process to be a true reflection of our culture: transparent and collaborative. Throughout the interview process, your recruiter will guide you through the next steps and ensure you feel prepared every step of the way. To check out our interview stages and how you should prepare, visit the experiences section on our careers page.
In this role, you'll receive a starting salary of $170,000-$185,000 as well as stock options. Base salary is determined by job-related experience, education/training, as well as market indicators. Your recruiter will discuss this in-depth with you during our first chat.
Flexible PTO: We seriously mean it, plus 11 company holidays.
Fully-paid health benefits plan for employees: including Medical, Dental, and Vision, plus an HSA match.
Family Leave: All employees receive 12 weeks of 100% paid parental leave. Birthing parents are eligible for an additional 6-8 weeks of physical recovery time.
Fertility & Family Benefits: We have partnered with Maven, a complete digital health benefit for starting and raising a family. Flock will provide a $50,000 lifetime maximum benefit for eligible adoption, surrogacy, or fertility expenses.
Spring Health: Spring Health offers a variety of mental health benefits, including therapy, coaching, medication management, and digital tools, all tailored to each individual's needs.
Caregiver Support: We have partnered with Cariloop to provide our employees with caregiver support.
Carta Tax Advisor: Employees receive 1:1 sessions with Equity Tax Advisors who can address individual grants, model tax scenarios, and answer general questions.
ERGs: We want all employees to thrive and feel like they belong at Flock. We offer three ERGs today: Women of Flock, Flock Proud, and Melanin Motion. If you are interested in talking to a representative from one of these, please let your recruiter know.
WFH Stipend: $150 per month to cover the costs of working from home.
Productivity Stipend: $300 per year to use on Audible, Calm, Masterclass, Duolingo, Grammarly, and more.
Home Office Stipend: A one-time $750 stipend to help you create your dream office.
If an offer is extended and accepted, this position requires the ability to obtain and maintain Criminal Justice Information Services (CJIS) certification as a condition of employment. Applicants must meet all FBI CJIS Security Policy requirements, including a fingerprint-based background check.
Flock is an equal opportunity employer. We celebrate diverse backgrounds and thoughts and welcome everyone to apply for employment with us. We are committed to fostering an environment that is inclusive, transparent, and collaborative. Mutual respect is central to how Flock operates, and we believe the best solutions come from diverse perspectives, experiences, and skills. We embrace our differences and know that we are stronger working together.
If you need assistance or an accommodation due to a disability, please email us at recruiting@flocksafety.com. This information will be treated as confidential and used only to determine an appropriate accommodation for the interview process.
At Flock Safety, we compensate our employees fairly for their work. Base salary is determined by job-related experience, education/training, as well as market indicators. The range above is representative of base salary only and does not include equity, sales bonus plans (when applicable) and benefits. This range may be modified in the future. This job posting may span more than one career level.
Coderslab.io is a global leader in technology solutions with more than 3,000 employees worldwide, including offices in Latin America and the United States. You will join diverse, high-performing teams on challenging automation and digital transformation projects. You will collaborate with experienced professionals and work with cutting-edge technologies to drive decision-making and operational efficiency at the corporate level.
Design, develop, and maintain data engineering solutions on AWS.
Implement components and processes using AWS Lambda, Amazon S3, Amazon API Gateway, and Amazon RDS.
Design and maintain infrastructure as code with AWS CloudFormation.
Manage automated deployments and CI/CD pipelines using GitHub Actions integrated with AWS.
Enforce best practices for versioning, testing, observability, and continuous deployment.
Monitor, optimize, and resolve incidents in data components deployed to production environments.
Collaborate with architecture, development, and business teams to translate functional requirements into technical solutions.
Solid experience with AWS Lambda, Amazon S3, AWS CloudFormation, Amazon API Gateway, and Amazon RDS.
Knowledge of deployment integration and automation with GitHub Actions targeting AWS.
Experience applying CI/CD practices and infrastructure as code (IaC).
Knowledge of security, permissions, and operational best practices on AWS.
Ability to develop and integrate APIs and data components in the cloud.
At least 3 years of experience in data engineering, cloud development, or equivalent roles.
Verifiable experience working in production AWS environments.
Professional degree in Computer Engineering, Computer Science, or a related field.
Certifications are desirable.
Remote, full-time.
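For illustration, the Lambda plus API Gateway combination in the responsibilities above usually means a handler that receives a proxy-integration event and returns a status/body dict. A minimal sketch, with the S3 write left as a comment so the example stays self-contained (all names are hypothetical):

```python
import json

# Sketch of an AWS Lambda handler behind API Gateway (hypothetical payload).
# With proxy integration, API Gateway delivers the HTTP request as an event
# dict and expects a dict with statusCode/body back; in production the handler
# would also persist data to S3 or RDS via boto3 (omitted here).

def handler(event: dict, context=None) -> dict:
    try:
        payload = json.loads(event.get("body") or "{}")
    except json.JSONDecodeError:
        return {"statusCode": 400, "body": json.dumps({"error": "invalid JSON"})}
    record_id = payload.get("id")
    if record_id is None:
        return {"statusCode": 400, "body": json.dumps({"error": "missing id"})}
    # e.g. s3.put_object(Bucket=..., Key=f"records/{record_id}.json", Body=event["body"])
    return {"statusCode": 200, "body": json.dumps({"stored": record_id})}
```

Validating the body and returning explicit 4xx responses keeps bad requests out of the data path before any AWS service is touched.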
At Lisit we create, develop, and implement software services focused on automation and optimization, with a constant emphasis on innovation and a passion for challenges. We support our clients with a consultative approach that integrates tools and practices to drive transformation goals through a comprehensive strategy of guidance and implementation. We are looking for a Senior Data Engineer to join a business-critical project, designing and delivering scalable data solutions that streamline processes and improve decision-making.
At Lisit we are looking for a Senior Data Engineer to join a business-critical project.
Your focus will be on:
Required profile:
Desirable stack:
We are looking for a professional with autonomy, analytical thinking, and the ability to design scalable data solutions 🚀
Work arrangement: hybrid (3 days on-site, 2 remote) in Santiago, in the downtown area. Fully remote may be considered for highly senior profiles.
If this sounds like you, we look forward to your application.
Company and Project Context
BNamericas is the leading Latin American business intelligence platform with 28 years of experience delivering news, project updates, and data on people and companies across strategic sectors such as Electric Power, Infrastructure, Mining & Metals, Oil & Gas, and ICT. We empower clients to access high-value information to make informed business decisions. The Engineering Lead will play a pivotal role in shaping a growing information platform used across industries and geographies, driving architecture, data workflows, and product evolution.
As part of a dynamic, multicultural team, you will drive high-performance software, data, and cloud initiatives, ensuring scalability, reliability, and security while fostering a culture of engineering excellence. This role combines hands-on development with strategic leadership to deliver a modular, scalable platform and to integrate cutting-edge AI-enabled capabilities where appropriate.
What you’ll bring
Proven experience in a senior or lead engineering role, ideally within SaaS or data/information platforms.
Strong hands-on development skills in JavaScript, Node.js, and PostgreSQL with a track record of scalable system design.
Solid understanding of DevOps, cloud infrastructure (AWS), and security best practices.
Experience with data architecture, including data warehousing and transformation pipelines.
Experience integrating third-party platforms (e.g., Appian) and working with internal data pipelines.
Familiarity with web scraping technologies, automation, and management of external vendors.
Exposure to or interest in AI-driven solutions (e.g., agent-based AI) is a strong plus.
Fluent English is required; Spanish and/or Portuguese are a strong plus.
Strong communication skills and the ability to collaborate with both technical and non-technical stakeholders.
A strategic mindset with the ability to balance hands-on delivery and broader technical direction.
An entrepreneurial attitude focused on quality, ownership, and impact.
Why you’ll love this role
You will shape and advance a growing information platform used across industries and geographies. This is a high-impact position with significant ownership, offering the chance to influence technical direction, data strategy, and product evolution while helping to build a culture of engineering excellence. You’ll work with a collaborative, diverse team in a dynamic market, and you'll have the opportunity to leave a lasting imprint on our platform and product roadmap.
At BNamericas, we foster an inclusive, diverse, creative, and highly collaborative work environment. Our team is dynamic, committed, and always willing to support one another, creating a positive and motivating workplace.
We offer a range of benefits, including referral bonuses for bringing in new talent, early finishes on special occasions such as national holidays and Christmas, opportunities for continuous learning and professional development, and a casual dress code that encourages authenticity and comfort at work.
We invite you to be part of a company that values diversity and work-life balance, and that promotes an empowered, goal-oriented, and passionate way of working. Join us!
Who We Are
Wingspan is the first payroll platform designed specifically for independent contractors and their businesses. We simplify onboarding, payments, and compliance for flexible workforces of all sizes, from solo operators to large enterprises.
We're a Series B startup based in NYC with distributed teams in the USA, Poland, and the UK, and backed by Andreessen Horowitz (a16z), Touring Capital, and a strong network of operators, including the CEOs and founders of Warby Parker, Harry's, Allbirds, Invision, and Flatiron Health.
About the Role
As a Software Engineer on the Payment Operations team, you will be responsible for the execution layer that ensures every dollar on Wingspan's platform is accounted for, reconciled, and moved accurately and on time. You will have direct access to production systems, a mandate to identify what's broken or inefficient, and the authority to engineer the fix.
This role reports to the Head of Payments & Compliance Operations and is based in Warsaw, Poland, with a remote work model.
What You'll Do
Qualifications & Requirements
- Knowledge of the Software Configuration Management process
- Administration of Windows Server operating systems (various versions)
- Installations on IIS, web services, and Windows services
- Basic knowledge of version control tools such as Git, TFS, and SVN
- Knowledge of SharePoint and Confluence
- Basic knowledge of operating systems: Linux, Windows Server
- Intermediate knowledge of SQL, Oracle, and DB2 databases
- Basic knowledge of Visual Studio
- Installation of SQL ETLs
- Experience deploying web, Windows, client-server, and Node.js applications
- Proficiency with SoapUI
- Knowledge of the PowerCenter tool
- Knowledge of the GoAnywhere tool
Vequity is building the world’s most robust, contextualized buyer intelligence network for investment banks, private equity firms, and strategic acquirers — a platform with over 2.1 million buyer profiles, each containing ~100 structured and inferred data fields. Our proprietary AI agents continuously enrich, infer, and structure buyer intelligence at scale.
We need a fullstack engineer who ships product features end-to-end, brings real fluency with AI development tooling, and will take ownership of deployment pipelines that currently lack a dedicated owner.
This is a two-sided role: half building features that users see, half making the engineering team faster and more reliable. If you’ve actually built with Claude Code, Cursor, GitHub Copilot, or similar tools — not just experimented — and you can prove it with real output, we want to talk.
What success looks like in year one
Core requirements
We pay competitively for the LATAM market and we’re transparent about it.
How we work
At Connectly we are building the future of conversational commerce in Latin America, with a focus on WhatsApp. Instead of asking shoppers to install yet another app, we offer retailers a 360 engagement platform inside an app everyone already has on their phone: WhatsApp.
We are a VC-backed Series B startup with a world-class team hailing from Meta, Google, Uber, and other top Silicon Valley companies. We operate as a hybrid company, with offices in Bogotá and San Francisco, and a remote-first culture everywhere else.
We are strong believers in passion, curiosity, and willingness to learn on the job. If you are in doubt, we encourage you to apply!
Connectly is an equal opportunity employer. We're committed to building a diverse, inclusive, and supportive workplace that is distributed around the world.
At TCIT, we are leaders in cloud software development with more than 9 years of experience. We work on projects that digitally transform organizations, from agricultural management and online auction systems to solutions for courts and certification monitoring for mining. We take part in international initiatives, collaborating with technology partners in Canada and other markets. Our team drives high-quality, sustainable solutions with a focus on social impact. We are looking to grow our team with talented people who want to develop and leave their mark on high-impact cloud projects.
We are looking for a Data Engineer with strong Python skills and demonstrable experience building cloud solutions. The ideal candidate combines technical ability with communication and teamwork skills to deliver high-performance data solutions.
Technical requirements:
Soft skills:
Experience with cloud data management tools (BigQuery, Snowflake, Redshift, Dataflow, Dataproc).
Knowledge of security and compliance in data environments; experience on projects with social impact or sector regulations.
Ability to write technical documentation in Spanish and English and to mentor colleagues.
Hybrid work arrangement.
Our offices are located in the Las Condes district, near the Manquehue metro station.
Gauntlet leads the field in quantitative research and optimization of DeFi economics. We manage market risk, optimize growth, and ensure economic safety for protocols facilitating most spot trading, borrowing, and lending activity across all of DeFi, protecting and optimizing the largest protocols and networks in the industry. We build institutional-grade vaults for decentralized finance, delivering risk-adjusted onchain yields for capital at scale. Designed by the most vigilant, quantitative minds in crypto and informed by years of research.
As of November 2025, Gauntlet manages over $2B in vault TVL, and optimizes risk and incentives covering over $42 billion in customer TVL. We continually publish cutting-edge research that informs our risk models, alerts, and analysis, and is among the most cited institutions, including academic institutions, in terms of peer-reviewed papers addressing DeFi as a subject. We're a Series B company with around 75 employees, operating remote-first with a home base in New York City.
As a company, we build institutional-grade vaults that deliver risk-adjusted DeFi yields at scale, powered by automated risk models and off-chain intelligence. Gauntlet curates strategies across Morpho, Drift, Symbiotic, Aera and more, with >$2B in vault TVL and a growing suite of Prime, Core and Frontier vaults.
Our mission is to drive adoption and understanding of the financial systems of the future. We operate with a trader's discipline and a risk manager's skepticism: size carefully, stress routinely, unwind decisively. The label equals the package equals the contents. No surprises, just predictable, reliable vaults.
Join our derivatives trading team and work on the key infrastructure that powers our product offering as well as trading systems. Work with a team with decades of experience in tech and finance to build the backbone of our high-performance derivatives trading strategies. You'll work close to trading, own critical infrastructure end-to-end, and ship systems that manage real capital in live crypto markets.
Please note at this time our hiring is reserved for potential employees who are able to work within the contiguous United States and Canada. Should you need alternative accommodations, please note that in your application.
The national pay range for this role is $165,000 - $205,000 plus additional On Target Earnings potential by level and equity in the company. Our salary ranges are based on paying competitively for a company of our size and industry, and are one part of many compensation, benefits and other reward opportunities we provide. Individual pay rate decisions are based on a number of factors, including qualifications for the role, experience level, skill set, and balancing internal equity relative to peers at the company.
At TIMining, we work to turn operational information from mining sites into actionable value through our control and monitoring platforms. This role joins the data team, contributing to the design, development, and operation of ETL pipelines that integrate diverse sources into TIMining's databases and products. You will be part of a project focused on operational continuity, algorithm calibration, and the automation of internal processes to optimize workflows for both the client and the team.
A degree in Data Science Engineering, Civil Engineering, or a related computing field. At least 2 years of experience in similar roles and demonstrable experience implementing ETL pipelines. We value advanced Python and SQL, hands-on experience deploying applications and working with containers, and experience orchestrating data with tools such as Apache Airflow or Prefect. Proficiency with version control (Git) and collaborative workflows, querying APIs, and advanced database work. Knowledge of Google Workspace and Office. Analytical skills, proactivity, and the ability to work both autonomously and in a team. Languages: native Spanish; English desirable (upper-intermediate).
We are looking for candidates with experience in technology projects and knowledge of the open-pit mining industry, as well as experience with cloud architectures (AWS, Azure, or GCP) and Infrastructure as Code (Terraform, CloudFormation).
Experience with:
- Implementing technology projects.
- Knowledge of the mining industry and its operations.
- Familiarity with agile methodologies and experience with Infrastructure as Code tools.
- Desirable: knowledge of monitoring solutions and large-scale data production environments.
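The ETL pipelines described above typically decompose into extract, transform, and load steps that an orchestrator such as Airflow or Prefect schedules as separate tasks. A framework-free sketch of that shape, with hypothetical data and a plain list standing in for the target database:

```python
# Framework-free sketch of the extract -> transform -> load shape that an
# orchestrator like Airflow or Prefect would schedule as independent tasks.
# All sources and field names here are hypothetical.

def extract() -> list[dict]:
    # In production: query an API or operational database.
    return [{"sensor": "s1", "value": "12.5"}, {"sensor": "s2", "value": "bad"}]

def transform(rows: list[dict]) -> list[dict]:
    clean = []
    for row in rows:
        try:
            clean.append({"sensor": row["sensor"], "value": float(row["value"])})
        except ValueError:
            pass  # real pipelines route unparseable rows to a dead-letter store
    return clean

def load(rows: list[dict], sink: list) -> int:
    sink.extend(rows)  # stand-in for a database bulk insert
    return len(rows)

def run_pipeline(sink: list) -> int:
    return load(transform(extract()), sink)
```

Keeping each step a pure function over its inputs is what makes the pipeline easy to retry and backfill once it is wrapped in orchestrator tasks.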
We offer an environment focused on innovation in the mining industry, with opportunities for professional growth and multidisciplinary teamwork. If you fit the profile, we invite you to join TIMining and contribute to the digital transformation of mining operations.
About Coderio
Coderio designs and delivers scalable digital solutions for global companies. With a solid technical foundation and a product-oriented mindset, our teams lead complex projects from architecture through execution. We value autonomy, clear communication, and technical excellence, collaborating closely with international teams and partners to build technology that makes an impact.
More information: http://coderio.com
We are looking for a backend engineer with independent technical judgment, capable of designing event-driven microservices that handle millions of requests without blinking. You will own the services layer and the data pipelines, making critical telemetry available for analytics. You must be able to engage technical stakeholders such as Data Engineering teams with sound judgment and design scalable solutions under pressure.
What you can expect from this role (Responsibilities)
This is a role of total technical ownership: you design, decide, build, operate, and take responsibility for critical domains of the platform.
Requirements
5+ years of backend development experience (seniority measured by autonomy and proactivity).
3+ years of solid experience with Node.js and TypeScript.
3+ years operating in AWS Serverless environments (Lambda, API Gateway, SQS, SNS).
2+ years of basic data engineering experience and relational database modeling (PostgreSQL).
Nice to have
1+ year of experience with TimescaleDB or other time-series databases.
Prior experience in IoT or industrial telemetry projects.
Knowledge of infrastructure as code (Terraform/CDK).
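As a sketch of the serverless patterns named above: an SQS-triggered Lambda can return a partial batch response so AWS retries only the messages that failed, rather than the whole batch. The processing logic and field names are hypothetical; the `batchItemFailures` shape requires `ReportBatchItemFailures` enabled on the event source mapping:

```python
import json

# Sketch of an SQS-triggered Lambda consumer using partial batch responses.
# The event shape matches the SQS trigger payload; what `process` checks for
# (a telemetry record with a device_id) is a hypothetical example.

def process(body: dict) -> None:
    if "device_id" not in body:
        raise ValueError("telemetry record missing device_id")

def handler(event: dict, context=None) -> dict:
    failures = []
    for record in event.get("Records", []):
        try:
            process(json.loads(record["body"]))
        except Exception:
            # Report only this message's ID; AWS re-delivers just the failures.
            failures.append({"itemIdentifier": record["messageId"]})
    return {"batchItemFailures": failures}
```

Without partial batch responses, one bad message forces the entire batch back onto the queue, which is exactly the kind of bottleneck this role is asked to design around.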
Soft Skills
Extreme ownership: the ability to take a domain and drive its resolution end to end.
Communicating with judgment: the ability to challenge and collaborate with technical stakeholders (data teams).
Proactivity: doesn't wait for instructions; identifies bottlenecks and proposes solutions.
Benefits
Remote work
Participation in a high-impact, strategic regional project.
Collaboration with an international team and solid technical leadership.
Opportunities for professional growth within digital transformation projects.
Why join Coderio?
We are remote-first and passionate about technology, collaborative work, and fair compensation. We offer an inclusive, challenging environment with real growth opportunities. If building impactful solutions on global finance and HR projects motivates you, we are waiting for you. Apply now.
The Company You'll Join
At Rebuy, we're on a mission to revolutionize shopping with intelligent, personalized experiences that wow customers around the globe. As a fully remote team, we power some of the fastest-growing DTC brands like Aviator Nation, Liquid Death, Magic Spoon, Blenders, Laird Superfoods, Primal Kitchen, and many more.
We believe in ownership, drive, and empathy, and strongly uphold that every team member plays a vital role in shaping the future of intelligent commerce. Our culture thrives on collaboration, creativity, and genuine passion. We don't just build great tech - we build lasting partnerships, a strong community, and a place where people love to work.
The Problems You'll Solve
Rebuy and its team members continually strive to create a high-spirited, intentional work environment that stresses performance, productivity, collaboration, and merit.
As a Sr. Software Engineer, Back-End, you'll own some of the most consequential systems at Rebuy. Your primary anchor is our billing and payments infrastructure: the engine that determines how merchants are charged, how partners get paid, and how financial balances flow across our entire product suite. This is genuinely complex financial engineering. It requires deep PHP and Go expertise, careful architecture, and judgment that no automated tool can replicate. Merchant billing runs daily, touches real revenue, and demands someone who understands both the technical and business dimensions of every decision.
Alongside billing, you'll grow into a broader platform portfolio: the partner portal, data ETL pipelines, customer-facing APIs, and reporting infrastructure that power the business. And in the near term, you'll play a critical role in a significant technical migration: moving our legacy CodeIgniter 2 codebase to CodeIgniter 4, including work tied to increasing our enterprise market share. This migration requires hands-on PHP expertise and cannot be deferred.
You won't be handed a sprawling list of things you must do on day one. You'll be trusted to grow into this role, and rewarded when you do.
Billing & Payments Architecture: Design and build Rebuy's centralized billing system that handles merchant billing, partner payments, and customer-facing charges. Architect the integration layer that allows payment balances to be applied across Rebuy's full suite of services. Tackle genuinely complex financial engineering challenges with PHP and Go at scale.
Build Robust APIs: Design and implement secure, well-structured APIs in PHP and Go to power billing events, payment processing, and financial data flows across our platform and Shopify integrations.
Legacy Modernization: Lead and contribute to the migration of our CodeIgniter 2 codebase to CodeIgniter 4. This is high-priority, near-term work with real business dependencies, including enterprise partnership commitments, and requires a PHP engineer with the experience and judgment to do it right.
Agentify the Platform: Partner with product and engineering to identify where AI agents can automate workflows, surface insights, and guide merchants through our product. Build the backend systems (APIs, data pipelines, and event hooks) that enable intelligent automation. This is genuinely new territory and one of the most exciting growth vectors for Rebuy's product.
Platform Breadth: Our team owns more than billing and payments; we also support a partner portal, data ETL pipelines, customer-facing reporting APIs, and the infrastructure that makes data flow reliably across the business. You won't be responsible for all of it on day one, but you'll have genuine opportunities to grow into the areas that most interest you. Engineers here don't get siloed; they get context.
Engineering Best Practices: Contribute significantly to the engineering culture at Rebuy by establishing, documenting, and promoting best practices. Lead initiatives to introduce and standardize frameworks and tools that increase development efficiency and maintainability.
Security & Compliance: Stay current with the latest security trends, vulnerabilities, and best practices as they apply to billing and payment systems. Champion security-first engineering across authentication, authorization, data encryption, and compliance considerations in everything you build.
PHP Technical Leadership: Serve as a key technical anchor for PHP across the engineering organization. Rebuy's codebase has significant PHP depth and relatively few engineers with that expertise. You'll lead code reviews, share knowledge actively, and help raise the PHP competency of the broader team.
Quality Assurance: Conduct quality checks on deliverables to ensure code, setup, and configurations meet expected results. Ensure that all features meet high standards of quality and performance before deployment.
Team Collaboration: Engage actively in building a strong team culture. Work closely with the Product Owner, Engineering Manager, and peers across billing, payments, partner tools, and data infrastructure to define requirements, estimate effort, and drive solutions forward. This is a team where your voice matters — you won't just be handed tickets. Assist the Support team in triaging and resolving high-priority production issues.
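To make the balance-application idea in the responsibilities above concrete, here is a minimal, hypothetical sketch of applying one prepaid balance across per-service charges. This is not Rebuy's actual billing code; the function name and service names are invented for illustration only.

```python
from decimal import Decimal


def apply_balance(
    balance: Decimal, charges: dict[str, Decimal]
) -> tuple[Decimal, dict[str, Decimal]]:
    """Apply a shared prepaid balance across per-service charges in order,
    returning the leftover balance and the amount still owed per service."""
    remaining = balance
    owed: dict[str, Decimal] = {}
    for service, amount in charges.items():
        applied = min(remaining, amount)  # never apply more than remains
        remaining -= applied
        owed[service] = amount - applied
    return remaining, owed


# Hypothetical usage: a $50 balance covers the first charge fully
# and the second charge partially.
leftover, owed = apply_balance(
    Decimal("50.00"),
    {"smart_cart": Decimal("30.00"), "email": Decimal("30.00")},
)
```

Using `Decimal` rather than floats is the standard choice for money, since binary floats cannot represent most cent values exactly.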
Technologies We Use:
AI: Anthropic Enterprise Claude Code / Co-work, Cursor, and an ad hoc AI tools budget.
Frontend Technologies: React, TypeScript, GraphQL, VueJS, Angular
Backend technologies: PHP, Go, MySQL, BigTable, Elasticsearch
Other Tools: Jira, Bitbucket, Confluence, Google Suite, Slack, 1Password, Notion
Who You Are
We're stoked to meet you and learn more about you, your experience, and your interest in joining our team.
The Hard Skills:
Experience building or maintaining billing, payments, or financial systems — including working with payment processors, subscription engines, invoicing pipelines, or similar financial infrastructure in a production SaaS environment.
Educational background in CS/Engineering or a similar field.
5+ years of hands-on experience building backend applications with PHP and Go, with a proven track record of delivering complex, high-traffic systems.
Experience designing and implementing secure, scalable, and maintainable RESTful APIs in PHP and Go, with a deep understanding of API design patterns, versioning, and performance optimization.
Experience with cloud-based technologies, preferably GCP.
Strong understanding of what makes a SaaS environment performant.
Experience in a Scrum/Agile environment.
Experience with the Atlassian suite, including Jira and Bitbucket.
Solid understanding of security fundamentals as they apply to backend and financial systems — including secure coding practices, authentication/authorization patterns, data encryption, and awareness of current vulnerability trends (e.g., OWASP Top 10).
The Soft Skills:
A collaborative mindset and work approach with the ability to lead projects and mentor others.
The ability to thrive in a fast-paced environment with a high level of autonomy and responsibility.
Excellent communication skills, especially being able to explain technical concepts to both technical and non-technical audiences.
Genuinely curious about the intersection of engineering and business. You care about the downstream impact of what you build — not just that the code works, but that it moves the company forward.
Who You'll Meet With
Now let's get into who you'll meet during our interview process! After you submit your application and it's been reviewed by our team, we will reach out to invite you to meet with us. From there, you can expect an interview process similar to this:
An introductory call with someone from the Talent Acquisition team for about 30 min.
Interview with the Hiring Manager to learn more about you and answer your questions about Rebuy and this role.
A coding challenge and whiteboarding exercise to show us your skillset during a live panel interview with a few team members.
Short final interview with our CEO and COO where you'll get to learn more about Rebuy.
The Perks You'll Enjoy
Rebuy is a fully remote company across the U.S. and Canada that aims to provide all of our team with the resources, support and flexibility they need to thrive in their roles.
Team: We've got the best, brightest, most brilliant team members who are excited to meet you! We also like to think we have a good sense of humor.
Remote Work: With a strong internet connection, you're able to work from anywhere within the U.S. and Canada.
PTO: We offer a flexible vacation policy, a generous holiday schedule, parental leave, and a sick policy. There are other policies too, like a birthday holiday!
Amazing Benefits: 100% free health, dental, and insurance for you and your family. Don't worry, there's even more!
Retirement Plans: For our U.S. employees we offer 401(k) retirement plans, and for our Canadian employees we offer TFSA and RRSP retirement plans. You'll also enjoy a 3% contribution of your gross salary, no matter where you're located!
Our compensation reflects the cost of labor across several U.S. geographic markets, and we pay differently based on those defined markets. The U.S. pay range for this position is $130,000 - $180,000 USD annually. Pay within this range varies by work location and may also depend on job-related knowledge, skills, and experience. Your recruiter and hiring manager can share more about the specific salary range for the job location during the hiring process.
Disclosures:
Equal Opportunity Statement
Rebuy, Inc. is proud to be an Equal Employment Opportunity employer. We do not discriminate based on gender, race or color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law.
Rebuy, Inc. aims to make rebuyengine.com accessible to any and all users. If you have a disability or special need that requires accommodation to navigate our website or complete the application process, email hr@rebuyengine.com.
We are Artefact, a leading global consultancy that creates value through data and AI technologies. We transform data into business impact across the entire value chain of organizations, working with clients of all sizes, industries, and countries. We are proud to be enjoying significant growth in the region, which is why we want you to join our team of highly skilled professionals to tackle complex problems for our clients.
Our culture is highly collaborative, with an environment of constant learning, where we believe innovation and solutions come from every member of the team. This drives us to action and to produce high-quality, scalable deliverables.
Ruzora is hiring a Senior Data Engineer to join our partner companies building modern data infrastructure for AI-native U.S. startups. You will design and build the data pipelines, warehouses, and analytics layer that power business intelligence and machine learning workflows.
This role is 100% remote: candidates work from anywhere in LATAM, with no office, no relocation, and no travel expected. Ruzora is a fully distributed company with no physical office, so applicants do not need to be located in or near any specific city.
We are looking for a Senior Data Engineer with strong end-to-end ownership of data infrastructure and production-grade pipelines.
Job Title: Sr Software Engineer
Department: Product Engineering
Position Description:
The Sr Software Engineer will be working with other engineers, architects, and product managers to develop software on our philanthropic solutions software platform. This person must be self-motivated and results-oriented with strong programming skills across modern enterprise software architectures. The Sr Software Engineer is expected to work well in an agile development environment to mentor and develop those around them and build superior products.
Duties & Responsibilities:
About AirDNA
We built AirDNA to solve a problem: how do you make smart short-term rental decisions when there's too much guesswork and not enough good data?
What started in a garage in California in 2015 is now a global team helping thousands of people — from aspiring hosts to major real estate firms — make confident choices about where to invest, what to charge, and how to grow.
Our mission is simple: give people the tools they need to build freedom through short-term rentals. Whether that means buying their first Airbnb or scaling a portfolio, we're here to help unlock financial independence and growth.
We track 10M+ listings in 120,000 markets, and our platform is trusted by users in over 100 countries. It's big data, made useful.
In 2023, AirDNA acquired Uplisting, a powerful property management software that helps hosts and operators manage listings across Airbnb, Vrbo, and other platforms. With features like channel management, automated messaging, dynamic pricing, task coordination, and financial reporting, Uplisting expands our mission to support every stage of the short-term rental journey — from investment to operations.
The AirDNA team
We're a curious, driven, and kind group of humans who genuinely love what we do. Our values — Happy, Hungry, Honest — guide how we show up for our customers and for each other.
Want to see what that looks like in action? You'll get a feel once you meet us.
We welcome applicants from all backgrounds and encourage you to apply even if you don't check every box. Passion, potential, and perspective matter here.
The Role
AirDNA is looking for a Frontend Tech Lead to help shape the future of our product experience and technical direction. While this role is full-stack, you will be the technical driver for our frontend guild, pushing forward our React/TypeScript architecture, design systems, and developer experience. You'll partner with Product, Design, and Engineering leaders to deliver beautiful, performant, and scalable customer-facing applications. As a Tech Lead, you'll guide technical decisions across squads, mentor engineers, and help set the long-term direction of our frontend practice.
AirDNA seeks to attract the best-qualified candidates who support the mission, vision and values of the company and those who respect and promote excellence through diversity. We are committed to providing equal employment opportunities (EEO) to all employees and applicants without regard to race, color, creed, religion, sex, age, national origin, citizenship, sexual orientation, gender identity and expression, physical or mental disability, marital, familial or parental status, genetic information, military status, veteran status or any other legally protected classification. The company complies with all applicable state and local laws governing nondiscrimination in employment and prohibits unlawful harassment based on any of the aforementioned protected classes at every location in which the company operates. This applies to all terms, conditions and privileges of employment including but not limited to: hiring, assessments, probation, placement, benefits, promotion, demotion, termination, layoff, recall, transfer, leave of absence, compensation, training and development, social and recreational programs, education assistance and retirement.
We are committed to making our application process and workplace accessible for individuals with disabilities. Upon request, AirDNA will reasonably accommodate applicants so they can participate in the application process unless doing so would create an undue hardship to AirDNA or a threat to these individuals, others in the workplace or the company as a whole. To request accommodation, please email compliance@airdna.co. Please allow for 24 hours to process your request.
By applying for the above position, you will confirm that you have reviewed and agreed to our Data Privacy Notice for Applicants.
Senior Site Reliability Engineer - AI Infrastructure
Location: Global Remote / San Francisco · Full-Time
About Andromeda
Andromeda Cluster was founded by Nat Friedman and Daniel Gross to give early-stage startups access to the kind of scaled AI infrastructure once reserved only for hyperscalers.
We began with a single managed cluster — but it filled almost instantly. Since then, we've been quietly building the systems, network, and orchestration layer that makes the world's AI infrastructure more accessible.
Today, Andromeda works with leading AI labs, data centers, and cloud providers to deliver compute when and where it's needed most. Our platform routes training and inference jobs across global supply, unlocking flexibility and efficiency in one of the fastest-growing markets on earth.
Our long-term vision is to build the liquidity layer for global AI compute — a marketplace that moves the infrastructure and workloads powering AGI, not dissimilar to the flows of capital in the world's financial markets.
We are expanding to new frontiers to find the brightest people working in AI infrastructure, research, and engineering.
The Role
This is not a generalist SRE role.
You will design, operate, and debug large-scale GPU infrastructure used for distributed training and inference, working directly with customers pushing the limits of modern AI systems.
We're looking for engineers who have personally run GPU clusters in production, understand the failure modes of distributed training, and can reason about performance from the network fabric through the kernel to the framework.
What You'll Own
GPU Cluster Architecture: Design and evolve multi-provider, multi-region GPU compute clusters optimized for large-scale training. Make topology-aware scheduling, networking, and storage decisions that directly impact training throughput and cost efficiency.
Customer Technical Partnership: Serve as the primary technical point of contact for customers running large-scale training workloads. Onboard, troubleshoot, and optimize, often in real time.
Reliability & Performance Engineering: Define SLOs and error budgets that account for the unique failure modes of GPU infrastructure (ECC errors, NVLink degradation, NCCL timeouts). Own capacity planning across heterogeneous GPU fleets optimized for training throughput.
Networking & Fabric Health: Ensure the health and performance of high-speed interconnects (InfiniBand, RoCE, NVLink) that underpin distributed training. Diagnose and resolve fabric-level issues that degrade collective operations.
Observability: Build deep visibility into GPU utilization, memory pressure, interconnect throughput, training job performance, and hardware health. Go well beyond standard infrastructure metrics.
Automation & Tooling: Build production-grade automation for cluster provisioning, GPU health checks, job scheduling, self-healing, and firmware/driver lifecycle management.
Incident Leadership: Lead incident response for complex, multi-layer failures spanning hardware, networking, orchestration, and ML frameworks. Drive blameless postmortems and systemic fixes.
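One small, concrete piece of the SLO work described above is translating an availability target into an error budget and tracking how much of it has been spent. The sketch below is illustrative only (the function names and the 30-day window are assumptions, not Andromeda's actual tooling):

```python
def error_budget_minutes(slo_target: float, window_days: int = 30) -> float:
    """Minutes of allowed unavailability in the window for a given SLO target.

    e.g. a 99.9% target over 30 days allows 43,200 * 0.001 = 43.2 minutes.
    """
    total_minutes = window_days * 24 * 60
    return total_minutes * (1.0 - slo_target)


def budget_remaining(
    slo_target: float, downtime_minutes: float, window_days: int = 30
) -> float:
    """Fraction of the error budget still unspent (negative means it is blown)."""
    budget = error_budget_minutes(slo_target, window_days)
    return (budget - downtime_minutes) / budget


# Hypothetical check: after 21.6 minutes of NCCL-timeout-induced downtime,
# half of a 99.9% monthly budget remains.
remaining = budget_remaining(0.999, 21.6)
```

In practice the downtime input would come from monitoring, and GPU-specific failure modes (ECC errors, NVLink degradation) would feed it via their own burn-rate alerts.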
What We're Looking For
GPU Systems Expertise: Deep, hands-on experience operating large-scale GPU clusters (NVIDIA A100/H100/B200 or equivalent). You understand GPU memory hierarchies, ECC behavior, thermal throttling, and hardware failure modes from direct experience, not documentation.
High-Performance Networking: Production experience with InfiniBand, RoCE, or NVLink fabrics in the context of distributed training. You can diagnose why an all-reduce is slow, identify a degraded link in a fat-tree topology, and reason about congestion control at scale.
Distributed Training & ML Frameworks: Working knowledge of how large training jobs actually run — NCCL, CUDA, PyTorch distributed, DeepSpeed, Megatron, FSDP, or similar. You don't need to write the models, but you need to understand what's happening at the systems level when a 1,000-GPU training run stalls.
Linux & Systems Internals: Expert-level Linux knowledge: kernel tuning, driver management (NVIDIA drivers, CUDA toolkit), cgroup/namespace internals, performance profiling at the syscall and hardware level.
Kubernetes & Orchestration: Strong experience running Kubernetes in production with GPU workloads, including device plugins, topology-aware scheduling, multi-cluster federation, and custom operators. Experience with Slurm or other HPC schedulers is equally valued.
Automation & Software Engineering: Strong engineering skills in Python, Go, or Bash. You build production-grade tools and services, not just scripts. Infrastructure-as-Code proficiency (Terraform, Helm, Ansible, or equivalent).
Observability & Monitoring: Hands-on experience building monitoring and alerting for GPU infrastructure, not just Prometheus/Grafana basics, but GPU-specific telemetry (DCGM, nvidia-smi, fabric manager metrics) integrated into actionable dashboards.
Incident Management: Proven track record leading incident response for complex distributed systems where the failure could be in hardware, firmware, networking, drivers, orchestration, or application code and you need to narrow it down fast.
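As a toy illustration of the fabric-diagnosis skill described above: when an all-reduce is slow, a common first cut is to flag links whose measured bandwidth sits well below the fleet median. The link names, threshold, and bandwidth figures below are invented for the example; this is not a real fabric-manager integration:

```python
from statistics import median


def degraded_links(
    link_bw_gbps: dict[str, float], threshold: float = 0.8
) -> list[str]:
    """Flag links whose measured bandwidth falls below a fraction of the
    fleet median -- a crude outlier check, not a full fabric diagnosis."""
    med = median(link_bw_gbps.values())
    return sorted(
        link for link, bw in link_bw_gbps.items() if bw < threshold * med
    )


# Hypothetical per-link bandwidth measurements: one link is running far
# below its peers and would drag down every collective that crosses it.
links = {
    "leaf1-spine1": 390.0,
    "leaf1-spine2": 396.0,
    "leaf2-spine1": 120.0,
    "leaf2-spine2": 392.0,
}
suspects = degraded_links(links)
```

Real diagnosis would pull counters from the fabric manager and correlate with NCCL timing, but the median-outlier idea is the same.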
Strong Candidates May Have
Distributed Storage: Experience with high-performance parallel file systems (VAST, Weka, Lustre, GPFS) and the checkpoint I/O and data-loading bottlenecks that come with large training runs.
Training Optimization: Experience profiling and optimizing distributed training performance: identifying stragglers, tuning collective communication strategies, improving MFU (Model FLOPs Utilization), and reducing idle GPU time across large runs.
Cluster Buildout & Hardware: Hands-on involvement in physical cluster design: rack layout, power/cooling constraints, network topology design, and hardware validation/burn-in at scale.
Team Leadership: Experience leading or mentoring a team of infrastructure engineers. We're growing and need people who raise the bar for everyone around them.
Why You'll Love It Here
This is a high-impact, senior builder's role. You'll have significant ownership and autonomy to shape how our systems run at a foundational level, working directly with customers and providers while architecting the infrastructure backbone for reliable, scalable AI compute. You'll influence technical direction and help define what world-class AI infrastructure operations look like.
Andromeda Cluster is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.
Assetplan is a leading residential rental company with a presence in Chile and Peru, managing more than 40,000 properties and operating more than 90 multifamily buildings. The data team plays a key role in optimizing and steering internal processes through data analysis and visualization solutions, supporting strategic decision-making across the company. This role focuses on designing, developing, and optimizing ETL processes, creating value through reliable, well-governed data.
Coderslab.io is looking to hire a Data Specialist
About the client and the project: the company delivers innovative technology solutions and provides opportunities for continuous learning under the guidance of experienced professionals and cutting-edge technologies. The goal is to deliver value in key business processes and improve operational efficiency through SAP.
Tech stack
ClickHouse Cloud, Power BI, Python, JavaScript / Node.js, AWS S3, AWS Glue, Apache Iceberg, SQL, SAP Business One, Retail Pro
Contractor engagement
Remote
Salary in USD
Nine-67 is building a fast-moving AI capability for enterprise clients. This role sits at the intersection of product, data, and execution, directly partnering with the CEO to design, build, and deploy AI-driven applications in real client environments. You will contribute to shaping a scalable, high-quality AI platform by delivering end-to-end solutions that combine frontend, backend, and data workflows in rapid iterations.
As a key player in a fast-build environment, you’ll help transform ambiguous business problems into working systems, create internal tools and automation, and integrate with client systems and data sources to drive real business value.
• Build and deploy AI-driven applications end-to-end (frontend, backend, data workflows) with speed and quality.
• Translate business problems into functioning AI systems with minimal direction.
• Collaborate directly with leadership and clients to iterate on real use cases.
• Develop internal tools, agents, and automation to boost efficiency.
• Integrate with APIs, data sources, CRM systems, data warehouses, and client environments.
• Continuously improve speed, reliability, and reusability of what we build.
• Strong builder mindset—ship fast and learn by doing.
• Experience with AI tools and frameworks (LLMs, APIs, prompt systems, agents).
• Comfort across the stack; you don’t need to be perfect, but you can figure it out.
• Ability to work in ambiguity without waiting for detailed specs.
• Strong problem-solving and product intuition.
• High ownership and accountability.
• Experience with Cursor, Vercel, Supabase, or similar modern stacks.
• Experience building internal tools or client-facing applications.
• Exposure to data pipelines, analytics, or CRM systems.
• Prior startup or consulting experience.
• Direct collaboration with leadership on high-impact projects.
• Build real systems used by enterprise clients.
• Opportunity to shape and scale AI capability from the ground up.
Revinate is one of the largest and most innovative providers of direct revenue-generating solutions in the hospitality industry. Revinate's mission is to deliver hoteliers scalable direct revenue and profits from data-driven solutions that cultivate deeper relationships with guests. Revinate's Direct Booking Platform helps capture, convert and retain guests with strategies and services that maximize direct booking revenue. This combination maximizes the lifetime value of each guest through personalized and targeted campaigns across the guest journey. Revinate Marketing has won 1st place for Hotel CRM & Email Marketing in the HotelTechAwards five years in a row!
About Us
Revinate is an innovative hospitality tech company that is revolutionizing how customers manage their operations and enhance the guest experience. Our solutions leverage advanced technology, data analytics, and automation to improve efficiency and drive customer happiness in the hospitality industry.
The Opportunity
We are seeking an experienced and visionary Director, Data Engineering to lead our Data Platform initiatives. In this critical role, you will be responsible for defining the strategy, architecture, and execution of our end-to-end data ecosystem, encompassing data ingestion pipeline, operational data stores, our evolving data lakehouse, and robust data APIs. You will build and lead a high-performing team of data engineers, fostering a culture of innovation, collaboration, and operational excellence. This role requires not only deep technical expertise but also a strong understanding of how data can drive business value, including leveraging data science and machine learning to optimize our operations.
Key Responsibilities
Strategic Leadership: Define and execute the long-term vision and roadmap for our data platform, aligning with overall business objectives and technology strategy.
Team Leadership & Development: Recruit, mentor, and lead a talented team of data engineers, fostering their growth and ensuring best practices in data engineering.
Data Pipeline: Oversee the design, development, and maintenance of scalable and reliable real-time data ingestion pipelines, ensuring data quality, accuracy, and timely delivery.
Operational Data Stores: Lead the architecture and management of our operational data stores, optimizing for performance, reliability, and accessibility to support critical business applications.
Data Lakehouse Development: Drive the strategic evolution and implementation of our data lakehouse, enabling unified data access, advanced analytics, and machine learning initiatives.
Data API Development: Champion the design and development of secure, performant, and well-documented data APIs to facilitate data consumption across various applications and user groups.
Data Governance & Quality: Enforce data governance policies, standards, and procedures to ensure data integrity, security, privacy, and compliance.
Operational Efficiency through Data Science/ML: Collaborate closely with data science and analytics teams to identify opportunities where data science and machine learning can be applied to optimize internal operations, automate processes, and improve efficiency within the data platform itself (e.g., predictive maintenance for pipelines, intelligent resource allocation).
Performance & Scalability: Ensure the data platform is highly performant, scalable, and resilient, capable of handling growing data volumes and complex analytical workloads.
Technology Evaluation: Evaluate and recommend new data technologies, tools, and platforms to enhance our data capabilities and stay ahead of industry trends.
Cross-Functional Collaboration: Partner effectively with engineering, product, analytics, data science, and business teams to understand data requirements and deliver impactful solutions.
Monitoring & Support: Establish robust monitoring, alerting, and on-call support processes for all data systems, ensuring high availability and rapid issue resolution.
Interview Process
We're excited you're considering a career with Revinate! Our goal is to ensure this is the right opportunity for you, while also determining if you're the right fit for our team. The interview process for this role is designed to be a two-way street, where you'll get to know us just as we get to know you.
- Recruiter Screen - 30 min
- Technical Interview - 60 min
- Cross Functional Interview - 30 min
- Final Interview - 30 min
Revinate values the flexibility of a remote workforce and the benefits of localized hiring. We focus on specific cities to foster local communities and enhance team cohesion, allowing employees to collaborate, attend local events, and build a strong sense of community and company culture.
Candidates must be located in the city listed in the job application. Thank you!
Revinate is not open to third-party solicitation or resumes for our posted FTE positions. Resumes received from third-party agencies that are unsolicited will be considered complimentary.
Important Security Alert
We have been made aware of fraudulent activities involving individuals impersonating our HR team and offering fake job opportunities. Please be vigilant and ensure your safety by verifying all job offers.
For Authentic Opportunities: Only refer to our official careers page on our company website. Your security is our priority. If you encounter any suspicious activity, please report it immediately. Stay safe and secure! You can confirm or inquire with any questions by reaching out to recruiting@revinate.com
AI and Hiring
Please note that interviews at Revinate will be recorded using brighthire.ai as we continue to build more structure into our interview processes -- the best way to eliminate unconscious bias! We encourage our interviewers to focus on candidates and the conversation rather than on taking notes; instead, we rely on brighthire.ai to take notes for us. If you're uncomfortable with recording your interview, please let us know and we'll opt you out.
Excited?! Want to learn more? Apply Now!
Our Core Values:
One Revinate - United & Strong, on a single mission together
Built on Trust - It's the foundation of everything we do
Expect Amazing - We think, dream & deliver big
Customer Love -- When the customer wins, we win
Make it Simpler -- Apply it to everything we do
Hungerness -- Feel it, follow it, be relentless about our success
Grounded in Gratitude - We're glad to be here & make the most of every day
Revinate Inc. provides Equal Employment Opportunity to all employees and applicants for employment without regard to race, color, religion, gender identity or expression, sex, sexual orientation, national origin, age, disability, genetic information, marital status, amnesty, or status as a covered veteran in accordance with applicable federal, state and local laws. Revinate complies with applicable state and local laws governing non-discrimination in employment in every location in which the company has facilities.
If you are in need of accommodation or special assistance to navigate our website or to complete your application, please send an e-mail with your request to recruiting@revinate.com.
By submitting your application you acknowledge that you have read Revinate's Privacy Policy (https://www.revinate.com/privacy/)
Offshore CFO (Multifamily Real Estate) — Job Description
Overview
We are hiring a CFO to lead the finance and accounting function for a U.S.-based multifamily owner/operator. This role owns financial statements, monthly close, cash management, budgeting/forecasting, reporting, and controls across multiple properties and entities. The right candidate is tech-forward and excited to modernize finance through automation, AI, and API-driven integrations.
Key Responsibilities
• Monthly close & financial statements: Own timely, accurate close and delivery of P&L, balance sheet, and cash flow with supporting schedules.
• Reconciliations & controls: Ensure complete bank/GL reconciliations, AR/AP tie-outs, accruals, prepaids, CIP/fixed assets, intercompany, and documented processes.
• Management reporting: Produce property/portfolio reporting including budget vs. actual, variance explanations, and key operating KPIs.
• Cash management: Maintain daily cash visibility and a rolling 13-week cash forecast; manage payment cadence, approvals, reserves, and liquidity planning.
• Budgeting & forecasting: Lead annual budgets and reforecasts (revenue, payroll, utilities, repairs, insurance, taxes, CapEx).
• CapEx / renovation tracking: Track project budgets, spend, and ROI support (CIP and unit-level economics as applicable).
• Lender / compliance support: Manage covenant reporting, lender deliverables, and coordination with CPAs/tax/audit teams.
• Section 8 / Housing Authority & municipal compliance: Support affordable housing reporting and compliance (as applicable), including coordination with Housing Authorities/cities, audits, and required documentation.
• Team leadership: Lead and develop offshore accounting staff (AP/AR/accountants); set SOPs, close calendar, and review standards.
• Tech/automation leadership: Implement and optimize workflows using AI tools, automation, and API connections across property management, accounting, reporting, and data pipelines.
Requirements (Must-Have)
⢠Minimum 8+ years of experience as a CFO (or senior finance leader) in real estate; multifamily strongly preferred.
⢠Expert in financial statements, close management, reconciliations, cash forecasting, and internal controls.
⢠Strong ability to deliver decision-ready reporting (budget vs. actual, variance analysis, KPIs).
⢠Bilingual proficiency: fluent professional English and Spanish (written and spoken).
⢠Property management software experience; ResMan preferred.
⢠Expense management software experience with Brex or Ramp; Brex preferred.
⢠Experience working with Section 8 programs, Housing Authorities, and municipal/city requirements (as applicable),
including compliance reporting and audit support.
⢠Strong understanding of real estate legal entities and structures (LLCs/LPs/SPVs), intercompany accounting, and
entity-level reporting.
⢠Tech-forward mindset: comfortable implementing automation/AI and working with APIs/integrations (no coding
required, but must be fluent with modern tools).
⢠Advanced Excel/Google Sheets skills; comfortable building standardized reporting templates and dashboards.
⢠Ability to work offshore with consistent overlap with U.S. business hours and days (ET/CT preferred).
Preferred
⢠Multi-entity consolidation, lender compliance/covenants, and renovation-heavy portfolios.
⢠Experience with BI/reporting tools (Power BI/Tableau) and modern AP/bill pay tools.
Working Model
⢠Remote / Offshore (LATAM preferred for timezone overlap)
⢠Reports to Ownership/CEO/Managing Partner; partners closely with Operations and Asset Management
Apply at getonbrd.com without intermediaries.
Are you a talented Senior Developer looking for a remote job that lets you show your skills and get decent compensation? Look no further than Lemon.io – the marketplace that connects you with hand-picked startups in the US and Europe.
What we offer:
We have several open positions for Full-Stack React.js Developers - please see the details below. We also have some backend positions; the full list is included below as well.
Commercial experience:
OR
React.js: 5+ years, Node.js: 3+ years, and Next.js: 2+ years
Other requirements:
Sound good to you? Apply now and join the Lemon.io community!
NOT YOUR TECH STACK?
We have multiple projects available for Senior Developers. If you have 4+ years of commercial software development experience and are proficient in any of the following areas: React & Ruby, PHP & Angular, PHP & Vue, Vue & Node.js, React & .NET, Android & iOS, Angular & .NET, Angular & Node.js, Vue & .NET, Python & Vue, MLOps, React & Java, Data Science, Blockchain (Web3/Solidity/Solana), Symfony & React, Symfony & Vue, Symfony & Angular, Symfony & JavaScript & Next.js & TypeScript, Data Analysis, React & PHP, Data Engineering, AI Engineering, Data Annotation, DevOps, Svelte & Python, Svelte & Node, Svelte & TypeScript, Rust, Shopify & JavaScript, Vue & Nuxt, Python & Node, Angular & TypeScript, Ruby & Ruby on Rails, React Native & Ruby, React Native & Python, PHP & Laravel, .NET & C#, Java & Spring, Unreal Engine & C++, Python & LLM, Unity, Machine Learning Engineering – we'd be happy to connect and match you with a suitable project.
If your experience matches our requirements, be ready for the next steps:
We do not provide visa assistance, and our cooperation model does not include the benefits typically offered with direct hire.
P.S. We work with developers from 71+ countries in different regions: Europe, LATAM, the U.S. (if you are a holder of a W-9 form), Canada, Asia (Japan, Singapore, South Korea, Philippines, Indonesia), Oceania (Australia, New Zealand, Papua New Guinea), and the UK. However, we have some exceptions.
At the moment, we donât have a legal basis to accept applicants from the following countries:
We regularly expand and shorten this list of exceptions.
Join Hostinger, and we'll grow fast!
We're shaping the future of online success - powered by AI and driven by people. With 900+ talented professionals and over 4 million clients in 150 countries, we help creators and entrepreneurs bring their ideas to life faster and easier than ever before.
Our mission: To provide tools that help individuals and small businesses succeed online faster and easier.
Our culture: Guided by 10 company principles.
Our formula for success: Customer obsession, innovative products, and talented teams.
Your role at Hostinger
Join Hostinger's Delivery Automation team as a Senior Full Stack Automation Engineer, where you'll focus on building scalable internal platforms and tools that supercharge developer productivity, streamline software delivery, and automate complex manual flows across the company.
In this role, you'll take ownership of designing and automating workflows that reduce friction for engineers and teams across Hostinger. From CI/CD pipelines and deployment automation to system integrations and cross-team process improvements - your work will enable faster delivery, greater efficiency, and a stronger automation-first culture.
Your impact will span Product, Engineering, and beyond: empowering developers with reliable self-service solutions, helping teams eliminate repetitive tasks, and ensuring Hostinger operates at scale with speed and confidence.
You'll collaborate closely with stakeholders across engineering and other departments to understand their challenges, architect resilient solutions, and ship intuitive tools backed by robust backend systems. You'll also explore and adopt emerging technologies - including AI - to continuously elevate developer experience and automation capabilities.
Curious to learn more? Connect with your team:
Mantas Gurskis - Automation Team Lead, Asta Dagienė - Head of Delivery
Get ready to take your personal and professional growth to new heights! Join Hostinger today and be part of our journey.
Three. Two. Onboard
At Verint, we believe customer engagement is the core of every global brand. Our mission is to help organizations elevate Customer Experience (CX) and increase workforce productivity by delivering CX Automation. We hire innovators with the passion, creativity, and drive to answer constantly shifting market challenges and deliver impactful results for our customers. Our commitment to attracting and retaining a talented, diverse, and engaged team creates a collaborative environment that openly celebrates all cultures and affords personal and professional growth opportunities. Learn more at www.verint.com.
Overview of Job Function:
As a Software Engineer, you will be a core contributor to Verint's QM and PM engineering team. You will design and build full-stack features end-to-end, write high-quality automated tests, support production systems, and collaborate daily with Product Managers, Designers, QA Engineers, and globally distributed engineering peers. This is a role for engineers who take pride in their craft, are eager to grow through challenging problems, and want their work to have a visible impact on enterprise customers worldwide. You will be surrounded by experienced engineers who are invested in your growth, working in a modern Agile environment on software that matters.
Principal Duties and Essential Responsibilities:
Full-Stack Development
Quality Assurance and Testing
Production Support and Maintenance
AI/ML Integration and Continuous Improvement
Collaboration and Communication
CI/CD and DevOps Practices
Preferred Skills:
At Proyectum Chile, we drive excellence in Project Management through consulting, training, and specialized outsourcing services. We are an international organization present in 12 Latin American countries, sharing knowledge, methodologies, and high-value assets. We are also the PMI's leading Authorized Training Partner (ATP) in the region, leading the transformation in project management and agility.
We are looking for a Data Engineer to join a service in the data platform domain, participating in the development of modern solutions in cloud environments with a focus on generating value from data. The role is responsible for producing technology assets and data products, translating business requirements into relevant information.
Send CV through Get on Board.
Main duties:
Education:
Mandatory requirements:
Nice-to-have requirements:
At Lalamove, we believe in the power of community. Millions of drivers and customers use our technology every day to connect with one another and move things that matter. Delivery is what we do best, and we ensure it is always fast and simple. Since 2013, we have tackled the logistics industry head on to find the most innovative solutions for the world's delivery needs. We are full steam ahead to make Lalamove synonymous with delivery and on a mission to impact as many local communities as we can. We have massively scaled our efforts across Asia and now have our sights on taking our best-in-class technology to the rest of the world. And we are looking for talented professionals to join us in this journey!
As a Senior Data Engineer at Lalamove, you will be joining the global Data team as a key member of our expanding technology team in our new market. Given the importance of user privacy and our commitment to complying with privacy laws, we need an additional engineer to support our operations in this expanding market while collaborating closely with our global engineering team.
To all candidates: Lalamove respects your privacy and is committed to protecting your personal data.
This Notice will inform you how we will use your personal data, explain your privacy rights and the protection you have by the law when you apply to join us. Please take time to read and understand this Notice. Candidate Privacy Notice: https://www.lalamove.com/en-hk/candidate-privacy-notice
Join Our Team
Oowlish, one of Latin America's rapidly expanding software development companies, is seeking experienced technology professionals to enhance our diverse and vibrant team.
As a valued member of Oowlish, you will collaborate with premier clients from the United States and Europe, contributing to pioneering digital solutions. Our commitment to creating a nurturing work environment is recognized by our certification as a Great Place to Work, where you will have opportunities for professional development, growth, and a chance to make a significant international impact.
We offer the convenience of remote work, allowing you to craft a work-life balance that suits your personal and professional needs. We're looking for candidates who are passionate about technology, proficient in English, and excited to engage in remote collaboration for a worldwide presence.
About the Role:
We are seeking a Senior Data Engineer with strong expertise in enterprise data modeling and AWS-based data platforms to support a mature and evolving data ecosystem. This role requires hands-on experience working with large-scale data environments, optimizing data models, and maintaining event-driven pipelines in a cloud-native architecture.
You will work across data modeling, pipeline development, API data support, and infrastructure collaboration. This position is ideal for someone comfortable operating in enterprise environments, maintaining production-grade systems, and improving performance and scalability across a modern AWS data stack.
This is a 6-month engagement with ET time zone alignment required.
Benefits & Perks:
Home office;
Competitive compensation based on experience;
Career plans to allow for extensive growth in the company;
International Projects;
Oowlish English Program (Technical and Conversational);
Oowlish Fitness with Total Pass;
Games and Competitions;
You can also apply here:
Website: https://www.oowlish.com/work-with-us/
LinkedIn: https://www.linkedin.com/company/oowlish/jobs/
Instagram: https://www.instagram.com/oowlishtechnology/
OMNIX develops a PaaS platform for automating and orchestrating disruptions in complex operations, integrating with core systems such as ERP, WMS, CRM, and IoT. We work with enterprise companies in industries such as telecommunications, retail, logistics, and manufacturing, where operational continuity is critical.
The Customer Success Manager joins the Delivery & Customer Success team, working closely with Forward Deployed Engineers (FDE), Sales, and Product. Their role is to ensure that implementations generate real, sustained impact on the client's business. They are responsible for turning projects into deep adoption, expanded usage, and tangible operational value, contributing directly to the retention and growth of strategic accounts.
Apply directly through getonbrd.com.
The Customer Success Manager is responsible for end-to-end management of enterprise accounts post-implementation, ensuring that OMNIX becomes a mission-critical system within the client's operation. They lead the strategic relationship with stakeholders, define priority use cases together with the client, and build an expansion roadmap based on operational impact.
They work in coordination with the FDE, who executes solutions technically, while the CSM ensures their adoption, continuity, and value in production. They have the autonomy to prioritize initiatives, spot expansion opportunities, and escalate decisions. They lead executive forums such as QBRs and are responsible for sustaining a clear value narrative. Success in the role is measured by depth of platform usage, account expansion, and the ability to turn solutions into concrete results within the client's operation.
At least 5 years of experience in Customer Success, consulting, or account management roles in enterprise B2B contexts.
Proven experience working with complex clients in industries such as logistics, telecommunications, retail, or manufacturing.
Ability to engage technical and executive (C-level) stakeholders, holding both business and technology conversations.
Experience managing implementations or projects with multiple integrations (ERP, APIs, core systems).
Strong results orientation, with the ability to structure problems, prioritize initiatives, and execute autonomously.
Advanced English (spoken and written) for interaction with international teams and clients.
High operational discipline, follow-through, and accountability in demanding environments.
Prior experience at SaaS/PaaS companies or data and operational-automation platforms.
Knowledge of integration tools, data workflows, or automation (e.g., n8n, Zapier, APIs, ETL).
Experience in strategic consulting or in implementing digital transformation at large companies.
Familiarity with management methodologies such as EOS or disciplined-execution frameworks.
Knowledge of data analytics, anomaly detection, or AI models applied to operations.
Experience in high-growth environments or technology companies with an enterprise focus.
At CyD Tecnología, we are an innovative technology company focused on developing custom web platforms that turn complex processes into simple, efficient solutions. Our team designs and delivers web and mobile applications that automate, integrate, and digitize critical operations, helping companies reduce costs, improve control, and make decisions based on real-time data.
Apply to this posting directly on Get on Board.
The Data Engineer will be responsible for designing, developing, and maintaining data solutions geared toward building Power BI dashboards, ensuring the availability, quality, and consistency of information for decision-making.
They will work on integrating different data sources, transforming information, and building the models needed to support management reports. They will also take part in process optimization and in the continuous improvement of the data models used by the business.
Their main duties include:
A degree in Computer Engineering or a related field is required, along with experience developing BI solutions and handling data.
Mandatory requirements:
The job involves a 4x3 shift schedule on-site in Chile's Region II (Antofagasta). Remote work is not available.
The following will also be valued:
The following knowledge or experience will be considered a plus:
Who We Are and What We Are Doing:
Ethena Labs is actively building and deploying a suite of groundbreaking digital dollar products aiming to upgrade money into the internet era.
Our flagship product, USDe, is a synthetic dollar backed by digital assets, and takes the novel approach of using a delta-neutral hedged basis strategy to maintain its peg. This product scaled from zero to $15b in 18 months.
Expanding on this, iUSDe is designed specifically for traditional financial institutions, incorporating necessary compliance features to enable them to access the crypto-native rewards our protocol generates, in an institutional-friendly manner.
Ethena has also developed USDtb: a fiat-backed, GENIUS-compliant stablecoin in partnership with BlackRock, which has scaled to ~$2b.
These products are also offered in a whitelabel stablecoin offering where any application, chain, wallet or exchange can launch their own stablecoin on Ethena's back-end infrastructure.
Through these offerings, Ethena Labs is not just creating new financial products; we are building the foundational infrastructure for a more open, efficient, and interconnected global financial system.
Open job offerings will be focused on two new major product lines coming to market in the next few months.
Join us!!
The Senior Data Engineer is a critical role reporting directly to the CTO. The primary mission is to rapidly deliver a reliable, production-ready market data platform that serves as the single source of truth for trading, risk, and business intelligence.
You'll immediately own the entire data platform from inception and deliver working historical and real-time Tardis pipelines in the first 60 days. Beyond the initial MVP, the role requires iteratively evolving the platform into a best-in-class, cloud-native, observable, and self-service system. You will work hand in hand with the CTO & trading team to scope & deliver to business needs. The Senior Data Engineer will also serve as the go-to data expert for the firm and will be responsible for mentoring future junior data engineers or analysts.
Why Ethena Labs?
You'd be joining a group that has established itself as one of the most successful crypto-native companies of all time, a group with a mission to revolutionise decentralised finance and its position in global finance.
Work alongside a passionate and innovative team that values collaboration and creativity.
Enjoy a flexible, remote-friendly work environment with established opportunities for personal growth and learning.
If you subscribe to the mission of separating the dollar from the state, then we want to hear from you!
We look forward to receiving your application and will be in touch after having a chance to review.
In the meantime, here are some links to more information about Ethena Labs to help you check us out:
Apply to this posting directly on Get on Board.
📍 Where and how will you work?
✋ Some things to consider before applying:
Vequity is building the world’s most robust, contextualized buyer intelligence network for investment banks, private equity firms, and strategic acquirers. Our platform currently houses over 1.5 million buyer profiles with approximately 100 structured and inferred data fields per profile. We leverage proprietary AI agents to continuously enrich, infer, and structure buyer intelligence at scale. As a Senior Data Engineer, you will own the architecture, quality, and scalability of our data ecosystem—from ingestion and cleaning to inference and output generation. You will partner with AI, product, and engineering teams to deliver data APIs and feeds that power our platform's decision-support capabilities. Your work will directly impact data reliability, operational efficiency, and the precision of buyer attributes used across our customers.
Apply at getonbrd.com without intermediaries.
Competitive compensation and Paid Time Off (PTO).
About the Company:
Netomi is the leading agentic AI platform for enterprise customer experience. We work with the largest global brands like Delta Airlines, MetLife, MGM, United, and others to enable agentic automation at scale across the entire customer journey. Our no-code platform delivers the fastest time to market, lowest total cost of ownership, and simple, scalable management of AI agents for any CX use case. Backed by WndrCo, Y Combinator, and Index Ventures, we help enterprises drive efficiency, lower costs, and deliver higher quality customer experiences.
Want to be part of the AI revolution and transform how the world's largest global brands do business? Join us!
Job description
We are looking for a Software Development Intern to help us with coding, fixing, executing, and versioning existing code for applications. If you're passionate about solving fundamental real-time problems and eager to explore, learn, and work on technologies outside your usual scope, Netomi is the perfect place for you.
Netomi is an equal opportunity employer committed to diversity in the workplace. We evaluate qualified applicants without regard to race, color, religion, sex, sexual orientation, disability, veteran status, and other protected characteristics.
At Datasur, we are leaders in commercial intelligence based on foreign trade data. Our platform processes millions of import and export records from more than 70 countries, and we are ready to scale higher.
We are looking for a Process Engineer with at least one year of experience for a project to automate the data production flow. The role focuses on surveying, analyzing, documenting, and improving processes, driving the transition from manual operations to standardized, traceable, and scalable models.
It requires a process-oriented IT perspective, able to map flows end-to-end, detect gaps, define controls, and translate business needs into clear functional requirements. The work spans the entire data lifecycle (ingestion, standardization, quality, monitoring, orchestration, and analytical loading), identifying risks and automation opportunities.
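As a rough illustration of the quality-control stage in that data lifecycle, here is a minimal sketch of a record-validation step. The field names and rules are invented for this example and are not taken from Datasur's actual platform:

```python
# Hypothetical data-quality check: validate ingested trade records
# before they move on to analytical loading.
REQUIRED_FIELDS = ("country", "hs_code", "value_usd")

def validate_record(record: dict) -> list:
    """Return a list of quality issues found in one trade record."""
    issues = []
    for field in REQUIRED_FIELDS:
        if not record.get(field):
            issues.append(f"missing {field}")
    value = record.get("value_usd")
    if isinstance(value, (int, float)) and value < 0:
        issues.append("negative value_usd")
    return issues

batch = [
    {"country": "CL", "hs_code": "0806", "value_usd": 1200.0},
    {"country": "PE", "hs_code": "", "value_usd": -5},
]
report = {}
for i, record in enumerate(batch):
    issues = validate_record(record)
    if issues:
        report[i] = issues

print(report)  # -> {1: ['missing hs_code', 'negative value_usd']}
```

In a real pipeline, a check like this would feed monitoring dashboards and quarantine failing records rather than print a report.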
Apply only from getonbrd.com.
Equifax is much more than a credit-reporting company; it is a leading global data, analytics, and technology company with a presence in 24 countries. In Chile, it has operated since 1979, delivering critical cybersecurity, identity, and risk solutions to more than 14,000 companies.
The Technology Hub (SDC): What makes this opportunity unique is that Chile is home to the Santiago Development Center (SDC). This center leads Equifax's digital transformation worldwide, concentrating nearly 60% of its global technology development.
Culture and Vision: Equifax fosters an environment of collaboration and technical excellence, where local talent is challenged to create solutions with worldwide impact. Its vision is clear: use data and technology to power financial decision-making around the world.
Apply to this job at getonbrd.com.
What will you do day to day?
Technical
Personal
Open-ended contract from day one with 23people - Project duration of 6 months, with possible extension
Some of our benefits
Revel Street LLC helps corporate event planners discover and reach private dining venues through an extensive, dependable database. We use LLMs extensively to gather and enrich venue data, streamline the event planning workflow, and reduce the time and effort required to source options for events such as private dining, cocktail receptions, and conferences. We are looking for an experienced Data Engineer to help us improve data quality, fix existing data issues, and ingest more data from APIs and LLM-based sources to complement our current datasets. Our current stack includes React, TanStack, Cloudflare, Django, and Dagster, and we expect you to design solutions that are scalable, testable, and grounded in core engineering fundamentals.
Applications at getonbrd.com.
You’ll proactively turn ambiguous requirements into well-structured engineering plans. You’ll communicate trade-offs and risks early, and you’ll verify outcomes through hands-on testing. You’ll bring a “build, measure, improve” mindset to performance, reliability, and user experience.
Our Mission
At Big Health, our mission is to help millions back to good mental health by providing fully digital, non-drug options for the most common mental health conditions. Our FDA-cleared digital therapeutics, SleepioRx for insomnia and DaylightRx for anxiety, guide patients through first-line recommended, evidence-based cognitive and behavioral therapy anytime, anywhere. Our digital program, Spark Direct, helps reduce the impact of persistent depressive symptoms.
In pursuit of our mission, we've pioneered the first at-scale digital therapeutic business model in partnership with some of the most prominent global healthcare organizations, including leading Fortune 500 healthcare companies and Scotland's NHS. Through product innovation, robust clinical evaluation, and a commitment to equity at scale, we are designing the next generation of medicine and the future of mental health care.
Our Vision
Over the next 5-10 years, we believe digital therapeutics will transform the delivery of healthcare worldwide by providing access to safe and effective evidence-based treatments. Big Health is positioned to take the lead in this transformation.
Big Health is a remote-first company, and this role can be based anywhere in the US.
Join Us
We're seeking a Product Data Analyst contractor to drive data-informed product decisions by improving our data democratization, analyzing data, generating insights, and generating reports. You'll partner closely with product, growth, enrollment marketing, and client implementation teams to understand user behavior, measure product performance, and identify opportunities for growth and improvement.
We at Big Health are on a mission to bring millions back to good mental health; to do so, we need to reflect the diversity of those we intend to serve. We're an equal opportunity employer dedicated to building a culturally and experientially diverse team that leads with empathy and respect. Additionally, we will consider for employment qualified applicants with criminal histories in a manner consistent with the requirements of the San Francisco Fair Chance Ordinance.
Big Health participates in E-Verify for all new hires in the United States.
At WiTi, we lead a strategic project to migrate a legacy analytics ecosystem to a modern cloud architecture on AWS. The goal is to standardize, optimize performance, and scale operations by porting non-standard SQL logic to standard SQL for Amazon Redshift. This effort relies on automation to accelerate the migration and reduce errors, and involves close interaction with data, BI, and IT teams to ensure traceability, reproducibility, and enterprise-grade data governance.
You will be part of a multidisciplinary team that designs and executes the migration end-to-end, establishing conversion rules, pipelines, quality controls, and reusable coding guidelines. The project offers cross-cutting visibility into ETL/ELT and data-governance best practices in a scalable cloud environment.
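As a rough illustration of the kind of automated SQL conversion rule such a migration might use (the legacy dialect and rewrite rules below are invented for this example, not taken from the actual project):

```python
import re

# Illustrative rule-based SQL rewriter: map common non-standard
# constructs to their Redshift-friendly, SQL-standard equivalents.
CONVERSION_RULES = [
    # NVL(a, b) -> COALESCE(a, b): COALESCE is the standard spelling.
    (re.compile(r"\bNVL\s*\(", re.IGNORECASE), "COALESCE("),
    # SYSDATE -> GETDATE(): a common Oracle-ism mapped to a Redshift built-in.
    (re.compile(r"\bSYSDATE\b", re.IGNORECASE), "GETDATE()"),
]

def convert_sql(legacy_sql: str) -> str:
    """Apply each conversion rule in order and return the converted SQL."""
    out = legacy_sql
    for pattern, replacement in CONVERSION_RULES:
        out = pattern.sub(replacement, out)
    return out

print(convert_sql("SELECT NVL(total, 0), SYSDATE FROM ventas"))
# -> SELECT COALESCE(total, 0), GETDATE() FROM ventas
```

A production migration would pair rules like these with a parser-based approach, flag constructs it cannot rewrite safely for manual review, and verify results against the source system as a quality control.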
Apply exclusively at getonbrd.com.
At WiTi, we foster a culture of learning and collaboration, with a focus on high-impact digital and data projects. Benefits include:
Arbiter is the AI-powered care orchestration system that unites healthcare. We are launching our best-in-class, patient-facing Agentic platform to optimize patient outcomes through a unique multimodal approach. We optimize complex healthcare workflows that interface with patients using the latest Agentic AI approaches, and we combine it with a sophisticated platform to serve this Agentic layer at scale. We are looking for expert engineers and leads to join our team and help us push the frontier of what's possible with Agentic workflows + Healthcare.
Backed by one of the largest seed rounds in health tech history and operators who bring the expertise and distribution to scale nationally, we're building the connected infrastructure healthcare should have had all along.
Our Engineering Culture & Values
We are a high-performing group of engineers dedicated to delivering innovative, high-quality solutions to our clients and business partners. We believe in:
Engineering Excellence: Taking immense pride in our technical craft and the products we build, treating both with utmost respect and care.
Impact-Driven Development: Firmly committed to engineering high-quality, fault-tolerant, and highly scalable systems that evolve seamlessly with business needs, minimizing disruption.
Collaboration Over Ego: Valuing exceptional work and groundbreaking ideas above all else. We seek talented individuals who are accustomed to working in a fast-paced environment and are driven to ship often to achieve significant impact.
Continuous Growth: Fostering an environment of continuous learning, mentorship, and professional development, where you can deepen your expertise and grow your career.
Responsibilities
As a Senior Backend Engineer, you will design, build, and operate the platform systems that power Arbiter's connections to the outside world and ensure reliable, performant data exchange across a complex ecosystem. You will own critical parts of our backend infrastructure, from API design and service orchestration to data pipelines and third-party system connectivity, working closely with product, engineering, and customer teams to ship production-grade systems with real customer dependency.
Platform Architecture & Backend Systems: Design, develop, and operate backend services that power Arbiter's core platform, with an emphasis on reliability, modularity, and clean system boundaries.
External System Connectivity: Build and maintain robust connections to third-party systems (e.g. cloud APIs, AI services, data exchange services, EHRs, telephony platforms). Own the abstractions that make these integrations reusable and adaptable across customers with minimal rework.
API Design & Data Exchange: Design and operate high-scale APIs (REST, gRPC, webhooks) and manage complex data flows including real-time streaming, batch processing, file-based exchange (e.g. SFTP, HL7, EDI), and event-driven pipelines.
Performance & Reliability: Ensure high throughput, low latency, and fault tolerance across backend services through strong system design, monitoring, alerting, and operational best practices. Handle vendor failures, retries, idempotency, and graceful degradation.
Data Engineering & Pipeline Ownership: Build and maintain ETL/ELT pipelines, manage schema evolution, and ensure data quality and integrity across systems with varying formats, standards, and reliability.
Infrastructure & Deployment Excellence: Implement and uphold best practices for CI/CD, testing, observability, and deployment of backend systems in production cloud environments.
Cross-Functional Execution: Partner closely with AI engineers, product managers, implementation teams, and customer stakeholders to translate ambiguous, high-impact problems into scalable technical solutions.
Technical Leadership & Mentorship: Mentor engineers, contribute to internal documentation and standards, influence technical direction, and raise the overall engineering bar.
Ownership & On-Call: Take end-to-end ownership of critical systems, including participating in on-call rotations and leading incident resolution when production issues arise.
Minimum Qualifications
5+ years of hands-on experience building and operating production backend systems in high-availability environments.
Computer Science or Engineering degree, or equivalent practical experience.
Experience building and maintaining large-scale Python codebases with strong opinions on structure, quality, and tradeoffs.
Deep understanding of API design patterns, versioning, backward compatibility, and managing breaking changes across consumers.
Experience building reusable abstraction layers or connector frameworks that allow a single integration pattern to serve multiple customers or vendors.
Proven experience designing systems that connect to third-party services, including handling authentication, rate limiting, retry logic, and failure modes gracefully.
Strong understanding of concurrency, scalability, reliability, and distributed systems patterns.
Hands-on experience with data pipeline architectures: batch and streaming, schema management, and data quality enforcement.
Experience with cloud infrastructure (AWS, GCP, or Azure) and production deployments.
Strong communication skills and ability to work effectively across functions.
Proficiency with AI-assisted development tools (e.g., Cursor, Claude Code, GitHub Copilot).
Track record of delivering complex systems end-to-end with minimal oversight.
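As a rough sketch of the "reusable connector framework" idea in the qualifications above (hypothetical class names, not Arbiter's actual codebase), vendor-specific logic can be confined to two methods while pagination and orchestration stay shared:

```python
from __future__ import annotations

from abc import ABC, abstractmethod


class Connector(ABC):
    """Each vendor implements auth plus one page of fetching; shared concerns
    (pagination here; retries and rate limiting in a real system) live once."""

    @abstractmethod
    def authenticate(self) -> None: ...

    @abstractmethod
    def fetch_page(self, cursor: str | None) -> tuple[list[dict], str | None]:
        """Return (records, next_cursor); next_cursor=None means done."""

    def fetch_all(self) -> list[dict]:
        self.authenticate()
        records, cursor = [], None
        while True:
            page, cursor = self.fetch_page(cursor)
            records.extend(page)
            if cursor is None:
                return records


class InMemoryConnector(Connector):
    """Stub 'vendor' used to exercise the shared fetch_all loop."""

    def __init__(self, pages):
        self.pages = pages

    def authenticate(self) -> None:
        pass  # a real connector would obtain or refresh credentials here

    def fetch_page(self, cursor):
        i = int(cursor or 0)
        next_cursor = str(i + 1) if i + 1 < len(self.pages) else None
        return self.pages[i], next_cursor


records = InMemoryConnector([[{"id": 1}], [{"id": 2}, {"id": 3}]]).fetch_all()
```

Adding a new vendor then means writing only `authenticate` and `fetch_page`; every integration inherits the same paging, error-handling, and observability behavior.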
Preferred Qualifications
Experience with healthcare data exchange standards (HL7, FHIR, EDI) or similarly complex domain-specific protocols in other industries (fintech, telecom, logistics).
Familiarity with database performance tuning, query optimization, and managing large-scale relational databases (PostgreSQL, CloudSQL).
Startup or early-stage experience operating in fast-moving, high-ambiguity environments.
This role can be remote, or on-site at our New York City or Boca Raton offices, in a fast-paced, collaborative environment where great ideas move quickly from whiteboard to production.
Job Benefits
We offer a comprehensive and competitive benefits package designed to support your well-being and professional growth:
Highly Competitive Salary & Equity Package: Designed to rival top FAANG compensation, including meaningful equity.
Generous Paid Time Off (PTO): To ensure a healthy work-life balance.
Comprehensive Health, Vision, and Dental Insurance: Robust coverage for you and your family.
Life and Disability Insurance: Providing financial security.
SIMPLE IRA Matching: To support your long-term financial goals.
Professional Development Budget: Support for conferences, courses, and certifications to fuel your continuous learning.
Wellness Programs: Initiatives to support your physical and mental health.
Pay Transparency
The annual base salary range for this position is $148,500-$190,000. Actual compensation offered to the successful candidate may vary from the posted hiring range based on work experience, skill level, and other factors.
Data Engineering Intern
At RefinedScience, our mission is to advance care by bringing together the best science, data, and minds: disease by disease, patient by patient, cell by cell, to discover pathways to life beyond disease.
WHAT WE ARE LOOKING FOR
We are seeking a motivated Data Engineering Intern to join our team. This internship is open to undergraduate and graduate students who are interested in building data infrastructure that supports advanced analytics, data science, and AI-driven insights in healthcare and life sciences.
You will work closely with data scientists, bioinformaticians, and engineers to help design, build, and improve data pipelines and platforms that power RefinedScience's research and analytics initiatives.
KEY ACTIVITIES
MUST HAVES
We are 3IT: innovation and talent that make a difference!
For us, innovation is a collaborative process and growth is a shared goal. We are guided by values such as teamwork, reliability, empathy, commitment, honesty, and quality, because we know that good results start with good relationships.
We also value diversity and promote inclusive workplaces, which is why we actively support compliance with Chile's Ley 21.015, ensuring accessible hiring processes with equal opportunities.
If you are looking for a place to keep learning, contribute what you know, and grow in a close-knit, collaborative environment, this could be your next opportunity.
Ensure software quality through functional testing, evaluating conformance with requirements and the expected functionality at each stage of development.
✋ A few things to consider before applying:
📍 Where and how will you work?
💰 Annual bonus
🦷 Dental insurance
📚 Training
📅 Personal ("administrative") days
🍽️ Pluxee card + $80,000
👕 Casual dress code
🚀 Upskilling and reskilling programs
🏥 MetLife supplementary health insurance
💊 Discounts at pharmacies and health centers
🐾 Discounts on pet insurance and pet stores
🎄 Holiday bonus for Fiestas Patrias and Christmas
👶 Additional paternity leave days
🎂 Half day off on your birthday
🏦 Caja de Compensación Los Andes
🌍 Mundo ACHS discounts
🎁 Gift for the birth of a child
🛍️ Buk discounts
In the healthcare sector, the Health Insurance Portability and Accountability Act of 1996 (HIPAA) requires that all insurance payers exchange transactions such as claims, eligibility checks, prior authorizations, and remittances using a standardized EDI format called X12 HIPAA. A small group of legacy clearinghouses process the majority of these transactions, offering consolidated connectivity to carriers and providers.
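To make the X12 format above concrete: an interchange is a sequence of segments, each a list of elements. The toy splitter below assumes the common default delimiters (`~` between segments, `*` between elements); real X12 declares its delimiters in the ISA header and is far stricter, and the sample fragment is made up, loosely shaped like the start of a 270 eligibility request:

```python
def split_x12_segments(payload):
    """Split a simplified X12 string into segments and their elements."""
    segments = [s for s in payload.strip().split("~") if s]
    return [seg.split("*") for seg in segments]


# Made-up fragment: ST (transaction set header, type 270) plus a BHT segment.
sample = "ST*270*0001~BHT*0022*13*10001234~"
parsed = split_x12_segments(sample)
```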
Stedi is the world's only programmable healthcare clearinghouse. By offering modern API interfaces alongside traditional real-time and batch EDI processes, we enable both healthcare technology businesses and established players to exchange mission-critical transactions. Our clearinghouse product and customer-first approach have set us apart. Stedi was ranked as Ramp's #3 fastest-growing SaaS vendor.
Stedi has lightning in a bottle: engineers and designers shipping products week in and week out; a lean business team supporting the company's infrastructure; passion for automation and eliminating toil; $92 million in funding from top investors like Stripe, Addition, USV, Bloomberg Beta, First Round Capital, and more. To learn more about how we work, watch our founder Zack's interview with First Round Capital.
We're hiring a full-stack data and analytics engineer to build and own the data foundation that will power our daily GTM operations: revenue analytics, product usage telemetry, CRM data quality, attribution, funnel performance, and forecasting.
This is not a typical business analyst position. You will architect the pipelines, models, and automations that ensure our GTM teams have reliable, real-time insights into how customers discover, adopt, and expand with Stedi and our products. You will work closely with Sales, GTM Ops, Product, and Finance, executing data and analytics engineering workstreams, and conducting hands-on analysis to build the source-of-truth data for our GTM operations.
Build and maintain GTM data pipelines: Own ingestion, transformation, and syncing of CRM data (HubSpot), product-usage telemetry, billing data, and third-party enrichment data in Redshift to support GTM analytics workstreams.
Develop core GTM & revenue data models: Improve operational efficiency through standardization of datasets for Sales, GTM Ops, Finance, and the executive team, while establishing common metric definitions across revenue, customer segments, and more.
Ship dashboards, alerts, and decision-making tools: Improve visibility into business performance by building dashboards to track things like sales funnel performance and pipeline quality, and keep GTM leadership better informed by automating weekly/monthly reporting and establishing a revenue forecast.
Investigate trends and build models to support sales: Accelerate sales effectiveness by implementing alerting for critical events (e.g. pipeline drops, usage contractions, stuck deals, missed lifecycle transitions), conducting key analyses (e.g. pipeline velocity, win rates, segmentation performance), and developing GTM models (e.g. ICP scoring, account prioritization, churn risk).
Own the GTM analytics roadmap: Work with GTM leadership to maintain a backlog of GTM analytics engineering work. Proactively identify the next set of capabilities the GTM org needs (forecasting, routing logic, new usage signals, etc).
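As a toy illustration of the funnel metrics described above (stage names and counts are invented, not Stedi data), stage-to-stage conversion is just the ratio of consecutive stage counts:

```python
# Hypothetical stage counts, e.g. pulled from a CRM export.
FUNNEL = [("lead", 1000), ("demo", 250), ("proposal", 80), ("closed_won", 20)]


def stage_conversion(funnel):
    """Conversion rate between each pair of consecutive funnel stages."""
    rates = {}
    for (prev_name, prev_n), (name, n) in zip(funnel, funnel[1:]):
        rates[f"{prev_name}->{name}"] = n / prev_n
    return rates


rates = stage_conversion(FUNNEL)
```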
You have exceptional analytical skills: You've made a career of working with data to improve products and overall business operations. You know the tools, best practices, and playbooks necessary to stand up a high-performing, organized analytics function at the company.
You know the tech stack: You write efficient SQL queries to analyze large datasets and can work with complex schemas. You're an expert with data visualization tools like Tableau, QuickSight, or Power BI. You're familiar with cloud environments (AWS, Azure, GCP).
You create and execute your own work: You notice patterns others miss and dig deep to understand root causes. You've identified data issues or operational inefficiencies that led to meaningful improvements.
You do what it takes to get the job done: You are resourceful, self-motivated, self-disciplined, and don't wait to be told what to do. You put in the hours.
You move quickly: We move quickly as an organization. Matching that pace means responding with urgency (both externally to payers and internally to stakeholders), communicating what you are working on, and proactively asking for help or feedback when you need it.
You are a "bottom feeder": You thrive on the details. No task is too small when it comes to finding success, generating revenue, and improving our costs.
The annual compensation range for this role is $180,000-$230,000. For roles with a variable component, the range provided is the role's On Target Earnings ("OTE") range, which means that the range is inclusive of the sales commissions or bonus target and annual base salary. This range may be inclusive of multiple experience levels at Stedi and will be narrowed during the interview process based on a number of factors, including the candidate's experience, location, and qualifications. Please reach out to your recruiter with any questions.
We've been made aware of individuals impersonating the Stedi recruiting team. Please note:
All official communication about roles at Stedi will only come from an @stedi.com email address.
If you're unsure whether a message is legitimate or have any concerns, feel free to contact us directly at careers@stedi.com.
We appreciate your attention to this and your interest in joining Stedi.
At Stedi, we're looking for people who are deeply curious and aligned to our ways of working. You're encouraged to apply even if your experience doesn't perfectly match the job description.
At Satelligence we're looking for a Jr. Data Engineer to join our team.
We are looking for a Junior Data Engineer:
Employment type: 32–40h/week
Location: Utrecht, NL (hybrid)
Experience: Junior–Medior level
Salary: €48,000 – €60,000 gross/year (including 8% holiday allowance, based on 40h/week)
About the job
As a Data Engineer, your main responsibility is building out the capabilities of our (geo)data query engine. You'll be part of the data engineering team, which develops and maintains our satellite data processing engine, our geospatial storage and query engine, and a set of internal tools used mainly by our OPS team. Our tech stack is Python, Django, and PostGIS, deployed on Google Cloud services such as GKE and Cloud Functions. This role reports to the Engineering Lead.
What will you do?
You'll be instrumental in empowering our product teams to develop and deploy features that help our clients reach their sustainability targets. You'll ensure the reliability, scalability, and performance of our cloud-based data platform, enabling us to deliver critical environmental intelligence through our API. Your work will directly contribute to:
Building and maintaining scalable infrastructure on GCP using infrastructure-as-code tools like Terraform
Optimizing data pipelines for processing and storing massive datasets (ETL, OLAP)
Developing and managing APIs for efficient data dissemination
Implementing data engineering best practices for data quality, security, and performance
Collaborating closely with product teams to understand their needs and provide technical guidance
Contributing to the design and implementation of data storage solutions using databases like PostgreSQL
Monitoring and troubleshooting platform performance and ensuring high availability
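Since the stack above centers on PostGIS, here is a pure-Python toy version of the kind of spatial predicate (point-in-polygon, roughly what PostGIS's `ST_Contains` answers) that the platform would normally push down to the database; shown only to illustrate the idea, not how Satelligence implements it:

```python
def point_in_polygon(pt, polygon):
    """Ray-casting test: cast a horizontal ray from pt to the right and
    count edge crossings; an odd count means the point is inside."""
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # this edge straddles the ray's height
            # x-coordinate where the edge crosses the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside


square = [(0, 0), (1, 0), (1, 1), (0, 1)]  # unit square
inside = point_in_polygon((0.5, 0.5), square)
outside = point_in_polygon((1.5, 0.5), square)
```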
About you
You are an experienced Python developer
You are experienced with RDBMS, especially PostgreSQL
You are familiar with Django
You prefer a well-organized codebase over getting your pull requests merged fast
Nice to have
You are experienced with Infrastructure as Code tools such as Terraform
You have experience with Google Cloud (Cloud SQL, Cloud Composer, Kubernetes)
You worked with PostGIS before or bring other experience with geospatial data
What we offer you:
Office centrally located in Utrecht city (with direct access via bus 8 or a 20-minute walk from Utrecht Central Station)
27 holidays (based on full-time employment)
Solid pension scheme with employer contribution
NS Business Card for employees commuting from outside Utrecht
Laptop and necessary IT equipment provided
Additional income protection in case of long-term illness or disability, complementing the statutory coverage
Daily lunch, fruit, and Aroma Club coffee at the office
Not the main reason to join, but definitely a fun one: Annual Team Week, after-summer drinks with friends and family, and a festive Christmas celebration.
Meet Satelligence!
Satelligence is the market leader in remote sensing technology for sustainable sourcing, with the mission to halt deforestation. We provide traders, manufacturers, and agribusinesses such as Mondelez, Bunge, Cargill, Unilever, and Rabobank with critical sustainability insights, empowering them to minimize their global environmental footprint and track their progress against climate objectives, ensuring a sustainable supply chain. We were founded in 2016 and currently employ 40+ people working in Utrecht and several locations in Asia, Africa, and South America.
Apply for the job
Do you want to join our team as our new Junior Data Engineer? Then we'd love to hear from you!
Drive the company's strategic growth by generating new business opportunities and opening new markets, meeting a shared commercial target across the IT Outsourcing and IT Solutions lines, and securing sustainable, profitable revenue for the organization.
Remote Data Engineering jobs: data pipelines, ETL, data architecture, and big data. At RemoteJobs.lat we connect professionals across Latin America with companies offering 100% remote work. All of our listings let you work from any city, with payment in US dollars or another international currency.
$4,000 - $11,000 USD/month
100% Remote LATAM
Estimated ranges in USD/month for remote contracts with international companies. They vary by company, complementary stack, and client location.
| Level | Years of experience | Range (USD/month) |
|---|---|---|
| Junior | 0-2 | $4,000 - $5,750 |
| Semi-Senior | 2-4 | $5,400 - $7,850 |
| Senior | 4-7 | $7,500 - $9,950 |
| Lead/Staff | 7+ | $9,250 - $11,000 |
Some companies that have historically hired Data Engineering profiles for 100% remote work from Latin America: