ComputerCare has spent more than 20 years building something rare in the IT world: a company where technical excellence and genuine human connection are valued equally. We're the trusted partner that IT leaders turn to when technology can't afford to fail. As a woman-owned business serving innovative companies worldwide, we combine certified technical expertise with a human approach. Whether it's managing complex device lifecycles for global teams or performing authorized repairs for Apple, Lenovo, HP and Dell devices, our work directly impacts how thousands of people stay productive every day. We never outsource our work because we believe in accountability, quality, and building lasting relationships with our clients and as a team.
If you're passionate about technology, take pride in solving real problems, and want to be part of a company that values both technical excellence and the people behind it, ComputerCare is where you belong.
Come join us in our mission of being the Human Side of Hardware!
We're looking for a Data Analyst II to serve as a key point of contact and subject matter expert for data-related requests and system updates. You'll analyze, extract, and interpret data from multiple systems, including SQL databases and reporting tools, and implement data solutions that support business workflows and decision-making.
If you enjoy solving complex problems with data and making an impact, we want you on our team!
If you get to this point, we hope you're feeling excited about the job you just read. Even if you don't feel that you meet every single requirement, we still encourage you to apply. We're eager to meet people who believe in ComputerCare's mission and core values and can contribute to our team in a variety of ways, not just candidates who check all the boxes.
At ComputerCare, we welcome passionate individuals who have the unrestricted right to work in the United States, including U.S. citizens and Green Card holders.
ComputerCare is proud to be an Equal Opportunity and Affirmative Action employer. We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, transgender status, sexual stereotypes, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law.
Role
World Golf Tour is seeking a Data Analyst to join our Product team. In this critical role, you will be the custodian of our data, organizing insights, and analyzing telemetry to support strategic business decisions. You will focus on developing and maintaining dashboards and analysis reports, collaborating across the studio and closely with the Product team to provide actionable insights that help drive the business. This role emphasizes strong data stewardship, visualization and statistical analysis.
Responsibilities
· Clean, validate, and prepare datasets for analysis, including resolving issues regarding missing, inconsistent, or novel data
· Perform exploratory data analysis to identify trends, patterns, and anomalies that inform business decisions
· Develop and maintain dashboards, reports, and visualizations using tools such as Amplitude, Power BI, or Excel
· Translate analytical findings into clear, actionable insights for both technical and non-technical stakeholders
· Partner with business teams (e.g., marketing, product, finance) to understand data needs and deliver relevant analyses
· Support ad hoc analysis and deep dives to answer specific business questions or identify opportunities
· Ensure compliance with data governance, privacy, and security standards
Experience and Skills
· Bachelor's degree in Data Analytics, Statistics, Mathematics, Computer Science, Economics, or a related quantitative field
· 2–4 years of experience in a data analyst or similar role, preferably in game or software development
· Strong proficiency in SQL for data querying and manipulation
· Experience with data analysis tools/languages such as Python or R
· Advanced proficiency in Excel (e.g., pivot tables, formulas, data modeling)
· Experience with data visualization tools (e.g., Tableau, Power BI)
· Strong proficiency in statistical methodologies and data analysis
· Strong problem-solving and critical thinking skills
· Excellent communication skills, with the ability to present complex data in a clear and concise manner
Preferred Qualifications
· Experience with data warehousing concepts and tools (e.g., Snowflake, Redshift, BigQuery)
· Familiarity with ETL processes and data pipeline development
· Knowledge of basic machine learning or predictive analytics techniques
· Experience working in game development
· Understanding of data governance and privacy regulations
· Experience in a fast-paced, cross-functional environment
About Us
World Golf Tour is a leader in online golf, delivering the most realistic and immersive virtual golf experience to players around the globe. We are best known for our core product WGT Golf, a free-to-play golf game that has set the standard for virtual golf since its launch in 2008. Renowned for its photorealistic recreations of iconic courses such as Pebble Beach, The Old Course at St Andrews, and Quail Hollow Club, the game combines authentic course imagery with precise swing mechanics and multiplayer competition to offer an experience trusted by millions.
Location: North America Remote / San Francisco · Full-Time
Andromeda Cluster was founded by Nat Friedman and Daniel Gross to give early-stage startups access to the kind of scaled AI infrastructure once reserved only for hyperscalers.
We began with a single managed cluster, but it filled almost instantly. Since then, we've been quietly building the systems, network, and orchestration layer that makes the world's AI infrastructure more accessible.
Today, Andromeda works with leading AI labs, data centers, and cloud providers to deliver compute when and where it's needed most. Our platform routes training and inference jobs across global supply, unlocking flexibility and efficiency in one of the fastest-growing markets on earth.
Our long-term vision is to build the liquidity layer for global AI compute. We are expanding to new frontiers to find the brightest minds working in AI infrastructure, research, and engineering.
The Opportunity
We're hiring an Infrastructure Manager to accelerate supply-and-demand matching on our platform. This is an Individual Contributor role reporting to the Head of Infrastructure.
The Infrastructure team sits at the core of the company. We're responsible for acquiring and allocating compute resources across the company, working closely with compute providers, sales, and technical teams to match compute supply with demand.
Today we have already established the fundamental layer of capacity with providers. As we scale, we are building the next layer: widening our network and liquidity, deepening the scope of our services, and accelerating our growth.
What You'll Do
• Match incoming leads from our sales team with internal capacity and external capacity in the market
• Maximize utilization of our compute resources
• Source and onboard new compute suppliers across the globe
• Source capacity based on customer needs and market trends
• Solve customer and supplier problems in a fast-moving, dynamic market
• Understand technical and commercial differences between suppliers to optimize our capacity funnel
• Develop a proactive compute strategy informed by market intelligence
• Negotiate cost with suppliers and other vendors
• Create and implement processes around capacity planning
What We're Looking For
• 2+ years in cloud sales, GPUs, data centers, or a related field
• Existing network of contacts in the compute market (providers, brokers, or buyers)
• Deep understanding of the GPU compute market and what drives supply and demand
• Strong written and verbal communication across technical and commercial stakeholders
• Sound judgment in decisions that directly impact revenue and cost
• Comfortable operating in ambiguity
• Self-directed and energetic, able to operate autonomously while collaborating cross-functionally
• Bias toward action in a fast-paced environment
Why You'll Love It Here
Impact: Be part of a critical team unlocking revenue for the wider company
Real business: Meaningful revenue, complex transactions, and tangible impact
High-growth environment: Get in early at a company in a massive market
Ownership: Direct line to leadership and influence over how we scale
Competitive compensation + meaningful equity
Comprehensive benefits for you and your dependents, including healthcare, dental, and vision coverage, 401(k), and unlimited PTO
Andromeda Cluster is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.
As an ML Solutions Architect, you'll be the technical bridge between clients and delivery teams. You'll lead pre-sales technical discussions, design ML architectures that solve business problems, and ensure solutions are feasible, scalable, and aligned with client needs. This is a highly client-facing role requiring both deep technical expertise and strong communication skills.
What we're building
We're empowering small teams with technology that makes it easier to market and grow businesses. Our current focus is to help consumer brands shift from "workflow automation" to "agent management" within their marketing operations. Matter is the AI coordination layer, providing shared AI memory, centralized agent control, and model differentiation. We founded the company based on a decade of experience providing marketing services to 300+ consumer brands, leveraging that expertise to develop interfaces that streamline user experience in the era of AI.
Why join Matter?
Founding Engineer Equity You'll get a meaningful equity stake; early-stage and undiluted.
Product Ownership You'll ship production code daily and help steer key product and technical decisions.
Shape the Engineering Culture You'll influence how we work: tools, processes, standards, and hiring.
Work with Challenger Consumer Brands Talk directly to customers (CEOs, CMOs, VPs) of fast-growing consumer brands, some doing $80M–$500M in revenue.
Don't join Matter if...
Work-life balance is a high priority for you
You're uncomfortable changing your priorities every 24-48 hours
You're not confident in your abilities to manage end-to-end solutions
You require many DevOps resources to be successful
About the Role
You'll sit squarely at the intersection of back-end and front-end, ensuring seamless integration between APIs, databases, UIs, and ML services. You'll design, build, and scale features end-to-end, especially our AI/ML-powered experiences, while mentoring peers and driving architecture decisions.
Core Tech & Tools
Languages & Frameworks: Python, Node.js, React (TypeScript)
Datastore: PostgreSQL
Cloud & Infra: Google Cloud Platform, Airflow, Terraform, Docker, Kubernetes
ML/AI: LLMs, RAG, prompt engineering
Other: MCP
Key Responsibilities
Architect and implement full-stack features, from database schema to React components, optimized for scale and reliability.
Build and maintain RESTful/GraphQL APIs, data pipelines, and distributed services in GCP.
Integrate, prompt, and debug LLMs and generative AI tools; own RAG or fine-tuning pipelines.
Ensure front-end and back-end systems interoperate flawlessly, minimize friction, optimize data flow, and enforce contracts.
Collaborate with product, research, design, and infra teams to define requirements, iterate rapidly, and ship production-grade code.
Monitor performance, reliability, and security.
Mentor junior engineers through code reviews, architecture reviews, and shared best practices.
Requirements
5+ years of professional software engineering experience with end-to-end ownership in a full-stack role.
Deep expertise in Python, Node.js, React/TypeScript, and PostgreSQL.
Able to be hands-on with GCP, containerization (Docker/K8s), and building/supporting high-traffic systems.
Proven experience integrating AI/ML models (LLMs, NLP, RAG) into production apps.
Familiarity or strong interest in working with MCP servers.
Exceptional problem-solving skills and a product mindset: you think deeply about UX, performance, and business impact.
You sweat both technical details and end-user experience.
Nice to Haves
Experience with multi-step or agentic AI workflows.
Background in AI infrastructure or tooling companies.
Contributions to open-source AI/ML projects.
What we offer
Competitive salary and equity package (roles, responsibilities, and comp grow as we do)
Top-tier health, vision, dental insurance (US)
Regular team off-sites
Regular hack weeks
Distinguished Tech Innovator:
3Pillar warmly extends an invitation for you to join an elite team of visionaries. Beyond software development, we are dedicated to engineering solutions that challenge conventional norms. Envision yourself steering projects that redefine urban living, establishing new media channels for enterprise companies, or driving innovation in healthcare.
Your invaluable expertise will serve as the cornerstone in shaping the future direction of our endeavors.
This role is the primary expert within a technology stack. The Architect owns the decision making around high-level design choices and dictates technical standards, including software coding standards, tools, and platforms. The ideal candidate will thrive in a collaborative environment and be engaged in the development process.
Life360's mission is to keep people close to the ones they love. Our category-leading mobile app, Tile tracking devices, and Pet GPS tracker empower members to protect the people, pets, and things they care about most with a range of services, including location sharing, safe driver reports, and crash detection with emergency dispatch. Life360 serves approximately 91.6 million monthly active users (MAU), as of September 30, 2025, across more than 180 countries.
Life360 delivers peace of mind and enhances everyday family life with seamless coordination for all the moments that matter, big and small. By continuing to innovate and deliver for our customers, we have become a household name and the must-have mobile-based membership for families (and those friends who are basically family).
Life360 has more than 500 (and growing!) remote-first employees. For more information, please visit life360.com.
Life360 is a Remote-First company, which means a remote work environment will be the primary experience for all employees. All positions, unless otherwise specified, can be performed remotely (within the US) regardless of any specified location above.
The Horizons DevOps and Infrastructure team supports large-scale, data-intensive platforms that power real-time adtech and data science workloads across the organization. The team owns and operates critical infrastructure and data platforms, including Databricks, Snowflake, Apache Airflow, and Kubernetes-based services, processing fifty billion requests and tens of terabytes of data daily. Working closely with data engineering, data science, and security teams, the group focuses on building reliable, scalable, and automated systems that enable high-throughput data processing, analytics, and ML workflows. Team members take end-to-end ownership of production systems, influence architectural direction, and play a key role in evolving the platform as the organization integrates new technologies and scales further.
We are seeking a
At Connectly we are building the future of conversational commerce in Latin America, with a focus on WhatsApp. Instead of shoppers installing yet another app, we offer a 360 engagement platform for retailers inside an app that everyone already has on their phone: WhatsApp.
We are a VC-backed Series B startup with a world-class team hailing from Meta, Google, Uber, and other top Silicon Valley companies. We operate as a hybrid company, with offices in Bogotá and San Francisco, and a remote-first culture everywhere else.
We are strong believers in passion, curiosity, and willingness to learn on the job. If you are in doubt, we encourage you to apply!
Connectly is an equal opportunity employer. We're committed to building a diverse, inclusive, and supportive workplace that is distributed around the world.
About AirDNA
We built AirDNA to solve a problem: how do you make smart short-term rental decisions when there's too much guesswork and not enough good data?
What started in a garage in California in 2015 is now a global team helping thousands of people, from aspiring hosts to major real estate firms, make confident choices about where to invest, what to charge, and how to grow.
Our mission is simple: give people the tools they need to build freedom through short-term rentals. Whether that means buying their first Airbnb or scaling a portfolio, we're here to help unlock financial independence and growth.
We track 10M+ listings in 120,000 markets, and our platform is trusted by users in over 100 countries. It's big data, made useful.
In 2023, AirDNA acquired Uplisting, a powerful property management software that helps hosts and operators manage listings across Airbnb, Vrbo, and other platforms. With features like channel management, automated messaging, dynamic pricing, task coordination, and financial reporting, Uplisting expands our mission to support every stage of the short-term rental journey, from investment to operations.
The AirDNA team
We're a curious, driven, and kind group of humans who genuinely love what we do. Our values – Happy, Hungry, Honest – guide how we show up for our customers and for each other.
Want to see what that looks like in action? Youâll get a feel once you meet us.
We welcome applicants from all backgrounds and encourage you to apply even if you don't check every box. Passion, potential, and perspective matter here.
The Role
AirDNA is looking for a Frontend Tech Lead to help shape the future of our product experience and technical direction. While this role is full-stack, you will be the technical driver for our frontend guild, pushing forward our React/TypeScript architecture, design systems, and developer experience. You'll partner with Product, Design, and Engineering leaders to deliver beautiful, performant, and scalable customer-facing applications. As a Tech Lead, you'll guide technical decisions across squads, mentor engineers, and help set the long-term direction of our frontend practice.
AirDNA seeks to attract the best-qualified candidates who support the mission, vision and values of the company and those who respect and promote excellence through diversity. We are committed to providing equal employment opportunities (EEO) to all employees and applicants without regard to race, color, creed, religion, sex, age, national origin, citizenship, sexual orientation, gender identity and expression, physical or mental disability, marital, familial or parental status, genetic information, military status, veteran status or any other legally protected classification. The company complies with all applicable state and local laws governing nondiscrimination in employment and prohibits unlawful harassment based on any of the aforementioned protected classes at every location in which the company operates. This applies to all terms, conditions and privileges of employment including but not limited to: hiring, assessments, probation, placement, benefits, promotion, demotion, termination, layoff, recall, transfer, leave of absence, compensation, training and development, social and recreational programs, education assistance and retirement.
We are committed to making our application process and workplace accessible for individuals with disabilities. Upon request, AirDNA will reasonably accommodate applicants so they can participate in the application process unless doing so would create an undue hardship to AirDNA or a threat to these individuals, others in the workplace or the company as a whole. To request accommodation, please email compliance@airdna.co. Please allow for 24 hours to process your request.
By applying for the above position, you will confirm that you have reviewed and agreed to our Data Privacy Notice for Applicants.
PermitFlow is redefining how America builds. We're an applied AI company serving the nation's builders, tackling one of the largest information challenges in the economy: understanding what can be built, where, and how. Our AI agent workforce helps the fastest-growing construction companies navigate everything from permitting and licensing to inspections and project closeouts, accelerating housing, clean-energy, and infrastructure development across the country.
Despite being a $1.6T industry, construction still suffers from massive delays, wasted capital, and lost opportunity. PermitFlow has already delivered unprecedented speed, accuracy, and visibility to over $20B in development, helping contractors reduce compliance time, de-risk projects, and scale with confidence.
America is entering a CAPEX super-cycle, from data centers and factories to housing and renewables, and PermitFlow is building the AI at the heart of every construction project, powering the next wave of re-industrialization.
We've raised over $90M, most recently completing our Series B, from top-tier investors including Accel, Kleiner Perkins, Initialized, Y Combinator, Felicis, and Altos Ventures, with backing from leaders at OpenAI, Google, Procore, ServiceTitan, Zillow, PlanGrid, and Uber.
As a Security Engineer, you'll join our growing platform team in building, scaling, and fine-tuning the systems that keep our platform secure and compliant. You'll help architect the security backbone of our platform, focusing on compliance, risk reduction, security automation, and continuous improvement. While your primary responsibility will be security and governance, coding and problem-solving across the stack are core parts of the role. As a fast-growing startup, we all roll up our sleeves where needed, so flexibility and a collaborative, security-first mindset are key.
Architect, design, and implement secure, compliant, scalable, and cost-efficient infrastructure solutions to protect a rapidly growing product.
Lead the execution and maintenance of our SOC2 compliance program and other security-related certifications.
Design, implement, and audit Role-Based Access Controls (RBAC), Identity and Access Management (IAM), and secrets management systems.
Design and implement security best practices for backend, frontend services, APIs, and data pipelines.
Own security features end-to-end, from architecture and implementation to testing and production deployment.
Develop and maintain security automation, Infrastructure as Code, and secure CI/CD pipelines.
Implement and manage security monitoring, threat detection, and vulnerability management across our cloud infrastructure.
Establish and enforce security best practices for authentication, authorization, logging, and alerting.
Lead and participate in incident response, troubleshooting complex security issues and driving postmortem learning and improvements.
Collaborate across engineering teams to embed security into the software development lifecycle and balance compliance, velocity, and cost.
5+ years of experience in Security Engineering, AppSec, GRC, or similar roles.
Proven experience designing and implementing security controls for SOC2, ISO 27001, or similar compliance frameworks.
Deep expertise in Role-Based Access Controls (RBAC), Identity and Access Management (IAM), and secrets management.
Strong experience with container security and orchestration (Docker, ECS, Kubernetes a plus).
Expertise with secure CI/CD pipelines and modern security automation tools.
Coding and scripting proficiency (TypeScript, Python, Go, Bash, etc.).
Hands-on experience with cloud security (GCP preferred) and securing distributed systems.
Familiarity with monitoring, observability, and incident management best practices.
Comfortable working in a fast-paced, compliance-focused startup environment, where adaptability and security ownership are essential.
Competitive salary and meaningful equity in a high-growth company
Comprehensive medical, dental, and vision coverage
Flexible PTO and paid family leave
Home office & equipment stipend
Hybrid NYC office culture (3 days in-office/week) with direct access to leadership
In-Office Lunch & Dinner Provided
PermitFlow provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, sex, national origin, age, disability, genetics, sexual orientation, gender identity, gender expression, or family status, as protected by applicable law.
We are committed to a diverse and inclusive workforce and welcome people from all backgrounds, experiences, perspectives, and abilities. All employment decisions are based on merit, qualifications, and business needs.
As a pioneer in digital outdoor navigation with a suite of apps, onX was founded in Montana, which in turn has inspired our mission to awaken the adventurer inside everyone. With more than 400 employees located around the country working in largely remote / hybrid roles, we have created regional "Basecamps" to help remote employees find connection and inspiration with other onXers. We bring our outdoor passion to work every day, coupling it with industry-leading technology to craft dynamic outdoor experiences.
Through multiple years of growth, we haven't lost our entrepreneurial ethos at onX. We offer a fast-paced, growing, tech-forward environment where ownership, accountability, and passion for winning as a team are essential. We value diversity and believe it leads to different perspectives and inspires both new adventures and new growth. As a team, we're hungry to improve, value innovation, and believe great ideas come from any direction.
Important Alert: Please note, onXmaps will never ask for credit card or SSN details during the initial application process. For your digital safety, apply only through our legitimate website at onXmaps.com or directly via our LinkedIn page.
onX is seeking a talented Senior Backend Engineer to join our Content Delivery team. In this role, you will build the backend infrastructure that powers offline map experiences for millions of outdoor enthusiasts. You will work on high-performance data pipelines, map tile generation and delivery systems, and large-scale geospatial
Arbiter is the AI-powered care orchestration system that unites healthcare. We are launching our best-in-class, patient-facing Agentic platform to optimize patient outcomes through a unique multimodal approach. We optimize complex healthcare workflows that interface with patients using the latest Agentic AI approaches, and we combine it with a sophisticated platform to serve this Agentic layer at scale. We are looking for expert engineers and leads to join our team and help us push the frontier of what's possible with Agentic workflows + Healthcare.
Backed by one of the largest seed rounds in health tech history and operators who bring the expertise and distribution to scale nationally, we're building the connected infrastructure healthcare should have had all along.
Our Engineering Culture & Values
We are a high-performing group of engineers dedicated to delivering innovative, high-quality solutions to our clients and business partners. We believe in:
Engineering Excellence: Taking immense pride in our technical craft and the products we build, treating both with utmost respect and care.
Impact-Driven Development: Firmly committed to engineering high-quality, fault-tolerant, and highly scalable systems that evolve seamlessly with business needs, minimizing disruption.
Collaboration Over Ego: Valuing exceptional work and groundbreaking ideas above all else. We seek talented individuals who are accustomed to working in a fast-paced environment and are driven to ship often to achieve significant impact.
Continuous Growth: Fostering an environment of continuous learning, mentorship, and professional development, where you can deepen your expertise and grow your career.
Responsibilities
As a Senior Backend Engineer, you will design, build, and operate the platform systems that power Arbiter's connections to the outside world and ensure reliable, performant data exchange across a complex ecosystem. You will own critical parts of our backend infrastructure, from API design and service orchestration to data pipelines and third-party system connectivity, working closely with product, engineering, and customer teams to ship production-grade systems with real customer dependency.
Platform Architecture & Backend Systems: Design, develop, and operate backend services that power Arbiter's core platform, with an emphasis on reliability, modularity, and clean system boundaries.
External System Connectivity: Build and maintain robust connections to third-party systems (e.g. cloud APIs, AI services, data exchange services, EHRs, telephony platforms). Own the abstractions that make these integrations reusable and adaptable across customers with minimal rework.
API Design & Data Exchange: Design and operate high-scale APIs (REST, gRPC, webhooks) and manage complex data flows including real-time streaming, batch processing, file-based exchange (e.g. SFTP, HL7, EDI), and event-driven pipelines.
Performance & Reliability: Ensure high throughput, low latency, and fault tolerance across backend services through strong system design, monitoring, alerting, and operational best practices. Handle vendor failures, retries, idempotency, and graceful degradation.
Data Engineering & Pipeline Ownership: Build and maintain ETL/ELT pipelines, manage schema evolution, and ensure data quality and integrity across systems with varying formats, standards, and reliability.
Infrastructure & Deployment Excellence: Implement and uphold best practices for CI/CD, testing, observability, and deployment of backend systems in production cloud environments.
Cross-Functional Execution: Partner closely with AI engineers, product managers, implementation teams, and customer stakeholders to translate ambiguous, high-impact problems into scalable technical solutions.
Technical Leadership & Mentorship: Mentor engineers, contribute to internal documentation and standards, influence technical direction, and raise the overall engineering bar.
Ownership & On-Call: Take end-to-end ownership of critical systems, including participating in on-call rotations and leading incident resolution when production issues arise.
Minimum Qualifications
5+ years of hands-on experience building and operating production backend systems in high-availability environments.
Computer Science or Engineering degree, or equivalent practical experience.
Experience building and maintaining large-scale Python codebases with strong opinions on structure, quality, and tradeoffs.
Deep understanding of API design patterns, versioning, backward compatibility, and managing breaking changes across consumers.
Experience building reusable abstraction layers or connector frameworks that allow a single integration pattern to serve multiple customers or vendors.
Proven experience designing systems that connect to third-party services, including handling authentication, rate limiting, retry logic, and failure modes gracefully.
Strong understanding of concurrency, scalability, reliability, and distributed systems patterns.
Hands-on experience with data pipeline architectures: batch and streaming, schema management, and data quality enforcement.
Experience with cloud infrastructure (AWS, GCP, or Azure) and production deployments.
Strong communication skills and ability to work effectively across functions.
Proficiency with AI-assisted development tools (e.g., Cursor, Claude Code, GitHub Copilot).
Track record of delivering complex systems end-to-end with minimal oversight.
Preferred Qualifications
Experience with healthcare data exchange standards (HL7, FHIR, EDI) or similarly complex domain-specific protocols in other industries (fintech, telecom, logistics) is a plus.
Familiarity with database performance tuning, query optimization, and managing large-scale relational databases (PostgreSQL, CloudSQL).
Startup or early-stage experience operating in fast-moving, high-ambiguity environments.
This role can be remote, or on-site at our New York City or Boca Raton offices, in a fast-paced, collaborative environment where great ideas move quickly from whiteboard to production.
Job Benefits
We offer a comprehensive and competitive benefits package designed to support your well-being and professional growth:
Highly Competitive Salary & Equity Package: Designed to rival top FAANG compensation, including meaningful equity.
Generous Paid Time Off (PTO): To ensure a healthy work-life balance.
Comprehensive Health, Vision, and Dental Insurance: Robust coverage for you and your family.
Life and Disability Insurance: Providing financial security.
Simple IRA Matching: To support your long-term financial goals.
Professional Development Budget: Support for conferences, courses, and certifications to fuel your continuous learning.
Wellness Programs: Initiatives to support your physical and mental health.
Pay Transparency
The annual base salary range for this position is $148,500-$190,000. Actual compensation offered to the successful candidate may vary from the posted hiring range based on work experience, skill level, and other factors.
Customer Program Manager
Cross-Site Project Coordination | Schedule & Risk Management | High-Visibility Communication | SF Bay Area, CA
ABOUT NEXXA
Nexxa.ai is building artificial super intelligence for heavy industries, enabling machines, systems, and operations to think, decide, and act autonomously across manufacturing, large-scale infrastructure, logistics, and legacy environments. Our mission is to translate deep technical breakthroughs into operational reality, solving some of the hardest systems-level problems in industry.
THE ROLE
Reporting to CPO
We're hiring a Customer Program Manager to be the operational backbone of our customer delivery engine. You'll manage project schedules, status visibility, and cross-site coordination across Applied AI and core engineering teams operating across global sites, ensuring every engagement ships on time with full visibility. You'll work alongside a Delivery Manager, who owns the customer relationship and outcome quality, and a remote core-engineering project manager. Your job is to make sure the delivery machine runs: schedules are tracked, risks are flagged early, handoffs are clean, and every stakeholder knows exactly where things stand at any given moment.
WHAT YOU'LL DO
Manage end-to-end project schedules for customer engagements across Applied AI (FDE team) and core engineering teams spanning multiple geographies and time zones
Maintain real-time project status visibility (Confluence boards, Jira tracking, weekly status reports) so leadership, engineering, and the Delivery Manager always have a single source of truth
Run internal project review cadences: bi-weekly planning reviews, customer submissions reviews, and dev question sessions across all active engagements
Proactively identify risks, dependencies, and blockers before they become surprises; escalate to the Delivery Manager with proposed mitigations before deadlines slip, not after
Own coordination across multiple sites, bridging time zones, aligning handoffs, and ensuring nothing falls between teams
Drive daily and weekly status updates across all active projects; post EOD updates in team channels with key changes, blockers, and next actions tagged to DRIs
Prepare and deliver weekly internal status reports to the CPO every Friday, consolidating project health, the risk register, and upcoming milestones across all accounts
Track and maintain delivery governance artifacts: project plans, feedback/release trackers, QA checklists, go-live readiness assessments
Coordinate resource allocation and capacity planning across FDEs and engineering; flag overload risks and propose reallocation before quality suffers
Ensure Jira hygiene: correct assignees, updated due dates, closed tickets, and clean backlogs, so automated reporting and AI tools produce accurate outputs
Support the Delivery Manager in preparing customer-facing materials: milestone review decks, progress summaries, and QBR data
HOW THIS ROLE WORKS WITH THE DELIVERY MANAGER
The CPM and Delivery Manager share the delivery mission but own different dimensions:
You own: project schedules, daily/weekly status tracking, Jira hygiene, cross-site coordination, Confluence boards, internal reporting, resource capacity flagging, and governance artifact maintenance
Delivery Manager owns: customer relationship, outcome definition, delivery quality sign-off, CSAT/NPS, escalation resolution, post-delivery retrospectives, and account expansion insights
Together: the DM ensures we deliver the right thing at the right quality; you ensure we deliver it on schedule with full visibility and zero surprises
WHAT WE'RE LOOKING FOR
5+ years in technical program management, project management, or delivery management, with at least 2 years managing cross-functional, cross-site engineering teams
Proven experience managing 3-5 concurrent external-facing projects without dropping balls; you have a system, not just hustle
Strong command of project management tooling: Jira, Confluence, Rocketlane (or similar), and spreadsheet-based reporting. You're the person who keeps these tools clean and current.
Experience coordinating across time zones and distributed teams; you've worked with India/APAC engineering teams and know how to structure async handoffs
Excellent written communication: your status updates are crisp, your escalations are clear, and your meeting notes are actionable. You don't write paragraphs; you write bullet points with owners and dates.
Technical fluency: you can read architecture docs, understand data pipeline concepts, and have productive conversations with engineers about scope, effort, and trade-offs. You don't need to code, but you need to understand the work.
Anticipatory mindset: you see risks coming before they materialize. You flag a Milestone 1 delivery risk on Monday, not on Thursday when it's due.
Experience in enterprise SaaS, consulting delivery, or systems integration. Heavy industry experience (manufacturing, supply chain, energy) is a strong plus.
KEY SUCCESS INDICATORS
100% of active projects have up-to-date Confluence boards with milestones, DRIs, and dates, refreshed daily, not weekly
Zero surprise delays: risks are flagged at least 1 week before they impact a deadline, with proposed mitigations
Weekly status reports delivered to Shashank (CPO) every Friday for Monday leadership calls, with no exceptions and no late submissions
Customer communication cadence running on schedule: weekly updates sent, bi-weekly check-ins held, milestone reviews documented
Cross-site engineering alignment verified at every handoff: the India team has clear specs, context, and deadlines before they start work
Jira data quality at 100%: accurate assignees, no stale tickets, closed items marked done. Automated reports pull clean data.
Resource conflicts identified and escalated before they impact delivery; capacity planning is proactive, not reactive
NICE TO HAVE
Experience with Rocketlane, Asana, or Monday.com for customer-facing delivery management
Prior experience at a fast-growing startup (seed to Series B) where you built the PM process from scratch
Experience working with AI/ML engineering teams, including understanding model training timelines, data pipeline dependencies, and iterative delivery cycles
Familiarity with enterprise procurement and vendor management processes (purchasing control towers, SOW reviews, NDA workflows)
WHY NEXXA
Architect the intelligence layer for the world's largest industrial companies; your designs will run at top Fortune 100 enterprises
Work directly with the CPO and CTO on every engagement, with zero layers of bureaucracy
Backed by top Silicon Valley VCs, with access to their portfolio network and enterprise resources
Early-stage equity with significant upside
HHAeXchange is the leading technology platform for home and community-based care. Founded in 2008, HHAeXchange was born out of an idea to create a fully comprehensive end-to-end homecare solution to help people who are aging or have disabilities thrive in their homes and communities. Our employees are passionate about transforming the healthcare space by building the only homecare ecosystem that fully connects patients, personal care providers, managed care organizations, and states.
HHAeXchange is seeking a Product Manager, Data Management & Platform to help define, govern, and scale how data is used across our healthcare platform. This role sits at the intersection of Product, Engineering, and Clinical/Financial operations, ensuring that the data powering RCM, EHR, Payroll, Payments, and the Universal Patient Record is accurate, connected, and trusted, and that it serves as a reliable foundation for AI-driven innovation.
This is an individual contributor role for a healthcare product professional who understands real-world clinical and financial workflows, is energized by the potential of AI to transform healthcare data, and can translate complex requirements into clear, actionable product decisions. The ideal candidate brings 5-7 years of product management experience in healthcare IT, a solid grasp of data platform concepts, and a genuine enthusiasm for applying AI and machine learning to solve meaningful problems in the home care space.
To perform this job successfully, an individual must be able to perform each essential job duty satisfactorily with or without reasonable accommodation. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions.
This is a fully remote opportunity for candidates located in the EST or CST time zones within the US only.
Product-Led Data Strategy
AI Enablement & Innovation
Healthcare Data Enablement
Cross-Team Execution
Governance & Data Quality
Required
Preferred
Success Measures (First 12-18 Months)
The base salary range for this US-based, full-time, and exempt position is $105,000 - $115,000/yr, not including variable compensation. An employee's exact starting salary will be based on various factors including but not limited to experience, education, training, merit, location, and the ability to exemplify the HHAeXchange core values.
This is a benefits-eligible position. HHAeXchange offers competitive health plans, paid time off, company-paid holidays, and a 401K retirement program with a Company elected match, along with other company-sponsored programs.
HHAeXchange is an equal-opportunity employer. The Company offers employment opportunities to all applicants and employees without regard to race, color, religion, national origin, sex, sexual orientation, gender identity or expression, age, disability, medical condition, marital status, veteran status, citizenship, genetic information, hairstyles, or any other status protected by local or federal law.
About Coderio
Coderio designs and delivers scalable digital solutions for global companies. With a solid technical foundation and a product-oriented mindset, our teams lead complex projects from architecture through execution. We value autonomy, clear communication, and technical excellence, collaborating closely with international teams and partners to build technology that makes an impact.
More information: http://coderio.com
We're looking for a backend engineer with independent technical judgment, capable of designing event-driven microservices that handle millions of requests without blinking. You will own the service layer and data pipelines, making critical telemetry available for analytics. You should be able to engage technical stakeholders on Data Engineering teams with sound judgment and design scalable solutions under pressure.
What you can expect from this role (Responsibilities)
This is a role of total technical ownership: you design, decide, build, operate, and take responsibility for critical domains of the platform.
Requirements
5+ years in backend development (seniority based on autonomy and proactivity).
3+ years of solid experience with Node.js and TypeScript.
3+ years operating in AWS serverless environments (Lambda, API Gateway, SQS, SNS).
2+ years of experience in basic data engineering and relational database modeling (PostgreSQL).
Nice to Have
1+ year of experience with TimescaleDB or time-series databases.
Prior experience on IoT or industrial telemetry projects.
Knowledge of infrastructure as code (Terraform/CDK).
Soft Skills
Extreme Ownership: the ability to take a domain and drive its resolution end to end.
Communicating with Judgment: the ability to challenge and collaborate with technical stakeholders (data teams).
Proactivity: you don't wait for instructions; you identify bottlenecks and propose solutions.
Benefits
Fully remote
Participation in a high-impact, strategic regional project.
Collaboration with an international team and strong technical leadership.
Opportunity for professional growth within digital transformation projects.
Why join Coderio?
We are remote-first and passionate about technology, collaborative work, and fair compensation. We offer an inclusive, challenging environment with real opportunities for growth. If you're motivated to build impactful solutions on global finance and HR projects, we're waiting for you. Apply now.
Hinge Health is hiring an Engineering Manager for our Growth Data Platform (GDP) pod in Bangalore. This is a pivot-point role for a leader who is ready to move beyond traditional software management and lead a team into the era of AI-Native Engineering and ML-Driven Growth. The GDP pod is the engine room of Hinge Health's growth strategy. You own the data pipelines, event streams, and the emerging "Intelligence Layer" that powers every member interaction, from the first ad they see to the "Daily Streak" notification that keeps them pain-free. In 2026, your mission is to transform GDP from a data mover into a decision engine. You will partner with Data Science to operationalize high-value ML models (like our Direct Mail Propensity Model and Contextual Bandits) that autonomously decide the channel, content, and timing of our marketing. Simultaneously, you will pioneer our "Harness Engineering" initiative, transforming your pod's workflow from manual coding to managing autonomous AI agents that build, test, and verify our data infrastructure. You will lead a high-performing team in Bangalore, serving as the strategic bridge between SF product strategy and technical execution.
Build the "Intelligence Layer": Move beyond simple data piping. Architect the real-time decisioning layer that ingests ML signals (e.g., Churn Risk, Propensity to Convert) and routes them instantly to execution platforms like Iterable.
Operationalize Growth ML Models: Partner with Data Science to take predictive models out of the lab and into production. You will own "Phase 3" of the model lifecycle: hardening, serving, and monitoring models that control millions of dollars in marketing spend.
Lead the Transition to Harness Engineering: Drive the adoption of AI-native workflows (using tools like Cursor and Claude Code). Shift the team's focus from "typing code" to building the test harnesses, specs, and safety rails that allow agents to autonomously maintain our pipelines.
Guarantee Data Trust ("Glass Box" Observability): Champion a culture of radical observability. Implement automated "data sentinels" and contract tests that catch schema violations and freshness issues before they impact our marketing campaigns.
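As a rough illustration of the "data sentinel" idea above, here is a minimal Python sketch of a contract check that flags schema violations and stale data before a downstream campaign consumes them. The `CONTRACT` fields and `sentinel_check` helper are invented for the example, not taken from Hinge Health's stack:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical contract for an event feeding a marketing pipeline.
CONTRACT = {"member_id": str, "event_type": str, "occurred_at": datetime}

def sentinel_check(rows, max_age=timedelta(hours=1)):
    """Return a list of violations: schema mismatches and stale data."""
    violations = []
    now = datetime.now(timezone.utc)
    for i, row in enumerate(rows):
        for field, expected in CONTRACT.items():
            if field not in row:
                violations.append(f"row {i}: missing field '{field}'")
            elif not isinstance(row[field], expected):
                violations.append(f"row {i}: '{field}' is not {expected.__name__}")
    # Freshness check: the newest event must be younger than max_age.
    times = [r["occurred_at"] for r in rows
             if isinstance(r.get("occurred_at"), datetime)]
    if times and now - max(times) > max_age:
        violations.append("freshness: newest event older than max_age")
    return violations

good = [{"member_id": "m1", "event_type": "open",
         "occurred_at": datetime.now(timezone.utc)}]
bad = [{"member_id": 123, "event_type": "open"}]
```

In practice a check like this runs as a gating step in the orchestrator, failing the pipeline (rather than the campaign) when it returns violations.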
2+ years of experience managing engineering teams. You are a "player-coach" who can build a "One Team" culture, bridging the gap between SF and Bangalore with high-agency leadership.
3+ years of experience with data engineering technologies including experience with distributed data processing frameworks (e.g., PySpark, Databricks) and SQL.
Experience with production data pipelines and understanding of data lifecycle management, including pipeline orchestration, monitoring, and operational excellence practices.
ML Ops & Model Serving Experience: You understand the lifecycle of data and models. You have experience with Kafka and event-driven architectures, and you know what it takes to serve an ML model in production (latency, feature stores, drift monitoring).
AI-Forward Leadership: You are excited, not intimidated, by the shift to AI-assisted engineering. You are eager to experiment with new workflows where engineers act as architects and auditors of AI-generated code.
Architectural Rigor: You can simplify complex systems. You have a track record of converging "sprawling" pipeline patterns into robust standards (e.g., moving ad-hoc scripts into a unified Event-Driven Architecture).
Operational Excellence: You value SLOs, runbooks, and incident management. You believe that "production reliability" is a feature, especially when dealing with data that drives real-time member health decisions.
Experience with Marketing Tech (Iterable, Braze) or Customer Data Platforms (Segment, Hightouch).
Experience implementing Contextual Bandits or similar experimentation frameworks.
Background in Healthcare/HIPAA compliant environments.
At Hinge Health, we're using technology to scale and automate the delivery of healthcare, starting with musculoskeletal (MSK) conditions, which affect over 1.7 billion people worldwide. With an AI-powered, human-centered care model, Hinge Health leverages cutting-edge technology to improve outcomes, experiences, and costs to help people move beyond their pain. The platform addresses a broad spectrum of MSK care, from acute injury to chronic pain to post-surgical rehabilitation, through personalized, evidence-based care.
As the preferred partner to 50+ health plans, PBMs and other ecosystem partners, Hinge Health is available to over 20 million people across more than 2,550 employers. The company is headquartered in San Francisco with additional offices in Montreal and Bangalore. Learn more at http://www.hingehealth.com.
We believe that remote work and in-person work have their own advantages and disadvantages, and we want to be able to leverage the best of both worlds. Employees in hybrid roles are required to be in the office 3 days/week.
This is a Bengaluru-based role that involves regular interaction and collaboration with Hinge Health colleagues in San Francisco, CA. San Francisco is in the Pacific Time Zone, 12 hours and 30 minutes behind India Standard Time; for example, 8am in San Francisco is 8:30pm in Bengaluru. Standard working hours in San Francisco are between 8am and 6pm. For this role, applicants should be open to meetings in the late evening, India Standard Time.
Inclusive healthcare and benefits: In addition to comprehensive medical, dental, and vision coverage, we provide employees and their family members with Group Medical Coverage (GMC), Group Term Life Insurance (GTL), and Group Personal Accident Insurance (GPA).
We also offer a lifestyle stipend to support your overall well-being, along with learning and development opportunities to help you grow both personally and professionally.
Grow with us through discounted company stock through our ESPP with easy payroll deductions.
Hinge Health is an equal opportunity employer and prohibits discrimination and harassment of any kind. We make employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, veteran status, disability status, pregnancy, or any other basis protected by federal, state, or local law.
By submitting your application you are acknowledging we are using your personal data as outlined in the personnel and candidate privacy policy.
Beware of Phishing Attempts: We've noticed an increase in phishing where fraudsters impersonate employees and send fake job offers to steal sensitive information. We'll never ask for financial details during the hiring process and only use "@hingehealth.com" emails. If you receive a suspicious offer, stop communication and report it to the US FBI Internet Crime Complaint Center. To verify an email from our recruiting team, forward it to security@hingehealth.com.
Radformation is transforming the way cancer clinics deliver care. Our innovative software automates and standardizes radiation oncology workflows, enabling clinicians to plan and deliver treatments faster, safer, and more consistently, so patients everywhere can receive the same high-quality care.
Our software focuses on three key areas:
We are a fully remote, mission-driven team united by a shared goal: to reduce cancer's global impact and help save more of the 10 million lives it claims each year. Every line of code, every product release, and every conversation with our customers brings us closer to ensuring no patient's treatment quality depends on where they live.
In this role you will help advance Radformation's AI-driven radiotherapy products by building and improving machine learning models that directly impact clinical workflows and patient outcomes.
You will work closely with AI, cloud, research, and product teams to develop scalable data pipelines, improve model performance, and support regulatory submissions for medical device software.
At Radformation we believe AI can be an incredible tool for innovation, but our hiring process is all about getting to know you, your skills, experience, and unique approach to problem solving. We ask that all interviews and assessments be completed without tools that generate answers in real time. This helps ensure a fair process for everyone and allows us to see your authentic work. Using such tools during the process may affect your candidacy.
We care about our people as much as we care about our mission. We offer competitive compensation, benefits, and the opportunity to make an impact in the fight against cancer. The salary range for this role is $160,000 - $200,000 USD base, plus bonus eligibility.
For US teammates (via TriNet):
Health & Wellness
Financial & Professional Growth
Work-Life Balance & Perks
For global teammates (via Deel):
At Radformation, we want every team member to feel supported, no matter where they live. For teammates outside the US, we provide benefits that align with local laws and standards, working with our Employer of Record (EOR) partners to ensure fairness and equity. This means your benefits package will be locally compliant, competitive, and designed to support your health, financial security, and work-life balance.
Cancer affects people from every walk of life, and we believe our team should reflect that diversity. Radformation is proud to be an equal opportunity workplace and an affirmative action employer. We welcome candidates from all backgrounds and are committed to fostering an inclusive environment for all employees.
Radformation does not accept unsolicited resumes from agencies without a signed agreement in place. We do not partner with third-party recruiters unless explicitly stated. All legitimate communication from Radformation will come from an @radformation.com email address. If you receive outreach from another domain or via unofficial channels, please contact careers@radformation.com.
Why TrueML?
TrueML is a mission-driven financial software company that aims to create better customer experiences for distressed borrowers. Consumers today want personal, digital-first experiences that align with their lifestyles, especially when it comes to managing finances. TrueML's approach uses machine learning to engage each customer digitally and adjust strategies in real time in response to their interactions.
The TrueML team includes inspired data scientists, financial services industry experts, and customer experience fanatics building technology that serves people in a way that recognizes their unique needs and preferences as human beings, striving to ensure nobody gets locked out of the financial system.
As the Engineering Manager for our Data Platform, you will be the primary architect of the ecosystem that powers TrueML's intelligence. We are currently in a phase of purposeful scaling, and we need your leadership to build a rock-solid, high-performing data foundation that bridges the gap between raw infrastructure and actionable insights. Your goal is to champion data integrity and technical excellence while leading a world-class team during this period of deliberate expansion.
- An Experienced Leader: You have 2+ years of hands-on management experience and 5+ years of relevant data engineering expertise, with a track record of growing teams through coaching.
- A Big Data Expert: You have deep familiarity with modern technologies like Snowflake, Airflow, BigQuery, or Redshift, and mastery of both RDBMS and NoSQL databases.
- A Master of the Stack: You possess advanced proficiency in Python or Java and expert-level SQL skills, specifically in scaling schemas and tuning ETL performance.
- A Systems Thinker: You have extensive experience designing data warehouses and workflow systems, including owning SLAs for critical production processes.
- An Elite Communicator: You are a natural bridge-builder who can translate deep technical hurdles into clear, actionable updates for business partners.
- Purpose-Driven: You thrive in environments that value intentional progress and are excited to mature a data ecosystem from the ground up.
- Bonus Skills: You bring experience with Spark, Scala, or Protocol Buffers, or you have navigated the unique regulatory challenges of the FinTech industry.
We are a dynamic group of people who are subject matter experts with a passion for change. Our teams are crafting solutions to big problems every day. If you're looking for an opportunity to do impactful work, join TrueML and make a difference.
Our Dedication to Diversity & Inclusion
TrueML and TrueAccord are equal opportunity employers. We promote, value, and thrive with a diverse & inclusive team. Different perspectives contribute to better solutions and this makes us stronger every day. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.
For California Applicants: we collect personal information for employment purposes. We do not sell personal information. Most of the information we have is provided to us by you and/or collected as part of the employment process. For more details on how we use, share, and delete personal information see our Privacy Policy.
The Company You'll Join
At Rebuy, we're on a mission to revolutionize shopping with intelligent, personalized experiences that wow customers around the globe. As a fully remote team, we power some of the fastest-growing DTC brands, like Aviator Nation, Liquid Death, Magic Spoon, Blenders, Laird Superfoods, Primal Kitchen, and many more.
We believe in ownership, drive, and empathy, and hold that every team member plays a vital role in shaping the future of intelligent commerce. Our culture thrives on collaboration, creativity, and genuine passion. We don't just build great tech - we build lasting partnerships, a strong community, and a place where people love to work.
The Problems You'll Solve
Rebuy and its team members continually strive to create a high-spirited, intentional work environment that stresses performance, productivity, collaboration, and merit.
As a Sr. Software Engineer, Back-End, you'll own some of the most consequential systems at Rebuy. Your primary anchor is our billing and payments infrastructure: the engine that determines how merchants are charged, how partners get paid, and how financial balances flow across our entire product suite. This is genuinely complex financial engineering. It requires deep PHP and Go expertise, careful architecture, and judgment that no automated tool can replicate. Merchant billing runs daily, touches real revenue, and demands someone who understands both the technical and business dimensions of every decision.
Alongside billing, you'll grow into a broader platform portfolio: the partner portal, data ETL pipelines, customer-facing APIs, and reporting infrastructure that power the business. And in the near term, you'll play a critical role in a significant technical migration, moving our legacy CodeIgniter 2 codebase to CodeIgniter 4, including work tied to increasing our enterprise market share. This migration requires hands-on PHP expertise and cannot be deferred.
You won't be handed a sprawling list of things you must do on day one. You'll be trusted to grow into this role, and rewarded when you do.
Billing & Payments Architecture: Design and build Rebuy's centralized billing system that handles merchant billing, partner payments, and customer-facing charges. Architect the integration layer that allows payment balances to be applied across Rebuy's full suite of services. Tackle genuinely complex financial engineering challenges with PHP and Go at scale.
Build Robust APIs: Design and implement secure, well-structured APIs in PHP and Go to power billing events, payment processing, and financial data flows across our platform and Shopify integrations.
Legacy Modernization: Lead and contribute to the migration of our CodeIgniter 2 codebase to CodeIgniter 4. This is high-priority, near-term work with real business dependencies, including enterprise partnership commitments, and requires a PHP engineer with the experience and judgment to do it right.
Agentify the Platform: Partner with product and engineering to identify where AI agents can automate workflows, surface insights, and guide merchants through our product. Build the backend systems (APIs, data pipelines, and event hooks) that enable intelligent automation. This is genuinely new territory and one of the most exciting growth vectors for Rebuy's product.
Platform Breadth: Our team owns more than billing and payments; we also support a partner portal, data ETL pipelines, customer-facing reporting APIs, and the infrastructure that makes data flow reliably across the business. You won't be responsible for all of it on day one, but you'll have genuine opportunities to grow into the areas that most interest you. Engineers here don't get siloed; they get context.
Engineering Best Practices: Contribute significantly to the engineering culture at Rebuy by establishing, documenting, and promoting best practices. Lead initiatives to introduce and standardize frameworks and tools that increase development efficiency and maintainability.
Security & Compliance: Stay current with the latest security trends, vulnerabilities, and best practices as they apply to billing and payment systems. Champion security-first engineering across authentication, authorization, data encryption, and compliance considerations in everything you build.
PHP Technical Leadership: Serve as a key technical anchor for PHP across the engineering organization. Rebuy's codebase has significant PHP depth and relatively few engineers with that expertise. You'll lead code reviews, share knowledge actively, and help raise the PHP competency of the broader team.
Quality Assurance: Conduct quality checks on deliverables to ensure code, setup, and configurations meet expected results. Ensure that all features meet high standards of quality and performance before deployment.
Team Collaboration: Engage actively in building a strong team culture. Work closely with the Product Owner, Engineering Manager, and peers across billing, payments, partner tools, and data infrastructure to define requirements, estimate effort, and drive solutions forward. This is a team where your voice matters - you won't just be handed tickets. Assist the Support team in triaging and resolving high-priority production issues.
Technologies We Use:
AI: Anthropic Enterprise Claude Code / Co-work, Cursor, ad hoc AI tools budget.
Frontend Technologies: React, TypeScript, GraphQL, VueJS, Angular
Backend Technologies: PHP, Go, MySQL, Bigtable, Elasticsearch
Other Tools: Jira, Bitbucket, Confluence, Google Suite, Slack, 1Password, Notion
Who You Are
We're stoked to meet you and get to learn more about you, your experience, and your interest in joining our team.
The Hard Skills:
Experience building or maintaining billing, payments, or financial systems â including working with payment processors, subscription engines, invoicing pipelines, or similar financial infrastructure in a production SaaS environment.
Educational background in CS/Engineering or a similar area.
5+ years of hands-on experience building backend applications with PHP and Go, with a proven track record of delivering complex, high-traffic systems.
Experience designing and implementing secure, scalable, and maintainable RESTful APIs in PHP and Go, with a deep understanding of API design patterns, versioning, and performance optimization.
Experience with cloud-based technologies, preferably GCP.
Strong understanding of a performant SaaS environment.
Experience in a Scrum/Agile environment.
Experience with the Atlassian suite, including Jira and Bitbucket.
Solid understanding of security fundamentals as they apply to backend and financial systems - including secure coding practices, authentication/authorization patterns, data encryption, and awareness of current vulnerability trends (e.g., OWASP Top 10).
The Soft Skills:
A collaborative mindset and work approach with the ability to lead projects and mentor others.
The ability to thrive in a fast-paced environment with a high level of autonomy and responsibilities.
Excellent communication skills, especially being able to explain technical concepts to both technical and non-technical audiences.
Genuinely curious about the intersection of engineering and business. You care about the downstream impact of what you build â not just that the code works, but that it moves the company forward.
Who You'll Meet With
Now let's get into who you'll meet during our interview process! After you submit your application and it's been reviewed by our team, we will reach out to you inviting you to meet with us. From there, you can expect an interview process similar to this:
An introductory call with someone from the Talent Acquisition team for about 30 min.
Interview with the Hiring Manager to learn more about you and answer your questions about Rebuy and this role.
A coding challenge and whiteboarding exercise to show us your skill set during a live panel interview with a few team members.
Short final interview with our CEO and COO where youâll get to learn more about Rebuy.
The Perks You'll Enjoy
Rebuy is a fully remote company across the U.S. and Canada that aims to provide all of our team members with the resources, support, and flexibility they need to thrive in their roles.
Team: We've got the best, brightest, most brilliant team members who are excited to meet you! We also like to think we have a good sense of humor.
Remote Work: With a strong internet connection, youâre able to work from anywhere within the U.S. and Canada.
PTO: We offer a flexible vacation policy, a generous holiday schedule, parental leave, and a sick policy. There are other policies too, like a birthday holiday!
Amazing Benefits: 100% free health and dental insurance for you and your family. Don't worry, there's even more!
Retirement Plans: For our U.S. employees we offer 401(k) retirement plans, and for our Canadian employees we offer TFSA and RRSP retirement plans. You'll also enjoy a 3% contribution of your gross salary, no matter where you're located!
Our compensation reflects the cost of labor across several U.S. geographic markets, and we pay differently based on those defined markets. The U.S. pay range for this position is $130,000 - $180,000 USD annually. Pay within this range varies by work location and may also depend on job-related knowledge, skills, and experience. Your recruiter and hiring manager can share more about the specific salary range for the job location during the hiring process.
Disclosures:
Equal Opportunity Statement
Rebuy, Inc. is proud to be an Equal Employment Opportunity employer. We do not discriminate based on gender, race or color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law.
Rebuy, Inc. aims to make rebuyengine.com accessible to any and all users. If you have a disability or special need that requires accommodation to navigate our website or complete the application process, email hr@rebuyengine.com.
Who we are
We're Redis. We built the product that runs the fast apps our world runs on. (If you checked the weather, used your credit card, or looked at your flight status online today, you're welcome.) At Redis, you'll work with the fastest, simplest technology in the business - whether you're building it, telling its story, or selling it to our 10,000+ worldwide customers. We're creating a faster world with simpler experiences. You in?
Why would you love this job?
As a Technical Support Engineer, you will help customers by diagnosing and resolving complex technical issues - a high-contribution role with challenging technical work, ongoing learning, and the satisfaction of helping name-brand customers as part of our fun, tight-knit team.
In this role, you will use and extend your existing technical depth and broaden your technical skills by addressing complex problems for top companies around the world. You will level up to become an expert problem solver on Redis Enterprise Software, a high-performance database used by thousands of customers worldwide. You will dive deep into exciting, cutting-edge technologies by supporting Redis Enterprise on the top cloud platforms and container orchestration platforms.
Join the best of the best and continuously learn new things. We are looking for brilliant experts who are curious, persistent, and happy digging through the full stack, from code to sysadmin work to networking to performance. If this sounds like you, please check out the technical foundation we'd like you to bring.
What youâll do:
Work with customers to troubleshoot and resolve complex software issues:
Reproduce issues, replicating customer environments as needed.
Document issues and contribute to our internal team documentation.
Provide root cause analysis.
Collaborate with Engineering as needed to provide solutions.
Analyze performance questions that may arise along the data path (including networks) for deployments that may be in the Cloud or On-premises.
Provide technical expertise during testing, deployment, and upgrading of Redis software.
Manage critical customer issues, facilitating communication between customers, CloudOps, Engineering, Product, TAMs, and Sales.
Serve as the customer advocate for timely resolution of issues and handling escalations while helping customers realize and maximize the value of their Redis subscription.
Participate in new product development, customer training, and other support-related activities.
This role requires a 5-day work week that includes Saturday and Sunday.
What will you need to have?
At least five years of technical experience as a Support Engineer, Systems Engineer, Software Engineer, or Site Reliability Engineer in an enterprise software company
At least four years of experience troubleshooting real-time production systems
At least two years of hands-on experience with cloud infrastructure.
Strong background in scripting or programming languages (Python, Java, C#, JavaScript, Bash, PowerShell, etc.)
Expert working knowledge of Linux/Unix and networking (TCP/IP)
Professional experience working with networking tools like Wireshark, tcpdump, etc.
Experience in analyzing and debugging production issues at scale.
Experience with alerting and monitoring systems (Prometheus, Grafana, ELK, Splunk, etc.).
Working knowledge of Cloud-based and On-premises environments
Proficiency in communication and presentation, both written and verbal (in English)
Strong technical background with excellent problem-solving and multi-tasking skills
High availability and commitment to customers at any time
Extra great if you have:
Bachelor of Science in Computer Science or Information Systems
Experience with NoSQL databases (especially Redis)
Experience working with container orchestration environments, such as Kubernetes
The estimated gross base annual salary range for this role is $91,455 - $137,273 per year in New York, California, Washington, Colorado, and Rhode Island. Actual compensation may vary and is dependent on various factors, including a candidate's work location, qualifications, experience, and competencies. Base annual salary is one component of Redis' total compensation and competitive benefits package, which may include 401(k), unlimited time off, learning and development opportunities, and comprehensive health and wellness benefits. This role may include discretionary bonuses, stock options, commuter benefits based on location, or a commission plan. Salary history is not used in compensation package decisions. Redis utilizes market pay data to determine compensation, so posted compensation ranges are subject to change as new market data becomes available.
As a global company, we value a culture of curiosity, diversity of thought, and innovation from our employees, customers, and partners. Redis is committed to a diverse and inclusive work environment where all employees' differences are celebrated and supported, and everyone feels safe to bring their authentic selves to work. Redis is dedicated to equal employment opportunities regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, marital status, disability, gender identity, gender expression, veteran status, or any other classification protected by federal, state, or local law. We strive to create a workplace where every voice is heard, and every idea is respected.
Redis is committed to working with and providing access and reasonable accommodation to applicants with mental and/or physical disabilities. If you think you may require accommodations for any part of the recruitment process, please send a request to recruiting@redis.com. All requests for accommodations are treated discreetly and confidentially, as practical and permitted by law.
Any offer of employment at Redis is contingent upon the successful completion of a background check, consistent with applicable laws.
Redis reserves the right to retain data longer than stated in the privacy policy in order to evaluate candidates.
Who We Are
Wingspan is the first payroll platform designed specifically for independent contractors and their businesses. We simplify onboarding, payments, and compliance for flexible workforces of all sizes, from solo operators to large enterprises.
We're a Series B startup based in NYC with distributed teams in the USA, Poland, and the UK, and backed by Andreessen Horowitz (a16z), Touring Capital, and a strong network of operators, including the CEOs and founders of Warby Parker, Harry's, Allbirds, Invision, and Flatiron Health.
About the Role
As a Software Engineer on the Payment Operations team, you will be responsible for the execution layer that ensures every dollar on Wingspan's platform is accounted for, reconciled, and moved accurately on time. You will have direct access to production systems, a mandate to identify what's broken or inefficient, and the authority to engineer the fix.
This role reports to the Head of Payments & Compliance Operations and is based in Warsaw, Poland, with a remote work model.
What You'll Do
Qualifications & Requirements
About Equip
Equip is the leading virtual, evidence-based eating disorder treatment program on a mission to ensure that everyone with an eating disorder can access treatment that works. Created by clinical experts in the field and people with lived experience, Equip builds upon evidence-based treatments to empower individuals to reach lasting recovery. All Equip patients receive a dedicated care team, including a therapist, dietitian, physician, and peer and family mentor. The company operates in all 50 states and is partnered with most major health insurance plans. Learn more about our strong outcomes and treatment approach at www.equip.health.
Founded in 2019, Equip has been a fully virtual company since its inception and is proud of the highly engaged, passionate, and diverse Equipsters who have created Equip's culture. Recognized by Time as one of the most influential companies of 2023, along with awards from LinkedIn and Lattice, we are grateful to Equipsters for building a sustainable treatment program that has served thousands of patients and families.
About the role:
Equip's engineering culture emphasizes agility, collaboration, and ownership, fostering a team of problem-solvers who build a robust, scalable healthcare platform. As a Senior DevOps Engineer, you'll be crucial in developing and maintaining infrastructure, platforms, and developer tools, including CI/CD pipelines, cloud infrastructure, and observability tools, to enable efficient development and scaling. You'll also support web (Java, React, PostgreSQL) and mobile (React Native) applications, standardizing AWS deployments and CI/CD practices. The role will involve building security, metrics, logging, and deployment tooling to ensure system reliability and scalability. Our goal is to create intuitive, reliable systems that allow engineers to iterate quickly and deliver value to patients, with direct user feedback driving our highest-impact work.
Responsibilities:
Design and build a robust, scalable cloud platform to empower web and data engineering teams to deliver high-quality applications.
Partner with engineering and data teams to improve developer velocity, ensure system reliability, and embed operational excellence.
Lead best practices in cloud infrastructure architecture, CI/CD automation, monitoring, and backend systems reliability.
Develop tools and automation across a variety of frameworks and languages to enhance the performance, availability, and scalability of services.
Contribute to a culture of continuous improvement through proactive monitoring, root cause analysis, and knowledge sharing.
Perform other duties as assigned.
Qualifications:
Bachelor's degree or equivalent training and work experience in Computer Science, Software Engineering, or a related field
5â10 years of experience in DevOps, SRE, Platform Engineering, or Software Engineering roles.
Deep expertise in AWS and its ecosystem of services.
Proven track record building cloud infrastructure using Infrastructure as Code (Terraform, CloudFormation)
Strong experience with container orchestration and serverless architectures, including ECS/Fargate and Docker
Solid understanding of AWS networking concepts, including VPCs, subnets, security groups, route tables, and load balancers.
Hands-on experience creating and maintaining CI/CD pipelines (e.g., CircleCI, GitLab CI, etc.).
Strong experience with scalable backend systems, including microservices, APIs, caching layers, and various databases.
Experience deploying and managing React and other JavaScript applications using AWS services like CloudFront and S3.
Experience setting up comprehensive monitoring and alerting for infrastructure, services, and data pipelines.
Skilled at identifying, diagnosing, and preventing production issues through effective observability and troubleshooting (e.g., New Relic, Datadog).
Commitment to building secure systems with best practices in access control, encryption, and secure deployment pipelines.
Experience communicating and collaborating with engineering and product team stakeholders.
Proven ability to manage multiple projects with competing priorities.
Able to work Eastern or Central time zone hours: either 9-5 Eastern or 8-4 Central.
Benefits
Time Off:
Flex PTO policy (3-5 wks/year recommended) + 11 paid company holidays.
Medical Benefits:
Competitive Medical, Dental, Vision, Life, and AD&D insurance.
Equip pays for a significant percentage of benefits premiums for individuals and families.
Maven, a company-paid reproductive and family care benefit for all employees.
Employee Assistance Program (EAP), a company-paid resource for mental health, legal services, financial support, and more!
Other Benefits
Work From Home Additional Perks:
$50/month stipend added directly to an employee's paycheck to cover home internet expenses.
One-time work from home stipend of up to $500.
Physical Demands
Work is performed 100% from home with requirement to travel once or twice a year for in-person meetings. This is a stationary position that requires the ability to operate standard office equipment and keyboards as well as to talk or hear by telephone. Sit or stand as needed.
#LI-Remote
At Equip, Diversity, Equity, Inclusion and Belonging (DEIB) are woven into everything we do. At the heart of Equip's mission is a relentless dedication to making sure that everyone with an eating disorder has access to care that works regardless of race, gender, sexuality, ability, weight, socio-economic status, and any marginalized identity. We also strive toward our providers and corporate team reflecting that same dedication both in bringing in and retaining talented employees from all backgrounds and identities. We have an Equip DEIB council, Equip For All, also referred to as EFA. EFA at Equip aims to be a space driven by mutual respect and thoughtful, effective communication strategy - enabling full participation of members who identify as marginalized or under-represented and allies, amplifying diverse voices, creating opportunities for advocacy, and contributing to the advancement of diversity, equity, inclusion, and belonging at Equip.
As an equal opportunity employer, we provide equal opportunity in all aspects of employment, including recruiting, hiring, compensation, training and promotion, termination, and any other terms and conditions of employment without regard to race, ethnicity, color, religion, sex, sexual orientation, gender identity, gender expression, familial status, age, disability, weight, and/or any other legally protected classification protected by federal, state, or local law.
Our dedication to equitable access, which is core to our mission, extends to how we build our "village." In line with our commitment to Diversity, Equity, Inclusion, and Belonging (DEIB), we are dedicated to an accessible hiring process where all candidates feel a true sense of belonging. If you require a reasonable accommodation to complete your application, interview, or perform the essential functions of a role, we invite you to reach out to our People team at accommodations@equip.health.
Join Hostinger, and we'll grow fast!
We're shaping the future of online success - powered by AI and driven by people. With 900+ talented professionals and over 4 million clients in 150 countries, we help creators and entrepreneurs bring their ideas to life faster and easier than ever before.
Our mission: To provide tools that help individuals and small businesses succeed online faster and easier.
Our culture: Guided by 10 company principles.
Our formula for success: Customer obsession, innovative products, and talented teams.
Your role at Hostinger
Join Hostinger's Delivery Automation team as a Senior Full Stack Automation Engineer, where you'll focus on building scalable internal platforms and tools that supercharge developer productivity, streamline software delivery, and automate complex manual flows across the company.
In this role, you'll take ownership of designing and automating workflows that reduce friction for engineers and teams across Hostinger. From CI/CD pipelines and deployment automation to system integrations and cross-team process improvements - your work will enable faster delivery, greater efficiency, and a stronger automation-first culture.
Your impact will span Product, Engineering, and beyond: empowering developers with reliable self-service solutions, helping teams eliminate repetitive tasks, and ensuring Hostinger operates at scale with speed and confidence.
You'll collaborate closely with stakeholders across engineering and other departments to understand their challenges, architect resilient solutions, and ship intuitive tools backed by robust backend systems. You'll also explore and adopt emerging technologies - including AI - to continuously elevate developer experience and automation capabilities.
Curious to learn more? Connect with your team:
Mantas Gurskis - Automation Team Lead, Asta Dagienė - Head of Delivery
Get ready to take your personal and professional growth to new heights! Join Hostinger today and be part of our journey!
Three. Two. Onboard
We are a Web3-driven company building decentralized products and working with blockchain data to create transparent and data-informed solutions. We are looking for a Junior Data Analyst who is curious about blockchain, crypto, and decentralized ecosystems.
Education: Bachelor's degree in Mathematics, Statistics, Economics, Computer Science, or a related field
Technical Skills:
Web3 / Crypto (Preferred):
Healthcare is in crisis, and the people behind the results deserve better. With more and more data coming from wearables, lab tests, and patient-doctor interactions, we're entering an era where data is abundant.
Junction is building the infrastructure layer for diagnostic healthcare, making patient data accessible, actionable, and automated across labs and devices. Our mission is simple but ambitious: use health data to unlock unprecedented insight into human health and disease.
If you're passionate about how technology can supercharge healthcare, you'll fit right in.
Backed by Creandum, Point Nine, 20VC, YC, and leading angels, we're working to solve one of the biggest challenges of our time: making healthcare personalized, proactive, and affordable. We're already connecting millions and scaling fast.
Short on time? TL;DR
You: Can define what should be measured, how it should be modeled, and how those insights should shape product and company decisions.
Ownership: You'll own Junction's highest-leverage statistical, modeling, and evaluation work across diagnostics, clinical workflows, and AI-enabled product development.
Scope: This is not a pure IC modeling role and not a reporting role. You'll set the methodology, research roadmap, and decision framework for how Junction uses data to drive product, clinical, and business outcomes.
Salary: $180,000 - $220,000 + equity
Location: Fully remote (EST timezone only)
Why we need you
Junction sits in the flow of high-value diagnostics and clinical data. As the company grows, our advantage moves beyond simply having data to being able to turn it into reliable intelligence that improves product decisions, customer outcomes, and the performance of the business.
Some of that work exists today, but it is not yet owned as a coherent function. Models get built. Analyses get done. Experiments answer local questions. But we need someone who can define the broader scientific and analytical system: what we should measure, what methods we trust, where modeling creates real leverage, and how that work translates into products and decisions that hold up outside a demo.
We're hiring our first Data Scientist to take ownership of that work and establish that standard.
This role will lead Junction's most important modeling, experimentation, and evaluation work. You'll partner closely with the data, product, engineering, and leadership teams to drive the analytical roadmap through which Junction derives differentiated value from its data.
What you'll be doing day to day
Own the research and modeling work underlying Junction's highest-priority data science opportunities across diagnostics, clinical workflows, and AI-enabled product features
Define rigorous frameworks for measurement, experimentation, and causal evaluation so we can distinguish signal from noise and make decisions we can defend
Lead development of predictive models, segmentation approaches, risk or routing logic, and other statistical systems that directly inform product and business strategy
Build the analytical foundation behind customer-facing features - from model development through to validation and performance tracking
Partner with engineering and data engineering to ensure models and analytical systems can be put into production, are reliable, and are useful in real workflows
Establish how Junction evaluates data-driven and AI-enabled features, including methodology, quality thresholds, monitoring, and performance review
Communicate complex technical findings clearly to technical and non-technical stakeholders, including tradeoffs, limitations, and implications for action
Requirements
Strong track record of leading high-stakes analytical work that influenced product, operational, or business decisions
Deep foundation in statistical inference, experimental design, observational analysis, and model evaluation
Strong Python and/or R skills, with experience working on large, messy real-world datasets
Experience building predictive or decision-support models in production or near-production environments
Experience partnering closely with engineering to move work from analysis or prototype into deployed systems
Ability to operate at both strategic and hands-on levels: defining the roadmap while also getting into the details when needed
Strong communication and stakeholder management skills; able to explain methods, findings, and tradeoffs to executives as well as technical peers
Comfort operating in a startup environment with ambiguity, limited structure, and high ownership
Nice to have
Experience designing, executing, and publishing research studies
Experience with HIPAA, PHI, or other regulatory clinical frameworks
Deep familiarity with modern data tooling and production workflows across warehouses, orchestration, and transformation layers
Experience developing, deploying, and designing evaluation frameworks for LLM or AI-powered features in customer-facing products
Expertise directly working with healthcare, diagnostics, lab data, wearable data, and other clinical data
Experience applying causal inference methods, such as diff-in-diff, propensity scoring, or instrumental variables in practice
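As a purely illustrative aside on the causal inference methods named above, the basic difference-in-differences estimator reduces to subtracting the control group's before/after change from the treated group's change. A minimal sketch (the helper and all numbers here are made up for illustration, not part of the role):

```python
# Hypothetical difference-in-differences sketch: estimate the effect of an
# intervention by comparing the before/after change in a treated group
# against the change in an untreated control group over the same period.

def mean(xs):
    return sum(xs) / len(xs)

def diff_in_diff(treated_pre, treated_post, control_pre, control_post):
    """Each argument is a list of outcome values; returns the DiD estimate."""
    treated_change = mean(treated_post) - mean(treated_pre)
    control_change = mean(control_post) - mean(control_pre)
    # Subtracting the control trend removes time effects shared by both groups.
    return treated_change - control_change

# Made-up numbers: the treated group improves by 4, the control drifts up
# by 1, so the estimated treatment effect is 3.
effect = diff_in_diff([10, 12], [14, 16], [10, 12], [11, 13])
print(effect)  # 3.0
```

The same comparison is usually run as a regression with an interaction term so standard errors and covariates can be handled properly; the arithmetic above is only the intuition.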
What this role isnât
Not an analytics role focused on dashboards, reporting, or one-off analysis
Not an ML platform role - you won't own infrastructure or tooling
Not a good fit if you mainly want to experiment with models or AI ideas without being accountable for how they perform in production
Not a good fit if you struggle with ambiguity. Knowing what to work on is part of the job
How you'll be compensated
Salary: $180,000 - $220,000 + equity
Your salary is dependent on your location and experience level
Generous early-stage options (extended exercise window after 2 years of employment)
Regular in-person offsites; the most recent were in Tenerife and Miami
Monthly learning budget of $300 for personal development and productivity
Flexible, remote-first working - including $1K for home office equipment
Monthly budget of $150 to use towards a coworking space
25 days off a year + national holidays
Healthcare coverage depending on location
Oh and before we forget:
Backend Stack: Python (FastAPI), Go, PostgreSQL, Google Cloud Platform (Cloud Run, GKE, Cloud Bigtable, etc.), Temporal Cloud
Frontend Stack: TypeScript, Next.js
API docs are here: https://docs.junction.com/
Company handbook is here with engineering values + principles
Important details before applying:
We only hire folks physically based in GMT and EST timezones - more information here
We do not sponsor visas right now given our stage
About Smart Working
At Smart Working, we believe your job should not only look right on paper but also feel right every day. This isn't just another remote opportunity - it's about finding where you truly belong, no matter where you are. From day one, you're welcomed into a genuine community that values your growth and well-being.
Our mission is simple: to break down geographic barriers and connect skilled professionals with outstanding global teams and products for full-time, long-term roles. We help you discover meaningful work with teams that invest in your success, where youâre empowered to grow personally and professionally.
Join one of the highest-rated workplaces on Glassdoor and experience what it means to thrive in a truly remote-first world.
About the Role
This is a long-term, strategic role, not a short sprint. You'll be embedded in a collaborative engineering and analytics team, working across the full data lifecycle: ingestion, transformation, modelling, and surfacing insights through Looker. You'll work closely with stakeholders across commercial, product, and marketing to ensure data is reliable, scalable, and meaningful.
You'll be given real ownership. This is a role for someone who wants to shape standards, improve the architecture, and grow with a brand that takes its data seriously.
At Smart Working, you'll never be just another remote hire.
Be a Smart Worker - valued, empowered, and part of a culture that celebrates integrity, excellence, and ambition.
If that sounds like your kind of place, we'd love to hear your story.
NeuralWorks is a high-growth company founded four years ago. We are working flat out on things that will get people talking.
We are a team where creativity, curiosity, and a passion for doing things well come together. We dare to explore frontiers others don't reach: a Monte Carlo-based predictive model, a convolutional network for face detection, a Bluetooth position sensor, the recreation of an acoustic space using finite impulse responses.
These are just a few of our challenges, through which we learn, explore, and complement one another as a team to achieve the unthinkable.
We work on our own projects and support corporations in partnerships where, side by side, we combine knowledge with creativity - imagining, designing, and creating digital products that captivate and make an impact.
The Data & Analytics team works on a range of projects that combine enormous data volumes with AI, such as detecting and predicting failures before they occur, optimizing pricing, personalizing the customer experience, optimizing fuel usage, and detecting faces and objects with computer vision.
You will work on transforming processes to MLOps and creating tailored data products based on analytical models - mostly machine learning, though a broader spectrum of techniques may be used.
Within a multidisciplinary team of Data Scientists, Translators, DevOps engineers, and Data Architects, your role will be extremely important and key to the development and execution of these products, since you are the one who connects the enablement and operation of the environments with the real world. You will be responsible for increasing delivery speed, improving code quality and security, understanding the structure of the data, and optimizing processes for the development team.
On any project you work on, we expect a strong spirit of collaboration, a passion for innovation and code, and an automation-first mindset over manual processes.
As an MLE, your work will consist of:
Diversity matters at NeuralWorks! We firmly believe in creating an inclusive, diverse, and equitable work environment. We recognize and celebrate diversity in all its forms and are committed to offering equal opportunities to all candidates.
"Men apply for a job when they meet 60% of the qualifications, but women apply only if they meet 100% of them." Gaucher, D., Friesen, J., & Kay, A. C. (2011).
We encourage you to apply even if you don't meet all the requirements.
Grupo Mariposa is a multinational food and beverage corporation founded in 1885, with operations in more than 14 countries and over 15,000 employees. We have the largest beverage portfolio in the region and partnerships with global leaders such as PepsiCo and AB InBev. In recent years we have expanded globally and reorganized into four business units: apex (transformation), cbc (distribution), beliv (beverage innovation), and bia (food). We are looking for talent to power our growth strategy and bring joy and development across the organization. In this role, you will have the opportunity to lead data and AI architecture, designing scalable solutions that enable large-scale analytics and the operationalization of ML/AI models in production environments.
We are looking for a Senior Data Architect with strategic vision to lead the design of our data platform.
Your mission will be to build foundations that enable not only data analysis at scale but also the efficient operationalization of data, Machine Learning, and AI solutions. You must be proficient in Databricks and have proven experience taking models to production and managing architectures that support advanced analytics. The ability to define data governance and MLOps strategies and to collaborate across teams will be valued.
Requirements:
Nice to have:
Databricks certifications and demonstrable experience leading AI projects in production environments are valued, along with the ability to communicate technical results to non-technical stakeholders and to lead multidisciplinary teams. Results-oriented, with analytical thinking and a practical approach to solving complex, large-scale data problems.
About the company
We are a multinational food and beverage corporation with regional operations, a broad brand portfolio, and an accelerated digital transformation strategy. Within Apex Digital / M5, the Data & Analytics area enables analytical products, governed data, and advanced capabilities for the business units, including CBC, Beliv, BIA, and cross-cutting digital transformation initiatives.
As part of this evolution, the organization is moving toward an enterprise AI Agents architecture based on Databricks, ADLS Gen2, Unity Catalog, Azure AI / Microsoft Foundry, Copilot Studio, and Power Automate, aiming to enable secure, traceable, scalable enterprise assistants and agents connected to the business's core data.
Design, build, deploy, and evolve generative AI solutions and enterprise agents on Azure AI / Microsoft Foundry, integrated with the corporate data platform, to enable business use cases with traceability, security, scalability, and high operational and commercial impact.
Key responsibilities of the role
Alignment with the AI Agents architecture:
Expected impact for the organization
Nine-67 is building a fast-moving AI capability for enterprise clients. This role sits at the intersection of product, data, and execution, directly partnering with the CEO to design, build, and deploy AI-driven applications in real client environments. You will contribute to shaping a scalable, high-quality AI platform by delivering end-to-end solutions that combine frontend, backend, and data workflows in rapid iterations.
As a key player in a fast-build environment, you’ll help transform ambiguous business problems into working systems, create internal tools and automation, and integrate with client systems and data sources to drive real business value.
• Build and deploy AI-driven applications end-to-end (frontend, backend, data workflows) with speed and quality.
• Translate business problems into functioning AI systems with minimal direction.
• Collaborate directly with leadership and clients to iterate on real use cases.
• Develop internal tools, agents, and automation to boost efficiency.
• Integrate with APIs, data sources, CRM systems, data warehouses, and client environments.
• Continuously improve speed, reliability, and reusability of what we build.
• Strong builder mindset—ship fast and learn by doing.
• Experience with AI tools and frameworks (LLMs, APIs, prompt systems, agents).
• Comfort across the stack; you don’t need to be perfect, but you can figure it out.
• Ability to work in ambiguity without waiting for detailed specs.
• Strong problem-solving and product intuition.
• High ownership and accountability.
• Experience with Cursor, Vercel, Supabase, or similar modern stacks.
• Experience building internal tools or client-facing applications.
• Exposure to data pipelines, analytics, or CRM systems.
• Prior startup or consulting experience.
• Direct collaboration with leadership on high-impact projects.
• Build real systems used by enterprise clients.
• Opportunity to shape and scale AI capability from the ground up.
OMNIX develops a PaaS platform for automating and orchestrating disruptions in complex operations, integrating with core systems such as ERP, WMS, CRM, and IoT. We work with enterprise companies in industries like telecommunications, retail, logistics, and manufacturing, where operational continuity is critical.
The Customer Success Manager joins the Delivery & Customer Success team, working closely with Forward Deployed Engineers (FDE), Sales, and Product. Their role is to ensure implementations generate real, sustained impact on the client's business. They are responsible for turning projects into deep adoption, expanded usage, and tangible operational value, contributing directly to the retention and growth of strategic accounts.
The Customer Success Manager owns end-to-end management of enterprise accounts post-implementation, ensuring that OMNIX becomes a mission-critical system within the client's operation. They lead the strategic relationship with stakeholders, define priority use cases together with the client, and build an expansion roadmap based on operational impact.
They work in close coordination with the FDE, who executes solutions technically, while the CSM ensures their adoption, continuity, and value in production. They have the autonomy to prioritize initiatives, spot expansion opportunities, and escalate decisions. They lead executive forums such as QBRs and are responsible for sustaining a clear value narrative. Success in the role is measured by depth of platform usage, account expansion, and the ability to turn solutions into concrete results within the client's operation.
At least 5 years of experience in Customer Success, consulting, or account management roles in enterprise B2B contexts.
Demonstrable experience working with complex clients in industries such as logistics, telecommunications, retail, or manufacturing.
Ability to engage both technical and executive (C-level) stakeholders, sustaining conversations about business and technology.
Experience managing implementations or projects with multiple integrations (ERP, APIs, core systems).
Strong results orientation, with the ability to structure problems, prioritize initiatives, and execute autonomously.
Advanced English (spoken and written) for interaction with international teams and clients.
High operational discipline, follow-through, and accountability in demanding environments.
Previous experience at SaaS/PaaS companies or data and operational automation platforms.
Knowledge of integration tools, data workflows, or automation (e.g., n8n, Zapier, APIs, ETL).
Experience in strategic consulting or digital transformation implementations at large companies.
Familiarity with management methodologies such as EOS or disciplined-execution frameworks.
Knowledge of data analytics, anomaly detection, or AI models applied to operations.
Experience in high-growth environments or enterprise-focused technology companies.
We are a technology services company that pursues high-impact projects, making innovation and digital transformation part of companies across the continent, mainly Latin American multinationals in sectors such as retail, insurance, medical equipment distribution, banking, and mass-market digital consumer products.
We champion technical excellence, DevOps, Continuous Delivery, and Continuous Integration, building high-performance teams on challenging projects oriented toward growth and the adoption of new technologies. Most importantly, we offer a collaborative, multicultural environment where you can learn, enjoy yourself, and grow as a professional.
Manage technology projects, ensuring scope, schedule, budget, and quality targets are met.
Coordinate technical teams and stakeholders to ensure technology initiatives are executed correctly.
Define and oversee work plans, milestones, and project deliverables.
Facilitate communication between technical and business areas to keep work aligned with project objectives.
Monitor risks, dependencies, and progress, proposing corrective actions when necessary.
Oversee the integration of systems and technology solutions within the organization's ecosystem.
Ensure the proper application of project management methodologies and development best practices.
Support decision-making through analysis of technical and business information.
Manage project documentation and keep tracking tools up to date.
Experience in project administration and budget management.
Experience working with both agile and traditional methodologies, with the ability to adapt to hybrid management models.
Proficiency with project management tools such as Microsoft Project, Jira, Confluence, or similar.
Strong analytical skills and understanding of technology processes.
Understanding of solution architectures: on-premise, cloud, and hybrid environments.
Knowledge of technical concepts such as APIs, databases, software versioning, and source control (Git).
Familiarity with cloud services such as AWS, Google Cloud Platform, or Microsoft Azure.
Understanding of modern development practices such as CI/CD, DevOps, and containers (Docker, Kubernetes).
Knowledge of information security principles, including authentication, encryption, role management, and backups.
Knowledge of relational (SQL) and non-relational (NoSQL) databases.
Ability to read entity-relationship models and understand the business logic behind systems.
Understanding of system integration, including REST/SOAP web services, ETL, and message queues.
Knowledge of synchronous and asynchronous integrations and when to apply each model.
Previous experience in the AFP industry (pension fund administrators).
Knowledge of or experience working with CRM platforms.
💻 Bring Your Own Device benefit (from your 4th month with us, you can acquire your own computer)
🚀 Make an impact. Work on challenging projects
📚 IT Training: access to more than 500 courses, updated weekly 📖
🎤 Dev Talks: exclusive conferences with industry experts
🎉 Special day: 🎂 a day off on your birthday!
👥 Work on a talented, multicultural team using amazing technology
🎙️ Listen to our podcast here: 🔗 Listen to the Podcast
🌐 Visit our website: 🔗 Dynamic Devs
We are 3IT. Innovation and talent that make the difference!
For us, innovation is a collaborative process and growth a shared goal. We are guided by values such as teamwork, reliability, empathy, commitment, honesty, and quality, because we know good results start with good relationships.
We also value diversity and promote inclusive workplaces, which is why we actively support compliance with Chile's Ley 21.015, ensuring accessible processes with equal opportunities.
If you are looking for a place to keep learning, contribute what you know, and grow in a close-knit, collaborative environment, this could be your next opportunity.
Lead, coordinate, and manage the Service Managers team, ensuring operational excellence, account profitability, client satisfaction, and correct application of the service model, acting as the escalation point for strategic unblocking and continuous improvement.
✋ A few things to consider before applying:
💰 Annual bonus
🦷 Dental insurance
📚 Training
📅 Personal days
🍽️ Sodexo card + CLP $80,000
👕 Casual dress code
🚀 Upskilling and reskilling programs
🏥 MetLife supplementary health insurance
💊 Discounts at pharmacies and health centers
🐾 Discounts on pet insurance and pet stores
🎄 Holiday bonus (aguinaldo) for Fiestas Patrias and Christmas
👶 Additional days of paternity leave
🎂 Half day off on your birthday
🏦 Caja de Compensación Los Andes
🌍 Mundo ACHS discount
🎁 Gift for new births
🛍️ Buk discounts
About EVEN
EVEN is the leading direct-to-fan platform for artists and labels. We help artists sell music, merchandise, and exclusive content directly to their superfans, with every sale counting toward official chart reporting through Luminate.
Our platform powers pre-orders, digital storefronts, and direct-to-consumer commerce for artists including J. Cole, French Montana, Brent Faiyaz, LaRussell, and Mick Jenkins. We are partnered with Universal Music Group, UnitedMasters, Too Lost, Stem, Symphonic, Secretly Distribution, Virgin Music Group, and others across 3,000+ labels and distributors in over 110 countries.
We are a remote-first team of 35 people across the US and Latin America. Our engineering team of 16 is primarily based in LATAM and operates in three squads (Artist, Fan, Core), shipping across web, mobile, and API. You will be working alongside engineers you can communicate with natively.
Product direction at EVEN is currently shared between our CEO (vision, strategy, partner commitments) and our CTO (day-to-day product and engineering decisions). Our Lead Product Designer shapes UX and design. There is no dedicated product manager.
We are now 35 people with three engineering squads, partnerships with the leading music companies, and a product surface that spans artist dashboards, fan storefronts, mobile apps, e-commerce, streaming, chart reporting, and API integrations.
We need someone whose full-time job is to own the product roadmap, run shaping sessions, write clear briefs, coordinate cross-team priorities, and connect what our partners and artists need with what our engineering team builds.
What you will do:
Success at 30/60/90 days:
We are Adecco Chile, the local subsidiary of the world leader in Human Resources services, with more than 35 years in the country and a solid track record supporting companies in talent management. Adecco Chile is committed to offering comprehensive, tailored solutions, standing out in areas such as Recruitment, Staffing, Payroll Services, and Training & Consulting. Our team works to high quality standards, backed by ISO 9001:2015 certification, with a presence in the country's main cities. We are currently looking to hire a Data Engineer for a strategic client project involving the construction and optimization of cloud data pipelines, with a special focus on Google Cloud Platform technologies and modern processing and orchestration architectures.
- A challenging, dynamic work environment that fosters your professional development.
- The opportunity to join a highly qualified, professional team at our client.
- Continuous training to keep you up to date with the latest technologies.
- Clear growth opportunities within the company and the technology sector.
- An initial fixed-term contract, with the possibility of moving to a permanent contract with the end client.
- Hybrid work model: 1 day on-site and 4 days remote.
Haystack News is the leading local & world news service on Connected TVs reaching millions of users! This is a unique opportunity to work at Haystack News, one of the fastest-growing TV startups in the world. We are already preloaded on 37% of all TVs shipped in the US!
Be part of a Silicon Valley startup and work directly with the founding team. Jumpstart your career by working with Stanford & Carnegie Mellon alumni and faculty who have already been part of other successful startups in Silicon Valley.
You should join us if you're hungry to learn how Silicon Valley startups thrive, you like to ship quickly and often, love to solve challenging problems, and like working in small teams.
See Haystack's feature at this year's Google I/O.
Krunchbox is transforming retail analytics with our next-generation platform (Krunchbox 2.0). We are migrating from 800 hardcoded ETLs to a modern, real-time analytics architecture powered by ClickHouse. This greenfield initiative aims to architect the analytical backbone for 100+ enterprise clients while maintaining and optimizing our existing SQL Server infrastructure during the transition. The Senior Database Engineer/Architect will lead the database transformation and operations across both legacy and modern systems, owning the analytical data layer, and delivering a scalable, multi-tenant ClickHouse architecture alongside ongoing SQL Server maintenance.
Required Qualifications
Preferred Qualifications
Desirable but not required skills:
We are Artefact, a leading global consultancy in creating value through data and AI technologies. We aim to turn data into business impact across organizations' entire value chain, working with clients of all sizes, industries, and countries. We are proud to be enjoying significant growth in the region, which is why we want you to join our team of highly skilled professionals to tackle complex problems for our clients.
Our culture is highly collaborative, with an environment of constant learning, where we believe innovation and solutions come from every member of the team. This drives us to act and to produce high-quality, scalable deliverables.
Experience with:
...and more!
At Coderslab.io we work in a high-demand technology environment, with global teams that combine top-tier talent. Our client FIFTECH leads advanced data initiatives and is developing the Datalake 2.0 project in Colombia. This role joins the Data Factory area within the Platform, Architecture, and Data organization. The goal is to strengthen data processing in a Big Data environment on Google Cloud Platform (GCP), contributing to the continuous evolution of our Data Lake and the delivery of reliable analytical information for strategic decisions.
We are looking for a Senior Data Engineer with a solid background in ELT/ETL processes in Big Data environments on GCP and Data Lakes. You must demonstrate the ability to design and implement data-driven pipelines, with experience developing serverless pipelines and automating processes. We value the ability to model and structure data for analysis, along with proactive involvement in complex projects and a technical, collaborative approach. We expect autonomy, good communication, and the ability to work at a fast pace.
Previous Data Lake experience on GCP, focused on ingesting and transforming large data volumes. Knowledge of orchestration and automation tools such as Airflow or GCP Workflows. Skills for working with Architecture and Product teams, analytical and problem-solving abilities, and a results orientation. Experience in multinational environments and collaborative remote work is a plus.
Fixed-term contract with an estimated duration of 6 months. Salary between 2,500,000 and 2,700,000 CLP, depending on experience. Equipment is not provided; a personal PC/notebook is required. You will work with a leading data solutions client and a high-performing global team, with opportunities to learn and grow with cutting-edge technologies. Remote, with possible coordination in Colombia and the region. If you are passionate about data engineering and want to contribute to an advanced Data Lake, we invite you to apply and join our team.
CodersLab is a company dedicated to developing IT solutions. We are currently focused on expanding our teams globally to position our products in more Latin American countries, which is why we are looking for a Data Analyst.
We are looking for a Data Analyst to join our team and take part in the development of scalable, modern, high-impact mobile applications. You will work in a collaborative environment, with challenging projects and real growth opportunities.
2 to 3 years of experience
Contract type: fee-for-service (recibo por honorarios)
Project duration: 6 months
Mode: hybrid (3 days per week on-site)
- 2 years of experience with a professional degree in Informatics/Computer Science or Industrial Engineering, or 5 years of experience in related fields.
- Knowledge of languages:
- Python (advanced)
- JavaScript (intermediate)
- SQL (advanced)
- Knowledge of containers (Docker)
- Knowledge of the AWS cloud (intermediate).
- Git
- Terraform ✨
- Elastic Search ✨
- Kubernetes ✨
- Kafka ✨
- Linux ✨
- Java Spring Boot (intermediate)
We are looking for a Data Engineer with solid experience building and optimizing analytical consumption layers in cloud environments, ideally on Amazon Redshift Serverless.
Your mission will be to design, model, and maintain a consumption layer in Redshift from data replicated via Zero-ETL from Aurora, enabling reliable, performant datasets for reporting, multi-domain metrics, and analytics/AI use cases.
Main responsibilities:
Nice to have:
Problem solving: the analytical ability to identify performance bottlenecks and propose simple, effective technical solutions.
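As a rough illustration of the kind of consumption-layer object this role would own, the sketch below generates DDL for a Redshift materialized view over tables replicated via Zero-ETL. All schema, table, and column names are hypothetical, not taken from the posting.

```python
# Hypothetical sketch: DDL generator for a Redshift consumption-layer
# materialized view over tables replicated via Zero-ETL from Aurora.
# Schema, table, and column names are illustrative only.

def build_daily_orders_view(source_schema: str = "aurora_zeroetl",
                            target_schema: str = "analytics") -> str:
    """Aggregate replicated order rows into a daily, per-domain dataset."""
    return f"""
CREATE MATERIALIZED VIEW {target_schema}.daily_orders
AUTO REFRESH YES
AS
SELECT
    DATE_TRUNC('day', o.created_at) AS order_date,
    o.business_domain,
    COUNT(*)                        AS order_count,
    SUM(o.total_amount)             AS revenue
FROM {source_schema}.orders o
GROUP BY 1, 2;
""".strip()

ddl = build_daily_orders_view()
print(ddl.splitlines()[0])  # CREATE MATERIALIZED VIEW analytics.daily_orders
```

Precomputing aggregates like this keeps BI and reporting queries off the raw replicated tables, which is one common way a consumption layer stays performant.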
At WiTI, people and agility are at the center of what we do. We like sharing knowledge across teams and areas, as well as spending extra time talking about video games, movies, series, and any topic we find interesting.
Assetplan is a leading residential rental company with a presence in Chile and Peru, managing more than 40,000 properties and operating more than 90 multifamily buildings. The data team plays a key role in optimizing and steering internal processes through data analysis and visualization solutions, supporting strategic decision-making across the company. This role focuses on designing, developing, and optimizing ETL processes, creating value through reliable, governed data.
Vequity is building the world’s most robust, contextualized buyer intelligence network for investment banks, private equity firms, and strategic acquirers. Our platform currently houses over 1.5 million buyer profiles with approximately 100 structured and inferred data fields per profile. We leverage proprietary AI agents to continuously enrich, infer, and structure buyer intelligence at scale. As a Senior Data Engineer, you will own the architecture, quality, and scalability of our data ecosystem—from ingestion and cleaning to inference and output generation. You will partner with AI, product, and engineering teams to deliver data APIs and feeds that power our platform's decision-support capabilities. Your work will directly impact data reliability, operational efficiency, and the precision of buyer attributes used across our customers.
Competitive compensation and Paid Time Off (PTO).
At Talana, we are looking for a Data Engineer to design, implement, and optimize a scalable, secure, highly available data architecture. The core objective is to turn raw data from diverse sources into analysis-ready data assets and reliable insights, ensuring operational excellence and full automation of the data lifecycle through DataOps practices.
Experience with data migrations to a Data Lakehouse, experience with advanced orchestration, and the ability to design data architectures that support real-time dashboards and metrics. Practical knowledge of data security, governance, and regulatory compliance.
And many more surprises!
"All hires are subject to Ley 21.015. At Talana we believe in inclusive, diverse workplaces where everyone is welcome."
Checkr is expanding its innovation hub in Santiago to boost the accuracy and intelligence of its background check engine at global scale. This team works closely with the US offices to optimize the screening engine, detect fraud, and evolve the platform with GenAI models. The selected candidate will join a strategic effort to balance speed, cost, and accuracy, impacting millions of candidates and improving the experience of customers and partners. The role involves leading optimization initiatives, designing analytical strategies, and developing predictive models within a high-performance technology stack.
Please attach an updated CV in English when applying.
At Checkr, we believe a hybrid work environment strengthens collaboration, drives innovation, and fosters connection. Our main offices are in Denver, CO, San Francisco, CA, and Santiago, Chile.
Equal employment opportunity at Checkr
Checkr is committed to hiring qualified, talented people from diverse backgrounds for all of its technical, non-technical, and leadership roles. Checkr believes that bringing together and celebrating unique backgrounds, qualities, and cultures enriches the workplace.
At Artefact LatAm, we are a leading consultancy focused on accelerating the adoption of data and artificial intelligence to generate positive impact. The Senior Data Scientist is a highly experienced data analysis professional with deep knowledge of statistical, programming, and machine learning techniques. Their main role is to use these skills to extract meaningful insights and drive strategic, data-informed decisions within the organization.
Beyond developing advanced analytical models, the Senior Data Scientist plays an important role within the team assigned to the client, contributing technical expertise to make concrete decisions that advance the project. Their experience supports the work from conception through implementation, ensuring the delivery of practical, detail-oriented solutions that meet the client's needs.
...and more!
-Create data products that help the business achieve its analytical objectives.
-Design, build, and maintain data pipeline solutions / batch or streaming ETLs.
-Write code to ingest and update large datasets in relational and non-relational databases, including unstructured data.
-Apply Azure DevOps best practices to enable continuous deployment of new versions of data pipelines and solutions via Pull Requests.
-Develop data models that manage data at scale, optimized for large data volumes (Data Vault models), as well as consumption-optimized data models (Star and Snowflake schemas).
-Develop data products under the Lakehouse architecture, applying Data Mesh best practices.
-Work with business areas to understand their objectives, gather initial requirements, and develop and recommend suitable solutions.
-Participate in the planning and execution of data engineering projects, coordinating activities and ensuring deadlines are met.
-Collaborate with data scientists to understand the specific data requirements of artificial intelligence (AI) and machine learning (ML) models.
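To make the Data Vault responsibility above concrete, here is a minimal sketch of the hash-key convention commonly used when loading hubs. Using MD5 over normalized business keys is a widespread Data Vault practice; the entity and column names here are illustrative, not from the posting.

```python
# Illustrative Data Vault hash-key helper for hub loading.
import hashlib

def hub_hash_key(*business_keys: str) -> str:
    """Hash the normalized business key(s) into a stable surrogate key,
    so the same entity from different sources lands on the same hub row."""
    normalized = "||".join(k.strip().upper() for k in business_keys)
    return hashlib.md5(normalized.encode("utf-8")).hexdigest()

# The same customer arriving with different source formatting
# resolves to the same hub key:
print(hub_hash_key("cli-001 ") == hub_hash_key("CLI-001"))  # True
```

Deterministic keys like this are what let hubs, links, and satellites be loaded in parallel and joined cheaply at large volumes.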
-Microsoft Certified: Azure Data Engineer Associate
-Databricks Certified Data Engineer Associate
-Advanced knowledge of relational databases and SQL.
-Advanced knowledge of NoSQL databases.
-Advanced knowledge of ETL processes and tools, primarily on Azure.
-Knowledge of batch, streaming, and API-based data processing.
-Intermediate Python programming and advanced use of the PySpark library.
-Advanced knowledge of Data Vault data modeling for Data Lakehouse architectures.
-Advanced knowledge of star and snowflake data modeling for Data Warehouse architectures.
-Conceptual knowledge of data governance, data security, and data quality.
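As a minimal illustration of the star-schema modeling called out above, here is a tiny fact table joined to its dimensions. This is a sketch only: all table and column names are hypothetical, and SQLite stands in for a real warehouse.

```python
import sqlite3

# Minimal star schema: one fact table referencing two dimension tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE dim_date    (date_id INTEGER PRIMARY KEY, year INTEGER);
CREATE TABLE fact_sales  (
    product_id INTEGER REFERENCES dim_product(product_id),
    date_id    INTEGER REFERENCES dim_date(date_id),
    amount     REAL
);
""")
conn.executemany("INSERT INTO dim_product VALUES (?, ?)",
                 [(1, "widget"), (2, "gadget")])
conn.executemany("INSERT INTO dim_date VALUES (?, ?)", [(10, 2024)])
conn.executemany("INSERT INTO fact_sales VALUES (?, ?, ?)",
                 [(1, 10, 100.0), (1, 10, 50.0), (2, 10, 70.0)])

# Typical consumption query: aggregate the fact, sliced by a dimension.
rows = conn.execute("""
    SELECT p.name, SUM(f.amount)
    FROM fact_sales f JOIN dim_product p USING (product_id)
    GROUP BY p.name ORDER BY p.name
""").fetchall()
print(rows)  # [('gadget', 70.0), ('widget', 150.0)]
```

A snowflake schema would further normalize the dimensions (for example, splitting product category into its own table); the fact table and join pattern stay the same.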
Contract type: service provision, contractor.
Project duration: open-ended.
The hybrid arrangement we offer, based in Santiago Centro, combines the flexibility of remote work with in-person collaboration, supporting a better balance and a more dynamic workday.
Coderslab.io is a company dedicated to transforming and growing businesses through innovative technology solutions. You will join an expanding organization with more than 3,000 employees worldwide and offices across Latin America and the United States. You will work on diverse teams that bring together some of the best technology talent on challenging, high-impact projects, working alongside experienced professionals with the opportunity to learn and grow with cutting-edge technologies.
This time we are looking to bring on an on-premise Data Engineer.
-Develop ETL processes to replicate, extract, and process data from SQL Server databases.
-Program data flows and transformations to load the data warehouse.
-Analyze user requirements to design technical specifications.
-Write technical manuals.
-Work as a developer within the Scrum framework.
-Develop scripts and processes.
-Build ETL solutions with Microsoft tooling.
-Develop according to the specifications and policies established by BAC Regional.
-Microsoft SQL Server.
-Star or snowflake data modeling for data warehouses.
-Microsoft Visual Studio 2022.
-Microsoft Integration Services.
-Proficiency in T-SQL.
-Proficiency in C# / VB.
-Proficiency in ETL modeling and data warehousing methodologies.
-Basic knowledge of Power BI.
-Azure DevOps (basic).
-Agile methodologies (Scrum).
-Source-code versioning tools.
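To illustrate the incremental-load pattern behind the ETL and data-warehousing work described above, here is a minimal watermark-based sketch. It is tool-agnostic and uses SQLite in place of SQL Server; all table and column names are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE src_orders (order_id INTEGER PRIMARY KEY, modified_at TEXT, total REAL);
CREATE TABLE dw_orders  (order_id INTEGER PRIMARY KEY, modified_at TEXT, total REAL);
CREATE TABLE etl_watermark (table_name TEXT PRIMARY KEY, last_loaded TEXT);
""")
conn.executemany("INSERT INTO src_orders VALUES (?, ?, ?)", [
    (1, "2024-01-01", 10.0),
    (2, "2024-01-05", 20.0),
    (3, "2024-01-09", 30.0),
])
# Row 1 was loaded by a previous run; the watermark records how far we got.
conn.execute("INSERT INTO dw_orders VALUES (1, '2024-01-01', 10.0)")
conn.execute("INSERT INTO etl_watermark VALUES ('dw_orders', '2024-01-01')")

def incremental_load(conn):
    """Copy only source rows newer than the stored watermark, then advance it."""
    (wm,) = conn.execute(
        "SELECT last_loaded FROM etl_watermark WHERE table_name = 'dw_orders'"
    ).fetchone()
    cur = conn.execute(
        "INSERT OR REPLACE INTO dw_orders "
        "SELECT * FROM src_orders WHERE modified_at > ?", (wm,))
    conn.execute(
        "UPDATE etl_watermark SET last_loaded = "
        "(SELECT MAX(modified_at) FROM src_orders) WHERE table_name = 'dw_orders'")
    return cur.rowcount

loaded = incremental_load(conn)
print(loaded)  # 2 -- only the rows past the watermark were copied
```

In an SSIS package the same shape appears as a variable holding the watermark, a parameterized source query, and a final Execute SQL task that advances it.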
Contract type: service provision, contractor.
NeuralWorks is a high-growth company founded four years ago. We are working flat out on things people will be talking about.
We are a team that combines creativity, curiosity, and a passion for doing things right. We dare to explore frontiers others never reach: a Monte Carlo-based predictive model, a convolutional network for face detection, a Bluetooth position sensor, the recreation of an acoustic space using finite impulse response.
These are just some of our challenges, through which we learn, explore, and complement each other as a team to achieve the unthinkable.
We work on our own projects and support corporations through partnerships in which, side by side, we combine knowledge with creativity to imagine, design, and build digital products capable of captivating users and creating impact.
The Data & Analytics team works on projects that combine enormous data volumes with AI: detecting and predicting failures before they occur, optimizing pricing, personalizing the customer experience, optimizing fuel consumption, and detecting faces and objects with computer vision.
Within a multidisciplinary team of Data Scientists, Translators, DevOps engineers, and Data Architects, your role will be key in building and operating the systems and infrastructure that make these services possible: the foundations on which impact-generating models are built, with services that must scale, stay highly available, and tolerate failures; in other words, services that work. You will also keep an eye on the systems' capacity and performance indicators.
Whatever project you work on, we expect a strong spirit of collaboration, a passion for innovation and code, and an automation-first mindset.
As a Data Engineer, your work will consist of:
Diversity matters at NeuralWorks! We firmly believe in creating an inclusive, diverse, and equitable work environment. We recognize and celebrate diversity in all its forms and are committed to offering equal opportunity to every candidate.
"Men apply for a job when they meet 60% of the qualifications, but women only if they meet 100%." D. Gaucher, J. Friesen, and A. C. Kay, Journal of Personality and Social Psychology, 2011.
We encourage you to apply even if you don't meet every requirement.
Coderslab.io is looking to hire a Big Data & Reporting Lead to lead the organization’s data architecture and analytics strategy.
This role will be responsible for designing, governing, and optimizing the enterprise data architecture, ensuring proper structuring, integration, automation, and consumption of data for reporting, advanced analytics, and decision-making.
The position has a strong focus on data architecture, analytical modeling for MicroStrategy, process automation using n8n, and optimization of ETL/ELT data pipelines.
About the client and the project: the company delivers innovative technology solutions and provides opportunities for continuous learning under the guidance of experienced professionals and cutting-edge technologies. The goal is to deliver value in key business processes and improve operational efficiency through SAP.
Data Architecture
Design and govern the data architecture for Big Data and BI platforms.
Define analytical data models for reporting and analytics.
Design data lakes, data warehouses, and data marts aligned with business needs.
Establish data governance, quality, and lineage standards.
Ensure platform scalability, availability, and reliability.
Modeling and Reporting in MicroStrategy
Design and optimize the semantic layer and metadata in MicroStrategy.
Define analytical models and Star Schema structures.
Lead the development of dossiers, operational reports, and analytical cubes.
Optimize queries, performance, and execution times.
Define caching, aggregation, and pre-calculation strategies.
Automation of Analytical Processes (n8n)
Design data and reporting automation workflows using n8n.
Integrate sources such as APIs, databases, cloud services, and BI tools.
Automate data extraction, report generation, dashboard distribution, and alerts.
Design orchestration pipelines for analytical processes.
Data Processing Optimization
Design and optimize scalable ETL/ELT processes.
Optimize queries, data pipelines, and incremental loads.
Reduce latency and resource consumption in reporting.
Implement efficient data ingestion strategies.
Technical Leadership and Management
Lead Data Engineering, BI, and Analytics teams.
Track data architecture and reporting projects.
Define the data platform evolution roadmap.
Establish KPIs for reporting performance, data quality, and analytics adoption.
Align business needs with the data architecture.
Experience leading data architecture or analytics platforms.
Experience in analytical data modeling (Star Schema, Data Modeling).
Experience working with Big Data or Data Warehousing platforms.
Experience with MicroStrategy for modeling and reporting.
Experience designing ETL / ELT processes and data pipelines.
Advanced SQL knowledge.
Experience with Python for data processing or automation.
Experience designing scalable data architectures.
Technologies
Big Data & Data Platforms
Spark
Hadoop
Databricks
Snowflake / BigQuery / Redshift
Kafka
Business Intelligence
MicroStrategy
Power BI (nice to have)
Tableau (nice to have)
Automation & Orchestration
n8n
Airflow
REST APIs
Webhooks
Databases
SQL Server
PostgreSQL
Oracle
NoSQL
Data Engineering
Python
Advanced SQL
ETL / ELT pipelines
Experience with workflow automation using n8n.
Experience with orchestration tools such as Airflow.
Experience with Power BI or Tableau.
Knowledge of event-driven or streaming architectures (Kafka).
Experience in data governance, data quality, and data cataloging.
Contract type: service provision.
Role Purpose
We are looking for a Data Engineer to design, develop, and support robust, secure, and scalable data storage and processing solutions. This role focuses on data quality, performance, and integration, working closely with technical and business teams to enable data-driven decision making.
Remote | Contractor | High English proficiency
Continuum is a team of rebels with an experimental mindset and a hunger to break paradigms. We help leading companies shake up the status quo and take innovation, technology, and agility seriously. We design and develop innovative, people-centered digital products and services. We have offices in Santiago and Lima but follow a distributed model; much of the team lives in other cities or countries.
ROLE
As a [semi-senior] Data Engineer you will pair directly with a back-end engineer to build and enable data flows that support security systems, data privacy, and analytics platforms, making information available, clean, secure, and structured for reporting, analysis, and regulatory compliance in data privacy and data-rights management.
Duration: 6 months [with possible extension to 1 year or more]
Monthly reference range: USD 2,600 to 3,600
A semi-senior Data Engineer with experience in GCP and in building production data pipelines. Able to work autonomously from functional objectives, designing the necessary technical solution, and skilled at coordinating with a backend team and with the client's non-technical stakeholders.
Cloud & Data
DevOps & Infrastructure
Governance & Security
Backend Integration
DevOps & Infrastructure
Observability & Monitoring
Data Governance & Security
Databases
Experience & Context
Soft Skills & Other
99% remote work (with a couple of team events per year), additional vacation days, scholarships for study, sport, or activities that improve your quality of life at work and at home, an extra day off for your birthday, flexible hours, and results-oriented work, among other benefits.
We are a technology services company that pursues high-impact projects, making innovation and digital transformation part of different companies, mainly Latin American multinationals across economic sectors such as retail, insurance, medical-equipment distribution, banking, and mass-market consumer digital products used throughout the continent.
We champion technical excellence, DevOps, Continuous Delivery, and Continuous Integration, forming high-performance teams on challenging projects oriented toward growth and the adoption of new technologies. More importantly, we offer a collaborative, multicultural environment where you can learn, enjoy yourself, and grow as a professional.
📢 At Dynamic Devs, we are looking for a Data Engineer with experience in large-scale data processing.
✅ Design, develop, and optimize data pipelines using PySpark, AWS EMR, and Glue.
✅ Process large data volumes in distributed, scalable environments.
✅ Implement and maintain cloud data integration and transformation solutions ☁️.
✅ Monitor and optimize the performance of data solutions.
✅ Ensure data security and governance across the AWS ecosystem.
✅ Integrate data from multiple sources and ensure its correct transformation and storage.
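The responsibilities above follow a common extract-transform-load shape. As a language-agnostic sketch of that shape (the real pipelines would run on PySpark, EMR, and Glue; the stage and field names here are illustrative only):

```python
from functools import reduce

def extract(raw_lines):
    # Parse semi-structured input lines into records.
    return [dict(zip(("sensor", "value"), line.split(","))) for line in raw_lines]

def transform(records):
    # Cast types and filter out invalid readings.
    out = []
    for r in records:
        try:
            out.append({"sensor": r["sensor"], "value": float(r["value"])})
        except ValueError:
            pass  # drop malformed rows instead of failing the whole batch
    return out

def load(records):
    # Aggregate per sensor (a stand-in for writing to a warehouse table).
    agg = {}
    for r in records:
        agg[r["sensor"]] = agg.get(r["sensor"], 0.0) + r["value"]
    return agg

def run_pipeline(raw_lines, stages=(extract, transform, load)):
    # Chain the stages: each one consumes the previous stage's output.
    return reduce(lambda data, stage: stage(data), stages, raw_lines)

result = run_pipeline(["a,1.5", "a,2.5", "b,oops", "b,4.0"])
print(result)  # {'a': 4.0, 'b': 4.0}
```

In PySpark the same stages become DataFrame reads, column casts with bad-record handling, and a grouped aggregation, but the composition idea is identical.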
🎓 At least 2 years of experience with Databricks or AWS EMR.
🛠️ Solid knowledge of SQL and data modeling.
⚙️ Knowledge of other AWS tools such as S3, Redshift, Athena, Glue, or Lambda.
🐍 Python development applied to data processing.
📊 Knowledge of Big Data frameworks and performance optimization in distributed environments.
🔗 Experience with Git.
📍 Availability to work on-site 2 days a week at the client's office in Santiago, Chile (hybrid arrangement, a hard requirement).
💻 Bring Your Own Device benefit (from your fourth month with us, you can acquire your own computer)
⌚ Flexible hours 🕒
🚀 Make an impact: work on challenging projects
📚 IT Training: access to more than 500 courses, updated weekly 📖
🎤 Dev Talks: exclusive conferences with industry experts
🎉 Special day: 🎂 a day off on your birthday!
👥 Work on a talented, multicultural team using amazing technology
🎙️ Listen to our podcast here: 🔗 Listen to the Podcast
🌐 Visit our website: 🔗 Dynamic Devs
At Improving South America we are looking for a Senior Data Engineer to design and operate highly available data solutions at global scale, working with batch and streaming pipelines that process large volumes of information. The role requires experience building robust pipelines, working with Kafka, PySpark, and data warehouses on AWS, along with strong command of SQL and data modeling.
Role responsibilities:
Interfell connects companies with IT talent from LATAM, managing staffing and recruiting processes to promote remote work and digital transformation. Our goal is to foster inclusion and work-life balance while providing a comprehensive, high-quality hiring experience. This position is part of a team focused on generating sales opportunities and relationships with potential clients, contributing to the growth of our operations in the region.
We are looking for a Data Engineer for a fixed-term project in the banking sector; you will join the IT team and help build data pipelines.
-Project length: 3 months
-Hourly rate: USD 18, 40 hours per week (8 hours per day)
-Schedule: Monday to Friday
-Goal-oriented work
Duties:
- Design, build, and maintain reliable data pipelines that scale on demand.
- Optimize the storage and processing of large data volumes.
Required skills:
3+ years of experience as a Data Engineer
◦ Languages: Python and SQL (advanced).
◦ Databases: SQL and NoSQL.
◦ Data engineering: building ETL/ELT pipelines and integrating sources (APIs, internal/external systems).
◦ Ecosystem: Big Data.
◦ Cloud: AWS, Azure, or GCP (desirable).
Flexibility and autonomy
Payment in USD
WiTi connects technology talent with high-impact projects across Latin America. Our team focuses on systems integration, custom software, and innovative mobile development, with an emphasis on solving complex problems through innovative solutions.
This role is part of a team responsible for modernizing a legacy analytics ecosystem into an AWS cloud architecture, with a focus on standardization, performance, and scalability. The project involves migrating and optimizing the logic of pre-existing databases to Amazon Redshift, contributing to the automation of that process and guaranteeing data quality, consistency, and performance.
At WiTi we foster a culture of continuous learning, collaboration, and professional growth. Benefits may include:
We are a multinational beverage and food corporation with regional operations, a broad brand portfolio, and an accelerated digital transformation strategy. Within Apex Digital / M5, the Data & Analytics area provides analytical products, governed data, and advanced capabilities to the business units, including CBC, Beliv, BIA, and cross-cutting digital transformation initiatives.
At TIMining, we work to turn operational information from mining sites into actionable value through our control and monitoring platforms. This role joins the data team, contributing to the design, development, and operation of ETL pipelines that integrate diverse sources into TIMining's databases and products. You will be part of a project oriented toward operational continuity, algorithm calibration, and the automation of internal processes to optimize both the client's and the team's workflows.
A degree in Data Science Engineering, Civil Engineering, or a related computing field. At least 2 years of experience in similar roles and verifiable experience implementing ETL pipelines are required. We value advanced knowledge of Python and SQL, hands-on experience deploying applications and working with containers, and experience orchestrating data with tools such as Apache Airflow or Prefect. Command of version control (Git) and collaborative workflows, querying APIs, and advanced database skills. Knowledge of Google Suite and Office. Analytical skills, proactivity, and the ability to work both autonomously and as part of a team. Languages: native Spanish; English desirable (upper-intermediate).
We are looking for candidates with experience in technology projects and knowledge of the open-pit mining industry, as well as experience with cloud architectures (AWS, Azure, or GCP) and Infrastructure as Code (Terraform, CloudFormation).
Experience in:
- Implementing technology projects.
- Knowledge of the mining industry and its operations.
- Familiarity with agile methodologies and experience with Infrastructure-as-Code tools.
- Desirable: knowledge of monitoring solutions and of large-scale production data environments.
We offer an environment focused on innovation in the mining industry, with opportunities for professional development and multidisciplinary teamwork. If you fit the profile, we invite you to join TIMining and contribute to the digital transformation of mining operations.
At Artefact LatAm, we are a leading consultancy focused on accelerating the adoption of data and artificial intelligence to generate positive impact. The Senior Data Engineer will lead the development of Big Data projects with clients, designing and executing data architectures that bridge business strategy and technology, under the data governance principles established by each client. They will also be responsible for designing, maintaining, and implementing both transactional and analytical data storage structures. The role involves working with large volumes of data from diverse sources, processing them in Big Data environments, and translating the results into sound technical designs and consistent data. They are also expected to review consolidated data integration and describe how interoperability enables multiple systems to communicate with one another.
InTune Analytics (ITA) is a live entertainment technology company operating as a market maker in the secondary ticket market. We are a small, high-conviction team building proprietary technology to shape the future of live entertainment, with a 10-year goal of impacting 50 million lives by 2035. We don't have a traditional customer base and we don't advertise. Our reputation is built on the quality of our people, the strength of our partnerships, and our relentless drive to execute.
We are seeking a Data Platform Engineer who brings strong foundations in data engineering, analytics engineering, and data infrastructure. The ideal candidate will be adept at designing and building the systems that power reliable, trustworthy data across the organization, from pipelines and warehouses to semantic models and governance frameworks. You will play a pivotal role in making data a durable strategic asset, enabling teams to make confident, data-driven decisions at scale.
Core Benefits (Available Globally):
- Performance Bonus: bonuses are based on overall performance and contribution during the year.
- Unlimited Paid Time Off (PTO): ITA offers flexible, unlimited PTO.
- Remote Work & Workspace Support: ITA may provide financial support for home office needs; support may be one-time or recurring, determined on a case-by-case basis. ITA can also cover the cost of a coworking space if you prefer.
- Language Learning (Fully Covered): ITA fully covers language learning, in English or Spanish, to support communication and personal growth, available to all team members globally.
- Continuing Education Support: ITA supports professional growth and skill development; educational expenses may be covered on a case-by-case basis.
About Tritone Analytics: Tritone Analytics is a music-technology startup building a forensic royalty auditing platform for the music industry. We help artists, managers, and rights-holders identify unpaid or misreported royalties by combining deterministic data systems with modern AI workflows.
We work with messy, real-world data — distributor reports, royalty statements, contracts — and turn it into structured, queryable systems that power financial analysis and AI-assisted auditing.
Project scope: You will contribute to the core data infrastructure that underpins our platform, focusing on data ingestion, transformation, validation, and the preparation of data for AI workflows. This role sits at the intersection of data engineering, analytical systems, and AI pipelines, ensuring reliable, scalable data processing from messy sources to structured datasets.
Core Requirements (Must Have): Strong Python for data processing and scripting with real datasets; strong SQL skills (joins, aggregations, validation queries, debugging data issues); proven experience working with messy or inconsistent data; understanding of ETL pipelines and data transformation workflows; ability to debug data issues and explain root causes.
We value curiosity, collaboration, and a bias toward shipping reliable data products. Candidates who enjoy digging into messy datasets, communicating data issues clearly, and partnering with data scientists and engineers to operationalize AI workflows will excel. Prior experience in music rights or financial data domains is a plus.
Nice to Have: Experience with DuckDB, Polars, Pandas, or PyArrow; familiarity with Parquet or columnar data formats; exposure to vector databases or RAG systems; experience handling large CSV datasets or financial data; basic understanding of LLM workflows.
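As a hedged illustration of the messy-statement parsing and validation this role centers on, here is a minimal sketch; the column names, currency formatting, and validation rules are hypothetical, not taken from the actual platform.

```python
import csv
import io
from decimal import Decimal, InvalidOperation

# A deliberately messy statement: a currency symbol, a blank field, a bad amount.
raw = io.StringIO(
    "track,units,royalty\n"
    "Song A,1000,$12.50\n"
    "Song B,,N/A\n"
    "Song C,400,8.00\n"
)

def parse_statement(fh):
    """Split rows into validated records and flagged line numbers."""
    good, bad = [], []
    # Line 1 is the header, so data rows start at line 2.
    for lineno, row in enumerate(csv.DictReader(fh), start=2):
        try:
            good.append({
                "track": row["track"],
                "units": int(row["units"]),
                # Decimal, not float, so royalty amounts stay exact.
                "royalty": Decimal(row["royalty"].lstrip("$")),
            })
        except (InvalidOperation, ValueError):
            bad.append(lineno)  # keep the line number so the issue is reportable
    return good, bad

good, bad = parse_statement(raw)
print(len(good), bad)  # 2 [3]
```

Recording *which* lines failed, rather than silently dropping them, is what makes data issues explainable downstream, which the core requirements above call out explicitly.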
Benefits to be discussed at time of conversion to a full-time role.
We offer a collaborative, founder-led culture with an emphasis on curiosity, continuous learning, and shipping impactful data products. Competitive compensation, flexible work hours, and opportunities for professional growth in a rapidly evolving music-tech space. Our team is distributed; we value autonomy and ownership over your projects. We support conference attendance, training, and peer knowledge sharing. We look forward to discussing how Tritone can support your career trajectory.
At Improving South America, we provide IT services to transform how the IT professional is perceived. We focus on IT consulting, software development, and agile training.
The company promotes an exceptional work culture based on teamwork, excellence, and fun, with a focus on personal growth and shared rewards. Upon joining, the candidate will become part of a community that prioritizes open communication and solid, long-term working relationships, supported by a structure for professional development and continuous learning.
CoyanServices is a technology company focused on cloud solutions and modern architectures on AWS, oriented toward building robust, scalable data components aligned with best practices in automation, security, and operations. In this context, it is looking for a Senior Data Engineer (AWS) to design, implement, and maintain serverless ETL/ELT pipelines and cloud data engineering solutions using services such as AWS Lambda, Amazon S3, Amazon API Gateway, Amazon RDS, and AWS CloudFormation. The role also covers automated deployments, CI/CD practices, and infrastructure as code, collaborating with technical and business teams to translate functional requirements into efficient, maintainable solutions. This is an independent-contractor position, 100% remote, open across South America.
· Design and implement serverless ETL/ELT pipelines using AWS Lambda, S3, and RDS for efficient data integration, processing, and exposure.
· Design, develop, and maintain data engineering solutions on AWS.
· Implement serverless architectures with high standards of security, scalability, and operational efficiency.
· Implement cloud data integration, processing, and exposure components.
· Collaborate with architecture, development, and business teams to translate functional requirements into technical solutions.
· Design, develop, and maintain data pipelines and components on AWS.
· Implement processes using AWS Lambda, Amazon S3, Amazon API Gateway, and Amazon RDS.
· Design and maintain infrastructure as code with AWS CloudFormation.
· Manage automated deployments and CI/CD pipelines with GitHub Actions.
· Ensure good practices for versioning, testing, and continuous deployment.
· Monitor, optimize, and resolve incidents in production data components.
· Implement security controls and permissions in AWS.
· Collaborate with multidisciplinary teams to guarantee solution quality and maintainability.
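A serverless pipeline like the one described revolves around small handler functions. Here is a minimal Lambda-style sketch; the event payload, field names, and response shape are illustrative assumptions, and a real deployment would read the object from S3 and write to RDS via boto3 rather than inlining the data.

```python
import json

def handler(event, context=None):
    """Lambda-style entry point: validate and transform records in the event.

    The payload is inlined so the logic runs anywhere; in production the event
    would point at an S3 object fetched with boto3.
    """
    records = json.loads(event["body"])
    cleaned = [
        {"id": r["id"], "amount": round(float(r["amount"]), 2)}
        for r in records
        if r.get("amount") is not None  # drop rows with missing amounts
    ]
    return {"statusCode": 200,
            "body": json.dumps({"loaded": len(cleaned), "rows": cleaned})}

# Local invocation with a fabricated API Gateway-style event.
event = {"body": json.dumps([
    {"id": 1, "amount": "19.999"},
    {"id": 2, "amount": None},   # filtered out
    {"id": 3, "amount": "5"},
])}
resp = handler(event)
print(resp["statusCode"], json.loads(resp["body"])["loaded"])  # 200 2
```

Keeping the handler a pure function of its event, as here, is what makes it easy to unit-test in CI/CD (GitHub Actions) before CloudFormation deploys it.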
· Solid experience with AWS Lambda, Amazon S3, AWS CloudFormation, Amazon API Gateway, and Amazon RDS.
· Experience integrating and automating deployments to AWS with GitHub Actions.
· Knowledge of CI/CD practices and infrastructure as code (IaC).
· Knowledge of security, permissions, and operational best practices in AWS.
· Experience developing and integrating APIs and cloud data components.
· Verifiable experience working in production AWS environments.
· Senior profile with at least 3 years of experience in data engineering or cloud development.
· Experience designing, building, and deploying data engineering solutions on AWS.
· A degree in Computer Engineering, Computer Science, or a related field.
Desirable certifications
· AWS Certified Cloud Practitioner
· AWS Certified Developer – Associate
· AWS Certified Solutions Architect – Associate
· AWS Certified Data Engineer – Associate
Soft skills
· Analytical thinking and problem solving.
· Autonomy and proactivity.
· Technical design ability.
· Effective communication and collaborative work.
· Orientation toward quality, scalability, and continuous improvement.
At TCIT, we are leaders in cloud software development with more than 9 years of experience. We work on projects that digitally transform organizations, from agricultural management and online auction systems to solutions for courts and certification monitoring for mining. We take part in international initiatives, collaborating with technology partners in Canada and other markets. Our team drives quality, sustainable solutions with a focus on social impact. We are looking to grow our team with talent eager to develop and leave a mark on high-impact cloud projects.
We are looking for a Data Engineer proficient in Python with demonstrable experience working with cloud solutions. The ideal candidate combines technical skill with communication and teamwork to deliver high-performance data solutions.
Technical requirements:
Soft skills:
Experience with cloud data-management tools (BigQuery, Snowflake, Redshift, Dataflow, Dataproc).
Knowledge of security and compliance in data environments; experience on projects with social impact or sector regulations.
Ability to write technical documentation in Spanish and English and to mentor colleagues.
You will contribute to building and maintaining data solutions that support analytics, reporting, and operational decision-making across the organization.
Working closely with data engineers and other technology profiles, you will support the platforms that allow teams to turn data into relevant insights.
In this role, you will focus on managing data platforms and their overall performance. You will collaborate with cross-functional teams to understand data requirements, improve existing systems, and deliver solutions that meet business needs.
This is an excellent opportunity to keep developing your data engineering skills while helping drive data-informed decisions at scale.
- Knowledge of the software configuration management process
- Windows Server operating system administration (various versions)
- Installations on IIS: web services and Windows services
- Basic knowledge of versioning tools such as Git, TFS, and SVN
- Knowledge of SharePoint and Confluence
- Basic knowledge of operating systems: Linux, Windows Server
- Intermediate knowledge of SQL, Oracle, and DB2 databases
- Basic knowledge of Visual Studio
- Installation of SQL Server ETLs
- Experience deploying applications: web, Windows, client-server, Node.js …
- Use of the SoapUI tool
- Knowledge of the PowerCenter tool
- Knowledge of the GoAnywhere tool
Remote Data Engineering jobs. Data pipelines, ETL, data architecture and big data. At RemoteJobs.lat we connect professionals from Latin America with companies offering 100% remote work. All of our listings allow you to work from any city, with payment in dollars or an international currency.
$4,000 - $11,000 USD/month
100% Remote LATAM
Estimated ranges in USD/month for remote contracts with international companies. Ranges vary by company, complementary stack, and client location.
| Level | Years of experience | Range USD/month |
|---|---|---|
| Junior | 0-2 | $4,000 - $5,750 |
| Mid-level | 2-4 | $5,400 - $7,850 |
| Senior | 4-7 | $7,500 - $9,950 |
| Lead/Staff | 7+ | $9,250 - $11,000 |
Some companies that have historically hired Data Engineering profiles to work 100% remotely from Latin America: